I have an issue while trying to capture specific information on a page.
Website: https://www.target.com/p/prairie-farms-vitamin-d-milk-1gal/-/A-47103206#lnk=sametab
On this page, there are hidden tabs named 'Label info', 'Shipping & Returns' and 'Q&A' next to the 'Details' tab under 'About this item' that I want to scrape.
I found that I need to click on these elements before scraping with BeautifulSoup.
Here is my code; assume I've already got the pid for each link.
```python
url = 'https://www.target.com' + str(pid)
driver.get(url)
driver.implicitly_wait(5)
soup = bs(driver.page_source, "html.parser")
wait = WebDriverWait(driver, 3)

button = soup.find_all('li', attrs={'class': "TabHeader__StyledLI-sc-25s16a-0 jMvtGI"})
index = button.index('tab-ShippingReturns')
print('The index of ShippingReturns is:', index)

if search(button, 'tab-ShippingReturns'):
    button_shipping_returns = button[index].find_element_by_id("tab-ShippingReturns")
    button_shipping_returns.click()
    time.sleep(3)
```

My code returns:

    ResultSet object has no attribute 'find_element_by_id'. You're probably treating a list of elements like a single element. Did you call find_all() when you meant to call find()?
Can anyone kindly guide me how to resolve this?
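The error comes from mixing two libraries: `find_all()` returns a BeautifulSoup `ResultSet` of `Tag` objects, which are a static parse of the HTML and have no `find_element_by_id` or `click()` methods; those belong to Selenium `WebElement`s. For the same reason, `button.index('tab-ShippingReturns')` can never match, because the `ResultSet` holds `Tag` objects, not id strings. A minimal sketch of the index lookup done on the parsed tags instead (the HTML snippet and tab ids here are assumptions reconstructed from the question, not the live Target markup):

```python
from bs4 import BeautifulSoup

# Minimal stand-in for the tab list described in the question.
html = """
<ul>
  <li id="tab-Details">Details</li>
  <li id="tab-LabelInfo">Label info</li>
  <li id="tab-ShippingReturns">Shipping &amp; Returns</li>
</ul>
"""
soup = BeautifulSoup(html, "html.parser")

# find_all() returns Tag objects, so compare their id attributes,
# not the tags themselves, to find the position of a given tab.
tabs = soup.find_all('li')
ids = [tag.get('id') for tag in tabs]
index = ids.index('tab-ShippingReturns')
print('The index of ShippingReturns is:', index)  # -> 2
```

To actually click the tab, locate it through Selenium rather than BeautifulSoup, e.g. `driver.find_element_by_id("tab-ShippingReturns").click()` (or `driver.find_element(By.ID, "tab-ShippingReturns")` in Selenium 4), and only afterwards re-parse `driver.page_source` with BeautifulSoup so the hidden tab's content is present in the HTML you scrape.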
https://stackoverflow.com/questions/66631861/web-scrapping-using-beautifulsoup-click-on-element-for-hidden-tab March 15, 2021 at 11:05AM