I'm trying to scrape subscriber data from this page: https://happs.tv/@Pablo. The subscriber list works exactly like Facebook's likes box, the pop-up that opens when you click the likes on a post: I need to scroll inside the pop-up that shows all the subscribers. That part works. However, after 3,000-4,000 names, new names take an awfully long time to load, sometimes 40 seconds per name. Even then the script doesn't exit when the wait times out (there is no break), it just keeps scraping the same names over and over.

What could I improve to get past this? I've already tried increasing the WebDriverWait timeout; should I increase it further? I'm kind of stuck here. Here is the part that runs after the pop-up div with all the subscribers is open:
```python
current_len = len(driver.find_elements_by_xpath('//*[@id="userInfo"]/a'))
while True:
    driver.find_element_by_xpath('//*[@id="userInfo"]/a').send_keys(Keys.END)
    try:
        # Wait until more names have loaded than we had on the previous pass
        WebDriverWait(driver, 35).until(
            lambda x: len(driver.find_elements_by_xpath('//*[@id="userInfo"]/a')) > current_len)
        current_len = len(driver.find_elements_by_xpath('//*[@id="userInfo"]/a'))
    except TimeoutException:
        # On timeout, scrape everything currently visible and write it out
        name_eles = [name_ele for name_ele in driver.find_elements_by_xpath('//*[@id="userInfo"]/a')]
        time.sleep(5)
        for name in name_eles:
            nt = name.text
            n_li = name.get_attribute('href')
            print(nt)
            print(n_li)
            dict1 = {"Given Name": nt, "URI": n_li}
            with open('happstv.csv', 'a+', encoding='utf-8-sig') as f:
                w = csv.DictWriter(f, dict1.keys())
                if not header_added:
                    w.writeheader()
                    header_added = True
                w.writerow(dict1)
```
https://stackoverflow.com/questions/66997198/elements-take-too-much-time-to-load-in-a-popup-div April 08, 2021 at 12:06PM