I'm trying to print the links from the first 5 pages of a Google search, but I'm getting a StaleElementReferenceException. Not sure what went wrong.
public class GoogleCoInTest {
    static WebDriver driver = null;

    public static void main(String[] args) throws InterruptedException {
        System.setProperty("webdriver.gecko.driver", "D:\\bala back up\\personel\\selenium\\Jars\\Drivers\\geckodriver.exe");
        driver = new FirefoxDriver();
        driver.manage().window().maximize();
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
        driver.get("https://www.google.co.in/");
        //driver.findElement(By.xpath("//input[class='gsfi']")).sendKeys("Banduchode");
        WebElement search = driver.findElement(By.cssSelector("input#lst-ib"));
        search.sendKeys("Banduchode");
        search.sendKeys(Keys.ENTER);
        printLinksName();
        List<WebElement> fiveLinks = driver.findElements(By.xpath(".//*[@id='nav']/tbody/tr/td/a"));
        for (int i = 0; i < 5; i++) {
            System.out.println(fiveLinks.get(i).getText());
            fiveLinks.get(i).click();
            Thread.sleep(5000);
            printLinksName();
        }
    }

    public static void printLinksName() throws InterruptedException {
        List<WebElement> allLinks = driver.findElements(By.xpath("//*[@id='rso']/div/div/div/div/div/h3/a"));
        System.out.println(allLinks.size());
        //print all list
        for (int i = 0; i < allLinks.size(); i++) {
            System.out.println("Sno" + (i + 1) + ":" + allLinks.get(i).getText());
        }
    }
}
It prints fine up to the 2nd page, but after that I get:
Exception in thread "main" org.openqa.selenium.StaleElementReferenceException: The element reference of <a class="fl"> stale: either the element is no longer attached to the DOM or the page has been refreshed
For documentation on this error, please visit: http://seleniumhq.org/exceptions/stale_element_reference.html
A couple of things:
Your script prints the results from the first 2 pages as expected.
When you call printLinksName() for the first time, it works.
Next, you store the 10 page-number links in a generic List of type WebElement.
In the first iteration of the for() loop you click the WebElement for Page 2 and then print all the links by calling printLinksName().
In the second iteration of the for() loop, the references held in List<WebElement> fiveLinks are no longer valid because the DOM has changed. Hence you see the StaleElementReferenceException.
Solution
A simple solution to avoid the StaleElementReferenceException is to move the line List<WebElement> fiveLinks = driver.findElements(By.xpath(".//*[@id='nav']/tbody/tr/td/a")); inside the for() loop. Your code block will then look like:
import java.util.List;
import java.util.concurrent.TimeUnit;

import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class Q44970712_stale {
    static WebDriver driver = null;

    public static void main(String[] args) throws InterruptedException {
        System.setProperty("webdriver.gecko.driver", "C:\\Utility\\BrowserDrivers\\geckodriver.exe");
        driver = new FirefoxDriver();
        driver.manage().window().maximize();
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
        driver.get("https://www.google.co.in/");
        //driver.findElement(By.xpath("//input[class='gsfi']")).sendKeys("Banduchode");
        WebElement search = driver.findElement(By.cssSelector("input#lst-ib"));
        search.sendKeys("Banduchode");
        search.sendKeys(Keys.ENTER);
        printLinksName();
        for (int i = 0; i < 5; i++) {
            List<WebElement> fiveLinks = driver.findElements(By.xpath(".//*[@id='nav']/tbody/tr/td/a"));
            System.out.println(fiveLinks.get(i).getText());
            fiveLinks.get(i).click();
            Thread.sleep(5000);
            printLinksName();
        }
    }

    public static void printLinksName() throws InterruptedException {
        List<WebElement> allLinks = driver.findElements(By.xpath("//*[@id='rso']/div/div/div/div/div/h3/a"));
        System.out.println(allLinks.size());
        //print all list
        for (int i = 0; i < allLinks.size(); i++) {
            System.out.println("Sno" + (i + 1) + ":" + allLinks.get(i).getText());
        }
    }
}
Note: with this simple solution, after you finish printing the second page and build List<WebElement> fiveLinks again through the XPath .//*[@id='nav']/tbody/tr/td/a, Page 1 becomes the first element stored in the fiveLinks list, so you may be redirected back to Page 1. To avoid that, consider using an XPath with proper indexing so it points at the specific page you want.
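One way to do that, as a rough sketch (it assumes Google's pagination markup of the time, where each page number under the element with id='nav' is an <a> whose visible text is the number, so the locator may need adjusting):
for (int i = 2; i <= 6; i++) {
    // re-locate the pagination link for the page we want next, by its visible text,
    // so we never hold a stale reference and never accidentally click "Page 1"
    WebElement nextPage = driver.findElement(
            By.xpath(".//*[@id='nav']/tbody/tr/td/a[text()='" + i + "']"));
    nextPage.click();
    Thread.sleep(5000);
    printLinksName();
}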
Your script clicks each link on the first page, which takes you to a new page. Once it finishes its work on that page, it doesn't return to the first page, so the script can't find the next link in your list.
Even if it did return to the first page, you would still have a stale element, because the page has been reloaded. You'll need to keep track of the links on the first page by something else (the href, maybe?) and find each link again by that identifier before you click on it.
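A rough sketch of that idea, reusing the result-link XPath from the question (untested, and it assumes java.util.ArrayList is imported):
// collect the href of every result link first, so we are not holding on to
// WebElement references that go stale once we navigate away
List<String> hrefs = new ArrayList<>();
for (WebElement link : driver.findElements(By.xpath("//*[@id='rso']/div/div/div/div/div/h3/a"))) {
    hrefs.add(link.getAttribute("href"));
}

// now visit each result by URL instead of clicking the (possibly stale) element
for (String href : hrefs) {
    driver.get(href);
    // ... do whatever is needed on the target page ...
    driver.navigate().back();
}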
This is due to referencing the element objects after moving to another page. Try adding the following lines at the end of the for loop body; it may resolve the stale-reference issue.
driver.navigate().back();
fiveLinks = driver.findElements(By.xpath(".//*[@id='nav']/tbody/tr/td/a"));
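Slotted into the loop from the question, that would look roughly like this (a sketch, untested):
List<WebElement> fiveLinks = driver.findElements(By.xpath(".//*[@id='nav']/tbody/tr/td/a"));
for (int i = 0; i < 5; i++) {
    System.out.println(fiveLinks.get(i).getText());
    fiveLinks.get(i).click();
    Thread.sleep(5000);
    printLinksName();
    // return to the results page and re-locate the pagination links
    // so the references are fresh for the next iteration
    driver.navigate().back();
    fiveLinks = driver.findElements(By.xpath(".//*[@id='nav']/tbody/tr/td/a"));
}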
I am learning how to automate with Selenium and Java on this website.
My task was to enter Ind in the edit box called Suggestion Class Example, after which a list of suggestions appears.
I have to click on India and then check whether the editText has the value India as expected.
I managed to do all of that except for one thing: I can't read the text back from the editText. Every time I try to extract it with .getText(), it returns an empty string (""), although the visible text is India. I tried several things: .getAttribute("value") returned Ind, and .getAttribute("innerText") returned an empty string (""). This is the HTML of the element whose text I want to extract:
<input type="text" id="autocomplete" class="inputs ui-autocomplete-input" placeholder="Type to Select Countries" autocomplete="off">
As you can see, there is no attribute there that I can use to get the text back.
This is my attempt so far:
import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.interactions.Actions;

public class Assignment8 {

    public static void main(String[] args) throws InterruptedException {
        // TODO Auto-generated method stub
        System.setProperty("webdriver.chrome.driver", "C:\\chrome driver\\chromedriver.exe");
        WebDriver driver = new ChromeDriver();
        driver.get("https://rahulshettyacademy.com/AutomationPractice/");

        WebElement editText = driver.findElement(By.xpath("//input[@id='autocomplete']"));
        editText.sendKeys("Ind");
        Thread.sleep(500);

        List<WebElement> suggested = driver.findElements(By.className("ui-menu-item"));
        for (WebElement e : suggested) {
            if (e.getText().equalsIgnoreCase("India")) {
                e.click();
                break;
            }
        }
        Thread.sleep(500);

        System.out.println(editText.getText());
        if (editText.getText().equalsIgnoreCase("India"))
            System.out.println("Success");
        else
            System.out.println("Failed");
    }
}
And this is the output in the console:
Failed
Everything works, including the clicking, but my problem is extracting the text back from that editText. I don't have any prior knowledge of HTML, so I hoped someone here could help.
So I figured out what the problem was. It was this line:
Thread.sleep(500);
The delay wasn't long enough for the text to be written back after clicking the list item, so I changed it to:
Thread.sleep(1500);
And it worked.
I was also supposed to use .getAttribute("value") instead of .getText(), because .getText() only returns the visible inner text of an element, while the text typed into an <input> is stored in its value property.
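Instead of guessing at a sleep, an explicit wait on the input's value would be more robust. A minimal sketch (my addition, not part of the original answer; it assumes the usual WebDriverWait/ExpectedConditions imports from org.openqa.selenium.support.ui):
// wait up to 10 seconds for the input's value to become "India" after clicking the suggestion
WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.textToBePresentInElementValue(editText, "India"));
System.out.println(editText.getAttribute("value"));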
I'm testing a website where the XPaths are dynamic, and I'm looking for a way to click various links in a list. I decided to try creating a List of the WebElements at the ul tag, which is static. However, when I do this I get an out-of-bounds exception like the following:
Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 2, Size: 1
at java.util.ArrayList.rangeCheck(ArrayList.java:657)
at java.util.ArrayList.get(ArrayList.java:433)
at automationFramework.ThirdIronTest.main(ThirdIronTest.java:40)
I know there is more than one li element, and I wait implicitly before searching for each element to make sure each page loads, but it still doesn't seem to work. What am I missing?
Here is my code:
public static void main(String[] args) throws InterruptedException, IOException {
    // Open Chrome browser
    WebDriver driver = new ChromeDriver();
    // Maximize browser window
    driver.manage().window().maximize();
    // Navigate to QA Environment to begin test
    driver.get("https://qa-safari-develop.browzine.com/libraries/14/subjects");
    // Allow search for elements to wait for page(s) to load
    driver.manage().timeouts().implicitlyWait(20, TimeUnit.SECONDS);
    // ** HERE WE ARE CREATING ELEMENTS OF THE SUBJECT LIST
    List<WebElement> subjectElems = driver.findElements(By.xpath("//*[@id=\"subjects-list\"]"));
    // Click on the subject Biomedical and Health Sciences from the Browse Subjects Navigation
    subjectElems.get(2).click();
The IndexOutOfBoundsException happens because you are getting a list of WebElements of size one. You want to retrieve the <LI> element which contains the link Biomedical and Health Sciences, but the piece of code driver.findElements(By.xpath("//*[@id=\"subjects-list\"]")) gets the <UL> element with the id subjects-list, which is unique on the page. So the list has size one, and the exception is thrown when you call subjectElems.get(2).click().
Summarizing, to make it work, you should do something similar to:
// Navigate to QA Environment to begin test
driver.get("https://qa-safari-develop.browzine.com/libraries/14/subjects");
// Allow search for elements to wait for page(s) to load
driver.manage().timeouts().implicitlyWait(20, TimeUnit.SECONDS);
//** Getting the ul with the links
WebElement subjectElems = driver.findElement(By.xpath("//*[@id=\"subjects-list\"]"));
// looking for an element with link text = Biomedical and Health Sciences
WebElement biomedicalAndHealthSciences = subjectElems.findElement(By.linkText("Biomedical and Health Sciences"));
System.out.println(biomedicalAndHealthSciences.getText());
// Click on the subject Biomedical and Health Sciences from the Browse Subjects Navigation
biomedicalAndHealthSciences.click();
If you want to iterate over all links in the <UL>:
WebElement subjectElems = driver.findElement(By.xpath("//*[@id=\"subjects-list\"]"));
List<WebElement> linkList = subjectElems.findElements(By.tagName("a"));
for (WebElement link : linkList) {
    System.out.println(link.getText());
    link.click();
}
To create a List of all the available Subjects you need to induce WebDriverWait for the visibility of the elements, and then you can click on the subject of your choice, e.g. Biomedical and Health Sciences, using the following solution:
Code Block:
import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class click_list_item {

    public static void main(String[] args) {
        System.setProperty("webdriver.gecko.driver", "C:\\Utility\\BrowserDrivers\\geckodriver.exe");
        WebDriver driver = new FirefoxDriver();
        driver.get("https://qa-safari-develop.browzine.com/libraries/14/subjects");
        List<WebElement> myElements = new WebDriverWait(driver, 20).until(ExpectedConditions.visibilityOfAllElementsLocatedBy(By.xpath("//a[@class='subjects-list-subject ember-view']/span[@class='subjects-list-subject-name']")));
        for (WebElement elem : myElements) {
            if (elem.getAttribute("innerHTML").contains("Biomedical and Health Sciences")) {
                elem.click();
                break;
            }
        }
    }
}
I am new to automation testing and am having difficulty practicing with Selenium 3 on the booking.com website.
There is an auto-suggestion text box: when you type a word, suggestions are shown and you can click one from the list, e.g. Downtown Singapore.
I have tried the XPath id("basiclayout")/div[@class="leftwide rilt-left"]/div[@class="sb-searchbox__outer"]/form[@id="frm"]/div[@class="sb-searchbox__row u-clearfix"]/div[1]/div[@class="c-autocomplete sb-destination"]/ul[@class="c-autocomplete__list sb-autocomplete__list -visible"]/li[@class="c-autocomplete__item sb-autocomplete__item sb-autocomplete__item--city sb-autocomplete__item__item--elipsis"]
and the CSS classes c-autocomplete__item sb-autocomplete__item sb-autocomplete__item--city sb-autocomplete__item__item--elipsis.
Every scenario fails when I run my test cases with Selenium Java.
How do I handle such a web element?
Complete code:
public class Selenium3Testing {

    private WebDriver driver;

    @Before
    public void setUp() {
        String baseUrl = "https://www.booking.com/";
        System.setProperty("webdriver.chrome.driver", "src/test/resources/drivers/chromedriver.exe");
        DesiredCapabilities capabilities = new DesiredCapabilities();
        driver = new ChromeDriver(capabilities);
        driver.get(baseUrl);
    }

    @After
    public void tearDown() {
        driver.quit();
    }

    @Test
    public void openBookingDotCom() {
        driver.findElement(By.id("ss")).click();
        driver.findElement(By.id("ss")).clear();
        driver.findElement(By.id("ss")).sendKeys("Singapore");
        //click on auto suggestion row number 2
        driver.findElement(By.css("c-autocomplete__item sb-autocomplete__item sb-autocomplete__item--city sb-autocomplete__item__item--elipsis")).click();
    }
}
I'm typing from mobile, so no code, but here is the way we can do it.
For giving input to the input box: if we pass the whole word to sendKeys at once, the suggestions may not load or may be delayed. So the best approach I follow is to pass one character at a time, maybe sleeping around 300 milliseconds after each character. Write a small method that loops over all the characters in the word.
To click on the suggestion list, try an XPath with contains() on the text, or whichever locator works well.
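A minimal sketch of that approach, with assumed values (the ss id and the suggestion text come from the question; the 300 ms pause is just a guess, and Thread.sleep needs InterruptedException handled in the surrounding method):
// type the destination one character at a time so the suggestion list has time to load
String word = "Singapore";
WebElement box = driver.findElement(By.id("ss"));
for (char c : word.toCharArray()) {
    box.sendKeys(String.valueOf(c));
    Thread.sleep(300);
}

// click the suggestion whose text contains the destination we want
driver.findElement(By.xpath("//li[contains(normalize-space(.), 'Downtown Singapore')]")).click();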
Kindly help me read the text of the element I'm trying to access via XPath. I tried absolute and partial XPaths but could not read the value. Below is my code; all I'm getting in the console is the message "INFO: Detected dialect: W3C".
import java.util.concurrent.TimeUnit;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class xpathPractice {

    public static void main(String[] args) {
        System.setProperty("webdriver.gecko.driver", "C:\\BrowserDriver\\geckodriver.exe");
        WebDriver driver = new FirefoxDriver();
        driver.manage().timeouts().implicitlyWait(50, TimeUnit.SECONDS);
        driver.get("http://www.lavasa.com/learn/acca.aspx");

        String str3 = driver.findElement(By.xpath("//*[@id='main-nav']/ul/li[4]/ul/li[1]/a")).getText();
        System.out.println(str3);
        // After executing this code, all I see in the console is "INFO: Detected dialect: W3C"
    }
}
The element you are trying to find is actually hidden and is only revealed on mouseover. So we first have to make the web element visible; only then can you use the getText() method.
Step 1: Identify the web element you want to mouseover:
WebElement ele = driver.findElement(By.xpath(".//*[@id='main-nav']/ul/li[4]/a"));
Step 2: Use the Actions class to mouse over the WebElement:
Actions act = new Actions(driver);
act.moveToElement(ele);
act.build().perform();
Step 3: Now, once the element is visible, go ahead and use getText() to get the text of the element.
String str3 = driver.findElement(By.xpath("//*[@id='main-nav']/ul/li[4]/ul/li[1]/a")).getText();
System.out.println(str3);
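Putting the three steps together (a sketch; it assumes the driver setup from the question and an import of org.openqa.selenium.interactions.Actions):
// hover over the parent menu item so the hidden sub-menu becomes visible
WebElement ele = driver.findElement(By.xpath(".//*[@id='main-nav']/ul/li[4]/a"));
Actions act = new Actions(driver);
act.moveToElement(ele).build().perform();

// now that the sub-menu is visible, getText() returns the link text
String str3 = driver.findElement(By.xpath("//*[@id='main-nav']/ul/li[4]/ul/li[1]/a")).getText();
System.out.println(str3);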
You are trying to getText() from a WebElement which is not displayed on the current screen.
We can get the text in two ways:
1. Using the JavascriptExecutor:
System.setProperty("webdriver.gecko.driver", "C:\\BrowserDriver\\geckodriver.exe");
WebDriver driver = new FirefoxDriver();
driver.manage().timeouts().implicitlyWait(50,TimeUnit.SECONDS);
JavascriptExecutor js = (JavascriptExecutor) driver;
driver.get("http://www.lavasa.com/learn/acca.aspx");
WebElement element = driver.findElement(By.xpath("//*[@id='main-nav']/ul/li[4]/ul/li[1]/a"));
String script =(String)js.executeScript("return arguments[0].innerHTML;",element);
System.out.println(script);
2. Using Actions: mouse-hover over the element and then use getText().
Alternatively, you can use Selenium's getAttribute() method as follows:
driver.findElement(By.xpath("//*[@id='main-nav']/ul/li[4]/ul/li[1]/a")).getAttribute("innerHTML");
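For example (assuming the same driver setup as above):
// innerHTML can be read even while the element is hidden
String text = driver.findElement(By.xpath("//*[@id='main-nav']/ul/li[4]/ul/li[1]/a")).getAttribute("innerHTML");
System.out.println(text);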
What I'm trying to do is check elements on each page for visibility; if an element is visible on the current page, I would like to make some assertions.
My code looks like below:
package com.example.tests;

import java.util.Iterator;
import java.util.List;
import java.util.regex.Pattern;
import java.util.concurrent.TimeUnit;
import org.junit.*;
import static org.junit.Assert.*;
import static org.hamcrest.CoreMatchers.*;
import org.openqa.selenium.*;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.Select;
import com.thoughtworks.selenium.Selenium;
import com.thoughtworks.selenium.webdriven.WebDriverBackedSelenium;

public class Webdriver_class {
    private WebDriver driver;
    private String baseUrl;
    private boolean acceptNextAlert = true;
    private StringBuffer verificationErrors = new StringBuffer();

    @Before
    public void setUp() throws Exception {
        driver = new FirefoxDriver();
        baseUrl = "http://www.lotto.pl/";
        driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
    }

    @Test
    public void testUntitled() throws Exception {
        driver.get(baseUrl + "/lotto/wyniki-i-wygrane/wygrane");
        assertEquals("Wyniki i wygrane Lotto i Lotto Plus | Lotto, Kaskada, Multi Multi, Mini Lotto, Joker, Zdrapki - lotto.pl", driver.getTitle());
        assertEquals("28-07-11", driver.findElement(By.xpath("//div[@id='page']/div[3]/div[2]/div[2]/table/tbody/tr[77]/td[5]")).getText());

        /// number of pages ///
        String xpath = "html/body/div[3]/div[1]/div/div[3]/div[2]/div[2]/div[3]/div/ul/li";
        List<WebElement> elements = driver.findElements(By.xpath(xpath));
        int x = elements.size();
        System.out.println("liczba stron = " + elements.size());
        //// end //////

        for (int i = 1; i <= x; i++) {
            if (driver.findElement(By.xpath("//div[@id='page']/div[3]/div[2]/div[2]/table/tbody/tr[contains(td[3],'Warszawa') and contains (td[5],'21-02-09')]/td[1]")) != null) {
                assertEquals("100.", driver.findElement(By.xpath("//div[@id='page']/div[3]/div[2]/div[2]/table/tbody/tr[contains(td[3],'Warszawa') and contains (td[5],'21-02-09')]/td[1]")).getText());
                assertEquals("100.", driver.findElement(By.xpath("//div[@id='page']/div[3]/div[2]/div[2]/table/tbody/tr[contains(td[3],'Warszawa') and contains (td[5],'21-02-09')]/td[1]")).getText());
            }
            if (driver.findElement(By.xpath("//div[@id='page']/div[3]/div[2]/div[2]/table/tbody/tr[contains(td[3],'Nowa Sól') and contains (td[5],'05-04-12')]/td[1]")) != null) {
                assertEquals("99.", driver.findElement(By.xpath("//div[@id='page']/div[3]/div[2]/div[2]/table/tbody/tr[contains(td[3],'Nowa Sól') and contains (td[5],'05-04-12')]/td[1]")).getText());
            }

            // go to the next page //
            driver.findElement(By.xpath("html/body/div[3]/div[1]/div/div[3]/div[2]/div[2]/div[3]/div/ul/li/a[" + i + "]")).click();
            for (int second = 0;; second++) {
                if (second >= 60) fail("timeout- nie znalazł 'Wyniki i wygrane Lotto i Lotto Plus' ");
                try {
                    if ("Wyniki i wygrane Lotto i Lotto Plus".equals(driver.findElement(By.cssSelector("h1.title")).getText())) break;
                } catch (Exception e) {}
                Thread.sleep(50000);
            }
        }
    }
}
All elements are visible on the first page, so the first page is fine, but when it goes to the second page I get this error:
org.openqa.selenium.NoSuchElementException: Unable to locate element: {"method":"xpath","selector":"//div[@id='page']/div[3]/div[2]/div[2]/table/tbody/tr[contains(td[3],'Warszawa') and contains (td[5],'21-02-09')]/td[1]"}
Command duration or timeout: 30.10 seconds
For documentation on this error, please visit: http://seleniumhq.org/exceptions/no_such_element.html
Can anyone help with this? Why do I get this error when I use an IF statement? If WebDriver doesn't find the element, it shouldn't make the assertion, should it?
So when it doesn't find the element, it should just move on to the next check.
Please help me figure out what to do to make this work :)
Thanks.
The possible cause of this error is that the element you are trying to find with the XPath
//div[@id='page']/div[3]/div[2]/div[2]/table/tbody/tr[contains(td[3],'Warszawa') and contains (td[5],'21-02-09')]/td[1]
is not present on the second page. Even though you use an if to check whether the element is present, if WebDriver cannot find the element, Selenium throws a NoSuchElementException. To make it simpler, let's put it this way:
if (driver.findElement(By.xpath(element_xpath)) != null)
{
do_some_stuff
}
Now let's break it down as follows:
is_present = driver.findElement(By.xpath(element_xpath))
if(is_present != null)
{
do_some_stuff
}
The above piece of code is a simplified version of the previous one, and this is how your code is evaluated when you run the program: first, the expression inside the if condition is evaluated (in your case driver.findElement(By.xpath())), and then the result is compared with null.
But when WebDriver tries to locate the element with the given XPath and cannot find it, it throws a NoSuchElementException at that very point. Once the exception is thrown, Java does not evaluate the comparison at all and the program terminates.
To solve it, put it inside a try-catch block:
try
{
    if (driver.findElement(By.xpath(element_xpath)) != null)
    {
        do_some_stuff
    }
}
catch(Exception e)
{
    System.out.println(e);
    System.out.println("Element not found");
}
So even if WebDriver throws an exception because it fails to locate the given element, the program will not terminate; the catch block takes care of the exception and the next piece of your code is executed.
I would suggest putting each if block in its own try-catch, so that even if one of your if blocks throws an exception, the next if block is not affected by it.
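As a concrete sketch, applied to the first check from the question (my illustration, not the original answer's code; it catches org.openqa.selenium.NoSuchElementException, which the wildcard import already covers):
try {
    WebElement cell = driver.findElement(By.xpath(
            "//div[@id='page']/div[3]/div[2]/div[2]/table/tbody"
            + "/tr[contains(td[3],'Warszawa') and contains (td[5],'21-02-09')]/td[1]"));
    assertEquals("100.", cell.getText());
} catch (NoSuchElementException e) {
    // the row is simply not on this page; move on to the next check
    System.out.println("Element not found on this page");
}
Keep in mind that with the 30-second implicit wait, each missing element still costs a full timeout before the exception is thrown, which is why the error above reports a duration of about 30 seconds.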
Hope this solves your issue.