How to iterate a for-each loop over duplicate values - Java

What I expect to happen: The program should find the expected WebElement in the list, click it, read the contract id, and match it against the given contract id. If they match, break the loop; otherwise click the back button and proceed until the conditions are satisfied.
Actual issue:
On running this for-each loop, the program finds the first web element in the list and it passes the first if condition. After clicking that element the second if condition is not satisfied, so it navigates back and the loop runs again. At that point the code breaks and throws "stale element reference: element is not attached to the page document" :(
How do I get over this error?
Note: the WebElement I actually need is the 3rd one in the list for the given contract id.
// Selenium WebDriver with Java:
WebElement membership = driver.findElement(By.xpath(".//*[@id='content_tab_memberships']/table[2]/tbody"));
List<WebElement> rownames = membership.findElements(By.xpath(".//tr[@class='status_active']/td[1]/a"));
// rownames holds WebElements with duplicate link texts, e.g. {SimpleMem, SimpleMem, SimpleMem, Protata},
// but each row has a unique contract id (displayed after clicking the element)
for (WebElement actual_element : rownames) {
    String Memname = actual_element.getAttribute("innerHTML");
    System.out.println("the membership name: " + Memname);
    if (Memname.equalsIgnoreCase(memname1)) {
        actual_element.click();
        String actualcontractid = cp.contarct_id.getText();
        if (actualcontractid.equalsIgnoreCase(contractid)) {
            break;
        } else {
            cp.Back_Btn.click();
            Thread.sleep(1000L);
        }
    }
}

After clicking the row element, you navigate away from the current page's DOM. On the new page, if the contract id does not match, you navigate back to the previous page.
You are expecting to still be able to access the elements in the list [rownames] that you collected before the loop. But the DOM has been reloaded, so those earlier element references are no longer attached, hence the StaleElementReferenceException.
Can you try the sample code below?
public WebElement getRowOnMatchingMemberNameAndContractID(String memberName, String contractId, int startFromRowNo) {
    WebElement membership = driver.findElement(By.xpath(".//*[@id='content_tab_memberships']/table[2]/tbody"));
    List<WebElement> rowNames = membership.findElements(By.xpath(".//tr[@class='status_active']/td[1]/a"));
    // rowNames holds WebElements with duplicate link texts, e.g. {SimpleMem, SimpleMem, SimpleMem, Protata},
    // but each row has a unique contract id (displayed after clicking the element)
    for (int i = startFromRowNo; i < rowNames.size(); i++) {
        String actualMemberName = rowNames.get(i).getAttribute("innerHTML");
        if (actualMemberName.equalsIgnoreCase(memberName)) {
            rowNames.get(i).click();
            String actualContractId = cp.contarct_id.getText();
            if (actualContractId.equalsIgnoreCase(contractId)) {
                return rowNames.get(i);
            } else {
                cp.Back_Btn.click();
                // re-enter with fresh lookups, starting from the next row
                return getRowOnMatchingMemberNameAndContractID(memberName, contractId, i + 1);
            }
        }
    }
    return null;
}
I have used recursion and an additional parameter to keep track of the previously clicked row. You can call the above method with 0 as the starting row, like:
WebElement row = getRowOnMatchingMemberNameAndContractID(expectedMemberName, expectedContractID,0);

Related

Selenium JS Executor Failure: "Failed to execute elementsFromPoint on Document"

I'm running code that fetches a span by the value of its text and then right-clicks it using this function:
public void rightClickElement(WebElement element) {
    Actions actions = new Actions(driver);
    actions.contextClick(element).perform();
}
Basically I iterate over a list of filenames and select the element I want to manipulate by its filename using the following XPath:
//span[contains(text(), 'PLACEHOLDER')]
with a function that replaces PLACEHOLDER by the current value of the array of filenames I'm iterating over.
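The replacePlaceholderInString helper itself isn't shown in the question; under the behavior described, it could be as simple as this sketch (the name and signature are assumed from the call site):

```java
public class PlaceholderXPath {
    // Assumed behavior: substitute the literal token PLACEHOLDER with the filename.
    static String replacePlaceholderInString(String template, String value) {
        return template.replace("PLACEHOLDER", value);
    }

    public static void main(String[] args) {
        System.out.println(replacePlaceholderInString(
                "//span[contains(text(), 'PLACEHOLDER')]", "report.txt"));
        // -> //span[contains(text(), 'report.txt')]
    }
}
```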
This is my code:
*Note: getAssertedElement is just a function I wrote that asserts an element's existence and returns it at the same time.
List<WebElement> textFilesElements = driver.findElements(By.xpath("//span[(@class='document') and contains(text(), '.txt')]"));
ArrayList<String> filesToDelete = new ArrayList<String>();
waitSeconds(1);
for (int i = 0; i < textFilesElements.size(); i++) {
    filesToDelete.add(textFilesElements.get(i).getText());
}
for (int i = 0; i < filesToDelete.size(); i++) {
    WebElement elementToDelete = getAssertedElement("Cannot find the current element",
            replacePlaceholderInString(
                    "//span[contains(text(), 'PLACEHOLDER')]",
                    filesToDelete.get(i)
            ),
            "xpath");
    System.out.println("FICHIER TO DELETE" + elementToDelete.getText());
    rightClickElement(elementToDelete);
    // do things with element
    // ...
}
This works fine the first time through the second for statement, but when I move on to the next filename, even though the element is visible and clickable, the test fails with the following error when it reaches the rightClickElement call:
javascript error: Failed to execute 'elementsFromPoint' on 'Document': The provided double value is non-finite.
I don't understand why, much less how to fix it.

Selenium Webdriver - how to apply explicit wait in table (in the for loop) using java

From the table, collect each row's td[3] value. Below is my Java source code:
WebElement biboSection = driver.findElement(By.xpath("//*@id='Label1']/div/table[2]/tbody"));
List<WebElement> rowsCount = biboSection.findElements(By.tagName("tr"));
for (int k = 1; k <= rowsCount.size(); k++) {
    String biblioTable = driver.findElement(By.xpath("//*@id='Label1']/div/table[2]/tbody/tr[" + k + "]/td[3]")).getText().trim();
}
The problem is that if td[3] is not available in one of the rows, the lookup fails with the error below:
org.openqa.selenium.NoSuchElementException: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id='Label1']/div/table[2]/tbody/tr[5]/td[3]"}
In general, for a single element we can use an explicit wait to avoid the above exception. But in a table, how can I continue with the rest of the rows if a particular cell is not available (i.e. if tr[5]/td[3] is not available, move on to tr[6]/td[3])?
What you need to do is handle the NoSuchElementException when a cell is missing. The risk is that you can get a false positive, so you need to make sure you actually do something with the exception.
You could check for the element first and only build the string biblioTable once you know the cell is there. Try the following (I'm typing this without an IDE, so excuse any mistakes):
WebElement el;
for (int k = 1; k <= rowsCount.size(); k++) {
    try {
        el = driver.findElement(By.xpath("//*[@id='Label1']/div/table[2]/tbody/tr[" + k + "]/td[3]"));
    } catch (NoSuchElementException e) {
        System.out.println("td[3] not found in row " + k + ", moving on to the next row");
        continue;
    }
    String biblioTable = el.getText().trim();
}
So what you are doing is:
Inside the loop you first try to locate the td[3] cell of the current row. If findElement throws a NoSuchElementException, the catch block handles it (here it just logs the miss) and continue moves on to the next row. Otherwise the cell was found, so you can create the string biblioTable from its text and do whatever else you want with it before the next iteration.
First, I think you have a typo in two places: your xpath is missing the open square bracket "[" after "//*"
Secondly, I think you can accomplish what you want with one declaration:
List<WebElement> biblioTable = driver.findElements(By.xpath("//*[@id='Label1']/div/table[2]/tbody/tr/td[3]"));
Then you can access the text elements via:
for (WebElement text : biblioTable) {
    String name = text.getText();
}

Selenium WebDriver: findElement() in each WebElement from List<WebElement> always returns contents of first element

Page page = new Page();
page.populateProductList(driver.findElement(By.xpath("//div[@id='my_25_products']")));

class Page
{
    public final static String ALL_PRODUCTS_PATTERN = "//div[@id='all_products']";
    private List<Product> productList = new ArrayList<>();

    public void populateProductList(final WebElement pProductSectionElement)
    {
        final List<WebElement> productElements = pProductSectionElement.findElements(By.xpath(ALL_PRODUCTS_PATTERN));
        for (WebElement productElement : productElements) {
            // test 1 - works
            // System.out.println("Product block: " + productElement.getText());
            final Product product = new Product(productElement);
            // test 2 - wrong
            // System.out.println("Title: " + product.getUrl());
            // System.out.println("Url: " + product.getTitle());
            productList.add(product);
        }
        // test 3 - works
        // System.out.println(productElements.get(0).findElement(By.xpath(Product.URL_PATTERN)).getAttribute("href"));
        // System.out.println(productElements.get(1).findElement(By.xpath(Product.URL_PATTERN)).getAttribute("href"));
    }
}

class Product
{
    public final static String URL_PATTERN = "//div[@class='url']/a";
    public final static String TITLE_PATTERN = "//div[@class='title']";
    private String url;
    private String title;

    public Product(final WebElement productElement)
    {
        url = productElement.findElement(By.xpath(URL_PATTERN)).getAttribute("href");
        title = productElement.findElement(By.xpath(TITLE_PATTERN)).getText();
    }
    /* ... */
}
The webpage I am trying to 'parse' with Selenium has a lot of code. I need to deal with just a smaller portion of it that contains the products grid.
To the populateProductList() call I pass the resulting portion of the DOM that contains all the products.
(Running that XPath in Chrome returns the expected all_products node.)
In that method, I split the products into 25 individual WebElements, i.e., the product blocks.
(Here I also confirm that works in Chrome and returns the list of nodes, each containing the product data)
Next I want to iterate through the resulting list and pass each WebElement into the Product() constructor that initializes the Product for me.
Before I do that I run a small test and print out the product block (see test 1); individual blocks are printed out in each iteration.
After performing the product assignments (again, xpath confirmed in Chrome) I run another test (see test 2).
Problem: this time the test returns only the url/title pair from the FIRST product for EACH iteration.
Among other things, I tried moving the Product's findElement() calls into the loop and still had the same problem. Next, I tried running findElements() and doing a get(i).getAttribute("href") on the result; this time it correctly returned individual product URLs (see test 3).
Then, when I do a findElements(URL_PATTERN) on a single productElement inside the loop, it magically returns ALL product urls... This means that findElement() always returns the first product from the set of 25 products, whereas I would expect each WebElement to contain only one product.
I think this looks like a problem with references, but I have not been able to come up with anything or find a solution online.
Any help with this? Thanks!
java 1.7.0_15, Selenium 2.45.0 and FF 37
The problem is in the XPath of the Product locators.
The below XPath expression in Selenium means you are looking for a matching element which CAN BE ANYWHERE in the document. Not relative to the parent, as you are thinking!
//div[@class='url']/a
This is why it always returns the same first element.
So, in order to make it relative to the parent element, it should be as given below (just a . before //):
public final static String URL_PATTERN = ".//div[@class='url']/a";
public final static String TITLE_PATTERN = ".//div[@class='title']";
Now it searches for a matching child element relative to the parent.
XPath in Selenium works as given below:
/a/b/c --> Absolute - from the root
//a/b --> Matching element which can be anywhere in the document (even outside the parent)
.//a/b --> Matching element inside the given parent
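The same context-node rules can be verified outside Selenium with the standard javax.xml.xpath API. A minimal sketch over a toy document (the element names are invented for the demo): with the second product as the context node, "//url" restarts from the document root and finds the first url, while ".//url" stays inside the context node.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class XPathContextDemo {
    // Evaluates `expr` with the SECOND <product> as the context node.
    static String evalFromSecondProduct(String expr) throws Exception {
        String xml = "<root>"
                + "<product><url>first</url></product>"
                + "<product><url>second</url></product>"
                + "</root>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        XPath xpath = XPathFactory.newInstance().newXPath();
        Node second = (Node) xpath.evaluate("/root/product[2]", doc, XPathConstants.NODE);
        Node hit = (Node) xpath.evaluate(expr, second, XPathConstants.NODE);
        return hit.getTextContent();
    }

    public static void main(String[] args) throws Exception {
        // "//url" ignores the context node and searches the whole document
        System.out.println(evalFromSecondProduct("//url"));  // first
        // ".//url" searches only inside the context node
        System.out.println(evalFromSecondProduct(".//url")); // second
    }
}
```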

Jsoup selectors returning all values instead of searched values

I'm learning how to use jsoup and I've created a method called search which uses jsoup's contains and containsOwn selectors to search for a given item and return its price. (For now the item name is hardcoded for testing purposes, but the method will later take a parameter to accept any item name.)
The problem I'm having is that the selector isn't working: all the prices on the page are returned instead of just the one for the item being searched for, in this case "blinds". So in this example, if you follow the link, only one item on that page says blinds and its price is listed as "$30 - $110 original $18 - $66 sale", but every item on that page gets returned instead.
I am aware that with jsoup I can explicitly call the name of the div and just extract the information from it that way. But I wanted to turn this into a bigger project and also extract prices from the same item from other chains such as Walmart, Sears, Macy's etc. Not just that particular website I used in my code. So I can't explicitly call the div name if I wanted to do that because that would only solve the problem for one site, but not the others and I wanted to take on an approach that encompasses the majority of those sites all at once.
How do I extract the price associated with its rightful item? Is there any way of doing it so that the item and price extracting will apply to most websites?
I would appreciate any help.
private static String search() {
    Document doc;
    String priceText = null;
    try {
        doc = Jsoup.connect("http://www.jcpenney.com/for-the-home/sale/cat.jump?id=cat100590341&deptId=dept20000011").get();
        Elements divs = doc.select("div");
        HashMap<Element, String> items = new HashMap<>();
        for (Element element : doc.select("div:contains(blinds)")) {
            // For those items that say "buy 1 get 1 free"
            String buyOneText = divs.select(":containsOwn(buy 1)").text();
            Element all = divs.select(":containsOwn($)").first();
            priceText = element.select(":containsOwn($)").text();
            items.put(element, priceText);
        }
        System.out.println(priceText);
    } catch (Exception e) {
        e.printStackTrace();
    }
    return priceText;
}
If you had tried at least to debug your app, you would surely have spotted your mistake.
Put a breakpoint, for example, on this line:
String buyOneText = divs.select(":containsOwn(buy 1)").text();
and you will see that this element in the loop really does contain the "blinds" text (as do all the elements that were selected).
I don't see the point of trying to make a super-universal tool that works everywhere; IMO it is not possible, and for every page you have to write your own crawler. In this case your code should probably look like the below (I have added a timeout; also, this code does not work fully on my side, as my default currency is PLN):
private static String search() {
    Document doc;
    String priceText = null;
    try {
        doc = Jsoup.connect("http://www.jcpenney.com/for-the-home/sale/cat.jump?id=cat100590341&deptId=dept20000011").timeout(10000).get();
        Elements divs = doc.select("div[class=price_description]");
        HashMap<Element, String> items = new HashMap<>();
        for (Element element : divs.select("div:contains(blinds)")) {
            // For those items that say "buy 1 get 1 free"
            String buyOneText = divs.select(":containsOwn(buy 1)").text();
            Element all = divs.select(":containsOwn($)").first();
            priceText = element.select(":containsOwn($)").text();
            items.put(element, priceText);
        }
        System.out.println(priceText);
    } catch (Exception e) {
        e.printStackTrace();
    }
    return priceText;
}
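The key idea, scoping the price lookup to the container of the one matching item, can be shown with jsoup on inline HTML. This is a sketch only: the "item" and "price" class names below are invented for the demo, not the real markup of any particular store.

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class ScopedPrice {
    // Look up the price INSIDE the matching item's container only,
    // instead of selecting ":containsOwn($)" across the whole page.
    static String priceFor(String html, String itemName) {
        Document doc = Jsoup.parse(html);
        // :contains() is case-insensitive and matches the element's whole text
        Element item = doc.selectFirst("div.item:contains(" + itemName + ")");
        return item == null ? null : item.selectFirst(".price").text();
    }

    public static void main(String[] args) {
        String html = "<div class='item'>Blinds<span class='price'>$30 - $110</span></div>"
                + "<div class='item'>Curtains<span class='price'>$12</span></div>";
        System.out.println(priceFor(html, "Blinds")); // $30 - $110
    }
}
```

Because the second select call starts from the matched item element rather than the document, only that item's price comes back.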

Selenium Webdriver to wait until the async postback update panel request is completed

I have a table with clickable column header. On first click, the column should get sorted in ascending order and on second click column should get sorted in descending order.
The sorting has been implemented using async postback update-panel (I am not sure how it is done, it is an aspx page).
I would like to automate the sorting functionality using Selenium Webdriver. How can I implement the WAIT condition for the page where page doesn't get reloaded but only the table contents are reloaded.
waitForElementPresent wouldn't work, as no new element is displayed or hid on clicking the header.
PS: Java implementation required.
I have added a sample program that works against a jQuery table. Below is the flow of execution of the code:
1. First, it will navigate to the site.
2. Since I am taking the second column "Position" into consideration, it will retrieve the first text under that column.
3. Then, click on the column header "Position" to sort ascending.
4. Wait up to 10 seconds until the first text changes.
5. Print the result accordingly.
6. Again, click on the column header "Position" to sort descending.
7. Wait up to 10 seconds until the first text changes.
8. Print the result accordingly.
public class TestSortTable {
    static WebDriver driver;

    public static void main(String[] args) {
        driver = new FirefoxDriver();
        driver.manage().window().maximize();
        driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
        driver.get("http://www.datatables.net/examples/basic_init/table_sorting.html");

        // For Ascending in column "Position"
        String result = clickAndWaitForChangeText(By.xpath("//table[@id='example']//th[2]"), By.xpath("//table[@id='example']//tr[1]/td[2]"), "Ascending");
        if (result.contains("Fail"))
            System.err.println(result);
        else
            System.out.println(result);

        // For Descending in column "Position"
        result = clickAndWaitForChangeText(By.xpath("//table[@id='example']//th[2]"), By.xpath("//table[@id='example']//tr[1]/td[2]"), "Descending");
        if (result.contains("Fail"))
            System.err.println(result);
        else
            System.out.println(result);

        driver.close(); // closing browser instance
    }

    // Clicks the header and waits till the first text in the column changes
    public static String clickAndWaitForChangeText(By headerLocator, By firstTextLocator, String sortorder) {
        try {
            String firstText = driver.findElement(firstTextLocator).getText();
            System.out.println("Clicking on the header for sorting in: " + sortorder); // sortorder -> Ascending/Descending
            driver.findElement(headerLocator).click(); // click for ascending/descending
            // Wait till the first text changes after the sort
            boolean changed = new WebDriverWait(driver, 10).until(ExpectedConditions.invisibilityOfElementWithText(firstTextLocator, firstText));
            if (changed) {
                return "Pass: Waiting Ends. Text has changed from '" + firstText + "' to '" + driver.findElement(firstTextLocator).getText() + "'";
            } else {
                return "Fail: Waiting Ends. Text hasn't changed from '" + firstText + "'.";
            }
        } catch (Throwable e) {
            return "Fail: Error while clicking and waiting for the text to change: " + e.getMessage();
        }
    }
}
NOTE:- You can use the method clickAndWaitForChangeText accordingly in your code for the relevant result(s).
You should wait until jQuery.active returns 0. Mine is written in C#. In addition to this, we can also wait for a specific element that you know will satisfy your wait criteria. You can use FluentWait or write your own custom wait that waits until the element exists.
public void WaitForAjax()
{
    var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(15));
    wait.Until(d => (bool)(d as IJavaScriptExecutor).ExecuteScript("return jQuery.active == 0"));
}
EDIT: Java version
public void waitForAjaxLoad(WebDriver driver) throws InterruptedException {
    JavascriptExecutor executor = (JavascriptExecutor) driver;
    if ((Boolean) executor.executeScript("return window.jQuery != undefined")) {
        while (!(Boolean) executor.executeScript("return jQuery.active == 0")) {
            Thread.sleep(1000);
        }
    }
    return;
}
Directly taken from here
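The "write your own custom wait" idea mentioned above boils down to polling a condition until a deadline, which is the same loop WebDriverWait runs internally. A minimal, Selenium-free sketch of that polling loop (all names here are mine, not a Selenium API):

```java
import java.util.function.Supplier;

public class PollingWait {
    // Polls `condition` every pollMillis until it returns true or the timeout expires.
    static boolean waitUntil(Supplier<Boolean> condition, long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.get()) {
            if (System.currentTimeMillis() >= deadline) {
                return false; // gave up, analogous to a WebDriverWait timeout
            }
            Thread.sleep(pollMillis);
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Toy condition standing in for "jQuery.active == 0": true after ~100 ms
        boolean ok = waitUntil(() -> System.currentTimeMillis() - start > 100, 2000, 25);
        System.out.println(ok); // true
    }
}
```

In real test code the condition would be the executeScript check from the answer above; the point is only that the wait is a poll-sleep-recheck loop with a deadline.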
