I have a piece of code that I use to find an element by its index. It works fine on Android, but I have a problem on iOS: since "IosUIAutomation" was deprecated, I can't find how to replace it.
This is my code:
private void SelectByIndex(LocatorTypesEnum locatorType, String locator, String index, String condition,
int timeoutForWaitCondition) {
try {
element = getElement(locator, locatorType, condition, timeoutForWaitCondition);
System.out.println("Element: "+element);
MobileElement listItem = null;
if (os_name.equalsIgnoreCase("Android")){
listItem = element.findElement(
MobileBy.AndroidUIAutomator("new UiSelector().index(" + Integer.parseInt(index) + ")"));
} else if (os_name.equalsIgnoreCase("ios")) {
/*listItem = element.findElement(
MobileBy.IosUIAutomation("new UiSelector().index(" + Integer.parseInt(index) + ")"));*/
//This is the part I need to replace
}
assertNotNull(listItem.getLocation());
ar = this.setActionResultValues(ar, _takeScreenshotToStep, "", true, false, null);
listItem.click();
} catch (Exception e) {
...
}
}
I am new to this topic, so any help would be greatly appreciated.
I tried "iOSNsPredicateString", but that didn't work either: it doesn't find the element for the given index.
else if (os_name.equalsIgnoreCase("ios")) {
    listItem = element.findElement(
        MobileBy.iOSNsPredicateString("new UiSelector().index(" + Integer.parseInt(index) + ")"));
}
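The UiSelector syntax is Android-only, and iOSNsPredicateString expects an NSPredicate expression, so passing a UiSelector string to it cannot match anything. A minimal sketch of one possible replacement for the commented-out branch, using the XCUITest class chain locator (the XCUIElementTypeCell element type and the +1 index shift are assumptions that may need adjusting for your app):
else if (os_name.equalsIgnoreCase("ios")) {
    // Class chain indices are 1-based, so shift the 0-based index by one if you
    // want to keep the same index convention as the Android branch.
    int iosIndex = Integer.parseInt(index) + 1;
    // "XCUIElementTypeCell" is only an assumption about the row type; replace it
    // with the element type your list actually contains, or use "*" to match any child.
    listItem = element.findElement(
            MobileBy.iOSClassChain("**/XCUIElementTypeCell[" + iosIndex + "]"));
}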
I'm writing an app for a client who doesn't have an official API but wants the app to extract video links from his website, so I wrote the logic using jsoup. Everything seems to work fine, except that some of the links don't start with https, so I'm trying to prepend it to those URLs.
Here's my code:
new Thread(() -> {
final StringBuilder jsoupStr = new StringBuilder();
String URL = "https://example.com" +titleString
.replaceAll(":", "")
.replaceAll(",", "")
.replaceAll(" ", "-")
.toLowerCase();
Log.d("CALLING_URL", " " +URL);
try {
Document doc = Jsoup.connect(URL).get();
Element content = doc.getElementById("list-eps");
Elements links = content.getElementsByTag("a");
for (Element link : links) {
jsoupStr.append("\n").append(link.attr("player-data"));
}
} catch (IOException e) {
e.getMessage();
}
String linksStr = jsoupStr.toString().trim();
if (!linksStr.startsWith("https://")) {
linksStr = "https:" + linksStr;
}
String[] links_array = linksStr.split("\n");
arrayList.addAll(Arrays.asList(links_array));
}).start();
The website contains about 10 links per video, but some links start with "//" instead of "https://".
This code adds the https, but only for the first link it finds missing:
if (!linksStr.startsWith("https://")) {
linksStr = "https:" + linksStr;
}
You need to iterate over your final array to apply your function to all links.
String[] links_array = linksStr.split("\n");
for (int i = 0; i < links_array.length; i++) {
    if (!links_array[i].startsWith("https://")) {
        links_array[i] = "https:" + links_array[i];
    }
}
If this code only works for the first missing link:
if (!linksStr.startsWith("https://")) {
linksStr = "https:" + linksStr;
}
I believe you can use a loop to handle every link, as sketched below.
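For example, a sketch of that loop, reusing the variable names from the question and assuming the missing links are protocol-relative (they start with "//"):
String[] links_array = linksStr.split("\n");
for (int i = 0; i < links_array.length; i++) {
    // Protocol-relative links ("//host/path") only need the scheme prepended.
    if (links_array[i].startsWith("//")) {
        links_array[i] = "https:" + links_array[i];
    }
}
arrayList.addAll(Arrays.asList(links_array));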
I am facing an issue when I try to query the queue using the createQuery API to fetch the queue elements.
I am getting the following error at the while statement:
java.lang.IllegalStateException: unread block data
I don't know why I am getting this error. I am able to use the fetchCount() API to get the count of work items in the queue, but neither hasNext() nor next() is working.
Is there any reason why this statement is not getting executed? Is it related to a Java issue? Can anyone help?
The code is
VWSession session = new VWSession();
session.setBootstrapCEURI(Ceuri);
session.logon(cename, fnPassword, connectionPoint);
VWQueue queue = session.getQueue(queueName); // queueName holds the name of the queue
int queryFlag = VWQueue.QUERY_NO_OPTIONS;
int fetchType = VWFetchType.FETCH_TYPE_STEP_ELEMENT;
VWQueueQuery queueQuery = queue.createQuery(null, null, null, queryFlag, null, null, fetchType);
while (queueQuery.hasNext()) {
    VWStepElement queueElement = (VWStepElement) queueQuery.next();
}
In your main (calling) method, do this:
VWSession vwsession = new VWSession();
vwsession.setBootstrapCEURI("http://servername:9080/wsi/FNCEWS40MTOM/");
vwsession.logon("userid", "password", "ConnPTName");
IteratePEWorkItems queueTest = new IteratePEWorkItems();
queueTest.testQueueElements(vwsession);
Later on, create the helper methods mentioned below:
public void testQueueElements(VWSession vwsession) {
System.out.println("Inside getListOfWorkitems: : ");
VWRoster roster = vwsession.getRoster("DefaultRoster");
int fetchType = VWFetchType.FETCH_TYPE_STEP_ELEMENT;
int queryFlags = VWQueue.QUERY_READ_UNWRITABLE;
try {
dispatchWorkItems(roster, fetchType, queryFlags, vwsession);
} catch (Exception exception) {
log.error(exception.getMessage());
}
}
public void dispatchWorkItems(VWRoster roster, int fetchType, int queryFlags, VWSession vwsession) {
String filter = "SLA_Date>=:A";
// get value and replace with 1234567890 as shown in process administrator
Object[] subVars = { 1234567890 };
VWRosterQuery rosterQuery = roster.createQuery(null, null, null,
VWRoster.QUERY_MIN_VALUES_INCLUSIVE | VWRoster.QUERY_MAX_VALUES_INCLUSIVE, filter, subVars,
VWFetchType.FETCH_TYPE_WORKOBJECT);
int i = 0;
// Iterate work items here...
while (rosterQuery.hasNext() == true) {
VWWorkObject workObject = (VWWorkObject) rosterQuery.next();
try {
i++;
System.out.println(" Subject: " + workObject.getFieldValue("F_Subject") + " Count: " + i);
} catch (Exception exception) {
exception.printStackTrace();
log.error(exception);
}
}
}
Try it and share the output.
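Note that this example iterates a roster rather than the queue from the question. If you specifically need the queue, the same hasNext()/next() pattern should work with the VWQueueQuery you already create; here is a sketch reusing the arguments from the question (the queue name "Inbox" is just a placeholder):
VWQueue queue = vwsession.getQueue("Inbox"); // placeholder queue name, replace with yours
int queryFlag = VWQueue.QUERY_NO_OPTIONS;
int fetchType = VWFetchType.FETCH_TYPE_STEP_ELEMENT;
VWQueueQuery queueQuery = queue.createQuery(null, null, null, queryFlag, null, null, fetchType);
int count = 0;
while (queueQuery.hasNext()) {
    // Each element is fetched as a step element, as in the original code.
    VWStepElement stepElement = (VWStepElement) queueQuery.next();
    count++;
}
System.out.println("Step elements found: " + count);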
I am trying to find an element using xpath.
I tried this method:
if(a_chromeWebdriver.findElement(By.xpath(XPATH1)) != null){
homeTable = a_chromeWebdriver.findElement(By.xpath(XPATH1));
}
else{
homeTable = a_chromeWebdriver.findElement(By.xpath(XPATH2));
}
I assumed that if the first XPath isn't found, it will try the second one. But it throws an element-not-found exception.
I also tried checking for size == 0 instead of null, but got the same result.
You can use the following method to check whether your XPath is present or not.
Create a method isElementPresent:
public boolean isElementPresent(By by) {
    try {
        // findElement throws NoSuchElementException when nothing matches
        driver.findElement(by);
        return true;
    } catch (org.openqa.selenium.NoSuchElementException e) {
        return false;
    }
}
Call it with your XPath like this:
isElementPresent(By.xpath(XPATH1));
So your code would become:
if(isElementPresent(By.xpath(XPATH1))){
homeTable = a_chromeWebdriver.findElement(By.xpath(XPATH1));
}
else{
homeTable = a_chromeWebdriver.findElement(By.xpath(XPATH2));
}
You could use findElements instead of findElement and then check the size:
List<WebElement> elements = a_chromeWebdriver.findElements(By.xpath(XPATH1));
if(elements.size() > 0){
homeTable = elements.get(0);
} else{
homeTable = a_chromeWebdriver.findElement(By.xpath(XPATH2));
}
But a better way would be to combine the two XPaths into a single one with the union operator |:
homeTable = a_chromeWebdriver.findElement(By.xpath(XPATH1 + "|" + XPATH2));
You could create a method,
public WebElement getElement(By by) {
try {
return a_chromeWebdriver.findElement(by);
} catch (org.openqa.selenium.NoSuchElementException e) {
return null;
}
}
You could use it as follows,
WebElement element = getElement(By.xpath(XPATH1));
if (element == null)
element = getElement(By.xpath(XPATH2));
First, add an implicit wait to your code; that will handle synchronization issues:
driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
Then use the XPath union operator rather than adding a condition, as other answers point out: homeTable = a_chromeWebdriver.findElement(By.xpath(XPATH1 + " | " + XPATH2));
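Putting the two suggestions together, a rough sketch (assuming a_chromeWebdriver, XPATH1 and XPATH2 are as defined in the question, and TimeUnit is java.util.concurrent.TimeUnit):
// Implicit wait: every findElement call now polls up to 30 seconds before giving up.
a_chromeWebdriver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
// XPath union: returns the first node matched by either expression, in document order.
WebElement homeTable = a_chromeWebdriver.findElement(By.xpath(XPATH1 + " | " + XPATH2));
Note that the union returns matches in document order, so if both XPaths can match on the same page and XPATH1 must take priority, the if/else approach from the other answers is still the safer choice.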
Basically, what I'm attempting to do is put the song and artist into the URL, which will then bring me to the page with the song's lyrics; I'm then going to find the correct way to get those lyrics. I'm new to using Jsoup. So far the issue I've been having is that I can't figure out the correct way to get the lyrics. I've tried getting the first "div" after the "b", but it doesn't seem to work out the way I planned.
public static void search() throws MalformedURLException {
Scanner search = new Scanner(System.in);
String artist;
String song;
artist = search.nextLine();
artist = artist.toLowerCase();
System.out.println("Artist saved");
song = search.nextLine();
song = song.toLowerCase();
System.out.println("Song saved");
artist = artist.replaceAll(" ", "");
System.out.println(artist);
song = song.replaceAll(" ", "");
System.out.println(song);
try {
Document doc;
doc = Jsoup.connect("http://www.azlyrics.com/lyrics/"+artist+"/"+song+".html").get();
System.out.println(doc.title());
for(Element element : doc.select("div")) {
if(element.hasText()) {
System.out.println(element.text());
break;
}
}
} catch (IOException e){
e.printStackTrace();
}
}
I don't know whether this is consistent across all song pages, but on the page you have shown, the lyrics appear inside the div element whose style attribute starts with margin. If this is consistent, you could try something along the lines of...
Elements eles = doc.select("div[style^=margin]");
System.out.println(eles.html());
Or if the lyrics are always in the same div (say the one at index 6; note that Elements indexing is 0-based), you could use that:
Elements eles = doc.select("div");
if (eles.size() > 6) {
    System.out.println(eles.get(6).html());
}
I am using Selenium WebDriver with Java.
I am fetching all the links from a webpage and trying to click each link one by one. I am getting the error below:
error org.openqa.selenium.StaleElementReferenceException: Element not found in the cache - perhaps the page has changed since it was looked up
Command duration or timeout: 30.01 seconds
For documentation on this error, please visit: http://seleniumhq.org/exceptions/stale_element_reference.html
Build info: version: '2.25.0', revision: '17482', time: '2012-07-18 21:09:54'
and here is my code:
public void getLinks()throws Exception{
try {
List<WebElement> links = driver.findElements(By.tagName("a"));
int linkcount = links.size();
System.out.println(links.size());
for (WebElement myElement : links){
String link = myElement.getText();
System.out.println(link);
System.out.println(myElement);
if (link !=""){
myElement.click();
Thread.sleep(2000);
System.out.println("third");
}
//Thread.sleep(5000);
}
}catch (Exception e){
System.out.println("error "+e);
}
}
Actually, it's displaying
[[FirefoxDriver: firefox on XP (ce0da229-f77b-4fb8-b017-df517845fa78)] -> tag name: a]
in the output as a link; I want to eliminate these from the result.
It is not a good idea to have the following scenario:
for (WebElement element : webDriver.findElements(locator.getBy())){
element.click();
}
Why? Because there is no guarantee that element.click() will have no effect on the other found elements; the DOM may change, hence the StaleElementReferenceException.
It is better to use the following scenario:
int numberOfElementsFound = getNumberOfElementsFound(locator);
for (int pos = 0; pos < numberOfElementsFound; pos++) {
getElementWithIndex(locator, pos).click();
}
This is better because you always fetch a fresh WebElement, even if the previous click had some effect on the DOM.
EDIT: Example added
public int getNumberOfElementsFound(By by) {
return webDriver.findElements(by).size();
}
public WebElement getElementWithIndex(By by, int pos) {
return webDriver.findElements(by).get(pos);
}
Hope this is enough.
Credit goes to "loan". I was also getting a "stale exception", so I used loan's answer and it works perfectly. If anyone needs to know how to click each link from a results page, try this (Java):
clickAllHyperLinksByTagName("h3"); where the "h3" tag contains the hyperlink
public static void clickAllHyperLinksByTagName(String tagName){
int numberOfElementsFound = getNumberOfElementsFound(By.tagName(tagName));
System.out.println(numberOfElementsFound);
for (int pos = 0; pos < numberOfElementsFound; pos++) {
getElementWithIndex(By.tagName(tagName), pos).click();
driver.navigate().back();
}
}
public static int getNumberOfElementsFound(By by) {
return driver.findElements(by).size();
}
public static WebElement getElementWithIndex(By by, int pos) {
return driver.findElements(by).get(pos);
}
WebDriver _driver = new InternetExplorerDriver();
_driver.navigate().to("http://www.google.co.in/");
List<WebElement> alllinks = _driver.findElements(By.tagName("a"));
for (int i = 0; i < alllinks.size(); i++)
    System.out.println(alllinks.get(i).getText());
for (int i = 0; i < alllinks.size(); i++) {
    // Re-find the links on each iteration: the elements collected before
    // navigating away become stale after the first click.
    _driver.findElements(By.tagName("a")).get(i).click();
    _driver.navigate().back();
}
If you're OK using WebDriver.get() instead of WebElement.click() to test the links, an alternate approach is to save the href value of each found WebElement in a separate list. This way you avoid the StaleElementReferenceException because you're not trying to reuse subsequent WebElements after navigating away with the first WebElement.click().
Basic example:
List<String> hrefs = new ArrayList<String>();
List<WebElement> anchors = driver.findElements(By.tagName("a"));
for ( WebElement anchor : anchors ) {
hrefs.add(anchor.getAttribute("href"));
}
for ( String href : hrefs ) {
driver.get(href);
}
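One caveat with this approach: anchors can carry null or non-navigable href values (javascript:, mailto:, plain "#" fragments), so it may be worth filtering those before calling driver.get(). A possible variant of the second loop (the startsWith("http") check is just one simple filter):
for (String href : hrefs) {
    // Skip anchors without a usable absolute URL so driver.get() does not choke on them.
    if (href == null || !href.startsWith("http")) {
        continue;
    }
    driver.get(href);
}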
//find the link elements and prepare storage for their texts
List<WebElement> linkElements = driver.findElements(By.tagName("a"));
String[] linkTexts = new String[linkElements.size()];
int i = 0;
String notWorkingUrlTitle = "Page Not Found"; // replace with the title your site shows for broken links
//extract the link texts of each link element
for (WebElement elements : linkElements) {
    linkTexts[i] = elements.getText();
    i++;
}
//test each link
for (String t : linkTexts) {
    driver.findElement(By.linkText(t)).click();
    if (driver.getTitle().equals(notWorkingUrlTitle)) {
        System.out.println("\"" + t + "\""
                + " is not working.");
    } else {
        System.out.println("\"" + t + "\""
                + " is working.");
    }
    driver.navigate().back();
}
driver.quit();
For a complete explanation, read this post.
List <WebElement> links = driver.findElements(By.tagName("a"));
int linkCount=links.size();
System.out.println("Total number of page on the webpage:"+ linkCount);
String[] texts=new String[linkCount];
int t=0;
for (WebElement text:links){
texts[t]=text.getText();//extract text from link and put in Array
//System.out.println(texts[t]);
t++;
}
for (String clicks:texts) {
driver.findElement(By.linkText(clicks)).click();
if (driver.getTitle().equals("notWorkingUrlTitle" )) {
System.out.println("\"" + t + "\""
+ " is not working.");
} else {
System.out.println("\"" + t + "\""
+ " is working.");
}
driver.navigate().back();
}
driver.quit();