getByXpath() not working inside frame - java

I am new to HtmlUnit and am trying to extract data from the website http://capitaline.com/new/index.asp. I have logged into the website successfully. After logging in, the page contains three frames:
1. The top frame, used to search for the company (e.g. ACC Ltd.) whose data we are extracting.
2. The second frame holds a tree that provides links to the various data we want to look at.
3. The third frame shows the resulting data, based on the link clicked in the second frame.
I managed to get the frame I need, as shown below:
HtmlPage companyAtGlanceTopWindow =(HtmlPage)companyAtGlanceLink.click().getEnclosingWindow().getTopWindow().getEnclosedPage();
HtmlPage companyAtGlanceFrame = (HtmlPage)companyAtGlanceTopWindow.getFrameByName("mid2").getEnclosedPage();
System.out.println(companyAtGlanceFrame.toString()); // This prints the frame URL, matching what I see in my browser.
The output of the print statement is:
HtmlPage(http://capitaline.com/user/companyatglance.asp?id=CGO&cocode=6)#1194282974
Now I want my code to navigate down to the table inside this frame. For that I am using getFirstByXPath(), but it returns null, which leads to a NullPointerException when I use the result. Here is the code:
HtmlTable companyGlanceTable1 = companyAtGlanceFrame.getFirstByXPath("/html/body/table[4]/tbody/tr/td/table/tbody/tr/td[1]/table");
The XPath for the current page (after I clicked the link) seems correct, as it was copied from Chrome's element inspector. Please suggest a way to extract the table. I have done this type of extraction before, but in that case the table had an id, so I used that.
Here is the HTML code for the table in the webpage.
<table width="100%" class="tablelines" border="0">

Can you see the inner contents of each iframe in the console (print asXml())? Are they nested iframes?
Try this:
// Dump the content of every window the WebClient knows about
List<WebWindow> windows = webClient.getWebWindows();
for (WebWindow w : windows) {
    HtmlPage hpage = (HtmlPage) w.getEnclosedPage();
    System.out.println(hpage.asXml());
}
Once you can see the contents, grab the window you need by name:
HtmlPage hpage = (HtmlPage) webClient.getWebWindowByName(some_name).getEnclosedPage();
Then grab your table contents using XPath (make sure your XPath is correct). It will work (it worked for me).
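As a quick sanity check, remember that getFirstByXPath() returns null when nothing matches. A minimal sketch (the tablelines class name comes from the HTML snippet in the question):
// getFirstByXPath() returns null when nothing matches, so check before using the result
HtmlTable table = hpage.getFirstByXPath("//table[@class='tablelines']");
if (table == null) {
    // XPath matched nothing: dump the page to see what HtmlUnit actually parsed
    System.out.println(hpage.asXml());
} else {
    System.out.println(table.getRowCount());
}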

Thank you RDD for your feedback.
I solved the problem. The issue was not with the frame but with the XPath provided by Chrome.
The XPath provided by Chrome is:
/html/body/table[4]/tbody/tr/td/table/tbody/tr/td[1]/table
But the XPath that worked for me is:
/html/body/table[3]/tbody/tr/td/table/tbody/tr/td[1]/table
(note table[4] versus table[3])
It seems the XPath provided by Chrome has a glitch when there is a table within the path (or maybe it is a quirk of HtmlUnit itself; Chrome builds its XPath against the rendered DOM, which can contain elements that HtmlUnit's parsed DOM does not). In my experiments Chrome consistently gave .../table[n+1]/... as the XPath, while the XPath that worked in HtmlUnit was .../table[n]/...
So this code works fine for me:
HtmlTable companyGlanceTable1 = companyAtGlanceFrame.getFirstByXPath("/html/body/table[3]/tbody/tr/td/table/tbody/tr/td[1]/table");
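Since the table carries class="tablelines" (see the HTML snippet above), a less brittle alternative is to select it by class instead of by position. This is a sketch and assumes the class is unique on that page:
// Hypothetical alternative: match by class attribute rather than a positional path
HtmlTable companyGlanceTable1 =
        companyAtGlanceFrame.getFirstByXPath("//table[@class='tablelines']");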

Related

Jsoup extract Hrefs from the HTML content

My problem is that I am trying to get the hrefs from this site with Jsoup:
https://www.amazon.de/s?k=kissen&__mk_de_DE=%C3%85M%C3%85%C5%BD%C3%95%C3%91&ref=nb_sb_noss_2
but it does not work.
I tried to select the links by their class like this:
Elements elements = documentMainSite.select(".a-link-normal");
and after that I tried to extract the hrefs with the following piece of code:
for (Element element : elements) {
    String href = element.attributes().get("href");
}
but unfortunately it gives me nothing.
Can someone tell me where my mistake is, please?
I don't just connect to the website; I also save the hrefs in a string by extracting them with
String href = element.attributes().get("href");
and after that I print the href string, but it is empty.
On the other hand, the code works with another CSS selector, so it has nothing to do with the code itself; it's just that the CSS selector (.a-link-normal) is probably wrong.
You won't get anything by simply connecting to the URL via Jsoup.
Document document = Jsoup.connect(yourUrl).get();
String bodyText = document.getElementsByTag("body").get(0).text();
Here is the translation of the body text, which I got from the above code.
Enter the characters below We ask for your understanding and want to
be sure that you are not a bot. For best results, please use a browser
that accepts cookies. Type the characters you see in the image: Enter
characters Try another image Continue shopping Terms & Conditions
Privacy Policy © 1996-2015, Amazon.com, Inc. or its affiliates
You either need to bypass the captcha or emulate a real browser, for example with Selenium.
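A minimal Selenium sketch of that idea (it assumes a local chromedriver is installed, and Amazon may still serve a captcha, so treat it as a starting point rather than a guaranteed fix):
import org.openqa.selenium.*;
import org.openqa.selenium.chrome.ChromeDriver;

// Load the page in a real browser engine, then read the hrefs
WebDriver driver = new ChromeDriver();
try {
    driver.get("https://www.amazon.de/s?k=kissen");
    for (WebElement link : driver.findElements(By.cssSelector("a.a-link-normal"))) {
        System.out.println(link.getAttribute("href"));
    }
} finally {
    driver.quit();
}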

Going to next page on an aspx form with JSoup

I'm trying to go to the next page on an aspx form using JSoup.
I can find the next button itself. I just don't know what to do with it.
The idea is that, for that particular form, if the next button exists, we would simulate a click and go to the next page. But any other solution other than simulating a click would be fine, as long as we get to the next page.
I also need to update the results once we go to the next page.
// Connecting, entering the data and making the first request
...
// Submitting the form
Document searchResults = form.submit().cookies(resp.cookies()).post();
// reading the data. Everything up to this point works as expected
...
// finding the next button (this part also works as expected)
Element nextBtn = searchResults.getElementById("ctl00_MainContent_btnNext");
if (nextBtn != null) {
// click? I don't know what to do here.
searchResults = ??? // updating the search results to include the results from the second page
}
The page itself is www.somePage.com/someForm.aspx, so I can't use the solution stated here:
Android jsoup, how to select item and go to next page
I was unable to find any other suggestions.
Any ideas? What am I missing? Is simulating a click even possible with Jsoup? The documentation says nothing about it, but I'm sure people are able to navigate these types of forms.
Also, I'm working with Android, so I can't use HtmlUnit, as stated here:
importing HtmlUnit to Android project
Thank you.
This is not Jsoup's job! Jsoup is a parser with a nice DOM API that lets you deal with wild HTML as if it were well-formed and not crippled with errors and nonsense.
In your specific case you may be able to scrape the target site directly from your app by finding links and retrieving HTML pages recursively. Something like:
private void scrape(String url) {
    Document doc = Jsoup.connect(url).get();
    // Analyze current document content here...
    // Then continue; note the # prefix, since ctl00_MainContent_btnNext is an id, not a class
    for (Element link : doc.select("#ctl00_MainContent_btnNext")) {
        scrape(link.attr("href"));
    }
}
But in the general case, what you want to do requires far more functionality than Jsoup provides: a user agent capable of interpreting HTML, CSS and JavaScript, with a scriptable API that you can call from your app to simulate a click. For example, Selenium:
WebDriver driver = new FirefoxDriver();
driver.findElement(By.name("next_page")).click();
Selenium can't be bundled in an Android app, so I suggest you put your Selenium code on a server and make it accessible with some REST API.
Pagination on ASPX can be a pain. The best thing you can do is to use your browser to see the data parameters it sends to the server, then try to emulate this in code.
I've written a detailed tutorial on how to handle it here but it uses the univocity HTML parser (which is commercial closed source) instead of JSoup.
In short, you should try to get a <form> element with id="aspnetForm", and read the form elements to generate a POST request for the next page. The form data usually comes out with stuff such as this:
__EVENTTARGET =
__EVENTARGUMENT =
__VIEWSTATE = /wEPDwUKMTU0OTkzNjExNg8WBB4JU29ydE9yZ ... a very long string
__VIEWSTATEGENERATOR = 32423F7A
... and other gibberish
Then you need to look at each one of these and compare it with what your browser sends. Sometimes you need to take values from other elements of the page to generate a similar POST request, and you may have to REMOVE some of the parameters you get. Again, make your code behave exactly like your browser.
After some (frustrating) trial and error you will get it working. The server should return a pipe-delimited result, which you can break down and parse. Something like:
25081|updatePanel|ctl00_ContentPlaceHolder1_pnlgrdSearchResult|
<div>
<div style="font-weight: bold;">
... more stuff
|__EVENTARGUMENT||343908|hiddenField|__VIEWSTATE|/wEPDwU... another very long string ...1Pni|8|hiddenField|__VIEWSTATEGENERATOR|32423F7A| other gibberish
From THAT sort of response you need to generate new POST requests for the subsequent pages, for example:
String viewState = substringBetween(ajaxResponse, "__VIEWSTATE|", "|");
Then:
request.setDataParameter("__VIEWSTATE", viewState);
There will be more data parameters to get from each response, but a lot depends on the site you are targeting.
Hope this helps a little.
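For reference, here is a minimal sketch of the hidden-field round-trip in plain Jsoup. The field names follow the ASP.NET conventions described above; the __EVENTTARGET value is an assumption you must replace with whatever your browser's network tab actually sends:
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

String url = "...";  // the form's URL
Document page = Jsoup.connect(url).get();
Element form = page.getElementById("aspnetForm");

// Post the server's own state tokens back, plus the event that requests the next page
Document nextPage = Jsoup.connect(url)
        .data("__VIEWSTATE", form.select("input[name=__VIEWSTATE]").val())
        .data("__VIEWSTATEGENERATOR", form.select("input[name=__VIEWSTATEGENERATOR]").val())
        .data("__EVENTTARGET", "ctl00$MainContent$btnNext")  // hypothetical: copy from your browser
        .data("__EVENTARGUMENT", "")
        .post();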

How to select items from drop down menu using Selenium?

I have been trying to automate a search using Selenium. I simply want to search for terms (say, Pink Floyd), but the file type should be pdf. Here is what I have done so far:
//Query term
WebElement element = driver.findElement(By.name("as_q"));
String finalQuery = "pink floyd";
element.sendKeys(finalQuery);
//File type selection
WebElement elem = driver.findElement(By.id("as_filetype_button"));
elem.sendKeys("Adobe Acrobat pdf (.pdf)");
driver.findElement(By.xpath("/html/body/div[1]/div[4]/form/div[5]/div[9]/div[2]/input[#type='submit']")).click();
This puts the term in the appropriate place and the drop down for file types are expanded but pdf option is not selected. Any help?
I am using Selenium 2.53.0.
EDIT
The following code segment worked perfectly, as per the accepted answer to this question. However, all of a sudden the code segment stopped working. I am a bit surprised by this. Previously I was able to select PDF automatically with the following code segment, but now nothing gets selected.
WebElement element = driver.findElement(By.name("as_q"));
String finalQuery = "pink floyd";
element.sendKeys(finalQuery);
driver.findElement(By.id("as_filetype_button")).click();
driver.findElement(By.xpath("//li[#class=class-name][#value='pdf']")).click();
This is how I do it: find the li that matches class='goog-menuitem' and value='pdf' (I inspected the element). You could go directly with value='pdf', but to make sure we are looking at the file-type drop-down, we also match the class.
driver.findElement(By.id("as_filetype_button")).click();
driver.findElement(By.xpath("//li[#class='goog-menuitem'][#value='pdf']")).click();
You can still assign it to a WebElement; I just prefer the shorthand. Hope this helps.

Finding and switching to the right frame with Selenium Webdriver

I am trying to write a WebDriver test for the following site for learning purposes: http://www.mate1.com
On clicking the login link, I get a form to fill in my email id and password. As far as I understand, the form is shown in an iframe.
To enter the credentials, I counted the iframes on that particular page (there are 7) and tried switching into each one, searching for the email field by XPath and by ID. However, I wasn't able to find it. How can this be done?
This is my code:
driver.get("http:www.mate1.com");
driver.findElement(By.xpath("//*[#id='header-container']/div/a")).click();
List <WebElement> frames = driver.findElements(By.tagName("iframe"));
System.out.println("Totalframes -> "+ frames.size());
driver.switchTo().frame(0);
driver.findElement(By.xpath("//[#id='email']")).sendKeys("xxxxxxxxx#rocketmail.com");
This is likely a good situation to use switchTo().frame(WebElement):
driver.switchTo().frame(driver.findElement(By.cssSelector(".iframe-container>iframe")));
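If you need to interact with elements outside the frame afterwards, switch back to the top-level document first:
driver.switchTo().defaultContent();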
The login and password fields are not in an iframe; you can use the following code directly:
driver.findElement(By.id("email")).sendKeys("xxxxxxxxx@rocketmail.com");
Execute the above line without switching to any iframe; this will work.

The web browser shows the correct values, but when I use Jsoup the HTML doesn't have the values

I'm trying to get some values from a site, but these values only appear when I use a browser such as Mozilla Firefox. When I use Jsoup I can get the HTML from the site, but without the values, only the tags.
This is the site I'm trying to parse:
http://www.submarinoviagens.com.br/Passagens/selecionarvoo?Origem=nat&Destino=mia&Data=05/11/2012&Hora=&Origem=mia&Destino=nat&Data=09/11/2012&Hora=&NumADT=1&NumCHD=0&NumINF=0&SomenteDireto=0&Cia=&SelCabin=&utm_source=&utm_medium=&utm_campaign=&CPId=
I'm trying to get the values that appear inside certain span tags.
If I access the previous URL from a web browser I can see the following values: '', 'R$ 2634,22' and 'R$ 2634,22', but when I use the following code the values disappear.
URL url = new URL("http://www.submarinoviagens.com.br/Passagens/selecionarvoo?Origem=nat&Destino=mia&Data=05/11/2012&Hora=&Origem=mia&Destino=nat"+
"&Data=09/11/2012&Hora=&NumADT=1&NumCHD=0&NumINF=0&SomenteDireto=0&Cia=&SelCabin=&utm_source=&utm_medium=&utm_campaign=&CPId=");
Document doc = Jsoup.parse(url, 100000);
String title = doc.title();
System.out.println(doc.toString());
If I view the page source in Mozilla Firefox, the values disappear too, but if I use the Firebug plugin I can see them.
Thanks for the help!
The website uses JavaScript to populate all of the values you are trying to parse. You will have to use a library that can execute the JavaScript within the page. I'm not sure whether one exists, though.
HtmlUnit is a headless browser that executes JavaScript and should be able to present this page correctly.
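A minimal HtmlUnit sketch (package names from the 2.x releases; whether the page's scripts actually run under HtmlUnit's JavaScript engine is not guaranteed for every site):
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

// Fetch the page with JavaScript enabled and give async requests time to finish
try (WebClient webClient = new WebClient()) {
    webClient.getOptions().setJavaScriptEnabled(true);
    webClient.getOptions().setThrowExceptionOnScriptError(false);
    HtmlPage page = webClient.getPage(url);  // the full search URL from the question
    webClient.waitForBackgroundJavaScript(10000);
    System.out.println(page.asXml());
}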
