Java HtmlUnit: save an UnexpectedPage

I am trying to access a servlet page using HtmlUnit; the page contains one image.
I need to save the image, or save the servlet page as an HTML file.
Currently I am using:
UnexpectedPage currentPage = (UnexpectedPage) webClient.getPage(new URL("https://www.xxxx.com/servlet/xxxSer"));
WebResponse response = currentPage.getWebResponse();
response.getContentType();
After that I do not know what to do. Does anyone have an idea how to do this?
Thanks in advance.

You need to get the text content of the WebResponse (you also don't need the URL object):
Page page = webClient.getPage("https://www.xxxx.com/servlet/xxxSer");
String content = page.getWebResponse().getContentAsString();
Regarding the image, you should be clearer about how you're getting it. If it is an image referenced by an IMG tag, use an HtmlPage and an HtmlImage. If you're requesting the image directly, you should probably use page.getWebResponse().getContentAsStream()
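A minimal sketch of both cases, assuming the servlet URL from the question and placeholder output file names; HtmlImage.saveAs(File), WebResponse.getContentAsString() and WebResponse.getContentAsStream() are the HtmlUnit calls involved:
try (WebClient webClient = new WebClient()) {
    // Case 1: the servlet returns an HTML page that references the image
    HtmlPage htmlPage = webClient.getPage("https://www.xxxx.com/servlet/xxxSer");
    // save the page source as an HTML file
    Files.write(Paths.get("page.html"),
        htmlPage.getWebResponse().getContentAsString().getBytes(StandardCharsets.UTF_8));
    HtmlImage image = htmlPage.querySelector("img"); // first IMG on the page
    image.saveAs(new File("image.png"));

    // Case 2: the servlet serves the image bytes directly (the UnexpectedPage case)
    Page rawPage = webClient.getPage("https://www.xxxx.com/servlet/xxxSer");
    try (InputStream in = rawPage.getWebResponse().getContentAsStream()) {
        Files.copy(in, Paths.get("image.png"), StandardCopyOption.REPLACE_EXISTING);
    }
}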

Try this code
HtmlPage htmlpage = webClient.getPage(new URL("https://www.xxxx.com/servlet/xxxSer"));
String htmlcode = htmlpage.getWebResponse().getContentAsString();
Best

The problem is that HtmlUnit is not able to cast incomplete HTML pages (pages with unclosed tags, for example). I was able to work around this error using the HTMLParser class included in HtmlUnit's packages (I'm using 2.36.0). HTMLParser completes the markup and handles this kind of casting error. HtmlPage still works if you need to execute JS.
//Web client creation.
Page page = webClient.getPage(url);
HtmlPage tmpPage = HTMLParser.parseHtml(page.getWebResponse(), webClient.getCurrentWindow());
// use tmpPage here
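As a side note on the original question: webClient.getPage() returns an UnexpectedPage when the response's Content-Type is not one HtmlUnit recognizes as HTML, and parsing the raw WebResponse through HTMLParser as above should force it to be treated as an HtmlPage regardless.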

Related

Jsoup - read from an html url where code is hidden

I'm trying to use the Jsoup library to get 'li' elements from a website. The problem is this:
If I open the source of the website with CTRL+U (which is the same source Jsoup reads), the 'ul' tag is hidden.
If I open the code with the "Inspect" function of Google Chrome, the 'li' elements are shown.
Posting the code is not necessary; I only want to know how I can access these 'li' elements with Jsoup or another free Java library, given that in the source code (and through Jsoup) this information is hidden.
The site is https://farmaci.agenziafarmaco.gov.it/bancadatifarmaci/cerca-farmaco; try searching for something (e.g. Tachi).
The problem with Jsoup is that it won't handle scripts; it just gets the HTML as it is, before any AJAX code is executed.
You can use something like HtmlUnit, which is basically a GUI-less browser, so it can handle scripts.
You can try something like this after getting the HtmlUnit library:
String url = "https://farmaci.agenziafarmaco.gov.it/bancadatifarmaci/cerca-farmaco?search=Tachi";
try(final WebClient webClient = new WebClient()) {
final HtmlPage page = webClient.getPage(url);
final HtmlUnorderedList list = page.getHtmlElementById("ul_farm_results");
System.out.println(list.asText());
}
I couldn't check the code as the website's certificate is improperly configured and I didn't want to import its certificate. You may want to take a look at this to resolve the certificate errors.
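One hedged addition: since the result list is populated by AJAX, the page may not contain it yet at the moment getPage() returns. A sketch using HtmlUnit's waitForBackgroundJavaScript (the 10-second timeout is an arbitrary choice):
try (final WebClient webClient = new WebClient()) {
    final HtmlPage page = webClient.getPage(url);
    // give pending AJAX calls up to 10 seconds to finish
    webClient.waitForBackgroundJavaScript(10_000);
    final HtmlUnorderedList list = page.getHtmlElementById("ul_farm_results");
    System.out.println(list.asText());
}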
Jsoup does not execute scripts at all; it just gets the HTML returned by the server. What you are looking for is called rendered HTML, that is, the HTML produced by the browser after executing all the scripts.
The best solution in Java is to use Selenium with your preferred browser. Selenium was developed for UI testing; it is, however, very popular as a scraping tool.
A good getting-started page can be found here.
Some code example with Firefox:
WebDriver driver = new FirefoxDriver();
driver.get("https://farmaci.agenziafarmaco.gov.it/bancadatifarmaci/cerca-farmaco");
// Find the element
String id = "ul_farm_results";
WebElement element = driver.findElement(By.id(id));
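Because those values are loaded asynchronously, a hedged refinement is to wait explicitly for the element instead of querying it right away; a sketch assuming Selenium 4's Duration-based WebDriverWait:
// wait up to 10 seconds for the AJAX-populated list to appear
WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
WebElement results = wait.until(ExpectedConditions.presenceOfElementLocated(By.id(id)));
System.out.println(results.getText());
driver.quit(); // release the browser when done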

HtmlUnit does not find the element

I'm trying to get the textbox with id u_0_1e from the wall page, but HtmlUnit does not find anything. The last line prints null.
Here's the code:
java.util.logging.Logger.getLogger("com.gargoylesoftware").setLevel(java.util.logging.Level.OFF);
WebClient client = new WebClient(BrowserVersion.CHROME);
JavaScriptEngine engine = new JavaScriptEngine(client);
client.setJavaScriptEngine(engine);
HtmlPage home = client.getPage("https://www.facebook.com/login.php");
HtmlSubmitInput login = (HtmlSubmitInput) home.getElementById("u_0_1");
HtmlTextInput name = (HtmlTextInput) home.getElementById("email");
HtmlPasswordInput pass = (HtmlPasswordInput) home.getElementById("pass");
name.setValueAttribute("myname");
pass.setValueAttribute("mypass");
HtmlPage page = login.click();
HtmlPage wall = client.getPage("https://www.facebook.com/");
System.out.println(wall.getElementById("u_0_1e"));
I have some comments about your issue.
First of all, you have disabled HtmlUnit's logging, so if there is any JavaScript issue you are not going to see it. If a JavaScript error occurs, the JavaScript code won't be fully executed, and if the element you're trying to fetch is loaded dynamically from the server (probably via AJAX), those errors might result in the element never being fetched.
If you are web scraping, which is clearly the case, then you don't have any control over the JS, so you can only accept it as not working, or disable JS and process the AJAX requests manually.
Of course, you will see the page working perfectly in a real browser, but take into consideration that the JavaScript engine HtmlUnit uses is different from those of real browsers.
Secondly, the two lines containing the word engine are absolutely unneeded.
Thirdly, as I mentioned in a previous question of yours, this would be more suitably handled by means of the Facebook API.
Finally, you might find this other answer useful:
JavaScript not being properly executed in HtmlUnit
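A hedged sketch of what the cleaned-up code might look like, with the engine lines dropped, script errors logged instead of thrown, and AJAX calls resynchronized; these are all standard HtmlUnit options, but whether Facebook's scripts then run correctly under HtmlUnit is not guaranteed:
WebClient client = new WebClient(BrowserVersion.CHROME);
client.getOptions().setThrowExceptionOnScriptError(false); // log JS errors instead of aborting
client.setAjaxController(new NicelyResynchronizingAjaxController()); // run AJAX calls synchronously
HtmlPage home = client.getPage("https://www.facebook.com/login.php");
HtmlSubmitInput login = (HtmlSubmitInput) home.getElementById("u_0_1");
HtmlTextInput name = (HtmlTextInput) home.getElementById("email");
HtmlPasswordInput pass = (HtmlPasswordInput) home.getElementById("pass");
name.setValueAttribute("myname");
pass.setValueAttribute("mypass");
login.click();
HtmlPage wall = client.getPage("https://www.facebook.com/");
client.waitForBackgroundJavaScript(10_000); // give dynamically loaded content time to arrive
System.out.println(wall.getElementById("u_0_1e"));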

Prevent HtmlUnit 2.13 from executing JavaScript

Here is my code to get the page:
WebClient webClient = new WebClient();
HtmlPage page = webClient.getPage(url);
The problem is that the webClient always executes JavaScript automatically and throws a list of errors. I just want to get the raw source. How can I prevent it from executing scripts? I've found there is a way in version 2.9:
webClient.setJavaScriptEnabled(false);
But the setJavaScriptEnabled() method is deprecated. Does anyone know how to solve this problem? Please help me. Thank you so much.
Although setJavaScriptEnabled(boolean) was deprecated, the setting was moved to the WebClientOptions member of the WebClient. Here is the doc.
In order to disable JavaScript you should do this:
webClient.getOptions().setJavaScriptEnabled(false);
Additionally, if you want to get the raw HTML code from the webpage, you should take a look at this question:
How to get the pure raw HTML of a page in HTMLUnit while ignoring JavaScript and CSS?
Take into account that even the asXml() method changes the formatting as well as the content of the web page you fetch (even if JavaScript is disabled).
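Putting it together, a minimal sketch that disables JavaScript (and optionally CSS) and reads the unmodified response body rather than HtmlUnit's re-serialized DOM:
WebClient webClient = new WebClient();
webClient.getOptions().setJavaScriptEnabled(false); // do not execute any scripts
webClient.getOptions().setCssEnabled(false);        // optional: skip CSS processing too
HtmlPage page = webClient.getPage(url);
// getContentAsString() returns the body exactly as the server sent it,
// unlike asXml(), which re-serializes the parsed DOM
String rawHtml = page.getWebResponse().getContentAsString();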

The web browser shows the correct values but when I use Jsoup the HTML doesn't have the values

I'm trying to get some values from a site, but these values only appear when I use a browser such as Mozilla Firefox. When I use Jsoup I can get the HTML from the site, but without the values, only the tags.
This is the site I'm trying to parse:
http://www.submarinoviagens.com.br/Passagens/selecionarvoo?Origem=nat&Destino=mia&Data=05/11/2012&Hora=&Origem=mia&Destino=nat&Data=09/11/2012&Hora=&NumADT=1&NumCHD=0&NumINF=0&SomenteDireto=0&Cia=&SelCabin=&utm_source=&utm_medium=&utm_campaign=&CPId=
I'm trying to get the values that appear inside some span tags.
If I access the previous URL from a web browser I can see the following values: '', 'R$ 2634,22' and 'R$ 2634,22', but when I use the following code the values disappear.
URL url = new URL("http://www.submarinoviagens.com.br/Passagens/selecionarvoo?Origem=nat&Destino=mia&Data=05/11/2012&Hora=&Origem=mia&Destino=nat"+
"&Data=09/11/2012&Hora=&NumADT=1&NumCHD=0&NumINF=0&SomenteDireto=0&Cia=&SelCabin=&utm_source=&utm_medium=&utm_campaign=&CPId=");
Document doc = Jsoup.parse(url, 100000);
String title = doc.title();
System.out.println(doc.toString());
If I try to view the source code via Mozilla Firefox, the values disappear too.
But if I use the Firebug plugin I can see them.
Thanks for the help!
The website uses JavaScript to populate all of the values you are trying to parse. You will have to use a library that can execute the JavaScript within the page. Not sure if there is one though.
anyone else?
HtmlUnit is a headless browser that executes JavaScript and should be able to render this page correctly.
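A minimal hedged sketch of that approach, reusing the URL from the question; whether the booking site's scripts run fully under HtmlUnit's JavaScript engine is not guaranteed:
try (WebClient webClient = new WebClient()) {
    webClient.getOptions().setThrowExceptionOnScriptError(false); // tolerate script errors
    HtmlPage page = webClient.getPage("http://www.submarinoviagens.com.br/Passagens/selecionarvoo?Origem=nat&Destino=mia&Data=05/11/2012&Hora=&Origem=mia&Destino=nat"+
        "&Data=09/11/2012&Hora=&NumADT=1&NumCHD=0&NumINF=0&SomenteDireto=0&Cia=&SelCabin=&utm_source=&utm_medium=&utm_campaign=&CPId=");
    webClient.waitForBackgroundJavaScript(10_000); // let the AJAX price lookups finish
    System.out.println(page.asXml()); // the span values should now be rendered
}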

How to fill HTTP forms through java?

I want to fill in a text field of an HTTP form through Java and then click the submit button through Java, so as to get the page source of the document returned after submitting the form.
I could do this by sending an HTTP request directly, but I don't want to do it that way.
I usually do it using HtmlUnit. They have an example on their page :
@Test
public void submittingForm() throws Exception {
final WebClient webClient = new WebClient();
// Get the first page
final HtmlPage page1 = webClient.getPage("http://some_url");
// Get the form that we are dealing with and within that form,
// find the submit button and the field that we want to change.
final HtmlForm form = page1.getFormByName("myform");
final HtmlSubmitInput button = form.getInputByName("submitbutton");
final HtmlTextInput textField = form.getInputByName("userid");
// Change the value of the text field
textField.setValueAttribute("root");
// Now submit the form by clicking the button and get back the second page.
final HtmlPage page2 = button.click();
}
And you can read more here.
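To get the page source of the returned document, which is what the question asks for, a hedged one-line addition to that example:
// the raw HTML returned after the form submission
String source = page2.getWebResponse().getContentAsString();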
If you don't want to talk HTTP directly (why?), then take a look at Watij.
It allows you to drive a browser (IE) as a COM control within your Java process, navigate through page elements by their document ids etc., fill in forms and press buttons. Because it's running a real browser, JavaScript will run as normal (just as if you were doing this manually).
You would probably need to write a Java applet, as the only way other than sending a direct request would be to have it interface with the browser.
Of course, for this to work, you would have to embed the applet in the page. If you don't control the page, this can't be done. If you do control the page, you might as well use JavaScript instead of trying to get a Java applet to do it, which would be far more cumbersome and difficult.
Just to clarify: what is the problem you are having creating an HTTP request, and why do you want to use a different method?
