How to call JavaScript __doPostBack from Jsoup - Java

I have a website that calls __doPostBack for a specific link. I have tried loading the page that the link leads to, setting the POST data manually, including __EVENTTARGET and __EVENTARGUMENT, but I keep receiving an error page. If anyone has used the Jsoup library and has found a workaround for this problem, please let me know. Here is the code for calling the website with POST data:
Connection.Response res = Jsoup.connect("https://parentaccess.ocps.net/Progress/ProgressSummary.aspx?T=2")
        .data(target.substring(0, target.length() - 5) + "txtClass_DBID", dBID)
        .data("__LASTFOCUS", "")
        .data("__EVENTVALIDATION", eventValidation)
        .data("__VIEWSTATE", viewState)
        .cookie("ASP.NET_SessionId", cookie)
        .data("__EVENTTARGET", target)
        .data("DropDownListGradingPeriod", "3")
        .data("__EVENTARGUMENT", "")
        .header("Content-Type", "text/html; charset=utf-8")
        .header("Connection", "keep-alive")
        .header("Cache-Control", "private")
        .method(Method.POST)
        .execute();
Document doc = res.parse();
Document doc2 = Jsoup.connect("https://parentaccess.ocps.net/Progress/ProgressDetails.aspx")
        .data(target.substring(0, target.length() - 5) + "txtClass_DBID", dBID)
        .data("__LASTFOCUS", "")
        .data("__EVENTVALIDATION", eventValidation)
        .data("__VIEWSTATE", viewState)
        .cookie("ASP.NET_SessionId", cookie)
        .data("__EVENTTARGET", target)
        .data("DropDownListGradingPeriod", "3")
        .data("__EVENTARGUMENT", "")
        .header("Content-Type", "text/html; charset=utf-8")
        .header("Connection", "keep-alive")
        .header("Cache-Control", "private")
        .get();
Please note that I have tried both the Connection.Response approach and Jsoup.connect with POST data, but I have been receiving errors for doc2 (res will load the page, but the information must not be getting passed, because no table is generated from the given POST data). Thanks!

Use HtmlUnit for handling JavaScript.

After spending a few days on this, the only solution I found was HtmlUnit.
Here is the working code:
// "PAGE_URL", "EL_ID", and "FILE_NAME" are placeholders.
HtmlPage page = webClient.getPage("PAGE_URL");
InputStream inputStream = page.getElementById("EL_ID")
        .click()
        .getWebResponse()
        .getContentAsStream();
OutputStream outputStream = new FileOutputStream(new File("FILE_NAME"));
IOUtils.copy(inputStream, outputStream);
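If the target is an ASP.NET WebForms page like the one in the question above, you can also let HtmlUnit fire __doPostBack itself, so that __VIEWSTATE and __EVENTVALIDATION come from the live form. A minimal sketch, assuming HtmlUnit 2.x (the com.gargoylesoftware packages) and that target holds the control name from the question:
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

try (WebClient webClient = new WebClient()) {
    webClient.getOptions().setThrowExceptionOnScriptError(false);
    HtmlPage page = webClient.getPage("https://parentaccess.ocps.net/Progress/ProgressSummary.aspx?T=2");
    // Fire the same postback the link would trigger; HtmlUnit submits the
    // form with the hidden fields already populated.
    HtmlPage result = (HtmlPage) page.executeJavaScript("__doPostBack('" + target + "', '')").getNewPage();
    System.out.println(result.asXml());
}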

Related

JSOUP java.io.IOException: Input is binary and unsupported

I have a project that requires me to use Jsoup for web scraping. I was able to get the data from the main page of the website I want to scrape, but as I scrape deeper into the site by looping over its hyperlinks and accessing them, I get the following error:
java.io.IOException: Input is binary and unsupported
at org.jsoup.UncheckedIOException.<init>(UncheckedIOException.java:11)
at org.jsoup.parser.CharacterReader.<init>(CharacterReader.java:38)
at org.jsoup.parser.CharacterReader.<init>(CharacterReader.java:43)
at org.jsoup.parser.TreeBuilder.initialiseParse(TreeBuilder.java:38)
at org.jsoup.parser.HtmlTreeBuilder.initialiseParse(HtmlTreeBuilder.java:65)
at org.jsoup.parser.TreeBuilder.parse(TreeBuilder.java:46)
at org.jsoup.parser.Parser.parseInput(Parser.java:35)
at org.jsoup.helper.DataUtil.parseInputStream(DataUtil.java:169)
at org.jsoup.helper.HttpConnection$Response.parse(HttpConnection.java:835)
at org.jsoup.helper.HttpConnection.get(HttpConnection.java:285)
When I inspect the website, there are parts that contain commented-out binary data, and I think that is what caused the problem. I've tried using this code:
Document docs2 = Jsoup.connect("https://www.kiatravels.co.id/group_tour/index?TOUR_ID=1467&ID=15803")
        .ignoreContentType(true)
        .get();
but it still didn't work.
Here's hoping some brainy code master can help!
It looks like you navigated to the "Download Itinerary" link, which opens a PDF. Before parsing the link with Jsoup, you'll want to check the content type of the URL response.
Connection.Response res = Jsoup.connect(url).execute();
String contentType = res.contentType();
You'll probably want to ignore MIME types that are not text/html.
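Putting that together, a minimal sketch (the text/html guard and the ignoreContentType call are assumptions about what you want to keep):
Connection.Response res = Jsoup.connect(url)
        .ignoreContentType(true)   // don't throw on non-HTML responses
        .execute();
String contentType = res.contentType();
if (contentType != null && contentType.startsWith("text/html")) {
    Document doc = res.parse();    // safe to parse as HTML
} else {
    // skip binary responses such as the PDF itinerary
}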

jsoup select() method not found

I want to make a web parser in Java. I'm using jsoup, but I got an error like this:
How do I fix it? Is it because of my classpath?
Also, these are my imports:
Use the code below.
Document doc = Jsoup.connect(url).get();
Element first = doc.select("div#content > p").first();
You have to get the Document from Connection.
Connection con = Jsoup.connect(url);
Document doc = con.get();
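If select() itself is unresolved, the jsoup jar is probably missing from the classpath or the wrong types are imported. A self-contained sketch (the URL and selector are placeholders):
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class SelectExample {
    public static void main(String[] args) throws Exception {
        Document doc = Jsoup.connect("https://example.com/").get();
        Elements paragraphs = doc.select("div#content > p"); // all matches
        Element first = paragraphs.first();                  // null if no match
        if (first != null) {
            System.out.println(first.text());
        }
    }
}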

How to programmatically read the source code of a webpage after JavaScript has altered the DOM?

I want to view the source code of a web page, but JavaScript changes it.
E.g. on https://delicious.com/search/ali, pressing CTRL+U shows the source code before JavaScript changed it, not the actual current one; if you look at the code with Inspect Element, it shows the complete source. I want to get that complete source code.
Kindly let me know if there is any technique to get the source code that Inspect Element shows. I am building a piece of software, and this is one of its requirements. It would be good if the technique or API you refer me to is in Java.
I am going to build software that gets URLs from this site, but because of the changes made by JavaScript I can't get the actual source code.
I'm not sure, but this might be what you are asking for. The code takes a URL object, gets the server's response, and returns the body of the response, which should be an HTML document in your case.
String getSource(URL url) throws IOException {
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    // getInputStream() already yields only the body, not the headers
    try (BufferedReader reader = new BufferedReader(
            new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
        StringBuilder response = new StringBuilder(500);
        String line;
        while ((line = reader.readLine()) != null) {
            response.append(line).append('\n');
        }
        return response.toString();
    }
}
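Note that this returns only the HTML as the server sent it; nodes added later by JavaScript will not appear. To get the DOM after scripts have run, a headless browser such as HtmlUnit can serialize the live DOM. A minimal sketch, assuming HtmlUnit 2.x:
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

try (WebClient webClient = new WebClient()) {
    webClient.getOptions().setThrowExceptionOnScriptError(false);
    HtmlPage page = webClient.getPage("https://delicious.com/search/ali");
    webClient.waitForBackgroundJavaScript(5000); // let AJAX requests finish
    System.out.println(page.asXml());            // the DOM after JavaScript ran
}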

Web Crawler specifically for downloading images and files

I am doing an assignment for one of my classes.
I am supposed to write a web crawler that downloads files and images from a website, given a specified crawl depth.
I am allowed to use a third-party parsing API, so I am using Jsoup. I've also tried htmlparser. Both are nice libraries, but they are not perfect.
I used the default Java URLConnection to check the content type before processing each URL, but it becomes really slow as the number of links grows.
Question: Does anyone know of a specialized parser API for images and links?
I could start writing my own using Jsoup, but I am being lazy. Besides, why reinvent the wheel if there might be a working solution out there? Any help would be appreciated.
I need an efficient way to check the contentType while looping through the links, to tell whether a link points to a file, but Jsoup does not have what I need. Here's what I have (a cheaper HEAD-based check is sketched after it):
Connection mimeConn = null;
Connection.Response mimeResponse = null;
for (Element link : links) {
    String linkurl = link.absUrl("href");
    if (!linkurl.contains("#")) {
        if (DownloadRepository.curlExists(link.absUrl("href"))) {
            continue;
        }
        mimeConn = Jsoup.connect(linkurl);
        mimeConn.ignoreContentType(true);
        mimeConn.ignoreHttpErrors(true);
        mimeResponse = mimeConn.execute();
        WebUrl webUrl = new WebUrl(linkurl, currentDepth + 1);
        String contentType = mimeResponse.contentType();
        if (contentType.contains("html")) {
            page.addToCrawledPages(new WebPage(webUrl));
        } else if (contentType.contains("image")) {
            page.addToImages(new WebImage(webUrl));
        } else {
            page.addToFiles(new WebFile(webUrl));
        }
        DownloadRepository.addCrawledURL(linkurl);
    }
}
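One way to make that check cheaper is a HEAD request, which transfers only the headers instead of the whole body; a sketch (assumes the target servers honor HEAD):
Connection.Response head = Jsoup.connect(linkurl)
        .ignoreContentType(true)
        .ignoreHttpErrors(true)
        .method(Connection.Method.HEAD) // headers only, no body download
        .execute();
String contentType = head.contentType();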
UPDATE
Based on Yoshi's answer, I was able to get my code to work right. Here's the link:
https://github.com/unekwu/cs_nemesis/blob/master/crawler/crawler/src/cu/cs/cpsc215/project1/parser/Parser.java
Use jSoup. I think this API is good enough for your purpose, and you can also find a good cookbook on the Jsoup site.
Several steps:
Jsoup: how to get an image's absolute url?
how to download image from any web page in java
You can write your own recursive method that walks through the links on a page, keeping those that contain the necessary domain name or are relative links. Use this approach to grab all the links and find all the images on them. Writing it yourself is not bad practice (a sketch follows after the update below).
You don't need to use the URLConnection class; jSoup has a wrapper for it.
E.g., you can use only one line of code to get the DOM object:
Document doc = Jsoup.connect("http://en.wikipedia.org/").get();
Instead of this code:
URL oracle = new URL("http://www.oracle.com/");
URLConnection yc = oracle.openConnection();
BufferedReader in = new BufferedReader(new InputStreamReader(yc.getInputStream()));
String inputLine;
while ((inputLine = in.readLine()) != null)
    System.out.println(inputLine);
in.close();
Update 1
Try adding the following lines to your code:
Connection.Response res = Jsoup.connect("http://en.wikipedia.org/").execute();
String pageContentType = res.contentType();
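And a minimal sketch of the recursive walk suggested above (the depth limit, visited set, and selectors are assumptions, not a finished crawler):
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import java.util.HashSet;
import java.util.Set;

public class CrawlSketch {
    private final Set<String> visited = new HashSet<>();

    void crawl(String url, int depth, int maxDepth) throws Exception {
        if (depth > maxDepth || !visited.add(url)) {
            return; // stop at the depth limit or on already-seen URLs
        }
        Document doc = Jsoup.connect(url).get();
        for (Element img : doc.select("img[src]")) {
            System.out.println("image: " + img.absUrl("src"));
        }
        for (Element link : doc.select("a[href]")) {
            crawl(link.absUrl("href"), depth + 1, maxDepth);
        }
    }
}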

Java getting source code from a website

I have a problem once again where I can't find the source code because it's hidden or something. When my Java program indexes the page, it finds everything except the info I need. I assume it's hidden for a reason, but is there any way around this?
It's just a bunch of tr/td tags that show up in Firebug but don't show up when viewing the page source, or when I do the below:
URL url = new URL("my url");
URLConnection yc = url.openConnection();
BufferedReader in = new BufferedReader(new InputStreamReader(yc.getInputStream()));
String inputLine;
while ((inputLine = in.readLine()) != null) {
    System.out.println(inputLine);
}
in.close();
I really have no idea how to get the info that I need...
The reason for this behavior is that those tags are probably dynamically injected into the DOM using JavaScript and are not part of the initial HTML, which is all you can fetch with a URLConnection. They might even be created using AJAX. You will need a JavaScript interpreter on your server if you want to fetch those.
If they don't show up in the page source, they're likely being added dynamically by JavaScript code. There's no way to get them from your server-side script short of including a JavaScript interpreter, which is rather high-overhead.
The information in the tags is presumably coming from somewhere, though. Why not track that down and grab it straight from there?
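For example, if the browser's network tab shows the rows coming from an XHR endpoint, you can often fetch that endpoint directly with Jsoup; a sketch (the endpoint URL is hypothetical):
// Hypothetical endpoint discovered in the browser's network tab.
String json = Jsoup.connect("https://example.com/data/rows")
        .ignoreContentType(true) // the response is JSON, not HTML
        .execute()
        .body();
System.out.println(json);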
Try using Jsoup:
Document doc = Jsoup.parse(new URL("http://your-url-here"), 10000);
System.out.print(doc.toString());
Assuming that the issue is that the "missing" content is being injected using JavaScript, the following SO question is pertinent:
What's a good tool to screen-scrape with Javascript support?
