Loading raw HTML in Java/Android

I have written a couple of live wallpapers in recent weeks using local resources. Now a potential client wants me to make one that loads and displays the photos (usually between 3 and 10) from his daily news report posted online. The report file has a URL along the lines of http://example.com/dailytext/report.html, which loads images along the lines of http://example.com/dailymedia/obama.jpg. The references in report.html look like:
<img src="../dailymedia/obama.jpg" ...>
Am I supposed to use a WebView to do this? That doesn't seem quite right, because I don't want to display the HTML. I would think that I want to throw the raw HTML into an array, parse the HTML looking for the instances of "img src...", reconstruct the full URLs, then load the bitmaps. I'm getting the impression this is more of a pure Java task than anything to do with Android's specialized classes, but I don't know. Any suggestions about "best practice?"

Unless I have misunderstood, this really isn't hard. You need to do the following:
Fetch the HTML, using either the native Java networking API or something like HttpClient.
Use a parser like Jericho or Dom4j to extract the image links.
Construct the absolute URLs; this can be done with just java.net.URL (see the sketch below).
Fetch the images.
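For the URL-resolution step, java.net.URL can resolve a relative reference against the page's URL directly. A minimal sketch, reusing the example report URL and image path from the question:

import java.net.URL;

public class ResolveImageUrl {
    public static void main(String[] args) throws Exception {
        // The page the HTML was fetched from.
        URL base = new URL("http://example.com/dailytext/report.html");
        // A relative reference pulled out of an img src attribute.
        URL image = new URL(base, "../dailymedia/obama.jpg");
        // Prints http://example.com/dailymedia/obama.jpg
        System.out.println(image);
    }
}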

You could also use the Jsoup HTML parser.
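A minimal sketch of the whole flow with Jsoup, which fetches the page and resolves the relative references for you via absUrl (the report URL is the example from the question; error handling is omitted):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class ReportImages {
    public static void main(String[] args) throws Exception {
        // Fetch and parse the report page.
        Document doc = Jsoup.connect("http://example.com/dailytext/report.html").get();
        // Every img element that has a src attribute.
        for (Element img : doc.select("img[src]")) {
            // absUrl resolves "../dailymedia/obama.jpg" against the page URL.
            String imageUrl = img.absUrl("src");
            System.out.println(imageUrl);
            // On Android the bitmap could then be loaded with something like
            // BitmapFactory.decodeStream(new java.net.URL(imageUrl).openStream()).
        }
    }
}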

Related

Modify all urls from HTML fragments (migration to S3)

We use a clone of Amazon S3 called GreenQloud Storage, but they are shutting down, so we have to take all the files hosted there and migrate them to S3.
Our app is a little similar to a CMS in that our DB has fields that contain HTML fragments. These fragments may reference GreenQloud URLs, so we have to replace all of these URLs with S3 URLs.
The files are already migrated. Here is a sample file in both storage providers:
https://s.greenqloud.com/com.stample.s3/stample-1420827843028-spotlight.png
https://stample-files.s3.amazonaws.com/stample-1420827843028-spotlight.png
I'm thinking of using an HTML parser to extract tags like a and img, plus HTTP URLs found in text nodes, but I'm afraid of missing some URLs this way. Do you see any problem with that?
I'm also considering using regexes, but some people advise against using regexes to parse HTML. In my case, though, I'm not really sure this counts as "parsing HTML", as I just want to replace one pattern with another.
So, I would appreciate knowing which solution is the best / safest for this migration. I'm not so concerned about migration throughput / performance, but rather about migrating all the links correctly.
We use Java/Scala, and all the fields to migrate are in MongoDB, so any Java / MongoDB based snippets are welcome.
Also note that some old HTML fragments may not be well-formed in our DB, but a Java parser can generally fix that.
Thanks
Edit
A typical MongoDB document might look like:
{
  _id: ObjectId(xxx),
  title: "yyy",
  content: "HTML FRAGMENT CONTAINING GREENQLOUD URLS",
  mainPictureUrl: "GREENQLOUD URL"
}
I can't really give any example for the html fragment, as it can come in many different shapes.
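Since the filenames appear to be identical on both providers (as in the sample URLs above), one possible approach is a Jsoup pass over each fragment that rewrites the host prefix wherever it appears in a src or href attribute. A rough sketch only, assuming the two base URLs shown in the question:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class GreenQloudToS3 {
    private static final String OLD_PREFIX = "https://s.greenqloud.com/com.stample.s3/";
    private static final String NEW_PREFIX = "https://stample-files.s3.amazonaws.com/";

    // Rewrites GreenQloud URLs in one HTML fragment and returns the new fragment.
    static String migrateFragment(String htmlFragment) {
        Document doc = Jsoup.parseBodyFragment(htmlFragment);
        // Every element carrying a src or href attribute.
        for (Element el : doc.select("[src], [href]")) {
            rewriteAttribute(el, "src");
            rewriteAttribute(el, "href");
        }
        // Return only the body's inner HTML, i.e. the fragment itself.
        return doc.body().html();
    }

    private static void rewriteAttribute(Element el, String attr) {
        String value = el.attr(attr);
        if (value.startsWith(OLD_PREFIX)) {
            el.attr(attr, NEW_PREFIX + value.substring(OLD_PREFIX.length()));
        }
    }
}

Two caveats: bare URLs sitting in text nodes are not touched by the attribute selection above, and Jsoup normalizes and pretty-prints the markup it returns, so the stored fragments come back cleaned up. If byte-for-byte fidelity matters, a plain prefix replacement on the raw field value (really just a literal String.replace, no HTML parsing at all) is arguably the safer option for this particular migration, since the old prefix is unlikely to occur in ordinary text.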

Does JSoup achieve this?

I want to collect domain names (crawling). I have written a simple Java application that reads an HTML page and saves the code in a text file. Now, I want to parse this text in order to collect all domain names without duplicates. But I need the domain names without "http://www.", just domainname.topleveldomain, or possibly domainname.subdomain.topleveldomain, or whatever number of subdomains (then the collected links need to be processed the same way, collecting the links inside them until I reach a certain number of links, say 100).
I have asked about this in a previous post, https://stackoverflow.com/questions/11113568/simple-efficient-java-web-crawler-to-extract-hostnames, and searched. Jsoup seems like a good solution, but I have not worked with it before, so before going deeply into it I just want to ask: does it achieve what I want to do? Any other suggestions for achieving my simple crawling in a simple way are welcome.
jsoup is a Java library for working with real-world HTML. It provides
a very convenient API for extracting and manipulating data, using the
best of DOM, CSS, and jquery-like methods
So yes, you can connect to a website, extract its HTML, and parse it with jsoup.
The logic of extracting the top-level domain is your part; you will need to write that code yourself (see the sketch after the links below).
Take a look at the docs for more options...
Use selector-syntax to find elements
Use DOM methods to navigate a document
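As a rough sketch of the part jsoup does cover (fetching a page and walking its links), plus a simple host-name cleanup; the start URL and the "strip a leading www." rule are assumptions, and a real crawler still needs depth limits, politeness, and error handling:

import java.net.URL;
import java.util.LinkedHashSet;
import java.util.Set;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class DomainCollector {
    public static void main(String[] args) throws Exception {
        Set<String> domains = new LinkedHashSet<>();    // no duplicates, keeps insertion order
        Document doc = Jsoup.connect("http://example.com/").get();
        for (Element link : doc.select("a[href]")) {
            String absolute = link.absUrl("href");      // resolve relative links
            if (!absolute.startsWith("http")) {
                continue;                               // skip mailto:, javascript:, etc.
            }
            String host = new URL(absolute).getHost();  // e.g. www.news.example.co.uk
            if (host.startsWith("www.")) {
                host = host.substring(4);               // drop the leading "www."
            }
            domains.add(host);
        }
        domains.forEach(System.out::println);
    }
}

Note that getHost() returns the full host including any subdomains; reducing that to the registrable domain (handling cases like .co.uk correctly) is the "your part" bit, and a library such as Guava's InternetDomainName can help there.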

Fetch complete web page using java code

I want to implement a Java method which takes a URL as input and stores the entire webpage, including CSS, images, and JS (all related resources), on my disk. I have used the Jsoup HTML parser to fetch the HTML page. Now the only option I am thinking of implementing is to get the page using Jsoup, parse the HTML content, convert relative paths to absolute paths, and then make additional GET requests for the JavaScript, images, etc. and save them to disk.
I also read about the HtmlCleaner and HtmlUnit parsers, but I think in all these cases I have to parse the HTML content to find the images, CSS, and JavaScript files.
Any advice on whether I am thinking along the right lines?
Or is there an easier way to accomplish this task?
Basically, you can do it with Jsoup:
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

Document doc = Jsoup.connect("http://rabotalux.com.ua/vacancy/4f4f800c8bc1597dc6fc7aff").get();
// Stylesheets and other linked resources.
Elements links = doc.select("link[href]");
// External scripts.
Elements scripts = doc.select("script[src]");
for (Element element : links) {
    System.out.println(element.absUrl("href"));
}
for (Element element : scripts) {
    System.out.println(element.absUrl("src"));
}
And so on with images and all related resources.
BUT if your site creates some elements with JavaScript, Jsoup will miss them, as it cannot execute JavaScript.
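For the "save them on disk" step from the question, each absolute URL found this way can be streamed straight to a file with the standard library. A minimal sketch (no retries, no content-type checks, and it assumes the URL path ends in a usable file name):

import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class ResourceDownloader {
    // Downloads one resource URL into the given directory, named after the last path segment.
    static void download(String resourceUrl, Path targetDir) throws Exception {
        URL url = new URL(resourceUrl);
        String name = Paths.get(url.getPath()).getFileName().toString();
        try (InputStream in = url.openStream()) {
            Files.copy(in, targetDir.resolve(name), StandardCopyOption.REPLACE_EXISTING);
        }
    }
}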
I encountered a similar problem a couple of years ago, and we used exactly the mechanism you are planning: parse the HTML content and convert relative paths to absolute paths. We also used multiple threads running simultaneously to retrieve images, JavaScript, etc. as a performance optimization. I don't know whether it should be done the way we did it, but in the end it worked for us. :-)
This GitHub project does this, using jSoup. No need to write it again if it already exists!
EDIT: I made an improved version of this class and added new features:
It can:
Extract URLs from linked or inline CSS, e.g. for background images, and download & save those too.
Download all the files (images, scripts, etc.) using multiple threads.
Give details about progress and errors.
Grab HTML frames embedded in the HTML document, and nested frames as well.
Some caveats:
Uses JSoup and OkHttp, so you need to have those libraries.
GPL licensed, for now anyway.

How can I extract only the main textual content from an HTML page?

Update
Boilerpipe appears to work really well, but I realized that I don't need only the main content, because many pages don't have an article, just links with short descriptions pointing to the full texts (this is common in news portals), and I don't want to discard these short texts.
So if an API does this, returning the different textual parts/blocks, each split out in some manner rather than as one single text (everything in one text is not useful), please report it.
The Question
I download some pages from random sites, and now I want to analyze the textual content of the page.
The problem is that a web page has a lot of content like menus, advertising, banners, etc.
I want to try to exclude everything that is not related to the main content of the page.
Taking this page as an example, I don't want the menus at the top nor the links in the footer.
Important: all pages are HTML and come from various different sites. I need suggestions on how to exclude this content.
At the moment, I am thinking of excluding content inside "menu" and "banner" classes from the HTML, as well as consecutive words that look like proper names (first letter capitalized).
The solutions can be based on the text content (without HTML tags) or on the HTML content (with the HTML tags).
Edit: I want to do this inside my Java code, not with an external application (if possible).
I tried an approach to parsing the HTML content described in this question: https://stackoverflow.com/questions/7035150/how-to-traverse-the-dom-tree-using-jsoup-doing-some-content-filtering
Take a look at Boilerpipe. It is designed to do exactly what you're looking for: remove the surplus "clutter" (boilerplate, templates) around the main textual content of a web page.
There are a few ways to feed HTML into Boilerpipe and extract the content.
You can use a URL:
ArticleExtractor.INSTANCE.getText(url);
You can use a String:
ArticleExtractor.INSTANCE.getText(myHtml);
There are also options to use a Reader, which opens up a large number of options.
You can also use boilerpipe to segment the text into blocks of full-text/non-full-text, instead of just returning one of them (essentially, boilerpipe segments first, then returns a String).
Assuming you have your HTML accessible from a java.io.Reader, just let boilerpipe segment the HTML and classify the segments for you:
import java.io.Reader;

import org.xml.sax.InputSource;

import de.l3s.boilerpipe.document.TextBlock;
import de.l3s.boilerpipe.document.TextDocument;
import de.l3s.boilerpipe.extractors.ArticleExtractor;
import de.l3s.boilerpipe.sax.BoilerpipeSAXInput;

Reader reader = ...
InputSource is = new InputSource(reader);
// parse the document into boilerpipe's internal data structure
TextDocument doc = new BoilerpipeSAXInput(is).getTextDocument();
// perform the extraction/classification process on "doc"
ArticleExtractor.INSTANCE.process(doc);
// iterate over all blocks (= segments as "ArticleExtractor" sees them)
for (TextBlock block : doc.getTextBlocks()) {
    // block.isContent() tells you if it's likely to be content or not
    // block.getText() gives you the block's text
}
TextBlock has some more exciting methods, feel free to play around!
There appears to be a possible problem with Boilerpipe. Why?
Well, it appears that it is suited to certain kinds of web pages, such as pages that have a single body of content.
So one can crudely classify web pages into three kinds with respect to Boilerpipe:
a web page with a single article in it (Boilerpipe worthy!)
a web page with multiple articles in it, such as the front page of the New York Times
a web page that really doesn't have any article in it, but has some content in the form of links, and may also have some degree of clutter.
Boilerpipe works on case #1. But if one is doing a lot of automated text processing, then how does one's software "know" what kind of web page it is dealing with? If the web page itself could be classified into one of these three buckets, then Boilerpipe could be applied for case #1. Case #2 is a problem, and case #3 is a problem as well - it might require an aggregate of related web pages to determine what is clutter and what isn't.
You can use some libs like goose. It works best on articles/news.
You can also check out the JavaScript code that does extraction similar to goose in the readability bookmarklet.
My first instinct was to go with your initial method of using Jsoup. At least with that, you can use selectors and retrieve only the elements that you want (i.e. Elements posts = doc.select("p");) and not have to worry about the other elements with random content.
On the matter of your other post, was the issue of false positives your only reason for straying away from Jsoup? If so, couldn't you just tweak the number of MIN_WORDS_SEQUENCE or be more selective with your selectors (i.e. not retrieve div elements)?
http://kapowsoftware.com/products/kapow-katalyst-platform/robo-server.php
Proprietary software, but it makes it very easy to extract from webpages and integrates well with java.
You use a provided application to design XML files read by the roboserver API to parse webpages. The XML files are built by you analyzing the pages you wish to parse inside the provided application (fairly easy) and applying rules for gathering the data (generally, websites follow the same patterns). You can set up the scheduling, running, and DB integration using the provided Java API.
If you're against using the software and prefer doing it yourself, I'd suggest not trying to apply one rule to all sites. Find a way to separate the tags and then build rules per site.
You're looking for what are known as "HTML scrapers" or "screen scrapers". Here are a couple of links to some options for you:
Tag Soup
HTML Unit
You can filter out the HTML junk and then parse the required details, or use the APIs of the existing site.
Refer to the link below on filtering the HTML; I hope it helps.
http://thewiredguy.com/wordpress/index.php/2011/07/dont-have-an-apirip-dat-off-the-page/
You could use the textracto API; it extracts the main 'article' text, and there is also an option to extract all other textual content. By 'subtracting' these texts you could separate the navigation text, preview text, etc. from the main textual content.

Extract All Images From HTML Using JAVA

I want to get a list of all image URLs from the HTML source of a webpage (both absolute and relative URLs). I used Jsoup to parse the HTML, but it's not giving me all the images. For example, when I parse the google.com HTML source it shows zero images. In the google.com HTML source, image links appear in this form:
"background:url(/intl/en_com/images/srpr/logo1w.png)
And in rediff.com, the image links appear in this form:
videoArr[j]=new Array("http://ishare.rediff.com/video/entertainment/bappi-da-the-first-indian-in-grammy-jury/2684982","http://datastore.rediff.com/h86-w116/thumb/5E5669666658606D6A6B6272/v3np2zgbla4vdccf.D.0.bappi.jpg","Bappi Da - the first Indian In Grammy jury","http://mypage.rediff.com/profile/getprofile/LehrenTV/12669275","LehrenTV","(2:33)");
j = 1
videoArr[j]=new Array("http://ishare.rediff.com/video/entertainment/bebo-shahid-jab-they-met-again-/2681664","http://datastore.rediff.com/h86-w116/thumb/5E5669666658606D6A6B6272/ra8p9eeig8zy5qvd.D.0.They-Met-Again.jpg","Bebo-Shahid : Jab they met again!","http://mypage.rediff.com/profile/getprofile/LehrenTV/12669275","LehrenTV","(2:17)");
Not all images are within "img" tags. I also want to extract images that are not within "img" tags, as shown in the HTML source above.
How can I do this? Please help me with this.
Thanks
This is going to be a bit difficult, I think. You basically need a library that will download a web page, construct the page's DOM, and execute any JavaScript that may alter the DOM. After all that is done, you have to extract all the possible images from the DOM. Another possible option is to intercept all calls the library makes to download resources, examine each URL, and if the URL is an image, record it.
My suggestion would be to start by playing with HtmlUnit (http://htmlunit.sourceforge.net/gettingStarted.html). It does a good job of building the DOM. I'm not sure what types of hooks it has for intercepting the methods that download resources. Of course, if it doesn't provide you with the hooks, you can always use AspectJ or simply modify the HtmlUnit source code. Good luck, this sounds like a reasonably interesting problem. You should post your solution when you figure it out.
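As a rough sketch of the first half of that approach, letting HtmlUnit build the DOM (with the page's JavaScript executed) and then pulling the img elements out of it. The URL is a placeholder, and this still misses CSS background images and script-built URL strings, which is where the resource-download hooks would come in:

import java.util.HashSet;
import java.util.Set;

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlImage;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class ImageUrlCollector {
    public static void main(String[] args) throws Exception {
        Set<String> imageUrls = new HashSet<>();
        try (WebClient webClient = new WebClient()) {
            // Don't abort on the page's own script errors.
            webClient.getOptions().setThrowExceptionOnScriptError(false);
            HtmlPage page = webClient.getPage("http://example.com/");
            for (Object node : page.getByXPath("//img")) {
                HtmlImage img = (HtmlImage) node;
                // Resolve the src attribute against the page URL.
                imageUrls.add(page.getFullyQualifiedUrl(img.getSrcAttribute()).toString());
            }
        }
        imageUrls.forEach(System.out::println);
    }
}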
If you just want every image referred to in the page, can't you just scan the HTML and any linked JavaScript or CSS with a simple regex? How likely is it you'd get [-:_./%a-zA-Z0-9]*\.(jpg|png|gif) in the HTML/JS/CSS that's not an image? I'd guess not very likely. And you should be allowing for broken links anyway.
Karthik's suggestion would be more correct, but I imagine it's more important to you to just get absolutely everything and filter out uninteresting images.
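A minimal sketch of that regex scan with java.util.regex, with the dots escaped so they only match literal extensions; the pattern is deliberately loose, as the answer suggests, so expect some false positives and broken links to filter out afterwards:

import java.util.LinkedHashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ImageUrlRegexScan {
    private static final Pattern IMAGE_REF =
            Pattern.compile("[-:_./%a-zA-Z0-9]*\\.(?:jpg|png|gif)", Pattern.CASE_INSENSITIVE);

    // Scans any text (HTML, JS, or CSS) and returns everything that looks like an image reference.
    static Set<String> findImageRefs(String text) {
        Set<String> refs = new LinkedHashSet<>();
        Matcher m = IMAGE_REF.matcher(text);
        while (m.find()) {
            refs.add(m.group());
        }
        return refs;
    }
}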
