Scrape data from HTML pages using Java, output to database [closed] - java

I need to know how to create a scraper (in Java) to gather data from HTML pages and output it to a database. I don't have a clue where to start, so any information you can give me on this would be great. Also, you can't be too basic or simple here... thanks :)

First you need to get familiar with an HTML DOM parser in Java, like JTidy. It will help you extract the stuff you want from an HTML file. Once you have the essential stuff, you can use JDBC to put it in the database.
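For a very rough sketch of that pipeline (the URL, table, and column names are placeholders I'm inventing, and JTidy's DOM support varies by version, so treat this as a starting point rather than a finished implementation):

import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.tidy.Tidy;

import java.net.URL;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ScrapeToDb {
    public static void main(String[] args) throws Exception {
        // Let JTidy parse the (possibly messy) HTML into a DOM tree
        Tidy tidy = new Tidy();
        tidy.setQuiet(true);
        tidy.setShowWarnings(false);
        Document doc = tidy.parseDOM(
                new URL("http://example.com/page.html").openStream(), null);

        // As a trivial example of "the stuff you want", grab the page title
        Node title = doc.getElementsByTagName("title").item(0);
        String titleText = (title != null && title.getFirstChild() != null)
                ? title.getFirstChild().getNodeValue() : "";

        // Store it via JDBC (connection URL and schema are made up)
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost/scraper", "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO pages (title) VALUES (?)")) {
            ps.setString(1, titleText);
            ps.executeUpdate();
        }
    }
}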
It might be tempting to use regular expressions for this job. But don't: HTML is not a regular language, so regex is not the way to go.

I am running a scraper using jsoup. I'm a noob, yet I found it to be very intuitive and easy to work with. It is also capable of parsing a wide range of sources: HTML, XML, RSS, etc.
I experimented with HtmlUnit with little to no success.
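To give an idea of what jsoup code looks like (a minimal sketch; the URL and CSS selector are placeholders):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class JsoupExample {
    public static void main(String[] args) throws Exception {
        // Fetch and parse the page in one call
        Document doc = Jsoup.connect("http://example.com/").get();

        // Select every link and print its text and absolute URL
        for (Element link : doc.select("a[href]")) {
            System.out.println(link.text() + " -> " + link.attr("abs:href"));
        }
    }
}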

I successfully used the Lobo Browser API in a project that scraped HTML pages. The Lobo Browser project offers a browser, but you can also use the API behind it very easily. It will also execute JavaScript, and if that JavaScript manipulates the DOM, that will be reflected when you inspect the DOM. In short, the API lets you mimic a browser; you can also work with cookies and the like.
Now, for getting the data out of the HTML, I would first transform the HTML to valid XHTML; you can use JTidy for this. Since XHTML is valid XML, you can use XPath to retrieve the data you want very easily. If you try to write code that parses the data from the raw HTML, your code will become a mess quickly, so I'd use XPath.
Once you have the data, you can insert it into a DB with JDBC, or maybe use Hibernate if you want to avoid writing too much SQL.
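As a sketch of the XPath step (assuming JTidy has already written the page out as valid XHTML to page.xhtml and that the file parses as plain XML; the XPath expression is a made-up example):

import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import java.io.File;

public class XPathExtract {
    public static void main(String[] args) throws Exception {
        // page.xhtml is the well-formed output of the JTidy step
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File("page.xhtml"));

        // Example query: the href of every link inside <div id="content">
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList hrefs = (NodeList) xpath.evaluate(
                "//div[@id='content']//a/@href", doc, XPathConstants.NODESET);
        for (int i = 0; i < hrefs.getLength(); i++) {
            System.out.println(hrefs.item(i).getNodeValue());
        }
    }
}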

A HUGE percentage of websites are built on malformed HTML code. It is essential that you use something like HtmlCleaner to clean up the source code that you want to parse.
Then you can successfully use XPath to extract nodes, and regex to parse specific parts of the strings you extracted from the page.
At least this is the technique I used.
You can use the xHTML that HtmlCleaner returns as a sort of interface between your application and the remote page you're trying to parse. Test against it, and if the remote page changes, you just extract the new xHTML cleaned by HtmlCleaner, re-adapt the XPath queries to extract what you need, and re-test your application code against the new interface.
In case you want to create a multithreaded scraper, be aware that HtmlCleaner is not thread-safe (see my post here).
This post can give you an idea of how to parse correctly formatted xHTML using XPath.
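A rough sketch of that approach (the URL and XPath are placeholders, and the exact HtmlCleaner calls should be checked against the version you use):

import org.htmlcleaner.HtmlCleaner;
import org.htmlcleaner.TagNode;

import java.net.URL;

public class HtmlCleanerExample {
    public static void main(String[] args) throws Exception {
        HtmlCleaner cleaner = new HtmlCleaner();

        // Normalize the (possibly malformed) page into a clean node tree
        TagNode root = cleaner.clean(new URL("http://example.com/page.html"));

        // HtmlCleaner evaluates a limited XPath dialect directly on TagNode
        Object[] headings = root.evaluateXPath("//div[@class='item']/h2");
        for (Object node : headings) {
            System.out.println(((TagNode) node).getText().toString().trim());
        }
    }
}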
Good Luck! ;)
Note: at the time I implemented my scraper, HtmlCleaner did a better job of normalizing the pages I wanted to parse. In some cases JTidy failed at the same job, so I'd suggest you give it a try.

Using JTidy you can scrape data from HTML. Then you can use JDBC.

Related

prevent XSS attack on JSTL & JSP scriptlet [duplicate]

I'm writing a servlet-based application in which I need to provide a messaging system. I'm in a rush, so I chose CKEditor to provide editing capabilities, and I currently insert the generated HTML directly in the web page displaying all messages (messages are stored in a MySQL database, FYI). CKEditor already filters HTML based on a whitelist, but a user can still inject malicious code with a POST request, so this is not enough.
A good library already exists to prevent XSS attacks by filtering HTML tags, but it's written in PHP: HTML Purifier
So, is there a similar mature library that can be used in Java?
A simple string replacement based on a white list doesn't seem to be enough, since I'd like to filter malformed tags too (which could alter the design of the page on which the message is displayed).
If there isn't, then how should I proceed? An XML parser seems overkill.
Note: There are a lot of questions about this on SO, but all the answers refer to filtering ALL HTML tags: I want to keep valid formatting tags.
I'd recommend using Jsoup for this. Here's an extract of relevance from its site.
Sanitize untrusted HTML
Problem
You want to allow untrusted users to supply HTML for output on your website (e.g. as comment submission). You need to clean this HTML to avoid cross-site scripting (XSS) attacks.
Solution
Use the jsoup HTML Cleaner with a configuration specified by a Whitelist.
String unsafe =
"<p><a href='http://example.com/' onclick='stealCookies()'>Link</a></p>";
String safe = Jsoup.clean(unsafe, Whitelist.basic());
// now: <p><a href="http://example.com/" rel="nofollow">Link</a></p>
Jsoup offers more advantages than that as well. See also Pros and Cons of HTML parsers in Java.
You should use AntiSamy. (That's what I did)
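For reference, a hedged sketch of AntiSamy usage (the policy file name is one of the samples shipped with the project; check the API against the version you use):

import org.owasp.validator.html.AntiSamy;
import org.owasp.validator.html.CleanResults;
import org.owasp.validator.html.Policy;

public class SanitizeExample {
    public static void main(String[] args) throws Exception {
        // The policy file declares which tags and attributes are allowed
        Policy policy = Policy.getInstance("antisamy-slashdot.xml");

        String dirty = "<p onclick=\"stealCookies()\">Hello <b>world</b></p>";

        // Scan the untrusted input and get back cleaned HTML
        CleanResults results = new AntiSamy().scan(dirty, policy);
        System.out.println(results.getCleanHTML());
    }
}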
If none of the ready-made options seem like enough, there is an excellent series of articles on XSS and attack prevention at Google Code. It should provide plenty of information to work with, if you end up going down that path.

Reading HTML+JavaScript using Java

I can read the HTML contents via HTTP (for example, http://www.foo.com) using Java (with the URL and BufferedReader classes). However, a couple of them contain JavaScript. My current app cannot process JavaScript.
What's the best way to read HTML content with JavaScript using Java?
I am open using other languages if it is easier.
Thanks in advance for your help.
UPDATE - Clarification:
A couple of the HTML contents are generated dynamically using JavaScript. I can see the result (in pure HTML, after the JavaScript processing) when viewing them in a browser.
On the other hand, when my Java app retrieves the HTML contents, the JavaScript has not been executed, so the dynamically generated content is missing.
Ideally, I want to be able to get the same result as on the browser using my Java app.
Thanks for everyone's response.
HtmlUnit has good JavaScript support and it should parse the HTML (almost) as a web browser would.
http://htmlunit.sourceforge.net/
http://htmlunit.sourceforge.net/javascript.html
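A minimal sketch (the URL is a placeholder, and HtmlUnit's API differs slightly between versions):

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class JsAwareFetch {
    public static void main(String[] args) throws Exception {
        try (WebClient webClient = new WebClient()) {
            // HtmlUnit runs the page's JavaScript before handing it back
            HtmlPage page = webClient.getPage("http://www.foo.com/");

            // asXml() reflects the DOM after the scripts have modified it
            System.out.println(page.asXml());
        }
    }
}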
Cobra (http://lobobrowser.org/cobra/getting-started.jsp) will fit your needs
For just HTML parsing you can use HTMLParser (org.htmlparser). However, from the way you described your problem, it seems you need a browser, because executing JavaScript is totally different from just parsing. Cheers.
Without a doubt you need to use a Java HTML parser:
Java Open Source HTML Parsers
Which Html Parser is best?
HTML/XML Parser for Java
HTML PARSER in java [closed]

What is the best way to create and send HTML emails through a Java application?

There have been some similar questions on Stack Overflow, but none of them answers my question.
We want to send HTML emails to users after they complete some action. We have written the email templates in XSL and use DOM elements to create nodes, add dynamic data, parse the XSL, and substitute the data.
Although this works fine, it eats up too much memory.
Is there any alternate solution ?
I do not want to write HTML code in Java.
If you are using Spring, see the example with Velocity.
One alternative is Velocity. It's known as a web-page templating framework, but you can use it to create templates for your emails too.
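As a rough sketch (the template path, variables, and template contents are made up), merging a Velocity template into an HTML email body could look like this:

import org.apache.velocity.Template;
import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.VelocityEngine;

import java.io.StringWriter;

public class EmailBody {
    public static void main(String[] args) {
        VelocityEngine engine = new VelocityEngine();
        engine.init();

        // templates/welcome.vm might contain: <p>Hello $userName, thanks for signing up!</p>
        Template template = engine.getTemplate("templates/welcome.vm");

        VelocityContext context = new VelocityContext();
        context.put("userName", "Alice");

        StringWriter body = new StringWriter();
        template.merge(context, body);

        // body.toString() is the HTML you hand to JavaMail as "text/html" content
        System.out.println(body);
    }
}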
It occurred to me you might also try swapping out XSL processors and see if you can find a faster one; that would be less work than switching to Velocity.
We use HTML.Template.java. You can also leverage a JSP.

What is the best way to screen scrape poorly formed XHTML pages for a Java app

I want to be able to grab content from web pages, especially the tags and the content within them. I have tried XQuery and XPath, but they don't seem to work for malformed XHTML, and regex is just a pain.
Is there a better solution? Ideally I would like to be able to ask for all the links and get back an array of URLs, or ask for the text of the links and get back an array of strings with the text of the links, or ask for all the bold text, etc.
Run the XHTML through something like JTidy, which should give you back valid XML.
You may want to look at Watij. I have only used its Ruby cousin, Watir, but with it I was able to load a webpage and request all URLs of the page in exactly the manner you describe.
It was very easy to work with: it literally fires up a web browser and gives you back information in nice forms. IE support seemed best, but at least with Watir, Firefox was also supported.
I had some problems with JTidy back in the day; I think it was related to unclosed tags making JTidy fail. I don't know if that's fixed now. I ended up using something that was a wrapper around TagSoup, although I don't remember the exact project's name. There's also HtmlCleaner.
I've used http://htmlparser.sourceforge.net/. It can parse poorly formed HTML and makes data extraction quite easy.

How do you grab text from a webpage (Java)?

I'm planning to write a simple J2SE application to aggregate information from multiple web sources.
The most difficult part, I think, is extraction of meaningful information from web pages, if it isn't available as RSS or Atom feeds. For example, I might want to extract a list of questions from stackoverflow, but I absolutely don't need that huge tag cloud or navbar.
What technique/library would you advise?
Updates/Remarks
Speed doesn't matter, as long as it can parse about 5 MB of HTML in less than 10 minutes.
It should be really simple.
You may use HTMLParser (http://htmlparser.sourceforge.net/) in combination with URL#getInputStream() to parse the content of HTML pages hosted on the Internet.
You could look at how HttpUnit does it. They use a couple of decent HTML parsers, one of which is NekoHTML.
As far as getting the data, you can use what's built into the JDK (HttpURLConnection), or use Apache's HttpClient:
http://hc.apache.org/httpclient-3.x/
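For example, a hedged sketch of fetching the raw HTML with the JDK alone (the URL is a placeholder):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class FetchPage {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/").openConnection();
        conn.setRequestMethod("GET");

        // Read the response body line by line
        StringBuilder html = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                html.append(line).append('\n');
            }
        }

        // Hand html.toString() to whichever parser you pick
        System.out.println(html);
    }
}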
If you want to take advantage of any structural or semantic markup, you might want to explore converting the HTML to XML and using XQuery to extract the information in a standard form. Take a look at this IBM developerWorks article for some typical code, excerpted below (they're outputting HTML, which is, of course, not required):
<table>
{
for $d in //td[contains(a/small/text(), "New York, NY")]
for $row in $d/parent::tr/parent::table/tr
where contains($d/a/small/text()[1], "New York")
return <tr><td>{data($row/td[1])}</td>
<td>{data($row/td[2])}</td>
<td>{$row/td[3]//img}</td> </tr>
}
</table>
In short, you may either parse the whole page and pick out the things you need (for speed I recommend looking at SAXParser), or run the HTML through a regexp that trims off all of the HTML... You can also convert it all into a DOM, but that's going to be expensive, especially if you're shooting for decent throughput.
You seem to want to screen scrape. You would probably want to write a framework with an adapter/plugin per source site (as each site's format will differ), through which you could parse the HTML source and extract the text. You would probably use Java's I/O API to connect to the URL and stream the data via InputStreams.
If you want to do it the old-fashioned way, you need to connect with a socket to the web server's port and then send the following data:
GET /file.html HTTP/1.0
Host: site.com
<ENTER>
<ENTER>
Then use Socket#getInputStream, read the data using a BufferedReader, and parse the data using whatever you like.
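A hedged sketch of that raw-socket approach (host and path are the same placeholders used above):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class RawHttpGet {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("site.com", 80);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {

            // Request line, Host header, and the blank line that ends the headers
            out.print("GET /file.html HTTP/1.0\r\n");
            out.print("Host: site.com\r\n");
            out.print("\r\n");
            out.flush();

            // Everything that comes back: status line, headers, then the HTML body
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}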
You can use NekoHTML to parse your HTML document. You will get a DOM document, and you can use XPath to retrieve the data you need.
If your "web sources" are regular websites using HTML (as opposed to structured XML format like RSS) I would suggest to take a look at HTMLUnit.
This library, while targeted for testing, is a really general purpose "Java browser". It is built on a Apache httpclient, Nekohtml parser and Rhino for Javascript support. It provides a really nice API to the web page and allows to traverse website easily.
Have you considered taking advantage of RSS/Atom feeds? Why scrape the content when it's usually available for you in a consumable format? There are libraries available for consuming RSS in just about any language you can think of, and it'll be a lot less dependent on the markup of the page than attempting to scrape the content.
If you absolutely MUST scrape content, look for microformats in the markup, most blogs (especially WordPress based blogs) have this by default. There are also libraries and parsers available for locating and extracting microformats from webpages.
Finally, aggregation services/applications such as Yahoo Pipes may be able to do this work for you without reinventing the wheel.
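If you do go the feed route, here's a hedged sketch using the Rome library (a library and feed URL I'm assuming, not ones named above):

import com.rometools.rome.feed.synd.SyndEntry;
import com.rometools.rome.feed.synd.SyndFeed;
import com.rometools.rome.io.SyndFeedInput;
import com.rometools.rome.io.XmlReader;

import java.net.URL;

public class FeedReader {
    public static void main(String[] args) throws Exception {
        // Parse the feed instead of scraping the HTML around it
        SyndFeed feed = new SyndFeedInput().build(
                new XmlReader(new URL("http://example.com/feed.rss")));

        for (SyndEntry entry : feed.getEntries()) {
            System.out.println(entry.getTitle() + " -> " + entry.getLink());
        }
    }
}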
Check this out http://www.alchemyapi.com/api/demo.html
They return pretty good results and have an SDK for most platforms. It's not only text extraction; they also do keyword analysis, etc.
