How to improve memory usage when converting BufferedReader to ByteArrayInputStream?

I am running into out-of-memory exceptions when reading in very large XML strings and converting them into a Document object.
The way I am doing this is I am opening a URL stream to the XML file, wrapping that in an InputStreamReader, then wrapping that in a BufferedReader.
Then I read from the BufferedReader and append to a StringBuffer:
StringBuffer doc = new StringBuffer();
BufferedReader in = new BufferedReader(new InputStreamReader(downloadURL.openStream()));
String inputLine;
while ((inputLine = in.readLine()) != null) {
    doc.append(inputLine);
}
Now this is the part I am having an issue with. I am calling toString on the StringBuffer to get the bytes for a byte array, which is then used to create a ByteArrayInputStream. I believe this step causes me to have the same data in memory twice; is that right?
Here is what I am doing:
byte xmlBytes[] = doc.toString().getBytes();
ByteArrayInputStream is = new ByteArrayInputStream(xmlBytes);
XMLReader xmlReader = XMLReaderFactory.createXMLReader();
Builder xmlBuilder = new Builder(xmlReader,false);
Document d = xmlBuilder.build(is);
Is there a way I can avoid creating duplicate memory (if I am doing so in the first place) or is there a way to convert the BufferedReader straight into a ByteArrayInputStream?
Thanks

Here is how you can consume an InputStream to create a Document using a DOM parser:
DocumentBuilderFactory domFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder builder = domFactory.newDocumentBuilder();
Document document = builder.parse(inputStream);
This creates fewer intermediate copies. However, if the XML document is very large, the best solution is to use a StAX parser instead of parsing it completely in memory.
With a StAX parser, you don't load the entire parsed document in memory. Instead, you handle each element found sequentially (and the element is thrown away immediately).
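A minimal StAX sketch (reusing downloadURL from the question; the document is consumed event by event, straight from the stream, with no intermediate String or byte[]):
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.InputStream;

InputStream in = downloadURL.openStream();
XMLStreamReader reader = XMLInputFactory.newInstance().createXMLStreamReader(in);
while (reader.hasNext()) {
    if (reader.next() == XMLStreamConstants.START_ELEMENT) {
        // handle one element at a time, e.g. inspect reader.getLocalName()
    }
}
reader.close();
in.close();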
Here is a good explanation: Java: Parsing XML files: DOM, SAX or StAX?
There are also SAX parsers, but it's much easier to use StAX. Discussion here: When should I choose SAX over StAX?

If your XML (or JSON) file is large, it is not a good idea to load the whole content into memory, because, as you mentioned, the parsing process consumes a huge amount of memory.
This issue becomes more serious with more users (I mean more than one thread). Just imagine what will happen if your application needs to serve two, ten, or more parallel requests...
The best approach is to process the huge file as a stream; once you have read the payload you need from the stream, you can close it without reading it to the end. This is a faster and more memory-friendly solution.
Apache Commons IO can help you to do the job:
LineIterator it = FileUtils.lineIterator(theFile, "UTF-8");
try {
    while (it.hasNext()) {
        String line = it.nextLine();
        // do something with line
    }
} finally {
    LineIterator.closeQuietly(it);
}
Another way to handle this issue is to split your XML file into parts and then process the smaller parts one at a time.

Related

Do I have any performance gain using the BufferedReader in this case?

I want to send a CSV file encoded in base64 from Client to Server, in order to parse it and use the data.
I want to get the InputStream directly from the Request object and pipe it to the reader used by the CSV parser.
Is there any performance or memory gain using this method?
Can the following code achieve this? I feel like there's something missing while decoding the content.
Is a BufferedReader really needed in this example?
/* Suppose I get a Base64-encoded CSV file from the client */
String csvContent = "Column 1;Column 2;Column 3\r\nValue 1;Value 2;Value 3\r\n";
ByteArrayInputStream inputStream = new ByteArrayInputStream(Base64.encodeBase64(csvContent.getBytes()));

/* Retrieving the content (UPDATED) */
Base64InputStream b64InputStream = new Base64InputStream(inputStream, false);

/* Parsing the CSV content */
Reader reader = new BufferedReader(new InputStreamReader(b64InputStream));
CSVParser csvParser = new CSVParser(reader, FORMAT_EXCEL_FR);

/* Printing the results */
csvParser.forEach(record -> printRecord(record));
Update
I replaced the byte[] array with a Base64InputStream from org.apache.commons.codec
Probably not. A BufferedReader ... uses a buffer. It is commonly used when your data is not in Java memory yet (e.g. socket communication, reading data from a file, ...).
In your case, you are wrapping a byte[], which means that the data is already in memory. So there is no point in adding a buffer.
The javadoc describes a BufferedReader as follows:
Reads text from a character-input stream, buffering characters so as to provide for the efficient reading of characters, arrays, and lines.
Now, let's say for example you want to read the content of a file and check something character by character. So you do a lot of int c = in.read(); calls. In that case, a buffered reader will actually fetch those characters in chunks internally.
So, basically, whenever it is more efficient to fetch data in chunks, use a BufferedReader.
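For example, a minimal sketch with a hypothetical file name, where each read() call is served from the internal buffer instead of touching the disk:
import java.io.BufferedReader;
import java.io.FileReader;

try (BufferedReader in = new BufferedReader(new FileReader("data.txt"))) {
    int c;
    while ((c = in.read()) != -1) {
        // inspect each character; the underlying file is only read in large chunks
    }
}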
Update
In response to your update: no, in this case it is also not necessary to add a BufferedReader. As Holger pointed out:
It's likely that the CSVParser does that already (i.e. buffering).
I checked the source code of the CSVParser, and look what's in the constructor.
public CSVParser(final Reader reader, final CSVFormat format, final long characterOffset, final long recordNumber)
        throws IOException {
    ...
    this.lexer = new Lexer(format, new ExtendedBufferedReader(reader));
    ...
}
It wraps some kind of buffered reader by default. So, there's no need to add one yourself.

Reusing an InputSource through application life cycle

I am using an InputSource to parse a large XML file (1.2 MB). I need to keep this InputSource in memory; I do not want to keep reloading it. What's the best way to do this? I have tried using a singleton, but the SAX parser complains that the document is missing an end tag after the second time the object reference is accessed.
Any suggestions would be greatly appreciated.
InputStream ins = getResources().openRawResource(R.raw.cfr_title_index);
InputSource xmlSource = new InputSource(ins);
MySinglton.xmlInput = xmlSource;
Thanks
Streams are "read once". You should not plan to hang onto it. If you need its contents in memory, read them into an object or data structure and cache it.
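A minimal sketch of that caching approach, assuming the Android resource from the question: read the bytes once, then hand out a fresh stream (and InputSource) for every parse:
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import org.xml.sax.InputSource;

byte[] cached;
try (InputStream ins = getResources().openRawResource(R.raw.cfr_title_index);
     ByteArrayOutputStream buffer = new ByteArrayOutputStream()) {
    byte[] chunk = new byte[8192];
    int n;
    while ((n = ins.read(chunk)) != -1) {
        buffer.write(chunk, 0, n);
    }
    cached = buffer.toByteArray();
}
// Each parse gets its own InputSource over the cached bytes:
InputSource xmlSource = new InputSource(new ByteArrayInputStream(cached));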

Stop Jsoup from encoding

I'm trying to parse a URL with Jsoup which contains the following text: Ætterni.
After parsing the document, the same string looks like this: &AElig;tterni.
How do I prevent this from happening? I want the document 1:1, exactly as it was.
Code:
doc = Jsoup.connect(url).get();
String docEncoding=doc.outputSettings().charset().name();
OutputStreamWriter writer = new OutputStreamWriter(new FileOutputStream(localLink), docEncoding);
writer.write(doc.html());
writer.close();
Use
doc.outputSettings().escapeMode(EscapeMode.xhtml);
to avoid the entity conversion.
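Applied to the code from the question, that is one added line before serializing (EscapeMode is org.jsoup.nodes.Entities.EscapeMode):
doc = Jsoup.connect(url).get();
doc.outputSettings().escapeMode(Entities.EscapeMode.xhtml);
String docEncoding = doc.outputSettings().charset().name();
OutputStreamWriter writer = new OutputStreamWriter(new FileOutputStream(localLink), docEncoding);
writer.write(doc.html());
writer.close();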
You seem not to be utilizing Jsoup's powers in any way. I'd just stream the HTML plain using java.net.URL. This way you have a 1:1 copy of the response.
InputStream input = new URL(url).openStream();
OutputStream output = new FileOutputStream(localLink);
// Now copy input to output the usual Java IO way.
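// For example (one common form of that copy; the 8 KB buffer size is arbitrary):
byte[] buffer = new byte[8192];
int n;
while ((n = input.read(buffer)) != -1) {
    output.write(buffer, 0, n);
}
input.close();
output.close();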
You should not use a Reader/Writer for this, as it may malform the characters of sources in an unknown encoding, because the platform default encoding would be used instead.

What is the best way to extract the entire content from a BufferedReader object in Java?

I'm trying to get an entire web page through a URLConnection.
What's the most efficient way to do this?
I'm doing this already:
URL url = new URL("http://www.google.com/");
URLConnection connection;
connection = url.openConnection();
InputStream in = connection.getInputStream();
BufferedReader bf = new BufferedReader(new InputStreamReader(in));
StringBuffer html = new StringBuffer();
String line = bf.readLine();
while (line != null) {
    html.append(line);
    line = bf.readLine();
}
bf.close();
html has the entire HTML page.
I think this is the best way. The size of the page is fixed ("it is what it is"), so you can't improve on memory. Perhaps you can compress the contents once you have them, but they aren't very useful in that form. I would imagine that eventually you'll want to parse the HTML into a DOM tree.
Anything you do to parallelize the reading would overly complicate the solution.
I'd recommend using a StringBuilder with an initial capacity of 2048 or 4096.
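For example, it is a one-line change (4096 is just a starting capacity, not a measured optimum; the builder still grows as needed):
StringBuilder html = new StringBuilder(4096);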
Why do you think the code you posted isn't sufficient? You sound like you're guilty of premature optimization.
Run with what you have and sleep at night.
What do you want to do with the obtained HTML? Parse it? It may be good to know that any decent HTML parser already has a constructor or method argument which directly takes a URL or InputStream, so that you don't need to worry about streaming performance like that.
Assuming that all you want to do is what you described in your previous question, with Jsoup for example you could obtain all those news links extraordinarily easily, as follows:
Document document = Jsoup.connect("http://news.google.com.ar/nwshp?hl=es&tab=wn").get();
Elements newsLinks = document.select("h2.title a:eq(0)");
for (Element newsLink : newsLinks) {
    System.out.println(newsLink.attr("href"));
}
This yields the following after only a few seconds:
http://www.infobae.com/mundo/541259-100970-0-Pinera-confirmo-que-el-rescate-comenzara-las-20-y-durara-24-y-48-horas
http://www.lagaceta.com.ar/nota/403112/Argentina/Boudou-disculpo-con-DAIA-pero-volvio-cuestionar-medios.html
http://www.abc.es/agencias/noticia.asp?noticia=550415
http://www.google.com/hostednews/epa/article/ALeqM5i6x9rhP150KfqGJvwh56O-thi4VA?docId=1383133
http://www.abc.es/agencias/noticia.asp?noticia=550292
http://www.univision.com/contentroot/wirefeeds/noticias/8307387.shtml
http://noticias.terra.com.ar/internacionales/ecuador-apoya-reclamo-argentino-por-ejercicios-en-malvinas,3361af2a712ab210VgnVCM4000009bf154d0RCRD.html
http://www.infocielo.com/IC/Home/index.php?ver_nota=22642
http://www.larazon.com.ar/economia/Cristina-Fernandez-Censo-indispensable-pais_0_176100098.html
http://www.infobae.com/finanzas/541254-101275-0-Energeticas-llevaron-la-Bolsa-portena-ganancias
http://www.telam.com.ar/vernota.php?tipo=N&idPub=200661&id=381154&dis=1&sec=1
http://www.ambito.com/noticia.asp?id=547722
http://www.canal-ar.com.ar/noticias/noticiamuestra.asp?Id=9469
http://www.pagina12.com.ar/diario/cdigital/31-154760-2010-10-12.html
http://www.lanacion.com.ar/nota.asp?nota_id=1314014
http://www.rpp.com.pe/2010-10-12-ganador-del-pulitzer-destaca-nobel-de-mvll-noticia_302221.html
http://www.lanueva.com/hoy/nota/b44a7553a7/1/79481.html
http://www.larazon.com.ar/show/sdf_0_176100096.html
http://www.losandes.com.ar/notas/2010/10/12/batista-siento-comodo-dieron-respaldo-520595.asp
http://deportes.terra.com.ar/futbol/los-rumores-empiezan-a-complicar-la-vida-de-river-y-vuelve-a-sonar-gallego,a24483b8702ab210VgnVCM20000099f154d0RCRD.html
http://www.clarin.com/deportes/futbol/Exigieron-Roman-regreso-Huracan_0_352164993.html
http://www.el-litoral.com.ar/leer_noticia.asp?idnoticia=146622
http://www.nuevodiarioweb.com.ar/nota/181453/Locales/C%C3%A1ncer_mama:_200_casos_a%C3%B1o_Santiago.html
http://www.ultimahora.com/notas/367322-Funcionarios-sanitarios-capacitaran-sobre-cancer-de-mama
http://www.lanueva.com/hoy/nota/65092f2044/1/79477.html
http://www.infobae.com/policiales/541220-101275-0-Se-suspendio-la-declaracion-del-marido-Fernanda-Lemos
http://www.clarin.com/sociedad/educacion/titulo_0_352164863.html
Did someone already say that regex is absolutely the wrong tool for parsing HTML? ;)
See also:
Pros and cons of HTML parsers in Java
Your approach looks pretty good; however, you can make it somewhat more efficient by avoiding the creation of intermediate String objects for each line.
The way to do this is to read directly into a temporary char[] buffer.
Here is a slightly modified version of your code that does this (minus all the error checking, exception handling etc. for clarity):
URL url = new URL("http://www.google.com/");
URLConnection connection;
connection = url.openConnection();
InputStream in = connection.getInputStream();
BufferedReader bf = new BufferedReader(new InputStreamReader(in));
StringBuffer html = new StringBuffer();
char[] charBuffer = new char[4096];
int count = 0;
do {
    count = bf.read(charBuffer, 0, 4096);
    if (count >= 0) html.append(charBuffer, 0, count);
} while (count > 0);
bf.close();
For even more performance, you can of course do little extra things like pre-allocating the character array and StringBuffer if this code is going to be called frequently.
You can try using commons-io from apache (http://commons.apache.org/io/api-release/org/apache/commons/io/IOUtils.html)
new String(IOUtils.toCharArray(connection.getInputStream()))
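Note that toCharArray(InputStream) uses the platform default encoding; IOUtils also has an overload that lets you pin the charset, which is the safer form (UTF-8 here is an assumption about the page):
String html = IOUtils.toString(connection.getInputStream(), "UTF-8");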
There are some technical considerations. You may wish to use HttpURLConnection instead of URLConnection.
HttpURLConnection supports chunked transfer encoding, which allows you to process the data in chunks rather than buffering all of the content before you start doing work. This can lead to an improved user experience.
Also, HttpURLConnection supports persistent connections. Why close the connection if you're going to request another resource right away? Keeping the TCP connection open to the web server allows your application to quickly download multiple resources without the overhead (latency) of establishing a new TCP connection for each one.
Tell the server that you support gzip, and wrap your reader around a GZIPInputStream if the response header says the content is compressed.
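A minimal sketch of that negotiation (HttpURLConnection and java.util.zip.GZIPInputStream are standard; the UTF-8 charset is an assumption):
import java.util.zip.GZIPInputStream;

HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestProperty("Accept-Encoding", "gzip");
InputStream in = conn.getInputStream();
if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
    in = new GZIPInputStream(in); // transparently inflates the compressed body
}
BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"));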

SAXReader does not re-escape characters

I'm reading an XML file with dom4j. The file looks like this:
...
<Field>&#13;&#10; hello, world...</Field>
...
I read the file with SAXReader into a Document. When I use getText() on the node, I obtain the following String:
\r\n hello, world...
I do some processing and then write another file using asXml(). But the characters are not escaped as in the original file, which results in an error in the external system that uses the file.
How can I escape the special characters and get &#13;&#10; back when writing the file?
You cannot, easily. Those aren't 'escapes'; they are 'character entities'. They are a fundamental part of XML. Xerces has some very complex support for 'unparsed entities', but I doubt that it applies to these, as opposed to the species that are defined in a DTD.
It depends on what you're getting and what you want (see my previous comment.)
The SAX reader is doing nothing wrong: your XML is giving you a literal newline character. If you control this XML, then instead of the newline characters you will need to insert a \ (backslash) character followed by the "r" or "n" character (or both).
If you do not control this XML, then you will need to do a literal conversion of the newline character to "\r\n" after you've gotten your string back. In C# it would be something like:
myString = myString.Replace("\r\n", "\\r\\n");
XML entities are abstracted away in DOM. Content is exposed as a String without the need to bother about the encoding, which in most cases is what you want.
But SAX has some support for controlling how entities are processed. You could try to create an XMLReader with a custom EntityResolver#resolveEntity and pass it as a parameter to the SAXReader. But I fear it may not work:
The Parser will call this method before opening any external entity except the top-level document entity (including the external DTD subset, external entities referenced within the DTD, and external entities referenced within the document element).
Otherwise, you could try to configure a LexicalHandler for SAX so as to be notified when an entity is encountered. The Javadoc for LexicalHandler#startEntity says:
Report the beginning of some internal and external XML entities.
You will not be able to change the resolving, but that may still help.
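A hedged sketch of wiring that up with dom4j (the SAX property URI is standard; myLexicalHandler is a hypothetical implementation of org.xml.sax.ext.LexicalHandler):
XMLReader xmlReader = XMLReaderFactory.createXMLReader();
xmlReader.setProperty("http://xml.org/sax/properties/lexical-handler", myLexicalHandler);
SAXReader saxReader = new SAXReader(xmlReader);
Document document = saxReader.read(myInputStream);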
EDIT
You must read and write the XML with the SAXReader and XMLWriter provided by dom4j. See reading an XML file and writing an XML file. Don't use asXml() and dump the file yourself.
FileOutputStream fos = new FileOutputStream("simple.xml");
OutputFormat format = OutputFormat.createPrettyPrint();
XMLWriter writer = new XMLWriter(fos, format);
writer.write(doc);
writer.flush();
You can pre-process the input stream to replace & with e.g. [$AMPERSAND_CHARACTER$], then do the work with dom4j, and post-process the output stream to make the back substitution.
Example (using streamflyer):
import com.github.rwitzel.streamflyer.util.ModifyingReaderFactory;
import com.github.rwitzel.streamflyer.util.ModifyingWriterFactory;
// Pre-process
Reader originalReader = new InputStreamReader(myInputStream, "utf-8");
Reader modifyingReader = new ModifyingReaderFactory().createRegexModifyingReader(originalReader, "&", "[\\$AMPERSAND_CHARACTER\\$]");
// Read and modify XML via dom4j
SAXReader xmlReader = new SAXReader();
Document xmlDocument = xmlReader.read(modifyingReader);
// ...
// Post-process
Writer originalWriter = new OutputStreamWriter(myOutputStream, "utf-8");
Writer modifyingWriter = new ModifyingWriterFactory().createRegexModifyingWriter(originalWriter, "\\[\\$AMPERSAND_CHARACTER\\$\\]", "&");
// Write to output stream
OutputFormat xmlOutputFormat = OutputFormat.createPrettyPrint();
XMLWriter xmlWriter = new XMLWriter(modifyingWriter, xmlOutputFormat);
xmlWriter.write(xmlDocument);
xmlWriter.close();
You can also use FilterInputStream/FilterOutputStream, PipedInputStream/PipedOutputStream, or ProxyInputStream/ProxyOutputStream for pre- and post-processing.
