Analyzing memory with MAT - question about UTF characters - java

I have an .hprof file and I'm analyzing it with the Eclipse Memory Analyzer (MAT).
I ran the Top Components report, and in the Duplicate Strings section MAT detected some String instances with identical content.
I'm already looking into String.intern() and similar homework on my side, but that is not my question here.
That report shows me duplicated Strings like these:
\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000....
\u000a\u0009\u0009
\u000a\u0009\u0009\u0009\u0009
And so on.
Other Strings are readable, but what about these ones? I suspect they come from XML parsing (I use JibX in my app).
My questions are:
Where do you think these strings are coming from? How can I analyse them better?
If they come from XML parsing or something else, how can I clean/clear them after parsing? Or is the JibX 1.0.1 release simply too old for these issues?
Any suggestion about these UTF-8-like Strings would be much appreciated. Thanks in advance.

You can right-click on the suspicious String and select List Objects/With Incoming References. This will show you the objects that reference your Strings.

It is interesting to see Strings with many \u0000 characters; that is very uncommon, given that Strings are not 0-terminated in Java. It suggests they were created by a String(byte[]) constructor, perhaps a String(byte[], encoding) constructor, from byte arrays containing 0s.
I would use a profiler and analyse the call graphs of these constructors; that should lead you to the culprit.
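For what it's worth, the \u000a\u0009\u0009 strings are simply a newline followed by tabs, i.e. the whitespace between XML elements, which fits your JibX theory. As for the \u0000 ones, here is a minimal sketch of one common way such strings get created (the buffer and payload names are made up): converting an entire fixed-size read buffer instead of only the bytes actually read.
import java.nio.charset.StandardCharsets;

public class PaddedStringDemo {
    public static void main(String[] args) {
        // A fixed-size read buffer, as typically used for sockets or files.
        byte[] buffer = new byte[16];
        byte[] payload = "abc".getBytes(StandardCharsets.ISO_8859_1);
        System.arraycopy(payload, 0, buffer, 0, payload.length);

        // Converting the whole buffer drags the unused zero bytes into
        // the String as \u0000 characters.
        String padded = new String(buffer, StandardCharsets.ISO_8859_1);
        System.out.println(padded.length()); // 16, not 3

        // Passing the valid length avoids the padding.
        String clean = new String(buffer, 0, payload.length, StandardCharsets.ISO_8859_1);
        System.out.println(clean.length()); // 3
    }
}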

Related

In Java, how to copy data from String to char[]/byte[] efficiently?

I need to copy the content of many big, different Strings into a static char array and use that array frequently in an efficiency-demanding job, so it's important to avoid allocating too much new space.
For that reason str.toCharArray() is ruled out, since it allocates a new array for every String.
As we all know, charAt(i) is slower and more complex than indexing an array with [i]. So I want to use byte[] or char[].
The good news is that there is str.getBytes(srcBegin, srcEnd, dst, dstBegin). The bad news is that it is deprecated.
So how can I get this demanding job done?
I believe you want getChars(int, int, char[], int). That will copy the characters into the specified array, and I'd expect it to do it "as efficiently as reasonably possible".
You should avoid converting between text and binary representations unless you really need to. Aside from anything else, that conversion itself is likely to be time-consuming.
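A minimal sketch of the getChars approach; the buffer size and the helper name are my own assumptions, size the buffer to your largest input:
public class CharCopy {
    // One reusable buffer instead of a fresh array per String.
    // The 1 MiB capacity is an assumption; size it to your data.
    private static final char[] BUFFER = new char[1 << 20];

    static int copyInto(String str) {
        // Copies the characters into the existing array, no allocation.
        str.getChars(0, str.length(), BUFFER, 0);
        return str.length(); // number of valid chars now in BUFFER
    }

    public static void main(String[] args) {
        int n = copyInto("hello");
        System.out.println(new String(BUFFER, 0, n)); // hello
    }
}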
A small stocktaking:
String does Unicode text; it can be normalized (java.text.Normalizer).
int[] code points are Unicode symbols
char[] holds Unicode UTF-16 code units (2 bytes per char); some code points need 2 chars: a surrogate pair.
byte[] is for binary data. Holding Unicode text as UTF-8 is relatively compact when the text is mostly ASCII or Latin-1.
Processing might be done on a ByteBuffer, CharBuffer, IntBuffer.
When dealing with Asian scripts, int code points are probably the most feasible representation.
Otherwise bytes seem best.
Code points (or chars) also make sense when the Character class is used for classification: Unicode blocks and scripts, digits in several scripts, emoji, whatever.
For raw performance, bytes are often best, as they are the most compact, probably as UTF-8.
One cannot entirely avoid memory allocation here. getBytes should be used with a Charset, and some kind of conversion almost always happens. Since newer Java versions can internally keep a byte array instead of a char array for an encoding like Latin-1 (ISO-8859-1), even reaching for the internal char array would not do, and new arrays are created regardless.
What one can do is use fast ByteBuffers.
Alternatively, for lingual analysis, one can use databases, maybe graph databases, or at least something that can exploit parallelism.
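To make the char vs. code point distinction above concrete, a small sketch (the sample string is made up):
public class CodePointDemo {
    public static void main(String[] args) {
        // 'a', 'é', and an emoji that needs a surrogate pair in UTF-16.
        String s = "a\u00e9\uD83D\uDE00";
        System.out.println(s.length());                      // 4 chars
        System.out.println(s.codePointCount(0, s.length())); // 3 code points

        // Iterating by code point handles surrogate pairs correctly,
        // and Character can classify each symbol.
        s.codePoints().forEach(cp ->
                System.out.printf("U+%04X %s%n", cp, Character.getName(cp)));
    }
}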
You are pretty much restricted to the APIs offered by the String class, and obviously that deprecated method is supposed to be replaced with getBytes() (or the overload that allows specifying a charset).
In other words: that problem you are talking about "having many large strings, that need to go into arrays" can't be solved easily.
Thus a distinct non-answer: look into your design. If performance is really critical, then do not create those many large strings upfront!
In other words: if your measurements convince you that you have a real performance issue, then adapt your design as needed. Maybe there is a chance that at the place where your strings "come in", you could avoid creating String objects in the first place and use something that works better for you performance-wise later on.
But of course: that will lead to a complex, error prone solution, where you do a lot of "memory management" yourself. Thus, as said: measure first. Ensure that you have a real problem, and it actually sits in the place you think it sits.
str.getBytes(srcBegin, srcEnd, dst, dstBegin) is indeed deprecated. The relevant documentation recommends getBytes() instead. If you needed str.getBytes(srcBegin, srcEnd, dst, dstBegin) because sometimes you don't have to convert the entire string, I suppose you could substring() first, but I'm not sure how badly that would impact your code's efficiency, if at all. Or, if it's all the same to you to store it in a char[], then you can use getChars(int,int,char[],int), which is not deprecated.

Java Strings : how the memory works with immutable Strings

I have a simple question.
byte[] responseData = ...;
String str = new String(responseData);
String withKey = "{\"Abcd\":" + str + "}";
In the above code, are these three lines taking 3X the memory? For example, if responseData is 1 MB, will line 2 take an extra 1 MB in memory, and line 3 an extra 1 MB + xx? Is this true? If not, how does it work? If yes, what is the optimal way to fix it? Will StringBuffer help here?
Yes, that sounds about right. Probably even more because your 1MB byte array needs to be turned into UTF-16, so depending on the encoding, it may be even bigger (2MB if the input was ASCII).
Note that the garbage collector can reclaim memory as soon as the variables that use it go out of scope. You could set them to null as early as possible to help it make this as timely as possible (for example responseData = null; after you constructed your String).
if yes, then what is the optimal way to fix this
"Fix" implies a problem. If you have enough memory there is no problem.
the problem is that I am getting OutOfMemoryException as the byte[] data coming from server is quite big,
If you don't, you have to think about a better alternative to keeping a 1MB string in memory. Maybe you can stream the data off a file? Or work on the byte array directly? What kind of data is this?
The problem is that I am getting OutOfMemoryException as the byte[] data coming from the server is quite big; that's why I need to figure out first whether I am doing something wrong ....
Yes. Well basically your fundamental problem is that you are trying to hold the entire string in memory at one time. This is always going to fail for a sufficiently large string ... even if you code it in the most optimal memory efficient fashion possible. (And that would be complicated in itself.)
The ultimate solution (i.e. the one that "scales") is to do one of the following:
stream the data to the file system, or
process it in such a way that you don't ever need the entire "string" to be represented in memory.
You asked if StringBuffer will help. It might help a bit ... provided that you use it correctly. The trick is to make sure that you preallocate the StringBuffer (actually a StringBuilder is better!!) to be big enough to hold all of the characters required. Then copy data into it using a charset decoder (directly or using a Reader pipeline).
But even with optimal coding, you are likely to need a peak of 3 times the size of your input byte[].
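For illustration, a minimal sketch of that decoder approach, assuming UTF-8 input; note the peak still includes the byte array, the builder's buffer, and the final String:
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.StandardCharsets;

public class PreallocatedDecode {
    static String decode(byte[] responseData) throws IOException {
        // Preallocate so the builder never has to grow and copy its buffer.
        // Sizing by byte length assumes a mostly single-byte encoding.
        StringBuilder sb = new StringBuilder(responseData.length);
        try (Reader in = new InputStreamReader(
                new ByteArrayInputStream(responseData), StandardCharsets.UTF_8)) {
            char[] chunk = new char[8192];
            int n;
            while ((n = in.read(chunk)) != -1) {
                sb.append(chunk, 0, n);
            }
        }
        return sb.toString();
    }
}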
Note that your OOME problem is probably nothing to do with GC or storage leaks. It is actually about the fundamental space requirements of the data types you are using ... and the fact that Java does not offer a "string of bytes" data type.
There is no such OutOfMemoryException in my apidocs. If it's OutOfMemoryError, especially on the server side, you definitely have a problem.
When you receive big requests from clients, those String-related statements are not the first problem. Reducing 3X to 1X is not the solution.
I'm sorry, I can't help more without further code.
Use back-end storage
You should not keep the whole request body in a byte[]. You can store it directly in any back-end storage, such as a local file, a remote database, or cloud storage.
I would:
copy the stream from the request to the back-end storage using a small chunked buffer
Use streams
If you can, use streams rather than whole objects.
I would:
response.getWriter().write("{\"Abcd\":");
// copy your back-end stored data to the writer as a stream here
response.getWriter().write("}");
Yes, if you use a StringBuffer for the code you have, you would save 1 MB of heap space in the last step. However, considering the size of your data, I recommend an external-memory algorithm where you bring only part of your data into memory, process it, and put it back to storage.
As others have mentioned, you should really try not to have such a big Object in your mobile app, and that streaming should be your best solution.
That said, there are some techniques to reduce the amount of memory your app is using now:
Remove byte[] responseData entirely if possible, so the memory it used can be released ASAP (assuming it is not used anywhere else)
Create the largest String first, and then substring() it. Android uses Apache Harmony for its standard Java library implementation, and if you check its String class you'll see that substring() is implemented simply by creating a new String object with the proper start and end offsets into the original data; no duplicate copy is made. So doing the following cuts the overall memory consumption by at least 1/3:
String withKey = new StringBuilder().append("{\"Abcd\":").append(str).append("}").toString();
str = withKey.substring("{\"Abcd\":".length(), withKey.length() - "}".length());
Never ever use something like "{\"Abcd\":" + str + "}" for large Strings. Under the hood, "string_a" + "string_b" is implemented as new StringBuilder().append("string_a").append("string_b").toString(), so implicitly you are creating two (or at least one, if the compiler is smart) StringBuilders. For large Strings it's better to take over this process yourself, since you have deep domain knowledge about your program that the compiler doesn't, and know best how to manipulate the strings.

Good way to serialize array in Java that is readable from Python?

I have some serialized Java objects (arrays of doubles) in MySQL database fields that I generated previously. Now I need to read them from Python, and I have just realized that this is probably not possible to do directly.
Then I tried converting them to strings in Java (simply comma-delimited) and parsing them manually from Python. But it turned out that parsing this way from Python is painfully slow. Is there a better way to serialize arrays that is compatible between Java and Python?
Edit: Sorry, my parsing code was the problem, of course. I replaced it with this:
stringList = string.split(', ')
svdVector = [float(x) for x in stringList]
..and now it is almost instant for my case of 1000x1000 doubles. It still feels wrong to store doubles as strings instead of binary, but since it's easy to code and runs fast enough, it is fine.
Python comes with modules for CSV files, XML, and JSON, so one of those would likely do the trick quite well.
If you really want to try binary serialization, check out the built-in struct and array modules for help with interpreting the data in Python.
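For the binary route, one hedged sketch of the Java side: DataOutputStream writes IEEE 754 doubles in big-endian order, which Python's struct module can read back with the '>d' format. The file name and the length header are my own convention:
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class DoubleDump {
    public static void main(String[] args) throws IOException {
        double[] values = {1.0, 2.5, 3.14};
        try (DataOutputStream out = new DataOutputStream(
                new FileOutputStream("doubles.bin"))) {
            out.writeInt(values.length); // simple header: element count
            for (double v : values) {
                out.writeDouble(v);      // 8 bytes, big-endian IEEE 754
            }
        }
    }
}
On the Python side, struct.unpack('>d', ...) (or the array module plus byteswap() on little-endian machines) reads the same values back.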

How do I convert a Java Hashtable to an NSDictionary (obj-C)?

At the server end (GAE), I've got a java Hashtable.
At the client end (iPhone), I'm trying to create an NSDictionary.
myHashTable.toString() gets me something that looks darned-close-to-but-not-quite-the-same-as [myDictionary description]. If they were the same, I could write the string to a file and do:
NSDictionary *dict = [NSDictionary dictionaryWithContentsOfFile:tmpFile];
I could write a little parser in obj-C to deal with myHashtable.toString(), but I'm sort-of hoping that there's a shortcut already built into something, somewhere -- I just can't seem to find it.
(So, being a geek, I'll spend far longer searching the web for a shortcut than it would take me to write & debug the parser... ;)
Anyway -- hints?
Thanks!
I would convert the Hashtable into something JSON-like and send that to the iPhone side.
Hashtable.toString() is not ideal; it will have problems with spaces, commas, and quotation marks.
For JSON-to-NSDictionary, you can find the json-framework tools via http://www.json.org/
As j-16 SDiZ mentioned, you need to serialize your Hashtable. It can be to JSON, XML, or some other format. Once serialized, you can deserialize it into an NSDictionary. JSON is probably the easiest format for this, with plenty of libraries for both Objective-C and Java; http://json.org has a list of libraries.
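For example, assuming the org.json library on the Java side (its JSONObject constructor accepts any Map, so a Hashtable works directly):
import java.util.Hashtable;
import org.json.JSONObject;

public class HashtableToJson {
    public static void main(String[] args) {
        Hashtable<String, Object> table = new Hashtable<>();
        table.put("name", "value");
        table.put("count", 42);

        // Serialize to JSON text; the iPhone side can parse this into
        // an NSDictionary with any Objective-C JSON library.
        String json = new JSONObject(table).toString();
        System.out.println(json); // e.g. {"count":42,"name":"value"}
    }
}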

Java and JVM confusion (if Java can handle a large string why can't groovy?)

I recently ran into an issue with Groovy where I was attempting to deal with a very large string (100k characters). I got an error that said the string could not be more than 65,535 characters. I did some searches to try to find out more info and ran across this link that said the problem was with the JVM - https://issues.apache.org/jira/browse/GROOVY-2382.
I thought Java ran on the JVM as well, and in Java I have had much larger strings. Just trying to understand. Can anyone shed some light on this for me? Thank you in advance.
Sean
This is a limitation on string literals, i.e. Strings in the source code.
It is not a problem for Strings read from a File or some other InputStream.
You should move your huge String into a separate text file.
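A minimal sketch of that, assuming a file name of huge.txt; a String built at runtime is only limited by Integer.MAX_VALUE, not by the literal limit:
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class BigStringFromFile {
    public static void main(String[] args) throws IOException {
        String huge = new String(
                Files.readAllBytes(Paths.get("huge.txt")),
                StandardCharsets.UTF_8);
        System.out.println(huge.length()); // can be far beyond 65,535
    }
}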
Looking at the source for java.lang.String, the limit is Integer.MAX_VALUE characters, which is pretty big.
So yes, there is a limit, but 100K is nowhere near it.
The limit the Groovy bug refers to is that of a string literal, which isn't the same as creating a very big string at runtime.
