I used a while loop to fetch messages from Amazon SQS. Partial code is as follows:
ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest(myQueueUrl);
while (true) {
    List<Message> messages = sqs.receiveMessage(receiveMessageRequest).getMessages();
    if (messages.size() > 0) {
        MemcachedClient c = new MemcachedClient(new BinaryConnectionFactory(), AddrUtil.getAddresses(memAddress));
        for (Message message : messages) {
            // get message from AWS SQS
            String messageid = message.getBody();
            String messageReceiptHandle = message.getReceiptHandle();
            sqs.deleteMessage(new DeleteMessageRequest(myQueueUrl, messageReceiptHandle));
            // get detailed info from memcache
            String result = null;
            String key = null;
            key = "message-" + messageid;
            result = (String) c.get(key);
        }
        c.shutdown();
    }
}
Will it cause a memory leak in such a case?
I checked using "ps aux". What I found is that the RSS (resident set size, the non-swapped physical memory that a task uses) is growing slowly.
You can't evaluate whether your Java application has a memory leak simply based on the RSS of the process. Most JVMs are pretty greedy: they would rather take more memory from the OS than spend a lot of work on garbage collection.
That said, your while loop doesn't seem to have any obvious memory "leaks" either, but that depends on what some of the method calls do (which aren't included above). If you are storing things in static variables, that can be a cause for concern, but if the only references are within the scope of the loop, you're probably fine.
The simplest way to know if you have a memory leak in a certain area of code is to rigorously exercise that code within a single run of your application (potentially set with a relatively low maximum heap size). If you get an OutOfMemoryError, you probably have a memory leak.
Sorry, but I don't see code here to remove messages from the message queue. Did you clean up the message list? If the DeleteMessageRequest removes the message from the queue, then you are trying to modify the message list that you are iterating over.
Also, you can get better memory-usage statistics with the VisualVM tool, which is part of the JDK now.
Related
I am currently working on a Spring-based API which has to transform CSV data and expose it as JSON.
It has to read big CSV files which will contain more than 500 columns and 2.5 million lines each.
I am not guaranteed to have the same header between files (each file can have a completely different header than another), so I have no way to create a dedicated class which would provide a mapping to the CSV headers.
Currently the API controller is calling a CSV service which reads the CSV data using a BufferedReader.
The code works fine on my local machine, but it is very slow: it takes about 20 seconds to process 450 columns and 40,000 lines.
To improve processing speed, I tried to implement multithreading with Callables, but I am not familiar with that kind of concept, so the implementation might be wrong.
Other than that, the API is running out of heap memory when running on the server. I know that a solution would be to increase the amount of available memory, but I suspect that the replace() and split() operations on strings in the Callables are responsible for consuming a large amount of heap memory.
So I actually have several questions:
#1. How could I improve the speed of the CSV reading ?
#2. Is the multithread implementation with Callable correct ?
#3. How could I reduce the amount of heap memory used in the process ?
#4. Do you know of a different approach to split at commas and replace the double quotes in each CSV line? Would StringBuilder be of any help here? What about StringTokenizer?
Below is the CSV method:
public static final int NUMBER_OF_THREADS = 10;

public static List<List<String>> readCsv(InputStream inputStream) {
    List<List<String>> rowList = new ArrayList<>();
    ExecutorService pool = Executors.newFixedThreadPool(NUMBER_OF_THREADS);
    List<Future<List<String>>> listOfFutures = new ArrayList<>();
    try {
        BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream, StandardCharsets.UTF_8));
        String line = null;
        while ((line = reader.readLine()) != null) {
            CallableLineReader callableLineReader = new CallableLineReader(line);
            Future<List<String>> futureCounterResult = pool.submit(callableLineReader);
            listOfFutures.add(futureCounterResult);
        }
        reader.close();
        pool.shutdown();
    } catch (Exception e) {
        // log Error reading csv file
    }
    for (Future<List<String>> future : listOfFutures) {
        try {
            List<String> row = future.get();
            rowList.add(row); // collect the parsed row
        } catch (ExecutionException | InterruptedException e) {
            // log Error CSV processing interrupted during execution
        }
    }
    return rowList;
}
And the Callable implementation
public class CallableLineReader implements Callable<List<String>> {

    private final String line;

    public CallableLineReader(String line) {
        this.line = line;
    }

    @Override
    public List<String> call() throws Exception {
        return Arrays.asList(line.replace("\"", "").split(","));
    }
}
I don't think that splitting this work onto multiple threads is going to provide much improvement, and may in fact make the problem worse by consuming even more memory. The main problem is using too much heap memory, and the performance problem is likely to be due to excessive garbage collection when the remaining available heap is very small (but it's best to measure and profile to determine the exact cause of performance problems).
The memory consumption would be less from the replace and split operations, and more from the fact that the entire contents of the file need to be read into memory in this approach. Each line may not consume much memory, but multiplied by millions of lines, it all adds up.
If you have enough memory available on the machine to assign a heap size large enough to hold the entire contents, that will be the simplest solution, as it won't require changing the code.
Otherwise, the best way to deal with large amounts of data in a bounded amount of memory is to use a streaming approach. This means that each line of the file is processed and then passed directly to the output, without collecting all of the lines in memory in between. This will require changing the method signature to use a return type other than List. Assuming you are using Java 8 or later, the Stream API can be very helpful. You could rewrite the method like this:
public static Stream<List<String>> readCsv(InputStream inputStream) {
    BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream, StandardCharsets.UTF_8));
    return reader.lines().map(line -> Arrays.asList(line.replace("\"", "").split(",")));
}
Note that this throws unchecked exceptions in case of an I/O error.
This will read and transform each line of input as needed by the caller of the method, and will allow previous lines to be garbage collected if they are no longer referenced. This then requires that the caller of this method also consume the data line by line, which can be tricky when generating JSON. The JakartaEE JsonGenerator API offers one possible approach. If you need help with this part of it, please open a new question including details of how you're currently generating JSON.
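If it helps, here is a rough sketch of what such a streaming consumer could look like with the Jakarta JsonGenerator API; CsvService is a placeholder for wherever the readCsv method above lives, and each row is written as a plain JSON array of strings since the headers vary between files:
import jakarta.json.Json;
import jakarta.json.stream.JsonGenerator;
import java.io.InputStream;
import java.io.OutputStream;

public class CsvToJsonStreamer {

    // Writes the CSV rows as a JSON array of arrays, one row at a time,
    // so no more than one row needs to be held in memory at once.
    public static void writeAsJson(InputStream csvInput, OutputStream jsonOutput) {
        try (JsonGenerator generator = Json.createGenerator(jsonOutput)) {
            generator.writeStartArray();
            CsvService.readCsv(csvInput).forEach(row -> {
                generator.writeStartArray();
                row.forEach(generator::write); // write(String value)
                generator.writeEnd();
            });
            generator.writeEnd();
        }
    }
}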
Instead of trying out a different approach, try to run with a profiler first and see where time is actually being spent. And use this information to change the approach.
Async-profiler is a very solid profiler (and free!) and will give you a very good impression of where time is being spent. It will also show the time spent on garbage collection, so you can easily see the ratio of CPU utilization caused by garbage collection. It also has the ability to do allocation profiling to figure out which objects are being created (and where).
For a tutorial see the following link.
Try using Spring Batch and see if it helps your scenario.
Ref : https://howtodoinjava.com/spring-batch/flatfileitemreader-read-csv-example/
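If you go that route, a rough configuration sketch of a FlatFileItemReader could look like the following; since the headers are not known up front, each line is mapped to a generic FieldSet, the file path is just a placeholder, and in a real job this reader would be plugged into a chunk-oriented step:
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.mapping.DefaultLineMapper;
import org.springframework.batch.item.file.mapping.PassThroughFieldSetMapper;
import org.springframework.batch.item.file.transform.DelimitedLineTokenizer;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.core.io.FileSystemResource;

public class CsvReaderConfig {

    public static FlatFileItemReader<FieldSet> csvReader() {
        // Comma is the default delimiter of DelimitedLineTokenizer.
        DefaultLineMapper<FieldSet> lineMapper = new DefaultLineMapper<>();
        lineMapper.setLineTokenizer(new DelimitedLineTokenizer());
        lineMapper.setFieldSetMapper(new PassThroughFieldSetMapper());

        FlatFileItemReader<FieldSet> reader = new FlatFileItemReader<>();
        reader.setResource(new FileSystemResource("data.csv")); // placeholder path
        reader.setLineMapper(lineMapper);
        return reader;
    }
}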
I have this existing API running in a loop which creates a connection and fetches a huge JSON response (1 GB) in each iteration from a service through RestTemplate, as follows:
ResponseEntity<String> response = restTemplate.exchange(
        uri.get().toString(), HttpMethod.POST, entity, String.class);
The response is then converted to a complex Java object through Gson. The problem with the above approach is that RestTemplate converts the input stream to a String through a StringBuffer, which ends up creating lots of char[] arrays and eventually runs out of memory (OOM) when the loop is iterated for too long, which is usually the case. In place of RestTemplate, I even used HttpClient; it does the same (expands a char array).
To solve the OOM issue, I refactored the API to stream the data to a file, creating a temp file in each iteration and converting it to objects as follows:
File targetFile = new File("somepath\\response.tmp");
FileUtils.copyInputStreamToFile(response.getEntity().getContent(), targetFile);
// read the temp file back for Gson
Reader reader = new BufferedReader(new InputStreamReader(new FileInputStream(targetFile), StandardCharsets.UTF_8));
List<Object> objects = gson.fromJson(reader, new TypeToken<List<Object>>(){}.getType());
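If loading the whole List<Object> from the temp file is still too much, one option would be to stream the elements with Gson's JsonReader instead; a rough sketch, where MyItem is a placeholder for the actual element type and the body is assumed to be a top-level JSON array:
import com.google.gson.Gson;
import com.google.gson.stream.JsonReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class StreamingJsonFileReader {

    private static final Gson gson = new Gson();

    // Reads the temp file as a top-level JSON array and deserializes one element
    // at a time, so only a single element is held in memory at once.
    public static void readItems(File targetFile) throws IOException {
        try (JsonReader jsonReader = new JsonReader(
                new InputStreamReader(new FileInputStream(targetFile), StandardCharsets.UTF_8))) {
            jsonReader.beginArray();
            while (jsonReader.hasNext()) {
                MyItem item = gson.fromJson(jsonReader, MyItem.class);
                // process(item) -- placeholder for per-element handling
            }
            jsonReader.endArray();
        }
    }
}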
Is this the way to go, or is there a more effective approach to solving such problems? Maybe pooling connections instead of creating a new one in each iteration (would that be a considerable change?)
Also, analysing the API with a 4 GB -Xmx / 2 GB -Xms heap, JVisualVM shows that the running thread has been allocated a huge number of bytes. (The screenshots of the thread allocations and of the heap size during the API run are omitted here.)
I have a big txt file with integers in it. Each line in the file has two integer numbers separated by whitespace. The size of the file is 63 MB.
Pattern p = Pattern.compile("\\s");
try (BufferedReader reader = new BufferedReader(new FileReader(filePath))) {
    String line;
    while ((line = reader.readLine()) != null) {
        String[] tokens = p.split(line);
        String s1 = new String(tokens[0]);
        String s2 = new String(tokens[1]);
        int startLabel = Integer.valueOf(s1) - 1;
        int endLabel = Integer.valueOf(s2) - 1;
        Vertex fromV = vertices.get(startLabel);
        Vertex toV = vertices.get(endLabel);
        Edge edge = new Edge(fromV, toV);
        fromV.addEdge(edge);
        toV.addEdge(edge);
        edges.add(edge);
        System.out.println("Edge from " + fromV.getLabel() + " to " + toV.getLabel());
    }
} catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOfRange(Arrays.java:2694)
at java.lang.String.<init>(String.java:203)
at java.lang.String.substring(String.java:1913)
at java.lang.String.subSequence(String.java:1946)
at java.util.regex.Pattern.split(Pattern.java:1202)
at java.util.regex.Pattern.split(Pattern.java:1259)
at SCC.main(SCC.java:25)
Why am I getting this exception? How can I change my code to avoid it?
EDIT:
I've already increased the heap size to 2048m.
What is consuming it? That's what I would like to know as well.
As far as I know, the JVM should only be allocating memory for the list of vertices, the set of edges, the buffer for the BufferedReader, and one small string, "line". I don't see where this OutOfMemoryError is coming from.
I read about the String.split() method. I think it's causing a memory leak, but I don't know what I should do about it.
What you should try first is to reduce the file until it is small enough that the program works. That will let you gauge just how large a problem you have.
Second, your problem is definitely unrelated to String#split since you are using it on just one line at a time. What is consuming your heap are the Vertex and Edge instances. You'll have to redesign this towards a smaller footprint, or completely overhaul your algorithms to be able to work with only a part of the graph in memory, the rest on the disk.
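For instance, here is a rough sketch of a primitive-array adjacency representation, assuming the number of vertices is known up front and the labels are 1-based as in the question's code; it avoids creating a Vertex and Edge object per line:
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class CompactGraphReader {

    // Builds adjacency lists as int arrays instead of Vertex/Edge objects,
    // which avoids the per-object overhead for millions of edges.
    public static int[][] readAdjacency(String filePath, int vertexCount) throws IOException {
        int[] degree = new int[vertexCount];
        // First pass: count the degree of each vertex.
        try (BufferedReader reader = new BufferedReader(new FileReader(filePath))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] tokens = line.trim().split("\\s+");
                degree[Integer.parseInt(tokens[0]) - 1]++;
                degree[Integer.parseInt(tokens[1]) - 1]++;
            }
        }
        int[][] adjacency = new int[vertexCount][];
        for (int v = 0; v < vertexCount; v++) {
            adjacency[v] = new int[degree[v]];
        }
        // Second pass: fill the adjacency arrays.
        int[] fill = new int[vertexCount];
        try (BufferedReader reader = new BufferedReader(new FileReader(filePath))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] tokens = line.trim().split("\\s+");
                int from = Integer.parseInt(tokens[0]) - 1;
                int to = Integer.parseInt(tokens[1]) - 1;
                adjacency[from][fill[from]++] = to;
                adjacency[to][fill[to]++] = from;
            }
        }
        return adjacency;
    }
}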
P.S. Just a general Java note: don't write
String s1 = new String(tokens[0]);
String s2 = new String(tokens[1]);
you just need
String s1 = tokens[0];
String s2 = tokens[1];
or even just use tokens[0] directly instead of s1, since it's about as clear.
Easiest way: increase your heap size:
Add -Xmx512m -Xms512m (or even more) arguments to the JVM.
Increase the heap memory limit, using the -Xmx JVM option.
More info here.
You are getting this exception because your program is storing too much data in the java heap.
Although your exception is showing up in the Pattern.split() method, the actual culprit could be any large memory user in your code, such as the graph you are building. Looking at what you provided, I suspect the graph data structure is storing much redundant data. You may want to research a more space-efficient graph structure.
If you are using the Sun JVM, try the JVM option -XX:+HeapDumpOnOutOfMemoryError to create a heap dump and analyze that for any heavy memory users, and use that analysis to optimize your code. See Using HeapDumpOnOutOfMemoryError parameter for heap dump for JBoss for more info.
If that's too much work for you, as others have indicated, try increasing the JVM heap space to a point where your program no longer crashes.
Whenever you get an OOM while trying to parse stuff, it's just that the method you are using is not scalable. Even though increasing the heap might solve the issue temporarily, it is not scalable. For example, if tomorrow your file size increases by an order of magnitude, you would be back at square one.
I would recommend trying to read the file in pieces: cache x lines of the file, read from it, clear the cache and redo the process.
You can use either Ehcache or Guava Cache.
The way you parse the string could be changed.
try (Scanner scanner = new Scanner(new FileReader(filePath))) {
    while (scanner.hasNextInt()) {
        int startLabel = scanner.nextInt();
        int endLabel = scanner.nextInt();
        scanner.nextLine(); // discard the rest of the line
        // use startLabel and endLabel
    }
}
I suspect the memory consumption is actually in the data structure you build rather than how you read the data, but this should make it more obvious.
I have the following Java class to read from a file containing many lines of tab-delimited strings. An example line looks like the following:
GO:0085044 GO:0085044 GO:0085044
The code reads each line and uses the split function to put the three substrings into an array; then it puts them into a two-level hash.
public class LCAReader {
    public static void main(String[] args) {
        Map<String, Map<String, String>> termPairLCA = new HashMap<String, Map<String, String>>();
        File ifile = new File("LCA1.txt");
        try {
            BufferedReader reader = new BufferedReader(new FileReader(ifile));
            String line = null;
            while ((line = reader.readLine()) != null) {
                String[] arr = line.split("\t");
                if (termPairLCA.containsKey(arr[0])) {
                    if (termPairLCA.get(arr[0]).containsKey(arr[1])) {
                        System.out.println("Error: Duplicate term in LCACache");
                    } else {
                        termPairLCA.get(arr[0]).put(new String(arr[1]), new String(arr[2]));
                    }
                } else {
                    Map<String, String> tempMap = new HashMap<String, String>();
                    tempMap.put(new String(arr[1]), new String(arr[2]));
                    termPairLCA.put(new String(arr[0]), tempMap);
                }
            }
            reader.close();
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
When I ran the program, I got the following runtime error after it had run for some time. I noticed the memory usage kept increasing.
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.regex.Pattern.compile(Pattern.java:1469)
at java.util.regex.Pattern.<init>(Pattern.java:1150)
at java.util.regex.Pattern.compile(Pattern.java:840)
at java.lang.String.split(String.java:2304)
at java.lang.String.split(String.java:2346)
at LCAReader.main(LCAReader.java:17)
The input file is almost 2 GB and the machine I ran the program on has 8 GB of memory. I also tried the -Xmx4096m parameter when running the program, but that did not help. So I guess there is a memory leak in my code, but I cannot find it.
Can anyone help me on this? Thanks in advance!
There's no memory leak; you're just trying to store too much data. 2GB of text will take 4GB of RAM as Java characters; plus there's about 48 bytes per String object overhead. Assuming the text is in 100 character lines, there's about another GB, for a total of 5GB -- and we haven't even counted the Map.Entry objects yet! You'd need a Java heap of at least, conservatively, 6GB to run this program on your data, and maybe more.
There are a couple of easy things you can do to improve this. First, lose the new String() constructors -- they're useless and just make the garbage collector work harder. Strings are immutable so you never need to copy them. Second, you could use the intern pool to share duplicate strings -- this may or may not help, depending on what the data actually looks like. But you could try, for example,
tempMap.put(arr[1].intern(), arr[2].intern() );
These simple steps might help a lot.
I don't see any leak; you simply need a very large amount of memory to store your map.
There is a very good tool for verifying this: make a heap dump with the option -XX:+HeapDumpOnOutOfMemoryError and import it into the Eclipse Memory Analyzer, which comes in a standalone version. It can show you the biggest retained objects and the reference tree that could be preventing the garbage collector from doing its job.
In addition, a profiler such as the NetBeans Profiler can give you a lot of interesting real-time information (for instance, to check the number of String and char[] instances).
Also, it is good practice to split your code into different classes, each having a different responsibility: the "two-key map" class on one side and a "parser" class on the other; it should make debugging easier...
It is definitely not a good idea to store this huge map in RAM... or you need to run a benchmark with some smaller files and extrapolate to estimate the RAM your system needs to fit your worst case... and set -Xmx to the proper value.
Why don't you use a key-value store such as Berkeley DB? It is simpler than a relational DB and should fit your need for two-level indexing exactly.
Check this post for the choice of the store: key-value store suggestion
Good luck
You probably shouldn't use String.split and store the information as plain Strings, as this generates lots of String objects on the fly.
Try using a char-based approach, since your format seems rather fixed and you know the exact indices of the different data points on a line.
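For example, a minimal sketch of tokenizing one line without String.split, assuming exactly two tab separators per line as in the example above:
// Splits a line of the form "term1\tterm2\tterm3" without regular expressions.
static String[] splitFixedLine(String line) {
    int firstTab = line.indexOf('\t');
    int secondTab = line.indexOf('\t', firstTab + 1);
    return new String[] {
        line.substring(0, firstTab),
        line.substring(firstTab + 1, secondTab),
        line.substring(secondTab + 1)
    };
}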
If you're a bit more into experimenting, you could try a NIO-backed approach with a memory-mapped DirectByteBuffer or a CharBuffer that is used to traverse the file. There you could just record the indices of the different data points in marker objects and only load the real String data later in the process, when needed.
So I'm using Java to do multi-way external merge sorts of large on-disk files of line-delimited tuples. Batches of tuples are read into a TreeSet, which are then dumped into on-disk sorted batches. Once all of the data have been exhausted, these batches are then merge-sorted to the output.
Currently I'm using magic numbers for figuring out how many tuples we can fit into memory. This is based on a static figure indicating roughly how many tuples can fit per MB of heap space, and on how much heap space is available using:
long max = Runtime.getRuntime().maxMemory();
long used = Runtime.getRuntime().totalMemory();
long free = Runtime.getRuntime().freeMemory();
long space = free + (max - used);
However, this does not always work so well since we may be sorting different length tuples (for which the static tuple-per-MB figure might be too conservative) and I now want to use flyweight patterns to jam more in there, which may make the figure even more variable.
So I'm looking for a better way to fill the heap-space to the brim. Ideally the solution should be:
reliable (no risk of heap-space exceptions)
flexible (not based on static numbers)
efficient (e.g., not polling runtime memory estimates after every tuple)
Any ideas?
Filling the heap to the brim might be a bad idea due to garbage collector thrashing. (As the memory gets nearly full, the efficiency of garbage collection approaches 0, because the effort for collection depends on heap size, but the amount of memory freed depends on the size of the objects identified as unreachable.)
However, if you must, can't you simply do it as follows?
for (;;) {
    long freeSpace = getFreeSpace();
    if (freeSpace < 1000000) break;
    while (freeSpace > 0) {
        treeSet.add(readRecord());
        freeSpace -= MAX_RECORD_SIZE;
    }
}
The calls to discover the free memory will be rare, so shouldn't tax performance much. For instance, if you have 1 GB heap space, and leave 1MB empty, and MAX_RECORD_SIZE is ten times average record size, getFreeSpace() will be invoked a mere log(1000) / -log(0.9) ~= 66 times.
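For completeness, getFreeSpace() here could simply reuse the Runtime arithmetic from the question; a minimal sketch:
// Estimates memory still obtainable by the JVM: currently free heap plus
// heap the JVM has not yet claimed from the OS (same formula as in the question).
static long getFreeSpace() {
    Runtime rt = Runtime.getRuntime();
    return rt.freeMemory() + (rt.maxMemory() - rt.totalMemory());
}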
Why bother with calculating how many items you can hold? How about letting Java tell you when you've used up all your memory, catching the exception and continuing? For example,
// prepare output medium now so we don't need to worry about having enough
// memory once the treeset has been filled.
BufferedWriter writer = new BufferedWriter(new FileWriter("output"));

Set<Object> set = new TreeSet<>();
int linesRead = 0;
{
    BufferedReader reader = new BufferedReader(new FileReader("input"));
    try {
        String line = reader.readLine();
        while (line != null) {
            set.add(parseTuple(line));
            linesRead += 1;
            line = reader.readLine();
        }
        // end of file reached
        linesRead = -1;
    } catch (OutOfMemoryError e) {
        // while loop broken
    } finally {
        reader.close();
    }
    // since reader and line were declared in a block their resources will
    // now be released
}

// output treeset to file
for (Object o : set) {
    writer.write(o.toString());
}
writer.close();

// use linesRead to find position in file for next pass
// or continue on to next file, depending on value of linesRead
If you still have trouble with memory, just make the reader's buffer extra large so as to reserve more memory.
The default size for the buffer in a BufferedReader is 4096 bytes. So when finishing reading you will release upwards of 4k of memory. After this your additional memory needs will be minimal. You need enough memory to create an iterator for the set, let's be generous and assume 200 bytes. You will also need memory to store the string output of your tuples (but only temporarily). You say the tuples contain about 200 characters. Let's double that to take account for separators -- 400 characters, which is 800 bytes. So all you really need is an additional 1k bytes. So you're fine as you've just released 4k bytes.
The reason you don't need to worry about the memory used to store the string output of your tuples is because they are short lived and only referred to within the output for loop. Note that the Writer will copy the contents into its buffer and then discard the string. Thus, the next time the garbage collector runs the memory can be reclaimed.
I've checked, and an OOME in add will not leave a TreeSet in an inconsistent state; the memory allocation for a new Entry (the internal implementation for storing a key/value pair) happens before the internal representation is modified.
You can really fill the heap to the brim using direct memory writing (it does exist in Java!). It's in sun.misc.Unsafe, but isn't really recommended for use. See here for more details. I'd probably advise writing some JNI code instead, and using existing C++ algorithms.
I'll add this as an idea I was playing around with, involving using a SoftReference as a "sniffer" for low memory.
SoftReference<byte[]> sniffer = new SoftReference<>(new byte[8192]);
while (iter.hasNext()) {
    tuple = iter.next();
    treeset.add(tuple);
    if (sniffer.get() == null) {
        dump(treeset);
        treeset.clear();
        sniffer = new SoftReference<>(new byte[8192]);
    }
}
This might work well in theory, but I don't know the exact behaviour of SoftReference.
All soft references to softly-reachable objects are guaranteed to have been cleared before the virtual machine throws an OutOfMemoryError. Otherwise no constraints are placed upon the time at which a soft reference will be cleared or the order in which a set of such references to different objects will be cleared. Virtual machine implementations are, however, encouraged to bias against clearing recently-created or recently-used soft references.
I would like to hear feedback, as it seems to me like an elegant solution, although behaviour might vary between VMs.
Testing on my laptop, I found that the soft reference is cleared infrequently, but sometimes it is cleared too early, so I'm thinking of combining it with meriton's answer:
SoftReference<byte[]> sniffer = new SoftReference<>(new byte[8192]);
while (iter.hasNext()) {
    tuple = iter.next();
    treeset.add(tuple);
    if (sniffer.get() == null) {
        long free = MemoryManager.estimateFreeSpace();
        if (free < MIN_SAFE_MEMORY) {
            dump(treeset);
            treeset.clear();
            sniffer = new SoftReference<>(new byte[8192]);
        }
    }
}
Again, thoughts welcome!