Writing to a file sequentially vs. in bulk - Java

I have a program which writes some 8 million lines of data to a flat file. As of now, the program calls BufferedWriter.write for each record, and I was planning to write in bulk with the following strategy:
Keep a data structure (I used an array) to hold a specific number of records.
Write the records to the file from the array. Here is the code snippet (array is the name of the array which stores the records, and thresholdCount is the trigger for the writing process):
if (array.length == thresholdCount) {
    writeBulk(array);
}

public void writeBulk(String[] inpArray) {
    for (String line : inpArray) {
        if (line != null) {
            try {
                writer.write(line + "\n");
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
However, I am not seeing much performance improvement. I want to know if there is a way to determine the optimal threshold count.
I was also planning to tune the code further so that each element in the array stores a concatenation of some n records before calling the bulk method. For example, an array of length 5000 would actually contain 50000 records, with each index holding 10 records. However, before doing so, I would like an expert opinion.

Writes to files are already buffered in a similar fashion before they are pushed to disk (unless you flush -- which actually doesn't always do exactly that either). Thus pre-buffering the writes will not speed up the overall process. Note that some I/O classes try to do immediate writes by inserting flush requests after each write. For those special cases pre-buffering can sometimes help, but usually you just use a buffered version of the class in the first place rather than buffering manually yourself.
If you were writing somewhere other than the end of the file, then you could see an improvement, as writing into the middle of a file wouldn't need to copy the contents of the already-flushed entries sitting on your hard disk.
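For reference, a minimal sketch of relying on the built-in buffering rather than batching records yourself; the file name, record text, and 64K buffer size here are made-up examples:

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

public class BufferedWriteSketch {
    public static void main(String[] args) throws IOException {
        // BufferedWriter accumulates characters in memory and hands them to the
        // underlying FileWriter in large chunks; 1 << 16 chars is an arbitrary size
        // (the default is 8192).
        try (BufferedWriter writer = new BufferedWriter(new FileWriter("out.txt"), 1 << 16)) {
            for (int i = 0; i < 8_000_000; i++) {
                writer.write("record " + i);
                writer.newLine();           // no flush() inside the loop
            }
        } // close() flushes whatever is still buffered
    }
}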

Related

Performance issues reading CSV files in a Java (Spring Boot) application

I am currently working on a Spring-based API which has to transform CSV data and expose it as JSON.
It has to read big CSV files which will contain more than 500 columns and 2.5 million lines each.
I am not guaranteed to have the same header between files (each file can have a completely different header than another), so I have no way to create a dedicated class which would provide a mapping to the CSV headers.
Currently the API controller calls a CSV service which reads the CSV data using a BufferedReader.
The code works fine on my local machine, but it is very slow: it takes about 20 seconds to process 450 columns and 40,000 lines.
To improve processing speed, I tried to implement multithreading with Callables, but I am not familiar with that kind of concept, so the implementation might be wrong.
Other than that, the API runs out of heap memory when running on the server. I know that a solution would be to increase the amount of available memory, but I suspect that the replace() and split() operations on strings performed in the Callables are responsible for consuming a large amount of heap memory.
So I actually have several questions:
1. How could I improve the speed of the CSV reading?
2. Is the multithreaded implementation with Callable correct?
3. How could I reduce the amount of heap memory used in the process?
4. Do you know of a different approach to split at commas and replace the double quotes in each CSV line? Would StringBuilder be of any help here? What about StringTokenizer?
Below is the CSV method:
public static final int NUMBER_OF_THREADS = 10;

public static List<List<String>> readCsv(InputStream inputStream) {
    List<List<String>> rowList = new ArrayList<>();
    ExecutorService pool = Executors.newFixedThreadPool(NUMBER_OF_THREADS);
    List<Future<List<String>>> listOfFutures = new ArrayList<>();
    try {
        BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream, StandardCharsets.UTF_8));
        String line = null;
        while ((line = reader.readLine()) != null) {
            CallableLineReader callableLineReader = new CallableLineReader(line);
            Future<List<String>> futureCounterResult = pool.submit(callableLineReader);
            listOfFutures.add(futureCounterResult);
        }
        reader.close();
        pool.shutdown();
    } catch (Exception e) {
        //log Error reading csv file
    }
    for (Future<List<String>> future : listOfFutures) {
        try {
            List<String> row = future.get();
        } catch (ExecutionException | InterruptedException e) {
            //log Error CSV processing interrupted during execution
        }
    }
    return rowList;
}
And the Callable implementation
public class CallableLineReader implements Callable<List<String>> {
    private final String line;

    public CallableLineReader(String line) {
        this.line = line;
    }

    @Override
    public List<String> call() throws Exception {
        return Arrays.asList(line.replace("\"", "").split(","));
    }
}
I don't think that splitting this work onto multiple threads is going to provide much improvement, and may in fact make the problem worse by consuming even more memory. The main problem is using too much heap memory, and the performance problem is likely to be due to excessive garbage collection when the remaining available heap is very small (but it's best to measure and profile to determine the exact cause of performance problems).
The memory consumption would be less from the replace and split operations, and more from the fact that the entire contents of the file need to be read into memory in this approach. Each line may not consume much memory, but multiplied by millions of lines, it all adds up.
If you have enough memory available on the machine to assign a heap size large enough to hold the entire contents, that will be the simplest solution, as it won't require changing the code.
Otherwise, the best way to deal with large amounts of data in a bounded amount of memory is to use a streaming approach. This means that each line of the file is processed and then passed directly to the output, without collecting all of the lines in memory in between. This will require changing the method signature to use a return type other than List. Assuming you are using Java 8 or later, the Stream API can be very helpful. You could rewrite the method like this:
public static Stream<List<String>> readCsv(InputStream inputStream) {
    BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream, StandardCharsets.UTF_8));
    return reader.lines().map(line -> Arrays.asList(line.replace("\"", "").split(",")));
}
Note that this throws unchecked exceptions in case of an I/O error.
This will read and transform each line of input as needed by the caller of the method, and will allow previous lines to be garbage collected if they are no longer referenced. This then requires that the caller of this method also consume the data line by line, which can be tricky when generating JSON. The JakartaEE JsonGenerator API offers one possible approach. If you need help with this part of it, please open a new question including details of how you're currently generating JSON.
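As an illustration only, a minimal sketch of a caller that consumes this stream row by row instead of collecting it into a list; the file name, the CsvService class name, and the per-row println are placeholders:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.List;
import java.util.stream.Stream;

public class StreamingCallerSketch {
    public static void main(String[] args) throws IOException {
        try (FileInputStream in = new FileInputStream("data.csv")) {
            // CsvService is a placeholder name for the class containing the readCsv method above
            Stream<List<String>> rows = CsvService.readCsv(in);
            // Each row becomes eligible for garbage collection as soon as this
            // consumer is done with it; nothing accumulates in memory.
            rows.forEach(row -> System.out.println(row.size() + " fields"));
        }
    }
}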
Instead of trying out a different approach, try to run with a profiler first and see where time is actually being spent. And use this information to change the approach.
Async-profiler is a very solid profiler (and free!) and will give you a very good impression of where time is being spent. It will also show the time spent on garbage collection, so you can easily see the ratio of CPU utilization caused by garbage collection. It also has the ability to do allocation profiling to figure out which objects are being created (and where).
For a tutorial see the following link.
Try using Spring Batch and see if it helps your scenario.
Ref : https://howtodoinjava.com/spring-batch/flatfileitemreader-read-csv-example/

Java Memory issue for huge CSV file

I am developing a system which loads a huge CSV file (with more than 1 million lines) and saves it into a database. Every line also has more than one thousand fields. A CSV file is considered one batch, and each line is considered its child object. While adding objects, every object is saved into the List of a single batch, and at some point I run out of memory because the List ends up holding more than 1 million objects. I cannot split the file into N parts, since there are dependencies between lines which are not in serial order (any line can have a dependency on other lines).
Following is the general logic:
Batch batch = new Batch();
while (csvLine != null) {
    String[] values = csvLine.split(",", -1);
    Transaction txn = new Transaction();
    txn.setType(values[0]);
    txn.setAmount(values[1]);
    /*
       There are more than one thousand transaction fields in one line
    */
    batch.addTransaction(txn);
    // csvLine = ... (read the next line here)
}
batch.save();
Is there any way we can handle this type of condition with the server having low memory?
In the old times, we used to process large quantities of data stored on sequential tapes with little memory and disk. But it took a loooong time!
Basically, you build a buffer of lines that can fit in your memory, scan the whole file to resolve dependencies, and fully process those lines. Then you iterate on the next buffer until you have processed the whole file. It requires a full read of the file per buffer, but it saves memory.
There may be another problem here, because you want to store all records in a single batch. The batch will need enough memory to store all the records, so here again you risk exhausting memory. But you can again use the good old methods and save many smaller batches.
If you want to make sure that everything will be either fully inserted in database or everything will be rejected, you can simply use a transaction:
declare transaction at the beginning of your job
save all your batches inside this single transaction
commit the transaction when everything is done
Professional-grade databases (MySQL, PostgreSQL, Oracle, etc.) can use rollback segments on disk to process one transaction without exhausting memory. Of course it is far slower than in-memory operations (not to mention the case where, for any reason, you have to roll back such a transaction!), but at least it works unless you exhaust the available physical disk...
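A minimal JDBC sketch of that transactional pattern, assuming a purely illustrative PostgreSQL connection URL and txn_import table (the real schema and chunking logic would come from your application):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class TransactionalImportSketch {
    public static void importAll(List<String[]> parsedLines) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password")) {
            conn.setAutoCommit(false); // declare the transaction at the beginning of the job
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO txn_import (type, amount) VALUES (?, ?)")) {
                for (String[] values : parsedLines) {
                    // every record (or smaller batch of records) is saved inside this single transaction
                    ps.setString(1, values[0]);
                    ps.setString(2, values[1]);
                    ps.executeUpdate();
                }
                conn.commit();   // commit the transaction when everything is done
            } catch (SQLException e) {
                conn.rollback(); // everything is rejected together
                throw e;
            }
        }
    }
}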
Dedicate a separate database table just for the CSV import. Maybe with additional fields for those cross-references you mentioned.
If you need to analyze CSV fields in Java, limit the number of distinct value instances by caching them:
public class SharedStrings {
    private Map<String, String> sharedStrings = new HashMap<>();

    public String share(String s) {
        if (s.length() <= 15) {
            String t = sharedStrings.putIfAbsent(s, s); // since Java 8
            if (t != null) {
                s = t;
            }
            /*
            // Older Java:
            String t = sharedStrings.get(s);
            if (t == null) {
                sharedStrings.put(s, s);
            } else {
                s = t;
            }
            */
        }
        return s;
    }
}
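A quick usage sketch, assuming the values come from the kind of split line shown in the question:

SharedStrings cache = new SharedStrings();
String[] values = csvLine.split(",", -1);
for (int i = 0; i < values.length; i++) {
    // Short, frequently repeated values (transaction types, currency codes, flags...)
    // now all point at a single shared instance instead of millions of duplicates.
    values[i] = cache.share(values[i]);
}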
In your case, with long records, it might even make sense to compress each line you read, as bytes, into a shorter byte array with a GZIPOutputStream.
But then a database seems more logical.
The following will possibly not apply if you are using all fields of a csvLine.
String#split uses String#substring, which in turn does not create a new string but keeps the original string in memory and references the respective portion.
So this line would keep the original string in memory:
String a = "...very long and comma separated";
String[] split = a.split(",");
String b = split[1];
a = null;
So if you are not using all the data of the csvLine, you should wrap every entry of values in a new String; i.e., in the above example you would do
String b = new String(split[1]);
otherwise the GC is unable to free string a.
I ran into this while I was extracting one column of a CSV file with millions of lines.

How to avoid frequent file writes in Java

I have the following problem:
In a loop, on each iteration I need to write a large string into one file (or a temporary file), and then a process takes the file as an argument for the next step.
Something along the lines of:
for (int i = 0; i < n; i++) {
    File f = File.createTempFile("xxx", "xxx");
    // write into f etc.
    String result = func(f);
}
Since creating a file and writing a string into it each time seems quite costly, is there any alternative method?
If these Strings do not need to be immediately persisted to a File, you could store them in memory, some sort of Collection, e.g. an ArrayList. And when the list gets "large", say, every tenth time, write all ten at once to a file. This cuts file creation by 10X.
The danger is that if there is a crash you may lose up to 9 values.
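A minimal sketch of that idea, assuming each group of ten strings can go into its own temporary file (the batch size and file naming are arbitrary examples):

import java.io.BufferedWriter;
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.List;

public class BatchedTempFileSketch {
    private static final int BATCH_SIZE = 10;          // arbitrary example value
    private final List<String> pending = new ArrayList<>();

    public void add(String s) throws IOException {
        pending.add(s);
        if (pending.size() >= BATCH_SIZE) {
            flush();
        }
    }

    public void flush() throws IOException {
        if (pending.isEmpty()) {
            return;
        }
        File f = File.createTempFile("batch", ".txt");
        try (BufferedWriter w = Files.newBufferedWriter(f.toPath())) {
            for (String s : pending) {
                w.write(s);
                w.newLine();
            }
        }
        pending.clear();   // up to BATCH_SIZE - 1 values can be lost on a crash before flush()
    }
}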

Buffered reader and priority queue working together?

I'm dealing with a program that reads in items from a .csv file and writes them to a remote database. I'm trying to multithread the program, and to that end I have created two processing threads with distinct connections. The .csv file is read into a buffered reader, and the contents of the buffered reader are processed. However, the threads keep duplicating the data (writing two copies of every tuple into the database).
I've been trying to figure out how to mutex a buffer in Java, and the closest thing I could come up with is a priority queue.
My question is, can you use a buffered reader to read a file into a priority queue line by line? i.e.:
public void readFile(Connection connection) {
    BufferedReader bufReader = null;
    try {
        bufReader = new BufferedReader(new FileReader(RECS_FILE));
        bufReader.readLine(); // skip header line
        String line;
        while ((line = bufReader.readLine()) != null) {
            // extract fields from each line of the RECS_FILE
            Pattern pattern = Pattern.compile("\"([^\"]+)\",\"([^\"]+)\",\"([^\"]+)\",\"([^\"]+)\"");
            Matcher matcher = pattern.matcher(line);
            if (!matcher.matches()) {
                System.err.println("Unexpected line in " + RECS_FILE + ": \"" + line + "\"");
                continue;
            }
            String stockSymbol = matcher.group(1);
            String recDateStr = matcher.group(2);
            String direction = matcher.group(3);
            String completeUrl = matcher.group(4);
            // create recommendation object to populate required fields
            // and insert it into the database
            System.out.println("Inserting to DB!");
            Recommendation rec = new Recommendation(stockSymbol, recDateStr, direction, completeUrl);
            rec.insertToDb(connection);
        }
    } catch (IOException e) {
        System.err.println("Unable to read " + RECS_FILE);
        e.printStackTrace();
    } finally {
        if (bufReader != null) {
            try {
                bufReader.close();
            } catch (IOException e) {
            }
        }
    }
}
You'll see that a buffered reader is used to read in the .csv file. Is there a way to set up a priority queue outside the function such that the buffered reader is putting tuples in a priority queue, and each program thread then accesses the priority queue?
Buffered readers, or indeed any readers or streams, are by their nature for single-thread use only. Priority queues are a completely separate structure which, depending on the actual implementation, may or may not be usable by multiple threads. So the short answer is: no, they're two completely unrelated concepts.
To address your original problem: you can't use streamed file access with multiple threads. You could use RandomAccessFile in theory, except that your lines aren't fixed width, and therefore you can't seek() to the beginning of a line without reading everything in the file up to that point. Moreover, even if your data consisted of fixed-width records, it might be impractical to read a file with two different threads.
The only thing you can parallelise is the database insert, with the obvious caveat that you lose transactionality, as you have to use separate transactions for each thread. (If you don't, you have to synchronise your database operations, which once again means that you haven't won anything.)
So a solution can be to read the lines from one thread and pass on the strings to a processing method invoked via an ExecutorService. That would scale well, but again there is a caveat: the increased overhead of database locking will probably nullify the advantage of using multiple threads.
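A rough sketch of that single-reader / worker-pool shape; the file name, insertRow method, and pool size are placeholders, and each worker would need its own connection and transaction as noted above:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SingleReaderInsertSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try (BufferedReader reader = new BufferedReader(new FileReader("recs.csv"))) {
            reader.readLine(); // skip header line
            String line;
            while ((line = reader.readLine()) != null) {
                final String row = line;
                // Only the database insert is parallelised; the file is read by one thread.
                pool.submit(() -> insertRow(row));
            }
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    private static void insertRow(String line) {
        // placeholder: parse the line and insert it using this worker's own Connection
    }
}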
The ultimate lesson is probably not to overcomplicate things: try the simple way and only look for a more complex solution if the simple one didn't work. The other lesson is perhaps that multithreading doesn't help I/O-bound programs.
@Biziclop's answer is spot on (+1) but I thought I'd add something about bulk database inserts.
In case you didn't know, turning off database auto-commit in most SQL databases is a big win during bulk inserts. Typically after each SQL statement, the database commits it to disk storage which updates indexes and makes all of the changes to the disk structures. By turning off this auto-commit, the database only has to make these changes when you call commit at the end. Typically you would do something like:
conn.setAutoCommit(false);
for (Recommendation rec : toBeInsertedList) {
    rec.insertToDb(connection);
}
conn.setAutoCommit(true);
In addition, if auto-commit is not supported by your database, often wrapping the inserts in a transaction accomplishes the same thing.
Here are some other answers that may help:
Slow bulk insert for table with many indexes
Clarification of Java/SQLite batch and auto-commit

Fastest Java way to remove the first/top line of a file (like a stack)

I am trying to improve an external sort implementation in Java.
I have a bunch of BufferedReader objects open for temporary files. I repeatedly remove the top line from each of these files. This pushes the limits of the Java heap.
I would like a more scalable method of doing this without losing speed to a bunch of constructor calls.
One solution is to only open files when they are needed, then read the first line and then delete it. But I am afraid that this will be significantly slower.
So, using the Java libraries, what is the most efficient way of doing this?
--Edit--
For an external sort, the usual method is to break a large file up into several chunk files and sort each of the chunks. Then treat the sorted files like buffers: pop the top item from each file, and the smallest of all those is the global minimum. Continue until all items have been consumed.
http://en.wikipedia.org/wiki/External_sorting
My temporary files (buffers) are basically BufferedReader objects. The operations performed on these files are the same as stack/queue operations (peek and pop, no push needed).
I am trying to make these peek and pop operations more efficient. This is because using many BufferedReader objects takes up too much space.
I'm away from my compiler at the moment, but I think this will work. Edit: works fine.
I urge you to profile it and see. I bet the constructor calls are going to be nothing compared to the file I/O and your comparison operations.
public class FileStack {
    private File file;
    private long position = 0;
    private String cache = null;

    public FileStack(File file) {
        this.file = file;
    }

    public String peek() throws IOException {
        if (cache != null) {
            return cache;
        }
        BufferedReader r = new BufferedReader(new FileReader(file));
        try {
            r.skip(position);
            cache = r.readLine();
            return cache;
        } finally {
            r.close();
        }
    }

    public String pop() throws IOException {
        String r = peek();
        if (r != null) {
            // if you have \r\n line endings, you may need +2 instead of +1
            // if lines could end either way, you'll need something more complicated
            position += r.length() + 1;
            cache = null;
        }
        return r;
    }
}
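For illustration, a hedged sketch of how the merge phase of the external sort might combine this FileStack with a PriorityQueue; the chunk-file list and output file are placeholders, and the comparator stays cheap because peek() caches the current line:

import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.util.List;
import java.util.PriorityQueue;

public class MergeSketch {
    public static void merge(List<File> sortedChunks, File output) throws IOException {
        // Order the stacks by their current top line.
        PriorityQueue<FileStack> heap = new PriorityQueue<>((a, b) -> {
            try {
                return a.peek().compareTo(b.peek());
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        });
        for (File chunk : sortedChunks) {
            FileStack stack = new FileStack(chunk);
            if (stack.peek() != null) {
                heap.add(stack);
            }
        }
        try (BufferedWriter out = new BufferedWriter(new FileWriter(output))) {
            while (!heap.isEmpty()) {
                FileStack smallest = heap.poll();   // stack whose top line is the global minimum
                out.write(smallest.pop());
                out.newLine();
                if (smallest.peek() != null) {      // re-insert so it is re-ordered by its new top line
                    heap.add(smallest);
                }
            }
        }
    }
}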
If heap space is the main concern, use the second form of the BufferedReader constructor, BufferedReader(Reader in, int sz), and specify a small buffer size:
http://java.sun.com/j2se/1.5.0/docs/api/java/io/BufferedReader.html#BufferedReader(java.io.Reader, int)
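For example (the 512-character buffer size and the chunkFile variable are just placeholders):

// Each open chunk file then costs roughly a 512-char buffer instead of the 8192-char default.
BufferedReader reader = new BufferedReader(new FileReader(chunkFile), 512);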
I have a bunch of BufferedReader objects open for temporary files. I repeatedly remove the top line from each of these files. This pushes the limits of the Java's Heap.
This is a really surprising claim. Unless you have thousands of files open at the same time, there is no way that should stress the heap. The default buffer size for a BufferedReader is 8192 bytes, and there should be little extra space required. 8192 * 1000 is only ~8 MB, and that is tiny compared with a typical Java application's memory usage.
Consider the possibility that something else is causing the heap problems. For example, if your program retained references to each line that it read, THAT would lead to heap problems.
(Or maybe your notion of what is "too much space" is unrealistic.)
One solution is to only open files when they are needed, then read the first line and then delete it. But I am afraid that this will be significantly slower.
There is no doubt that it would be significantly slower! There is simply no efficient way to delete the first line from a file. Not in Java, or in any other language. Deleting characters from the beginning or middle of a file entails copying the file to a new one while skipping over the characters that need to be removed. There is no faster alternative.
