How to speed up Excel reading/writing - Java

I am using Apache POI to read/write an Excel file for my company as an intern here. My program goes through the Excel file, which is laid out as a big square with computer names across the top row and user names down the left column: 240 computers and 342 users. Every sheet[computer][user] cell starts at 0, and the program calls PSLoggedon for each computer, takes the username(s) currently logged on, and increments their cell, so after running it for a month it shows who is logged in the most to each computer. So far it runs in about 25 minutes, since I use socket.connect to check that each computer is reachable before actually calling PSLoggedon.
Without reading or writing the Excel file at all, just making the PSLoggedon calls to every computer takes about 9 minutes, so the reading and writing apparently takes 10-15 minutes. The thing is, I call PSLoggedon on a computer, then open the Excel file to find the [x][y] spot for that [computer][user], write the incremented value to it, and then close the file again. So I suppose the reason it takes this long is that it opens and closes the file so much? I could be completely wrong. But I can't think of a way to make this faster by reading/writing everything at once and only opening and closing the file once. Any ideas?

Normally Apache POI is very fast. If you are running into performance issues, check the following points:
POI's logging might be on; you need to turn it off.
You can add this -D option to your JVM settings to do this (see the launch example after this list):
-Dorg.apache.poi.util.POILogger=org.apache.poi.util.NullLogger
You may have set your JVM heap to too low a value; try increasing it.
Prefer XLS over XLSX.
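For example, assuming your program is packaged as a runnable jar (the jar name and the heap size below are placeholders), a launch command combining the first two suggestions might look like this:

java -Xmx1g -Dorg.apache.poi.util.POILogger=org.apache.poi.util.NullLogger -jar usage-tracker.jar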

Get HSQLDB (or another in-process database, but this is what I've used in the past). Add it to your build.
You can now create either a file-based or in-memory database (I would use file-based, as it lets you persist state between runs) simply by using JDBC. Create a table with the columns User, Computer, Count.
In your reading thread(s), INSERT or UPDATE that table whenever you find a user with PSLoggedon.
Once your data collection is complete, SELECT Computer, User, Count FROM Data ORDER BY Computer, User (or switch the order depending on your Excel file layout), loop through the ResultSet, and write the results directly.
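A minimal sketch of that approach, assuming HSQLDB 2.x with a file-based database (the database name, table name and column names are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class UsageStore {
    public static void main(String[] args) throws SQLException {
        // File-based HSQLDB database, so the counts survive between runs.
        try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:file:usagedb", "SA", "")) {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS usage_data ("
                         + " computer VARCHAR(100), username VARCHAR(100), cnt INT,"
                         + " PRIMARY KEY (computer, username))");
            }
            recordLogin(conn, "PC-042", "jsmith");   // call this for every PSLoggedon result
        }
    }

    static void recordLogin(Connection conn, String computer, String user) throws SQLException {
        // Try to bump an existing counter first; fall back to inserting a new row.
        try (PreparedStatement upd = conn.prepareStatement(
                "UPDATE usage_data SET cnt = cnt + 1 WHERE computer = ? AND username = ?")) {
            upd.setString(1, computer);
            upd.setString(2, user);
            if (upd.executeUpdate() == 0) {
                try (PreparedStatement ins = conn.prepareStatement(
                        "INSERT INTO usage_data (computer, username, cnt) VALUES (?, ?, 1)")) {
                    ins.setString(1, computer);
                    ins.setString(2, user);
                    ins.executeUpdate();
                }
            }
        }
    }
}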

This is an old question, but from what I see:
Since you are sampling and using Excel, is it safe to assume that consistency and atomicity aren't critical? You're just estimating fractional usage and don't care if a user logged in and logged out between observations.
Is the Excel file stored over a slow network link? Opening and closing a file 240 times could bring significant overhead. How about the following:
You need to open the Excel file once to get the list of computers. At that time, just snapshot the entire contents of the matrix into a Map<ComputerName, Map<UserName, Count>>. Also get a List<ComputerName> and List<UserName> to remember the row/column headings. The entire spreadsheet holds fewer than 90,000 integers, so there is no need to bring in heavy database machinery.
9 minutes for 240 computers, single-threaded, is roughly 2.25 seconds per computer. Is that the expected throughput of PSLoggedOn? Can you create a thread pool and query all 240 computers at once or in a small number of rounds?
Then, parse the results, increment your map and dump it back to the Excel file. Is there a possibility that you might see new users that were not previously in the Excel? Those will need to be added to the Map and List<UserName>.
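A minimal sketch of that in-memory tally (class and method names are illustrative, and plain String/Integer stand in for the ComputerName/UserName/Count placeholders above):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class UsageTally {
    private final Map<String, Map<String, Integer>> counts = new HashMap<>(); // computer -> (user -> count)
    private final List<String> computers = new ArrayList<>();                 // column headings from the sheet
    private final List<String> users = new ArrayList<>();                     // row headings from the sheet

    // Call once per user that PSLoggedOn reports for a computer.
    public void recordLogin(String computer, String user) {
        Map<String, Integer> perUser = counts.get(computer);
        if (perUser == null) {
            perUser = new HashMap<>();
            counts.put(computer, perUser);
        }
        Integer current = perUser.get(user);
        perUser.put(user, current == null ? 1 : current + 1);
        if (!users.contains(user)) {
            users.add(user);   // a user that was not previously in the spreadsheet
        }
    }
}

Read the spreadsheet once to populate the lists and the map, query every machine (optionally from a thread pool as suggested above), then write the whole map back to the Excel file in a single pass at the end.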

Related

Java - read, process, write with large excel

I have a large spreadsheet. It has 10 sheets, each with 1 million rows. With Java, I need to run an algorithm on each row, return a value for each row, and write it back into the Excel file.
My idea was to load the file into RAM, do the calculations for each row, store the results in a list, and write them back to Excel in order, but I didn't anticipate the issues of dealing with data this size.
I tried XSSF, and it wasn't able to load such a large file. After waiting for a few hours it gave me an OutOfMemoryError.
I tried increasing the heap in Run -> Run Configurations -> Arguments, and in Control Panel -> Java. It didn't work.
I tried using the following StreamingReader and it didn't work either.
FileInputStream in = new FileInputStream("D:\\work\\calculatepi\\sampleresult.xlsx");
Workbook workbook = StreamingReader.builder()
        .rowCacheSize(100)    // number of rows kept in memory at any time
        .bufferSize(4096)     // buffer size used when reading the InputStream
        .open(in);            // returns a streaming Workbook view of the XLSX
I'm really at a loss and not sure what to do. Is there no easy way to do this?
It is not only about the configuration of that library. It is also about the memory that you give to your JVM! Try increasing the heap space of the JVM, see here for example.
Beyond that: I think you should do two things:
Run experiments with smaller sheets: create one that has only 100 rows, then maybe 10K, then 100K, and measure the memory consumption as you go. And from there,
see if there are other APIs/libraries that allow you to read/write individual rows without pulling the whole file into memory (see the sketch after this list),
and if none of that works, maybe you have to use a completely different design, such as having some sort of "service": you write some VB script code that runs inside Excel and, for each row, simply calls that service to fetch the result. Or, ideally: do not misuse Excel as a database. This is similar to using a sports car to transport a huge amount of goods just because you already have that sports car; it would still be more appropriate to get yourself a truck instead. In other words: consider moving your data into a real database. In the long run, everything you do will be "easier" then!
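On the second point, one combination worth trying is the streaming reader the question already uses together with POI's SXSSF streaming writer, so that neither the input nor the output workbook is held in memory. This is only a sketch under those assumptions; the file names and the compute(...) step are placeholders:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.streaming.SXSSFWorkbook;
import com.monitorjbl.xlsx.StreamingReader;

public class StreamingRecalc {
    public static void main(String[] args) throws IOException {
        SXSSFWorkbook output = new SXSSFWorkbook(100);            // keep only ~100 output rows in memory
        Sheet results = output.createSheet("results");
        int outRow = 0;
        try (InputStream in = new FileInputStream("sampleresult.xlsx");
             Workbook input = StreamingReader.builder()
                     .rowCacheSize(100)
                     .bufferSize(4096)
                     .open(in)) {
            for (Row row : input.getSheetAt(0)) {                 // rows stream in one at a time
                double value = compute(row);                      // your per-row algorithm goes here
                results.createRow(outRow++).createCell(0).setCellValue(value);
            }
        }
        try (OutputStream out = new FileOutputStream("results.xlsx")) {
            output.write(out);                                    // results stream out to disk
        }
        output.dispose();                                         // delete SXSSF's temporary spill files
    }

    private static double compute(Row row) {
        return row.getRowNum();                                   // placeholder for the real calculation
    }
}

SXSSF spills rows to temporary files once the in-memory window is exceeded, which is why dispose() is called at the end.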

Saving real time data in Android application to access later

I am currently streaming information in the form of a String from an Arduino to an Android application via Bluetooth. My application currently just displays the data on a series of progress bars as the values come in. I would like to store the data as it comes in to be viewed later but am unsure as to how to achieve this.
I have a BluetoothActionListener which runs when new data is available. This is called up to 20 times per second, so I am worried about whether adding a file write inside this function will affect its performance.
The data arrives as a string via Bluetooth, and I would like to store the entire session's worth of data (up to 10 minutes) in the same file, each entry on its own line. A simple text file with a timestamp on each line is sufficient.
Would writing to internal/external storage cause my foreground visuals to slow down or even become unresponsive, given that each new string also causes each of the progress bars to update?
At twenty times a second? Probably not. Just keep the file open rather than closing it, and flush it every few writes.
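A minimal sketch of that idea (the class name is illustrative; on Android the File would typically come from something like Context.getExternalFilesDir()):

import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class SessionLogger {
    private final BufferedWriter writer;
    private int writesSinceFlush = 0;

    public SessionLogger(File logFile) throws IOException {
        writer = new BufferedWriter(new FileWriter(logFile, true));   // open once, in append mode
    }

    // Call this from the Bluetooth listener for each incoming string (up to ~20 times per second).
    public synchronized void log(String data) throws IOException {
        writer.write(System.currentTimeMillis() + " " + data);
        writer.newLine();
        if (++writesSinceFlush >= 20) {   // flush roughly once per second at 20 writes/second
            writer.flush();
            writesSinceFlush = 0;
        }
    }

    // Call when the session ends (e.g. after the 10-minute run) to flush the tail and release the file.
    public synchronized void close() throws IOException {
        writer.flush();
        writer.close();
    }
}

Because the BufferedWriter buffers in memory and only flushes occasionally, each callback does very little I/O and the progress-bar updates should be unaffected; if you want to be extra safe, hand the strings to a background thread instead of writing in the listener itself.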

Java- Writing huge data to csv

I am trying to write a huge amount of data, fetched from a MySQL DB, to CSV using Super CSV. How can I simply manage the performance issue? Does Super CSV impose any limits when writing?
Since you included almost no detail in your question about how you are approaching the problem, it's hard to make concrete recommendations. So, here's a general one:
Unless you are writing your file to a really slow medium (some old USB stick or something), the slowest step in your process should be reading the data from the database.
There are two general ways you can structure your program:
The bad way: reading all the data from the database into your application's memory first and then, in a second step, writing it all in one shot to the CSV file.
The right way: "streaming" the data from the DB into the CSV file, i.e. writing the data to the CSV file as it comes into your application (record by record or batch by batch).
The idea is to set up something usually referred to as a "pipeline". Think of it like a conveyor belt in a factory: you have multiple stations in your process of assembling some widget. What you don't want is to have station 1 process all the widgets while stations 2 and 3 sit idle, then pass the whole container of widgets to station 2 to begin work while stations 1 and 3 sit idle, and so forth. Instead, station 1 should send small batches (1 at a time, or 10 at a time or so) of finished widgets to station 2 immediately so that it can start working on them as soon as possible. The goal is to keep all stations as busy as possible at all times.
In your example, station 1 is mysql retrieving the records, station 2 is your application that forwards (and processes?) them, and station 3 is supercsv. So, simply make sure that supercsv can start working as soon as possible, rather than having to wait for mysql to finish the entire request.
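A minimal sketch of that streaming setup, assuming a JDBC connection to MySQL and Super CSV's CsvListWriter (the connection URL, query and column names are placeholders):

import java.io.FileWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.supercsv.io.CsvListWriter;
import org.supercsv.io.ICsvListWriter;
import org.supercsv.prefs.CsvPreference;

public class StreamToCsv {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/mydb", "user", "pass");
             Statement st = conn.createStatement()) {
            st.setFetchSize(Integer.MIN_VALUE);   // with MySQL Connector/J this asks the driver to stream rows
            try (ResultSet rs = st.executeQuery("SELECT id, name, amount FROM big_table");
                 ICsvListWriter csv = new CsvListWriter(new FileWriter("out.csv"),
                                                        CsvPreference.STANDARD_PREFERENCE)) {
                csv.writeHeader("id", "name", "amount");
                while (rs.next()) {
                    // Each record goes straight to the file; nothing accumulates in memory.
                    csv.write(rs.getLong("id"), rs.getString("name"), rs.getBigDecimal("amount"));
                }
            }
        }
    }
}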
If you do this right, you should be able to generate the csv file as quickly as mysql can throw records at you*, and then, if it's still too slow, you need to rethink your database backend.
*I haven't used supercsv yet, so I don't know how well it performs, but given how trivial its job is and how popular it is, I would find it hard to believe that it would end up performing less well (as measured in processing time needed for one record) than mysql in this task. But this might be something that is worth verifying...

Need suggestion on my approach : to read a file which is being written continuously?

I have one CSV file which is being written to continuously by a script. It writes a timestamp and some other data on each row. I have to read the latest data first.
Currently I am using RandomAccessFile in Java to read the file backwards. But as it is written continuously, I have to read the new data with priority. I am keeping track of which timestamp has been sent and working from there. It results in unnecessary scanning operations.
Is there any better way to deal with this scenario?
Thanks in advance,
You could consider having one thread that reads new lines as they appear and pushes them onto a stack of unprocessed rows, and a second thread that pops the stack and processes the new rows in reverse order.
Depending on how long it takes to process a new row compared to how quickly they are generated, this might be sufficient. If new rows are generated faster than you can process them then this approach probably won't work - the stack will get too big and you'll run out of memory. In that case, depending on your requirements, you might be able to get away with a size-limited stack that discards old entries.
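A minimal sketch of that reader/processor split, using a bounded LinkedBlockingDeque as the stack so memory stays capped and the oldest entries are discarded when it fills (class and method names are illustrative; the part that tails the growing file is left out):

import java.util.concurrent.BlockingDeque;
import java.util.concurrent.LinkedBlockingDeque;

public class LatestFirstProcessor {
    private final BlockingDeque<String> stack = new LinkedBlockingDeque<>(10_000);   // bounded to cap memory

    // Reader thread: push each new line onto the front of the deque as it appears in the file.
    public void onNewLine(String line) {
        while (!stack.offerFirst(line)) {
            stack.pollLast();   // deque is full: discard the oldest entry rather than blocking the reader
        }
    }

    // Processor thread: always takes the most recently pushed line first.
    public void processLoop() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            String newest = stack.takeFirst();
            handle(newest);
        }
    }

    private void handle(String row) {
        // parse the timestamp and data, then do the actual work
    }
}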
Two ideas:
Use a fixed size record format instead of CSV. Then you can tell exactly what offsets your records are at instead of having to seek around looking for newlines.
If that isn't possible, have a thread that reads items from the file and pushes them onto a stack. Another thread pops items from the stack and processes them. Because it's a stack it'll always be dealing with the most recent available item. You'll need to figure out how you want to deal with cases where the stack gets too big. Do you just want to throw away items that are too old?
If you have access to the original script, write the record to a database, in addition to the CSV file. Then you can do whatever you want with the database; access the last record, run a report, etc.
If your application is running in a Unix environment, you could run
tail -f /csv-file | custom-program
custom-program would simply accept standard input and echo that to a socket connection with your Java program.
I'm assuming that your Java program is some sort of server app that can't be started from the command line like that. If that would actually be okay, then you could replace custom-program with your Java program.
It results in unnecessary scanning operations.
I presume that you are referring to the overheads of seeking to some point, and then finding the next valid CSV row start position by reading until you hit the next newline.
I can think of three ways to do this that may be more efficient than what you are currently doing:
Read the entire file and parse out the rows in forwards direction, storing the positions in memory. Then process the in-memory rows in reverse order.
Scan the file from the beginning looking for row starts, and store the row start positions in memory. Then iterate through the positions in reverse order, seeking to each one to read the corresponding row. (You can do the input more efficiently by processing multiple rows per seek; see the sketch below.)
Map the file into memory using a MappedByteBuffer; then you can step through the byte buffer forwards or backwards to find the row boundaries.
The first approach requires that you can buffer the entire file in memory, but has the lowest I/O overhead because you read the file just once with a minimum number of system calls. The third approach has the same issue, though you could map an extremely large file into memory in (large) sections to reduce the memory requirements.
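A rough sketch of the second approach (the file name is a placeholder, single-byte '\n' line endings are assumed, and RandomAccessFile.readLine assumes a Latin-1-style encoding):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.List;

public class ReverseCsvReader {
    public static void main(String[] args) throws IOException {
        List<Long> rowStarts = new ArrayList<>();
        try (RandomAccessFile raf = new RandomAccessFile("data.csv", "r")) {
            // Forward pass: remember where every row begins.
            rowStarts.add(0L);
            while (raf.readLine() != null) {
                rowStarts.add(raf.getFilePointer());   // position just after the newline = start of next row
            }
            rowStarts.remove(rowStarts.size() - 1);    // the last entry is end-of-file, not a row start

            // Reverse pass: seek to each start position, newest row first.
            for (int i = rowStarts.size() - 1; i >= 0; i--) {
                raf.seek(rowStarts.get(i));
                String row = raf.readLine();
                // process(row): e.g. parse the timestamp and decide whether it is new
            }
        }
    }
}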
But ultimately, there is no simple and efficient way of reading a file backwards in Java.

Designing a process

I challenge you :)
I have a process that someone already implemented. I will try to describe the requirements, and I was hoping I could get some input on the "best way" to do this.
It's for a financial institution.
I have a routing framework that will allow me to receive files and send requests to other systems. I have a database I can use as I wish, but only me and my software have access to this database.
The facts
Via the routing framework I receive a file.
Each line in this file follows a fixed length format with the identification of a person and an amount (+ lots of other stuff).
99% of the time this file is below 100 MB (around 800 bytes per line, i.e. 2.2 MB is roughly 2,600 lines).
Once a year we get 1-3 GB of data instead.
Running on an "appserver"
I can fork subprocesses as I like. (within reason)
I cannot ensure consistency when running for more than two days: subprocesses may die, the connection to the DB/framework might be lost, files might move.
I can NOT send reliable messages via the framework. The calls are synchronous, so I must wait for the answer.
It's possible/likely that sending these getPerson requests will crash my "process" when sending LOTS of them.
We're using java.
Requirements
I must return a file with all the data, plus I must add some more info for some of the lines (about 25-50% of the lines: at least 25,000).
This info I can only get by doing a getPerson request via the framework to another system, one request per person. Each takes between 200 and 400 ms.
It must be able to complete within two days
Nice to have
Checkpointing. If I'm going to run for a long time, I sure would like to be able to restart the process without starting from the top.
...
How would you design this?
I will later add the current "hack" and my brief idea
========== Current solution ================
It's running on BEA/Oracle Weblogic Integration, not by choice but by definition
When the file is received, each line is read into a database table with the columns
id, line, batchfilename and status, with status initially set to 'Needs processing'.
When all lines are in the database, the rows are split up by mod 4 and a process is started for each quarter of the rows; each line that needs it is enriched by the getPerson call and its status is set to 'Processed' (38,000 in the current batch).
When all 4 quarters of the rows have been Processed, a writer process starts, selecting 100 rows at a time from the database, writing them to the file, and updating their status to 'Written'.
When all is done, the new file is handed back to the routing framework, and an "I'm done" email is sent to the operations crew.
The 4 processing processes can/will fail, so it is possible to restart them with an HTTP GET to a servlet on WLI.
Simplify as much as possible.
The batches (trying to process them as units, with their various sizes) appear to be discardable in terms of the simplest process. It sounds like the rows are atomic, not the batches.
Feed all the lines as separate atomic transactions through an asynchronous FIFO message queue, with a good mechanism for detecting (and appropriately logging and routing) failures. Then you can deal with problems strictly on an exception basis. (A queue table in your database can probably work.)
Maintain batch identity only with a column in the message record, and summarize batches by that means however you need, whenever you need.
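A minimal sketch of what such a queue table could look like (the JDBC URL, table name, column names and types are all illustrative; adapt the DDL to your database's dialect):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class CreateQueueTable {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:yourdb://host/db", "user", "pass");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE line_queue ("
                     + " id BIGINT PRIMARY KEY,"          // one row per input line
                     + " batch_file VARCHAR(200),"        // batch identity is just a column
                     + " line VARCHAR(1000),"             // the raw fixed-length record
                     + " status VARCHAR(20),"             // e.g. PENDING, DONE, FAILED
                     + " error VARCHAR(500))");           // failure detail for exception handling
        }
    }
}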
When you receive the file, parse it and put the information in the database.
Make one table with a record per line that will need a getPerson request.
Have one or more threads get records from this table, perform the request and put the completed record back in the table.
Once all records are processed, generate the complete file and return it.
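A rough sketch of the middle step (the helper methods fetchPendingIds, getPerson, markEnriched and markFailed are placeholders for your database access and the synchronous framework call; the pool size is a guess):

import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class EnrichmentRunner {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);   // size to what the remote system tolerates
        for (final String personId : fetchPendingIds()) {
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        String info = getPerson(personId);        // the 200-400 ms synchronous call
                        markEnriched(personId, info);             // store result, set status to done
                    } catch (Exception e) {
                        markFailed(personId, e);                  // leave the row for a later restart to pick up
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.DAYS);                  // the hard limit from the requirements
    }

    static List<String> fetchPendingIds() { return Collections.emptyList(); } // SELECT ids not yet processed
    static String getPerson(String id) { return ""; }
    static void markEnriched(String id, String info) {}
    static void markFailed(String id, Exception e) {}
}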
If the processing of the file takes 2 days, then I would start by implementing some sort of resume feature. Split the large file into smaller ones and process them one by one. If for some reason the whole processing is interrupted, you will not have to start all over again.
By splitting the large file into smaller files, you could also use more servers to process them.
You could also use a bulk loader (Oracle's SQL*Loader, for example) to load the large amount of data from the file into a table, again adding a column to mark whether a line has been processed, so you can pick up where you left off if the process crashes.
The return value could be many small files which, at the end, are combined into one large file. If the database approach is chosen, you could also save the results in a table which could then be extracted to a CSV file.
