I am pretty sure a similar discussion has already taken place here, but I want to present the exact problem I am facing along with a possible solution from my side. Then I want to hear from you what the better approach would be, or how I can improve my logic.
PROBLEM
I have a huge file which contains lines. Each line is in the following format: <weight>,<some_name>. Now what I have to do is add up the weights of all the objects which have the same name. The problem is:
I don't know how frequently some_name occurs in the file. It could appear only once, or all of the millions of lines could be it.
It is not ordered
I am using File Stream (java specific, but it doesn't matter)
SOLUTION 1: Assuming that I have huge RAM, what I am planning to do is read the file line by line and use the name as the key in my hash map. If the key is already there, add to its sum; otherwise insert a new entry. It will cost me m RAM (m = number of lines in the file), but overall processing would be fast.
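A minimal sketch of what I have in mind for Solution 1 (the file name, the use of long for weights and the exact parsing are my own assumptions):

Map<String, Long> totals = new HashMap<>();   // needs java.util.* and java.io.* imports
try (BufferedReader reader = new BufferedReader(new FileReader("input.txt"))) {
    String line;
    while ((line = reader.readLine()) != null) {
        int comma = line.indexOf(',');
        long weight = Long.parseLong(line.substring(0, comma));
        String name = line.substring(comma + 1);
        totals.merge(name, weight, Long::sum);   // sum if the name is already there, otherwise insert
    }
}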
SOLUTION 2: Assuming that I don't have huge RAM, I am going to do it in batches. Read the first 10,000 lines into a hashtable, sum them up, and dump the result to a file. Do the same for the rest of the file. Once done processing the file, I will start reading the processed files and repeat this process to sum it all up.
What do you guys suggest here ?
Besides your suggestions, can I do parallel reading of the file? I have access to FileInputStream here; can I work with FileInputStream to make reading the file more efficient?
The second approach is not going to help you: in order to produce the final output, you need a sufficient amount of RAM to hold all the keys from the file, along with a single Integer representing the count. Whether you get there in one big step or in several iterations of 10K rows at a time does not change the footprint that you will need at the end.
What would help is partitioning the keys in some way, e.g. by the first character of the key. If the name starts with a letter, process the file 26 times, the first time taking only the weights for keys starting with 'A' and ignoring all other keys, the second time taking only the 'B's, and so on. This will let you end up with 26 files that do not intersect.
Another valid approach would be using an external sorting algorithm to transform an unordered file to an ordered one. This would let you walk the ordered file, calculate totals as you go, and write them to an output, even without the need for an in-memory table.
As far as optimizing the I/O goes, I would recommend using the newBufferedReader(Path path, Charset c) method of the java.nio.file.Files class: it gives you a BufferedReader that is optimized for reading efficiency.
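For example (a minimal sketch; the file name and charset are just placeholders):

// uses java.nio.file.Files, java.nio.file.Paths, java.nio.charset.StandardCharsets, java.io.BufferedReader
try (BufferedReader reader = Files.newBufferedReader(Paths.get("input.txt"), StandardCharsets.UTF_8)) {
    String line;
    while ((line = reader.readLine()) != null) {
        // process the line here
    }
}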
Is the file static when you do this computation? If so, then you could disk sort the file based on the name and add up the consecutive entries.
Related
I have a file (size = ~1.9 GB) which contains ~220,000,000 (~220 million) words / strings. They contain duplicates - almost 1 duplicate word for every 100 words.
In my second program, I want to read the file. I have succeeded in reading the file line by line using BufferedReader.
Now, to remove duplicates, we can use a Set (and its implementations), but a Set has problems, as described in the following 3 scenarios:
With the default JVM heap size, the Set can contain up to 0.7-0.8 million words, and then OutOfMemoryError.
With a 512M JVM heap size, the Set can contain up to 5-6 million words, and then the OOM error.
With a 1024M JVM heap size, the Set can contain up to 12-13 million words, and then the OOM error. Here, after 10 million records have been added to the Set, operations become extremely slow. For example, adding the next ~4000 records took 60 seconds.
I have restrictions that I can't increase the JVM size further, and I want to remove duplicate words from the file.
Please let me know if you have any ideas about other ways/approaches to remove duplicate words from such a gigantic file using Java. Many thanks :)
Additional info for the question: my words are basically alphanumeric and they are IDs which are unique in our system. Hence they are not plain English words.
Use merge sort and remove the duplicates in a second pass. You could even remove the duplicates while merging (just keep the latest word added to output in RAM and compare the candidates to it as well).
Divide the huge file into 26 smaller files based on the first letter of the word. If any of the letter files are still too large, divide that letter file by using the second letter.
Process each of the letter files separately using a Set to remove duplicates.
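A rough sketch of those two phases, assuming one word per line and made-up file names (error handling and edge cases like empty lines are left out):

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class SplitThenDedup {
    public static void main(String[] args) throws IOException {
        // Phase 1: route every word to a bucket file keyed by its first character.
        Map<Character, BufferedWriter> buckets = new HashMap<>();
        try (BufferedReader in = Files.newBufferedReader(Paths.get("words.txt"))) {
            String word;
            while ((word = in.readLine()) != null) {
                char key = Character.toLowerCase(word.charAt(0));
                BufferedWriter out = buckets.computeIfAbsent(key, k -> {
                    try {
                        return Files.newBufferedWriter(Paths.get("bucket_" + k + ".txt"));
                    } catch (IOException e) {
                        throw new RuntimeException(e);
                    }
                });
                out.write(word);
                out.newLine();
            }
        }
        for (BufferedWriter w : buckets.values()) {
            w.close();
        }

        // Phase 2: each bucket should now be small enough for an in-memory Set.
        try (BufferedWriter out = Files.newBufferedWriter(Paths.get("unique.txt"))) {
            for (Character key : buckets.keySet()) {
                Set<String> seen = new HashSet<>();
                try (BufferedReader in = Files.newBufferedReader(Paths.get("bucket_" + key + ".txt"))) {
                    String word;
                    while ((word = in.readLine()) != null) {
                        if (seen.add(word)) {       // add() returns false for duplicates
                            out.write(word);
                            out.newLine();
                        }
                    }
                }
            }
        }
    }
}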
You might be able to use a trie data structure to do the job in one pass. It has advantages that recommend it for this type of problem. Lookup and insert are quick. And its representation is relatively space efficient. You might be able to represent all of your words in RAM.
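For illustration, a minimal trie used as a set, where add() reports whether the word is new; the 128-entry ASCII child array keeps the code short, though a more compact child representation would be kinder to memory:

// Sketch only: assumes the IDs are plain ASCII (alphanumeric), as stated in the question.
class TrieNode {
    TrieNode[] children = new TrieNode[128];
    boolean isWord;
}

class TrieSet {
    private final TrieNode root = new TrieNode();

    // Returns true if the word was newly inserted, false if it was already present.
    boolean add(String word) {
        TrieNode node = root;
        for (int i = 0; i < word.length(); i++) {
            int c = word.charAt(i);
            if (node.children[c] == null) {
                node.children[c] = new TrieNode();
            }
            node = node.children[c];
        }
        if (node.isWord) {
            return false;
        }
        node.isWord = true;
        return true;
    }
}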
If you sort the items, duplicates will be easy to detect and remove, as the duplicates will bunch together.
There is code here you could use to mergesort the large file:
http://www.codeodor.com/index.cfm/2007/5/10/Sorting-really-BIG-files/1194
For large files I try not to read the data into memory but instead operate on a memory mapped file and let the OS page in/out memory as needed. If your set structures contain offsets into this memory mapped file instead of the actual strings it would consume significantly less memory.
Check out this article:
http://javarevisited.blogspot.com/2012/01/memorymapped-file-and-io-in-java.html
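To make the offsets idea concrete, here is a hedged sketch: it keeps only (offset, length) pairs grouped by an int hash, and confirms duplicates by comparing raw bytes in the mapped buffer, so no Strings are retained. The file name and the single-mapping assumption (file smaller than 2 GB) are mine.

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MappedDedup {
    public static void main(String[] args) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile("words.txt", "r");
             FileChannel channel = raf.getChannel()) {
            MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            Map<Integer, List<int[]>> seen = new HashMap<>();   // hash -> list of [offset, length]
            int lineStart = 0;
            for (int pos = 0; pos < buf.limit(); pos++) {
                if (buf.get(pos) == '\n') {
                    int len = pos - lineStart;
                    int hash = hash(buf, lineStart, len);
                    List<int[]> candidates = seen.computeIfAbsent(hash, h -> new ArrayList<>());
                    if (!containsMatch(buf, candidates, lineStart, len)) {
                        candidates.add(new int[] {lineStart, len});
                        // first occurrence: write the line to the output here
                    }
                    lineStart = pos + 1;
                }
            }
        }
    }

    private static int hash(MappedByteBuffer buf, int off, int len) {
        int h = 1;
        for (int i = 0; i < len; i++) {
            h = 31 * h + buf.get(off + i);
        }
        return h;
    }

    private static boolean containsMatch(MappedByteBuffer buf, List<int[]> candidates, int off, int len) {
        outer:
        for (int[] c : candidates) {
            if (c[1] != len) continue;
            for (int i = 0; i < len; i++) {
                if (buf.get(c[0] + i) != buf.get(off + i)) continue outer;
            }
            return true;
        }
        return false;
    }
}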
Question: Are these really WORDS, or are they something else -- phrases, part numbers, etc?
For WORDS in a common spoken language one would expect that after the first couple of thousand you'd have found most of the unique words, so all you really need to do is read a word in, check it against a dictionary, if found skip it, if not found add it to the dictionary and write it out.
In this case your dictionary is only a few thousand words large. And you don't need to retain the source file since you write out the unique words as soon as you find them (or you can simply dump the dictionary when you're done).
If you have the possibility of inserting the words into a temporary table of a database (using batch inserts), then it would be a SELECT DISTINCT against that table.
One classic way to solve this kind of problem is a Bloom filter. Basically you hash your word a number of times and for each hash result set some bits in a bit vector. If you're checking a word and all the bits from its hashes are set in the vector you've probably (you can set this probability arbitrarily low by increasing the number of hashes/bits in the vector) seen it before and it's a duplicate.
This was how early spell checkers worked. They knew if a word was in the dictionary, but they couldn't tell you what the correct spelling was, because the filter can only tell you whether the current word has been seen before.
There are a number of open source implementations out there including java-bloomfilter
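To show the mechanics, a toy version (this is not the java-bloomfilter API): k bit positions derived from each word, set on add, all required on lookup, so you get possible false positives but never false negatives.

import java.util.BitSet;

public class SimpleBloomFilter {
    private final BitSet bits;
    private final int size;       // number of bits in the vector
    private final int hashCount;  // number of hash functions (k)

    public SimpleBloomFilter(int sizeInBits, int hashCount) {
        this.bits = new BitSet(sizeInBits);
        this.size = sizeInBits;
        this.hashCount = hashCount;
    }

    public void add(String word) {
        for (int i = 0; i < hashCount; i++) {
            bits.set(indexFor(word, i));
        }
    }

    // true  -> probably seen before (could be a false positive)
    // false -> definitely never seen before
    public boolean mightContain(String word) {
        for (int i = 0; i < hashCount; i++) {
            if (!bits.get(indexFor(word, i))) {
                return false;
            }
        }
        return true;
    }

    // Derives the i-th index from two base hashes ("double hashing").
    private int indexFor(String word, int i) {
        int h1 = word.hashCode();
        int h2 = (h1 >>> 16) | 1;   // cheap, odd second hash
        return Math.abs((h1 + i * h2) % size);
    }
}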
I'd tackle this in Java the same way as in every other language: Write a deduplication filter and pipe it as often as necessary.
This is what I mean (in pseudo code):
Input parameters: Offset, Size
Allocate searchable structure of size Size (=Set, but need not be one)
Read Offset elements from stdin (or until EOF is encountered) and just copy them to stdout
Read Size elements from stdin (or until EOF), store them in the Set. If duplicate, drop it, else write it to stdout.
Read elements from stdin until EOF, if they are in Set then drop, else write to stdout
Now pipe as many instances as you need (if storage is no problem, maybe only as many as you have cores) with increasing Offsets and a sane Size. This lets you use more cores, as I suspect the process is CPU bound. You can even use netcat and spread the processing over more machines, if you are in a hurry.
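A rough Java rendering of that pseudo code (argument handling and the exact phase boundaries are simplified):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.HashSet;
import java.util.Set;

public class DedupFilter {
    public static void main(String[] args) throws Exception {
        long offset = Long.parseLong(args[0]);
        int size = Integer.parseInt(args[1]);
        Set<String> window = new HashSet<>(size);
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        long passed = 0;
        while ((line = in.readLine()) != null) {
            if (passed < offset) {                 // 1) copy the first Offset elements untouched
                System.out.println(line);
            } else if (window.size() < size) {     // 2) fill the Set, dropping duplicates within it
                if (window.add(line)) {
                    System.out.println(line);
                }
            } else if (!window.contains(line)) {   // 3) drop anything already in the Set
                System.out.println(line);
            }
            passed++;
        }
    }
}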
Even in English, which has a huge number of words for a natural language, the upper estimates are only about 80,000 words. Based on that, you could just use a HashSet and add all your words to it (probably all lower-cased to avoid case issues):
Set<String> words = new HashSet<String>();
String word;
while ((word = reader.readLine()) != null) {   // assuming a BufferedReader over the file, one word per line
    words.add(word.toLowerCase());
}
If they are real words, this isn't going to cause memory problems, and it will be pretty fast too!
To avoid having to worry too much about implementation, you should use a database system, either plain old relational SQL or a No-SQL solution. I'm pretty sure you could use e.g. Berkeley DB Java Edition and then do (pseudo code):
for (word : stream) {
    if (!DB.exists(word)) {
        DB.put(word)
        outstream.add(word)
    }
}
The problem is in essence easy: you need to store things on disk because there is not enough memory, then either use sorting, O(N log N) (unnecessary), or hashing, O(N), to find the unique words.
If you want a solution that will very likely work but is not guaranteed to, use an LRU-type hash table. According to the empirical Zipf's law, you should be OK.
A follow-up question to some smart guy out there: what if I have a 64-bit machine and set the heap size to, say, 12 GB - shouldn't virtual memory take care of the problem (although not in an optimal way), or is Java not designed that way?
Quicksort would be a good option over Mergesort in this case because it needs less memory. This thread has a good explanation as to why.
The most performant solutions arise from omitting unnecessary stuff. You are looking only for duplicates, so just do not store the words themselves, store hashes. But wait, you are not interested in the hashes either, only in whether they were already seen - do not store them. Treat the hash as a really large number, and use a bitset to see whether you have already seen this number.
So your problem boils down to a really big, sparsely populated bitmap - with a size depending on the hash width. If your hash is up to 32 bits, you can use a riak bitmap.
... gone thinking about really big bitmap for 128+ bit hashes %) (I'll be back )
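For the 32-bit case, a minimal sketch of that bitmap idea (2^32 bits costs 512 MB of RAM; note that hash collisions will silently treat distinct words as duplicates, so this is approximate by design):

long[] seen = new long[1 << 26];                 // 2^26 longs * 64 bits = 2^32 bits

boolean markSeen(long[] seen, String word) {
    long h = word.hashCode() & 0xFFFFFFFFL;      // treat the hash as an unsigned 32-bit number
    int slot = (int) (h >>> 6);                  // which long holds this bit
    long mask = 1L << (h & 63);                  // which bit within that long
    boolean alreadySeen = (seen[slot] & mask) != 0;
    seen[slot] |= mask;
    return alreadySeen;
}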
FILE:
I'm working with a refined CSV version of a search-log file which contains 3.3 million lines of data, each line representing a single query and containing various data about that query.
The entries in the file are sorted ascending by the session / userid.
GOAL:
Coupling entries that submitted the same queryterm while belonging to the same userid
APPROACH:
I'm reading the CSV file line by line, saving the data in self-made 'Entry' objects and adding these objects to an ArrayList. When this is done, I sort the list by two criteria with a custom Comparator.
PROBLEM:
While reading the lines and adding the Entry objects to the list (which takes very long), the program terminates with an OutOfMemoryError: "Java heap space".
So it seems that my approach is too hard on memory (and runtime).
Any ideas for a better approach?
Your approach itself may be valid, and perhaps the simplest solution is to simply boost the memory available to the JVM.
The JVM will only allocate itself up to a maximum amount of system memory, and you can increase this value via the -Xmx command-line argument. See here for more details.
Obviously this solution doesn't scale, and if (in the future) you want to read much bigger files, then you'll likely need a better solution to reading these files.
Instead of sorting the lines in memory, you could insert the parsed lines in a database with an index based on the columns defining the duplicity.
Another approach would be to dispatch the lines into many files, each file being named, for example, after the first 2 chars of a SHA-1 of the concatenated columns defining the duplicity. That way you would never have to read more than one file for your final operation, because all duplicates would be together.
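For instance, a small helper that derives the bucket file name (the 2-character hex prefix) from the key columns; the method name and the way you concatenate columns are up to you:

// assumes java.security.MessageDigest and java.nio.charset.StandardCharsets
static String bucketNameFor(String keyColumns) throws Exception {
    MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
    byte[] digest = sha1.digest(keyColumns.getBytes(StandardCharsets.UTF_8));
    return String.format("%02x", digest[0]);   // first byte = first 2 hex chars, so at most 256 bucket files
}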
I need to parse a long file in Java and output the results to another file.
Since I need to average across several items, and I need to parse the file to find them, I need to keep the current averaged item in memory before outputting it to the results file on disk.
Is this approach OK, or am I going to get poor performance with a million-item file?
Update: the point here is that each output item can be updated at any time during the computation, since I might average an item at the beginning and again at the end. So I cannot release it and write it to disk, I guess.
thanks
Another solution could be to do 2 passes: first pass computes (and keeps) the changing values in memory, the second pass creates the output.
Have a look at flatpack. It has a LargeDataSet implementation for handling large files with less memory.
Does the output fit in RAM, say in a Map<MyItem, Integer> (if your average value fits in an Integer)?
If the answer is yes, then the fastest solution is to keep it in memory during the source file traversal and then to write the output file.
If the answer is no, you have to partition the problem and create intermediate results and store them to disk, and then you have to merge the intermediate results to create the end result.
If you have to partition the problem, ask a new question with some figures, because the answer will really depend on the context...
I have one CSV file which is being written to continuously by a script. It writes a timestamp and some other data per row. I have to read the latest data first.
Currently I am using RandomAccessFile in Java to read the file in reverse. But as it is written continuously, I have to read the new data with priority. I keep track of which timestamps have been sent and do the work accordingly. It results in unnecessary scanning operations.
Is there any better way to deal with this scenario?
Thanks in advance,
You could consider having one thread that reads new lines as they appear and pushes them onto a stack of unprocessed rows, and a second thread that pops the stack and processes the new rows in reverse order.
Depending on how long it takes to process a new row compared to how quickly they are generated, this might be sufficient. If new rows are generated faster than you can process them then this approach probably won't work - the stack will get too big and you'll run out of memory. In that case, depending on your requirements, you might be able to get away with a size-limited stack that discards old entries.
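Something along these lines (the capacity and the tailing logic are placeholders):

import java.util.concurrent.BlockingDeque;
import java.util.concurrent.LinkedBlockingDeque;

public class TailStack {
    public static void main(String[] args) {
        // Bounded deque used as a stack: newest rows are taken first, memory stays capped.
        BlockingDeque<String> stack = new LinkedBlockingDeque<>(100_000);

        Thread reader = new Thread(() -> {
            // Whenever the tailing code sees a new CSV line:
            // stack.offerFirst(line);          // returns false (drops the line) when the deque is full
        });

        Thread processor = new Thread(() -> {
            try {
                while (true) {
                    String row = stack.takeFirst();   // LIFO: most recent available row first
                    // handle(row);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        reader.start();
        processor.start();
    }
}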
Two ideas:
Use a fixed size record format instead of CSV. Then you can tell exactly what offsets your records are at instead of having to seek around looking for newlines.
If that isn't possible, have a thread that reads items from the file and pushes them onto a stack. Another thread pops items from the stack and processes them. Because it's a stack it'll always be dealing with the most recent available item. You'll need to figure out how you want to deal with cases where the stack gets too big. Do you just want to throw away items that are too old?
If you have access to the original script, write the record to a database, in addition to the CSV file. Then you can do whatever you want with the database; access the last record, run a report, etc.
If your application is running in a Unix environment, you could run
tail -f /csv-file | custom-program
custom-program would simply accept standard input and echo that to a socket connection with your Java program.
I'm assuming that your Java program is some sort of server app that can't be started from the command line like that. If that would actually be okay, then you could replace custom-program with your Java program.
It results in unnecessary scanning operations.
I presume that you are referring to the overheads of seeking to some point, and then finding the next valid CSV row start position by reading until you hit the next newline.
I can think of three ways to do this that may be more efficient than what you are currently doing:
Read the entire file and parse out the rows in the forwards direction, storing the rows in memory. Then process the in-memory rows in reverse order.
Scan the file from the beginning looking for row starts, and store the row start positions in memory. Then iterate through the positions in reverse order, seeking to each one to read the corresponding row. (You can do the input more efficiently by processing multiple rows in each seek.)
Map the file into memory using a MappedByteBuffer, then you can step through the Byte buffer forwards or backwards to find the row boundaries.
The first approach requires that you can buffer the entire file in memory, but has lower I/O overhead because you read the file just once with a minimum number of system calls. The third approach has the same issue, though you could map an extremely large file into memory in (large) sections to reduce the memory requirements.
But ultimately, there is no simple and efficient way of reading a file backwards in Java.
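As an illustration of the second approach, a rough sketch that records row-start offsets in one forward pass and then seeks to them in reverse; the file name is hypothetical, and RandomAccessFile.readLine() assumes single-byte (Latin-1) characters:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.List;

public class ReverseRows {
    public static void main(String[] args) throws IOException {
        List<Long> rowStarts = new ArrayList<>();
        try (RandomAccessFile raf = new RandomAccessFile("log.csv", "r")) {
            rowStarts.add(0L);
            while (raf.readLine() != null) {
                rowStarts.add(raf.getFilePointer());   // position just after each newline
            }
            rowStarts.remove(rowStarts.size() - 1);    // last entry is EOF, not a row start

            for (int i = rowStarts.size() - 1; i >= 0; i--) {
                raf.seek(rowStarts.get(i));
                String row = raf.readLine();
                // process(row);  // rows come out newest-first
            }
        }
    }
}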
Points:
We process thousands of flat files in a day, concurrently.
Memory constraint is a major issue.
We use one thread per file processed.
We don't sort by columns. Each line (record) in the file is treated as one column.
Can't Do:
We cannot use unix/linux's sort commands.
We cannot use any database system no matter how light they can be.
Now, we cannot just load everything into a collection and use the sort mechanism. It will eat up all the memory and the program will get a heap error.
In that situation, how would you sort the records/lines in a file?
It looks like what you are looking for is external sorting.
Basically, you sort small chunks of data first, write them back to the disk, and then iterate over those to sort it all.
As others mentioned, you can process this in steps.
I would like to explain it in my own words (I differ on point 3):
Read the file sequentially, processing N records at a time in memory (N is arbitrary, depending on your memory constraint and the number T of temporary files that you want).
Sort the N records in memory, write them to a temp file. Loop on T until you are done.
Open all T temp files at the same time, but read only one record per file (with buffers, of course). For each of these T records, find the smallest, write it to the final file, and advance only in that file.
Advantages:
The memory consumption is as low as you want.
You only do double the disk accesses compared to an everything-in-memory policy. Not bad! :-)
Example with numbers:
Original file with 1 million records.
Choose to have 100 temp files, so read and sort 10,000 records at a time, and drop these into their own temp files.
Open the 100 temp files at once, reading the first record of each into memory.
Compare the first records, write the smallest, and advance that temp file.
Loop on step 5, one million times.
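Here is a hedged sketch of the compare-and-advance loop (steps 4-6) using a PriorityQueue, so finding the smallest of the 100 current records stays cheap; the chunk and output file names are made up:

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class KWayMerge {
    static class Head {
        final String line;
        final BufferedReader reader;
        Head(String line, BufferedReader reader) { this.line = line; this.reader = reader; }
    }

    public static void main(String[] args) throws IOException {
        List<BufferedReader> readers = new ArrayList<>();
        PriorityQueue<Head> heap = new PriorityQueue<>((a, b) -> a.line.compareTo(b.line));
        for (int i = 0; i < 100; i++) {                        // one sorted chunk file per pass of step 2
            BufferedReader r = Files.newBufferedReader(Paths.get("chunk" + i + ".txt"));
            readers.add(r);
            String first = r.readLine();
            if (first != null) {
                heap.add(new Head(first, r));
            }
        }
        try (BufferedWriter out = Files.newBufferedWriter(Paths.get("sorted.txt"))) {
            while (!heap.isEmpty()) {
                Head smallest = heap.poll();                   // smallest of the current heads
                out.write(smallest.line);
                out.newLine();
                String next = smallest.reader.readLine();      // advance only the file we took from
                if (next != null) {
                    heap.add(new Head(next, smallest.reader));
                }
            }
        }
        for (BufferedReader r : readers) {
            r.close();
        }
    }
}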
EDITED
You mentioned a multi-threaded application, so I wonder ...
As we have seen from these discussions, using less memory gives less performance, with a dramatic factor in this case. So I would also suggest using only one thread to process only one sort at a time, not running it as a multi-threaded application.
If you process ten threads, each with a tenth of the memory available, your performance will be miserable, much much less than a tenth of the initial time. If you use only one thread, queue the 9 other demands and process them in turn, your global performance will be much better and you will finish the ten tasks much faster.
After reading this response :
Sort a file with huge volume of data given memory constraint
I suggest you consider this distribution sort. It could be a huge gain in your context.
The improvement over my proposal is that you don't need to open all the temp files at once, you only open one of them. It saves your day! :-)
You can read the files in smaller parts, sort these and write them to temporary files. Then you read two of them sequentially again and merge them into a bigger temporary file, and so on. If there is only one left, you have your file sorted. Basically that's the merge sort algorithm performed on external files. It scales quite well with arbitrarily large files but causes some extra file I/O.
Edit: If you have some knowledge about the likely variance of the lines in your files, you can employ a more efficient algorithm (distribution sort). Simplified, you would read the original file once and write each line to a temporary file that takes only lines with the same first char (or a certain range of first chars). Then you iterate over all the (now small) temporary files in ascending order, sort them in memory and append them directly to the output file. If a temporary file turns out to be too big for sorting in memory, you can repeat the same process based on the 2nd char of the lines, and so on. So if your first partitioning was good enough to produce small enough files, you will have only 100% I/O overhead regardless of how large the file is, but in the worst case it can become much more than with the performance-wise stable merge sort.
In spite of your restriction, I would use the embedded database SQLite3. Like you, I work weekly with 10-15 million flat-file lines and it is very, very fast to import and generate sorted data, and you only need a small, free-of-charge executable (sqlite3.exe). For example: once you download the .exe file, in a command prompt you can do this:
C:> sqlite3.exe dbLines.db
sqlite> create table tabLines(line varchar(5000));
sqlite> create index idx1 on tabLines(line);
sqlite> .separator '\r\n'
sqlite> .import 'FileToImport' TabLines
then:
sqlite> select * from tabLines order by line;
or save to a file:
sqlite> .output out.txt
sqlite> select * from tabLines order by line;
sqlite> .output stdout
I would spin up an EC2 cluster and run Hadoop's MergeSort.
Edit: not sure how much detail you would like, or on what. EC2 is Amazon's Elastic Compute Cloud - it lets you rent virtual servers by the hour at low cost. Here is their website.
Hadoop is an open-source MapReduce framework designed for parallel processing of large data sets. A job is a good candidate for MapReduce when it can be split into subsets that can be processed individually and then merged together, usually by sorting on keys (ie the divide-and-conquer strategy). Here is its website.
As mentioned by the other posters, external sorting is also a good strategy. I think the way I would decide between the two depends on the size of the data and speed requirements. A single machine is likely going to be limited to processing a single file at a time (since you will be using up available memory). So look into something like EC2 only if you need to process files faster than that.
You could use the following divide-and-conquer strategy:
Create a function H() that can assign each record in the input file a number. For a record r2 that will be sorted behind a record r1 it must return a larger number for r2 than for r1. Use this function to partition all the records into separate files that will fit into memory so you can sort them. Once you have done that you can just concatenate the sorted files to get one large sorted file.
Suppose you have this input file where each line represents a record
Alan Smith
Jon Doe
Bill Murray
Johnny Cash
Let's just build H() so that it uses the first letter in the record, so you might get up to 26 files, but in this example you will just get 3:
<file1>
Alan Smith
<file2>
Bill Murray
<file10>
Jon Doe
Johnny Cash
Now you can sort each individual file, which would swap "Jon Doe" and "Johnny Cash" in <file10>. Now, if you just concatenate the 3 files, you'll have a sorted version of the input.
Note that you divide first and only conquer (sort) later. However, you make sure to do the partitioning in a way that the resulting parts which you need to sort don't overlap which will make merging the result much simpler.
The method by which you implement the partitioning function H() depends very much on the nature of your input data. Once you have that part figured out the rest should be a breeze.
If your restriction is only to not use an external database system, you could try an embedded database (e.g. Apache Derby). That way, you get all the advantages of a database without any external infrastructure dependencies.
Here is a way to do it without heavy use of sorting inside Java and without using a DB.
Assumptions: you have 1 TB of space, and the files contain (or start with) a unique number, but are unsorted.
Divide the files N times.
Read those N files one by one, and create one file for each line/number
Name that file with the corresponding number. While naming, keep a counter updated to store the least count.
Now you already have the root folder of files marked for sorting by name, or you can pause your program to give yourself time to fire a command on your OS to sort the files by name. You can do it programmatically too.
Now you have a folder with files sorted by their names; using the counter, start taking each file one by one, put its numbers into your OUTPUT file, and close it.
When you are done you will have a large file with sorted numbers.
I know you mentioned not using a database no matter how light... so, maybe this is not an option. But, what about hsqldb in memory... submit it, sort it by query, purge it. Just a thought.
You can use a SQLite file DB, load the data into the DB and then let it sort and return the results for you.
Advantages: No need to worry about writing the best sorting algorithm.
Disadvantages: You will need disk space, and processing will be slower.
https://sites.google.com/site/arjunwebworld/Home/programming/sorting-large-data-files
You can do it with only two temp files - source and destination - and as little memory as you want.
On first step your source is the original file, on last step the destination is the result file.
On each iteration:
read from the source file into a sliding buffer a chunk of data half the size of the buffer;
sort the whole buffer
write to the destination file the first half of the buffer.
shift the second half of the buffer to the beginning and repeat
Keep a boolean flag that says whether you had to move some records in current iteration.
If the flag remains false, your file is sorted.
If it's raised, repeat the process using the destination file as a source.
Max number of iterations: (file size)/(buffer size)*2
You could download gnu sort for windows: http://gnuwin32.sourceforge.net/packages/coreutils.htm Even if that uses too much memory, it can merge smaller sorted files as well. It automatically uses temp files.
There's also the sort that comes with windows within cmd.exe. Both of these commands can specify the character column to sort by.
File sorting software for big files: https://github.com/lianzhoutw/filesort/ .
It is based on a file merge sort algorithm.
If you can move forward/backward in a file (seek), and rewrite parts of the file, then you should use bubble sort.
You will have to scan the lines in the file, keeping only 2 rows in memory at a time, and swap them if they are not in the right order. Repeat the process until there are no more rows to swap.