How do I count repeated words? - java

Given a 1 GB (very large) file containing words (some repeated), we need to read the file and output how many times each word is repeated. Please let me know whether my solution performs well or not.
(For simplicity, let's assume we have already captured the words in an ArrayList<String>.)
I think the complexity is O(n). Am I correct?
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;

public class WordCounter {
    public static void main(String[] args) {
        ArrayList<String> al = new ArrayList<String>();
        al.add("math1");
        al.add("raj1");
        al.add("raj2");
        al.add("math");
        al.add("rj2");
        al.add("math");
        al.add("rj3");
        al.add("math2");
        al.add("rj1");
        al.add("is");

        Map<String, Integer> map = new HashMap<String, Integer>();

        // First pass: put every word in the map with a null count.
        for (int i = 0; i < al.size(); i++) {
            String s = al.get(i);
            map.put(s, null);
        }

        // Second pass: replace null with 1, or increment the existing count.
        for (int i = 0; i < al.size(); i++) {
            String s = al.get(i);
            if (map.get(s) == null) {
                map.put(s, 1);
            } else {
                int count = map.get(s);
                count = count + 1;
                map.put(s, count);
            }
        }

        // Print the per-word counts.
        System.out.println(map);
    }
}

I think you could do better than using a HashMap.
Food for thought on the hashmap solution
Your answer is acceptable, but consider this: for simplicity's sake, let's assume you read the file one byte at a time into a StringBuffer until you hit a space, at which point you call toString() to convert the StringBuffer into a String. You then check whether the string is in the HashMap, and either it gets stored or its counter gets incremented.
The English dictionary included with Linux has about 400k words and is about 5 MB in size. So of the "1GB" of text you read, we can guess that you'll only be storing about 5 MB of it in your HashMap. The rest of the file will be converted into strings that will need to be garbage collected after you're finished looking them up in your map. I could be wrong, but I believe the bytes will be iterated over again during the construction of the String, since the byte array needs to be copied internally, and again for calculating the hash code. So the solution may waste a fair amount of CPU cycles and force GC to occur often.
It's OK to point things like this out in your interview, even if it's the only solution you can think of.
I may consider using a custom radix tree or Trie-like structure
Keep in mind how the insert method of a radix tree or trie works: it takes a stream of chars/bytes (usually a string) and compares each element against the current position in the tree. If the prefix exists, it just advances down the tree and the byte stream in lock step. When it hits a new suffix, it begins adding nodes to the tree. Once the end of the stream is reached, it marks that node as end-of-word (EOW). Now consider that we could do the same thing while reading a much larger stream, by resetting the current position to the root of the tree any time we hit a space.
If we wrote our own radix tree (or maybe a trie) whose nodes had end-of-word counters (instead of markers) and whose insert method read directly from the file, we could insert nodes into the tree one byte/char at a time until we read a space. At that point the insert method would increment the end-of-word counter (if it's an existing word), reset the current position in the tree back to the root, and start inserting bytes/chars again. The way a radix tree works is to collapse the duplicated prefixes of words. For example:
The following file:
math1 raj1 raj2 math rj2 math rj3
would be converted to:
(root)
  math (eow=2)
    1 (eow=1)
  r
    aj
      1 (eow=1)
      2 (eow=1)
    j
      2 (eow=1)
      3 (eow=1)
The insertion time into a tree like this would be O(k), where k is the length of the longest word. But since we are inserting/comparing as we read each byte, we aren't any less efficient than just reading the file, which we have to do anyway.
Also, note that we would read bytes into a temporary variable on the stack, so the only time we need to allocate memory from the heap is when we encounter a new word (actually, a new suffix). Therefore, garbage collection wouldn't happen nearly as often, and the total memory used by a radix tree would be a lot smaller than a HashMap.
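As a rough illustration of that idea, here is a minimal sketch, assuming ASCII input and whitespace-separated words. It uses a plain trie with per-character child arrays rather than a true compressed radix tree, and the names (TrieWordCounter, countWords) are just placeholders, not anything from the original answer:
import java.io.IOException;
import java.io.Reader;

// Hypothetical sketch: a plain trie whose nodes carry end-of-word counters
// instead of markers, fed one character at a time from a stream.
class TrieWordCounter {

    private static final class Node {
        final Node[] children = new Node[128]; // one slot per ASCII char
        int endOfWordCount;                    // replaces the usual EOW marker
    }

    private final Node root = new Node();

    // Read the stream char by char; on whitespace, bump the counter of the
    // current node and jump back to the root, as described above.
    public void countWords(Reader in) throws IOException {
        Node current = root;
        int c;
        while ((c = in.read()) != -1) {
            if (Character.isWhitespace(c)) {
                if (current != root) {
                    current.endOfWordCount++;
                    current = root;
                }
            } else if (c < 128) {
                if (current.children[c] == null) {
                    current.children[c] = new Node(); // only new suffixes allocate
                }
                current = current.children[c];
            }
        }
        if (current != root) {
            current.endOfWordCount++; // count the final word if no trailing space
        }
    }
}
Repeated words allocate nothing here; heap allocation only happens when a previously unseen suffix character is encountered.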

Theoretically, since HashMap access is generally O(1), I guess your algorithm is O(n), but in reality it has several inefficiencies. Ideally you would iterate over the contents of the file just once, processing (i.e. counting) the words while you read them in. There's no need to store the entire file contents in memory (your ArrayList). You loop over the contents three times: once to read them, and then twice more in the two loops in your code above. In particular, the first loop in your code is completely unnecessary. Finally, your use of HashMap will be slower than needed because its default capacity at construction is very small, so it will have to grow internally a number of times, forcing a rebuild of the hash table each time. It is better to start it off at a size appropriate for what you expect it to hold, and to take the load factor into account as well.
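A single-pass version along these lines might look like the sketch below; reading straight from the file with a Scanner, the file name, and the 500,000 expected-distinct-words figure used to presize the map are assumptions, not part of the original question:
import java.io.File;
import java.io.FileNotFoundException;
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

public class SinglePassWordCount {
    public static void main(String[] args) throws FileNotFoundException {
        // Presize the map; 500_000 expected distinct words is only a guess.
        Map<String, Integer> counts = new HashMap<>(500_000);

        // Single pass over the file: no ArrayList, no priming loop.
        try (Scanner in = new Scanner(new File("words.txt"))) {
            while (in.hasNext()) {
                counts.merge(in.next(), 1, Integer::sum);
            }
        }

        counts.forEach((word, n) -> System.out.println(word + " -> " + n));
    }
}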

Have you considered using a MapReduce solution? If the dataset gets bigger, it would really be better to split it into pieces and count the words in parallel.
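This would not be real MapReduce, but the same split/count/combine idea can be sketched in-process with parallel streams; the file name and whitespace tokenization are assumptions for the example:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ParallelWordCount {
    public static void main(String[] args) throws IOException {
        // "words.txt" is a placeholder path; each line is tokenized and counted
        // in parallel, then the partial counts are combined into one map.
        try (Stream<String> lines = Files.lines(Paths.get("words.txt"))) {
            Map<String, Long> counts = lines.parallel()
                    .flatMap(line -> Arrays.stream(line.split("\\s+")))
                    .filter(word -> !word.isEmpty())
                    .collect(Collectors.groupingByConcurrent(
                            Function.identity(), Collectors.counting()));
            System.out.println(counts.size() + " distinct words");
        }
    }
}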

You should read through the file with words only once.
No need to put the nulls beforehand - you can do it within the main loop.
The complexity is indeed O(n) in both cases, but you want to make the constant as small as possible. (O(n) = 1000 * O(n), right :) )

To answer your question, first you need to understand how HashMap works. It consists of buckets, and every bucket is a linked list. If, due to hashing, another pair needs to occupy the same bucket, it is added to the end of the linked list. So, if the map has a high load factor, searching and inserting are not O(1) anymore, and the algorithm becomes inefficient. Moreover, if the map's load factor exceeds the predefined threshold (0.75 by default), the whole map is rehashed.
This is an excerpt from JavaDoc http://download.oracle.com/javase/6/docs/api/java/util/HashMap.html:
The expected number of entries in the map and its load factor should
be taken into account when setting its initial capacity, so as to
minimize the number of rehash operations. If the initial capacity is
greater than the maximum number of entries divided by the load factor,
no rehash operations will ever occur.
So I would recommend predefining the map capacity, guessing that every word is unique:
Map<String,Integer> map= new HashMap<String,Integer>(al.size());
Without that, your solution is not as efficient as it could be, though it is still linear, roughly O(3n): due to the amortized cost of rehashing, inserting the elements will cost about 3n operations instead of n.
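If you also want to fold in the load factor from the quoted JavaDoc, a common sizing pattern is the small sketch below (the helper name MapSizing is made up for illustration):
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class MapSizing {
    // Capacity chosen so that the expected entry count never exceeds
    // capacity * loadFactor, which (per the JavaDoc above) avoids any rehash.
    static Map<String, Integer> presizedCountMap(List<String> words) {
        float loadFactor = 0.75f;                              // HashMap's default
        int initialCapacity = (int) (words.size() / loadFactor) + 1;
        return new HashMap<String, Integer>(initialCapacity, loadFactor);
    }
}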

Related

Generating an always-balanced binary search tree with only true insertion

So I was thinking of a problem I find very interesting and I would like to share the concept. The problem starts with a hypothetical data structure you define (it can be a list, array, tree, binary search tree, red-black tree, B-tree, etc.). The goal is obviously to optimize insertion, search, delete and update (you can consider update a search with replacement); the time complexity has to be as low as possible for every single type of operation (ideally O(1) or O(log n); try not to use an O(n) solution). The second part of the problem is that this structure, during a normal day of work, receives new elements with keys of increasing value starting from 1 up to N, where N can be Long.MAX_VALUE. Obviously, when a new key arrives it has to be inserted immediately, so the keys will go as follows:
[1,2,3,4,...,N]
I think I am close to the solution of this problem, but I am missing a bit more optimization. I was thinking of using either a tree or a hashtable, but the hashtable has a problem: when N becomes very high, the entire structure needs to be rehashed, or the complexity would become O(n). This, however, is not a problem with a tree, but I think a plain tree may degenerate into a sequence of elements (keep in mind that we have to insert every new element as it arrives), like this: 1 -> 2 -> 3 -> ... -> N.
In that case you can clearly see that this tree is not really a tree, it's a list; using an unbalanced BST would give the same result.
I think the correct structure to use is a BST (or something like it, for example a red-black tree) and find a way to always keep it balanced, but I am missing something.
If the "key" is an integer and the keys are generated by incrementing a counter starting from 1, then the obvious data structure for representing the key -> value mapping is a ValueType[]. Yes, an array.
There are two problems with this:
Arrays do not "grow" in Java.
Solutions:
Preallocate the array to be big enough to start with.
Use an ArrayList instead of an array.
"Borrow" the algorithm that ArrayList uses to grow a list and use it with a bare array.
Arrays cannot have more than Integer.MAX_VALUE elements. (And ArrayList has the same problem.)
Solution: use an array of arrays, and do some arithmetic to convert the long keys into a pair of ints for indexing the arrays (see the sketch below).
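A rough sketch of that array-of-arrays idea, with made-up names (BigArray, CHUNK_SIZE) and a lazily allocated second level:
// Hypothetical sketch: long keys are split into an outer and inner index so
// the structure can exceed Integer.MAX_VALUE slots in total.
class BigArray<ValueType> {
    private static final int CHUNK_SIZE = 1 << 20; // ~1M slots per inner array
    private final Object[][] chunks;

    BigArray(long capacity) {
        int outer = (int) ((capacity + CHUNK_SIZE - 1) / CHUNK_SIZE);
        chunks = new Object[outer][];
    }

    void put(long key, ValueType value) {
        int outer = (int) (key / CHUNK_SIZE);
        int inner = (int) (key % CHUNK_SIZE);
        if (chunks[outer] == null) {
            chunks[outer] = new Object[CHUNK_SIZE]; // allocate lazily
        }
        chunks[outer][inner] = value;
    }

    @SuppressWarnings("unchecked")
    ValueType get(long key) {
        Object[] chunk = chunks[(int) (key / CHUNK_SIZE)];
        return chunk == null ? null : (ValueType) chunk[(int) (key % CHUNK_SIZE)];
    }
}
Both put and get stay O(1), and the only growth cost is allocating a fresh chunk the first time its key range is touched.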

How to save memory when storing a lot of 'Entry' in a map in java?

I want to store 1*10^8 objects in a map for searching. When my program starts, it will read and store these objects in a map. After reading ends, the map is never updated until the program dies. I don't want the JVM to abandon any of them. I have learned that HashMap wastes a lot of memory; is there any type of map that can store so many objects and save memory?
And I know that the JVM will scan these objects during garbage collection, which wastes time. How do I avoid this?
Sorry, the situation is this: I am writing a bolt with Apache Storm. I want to read data from databases. When a bolt is processing a tuple, I need to do calculations with the data from the databases. For the performance of the program I have to store the data in memory. I know the JVM is not good at managing a lot of memory, so maybe I should try Koloboke?
HashMap needs to allocate a backing array of sufficient size in order to minimize hash collisions - it can happen that two or more objects that are not equal have the same hash code, and the probability of that depends on the quality of the hash function. Collisions are resolved by techniques such as linear probing, which stores the entry at the next (hash + i) mod length index that is not occupied; quadratic probing, which stores the entry at the next (hash + i^2) mod length index that is not occupied; and separate chaining, which stores a linked list of entries at each bucket. Collision probability is decreased by increasing the length of the backing array, at the cost of wasted memory.
However, you can use TreeMap, which stores entries in a tree structure that creates only as many nodes as there are entries, i.e. it uses memory efficiently.
Note that there is a difference in the complexity of the get, put, and remove operations: HashMap has complexity O(1), while TreeMap has complexity O(log n).
Suppose you want to get an entry from a map of size 100,000,000. Then in the worst case (the element to be found is a leaf, i.e. located at the last level of the tree), the path down the tree has length about log2(100,000,000) ≈ 27.
Well, I am back.
At first I used about 30 GB to store about 5x10^7 key-value entries, but GC was not stable. I had made the mistake of using a String to store a double; it is bigger than a double in memory, and a char is 16 bits in Java. After I fixed that mistake, GC was better, but still not good enough. Finally I used 'filedb' in MapDB to fix this.

A good data structure for storing and searching integers?

Edit: Typos fixed and ambiguity addressed.
I have a list of five digit integers in a text file. The expected amount can only be as large as what a 5-digit integer can store. Regardless of how many there are, the FIRST line in this file tells me how many integers are present, so resizing will never be necessary. Example:
3
11111
22222
33333
There are 4 lines. The first says there are three 5-digit integers in the file. The next three lines hold these integers.
I want to read this file and store the integers (not the first line). I then want to be able to search this data structure A LOT, nothing else. All I want to do, is read the data, put it in the structure, and then be able to determine if there is a specific integer in there. Deletions will never occur. The only things done on this structure will be insertions and searching.
What would you suggest as an appropriate data structure? My initial thought was a binary tree of sorts; however, upon thinking, a HashTable may be the best implementation. Thoughts and help please?
It seems like the requirements you have are
store a bunch of integers,
where insertions are fast,
where lookups are fast, and
where absolutely nothing else matters.
If you are dealing with a "sufficiently small" range of integers - say, up to around 16,000,000 or so - you could just use a bit vector for this. You'd store one bit per number, all initially zero, and then set a bit whenever that number is entered. This has extremely fast lookups and extremely fast setting, but is very memory-intensive and infeasible if the integers can be totally arbitrary. This would probably be modeled with a BitSet.
If you are dealing with arbitrary integers, a hash table is probably the best option here. With a good hash function you'll get a great distribution across the table slots and very, very fast lookups. You'd want a HashSet for this.
If you absolutely must guarantee worst-case performance at all costs and you're dealing with arbitrary integers, use a balanced BST. The indirection costs in BSTs make them a bit slower than other data structures, but balanced BSTs can guarantee worst-case efficiency that hash tables can't. This would be represented by TreeSet.
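For the 5-digit case in this question, all three suggestions are one-liners to set up. A small sketch (file reading omitted, sample value made up):
import java.util.BitSet;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class MembershipOptions {
    public static void main(String[] args) {
        // Values are 0..99_999 in this question, so all three fit easily.
        BitSet bits = new BitSet(100_000);      // densest option for a small, known range
        Set<Integer> hashed = new HashSet<>();  // expected O(1) lookups
        Set<Integer> ordered = new TreeSet<>(); // guaranteed O(log n) lookups

        int n = 11111;
        bits.set(n);
        hashed.add(n);
        ordered.add(n);

        System.out.println(bits.get(n) && hashed.contains(n) && ordered.contains(n)); // true
    }
}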
Given that
All numbers are <= 99,999
You only want to check for existence of a number
You can simply use some form of bitmap.
e.g. create a byte[12500] (it is 100,000 bits which means 100,000 booleans to store existence of 0-99,999 )
"Inserting" a number N means turning the N-th bit on. Searching a number N means checking if N-th bit is on.
Pseudocode for the insertion logic is:
bitmap[number / 8] |= (1 << (number % 8));
and searching looks like:
(bitmap[number / 8] & (1 << (number % 8))) != 0
If you understand the rationale, then there is even better news for you: in Java we already have BitSet, which does exactly what I described above.
So code looks like this:
BitSet bitset = new BitSet(100000); // one bit per possible value 0-99,999
// inserting number
bitset.set(number);
// search if number exists
bitset.get(number); // true if exists
If the number of times each number occurs doesn't matter (as you said, only inserts and checking whether a number exists), then you'll only have a maximum of 100,000 values. Just create an array of booleans:
boolean[] numbers = new boolean[100000];
This should take only 100 kilobytes of memory.
Then instead of adding a number like 11111, 22222, 33333, do:
numbers[11111]=true;
numbers[22222]=true;
numbers[33333]=true;
To see if a number exists, just do:
int whichNumber = 11111;
numberExists = numbers[whichNumber];
There you are. Easy to read, easier to maintain.
A Set is the go-to data structure for "finding", and here's the tiny amount of code you need to make it happen:
Scanner scanner = new Scanner(new FileInputStream("myfile.txt"));
Set<Integer> numbers = Stream.generate(scanner::nextInt)
.limit(scanner.nextInt())
.collect(Collectors.toSet());

Which is the fastest structure for processing integers (add, read from thousands of nodes)?

I am trying to find out which structure would be the fastest, because I have a problem with my code. I have a large amount of data to store; maybe thousands of nodes are needed. My first thought was to create an ArrayList and start adding integers to use later. This ArrayList will be useful for fast access to bytes in random access files. So I put in the first node, which represents a pointer to the first entry in a RandomAccessFile; then I put in the second the same way, and so on.
My program takes too long when putting the integers into the ArrayList.
Could I fix my code by using a faster structure?
Yes, you can use a LinkedList. Your ArrayList has amortized O(1) insertion, but when a huge ArrayList needs to be resized, it takes a long time to allocate a new backing array, copy the current elements over, and continue.
E.g., if you have 10 million elements in your ArrayList and it is full, then when you insert one more, the ArrayList has to grow its backing array and copy all the elements into the new one. This is a very expensive operation.
If you use a LinkedList you have O(1) insertion but no random access, so if you want to access the nth element you have to traverse all the nodes up to n, which takes O(n). But do you really need to do that?
So a LinkedList is your option - possibly a doubly linked list.
If you want fast reads as well as fast insertion, you can use a Dictionary or HashMap. You have O(1) writes and reads, if and only if you have good hashing.
But again, internally HashTable and Dictionary use arrays, so once your dictionary grows too large you will have the same problem; moreover, each time the backing array expands, the entries have to be rehashed into the new table.
You can use trees, with O(log n) writes and reads.
You can use a skip list, with O(log n) writes and reads.
An ArrayList is clearly not the fastest thing here, because an ArrayList does not contain int but the Integer wrapper type. Therefore a plain int[] array has the lowest overhead.
On the other hand, if you can omit the list/array completely and do the calculations immediately, that would save even more overhead. This leads in the direction of not micro-optimizing, but thinking about the problem and perhaps using a completely different algorithm.
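As a sketch of the plain int[] direction: a small growable wrapper around a bare array avoids Integer boxing entirely while keeping amortized O(1) appends. The class name IntList and the growth factor are illustrative assumptions:
import java.util.Arrays;

// Hypothetical sketch: a growable int list backed by a bare int[],
// with ArrayList-style growth but no boxing.
class IntList {
    private int[] data = new int[16];
    private int size;

    void add(int value) {
        if (size == data.length) {
            data = Arrays.copyOf(data, data.length + (data.length >> 1)); // grow by ~1.5x
        }
        data[size++] = value;
    }

    int get(int index) {
        return data[index];
    }

    int size() {
        return size;
    }
}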

Most frequently repeated numbers in a huge list of numbers

I have a file which has many random integers (around a million), each separated by whitespace. I need to find the top 10 most frequently occurring numbers in that file. What is the most efficient way of doing this in Java?
I can think of:
1. Create a hash map; the key is the integer from the file and the value is the count. For every number in the file, check if that key already exists in the hash map; if yes, value++, else make a new entry in the hash map.
2. Make a BST; each node is an integer from the file. For every integer from the file, see if there is a node in the BST; if yes, do value++ (value is part of the node).
I feel the hash map is the better option if I can come up with a good hashing function.
Can someone please suggest the best way of doing this? Is there any other efficient algorithm that I can use?
Edit #2:
Okay, I screwed up my own first rule--never optimize prematurely. The worst case for this is probably using a stock HashMap with a wide range--so I just did that. It still runs in like a second, so forget everything else here and just do that.
And I'll make ANOTHER note to myself to ALWAYS test speed before worrying about tricky implementations.
(Below is an older, obsolete post that could still be valid if someone had MANY more points than a million.)
A HashSet would work, but if your integers have a reasonable range (say, 1-1000), it would be more efficient to create an array of 1000 integers, and for each of your million integers, increment that element of the array. (Pretty much the same idea as a HashMap, but optimizing out a few of the unknowns that a Hash has to make allowances for should make it a few times faster).
You could also create a tree. Each node in the tree would contain (value, count) and the tree would be organized by value (lower values on the left, higher on the right). Traverse to your node, if it doesn't exist--insert it--if it does, then just increment the count.
The range and distribution of your values would determine which of these two (or a regular hash) would perform better. I think a regular hash wouldn't have many "winning" cases though (it would have to be a wide range and "grouped" data, and even then the tree might win).
Since this is pretty trivial--I recommend you implement more than one solution and test speeds against the actual data set.
Edit: RE the comment
TreeMap would work, but would still add a layer of indirection (and it's so amazingly easy and fun to implement yourself). If you use the stock implementation, you have to use Integers and convert constantly to and from int for every increase. There is the indirection of the pointer to the Integer, and the fact that you are storing at least 2x as many objects. This doesn't even count any overhead for the method calls since they should be inlined with any luck.
Normally this would be an optimization (evil), but when you start to get near hundreds of thousands of nodes, you occasionally have to ensure efficiency, so the built-in TreeMap is going to be inefficient for the same reasons the built-in HashSet will.
Java handles hashing. You don't need to write a hash function. Just start pushing stuff in the hash map.
Also, if this is something that only needs to run once (or only occasionally), then don't bother optimizing. It will be fast enough. Only bother if it's something that's going to run repeatedly within an application.
HashMap
A million integers is not really a lot, even for interpreted languages, but especially for a speedy language like Java. You'll probably barely even notice the execution time. I'd try this first and move to something more complicated if you deem this too slow.
It will probably take longer to do string splitting and parsing to convert to integers than even the simplest algorithm to find frequencies using a HashMap.
Why use a hashtable? Just use an array that is the same size as the range of your numbers. Then you don't waste time executing the hashing function. Then sort the values after you're done. O(N log N)
Allocate an array / vector of the same size as the number of input items you have
Fill the array from your file with numbers, one number per element
Put the list in order
Iterate through the list and keep track of the the top 10 runs of numbers that you have encountered.
Output the top ten runs at the end.
As a refinement on step 4, you only need to step forward through the array in steps equivalent to your 10th-longest run. Any run longer than that will overlap with your sampling. If the tenth-longest run is 100 elements long, you only need to sample elements 100, 200, 300, and so on, and at each point count the run of the integer you find there (both forwards and backwards). Any run longer than your 10th longest is sure to overlap with your sampling.
You should apply this optimisation only once your 10th-longest run is very long compared to other runs in the array.
A map is overkill for this question unless you have very few unique numbers each with a large number of repeats.
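Here is a small sketch of the sort-then-count-runs steps above, without the sampling refinement; the sample array is a stand-in for the parsed file contents:
import java.util.Arrays;

// Sort the numbers, walk the sorted array once, and keep the ten longest
// runs in two small parallel arrays.
public class TopTenRuns {
    public static void main(String[] args) {
        int[] numbers = {5, 3, 5, 9, 3, 5, 9, 9, 9, 1};   // stand-in for the file contents

        Arrays.sort(numbers);                              // step 3: put the list in order

        int[] topValue = new int[10];
        int[] topCount = new int[10];                      // all zero initially

        int i = 0;
        while (i < numbers.length) {
            int j = i;
            while (j < numbers.length && numbers[j] == numbers[i]) j++;  // measure this run
            int runLength = j - i;

            // Replace the shortest of the current top ten if this run beats it.
            int min = 0;
            for (int k = 1; k < topCount.length; k++) {
                if (topCount[k] < topCount[min]) min = k;
            }
            if (runLength > topCount[min]) {
                topCount[min] = runLength;
                topValue[min] = numbers[i];
            }
            i = j;
        }

        for (int k = 0; k < topCount.length; k++) {
            if (topCount[k] > 0) {
                System.out.println(topValue[k] + " occurs " + topCount[k] + " times");
            }
        }
    }
}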
NB: Similar to gshauger's answer but fleshed out
If you have to make it as efficient as possible, use an array of ints, with the position representing the value and the content representing the count. That way you avoid autoboxing and unboxing, the most likely killer of a standard Java collection.
If the range of numbers is too large then take a look at PJC and its IntKeyIntMap implementations. It will avoid the autoboxing as well. I don't know if it will be fast enough for you, though.
If the range of numbers is small (e.g. 0-1000), use an array. Otherwise, use a HashMap<Integer, int[]>, where the values are all length 1 arrays. It should be much faster to increment a value in an array of primitives than create a new Integer each time you want to increment a value. You're still creating Integer objects for the keys, but that's hard to avoid. It's not feasible to create an array of 2^31-1 ints, after all.
If all of the input is normalized so you don't have values like 01 instead of 1, use Strings as keys in the map so you don't have to create Integer keys.
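A brief sketch of the length-1 int[] trick described above; the sample input array and class name are placeholders:
import java.util.HashMap;
import java.util.Map;

// The int[] holding the count is mutated in place, so incrementing an
// existing entry never allocates a new Integer value object.
public class ArrayValueCounter {
    public static void main(String[] args) {
        int[] input = {7, 7, 42, 7, 42};   // stand-in for the parsed file

        Map<Integer, int[]> counts = new HashMap<>();
        for (int n : input) {
            int[] slot = counts.get(n);
            if (slot == null) {
                counts.put(n, new int[] {1}); // first sighting
            } else {
                slot[0]++;                     // in-place increment, no new objects
            }
        }

        counts.forEach((k, v) -> System.out.println(k + " -> " + v[0]));
    }
}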
Use a HashMap to create your dataset (value-count pairs) in memory as you traverse the file. The HashMap should give you close to O(1) access to the elements while you create the dataset (technically, in the worst case HashMap is O(n)). Once you are done searching the file, use Collections.sort() on the value Collection returned by HashMap.values() to create a sorted list of value-count pairs. Using Collections.sort() is guaranteed O(n log n).
For example:
// Imports needed by this snippet:
import java.io.File;
import java.io.FileInputStream;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Scanner;

public static class Count implements Comparable<Count> {
    int value;
    int count;

    public Count(int value) {
        this.value = value;
        this.count = 1;
    }

    public void increment() {
        count++;
    }

    public int compareTo(Count other) {
        return other.count - count;   // sort descending by count
    }
}

public static void main(String args[]) throws Exception {
    Scanner input = new Scanner(new FileInputStream(new File("...")));
    HashMap<Integer, Count> dataset = new HashMap<Integer, Count>();
    while (input.hasNextInt()) {
        int tempInt = input.nextInt();
        Count tempCount = dataset.get(tempInt);
        if (tempCount != null) {
            tempCount.increment();
        } else {
            dataset.put(tempInt, new Count(tempInt));
        }
    }
    List<Count> counts = new ArrayList<Count>(dataset.values());
    Collections.sort(counts);
    // Print the ten most frequent values.
    for (Count c : counts.subList(0, Math.min(10, counts.size()))) {
        System.out.println(c.value + " x" + c.count);
    }
}
Actually, there is an O(n) algorithm for doing exactly what you want to do. Your use case is similar to an LFU cache, where an element's access count determines whether it stays in the cache or is evicted from it.
http://dhruvbird.blogspot.com/2009/11/o1-approach-to-lfu-page-replacement.html
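The linked post describes an O(1)-per-operation LFU-style structure; as a much simpler stand-in for "top 10 by count", a bounded min-heap over a finished count map also works, sketched below with made-up sample data:
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

// Keep at most 10 entries in a min-heap ordered by count; whatever survives
// the pass over the count map is the top 10.
public class TopKByCount {
    public static void main(String[] args) {
        Map<Integer, Integer> counts = new HashMap<>();
        int[] input = {1, 2, 2, 3, 3, 3};          // stand-in data
        for (int n : input) counts.merge(n, 1, Integer::sum);

        PriorityQueue<Map.Entry<Integer, Integer>> heap =
                new PriorityQueue<>((a, b) -> Integer.compare(a.getValue(), b.getValue()));
        for (Map.Entry<Integer, Integer> e : counts.entrySet()) {
            heap.offer(e);
            if (heap.size() > 10) heap.poll();     // drop the current least frequent
        }

        heap.forEach(e -> System.out.println(e.getKey() + " x" + e.getValue()));
    }
}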
This is the source for java.lang.Integer.hashCode(), which is the hashing function that will be used if you store your entries as a HashMap<Integer, Integer>:
public int hashCode() {
    return value;
}
So in other words, the (default) hash value of a java.lang.Integer is the integer itself.
What is more efficient than that?
The correct way to do it is with a linked list. When you insert an element, you go down the linked list; if it's there, you increment the node's count, otherwise you create a new node with a count of 1. After you have inserted each element, you would have a sorted list of elements in O(n*log(n)).
For your methods, you are doing n inserts and then sorting in O(n*log(n)), so your coefficient on the complexity is higher.
