I have a very large file (10^8 lines) with counts of events as follows,
A 10
B 11
C 23
A 11
I need to accumulate the counts for each event, so that my map contains
A 21
B 11
C 23
My current approach:
Read the lines, maintain a map, and update the counts in the map as follows
void updateCount(Map<String, Long> countMap, String key, Long c) {
    if (countMap.containsKey(key)) {
        Long val = countMap.get(key);
        countMap.put(key, val + c);
    } else {
        countMap.put(key, c);
    }
}
Currently this is the slowest part of the code (it takes around 25 ms).
Note that the map is based on MapDB, but I doubt that updates are slow due to that (are they?)
These are the MapDB configs for the map:
DBMaker.newFileDB(dbFile).freeSpaceReclaimQ(3)
.mmapFileEnablePartial()
.transactionDisable()
.cacheLRUEnable()
.closeOnJvmShutdown();
Are there ways to speed this up?
EDIT:
The number of unique keys is of the order of the pages in wikipedia. The data is actually page traffic data from here.
You might try
class Counter {
    long count;
}

void updateCount(Map<String, Counter> countMap, String key, int c) {
    Counter counter = countMap.get(key);
    if (counter == null) {
        counter = new Counter();
        countMap.put(key, counter);
        counter.count = c;
    } else {
        counter.count += c;
    }
}
This does not create a Long wrapper per update; it only allocates one Counter per key.
Note: do not create Longs. Above I made c an int so that the long/Long distinction is not overlooked.
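For context, a minimal sketch of how the read loop could feed this method, assuming the two-column whitespace-separated format from the question (the file name and parsing details are illustrative, not from the original post):
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Lives alongside the Counter/updateCount definitions above.
Map<String, Counter> accumulate(String fileName) throws IOException {
    Map<String, Counter> countMap = new HashMap<String, Counter>();
    BufferedReader br = new BufferedReader(new FileReader(fileName));
    String line;
    while ((line = br.readLine()) != null) {
        String[] parts = line.split("\\s+");             // e.g. "A 10"
        updateCount(countMap, parts[0], Integer.parseInt(parts[1]));
    }
    br.close();
    return countMap;
}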
As a starting point, I'd suggest thinking about:
What is the yardstick by which you're saying that 25 ms is actually an unreasonable amount of time for the amount of data involved and for a generic map implementation? If you quantify that, it might help you work out whether there is anything wrong.
How much time is being spent re-hashing the map versus other operations (e.g. calculation of hash codes on each put)?
What do your "events", as you call them, consist of? How many unique events -- and hence unique keys -- are there? How are keys to the map being generated, and is there a more efficient way to do so? (In a standard hash map, for example, you create additional objects for each association, and actually store the key objects, increasing the memory footprint.)
Depending on the answers to the previous, you could potentially roll a more efficient map structure yourself (see this example that you might be able to adapt). Essentially, you need to look specifically at what is taking the time (e.g. hash code calculation per put / cost of rehashing) and try and optimise that part.
If you are using a TreeMap, there are performance tuning options like
The number of entries in each node.
You could also use specific key and value serializers that will speed up the serialization and de-serialization.
You could use Pump mode to build the tree, which is very very fast. But one caveat is that this is useful when you are building a new map from scratch. You can find the full example here
https://github.com/jankotek/MapDB/blob/master/src/test/java/examples/Huge_Insert.java
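For illustration, registering serializers and tuning the node size might look roughly like the sketch below. The method and constant names are from the MapDB 1.x API as I remember it (createTreeMap, BTreeKeySerializer.STRING, Serializer.LONG, nodeSize), so treat them as assumptions and verify against your MapDB version:
DB db = DBMaker.newFileDB(dbFile)
        .mmapFileEnablePartial()
        .transactionDisable()
        .cacheLRUEnable()
        .closeOnJvmShutdown()
        .make();

// Dedicated serializers avoid generic Java serialization for every key/value.
BTreeMap<String, Long> counts = db.createTreeMap("counts")
        .keySerializer(BTreeKeySerializer.STRING)   // assumed constant name
        .valueSerializer(Serializer.LONG)           // assumed constant name
        .nodeSize(32)                               // entries per BTree node, tune as needed
        .makeOrGet();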
Related
I wonder whether, if I use a HashMap to collect the conditions and loop over it with a single if statement, I can reach higher performance than writing one-by-one if / else-if statements?
In my opinion, the one-by-one if/else-if statements may be faster, because the loop evaluates one extra condition on each iteration (has the counter reached the target number?), so effectively each if statement in the loop runs two checks. Of course the bodies of the statements differ, but if we talk about just the statement performance, I think the one-by-one type would be better?
Edit: this is just sample code; my question is about the performance differences between the usage of these statements.
Map<String, Integer> words = new HashMap<String, Integer>();
String letter = "d";
int n = 4;
words.put("a", 1);
words.put("b", 2);
words.put("c", 3);
words.put("d", 4);
words.put("e", 5);
words.forEach((word, number) -> {
    if (letter.equals(word)) {
        System.out.println(number * n);
    }
});
String letter = "d";
int n = 4;
if (letter.equals("a")) {
    System.out.println(n * 1);
} else if (letter.equals("b")) {
    System.out.println(n * 2);
} else if (letter.equals("c")) {
    System.out.println(n * 3);
} else if (letter.equals("d")) {
    System.out.println(n * 4);
} else if (letter.equals("e")) {
    System.out.println(n * 5);
}
For your example, having a HashMap but then doing an iterative lookup seems to be a bad idea. The point of using a HashMap is to be able to do a hash based lookup. That is much faster than doing an iterative lookup.
Also, from your example, cascading if-then tests will definitely be faster, since they will avoid the overhead of the map iterator and extra function calls. Also, they will avoid the overhead of the map iterator skipping empty storage locations in the hash map backing array. A better question is whether the cascading if-thens are faster than iterating across a simple list. That is hard to answer. Cascading if-thens seem likely to be faster, except that if there are a lot of if-thens, then a cost of loading the code should be added.
For string lookups, a list data structure provides adequate behavior up to a limiting value, above which a more sophisticated data structure must be used. What is the limiting value depends on the environment. For string comparisons, I've found the transition between 20 and 100 elements.
For particular lookups, and depending on whether low-level optimizations are available, the transition value may be much larger. For example, when doing integer lookups in C, which can do direct memory lookups, the transition value is much higher.
Typical data structures are HashMaps, Tries, and sorted arrays. Each fits particular patterns of access. For example, sorted arrays are fastest and most compact, but are expensive to update. HashMaps support dynamic updates, and for good hash functions, provide constant time lookups. But, HashMaps are space inefficient, since they depend on having empty cells between hash values.
For cases which do not involve "very large" data sets, and which are not in critical "hot" code paths, HashMaps are the usual structure which is used.
If you have a Map and you want to retrieve one letter, I'm not sure why you would loop at all?
Map<String, Integer> words = new HashMap<String, Integer>();
String letter = "d";
int n = 4;
words.put("a", 1);
words.put("b", 2);
words.put("c", 3);
words.put("d", 4);
words.put("e", 5);
if (words.containsKey(letter)) {
    System.out.println(words.get(letter) * n);
} else {
    System.out.println(letter + " doesn't exist in Map");
}
If you aren't using the benefits of a Map, then why use a Map at all?
A forEach will actually touch every key in the map. The number of checks in your if/else chain depends on where the letter sits in the chain and how long the list of available letters is. If the letter you choose is the last one, it will complete all checks before printing. If it is first, it will only do one check, which is much faster than having to check them all.
It would be easy for you to write the two examples and run a timer to determine which is actually faster.
https://www.baeldung.com/java-measure-elapsed-time
There are a lot of wasted calculations if you have to run through 1 million if/else statements and only select one which could be anywhere in the list. This doesn't include typos and the horror of code maintenance. Using a Map with an index would be much quicker. If you are only talking about 100 if/else statements (still too many in my opinion) then you may be able to break even on speed.
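If you do write that timer, a quick-and-dirty sketch along these lines would at least give a rough comparison (this is not a proper benchmark harness such as JMH, so the JIT may distort the numbers; the loop count is arbitrary). It assumes the words, letter and n variables from the snippet above:
// Rough timing sketch; treat the results as indicative only.
long start = System.nanoTime();
for (int i = 0; i < 1_000_000; i++) {
    Integer number = words.get(letter);      // direct hash lookup
    if (number != null) {
        int result = number * n;
    }
}
long mapTime = System.nanoTime() - start;

start = System.nanoTime();
for (int i = 0; i < 1_000_000; i++) {
    if (letter.equals("a")) { int r = n * 1; }
    else if (letter.equals("b")) { int r = n * 2; }
    else if (letter.equals("c")) { int r = n * 3; }
    else if (letter.equals("d")) { int r = n * 4; }
    else if (letter.equals("e")) { int r = n * 5; }
}
long ifTime = System.nanoTime() - start;

System.out.printf("map: %d ms, if/else: %d ms%n", mapTime / 1_000_000, ifTime / 1_000_000);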
I'm not used to working with really large datasets and I'm kind of stumped here.
I have the following code:
private static Set<String> extractWords(BufferedReader br) throws IOException {
    String strLine;
    String tempWord;
    Set<String> words = new HashSet<String>();
    Utils utils = new Utils();
    int articleCounter = 0;
    while ((strLine = br.readLine()) != null) {
        if (utils.lineIsNotCommentOrLineChange(strLine)) {
            articleCounter++;
            System.out.println("Working article : " + utils.getArticleName(strLine) + " *** Article #" + articleCounter + " of 3.769.926");
            strLine = utils.removeURLs(strLine);
            strLine = utils.convertUnicode(strLine);
            String[] temp = strLine.split("\\W+");
            for (int i = 0; i < temp.length; i++) {
                tempWord = temp[i].trim().toLowerCase();
                if (utils.validateWord(tempWord)) {
                    words.add(tempWord);
                    System.out.println("Added word " + tempWord + " to list");
                }
            }
        }
    }
    return words;
}
This basically gets a huge text file from the BufferedReader where each line of text is a text from an article. I want to make a list of unique words in this text file, but there are 3.769.926 articles in there, so the word count is quite immense.
From what I understand about Sets, or specifically HashSets, this should be the man for the job, so to speak. Everything runs quite smoothly at first, but after 500.000 articles it starts slowing down a bit. When it reaches 700.000 it's beginning to get slow enough that it basically stops for a second or two before going on again. There's a bottleneck here somewhere, and I can't see what it is...
Any ideas?
I believe the issue you may be facing is that a hash table (set or map) is backed by an array with a fixed number of entries it can hold. So your first declaration may have a table able to hold 16 entries. Putting aside things like load factors, once you try to put a 17th entry into the table, it has to grow to accommodate more entries and prevent collisions, so Java will expand it for you.
This expansion involves creating a new table with 2 * previousSize entries, then copying over the old entries. So if you are constantly expanding, you may end up hitting a size, like 524,288, where it has to grow again: it will create a new table able to handle 1,048,576 entries, but it will have to copy over the entire previous table first.
If you don't mind the extra lookup time, you might think about using a TreeSet instead of a HashSet. Your lookups will now take logarithmic time, but a tree doesn't have a pre-allocated table and can grow dynamically easily. Either use this, or declare the size of your HashSet so it won't grow dynamically.
Honestly for that sort of scale you are better off going over to a database. You can embed Derby inside your application if you don't want to use a separate one.
Their indexing systems are optimised for this sort of scale, and while HashSet etc will cope if you massage them right you are better off using the right tool for it.
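If you went that route, a minimal embedded-Derby sketch might look like the following. The JDBC URL, table layout and the extractedWords collection are illustrative assumptions, not from the question; uniqueness is delegated to the primary key:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Unique words enforced by a primary key instead of an in-memory HashSet.
try (Connection conn = DriverManager.getConnection("jdbc:derby:wordsDB;create=true")) {
    conn.createStatement().execute("CREATE TABLE words (word VARCHAR(200) PRIMARY KEY)");
    PreparedStatement insert = conn.prepareStatement("INSERT INTO words VALUES (?)");
    for (String word : extractedWords) {   // hypothetical collection from your parsing loop
        try {
            insert.setString(1, word);
            insert.executeUpdate();
        } catch (SQLException duplicate) {
            // most likely a duplicate-key violation; in real code check the SQLState before ignoring
        }
    }
}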
As noted by TheSageMage, the HashSet implementation will constantly resize the underlying HashMap as the data grows. There are a couple of ways of getting around that: initial capacity and load factor. You can set both by using the 2-arg constructor: HashSet(int, float). If you know the approximate number of words you are going to need, you can set the initial capacity to be bigger than that number. This will make smaller maps work a little slower, but will prevent the dramatic slow-down for larger maps. The load factor is how full the map must get before the underlying table is resized and rehashed. Since this is a relatively time-consuming operation for large maps, you may want to set it to a large fraction, say 0.9. If your initial capacity is set so that you may exceed it but will never exceed twice that size, a large load factor will guarantee that you rehash only once, and as late as possible.
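Concretely, that could look like the sketch below; the expected word count is a placeholder figure, plug in your own estimate:
// 8 million expected unique words is an assumed figure; use your own estimate.
int expectedWords = 8_000_000;
// Capacity chosen so the table is not rehashed until the estimate is exceeded.
Set<String> words = new HashSet<String>((int) (expectedWords / 0.9f) + 1, 0.9f);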
I would like to store key-value pairs, where key is an integer and values are ArrayLists of Strings.
I cannot use a database because I have to use code to solve a problem online for a particular contest.
For small amounts of data I am able to work with hashtables without any problem.
But when my data becomes big I run out of heap size. I can not change the heapsize as I have to upload just the code and I cannot provide a working environment.
That is the challenge.
If the strings are repeated often (they have natural-language frequencies), do not use new object instances for the same string.
private Map<String, String> sharedStrings = new HashMap<>();

public String shareString(String s) {
    String t = sharedStrings.get(s);
    if (t == null) {
        t = s;
        sharedStrings.put(t, t);
    }
    return t;
}
A numbering of the strings probably is too slow.
Packing the list of strings into a single String (with some control character as separator), and possibly gzipping that String (GZIPOutputStream, GZIPInputStream).
Tune the hash map with a sufficient initial capacity. (Sorry if I state the obvious.)
Do your own allocation of all the lists, using one huge String[]:
int count;
String[] allStrings = new String[999999];
Map<Integer, Long> map = new HashMap<>(9999);

void put(int key, List<String> strings) {
    int start = count;
    for (String s : strings) {
        allStrings[count] = s;
        ++count;
    }
    // high 32 bits: start index, low 32 bits: size
    long listDescriptor = (((long) start) << 32) | (count - start);
    map.put(key, listDescriptor);
}
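For completeness, the matching lookup would decode the descriptor again; a sketch under the same assumptions as the put above (not part of the original answer):
List<String> get(int key) {
    Long listDescriptor = map.get(key);
    if (listDescriptor == null) {
        return null;
    }
    int start = (int) (listDescriptor >>> 32);         // high 32 bits: start index
    int size = (int) (listDescriptor & 0xFFFFFFFFL);   // low 32 bits: size
    List<String> result = new ArrayList<>(size);
    for (int i = start; i < start + size; i++) {
        result.add(allStrings[i]);
    }
    return result;
}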
There are map implementations using primitives like int and long; the Trove library, for instance (I have not used it myself).
Using a simple array instead of ArrayList may save some additional memory (but not much).
If search performance is not a priority, you may use a Pair<Integer, List<>> and do the search manually.
If the range of integers is limited, just instantiate an array of List[integer_range] and use the array index as key.
Since you are using Strings, you may try to intern() them and make sure there are no repeating values.
Let us know what statistical information about the data you have - what are the keys, whether the values repeat themselves, etc.
Some ideas
If you can write to a file store the data there. You could maybe keep the keys in a set in memory for faster lookup and just write out the values - either to a single file or maybe even a file per entry.
Create your own map implementation that serializes the list of values into a String or byte[] and then compress the serialised data. You will have to deserialise on a read. Every time you do a get/put you will take a big runtime hit for this though. See http://theplateisbad.blogspot.co.uk/2011/04/java-in-memory-compression.html for an example.
When the map data is looked up, simply calculate the list values on the fly instead of storing them - if you can.
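To illustrate the second idea, here is a sketch of serializing a value list to gzipped bytes and back; the separator character is an assumption, pick one that never occurs in your strings:
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// '\u0001' as separator is an assumption; the map would then be Map<Integer, byte[]>.
static byte[] compress(List<String> values) throws IOException {
    String joined = String.join("\u0001", values);
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
        gz.write(joined.getBytes(StandardCharsets.UTF_8));
    }
    return bos.toByteArray();
}

static List<String> decompress(byte[] data) throws IOException {
    try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(data))) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = gz.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        String joined = new String(out.toByteArray(), StandardCharsets.UTF_8);
        return Arrays.asList(joined.split("\u0001"));
    }
}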
One possible optimization might be ArrayList.trimToSize which reduces the storage used by ArrayList to minimum.
You could store the ArrayList in serialized (maybe even compressed) ByteBuffers. When you need to access a list, you would need to deserialize it, change/read it, and then store it back.
Operations would be significantly slower, but you could do some caching to keep X Arraylists in the heap and store the rest outside.
If you cannot increase the heap size then you need to limit the size of your hashtable (or any other datastructure you use). I would recommend to try the Apache LRUMap:
LRUMap
An implementation of a Map which has a maximum size and uses a Least Recently Used algorithm to remove items from the Map when the maximum size is reached and new items are added.
And if you really need a synchronized version then that is also available:
A synchronized version can be obtained with: Collections.synchronizedMap(theMapToSynchronize). If it will be accessed by multiple threads, you must synchronize access to this Map. Even concurrent get(Object) operations produce indeterminate behaviour.
And if you don't want to lose data through LRU eviction, then you need to write an algorithm that keeps some data in your data structure and the rest in persistent storage such as a file, etc.
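Usage is a one-liner; the package name below is from Commons Collections 4.x (3.x uses org.apache.commons.collections.LRUMap) and the 100,000-entry cap is an assumption you would size to your heap:
import org.apache.commons.collections4.map.LRUMap;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Keeps at most 100_000 entries; the least recently used one is dropped when full.
Map<Integer, List<String>> cache = new LRUMap<Integer, List<String>>(100_000);
// Wrap it if multiple threads will touch it, as the docs quoted above require.
Map<Integer, List<String>> threadSafeCache = Collections.synchronizedMap(cache);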
Context: I'm working on an analytics system for an ordering system. There are about 100,000 orders per day and the analytics need to run over the last N (say, 100) days. The relevant data fits in memory. After N days, all orders are evicted from the memory cache, with an entire day in the past being evicted at once. Orders can be created or updated.
A traditional approach would use a ConcurrentHashMap<Date, Queue<Order>>. Every day, values for keys representing dates more than N days in the past will be deleted. But, of course, the whole point of using Guava is to avoid this. EDIT: changed Map to ConcurrentHashMap, see the end of the question for rationale.
With Guava collections, a MultiMap <Date, Order> would be simpler. Eviction is similar, implemented explicitly.
While the Cache implementation looks appealing (after all, I am implementing a cache), I'm not sure about the eviction options. Eviction only happens once a day and it's best initiated from outside the cache; I don't want the cache to have to check the age of an order. I'm not even sure the cache would work with a MultiMap, which I think is a suitable data structure in this case.
Thus, my question is: is it possible to use a Cache that uses and exposes the semantics of a MultiMap and allows evictions controlled from outside itself, in particular with the rule I need ("delete all orders older than N days") ?
As an important clarification, I'm not interested in a LoadingCache but I do need bulk loads (if the application needs to be restarted, the cache has to be populated, from the database, with the last N days of orders).
EDIT: Forgot to mention that the map needs to be concurrent, as orders come in they are evaluated live against the previous orders for the same customer or location etc.
EDIT2: Just stumbled over Guava issue 135. It looks like the MultiMap is not concurrent.
I would use neither a Cache nor a Multimap here. While I like and use both of them, there's not much to gain here.
You want to evict your entries manually, so the features of Cache don't really get used here.
You're considering ConcurrentHashMap<Date, Queue<Order>>, which is in a sense more powerful than a Multimap<Date, Order>.
I'd use a Cache, if I thought about different eviction criteria and if I felt like losing any of its entries anytime1 is fine.
You may find out that you need a ConcurrentMap<Date, Dequeue<Order>> or maybe ConcurrentMap<Date, YouOwnQueueFastSearchList<Order>> or whatever. This could probably be managed somehow by the Multimap, but IMHO it gets more complicated instead of simpler.
I'd ask myself "what do I gain by using Cache or Multimap here?". To me it looks like the plain old ConcurrentMap offers about everything you need.
1 By no means am I suggesting this would happen with Guava. On the contrary, without an eviction reason (capacity, expiration, ...) it works just like a ConcurrentMap. It's just that what you've described feels more like a Map than a Cache.
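To make that concrete, here is a sketch of the plain-ConcurrentMap approach with manual daily eviction. It assumes Java 8+, uses java.time.LocalDate for the day keys (the question's Date works the same way), and Order is the question's own type:
import java.time.LocalDate;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentMap;

// One queue of orders per day; both levels are safe for concurrent writers.
ConcurrentMap<LocalDate, Queue<Order>> ordersByDay = new ConcurrentHashMap<>();

void addOrder(LocalDate day, Order order) {
    ordersByDay.computeIfAbsent(day, d -> new ConcurrentLinkedQueue<Order>()).add(order);
}

// Run once a day (e.g. from a scheduled task): drop everything older than N days.
void evictOlderThan(int days) {
    LocalDate cutoff = LocalDate.now().minusDays(days);
    ordersByDay.keySet().removeIf(day -> day.isBefore(cutoff));
}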
IMHO, the simplest thing to do is to include the date of the order in the order record. (I would expect it is a field already.) As you only need to clean the cache once per day, it doesn't have to be very efficient, just reasonably timely.
e.g.
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class Main {
    static class Order {
        final long time;

        Order(long time) {
            this.time = time;
        }

        public long getTime() {
            return time;
        }
    }

    final Map<String, Order> orders = new LinkedHashMap<String, Order>();

    public void expireOrdersOlderThan(long dateTime) {
        for (Iterator<Order> iter = orders.values().iterator(); iter.hasNext(); )
            if (iter.next().getTime() < dateTime)
                iter.remove();
    }

    private void generateOrders() {
        for (int i = 0; i < 120000; i++) {
            orders.put("order-" + i, new Order(i));
        }
    }

    public static void main(String... args) {
        for (int t = 0; t < 3; t++) {
            Main m = new Main();
            m.generateOrders();
            long start = System.nanoTime();
            for (int i = 0; i < 20; i++)
                m.expireOrdersOlderThan(i * 1000);
            long time = System.nanoTime() - start;
            System.out.printf("Took an average of %.3f ms to expire 1%% of entries%n", time / 20 / 1e6);
        }
    }
}
prints
Took an average of 9.164 ms to expire 1% of entries
Took an average of 8.345 ms to expire 1% of entries
Took an average of 7.812 ms to expire 1% of entries
For 100,000 orders, I would expect this to take ~10 ms which is not so much to incur at a quiet period in the middle of the night.
BTW: You can make this more efficient if your OrderIds are sorted by time. ;)
Have you considered using a sorted list of some sort? It would allow you to pull entries until you hit one that's fresh enough to stay. Of course this assumes that's your primary function. If what you most need is the O(1) access of a hashmap, my answer doesn't apply.
I'm looking for a way to store a string->int mapping. A HashMap is, of course, a most obvious solution, but as I'm memory constrained and need to store 2 million pairs, 7 characters long keys, I need something that's memory efficient, the retrieval speed is a secondary parameter.
Currently I'm going along the line of:
List<Tuple<String, int>> list = new ArrayList<Tuple<String, int>>();
list.add(...); // load from file
Collections.sort(list);
and then for retrieval:
Collections.binarySearch(list, key); // log(n), acceptable
Should I perhaps go for a custom tree (each node a single character, each leaf with result), or is there an existing collection that fits this nicely? The strings are practically sequential (UK postcodes, they don't differ much), so I'm expecting nice memory savings here.
Edit: I just saw you mentioned the Strings were UK postcodes, so I'm fairly confident you couldn't go very wrong by using a Trove TLongIntHashMap (btw Trove is a small library and it's very easy to use).
Edit 2: Lots of people seem to find this answer interesting so I'm adding some information to it.
The goal here is to use a map containing keys/values in a memory-efficient way so we'll start by looking for memory-efficient collections.
The following SO question is related (but far from identical to this one).
What is the most efficient Java Collections library?
Jon Skeet mentions that Trove is "just a library of collections from primitive types" [sic] and, that, indeed, it doesn't add much functionality. We can also see a few benchmarks (by the.duckman) about memory and speed of Trove compared to the default Collections. Here's a snippet:
                     100000 put operations    100000 contains operations
java collections     1938 ms                  203 ms
trove                 234 ms                  125 ms
pcj                   516 ms                   94 ms
And there's also an example showing how much memory can be saved by using Trove instead of a regular Java HashMap:
java collections oscillates between 6644536 and 7168840 bytes
trove 1853296 bytes
pcj 1866112 bytes
So even though benchmarks always need to be taken with a grain of salt, it's pretty obvious that Trove will save not only memory but will always be much faster.
So our goal now becomes to use Trove (given that putting millions and millions of entries into a regular HashMap makes your app begin to feel unresponsive).
You mentioned 2 million pairs, 7 characters long keys and a String/int mapping.
2 million is really not that much, but you'll still feel the "Object" overhead and the constant (un)boxing of primitives to Integer in a regular HashMap<String, Integer>, which is why Trove makes a lot of sense here.
However, I'd point out that if you have control over the "7 characters", you could go even further: if you're using, say, only ASCII or ISO-8859-1 characters, your 7 characters would fit in a long (*). In that case you can dodge object creation altogether and represent your 7 characters in a long. You'd then use a Trove TLongIntHashMap and bypass the "Java Object" overhead altogether.
You stated specifically that your keys were 7 characters long and then commented they were UK postcodes: I'd map each postcode to a long and save a tremendous amount of memory by fitting millions of key/value pairs into memory using Trove.
The advantage of Trove is basically that it is not doing constant boxing/unboxing of Objects/primitives: Trove works, in many cases, directly with primitives and primitives only.
(*) say you only have at most 256 codepoints/characters used, then it fits on 7*8 == 56 bits, which is small enough to fit in a long.
Sample method for encoding the String keys into long's (assuming ASCII characters, one byte per character for simplification - 7 bits would be enough):
long encode(final String key) {
    final int length = key.length();
    if (length > 8) {
        throw new IndexOutOfBoundsException("key is longer than 8 characters");
    }
    long result = 0;
    for (int i = 0; i < length; i++) {
        result += ((long) ((byte) key.charAt(i))) << (i * 8);
    }
    return result;
}
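Putting the pieces together, usage with Trove could look roughly like this; the import is for Trove 3.x (older releases use gnu.trove.TLongIntHashMap), and the method names put(long, int) and get(long) are the primitive-specialized ones Trove provides:
import gnu.trove.map.hash.TLongIntHashMap;   // Trove 3.x package; verify for your version

TLongIntHashMap postcodeToValue = new TLongIntHashMap(2_000_000);

void load(String postcode, int value) {
    postcodeToValue.put(encode(postcode), value);   // encode() from the sketch above
}

int lookup(String postcode) {
    // Returns the default "no entry" value (0) if the key is absent.
    return postcodeToValue.get(encode(postcode));
}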
Use the Trove library.
The Trove library has optimized HashMap and HashSet classes for primitives. In this case, TObjectIntHashMap<String> will map the parameterized object (String) to a primitive int.
First off, did you measure that a LinkedList is indeed more memory efficient than a HashMap, or how did you come to that conclusion? Secondly, a LinkedList's access time for an element is O(n), so you cannot do an efficient binary search on it. If you want to take such an approach, you should use an ArrayList, which should give you the best compromise between performance and space. However, again, I doubt that a HashMap, Hashtable or - in particular - a TreeMap would consume that much more memory; the first two provide constant-time access, the TreeMap logarithmic, and they provide a nicer interface than a plain list. I would try to do some measurements to see how big the difference in memory consumption really is.
UPDATE: Given, as Adamski pointed out, that the Strings themselves, not the data structure they are stored in, will consume the most memory, it might be a good idea to look into data structures that are specific for strings, such as tries (especially patricia tries), which might reduce the storage space needed for the strings.
What you are looking for is a succinct-trie - a trie which stores its data in nearly the least amount of space theoretically possible.
Unfortunately, there are no succinct-trie libraries currently available for Java. One of my next projects (in a few weeks) is to write one for Java (and other languages).
In the meanwhile, if you don't mind JNI, there are several good native succinct-trie libraries you could reference.
Have you looked at tries? I've not used them, but they may fit what you're doing.
A custom tree would have the same complexity of O(log n), don't bother. Your solution is sound, but I would go with an ArrayList instead of the LinkedList because the linked list allocates one extra object per stored value, which will amount to a lot of objects in your case.
As Erick writes using the Trove library is a good place to start as you save space in storing int primitives rather than Integers.
However, you are still faced with storing 2 million String instances. Given that these are keys in the map, interning them won't offer any benefit so the next thing I'd consider is whether there's some characteristic of the Strings that can be exploited. For example:
If the Strings represent sentences of common words then you could transform the String into a Sentence class, and intern the individual words.
If the Strings only contain a subset of Unicode characters (e.g. only letters A-Z, or letters + digits) you could use a more compact encoding scheme than Java's Unicode.
You could consider transforming each String into a UTF-8 encoded byte array and wrapping this in class: MyString. Obviously the trade-off here is the additional time spent performing look-ups.
You could write the map to a file and then memory map a portion or all of the file.
You could consider libraries such as Berkeley DB that allow you to define persistent maps and cache a portion of the map in memory. This offers a scalable approach.
maybe you can go with a RadixTree?
Use java.util.TreeMap instead of java.util.HashMap. It makes use of a red-black binary search tree and doesn't use more memory than what is required for holding the nodes containing the elements in the map. No extra buckets, unlike HashMap or Hashtable.
I think the solution is to step a little outside of Java. If you have that many values, you should use a database. If you don't feel like installing Oracle, SQLite is quick and easy. That way the data you don't immediately need is stored on the disk, and all of the caching/storage is done for you. Setting up a DB with one table and two columns won't take much time at all.
I'd consider using some cache as these often have the overflow-to-disk ability.
You might create a key class that matches your needs. Perhaps like this:
public class MyKey implements Comparable<MyKey>
{
    char[] keyValue; // holds the 7 characters

    public MyKey(String keyValue)
    {
        ... load this.keyValue from the String keyValue.
    }

    public int compareTo(MyKey rhs)
    {
        ... blah
    }

    public boolean equals(Object rhs)
    {
        ... blah
    }

    public int hashCode()
    {
        ... blah
    }
}
Try this one:
OptimizedHashMap<String, int[]> myMap = new OptimizedHashMap<String, int[]>();
for (int i = 0; i < 2000000; i++)
{
    myMap.put("iiiiii" + i, new int[]{i});
}
System.out.println(myMap.containsValue(new int[]{3}));
System.out.println(myMap.get("iiiiii" + 1));

import java.util.Collection;
import java.util.HashMap;

public class OptimizedHashMap<K,V> extends HashMap<K,V>
{
    public boolean containsValue(Object value) {
        if (value != null)
        {
            Class<? extends Object> aClass = value.getClass();
            if (aClass.isArray())
            {
                Collection values = this.values();
                for (Object val : values)
                {
                    int[] newval = (int[]) val;
                    int[] newvalue = (int[]) value;
                    if (newval[0] == newvalue[0])
                    {
                        return true;
                    }
                }
            }
        }
        return false;
    }
}
Actually, HashMap and List are too general for such a specific task as looking up an int by zipcode. You should take advantage of what you know about the data. One option is to use a prefix tree with leaves that store the int value. Also, it could be pruned if (my guess) a lot of codes with the same prefixes map to the same integer.
Looking up the int by zipcode will be linear in the length of the code in such a tree and will not grow as the number of codes increases, compared to O(log N) in the case of binary search.
Since you are intending to use hashing, you can try numerical conversions of the strings based on ASCII values.
The simplest idea would be:
int sum = 0;
for (int i = 0; i < arr.length; i++) { // arr is the key as a char[]
    sum += (int) arr[i];
}
Then hash "sum" using a well-defined hash function. You would choose the hash function based on the expected input patterns.
e.g., if you use the division method:
public int hasher(int sum) {
    return sum % (a prime number);
}
Selecting a prime number which is not close to an exact power of two improves performance and gives a more uniform distribution of keys.
Another method is to weight the characters based on their respective positions.
E.g., if you use the above method, both "abc" and "cab" will be hashed to the same location, but if you need them to be stored in two distinct locations, give weights to the positions, like we do in number systems:
int sum = 0;
int weight = 1;
for (int i = 0; i < arr.length; i++) {
    sum += (int) arr[i] * weight;
    weight = weight * 2; // using powers of 2 gives better results. (you know why :))
}
As your sample is quite large, you would be better off handling collisions with a chaining mechanism rather than a probe sequence.
After all, what method you choose totally depends on the nature of your application.
The problem is the objects' memory overhead, but using some tricks you can try to implement your own hash set. Something like this. As others said, Strings have quite a large overhead, so you need to "compress" them somehow. Also try not to use too many arrays (lists) in the hashtable (if you do a chaining-type hashtable), as they are also objects and have overhead too. Better yet, use an open-addressing hashtable.
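As an illustration of the open-addressing idea, here is a simplified sketch: fixed power-of-two capacity, no resizing or deletion, and keys stored as byte[] to shave off the per-String overhead. The class name and sizes are made up for the example:
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Simplified open-addressing map from short keys to int values.
// Capacity must be a power of two and comfortably exceed the number of keys.
class CompactStringIntMap {
    private final byte[][] keys;    // UTF-8 bytes instead of String objects
    private final int[] values;
    private final int mask;

    CompactStringIntMap(int capacityPowerOfTwo) {
        keys = new byte[capacityPowerOfTwo][];
        values = new int[capacityPowerOfTwo];
        mask = capacityPowerOfTwo - 1;
    }

    void put(String key, int value) {
        byte[] k = key.getBytes(StandardCharsets.UTF_8);
        int i = Arrays.hashCode(k) & mask;
        while (keys[i] != null && !Arrays.equals(keys[i], k)) {
            i = (i + 1) & mask;                  // linear probing
        }
        keys[i] = k;
        values[i] = value;
    }

    Integer get(String key) {
        byte[] k = key.getBytes(StandardCharsets.UTF_8);
        int i = Arrays.hashCode(k) & mask;
        while (keys[i] != null) {
            if (Arrays.equals(keys[i], k)) {
                return values[i];
            }
            i = (i + 1) & mask;
        }
        return null;                             // key not present
    }
}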