Data structure for time and memory constraints - Java

What is the best approach to store and search primitive data types? Is there a data structure that can tackle both the time constraint and the memory constraint? Are there websites or books where I can get clear knowledge of these things?

Try this book:
Data Structures and Algorithms in Java by Adam Drozdek, Second Edition.
It helped me a lot; it covers memory management and data compression and gives a deep understanding of data structures and algorithms.
To keep memory constraints under control, be sparing with dynamic memory allocation (which, note, is not the same thing as dynamic programming), since avoiding unnecessary allocations saves memory.

If you are ready to sacrifice some flexibility around adding and deleting elements, then a sorted int[] is the best bet you have as far as memory efficiency is concerned.
Since the array is sorted, you can perform a binary search.
If the initial load of this array is going to be a few million ints and later there is a chance of adding only a few hundred more, you can supplement the int[] with another data structure such as an ArrayList to temporarily hold those additions, and then merge the ArrayList into the int[] once it grows significantly.
Deletion can be handled by overwriting the element with some fairly small, unused negative sentinel value, though that is not a very clean solution; or, if deletion is rare, you can again handle it with a supplementary data structure.
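A minimal sketch of that idea, sorted bulk array plus a small overflow buffer (the class name and the merge threshold are my own assumptions, not anything from the question):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: sorted int[] for the bulk of the data,
// with a small ArrayList<Integer> buffer for later additions.
class SortedIntStore {
    private int[] main;                                // sorted bulk storage
    private final List<Integer> overflow = new ArrayList<>();
    private static final int MERGE_THRESHOLD = 1_000;  // assumed value, tune for your data

    SortedIntStore(int[] initial) {
        main = initial.clone();
        Arrays.sort(main);
    }

    boolean contains(int value) {
        // O(log n) binary search on the sorted array, linear scan of the small buffer
        return Arrays.binarySearch(main, value) >= 0 || overflow.contains(value);
    }

    void add(int value) {
        overflow.add(value);
        if (overflow.size() >= MERGE_THRESHOLD) {
            merge();
        }
    }

    private void merge() {
        // Copy the buffer into a larger array and re-sort; done rarely, so the cost amortises
        int[] merged = Arrays.copyOf(main, main.length + overflow.size());
        for (int i = 0; i < overflow.size(); i++) {
            merged[main.length + i] = overflow.get(i);
        }
        Arrays.sort(merged);
        main = merged;
        overflow.clear();
    }
}
```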
It's all a trade-off for a particular scenario; there is nothing that is best for all situations.
Hope it helps!

Related

Optimize removing process complexity

When I have an array and I want to remove one value from it, I need to shift the following elements to the left. The idea is to do the shifting only once, after some number n of values in the array have been nulled out.
Of course this is micro-optimisation; ArrayList (or maybe LinkedList) would be the production-quality data structure for dynamic arrays.
Here you might keep an extra list of nulled entries. At a certain threshold you could do a pass with System.arraycopy to remove the gaps, as sketched below. If there are many index-based inserts too, you might opt for keeping the gaps, perhaps collecting small gaps together.
This is a traditional technique in text editors.
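A rough sketch of that lazy-removal idea, assuming an Object[] where removed slots are nulled and the gaps are closed in one pass once a threshold is reached (the class name and the threshold are mine, not from the question):

```java
import java.util.Arrays;

// Hypothetical sketch: null out removed slots, compact them later in one pass.
class LazyArray {
    private Object[] data;
    private int size;          // logical size, including nulled gaps
    private int gaps;          // how many slots are currently null
    private static final int GAP_THRESHOLD = 16; // assumed value

    LazyArray(Object[] initial) {
        data = initial.clone();
        size = data.length;
    }

    void remove(int index) {
        if (data[index] != null) {
            data[index] = null;  // O(1): just mark the slot
            if (++gaps >= GAP_THRESHOLD) {
                compact();       // pay the shifting cost only occasionally
            }
        }
    }

    private void compact() {
        // Single left-to-right pass that slides the surviving elements over the gaps
        // (a System.arraycopy per contiguous non-null run would do the same shifting in bulk)
        int write = 0;
        for (int read = 0; read < size; read++) {
            if (data[read] != null) {
                data[write++] = data[read];
            }
        }
        Arrays.fill(data, write, size, null);
        size = write;
        gaps = 0;
    }
}
```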
For several other data structures you might look through the Guava classes, for instance copy-on-write data structures, or concurrent variants that compact in the background.
For a specific data structure & algorithm maybe someone else can give pointers.

Fastest way to access this object

Let's say I have a list of 1,000,000 users whose unique identifier is their username string. So to compare two User objects I just override the compareTo() method and compare the username members.
Given a username string, I wish to find the corresponding User object in the list. What, in the average case, would be the fastest way to do this?
I'm guessing a HashMap, mapping usernames to User objects, but I wondered if there was something else that I didn't know about which would be better.
If you don't need to store them in a database (which is the usual scenario), a HashMap<String, User> would work fine - it has O(1) complexity for lookup.
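For example (the User class here is just a minimal stand-in for whatever your real class looks like):

```java
import java.util.HashMap;
import java.util.Map;

public class UserLookup {
    // Minimal stand-in for the real User class
    static class User {
        final String username;
        User(String username) { this.username = username; }
    }

    public static void main(String[] args) {
        Map<String, User> usersByName = new HashMap<>();
        usersByName.put("alice", new User("alice"));
        usersByName.put("bob", new User("bob"));

        // Expected O(1) lookup by username; no compareTo() needed,
        // since String already provides equals() and hashCode()
        User found = usersByName.get("alice");
        System.out.println(found != null ? found.username : "not found");
    }
}
```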
As noted, the usual scenario is to have them in the database. But in order to get faster results, caching is utilized. You can use EhCache - it is similar to ConcurrentHashMap, but it has time-to-live for elements and the option to be distributed across multiple machines.
You should not dump your whole database in memory, because it will be hard to synchronize. You will face issues with invalidating the entries in the map and keeping them up-to-date. Caching frameworks make all this easier. Also note that the database has its own optimizations, and it is not unlikely that your users will be kept in memory there for faster access.
I'm sure you want a hash map. They're the fastest thing going, and memory efficient. As also noted in other replies, a String works as a great key, so you don't need to override anything. (This is also true of the following.)
The chief alternative is a TreeMap. This is slower and uses a bit more memory. It's a lot more flexible, however. The same map will work great with 5 entries and with 5 million entries; you don't need to clue it in in advance. If your list varies wildly in size, the TreeMap will grab memory as it needs it and let it go when it doesn't. HashMaps are not so good about letting go, and as I explain below, they can be awkward when grabbing more memory.
TreeMaps work better with garbage collectors. They ask for memory in small, easily found chunks. If you start a hash table with room for 100,000 entries, when it gets full it will free the 100,000-element array (almost a megabyte on a 64-bit machine) and ask for one that's even larger. If it does this repeatedly, it can get ahead of the GC, which tends to throw an out-of-memory error rather than spend a lot of time gathering up and consolidating scattered bits of free memory. (It prefers to maintain its reputation for speed at the expense of your machine's reputation for having a lot of memory. You really can manage to run out of memory with 90% of your heap unused because it's fragmented.)
So if you are running your program full tilt and your list of names varies wildly in size (perhaps you even have several lists of names varying wildly in size), a TreeMap will work a lot better for you.
A hash map will no doubt be just what you need. But when things get really crazy, there's the ConcurrentSkipListMap. This is everything a TreeMap is except it's a bit slower. On the other hand, it allows adds, updates, deletes, and reads from multiple threads willy-nilly, with no synchronization. (I mention it just to be complete.)
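Since all three options share the java.util.Map interface, switching between them is usually a one-line change; a small sketch (the User class is again just a placeholder):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class MapChoices {
    static class User { final String username; User(String u) { username = u; } }

    public static void main(String[] args) {
        // Expected O(1) lookups, best raw speed
        Map<String, User> hash = new HashMap<>();
        // O(log n) lookups, keys kept sorted, memory grows and shrinks gradually
        Map<String, User> tree = new TreeMap<>();
        // O(log n) lookups, safe for concurrent reads and writes without external locking
        Map<String, User> skipList = new ConcurrentSkipListMap<>();

        for (Map<String, User> m : java.util.List.of(hash, tree, skipList)) {
            m.put("alice", new User("alice"));
            System.out.println(m.getClass().getSimpleName() + ": " + m.get("alice").username);
        }
    }
}
```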
In terms of data structures, a HashMap can be a good choice. It favours larger datasets; the time for inserts is considered constant, O(1).
In this case it sounds like you will be carrying out more lookups than inserts. For lookups the average time complexity is O(1 + n/k); the key factor here (sorry about the pun) is how effectively the hashing algorithm distributes the data across the buckets.
The risk here is that the usernames are short and use a small character set such as a-z, in which case there could be a lot of collisions, causing the HashMap to be loaded unevenly and therefore slowing down the lookups. One option to improve this could be to create your own user key object and override its hashCode() method with an algorithm that suits your keys better.
In summary: if you have a large data set, a good/suitable hashing algorithm, and the space to hold it all in memory, then a HashMap can provide a relatively fast lookup.
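A hedged sketch of that custom-key idea; whether this distributes better than String.hashCode() depends entirely on your real usernames, so this is only an illustration (the class and the multiplier are assumptions):

```java
// Hypothetical wrapper key whose hashCode() can be tuned for short, a-z usernames.
final class UserKey {
    private final String username;

    UserKey(String username) { this.username = username; }

    @Override
    public boolean equals(Object o) {
        return o instanceof UserKey && ((UserKey) o).username.equals(username);
    }

    @Override
    public int hashCode() {
        // Example only: a different multiplier and mixing step than String's 31-based hash
        int h = 0;
        for (int i = 0; i < username.length(); i++) {
            h = h * 131 + username.charAt(i);
        }
        return h ^ (h >>> 16); // fold the high bits into the low bits
    }

    @Override
    public String toString() { return username; }
}
```

Note that modern HashMap implementations already spread the key's hash bits internally, and String.hashCode() is usually good enough in practice, so it is worth profiling before adopting anything like this.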
I think, given your last post on the ArrayList and its scalability, I would take Bozho's suggestion and go for a purpose-built cache such as EhCache. This will let you control memory usage and eviction policies, and it is still a lot faster than DB access.
If you don't change your list of users very often then you may want to use Aho-Corasick. You will need a pre-processing step that will take O(T) time and space, where T is the sum of the lengths of all user names. After that you can match user names in O(n) time, where n is the length of the user name you are looking for. Since you will have to look at every character in the user name you are looking for I don't think it's possible to do better than this.

How bad is it to use HashMaps and ArrayLists for huge data?

I am reading an XML document into HashMaps and ArrayLists so that the relationships are maintained in memory. My code does the job, but I am worried about the iterations and function calls I am performing on these huge maps and lists. Currently the XML data I am working with is not that huge, but I don't know what will happen when it is. What test cases do I need to run on the logic that uses these HashMaps? How bad is it to use the Java collections for such huge data? Are there alternatives to them? Will the huge data cause the JVM to crash?
Java collections have a certain overhead, which can increase the memory usage a lot (20 times in extreme cases) when they're the primary data structures of an application and the payload data consists of a large number of small objects. This could lead to the application terminating with an OutOfMemoryError even though the actual data is much smaller than the available memory.
ArrayList is actually very efficient for large numbers of elements, but inefficient when you have a large number of lists that are empty or contain only one element. For those cases, you could use Collections.emptyList() and Collections.singletonList() to improve efficiency.
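For example, a small sketch of picking the cheapest list representation per element count (the helper method is hypothetical):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SmallLists {
    // Hypothetical helper: choose the cheapest list representation for the element count
    static List<String> listFor(List<String> elements) {
        if (elements.isEmpty()) {
            return Collections.emptyList();        // shared immutable instance, no per-list overhead
        }
        if (elements.size() == 1) {
            return Collections.singletonList(elements.get(0)); // one small object, no backing array
        }
        return new ArrayList<>(elements);          // right-sized copy, no spare capacity
    }

    public static void main(String[] args) {
        System.out.println(listFor(List.of()).size());          // 0
        System.out.println(listFor(List.of("a")).size());       // 1
        System.out.println(listFor(List.of("a", "b")).size());  // 2
    }
}
```

Keep in mind that the empty and singleton lists are immutable, so this only works for lists you do not modify afterwards.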
HashMap has the same problem as well as a considerable overhead for each element stored in it. So the same advice applies as for ArrayList. If you have a large number of elements, there may be alternative Map implementations that are more efficient, e.g. Google Guava.
The biggest overheads occur when you store primitive values such as int or long in collections, as they need to be wrapped as objects. In those cases, the GNU Trove collections offer an alternative.
In your case specifically, the question is whether you really need to keep the entire data from the XML in memory at once, or whether you can process it in small chunks. This would probably be the best solution if your data can grow arbitrarily large.
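If the data can be handled record by record, a StAX pull parser avoids holding the whole document in memory; a rough sketch (the file name and the "record" element name are assumptions):

```java
import java.io.FileInputStream;
import java.io.InputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StreamingXmlExample {
    public static void main(String[] args) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        try (InputStream in = new FileInputStream("data.xml")) {   // assumed file name
            XMLStreamReader reader = factory.createXMLStreamReader(in);
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT
                        && "record".equals(reader.getLocalName())) {  // assumed element name
                    // Handle one record at a time instead of storing them all in maps/lists
                    System.out.println("processing one <record> element");
                }
            }
            reader.close();
        }
    }
}
```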
The easiest short term solution would be to simply buy more memory. It's cheap.
The JVM will not crash from what you describe. What may happen is an OutOfMemoryError. Also, if you retain the data in those collections for a long time, you may have issues with garbage collection. Do you really need to store the whole XML data in memory?
If you are dealing with temporary data and you need fast access to it, you do not have too many alternatives. The question is what you mean by "huge": megabytes? Gigabytes? Terabytes?
As long as your data does not exceed about 1 GB, IMHO holding it in memory may be OK. Otherwise you should think about alternatives like a DB (relational or NoSQL), files, etc.
In your specific example I'd think about replacing the ArrayList with a LinkedList, unless you need a random-access list. ArrayList is just a wrapper over an array, so when you need 1 million elements it allocates an array 1 million elements long. A linked list is better when the number of elements is big, but access by index is O(n) (about n/2 steps on average). If you need both (i.e. a huge list and fast access), use a TreeMap with the index as the key instead; you will get O(log n) access.
What are the test cases I need to perform on the logic that uses these HashMaps?
Why not generate large XML files (for example, five times larger than your current data samples) and check your parsers and in-memory storage against them? Only you know what files are possible in your case and how fast they will grow, so that is really the only way to test it.
How bad is using Java collections for such huge data? Are there alternatives to them? Will the huge data cause the JVM to crash?
Of course, it is possible that you will get an OutOfMemoryError if you try to store too much data in memory and it is not eligible for GC. This library: http://trove.starlight-systems.com/ declares that it uses less memory, but I didn't use it myself. Some discussion is available here: What is the most efficient Java Collections library?
How bad is using Java collections for such huge data?
Java Map implementations and (to a lesser extent) Collection implementations do tend to use a fair amount of memory. The effect is most pronounced when the key / value / element types are wrapper types for primitive types.
Are there any alternatives to them?
There are alternative implementations of "collections" of primitive types that use less memory; e.g. the GNU Trove libraries. But they don't implement the standard Java collection APIs, and that severely limits their usefulness.
If your collections don't use the primitive wrapper classes, then your options are more limited. You might be able to implement your own custom data structures to use less memory, but the saving won't be that great (in percentage terms) and you've got a significant amount of work to do to implement the code.
A better solution is to redesign your application so that it doesn't need to represent the entire XML data structure in memory. (If you can achieve this.)
Will the huge data cause the JVM to crash?
It could cause a JVM to throw an OutOfMemoryError. That's not technically a crash, but in your use-case it probably means that the application has no choice but to give up.

What is the fastest way to deal with large arrays of data in Android?

Could you please suggest to me (a novice in Android/Java) the most efficient way to deal with relatively large amounts of data?
I need to compute some stuff for each of 1000...5000 elements of, say, a big data type (x1, y1, z1 - double, flag1...flagn - boolean, desc1...descn - string) quite often (once a second), which is why I want to do it as fast as possible.
What would be the best way? Declaring a multidimensional array, producing an array for each field (x1[i], y1[i], ...), a special class, some sort of JavaBean? Which one is the most efficient in terms of speed etc.? Which is the most common way to deal with that sort of thing in Java?
Many thanks in advance!
Nick, you've asked a very general question. I'll do my best to answer it, but please be aware that if you want anything more specific, you're going to need to narrow your question down a bit.
Some back-of-the-envelope calculations show that for an array of 5000 doubles you'll use 8 bytes * 5000 = 40,000 bytes, or roughly 40 kB of memory. This isn't too bad, as memory on most Android devices is on the order of megabytes or even gigabytes. A good ol' ArrayList should do just fine for storing this data. You can probably make things a little faster by specifying the ArrayList's capacity in the constructor; that way the ArrayList doesn't have to dynamically expand every time you add more data to it.
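For instance (5000 here is just the upper end of the element count mentioned in the question):

```java
import java.util.ArrayList;
import java.util.List;

public class PreSizedList {
    public static void main(String[] args) {
        int n = 5000; // assumed element count from the question
        // Passing the expected size up front avoids repeated internal array copies while filling
        List<Double> xs = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            xs.add(Math.random());
        }
        System.out.println("stored " + xs.size() + " values");
    }
}
```

Note that an ArrayList<Double> boxes each value; for purely numeric data a plain double[] would use less memory, as other answers in this collection point out.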
Word of caution though. Since we are on a memory restricted device, what could potentially happen is if you generate a lot of these ArrayLists rapidly in succession, you might start triggering the garbage collector a lot. This could cause your app to slow down (the whole device actually). If you're really going to be generating lots of data, then don't store it in memory. Store it off on disk where you'll have plenty of room and won't be triggering the garbage collector all the time.
I think that the efficiency with which you write the computation you need to do on each element is way more important than the data structure you use to store it. The difference between using an array for each element or an array of objects (each of which is the instance of a class containing all elements) should practically be negligible. Use whatever data structures you feel most comfortable with and focus on writing efficient algorithms.

HashMap alternatives for memory-efficient data storage

I've currently got a spreadsheet type program that keeps its data in an ArrayList of HashMaps. You'll no doubt be shocked when I tell you that this hasn't proven ideal. The overhead seems to use 5x more memory than the data itself.
This question asks about efficient collections libraries, and the answer was use Google Collections. My follow up is "which part?". I've been reading through the documentation but don't feel like it gives a very good sense of which classes are a good fit for this. (I'm also open to other libraries or suggestions).
So I'm looking for something that will let me store dense spreadsheet-type data with minimal memory overhead.
My columns are currently referenced by Field objects, rows by their indexes, and values are Objects, almost always Strings.
Some columns will have a lot of repeated values.
The primary operations are to update or remove records based on values of certain fields, and also adding/removing/combining columns.
I'm aware of options like H2 and Derby but in this case I'm not looking to use an embedded database.
EDIT: If you're suggesting libraries, I'd also appreciate it if you could point me to a particular class or two in them that would apply here. Whereas Sun's documentation usually includes information about which operations are O(1), which are O(N), etc, I'm not seeing much of that in third-party libraries, nor really any description of which classes are best suited for what.
"Some columns will have a lot of repeated values"
immediately suggests to me the possible use of the Flyweight pattern, regardless of the solution you choose for your collections.
Trove collections take particular care about the space occupied (I think they also have tailored data structures if you stick to primitive types); take a look here.
Otherwise you can try the Apache Commons collections; just do your benchmarks!
In any case, if you've got many references around to the same elements, try to apply some suitable pattern (like Flyweight), as sketched below.
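A tiny sketch of the Flyweight idea for repeated cell values: keep one canonical instance per distinct value and store references to it (the pool class is hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical canonicalizing pool: repeated values share a single instance.
class ValuePool {
    private final Map<String, String> pool = new HashMap<>();

    String canonical(String value) {
        // Return the previously stored instance if we've seen this value before,
        // otherwise remember the new one and return it.
        return pool.computeIfAbsent(value, v -> v);
    }

    public static void main(String[] args) {
        ValuePool pool = new ValuePool();
        String a = pool.canonical(new String("ACTIVE"));
        String b = pool.canonical(new String("ACTIVE"));
        System.out.println(a == b); // true: both cells reference the same object
    }
}
```

String.intern() does something similar via the JVM-wide pool, but a map like this gives you control over the pool's lifetime.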
Chronicle Map could have overhead of less than 20 bytes per entry (see a test proving this). For comparison, java.util.HashMap's overhead varies from 37-42 bytes with -XX:+UseCompressedOops to 58-69 bytes without compressed oops (reference).
Additionally, Chronicle Map stores keys and values off-heap, so it doesn't store Object headers, which are not accounted as HashMap's overhead above. Chronicle Map integrates with Chronicle-Values, a library for generation of flyweight implementations of interfaces, the pattern suggested by Brian Agnew in another answer.
So I'm assuming that you have a Map<ColumnName, Column>, where Column is actually something like an ArrayList<Object>.
A few possibilities -
Are you completely sure that memory is an issue? If you're just generally worried about size it'd be worth confirming that this will really be an issue in a running program. It takes an awful lot of rows and maps to fill up a JVM.
You could test your data set with different types of maps in the collections. Depending on your data, you can also initialize maps with preset size/load-factor combinations that may help. I've messed around with this in the past; you might get a 30% reduction in memory if you're lucky.
What about storing your data in a single matrix-like data structure (an existing library implementation or something like a wrapper around a List of Lists), with a single map that maps column keys to matrix columns?
Assuming all your rows have most of the same columns, you can just use an array for each row, and a Map<ColumnKey, Integer> to look up which column maps to which cell. This way you have only 4-8 bytes of overhead per cell.
If Strings are often repeated, you could use a String pool to reduce duplication of strings. Object pools for other immutable types may be useful in reducing memory consumed.
EDIT: You can structure your data as either row-based or column-based. If it's row-based (one array of cells per row), adding/removing a row is just a matter of removing that row. If it's column-based, you can have an array per column. This can make handling primitive types much more efficient: you can have one column that is an int[] and another that is a double[], and it is much more common for an entire column to have the same data type than for a whole row to.
However, whichever way you structure the data, it will be optimised for either row or column modification, and performing an add/remove of the other type will result in a rebuild of the entire dataset.
(Something I do is keep row-based data and add columns at the end; if a row isn't long enough, the column takes a default value. This avoids a rebuild when adding a column. Rather than removing a column, I have a means of ignoring it.)
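A compact sketch of the row-array plus column-index-map layout described above (the class, column names, and the null-as-default convention are assumptions):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: one Object[] per row, one shared map from column name to array index.
class ColumnIndexedRows {
    private final Map<String, Integer> columnIndex = new HashMap<>();
    private final List<Object[]> rows = new ArrayList<>();

    int addColumn(String name) {
        // New columns go on the end; short rows simply fall back to a default value
        Integer existing = columnIndex.get(name);
        if (existing != null) return existing;
        int idx = columnIndex.size();
        columnIndex.put(name, idx);
        return idx;
    }

    void addRow(Object... cells) {
        rows.add(cells);
    }

    Object get(int row, String column) {
        Integer col = columnIndex.get(column);
        Object[] cells = rows.get(row);
        // Rows shorter than the column count fall back to a default (null here)
        return (col == null || col >= cells.length) ? null : cells[col];
    }

    public static void main(String[] args) {
        ColumnIndexedRows table = new ColumnIndexedRows();
        table.addColumn("name");
        table.addColumn("price");
        table.addRow("widget", 9.99);
        table.addRow("gadget");                     // short row: "price" is defaulted
        System.out.println(table.get(0, "price"));  // 9.99
        System.out.println(table.get(1, "price"));  // null
    }
}
```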
Guava does include a Table interface and a hash-based implementation. Seems like a natural fit to your problem. Note that this is still marked as beta.
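For reference, a minimal use of Guava's Table (this assumes the Guava dependency is on the classpath):

```java
import com.google.common.collect.HashBasedTable;
import com.google.common.collect.Table;

public class GuavaTableExample {
    public static void main(String[] args) {
        // Row key = row index, column key = field name, value = cell contents
        Table<Integer, String, Object> sheet = HashBasedTable.create();
        sheet.put(0, "name", "widget");
        sheet.put(0, "price", 9.99);
        sheet.put(1, "name", "gadget");

        System.out.println(sheet.get(0, "price"));  // 9.99
        System.out.println(sheet.row(0));           // a Map view of row 0
        System.out.println(sheet.column("name"));   // a Map view of the "name" column
    }
}
```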
keeps its data in an ArrayList of HashMaps
Well, this part seems terribly inefficient to me. An empty HashMap will already allocate 16 * (size of a pointer) bytes (16 being the default initial capacity), plus some fields for the hash object itself (14 + psize). If you have a lot of sparsely filled rows, this could be a big problem.
One option would be to use a single large hash with a composite key (combining row and column), although that doesn't make operations on whole rows very efficient.
Also, since you don't mention the operation of adding cells, you can create the hashes with only the necessary internal storage (the initialCapacity parameter).
I don't know much about Google Collections, so I can't help there. Also, if you find any useful optimization, please do post it here! It would be interesting to know.
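One cheap way to build such a composite key, assuming row and column numbers both fit in an int, is to pack them into a single long (a sketch of one possibility, not necessarily what the answer had in mind):

```java
import java.util.HashMap;
import java.util.Map;

public class CompositeKeyGrid {
    // Pack (row, column) into one long so a single map holds all cells
    private static long key(int row, int col) {
        return ((long) row << 32) | (col & 0xFFFFFFFFL);
    }

    public static void main(String[] args) {
        Map<Long, Object> cells = new HashMap<>();
        cells.put(key(0, 2), "hello");
        cells.put(key(5, 0), 42);

        System.out.println(cells.get(key(0, 2))); // hello
        System.out.println(cells.get(key(5, 1))); // null: empty cells simply aren't stored
    }
}
```

The Long keys are still boxed, so this mainly helps with sparseness rather than with per-entry overhead.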
I've been experimenting with using the SparseObjectMatrix2D from the Colt project. My data is pretty dense but their Matrix classes don't really offer any way to enlarge them, so I went with a sparse matrix set to the maximum size.
It seems to use roughly 10% less memory and loads about 15% faster for the same data, as well as offering some clever manipulation methods. Still interested in other options though.
From your description, it seems that instead of an ArrayList of HashMaps you would rather want a (Linked)HashMap of ArrayLists (each ArrayList being a column).
I'd add a double map from field-name to column-number, and some clever getters/setters that never throw IndexOutOfBoundsException.
You can also use an ArrayList<ArrayList<Object>> (basically a jagged, dynamically growing matrix) and keep the mapping to field (column) names outside.
"Some columns will have a lot of repeated values"
I doubt this matters, especially if they are Strings (which can be interned), since your collection would only store references to them.
Why don't you try using a cache implementation like EhCache?
This turned out to be very effective for me when I hit the same situation.
You can simply store your collection within the EhCache implementation.
There are configuration options such as the maximum bytes to be used from the local heap.
Once the bytes used by your application exceed what is configured in the cache, the cache implementation takes care of writing the data to disk. You can also configure the amount of time after which objects are written to disk, using a Least Recently Used algorithm.
You can be reasonably sure of avoiding out-of-memory errors using this type of cache implementation.
It only increases the I/O operations of your application by a small degree.
This is just a bird's-eye view of the configuration; there are a lot of options for tuning it to your requirements.
For me, Apache Commons Collections did not save any space. Comparing two similar heap dumps taken just before an OOME, one using the Java 11 HashMap and one using the Apache Commons HashedMap, the Apache Commons HashedMap doesn't appear to make any meaningful difference.
