Java - When to use Iterators?

I am trying to better understand when I should and should not use Iterators. To me, whenever I have a potentially large amount of data to iterate through, I write an Iterator for it. If it also lends itself to the Iterator interface, then it seems like a win.
I have read a bit that there can be a lot of overhead with using an Iterator.
A good example of where I used an Iterator was to iterate through a bunch of SQL scripts to execute one query at a time, reading it in, then executing it.
Is there another performance trade-off I should be aware of? Before I used iterators, I would read the entire String of SQL commands to execute into an ArrayList and then iterate through that. If the import is rather large (like for geolocation data), the server tends to get bogged down.
Walter

I think your question is really about when you should 'stream' input rather than load it all into memory and then process it. It's not really a question of whether or not to use an Iterator.
"It depends," of course, though in your given example it sounds like streaming the input rather than loading it all into memory is a clear win, so iterate indeed.
The benefit of loading into memory is usually that the code is simpler, and maybe you get some benefit from loading large chunks into memory at once rather than reading bits at a time. The benefit of "streaming" is that you limit your memory requirements and gain the performance that comes with that.
As a very crude rule of thumb, I wouldn't load anything like this into memory unless I were sure it was under 100K or so.
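As a rough illustration of the two styles being contrasted here, a minimal sketch assuming the input is a plain text file with one SQL statement per line (the file name and the process method are placeholders):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Stream;

public class LoadVsStream {
    public static void main(String[] args) throws IOException {
        Path script = Paths.get("import.sql"); // hypothetical input file

        // Load-everything approach: simple, but memory use grows with file size.
        List<String> allLines = Files.readAllLines(script);
        for (String line : allLines) {
            process(line);
        }

        // Streaming approach: lines are read lazily, so memory use stays roughly constant.
        try (Stream<String> lines = Files.lines(script)) {
            lines.forEach(LoadVsStream::process);
        }
    }

    private static void process(String statement) {
        // Stand-in for whatever per-statement work (e.g. executing SQL) is needed.
        System.out.println(statement);
    }
}
```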

A good example of where I used an Iterator was to iterate through a bunch of SQL scripts to execute one query at a time, reading it in, then executing it.
In this scenario the overhead of an Iterator is likely dwarfed by the time it takes to run the queries.
Before I used iterators, I would read the entire String of SQL commands to execute into an ArrayList and then iterate through that. If the import is rather large (like for geolocation data), the server tends to get bogged down.
Any particular reason you need to collect them all into an ArrayList? You could just execute them one by one as you read the statements.
Iterators are particularly suited for streaming cases where the data is loaded/created on the fly/lazily. They do not require the data to be completely in memory upfront.
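For the SQL-script case, a lazy Iterator can be a thin wrapper around the reader; a sketch, assuming one statement per line (splitting on ';' would work similarly):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Iterator;
import java.util.NoSuchElementException;

/** Iterates over statements in a script without loading the whole file into memory. */
public class StatementIterator implements Iterator<String> {
    private final BufferedReader reader;
    private String next;

    public StatementIterator(String file) throws IOException {
        this.reader = Files.newBufferedReader(Paths.get(file));
        advance();
    }

    private void advance() {
        try {
            next = reader.readLine(); // assumes one statement per line
            if (next == null) {
                reader.close();
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override
    public boolean hasNext() {
        return next != null;
    }

    @Override
    public String next() {
        if (next == null) {
            throw new NoSuchElementException();
        }
        String current = next;
        advance();
        return current;
    }
}
```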

Related

How large should my list of objects be to warrant the use of Java 8's parallelStream?

I have a list of objects from the database and I want to filter this list using the filter() method of the Stream class. New objects will be added to the database continuously, so the list of objects could potentially become very large, possibly thousands of objects. I want to use a parallelStream to speed up the filter process, but I was wondering how large the object list should approximately be to make the use of parallelStream beneficial. I've read this thread about it: Should I always use a parallel stream when possible?
And in that thread they agree that the dataset should be really large if you want to have any benefit from using a parallel stream. But how large is large? Say I have 200 records stored in my database and I retrieve them all for filtering; is using a parallelStream justified in this case? If not, how large should the dataset be? 1000? 2000 perhaps? I'd love to know. Thank you.
According to this, and depending on the operation, it would require at least 10_000; but that is not a count of elements, rather N * Q, where N = number of elements and Q = cost per element.
But this is only a general formula to push against; without measuring, this is close to impossible to say (read: guess); proper tests will prove you right or wrong.
For some simple operations, it is almost never the case that you would actually need parallel processing for the purpose of speed-up.
Another thing to mention here is that this heavily depends on the source and how easy it is to split. Anything array-based or index-based is easy (and fast) to split, but a Queue or the lines from a File are not, so you will probably lose more time splitting than computing, unless, of course, there are enough elements to cover for this. And "enough" is something you actually measure.
From 'Modern Java in Action':
"Although it may seem odd at first, often the fastest way to filter a collection...is to convert it to a stream, process it in parallel, and then convert it back to a list"
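If you want to put numbers on it for your own data, a crude timing sketch along these lines is enough to see whether parallelStream() helps at your sizes (proper tests would use a harness such as JMH; the record type and predicate here are placeholders):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelFilterTiming {
    public static void main(String[] args) {
        // Placeholder data set; swap in your real records and predicate.
        List<Integer> records = IntStream.range(0, 200_000).boxed().collect(Collectors.toList());

        long start = System.nanoTime();
        List<Integer> sequential = records.stream()
                .filter(r -> r % 7 == 0)
                .collect(Collectors.toList());
        System.out.println("sequential: " + sequential.size() + " matches in "
                + (System.nanoTime() - start) / 1_000_000 + " ms");

        start = System.nanoTime();
        List<Integer> parallel = records.parallelStream()
                .filter(r -> r % 7 == 0)
                .collect(Collectors.toList());
        System.out.println("parallel:   " + parallel.size() + " matches in "
                + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}
```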

The fastest way to populate an In-Memory Data Grid (Hazelcast)

What is the fastest way to populate a Hazelcast Data Grid? Reading through the documentation I can see a couple of variants:
Use multithreading and IMap.set
Use multithreading and IMap.putAll
Use a Distributed Execution in order to start populating the grid from all participants.
My performance benchmark shows that IMap.putAll is faster than IMap.set. But it is stated in the Hazelcast documentation that IMap.putAll does not come with guarantees that everything will be inserted atomically.
Can someone clarify a little bit what would be the fastest way to populate a data grid with data?
Is variant number 3 good ?
I would see the same three options. Anyhow, as you mentioned, option two does not guarantee that everything is put into the map atomically, but if you just load data and wait for all threads to finish loading via IMap::putAll, you should be fine.
Apart from that, IMap::set would be the alternative. In any case you want to multithread the loading process. I would play around a bit with different thread counts; loading data from a client is normally recommended, to keep the nodes free for storage operations.
I personally never benchmarked your third option; anyhow it would be possible as well. I'm just not sure it is worth the additional work.
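A minimal sketch of the multithreaded, batched IMap::putAll approach from a client, assuming made-up map and batch names (note the IMap import path differs between Hazelcast versions, e.g. com.hazelcast.core.IMap on 3.x):

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap; // com.hazelcast.core.IMap on Hazelcast 3.x

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GridLoader {
    private static final int BATCH_SIZE = 1_000;

    public static void load(List<Map.Entry<String, String>> entries) throws InterruptedException {
        HazelcastInstance client = HazelcastClient.newHazelcastClient();
        IMap<String, String> map = client.getMap("data"); // hypothetical map name

        ExecutorService pool = Executors.newFixedThreadPool(4); // tune the thread count
        for (int i = 0; i < entries.size(); i += BATCH_SIZE) {
            List<Map.Entry<String, String>> batch =
                    entries.subList(i, Math.min(i + BATCH_SIZE, entries.size()));
            pool.submit(() -> {
                Map<String, String> chunk = new HashMap<>();
                batch.forEach(e -> chunk.put(e.getKey(), e.getValue()));
                map.putAll(chunk); // batched puts; no atomicity guarantee across the batch
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
        client.shutdown();
    }
}
```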
How much data do you want to load that you're concerned it could be slow? Do you already know that loading is slow? Do you use Java Serialization (this is a huge performance killer)? Do you use indexes (those have to be generated while putting data)?
There are normally a lot of optimizations you can apply to speed up not only data loading but also normal operation.

How to store a big list of strings to optimize both initialization time and search speed

I'm writing an Android application which stores a set of ~50,000 strings, and I need input on how to best store them.
My objective is to be able to query with low latency for a list of strings matching a pattern (like Hello W* or *m Aliv*), but to avoid a huge initialization time.
I thought of the following 2 ways:
A Java collection. I imagine a Java collection should be quick to search, but given that it's fairly large I'm afraid it might have a big impact on the app initialization time.
A table in a SQLite database. I imagine this would go easy on initialization time (since it doesn't need to be loaded into memory), but I'm afraid the query would impose some relevant latency since it needs to start a SQLite process (or doesn't it?).
Are my "imagine"s correct or horribly wrong? Which way would be best?
If you want quick (as in instant) search times, what you need is a full-text index of your strings. Fortunately, SQLite has some full-text search support with the FTS extension. SQLite is part of the Android APIs and the initialisation time is totally negligible. What you do have to watch is that the index (the .sqlite file) has to either be shipped with your app in the .apk, or be re-created the first time the app opens (and that can take quite some time).
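A minimal sketch of that FTS approach on Android, assuming an FTS4 virtual table and prefix-style patterns such as Hello W* (the table and column names are made up; a pattern with a leading wildcard like *m Aliv* is not covered by a plain prefix MATCH):

```java
import android.content.ContentValues;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

import java.util.ArrayList;
import java.util.List;

public class StringIndex {
    private final SQLiteDatabase db;

    public StringIndex(SQLiteDatabase db) {
        this.db = db;
        // FTS4 virtual table; built once (e.g. on first launch) and reused afterwards.
        db.execSQL("CREATE VIRTUAL TABLE IF NOT EXISTS strings USING fts4(content)");
    }

    public void add(String value) {
        ContentValues row = new ContentValues();
        row.put("content", value);
        db.insert("strings", null, row);
    }

    /** Prefix queries like "Hello W*" map directly onto FTS MATCH. */
    public List<String> search(String pattern) {
        List<String> results = new ArrayList<>();
        try (Cursor c = db.rawQuery(
                "SELECT content FROM strings WHERE content MATCH ?",
                new String[]{pattern})) {
            while (c.moveToNext()) {
                results.add(c.getString(0));
            }
        }
        return results;
    }
}
```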
Look at data structures like a patricia trie (http://en.wikipedia.org/wiki/Radix_tree) or a Ternary Search Tree (http://en.wikipedia.org/wiki/Ternary_search_tree). They will dramatically reduce your search time and depending on the amount of overlap in your strings may actually reduce the memory requirements. The Java collections are good for many purposes but are not optimal for large sets of short strings.
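If you'd rather not hand-roll one of these structures, Apache Commons Collections ships a PatriciaTrie; a small sketch, assuming that dependency is available (prefix lookups cover Hello W*-style queries, but not infix patterns):

```java
import org.apache.commons.collections4.trie.PatriciaTrie;

import java.util.SortedMap;

public class TrieSearch {
    public static void main(String[] args) {
        // Keys are the strings themselves; the values are unused here.
        PatriciaTrie<Boolean> trie = new PatriciaTrie<>();
        trie.put("Hello World", Boolean.TRUE);
        trie.put("Hello Walter", Boolean.TRUE);
        trie.put("I'm Alive", Boolean.TRUE);

        // prefixMap returns every entry starting with the prefix, i.e. "Hello W*".
        SortedMap<String, Boolean> matches = trie.prefixMap("Hello W");
        matches.keySet().forEach(System.out::println);
    }
}
```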
I would definitely stick to SQLite. It's really fast in both initialization and querying. SQLite runs in the application process, so there are almost no time penalties on initialization. A query is normally fired in a background thread so as not to block the main thread. It will be very fast on 50,000 records and you won't load all the data into memory, which is also important.
If your strings numbered only 50, you could just use a Java collection; a database would be time-consuming in that case.

Java: which of these two methods is more efficient?

I have a huge data file and I only need specific data from this file, and later on I will be using this data frequently.
So which of these two methods would be more efficient :
save this data in global variables (maybe a LinkedList) and use them every time I need them
save them in a file, and read the file every time I need the data
I should mention that these data could be a huge amount of integers.
Which of the mentioned two ways would give better performance with respect to speed and memory ?
If the file I/O overhead is not an issue for you: Save them in a file and create an index file mapping keys to file positions so you do not have to read your huge file.
If the data fits in your RAM and you want to be able to access it quickly - go by the first approach (but maybe without an index file) but read the data into memory at startup or when needed the first time.
As long as it fits in memory, working with memory is surely some orders of magnitude faster. But do not use LinkedList - it has a huge overhead. And do not use any standard Collection at all, since that means boxing and blows up the memory overhead by a factor of 3 at least.
You could use int[] or a specialized collection for primitive types.
I'd recommend using a file via java.nio.IntBuffer. This way the data reside primarily on the disk but get mapped into memory too.
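A small sketch of that memory-mapped variant, assuming the integers are stored as raw 4-byte values in a file (the file name is hypothetical):

```java
import java.io.IOException;
import java.nio.IntBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MappedInts {
    public static void main(String[] args) throws IOException {
        Path data = Paths.get("ints.bin"); // hypothetical file of raw 4-byte ints

        try (FileChannel channel = FileChannel.open(data, StandardOpenOption.READ)) {
            MappedByteBuffer mapped = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            IntBuffer ints = mapped.asIntBuffer();

            // Random access by index; the OS pages data in and out as needed.
            int first = ints.get(0);
            int last = ints.get(ints.limit() - 1);
            System.out.println(first + " .. " + last);
        }
    }
}
```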
Probably the first one.
But there really isn't enough information there to answer you properly.
Firstly, a linked list is fine if you only ever traverse it in order. However, if you need random access to it (5th element, then 100th, then 12th, then 45th...), it's lousy, and you'd be better off with an ArrayList or something. Secondly, if you're storing lots of ints in one of the standard Java collections, each int will be boxed, which may present a performance overhead.
Then you haven't said what 'huge' means. Thousands? Millions?
So, yeah, you need to say what kind of numbers you're dealing with and what the access patterns are likely to be. And is the 'filtering' step a one-off, or is it done quite frequently?
It depends on the system spec. If you are designing your app for one machine, the task is simple; otherwise you should take into account the memory and/or disk space limits on the client's computer.
I don't think you can compare the performance of these two approaches, as each one has its own benefits and drawbacks. I'm certain there are some algorithms available that you could further investigate, connected with reading part of a file into memory, or creating a cache (when you read a number from the file, store it in memory, so next time you need it, it comes from memory).

How to handle large lists of data

We have a part of an application where, say, 20% of the time it needs to read in a huge amount of data that exceeds memory limits. While we can increase the memory limits, we hesitate to do so since it requires having a high allocation when most of the time it's not necessary.
We are considering using a customized java.util.List implementation to spool to disk when we hit peak loads like this, but under lighter circumstances will remain in memory.
The data is loaded once into the collection, subsequently iterated over and processed, and then thrown away. It doesn't need to be sorted once it's in the collection.
Does anyone have pros/cons regarding such an approach?
Is there an open source product that provides some sort of List impl like this?
Thanks!
Updates:
Not to be cheeky, but by 'huge' I mean exceeding the amount of memory we're willing to allocate without interfering with other processes on the same hardware. What other details do you need?
The application is, essentially a batch processor that loads in data from multiple database tables and conducts extensive business logic on it. All of the data in the list is required since aggregate operations are part of the logic done.
I just came across this post which offers a very good option: STXXL equivalent in Java
Do you really need to use a List? Write an implementation of Iterator (it may help to extend AbstractIterator) that steps through your data instead. Then you can make use of helpful utilities like these with that iterator. None of this will cause huge amounts of data to be loaded eagerly into memory -- instead, records are read from your source only as the iterator is advanced.
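A minimal sketch of that idea using Guava's AbstractIterator, with a BufferedReader standing in for whatever the real record source is:

```java
import com.google.common.collect.AbstractIterator;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.UncheckedIOException;

/** Streams records one at a time; nothing beyond the current element is held in memory. */
public class RecordIterator extends AbstractIterator<String> {
    private final BufferedReader source;

    public RecordIterator(BufferedReader source) {
        this.source = source;
    }

    @Override
    protected String computeNext() {
        try {
            String line = source.readLine(); // stand-in for "read the next record"
            return line != null ? line : endOfData();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```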
If you're working with huge amounts of data, you might want to consider using a database instead.
Back it up to a database and do lazy loading on the items.
An ORM framework may be in order. It depends on your usage. It may be pretty straightforward, or the worst of your nightmares; it is hard to tell from what you've described.
I'm an optimist, and I think that using an ORM framework (such as Hibernate) would solve your problem in about 3-5 days.
Is there sorting/processing that's going on while the data is being read into the collection? Where is it being read from?
If it's being read from disk already, would it be possible to simply batch-process it directly from disk, instead of reading it into a list completely and then iterating? How inter-dependent is the data?
I would also question why you need to load all of the data in memory to process it. Typically, you should be able to do the processing as it is being loaded and then use the result. That would keep the actual data out of memory.