Best way to parallelize thousands of downloads - java

I am creating an application in which I have to download thousands of images (~1 MB each) using Java.
I take a list of album URLs in my REST request, and each album contains a number of images.
So my request looks something like:
[
"www.abc.xyz/album1",
"www.abc.xyz/album2",
"www.abc.xyz/album3",
"www.abc.xyz/album4",
"www.abc.xyz/album5"
]
Suppose each of these albums has 1000 images, so I need to download 50,000 images in parallel.
Right now I have implemented it using parallelStream() but I feel that I can optimize it further.
There are two principal classes - AlbumDownloader and ImageDownloader (Spring components).
So the main application creates a parallelStream() on the list of albums:
albumData.parallelStream().forEach(ad -> albumDownloader.downloadAlbum(ad));
And a parallelStream() inside AlbumDownloader -> downloadAlbum() method as well:
List<Boolean> downloadStatus = albumData.getImageDownloadData().parallelStream()
        .map(idd -> imageDownloader.downloadImage(idd))
        .collect(Collectors.toList());
I am thinking about using CompletableFuture with an ExecutorService, but I am not sure what pool size I should use.
Should I create a separate pool for each Album?
ExecutorService executor = Executors.newFixedThreadPool(Math.min(albumData.getImageDownloadData().size(), 1000));
That would create 5 different pools of 1000 threads each, i.e. about 5000 threads, which might degrade performance instead of improving it.
Could you please give me some ideas to make it very, very fast?
By the way, I am using Apache Commons IO FileUtils to download the files, and I have a machine with 12 available CPU cores.

Suppose each of these albums has 1000 images, so I need to download 50,000 images in parallel.
It's wrong to think of your application doing 50,000 things in parallel. What you are trying to do is optimize your throughput: you are trying to download all of the images in the shortest amount of time.
You should try one fixed-size thread pool and then play around with the number of threads in the pool until you optimize your throughput; maybe start with double the number of processors. If your application is mostly waiting on the network or the server, then you may be able to increase the number of threads in the pool, but you wouldn't want to overload the server so that it slows to a crawl, and you wouldn't want to thrash your application with a huge number of threads.
That would create 5 different pools of 1000 threads each, i.e. about 5000 threads, which might degrade performance instead of improving it.
I see no point in multiple pools unless there are different servers for each album or some other reason why the downloads from each album are different.
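For concreteness, here is a minimal sketch of that advice: one shared fixed-size pool for all albums, driven through CompletableFuture. It loosely reuses the names from the question (a list of albums, getImageDownloadData(), imageDownloader.downloadImage()), and the pool size is only a starting point to tune by measurement:

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

ExecutorService pool = Executors.newFixedThreadPool(
        Runtime.getRuntime().availableProcessors() * 2); // starting point, tune by benchmarking

// Flatten all images from all albums into one set of tasks sharing the same bounded pool.
List<CompletableFuture<Boolean>> futures = albums.stream()
        .flatMap(album -> album.getImageDownloadData().stream())
        .map(idd -> CompletableFuture.supplyAsync(
                () -> imageDownloader.downloadImage(idd), pool))
        .collect(Collectors.toList());

// Block until every download has finished, then release the pool.
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
pool.shutdown();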

The only way to make it "very very fast" is to get a "very very fast" network connection to the server; e.g. co-locate your client with the server that you are downloading from.
Your download speeds are going to be constrained by a number of potential bottlenecks. These include:
The performance of the server; i.e. how fast it can assemble the data to send to you and push it through its network interface.
Per-user request limits imposed by the service.
The end-to-end performance of the network path between your client and the server.
The performance of the machine you are running on in terms of moving data from the network and putting it (I guess) onto your local disk.
The bottleneck could be any of these, or a combination of them.
Throwing thousands of threads at the problem is unlikely to improve things. Indeed, if anything it is likely to make performance less than optimal. For example:
it could congest your network link, or
it could trigger anti-hogging or anti-DOS defenses in the server you are fetching from.
A better (simple) idea would be to use an ExecutorService with a small bounded worker pool, and submit the downloads to the pool as tasks.
Other things:
Try to keep HTTP / HTTPS connections open between downloads from the same server. Some client libraries will do this kind of thing for you; see the sketch after this list.
If you have to download from a number of different servers, try to balance the load across the servers. Consider implementing per-server queues and trying to balance work so that individual servers don't see "bursts" of activity.
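As a sketch of the connection-reuse point: with the JDK 11+ java.net.http client, reusing a single HttpClient instance lets requests to the same host share persistent (keep-alive) connections. The class and file names here are illustrative:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

class ImageFetcher {
    // One shared client; it pools and reuses connections per host by default.
    static final HttpClient CLIENT = HttpClient.newHttpClient();

    static Path download(String url, Path target) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        // Reuses an already-open connection to the host when one is available.
        return CLIENT.send(request, HttpResponse.BodyHandlers.ofFile(target)).body();
    }
}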
I would also advise you to make sure that you have permission to do what you are doing. Companies in the music publishing business have good lawyers. They could make your life unpleasant [1] if they perceive you to be violating their terms and conditions or stealing their intellectual property.
[1] Like blocking your IP address or issuing take-down requests to your service provider.

Related

Thread Management Best Practices for Network Bound Web Server

I have a network server that has a few dozen backend controllers, each of which processes a different user request (i.e., a user clicking something on the website).
Each controller makes network calls to a handful of services to get the data it needs. Each network call takes around 200 ms. These network calls can be done in parallel, so I want to launch one thread for each of them and then collect the results at the end, maximizing parallelization (5 network calls in parallel take 200 ms, whereas 5 in sequence take 1000 ms).
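To make the fan-out concrete, a sketch of what I mean, where callServiceA/B/C and the pool size are placeholders:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

ExecutorService pool = Executors.newFixedThreadPool(32); // shared, bounded

CompletableFuture<String> a = CompletableFuture.supplyAsync(() -> callServiceA(), pool);
CompletableFuture<String> b = CompletableFuture.supplyAsync(() -> callServiceB(), pool);
CompletableFuture<String> c = CompletableFuture.supplyAsync(() -> callServiceC(), pool);

// Wall time is roughly the slowest single call (~200 ms), not the sum of all calls.
CompletableFuture.allOf(a, b, c).join();
String combined = a.join() + b.join() + c.join();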
However I am unsure of best practice to design the thread management strategy.
Should I have one threadpool with say 1000 (arbitrary number for example) threads in it, and each controller draws from that pool?
Should I have no thread pool at all and create new threads in each controller as I need them? This option seems "dumb", but I wonder: how much does creating a thread cost in CPU cycles, compared to waiting for a network response? Quite minimal.
Should I have one threadpool per controller? (Meaning dozens of threadpools, each with around 5 or 6 threads for that specific controller).
Seeking pros / cons of each strategy, best practices, or an alternate strategy I haven't considered.

Serving single HTTP request with multiple threads

An Angular 4 application sends a list of records to a Java Spring MVC application deployed in a WebSphere 8 servlet container. The list is then inserted into a temp table. After the batch insert, a procedure call is made in order to do some calculations and return results. Depending on the size of the list inserted into the temp table, it may take anywhere from 3000 ms (N ~ 500) to 6000 ms (N ~ 1000) to 50,000+ ms (N > 2000).
My approach would be to create chunks of data and simultaneously send them to the database for processing. After the threads (Futures) return results, I would aggregate them and return them to the client. To sum up, I would split a synchronous call into multiple asynchronous processes (executed simultaneously) and reply to the client over the same thread that initiated the HTTP call - the one that landed in my controller.
Everything would be fine, and I would not be asking this question, if a more experienced colleague of mine did not strongly disagree with this approach. His reasoning is that this approach is prone to exceptions due to thread interrupts / timeouts / semaphores and so on. He goes as far as saying that multithreading should be avoided within a web container because it can crash the servlet container in case it runs out of threads.
He proposes that we should have the browser send multiple AJAX requests and aggregate/present the data in chunks.
Can you please help me understand which approach is better and why?
I would say that your approach is much better.
Threads created by application logic aren't application container threads; they are limited only by the operating system, while each AJAX request uses a thread from the application container. So the second approach reduces throughput and increases the possibility of hitting the application container's limit, while the first one does not. Performance should also be considered: it's much cheaper to create a thread than to send a request over the network, and each network request uses additional resources for authentication/authorization/encryption etc.
It's definitely harder to write correct multithreaded code, and it is prone to errors. However, that shouldn't stop you, because concurrency can significantly increase your performance. It's pretty straightforward to handle interrupts and timeouts using Future, and you certainly don't need semaphores here.
Exposing this logic to the client looks like a breach of encapsulation. Imagine using a REST API that forces you to send multiple requests by splitting your data into chunks: what chunk size should I use? How do I deal with timeouts/interrupts? How many requests should I send? etc. You will face almost the same challenges in both approaches, but it's much easier to deal with them using libraries specially designed for this, like ExecutorService and Future.
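As a rough sketch of the in-container approach being defended here; the chunk size, the Record/Result types, and the partition() and processChunk() helpers are all hypothetical placeholders:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

ExecutorService pool = Executors.newFixedThreadPool(8); // app-managed, not container threads

List<Future<Result>> futures = new ArrayList<>();
for (List<Record> chunk : partition(records, 500)) {
    // Each task does the temp-table insert plus the procedure call for its chunk.
    futures.add(pool.submit(() -> processChunk(chunk)));
}

List<Result> results = new ArrayList<>();
for (Future<Result> f : futures) {
    // A timeout keeps a slow chunk from hanging the HTTP request forever.
    results.add(f.get(30, TimeUnit.SECONDS));
}
// Aggregate 'results' and return them on the original request thread.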

does multi threading improve performance? scenario java [duplicate]

This question already has answers here:
Does multi-threading improve performance? How?
(2 answers)
Closed 8 years ago.
I have a List<Object> objectsToProcess. Let's say it contains 1,000,000 items. You then process each item in the list like this:
for (Object object : objectsToProcess) {
    // go to database, retrieve data
    // process
    // save data
}
My question is: would multi-threading improve performance? I would have thought that threads are allocated by the processor by default anyway?
In the described scenario, given that process is a time-consuming task, and given that the CPU has more than one core, multi-threading will indeed improve the performance.
The processor is not the one who allocates the threads. The processor is the one who provides the resources (virtual CPUs / virtual processors) that can be used by threads by providing more than one execution unit / execution context. Programs need to create multiple threads themselves in order to utilize multiple CPU cores at the same time.
The two major reasons for multi-threading are:
Making use of multiple CPU cores which would otherwise be unused or at least not contribute to reducing the time it takes to solve a given problem - if the problem can be divided into subproblems which can be processed independently of each other (parallelization possible).
Making the program act and react on multiple things at the same time (i.e. Event Thread vs. Swing Worker).
There are programming languages and execution environments in which threads will be created automatically in order to process problems that can be parallelized. Java is not (yet) one of them, but since Java 8 it is well on its way there, and Java 9 may bring even more.
Usually you do not want significantly more threads than the CPU provides cores, for the simple reason that thread switching and thread synchronization are overhead that slows things down.
The package java.util.concurrent provides many classes that help with typical problems of multithreading. What you want is an ExecutorService to which you assign the tasks that should be run and completed in parallel. The class Executors provides factory methods for creating popular types of ExecutorService. If your problem just needs to be solved in parallel, you might want to go for Executors.newCachedThreadPool(). If your problem is urgent, you might want to go for Executors.newWorkStealingPool().
Your code thus could look like this:
final ExecutorService service = Executors.newWorkStealingPool();
for (final Object object : objectsToProcess) {
    service.submit(() -> {
        // go to database, retrieve data
        // process
        // save data
    });
}
Please note that the sequence in which the objects would be processed is no longer guaranteed if you go for this approach of multithreading.
If your objectsToProcess are something which can provide a parallel stream, you could also do this:
objectsToProcess.parallelStream().forEach(object -> {
    // go to database, retrieve data
    // process
    // save data
});
This will leave the decisions about how to handle the threads to the VM, which often will be better than implementing the multi-threading ourselves.
Further reading:
http://docs.oracle.com/javase/tutorial/collections/streams/parallelism.html#executing_streams_in_parallel
http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/package-summary.html
Depends on where the time is spent.
If you have a load of calculations to do, then allocating work to more threads can help; as you say, each thread may execute on a separate CPU. In such a situation there is no value in having more threads than CPUs. As Corbin says, you have to figure out how to split the work across the threads and take responsibility for starting the threads, waiting for completion and aggregating the results.
If, as in your case, you are waiting for a database, then there can be additional value in using threads. A database can serve several requests in parallel (the database server itself is multi-threaded), so instead of coding
for (Object object : objectsToProcess) {
    // go to database, retrieve data
    // process
    // save data
}
where you wait for each response before issuing the next, you want to have several worker threads, each performing:
// go to database, retrieve data
// process
// save data
Then you get better throughput. The trick though is not to have too many worker threads. Several reasons for that:
Each thread uses some resources; it has its own stack and its own connection to the database. You would not want 10,000 such threads.
Each request uses resources on the server: each connection uses memory, and each database server will only serve so many requests in parallel. There is no benefit in submitting thousands of simultaneous requests if it can only serve tens of them in parallel. Also, if the database is shared, you probably don't want to saturate it with your requests; you need to be a "good citizen".
Net: you will almost certainly benefit from having a number of worker threads. The number of threads that helps will be determined by factors such as the number of CPUs you have and the ratio between the amount of processing you do and the response time from the DB. You can only really determine that by experiment, so make the number of threads configurable and investigate. Start with, say, 5, then 10. Keep an eye on the load on the DB as you increase the number of threads.
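For example, a tiny sketch of making that configurable; the property name is just illustrative:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Override with -Dapp.db.workers=10 on the command line and re-run the experiment.
int workers = Integer.getInteger("app.db.workers", 5);
ExecutorService pool = Executors.newFixedThreadPool(workers);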

Java pooling connection optimization

What are the common guidelines/advice for configuring, in Java, an HTTP connection pool to support a huge number of concurrent HTTP calls to the same server? I mean:
max total connections
max default connection per route
reuse strategy
keep alive strategy
keep alive duration
connection timeout
....
(I am using Apache HttpComponents 4.3, but I am open to exploring new solutions.)
To be clearer, this is my situation:
I developed a REST resource that needs to perform about 10 HTTP calls to AWS CloudSearch in order to obtain search results to be collected into a final result (which I really cannot obtain through a single query).
The whole operation must take less than 0.25 seconds, so I run the HTTP calls in parallel in 10 different threads.
During a benchmarking test, I noticed that with a few concurrent requests (5) my objective is met, but increasing concurrent requests to 30 causes a tremendous degradation of performance: the connection time rises to about 1 second. With few concurrent requests, instead, the connection time is about 150 ms (to be more precise, the first connection takes 1 second, and all the following connections take about 150 ms). I can confirm that CloudSearch returns its response in less than 15 ms, so there is a problem somewhere in my connection pool.
Thank you!
The amount of threads/connections that are best for your implementation depend on that implementation (which you did not post), but here are some guidelines as requested:
If those threads never block at all, you should have as many threads as cores (Runtime.getRuntime().availableProcessors(), which includes hyper-threaded cores), simply because more than 100% CPU usage isn't possible.
If your threads rarely block, cores * 2 is a good start for benchmarking.
If your threads frequently block, you absolutely need to benchmark your application with various settings to find the best solution for your implementation, OS and hardware.
Now the most optimal case is obviously the first one, but to get to this one, you need to remove blocking from your code as much as you can. Java can do this for IO operations if you use the NIO package in non-blocking mode (which is not how the Apache package does it).
Then you have 1 thread that waits on a selector and wakes up as soon as any data is ready to be sent or read. This thread then only copies the data from its source to the destination and returns to the selector. In case of a read (incoming data), this destination is a blocking queue on which as many threads as cores wait. One of those threads will then pull out the received data and process it, now without any blocking.
You can then use the length of the blocking queue to adjust how many parallel requests are reasonable for your task and hardware.
The first connection takes >1 second because it actually has to look up the address via DNS. All other connections are put on hold for the moment, as there is no sense in doing this twice. You can circumvent that either by calling the IP directly (probably not good if you talk to a load balancer) or by "warming up" the connections with an initial request. Any new connection afterwards will use the cached DNS result, but it still needs to perform other initializations, so reusing connections as much as you can will reduce latency a lot. With NIO this is a very easy task.
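A sketch for HttpComponents 4.3 (the version mentioned in the question): size the pool to the expected concurrency and issue one warm-up request, so the DNS lookup and first connection setup are paid before the parallel burst. The endpoint URL is illustrative:

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

CloseableHttpClient client = HttpClients.custom()
        .setMaxConnTotal(30)     // at least as many as your concurrent requests
        .setMaxConnPerRoute(30)  // all requests target the same CloudSearch host
        .build();

// Warm-up: resolves DNS and opens the first connection ahead of the benchmark.
try (CloseableHttpResponse response =
        client.execute(new HttpGet("https://search.example.amazonaws.com/warmup"))) {
    EntityUtils.consume(response.getEntity()); // returns the connection to the pool
}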
In addition there are HTTP multi-requests: you make one connection but send several URL requests on it and get several responses over "the same line". This massively reduces connection overhead, but it needs to be supported by the server.

nonblocking-io vs blocking-io on raw data throughput

In the Apache HttpComponents documentation there is a statement:
"Contrary to the popular belief, the performance of NIO in terms of raw data throughput is significantly lower than that of blocking I/O."
Is that true? Can someone explain this in more detail? And what is a typical use case where request/response handling needs to be decoupled?
Non-blocking IO should be used when you can handle the request, dispatch it for processing in some other execution context (a different thread, an RPC call to another server, some other async mechanism) and release the web server's thread to handle more incoming requests. When the processing of the response completes, a response-handling thread is invoked, and it sends the response to the client.
I would recommend reading the Netty documentation for a better understanding of the concept.
As for higher throughput: when your server sends/receives large amounts of data, all those context switches, and passing data between threads, can really hurt overall performance. Think of it like this: you receive a large request (a PUT request with a large file). All you need to do is save it to disk and return OK. Tossing it between threads could result in a few more memory-copy operations than would have been needed if you had just written it to disk in the same thread. And handling this operation in an async manner would not improve performance: though you could release the request-handling thread back to the web server's thread pool and let it process other requests, your main performance bottleneck is your disk IO, and in this case trying to save more files simultaneously would only make things slower.
I hope I was clear enough. Please feel free to ask more questions in comments if you need more explanations.
The first statement is true only when the number of concurrent requests is relatively small (in the tens rather than thousands). It's all about using many threads (blocking) instead of one or a few threads (non-blocking). Let's say you want to write an application which only downloads a file from a remote server. If your application needs to download only one file at a time, you need only one thread. But if you have a crawler which runs thousands of HTTP requests, then you need thousands of threads (or a limited number of threads + NIO instead). For so big a number of threads, the problem is context switching, which can slow down your application dramatically (therefore for this number of concurrent requests NIO is better).
But let's get back to your question. Why can NIO be slower in terms of raw data throughput? The reason is the amount of CPU time used by NIO-driven applications. In the blocking model your code does only one thing - wait for data (it executes the recv() operation in a loop). In an NIO application the logic is much more complicated: in a loop the code uses the selector to select a set of keys (which involves the epoll_wait system call on Linux with the Oracle JVM), then iterates through the set, picks up a channel for every key and then reads the data from the channel (a read() operation in the OS). In the standard blocking model all you do is execute the recv() system function. In summary: an NIO-driven application in such a case uses more CPU time and generates more mode-switch operations because of the higher number of system calls (by mode switch I mean the switch from user to kernel mode). Therefore the time needed to download the file will be higher.
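To make that difference concrete, here is a minimal selector-loop sketch showing the extra per-read work just described (select, iterate the key set, then read), versus a blocking loop that simply sits in a single read call:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

static void readLoop(Selector selector) throws IOException {
    ByteBuffer buffer = ByteBuffer.allocate(64 * 1024);
    while (true) {
        selector.select();                      // system call: wait for readiness (epoll_wait)
        Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
        while (keys.hasNext()) {
            SelectionKey key = keys.next();
            keys.remove();
            if (key.isReadable()) {
                buffer.clear();
                SocketChannel channel = (SocketChannel) key.channel();
                channel.read(buffer);           // system call: the actual read
                // ... hand the buffer off for processing ...
            }
        }
    }
}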
