Is one Thread per Client the answer? - java

I have been poring over some research on multithreaded programming, such as this academic webpage, and have noticed that it is quite popular to create one Thread for each client that connects to a given server. In fact, I have found some sample client-server programs that do just this. (I attempted to adopt the idea, but now I am in doubt.) According to Java: How to Program, it is recommended to use an ExecutorService to create and manage Threads, since the programmer cannot predict when a Thread will actually be dispatched by the system, regardless of the order in which threads are created and started.
What I intend to do
As mentioned earlier, I am creating a server that creates a Thread for each client. The clients send data to me, and each Thread fetches the data, stores it in a file, and logs it.
My question
Would using the ExecutorService to create Threads (and manage them!) be effectively the same as giving each client a Thread, but more manageable? Also, would it eliminate the overhead caused by the famous "one-thread-per-client" idea?

Would using the ExecutorService to create Threads (and manage them!) be effectively the same as giving each client a Thread, but more manageable?
Yes.
Also, would it eliminate the overhead caused by the famous "one-thread-per-client" idea?
No. The overhead is usually in terms of the number of active threads, which is not changed by using a thread pool.
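For illustration, here is a minimal sketch of the pooled approach, assuming plain Java sockets (the port and pool size are placeholder values): the ExecutorService caps and reuses threads, but each connected client still occupies one thread for as long as its task runs.

    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PooledServer {
        public static void main(String[] args) throws Exception {
            // Pool of 50 worker threads; submitted tasks queue up when all are busy.
            ExecutorService pool = Executors.newFixedThreadPool(50);
            try (ServerSocket server = new ServerSocket(9000)) {
                while (true) {
                    Socket client = server.accept();
                    pool.submit(() -> handle(client)); // one task per client
                }
            }
        }

        private static void handle(Socket client) {
            try (Socket s = client) {
                // read the client's data, store it in a file, and log it
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }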

Related

How can multiple threads invoke the same service thread?

The question
The software runs on a single Dell server with Linux.
The language could be C++, Java, or Python.
Both thread A and thread B assign tasks to the service thread. When it receives tasks, the service thread puts them on its own task queue. When the service thread is free, it executes the tasks and returns the results to thread A or thread B, depending on who sent the request.
Thread A has a higher priority than thread B.
My thoughts
It is very similar to client/server socket programming. However, since this software runs on one and the same server, TCP/IP does not seem like a good solution to me.
Another thought is to use a common data store for this, such as Redis. But Redis also communicates over TCP/IP, and I am not sure this would be a good fit.
Someone also suggested using a service DLL that both thread A and thread B could invoke directly. However, I have no experience building a DLL that serves several threads simultaneously. Is this possible?
My question is: how can I achieve this in a suitable way?
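Since the design described above is essentially a single consumer draining a prioritized task queue, here is a minimal in-process sketch, assuming Java (all class names and the priority values are purely illustrative): tasks from the higher-priority producer are served first, and each producer gets its result back through a CompletableFuture.

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.PriorityBlockingQueue;

    public class ServiceThread {
        static final class Task implements Comparable<Task> {
            final int priority;                   // lower value = served first
            final String payload;
            final CompletableFuture<String> result = new CompletableFuture<>();

            Task(int priority, String payload) {
                this.priority = priority;
                this.payload = payload;
            }

            public int compareTo(Task other) {
                return Integer.compare(priority, other.priority);
            }
        }

        private final PriorityBlockingQueue<Task> queue = new PriorityBlockingQueue<>();

        // Thread A submits with priority 0, thread B with priority 1.
        public CompletableFuture<String> submit(int priority, String payload) {
            Task t = new Task(priority, payload);
            queue.put(t);
            return t.result; // the caller waits on this for its own result
        }

        public void start() {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        Task t = queue.take();    // blocks while the queue is empty
                        t.result.complete("done: " + t.payload);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }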

Multithreading with websockets

This is more of a design question. I have the following implementation:
Multiple Client connections -----> Server ------> Corresponding DB conns
The client/server communication is done using WebSockets. It is currently a single-threaded application. Evidently, this design does not scale, as the load on the server is too high and responses back to the clients take too long.
Back-end operations involve handling large amounts of data.
My question: is it a good idea to create a new thread for every WebSocket connection? This would imply 500 threads for 500 clients (the number of WebSockets would be the same whether the server is multi-threaded or single-threaded). This would ease the load on the server and hence would make life a lot easier.
or
Is there a better way to attain scalability? One option could be to create threads based on the merit of the job and have the rest processed by the main thread. This, however, seems likely to lead back to the same problem in the future.
Any help here would be greatly appreciated.
There are two approaches to this kind of problem:
one thread per request
a fixed number of threads to manage all requests
Actually, you are using the second approach, but with only one thread.
You can improve it by using a pool of threads to handle your requests instead of only one.
The number of threads to use for the second approach depends on your application. If you make heavy use of the CPU and have a certain number of long I/O operations (reads or writes to disk or network), you can increase this number.
If you have no I/O operations, the number of threads should be close to the number of CPU cores.
Note: existing web servers use these two approaches for HTTP requests. Just as an example, Apache uses the first (one thread per request) and Node.js uses the second (it is event-driven).
In any case, use a timeout mechanism to unblock very long requests before the server crashes.
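A minimal sketch of such a timeout, assuming Java's standard executors (the pool size and deadline are placeholder values): the caller blocks on Future.get() with a deadline and cancels the task if it overruns.

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    public class TimedRequestHandler {
        private final ExecutorService pool = Executors.newFixedThreadPool(8);

        public String handle(Callable<String> request) throws Exception {
            Future<String> future = pool.submit(request);
            try {
                return future.get(5, TimeUnit.SECONDS); // per-request deadline
            } catch (TimeoutException e) {
                future.cancel(true); // interrupts the worker thread
                return "request timed out";
            }
        }
    }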
You can have a look at two very good scalable web servers, Apache and Node.js.
Apache, when operating in multi-threaded (worker) mode, will create new threads for new connections (note that requests from the same browser are served from the same thread, via keep-alive).
Node.js is vastly different, and uses an asynchronous workflow by delegating tasks.
Consequently, Apache scales very well for computationally intensive tasks, while Node.js scales well for huge numbers of small, event-based requests.
You mention that you do some heavy tasks on the back end. This means that you should create multiple threads. How? Create a thread queue with a MAX_THREADS limit and a MAX_THREADS_PER_CLIENT limit, serving repeated requests from a client using the same thread. Your main thread must only spawn new threads; a sketch of these limits follows below.
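One way to realize the two limits just described, assuming Java (the constant values and the notion of a string client ID are placeholders): a global fixed pool enforces MAX_THREADS, and a per-client Semaphore enforces MAX_THREADS_PER_CLIENT.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;

    public class BoundedDispatcher {
        private static final int MAX_THREADS = 32;
        private static final int MAX_THREADS_PER_CLIENT = 4;

        private final ExecutorService pool = Executors.newFixedThreadPool(MAX_THREADS);
        private final Map<String, Semaphore> perClient = new ConcurrentHashMap<>();

        public void dispatch(String clientId, Runnable job) {
            Semaphore slots = perClient.computeIfAbsent(
                    clientId, id -> new Semaphore(MAX_THREADS_PER_CLIENT));
            if (!slots.tryAcquire()) {
                return; // this client already holds its per-client quota
            }
            pool.submit(() -> {
                try {
                    job.run();
                } finally {
                    slots.release();
                }
            });
        }
    }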
If you can, you can incorporate some good Node.js features as well. If some task on a thread is taking too long, kill that thread, with a callback for the task to create a new one when the job is done. You can even run benchmarks to train a neural network to decide when to do this!
Have a blast!

Java RMI specification on thread usage: "..may or may not execute in a separate thread"

I might have a problem with my application. There is a client running multiple threads which may execute rather time-consuming calls to the server over Java RMI. Of course, a time-consuming call from one client thread should not block everyone else.
I tested it, and it works on my machine. I created two threads on the client and a dummy call on the server. On startup, the client threads both call the dummy method, which just does a huge number of sysouts. It can be seen that these calls are handled in parallel, without blocking.
I was very satisfied until a colleague pointed out that the RMI spec does not necessarily guarantee that behavior.
And indeed, a text on the home page of the University of Lancaster states that:
“A method dispatched by the RMI runtime to a remote object
implementation (a server) may or may not execute in a separate thread.
Calls originating from different clients Virtual Machines will execute
in different threads. From the same client machine it is not
guaranteed that each method will run in a separate thread” [1]
What can I do about that? Is it possible that it just won't work in practice?
In theory, yes, you may have to worry about this. In reality, all mainstream RMI implementations multi-thread all incoming calls, so unless you are running against some obscure JVM, you don't have anything to worry about.
What that wording means is that you can't assume the calls will all execute in the same thread. So you are responsible for any required synchronization.
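A minimal sketch of what that synchronization can look like, assuming Java RMI (the Counter interface is purely illustrative): because the runtime may dispatch concurrent calls on different threads, shared state in the remote object is guarded explicitly.

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.server.UnicastRemoteObject;

    // Hypothetical remote interface, for illustration only.
    interface Counter extends Remote {
        int increment() throws RemoteException;
    }

    class CounterImpl extends UnicastRemoteObject implements Counter {
        private int count;

        CounterImpl() throws RemoteException {
            super();
        }

        // The RMI runtime may dispatch concurrent calls on different threads,
        // so access to the shared counter is synchronized explicitly.
        @Override
        public synchronized int increment() throws RemoteException {
            return ++count;
        }
    }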
Based on my testing on a Mac laptop, every client request received in parallel seems to be executed on a separate thread (I tried up to a thousand threads without any issues; I don't know if there is an upper bound, though. My guess is that the maximum number of threads is limited only by memory).
These threads then hang around for some time (a minute or two) in case they can service more clients. If they are unused for some time, they get GC'ed.
Note that I used Thread.sleep() on the server to hold up every request, so none of the threads could finish its task and move on to another request.
The point is that, if required, the JVM can allocate a separate thread for each client request. If work is done and threads are free, it can reuse existing threads without creating new ones.
I don't see a situation where any client request would be stuck waiting due to RMI constraints. No matter how many threads on the server are "busy" processing existing requests, new client requests will be received.

How can I limit the performance of sandboxed Java code?

I'm working on a multi-user Java web app where clients can use the web app API to do potentially naughty things by passing code that will execute on our server in a sandbox.
For example, it is possible for a client to write a tight while(true) loop that impacts the performance of other clients.
Can you guys think of ways to limit the damage caused by these sorts of behaviors to other clients' performance?
We are using Glassfish for our application server.
The halting problem shows that there is no way for a computer to reliably identify code that will not terminate.
The only reliable way to do this is to execute the code in a separate JVM, which you then ask the operating system to shut down when it times out. A JVM that does not time out can process more tasks, so you can simply reuse it.
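A minimal sketch of that separate-JVM approach, assuming Java's ProcessBuilder (the jar name, entry point, and time budget are all placeholders):

    import java.util.concurrent.TimeUnit;

    public class SandboxRunner {
        public static void main(String[] args) throws Exception {
            // "sandbox.jar" and "UntrustedMain" stand in for the
            // client-supplied code and its entry point.
            Process p = new ProcessBuilder("java", "-cp", "sandbox.jar", "UntrustedMain")
                    .inheritIO()
                    .start();
            if (!p.waitFor(10, TimeUnit.SECONDS)) {
                p.destroyForcibly(); // OS-level kill; nothing in-process can block it
                System.err.println("Sandboxed task timed out and was killed");
            }
        }
    }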
One more idea would be byte-code instrumentation: before you load the code sent by your client, manipulate it so that it adds a short sleep in every loop and for every method call (or method entry).
This prevents clients from clogging a whole CPU until they are done. Of course, they still block a Thread object (which takes some memory), and the slowdown applies to every client, not only the malicious ones. Maybe make the first few tries free, then scale the waiting time up with each try (and scale it down again if the thread has to wait for other reasons).
Modern-day app servers use thread pooling for better performance. The problem is that one bad apple can spoil the bunch. What you need is an app server with one thread, or maybe one process, per request. Of course there will be trade-offs, but the OS will handle making sure that processing time is allocated evenly.
NOTE: after researching a little more, what you need is an engine that creates a separate process per request. Otherwise a user could cripple your servlet engine by deploying servlets with infinite loops and then posting multiple requests. Or he could simply call System.exit() in his code and bring everybody down.
You could use a parent thread to launch each request in a separate thread, as suggested already, but then monitor the CPU time used by those threads via the ThreadMXBean class. The parent thread could then kill any threads that are misbehaving. This assumes, of course, that you can establish some reasonable criterion for how much CPU time a thread should or should not be using. Maybe the rule could be that a certain initial amount of CPU time, plus a certain additional amount per second of wall-clock time, is OK?
I would make these client request threads have a lower priority than the thread responsible for monitoring them.
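A minimal sketch of such a watchdog, assuming Java's standard ThreadMXBean (the budget and poll interval are placeholders; "killing" is done via interruption, since Thread.stop() is unsafe):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class CpuWatchdog {
        private static final long CPU_BUDGET_NANOS = 5_000_000_000L; // 5 s of CPU time

        // Polls the worker's CPU time and interrupts it once it is over budget;
        // the worker must be written to respond to interruption.
        public static void watch(Thread worker) throws InterruptedException {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            while (worker.isAlive()) {
                long cpuNanos = mx.getThreadCpuTime(worker.getId()); // -1 if unavailable
                if (cpuNanos > CPU_BUDGET_NANOS) {
                    worker.interrupt();
                    break;
                }
                Thread.sleep(200); // poll interval
            }
        }
    }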

Java Thread Performance

I am working on a BitTorrent client. While communicating with the peers, the easiest way for me to talk to them is to spawn a new thread for each one. But if the user wants to keep connections with a large number of peers, that may cause me to spawn a lot of threads.
Another solution I thought of is to have one thread iterate through the peer objects and run each of them for a period.
I checked other libraries, mostly in Ruby (mine is in Java), and they spawn one thread for each new peer. Do you think spawning one thread per peer will degrade performance if the user sets the number of connections to a high number like 100 or 200?
It shouldn't be a problem unless you're running thousands of threads. I'd look into a compromise, using a thread pool. You can detect the number of CPUs at runtime and decide how many threads to spin up based on that, then hand work to the thread pool as it comes along.
You can avoid the problem altogether by using non-blocking I/O (java.nio.*).
I'd recommend using an Executor to keep the number of threads pooled.
Executors.newFixedThreadPool(numberOfThreads);
With this, you can basically add "tasks" to the pool, and they will be completed as soon as threads become available. This way, you're not exhausting all of the end user's computer's threads while still getting a lot done at the same time. If you set it to something like 16, you'd be pretty safe, though you could always allow the user to change this number if they wanted to.
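A minimal sketch of this, assuming the per-peer work is wrapped in a Runnable (the pool sizing, peer count, and handlePeer method are placeholders):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PeerPool {
        public static void main(String[] args) {
            // Sizing is an assumption: I/O-bound work tolerates more threads than cores.
            int threads = Math.max(16, Runtime.getRuntime().availableProcessors());
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            for (int i = 0; i < 200; i++) {            // e.g. 200 peers
                final int peerId = i;
                pool.submit(() -> handlePeer(peerId)); // queued until a thread is free
            }
            pool.shutdown(); // accept no new tasks; let queued ones finish
        }

        private static void handlePeer(int peerId) {
            // placeholder for the per-peer protocol work
        }
    }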
No.....
Once I had this very same doubt and created a .NET app (4 years ago) with 400 threads....
Provided they don't do a lot of work, on a decent machine you should be fine...
A few hundred threads is not a problem for most workstation-class machines, and is simpler to code.
However, if you are interested in pursuing your idea, you can use the non-blocking I/O features provided by Java's NIO packages. Jean-Francois Arcand's blog contains a lot of good tips learned from creating the Grizzly connector for Glassfish.
Well, on 32-bit Windows, for example, there is a maximum number of native threads you can create: roughly the 2 GB of user address space divided by the thread stack size (default 2 MB), or something like that. So with too many connections you might simply run out of virtual memory address space.
I think a compromise might work: use a thread pool with, e.g., 10 threads (depending on the machine) and distribute the connections evenly among them. Inside each thread, loop through the peers assigned to it. And limit the maximum number of connections.
Use a thread pool and you should be safe with a fairly large pool size (100 or so). CPU will not be a problem, since you are I/O-bound with this type of application.
You can easily make the pool's size configurable and put in a reasonable maximum, just to prevent memory-related issues with all the threads. Of course, that should only occur if all the threads are actually being used.
