Java Multithreading and Node.js Cluster

We can use a Node.js cluster to run multiple processes, while the equivalent in Java is multithreading.
I have an HTTP listener running on Node.js (without clustering), and I'm calling this Node.js HTTP endpoint from Java (using the java.lang.Thread class).
If I have 300 concurrent requests, will that create multiple instances of Node.js? Will Node.js be a bottleneck?

Node.js is single-threaded. That means that whatever number of HTTP calls you make, it will queue them and process them one after another. You'll have a longer response time, though, if you overload Node.js with hundreds of calls in a few seconds.
See this guide about the event loop for further information.
Edit: I did not see the cluster part. It will allow you to use multiple instances, hence using more of your processor's cores and processing more actions at the same time. I would say the best thing to do is to benchmark a lot of operations to see whether that is enough to process hundreds of calls in a few seconds.

Even though Node.js is single-threaded, asynchronous operations are handled off the main thread thanks to its event loop architecture.
If I have 300 concurrent requests, will it create multiple instances of Node.js?
No. Unless you are running a Node cluster, only a single Node process (and thread) will handle the requests.
Will Node.js be a bottleneck?
If most of your work is asynchronous, Node will be able to perform those tasks in parallel and shouldn't be a bottleneck. You can also scale the application by creating a Node process for each available CPU core and/or by deploying the process on multiple machines.
However, it's important to note the distinctions between a Java multithreaded application and a Node cluster (multiprocess) application:
processes are typically independent, while threads exist as subsets of a process
processes carry considerably more state information than threads, whereas multiple threads within a process share process state as well as memory and other resources
processes have separate address spaces, whereas threads share their address space
processes interact only through system-provided inter-process communication mechanisms
context switching between threads in the same process is typically faster than context switching between processes.
Therefore, if memory is scarce in your context, and if your instance has a multi-core processor, then Node.js might indeed become a bottleneck.
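For illustration, here is roughly what the Java side of that scenario could look like: 300 concurrent calls against the single Node.js listener, driven from a fixed thread pool rather than 300 raw java.lang.Thread objects. The URL and the pool size are assumptions for the sketch.

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LoadCaller {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(300); // one Java thread per in-flight request
        for (int i = 0; i < 300; i++) {
            pool.submit(() -> {
                // Each call blocks its Java thread; the single Node.js process queues and serves them.
                HttpURLConnection conn =
                        (HttpURLConnection) new URL("http://localhost:3000/").openConnection();
                int status = conn.getResponseCode();
                conn.disconnect();
                return status;
            });
        }
        pool.shutdown();
    }
}

However many Java threads you start, the single Node.js process still serves the requests from its one event loop unless clustering is enabled.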

Related

Execute sequentially from two instances of the same Java application

I have a Java application named 'X'. In a Windows environment, at a given point in time there might be more than one instance of the application running.
I want a common piece of code to be executed sequentially in application 'X' no matter how many instances of the application are running. Is that possible, and how can it be achieved? Any suggestions will help.
Example: I have a class named Executor where a method execute() will be invoked. Assuming there might be two or more instances of the application at any given point in time, how can I have the method execute() run sequentially across the different instances?
Is there something like a lock which can be accessed from both instances to see whether the lock is currently held? Any help?
I think what you are looking for is a distributed lock (i.e., a lock which is visible to and controllable from many processes). There are quite a few third-party libraries that have been developed with this in mind, and some of them are discussed on this page:
Distributed Lock Service
There are also some other suggestions in this post, which use a file on the underlying system as a synchronization mechanism:
Cross process synchronization in Java
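As a minimal sketch of the file-based approach (the lock file name is an assumption, and Executor is the class from your question), java.nio's FileLock gives you an OS-level lock that is visible across JVMs:

import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class SequentialRunner {
    public static void run() throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("X-app.lock", "rw");
             FileChannel channel = raf.getChannel();
             FileLock lock = channel.lock()) {   // blocks until no other instance holds the lock
            new Executor().execute();            // only one instance at a time gets here
        }                                        // lock released automatically, even on exceptions
    }
}

Note that file locks are advisory on some platforms, so this only works if every instance goes through the same locking code.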
To my knowledge, you cannot do this that easily. You could implement TCP calls between the processes, but I wouldn't advise it.
You would be better off creating an external process in charge of executing the task, and requesting every task to be executed by sending a message to a JMS queue that your executor process consumes.
...Or maybe you don't really need several processes running at the same time; what you might require is just an application that has several threads performing work at the same time, with one thread dedicated to the Executor. That way, synchronizing the execute() method (or the whole Executor) would be enough and spare you some time.
You cannot achieve this with Executors or anything like that, because the Java virtual machines will be separate.
If you really need to synchronize between multiple independent instances, one approach would be to dedicate an internal port and implement a simple internal server within the application (look into ServerSocket, or into RMI if you need a full-blown solution with extensive communication). The first instance binds to the dedicated application port and becomes the master node. All later instances find the application port taken, but can then use it to make an HTTP (or plain TCP/IP) call to the master node, reporting the activities they need done.
As you only need to execute some action sequentially, any slave node may ask the master to do this rather than executing it itself.
A potential problem with this approach is that if the user shuts down the master node, it may be complex to implement how another running node takes its place. If only one node is active at any time (receiving input from the user), it may take over the role of master after discovering that the master is not responding and that the port is no longer occupied.
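A rough sketch of the port-binding part (the port number and the one-line wire message are arbitrary): whichever instance binds first becomes the master; the others detect the bind failure and send their work to it instead.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class MasterElection {
    private static final int APP_PORT = 53123;   // dedicated internal port, an assumption

    public static void main(String[] args) throws IOException {
        try {
            ServerSocket master = new ServerSocket(APP_PORT);
            // We are the master: keep this socket open, accept() in a background thread,
            // and run execute() once per incoming request, one request at a time.
        } catch (IOException portTaken) {
            // Another instance is already master: send it our work instead of running it here.
            try (Socket toMaster = new Socket("localhost", APP_PORT)) {
                toMaster.getOutputStream().write("execute\n".getBytes());
            }
        }
    }
}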
A distributed queue could be used for this type of load balancing. You put one or more 'request messages' into a queue, and the next available consumer application picks them up and processes them. Each such request message could describe the task to process.
This type of queue could be implemented as a JMS queue (e.g. using ActiveMQ, http://activemq.apache.org/); on Windows there is also MSMQ: https://msdn.microsoft.com/en-us/library/ms711472(v=vs.85).aspx.
If performance is an issue and you have C/C++ developers available, the 'shared memory queue' could also be interesting: shmemq API
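For the JMS route, a producer sketch using the ActiveMQ client (broker URL and queue name are assumptions); the dedicated executor process would consume from the same queue and call execute() once per message, which gives you the sequential behaviour:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TaskSubmitter {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("x.execute.requests");   // queue name is an assumption
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("execute"));       // one message per task to run
        connection.close();
    }
}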

Scalability of Redis Cluster using Jedis 2.8.0 to benchmark throughput

I have an instance of JedisCluster shared between N threads that perform set operations.
When I run with 64 threads, the throughput of set operations is only slightly increased (compared to running using 8 threads).
How to configure the JedisCluster instance using the GenericObjectPoolConfig so that I can maximize throughput as I increase the thread count?
I have tried
// Pool config shared by the N worker threads
GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
poolConfig.setMaxTotal(64);   // allow up to 64 pooled connections (per cluster node)
jedisCluster = new JedisCluster(jedisClusterNodes, poolConfig);
believing this could increase the number of JedisCluster connections to the cluster and so boost throughput.
However, I observed a minimal effect.
When talking about performance, we need to dig into details a bit before I can actually answer your question.
A naive approach suggests: The more Threads (concurrency), the higher the throughput.
That statement is not wrong, but it is not the whole truth either. Concurrency and the resulting performance are not (always) linearly related, because there is so much going on behind the scenes. Turning something from sequential into concurrent processing might result in something that performs twice the work of the sequential execution. This example assumes that you run on a multi-core machine that is not occupied by anything else and that has enough bandwidth for the required work (I/O, network, memory). If you scale this example from two threads to eight, but your machine has only four physical cores, weird things might happen.
First of all, the processor needs to schedule two threads per core, so each thread behaves roughly as if it were running sequentially, except that the process, the OS, and the processor have increased overhead caused by twice as many threads as cores. Orchestrating these guys comes at a cost that has to be paid at least in memory allocation and CPU time. If the workload requires heavy I/O, then the work processing might be limited by your I/O bandwidth, and running things concurrently may increase throughput since the CPU is mostly waiting until the I/O comes back with the data to process. In that scenario, 4 threads might be blocked by I/O while the other 4 threads are doing some work. Similar considerations apply to memory and other resources utilized by your application. Actually, there's much more that digs into context switching, branch prediction, L1/L2/L3 caching, locking and so on, enough to write a 500-page book. Let's stay at a basic level.
Resource sharing and certain limitations lead to different scalability profiles. Some are linear until a certain concurrency level, some hit a roof and adding more concurrency results in the same throughput, some have a knee when adding concurrency makes it even slower because of $reasons.
Now, we can analyze how Redis, Redis Cluster, and concurrency are related.
Redis is a network service and requires network I/O. Networking might be obvious, but we need to add this fact to our considerations: a Redis server shares its network connection with other things running on the same host and with everything that uses the switches, routers, hubs, etc. The same applies to the client, even if you told everybody else not to run anything while you're testing.
The next thing is, Redis uses a single-threaded processing model for user tasks (Don't want to dig into Background I/O, lazy-free memory freeing and asynchronous replication). So you could assume that Redis uses one CPU core for its work but, in fact, it can use more than that. If multiple clients send commands at a time, Redis processes commands sequentially, in the order of arrival (except for blocking operations, but let's leave this out for this post). If you run N Redis instances on one machine where N is also the number of CPU cores, you can easily run again into a sharing scenario - That is something you might want to avoid.
You have one or many clients that talk to your Redis server(s). Depending on the number of clients involved in your test, this has an effect. Running 64 threads on an 8-core machine might not be the best idea, since only 8 cores can execute work at a time (let's leave hyper-threading and all that out of here; I don't want to confuse you too much). Requesting more than 8 threads causes time-sharing effects. That said, running a few more threads than CPU cores for Redis and other networked services isn't a bad idea, since there is always some overhead/lag coming from the I/O (network). You need to send packets from Java (through the JVM, the OS, the network adapter, routers) to Redis (routers, network, yadda yadda yadda), Redis has to process the commands, and then send the response back. This usually takes some time.
The client itself (assuming concurrency on one JVM) locks certain resources for synchronization. In particular, obtaining a connection (whether reusing an existing one or creating a new one) is a scenario that involves locking. You already found the link to the pool config. While one thread locks a resource, no other thread can access it.
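For reference, a sketch of the pool knobs beyond maxTotal (the values are illustrative assumptions, not recommendations; in JedisCluster the pool configuration applies per cluster node):

import java.util.HashSet;
import java.util.Set;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class ClusterClientSetup {
    public static JedisCluster build() {
        GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
        poolConfig.setMaxTotal(64);          // upper bound on connections to each node
        poolConfig.setMaxIdle(64);           // keep idle connections around instead of closing them
        poolConfig.setMinIdle(8);            // pre-warm a few connections per node
        poolConfig.setMaxWaitMillis(2000);   // fail fast instead of blocking forever when exhausted

        Set<HostAndPort> nodes = new HashSet<>();
        nodes.add(new HostAndPort("127.0.0.1", 7000));   // assumption: local test cluster
        return new JedisCluster(nodes, poolConfig);
    }
}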
Knowing the basics, we can dig into how to measure throughput using jedis and Redis Cluster:
Congestion on the Redis Cluster can be an issue. If all client threads are talking to the same cluster node, then the other cluster nodes are idle, and you have effectively measured how one node behaves, not the cluster. Solution: create an even workload (level: hard!)
Congestion on the client: Running 64 threads on an 8-core machine (that is just my assumption here, so please don't beat me up if I'm wrong) is not the best idea. Raising the number of client threads a bit above the number of cluster nodes (assuming an even workload for each cluster node) and a bit above the number of CPU cores can improve performance, but having 8x as many threads as CPU cores is overkill because it adds scheduling overhead at all levels. In general, performance engineering is about finding the best ratio between work, overhead, bandwidth limitations and concurrency, so finding the best number of threads is a field of computer science in its own right.
Running a test from multiple systems, which together run the total number of threads, might be closer to a production environment than running the test from one system. Distributed performance testing is a master class (level: very hard!). The trick here is to monitor all resources that are used by your test, making sure nothing is overloaded, or to find the tipping point where you identify the limit of a particular resource. Monitoring the client and the server are just the easy parts.
Since I do not know your setup (number of Redis Cluster nodes, distribution of cluster nodes amongst different servers, load on the Redis servers, the client, and the network during the test caused by anything other than your test), it is impossible to say what the cause is.
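As a starting point for such a measurement, a rough throughput probe could sweep the thread count and report SETs per second per setting. This assumes 'cluster' is the shared JedisCluster from your code and runs inside a method that may throw Exception; durations and key names are arbitrary:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

for (int threads : new int[] {8, 16, 32, 64}) {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    AtomicLong ops = new AtomicLong();
    long end = System.nanoTime() + TimeUnit.SECONDS.toNanos(10);   // 10-second run per setting
    for (int t = 0; t < threads; t++) {
        pool.submit(() -> {
            while (System.nanoTime() < end) {
                // random keys spread the writes across hash slots, i.e. across cluster nodes
                cluster.set("bench:" + ThreadLocalRandom.current().nextInt(1_000_000), "x");
                ops.incrementAndGet();
            }
        });
    }
    pool.shutdown();
    pool.awaitTermination(15, TimeUnit.SECONDS);
    System.out.println(threads + " threads -> " + ops.get() / 10 + " ops/s");
}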

Multithreading with websockets

This is more a design question. I have the following implementation
Multiple Client connections -----> Server ------> Corresponding DB conns
The client/server communication is done using web sockets. It is a single-threaded application currently. Evidently, this design does not scale, as the load on the server is too high and the response back to the clients takes too long.
Back end operations involve handling large amounts of data.
My question: is it a good idea to create a new thread for every web socket connection? This would imply 500 threads for 500 clients (the number of web sockets would be the same whether the server is multi-threaded or single-threaded). This would ease the load on the server and hence make life a lot easier.
or
Is there a better way to attain scalability? One option could be to create threads based on the merit of the job and have the rest processed by the main thread. That, however, seems to lead back to the same problem again in the future.
Any help here would be greatly appreciated.
There are two approaches to this kind of problem:
one thread per request
a fixed number of threads to manage all requests
Actually you are using the second approach, but with only one thread.
You can improve it by using a pool of threads to handle your requests instead of only one.
The number of threads to use for the second approach depends on your application. If you make heavy use of the CPU and have a certain number of long I/O operations (reads or writes to disk or network), you can increase this number.
If you have no I/O operations, the number of threads should be close to the number of CPU cores.
Note: existing web servers use these two approaches for HTTP requests. Just as an example, Apache uses the first (one thread per request) and Node.js uses the second (it is event-driven).
In any case, use a timeout mechanism to unblock very long requests before the server crashes.
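A minimal sketch of that pool-plus-timeout idea (pool size, the 30-second budget, and the handleMessage/message names are assumptions, not an existing API):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// A fixed pool shared by all websocket connections, instead of one thread per connection.
ExecutorService workers =
        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() * 2);

// Inside your connection handler, for each incoming message:
Future<String> result = workers.submit(() -> handleMessage(message));   // hypothetical handler
try {
    String response = result.get(30, TimeUnit.SECONDS);   // wait at most 30 seconds
    // send 'response' back over the websocket here
} catch (TimeoutException e) {
    result.cancel(true);   // interrupt the worker so a slow job cannot hog the pool forever
} catch (Exception e) {
    // handle InterruptedException / ExecutionException from the job itself
}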
You can have a look at two very good scalable web servers, Apache and Node.js.
Apache, when operating in multi-threaded (worker) mode, will create new threads for new connections (note that requests from the same browser are served by the same thread, via keep-alive).
Node.js is vastly different, and uses an asynchronous workflow by delegating tasks.
Consequently, Apache scales very well for computationally intensive tasks, while Node.js scales well for huge numbers of small, event-based requests.
You mention that you do some heavy tasks on the backend. This means that you should create multiple threads. How? Create a thread queue with a MAX_THREADS limit and a MAX_THREADS_PER_CLIENT limit, serving repeated requests from a client using the same thread. Your main thread must only spawn new threads.
If you can, you can incorporate some good Node.js features as well. If some task on a thread is taking too long, kill that thread, with a callback for the task to create a new one when the job is done. You can run benchmarks, or even train a neural network, to find out when to do this!
Have a blast!

Java RMI specification on thread usage: "..may or may not execute in a separate thread"

I might have a problem with my application. There is a client running multiple threads which may make rather time-consuming calls to the server over Java RMI. Of course, a time-consuming call from one client should not block everyone else.
I tested it, and it works on my machine. I created two threads on the client and a dummy call on the server. On startup both client threads call the dummy method, which just does a huge number of sysouts. It can be seen that these calls are handled in parallel, without blocking.
I was very satisfied until a colleague pointed out that the RMI spec does not necessarily guarantee that behavior.
And indeed, a text on the homepage of the University of Lancaster states that:
“A method dispatched by the RMI runtime to a remote object implementation (a server) may or may not execute in a separate thread. Calls originating from different client Virtual Machines will execute in different threads. From the same client machine it is not guaranteed that each method will run in a separate thread” [1]
What can I do about that? Is it possible that it just won't work in practice?
In theory, yes, you may have to worry about this. In reality, all mainstream RMI implementations multi-thread all incoming calls, so unless you are running against some obscure JVM, you don't have anything to worry about.
What that wording means is that you can't assume the calls will all execute in the same thread, so you are responsible for any required synchronization.
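In practice that means guarding any shared state inside the remote object yourself; a small sketch with a hypothetical Counter interface:

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

interface Counter extends Remote {
    long increment() throws RemoteException;
}

class CounterImpl extends UnicastRemoteObject implements Counter {
    private long count;

    CounterImpl() throws RemoteException { super(); }

    @Override
    public synchronized long increment() throws RemoteException {
        // Safe whether the RMI runtime dispatches calls on one thread or on many.
        return ++count;
    }
}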
Based on my testing on a Mac laptop, every client request received in parallel seems to be executed on a separate thread (I tried up to a thousand threads without any issues; I don't know if there is an upper bound, though. My guess is that the maximum number of threads will be limited only by memory).
These threads then hang around for some time (a minute or two), in case they can service more clients. If they are unused for some time, they get GC'ed.
Note that I used Thread.sleep() on the server to hold up every request, hence none of the threads could finish the task and move on to another request.
The point is that, if required, the JVM can even allocate a separate thread for each client request. If work is done and threads are free, it could reuse existing threads without creating new ones.
I don't see a situation where any client request would be stuck waiting due to RMI constraints. No matter how many threads on the server are "busy" processing existing requests, new client requests will be received.

How can I limit the performance of sandboxed Java code?

I'm working on a multi-user Java webapp, where it is possible for clients to use the webapp API to do potentially naughty things, by passing code which will execute on our server in a sandbox.
For example, it is possible for a client to write a tight while(true) loop that impacts the performance of other clients.
Can you guys think of ways to limit the damage caused by these sorts of behaviors to other clients' performance?
We are using Glassfish for our application server.
The halting problem shows that there is no way a computer can reliably identify code that will not terminate.
The only way to do this reliably is to execute the code in a separate JVM, which you then ask the operating system to shut down when it times out. A JVM that does not time out can process more tasks, so you can simply reuse it.
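A sketch of that separate-JVM approach (class name, classpath and the 10-second budget are assumptions; waitFor with a timeout and destroyForcibly need Java 8+):

import java.io.IOException;
import java.util.concurrent.TimeUnit;

public class SandboxLauncher {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Run the untrusted code in its own JVM so the OS can reclaim CPU and memory afterwards.
        Process sandbox = new ProcessBuilder(
                "java", "-cp", "sandbox.jar", "com.example.UntrustedTask")
                .inheritIO()
                .start();
        if (!sandbox.waitFor(10, TimeUnit.SECONDS)) {
            sandbox.destroyForcibly();   // unlike trying to kill a thread, this always works
        }
    }
}

Since the client code runs in its own process, even a System.exit or a tight infinite loop only takes down that JVM, not your server.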
One more idea would be byte-code instrumentation: before you load the code sent by your client, manipulate it so that it adds a short sleep in every loop and for every method call (or method entry).
This prevents clients from clogging a whole CPU until they are done. Of course, they still block a Thread object (which takes some memory), and the slowdown affects every client, not only the malicious ones. Maybe make the first few tries free, then scale the waiting time up with each try (and scale it down again if the thread has to wait for other reasons).
Modern-day app servers use thread pooling for better performance. The problem is that one bad apple can spoil the bunch. What you need is an app server with one thread, or maybe one process, per request. Of course there are going to be trade-offs, but the OS will handle making sure that processing time gets allocated evenly.
NOTE: After researching a little more, what you need is an engine that will create another process per request. Otherwise a user can cripple your servlet engine by writing servlets with infinite loops and then posting multiple requests, or he could simply call System.exit in his code and bring everybody down.
You could use a parent thread to launch each request in a separate thread, as suggested already, and then monitor the CPU time used by those threads with the ThreadMXBean class. The parent thread could then kill any threads that are misbehaving. This assumes, of course, that you can establish some kind of reasonable criterion for how much CPU time a thread should or should not be using. Maybe the rule could be that a certain initial amount of time plus a certain additional amount per second of wall-clock time is OK?
I would make these client request threads have lower priority than the thread responsible for monitoring them.
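A sketch of the ThreadMXBean idea (the 5-second CPU budget and the workerThread variable are assumptions; interrupt() only helps if the sandboxed code ever checks for interruption):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.TimeUnit;

ThreadMXBean threads = ManagementFactory.getThreadMXBean();
if (threads.isThreadCpuTimeSupported()) {
    threads.setThreadCpuTimeEnabled(true);
}

// Run periodically from the (higher-priority) monitoring thread:
long cpuNanos = threads.getThreadCpuTime(workerThread.getId());   // -1 if the thread has finished
if (cpuNanos > TimeUnit.SECONDS.toNanos(5)) {
    workerThread.interrupt();   // cooperative stop; a loop that ignores interrupts
                                // can only be stopped by killing its process
}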
