I'm building a small client/server chat application. I came across NIO.2 after I tried to simulate it using the classic NIO library.
The goal of my "simulation" of the NIO.2 lib with classic NIO was to use multiple selectors in multiple threads, connected in pairs through an ArrayBlockingQueue, to avoid blocking on network reads and writes.
My question is, how are multiple simultaneous events handled in the NIO.2 lib using AsynchronousSocketChannels and CompletionHandlers (which, to my understanding, act as callbacks)?
The classic NIO lib uses Selectors, which deliver a key set after a select call. This key set can then be iterated over, and each event (read, accept and write) can be handled one after another.
The NIO.2 callbacks, on the other hand, don't have such a sequence. They are asynchronous. So what happens if, for example, 2 clients send a message to the server at exactly the same moment?
Do 2 callbacks then run at the same time? And if yes, how?
Do they each run in separate threads or not?
And if I were to take those messages from each of the callbacks and try to enqueue them in the ArrayBlockingQueue mentioned before, would they wait for each other or not?
So what happens if, for example, 2 clients send a message to the server at exactly the same moment?
The clients do not share a common connection with the server. Server-side, you'd call AsynchronousSocketChannel#read with your callback for both clients, which would fire when some bytes arrive.
For that reason, two callbacks can run simultaneously (as they're asynchronous), but they're still independent for each client, so there won't be a problem.
Do they each run in separate threads or not?
This depends on the backing AsynchronousChannelGroup's thread pool (which you can specify yourself or use the default group).
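As a minimal sketch of that setup (the group size, port and buffer size are arbitrary assumptions, not anything prescribed by the API):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousChannelGroup;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.util.concurrent.Executors;

public class Nio2Server {
    public static void main(String[] args) throws Exception {
        // The group's thread pool runs the completion handlers; with more than one
        // thread, handlers for different clients may indeed run at the same time.
        AsynchronousChannelGroup group =
                AsynchronousChannelGroup.withFixedThreadPool(4, Executors.defaultThreadFactory());
        AsynchronousServerSocketChannel server =
                AsynchronousServerSocketChannel.open(group).bind(new InetSocketAddress(5000));

        server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
            @Override
            public void completed(AsynchronousSocketChannel client, Void att) {
                server.accept(null, this); // keep accepting further clients
                ByteBuffer buffer = ByteBuffer.allocate(1024);
                // One independent read (and callback) per client connection
                client.read(buffer, buffer, new CompletionHandler<Integer, ByteBuffer>() {
                    @Override
                    public void completed(Integer bytesRead, ByteBuffer buf) {
                        buf.flip();
                        // ... process the message, e.g. put it on a BlockingQueue ...
                        buf.clear();
                        client.read(buf, buf, this); // re-arm the read for the next message
                    }
                    @Override
                    public void failed(Throwable exc, ByteBuffer buf) { /* handle error */ }
                });
            }
            @Override
            public void failed(Throwable exc, Void att) { /* handle error */ }
        });
        Thread.currentThread().join(); // keep the main thread alive; handlers run on the group's pool
    }
}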
I created a simple networking library with NIO.2, which I think would help you: https://github.com/jhg023/SimpleNet
I have a Java application named 'X'. In a Windows environment, at a given point in time there might be more than one instance of the application.
I want a common piece of code to be executed sequentially in application 'X' no matter how many instances of the application are running. Is that possible, and can it be achieved? Any suggestions will help.
Example: I have a class named Executor where a method execute() will be invoked. Assuming there might be two or more instances of the application at any given point in time, how can I have the method execute() run sequentially across the different instances?
Is there something like a lock which can be accessed from both instances to see whether it is currently held or not? Any help?
I think what you are looking for is a distributed lock (i.e. a lock which is visible and controllable from many processes). There are quite a few 3rd party libraries that have been developed with this in mind and some of them are discussed on this page.
Distributed Lock Service
There are also some other suggestions in this post which use a file on the underlying system as a synchronization mechanism.
Cross process synchronization in Java
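As a minimal sketch of the file-based approach (the lock-file path is an assumption, and Executor is the class from the question): FileChannel#lock blocks until no other process holds the lock, so execute() runs sequentially across instances.

import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class CrossProcessRunner {
    public static void main(String[] args) throws Exception {
        // Every instance of application 'X' must point at the same lock file (assumed path)
        try (RandomAccessFile raf = new RandomAccessFile("C:/temp/executor.lock", "rw");
             FileChannel channel = raf.getChannel();
             FileLock lock = channel.lock()) { // blocks until the other instance releases it
            new Executor().execute();          // only one JVM at a time gets here
        }                                      // lock released when the channel/file is closed
    }
}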
To my knowledge, you cannot do this that easily. You could implement TCP calls between processes... but I wouldn't advise it.
You would be better off creating an external process in charge of executing the task, and requesting each task to execute by sending a message to a JMS queue that your executor process would consume.
...Or maybe you don't really need several processes running at the same time; what you might require is just one application with several threads performing things at the same time, with one thread dedicated to the Executor. That way, synchronizing the execute() method (or the whole Executor) would be enough and spare you some time.
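For that single-process variant, a minimal sketch (Executor is the class from the question; the task body is hypothetical):

public class Executor {
    // synchronized ensures only one thread in this JVM runs execute() at a time
    public synchronized void execute() {
        // ... the work that must not run concurrently ...
    }
}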
You cannot achieve this with Executors or anything like that because Java virtual machines will be separate.
If you really need to synchronize between multiple independent instances, one approach would be to dedicate an internal port and implement a simple internal server within the application. Look into ServerSocket, or into RMI for a full-blown solution if you need extensive communication. The first instance binds to the dedicated application port and becomes the master node. All later instances find the application port taken, but can then use it to make an HTTP (or just TCP/IP) call to the master node reporting the activities they need done.
As you only need to execute some action sequentially, any slave node may ask the master to do this rather than executing it itself.
A potential problem with this approach is that if the user shuts down the master node, it may be complex to implement how another running node takes its place. If only one node is active at any time (receiving input from the user), it may take over the role of master node after discovering that the master is not responding and the port is no longer occupied.
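A minimal sketch of the port-binding part (the port number is an arbitrary assumption, and the slave-to-master protocol is simplified to a single line of text):

import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class MasterElection {
    private static final int APP_PORT = 52690; // arbitrary dedicated application port

    public static void main(String[] args) throws IOException {
        try {
            // The first instance succeeds here and becomes the master
            ServerSocket server = new ServerSocket(APP_PORT, 50, InetAddress.getLoopbackAddress());
            System.out.println("I am the master, listening for slave requests");
            // ... accept() loop handling requests from slave instances ...
        } catch (IOException portTaken) {
            // Port already bound: another instance is the master, so ask it to run the task
            try (Socket toMaster = new Socket(InetAddress.getLoopbackAddress(), APP_PORT)) {
                toMaster.getOutputStream().write("please run execute()\n".getBytes());
            }
        }
    }
}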
A distributed queue could be used for this type of load-balancing. You put one or more 'request messages' into a queue, and the next available consumer application picks one up and processes it. Each such request message could describe your task to process.
This type of queue could be implemented as a JMS queue (e.g. using ActiveMQ http://activemq.apache.org/), or on Windows there is also MSMQ: https://msdn.microsoft.com/en-us/library/ms711472(v=vs.85).aspx.
If performance is an issue and you have C/C++ developers available, the 'shared memory queue' could also be interesting: shmemq API
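A minimal sketch of the producer side of that JMS approach with ActiveMQ (the broker URL and queue name are assumptions; a single consumer process would read from the same queue and execute the tasks one at a time):

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TaskSubmitter {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("executor.tasks");       // assumed queue name
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("run execute()")); // one message per task
        } finally {
            connection.close();
        }
    }
}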
I have 2 Java processes. Process1 is responsible for importing some external data into the database; Process2 runs the rest of the application using the same database, i.e. it hosts the web module and everything else. Process1 would normally import data once a day.
What I require is that when Process1 has finished its work, it should notify Process2 about it, so that it can perform some subsequent tasks. That is it; this is the limit of their interaction with each other. No other data has to be shared later.
Now, I know I can do this in one of the following ways:
Have Process1 write an entry in the database when it has finished its execution and have a daemon thread in Process2 polling for that entry. Once this entry is read, complete the task in Process2. Even though this might be the easiest to implement in the existing ecosystem, I think having a thread poll the database just for one notification looks kind of ugly. However, it could be optimised by starting the thread only when the import job starts and killing it after the notification is received.
Use a socket. I have never worked with sockets before, so this might be an interesting learning curve. But after my initial reading I am afraid it might be overkill.
Use RMI
I would like to hear from people who have worked on similar problems: what approach did they choose, and why? I would also like to know what would be an appropriate solution for my problem.
Edit.
I went through this, but found that for someone starting out with interprocess communication it lacks basic examples. That is what I am looking for in this post.
I would say take a look at Chronicle-Queue.
It uses a memory-mapped file and saves data off-heap (so there is no GC pressure). It also provides TCP replication for failover scenarios.
It scales pretty well and supports distributed processing when more than one machine is available.
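A rough sketch of the notification with Chronicle Queue (the directory path and message text are assumptions, and the method names follow the Chronicle Queue 5.x examples, so they may differ in other versions):

import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.ExcerptTailer;

public class ImportNotification {
    private static final String QUEUE_DIR = "shared/notify-queue"; // assumed shared directory

    // Called by Process1 when the import has finished
    public static void notifyDone() {
        try (ChronicleQueue queue = ChronicleQueue.singleBuilder(QUEUE_DIR).build()) {
            queue.acquireAppender().writeText("import-finished");
        }
    }

    // Run by a small thread in Process2, waiting for the notification
    public static void awaitDone() throws InterruptedException {
        try (ChronicleQueue queue = ChronicleQueue.singleBuilder(QUEUE_DIR).build()) {
            ExcerptTailer tailer = queue.createTailer();
            while (tailer.readText() == null) {
                Thread.sleep(100); // nothing yet; back off briefly
            }
            // ... trigger the subsequent tasks now that Process1 is done ...
        }
    }
}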
Main aim: minimize the execution time of the process.
I want to create a system process running some program, and reuse it.
For example
command = "/client.exe -ip=127.0.0.1 -port=1234" + somecommand
execute it
Process(command).lineStream.mkString
The execution is very slow.
How can I run client.exe once and reuse this process, just sending new commands to the existing client.exe process each time?
Any ideas how to increase the speed of execution?
Thanks.
What you want is actually interprocess communication and/or remote procedure call. You can use several methods to achieve this. Some of them are:
Using REST/HTTP; spray is probably the simplest and best solution for this.
Using Akka; Akka supports remote actors, which means you can spawn an actor in the main process, access it from other processes, and send/receive messages.
If you are on a *nix system you can use raw sockets.
Use a message queue; check out RabbitMQ.
If client.exe executes sequentially and is designed to quit after the work is done, then you can't do much. The executable should be written to handle interprocess communication.
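If client.exe could be changed to (or already does) read commands from its standard input, a minimal sketch of reusing a single long-lived process might look like this, here in Java (the original snippet is Scala, but the same idea applies; the flags and the 'quit' command are assumptions):

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;

public class ReusableClient {
    public static void main(String[] args) throws Exception {
        // Start client.exe once; it is assumed to read commands from stdin
        Process process = new ProcessBuilder("/client.exe", "-ip=127.0.0.1", "-port=1234").start();
        BufferedWriter stdin = new BufferedWriter(new OutputStreamWriter(process.getOutputStream()));
        BufferedReader stdout = new BufferedReader(new InputStreamReader(process.getInputStream()));

        // Send a command and read one line of response, without paying the start-up cost again
        stdin.write("somecommand");
        stdin.newLine();
        stdin.flush();
        System.out.println(stdout.readLine());

        stdin.write("quit");   // assumed shutdown command
        stdin.newLine();
        stdin.flush();
        process.waitFor();
    }
}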
This is more of a design question. I have the following implementation:
Multiple Client connections -----> Server ------> Corresponding DB conns
The client/server communication is done using web sockets. It's a single-threaded application currently. Evidently, this design does not scale, as the load on the server is too high and the response back to the clients takes too long.
Back-end operations involve handling large amounts of data.
My question: is it a good idea to create a new thread for every web socket connection? This would imply 500 threads for 500 clients (the number of web sockets would be the same whether it's multi-threaded or single-threaded). This would ease the load on the server and hence make life a lot easier.
or
Is there better logic to attain scalability? One option could be to create threads based on the merit of the job and have the rest processed by the main thread. This somehow seems to lead back to the same problem again in the future.
Any help here would be greatly appreciated.
There are two approaches to this kind of problem:
one thread per request
a fixed number of threads to manage all requests
Actually, you are using the second approach, but with only 1 thread.
You can improve it by using a pool of threads to handle your requests instead of only one.
The number of threads to use for the second approach depends on your application. If you make heavy use of the CPU and have a certain number of long I/O operations (reads or writes to disk or the network), you can increase this number.
If you have no I/O operations, the number of threads should be closer to the number of CPU cores.
Note: existing web servers use these two approaches for HTTP requests. Just as an example, Apache uses the first (one thread per request) and Node.js uses the second (it is event-driven).
In any case, use a timeout mechanism to unblock very long requests before the server crashes.
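A minimal sketch of the thread-pool approach (the pool sizing heuristic and the handleConnection Runnable are assumptions; in practice you would tune the size based on the CPU/I-O mix described above):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConnectionDispatcher {
    // Fixed pool: a bounded number of worker threads serves all web socket connections
    private final ExecutorService workers = Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors() * 2); // assumed sizing heuristic

    // Called by the single accept/selector thread for each incoming message
    public void dispatch(Runnable handleConnection) {
        workers.submit(handleConnection); // heavy back-end work runs off the main thread
    }

    public void shutdown() {
        workers.shutdown();
    }
}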
You can have a look at two very good scalable web servers, Apache and Node.js.
Apache, when operating in multi-threaded (worker) mode, will create new threads for new connections (note that requests from the same browser are served from the same thread, via keep-alive).
Node.js is vastly different, and uses an asynchronous workflow by delegating tasks.
Consequently, Apache scales very well for computationally intensive tasks, while Node.js scales well for huge numbers of small, event-based requests.
You mention that you do some heavy tasks on the backend. This means that you should create multiple threads. How? Create a thread queue, with a MAX_THREADS limit, and a MAX_THREADS_PER_CLIENT limit, serving repeated requests by a client using the same thread. Your main thread must only spawn new threads.
If you can, incorporate some good Node.js features as well. If some task on a thread is taking too long, kill that thread, with a callback for the task to create a new one when the job is done. You can even benchmark, or train a NN, to find out when to do this!
Have a blast!
I am writing a program in Java where I have opened 256 network connections on one thread. Whenever there is any data on a socket, I should read it and process it. Currently, I am using the following approach:
while true
do
    iterate over all network connections
    do
        if non-blocking read on socket says there is data
        then
            read data
            process it
        endif
    done
    sleep for 10 milli-seconds
done
Is there a better way to do the same in Java? I know there is a poll method in C/C++, but after googling for it I did not get a concrete idea of Java's equivalent. Can somebody explain this?
The java.nio package sounds right for what you want to do. It provides non-blocking, selector-based I/O: you register all your channels with a single Selector and block on select() until one of them is ready, instead of sleeping and polling each socket in turn.
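A minimal sketch of that Selector-based loop (assuming the 256 connections are SocketChannels; connection setup and the actual processing are omitted):

import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.List;

public class SelectorLoop {
    public static void run(List<SocketChannel> channels) throws Exception {
        Selector selector = Selector.open();
        for (SocketChannel channel : channels) {
            channel.configureBlocking(false);
            channel.register(selector, SelectionKey.OP_READ); // interested in readable events
        }
        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select(); // blocks until at least one channel is readable - no sleep/poll loop
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isReadable()) {
                    SocketChannel channel = (SocketChannel) key.channel();
                    buffer.clear();
                    int read = channel.read(buffer);
                    if (read > 0) {
                        buffer.flip();
                        // ... process the bytes in buffer ...
                    } else if (read == -1) {
                        key.cancel();
                        channel.close(); // remote side closed the connection
                    }
                }
            }
        }
    }
}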
Take a look at http://netty.io/ (this is a non-blocking framework for building network applications in Java). https://community.jboss.org/wiki/NettyExampleOfPingPongUsingObject - a 'hello world' on Netty.