Java Inter-thread communication in socket programming issues

I started working on Java socket programming. I have already made the following apps:
1. client sends a message to the server and the server responds on receiving it
2. client and server chat, like a point-to-point chat
Now I want to develop an application in which, whenever a request arrives at the server, it will spawn a thread for it. But the problem is, I am unable to identify the threads. If one thread is sending a message, how can the server tell that the message came from this client and has to be forwarded to that other client? How can two clients (actually their threads) communicate?
I searched on this and found the synchronized keyword, and I know its use; I also know about wait(), notify() and notifyAll(), but I am still unable to provide communication between them.
Please point me in the right direction if I am doing something wrong, or if I need to know about some concepts before jumping into this one.
TIA

You cannot pass data to threads, you can only pass data to a data structure they are reading.
I suggest you use this simple pattern using two threads per connection and an output queue. These threads are created when the connection is accepted and run until the connection is closed. You can use a thread pool if you have short lived connections.
Each connection has a reader thread which reads from the socket using blocking IO until the connection is closed. This reader thread also processes the work on the client's behalf.
Another thread for each connection takes messages from a BlockingQueue and writes them to the socket.
When a user connects, they pass a unique token saying who they are, and this is stored in a ConcurrentMap<String, BlockingQueue> where the BlockingQueue is the output queue for that connection.
This way, whenever a connection sends a message destined for a particular user, you add it to the queue associated with that user.
You can reduce this model to use fewer threads, e.g. with Selectors you only need one thread, but this is much more complex.
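A minimal sketch of this pattern (the class name, message format and error handling are simplified for illustration, not a definitive implementation):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.LinkedBlockingQueue;

    public class ConnectionRegistry {

        // token -> output queue of the connection belonging to that user
        private static final ConcurrentMap<String, BlockingQueue<String>> outQueues =
                new ConcurrentHashMap<>();

        // Called once per accepted connection, after the client has sent its token.
        static void handleConnection(Socket socket, String token) throws IOException {
            BlockingQueue<String> outQueue = new LinkedBlockingQueue<>();
            outQueues.put(token, outQueue);

            // Writer thread: takes messages destined for this user and writes them to the socket.
            Thread writer = new Thread(() -> {
                try (PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                    while (!socket.isClosed()) {
                        out.println(outQueue.take()); // blocks until a message arrives
                    }
                } catch (IOException | InterruptedException ignored) {
                }
            });
            writer.start();

            // Reader thread (here: the current thread) reads requests and routes them.
            try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    // assumed message format "recipientToken:text" -- purely illustrative
                    String[] parts = line.split(":", 2);
                    BlockingQueue<String> target = parts.length == 2 ? outQueues.get(parts[0]) : null;
                    if (target != null) {
                        target.put(parts[1]); // hand the message to the recipient's writer thread
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                outQueues.remove(token);
            }
        }
    }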

Related

handling a lot of sockets in a server efficiently

I have to write a program in which clients send the server some number and wait for its response, another random number. It works infinitely: send a number, wait for the response, and so on...
I would like to write a server which gets a lot of connections (and creates sockets). How can I do that in an efficient way (without creating a thread for every socket created)?
Is it better to open and close sockets for every request and response?
Is there a way to send an answer over a socket when I don't know which one is the right socket, but I know that all the sockets start from the same client computer and I know the source port of the client?
(I thought about making a socket array.)
How can I do that in an efficient way (without creating a thread for every socket created)?
You are assuming without proof that a new thread per socket is inefficient. It isn't.
Is it better to open and close sockets for every request and response?
No. Take a look at the history of HTTP. The major change between 1.0 and 1.1 was the introduction of persistent connections, which was done regardless of server-side architectures.
Is there a way to send answer over a socket when I don't know which one is the right socket
I don't understand how that situation could possibly arise. The answer only makes sense in the context of a specific session, which is associated with a specific socket. If you aren't retaining that information you should be. It's just a data structure problem.
but I know that all the sockets start from the same client computer and I know the source port of the client (I thought about making a socket array)
If you can remember the source port you can remember the socket itself. Again, this is just a data structure problem. And there is no need for the assumption/constraint that all connections are from the same client. And unless that client is multi-threaded there is no need for multiple connections from it at all.
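For example, a hypothetical sketch of that data structure, keyed by whatever identifies the session:

    import java.io.IOException;
    import java.net.Socket;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    // Hypothetical registry: the "data structure problem" reduced to a map lookup.
    public class SessionRegistry {

        private final ConcurrentMap<String, Socket> sessions = new ConcurrentHashMap<>();

        // Remember the socket when the connection is accepted / the client identifies itself.
        public void register(String clientId, Socket socket) {
            sessions.put(clientId, socket);
        }

        // Later, send the answer over exactly the socket that belongs to that session.
        public void sendTo(String clientId, byte[] answer) throws IOException {
            Socket socket = sessions.get(clientId);
            if (socket != null) {
                socket.getOutputStream().write(answer);
                socket.getOutputStream().flush();
            }
        }
    }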

Java Threading best practice for Sessions

Hello #all on StackOverflow,
I am currently developing a server-client application which communicates over HTTPS and performs some tasks which have to run in a separate thread on the server as well as on the client.
I am not really concerned about thread efficiency on the client side.
The normal server task looks like this:
1. HTTPS server receives a login request.
2. Server opens one long-polling thread for communication.
3. Server receives instructions to open a socket.
4. Server opens a client socket and a thread to read from it.
5. Server receives a message to close the socket.
6. The client-socket thread should now wait.
Besides: the long-polling thread should wait() as long as it has not received any data from the socket thread.
So in most cases one user can have multiple sockets on the server side, so one session consists of:
LongpollingThread <1---1> USER <1---0..5> Socket
My question now is: what is the best practice to get decent scalability?
Is it better to write a permanent thread which has a while loop inside?
Or is it better to write tasks which run on a thread pool and die after one I/O cycle?
Can't find a good answer online.
Maybe it is too specific...
Thanks in advance
Bladerox
I think you should use some kind of servlet engine or application server. Lots of your problems will be solved there, e.g. using async servlet processing will help you in your server component.
On the client side: did you have a look at the multicast support in java.nio?
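If you go the servlet route, a minimal long-polling sketch using Servlet 3.0 async processing might look like this (the servlet name, URL pattern and queue are made up for illustration); the container's request thread is released while you wait for data from the socket thread:

    import java.io.IOException;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/poll", asyncSupported = true)
    public class LongPollServlet extends HttpServlet {

        // Hypothetical queue that the per-user socket thread fills with incoming data.
        private final BlockingQueue<String> incoming = new LinkedBlockingQueue<>();
        private final ExecutorService executor = Executors.newCachedThreadPool();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            AsyncContext ctx = req.startAsync();  // the container's request thread is released here
            ctx.setTimeout(30_000);               // give up after 30 s, the client simply re-polls
            executor.submit(() -> {
                try {
                    // Wait for the socket thread to hand over data instead of wait()/notify().
                    String data = incoming.poll(25, TimeUnit.SECONDS);
                    ctx.getResponse().getWriter().write(data == null ? "" : data);
                } catch (InterruptedException | IOException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    ctx.complete();
                }
            });
        }
    }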

Difference between using same socket or different sockets for multiple connection

I am facing a problem regarding designing my app with datagram sockets. My app needs to communicate with different servers using UDP connections. Now I am not sure which of the following will be better. Is there any advantage to either of them (by performance or by other measures), or is there a better option?
Option 1
Create a single DatagramSocket and a single thread to receive data from it. When sending to different servers, set the address on the datagram packets, and in the receiving thread check the sender address and process the data accordingly.
Option 2
Create different datagram sockets to communicate with the servers. Use socket.connect() to connect to the relevant server, and create a thread for every socket to receive data.
N.B. I am actually working on an Android app. If you have any questions you can ask in a comment.
Unless we are talking about hundreds of thousands of connections, I would create a single socket per thread. It speeds up the application, guarantees the thread safety of the sockets, and ensures that received data won't get mixed up.
Most importantly, however, if one channel fails or its latency gets high, it will have no influence on the other channels (sockets).
The drawback is that you consume more resources.
It all depends on the purpose of the app.
My opinion is that you can also go with a single socket, because creating more sockets will bring your app down.
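For reference, a rough sketch of Option 1 from the question (one DatagramSocket, one receive thread that checks the sender address; the server address and port here are placeholders):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    public class SingleSocketClient {

        public static void main(String[] args) throws Exception {
            DatagramSocket socket = new DatagramSocket(); // one socket shared for all servers

            // Receive thread: look at the sender address and dispatch accordingly.
            Thread receiver = new Thread(() -> {
                try {
                    while (true) {
                        byte[] buf = new byte[1500];
                        DatagramPacket packet = new DatagramPacket(buf, buf.length);
                        socket.receive(packet); // blocks until any server answers
                        InetAddress from = packet.getAddress();
                        String data = new String(packet.getData(), 0, packet.getLength(), StandardCharsets.UTF_8);
                        System.out.println("from " + from + ": " + data); // branch per server here
                    }
                } catch (Exception e) {
                    // socket closed
                }
            });
            receiver.start();

            // Sending: set the destination address on each outgoing packet.
            byte[] msg = "ping".getBytes(StandardCharsets.UTF_8);
            socket.send(new DatagramPacket(msg, msg.length, InetAddress.getByName("203.0.113.10"), 5000));
        }
    }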

Java thread pooled server using blocking I/O

I have implemented a server in Java, upon receiving data from some client it simply forwards the data to all other clients (including the sender). I'm happy with my OO-design, I wrap all sockets in classes that provide 'callbacks'. These are called when some data are ready (or when the socket closes) -- using this design I could easily implement a simple TLV protocol to atomically send packets: the callback is not called until a full packet is received.
Now, I use blocking I/O calls from the java.io package on the socket streams (and make them appear 'asynchronous' through those callbacks). So I use threads inside my socket wrapper classes: when a socket is opened, that function returns a Runnable implementation that, when run, does the blocking calls to the InputStream, buffers data and eventually calls the callback.
=> In a client application, I simply launch this Runnable in a Thread instance, because it's just one thread.
=> In my server, I submit all Runnable implementations I get upon creating new sockets (i.e. when accepting new clients) into a ThreadPoolExecutor. (FYI: the callbacks of the sockets simply put the received packets in a BlockingQueue. A single, separate (non-pooled) "dispatcher" Thread instance constantly takes the packets from this queue and writes them to all sockets currently connected to the server.)
QUESTION: This all works great, however I'm unsure about my use of the ThreadPoolExecutor, because the Runnable instances submitted are almost always blocking. Will the ThreadPoolExecutor react to this? Or will the pooled threads simply block? Because if all pooled threads are blocking while executing their Runnable and then a new Runnable is submitted, what happens? Suspend the new Runnable? That's not good, because then the newly connected client will have zero responsiveness until some older client disconnects. If by contrast the thread pool chooses to spawn a new thread to handle the Runnable, then I actually get a thread-per-client scenario.
I want the thread pool to 'preempt' the blocking threads and use them to handle other sockets, like an operating system that suspends I/O bound processes and doesn't schedule them again until their I/O is complete. Is that at all possible, or will I have to rewrite everything using nio in order to do this? (if nio is required, could you point out where I should start reading?)
Thanks in advance!
About the ThreadPoolExecutor: it depends. An Executors.newCachedThreadPool() will just create new threads for new Runnables. See also this question and the accepted answer. But you will end up with a thread-per-client scenario.
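To make the trade-off concrete, a small sketch (the reader Runnable is hypothetical): a cached pool grows as blocking readers are submitted, while a fixed pool parks new Runnables in its queue once all workers are blocked.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PoolChoice {

        public static void main(String[] args) {
            // Grows on demand: each submitted blocking socket reader gets its own thread,
            // which is effectively thread-per-client.
            ExecutorService cached = Executors.newCachedThreadPool();

            // Bounded: once all 8 threads are blocked inside readers, further Runnables
            // wait in the queue and those clients get no service until someone disconnects.
            ExecutorService fixed = Executors.newFixedThreadPool(8);

            // cached.submit(socketReaderRunnable); // hypothetical blocking reader Runnable
            // fixed.submit(socketReaderRunnable);

            cached.shutdown();
            fixed.shutdown();
        }
    }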
NIO prevents the thread-per-client scenario (if there are many clients sending relatively small messages with pauses in between; see also (the summary of) this article). I advise against trying to build your own NIO clone.
Implementing NIO from the ground up is not easy; a tutorial can be found here. It might be easier to use an NIO server like Netty.
Another alternative is to use a technology designed to handle many clients that send and receive small messages. It takes some time to learn and setup, but I managed to get a Tomcat WebSockets server talking with a Jetty WebSocket client pretty quickly. A rewrite to use this technology could be less work.

thread per request architecture implementation

I'm trying to build a multithreaded server with TCP connections that can talk to multiple clients and concurrently stream some data. I am using Java, Java IO and Java Thread libraries. I believe my implementation should be built as a 'thread per request' model. Any idea where I can kick-start this, or a tutorial you can point me to?
A thread per request model is quite simple to write, as multithreaded code goes. Basically what you need is:
A thread pool
A server socket
A main thread that dispatches worker threads
Set up the server socket to listen for requests. Have the main thread run a loop. The loop waits for a request to come in, then takes a thread from the thread pool to service the request.
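A minimal sketch of that loop might look like this (the port, pool size and handler are placeholders):

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ThreadPerRequestServer {

        public static void main(String[] args) throws IOException {
            ExecutorService pool = Executors.newFixedThreadPool(50);   // the thread pool
            try (ServerSocket serverSocket = new ServerSocket(8080)) { // the server socket
                while (true) {                                         // the main dispatch loop
                    Socket client = serverSocket.accept();             // wait for a request
                    pool.submit(() -> handle(client));                 // hand it to a worker thread
                }
            }
        }

        // Hypothetical per-request handler: read the request, write the streamed response.
        private static void handle(Socket client) {
            try {
                client.getOutputStream().write("hello\n".getBytes());
            } catch (IOException ignored) {
            } finally {
                try {
                    client.close();
                } catch (IOException ignored) {
                }
            }
        }
    }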
If you want to write an efficient server, use NIO. One thread per client is the old way; it's memory/CPU intensive.
See this one. This is a good place to start.
http://rox-xmlrpc.sourceforge.net/niotut/
Once you understand NIO and implement your server that way, you will be glad you did. In the past I converted a one-thread-per-client model to NIO and the performance gains were tremendous.
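For comparison, the core of a selector-based server needs only one thread for all clients; a bare-bones sketch (no partial read/write handling):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    public class NioEchoServer {

        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(8080));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buffer = ByteBuffer.allocate(1024);
            while (true) {
                selector.select(); // one thread waits on all channels at once
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) { // new client: register it for reads
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) { // data ready: read without blocking
                        SocketChannel client = (SocketChannel) key.channel();
                        buffer.clear();
                        int n = client.read(buffer);
                        if (n == -1) {
                            client.close();
                        } else {
                            buffer.flip();
                            client.write(buffer); // echo the data back
                        }
                    }
                }
            }
        }
    }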
