Handle multiple clients with one server - java

I am trying to connect to multiple clients using sockets in Java. Everything seems to work, but the problem is that the server only listens to the first client. If there are multiple clients, the server can send messages to all of them, but it only receives the messages that come from the first client. I have been testing this since yesterday, so I'm pretty sure the fault is in the class "ClientListener".
Explanation:
There is a list of clients (connections used to exchange Strings). In the GUI there is a list where I can choose which client I'd like to communicate with. If I change the client, the variable currentClient (an int) switches to another number.
networkClients is an ArrayList where all the different connections are stored.
The first connected client is exactly the same as the other clients; there is nothing special about it. It is addressed when the variable currentClient is set to 0 (the default). The variable switching works. As I said, all the clients respond if I send them an order, but only networkClients.get(0) is heard by the server (ClientListener).
class ClientListener implements Runnable {
    String request;

    @Override
    public void run() {
        try {
            // wait until at least one client has connected
            while (networkClients.size() < 1) {
                Thread.sleep(1000);
            }
            //*** I'm pretty sure that the problem is in this line
            while ((request = networkClients.get(currentClient).getCommunicationReader().readLine()) != null) {
            //***
                myFileList.add(new MyFile(request));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I hope someone can help me. I tried many things, but nothing worked.
EDIT: Like I wrote in the code example, is it possible that the while loop isn't able to pick up a change of "currentClient" (which is set by another thread)? I tested/simulated something similar in a test class, and the result was that a while loop can of course re-evaluate its condition: if a variable used in the condition changes, the new value is checked on every iteration.

You should take a look at multithreading.
Your server program should be made out of:
- The main thread
- A thread that handles new connections.
(Upon creating a new connection, start a new thread and pass the connection on to that thread)
- A thread for each connected client, listening to that client separately
Take a look at some examples like: (1) (2)
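A minimal sketch of that structure (not the asker's actual code): an accept loop that hands each new connection to its own listener thread. The port number and the echo logic are placeholders.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class MultiClientServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket serverSocket = new ServerSocket(5000)) {
            while (true) {
                // blocks until the next client connects
                Socket client = serverSocket.accept();
                // hand the new connection to its own listener thread
                new Thread(() -> listenTo(client)).start();
            }
        }
    }

    private static void listenTo(Socket client) {
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(client.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(client.getRemoteSocketAddress() + ": " + line);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}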

I found the solution:
The thread sits in the readLine() call I mentioned in the starting post (in the code snippet) and waits an unlimited time for a new response from the client.
So changing the index into the list "networkClients" won't do anything, because nothing happens there until the client sends a new order (which lets the thread continue).
So you need to implement a separate listener for each client.
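A hedged sketch of that fix, reusing names from the question (getCommunicationReader(), myFileList, MyFile, and a NetworkClient type standing in for whatever networkClients actually stores). Each connected client gets its own listener thread bound to its own reader, so no listener depends on currentClient:
class ClientListener implements Runnable {
    private final NetworkClient client; // the single client this listener is bound to

    ClientListener(NetworkClient client) {
        this.client = client;
    }

    @Override
    public void run() {
        try {
            String request;
            while ((request = client.getCommunicationReader().readLine()) != null) {
                myFileList.add(new MyFile(request));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

// started once per accepted connection, e.g.:
// new Thread(new ClientListener(newClient)).start();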

Related

RMI Service run similar to sockets

So if I have a socket server, I can accept each socket and pass it to an executor:
while (true) {
    Socket conn = socketServ.accept();
    Runnable task = new Runnable() {
        @Override
        public void run() {
            try {
                server.executor(conn);
            } catch (IOException e) {
                // handle/log the error
            }
        }
    };
    exec1.execute(task);
}
Doing this allows my server to handle each connection on its own thread instead of blocking the main thread. Because I also have a reference to that socket, called "conn", I can successfully return messages as well.
Now I have an RMI interface, which basically lets me call methods back and forth.
For example, if I had this method:
public MusicServerResponseImpl CreatePlayerlist(String Name, UserObjectImpl uo) throws RemoteException {
    MusicServerResponseImpl res = new MusicServerResponseImpl();
    return res;
}
which returns a serializable object. My concern is that when this method gets called, I think it is going to run on the main thread of the server, and thus will block that thread and reduce parallelism.
What I think is the solution is to have every single RMI method also create a task for an executor, to speed up the execution of everything. The issue I am seeing, however, is that unlike the socket case, where I have an object to send information back through, I am unsure how I would return a response from the RMI method without somehow having to block the thread.
Does that make sense? Basically I am asking how I can execute RMI methods in parallel while still being able to return results!
Thanks for the help!
Does that make sense?
No. Concurrent calls are natively supported.
See this documentation page and look for the property named maxConnectionThreads.
You could also have tested your assumption by, for example, printing the current thread name in your server code, executing concurrent calls, and seeing what happens.
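For instance, a sketch of that test, using the method from the question (the MusicServer remote interface name is assumed; only the parameter and return types come from the post). Calling it from several clients at once and comparing the printed thread names shows whether the calls run concurrently:
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

public class MusicServerImpl extends UnicastRemoteObject implements MusicServer {

    protected MusicServerImpl() throws RemoteException {
    }

    @Override
    public MusicServerResponseImpl CreatePlayerlist(String Name, UserObjectImpl uo)
            throws RemoteException {
        // shows which thread the RMI runtime dispatched this call on
        System.out.println("CreatePlayerlist running on " + Thread.currentThread().getName());
        return new MusicServerResponseImpl();
    }
}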

How do I keep track of requests issued/completed without the use of additional state variables? (Java/Grpc)

I am using the StreamObserver class found in the grpc-java project to set up some bidirectional streaming.
When I run my program, I make an undetermined number of requests to the server, and I only want to call onCompleted() on the requestObserver once I have finished making all of the requests.
Currently, to solve this, I am using a variable "inFlight" to keep track of the requests that have been issued, and when a response comes back, I decrement "inFlight". So, something like this.
// issuing requests
while (haveRequests) {
    MessageRequest request = mkRequest();
    this.requestObserver.onNext(request);
    this.inFlight++;
}

// response observer
StreamObserver<Message> responseObserver = new StreamObserver<Message>() {
    @Override
    public void onNext(Message response) {
        if (--inFlight == 0) {
            requestObserver.onCompleted();
        }
        // work on message
    }
    // other methods
};
A bit pseudo-codey, but this logic works. However, I would like to get rid of the "inFlight" variable if possible. Is there anything within the StreamObserver class that allows this sort of functionality, without the need of an additional variable to track state? Something that would tell the number of requests issued and when they completed.
I've tried inspecting the object within the intellij IDE debugger, but nothing is popping out to me.
To answer your direct question, you can simply call onCompleted() after the while loop, once all the messages have been passed to onNext(). Under the hood, gRPC will send what is called a "half close", indicating that it won't send any more messages, but it is willing to receive them. Specifically:
// issuing requests
while (haveRequests) {
    MessageRequest request = mkRequest();
    this.requestObserver.onNext(request);
}
// no counter needed: half-close once everything has been sent
requestObserver.onCompleted();
This ensures that all requests are sent, and in the order that you sent them. On the server side, when it sees the corresponding onCompleted callback, it can half-close its side of the connection by calling onCompleted on its observer. (There are two observers on the server side: one for receiving info from the client, one for sending info.)
Back on the client side, you just need to wait for the server to half close to know that all messages were received and processed. Note that if there were any errors, you would get an onError callback instead.
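For illustration, a rough sketch of that server side. The service base class and message types (MessageServiceGrpc, MessageRequest, Message, the exchange method) are placeholders for whatever your .proto generates; only the observer wiring is the point here:
import io.grpc.stub.StreamObserver;

public class MessageServiceImpl extends MessageServiceGrpc.MessageServiceImplBase {

    @Override
    public StreamObserver<MessageRequest> exchange(final StreamObserver<Message> responseObserver) {
        return new StreamObserver<MessageRequest>() {
            @Override
            public void onNext(MessageRequest request) {
                // handle the request, replying on the same stream if needed
                responseObserver.onNext(handle(request));
            }

            @Override
            public void onError(Throwable t) {
                // the client failed or cancelled; nothing more to send
            }

            @Override
            public void onCompleted() {
                // the client half-closed: no more requests will arrive,
                // so finish our side once all responses have been sent
                responseObserver.onCompleted();
            }
        };
    }

    private Message handle(MessageRequest request) {
        return Message.getDefaultInstance(); // placeholder processing
    }
}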
If you don't know how many requests you are going to make on the client side, you might consider using an AtomicInteger, and call decrementAndGet when you get back a response. If the return value is 0, you'll know all the requests have completed.
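A hedged sketch of that counter approach, written as fields and methods to add to the client class from the question (requestObserver, haveRequests and mkRequest are reused from the post). The doneSending flag is an extra safeguard, not mentioned above, so the counter transiently hitting zero mid-send doesn't complete the stream too early:
import java.util.concurrent.atomic.AtomicInteger;

private final AtomicInteger inFlight = new AtomicInteger();
private volatile boolean doneSending = false;

void sendAll() {
    while (haveRequests) {
        MessageRequest request = mkRequest();
        inFlight.incrementAndGet();
        requestObserver.onNext(request);
    }
    doneSending = true;
    maybeComplete();
}

// call this from responseObserver.onNext(...)
void onResponse(Message response) {
    inFlight.decrementAndGet();
    maybeComplete();
}

private void maybeComplete() {
    // half-close only after everything was sent and every response came back
    if (doneSending && inFlight.get() == 0) {
        requestObserver.onCompleted();
    }
}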

Java - can two threads on client side use the same input stream from server?

I'm working on a Java client/server application with a pretty specific set of rules as to how I have to develop it. The server creates a ClientHandler instance that has input and output streams to the client socket, and any input and output between them is triggered by events in the client GUI.
I have now added in functionality server-side that will send out periodic updates to all connected clients (done by storing each created PrintWriter object from the ClientHandlers in an ArrayList<PrintWriter>). I need an equivalent mechanism client-side to process these messages, and have been told this needs to happen in a second client-side thread whose run() method uses a do...while(true) loop until the client disconnects.
This all makes sense to me so far, what I am struggling with is the fact that the two threads will have to share the one input stream, and essentially 'ignore' any messages that aren't of the type that they handle. In my head, it should look something like this:
Assuming that the server precedes every message with a boolean: true for a message to all clients, false for a message to an individual client...
Existing Client Thread
//method called from actionPerformed(ActionEvent e)
//handles server response to bid request
public void receiveResponse()
{
    //thread should only process to-specific-client messages
    if (networkInput.nextBoolean() == false)
    {
        //process server response...
    }
}
Second Client-side Thread
//should handle all messages sent to all clients
public void run()
{
    do {
        if (networkInput.nextBoolean() == true)
        {
            //process broadcast message...
        }
    } while (true);
}
As they need to use the same input stream, I would obviously be adding some synchronized, wait/notify calls, but generally, is what I'm looking to do here possible? Or will the two threads trying to read in from the same input stream interfere with each other too much?
Please let me know what you think!
Thanks,
Mark
You can do it, though it will be complicated to test and get right. How much is "too much" depends on you. A simpler solution is to have a reader thread pass messages to the two worker threads.
ExecutorService thread1 = Executors.newSingleThreadExecutor();
ExecutorService thread2 = Executors.newSingleThreadExecutor();

while (running) {
    Message message = input.readMessage();
    if (message.isTypeOne())
        thread1.submit(() -> process(message));
    else if (message.isTypeTwo())
        thread2.submit(() -> process(message));
    else {
        // do something else.
    }
}
thread1.shutdown();
thread2.shutdown();

How do I properly mimic pass-by-reference in Java RMI with serializable objects and a callback method?

I have a distributed application using Java RMI and a main object (CoreApplication) that implements java.io.Serializable. Every minute, this main object is sent to a remote computer and processed on that JVM's thread pool. Since it's asynchronous, the object is processed without blocking the main thread on the Master computer.
When the CoreApplication object is finished processing on the remote thread, it invokes a call back method and is sent back to the main computer.
Here is the code on the remote machine that processes a job invoked from the Master computer via RMI, the sendJob method:
public void sendJob(final CoreApplication aJob) throws RemoteException {
    Runnable r = new Runnable() {
        public void run() {
            try {
                WorkResponse wr = aJob.process();
                client.coreApplicationHandler(aJob, wr);
            } catch (RemoteException e) {
                // log/handle the callback failure
            }
        }
    };
    workQueue.execute(r);
}
You can see that client.coreApplicationHandler is the callback method to the main server; it sends the CoreApplication object back, along with a response object.
Here is the coreApplicationHandler method code on the main machine
public void coreApplicationHandler(CoreApplication j, WorkResponse wr) {
    String ticker = j.getTickerSymbol();
    coreApplicationObjects.put(ticker, j);
    if (GlobalParameters.DEBUG_MODE) {
        System.out.println("WORK RESPONSE IS " + wr.getMessage());
    }
}
My question is: is replacing the CoreApplication object each time in the callback method the best way to make sure it's up to date for the next minute it's sent? CoreApplication is fluid and changes, and its state must be preserved. I am sending it back to the Master computer so its state can be monitored from a central location. If I had 100 computation nodes and they didn't return their objects, I think it would get really messy to keep track of them all.
It works pretty well so far, unless a job isn't processed by the time the next one is sent out, which results in sending a stale object with an old state (i.e., the same object as the previous minute). Please comment if this doesn't make sense and I will do my best to explain it.
RMI is not a way to synchronize objects across a cluster. But there are tools to do just that. Look at http://www.hazelcast.com/, for example.
If you have a cluster of computers and need synchronization then you need to use clustering via the server's tools or a third party tool.
I recommend hazelcast. It is very easy to use and will allow local clusters using fast UDP or WAN clusters using TCP socket to TCP socket.
For example Hazelcast will let you do something like this:
import com.hazelcast.core.MultiMap;
import com.hazelcast.core.Hazelcast;
import java.util.Collection;

// a live shared multimap, shared across all cluster nodes
MultiMap<String, Order> mmCustomerOrders = Hazelcast.getMultiMap("customerOrders");
mmCustomerOrders.put("1", new Order("iPhone", 340));

Thread.sleep(1000);

Collection<Order> orders = mmCustomerOrders.get("1");
Order order = orders.iterator().next();
System.out.println(order.quantity()); // 340 ?? Nobody knows, it might have been changed
What is the output? If a cluster member changed the item "1" in the map, then you will get that value automagically. No more coding necessary….
Hope it helps
-Alex

Design (Classes, methods, interfaces) of real-time applications (server/client)

I've been looking for a good book or article about this topic but didn't find much. I couldn't find a good example (a piece of code) for a specific scenario, like a client/server conversation.
In my application's protocol they have to send/receive messages. Like:
Server wants to send a file to a client,
the client can accept or not,
if it accepts, the server will send the bytes over the same connection/socket.
The rest of my application uses blocking methods.
Here's what I did:
Server method:
public synchronized void sendFile(File file)
{
    //send message asking if I can send a file
    //block on read, waiting for the client to respond
    //if client answers yes, start sending the bytes
    //else return
}
Client methods:
public void reciveCommand()
{
    //read/listen for a command from the socket
    //if it is a send-file command, handleSendFileCommand();
    //after handleSendFileCommand() returns, listen for another command
}
public void handleSendFileCommand()
{
    //get the file the server wants to send
    //check if the client already has the file
    //if it already has it, send a command to the socket saying so and return
    //else send a command saying the server can send the file
    //create a FileInputStream, receive the bytes and then return
}
I am 100% sure this is wrong because there is no way the server and clients would talk bidirectionally; I mean, when the server wants to send a command to a client, they have to follow a sequence of commands until that conversation is finished, and only then can they send/receive another sequence of commands. That's why I made all methods that send requests synchronized.
It didn't take me long to realize I need to study design patterns for that kind of application...
I read about the Chain of Responsibility design pattern, but I don't get how I could use it, or another suitable design pattern, in this situation.
I hope someone can help me with a code-like example.
Thanks in advance
The synchronized keyword in Java means something completely different - it marks a method or a code block as a critical section that only a single thread can execute at a time. You don't need it here.
Then, a TCP connection is bi-directional at the byte-stream level. The synchronization between the server and a client is driven by the messages exchanged. Think of the client (much the same applies to the server) as a state machine: some types of messages are acceptable in the current state, some are not, and some switch the node into a different state.
Since you are looking into design patterns, the State pattern is very applicable here.
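For example, a minimal sketch of the State pattern applied to the client side of the file-transfer exchange described above. The command strings (SEND_FILE, FILE_DONE, ACCEPT) are invented for illustration; only the "server offers a file, client accepts or refuses" flow comes from the question:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.PrintWriter;

interface ClientState {
    // handle one incoming command and return the state to use for the next one
    ClientState handle(String command, PrintWriter out);
}

class WaitingForCommand implements ClientState {
    @Override
    public ClientState handle(String command, PrintWriter out) {
        if (command.startsWith("SEND_FILE")) {
            out.println("ACCEPT");          // tell the server to start sending
            return new ReceivingFile();
        }
        return this;                        // ignore commands we don't expect in this state
    }
}

class ReceivingFile implements ClientState {
    @Override
    public ClientState handle(String command, PrintWriter out) {
        if (command.equals("FILE_DONE")) {
            return new WaitingForCommand(); // transfer finished, back to idle
        }
        // ...otherwise treat the line as file data and store it...
        return this;
    }
}

class ProtocolClient {
    // single read loop: the current state decides what each incoming line means
    void listen(BufferedReader in, PrintWriter out) throws IOException {
        ClientState state = new WaitingForCommand();
        String line;
        while ((line = in.readLine()) != null) {
            state = state.handle(line, out);
        }
    }
}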
