WebSockets and threaded server-side endpoint - Java

Kind of a "noob" problem here.
I have a small application to write (like a simple game) with a server side and a client side. It has to use WebSockets for communication. The server has a server class (with main() that starts the server) as well as a server endpoint class. However, the game is not turn based but real-time based, so the server has to do certain computations every "tick" because the playing field is dynamic.
I assume that threads would suit this case well, but I don't know how to combine threads with this kind of server.
As far as I can see, the only thing that can receive/send messages is the endpoint. If I make it implement Runnable and pause every 0.5 seconds, it won't accept messages during that pause. If I define a different class for that purpose, I have no idea how to start it inside the endpoint and give the two a way to communicate.
Does anyone have any suggestions/info/links/anything that may help?
Thank you in advance.

The server endpoint will continuously receive data from the client side. All you have to do is process that data in some other thread. You can define a different class for that purpose (a thread). This thread class will have two different queues.
In queue - to receive data from the endpoint
Out queue - to send data to the endpoint
(You can use ConcurrentLinkedQueue for that. More help: How to use ConcurrentLinkedQueue?)
Start this processing thread inside the endpoint. When the endpoint receives data, put it into the in queue. Continuously poll the out queue and send that data back to the client side.
Endpoint code
@OnMessage
public void onMessage(String message, Session peer) throws IOException {
    ProcessingThread t = new ProcessingThread(peer);
    t.inQueue.add(message);
    t.start();
    String s;
    // listen to the out queue and forward processed data to the client
    while (true) {
        while ((s = t.outQueue.poll()) != null) {
            peer.getBasicRemote().sendText(s);
        }
    }
}
ProcessingThread code
public class ProcessingThread extends Thread {
    public ConcurrentLinkedQueue<String> inQueue = new ConcurrentLinkedQueue<String>();
    public ConcurrentLinkedQueue<String> outQueue = new ConcurrentLinkedQueue<String>();
    private final Session peer;

    public ProcessingThread(Session peer) {
        this.peer = peer;
    }

    public void run() {
        // listen to the in queue and process
        // after processing, put the result into the out queue
    }
}
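As a rough idea, the run() method could drive the game loop like this (a minimal sketch under my own assumptions; the 0.5 s tick and the placeholder strings are illustrative only, not part of the original answer):
@Override
public void run() {
    while (!isInterrupted()) {
        String input;
        // drain whatever the endpoint queued since the last tick
        while ((input = inQueue.poll()) != null) {
            // apply the client's input to the game state here
        }
        // advance the dynamic field one tick, then queue the result for the endpoint
        outQueue.add("tick result goes here");
        try {
            Thread.sleep(500);   // assumed tick length of 0.5 s
        } catch (InterruptedException e) {
            return;   // stop the loop when the endpoint shuts the thread down
        }
    }
}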
Hope this will help :)

Related

Java - can two threads on client side use the same input stream from server?

I'm working on a Java client/server application with a pretty specific set of rules as to how I have to develop it. The server creates a ClientHandler instance that has input and output streams to the client socket, and any input and output between them is triggered by events in the client GUI.
I have now added server-side functionality that sends out periodic updates to all connected clients (done by storing each ClientHandler's PrintWriter in an ArrayList<PrintWriter>). I need an equivalent mechanism client-side to process these messages, and I have been told this needs to happen in a second client-side thread whose run() method uses a do...while(true) loop until the client disconnects.
This all makes sense to me so far, what I am struggling with is the fact that the two threads will have to share the one input stream, and essentially 'ignore' any messages that aren't of the type that they handle. In my head, it should look something like this:
Assuming that every message from server sends a boolean of value true on a message-to-all, and one of value false on a message to an individual client...
Existing Client Thread
//method called from actionPerformed(ActionEvent e)
//handles server response to bid request
public void receiveResponse()
{
    //thread should only process to-specific-client messages
    if (networkInput.nextBoolean() == false)
    {
        //process server response...
    }
}
Second Client-side Thread
//should handle all messages sent to all clients
public void run()
{
    do {
        if (networkInput.nextBoolean() == true)
        {
            //process broadcasted message...
        }
    } while (true);
}
As they need to use the same input stream, I would obviously be adding some synchronized, wait/notify calls, but generally, is what I'm looking to do here possible? Or will the two threads trying to read in from the same input stream interfere with each other too much?
Please let me know what you think!
Thanks,
Mark
You can do it, though it will be complicated to test and get right. How much is "too much" depends on you. A simpler solution is to have a reader thread pass messages to the two worker threads.
ExecutorService thread1 = Executors.newSingleThreadExecutor();
ExecutorService thread2 = Executors.newSingleThreadExecutor();
while (running) {
    Message message = input.readMessage();
    if (message.isTypeOne())
        thread1.submit(() -> process(message));
    else if (message.isTypeTwo())
        thread2.submit(() -> process(message));
    else {
        // do something else.
    }
}
thread1.shutdown();
thread2.shutdown();
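The Message type and input.readMessage() above are left abstract. A minimal sketch of what they could look like for the boolean-prefixed protocol the question describes (assuming networkInput is a java.util.Scanner and that the message body follows the flag on the same line; all names here are my own):
class Message {
    private final boolean broadcast;   // true = message to all clients, false = to this client only
    private final String body;

    Message(boolean broadcast, String body) {
        this.broadcast = broadcast;
        this.body = body;
    }

    boolean isTypeOne() { return broadcast; }    // broadcast message
    boolean isTypeTwo() { return !broadcast; }   // client-specific message
    String body()       { return body; }
}

class MessageReader {
    private final java.util.Scanner networkInput;   // the single shared stream from the server

    MessageReader(java.util.Scanner networkInput) {
        this.networkInput = networkInput;
    }

    // Only the reader thread ever touches networkInput, so no synchronization is needed.
    Message readMessage() {
        boolean broadcast = networkInput.nextBoolean();
        String body = networkInput.nextLine().trim();
        return new Message(broadcast, body);
    }
}
With a single reader like this, the GUI thread and the broadcast handler never touch the stream directly, which avoids the interference the question worries about.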

Handle multiple clients with one server

I'm trying to connect to multiple clients using sockets in Java. Everything seems to work, but the problem is that the server only listens to the first client. If there are multiple clients, the server can send them all messages, but it only hears the messages that come from the first client. I have tried everything I can think of (I've been at this problem since yesterday), so I'm pretty sure the fault is in the class "ClientListener".
Explanation:
There is a list of clients (connections used to exchange Strings). In the GUI there is a list where I can choose which client I'd like to communicate with. If I change the client, the variable currentClient (an int) switches to another number.
networkClients is an ArrayList where all the different connections are stored.
The first connected client is exactly the same as the other clients; there is nothing special about it. It is used when the variable currentClient is set to 0 (the default). The variable switching works. Like I said, all the clients respond if I send them an order, but only networkClients.get(0) is heard by the server (ClientListener).
class ClientListener implements Runnable {
    String request;

    @Override
    public void run() {
        try {
            while (networkClients.size() < 1) {
                Thread.sleep(1000);
            }
            //***I'm pretty sure that the problem is in this line
            while ((request = networkClients.get(currentClient).getCommunicationReader().readLine()) != null) {
            //***
                myFileList.add(new MyFile(request));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I hope someone can help me. I tried many things, but nothing worked.
EDIT: Like I wrote in the code example, is it possible that the while loop isn't able to pick up the change to currentClient (which is changed by another thread)? I tested something similar in a test class, and the result was that a while loop does of course see such an update (meaning that if a variable used in the while condition changes, the new value is checked on the next iteration).
You should take a look at multithreading.
Your server program should be made out of:
- The main thread
- A thread that handles new connections.
(Upon creating a new connection, start a new thread and pass the connection on to that thread)
- A thread for each connected client, listening to each client separately
Take a look at some examples like: (1) (2)
I found the solution:
The thread sits in the readLine() call I marked in the code snippet above and waits indefinitely for a new line from that client.
So changing the index via currentClient won't do anything, because nothing happens there until that particular client sends a new order (which lets the thread continue).
So you need to implement an extra listener for each client.
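A minimal sketch of that idea (my own, under assumptions: one ClientListener per connection, started where the server accepts the client; the element type of networkClients is not shown in the question, so NetworkClient below is a placeholder):
class ClientListener implements Runnable {
    private final NetworkClient client;   // one listener per connection

    ClientListener(NetworkClient client) {
        this.client = client;
    }

    @Override
    public void run() {
        try {
            String request;
            // each listener blocks on its own client's reader, so no client blocks the others
            while ((request = client.getCommunicationReader().readLine()) != null) {
                myFileList.add(new MyFile(request));   // the shared list must be thread-safe
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

// wherever the server accepts a new connection:
// networkClients.add(newClient);
// new Thread(new ClientListener(newClient)).start();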

Async NIO: Same client sending multiple messages to Server

Regarding Java NIO2.
Suppose we have the following to listen to client requests...
asyncServerSocketChannel.accept(null, new CompletionHandler<AsynchronousSocketChannel, Object>() {
    @Override
    public void completed(final AsynchronousSocketChannel asyncSocketChannel, Object attachment) {
        // Run the completion handler on another thread so that
        // we don't block another channel being accepted.
        executer.submit(new Runnable() {
            public void run() {
                handle(asyncSocketChannel);
            }
        });
        // accept the next connection.
        asyncServerSocketChannel.accept(null, this);
    }

    @Override
    public void failed(Throwable exc, Object attachment) {
        // TODO Auto-generated method stub
    }
});
This code will accept a client connection, process it, and then accept another.
To communicate with the server, the client opens an AsynchronousSocketChannel and fires the message.
The completion handler's completed() method is then invoked.
However, this means that if the client wants to send another message on the same AsyncSocket instance, it can't.
It has to create another AsyncSocket instance - which I believe means another TCP connection - which is a performance hit.
Any ideas how to get around this?
Or to put the question another way, any ideas how to make the same asyncSocketChannel receive multiple CompletionHandler completed() events?
edit:
My handling code is like this...
public void handle(AsynchronousSocketChannel asyncSocketChannel) {
ByteBuffer readBuffer = ByteBuffer.allocate(100);
try {
// read a message from the client, timeout after 10 seconds
Future<Integer> futureReadResult = asyncSocketChannel.read(readBuffer);
futureReadResult.get(10, TimeUnit.SECONDS);
String receivedMessage = new String(readBuffer.array());
// some logic based on the message here...
// after the logic is a return message to client
ByteBuffer returnMessage = ByteBuffer.wrap((RESPONSE_FINISHED_REQUEST + " " + client
+ ", " + RESPONSE_COUNTER_EQUALS + value).getBytes());
Future<Integer> futureWriteResult = asyncSocketChannel.write(returnMessage);
futureWriteResult.get(10, TimeUnit.SECONDS);
} ...
So that's it: my server reads a message from the async channel and returns an answer.
The client blocks until it gets the answer, but this is OK. I don't care if the client blocks.
When this is finished, the client tries to send another message on the same async channel and it doesn't work.
There are 2 phases of connection and 2 different kinds of completion handlers.
The first phase is to handle a connection request; this is what you have programmed (BTW, as Jonas said, there is no need to use another executor). The second phase (which can be repeated multiple times) is to issue an I/O request and handle its completion. For this, you have to supply a memory buffer holding the data to read or write, and you did not show any code for that. When you do the second phase, you'll see that there is no such problem as you wrote: "if the client wants to send another message on the same AsyncSocket instance it can't".
One problem with NIO2 is that, on the one hand, the programmer has to avoid multiple async operations of the same kind (accept, read, or write) on the same channel (or else an error occurs), and on the other hand, the programmer has to avoid blocking waits in handlers. This problem is solved in the df4j-nio2 subproject of the df4j actor framework, where both AsyncServerSocketChannel and AsyncSocketChannel are represented as actors. (df4j is developed by me.)
First, you should not use an executor like you do in the completed method. The completed method is already run on a worker thread.
In your completed method for accept(...), you should call asyncSocketChannel.read(...) to read the data. The client can then just send another message on the same socket. That message will be handled with a new call to the completed method, perhaps by another worker thread on your server.
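To make that concrete, here is a minimal sketch (my own, not from either answer) of re-arming the read so the same AsynchronousSocketChannel keeps receiving messages; handleMessage() is a hypothetical placeholder for the application logic:
private void startReading(final AsynchronousSocketChannel channel) {
    final ByteBuffer buffer = ByteBuffer.allocate(1024);
    channel.read(buffer, null, new CompletionHandler<Integer, Void>() {
        @Override
        public void completed(Integer bytesRead, Void attachment) {
            if (bytesRead == -1) {                 // client closed the connection
                try { channel.close(); } catch (IOException ignored) { }
                return;
            }
            buffer.flip();
            String message = StandardCharsets.UTF_8.decode(buffer).toString();
            handleMessage(channel, message);       // hypothetical application logic
            buffer.clear();
            channel.read(buffer, null, this);      // re-arm: wait for the next message on the same channel
        }

        @Override
        public void failed(Throwable exc, Void attachment) {
            exc.printStackTrace();
        }
    });
}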

Android Threads, Services, and two way communication between them

I'm struggling to wrap my head around what needs to happen here. I'm currently working on an app that runs a service. The service when started opens a webserver that runs in a background thread.
At any point while this service is running the user can send commands to the device from a browser. The current sequence of events is as follows.
User sends request to server
Server sends a message to the service via the message handler construct; it sends data such as the URL parameters
The service does what it wants with the data, and wants to send some feedback message to the user in the browser
?????
The server's response to the request contains a feed back message from the service.
The way my functions are set up, I need to pause my serve() function while waiting for a response from the service, and then once the message is received, resume and send an HTTP response.
WebServer.java
public Response serve(String uri, String method, Properties header, Properties parms, Properties files)
{
    Bundle b = Utilities.convertToBundle(parms);
    Message msg = new Message();
    msg.setData(b);
    handler.sendMessage(msg);
    // sending a message to the handler in the service
    return new NanoHTTPD.Response();
}
CommandService.java
public class CommandService extends Service {
    private WebServer webserver;

    public Handler handler = new Handler() {
        @Override
        public void handleMessage(Message msg) {
            execute_command(msg.getData()); // some type of message should be sent back after this executes
        }
    };
}
Any suggestions? Is this structure the best way to go about it, or can you think of a better design that would lead to a cleaner implementation?
I think the lack of answers is because you haven't been very specific about what your question is. In my experience it's easier to get answers on StackOverflow to simple or direct questions than to requests for general architecture advice.
I'm no expert on Android, but I'll give it a shot. My question is why you have a web server running in the background of a Service; why not just have one class and make your Service the web server?
Regarding threading, communication, and sleeping: the main thing to remember is that a web server needs to stay available to serve new requests while serving current ones. Other than that, it's normal for a client to wait for a thread to complete its task (i.e. the thread "blocks"). So most web servers spawn a new thread to handle each request that comes in. If you have a background thread but you block the initial thread while you wait for the background thread to complete its task, then you're no better off than doing everything on one thread. Actually, the latter would be preferable for the sake of simplicity.
If Android is actually spawning new threads for you when requests come in, then there's no need for a background thread. Just do everything synchronously on one thread and rejoice in the simplicity!
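A minimal sketch of that synchronous approach, using the question's own serve() signature (assumptions: execute_command can be refactored to return a String, and the older NanoHTTPD Response(status, mimeType, body) constructor and HTTP_OK/MIME_PLAINTEXT constants are available):
public Response serve(String uri, String method, Properties header, Properties parms, Properties files)
{
    Bundle b = Utilities.convertToBundle(parms);
    // Run the command on the request thread and use its result as the HTTP body.
    // execute_command returning a String is an assumption for this sketch.
    String feedback = execute_command(b);
    return new NanoHTTPD.Response(NanoHTTPD.HTTP_OK, NanoHTTPD.MIME_PLAINTEXT, feedback);
}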

Linking two Threads in a Client-Server Socket program - Java

I create threads of class A and each sends a serialized object to a Server using ObjectOutputStream.
The Server creates new Threads B for each socket connection (whenever a new A client connects)
B will call a synchronized method on a Shared Resource Mutex which causes it (B) to wait() until some internal condition in the Mutex is true.
In this case, how can A know that B is currently waiting?
Hope this description is clear.
Class Arrangement:
A1--------->B1-------->| |
A2--------->B2-------->| Mutex |
A3--------->B3-------->| |
EDIT:
Using wait(), notify() or notifyAll() is a must, since this is for an academic project where concurrency is being tested.
Normally A would read on the socket, which would "block" (i.e. not return, just hang) until some data was sent back by B. It doesn't need to be written to deal with B's waiting status; it just reads, and that inherently involves waiting for something to read.
Update So you want A's user interface to stay responsive. By far the best way to do that is take advantage of the user interface library's event queue system. All GUI frameworks have a central event loop that dispatches events to handlers (button click, mouse move, timer, etc.) There is usually a way for a background thread to post something to that event queue so that it will be executed on the main UI thread. The details will depend on the framework you're using.
For example, in Swing, a background thread can do this:
SwingUtilities.invokeAndWait(someRunnableObject);
So suppose you define this interface:
public interface ServerReplyHandler {
void handleReply(Object reply);
}
Then make a nice API for your GUI code to use when it wants to submit a request to the server:
public class Communications {
public static void callServer(Object inputs, ServerReplyHandler handler);
}
So your client code can call the server like this:
showWaitMessage();
Communications.callServer(myInputs, new ServerReplyHandler() {
public void handleReply(Object myOutputs) {
hideWaitMessage();
// do something with myOutputs...
}
});
To implement the above API, you'd have a thread-safe queue of request objects, which store the inputs object and the handler for each request, and a background thread which does nothing but pull requests from the queue, send the serialised inputs to the server, read back the reply, deserialise it, and then do this:
final ServerReplyHandler currentHandler = ...
final Object currentReply = ...
SwingUtilities.invokeAndWait(new Runnable() {
public void run() {
currentHandler.handleReply(currentReply);
}
});
So as soon as the background thread has read back the reply, it passes it back into the main UI thread via a callback.
This is exactly how browsers do asynchronous communication from JS code. If you're familiar with jQuery, the above Communications.callServer method is the same pattern as:
showWaitMessage();
$.get('http://...', function(reply) {
hideWaitMessage();
// do something with 'reply'
});
The only difference in this case is that you are writing the whole communication stack by hand.
Update 2
You asked:
You mean I can pass "new ObjectOutputStream().writeObject(obj)" as
"myInputs" in Communications.callServer?
If all information is passed as serialised objects, you can build the serialisation into callServer. The calling code just passes some object that supports serialisation. The implementation of callServer would serialise that object into a byte[] and post that to the work queue. The background thread would pop it from the queue and send the bytes to the server.
Note that this avoids serialising the object on the background thread. The advantage of this is that all background thread activity is separated from the UI code. The UI code can be completely unaware that you're using threads for communication.
Re: wait and notify, etc. You don't need to write your own code to use those. Use one of the standard implementations of the BlockingQueue interface. In this case you could use LinkedBlockingQueue with the default constructor so it can accept an unlimited number of items. That means that submitting to the queue will always happen without blocking. So:
private static class Request {
    public byte[] send;
    public ServerReplyHandler handler;
}

private static BlockingQueue<Request> requestQueue = new LinkedBlockingQueue<Request>();

public static void callServer(Object inputs, ServerReplyHandler handler) {
    try {
        ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
        new ObjectOutputStream(byteStream).writeObject(inputs);
        Request r = new Request();
        r.send = byteStream.toByteArray();
        r.handler = handler;
        requestQueue.put(r);
    } catch (IOException | InterruptedException e) {
        throw new RuntimeException(e);
    }
}
Meanwhile the background worker thread is doing this:
for (;;) {
    final Request r = requestQueue.take();
    if (r == shutdown) {
        break;
    }
    // connect to server, send r.send bytes to it
    // read back the response as a byte array:
    final byte[] response = ...
    SwingUtilities.invokeAndWait(new Runnable() {
        public void run() {
            r.handler.handleReply(
                new ObjectInputStream(
                    new ByteArrayInputStream(response)
                ).readObject()
            );
        }
    });
}
The shutdown variable is just:
private static Request shutdown = new Request();
i.e. it's a dummy request used as a special signal. This allows you to have another public static method to allow the UI to ask the background thread to quit (would presumably clear the queue before putting shutdown on it).
Note the essentials of the pattern: UI objects are never accessed on the background thread. They are only manipulated from the UI thread. There is a clear separation of ownership. Data is passed between threads as byte arrays.
You could start multiple workers if you wanted to support more than one request happening simultaneously.
