I am using the simple HTTP server from com.sun.net.httpserver, as described in "Simple HTTP server in Java using only Java SE API".
Everything works fine; however, I am unsure how I can cleanly shut down the server once I no longer need it. I do not want to stop the server while it is still sending data for some request, only once it is idle.
My particular scenario is a desktop application performing an OAuth dance. I use a local web server embedded in the application to provide the callback response; the application launches a desktop browser via the java.awt.Desktop.browse API to show the server response.
I have tried calling HttpServer.stop(0) directly from my HttpHandler.handle function, after I have written the response page into the HttpExchange response output stream, but this is too early: the browser shows ERR_EMPTY_RESPONSE.
The same happens when I stop the server once my main application has finished its work; this is often too early as well, because the HttpServer has not finished sending the data out at that moment.
I could pass a delay of a few seconds to stop, but I would like to achieve proper, clean synchronization.
What is a proper solution to this?
I can offer a solution based on a CountDownLatch plus a custom ExecutorService that is set as the executor for the HttpServer:
public void runServer() throws Exception {
final ExecutorService ex = Executors.newSingleThreadExecutor();
final CountDownLatch c = new CountDownLatch(1);
HttpServer server = HttpServer.create(new InetSocketAddress(8888), 0);
server.createContext("/test", (HttpExchange h) -> {
StringBuilder resp = new StringBuilder();
for (int i = 0; i < 1_000_000; i++)
resp.append(i).append(", ");
String response = resp.toString();
h.sendResponseHeaders(200, response.length());
OutputStream os = h.getResponseBody();
os.write(response.getBytes());
os.close();
c.countDown(); // count down, letting `c.await()` return
});
server.setExecutor(ex); // set up a custom executor for the server
server.start(); // start the server
System.out.println("HTTP server started");
c.await(); // wait until `c.countDown()` is invoked
ex.shutdown(); // send shutdown command to executor
// wait until all tasks complete (i.e. all responses are sent)
ex.awaitTermination(1, TimeUnit.HOURS);
server.stop(0);
System.out.println("HTTP server stopped");
}
I tested this in our work network environment, and it seems to work correctly: the HttpServer stops no earlier than the response is fully sent, so I think this is exactly what you need.
Another approach is not to shut down the executor ex, but to submit to it a new task containing server.stop() after the response has been written to the stream. As ex is single-threaded, such a task will not execute before the previous task completes, i.e. before the response is fully sent:
public void run() throws Exception {
final ExecutorService ex = Executors.newSingleThreadExecutor();
final HttpServer server = HttpServer.create(new InetSocketAddress(8888), 0);
server.createContext("/test", (HttpExchange h) -> {
// ... generate and write a response
ex.submit(() -> {
server.stop(0);
System.out.println("HTTP server stopped");
});
});
server.setExecutor(ex);
server.start();
System.out.println("HTTP server started");
}
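One caveat worth noting (my own observation, not part of the original answer): in this second variant ex is never shut down, and Executors.newSingleThreadExecutor() uses a non-daemon thread, so the JVM will keep running after server.stop(0). A minimal sketch of a fix is to shut the executor down from within that same submitted task:
ex.submit(() -> {
    server.stop(0);
    ex.shutdown(); // previously submitted tasks (including this one) still complete
    System.out.println("HTTP server stopped");
});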
For more information, see ExecutorService.
Related
Currently, I'm reading the book "Reactive Programming with RxJava" by Tomasz Nurkiewicz. In chapter 5 he compares two different approaches to building an HTTP server, one of which is based on the Netty framework.
I can't figure out how using such a framework helps to build a more responsive server compared to the classic approach of blocking IO with a thread per request.
The main concept is to use as few threads as possible, but if there is some blocking IO operation such as DB access, that means only a very limited number of concurrent connections can be handled at a time.
I've reproduced an example from that book.
Initializing the server:
public static void main(String[] args) throws Exception {
EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
new ServerBootstrap()
.option(ChannelOption.SO_BACKLOG, 50_000)
.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new HttpInitializer())
.bind(8080)
.sync()
.channel()
.closeFuture()
.sync();
} finally {
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
}
The size of the worker group's thread pool is availableProcessors * 2 = 8 on my machine.
To simulate some IO operation and to be able to see what is going on in the log, I've added a latency of 1 sec to the handler (it could just as well be some business-logic invocation):
class HttpInitializer extends ChannelInitializer<SocketChannel> {
private final HttpHandler httpHandler = new HttpHandler();
@Override
public void initChannel(SocketChannel ch) {
ch
.pipeline()
.addLast(new HttpServerCodec())
.addLast(httpHandler);
}
}
And the handler itself:
class HttpHandler extends ChannelInboundHandlerAdapter {
private static final Logger log = LoggerFactory.getLogger(HttpHandler.class);
@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
ctx.flush();
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
if (msg instanceof HttpRequest) {
try {
System.out.println(format("Request received on thread '%s' from '%s'", Thread.currentThread().getName(), ((NioSocketChannel)ctx.channel()).remoteAddress()));
} catch (Exception ex) {}
sendResponse(ctx);
}
}
private void sendResponse(ChannelHandlerContext ctx) {
final DefaultFullHttpResponse response = new DefaultFullHttpResponse(
HTTP_1_1,
HttpResponseStatus.OK,
Unpooled.wrappedBuffer("OK".getBytes(UTF_8)));
try {
TimeUnit.SECONDS.sleep(1);
} catch (Exception ex) {
System.out.println("Ex catched " + ex);
}
response.headers().add("Content-length", 2);
ctx.writeAndFlush(response);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
log.error("Error", cause);
ctx.close();
}
}
The client to simulate multiple concurrent connections:
public class NettyClient {
public static void main(String[] args) throws Exception {
NettyClient nettyClient = new NettyClient();
for (int i = 0; i < 100; i++) {
new Thread(() -> {
try {
nettyClient.startClient();
} catch (Exception ex) {
}
}).start();
}
TimeUnit.SECONDS.sleep(5);
}
public void startClient()
throws IOException, InterruptedException {
InetSocketAddress hostAddress = new InetSocketAddress("localhost", 8080);
SocketChannel client = SocketChannel.open(hostAddress);
System.out.println("Client... started");
String threadName = Thread.currentThread().getName();
// Send messages to server
String[] messages = new String[]
{"GET / HTTP/1.1\n" +
"Host: localhost:8080\n" +
"Connection: keep-alive\n" +
"Cache-Control: max-age=0\n" +
"Upgrade-Insecure-Requests: 1\n" +
"User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36\n" +
"Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3\n" +
"Accept-Encoding: gzip, deflate, br\n" +
"Accept-Language: ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7"};
for (int i = 0; i < messages.length; i++) {
byte[] message = new String(messages[i]).getBytes();
ByteBuffer buffer = ByteBuffer.wrap(message);
client.write(buffer);
System.out.println(messages[i]);
buffer.clear();
}
client.close();
}
}
Expected: our case is the blue line, with the only difference that the delay was set to 0.1 sec instead of the 1 sec I explained above. With 100 concurrent connections I was expecting about 100 RPS, because the graph shows 90k RPS with 100k concurrent connections and a 0.1 sec delay.
Actual: Netty handles only 8 concurrent connections at a time, waits until the sleep expires, takes another batch of 8 requests, and so on. As a result, it took about 13 sec to complete all the requests. Obviously, to handle more clients I would need to allocate more threads.
But this is exactly how the classic blocking IO approach works! Here are the logs on the server side; as you can see, the first 8 requests are handled, and one second later another 8 requests:
2019-07-19T12:34:10.791Z Request received on thread 'nioEventLoopGroup-3-2' from '/127.0.0.1:49466'
2019-07-19T12:34:10.791Z Request received on thread 'nioEventLoopGroup-3-1' from '/127.0.0.1:49465'
2019-07-19T12:34:10.792Z Request received on thread 'nioEventLoopGroup-3-8' from '/127.0.0.1:49464'
2019-07-19T12:34:10.793Z Request received on thread 'nioEventLoopGroup-3-7' from '/127.0.0.1:49463'
2019-07-19T12:34:10.799Z Request received on thread 'nioEventLoopGroup-3-6' from '/127.0.0.1:49462'
2019-07-19T12:34:10.802Z Request received on thread 'nioEventLoopGroup-3-3' from '/127.0.0.1:49467'
2019-07-19T12:34:10.802Z Request received on thread 'nioEventLoopGroup-3-4' from '/127.0.0.1:49461'
2019-07-19T12:34:10.803Z Request received on thread 'nioEventLoopGroup-3-5' from '/127.0.0.1:49460'
2019-07-19T12:34:11.798Z Request received on thread 'nioEventLoopGroup-3-8' from '/127.0.0.1:49552'
2019-07-19T12:34:11.798Z Request received on thread 'nioEventLoopGroup-3-1' from '/127.0.0.1:49553'
2019-07-19T12:34:11.799Z Request received on thread 'nioEventLoopGroup-3-2' from '/127.0.0.1:49554'
2019-07-19T12:34:11.801Z Request received on thread 'nioEventLoopGroup-3-6' from '/127.0.0.1:49470'
2019-07-19T12:34:11.802Z Request received on thread 'nioEventLoopGroup-3-3' from '/127.0.0.1:49475'
2019-07-19T12:34:11.805Z Request received on thread 'nioEventLoopGroup-3-7' from '/127.0.0.1:49559'
2019-07-19T12:34:11.805Z Request received on thread 'nioEventLoopGroup-3-4' from '/127.0.0.1:49468'
2019-07-19T12:34:11.806Z Request received on thread 'nioEventLoopGroup-3-5' from '/127.0.0.1:49469'
So my question is: how could Netty (or something similar), with its non-blocking and event-driven architecture, utilize the CPU more effectively? If we had only 1 thread per event loop group, the pipeline would be as follows:
The ServerChannel's selection key is set to OP_ACCEPT.
The ServerChannel accepts a connection, and the ClientChannel's selection key is set to OP_READ.
A worker thread reads the content of this ClientChannel and passes it to the chain of handlers.
Even if the ServerChannel thread accepts another client connection and puts it into some sort of queue, the worker thread can't do anything before all handlers in the chain finish their job. From my point of view, the thread can't just switch to another job, since even waiting for the response from a remote DB requires CPU ticks.
"how could netty (or something similar) with its non-blocking and event-driven architecture utilize the CPU more effectively? "
It cannot.
The goal of asynchronous (non-blocking and event-driven) programming is to save core memory: tasks are used instead of threads as the units of parallel work. This allows you to have millions of parallel activities instead of thousands.
CPU cycles cannot be saved automatically; that is always an intellectual job.
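To make the tasks-instead-of-threads point concrete, here is a sketch of my own (not from the book or the original answer) of how the handler above could stop blocking the event loop: the simulated 1-second latency is expressed as a task scheduled on the channel's EventExecutor rather than a Thread.sleep, so the same 8 worker threads can keep a very large number of exchanges in flight at once.
// Sketch only: replaces the blocking sendResponse(ctx) call in the HttpHandler above (Netty 4 API)
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    if (msg instanceof HttpRequest) {
        // schedule the response instead of sleeping on the event-loop thread
        ctx.executor().schedule(() -> {
            DefaultFullHttpResponse response = new DefaultFullHttpResponse(
                    HTTP_1_1,
                    HttpResponseStatus.OK,
                    Unpooled.wrappedBuffer("OK".getBytes(UTF_8)));
            response.headers().add("Content-Length", 2);
            ctx.writeAndFlush(response);
        }, 1, TimeUnit.SECONDS);
    }
}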
I have set up an HttpsServer in Java. All of my communication works perfectly. I set up multiple contexts, load a self-signed certificate, and even start up based on an external configuration file.
My problem now is getting multiple clients to be able to hit my secure server. To do so, I would like to somehow multi-thread the requests that come into the HttpsServer, but I cannot figure out how. Below is my basic HTTPS configuration.
HttpsServer server = HttpsServer.create(new InetSocketAddress(secureConnection.getPort()), 0);
SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(secureConnection.getKeyManager().getKeyManagers(), secureConnection.getTrustManager().getTrustManagers(), null);
server.setHttpsConfigurator(new SecureServerConfiguration(sslContext));
server.createContext("/", new RootHandler());
server.createContext("/test", new TestHandler());
server.setExecutor(Executors.newCachedThreadPool());
server.start();
Where secureConnection is a custom class containing server setup and certificate information.
I attempted to set the executor to Executors.newCachedThreadPool() and a couple of other ones. However, they all produced the same result: each managed the threads differently, but the first request had to finish before the second could be processed.
I also tried writing my own Executor:
public class AsyncExecutor extends ThreadPoolExecutor implements Executor
{
public static Executor create()
{
return new AsyncExecutor();
}
public AsyncExecutor()
{
super(5, 10, 10000, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(12));
}
@Override
public void execute(Runnable process)
{
System.out.println("New Process");
Thread newProcess = new Thread(process);
newProcess.setDaemon(false);
newProcess.start();
System.out.println("Thread created");
}
}
Unfortunately, it gave the same result as the other Executors.
To test, I am using Postman to hit the /test endpoint, which simulates a long-running task with a Thread.sleep(10000). While that is running, I use my Chrome browser to hit the root endpoint. The root page does not load until the 10-second sleep is over.
Any thoughts on how to handle multiple concurrent requests to the HTTPS server?
For ease of testing, I replicated my scenario using the standard HttpServer and condensed everything into a single java program.
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
public class Example
{
private final static int PORT = 80;
private final static int BACKLOG = 10;
/**
* To test hit:
* <p><b>http://localhost/test</b></p>
* <p>This will hit the endpoint with the thread sleep<br>
* Then hit:</p>
* <p><b>http://localhost</b></p>
* <p>I would expect this to come back right away. However, it does not come back until the
* first request finishes. This can be tested with only a basic browser.</p>
* @param args
* @throws Exception
*/
public static void main(String[] args) throws Exception
{
new Example().start();
}
private void start() throws Exception
{
HttpServer server = HttpServer.create(new InetSocketAddress(PORT), BACKLOG);
server.createContext("/", new RootHandler());
server.createContext("/test", new TestHandler());
server.setExecutor(Executors.newCachedThreadPool());
server.start();
System.out.println("Server Started on " + PORT);
}
class RootHandler implements HttpHandler
{
@Override
public void handle(HttpExchange httpExchange) throws IOException
{
String body = "<html>Hello World</html>";
httpExchange.sendResponseHeaders(200, body.length());
OutputStream outputStream = httpExchange.getResponseBody();
outputStream.write(body.getBytes("UTF-8"));
outputStream.close();
}
}
class TestHandler implements HttpHandler
{
@Override
public void handle(HttpExchange httpExchange) throws IOException
{
try
{
Thread.sleep(10000);
}
catch (InterruptedException e)
{
e.printStackTrace();
}
String body = "<html>Test Handled</html>";
httpExchange.sendResponseHeaders(200, body.length());
OutputStream outputStream = httpExchange.getResponseBody();
outputStream.write(body.getBytes("UTF-8"));
outputStream.close();
}
}
}
TL;DR: It's OK; just use two different browsers, or a specialized tool, to test it.
Your original implementation is OK and works as expected; no custom Executor is needed. For each request it executes a method of the "shared" handler class instance. It always picks a free thread from the pool, so each method call is executed in a different thread.
The problem seems to be that when you use multiple windows of the same browser to test this behavior, for some reason the requests get executed in a serialized way (only one at a time). Tested with the latest Firefox, Chrome, Edge and Postman. Edge and Postman work as expected. The anonymous mode of Firefox and Chrome also helps.
The same local URL was opened at the same time from two Chrome windows. In the first, the page loaded after 5 s; I used Thread.sleep(5000), so that's OK. The second window loaded the response in 8.71 s, so there is a 3.71 s delay of unknown origin.
My guess? Probably some browser-internal optimization or fail-safe mechanism.
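If you want to verify this without relying on browser behavior, here is a small test harness of my own (the URLs assume the Example server above is running on port 80 with the / and /test contexts): it fires the slow and the fast request from two plain Java threads, so no browser connection reuse can serialize them.
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ConcurrencyCheck
{
    public static void main(String[] args) throws Exception
    {
        Thread slow = request("http://localhost/test"); // ~10 s handler
        Thread fast = request("http://localhost/");     // should return right away
        slow.join();
        fast.join();
    }

    private static Thread request(String url)
    {
        Thread t = new Thread(() -> {
            long start = System.currentTimeMillis();
            try
            {
                HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
                try (InputStream in = connection.getInputStream())
                {
                    while (in.read() != -1) { /* drain the response */ }
                }
            }
            catch (Exception e)
            {
                e.printStackTrace();
            }
            System.out.println(url + " finished in " + (System.currentTimeMillis() - start) + " ms");
        });
        t.start();
        return t;
    }
}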
Try specifying a non-zero maximum backlog (the second argument to create()):
HttpsServer server = HttpsServer.create(new InetSocketAddress(secureConnection.getPort()), 10);
I did some experiments and what works for me is:
public void handle(HttpExchange exchange) {
    executor.submit(new SomeOtherHandler(exchange)); // hand the exchange off to the pool
}

public class SomeOtherHandler implements Runnable {
    private final HttpExchange exchange;
    public SomeOtherHandler(HttpExchange exchange) { this.exchange = exchange; }
    public void run() { /* generate and write the response here */ }
}
where executor is the thread pool you created.
Retry Connection in Netty
I am building a client socket system. The requirements are:
First, attempt to connect to the remote server.
When the first attempt fails, keep trying until the server is online.
I would like to know whether there is such a feature in Netty, or how best I can solve this.
Thank you very much.
This is the code snippet I am struggling with:
protected void connect() throws Exception {
this.bootstrap = new ClientBootstrap(new NioClientSocketChannelFactory(
Executors.newCachedThreadPool(),
Executors.newCachedThreadPool()));
// Configure the event pipeline factory.
bootstrap.setPipelineFactory(new SmpPipelineFactory());
bootstrap.setOption("writeBufferHighWaterMark", 10 * 64 * 1024);
bootstrap.setOption("sendBufferSize", 1048576);
bootstrap.setOption("receiveBufferSize", 1048576);
bootstrap.setOption("tcpNoDelay", true);
bootstrap.setOption("keepAlive", true);
// Make a new connection.
final ChannelFuture connectFuture = bootstrap
.connect(new InetSocketAddress(config.getRemoteAddr(), config
.getRemotePort()));
channel = connectFuture.getChannel();
connectFuture.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future)
throws Exception {
if (connectFuture.isSuccess()) {
// Connection attempt succeeded:
// Begin to accept incoming traffic.
channel.setReadable(true);
} else {
// Close the connection if the connection attempt has
// failed.
channel.close();
logger.info("Unable to Connect to the Remote Socket server");
}
}
});
}
Assuming Netty 3.x, the simplest example would be:
// Configure the client.
ClientBootstrap bootstrap = new ClientBootstrap(
new NioClientSocketChannelFactory(
Executors.newCachedThreadPool(),
Executors.newCachedThreadPool()));
ChannelFuture future = null;
while (true)
{
future = bootstrap.connect(new InetSocketAddress("127.0.0.1", 80));
future.awaitUninterruptibly();
if (future.isSuccess())
{
break;
}
}
Obviously you'd want to have your own logic for the loop that sets a maximum number of tries, etc. Netty 4.x has a slightly different bootstrap, but the logic is the same. This is also synchronous, blocking, and ignores InterruptedException; in a real application you might register a ChannelFutureListener with the Future and be notified when the Future completes.
Added after the OP edited the question:
You have a ChannelFutureListener that is getting notified. If you want to then retry the connection, you're going to have to either have that listener hold a reference to the bootstrap, or communicate back to your main thread that the connection attempt failed and have it retry the operation. If you have the listener do it (which is the simplest way), be aware that you need to limit the number of retries to prevent infinite recursion, because it's being executed in the context of the Netty worker thread. If you exhaust your retries, again, you'll need to communicate that back to your main thread; you could do that via a volatile variable, or the observer pattern could be used.
When dealing with async you really have to think concurrently. There's a number of ways to skin that particular cat.
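As a rough sketch of the listener-based approach (my own illustration against the Netty 3.x API; MAX_RETRIES, bootstrap and remoteAddress are placeholders for your own fields):
final ChannelFutureListener retryListener = new ChannelFutureListener() {
    private final AtomicInteger attempts = new AtomicInteger(); // java.util.concurrent.atomic

    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (future.isSuccess()) {
            // connection established: begin accepting incoming traffic
            future.getChannel().setReadable(true);
        } else if (attempts.incrementAndGet() < MAX_RETRIES) {
            // runs on a Netty worker thread, so the retry count must stay bounded
            bootstrap.connect(remoteAddress).addListener(this);
        } else {
            // retries exhausted: signal the main thread, e.g. via a volatile flag or an observer
        }
    }
};
bootstrap.connect(remoteAddress).addListener(retryListener);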
Thank you, Brian Roach. The connected variable is volatile and can be accessed outside this code for further processing.
final InetSocketAddress sockAddr = new InetSocketAddress(
config.getRemoteAddr(), config.getRemotePort());
final ChannelFuture connectFuture = bootstrap
.connect(sockAddr);
channel = connectFuture.getChannel();
connectFuture.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future)
throws Exception {
if (future.isSuccess()) {
// Connection attempt succeeded:
// Begin to accept incoming traffic.
channel.setReadable(true);
connected = true;
} else {
// Close the connection if the connection attempt has
// failed.
channel.close();
if(!connected){
logger.debug("Attempt to connect within " + ((double)frequency/(double)1000) + " seconds");
try {
Thread.sleep(frequency);
} catch (InterruptedException e) {
logger.error(e.getMessage());
}
bootstrap.connect(sockAddr).addListener(this);
}
}
}
});
We just finished building a server to store data to disk and fronted it with Netty. During load testing we were seeing Netty scale to about 8,000 messages per second. Given our systems, this looked really low. As a benchmark, we wrote a Tomcat front-end and ran the same load tests. With these tests we were getting roughly 25,000 messages per second.
Here are the specs for our load testing machine:
Macbook Pro Quad core
16GB of RAM
Java 1.6
Here is the load test setup for Netty:
10 threads
100,000 messages per thread
Netty server code (pretty standard) - our Netty pipeline on the server is two handlers: a FrameDecoder and a SimpleChannelHandler that handles the request and response.
Client side JIO using Commons Pool to pool and reuse connections (the pool was sized the same as the # of threads)
Here is the load test setup for Tomcat:
10 threads
100,000 messages per thread
Tomcat 7.0.16 with default configuration using a Servlet to call the server code
Client side using URLConnection without any pooling
My main question is why there is such a huge difference in performance. Is there something obvious with respect to Netty that can get it to run faster than Tomcat?
Edit: Here is the main Netty server code:
NioServerSocketChannelFactory factory = new NioServerSocketChannelFactory();
ServerBootstrap server = new ServerBootstrap(factory);
server.setPipelineFactory(new ChannelPipelineFactory() {
public ChannelPipeline getPipeline() {
RequestDecoder decoder = injector.getInstance(RequestDecoder.class);
ContentStoreChannelHandler handler = injector.getInstance(ContentStoreChannelHandler.class);
return Channels.pipeline(decoder, handler);
}
});
server.setOption("child.tcpNoDelay", true);
server.setOption("child.keepAlive", true);
Channel channel = server.bind(new InetSocketAddress(port));
allChannels.add(channel);
Our handlers look like this:
public class RequestDecoder extends FrameDecoder {
@Override
protected ChannelBuffer decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
if (buffer.readableBytes() < 4) {
return null;
}
buffer.markReaderIndex();
int length = buffer.readInt();
if (buffer.readableBytes() < length) {
buffer.resetReaderIndex();
return null;
}
return buffer;
}
}
public class ContentStoreChannelHandler extends SimpleChannelHandler {
private final RequestHandler handler;
@Inject
public ContentStoreChannelHandler(RequestHandler handler) {
this.handler = handler;
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
ChannelBuffer in = (ChannelBuffer) e.getMessage();
in.readerIndex(4);
ChannelBuffer out = ChannelBuffers.dynamicBuffer(512);
out.writerIndex(8); // Skip the length and status code
boolean success = handler.handle(new ChannelBufferInputStream(in), new ChannelBufferOutputStream(out), new NettyErrorStream(out));
if (success) {
out.setInt(0, out.writerIndex() - 8); // length
out.setInt(4, 0); // Status
}
Channels.write(e.getChannel(), out, e.getRemoteAddress());
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
Throwable throwable = e.getCause();
ChannelBuffer out = ChannelBuffers.dynamicBuffer(8);
out.writeInt(0); // Length
out.writeInt(Errors.generalException.getCode()); // status
Channels.write(ctx, e.getFuture(), out);
}
@Override
public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) {
NettyContentStoreServer.allChannels.add(e.getChannel());
}
}
UPDATE:
I've managed to get my Netty solution to within 4,000/second. A few weeks back I was testing a client-side PING in my connection pool as a safeguard against idle sockets, but I forgot to remove that code before I started load testing. This code effectively PINGed the server every time a socket was checked out from the pool (using Commons Pool). I commented that code out and I'm now getting 21,000/second with Netty and 25,000/second with Tomcat.
Although this is great news on the Netty side, I'm still getting 4,000/second less with Netty than with Tomcat. I can post my client side (which I thought I had ruled out, but apparently not) if anyone is interested in seeing it.
The method messageReceived is executed on a worker thread that is possibly getting blocked by RequestHandler#handle, which may be busy doing some I/O work.
You could try adding an OrderedMemoryAwareThreadPoolExecutor (recommended) to the channel pipeline for executing the handlers, or alternatively try dispatching your handler work to a new ThreadPoolExecutor and passing a reference to the socket channel for writing the response back to the client later (a sketch of the first option follows the example below). Ex.:
@Override
public void messageReceived(ChannelHandlerContext ctx, final MessageEvent e) {
    executor.submit(new Runnable() {
        @Override
        public void run() {
            processHandlerAndRespond(e);
        }
    });
}
private void processHandlerAndRespond(MessageEvent e) {
ChannelBuffer in = (ChannelBuffer) e.getMessage();
in.readerIndex(4);
ChannelBuffer out = ChannelBuffers.dynamicBuffer(512);
out.writerIndex(8); // Skip the length and status code
boolean success = handler.handle(new ChannelBufferInputStream(in), new ChannelBufferOutputStream(out), new NettyErrorStream(out));
if (success) {
out.setInt(0, out.writerIndex() - 8); // length
out.setInt(4, 0); // Status
}
Channels.write(e.getChannel(), out, e.getRemoteAddress());
}
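For the first (recommended) option, here is a sketch under the assumption that you are on Netty 3.x and keep the Guice injector from your bootstrap code: the executor is wrapped in an ExecutionHandler (from org.jboss.netty.handler.execution) and placed in the pipeline before the blocking handler, so messageReceived runs on the pool rather than on the NIO worker thread.
server.setPipelineFactory(new ChannelPipelineFactory() {
    // shared by all channels: 16 threads, 1 MB per-channel and 1 MB total
    // pending-event memory limits (placeholder numbers, tune for your load)
    private final ExecutionHandler executionHandler = new ExecutionHandler(
            new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));

    public ChannelPipeline getPipeline() {
        RequestDecoder decoder = injector.getInstance(RequestDecoder.class);
        ContentStoreChannelHandler handler = injector.getInstance(ContentStoreChannelHandler.class);
        // everything after executionHandler is invoked on the pool,
        // so the I/O worker thread is never blocked by RequestHandler#handle
        return Channels.pipeline(decoder, executionHandler, handler);
    }
});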
I'm trying to create a client-server game using Java sockets. I have a server thread which controls the logic of the game. I also have client threads that communicate with the server. I use multiple client handler threads to facilitate server-to-client communication; these handler threads talk to the client threads over sockets.
Now, I have a problem with how to facilitate communication between the server thread and the multiple client handler threads. For example, when the server selects the next player to play, how should it signal the client handler thread, which in turn communicates with the client thread through its socket?
I have done this before in the following way. I have a server socket:
public Server(int port, int numPlayers) {
game = new PRGameController(numPlayers);
try {
MessageOutput.info("Opening port on " + port);
ServerSocket clientConnectorSocket = new ServerSocket(port);
MessageOutput.info("Listening for connections");
while (!game.isFull()) {
// block until we get a connection from a client
final Socket client = clientConnectorSocket.accept();
MessageOutput.info("Client connected from " + client.getInetAddress());
Runnable runnable = new Runnable() {
public synchronized void run() {
PRGamePlayer player = new PRGamePlayer(client, game);
}
};
new Thread(runnable).start();
}
} catch (IOException io) {
MessageOutput.error("Server Connection Manager Failed...Shutting Down...", io);
// if the connection manager fails we want to closedown the server
System.exit(0);
}
}
Then on the client side, I have something like this:
public void connect(String ip) {
try {
comms = new Socket(ip, 12345);
comms.setTcpNoDelay(true);
// get the streams from the socket and wrap them round a ZIP Stream
// then wrap them around a reader and writer, as we are writing strings
this.input = new CompressedInputStream(comms.getInputStream());
this.output = new CompressedOutputStream(comms.getOutputStream());
this.connected = true;
startServerResponseThread();
} catch (IOException e) {
ui.displayMessage("Unable to connect to server, please check and try again");
this.connected = false;
}
if (connected) {
String name = ui.getUserInput("Please choose a player name");
sendXML(XMLUtil.getXML(new NameSetAction(name, Server.VERSION)));
}
}
/**
* This method sets up the server response thread. The thread sits patiently
* waiting for input from the server, in a separate thread, so as not to hold
* up any client-side activities. When data is received from the server,
* it is processed to perform the appropriate action.
*/
public void startServerResponseThread() {
// create the runnable that will be used by the serverListenerThread,
// to listen for requests from the server
Runnable runnable = new Runnable() {
public void run () {
try {
// loop forever, or until the server closes the connection
while (true) {
processRequest(input.readCompressedString());
}
} catch (SocketException sx) {
MessageOutput.error("Socket closed, user has shutdown the connection, or network has failed");
} catch (IOException ex) {
MessageOutput.error(ex.getMessage(), ex);
} catch (Exception ex) {
MessageOutput.error(ex.getMessage(), ex);
} finally {
(PRClone.this).connected = false;
// only shutdown the server if the listener thread has not already been
// destroyed, otherwise the server will have already been shutdown
if (serverListenerThread != null) {
// shutdown the thread and inform the application the communications has closed
MessageOutput.debug("Shutting down server listener Thread");
}
}
}
};
// create the thread
serverListenerThread = new Thread(runnable);
// start the thread
serverListenerThread.start();
}
The client is able to send requests to the server via the output stream, and to read server data from the input stream.
The server can accept requests from the client and process them in the GameController, and it can also send notifications to the client over the output stream, again in the GameController.
EDIT: Also, I should note that all my communication is done via XML, and the controller on the client or the server decodes the XML and performs the relevant request.
Hope this helps. It certainly does the job for me, and allows my multi-player games to work well.
I suspect that your client threads are hanging on a blocking read operation. To "release" these threads and make them send data instead, you'd have to interrupt them through thread.interrupt(). (Which would cause the blocking read to throw an InterruptedException.)
However, I've written a few network games myself, and I would really recommend that you look into the java.nio packages and especially the Selector class. Using this class you could easily make the whole server single-threaded, which would save you a lot of headaches when it comes to synchronizing all those client threads.
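A minimal sketch of what that single-threaded Selector loop could look like (my own illustration, with a placeholder port and echo-style handling rather than your XML game protocol):
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SingleThreadedGameServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(12345)); // placeholder port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int read = client.read(buffer);
                    if (read == -1) {
                        key.cancel();
                        client.close();
                    } else {
                        buffer.flip();
                        // decode the request and write the reply here; everything runs
                        // on this single thread, so no synchronization is needed
                        client.write(buffer);
                    }
                }
            }
        }
    }
}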
I think using an existing communication infrastructure like ActiveMQ would be very useful here to deal with the low-level piping stuff and allow you to tackle the game design issues at a higher conceptual level rather than dealing with the low-level intricacies.
That being said: if I understood you correctly, you have a game client with multiple threads, one of which deals with comms to the server. On the server there is a comms thread for each client, plus the game server logic.
I would only use sockets for remote communication, and queues for communication between the server threads. On the queues, send immutable objects (or copies) back and forth so you do not need to synchronize access to the data in the messages. As a basis for synchronization you can block on the Socket or a BlockingQueue; then you do not need to manually synchronize things, although this requires careful protocol design.