I'm new to Netty. Based on an example I found, I wrote a Netty HTTP server that keeps HTTP connections open in order to send server-sent events to the browser client.
The problem is that it only accepts about 5 connections and after that blocks new ones. I googled and found that most answers said to set SO_BACKLOG to a higher value. I tried different values but saw no difference; I even set it to Integer.MAX_VALUE and still got only 5 connections.
Server code (Using Netty version 4.1.6.Final):
package server;
import static io.netty.buffer.Unpooled.copiedBuffer;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.HttpVersion;
public class NettyHttpServer {
private ChannelFuture channel;
private final EventLoopGroup masterGroup;
public NettyHttpServer() {
masterGroup = new NioEventLoopGroup(100);
}
public void start() {
try {
final ServerBootstrap bootstrap = new ServerBootstrap().group(masterGroup)
.channel(NioServerSocketChannel.class).childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(final SocketChannel ch) throws Exception {
ch.pipeline().addLast("codec", new HttpServerCodec());
ch.pipeline().addLast("aggregator", new HttpObjectAggregator(512 * 1024));
ch.pipeline().addLast("request", new ChannelInboundHandlerAdapter() {
@Override
public void channelRead(final ChannelHandlerContext ctx, final Object msg)
throws Exception {
System.out.println(msg);
registerToPubSub(ctx, msg);
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
ctx.flush();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
ctx.writeAndFlush(new DefaultFullHttpResponse(HttpVersion.HTTP_1_1,
HttpResponseStatus.INTERNAL_SERVER_ERROR,
copiedBuffer(cause.getMessage().getBytes())));
}
});
}
}).option(ChannelOption.SO_BACKLOG, Integer.MAX_VALUE)
.childOption(ChannelOption.SO_KEEPALIVE, true);
channel = bootstrap.bind(8081).sync();
// channels.add(bootstrap.bind(8080).sync());
} catch (final InterruptedException e) {}
}
public void shutdown() {
masterGroup.shutdownGracefully();
try {
channel.channel().closeFuture().sync();
} catch (InterruptedException e) {}
}
private void registerToPubSub(final ChannelHandlerContext ctx, Object msg) {
new Thread() {
@Override
public void run() {
while (true) {
final String responseMessage = "data:abcdef\n\n";
FullHttpResponse response = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK,
copiedBuffer(responseMessage.getBytes()));
response.headers().set(HttpHeaders.Names.CONNECTION, HttpHeaders.Values.KEEP_ALIVE);
response.headers().set(HttpHeaders.Names.CONTENT_TYPE, "text/event-stream");
response.headers().set(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN, "*");
response.headers().set("Cache-Control", "no-cache");
ctx.writeAndFlush(response);
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
};
}.start();
}
public static void main(String[] args) {
new NettyHttpServer().start();
}
}
Client JS code (I open it in more than 5 browser tabs, and not all of them receive the events):
var source = new EventSource("http://localhost:8081");
source.onmessage = function(event) {
console.log(event.data);
};
source.onerror= function(err){console.log(err); source.close()};
source.onopen = function(event){console.log('open'); console.log(event)}
You need to let the browser know that you are done sending the response, and for that you have three options.
Set a content length
Send it chunked
Close the connection when you are done
You aren't doing any of those. I suspect your browser is still waiting for the full response to each request and is using a new connection for each tab in your testing. Browsers cap the number of concurrent connections to a single host (typically around 6), so after that your browser refuses to create new ones.
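For illustration, here is a minimal sketch of option 2 (chunked transfer) adapted to the pipeline above; the helper class, its method names, and the use of HttpUtil/HttpHeaderNames are my own framing of the idea, not code from the original post:
import static io.netty.buffer.Unpooled.copiedBuffer;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.DefaultHttpContent;
import io.netty.handler.codec.http.DefaultHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpResponse;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.util.CharsetUtil;
// Sketch: stream server-sent events over a single chunked HTTP/1.1 response.
final class SseWriter {
    // Send the response head once, marked as chunked, when the request arrives.
    static void startEventStream(ChannelHandlerContext ctx) {
        HttpResponse response = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
        response.headers().set(HttpHeaderNames.CONTENT_TYPE, "text/event-stream");
        response.headers().set(HttpHeaderNames.CACHE_CONTROL, "no-cache");
        response.headers().set(HttpHeaderNames.ACCESS_CONTROL_ALLOW_ORIGIN, "*");
        HttpUtil.setTransferEncodingChunked(response, true); // emits Transfer-Encoding: chunked
        ctx.writeAndFlush(response);
    }
    // Call this for every event; the HttpServerCodec turns each HttpContent into a chunk.
    static void sendEvent(ChannelHandlerContext ctx, String data) {
        ctx.writeAndFlush(new DefaultHttpContent(copiedBuffer("data: " + data + "\n\n", CharsetUtil.UTF_8)));
    }
}
With this shape the browser sees a response that never ends, which is what EventSource expects, and it keeps reusing one connection per tab.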
Another thing I noticed is that you are creating a new thread for each request in your server, and never letting it die. That will cause problems down the line as you try to scale. If you really want that code to run in a different thread then I suggest looking at overloaded methods for adding handlers to the pipeline; those should let you specify a thread pool to run them in.
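As a sketch of that suggestion (DefaultEventExecutorGroup is the stock Netty 4.1 class for this; the pool size of 16 is an arbitrary assumption), the handler can be attached to a shared executor instead of spawning a thread per request:
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;
// Create once, next to the event loop group, and share across connections.
EventExecutorGroup handlerGroup = new DefaultEventExecutorGroup(16);
// In initChannel: run the "request" handler on that pool instead of the I/O event loop.
ch.pipeline().addLast(handlerGroup, "request", new ChannelInboundHandlerAdapter() {
    // ... same channelRead / exceptionCaught as before ...
});
For a simple once-per-second push, ctx.executor().scheduleAtFixedRate(...) is another option that avoids extra threads entirely.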
Related
Problem description:
My program creates a Netty server and client, then makes 2^17 connections to that server. At some point the client starts to receive this exception:
java.io.IOException: Istniejące połączenie zostało gwałtownie zamknięte przez zdalnego hosta.
The english equivalent is:
java.io.IOException: An existing connection was forcibly closed by the remote host
Obviously it is not desirable for the server to forcibly close existing connections.
Steps to reproduce:
For the convenience of anyone willing to reproduce this problem, I've created the "single runnable java file" program below; its only dependency is netty-all-4.1.12.Final.jar. It starts a Netty server on a free port, creates a client, performs the connections, waits a bit to give the server a chance to process them, and then prints statistics: how many connections were made, how many the server processed, how many were lost, and how many and what kinds of exceptions the server and the client encountered.
package netty.exception.tst;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.net.InetSocketAddress;
import java.util.Collections;
import java.util.Map;
import java.util.Map.Entry;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
import io.netty.bootstrap.Bootstrap;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
public class NettyException {
public static void main(String[] args) throws InterruptedException {
System.out.println("starting server");
NettyServer server = new NettyServer(0);
int port = server.getPort();
System.out.println("server started at port: " + port);
System.out.println("staring client");
NettyClient client = new NettyClient();
System.out.println("client started");
int noOfConnectionsToPerform = 1 << 17;
System.out.println("performing " + noOfConnectionsToPerform + " connections");
for (int n = 0; n < noOfConnectionsToPerform; n++) {
// send a request
ChannelFuture f = client.getBootstrap().connect("localhost", port);
}
System.out.println("client performed " + noOfConnectionsToPerform + " connections");
System.out.println("wait a bit to give a chance for server to finish processing incoming requests");
Thread.sleep(80000);
System.out.println("shutting down server and client");
server.stop();
client.stop();
System.out.println("stopped, server received: " + server.connectionsCount() + " connections");
int numberOfLostConnections = noOfConnectionsToPerform - server.connectionsCount();
if (numberOfLostConnections > 0) {
System.out.println("Where do we lost " + numberOfLostConnections + " connections?");
}
System.out.println("srerver exceptions: ");
printExceptions(server.getExceptions());
System.out.println("client exceptions: ");
printExceptions(client.getExceptions());
}
private static void printExceptions(Map<String, Integer> exceptions) {
if (exceptions.isEmpty()) {
System.out.println("There was no exceptions");
}
for (Entry<String, Integer> exception : exceptions.entrySet()) {
System.out.println("There was " + exception.getValue() + " times this exception:");
System.out.println(exception.getKey());
}
}
public static class NettyServer {
private ChannelFuture channelFuture;
private EventLoopGroup bossGroup;
private EventLoopGroup workerGroup;
private AtomicInteger connections = new AtomicInteger(0);
private ExceptionCounter exceptionCounter = new ExceptionCounter();
public NettyServer(int port) throws InterruptedException {
bossGroup = new NioEventLoopGroup();
workerGroup = new NioEventLoopGroup();
ServerBootstrap serverBootstrap = new ServerBootstrap();
serverBootstrap.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new TimeServerHandler() {
@Override
public void channelActive(final ChannelHandlerContext ctx) {
connections.incrementAndGet();
super.channelActive(ctx);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
exceptionCounter.countException(cause);
super.exceptionCaught(ctx, cause);
}
});
}
}).option(ChannelOption.SO_BACKLOG, 128).childOption(ChannelOption.SO_KEEPALIVE, true);
channelFuture = serverBootstrap.bind(port).sync();
}
public int getPort() {
return ((InetSocketAddress) channelFuture.channel().localAddress()).getPort();
}
public int connectionsCount() {
return connections.get();
}
public Map<String, Integer> getExceptions() {
return exceptionCounter.getExceptions();
}
public void stop() {
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
try {
bossGroup.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
workerGroup.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
public static class NettyClient {
private Bootstrap bootstrap;
private EventLoopGroup workerGroup;
private ExceptionCounter exceptionCounter = new ExceptionCounter();
public NettyClient() {
workerGroup = new NioEventLoopGroup();
bootstrap = new Bootstrap();
bootstrap.group(workerGroup);
bootstrap.channel(NioSocketChannel.class);
bootstrap.option(ChannelOption.SO_KEEPALIVE, true);
bootstrap.handler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new TimeClientHandler() {
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
exceptionCounter.countException(cause);
super.exceptionCaught(ctx, cause);
}
});
}
});
}
public Bootstrap getBootstrap() {
return bootstrap;
}
public void stop() {
workerGroup.shutdownGracefully();
try {
workerGroup.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
public Map<String, Integer> getExceptions() {
return exceptionCounter.getExceptions();
}
}
public static class TimeServerHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelActive(final ChannelHandlerContext ctx) {
final ByteBuf time = ctx.alloc().buffer(4);
time.writeInt((int) (System.currentTimeMillis() / 1000L + 2208988800L));
final ChannelFuture f = ctx.writeAndFlush(time);
f.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) {
assert f == future;
ctx.close();
}
});
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
ctx.close();
}
}
public static class TimeClientHandler extends ChannelInboundHandlerAdapter {
private ThreadLocal<ByteBuf> buf = new ThreadLocal<ByteBuf>();
@Override
public void handlerAdded(ChannelHandlerContext ctx) {
buf.set(ctx.alloc().buffer(4));
}
@Override
public void handlerRemoved(ChannelHandlerContext ctx) {
buf.get().release();
buf.remove();
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
ByteBuf m = (ByteBuf) msg;
buf.get().writeBytes(m);
m.release();
if (buf.get().readableBytes() >= 4) {
long currentTimeMillis = (buf.get().readUnsignedInt() - 2208988800L) * 1000L;
ctx.close();
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
ctx.close();
}
}
public static class ExceptionCounter {
private ConcurrentHashMap<String, AtomicInteger> exceptions = new ConcurrentHashMap<String, AtomicInteger>();
private void countException(Throwable cause) {
StringWriter writer = new StringWriter();
cause.printStackTrace(new PrintWriter(writer));
String stackTrace = writer.toString();
AtomicInteger exceptionCount = exceptions.get(stackTrace);
if (exceptionCount == null) {
exceptionCount = new AtomicInteger(0);
AtomicInteger prevCount = exceptions.putIfAbsent(stackTrace, exceptionCount);
if (prevCount != null) {
exceptionCount = prevCount;
}
}
exceptionCount.incrementAndGet();
}
public Map<String, Integer> getExceptions() {
Map<String, Integer> newMap = exceptions.entrySet().stream()
.collect(Collectors.toMap(Map.Entry::getKey, e -> e.getValue().get()));
return Collections.unmodifiableMap(newMap);
}
}
}
The output is:
starting server
server started at port: 56069
starting client
client started
performing 131072 connections
client performed 131072 connections
wait a bit to give a chance for server to finish processing incoming requests
shutting down server and client
stopped, server received: 34735 connections
Where did we lose 96337 connections?
server exceptions:
There were no exceptions
client exceptions:
There were 258 occurrences of this exception:
java.io.IOException: Istniejące połączenie zostało gwałtownie zamknięte przez zdalnego hosta
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1100)
at io.netty.buffer.WrappedByteBuf.writeBytes(WrappedByteBuf.java:813)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:372)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:745)
There were 30312 occurrences of this exception:
java.io.IOException: Istniejące połączenie zostało gwałtownie zamknięte przez zdalnego hosta
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1100)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:372)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:745)
Questions:
Why is this exception thrown?
Where did the lost connections go? Why is there no error reported for them?
How can I avoid this? What is the correct way to program this kind of "high throughput" application so that existing connections are not lost or broken?
Not related to the subject, but maybe some Netty expert will know: why, when I change the field declaration of private ThreadLocal<ByteBuf> buf in TimeClientHandler to also be static, do I get a NullPointerException in TimeClientHandler.handlerRemoved? This is very strange; is this class somehow replicated, or are the threads from the NioEventLoopGroup somehow special?
Environment:
Netty version: netty-all-4.1.12.Final.jar
JVM version: jdk1.8.0_111 64 bit
OS version: Windows 10 64 bit
There is a limit of roughly 64k ports per IP address, so you cannot open 2^17 concurrent connections between one client address and one server address. And since each socket uses a file handle, you may also be hitting the per-process limit on open files; check the "max open files" limit for the running process.
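If the test really has to drive 2^17 short-lived connections, one hedged sketch is to bound how many are in flight at once; the limit of 1000 is an arbitrary assumption, and client, port and noOfConnectionsToPerform refer to the program above (requires java.util.concurrent.Semaphore in addition to the imports already present):
// Bound the number of simultaneously open client connections.
Semaphore inFlight = new Semaphore(1000); // arbitrary example limit
for (int n = 0; n < noOfConnectionsToPerform; n++) {
    inFlight.acquire(); // wait until a slot is free (throws InterruptedException, which main already declares)
    client.getBootstrap().connect("localhost", port)
            .addListener((ChannelFutureListener) cf -> {
                if (cf.isSuccess()) {
                    // release the slot once this short-lived channel has fully closed
                    cf.channel().closeFuture()
                            .addListener((ChannelFutureListener) closed -> inFlight.release());
                } else {
                    inFlight.release();
                }
            });
}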
I am trying to use TooTallNate's Java-WebSocket library to connect to OkCoin. I found this simple code example somewhere, but I can't get it to work. The connection is immediately closed, so the call to mWs.send(...) throws a WebsocketNotConnectedException. I can't figure out why; so far I have found a number of similar questions, none of which have an answer.
import org.java_websocket.client.WebSocketClient;
import org.java_websocket.handshake.ServerHandshake;
import org.json.JSONObject;
import java.net.URI;
import java.net.URISyntaxException;
public class TestApp {
public static void main(String[] args) {
try {
URI uri = new URI("wss://real.okcoin.cn:10440/websocket/okcoinapi");
final WebSocketClient mWs = new WebSocketClient(uri) {
@Override
public void onMessage(String message) {
JSONObject obj = new JSONObject(message);
}
@Override
public void onOpen(ServerHandshake handshake) {
System.out.println("opened connection");
}
@Override
public void onClose(int code, String reason, boolean remote) {
System.out.println("closed connection");
}
@Override
public void onError(Exception ex) {
ex.printStackTrace();
}
};
mWs.connect();
JSONObject obj = new JSONObject();
obj.put("event", "addChannel");
obj.put("channel", "ok_btccny_ticker");
mWs.send(obj.toString());
} catch (URISyntaxException e) {
System.err.println("URI not formatted correctly");
}
}
}
Use mWs.connectBlocking() instead of mWs.connect(). connectBlocking() waits until the handshake has completed, so the connection is actually open before you call send() and it will not appear to close immediately.
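A minimal sketch of that change against the code above (note that connectBlocking() also throws InterruptedException, so the catch block needs to cover it):
if (mWs.connectBlocking()) { // blocks until the handshake has finished (or failed)
    JSONObject obj = new JSONObject();
    obj.put("event", "addChannel");
    obj.put("channel", "ok_btccny_ticker");
    mWs.send(obj.toString()); // the socket is now open, so this no longer throws
} else {
    System.err.println("connection failed");
}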
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.epoll.EpollChannelOption;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollServerSocketChannel;
import io.netty.channel.socket.SocketChannel;
import sun.misc.Signal;
import sun.misc.SignalHandler;
import java.net.InetSocketAddress;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.LinkedList;
import java.util.List;
public class ReusePortServer {
private final int port;
private List<Channel> bindingChannels = new LinkedList<>();
public ReusePortServer(int port) {
this.port = port;
}
private void initSignals() {
Signal.handle(new Signal("BUS"), new SignalHandler() {
@Override public void handle(Signal signal) {
System.out.println("signal arrived");
closeChannels();
}
});
}
synchronized private void closeChannels() {
for (Channel channel : bindingChannels) {
channel.close();
}
bindingChannels.clear();
}
synchronized private void registerChannel(Channel channel) {
bindingChannels.add(channel);
}
public void start() throws Exception {
initSignals();
EventLoopGroup group = new EpollEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(group)
.channel(EpollServerSocketChannel.class)
.option(EpollChannelOption.SO_REUSEPORT, true)
.localAddress(new InetSocketAddress(port))
.childHandler(new ChannelInitializer<SocketChannel>(){
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new ReusePortHandler());
registerChannel(ch);
}
});
for (StackTraceElement e : Thread.currentThread().getStackTrace()) {
System.out.println(e.toString());
}
ChannelFuture f = b.bind().sync();
log(String.format("%s started and listen on %s", ReusePortServer.class.getName(), f.channel().localAddress()));
// registerChannel(ch); // ---------------I also tried to register this channel, but after my signaling, it closes my client's connection, rather than keeping it.
f.channel().closeFuture().sync();
} finally {
group.shutdownGracefully().sync();
}
}
private final static SimpleDateFormat datefmt = new SimpleDateFormat("HH:mm:ss ");
public static void log(final String msg) {
System.out.print(datefmt.format(new Date()));
System.out.println(msg);
System.out.flush();
}
public static void main(final String[] args) throws Exception {
int port = 12355;
new ReusePortServer(port).start();
}
}
Hi, I am looking for a way to stop Netty from listening and accepting on the server socket, while letting it finish any ongoing work on current connections.
I came across the following question:
How to stop netty from listening and accepting on server socket
and, following it, I wrote the code above, which reacts to a signal (kill -7) by doing the closing.
But the result is not what I expected: it closes the TCP connections, yet Netty can still accept new connections.
Am I using the correct way to stop Netty from listening and accepting?
What's wrong here?
You should close the ServerChannel like this:
ChannelFuture f = b.bind().sync();
// Call this once you want to stop accepting new connections.
f.channel().close().sync();
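For context, a hedged sketch of the full sequence inside the start() method above (f and group are the question's variables; closing the server channel only stops new accepts, while existing child channels keep running until the event loop group is shut down):
// Stop accepting new connections, but leave established child channels alone.
f.channel().close().sync();
// ... let in-flight work on the existing connections finish ...
// Finally release the event loop group; this also closes any remaining child channels.
group.shutdownGracefully().sync();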
I spent about two days on this problem. I tried the following, and it works for me:
First, I declare:
static ChannelFuture future;
Then, when I bind the port, I assign the future variable:
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ServerInitializer())
.option(ChannelOption.SO_BACKLOG, NetUtil.SOMAXCONN)
.childOption(ChannelOption.SO_KEEPALIVE, true);
future = b.bind(WS_PORT).sync();
Finally, I add a function to handle the event when the web application stops:
@PreDestroy
void shutdownWorkers() throws InterruptedException {
future.channel().close().sync();
}
We are using WebSockets from the Grizzly project and had expected that the implementation would allow multiple incoming messages over a connection to be processed at the same time. It appears that this is not the case, or there is a configuration step we have missed. To validate this I created a modified echo test that sleeps in onMessage after echoing the text. When a client sends multiple messages over the same connection, the server always blocks until onMessage completes before processing a subsequent message. Is this the expected behaviour?
The simplified server code is as follows:
package com.grorange.samples.echo;
import java.util.concurrent.atomic.AtomicBoolean;
import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.NetworkListener;
import org.glassfish.grizzly.websockets.DataFrame;
import org.glassfish.grizzly.websockets.WebSocket;
import org.glassfish.grizzly.websockets.WebSocketAddOn;
import org.glassfish.grizzly.websockets.WebSocketApplication;
import org.glassfish.grizzly.websockets.WebSocketEngine;
public class Echo extends WebSocketApplication {
private final AtomicBoolean inMessage = new AtomicBoolean(false);
@Override
public void onClose(WebSocket socket, DataFrame frame) {
super.onClose(socket, frame);
System.out.println("Disconnected!");
}
@Override
public void onConnect(WebSocket socket) {
System.out.println("Connected!");
}
@Override
public void onMessage(WebSocket socket, String text) {
System.out.println("Server: " + text);
socket.send(text);
if (this.inMessage.compareAndSet(false, true)) {
try {
Thread.sleep(10000);
} catch (Exception e) {}
this.inMessage.set(false);
}
}
@Override
public void onMessage(WebSocket socket, byte[] bytes) {
socket.send(bytes);
if (this.inMessage.compareAndSet(false, true)) {
try {
Thread.sleep(Long.MAX_VALUE);
} catch (Exception e) {}
this.inMessage.set(false);
}
}
public static void main(String[] args) throws Exception {
HttpServer server = HttpServer.createSimpleServer("http://0.0.0.0", 8083);
WebSocketAddOn addOn = new WebSocketAddOn();
addOn.setTimeoutInSeconds(60);
for (NetworkListener listener : server.getListeners()) {
listener.registerAddOn(addOn);
}
WebSocketEngine.getEngine().register("", "/Echo", new Echo());
server.start();
Thread.sleep(Long.MAX_VALUE);
}
}
The simplified client code is:
Yes, it's expected.
The way to go is to pass message processing, inside onMessage, to a different thread.
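A minimal sketch of that, assuming a plain ExecutorService owned by the application (the pool size of 8 and the slowProcessing(...) call are placeholders):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
private final ExecutorService pool = Executors.newFixedThreadPool(8);
@Override
public void onMessage(WebSocket socket, String text) {
    // Return immediately so the Grizzly worker thread can read the next frame;
    // the slow part runs on the application's own pool.
    pool.submit(() -> {
        socket.send(text);    // echo back
        slowProcessing(text); // placeholder for the 10-second delay
    });
}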
I am trying to implement a TCP server in Java using NIO.
It simply uses the Selector's select method to get the ready keys and then processes those keys depending on whether they are acceptable, readable, and so on. The server works just fine with a single thread, but when I try to use more threads to process the keys, the server's responses slow down and it eventually stops responding, say after 4-5 requests.
This is all I am doing (pseudo-code):
Iterator<SelectionKey> keyIterator = selector.selectedKeys().iterator();
while (keyIterator.hasNext()) {
SelectionKey readyKey = keyIterator.next();
if (readyKey.isAcceptable()) {
//A new connection attempt, registering socket channel with selector
} else {
Worker.add( readyKey );
}
}
Worker is the thread class that performs Input/Output from the channel.
This is the code of my Worker class:
private static List<SelectionKey> keyPool = Collections.synchronizedList(new LinkedList());
public static void add(SelectionKey key) {
synchronized (keyPool) {
keyPool.add(key);
keyPool.notifyAll();
}
}
public void run() {
while ( true ) {
SelectionKey myKey = null;
synchronized (keyPool) {
try {
while (keyPool.isEmpty()) {
keyPool.wait();
}
} catch (InterruptedException ex) {
}
myKey = keyPool.remove(0);
keyPool.notifyAll();
}
if (myKey != null && myKey.isValid() ) {
if (myKey.isReadable()) {
//Performing reading
} else if (myKey.isWritable()) {
//performing writing
myKey.cancel();
}
}
}
}
My basic idea is to add the key to the keyPool from which various threads can get a key, one at a time.
My BaseServer class itself runs as a thread. It creates 10 Worker threads before the event loop begins. I also tried increasing the priority of the BaseServer thread so that it gets more chances to accept the acceptable keys. Still, it stops responding after approximately 8 requests. Please help me see where I am going wrong. Thanks in advance. :)
Also, you aren't removing anything from the selected-key set. You must do that every time around the loop, e.g. by calling keyIterator.remove() after you call next().
You need to read the NIO Tutorials.
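The accept loop with that remove() call added, as a sketch:
Iterator<SelectionKey> keyIterator = selector.selectedKeys().iterator();
while (keyIterator.hasNext()) {
    SelectionKey readyKey = keyIterator.next();
    keyIterator.remove(); // otherwise the key stays in the selected-key set and is reported again
    if (readyKey.isAcceptable()) {
        // accept and register the new socket channel with the selector
    } else {
        Worker.add(readyKey);
    }
}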
First of all, you should not really be using wait() and notify() calls anymore, since there are good standard Java (1.5+) classes in java.util.concurrent for this, such as BlockingQueue (see the sketch after this answer).
Second, it's suggested to do the IO in the selecting thread itself, not in the worker threads. The worker threads should just queue up reads and writes to the selector thread(s).
This page explains it pretty well and even provides working code samples of a simple TCP/IP server: http://rox-xmlrpc.sourceforge.net/niotut/
Sorry, I don't yet have time to look at your specific example.
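As a rough sketch of the first suggestion only (a java.util.concurrent BlockingQueue/LinkedBlockingQueue in place of the synchronized list plus wait()/notifyAll(), while keeping the asker's worker-thread I/O):
private static final BlockingQueue<SelectionKey> keyPool = new LinkedBlockingQueue<>();
public static void add(SelectionKey key) {
    keyPool.offer(key); // never blocks on an unbounded queue
}
public void run() {
    while (!Thread.currentThread().isInterrupted()) {
        try {
            SelectionKey myKey = keyPool.take(); // blocks until a key is available
            if (myKey.isValid()) {
                // perform the read or write for this key here
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the flag and exit
        }
    }
}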
Try using xsocket library. It saved me a lot of time reading on forums.
Download: http://xsocket.org/
Tutorial: http://xsocket.sourceforge.net/core/tutorial/V2/TutorialCore.htm
Server Code:
import org.xsocket.connection.*;
/**
*
* @author wsserver
*/
public class XServer {
protected static IServer server;
public static void main(String[] args) {
try {
server = new Server(9905, new XServerHandler());
server.start();
} catch (Exception ex) {
System.out.println(ex.getMessage());
}
}
protected static void shutdownServer(){
try{
server.close();
}catch(Exception ex){
System.out.println(ex.getMessage());
}
}
}
Server Handler:
import java.io.IOException;
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CharsetEncoder;
import java.util.*;
import org.xsocket.*;
import org.xsocket.connection.*;
public class XServerHandler implements IConnectHandler, IDisconnectHandler, IDataHandler {
private Set<ConnectedClients> sessions = Collections.synchronizedSet(new HashSet<ConnectedClients>());
Charset charset = Charset.forName("ISO-8859-1");
CharsetEncoder encoder = charset.newEncoder();
CharsetDecoder decoder = charset.newDecoder();
ByteBuffer buffer = ByteBuffer.allocate(1024);
@Override
public boolean onConnect(INonBlockingConnection inbc) throws IOException, BufferUnderflowException, MaxReadSizeExceededException {
try {
synchronized (sessions) {
sessions.add(new ConnectedClients(inbc, inbc.getRemoteAddress()));
}
System.out.println("onConnect"+" IP:"+inbc.getRemoteAddress().getHostAddress()+" Port:"+inbc.getRemotePort());
} catch (Exception ex) {
System.out.println("onConnect: " + ex.getMessage());
}
return true;
}
@Override
public boolean onDisconnect(INonBlockingConnection inbc) throws IOException {
try {
synchronized (sessions) {
sessions.removeIf(client -> client.getInbc() == inbc); // the set holds ConnectedClients, so remove by connection
}
System.out.println("onDisconnect");
} catch (Exception ex) {
System.out.println("onDisconnect: " + ex.getMessage());
}
return true;
}
@Override
public boolean onData(INonBlockingConnection inbc) throws IOException, BufferUnderflowException, ClosedChannelException, MaxReadSizeExceededException {
inbc.read(buffer);
buffer.flip();
String request = decoder.decode(buffer).toString();
System.out.println("request:"+request);
buffer.clear();
return true;
}
}
Connected Clients:
import java.net.InetAddress;
import org.xsocket.connection.INonBlockingConnection;
/**
*
* @author wsserver
*/
public class ConnectedClients {
private INonBlockingConnection inbc;
private InetAddress address;
//CONSTRUCTOR
public ConnectedClients(INonBlockingConnection inbc, InetAddress address) {
this.inbc = inbc;
this.address = address;
}
//GETERS AND SETTERS
public INonBlockingConnection getInbc() {
return inbc;
}
public void setInbc(INonBlockingConnection inbc) {
this.inbc = inbc;
}
public InetAddress getAddress() {
return address;
}
public void setAddress(InetAddress address) {
this.address = address;
}
}
Client Code:
import java.net.InetAddress;
import org.xsocket.connection.INonBlockingConnection;
import org.xsocket.connection.NonBlockingConnection;
/**
*
* @author wsserver
*/
public class XClient {
protected static INonBlockingConnection inbc;
public static void main(String[] args) {
try {
inbc = new NonBlockingConnection(InetAddress.getByName("localhost"), 9905, new XClientHandler());
while(true){
}
} catch (Exception ex) {
System.out.println(ex.getMessage());
}
}
}
Client Handler:
import java.io.IOException;
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CharsetEncoder;
import org.xsocket.MaxReadSizeExceededException;
import org.xsocket.connection.IConnectExceptionHandler;
import org.xsocket.connection.IConnectHandler;
import org.xsocket.connection.IDataHandler;
import org.xsocket.connection.IDisconnectHandler;
import org.xsocket.connection.INonBlockingConnection;
/**
*
* @author wsserver
*/
public class XClientHandler implements IConnectHandler, IDataHandler,IDisconnectHandler, IConnectExceptionHandler {
Charset charset = Charset.forName("ISO-8859-1");
CharsetEncoder encoder = charset.newEncoder();
CharsetDecoder decoder = charset.newDecoder();
ByteBuffer buffer = ByteBuffer.allocate(1024);
@Override
public boolean onConnect(INonBlockingConnection nbc) throws IOException {
System.out.println("Connected to server");
nbc.write("hello server\r\n");
return true;
}
@Override
public boolean onConnectException(INonBlockingConnection nbc, IOException ioe) throws IOException {
System.out.println("On connect exception:"+ioe.getMessage());
return true;
}
@Override
public boolean onDisconnect(INonBlockingConnection nbc) throws IOException {
System.out.println("Dissconected from server");
return true;
}
@Override
public boolean onData(INonBlockingConnection inbc) throws IOException, BufferUnderflowException, ClosedChannelException, MaxReadSizeExceededException {
inbc.read(buffer);
buffer.flip();
String request = decoder.decode(buffer).toString();
System.out.println(request);
buffer.clear();
return true;
}
}
I spent a lot of time reading forums about this; I hope my code can help you.