I am trying to implement a TCP server in Java using NIO.
It simply uses the Selector's select method to get the ready keys, and then processes those keys if they are acceptable, readable and so on. The server works just fine while I'm using a single thread, but when I try to use more threads to process the keys, the server's responses slow down and it eventually stops responding, say after 4-5 requests.
This is all I am doing (pseudo-code):
Iterator<SelectionKey> keyIterator = selector.selectedKeys().iterator();
while (keyIterator.hasNext()) {
SelectionKey readyKey = keyIterator.next();
if (readyKey.isAcceptable()) {
//A new connection attempt, registering socket channel with selector
} else {
Worker.add( readyKey );
}
}
Worker is the thread class that performs the input/output on the channel.
This is the code of my Worker class:
private static List<SelectionKey> keyPool = Collections.synchronizedList(new LinkedList<SelectionKey>());
public static void add(SelectionKey key) {
synchronized (keyPool) {
keyPool.add(key);
keyPool.notifyAll();
}
}
public void run() {
while ( true ) {
SelectionKey myKey = null;
synchronized (keyPool) {
try {
while (keyPool.isEmpty()) {
keyPool.wait();
}
} catch (InterruptedException ex) {
}
myKey = keyPool.remove(0);
keyPool.notifyAll();
}
if (myKey != null && myKey.isValid() ) {
if (myKey.isReadable()) {
//Performing reading
} else if (myKey.isWritable()) {
//performing writing
myKey.cancel();
}
}
}
}
My basic idea is to add the key to the keyPool from which various threads can get a key, one at a time.
My BaseServer class itself runs as a thread. It creates 10 Worker threads before the event loop begins. I also tried to increase the priority of the BaseServer thread, so that it gets more chances to accept the acceptable keys. Still, it stops responding after approx. 8 requests. Please help me figure out where I am going wrong. Thanks in advance. :)
You need to read the NIO Tutorials.
First of all, you should not really be using wait() and notify() calls any more, since there are good standard Java (1.5+) wrapper classes in java.util.concurrent, such as BlockingQueue.
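To illustrate the first point, here is a minimal sketch of the same worker pool rebuilt on a BlockingQueue instead of wait()/notifyAll() (the class shape here is mine, not taken from your code):

import java.nio.channels.SelectionKey;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Worker implements Runnable {
    // The thread-safe queue replaces the synchronized list plus the wait()/notifyAll() handshake.
    private static final BlockingQueue<SelectionKey> keyPool = new LinkedBlockingQueue<SelectionKey>();

    public static void add(SelectionKey key) {
        keyPool.offer(key); // never blocks on an unbounded queue
    }

    @Override
    public void run() {
        try {
            while (true) {
                SelectionKey myKey = keyPool.take(); // blocks until a key is available
                if (myKey.isValid()) {
                    if (myKey.isReadable()) {
                        // perform reading
                    } else if (myKey.isWritable()) {
                        // perform writing
                    }
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // let the worker exit cleanly
        }
    }
}

Note that handing ready keys to worker threads still suffers from the problem described in the next point: a key that stays ready keeps being selected, and can be queued again before a worker has finished with it.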
Second, it's suggested to do the I/O in the selecting thread itself, not in the worker threads. The worker threads should just queue up reads and writes to the selector thread(s).
Third, you aren't removing anything from the selected-key set. You must do that every time around the loop, e.g. by calling keyIterator.remove() after you call next().
This page explains it pretty well and even provides working code samples of a simple TCP/IP server: http://rox-xmlrpc.sourceforge.net/niotut/
Sorry, I don't yet have time to look at your specific example.
Try using the xsocket library. It saved me a lot of time otherwise spent reading forums.
Download: http://xsocket.org/
Tutorial: http://xsocket.sourceforge.net/core/tutorial/V2/TutorialCore.htm
Server Code:
import org.xsocket.connection.*;
/**
*
* @author wsserver
*/
public class XServer {
protected static IServer server;
public static void main(String[] args) {
try {
server = new Server(9905, new XServerHandler());
server.start();
} catch (Exception ex) {
System.out.println(ex.getMessage());
}
}
protected static void shutdownServer(){
try{
server.close();
}catch(Exception ex){
System.out.println(ex.getMessage());
}
}
}
Server Handler:
import java.io.IOException;
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CharsetEncoder;
import java.util.*;
import org.xsocket.*;
import org.xsocket.connection.*;
public class XServerHandler implements IConnectHandler, IDisconnectHandler, IDataHandler {
private Set<ConnectedClients> sessions = Collections.synchronizedSet(new HashSet<ConnectedClients>());
Charset charset = Charset.forName("ISO-8859-1");
CharsetEncoder encoder = charset.newEncoder();
CharsetDecoder decoder = charset.newDecoder();
ByteBuffer buffer = ByteBuffer.allocate(1024);
@Override
public boolean onConnect(INonBlockingConnection inbc) throws IOException, BufferUnderflowException, MaxReadSizeExceededException {
try {
synchronized (sessions) {
sessions.add(new ConnectedClients(inbc, inbc.getRemoteAddress()));
}
System.out.println("onConnect"+" IP:"+inbc.getRemoteAddress().getHostAddress()+" Port:"+inbc.getRemotePort());
} catch (Exception ex) {
System.out.println("onConnect: " + ex.getMessage());
}
return true;
}
@Override
public boolean onDisconnect(INonBlockingConnection inbc) throws IOException {
try {
synchronized (sessions) {
sessions.remove(inbc);
}
System.out.println("onDisconnect");
} catch (Exception ex) {
System.out.println("onDisconnect: " + ex.getMessage());
}
return true;
}
@Override
public boolean onData(INonBlockingConnection inbc) throws IOException, BufferUnderflowException, ClosedChannelException, MaxReadSizeExceededException {
inbc.read(buffer);
buffer.flip();
String request = decoder.decode(buffer).toString();
System.out.println("request:"+request);
buffer.clear();
return true;
}
}
Connected Clients:
import java.net.InetAddress;
import org.xsocket.connection.INonBlockingConnection;
/**
*
* @author wsserver
*/
public class ConnectedClients {
private INonBlockingConnection inbc;
private InetAddress address;
//CONSTRUCTOR
public ConnectedClients(INonBlockingConnection inbc, InetAddress address) {
this.inbc = inbc;
this.address = address;
}
//GETERS AND SETTERS
public INonBlockingConnection getInbc() {
return inbc;
}
public void setInbc(INonBlockingConnection inbc) {
this.inbc = inbc;
}
public InetAddress getAddress() {
return address;
}
public void setAddress(InetAddress address) {
this.address = address;
}
}
Client Code:
import java.net.InetAddress;
import org.xsocket.connection.INonBlockingConnection;
import org.xsocket.connection.NonBlockingConnection;
/**
*
* @author wsserver
*/
public class XClient {
protected static INonBlockingConnection inbc;
public static void main(String[] args) {
try {
inbc = new NonBlockingConnection(InetAddress.getByName("localhost"), 9905, new XClientHandler());
while(true){
}
} catch (Exception ex) {
System.out.println(ex.getMessage());
}
}
}
Client Handler:
import java.io.IOException;
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CharsetEncoder;
import org.xsocket.MaxReadSizeExceededException;
import org.xsocket.connection.IConnectExceptionHandler;
import org.xsocket.connection.IConnectHandler;
import org.xsocket.connection.IDataHandler;
import org.xsocket.connection.IDisconnectHandler;
import org.xsocket.connection.INonBlockingConnection;
/**
*
* @author wsserver
*/
public class XClientHandler implements IConnectHandler, IDataHandler,IDisconnectHandler, IConnectExceptionHandler {
Charset charset = Charset.forName("ISO-8859-1");
CharsetEncoder encoder = charset.newEncoder();
CharsetDecoder decoder = charset.newDecoder();
ByteBuffer buffer = ByteBuffer.allocate(1024);
@Override
public boolean onConnect(INonBlockingConnection nbc) throws IOException {
System.out.println("Connected to server");
nbc.write("hello server\r\n");
return true;
}
@Override
public boolean onConnectException(INonBlockingConnection nbc, IOException ioe) throws IOException {
System.out.println("On connect exception:"+ioe.getMessage());
return true;
}
@Override
public boolean onDisconnect(INonBlockingConnection nbc) throws IOException {
System.out.println("Dissconected from server");
return true;
}
@Override
public boolean onData(INonBlockingConnection inbc) throws IOException, BufferUnderflowException, ClosedChannelException, MaxReadSizeExceededException {
inbc.read(buffer);
buffer.flip();
String request = decoder.decode(buffer).toString();
System.out.println(request);
buffer.clear();
return true;
}
}
I spent a lot of time reading forums about this; I hope I can help you with my code.
Related
I'm new to Netty, and based on an example I found I wrote a Netty HTTP server that keeps HTTP connections open to send server-sent events to the browser client.
The problem is that it only accepts up to about ~5 connections and after that blocks new connections. I googled and found that most answers said to set SO_BACKLOG to a higher value. I tried different values but saw no difference. I even set it to Integer.MAX_VALUE and still had only 5 connections.
Server code (Using Netty version 4.1.6.Final):
package server;
import static io.netty.buffer.Unpooled.copiedBuffer;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.HttpVersion;
public class NettyHttpServer {
private ChannelFuture channel;
private final EventLoopGroup masterGroup;
public NettyHttpServer() {
masterGroup = new NioEventLoopGroup(100);
}
public void start() {
try {
final ServerBootstrap bootstrap = new ServerBootstrap().group(masterGroup)
.channel(NioServerSocketChannel.class).childHandler(new ChannelInitializer < SocketChannel > () {
@Override
public void initChannel(final SocketChannel ch) throws Exception {
ch.pipeline().addLast("codec", new HttpServerCodec());
ch.pipeline().addLast("aggregator", new HttpObjectAggregator(512 * 1024));
ch.pipeline().addLast("request", new ChannelInboundHandlerAdapter() {
@Override
public void channelRead(final ChannelHandlerContext ctx, final Object msg)
throws Exception {
System.out.println(msg);
registerToPubSub(ctx, msg);
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
ctx.flush();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
ctx.writeAndFlush(new DefaultFullHttpResponse(HttpVersion.HTTP_1_1,
HttpResponseStatus.INTERNAL_SERVER_ERROR,
copiedBuffer(cause.getMessage().getBytes())));
}
});
}
}).option(ChannelOption.SO_BACKLOG, Integer.MAX_VALUE)
.childOption(ChannelOption.SO_KEEPALIVE, true);
channel = bootstrap.bind(8081).sync();
// channels.add(bootstrap.bind(8080).sync());
} catch (final InterruptedException e) {}
}
public void shutdown() {
masterGroup.shutdownGracefully();
try {
channel.channel().closeFuture().sync();
} catch (InterruptedException e) {}
}
private void registerToPubSub(final ChannelHandlerContext ctx, Object msg) {
new Thread() {
@Override
public void run() {
while (true) {
final String responseMessage = "data:abcdef\n\n";
FullHttpResponse response = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK,
copiedBuffer(responseMessage.getBytes()));
response.headers().set(HttpHeaders.Names.CONNECTION, HttpHeaders.Values.KEEP_ALIVE);
response.headers().set(HttpHeaders.Names.CONTENT_TYPE, "text/event-stream");
response.headers().set(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN, "*");
response.headers().set("Cache-Control", "no-cache");
ctx.writeAndFlush(response);
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
};
}.start();
}
public static void main(String[] args) {
new NettyHttpServer().start();
}
}
Client JS code (I run it more than 5 times from my browser in different tabs, and not all of them get the events):
var source = new EventSource("http://localhost:8081");
source.onmessage = function(event) {
console.log(event.data);
};
source.onerror= function(err){console.log(err); source.close()};
source.onopen = function(event){console.log('open'); console.log(event)}
You need to let the browser know that you are done sending the response, and for that you have three options.
Set a content length
Send it chunked
Close the connection when you are done
You aren't doing any of those. I suspect your browser is still waiting for the full response to each request you send, and is using a new connection for each request in your testing. After 5 requests your browser must be refusing to create new connections.
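For a server-sent-events stream the second option is the usual fit: send the response head once with Transfer-Encoding: chunked (and no content length), then write each event as an HTTP content chunk on the same connection. A rough, untested sketch against the Netty 4.1 classes already used in the question (the SseWriter helper name is mine):

import static io.netty.buffer.Unpooled.copiedBuffer;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.DefaultHttpContent;
import io.netty.handler.codec.http.DefaultHttpResponse;
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpResponse;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;

final class SseWriter {

    // Send the response head once per connection, marked as chunked, so the
    // browser knows the body is open-ended and fires onopen immediately.
    static void writeEventStreamHead(ChannelHandlerContext ctx) {
        HttpResponse response = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
        response.headers().set(HttpHeaders.Names.CONTENT_TYPE, "text/event-stream");
        response.headers().set(HttpHeaders.Names.CACHE_CONTROL, "no-cache");
        response.headers().set(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN, "*");
        response.headers().set(HttpHeaders.Names.TRANSFER_ENCODING, HttpHeaders.Values.CHUNKED);
        ctx.writeAndFlush(response);
    }

    // Every subsequent event is just another content chunk on the same connection.
    static void writeEvent(ChannelHandlerContext ctx, String data) {
        ctx.writeAndFlush(new DefaultHttpContent(copiedBuffer(("data:" + data + "\n\n").getBytes())));
    }
}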
Another thing I noticed is that you are creating a new thread for each request in your server, and never letting it die. That will cause problems down the line as you try to scale. If you really want that code to run in a different thread then I suggest looking at overloaded methods for adding handlers to the pipeline; those should let you specify a thread pool to run them in.
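If the per-connection work really has to block (the Thread.sleep in registerToPubSub), a sketch of the pipeline-level alternative would be passing an EventExecutorGroup when the handler is added, rather than spawning unmanaged threads (the group size of 16 is an arbitrary example):

import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;

public class SseChannelInitializer extends ChannelInitializer<SocketChannel> {

    // One shared worker pool for all connections.
    private final EventExecutorGroup sseGroup = new DefaultEventExecutorGroup(16);

    @Override
    public void initChannel(final SocketChannel ch) throws Exception {
        ch.pipeline().addLast("codec", new HttpServerCodec());
        ch.pipeline().addLast("aggregator", new HttpObjectAggregator(512 * 1024));
        // Passing the executor group makes Netty invoke this handler off the I/O
        // event loop, so blocking work does not stall other channels.
        ch.pipeline().addLast(sseGroup, "request", new ChannelInboundHandlerAdapter() {
            // ... same channelRead / channelReadComplete / exceptionCaught bodies as above
        });
    }
}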
I wanted to practice a little with network programming and thread pools in Java. Here is some sample code I have written:
/* User: koray@tugay.biz Date: 21/02/15 Time: 13:30 */
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;
public class MyServer {
static List<ServerSocketThread> myThreadPool = new ArrayList<ServerSocketThread>();
static int numberOfCurrentConnections = 0;
public static void main(String[] args) throws IOException {
ServerSocket serverSocket = new ServerSocket(8888);
ServerSocketThread threadOne = new ServerSocketThread(null);
ServerSocketThread threadTwo = new ServerSocketThread(null);
myThreadPool.add(threadOne);
myThreadPool.add(threadTwo);
while (true) {
if(numberOfCurrentConnections < 2) {
Socket accept = serverSocket.accept();
ServerSocketThread thread = myThreadPool.get(numberOfCurrentConnections);
thread.setSocket(accept);
thread.start();
numberOfCurrentConnections++;
} else {
// I want to force the client to wait until a new Thread is available from the pool.
}
}
}
public static void informFinished() {
numberOfCurrentConnections--;
}
}
and the ServerSocketThread class is as follows:
/* User: koray@tugay.biz Date: 21/02/15 Time: 18:14 */
import java.io.*;
import java.net.Socket;
import java.util.Scanner;
public class ServerSocketThread extends Thread {
Socket socket;
public ServerSocketThread(Socket accept) {
this.socket = accept;
}
@Override
public void run() {
try {
Scanner scanner = new Scanner(socket.getInputStream());
String readLine;
while (!(readLine = scanner.nextLine()).equals("bye")) {
System.out.println(readLine);
}
new PrintWriter(socket.getOutputStream()).write("Bye then..");
socket.close();
MyServer.informFinished();
} catch (IOException e) {
e.printStackTrace();
}
}
public void setSocket(Socket socket) {
this.socket = socket;
}
}
Well I can connect to my server with 2 different terminals like this just fine:
Korays-MacBook-Pro:~ koraytugay$ telnet localhost 8888
Trying ::1...
Connected to localhost.
Escape character is '^]'.
laylay
bombom
And the 3rd connection (if made) will not be served, as there are only 2 threads in the thread pool. But I cannot find a way to make the 3rd client wait until one of the other clients says "bye". What I want is that, after one of the first 2 connected clients disconnects, a thread is allocated to the waiting 3rd client, but how?
I will answer my own question, I made it work like this:
/* User: koray@tugay.biz Date: 21/02/15 Time: 21:12 */
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Stack;
public class MyConnectionAccepter {
private Stack<MySocketThread> mySocketThreads = new Stack<MySocketThread>();
private volatile int currentNumberOfConnections = 0;
public MyConnectionAccepter() {
MySocketThread mySocketThreadOne = new MySocketThread(this);
MySocketThread mySocketThreadTwo = new MySocketThread(this);
mySocketThreadOne.setDaemon(true);
mySocketThreadTwo.setDaemon(true);
mySocketThreadOne.start();
mySocketThreadTwo.start();
mySocketThreads.push(mySocketThreadOne);
mySocketThreads.push(mySocketThreadTwo);
}
public void start() throws IOException {
ServerSocket serverSocket = new ServerSocket(8888);
while (true) {
while (currentNumberOfConnections < 2) {
System.out.println("Blocking now:");
Socket accept = serverSocket.accept();
System.out.println("Connection accepted..");
MySocketThread mySocketThread = mySocketThreads.pop();
mySocketThread.setSocket(accept);
System.out.println("Incrementing connections..");
currentNumberOfConnections++;
System.out.println("End of while..");
}
}
}
public void informIAmDone(MySocketThread mySocketThread) {
mySocketThreads.push(mySocketThread);
currentNumberOfConnections--;
}
}
and
/* User: koray@tugay.biz Date: 21/02/15 Time: 21:04 */
import java.io.IOException;
import java.net.Socket;
import java.util.Scanner;
public class MySocketThread extends Thread {
private volatile Socket socket;
MyConnectionAccepter myConnectionAccepter;
public MySocketThread(MyConnectionAccepter myConnectionAccepter) {
this.myConnectionAccepter = myConnectionAccepter;
}
@Override
public synchronized void run() {
System.out.println("Started...");
serve();
}
public void setSocket(Socket socket) {
this.socket = socket;
System.out.println("Socket not null anymore..");
}
public void serve() {
while(socket == null) {
}
while (socket != null) {
Scanner scanner = null;
try {
scanner = new Scanner(socket.getInputStream());
} catch (IOException e) {
e.printStackTrace();
}
String readLine;
while (!(readLine = scanner.nextLine()).equals("bye")) {
System.out.println(readLine);
}
try {
socket.close();
} catch (IOException e) {
e.printStackTrace();
}
socket = null;
myConnectionAccepter.informIAmDone(this);
}
serve();
}
}
and the Test Class:
/* User: koray@tugay.biz Date: 21/02/15 Time: 21:18 */
import java.io.IOException;
public class MyTestClass {
public static void main(String[] args) throws IOException {
MyConnectionAccepter myConnectionAccepter = new MyConnectionAccepter();
myConnectionAccepter.start();
}
}
I would suggest that you create a thread pool as described in this article: http://tutorials.jenkov.com/java-concurrency/thread-pools.html
So basically, in addition to the pool of threads, you also maintain a queue of tasks. Each thread in the pool continuously polls the task queue for tasks. Whenever a task is available (the queue is not empty), it is picked up by a thread and executed. In your case the task would be handling the client connection. The number of threads in the pool is limited (2 in this case), so at any time the number of connections that can be processed simultaneously is 2. Only when one of the two threads is done executing its current task will it pick up the next one. Each time you receive a new connection request, you add a new task to the queue.
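For example, a minimal sketch of that pattern using java.util.concurrent directly (PooledServer and ClientHandler are illustrative names, not from your code):

import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Scanner;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PooledServer {

    public static void main(String[] args) throws IOException {
        // Two worker threads; additional connections wait in the executor's internal queue.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        ServerSocket serverSocket = new ServerSocket(8888);
        while (true) {
            Socket accepted = serverSocket.accept();   // the 3rd client can connect...
            pool.submit(new ClientHandler(accepted));  // ...but is served only once a thread frees up
        }
    }
}

class ClientHandler implements Runnable {

    private final Socket socket;

    ClientHandler(Socket socket) {
        this.socket = socket;
    }

    @Override
    public void run() {
        try (Socket s = socket;
             Scanner scanner = new Scanner(s.getInputStream());
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            String line;
            // Same "bye" protocol as the original ServerSocketThread.
            while (scanner.hasNextLine() && !(line = scanner.nextLine()).equals("bye")) {
                System.out.println(line);
            }
            out.println("Bye then..");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}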
Hope this helps!
We are using Websockets from the Grizzly project and had expected that the implementation would allow multiple incoming messages over a connection to be processed at the same time. It appears that this is not the case or there is a configuration step that we have missed. To validate this I have created a modified echo test that delays in the onMessage after echoing the text. When a client sends multiple messages over the same connection the server always blocks until onMessage completes before processing a subsequent message. Is this the expected functionality?
The simplified server code is as follows:
package com.grorange.samples.echo;
import java.util.concurrent.atomic.AtomicBoolean;
import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.NetworkListener;
import org.glassfish.grizzly.websockets.DataFrame;
import org.glassfish.grizzly.websockets.WebSocket;
import org.glassfish.grizzly.websockets.WebSocketAddOn;
import org.glassfish.grizzly.websockets.WebSocketApplication;
import org.glassfish.grizzly.websockets.WebSocketEngine;
public class Echo extends WebSocketApplication {
private final AtomicBoolean inMessage = new AtomicBoolean(false);
@Override
public void onClose(WebSocket socket, DataFrame frame) {
super.onClose(socket, frame);
System.out.println("Disconnected!");
}
@Override
public void onConnect(WebSocket socket) {
System.out.println("Connected!");
}
@Override
public void onMessage(WebSocket socket, String text) {
System.out.println("Server: " + text);
socket.send(text);
if (this.inMessage.compareAndSet(false, true)) {
try {
Thread.sleep(10000);
} catch (Exception e) {}
this.inMessage.set(false);
}
}
@Override
public void onMessage(WebSocket socket, byte[] bytes) {
socket.send(bytes);
if (this.inMessage.compareAndSet(false, true)) {
try {
Thread.sleep(Long.MAX_VALUE);
} catch (Exception e) {}
this.inMessage.set(false);
}
}
public static void main(String[] args) throws Exception {
HttpServer server = HttpServer.createSimpleServer("http://0.0.0.0", 8083);
WebSocketAddOn addOn = new WebSocketAddOn();
addOn.setTimeoutInSeconds(60);
for (NetworkListener listener : server.getListeners()) {
listener.registerAddOn(addOn);
}
WebSocketEngine.getEngine().register("", "/Echo", new Echo());
server.start();
Thread.sleep(Long.MAX_VALUE);
}
}
The simplified client code is:
Yes, it's expected.
The way to go is to pass message processing, inside onMessage, to a different thread.
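For example, a rough sketch of that hand-off using a plain executor (the pool size of 8 is arbitrary; error handling omitted):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.glassfish.grizzly.websockets.WebSocket;
import org.glassfish.grizzly.websockets.WebSocketApplication;

public class AsyncEcho extends WebSocketApplication {

    // Worker pool for message processing, so the Grizzly selector thread returns immediately.
    private final ExecutorService workers = Executors.newFixedThreadPool(8);

    @Override
    public void onMessage(final WebSocket socket, final String text) {
        workers.submit(new Runnable() {
            @Override
            public void run() {
                // Any slow processing happens here without delaying the next frame.
                socket.send(text);
            }
        });
    }
}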
Sorry, I searched around for 2 days before I had to post this question. There are similar questions, but none of them helped me.
I am trying to create a simple chat application where the client uses (non-NIO) Socket to connect to the server that listens with a NIO ServerSocketChannel. The server uses a Selector. Until the first client connects, the Selector.select() method is blocked, as expected. But after the first client connects, Selector.select() does not block and returns immediately. This causes my while loop to run continuously.
Sorry, I've pasted the entire code so that you can copy-paste it and run it. I've just started with Java, so any help/pointers will be very much appreciated. Thank you.
P.S.: Right now, the client sends a serialized object (a Message object) over the socket connection and the server reads it. Since the connection is non-blocking, the serialized object is prefixed with the object size (in bytes) before it is sent to the server. This allows the server to read the next "x" bytes and deserialize them into a Message object. The server code is a work in progress.
CLIENT CODE----------
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectInputStream;
import java.net.InetAddress;
import java.net.Socket;
import java.net.UnknownHostException;
import java.nio.ByteBuffer;
public class ChatClient {
void go(){
User u = new User();
u.setName("UserA");
try{
u.setInet(InetAddress.getLocalHost());
}catch (UnknownHostException ex){
System.out.println(ex);
return;
}
Message m = new Message();
m.setType(3);
m.setText("This is the 1st message.");
m.setFromUser(u);
try{
Socket sock = new Socket (InetAddress.getLocalHost(), 5000);
DataOutputStream dataOut = new DataOutputStream(sock.getOutputStream());
ByteArrayOutputStream byteTemp = new ByteArrayOutputStream();
ObjectOutputStream objOut = new ObjectOutputStream (byteTemp);
objOut.writeObject(m);
objOut.flush();
objOut.close();
byte[] byteMessage = byteTemp.toByteArray();
ByteBuffer bb = ByteBuffer.allocate(4);
bb.putInt(byteMessage.length);
byte[] size = new byte[4];
size = bb.array();
System.out.println("Object size = "+byteMessage.length); //370
ByteArrayOutputStream byteOut = new ByteArrayOutputStream();
byteOut.write(size);
byteOut.write(byteMessage);
byte[] finalMessage = byteOut.toByteArray();
dataOut.write(finalMessage,0,finalMessage.length);
dataOut.flush();
System.out.println("Flushed out");
}catch (Exception ex){
System.out.println(ex);
}
}
public static void main (String args[]){
new ChatClient().go();
}
}
SERVER CODE ---------------
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.Set;
import java.util.concurrent.locks.ReentrantLock;
public class CopyOfChatServer {
Object a, b;//Dummy objects for synchronization
SocketChannel clientSock=null;
Selector selector;
SelectionKey key;
void go(){
try{
a=new Object();//Dummy objects for synchronization
b=new Object();//Dummy objects for synchronization
ServerSocketChannel serverSock = ServerSocketChannel.open();
serverSock.socket().bind(new InetSocketAddress(5000));
//Note: ServerSocketChannel is blocking, but each new connection returned by accept() will be made non-blocking (see below)
selector = Selector.open();
new Thread(new SelectorThread()).start(); //Start the SelectorThread
int i=0;
while (true){
clientSock = serverSock.accept();
if (clientSock!=null){
clientSock.configureBlocking(false); //The default client socket returned by accept() is blocking. Set it to non-blocking.
synchronized (b){
selector.wakeup();
synchronized (a){
key = clientSock.register(selector, SelectionKey.OP_READ); //register new client Socket with selector
key.attach(clientSock);
}//sync(a)
}//sync(b)
i++;
}
System.out.println("Here");
}//while(true)
}catch (Exception ex){
System.out.println(ex);
}
}
class SelectorThread implements Runnable{
Set <SelectionKey> selectedKeys;
int readyChannels;
public void run(){
while (true){
try {
synchronized(a){
System.out.println("1. Selector trying to select");
readyChannels = selector.select();//Note: select() is blocking ?? Does not block. Behaves like non-blocking
System.out.println("2. Selector has selected");
}//sync a
synchronized (b){
//just wait till registration is done in main thread
}
if (readyChannels == 0) continue; //Even if select() is blocking, this check is to handle spurious wake-ups
System.out.println("readyChannels>0");
selectedKeys = selector.selectedKeys();
Iterator<SelectionKey> keyIterator = selectedKeys.iterator();
while (keyIterator.hasNext()){
SelectionKey key = keyIterator.next();
keyIterator.remove();//added after the first answer to my question
if (key.isReadable()){
System.out.println("3. Got incoming data");
SocketChannel tempSock = (SocketChannel)key.attachment();
ByteBuffer bb=ByteBuffer.allocate(8000);
int bytesRead=tempSock.read(bb);
System.out.println("4. Bytes read = "+bytesRead);
if (bytesRead>4){
bb.flip();
bb.rewind();
int size = bb.getInt();
System.out.println("5. Size of object = "+size);
byte[] objIn = new byte[size];
for (int i=0;i<size;i++){
objIn[i]=bb.get();
}
bb.compact();
ByteArrayInputStream bIn= new ByteArrayInputStream(objIn);
ObjectInputStream objStream= new ObjectInputStream(bIn);
Message temp1 = (Message) objStream.readObject();
System.out.println("6. Read object back");
System.out.println(temp1.getFromUser().getName());
}
}
}
selectedKeys.clear();
} catch (IOException e) {
e.printStackTrace();
} catch (ClassNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
}
public static void main (String args[]){
new CopyOfChatServer().go();
}
}
MESSAGE Class ----
import java.io.Serializable;
public class Message implements Serializable{
private int type;
private User fromUser;
private User toUser;
private String text;
public int getType() {
return type;
}
public void setType(int type) {
this.type = type;
}
public User getFromUser() {
return fromUser;
}
public void setFromUser(User fromUser) {
this.fromUser = fromUser;
}
public User getToUser() {
return toUser;
}
public void setToUser(User toUser) {
this.toUser = toUser;
}
public String getText() {
return text;
}
public void setText(String text) {
this.text = text;
}
}
USER CLASS --------
import java.io.Serializable;
import java.net.InetAddress;
public class User implements Serializable{
private String name;
private InetAddress inet;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public InetAddress getInet() {
return inet;
}
public void setInet(InetAddress inet) {
this.inet = inet;
}
}
You must put
keyIterator.remove()
after
keyIterator.next()
The selector doesn't remove anything from selectedKeys() itself.
NB You don't need to attach the channel to the key as an attachment. You can get it from key.channel().
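Put together, the inner loop of the SelectorThread would look roughly like this (a sketch using the names from the question):

Iterator<SelectionKey> keyIterator = selector.selectedKeys().iterator();
while (keyIterator.hasNext()) {
    SelectionKey key = keyIterator.next();
    keyIterator.remove(); // mandatory: the Selector never clears selectedKeys() itself
    if (key.isReadable()) {
        SocketChannel tempSock = (SocketChannel) key.channel(); // no attachment needed
        // ... read from tempSock as before
    }
}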
I am currently developing an application that requires random access to many (60k-100k) relatively large files.
Since opening and closing streams is a rather costly operation, I'd prefer to keep the FileChannels for the largest files open until they are no longer needed.
The problem is that since this kind of behaviour is not covered by Java 7's try-with statement, I'm required to close all the FileChannels manually.
But that is becoming increasingly too complicated since the same files could be accessed concurrently throughout the software.
I have implemented a ChannelPool class that keeps track of opened FileChannel instances for each registered Path. The ChannelPool can then be told, at certain intervals, to close those channels whose Path is only weakly referenced by the pool itself.
I would prefer an event-listener approach, but I'd also rather not have to listen to the GC.
The FileChannelPool from Apache Commons doesn't address my problem, because channels still need to be closed manually.
Is there a more elegant solution to this problem? And if not, how can my implementation be improved?
import java.io.IOException;
import java.lang.ref.WeakReference;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Map;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.ConcurrentHashMap;
public class ChannelPool {
private static final ChannelPool defaultInstance = new ChannelPool();
private final ConcurrentHashMap<String, ChannelRef> channels;
private final Timer timer;
private ChannelPool(){
channels = new ConcurrentHashMap<>();
timer = new Timer();
}
public static ChannelPool getDefault(){
return defaultInstance;
}
public void initCleanUp(){
// wait 2 seconds then repeat clean-up every 10 seconds.
timer.schedule(new CleanUpTask(this), 2000, 10000);
}
public void shutDown(){
// must be called manually.
timer.cancel();
closeAll();
}
public FileChannel getChannel(Path path){
ChannelRef cref = channels.get(path.toString());
System.out.println("getChannel called " + channels.size());
if (cref == null){
cref = ChannelRef.newInstance(path);
if (cref == null){
// failed to open channel
return null;
}
ChannelRef oldRef = channels.putIfAbsent(path.toString(), cref);
if (oldRef != null){
try{
// close new channel and let GC dispose of it
cref.channel().close();
System.out.println("redundant channel closed");
}
catch (IOException ex) {}
cref = oldRef;
}
}
return cref.channel();
}
private void remove(String str) {
ChannelRef ref = channels.remove(str);
if (ref != null){
try {
ref.channel().close();
System.out.println("old channel closed");
}
catch (IOException ex) {}
}
}
private void closeAll() {
for (Map.Entry<String, ChannelRef> e : channels.entrySet()){
remove(e.getKey());
}
}
private void maintain() {
// close channels for dereferenced paths
for (Map.Entry<String, ChannelRef> e : channels.entrySet()){
ChannelRef ref = e.getValue();
if (ref != null){
Path p = ref.pathRef().get();
if (p == null){
// gc'd
remove(e.getKey());
}
}
}
}
private static class ChannelRef{
private FileChannel channel;
private WeakReference<Path> ref;
private ChannelRef(FileChannel channel, WeakReference<Path> ref) {
this.channel = channel;
this.ref = ref;
}
private static ChannelRef newInstance(Path path) {
FileChannel fc;
try {
fc = FileChannel.open(path, StandardOpenOption.READ);
}
catch (IOException ex) {
return null;
}
return new ChannelRef(fc, new WeakReference<>(path));
}
private FileChannel channel() {
return channel;
}
private WeakReference<Path> pathRef() {
return ref;
}
}
private static class CleanUpTask extends TimerTask {
private ChannelPool pool;
private CleanUpTask(ChannelPool pool){
super();
this.pool = pool;
}
@Override
public void run() {
pool.maintain();
pool.printState();
}
}
private void printState(){
System.out.println("Clean up performed. " + channels.size() + " channels remain. -- " + System.currentTimeMillis());
for (Map.Entry<String, ChannelRef> e : channels.entrySet()){
ChannelRef cref = e.getValue();
String out = "open: " + cref.channel().isOpen() + " - " + cref.channel().toString();
System.out.println(out);
}
}
}
EDIT:
Thanks to fge's answer I now have exactly what I needed. Thanks!
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.ExecutionException;
public class Channels {
private static final LoadingCache<Path, FileChannel> channelCache =
CacheBuilder.newBuilder()
.weakKeys()
.removalListener(
new RemovalListener<Path, FileChannel>(){
@Override
public void onRemoval(RemovalNotification<Path, FileChannel> removal) {
FileChannel fc = removal.getValue();
try {
fc.close();
}
catch (IOException ex) {}
}
}
)
.build(
new CacheLoader<Path, FileChannel>() {
@Override
public FileChannel load(Path path) throws IOException {
return FileChannel.open(path, StandardOpenOption.READ);
}
}
);
public static FileChannel get(Path path){
try {
return channelCache.get(path);
}
catch (ExecutionException ex){}
return null;
}
}
Have a look here:
http://code.google.com/p/guava-libraries/wiki/CachesExplained
You can use a LoadingCache with a removal listener which would close the channel for you when it expires, and you can specify expiry after access or write.
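For example, swapping the weakKeys() approach from the edit above for time-based expiry would look roughly like this (the 10-minute window is an arbitrary choice):

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.TimeUnit;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;

public class ExpiringChannels {

    static final LoadingCache<Path, FileChannel> channelCache = CacheBuilder.newBuilder()
            // Close any channel that has not been used for 10 minutes.
            .expireAfterAccess(10, TimeUnit.MINUTES)
            .removalListener(new RemovalListener<Path, FileChannel>() {
                @Override
                public void onRemoval(RemovalNotification<Path, FileChannel> removal) {
                    try {
                        removal.getValue().close();
                    } catch (IOException ignored) {}
                }
            })
            .build(new CacheLoader<Path, FileChannel>() {
                @Override
                public FileChannel load(Path path) throws IOException {
                    return FileChannel.open(path, StandardOpenOption.READ);
                }
            });
}

Note that Guava evicts lazily, during other cache operations; if the cache can sit idle for long stretches, a periodic call to channelCache.cleanUp() (for instance from the existing Timer) keeps expired channels from lingering.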