When I wrote a producer to publish messages to my server, I saw this:
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:384)
at io.netty.buffer.UnpooledUnsafeDirectByteBuf.setBytes(UnpooledUnsafeDirectByteBuf.java:447)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
I've searched all around and was told that this happens because the channel is closed.
But in my code, I only close a channel when my channel pool destroys it.
Here is my code:
public static class ChannelFactory implements PoolableObjectFactory<Channel> {
private final Bootstrap bootstrap;
private String host;
private int port;
public ChannelFactory(Bootstrap bootstrap, String host, int port) {
this.bootstrap = bootstrap;
this.host = host;
this.port = port;
}
@Override
public Channel makeObject() throws Exception {
System.out.println("Create new channel!!!");
bootstrap.validate();
return bootstrap.connect(host, port).channel();
}
@Override
public void destroyObject(Channel channel) throws Exception {
ChannelFuture close = channel.close();
if (close.isSuccess()) {
System.out.println(channel + " close successfully");
}
}
@Override
public boolean validateObject(Channel channel) {
System.out.println("Validate object");
return (channel.isOpen());
}
@Override
public void activateObject(Channel channel) throws Exception {
System.out.println(channel + " is activated");
}
@Override
public void passivateObject(Channel channel) throws Exception {
System.out.println(channel + " is passivated");
}
/**
* @return the host
*/
public String getHost() {
return host;
}
/**
* @param host the host to set
* @return
*/
public ChannelFactory setHost(String host) {
this.host = host;
return this;
}
/**
* @return the port
*/
public int getPort() {
return port;
}
/**
* @param port the port to set
* @return
*/
public ChannelFactory setPort(int port) {
this.port = port;
return this;
}
}
And here is my Runner:
public static class Runner implements Runnable {
private Channel channel;
private ButtyMessage message;
private MyChannelPool channelPool;
public Runner(MyChannelPool channelPool, Channel channel, ButtyMessage message) {
this.channel = channel;
this.message = message;
this.channelPool = channelPool;
}
@Override
public void run() {
channel.writeAndFlush(message.content()).syncUninterruptibly().addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
channelPool.returnObject(future.channel());
}
});
}
}
And my main:
public static void main(String[] args) throws InterruptedException {
final String host = "127.0.0.1";
final int port = 8080;
int jobSize = 100;
int jobNumber = 10000;
final Bootstrap b = func(host, port);
final MyChannelPool channelPool = new MyChannelPool(new ChannelFactory(b, host, port));
ExecutorService threadPool = Executors.newFixedThreadPool(1);
for (int i = 0; i < jobNumber; i++) {
try {
threadPool.execute(new Runner(channelPool, channelPool.borrowObject(), new ButtyMessage()));
} catch (Exception ex) {
System.out.println("ex = " + ex.getMessage());
}
}
}
Here ButtyMessage extends ByteBufHolder.
In my Runner class, if I sleep(10) after writeAndFlush, it runs quite OK. But I don't want to rely on sleep, so I used a ChannelFutureListener instead, and the result is bad. If I send about 1,000 to 10,000 messages, it crashes and throws the exception above. Is there any way to avoid this?
Thanks all.
Sorry for my bad explanation and my English :)
You have several issues that could explain this. Most of them are related to incorrect usage of asynchronous operations and futures.
I don't know if it is linked to your issue, but if you really want to print when the channel is really closed, you have to wait on the future, since the future returned by close() (or any other operation) is returned immediately, without waiting for the real close. Therefore your test if (close.isSuccess()) will almost always be false.
public void destroyObject(final Channel channel) throws Exception {
channel.close().addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture close) {
if (close.isSuccess()) {
System.out.println(channel + " close successfully");
}
}
});
}
However, as I suppose it is only for debug purposes, it is not mandatory.
Another one: you send back to your pool a channel that is not yet connected (which could explain why your sleep(10) helps). You have to wait on the connect():
public Channel makeObject() throws Exception {
System.out.println("Create new channel!!!");
//bootstrap.validate(); // this is implicitly called in connect()
ChannelFuture future = bootstrap.connect(host, port).awaitUninterruptibly();
if (future.isSuccess()) {
return future.channel();
} else {
// do what you need to do when the connection is not done, for instance:
throw new IOException("connect failed", future.cause());
}
}
Third one: validation of a connected channel is better done using isActive():
@Override
public boolean validateObject(Channel channel) {
System.out.println("Validate object");
return channel.isActive(); // instead of isOpen()
}
Fourth one: in your runner, you wrongly wait on the future when you should not. You can remove your syncUninterruptibly() and leave the rest as is:
@Override
public void run() {
channel.writeAndFlush(message.content()).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
channelPool.returnObject(future.channel());
}
});
}
And finally, I suppose you know your test is completely sequential (1 thread in your executor), such that each task will reuse the very same channel over and over?
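(If concurrency is actually wanted, widening the executor is a one-line change; the pool size below is arbitrary.)
// Several worker threads let several pooled channels be in flight at once.
ExecutorService threadPool = Executors.newFixedThreadPool(8);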
Could you try to change the 4 points to see if it corrects your issue?
EDIT: after the requester's comment
For syncUninterruptibly(), I did not read carefully. If you want to block on the write, then you don't need the extra addListener, since the future is done once the sync is over. So you can call channelPool.returnObject directly after the sync.
So you should write it this way, which is simpler:
@Override
public void run() {
ChannelFuture future = channel.writeAndFlush(message.content()).syncUninterruptibly();
channelPool.returnObject(future.channel());
}
For fireChannelActive, it will be called as soon as the connect is finished (so from makeObject, sometime in the future). Moreover, once disconnected (as you noticed in your exception), the channel is no longer usable and must be recreated from zero. So I would suggest using isActive, such that, if the channel is not active, it will be removed through destroyObject...
Take a look at the channel state model here.
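If MyChannelPool wraps Commons Pool's GenericObjectPool (an assumption; that class isn't shown), wiring the validation in would look roughly like this sketch, using Commons Pool 1.6, which defines the generic PoolableObjectFactory used above:
// Validate channels on borrow/return so inactive ones are destroyed and
// replaced instead of being handed out again.
GenericObjectPool<Channel> pool = new GenericObjectPool<Channel>(
        new ChannelFactory(b, host, port));
pool.setTestOnBorrow(true);  // calls validateObject() before lending a channel
pool.setTestOnReturn(true);  // and again when the channel comes back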
Finally, I've found a solution myself. But I'm still thinking about another solution. (This solution is copied exactly from the Netty 4.0.28 release notes.)
final String host = "127.0.0.1";
final int port = 8080;
int jobNumber = 100000;
final EventLoopGroup group = new NioEventLoopGroup(100);
ChannelPoolMap<InetSocketAddress, MyChannelPool> poolMap = new AbstractChannelPoolMap<InetSocketAddress, MyChannelPool>() {
@Override
protected MyChannelPool newPool(InetSocketAddress key) {
Bootstrap bootstrap = func(group, key.getHostName(), key.getPort());
return new MyChannelPool(bootstrap, new _AbstractChannelPoolHandler());
}
};
ChannelPoolMap<InetSocketAddress, FixedChannelPool> poolMap1 = new AbstractChannelPoolMap<InetSocketAddress, FixedChannelPool>() {
@Override
protected FixedChannelPool newPool(InetSocketAddress key) {
Bootstrap bootstrap = func(group, key.getHostName(), key.getPort());
return new FixedChannelPool(bootstrap, new _AbstractChannelPoolHandler(), 10);
}
};
final ChannelPool myChannelPool = poolMap.get(new InetSocketAddress(host, port));
final CountDownLatch latch = new CountDownLatch(jobNumber);
for (int i = 0; i < jobNumber; i++) {
final int counter = i;
final Future<Channel> future = myChannelPool.acquire();
future.addListener(new FutureListener<Channel>() {
@Override
public void operationComplete(Future<Channel> f) {
if (f.isSuccess()) {
Channel ch = f.getNow();
// Do somethings
ch.writeAndFlush(new ButtyMessage().content()).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (future.isSuccess()) {
System.out.println("counter = " + counter);
System.out.println("future = " + future.channel());
latch.countDown();
}
}
});
// Release back to pool
myChannelPool.release(ch);
} else {
System.out.println(f.cause().getMessage());
f.cause().printStackTrace();
}
}
});
}
try {
latch.await();
System.exit(0);
} catch (InterruptedException ex) {
System.out.println("ex = " + ex.getMessage());
}
As you can see, I use SimpleChannelPool and FixedChannelPool (a subclass of SimpleChannelPool provided by Netty).
What they do:
SimpleChannelPool: opens as many channels as it needs ---> if you have 100,000 messages, this causes errors, of course. Many sockets get opened, and then IOException: Too many open files occurs. (Is that really a pool? Create as many as possible and then throw an exception? I don't call that pooling.)
FixedChannelPool: did not work in my case (still studying why; see the sketch below).
Indeed, I want to use an ObjectPool instead, and I may post it as soon as I finish. Thanks @Frederic Brégier for helping me so much!
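One possible culprit worth checking (an assumption on my part, not something verified here): FixedChannelPool caps concurrent connections, and with far more acquire() calls than connections, pending acquires can pile up past the default limit. Netty 4.0.28's fuller constructor exposes those knobs; the timeout and pending-acquire values below are illustrative only.
FixedChannelPool pool = new FixedChannelPool(
        bootstrap,                                  // the preconfigured Bootstrap
        new _AbstractChannelPoolHandler(),          // the handler used above
        ChannelHealthChecker.ACTIVE,                // health check via isActive()
        FixedChannelPool.AcquireTimeoutAction.FAIL, // fail fast instead of waiting forever
        5000,                                       // acquireTimeoutMillis
        10,                                         // maxConnections
        100000);                                    // maxPendingAcquires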
Related
There are two classes, Client and ChatWindow. Client has DatagramSocket, InetAddress, and port fields, along with methods for sending, receiving, and closing the socket. To close the socket I use an anonymous thread, "socketCLOSE".
Client class
public class Client {
private static final long serialVersionUID = 1L;
private DatagramSocket socket;
private String name, address;
private int port;
private InetAddress ip;
private Thread send;
private int ID = -1;
private boolean flag = false;
public Client(String name, String address, int port) {
this.name = name;
this.address = address;
this.port = port;
}
public String receive() {
byte[] data = new byte[1024];
DatagramPacket packet = new DatagramPacket(data, data.length);
try {
socket.receive(packet);
} catch (IOException e) {
e.printStackTrace();
}
String message = new String(packet.getData());
return message;
}
public void send(final byte[] data) {
send = new Thread("Send") {
public void run() {
DatagramPacket packet = new DatagramPacket(data, data.length, ip, port);
try {
socket.send(packet);
} catch (IOException e) {
e.printStackTrace();
}
}
};
send.start();
}
public int close() {
System.err.println("close function called");
new Thread("socketClOSE") {
public void run() {
synchronized (socket) {
socket.close();
System.err.println("is socket closed "+socket.isClosed());
}
}
}.start();
return 0;
}
The ClientWindow class is sort of a GUI which extends JFrame and implements Runnable. There are two threads inside the class: run and Listen.
public class ClientWindow extends JFrame implements Runnable {
private static final long serialVersionUID = 1L;
private Thread run, listen;
private Client client;
private boolean running = false;
public ClientWindow(String name, String address, int port) {
client = new Client(name, address, port);
createWindow();
console("Attempting a connection to " + address + ":" + port + ", user: " + name);
String connection = "/c/" + name + "/e/";
client.send(connection.getBytes());
running = true;
run = new Thread(this, "Running");
run.start();
}
private void createWindow() {
{
//Jcomponents and Layouts here
}
addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent e) {
String disconnect = "/d/" + client.getID() + "/e/";
send(disconnect, false);
running = false;
client.close();
dispose();
}
});
setVisible(true);
txtMessage.requestFocusInWindow();
}
public void run() {
listen();
}
private void send(String message, boolean text) {
if (message.equals("")) return;
if (text) {
message = client.getName() + ": " + message;
message = "/m/" + message + "/e/";
txtMessage.setText("");
}
client.send(message.getBytes());
}
public void listen() {
listen = new Thread("Listen") {
public void run() {
while (running) {
String message = client.receive();
if (message.startsWith("/c/")) {
client.setID(Integer.parseInt(message.split("/c/|/e/")[1]));
console("Successfully connected to server! ID: " + client.getID());
} else if (message.startsWith("/m/")) {
String text = message.substring(3);
text = text.split("/e/")[0];
console(text);
} else if (message.startsWith("/i/")) {
String text = "/i/" + client.getID() + "/e/";
send(text, false);
} else if (message.startsWith("/u/")) {
String[] u = message.split("/u/|/n/|/e/");
users.update(Arrays.copyOfRange(u, 1, u.length - 1));
}
}
}
};
listen.start();
}
public void console(String message) {
}
}
Whenever the client is closed, client.close() is called, which spawns the socketCLOSE thread, but the thread does nothing; it enters the BLOCKED state, as revealed by the stack trace:
Name: socketClOSE
State: BLOCKED on java.net.DatagramSocket@1de1602 owned by: Listen
Total blocked: 1 Total waited: 0
Stack trace:
app//com.server.Client$2.run(Client.java:90)
Name: Listen
State: RUNNABLE
Total blocked: 0 Total waited: 0
Stack trace:
java.base@14.0.1/java.net.DualStackPlainDatagramSocketImpl.socketReceiveOrPeekData(Native Method)
java.base@14.0.1/java.net.DualStackPlainDatagramSocketImpl.receive0(DualStackPlainDatagramSocketImpl.java:130)
locked java.net.DualStackPlainDatagramSocketImpl@3dd26cc7
java.base@14.0.1/java.net.AbstractPlainDatagramSocketImpl.receive(AbstractPlainDatagramSocketImpl.java:181)
locked java.net.DualStackPlainDatagramSocketImpl@3dd26cc7
java.base@14.0.1/java.net.DatagramSocket.receive(DatagramSocket.java:864)
locked java.net.DatagramPacket@6d21ecb
locked java.net.DatagramSocket@1de1602
app//com.thecherno.chernochat.Client.receive(Client.java:59)
app//com.thecherno.chernochat.ClientWindow$5.run(ClientWindow.java:183)
This doesn't let the socketCLOSE thread close the socket inside the synchronized block, as the lock on the socket is held by the Listen thread. How can I make the Listen thread release its lock? The program terminates without closing the socket, and the debugger shows the Listen thread as still runnable. Is the implementation itself flawed, or can this be solved?
I can reproduce the problem with JDK 14, but not with JDK 15 or newer.
This seems plausible, as JDK-8235674, JEP 373: Reimplement the Legacy DatagramSocket API indicates that the implementation has been rewritten for JDK 15. The report even says “The implementation also has several concurrency issues (e.g., with asynchronous close) that require an overhaul to address properly.”
However, you can get rid of the problem with JDK 14 too; just remove the synchronized. Nothing in the documentation says that synchronization is required to call close(), and when I removed it, my test case worked as intended.
When you want to coordinate the multithreaded access to the socket of your application, you should use a different lock object than the socket instance itself.
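A minimal sketch of that advice (names hypothetical): guard your own bookkeeping with a dedicated lock object and leave the socket itself unsynchronized, so close() can proceed while another thread blocks in receive().
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class Client {
    private final Object lock = new Object(); // dedicated lock, not the socket
    private final DatagramSocket socket;

    public Client(int port) throws IOException {
        this.socket = new DatagramSocket(port);
    }

    public void close() {
        synchronized (lock) {
            // Safe even while another thread blocks in receive(): that
            // receive() aborts with a SocketException once the socket closes.
            socket.close();
        }
    }

    public String receive() throws IOException {
        byte[] data = new byte[1024];
        DatagramPacket packet = new DatagramPacket(data, data.length);
        socket.receive(packet); // intentionally not synchronized on the socket
        return new String(packet.getData(), 0, packet.getLength());
    }
}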
I'm trying to set up a peer-to-peer connection in Java.
I'm trying to set up my program to listen for an incoming connection while also being able to connect outward to a different client.
How can I instantiate my socket connection, socketConnection, as whatever is connected to the program? Ideally like so:
if(socketConnection.isConnectedToExternalPeer()){
//do stuff
} else if (socketConnection.hasAnIncomingConnection()){
//do stuff
}
After consulting @L.Spillner's solution, I've put together the code below. The only issue is that I can't quite grasp how to go about accepting a connection; this is evident from the fact that when I try to set up streams, the program ends up in a loop while waiting for the peer's reply:
public class Client implements AutoCloseable {
// Any other ThreadPool can be used as well
private ExecutorService cachedExecutor = null;
private ExecutorService singleThreadExecutor = null;
// port this client shall listen on
private int port = 0;
// Name of the client
private String name = null;
// indicates that a connection is ongoing
private boolean isConnected = false;
// the socket the Client is currently connected with
private Socket activeConenctionSocket = null;
// The ServerSocket which will be listening for any incoming connection
private ServerSocket listener = null;
// The socket which has been accepted by the ServerSocket
private Future<Socket> acceptedSocket;
private ObjectInputStream inputStream = null;
private ObjectOutputStream outputStream = null;
private BloomChain bloomChain = null;
/**
* @param port Port number by which this client shall be accessed.
* @param name The name of this Client.
*/
public Client( int port, String name )
{
this.port = port;
this.name = name;
this.bloomChain = new BloomChain();
this.cachedExecutor = Executors.newCachedThreadPool();
this.singleThreadExecutor = Executors.newSingleThreadExecutor();
this.listener = createListeningSocket();
startListening();
}
private ServerSocket createListeningSocket()
{
ServerSocket temp = null;
try
{
temp = new ServerSocket( this.port );
}
catch ( IOException e )
{
e.printStackTrace();
}
return temp;
}
private void startListening()
{
if ( !this.isConnected )
{
this.listener = createListeningSocket();
this.acceptedSocket = this.cachedExecutor.submit( new ServAccept( this.listener ) );
}
}
/**
* Attempts to connect to any other socket specified by the hostname and the targetport.
*
* @param host The hostname of the target to connect.
* @param targetport The port of the target.
*/
public void connect( String host, int targetport )
{
try
{ System.out.println(host);
System.out.println(targetport);
this.activeConenctionSocket = new Socket( InetAddress.getByName( host ), targetport );
setUpStreams(this.activeConenctionSocket);
this.isConnected = true;
System.out.println(InetAddress.getAllByName(host));
}
catch ( IOException e )
{
e.printStackTrace();
}
try
{
this.listener.close();
}
catch ( IOException e )
{
// this will almost certainly throw an exception but it is intended.
}
}
public void setUpStreams(Socket socket) throws IOException {
this.outputStream = new ObjectOutputStream(socket.getOutputStream());
this.outputStream.flush();
this.inputStream = new ObjectInputStream(socket.getInputStream());
}
@Override
public void close() throws Exception
{
// close logic (can be rather nasty)
}
public void sendMessage(String message){
if(bloomChain.size()<1){
bloomChain.addBlock(new Block(message, "0"));
} else {
bloomChain.addBlock(new Block(message, bloomChain.get(bloomChain.size()-1).getPreviousHash()));
}
try {
this.outputStream.writeObject(bloomChain);
this.outputStream.flush();
} catch (IOException e) {
e.printStackTrace();
}
}
public String mineMessage(){
final String[] receivedMessage = {null};
final Block tempBlock = this.bloomChain.get(this.bloomChain.size()-1);
this.singleThreadExecutor.submit(()->{
tempBlock.mineBlock(bloomChain.getDifficulty());
receivedMessage[0] = tempBlock.getData();
});
return receivedMessage[0];
}
public String dataListener(){
if(isConnected) {
try {
BloomChain tempChain = (BloomChain) this.inputStream.readObject();
if (tempChain.isChainValid()) {
this.bloomChain = tempChain;
return mineMessage();
}
} catch (IOException e) {
e.printStackTrace();
} catch (ClassNotFoundException e) {
e.printStackTrace();
}
}
return null;
}
public ServerSocket getListener() {
return this.listener;
}
public boolean isConnected(){
return isConnected;
}
public ObjectOutputStream getOutputStream(){
return this.outputStream;
}
public ObjectInputStream getInputStream(){
return this.inputStream;
}
}
EDIT 2:
I tried to wait for acceptedSocket.get() to return a socket in a separate thread, as follows:
new Thread(()->{
setupStreams(this.acceptedSocket.get());
//try-catch blocks omitted
}).start();
This successfully waits for acceptedSocket to return a connected socket; however, when I try to connect to another locally running client, I get the following error: java.net.SocketException: socket closed
Okay, after some tinkering I finally figured out a neat little solution:
We want to be able to listen and connect at the same time, so we need a ServerSocket and issue a ServerSocket#accept call to accept incoming connections.
However, this method blocks the thread, so in order to be able to proceed with our program we have to outsource this call into another thread, and luckily the default Java API provides a simple way to do so.
The following code sample is not finished but provides the core functionality:
Client.java:
public class Client
implements AutoCloseable
{
// Any other ThreadPool can be used as well
private ExecutorService es = Executors.newCachedThreadPool();
// port this client shall listen on
private int port;
// Name of the client
private String name;
// indicates that a connection is ongoing
private boolean isConnected = false;
// the socket the Client is currently connected with
private Socket activeConenctionSocket;
// The ServerSocket which will be listening for any incoming connection
private ServerSocket listener;
// The socket which has been accepted by the ServerSocket
private Future<Socket> acceptedSocket;
/**
* @param port Port number by which this client shall be accessed.
* @param name The name of this Client.
*/
public Client( int port, String name )
{
this.port = port;
this.name = name;
this.listener = createListeningSocket();
startListening();
}
private ServerSocket createListeningSocket()
{
ServerSocket temp = null;
try
{
temp = new ServerSocket( port );
}
catch ( IOException e )
{
e.printStackTrace();
}
return temp;
}
private void startListening()
{
if ( !isConnected )
{
listener = createListeningSocket();
acceptedSocket = es.submit( new ServAccept( listener ) );
}
}
/**
* Attempts to connect to any other socket specified by the hostname and the targetport.
*
* @param host The hostname of the target to connect.
* @param targetport The port of the target.
*/
public void connect( String host, int targetport )
{
isConnected = true;
try
{
activeConenctionSocket = new Socket( InetAddress.getByName( host ), targetport );
}
catch ( IOException e )
{
e.printStackTrace();
}
try
{
listener.close();
}
catch ( IOException e )
{
// this will almost certainly throw an exception but it is intended.
}
}
@Override
public void close() throws Exception
{
// close logic (can be rather nasty)
}
}
Let's walk through, step by step, what happens when we instantiate a new Client object:
When we instantiate our object we create a new ServerSocket.
We start listening by running a Callable<Socket> object on another thread, which I've named ServAccept for example purposes.
Now we have a Future<Socket> object which will contain a socket if any connection gets accepted.
A positive side effect of the startListening() method is that you can make it public and call it once more if the connection has dropped.
The connect(...) method works almost the same way as your setupConnection() method, but with a small twist: the ServerSocket, which is still listening in another thread, will be closed. The reason for this is that there is no other way to exit the accept() call the other thread is stuck in.
The last thing (which you have to figure out) is when to check whether the Future object is already done.
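A quick sketch of one way to do that check (the polling here is illustrative, not part of the original answer; setUpStreams is the poster's own method):
// Non-blocking check of the pending accept: isDone() returns immediately,
// whereas get() would block until a connection arrives.
if (acceptedSocket.isDone()) {
    try {
        Socket incoming = acceptedSocket.get(); // already completed, returns at once
        setUpStreams(incoming);
    } catch (ExecutionException e) {
        // accept() aborted, e.g. because connect() closed the ServerSocket
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    } catch (IOException e) {
        // stream setup failed
    }
}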
ServAccept.java
public class ServAccept
implements Callable<Socket>
{
ServerSocket serv;
public ServAccept( ServerSocket sock )
{
this.serv = sock;
}
@Override
public Socket call() throws Exception
{
return serv.accept();
}
}
EDIT:
As a matter of fact, I have to admit that my approach might not be a very well-rounded approach for the task, so I decided to tweak some things. This time, instead of using a Future object, I decided to go with events / a custom EventListener, which just sits there and listens for a connection to arrive. I tested the connection functionality and it works just fine, but I haven't implemented a solution to determine whether a client really connected to a peer. I just made sure that a client can only hold one connection at a time.
The changes:
ServAccept.java
import java.io.IOException;
import java.net.ServerSocket;
public class ServAccept implements Runnable
{
private ServerSocket serv;
private ConnectionReceivedListener listener;
public ServAccept( ServerSocket sock,ConnectionReceivedListener con )
{
this.serv = sock;
this.listener = con;
}
@Override
public void run()
{
try
{
listener.onConnectionReceived( new ConnectionReceivedEvent( serv.accept() ) );
} catch (IOException e)
{
// planned exception here.
}
}
}
It no longer implements Callable<V> but Runnable. The only reason for that change is that we no longer await any return value, since we will work with a listener and some juicy events. In order to do so, we need to create and pass a listener to this object. But first we should take a look at the listener / event structure:
ConnectionReceivedListener.java
import java.util.EventListener;
@FunctionalInterface
public interface ConnectionReceivedListener extends EventListener
{
public void onConnectionReceived(ConnectionReceivedEvent event);
}
Just a simple interface from which we build some anonymous classes or lambda expressions. Nothing too fancy. It doesn't even need to extend the EventListener interface, but I love to do that to remind me what the purpose of the class is.
ConnectionReceivedEvent.java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
public class ConnectionReceivedEvent
{
private Socket accepted;
public ConnectionReceivedEvent( Socket sock )
{
this.accepted = sock;
}
public Socket getSocket()
{
return accepted;
}
public OutputStream getOutput() throws IOException
{
return accepted.getOutputStream();
}
public InputStream getInput() throws IOException
{
return accepted.getInputStream();
}
public int getPort()
{
return accepted.getPort();
}
}
Nothing too fancy either: just passing a Socket as a constructor parameter and defining some getters, most of which will not be used in this example.
But how do we use it now?
private void startListening()
{
if (!isConnected)
{
closeIfNotNull();
listener = createListeningSocket();
es.execute( new ServAccept( listener, event -> setAccepted( event.getSocket() ) ) );
}
}
private void setAccepted( Socket socket )
{
if (!isConnected)
{
this.activeConenctionSocket = socket;
setUpStreams( socket );
} else
{
sendError( socket );
}
}
We still make use of our ExecutorService, creating a new thread with the ServAccept class. However, since we do not expect any return value, I changed from ExecutorService#submit to ExecutorService#execute (just a matter of opinion and taste).
But ServAccept needs two arguments now: the ServerSocket and the listener to use. Fortunately we can use anonymous classes, and since our listener only features one method, we can even use a lambda expression: event -> setAccepted(event.getSocket()).
As an answer to your 2nd edit: I made a logical mistake. It is not the ServerSocket#close method that throws the exception when interrupting a ServerSocket#accept call; rather, the accept() call itself throws it. In other words, the exception you got was intended, and I suppressed another one by mistake.
I am reading "Netty In Action V5". When reading to chapter 2.3 and 2.4, I tried with example EchoServer and EchoClient, when I tested one client connected to server, everything worked perfectly ... then I modified the example to multi clients could connect to server. My purpose was to run a stresstest : 1000 clients would connect to server, and each of client would echo 100 messages to server, and when all clients finished, I would get total time of all of process. Server was deployed on linux machine (VPS), and clients were deployed on window machine.
When running the stress test, I got 2 problems:
Some clients got an error message:
java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at io.netty.buffer.UnpooledUnsafeDirectByteBuf.setBytes(UnpooledUnsafeDirectByteBuf.java:447)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
But some clients did not receive messages from the server.
Working environment:
Netty-all-4.0.30.Final
JDK1.8.0_25
Echo Clients were deployed on Window 7 Ultimate
Echo Server was deployed on Linux Centos 6
Class NettyClient:
public class NettyClient {
private Bootstrap bootstrap;
private EventLoopGroup group;
public NettyClient(final ChannelInboundHandlerAdapter handler) {
group = new NioEventLoopGroup();
bootstrap = new Bootstrap();
bootstrap.group(group);
bootstrap.channel(NioSocketChannel.class);
bootstrap.handler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel channel) throws Exception {
channel.pipeline().addLast(handler);
}
});
}
public void start(String host, int port) throws Exception {
bootstrap.remoteAddress(new InetSocketAddress(host, port));
bootstrap.connect();
}
public void stop() {
try {
group.shutdownGracefully().sync();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
Class NettyServer:
public class NettyServer {
private EventLoopGroup parentGroup;
private EventLoopGroup childGroup;
private ServerBootstrap boopstrap;
public NettyServer(final ChannelInboundHandlerAdapter handler) {
parentGroup = new NioEventLoopGroup(300);
childGroup = new NioEventLoopGroup(300);
boopstrap = new ServerBootstrap();
boopstrap.group(parentGroup, childGroup);
boopstrap.channel(NioServerSocketChannel.class);
boopstrap.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel channel) throws Exception {
channel.pipeline().addLast(handler);
}
});
}
public void start(int port) throws Exception {
boopstrap.localAddress(new InetSocketAddress(port));
ChannelFuture future = boopstrap.bind().sync();
System.err.println("Start Netty server on port " + port);
future.channel().closeFuture().sync();
}
public void stop() throws Exception {
parentGroup.shutdownGracefully().sync();
childGroup.shutdownGracefully().sync();
}
}
Class EchoClient
public class EchoClient {
private static final String HOST = "203.12.37.22";
private static final int PORT = 3344;
private static final int NUMBER_CONNECTION = 1000;
private static final int NUMBER_ECHO = 10;
private static CountDownLatch counter = new CountDownLatch(NUMBER_CONNECTION);
public static void main(String[] args) throws Exception {
List<NettyClient> listClients = Collections.synchronizedList(new ArrayList<NettyClient>());
for (int i = 0; i < NUMBER_CONNECTION; i++) {
new Thread(new Runnable() {
@Override
public void run() {
try {
NettyClient client = new NettyClient(new EchoClientHandler(NUMBER_ECHO) {
@Override
protected void onFinishEcho() {
counter.countDown();
System.err.println((NUMBER_CONNECTION - counter.getCount()) + "/" + NUMBER_CONNECTION);
}
});
client.start(HOST, PORT);
listClients.add(client);
} catch (Exception ex) {
ex.printStackTrace();
}
}
}).start();
}
long t1 = System.currentTimeMillis();
counter.await();
long t2 = System.currentTimeMillis();
System.err.println("Totla time: " + (t2 - t1));
for (NettyClient client : listClients) {
client.stop();
}
}
private static class EchoClientHandler extends SimpleChannelInboundHandler<ByteBuf> {
private static final String ECHO_MSG = "Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo Echo";
private int numberEcho;
private int curNumberEcho = 0;
public EchoClientHandler(int numberEcho) {
this.numberEcho = numberEcho;
}
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
ctx.writeAndFlush(Unpooled.copiedBuffer(ECHO_MSG, CharsetUtil.UTF_8));
}
@Override
protected void channelRead0(ChannelHandlerContext ctx, ByteBuf in) throws Exception {
curNumberEcho++;
if (curNumberEcho >= numberEcho) {
onFinishEcho();
} else {
ctx.writeAndFlush(Unpooled.copiedBuffer(ECHO_MSG, CharsetUtil.UTF_8));
}
}
protected void onFinishEcho() {
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
cause.printStackTrace();
ctx.close();
}
}
}
Class EchoServer:
public class EchoServer {
private static final int PORT = 3344;
public static void main(String[] args) throws Exception {
NettyServer server = new NettyServer(new EchoServerHandler());
server.start(PORT);
System.err.println("Start server on port " + PORT);
}
@Sharable
private static class EchoServerHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
ctx.write(msg);
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
ctx.flush();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
ctx.close();
}
}
}
You might change 2 things:
Create only one client bootstrap and reuse it for all your clients instead of creating one per client. So extract the bootstrap construction out of the Client part and keep only the connect, as you've done in your start. This will limit the number of threads used internally.
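A rough sketch of that first point (names hypothetical; it reuses the EchoClientHandler from the question): one shared group and bootstrap, one connect() per logical client.
// One EventLoopGroup and Bootstrap shared by every client connection,
// instead of a new NioEventLoopGroup per NettyClient instance.
EventLoopGroup sharedGroup = new NioEventLoopGroup();
Bootstrap sharedBootstrap = new Bootstrap()
        .group(sharedGroup)
        .channel(NioSocketChannel.class)
        .remoteAddress(new InetSocketAddress(HOST, PORT))
        .handler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) {
                // a fresh, non-shared handler per connection
                ch.pipeline().addLast(new EchoClientHandler(NUMBER_ECHO));
            }
        });
for (int i = 0; i < NUMBER_CONNECTION; i++) {
    sharedBootstrap.connect(); // each connect() opens one client channel
}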
Close the connection on the client side when the number of ping-pongs is reached. Currently you only call the empty method onFinishEcho, which causes no close at all on the client side, so no client ever stops... and therefore no channel ever closes either...
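A sketch of that second point: have the handler close its channel once the exchange is finished, so the connection is actually released (the ctx.close() line is the only addition to the question's channelRead0):
@Override
protected void channelRead0(ChannelHandlerContext ctx, ByteBuf in) throws Exception {
    curNumberEcho++;
    if (curNumberEcho >= numberEcho) {
        onFinishEcho();
        ctx.close(); // release the connection once the echo exchange is done
    } else {
        ctx.writeAndFlush(Unpooled.copiedBuffer(ECHO_MSG, CharsetUtil.UTF_8));
    }
}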
You might have reached some limit on the number of threads on the client side.
One other element could also be an issue: you don't specify any codec (string codec or whatever), which could lead to partial sends from the client or server being treated as full responses.
For instance, you might have a first block of "Echo Echo Echo" sent in one packet containing the beginning of your buffer, while the other parts (more "Echo") will be sent in later packets.
To prevent this, you should use a codec to ensure your final handler gets a real, full message, not a partial one. If not, you might run into other issues, such as errors on the server side when it tries to send extra packets while the channel has already been closed by the client sooner than expected...
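For example (a sketch, assuming the messages are reworked to end with '\n' so that line-based framing applies; the handlers would then read and write String instead of ByteBuf):
// Hypothetical pipeline for the echo client/server: frame on newlines first,
// then decode/encode strings, then hand complete messages to the handler.
bootstrap.handler(new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel channel) throws Exception {
        channel.pipeline().addLast(
                new LineBasedFrameDecoder(8192),      // split the stream on '\n'
                new StringDecoder(CharsetUtil.UTF_8), // ByteBuf -> String
                new StringEncoder(CharsetUtil.UTF_8), // String -> ByteBuf
                handler);                             // business logic last
    }
});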
I have an application which uses both the TCP and UDP protocols. The main assumption is that the client connects to the server via TCP, and when the connection is established, UDP datagrams are sent.
I have to support two scenarios of connecting to the server:
- the client connects when the server is running
- the client connects when the server is down and retries the connection until the server starts again
For the first scenario everything works pretty well: I get both connections working.
The problem is with the second scenario. When the client tries a few times to connect via TCP and finally connects, the UDP connection function throws an exception:
java.net.SocketException: No buffer space available (maximum connections reached?): bind
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:344)
at sun.nio.ch.DatagramChannelImpl.bind(DatagramChannelImpl.java:684)
at sun.nio.ch.DatagramSocketAdaptor.bind(DatagramSocketAdaptor.java:91)
at io.netty.channel.socket.nio.NioDatagramChannel.doBind(NioDatagramChannel.java:192)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:484)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1080)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:197)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:350)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:722)
When I restart the client application without doing anything on the server, the client connects without any problems.
What can cause the problem?
Below I attach the source code of the classes. All the source code comes from examples on the official Netty project page. The only thing I have modified is that I replaced static variables and functions with non-static ones, because in the future I will need many TCP-UDP connections to multiple servers.
public final class UptimeClient {
static final String HOST = System.getProperty("host", "192.168.2.193");
static final int PORT = Integer.parseInt(System.getProperty("port", "2011"));
static final int RECONNECT_DELAY = Integer.parseInt(System.getProperty("reconnectDelay", "5"));
static final int READ_TIMEOUT = Integer.parseInt(System.getProperty("readTimeout", "10"));
private static UptimeClientHandler handler;
public void runClient() throws Exception {
configureBootstrap(new Bootstrap()).connect();
}
private Bootstrap configureBootstrap(Bootstrap b) {
return configureBootstrap(b, new NioEventLoopGroup());
}
@Override
protected Object clone() throws CloneNotSupportedException {
return super.clone(); //To change body of generated methods, choose Tools | Templates.
}
Bootstrap configureBootstrap(Bootstrap b, EventLoopGroup g) {
if(handler == null){
handler = new UptimeClientHandler(this);
}
b.group(g)
.channel(NioSocketChannel.class)
.remoteAddress(HOST, PORT)
.handler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new IdleStateHandler(READ_TIMEOUT, 0, 0), handler);
}
});
return b;
}
void connect(Bootstrap b) {
b.connect().addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (future.cause() != null) {
handler.startTime = -1;
handler.println("Failed to connect: " + future.cause());
}
}
});
}
}
@Sharable
public class UptimeClientHandler extends SimpleChannelInboundHandler<Object> {
UptimeClient client;
public UptimeClientHandler(UptimeClient client){
this.client = client;
}
long startTime = -1;
@Override
public void channelActive(ChannelHandlerContext ctx) {
try {
if (startTime < 0) {
startTime = System.currentTimeMillis();
}
println("Connected to: " + ctx.channel().remoteAddress());
new QuoteOfTheMomentClient(null).run();
} catch (Exception ex) {
Logger.getLogger(UptimeClientHandler.class.getName()).log(Level.SEVERE, null, ex);
}
}
@Override
public void channelRead0(ChannelHandlerContext ctx, Object msg) throws Exception {
}
@Override
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) {
if (!(evt instanceof IdleStateEvent)) {
return;
}
IdleStateEvent e = (IdleStateEvent) evt;
if (e.state() == IdleState.READER_IDLE) {
// The connection was OK but there was no traffic for last period.
println("Disconnecting due to no inbound traffic");
ctx.close();
}
}
@Override
public void channelInactive(final ChannelHandlerContext ctx) {
println("Disconnected from: " + ctx.channel().remoteAddress());
}
@Override
public void channelUnregistered(final ChannelHandlerContext ctx) throws Exception {
println("Sleeping for: " + UptimeClient.RECONNECT_DELAY + 's');
final EventLoop loop = ctx.channel().eventLoop();
loop.schedule(new Runnable() {
@Override
public void run() {
println("Reconnecting to: " + UptimeClient.HOST + ':' + UptimeClient.PORT);
client.connect(client.configureBootstrap(new Bootstrap(), loop));
}
}, UptimeClient.RECONNECT_DELAY, TimeUnit.SECONDS);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
ctx.close();
}
void println(String msg) {
if (startTime < 0) {
System.err.format("[SERVER IS DOWN] %s%n", msg);
} else {
System.err.format("[UPTIME: %5ds] %s%n", (System.currentTimeMillis() - startTime) / 1000, msg);
}
}
}
public final class QuoteOfTheMomentClient {
private ServerData config;
public QuoteOfTheMomentClient(ServerData config){
this.config = config;
}
public void run() throws Exception {
EventLoopGroup group = new NioEventLoopGroup();
try {
Bootstrap b = new Bootstrap();
b.group(group)
.channel(NioDatagramChannel.class)
.option(ChannelOption.SO_BROADCAST, true)
.handler(new QuoteOfTheMomentClientHandler());
Channel ch = b.bind(0).sync().channel();
ch.writeAndFlush(new DatagramPacket(
Unpooled.copiedBuffer("QOTM?", CharsetUtil.UTF_8),
new InetSocketAddress("192.168.2.193", 8193))).sync();
if (!ch.closeFuture().await(5000)) {
System.err.println("QOTM request timed out.");
}
}
catch(Exception ex)
{
ex.printStackTrace();
}
finally {
group.shutdownGracefully();
}
}
}
public class QuoteOfTheMomentClientHandler extends SimpleChannelInboundHandler<DatagramPacket> {
@Override
public void channelRead0(ChannelHandlerContext ctx, DatagramPacket msg) throws Exception {
String response = msg.content().toString(CharsetUtil.UTF_8);
if (response.startsWith("QOTM: ")) {
System.out.println("Quote of the Moment: " + response.substring(6));
ctx.close();
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
ctx.close();
}
}
If your server is Windows Server 2008 (R2 or R2 SP1), this problem is likely described and solved by this Stack Overflow answer, which refers to Microsoft KB article #2577795:
This issue occurs because of a race condition in the Ancillary Function Driver
for WinSock (Afd.sys) that causes sockets to be leaked. With time, the issue
that is described in the "Symptoms" section occurs if all available socket
resources are exhausted.
If your server is Windows Server 2003, this problem is likely described and solved by this Stack Overflow answer, which refers to Microsoft KB article #196271:
The default maximum number of ephemeral TCP ports is 5000 in the products that
are included in the "Applies to" section. A new parameter has been added in
these products. To increase the maximum number of ephemeral ports, follow these
steps...
...which basically means that you have run out of ephemeral ports.
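For completeness, KB 196271's fix boils down to a single registry value (quoted from memory; verify against the article before applying). A sketch as a .reg file:
Windows Registry Editor Version 5.00

; Raise the ephemeral port ceiling from the default 5000 to 65534.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"MaxUserPort"=dword:0000fffe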
I am prototyping a Netty client/server transfer for strings; eventually I want to write these strings to a file when they arrive on the server side.
Client:
private ClientBootstrap bootstrap;
private Channel connector;
private MyHandler handler=new MyHandler();
public boolean start() {
// Standard netty bootstrapping stuff.
Executor bossPool = Executors.newCachedThreadPool();
Executor workerPool = Executors.newCachedThreadPool();
ChannelFactory factory =
new NioClientSocketChannelFactory(bossPool, workerPool);
this.bootstrap = new ClientBootstrap(factory);
// Declared outside to fit under 80 char limit
final DelimiterBasedFrameDecoder frameDecoder =
new DelimiterBasedFrameDecoder(Integer.MAX_VALUE,
Delimiters.lineDelimiter());
this.bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
public ChannelPipeline getPipeline() throws Exception {
return Channels.pipeline(
handler,
frameDecoder,
new StringDecoder(),
new StringEncoder());
}
});
ChannelFuture future = this.bootstrap
.connect(new InetSocketAddress("localhost", 12345));
if (!future.awaitUninterruptibly().isSuccess()) {
System.out.println("--- CLIENT - Failed to connect to server at " +
"localhost:12345.");
this.bootstrap.releaseExternalResources();
return false;
}
this.connector = future.getChannel();
return this.connector.isConnected();
}
public void stop() {
if (this.connector != null) {
this.connector.close().awaitUninterruptibly();
}
this.bootstrap.releaseExternalResources();
System.out.println("--- CLIENT - Stopped.");
}
public boolean sendMessage(String message) {
if (this.connector.isConnected()) {
// Append \n if it's not present, because of the frame delimiter
if (!message.endsWith("\n")) {
this.connector.write(message + '\n');
} else {
this.connector.write(message);
}
System.out.print(message);
return true;
}
return false;
}
Server:
private final String id;
private ServerBootstrap bootstrap;
private ChannelGroup channelGroup;
private MyHandler handler= new MyHandler();
public Server(String id) {
this.id = id;
}
// public methods ---------------------------------------------------------
public boolean start() {
// Pretty standard Netty startup stuff...
// boss/worker executors, channel factory, channel group, pipeline, ...
Executor bossPool = Executors.newCachedThreadPool();
Executor workerPool = Executors.newCachedThreadPool();
ChannelFactory factory =
new NioServerSocketChannelFactory(bossPool, workerPool);
this.bootstrap = new ServerBootstrap(factory);
this.channelGroup = new DefaultChannelGroup(this.id + "-all-channels");
// declared here to fit under the 80 char limit
final ChannelHandler delimiter =
new DelimiterBasedFrameDecoder(Integer.MAX_VALUE,
Delimiters.lineDelimiter());
this.bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
@Override
public ChannelPipeline getPipeline() throws Exception {
SimpleChannelHandler handshakeHandler =
new SimpleChannelHandler();
return Channels.pipeline(
handler,
delimiter,
new StringDecoder(),
new StringEncoder(),
handshakeHandler);
}
});
Channel acceptor = this.bootstrap.bind(new InetSocketAddress(12345));
if (acceptor.isBound()) {
System.out.println("+++ SERVER - bound to *:12345");
this.channelGroup.add(acceptor);
return true;
} else {
System.err.println("+++ SERVER - Failed to bind to *:12345");
this.bootstrap.releaseExternalResources();
return false;
}
}
public void stop() {
this.channelGroup.close().awaitUninterruptibly();
this.bootstrap.releaseExternalResources();
System.err.println("+++ SERVER - Stopped.");
}
Handlers used:
Client handler:
public class MyHandler extends SimpleChannelUpstreamHandler{
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e)
throws Exception {
if(e.getMessage() instanceof String){
System.out.println((String)e.getMessage());
}
System.out.println(e.getMessage().toString());
}
}
Server handler:
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e)
throws Exception {
Channel channel= ctx.getChannel();
channel.write(e.getMessage());
if(e.getMessage() instanceof String){
System.out.println((String)e.getMessage());
}
System.out.println(e.getMessage().toString());
}
client runner:
public static void main(String[] args) throws InterruptedException {
final int nMessages = 5;
try {
Client c = new Client();
if (!c.start()) {
return;
}
for (int i = 0; i < nMessages; i++) {
Thread.sleep(1L);
c.sendMessage((i + 1) + "\n");
}
c.stop();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
Server Runner:
public static void main(String[] args) {
final Server s = new Server("server1");
if (!s.start()) {
return;
}
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
s.stop();
}
});
}
Now, what I really need is to print the message that I wrote to the channel on both the client and server side, and I am really puzzled by this.
Your pipeline creation seems to be wrong at first glance. On the server side, when decoding, the DelimiterBasedFrameDecoder needs to come first, then the StringDecoder, and then the business handler. You could debug this by putting breakpoints in these decoders and encoders. Also take a look at this link for very good documentation on how this works.
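A sketch of the corrected ordering, using the Netty 3 API from the question (inbound handlers run in pipeline order, so framing must precede string decoding, with the business handler last; the frame decoder is built inside getPipeline() because it is stateful and must not be shared between channels):
this.bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() throws Exception {
        return Channels.pipeline(
                new DelimiterBasedFrameDecoder(Integer.MAX_VALUE,
                        Delimiters.lineDelimiter()), // frame on line endings first
                new StringDecoder(),                 // then bytes -> String
                new StringEncoder(),                 // outbound String -> bytes
                handler);                            // business handler last
    }
});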