I have written simple code for transmitting and receiving messages.
If I send a few messages, everything is fine. If I send a lot of messages, the last ones are not processed.
If I send 100 IDs, I get all of them:
1 2 3 4 5 6 7 ... 100
If I send 1000 IDs, I get 1...N (N < 1000):
1 2 3 4 5 6 7 ... 958 959 960
1 2 3 4 5 6 7 ... 448 449 450
1 2 3 4 5 6 7 ... 652 653 654
Server
public class ServerTCP {
private AmountServer server;
public ServerTCP(int _PORT, AmountServer _server) {
final int PORT = _PORT;
server = _server;
// Configure the server.
EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.option(ChannelOption.AUTO_READ, true)
.option(ChannelOption.SO_BACKLOG, 100)
.handler(new LoggingHandler(LogLevel.INFO))
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
p.addLast(new ServerHandler(server));
}
});
// Start the server.
ChannelFuture f = b.bind(PORT).sync();
// Wait until the server socket is closed.
f.channel().closeFuture().sync();
} catch (InterruptedException ex) {
Logger.getLogger(ServerTCP.class.getName()).log(Level.SEVERE, null, ex);
} finally {
// Shut down all event loops to terminate all threads.
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
}
}
Server-Handler-Read
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
ByteBuf in = (ByteBuf) msg;
while (in.isReadable()) {
int type = in.readInt();
int id = in.readInt();
System.out.println(id);
long amn = in.readLong();
}
in.clear();
in.release();
}
Client
public static void main(String[] args) throws Exception {
EventLoopGroup group = new NioEventLoopGroup();
try {
Bootstrap b = new Bootstrap();
b.group(group)
.channel(NioSocketChannel.class)
.option(ChannelOption.SO_BACKLOG, 500)
.handler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
p.addLast(new ClientHandler());
}
});
ChannelFuture f = b.connect(HOST, PORT).sync();
int i = 0;
while (i < 1005) {
i++;
ByteBuf firstMessage = Unpooled.buffer(AccountServiceClient.SIZE);
firstMessage.writeInt(1); //Const
firstMessage.writeInt(i); //Id
firstMessage.writeLong(1L);
System.out.println("Step " + i);
f.channel().writeAndFlush(firstMessage);
f.channel().flush();
}
} catch (Exception e) {
e.printStackTrace();
} finally {
// Shut down the event loop to terminate all threads.
group.shutdownGracefully();
}
}
}
Excuse my English
It sounds like you want your message exchange to be more reliable. Consider introducing some handshaking, or some other mechanism that keeps the client from exiting prematurely. It also doesn't look like the client is closing the socket correctly. I would model this simple use case more closely on the Netty echo example to make sure you have your bases covered.
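For example, a minimal sketch of the last two points (assuming the same HOST/PORT and the 16-byte message layout from the client above; this is illustrative, not the exact fix): let the send loop wait for the final write to complete and then close the channel before the group is shut down.
ChannelFuture f = b.connect(HOST, PORT).sync();
ChannelFuture lastWrite = null;
for (int i = 1; i <= 1005; i++) {
    ByteBuf msg = Unpooled.buffer(16); // int type + int id + long amount
    msg.writeInt(1);    // type constant
    msg.writeInt(i);    // id
    msg.writeLong(1L);  // amount
    lastWrite = f.channel().writeAndFlush(msg);
}
if (lastWrite != null) {
    lastWrite.sync();          // writes on a channel complete in order, so this waits for all of them
}
f.channel().close().sync();    // close the socket cleanly before group.shutdownGracefully()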
Related
I'm using a Netty server to solve this: reading a big file line by line and processing it. Doing it on a single machine is still slow, so I've decided to use a server to serve chunks of data to clients. That already works, but what I also want is for the server to shut itself down once the whole file has been processed. The source code I'm using right now is:
public static void main(String[] args) {
new Thread(() -> {
//reading the big file and populating 'dq' - data queue
}).start();
final EventLoopGroup bGrp = new NioEventLoopGroup(1);
final EventLoopGroup wGrp = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bGrp, wGrp)
.channel(NioServerSocketChannel.class)
.option(ChannelOption.SO_BACKLOG, 100)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) {
ChannelPipeline p = ch.pipeline();
p.addLast(new StringDecoder());
p.addLast(new StringEncoder());
p.addLast(new ServerHandler(dq, bGrp, wGrp));
}
});
ChannelFuture f = b.bind(PORT).sync();
f.channel().closeFuture().sync();
} finally {
wGrp.shutdownGracefully();
bGrp.shutdownGracefully();
}
}
class ServerHandler extends ChannelInboundHandlerAdapter {
public ServerHandler(Queue<String> dq, EventLoopGroup bGrp, EventLoopGroup wGrp) { // parameter types assumed; dq is the shared data queue
//assigning params to instance fields
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
//creating bulk of data from 'dq' and sending to client
/* e.g.
ctx.write(dq.get());
*/
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
if (dq.isEmpty() /*or other check that file was processed*/ ) {
try {
ctx.channel().closeFuture().sync();
} catch (InterruptedException ie) {
//...
}
wGrp.shutdownGracefully();
bGrp.shutdownGracefully();
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
ctx.close();
ctx.executor().parent().shutdownGracefully();
}
}
Is the server shutdown in the channelReadComplete(...) method correct? What I'm afraid of is that another client may still be being served (e.g. a big bulk is being sent to another client while the current client has reached the end of 'dq').
The base code is from netty EchoServer/DiscardServer examples.
The question is: how do I shut down a Netty server (from a handler) when a specific condition is reached?
Thanks
You cannot shut down a server from inside a handler. What you could do is signal a different thread that it should shut down the server.
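A minimal sketch of that idea, assuming a shared java.util.concurrent.CountDownLatch (the names below are illustrative, not from your code): the main thread waits on the latch instead of the close future, and the handler only signals it.
// shared between the bootstrap code and the handler
final CountDownLatch shutdownSignal = new CountDownLatch(1);

// in main(), instead of f.channel().closeFuture().sync():
ChannelFuture f = b.bind(PORT).sync();
shutdownSignal.await();         // blocks until a handler signals that the file is done
f.channel().close().sync();     // unbind; the existing finally block still shuts the groups down

// in ServerHandler:
@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
    if (dq.isEmpty()) {
        ctx.close();                  // close this client's channel only
        shutdownSignal.countDown();   // tell the main thread to stop the server
    }
}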
I'm implementing a simple Netty server and client to send and receive files, something similar to cloud storage.
I have a server which handles incoming requests and sends the files back to the client. I also want my apps to be able to handle big files, which is why I divide such files into chunks and send them chunk by chunk. But there's an issue I can't resolve.
Let's say:
We have a 4 GB file on the server.
It is divided into 40,000 chunks.
Then they are sent to a client application, and I can see that all the chunks on the server are written into the socket, as I use an int field as a message number (chunk number) and log the number of each message as it is written.
But when the client receives the messages (chunks), in the case of large files the process doesn't finish successfully and only some of the chunks (it depends on the size of the file) are received by the client.
The client starts receiving consecutive messages - 1, 2, 3, 4 ... 27878, 27879 - and then stops with no exception, although the last message from the server was, for example, 40000.
I almost forgot to say that I use JavaFX for the client app.
I tried playing with the -Xms/-Xmx JVM options, but it didn't help.
Server
public class Server {
public void run() throws Exception {
EventLoopGroup mainGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(mainGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
protected void initChannel(SocketChannel socketChannel) throws Exception {
socketChannel.pipeline().addLast(
new ObjectDecoder(Constants.FRAME_SIZE, ClassResolvers.cacheDisabled(null)),
new ObjectEncoder(),
new MainHandler()
);
}
})
.childOption(ChannelOption.SO_KEEPALIVE, true);
ChannelFuture future = b.bind(8189).sync();
future.channel().closeFuture().sync();
} finally {
mainGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
}
public static void main(String[] args) throws Exception {
new Server().run();
}
}
Server handler
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
try {
if (msg == null) {
return;
}
if (msg instanceof FileRequest) {
FileRequest fr = (FileRequest) msg;
switch (fr.getFileCommand()) {
case DOWNLOAD:
sendFileToClient(ctx, fr.getFilename());
break;
case LIST_FILES:
listFiles(ctx);
break;
case DELETE:
deleteFileOnServer(fr);
listFiles(ctx);
break;
case SEND:
saveFileOnServer(fr);
listFiles(ctx);
break;
case SEND_PARTIAL_DATA:
savePartialDataOnServer(fr);
break;
}
}
} finally {
ReferenceCountUtil.release(msg);
}
}
Methods for sending files in chunks
private void sendFileToClient(ChannelHandlerContext ctx, String fileName) throws IOException {
Path path = Paths.get("server_storage/" + fileName);
if (Files.exists(path)) {
if (Files.size(path) > Constants.FRAME_SIZE) {
sendServerDataFrames(ctx, path);
ctx.writeAndFlush(new FileRequest(FileCommand.LIST_FILES));
} else {
FileMessage fm = new FileMessage(path);
ctx.writeAndFlush(fm);
}
}
}
private void sendServerDataFrames(ChannelHandlerContext ctx, Path path) throws IOException {
byte[] byteBuf = new byte[Constants.FRAME_CHUNK_SIZE];
FileMessage fileMessage = new FileMessage(path, byteBuf, 1);
FileRequest fileRequest = new FileRequest(FileCommand.SEND_PARTIAL_DATA, fileMessage);
FileInputStream fis = new FileInputStream(path.toFile());
int read;
while ((read = fis.read(byteBuf)) > 0) {
if (read < Constants.FRAME_CHUNK_SIZE) {
byteBuf = Arrays.copyOf(byteBuf, read);
fileMessage.setData(byteBuf);
}
ctx.writeAndFlush(fileRequest);
fileMessage.setMessageNumber(fileMessage.getMessageNumber() + 1);
}
System.out.println("server_storage/" + path.getFileName() + ", server last frame number: " + fileMessage.getMessageNumber());
System.out.println("server_storage/" + path.getFileName() + ": closing file stream.");
fis.close();
}
Client handlers
@Override
public void initialize(URL location, ResourceBundle resources) {
Network.start();
Thread t = new Thread(() -> {
try {
while (true) {
AbstractMessage am = Network.readObject();
if (am instanceof FileMessage) {
FileMessage fm = (FileMessage) am;
Files.write(Paths.get("client_storage/" + fm.getFilename()), fm.getData(), StandardOpenOption.CREATE);
refreshLocalFilesList();
}
if (am instanceof FilesListMessage) {
FilesListMessage flm = (FilesListMessage) am;
refreshServerFilesList(flm.getFilesList());
}
if (am instanceof FileRequest) {
FileRequest fr = (FileRequest) am;
switch (fr.getFileCommand()) {
case DELETE:
deleteFile(fr.getFilename());
break;
case SEND_PARTIAL_DATA:
receiveFrames(fr);
break;
case LIST_FILES:
refreshLocalFilesList();
break;
}
}
}
} catch (ClassNotFoundException | IOException e) {
e.printStackTrace();
} finally {
Network.stop();
}
});
t.setDaemon(true);
t.start();
refreshLocalFilesList();
Network.sendMsg(new FileRequest(FileCommand.LIST_FILES));
}
private void receiveFrames(FileRequest fm) throws IOException {
Utils.processBytes(fm.getFileMessage(), "client_storage/");
}
public final class Utils {
public static void processBytes(FileMessage fm, String pathPart) {
Path path = Paths.get(pathPart + fm.getFilename());
byte[] data = fm.getData();
System.out.println(pathPart + path.getFileName() + ": " + fm.getMessageNumber());
try {
if (fm.getMessageNumber() == 1) {
Files.write(path, data, StandardOpenOption.CREATE_NEW);
} else {
Files.write(path, data, StandardOpenOption.WRITE, StandardOpenOption.APPEND);
}
}
catch (IOException e) {
e.printStackTrace();
}
}
}
This is what I see on the server.
server_storage/DVD5_OFFICE_2010_SE_SP2_VOLUME_X86_RU-KROKOZ.iso: 42151
server_storage/DVD5_OFFICE_2010_SE_SP2_VOLUME_X86_RU-KROKOZ.iso: 42152
server_storage/DVD5_OFFICE_2010_SE_SP2_VOLUME_X86_RU-KROKOZ.iso, server last frame number: 42153
server_storage/DVD5_OFFICE_2010_SE_SP2_VOLUME_X86_RU-KROKOZ.iso: closing file stream.
And this is what I see on the client.
client_storage/DVD5_OFFICE_2010_SE_SP2_VOLUME_X86_RU-KROKOZ.iso: 29055
client_storage/DVD5_OFFICE_2010_SE_SP2_VOLUME_X86_RU-KROKOZ.iso: 29056
client_storage/DVD5_OFFICE_2010_SE_SP2_VOLUME_X86_RU-KROKOZ.iso: 29057
And there is no issue when sending files from the client to the server. I can see in the debugger and in the Windows task manager that both processes work simultaneously, but it's not like this when a file is sent from the server to the client. First all the chunks are read, then they are sent to the client, and the client starts to receive them but fails to get all of them.
Please help. I have no idea what it could be. Thanks in advance.
------------ Client-side pipeline (the client can send any type of message, i.e. HTTP requests or binary packets) --------
Bootstrap bootstrap = new Bootstrap()
.group(group)
// .option(ChannelOption.TCP_NODELAY, true)
// .option(ChannelOption.SO_KEEPALIVE, true)
.channel(NioSocketChannel.class)
.handler(new ChannelInitializer() {
@Override
protected void initChannel(Channel channel) throws Exception {
channel.pipeline()
.addLast("agent-traffic-shaping", ats)
.addLast("length-decoder", new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4))
.addLast("agent-client", new AgentClientHandler())
.addLast("4b-length", new LengthFieldPrepender(4))
;
}
});
------------------------------ Server-side pipeline-----------------
ServerBootstrap b = new ServerBootstrap()
.group(group)
// .option(ChannelOption.TCP_NODELAY, true)
// .option(ChannelOption.SO_KEEPALIVE, true)
.channel(NioServerSocketChannel.class)
.localAddress(new InetSocketAddress(port))
.childHandler(new ChannelInitializer() {
@Override
protected void initChannel(Channel channel) throws Exception {
channel.pipeline()
.addLast("agent-traffic-shaping", ats)
.addLast("length-decoder", new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4))
.addLast(new AgentServerHandler())
.addLast("4b-length", new LengthFieldPrepender(4));
}
}
);
ChannelFuture f = b.bind().sync();
log.info("Started agent-side server at Port {}", port);
-------- Server's channelRead method-----------------
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
ByteBuf data = (ByteBuf) msg;
log.info("SIZE {}", data.capacity());
String s = data.readCharSequence(data.capacity(), Charset.forName("utf-8")).toString();
System.out.print(s);
if (buffer != null) buffer.incomingPacket((ByteBuf) msg);
else {
log.error("Receiving buffer NULL for Remote Agent {}:{} ", remoteAgentIP, remoteAgentPort);
((ByteBuf) msg).release();
}
/* totalBytes += ((ByteBuf) msg).capacity();*/
}
------------ Client writing on the Channel (ByteBuf data contains a valid HTTP request with a size of 87 bytes) --------
private void writeToAgentChannel(Channel currentChannel, ByteBuf data) {
String s = data.readCharSequence(data.capacity(), Charset.forName("utf-8")).toString();
log.info("SIZE {}", data.capacity());
System.out.print(s);
ChannelFuture cf = currentChannel.write(data);
currentChannel.flush();
/* wCount++;
if (wCount >= request.getRequest().getBufferSize() * request.getRequest().getNumParallelSockets()) {
for (Channel channel : channels)
channel.flush();
wCount = 0;
}*/
cf.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture channelFuture) throws Exception {
if (cf.isSuccess()) {
totalBytes += data.capacity();
}
else log.error("Failed to write packet to channel {}", cf.cause());
}
});
}
However, the server receives an empty ByteBuf with a size of zero. What could be the possible cause here?
Your issue seems to come from the client, where you accidentally consume all the bytes inside the ByteBuf when you try to debug it.
String s = data.readCharSequence(data.capacity(), Charset.forName("utf-8")).toString();
log.info("SIZE {}", data.capacity());
System.out.print(s);
Calling readCharSequence consumes the data, leaving 0 readable bytes.
I suggest adding a LoggingHandler to debug your pipeline, as it does not affect the data.
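If you still want to log the payload yourself, a minimal sketch (assuming the data is UTF-8 text, as in your logging code) is to inspect the buffer without moving its reader index:
// Neither call below advances the reader index, so the bytes stay available
// for the actual write to the channel.
String s = data.toString(StandardCharsets.UTF_8);
// or: CharSequence cs = data.getCharSequence(data.readerIndex(), data.readableBytes(), StandardCharsets.UTF_8);
log.info("SIZE {}", data.readableBytes());
System.out.print(s);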
I have an example that uses Java 1.8 and Netty 4.1.30.Final, with an IdleStateHandler that should output the current time when no activity occurs for 400 milliseconds. However, the current time is output at intervals of about 2 seconds instead of 400 milliseconds.
Here is my example code.
Client.java
public void connect() {
EventLoopGroup workerGroup = new OioEventLoopGroup();
try {
Bootstrap bootstrap = new Bootstrap();
bootstrap.group(workerGroup)
.channel(RxtxChannel.class)
.option(RxtxChannelOption.BAUD_RATE, 38400)
.option(RxtxChannelOption.DATA_BITS, RxtxChannelConfig.Databits.DATABITS_8)
.option(RxtxChannelOption.PARITY_BIT, RxtxChannelConfig.Paritybit.NONE)
.option(RxtxChannelOption.STOP_BITS, RxtxChannelConfig.Stopbits.STOPBITS_1)
.handler(new ExampleChannelInitializer());
this.channel = bootstrap.connect(new RxtxDeviceAddress("COM1")).sync().channel();
this.channel.closeFuture().addListener(f -> {
workerGroup.shutdownGracefully();
});
} catch (Exception e) {
throw new ConnectionException(e.getMessage(), e);
}
}
ExampleChannelInitializer.java
@Override
protected void initChannel(RxtxChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast(new IdleStateHandler(0, 0, 400, TimeUnit.MILLISECONDS));
pipeline.addLast(new ChannelInboundHandlerAdapter() {
@Override
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
System.out.println(LocalDateTime.now());
}
});
}
Console
2018-10-30T10:42:02.762
2018-10-30T10:42:04.789
2018-10-30T10:42:06.818
2018-10-30T10:42:08.844
2018-10-30T10:42:10.871
This is unfortunately just how the OIO transport, and therefore RXTX, works under the hood. You can make the timing more "precise" by using RxtxChannelOption.READ_TIMEOUT and RxtxChannelOption.WAIT_TIME and setting them to smaller values.
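A minimal sketch of that tweak on the client bootstrap above (the 100 ms values are illustrative, not recommendations):
bootstrap.group(workerGroup)
        .channel(RxtxChannel.class)
        .option(RxtxChannelOption.BAUD_RATE, 38400)
        .option(RxtxChannelOption.DATA_BITS, RxtxChannelConfig.Databits.DATABITS_8)
        .option(RxtxChannelOption.PARITY_BIT, RxtxChannelConfig.Paritybit.NONE)
        .option(RxtxChannelOption.STOP_BITS, RxtxChannelConfig.Stopbits.STOPBITS_1)
        // smaller values make the blocking serial reads return sooner,
        // so idle events can fire closer to the configured 400 ms
        .option(RxtxChannelOption.READ_TIMEOUT, 100)   // milliseconds
        .option(RxtxChannelOption.WAIT_TIME, 100)      // milliseconds
        .handler(new ExampleChannelInitializer());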
I have written an async socket server using Java 7 NIO.2.
Here is a snippet of the server.
public class AsyncJava7Server implements Runnable, CounterProtocol, CounterServer{
private int port = 0;
private AsynchronousChannelGroup group;
public AsyncJava7Server(int port) throws IOException, InterruptedException, ExecutionException {
this.port = port;
}
public void run() {
try {
String localhostname = java.net.InetAddress.getLocalHost().getHostName();
group = AsynchronousChannelGroup.withThreadPool(
Executors.newCachedThreadPool(new NamedThreadFactory("Channel_Group_Thread")));
// open a server channel and bind to a free address, then accept a connection
final AsynchronousServerSocketChannel asyncServerSocketChannel =
AsynchronousServerSocketChannel.open(group).bind(
new InetSocketAddress(localhostname, port));
asyncServerSocketChannel.accept(null,
new CompletionHandler <AsynchronousSocketChannel, Object>() {
@Override
public void completed(final AsynchronousSocketChannel asyncSocketChannel,
Object attachment) {
// Invoke simple handle accept code - only takes about 10 milliseconds.
handleAccept(asyncSocketChannel);
asyncServerSocketChannel.accept(null, this);
}
@Override
public void failed(Throwable exc, Object attachment) {
System.out.println("***********" + exc + " statement=" + attachment);
}
});
and here is a snippet of the client code which tries to connect...
public class AsyncJava7Client implements CounterProtocol, CounterClientBridge {
AsynchronousSocketChannel asyncSocketChannel;
private String serverName= null;
private int port;
private String clientName;
public AsyncJava7Client(String clientName, String serverName, int port) throws IOException {
this.clientName = clientName;
this.serverName = serverName;
this.port = port;
}
private void connectToServer() {
Future<Void> connectFuture = null;
try {
log("Opening client async channel...");
asyncSocketChannel = AsynchronousSocketChannel.open();
// Connecting to server
connectFuture = asyncSocketChannel.connect(new InetSocketAddress("Alex-PC", 9999));
} catch (Exception ex) {
ex.printStackTrace();
throw new RuntimeException(ex);
}
// open a new socket channel and connect to the server
long beginTime = 0;
try {
// You have 15 seconds to connect. This will throw an exception if the server is not there.
beginTime = System.currentTimeMillis();
Void connectVoid = connectFuture.get(15, TimeUnit.SECONDS);
} catch (Exception ex) {
//EXCEPTIONS THROWN HERE AFTER ABOUT 150 CLIENTS
long endTime = System.currentTimeMillis();
long timeTaken = endTime - beginTime;
log("************* TIME TAKEN=" + timeTaken);
ex.printStackTrace();
throw new RuntimeException(ex);
}
}
I have a test which fires off clients.
#Test
public void testManyClientsAtSametime() throws Exception {
int clientsize = 150;
ScheduledThreadPoolExecutor executor =
(ScheduledThreadPoolExecutor) Executors.newScheduledThreadPool(clientsize + 1,
new NamedThreadFactory("Test_Thread"));
AsyncJava7Server asyncJava7Server = startServer();
List<AsyncJava7Client> clients = new ArrayList<AsyncJava7Client>();
List<Future<String>> results = new ArrayList<Future<String>>();
for (int i = 0; i < clientsize; i++) {
// Now start a client
final AsyncJava7Client client =
new AsyncJava7Client("client" + i, InetAddress.getLocalHost().getHostName(), 9999);
clients.add(client);
}
long beginTime = System.currentTimeMillis();
Random random = new Random();
for (final AsyncJava7Client client: clients) {
Callable<String> callable = new Callable<String>() {
public String call() {
...
... invoke APIs to connect client to server
...
return counterValue;
}
};
long delay = random.nextInt(10000); // somewhere between 0 and 10 seconds.
Future<String> startClientFuture = executor.schedule(callable, delay, TimeUnit.MILLISECONDS);
results.add(startClientFuture);
}
It works fine for about 100 clients. At about 140+ clients I get a load of exceptions in the client when it tries to connect. The exception is: java.util.concurrent.ExecutionException: java.io.IOException: The remote computer refused the network connection.
My test is on a single laptop running Windows 7. When it bombs out I check the TCP connections and there are about 500-600 connections - that's OK, as I have similar JDK 1.0 java.net socket programs that can handle 4,000 TCP connections.
No exceptions or anything dodgy-looking on the server.
So I am at a loss as to what could be wrong here. Any ideas?
Try using the form of bind that accepts a backlog limit and set that to a higher number. For example:
final AsynchronousServerSocketChannel asyncServerSocketChannel =
AsynchronousServerSocketChannel.open(group).bind(
new InetSocketAddress(localhostname, port), 1000);
I don't know what the default limit is in the Windows 7 implementation, but it can be a cause of refused connections.