------------ Client-side pipeline (the client can send any type of message, e.g. HTTP requests or binary packets) --------
Bootstrap bootstrap = new Bootstrap()
.group(group)
// .option(ChannelOption.TCP_NODELAY, true)
// .option(ChannelOption.SO_KEEPALIVE, true)
.channel(NioSocketChannel.class)
.handler(new ChannelInitializer() {
@Override
protected void initChannel(Channel channel) throws Exception {
channel.pipeline()
.addLast("agent-traffic-shaping", ats)
.addLast("length-decoder", new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4))
.addLast("agent-client", new AgentClientHandler())
.addLast("4b-length", new LengthFieldPrepender(4))
;
}
});
------------------------------ Server-side pipeline-----------------
ServerBootstrap b = new ServerBootstrap()
.group(group)
// .option(ChannelOption.TCP_NODELAY, true)
// .option(ChannelOption.SO_KEEPALIVE, true)
.channel(NioServerSocketChannel.class)
.localAddress(new InetSocketAddress(port))
.childHandler(new ChannelInitializer() {
@Override
protected void initChannel(Channel channel) throws Exception {
channel.pipeline()
.addLast("agent-traffic-shaping", ats)
.addLast("length-decoder", new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4))
.addLast(new AgentServerHandler())
.addLast("4b-length", new LengthFieldPrepender(4));
}
}
);
ChannelFuture f = b.bind().sync();
log.info("Started agent-side server at Port {}", port);
-------- Server's channelRead method-----------------
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
ByteBuf data = (ByteBuf) msg;
log.info("SIZE {}", data.capacity());
String s = data.readCharSequence(data.capacity(), Charset.forName("utf-8")).toString();
System.out.print(s);
if (buffer != null) buffer.incomingPacket((ByteBuf) msg);
else {
log.error("Receiving buffer NULL for Remote Agent {}:{} ", remoteAgentIP, remoteAgentPort);
((ByteBuf) msg).release();
}
/* totalBytes += ((ByteBuf) msg).capacity();*/
}
------------ Client writing to the Channel (ByteBuf data contains a valid HTTP request of 87 bytes) --------
private void writeToAgentChannel(Channel currentChannel, ByteBuf data) {
String s = data.readCharSequence(data.capacity(), Charset.forName("utf-8")).toString();
log.info("SIZE {}", data.capacity());
System.out.print(s);
ChannelFuture cf = currentChannel.write(data);
currentChannel.flush();
/* wCount++;
if (wCount >= request.getRequest().getBufferSize() * request.getRequest().getNumParallelSockets()) {
for (Channel channel : channels)
channel.flush();
wCount = 0;
}*/
cf.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture channelFuture) throws Exception {
if (cf.isSuccess()) {
totalBytes += data.capacity();
}
else log.error("Failed to write packet to channel {}", cf.cause());
}
});
}
However, the server receives an empty ByteBuf with a size of zero. What could be the possible cause here?
Your issue comes from the client side, where you accidentally consume all the bytes in the ByteBuf while trying to debug it.
String s = data.readCharSequence(data.capacity(), Charset.forName("utf-8")).toString();
log.info("SIZE {}", data.capacity());
System.out.print(s);
Calling readCharSequence consumes the data, advancing the readerIndex and leaving 0 readable bytes for the actual write.
I suggest using a debugging handler such as Netty's LoggingHandler to inspect your pipeline, as it is designed not to affect the data passing through it.
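A minimal sketch of what non-destructive logging could look like on the client (assuming the same data, log and channel variables as in the question, plus the usual imports): ByteBuf.toString(Charset) and ByteBufUtil.prettyHexDump read the bytes without moving the readerIndex, unlike readCharSequence.
```java
// Sketch: inspect the outgoing buffer without consuming it.
log.info("SIZE {}", data.readableBytes());
log.info("PAYLOAD {}", data.toString(StandardCharsets.UTF_8)); // does not advance readerIndex
log.info("HEX DUMP\n{}", ByteBufUtil.prettyHexDump(data));     // hex/ASCII view of the readable bytes

// Alternatively, let Netty log the traffic by adding a LoggingHandler to the pipeline;
// it observes the messages without altering them.
channel.pipeline().addFirst("debug-logger", new LoggingHandler(LogLevel.DEBUG));
```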
I am creating a client to communicate with APNs.
Here are my requirements:
jdk 1.6
http/2
tls 1.3
ALPN
So I decided to build it with Netty.
I am not sure whether I am setting the headers and data correctly.
Http2Client.java
public class Http2Client {
// static final boolean SSL = System.getProperty("ssl") != null;
static final boolean SSL = true;
static final String HOST = "api.sandbox.push.apple.com";
static final int PORT = 443;
static final String PATH = "/3/device/00fc13adff785122b4ad28809a3420982341241421348097878e577c991de8f0";
// private static final AsciiTest APNS_PATH = new AsciiTest("/3/device/00fc13adff785122b4ad28809a3420982341241421348097878e577c991de8f0");
private static final AsciiTest APNS_EXPIRATION_HEADER = new AsciiTest("apns-expiration");
private static final AsciiTest APNS_TOPIC_HEADER = new AsciiTest("apns-topic");
private static final AsciiTest APNS_PRIORITY_HEADER = new AsciiTest("apns-priority");
private static final AsciiTest APNS_AUTHORIZATION = new AsciiTest("authorization");
private static final AsciiTest APNS_ID_HEADER = new AsciiTest("apns-id");
private static final AsciiTest APNS_PUSH_TYPE_HEADER = new AsciiTest("apns-push-type");
public static void main(String[] args) throws Exception {
EventLoopGroup clientWorkerGroup = new NioEventLoopGroup();
// Configure SSL.
final SslContext sslCtx;
if (SSL) {
SslProvider provider = SslProvider.isAlpnSupported(SslProvider.OPENSSL) ? SslProvider.OPENSSL
: SslProvider.JDK;
sslCtx = SslContextBuilder.forClient()
.sslProvider(provider)
/*
* NOTE: the cipher filter may not include all ciphers required by the HTTP/2
* specification. Please refer to the HTTP/2 specification for cipher
* requirements.
*/
.ciphers(Http2SecurityUtil.CIPHERS, SupportedCipherSuiteFilter.INSTANCE)
.trustManager(InsecureTrustManagerFactory.INSTANCE)
.applicationProtocolConfig(new ApplicationProtocolConfig(
Protocol.ALPN,
// NO_ADVERTISE is currently the only mode supported by both OpenSsl and JDK
// providers.
SelectorFailureBehavior.NO_ADVERTISE,
// ACCEPT is currently the only mode supported by both OpenSsl and JDK
// providers.
SelectedListenerFailureBehavior.ACCEPT,
ApplicationProtocolNames.HTTP_2,
ApplicationProtocolNames.HTTP_1_1))
.build();
} else {
sslCtx = null;
}
try {
// Configure the client.
Bootstrap b = new Bootstrap();
b.group(clientWorkerGroup);
b.channel(NioSocketChannel.class);
b.option(ChannelOption.SO_KEEPALIVE, true);
b.remoteAddress(HOST, PORT);
b.handler(new Http2ClientInit(sslCtx));
// Start the client.
final Channel channel = b.connect().syncUninterruptibly().channel();
System.out.println("Connected to [" + HOST + ':' + PORT + ']');
final Http2ResponseHandler streamFrameResponseHandler =
new Http2ResponseHandler();
final Http2StreamChannelBootstrap streamChannelBootstrap = new Http2StreamChannelBootstrap(channel);
final Http2StreamChannel streamChannel = streamChannelBootstrap.open().syncUninterruptibly().getNow();
streamChannel.pipeline().addLast(streamFrameResponseHandler);
// Send request (a HTTP/2 HEADERS frame - with ':method = POST' in this case)
final Http2Headers headers = new DefaultHttp2Headers();
headers.method(HttpMethod.POST.asciiName());
headers.path(PATH);
headers.scheme(HttpScheme.HTTPS.name());
headers.add(APNS_TOPIC_HEADER, "com.example.MyApp");
headers.add(APNS_AUTHORIZATION,
"bearer eyAia2lkIjogIjhZTDNHM1JSWDciIH0.eyAiaXNzIjogIkM4Nk5WOUpYM0QiLCAiaWF0IjogIjE0NTkxNDM1ODA2NTAiIH0.MEYCIQDzqyahmH1rz1s-LFNkylXEa2lZ_aOCX4daxxTZkVEGzwIhALvkClnx5m5eAT6Lxw7LZtEQcH6JENhJTMArwLf3sXwi");
headers.add(APNS_ID_HEADER, "eabeae54-14a8-11e5-b60b-1697f925ec7b");
headers.add(APNS_PUSH_TYPE_HEADER, "alert");
headers.add(APNS_EXPIRATION_HEADER, "0");
headers.add(APNS_PRIORITY_HEADER, "10");
final Http2HeadersFrame headersFrame = new DefaultHttp2HeadersFrame(headers, true);
streamChannel.writeAndFlush(headersFrame);
System.out.println("Sent HTTP/2 POST request to " + PATH);
// Wait for the responses (or for the latch to expire), then clean up the
// connections
if (!streamFrameResponseHandler.responseSuccessfullyCompleted()) {
System.err.println("Did not get HTTP/2 response in expected time.");
}
System.out.println("Finished HTTP/2 request, will close the connection.");
// Wait until the connection is closed.
channel.close().syncUninterruptibly();
} finally {
clientWorkerGroup.shutdownGracefully();
}
}
}
Http2ResponseHandler.java
public final class Http2ResponseHandler extends SimpleChannelInboundHandler<Http2StreamFrame> {
private final CountDownLatch latch = new CountDownLatch(1);
public void channelActive(ChannelHandlerContext ctx) {
String sendMessage = "{\"aps\":{\"alert\":\"hello\"}}";
ByteBuf messageBuffer = Unpooled.buffer();
messageBuffer.writeBytes(sendMessage.getBytes());
StringBuilder builder = new StringBuilder();
builder.append("request [");
builder.append(sendMessage);
builder.append("]");
System.out.println(builder.toString());
ctx.writeAndFlush(new DefaultHttp2DataFrame(messageBuffer, true));
}
@Override
protected void channelRead0(ChannelHandlerContext ctx, Http2StreamFrame msg) throws Exception {
ByteBuf content = ctx.alloc().buffer();
System.out.println(content);
System.out.println("Received HTTP/2 'stream' frame : " + msg);
// isEndStream() is not from a common interface, so we currently must check both
if (msg instanceof Http2DataFrame && ((Http2DataFrame) msg).isEndStream()) {
ByteBuf data = ((DefaultHttp2DataFrame) msg).content().alloc().buffer();
System.out.println(data.readCharSequence(256, Charset.forName("utf-8")).toString());
latch.countDown();
} else if (msg instanceof Http2HeadersFrame && ((Http2HeadersFrame) msg).isEndStream()) {
latch.countDown();
}
// String readMessage = ((ByteBuf) msg).toString(CharsetUtil.UTF_8);
//
// StringBuilder builder = new StringBuilder();
// builder.append("receive [");
// builder.append(readMessage);
// builder.append("]");
//
// System.out.println(builder.toString());
}
public void channelReadComplete(ChannelHandlerContext ctx) {
// ctx.flush();
ctx.close();
}
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
// Close the connection when an exception is raised.
cause.printStackTrace();
ctx.close();
}
/**
* Waits for the latch to be decremented (i.e. for an end of stream message to be received), or for
* the latch to expire after 5 seconds.
* @return true if a successful HTTP/2 end of stream message was received.
*/
public boolean responseSuccessfullyCompleted() {
try {
return latch.await(5, TimeUnit.SECONDS);
} catch (InterruptedException ie) {
System.err.println("Latch exception: " + ie.getMessage());
return false;
}
}
}
console log
Connected to [api.sandbox.push.apple.com:443]
Sent HTTP/2 POST request to /3/device/00fc13adff785122b4ad28809a3420982341241421348097878e577c991de8f0
PooledUnsafeDirectByteBuf(ridx: 0, widx: 0, cap: 256)
Received HTTP/2 'stream' frame : DefaultHttp2HeadersFrame(stream=3, headers=DefaultHttp2Headers[:status: 403, apns-id: eabeae54-14a8-11e5-b60b-1697f925ec7b], endStream=false, padding=0)
PooledUnsafeDirectByteBuf(ridx: 0, widx: 0, cap: 256)
Received HTTP/2 'stream' frame : DefaultHttp2DataFrame(stream=3, content=UnpooledSlicedByteBuf(ridx: 0, widx: 33, cap: 33/33, unwrapped: PooledUnsafeDirectByteBuf(ridx: 150, widx: 150, cap: 179)), endStream=true, padding=0)
Question
Did I send the headers and data correctly?
How can I convert this part to a String?
DefaultHttp2DataFrame(stream=3, content=UnpooledSlicedByteBuf(ridx: 0, widx: 33, cap: 33/33, unwrapped: PooledUnsafeDirectByteBuf(ridx: 150, widx: 150, cap: 179)), endStream=true, padding=0)
If you know the solution, please help me.
Answering my own question.
Http2ResponseHandler.java
public final class ResponseHandler extends SimpleChannelInboundHandler<Http2StreamFrame> {
private final CountDownLatch latch = new CountDownLatch(1);
@Override
protected void channelRead0(ChannelHandlerContext ctx, Http2StreamFrame msg) throws Exception {
if (msg instanceof Http2DataFrame && ((Http2DataFrame) msg).isEndStream()) {
DefaultHttp2DataFrame dataFrame = (DefaultHttp2DataFrame) msg;
ByteBuf dataContent = dataFrame.content();
String data = dataContent.toString(Charset.forName("utf-8"));
System.out.println(data);
latch.countDown();
} else if (msg instanceof Http2HeadersFrame && ((Http2HeadersFrame) msg).isEndStream()) {
DefaultHttp2HeadersFrame headerFrame = (DefaultHttp2HeadersFrame) msg;
DefaultHttp2Headers header = (DefaultHttp2Headers) headerFrame.headers();
System.out.println(header.get("apns-id"));
latch.countDown();
}
}
/**
* Waits for the latch to be decremented (i.e. for an end of stream message to be received), or for
* the latch to expire after 5 seconds.
* @return true if a successful HTTP/2 end of stream message was received.
*/
public boolean responseSuccessfullyCompleted() {
try {
return latch.await(5, TimeUnit.SECONDS);
} catch (InterruptedException ie) {
System.err.println("Latch exception: " + ie.getMessage());
return false;
}
}
}
Question
Did I send the headers and data correctly?
-> Answer
final Http2Headers headers = new DefaultHttp2Headers();
headers.method(HttpMethod.POST.asciiName());
headers.scheme(HttpScheme.HTTPS.name());
headers.path(PATH + notification.getToken());
headers.add("apns-topic", topic);
headers.add("key", "value");
// if a data frame follows, endStream must be false here
final Http2HeadersFrame headersFrame = new DefaultHttp2HeadersFrame(headers, false);
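A minimal sketch of the writes this implies, reusing streamChannel and the JSON payload from the question's client code: the headers frame goes out with endStream set to false, and the data frame carries the body and ends the stream.
```java
// Sketch: headers first (endStream = false), then the body as the final frame.
streamChannel.write(headersFrame);
ByteBuf payload = Unpooled.copiedBuffer("{\"aps\":{\"alert\":\"hello\"}}", CharsetUtil.UTF_8);
streamChannel.writeAndFlush(new DefaultHttp2DataFrame(payload, true)); // endStream = true
```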
How can I convert this part to a String?
-> Answer
DefaultHttp2DataFrame dataFrame = (DefaultHttp2DataFrame) msg;
ByteBuf dataContent = dataFrame.content();
String data = dataContent.toString(Charset.forName("utf-8"));
I hope it will be helpful to people who have the same curiosity as me.
I have been struggling with a configuration that uses Netty to stream bytes to a ClamAV service. I am running inside an Apache Camel route.
Using Netty, I am unable to intercept the "INSTREAM size limit exceeded" message.
INSTREAM
It is mandatory to prefix this command with n or z.
Scan a stream of data. The stream is sent to clamd in chunks, after INSTREAM, on the same socket on which the command was sent. This avoids the overhead of establishing new TCP connections and problems with NAT. The format of the chunk is: '<length><data>', where <length> is the size of the following data in bytes expressed as a 4 byte unsigned integer in network byte order and <data> is the actual chunk. Streaming is terminated by sending a zero-length chunk. Note: do not exceed StreamMaxLength as defined in clamd.conf, otherwise clamd will reply with INSTREAM size limit exceeded and close the connection.
Using a straight synchronous socket connection I have no issues. Can anyone point me in the right direction for how I should be using Netty to do this? Or should I just stick with a synchronous socket connection?
Implementation using synchronous sockets. Credit to https://github.com/solita/clamav-java "Antti Virtanen".
private class UseSocket implements Processor{
@Override
public void process(Exchange exchange) throws Exception{
try (BufferedInputStream message = new BufferedInputStream(exchange.getIn().getBody(InputStream.class));
Socket socket = new Socket("localhost", 3310);
BufferedOutputStream socketOutput = new BufferedOutputStream(socket.getOutputStream())){
byte[] command = "zINSTREAM\0".getBytes();
socketOutput.write(command);
socketOutput.flush();
byte[] chunk = new byte[2048];
int chunkSize;
try(BufferedInputStream socketInput = new BufferedInputStream(socket.getInputStream())){
for(chunkSize = message.read(chunk);chunkSize > -1;chunkSize = message.read(chunk)){
socketOutput.write(ByteBuffer.allocate(4).putInt(chunkSize).array());
socketOutput.write(chunk, 0, chunkSize);
socketOutput.flush();
if(processReply(socketInput, exchange)){
return;
}
}
socketOutput.write(ByteBuffer.allocate(4).putInt(0).array());
socketOutput.flush();
processReply(socketInput, exchange);
}
}
}
private boolean processReply(BufferedInputStream in, Exchange exchange) throws Exception{
if(in.available() > 0) {
logger.info("processing reply");
byte[] replyBytes = new byte[256];
int replySize = in.read(replyBytes);
if (replySize > 0) {
String reply = new String(replyBytes, 0, replySize, StandardCharsets.UTF_8);
String avStatus = "infected";
if ("stream: OK\0".equals(reply)) {
avStatus = "clean";
} else if ("INSTREAM size limit exceeded. ERROR\0".equals(reply)) {
avStatus = "overflow";
}
exchange.getIn().setHeader("av-status", avStatus);
return true;
}
}
return false;
}
}
Implementation using Netty with inbound and outbound channel handlers.
private class UseNetty implements Processor{
@Override
public void process(Exchange exchange) throws Exception{
logger.info(CLASS_NAME + ": Creating Netty client");
EventLoopGroup eventLoopGroup = new NioEventLoopGroup();
try{
Bootstrap bootstrap = new Bootstrap();
bootstrap.group(eventLoopGroup);
bootstrap.channel(NioSocketChannel.class);
bootstrap.remoteAddress(new InetSocketAddress("localhost", 3310));
bootstrap.handler(new ClamAvChannelIntializer(exchange));
ChannelFuture channelFuture = bootstrap.connect().sync();
channelFuture.channel().closeFuture().sync();
}catch(Exception ex) {
logger.error(CLASS_NAME + ": ERROR", ex);
}
finally
{
eventLoopGroup.shutdownGracefully();
logger.info(CLASS_NAME + ": Netty client closed");
}
}
}
public class ClamAvChannelIntializer extends ChannelInitializer<SocketChannel> {
private Exchange exchange;
public ClamAvChannelIntializer(Exchange exchange){
this.exchange = exchange;
}
@Override
protected void initChannel(SocketChannel socketChannel) throws Exception {
socketChannel.pipeline().addLast(new ClamAvClientWriter());
socketChannel.pipeline().addLast(new ClamAvClientHandler(exchange));
}
}
public class ClamAvClientHandler extends SimpleChannelInboundHandler<ByteBuf> {
String CLASS_NAME;
Logger logger;
private Exchange exchange;
public static final int MAX_BUFFER = 2048;
public ClamAvClientHandler(Exchange exchange){
super();
CLASS_NAME = this.getClass().getName();
logger = LoggerFactory.getLogger(CLASS_NAME);
this.exchange = exchange;
}
@Override
public void channelActive(ChannelHandlerContext channelHandlerContext) throws Exception{
logger.info(CLASS_NAME + ": Entering channelActive");
channelHandlerContext.write(exchange);
logger.info(CLASS_NAME + ": Exiting channelActive");
}
@Override
public void exceptionCaught(ChannelHandlerContext channelHandlerContext, Throwable cause){
cause.printStackTrace();
channelHandlerContext.close();
}
@Override
protected void channelRead0(ChannelHandlerContext channelHandlerContext, ByteBuf byteBuf) {
logger.info(CLASS_NAME + ": Entering channelRead0");
String reply = byteBuf.toString(CharsetUtil.UTF_8);
logger.info(CLASS_NAME + ": Reply = " + reply);
String avStatus = "infected";
if ("stream: OK\0".equals(reply)) {
avStatus = "clean";
} else if ("INSTREAM size limit exceeded. ERROR\0".equals(reply)) {
avStatus = "overflow";
} else{
logger.warn("Infected or unknown reply = " + reply);
}
exchange.getIn().setHeader("av-status", avStatus);
logger.info(CLASS_NAME + ": Exiting channelRead0");
channelHandlerContext.close();
}
}
public class ClamAvClientWriter extends ChannelOutboundHandlerAdapter {
String CLASS_NAME;
Logger logger;
public static final int MAX_BUFFER = 64000;//2^16
public ClamAvClientWriter(){
CLASS_NAME = this.getClass().getName();
logger = LoggerFactory.getLogger(CLASS_NAME);
}
@Override
public void write(ChannelHandlerContext channelHandlerContext, Object o, ChannelPromise channelPromise) throws Exception{
logger.info(CLASS_NAME + ": Entering write");
Exchange exchange = (Exchange)o;
try(BufferedInputStream message = new BufferedInputStream(exchange.getIn().getBody(InputStream.class))){
channelHandlerContext.writeAndFlush(Unpooled.copiedBuffer("zINSTREAM\0".getBytes()));
byte[] chunk = new byte[MAX_BUFFER];
for(int i=message.read(chunk);i>-1;i=message.read(chunk)){
byte[] chunkSize = ByteBuffer.allocate(4).putInt(i).array();
channelHandlerContext.write(Unpooled.copiedBuffer(chunkSize));
channelHandlerContext.writeAndFlush(Unpooled.copiedBuffer(chunk, 0, i));
}
channelHandlerContext.writeAndFlush(Unpooled.copiedBuffer(ByteBuffer.allocate(4).putInt(0).array()));
}
logger.info(CLASS_NAME + ": Exiting write");
}
}
I finally gave up on trying to use Netty for this. I created a new Camel Processor and packaged the socket stream in it. Code below in case anyone runs into a similar issue.
public class ClamAvInstream implements Processor {
Logger logger;
private final int MAX_BUFFER = 2048;
public ClamAvInstream() {
logger = LoggerFactory.getLogger(this.getClass().getName());
}
@Override
public void process(Exchange exchange) throws Exception {
try (BufferedInputStream message = new BufferedInputStream(exchange.getIn().getBody(InputStream.class));
Socket socket = new Socket("localhost", 3310);
BufferedOutputStream socketOutput = new BufferedOutputStream(socket.getOutputStream())) {
byte[] command = "zINSTREAM\0".getBytes();
socketOutput.write(command);
socketOutput.flush();
byte[] chunk = new byte[MAX_BUFFER];
int chunkSize;
try (BufferedInputStream socketInput = new BufferedInputStream(socket.getInputStream())) {
for (chunkSize = message.read(chunk); chunkSize > -1; chunkSize = message.read(chunk)) {
socketOutput.write(ByteBuffer.allocate(4).putInt(chunkSize).array());
socketOutput.write(chunk, 0, chunkSize);
socketOutput.flush();
receivedReply(socketInput, exchange);
}
socketOutput.write(ByteBuffer.allocate(4).putInt(0).array());
socketOutput.flush();
receivedReply(socketInput, exchange);
} catch(ClamAvException ex){ //close socketInput
logger.warn(ex.getMessage());
}
}//close message, socket, socketOutput
}
private class ClamAvException extends Exception{
private ClamAvException(String error){
super(error);
}
}
private void receivedReply(BufferedInputStream in, Exchange exchange) throws Exception{
if(in.available() > 0){
byte[] replyBytes = new byte[256];
int replySize = in.read(replyBytes);
if (replySize > 0) {
String reply = new String(replyBytes, 0, replySize, StandardCharsets.UTF_8);
logger.info("reply="+reply);
if(reply.contains("OK")){
exchange.getIn().setHeader("av-status", "clean");
}else if(reply.contains("ERROR")){
if(reply.equals("INSTREAM size limit exceeded. ERROR\0")){
exchange.getIn().setHeader("av-status", "overflow");
}else {
exchange.getIn().setHeader("av-status", "error");
}
throw new ClamAvException(reply);
}else if(reply.contains("FOUND")){
exchange.getIn().setHeader("av-status", "infected");
}else{
exchange.getIn().setHeader("av-status", "unknown");
}
}
}
}
}
I'm implementing a simple Netty server and client to send and receive files, something similar to cloud storage.
I have a server which handles incoming requests and sends the files back to the client. I also want my apps to be able to handle big files, which is why I divide such files into chunks and send them chunk by chunk. But there is an issue I can't resolve.
Let's say:
We have a 4 gb file on a server.
It is divided into 40 000 chunks.
Then they are sent to the client application, and I can see that all the chunks are written to the socket on the server, since I use an int field as a message (chunk) number and log each number as it is written.
But when the client receives the messages (chunks), for large files the process doesn't finish successfully and only some of the chunks (how many depends on the file size) are received.
The client starts receiving consecutive messages - 1, 2, 3, 4 ... 27878, 27879 - and then stops with no exception, although the last message from the server was, for example, 40000.
Almost forgot to say that I use JavaFX for the client app.
So I tried playing with the -Xms/-Xmx JVM options, but it didn't help.
Server
public class Server {
public void run() throws Exception {
EventLoopGroup mainGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(mainGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
protected void initChannel(SocketChannel socketChannel) throws Exception {
socketChannel.pipeline().addLast(
new ObjectDecoder(Constants.FRAME_SIZE, ClassResolvers.cacheDisabled(null)),
new ObjectEncoder(),
new MainHandler()
);
}
})
.childOption(ChannelOption.SO_KEEPALIVE, true);
ChannelFuture future = b.bind(8189).sync();
future.channel().closeFuture().sync();
} finally {
mainGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
}
public static void main(String[] args) throws Exception {
new Server().run();
}
}
Server handler
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
try {
if (msg == null) {
return;
}
if (msg instanceof FileRequest) {
FileRequest fr = (FileRequest) msg;
switch (fr.getFileCommand()) {
case DOWNLOAD:
sendFileToClient(ctx, fr.getFilename());
break;
case LIST_FILES:
listFiles(ctx);
break;
case DELETE:
deleteFileOnServer(fr);
listFiles(ctx);
break;
case SEND:
saveFileOnServer(fr);
listFiles(ctx);
break;
case SEND_PARTIAL_DATA:
savePartialDataOnServer(fr);
break;
}
}
} finally {
ReferenceCountUtil.release(msg);
}
}
Methods for sending files in chunks
private void sendFileToClient(ChannelHandlerContext ctx, String fileName) throws IOException {
Path path = Paths.get("server_storage/" + fileName);
if (Files.exists(path)) {
if (Files.size(path) > Constants.FRAME_SIZE) {
sendServerDataFrames(ctx, path);
ctx.writeAndFlush(new FileRequest(FileCommand.LIST_FILES));
} else {
FileMessage fm = new FileMessage(path);
ctx.writeAndFlush(fm);
}
}
}
private void sendServerDataFrames(ChannelHandlerContext ctx, Path path) throws IOException {
byte[] byteBuf = new byte[Constants.FRAME_CHUNK_SIZE];
FileMessage fileMessage = new FileMessage(path, byteBuf, 1);
FileRequest fileRequest = new FileRequest(FileCommand.SEND_PARTIAL_DATA, fileMessage);
FileInputStream fis = new FileInputStream(path.toFile());
int read;
while ((read = fis.read(byteBuf)) > 0) {
if (read < Constants.FRAME_CHUNK_SIZE) {
byteBuf = Arrays.copyOf(byteBuf, read);
fileMessage.setData(byteBuf);
}
ctx.writeAndFlush(fileRequest);
fileMessage.setMessageNumber(fileMessage.getMessageNumber() + 1);
}
System.out.println("server_storage/" + path.getFileName() + ", server last frame number: " + fileMessage.getMessageNumber());
System.out.println("server_storage/" + path.getFileName() + ": closing file stream.");
fis.close();
}
client handlers
@Override
public void initialize(URL location, ResourceBundle resources) {
Network.start();
Thread t = new Thread(() -> {
try {
while (true) {
AbstractMessage am = Network.readObject();
if (am instanceof FileMessage) {
FileMessage fm = (FileMessage) am;
Files.write(Paths.get("client_storage/" + fm.getFilename()), fm.getData(), StandardOpenOption.CREATE);
refreshLocalFilesList();
}
if (am instanceof FilesListMessage) {
FilesListMessage flm = (FilesListMessage) am;
refreshServerFilesList(flm.getFilesList());
}
if (am instanceof FileRequest) {
FileRequest fr = (FileRequest) am;
switch (fr.getFileCommand()) {
case DELETE:
deleteFile(fr.getFilename());
break;
case SEND_PARTIAL_DATA:
receiveFrames(fr);
break;
case LIST_FILES:
refreshLocalFilesList();
break;
}
}
}
} catch (ClassNotFoundException | IOException e) {
e.printStackTrace();
} finally {
Network.stop();
}
});
t.setDaemon(true);
t.start();
refreshLocalFilesList();
Network.sendMsg(new FileRequest(FileCommand.LIST_FILES));
}
private void receiveFrames(FileRequest fm) throws IOException {
Utils.processBytes(fm.getFileMessage(), "client_storage/");
}
public final class Utils {
public static void processBytes(FileMessage fm, String pathPart) {
Path path = Paths.get(pathPart + fm.getFilename());
byte[] data = fm.getData();
System.out.println(pathPart + path.getFileName() + ": " + fm.getMessageNumber());
try {
if (fm.getMessageNumber() == 1) {
Files.write(path, data, StandardOpenOption.CREATE_NEW);
} else {
Files.write(path, data, StandardOpenOption.WRITE, StandardOpenOption.APPEND);
}
}
catch (IOException e) {
e.printStackTrace();
}
}
}
This is what I see on the server.
server_storage/DVD5_OFFICE_2010_SE_SP2_VOLUME_X86_RU-KROKOZ.iso: 42151
server_storage/DVD5_OFFICE_2010_SE_SP2_VOLUME_X86_RU-KROKOZ.iso: 42152
server_storage/DVD5_OFFICE_2010_SE_SP2_VOLUME_X86_RU-KROKOZ.iso, server last frame number: 42153
server_storage/DVD5_OFFICE_2010_SE_SP2_VOLUME_X86_RU-KROKOZ.iso: closing file stream.
And this is what I see on the client.
client_storage/DVD5_OFFICE_2010_SE_SP2_VOLUME_X86_RU-KROKOZ.iso: 29055
client_storage/DVD5_OFFICE_2010_SE_SP2_VOLUME_X86_RU-KROKOZ.iso: 29056
client_storage/DVD5_OFFICE_2010_SE_SP2_VOLUME_X86_RU-KROKOZ.iso: 29057
There is no issue when sending files from the client to the server. In the debugger and in the Windows task manager I can see both processes working simultaneously, but that is not the case when a file is sent from the server to the client: first all the chunks are read, then they are sent to the client, which starts receiving them but fails to get all of them.
Please help. I have no idea what it could be. Thanks in advance.
I am developing a Netty HTTP server, but when I write the response in ChannelInboundHandlerAdapter.channelRead0, the response body comes from another server and its size is unknown, so the upstream HTTP response headers may use either Content-Length or chunked transfer encoding. I use a buffer: if it is large enough to hold the full data, I respond with Content-Length regardless of how the upstream replied; otherwise I use chunked encoding.
How do I hold onto the write channel of the first connection and pass it to the second handler in order to write the response? (I just pass ctx directly to write, but nothing comes back.)
How do I conditionally decide whether to write chunked data to the channel or normal data with Content-Length? (Adding a ChunkedWriteHandler in channelRead0 when chunking is needed does not seem to work.)
Take this simple code as an example:
```java
EventLoopGroup bossGroup = new NioEventLoopGroup();
final EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap serverBootstrap = new ServerBootstrap();
serverBootstrap.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class)
.handler(new LoggingHandler(LogLevel.INFO))
.childHandler(new ChannelInitializer<Channel>(){
@Override
protected void initChannel(Channel ch) throws Exception
{
System.out.println("Start, I accept client");
ChannelPipeline pipeline = ch.pipeline();
// Uncomment the following line if you want HTTPS
// SSLEngine engine =
// SecureChatSslContextFactory.getServerContext().createSSLEngine();
// engine.setUseClientMode(false);
// pipeline.addLast("ssl", new SslHandler(engine));
pipeline.addLast("decoder", new HttpRequestDecoder());
// Uncomment the following line if you don't want to handle HttpChunks.
// pipeline.addLast("aggregator", new HttpChunkAggregator(1048576));
pipeline.addLast("encoder", new HttpResponseEncoder());
// Remove the following line if you don't want automatic content
// compression.
//pipeline.addLast("aggregator", new HttpChunkAggregator(1048576));
pipeline.addLast("chunkedWriter", new ChunkedWriteHandler());
pipeline.addLast("deflater", new HttpContentCompressor());
pipeline.addLast("handler", new SimpleChannelInboundHandler<HttpObject>(){
@Override
protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) throws Exception
{
System.out.println("msg=" + msg);
final ChannelHandlerContext ctxClient2Me = ctx;
// TODO: Implement this method
Bootstrap bs = new Bootstrap();
try{
//bs.resolver(new DnsAddressResolverGroup(NioDatagramChannel.class, DefaultDnsServerAddressStreamProvider.INSTANCE));
//.option(ChannelOption.TCP_NODELAY, java.lang.Boolean.TRUE)
bs.resolver(DefaultAddressResolverGroup.INSTANCE);
}catch(Exception e){
e.printStackTrace();
}
bs.channel(NioSocketChannel.class);
EventLoopGroup cg = workerGroup;//new NioEventLoopGroup();
bs.group(cg).handler(new ChannelInitializer<Channel>(){
@Override
protected void initChannel(Channel ch) throws Exception
{
System.out.println("start, server accept me");
// TODO: Implement this method
ch.pipeline().addLast("http-request-encode", new HttpRequestEncoder());
ch.pipeline().addLast(new HttpResponseDecoder());
ch.pipeline().addLast("http-res", new SimpleChannelInboundHandler<HttpObject>(){
@Override
protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) throws Exception
{
// TODO: Implement this method
System.out.println("target = " + msg);
//
if(msg instanceof HttpResponse){
HttpResponse res = (HttpResponse) msg;
HttpUtil.isTransferEncodingChunked(res);
DefaultHttpResponse resClient2Me = new DefaultHttpResponse(HttpVersion.HTTP_1_1, res.getStatus());
//resClient2Me.headers().set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
//resClient2Me.headers().set(HttpHeaderNames.CONTENT_LENGTH, "");
ctxClient2Me.write(resClient2Me);
}
if(msg instanceof LastHttpContent){
// now response the request of the client, it wastes x seconds from receiving request to response
ctxClient2Me.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT).addListener(ChannelFutureListener.CLOSE);
ctx.close();
}else if( msg instanceof HttpContent){
//ctxClient2Me.write(new DefaultHttpContent(msg)); write chunk by chunk ?
}
}
});
System.out.println("end, server accept me");
}
});
final URI uri = new URI("http://example.com/");
String host = uri.getHost();
ChannelFuture connectFuture= bs.connect(host, 80);
System.out.println("to connect me to server");
connectFuture.addListener(new ChannelFutureListener(){
@Override
public void operationComplete(ChannelFuture cf) throws Exception
{
}
});
ChannelFuture connetedFuture = connectFuture.sync(); // TODO optimize, wait io
System.out.println("connected me to server");
DefaultFullHttpRequest req = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, uri.getRawPath());
//req.headers().set(HttpHeaderNames.HOST, "");
connetedFuture.channel().writeAndFlush(req);
System.out.println("end of Client2Me channelRead0");
System.out.println("For the seponse of Me2Server, see SimpleChannelInboundHandler.channelRead0");
}
});
System.out.println("end, I accept client");
}
});
System.out.println("========");
ChannelFuture channelFuture = serverBootstrap.bind(2080).sync();
channelFuture.channel().closeFuture().sync();
} finally {
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
```
After struggling a bit with sending the response from a non-Netty event loop thread, I finally figured out the problem. If your client is closing the output stream using
socketChannel.shutdownOutput()
then you need to set the ALLOW_HALF_CLOSURE option to true in Netty so it won't close the channel.
Here's a sample server. The client is left as an exercise to the reader :-)
final ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.option(ChannelOption.SO_KEEPALIVE, true)
.option(ChannelOption.ALLOW_HALF_CLOSURE, true) // This option doesn't work
.handler(new LoggingHandler(LogLevel.INFO))
.childHandler(new ChannelInitializer<io.netty.channel.socket.SocketChannel>() {
@Override
protected void initChannel(io.netty.channel.socket.SocketChannel ch) throws Exception {
ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
@Override
public void channelRegistered(ChannelHandlerContext ctx) throws Exception {
ctx.channel().config().setOption(ChannelOption.ALLOW_HALF_CLOSURE, true); // This is important
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
ByteBuffer byteBuffer = ((ByteBuf) msg).nioBuffer();
String id = ctx.channel().id().asLongText();
// When Done reading all the bytes, send response 1 second later
timer.schedule(new TimerTask() {
#Override
public void run() {
ctx.write(Unpooled.copiedBuffer(CONTENT.asReadOnlyBuffer()));
ctx.flush();
ctx.close();
log.info("[{}] Server time to first response byte: {}", id, System.currentTimeMillis() - startTimes.get(id));
startTimes.remove(id);
}
}, 1000);
}
}
}
});
Channel ch = b.bind("localhost", PORT).sync().channel();
ch.closeFuture().sync();
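As far as I can tell, the commented line marked "This option doesn't work" fails because on a ServerBootstrap, option() configures the parent (listening) channel while childOption() configures the accepted connections; under that assumption, the per-channel call in channelRegistered could be replaced by a single child option:
```java
// Sketch: enable half-closure for every accepted child channel up front.
b.childOption(ChannelOption.ALLOW_HALF_CLOSURE, true);
```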
Of course, as mentioned by others in the thread, you cannot send Strings; you need to send a ByteBuf, e.g. created with Unpooled.copiedBuffer.
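For example (a sketch, assuming ctx is the ChannelHandlerContext of the connection being answered):
```java
// A String must be wrapped in a ByteBuf before it can be written to the channel.
ctx.writeAndFlush(Unpooled.copiedBuffer("response text", CharsetUtil.UTF_8));
```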
See the comments about Channel below: you can keep the Channel received in ChannelInboundHandlerAdapter.channelRead(ChannelHandlerContext ctx, Object msg) (msg is not released automatically after returning) or SimpleChannelInboundHandler.channelRead0(ChannelHandlerContext ctx, I msg) (which releases the received message automatically after returning) for later use. You can refer to the example at the end, which passes the channel to another ChannelHandler.
All I/O operations are asynchronous.
All I/O operations in Netty are asynchronous. This means that any I/O call returns immediately, with no guarantee that the requested I/O operation has completed by the time the call returns. Instead, you get back a ChannelFuture instance which will notify you when the requested I/O operation has succeeded, failed, or been canceled.
public interface Channel extends AttributeMap, Comparable<Channel> {
/**
* Request to write a message via this {@link Channel} through the {@link ChannelPipeline}.
* This method will not request to actual flush, so be sure to call {@link #flush()}
* once you want to request to flush all pending data to the actual transport.
*/
ChannelFuture write(Object msg);
/**
* Request to write a message via this {@link Channel} through the {@link ChannelPipeline}.
* This method will not request to actual flush, so be sure to call {@link #flush()}
* once you want to request to flush all pending data to the actual transport.
*/
ChannelFuture write(Object msg, ChannelPromise promise);
/**
* Request to flush all pending messages.
*/
Channel flush();
/**
* Shortcut for call {@link #write(Object, ChannelPromise)} and {@link #flush()}.
*/
ChannelFuture writeAndFlush(Object msg, ChannelPromise promise);
/**
* Shortcut for call {@link #write(Object)} and {@link #flush()}.
*/
ChannelFuture writeAndFlush(Object msg);
}
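For example, since write() only hands back a future, completion or failure has to be observed through a listener; a minimal sketch, assuming channel and msg are in scope:
```java
// Asynchronous write: the call returns immediately, the listener fires on completion.
ChannelFuture future = channel.writeAndFlush(msg);
future.addListener((ChannelFutureListener) f -> {
    if (f.isSuccess()) {
        // the message has been handed to the transport
    } else {
        f.cause().printStackTrace();
    }
});
```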
There is no need to worry about choosing between chunked and Content-Length if you have added an HttpResponseEncoder (a subclass of HttpObjectEncoder, which keeps a private field, private int state = ST_INIT;, to remember whether to encode the HTTP body as chunked) to the ChannelPipeline; the only thing to do is add a 'transfer-encoding: chunked' header, e.g. HttpUtil.setTransferEncodingChunked(srcRes, true);.
```java
public class NettyToServerChat extends SimpleChannelInboundHandler<HttpObject> {
private static final Logger LOGGER = LoggerFactory.getLogger(NettyToServerChat.class);
public static final String CHANNEL_NAME = "NettyToServer";
protected final ChannelHandlerContext ctxNettyToServer; // context of the client connection, kept so the response can be written back
/** Determines if the response supports keepalive */
private boolean responseKeepalive = true;
/** Determines if the response is chunked */
private boolean responseChunked = false;
public NettyToServerChat(ChannelHandlerContext ctxClientToNetty) {
this.ctxNettyToServer = ctxClientToNetty;
}
@Override
protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) throws Exception {
if (msg instanceof HttpResponse) {
HttpResponse response = (HttpResponse) msg;
HttpResponseStatus resStatus = response.status();
//LOGGER.info("Status Line: {} {} {}", response.getProtocolVersion(), resStatus.code(), resStatus.reasonPhrase());
if (!response.headers().isEmpty()) {
for (CharSequence name : response.headers().names()) {
for (CharSequence value : response.headers().getAll(name)) {
//LOGGER.info("HEADER: {} = {}", name, value);
}
}
//LOGGER.info("");
}
//response.headers().set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
HttpResponse srcRes = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
if (HttpUtil.isTransferEncodingChunked(response)) {
responseChunked = true;
HttpUtil.setTransferEncodingChunked(srcRes, true);
ctxNettyToServer.channel().write(srcRes);
//ctx.channel().pipeline().addAfter(CHANNEL_NAME, "ChunkedWrite", new ChunkedWriteHandler());
} else {
ctxNettyToServer.channel().write(srcRes);
//ctx.channel().pipeline().remove("ChunkedWrite");
}
}
if (msg instanceof LastHttpContent) { // prioritize the subclass interface
ctx.close();
LOGGER.debug("ctxNettyToServer.channel().isWritable() = {}", ctxNettyToServer.channel().isWritable());
Thread.sleep(3000);
LOGGER.debug("ctxNettyToServer.channel().isWritable() = {}", ctxNettyToServer.channel().isWritable());
if(!responseChunked){
HttpContent content = (HttpContent) msg;
// https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/SimpleChannelInboundHandler.java
// @see {@link SimpleChannelInboundHandler<I>#channelRead(ChannelHandlerContext, I)}
ctxNettyToServer.writeAndFlush(content.retain()).addListener(ChannelFutureListener.CLOSE);
}else{
ctxNettyToServer.close();
}
LOGGER.debug("ctxNettyToServer.channel().isWritable() = {}", ctxNettyToServer.channel().isWritable());
} else if (msg instanceof HttpContent) {
HttpContent content = (HttpContent) msg;
// We need to do a ReferenceCountUtil.retain() on the buffer to increase the reference count by 1
ctxNettyToServer.write(content.retain());
}
}
}
```
I have written a simple message sender and receiver.
If I send a few messages, everything is fine. If I send a lot of messages, the last ones are not processed.
If I send 100 IDs I get them all:
1 2 3 4 5 6 7 ... 100
If I send 1000 IDs I get 1...N (N < 1000):
1 2 3 4 5 6 7 ... 958 959 960
1 2 3 4 5 6 7 ... 448 449 450
1 2 3 4 5 6 7 ... 652 653 654
Server
public class ServerTCP {
private AmountServer server;
public ServerTCP(int _PORT, AmountServer _server) {
final int PORT = _PORT;
server = _server;
// Configure the server.
EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.option(ChannelOption.AUTO_READ, true)
.option(ChannelOption.SO_BACKLOG, 100)
.handler(new LoggingHandler(LogLevel.INFO))
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
p.addLast(new ServerHandler(server));
}
});
// Start the server.
ChannelFuture f = b.bind(PORT).sync();
// Wait until the server socket is closed.
f.channel().closeFuture().sync();
} catch (InterruptedException ex) {
Logger.getLogger(ServerTCP.class.getName()).log(Level.SEVERE, null, ex);
} finally {
// Shut down all event loops to terminate all threads.
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
}
}
Server-Handler-Read
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
ByteBuf in = (ByteBuf) msg;
while (in.isReadable()) {
int type = in.readInt();
int id = in.readInt();
System.out.println(id);
long amn = in.readLong();
}
in.clear();
in.release();
}
Client
public static void main(String[] args) throws Exception {
EventLoopGroup group = new NioEventLoopGroup();
try {
Bootstrap b = new Bootstrap();
b.group(group)
.channel(NioSocketChannel.class)
.option(ChannelOption.SO_BACKLOG, 500)
.handler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
p.addLast(new ClientHandler());
}
});
ChannelFuture f = b.connect(HOST, PORT).sync();
int i = 0;
while (i < 1005) {
i++;
ByteBuf firstMessage = Unpooled.buffer(AccountServiceClient.SIZE);
firstMessage.writeInt(1); //Const
firstMessage.writeInt(i); //Id
firstMessage.writeLong(1L);
System.out.println("Step " + i);
f.channel().writeAndFlush(firstMessage);
f.channel().flush();
}
} catch (Exception e) {
e.printStackTrace();
} finally {
// Shut down the event loop to terminate all threads.
group.shutdownGracefully();
}
}
}
Excuse my English
It sounds like you want your message exchange to be more reliable. Consider introducing some handshaking, or some other mechanism that prevents the client from exiting prematurely. Also, it doesn't look like the client is closing the socket correctly. I would model this simple use case more closely on the Netty echo example to make sure you have your bases covered.
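A minimal sketch of what closing the client correctly could look like, assuming f is the connect future from the client code above: keep the future of the last write, wait for it, and close the channel explicitly before shutting down the event loop group, so queued messages are not dropped.
```java
// Sketch: make sure the final write completes and the channel is closed before shutdown.
ChannelFuture lastWrite = null;
for (int i = 1; i <= 1005; i++) {
    ByteBuf msg = Unpooled.buffer(16);
    msg.writeInt(1);   // type constant
    msg.writeInt(i);   // id
    msg.writeLong(1L); // amount
    lastWrite = f.channel().writeAndFlush(msg);
}
if (lastWrite != null) {
    lastWrite.sync();           // wait until the last message has actually been written
}
f.channel().close().sync();     // close the socket explicitly
group.shutdownGracefully();     // only then shut down the event loop group
```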