Netty HTTP2 Frame Forwarding/Proxying - pipeline config question - java

I'm trying to create a Netty (4.1) POC which can forward h2c (HTTP2 without TLS) frames onto an h2c server - i.e. essentially creating a Netty h2c proxy service. Wireshark shows Netty sending the frames out, and the h2c server replying (for example with the response header and data), although I'm then having a few issues receiving/processing the response HTTP2 frames within Netty itself.
As a starting point, I've adapted the multiplex.server example (io.netty.example.http2.helloworld.multiplex.server) so that in HelloWorldHttp2Handler, instead of responding with dummy messages, I connect to a remote node:
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    Channel remoteChannel = null;
    // create or retrieve the remote channel (one to one mapping) associated with this incoming (client) channel
    synchronized (lock) {
        if (!ctx.channel().hasAttr(remoteChannelKey)) {
            remoteChannel = this.connectToRemoteBlocking(ctx.channel());
            ctx.channel().attr(remoteChannelKey).set(remoteChannel);
        } else {
            remoteChannel = ctx.channel().attr(remoteChannelKey).get();
        }
    }

    if (msg instanceof Http2HeadersFrame) {
        onHeadersRead(remoteChannel, (Http2HeadersFrame) msg);
    } else if (msg instanceof Http2DataFrame) {
        final Http2DataFrame data = (Http2DataFrame) msg;
        onDataRead(remoteChannel, data);
        send(ctx.channel(), new DefaultHttp2WindowUpdateFrame(data.initialFlowControlledBytes()).stream(data.stream()));
    } else {
        super.channelRead(ctx, msg);
    }
}

private void send(Channel remoteChannel, Http2Frame frame) {
    remoteChannel.writeAndFlush(frame).addListener(new GenericFutureListener() {
        @Override
        public void operationComplete(Future future) throws Exception {
            if (!future.isSuccess()) {
                future.cause().printStackTrace();
            }
        }
    });
}
/**
 * If we receive a data frame with end-of-stream set, forward it to the remote peer.
 */
private void onDataRead(Channel remoteChannel, Http2DataFrame data) throws Exception {
    if (data.isEndStream()) {
        send(remoteChannel, data);
    } else {
        // We do not forward this frame to the remote peer, so we need to release it.
        data.release();
    }
}

/**
 * If we receive a headers frame with end-of-stream set, forward it to the remote peer.
 */
private void onHeadersRead(Channel remoteChannel, Http2HeadersFrame headers) throws Exception {
    if (headers.isEndStream()) {
        send(remoteChannel, headers);
    }
}
private Channel connectToRemoteBlocking(Channel clientChannel) {
    try {
        Bootstrap b = new Bootstrap();
        b.group(new NioEventLoopGroup());
        b.channel(NioSocketChannel.class);
        b.option(ChannelOption.SO_KEEPALIVE, true);
        b.remoteAddress("localhost", H2C_SERVER_PORT);
        b.handler(new Http2ClientInitializer());

        final Channel channel = b.connect().syncUninterruptibly().channel();
        channel.config().setAutoRead(true);
        channel.attr(clientChannelKey).set(clientChannel);
        return channel;
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
}
When initializing the channel pipeline (in Http2ClientInitializer), if I do something like:
@Override
public void initChannel(SocketChannel ch) throws Exception {
    ch.pipeline().addLast(Http2MultiplexCodecBuilder.forClient(new Http2OutboundClientHandler()).frameLogger(TESTLOGGER).build());
    ch.pipeline().addLast(new UserEventLogger());
}
Then I can see the frames being forwarded correctly in Wireshark and the h2c server replies with the header and frame data, but Netty replies with a GOAWAY [INTERNAL_ERROR] due to:
14:23:09.324 [nioEventLoopGroup-3-1] WARN i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.lang.IllegalStateException: Stream object required for identifier: 1
    at io.netty.handler.codec.http2.Http2FrameCodec$FrameListener.requireStream(Http2FrameCodec.java:587)
    at io.netty.handler.codec.http2.Http2FrameCodec$FrameListener.onHeadersRead(Http2FrameCodec.java:550)
    at io.netty.handler.codec.http2.Http2FrameCodec$FrameListener.onHeadersRead(Http2FrameCodec.java:543)...
If I instead use the pipeline configuration from the HTTP/2 client example, e.g.:
@Override
public void initChannel(SocketChannel ch) throws Exception {
    final Http2Connection connection = new DefaultHttp2Connection(false);
    ch.pipeline().addLast(
            new Http2ConnectionHandlerBuilder()
                    .connection(connection)
                    .frameLogger(TESTLOGGER)
                    .frameListener(new DelegatingDecompressorFrameListener(connection,
                            new InboundHttp2ToHttpAdapterBuilder(connection)
                                    .maxContentLength(maxContentLength)
                                    .propagateSettings(true)
                                    .build()))
                    .build());
}
Then I instead get:
java.lang.UnsupportedOperationException: unsupported message type: DefaultHttp2HeadersFrame (expected: ByteBuf, FileRegion)
    at io.netty.channel.nio.AbstractNioByteChannel.filterOutboundMessage(AbstractNioByteChannel.java:283)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:882)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1365)
If I then also add in an HTTP/2 frame codec (Http2MultiplexCodec or Http2FrameCodec):
@Override
public void initChannel(SocketChannel ch) throws Exception {
    final Http2Connection connection = new DefaultHttp2Connection(false);
    ch.pipeline().addLast(
            new Http2ConnectionHandlerBuilder()
                    .connection(connection)
                    .frameLogger(TESTLOGGER)
                    .frameListener(new DelegatingDecompressorFrameListener(connection,
                            new InboundHttp2ToHttpAdapterBuilder(connection)
                                    .maxContentLength(maxContentLength)
                                    .propagateSettings(true)
                                    .build()))
                    .build());
    ch.pipeline().addLast(Http2MultiplexCodecBuilder.forClient(new Http2OutboundClientHandler()).frameLogger(TESTLOGGER).build());
}
Then Netty sends two connection preface frames, resulting in the h2c server rejecting the connection with GOAWAY [PROTOCOL_ERROR].
So that is where I am having issues: configuring the remote channel pipeline such that it will send the Http2Frame objects without error, but also receive/process them back within Netty when the response arrives.
Does anyone have any ideas/suggestions please?

I ended up getting this working; the following GitHub issues contain some useful code/info:
Generating a Http2StreamChannel from a Channel
A Http2Client with Http2MultiplexCodec
I need to investigate a few caveats further, although the gist of the approach is that you need to wrap your channel in a Http2StreamChannel, meaning that my connectToRemoteBlocking() method ends up as:
private Http2StreamChannel connectToRemoteBlocking(Channel clientChannel) {
    try {
        Bootstrap b = new Bootstrap();
        b.group(new NioEventLoopGroup()); // TODO reuse existing event loop
        b.channel(NioSocketChannel.class);
        b.option(ChannelOption.SO_KEEPALIVE, true);
        b.remoteAddress("localhost", H2C_SERVER_PORT);
        b.handler(new Http2ClientInitializer());

        final Channel channel = b.connect().syncUninterruptibly().channel();
        channel.config().setAutoRead(true);
        channel.attr(clientChannelKey).set(clientChannel);

        // TODO make more robust, see example at https://github.com/netty/netty/issues/8692
        final Http2StreamChannelBootstrap bs = new Http2StreamChannelBootstrap(channel);
        final Http2StreamChannel http2Stream = bs.open().syncUninterruptibly().get();
        http2Stream.attr(clientChannelKey).set(clientChannel);
        http2Stream.pipeline().addLast(new Http2OutboundClientHandler()); // will read: DefaultHttp2HeadersFrame, DefaultHttp2DataFrame
        return http2Stream;
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
}
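The Http2ClientInitializer itself can stay minimal; a sketch of what it might look like (my assumption, reusing the Http2MultiplexCodecBuilder setup and TESTLOGGER from the first attempt, since the per-stream handler is now added to the Http2StreamChannel above):
@Override
public void initChannel(SocketChannel ch) throws Exception {
    // The codec performs the h2c connection preface/SETTINGS exchange once;
    // frames for the stream opened via Http2StreamChannelBootstrap are routed
    // to the handler added on the Http2StreamChannel's own pipeline.
    ch.pipeline().addLast(Http2MultiplexCodecBuilder
            .forClient(new Http2OutboundClientHandler())
            .frameLogger(TESTLOGGER)
            .build());
}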
Then, to prevent the "Stream object required for identifier: 1" error (which is essentially saying: 'this (client) HTTP2 request is new, so why do we have this specific stream?', since we were implicitly reusing the stream object from the originally received 'server' request), we need to use the remote channel's own stream when forwarding our data on:
private void onHeadersRead(Http2StreamChannel remoteChannel, Http2HeadersFrame headers) throws Exception {
    if (headers.isEndStream()) {
        headers.stream(remoteChannel.stream());
        send(remoteChannel, headers);
    }
}
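The same reassignment applies when forwarding data frames; a sketch of the matching onDataRead under the same assumptions:
private void onDataRead(Http2StreamChannel remoteChannel, Http2DataFrame data) throws Exception {
    // Re-target the frame at the remote channel's own stream before forwarding;
    // otherwise the codec rejects the stream object taken from the inbound request.
    data.stream(remoteChannel.stream());
    send(remoteChannel, data);
}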
Then the configured channel inbound handler (which I've called Http2OutboundClientHandler due to its usage) will receive the incoming HTTP2 frames in the normal way:
@Sharable
public class Http2OutboundClientHandler extends SimpleChannelInboundHandler<Http2Frame> {

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        super.exceptionCaught(ctx, cause);
        cause.printStackTrace();
        ctx.close();
    }

    @Override
    public void channelRead0(ChannelHandlerContext ctx, Http2Frame msg) throws Exception {
        System.out.println("Http2OutboundClientHandler Http2Frame Type: " + msg.getClass().toString());
    }
}

Related

How to correctly close netty channel without workgroup termination

I have the following binding to handle UDP packets:
private void doStartServer() {
    final UDPPacketHandler udpPacketHandler = new UDPPacketHandler(messageDecodeHandler);
    workerGroup = new NioEventLoopGroup(threadPoolSize);
    try {
        final Bootstrap bootstrap = new Bootstrap();
        bootstrap
                .group(workerGroup)
                .handler(new LoggingHandler(nettyLevel))
                .channel(NioDatagramChannel.class)
                .option(ChannelOption.SO_BROADCAST, true)
                .handler(udpPacketHandler);
        bootstrap
                .bind(serverIp, serverPort)
                .sync()
                .channel()
                .closeFuture()
                .await();
    } finally {
        stop();
    }
}
and the handler:
@ChannelHandler.Sharable // << note this
@Slf4j
@AllArgsConstructor
public class UDPPacketHandler extends SimpleChannelInboundHandler<DatagramPacket> {

    private final MessageP54Handler messageP54Handler;

    @Override
    public void channelReadComplete(final ChannelHandlerContext ctx) {
        ctx.flush();
    }

    @Override
    public void exceptionCaught(final ChannelHandlerContext ctx, final Throwable cause) {
        log.error("Exception in UDP handler", cause);
        ctx.close();
    }
}
At some point I get this exception: java.net.SocketException: Network dropped connection on reset: no further information, which is handled in exceptionCaught. This triggers the ChannelHandlerContext to close, and at that point my whole server stops (executing the finally block from the first snippet).
How do I correctly handle the exception so that I can handle new connections even after such an exception occurs?
You shouldn't close the ChannelHandlerContext on an IOException when using a DatagramChannel. As DatagramChannel is "connection-less", the exception is specific to one "receive" or one "send" operation. So just log it (or whatever else you want to do) and move on.
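In other words, a minimal sketch of the handler method:
@Override
public void exceptionCaught(final ChannelHandlerContext ctx, final Throwable cause) {
    // For a connection-less DatagramChannel the IOException only concerns a single
    // send/receive operation, so log it and keep the channel open for further packets.
    log.error("Exception in UDP handler", cause);
    // note: no ctx.close() here
}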

How to identify the MQTT topic that received the message?

The client is subscribed to an x/# topic. There is the possibility of receiving messages on the topics x/start and x/stop, and depending on the topic, it performs an action. I wonder how I can identify whether the message arrived on the start or the stop topic.
In the current code, I send an "action" key in the JSON: "start" or "stop". I want to delete this key and use the format described above, identifying the topic.
If any further information is needed, please ask and I will edit the post!
JDK 8
The code:
private MqttCallback callback = new MqttCallback() {

    public void connectionLost(Throwable throwable) {
        try {
            connect();
        } catch (MqttException e) {
            e.printStackTrace();
        }
    }

    public void messageArrived(String s, MqttMessage mqttMessage) throws Exception {
        String messageReceived = new String(mqttMessage.getPayload());
        actionPerformed(messageReceived);
    }

    public void deliveryComplete(IMqttDeliveryToken iMqttDeliveryToken) {
    }
};

private void actionPerformed(String message) throws IOException {
    ClientDTO clientDTO = new ObjectMapper().readValue(message, ClientDTO.class);
    if (clientDTO.getAction().equalsIgnoreCase("start")) {
        startView(clientDTO);
    } else if (clientDTO.getAction().equalsIgnoreCase("stop")) {
        stopView();
    }
}

public void connect() throws MqttException {
    MqttConnectOptions options = new MqttConnectOptions();
    options.setUserName("a_nice_username");
    options.setPassword("a_cool_password".toCharArray());
    options.setAutomaticReconnect(true);
    MqttClient client = new MqttClient("someaddress", MqttClient.generateClientId());
    client.setCallback(callback);
    try {
        client.connect(options);
        client.subscribe(topic);
        TaskbarIcon.alteraIconeOnline();
    } catch (Exception e) {
        TaskbarIcon.alteraIconeOffline();
    }
}

public void tipoConexao(int tipoConex) throws IOException {
    switch (tipoConex) {
        case 0:
            topic += "/operador/" + getIdReceived() + "/#";
            System.out.println(topic);
            break;
        //etc
    }
The s parameter in this method is the topic: public void messageArrived(String s, MqttMessage mqttMessage)
As is very well documented here:
messageArrived
void messageArrived(java.lang.String topic, MqttMessage message) throws java.lang.Exception
This method is called when a message arrives from the server.
This method is invoked synchronously by the MQTT client. An acknowledgment is not sent back to the server until this method returns cleanly.
If an implementation of this method throws an Exception, then the client will be shut down. When the client is next re-connected, any QoS 1 or 2 messages will be redelivered by the server.
Any additional messages which arrive while an implementation of this method is running, will build up in memory, and will then back up on the network.
If an application needs to persist data, then it should ensure the data is persisted prior to returning from this method, as after returning from this method, the message is considered to have been delivered, and will not be reproducible.
It is possible to send a new message within an implementation of this callback (for example, a response to this message), but the implementation must not disconnect the client, as it will be impossible to send an acknowledgment for the message being processed, and a deadlock will occur.
Parameters:
topic - name of the topic on the message was published to
message - the actual message.
Throws:
java.lang.Exception - if a terminal error has occurred, and the client should be shut down.
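Putting that together, you can branch on the topic parameter and drop the "action" key from the JSON; a minimal sketch based on the code in the question (the exact topic suffixes are an assumption):
public void messageArrived(String topic, MqttMessage mqttMessage) throws Exception {
    String messageReceived = new String(mqttMessage.getPayload());
    ClientDTO clientDTO = new ObjectMapper().readValue(messageReceived, ClientDTO.class);
    // Dispatch on the topic the message arrived on, not on a key in the payload.
    if (topic.endsWith("/start")) {
        startView(clientDTO);
    } else if (topic.endsWith("/stop")) {
        stopView();
    }
}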

Netty 4 - The pool returns a channel which is not yet ready to send the actual message

I have created an inbound handler of type SimpleChannelInboundHandler and added it to the pipeline. My intention is that every time a connection is established, I send an application message called a session open message, making the connection ready to send the actual messages. To achieve this, the above inbound handler overrides channelActive(), where the session open message is sent; in response to that I get a session open confirmation message. Only after that should I be able to send any number of actual business messages. I am using FixedChannelPool, initialised as follows. This works well at startup. But if the remote host closes the connection, and a message is then sent by calling sendMessage() below, the message is sent even before the session open message from channelActive() and its response have gone through. So the server ignores the message, as the session is not yet open when the business message is sent.
What I am looking for is for the pool to return only those channels whose channelActive() event has already sent the session open message and which have received the session open confirmation message from the server. How do I deal with this situation?
public class SessionHandler extends SimpleChannelInboundHandler<byte[]> {

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        super.channelActive(ctx);
        if (ctx.channel().isWritable()) {
            ctx.channel().writeAndFlush("open session message".getBytes());
        }
    }
}
// At the time of loading the application
public void init() {
    final Bootstrap bootStrap = new Bootstrap();
    bootStrap.group(group).channel(NioSocketChannel.class).remoteAddress(hostname, port);
    fixedPool = new FixedChannelPool(bootStrap, getChannelHandler(), 5);
    // This is done to initialise connections; the channelActive() from the above handler is invoked to open the session on startup
    for (int i = 0; i < config.getMaxConnections(); i++) {
        fixedPool.acquire().addListener(new FutureListener<Channel>() {
            @Override
            public void operationComplete(Future<Channel> future) throws Exception {
                if (future.isSuccess()) {
                } else {
                    LOGGER.error("Channel initialization failed...>>", future.cause());
                }
            }
        });
    }
}
// To actually send the message, the following method is invoked by the application.
public void sendMessage(final String businessMessage) {
    fixedPool.acquire().addListener(new FutureListener<Channel>() {
        @Override
        public void operationComplete(Future<Channel> future) throws Exception {
            if (future.isSuccess()) {
                Channel channel = future.get();
                if (channel.isOpen() && channel.isActive() && channel.isWritable()) {
                    channel.writeAndFlush(businessMessage).addListener(new GenericFutureListener<ChannelFuture>() {
                        @Override
                        public void operationComplete(ChannelFuture future) throws Exception {
                            if (future.isSuccess()) {
                                // success msg
                            } else {
                                // failure msg
                            }
                        }
                    });
                    fixedPool.release(channel);
                }
            } else {
                // Failure
            }
        }
    });
}
If there is no specific reason that you need to use a FixedChannelPool, then you can use another data structure (List/Map) to store the Channels. You can add a channel to the data structure after sending the open session message and remove it in the channelInactive method.
If you need to perform bulk operations on channels, you can use a ChannelGroup for the purpose, as sketched below.
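A minimal sketch of the ChannelGroup variant (names are placeholders):
// Channels that have completed the open-session handshake.
private static final ChannelGroup readyChannels =
        new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);

// Once the session open confirmation arrives in your handler:
readyChannels.add(ctx.channel());

// Closed channels are removed from the group automatically, and bulk
// writes reach every ready session:
readyChannels.writeAndFlush(Unpooled.copiedBuffer("broadcast".getBytes()));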
If you still want to use the FixedChannelPool, you may set an attribute on the channel recording whether the open message was sent:
ctx.channel().attr(OPEN_MESSAGE_SENT).set(true);
you can get the attribute as follows in your sendMessage function:
boolean sent = ctx.channel().attr(OPEN_MESSAGE_SENT).get();
and in the channelInactive you may set the same to false or remove it.
Note OPEN_MESSAGE_SENT is an AttributeKey:
public static final AttributeKey<Boolean> OPEN_MESSAGE_SENT = AttributeKey.valueOf("OPEN_MESSAGE_SENT");
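Put together, the acquire callback in sendMessage could then guard on that attribute; a sketch under the same assumptions:
fixedPool.acquire().addListener((FutureListener<Channel>) future -> {
    if (future.isSuccess()) {
        Channel channel = future.getNow();
        // Only use channels whose open-session message has already been sent.
        if (Boolean.TRUE.equals(channel.attr(OPEN_MESSAGE_SENT).get())) {
            channel.writeAndFlush(businessMessage);
        } else {
            // Session not open yet: queue the message or retry the acquire.
        }
        fixedPool.release(channel);
    }
});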
I know this is a rather old question, but I stumbled across a similar issue: not quite the same, but in my case the ChannelInitializer in the Bootstrap.handler was never called.
The solution was to add the pipeline handlers in the pool handler's channelCreated method.
Here is my pool definition code that works now:
pool = new FixedChannelPool(httpBootstrap, new ChannelPoolHandler() {
    @Override
    public void channelCreated(Channel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast(HTTP_CODEC, new HttpClientCodec());
        pipeline.addLast(HTTP_HANDLER, new NettyHttpClientHandler());
    }

    @Override
    public void channelAcquired(Channel ch) {
        // NOOP
    }

    @Override
    public void channelReleased(Channel ch) {
        // NOOP
    }
}, 10);
So in the getChannelHandler() method I assume you're creating a ChannelPoolHandler. In its channelCreated method you could send your session message (ch.writeAndFlush("open session message".getBytes());), assuming you only need to send the session message once when a connection is created; if you instead need to send the session message every time, you could add it to the channelAcquired method. A sketch follows.
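A sketch of that channelCreated variant, reusing the session message and handler from the question:
@Override
public void channelCreated(Channel ch) throws Exception {
    ChannelPipeline pipeline = ch.pipeline();
    pipeline.addLast(new SessionHandler());
    // Send the open-session message once, when the connection is first created.
    ch.writeAndFlush("open session message".getBytes());
}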

Netty: Start client after server has bootstrapped, why is another thread needed?

I want to start both a TCP echo server and client in one app, the client after the server.
What I do is:
Start a client in a ChannelFutureListener added to the future returned by server.bind().sync(), like this:
public void runClientAndServer() {
    server.run().addListener((ChannelFutureListener) future -> {
        // client.run(); // (1) this doesn't work!
        new Thread(() -> client.run()).start(); // (2) this works!
    });
}
and server.run() is like this:
public ChannelFuture run() {
    ServerBootstrap b = new ServerBootstrap();
    // doing channel config stuff
    return b.bind(6666).sync();
}
and client.run() is like this:
public void run() {
    Bootstrap b = new Bootstrap();
    // do some config stuff
    f = b.connect(host, port).sync(); // wait till connected to server
}
What happens is: statement (2) works just fine, while with statement (1) the client sends its message (which can be observed in Wireshark) and the server replies with a TCP ACK segment, but no channelRead() method in the server-side ChannelInboundHandlerAdapter is invoked, nor can any attempt to write a response message to the socket be observed, as in this capture:
[Wireshark capture]
I guess there must be something wrong with Netty threads, but I just cannot figure out what.
I have prepared an example based on the newest Netty version (4.1.16.Final) and the code you posted. This works fine without an extra thread; maybe you did something wrong initializing your server or client.
private static final NioEventLoopGroup EVENT_LOOP_GROUP = new NioEventLoopGroup();

private static ChannelFuture runServer() throws Exception {
    return new ServerBootstrap()
            .group(EVENT_LOOP_GROUP)
            .channel(NioServerSocketChannel.class)
            .childHandler(new ChannelInitializer<Channel>() {
                @Override
                protected void initChannel(Channel channel) throws Exception {
                    System.out.println("S - Channel connected: " + channel);
                    channel.pipeline().addLast(new SimpleChannelInboundHandler<ByteBuf>() {
                        @Override
                        protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws Exception {
                            System.out.println("S - read: " + msg.toString(StandardCharsets.UTF_8));
                        }
                    });
                }
            })
            .bind(6666).sync();
}

private static void runClient() throws Exception {
    new Bootstrap()
            .group(EVENT_LOOP_GROUP)
            .channel(NioSocketChannel.class)
            .handler(new ChannelInitializer<Channel>() {
                @Override
                protected void initChannel(Channel channel) throws Exception {
                    System.out.println("C - Initialized client");
                    channel.pipeline().addLast(new SimpleChannelInboundHandler<ByteBuf>() {
                        @Override
                        public void channelActive(ChannelHandlerContext ctx) throws Exception {
                            super.channelActive(ctx);
                            System.out.println("C - write: Hey this is a test message enjoy!");
                            ctx.writeAndFlush(Unpooled.copiedBuffer("Hey this is a test message enjoy!".getBytes(StandardCharsets.UTF_8)));
                        }

                        @Override
                        protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws Exception { }
                    });
                }
            })
            .connect("localhost", 6666).sync();
}

public static void main(String[] args) throws Exception {
    runServer().addListener(future -> {
        runClient();
    });
}
That's what the output should look like:
C - Initialized client
C - write: Hey this is a test message enjoy!
S - Channel connected: [id: 0x1d676489, L:/127.0.0.1:6666 - R:/127.0.0.1:5079]
S - read: Hey this is a test message enjoy!
I also tried this example with a single-threaded event loop group, which also worked fine but threw a BlockingOperationException that did not affect the functionality of the program. Since this code works fine, you should check your own code and orient it on this example (please don't create inline ChannelHandlers for your real project like I did in this example).

What events do I need to listen to in order to reuse a client connection in Netty (getting "Connection reset by peer")?

I'm getting java.io.IOException: Connection reset by peer when I try to reuse a client connection in Netty (this does not happen if I send one request, but happens every time if I send two requests, even from a single thread). My current approach involves implementing a simple ChannelPool, whose code is below. Note that the key method obtains a free channel from the freeChannels member, or creates a new channel if none are available. The method returnChannel() is responsible for freeing up a channel when we are done with the request; it is called inside the pipeline after we process the response (see the messageReceived() method of ResponseHandler in the code below). Does anyone see what I'm doing wrong, and why I'm getting an exception?
Channel pool code (note use of freeChannels.pollFirst() to get a free channel that has been returned via a call to returnChannel()):
public class ChannelPool {

    private final ClientBootstrap cb;
    private Deque<Channel> freeChannels = new ArrayDeque<Channel>();
    private static Map<Channel, Channel> proxyToClient = new ConcurrentHashMap<Channel, Channel>();

    public ChannelPool(InetSocketAddress address, ChannelPipelineFactory pipelineFactory) {
        ChannelFactory clientFactory =
                new NioClientSocketChannelFactory(
                        Executors.newCachedThreadPool(),
                        Executors.newCachedThreadPool());
        cb = new ClientBootstrap(clientFactory);
        cb.setPipelineFactory(pipelineFactory);
    }

    private void writeToNewChannel(final Object writable, Channel clientChannel) {
        ChannelFuture cf;
        synchronized (cb) {
            cf = cb.connect(new InetSocketAddress("localhost", 18080));
        }
        final Channel ch = cf.getChannel();
        proxyToClient.put(ch, clientChannel);
        cf.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture arg0) throws Exception {
                System.out.println("channel open, writing: " + ch);
                ch.write(writable);
            }
        });
    }

    public void executeWrite(Object writable, Channel clientChannel) {
        synchronized (freeChannels) {
            while (!freeChannels.isEmpty()) {
                Channel ch = freeChannels.pollFirst();
                System.out.println("trying to reuse channel: " + ch + " " + ch.isOpen());
                if (ch.isOpen()) {
                    proxyToClient.put(ch, clientChannel);
                    ch.write(writable).addListener(new ChannelFutureListener() {
                        @Override
                        public void operationComplete(ChannelFuture cf) throws Exception {
                            System.out.println("write from reused channel complete, success? " + cf.isSuccess());
                        }
                    });
                    return; // EDIT: I needed a return here
                }
            }
        }
        writeToNewChannel(writable, clientChannel);
    }

    public void returnChannel(Channel ch) {
        synchronized (freeChannels) {
            freeChannels.addLast(ch);
        }
    }

    public Channel getClientChannel(Channel proxyChannel) {
        return proxyToClient.get(proxyChannel);
    }
}
Netty pipeline code (Note that RequestHandler calls executeWrite() which uses either a new or an old channel, and ResponseHandler calls returnChannel() after the response is received and the content is set in the response to the client):
public class NettyExample {

    private static ChannelPool pool;

    public static void main(String[] args) throws Exception {
        pool = new ChannelPool(
                new InetSocketAddress("localhost", 18080),
                new ChannelPipelineFactory() {
                    public ChannelPipeline getPipeline() {
                        return Channels.pipeline(
                                new HttpRequestEncoder(),
                                new HttpResponseDecoder(),
                                new ResponseHandler());
                    }
                });
        ChannelFactory factory =
                new NioServerSocketChannelFactory(
                        Executors.newCachedThreadPool(),
                        Executors.newCachedThreadPool());
        ServerBootstrap sb = new ServerBootstrap(factory);
        sb.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                return Channels.pipeline(
                        new HttpRequestDecoder(),
                        new HttpResponseEncoder(),
                        new RequestHandler());
            }
        });
        sb.setOption("child.tcpNoDelay", true);
        sb.setOption("child.keepAlive", true);
        sb.bind(new InetSocketAddress(2080));
    }

    private static class ResponseHandler extends SimpleChannelHandler {
        @Override
        public void messageReceived(ChannelHandlerContext ctx, final MessageEvent e) {
            final HttpResponse proxyResponse = (HttpResponse) e.getMessage();
            final Channel proxyChannel = e.getChannel();
            Channel clientChannel = pool.getClientChannel(proxyChannel);
            HttpResponse clientResponse = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
            clientResponse.setHeader(HttpHeaders.Names.CONTENT_TYPE, "text/html; charset=UTF-8");
            HttpHeaders.setContentLength(clientResponse, proxyResponse.getContent().readableBytes());
            clientResponse.setContent(proxyResponse.getContent());
            pool.returnChannel(proxyChannel);
            clientChannel.write(clientResponse);
        }

        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
            e.getCause().printStackTrace();
            Channel ch = e.getChannel();
            ch.close();
        }
    }

    private static class RequestHandler extends SimpleChannelHandler {
        @Override
        public void messageReceived(ChannelHandlerContext ctx, final MessageEvent e) {
            final HttpRequest request = (HttpRequest) e.getMessage();
            pool.executeWrite(request, e.getChannel());
        }

        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
            e.getCause().printStackTrace();
            Channel ch = e.getChannel();
            ch.close();
        }
    }
}
EDIT: To give more detail, I've written a trace of what's happening on the proxy connection. Note that the following involves two serial requests performed by a synchronous Apache Commons HttpClient. The first request uses a new channel and completes fine; the second request attempts to reuse the same channel, which is open and writable, but inexplicably fails (I've not been able to intercept any problem other than noticing the exception thrown from the worker thread). Evidently, the second request completes successfully when a retry is made. Many seconds after both requests complete, both connections finally close (i.e., even if the connection were closed by the peer, this is not reflected by any event I've intercepted):
channel open: [id: 0x6e6fbedf]
channel connect requested: [id: 0x6e6fbedf]
channel open, writing: [id: 0x6e6fbedf, /127.0.0.1:47031 => localhost/127.0.0.1:18080]
channel connected: [id: 0x6e6fbedf, /127.0.0.1:47031 => localhost/127.0.0.1:18080]
trying to reuse channel: [id: 0x6e6fbedf, /127.0.0.1:47031 => localhost/127.0.0.1:18080] true
channel open: [id: 0x3999abd1]
channel connect requested: [id: 0x3999abd1]
channel open, writing: [id: 0x3999abd1, /127.0.0.1:47032 => localhost/127.0.0.1:18080]
channel connected: [id: 0x3999abd1, /127.0.0.1:47032 => localhost/127.0.0.1:18080]
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
at sun.nio.ch.IOUtil.read(IOUtil.java:186)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:63)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:373)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:247)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Finally, I figured this out. There were two issues causing the connection reset. First, I was not calling releaseConnection() on the Apache Commons HttpClient that was sending requests to the proxy (see the follow-up question). Second, executeWrite was issuing the same call to the proxied server twice when the connection was reused; I needed to return after the first write rather than continuing with the while loop. The result of this double proxy call was that I was issuing duplicate responses to the original client, mangling the connection with the client.
