I would like to be aware of the connection state in a Camel/Netty environment.
To do so, I tried something like this:
I specified my Camel route:
from("direct:in").marshal().serialization()
.to("netty:tcp://localhost:42123?clientPipelineFactory=#cpf&sync=false");
I implemented my pipeline factory:
public class ConnectionStatusPipelineFactory extends ClientPipelineFactory {
    @Override
    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline cp = Channels.pipeline();
        cp.addLast("statusHandler", new ConnectionStatusHandler());
        return cp;
    }

    @Override
    public ClientPipelineFactory createPipelineFactory(NettyProducer producer) {
        return new ConnectionStatusPipelineFactory();
    }
}
I implemented my connection status handler:
public class ConnectionStatusHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
        System.out.println("Event: " + e);
        super.channelConnected(ctx, e);
    }

    @Override
    public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
        System.out.println("Event: " + e);
        super.channelDisconnected(ctx, e);
    }
}
And finally I bound "ConnectionStatusPipelineFactory" to "cpf" in my Camel registry.
But the following exception occurred:
java.lang.IllegalArgumentException: unsupported message type: class [B
Remarks:
The channelConnected and channelDisconnected methods are called as expected.
When I disable the client pipeline factory, everything works (message marshalling, connection, remote process...).
My questions are:
What's wrong with this approach?
Is this the right way to track the connection status (connected or not)?
Try using the decoder option instead of replacing the entire client pipeline factory, e.g. use the option decoder=#myConnectionStatusHandler and register your ConnectionStatusHandler in the registry under the name myConnectionStatusHandler.
If you use the pipeline factory, then you need to add all the other handlers that Camel adds out of the box.
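A minimal sketch of what that looks like, assuming Camel 2.x with camel-netty; the registry name "myConnectionStatusHandler" and the wrapper class are illustrative, and note that a handler shared this way between connections must be sharable:

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.SimpleRegistry;

public class ConnectionStatusExample {
    public static void main(String[] args) throws Exception {
        // Register only the extra handler; Camel keeps adding its own
        // out-of-the-box handlers around it when you use the decoder option.
        SimpleRegistry registry = new SimpleRegistry();
        registry.put("myConnectionStatusHandler", new ConnectionStatusHandler());

        CamelContext context = new DefaultCamelContext(registry);
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("direct:in").marshal().serialization()
                    // decoder=#name looks the handler up in the registry
                    .to("netty:tcp://localhost:42123?decoder=#myConnectionStatusHandler&sync=false");
            }
        });
        context.start();
    }
}
```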
Related
I have the following binding to handle UDP packets:
private void doStartServer() throws InterruptedException {
    final UDPPacketHandler udpPacketHandler = new UDPPacketHandler(messageDecodeHandler);
    workerGroup = new NioEventLoopGroup(threadPoolSize);
    try {
        final Bootstrap bootstrap = new Bootstrap();
        bootstrap
            .group(workerGroup)
            .handler(new LoggingHandler(nettyLevel))
            .channel(NioDatagramChannel.class)
            .option(ChannelOption.SO_BROADCAST, true)
            .handler(udpPacketHandler);
        bootstrap
            .bind(serverIp, serverPort)
            .sync()
            .channel()
            .closeFuture()
            .await();
    } finally {
        stop();
    }
}
and the handler:
@ChannelHandler.Sharable // << note this
@Slf4j
@AllArgsConstructor
public class UDPPacketHandler extends SimpleChannelInboundHandler<DatagramPacket> {
    private final MessageP54Handler messageP54Handler;

    @Override
    public void channelReadComplete(final ChannelHandlerContext ctx) {
        ctx.flush();
    }

    @Override
    public void exceptionCaught(final ChannelHandlerContext ctx, final Throwable cause) {
        log.error("Exception in UDP handler", cause);
        ctx.close();
    }
}
At some point I get the exception java.net.SocketException: Network dropped connection on reset: no further information, which is handled in exceptionCaught. This closes the ChannelHandlerContext, and at that point my whole server stops (executing the finally block from the first snippet).
How do I correctly handle the exception so that I can handle new connections even after such an exception occurs?
You shouldn't close the ChannelHandlerContext on an IOException when using a DatagramChannel. Since a DatagramChannel is "connection-less", the exception is specific to a single receive or send operation. So just log it (or do whatever else you want) and move on.
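A sketch of that fix, assuming Netty 4.x (class name and the logging call are illustrative):

```java
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.socket.DatagramPacket;

@ChannelHandler.Sharable
public class ResilientUdpHandler extends SimpleChannelInboundHandler<DatagramPacket> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, DatagramPacket packet) {
        // normal per-packet processing goes here
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        // The IOException relates to one receive/send, not the channel itself,
        // so we do NOT call ctx.close(). The bind()...closeFuture().await()
        // in doStartServer() therefore keeps waiting and the server stays up.
        System.err.println("Ignoring per-datagram error: " + cause);
    }
}
```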
I'm trying to create a Netty (4.1) POC which can forward h2c (HTTP2 without TLS) frames onto a h2c server - i.e. essentially creating a Netty h2c proxy service. Wireshark shows Netty sending the frames out, and the h2c server replying (for example with the response header and data), although I'm then having a few issues receiving/processing the response HTTP frames within Netty itself.
As a starting point, I've adapted the multiplex.server example (io.netty.example.http2.helloworld.multiplex.server) so that in HelloWorldHttp2Handler, instead of responding with dummy messages, I connect to a remote node:
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    Channel remoteChannel = null;
    // create or retrieve the remote channel (one-to-one mapping) associated with this incoming (client) channel
    synchronized (lock) {
        if (!ctx.channel().hasAttr(remoteChannelKey)) {
            remoteChannel = this.connectToRemoteBlocking(ctx.channel());
            ctx.channel().attr(remoteChannelKey).set(remoteChannel);
        } else {
            remoteChannel = ctx.channel().attr(remoteChannelKey).get();
        }
    }
    if (msg instanceof Http2HeadersFrame) {
        onHeadersRead(remoteChannel, (Http2HeadersFrame) msg);
    } else if (msg instanceof Http2DataFrame) {
        final Http2DataFrame data = (Http2DataFrame) msg;
        onDataRead(remoteChannel, data);
        send(ctx.channel(), new DefaultHttp2WindowUpdateFrame(data.initialFlowControlledBytes()).stream(data.stream()));
    } else {
        super.channelRead(ctx, msg);
    }
}
private void send(Channel remoteChannel, Http2Frame frame) {
    remoteChannel.writeAndFlush(frame).addListener(new GenericFutureListener() {
        @Override
        public void operationComplete(Future future) throws Exception {
            if (!future.isSuccess()) {
                future.cause().printStackTrace();
            }
        }
    });
}
/**
 * If we receive a data frame with end-of-stream set, forward it to the remote channel.
 */
private void onDataRead(Channel remoteChannel, Http2DataFrame data) throws Exception {
    if (data.isEndStream()) {
        send(remoteChannel, data);
    } else {
        // We do not forward this frame, so we need to release it.
        data.release();
    }
}

/**
 * If we receive a headers frame with end-of-stream set, forward it to the remote channel.
 */
private void onHeadersRead(Channel remoteChannel, Http2HeadersFrame headers) throws Exception {
    if (headers.isEndStream()) {
        send(remoteChannel, headers);
    }
}
private Channel connectToRemoteBlocking(Channel clientChannel) {
    try {
        Bootstrap b = new Bootstrap();
        b.group(new NioEventLoopGroup());
        b.channel(NioSocketChannel.class);
        b.option(ChannelOption.SO_KEEPALIVE, true);
        b.remoteAddress("localhost", H2C_SERVER_PORT);
        b.handler(new Http2ClientInitializer());
        final Channel channel = b.connect().syncUninterruptibly().channel();
        channel.config().setAutoRead(true);
        channel.attr(clientChannelKey).set(clientChannel);
        return channel;
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
}
When initializing the channel pipeline (in Http2ClientInitializer), if I do something like:
@Override
public void initChannel(SocketChannel ch) throws Exception {
    ch.pipeline().addLast(Http2MultiplexCodecBuilder.forClient(new Http2OutboundClientHandler()).frameLogger(TESTLOGGER).build());
    ch.pipeline().addLast(new UserEventLogger());
}
Then I can see the frames being forwarded correctly in Wireshark and the h2c server replies with the header and frame data, but Netty replies with a GOAWAY [INTERNAL_ERROR] due to:
14:23:09.324 [nioEventLoopGroup-3-1] WARN
i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was
fired, and it reached at the tail of the pipeline. It usually means
the last handler in the pipeline did not handle the exception.
java.lang.IllegalStateException: Stream object required for
identifier: 1 at
io.netty.handler.codec.http2.Http2FrameCodec$FrameListener.requireStream(Http2FrameCodec.java:587)
at
io.netty.handler.codec.http2.Http2FrameCodec$FrameListener.onHeadersRead(Http2FrameCodec.java:550)
at
io.netty.handler.codec.http2.Http2FrameCodec$FrameListener.onHeadersRead(Http2FrameCodec.java:543)...
If I instead use the pipeline configuration from the HTTP/2 client example, e.g.:
@Override
public void initChannel(SocketChannel ch) throws Exception {
    final Http2Connection connection = new DefaultHttp2Connection(false);
    ch.pipeline().addLast(
        new Http2ConnectionHandlerBuilder()
            .connection(connection)
            .frameLogger(TESTLOGGER)
            .frameListener(new DelegatingDecompressorFrameListener(connection,
                new InboundHttp2ToHttpAdapterBuilder(connection)
                    .maxContentLength(maxContentLength)
                    .propagateSettings(true)
                    .build()))
            .build());
}
Then I instead get:
java.lang.UnsupportedOperationException: unsupported message type:
DefaultHttp2HeadersFrame (expected: ByteBuf, FileRegion) at
io.netty.channel.nio.AbstractNioByteChannel.filterOutboundMessage(AbstractNioByteChannel.java:283)
at
io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:882)
at
io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1365)
If I then add in an HTTP/2 frame codec (Http2MultiplexCodec or Http2FrameCodec):
@Override
public void initChannel(SocketChannel ch) throws Exception {
    final Http2Connection connection = new DefaultHttp2Connection(false);
    ch.pipeline().addLast(
        new Http2ConnectionHandlerBuilder()
            .connection(connection)
            .frameLogger(TESTLOGGER)
            .frameListener(new DelegatingDecompressorFrameListener(connection,
                new InboundHttp2ToHttpAdapterBuilder(connection)
                    .maxContentLength(maxContentLength)
                    .propagateSettings(true)
                    .build()))
            .build());
    ch.pipeline().addLast(Http2MultiplexCodecBuilder.forClient(new Http2OutboundClientHandler()).frameLogger(TESTLOGGER).build());
}
Then Netty sends two connection preface frames, and the h2c server rejects the connection with GOAWAY [PROTOCOL_ERROR].
So that is where I am having issues - i.e. configuring the remote channel pipeline such that it will send the Http2Frame objects without error, but also then receive/process them back within Netty when the response is received.
Does anyone have any ideas/suggestions please?
I ended up getting this working; the following GitHub issues contain some useful code/info:
Generating a Http2StreamChannel, from a Channel
A Http2Client with Http2MultiplexCodec
I need to investigate a few caveats further, but the gist of the approach is that you need to wrap your channel in a Http2StreamChannel, meaning that my connectToRemoteBlocking() method ends up as:
private Http2StreamChannel connectToRemoteBlocking(Channel clientChannel) {
    try {
        Bootstrap b = new Bootstrap();
        b.group(new NioEventLoopGroup()); // TODO reuse existing event loop
        b.channel(NioSocketChannel.class);
        b.option(ChannelOption.SO_KEEPALIVE, true);
        b.remoteAddress("localhost", H2C_SERVER_PORT);
        b.handler(new Http2ClientInitializer());
        final Channel channel = b.connect().syncUninterruptibly().channel();
        channel.config().setAutoRead(true);
        channel.attr(clientChannelKey).set(clientChannel);
        // TODO make more robust, see example at https://github.com/netty/netty/issues/8692
        final Http2StreamChannelBootstrap bs = new Http2StreamChannelBootstrap(channel);
        final Http2StreamChannel http2Stream = bs.open().syncUninterruptibly().get();
        http2Stream.attr(clientChannelKey).set(clientChannel);
        http2Stream.pipeline().addLast(new Http2OutboundClientHandler()); // will read: DefaultHttp2HeadersFrame, DefaultHttp2DataFrame
        return http2Stream;
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
}
Then, to prevent the "Stream object required for identifier: 1" error (which is essentially saying: "this (client) HTTP/2 request is new, so why do we already have this specific stream object?", since we were implicitly reusing the stream object from the originally received 'server' request), we need to use the remote channel's stream when forwarding our data on:
private void onHeadersRead(Http2StreamChannel remoteChannel, Http2HeadersFrame headers) throws Exception {
    if (headers.isEndStream()) {
        headers.stream(remoteChannel.stream());
        send(remoteChannel, headers);
    }
}
Then the configured channel inbound handler (which I've called Http2OutboundClientHandler due to its usage) will receive the incoming HTTP2 frames in the normal way:
@Sharable
public class Http2OutboundClientHandler extends SimpleChannelInboundHandler<Http2Frame> {
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        super.exceptionCaught(ctx, cause);
        cause.printStackTrace();
        ctx.close();
    }

    @Override
    public void channelRead0(ChannelHandlerContext ctx, Http2Frame msg) throws Exception {
        System.out.println("Http2OutboundClientHandler Http2Frame Type: " + msg.getClass().toString());
    }
}
I am trying to write a simple client-server application using Netty.
I followed this tutorial, specifically the time server model along with the POJO model. My problem is with the ByteToMessageDecoder: it runs more times than it should. Instead of stopping when there is nothing left to read, it reads one more time and, for some reason I cannot understand, it finds the previous message that my client sent! I am certain that the client only sends the message once!
So the idea is this: a simple client-server model where the client sends a DataPacket with a "hello world" message in it and the server responds with a DataPacket containing an "ACK". I am using the DataPacket type because in the future I want to pass more things in it, add a header and build something a bit more complicated... But for starters I need to see what I am doing wrong in this one...
The error:
As you can see, the server starts normally, my client (Transceiver) sends the message, the encoder activates and converts it from DataPacket to ByteBuf, the message is sent and received by the server, the decoder on the server activates and converts it from ByteBuf to DataPacket, and then the server handles it accordingly... It should send the ACK and repeat the same process backwards, but this is where things go wrong and I cannot understand why.
I have read some posts here and already tried a LengthFieldBasedFrameDecoder; it did not work, and I also want to see what is wrong with this approach if possible, rather than using something else...
Code:
Encoder and decoder class:
package org.client_server;

import java.util.List;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import io.netty.handler.codec.MessageToByteEncoder;
import io.netty.util.CharsetUtil;

public class EncoderDecoder {
    public static class NettyEncoder extends MessageToByteEncoder<DataPacket> {
        @Override
        protected void encode(ChannelHandlerContext ctx, DataPacket msg, ByteBuf out) throws Exception {
            System.out.println("Encode: " + msg.getData());
            out.writeBytes(msg.convertData());
        }
    }

    public static class NettyDecoder extends ByteToMessageDecoder {
        @Override
        protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
            if (in.readableBytes() < 4) {
                return;
            }
            String msg = in.toString(CharsetUtil.UTF_8);
            System.out.println("Decode:" + msg);
            out.add(new DataPacket(msg));
        }
    }
}
Server handler:
class DataAvroHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        try {
            DataPacket in = (DataPacket) msg;
            System.out.println("[Server]: Message received..." + in.getData());
        } finally {
            ReferenceCountUtil.release(msg);
            //ctx.close();
        }
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        System.out.println("[Server]: Read Complete...");
        DataPacket pkt = new DataPacket("ACK!");
        //pkt.setData(Unpooled.copiedBuffer("ACK", CharsetUtil.UTF_8));
        ctx.writeAndFlush(pkt);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        serverLog.warning("[Server]: Error..." + cause.toString());
        ctx.close();
    }
}
The client handler:
class DataAvroHandlerCl extends ChannelInboundHandlerAdapter {
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("[Transceiver]: Channel Active!!!");
        DataPacket pkt = new DataPacket("Hello World!");
        ChannelFuture f = ctx.writeAndFlush(pkt);
        //f.addListener(ChannelFutureListener.CLOSE);
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        try {
            DataPacket in = (DataPacket) msg;
            System.out.println("[Transceiver]: Message received..." + in.getData());
        } finally {
            ReferenceCountUtil.release(msg);
            //ctx.close();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        transLog.warning("[Transceiver] : Error..." + cause.getMessage());
        ctx.close();
    }
}
The server and client pipelines:
ch.pipeline().addLast("Decoder", new EncoderDecoder.NettyDecoder());
ch.pipeline().addLast("Encoder", new EncoderDecoder.NettyEncoder());
ch.pipeline().addLast("DataAvroHandler", new DataAvroHandler());
Your problem arises from using the toString() method of the ByteBuf "in" in your NettyDecoder.
Quoting from the javadoc (http://netty.io/4.0/api/io/netty/buffer/ByteBuf.html#toString%28java.nio.charset.Charset%29):
This method does not modify readerIndex or writerIndex of this buffer.
Now, the ByteToMessageDecoder doesn't know how many bytes you have actually decoded! It looks like you decoded 0 bytes, because the buffer's readerIndex was not modified, and that is also why you get the error messages in your console.
You have to modify the readerIndex manually:
String msg = in.toString(CharsetUtil.UTF_8);
in.readerIndex(in.readerIndex() + in.readableBytes());
System.out.println("Decode:"+msg);
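An equivalent fix, sketched below against Netty 4.x, is to consume the bytes explicitly with readBytes(), which advances readerIndex as a side effect (DataPacket is the asker's own class):

```java
import java.util.List;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import io.netty.util.CharsetUtil;

public class NettyDecoder extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        if (in.readableBytes() < 4) {
            return;
        }
        // readBytes(...) advances readerIndex, so the framework sees that
        // these bytes have been consumed and will not re-deliver them.
        byte[] bytes = new byte[in.readableBytes()];
        in.readBytes(bytes);
        out.add(new DataPacket(new String(bytes, CharsetUtil.UTF_8)));
    }
}
```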
There is a middleware between two other applications. In the middleware I route Apache ActiveMQ messages with Apache Camel.
The first application uses the middleware to send a message to the 3rd application, and the 3rd one replies to the first (through the middleware).
1stSoftware <<=>> Middleware <<=>> 3rdSoftware
Problem:
When the first application sends a message to the middleware, the middleware moves that message directly to ActiveMQ.DLQ and the 3rd application cannot consume it! (The interesting point: when I copy that message to the main queue with the admin panel of ActiveMQ, the application can consume it properly!)
What's the problem? It was working until I changed the Linux date!
The middleware looks like this:
@SuppressWarnings("deprecation")
public class MiddlewareDaemon {
    private Main main;

    public static void main(String[] args) throws Exception {
        MiddlewareDaemon middlewareDaemon = new MiddlewareDaemon();
        middlewareDaemon.boot();
    }

    public void boot() throws Exception {
        main = new Main();
        main.enableHangupSupport();
        //?wireFormat.maxInactivityDuration=0
        main.bind("activemq", activeMQComponent("tcp://localhost:61616")); //ToGet
        main.bind("activemq2", activeMQComponent("tcp://192.168.10.103:61616")); //ToInOut
        main.addRouteBuilder(new MyRouteBuilder());
        System.out.println("Starting Camel(MiddlewareDaemon). Use ctrl + c to terminate the JVM.\n");
        main.run();
    }

    private static class MyRouteBuilder extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            intercept().to("log:Midlleware?level=INFO&showHeaders=true&showException=true&showCaughtException=true&showStackTrace=true");
            from("activemq:queue:Q.Midlleware")
                .process(new Processor() {
                    public void process(Exchange exchange) {
                        Map<String, Object> header = null;
                        try {
                            Message in = exchange.getIn();
                            header = in.getHeaders();
                        } catch (Exception e) {
                            log.error("Exception:", e);
                            header.put("Midlleware_Exception", e.getMessage() + " - " + e);
                        }
                    }
                })
                .inOut("activemq2:queue:Q.Comp2");
        }
    }
}
And the 3rd software (the replier); this is a daemon like the one above, I just copied the RouteBuilder part:
private static class MyRouteBuilder extends RouteBuilder {
    @Override
    public void configure() {
        intercept().to("log:Comp2?level=INFO&showHeaders=true&showException=true&showCaughtException=true&showStackTrace=true");
        from("activemq:queue:Q.Comp2")
            .process(new Processor() {
                public void process(Exchange exchange) {
                    Message in = exchange.getIn();
                    Map<String, Object> headers = null;
                    try {
                        headers = in.getHeaders();
                        in.setBody(ZipUtil.compress(/*somResults*/));
                    } catch (Exception e) {
                        log.error("Exception", e);
                        in.setBody(ZipUtil.compress("[]"));
                        in.getHeaders().put("Comp2_Exception", e.getMessage() + " - " + e);
                    }
                }
            });
    }
}
If the only thing you changed is the time on one of the servers, then this might well be the problem.
For communications over MQ to work properly, it is essential that all involved host systems have their clocks in sync. In the case of ActiveMQ there is a default message time-to-live (30 seconds, I think) for response queues. If the responding system is more than 30 seconds in the future relative to the host running ActiveMQ, then ActiveMQ will immediately expire the message and move it to the DLQ.
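The proper fix is to keep the clocks synchronized (e.g. via NTP). As a stopgap, the Camel JMS/ActiveMQ component documents a disableTimeToLive option that stops Camel from stamping a time-to-live on request messages; a sketch against the middleware route (verify the option against your Camel version):

```java
// Workaround sketch: with disableTimeToLive=true the request message carries
// no TTL, so clock skew cannot expire it into the DLQ. Camel still honours
// requestTimeout while waiting for the reply on its side.
from("activemq:queue:Q.Midlleware")
    .inOut("activemq2:queue:Q.Comp2?disableTimeToLive=true");
```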
I need to perform all operations, such as creating and deleting a Quartz2 scheduler, on a single Apache Camel context via a RESTful service. When I try the following code, each request creates a new context object, and I do not know how to fix that or where I should initialize the Camel context object.
This is my code.
The Java REST service which calls the Quartz scheduler:
@Path("/remainder")
public class RemainderResource {
    private static org.apache.log4j.Logger log = Logger.getLogger(RemainderResource.class);

    RemainderScheduler remainderScheduler = new RemainderScheduler();
    CamelContext context = new DefaultCamelContext();

    @POST
    @Path("/beforeday/{day}")
    public void create(@PathParam("day") int day, final String userdata) {
        log.debug("the starting process of the creating the Remainder");
        JSONObject data = (JSONObject) JSONSerializer.toJSON(userdata);
        String cronExp = data.getString("cronExp");
        remainderScheduler.create(cronExp, day, context);
    }
}
This is my Java class which schedules the job:
public class RemainderScheduler {
    private static org.apache.log4j.Logger log = Logger.getLogger(RemainderScheduler.class);

    public void sendRemainder(int day) {
        log.debug("the starting of the sending the Remainder to user");
    }

    public RouteBuilder createMyRoutes(final String cronExp, final int day) {
        return new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                log.debug("Before set schedulling");
                from("quartz2://RemainderGroup/Remainder?cron=" + cronExp + "&deleteJob=true&job.name='RemainderServices'")
                    .bean(new RemainderScheduler(), "sendRemainder('" + day + "')")
                    .routeId("Remainder")
                    .process(new Processor() {
                        @Override
                        public void process(Exchange exchange) throws Exception {
                        }
                    });
                log.debug("after set schedulling");
            }
        };
    }

    public void stopService(CamelContext context) {
        log.debug("this is going to be stop the route");
        try {
            context.stopRoute("Remainder");
            context.removeRoute("Remainder");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void create(final String cronExp, final int day, CamelContext context) {
        try {
            // if the route already exists, stop it first
            if (context.getRoute("Remainder") != null)
                stopService(context);
            log.debug("the starting of the process for creating the Remaider Services");
            context.addRoutes(createMyRoutes(cronExp, day));
            context.start();
            log.debug("the status for removing the services is" + context.removeRoute("Remainder"));
        } catch (Exception e) {
            System.out.println(e.toString());
            e.printStackTrace();
        }
    }
}
If I execute the above code, each REST request creates a new context object and starts the job scheduling on a new Camel context. And if I send a request to stop the route, it again creates a new context, so I am not able to reset or stop the Quartz2 scheduler.
It is not good practice to create a Camel context per request.
I suggest using camel-restlet or camel-cxfrs to delegate the scheduler create and delete requests to another, long-lived Camel context.
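Whichever delegation route you take, the immediate bug is the `CamelContext context = new DefaultCamelContext();` field in RemainderResource, which gives every resource instance its own context. One way to sketch a fix (class and method names are illustrative, not a Camel API) is to hold the context in a lazily started singleton:

```java
import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;

// Sketch: a single shared CamelContext for the whole JVM, so every REST
// request adds/stops routes on the same context instead of a fresh one.
public final class SharedCamelContext {
    private static final CamelContext CONTEXT = new DefaultCamelContext();
    private static volatile boolean started;

    private SharedCamelContext() {
    }

    public static synchronized CamelContext get() throws Exception {
        if (!started) {
            CONTEXT.start();
            started = true;
        }
        return CONTEXT;
    }
}
```

RemainderResource would then call SharedCamelContext.get() instead of creating a DefaultCamelContext field, so a later "stop" request finds the same context (and the "Remainder" route) that the "create" request used.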