I am wondering if there is a way to know when a message has been delivered, since I want to shut down the ActorSystem right afterwards. The code below connects to a remote actor and then sends a message, but in some cases the local ActorSystem seems to be shut down too early.
ActorSystem system = ActorSystem.create("Test", config.getConfig("webbackend"));
ActorSelection communicator = system.actorSelection("akka.tcp://Midas@127.0.0.1:2555/user/Communicator");
communicator.tell(new TimerTransmissionCmd(channel.getId()), ActorRef.noSender());
//system.shutdown();
Try adding this code after sending the message:
Boolean wasProcessed = (Boolean) Await.result(
        Patterns.ask(communicator, new ResultClass(), 5000),
        Duration.create(5000, TimeUnit.MILLISECONDS));
if (wasProcessed) {
    system.shutdown();
}
You also have to add this to your Actor class:
private boolean wasProcessed = false;

@Override
public void onReceive(Object messageReceived) throws Exception {
    if (messageReceived instanceof ResultClass) {
        // reply to the asker so that the Await.result above can complete
        getSender().tell(wasProcessed, getSelf());
    } else {
        // Put your processing code here
        wasProcessed = true;
    }
}
But I recommend you configure a prudent timeout, and when it expires, always shut down the system.
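For illustration, a minimal sketch of that recommendation, assuming the communicator selection from the question and the hypothetical ResultClass acknowledgement above; the try/finally guarantees the system goes down even when the ask times out:
import akka.pattern.Patterns;
import scala.concurrent.Await;
import scala.concurrent.duration.Duration;
import java.util.concurrent.TimeUnit;

// ...
try {
    Boolean wasProcessed = (Boolean) Await.result(
            Patterns.ask(communicator, new ResultClass(), 5000),
            Duration.create(5, TimeUnit.SECONDS));
    if (wasProcessed) {
        // the remote actor confirmed that it processed our message
    }
} catch (Exception e) {
    // TimeoutException and friends: no confirmation within the prudential timeout
} finally {
    system.shutdown(); // shut down regardless of the outcome
}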
Related
I have created an inbound handler of type SimpleChannelInboundHandler and added it to the pipeline. My intention is that every time a connection is established, I want to send an application message called a session open message, making the connection ready to send the actual messages. To achieve this, the inbound handler below
overrides channelActive(), where the session open message is sent; in response I get a session open confirmation message. Only after that should I be able to send any number of actual business messages. I am using FixedChannelPool, initialised as follows. This works well some of the time on startup. But if the remote host closes the connection, and a message is then sent via sendMessage() below, the message goes out before the session open message from channelActive() has been sent and its response received, so the server ignores the message because the session is not yet open when the business message arrives.
What I am looking for is for the pool to return only channels that have gone through channelActive(), sent the session open message, and received the session open confirmation from the server. How do I deal with this situation?
public class SessionHandler extends SimpleChannelInboundHandler<byte[]> {
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        super.channelActive(ctx);
        if (ctx.channel().isWritable()) {
            ctx.channel().writeAndFlush("open session message".getBytes());
        }
    }
}
// At the time of loading the application
public void init() {
    final Bootstrap bootStrap = new Bootstrap();
    bootStrap.group(group).channel(NioSocketChannel.class).remoteAddress(hostname, port);
    fixedPool = new FixedChannelPool(bootStrap, getChannelHandler(), 5);
    // This is done to initialise the connections; channelActive() from the handler
    // above is invoked so the sessions are opened on startup
    for (int i = 0; i < config.getMaxConnections(); i++) {
        fixedPool.acquire().addListener(new FutureListener<Channel>() {
            @Override
            public void operationComplete(Future<Channel> future) throws Exception {
                if (future.isSuccess()) {
                    // channel connected; channelActive() has sent the session open message
                } else {
                    LOGGER.error(" Channel initialization failed...>>", future.cause());
                }
            }
        });
    }
}
// To actually send a message, the application invokes the following method.
public void sendMessage(final String businessMessage) {
    fixedPool.acquire().addListener(new FutureListener<Channel>() {
        @Override
        public void operationComplete(Future<Channel> future) throws Exception {
            if (future.isSuccess()) {
                Channel channel = future.get();
                if (channel.isOpen() && channel.isActive() && channel.isWritable()) {
                    channel.writeAndFlush(businessMessage).addListener(new GenericFutureListener<ChannelFuture>() {
                        @Override
                        public void operationComplete(ChannelFuture future) throws Exception {
                            if (future.isSuccess()) {
                                // success msg
                            } else {
                                // failure msg
                            }
                        }
                    });
                    fixedPool.release(channel);
                }
            } else {
                // Failure
            }
        }
    });
}
If there is no specific reason you need to use a FixedChannelPool, you can use another data structure (List/Map) to store the Channels. Add a channel to the data structure after sending the open session message, and remove it in the channelInactive method.
If you need to perform bulk operations on channels, you can use a ChannelGroup for that purpose, as sketched below.
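For instance, a tiny sketch of the ChannelGroup idea (a sketch only: GlobalEventExecutor keeps it self-contained, and closed channels are removed from the group automatically):
import io.netty.channel.group.ChannelGroup;
import io.netty.channel.group.DefaultChannelGroup;
import io.netty.util.concurrent.GlobalEventExecutor;

// Channels added here are dropped from the group automatically when they close.
ChannelGroup sessionReady = new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);

// once a channel's session-open confirmation has arrived:
sessionReady.add(channel);

// bulk operation over every session-ready channel:
sessionReady.writeAndFlush("business message".getBytes());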
If you still want to use the FixedChannelPool, you can set an attribute on the channel recording whether the open message was sent:
ctx.channel().attr(OPEN_MESSAGE_SENT).set(true);
You can read the attribute back as follows in your sendMessage function:
boolean sent = ctx.channel().attr(OPEN_MESSAGE_SENT).get();
and in channelInactive you can set it back to false or remove it.
Note that OPEN_MESSAGE_SENT is an AttributeKey:
public static final AttributeKey<Boolean> OPEN_MESSAGE_SENT = AttributeKey.valueOf("OPEN_MESSAGE_SENT");
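Putting these pieces together, here is a rough, untested sketch: a variant of the question's SessionHandler that sets the flag only once the session-open confirmation has been read (an assumption: the first message back is taken to be that confirmation), and clears it when the channel goes inactive:
public class SessionHandler extends SimpleChannelInboundHandler<byte[]> {

    public static final AttributeKey<Boolean> OPEN_MESSAGE_SENT =
            AttributeKey.valueOf("OPEN_MESSAGE_SENT");

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        super.channelActive(ctx);
        // kick off the handshake as soon as the connection is up
        ctx.channel().writeAndFlush("open session message".getBytes());
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, byte[] msg) {
        // assumption: the first message back is the session-open confirmation
        ctx.channel().attr(OPEN_MESSAGE_SENT).set(true);
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        // connection lost: the next acquire must not treat this session as open
        ctx.channel().attr(OPEN_MESSAGE_SENT).set(false);
        super.channelInactive(ctx);
    }
}
sendMessage() can then check the flag after acquiring, and release unready channels back to the pool:
Channel channel = future.getNow();
Boolean sessionOpen = channel.attr(SessionHandler.OPEN_MESSAGE_SENT).get();
if (sessionOpen == null || !sessionOpen) {
    fixedPool.release(channel); // not ready yet: give it back and retry later
    return;
}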
I know this is a rather old question, but I stumbled across a similar issue. It was not quite the same: in my case the ChannelInitializer in Bootstrap.handler was never called.
The solution was to add the pipeline handlers to the pool handler's channelCreated method.
Here is my pool definition code that works now:
pool = new FixedChannelPool(httpBootstrap, new ChannelPoolHandler() {
    @Override
    public void channelCreated(Channel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast(HTTP_CODEC, new HttpClientCodec());
        pipeline.addLast(HTTP_HANDLER, new NettyHttpClientHandler());
    }

    @Override
    public void channelAcquired(Channel ch) {
        // NOOP
    }

    @Override
    public void channelReleased(Channel ch) {
        // NOOP
    }
}, 10);
So in the getChannelHandler() method I assume you're creating a ChannelPoolHandler. In its channelCreated method you could send your session message (ch.writeAndFlush("open session message".getBytes());), assuming you only need to send the session message once, when a connection is created. If instead you need to send the session message every time, you could add it to the channelAcquired method.
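Applying that to the question's init() above, a hedged sketch of what getChannelHandler() could return (the handler name and the open-session bytes are taken from the earlier snippets; whether one write in channelCreated is enough depends on your protocol):
private ChannelPoolHandler getChannelHandler() {
    return new ChannelPoolHandler() {
        @Override
        public void channelCreated(Channel ch) throws Exception {
            // called once per new connection: install the pipeline here,
            // then kick off the session handshake
            ch.pipeline().addLast("sessionHandler", new SessionHandler());
            ch.writeAndFlush("open session message".getBytes());
        }

        @Override
        public void channelAcquired(Channel ch) throws Exception {
            // NOOP; move the session message here if it must be sent per acquire
        }

        @Override
        public void channelReleased(Channel ch) throws Exception {
            // NOOP
        }
    };
}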
I am trying to use the following code, which is an implementation of web sockets in Netty NIO. I have implemented a JavaFX GUI, and from the GUI I want to read the messages that are received from the server or from other clients. The NettyClient code is the following:
public static ChannelFuture callBack() throws Exception {
    String host = "localhost";
    int port = 8080;
    try {
        Bootstrap b = new Bootstrap();
        b.group(workerGroup);
        b.channel(NioSocketChannel.class);
        b.option(ChannelOption.SO_KEEPALIVE, true);
        b.handler(new ChannelInitializer<SocketChannel>() {
            @Override
            public void initChannel(SocketChannel ch) throws Exception {
                ch.pipeline().addLast(new RequestDataEncoder(), new ResponseDataDecoder(),
                        new ClientHandler(i -> {
                            synchronized (lock) {
                                connectedClients = i;
                                lock.notifyAll();
                            }
                        }));
            }
        });
        ChannelFuture f = b.connect(host, port).sync();
        //f.channel().closeFuture().sync();
        return f;
    } finally {
        //workerGroup.shutdownGracefully();
    }
}
public static void main(String[] args) throws Exception {
    ChannelFuture ret;
    ClientHandler obj = new ClientHandler(i -> {
        synchronized (lock) {
            connectedClients = i;
            lock.notifyAll();
        }
    });
    ret = callBack();
    int connected = connectedClients;
    if (connected != 2) {
        System.out.println("The number of connected clients is not two before locking");
        synchronized (lock) {
            while (true) {
                connected = connectedClients;
                if (connected == 2)
                    break;
                System.out.println("The number of connected clients is not two");
                lock.wait();
            }
        }
    }
    System.out.println("The number of connected clients is two: " + connected);
    ret.channel().read(); // can I use this from other parts of the code in order to read the incoming messages?
}
How can I use the ChannelFuture returned by callBack from other parts of my code in order to read the incoming messages? Do I need to call callBack again, or how can I receive the updated messages from the channel? Could I use, from my code (inside a button event), something like ret.channel().read() (so as to take the last message)?
Reading that code: NettyClient is used to create the connection (ClientHandler). Once the connection is established, Netty calls ClientHandler.channelActive; if you want to send data to the server, that is where the code should go. When the connection receives a message from the server, Netty calls ClientHandler.channelRead; put your message-handling code there.
You also need to read the docs to understand how Netty encoders/decoders work.
How can I use the ChannelFuture returned by callBack from other parts of my code in order to read the incoming messages?
Share the ClientHandler instances created by NettyClient (NettyClient.java line 29).
Do I need to call callBack again, or how can I receive the updated messages from the channel?
When a server message arrives, ClientHandler.channelRead is called.
Could I use, from my code (inside a button event), something like ret.channel().read() (so as to take the last message)?
Yes, you could, but that is not the Netty way. To play with Netty, you write callbacks (when a message arrives, when a message is sent, ...) and wait for Netty to call your code. That is, the driver is Netty, not you.
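For example, a minimal sketch of that callback style. This is hypothetical: it assumes the pipeline's decoder produces String messages and that the GUI code hands the handler a Consumer<String>, whereas the question's real ClientHandler takes a different callback:
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import java.util.function.Consumer;

public class GuiClientHandler extends SimpleChannelInboundHandler<String> {

    private final Consumer<String> onMessage;

    public GuiClientHandler(Consumer<String> onMessage) {
        this.onMessage = onMessage;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // connection is up: this is the place to send initial data to the server
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String msg) {
        // Netty calls this for every decoded incoming message;
        // hand it to the GUI (wrap in Platform.runLater(...) for JavaFX)
        onMessage.accept(msg);
    }
}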
Lastly, do you really need such a heavy library to do networking? If not, try this code; it is simple and easy to understand.
I'm using Akka 2.5.6 with Java 8, and I want to know the right way to finish the ActorSystem. Part of the functionality of my code is to process some XML files and validate them; to achieve this I have created 3 actors:
Controller, Processor and Validator.
The Controller is responsible for initiating the process and sending the files, one by one, along with other information to the Processor. The Processor then creates a digital signature of the file and sends the response to the Validator, which finally validates the status and sends an OK message to the Controller. The Controller counts the number of validated files and compares it with the total number of files; once the two are equal, I finish the ActorSystem with the terminate() method.
The method to finish is as follows:
private void endActors()
{
    ActorSystem actorSystem = getContext().system();
    Future<Terminated> terminated = actorSystem.terminate();
    do {
        log.info("Waiting to finish ...");
        try {
            Thread.sleep(30000L);
        } catch (InterruptedException ex) {
            log.error("Error in Thread.");
        }
    } while (!terminated.isCompleted());
    log.info("Actors finished processing.");
}
The loop never ends because the future never completes. I don't know if this is the right way; I hope you have understood me and can help me or give me some advice.
Try the following (the key here is the onComplete callback). I wrote a class along these lines to use in setup and teardown for JUnit, to avoid issues with an actor system not fully terminating in the teardown of one test before a new one is created in the next test (that caused port-already-in-use issues).
private static ActorSystem system = null;
private static Future<Terminated> terminatedFuture;

public static ActorSystem getFreshActorSystem() {
    tearDownActorSystem();
    while (system != null) {
        try {
            Thread.sleep(500L);
        } catch (InterruptedException e) {
            // ignore and keep waiting for the old system to go away
        }
    }
    system = ActorSystem.create();
    return system;
}

public static void tearDownActorSystem() {
    if (system != null && !isInMiddleOfTerminating()) {
        terminatedFuture = system.terminate();
        terminatedFuture.onComplete(new OnComplete<Terminated>() {
            @Override
            public void onComplete(Throwable failure, Terminated success) throws Throwable {
                system = null;
                terminatedFuture = null;
            }
        }, system.dispatcher());
    }
}

private static boolean isInMiddleOfTerminating() {
    return terminatedFuture != null;
}
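A note on why the original loop never ends: if endActors() is called from inside one of your actors (which getContext() suggests), the do/while keeps that actor busy with the current message, so the system can never finish stopping it, and from the loop's point of view the future never completes. A non-blocking sketch using Akka 2.5's getWhenTerminated(), assuming the same log field as in the question:
private void endActors() {
    ActorSystem actorSystem = getContext().system();
    actorSystem.terminate();
    // getWhenTerminated() returns a CompletionStage<Terminated> that completes
    // once every actor has stopped; no polling or sleeping required.
    actorSystem.getWhenTerminated()
               .thenAccept(terminated -> log.info("Actors finished processing."));
}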
I have the following situation:
A new channel connection is opened in this way:
ClientBootstrap bootstrap = new ClientBootstrap(
new OioClientSocketChannelFactory(Executors.newCachedThreadPool()));
icapClientChannelPipeline = new ICAPClientChannelPipeline();
bootstrap.setPipelineFactory(icapClientChannelPipeline);
ChannelFuture future = bootstrap.connect(new InetSocketAddress(host, port));
channel = future.awaitUninterruptibly().getChannel();
This is working as expected.
Stuff is written to the channel in the following way:
channel.write(chunk)
This also works as expected while the connection to the server is alive. But if the server goes down (the machine goes offline), the call hangs and doesn't return.
I confirmed this by adding log statements before and after the channel.write(chunk). When the connection is broken, only the log statement before is displayed.
What is causing this? I thought these calls were all async and returned immediately? I also tried NioClientSocketChannelFactory; same behavior.
I tried to use channel.getCloseFuture(), but the listener never gets called. I tried checking the channel before writing with channel.isOpen(), channel.isConnected() and channel.isWritable(), and they are always true...
How do I work around this? No exception is thrown and nothing really happens... Some questions like this one and this one indicate that it isn't possible to detect a channel disconnect without a heartbeat, but I can't implement a heartbeat because I can't change the server side.
Environment: Netty 3, JDK 1.7
OK, I solved this one on my own last week, so I'll add the answer for completeness.
I was wrong in point 3, because I thought I would have to change both the client and the server side for a heartbeat. As described in this question, you can use the IdleStateAwareHandler for this purpose. I implemented it like this:
The IdleStateAwareHandler:
public class IdleStateAwareHandler extends IdleStateAwareChannelHandler {
    @Override
    public void channelIdle(ChannelHandlerContext ctx, IdleStateEvent e) {
        if (e.getState() == IdleState.READER_IDLE) {
            e.getChannel().write("heartbeat-reader_idle");
        } else if (e.getState() == IdleState.WRITER_IDLE) {
            Logger.getLogger(IdleStateAwareHandler.class.getName()).log(
                    Level.WARNING, "WriteIdle detected, closing channel");
            e.getChannel().close();
        } else if (e.getState() == IdleState.ALL_IDLE) {
            e.getChannel().write("heartbeat-all_idle");
        }
    }
}
The pipeline:
public class ICAPClientChannelPipeline implements ICAPClientPipeline {

    ICAPClientHandler icapClientHandler;
    ChannelPipeline pipeline;

    public ICAPClientChannelPipeline() {
        icapClientHandler = new ICAPClientHandler();
        pipeline = pipeline();
        pipeline.addLast("idleStateHandler", new IdleStateHandler(new HashedWheelTimer(10, TimeUnit.MILLISECONDS), 5, 5, 5));
        pipeline.addLast("idleStateAwareHandler", new IdleStateAwareHandler());
        pipeline.addLast("encoder", new IcapRequestEncoder());
        pipeline.addLast("chunkSeparator", new IcapChunkSeparator(1024 * 4));
        pipeline.addLast("decoder", new IcapResponseDecoder());
        pipeline.addLast("chunkAggregator", new IcapChunkAggregator(1024 * 4));
        pipeline.addLast("handler", icapClientHandler);
    }

    @Override
    public ChannelPipeline getPipeline() throws Exception {
        return pipeline;
    }
}
This detects any read or write idle state on the channel after 5 seconds.
As you can see it is a little bit ICAP-specific but this doesn't matter for the question.
To react to the close triggered by an idle event, I need the following listener:
channel.getCloseFuture().addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        doSomething();
    }
});
I have an application with the following route:
from("netty:tcp://localhost:5150?sync=false&keepAlive=true")
.routeId("tcp.input")
.transform()
.simple("insert into tamponems (AVIS) values (\"${in.body}\");")
.to("jdbc:mydb");
This route receives a new message every 59 milliseconds. I want to stop the route when the connection to the database is lost, before a second message arrives. Above all, I want to never lose a message.
I proceeded this way:
I added an errorHandler:
errorHandler(deadLetterChannel("direct:backup")
.redeliveryDelay(5L)
.maximumRedeliveries(1)
.retryAttemptedLogLevel(LoggingLevel.WARN)
.logExhausted(false));
My errorHandler tries to redeliver the message, and if it fails again, it redirects the message to a dead letter channel.
The following direct:backup route will stop the tcp.input route and keep trying to redeliver the message to the database:
RoutePolicy policy = new StopRoutePolicy();
from("direct:backup")
.routePolicy(policy)
.errorHandler(
defaultErrorHandler()
.redeliveryDelay(1000L)
.maximumRedeliveries(-1)
.retryAttemptedLogLevel(LoggingLevel.ERROR)
)
.to("jdbc:mydb");
Here is the code of the routePolicy:
public class StopRoutePolicy extends RoutePolicySupport {

    private static final Logger LOG = LoggerFactory.getLogger(StopRoutePolicy.class);

    @Override
    public void onExchangeDone(Route route, Exchange exchange) {
        String stop = "tcp.input";
        CamelContext context = exchange.getContext();
        if (context.getRouteStatus(stop) != null && context.getRouteStatus(stop).isStarted()) {
            try {
                exchange.getContext().getInflightRepository().remove(exchange);
                LOG.info("STOP ROUTE: {}", stop);
                context.stopRoute(stop);
            } catch (Exception e) {
                getExceptionHandler().handleException(e);
            }
        }
    }
}
My problems with this method are:
In my "direct:backup" route, if I set the maximumRedeliveries to -1 the route tcp.input will never stop
I'm loosing messages during the stop
This method for detecting the connection loss and for stopping the route is too long
Does anybody have an idea how to make this faster, or how to do this differently, so that no messages are lost?
I have finally found a way to resolve my problems. To make the application faster, I added asynchronous processing and multithreading with seda.
from("netty:tcp://localhost:5150?sync=false&keepAlive=true").to("seda:break");
from("seda:break").threads(5)
.routeId("tcp.input")
.transform()
.simple("insert into tamponems (AVIS) values (\"${in.body}\");")
.to("jdbc:mydb");
I did the same with the backup route.
from("seda:backup")
.routePolicy(policy)
.errorHandler(
defaultErrorHandler()
.redeliveryDelay(1000L)
.maximumRedeliveries(-1)
.retryAttemptedLogLevel(LoggingLevel.ERROR)
).threads(2).to("jdbc:mydb");
And I modified the routePolicy like that:
public class StopRoutePolicy extends RoutePolicySupport {

    private static final Logger LOG = LoggerFactory.getLogger(StopRoutePolicy.class);

    @Override
    public void onExchangeBegin(Route route, Exchange exchange) {
        String stop = "tcp.input";
        CamelContext context = exchange.getContext();
        if (context.getRouteStatus(stop) != null && context.getRouteStatus(stop).isStarted()) {
            try {
                exchange.getContext().getInflightRepository().remove(exchange);
                LOG.info("STOP ROUTE: {}", stop);
                context.stopRoute(stop);
            } catch (Exception e) {
                getExceptionHandler().handleException(e);
            }
        }
    }

    @Override
    public void onExchangeDone(Route route, Exchange exchange) {
        String stop = "tcp.input";
        CamelContext context = exchange.getContext();
        if (context.getRouteStatus(stop) != null && context.getRouteStatus(stop).isStopped()) {
            try {
                LOG.info("RESTART ROUTE: {}", stop);
                context.startRoute(stop);
            } catch (Exception e) {
                getExceptionHandler().handleException(e);
            }
        }
    }
}
With these updates, the TCP route is stopped before the backup route processes the exchange, and the route is restarted when the JDBC connection is back.
Now, thanks to Camel, the application is able to handle a database failure without losing messages and without manual intervention.
I hope this could help you.