Proper MQTT subscription code that maintains persistence - java

I am looking for Java code for an MQTT client that subscribes to a given topic, where every message published on that topic reaches the client exactly once. I have written several versions, and in all of them messages are delivered properly while the client is connected to the broker. However, if the subscribed client disconnects from the broker for some time and then reconnects, it does not receive the messages that were sent while it was disconnected. I have also set the clean session flag to false, but it still does not work. The code I used is given below:
import org.fusesource.hawtbuf.*;
import org.fusesource.mqtt.client.*;
/**
* Uses an callback based interface to MQTT. Callback based interfaces
* are harder to use but are slightly more efficient.
*/
class Listener {
public static void main(String []args) throws Exception {
String user = env("APOLLO_USER", "admin");
String password = env("APOLLO_PASSWORD", "password");
String host = env("APOLLO_HOST", "localhost");
int port = Integer.parseInt(env("APOLLO_PORT", "61613"));
final String destination = arg(args, 1, "subject");
MQTT mqtt = new MQTT();
mqtt.setHost(host, port);
mqtt.setUserName(user);
mqtt.setPassword(password);
mqtt.setCleanSession(false);
mqtt.setClientId("newclient");
final CallbackConnection connection = mqtt.callbackConnection();
connection.listener(new org.fusesource.mqtt.client.Listener() {
long count = 0;
long start = System.currentTimeMillis();
public void onConnected() {
}
public void onDisconnected() {
}
public void onFailure(Throwable value) {
value.printStackTrace();
System.exit(-2);
}
public void onPublish(UTF8Buffer topic, Buffer msg, Runnable ack) {
System.out.println("Nisha Messages : " + msg);
System.out.println("Nisha topic" + topic);
System.out.println("Nisha Receive acknowledgement : " + ack);
String body = msg.utf8().toString();
if("SHUTDOWN".equals(body)) {
long diff = System.currentTimeMillis() - start;
System.out.println(String.format("Received %d in %.2f seconds", count, (1.0*diff/1000.0)));
connection.disconnect(new Callback<Void>() {
@Override
public void onSuccess(Void value) {
System.exit(0);
}
@Override
public void onFailure(Throwable value) {
value.printStackTrace();
System.exit(-2);
}
});
} else {
if( count == 0 ) {
start = System.currentTimeMillis();
}
if( count % 1000 == 0 ) {
System.out.println(String.format("Received %d messages.", count));
}
count ++;
}
}
});
connection.connect(new Callback<Void>() {
@Override
public void onSuccess(Void value) {
System.out.println("connected in :::: ");
Topic[] topics = {new Topic(destination, QoS.AT_MOST_ONCE)};
connection.subscribe(topics, new Callback<byte[]>() {
public void onSuccess(byte[] qoses) {
}
public void onFailure(Throwable value) {
value.printStackTrace();
System.exit(-2);
}
});
}
@Override
public void onFailure(Throwable value) {
value.printStackTrace();
System.exit(-2);
}
});
// Wait forever..
synchronized (Listener.class) {
while(true)
Listener.class.wait();
}
}
private static String env(String key, String defaultValue) {
String rc = System.getenv(key);
if( rc== null )
return defaultValue;
return rc;
}
private static String arg(String []args, int index, String defaultValue) {
if( index < args.length )
return args[index];
else
return defaultValue;
}
}
Am I doing something wrong here?

it does not receive the messages that were sent during the time that it was not connected
MQTT does not retain all messages. If the client goes offline, undelivered messages are lost; the retain mechanism keeps only the last message published to a topic with the retain flag set.
You can read more in the spec, section 3.3.1.3 RETAIN.
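One detail worth checking in the code above (an observation beyond this answer): with cleanSession set to false, brokers queue missed messages only for subscriptions made at QoS 1 or 2. The listener above subscribes with QoS.AT_MOST_ONCE, and QoS 0 messages are not queued for disconnected clients. A minimal sketch using the same fusesource mqtt-client library (broker host, port, and topic assumed from the question), with the subscription raised to AT_LEAST_ONCE and the simpler blocking API:

import org.fusesource.mqtt.client.*;

public class PersistentListener {
    public static void main(String[] args) throws Exception {
        MQTT mqtt = new MQTT();
        mqtt.setHost("localhost", 61613);
        mqtt.setCleanSession(false);   // keep session state across reconnects
        mqtt.setClientId("newclient"); // must be identical on every connect

        BlockingConnection connection = mqtt.blockingConnection();
        connection.connect();
        // QoS 1 (not AT_MOST_ONCE) is what makes the broker queue messages
        // for this subscription while the client is disconnected.
        connection.subscribe(new Topic[]{new Topic("subject", QoS.AT_LEAST_ONCE)});

        while (true) {
            Message message = connection.receive();
            System.out.println(new String(message.getPayload()));
            message.ack(); // acknowledge so the broker does not redeliver
        }
    }
}

The publisher must also send at QoS 1 or 2; a QoS 0 publish is never stored by the broker, whatever QoS the subscription asks for.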

Related

Java Pusher Client disconnects right after connecting

I am connecting to a Pusher server using Java code, and the issue is that the client (using the Java client library) disconnects a few seconds after making the connection to the server and does not reconnect itself.
Also, the connection status the Java client reports is CONNECTED, as shown in the onConnectionStateChange() callback method, but in the background it seems to stay disconnected: when I push a request from the server, the Java client does not receive anything, yet after a manual reconnect I get the request.
Code:
@Component
@Slf4j
public class PosWebSocketClient {
private Pusher pusher;
private boolean isConnected = false;
private String pusherAppKey;
private Timer timer;
private Date activeTime;
private boolean isUserLoggedIn = false;
private PusherOptions pusherOptions;
public synchronized void init(String pusherAppKey) {
log.info("Initializing Pusher");
pusher = new Pusher(pusherAppKey/* , pusherOptions() */);
this.pusherAppKey = pusherAppKey;
this.isUserLoggedIn = true;
pusher.connect(new ConnectionEventListener() {
@Override
public void onConnectionStateChange(ConnectionStateChange change) {
log.info("State changed to " + change.getCurrentState() + " from "
+ change.getPreviousState());
if (change.getCurrentState() == ConnectionState.CONNECTED) {
isConnected = true;
} else {
isConnected = false;
}
log.info("isConnected {}", isConnected);
}
@Override
public void onError(String message, String code, Exception e) {
log.info("Error while connecting to the server with {} {} {}", message, code, e.getMessage());
log.error("Exception: - ",e);
}
}, ConnectionState.ALL);
Channel channel = pusher.subscribe("*****");
channel.bind("any-event-1", sendDataListener());
channel.bind("any-event-2", receiveOrderListener());
channel.bind("any-event-3", logOutListener());
channel.bind("any-event-4", getOrderStatusListener());
activeTime = new Date();
/*
* if (timer == null) { timer = new Timer(); timer.schedule(new MyTask(), 0,
* 1000 * 60 * 2); }
*/
}
class MyTask extends TimerTask {
public void run() {
long idleTimeInMinutes = (new Date().getTime() - activeTime.getTime()) / (1000 * 60);
log.info("Pusher Idle Time {} ", idleTimeInMinutes);
if (isUserLoggedIn && idleTimeInMinutes >= 10 && StringUtils.isNotBlank(pusherAppKey)) {
log.info("Pusher idle time is greater than 10 mins, initializing again");
disconnect();
init(pusherAppKey);
}
}
}
private SubscriptionEventListener logOutListener() {
return new SubscriptionEventListener() {
@Override
public void onEvent(PusherEvent pusherEvent) {
}
};
}
private SubscriptionEventListener sendDataListener() {
return new SubscriptionEventListener() {
@Override
public void onEvent(PusherEvent pusherEvent) {
log.info("Received SendData event");
}
};
}
private SubscriptionEventListener receiveOrderListener() {
return new SubscriptionEventListener() {
@Override
public void onEvent(PusherEvent pusherEvent) {
log.info("Received FetchOrder event");
}
};
}
private SubscriptionEventListener getOrderStatusListener() {
return new SubscriptionEventListener() {
@Override
public void onEvent(PusherEvent pusherEvent) {
log.info("Received SendStatus event");
}
};
}
public synchronized void disconnect() {
// Disconnect from the service (or become disconnected by network conditions)
if (pusher != null && pusher.getConnection() != null) {
log.info("Disconnecting Pusher");
Channel channel = pusher.getChannel("*****");
channel.unbind("any-event-1", sendDataListener());
channel.unbind("any-event-2", receiveOrderListener());
channel.unbind("any-event-3", logOutListener());
channel.unbind("any-event-4", getOrderStatusListener());
pusher.unsubscribe("*****");
pusher.disconnect();
}
}
public void restart() {
if (StringUtils.isNotBlank(pusherAppKey)) {
log.info("Restarting Pusher");
disconnect();
this.init(this.pusherAppKey);
}
}
/*
* private PusherOptions pusherOptions() { if (pusherOptions == null) {
* pusherOptions = new PusherOptions();
* pusherOptions.setMaxReconnectionAttempts(1000000); } return pusherOptions; }
*/
public boolean isConnected() {
return isConnected;
}
private void setPusherStatus() {
activeTime = new Date();
}
public void userLoggedOut() {
this.isUserLoggedIn = false;
}
}
The Maven dependency used is:
<dependency>
<groupId>com.pusher</groupId>
<artifactId>pusher-java-client</artifactId>
<version>2.0.0</version>
</dependency>
Can anyone please have a look and let me know the issue with the code, the dependency, or any property I am missing while making the connection to the server? TIA.

How to implement retry policies while sending data to another application?

I am working on an application that sends data to ZeroMQ. Below is what my application does:
It has a class SendToZeroMQ that sends data to ZeroMQ.
It adds the same data to a retryQueue in the same class so that it can be retried later if an acknowledgement is not received. It uses a Guava cache with a maximumSize limit.
It has a separate thread that receives acknowledgements from ZeroMQ for the data that was sent earlier. If an acknowledgement is not received, SendToZeroMQ retries sending that same piece of data; if an acknowledgement is received, the record is removed from the retryQueue so that it cannot be retried again.
The idea is very simple, and I have to make sure my retry policy works well so that I don't lose data in the (very rare) case that acknowledgements are not received.
I am thinking of building two types of retry policies, but I am not able to understand how to build them for my program:
RetryNTimes: retries N times with a particular sleep between each retry, and after that drops the record.
ExponentialBackoffRetry: keeps retrying with exponentially increasing delays; we can set a maximum retry limit, after which it stops retrying and drops the record.
Below is my SendToZeroMQ class, which sends data to ZeroMQ, retries every 30 seconds from a background thread, and starts a ResponsePoller runnable that runs forever:
public class SendToZeroMQ {
private final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(5);
private final Cache<Long, byte[]> retryQueue =
CacheBuilder
.newBuilder()
.maximumSize(10000000)
.concurrencyLevel(200)
.removalListener(
RemovalListeners.asynchronous(new CustomListener(), executorService)).build();
private static class Holder {
private static final SendToZeroMQ INSTANCE = new SendToZeroMQ();
}
public static SendToZeroMQ getInstance() {
return Holder.INSTANCE;
}
private SendToZeroMQ() {
executorService.submit(new ResponsePoller());
// retry every 30 seconds for now
executorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
for (Entry<Long, byte[]> entry : retryQueue.asMap().entrySet()) {
sendTo(entry.getKey(), entry.getValue());
}
}
}, 0, 30, TimeUnit.SECONDS);
}
public boolean sendTo(final long address, final byte[] encodedRecords) {
Optional<ZMQSocketInfo> liveSockets = PoolManager.getInstance().getNextSocket();
if (!liveSockets.isPresent()) {
return false;
}
return sendTo(address, encodedRecords, liveSockets.get().getSocket());
}
public boolean sendTo(final long address, final byte[] encodedByteArray, final Socket socket) {
ZMsg msg = new ZMsg();
msg.add(encodedByteArray);
boolean sent = msg.send(socket);
msg.destroy();
// adding to retry queue
retryQueue.put(address, encodedByteArray);
return sent;
}
public void removeFromRetryQueue(final long address) {
retryQueue.invalidate(address);
}
}
Below is my ResponsePoller class, which polls for acknowledgements from ZeroMQ. If we get an acknowledgement back, we remove that record from the retry queue so it is not retried; otherwise it will be retried.
public class ResponsePoller implements Runnable {
private static final Random random = new Random();
@Override
public void run() {
ZContext ctx = new ZContext();
Socket client = ctx.createSocket(ZMQ.PULL);
String identity = String.format("%04X-%04X", random.nextInt(), random.nextInt());
client.setIdentity(identity.getBytes(ZMQ.CHARSET));
client.bind("tcp://" + TestUtils.getIpaddress() + ":8076");
PollItem[] items = new PollItem[] {new PollItem(client, Poller.POLLIN)};
while (!Thread.currentThread().isInterrupted()) {
// Tick once per second, pulling in arriving messages
for (int centitick = 0; centitick < 100; centitick++) {
ZMQ.poll(items, 10);
if (items[0].isReadable()) {
ZMsg msg = ZMsg.recvMsg(client);
Iterator<ZFrame> it = msg.iterator();
while (it.hasNext()) {
ZFrame frame = it.next();
try {
long address = TestUtils.getAddress(frame.getData());
// remove from retry queue since we got the acknowledgment for this record
SendToZeroMQ.getInstance().removeFromRetryQueue(address);
} catch (Exception ex) {
// log error
} finally {
frame.destroy();
}
}
msg.destroy();
}
}
}
ctx.destroy();
}
}
Question:
As you can see above, I am sending encodedRecords to ZeroMQ using the SendToZeroMQ class, and each record is then retried every 30 seconds depending on whether we got an acknowledgement back via the ResponsePoller class or not.
For each encodedRecords there is a unique key called address, and that is what we get back from ZeroMQ as an acknowledgement.
How can I extend this example to build the two retry policies mentioned above, and then pick which retry policy to use while sending data? I came up with the interface below, but I am not able to understand how to move forward to implement those retry policies and use them in the code above.
public interface RetryPolicy {
/**
* Called when an operation has failed for some reason. This method should return
* true to make another attempt.
*/
public boolean allowRetry(int retryCount, long elapsedTimeMs);
}
Can I use guava-retrying or failsafe here, because these libraries already have many retry policies that I could use?
I am not able to work out all the details regarding how to use the relevant APIs, but as for the algorithm, you could try the following (a sketch follows after this list):
the retry policy needs to have some state attached to each message (at least the number of times the current message has been retried, possibly what the current delay is). You need to decide whether the RetryPolicy should keep that itself or whether you want to store it inside the message.
instead of allowRetry, you could have a method calculating when the next retry should occur (in absolute time or as a number of milliseconds in the future), which would be a function of the state mentioned above.
the retry queue should contain information on when each message should be retried.
instead of using scheduleAtFixedRate, find the message in the retry queue with the lowest when_is_next_retry (possibly by sorting on the absolute retry timestamp and picking the first), and let the executorService reschedule itself using schedule and the time_to_next_retry.
for each retry, pull it from the retry queue, send the message, use the RetryPolicy to calculate when the next retry should be (if it is to be retried at all), and insert it back into the retry queue with a new value for when_is_next_retry (if the RetryPolicy returns -1, that could mean the message shall not be retried any more).
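A minimal sketch of that self-rescheduling loop (illustrative names, not the asker's classes; the scheduled task body is a stand-in for SendToZeroMQ.sendTo, and the acknowledgement path cancels the pending task the way removeFromRetryQueue invalidates the cache today):

import java.util.concurrent.*;

class RetryScheduler {
    // Returns the next delay in ms for the given 1-based attempt, or -1 to drop the record.
    interface DelayPolicy {
        long nextDelayMs(int attempt);
    }

    static DelayPolicy retryNTimes(int n, long sleepMs) {
        return attempt -> attempt <= n ? sleepMs : -1;
    }

    static DelayPolicy exponentialBackoff(int maxRetries, long baseMs) {
        return attempt -> attempt <= maxRetries ? baseMs * (1L << (attempt - 1)) : -1;
    }

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final ConcurrentMap<Long, ScheduledFuture<?>> pending = new ConcurrentHashMap<>();

    void send(long address, byte[] payload, DelayPolicy policy, int attempt, long delayMs) {
        pending.put(address, scheduler.schedule(() -> {
            // stand-in for SendToZeroMQ.getInstance().sendTo(address, payload)
            long next = policy.nextDelayMs(attempt + 1);
            if (next >= 0) {
                send(address, payload, policy, attempt + 1, next); // re-arm the next attempt
            } else {
                pending.remove(address); // retries exhausted: drop the record
            }
        }, delayMs, TimeUnit.MILLISECONDS));
    }

    // Called from the acknowledgement path, like removeFromRetryQueue.
    void acknowledged(long address) {
        ScheduledFuture<?> f = pending.remove(address);
        if (f != null) {
            f.cancel(false);
        }
    }
}

Calling send(address, data, retryNTimes(10, 30000), 0, 0) reproduces the current "every 30 seconds, at most N times" behaviour, while exponentialBackoff plugs in without touching the scheduling code.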
Not a perfect way, but it can also be achieved as below.
public interface RetryPolicy {
public boolean allowRetry();
public void decreaseRetryCount();
}
Create two implementations. For RetryNTimes:
public class RetryNTimes implements RetryPolicy {
private int maxRetryCount;
public RetryNTimes(int maxRetryCount) {
this.maxRetryCount = maxRetryCount;
}
public boolean allowRetry() {
return maxRetryCount > 0;
}
public void decreaseRetryCount()
{
maxRetryCount = maxRetryCount-1;
}}
For ExponentialBackoffRetry:
public class ExponentialBackoffRetry implements RetryPolicy {
private int maxRetryCount;
private final Date retryUpto;
public ExponentialBackoffRetry(int maxRetryCount, Date retryUpto) {
this.maxRetryCount = maxRetryCount;
this.retryUpto = retryUpto;
}
public boolean allowRetry() {
Date date = new Date();
if(maxRetryCount <= 0 || date.compareTo(retryUpto)>=0)
{
return false;
}
return true;
}
public void decreaseRetryCount() {
maxRetryCount = maxRetryCount-1;
}}
You need to make some changes in the SendToZeroMQ class:
public class SendToZeroMQ {
private final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(5);
private final Cache<Long,RetryMessage> retryQueue =
CacheBuilder
.newBuilder()
.maximumSize(10000000)
.concurrencyLevel(200)
.removalListener(
RemovalListeners.asynchronous(new CustomListener(), executorService)).build();
private static class Holder {
private static final SendToZeroMQ INSTANCE = new SendToZeroMQ();
}
public static SendToZeroMQ getInstance() {
return Holder.INSTANCE;
}
private SendToZeroMQ() {
executorService.submit(new ResponsePoller());
// retry every 30 seconds for now
executorService.scheduleAtFixedRate(new Runnable() {
public void run() {
for (Map.Entry<Long, RetryMessage> entry : retryQueue.asMap().entrySet()) {
RetryMessage retryMessage = entry.getValue();
if(retryMessage.getRetryPolicy().allowRetry())
{
retryMessage.getRetryPolicy().decreaseRetryCount();
entry.setValue(retryMessage);
sendTo(entry.getKey(), retryMessage.getMessage(),retryMessage);
}else
{
retryQueue.asMap().remove(entry.getKey());
}
}
}
}, 0, 30, TimeUnit.SECONDS);
}
public boolean sendTo(final long address, final byte[] encodedRecords, RetryMessage retryMessage) {
Optional<ZMQSocketInfo> liveSockets = PoolManager.getInstance().getNextSocket();
if (!liveSockets.isPresent()) {
return false;
}
if(null==retryMessage)
{
RetryPolicy retryPolicy = new RetryNTimes(10);
retryMessage = new RetryMessage(retryPolicy,encodedRecords);
retryQueue.asMap().put(address,retryMessage);
}
return sendTo(address, encodedRecords, liveSockets.get().getSocket());
}
public boolean sendTo(final long address, final byte[] encodedByteArray, final ZMQ.Socket socket) {
ZMsg msg = new ZMsg();
msg.add(encodedByteArray);
boolean sent = msg.send(socket);
msg.destroy();
return sent;
}
public void removeFromRetryQueue(final long address) {
retryQueue.invalidate(address);
}}
Here is a working little simulation of your environment that shows how this can be done. Note the Guava cache is the wrong data structure here, since you aren't interested in eviction (I think). So I'm using a concurrent hashmap:
package experimental;
import static java.util.concurrent.TimeUnit.MILLISECONDS;
import java.util.Arrays;
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;
class Experimental {
/** Return the desired backoff delay in millis for the given retry number, which is 1-based. */
interface RetryStrategy {
long getDelayMs(int retry);
}
enum ConstantBackoff implements RetryStrategy {
INSTANCE;
@Override
public long getDelayMs(int retry) {
return 1000L;
}
}
enum ExponentialBackoff implements RetryStrategy {
INSTANCE;
@Override
public long getDelayMs(int retry) {
return 100 + (1L << retry);
}
}
static class Sender {
private final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(4);
private final ConcurrentMap<Long, Retrier> pending = new ConcurrentHashMap<>();
/** Send the given data with given address on the given socket. */
void sendTo(long addr, byte[] data, int socket) {
System.err.println("Sending " + Arrays.toString(data) + "#" + addr + " on " + socket);
}
private class Retrier implements Runnable {
private final RetryStrategy retryStrategy;
private final long addr;
private final byte[] data;
private final int socket;
private int retry;
private Future<?> future;
Retrier(RetryStrategy retryStrategy, long addr, byte[] data, int socket) {
this.retryStrategy = retryStrategy;
this.addr = addr;
this.data = data;
this.socket = socket;
this.retry = 0;
}
synchronized void start() {
if (future == null) {
future = executorService.submit(this);
pending.put(addr, this);
}
}
synchronized void cancel() {
if (future != null) {
future.cancel(true);
future = null;
}
}
private synchronized void reschedule() {
if (future != null) {
future = executorService.schedule(this, retryStrategy.getDelayMs(++retry), MILLISECONDS);
}
}
@Override
synchronized public void run() {
sendTo(addr, data, socket);
reschedule();
}
}
long getVerifiedAddr() {
System.err.println("Pending messages: " + pending.size());
Iterator<Long> i = pending.keySet().iterator();
long addr = i.hasNext() ? i.next() : 0;
return addr;
}
class CancellationPoller implements Runnable {
@Override
public void run() {
while (!Thread.currentThread().isInterrupted()) {
try {
Thread.sleep(1000);
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
}
long addr = getVerifiedAddr();
if (addr == 0) {
continue;
}
System.err.println("Verified message (to be cancelled) " + addr);
Retrier retrier = pending.remove(addr);
if (retrier != null) {
retrier.cancel();
}
}
}
}
Sender initialize() {
executorService.submit(new CancellationPoller());
return this;
}
void sendWithRetriesTo(RetryStrategy retryStrategy, long addr, byte[] data, int socket) {
new Retrier(retryStrategy, addr, data, socket).start();
}
}
public static void main(String[] args) {
Sender sender = new Sender().initialize();
for (long i = 1; i <= 10; i++) {
sender.sendWithRetriesTo(ConstantBackoff.INSTANCE, i, null, 42);
}
for (long i = -1; i >= -10; i--) {
sender.sendWithRetriesTo(ExponentialBackoff.INSTANCE, i, null, 37);
}
}
}
You can use Apache Camel. It provides a component for ZeroMQ, and tools like error handlers, redelivery policies, and dead letter channels are natively provided.
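For illustration (assuming the camel-zeromq component from Camel Extra is on the classpath; the endpoint URI below is hypothetical), a minimal RouteBuilder sketch of Camel's dead letter channel with a redelivery policy, which covers both the fixed-count and exponential-backoff cases:

import org.apache.camel.builder.RouteBuilder;

public class ZeroMqRetryRoute extends RouteBuilder {
    @Override
    public void configure() {
        errorHandler(deadLetterChannel("seda:dead") // exhausted messages land here
                .maximumRedeliveries(5)             // the RetryNTimes part
                .redeliveryDelay(1000)              // base delay between attempts
                .useExponentialBackOff()            // the ExponentialBackoffRetry part
                .backOffMultiplier(2));

        from("direct:send")
                .to("zeromq:tcp://127.0.0.1:5556?socketType=PUSH"); // hypothetical endpoint
    }
}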

Netty Channel fails when writing and flushing too many messages too fast

When I wrote a producer to publish messages to my server, I saw this:
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:384)
at io.netty.buffer.UnpooledUnsafeDirectByteBuf.setBytes(UnpooledUnsafeDirectByteBuf.java:447)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
I've searched all around and was told that this happens because the channel is closed.
But in my code, I only close a channel when my channel pool destroys it.
Here is my code:
public static class ChannelFactory implements PoolableObjectFactory<Channel> {
private final Bootstrap bootstrap;
private String host;
private int port;
public ChannelFactory(Bootstrap bootstrap, String host, int port) {
this.bootstrap = bootstrap;
this.host = host;
this.port = port;
}
@Override
public Channel makeObject() throws Exception {
System.out.println("Create new channel!!!");
bootstrap.validate();
return bootstrap.connect(host, port).channel();
}
@Override
public void destroyObject(Channel channel) throws Exception {
ChannelFuture close = channel.close();
if (close.isSuccess()) {
System.out.println(channel + " close successfully");
}
}
@Override
public boolean validateObject(Channel channel) {
System.out.println("Validate object");
return (channel.isOpen());
}
@Override
public void activateObject(Channel channel) throws Exception {
System.out.println(channel + " is activated");
}
@Override
public void passivateObject(Channel channel) throws Exception {
System.out.println(channel + " is passivated");
}
/**
* @return the host
*/
public String getHost() {
return host;
}
/**
* @param host the host to set
* @return
*/
public ChannelFactory setHost(String host) {
this.host = host;
return this;
}
/**
* @return the port
*/
public int getPort() {
return port;
}
/**
* @param port the port to set
* @return
*/
public ChannelFactory setPort(int port) {
this.port = port;
return this;
}
}
And here is my Runner:
public static class Runner implements Runnable {
private Channel channel;
private ButtyMessage message;
private MyChannelPool channelPool;
public Runner(MyChannelPool channelPool, Channel channel, ButtyMessage message) {
this.channel = channel;
this.message = message;
this.channelPool = channelPool;
}
@Override
public void run() {
channel.writeAndFlush(message.content()).syncUninterruptibly().addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
channelPool.returnObject(future.channel());
}
});
}
}
And my main:
public static void main(String[] args) throws InterruptedException {
final String host = "127.0.0.1";
final int port = 8080;
int jobSize = 100;
int jobNumber = 10000;
final Bootstrap b = func(host, port);
final MyChannelPool channelPool = new MyChannelPool(new ChannelFactory(b, host, port));
ExecutorService threadPool = Executors.newFixedThreadPool(1);
for (int i = 0; i < jobNumber; i++) {
try {
threadPool.execute(new Runner(channelPool, channelPool.borrowObject(), new ButtyMessage()));
} catch (Exception ex) {
System.out.println("ex = " + ex.getMessage());
}
}
}
With ButtyMessage extends ByteBufHolder.
In my Runner class, if I sleep(10) after writeAndFlush, it runs quite OK. But I don't want to rely on sleep, so I use a ChannelFutureListener; however, the result is bad. If I send about 1,000 to 10,000 messages, it crashes and throws the exception above. Is there any way to avoid this?
Thanks all.
Sorry for my bad explanation and my English :)
You have several issues that could explain this. Most of them are related to wrong usage of asynchronous operations and futures.
I don't know whether it is linked to your issue, but if you really want to print when the channel is actually closed, you have to wait on the future, since close() (like any other operation) returns its future immediately, without waiting for the real close. Therefore your test if (close.isSuccess()) will almost always be false.
public void destroyObject(final Channel channel) throws Exception {
channel.close().addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture close) {
if (close.isSuccess()) {
System.out.println(channel + " close successfully");
}
}
});
}
However, as I suppose it is only for debugging purposes, it is not mandatory.
Another one: you hand back to your pool a channel that is not yet connected (which could explain your sleep(10), maybe?). You have to wait on the connect():
public Channel makeObject() throws Exception {
System.out.println("Create new channel!!!");
//bootstrap.validate(); // this is implicitly called in connect()
ChannelFuture future = bootstrap.connect(host, port).awaitUninterruptibly();
if (future.isSuccess()) {
return future.channel();
} else {
// do what you need to do when the connection is not done
}
}
Third one: validation of a connected channel might be better done using isActive():
@Override
public boolean validateObject(Channel channel) {
System.out.println("Validate object");
return channel.isActive(); // instead of isOpen()
}
Fourth one: in your runner, you wrongly await on the future when you should not. You can remove your syncUninterruptibly() and leave the rest as is.
@Override
public void run() {
channel.writeAndFlush(message.content()).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
channelPool.returnObject(future.channel());
}
});
}
And finally, I suppose you know your test is completely sequential (1 thread in your pool), such that each client will reuse over and over the very same channel?
Could you try to change the 4 points to see if it corrects your issue?
EDIT: after requester comment
For syncUninterruptibly(), I did not read carefully. If you want to block on the write, then you don't need the extra addListener, since the future is done once the sync is over. So you can directly call channelPool.returnObject as the next command, just after your sync.
So you should write it this way, which is simpler:
@Override
public void run() {
channel.writeAndFlush(message.content()).syncUninterruptibly();
channelPool.returnObject(channel);
}
For fireChannelActive, it will be called as soon as the connect finishes (so from makeObject, at some point in the future). Moreover, once disconnected (as you noticed in your exception), the channel is no longer usable and must be recreated from scratch. So I would still suggest using isActive(), such that, if a channel is not active, it will be removed via destroyObject...
Take a look at the channel state model here.
Finally, I've found a solution myself, though I'm still thinking about other solutions. (This solution is copied almost exactly from the Netty 4.0.28 release notes.)
final String host = "127.0.0.1";
final int port = 8080;
int jobNumber = 100000;
final EventLoopGroup group = new NioEventLoopGroup(100);
ChannelPoolMap<InetSocketAddress, MyChannelPool> poolMap = new AbstractChannelPoolMap<InetSocketAddress, MyChannelPool>() {
@Override
protected MyChannelPool newPool(InetSocketAddress key) {
Bootstrap bootstrap = func(group, key.getHostName(), key.getPort());
return new MyChannelPool(bootstrap, new _AbstractChannelPoolHandler());
}
};
ChannelPoolMap<InetSocketAddress, FixedChannelPool> poolMap1 = new AbstractChannelPoolMap<InetSocketAddress, FixedChannelPool>() {
@Override
protected FixedChannelPool newPool(InetSocketAddress key) {
Bootstrap bootstrap = func(group, key.getHostName(), key.getPort());
return new FixedChannelPool(bootstrap, new _AbstractChannelPoolHandler(), 10);
}
};
final ChannelPool myChannelPool = poolMap.get(new InetSocketAddress(host, port));
final CountDownLatch latch = new CountDownLatch(jobNumber);
for (int i = 0; i < jobNumber; i++) {
final int counter = i;
final Future<Channel> future = myChannelPool.acquire();
future.addListener(new FutureListener<Channel>() {
@Override
public void operationComplete(Future<Channel> f) {
if (f.isSuccess()) {
Channel ch = f.getNow();
// Do somethings
ch.writeAndFlush(new ButtyMessage().content()).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (future.isSuccess()) {
System.out.println("counter = " + counter);
System.out.println("future = " + future.channel());
latch.countDown();
}
}
});
// Release back to pool
myChannelPool.release(ch);
} else {
System.out.println(f.cause().getMessage());
f.cause().printStackTrace();
}
}
});
}
try {
latch.await();
System.exit(0);
} catch (InterruptedException ex) {
System.out.println("ex = " + ex.getMessage());
}
As you can see, I use SimpleChannelPool and FixedChannelPool (an implementation of SimpleChannelPool provided by Netty).
What they do:
SimpleChannelPool: opens as many channels as it needs. If you have 100,000 messages, that of course causes errors: many sockets are opened, and then an IOException: too many open files occurs. (Is that really a pool? Creating as many connections as possible and then throwing an exception is not what I would call pooling.)
FixedChannelPool: did not work in my case (I am still studying why).
Indeed, I want to use an ObjectPool instead, and I may post it as soon as I finish. Thanks @Frederic Brégier for helping me so much!
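As an aside (not from the original thread): Netty's fuller FixedChannelPool constructor bounds both the number of open channels and the number of queued acquire requests, and can fail acquires after a timeout instead of opening sockets without limit. A sketch assuming Netty 4.0.28+ and the bootstrap/handler names from the snippet above:

import io.netty.channel.pool.ChannelHealthChecker;
import io.netty.channel.pool.FixedChannelPool;

FixedChannelPool pool = new FixedChannelPool(
        bootstrap,                                  // configured Bootstrap from func(...)
        new _AbstractChannelPoolHandler(),          // pool handler from the snippet above
        ChannelHealthChecker.ACTIVE,                // re-check channel health on acquire
        FixedChannelPool.AcquireTimeoutAction.FAIL, // fail fast instead of queueing forever
        5000,                                       // acquire timeout in ms
        10,                                         // max open channels
        10000);                                     // max pending acquire requests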

Getting started with akka persistence - Unable to replay messages

I am trying to write a very basic app where messages are sent to an actor at a high rate, messages are consumed by the actor at a slower rate, and then, after some time, the app is killed.
When I run the app again with the same actor system name, the same actor name, and the same persistenceId, I expect to see the missed messages replayed, but it is not happening.
(If I delete the journal and snapshot locations, they are created again on the next run with some files that are not 0 bytes in size, so something is definitely being written.)
Edit 1: a RecoveryCompleted object is received in onReceiveRecover whenever I start the app.
public class App {
public static void main(String[] args) throws InterruptedException {
System.out.println("Hello World!");
ActorSystem actorSystem = ActorSystem.create("sample-actor-system");
ActorRef sampleActor = actorSystem.actorOf(
Props.create(AkkaWorker.class).withDispatcher(
"akka.actor.test-dispatcher"),
"sample-actor");
System.out.println(actorSystem.settings().config());
int i = 1;
// Run this code the next time so that nothing is published, and only the replayed messages are processed by the actor
// Thread.sleep(10000);
// System.exit(0);
while (true) {
String msg = "Hello there" + i;
sampleActor.tell(msg, null);
System.out.println("Published message: " + msg);
i++;
// break;
Thread.sleep(100);
if (i == 20) {
Thread.sleep(10000);
System.exit(0);
}
}
}
}
public class AkkaWorker extends UntypedPersistentActor {
public AkkaWorker() {
}
@Override
public String persistenceId() {
return "sample-id-1";
}
@Override
public void onReceiveCommand(Object message) throws Exception {
System.out.println("In onReceiveCommand");
// TODO Auto-generated method stub
if (message instanceof String) {
message = (String) message;
System.out.println("Received message: " + message);
if (((String) message).equalsIgnoreCase("suicide")) {
System.out.println("killing self");
getContext().stop(getSelf());
}
Thread.sleep(1000);
}
}
@Override
public void onReceiveRecover(Object message) {
System.out.println("In onReceiveRecover");
if (message instanceof String) {
System.out.println(message);
} else {
System.out.println("God knows what: "+ message.toString());
}
}
}
In application.conf,
test-dispatcher {
type = Dispatcher
executor = "fork-join-executor"
fork-join-executor {
parallelism-min = 1
parallelism-factor = 1.0
parallelism-max = 1
}
throughput = 1
}
persistence {
journal {
max-message-batch-size = 1
leveldb {
dir = "/Users/neeraj/akka-persist/journal"
native = true
}
}
snapshot-store.local.dir = "/Users/neeraj/akka-persist/snapshot"
}

Why isn't the roster entry added on both sides?

user-a sends a subscription request to user-b. The subscription mode has been set to accept_all, and a packet listener has been registered for both users.
When user-a sends a request to user-b, this method is called:
private void searchUser(java.awt.event.ActionEvent evt) {
try {
String userToSearch = jTextField1.getText();
if(!xmppParamInit) {
initUXmppP();
xmppParamInit = true;
}
Presence subscribe = new Presence(Presence.Type.subscribe);
userToSearch += "@localhost";
subscribe.setTo(userToSearch);
ofConnection.sendPacket(subscribe); // Send the 'subscribe' packet
}catch(Exception exc) {
exc.printStackTrace();
}
}
Prior to this method, the following are called:
private void startPLThread() { // start packet-listener-thread
Runnable r = new Runnable() {
@Override
public void run() {
startPL();
}
};
new Thread(r,"packet listener thread").start();
}
private void startPL() {
PacketListener pListener = new PacketListener() {
@Override
public void processPacket(Packet packet) {
System.out.println("Inside process packet");
if(packet instanceof Presence) {
Presence presence = (Presence) packet;
Presence subscription = new Presence(Presence.Type.subscribe);
subscription.setTo(presence.getFrom());
System.out.println("presence.getFrom : " + presence.getFrom());
ofConnection.sendPacket(subscription);
}
}
};
PacketFilter pFilter = new PacketFilter() {
@Override
public boolean accept(Packet packet) {
return true;
}
};
ofConnection.addPacketListener(pListener, pFilter);
}
The problem is that user-a can see user-b in his roster, but user-b cannot see user-a in its roster. I do not understand the reason for this. What could be the problem?
The subscription mode is set to accept_all in this method, which is called from within searchUser:
private void initUXmppP() { // Initialize user-xmpp-parameters
Roster roster = ofConnection.getRoster();
roster.setSubscriptionMode(Roster.SubscriptionMode.accept_all);
}
It is a GUI application, and I tried keeping both users online.
