I am working on my application which sends data to zeromq. Below is what my application does:
I have a class SendToZeroMQ that sends data to ZeroMQ.
It adds the same data to a retryQueue in the same class so that it can be retried later if an acknowledgment is not received. The queue uses a Guava cache with a maximumSize limit.
A separate thread receives acknowledgements from ZeroMQ for the data that was sent earlier. If an acknowledgement is not received, SendToZeroMQ will retry sending that same piece of data; if it is received, we remove the record from retryQueue so that it cannot be retried again.
The idea is very simple, and I have to make sure my retry policy works fine so that I don't lose my data. Missing acknowledgements should be very rare, but I need to handle that case.
I am thinking of building two types of retry policies, but I am not able to work out how to build them for my program:
RetryNTimes: In this it will retry N times with a particular sleep between each retry and after that, it will drop the record.
ExponentialBackoffRetry: In this it will exponentially keep retrying. We can set some max retry limit and after that it won't retry and will drop the record.
Below is my SendToZeroMQ class, which sends data to ZeroMQ, retries every 30 seconds from a background thread, and starts the ResponsePoller runnable, which keeps running forever:
public class SendToZeroMQ {
private final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(5);
private final Cache<Long, byte[]> retryQueue =
CacheBuilder
.newBuilder()
.maximumSize(10000000)
.concurrencyLevel(200)
.removalListener(
RemovalListeners.asynchronous(new CustomListener(), executorService)).build();
private static class Holder {
private static final SendToZeroMQ INSTANCE = new SendToZeroMQ();
}
public static SendToZeroMQ getInstance() {
return Holder.INSTANCE;
}
private SendToZeroMQ() {
executorService.submit(new ResponsePoller());
// retry every 30 seconds for now
executorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
for (Entry<Long, byte[]> entry : retryQueue.asMap().entrySet()) {
sendTo(entry.getKey(), entry.getValue());
}
}
}, 0, 30, TimeUnit.SECONDS);
}
public boolean sendTo(final long address, final byte[] encodedRecords) {
Optional<ZMQSocketInfo> liveSockets = PoolManager.getInstance().getNextSocket();
if (!liveSockets.isPresent()) {
return false;
}
return sendTo(address, encodedRecords, liveSockets.get().getSocket());
}
public boolean sendTo(final long address, final byte[] encodedByteArray, final Socket socket) {
ZMsg msg = new ZMsg();
msg.add(encodedByteArray);
boolean sent = msg.send(socket);
msg.destroy();
// adding to retry queue
retryQueue.put(address, encodedByteArray);
return sent;
}
public void removeFromRetryQueue(final long address) {
retryQueue.invalidate(address);
}
}
Below is my ResponsePoller class, which polls all the acknowledgements from ZeroMQ. If we get an acknowledgement back from ZeroMQ, we remove that record from the retry queue so that it doesn't get retried; otherwise it will be retried.
public class ResponsePoller implements Runnable {
private static final Random random = new Random();
@Override
public void run() {
ZContext ctx = new ZContext();
Socket client = ctx.createSocket(ZMQ.PULL);
String identity = String.format("%04X-%04X", random.nextInt(), random.nextInt());
client.setIdentity(identity.getBytes(ZMQ.CHARSET));
client.bind("tcp://" + TestUtils.getIpaddress() + ":8076");
PollItem[] items = new PollItem[] {new PollItem(client, Poller.POLLIN)};
while (!Thread.currentThread().isInterrupted()) {
// Tick once per second, pulling in arriving messages
for (int centitick = 0; centitick < 100; centitick++) {
ZMQ.poll(items, 10);
if (items[0].isReadable()) {
ZMsg msg = ZMsg.recvMsg(client);
Iterator<ZFrame> it = msg.iterator();
while (it.hasNext()) {
ZFrame frame = it.next();
try {
long address = TestUtils.getAddress(frame.getData());
// remove from retry queue since we got the acknowledgment for this record
SendToZeroMQ.getInstance().removeFromRetryQueue(address);
} catch (Exception ex) {
// log error
} finally {
frame.destroy();
}
}
msg.destroy();
}
}
}
ctx.destroy();
}
}
Question:
As you can see above, I am sending encodedRecords to ZeroMQ using the SendToZeroMQ class, and each record then gets retried every 30 seconds depending on whether or not we got an acknowledgement back via the ResponsePoller class.
For each encodedRecords there is a unique key called address, and that's what we will get back from ZeroMQ as an acknowledgement.
How can I extend this example to build the two retry policies I mentioned above, so that I can pick which retry policy to use while sending data? I came up with the interface below, but I am not able to understand how I should move forward to implement those retry policies and use them in my code.
public interface RetryPolicy {
/**
* Called when an operation has failed for some reason. This method should return
* true to make another attempt.
*/
public boolean allowRetry(int retryCount, long elapsedTimeMs);
}
Can I use guava-retrying or failsafe here, because these libraries already have many retry policies which I can use?
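For reference, here is the kind of declarative policy guava-retrying can express (an untested sketch, assuming the com.github.rholder:guava-retrying artifact; note that a Retryer blocks the calling thread between attempts, which fits a synchronous send loop rather than my ack-driven retryQueue):

import java.util.concurrent.TimeUnit;
import com.github.rholder.retry.Retryer;
import com.github.rholder.retry.RetryerBuilder;
import com.github.rholder.retry.StopStrategies;
import com.github.rholder.retry.WaitStrategies;

public class GuavaRetryingSketch {
    public void sendWithRetries(final long address, final byte[] encodedRecords) throws Exception {
        // RetryNTimes: fixed sleep between attempts, stop after 10 attempts
        Retryer<Boolean> fixed = RetryerBuilder.<Boolean>newBuilder()
                .retryIfResult(sent -> sent == null || !sent) // retry while sendTo(...) returns false
                .withWaitStrategy(WaitStrategies.fixedWait(30, TimeUnit.SECONDS))
                .withStopStrategy(StopStrategies.stopAfterAttempt(10))
                .build();

        // ExponentialBackoffRetry: exponentially growing wait, capped, stop after 10 attempts
        Retryer<Boolean> exponential = RetryerBuilder.<Boolean>newBuilder()
                .retryIfResult(sent -> sent == null || !sent)
                .withWaitStrategy(WaitStrategies.exponentialWait(100, 5, TimeUnit.MINUTES))
                .withStopStrategy(StopStrategies.stopAfterAttempt(10))
                .build();

        // pick whichever policy fits the record ('fixed' would give RetryNTimes semantics)
        exponential.call(() -> SendToZeroMQ.getInstance().sendTo(address, encodedRecords));
    }
}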
I am not able to work out all the details regarding how to use the relevant APIs, but as for the algorithm, you could try the following (a sketch of the whole scheme follows the list):
the retry policy needs to have some sort of state attached to each message (at least the number of times the current message has been retried, possibly what the current delay is). You need to decide whether the RetryPolicy should keep that itself or whether you want to store it inside the message.
instead of allowRetry, you could have a method calculating when the next retry should occur (in absolute time or as a number of milliseconds in the future), which will be a function of the state mentioned above
the retry queue should contain information on when each message should be retried.
instead of using scheduleAtFixedRate, find the message in the retry queue which has the lowest when_is_next_retry (possibly by sorting on absolute retry-timestamp and picking the first), and let the executorService reschedule itself using schedule and the time_to_next_retry
for each retry, pull it from the retry queue, send the message, use the RetryPolicy for calculating when the next retry should be (if it is to be retried) and insert back into the retry queue with a new value for when_is_next_retry (if the RetryPolicy returns -1, it could mean that the message shall not be retried any more)
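Here is a rough, hypothetical sketch of that scheme (class and method names such as RetryScheduler, PendingMessage, and nextRetryDelayMs are mine, not from your code; a production version would also need removal on acknowledgement and guards against duplicate wake-ups):

import java.util.concurrent.Executors;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class RetryScheduler {
    interface RetryPolicy {
        /** Delay in ms before the given 1-based retry, or -1 to drop the message. */
        long nextRetryDelayMs(int retryCount);
    }

    static final class PendingMessage implements Comparable<PendingMessage> {
        final long address;
        final byte[] payload;
        final RetryPolicy policy;
        int retryCount;
        long nextRetryAtMs; // absolute timestamp of the next retry (when_is_next_retry)

        PendingMessage(long address, byte[] payload, RetryPolicy policy) {
            this.address = address;
            this.payload = payload;
            this.policy = policy;
        }

        @Override
        public int compareTo(PendingMessage o) {
            return Long.compare(nextRetryAtMs, o.nextRetryAtMs);
        }
    }

    // ordered by when_is_next_retry, so peek() is always the soonest retry
    private final PriorityBlockingQueue<PendingMessage> retryQueue = new PriorityBlockingQueue<>();
    private final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();

    /** Track a freshly sent message until an acknowledgement removes it. */
    void add(long address, byte[] payload, RetryPolicy policy) {
        PendingMessage msg = new PendingMessage(address, payload, policy);
        msg.retryCount = 1;
        msg.nextRetryAtMs = System.currentTimeMillis() + Math.max(0, policy.nextRetryDelayMs(1));
        retryQueue.offer(msg);
        scheduleNext();
    }

    // let the executorService reschedule itself using schedule() and time_to_next_retry
    private void scheduleNext() {
        PendingMessage head = retryQueue.peek();
        if (head != null) {
            long delay = Math.max(0, head.nextRetryAtMs - System.currentTimeMillis());
            executor.schedule(this::retryHead, delay, TimeUnit.MILLISECONDS);
        }
    }

    private void retryHead() {
        PendingMessage msg = retryQueue.poll();
        if (msg != null) {
            send(msg.address, msg.payload); // resend
            long delay = msg.policy.nextRetryDelayMs(++msg.retryCount);
            if (delay >= 0) { // -1 means: do not retry any more, drop the record
                msg.nextRetryAtMs = System.currentTimeMillis() + delay;
                retryQueue.offer(msg);
            }
        }
        scheduleNext();
    }

    private void send(long address, byte[] payload) {
        // hook up SendToZeroMQ.getInstance().sendTo(address, payload) here
    }
}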
Not a perfect approach, but it can be achieved in the following way as well.
public interface RetryPolicy {
public boolean allowRetry();
public void decreaseRetryCount();
}
Create two implementations. For RetryNTimes:
public class RetryNTimes implements RetryPolicy {
private int maxRetryCount;
public RetryNTimes(int maxRetryCount) {
this.maxRetryCount = maxRetryCount;
}
public boolean allowRetry() {
return maxRetryCount > 0;
}
public void decreaseRetryCount() {
maxRetryCount = maxRetryCount - 1;
}
}
For ExponentialBackoffRetry:
public class ExponentialBackoffRetry implements RetryPolicy {
private int maxRetryCount;
private final Date retryUpto;
public ExponentialBackoffRetry(int maxRetryCount, Date retryUpto) {
this.maxRetryCount = maxRetryCount;
this.retryUpto = retryUpto;
}
public boolean allowRetry() {
Date date = new Date();
if(maxRetryCount <= 0 || date.compareTo(retryUpto)>=0)
{
return false;
}
return true;
}
public void decreaseRetryCount() {
maxRetryCount = maxRetryCount-1;
}
}
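A quick illustration of how a caller might drive these two implementations (the actual send and acknowledgement handling are elided):

// RetryNTimes: at most 3 attempts
RetryPolicy nTimes = new RetryNTimes(3);
while (nTimes.allowRetry()) {
    nTimes.decreaseRetryCount();
    // attempt the send here; stop as soon as an acknowledgement arrives
}

// ExponentialBackoffRetry: at most 10 attempts, and never past a one-minute deadline
RetryPolicy backoff = new ExponentialBackoffRetry(10, new Date(System.currentTimeMillis() + 60_000));
if (backoff.allowRetry()) {
    backoff.decreaseRetryCount();
    // attempt the send here
}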
You need to make some changes in the SendToZeroMQ class:
public class SendToZeroMQ {
private final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(5);
private final Cache<Long,RetryMessage> retryQueue =
CacheBuilder
.newBuilder()
.maximumSize(10000000)
.concurrencyLevel(200)
.removalListener(
RemovalListeners.asynchronous(new CustomListener(), executorService)).build();
private static class Holder {
private static final SendToZeroMQ INSTANCE = new SendToZeroMQ();
}
public static SendToZeroMQ getInstance() {
return Holder.INSTANCE;
}
private SendToZeroMQ() {
executorService.submit(new ResponsePoller());
// retry every 30 seconds for now
executorService.scheduleAtFixedRate(new Runnable() {
public void run() {
for (Map.Entry<Long, RetryMessage> entry : retryQueue.asMap().entrySet()) {
RetryMessage retryMessage = entry.getValue();
if(retryMessage.getRetryPolicy().allowRetry())
{
retryMessage.getRetryPolicy().decreaseRetryCount();
entry.setValue(retryMessage);
sendTo(entry.getKey(), retryMessage.getMessage(),retryMessage);
} else {
retryQueue.asMap().remove(entry.getKey());
}
}
}
}, 0, 30, TimeUnit.SECONDS);
}
public boolean sendTo(final long address, final byte[] encodedRecords, RetryMessage retryMessage) {
Optional<ZMQSocketInfo> liveSockets = PoolManager.getInstance().getNextSocket();
if (!liveSockets.isPresent()) {
return false;
}
if(null==retryMessage)
{
RetryPolicy retryPolicy = new RetryNTimes(10);
retryMessage = new RetryMessage(retryPolicy,encodedRecords);
retryQueue.asMap().put(address,retryMessage);
}
return sendTo(address, encodedRecords, liveSockets.get().getSocket());
}
public boolean sendTo(final long address, final byte[] encodedByteArray, final ZMQ.Socket socket) {
ZMsg msg = new ZMsg();
msg.add(encodedByteArray);
boolean sent = msg.send(socket);
msg.destroy();
return sent;
}
public void removeFromRetryQueue(final long address) {
retryQueue.invalidate(address);
}
}
Here is a working little simulation of your environment that shows how this can be done. Note the Guava cache is the wrong data structure here, since you aren't interested in eviction (I think). So I'm using a concurrent hashmap:
package experimental;
import static java.util.concurrent.TimeUnit.MILLISECONDS;
import java.util.Arrays;
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;
class Experimental {
/** Return the desired backoff delay in millis for the given retry number, which is 1-based. */
interface RetryStrategy {
long getDelayMs(int retry);
}
enum ConstantBackoff implements RetryStrategy {
INSTANCE;
@Override
public long getDelayMs(int retry) {
return 1000L;
}
}
enum ExponentialBackoff implements RetryStrategy {
INSTANCE;
@Override
public long getDelayMs(int retry) {
return 100 + (1L << retry);
}
}
static class Sender {
private final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(4);
private final ConcurrentMap<Long, Retrier> pending = new ConcurrentHashMap<>();
/** Send the given data with given address on the given socket. */
void sendTo(long addr, byte[] data, int socket) {
System.err.println("Sending " + Arrays.toString(data) + "#" + addr + " on " + socket);
}
private class Retrier implements Runnable {
private final RetryStrategy retryStrategy;
private final long addr;
private final byte[] data;
private final int socket;
private int retry;
private Future<?> future;
Retrier(RetryStrategy retryStrategy, long addr, byte[] data, int socket) {
this.retryStrategy = retryStrategy;
this.addr = addr;
this.data = data;
this.socket = socket;
this.retry = 0;
}
synchronized void start() {
if (future == null) {
future = executorService.submit(this);
pending.put(addr, this);
}
}
synchronized void cancel() {
if (future != null) {
future.cancel(true);
future = null;
}
}
private synchronized void reschedule() {
if (future != null) {
future = executorService.schedule(this, retryStrategy.getDelayMs(++retry), MILLISECONDS);
}
}
@Override
public synchronized void run() {
sendTo(addr, data, socket);
reschedule();
}
}
long getVerifiedAddr() {
System.err.println("Pending messages: " + pending.size());
Iterator<Long> i = pending.keySet().iterator();
long addr = i.hasNext() ? i.next() : 0;
return addr;
}
class CancellationPoller implements Runnable {
@Override
public void run() {
while (!Thread.currentThread().isInterrupted()) {
try {
Thread.sleep(1000);
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
}
long addr = getVerifiedAddr();
if (addr == 0) {
continue;
}
System.err.println("Verified message (to be cancelled) " + addr);
Retrier retrier = pending.remove(addr);
if (retrier != null) {
retrier.cancel();
}
}
}
}
Sender initialize() {
executorService.submit(new CancellationPoller());
return this;
}
void sendWithRetriesTo(RetryStrategy retryStrategy, long addr, byte[] data, int socket) {
new Retrier(retryStrategy, addr, data, socket).start();
}
}
public static void main(String[] args) {
Sender sender = new Sender().initialize();
for (long i = 1; i <= 10; i++) {
sender.sendWithRetriesTo(ConstantBackoff.INSTANCE, i, null, 42);
}
for (long i = -1; i >= -10; i--) {
sender.sendWithRetriesTo(ExponentialBackoff.INSTANCE, i, null, 37);
}
}
}
You can use Apache Camel. It provides a component for ZeroMQ, and tools like the error handler, redelivery policy, and dead letter channel are natively provided.
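For illustration, a minimal, untested route sketch (it assumes the camel-zeromq component is on the classpath; the endpoint URI and its options are illustrative):

import org.apache.camel.builder.RouteBuilder;

public class ZeroMqRetryRoute extends RouteBuilder {
    @Override
    public void configure() {
        // retry up to 10 times with exponential backoff, then park the record in a dead letter queue
        errorHandler(deadLetterChannel("seda:deadRecords")
                .maximumRedeliveries(10)
                .redeliveryDelay(1000)
                .useExponentialBackOff()
                .backOffMultiplier(2));

        from("direct:records")
                .to("zeromq:tcp://127.0.0.1:8076?socketType=PUSH");
    }
}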
Related
When I run a producer to publish messages to my server, I've seen this:
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:384)
at io.netty.buffer.UnpooledUnsafeDirectByteBuf.setBytes(UnpooledUnsafeDirectByteBuf.java:447)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
I've searched all around and was told that this happens because the channel is closed.
But in my code, I only close my channel when my channel pool destroys the channel.
Here is my code:
public static class ChannelFactory implements PoolableObjectFactory<Channel> {
private final Bootstrap bootstrap;
private String host;
private int port;
public ChannelFactory(Bootstrap bootstrap, String host, int port) {
this.bootstrap = bootstrap;
this.host = host;
this.port = port;
}
@Override
public Channel makeObject() throws Exception {
System.out.println("Create new channel!!!");
bootstrap.validate();
return bootstrap.connect(host, port).channel();
}
@Override
public void destroyObject(Channel channel) throws Exception {
ChannelFuture close = channel.close();
if (close.isSuccess()) {
System.out.println(channel + " close successfully");
}
}
@Override
public boolean validateObject(Channel channel) {
System.out.println("Validate object");
return (channel.isOpen());
}
@Override
public void activateObject(Channel channel) throws Exception {
System.out.println(channel + " is activated");
}
@Override
public void passivateObject(Channel channel) throws Exception {
System.out.println(channel + " is passivated");
}
/**
* @return the host
*/
public String getHost() {
return host;
}
/**
* @param host the host to set
* @return
*/
public ChannelFactory setHost(String host) {
this.host = host;
return this;
}
/**
* @return the port
*/
public int getPort() {
return port;
}
/**
* @param port the port to set
* @return
*/
public ChannelFactory setPort(int port) {
this.port = port;
return this;
}
}
And here is my Runner:
public static class Runner implements Runnable {
private Channel channel;
private ButtyMessage message;
private MyChannelPool channelPool;
public Runner(MyChannelPool channelPool, Channel channel, ButtyMessage message) {
this.channel = channel;
this.message = message;
this.channelPool = channelPool;
}
@Override
public void run() {
channel.writeAndFlush(message.content()).syncUninterruptibly().addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
channelPool.returnObject(future.channel());
}
});
}
}
And my main:
public static void main(String[] args) throws InterruptedException {
final String host = "127.0.0.1";
final int port = 8080;
int jobSize = 100;
int jobNumber = 10000;
final Bootstrap b = func(host, port);
final MyChannelPool channelPool = new MyChannelPool(new ChannelFactory(b, host, port));
ExecutorService threadPool = Executors.newFixedThreadPool(1);
for (int i = 0; i < jobNumber; i++) {
try {
threadPool.execute(new Runner(channelPool, channelPool.borrowObject(), new ButtyMessage()));
} catch (Exception ex) {
System.out.println("ex = " + ex.getMessage());
}
}
}
With ButtyMessage extends ByteBufHolder.
In my Runner class, if I sleep(10) after writeAndFlush, it runs quite OK. But I don't want to rely on sleep, so I used a ChannelFutureListener, but the result is bad. If I send about 1,000 to 10,000 messages, it crashes and throws the exception above. Is there any way to avoid this?
Thanks all.
Sorry for my bad explanation and my English :)
You have several issues that could explain this. Most of them are related to wrong usage of asynchronous operations and future usage.
I don't know if it is linked to your issue but, if you really want to print when the channel is really closed, you have to wait on the future, since the future from close() (or any other operation) returns immediately, without waiting for the real close. Therefore your test if (close.isSuccess()) will always be false.
public void destroyObject(final Channel channel) throws Exception {
channel.close().addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture close) {
if (close.isSuccess()) {
System.out.println(channel + " close successfully");
}
}
});
}
However, as I suppose it is only for debugging purposes, it is not mandatory.
Another one: you send back to your pool a channel that is not yet connected (which could explain why your sleep(10) helps). You have to wait on the connect():
public Channel makeObject() throws Exception {
System.out.println("Create new channel!!!");
//bootstrap.validate(); // this is implicitly called in connect()
ChannelFuture future = bootstrap.connect(host, port).awaitUninterruptibly();
if (future.isSuccess()) {
return future.channel();
} else {
// do what you need to do when the connection is not done
}
}
Third one: validation of a connected channel might be better using isActive():
@Override
public boolean validateObject(Channel channel) {
System.out.println("Validate object");
return channel.isActive(); // instead of isOpen()
}
Fourth one: in your runner, you wrongly await on the future when you should not. You can remove your syncUninterruptibly() and leave the rest as is:
#Override
public void run() {
channel.writeAndFlush(message.content()).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
channelPool.returnObject(future.channel());
}
});
}
And finally, I suppose you know your test is completely sequential (1 thread in your pool), such that each client will reuse over and over the very same channel?
Could you try to change the 4 points to see if it corrects your issue?
EDIT: after the requester's comment
For syncUninterruptibly(), I did not read carefully. If you want to block on the write, then you don't need the extra addListener, since the future is done once the sync is over. So you can directly call channelPool.returnObject as the next command right after your sync.
So you should write it this way, simpler.
@Override
public void run() {
channel.writeAndFlush(message.content()).syncUninterruptibly();
channelPool.returnObject(channel);
}
For fireChannelActive, it will be called as soon as the connect is finished (so from makeObject, at some point in the future). Moreover, once disconnected (as you noticed in your exception), the channel is no longer usable and must be recreated from scratch. So I would still suggest using isActive(), such that, if the channel is not active, it will be removed using destroyObject...
Take a look at the channel state model here.
Finally, I've found a solution for myself, but I'm still thinking about other solutions. (This solution is copied almost exactly from the Netty 4.0.28 release notes.)
final String host = "127.0.0.1";
final int port = 8080;
int jobNumber = 100000;
final EventLoopGroup group = new NioEventLoopGroup(100);
ChannelPoolMap<InetSocketAddress, MyChannelPool> poolMap = new AbstractChannelPoolMap<InetSocketAddress, MyChannelPool>() {
@Override
protected MyChannelPool newPool(InetSocketAddress key) {
Bootstrap bootstrap = func(group, key.getHostName(), key.getPort());
return new MyChannelPool(bootstrap, new _AbstractChannelPoolHandler());
}
};
ChannelPoolMap<InetSocketAddress, FixedChannelPool> poolMap1 = new AbstractChannelPoolMap<InetSocketAddress, FixedChannelPool>() {
@Override
protected FixedChannelPool newPool(InetSocketAddress key) {
Bootstrap bootstrap = func(group, key.getHostName(), key.getPort());
return new FixedChannelPool(bootstrap, new _AbstractChannelPoolHandler(), 10);
}
};
final ChannelPool myChannelPool = poolMap.get(new InetSocketAddress(host, port));
final CountDownLatch latch = new CountDownLatch(jobNumber);
for (int i = 0; i < jobNumber; i++) {
final int counter = i;
final Future<Channel> future = myChannelPool.acquire();
future.addListener(new FutureListener<Channel>() {
@Override
public void operationComplete(Future<Channel> f) {
if (f.isSuccess()) {
Channel ch = f.getNow();
// Do somethings
ch.writeAndFlush(new ButtyMessage().content()).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (future.isSuccess()) {
System.out.println("counter = " + counter);
System.out.println("future = " + future.channel());
latch.countDown();
}
}
});
// Release back to pool
myChannelPool.release(ch);
} else {
System.out.println(f.cause().getMessage());
f.cause().printStackTrace();
}
}
});
}
try {
latch.await();
System.exit(0);
} catch (InterruptedException ex) {
System.out.println("ex = " + ex.getMessage());
}
As you can see, I use SimpleChannelPool and FixedChannelPool (an implementation of SimpleChannelPool provided by netty).
What it can do:
SimpleChannelPool: opens as many channels as it needs ---> if you have 100,000 messages, that of course causes errors. Many sockets are opened, and then IOException: too many open files occurs. (Is that really a pool? Creating as many channels as possible and then throwing an exception? I don't call that pooling.)
FixedChannelPool: doesn't work in my case (still studying why; sorry for my ignorance).
Indeed, I want to use an ObjectPool instead, and I may post it as soon as I finish. Thanks @Frederic Brégier for helping me so much!
I'm trying to do a multi-get on my Redis data store, which is distributed across multiple shards. However, the keys I want to fetch do not belong to the same shard, so I can't use Redis' built-in multi-get.
Instead I'm trying to use futures to achieve this. But after checking the lookup times it almost seems like these cache calls are being made serially.
The request/sec on the server is about 1.5k with an average of 10 ms response time. Literature I've read told me that my threadpool size should be requests/sec * response time. Since I'm spawning 3 threads this becomes 1500 * 0.010 * 3 = 45. I've tried using threadpool sizes of 50,100,300. But this hasn't helped either.
I'm using Jedis as a client. I thought it could be an issue with exceeding Jedis' max total/idle connection limit. But even after increasing this from 8 to 24 I see no difference in lookup times.
I understand that some overhead will be there since there will be context switches and the overhead of spawning new threads.
Can anyone help me figure out what I'm missing? Let me know if you need more info.
for(String recordKey : pidArr) {
//Adding futures. Max 3
if(count >= 3) {
break;
}
count++;
Callable<String> a = new FeedCacheCaller(recordKey);
Future<String> future = feedThreadPool.submit(a);
futureList.add(future);
}
//Getting the data from the futures
for(Future<String> foo : futureList) {
try {
String data = foo.get();
logger.debug(data);
feedDataList.add(parseInfo(data));
} catch (Exception e) {
logger.error("somethings going wrong in retrieval",e);
}
}
Here's the Callable class
public class FeedCacheCaller implements Callable<String> {
String pid = null;
FeedCache feedCache;
public FeedCacheCaller(String pid) {
this.pid = pid;
this.feedCache = new FeedCache();
}
@Override
public String call() throws Exception {
return feedCache.get(pid);
}
}
Edit 1:
Here's the Jedis side code.
public class FeedCache {
private ShardedJedisPool feedClient = RedisPool.getPool("feed");
public String get(String key) {
ShardedJedis client = null;
String value = null;
try {
client = feedClient.getResource();
byte[] valueByteArray = client.get(key.getBytes(Constants.CHARSET));
if (valueByteArray != null) {
value = new String(CacheUtils.decompress(valueByteArray),
Constants.CHARSET);
}
} catch (JedisConnectionException e) {
if (client != null) {
feedClient.returnBrokenResource(client);
client = null;
}
logger.error(e.getMessage());
} finally {
if (client != null) {
feedClient.returnResource(client);
}
}
return value;
}
}
Here is the code that initializes the ShardedJedisPool
public class RedisPool {
private static final Logger logger = LoggerFactory.getLogger(
RedisPool.class);
private static ConcurrentHashMap<String, ShardedJedisPool> redisPools = new ConcurrentHashMap<String, ShardedJedisPool>();
public static void initializePool(String poolName) {
List<JedisShardInfo> shards = new ArrayList<JedisShardInfo>();
ArrayList<String> servers = new ArrayList<String>(Arrays.asList(
Constants.config.getStringArray(
poolName + "_redis_servers")));
for (int i = 0; i < servers.size(); i++) {
JedisShardInfo shardInfo = new JedisShardInfo(servers.get(i).split(":")[0], Integer.parseInt(servers.get(i).split(":")[1]));
shards.add(shardInfo);
}
redisPools.putIfAbsent(poolName,
new ShardedJedisPool(new GenericObjectPoolConfig(), shards));
}
public static ShardedJedisPool getPool(String poolName) {
if (!redisPools.containsKey(poolName)) {
synchronized (RedisPool.class) {
if (!redisPools.containsKey(poolName)) {
initializePool(poolName);
}
}
}
return redisPools.get(poolName);
}
public static void shutdown(String poolName) {
ShardedJedisPool pool = getPool(poolName);
pool.destroy();
redisPools.remove(poolName);
}
public static void main(String args[]) {
initializePool("vizidtoud");
}
}
I'm trying to run the following code, but the status variable is always "PENDING". Could you please tell me what I am doing wrong?
Job execute = bigquery.jobs().insert(PROJECT_ID, runJob).execute();
String status = "PENDING";
while(status.equalsIgnoreCase("PENDING")) {
status = execute.getStatus().getState();
System.out.println("Status: " + status);
Thread.sleep(1000);
}
Your code isn't making a request to BigQuery to get the updated state, it's just checking the state of the Job returned by the insert call.
Instead, you should poll for the state of the job by issuing a jobs.get request, and check that state, e.g.:
Job job = bigquery.jobs().insert(PROJECT_ID, runJob).execute();
String status = job.getStatus().getState();
while(!status.equalsIgnoreCase("DONE")) {
status = bigquery.jobs().get(PROJECT_ID, job.getId()).execute().getStatus().getState();
System.out.println("Status: " + status);
Thread.sleep(1000);
}
*Edited based on Jordan Tigani's comment.
I have realized that looping until the status is "DONE" might not surface the error at all times. Sometimes the error can only be caught after the job is in the "DONE" state; i.e., for some errors the job goes from "PENDING" to "DONE", skipping the "RUNNING" stage. Therefore, it is a good idea to check the error field in the job status even after the job is "DONE".
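For example, a minimal check along those lines, reusing the client API from the answer above, might look like:

Job done = bigquery.jobs().get(PROJECT_ID, job.getId()).execute();
if ("DONE".equalsIgnoreCase(done.getStatus().getState())) {
    // the job can be DONE and still carry an error
    ErrorProto error = done.getStatus().getErrorResult();
    if (error != null) {
        System.err.println("Job finished with error: " + error.getMessage());
    }
}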
Rather than have a busy-wait loop synchronously blocking the thread running the insert, I've gone with a scheduled thread that maintains a queue of job ids. It loops through the jobs and checks their status, logging errors when discovered.
The crucial bits here are:
Schedule a thread to monitor jobs
jobPollScheduler.scheduleAtFixedRate(new JobPoll(), SCHEDULE_SECONDS, SCHEDULE_SECONDS, TimeUnit.SECONDS);
Loop through a queue of jobs and check their progress, re-queueing anything that isn't DONE:
while ((job = jobs.poll()) != null) {
final Job statusJob = bigQuery.jobs().get(projectId, job.jobId).execute();
if ("DONE".equals(statusJob.getStatus().getState())) {
final ErrorProto errorResult = statusJob.getStatus().getErrorResult();
if (errorResult == null || errorResult.toString() == null) {
logger.debug("status={}, job={}", statusJob.getStatus().getState(), job);
} else {
logger.error("status={}, errorResult={}, job={}", statusJob.getStatus().getState(), errorResult, job);
}
} else {
// job isn't done, yet. Add it back to queue.
add(job.jobId);
logger.debug("will check again, status={}, job={}", statusJob.getStatus().getState(), job);
}
}
The full working set of classes
import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.model.ErrorProto;
import com.google.api.services.bigquery.model.Job;
import com.google.common.primitives.Longs;
import com.google.common.util.concurrent.ThreadFactoryBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Objects;
import java.util.Queue;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;
import javax.annotation.Nonnull;
/**
* Monitor BigQuery inserts
*/
public class BigQueryMonitorSo21064586 {
private static final Logger logger = LoggerFactory.getLogger(BigQueryMonitorSo21064586.class);
private static final int SCHEDULE_SECONDS = 5;
private final ScheduledExecutorService jobPollScheduler =
Executors.newSingleThreadScheduledExecutor(new ThreadFactoryBuilder().setNameFormat("big-query-monitory-%d").build());
private final Queue<DelayedJobCheck> jobs = new DelayQueue<>();
private final Supplier<Bigquery> connectionSupplier;
private final String projectId;
/**
* @param connectionSupplier gives us a connection to BigQuery
* @param projectId Google cloud project
*/
public BigQueryMonitorSo21064586(@Nonnull final Supplier<Bigquery> connectionSupplier, @Nonnull final String projectId) {
this.connectionSupplier = connectionSupplier;
this.projectId = projectId;
}
public BigQueryMonitorSo21064586 start() {
jobPollScheduler.scheduleAtFixedRate(new JobPoll(), SCHEDULE_SECONDS, SCHEDULE_SECONDS, TimeUnit.SECONDS);
return this;
}
/**
* @param jobId insert query job id
*/
public void add(final String jobId) {
final DelayedJobCheck job = new DelayedJobCheck(jobId);
try {
if (!jobs.offer(job)) {
logger.error("could not enqueue BigQuery job, job={}", job);
}
} catch (final Exception e) {
logger.error("failed to add job to queue, job={}", job, e);
}
}
public void shutdown() {
jobPollScheduler.shutdown();
}
private class JobPoll implements Runnable {
/**
* go through the queue and remove anything that is done
*/
@Override
public void run() {
try {
final Bigquery bigQuery = connectionSupplier.get();
DelayedJobCheck job;
while ((job = jobs.poll()) != null) {
final Job statusJob = bigQuery.jobs().get(projectId, job.jobId).execute();
if ("DONE".equals(statusJob.getStatus().getState())) {
final ErrorProto errorResult = statusJob.getStatus().getErrorResult();
if (errorResult == null || errorResult.toString() == null) {
logger.debug("status={}, job={}", statusJob.getStatus().getState(), job);
} else {
logger.error("status={}, errorResult={}, job={}", statusJob.getStatus().getState(), errorResult, job);
}
} else {
// job isn't done, yet. Add it back to queue.
add(job.jobId);
logger.debug("will check again, status={}, job={}", statusJob.getStatus().getState(), job);
}
}
} catch (final Exception e) {
logger.error("exception monitoring big query status, size={}", jobs.size(), e);
}
}
}
private static class DelayedJobCheck extends DelayedImpl {
private final String jobId;
DelayedJobCheck(final String jobId) {
super(SCHEDULE_SECONDS, TimeUnit.SECONDS);
this.jobId = jobId;
}
@Override
public boolean equals(final Object obj) {
if (this == obj) {
return true;
}
if (obj == null || getClass() != obj.getClass()) {
return false;
}
if (!super.equals(obj)) {
return false;
}
final DelayedJobCheck other = (DelayedJobCheck) obj;
return Objects.equals(jobId, other.jobId);
}
@Override
public int hashCode() {
return Objects.hash(super.hashCode(), jobId);
}
}
private static class DelayedImpl implements Delayed {
/**
* timestamp when delay expires
*/
private final long expiry;
/**
* @param amount how long the delay should be
* @param timeUnit units of the delay
*/
DelayedImpl(final long amount, final TimeUnit timeUnit) {
final long more = TimeUnit.MILLISECONDS.convert(amount, timeUnit);
expiry = System.currentTimeMillis() + more;
}
@Override
public long getDelay(@Nonnull final TimeUnit unit) {
final long diff = expiry - System.currentTimeMillis();
return unit.convert(diff, TimeUnit.MILLISECONDS);
}
@Override
public int compareTo(@Nonnull final Delayed o) {
return Longs.compare(expiry, ((DelayedImpl) o).expiry);
}
@Override
public boolean equals(final Object obj) {
if (this == obj) {
return true;
}
if (!(obj instanceof DelayedImpl)) {
return false;
}
final DelayedImpl delayed = (DelayedImpl) obj;
return expiry == delayed.expiry;
}
@Override
public int hashCode() {
return Objects.hash(expiry);
}
}
}
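Hypothetical wiring of the monitor; createBigQueryClient() stands in for however you build your Bigquery client:

BigQueryMonitorSo21064586 monitor =
        new BigQueryMonitorSo21064586(() -> createBigQueryClient(), "my-project").start();

// after submitting an insert job, hand its id to the monitor
Job job = createBigQueryClient().jobs().insert("my-project", runJob).execute();
monitor.add(job.getJobReference().getJobId());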
I am trying to create a continuous thread where a server receives/sends messages from a client; however, when I try to check for a next element, it gets stuck:
public void run()
{
try
{
try
{
ArrayList<Socket> connections = parent.getConnections();
in = new Scanner(socket.getInputStream());
while(true)
{
if(in.hasNextLine()) // Gets stuck here
{
String message = in.nextLine();
System.out.println("Client said " + message);
}
}
}
finally
{
socket.close();
}
}
catch(Exception e)
{
e.printStackTrace();
}
}
How do I make the loop not get stuck at the specified point?
Assuming you want to be able to deal with 'lines', I'd probably start with something like this:
public class SocketReader implements Runnable {
private final InputStream stream;
private final Queue<String> destination;
private volatile boolean active = true;
private SocketReader(InputStream stream, Queue<String> destination) {
this.stream = stream;
this.destination = destination;
}
public static SocketReader getReader(Socket toRead, Queue<String> destination) throws IOException {
return new SocketReader(toRead.getInputStream(), destination);
}
public void shutdown() {
active = false;
}
public void run() {
// InputStream has no line-based methods, so wrap it in a Scanner
Scanner scanner = new Scanner(stream);
while(active) {
if (scanner.hasNextLine() && active) {
final String line = scanner.nextLine();
destination.add(line);
}
}
scanner.close(); // also closes the underlying stream
}
}
Drop this into its own thread (or as part of a thread or executor pool, really), and you've made the rest of your application non-blocking with regards to this code. EXPECT this to block while waiting for updates from hasNextLine(). You can even supply a BlockingQueue if you don't wish to actively poll a queue, but are handling updates in some other fashion.
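Hypothetical usage, pairing the reader with a BlockingQueue so the consuming side blocks instead of polling:

BlockingQueue<String> lines = new LinkedBlockingQueue<>();
SocketReader reader = SocketReader.getReader(socket, lines);
new Thread(reader, "socket-reader").start();

// elsewhere, handle lines as they arrive without blocking the reader thread
String message = lines.take(); // blocks until the reader enqueues a line
System.out.println("Client said " + message);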
You can then do something like this for output:
public class QueuedPrinter implements Runnable {
private final Queue<String> input;
private final PrintStream destination;
private volatile boolean active = true;
public QueuedPrinter(Queue<String> input, PrintStream destination) {
this.input = input;
this.destination = destination;
}
public void shutdown() {
active = false;
}
public void run() {
while(active) {
final String line = input.poll();
if (line != null && active) {
destination.println(line);
}
}
}
}
Please note that I haven't tested this, and you may have to adjust things slightly for other checked exceptions. You probably need to put in additional error-checking code (null handling comes to mind). Also, this isn't completely threadsafe, but it is likely to be 'good enough' for most uses.
I have a situation where I wrote a simple producer-consumer model for reading in chunks of data from Bluetooth; every 10k bytes I write that to a file. I used a standard P-C model with a Vector as my message holder. How do I change this so that multiple consumer threads can read the same messages (I think the term would be multicast)? I am actually using this on an Android phone, so JMS is probably not an option.
static final int MAXQUEUE = 50000;
private Vector<byte[]> messages = new Vector<byte[]>();
/**
* Put the message in the queue for the Consumer Thread
*/
private synchronized void putMessage(byte[] send) throws InterruptedException {
while ( messages.size() == MAXQUEUE )
wait();
messages.addElement( send );
notify();
}
/**
* This method is called by the consumer to see if any messages in the queue
*/
public synchronized byte[] getMessage()throws InterruptedException {
notify();
while ( messages.size() == 0 && !Thread.interrupted()) {
wait(1);
}
byte[] message = messages.firstElement();
messages.removeElement( message );
return message;
}
I am referencing code from the Message Parser section of an O'Reilly book.
A pub-sub mechanism is definitely the way to achieve what you want. I am not sure why developing for Android would restrict you from using JMS, which is as simple a spec as it gets. Check out
this thread on SO.
You should definitely use a queue instead of the Vector!
Give every thread its own queue and, when a new message is received, add() the new message to every thread's queue. For flexibility, a listener pattern may be useful, too.
Edit:
Ok, I feel I should add an example, too:
(Classical observer pattern)
This is the interface, all consumers must implement:
public interface MessageListener {
public void newMessage( byte[] message );
}
A producer might look like this:
public class Producer {
Collection<MessageListener> listeners = new ArrayList<MessageListener>();
// Allow interested parties to register for new messages
public void addListener( MessageListener listener ) {
this.listeners.add( listener );
}
public void removeListener( Object listener ) {
this.listeners.remove( listener );
}
protected void produceMessages() {
byte[] msg = new byte[10];
// Create message and put into msg
// Tell all registered listeners about the new message:
for ( MessageListener l : this.listeners ) {
l.newMessage( msg );
}
}
}
And a consumer class could be (using a blocking queue which does all that wait()ing and notify()ing for us):
public class Consumer implements MessageListener {
BlockingQueue< byte[] > queue = new LinkedBlockingQueue< byte[] >();
// This implements the MessageListener interface:
@Override
public void newMessage( byte[] message ) {
try {
queue.put( message );
} catch (InterruptedException e) {
// won't happen.
}
}
// Execute in another thread:
protected void handleMessages() throws InterruptedException {
while ( true ) {
byte[] newMessage = queue.take();
// handle the new message.
}
}
}
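Putting the two together could look like this (a sketch; produceMessages() and handleMessages() are protected, so it assumes same-package access):

Producer producer = new Producer();
Consumer consumerA = new Consumer();
Consumer consumerB = new Consumer();
producer.addListener(consumerA); // both consumers now see every message
producer.addListener(consumerB);

// drain each consumer's queue on its own thread
new Thread(() -> {
    try {
        consumerA.handleMessages();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}).start();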
This is what I came up with as an example when digging through some code and modifying some existing examples.
package test.messaging;
import java.util.ArrayList;
import java.util.concurrent.LinkedBlockingQueue;
public class TestProducerConsumers {
static Broker broker;
public TestProducerConsumers(int maxSize) {
broker = new Broker(maxSize);
Producer p = new Producer();
Consumer c1 = new Consumer("One");
broker.consumers.add(c1);
c1.start();
Consumer c2 = new Consumer("Two");
broker.consumers.add(c2);
c2.start();
p.start();
}
// Test Producer, use your own message producer on a thread to call up
// broker.insert() possibly passing it the message instead.
class Producer extends Thread {
@Override
public void run() {
while (true) {
try {
broker.insert();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
class Consumer extends Thread {
String myName;
LinkedBlockingQueue<String> queue;
Consumer(String m) {
this.myName = m;
queue = new LinkedBlockingQueue<String>();
}
@Override
public void run() {
while(!Thread.interrupted()) {
try {
// take() blocks until a message arrives, avoiding the busy-wait
System.out.println("" + myName + " Consumer: " + queue.take());
} catch (InterruptedException e) {
Thread.currentThread().interrupt(); // restore the flag so the loop exits
}
}
}
}
class Broker {
public ArrayList<Consumer> consumers = new ArrayList<Consumer>();
int n;
int maxSize;
public Broker(int maxSize) {
n = 0;
this.maxSize = maxSize;
}
synchronized void insert() throws InterruptedException {
// only here for testing don't want it to runaway and
//memory leak, only testing first 100 samples.
if (n == maxSize)
wait();
System.out.println("Producer: " + n++);
for (Consumer c : consumers) {
c.queue.add("Message " + n);
}
}
}
public static void main(String[] args) {
TestProducerConsumers pc = new TestProducerConsumers(100);
}
}