How to confirm RabbitMQ messages with Java?

I have tried to figure out how to confirm messages in Java, but I haven't understood it yet.
Here is the official RabbitMQ example:
http://hg.rabbitmq.com/rabbitmq-java-client/file/default/test/src/com/rabbitmq/examples/ConfirmDontLoseMessages.java
The problem is that they publish 10,000 messages to a queue and only afterwards wait until all of them have been confirmed. I need to send exactly one message per thread and confirm it (in my case I have several identical publishers that have to send messages from time to time). How do I confirm a single message (rather than all messages at once)?
I need something like:
for (long i = 0; i < MSG_COUNT; ++i) {
    ch.basicPublish("", QUEUE_NAME,
            MessageProperties.PERSISTENT_BASIC,
            "nop".getBytes());
    ch.wait_for_confirm();
    if (ch.isConfirmed) {
        // OK
    } else {
        // Republish
    }
}

Read this post:
http://www.rabbitmq.com/blog/2011/02/10/introducing-publisher-confirms/
In short, you can use tx transactions:
ch.txSelect();
for (int i = 0; i < MSG_COUNT; ++i) {
    ch.basicPublish("", QUEUE_NAME,
            MessageProperties.PERSISTENT_BASIC,
            "nop".getBytes());
    ch.txCommit();
}
or the publisher-confirm listener:
ch.addConfirmListener(new ConfirmListener() { .... });
The first one is easier but slower than the second one.
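If you want to block on one message at a time, the Java client also lets you combine confirm mode with waitForConfirms(). Below is a minimal sketch, assuming a recent amqp-client (4.x or later, where Connection and Channel are AutoCloseable); the queue name and the 5-second timeout are made up for illustration, and a timeout or nack simply triggers a republish, so delivery is at-least-once and the consumer must tolerate duplicates:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

import java.util.concurrent.TimeoutException;

public class SingleConfirmPublisher {

    private static final String QUEUE_NAME = "confirm-test"; // illustrative name

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {
            ch.queueDeclare(QUEUE_NAME, true, false, false, null);
            ch.confirmSelect(); // put the channel into confirm mode once

            boolean confirmed = false;
            while (!confirmed) {
                ch.basicPublish("", QUEUE_NAME,
                        MessageProperties.PERSISTENT_BASIC,
                        "nop".getBytes());
                try {
                    // blocks until the broker has acked (or nacked) everything
                    // published on this channel since the last call
                    confirmed = ch.waitForConfirms(5000);
                } catch (TimeoutException e) {
                    confirmed = false; // no answer in time -> republish
                }
            }
        }
    }
}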

You need to use an acknowledgement for each message; you can check the link below:
https://www.rabbitmq.com/confirms.html

Related

Nearly no performance gain between single and multiple consumers using LMAX Disruptor / how to decode many UDP packets properly

I have to transfer large files (up to 10 GB) using UDP. Unfortunately, TCP cannot be used in this case because no bidirectional communication between sender and receiver is possible.
Sending a file is not the problem. I have written the client using Netty. It reads the file, encodes it (unique ID, position in the stream and so on) and sends it to the destination at a configurable rate (packets per second). All the packets are received at the destination; I have used iptables and Wireshark to verify that.
The problem is on the receiving side. Receiving up to 90K packets a second works fine, but receiving and decoding them at that rate is not possible using a single thread.
My first approach was to use thread-safe queues (one producer and multiple consumers), but using multiple consumers did not lead to better results; some packets were still lost. It seems that the overhead of locking and unlocking the queue slows down the process. So I decided to use the LMAX Disruptor with a single producer (receiving the UDP datagrams) and multiple consumers (decoding the packets). Surprisingly, this does not lead to success either: there is hardly any speed advantage in using two Disruptor consumers, and I wonder why.
This is the main part that receives the UDP packets and feeds the Disruptor:
public void receiveUdpStream(DatagramChannel channel) {
    boolean exit = false;
    // the size of the UDP datagram
    int size = shareddata.cr.getDatagramsize();
    // the number of decoders (configurable)
    int nn_decoders = shareddata.cr.getDecoders();
    Udp2flowEventFactory factory = new Udp2flowEventFactory(size);
    // the size of the ring buffer
    int bufferSize = 1 << 10;
    Disruptor<Udp2flowEvent> disruptor = new Disruptor<>(
            factory,
            bufferSize,
            DaemonThreadFactory.INSTANCE,
            ProducerType.SINGLE,
            new YieldingWaitStrategy());
    // my consumers
    Udp2flowDecoder[] decoder = new Udp2flowDecoder[nn_decoders];
    for (int i = 0; i < nn_decoders; i++) {
        decoder[i] = new Udp2flowDecoder(i, shareddata);
    }
    disruptor.handleEventsWith(decoder);
    RingBuffer<Udp2flowEvent> ringBuffer = disruptor.getRingBuffer();
    Udp2flowProducer producer = new Udp2flowProducer(ringBuffer);
    disruptor.start();
    while (!exit) {
        try {
            ByteBuffer buf = ByteBuffer.allocate(size);
            channel.receive(buf);
            receivedDatagrams++; // counting the received packets
            buf.flip();
            producer.onData(buf);
        } catch (Exception e) {
            logger.debug("got exception " + e);
            exit = true;
        }
    }
}
My LMAX event is simple...
public class Udp2flowEvent {
    ByteBuffer buf;

    Udp2flowEvent(int size) {
        this.buf = ByteBuffer.allocateDirect(size);
    }

    public void set(ByteBuffer buf) {
        this.buf = buf;
    }

    public ByteBuffer getEvent() {
        return this.buf;
    }
}
And this is my factory
public class Udp2flowEventFactory implements EventFactory<Udp2flowEvent> {
    private int size;

    Udp2flowEventFactory(int size) {
        super();
        this.size = size;
    }

    public Udp2flowEvent newInstance() {
        return new Udp2flowEvent(size);
    }
}
The producer ...
public class Udp2flowProducer {
    private final RingBuffer<Udp2flowEvent> ringBuffer;

    public Udp2flowProducer(RingBuffer<Udp2flowEvent> ringBuffer) {
        this.ringBuffer = ringBuffer;
    }

    public void onData(ByteBuffer buf) {
        long sequence = ringBuffer.next(); // grab the next sequence
        try {
            Udp2flowEvent event = ringBuffer.get(sequence);
            event.set(buf);
        } finally {
            ringBuffer.publish(sequence);
        }
    }
}
The interesting but very simple part is the decoder. It looks like this.
public void onEvent(Udp2flowEvent event, long sequence, boolean endOfBatch) {
    // each consumer decodes its packets
    if (sequence % nn_decoders != decoderid) {
        return;
    }
    ByteBuffer buf = event.getEvent();
    event = null; // is it faster to null the event?
    shareddata.increaseReceiveddatagrams();
    // headertype
    // some code omitted. But the code looks something like this
    final int headertype = buf.getInt();
    final int headerlength = buf.getInt();
    final long payloadlength = buf.getLong();
    // decoding int and longs works fine.
    // but decoding the remaining part not!
    byte[] payload = new byte[buf.remaining()];
    buf.get(payload);
    // some code omitted. The payload is used later on...
}
And here are some interesting facts:
All decoders work; I can see the configured number of decoders running.
All packets are received, but decoding takes too long. More precisely: decoding the first two ints and the long value works fine, but decoding the payload takes too long. This leads to backpressure and some packets are lost.
Fun fact: the code works fine on my MacBook Air but not on my server (MacBook: Core i7; server: ESXi with 8 virtual cores on a Xeon @ 2.6 GHz and no other load).
Now my questions, and I hope that somebody has an idea:
Why does it hardly make a difference to use several consumers? The difference is only 5%.
In general: what is the best way to receive 60K (or more) UDP packets per second and decode them? I tried Netty as the receiver, but UDP does not scale very well.
Why is decoding the payload so slow?
Are there any errors that I have overlooked?
Should I use another producer/consumer library? LMAX has very low latency, but what about throughput?
Ring buffers don't seem like the right technology for this problem: when a ring buffer has used up all of its capacity it will block, and it is an inherently sequential architecture. You need to know in advance the highest number of packets to expect and size the buffer for that. Also, UDP is lossy unless you implement a message-assurance protocol on top of it.
Not sure why you say TCP is not bidirectional; it is, and it takes care of lost packets.
To cope with the data flood, you may need to distribute the incoming packets to separate servers if a single one is insufficient. A queue should work to absorb a flood of data, and you may need a large number of decoders waiting if you want to process this volume of data in near real time.
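To illustrate that "queue that absorbs a flood of data" idea, here is a minimal sketch using only java.util.concurrent. The class name, the 64K capacity and the decode() body are made up for illustration; the single receiver thread calls submit() for every datagram, and a return value of false tells you the decoders can no longer keep up:

import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class QueuedDecoderPool {

    // bounded queue: absorbs bursts; offer() signals when we are overrun
    private final BlockingQueue<ByteBuffer> queue = new ArrayBlockingQueue<>(64 * 1024);
    private final ExecutorService decoders;

    public QueuedDecoderPool(int nDecoders) {
        decoders = Executors.newFixedThreadPool(nDecoders);
        for (int i = 0; i < nDecoders; i++) {
            decoders.execute(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        decode(queue.take()); // blocks until a datagram is available
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
    }

    /** Called from the single receiver thread, once per received datagram. */
    public boolean submit(ByteBuffer datagram) {
        return queue.offer(datagram); // false == queue full, decoders can't keep up
    }

    private void decode(ByteBuffer buf) {
        final int headerType = buf.getInt();
        final int headerLength = buf.getInt();
        final long payloadLength = buf.getLong();
        byte[] payload = new byte[buf.remaining()];
        buf.get(payload);
        // hand the payload over to the reassembly logic here
    }
}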
Suggest you use TCP.

How to avoid log spamming by logging every few seconds in Java

I am trying to find out how I can log a debug message only every few seconds, to avoid log spamming.
Say I have the function below.
public void doSomething() {
    // log is a logger object from org.slf4j
    log.debug("doSomething: Enter");
    // do some task
    log.debug("doSomething: Exit");
    return;
}
This function gets called 100 times in a loop
for (int i = 0; i < 100; i++) {
    doSomething();
    Thread.sleep(100); // sleep for 100 milliseconds
}
I do not want the log message to get printed 100 times. I want it to get printed every second or something like that.
Is there some way I can control this? I can think of passing the iteration index i to doSomething() and printing the log only on a certain iteration.
Something like,
public void doSomething(int i) {
    if (i == 25) {
        // log is a logger object from org.slf4j
        log.debug("doSomething: Enter");
    }
    // do some task
    if (i == 25) {
        // log is a logger object from org.slf4j
        log.debug("doSomething: Exit");
    }
    return;
}

for (int i = 0; i < 100; i++) {
    doSomething(i);
    Thread.sleep(100); // sleep for 100 milliseconds
}
Is there a better way to do this? Thanks!
I am not sure what kind of logging framework you are using. If you happen to use Log4j 2, I guess you can somehow configure the latency using the system property AsyncLogger.WaitStrategy. Have a look at https://logging.apache.org/log4j/log4j-2.3/manual/async.html.
When I had the same problem, I developed a throttled logger which initially behaves like a normal logger, but after logging n messages it skips all further messages until some time has passed; after each second, a few more messages are allowed through.
When you start suppressing messages you should log that fact, and log again when normal logging resumes.
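As a rough illustration of that idea, here is a minimal sketch of a wrapper around an org.slf4j.Logger that lets at most one debug message through per interval and reports how many were suppressed in between. The class and method names are made up for illustration; a real implementation would cover all log levels and the format-string overloads:

import java.util.concurrent.atomic.AtomicLong;
import org.slf4j.Logger;

public class ThrottledLogger {

    private final Logger delegate;
    private final long intervalMillis;
    private final AtomicLong nextAllowed = new AtomicLong();
    private final AtomicLong suppressed = new AtomicLong();

    public ThrottledLogger(Logger delegate, long intervalMillis) {
        this.delegate = delegate;
        this.intervalMillis = intervalMillis;
    }

    public void debug(String msg) {
        long now = System.currentTimeMillis();
        long next = nextAllowed.get();
        if (now >= next && nextAllowed.compareAndSet(next, now + intervalMillis)) {
            long skipped = suppressed.getAndSet(0);
            if (skipped > 0) {
                // say how many messages were dropped since the last one we let through
                delegate.debug("({} suppressed) {}", skipped, msg);
            } else {
                delegate.debug(msg);
            }
        } else {
            suppressed.incrementAndGet();
        }
    }
}

In the loop from the question you would create one ThrottledLogger(log, 1000) as a field and call its debug() inside doSomething(); the message then shows up roughly once per second instead of 100 times.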

Asserting I have received 10 messages

I'm trying to assert that I have received 10 messages from PubNub. I do in fact receive them and see them in the console. However, what would be the right way to assert that? I'm not entirely sure what syntax I should use.
@Test
public void testPublisher() throws PubnubException {
    // Send 10 messages
    for (int i = 0; i <= 10; i++) {
        service.publish("my_channel", "Message: " + i);
    }
    // Wait until we have received the 10 messages
    do {} while (service.count() <= 10);
    // For each message print out the details
    service.getMessages().forEach(System.out::println);
    assertArrayEquals(service.count());
}
You should be able to use
assertTrue(service.count() == 10);
Your do...while loop is known as a "busy spin", which is considered an anti-pattern in most cases and should be avoided. Busy spinning thrashes the CPU while it waits, and your implementation could also run forever if something goes wrong and 10 messages aren't received.
https://en.wikipedia.org/wiki/Busy_waiting
You should consider a blocking mechanism... possibly with a timeout such as
BlockingQueue.take() or BlockingQueue.poll() or CountDownLatch.await()
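For example, a minimal sketch of the CountDownLatch variant; service.onMessage(...) is a hypothetical hook standing in for whatever subscribe callback your PubNub wrapper exposes, and the 10-second timeout is arbitrary:

import static org.junit.Assert.assertTrue;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

@Test
public void testPublisher() throws Exception {
    CountDownLatch latch = new CountDownLatch(10);
    // hypothetical hook: count down once per message the subscriber receives
    service.onMessage(msg -> latch.countDown());

    // send exactly 10 messages
    for (int i = 0; i < 10; i++) {
        service.publish("my_channel", "Message: " + i);
    }

    // block for at most 10 seconds instead of busy-spinning;
    // await() returns false (and the test fails) on timeout
    assertTrue("did not receive 10 messages in time",
            latch.await(10, TimeUnit.SECONDS));
}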

Java: If-else statement

I made a Twitch IRC bot.
I have set up a little "system" to turn itself on whenever the stream goes online and turn off when the stream is no longer online.
I'm using the following code:
if (TwitchStatus.isstreamlive && multistartprepare == false && multistartprepare2 == false) {
    livemode = true;
    multistartprepare = true;
    startedAt = DateTime.now();
    startup();
} else {
    if (TwitchStatus.isstreamlive == false && multistartprepare) {
        livemode = false;
        multistartprepare = false;
        multistartprepare2 = false;
        TTmsg.cancel();
        TTmsg.purge();
    }
}
isstreamlive is a boolean which is true when the stream is live and false when it is offline.
isstreamlive gets updated every 5 seconds by making a JSON request and holds the right value the whole time.
The problem now is that the startup() method activates a timer for a greeting message in the IRC chat. Somehow the timer gets executed 2 or 3 times when I start my bot, so I guess something is wrong with my if-else statement.
The booleans multistartprepare and multistartprepare2 are false on start and are there so the bot starts only once, until the stream is over and it can go offline again.
Is there something wrong above? I guess the code gets executed too many times.
Greetings, and sorry for the bad English :D
It might help if you use your livemode variable in the if as well
if (TwitchStatus.isstreamlive &&
        !multistartprepare &&
        !multistartprepare2 &&
        !livemode) {
You might be able to work around this by setting up a timeout that prevents your bot from sending the message if it has been sent in the past few seconds.
long lastSent = 0;
...
if (System.currentTimeMillis() - lastSent > 1000 * 5) { // 5 seconds elapsed
    ...
    // send message
    lastSent = System.currentTimeMillis();
}
You might have something wrong with your setup method, or the server might be sending you multiple went-online messages, but it's hard to tell based on the info you have so far.

Thread.Sleep crashes my app

This app talks to a serial device over a USB-to-serial dongle. I have been able to get it to process single queries with no problem, but I have a command that sends multiple queries to the serial device, and it seems to me the buffer is getting overrun. Here is part of my code:
This is my array with 20 query commands:
String[] stringOneArray = {":000101017d", ":0001060178", ":00010B016C", ":000110017D",
        ":0001150178", ":00011A016C", ":00011F0167", ":0001240178", ":0001290173",
        ":00012E0167", ":0001330178", ":0001380173", ":00013D0167", ":0001420178",
        ":0001470173", ":00014C0167", ":0001510178", ":0001560173", ":00015B0167", ":0001600178"};
This is how I use the array:
getVelocitiesButton.setOnClickListener(new View.OnClickListener() {
    public void onClick(View v) {
        ftDev.setLatencyTimer((byte) 16);
        int z;
        for (z = 0; z < 19; z++) {
            String writeData = (String) stringOneArray[z];
            byte[] OutData = writeData.getBytes();
            ftDev.write(OutData, writeData.length());
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) { }
        }
    }
});
Not sure the rest of the code is necessary but will add it if needed.
So ftDev is my serial device. The app sends a query command to the serial device and receives the response in bytes; I use a for loop to build up the response until all bytes have arrived (31 bytes per response), then I process that response, and at that point the device should receive the second query command from the array, and so on until the last command is sent. It is all fine and dandy if I allow the for loop to send only one or two queries, but with a larger number of array indices it crashes. I figured I would just slow down the for loop by adding the Thread.sleep, but that freezes the app and it crashes. What gives? Is there any other way to control the speed at which the commands are sent? I would rather send them as soon as possible, but I'm afraid I don't know Java well enough. This has been my major stumbling block in finishing this personal project; I have been stuck for 2 days researching and trying solutions.
Looks like you're sleeping for ~1000ms (well 950 to be exact because your last operation is not being sent to the serial device) plus the time needed to perform the writes over your serial connection. That's a pretty long time to do nothing. Remove the Thread.sleep(50) call and put the entire contents of the onClick into the run method of the following code:
AsyncTask.execute(new Runnable() {
    @Override
    public void run() {
        // talk to device here
    }
});
Then, ask a different question about the quick writes crashing your connection.
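Put together, the click handler from the question would look roughly like this. It is a sketch that assumes ftDev and stringOneArray are fields of the enclosing activity (so the anonymous Runnable can see them) and it loops over the whole array, since the original stopped at index 19 and skipped the last command:

getVelocitiesButton.setOnClickListener(new View.OnClickListener() {
    public void onClick(View v) {
        // everything that talks to the serial device now runs off the UI thread
        AsyncTask.execute(new Runnable() {
            @Override
            public void run() {
                ftDev.setLatencyTimer((byte) 16);
                for (int z = 0; z < stringOneArray.length; z++) {
                    String writeData = stringOneArray[z];
                    byte[] outData = writeData.getBytes();
                    // no Thread.sleep() here any more; if the writes still overrun
                    // the device, that pacing problem is a separate question
                    ftDev.write(outData, writeData.length());
                }
            }
        });
    }
});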
