Apache ActiveMQ Scanning Messages - Java

I am new to Apache ActiveMQ message queues.
While reading (consuming) messages from the queue, the dequeue count increases and the message is deleted from MQ storage.
Here, I want to scan a message without deleting it from the queue and without changing the dequeue count. That is, I just want to scan the message and store it locally or print it to the output.
Can anybody suggest how to do this? I want to implement it using Java.

What you need is an ActiveMQQueueBrowser. You can find example code here.
But you need to be careful with this approach. Message queues are not designed for this kind of access; only some implementations (like ActiveMQ) provide this access type for special use cases. It should be used only if really necessary, and you need to understand its limitations:
The returned enumeration might not fetch the full content of the queue
The enumeration might contain a message that has been already dequeued by the time you process it
etc.
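A minimal sketch of browsing without consuming, using the standard JMS QueueBrowser; the broker URL and queue name are placeholders for your setup:

```java
import java.util.Enumeration;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class QueueScanner {
    public static void main(String[] args) throws JMSException {
        // Broker URL and queue name are assumptions -- adjust to your setup.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("MY.QUEUE");

            // A browser peeks at messages without consuming them,
            // so the dequeue count stays the same.
            QueueBrowser browser = session.createBrowser(queue);
            Enumeration<?> messages = browser.getEnumeration();
            while (messages.hasMoreElements()) {
                Message message = (Message) messages.nextElement();
                if (message instanceof TextMessage) {
                    System.out.println(((TextMessage) message).getText());
                }
            }
            browser.close();
        } finally {
            connection.close();
        }
    }
}
```

Keep the limitations above in mind: the enumeration is only a snapshot, so treat the output as informational rather than authoritative.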

Related

Kafka with Domain Events

In my event-driven project I have Command messages and, in response, Event messages.
These Command and Event messages express the domain, so they hold complex types from the domain.
Example:
RegisterClientCommand(Name, Email)
ClientRegisteredEvent(ClientId)
There are tens more of these command and event pairs in the domain.
I was thinking of something like:
RawMessage(payloadMap, sequenceId, createdOn)
The payload would hold the domain class name of the message and the message fields.
I was also reading about the Avro format, but it seems like a lot of work to define the message format for each message.
What's the best practice in terms of the message format that's actually transmitted through the Kafka brokers?
There's no single "best" way to do it; it all depends on the expertise in your team/organization and the specific requirements of your project.
Kafka itself is indifferent to what messages actually contain. Most of the time, it just sees message values and keys as opaque byte arrays.
Whatever you end up defining your RawMessage as on the Java side, it will have to be serialized to a byte array to be produced into Kafka, because that's what KafkaProducer requires. Maybe it's a custom string serializer you already have, maybe you can serialize a POJO to JSON using Jackson or something similar, or maybe you simply send a huge comma-delimited string as the message. It's completely up to you.
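For illustration, a minimal sketch of the Jackson-to-JSON option: the command shape follows the RegisterClientCommand(Name, Email) example from the question, while the broker address and topic name are assumptions:

```java
import java.util.Properties;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CommandProducer {
    // Matches the RegisterClientCommand(Name, Email) shape from the question.
    static class RegisterClientCommand {
        public String name;
        public String email;
        RegisterClientCommand(String name, String email) {
            this.name = name;
            this.email = email;
        }
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        ObjectMapper mapper = new ObjectMapper();
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            RegisterClientCommand cmd = new RegisterClientCommand("Alice", "alice@example.com");
            // Kafka only sees bytes; the StringSerializer turns the JSON text into a byte array.
            producer.send(new ProducerRecord<>("commands", mapper.writeValueAsString(cmd)));
        }
    }
}
```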
What's important is that consumers, when they pull a message from the Kafka topic, are able to correctly and reliably read the data from each field in the message, without errors, version conflicts, etc. Most serde/schema mechanisms that exist, like Avro, Protobuf, or Thrift, try to make this job easier for you, especially for complex concerns like making sure new messages are backwards-compatible with previous versions of the same message.
Most people end up with some combination of:
Serde mechanisms for creating the byte arrays to produce into Kafka; popular ones are Avro, Protobuf, and Thrift.
Raw JSON strings
A huge string with some kind of internal/custom format that can be parsed/analyzed.
Some companies use a centralized schema service. This way your data consumers don't have to know ahead of time what schema the message contains; they just pull down the message and request the corresponding schema from the service. Confluent has its own schema registry solution that has supported Avro for years and, as of a few weeks ago, officially supports Protobuf. This is not required, and if you own the producer/consumer end-to-end you might decide to handle serialization yourself, but a lot of people are used to it.
Depending on the message type, sometimes you want compression: messages can be very repetitive and/or large, so you'd save quite a bit of storage and bandwidth by sending compressed messages, at the cost of some CPU usage and latency. You can handle this yourself on the producer/consumer side by compressing the byte arrays after they've been serialized, or you can request message compression directly on the producer side (look for compression.type in the Kafka docs).
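As a sketch, enabling producer-side compression is a single configuration entry (props here is the producer's Properties object, as in the earlier example; gzip is just one of the supported codecs):

```java
// Producer-side compression; Kafka also supports snappy, lz4, and zstd.
props.put("compression.type", "gzip");
```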

javax.mail separating email threads

I have a Java application that monitors an inbox and reads new messages. I only want the latest message in a thread to be read; however, when an email with multiple replies in the same thread is parsed, it reads the whole thing.
Is it possible to read only the latest reply in an email thread using javax.mail? Or would I need some logic to look at the headers and determine the latest message by comparing the send dates?
If you have separate messages in your mailbox for each reply, you have to decide how to determine that they're part of the same "thread". There's no perfect way to do this and different mailers will do it differently. A good start is the References and In-Reply-To headers. Once you know the set of messages that are part of a single thread, you can choose the latest one by date.
If you have a single message that includes the text of previous replies in the body of the message and you want to separate the latest reply from the previous replies, you'll have to process the text in the body and decide which parts are previous replies and which part is the current reply. Again, there is no perfect solution and this will require more heuristics.
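For the first case, a rough sketch of grouping messages by thread with javax.mail, using the References header and falling back to the Message-ID; the grouping key is a heuristic, not a complete threading algorithm:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.internet.MimeMessage;

public class ThreadGrouper {

    // Heuristic: the first message-ID in the References header is usually the
    // thread root; a message without References starts its own thread.
    static Map<String, List<MimeMessage>> groupByThread(Message[] messages)
            throws MessagingException {
        Map<String, List<MimeMessage>> threads = new HashMap<>();
        for (Message m : messages) {
            MimeMessage mime = (MimeMessage) m;
            String[] refs = mime.getHeader("References");
            String root = (refs != null && refs.length > 0)
                    ? refs[0].trim().split("\\s+")[0] // first referenced message-ID
                    : mime.getMessageID();
            threads.computeIfAbsent(root, k -> new ArrayList<>()).add(mime);
        }
        return threads;
    }

    // Pick the latest message in a thread by its sent date.
    static MimeMessage latest(List<MimeMessage> thread) throws MessagingException {
        MimeMessage newest = thread.get(0);
        for (MimeMessage m : thread) {
            if (m.getSentDate() != null
                    && (newest.getSentDate() == null
                        || m.getSentDate().after(newest.getSentDate()))) {
                newest = m;
            }
        }
        return newest;
    }
}
```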

RabbitMQ, How to drop a message after n re-queuing attempts

I am trying to build a sort of asynchronous server using RabbitMQ with Java. I have two exchanges, Original_Exch and Dead_Exch, with one queue bound to each. Each exchange is declared as the dead-letter exchange (DLX) of the other's queue.
Now to the problem: I publish a message to Original_Exch as a JSON string containing email info (To, Subject, message body, attachment, etc.). After consuming this message from the queue bound to Original_Exch, I send an email to the specified address. If the email is not sent successfully, I transfer the message to Dead_Exch, and after 2 seconds (using a TTL) the message is transferred back to Original_Exch.
Consider a scenario in which a particular message keeps moving from one exchange to the other due to continuous failures. In that case, I want to make sure that once it has been transferred to Original_Exch 10 times, it is dropped (deleted) from the queue permanently and not transferred to Dead_Exch again.
There are many answers to almost identical questions, but none of them are satisfactory (from a learner's point of view).
Thanks.
Messages which have been dead-lettered have an x-death header with details about which queue(s) they went through and how many times. See the article about dead-letter exchanges on the RabbitMQ website.
So you can use this header to do what you want. I see two solutions:
In your consumer, when a mail could not be delivered, look at the x-death header and decide whether to dead-letter the message (Basic.Nack with requeue set to false) or drop it (Basic.Ack); a sketch of this follows below.
Use a headers exchange type for Dead_Exch and configure the binding to match on x-death.
Because headers exchanges only do exact matches on header values, the first solution is more flexible and less error-prone.
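A sketch of the first approach with the RabbitMQ Java client; the queue name and sendEmail method are placeholders, the retry limit comes from the question, and the x-death parsing follows the header layout documented by RabbitMQ:

```java
import java.util.List;
import java.util.Map;

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class RetryLimitConsumer {
    static final int MAX_ATTEMPTS = 10; // the limit from the question

    // x-death is a list of tables, one entry per (queue, reason) pair;
    // each entry carries a "count" of how often the message died there.
    static long deathCount(AMQP.BasicProperties props) {
        Map<String, Object> headers = props.getHeaders();
        if (headers == null || !headers.containsKey("x-death")) {
            return 0;
        }
        @SuppressWarnings("unchecked")
        List<Map<String, Object>> deaths = (List<Map<String, Object>>) headers.get("x-death");
        return deaths.isEmpty() ? 0 : (Long) deaths.get(0).get("count");
    }

    public static void main(String[] args) throws Exception {
        Connection conn = new ConnectionFactory().newConnection();
        Channel channel = conn.createChannel();
        channel.basicConsume("original_queue", false, (tag, delivery) -> {
            long deliveryTag = delivery.getEnvelope().getDeliveryTag();
            try {
                sendEmail(delivery.getBody()); // placeholder for the real mail logic
                channel.basicAck(deliveryTag, false);
            } catch (Exception e) {
                if (deathCount(delivery.getProperties()) >= MAX_ATTEMPTS) {
                    // Give up: ack so the message is dropped, not dead-lettered again.
                    channel.basicAck(deliveryTag, false);
                } else {
                    // Nack without requeue: routes the message to the DLX.
                    channel.basicNack(deliveryTag, false, false);
                }
            }
        }, tag -> { });
    }

    static void sendEmail(byte[] body) { /* ... */ }
}
```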

Camel send String to JMS queue but byte array is retrieved

I have a question related to Camel and JMS messages.
My system contains a JMS topic and a JMS queue, say TopicInput and QueueInput. A process of mine listens to QueueInput and processes the messages sent to this queue. The result is then passed to another topic, say TopicOutput.
The process that handles the message uses Java and Apache Camel. The response my Camel route sends out is a String, so a String is sent to TopicOutput.
My problem is that when I send my message to QueueInput directly, everything is fine: I get a String response from TopicOutput. However, if I send the request message to TopicInput, which internally bridges to QueueInput anyway, the result I get from TopicOutput is a byte array representation of the String.
Does anyone know how this could happen? I am not even sure whether this is a Camel problem or a JMS problem.
Any suggestions or hints would be helpful.
Many thanks.
Not quite sure what's going on exactly in your logic.
JMS has BytesMessage and TextMessage. To get a string directly, the message has to be a TextMessage; otherwise a String must be constructed from a byte array retrieved from the message.
When sending messages with Camel, Camel tries to map the payload to the best-fitting JMS message type. Check this table out.
To be sure you always produce a TextMessage (which maps to a String), convert the payload to a String before sending it with a JMS producer. Make sure you are aware of the message type and payload at every step of your flow; then you should easily solve your issue.
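A minimal Camel route sketch that forces the payload to a String before it reaches the JMS producer; the endpoint URIs follow the names in the question, and the "activemq" component name is an assumption:

```java
import org.apache.camel.builder.RouteBuilder;

public class StringOutputRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("activemq:queue:QueueInput")
            // ... your processing steps here ...
            // Converting the body guarantees the JMS producer creates a
            // TextMessage instead of a BytesMessage.
            .convertBodyTo(String.class)
            .to("activemq:topic:TopicOutput");
    }
}
```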

Obtain last 500 logged messages

We are using the SLF4J/Logback combination for our logging. One of our requirements is that if anything fails, we send an email to the support/dev group with the last 500 logged messages.
I tried going through the documentation but haven't found anything relevant.
One approach I can think of is to obtain the current log file name, read the file, and send the last 500 records. But I don't know how to get the current log file name. Does anyone know how? Or is there a better option for retrieving the log tail?
Thanks
It sounds like Log4j's SMTPAppender has the features you require. Logback ships a similar appender (ch.qos.logback.classic.net.SMTPAppender); if it didn't, you could use the Log4j source code as a model to guide your own implementation.
Essentially, this email appender keeps a ring buffer of log events. When a triggering event occurs (by default, an event at ERROR level or worse), the buffer is flushed into an email and sent.
Create a custom Appender that caches the last 500 log messages. You could extend SMTPAppender to send the email, reading the contents from this cache.
Start here.
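A minimal sketch of the custom-appender idea in Logback; the class name and the way it is wired into logback.xml are assumptions, and the actual mail-sending is left out:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.AppenderBase;

public class TailBufferAppender extends AppenderBase<ILoggingEvent> {
    private static final int CAPACITY = 500;
    private final Deque<String> buffer = new ArrayDeque<>(CAPACITY);

    @Override
    protected synchronized void append(ILoggingEvent event) {
        if (buffer.size() == CAPACITY) {
            buffer.removeFirst(); // drop the oldest entry
        }
        buffer.addLast(event.getFormattedMessage());
    }

    // Call this from your failure handler to get the messages for the email body.
    public synchronized List<String> snapshot() {
        return new ArrayList<>(buffer);
    }
}
```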
