Camel sends String to JMS queue but byte array is retrieved - Java

I have a question related to Camel and JMS messages.
My system contains a JMS topic and a JMS queue, say, TopicInput and QueueInput. A process of mine listens to QueueInput and processes the messages sent to this queue. The result is then passed to another topic, say, TopicOutput.
The process that handles the message uses Java and Apache Camel. The response my Camel route sends out is a String, so a String is sent to TopicOutput.
My problem is that when I send my message to QueueInput directly, everything is fine: I get a String response from TopicOutput. However, if I send the request message to TopicInput, which internally bridges to QueueInput anyway, the result I get from TopicOutput is a byte array representation of the String.
Does anyone know how this could happen? I am not even sure whether this is a Camel problem or a JMS problem.
Any suggestions or hints will be helpful.
Many thanks.

Not quite sure what's going on exactly in your logic.
JMS has BytesMessage and TextMessage. To get a String directly, the message has to be a TextMessage; otherwise a String must be constructed from a byte array, which you can retrieve from the message.
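For illustration, here is a minimal sketch of reading either type with the plain JMS API (the consumer variable, the UTF-8 charset, and the enclosing method that handles JMSException are assumptions):

    import java.nio.charset.StandardCharsets;
    import javax.jms.BytesMessage;
    import javax.jms.Message;
    import javax.jms.TextMessage;

    Message message = consumer.receive();
    String text = null;
    if (message instanceof TextMessage) {
        // A TextMessage carries the String directly
        text = ((TextMessage) message).getText();
    } else if (message instanceof BytesMessage) {
        // A BytesMessage forces you to rebuild the String from raw bytes
        BytesMessage bytesMessage = (BytesMessage) message;
        byte[] body = new byte[(int) bytesMessage.getBodyLength()];
        bytesMessage.readBytes(body);
        text = new String(body, StandardCharsets.UTF_8); // the charset is an assumption
    }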
When sending messages with Camel, Camel tries to map the payload to the best JMS message type; see the type-mapping table in the Camel JMS documentation.
To be sure you always produce a TextMessage (which maps to a String), convert the payload to a String before sending it with a JMS producer. Make sure you know what the message type and payload are at every step of your flow; then you should be able to solve your issue.
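A minimal sketch of pinning the type in a Camel route (this goes inside a RouteBuilder's configure method; myProcessor stands in for your existing logic):

    from("jms:queue:QueueInput")
        .process(myProcessor)            // your existing business logic (placeholder)
        .convertBodyTo(String.class)     // force a String body so Camel creates a TextMessage
        .to("jms:topic:TopicOutput");

Alternatively, the Camel JMS endpoint accepts a jmsMessageType=Text option to fix the outgoing message type explicitly.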

Related

Kafka with Domain Events

In my event-driven project I have messages of type Commands, and in response I have Events.
These Command and Event messages express the domain, so they hold complex types from the domain.
Example:
RegisterClientCommand(Name, Email)
ClientRegisteredEvent(ClientId)
There are tens more of these command and event pairs in the domain.
I was thinking of something like:
RawMessage(payloadMap, sequenceId, createdOn)
The payload would hold the message domain class type name and the message fields.
I was also reading about the Avro format, but it seems like a lot of work to define the message format for each message.
What's the best practice in terms of the message format that's actually transmitted through the Kafka brokers?
There's no single "best" way to do it; it all depends on the expertise in your team/organization and the specific requirements of your project.
Kafka itself is indifferent to what messages actually contain. Most of the time, it just sees message values and keys as opaque byte arrays.
Whatever you end up defining your RawMessage as on the Java side, it will have to be serialized into byte arrays to produce it into Kafka, because that's what KafkaProducer requires. Maybe it's a custom string serializer you already have; maybe you can serialize a POJO to JSON using Jackson or something similar. Or maybe you simply send a huge comma-delimited string as the message. It's completely up to you.
What's important is that consumers, when they pull a message from the Kafka topic, are able to correctly and reliably read the data from each field of the message, without errors, version conflicts, etc. Most serde/schema mechanisms that exist, like Avro, Protobuf, or Thrift, try to make this job easier for you, especially for complex things like making sure new messages are backwards-compatible with previous versions of the same message.
Most people end up with some combination of:
Serde mechanisms for creating the byte arrays to produce into Kafka; some popular ones are Avro, Protobuf, and Thrift.
Raw JSON strings (see the sketch below).
A huge string with some kind of internal/custom format that can be parsed/analyzed.
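As a minimal sketch of the raw-JSON option, serializing a POJO with Jackson and producing it as a plain string (the broker address, topic name, and rawMessage instance are assumptions, and the enclosing method must handle JsonProcessingException):

    import java.util.Properties;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    // Turn the POJO into a JSON string, then ship it as an ordinary string value
    ObjectMapper mapper = new ObjectMapper();
    String json = mapper.writeValueAsString(rawMessage); // rawMessage: your RawMessage POJO

    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
        producer.send(new ProducerRecord<>("commands", json)); // topic name is an assumption
    }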
Some companies use a centralized schema service, so that data consumers don't have to know ahead of time what schema a message contains; they just pull down the message and request the corresponding schema from the service. Confluent has its own schema registry solution that has supported Avro for years and, as of a few weeks ago, officially supports Protobuf as well. This is not required, and if you own the producer/consumer end-to-end you might decide to handle the serialization yourself, but a lot of people are used to it.
Depending on the message type, sometimes you want compression: the messages could be very repetitive and/or large, so you'd save quite a bit of storage and bandwidth by sending compressed messages, at the cost of some CPU usage and latency. You can handle this yourself on the producer/consumer side, compressing the byte arrays after they've been serialized, or you can request message compression directly on the producer side (look for compression.type in the Kafka docs).
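Enabling producer-side compression is a one-line config change; a sketch reusing the props object from the previous snippet:

    // The producer compresses record batches; consumers decompress transparently
    props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip"); // other options: snappy, lz4, zstd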

Different messages are merged into one ByteBuf object when reading from the Netty library over TCP

I have an application that processes messages in 3 different formats, and I am using a Netty client to receive the messages over a TCP listener.
The issue I am facing is that, to receive messages over TCP, I have to use ByteBuf in my decoder class; the messages are concatenated one after the other and I am not able to split them.
I searched the Internet and found that LineBasedFrameDecoder, DelimiterBasedFrameDecoder, or FixedLengthFrameDecoder can resolve this. But my messages don't have any fixed size, so FixedLengthFrameDecoder won't work. LineBasedFrameDecoder splits messages on newlines, i.e. '\n' or '\r', and my messages can contain newlines, so it would give me partial or half messages. I also don't have any specific delimiter that marks the end of a message, so I can't use DelimiterBasedFrameDecoder either.
Please suggest an approach to resolve this problem.
Also, is there anything I can add to my TCP pipeline so that my ByteBuf object does not contain a concatenation of messages, and every decode method call receives a single message that I can parse easily, just as UDP receives a datagram packet for every single message?
Thanks in advance.
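One common way out, assuming you can change the wire format so the sender prepends a 4-byte length field to every message, is LengthFieldBasedFrameDecoder. A minimal sketch (the 1 MB frame limit and MyMessageDecoder are placeholders for your own values and decoder):

    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.codec.LengthFieldBasedFrameDecoder;

    // Sketch: each frame on the wire is [4-byte length][payload]; the decoder
    // strips the length field and delivers exactly one complete message per read.
    public class FramingInitializer extends ChannelInitializer<SocketChannel> {
        @Override
        protected void initChannel(SocketChannel ch) {
            ch.pipeline().addLast(
                new LengthFieldBasedFrameDecoder(1048576, 0, 4, 0, 4), // maxLen, offset, lenBytes, adjust, strip
                new MyMessageDecoder()); // your existing decoder, now seeing one message per ByteBuf
        }
    }

With this in place, each call to your decoder's decode method sees a ByteBuf holding exactly one message, which is the UDP-like behaviour asked about above.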

RabbitMQ: how to drop a message after n re-queuing attempts

I am trying to build a sort of asynchronous server using RabbitMQ with Java. I have two exchanges, Original_Exch and Dead_Exch, and one queue in each. Both exchanges are declared as the DLX (dead-letter exchange) of each other's queue.
Now to the problem: I am publishing a message to Original_Exch in the form of a JSON string which contains email info (such as To, Subject, message body, attachments, etc.). After consuming this message from the queue bound to Original_Exch, I send an email to the specified email address. If the email is not sent successfully, I transfer the message to Dead_Exch, and after 2 seconds (using a TTL) the message is transferred back to Original_Exch.
Let's assume a scenario in which a particular message keeps moving from one exchange to the other due to continuous failure. In that case I want to make sure that once it has been transferred to Original_Exch 10 times, it is dropped (deleted) from the queue permanently and not transferred to Dead_Exch again.
There are many answers to similar questions, but none of them are satisfactory (from a learner's point of view).
Thanks.
Messages which have been dead-lettered have an x-death header with details about which queue(s) they went through and how many times. See the article about dead-letter exchanges on the RabbitMQ website.
So you can use this header to do what you want. I see two solutions:
In your consumer, when a mail could not be delivered, look at the x-death header and decide if you want to dead-letter it (Basic.Nack with requeue set to false) or drop it (Basic.Ack).
Use a headers exchange type for Dead_Exch and configure the binding to match on x-death.
Because headers exchanges only do exact matching on the header value, the first solution (sketched below) is more flexible and less error-prone.
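A minimal sketch of the first solution inside a consumer callback, using the RabbitMQ Java client (MAX_ATTEMPTS, channel, and delivery are assumptions; the layout of x-death follows the RabbitMQ documentation):

    import java.util.List;
    import java.util.Map;

    // Inside a DeliverCallback handler, after the email send has failed
    // (the enclosing method must handle IOException from the channel calls):
    Map<String, Object> headers = delivery.getProperties().getHeaders();
    long attempts = 0;
    if (headers != null && headers.containsKey("x-death")) {
        @SuppressWarnings("unchecked")
        List<Map<String, Object>> xDeath = (List<Map<String, Object>>) headers.get("x-death");
        if (!xDeath.isEmpty()) {
            attempts = ((Number) xDeath.get(0).get("count")).longValue(); // times dead-lettered so far
        }
    }
    long deliveryTag = delivery.getEnvelope().getDeliveryTag();
    if (attempts >= MAX_ATTEMPTS) {
        channel.basicAck(deliveryTag, false);         // drop the message permanently
    } else {
        channel.basicNack(deliveryTag, false, false); // requeue=false, so it goes to the DLX
    }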

Apache ActiveMQ Scanning Message

I am new to Apache ActiveMQ message queues.
While reading (consuming) messages from the MQ, the dequeue count increases and the message is deleted from MQ storage.
Here, I want to scan a message without deleting it from the MQ and with the dequeue count staying the same. That is, I just want to scan the message and store it locally or print it to the output.
Can anybody suggest how to do this? I want to implement it using Java.
What you need is an ActiveMQQueueBrowser; a sketch follows the caveats below.
But you need to be careful with this approach. Message queues are not designed for this kind of access; only some implementations (like ActiveMQ) provide this access type for special use cases. It should be used only if really necessary, and you need to understand its limitations:
The returned enumeration might not fetch the full content of the queue.
The enumeration might contain a message that has already been dequeued by the time you process it.
etc.
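A minimal sketch with the JMS QueueBrowser API (the broker URL and queue name are assumptions):

    import java.util.Enumeration;
    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class QueueScanner {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueBrowser browser = session.createBrowser(session.createQueue("MY.QUEUE"));

            // Browsing inspects messages without consuming them, so the
            // dequeue count is unaffected and the messages stay in storage.
            Enumeration<?> messages = browser.getEnumeration();
            while (messages.hasMoreElements()) {
                Message message = (Message) messages.nextElement();
                if (message instanceof TextMessage) {
                    System.out.println(((TextMessage) message).getText());
                }
            }
            browser.close();
            session.close();
            connection.close();
        }
    }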

Compressing data sent from Netty

I'm using Netty to implement a client/server application. I'm also using Gson to send data from/to the client in JSON format and convert it from/to a Java POJO.
The problem is that if the data exceeds a certain size, the message is truncated and cannot be used in the program. So I'm trying to find a more compact format (better than the JSON produced by the Gson library), or maybe a way to compress the JSON string, to avoid having the messages truncated.
Any help will be appreciated.
If the protocol you are using is TCP/IP, you have no guarantee that a message you send will arrive in one part. You should add some data to your message that allows the client to determine whether it got the whole message (e.g., you can put the message length at the beginning of the message or a delimiter at the end of the message).
On the client side you should check whether the whole message has arrived, and if not, wait for the rest of it. If you are using Netty on the client side, you should put a frame decoder at the beginning of the channel pipeline (e.g., DelimiterBasedFrameDecoder in the case of a delimiter, LengthFieldBasedFrameDecoder in the case of a length field).
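A minimal sketch of the length-field approach on both sides, so the Gson JSON string always arrives intact (the handler names and the 10 MB frame limit are assumptions):

    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
    import io.netty.handler.codec.LengthFieldPrepender;
    import io.netty.handler.codec.string.StringDecoder;
    import io.netty.handler.codec.string.StringEncoder;
    import io.netty.util.CharsetUtil;

    // Sketch: every outbound write gets a 4-byte length prefix, and inbound
    // bytes are reassembled into complete frames before Gson sees the JSON.
    public class JsonFramingInitializer extends ChannelInitializer<SocketChannel> {
        @Override
        protected void initChannel(SocketChannel ch) {
            ch.pipeline().addLast(
                new LengthFieldBasedFrameDecoder(10 * 1024 * 1024, 0, 4, 0, 4), // inbound framing
                new StringDecoder(CharsetUtil.UTF_8),  // complete frame -> JSON string for Gson
                new LengthFieldPrepender(4),           // outbound: prepend the payload length
                new StringEncoder(CharsetUtil.UTF_8)); // outbound: JSON string -> bytes
        }
    }

Install the same initializer on both the client and the server bootstrap so both directions are framed consistently.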
