What happens if a JMS message is sent without a JMSPriority header - java

I am currently trying to prioritize messages (JMS) going through a complex application.
In the part that concerns me most, users insert messages into a database table, e.g. a table INPUT with columns DESTINATION, PRIORITY and MESSAGE. The PRIORITY column is optional; the other two are mandatory.
The application then reads the entries from this table and creates a JMS message with the JMSPriority header set to PRIORITY. The body is filled from the MESSAGE column and the message is sent to the queue specified in DESTINATION.
CODE SNIPPETS:
//pull requests from database and set headers
from(RouteConstants.READ_REQUESTS_FROM_DATABASE) //this route is fed by the SQL component
    .transacted("PROPAGATION_REQUIRED_JBOSS")
    .process(setHeaderProperties)
    .to("direct:jms");

//send JMS to destination
from("direct:jms")
    .setBody(simple("${property.MESSAGE}"))
    .convertBodyTo(String.class)
    .recipientList(simple("jms:queue:${property.DESTINATION}"
        + "?exchangePattern=InOnly&jmsMessageType=Text&preserveMessageQos=true"
        + "&disableReplyTo=true"));
public class SetHeaderProperties implements Processor {
    public void process(Exchange exchange) throws Exception {
        LinkedCaseInsensitiveMap body = (LinkedCaseInsensitiveMap) exchange.getIn().getBody();
        exchange.setProperty("MESSAGE", body.get("MESSAGE").toString());
        exchange.setProperty("DESTINATION", body.get("DESTINATION").toString());
        Long priority = getPriorityQuery(); //DAO method that returns the value of PRIORITY, or null if empty
        if (priority != null) {
            exchange.setProperty("PRIORITY", priority);
        }
    }
}
//Receive the JMS message. Consider this to be the same queue the message was sent to in the second snippet.
from("jms:queue:input-msgs")
    .log(LoggingLevel.DEBUG, CAMEL_LOGGER_NAME, "Received JMSPriority: ${header.JMSPriority}") //this getter is problematic, see below
    .process(generalMessageProcessor);
The application behaves as it should as long as the PRIORITY column is filled. When PRIORITY is null, the getter always returns 4. I understand that priority 4 is the default, and I am OK with the message being processed as such, but I need to be able to differentiate between the case where priority 4 was explicitly set in the database table (and therefore requested) and the case where no priority was set at all, in which case the following processor should take a slightly different route.
Is this at all possible? I would like to avoid changing the DDL of the database, and I also cannot simply fork the logic in the SetHeaderProperties processor, because the information would get overwritten in the GeneralMessageProcessor anyway and the setter processor does not have all the necessary classes and fields exposed.
The naive answer, I suppose, would be to call the DAO query again whenever I need to check the priority, but that would strain the database, and I would like to know whether there is a more elegant solution to the problem.

Yes, 4 is the default JMS priority, so every message has a priority; there is no such thing as a null priority or no priority at all.
However, one quite simple workaround that does not strain the database would be to set another message header such as prioritySetByApplication (or whatever name you like). You can then use this header to distinguish between a default priority and an "explicit" priority of 4.
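A minimal sketch of that workaround against the routes from the question (the header name prioritySetByApplication is only an example; any name that maps to a valid JMS property identifier will do):

// In SetHeaderProperties: mark the exchange only when the database actually supplied a priority.
Long priority = getPriorityQuery();
if (priority != null) {
    exchange.setProperty("PRIORITY", priority);
    exchange.getIn().setHeader("prioritySetByApplication", true); // travels with the JMS message as a custom property
}

// In the consuming route: branch on the marker instead of on JMSPriority alone.
from("jms:queue:input-msgs")
    .choice()
        .when(header("prioritySetByApplication").isEqualTo(true))
            .log(LoggingLevel.DEBUG, CAMEL_LOGGER_NAME, "Priority ${header.JMSPriority} was explicitly requested")
        .otherwise()
            .log(LoggingLevel.DEBUG, CAMEL_LOGGER_NAME, "No priority requested, broker default of 4 applies")
    .end()
    .process(generalMessageProcessor);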

Related

Batch consumer camel kafka

I am unable to read in batch with the kafka camel consumer, despite following an example posted here. Are there changes I need to make to my producer, or is the problem most likely with my consumer configuration?
The application in question utilizes the kafka camel component to ingest messages from a rest endpoint, validate them, and place them on a topic. I then have a separate service that consumes them from the topic and persists them in a time-series database.
The messages were being produced and consumed one at a time, but the database expects the messages to be consumed and committed in batch for optimal performance. Without touching the producer, I tried adjusting the consumer to match the example in the answer to this question:
How to transactionally poll Kafka from Camel?
I wasn't sure how the messages would appear, so for now I'm just logging them:
from(kafkaReadingConsumerEndpoint)
    .routeId("rawReadingsConsumer")
    .process(exchange -> {
        // simple approach to generating errors
        String body = exchange.getIn().getBody(String.class);
        if (body.startsWith("error")) {
            throw new RuntimeException("can't handle the message");
        }
        log.info("BODY:{}", body);
    })
    .process(kafkaOffsetManager);
But the messages still appear to be coming across one at a time with no batch read.
My consumer config is this:
kafka:
host: myhost
port: myport
consumer:
seekTo: beginning
maxPartitionFetchBytes: 55000
maxPollRecords: 50
consumerCount: 1
autoOffsetReset: earliest
autoCommitEnable: false
allowManualCommit: true
breakOnFirstError: true
Does my config need work, or are there changes I need to make to the producer to have this work correctly?
At the lowest layer, the KafkaConsumer#poll method returns a ConsumerRecords object that you iterate one ConsumerRecord at a time; there's no way around that.
I don't have in-depth experience with Camel, but in order to get a "batch" of records, you'll need some intermediate collection to "queue" the data that you eventually want to send downstream to some "collection consumer" process. Then you will need some "switch" processor that decides either "keep filling this batch" or "this batch is ready, process it".
As far as databases go, that process is exactly what the Kafka Connect JDBC Sink does with its batch.size config.
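Camel's aggregator (shown in the next answer) is the idiomatic way to do this, but as a rough sketch of the "intermediate collection plus switch processor" idea described above (BATCH_SIZE and the batch list are illustrative names, not a Camel API):

// Fields on the RouteBuilder
private static final int BATCH_SIZE = 50;                       // illustrative flush threshold
private final List<String> batch = new ArrayList<>();

// Inside configure()
from(kafkaReadingConsumerEndpoint)
    .routeId("rawReadingsConsumer")
    .process(exchange -> {
        List<String> toFlush = null;
        synchronized (batch) {
            batch.add(exchange.getIn().getBody(String.class));   // keep filling
            if (batch.size() >= BATCH_SIZE) {                    // batch complete, hand it off
                toFlush = new ArrayList<>(batch);
                batch.clear();
            }
        }
        exchange.getMessage().setBody(toFlush);                  // null means "not ready yet"
    })
    .filter(body().isNotNull())                                  // only forward completed batches
    .process(kafkaOffsetManager);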
We solved a similar requirement by using the Aggregation [1] capability provided by Camel
A rough code snippet
@Override
public void configure() throws Exception {
    // 1. Define your aggregation strategy
    AggregationStrategy agg = AggregationStrategies.flexible(String.class)
        .accumulateInCollection(ArrayList.class)
        .pick(body());

    from("kafka:your-topic?and-other-params")
        // 2. Define your aggregation parameters
        .aggregate(constant(true), agg)
            .completionInterval(1000)
            .completionSize(100)
            .parallelProcessing(true)
        // 3. Generate the bulk insert statement
        .process(exchange -> {
            List<String> body = (List<String>) exchange.getIn().getBody();
            String query = generateBulkInsertQueryStatement("target-table", body);
            exchange.getMessage().setBody(query);
        })
        .to("jdbc:dataSource");
}
There are a variety of strategies that you can implement, but we chose this particular one because it lets you accumulate the message contents into a List of strings that we then ingest into the db. [2]
We set a variety of different params such as completionInterval and completionSize. The most important one for us was parallelProcessing(true) [3]; without it our throughput was nowhere near what we required.
Once the aggregation has either collected 100 messages or 1000 ms has passed, the processor generates a bulk insert statement, which then gets sent to the db.
[1] https://camel.apache.org/components/3.18.x/eips/aggregate-eip.html
[2] https://camel.apache.org/components/3.18.x/eips/aggregate-eip.html#_aggregating_into_a_list
[3] https://camel.apache.org/components/3.18.x/eips/aggregate-eip.html#_worker_pools

IBM PCF agent returns error while inquiring channel status

I have a problem with the PCF message agent while inquiring channel status in order to obtain information about the hosts connected to a given queue manager. The code creating the PCFAgent is
MQGetMessageOptions getMessageOptions = new MQGetMessageOptions();
getMessageOptions.options = MQC.MQGMO_BROWSE_NEXT + MQC.MQGMO_NO_WAIT + MQC.MQGMO_FAIL_IF_QUIESCING;
messageAgent = new PCFMessageAgent(MQEnvironment.hostname, MQEnvironment.port, MQEnvironment.channel);
and code of options is
inquireOptions = new PCFMessage(CMQCFC.MQCMD_INQUIRE_CHANNEL_STATUS);
inquireOptions.addParameter(CMQCFC.MQCACH_CHANNEL_NAME, "*");
inquireOptions.addParameter(CMQCFC.MQIACH_CHANNEL_INSTANCE_TYPE, CMQC.MQOT_CURRENT_CHANNEL);
inquireOptions.addParameter(CMQCFC.MQIACH_CHANNEL_INSTANCE_ATTRS, new int[]{
CMQCFC.MQCACH_CHANNEL_NAME, CMQCFC.MQCACH_CONNECTION_NAME, CMQCFC.MQIACH_MSGS,
CMQCFC.MQCACH_LAST_MSG_DATE, CMQCFC.MQCACH_LAST_MSG_TIME, CMQCFC.MQIACH_CHANNEL_STATUS
});
responses = messageAgent.send(inquireOptions);
Not always, but occasionally the application returns an exception that says "Completion Code 2, Reason 2100", and my host (the one on which the application is running) leaves a pending connection on the server that is never closed until the queue manager is restarted.
I've read that this exception is due to a conflict in creating dynamic queues, but in my code I don't create any queue.
Could someone help me? Sorry, I have no previous experience with queue managers.
Thank you
2100 (MQRC_OBJECT_ALREADY_EXISTS) is indeed a problem with creating a dynamic queue.
Explanation
An MQOPEN call was issued to create a dynamic queue, but a queue with the same name as the dynamic queue already exists.
Completion code
MQCC_FAILED
Programmer response
If supplying a dynamic queue name in full, ensure that it obeys the naming conventions for dynamic queues; if it does, either supply a different name, or delete the existing queue if it is no longer required. Alternatively, allow the queue manager to generate the name.
If the queue manager is generating the name (either in part or in full), reissue the MQOPEN call.
Under the covers, the PCFMessageAgent will create a PCF Format message using the details you supply in the PCFMessage and will open a model queue to create a temporary queue in order to receive the reply from the Command Server. These temporary queues are named by the queue manager by producing a unique portion using a timestamp, resulting in a name something like AMQ.5E47207E2227AA02. If you have lots of concurrent applications doing this, it is possible that you could end up with a clash of names, and the second request in, at the same time might get a name conflict.
If you have a way to make the temporary queue name more unique should such concurrency be a problem in your system, you can set the prefix used for the temporary queue name using the setReplyQueuePrefix method. You could, for example, use the user ID each application is running under, if that is unique.
public void setReplyQueuePrefix(java.lang.String prefixP)
Sets the string used as the first part of the agent's reply queue name. The reply queue used by the PCFAgent is a temporary dynamic queue. By default, the reply queue prefix used by the PCFAgent is the empty string (""). When this is used, the name of the reply queue is generated entirely by the queue manager. If the method is called with a prefix specified, then the PCFAgent will pass that prefix to the queue manager whenever it needs to create a temporary queue. The queue manager will then generate the rest of the temporary queue name. This means that the queue manager generates a unique name, but the PCFAgent still has some control. The prefix specified should be fewer than 33 characters. If the prefix contains 33 characters or greater, then this method will truncate the prefix to be 32 characters in length. If the prefix does not contain an asterisk (*) character at the end, then an asterisk will be added to the end.
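A sketch of how that could look with the agent from the question (the choice of the OS user name as prefix is just an example, and depending on the client version the prefix may need to be set before the agent first creates its reply queue):

messageAgent = new PCFMessageAgent(MQEnvironment.hostname, MQEnvironment.port, MQEnvironment.channel);
// Prefix the temporary dynamic reply queue; a trailing '*' is appended automatically,
// so the queue manager still generates the unique remainder of the name.
messageAgent.setReplyQueuePrefix(System.getProperty("user.name"));
responses = messageAgent.send(inquireOptions);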

Oracle Advanced Queue - Removing Message after Dequeue

We're using Oracle JMS APIs to read messages from Advanced Queue. We use the following piece of code to read the messages from the queue:
MessageConsumer consumer = sess.createConsumer(q);
for (Message m; (m = consumer.receive()) != null;)
{
new Timer().schedule(new QueueExample(m), 0);
}
The problem is that after the message is received from the queue, it is not completely removed from the queue table; only the STATE field is changed from 0 to 2. Is this the default behavior of the Oracle JMS client? We would like the record to be completely removed from the queue table after the message has been read from the queue with the consumer.receive() method. What is the appropriate API method to do that?
I think you are experiencing this due to the retention_time parameter on your queue being configured to some high value.
Retention is used for:
Users can specify that messages be retained after consumption. The
systems administrator can specify the duration for which messages will
be retained. Oracle AQ stores information about the history of each
message, preserving the queue and message properties of delay,
expiration, and retention for messages destined for local or remote
recipients. The information contains the ENQUEUE/DEQUEUE time and the
identification of the transaction that executed each request. This
allows users to keep a history of relevant messages. The history can
be used for tracking, data warehouse and data mining operations.
You can verify this by checking the queue-creation script, and alter the setting via the admin interface or using DBMS_AQADM.ALTER_QUEUE.
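For example, a minimal JDBC sketch that sets retention_time back to 0 so dequeued messages are eligible for removal from the queue table (jdbcUrl, dbUser, dbPassword and the queue name are placeholders, and the session needs AQ administrator privileges):

// uses java.sql.Connection, DriverManager and CallableStatement
try (Connection con = DriverManager.getConnection(jdbcUrl, dbUser, dbPassword);
     CallableStatement cs = con.prepareCall(
         "BEGIN DBMS_AQADM.ALTER_QUEUE(queue_name => ?, retention_time => 0); END;")) {
    cs.setString(1, "MY_QUEUE"); // placeholder queue name
    cs.execute();
}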

Parse byteBuffer in Websphere MQ Exit

I'm trying to build a custom mq exit to archive messages that hit a queue. I have the following code.
class MyMqExits implements WMQSendExit, WMQReceiveExit {
    @Override
    public ByteBuffer channelReceiveExit(MQCXP arg0, MQCD arg1, ByteBuffer arg2) {
        // TODO Auto-generated method stub
        if (arg2) {
            def _bytes = arg2.array()
            def results = new String(_bytes)
            println results
        }
        return arg2;
    }
...
The content of the message (header/body) is in the byte buffer, along with some unreadable binary information. How can I parse the message (including the body and the queue name) from arg2? We've gone through IBM's documentation, but haven't found an object or anything that makes this easy.
Assuming the following two points:
1) Your sender application has not hard coded the queue name where it puts messages. So you can change the application configuration to send messages to a different object.
2) MessageId of the archived message is not important, only message body is important.
Then one alternative I can think of is to create an Alias queue that resolves to a Topic and use two subscribers to receive messages.
1) Subscriber 1: An administratively defined durable subscriber with a queue provided to receive messages. Provide the same queue name from which your existing consumer application is receiving messages.
2) Subscriber 2: Another administratively defined durable subscriber with queue provided. You can write a simple java application to get messages from this queue and archive.
3) Both subscribers subscribe to the same topic.
Here are steps:
// Create a topic
define topic(ANY.TOPIC) TOPICSTR('/ANY_TOPIC')
// Create an alias queue that points to above created topic
define qalias(QA.APP) target(ANY.TOPIC) targtype(TOPIC)
// Create a queue for your application that does business logic. If one is available already then no need to create.
define ql(Q.BUSLOGIC)
// Create a durable subscription with destination queue as created in previous step.
define sub(SB.BUSLOGIC) topicstr('/ANY_TOPIC') dest(Q.BUSLOGIC)
// Create a queue for application that archives messages.
define ql(Q.ARCHIVE)
// Create another subscription with destination queue as created in previous step.
define sub(SB.ARCHIVE) topicstr('/ANY_TOPIC') dest(Q.ARCHIVE)
Write a simple MQ Java/JMS application to get messages from Q.ARCHIVE and archive messages.
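A rough sketch of that archiving consumer using the IBM MQ classes for JMS (all connection details are placeholders, and archiveMessage is a hypothetical method standing in for whatever archiving you do):

// imports: javax.jms.*, com.ibm.mq.jms.MQQueueConnectionFactory, com.ibm.msg.client.wmq.WMQConstants
MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
cf.setHostName("mqhost");                        // placeholder
cf.setPort(1414);                                // placeholder
cf.setChannel("SYSTEM.DEF.SVRCONN");             // placeholder
cf.setQueueManager("QMGR1");                     // placeholder
cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

QueueConnection connection = cf.createQueueConnection();
QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
MessageConsumer consumer = session.createConsumer(session.createQueue("Q.ARCHIVE"));
connection.start();

Message m;
while ((m = consumer.receive(5000)) != null) {   // wait up to 5 seconds, then stop
    archiveMessage(m);                           // hypothetical archiving logic
}
connection.close();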
A receive exit is not going to give you the whole message. Send and receive exits operate on the transmission buffers sent/received by channels. These will contain various protocol flows, which are not documented because the protocol is not public, and part of those protocol flows will be the messages themselves, broken down into 32KB chunks.
You don't give enough information in your question for me to know what type of channel you are using, but I'm guessing it's on the client side since you are writing it in Java and that is the only environment where that is applicable.
Writing the exit at the client side, you'll need to be careful you deal with the cases where the message is not successfully put to the target queue, and you'll need to manage syncpoints etc.
If you were using QMgr-QMgr channels, you should use a message exit to capture the MQXR_MSG invocations where the whole message is given to you. If you put any further messages in a channel message exit, the messages you put are included in the channel's Syncpoint and so committed if the original messages were committed.
Since you are using client-QMgr channels, you could look at an API Exit on the QMgr end (currently client side API Exits are only supported for C clients) and catch all the MQPUT calls. This exit would also give you the MQPUT return codes so you could code your exit to look out for, and deal with failed puts.
Of course, writing an exit is a complicated task, so it may be worth finding out if there are any pre-written tools that could do this for you instead of starting from scratch.
I fully agree with Morag and Shashi: this is the wrong approach. There is an open source project called Message Multiplexer (MMX) that will get a message from a queue and output it to one or more queues. Context information is maintained across the message put(s). For more info on MMX go to: http://www.capitalware.com/mmx_overview.html
If you cannot change the source or target queues to insert MMX into the mix then an API Exit may do the trick. Here is a blog posting about message replication via an API Exit: http://www.capitalware.com/rl_blog/?p=3304
This is quite an old question but it's worth replying with an update that's relevant to MQ 9.2.3 or later. There is a new feature called Streaming Queues (see https://www.ibm.com/docs/en/ibm-mq/9.2?topic=scenarios-streaming-queues) and one of the use-cases it is designed to support is putting a copy of every message sent to a given queue, to an alternative queue. Another application can then consume the duplicate messages and archive them separately to the application that is processing the original messages.

RabbitMQ AMQP.BasicProperties.Builder values

In the RabbitMQ/AMQP Java client, you can create an AMQP.BasicProperties.Builder, and use it to build() an instance of AMQP.BasicProperties. This built properties instance can then be used for all sorts of important things. There are lots of "builder"-style methods available on this builder class:
BasicProperties.Builder propsBuilder = new BasicProperties.Builder();
propsBuilder
.appId(???)
.clusterId(???)
.contentEncoding(???)
.contentType(???)
.correlationId(???)
.deliveryMode(2)
.expiration(???)
.headers(???)
.messageId(???)
.priority(???)
.replyTo(???)
.timestamp(???)
.type(???)
.userId(???);
I'm looking for what fields these builder methods help "build up", and most importantly, what valid values exist for each field. For instance, what is a clusterId, and what are its valid values? What is type, and what are its valid values? Etc.
I have spent all morning scouring:
The Java client documentation
The Javadocs
The RabbitMQ full reference guide
The AMQP specification
In all these docs, I cannot find clear definitions (besides some vague explanation of what priority, contentEncoding and deliveryMode are) of what each of these fields are, and what their valid values are. Does anybody know? More importantly, does anybody know where these are even documented? Thanks in advance!
Usually I use a very simple approach to memorize something like this. I will provide all the details below, and a small builder sketch with example values follows the field list; the key is to keep the application context and the queue/server context of each field clearly separated.
High-level description (source 1, source 2):
Please note that Cluster ID has been deprecated, so I will exclude it.
Application ID - Identifier of the application that produced the message.
Context: application use
Value: Can be any string.
Content Encoding - Message content encoding
Context: application use
Value: MIME content encoding (e.g. gzip)
Content Type - Message content type
Context: application use
Value: MIME content type (e.g. application/json)
Correlation ID - Message correlated to this one, e.g. what request this message is a reply to. Applications are encouraged to use this attribute instead of putting this information into the message payload.
Context: application use
Value: any value
Delivery mode - Should the message be persisted to disk?
Context: queue implementation use
Value: non-persistent (1) or persistent (2)
Expiration - Expiration time after which the message will be deleted. The value of the expiration field describes the TTL period in milliseconds. Please see details below.
Context: queue implementation use
Headers - Arbitrary application-specific message headers.
Context: application use
Message ID - Message identifier as a string. If applications need to identify messages, it is recommended that they use this attribute instead of putting it into the message payload.
Context: application use
Value: any value
Priority - Message priority.
Context: queue implementation use
Values: 0 to 9
ReplyTo - Queue name other apps should send the response to. Commonly used to name a reply queue (or any other identifier that helps a consumer application to direct its response). Applications are encouraged to use this attribute instead of putting this information into the message payload.
Context: application use
Value: any value
Time-stamp - Timestamp of the moment when message was sent.
Context: application use
Value: Seconds since the Epoch.
Type - Message type, e.g. what type of event or command this message represents. Recommended to be used by applications instead of including this information into the message payload.
Context: application use
Value: Can be any string.
User ID - Optional user ID. Verified by RabbitMQ against the actual connection username.
Context: queue implementation use
Value: Should be authenticated user.
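Putting the list above together, here is a builder sketch with purely illustrative values (every field is optional, and none of these names are prescribed):

// imports: com.rabbitmq.client.AMQP, java.util.Collections, java.util.Date, java.util.UUID
AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
        .appId("billing-service")                                  // any string identifying the producer
        .contentType("application/json")                           // MIME content type of the body
        .contentEncoding("gzip")                                   // MIME content encoding
        .correlationId(UUID.randomUUID().toString())               // ties a reply to its request
        .deliveryMode(2)                                           // 1 = non-persistent, 2 = persistent
        .expiration("60000")                                       // per-message TTL in ms, as a string
        .headers(Collections.<String, Object>singletonMap("source", "example"))
        .messageId(UUID.randomUUID().toString())
        .priority(0)                                               // 0..9
        .replyTo("billing-replies")                                // name of a reply queue
        .timestamp(new Date())                                     // when the message was sent
        .type("invoice.created")                                   // application-defined event type
        .userId("guest")                                           // must match the connection's user, if set
        .build();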
BTW, I've finally managed to review the latest server code (rabbitmq-server-3.1.5); there is an example in rabbit_stomp_test_util.erl:
content_type = <<"text/plain">>,
content_encoding = <<"UTF-8">>,
delivery_mode = 2,
priority = 1,
correlation_id = <<"123">>,
reply_to = <<"something">>,
expiration = <<"my-expiration">>,
message_id = <<"M123">>,
timestamp = 123456,
type = <<"freshly-squeezed">>,
user_id = <<"joe">>,
app_id = <<"joe's app">>,
headers = [{<<"str">>, longstr, <<"foo">>},
{<<"int">>, longstr, <<"123">>}]
It is good to know that somebody wants all the details, because it is much better to use well-known message attributes when possible instead of placing information in the message body. That said, the basic message properties are far from clear and useful in every case, so a custom header is often the better choice.
Good example (source)
Update - Expiration field
Important note: expiration belongs to the queue context, so a message might be dropped by the server once it expires.
README says the following:
expiration is a shortstr; since RabbitMQ will expect this to be
an encoded string, we translate a ttl to the string representation
of its integer value.
Sources:
Additional source 1
Additional source 2
At time of writing:
The latest AMQP standard is AMQP 1.0 OASIS Standard.
The latest version of RabbitMQ is 3.1.5 (server and client), which claims to support AMQP 0.9.1 (pdf and XML schemas zipped).
RabbitMQ provides its own description of the protocol as an XML schema including extensions (i.e. non-standard), plus an XML schema without extensions (which is identical to the schema linked via (2)) and a pdf doc.
In this answer:
links in (3) are the primary source of detail
(2) pdf doc is used as secondary detail if (3) is inadequate
The source code (java client, erlang server) is used as tertiary detail if (2) is inadequate.
(1) is generally not used - the protocol and schema have been (fairly) significantly evolved for/by OASIS and should apply to future versions of RabbitMQ, but do not apply now. The two exceptions where (1) was used were for the textual descriptions of contentType and contentEncoding - which is safe, because these are standard fields with good descriptions in AMQP 1.0.
The following text is paraphrased from these sources by me to make a little more concise or clear.
content-type (AMQP XML type="shortstr"; java type="String"): Optional. The RFC-2046 MIME type for the message’s application-data section (body). Can contain a charset parameter defining the character encoding used: e.g., ’text/plain; charset=“utf-8”’. Where the content type is unknown the content-type SHOULD NOT be set, allowing the recipient to determine the actual type. Where the section is known to be truly opaque binary data, the content-type SHOULD be set to application/octet-stream.
content-encoding (AMQP XML type="shortstr"; java type="String"): Optional. When present, describes additional content encodings applied to the application-data, and thus what decoding mechanisms need to be applied in order to obtain the media-type referenced by the content-type header field. Primarily used to allow a document to be compressed without losing the identity of its underlying content type. A modifier to the content-type, interpreted as per section 3.5 of RFC 2616. Valid content-encodings are registered at IANA. Implementations SHOULD NOT use the compress encoding, except as to remain compatible with messages originally sent with other protocols, e.g. HTTP or SMTP. Implementations SHOULD NOT specify multiple content-encoding values except as to be compatible with messages originally sent with other protocols, e.g. HTTP or SMTP.
headers (AMQP XML type="table"; java type="Map"): Optional. An application-specified list of header parameters and their values. These may be set up for application-only use. Additionally, it is possible to route via a "headers" exchange type - when a queue is bound to such an exchange, the binding is given a series of header names to match, each with optional values, so that routing to the queue occurs via header matching (see the sketch after this list).
deliveryMode (RabbitMQ XML type="octet"; java type="Integer"): 1 (non-persistent) or 2 (persistent). Only works for queues that implement persistence. A persistent message is held securely on disk and guaranteed to be delivered
even if there is a serious network failure, server crash, overflow etc.
priority (AMQP XML type="octet"; java type="Integer"): The relative message priority (0 to 9). A high priority message is [MAY BE?? - GB] sent ahead of lower priority messages waiting in the same message queue. When messages must be discarded in order to maintain a specific service quality level the server will first discard low-priority messages. Only works for queues that implement priorities.
correlation-id (AMQP XML type="shortstr"; java type="String"): Optional. For application use, no formal (RabbitMQ) behaviour. A client-specific id that can be used to mark or identify messages between clients.
replyTo (AMQP XML type="shortstr"; java type="String"): Optional. For application use, no formal (RabbitMQ) behaviour but may hold the name of a private response queue, when used in request messages. The address of the node to send replies to.
expiration (AMQP XML type="shortstr"; java type="String"): Optional. RabbitMQ AMQP 0.9.1 schema from (3) states "For implementation use, no formal behaviour". The AMQP 0.9.1 schema pdf from (2) states an absolute time when this message is considered to be expired. However, both these descriptions must be ignored because this TTL link and the client/server code indicate the following is true. From the client, expiration is only populated via custom application initialisation of BasicProperties. At the server, this is used to determine TTL from the point the message is received at the server, prior to queuing. The server selects TTL as the minimum of (1) message TTL (client BasicProperties expiration as a relative time in milliseconds) and (2) queue TTL (configured x-message-ttl in milliseconds). Format: string-quoted integer representing the number of milliseconds; time of expiry from the message being received at the server.
message-id (AMQP XML type="shortstr"; java type="String"): Optional. For application use, no formal (RabbitMQ) behaviour. If set, the message producer should set it to a globally unique value. In future (AMQP 1.0), a broker MAY discard a message as a duplicate if the value of the message-id matches that of a previously received message sent to the same node.
timestamp (AMQP XML type="timestamp"; java type="java.util.Date"): Optional. For application use, no formal (RabbitMQ) behaviour. An absolute time when this message was created.
type (AMQP XML type="shortstr"; java type="String"): Optional. For application use, no formal (RabbitMQ) behaviour. [Describes the message as being of / belonging to an application-specific "type" or "form" or "business transaction" - GB]
userId (AMQP XML type="shortstr"; java type="String"): Optional. XML Schema states "For application use, no formal (RabbitMQ) behaviour" - but I believe this has changed in the latest release (read on). If set, the client sets this value as identity of the user responsible for producing the message. From RabbitMQ: If this property is set by a publisher, its value must be equal to the name of the user used to open the connection (i.e. validation occurs to ensure it is the connected/authenticated user). If the user-id property is not set, the publisher's identity remains private.
appId (RabbitMQ XML type="shortstr"; java type="String"): Optional. For application use, no formal (RabbitMQ) behaviour. The creating application id. Can be populated by producers and read by consumers. (Looking at R-MQ server code, this is not used at all by the server, although the "webmachine-wrapper" plugin provides a script and matching templates to create a webmachine - where an admin can provide an appId to the script.)
cluster Id (RabbitMQ XML type="N/A"; java type="String"): Deprecated in AMQP 0.9.1 - i.e. not used. In previous versions, was the intra-cluster routing identifier, for use by cluster applications, which should not be used by client applications (i.e. not populated). However, this has been deprecated and removed from the current schema and is not used by R-MQ server code.
As you can see above, the vast majority of these properties do not have enumerated / constrained / recommended values because they are "application use only" and are not used by RabbitMQ. So you have an easy job. You're free to write/read values that are useful to your application - as long as they match datatype and compile :). ContentType and contentEncoding are as per standard HTTP use. DeliveryMode and priority are constrained numbers.
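As a concrete illustration of the header-matching routing mentioned under headers above (exchange and queue names are made up, and an open Channel is assumed):

// imports: com.rabbitmq.client.AMQP, java.util.HashMap, java.util.Map, java.nio.charset.StandardCharsets
channel.exchangeDeclare("events.headers", "headers");
channel.queueDeclare("pdf-reports", true, false, false, null);

Map<String, Object> bindArgs = new HashMap<>();
bindArgs.put("x-match", "all");   // require all listed headers to match; "any" matches on either
bindArgs.put("format", "pdf");
bindArgs.put("type", "report");
channel.queueBind("pdf-reports", "events.headers", "", bindArgs);

Map<String, Object> msgHeaders = new HashMap<>();
msgHeaders.put("format", "pdf");
msgHeaders.put("type", "report");
channel.basicPublish("events.headers", "",
        new AMQP.BasicProperties.Builder().headers(msgHeaders).build(),
        "monthly report".getBytes(StandardCharsets.UTF_8));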
Note: Useful, but simple constants for AMQP.BasicProperties are available in class MessageProperties.
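For example, a minimal publish using one of those predefined constants ("task_queue" is just a made-up queue name):

// MessageProperties.PERSISTENT_TEXT_PLAIN is a ready-made BasicProperties instance
// with contentType "text/plain" and deliveryMode 2 (persistent).
channel.basicPublish("", "task_queue",
        MessageProperties.PERSISTENT_TEXT_PLAIN,
        "hello".getBytes(StandardCharsets.UTF_8));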
Cheers :)
UPDATE TO POST:
With many thanks to Renat (see comments), I have looked at the Erlang server code in rabbit_amqqueue_process.erl and the documentation at RabbitMQ TTL Extensions to AMQP. Message expiration (time-to-live) can be specified
per queue via:
Map<String, Object> args = new HashMap<String, Object>();
args.put("x-message-ttl", 60000);
channel.queueDeclare("myqueue", false, false, false, args);
or per message via:
byte[] messageBodyBytes = "Hello, world!".getBytes();
AMQP.BasicProperties properties = new AMQP.BasicProperties();
properties.setExpiration("60000");
channel.basicPublish("my-exchange", "routing-key", properties, messageBodyBytes);
Here, the ttl/expiration is in millisecs, so 60 sec in each case.
Have updated above definition of expiration to reflect this.
The AMQP spec defines a generic, extensible model for properties.
AMQP properties are somewhat similar in concept to HTTP headers, in that they represent metadata about the messages in question. Just as in HTTP, they are framed separately to the message payload. But they are basically a key/value map.
Some brokers like RabbitMQ will interpret certain message properties like expiration to add extra vendor-specific value (in that case, enforcing a TTL).
But in the end, AMQP properties are just a big bunch of key/value pairs that get safely sent along with each message, should you choose to do so. Your AMQP broker's documentation will tell you which ones they interpret specially and how to send your own ones.
All that being said, if you're asking this question in the first place then you probably don't need to worry about them at all. You will be successfully able to send messages without having to worry about setting any message properties at all.
