I have tried this to set expiration time of a message and converting and sending it using RabbitMessagingTemplate:
Map<String,Object> headers = new HashMap<>();
headers.put("expiration", "20000");
rabbitMessagingTemplate.convertAndSend(exchange.getName(),routingKey, event, headers);
but it does not work because expiration has to be set as a message property, NOT as a header. Unfortunately, RabbitMessagingTemplate does not provide a way to pass message properties, only headers. On the other hand, I need to convert the message because I use a Jackson message converter.
How can I add message properties before sending the message with RabbitMessagingTemplate?
Add a MessagePostProcessor to the underlying RabbitTemplate's beforePublishPostProcessors.
I can't look at the code right now but I am surprised it's not mapped.
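For example, a minimal sketch of that approach, assuming you can get hold of the RabbitTemplate that backs the RabbitMessagingTemplate via getRabbitTemplate():
RabbitTemplate rabbitTemplate = rabbitMessagingTemplate.getRabbitTemplate();
// every message published through this template gets the expiration property set
rabbitTemplate.setBeforePublishPostProcessors(m -> {
    m.getMessageProperties().setExpiration("20000");
    return m;
});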
EDIT
Use header name amqp_expiration. See AmqpHeaders.EXPIRATION. It is mapped to the message property.
Unrecognized headers are mapped to headers.
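So, keeping your original code, something along these lines should work (a small sketch reusing the names from your question):
Map<String, Object> headers = new HashMap<>();
headers.put(AmqpHeaders.EXPIRATION, "20000"); // AmqpHeaders.EXPIRATION = "amqp_expiration", mapped to the expiration property
rabbitMessagingTemplate.convertAndSend(exchange.getName(), routingKey, event, headers);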
EDIT2
In any case, given your requirements, you might be better off not using the RabbitMessagingTemplate but using the RabbitTemplate with a MessagePostProcessor instead; it will be a little more efficient...
rabbitTemplate.convertAndSend(exchange.getName(), routingKey, event, m -> {
    m.getMessageProperties().setExpiration(...);
    ...
    return m;
});
I am new to Camel, and my use case is as such:
We receive messages from AMQ, and we want to remap each message and send it to different customer endpoints
each customer has config of what fields to include, and urls of OAuth + url to send message(REST apis) + credentials
Customers are grouped under agents; one agent can own several customers. We keep the config in a Map, keyed by "agentId", with a list of "customerConfigs" as the value.
Based on one field in the message, we decide which agent the message should go to
Then we iterate over all customers under that agent, check which fields each one needs, and remap the message accordingly
We also filter by checking whether the message content meets the customer's criteria. If yes, we do OAuth against that customer's OAuth URL and send the message to them; if not, we skip it.
We are doing this with Camel, and so far all the steps, from receiving to remapping to retrieving configs and so on, are defined in a single bean (.bean(GeneralBean.class)). It works.
But now we want to retry against the customer endpoints, and I decided to separate the steps into several Camel steps, because I don't want to retry the whole receiving/remapping/config-retrieval chain as it is now. I just want to retry the last step, which is sending.
Now comes the question: which Camel component should I use?
I think Recipient List could work, but I'm not sure how. Maybe the Dynamic Router is better?
When defining the steps, once I retrieve the config of each customer, the single object in the exchange body (let's call it RemappedMessage) becomes two (a RemappedMessage and a list of CustomerConfig); they have a one-to-many relationship. How do I pass these two objects down to the next bean? Or should I process them together in one bean? In the Exchange? In an @ExchangeProperties Map<String, Object> properties? The latter works, but IMO it is not very Camel-like. Or should I define a tuple class to combine them? I use that a lot, but I think it's ugly.
I don't think there is any syntax in Camel to take properties of an object in the Exchange and put them into to() as the URL and as the basic-auth username and password, is there?
In general, I want to divide the process into several steps in a Camel pipeline, but I'm not sure how to deal with the "one object becomes several objects that need to travel downstream hand in hand" problem.
I am not using Spring, but Quarkus.
Now, I am with:
from("activemq:queue:" + appConfig.getQueueName())
.bean(IncomingMessageConverter.class) // use class form so that Camel will cache the bean
.bean(UserIdValidator.class) // validate and if wrong, end route here
.bean(CustomerConfigRetrieverBean.class) // retrieve config of customer, by agent id. How to pass down both??
.bean(EndpointFieldsTailor.class) // remove fields if this customer is not interested. Needs CustomerConfig
.recipientList(xxxxxx) // how?
// what's next?
Because RemappedMessage is the return type of the .bean(IncomingMessageConverter.class) step, Camel can bind arguments to it afterwards, so I have access to the mapped message. But obviously I cannot return two objects together.
Recipient List is ideal when you want to send the same message to multiple endpoints and you know what those endpoints are before entering the Recipient List.
The Dynamic Router can route messages to a number of endpoints, the list and order of which are not necessarily known when the router is entered. Since you have a one-to-many situation, the Dynamic Router may be a better fit.
A simpler approach that might work is to prepare a List of tuples. Each tuple would contain a CustomerConfig and a RemappedMessage. You would then split the list and in the splitter send the message to the agent. For the tuple you could use something like ImmutablePair, a Map, or just a two-element List.
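A rough sketch of that idea, reusing the class names from your question (ImmutablePair is from commons-lang3; lookupConfigsForAgent, getAgentId() and the direct:perCustomer sub-route are hypothetical placeholders):
// bean step: pair the remapped message with each customer config so both travel together
public class CustomerConfigRetrieverBean {

    @Handler
    public List<ImmutablePair<CustomerConfig, RemappedMessage>> enrich(RemappedMessage message) {
        return lookupConfigsForAgent(message.getAgentId()).stream()   // hypothetical lookup in your agent -> configs map
                .map(config -> ImmutablePair.of(config, message))
                .collect(Collectors.toList());
    }
}

// route: split the list so each exchange carries exactly one (config, message) pair
from("activemq:queue:" + appConfig.getQueueName())
    .bean(IncomingMessageConverter.class)
    .bean(CustomerConfigRetrieverBean.class)
    .split(body())
        .to("direct:perCustomer")   // per-customer filtering, OAuth and send happen here
    .end();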
As for setting a url and username/password, I'll assume you're using Camel's HTTP component. It's probably best to provide these values as headers since the http component allows this. The URL can be set with the CamelHttpUri header. And the HTTP component will generally pass message headers as HTTP headers, so you can set things like the Authorization header just by setting a message header with the same name.
And there's definitely support for setting headers from values in the exchange and message. E.g.
// assumes the body is a List in which the first element is a CustomerConfig
.setHeader("Authorization", simple("${body[0].authValue}"))
In reality, you'll probably have to do a little more work for authorization. E.g., if it's basic auth, you'll have to compute the value you want to use. I'd set that in a header or property and then refer to it like:
.setHeader("Authorization", simple("${header.basicAuthValue}"))
// or
.setHeader("Authorization", simple("${exchangeProperty.basicAuthValue}"))
In the end the route looks quite complicated, but each bean does only one thing.
// main route: split to the direct:individual route and return an aggregated list of EventResponse
from("activemq:queue:" + getQueue())
// note: unmarshalling here is done manually, but it also uses Jsonb
.bean(EventUnmarshaller.class) // use class form so that Camel will cache the bean
.setProperty("skipAck", simple("${body.skipAck}"))
.bean(SomeFieldValidator.class)
.bean(ConfigDetailsEnricher.class)
.split(body(), (oldExchange, newExchange) -> { // reduce each new EventResponse into the result list
if (oldExchange == null) {
// the first time we aggregate we only have the new exchange,
// so we just return it
return newExchange;
}
List<EventResponse> list = oldExchange.getIn().getBody(List.class);
EventResponse newElement = newExchange.getIn().getBody(EventResponse.class);
list.add(newElement);
oldExchange.getIn().setBody(list);
return oldExchange;
})
.to("direct:individual")
.end() // end split
.to("direct:replyRoute");
// split route: every bean here deals with a single EventSpecificConfig; at the end it is converted to an EventResponse
from("direct:individual")
.filter().method(CriteriaFilter.class)
.bean(AnotherFieldsTailor.class)
.bean(RestOperationEnricher.class)
.choice()
.when(simple("${body.restOperation.name} != 'DELETE'"))
.bean(NonDeleteOperationEventFieldsTailor.class)
.end()
.bean(HttpMethodEnricher.class)
.bean(TokenRetriever.class) // call the OAuth endpoint; a token cache is used, hence a bean. Sets the token on the EventSpecificConfig
// inject the url into a header here, because by the time toD() is reached the body has already changed to type Event, not EventSpecificConfig
.bean(EventApiConfiguration.class) // after this step the body is already the new event (PUT, POST) or null (DELETE)
.choice()
.when(simple("${body} != null"))
.marshal().json(JsonLibrary.Jsonb)
.endChoice()
.end()
.toD("${header.url}", 10) // only cache at most 10 urls
;
from("direct:replyRoute") // list of EventResponse
.bean(EventResponsesProcessor.class)
.log(LoggingLevel.DEBUG, "Reply: ${body}") // simple expressions are evaluated inside the log message
.setHeader("from", constant("MYAPP"));
And for the API configuration:
@Handler
void configure(EventSpecificConfig eventSpecificConfig, Exchange exchange) {
ConfigDetail configDetail = eventSpecificConfig.getConfigDetail();
String httpMethodUpperCase = eventSpecificConfig.getHttpMethod().getMethodName();
SubscriptionEvent newEvent = eventSpecificConfig.getNewEvent();
Message message = exchange.getIn();
message.setHeader("Authorization", "Bearer " + eventSpecificConfig.getToken());
message.setHeader(Exchange.HTTP_METHOD, httpMethodUpperCase);
message.setHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON);
if (HTTP_METHODS_WITH_PATH.contains(httpMethodUpperCase)) {
message.setHeader(Exchange.HTTP_PATH, "/" + newEvent.getImsi());
}
if (HTTP_METHODS_WITH_BODY.contains(httpMethodUpperCase)) {
message.setHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON);
message.setBody(newEvent);
} else { // DELETE
message.setBody(null); // DELETE has no body
// more logic of setting headers for HTTP query params depending on config values
}
message.setHeader("url", configDetail.url()); // toD() has no access to EventSpecificConfig, only header works
}
I found out that choice().when().endchoice().otherwise().endchoice().end() needs a to() in every branch; I cannot do when().setHeader().endchoice(). So in the end I decided to put all of this logic into a bean.
Also, you must use end() to close a choice(); you cannot use endchoice() for that. endchoice() is for when() and otherwise(). Quite misleading.
choice()
.when(simple("${header.foo} == 'bar'"))
.to("x")
.endchoice()
.otherwise()
.to("y")
.endchoice()
.end()
I thought endchoice() was on the same level as choice(). Why isn't it named something like endBranch()?
I have a standard SNS topic and I've set the "subscription filter policy" like this:
{
"event": [
"eventName"
]
}
When I publish a message through the AWS console using message attributes, the message goes to the right SQS subscriber. So the subscription filter works just fine.
Now I'm trying to do the same in my Java code (Spring Boot).
I'm using the spring-cloud-aws-messaging library, which I understand is just a wrapper around the AWS SDK.
The problem is I can't figure out how to set the message attributes the way I did in the AWS console.
No matter what JSON format I send to SNS, the attributes always end up in the body of the SNS message. I guess there is a specific method for setting those attributes.
I found com.amazonaws.services.sns.model.MessageAttributeValue
I'm not sure if it is the right class; also, I couldn't work out how to publish the message together with the attributes, as the publish method doesn't accept them:
this.amazonSNS.publish(this.snsTopicARN, message, messageAttributes ???);
According to the official documentation, there is a MessageAttributeValue.Builder, which matches what you need:
https://javadoc.io/static/software.amazon.awssdk/sns/2.10.37/software/amazon/awssdk/services/sns/model/MessageAttributeValue.Builder.html
Map<String, MessageAttributeValue> attributes = new HashMap<>();
attributes.put("event", MessageAttributeValue.builder()
.dataType("String")
.stringValue("eventName")
.build());
PublishRequest request = PublishRequest.builder()
.topicArn("yourTopic")
.message("YourMessageBody")
.messageAttributes(attributes)
.build();
yourDefinedSnsClient.publish(request);
If you want to use spring-cloud-aws you can do something like this:
SnsMessage snsMessage = SnsMessage.builder()
.message("test")
.build();
Map<String, Object> headers = new HashMap<>();
headers.put("event", "eventName");
this.notificationMessagingTemplate.convertAndSend(topicName, snsMessage, headers);
I'm receiving email with Spring Integration, using a very basic script so far:
ApplicationContext ac = new ClassPathXmlApplicationContext("imap.xml");
DirectChannel inputChannel = ac.getBean("receiveChannel", DirectChannel.class);
inputChannel.subscribe(message -> {
System.out.println(message.getHeaders());
System.out.println(message.getPayload());
MessageHeaders headers = message.getHeaders();
String from = (String) headers.get("mail_from");
});
According to the documentation I thought the headers would get parsed automatically, but the headers I get from the first System.out.println() are just:
{id=c65f55aa-c611-71ee-c56d-6bf13c6f71d0, timestamp=1468869891279}
The second output (for getPayload()) is
org.springframework.integration.mail.AbstractMailReceiver$IntegrationMimeMessage#6af5615d
from outputs null...
I then tried to use the MailToStringTransformer
MailToStringTransformer a = transformer();
a.setCharset("utf-8");
System.out.println(a.transform(message));
This outputs the payload and all the headers I expected, but (naturally) as a String.
What do I have to do to get the message's headers and text in an object?
Not sure which documentation you are referring to, but the current 4.3 version has this:
By default, the payload of messages produced by the inbound adapters is the raw MimeMessage; you can interrogate the headers and content using that object. Starting with version 4.3, you can provide a HeaderMapper<MimeMessage> to map the headers to MessageHeaders; for convenience, a DefaultMailHeaderMapper is provided for this purpose.
And a bit below:
When you do not provide a header mapper, the message payload is the MimeMessage presented by javax.mail. The framework provides a MailToStringTransformer...
If you need to customize the mapping, you can always provide your own HeaderMapper<MimeMessage> implementation or an extension of DefaultMailHeaderMapper.
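So in your case it boils down to registering a DefaultMailHeaderMapper with the adapter in imap.xml, roughly along these lines (a sketch; the adapter element, store-uri, ids and channel names are placeholders taken from the 4.3 docs, adjust them to your actual config):
<!-- maps MIME headers (mail_from, mail_subject, ...) to MessageHeaders -->
<bean id="mailHeaderMapper"
      class="org.springframework.integration.mail.support.DefaultMailHeaderMapper"/>

<int-mail:imap-idle-channel-adapter id="imapAdapter"
        store-uri="imaps://user:password@imap.example.com/INBOX"
        channel="receiveChannel"
        header-mapper="mailHeaderMapper"/>
With the mapper in place, message.getHeaders() should contain entries such as mail_from (MailHeaders.FROM), and the payload is the converted mail content (for example a String for a simple text mail) rather than the raw MimeMessage.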
I'm trying to retrieve a resource from a web service, but I get this warning:
WARNING: Unable to find a converter for this representation : [application/repo.foo+xml]
and my code returns a null entity. Here is the code:
Engine.getInstance().getRegisteredClients().clear();
Engine.getInstance().getRegisteredClients().add(new HttpClientHelper(null));
Engine.getInstance().getRegisteredConverters().add(new JacksonConverter());
ClientResource resource = new ClientResource(path);
ChallengeScheme scheme = ChallengeScheme.HTTP_BASIC;
ChallengeResponse auth = new ChallengeResponse(scheme, "user", "password");
resource.setChallengeResponse(auth);
Repo entity = resource.get(Repo.class);
System.out.println(entity);
UPDATE
My attempts, which unfortunately don't work:
resource.getRequestAttributes().put("org.restlet.http.headers", new MediaType("application", "application/repo.foo+xml"));
resource.setAttribute("Content-Type", "application/repo.foo+xml");
There is some confusion in your question.
You are writing a client. You can tell the client to set an Accept header. Put simply, this is a hint to the server about what content type you'd like the response to be in: JSON, HTML, XML, whatever. The server can either honour this, send you its best guess, or ignore it completely.
You can try setting the Accept header to "application/javascript" and see how it responds. If it continues to send "application/repo.foo+xml", then you will probably be able to parse it with the Jackson XML databind package. You may have to register the media type and converter with the client so it knows which converter to use to deserialise the object.
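For example, stating a media type preference from the client could look roughly like this (a sketch based on your snippet; shown here asking for JSON as an example, substitute whatever media type you want to try):
ClientResource resource = new ClientResource(path);
// advertise the media type we'd like back (ends up in the Accept header)
resource.getClientInfo().getAcceptedMediaTypes()
        .add(new Preference<MediaType>(MediaType.APPLICATION_JSON));
resource.setChallengeResponse(new ChallengeResponse(ChallengeScheme.HTTP_BASIC, "user", "password"));
Repo entity = resource.get(Repo.class);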
Currently, I have two handlers, one for logging and one for signing the SOAP message (which inherently tampers with the SOAP message). Without the handler chain, MTOM works as expected, inserting a reference to the binary content, rather than inlining the base64 binary content.
As soon as I introduce a handler, the MTOM content is now included inline.
Is it possible to use handlers to sign a SOAP message or is there a more appropriate means of doing this?
Update 1
Unable to post the full source. Essentially, though, it is a custom SOAPHandler implementation. It performs some basic XMLDSig-type operations on a timestamp (in the header), a custom header, and the SOAP body. The resultant digest values are then injected into a signature element in the header.
With respect to the logger, it is again a simple SOAPHandler. If either it or the signing handler is used on its own, the result is the same: an MTOM message with the byte content inlined. The only progress I made was using a MessageHandler for logging. That allowed me to output the SOAP envelope (albeit with the byte content inlined) and still maintain the MTOM separation. So it is not really a solution, but it is an indication that any modification of the SOAP message needs to occur at a lower level. This is leading me down the path of tubes.
Update 2
The following is an example of the MessageHandler approach. You can see that the raw HTTP dump contains the multi-part message, whereas the actual output inlines the base64. The only difference with a SOAPHandler implementation is that there the actual HTTP request changes to a single-part MTOM message with the content inlined.
@Override
public boolean handleMessage(MessageHandlerContext context) {
HttpTransportPipe.dump = true;
Boolean isOutgoing = (Boolean) context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
if (isOutgoing) {
System.out.println("\nOutbound message:");
XMLStreamWriter writer = XMLStreamWriterFactory.create(System.out);
try {
context.getMessage().writeTo(writer);
} catch (XMLStreamException e) {
throw new IllegalStateException("Unable to write", e);
}
} else {
System.out.println("\nInbound message:");
}
return true;
}
I tried to replicate your problem by putting together a simple service that accepts an image transferred by MTOM. I found that if I put the MTOM-enabling code before setting the handler, it encodes the message properly. If I set the handler first, it does not. Here is where I set up the properly functioning client code:
Service service = Service.create(url, qname);
Greeting greeting = service.getPort(Greeting.class);
BindingProvider bp = (BindingProvider) greeting;
SOAPBinding binding = (SOAPBinding) bp.getBinding();
binding.setMTOMEnabled(true);
service.setHandlerResolver(new HandlerResolver() {
@SuppressWarnings("rawtypes")
public List<Handler> getHandlerChain(PortInfo portInfo) {
List<Handler> handlerList = new ArrayList<Handler>();
handlerList.add(new RGBSOAPHandler());
return handlerList;
}
});
Where RGBSOAPHandler is just some example code I took from another SO answer.
EDIT: Also, if I try setting the handler on the binding and not the service then I get the same problem that you do. So if it looks like this:
Service service = Service.create(url, qname);
Greeting greeting = service.getPort(Greeting.class);
BindingProvider bp = (BindingProvider) greeting;
SOAPBinding binding = (SOAPBinding) bp.getBinding();
binding.setMTOMEnabled(true);
List<Handler> handlerList = new ArrayList<Handler>();
handlerList.add(new RGBSOAPHandler());
binding.setHandlerChain(handlerList);
Then my file is encoded in-line. I don't know why this is, but I suppose the answer is "don't do that". Set your handlers on the Service object.
Looks like I'm limited by the framework and the way the handlers work. I think at this stage my only option is to go to a lower level. I did take a look at using tubes, but the same behaviour exhibited itself, so it looks as though any attempt to work with the XML of the request fails. As such, I'm going to abandon handlers for the time being and investigate at a lower level whether I can make use of codecs to do what I'm after. An MTOM implementation sounds like it may do what I'm after at the byte level:
http://jax-ws.java.net/nonav/jax-ws-20-fcs/arch/com/sun/xml/ws/encoding/MtomCodec.html
I imagined this would be a lot less complex to get working but will update with my progress on the codec front.
@David: Thanks for your help on the handler front, but it looks as though there is no solution at that level.
Update 1
Came up with an alternate solution that works for my purposes.
I sign the necessary parts of the SOAP message using my SOAPHandler.
I wrote a new SOAPHandler that takes the resultant message and manually extracts the incorrectly inlined binary content.
I then create an AttachmentPart and inject the content from step 2 into it. It accepts Base64-encoded text too, which is handy. That AttachmentPart then has a reference UUID assigned as its Content-Id.
I then create a new element, in place of the Base64 content in the SOAP body, that references the UUID, along the lines of the following:
<xop:Include xmlns:xop="http://www.w3.org/2004/08/xop/include" href="cid:UUID!!!"></xop:Include>
I'll probably write a blog post on this, as it's been a bit of an epic journey to get to this point. It was not the best solution, but it was certainly easier than going down the tubes/codec path.