I have a project that currently uses Spring Cloud Stream with RabbitMQ underneath. I've implemented logic based on the documentation. See below:
@Component
public class ReRouteDlq {

    private static final String ORIGINAL_QUEUE = "so8400in.so8400";
    private static final String DLQ = ORIGINAL_QUEUE + ".dlq";
    private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";
    private static final String X_RETRIES_HEADER = "x-retries";
    private static final String X_ORIGINAL_EXCHANGE_HEADER = RepublishMessageRecoverer.X_ORIGINAL_EXCHANGE;
    private static final String X_ORIGINAL_ROUTING_KEY_HEADER = RepublishMessageRecoverer.X_ORIGINAL_ROUTING_KEY;

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @RabbitListener(queues = DLQ)
    public void rePublish(Message failedMessage) {
        Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders();
        Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);
        if (retriesHeader == null) {
            retriesHeader = Integer.valueOf(0);
        }
        if (retriesHeader < 3) {
            headers.put(X_RETRIES_HEADER, retriesHeader + 1);
            String exchange = (String) headers.get(X_ORIGINAL_EXCHANGE_HEADER);
            String originalRoutingKey = (String) headers.get(X_ORIGINAL_ROUTING_KEY_HEADER);
            this.rabbitTemplate.send(exchange, originalRoutingKey, failedMessage);
        } else {
            this.rabbitTemplate.send(PARKING_LOT, failedMessage);
        }
    }

    @Bean
    public Queue parkingLot() {
        return new Queue(PARKING_LOT);
    }
}
It does what is expected; however, it is bound to RabbitMQ, and my company is planning to stop using this message broker in a year or two (don't know why, must be some crazy business). So, I want to implement the same thing, but detached from any particular message broker.
I tried changing the rePublish method this way, but it does not work:
@StreamListener(Sync.DLQ)
public void rePublish(Message failedMessage) {
    Map<String, Object> headers = failedMessage.getHeaders();
    Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);
    if (retriesHeader == null) {
        retriesHeader = Integer.valueOf(0);
    }
    if (retriesHeader < 3) {
        headers.put(X_RETRIES_HEADER, retriesHeader + 1);
        String exchange = (String) headers.get(X_ORIGINAL_EXCHANGE_HEADER);
        String originalRoutingKey = (String) headers.get(X_ORIGINAL_ROUTING_KEY_HEADER);
        this.rabbitTemplate.send(exchange, originalRoutingKey, failedMessage);
    } else {
        this.rabbitTemplate.send(PARKING_LOT, failedMessage);
    }
}
It fails because the Message class has immutable headers: the put attempt throws an exception saying the values cannot be changed (it uses the org.springframework.messaging.Message class).
Is there a way to implement this dead-letter queue handler in a message-broker-independent way?
Use

MessageBuilder.fromMessage(message)
        .setHeader("foo", "bar")
        ...
        .build();
Note that the message in a @StreamListener is a spring-messaging Message<?>, not a spring-amqp Message, and it can't be sent using the template that way; you need an output binding to send the message to.
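Putting those two pieces together, a minimal broker-independent sketch might look like the following. The Sync binding interface below (its channel names and the parkingLot output channel) is an assumption for illustration, not taken from the original code:

public interface Sync {

    String DLQ = "dlqChannel";          // assumed channel names
    String OUTPUT = "output";
    String PARKING_LOT = "parkingLot";

    @Input(DLQ)
    SubscribableChannel dlq();

    @Output(OUTPUT)
    MessageChannel output();

    @Output(PARKING_LOT)
    MessageChannel parkingLot();
}

@EnableBinding(Sync.class)
public class ReRouteDlq {

    private static final String X_RETRIES_HEADER = "x-retries";

    private final Sync channels;

    public ReRouteDlq(Sync channels) {
        this.channels = channels;
    }

    @StreamListener(Sync.DLQ)
    public void rePublish(Message<?> failedMessage) {
        Integer retriesHeader = failedMessage.getHeaders().get(X_RETRIES_HEADER, Integer.class);
        int retries = (retriesHeader == null) ? 0 : retriesHeader;
        if (retries < 3) {
            // Headers are immutable, so build a new message instead of mutating.
            channels.output().send(MessageBuilder.fromMessage(failedMessage)
                    .setHeader(X_RETRIES_HEADER, retries + 1)
                    .build());
        } else {
            channels.parkingLot().send(failedMessage);
        }
    }
}

The actual destinations behind output and parkingLot are then configured per binder in application properties, which is what keeps the handler itself broker-independent.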
In my Flink script I have a stream that I'm getting from one Kafka topic; I manipulate it and send it back to Kafka using the sink.
public static void main(String[] args) throws Exception {
    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(1);
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

    Properties p = new Properties();
    p.setProperty("bootstrap.servers", servers_ip_list);
    p.setProperty("group.id", "Flink");

    FlinkKafkaConsumer<Event_N> kafkaData_N =
            new FlinkKafkaConsumer<>("CorID_0", new Ev_Des_Sch_N(), p);
    WatermarkStrategy<Event_N> wmStrategy =
            WatermarkStrategy
                    .<Event_N>forMonotonousTimestamps()
                    .withIdleness(Duration.ofMinutes(1))
                    .withTimestampAssigner((Event, timestamp) -> {
                        return Event.get_Time();
                    });
    DataStream<Event_N> stream_N = env.addSource(
            kafkaData_N.assignTimestampsAndWatermarks(wmStrategy));
The part above works fine with no problems at all; the part below is where I'm getting the issue.
String ProducerTopic = "CorID_0_f1";

DataStream<Stream_Blocker_Pojo.block> box_stream_p = stream_N
        .keyBy((Event_N CorrID) -> CorrID.get_CorrID())
        .map(new Stream_Blocker_Pojo());

FlinkKafkaProducer<Stream_Blocker_Pojo.block> myProducer = new FlinkKafkaProducer<>(
        ProducerTopic,
        new ObjSerializationSchema(ProducerTopic),
        p,
        FlinkKafkaProducer.Semantic.EXACTLY_ONCE); // fault-tolerance

box_stream_p.addSink(myProducer);
No errors, everything works fine. This is the Stream_Blocker_Pojo, where I map the incoming stream, manipulate it, and send out a new one. (I have simplified my code, keeping just 4 variables and removing all the math and data processing.)
public class Stream_Blocker_Pojo extends RichMapFunction<Event_N, Stream_Blocker_Pojo.block> {

    public class block {
        public Double block_id;
        public Double block_var2;
        public Double block_var3;
        public Double block_var4;
    }

    private transient ValueState<block> state_a;

    @Override
    public void open(Configuration parameters) throws Exception {
        state_a = getRuntimeContext().getState(new ValueStateDescriptor<>("BoxState_a", block.class));
    }

    @Override
    public block map(Event_N input) throws Exception {
        p1.Stream_Blocker_Pojo.block current_a = state_a.value();
        if (current_a == null) {
            current_a = new p1.Stream_Blocker_Pojo.block();
            current_a.block_id = 0.0;
            current_a.block_var2 = 0.0;
            current_a.block_var3 = 0.0;
            current_a.block_var4 = 0.0;
        }
        current_a.block_id = input.f_num_id;
        current_a.block_var2 = input.f_num_2;
        current_a.block_var3 = input.f_num_3;
        current_a.block_var4 = input.f_num_4;
        state_a.update(current_a);
        return new block();
    }
}
This is the implementation of the Kafka serialization schema.
public class ObjSerializationSchema implements KafkaSerializationSchema<Stream_Blocker_Pojo.block> {

    private String topic;
    private ObjectMapper mapper;

    public ObjSerializationSchema(String topic) {
        super();
        this.topic = topic;
    }

    @Override
    public ProducerRecord<byte[], byte[]> serialize(Stream_Blocker_Pojo.block obj, Long timestamp) {
        byte[] b = null;
        if (mapper == null) {
            mapper = new ObjectMapper();
        }
        try {
            b = mapper.writeValueAsBytes(obj);
        } catch (JsonProcessingException e) {
        }
        return new ProducerRecord<byte[], byte[]>(topic, b);
    }
}
When I open the messages that I sent from my Flink script in Kafka, I find that all the variables are "null":
CorrID b'{"block_id":null,"block_var1":null,"block_var2":null,"block_var3":null,"block_var4":null}
It looks like I'm sending out an empty object with no values. But I'm struggling to understand what I'm doing wrong. I think the problem could be in my implementation of Stream_Blocker_Pojo, or maybe in ObjSerializationSchema. Any help would be really appreciated. Thanks
There are two probable issues here:
Are you sure the block instance you are passing doesn't have null fields? You may want to debug that part to be sure.
The reason may also be in the ObjectMapper: you should have getters and setters available for your block class, otherwise Jackson may not be able to access the fields.
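For the second point, here is a minimal sketch of a Jackson-friendly version of the nested class, assuming it is turned into a static bean with accessors (the names mirror the question's block class; the restructuring itself is the assumption being illustrated):

public static class Block {

    private Double blockId;
    private Double blockVar2;
    private Double blockVar3;
    private Double blockVar4;

    // Getters and setters let Jackson read and write every field.
    public Double getBlockId() { return blockId; }
    public void setBlockId(Double blockId) { this.blockId = blockId; }

    public Double getBlockVar2() { return blockVar2; }
    public void setBlockVar2(Double blockVar2) { this.blockVar2 = blockVar2; }

    public Double getBlockVar3() { return blockVar3; }
    public void setBlockVar3(Double blockVar3) { this.blockVar3 = blockVar3; }

    public Double getBlockVar4() { return blockVar4; }
    public void setBlockVar4(Double blockVar4) { this.blockVar4 = blockVar4; }
}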
I'm using BatchingRabbitTemplate to send messages in a batch to an AMQP endpoint. Now, on the receiving end, I can use @RabbitListener to receive messages, but my problem is that the messages are automatically de-batched, so I cannot use @RabbitHandler public void receive(List<SomeObject> so). Is there any simpler way of not de-batching messages than doing this:
@RabbitListener(..., containerFactory = "nonDeBatchingContainerFactory")

@Bean
public RabbitListenerContainerFactory nonDeBatchingContainerFactory() {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    factory.setDeBatchingEnabled(false);
    factory.setMessageConverter(jackson2JsonMessageConverter());
    factory.setAfterReceivePostProcessors(new NonDeBatchingMessagePostProcessor(jackson2JsonMessageConverter()));
    return factory;
}
and then implementing this post-processor (which is more or less a copy of the existing de-batching code)?
public class NonDeBatchingMessagePostProcessor implements MessagePostProcessor {

    private MessageConverter payloadConverter;

    public NonDeBatchingMessagePostProcessor(MessageConverter payloadConverter) {
        this.payloadConverter = payloadConverter;
    }

    @Override
    public Message postProcessMessage(Message message) throws AmqpException {
        Object batchFormat = message.getMessageProperties().getHeaders().get(MessageProperties.SPRING_BATCH_FORMAT);
        if (MessageProperties.BATCH_FORMAT_LENGTH_HEADER4.equals(batchFormat)) {
            List<? super Object> aggregatedObjects = new ArrayList<>();
            ByteBuffer byteBuffer = ByteBuffer.wrap(message.getBody());
            MessageProperties messageProperties = message.getMessageProperties();
            String singleObjectTypeId = messageProperties.getHeaders().get(DEFAULT_CLASSID_FIELD_NAME).toString();
            messageProperties.getHeaders().remove(MessageProperties.SPRING_BATCH_FORMAT);
            while (byteBuffer.hasRemaining()) {
                int length = byteBuffer.getInt();
                if (length < 0 || length > byteBuffer.remaining()) {
                    throw new ListenerExecutionFailedException("Bad batched message received",
                            new MessageConversionException("Insufficient batch data at offset " + byteBuffer.position()),
                            message);
                }
                byte[] body = new byte[length];
                byteBuffer.get(body);
                messageProperties.setContentLength(length);
                // Caveat - shared MessageProperties.
                Message fragment = new Message(body, messageProperties);
                Object singleObject = this.payloadConverter.fromMessage(fragment);
                aggregatedObjects.add(singleObject);
            }
            Message aggregatedMessages = this.payloadConverter.toMessage(aggregatedObjects, messageProperties);
            aggregatedMessages.getMessageProperties().getHeaders().put(DEFAULT_CONTENT_CLASSID_FIELD_NAME, singleObjectTypeId);
            return aggregatedMessages;
        }
        return null;
    }
}
I need this in order to receive all messages in a batch from RabbitMQ and then do bulk indexing in Elasticsearch. Thanks.
It might be a bit easier to do the batching at the producing application level (send a List<SomeObject>) rather than using the batching template. Then you won't need anything at all on the consumer side.
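Here is a minimal sketch of that producer-side approach, assuming a plain RabbitTemplate configured with a Jackson2JsonMessageConverter; the queue name and the collectBatch()/bulkIndex() helpers are placeholders, not part of the original code:

// Producer side: the whole list goes out as one JSON message.
List<SomeObject> batch = collectBatch();
rabbitTemplate.convertAndSend("some.queue", batch);

// Consumer side: the list arrives as a single message, so no custom
// container factory or post-processor is needed.
@RabbitListener(queues = "some.queue")
public void receive(List<SomeObject> batch) {
    bulkIndex(batch); // e.g. one Elasticsearch bulk request per AMQP message
}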
Using online documentation I came up with the following code to terminate the current EC2 instance:
public class Ec2Utility {

    static private final String LOCAL_META_DATA_ENDPOINT = "http://169.254.169.254/latest/meta-data/";
    static private final String LOCAL_INSTANCE_ID_SERVICE = "instance-id";

    static public void terminateMe() throws Exception {
        TerminateInstancesRequest terminateRequest = new TerminateInstancesRequest().withInstanceIds(getInstanceId());
        AmazonEC2 ec2 = new AmazonEC2Client();
        ec2.terminateInstances(terminateRequest);
    }

    static public String getInstanceId() throws Exception {
        // SimpleRestClient is an internal wrapper around an HTTP client.
        SimpleRestClient client = new SimpleRestClient(LOCAL_META_DATA_ENDPOINT);
        HttpResponse response = client.makeRequest(METHOD.GET, LOCAL_INSTANCE_ID_SERVICE);
        return IOUtils.toString(response.getEntity().getContent(), "UTF-8");
    }
}
My issue is that my EC2 instance is under an AutoScalingGroup, which is under a CloudFormation stack; that is because of my organisation's deployment standards, though this single EC2 instance is all there is for this feature.
So, I want to terminate the entire CloudFormation stack from the Java SDK. Keep in mind that I don't have the CloudFormation stack name in advance, just as I didn't have the EC2 instance id, so I will have to get it from the code using API calls.
How can I do that, if it can be done at all?
You should be able to use the deleteStack method from the CloudFormation SDK:
DeleteStackRequest request = new DeleteStackRequest();
request.setStackName(<stack_name_to_be_deleted>);
AmazonCloudFormationClient client = new AmazonCloudFormationClient(<credentials>);
client.deleteStack(request);
If you don't have the stack name, you should be able to retrieve it from the tags of your instance:
DescribeInstancesRequest request = new DescribeInstancesRequest();
request.setInstanceIds(instancesList);
DescribeInstancesResult disresult = ec2.describeInstances(request);
List<Reservation> list = disresult.getReservations();
for (Reservation res : list) {
    List<Instance> instancelist = res.getInstances();
    for (Instance instance : instancelist) {
        List<Tag> tags = instance.getTags();
        for (Tag tag : tags) {
            if (tag.getKey().equals("aws:cloudformation:stack-name")) {
                tag.getValue(); // name of the stack
            }
        }
    }
}
In the end I achieved the desired behaviour using the following set of util functions I wrote:
/**
 * Delete the CloudFormationStack with the given name.
 *
 * @param stackName
 * @throws Exception
 */
static public void deleteCloudFormationStack(String stackName) throws Exception {
    AmazonCloudFormationClient client = new AmazonCloudFormationClient();
    DeleteStackRequest deleteStackRequest = new DeleteStackRequest().withStackName(stackName);
    client.deleteStack(deleteStackRequest);
}

static public String getCloudFormationStackName() throws Exception {
    AmazonEC2 ec2 = new AmazonEC2Client();
    String instanceId = getInstanceId();
    List<Tag> tags = getEc2Tags(ec2, instanceId);
    for (Tag t : tags) {
        if (t.getKey().equalsIgnoreCase(TAG_KEY_STACK_NAME)) {
            return t.getValue();
        }
    }
    throw new Exception("Couldn't find stack name for instanceId:" + instanceId);
}

static private List<Tag> getEc2Tags(AmazonEC2 ec2, String instanceId) throws Exception {
    DescribeInstancesRequest describeInstancesRequest = new DescribeInstancesRequest().withInstanceIds(instanceId);
    DescribeInstancesResult describeInstances = ec2.describeInstances(describeInstancesRequest);
    List<Reservation> reservations = describeInstances.getReservations();
    if (reservations.isEmpty()) {
        throw new Exception("DescribeInstances didn't return a reservation for instanceId:" + instanceId);
    }
    List<Instance> instances = reservations.get(0).getInstances();
    if (instances.isEmpty()) {
        throw new Exception("DescribeInstances didn't return an instance for instanceId:" + instanceId);
    }
    return instances.get(0).getTags();
}

// XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
// Example of usage from the code:
deleteCloudFormationStack(getCloudFormationStackName());
// XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Play 2.5 Highlights states
Better control over WebSocket frames
The Play 2.5 WebSocket API gives you direct control over WebSocket frames. You can now send and receive binary, text, ping, pong and close frames. If you don’t want to worry about this level of detail, Play will still automatically convert your JSON or XML data into the right kind of frame.
However
https://www.playframework.com/documentation/2.5.x/JavaWebSockets has examples around LegacyWebSocket which is deprecated
What is the recommended API/pattern for Java WebSockets? Is using
LegacyWebSocket the only option for Java WebSockets?
Are there any examples using the new message types ping/pong to implement a heartbeat?
The official documentation on this is disappointingly sparse. Perhaps in Play 2.6 we'll see an update to this. However, I will provide an example below of how to configure a chat websocket in Play 2.5, just to help out those in need.
Setup
AController.java
@Inject
private Materializer materializer;

@Inject
private ActorSystem actorSystem; // needed below to create the connection actors

private ActorRef chatSocketRouter;

@Inject
public AController(@Named("chatSocketRouter") ActorRef chatInjectedActor) {
    this.chatSocketRouter = chatInjectedActor;
}

// Make a chat websocket for a user
public WebSocket chatSocket() {
    return WebSocket.Json.acceptOrResult(request -> {
        String authToken = getAuthToken();

        // Checking of token
        if (authToken == null) {
            return forbiddenResult("No [authToken] supplied.");
        }

        // Could we find the token in the database?
        final AuthToken token = AuthToken.findByToken(authToken);
        if (token == null) {
            return forbiddenResult("Could not find [authToken] in DB. Login again.");
        }

        User user = token.getUser();
        if (user == null) {
            return forbiddenResult("You are not logged in to view this stream.");
        }

        Long userId = user.getId();

        // Create a function to be run when we initialise a flow.
        // A flow basically links actors together.
        AbstractFunction1<ActorRef, Props> getWebSocketActor = new AbstractFunction1<ActorRef, Props>() {
            @Override
            public Props apply(ActorRef connectionProperties) {
                // We use the ActorRef provided in the param above to make some properties.
                // An ActorRef is a fancy word for thread reference.
                // The ChatSocketActor manages the web socket connection for one user.
                // ChatSocketActor.props() means "make one thread (from the ChatSocketActor)
                // and return the properties on how to reference it".
                // The resulting Props basically state how to construct that thread.
                Props properties = ChatSocketActor.props(connectionProperties, chatSocketRouter, userId);

                // We can have many connections per user, so we need many ActorRefs
                // (threads) per user. As you can see from the code below, we do exactly
                // that. We have an object called chatSocketRouter which holds a Map of
                // userIds -> connection threads, and we "tell" it a lightweight object
                // (UserMessage) made up of this connecting user's ID and the connection.
                // As stated above, Props are basically a way of describing an Actor or,
                // dumbed down, a thread.
                ActorRef anotherUserDevice = actorSystem.actorOf(properties);

                // Create a lightweight object...
                UserMessage routeThisUser = new UserMessage(userId, anotherUserDevice);
                // ... to tell the thread that has our Map that we have a new
                // connection from a user.
                chatSocketRouter.tell(routeThisUser, ActorRef.noSender());

                // We return the properties to the thread that will be managing
                // this user's connection.
                return properties;
            }
        };

        final Flow<JsonNode, JsonNode, ?> jsonNodeFlow =
                ActorFlow.<JsonNode, JsonNode>actorRef(getWebSocketActor,
                        100,
                        OverflowStrategy.dropTail(),
                        actorSystem,
                        materializer).asJava();

        final F.Either<Result, Flow<JsonNode, JsonNode, ?>> right = F.Either.Right(jsonNodeFlow);
        return CompletableFuture.completedFuture(right);
    });
}

// Return this whenever we want to reject a
// user from connecting to a websocket
private CompletionStage<F.Either<Result, Flow<JsonNode, JsonNode, ?>>> forbiddenResult(String msg) {
    final Result forbidden = Results.forbidden(msg);
    final F.Either<Result, Flow<JsonNode, JsonNode, ?>> left = F.Either.Left(forbidden);
    return CompletableFuture.completedFuture(left);
}
ChatSocketActor.java
public class ChatSocketActor extends UntypedActor {

    private final ActorRef out;
    private final Long userId;
    private ActorRef chatSocketRouter;

    public ChatSocketActor(ActorRef out, ActorRef chatSocketRouter, Long userId) {
        this.out = out;
        this.userId = userId;
        this.chatSocketRouter = chatSocketRouter;
    }

    public static Props props(ActorRef out, ActorRef chatSocketRouter, Long userId) {
        return Props.create(ChatSocketActor.class, out, chatSocketRouter, userId);
    }

    // Add methods here handling each chat connection...
}
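For completeness, here is a minimal sketch of what a handler inside ChatSocketActor might look like; the echo behaviour is an assumption for illustration, not part of the original answer:

@Override
public void onReceive(Object message) throws Exception {
    if (message instanceof JsonNode) {
        // A frame arrived from this user's client; echo it straight back
        // down the socket. A real handler would route it via chatSocketRouter.
        out.tell(message, getSelf());
    } else {
        unhandled(message);
    }
}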
ChatSocketRouter.java
public class ChatSocketRouter extends UntypedActor {

    public ChatSocketRouter() {}

    // Stores userIds to websockets
    private final HashMap<Long, List<ActorRef>> senders = new HashMap<>();

    private void addSender(Long userId, ActorRef actorRef) {
        if (senders.containsKey(userId)) {
            final List<ActorRef> actors = senders.get(userId);
            actors.add(actorRef);
            senders.replace(userId, actors);
        } else {
            List<ActorRef> l = new ArrayList<>();
            l.add(actorRef);
            senders.put(userId, l);
        }
    }

    private void removeSender(ActorRef actorRef) {
        for (List<ActorRef> refs : senders.values()) {
            refs.remove(actorRef);
        }
    }

    @Override
    public void onReceive(Object message) throws Exception {
        ActorRef sender = getSender();

        // Handle messages sent to this 'router' here
        if (message instanceof UserMessage) {
            UserMessage userMessage = (UserMessage) message;
            addSender(userMessage.userId, userMessage.actorRef);
            // Watch sender so we can detect when they die.
            getContext().watch(sender);
        } else if (message instanceof Terminated) {
            // One of our watched senders has died.
            removeSender(sender);
        } else {
            unhandled(message);
        }
    }
}
Example
Now whenever you want to send a message to a client with a websocket connection, you can do something like:
ChatSenderController.java
private ActorRef chatSocketRouter;
#Inject
public ChatSenderController(#Named("chatSocketRouter") ActorRef chatInjectedActor) {
this.chatSocketRouter = chatInjectedActor;
}
public static void sendMessage(Long sendToId) {
// E.g. send the chat router a message that says hi
chatSocketRouter.tell(new Message(sendToId, "Hi"));
}
ChatSocketRouter.java
@Override
public void onReceive(Object message) throws Exception {
    // ...
    if (message instanceof Message) {
        Message messageToSend = (Message) message;

        // Loop through the list above and send the message to
        // each connection. For example...
        for (ActorRef wsConnection : senders.get(messageToSend.getSendToId())) {
            // Send "Hi" to each of the other client's
            // connected sessions
            wsConnection.tell(messageToSend.getMessage(), getSelf());
        }
    }
    // ...
}
Again, I wrote the above to help out those in need. After scouring the web I could not find a reasonable and simple example. There is an open issue for this exact topic. There are also some examples online but none of them were easy to follow. Akka has some great documentation but mixing it in with Play was a tough mental task.
Please help improve this answer if you see anything that is amiss.
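On the ping/pong part of the question: the JSON-flavoured WebSocket.Json.acceptOrResult used above never surfaces protocol-level ping/pong frames to your actor, so one common workaround is an application-level heartbeat. Below is a sketch using the Akka scheduler inside a connection actor; the 30-second interval, the "heartbeat" message shape, and the client echoing it back are all assumptions for illustration:

public class HeartbeatingSocketActor extends UntypedActor {

    private final ActorRef out;
    private Cancellable heartbeat;

    public HeartbeatingSocketActor(ActorRef out) {
        this.out = out;
    }

    @Override
    public void preStart() throws Exception {
        // Schedule a "tick" message to ourselves every 30 seconds.
        heartbeat = getContext().system().scheduler().schedule(
                Duration.create(30, TimeUnit.SECONDS),
                Duration.create(30, TimeUnit.SECONDS),
                getSelf(), "tick",
                getContext().dispatcher(), getSelf());
    }

    @Override
    public void postStop() throws Exception {
        heartbeat.cancel();
    }

    @Override
    public void onReceive(Object message) throws Exception {
        if ("tick".equals(message)) {
            // Application-level heartbeat; the client is expected to reply,
            // and a missed-reply counter here could be used to close dead sockets.
            out.tell(Json.newObject().put("type", "heartbeat"), getSelf());
        } else {
            unhandled(message);
        }
    }
}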
I am retrieving multiple rows from a table and want my processor to process them together, but I observe that Camel calls my processor for every row; I want to pass a List of entities instead.
Following is my code:
from("jpa:com.pns.ab.model.LoanRequest?consumeDelete=false"
+ "&consumer.delay=20000"
+ "&consumer.namedQuery=selectLoanRequests"
+ "&persistenceUnit=LoanServicePU").process(new JpaProcessor());
In the processor:
LoanRequest lr = exchange.getIn().getBody(LoanRequest.class);
but I want something like:
List<LoanRequest> requests = exchange....
Regards,
Use an aggregator:
private static class JpaAggregationRouteBuilder extends RouteBuilder {
    @Override
    public void configure() {
        from("jpa:com.pns.ab.model.LoanRequest?consumeDelete=false"
                + "&consumer.delay=20000"
                + "&consumer.namedQuery=selectLoanRequests"
                + "&persistenceUnit=LoanServicePU")
            .aggregate(constant(true), new ArrayListAggregationStrategy())
            .completionFromBatchConsumer()
            .process(new JpaProcessor());
    }
}
// Simply combines Exchange body values into an ArrayList<Object>
// Taken from http://camel.apache.org/aggregator2
private static class ArrayListAggregationStrategy implements AggregationStrategy {

    @SuppressWarnings("unchecked")
    @Override
    public Exchange aggregate(final Exchange oldExchange, final Exchange newExchange) {
        final Object newBody = newExchange.getIn().getBody();
        ArrayList<Object> list = null;
        if (oldExchange == null) {
            list = new ArrayList<Object>();
            if (newBody != null) {
                list.add(newBody);
            }
            newExchange.getIn().setBody(list);
            return newExchange;
        } else {
            list = oldExchange.getIn().getBody(ArrayList.class);
            if (newBody != null) {
                list.add(newBody);
            }
            return oldExchange;
        }
    }
}
More information about the aggregator can be found on the Camel webpage.
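For reference, a minimal sketch of what JpaProcessor could then look like; pulling the aggregated list out of the exchange body this way is the assumption being illustrated:

public class JpaProcessor implements Processor {

    @SuppressWarnings("unchecked")
    @Override
    public void process(Exchange exchange) throws Exception {
        // After aggregation, the exchange body is the ArrayList built above.
        List<LoanRequest> requests = exchange.getIn().getBody(List.class);
        for (LoanRequest request : requests) {
            // handle each entity, or hand the whole list to a bulk operation
        }
    }
}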