Non de-batching messages in Spring AMQP - Java

I'm using BatchingRabbitTemplate to send messages in a batch to an AMQP endpoint. On the receiving end I can use @RabbitListener to receive messages, but my problem is that the messages are automatically de-batched, so I cannot use @RabbitHandler public void receive(List<SomeObject> so). Is there any simpler way of not de-batching the messages than doing this:
@RabbitListener(..., containerFactory = "nonDeBatchingContainerFactory")

@Bean
public RabbitListenerContainerFactory nonDeBatchingContainerFactory() {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    factory.setDeBatchingEnabled(false);
    factory.setMessageConverter(jackson2JsonMessageConverter());
    factory.setAfterReceivePostProcessors(new NonDeBatchingMessagePostProcessor(jackson2JsonMessageConverter()));
    return factory;
}
and then implementing this post-processor (which is more or less a copy of the existing de-batching code)?
public class NonDeBatchingMessagePostProcessor implements MessagePostProcessor {

    private final MessageConverter payloadConverter;

    public NonDeBatchingMessagePostProcessor(MessageConverter payloadConverter) {
        this.payloadConverter = payloadConverter;
    }

    @Override
    public Message postProcessMessage(Message message) throws AmqpException {
        Object batchFormat = message.getMessageProperties().getHeaders().get(MessageProperties.SPRING_BATCH_FORMAT);
        if (MessageProperties.BATCH_FORMAT_LENGTH_HEADER4.equals(batchFormat)) {
            List<? super Object> aggregatedObjects = new ArrayList<>();
            ByteBuffer byteBuffer = ByteBuffer.wrap(message.getBody());
            MessageProperties messageProperties = message.getMessageProperties();
            String singleObjectTypeId = messageProperties.getHeaders().get(DEFAULT_CLASSID_FIELD_NAME).toString();
            messageProperties.getHeaders().remove(MessageProperties.SPRING_BATCH_FORMAT);
            while (byteBuffer.hasRemaining()) {
                int length = byteBuffer.getInt();
                if (length < 0 || length > byteBuffer.remaining()) {
                    throw new ListenerExecutionFailedException("Bad batched message received",
                            new MessageConversionException("Insufficient batch data at offset " + byteBuffer.position()),
                            message);
                }
                byte[] body = new byte[length];
                byteBuffer.get(body);
                messageProperties.setContentLength(length);
                // Caveat - shared MessageProperties.
                Message fragment = new Message(body, messageProperties);
                Object singleObject = this.payloadConverter.fromMessage(fragment);
                aggregatedObjects.add(singleObject);
            }
            Message aggregatedMessages = this.payloadConverter.toMessage(aggregatedObjects, messageProperties);
            aggregatedMessages.getMessageProperties().getHeaders().put(DEFAULT_CONTENT_CLASSID_FIELD_NAME, singleObjectTypeId);
            return aggregatedMessages;
        }
        return message; // not a batch; pass the message through unchanged
    }
}
I need this in order to receive all the messages of a batch together on the RabbitMQ side and then do bulk indexing into Elasticsearch. Thanks.

It might be a bit easier to do the batching at the producing application level (send a List<SomeObject>) rather than using the batching template. Then you won't need anything at all on the consumer side.
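For illustration, a rough sketch of that approach (the exchange, routing key, queue name, and converter setup are assumptions, not from the original post): the producer sends the whole list as one JSON message, and the listener gets the list back directly, so no batching machinery is involved.

// Producer side: send the whole list as a single JSON message.
// Assumes the RabbitTemplate is configured with a Jackson2JsonMessageConverter;
// exchange/routing key/queue names are hypothetical.
@Autowired
private RabbitTemplate rabbitTemplate;

public void sendBatch(List<SomeObject> batch) {
    this.rabbitTemplate.convertAndSend("someExchange", "someRoutingKey", batch);
}

// Consumer side: the JSON converter rebuilds the list; no de-batching involved.
@RabbitListener(queues = "someQueue")
public void receive(List<SomeObject> batch) {
    // e.g. issue one Elasticsearch bulk request per received list
}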

Related

Handling dead-letter queue message-broker independent way

I have a project that currently uses Spring Cloud Stream with RabbitMQ underneath. I've implemented logic based on the documentation. See below:
@Component
public class ReRouteDlq {

    private static final String ORIGINAL_QUEUE = "so8400in.so8400";
    private static final String DLQ = ORIGINAL_QUEUE + ".dlq";
    private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";
    private static final String X_RETRIES_HEADER = "x-retries";
    private static final String X_ORIGINAL_EXCHANGE_HEADER = RepublishMessageRecoverer.X_ORIGINAL_EXCHANGE;
    private static final String X_ORIGINAL_ROUTING_KEY_HEADER = RepublishMessageRecoverer.X_ORIGINAL_ROUTING_KEY;

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @RabbitListener(queues = DLQ)
    public void rePublish(Message failedMessage) {
        Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders();
        Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);
        if (retriesHeader == null) {
            retriesHeader = Integer.valueOf(0);
        }
        if (retriesHeader < 3) {
            headers.put(X_RETRIES_HEADER, retriesHeader + 1);
            String exchange = (String) headers.get(X_ORIGINAL_EXCHANGE_HEADER);
            String originalRoutingKey = (String) headers.get(X_ORIGINAL_ROUTING_KEY_HEADER);
            this.rabbitTemplate.send(exchange, originalRoutingKey, failedMessage);
        }
        else {
            this.rabbitTemplate.send(PARKING_LOT, failedMessage);
        }
    }

    @Bean
    public Queue parkingLot() {
        return new Queue(PARKING_LOT);
    }
}
It does what is expected; however, it is bound to RabbitMQ, and my company is planning to stop using this message broker in a year or two (I don't know why; must be some crazy business reason). So I want to implement the same thing, but detached from any specific message broker.
I tried changing the rePublish method this way, but it does not work:
@StreamListener(Sync.DLQ)
public void rePublish(Message failedMessage) {
    Map<String, Object> headers = failedMessage.getHeaders();
    Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);
    if (retriesHeader == null) {
        retriesHeader = Integer.valueOf(0);
    }
    if (retriesHeader < 3) {
        headers.put(X_RETRIES_HEADER, retriesHeader + 1);
        String exchange = (String) headers.get(X_ORIGINAL_EXCHANGE_HEADER);
        String originalRoutingKey = (String) headers.get(X_ORIGINAL_ROUTING_KEY_HEADER);
        this.rabbitTemplate.send(exchange, originalRoutingKey, failedMessage);
    }
    else {
        this.rabbitTemplate.send(PARKING_LOT, failedMessage);
    }
}
It fails because this Message class (org.springframework.messaging.Message) has immutable headers; the put attempt throws an exception saying the values cannot be changed.
Is there a way to implement this dead-letter queue handler in a message broker independent way?
Use

MessageBuilder.fromMessage(message)
        .setHeader("foo", "bar")
        ...
        .build();
Note that the message in a @StreamListener is a spring-messaging Message<?>, not a spring-amqp Message, and it can't be sent using the RabbitTemplate that way; you need an output binding to send the message to.
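For a broker-independent version, a minimal sketch of that idea (the binding interface, channel names, and destination mappings below are assumptions, not from the original post): rebuild the message with MessageBuilder and send it to output bindings instead of using the RabbitTemplate.

// Hypothetical binding interface (enable with @EnableBinding(DlqBindings.class));
// the channel names are assumptions.
public interface DlqBindings {

    @Input("dlqIn")
    SubscribableChannel dlqIn();

    @Output("retryOut")
    MessageChannel retryOut();

    @Output("parkingLotOut")
    MessageChannel parkingLotOut();
}

@Autowired
private DlqBindings bindings;

@StreamListener("dlqIn")
public void rePublish(Message<?> failedMessage) {
    Integer retriesHeader = failedMessage.getHeaders().get(X_RETRIES_HEADER, Integer.class);
    int retries = retriesHeader == null ? 0 : retriesHeader;
    // Headers are immutable, so build a new message instead of mutating them.
    Message<?> rebuilt = MessageBuilder.fromMessage(failedMessage)
            .setHeader(X_RETRIES_HEADER, retries + 1)
            .build();
    if (retries < 3) {
        this.bindings.retryOut().send(rebuilt);
    }
    else {
        this.bindings.parkingLotOut().send(rebuilt);
    }
}

The retryOut and parkingLotOut bindings would then be mapped to the original destination and the parking lot in application properties, so nothing in the code refers to RabbitMQ.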

How to handle JmsChannelFactoryBean errors, is there a possibility of custom error channel usage?

I have the following configuration for creating two channels (using JmsChannelFactoryBean):
@Bean
public JmsChannelFactoryBean jmsChannel(ActiveMQConnectionFactory activeMQConnectionFactory) {
    JmsChannelFactoryBean fb = new JmsChannelFactoryBean(true);
    fb.setConnectionFactory(activeMQConnectionFactory);
    fb.setDestinationName("something.queue");
    fb.setErrorHandler(t -> log.error("something went wrong on jms channel", t));
    return fb;
}

@Bean
public JmsChannelFactoryBean jmsChannelDLQ(ActiveMQConnectionFactory activeMQConnectionFactory) {
    JmsChannelFactoryBean fb = new JmsChannelFactoryBean(true);
    fb.setConnectionFactory(activeMQConnectionFactory);
    fb.setDestinationName("something.queue.DLQ");
    fb.setErrorHandler(t -> log.error("something went wrong on jms channel", t));
    return fb;
}
The something.queue is configured to put dead letters on something.queue.DLQ. I'm mostly using the Java DSL to configure the app and, if possible, would like to keep it that way.
The case is: a message is taken from jmsChannel and passed to an SFTP outbound gateway; if there is a problem sending the file, the message is put back on jmsChannel as not delivered. After some retries it is deemed poisonous and put on something.queue.DLQ.
Is it possible to get that information on an error channel when this happens?
What is the best practice for handling errors when using JMS-backed message channels?
EDIT 2
The integration flow is defined as:
IntegrationFlows.from(filesToProcessChannel).handle(outboundGateway)
where filesToProcessChannel is the JMS-backed channel and the outbound gateway is defined as:
@Bean
public SftpOutboundGateway outboundGateway(SftpRemoteFileTemplate sftpRemoteFileTemplate) {
    SftpOutboundGateway gateway = new SftpOutboundGateway(sftpRemoteFileTemplate, AbstractRemoteFileOutboundGateway.Command.PUT.getCommand(), EXPRESSION_PAYLOAD);
    ArrayList<Advice> adviceChain = new ArrayList<>();
    adviceChain.add(errorHandlingAdvice());
    gateway.setAdviceChain(adviceChain);
    return gateway;
}
I'm trying to grab the exception using an advice:
@Bean
public Advice errorHandlingAdvice() {
    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    RetryTemplate retryTemplate = new RetryTemplate();
    SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
    retryPolicy.setMaxAttempts(1);
    retryTemplate.setRetryPolicy(retryPolicy);
    advice.setRetryTemplate(retryTemplate);
    advice.setRecoveryCallback(new ErrorMessageSendingRecoverer(filesToProcessErrorChannel));
    return advice;
}
Is this the right way?
EDIT 3
There is certainly something wrong with SftpOutboundGateway and advices (or with me :/).
I used the following advice from the Spring Integration reference:
@Bean
public Advice expressionAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setSuccessChannelName("success.input");
    advice.setOnSuccessExpressionString("payload + ' was successful'");
    advice.setFailureChannelName("failure.input");
    advice.setOnFailureExpressionString(
            "payload + ' was bad, with reason: ' + #exception.cause.message");
    advice.setTrapException(true);
    return advice;
}

@Bean
public IntegrationFlow success() {
    return f -> f.handle(System.out::println);
}

@Bean
public IntegrationFlow failure() {
    return f -> f.handle(System.out::println);
}
And when I use:
return IntegrationFlows.from(filesToProcessChannel)
        .handle((GenericHandler<File>) (payload, headers) -> {
            if (payload.equals("x")) {
                return null;
            }
            else {
                throw new RuntimeException("some failure");
            }
        }, spec -> spec.advice(expressionAdvice()))
it gets called and I get the error message printed out (as expected), but when I try to use:
return IntegrationFlows.from(filesToProcessChannel)
        .handle(outboundGateway, spec -> spec.advice(expressionAdvice()))
the advice is not called, and the message is put back on the JMS queue.
The app is using Spring Boot v2.0.0.RELEASE, Spring v5.0.4.RELEASE.
EDIT 4
I managed to resolve the advice issue using the following configuration; I still don't understand why the handler spec does not work:
@Bean
IntegrationFlow files(SftpOutboundGateway outboundGateway,
        ...
) {
    return IntegrationFlows.from(filesToProcessChannel)
            .handle(outboundGateway)
            ...
            .log(LoggingHandler.Level.INFO)
            .get();
}

@Bean
public SftpOutboundGateway outboundGateway(SftpRemoteFileTemplate sftpRemoteFileTemplate) {
    SftpOutboundGateway gateway = new SftpOutboundGateway(sftpRemoteFileTemplate, AbstractRemoteFileOutboundGateway.Command.PUT.getCommand(), EXPRESSION_PAYLOAD);
    ArrayList<Advice> adviceChain = new ArrayList<>();
    adviceChain.add(expressionAdvice());
    gateway.setAdviceChain(adviceChain);
    return gateway;
}

@Bean
public ExpressionEvaluatingRequestHandlerAdvice expressionAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setSuccessChannelName("success.input");
    advice.setOnSuccessExpressionString("payload + ' was successful'");
    advice.setFailureChannelName("failure.input");
    advice.setOnFailureExpressionString(
            "payload + ' was bad, with reason: ' + #exception.cause.message");
    advice.setTrapException(true);
    return advice;
}

@Bean
public IntegrationFlow success() {
    return f -> f.handle(System.out::println);
}

@Bean
public IntegrationFlow failure() {
    return f -> f.handle(System.out::println);
}
Since the movement to the DLQ is performed by the broker, the application has no mechanism to log the situation - it is not even aware that it happened.
You would have to catch the exceptions yourself and publish the message to the DLQ yourself, after some number of attempts (check the JMSXDeliveryCount header), instead of using the broker policy.
EDIT
Add an Advice to the .handle() step.
.handle(outboundGateway, e -> e.advice(myAdvice))
Where myAdvice implements MethodInterceptor.
In the invoke method, after a failure, you can check the delivery count header and, if it exceeds your threshold, publish the message to the DLQ (e.g. send it to another channel that has a JMS outbound adapter subscribed) and log the error; if the threshold has not been exceeded, simply return the result of the invocation.proceed() (or rethrow the exception).
That way, you control publishing to the DLQ rather than having the broker do it. You can also add more information, such as the exception, to headers.
EDIT2
You need something like this
public class MyAdvice implements MethodInterceptor {

    @Autowired
    private MessageChannel toJms;

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        try {
            return invocation.proceed();
        }
        catch (Exception e) {
            Message<?> message = (Message<?>) invocation.getArguments()[0];
            Integer redeliveries = message.getHeaders().get("JMSXDeliveryCount", Integer.class);
            if (redeliveries != null && redeliveries > 3) {
                this.toJms.send(message); // maybe rebuild with additional headers about the error
                return null;
            }
            else {
                throw e;
            }
        }
    }
}
(it should be close, but I haven't tested it). It assumes your broker populates that header.

Spring Batch Integration Remote Chunking error - Message contained wrong job instance id [25] should have been [24]

I'm running into this bug (more info here), which appears to mean that for multi-threaded batches using remote chunking you can't use a common response channel. I'm not exactly sure how to proceed. Surely there's a way to get this working, because without it I can't see much benefit to remote chunking.
Here's my DSL config that creates a JobRequest:
@Bean
IntegrationFlow newPollingJobsAdapter(JobLaunchingGateway jobLaunchingGateway) {
    // Start by polling the DB for new PollingJobs according to the polling rate
    return IntegrationFlows.from(jdbcPollingChannelAdapter(),
            c -> c.poller(Pollers.fixedRate(10000)
                    // Do the polling on one of 10 threads.
                    .taskExecutor(Executors.newFixedThreadPool(10))
                    // pull out up to 100 new ids for each poll.
                    .maxMessagesPerPoll(100)))
            .log(LoggingHandler.Level.WARN)
            // The polling adapter above returns a list of ids. Split them out into
            // individual ids
            .split()
            // Now push each one onto a separate thread for batch processing.
            .channel(MessageChannels.executor(Executors.newFixedThreadPool(10)))
            .log(LoggingHandler.Level.WARN)
            // Transform each one into a JobLaunchRequest
            .<Long, JobLaunchRequest>transform(id -> {
                logger.warn("Creating job for ID {}", id);
                JobParametersBuilder builder = new JobParametersBuilder()
                        .addLong("polling-job-id", id, true);
                return new JobLaunchRequest(job, builder.toJobParameters());
            })
            .handle(jobLaunchingGateway)
            // TODO: Notify somebody? No idea yet
            .<JobExecution>handle(exec -> System.out.println("GOT EXECUTION: " + exec))
            .get();
}
Nothing in here is particularly special, no odd configs that I'm aware of.
The job itself is pretty straightforward, too:
/**
 * This is the definition of the entire batch process that runs polling.
 * @return
 */
@Bean
Job pollingJobJob() {
    return jobBuilderFactory.get("pollingJobJob")
            .incrementer(new RunIdIncrementer())
            // Ship it down to the slaves for actual processing
            .start(remoteChunkingStep())
            // Now mark it as complete
            .next(markCompleteStep())
            .build();
}

/**
 * Sends the job to a remote slave via an ActiveMQ-backed JMS queue.
 */
@Bean
TaskletStep remoteChunkingStep() {
    return stepBuilderFactory.get("polling-job-step-remote-chunking")
            .<Long, String>chunk(20)
            .reader(runningPollingJobItemReader)
            .processor(toJsonProcessor())
            .writer(chunkWriter)
            .build();
}

/**
 * This step just marks the PollerJob as Complete.
 */
@Bean
Step markCompleteStep() {
    return stepBuilderFactory.get("polling-job-step-mark-complete")
            // We want each PollerJob instance to be a separate job in batch, and the
            // reader is using the id passed in via job params to grab the one we want,
            // so we don't need a large chunk size. One at a time is fine.
            .<Long, Long>chunk(1)
            .reader(runningPollingJobItemReader)
            .processor(new PassThroughItemProcessor<Long>())
            .writer(this.completeStatusWriter)
            .build();
}
Here's the chunk writer config:
/**
 * This is part of the bridge between spring-batch and spring-integration. Nothing special or weird is going
 * on, so see the RemoteChunkHandlerFactoryBean for a description.
 */
@Bean
RemoteChunkHandlerFactoryBean<PollerJob> remoteChunkHandlerFactoryBean() {
    RemoteChunkHandlerFactoryBean<PollerJob> factory = new RemoteChunkHandlerFactoryBean<>();
    factory.setChunkWriter(chunkWriter);
    factory.setStep(remoteChunkingStep());
    return factory;
}

/**
 * This is the writer that will actually send the chunk to the slaves. Note that it also configures the
 * internal channel on which replies are expected.
 */
@Bean
@StepScope
ChunkMessageChannelItemWriter<String> chunkWriter() {
    ChunkMessageChannelItemWriter<String> writer = new ChunkMessageChannelItemWriter<>();
    writer.setMessagingOperations(batchMessagingTemplate());
    writer.setReplyChannel(batchResponseChannel());
    writer.setThrottleLimit(1000);
    return writer;
}
The problem seems to be that this last section sets up the ChunkMessageChannelItemWriter such that the replyChannel is the same one used by all of the writers, despite each writer being step-scoped. It would seem that I need to add a replyChannel header to one of the messages, but I'm not sure where in the chain to do that or how to process it (if I need to at all?).
Also, this is being sent to the slaves via JMS/ActiveMQ and I'd like to avoid having just a stupid number of nearly-identical queues on ActiveMQ just to support this.
What are my options?
Given that you are using a shared JMS infrastructure, you will need a router to get the responses back to the correct chunk writer.
If you use prototype scope on the batchResponseChannel() @Bean, you'll get a unique channel for each writer.
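For the reply channel itself, a minimal sketch of that change (assuming a batchResponseChannel() bean like the one referenced in the question):

// Prototype scope: each bean that depends on this channel gets its own instance.
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public PollableChannel batchResponseChannel() {
    return new QueueChannel();
}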
I don't have time to figure out how to set up a chunked batch job so the following simulates your environment (non-singleton bean that needs a unique reply channel for each instance). Hopefully it's self-explanatory...
@SpringBootApplication
public class So44806067Application {

    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(So44806067Application.class, args);
        SomeNonSingletonNeedingDistinctRequestAndReplyChannels chunker1 = context
                .getBean(SomeNonSingletonNeedingDistinctRequestAndReplyChannels.class);
        SomeNonSingletonNeedingDistinctRequestAndReplyChannels chunker2 = context
                .getBean(SomeNonSingletonNeedingDistinctRequestAndReplyChannels.class);
        if (chunker1.equals(chunker2)) {
            throw new IllegalStateException("Expected different instances");
        }
        chunker1.sendSome();
        chunker2.sendSome();
        ChunkResponse results = chunker1.getResults();
        if (results == null) {
            throw new IllegalStateException("No results1");
        }
        if (results.getJobId() != 1L) {
            throw new IllegalStateException("Incorrect routing1");
        }
        results = chunker2.getResults();
        if (results == null) {
            throw new IllegalStateException("No results2");
        }
        if (results.getJobId() != 2L) {
            throw new IllegalStateException("Incorrect routing2");
        }
        context.close();
    }

    @Bean
    public Map<Long, PollableChannel> registry() {
        // TODO: should clean up entry for jobId when job completes.
        return new ConcurrentHashMap<>();
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public SomeNonSingletonNeedingDistinctRequestAndReplyChannels chunker() {
        MessagingTemplate template = template();
        final PollableChannel replyChannel = replyChannel();
        SomeNonSingletonNeedingDistinctRequestAndReplyChannels bean =
                new SomeNonSingletonNeedingDistinctRequestAndReplyChannels(template, replyChannel);
        AbstractSubscribableChannel requestChannel = (AbstractSubscribableChannel) template.getDefaultDestination();
        requestChannel.addInterceptor(new ChannelInterceptorAdapter() {

            @Override
            public Message<?> preSend(Message<?> message, MessageChannel channel) {
                registry().putIfAbsent(((ChunkRequest<?>) message.getPayload()).getJobId(), replyChannel);
                return message;
            }

        });
        BridgeHandler bridge = bridge();
        requestChannel.subscribe(bridge);
        return bean;
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public MessagingTemplate template() {
        MessagingTemplate messagingTemplate = new MessagingTemplate();
        messagingTemplate.setDefaultChannel(requestChannel());
        return messagingTemplate;
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public DirectChannel requestChannel() {
        return new DirectChannel();
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public PollableChannel replyChannel() {
        return new QueueChannel();
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public BridgeHandler bridge() {
        BridgeHandler bridgeHandler = new BridgeHandler();
        bridgeHandler.setOutputChannel(outboundChannel());
        return bridgeHandler;
    }

    @Bean
    public DirectChannel outboundChannel() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel masterReplyChannel() {
        return new DirectChannel();
    }

    @ServiceActivator(inputChannel = "outboundChannel")
    public void simulateJmsChannelAdapterPair(ChunkRequest<?> request) {
        masterReplyChannel()
                .send(new GenericMessage<>(new ChunkResponse(request.getSequence(), request.getJobId(), null)));
    }

    @Router(inputChannel = "masterReplyChannel")
    public MessageChannel route(ChunkResponse reply) {
        // TODO: error checking - missing reply channel for jobId
        return registry().get(reply.getJobId());
    }

    public static class SomeNonSingletonNeedingDistinctRequestAndReplyChannels {

        private final static AtomicLong jobIds = new AtomicLong();

        private final long jobId = jobIds.incrementAndGet();

        private final MessagingTemplate template;

        private final PollableChannel replyChannel;

        public SomeNonSingletonNeedingDistinctRequestAndReplyChannels(MessagingTemplate template,
                PollableChannel replyChannel) {
            this.template = template;
            this.replyChannel = replyChannel;
        }

        public void sendSome() {
            ChunkRequest<String> cr = new ChunkRequest<>(0, Collections.singleton("foo"), this.jobId, null);
            this.template.send(new GenericMessage<>(cr));
        }

        public ChunkResponse getResults() {
            @SuppressWarnings("unchecked")
            Message<ChunkResponse> received = (Message<ChunkResponse>) this.replyChannel.receive(10_000);
            if (received != null) {
                if (received.getPayload().getJobId().equals(this.jobId)) {
                    System.out.println("Got the right one");
                }
                else {
                    System.out.println(
                            "Got the wrong one " + received.getPayload().getJobId() + " instead of " + this.jobId);
                }
                return received.getPayload();
            }
            return null;
        }
    }
}

Java netty rxtx pipeline blocks after writeAndFlush()

I'm currently trying to achieve a somewhat stable connection between a micro-controller and a Java application using netty 4.0.44.Final and rxtx. From time to time the controller asks for a time-stamp; otherwise it just forwards sensor data to my application. The application is able to receive as many packets as I want until I call writeAndFlush() somewhere in the pipeline (i.e. when answering a time-request). The pipeline correctly writes data to the output stream (when writeAndFlush() is called), but from that point onwards my application never receives data again, and I have no idea why.
public class WsnViaRxtxConnector extends AbstractWsnConnector {

    private static final Logger LOG = LoggerFactory.getLogger(WsnViaRxtxConnector.class);

    private String port;
    private Provider<MessageDeserializer> deserializerProvider;
    private ChannelFuture channelFuture;
    public ChannelKeeper keeper;

    @Inject
    public WsnViaRxtxConnector(Provider<MessageDeserializer> deserializerProvider, ChannelKeeper keeper) {
        this.deserializerProvider = deserializerProvider;
        this.port = Configuration.getConfig().getString("rest.wsn.port");
        this.keeper = keeper;
        System.setProperty("gnu.io.rxtx.SerialPorts", this.port);
    }

    @Override
    protected void run() throws Exception {
        EventLoopGroup group = new OioEventLoopGroup();
        //final EventExecutorGroup group2 = new DefaultEventExecutorGroup(1500);
        try {
            Bootstrap b = new Bootstrap();
            b.group(group)
                    .channel(RxtxChannel.class)
                    .handler(new ChannelInitializer<RxtxChannel>() {
                        @Override
                        public void initChannel(RxtxChannel ch) throws Exception {
                            ch.pipeline().addLast(new DleStxEtxFrameDecoder(), new DleStxEtxFrameEncoder());
                            ch.pipeline().addLast(new IntegrityCheck(), new IntegrityCalculation());
                            ch.pipeline().addLast(new AesCcmDecrypter(), new AesCcmEncrypter());
                            ch.pipeline().addLast(deserializerProvider.get(), new MessageSerializer());
                            ch.pipeline().addLast(new TimeStampJockel());
                        }
                    })
                    .option(RxtxChannelOption.BAUD_RATE, 19200);
            ChannelFuture f = b.connect(new RxtxDeviceAddress(this.port)).sync();
            f.channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
The handlers are all pretty much standard implementations and seem to work when only receiving packets. The pipeline should first create an object from the raw data, check the CRC, decrypt, deserialize and then compute some logic (i.e. generate a time-response).
public class TimeStampJockel extends ChannelInboundHandlerAdapter {

    private static final Logger LOG = LoggerFactory.getLogger(TimeStampJockel.class);

    private EventBus bus;
    private ChannelKeeper keeper;

    @Inject
    public TimeStampJockel() {
        this.bus = GlobalEventBus.getInstance();
        this.keeper = keeper;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        LOG.debug("Creating packet from received data");
        RawPacket raw = (RawPacket) msg;
        //EventExecutor ex = ctx.executor();
        //LOG.debug("inexecutor.EventLoop(1):" + ex.inEventLoop());
        //keeper.addChannelHandlerContext(raw.getSource(),ctx);
        ByteBuf buf = raw.getContent();
        LOG.debug("\tBuffer: {}", HelperFunctions.getBufferAsHexString(buf));
        UnsignedLong mac = UnsignedLong.fromLongBits(21);
        while (buf.readerIndex() < buf.writerIndex()) {
            int type = buf.readShort();
            int length = buf.readShort();
            ByteBuf value = buf.readBytes(length);
            if (PacketType.getEnum(type).equals(PacketType.MAC)) {
                mac = UnsignedLong.valueOf(value.readLong());
            }
            else {
                AbstractPacket packet = PacketFactory.createPacket(PacketType.getEnum(type), raw.getVersion(), raw.getPacketType(), raw.getSource(), raw.getSource(), raw.getDestination(), mac, value);
                if (packet instanceof TimeReqPacket) {
                    TimeReqPacket timeReqPacket = (TimeReqPacket) packet;
                    Duration d = HelperFunctions.timeSinceYear2000();
                    TimeRespPacket newPacket = new TimeRespPacket(Constants.PROTOCOL_VERSION, PacketType.TIME_RESP.getValue(), packet.getGatewayAdr(), UnsignedLong.valueOf(Configuration.getConfig().getLong("rest.wsn.mac", Long.MAX_VALUE)), timeReqPacket.getMac(), timeReqPacket.getMac(), d.getStandardSeconds(), HelperFunctions.getMillisOfDuration(d));
                    ctx.write(newPacket);
                } else {
                    bus.post(packet);
                }
            }
        }
    }
}
The received sensor data is pushed to a Guava event bus (unless it is a time-request) and processed by other components. If the incoming packet is a time-request packet, the component shown above should generate a time-response packet and writeAndFlush() it down the pipeline. Any ideas what may cause this issue? I'm pretty much out of ideas; I have been googling for the last 10 hours without meaningful results and have no unchecked resources left. I'm using Ubuntu 16.04. Thanks in advance.
[EDIT] I tried checking the ChannelFuture by adding the following code snippet to the last pipeline handler:
ChannelFuture f = ctx.writeAndFlush(newPacket);
f.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (!future.isSuccess()) {
            LOG.error("Server failed to send message", future.cause());
            future.channel().close();
        }
    }
});
[EDIT2] Found my error. It was a netty version conflict. I am working with multiple versions of netty in different projects and was using an older netty version (4.0.13) instead of netty 4.0.44.Final. I have no idea what changed between those versions, but I am glad that everything is working properly now.

Play 2.5 Java WebSockets

The Play 2.5 Highlights page states:
Better control over WebSocket frames
The Play 2.5 WebSocket API gives you direct control over WebSocket frames. You can now send and receive binary, text, ping, pong and close frames. If you don’t want to worry about this level of detail, Play will still automatically convert your JSON or XML data into the right kind of frame.
However
However, https://www.playframework.com/documentation/2.5.x/JavaWebSockets has examples around LegacyWebSocket, which is deprecated.
What is the recommended API/pattern for Java WebSockets? Is using LegacyWebSocket the only option for Java WebSockets?
Are there any examples using the new Message types ping/pong to implement a heartbeat?
The official documentation on this is disappointingly sparse. Perhaps we'll see an update in Play 2.6. However, I will provide an example below on how to configure a chat websocket in Play 2.5, just to help out those in need.
Setup
AController.java
@Inject
private Materializer materializer;

private ActorRef chatSocketRouter;

@Inject
public AController(@Named("chatSocketRouter") ActorRef chatInjectedActor) {
    this.chatSocketRouter = chatInjectedActor;
}

// Make a chat websocket for a user
public WebSocket chatSocket() {
    return WebSocket.Json.acceptOrResult(request -> {
        String authToken = getAuthToken();

        // Checking of token
        if (authToken == null) {
            return forbiddenResult("No [authToken] supplied.");
        }

        // Could we find the token in the database?
        final AuthToken token = AuthToken.findByToken(authToken);
        if (token == null) {
            return forbiddenResult("Could not find [authToken] in DB. Login again.");
        }

        User user = token.getUser();
        if (user == null) {
            return forbiddenResult("You are not logged in to view this stream.");
        }

        Long userId = user.getId();

        // Create a function to be run when we initialise a flow.
        // A flow basically links actors together.
        AbstractFunction1<ActorRef, Props> getWebSocketActor = new AbstractFunction1<ActorRef, Props>() {
            @Override
            public Props apply(ActorRef connectionProperties) {
                // We use the ActorRef provided in the param above to make some properties.
                // An ActorRef is a fancy word for thread reference.
                // The WebSocketActor manages the web socket connection for one user.
                // WebSocketActor.props() means "make one thread (from the WebSocketActor) and return the properties on how to reference it".
                // The resulting Props basically state how to construct that thread.
                Props properties = ChatSocketActor.props(connectionProperties, chatSocketRouter, userId);

                // We can have many connections per user, so we need many ActorRefs (threads) per user.
                // As you can see from the code below, we do exactly that. We have an object called
                // chatSocketRouter which holds a Map of userIds -> connectionThreads and we "tell"
                // it a lightweight object (UserMessage) that is made up of this connecting user's ID and the connection.

                // As stated above, Props are basically a way of describing an Actor, or dumbed-down, a thread.
                // In this line, we are using the Props above to
                // reference the ActorRef we've just created above
                ActorRef anotherUserDevice = actorSystem.actorOf(properties);

                // Create a lightweight object...
                UserMessage routeThisUser = new UserMessage(userId, anotherUserDevice);
                // ... to tell the thread that has our Map that we have a new connection
                // from a user.
                chatSocketRouter.tell(routeThisUser, ActorRef.noSender());

                // We return the properties to the thread that will be managing this user's connection
                return properties;
            }
        };

        final Flow<JsonNode, JsonNode, ?> jsonNodeFlow =
                ActorFlow.<JsonNode, JsonNode>actorRef(getWebSocketActor,
                        100,
                        OverflowStrategy.dropTail(),
                        actorSystem,
                        materializer).asJava();

        final F.Either<Result, Flow<JsonNode, JsonNode, ?>> right = F.Either.Right(jsonNodeFlow);
        return CompletableFuture.completedFuture(right);
    });
}

// Return this whenever we want to reject a
// user from connecting to a websocket
private CompletionStage<F.Either<Result, Flow<JsonNode, JsonNode, ?>>> forbiddenResult(String msg) {
    final Result forbidden = Results.forbidden(msg);
    final F.Either<Result, Flow<JsonNode, JsonNode, ?>> left = F.Either.Left(forbidden);
    return CompletableFuture.completedFuture(left);
}
ChatSocketActor.java
public class ChatSocketActor extends UntypedActor {

    private final ActorRef out;
    private final Long userId;
    private ActorRef chatSocketRouter;

    public ChatSocketActor(ActorRef out, ActorRef chatSocketRouter, Long userId) {
        this.out = out;
        this.userId = userId;
        this.chatSocketRouter = chatSocketRouter;
    }

    public static Props props(ActorRef out, ActorRef chatSocketRouter, Long userId) {
        return Props.create(ChatSocketActor.class, out, chatSocketRouter, userId);
    }

    // Add methods here handling each chat connection...
}
ChatSocketRouter.java
public class ChatSocketRouter extends UntypedActor {

    public ChatSocketRouter() {}

    // Stores userIds to websockets
    private final HashMap<Long, List<ActorRef>> senders = new HashMap<>();

    private void addSender(Long userId, ActorRef actorRef) {
        if (senders.containsKey(userId)) {
            final List<ActorRef> actors = senders.get(userId);
            actors.add(actorRef);
            senders.replace(userId, actors);
        } else {
            List<ActorRef> l = new ArrayList<>();
            l.add(actorRef);
            senders.put(userId, l);
        }
    }

    private void removeSender(ActorRef actorRef) {
        for (List<ActorRef> refs : senders.values()) {
            refs.remove(actorRef);
        }
    }

    @Override
    public void onReceive(Object message) throws Exception {
        ActorRef sender = getSender();

        // Handle messages sent to this 'router' here
        if (message instanceof UserMessage) {
            UserMessage userMessage = (UserMessage) message;
            addSender(userMessage.userId, userMessage.actorRef);
            // Watch sender so we can detect when they die.
            getContext().watch(sender);
        } else if (message instanceof Terminated) {
            // One of our watched senders has died.
            removeSender(sender);
        } else {
            unhandled(message);
        }
    }
}
Example
Now, whenever you want to send a message to a client with a websocket connection, you can do something like:
ChatSenderController.java
private ActorRef chatSocketRouter;

@Inject
public ChatSenderController(@Named("chatSocketRouter") ActorRef chatInjectedActor) {
    this.chatSocketRouter = chatInjectedActor;
}

public void sendMessage(Long sendToId) {
    // E.g. send the chat router a message that says hi
    // (not static, since it uses the injected router; tell() needs a sender argument)
    chatSocketRouter.tell(new Message(sendToId, "Hi"), ActorRef.noSender());
}
ChatSocketRouter.java
@Override
public void onReceive(Object message) throws Exception {
    // ...
    if (message instanceof Message) {
        Message messageToSend = (Message) message;

        // Loop through the list above and send the message to
        // each connection. For example...
        for (ActorRef wsConnection : senders.get(messageToSend.getSendToId())) {
            // Send "Hi" to each of the other client's
            // connected sessions
            wsConnection.tell(messageToSend.getMessage(), getSelf());
        }
    }
    // ...
}
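The two lightweight message classes used above (UserMessage and Message) aren't shown in the post; a minimal sketch of what they might look like, with fields and getters inferred from their usage:

// Hypothetical POJOs inferred from how they are used above; adjust to your needs.
public class UserMessage {
    public final Long userId;
    public final ActorRef actorRef;

    public UserMessage(Long userId, ActorRef actorRef) {
        this.userId = userId;
        this.actorRef = actorRef;
    }
}

public class Message {
    private final Long sendToId;
    private final String message;

    public Message(Long sendToId, String message) {
        this.sendToId = sendToId;
        this.message = message;
    }

    public Long getSendToId() { return sendToId; }
    public String getMessage() { return message; }
}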
Again, I wrote the above to help out those in need. After scouring the web I could not find a reasonable and simple example. There is an open issue for this exact topic. There are also some examples online, but none of them were easy to follow. Akka has some great documentation, but mixing it in with Play was a tough mental task.
Please help improve this answer if you see anything that is amiss.
