I'm trying to publish messages to a topic using Google Cloud Pub/Sub, and I've been following this tutorial. I've successfully managed to create a Topic with the following method.
public static Topic createTopic(String topic) throws IOException {
try (TopicAdminClient topicAdminClient = TopicAdminClient.create(topicAdminSettings)) {
ProjectTopicName topicName = ProjectTopicName.of(projectId, topic);
return topicAdminClient.createTopic(topicName);
}
}
The following method returns the topic-id from the newly created Topic.
public static String getTopicId(String topicName) throws IOException {
String topic_id = null;
try (TopicAdminClient topicAdminClient = TopicAdminClient.create(topicAdminSettings)) {
ListTopicsRequest listTopicsRequest =
ListTopicsRequest.newBuilder().setProject(ProjectName.format(projectId)).build();
ListTopicsPagedResponse response = topicAdminClient.listTopics(listTopicsRequest);
Iterable<Topic> topics = response.iterateAll();
for (Topic topic : topics) {
// return the topic id for the given topic
if (topic.getName().toLowerCase().contains(topicName.toLowerCase())) {
topic_id = topic.getName();
}
}
}
return topic_id;
}
But when I try to publish messages, using the following method
public static void publishMessages(String topic) throws Exception {
String topicId = getTopicId(topic);
ProjectTopicName topicName = ProjectTopicName.of(projectId, topicId);
Publisher publisher = null;
try {
// Create a publisher instance with default settings bound to the topic
publisher = Publisher.newBuilder(topicName).build();
List<String> messages = Arrays.asList("first message", "second message");
for (final String message : messages) {
ByteString data = ByteString.copyFromUtf8(message);
PubsubMessage pubsubMessage = PubsubMessage.newBuilder().setData(data).build();
// Once published, returns a server-assigned message id (unique within the topic)
ApiFuture<String> future = publisher.publish(pubsubMessage);
// Add an asynchronous callback to handle success / failure
ApiFutures.addCallback(
future,
new ApiFutureCallback<String>() {
@Override
public void onFailure(Throwable throwable) {
if (throwable instanceof ApiException) {
ApiException apiException = ((ApiException) throwable);
// details on the API exception
System.out.println(apiException.getStatusCode().getCode());
System.out.println(apiException.isRetryable());
}
System.out.println("Error publishing message : " + message);
}
@Override
public void onSuccess(String messageId) {
// Once published, returns server-assigned message ids (unique within the topic)
System.out.println(messageId);
}
},
MoreExecutors.directExecutor());
}
} finally {
if (publisher != null) {
// When finished with the publisher, shutdown to free up resources.
publisher.shutdown();
publisher.awaitTermination(1, TimeUnit.MINUTES);
}
}
}
I get the following exception
Exception in thread "main" com.google.api.pathtemplate.ValidationException: Invalid character "/" in path section "projects/deft-idiom-234709/topics/test-topic".
at com.google.api.pathtemplate.PathTemplate.encodeUrl(PathTemplate.java:924)
at com.google.api.pathtemplate.PathTemplate.instantiate(PathTemplate.java:721)
at com.google.api.pathtemplate.PathTemplate.instantiate(PathTemplate.java:646)
at com.google.api.pathtemplate.PathTemplate.instantiate(PathTemplate.java:657)
at com.google.pubsub.v1.ProjectTopicName.toString(ProjectTopicName.java:119)
at com.google.cloud.pubsub.v1.Publisher.newBuilder(Publisher.java:460)
at pubsub.TopicAndPubSub.publishMessages(TopicAndPubSub.java:73)
at pubsub.TopicAndPubSub.main(TopicAndPubSub.java:121)
This is the entire class
import com.google.api.core.ApiFuture;
import com.google.api.core.ApiFutureCallback;
import com.google.api.core.ApiFutures;
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.api.gax.rpc.ApiException;
import com.google.auth.oauth2.ServiceAccountCredentials;
import com.google.cloud.pubsub.v1.Publisher;
import com.google.cloud.pubsub.v1.TopicAdminClient;
import com.google.cloud.pubsub.v1.TopicAdminClient.ListTopicsPagedResponse;
import com.google.cloud.pubsub.v1.TopicAdminSettings;
import com.google.common.util.concurrent.MoreExecutors;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.ListTopicsRequest;
import com.google.pubsub.v1.ProjectName;
import com.google.pubsub.v1.ProjectTopicName;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.Topic;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.TimeUnit;
public class TopicAndPubSub {
private static ServiceAccountCredentials creds;
private static TopicAdminSettings topicAdminSettings;
private static String projectId;
static {
try {
creds = ServiceAccountCredentials.fromStream(new FileInputStream("C:/cred/Key.json"));
topicAdminSettings = TopicAdminSettings.newBuilder()
.setCredentialsProvider(FixedCredentialsProvider.create(creds)).build();
projectId = creds.getProjectId();
} catch (IOException e) {
e.printStackTrace();
}
}
public static Topic createTopic(String topic) throws IOException {
try (TopicAdminClient topicAdminClient = TopicAdminClient.create(topicAdminSettings)) {
ProjectTopicName topicName = ProjectTopicName.of(projectId, topic);
return topicAdminClient.createTopic(topicName);
}
}
public static String getTopicId(String topicName) throws IOException {
String topic_id = null;
try (TopicAdminClient topicAdminClient = TopicAdminClient.create(topicAdminSettings)) {
ListTopicsRequest listTopicsRequest =
ListTopicsRequest.newBuilder().setProject(ProjectName.format(projectId)).build();
ListTopicsPagedResponse response = topicAdminClient.listTopics(listTopicsRequest);
Iterable<Topic> topics = response.iterateAll();
for (Topic topic : topics) {
// return the topic id for the given topic
if (topic.getName().toLowerCase().contains(topicName.toLowerCase())) {
topic_id = topic.getName();
}
}
}
return topic_id;
}
public static void publishMessages(String topic) throws Exception {
String topicId = getTopicId(topic);
ProjectTopicName topicName = ProjectTopicName.of(projectId, topicId);
Publisher publisher = null;
try {
// Create a publisher instance with default settings bound to the topic
publisher = Publisher.newBuilder(topicName).build();
List<String> messages = Arrays.asList("first message", "second message");
for (final String message : messages) {
ByteString data = ByteString.copyFromUtf8(message);
PubsubMessage pubsubMessage = PubsubMessage.newBuilder().setData(data).build();
// Once published, returns a server-assigned message id (unique within the topic)
ApiFuture<String> future = publisher.publish(pubsubMessage);
// Add an asynchronous callback to handle success / failure
ApiFutures.addCallback(
future,
new ApiFutureCallback<String>() {
@Override
public void onFailure(Throwable throwable) {
if (throwable instanceof ApiException) {
ApiException apiException = ((ApiException) throwable);
// details on the API exception
System.out.println(apiException.getStatusCode().getCode());
System.out.println(apiException.isRetryable());
}
System.out.println("Error publishing message : " + message);
}
@Override
public void onSuccess(String messageId) {
// Once published, returns server-assigned message ids (unique within the topic)
System.out.println(messageId);
}
},
MoreExecutors.directExecutor());
}
} finally {
if (publisher != null) {
// When finished with the publisher, shutdown to free up resources.
publisher.shutdown();
publisher.awaitTermination(1, TimeUnit.MINUTES);
}
}
}
public static void main(String[] args) throws Exception {
publishMessages("test-topic");
}
}
I can't seem to get it fixed. Can somebody please help?!
Turns out, I needed to build the Publisher using setCredentialsProvider. I had to change the following in the publishMessages method, from
publisher = Publisher.newBuilder(topicName).build();
To
publisher = Publisher.newBuilder(topicName)
.setCredentialsProvider(FixedCredentialsProvider.create(creds))
.build();
Works as expected now!
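For completeness, here is a minimal sketch (reusing only the static creds and projectId fields already defined in the class above) of a small helper that builds every Publisher the same way, so the credentials provider is never forgotten for a new topic:
// Minimal sketch: reuses the static creds/projectId fields from the class above.
private static Publisher newPublisher(String topic) throws IOException {
    ProjectTopicName topicName = ProjectTopicName.of(projectId, topic);
    return Publisher.newBuilder(topicName)
            .setCredentialsProvider(FixedCredentialsProvider.create(creds))
            .build();
}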
I want to receive a chunk of messages from a queue within some time limit (e.g. 300 ms after receiving the first message) using Spring's DefaultMessageListenerContainer (by overriding doReceiveAndExecute) as mentioned in the link.
I can group messages up to my batch size, i.e. 20, when the queue holds many messages, and I receive fewer than 20 messages when the queue holds only a few.
Issue:
I see it takes too much time (sometimes 1 second, sometimes 2 seconds or more) to send the messages to the listener, even when the queue is full.
When I use DefaultMessageListenerContainer as-is to receive single messages concurrently, the messages arrive with a delay of only a few milliseconds (around 1 ms, or at most 30 to 60 ms).
I didn't specify transactionTimeout or receiveTimeout, and I didn't wire up any transactionManager either.
Can Spring experts please help me find where the timeout can be specified, or how I can reduce the delay?
BatchMessageListenerContainer :
package com.mypackage;
import javax.jms.Session;
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import java.util.ArrayList;
import java.util.List;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.jms.connection.ConnectionFactoryUtils;
import org.springframework.jms.support.JmsUtils;
import org.springframework.transaction.TransactionStatus;
/**
* Listener Container that allows batch consumption of messages. Works only with transacted sessions
*/
public class BatchMessageListenerContainer extends DefaultMessageListenerContainer {
public static final int DEFAULT_BATCH_SIZE = 20;
private int batchSize = DEFAULT_BATCH_SIZE;
public BatchMessageListenerContainer() {
super();
setSessionTransacted(true);
}
/**
* @return The batch size on this container
*/
public int getBatchSize() {
return batchSize;
}
/**
* @param batchSize The batchSize of this container
*/
public void setBatchSize(int batchSize) {
this.batchSize = batchSize;
}
/**
* The doReceiveAndExecute() method has to be overridden to support multiple-message receives.
*/
@Override
protected boolean doReceiveAndExecute(Object invoker, Session session, MessageConsumer consumer,
TransactionStatus status) throws JMSException {
Connection conToClose = null;
MessageConsumer consumerToClose = null;
Session sessionToClose = null;
try {
Session sessionToUse = session;
MessageConsumer consumerToUse = consumer;
if (sessionToUse == null) {
Connection conToUse = null;
if (sharedConnectionEnabled()) {
conToUse = getSharedConnection();
}
else {
conToUse = createConnection();
conToClose = conToUse;
conToUse.start();
}
sessionToUse = createSession(conToUse);
sessionToClose = sessionToUse;
}
if (consumerToUse == null) {
consumerToUse = createListenerConsumer(sessionToUse);
consumerToClose = consumerToUse;
}
List<Message> messages = new ArrayList<Message>();
int count = 0;
Message message = null;
// Attempt to receive messages with the consumer
do {
message = receiveMessage(consumerToUse);
if (message != null) {
messages.add(message);
}
}
// Exit loop if no message was received in the time out specified, or
// if the max batch size was met
while ((message != null) && (++count < batchSize));
if (messages.size() > 0) {
// Only if messages were collected, notify the listener to consume the same.
try {
doExecuteListener(sessionToUse, messages);
sessionToUse.commit();
}
catch (Throwable ex) {
handleListenerException(ex);
if (ex instanceof JMSException) {
throw (JMSException) ex;
}
}
return true;
}
// No message was received for the period of the timeout, return false.
noMessageReceived(invoker, sessionToUse);
return false;
}
finally {
JmsUtils.closeMessageConsumer(consumerToClose);
JmsUtils.closeSession(sessionToClose);
ConnectionFactoryUtils.releaseConnection(conToClose, getConnectionFactory(), true);
}
}
protected void doExecuteListener(Session session, List<Message> messages) throws JMSException {
if (!isAcceptMessagesWhileStopping() && !isRunning()) {
if (logger.isWarnEnabled()) {
logger.warn("Rejecting received messages because of the listener container "
+ "having been stopped in the meantime: " + messages);
}
rollbackIfNecessary(session);
throw new JMSException("Rejecting received messages as listener container is stopping");
}
@SuppressWarnings("unchecked")
SessionAwareBatchMessageListener<Message> lsnr = (SessionAwareBatchMessageListener<Message>) getMessageListener();
try {
lsnr.onMessages(session, messages);
}
catch (JMSException ex) {
rollbackOnExceptionIfNecessary(session, ex);
throw ex;
}
catch (RuntimeException ex) {
rollbackOnExceptionIfNecessary(session, ex);
throw ex;
}
catch (Error err) {
rollbackOnExceptionIfNecessary(session, err);
throw err;
}
}
@Override
protected void checkMessageListener(Object messageListener) {
if (!(messageListener instanceof SessionAwareBatchMessageListener<?>)) {
throw new IllegalArgumentException("Message listener needs to be of type ["
+ SessionAwareBatchMessageListener.class.getName() + "]");
}
}
@Override
protected void validateConfiguration() {
if (batchSize <= 0) {
throw new IllegalArgumentException("Property batchSize must be a value greater than 0");
}
}
public void setSessionTransacted(boolean transacted) {
if (!transacted) {
throw new IllegalArgumentException("Batch Listener requires a transacted Session");
}
super.setSessionTransacted(transacted);
}
}
SessionAwareBatchMessageListener:
package com.mypackage;
import java.util.List;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;
public interface SessionAwareBatchMessageListener<M extends Message> {
/**
* Perform a batch action with the provided list of {@code messages}.
*
* @param session JMS {@code Session} that received the messages
* @param messages List of messages
* @throws JMSException thrown if there is an error performing the operation
*/
public void onMessages(Session session, List<M> messages) throws JMSException;
}
Bean in applicationContext.xml:
<bean id="myMessageListener" class="org.mypackage.MyMessageListener">
<bean id="jmsContainer" class="com.mypackage.BatchMessageListenerContainer">
<property name="connectionFactory" ref="connectionFactory"/>
<property name="destinationName" ref="queue"/>
<property name="messageListener" ref="myMessageListener"/>
<property name ="concurrentConsumers" value ="10"/>
<property name ="maxConcurrentConsumers" value ="50"/>
</bean>
MyMessageListener:
package org.mypackage;
import java.util.List;
import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.springframework.beans.factory.annotation.Autowired;
import com.mypackage.SessionAwareBatchMessageListener;
import org.mypackage.service.MyService;
public class MyMessageListener implements SessionAwareBatchMessageListener<TextMessage> {
@Autowired
private MyService myService;
@Override
public void onMessages(Session session, List<TextMessage> messages) {
try {
for (TextMessage textMessage : messages) {
String text = textMessage.getText();
// parse the message and add to list
}
// process list of Objects to DB
} catch (JMSException e1) {
e1.printStackTrace();
}
}
}
I think the time spent before the messages reach the consumer is caused by your while loop: you wait each time for the list to fill, but it is only filled by the current thread, since it is created inside the doReceiveAndExecute method.
// Exit loop if no message was received in the time out specified, or
// if the max batch size was met
while ((message != null) && (++count < batchSize));
Maybe this can work better:
...
List<Message> messages = Collections.synchronizedList(new ArrayList<Message>());
@Override
protected boolean doReceiveAndExecute(Object invoker, Session session, MessageConsumer consumer,
TransactionStatus status) throws JMSException {
Connection conToClose = null;
MessageConsumer consumerToClose = null;
Session sessionToClose = null;
try {
Session sessionToUse = session;
MessageConsumer consumerToUse = consumer;
if (sessionToUse == null) {
Connection conToUse = null;
if (sharedConnectionEnabled()) {
conToUse = getSharedConnection();
}
else {
conToUse = createConnection();
conToClose = conToUse;
conToUse.start();
}
sessionToUse = createSession(conToUse);
sessionToClose = sessionToUse;
}
if (consumerToUse == null) {
consumerToUse = createListenerConsumer(sessionToUse);
consumerToClose = consumerToUse;
}
Message message = null;
// Attempt to receive messages with the consumer
do {
message = receiveMessage(consumerToUse);
if (message != null) {
messages.add(message);
}
}
while ((message != null) && (messages.size() < batchSize));
if (messages.size() >= batchSize) {
synchronized (messages) {
// Only if messages were collected, notify the listener to consume the same.
try {
doExecuteListener(sessionToUse, messages);
sessionToUse.commit();
// clear the list!!
messages.clear();
}
catch (Throwable ex) {
handleListenerException(ex);
if (ex instanceof JMSException) {
throw (JMSException) ex;
}
}
}
return true;
}
// No message was received for the period of the timeout, return false.
noMessageReceived(invoker, sessionToUse);
return false;
}
finally {
JmsUtils.closeMessageConsumer(consumerToClose);
JmsUtils.closeSession(sessionToClose);
ConnectionFactoryUtils.releaseConnection(conToClose, getConnectionFactory(), true);
}
}
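On the original question about where a timeout can be specified: DefaultMessageListenerContainer exposes a receiveTimeout property (1000 ms by default) that bounds how long each blocking receive inside doReceiveAndExecute waits. Below is a minimal sketch configuring the same container programmatically (the queue name and wiring here are hypothetical; the same property can equally be set on the jmsContainer bean in the XML above):
// Hypothetical programmatic equivalent of the jmsContainer bean above.
public static BatchMessageListenerContainer buildContainer(
        javax.jms.ConnectionFactory connectionFactory,
        SessionAwareBatchMessageListener<javax.jms.TextMessage> listener) {
    BatchMessageListenerContainer container = new BatchMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setDestinationName("myQueue"); // hypothetical queue name
    container.setMessageListener(listener);
    container.setConcurrentConsumers(10);
    container.setMaxConcurrentConsumers(50);
    container.setBatchSize(20);
    // receiveTimeout bounds each blocking receiveMessage() call; a smaller value
    // lets a partially filled batch be dispatched sooner.
    container.setReceiveTimeout(300);
    return container;
}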
I took the example from http://www.rabbitmq.com/tutorials/tutorial-six-java.html, added one more RPC call from RPCClient, and added some logging to stdout. As a result, when the second call is executed, RabbitMQ uses the consumer with the wrong correlation id, which is not the expected behavior. Is it a bug, or am I getting something wrong?
RPCServer:
package com.foo.rabbitmq;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Consumer;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Envelope;
import java.io.IOException;
import java.util.concurrent.TimeoutException;
public class RPCServer {
private static final String RPC_QUEUE_NAME = "sap-consume";
private static int fib(int n) {
if (n ==0) return 0;
if (n == 1) return 1;
return fib(n-1) + fib(n-2);
}
public static void main(String[] argv) {
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
factory.setPort(5672);
Connection connection = null;
try {
connection = factory.newConnection();
final Channel channel = connection.createChannel();
channel.queueDeclare(RPC_QUEUE_NAME, false, false, false, null);
channel.basicQos(1);
System.out.println(" [x] Awaiting RPC requests");
Consumer consumer = new DefaultConsumer(channel) {
@Override
public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
AMQP.BasicProperties replyProps = new AMQP.BasicProperties
.Builder()
.correlationId(properties.getCorrelationId())
.build();
String response = "";
try {
String message = new String(body,"UTF-8");
int n = Integer.parseInt(message);
System.out.println(" [.] fib(" + message + ")");
response += fib(n);
}
catch (RuntimeException e){
System.out.println(" [.] " + e.toString());
}
finally {
channel.basicPublish( "", properties.getReplyTo(), replyProps, response.getBytes("UTF-8"));
channel.basicAck(envelope.getDeliveryTag(), false);
// RabbitMq consumer worker thread notifies the RPC server owner thread
synchronized(this) {
this.notify();
}
}
}
};
channel.basicConsume(RPC_QUEUE_NAME, false, consumer);
// Wait and be prepared to consume the message from RPC client.
while (true) {
synchronized(consumer) {
try {
consumer.wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
} catch (IOException | TimeoutException e) {
e.printStackTrace();
}
finally {
if (connection != null)
try {
connection.close();
} catch (IOException _ignore) {}
}
}
}
RPCClient:
package com.bar.rabbitmq;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Envelope;
import java.io.IOException;
import java.util.UUID;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeoutException;
public class RPCClient {
private Connection connection;
private Channel channel;
private String requestQueueName = "sap-consume";
private String replyQueueName;
public RPCClient() throws IOException, TimeoutException {
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
factory.setPort(5672);
connection = factory.newConnection();
channel = connection.createChannel();
replyQueueName = channel.queueDeclare().getQueue();
}
public String call(String message) throws IOException, InterruptedException {
final String corrId = UUID.randomUUID().toString();
AMQP.BasicProperties props = new AMQP.BasicProperties
.Builder()
.correlationId(corrId)
.replyTo(replyQueueName)
.build();
channel.basicPublish("", requestQueueName, props, message.getBytes("UTF-8"));
final BlockingQueue<String> response = new ArrayBlockingQueue<String>(1);
channel.basicConsume(replyQueueName, true, new DefaultConsumer(channel) {
@Override
public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
if (properties.getCorrelationId().equals(corrId)) {
System.out.println("Correlation Id" + properties.getCorrelationId() + " corresponds to expected one.");
response.offer(new String(body, "UTF-8"));
} else {
System.out.println("Correlation Id" + properties.getCorrelationId() + " doesn't correspond to expected one " + corrId);
}
}
});
return response.take();
}
public void close() throws IOException {
connection.close();
}
public static void main(String[] argv) {
RPCClient rpc = null;
String response = null;
try {
rpc = new RPCClient();
System.out.println(" [x] Requesting fib(30)");
response = rpc.call("30");
System.out.println(" [.] Got '" + response + "'");
System.out.println(" [x] Requesting fib(40)");
response = rpc.call("40");
System.out.println(" [.] Got '" + response + "'");
} catch (IOException | TimeoutException | InterruptedException e) {
e.printStackTrace();
} finally {
if (rpc != null) {
try {
rpc.close();
} catch (IOException _ignore) {
}
}
}
}
}
Yes, you found a bug in the tutorial code. I have opened a pull request to fix it here, and you can find an explanation of what's happening as well:
https://github.com/rabbitmq/rabbitmq-tutorials/pull/174
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
This example is simplistic: it uses one queue for the reply. By sending a second request, you register a new consumer on the reply queue, but the consumer of the first request is still listening and actually steals the response of the second request. That's why the client appears to use the same correlation ID.
We updated the client code to use an exclusive, auto-delete queue for each request. This queue will be auto-deleted by the server because its only consumer is unsubscribed after the response has been received. This is a bit more involved but closer to a real-world scenario.
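A minimal sketch of that idea, reusing the channel and requestQueueName fields of the RPCClient above (the exact code in the pull request may differ): declare a server-named, exclusive, auto-delete queue for each call and cancel the consumer once the reply has arrived, so the broker removes the queue.
public String call(String message) throws IOException, InterruptedException {
    final String corrId = UUID.randomUUID().toString();
    // server-named queue, non-durable, exclusive, auto-delete: it lives only for this call
    String replyQueueName = channel.queueDeclare("", false, true, true, null).getQueue();
    AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
            .correlationId(corrId)
            .replyTo(replyQueueName)
            .build();
    channel.basicPublish("", requestQueueName, props, message.getBytes("UTF-8"));
    final BlockingQueue<String> response = new ArrayBlockingQueue<String>(1);
    String consumerTag = channel.basicConsume(replyQueueName, true, new DefaultConsumer(channel) {
        @Override
        public void handleDelivery(String tag, Envelope envelope,
                                   AMQP.BasicProperties properties, byte[] body) throws IOException {
            if (corrId.equals(properties.getCorrelationId())) {
                response.offer(new String(body, "UTF-8"));
            }
        }
    });
    String result = response.take();
    // cancelling the only consumer lets the broker auto-delete the reply queue
    channel.basicCancel(consumerTag);
    return result;
}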
Note the best way to deal with the reply queue with RabbitMQ is to use direct reply-to. This uses pseudo-queues which are lighter than real queues. We don't mention direct reply-to in the tutorial to keep it as simple as possible, but this is the preferred feature to use in production.
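For reference, here is a sketch of the direct reply-to variant mentioned above, again assuming the same fields as the RPCClient: the client consumes from the pseudo-queue amq.rabbitmq.reply-to in automatic-acknowledgement mode before publishing, and sets the same name as the replyTo property.
public String callDirectReplyTo(String message) throws IOException, InterruptedException {
    final String corrId = UUID.randomUUID().toString();
    final BlockingQueue<String> response = new ArrayBlockingQueue<String>(1);
    // the consumer on the pseudo-queue must be registered (with auto-ack) before publishing
    String consumerTag = channel.basicConsume("amq.rabbitmq.reply-to", true, new DefaultConsumer(channel) {
        @Override
        public void handleDelivery(String tag, Envelope envelope,
                                   AMQP.BasicProperties properties, byte[] body) throws IOException {
            if (corrId.equals(properties.getCorrelationId())) {
                response.offer(new String(body, "UTF-8"));
            }
        }
    });
    AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
            .correlationId(corrId)
            .replyTo("amq.rabbitmq.reply-to")
            .build();
    channel.basicPublish("", requestQueueName, props, message.getBytes("UTF-8"));
    String result = response.take();
    channel.basicCancel(consumerTag);
    return result;
}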
I am trying to implement a TCP server in Java using Netty. I can handle messages shorter than 1024 bytes correctly, but when I receive a message longer than 1024 bytes, I only see a partial message.
I did some research and found that I should implement ReplayingDecoder, but I am unable to understand how to implement the decode method.
My messages use JSON.
Netty version 4.1.27
protected void decode(ChannelHandlerContext channelHandlerContext, ByteBuf byteBuf, List<Object> list) throws Exception
My Server setup
EventLoopGroup group;
group = new NioEventLoopGroup(this.numThreads);
try {
ServerBootstrap serverBootstrap;
RequestHandler requestHandler;
ChannelFuture channelFuture;
serverBootstrap = new ServerBootstrap();
serverBootstrap.group(group);
serverBootstrap.channel(NioServerSocketChannel.class);
serverBootstrap.localAddress(new InetSocketAddress("::", this.port));
requestHandler = new RequestHandler(this.responseManager, this.logger);
serverBootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
protected void initChannel(SocketChannel socketChannel) throws Exception {
socketChannel.pipeline().addLast(requestHandler);
}
});
channelFuture = serverBootstrap.bind().sync();
channelFuture.channel().closeFuture().sync();
}
catch(Exception e){
this.logger.info(String.format("Unknown failure %s", e.getMessage()));
}
finally {
try {
group.shutdownGracefully().sync();
}
catch (InterruptedException e) {
this.logger.info(String.format("Error shutting down %s", e.getMessage()));
}
}
My current request handler
package me.chirag7jain.Response;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;
import org.apache.logging.log4j.Logger;
import java.net.InetSocketAddress;
public class RequestHandler extends ChannelInboundHandlerAdapter {
private ResponseManager responseManager;
private Logger logger;
public RequestHandler(ResponseManager responseManager, Logger logger) {
this.responseManager = responseManager;
this.logger = logger;
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
ByteBuf byteBuf;
String data, hostAddress;
byteBuf = (ByteBuf) msg;
data = byteBuf.toString(CharsetUtil.UTF_8);
hostAddress = ((InetSocketAddress) ctx.channel().remoteAddress()).getAddress().getHostAddress();
if (!data.isEmpty()) {
String reply;
this.logger.info(String.format("Data received %s from %s", data, hostAddress));
reply = this.responseManager.reply(data);
if (reply != null) {
ctx.write(Unpooled.copiedBuffer(reply, CharsetUtil.UTF_8));
}
}
else {
logger.info(String.format("NO Data received from %s", hostAddress));
}
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
ctx.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
this.logger.info(String.format("Received Exception %s", cause.getMessage()));
ctx.close();
}
}
I would accept data in channelRead() and accumulate it in a buffer. Before returning from channelRead(), I would invoke read() on the Channel. You may need to record other data as well, per your needs.
When Netty invokes channelReadComplete(), that is the moment to send the whole buffer to your ResponseManager.
Channel.read(): Request to read data from the Channel into the first inbound buffer; triggers a ChannelInboundHandler.channelRead(ChannelHandlerContext, Object) event if data was read, and triggers a channelReadComplete event so the handler can decide to continue reading.
Your Channel object is accessible by ctx.channel().
Try this code:
private final AttributeKey<StringBuffer> dataKey = AttributeKey.valueOf("dataBuf");
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
ByteBuf byteBuf;
String data, hostAddress;
StringBuffer dataBuf = ctx.attr(dataKey).get();
boolean allocBuf = dataBuf == null;
if (allocBuf) dataBuf = new StringBuffer();
byteBuf = (ByteBuf) msg;
data = byteBuf.toString(CharsetUtil.UTF_8);
hostAddress = ((InetSocketAddress) ctx.channel().remoteAddress()).getAddress().getHostAddress();
if (!data.isEmpty()) {
this.logger.info(String.format("Data received %s from %s", data, hostAddress));
}
else {
logger.info(String.format("NO Data received from %s", hostAddress));
}
dataBuf.append(data);
if (allocBuf) ctx.attr(dataKey).set(dataBuf);
ctx.channel().read();
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
StringBuffer dataBuf = ctx.attr(dataKey).get();
if (dataBuf != null) {
String reply;
reply = this.responseManager.reply(dataBuf.toString());
if (reply != null) {
ctx.write(Unpooled.copiedBuffer(reply, CharsetUtil.UTF_8));
}
}
ctx.attr(dataKey).set(null);
ctx.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
}
An application protocol with variable-length messages needs one of the following:
a length word,
a terminator character or sequence (which in turn implies an escape character in case the data contains the terminator), or
a self-describing protocol such as XML (or, as here, JSON).
A sketch of the JSON case is shown below.
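Since the messages here are JSON, one way to get complete frames without hand-rolling a decoder is Netty's JsonObjectDecoder, which splits the byte stream at the boundaries of top-level JSON objects and arrays. A minimal sketch against the bootstrap from the question (it assumes io.netty.handler.codec.json.JsonObjectDecoder is imported); the length-word and terminator options correspond to LengthFieldBasedFrameDecoder and DelimiterBasedFrameDecoder respectively:
serverBootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel socketChannel) {
        socketChannel.pipeline()
                // emits one ByteBuf per complete JSON document, however the bytes were split
                .addLast(new JsonObjectDecoder())
                .addLast(requestHandler);
    }
});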
I'm new to MQTT. To get started, I tried publishing and subscribing to topics on a Mosquitto broker. I was able to publish messages, but my subscriber is not listening to the topic; it starts and stops without waiting/polling for messages.
Here is the subscriber code,
public class MqttSubscriber implements MqttCallback {
private static final String TOPIC = "iot/endpoint";
public static void main(String[] args) {
new MqttSubscriber().listen();
}
public void listen() {
MqttClient client = null;
try {
client = MqttClientGenerator.generateSubscriberClient();
client.connect();
System.out.println("Fetching messages...");
client.subscribe(TOPIC);
client.setCallback(this);
client.disconnect();
} catch (MqttException e) {
e.printStackTrace();
} catch (Exception e) {
e.printStackTrace();
}
}
public void connectionLost(Throwable t) {
t.printStackTrace();
}
public void deliveryComplete(IMqttDeliveryToken arg0) {
}
public void messageArrived(String topic, MqttMessage message) throws Exception {
System.out.println("Message received from broker...");
System.out.println("Received Message: -- ");
System.out.println(message.getPayload().toString());
}
}
MqttClientGenerator:
public class MqttClientGenerator {
private static final String BROKER_URI = "tcp://localhost:1883";
private static final String CLIENT_ID = "pub";
private static final String SUBSCRIBER_ID = "sub";
private MqttClientGenerator () {}
public static MqttClient generatePublisherClient() throws MqttException{
//adding timestamp to make client name unique every time
return new MqttClient(BROKER_URI, CLIENT_ID+new Date().getTime());
}
public static MqttClient generateSubscriberClient() throws MqttException{
//adding timestamp to make client name unique every time
return new MqttClient(BROKER_URI, SUBSCRIBER_ID+new Date().getTime());
}
}
What am I missing here?
Try deleting the line where you disconnect the client.
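A minimal sketch of the listen() method with that change (and with the callback registered before connecting), so the client stays connected and messageArrived() fires as messages come in:
public void listen() {
    try {
        MqttClient client = MqttClientGenerator.generateSubscriberClient();
        client.setCallback(this);   // register the callback before connecting
        client.connect();
        client.subscribe(TOPIC);
        System.out.println("Fetching messages...");
        // no client.disconnect() here: the client keeps the connection open and
        // messageArrived() is invoked whenever something is published to the topic
    } catch (MqttException e) {
        e.printStackTrace();
    }
}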
What I want:
I am trying to write an application where the client sends a query and, based on that query, the server fetches the Twitter stream and pushes it to the client.
What I have:
I have a simple structure in place where the client can connect to the server and the server responds back.
TweetStreamServer
package com.self.tweetstream;
import javax.websocket.OnMessage;
import javax.websocket.server.ServerEndpoint;
@ServerEndpoint("/tweets")
public class TweetStreamServer {
@OnMessage
public String tweets(final String message) {
return message;
}
}
TweetStreamClient
@ClientEndpoint
public class TweetStreamClient {
public static CountDownLatch latch;
public static String response;
@OnOpen
public void onOpen(Session session) {
try{
session.getBasicRemote().sendText("Hello World!");
} catch (IOException e) {
e.printStackTrace();
}
}
@OnMessage
public void printTweets(final String tweet) {
System.out.println("Tweet:" + tweet);
response = tweet;
latch.countDown();
}
}
TweetStreamTest
@Test
public void test() throws URISyntaxException, IOException, DeploymentException, InterruptedException {
System.out.println("URI: " + getEndpointUrl());
TweetStreamClient.latch = new CountDownLatch(1);
Session session = connectToServer(TweetStreamClient.class, "tweets");
assertNotNull(session);
assertTrue(TweetStreamClient.latch.await(10, TimeUnit.SECONDS));
assertEquals("Hello World!", TweetStreamClient.response);
}
Question
I am confused about how I can now send the continuous stream of tweets that I receive from Twitter, because my server method, as per the API, is
@OnMessage
public String tweets(final String message) {
return message;
}
This means it expects a message in order to return anything.
How can I send incoming data from Twitter to the client?
This worked for me
@OnMessage
public void tweets(final String message, Session client) throws IOException, InterruptedException {
int i = 0;
for (Session peer : client.getOpenSessions()) {
while (i < 10) {
System.out.println("sending ...");
peer.getBasicRemote().sendText("Hello");
Thread.sleep(2000);
i++;
}
}
}
Thanks to Arun Gupta for helping through his tweets :)
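For the actual Twitter use case, the demo loop above would normally be replaced by server-initiated pushes. Below is a hedged sketch (not from the original post; everything beyond the original @ServerEndpoint is hypothetical) that tracks open sessions in @OnOpen/@OnClose and lets whatever component receives tweets call a broadcast method:
import java.io.IOException;
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/tweets")
public class TweetStreamServer {

    // all currently connected clients
    private static final Set<Session> sessions =
            Collections.newSetFromMap(new ConcurrentHashMap<Session, Boolean>());

    @OnOpen
    public void onOpen(final Session session) {
        sessions.add(session);
    }

    @OnClose
    public void onClose(final Session session) {
        sessions.remove(session);
    }

    // called by the (hypothetical) component that receives tweets from Twitter
    public static void broadcast(final String tweet) throws IOException {
        for (Session session : sessions) {
            if (session.isOpen()) {
                session.getBasicRemote().sendText(tweet);
            }
        }
    }
}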