Camel ActiveMQ Performance Tuning

Situation
At present, we use some custom code on top of the ActiveMQ libraries for JMS messaging. I have been looking at switching to Camel for ease of use, ease of maintenance, and reliability.
Problem
With my present configuration, Camel's ActiveMQ implementation is substantially slower than our old implementation, both in terms of delay per message sent and received, and time taken to send and receive a large flood of messages. I've tried tweaking some configuration (e.g. maximum connections), to no avail.
Test Approach
I have two applications, one using our old implementation and one using a Camel implementation. Each application sends JMS messages to a topic on a local ActiveMQ server, and also listens for messages on that topic. This is used to test two scenarios:
- Sending 100,000 messages to the topic in a loop, and seeing how long it takes from the start of sending to the end of handling all of them.
- Sending a message every 100 ms and measuring the delay (in ns) from sending to handling each message.
Question
Can I improve upon the implementation below, in terms of time from sending to processing, both for floods of messages and for individual messages? Ideally, improvements would involve tweaking some configuration that I have missed, or suggesting a better way to do it, without being too hacky. Explanations of improvements would be appreciated.
Edit: Now that I am sending messages asynchronously, I appear to have a concurrency issue. receivedCount does not reach 100,000. Looking at the ActiveMQ web interface, 100,000 messages are enqueued and 100,000 dequeued, so it's probably a problem on the message-processing side. I've altered receivedCount to be an AtomicInteger and added some logging to aid debugging. Could this be a problem with Camel itself (or the ActiveMQ components), or is there something wrong with the message-processing code? As far as I can tell, only ~99,876 messages are making it through to floodProcessor.process.
Test Implementation
Edit: Updated with async sending and logging for concurrency issue.
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.activemq.pool.PooledConnectionFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsConfiguration;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.log4j.Logger;
public class CamelJmsTest{
private static final Logger logger = Logger.getLogger(CamelJmsTest.class);
private static final boolean flood = true;
private static final int NUM_MESSAGES = 100000;
private final CamelContext context;
private final ProducerTemplate producerTemplate;
private long timeSent = 0;
private final AtomicInteger sendCount = new AtomicInteger(0);
private final AtomicInteger receivedCount = new AtomicInteger(0);
public CamelJmsTest() throws Exception {
context = new DefaultCamelContext();
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616");
PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory(connectionFactory);
JmsConfiguration jmsConfiguration = new JmsConfiguration(pooledConnectionFactory);
logger.info(jmsConfiguration.isTransacted());
ActiveMQComponent activeMQComponent = ActiveMQComponent.activeMQComponent();
activeMQComponent.setConfiguration(jmsConfiguration);
context.addComponent("activemq", activeMQComponent);
RouteBuilder builder = new RouteBuilder() {
@Override
public void configure() {
Processor floodProcessor = new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
int newCount = receivedCount.incrementAndGet();
//TODO: Why doesn't newCount hit 100,000? Remove this logging once fixed
logger.info(newCount + ":" + exchange.getIn().getBody());
if(newCount == NUM_MESSAGES){
logger.info("all messages received at " + System.currentTimeMillis());
}
}
};
Processor spamProcessor = new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
long delay = System.nanoTime() - timeSent;
logger.info("Message received: " + exchange.getIn().getBody(List.class) + " delay: " + delay);
}
};
from("activemq:topic:test?exchangePattern=InOnly")//.threads(8) // Having 8 threads processing appears to make things marginally worse
.choice()
.when(body().isInstanceOf(List.class)).process(flood ? floodProcessor : spamProcessor)
.otherwise().process(new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
logger.info("Unknown message type received: " + exchange.getIn().getBody());
}
});
}
};
context.addRoutes(builder);
producerTemplate = context.createProducerTemplate();
// For some reason, producerTemplate.asyncSendBody requires an Endpoint to be passed in, so the below is redundant:
// producerTemplate.setDefaultEndpointUri("activemq:topic:test?exchangePattern=InOnly");
}
public void send(){
int newCount = sendCount.incrementAndGet();
producerTemplate.asyncSendBody("activemq:topic:test?exchangePattern=InOnly", Arrays.asList(newCount));
}
public void spam(){
Executors.newSingleThreadScheduledExecutor().scheduleWithFixedDelay(new Runnable() {
@Override
public void run() {
timeSent = System.nanoTime();
send();
}
}, 1000, 100, TimeUnit.MILLISECONDS);
}
public void flood(){
logger.info("starting flood at " + System.currentTimeMillis());
for (int i = 0; i < NUM_MESSAGES; i++) {
send();
}
logger.info("flooded at " + System.currentTimeMillis());
}
public static void main(String... args) throws Exception {
CamelJmsTest camelJmsTest = new CamelJmsTest();
camelJmsTest.context.start();
if(flood){
camelJmsTest.flood();
}else{
camelJmsTest.spam();
}
}
}

It appears from your current JmsConfiguration that you are only consuming messages with a single thread. Was this intended?
If not, you need to set the concurrentConsumers property to something higher. This will create a thread pool of JMS listeners to service your destination.
Example:
JmsConfiguration config = new JmsConfiguration(pooledConnectionFactory);
config.setConcurrentConsumers(10);
This will create 10 JMS listener threads that will process messages concurrently from your queue.
EDIT:
For topics you can do something like this:
JmsConfiguration config = new JmsConfiguration(pooledConnectionFactory);
config.setConcurrentConsumers(1);
config.setMaxConcurrentConsumers(1);
And then in your route:
from("activemq:topic:test?exchangePattern=InOnly").threads(10)
Also, in ActiveMQ you can use a virtual destination. The virtual topic will act like a queue, and you can then use the same concurrentConsumers approach you would use for a normal queue.
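A minimal sketch, assuming ActiveMQ's default virtual topic naming convention (producers publish to the topic VirtualTopic.test, and each logical subscriber consumes from its own queue named Consumer.<name>.VirtualTopic.test; the subscriber name "A" here is just an illustration):
from("activemq:queue:Consumer.A.VirtualTopic.test?concurrentConsumers=10")
    .process(floodProcessor);
This gives you queue semantics, and therefore concurrentConsumers, while still fanning a published message out to every subscriber queue like a topic would.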
Further Edit (For Sending):
You are currently doing a blocking send. You need to use producerTemplate.asyncSendBody() instead, as illustrated below.
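For illustration, using the endpoint URI from the test code (body is whatever payload you are sending):
// Blocking: waits on the calling thread until the send has completed
producerTemplate.sendBody("activemq:topic:test?exchangePattern=InOnly", body);
// Non-blocking: returns a Future immediately and performs the send asynchronously
Future<Object> future = producerTemplate.asyncSendBody("activemq:topic:test?exchangePattern=InOnly", body);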
Edit
I just built a project with your code and ran it. I set a breakpoint in your floodProcessor method, and newCount does reach 100,000. I think you may be getting thrown off by your logging and by the fact that you are sending and receiving asynchronously. On my machine, newCount hit 100,000 and the "all messages received" message was logged well under 1 second after execution, but the program continued to log for another 45 seconds afterwards since the logging was buffered. You can see the effect of logging on how closely your newCount number tracks your body number by reducing the logging. I set the level to INFO, shut off Camel's logging, and the two numbers matched at the end of the log:
INFO CamelJmsTest - 99996:[99996]
INFO CamelJmsTest - 99997:[99997]
INFO CamelJmsTest - 99998:[99998]
INFO CamelJmsTest - 99999:[99999]
INFO CamelJmsTest - 100000:[100000]
INFO CamelJmsTest - all messages received at 1358778578422

I took over from the original poster in looking at this as part of another task, and found that the problem with losing messages was actually in the ActiveMQ config.
We had the setting sendFailIfNoSpace=true, which was resulting in messages being dropped when we sent fast enough to fill the publisher's cache. By playing around with the policyEntry topic cache size, I could vary the number of messages that disappeared, with as much reliability as can be expected of such a race condition. With sendFailIfNoSpace=false (the default), I could use any cache size I liked and never fail to receive all messages.
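For reference, these settings live in the broker's activemq.xml. A sketch of the relevant elements (the element and attribute names are standard ActiveMQ configuration, but the limits shown here are illustrative, not the values from our broker):
<destinationPolicy>
    <policyMap>
        <policyEntries>
            <!-- the per-topic cache/memory limit we varied while reproducing the loss -->
            <policyEntry topic=">" memoryLimit="4mb"/>
        </policyEntries>
    </policyMap>
</destinationPolicy>
<systemUsage>
    <!-- sendFailIfNoSpace defaults to false; true caused sends to be silently dropped -->
    <systemUsage sendFailIfNoSpace="false">
        <memoryUsage>
            <memoryUsage limit="64mb"/>
        </memoryUsage>
    </systemUsage>
</systemUsage>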
In theory, sendFailIfNoSpace should throw a ResourceAllocationException when it drops a message, but that is either not happening(!) or is being ignored somehow. Also interesting is that our custom JMS wrapper code doesn't hit this problem, despite running the throughput test faster than Camel. Maybe that code is fast in a way that empties the publisher's cache more quickly, or else we are overriding sendFailIfNoSpace somewhere in the connection code that I haven't found yet.
On the question of speed, we have implemented all the suggestions mentioned here so far except for virtual destinations, but the Camel version of the test with 100K messages still runs in 16 seconds on my machine, compared to 10 seconds for our own wrapper. As mentioned above, I have a sneaking suspicion that we are (implicitly or otherwise) overriding config somewhere in our wrapper, but I doubt it is anything that would cause that big a performance boost within ActiveMQ.
Virtual destinations as mentioned by gwithake might speed up this particular test, but most of the time with our real workloads it is not an appropriate solution.

Related

Google Cloud Pub Sub memory leak on re-deploy (Netty based)

My Tomcat web service uses real-time developer notifications for Android, which require Google Cloud Pub/Sub. It works flawlessly; all notifications are received immediately. The only problem is that it uses too much RAM, which causes the machine to respond much more slowly than it should, and the memory is not released after undeploying the application. It uses HttpServlet (specifically Jersey, which provides the contextInitialized and contextDestroyed methods to set and clear references), and commenting out the pub-sub code decreases memory usage by a lot.
Here is the code for subscribing and unsubscribing for Android subscription notifications.
package com.example.webservice;
import com.example.webservice.Log;
import com.google.api.core.ApiService;
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.common.collect.Lists;
import com.google.pubsub.v1.ProjectSubscriptionName;
import java.io.FileInputStream;
public class SubscriptionTest
{
// for hiding purposes
private static final String projectId1 = "api-000000000000000-000000";
private static final String subscriptionId1 = "realtime_notifications_subscription";
private static final String TAG = "SubscriptionTest";
private ApiService subscriberService;
private MessageReceiver receiver;
// Called when "contextInitialized" is called.
public void initializeSubscription()
{
Log.w(TAG, "Initializing subscriptions...");
try
{
GoogleCredentials credentials1 = GoogleCredentials.fromStream(new FileInputStream("googlekeys/apikey.json"))
.createScoped(Lists.newArrayList("https://www.googleapis.com/auth/cloud-platform"));
ProjectSubscriptionName subscriptionName1 = ProjectSubscriptionName.of(projectId1, subscriptionId1);
// Instantiate an asynchronous message receiver
receiver =
(message, consumer) ->
{
consumer.ack();
// do processing
};
// Create a subscriber for "my-subscription-id" bound to the message receiver
Subscriber subscriber1 = Subscriber.newBuilder(subscriptionName1, receiver)
.setCredentialsProvider(FixedCredentialsProvider.create(credentials1))
.build();
subscriberService = subscriber1.startAsync();
}
catch (Throwable e)
{
Log.e(TAG, "Exception while initializing async message receiver.", e);
return;
}
Log.w(TAG, "Subscription initialized. Messages should come now.");
}
// Called when "contextDestroyed" is called.
public void removeSubscription()
{
if (subscriberService != null)
{
subscriberService.stopAsync();
Log.i(TAG, "Awaiting subscriber termination...");
subscriberService.awaitTerminated();
Log.i(TAG, "Subscriber termination done.");
}
subscriberService = null;
receiver = null;
}
}
And this is the log statement after the application is undeployed (names may not match, but that is not important):
org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application
[example] created a ThreadLocal with key of type [java.lang.ThreadLocal]
(value [java.lang.ThreadLocal#2cb2fc20]) and a value of type
[io.grpc.netty.shaded.io.netty.util.internal.InternalThreadLocalMap]
(value [io.grpc.netty.shaded.io.netty.util.internal.InternalThreadLocalMap#4f4c4b1a])
but failed to remove it when the web application was stopped.
Threads are going to be renewed over time to try and avoid a probable memory leak.
From what I've observed, Netty is creating a static ThreadLocal with a strong reference to the value InternalThreadLocalMap, which seems to be causing this message to appear. I've tried to delete it using code like the following (it's probably overkill, but none of the answers have worked for me so far, and this doesn't seem to be working either):
InternalThreadLocalMap.destroy();
FastThreadLocal.destroy();
for (Thread thread : Thread.getAllStackTraces().keySet())
{
if (thread instanceof FastThreadLocalThread)
{
// Handle the memory leak that netty causes.
InternalThreadLocalMap map = ((FastThreadLocalThread) thread).threadLocalMap();
if (map == null)
continue;
for (int i = 0; i < map.size(); i++)
map.setIndexedVariable(i, null);
((FastThreadLocalThread) thread).setThreadLocalMap(null);
}
}
After the undeploy (or stop-start), Tomcat detects a memory leak if I click Find leaks (obviously). The problem is that the RAM and CPU being used are not released, apparently because the subscription is not closed properly. Re-deploying the app causes the used RAM to increase further every time: if it uses 200 MB at first, after the 2nd deploy it increases to 400, then 600, then 800, growing without limit until the machine slows down enough to die.
It is a serious issue and I have no idea how to solve it. The stop methods are called as defined; awaitTerminated is also called and returns immediately (meaning the subscriber has actually stopped listening), but the RAM behind it is not released.
So far I've only seen questions about Python clients (ref 1, ref 2), but nobody seems to be mentioning the Java client, and I'm kind of losing hope in this setup.
I've opened an issue about this problem as well.
What should I do to resolve this issue? Any help is appreciated, thank you very much.
I don't know if it will fully fix your issue, but you appear to be leaking some memory by not closing the FileInputStream.
The first option is to extract the FileInputStream into a variable and call the close() method on it after you are done reading the content.
A second (and better) option for working with these kinds of streams is try-with-resources. Since FileInputStream implements the AutoCloseable interface, it will be closed automatically when exiting the try-with-resources block.
Example:
try (FileInputStream stream = new FileInputStream("googlekeys/apikey.json")) {
GoogleCredentials credentials1 = GoogleCredentials.fromStream(stream)
.createScoped(Lists.newArrayList("https://www.googleapis.com/auth/cloud-platform"));
// ...
} catch (Exception e) {
Log.e(TAG, "Exception while initializing async message receiver.", e);
return;
}

Spring AMQP - Channel transacted vs publisher confirms

I have a Jersey application in which I use the Spring AMQP library to publish messages to RabbitMQ exchanges. I am using a CachingConnectionFactory in my RabbitTemplate, and channel-transacted was initially set to false. I noticed that some messages were not actually being published to the exchange, so I changed channel-transacted to true.
On doing this, my publishing function started taking 500 ms (it was 5 ms while channel-transacted was false). Is there something I am missing here? 500 ms is way too much.
As an alternative, I tried setting publisherConfirms to true and added a ConfirmCallback. I haven't benchmarked this yet, but would like to know whether it will perform better than channel-transacted, given that the sole purpose of this application is to publish messages to an exchange in RabbitMQ.
Also, if I go with publisherConfirms, I would like to implement retries in case of failures, or at least be able to throw exceptions. With channel-transacted I get exceptions in case of failure, but the latency is high. I am not sure how to implement retries with publisherConfirms.
I tried retries with publisher confirms but my code just hangs.
Here's my code:
CompleteMessageCorrelationData.java
public class CompleteMessageCorrelationData extends CorrelationData {
private final Message message;
private final int retryCount;
public CompleteMessageCorrelationData(String id, Message message, int retryCount) {
super(id);
this.message = message;
this.retryCount = retryCount;
}
public Message getMessage() {
return this.message;
}
public int getRetryCount() {
return this.retryCount;
}
@Override
public String toString() {
return "CompleteMessageCorrelationData [id=" + getId() + ", message=" + this.message + "]";
}
}
Setting up the CachingConnectionFactory:
private static CachingConnectionFactory factory = new CachingConnectionFactory("host");
static {
factory.setUsername("rmq-user");
factory.setPassword("rmq-password");
factory.setChannelCacheSize(50);
factory.setPublisherConfirms(true);
}
private final RabbitTemplate rabbitTemplate = new RabbitTemplate(factory);
rabbitTemplate.setConfirmCallback((correlation, ack, reason) -> {
if (correlation != null && !ack) {
CompleteMessageCorrelationData data = (CompleteMessageCorrelationData)correlation;
log.info("Received nack for message: " + data.getMessage() + " for reason : " + reason);
int counter = data.getRetryCount();
if (counter < Integer.parseInt(max_retries)){
this.rabbitTemplate.convertAndSend(data.getMessage().getMessageProperties().getReceivedExchange(),
data.getMessage().getMessageProperties().getReceivedRoutingKey(),
// pass counter + 1 so the retry count actually increments on each re-publish
data.getMessage(), new CompleteMessageCorrelationData(id, data.getMessage(), counter + 1));
} else {
log.error("Max retries exceeded for message: " + data.getMessage());
}
}
});
Publishing the message:
rabbitTemplate.convertAndSend(exchangeName, routingKey, message, new CompleteMessageCorrelationData(id, message, 0));
So, in short:
Am I doing something wrong with channel-transacted that makes the latency so high?
If I were to implement publisherConfirms with retries instead, what's wrong with my approach, and will it perform better than channel-transacted, considering that this application has no job other than publishing messages to RabbitMQ?
As you have found, transactions are expensive and significantly degrade performance; 500ms seems high, though.
I don't believe publisher confirms will help much. You still have to wait for the round-trip to the broker before releasing the servlet thread. Publisher confirms are useful when you send a bunch of messages and then wait for all the confirms to come back; when you are only sending one message and then waiting for its confirm, it likely won't be much faster than using a transaction.
You could try it, though, but the code is a bit complex, especially if you want to handle exceptions, which you get for "free" with transactions.
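If you do want to try it, a minimal sketch of that batch-confirm pattern (not from the original post; the exchange and routing key names and the 30-second timeout are placeholders):
// Publish a batch, then block until every confirm (ack or nack) has arrived.
final CountDownLatch confirms = new CountDownLatch(messages.size());
rabbitTemplate.setConfirmCallback((correlation, ack, cause) -> {
    if (!ack) {
        // record the failure so it can be retried in a later pass
    }
    confirms.countDown();
});
for (Message message : messages) {
    rabbitTemplate.convertAndSend("some-exchange", "some-routing-key", message,
            new CorrelationData(UUID.randomUUID().toString()));
}
if (!confirms.await(30, TimeUnit.SECONDS)) {
    throw new IllegalStateException("Timed out waiting for publisher confirms");
}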

Apache Camel: async operation and backpressure

In Apache Camel 2.19.0, I want to produce messages and consume the result asynchronously on a concurrent seda queue while at the same time blocking if the executors on the seda queue are full.
The use case behind it: I need to process large files with many lines and need to create batches, because a single message per line is too much overhead, but I cannot fit the entire file into the heap. In the end, I need to know whether all the batches I triggered completed successfully.
So effectively, I need a back-pressure mechanism to keep from spamming the queue, while still leveraging multi-threaded processing.
Here is a quick example in Camel and Spring. The route I configured:
package com.test;
import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;
@Component
public class AsyncCamelRoute extends RouteBuilder {
public static final String ENDPOINT = "seda:async-queue?concurrentConsumers=2&size=2&blockWhenFull=true";
@Override
public void configure() throws Exception {
from(ENDPOINT)
.process(exchange -> {
System.out.println("Processing message " + (String)exchange.getIn().getBody());
Thread.sleep(10_000);
});
}
}
The producer looks like this:
package com.test;
import org.apache.camel.ProducerTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
@Component
public class AsyncProducer {
public static final int MAX_MESSAGES = 100;
@Autowired
private ProducerTemplate producerTemplate;
@EventListener
public void handleContextRefresh(ContextRefreshedEvent event) throws Exception {
new Thread(() -> {
// Just wait a bit so everything is initialized
try {
Thread.sleep(5_000);
} catch (InterruptedException e) {
e.printStackTrace();
}
List<CompletableFuture> futures = new ArrayList<>();
System.out.println("Producing messages");
for (int i = 0; i < MAX_MESSAGES; i++) {
CompletableFuture future = producerTemplate.asyncRequestBody(AsyncCamelRoute.ENDPOINT, String.valueOf(i));
futures.add(future);
}
System.out.println("All messages produced");
System.out.println("Waiting for subtasks to finish");
futures.forEach(CompletableFuture::join);
System.out.println("Subtasks finished");
}).start();
}
}
The output of this code looks like:
Producing messages
All messages produced
Waiting for subtasks to finish
Processing message 6
Processing message 1
Processing message 2
Processing message 5
Processing message 8
Processing message 7
Processing message 9
...
Subtasks finished
So it seems that blockWhenFull is ignored, and all messages are created and put onto the queue prior to processing.
Is there any way to produce messages such that I can use async processing in Camel while making sure that putting elements onto the queue blocks if there are too many unprocessed elements?
I solved the problem by using streaming and a custom splitter. This splits the source lines into chunks, using an iterator that returns a list of lines instead of a single line. With this, it seems I can use Camel as required.
So the route contains the following portion:
.split().method(new SplitterBean(), "splitBody").streaming().parallelProcessing().executorService(customExecutorService)
with a custom splitter implementing the chunking behavior described above (a sketch follows).
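For reference, a sketch of such a chunking splitter (the bean and method names match the route above; the chunk size is illustrative):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
public class SplitterBean {
    private static final int CHUNK_SIZE = 1000; // illustrative batch size
    // Returning an Iterator lets Camel's streaming splitter pull chunks lazily,
    // so only one chunk of lines is held in memory at a time.
    public Iterator<List<String>> splitBody(InputStream body) {
        final BufferedReader reader = new BufferedReader(new InputStreamReader(body));
        return new Iterator<List<String>>() {
            private String nextLine = readLine();
            private String readLine() {
                try {
                    return reader.readLine();
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            }
            @Override
            public boolean hasNext() {
                return nextLine != null;
            }
            @Override
            public List<String> next() {
                // Collect up to CHUNK_SIZE lines into one batch message
                List<String> chunk = new ArrayList<>(CHUNK_SIZE);
                while (nextLine != null && chunk.size() < CHUNK_SIZE) {
                    chunk.add(nextLine);
                    nextLine = readLine();
                }
                return chunk;
            }
        };
    }
}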

Why do my threads always seem to be idle?

I have the following code:
import redis.clients.jedis.JedisPubSub;
import javax.sql.DataSource;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class MsgSubscriber extends JedisPubSub {
private final PersistenceService service;
private final ExecutorService pool;
public MsgSubscriber(DataSource dataSource) {
pool = Executors.newFixedThreadPool(4);
service = new PersistenceServiceImpl(dataSource);
}
public void onMessage(String channel, String message) {
pool.execute(new Handler(message, service));
}
}
It is subscribed to a Redis channel, which is receiving hundreds of messages a second.
I am processing each of these messages as they come along and saving them to a data store. The handler looks like this:
public class Handler implements Runnable {
private String msg;
private PersistenceService service;
public Handler(String msg, PersistenceService service) {
this.msg = msg;
this.service = service;
}
@Override
public void run() {
service.save(msg);
}
}
Things seem to be working OK; messages are being written to the database. But I have been running Java VisualVM and am seeing thread graphs where the threads sit in the "Parked" state almost all of the time.
I'm concerned because the threads seem to be parked rather than running, although some logging statements show that the code is in fact being executed. So my questions are: firstly, is there a problem with my code, and secondly, why does VisualVM show the threads as doing nothing?
hundreds of messages a second
Redis can easily handle 10K messages per second on a single thread. With 4 threads, your pool should be well under 1% busy; that may be too little activity for VisualVM's sampling to detect, so it reports the threads as parked all the time.

Jboss Netty - How to serve 3 connections using 2 worker threads

Just as a simple example, let's say I want to handle 3 simultaneous TCP client connections using only 2 worker threads in Netty. How would I do it?
Questions
A)
With the code below, my third connection doesn't get any data from the server; the connection just sits there. Notice how my worker executor and worker count are both 2.
If I have 2 worker threads and 3 connections, shouldn't all three connections be served by the 2 threads?
B)
Another question: does Netty use the CompletionService from java.util.concurrent? It doesn't seem to. Also, I didn't see any source code that calls executor.submit or future.get.
All of this has added to my confusion about how it handles and serves data to more connections than it has worker threads.
C)
I'm lost on how Netty handles 10,000+ simultaneous TCP connections. Will it create 10,000 threads? Thread-per-connection is not a scalable solution, so I'm confused as to why my test code doesn't work as expected.
import java.net.InetSocketAddress;
import java.nio.channels.ClosedChannelException;
import java.util.Date;
import java.util.concurrent.Executors;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelFutureListener;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;
import org.jboss.netty.handler.codec.string.StringEncoder;
public class SRNGServer {
public static void main(String[] args) throws Exception {
// Configure the server.
ServerBootstrap bootstrap = new ServerBootstrap(
new NioServerSocketChannelFactory(
Executors.newCachedThreadPool(),
//Executors.newCachedThreadPool()
Executors.newFixedThreadPool(2),2
));
// Configure the pipeline factory.
bootstrap.setPipelineFactory(new SRNGServerPipelineFactoryP());
// Bind and start to accept incoming connections.
bootstrap.bind(new InetSocketAddress(8080));
}
private static class SRNGServerHandlerP extends SimpleChannelUpstreamHandler {
private static final Logger logger = Logger.getLogger(SRNGServerHandlerP.class.getName());
@Override
public void channelConnected(
ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
// Send greeting for a new connection.
Channel ch=e.getChannel();
System.out.printf("channelConnected with channel=[%s]%n", ch);
ChannelFuture writeFuture=e.getChannel().write("It is " + new Date() + " now.\r\n");
SRNGChannelFutureListener srngcfl=new SRNGChannelFutureListener();
System.out.printf("Registered listener=[%s] for future=[%s]%n", srngcfl, writeFuture);
writeFuture.addListener(srngcfl);
}
@Override
public void exceptionCaught(
ChannelHandlerContext ctx, ExceptionEvent e) {
logger.log(
Level.WARNING,
"Unexpected exception from downstream.",
e.getCause());
if(e.getCause() instanceof ClosedChannelException){
logger.log(Level.INFO, "****** Connection closed by client - Closing Channel");
}
e.getChannel().close();
}
}
private static class SRNGServerPipelineFactoryP implements ChannelPipelineFactory {
public ChannelPipeline getPipeline() throws Exception {
// Create a default pipeline implementation.
ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("encoder", new StringEncoder());
pipeline.addLast("handler", new SRNGServerHandlerP());
return pipeline;
}
}
private static class SRNGChannelFutureListener implements ChannelFutureListener{
public void operationComplete(ChannelFuture future) throws InterruptedException{
Thread.sleep(1000*5);
Channel ch=future.getChannel();
if(ch!=null && ch.isConnected()){
ChannelFuture writeFuture=ch.write("It is " + new Date() + " now.\r\n");
//-- Add this instance as listener itself.
writeFuture.addListener(this);
}
}
}
}
I haven't analyzed your source code in detail, so I don't know exactly why it doesn't work properly. But this line in SRNGChannelFutureListener looks suspicious:
Thread.sleep(1000*5);
This will block the executing thread for 5 seconds; during that time, the thread is not available for any other processing.
Question C: No, it will not create 10,000 threads; the whole point of Netty is that it doesn't do that, because that would indeed not scale very well. Instead, it uses a limited number of threads from a thread pool, generates events whenever something happens, and runs event handlers on the threads in the pool. So, threads and connections are decoupled from each other (there is not a thread for each connection).
To make this mechanism work properly, your event handlers should return as quickly as possible, to make the threads that they run on available for running the next event handler as quickly as possible. If you make a thread sleep for 5 seconds, then you're keeping the thread allocated, so it won't be available for handling other events.
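With the Netty 3 API used here (the org.jboss.netty packages), the conventional way to run slow handlers without tying up the I/O worker threads is to add an ExecutionHandler to the pipeline. A sketch (the pool size and memory limits are illustrative):
import org.jboss.netty.handler.execution.ExecutionHandler;
import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;
public ChannelPipeline getPipeline() throws Exception {
    ChannelPipeline pipeline = Channels.pipeline();
    pipeline.addLast("encoder", new StringEncoder());
    // Handlers added after this point run on a separate 16-thread pool,
    // so a slow business handler no longer blocks the 2 I/O worker threads.
    pipeline.addLast("executor", new ExecutionHandler(
            new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576)));
    pipeline.addLast("handler", new SRNGServerHandlerP());
    return pipeline;
}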
Question B: If you really want to know, you could read Netty's source code and find out. It uses selectors and other java.nio classes to do asynchronous I/O.
