Why do my threads always seem to be idle? - java

I have the following code:
import redis.clients.jedis.JedisPubSub;
import javax.sql.DataSource;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class MsgSubscriber extends JedisPubSub {
private final PersistenceService service;
private final ExecutorService pool;
public MsgSubscriber(DataSource dataSource) {
pool = Executors.newFixedThreadPool(4);
service = new PersistenceServiceImpl(dataSource);
}
public void onMessage(String channel, String message) {
pool.execute(new Handler(message, service));
}
}
It is subscribed to a Redis channel, which is receiving hundreds of messages a second.
I am processing each of these messages as they come along and saving them to a data store; the handler looks like this:
public class Handler implements Runnable {
private String msg;
private PersistenceService service;
public Handler(String msg, PersistenceService service) {
this.msg = msg;
this.service = service;
}
@Override
public void run() {
service.save(msg);
}
}
Things seem to be working OK and messages are being written to the database, but I have been running Java VisualVM and am seeing graphs like the following (thread timeline showing the pool threads spending nearly all their time in the "Parked" state):
I'm concerned because the threads seem to be sitting in this "Parked" state and not running, although with some logging statements I can see that the code is being run. I guess my question is firstly, is there a problem with my code, and secondly, why is VisualVM showing me that the threads don't seem to be doing anything?

hundreds of messages a second
Redis can easily deliver 10K messages per second to a single thread, so at hundreds of messages a second spread over 4 threads each worker should be well under 1% busy. That little activity can be too brief for VisualVM's sampling to catch, so the threads show up as "Parked" (idle, waiting for the next task) essentially all of the time.
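If you want to confirm that the pool really is keeping up rather than quietly backing up, one option is to inspect the executor's own counters. A minimal sketch, assuming you keep the pool as a ThreadPoolExecutor (which is what Executors.newFixedThreadPool returns) and log its stats once a second:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolMonitor {
    public static void main(String[] args) {
        // newFixedThreadPool returns a ThreadPoolExecutor, so its counters can be read directly.
        final ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newFixedThreadPool(4);

        ScheduledExecutorService monitor = Executors.newSingleThreadScheduledExecutor();
        monitor.scheduleAtFixedRate(new Runnable() {
            public void run() {
                System.out.printf("active=%d, completed=%d, queued=%d%n",
                        pool.getActiveCount(), pool.getCompletedTaskCount(), pool.getQueue().size());
            }
        }, 1, 1, TimeUnit.SECONDS);

        // submit work to 'pool' here as in the question; consistently low active and queued
        // counts confirm the workers are idle (parked) most of the time, which is healthy.
    }
}
Low active/queued counts together with a steadily rising completed count mean the "Parked" threads are simply waiting for work, not stuck.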

Related

How Java Thread Pool works?

I am working on an IoT project where devices send data to our Java application, called the Gateway Adapter (GA).
In Java I am using a thread pool to handle every message we receive from the devices.
I have the following code for the thread allocator and the Runnable task.
public class ThreadAllocator {
/** The thread pool. */
private ExecutorService executorService = Executors.newWorkStealingPool(1000);
public void allocateThread(IoSession session, String message) {
LOGGER.info("Entering allocateThread");
executorService.execute(new HelperThread(message));
}
}
public class HelperThread implements Runnable {
private String message;
public HelperThread(String message) {
LOGGER.info("Entering HelperThread");
this.message = message;
}
public void run() {
LOGGER.info("Entering run");
// Process Message
}
}
With the above code, when I did a load test by sending around 5000 messages, I could see the messages "Entering allocateThread" and "Entering HelperThread" 5000 times each in the log file, but my run method executed only 1000 times, i.e. the message "Entering run" appeared in the log only 1000 times. As a result, the other 4000 messages were not processed.
Is this the expected behaviour of the "newWorkStealingPool" thread pool? Will it only execute a number of tasks equal to the parallelism passed to it, e.g. Executors.newWorkStealingPool(1000)?
Kindly suggest a solution.
Am I missing some configuration, or is this not the correct thread pool for my scenario? If so, which thread pool would work correctly in this case?
I'd really appreciate your help.
Regards,
Krishan
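One thing worth ruling out: Executors.newWorkStealingPool is backed by a ForkJoinPool whose worker threads are daemon threads, so any tasks still queued when the JVM's last non-daemon thread finishes are silently never run. A minimal sketch of draining the pool before main() returns (the message count and timeout here are illustrative, not taken from the question):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DrainExample {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newWorkStealingPool();
        for (int i = 0; i < 5000; i++) {
            final int id = i;
            pool.execute(new Runnable() {
                public void run() {
                    System.out.println("Processing message " + id);
                }
            });
        }
        // Stop accepting new tasks and wait for the queued ones to finish before main() returns;
        // otherwise the daemon worker threads die with the JVM and queued tasks are dropped.
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
If the tasks must keep running for the lifetime of the application, a pool of non-daemon threads such as Executors.newFixedThreadPool avoids the issue entirely.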

Singleton class to manage running tasks in multithreaded environment in Java

I have a similar situation to that described in this question:
Java email sending queue - fixed number of threads sending as many messages as are available
In that case, I have a blocking queue that gets fed commands (ICommandTask extends Callable&lt;Object&gt;), which a thread pool takes off the queue and runs. The blocking queue provides thread synchronization and isolation between the calling thread and the executing thread. Different objects throughout the program can submit ICommandTasks to the command queue, which is why I've made addTask() static.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import com.mypackage.tasks.ICommandTask;
public enum CommandQueue
{
INSTANCE;
private final BlockingQueue<ICommandTask> commandQueue;
private final ExecutorService executor;
private CommandQueue()
{
commandQueue = new LinkedBlockingQueue<ICommandTask>();
executor = Executors.newCachedThreadPool();
}
public static void start()
{
new Thread(INSTANCE.new WaitForProducers()).start();
}
public static void addTask(ICommandTask command)
{
INSTANCE.commandQueue.add(command);
}
private class WaitForProducers implements Runnable
{
@Override
public void run()
{
ICommandTask command;
while(true)
{
try
{
command = INSTANCE.commandQueue.take();
executor.submit(command);
}
catch (InterruptedException e)
{
// logging etc.
}
}
}
}
}
In the main program, during start-up, the command queue is started with the following call, which loads the singleton CommandQueue instance and starts WaitForProducers on a separate thread:
CommandQueue.start();
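For context, a producer elsewhere in the program would then submit work roughly like this; LogCommandTask below is only an illustrative ICommandTask, not part of the original code:
// Illustrative only: a trivial ICommandTask and how a producer enqueues it.
public class LogCommandTask implements ICommandTask {
    private final String text;

    public LogCommandTask(String text) {
        this.text = text;
    }

    @Override
    public Object call() throws Exception {
        System.out.println("Executing command: " + text);
        return null;
    }
}

// ... from any producer thread:
CommandQueue.addTask(new LogCommandTask("hello"));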
I wanted to ask whether this method of setting up multiple producers feeding a single executor via a singleton enum (so that different parts of the program can access it), with a separate thread that takes tasks off the queue and submits them to a thread pool, is a recommended way of doing what I want to achieve, particularly in a heavily multithreaded environment.
So far it seems to be working OK, but I plan on creating similar objects to CommandQueue to handle different types of tasks. They will be stored in their own queues, e.g. OrderQueue, EventQueue, NegotiationQueue, etc. So it needs to be reasonably scalable and thread-safe.
Thanks in advance.

How to implement a constantly running process in JavaEE

How would you suggest to implement the following in JavaEE:
I need to have a background process in the app server (I was thinking of a stateful session bean) that constantly monitors "something" and, if certain conditions apply, performs operations on the database.
Most importantly, it has to be manipulated remotely by various clients.
So, basically, I need a process that will run constantly, keep its state and be open for method invocations by a number of remote clients.
Since I'm new to JavaEE I'm a bit confused about which approach/"technology" to use. Any help would be appreciated.
You can use a combination of a stateless session or singleton bean with an EJB timer and the timer service. The bean would be the interface used by the remote clients to control the background process. The timer service would periodically call back a method on the bean to verify the condition. The timers are automatically persisted by the EJB container, so they will keep doing their job even while your bean's clients are disconnected.
Here is a sketch:
@Singleton
...
public class TimerManagerBean implements TimerManager {
@Resource
private TimerService timerService;
public void startMonitoring() {
//start in 5 sec and then fire every 10 minutes
Timer timer = timerService.createTimer(5000, 600000, "MyTimer");
}
public void stopMonitoring() {
Collection<Timer> timers = timerService.getTimers();
for(Timer timer : timers) {
//look for your timer
if("MyTimer".equals(timer.getInfo())) {
timer.cancel();
break;
}
}
}
//called every 10 minutes
@Timeout
public void onTimeout() {
//verify the condition and do your processing
}
}
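The TimerManager business interface the sketch implements is not shown; assuming it just mirrors the two control methods, a remote version (so the remote clients can invoke it) could be as simple as:
import javax.ejb.Remote;

// Assumed shape of the business interface implemented by the sketch above.
@Remote
public interface TimerManager {
    void startMonitoring();
    void stopMonitoring();
}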
See also: Using the timer service on Oracle JavaEE tutorial
What about Quartz? See the links
http://rwatsh.blogspot.com/2007/03/using-quartz-scheduler-in-java-ee-web.html
http://lanbuithe.blogspot.com/2011/07/using-quartz-scheduler-in-java-ee-web.html
http://www.mkyong.com/tutorials/quartz-scheduler-tutorial/
As you stated yourself, you have two requirements: 1) periodically perform some background job, and 2) respond to client requests.
For 1), you can use the TimerService or spawn a thread from a ServletContextListener. The second approach is not fully spec-conformant, but it works (a rough sketch is given at the end of this answer). If you use timers, you can either create a periodic timer (as pointed out by @dcernahoschi), or a single-action timer that reschedules itself:
@Timeout
public void onTimeout() {
//do something
//then schedule the next run (timerService is the injected TimerService shown earlier)
timerService.createSingleActionTimer(10000, new TimerConfig("MyTimer", false));
}
If your periodic timer fires every 10 seconds and your processing sometimes lasts more than 10 seconds, you might have a problem. A timer that reschedules itself works better when the processing time is not fixed.
For 2) you can go with stateless or stateful EJBs; that's precisely their purpose.
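For completeness, the ServletContextListener approach mentioned above might look roughly like the sketch below; the class name, polling interval and daemon flag are choices for illustration, not requirements:
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Rough sketch of the "spawn a thread from a ServletContextListener" alternative.
@WebListener
public class BackgroundJobListener implements ServletContextListener {

    private Thread worker;
    private volatile boolean running = true;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        worker = new Thread(new Runnable() {
            public void run() {
                while (running) {
                    // check the condition and talk to the database here
                    try {
                        Thread.sleep(10000);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        running = false;
        worker.interrupt();
    }
}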
Java EE is the solution. You will need to follow these steps:
build a Java EE application, a jar containing an EJB:
1.1 you will need an IDE: Eclipse Juno is my favorite,
1.2 many tutorials exist on the web; search for EJB3 and you will find them,
have an application server to run your EJB. JBoss is a good choice; Glassfish is another good one. With JBoss and the JBoss Tools plugin for Eclipse installed, you will be able to build and run a basic application rapidly.
EDIT : a complete Timer EJB class (with automatic reload if needed)
package clouderial.saas.commons.utils;
import java.util.Map;
import javax.annotation.PreDestroy;
import javax.annotation.Resource;
import javax.ejb.ScheduleExpression;
import javax.ejb.Timeout;
import javax.ejb.Timer;
import javax.ejb.TimerConfig;
import javax.ejb.TimerService;
import javax.inject.Inject;
import jmcnet.libcommun.exception.ExceptionTechnique;
import jmcnet.libcommun.utilit.mail.MailException;
import org.apache.commons.configuration.event.ConfigurationEvent;
import org.apache.commons.configuration.event.ConfigurationListener;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import clouderial.saas.commons.email.EmailSender;
import clouderial.saas.commons.jpamongo.JPAMongoBasePersistenceContextAccessor;
/**
* A base class for a periodic process
* @author jmc
*
*/
public abstract class PeriodicProcessBase extends JPAMongoBasePersistenceContextAccessor implements ConfigurationListener {
private static Logger log = LoggerFactory.getLogger(PeriodicProcessBase.class);
@Resource
private TimerService timerService;
@Inject
protected GlobalConfiguration _config;
@Inject
protected EmailSender _emailSender;
private Timer _timer=null;
private String _processName=null;
private Logger _log = null;
protected void initTimer(String processName, Logger log) {
if (processName != null) _processName = processName;
if (log != null) _log = log;
String second = _config.getString("timer."+_processName+".second","0");
String minute = _config.getString("timer."+_processName+".minute","0");
String hour = _config.getString("timer."+_processName+".hours","4");
String dayOfWeek = _config.getString("timer."+_processName+".dayOfWeek","*");
ScheduleExpression scheduleExp =
new ScheduleExpression().second(second).minute(minute).hour(hour).dayOfWeek(dayOfWeek);
cancelTimer();
if (timerService != null) {
_timer = timerService.createCalendarTimer(scheduleExp, new TimerConfig(_processName, false));
_log.info("{} : timer programmed for '{}'h, '{}'m, '{}'s for days '{}'.", _processName, hour, minute, second, dayOfWeek);
}
else _log.error("{} : no timer programmed because timerService is not initialized. (Normal during tests)", _processName);
// Listen to change
_config.addModificationListener(this); // on timer modification, configurationChanged is called
}
@PreDestroy
private void cancelTimer() {
if (_log != null) _log.info("Stopping timer for '{}'", _processName);
if (_timer != null) _timer.cancel();
_timer = null;
}
@Override
public void configurationChanged(ConfigurationEvent event) {
if (_log != null) _log.info("Configuration have change. Reloading config for ProcessBilling.");
_config.removeModificationListener(this);
initTimer(null, null);
}
@Timeout
private void run(Timer timer) {
runProcess(timer);
}
/**
* The entry point for running the process. Must be overridden by the subclass.
* @param timer
*/
protected abstract void runProcess(Timer timer); // do the job here
}
I hope this helps.

Camel ActiveMQ Performance Tuning

Situation
At present, we use some custom code on top of ActiveMQ libraries for JMS messaging. I have been looking at switching to Camel, for ease of use, ease of maintenance, and reliability.
Problem
With my present configuration, Camel's ActiveMQ implementation is substantially slower than our old implementation, both in terms of delay per message sent and received, and time taken to send and receive a large flood of messages. I've tried tweaking some configuration (e.g. maximum connections), to no avail.
Test Approach
I have two applications, one using our old implementation, one using a Camel implementation. Each application sends JMS messages to a topic on local ActiveMQ server, and also listens for messages on that topic. This is used to test two Scenarios:
- Sending 100,000 messages to the topic in a loop, and seeing how long it takes from the start of sending to the end of handling all of them.
- Sending a message every 100 ms and measuring the delay (in ns) from sending to handling each message.
Question
Can I improve upon the implementation below, in terms of time sent to time processed for both floods of messages, and individual messages? Ideally, improvements would involve tweaking some config that I have missed, or suggesting a better way to do it, and not be too hacky. Explanations of improvements would be appreciated.
Edit: Now that I am sending messages asynchronously, I appear to have a concurrency issue. receivedCount does not reach 100,000. Looking at the ActiveMQ web interface, 100,000 messages are enqueued and 100,000 dequeued, so it's probably a problem on the message processing side. I've changed receivedCount to an AtomicInteger and added some logging to aid debugging. Could this be a problem with Camel itself (or the ActiveMQ components), or is there something wrong with the message processing code? As far as I can tell, only ~99,876 messages are making it through to floodProcessor.process.
Test Implementation
Edit: Updated with async sending and logging for concurrency issue.
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.activemq.pool.PooledConnectionFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsConfiguration;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.log4j.Logger;
public class CamelJmsTest{
private static final Logger logger = Logger.getLogger(CamelJmsTest.class);
private static final boolean flood = true;
private static final int NUM_MESSAGES = 100000;
private final CamelContext context;
private final ProducerTemplate producerTemplate;
private long timeSent = 0;
private final AtomicInteger sendCount = new AtomicInteger(0);
private final AtomicInteger receivedCount = new AtomicInteger(0);
public CamelJmsTest() throws Exception {
context = new DefaultCamelContext();
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616");
PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory(connectionFactory);
JmsConfiguration jmsConfiguration = new JmsConfiguration(pooledConnectionFactory);
logger.info(jmsConfiguration.isTransacted());
ActiveMQComponent activeMQComponent = ActiveMQComponent.activeMQComponent();
activeMQComponent.setConfiguration(jmsConfiguration);
context.addComponent("activemq", activeMQComponent);
RouteBuilder builder = new RouteBuilder() {
@Override
public void configure() {
Processor floodProcessor = new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
int newCount = receivedCount.incrementAndGet();
//TODO: Why doesn't newCount hit 100,000? Remove this logging once fixed
logger.info(newCount + ":" + exchange.getIn().getBody());
if(newCount == NUM_MESSAGES){
logger.info("all messages received at " + System.currentTimeMillis());
}
}
};
Processor spamProcessor = new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
long delay = System.nanoTime() - timeSent;
logger.info("Message received: " + exchange.getIn().getBody(List.class) + " delay: " + delay);
}
};
from("activemq:topic:test?exchangePattern=InOnly")//.threads(8) // Having 8 threads processing appears to make things marginally worse
.choice()
.when(body().isInstanceOf(List.class)).process(flood ? floodProcessor : spamProcessor)
.otherwise().process(new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
logger.info("Unknown message type received: " + exchange.getIn().getBody());
}
});
}
};
context.addRoutes(builder);
producerTemplate = context.createProducerTemplate();
// For some reason, producerTemplate.asyncSendBody requires an Endpoint to be passed in, so the below is redundant:
// producerTemplate.setDefaultEndpointUri("activemq:topic:test?exchangePattern=InOnly");
}
public void send(){
int newCount = sendCount.incrementAndGet();
producerTemplate.asyncSendBody("activemq:topic:test?exchangePattern=InOnly", Arrays.asList(newCount));
}
public void spam(){
Executors.newSingleThreadScheduledExecutor().scheduleWithFixedDelay(new Runnable() {
@Override
public void run() {
timeSent = System.nanoTime();
send();
}
}, 1000, 100, TimeUnit.MILLISECONDS);
}
public void flood(){
logger.info("starting flood at " + System.currentTimeMillis());
for (int i = 0; i < NUM_MESSAGES; i++) {
send();
}
logger.info("flooded at " + System.currentTimeMillis());
}
public static void main(String... args) throws Exception {
CamelJmsTest camelJmsTest = new CamelJmsTest();
camelJmsTest.context.start();
if(flood){
camelJmsTest.flood();
}else{
camelJmsTest.spam();
}
}
}
It appears from your current JmsConfiguration that you are only consuming messages with a single thread. Was this intended?
If not, you need to set the concurrentConsumers property to something higher. This will create a threadpool of JMS listeners to service your destination.
Example:
JmsConfiguration config = new JmsConfiguration(pooledConnectionFactory);
config.setConcurrentConsumers(10);
This will create 10 JMS listener threads that will process messages concurrently from your queue.
EDIT:
For topics you can do something like this:
JmsConfiguration config = new JmsConfiguration(pooledConnectionFactory);
config.setConcurrentConsumers(1);
config.setMaxConcurrentConsumers(1);
And then in your route:
from("activemq:topic:test?exchangePattern=InOnly").threads(10)
Also, in ActiveMQ you can use a virtual destination. The virtual topic will act like a queue and then you can use the same concurrentConsumers method you would use for a normal queue.
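As a rough illustration of the virtual-destination idea (the endpoint names below are made up and rely on ActiveMQ's default VirtualTopic naming convention), the producer keeps publishing to a topic while the consumer route reads from the matching per-subscriber queue, which is where concurrentConsumers applies:
// Producer side: publish to the virtual topic instead of a plain topic.
producerTemplate.asyncSendBody("activemq:topic:VirtualTopic.test?exchangePattern=InOnly", Arrays.asList(newCount));

// Consumer side (inside RouteBuilder.configure()): each logical subscriber gets its own queue.
from("activemq:queue:Consumer.loadtest.VirtualTopic.test?concurrentConsumers=10")
    .process(floodProcessor);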
Further Edit (For Sending):
You are currently doing a blocking send. You need to do producerTemplate.asyncSendBody().
Edit
I just built a project with your code and ran it. I set a breakpoint in your floodProcessor method and newCount is reaching 100,000. I think you may be getting thrown off by your logging and the fact that you are sending and receiving asynchronously. On my machine newCount hit 100,000 and the "all messages received" message was logged well under 1 second after execution started, but the program continued to log for another 45 seconds afterwards because the output was buffered. You can see the effect of logging on how close your newCount number is to the body number by reducing the logging. I turned the logging down to INFO, shut off Camel's own logging, and the two numbers matched at the end of the log:
INFO CamelJmsTest - 99996:[99996]
INFO CamelJmsTest - 99997:[99997]
INFO CamelJmsTest - 99998:[99998]
INFO CamelJmsTest - 99999:[99999]
INFO CamelJmsTest - 100000:[100000]
INFO CamelJmsTest - all messages received at 1358778578422
I took over from the original poster in looking at this as part of another task, and found that the problem with losing messages was actually in the ActiveMQ config.
We had the setting sendFailIfNoSpace=true, which was resulting in messages being dropped if we were sending fast enough to fill the publisher's cache. Playing around with the policyEntry topic cache size, I could vary the number of messages that disappeared with about as much reliability as can be expected of such a race condition. With sendFailIfNoSpace=false (the default), I could use any cache size I liked and never fail to receive all the messages.
In theory sendFailIfNoSpace should throw a ResourceAllocationException when it drops a message, but that is either not happening(!) or is being swallowed somehow. Also interesting is that our custom JMS wrapper code doesn't hit this problem despite running the throughput test faster than Camel. Maybe that code is fast in a way that empties the publisher cache sooner, or else we are overriding sendFailIfNoSpace somewhere in the connection code that I haven't found yet.
On the question of speed, we have implemented all the suggestions mentioned here so far except for virtual destinations, but the Camel version test with 100K messages still runs in 16 seconds on my machine compared to 10 seconds for our own wrapper. As mentioned above, I have a sneaking suspicion that we are (implicitly or otherwise) overriding config somewhere in our wrapper, but I doubt it is anything that would cause that big a performance boost within ActiveMQ.
Virtual destinations as mentioned by gwithake might speed up this particular test, but most of the time with our real workloads it is not an appropriate solution.

Jboss Netty - How to serve 3 connections using 2 worker threads

Just as a simple example, let's say I want to handle 3 simultaneous TCP client connections using only 2 worker threads in Netty. How would I do it?
Questions
A)
With the code below, my third connection doesn't get any data from the server - the connection just sits there. Notice how my worker executor and worker count are both 2.
So if I have 2 worker threads and 3 connections, shouldn't all three connections be served by the 2 threads?
B)
Another question is: does Netty use the CompletionService from java.util.concurrent? It doesn't seem to. Also, I didn't see any source code that calls executor.submit or future.get.
All of this has added to my confusion about how it handles and serves data when there are MORE connections than worker threads.
C)
I'm lost on how Netty handles 10,000+ simultaneous TCP connections... will it create 10,000 threads? A thread per connection is not a scalable solution, so I'm confused, especially because my test code doesn't work as expected.
import java.net.InetSocketAddress;
import java.nio.channels.ClosedChannelException;
import java.util.Date;
import java.util.concurrent.Executors;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelFutureListener;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;
import org.jboss.netty.handler.codec.string.StringEncoder;
public class SRNGServer {
public static void main(String[] args) throws Exception {
// Configure the server.
ServerBootstrap bootstrap = new ServerBootstrap(
new NioServerSocketChannelFactory(
Executors.newCachedThreadPool(),
//Executors.newCachedThreadPool()
Executors.newFixedThreadPool(2),2
));
// Configure the pipeline factory.
bootstrap.setPipelineFactory(new SRNGServerPipelineFactoryP());
// Bind and start to accept incoming connections.
bootstrap.bind(new InetSocketAddress(8080));
}
private static class SRNGServerHandlerP extends SimpleChannelUpstreamHandler {
private static final Logger logger = Logger.getLogger(SRNGServerHandlerP.class.getName());
@Override
public void channelConnected(
ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
// Send greeting for a new connection.
Channel ch=e.getChannel();
System.out.printf("channelConnected with channel=[%s]%n", ch);
ChannelFuture writeFuture=e.getChannel().write("It is " + new Date() + " now.\r\n");
SRNGChannelFutureListener srngcfl=new SRNGChannelFutureListener();
System.out.printf("Registered listener=[%s] for future=[%s]%n", srngcfl, writeFuture);
writeFuture.addListener(srngcfl);
}
@Override
public void exceptionCaught(
ChannelHandlerContext ctx, ExceptionEvent e) {
logger.log(
Level.WARNING,
"Unexpected exception from downstream.",
e.getCause());
if(e.getCause() instanceof ClosedChannelException){
logger.log(Level.INFO, "****** Connection closed by client - Closing Channel");
}
e.getChannel().close();
}
}
private static class SRNGServerPipelineFactoryP implements ChannelPipelineFactory {
public ChannelPipeline getPipeline() throws Exception {
// Create a default pipeline implementation.
ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("encoder", new StringEncoder());
pipeline.addLast("handler", new SRNGServerHandlerP());
return pipeline;
}
}
private static class SRNGChannelFutureListener implements ChannelFutureListener{
public void operationComplete(ChannelFuture future) throws InterruptedException{
Thread.sleep(1000*5);
Channel ch=future.getChannel();
if(ch!=null && ch.isConnected()){
ChannelFuture writeFuture=ch.write("It is " + new Date() + " now.\r\n");
//-- Add this instance as listener itself.
writeFuture.addListener(this);
}
}
}
}
I haven't analyzed your source code in detail, so I don't know exactly why it doesn't work properly. But this line in SRNGChannelFutureListener looks suspicious:
Thread.sleep(1000*5);
This will keep the thread that executes it blocked for 5 seconds; the thread will not be available to do any other processing during that time.
Question C: No, it will not create 10,000 threads; the whole point of Netty is that it doesn't do that, because that would indeed not scale very well. Instead, it uses a limited number of threads from a thread pool, generates events whenever something happens, and runs event handlers on the threads in the pool. So, threads and connections are decoupled from each other (there is not a thread for each connection).
To make this mechanism work properly, your event handlers should return as quickly as possible, to make the threads that they run on available for running the next event handler as quickly as possible. If you make a thread sleep for 5 seconds, then you're keeping the thread allocated, so it won't be available for handling other events.
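For illustration only, one way to avoid tying up the worker thread would be to hand the delayed write to a scheduler instead of sleeping inside the listener. This is a sketch against the Netty 3 API used in the question; the scheduler choice and its sizing are assumptions:
import java.util.Date;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelFutureListener;

// Sketch: schedule the next write instead of sleeping on the Netty worker thread.
public class ScheduledWriteListener implements ChannelFutureListener {

    // Shared scheduler; in real code this would be sized and shut down appropriately.
    private static final ScheduledExecutorService SCHEDULER =
            Executors.newSingleThreadScheduledExecutor();

    public void operationComplete(ChannelFuture future) {
        final Channel ch = future.getChannel();
        if (ch != null && ch.isConnected()) {
            // Hand off the delayed write so the I/O worker thread returns immediately.
            SCHEDULER.schedule(new Runnable() {
                public void run() {
                    ChannelFuture writeFuture = ch.write("It is " + new Date() + " now.\r\n");
                    writeFuture.addListener(ScheduledWriteListener.this);
                }
            }, 5, TimeUnit.SECONDS);
        }
    }
}
With this change the two worker threads stay free to service all three connections, because no handler or listener ever blocks them.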
Question B: If you really want to know you could get the source code to Netty and find out. It uses selectors and other java.nio classes for doing asynchronous I/O.
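For the curious, the pattern underneath is roughly the bare-bones java.nio sketch below (this is not Netty's actual implementation): a single thread blocks in select() and multiplexes readiness events for many channels, which is why the number of threads is independent of the number of connections.
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.util.Iterator;

// Minimal selector loop: one thread handling readiness events for any number of channels.
public class SelectorSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    // accept the connection, register it with the selector for OP_READ, etc.
                }
            }
        }
    }
}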
