Should I keep long running threads bare or use Executors? - java

I am working on a streaming Java application that uses several long-running worker threads. The application receives data, processes it, and then sends it along to a third party using their SDK. There is an Engine class that receives data and submits it to Workers. The worker threads will live for as long as the application runs, which could be months if not years.
I have included sample code that represents the key part of this question.
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class BarEngine implements Engine
{
static Logger log = LoggerFactory.getLogger(BarEngine.class);
private static final int WORKER_COUNT = 5;
private final BlockingQueue<Map<String, Object>> queue = new LinkedBlockingQueue<Map<String, Object>>();
private FooWorker[] workers = new FooWorker[WORKER_COUNT];
public BarEngine()
{
for (int i = 0; i < WORKER_COUNT; i++)
{
workers[i] = new FooWorker(i, queue);
workers[i].start();
}
}
// From Engine Interface
@Override
public void sendEvent(final Map<String, Object> data)
{
try
{
queue.put(data);
}
catch (InterruptedException e)
{
log.error("Unexpected Exception", e);
}
}
// From Engine Interface
@Override
public void shutDown()
{
// Shuts down engine
}
public static class FooWorker extends Thread
{
static Logger log = LoggerFactory.getLogger(FooWorker.class);
private volatile boolean run = true;
private int id;
private BlockingQueue<Map<String, Object>> queue;
private Client client;
public FooWorker(int id, BlockingQueue<Map<String, Object>> queue)
{
this.id = id;
this.queue = queue;
client = Client.build(id);
}
@Override
public void run()
{
setName("FooWorker-" + id);
while (run)
{
try
{
Map<String, Object> data = queue.poll(5, TimeUnit.SECONDS);
if (null != data)
{
sendEvent(data);
}
}
catch (Throwable e)
{
log.error("Unexpected Exception", e);
}
}
}
private void sendEvent(Map<String, Object> data)
{
try
{
client.submit(data);
}
catch (Throwable e)
{
log.error("Unexpected Exception", e);
}
}
// dummy client class
public static class Client
{
public void submit(Map<String, Object> data)
{
// submits data
}
public static Client build(int id)
{
// Builds client
return new Client();
}
}
}
}
I have been doing a bit of research, and I have not found a satisfactory answer.
Java Concurrency in Practice: does not provide much guidance on long-running threads.
When should we use Java's Thread over Executor?: heavily suggests I should ALWAYS use an Executor, but does not cover application-lifetime threads per se.
Java Executor and Long-lived Threads: discusses managing long-lived threads with an Executor, but does not answer whether one SHOULD manage long-lived threads with an Executor.
My question is: should I keep these long-running Threads bare like this? If not, what should I replace them with (like ExecutorService, or something else)?

Answering your question: if you have threads that share the application's lifetime, in my opinion it doesn't matter whether you use a bare Thread or an ExecutorService (which uses Threads underneath), as long as you manage the threads' life cycle properly.
From a design point of view, your application falls into the category of software we call "middleware". Middleware should generally be efficient as well as scalable, both essential qualities for such a server, yet your design addresses neither. Each of your worker threads runs a polling loop that keeps waking up on a timeout, even when the incoming load is very low, which is not a good quality for such an application.
Instead, I propose that you use a thread pool implementation such as ThreadPoolExecutor, which already solves what you are trying to accomplish here. ThreadPoolExecutor buffers tasks in a BlockingQueue when all currently running threads are busy; it can also stop threads when the load is low and start them up again when needed. I have coded the structure of the design I'm proposing; take a look at the following code. I assumed that Client is not thread-safe, so I construct a Client per task. If your real client implementation is thread-safe, you can share one client across all threads.
import java.util.Map;
import java.util.concurrent.*;
public class BarEngine implements Engine {
private static final int WORKER_COUNT = 5;
private ExecutorService threadPool;
public BarEngine() {
this.threadPool = new ThreadPoolExecutor(1, WORKER_COUNT, 10, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<Runnable>(100));
}
// From Engine Interface
@Override
public void sendEvent(final Map<String, Object> data) {
threadPool.submit(new FooWorker(data));
}
// From Engine Interface
@Override
public void shutDown() {
this.threadPool.shutdown();
// Shuts down engine
}
public static class FooWorker implements Runnable {
private final Client client;
private final Map<String, Object> data;
public FooWorker(Map<String, Object> data) {
client = Client.build(Thread.currentThread().getId());
this.data = data;
}
@Override
public void run() {
try {
if (null != data) {
sendEvent(data);
}
} catch (Throwable e) {
//todo log
}
}
private void sendEvent(Map<String, Object> data) {
try {
client.submit(data);
} catch (Throwable e) {
//todo log
}
}
// dummy client class
public static class Client {
public void submit(Map<String, Object> data) {
// submits data
}
public static Client build(long id) {
// Builds client
return new Client();
}
}
}
}

Yes, what you posted is exactly a use case for ExecutorService. Some advice:
There are two main interfaces: Executor and ExecutorService.
Executors should be shut down when you no longer need them, via the shutdown/shutdownNow methods on the ExecutorService interface. If not, you will leak threads and memory.
Create your ExecutorService via Executors, for example:
Executors.newFixedThreadPool(5);
If you use an ExecutorService, you can push data directly to it as Runnables, because the ExecutorService does the queuing itself and distributes the tasks among its worker threads...

Related

Semaphores not avoiding thread loss

this is my first question here so please bear with me.
I am currently working on a UNI assignment on multithreading and concurrency in Java where we are asked to implement various versions of a "Call Center" using different thread locking methods, with one of them being Semaphores. I'll get right into the code to show what my problem is:
Producer Class:
public final class Caller implements Runnable {
private final CallCenter callCenter;
public Caller(long id, CallCenter callCenter) {
this.callCenter = callCenter;
}
@Override
public void run() {
try {
callCenter.receive(new Call());
} catch(Exception ex) {
throw new RuntimeException(ex);
}
}
}
Consumer Class:
public final class Operator implements Runnable {
private final CallCenter callCenter;
private Call call;
public Operator(CallCenter callCenter) {
this.callCenter = callCenter;
}
@Override
public void run() {
try {
this.call = callCenter.answer();
} catch(InterruptedException ex) {
throw new RuntimeException(ex);
}
}
public Call getCall() {
return this.call;
}
}
Service:
import java.util.Queue;
import java.util.concurrent.Semaphore;
import java.util.LinkedList;
public final class BoundedCallCenterSemaphore implements BoundedCallCenter {
private final Queue<Call> pendingCalls = new LinkedList<Call>();
private Semaphore semaphore = new Semaphore(MAX_NUMBER_OF_PENDING_CALLS, true);
public void receive(Call call) throws Exception {
semaphore.acquire();
pendingCalls.add(call);
}
public Call answer() throws InterruptedException {
semaphore.release();
return pendingCalls.poll();
}
}
Call Implementation:
import java.util.concurrent.atomic.AtomicLong;
public final class Call {
private static final AtomicLong currentId = new AtomicLong();
private final long id = currentId.getAndIncrement();
public long getId() {
return id;
}
}
Disclaimer
I know I am probably not using the semaphore the way it is intended to be used, but reading the official docs and other blogs/answers has not helped at all.
We have the following constraints: only modify the Service class, solve it using Semaphores, and use only Semaphore.acquire() and Semaphore.release() to avoid races and busy waiting; no other method or thread-locking structure is allowed.
Actual Problem:
I'll avoid posting the entirety of the tests written by our professor; just know that 100 calls are sent to the Service, and for simplicity each caller only calls once and each operator only answers once. When implementing the call center without semaphores you get busy-waiting from a while loop, and concurrency is not well managed: some calls can be answered twice or more if different threads act simultaneously. The mission here is to eliminate busy-waiting and ensure each call is received and answered exactly once. I tried using semaphores as shown above, and while the busy-waiting is eliminated, some of the calls end up not being answered at all. Any advice on what I am doing wrong? How do I ensure that each and every call is answered exactly once?
In the end, I did it using three semaphores. The first semaphore, new Semaphore(MAX_NUMBER_OF_PENDING_CALLS, true), guards the queue by blocking new entries when pendingCalls.size() >= MAX_NUMBER_OF_PENDING_CALLS. The second semaphore, new Semaphore(1, true), guards the producer threads, allowing just one thread at a time to access the queue for adding operations. The third and last semaphore, new Semaphore(0, true), starts with no permits and waits for the first producer thread to insert the first call into the buffer.
Code
import java.util.LinkedList;
import java.util.concurrent.Semaphore;
public final class BoundedCallCenterSemaphore implements BoundedCallCenter {
private final LinkedList<Call> pendingCalls = new LinkedList<Call>();
static Semaphore receiver = new Semaphore(1, true);
static Semaphore storage = new Semaphore(MAX_NUMBER_OF_PENDING_CALLS, true);
static Semaphore operants = new Semaphore(0, true);
public void receive(Call call) throws Exception {
try {
storage.acquire();
}
catch (InterruptedException e)
{
}
try {
receiver.acquire();
}
catch (InterruptedException e)
{
}
synchronized (pendingCalls) {
pendingCalls.add(call);
operants.release();
}
}
public Call answer() throws InterruptedException {
try
{
operants.acquire();
}
catch (InterruptedException e)
{
}
Call call = null;
synchronized (pendingCalls) {
call = pendingCalls.poll();
storage.release();
receiver.release();
}
return call;
}
}

Concurrency: how to implement an executor with both incoming and outgoing queues?

As we know, ThreadPoolExecutor uses a BlockingQueue as the queue of incoming tasks. What I want is a ThreadPoolExecutor that has a second queue for task results that are ready. I want to use this queue as a source for input/output services that send or store these results.
Why do I want to create a separate queue? Because I want to decouple the action of sending results from the action of obtaining them. Also, I suppose any exceptions and delays that accompany input/output operations should not affect the ThreadPoolExecutor that is calculating the results.
I have created a naive implementation of this and would like to get some criticism on it. Maybe it can be implemented better with out-of-the-box Java classes? I use Java 7.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
public class ThreadPoolWithResultQueue {
interface Callback<T> {
void complete(T t);
}
public abstract static class CallbackTask<T> implements Runnable {
private final Callback<T> callback;
CallbackTask(Callback<T> callback) {
this.callback = callback;
}
public abstract T execute();
final public void run() {
T t = execute();
callback.complete(t);
}
}
public static class CallBackTaskString extends CallbackTask<String> {
public CallBackTaskString(Callback<String> callback) {
super(callback);
}
@Override
public String execute() {
try {
Thread.sleep(3000);
} catch (InterruptedException e) {
}
return hashCode() + "-" + System.currentTimeMillis();
}
}
public static void main(String[] args) throws InterruptedException {
BlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<Runnable>();
final BlockingQueue<String> resultQueue = new LinkedBlockingQueue<String>();
Callback<String> addToQueueCallback = new Callback<String>() {
@Override
public void complete(String s) {
System.out.println("Adding Result To Queue " + s);
resultQueue.add(s); //adding to outgoing queue. some other executor (or same one?) will process it
}
};
ThreadPoolExecutor executor = new ThreadPoolExecutor(3, 5, 1000l, TimeUnit.DAYS, workQueue);
for (int i = 0; i <= 5; i++) {
executor.submit(new CallBackTaskString(addToQueueCallback));
};
System.out.println("All submitted.");
executor.shutdown();
executor.awaitTermination(10l, TimeUnit.SECONDS);
System.out.println("Result queue size " + resultQueue.size());
}
}
For the sake of making a library component, you would have to wrap things up...
You could extend ThreadPoolExecutor, which has hook methods to intercept submitted tasks (beforeExecute/afterExecute), so you would push results out to a queue passed in the constructor.
That's basically ExecutorCompletionService, except that you would let the caller plug in a queue instead of the service exposing one.
Otherwise, this is typical proxying of the task. Fair job.

Multithreaded Java worker with a size restricted resource pool

I have this 'Worker' class, which uses a resource 'Client'.
There may be any number of threads, running the 'Worker' at any given time.
The 'Client' is not thread-safe, thus I'm using 'ThreadLocal' for it.
The 'Client' connects to some server and executes an HTTP 'Request' that the worker feeds to the 'Client'.
public class Worker {
// Client is NOT thread-safe !!!
private static ThreadLocal<Client> client = new ThreadLocal<Client>();
@Override
protected void onGet(Request req) {
handleRequest(req);
}
private void handleRequest(Request req) {
someRunnableExecutor(new Runnable() {
@Override
public void run() {
get_client().send_req(req);
}
});
}
private Client get_client() {
Client c = client.get();
if (c == null) {
c = new Client();
client.set(c);
}
return c;
}
}
In the current implementation (above), stripped down for clarity, there are as many "active" 'Clients' as there are running 'Workers'.
This is a problem because the server is being exhausted.
All I can do is fix the 'Worker'. I have no access to the 'Client', the server, or the executor that runs the workers.
What I want is a Queue of 'Client'(s) and a piece of synchronized code in the 'Worker' that takes a 'Client' off the Queue; if the Queue is empty, the 'Worker' should wait until there is one in the Queue for it to take. Putting the 'Client' back into the Queue should be synchronized as well.
I really want to keep it as simple as possible, with the minimum possible changes to the code.
No new classes, no factories, just some data structure to hold the 'Client'(s) and synchronization.
I am a bit puzzled about how to achieve that in general, as well as by the fact that the 'Client' is not thread-safe and that I have to 'ThreadLocal'(ize) it. Is this how I would put it in a Queue?
private static Queue<ThreadLocal<Client>> queue =
new LinkedList<ThreadLocal<Client>>();
Also, how/where do I initialize that Queue, once, with say 5 clients?
Please share your thoughts.
You don't need ThreadLocal here, as you want to have fewer Clients than Workers. All you need is a BlockingQueue.
Note: I assumed that Client's send_req is synchronous; if it's not, the code needs some changes in the run() method.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
public class Worker {
private static final int CLIENTS_NUMBER = 5;
private static final BlockingQueue<Client> queue = new LinkedBlockingQueue<>(CLIENTS_NUMBER);
static {
// add() never blocks and throws no checked exception; the capacity matches CLIENTS_NUMBER
for (int i = 0; i < CLIENTS_NUMBER; i++)
queue.add(new Client());
}
@Override
protected void onGet(Request req) {
handleRequest(req);
}
private void handleRequest(Request req) {
someRunnableExecutor(new Runnable() {
@Override
public void run() {
try {
Client client = takeClient();
try {
client.send_req(req);
} finally {
// always return the client, even if send_req fails
putClient(client);
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
});
}
private Client takeClient() throws InterruptedException {
return queue.take();
}
private void putClient(Client client) throws InterruptedException {
queue.put(client);
}
}

ServerSocket connection with more than one client

Folks, I'm a newbie in network programming and have come across the following issue: I need to write a server that can maintain connections with more than one client simultaneously. What I've written is the following:
Main class:
public class Main {
public static void main(String args[]) throws Exception{
ConnectionUtils.waitForClients();
}
}
ConnectionUtils class:
public class ConnectionUtils {
private static ServerSocket server;
static{
try {
server = new ServerSocket(54321);
} catch (Exception e) {
}
}
private static Runnable acceptor = new Runnable() {
@Override
public void run() {
try {
Client c = new Client(server.accept());
new Thread(acceptor).start();
c.sendLine("Hello client \n");
} catch (Exception e) {
}
}
};
public static void waitForClients(){
Thread clientAcceptor = new Thread(acceptor);
clientAcceptor.start();
}
}
and it works, more or less. But what is the downside of that approach? I suspect there are quite a few disadvantages, but I can't pin them down.
The problem is that you are creating an unbounded number of threads, and threads are expensive resources. You should be using a thread pool to limit the number of threads created in your program.
Consider using Executors instead of this low-level code. In the Oracle documentation on Executors there is an example similar to what you are doing. Check it out!
Heh, interesting. I wouldn't expect it to be wrong, but it sure isn't how I'd write it.
I'd probably have one thread in an infinite (semi-infinite, with a stop condition) loop that accepts connections and spawns threads, rather than something that looks like a recursive method but isn't. However, as far as I can see, it's not wrong.
Having said that, if you don't use your main thread for anything, why not do something like this (and keep in mind I'm not a network programmer either):
public class ConnectionUtils {
protected static volatile boolean stop = false;
public static void waitForClients() throws Exception {
while (!stop) {
Client c = new Client(server.accept());
new Thread(new ClientDelegate(c)).start();
}
}
public static class ClientDelegate implements Runnable {
private final Client client;
public ClientDelegate(Client c) { this.client = c; }
@Override
public void run() {
client.sendLine("Hello client\n");
}
}
}

BlockingQueue: how can multiple producers stop a single consumer?

I wrote a producer/consumer based program using Java's BlockingQueue. I'm trying to find a way to stop the consumer if all producers are done. There are multiple producers, but only one consumer.
I found several solutions for the "one producer, many consumers" scenario, e.g. using a "done packet / poison pill" (see this discussion), but my scenario is just the opposite.
Are there any best practice solutions?
The best-practice approach is to use a count-down latch. Whether this works for you is the more interesting question...
Perhaps each producer should register and deregister with the consumer, and when all producers are deregistered (and the queue is empty) then the consumer can terminate too.
Presumably your producers are working in different threads in the same VM and exit when they are done. I would create another thread that calls join() on all the producers in a loop; when it exits that loop (because all the producer threads have ended), it notifies the consumer that it's time to exit. This has to run in a separate thread because the join() calls will block. Incidentally, rolfl's suggestion of using a count-down latch would have the same problem, if I understand it correctly.
Alternatively, if the producers are Callables, the consumer can call isDone() and isCancelled() on their Futures in its loop; those calls won't block, so this can be done right in the consumer thread.
You could use something like the following; I use registerProducer() and unregisterProducer() to keep track of the producers. Another possible solution could make use of WeakReferences.
It's worth mentioning that this solution will not consume events that are still queued when the consumer is shut down, so some events may be lost on shutdown.
You would have to drain the queue when the consumer gets interrupted and then process the remaining events (a sketch of such a drain step follows the code below).
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
public class TestConsumerShutdown {
private static interface SomeEvent {
String getName();
}
private static class Consumer implements Runnable {
private final BlockingQueue<SomeEvent> queue = new ArrayBlockingQueue<>(10);
private final ExecutorService consumerExecutor = Executors.newSingleThreadExecutor();
private final AtomicBoolean isRunning = new AtomicBoolean();
private final AtomicInteger numberProducers = new AtomicInteger(0);
public void startConsumer() {
consumerExecutor.execute(this);
}
public void stopConsumer() {
consumerExecutor.shutdownNow();
try {
consumerExecutor.awaitTermination(Long.MAX_VALUE, TimeUnit.SECONDS);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
public void registerProducer() {
numberProducers.incrementAndGet();
}
public void unregisterProducer() {
if (numberProducers.decrementAndGet() < 1) {
stopConsumer();
}
}
public void produceEvent(SomeEvent event) throws InterruptedException {
queue.put(event);
}
@Override
public void run() {
if (isRunning.compareAndSet(false, true)) {
try {
while (!Thread.currentThread().isInterrupted()) {
SomeEvent event = queue.take();
System.out.println(event.getName());
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
} finally {
System.out.println("Consumer stopped.");
isRunning.set(false);
}
}
}
}
public static void main(String[] args) {
final Consumer consumer = new Consumer();
consumer.startConsumer();
final Runnable producerRunnable = new Runnable() {
@Override
public void run() {
final String name = Thread.currentThread().getName();
consumer.registerProducer();
try {
for (int i = 0; i < 10; i++) {
consumer.produceEvent(new SomeEvent() {
@Override
public String getName() {
return name;
}
});
}
System.out.println("Produver " + name + " stopped.");
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
} finally {
consumer.unregisterProducer();
}
}
};
List<Thread> producers = new ArrayList<>();
producers.add(new Thread(producerRunnable, "producer-1"));
producers.add(new Thread(producerRunnable, "producer-2"));
producers.add(new Thread(producerRunnable, "producer-3"));
for (Thread t : producers) {
t.start();
}
}
}
