Waiting on asynchronous HTTP requests in Java

I'm using Jetty HTTP Client to make about 50 HTTP calls asynchronously. The code looks something like this:
List<Address> addresses = getAddresses();
final List<String> done = Collections.synchronizedList(new LinkedList<String>());
List<ContentExchange> requests = new ArrayList<ContentExchange>();
for (Address address : addresses) {
    ContentExchange ce = new ContentExchange() {
        @Override
        protected void onResponseComplete() throws IOException {
            // handle response
            done.add("done");
        }
    };
    ce.setURL(createURL(address));
    requests.add(ce);
}
for (ContentExchange ce : requests) {
    httpClient.send(ce);
}
while (done.size() != addresses.size()) {
    Thread.yield();
}
System.out.println("All addresses processed");
It's calling a REST service that returns some data about the address. What I expect it to do is this:
Make 50 asynchronous (non-blocking) http calls.
The thread will wait until all 50 are finished.
However, it's not working. It works fine if I don't have the while loop, but I need to wait until all 50 are done. Is there some way to wait until all 50 are done?
Also I know about ExecutorService and multiple thread solution, but I need a single thread solution with non-blocking IO.

Use the java.util.concurrent.CountDownLatch to manage this.
Example from Eclipse Jetty 8.1.10.v20130312's Siege.java test class:
final CountDownLatch latch = new CountDownLatch(concurrent);
for (int i=0;i<concurrent;i++)
{
ConcurrentExchange ex = new ConcurrentExchange(client,latch,uris,repeats);
if (!ex.next()) // this executes the client.send()
{
latch.countDown(); // count down if client.send() was in error
}
}
latch.await(); // wait for all ConcurrentExchange's to complete (or error out)
Note: ConcurrentExchange is a private class within Siege.java.
Then in your HttpExchange object, use the CountDownLatch.countDown() call in the following methods
onConnectionFailed(Throwable x) - example
onException(Throwable x) - example
onExpire() - example
onResponseComplete() - example
Note that all of the examples use an AtomicBoolean named counted to make sure that each exchange is only counted once.
if (!counted.getAndSet(true)) // get the value, then set it to true
{
// only get here if counted returned false. (and that will only happen once)
latch.countDown(); // count down this exchange as being done.
}
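Applied to the code in the question, a minimal sketch might look like the following; it reuses the question's createURL() helper and httpClient, relies on the Jetty 8 HttpExchange callbacks listed above, and omits checked-exception handling for brevity:

final List<Address> addresses = getAddresses();
final CountDownLatch latch = new CountDownLatch(addresses.size());
for (Address address : addresses) {
    ContentExchange ce = new ContentExchange() {
        private final AtomicBoolean counted = new AtomicBoolean(false);
        @Override
        protected void onResponseComplete() throws IOException {
            // handle response, then count this exchange exactly once
            if (!counted.getAndSet(true)) {
                latch.countDown();
            }
        }
        @Override
        protected void onException(Throwable x) {
            // count failures too, otherwise await() would hang forever
            if (!counted.getAndSet(true)) {
                latch.countDown();
            }
        }
        // do the same in onConnectionFailed(Throwable x) and onExpire()
    };
    ce.setURL(createURL(address));
    httpClient.send(ce);
}
latch.await(); // the single caller thread waits; the I/O itself stays non-blocking
System.out.println("All addresses processed");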

Related

Right way to handle parallel API calls in Java

I have a web application in Java where as part of client HTTP request handling, I need to make 2 API calls. The way I am planning to implement is to offload 1 API call to thread pool and do the other call in the same thread and then combine the result.
I want to process API1 call in parallel but don't want to block it in queue. Hence if no threads available, I am doing it sequentially.
This is what I have come up with.
//this is already created in setup, just listing here for reference.
ThreadPoolExecutor tpe = new ThreadPoolExecutor(1, 2, 300, TimeUnit.SECONDS, new SynchronousQueue<>());
.....
private Future<Integer> getDataFromAPI1(ThreadPoolExecutor tpe) {
    try {
        return tpe.submit(new Callable<Integer>() {
            @Override
            public Integer call() throws Exception {
                // ....make API call here
                return 1; // return result
            }
        });
    } catch (RejectedExecutionException r) {
        // do sequentially and throw any exception encountered
        // .....
        return CompletableFuture.completedFuture(1); // return the result
    }
}

public Integer handle(String reqStub) throws Exception {
    Future<Integer> f1 = getDataFromAPI1(tpe);
    // make API call 2 here in the same thread
    Integer r2 = 1; // ... this populates r2 (placeholder value)
    Integer r1 = f1.get();
    // now return the final result based on the 2 results
    return r1 + r2;
}
Assume that exception handling is done by caller of handle() method.
Does the code snippet look good in terms of correctness and performance?
Are there better ways of achieving the same?
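For comparison, here is a minimal sketch of the same fall-back-to-sequential idea written with CompletableFuture; callApi1() and callApi2() are hypothetical stand-ins for the two API calls, and tpe is the executor from the snippet above:

private Integer callApi1() { return 1; } // hypothetical API call 1
private Integer callApi2() { return 2; } // hypothetical API call 2

public Integer handle(String reqStub) throws Exception {
    Future<Integer> f1;
    try {
        // run API1 on the pool if a worker thread is free...
        f1 = CompletableFuture.supplyAsync(this::callApi1, tpe);
    } catch (RejectedExecutionException r) {
        // ...otherwise fall back to calling it on the current thread
        f1 = CompletableFuture.completedFuture(callApi1());
    }
    Integer r2 = callApi2(); // API2 always runs on the current thread
    return f1.get() + r2;
}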

Do not share same socket between two threads at the same time

I have around 60 sockets and 20 threads, and I want to make sure each thread works on a different socket every time; I don't want two threads to share the same socket at all.
In my SocketManager class, I have a background thread which runs every 60 seconds and calls the updateLiveSockets() method. In updateLiveSockets(), I iterate over all the sockets I have and ping them one by one by calling the send method of the SendToQueue class, and based on the response I mark them as live or dead. In updateLiveSockets() I always need to iterate over all the sockets and ping them to check whether they are live or dead.
Now all the reader threads call the getNextSocket() method of the SocketManager class concurrently to get the next available live socket and send a business message on it. So there are two types of messages that I send on a socket:
One is a ping message. This is only sent from the timer thread calling the updateLiveSockets() method in the SocketManager class.
The other is a business message. This is done in the SendToQueue class.
So if the pinger thread is pinging a socket to check whether it is live, then no business thread should use that socket. Similarly, if a business thread is using a socket to send data, the pinger thread should not ping that socket. And this applies to all the sockets. But I also need to make sure that updateLiveSockets pings all the available sockets whenever my background thread runs, so that we can figure out which sockets are live or dead.
Below is my SocketManager class:
public class SocketManager {
private static final Random random = new Random();
private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
private final Map<Datacenters, List<SocketHolder>> liveSocketsByDatacenter =
new ConcurrentHashMap<>();
private final ZContext ctx = new ZContext();
// ...
private SocketManager() {
connectToZMQSockets();
scheduler.scheduleAtFixedRate(new Runnable() {
public void run() {
updateLiveSockets();
}
}, 60, 60, TimeUnit.SECONDS);
}
// during startup, making a connection and populate once
private void connectToZMQSockets() {
Map<Datacenters, List<String>> socketsByDatacenter = Utils.SERVERS;
for (Map.Entry<Datacenters, List<String>> entry : socketsByDatacenter.entrySet()) {
List<SocketHolder> addedColoSockets = connect(entry.getValue(), ZMQ.PUSH);
liveSocketsByDatacenter.put(entry.getKey(), addedColoSockets);
}
}
private List<SocketHolder> connect(List<String> addresses, int socketType) {
List<SocketHolder> socketList = new ArrayList<>();
// ....
return socketList;
}
// this method will be called by multiple threads concurrently to get the next live socket
// is there any concurrency or thread safety issue or race condition here?
public Optional<SocketHolder> getNextSocket() {
for (Datacenters dc : Datacenters.getOrderedDatacenters()) {
Optional<SocketHolder> liveSocket = getLiveSocket(liveSocketsByDatacenter.get(dc));
if (liveSocket.isPresent()) {
return liveSocket;
}
}
return Optional.absent();
}
private Optional<SocketHolder> getLiveSocket(final List<SocketHolder> listOfEndPoints) {
if (!listOfEndPoints.isEmpty()) {
// The list of live sockets
List<SocketHolder> liveOnly = new ArrayList<>(listOfEndPoints.size());
for (SocketHolder obj : listOfEndPoints) {
if (obj.isLive()) {
liveOnly.add(obj);
}
}
if (!liveOnly.isEmpty()) {
// The list is not empty, so pick a random element
return Optional.of(liveOnly.get(random.nextInt(liveOnly.size()))); // just pick one
}
}
return Optional.absent();
}
// runs every 60 seconds to ping all the available sockets and check whether they are alive or not
private void updateLiveSockets() {
Map<Datacenters, List<String>> socketsByDatacenter = Utils.SERVERS;
for (Map.Entry<Datacenters, List<String>> entry : socketsByDatacenter.entrySet()) {
List<SocketHolder> liveSockets = liveSocketsByDatacenter.get(entry.getKey());
List<SocketHolder> liveUpdatedSockets = new ArrayList<>();
for (SocketHolder liveSocket : liveSockets) {
Socket socket = liveSocket.getSocket();
String endpoint = liveSocket.getEndpoint();
Map<byte[], byte[]> holder = populateMap();
Message message = new Message(holder, Partition.COMMAND);
// pinging to see whether a socket is live or not
boolean isLive = SendToQueue.getInstance().send(message.getAddress(), message.getEncodedRecords(), socket);
SocketHolder zmq = new SocketHolder(socket, liveSocket.getContext(), endpoint, isLive);
liveUpdatedSockets.add(zmq);
}
liveSocketsByDatacenter.put(entry.getKey(), Collections.unmodifiableList(liveUpdatedSockets));
}
}
}
And here is my SendToQueue class:
// this method will be called by multiple reader threads (around 20) concurrently to send the data
public boolean sendAsync(final long address, final byte[] encodedRecords) {
PendingMessage m = new PendingMessage(address, encodedRecords, true);
cache.put(address, m);
return doSendAsync(m);
}
private boolean doSendAsync(final PendingMessage pendingMessage) {
Optional<SocketHolder> liveSocket = SocketManager.getInstance().getNextSocket();
if (!liveSocket.isPresent()) {
// log error
return false;
}
ZMsg msg = new ZMsg();
msg.add(pendingMessage.getEncodedRecords());
try {
// send data on a socket LINE A
return msg.send(liveSocket.get().getSocket());
} finally {
msg.destroy();
}
}
public boolean send(final long address, final byte[] encodedRecords, final Socket socket) {
PendingMessage m = new PendingMessage(address, encodedRecords, socket, false);
cache.put(address, m);
try {
if (doSendAsync(m, socket)) {
return m.waitForAck();
}
return false;
} finally {
cache.invalidate(address);
}
}
Problem Statement
Now, as you can see above, I am sharing the same socket between two threads. It seems getNextSocket() in the SocketManager class could return a 0MQ socket to Thread A. Concurrently, the timer thread may access the same 0MQ socket to ping it. In that case Thread A and the timer thread are mutating the same 0MQ socket, which can lead to problems. So I am trying to find a way to prevent different threads from sending data to the same socket at the same time and mucking up my data.
One solution I can think of is synchronizing on a socket while sending the data, but if many threads use the same socket, resources aren't well utilized. Moreover, if msg.send(socket); blocks (technically it shouldn't), all threads waiting for this socket are blocked. So I guess there might be a better way to ensure that every thread uses a different live socket at any given time, instead of synchronizing on a particular socket.
So I am trying to find a way so that I can prevent different threads from sending data to the same socket at the same time and mucking up my data.
There are certainly a number of different ways to do this. To me it seems like a BlockingQueue is the right thing to use here. The business threads would take a socket from the queue and would be guaranteed that no one else is using that socket.
private final BlockingQueue<SocketHolder> socketHolderQueue = new LinkedBlockingQueue<>();
...
public Optional<SocketHolder> getNextSocket() {
    SocketHolder holder = socketHolderQueue.poll();
    return Optional.fromNullable(holder); // Guava Optional, matching the question's code
}
...
public void finishedWithSocket(SocketHolder holder) {
    socketHolderQueue.offer(holder); // offer() avoids put()'s checked InterruptedException
}
I think that synchronizing on the socket is not a good idea for the reasons that you mention – the ping thread will be blocking the business thread.
There are a number of ways of handling the ping thread logic. I would store your Socket with a last use time and then your ping thread could every so often take each of the sockets from the same BlockingQueue, test it, and put each back onto the end of the queue after testing.
public void testSockets() {
    // run this once for as many sockets as are currently in the queue
    int numTests = socketHolderQueue.size();
    for (int i = 0; i < numTests; i++) {
        SocketHolder holder = socketHolderQueue.poll();
        if (holder == null) {
            break;
        }
        if (socketIsOk(holder)) {
            socketHolderQueue.offer(holder);
        } else {
            // close it here or something
        }
    }
}
You could also have the getNextSocket() code that dequeues sockets from the queue check the timestamp, put stale ones on a test queue for the ping thread to use, and then take the next one from the queue. The business threads would never be using the same socket at the same time as the ping thread.
Depending on when you want to test the sockets, you can also reset the timer when the business thread returns a socket to the queue, so the ping thread only tests a socket after X seconds of no use.
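A minimal sketch of that check-in side, assuming SocketHolder gains a hypothetical last-used timestamp:

public void finishedWithSocket(SocketHolder holder) {
    holder.setLastUsed(System.currentTimeMillis()); // hypothetical setter; resets the idle clock
    socketHolderQueue.offer(holder);
}

private boolean needsPing(SocketHolder holder, long maxIdleMillis) {
    // the ping thread would only test sockets that have been idle longer than maxIdleMillis
    return System.currentTimeMillis() - holder.getLastUsed() > maxIdleMillis;
}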
It looks like you should consider using the try-with-resources feature here. You would have the SocketHolder or Optional class implement the AutoCloseable interface. For instance, let us assume that Optional implements this interface. The Optional close method would then add the instance back to the container. I created a simple example that shows what I mean. It is not complete, but it gives you an idea of how to implement this in your code.
public class ObjectManager implements AutoCloseable {
private static class ObjectManagerFactory {
private static ObjectManager objMgr = new ObjectManager();
}
private ObjectManager() {}
public static ObjectManager getInstance() { return ObjectManagerFactory.objMgr; }
private static final int SIZE = 10;
private static BlockingQueue<AutoCloseable> objects = new LinkedBlockingQueue<AutoCloseable>();
private static ScheduledExecutorService sch;
static {
for(int cnt = 0 ; cnt < SIZE ; cnt++) {
objects.add(new AutoCloseable() {
@Override
public void close() throws Exception {
System.out.println(Thread.currentThread() + " - Adding object back to pool:" + this + " size: " + objects.size());
objects.put(this);
System.out.println(Thread.currentThread() + " - Added object back to pool:" + this);
}
});
}
sch = Executors.newSingleThreadScheduledExecutor();
sch.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
// TODO Auto-generated method stub
updateObjects();
}
}, 10, 10, TimeUnit.MICROSECONDS);
}
static void updateObjects() {
for(int cnt = 0 ; ! objects.isEmpty() && cnt < SIZE ; cnt++ ) {
try(AutoCloseable object = objects.take()) {
System.out.println(Thread.currentThread() + " - updateObjects - updated object: " + object + " size: " + objects.size());
} catch (Throwable t) {
// TODO Auto-generated catch block
t.printStackTrace();
}
}
}
public AutoCloseable getNext() {
try {
return objects.take();
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
return null;
}
}
public static void main(String[] args) {
try (ObjectManager mgr = ObjectManager.getInstance()) {
for (int cnt = 0; cnt < 5; cnt++) {
try (AutoCloseable o = mgr.getNext()) {
System.out.println(Thread.currentThread() + " - Working with " + o);
Thread.sleep(1000);
} catch (Throwable t) {
t.printStackTrace();
}
}
} catch (Throwable tt) {
tt.printStackTrace();
}
}
@Override
public void close() throws Exception {
// TODO Auto-generated method stub
ObjectManager.sch.shutdownNow();
}
}
I will make some points here. In the getNextSocket method, getOrderedDatacenters() will always return the same ordered list, so you will always pick from the same datacenters from start to end (that is not a problem in itself).
How do you guarantee that two threads won't get the same liveSocket from getNextSocket()?
What you are saying here is true:
Concurrently, the timer thread may access the same 0MQ socket to ping it.
I think the main problem here is that you don't distinguish between free sockets and reserved sockets.
One option, as you said, is to synchronize on each socket. Another option is to keep a list of reserved sockets, and when you want to get the next socket or to update sockets, pick only from the free sockets. You don't want to update a socket which is already reserved.
You can also take a look here if it fits your needs.
There's a concept in operating systems called the critical section. A critical section occurs when two or more concurrently executing processes share data; in that case, no process should modify or even read the shared data while another process is accessing it. So as a process enters the critical section, it must notify all other concurrently executing processes, which should then block while waiting to enter the critical section. You might ask who decides which process enters; that is another problem, called process scheduling, which controls which process enters the critical section, and the operating system does that for you.
So the best solution for you is a semaphore, where the value of the semaphore is the number of sockets. With a single socket you would use a binary semaphore initialized with a value of 1. Your code should then be divided into four main sections: critical section entry, the critical section itself, critical section exit, and the remainder section.
Critical section entry: a process enters the critical section and blocks all other processes. The semaphore allows one process (thread) to enter the critical section (use a socket), and the value of the semaphore is decremented (to zero).
The critical section: the work the process does while it holds the socket.
Critical section exit: the process releases the critical section for another process to enter. The semaphore value is incremented (back to 1), allowing another process to enter.
Remainder section: the rest of your code, excluding the previous three sections.
Now all you need is to open any Java tutorial about semaphores to see how to apply one in Java; it's really easy.
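For instance, a minimal sketch of the binary-semaphore version described above, guarding a single shared socket (the socket and the send call are hypothetical):

import java.util.concurrent.Semaphore;

private final Semaphore socketPermit = new Semaphore(1); // binary semaphore, value = 1

public void sendOnSharedSocket(byte[] payload) throws InterruptedException {
    socketPermit.acquire();        // critical section entry: at most one thread proceeds
    try {
        // critical section: only one thread at a time touches the shared socket
        // e.g. msg.send(sharedSocket);
    } finally {
        socketPermit.release();    // critical section exit: lets the next waiting thread in
    }
    // remainder section: everything else
}

With 60 sockets the same idea generalizes to new Semaphore(60), which limits how many threads are sending at once, though picking distinct sockets still needs a free-list as other answers describe.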
Mouhammed Elshaaer is right, but in addition you can also use any concurrent collection, for example a ConcurrentHashMap, to track that each thread works on a different socket (for example with the socket hash code as the key and the thread hash code, or something else, as the value).
To me it's a somewhat clumsy solution, but it can be used too.
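A minimal sketch of that map-based claim idea, using the question's SocketHolder as the key (the method names are assumptions):

private final ConcurrentHashMap<SocketHolder, Thread> inUse = new ConcurrentHashMap<>();

// returns true only for the first thread that claims this socket
public boolean tryClaim(SocketHolder holder) {
    return inUse.putIfAbsent(holder, Thread.currentThread()) == null;
}

// called in a finally block once the send or ping is done
public void release(SocketHolder holder) {
    inUse.remove(holder);
}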
For the problem of threads (Thread A and the timer thread) accessing the same socket, I would keep two socket lists for each datacenter:
list A: the sockets that are not in use
list B: the sockets that are in use
i.e.,
call a synchronized getNextSocket() to find a not-in-use socket in list A, remove it from list A and add it to list B;
call a synchronized returnSocket(Socket) upon receiving the response/ACK for a sent message (either business or ping), to move the socket from list B back to list A. Put a try {} finally {} block around the sending of the message to make sure that the socket is put back into list A even if there is an exception. A sketch follows below.
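A minimal sketch of that two-list idea, using the question's SocketHolder and Guava Optional (everything else is an assumption):

private final List<SocketHolder> notInUse = new ArrayList<>(); // list A
private final List<SocketHolder> inUse = new ArrayList<>();    // list B

public synchronized Optional<SocketHolder> getNextSocket() {
    if (notInUse.isEmpty()) {
        return Optional.absent(); // nothing free right now
    }
    SocketHolder holder = notInUse.remove(0);
    inUse.add(holder);
    return Optional.of(holder);
}

public synchronized void returnSocket(SocketHolder holder) {
    inUse.remove(holder);
    notInUse.add(holder);
}

Callers would wrap the actual send in try { ... } finally { returnSocket(holder); } so the socket always comes back to list A.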
I have a simple solution that may help you. I don't know whether in Java you can add a custom attribute to each socket; in Socket.io you can, so I will assume that here (I will delete this answer if not).
You would add a boolean attribute called locked to each socket. When a thread takes a socket, it sets locked to true. Any other thread that wants to ping THIS socket first checks whether locked is false; if it isn't, that thread calls getNextSocket to find another one.
So, in this stretch below...
...
for (SocketHolder liveSocket : liveSockets) {
Socket socket = liveSocket.getSocket();
...
You would check whether the socket is locked or not. If it is, skip it and move on to the next socket (or stop/interrupt that thread, whatever fits your code).
So the process is:
A thread gets an unlocked socket.
The thread sets that socket's locked flag to true.
The thread pings the socket and does whatever work it needs.
The thread sets the socket's locked flag back to false.
The thread moves on to the next socket.
Sorry for my bad English :)
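In Java this locked flag could live on the SocketHolder wrapper rather than on the socket itself; a minimal sketch using AtomicBoolean (the field and method names are assumptions, not part of the question's class):

private final AtomicBoolean locked = new AtomicBoolean(false); // inside SocketHolder

// returns true only if the calling thread won the socket
public boolean tryLock() {
    return locked.compareAndSet(false, true);
}

public void unlock() {
    locked.set(false);
}

getNextSocket() would then skip any holder whose tryLock() returns false, and callers would call unlock() in a finally block.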

Proper termination of a stuck Couchbase Observable

I'm trying to delete a batch of couchbase documents in rapid fashion according to some constraint (or update the document if the constraint isn't satisfied). Each deletion is dubbed a "parcel" according to my terminology.
When executing, I run into very strange behavior - the thread in charge of this task starts working as expected for a few iterations (at best). After this "grace period", Couchbase gets "stuck" and the Observable doesn't call any of its Subscriber's methods (onNext, onCompleted, onError) within the defined period of 30 seconds.
When the latch timeout occurs (see implementation below), the method returns but the Observable keeps executing (I noticed that when it kept printing debug messages when stopped with a breakpoint outside the scope of this method).
I suspect Couchbase is stuck because after a few seconds, many Observables are left in some kind of "ghost" state - alive and reporting to their Subscribers, which in turn have nothing to do because the method in which they were created has already returned, eventually leading to java.lang.OutOfMemoryError: GC overhead limit exceeded.
I don't know if what I claim here makes sense, but I can't think of another reason for this behavior.
How should I properly terminate an Observable upon timeout? Should I? Any other way around?
public List<InfoParcel> upsertParcels(final Collection<InfoParcel> parcels) {
final CountDownLatch latch = new CountDownLatch(parcels.size());
final List<JsonDocument> docRetList = new LinkedList<JsonDocument>();
Observable<JsonDocument> obs = Observable
.from(parcels)
.flatMap(parcel ->
Observable.defer(() ->
{
return bucket.async().get(parcel.key).firstOrDefault(null);
})
.map(doc -> {
// In-memory manipulation of the document
return updateDocs(doc, parcel);
})
.flatMap(doc -> {
boolean shouldDelete = ... // Decide by inner logic
if (shouldDelete) {
if (doc.cas() == 0) {
return Observable.just(doc);
}
return bucket.async().remove(doc);
}
return (doc.cas() == 0 ? bucket.async().insert(doc) : bucket.async().replace(doc));
})
);
obs.subscribe(new Subscriber<JsonDocument>() {
@Override
public void onNext(JsonDocument doc) {
docRetList.add(doc);
latch.countDown();
}
@Override
public void onCompleted() {
// Due to a bug in RxJava, onError() / retryWhen() does not intercept exceptions thrown from within the map/flatMap methods.
// Therefore, we need to recalculate the "conflicted" parcels and send them for update again.
while(latch.getCount() > 0) {
latch.countDown();
}
}
@Override
public void onError(Throwable e) {
// Same reason as above
while (latch.getCount() > 0) {
latch.countDown();
}
}
});
latch.await(30, TimeUnit.SECONDS);
// Recalculating remaining failed parcels and returning them for another cycle of this method (there's a loop outside)
}
I think this is indeed due to the fact that using a countdown latch doesn't signal the source that the flow of data processing should stop.
You could lean more on RxJava by using toList().timeout(30, TimeUnit.SECONDS).toBlocking().single() instead of collecting into an (unsynchronized and thus unsafe) external list and using the CountDownLatch.
This will block until a List of your documents is returned.
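Applied to the obs built in the question, a minimal sketch of that approach (the 30-second timeout mirrors the original latch; the error handling is only an illustration):

List<JsonDocument> docRetList;
try {
    docRetList = obs
        .toList()                        // collect every emitted document into one list
        .timeout(30, TimeUnit.SECONDS)   // error out if the stream stalls for 30 seconds
        .toBlocking()
        .single();                       // block the calling thread until the list is emitted
} catch (RuntimeException e) {
    // a TimeoutException (wrapped by RxJava) or any upstream error ends up here
    docRetList = Collections.emptyList();
}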
When you create your Couchbase environment in code, set computationPoolSize to something large. When the Couchbase client runs out of threads while using async, it just stops working and won't ever call the callback.
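A minimal sketch of that setting, assuming the Couchbase Java SDK 2.x environment builder (the pool size of 16 and the node address are arbitrary examples):

CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
    .computationPoolSize(16)  // example value; the default is the number of CPU cores
    .build();
Cluster cluster = CouchbaseCluster.create(env, "127.0.0.1");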

Queue Worker Thread stops working, thread safety issue?

I want to introduce my problem first.
I have several WorkingThreads that are receiving a string, processing the string and afterwards sending the processed string to a global Queue like this:
class Main {
public static Queue<String> Q;
public static void main(String[] args) {
//start working threads
}
}
WorkingThread.java:
class WorkingThread extends Thread {
    public void run() {
        String input;
        // do something with input to produce processedString
        Main.Q.add(processedString);
    }
}
So now, every 800 ms, another thread called Inserter dequeues all the entries to formulate some SQL, but that's not important.
class Inserter extends Thread {
public void run() {
while(!Main.Q.isEmpty()) {
System.out.print(".");
// dequeue and formulate some SQL
}
}
}
Everything works for about 5 to 10 minutes, but then suddenly I cannot see any dots printed (which are basically a heartbeat for the Inserter). The Queue is not empty, I can assure that, but the Inserter just won't work even though it gets started regularly.
I have a suspicion that there is a problem when a worker wants to insert something while the Inserter is dequeuing the Queue; could this possibly be some kind of "deadlock"?
I really hope somebody has an explanation for this behaviour. I am looking forward to learning ;)
EDIT: I am using
Queue<String> Q = new LinkedList<String>();
You are not using a synchronized or thread-safe Queue, therefore you have a race hazard. Your use of a LinkedList shows a (slightly scary) lack of awareness of this fact. You may want to read more about threading and thread safety before you tackle any more threaded code.
You must either synchronize manually or use one of the existing implementations provided by the JDK. Producer/consumer patterns are usually implemented using one of the BlockingQueue implementations.
A BlockingQueue of a bounded size will block producers trying to put if the queue is full. A BlockingQueue will always block consumers if the queue is empty.
This allows you to remove all of your custom logic that spins on the queue and waits for items.
A simple example using Java 8 lambdas would look like:
public static void main(String[] args) throws Exception {
final BlockingQueue<String> q = new LinkedBlockingQueue<>();
final ExecutorService executorService = Executors.newFixedThreadPool(4);
final Runnable consumer = () -> {
while (true) {
try {
System.out.println(q.take());
} catch (InterruptedException e) {
return;
}
}
};
executorService.submit(consumer);
final Stream<Runnable> producers = IntStream.range(0, 5).mapToObj(i -> () -> {
final Random random = ThreadLocalRandom.current();
while (true) {
q.add("Consumer " + i + " putting " + random.nextDouble());
try {
TimeUnit.MILLISECONDS.sleep(random.nextInt(2000));
} catch (InterruptedException e) {
//ignore
}
}
});
producers.forEach(executorService::submit);
}
The consumer blocks on the BlockingQueue.take method; as soon as an item is available it will be woken and will print the item. If there are no items, the thread is suspended, allowing the physical CPU to do something else.
The producers each push a String onto the queue using add. As the queue is unbounded, add will always return true. In cases where there is likely to be a backlog of work for the consumer, you can bound the queue and use the put method (which throws an InterruptedException and therefore requires a try..catch, which is why it's easier to use add here) - this will automatically create flow control.
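A minimal sketch of the bounded variant (the capacity of 1000 is an arbitrary example):

final BlockingQueue<String> q = new LinkedBlockingQueue<>(1000); // bounded capacity
// inside a producer:
try {
    q.put("some work item"); // blocks while the queue is full, throttling the producer
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}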
This seems more like a synchronization issue. You are trying to simulate the Producer-Consumer problem. You need to synchronize your Queue or use a BlockingQueue; you probably have a race condition.
You are going to need to synchronize access to your Queue, or
use ConcurrentLinkedQueue, see http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ConcurrentLinkedQueue.html
or, as also suggested, use a BlockingQueue (depending on your requirements): http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/BlockingQueue.html
For a more detailed explanation of the BlockingQueue see
http://tutorials.jenkov.com/java-util-concurrent/blockingqueue.html
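For example, a minimal sketch of the non-blocking drop-in change to the code in the question:

// in Main: a thread-safe queue instead of the plain LinkedList
public static Queue<String> Q = new ConcurrentLinkedQueue<String>();

// in Inserter.run(): poll() is safe to call while workers keep adding
String next;
while ((next = Main.Q.poll()) != null) {
    System.out.print(".");
    // dequeue and formulate some SQL with `next`
}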

Passing a Set of Objects between threads

The current project I am working on requires that I implement a way to efficiently pass a set of objects from one thread, that runs continuously, to the main thread. The current setup is something like the following.
I have a main thread which creates a new thread. This new thread operates continuously and calls a method based on a timer. This method fetches a group of messages from an online source and organizes them in a TreeSet.
This TreeSet then needs to be passed back to the main thread so that the messages it contains can be handled independent of the recurring timer.
For better reference my code looks like the following
// Called by the main thread on start.
void StartProcesses()
{
if(this.IsWindowing)
{
return;
}
this._windowTimer = Executors.newSingleThreadScheduledExecutor();
Runnable task = new Runnable() {
public void run() {
WindowCallback();
}
};
this.CancellationToken = false;
_windowTimer.scheduleAtFixedRate(task,
0, this.SQSWindow, TimeUnit.MILLISECONDS);
this.IsWindowing = true;
}
/////////////////////////////////////////////////////////////////////////////////
private void WindowCallback()
{
ArrayList<Message> messages = new ArrayList<Message>();
//TODO create Monitor
if((!CancellationToken))
{
try
{
//TODO fix epochWindowTime
long epochWindowTime = 0;
int numberOfMessages = 0;
Map<String, String> attributes;
// Setup the SQS client
AmazonSQS client = new AmazonSQSClient(new
ClasspathPropertiesFileCredentialsProvider());
client.setEndpoint(this.AWSSQSServiceUrl);
// get the NumberOfMessages to optimize how to
// Receive all of the messages from the queue
GetQueueAttributesRequest attributesRequest =
new GetQueueAttributesRequest();
attributesRequest.setQueueUrl(this.QueueUrl);
attributesRequest.withAttributeNames(
"ApproximateNumberOfMessages");
attributes = client.getQueueAttributes(attributesRequest).
getAttributes();
numberOfMessages = Integer.valueOf(attributes.get(
"ApproximateNumberOfMessages")).intValue();
// determine if we need to Receive messages from the Queue
if (numberOfMessages > 0)
{
if (numberOfMessages < 10)
{
// just do it inline it's less expensive than
//spinning threads
ReceiveTask(numberOfMessages);
}
else
{
//TODO Create a multithreading version for this
ReceiveTask(numberOfMessages);
}
}
if (!CancellationToken)
{
//TODO testing
_setLock.lock();
Iterator<Message> _setIter = _set.iterator();
//TODO
while(_setIter.hasNext())
{
Message temp = _setIter.next();
Long value = Long.valueOf(temp.getAttributes().
get("Timestamp"));
if(value.longValue() < epochWindowTime)
{
messages.add(temp);
_setIter.remove(); // remove via the iterator to avoid a ConcurrentModificationException
}
}
_setLock.unlock();
// TODO deduplicate the messages
// TODO reorder the messages
// TODO raise new Event with the results
}
if ((!CancellationToken) && (messages.size() > 0))
{
if (messages.size() < 10)
{
Pair<Integer, Integer> range =
new Pair<Integer, Integer>(Integer.valueOf(0),
Integer.valueOf(messages.size()));
DeleteTask(messages, range);
}
else
{
//TODO Create a way to divide this work among
//several threads
Pair<Integer, Integer> range =
new Pair<Integer, Integer>(Integer.valueOf(0),
Integer.valueOf(messages.size()));
DeleteTask(messages, range);
}
}
}catch (AmazonServiceException ase){
ase.printStackTrace();
}catch (AmazonClientException ace) {
ace.printStackTrace();
}
}
}
As can be seen from some of the comments, my current preferred way to handle this is to raise an event from the timer thread when there are messages. The main thread would then listen for this event and handle it appropriately.
Presently I am unfamiliar with how Java handles events, or how to create/listen for them. I also do not know if it is possible to create events and have the information contained within them passed between threads.
Can someone please give me some advice/insight on whether or not my methods are possible? If so, where might I find some information on how to implement them as my current searching attempts are not proving fruitful.
If not, can I get some suggestions on how I would go about this, keeping in mind I would like to avoid having to manage sockets if at all possible.
EDIT 1:
The main thread will also be responsible for issuing commands based on the messages it receives, or issuing commands to get required information. For this reason the main thread cannot wait on receiving messages, and should handle them in an event based manner.
Producer-Consumer pattern:
One thread (the producer) continuously puts objects (messages) into a queue.
Another thread (the consumer) reads and removes objects from the queue.
If your problem fits this, try BlockingQueue:
http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/BlockingQueue.html
It is easy and effective.
If the queue is empty, the consumer is blocked, which means the thread waits (and does not use CPU time) until the producer puts some objects; otherwise the consumer keeps consuming objects.
And if the queue is full, the producer is blocked until the consumer consumes some objects to make room in the queue, and vice versa.
Here's an example:
(the queue must be the same object in both the producer and the consumer)
(Producer thread)
Message message = createMessage();
queue.put(message);
(Consumer thread)
Message message = queue.take();
handleMessage(message);
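A minimal sketch of how this could look for the question's setup, with the timer thread handing each fetched batch of messages to the main thread (the queue name and the batch type are assumptions):

// shared between the timer thread and the main thread
final BlockingQueue<TreeSet<Message>> handoff = new LinkedBlockingQueue<>();

// timer thread, at the end of WindowCallback():
// handoff.put(batchOfMessages);  // hypothetical batch built from the TreeSet

// main thread loop:
try {
    while (!Thread.currentThread().isInterrupted()) {
        TreeSet<Message> batch = handoff.take(); // blocks, using no CPU, until a batch arrives
        for (Message m : batch) {
            // handle each message independently of the recurring timer
        }
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt(); // restore the interrupt flag and exit the loop
}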
