I am trying to write a very basic app where messages are sent to an actor at a high rate, the actor consumes them at a slower rate, and then after some time the app is killed.
When I ran the app again with the same actor system name, the same actor name, and the same persistenceId, I expected the missed messages to be replayed, but it is not happening.
(If I delete the journal and snapshot directories, they are recreated on the next run with some files that are not 0 bytes in size, so something is definitely being written.)
Edit 1: A RecoveryCompleted object is received in onReceiveRecover whenever I start the app.
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class App {
    public static void main(String[] args) throws InterruptedException {
        System.out.println("Hello World!");
        ActorSystem actorSystem = ActorSystem.create("sample-actor-system");
        ActorRef sampleActor = actorSystem.actorOf(
                Props.create(AkkaWorker.class).withDispatcher(
                        "akka.actor.test-dispatcher"),
                "sample-actor");
        System.out.println(actorSystem.settings().config());

        int i = 1;
        // Run this code the next time so that nothing is published, and only the
        // replayed messages should be executed by the actor:
        // Thread.sleep(10000);
        // System.exit(0);
        while (true) {
            String msg = "Hello there" + i;
            sampleActor.tell(msg, ActorRef.noSender());
            System.out.println("Published message: " + msg);
            i++;
            // break;
            Thread.sleep(100);
            if (i == 20) {
                Thread.sleep(10000);
                System.exit(0);
            }
        }
    }
}
import akka.persistence.UntypedPersistentActor;

public class AkkaWorker extends UntypedPersistentActor {

    public AkkaWorker() {
    }

    @Override
    public String persistenceId() {
        return "sample-id-1";
    }

    @Override
    public void onReceiveCommand(Object message) throws Exception {
        System.out.println("In onReceiveCommand");
        if (message instanceof String) {
            String text = (String) message;
            System.out.println("Received message: " + text);
            if (text.equalsIgnoreCase("suicide")) {
                System.out.println("killing self");
                getContext().stop(getSelf());
            }
            Thread.sleep(1000);
        }
    }

    @Override
    public void onReceiveRecover(Object message) {
        System.out.println("In onReceiveRecover");
        if (message instanceof String) {
            System.out.println(message);
        } else {
            System.out.println("God knows what: " + message.toString());
        }
    }
}
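For reference, a persistent actor only replays events that it explicitly saved with persist() from onReceiveCommand; incoming commands are not journaled on their own. A minimal sketch of that pattern (purely illustrative, not the code above; uses akka.japi.Procedure):

@Override
public void onReceiveCommand(Object message) {
    if (message instanceof String) {
        // persist() writes the event to the journal; only events written this
        // way are replayed into onReceiveRecover after a restart.
        persist((String) message, new Procedure<String>() {
            @Override
            public void apply(String event) {
                System.out.println("Persisted: " + event);
            }
        });
    }
}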
In application.conf,
test-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 1
    parallelism-factor = 1.0
    parallelism-max = 1
  }
  throughput = 1
}
persistence {
  journal {
    max-message-batch-size = 1
    leveldb {
      dir = "/Users/neeraj/akka-persist/journal"
      native = true
    }
  }
  snapshot-store.local.dir = "/Users/neeraj/akka-persist/snapshot"
}
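Presumably these two blocks sit inside the top-level akka { } section of the full application.conf, with test-dispatcher under akka.actor, since the dispatcher is looked up by the config path "akka.actor.test-dispatcher". A sketch of the assumed nesting:

akka {
  actor {
    test-dispatcher {
      # ... as above ...
    }
  }
  persistence {
    # ... as above ...
  }
}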
I am trying to set up an AsynchronousServerSocketChannel that accepts connections from clients and sends and receives messages as needed (not necessarily request->response). To facilitate this, I am using asynchronous read and write calls with separate completion handlers. The issue I am having now is that when a client disconnects, the result passed to my completion handler isn't -1, and the thread continues to attempt to read. I would like my server's connections to be closed automatically when the corresponding client connection closes.
Here is the code for my read completion handler:
class ReadHandler implements CompletionHandler<Integer, Attachment> {

    @Override
    public void completed(Integer result, Attachment att) {
        if (result < 0) {
            try {
                System.out.println("Peer at " + att.clientAddr + " has disconnected.");
                att.channel.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        } else {
            att.readBuffer.flip();
            int limits = att.readBuffer.limit();
            byte bytes[] = new byte[limits];
            att.readBuffer.get(bytes, 0, limits);
            att.readBuffer.clear();
            if (att.hsDone) {
                // process incoming msg
                peer.processMessage(att.connectedPeerId, bytes);
            } else { // if handshake has not been done
                att.connectedPeerId = peer.processHandshake(bytes);
                System.out.println("Shook hands with peer " + att.connectedPeerId + ".");
                if (att.connectedPeerId < 0) {
                    try {
                        att.channel.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                } else {
                    att.hsDone = true;
                    att.writeBuffer.put(message.handshakeMsg(peer.id));
                    att.writeBuffer.flip();
                    WriteHandler handler = new WriteHandler();
                    att.channel.write(att.writeBuffer, att, handler);
                }
            }
            att.readBuffer.flip();
            att.channel.read(att.readBuffer, att, this);
        }
    }

    @Override
    public void failed(Throwable exc, Attachment att) {
        System.err.println(exc.getMessage());
    }
}
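One thing worth noting: an abrupt client disconnect (e.g. a TCP reset) typically surfaces as an exception in failed() rather than as a -1 read result, so the channel should be closed there as well. A minimal sketch, assuming the same Attachment type as above:

@Override
public void failed(Throwable exc, Attachment att) {
    System.err.println("Read failed: " + exc.getMessage());
    try {
        // Close our side of the connection when the read fails, so the
        // channel does not stay half-open after an abrupt disconnect.
        att.channel.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}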
And for my write completion handler:
class WriteHandler implements CompletionHandler<Integer, Attachment> {

    @Override
    public void completed(Integer result, Attachment att) {
        att.writeBuffer.clear();
        // check if msg needs to be sent
        byte data[] = peer.getNextMsg(att.connectedPeerId);
        att.writeBuffer.put(data);
        att.writeBuffer.flip();
        if (att.channel.isOpen())
            att.channel.write(att.writeBuffer, att, this);
    }

    @Override
    public void failed(Throwable exc, Attachment att) {
        System.err.println(exc.getMessage());
    }
}
Any help solving this issue is appreciated.
My scheme: AJAX long polling to Tomcat, where Tomcat executes Selenium "operations".
I am trying to execute a Selenium scenario from Tomcat.
It works, but I need to get the logs back to the JS client, and the JavaScript client only partially receives messages from the server while Selenium is working.
Inside some Selenium operations I use Thread.sleep(). Could the problem be caused by this?
Please tell me why messages are (I think) only partially received by the client.
Here is ServerInfoLogger; BaseLogger writes messages to the console and to a file.
public class ServerInfoLogger extends BaseLogger {

    public ServerSession clientServerSession;

    protected void logToClient(String message) {
        super.log(message);
        sendMessage(message);
    }

    // Server.serverSession and Server.clientServerSession are not null,
    // but messages are partially not received by the client
    private void sendMessage(String message) {
        // send message to client
        if (Server.serverSession != null && Server.clientServerSession != null) {
            Server.clientServerSession.deliver(Server.serverSession, "/message", message);
        } else {
            System.err.println("Server error. Server.serverSession=" + Server.serverSession
                    + " clientServerSession=" + clientServerSession);
        }
    }
}
Here is the Selenium scenario:
public class Task extends ServerInfoLogger {

    public static boolean datesAvailable = false;
    private TaskParser parser = new TaskParser();
    protected ArrayList<Step> steps = new ArrayList<>();
    private WebDriver webDriver;
    protected int currentStepIndex = 0;
    protected Step currentStep;
    private WebDriverFactory webDriverFactory = new WebDriverFactory();
    private int currentTryout = 1;
    private int maxTryouts = 40;

    public Task(ServerSession clientServerSession, String taskData) {
        this.clientServerSession = clientServerSession;
        logToClient("new task"); // client always receives this message
        steps = parser.parse(taskData);
        logToClient("steps parsed. total: " + steps.size()); // client always receives this message
        start();
    }

    protected void start() {
        createWebDriver();
        startStep();
    }

    protected void startStep() {
        currentStep = steps.get(currentStepIndex);
        currentStep.setWebDriver(webDriver);
        currentStep.clientServerSession = clientServerSession;
        boolean stepComplete = false;
        try {
            stepComplete = currentStep.start();
        } catch (StepException e) {
            logToClient(e.getMessage() + " step id: " + e.getStepId());
            e.printStackTrace();
        }
        log("step complete = " + stepComplete);
        if (stepComplete) {
            onStepComplete();
        } else {
            onStepError();
        }
    }

    private void onStepError() {
        currentTryout++;
        if (currentTryout > maxTryouts) {
            destroyWebDriver();
            logToClient("Max tryouts reached"); // client never receives this message
        } else {
            logToClient("Step not complete. Restarting task. currentTryout=" + currentTryout); // client partially receives this message
            restart();
        }
    }

    private void onStepComplete() {
        currentStepIndex++;
        if (currentStepIndex < steps.size()) {
            startStep();
        } else {
            destroyWebDriver();
            taskComplete();
        }
    }

    private void destroyWebDriver() {
        webDriver.quit();
        webDriver = null;
    }

    private void taskComplete() {
        logToClient("Task complete !!!"); // client never receives this message
        TaskEvent taskEvent = new TaskEvent(TaskEvent.TASK_COMPLETE);
        EventDispatcher.getInstance().dispatchEvent(taskEvent);
    }

    public void restart() {
        logToClient("Task restart");
        try {
            setTimeout(Application.baseOperationWaitSecondsUntil);
            new SwitchToDefaultContentOperation().execute();
        } catch (OperationException exception) {
            logToClient("Cannot get default content"); // client partially receives this message
        }
        currentStepIndex = 0;
        startStep();
    }

    private void setTimeout(int seconds) {
        webDriver.manage().timeouts().implicitlyWait(seconds, TimeUnit.SECONDS);
    }

    private void createWebDriver() {
        webDriver = webDriverFactory.getDriver(BrowserType.CHROME).getDriver();
    }

    public int getCurrentStepIndex() {
        return currentStepIndex;
    }
}
Here is the creation of the logger:
private void createLogger() {
    Date currentDate = new Date();
    String logFilePostfix = currentDate.getMonth() + "_" + currentDate.getDate() + "-"
            + currentDate.getHours() + "_" + currentDate.getMinutes() + "_" + currentDate.getSeconds();
    logger = Logger.getLogger(appName);
    logger.setUseParentHandlers(false);
    FileHandler fh;
    SimplestFormatter formatter = new SimplestFormatter();
    try {
        fh = new FileHandler("C:\\consultant\\logs\\log_" + logFilePostfix + ".txt");
        logger.addHandler(fh);
        fh.setFormatter(formatter);
    } catch (SecurityException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Here is a Selenium operation that uses Thread.sleep():
public class SwitchToMainFrameOperation extends BaseOperation {

    private WebElement mainIframe;
    private WebDriverWait wait;

    @Override
    public boolean execute() throws OperationException {
        switchToRoot();
        sleepThread();
        log("switchToMainIFrame by xPath " + Page.getMainIframeXPath()); // log to console and file
        wait = new WebDriverWait(webDriver, Application.baseOperationWaitSecondsUntil);
        try {
            mainIframe = wait.until(ExpectedConditions.presenceOfElementLocated(By.xpath(Page.getMainIframeXPath())));
            webDriver.switchTo().frame(mainIframe);
            log("main frame switch OK"); // log to console and file
        } catch (StaleElementReferenceException exception) {
            log("main frame switch error. StaleElementReferenceException - trying again"); // log to console and file
            wait = null;
            sleepThread();
            execute();
        }
        return true;
    }

    private void switchToRoot() {
        log("switch to root");
        webDriver.switchTo().defaultContent();
    }

    private void sleepThread() {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
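As an aside, the fixed Thread.sleep(500) plus presence check can usually be collapsed into a single explicit wait, which polls instead of blocking for the full interval. A minimal sketch using the same Page and Application names as above:

private void switchToMainFrame() {
    // Polls until the iframe is present, then switches to it in one step,
    // instead of sleeping a fixed 500 ms first.
    new WebDriverWait(webDriver, Application.baseOperationWaitSecondsUntil)
            .until(ExpectedConditions.frameToBeAvailableAndSwitchToIt(
                    By.xpath(Page.getMainIframeXPath())));
}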
Problem statement
I have a JMS listener running as a thread, listening to a topic. As soon as a message comes in, I spawn a new thread to process the inbound message, one thread per incoming message.
I have a scenario where a duplicate message is also processed when it is injected immediately after the original. I need to prevent this. I tried using a ConcurrentHashMap to track in-flight messages: I add an entry as soon as the thread is spawned and remove it from the map as soon as the thread completes its execution. But it did not help when I passed in the same message twice in quick succession.
A general outline of my issue before you plunge into the actual code base:
onMessage() {
    processIncomingMessage() {
        ExecutorService executorService = Executors.newFixedThreadPool(1000);
        // The map is used to make an entry before I spawn a new thread to
        // process the incoming message. The map contains the incoming message
        // as the key and a boolean as the value.
        // Check the map for duplicates.
        // The below check is failing and allowing duplicate messages to be
        // processed in parallel.
        if (entryIsPresentInMap) {
            // return doing nothing
        } else {
            // spawn a new thread for each incoming message, and also ensure a
            // duplicate message is not processed while it is in process by an
            // active thread
            executorService.execute(new Runnable() {
                @Override
                public void run() {
                    try {
                        // actual business logic
                    } finally {
                        // remove the entry from the map once processing of the
                        // message is done
                    }
                }
            });
        }
    }
}
A standalone example to mimic the scenario:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DuplicateCheck {

    private static Map<String, Boolean> duplicateCheckMap =
            new ConcurrentHashMap<String, Boolean>(1000);
    private static String name = null;
    private static String[] nameArray = new String[20];

    public static void processMessage(String message) {
        System.out.println("Processed message =" + message);
    }

    public static void main(String args[]) {
        nameArray[0] = "Peter";
        nameArray[1] = "Peter";
        nameArray[2] = "Adam";
        for (int i = 0; i < nameArray.length; i++) { // was i <= nameArray.length, which overruns the array
            name = nameArray[i];
            if (name == null) {
                continue; // skip unset slots; ConcurrentHashMap.get(null) would throw NPE
            }
            if (duplicateCheckMap.get(name) != null && duplicateCheckMap.get(name)) {
                System.out.println("Thread detected for processing your name =" + name);
                return;
            }
            addNameIntoMap(name);
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        processMessage(name);
                    } catch (Exception e) {
                        System.out.println(e.getMessage());
                    } finally {
                        freeNameFromMap(name);
                    }
                }
            }).start();
        }
    }

    private static synchronized void addNameIntoMap(String name) {
        if (name != null) {
            duplicateCheckMap.put(name, true);
            System.out.println("Thread processing the " + name + " is added to the status map");
        }
    }

    private static synchronized void freeNameFromMap(String name) {
        if (name != null) {
            duplicateCheckMap.remove(name);
            System.out.println("Thread processing the " + name + " is released from the status map");
        }
    }
}
A snippet of the actual code is below:
public void processControlMessage(final Message message) {
    RDPWorkflowControlMessage rdpWorkflowControlMessage = unmarshallControlMessage(message);
    final String workflowName = rdpWorkflowControlMessage.getWorkflowName();
    final String controlMessageEvent = rdpWorkflowControlMessage.getControlMessage().value();
    if (controlMessageStateMap.get(workflowName) != null && controlMessageStateMap.get(workflowName)) {
        log.info("Cache cleanup for the workflow :" + workflowName + " is already in progress");
        return;
    } else {
        log.info("doing nothing");
    }
    Semaphore controlMessageLock = new Semaphore(1);
    try {
        controlMessageLock.acquire();
        synchronized (this) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        lock.lock();
                        log.info("Processing Workflow Control Message for the workflow :" + workflowName);
                        if (message instanceof TextMessage) {
                            if ("REFRESH".equalsIgnoreCase(controlMessageEvent)) {
                                clearControlMessageBuffer();
                                enableControlMessageStatus(workflowName);
                                List<String> matchingValues = new ArrayList<String>();
                                matchingValues.add(workflowName);
                                ConcreteSetDAO tasksSetDAO = taskEventListener.getConcreteSetDAO();
                                ConcreteSetDAO workflowSetDAO = workflowEventListener.getConcreteSetDAO();
                                tasksSetDAO.deleteMatchingRecords(matchingValues);
                                workflowSetDAO.deleteMatchingRecords(matchingValues);
                                fetchNewWorkflowItems();
                                addShutdownHook(workflowName);
                            }
                        }
                    } catch (Exception e) {
                        log.error("Error extracting item of type RDPWorkflowControlMessage from message "
                                + message);
                    } finally {
                        disableControlMessageStatus(workflowName);
                        lock.unlock();
                    }
                }
            }).start();
        }
    } catch (InterruptedException ie) {
        log.info("Interrupted Exception during control message lock acquisition" + ie);
    } finally {
        controlMessageLock.release();
    }
}

private void addShutdownHook(final String workflowName) {
    Runtime.getRuntime().addShutdownHook(new Thread() {
        public void run() {
            disableControlMessageStatus(workflowName);
        }
    });
    log.info("Shut Down Hook Attached for the thread processing the workflow :" + workflowName);
}

private RDPWorkflowControlMessage unmarshallControlMessage(Message message) {
    RDPWorkflowControlMessage rdpWorkflowControlMessage = null;
    try {
        TextMessage textMessage = (TextMessage) message;
        rdpWorkflowControlMessage = marshaller.unmarshalItem(textMessage.getText(), RDPWorkflowControlMessage.class);
    } catch (Exception e) {
        log.error("Error extracting item of type RDPWorkflowTask from message "
                + message);
    }
    return rdpWorkflowControlMessage;
}

private void fetchNewWorkflowItems() {
    initSSL();
    List<RDPWorkflowTask> allTasks = initAllTasks();
    taskEventListener.addRDPWorkflowTasks(allTasks);
    workflowEventListener.updateWorkflowStatus(allTasks);
}

private void clearControlMessageBuffer() {
    taskEventListener.getRecordsForUpdate().clear();
    workflowEventListener.getRecordsForUpdate().clear();
}

private synchronized void enableControlMessageStatus(String workflowName) {
    if (workflowName != null) {
        controlMessageStateMap.put(workflowName, true);
        log.info("Thread processing the " + workflowName + " is added to the status map");
    }
}

private synchronized void disableControlMessageStatus(String workflowName) {
    if (workflowName != null) {
        controlMessageStateMap.remove(workflowName);
        log.info("Thread processing the " + workflowName + " is released from the status map");
    }
}
I have modified my code to incorporate the suggestions provided below, but it is still not working:
public void processControlMessage(final Message message) {
    ExecutorService executorService = Executors.newFixedThreadPool(1000);
    try {
        lock.lock();
        RDPWorkflowControlMessage rdpWorkflowControlMessage = unmarshallControlMessage(message);
        final String workflowName = rdpWorkflowControlMessage.getWorkflowName();
        final String controlMessageEvent = rdpWorkflowControlMessage.getControlMessage().value();
        if (controlMessageStateMap.get(workflowName) != null && controlMessageStateMap.get(workflowName)) {
            log.info("Cache cleanup for the workflow :" + workflowName + " is already in progress");
            return;
        } else {
            log.info("doing nothing");
        }
        enableControlMessageStatus(workflowName);
        executorService.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    // actual code
                    fetchNewWorkflowItems();
                    addShutdownHook(workflowName);
                } catch (Exception e) {
                    log.error("Error extracting item of type RDPWorkflowControlMessage from message "
                            + message);
                } finally {
                    disableControlMessageStatus(workflowName);
                }
            }
        });
    } finally {
        executorService.shutdown();
        lock.unlock();
    }
}
private void addShutdownHook(final String workflowName) {
    Runtime.getRuntime().addShutdownHook(new Thread() {
        public void run() {
            disableControlMessageStatus(workflowName);
        }
    });
    log.info("Shut Down Hook Attached for the thread processing the workflow :" + workflowName);
}

private synchronized void enableControlMessageStatus(String workflowName) {
    if (workflowName != null) {
        controlMessageStateMap.put(workflowName, true);
        log.info("Thread processing the " + workflowName + " is added to the status map");
    }
}

private synchronized void disableControlMessageStatus(String workflowName) {
    if (workflowName != null) {
        controlMessageStateMap.remove(workflowName);
        log.info("Thread processing the " + workflowName + " is released from the status map");
    }
}
This is how you should add a value to the map. Double-checked locking makes sure that only one thread adds a value to the map at any particular moment, and you can control the access afterwards. Remove all the other locking logic; it is as simple as that.
public void processControlMessage(final String workflowName) throws InterruptedException {
    if (!tryAddingMessageInProcessingMap(workflowName)) {
        Thread.sleep(1000); // sleep 1 sec and try again
        processControlMessage(workflowName);
        return;
    }
    System.out.println(workflowName);
    try {
        // your code goes here
    } finally {
        controlMessageStateMap.remove(workflowName);
    }
}

private boolean tryAddingMessageInProcessingMap(final String workflowName) {
    if (controlMessageStateMap.get(workflowName) == null) {
        synchronized (this) {
            if (controlMessageStateMap.get(workflowName) == null) {
                controlMessageStateMap.put(workflowName, true);
                return true;
            }
        }
    }
    return false;
}
Read more about double-checked locking here: https://en.wikipedia.org/wiki/Double-checked_locking
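As an aside, assuming controlMessageStateMap is a ConcurrentHashMap, the same check-then-act can be done atomically without the synchronized block, since putIfAbsent only inserts when no entry exists yet. A minimal sketch of that alternative:

private boolean tryAddingMessageInProcessingMap(final String workflowName) {
    // putIfAbsent returns null only for the one thread that actually
    // inserted the entry, so exactly one caller wins the race.
    return controlMessageStateMap.putIfAbsent(workflowName, Boolean.TRUE) == null;
}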
The issue is fixed now. Many thanks to @awsome for the approach. It avoids duplicates while a thread is already processing the same incoming message; if no thread is processing it, it gets picked up:
public void processControlMessage(final Message message) {
    try {
        lock.lock();
        RDPWorkflowControlMessage rdpWorkflowControlMessage = unmarshallControlMessage(message);
        final String workflowName = rdpWorkflowControlMessage.getWorkflowName();
        final String controlMessageEvent = rdpWorkflowControlMessage.getControlMessage().value();
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    if (message instanceof TextMessage) {
                        if ("REFRESH".equalsIgnoreCase(controlMessageEvent)) {
                            if (tryAddingWorkflowNameInStatusMap(workflowName)) {
                                log.info("Processing Workflow Control Message for the workflow :" + workflowName);
                                addShutdownHook(workflowName);
                                clearControlMessageBuffer();
                                List<String> matchingValues = new ArrayList<String>();
                                matchingValues.add(workflowName);
                                ConcreteSetDAO tasksSetDAO = taskEventListener.getConcreteSetDAO();
                                ConcreteSetDAO workflowSetDAO = workflowEventListener.getConcreteSetDAO();
                                tasksSetDAO.deleteMatchingRecords(matchingValues);
                                workflowSetDAO.deleteMatchingRecords(matchingValues);
                                List<RDPWorkflowTask> allTasks = fetchNewWorkflowItems(workflowName);
                                updateTasksAndWorkflowSet(allTasks);
                                removeWorkflowNameFromProcessingMap(workflowName);
                            } else {
                                log.info("Cache clean up is already in progress for the workflow =" + workflowName);
                                return;
                            }
                        }
                    }
                } catch (Exception e) {
                    log.error("Error extracting item of type RDPWorkflowControlMessage from message "
                            + message);
                }
            }
        }).start();
    } finally {
        lock.unlock();
    }
}

private boolean tryAddingWorkflowNameInStatusMap(final String workflowName) {
    if (controlMessageStateMap.get(workflowName) == null) {
        synchronized (this) {
            if (controlMessageStateMap.get(workflowName) == null) {
                log.info("Adding an entry in to the map for the workflow =" + workflowName);
                controlMessageStateMap.put(workflowName, true);
                return true;
            }
        }
    }
    return false;
}

private synchronized void removeWorkflowNameFromProcessingMap(String workflowName) {
    if (workflowName != null
            && (controlMessageStateMap.get(workflowName) != null && controlMessageStateMap
                    .get(workflowName))) {
        controlMessageStateMap.remove(workflowName);
        log.info("Thread processing the " + workflowName + " is released from the status map");
    }
}
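One caveat in the listing above: removeWorkflowNameFromProcessingMap is only reached on the success path, so if deleteMatchingRecords or fetchNewWorkflowItems throws, the workflow stays marked as in progress until JVM shutdown. Moving the removal into a finally block (a sketch, using the same names) would guarantee cleanup:

if (tryAddingWorkflowNameInStatusMap(workflowName)) {
    try {
        // ... cache cleanup and refresh as above ...
    } finally {
        // always release the entry, even if the refresh throws
        removeWorkflowNameFromProcessingMap(workflowName);
    }
}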
I am looking for Java code for an MQTT client that subscribes to a given topic, such that every message published on that topic reaches the client exactly once. I have written many versions, and in all cases messages are delivered properly to the client while it is connected to the broker. But if the subscribed client disconnects from the broker for some time and then connects back, it does not receive the messages that were sent while it was not connected. I have set the clean-session flag to false, but it is still not working. The code I used is given below.
import org.fusesource.hawtbuf.*;
import org.fusesource.mqtt.client.*;

/**
 * Uses a callback-based interface to MQTT. Callback-based interfaces
 * are harder to use but are slightly more efficient.
 */
class Listener {

    public static void main(String[] args) throws Exception {
        String user = env("APOLLO_USER", "admin");
        String password = env("APOLLO_PASSWORD", "password");
        String host = env("APOLLO_HOST", "localhost");
        int port = Integer.parseInt(env("APOLLO_PORT", "61613"));
        final String destination = arg(args, 1, "subject");

        MQTT mqtt = new MQTT();
        mqtt.setHost(host, port);
        mqtt.setUserName(user);
        mqtt.setPassword(password);
        mqtt.setCleanSession(false);
        mqtt.setClientId("newclient");

        final CallbackConnection connection = mqtt.callbackConnection();
        connection.listener(new org.fusesource.mqtt.client.Listener() {
            long count = 0;
            long start = System.currentTimeMillis();

            public void onConnected() {
            }

            public void onDisconnected() {
            }

            public void onFailure(Throwable value) {
                value.printStackTrace();
                System.exit(-2);
            }

            public void onPublish(UTF8Buffer topic, Buffer msg, Runnable ack) {
                System.out.println("Nisha Messages : " + msg);
                System.out.println("Nisha topic" + topic);
                System.out.println("Nisha Receive acknowledgement : " + ack);
                String body = msg.utf8().toString();
                if ("SHUTDOWN".equals(body)) {
                    long diff = System.currentTimeMillis() - start;
                    System.out.println(String.format("Received %d in %.2f seconds", count, (1.0 * diff / 1000.0)));
                    connection.disconnect(new Callback<Void>() {
                        @Override
                        public void onSuccess(Void value) {
                            System.exit(0);
                        }

                        @Override
                        public void onFailure(Throwable value) {
                            value.printStackTrace();
                            System.exit(-2);
                        }
                    });
                } else {
                    if (count == 0) {
                        start = System.currentTimeMillis();
                    }
                    if (count % 1000 == 0) {
                        System.out.println(String.format("Received %d messages.", count));
                    }
                    count++;
                }
            }
        });

        connection.connect(new Callback<Void>() {
            @Override
            public void onSuccess(Void value) {
                System.out.println("connected in :::: ");
                Topic[] topics = { new Topic(destination, QoS.AT_MOST_ONCE) };
                connection.subscribe(topics, new Callback<byte[]>() {
                    public void onSuccess(byte[] qoses) {
                    }

                    public void onFailure(Throwable value) {
                        value.printStackTrace();
                        System.exit(-2);
                    }
                });
            }

            @Override
            public void onFailure(Throwable value) {
                value.printStackTrace();
                System.exit(-2);
            }
        });

        // Wait forever..
        synchronized (Listener.class) {
            while (true)
                Listener.class.wait();
        }
    }

    private static String env(String key, String defaultValue) {
        String rc = System.getenv(key);
        if (rc == null)
            return defaultValue;
        return rc;
    }

    private static String arg(String[] args, int index, String defaultValue) {
        if (index < args.length)
            return args[index];
        else
            return defaultValue;
    }
}
Am I doing something wrong here?
it does not receive the messages that were sent during the time that it was not connected
MQTT does not queue every message for offline clients. With a persistent session (clean session = false), the broker stores QoS 1 and QoS 2 messages that arrive while the client is offline, but QoS 0 messages need not be stored, and your code subscribes with QoS.AT_MOST_ONCE (QoS 0), so those messages are simply lost. The retain mechanism is something else entirely: it keeps only the last retained message published to a topic.
You can read more in the spec, section 3.3.1.3 RETAIN.
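Given that, a minimal change to the listing above (assuming the broker supports persistent sessions) is to subscribe at QoS 1, so the broker queues messages while the client is offline:

// Subscribe at QoS 1 (AT_LEAST_ONCE) so that, together with
// setCleanSession(false) and a fixed client id, the broker stores
// messages published while this client is disconnected.
Topic[] topics = { new Topic(destination, QoS.AT_LEAST_ONCE) };
connection.subscribe(topics, new Callback<byte[]>() {
    public void onSuccess(byte[] qoses) {
    }

    public void onFailure(Throwable value) {
        value.printStackTrace();
    }
});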
I have a situation where I wrote a simple producer-consumer model for reading chunks of data from Bluetooth, and every 10k bytes I write them to a file. I used a standard P-C model with a Vector as my message holder. How do I change this so that multiple consumer threads can read the same messages? I think the term would be "multicast". I am actually using this on an Android phone, so JMS is probably not an option.
static final int MAXQUEUE = 50000;
private Vector<byte[]> messages = new Vector<byte[]>();

/**
 * Put the message in the queue for the Consumer Thread
 */
private synchronized void putMessage(byte[] send) throws InterruptedException {
    while (messages.size() == MAXQUEUE)
        wait();
    messages.addElement(send);
    notify();
}

/**
 * This method is called by the consumer to see if any messages are in the queue
 */
public synchronized byte[] getMessage() throws InterruptedException {
    notify();
    while (messages.size() == 0 && !Thread.interrupted()) {
        wait(1);
    }
    byte[] message = messages.firstElement();
    messages.removeElement(message);
    return message;
}
I am referencing code from an O'Reilly book, the message-parser section.
A pub-sub mechanism is definitely the way to achieve what you want. I am not sure why developing for Android would restrict you from using JMS, which is about as simple as a spec gets. Check out
this thread on SO.
You should definitely use a queue instead of the Vector!
Give every thread its own queue and, when a new message is received, add() the new message to every thread's queue. For flexibility, a listener pattern may be useful, too.
Edit:
Ok, I feel I should add an example, too:
(Classical observer pattern)
This is the interface that all consumers must implement:
public interface MessageListener {
    public void newMessage( byte[] message );
}
A producer might look like this:
public class Producer {
    Collection<MessageListener> listeners = new ArrayList<MessageListener>();

    // Allow interested parties to register for new messages
    public void addListener( MessageListener listener ) {
        this.listeners.add( listener );
    }

    public void removeListener( Object listener ) {
        this.listeners.remove( listener );
    }

    protected void produceMessages() {
        byte[] msg = new byte[10];
        // Create message and put into msg

        // Tell all registered listeners about the new message:
        for ( MessageListener l : this.listeners ) {
            l.newMessage( msg );
        }
    }
}
And a consumer class could be (using a blocking queue which does all that wait()ing and notify()ing for us):
public class Consumer implements MessageListener {
    BlockingQueue< byte[] > queue = new LinkedBlockingQueue< byte[] >();

    // This implements the MessageListener interface:
    @Override
    public void newMessage( byte[] message ) {
        try {
            queue.put( message );
        } catch (InterruptedException e) {
            // won't happen.
        }
    }

    // Execute in another thread:
    protected void handleMessages() throws InterruptedException {
        while ( true ) {
            byte[] newMessage = queue.take();
            // handle the new message.
        }
    }
}
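A short usage sketch wiring the two together (same names as above; the drain thread is illustrative and assumes same-package access to the protected methods):

Producer producer = new Producer();
final Consumer consumer = new Consumer();
producer.addListener( consumer );

// Drain the consumer's queue on its own thread:
new Thread( new Runnable() {
    public void run() {
        try {
            consumer.handleMessages();
        } catch (InterruptedException e) {
            // exit the drain thread
        }
    }
} ).start();

producer.produceMessages(); // every registered listener gets each message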
This is what I came up with as an example when digging through some code and modifying some existing examples.
package test.messaging;

import java.util.ArrayList;
import java.util.concurrent.LinkedBlockingQueue;

public class TestProducerConsumers {

    static Broker broker;

    public TestProducerConsumers(int maxSize) {
        broker = new Broker(maxSize);
        Producer p = new Producer();
        Consumer c1 = new Consumer("One");
        broker.consumers.add(c1);
        c1.start();
        Consumer c2 = new Consumer("Two");
        broker.consumers.add(c2);
        c2.start();
        p.start();
    }

    // Test Producer; use your own message producer on a thread to call
    // broker.insert(), possibly passing it the message instead.
    class Producer extends Thread {
        @Override
        public void run() {
            while (true) {
                try {
                    broker.insert();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    class Consumer extends Thread {
        String myName;
        LinkedBlockingQueue<String> queue;

        Consumer(String m) {
            this.myName = m;
            queue = new LinkedBlockingQueue<String>();
        }

        @Override
        public void run() {
            while (!Thread.interrupted()) {
                try {
                    // take() blocks until a message is available, which avoids
                    // the busy-wait spin loops of the original version.
                    System.out.println(myName + " Consumer: " + queue.take());
                } catch (InterruptedException e) {
                    return;
                }
            }
        }
    }

    class Broker {
        public ArrayList<Consumer> consumers = new ArrayList<Consumer>();
        int n;
        int maxSize;

        public Broker(int maxSize) {
            n = 0;
            this.maxSize = maxSize;
        }

        synchronized void insert() throws InterruptedException {
            // Only here for testing: don't want it to run away and leak
            // memory, so only the first maxSize samples are produced.
            if (n == maxSize)
                wait();
            System.out.println("Producer: " + n++);
            for (Consumer c : consumers) {
                c.queue.add("Message " + n);
            }
        }
    }

    public static void main(String[] args) {
        new TestProducerConsumers(100);
    }
}