Java / Prioritization of threads

I have an application that starts four threads to listen for incoming packets. Each thread opens a socket on a different port. Normally packets are received on only one port at a time, but in some cases messages can arrive on two ports for a few seconds. Each of these threads processes the messages and updates a bunch of listeners (all of which do some Swing painting). As the messages are sent at a frequency of 10 Hz and the painting on the Swing components takes some time, my first approach was to process only one message out of 20 (giving two seconds to finish painting the components). That works well...
But when two messages are received at the same time, I need to tell my application to process only one of them (the one that is received only for a short time). In total, about 10 messages are received on the 2nd port, also at a frequency of 10 Hz. That means with the first approach I sometimes miss all 10 of them, because only one message out of 20 is processed.
Whenever a message is received on the 2nd port, I want my application to process it, no matter what is received on the 1st port or whether something is being painted at that time.
The following code shows the implementation of my threads; four of these are started at the same time, depending on the ports given through the constructor.
private class IncomingRunner implements Runnable {
private int listenPort;
private DatagramSocket localSocket;
private DatagramPacket packet;
private int counter = 0;
public IncomingRunner(int port) {
this.listenPort = port;
}
@Override
public void run() {
try {
localSocket = new DatagramSocket(listenPort);
byte[] buffer = new byte[1024];
packet = new DatagramPacket(buffer, buffer.length);
while(isRunning)
recvIncomingMsg();
} catch (SocketException e) {
e.printStackTrace();
}
}
private void recvIncomingMsg() {
try {
localSocket.receive(packet);
port = localSocket.getLocalPort();
ReceivedMsg eventMsg;
if(port == Config.PORT_1) {
eventMsg = new ReceivedMsg(Config.PORT_1, Config.SOMETHING_1);
System.out.println(HexWriter.getHex(packet.getData()));
} else if (port == Config.PORT_2) {
eventMsg = new ReceivedMsg(Config.PORT_2, Config.SOMETHING_2);
System.out.println(HexWriter.getHex(packet.getData()));
} else if (port == Config.PORT_3) {
eventMsg = new ReceivedMsg(Config.PORT_3, Config.SOMETHING_3);
System.out.println(HexWriter.getHex(packet.getData()));
} else {
eventMsg = new ReceivedMsg(Config.PORT_4, Config.SOMETHING_4);
System.out.println(HexWriter.getHex(packet.getData()));
}
counter++;
if(counter%20 == 0) {
forward2PacketPanel(eventMsg);
counter = 0;
}
} catch (IOException e) {
e.printStackTrace();
}
}
private void forward2PacketPanel(final ReceivedMsg t) {
for(final IPacketListener c : listeners) {
if(c instanceof IPacketListener) {
new Thread(new Runnable() {
@Override
public void run() {
((IPacketListener)c).update(t);
}
}).start();
}
}
}
}
UPDATE:
The reason I am starting new threads to update the listeners is that all of them should update the GUI at the same time. Every update calls a paintComponent() method on a different JPanel, so all of them should run together.
UPDATE2:
I cannot use the first approach, as it causes loss of possibly important messages (those received on the 2nd port). What I need is: when a normal message is received, just process it and do the painting, no matter how many new normal messages (on the 1st port) come in. But even if only one message is received on the 2nd port, the application needs to process that one, regardless of what is going on in the normal receiver thread.
I guess I am facing two problems here:
(1) I need to make each thread wait until the painting is finished. Since this is UDP, I can process one normal packet and simply drop all subsequent normal packets while the painting is in progress; when it is done, process the next normal packet.
(2) If a packet is received on the 2nd port, interrupt all normal packet processing and do whatever is needed to process that special packet.
Problem (1) is solved using a BitSet in the MainIncomingClass. Every listener uses a callback to indicate that it's done with painting and sets a specific bit in the BitSet. If not all bits are set, I do not process any new packet; I just let them go.
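A minimal sketch of that gating idea (the field and method names below are illustrative, not taken from the actual MainIncomingClass; BitSet is java.util.BitSet):
// One bit per listener; a normal packet is forwarded only when every listener
// has reported that its previous paint has finished.
private final BitSet paintDone = new BitSet();
// Callback invoked by listener i when its paintComponent() work is done.
public synchronized void paintFinished(int listenerIndex) {
    paintDone.set(listenerIndex);
}
// Called before forwarding the next normal packet; special packets skip this check.
private synchronized boolean allListenersDone(int listenerCount) {
    if (paintDone.cardinality() == listenerCount) {
        paintDone.clear(); // re-arm for the next forwarded packet
        return true;
    }
    return false;
}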

They talk about the event dispatch thread here. You have to use it to update your GUI. Fortunately, you can also use it to post your updates in whatever order you want. The EDT takes care of the start() for you. You'll still have to synchronize access to t.
EventQueue.invokeLater(new Runnable() {
@Override
public void run() {
((IPacketListener)c).update(t);
}
});

Related

Do not share same socket between two threads at the same time

I have around 60 sockets and 20 threads, and I want to make sure each thread works on a different socket every time, so I don't want to share the same socket between two threads at all.
In my SocketManager class, I have a background thread which runs every 60 seconds and calls the updateLiveSockets() method. In updateLiveSockets(), I iterate over all the sockets I have and ping them one by one by calling the send method of the SendToQueue class, and based on the response I mark them as live or dead. updateLiveSockets() always needs to iterate over all the sockets and ping them to check whether they are live or dead.
Now all the reader threads call the getNextSocket() method of the SocketManager class concurrently to get the next available live socket and send a business message on it. So there are two types of messages I send on a socket:
One is a ping message. This is sent only from the timer thread calling the updateLiveSockets() method in the SocketManager class.
The other is a business message. This is done in the SendToQueue class.
So if the pinger thread is pinging a socket to check whether it is live, no business thread should use that socket. Similarly, if a business thread is using a socket to send data, the pinger thread should not ping that socket. This applies to every socket. But I need to make sure that in updateLiveSockets() we ping all the available sockets whenever my background thread runs, so that we can figure out which sockets are live or dead.
Below is my SocketManager class:
public class SocketManager {
private static final Random random = new Random();
private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
private final Map<Datacenters, List<SocketHolder>> liveSocketsByDatacenter =
new ConcurrentHashMap<>();
private final ZContext ctx = new ZContext();
// ...
private SocketManager() {
connectToZMQSockets();
scheduler.scheduleAtFixedRate(new Runnable() {
public void run() {
updateLiveSockets();
}
}, 60, 60, TimeUnit.SECONDS);
}
// during startup, making a connection and populate once
private void connectToZMQSockets() {
Map<Datacenters, List<String>> socketsByDatacenter = Utils.SERVERS;
for (Map.Entry<Datacenters, List<String>> entry : socketsByDatacenter.entrySet()) {
List<SocketHolder> addedColoSockets = connect(entry.getValue(), ZMQ.PUSH);
liveSocketsByDatacenter.put(entry.getKey(), addedColoSockets);
}
}
private List<SocketHolder> connect(List<String> paddes, int socketType) {
List<SocketHolder> socketList = new ArrayList<>();
// ....
return socketList;
}
// this method will be called by multiple threads concurrently to get the next live socket
// is there any concurrency or thread safety issue or race condition here?
public Optional<SocketHolder> getNextSocket() {
for (Datacenters dc : Datacenters.getOrderedDatacenters()) {
Optional<SocketHolder> liveSocket = getLiveSocket(liveSocketsByDatacenter.get(dc));
if (liveSocket.isPresent()) {
return liveSocket;
}
}
return Optional.absent();
}
private Optional<SocketHolder> getLiveSocket(final List<SocketHolder> listOfEndPoints) {
if (!listOfEndPoints.isEmpty()) {
// The list of live sockets
List<SocketHolder> liveOnly = new ArrayList<>(listOfEndPoints.size());
for (SocketHolder obj : listOfEndPoints) {
if (obj.isLive()) {
liveOnly.add(obj);
}
}
if (!liveOnly.isEmpty()) {
// The list is not empty, so pick one of the live sockets at random
return Optional.of(liveOnly.get(random.nextInt(liveOnly.size()))); // just pick one
}
}
return Optional.absent();
}
// runs every 60 seconds to ping all the available sockets and check whether they are alive or not
private void updateLiveSockets() {
Map<Datacenters, List<String>> socketsByDatacenter = Utils.SERVERS;
for (Map.Entry<Datacenters, List<String>> entry : socketsByDatacenter.entrySet()) {
List<SocketHolder> liveSockets = liveSocketsByDatacenter.get(entry.getKey());
List<SocketHolder> liveUpdatedSockets = new ArrayList<>();
for (SocketHolder liveSocket : liveSockets) {
Socket socket = liveSocket.getSocket();
String endpoint = liveSocket.getEndpoint();
Map<byte[], byte[]> holder = populateMap();
Message message = new Message(holder, Partition.COMMAND);
// pinging to see whether a socket is live or not
boolean isLive = SendToQueue.getInstance().send(message.getAddress(), message.getEncodedRecords(), socket);
SocketHolder zmq = new SocketHolder(socket, liveSocket.getContext(), endpoint, isLive);
liveUpdatedSockets.add(zmq);
}
liveSocketsByDatacenter.put(entry.getKey(), Collections.unmodifiableList(liveUpdatedSockets));
}
}
}
And here is my SendToQueue class:
// this method will be called by multiple reader threads (around 20) concurrently to send the data
public boolean sendAsync(final long address, final byte[] encodedRecords) {
PendingMessage m = new PendingMessage(address, encodedRecords, true);
cache.put(address, m);
return doSendAsync(m);
}
private boolean doSendAsync(final PendingMessage pendingMessage) {
Optional<SocketHolder> liveSocket = SocketManager.getInstance().getNextSocket();
if (!liveSocket.isPresent()) {
// log error
return false;
}
ZMsg msg = new ZMsg();
msg.add(pendingMessage.getEncodedRecords());
try {
// send data on a socket LINE A
return msg.send(liveSocket.get().getSocket());
} finally {
msg.destroy();
}
}
public boolean send(final long address, final byte[] encodedRecords, final Socket socket) {
PendingMessage m = new PendingMessage(address, encodedRecords, socket, false);
cache.put(address, m);
try {
if (doSendAsync(m, socket)) {
return m.waitForAck();
}
return false;
} finally {
cache.invalidate(address);
}
}
Problem Statement
Now, as you can see above, I am sharing the same socket between two threads. It seems getNextSocket() in the SocketManager class could return a 0MQ socket to thread A. Concurrently, the timer thread may access the same 0MQ socket to ping it. In this case thread A and the timer thread are mutating the same 0MQ socket, which can lead to problems. So I am trying to find a way to prevent different threads from sending data to the same socket at the same time and mucking up my data.
One solution I can think of is synchronizing on a socket while sending the data, but if many threads use the same socket, resources aren't well utilized. Moreover, if msg.send(socket); blocks (technically it shouldn't), all threads waiting for this socket are blocked. So I guess there might be a better way to ensure that every thread uses a different live socket at any given time, instead of synchronizing on a particular socket.
So I am trying to find a way so that I can prevent different threads from sending data to the same socket at the same time and mucking up my data.
There are certainly a number of different ways to do this. To me it seems like a BlockingQueue is the right thing to use: the business threads would take a socket from the queue and would be guaranteed that no one else is using that socket.
private final BlockingQueue<SocketHolder> socketHolderQueue = new LinkedBlockingQueue<>();
...
public Optional<SocketHolder> getNextSocket() {
SocketHolder holder = socketHolderQueue.poll();
// wrap in Optional to match the existing signature; absent() when the queue is empty
return Optional.fromNullable(holder);
}
...
public void finishedWithSocket(SocketHolder holder) {
// add() never blocks on an unbounded LinkedBlockingQueue and avoids the checked InterruptedException of put()
socketHolderQueue.add(holder);
}
I think that synchronizing on the socket is not a good idea for the reasons that you mention – the ping thread will be blocking the business thread.
There are a number of ways of handling the ping thread logic. I would store your Socket with a last use time and then your ping thread could every so often take each of the sockets from the same BlockingQueue, test it, and put each back onto the end of the queue after testing.
public void testSockets() {
// only run this for as many sockets as are in the queue right now
int numTests = socketHolderQueue.size();
for (int i = 0; i < numTests; i++) {
SocketHolder holder = socketHolderQueue.poll();
if (holder == null) {
break;
}
if (socketIsOk(holder)) {
socketHolderQueue.add(holder);
} else {
// close it here or something
}
}
}
You could also have the getNextSocket() code that dequeues the sockets check the timer, put stale sockets on a test queue for the ping thread to use, and then take the next one from the queue. The business threads would then never be using the same socket at the same time as the ping thread.
Depending on when you want to test the sockets, you can also reset the timer after a business thread returns a socket to the queue, so the ping thread only tests a socket after X seconds of no use.
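A rough sketch of that variant, assuming a SocketHolder.getLastUsed() accessor and a separate test queue consumed by the ping thread (both are assumptions, not part of the original code):
private final BlockingQueue<SocketHolder> socketHolderQueue = new LinkedBlockingQueue<>();
private final BlockingQueue<SocketHolder> testQueue = new LinkedBlockingQueue<>();
private static final long MAX_IDLE_MILLIS = 60_000;

public Optional<SocketHolder> getNextSocket() throws InterruptedException {
    while (true) {
        SocketHolder holder = socketHolderQueue.poll();
        if (holder == null) {
            return Optional.absent();   // nothing free right now
        }
        if (System.currentTimeMillis() - holder.getLastUsed() > MAX_IDLE_MILLIS) {
            testQueue.put(holder);      // stale: let the ping thread test it first
            continue;                   // keep looking for a fresh socket for the business thread
        }
        return Optional.of(holder);
    }
}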
It looks like you should consider using the try-with-resources feature here. You would have the SocketHolder or Optional class implement the AutoCloseable interface. For instance, let us assume that the Optional wrapper implements this interface; its close() method would then add the instance back to the container. I created a simple example that shows what I mean. It is not complete, but it gives you an idea of how to implement this in your code.
public class ObjectManager implements AutoCloseable {
private static class ObjectManagerFactory {
private static ObjectManager objMgr = new ObjectManager();
}
private ObjectManager() {}
public static ObjectManager getInstance() { return ObjectManagerFactory.objMgr; }
private static final int SIZE = 10;
private static BlockingQueue<AutoCloseable> objects = new LinkedBlockingQueue<AutoCloseable>();
private static ScheduledExecutorService sch;
static {
for(int cnt = 0 ; cnt < SIZE ; cnt++) {
objects.add(new AutoCloseable() {
@Override
public void close() throws Exception {
System.out.println(Thread.currentThread() + " - Adding object back to pool:" + this + " size: " + objects.size());
objects.put(this);
System.out.println(Thread.currentThread() + " - Added object back to pool:" + this);
}
});
}
sch = Executors.newSingleThreadScheduledExecutor();
sch.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
// TODO Auto-generated method stub
updateObjects();
}
}, 10, 10, TimeUnit.MICROSECONDS);
}
static void updateObjects() {
for(int cnt = 0 ; ! objects.isEmpty() && cnt < SIZE ; cnt++ ) {
try(AutoCloseable object = objects.take()) {
System.out.println(Thread.currentThread() + " - updateObjects - updated object: " + object + " size: " + objects.size());
} catch (Throwable t) {
// TODO Auto-generated catch block
t.printStackTrace();
}
}
}
public AutoCloseable getNext() {
try {
return objects.take();
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
return null;
}
}
public static void main(String[] args) {
try (ObjectManager mgr = ObjectManager.getInstance()) {
for (int cnt = 0; cnt < 5; cnt++) {
try (AutoCloseable o = mgr.getNext()) {
System.out.println(Thread.currentThread() + " - Working with " + o);
Thread.sleep(1000);
} catch (Throwable t) {
t.printStackTrace();
}
}
} catch (Throwable tt) {
tt.printStackTrace();
}
}
@Override
public void close() throws Exception {
// TODO Auto-generated method stub
ObjectManager.sch.shutdownNow();
}
}
I will make some points here. In the getNextSocket method, getOrderedDatacenters() will always return the same ordered list, so you will always pick from the datacenters in the same order from start to end (that is not a problem in itself).
How do you guarantee that two threads won't get the same liveSocket from getNextSocket()?
What you are saying here is true:
Concurrently, the timer thread may access the same 0MQ socket to ping
it.
I think the main problem here is that you don't distinguish between free sockets and reserved sockets.
One option, as you said, is to synchronize on each socket. Another option is to keep a list of reserved sockets, and when you want to get the next socket or to update the sockets, pick only from the free sockets. You don't want to update a socket that is already reserved.
You can also take a look here to see if it fits your needs.
There's a concept in operating systems and software engineering called the critical section. A critical section occurs when two or more processes share data and are executed concurrently; no process should modify or even read the shared data while another process is accessing it. So when a process enters the critical section, it should notify all other concurrently executing processes that it is currently modifying the critical section, and those processes should block while waiting to enter it. You might ask who decides which process enters: that is another problem, called process scheduling, which controls which process enters the critical section, and the operating system does that for you.
So the best solution for you is to use a semaphore whose value is the number of sockets. In your case I think you have one socket, so you would use a binary semaphore initialized with a value of 1. Your code should then be divided into four main sections: critical section entry, the critical section itself, critical section exit, and the remainder section.
Critical section entry: where a process enters the critical section and blocks all other processes. The semaphore allows one process (thread) to enter the critical section (use a socket), and the value of the semaphore is decremented (to zero).
The critical section: the code the process executes while it holds the critical section.
Critical section exit: the process releases the critical section for another process to enter. The semaphore value is incremented (back to 1), allowing another process to enter.
Remainder section: the rest of your code, excluding the previous three sections.
Now all you need is to look at any Java tutorial about semaphores to see how to apply one in Java; it's really easy.
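As a hedged illustration of those four sections with java.util.concurrent.Semaphore (the class and method names below are invented for the example, not taken from the question's code):
import java.util.concurrent.Semaphore;

public class GuardedSender {
    // one permit per socket; with a single socket this is a binary semaphore
    private final Semaphore permits = new Semaphore(1, true);

    public void sendGuarded(byte[] payload) throws InterruptedException {
        permits.acquire();               // critical section entry: decrements the permit count
        try {
            sendOnSharedSocket(payload); // the critical section itself
        } finally {
            permits.release();           // critical section exit: increments the permit count
        }
        // remainder section: anything that does not touch the shared socket
    }

    private void sendOnSharedSocket(byte[] payload) {
        // placeholder for the actual send on the shared socket
    }
}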
Mouhammed Elshaaer is right, but in addition you can also use any concurrent collection, for example a ConcurrentHashMap, where you can track which thread is working on which socket (for example with the socket hash code as key and the thread hash code, or something else, as value).
To me that is a somewhat clumsy solution, but it can be used too.
For the problem of two threads (thread A and the timer thread) accessing the same socket, I would keep two socket lists for each datacenter:
List A: the sockets that are not in use
List B: the sockets that are in use
i.e.,
call a synchronized getNextSocket() to find a not-in-use socket in list A, remove it from list A and add it to list B;
call a synchronized returnSocket(Socket) upon receiving the response/ACK for a sent message (either business or ping) to move the socket from list B back to list A. Put a try {} finally {} block around the "send message" code to make sure the socket is put back into list A even if there is an exception.
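A minimal sketch of that bookkeeping (SocketHolder and the Guava Optional come from the question; the pool class itself and its method bodies are illustrative):
public class SocketPool {
    private final List<SocketHolder> free = new ArrayList<>();   // list A: not in use
    private final List<SocketHolder> inUse = new ArrayList<>();  // list B: in use

    public synchronized Optional<SocketHolder> getNextSocket() {
        if (free.isEmpty()) {
            return Optional.absent();
        }
        SocketHolder holder = free.remove(free.size() - 1);
        inUse.add(holder);
        return Optional.of(holder);
    }

    public synchronized void returnSocket(SocketHolder holder) {
        if (inUse.remove(holder)) {
            free.add(holder);
        }
    }
}
A caller (business or ping) would then wrap its send in try { ... } finally { pool.returnSocket(holder); } so the socket always comes back, as described above.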
I have a simple solution that may help you. I don't know whether in Java you can add a custom attribute to each socket; in Socket.io you can, so I will assume something similar here (I will delete this answer if not).
You add a boolean attribute called locked to each socket. When your thread takes the first socket, it sets locked to true. Any other thread that wants to ping THIS socket first checks whether locked is false; if it is not, it calls getNextSocket.
So, in this snippet below...
...
for (SocketHolder liveSocket : liveSockets) {
Socket socket = liveSocket.getSocket();
...
You check whether the socket is locked or not. If it is, skip this socket, interrupt the work, or go on to the next socket (I don't know what you would call it in Java); a sketch follows after the list.
So the process is...
The thread gets an unlocked socket
The thread sets socket.locked to true.
The thread pings the socket and does whatever work is needed
The thread sets socket.locked to false.
The thread goes on to the next socket.
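In Java you cannot attach an attribute to a Socket directly, but a small wrapper with an AtomicBoolean (java.util.concurrent.atomic) gives the same effect; a hedged sketch of the steps above, where LockableSocket is a made-up name:
class LockableSocket {
    final Socket socket;
    private final AtomicBoolean locked = new AtomicBoolean(false);

    LockableSocket(Socket socket) { this.socket = socket; }

    // true only for the one thread that flips the flag from false to true
    boolean tryLock() { return locked.compareAndSet(false, true); }

    void unlock() { locked.set(false); }
}

// inside whatever method picks a socket:
for (LockableSocket s : sockets) {
    if (!s.tryLock()) {
        continue;                 // locked by someone else: go to the next socket
    }
    try {
        // ping or send on s.socket here
    } finally {
        s.unlock();
    }
    break;
}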
Sorry my bad english :)

java blockingqueue consumer block on full queue

I'm writing a small program to put tweets from the Twitter public stream into an HBase database. The program uses two threads, one to collect the tweets and one to process them.
The first thread uses the twitter4j StatusListener to get the tweets and puts them into an ArrayBlockingQueue with a capacity of 100.
The second thread takes a status from the queue, filters out the needed data and writes it to the database.
The processing takes more time than the collecting of the statuses.
The producer looks like this:
public void onStatus(Status status) {
try {
this.queue.put(status);
} catch(Exception ex) {
ex.printStackTrace();
}
}
The consumer uses take and calls a function to process the new status:
public void run() {
try {
while(true) {
// Get new status to process
this.status = this.queue.take();
this.analyse();
}
} catch(Exception ex) {
ex.printStackTrace();
}
}
In the main function the two threads were created and started:
ArrayBlockingQueue<Status> queue_public = new ArrayBlockingQueue<Status>(100);
Thread ta_public = new Thread(new TweetAnalyser(cl.getOptionValue("config"), queue_public));
Thread st_public = new Thread(new RunPublicStream(cl.getOptionValue("config"), queue_public));
ta_public.start();
st_public.start();
The program runs for a while without any problem, but then suddenly stops. At this point the queue is full and it seems that the consumer is not able to take a new status from it. I have tried several variations of the producer/consumer pattern without success. No exception is thrown.
I don't know where to look for the failure. I hope someone can give me a hint or a solution.
If you are working with blocking queues, double-check where the blocking calls (put and take for ArrayBlockingQueue) appear in the code, and watch out for typos if you are working with multiple queues: both threads must operate on the same queue instance, and a take() or put() on the wrong queue will block forever.
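For comparison, this is roughly what the intended wiring should look like; both threads hold a reference to the exact same ArrayBlockingQueue, put() blocks only while the queue is full, and take() blocks only while it is empty. If the queue stays full, the usual suspects are the consumer being stuck inside analyse() or accidentally taking from a different queue instance.
// single shared queue instance, passed to both the producer and the consumer
ArrayBlockingQueue<Status> queue = new ArrayBlockingQueue<Status>(100);

// producer (StatusListener callback): blocks only while the queue is full
public void onStatus(Status status) {
    try {
        queue.put(status);
    } catch (InterruptedException ex) {
        Thread.currentThread().interrupt();
    }
}

// consumer: blocks only while the queue is empty
public void run() {
    try {
        while (true) {
            Status status = queue.take();
            analyse(status);   // if this never returns, the queue will fill up
        }
    } catch (InterruptedException ex) {
        Thread.currentThread().interrupt();
    }
}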

Access Queue from two separate Threads in parallel

So my goal is to measure the performance of a Streaming Engine. It's basically a library to which I can send data packages. The idea for measuring this is to generate data, put it into a queue and let the Streaming Engine grab the data and process it.
I thought of implementing it like this: the Data Generator runs in a thread and generates data packages in an endless loop with a certain Thread.sleep(X) at the end. When doing the tests, the idea is to minimize this Thread.sleep(X) to see whether it has an impact on the Streaming Engine's performance. The Data Generator writes the created packages into a queue, that is, a ConcurrentLinkedQueue, which at the same time is a singleton.
In another thread I instantiate the Streaming Engine, which continuously removes the packages from the queue by calling queue.remove(). This is done in an endless loop without any sleeping, because it should just run as fast as possible.
In a first attempt to implement this I ran into a problem: it seems as if the Data Generator is not able to put the packages into the queue as it should. It is doing so too slowly. My suspicion is that the endless loop of the Streaming Engine thread is eating up all the resources and therefore slows everything else down.
I would be happy to hear how to approach this issue, or about other design patterns which could solve it elegantly.
The requirements are: two threads which basically run in parallel. One is putting data into a queue; the other one is reading/removing from the queue. And I want to measure the size of the queue regularly, in order to know whether the engine which is reading/removing from the queue is fast enough to process the generated packages.
You can use a BlockingQueue, for example an ArrayBlockingQueue. You can initialize these to a certain size, so the number of items queued will never exceed that number, as in this example:
// create queue, max size 100
final ArrayBlockingQueue<String> strings = new ArrayBlockingQueue<>(100);
final String stop = "STOP";
// start producing
Runnable producer = new Runnable() {
@Override
public void run() {
try {
for(int i = 0; i < 1000; i++) {
strings.put(Integer.toHexString(i));
}
strings.put(stop);
} catch(InterruptedException ignore) {
}
}
};
Thread producerThread = new Thread(producer);
producerThread.start();
// start monitoring
Runnable monitor = new Runnable() {
@Override
public void run() {
try {
while (true){
System.out.println("Queue size: " + strings.size());
Thread.sleep(5);
}
} catch(InterruptedException ignore) {
}
}
};
Thread monitorThread = new Thread(monitor);
monitorThread.start();
// start consuming
Runnable consumer = new Runnable() {
@Override
public void run() {
// infinite loop; returns when the stop marker is received
try {
while(true) {
String value = strings.take();
if(value.equals(stop)){
return;
}
System.out.println(value);
}
} catch(InterruptedException ignore) {
}
}
};
Thread consumerThread = new Thread(consumer);
consumerThread.start();
// wait for producer and consumer to finish
producerThread.join();
consumerThread.join();
// interrupt the monitor (the consumer has already returned after seeing the stop marker)
monitorThread.interrupt();
You could also have a third thread monitoring the size of the queue, to give you an idea of which thread is outpacing the other.
Also, you can use the timed offer and poll methods (and their untimed variants), which give you more control over what to do when the queue is full or empty. In the example above, put() blocks until there is space for the next element, and take() blocks until there is an element in the queue.
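A small sketch of those timed variants (the values are arbitrary; both calls throw InterruptedException, so the surrounding method has to handle or declare it):
ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(100);

// offer(): gives up after the timeout instead of blocking forever when the queue is full
boolean accepted = queue.offer("payload", 500, TimeUnit.MILLISECONDS);
if (!accepted) {
    System.out.println("Queue still full after 500 ms, dropping or retrying");
}

// poll(): returns null if nothing arrives within the timeout
String next = queue.poll(500, TimeUnit.MILLISECONDS);
if (next == null) {
    System.out.println("Nothing to consume within 500 ms");
}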

Passing a Set of Objects between threads

The current project I am working on requires me to implement a way to efficiently pass a set of objects from one thread, which runs continuously, to the main thread. The current setup is something like the following.
I have a main thread which creates a new thread. This new thread operates continuously and calls a method based on a timer. This method fetches a group of messages from an online source and organizes them in a TreeSet.
This TreeSet then needs to be passed back to the main thread so that the messages it contains can be handled independent of the recurring timer.
For better reference my code looks like the following
// Called by the main thread on start.
void StartProcesses()
{
if(this.IsWindowing)
{
return;
}
this._windowTimer = Executors.newSingleThreadScheduledExecutor();
Runnable task = new Runnable() {
public void run() {
WindowCallback();
}
};
this.CancellationToken = false;
_windowTimer.scheduleAtFixedRate(task,
0, this.SQSWindow, TimeUnit.MILLISECONDS);
this.IsWindowing = true;
}
/////////////////////////////////////////////////////////////////////////////////
private void WindowCallback()
{
ArrayList<Message> messages = new ArrayList<Message>();
//TODO create Monitor
if((!CancellationToken))
{
try
{
//TODO fix epochWindowTime
long epochWindowTime = 0;
int numberOfMessages = 0;
Map<String, String> attributes;
// Setup the SQS client
AmazonSQS client = new AmazonSQSClient(new
ClasspathPropertiesFileCredentialsProvider());
client.setEndpoint(this.AWSSQSServiceUrl);
// get the NumberOfMessages to optimize how to
// Receive all of the messages from the queue
GetQueueAttributesRequest attributesRequest =
new GetQueueAttributesRequest();
attributesRequest.setQueueUrl(this.QueueUrl);
attributesRequest.withAttributeNames(
"ApproximateNumberOfMessages");
attributes = client.getQueueAttributes(attributesRequest).
getAttributes();
numberOfMessages = Integer.valueOf(attributes.get(
"ApproximateNumberOfMessages")).intValue();
// determine if we need to Receive messages from the Queue
if (numberOfMessages > 0)
{
if (numberOfMessages < 10)
{
// just do it inline it's less expensive than
//spinning threads
ReceiveTask(numberOfMessages);
}
else
{
//TODO Create a multithreading version for this
ReceiveTask(numberOfMessages);
}
}
if (!CancellationToken)
{
//TODO testing
_setLock.lock();
Iterator<Message> _setIter = _set.iterator();
//TODO
while(_setIter.hasNext())
{
Message temp = _setIter.next();
Long value = Long.valueOf(temp.getAttributes().
get("Timestamp"));
if(value.longValue() < epochWindowTime)
{
messages.add(temp);
_set.remove(temp);
}
}
_setLock.unlock();
// TODO deduplicate the messages
// TODO reorder the messages
// TODO raise new Event with the results
}
if ((!CancellationToken) && (messages.size() > 0))
{
if (messages.size() < 10)
{
Pair<Integer, Integer> range =
new Pair<Integer, Integer>(Integer.valueOf(0),
Integer.valueOf(messages.size()));
DeleteTask(messages, range);
}
else
{
//TODO Create a way to divide this work among
//several threads
Pair<Integer, Integer> range =
new Pair<Integer, Integer>(Integer.valueOf(0),
Integer.valueOf(messages.size()));
DeleteTask(messages, range);
}
}
}catch (AmazonServiceException ase){
ase.printStackTrace();
}catch (AmazonClientException ace) {
ace.printStackTrace();
}
}
}
As can be seen from some of the comments, my currently preferred way to handle this is to raise an event in the timer thread when there are messages. The main thread would then listen for this event and handle it appropriately.
Presently I am unfamiliar with how Java handles events, or how to create/listen for them. I also do not know if it is possible to create events and have the information contained within them passed between threads.
Can someone please give me some advice/insight on whether or not my methods are possible? If so, where might I find some information on how to implement them, as my current search attempts are not proving fruitful?
If not, can I get some suggestions on how I would go about this, keeping in mind I would like to avoid having to manage sockets if at all possible.
EDIT 1:
The main thread will also be responsible for issuing commands based on the messages it receives, or issuing commands to get required information. For this reason the main thread cannot wait on receiving messages, and should handle them in an event based manner.
Producer-Consumer Pattern:
One thread (the producer) continuously puts objects (messages) into a queue.
Another thread (the consumer) reads and removes objects from the queue.
If your problem fits this, try BlockingQueue.
http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/BlockingQueue.html
It is easy and effective.
If the queue is empty, the consumer is blocked, which means the thread waits (so it does not use CPU time) until the producer puts some objects in; otherwise the consumer continuously consumes objects.
And if the queue is full, the producer is blocked until the consumer consumes some objects to make room in the queue, and vice versa.
Here's an example:
(the queue should be the same object in both the producer and the consumer)
(Producer thread)
Message message = createMessage();
queue.put(message);
(Consumer thread)
Message message = queue.take();
handleMessage(message);
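Applied to this question, a hedged sketch of the handoff could look like this (Message and WindowCallback come from the question; the queue and handleMessages are illustrative names). The timer thread publishes each collected batch, and the main thread either blocks on take() or, if it must not block (see EDIT 1), checks with poll():
// shared between the timer thread and the main thread
BlockingQueue<List<Message>> handoff = new LinkedBlockingQueue<List<Message>>();

// at the end of WindowCallback(), in the timer thread:
try {
    handoff.put(messages);          // hand the collected batch over
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}

// in the main thread, without blocking (see EDIT 1):
List<Message> batch = handoff.poll();
if (batch != null) {
    handleMessages(batch);          // hypothetical handler on the main thread
}
// alternatively, handoff.take() blocks until the timer thread publishes a batch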

Java - Simple networked game is extremely laggy

I already asked this on Code Review a day ago but I haven't gotten any responses yet, so I thought I'd try asking it here.
Let me tell you what I'm trying to make:
A window pops up asking the user if they want to run a server, or a client. Choosing server will start a server on the LAN. Choosing client will try to connect to that server. Once a server is running and a client has connected, a window pops up with two squares. Both the server/client can move their square with the arrow-keys.
This is what I'm getting:
The server's square moves at the intended speed, but its movement is very choppy on the client's side. The client's square, on the other hand, seems to move at about 3 pixels per second (way too slow).
This is what I'm asking:
I guess my question is pretty obvious. All I'm doing is sending 2 integers over the internet. Modern online games send much more data than this, and they hardly lag, so obviously I'm doing something wrong, but what?
Server.java:
// server class
public class Server {
// networking objects
private ServerSocket serverSocket;
private Socket clientSocket;
private DataOutputStream clientOutputStream;
private DataInputStream clientInputStream;
// game objects
private Vec2D serverPos, clientPos;
private GameManager gameManager;
// run method
public void run() {
// initialization try-catch block
try {
// setup sockets
serverSocket = new ServerSocket(1111);
clientSocket = serverSocket.accept();
// setup I/O streams
clientOutputStream = new DataOutputStream(clientSocket.getOutputStream());
clientInputStream = new DataInputStream(clientSocket.getInputStream());
} catch(IOException e) { Util.err(e); }
// declare & initialize data exchange thread
Thread dataExchange = new Thread(
new Runnable() {
// run method
@Override
public void run() {
// I/O try-catch block
try {
// exchange-loop
while(true) {
// write x & y, flush
synchronized(gameManager) {
clientOutputStream.writeInt(serverPos.x);
clientOutputStream.writeInt(serverPos.y);
clientOutputStream.flush();
}
// read x & y
clientPos.x = clientInputStream.readInt();
clientPos.y = clientInputStream.readInt();
}
} catch(IOException e) { Util.err(e); }
}
}
);
// setup game data
serverPos = new Vec2D(10, 10);
clientPos = new Vec2D(300, 300);
gameManager = new GameManager(serverPos, clientPos, serverPos);
// start data exchange thread
dataExchange.start();
// start main loop
while(true) {
// get keyboard input
synchronized(gameManager) {
gameManager.update();
}
// repaint, sleep
gameManager.repaint();
Util.sleep(15);
}
}
}
Client.java:
// client class
public class Client {
// networking objects
private Socket serverConnection;
private DataOutputStream serverOutputStream;
private DataInputStream serverInputStream;
// game objects
private Vec2D serverPos, clientPos;
private GameManager gameManager;
// run method
public void run() {
// initialization try-catch block
try {
// setup socket
serverConnection = new Socket(InetAddress.getByName("192.168.0.19"), 1111);
// setup I/O streams
serverOutputStream = new DataOutputStream(serverConnection.getOutputStream());
serverInputStream = new DataInputStream(serverConnection.getInputStream());
} catch(IOException e) { Util.err(e); }
// declare & initialize data exchange thread
Thread dataExchange = new Thread(
new Runnable() {
// run method
@Override
public void run() {
// I/O try-catch block
try {
// exchange-loop
while(true) {
// read x & y
synchronized(gameManager) {
serverPos.x = serverInputStream.readInt();
serverPos.y = serverInputStream.readInt();
}
// write x & y, flush
serverOutputStream.writeInt(clientPos.x);
serverOutputStream.writeInt(clientPos.y);
serverOutputStream.flush();
}
} catch(IOException e) { Util.err(e); }
}
}
);
// setup game data
serverPos = new Vec2D(10, 10);
clientPos = new Vec2D(300, 300);
gameManager = new GameManager(serverPos, clientPos, clientPos);
// start data exchange thread
dataExchange.start();
// start main loop
while(true) {
// get keyboard input
synchronized(gameManager) {
gameManager.update();
}
// repaint, sleep
gameManager.repaint();
Util.sleep(15);
}
}
}
I got rid of a bunch of the code - I hope it isn't confusing now. Thanks for the help!
You are using Sockets; maybe you see it being laggy for a real-time conversation because they are built on top of TCP, which has to acknowledge each message and keep checking whether the connection is still alive.
Maybe you should use DatagramSocket, which works over the UDP protocol. The difference is that UDP just sends packets without the overhead of keeping a connection alive or even trying to know whether the message arrived.
example of use: http://docs.oracle.com/javase/tutorial/networking/datagrams/clientServer.html
Edit: Why don't you try sending those ints only when the position on the server changes? Probably the server is sending so many ints that your client has a buffer full of the same values, and because you read int by int instead of emptying the buffer, you get the false impression of lag.
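A rough sketch of that suggestion inside the server's exchange loop (lastSent is a new local variable, not from the original code; note that the client's read loop would then also have to stop assuming exactly one update per iteration, otherwise it blocks waiting for data that never arrives):
// inside the server's data exchange thread, still within its existing try/catch
Vec2D lastSent = new Vec2D(serverPos.x, serverPos.y);
while (true) {
    synchronized (gameManager) {
        if (serverPos.x != lastSent.x || serverPos.y != lastSent.y) {
            clientOutputStream.writeInt(serverPos.x);
            clientOutputStream.writeInt(serverPos.y);
            clientOutputStream.flush();
            lastSent.x = serverPos.x;
            lastSent.y = serverPos.y;
        }
    }
    // read the client's position here as before
}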
A problem in your code is the while(true) loops:
while(true) {
// get keyboard input
synchronized(gameManager) {
gameManager.update();
}
// repaint, sleep
gameManager.repaint();
Util.sleep(15);
}
This way, you are sending either too many updates (when nobody presses a key) or too few updates (because you always wait 15 milliseconds, no matter what happens). It would be better to listen for keyboard events and, when one occurs, propagate it to the other side; the other side can then update in reaction to this "change" event. You might find the Observer pattern useful for implementing this.
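A hedged sketch of that event-driven approach using a KeyListener (the component variable and the exact handling are assumptions; in a real implementation the network write should still be moved off the Swing event thread):
component.addKeyListener(new KeyAdapter() {
    @Override
    public void keyPressed(KeyEvent e) {
        synchronized (gameManager) {
            gameManager.update();                         // apply the key locally
            try {
                clientOutputStream.writeInt(serverPos.x); // propagate only on change
                clientOutputStream.writeInt(serverPos.y);
                clientOutputStream.flush();
            } catch (IOException ex) {
                Util.err(ex);
            }
        }
        gameManager.repaint();
    }
});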
I haven't read all of the code, but I did notice that the "client" and "server" both have threads that read and write updates in a tight loop.
There are three problems with this:
There is no point in the client (or server) telling the other end the current position if it hasn't changed.
Because the client and server both rigidly "write then read then write then ..." the two threads get into lock-step, and each write / read cycle requires a network round trip.
You are doing part of the work while holding a lock, and there is another thread grabbing the same lock and doing a screen update.
So you need to arrange that:
a position update is only sent when the position actually changes, and
the reading and writing happen on different threads.
@cyroxx has identified another problem that will also result in lagginess.
