Let's say I had a socket that needed to send continuous data at random (but small) intervals, say about 20 objects a second, for any span of time.
I am foreseeing possible issues that I am not sure how to handle.
1) If I send one object at a time as in example A, may they still arrive in bunches, making it better to do as in example B?
2) Would the thread receiving the data possibly try to read the data before an entire object was sent, thus splitting the data and creating another issue I will have to look out for?
Pseudocode for sending data might look like this:
EXAMPLE A
void run()
{
    try
    {
        // get the socket's output stream once, before the loop
        ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
        while (isBroadcasting)
        {
            if (!myQueue.isEmpty())
                out.writeObject(myQueue.poll()); // send first object in queue
            Thread.sleep(25);
        }
    }
    catch (IOException | InterruptedException e)
    {
        e.printStackTrace();
    }
}
EXAMPLE B
void run()
{
    try
    {
        ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
        while (isBroadcasting)
        {
            while (!myQueue.isEmpty())
                out.writeObject(myQueue.poll()); // drain the whole queue each pass
            Thread.sleep(25);
        }
    }
    catch (IOException | InterruptedException e)
    {
        e.printStackTrace();
    }
}
and finally, reading might look like this:
void run()
{
    try
    {
        ObjectInputStream in = new ObjectInputStream(socket.getInputStream());
        while (isReceiving)
        {
            Object obj = in.readObject();
            publish(obj); // hand the object(s) off to the main thread
            Thread.sleep(25);
        }
    }
    catch (IOException | ClassNotFoundException | InterruptedException e)
    {
        e.printStackTrace();
    }
}
3) Would this be a viable solution? Is it ok to keep the streams open at both ends, looping and writing/reading data until finished?
1) If I send one object at a time as in example A, may they still arrive in bunches, making it better to do as in example B?
Depending on the type of network you are sending them over, they may definitely still arrive in bunches at the receiving end. In addition, if you are using TCP, the application of Nagle's algorithm may introduce more bunching and apparent delay.
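If that bunching matters for your use case, you can ask TCP to stop coalescing small writes by disabling Nagle's algorithm on the socket. A minimal sketch (host and port are placeholders):
import java.io.IOException;
import java.net.Socket;

public class NoDelayExample {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket("localhost", 4000); // placeholder host/port
        // Disable Nagle's algorithm: small writes are sent immediately
        // rather than being coalesced into larger segments (costs bandwidth).
        socket.setTcpNoDelay(true);
    }
}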
2) Would the thread receiving the data possibly try to read the data before an entire object was sent, thus splitting the data and creating another issue I will have to look out for?
This is very dependent on your implementation details, and is impossible to answer with the pseudocode provided.
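The underlying point, though, is that TCP gives you a byte stream with no message boundaries, so a read can see half an object. ObjectInputStream/ObjectOutputStream handle framing for you; if you serialize by hand, a common approach is to length-prefix each message. A sketch of that idea, assuming raw byte payloads:
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

class Framing {
    // Sender: write the payload size first, then the payload itself.
    static void sendMessage(DataOutputStream out, byte[] payload) throws IOException {
        out.writeInt(payload.length); // 4-byte length prefix
        out.write(payload);
        out.flush();
    }

    // Receiver: read the prefix, then block until the whole body has arrived.
    static byte[] readMessage(DataInputStream in) throws IOException {
        int length = in.readInt();
        byte[] payload = new byte[length];
        in.readFully(payload); // never returns a partial message
        return payload;
    }
}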
Is it ok to keep the streams open at both ends, looping and writing/reading data until finished?
Yes, it is perfectly reasonable to use a TCP connection like this for a long period of time. However, your application must be willing to reconnect if the connection is lost for some reason.
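For example, the sending side's outer loop might wrap the whole connection in a retry; this is only a sketch, and sendLoop(), the host, the port, and the backoff delay are all assumptions:
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.net.Socket;

void runWithReconnect(String host, int port) throws InterruptedException {
    while (isBroadcasting) {
        try (Socket socket = new Socket(host, port);
             ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream())) {
            sendLoop(out); // your existing send loop; throws when the connection drops
        } catch (IOException e) {
            Thread.sleep(1000); // simple fixed backoff before reconnecting
        }
    }
}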
Related
I am trying to understand some things about threads in Java, which I am very unfamiliar with. Unfortunately my example is too big to post as running code, but I'll try to specify my problem as well as possible.
One of two similar code segments (taken from a little example which features a simple ChatClient/Server class), which are the center of my question:
public void run() {
    String message;
    try {
        // readLine() blocks until a full line arrives; it returns null
        // only at end of stream, i.e. when this client disconnects
        while ((message = reader.readLine()) != null) {
            tellEveryone(message);
        }
    } catch (Exception ex) {...}
}
(Taken from an inner class of the Server class.)
The server waits in a while(true) loop for clients via its ServerSocket's accept() method, and whenever a client connects, a new Thread is started with the above run method as its "entry point".
What I don't understand is why this works. My understanding until now was that a Thread which is supposed to constantly listen for something has to contain a while(true) construct, because otherwise it would just finish its run method and be done, with nothing to return to (a "dead" call stack).
So for my example, I supposed that once the reader had given us all the lines it had to give at the beginning, the thread would leave the run() method, and nothing would happen when that client sent a new message. Yet it seems to stay listening for client input. How does that work?
(I probably should say that "reader" is a BufferedReader within the inner class which is instantiated once for every connected client.)
I hope that was sufficiently explained. If more information is needed I will gladly provide it.
readLine() blocks while there is no data. It only returns null at end of stream, which in the case of a socket means that the peer has disconnected.
If the client does not send anything, the server has nothing to read, so the reader just waits. When the client writes to the socket and the contents are sent, the reader can read them.
Currently I have a server that listens for connections (it is a basic highscore server for a mobile game I have), loops through the connections every 1000ms and listens for any incoming data.
public void readData(Connection c) throws IOException {
    PacketBuffer readBuffer = new PacketBuffer(Server.PACKET_CAPACITY);
    int packetSize = c.getIn().read();
    c.getIn().mark(packetSize);
    byte[] buffer = new byte[packetSize];
    c.getIn().read(buffer, 0, buffer.length);
    readBuffer.setBuffer(buffer);
    packetHandler.addProcess(c, readBuffer);
}
I use my own PacketBuffer, and I need to find a way so that c.getIn().read() (that is, my connection's InputStream) doesn't block. Currently the socket is set to a 500ms timeout, and my server would run fine that way. My problem is that if someone writes their own program to connect, to try to hack their own highscores or DDoS the server, it will become clogged with a bunch of useless connections that each block the thread for 500ms when the connection isn't writing.
You could try something like this. Every time readData() gets called it checks whether bytes are available to read. I used a while loop here because you want it to process all the data it can before the thread sleeps again. This ensures messages don't get backed up, as they would if it only read one every x milliseconds.
public void readData(Connection c) throws IOException {
    // available() reports how many bytes can be read without blocking
    while (c.getIn().available() > 0) {
        int packetSize = c.getIn().read();
        c.getIn().mark(packetSize);
        byte[] buffer = new byte[packetSize];
        c.getIn().read(buffer, 0, buffer.length);
        PacketBuffer readBuffer = new PacketBuffer(Server.PACKET_CAPACITY);
        readBuffer.setBuffer(buffer);
        packetHandler.addProcess(c, readBuffer);
    }
}
I don't know why you are using the mark() method; it looks problematic to me.
You also really need to use a readFully() style method (see DataInputStream) which won't return until it's definitely read the full byte array. Regular reads can always "return short", even when the sender has sent the full data block (due to network packet sizing etc).
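For instance, wrapping the connection's stream in a DataInputStream (a sketch reusing the names from the question):
import java.io.DataInputStream;
import java.io.IOException;

public void readData(Connection c) throws IOException {
    DataInputStream in = new DataInputStream(c.getIn());
    int packetSize = in.read();      // one size byte, as in the original
    if (packetSize < 0) return;      // -1 means end of stream
    byte[] buffer = new byte[packetSize];
    in.readFully(buffer);            // blocks until ALL packetSize bytes arrive
    PacketBuffer readBuffer = new PacketBuffer(Server.PACKET_CAPACITY);
    readBuffer.setBuffer(buffer);
    packetHandler.addProcess(c, readBuffer);
}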
There are two classic ways to implement servers in java.
The first and oldest way is to use a read/write thread pair for each connected client. This is okay for smaller servers without a lot of connected clients as each client requires two threads to manage it. It doesn't scale very well to a lot of concurrent clients.
The second and newer way is to use java.nio.ServerSocketChannel, java.nio.SocketChannel, and java.nio.Selector. These three classes allow you to manage all IO operations for every client you have connected in a single thread. Here is an example for how to implement a very basic server using the java.nio package.
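A bare-bones sketch of that single-threaded pattern, here as an echo server (the port and buffer size are arbitrary):
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioEchoServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(4000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int n = client.read(buffer);
                    if (n == -1) { key.cancel(); client.close(); continue; }
                    buffer.flip();
                    client.write(buffer); // echo the bytes straight back
                }
            }
        }
    }
}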
A much better way to implement a server would be to use a third-party framework. A great library that I have used in the past is Netty. It handles all the nitty-gritty details of sockets and provides a fairly clean and simple api that scales well.
You can't. InputStreams are blocking. Period. It's hard to see how non-blocking mode would actually solve the problem you mention. I suggest a redesign is in order.
I am currently trying to write a very simple chat application to introduce myself to Java socket programming and multithreading. It consists of 2 modules, a pseudo-server and a pseudo-client; however, my design has led me to believe that I'm trying to implement an impossible concept.
The Server
The server waits on localhost port 4000 for a connection, and when it receives one, it starts 2 threads, a listener thread and a speaker thread. The speaker thread constantly waits for user input to the console, and sends it to the client when it receives said input. The listener thread blocks to the ObjectInputStream of the socket for any messages sent by the client, and then prints the message to the console.
The Client
The client connects the user to the server on port 4000, and then starts 2 threads, a listener and a speaker. These threads have the same functionality as the server's threads but, for obvious reasons, handle input/output in the opposite direction.
The First Problem
The problem I am running into is that in order to end the chat, a user must type "Bye". Now, since my threads have been looped to block for input:
while (connected()) {
    // block for input
    // do something with this input
    // determine if the connection still exists (was the message "Bye"?)
}
Then it becomes a really interesting scenario when trying to exit the application. If the client types "Bye", its sending thread returns, and the server thread that listened for the "Bye" also returns. This leaves us with the problem that the client-side listener and the server-side speaker do not know that "Bye" has been typed, and thus continue execution.
I resolved this issue by creating a class Synchronizer that holds a boolean variable that both threads access in a synchronized manner:
public class Synchronizer {
    private boolean chatting;

    public Synchronizer() {
        chatting = true;
        onChatStatusChanged();
    }

    synchronized void stopChatting() {
        chatting = false;
        onChatStatusChanged();
    }

    synchronized boolean chatting() {
        return chatting;
    }

    public void onChatStatusChanged() {
        System.out.println("Chat status changed!: " + chatting);
    }
}
I then passed the same instance of this class into each thread as it was created. There was still one issue, though.
The Second Problem
This is where I deduced that what I am trying to do is impossible using the methods I am currently employing. Given that one user has to type "Bye" to exit the chat, the other 2 threads that aren't involved in that exchange still pass the connection check and begin blocking for I/O. While they are blocking, the original 2 threads realize that the connection has been terminated, but even though they change the boolean value, the other 2 threads have already passed the check and are already blocking for I/O.
This means that even though each of those threads would terminate on the next iteration of its loop, it still sits trying to receive input while its partner thread has already terminated properly. This led me to my final conclusion and question.
My Question
Is it possible to asynchronously receive and send data in the manner which I am trying to do? (2 threads per client/server that both block for I/O) Or must I send a heartbeat every few milliseconds back and forth between the server and client that requests for any new data and use this heartbeat to determine a disconnect?
The problem seems to reside in the fact that my threads are blocking for I/O before they realize that the partner thread has disconnected. This leads to the main issue: how would you asynchronously stop a thread that is blocking for I/O?
I feel as though this should be doable, as the behavior is seen throughout social media.
Any clarification or advice would be greatly appreciated!
I don't know Java, but if it has threads, the ability to invoke functions on threads, and the ability to kill threads, then even if it doesn't have tasks, you can add tasks, which is all you need to start building your own async interface.
For that matter, if you can kill threads, then the exiting threads could just kill the other threads.
Also, a "Bye" (or some other code) should be sent in any case where the window is closing and the connection is open - If Java has Events, and the window you're using has a Close event, then that's the place to put it.
Alternately, you could test for a valid/open window, and send the "Bye" if the window is invalid/closed. Think of that like a poor man's event handler.
Also, make sure you know how to (and have permission to) manually add exceptions to your networks' firewall(s).
Also, always test it over a live network. Just because it works in a loopback, doesn't mean it'll work over the network. Although you probably already know that.
Just to clarify for anyone who might stumble upon this post in the future, I ended up solving this problem by tweaking the syntax of my threads a bit. First of all, I had to remove my old threads, and replace them with AsyncSender and AsyncReader, respectively. These threads constantly send and receive regardless of user input. When there is no user input, it simply sends/receives a blank string and only prints it to the console if it is anything but a blank string.
The Workaround
try {
    Object obj;
    if ((obj = in.readObject()) != null) {
        if (obj instanceof String) {
            String output = (String) obj;
            if (output.equalsIgnoreCase("Bye"))
                s.stop(); // flag the chat as finished on this side
        }
    }
}
catch (ClassNotFoundException e) {
    e.printStackTrace();
}
catch (IOException e) {
    e.printStackTrace();
}
In this iteration of the receiver thread, it does not block for input, but rather tests if the object read was null (no object was in the stream). The same is done in the sender thread.
This successfully bypasses the problem of having to stop a thread that is blocking for I/O.
Note that there are still other ways to work around this issue, such as using an InterruptibleChannel.
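For reference, the usual way to break a plain socket read out of its block is to close the socket from another thread; the blocked read then throws a SocketException, which you can treat as a clean shutdown. A minimal sketch (method names are illustrative):
import java.io.IOException;
import java.io.ObjectInputStream;
import java.net.Socket;
import java.net.SocketException;

// Receiver thread: stays blocked in readObject() until data arrives or
// another thread closes the socket out from under it.
void receiveLoop(ObjectInputStream in) {
    try {
        Object obj;
        while ((obj = in.readObject()) != null) {
            // handle the message
        }
    } catch (SocketException e) {
        // the socket was closed by the other thread: clean shutdown
    } catch (IOException | ClassNotFoundException e) {
        e.printStackTrace();
    }
}

// Runs on whichever thread saw "Bye":
void shutdown(Socket socket) throws IOException {
    socket.close(); // forces the blocked readObject() above to throw
}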
Here is the situation. I'm creating an Android app that utilises Bluetooth to update connected clients of each other's status. The idea is for the host of the Bluetooth connection to hold a multi-dimensional array containing all of these stats. Each time a client updates the host with their new status, the host updates the data in the array and then sends it to all the clients.
Of course, this all sounds like milk and cookies to me, but unfortunately it is not. I understand that I need to have a Bluetooth socket on each end and one of them needs to be a host socket. So, getting one connection going seems pretty straightforward. But what if I then want to accept more connections? I've been reading around and apparently I have to create a new thread for each connection. I don't see how that would work; could someone please explain this?
The reason you need a thread for each connection is this:
Imagine you have two opened sockets, sock1 and sock2. To read from those sockets, you might call something like
InputStream in1 = sock1.getInputStream();
InputStream in2 = sock2.getInputStream();
Now, to read from sock1, you call
in1.read(buffer);
where "buffer" is a byte array, in which you store the bytes you read.
However, read() is a blocking call - in other words, it doesn't return, and you don't get to execute the next line, until there are some bytes to read. So if you wait on sock1, you'll never get to read sock2, and vice versa, if they're in the same thread.
Thus if you have one thread per connection, each thread can call read(), and wait for input. If input comes while one of the other threads is executing, it waits until that thread's turn comes up, and then proceeds.
To actually implement this, all you need to do is stick the code to handle one connection into a class that extends Thread.
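Concretely, the handler might look something like this; it's a generic-socket sketch with illustrative names, and for Bluetooth you would hold a BluetoothSocket instead:
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

class ConnectionHandler extends Thread {
    private final Socket sock;

    ConnectionHandler(Socket sock) {
        this.sock = sock;
    }

    @Override
    public void run() {
        byte[] buffer = new byte[1024];
        try (InputStream in = sock.getInputStream()) {
            int n;
            // read() blocks only this thread; other connections keep running
            while ((n = in.read(buffer)) != -1) {
                // handle the n bytes received from this client
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

// In the accept loop: new ConnectionHandler(serverSocket.accept()).start();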
There are lots of details involved - I would suggest the Android BluetoothChat sample as a good tutorial.
It is possible to skip data from an InputStream:
in.skip(in.available());
but if you want to do something similar with an OutputStream, the closest I've found is
socket.getOutputStream().flush();
That's not the same thing: flush() transmits the buffered data instead of ignoring it.
Is there any possibility of deleting buffered data?
Thanks
EDIT
The situation is a client-server application: when the client sends a new command, it tries to make sure that the answer it reads corresponds to the last command sent.
Some commands are sent by (human-fired) events, and others are sent by automatic threads.
If a command is in the buffer and a new one is sent, the answer will be for the first one, causing desynchronization.
Of course a synchronized method plus a flag called "waitingCommand" would be the safer approach, but as the communication is not reliable, this approach is slow (it depends on timeouts). That's why I've asked about the skip method.
You can't remove data once you have handed it to the stream; it may already have been sent. You can write the data into an in-memory OutputStream like ByteArrayOutputStream and copy only the portions you want.
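A small sketch of that staging idea (names are illustrative):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Nothing touches the socket until you decide the data is still wanted.
void sendIfStillWanted(byte[] command, boolean stillWanted, OutputStream socketOut)
        throws IOException {
    ByteArrayOutputStream staging = new ByteArrayOutputStream();
    staging.write(command);         // held in memory only
    if (stillWanted) {
        staging.writeTo(socketOut); // copy the staged bytes to the real stream
    }                               // else: simply drop the staging buffer
}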
I'm not sure if it makes sense, but you can try:
class MyBufferedOutputStream extends java.io.BufferedOutputStream {
    public MyBufferedOutputStream(OutputStream out) {
        super(out);
    }

    /** throw away everything in the buffer without writing it */
    public synchronized void skip() {
        count = 0; // 'count' is BufferedOutputStream's protected fill pointer
    }
}
What does it mean to "skip" outputting data?
Once the data is in the buffer, there's no way to get it back or remove it. I suggest checking if you want to skip the data before you write it to the OutputStream. Either that, or have your own secondary buffer that you can modify at will.
This question doesn't make any sense. Throwing away pending requests will just make your application protocol problem worse. What happens to the guy who is waiting for the response to the request that got deleted? What happens to the functionality that that request was supposed to implement? You need to rethink all this from another point of view. If you have a single connection to a server that is executing request/response transactions for this client, the protocol is already sequential. You will have to synchronize on, e.g., the socket at the point of writing and flushing the request and reading the response, but you don't lose any performance by this, as the processing at the other end is sequentialized anyway. You don't need a 'waitingCommand' flag as well, just synchronization.
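In other words, something as small as this is enough (class and method names are illustrative):
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.net.Socket;

class CommandChannel {
    private final BufferedWriter out;
    private final BufferedReader in;

    CommandChannel(Socket socket) throws IOException {
        out = new BufferedWriter(new OutputStreamWriter(socket.getOutputStream()));
        in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    }

    // One synchronized method per transaction: the request and its response
    // stay paired no matter which thread (human event or automatic) calls it.
    public synchronized String transact(String command) throws IOException {
        out.write(command);
        out.newLine();
        out.flush();
        return in.readLine();
    }
}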
Since you control the data written to the OutputStream, just don't write the pieces you don't need. OutputStream, by contract, does not guarantee when data is actually written, so it doesn't make much sense for it to have a skip method.
The best you can do to "ignore" output data is not to write it in the first place.