jssc writeBytes stops working after a while on Linux - Java

I'm programming in Java to talk to a device on a COM port (connected to the PC through USB, with an RS232-to-USB cable in between). I've written a program that uses jssc, and it functioned correctly under Windows: it kept working for a long time even when nothing happened, as it should. Under Linux the program stops responding after two or three minutes, and I wonder why.
The run method is as follows:
public void run() {
    while (stayConnected) {
        try {
            serialPort.writeBytes(pollBuf);
            readResponse(false);
            Thread.sleep(400);
            serialPort.writeBytes(readEvents);
            readResponse(true);
        } catch (InterruptedException ie) {
            logger.error("Interrupted exception: " + ie.getMessage());
        } catch (SerialPortException spe) {
            logger.error("SerialPortException: " + spe.getMessage());
        }
    }
}
To find out where the program hangs I added log lines, and it turns out the last command to function correctly is the final call to readResponse(true), and the first to stop returning is serialPort.writeBytes(pollBuf).
Hoping it would solve the issue, I split the 400 ms sleep into two and placed the second sleep before serialPort.writeBytes(pollBuf). That doesn't help. Somehow the serialPort.writeBytes call just never returns, and it doesn't throw an exception either.
Does anyone have a guess as to what the failure might be? It's not the stayConnected boolean, as I never call the function that sets it to false.
Edit: I've just added a counter, and the program gets through the loop 283 and 285 times in two runs. That's pretty close, and both runs last around two minutes...

I'm having a very similar problem under Windows 7. I've deployed my software to a client's PC, and after about 5 to 6 hours the serial port can still be opened but can no longer be written to.
As in the example above, my code is similar:
String readPort(String command, int byteToRead) {
    String Input = null;
    if (opened) {
        try {
            serialPort.writeString(command);
            Input = serialPort.readString(byteToRead);
        } catch (SerialPortException ex) {
            System.out.println(ex);
        }
    }
    return Input;
}
The line of code that does not return is
serialPort.writeString(command);

I had exactly the same issue; the cause was another thread closing the port while the main one was still reading. I see your code snippet comes from a Runnable, so carefully check your multithreading management.
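If that is the cause here too, one remedy is to funnel every port operation through a single lock, so a close can never interleave with a write. This is only a minimal sketch under that assumption; the lock object and the method names are illustrative, not the poster's code (jssc's SerialPort does provide isOpened(), writeBytes() and closePort()):

private final Object portLock = new Object();

// All writers take the lock, so a write can never overlap a close.
private void poll(byte[] pollBuf) throws SerialPortException {
    synchronized (portLock) {
        if (serialPort != null && serialPort.isOpened()) {
            serialPort.writeBytes(pollBuf);
        }
    }
}

// The closing thread takes the same lock before shutting the port down.
private void closePort() throws SerialPortException {
    synchronized (portLock) {
        if (serialPort != null && serialPort.isOpened()) {
            serialPort.closePort();
            serialPort = null;
        }
    }
}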

Connection reset by peer or socket closed out of nowhere

So far I've used this site whenever I encountered a problem, and I've always found solutions, but this time I have no idea what's even happening.
I am working on a game based on a 1-vs-1 multiplayer mode. So far I have created a server and the client program.
My server creates a new thread with a socket for every client that connects. When the "New Game" button is pressed in the game, the thread searches for another thread that is currently looking for a new game. Once it finds one, it creates a separate thread that sends a message to both threads to signal that a game has started; each thread passes the message through its socket to the client program, which reacts accordingly.
Here is my code:
Thread:
public void run() {
    try {
        out = new ObjectOutputStream(socket.getOutputStream());
        in = new ObjectInputStream(socket.getInputStream());
        ServerNachricht inputLine, outputLine;
        LabyrinthProtocol prot = new LabyrinthProtocol();
        while (socket.isConnected()) {
ServerNachricht is a class that consists of a type (int), a sender (a player object) and a message (String).
When the thread gets a new-game message, the protocol changes the player's status value to "searching", checks whether another "searching" player exists, then changes both players' values to "playing" and returns a new ServerNachricht of type KAMPFBEGINN with the found player as sender.
After the protocol returns the outputLine, this is what the thread does:
if (outputLine.getArt() == ServerNachricht.KAMPFBEGINN) {
    System.out.println(outputLine.getSender().getSname() + " ist da");
    server.kampfbeginn(this, outputLine.getSender());
}
The sysout just verifies that the protocol has actually found another player and prints that player's name to be sure. So far, this has always worked.
Here is the part of the server that sets up a new game:
public void kampfbeginn(LabyrinthThread t, Spieler gegner) {
    KampfThread kampf = null;
    System.out.println(gegner.getSname() + " anerkannt");
    for (int i = 0; i < threads.size(); i++) {
        if (threads.get(i) != null) {
            System.out.println(threads.get(i).getSpieler().getSname());
            if (threads.get(i).getSpieler().getSname().equals(gegner.getSname())) {
                LabyrinthThread gegnert = threads.get(i);
                kampf = new KampfThread(t, gegnert);
                t.setKampf(kampf);
                gegnert.setKampf(kampf);
                break;
            }
        }
    }
}
This code searches through every existing thread (the server stores them in a vector) and checks whether that thread's connected player is the player returned by the protocol. Once the thread is found, both threads are handed to a newly created thread that stores them, and that new thread is in turn stored in both threads.
The new thread even verifies the connection with two sysouts:
public KampfThread(LabyrinthThread spieler1, LabyrinthThread spieler2) {
    super();
    this.spieler1 = spieler1;
    this.spieler2 = spieler2;
    System.out.println(spieler1.getSpieler().getSname() + "ist drin");
    System.out.println(spieler2.getSpieler().getSname() + "ist drin");
}
which I also get every time.
After both connections are established, that thread sends a message to both threads so that they will notify their programs to start:
case (ServerNachricht.KAMPFBEGINN):
    spieler1.ThreadNachricht(new ServerNachricht(ServerNachricht.KAMPFBEGINN, spieler2.getSpieler(), ""));
    spieler2.ThreadNachricht(new ServerNachricht(ServerNachricht.KAMPFBEGINN, spieler1.getSpieler(), ""));
    break;
which calls this method in the threads:
public void ThreadNachricht(ServerNachricht s) {
    if (socket.isConnected()) {
        try {
            out.writeObject(s);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
The strange thing is that this works absolutely perfectly about 80% of the time (both programs go into the "game started" mode), but sometimes it works for only one program, or even neither, and the server gets either a
Connection reset by peer
or a
Socket closed
error in
public void ThreadNachricht(ServerNachricht s) {
    if (socket.isConnected()) {
        try {
            out.writeObject(s);
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
in the out.writeObject(s); line. There is no line anywhere that closes anything (I've even taken every single close() out to make sure nothing can interfere), and there seems to be no pattern at all to when it works and when it doesn't (a failure closes the server's and the program's client socket, so the program is unable to continue when that happens). Is there any way I can guarantee that my program works, or is there an error I made? I am rather desperate, because I couldn't even run systematic tests to find a pattern: starting the program twice with exactly the same setup still causes it to work most of the time.
Edit: I literally just had a situation in which one player went into the new-game mode while the other stayed in the main menu (resulting in a Connection reset by peer: socket write error on the server) twice in a row, before it worked the third time without any problems in the same run. So I searched with both players, but only one went into the game screen (and the other got the error). I then pressed back to go to the main menu and tried the same again, with the same result. On the third try it worked: both players got into the game screen and started interacting with each other.
It was actually a rather funny error I made: my server kept the threads stored in its vector even after their sockets disconnected. So logging in with an account that had already been connected to the server since its last restart (I tend to keep the server running when I'm just testing cosmetic things) causes its
for (int i = 0; i < threads.size(); i++) {
    if (threads.get(i) != null) {
        System.out.println(threads.get(i).getSpieler().getSname());
        if (threads.get(i).getSpieler().getSname().equals(gegner.getSname())) {
loop (which determines the thread for the other player) to find an older, already closed thread instead of the one the other player is currently connected through.
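A minimal sketch of the kind of cleanup that prevents this, assuming the thread exposes its socket through a hypothetical getSocket() accessor (the snippets above don't show one):

// Drop entries whose sockets are already closed before matching players,
// so a stale thread from an earlier session can never be picked.
for (int i = threads.size() - 1; i >= 0; i--) {
    LabyrinthThread lt = threads.get(i);
    if (lt == null || lt.getSocket() == null || lt.getSocket().isClosed()) {
        threads.remove(i);
    }
}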
'Connection reset' usually means that you wrote to a connection that had already been closed by the peer: in other words, an application protocol error.
'Socket closed' means that you closed the socket and then continued to use it.
Neither of these comes 'out of nowhere'. Both indicate application bugs.
isConnected() is not an appropriate test. It doesn't magically become false when the peer disconnects. I'm not sure it becomes false even when you disconnect.
All this indicates nothing more than coding bugs. Post more of your code and I'll show you some more of them.
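By way of illustration, a sketch of the usual way to detect a peer disconnect with object streams: read until the stream reports end-of-file instead of polling isConnected(). The handle() call and the cleanup comments are placeholders, not the poster's code:

try {
    while (true) {
        ServerNachricht msg = (ServerNachricht) in.readObject();
        handle(msg); // placeholder for the protocol logic
    }
} catch (EOFException eof) {
    // The peer closed the connection: remove this thread from the
    // server's vector here, e.g. threads.remove(this).
} catch (IOException | ClassNotFoundException e) {
    // The connection broke or bad data arrived: clean up the same way.
}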

Java threaded socket connection timeouts

I have to make simultaneous TCP socket connections every x seconds to multiple machines, in order to get something like a status-update packet.
I use a Callable class, which creates a future task that connects to each machine, sends a query packet, and receives a reply that is returned to the main thread that creates all the callable objects.
My socket connection class is :
public class ClientConnect implements Callable<String> {
    Connection con = null;
    Statement st = null;
    ResultSet rs = null;
    String hostipp, hostnamee;

    ClientConnect(String hostname, String hostip) {
        hostnamee = hostname;
        hostipp = hostip;
    }

    @Override
    public String call() throws Exception {
        return GetData();
    }

    private String GetData() {
        Socket so = new Socket();
        SocketAddress sa = null;
        PrintWriter out = null;
        BufferedReader in = null;
        try {
            sa = new InetSocketAddress(InetAddress.getByName(hostipp), 2223);
        } catch (UnknownHostException e1) {
            e1.printStackTrace();
        }
        try {
            so.connect(sa, 10000);
            out = new PrintWriter(so.getOutputStream(), true);
            out.println("\1IDC_UPDATE\1");
            in = new BufferedReader(new InputStreamReader(so.getInputStream()));
            String[] response = in.readLine().split("\1");
            out.close(); in.close(); so.close(); so = null;
            try {
                Integer.parseInt(response[2]);
            } catch (NumberFormatException e) {
                System.out.println("Number format exception");
                return hostnamee + "|-1";
            }
            return hostnamee + "|" + response[2];
        } catch (IOException e) {
            try {
                if (out != null) out.close();
                if (in != null) in.close();
                so.close(); so = null;
                return hostnamee + "|-1";
            } catch (IOException e1) {
                // TODO Auto-generated catch block
                return hostnamee + "|-1";
            }
        }
    }
}
And this is the way I create a pool of threads in my main class:
private void StartThreadPool()
{
    ExecutorService pool = Executors.newFixedThreadPool(30);
    List<Future<String>> list = new ArrayList<Future<String>>();
    for (Map.Entry<String, String> entry : pc_nameip.entrySet())
    {
        Callable<String> worker = new ClientConnect(entry.getKey(), entry.getValue());
        Future<String> submit = pool.submit(worker);
        list.add(submit);
    }
    for (Future<String> future : list) {
        try {
            String threadresult;
            threadresult = future.get();
            //........ PROCESS DATA HERE!..........//
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
}
The pc_nameip map contains (hostname, hostip) values, and for every entry I create a ClientConnect thread object.
My problem is that when my list of machines contains, let's say, 10 PCs (most of which are not alive), I get a lot of timeout exceptions (on the alive PCs) even though my timeout limit is set to 10 seconds.
If I force the list to contain a single working PC, I have no problem.
The timeouts are pretty random; I have no clue what's causing them.
All machines are on a local network; the remote servers were also written by me (in C/C++) and have been working in another setup for more than two years without any problems.
Am I missing something, or could it be an OS network restriction problem?
I am testing this code on Windows XP SP3. Thanks in advance!
UPDATE:
After creating two new server machines, and keeping one that was getting a lot of timeouts, I have the following results:
For 100 thread runs over 20 minutes:
NEW_SERVER1: 99 successful connections / 1 timeout
NEW_SERVER2: 94 successful connections / 6 timeouts
OLD_SERVER: 57 successful connections / 43 timeouts
Other info:
- I experienced a JRE crash (EXCEPTION_ACCESS_VIOLATION (0xc0000005)) once and had to restart the application.
- I noticed that while the app was running, my network connection was struggling as I was browsing the internet. I have no idea if this is expected, but I think having at most 15 threads is not that much.
So, first of all, my old server had some kind of problem. No idea what it was, since my new servers were created from the same OS image.
Secondly, although the timeout percentage has dropped dramatically, I still think it is unusual to get even one timeout in a small LAN like ours. But this could be a problem in the server application.
Finally, my view is that, apart from the old server's problem (I still cannot believe I lost so much time on that!), there must be either a server app bug or a JDK-related bug (since I experienced that JRE crash).
p.s. I use Eclipse as my IDE and my JRE is the latest.
If any of the above rings any bells to you, please comment.
Thank you.
-----EDIT-----
Could it be that PrintWriter and/or BufferedReader are not actually thread-safe?!
----NEW EDIT 09 Sep 2013----
After re-reading all the comments, and thanks to @Gray and his comment:
When you run multiple servers, do the first couple work and the rest of them time out? Might be interesting to put a small sleep in your fork loop (like 10 or 100 ms) to see if it works that way.
I rearranged the list of hosts/IPs and got some really strange results.
It seems that if an alive host is placed at the top of the list, and is thus the first to start a socket connection, it has no problem connecting and receiving packets, without any delay or timeout.
On the contrary, if an alive host is placed at the bottom of the list, with several dead hosts before it, it just takes too long to connect, and with my previous timeout of 10 seconds it failed to connect. But after changing the timeout to 60 seconds (thanks to @EJP) I realised that no timeouts are actually occurring!
It just takes too long to connect (more than 20 seconds on some occasions).
Something is blocking new socket connections, and it isn't that the hosts or network are too busy to respond.
I have some debug data here, if you would like to take a look:
http://pastebin.com/2m8jDwKL
You could simply check for availability before you connect to the socket. There is an answer that provides a somewhat hackish workaround: https://stackoverflow.com/a/10145643/1809463
Process p1 = java.lang.Runtime.getRuntime().exec("ping -c 1 " + ip);
int returnVal = p1.waitFor();
boolean reachable = (returnVal == 0);
by jayunit100
It should work on Unix and Windows, since ping is a common program.
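If you take that route, the check could be wired into GetData() before the connect. This is only a sketch; note that "-c 1" is the Unix flag (Windows ping uses "-n 1" instead):

// Hypothetical pre-check before so.connect(sa, 10000): skip hosts that
// do not answer a single ping instead of burning the 10-second connect
// timeout on them.
try {
    Process p1 = Runtime.getRuntime().exec("ping -c 1 " + hostipp);
    if (p1.waitFor() != 0) {
        return hostnamee + "|-1"; // host unreachable, skip the connect
    }
} catch (InterruptedException | IOException e) {
    return hostnamee + "|-1";
}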
My problem is that when my list of machines contains, let's say, 10 PCs (most of which are not alive), I get a lot of timeout exceptions (on the alive PCs) even though my timeout limit is set to 10 seconds.
So, as I understand the problem, if you have (for example) 10 PCs in your map and 1 is alive while the other 9 are not online, all 10 connections time out. If you put just the 1 alive PC in the map, it works fine.
This points to some sort of concurrency problem, but I can't see it. I would have thought there was some sort of shared data that was not being locked or something. I see your test code is using Statement and ResultSet. Maybe there is a database connection that is being shared without locking? Can you try just returning the result string and printing it out?
Less likely is some sort of network or firewall configuration issue, but the idea that one failed connection would cause another to fail is just strange. Maybe try running your program on one of the servers, or from another computer?
If I try your test code, it seems to work fine. Here's the source code for my test class. It has no problems contacting a combination of online and offline hosts.
Lastly, some quick comments about your code:
You should close the streams, readers, and sockets in a finally block; check my test class for a better pattern there (a sketch follows below).
You should return a small Result class instead of passing back a String that has to be parsed.
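A minimal sketch of that cleanup pattern, reusing the names from GetData() above (on Java 7+, try-with-resources would be shorter still):

Socket so = new Socket();
PrintWriter out = null;
BufferedReader in = null;
try {
    so.connect(sa, 10000);
    out = new PrintWriter(so.getOutputStream(), true);
    in = new BufferedReader(new InputStreamReader(so.getInputStream()));
    // ... protocol exchange as before ...
} catch (IOException e) {
    // log and fall through; the finally block still runs
} finally {
    if (out != null) out.close(); // PrintWriter.close() never throws
    if (in != null) try { in.close(); } catch (IOException ignored) {}
    try { so.close(); } catch (IOException ignored) {}
}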
Hope this helps.
After a lot of reading and experimentation, I will have to answer my own question (if I am allowed to, of course).
Java just can't handle many concurrent socket connections without adding a big performance overhead, at least on a Core2Duo/4GB RAM/Windows XP machine.
Creating multiple concurrent socket connections to remote hosts (using, of course, the code I posted) creates some kind of resource bottleneck or blocking situation which I still cannot pin down.
If you try to connect to 20 hosts simultaneously, and a lot of them are disconnected, then you cannot guarantee a fast connection to the alive ones.
You will get connected, but it could be after 20-25 seconds, meaning you'd have to set the socket timeout to something like 60 seconds (not acceptable for my application).
If an alive host is lucky enough to start its connection attempt first (bearing in mind that concurrency is not absolute; the for loop still submits sequentially), it will probably connect very fast and get a response.
If it is unlucky, the socket.connect() method will block for some time, depending on how many hosts before it will eventually time out.
After adding a small sleep (100 ms) between the pool.submit(worker) calls (sketched below), I realised that it makes some difference: I connect faster to the "unlucky" hosts. But still, if the list of dead hosts grows, the results are almost the same.
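For reference, this is the staggering just described, as a sketch against the StartThreadPool() loop above (the 100 ms figure comes from the experiment and is not tuned):

for (Map.Entry<String, String> entry : pc_nameip.entrySet()) {
    list.add(pool.submit(new ClientConnect(entry.getKey(), entry.getValue())));
    try {
        Thread.sleep(100); // stagger connect attempts so they don't all start at once
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}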
If I edit my host list and place a previously "unlucky" host at the top (before the dead hosts), all problems disappear...
So, for some reason the socket.connect() method creates a form of bottleneck when many of the hosts to connect to are not alive. Be it a JVM problem, an OS limitation, or bad coding on my side, I have no clue...
I will try a different coding approach and hopefully tomorrow I will post some feedback.
p.s. This answer made me think of my problem :
https://stackoverflow.com/a/4351360/2025271

android socket gets stuck after connection

I am trying to make an app that scans all the IPs in range for a specific open port (5050) and, if it is open, writes a message to the log.
Here's the code:
public void run() {
    for (int i = 0; i < 256; i++) {
        Log.d("NetworkScanner", "attempting to contact 192.168.1." + i);
        try {
            Socket socket = new Socket(searchIP + i, 5050);
            possibleClients.add(searchIP);
            socket.close();
            Log.d("NetworkScanner", " 192.168.1." + i + " YEAAAHHH");
        } catch (UnknownHostException e) {
            Log.d("NetworkScanner", " 192.168.1." + i + " unavailable");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
EDIT: Here's a new problem: even if a host is found online but without the port open, the scanning process (the for loop) is stuck for a long time before moving to the next address. Scanning each host also takes considerable time!
Phew! The final solution was to create a Socket object with the default constructor, then build the InetSocketAddress object for the host, and use the socket API's connect(SocketAddress, timeout) method with the timeout in milliseconds (approx. 300 ms). That scans every IP in just 300 ms or less (less than 200 ms may give errors), and multithreading to scan in parallel makes it as fast as 5 seconds to scan all the IPs in range.
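A minimal sketch of that approach, under the assumption that searchIP is the "192.168.1." prefix used in the log lines:

// Connect with an explicit timeout instead of the blocking
// Socket(host, port) constructor; 300 ms is the figure from the text.
Socket socket = new Socket();
try {
    socket.connect(new InetSocketAddress(searchIP + i, 5050), 300);
    possibleClients.add(searchIP + i);
    Log.d("NetworkScanner", " 192.168.1." + i + " YEAAAHHH");
} catch (IOException e) {
    // refused or timed out: host is not offering the service
} finally {
    try { socket.close(); } catch (IOException ignored) {}
}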
You are breaking out of the loop when no exception is thrown.
You need to remove the break;
To address your new problem:
Of course it's slow. What did you expect? You are trying to establish a connection to each IP in your subnet, which takes time. It seems you are only trying to figure out what devices are available on the network, so you might be able to decrease the time a little by looking at this answer. It uses the built-in isReachable method, which accepts a timeout value. It will still take some time, but not that much.
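A sketch of that pre-check (with the caveat that isReachable uses ICMP only when the platform permits it, falling back to a TCP echo probe on port 7, so results can vary by device and network):

// getByName/isReachable throw checked exceptions; handle as appropriate.
InetAddress host = InetAddress.getByName("192.168.1." + i);
if (host.isReachable(300)) {
    // only now attempt the (slower) port-5050 connect
}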
Remove the "break;"...it stops the iteration.

how to make a jar file always running

I have a jar file: myServerSide.jar.
This jar takes requests from client apps, processes each one in a thread, and renders a response.
I've put my jar on Linux, but I want it to be ALWAYS running.
If I do java -jar myServerSide.jar &, it stops after a while for no apparent reason.
I also tried daemon -- java -jar myServerSide.jar &; it also stops.
Do you know the reason why?
What should I do so that it stays running and never exits? (Is it necessary to make it a service?)
Thanks for your help.
(I'm hosting my jar on Linode (a VPS), if that's relevant.)
This is the code for my server:
try
{
    FTLogger.getInstance().logMessage(Level.FINE, "S: Connecting...");
    ServerSocket serverSocket = new ServerSocket(SERVERPORT);
    while (true)
    {
        Socket client = serverSocket.accept();
        Thread serverThread = new Thread(new ServerThread(client));
        serverThread.start();
    }
}
catch (Exception e)
{
    FTLogger.getInstance().logMessage(Level.SEVERE, "S: Error getting connection", e);
}
In my logs I don't see any error, and while it's running the jar works as it should.
(If you're sure it's something in my code, should I open another question and discard this one?)
If I do java -jar myServerSide.jar &, it stops after a while for no apparent reason
The reason it stops could be (probably is) in your code.
Debugging it should tell you why it stops.
Assuming you don't have access to screen, you can try nohup java -jar myServerSide.jar > log.out &
If a java.lang.Error occurs, it won't be caught by
catch (Exception e) {
    ...
}
Only
catch (Throwable t) {
    ...
}
would do it.
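A sketch that combines this with the accept loop from the question, so a single bad connection (or even an Error) can no longer end the loop; whether you really want to keep running after an Error is debatable:

ServerSocket serverSocket = new ServerSocket(SERVERPORT); // may throw IOException; handle or declare it
while (true)
{
    try
    {
        Socket client = serverSocket.accept();
        new Thread(new ServerThread(client)).start();
    }
    catch (Throwable t)
    {
        // one failed connection (or Error) is logged, not fatal
        FTLogger.getInstance().logMessage(Level.SEVERE, "S: Error in accept loop", t);
    }
}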
I think you should ensure this programmatically with something like an infinite loop that waits for requests from clients and delegates them to separate threads for processing:
// this is very high-level; obviously an exit point from this loop should be provided
while (true) {
    Request r = waitForRequest();
    processRequestInNewThread(r);
}
Or is there something more you need that I'm missing? Maybe some sample code from your implementation of request handling would help.
You should give us some code. The first thing that pops into my mind is that you need to make sure the method that accepts connections from clients runs in an infinite loop. For example:
while (true) {
    acceptAndParseRequest();
}
If you launch a Java application and embed your code in a loop:
while (true) {
    ...
}
It will never stop; the only reason it would stop is that an exception is thrown (do you consume resources inside the while?).
If it really stops, try to understand what the problem is this way:
while (true) {
    try {
        // ... your code ....
    } catch (Throwable t) {
        System.out.println("This is my problem:");
        t.printStackTrace();
    }
}
Hope it helps.

Java TCP/IP Server Closing Connections Improperly

I've created an MMO for Android and use a Java server with TCP/IP sockets. Everything generally works fine, but after about a day of clients logging on and off, my network becomes extremely laggy -- even if no clients are connected. NETSTAT shows no lingering connections, but there is obviously something terribly wrong going on.
If I do a full reboot everything magically is fine again, but this isn't a tenable solution for the long-term. This is what my disconnect method looks like (on both ends):
public final void disconnect()
{
    Alive = false;
    Log.write("Disconnecting " + _socket.getRemoteSocketAddress());
    try
    {
        _socket.shutdownInput();
    }
    catch (final Exception e)
    {
        Log.write(e);
    }
    try
    {
        _socket.shutdownOutput();
    }
    catch (final Exception e)
    {
        Log.write(e);
    }
    try
    {
        _input.close();
    }
    catch (final Exception e)
    {
        Log.write(e);
    }
    try
    {
        _output.close();
    }
    catch (final Exception e)
    {
        Log.write(e);
    }
    try
    {
        _socket.close();
    }
    catch (final Exception e)
    {
        Log.write(e);
    }
}
_input and _output are a BufferedInputStream and a BufferedOutputStream spawned from the socket. According to the documentation, calling shutdownInput() and shutdownOutput() shouldn't be necessary, but I'm throwing everything I possibly can at this.
I instantiate the sockets with default settings -- I'm not touching soLinger, keepAlive, noDelay or anything like that. I do not have any timeouts set on send/receive. I've tried using Wireshark, but it reveals nothing unusual, just like NETSTAT.
I'm pretty desperate for answers on this. I've put a lot of effort into this project and am frustrated by what appears to be a serious hidden flaw in Java's default TCP implementation.
Get rid of shutdownInput() and shutdownOutput() and all the closes except the close of the BufferedOutputStream, plus a subsequent close on the socket itself in a finally block as belt and braces. You are shutting down and closing everything else before the output stream, which prevents it from flushing. Closing the output stream flushes it and closes the socket. That's all you need.
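A sketch of the disconnect method rewritten along those lines, keeping the original field names (the finally-block close is the belt-and-braces part):

public final void disconnect()
{
    Alive = false;
    Log.write("Disconnecting " + _socket.getRemoteSocketAddress());
    try
    {
        _output.close(); // flushes the buffer, then closes the underlying socket
    }
    catch (final Exception e)
    {
        Log.write(e);
    }
    finally
    {
        try { _socket.close(); } catch (final Exception ignored) {}
    }
}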
OP here, unable to comment on original post.
Restarting the server process does not appear to resolve the issue. The network remains very "laggy" even several minutes after shutting down the server entirely.
By "laggy" I mean the connection becomes extremely slow with both up and down traffic. Trying to load websites, or upload to my FTP, is painfully slow, as if I were on a 14.4k modem (I'm on 15 Mbps fiber). Internet speed tests don't even work when it is in this state -- I get an error about not finding the file when the websites eventually load.
All of this instantly clears up after a reboot, and only after a reboot.
I modified my disconnect method as EJP suggested, but the problem persists.
Server runs on a Windows 7 installation, latest version of Java / Java SDK. The server has 16 GB of RAM, although it's possible I'm not allocating it properly for the JVM to use fully. No stray threads or processes appear to be present. I'll see what JVisualVM says. – jysend
Nothing unusual in JVisualVM -- 10 MB heap, 50% CPU use, 3160 objects (expected), 27 live threads out of 437 started. Server has been running for about 18 hours; loading CNN's front page takes about a minute, and the usual speed test I use (first hit when googling Speed Test) won't even load. NETSTAT shows no lingering connections. Ran a full up-to-date antivirus scan. The server has run 24/7 in the past without any issues -- it is only since I started running this Java server on it that this started to happen.
