android socket gets stuck after connection - java

I am trying to make an app that scans all the IPs in range for a specific open port (5050) and, if the port is open, writes a message to the log.
Here's the code:
public void run() {
    for (int i = 0; i < 256; i++) {
        Log.d("NetworkScanner", "attempting to contact 192.168.1." + i);
        try {
            // the Socket constructor blocks until the connection succeeds or fails
            Socket socket = new Socket(searchIP + i, 5050);
            possibleClients.add(searchIP + i);
            socket.close();
            Log.d("NetworkScanner", " 192.168.1." + i + " YEAAAHHH");
        } catch (UnknownHostException e) {
            Log.d("NetworkScanner", " 192.168.1." + i + " unavailable");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
EDIT: here's a new problem: even if a host is online but the port is not open, the scanning process (for loop) gets stuck for a long time before moving on to the next address, and scanning each host takes considerable time!
Phew, the final solution was to create a Socket with the default constructor, build an InetSocketAddress for the host, and then call the socket's connect(address, timeout) method with the timeout in milliseconds (about 300 ms). That checks each IP in 300 ms or less (values below about 200 ms may give errors), and with multithreading to scan in parallel the whole range can be scanned in around 5 seconds. A sketch of this follows below.
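For reference, here is what that looks like in code; port 5050 and the 192.168.1.x prefix come from the question, the 300 ms timeout follows the description above, and the class name is just illustrative:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

public class PortScanSketch {
    // Scans 192.168.1.0-255 for port 5050 using a short connect timeout.
    public static List<String> scan() {
        List<String> possibleClients = new ArrayList<String>();
        String searchIP = "192.168.1.";
        for (int i = 0; i < 256; i++) {
            String host = searchIP + i;
            Socket socket = new Socket();   // default constructor: not connected yet
            try {
                // connect(SocketAddress, timeout) fails fast instead of waiting for the OS default
                socket.connect(new InetSocketAddress(host, 5050), 300);
                possibleClients.add(host);
            } catch (IOException e) {
                // closed port, dead host, or timeout: move on to the next address
            } finally {
                try { socket.close(); } catch (IOException ignored) {}
            }
        }
        return possibleClients;
    }
}

Running several such workers in parallel (as in the threaded answer further down) is what brings the full /24 scan down to a few seconds.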

You are breaking out of the loop when no Exception is thrown.
You need to remove break;
To address your new problem:
Of course it's slow. You are trying to establish a connection to each IP in your subnet, which takes time. It seems you are only trying to figure out which devices are available on the network, so you might be able to reduce the time a little by looking at this answer. He is using the built-in isReachable method, which accepts a timeout value. It will still take some time, but not nearly as much; a sketch of that approach follows.
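For illustration, a minimal sketch of the isReachable approach; the 300 ms timeout is an assumption, not a value taken from the linked answer:

import java.io.IOException;
import java.net.InetAddress;

public class ReachableScanSketch {
    public static void scan() throws IOException {
        for (int i = 0; i < 256; i++) {
            InetAddress addr = InetAddress.getByName("192.168.1." + i);
            // isReachable uses ICMP echo if permitted, otherwise a TCP probe on the echo port
            if (addr.isReachable(300)) {
                System.out.println(addr.getHostAddress() + " is reachable");
            }
        }
    }
}

Note that this only tells you the host is up, not that port 5050 is open, so you would still need a connect() check for the port itself.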

Remove the "break;"...it stops the iteration.

Related

java: Good socket timeout for LAN connections?

I have a server (a Java app running on my laptop) and a client (a Java app running on my Android smartphone).
I'm trying to automatically find the IP address of the server from my client.
Right now I just loop over all IPs in the same LAN (192.168.1.0 to 192.168.1.255), and if the server (which is listening on a custom port) accepts the connection, then I've found the IP.
The problem is, if I set the connection timeout to less than 200 ms, most of the time the client can't find the server.
So the question is, how can I implement a better (faster) way to find the server IP?
I have tried the Java InetAddress.isReachable() method, but the server always seems unreachable...
And, if there isn't a better way, what do you think is a good timeout value for local (LAN) socket connections?
Just for others... I just found a very good way to find the server IP in LESS THAN HALF A SECOND!
Here's my solution:
String partialIp = "192.168.1.";
int port = 123;
AtomicInteger counter = new AtomicInteger();   // shared by all worker threads, so it must be atomic
volatile boolean found;
volatile String ip;

Runnable tryNextIp = new Runnable() {
    @Override
    public void run() {
        int myIp = counter.getAndIncrement();
        String targetIpTemp = partialIp + myIp;
        Socket socketTemp = new Socket();
        try {
            socketTemp.connect(new InetSocketAddress(targetIpTemp, port), 6000);
            socketTemp.close();
            ip = targetIpTemp;
            found = true;
        } catch (IOException e) {
            try {
                socketTemp.close();
            } catch (IOException e1) {}
        }
    }
};

String findIp() {
    counter.set(0);
    found = false;
    ExecutorService executorService = Executors.newFixedThreadPool(256);
    for (int i = 0; i < 256; i++) {
        if (found)
            break;
        executorService.execute(tryNextIp);
    }
    executorService.shutdown();
    try {
        // poll in 200 ms slices so we can return as soon as one worker succeeds
        while (!found && !executorService.isTerminated())
            executorService.awaitTermination(200, TimeUnit.MILLISECONDS);
    } catch (InterruptedException e) {}
    if (found)
        return ip;
    else
        return null;
}
A good timeout value is the time you're willing to wait for your server to reply, given typical network conditions and server response times. You need to pick a reasonable value, independently of your application here -- it is up to you to decide that if the server does not respond in X amount of time, then it is safe to assume it is not there.
To speed up your client, consider creating multiple threads to query multiple servers at once. Executors.newFixedThreadPool() will make this trivial for you.
However, you may want to consider other alternatives that don't require a full network scan; for example:
Just let the user/administrator specify the IP address (Why do you need to discover the server IP? Do you not know what machine you set up your server on? Why not just configure the server to have a static LAN IP?)
If you truly do need service discovery, technologies like NSD/Zeroconf/Bonjour allow for service advertising and discovery.
Even something very basic may suit your needs, e.g. send a broadcast UDP packet from the client and let the server respond, or have the server periodically broadcast announcements (a rough sketch follows).
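As a rough illustration of the broadcast idea; the port and the probe/reply strings here are placeholders, not anything from the question:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class DiscoverySketch {

    // Client side: broadcast a probe and wait briefly for any server to answer.
    public static InetAddress discoverServer() throws Exception {
        DatagramSocket socket = new DatagramSocket();
        try {
            socket.setBroadcast(true);
            socket.setSoTimeout(500);                        // don't wait forever for a reply
            byte[] probe = "WHO_IS_SERVER".getBytes("UTF-8");
            socket.send(new DatagramPacket(probe, probe.length,
                    InetAddress.getByName("255.255.255.255"), 5050));
            byte[] buf = new byte[64];
            DatagramPacket reply = new DatagramPacket(buf, buf.length);
            socket.receive(reply);                           // SocketTimeoutException if nobody answers
            return reply.getAddress();                       // the server's IP
        } finally {
            socket.close();
        }
    }

    // Server side: answer every probe so clients learn our address.
    public static void answerProbes() throws Exception {
        DatagramSocket socket = new DatagramSocket(5050);
        byte[] buf = new byte[64];
        while (true) {
            DatagramPacket probe = new DatagramPacket(buf, buf.length);
            socket.receive(probe);
            byte[] reply = "I_AM_SERVER".getBytes("UTF-8");
            socket.send(new DatagramPacket(reply, reply.length,
                    probe.getAddress(), probe.getPort()));
        }
    }
}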
What the socket timeout should be depends entirely on the expected service time of the request. Naively you could find the average service time and use double that for the timeout. If you want to get more accurate, you would need to plot the statistical distribution of service times, determine the standard deviation, and use the average plus three or even four times the standard deviation as the timeout, to make sure you don't get false-positive timeouts but you do detect failures within a reasonable time. Ultimately it depends on just how trigger-happy you want to be.
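As a tiny worked example of that rule of thumb (the sample service times below are invented):

public class TimeoutEstimate {
    public static void main(String[] args) {
        // hypothetical measured service times in milliseconds
        double[] samplesMs = {42, 55, 48, 61, 50, 47, 53, 58};

        double mean = 0;
        for (double s : samplesMs) mean += s;
        mean /= samplesMs.length;

        double variance = 0;
        for (double s : samplesMs) variance += (s - mean) * (s - mean);
        variance /= samplesMs.length;
        double stdDev = Math.sqrt(variance);

        // timeout = mean + 3 standard deviations, as suggested above
        long timeoutMs = Math.round(mean + 3 * stdDev);
        System.out.println("mean=" + mean + " ms, stddev=" + stdDev
                + " ms, timeout=" + timeoutMs + " ms");
    }
}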

Android - multithread TCP connection

I've been searching for an answer to my problem, but none of the solutions so far have helped me solve it. I'm working on an app that communicates with another device that works as a server. The app sends queries to the server and receives appropriate responses to dynamically create fragments.
In the first implementation the app sent the query and then waited to receive the answer in a single thread. But that solution wasn't satisfactory, since the app did not receive any feedback from the server. The server admin said he was receiving the queries, but he hinted that the device was sending the answer back too fast and that the app probably wasn't listening yet by the time the answer arrived.
So what I am trying to achieve is to create separate threads: one for listening and one for sending the query. The listening one would start before we send anything to the server, to ensure the app does not miss the server's response.
Implementing this so far hasn't been successful. I've tried writing and running separate Runnable classes and AsyncTasks, but the listener never received an answer, and at some points one of the threads didn't even execute. Here is the code for the AsyncTask listener:
@Override
protected String doInBackground(String... params) {
    int bufferLength = 28;
    String masterIP = "192.168.1.100";
    try {
        Log.i("TCPQuery", "Listening for ReActor answers ...");
        Socket tcpSocket = new Socket();
        SocketAddress socketAddress = new InetSocketAddress(masterIP, 50001);
        try {
            tcpSocket.connect(socketAddress);
            Log.i("TCPQuery", "Is socket connected: " + tcpSocket.isConnected());
        } catch (IOException e) {
            e.printStackTrace();
        }
        while (true) {
            Log.i("TCPQuery", "Listening ...");
            try {
                Log.i("TCPQuery", "Waiting for ReActor response ...");
                byte[] buffer = new byte[bufferLength];
                tcpSocket.getInputStream().read(buffer);
                Log.i("TCPQuery", "Received message " + Arrays.toString(buffer) + " from ReActor.");
            } catch (Exception e) {
                e.printStackTrace();
                Log.e("TCPQuery", "An error occurred receiving the message.");
            }
        }
    } catch (Exception e) {
        Log.e("TCP", "Error", e);
    }
    return "";
}
And this is how the tasks are called:
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
    listener.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR, "");
    sender.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR, "");
} else {
    // executeOnExecutor() is not available before API 11, so fall back to execute()
    listener.execute();
    sender.execute();
}
How exactly would you approach this problem? If this code is not sufficient I would be glad to post more.
This is because Android's AsyncTasks all run on a single serial worker thread by default, no matter how many task objects you create. So if you really want two threads running at the same time, I suggest you use the standard java.util.concurrent tools rather than AsyncTask. As explained in the documentation:
AsyncTask is designed to be a helper class around Thread and Handler and does not constitute a generic threading framework. AsyncTasks should ideally be used for short operations (a few seconds at the most.) If you need to keep threads running for long periods of time, it is highly recommended you use the various APIs provided by the java.util.concurrent package such as Executor, ThreadPoolExecutor and FutureTask.
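A minimal sketch of that suggestion, keeping the long-running listener on a plain single-thread executor instead of an AsyncTask. The IP, port, and 28-byte buffer are taken from the question; the class name and the use of System.out instead of Log (for brevity, so the sketch runs as plain Java) are illustrative:

import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ListenerSketch {
    public static void main(String[] args) {
        ExecutorService listenerThread = Executors.newSingleThreadExecutor();
        listenerThread.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    Socket socket = new Socket();
                    socket.connect(new InetSocketAddress("192.168.1.100", 50001), 5000);
                    InputStream in = socket.getInputStream();
                    byte[] buffer = new byte[28];
                    int read;
                    // read() returns the number of bytes actually read, or -1 at end of stream
                    while ((read = in.read(buffer)) != -1) {
                        System.out.println("Received " + read + " bytes: "
                                + Arrays.toString(Arrays.copyOf(buffer, read)));
                    }
                    socket.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        // ... start the sender on another thread only once the listener is connected ...
        listenerThread.shutdown();
    }
}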
Look, this is a TCP connection, so you don't need to worry about data loss. It is a point-to-point connection, and it never signals end of stream (-1) until the other side closes. What you do have to care about is the read logic, because you cannot assume the whole message arrives in a single read(): the call blocks until at least some data is available and may return fewer bytes than your buffer size, depending on what the network has delivered so far (and on an Android device the available stream can vary with network conditions). So you have two options:
1) Make your buffer size dynamic. First check the available input stream size using is.available() and size your buffer accordingly; if the available size is zero, sleep for a short time and then check again.
2) Set a read timeout on the socket (setSoTimeout). It reads whatever stream data is available and waits up to the timeout for more; if no data arrives within the timeout period, it throws a timeout exception.
Try changing your code along those lines; a sketch follows.
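For illustration, here is a sketch of option 2 combined with a read loop that accumulates bytes until the full 28-byte message from the question has arrived; the 5-second timeout is an assumed value:

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

public class FixedLengthReadSketch {
    // Reads exactly 'length' bytes, or throws if the peer closes or the read timeout expires.
    public static byte[] readFully(Socket socket, int length) throws IOException {
        socket.setSoTimeout(5000);              // any single read() waits at most 5 s
        InputStream in = socket.getInputStream();
        byte[] buffer = new byte[length];
        int offset = 0;
        while (offset < length) {
            int read = in.read(buffer, offset, length - offset);
            if (read == -1) {
                throw new IOException("Connection closed after " + offset + " bytes");
            }
            offset += read;                     // accumulate partial reads
        }
        return buffer;
    }
}

In the question's doInBackground() this would replace the single unchecked read() call, e.g. byte[] msg = readFully(tcpSocket, bufferLength);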

Java threaded socket connection timeouts

I have to make simultaneous tcp socket connections every x seconds to multiple machines, in order to get something like a status update packet.
I use a Callable thread class, which creates a future task that connects to each machine, sends a query packet, and receives a reply which is returned to the main thread that creates all the callable objects.
My socket connection class is :
public class ClientConnect implements Callable<String> {
    Connection con = null;
    Statement st = null;
    ResultSet rs = null;
    String hostipp, hostnamee;

    ClientConnect(String hostname, String hostip) {
        hostnamee = hostname;
        hostipp = hostip;
    }

    @Override
    public String call() throws Exception {
        return GetData();
    }

    private String GetData() {
        Socket so = new Socket();
        SocketAddress sa = null;
        PrintWriter out = null;
        BufferedReader in = null;
        try {
            sa = new InetSocketAddress(InetAddress.getByName(hostipp), 2223);
        } catch (UnknownHostException e1) {
            e1.printStackTrace();
        }
        try {
            so.connect(sa, 10000);
            out = new PrintWriter(so.getOutputStream(), true);
            out.println("\1IDC_UPDATE\1");
            in = new BufferedReader(new InputStreamReader(so.getInputStream()));
            String[] response = in.readLine().split("\1");
            out.close(); in.close(); so.close(); so = null;
            try {
                Integer.parseInt(response[2]);
            } catch (NumberFormatException e) {
                System.out.println("Number format exception");
                return hostnamee + "|-1";
            }
            return hostnamee + "|" + response[2];
        } catch (IOException e) {
            try {
                if (out != null) out.close();
                if (in != null) in.close();
                so.close(); so = null;
                return hostnamee + "|-1";
            } catch (IOException e1) {
                // TODO Auto-generated catch block
                return hostnamee + "|-1";
            }
        }
    }
}
And this is how I create a pool of threads in my main class:
private void StartThreadPool() {
    ExecutorService pool = Executors.newFixedThreadPool(30);
    List<Future<String>> list = new ArrayList<Future<String>>();
    for (Map.Entry<String, String> entry : pc_nameip.entrySet()) {
        Callable<String> worker = new ClientConnect(entry.getKey(), entry.getValue());
        Future<String> submit = pool.submit(worker);
        list.add(submit);
    }
    for (Future<String> future : list) {
        try {
            String threadresult;
            threadresult = future.get();
            //........ PROCESS DATA HERE!..........//
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
}
The pc_nameip map contains (hostname, hostip) values, and for every entry I create a ClientConnect thread object.
My problem is that when my list of machines contains, let's say, 10 PCs (most of which are not alive), I get a lot of timeout exceptions (on the alive PCs) even though my timeout limit is set to 10 seconds.
If I force the list to contain a single working PC, I have no problem.
The timeouts are pretty random, and I have no clue what's causing them.
All machines are on a local network; the remote servers are written by me as well (in C/C++) and have been working in another setup for more than 2 years without any problems.
Am I missing something, or could it be an OS network restriction problem?
I am testing this code on Windows XP SP3. Thanks in advance!
UPDATE:
After creating two new server machines, and keeping one that was getting a lot of timeouts, I have the following results:
For 100 thread runs over 20 minutes:
NEW_SERVER1: 99 successful connections / 1 timeout
NEW_SERVER2: 94 successful connections / 6 timeouts
OLD_SERVER: 57 successful connections / 43 timeouts
Other info:
- I experienced a JRE crash (EXCEPTION_ACCESS_VIOLATION (0xc0000005)) once and had to restart the application.
- I noticed that while the app was running, my network connection was struggling as I was browsing the internet. I have no idea if this is expected, but I think having at most 15 threads is not that much.
So, first of all, my old server had some kind of problem. No idea what it was, since my new servers were created from the same OS image.
Secondly, although the timeout percentage has dropped dramatically, I still think it is uncommon to get even one timeout in a small LAN like ours. But this could be a problem in the server application.
Finally, my point of view is that, apart from the old server's problem (I still cannot believe I lost so much time on that!), there must be either a server app bug or a JDK-related bug (since I experienced that JRE crash).
p.s. I use Eclipse as my IDE and my JRE is the latest.
If any of the above rings any bells, please comment.
Thank you.
-----EDIT-----
Could it be that PrintWriter and/or BufferedReader are not actually thread safe????!!!?
----NEW EDIT 09 Sep 2013----
After re-reading all the comments, and thanks to @Gray and his comment:
When you run multiple servers does the first couple work and the rest of them timeout? Might be interesting to put a small sleep in your fork loop (like 10 or 100ms) to see if it works that way.
I rearranged the tree list of the hosts/IPs and got some really strange results.
It seems that if an alive host is placed at the top of the list, and is thus the first to start a socket connection, it has no problem connecting and receives packets without any delay or timeout.
On the contrary, if an alive host is placed at the bottom of the list, with several dead hosts before it, it just takes too long to connect, and with my previous timeout of 10 secs it failed to connect. But after changing the timeout to 60 seconds (thanks to @EJP) I realised that no timeouts are occurring!
It just takes too long to connect (more than 20 seconds on some occasions).
Something is blocking new socket connections, and it isn't that the hosts or the network are too busy to respond.
I have some debug data here, if you would like to take a look:
http://pastebin.com/2m8jDwKL
You could simply check for availability before you connect to the socket. There is an answer that provides a somewhat hackish workaround: https://stackoverflow.com/a/10145643/1809463
Process p1 = java.lang.Runtime.getRuntime().exec("ping -c 1 " + ip);
int returnVal = p1.waitFor();
boolean reachable = (returnVal==0);
by jayunit100
It should work on Unix-like systems as written; on Windows the count flag is -n rather than -c, but ping itself is available everywhere.
My problem is that when my list of machines contains lets say 10 pcs (which most of them are not alive), i get a lot of timeout exceptions (in alive pcs) even though my timeout limit is set to 10 seconds.
So as I understand the problem, if you have (for example) 10 PCs in your map and 1 is alive and the other 9 are not online, all 10 connections time out. If you just put the 1 alive PC in the map, it shows up as fine.
This points to some sort of concurrency problem but I can't see it. I would have thought that there was some sort of shared data that was not being locked or something. I see your test code is using Statement and ResultSet. Maybe there is a database connection that is being shared without locking or something? Can you try just returning the result string and printing it out?
Less likely is some sort of network or firewall configuration but the idea that one failed connection would cause another to fail is just strange. Maybe try running your program on one of the servers or from another computer?
If I try your test code, it seems to work fine. Here's the source code for my test class. It has no problems contacting a combination of online and offline hosts.
Lastly some quick comments about your code:
You should close the streams, readers, and sockets in a finally block. Check my test class for a better pattern there.
You should return a small Result class instead of passing back a String that has to be parsed (see the sketch below).
Hope this helps.
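As a rough sketch of both of those suggestions applied to the ClientConnect code; the port and the \1IDC_UPDATE\1 message come from the question, while the Result field names and the error handling are invented for illustration:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ClientConnectSketch {

    // Small result holder instead of an encoded "hostname|value" string.
    public static class Result {
        public final String hostname;
        public final int value;          // -1 means the query failed
        public Result(String hostname, int value) {
            this.hostname = hostname;
            this.value = value;
        }
    }

    public static Result query(String hostname, String hostip) {
        Socket so = new Socket();
        PrintWriter out = null;
        BufferedReader in = null;
        try {
            so.connect(new InetSocketAddress(hostip, 2223), 10000);
            out = new PrintWriter(so.getOutputStream(), true);
            out.println("\1IDC_UPDATE\1");
            in = new BufferedReader(new InputStreamReader(so.getInputStream()));
            String line = in.readLine();
            if (line == null) {
                return new Result(hostname, -1);   // peer closed without answering
            }
            String[] response = line.split("\1");
            return new Result(hostname, Integer.parseInt(response[2]));
        } catch (IOException | NumberFormatException | ArrayIndexOutOfBoundsException e) {
            return new Result(hostname, -1);
        } finally {
            // cleanup always runs, whether the query succeeded or threw
            if (out != null) out.close();
            if (in != null) try { in.close(); } catch (IOException ignored) {}
            try { so.close(); } catch (IOException ignored) {}
        }
    }
}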
After a lot of reading and experimentation, I will have to answer my own question (if I am allowed to, of course).
Java just can't handle many concurrent socket connections without adding a big performance overhead, at least on a Core2Duo / 4 GB RAM / Windows XP machine.
Creating multiple concurrent socket connections to remote hosts (using, of course, the code I posted) creates some kind of resource bottleneck or blocking situation, which I am still not aware of.
If you try to connect to 20 hosts simultaneously, and a lot of them are disconnected, then you cannot guarantee a "fast" connection to the alive ones.
You will get connected, but it could be after 20-25 seconds, meaning that you'd have to set the socket timeout to something like 60 seconds (not acceptable for my application).
If an alive host is lucky enough to start its connection attempt first (keeping in mind that the concurrency is not absolute; the for loop still submits tasks sequentially), then it will probably connect very fast and get a response.
If it is unlucky, the socket.connect() method will block for some time, depending on how many hosts before it will eventually time out.
After adding a small sleep (100 ms) between the pool.submit(worker) calls, I realised that it makes some difference: I get to connect faster to the "unlucky" hosts. But if the list of dead hosts grows, the results are almost the same.
If I edit my host list and place a previously "unlucky" host at the top (before the dead hosts), all problems disappear...
So, for some reason the socket.connect() method creates a form of bottleneck when many of the hosts to connect to are not alive. Whether it is a JVM problem, an OS limitation, or bad coding on my side, I have no clue...
I will try a different coding approach and hopefully tomorrow I will post some feedback.
p.s. This answer made me think of my problem :
https://stackoverflow.com/a/4351360/2025271

Find server IP in local network with known port in Java/Android

I want to find the IP address of a server on a local network in a short time. I know the port that the application is using on the server.
I've tried this, but it's too slow. Even when I know the IP, the response time is too long (around 4 seconds for each IP). With this method it would take minutes to scan the whole subnet from 10.0.0.0 to 10.0.0.255.
String ip = "10.0.0.45";
try {
    InetAddress ping = InetAddress.getByName(ip);
    Socket s = new Socket(ping, 32400);
    System.out.println("Server found on IP: " + ping.getCanonicalHostName());
    s.close();
} catch (IOException e) {
    System.out.println("Nothing");
}
I could use threads, but that would still be slow. I've seen applications out there that find the IP in milliseconds. How do they do this? Java code would be appreciated!
You'll want to do two things: use threads to check many hosts simultaneously, and give the socket connection a lower timeout.
This answer shows a very similar example; a sketch of the idea is below.
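For illustration, a sketch that submits one connect attempt per address and returns as soon as the first one succeeds; port 32400 comes from the question, while the 200 ms timeout and pool size are assumptions:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FindServerSketch {

    public static String findServer() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(64);
        CompletionService<String> results = new ExecutorCompletionService<String>(pool);
        for (int i = 0; i < 256; i++) {
            final String host = "10.0.0." + i;
            results.submit(new Callable<String>() {
                @Override
                public String call() throws IOException {
                    Socket s = new Socket();
                    try {
                        // short timeout: a LAN host that is up usually answers well within 200 ms
                        s.connect(new InetSocketAddress(host, 32400), 200);
                        return host;
                    } finally {
                        try { s.close(); } catch (IOException ignored) {}
                    }
                }
            });
        }
        String found = null;
        // take() hands back tasks as they finish; the first one that didn't throw wins
        for (int i = 0; i < 256 && found == null; i++) {
            try {
                found = results.take().get();
            } catch (ExecutionException e) {
                // that address refused or timed out; keep waiting for the rest
            }
        }
        pool.shutdownNow();
        return found;   // null if no host accepted the connection
    }
}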
I can suggest looking at the source code of Angry IP Scanner. It is fast enough, I think.
https://github.com/angryziber/ipscan

Java Socket Returns True

I hope you can help. I'm fairly new to programming and I'm playing around with Java sockets.
The problem is the code below. For some reason commSocket = new Socket(hostName, portNumber); succeeds and isConnected() returns true even though it has not connected to a server (the server is not implemented yet!). Any ideas regarding this situation?
For hostName I'm passing my local machine's IP, and for the port a manually selected port.
public void networkConnect(String hostName, int portNumber) {
    try {
        networkConnected = false;
        netMessage = "Attempting Connection";
        NetworkMessage networkMessage = new NetworkMessage(networkConnected, netMessage);
        commSocket = new Socket(hostName, portNumber);
        // this returns true!!
        System.out.println(commSocket.isConnected());
        networkConnected = true;
        netMessage = "Connected: ";
        System.out.println("hellooo");
    } catch (UnknownHostException e) {
        System.out.println(e.getMessage());
    } catch (IOException e) {
        System.out.println(e.getMessage());
    }
}
Many thanks.
EDIT: new Socket(..., ...) is blocking, isn't it? I thought in that case, if it returned without exceptions, then we have a true connection?
EDIT: I played around with my antivirus and now it is working!
Had that exact same situation a few days ago on a corporate computer, and searched for it for hours.
Check your antivirus: some antivirus products (like E*** N**32) use live TCP scanning, which makes a connection succeed even if nothing is listening on the target port and then resets it later when you try to read from or write to the socket.
Add this to your code:
commSocket.getOutputStream().write(0);
commSocket.getInputStream().read();
If you get a SocketException now, you should really consider to change your antivirus.
Alternatively, set a breakpoint in your application right after creating the socket, and then use netstat -ano (on Windows) to check which process id is associated with the other endpoint of your socket (which should be on your machine if you connect to localhost).
I would suggest disabling your antivirus, but in some cases even that does not help to unload its broken live TCP scanning driver...
The Socket constructor connects right away and will throw an IOException if it doesn't succeed. So apparently you have connected successfully to a server (this could be one you didn't make yourself).
