Java - URLConnections stop connecting?

I'm working on a homework assignment that has the purpose of showing how increasing the number of threads can help or hurt a program's performance. The basic idea is to thread individual requests for data from a website, then determine how long it takes to perform all the queries when one runs n queries simultaneously.
I think I have the threading and the timing done properly, but something odd is going on with the requests. I am using java.net.URLConnection to connect to the databases. My first three thousand or so connections succeed and load. Then several hundred calls in a row fail, without any evidence that Java tried for the specified timeout period.
The code I run in a thread is as follows:
/* This code to get the contents from an URL was adapted from a
 * StackOverflow question found at http://goo.gl/QPqR4 .
 */
private static String loadContent(String address) throws Exception {
    String toReturn = "";
    try {
        URL url = new URL(address);
        URLConnection con = url.openConnection();
        con.setConnectTimeout(5000);
        con.setReadTimeout(5000);
        InputStream stream = con.getInputStream();
        Reader r = new InputStreamReader(stream, "ISO-8859-1");
        while (true) {
            int ch = r.read();
            if (ch < 0) {
                break;
            }
            toReturn += (char) ch;
        }
        r.close();
        stream.close();
    } catch (Exception e) {
        System.out.println(address + ": " + e.getMessage());
        throw e;
    }
    return toReturn;
}
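(For comparison, a leak-safe variant of this loader - a sketch, not the assignment code - would close the stream even when a read times out, and avoid the quadratic string concatenation; it assumes the Java 7 try-with-resources syntax available on the JRE 7 listed below.)

// Sketch: same behaviour, but the reader (and the underlying socket stream)
// is always closed, and StringBuilder avoids O(n^2) concatenation.
private static String loadContentSafely(String address) throws IOException {
    URLConnection con = new URL(address).openConnection();
    con.setConnectTimeout(5000);
    con.setReadTimeout(5000);
    StringBuilder sb = new StringBuilder();
    try (Reader r = new InputStreamReader(con.getInputStream(), "ISO-8859-1")) {
        int ch;
        while ((ch = r.read()) >= 0) {
            sb.append((char) ch);
        }
    }
    return sb.toString();
}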
The code for running the threads is as follows. The NormalPerformance class is one I wrote to simplify calculating the mean and variance of a series of observations.
/* This code is patterned after code provided by my professor.
 */
private static NormalPerformance performExperiment(int threads, int runs)
        throws Exception {
    NormalPerformance toReturn = new NormalPerformance();
    for (int i = 0; i < runs; i++) {
        final List<Callable<Void>> tasks = new ArrayList<Callable<Void>>();
        for (int j = 0; j < URLS.length; j++) {
            final String url = URLS[j];
            tasks.add(new Callable<Void>() {
                public Void call() throws Exception {
                    loadContent(url);
                    return null;
                }
            });
        }
        long start = System.nanoTime();
        final ExecutorService executorPool = Executors.newFixedThreadPool(threads);
        executorPool.invokeAll(tasks);
        executorPool.shutdown();
        double time = (System.nanoTime() - start) / 1000000000.;
        toReturn.addObservation(time);
        System.out.println("" + threads + " " + (i + 1) + ": " + time);
    }
    return toReturn;
}
Why am I seeing this odd pattern of success and failure? Even stranger, there are times when killing the program and restarting does nothing to stop the run of failures. I've tried things like forcing threads to sleep, calling System.gc(), and increasing the connection and reading timeout values, but none of these, alone or combined, have fixed this.
How can I guarantee that my connections have the best chance possible of connecting?
Environment:
Windows 7 64-bit,
Eclipse Juno 64-bit,
JRE 7

Related

One Producer ten consumers file-processing with Executors.newSingleThreadExecutor()

I have a LinkedBlockingQueue with an arbitrarily picked capacity of 10, and an input file with 1000 lines. I have one ExecutorService-type variable in the main method of the service class. To my knowledge, it first uses Executors.newSingleThreadExecutor() to get a single thread that calls buffer.readLine() until the file line == null, and then, within a loop that again calls Executors.newSingleThreadExecutor(), ten threads that process lines and write them to output files until !queue.take().equals("Stop"). However, after writing some lines to files, I can see in debug mode that the queue eventually reaches its maximum capacity (10) and the processing threads do not execute queue.take(). All threads are in the running state, but the process halts after queue.put(). What would cause this problem, and is it solvable using some combination of thread pooling, or multiple ExecutorService handler variables instead of a single variable?
Outline for current state of main method in service:
// app settings to get values for keys within a properties file
AppSettings appSettings = new AppSettings();
BlockingQueue<String> queue = new LinkedBlockingQueue<String>(10);
maxProdThreads = 1;
maxConsThreads = 10;
ExecutorService execSvc = null;
for (int i = 0; i < maxProdThreads; i++) {
    execSvc = Executors.newSingleThreadExecutor();
    execSvc.submit(new ReadJSONMessage(appSettings, queue));
}
for (int i = 0; i < maxConsThreads; i++) {
    execSvc = Executors.newSingleThreadExecutor();
    execSvc.submit(new ProcessJSONMessage(appSettings, queue));
}
Reading method code:
buffer = new BufferedReader(new FileReader(inputFilePath));
while ((line = buffer.readLine()) != null) {
    line = line.trim();
    queue.put(line);
}
Processing and Writing code:
while (!(line = queue.take()).equals("Stop")) {
    if (line.length() > 10) {
        try {
            if (processMessage(line, outputFilePath) == true) {
                ++count;
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

public boolean processMessage(String line, String outputFilePath) {
    CustomObject cO = new CustomObject();
    cO.setText(line);
    writeToFile1(cO, ...);
    writeToFile2(cO, ...);
}

public void writeOutputAToFile(CustomObject cO, ...) {
    synchronized (cO) {
        ...
        org.apache.commons.io.FileUtils.writeStringToFile(...)
    }
}

public void writeOutputBToFile(CustomObject cO, ...) {
    synchronized (cO) {
        ...
        org.apache.commons.io.FileUtils.writeStringToFile(...)
    }
}
In the processing and writing code, ensure that all resources are closed properly. If a resource is left open, the thread that owns it can keep running, and the ExecutorService cannot find an idle thread to reuse.
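Beyond resource handling, note that the posted reader never enqueues the "Stop" sentinel the consumers wait for, so once the file is exhausted every consumer blocks forever in queue.take(). A minimal sketch of the producer side, using the question's own sentinel convention (one "Stop" per consumer is an assumption):

// Producer: after the file is exhausted, enqueue one poison pill
// per consumer so every ProcessJSONMessage thread can terminate.
buffer = new BufferedReader(new FileReader(inputFilePath));
while ((line = buffer.readLine()) != null) {
    queue.put(line.trim());
}
for (int i = 0; i < maxConsThreads; i++) {
    queue.put("Stop");
}
buffer.close();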

Java benchmark disk speed

I'm trying to find a reliable method of measuring disk read speed, but I'm failing to take the cache out of the equation.
In How to measure Disk Speed in Java for Benchmarking, simgineer's answer links a utility for exactly this purpose, but for some reason I failed to replicate its behaviour, and running the utility does not yield anything precise either (for reads).
Following a suggestion in a different answer, setting the test file to something bigger than main memory seems to work, but I cannot afford to spend a whole four minutes waiting for the system to allocate a 130 GB file. (Not writing anything into the file results in a sparse file and returns bogus times.)
Sufficient file size seems to be somewhere between Runtime.getRuntime().maxMemory() and Runtime.getRuntime().maxMemory()*2.
The source code of my current solution:
File file = new File(false ? "D:/work/bench.dat" : "./work/bench.dat");
RandomAccessFile wFile = null, rFile = null;
try {
    System.out.println("Allocating test file ...");
    int blockSize = 1024*1024;
    long size = false ? 10L*1024L*(long)blockSize : Runtime.getRuntime().maxMemory()*2;
    byte[] block = new byte[blockSize];
    for (int i = 0; i < blockSize; i++) {
        if (i % 2 == 0) block[i] = (byte) (i & 0xFF);
    }
    System.out.println("Writing ...");
    wFile = new RandomAccessFile(file, "rw");
    wFile.setLength(size);
    for (long i = 0; i < size - blockSize; i += blockSize) {
        wFile.write(block);
    }
    wFile.close();
    System.out.println("Running read test ...");
    long t0 = System.nanoTime();
    rFile = new RandomAccessFile(file, "r");
    int blockCount = (int)(size/blockSize) - 1;
    Random rnd = new Random();
    for (int i = 0; i < testCount; i++) {
        rFile.seek((long)rnd.nextInt(blockCount)*(long)blockSize);
        rFile.readFully(block, 0, blockSize);
    }
    rFile.close();
    long t1 = System.nanoTime();
    double readB = ((double)testCount*(double)blockSize);
    double timeNs = (double)(t1 - t0);
    return (readB/(1024*1024))/(timeNs/(1000*1000*1000));
} catch (Exception e) {
    Logger.logError("Failed to benchmark drive speed!", e);
    return 0;
} finally {
    if (wFile != null) { try { wFile.close(); } catch (IOException e) {} }
    if (rFile != null) { try { rFile.close(); } catch (IOException e) {} }
    if (file.exists()) { file.delete(); }
}
I was hoping for a benchmark that finishes within seconds (caching results for subsequent runs), with only the first execution being a bit slower.
I could technically crawl the filesystem and bench the read on files that are already on the drive, but that smells like a lot of undefined behaviour and firewalls are not happy about it either.
Any other options left? (platform dependent libraries are off the table)
In the end I decided to solve the problem by scouring the local work folder for files and loading those, hoping we packaged enough with the application to reach spec speeds. In my current test case the answer is luckily yes, but there are no guarantees, so I keep the approach from the question as a backup plan.
This is not exactly a perfect solution, but it somewhat works, reaching spec speed at about 2000 test files. Bear in mind that this test cannot be rerun with the same results, as all the test files from the previous execution are now probably cached.
You can always call flushmem ( https://chadaustin.me/flushmem/ ) by Chad Austin, but that takes about as much time to execute as the original approach, so I would advise simply caching the result of the first run and hoping for the best.
Used code:
final int MIN_FILE_SIZE = 1024*10;
final int MAX_READ = 1024*1024*50;
final int FILE_COUNT_FRACTION = 4;

// Scour the location of the runtime for any usable files.
ArrayList<File> found = new ArrayList<>();
ArrayList<File> queue = new ArrayList<>();
queue.add(new File("./"));
while (!queue.isEmpty() && found.size() < testCount) {
    File tested = queue.remove(queue.size() - 1);
    if (tested.isDirectory()) {
        queue.addAll(Arrays.asList(tested.listFiles()));
    } else if (tested.length() > MIN_FILE_SIZE) {
        found.add(tested);
    }
}

// If the number of found files is not sufficient, perform the test with a new file.
if (found.size() < testCount/FILE_COUNT_FRACTION) {
    Logger.logInfo("Disk to CPU transfer benchmark failed to find "
            + "sufficient amount of files to read, slow version "
            + "will be performed!", found.size());
    return benchTransferSlowDC(testCount);
}
System.out.println(found.size());

byte[] block = new byte[MAX_READ];
Collections.shuffle(found);
RandomAccessFile raf = null;
long readB = 0;
try {
    long t0 = System.nanoTime();
    for (int i = 0; i < Math.min(found.size(), testCount); i++) {
        File file = found.get(i);
        int size = (int) Math.min(file.length(), MAX_READ);
        raf = new RandomAccessFile(file, "r");
        raf.read(block, 0, size);
        raf.close();
        readB += size;
    }
    long t1 = System.nanoTime();
    return ((double)readB/(1024*1024))/((double)(t1 - t0)/(1000*1000*1000));
    //return (double)(t1-t0) / (double)readB;
} catch (Exception e) {
    Logger.logError("Failed to benchmark drive speed!", e);
    if (raf != null) try { raf.close(); } catch (Exception ex) {}
    return 0;
}
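Since the practical advice above boils down to "cache the result of the first run", here is a minimal sketch of such caching (the diskReadMBps key, the cache file location, and the benchTransferDC method name are hypothetical, not part of the original code):

// Sketch: persist the first run's result and reuse it on later runs.
Properties cache = new Properties();
File cacheFile = new File("./work/bench.properties"); // hypothetical location
try (FileInputStream in = new FileInputStream(cacheFile)) {
    cache.load(in);
} catch (IOException e) {
    // no cache yet - this is the first run
}
String cached = cache.getProperty("diskReadMBps"); // hypothetical key
if (cached != null) {
    return Double.parseDouble(cached);
}
double mbPerSec = benchTransferDC(testCount); // hypothetical: the benchmark above
cache.setProperty("diskReadMBps", Double.toString(mbPerSec));
try (FileOutputStream out = new FileOutputStream(cacheFile)) {
    cache.store(out, "disk benchmark cache");
} catch (IOException e) {
    // could not persist; the fresh value is still returned
}
return mbPerSec;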

Inconsistent output from multithreaded FTP InputStreams

I'm trying to create a java program that downloads certain asset files from an FTP server to a local file. Because my (free) FTP server doesn't support file sizes over a few megabytes, I decided to split up the files when they are uploaded and recombine them when the program downloads them. This works, but it is rather slow, because for each file, it has to get the InputStream, which takes some time.
The FTP server I use has a way to download the files without actually logging into the server, so I'm using this code to get the InputStream:
private static final InputStream getInputStream(String file) throws IOException {
    return new URL("http://site.website.com/path/" + file).openStream();
}
To get the InputStream of a part of the asset file I'm using this code:
public static InputStream getAssetInputStream(String asset, int num) throws IOException, FTPException {
    try {
        return getInputStream("assets/" + asset + "_" + num + ".raf");
    } catch (Exception e) {
        // error handling
    }
}
Because the getAssetInputStream(String, int) method takes some time to run (especially if the file size is more than a megabyte), I decided to make the code that actually downloads the file multi-threaded. Here is where my problem lies.
final Map<Integer, Boolean> done = new HashMap<Integer, Boolean>();
final Map<Integer, byte[]> parts = new HashMap<Integer, byte[]>();
for (int i = 0; i < numParts; i++) {
    final int part = i;
    done.put(part, false);
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                InputStream is = FTP.getAssetInputStream(asset, part);
                ByteArrayOutputStream baos = new ByteArrayOutputStream();
                byte[] buf = new byte[DOWNLOAD_BUFFER_SIZE];
                int len = 0;
                while ((len = is.read(buf)) > 0) {
                    baos.write(buf, 0, len);
                    curDownload.addAndGet(len);
                    totAssets.addAndGet(len);
                }
                parts.put(part, baos.toByteArray());
                done.put(part, true);
            } catch (IOException e) {
                // error handling
            } catch (FTPException e) {
                // error handling
            }
        }
    }, "Download-" + asset + "-" + i).start();
}
while (done.values().contains(false)) {
    try {
        Thread.sleep(100);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
File assetFile = new File(dir, "assets/" + asset + ".raf");
assetFile.createNewFile();
FileOutputStream fos = new FileOutputStream(assetFile);
for (int i = 0; i < numParts; i++) {
    fos.write(parts.get(i));
}
fos.close();
This code works, but not always. When I run it on my desktop computer, it works almost always. Not 100% of the time, but often it works just fine. On my laptop, which has a far worse internet connection, it almost never works. The result is a file that is incomplete. Sometimes, it downloads 50% of the file. Sometimes, it downloads 90% of the file, it differs every time.
Now, if I replace the .start() by .run(), the code works just fine, 100% of the time, even on my laptop. It is, however, incredibly slow, so I'd rather not use .run().
Is there a way I could change my code so it does work multi-threaded? Any help will be appreciated.
Firstly, get your FTP server replaced; there are plenty of free FTP servers that support arbitrary file sizes and come with additional features. But I digress...
Your code seems to have many unrelated problems that could potentially all cause the behavior you are seeing, addressed below:
You have race conditions from accessing the done and parts maps from multiple threads without synchronization. This can corrupt data and leave threads seeing stale values, potentially causing done.values().contains(false) to return true even when every part has finished. (A minimal patch for this one issue is sketched just after this list.)
You are calling done.values().contains() repeatedly at a high frequency. While the javadoc doesn't explicitly state it, the returned collection likely traverses every value in O(n) fashion to check whether it contains the given one. Coupled with the fact that other threads are modifying the map, you'll get undefined behavior. According to the values() javadoc:
If the map is modified while an iteration over the collection is in progress (except through the iterator's own remove operation), the results of the iteration are undefined.
You are calling new URL("http://site.website.com/path/" + file).openStream(); yet you state you are using FTP. The http:// in the URL defines the protocol openStream() uses, and http:// is not ftp://. I'm not sure whether this is a typo or whether you really meant HTTP (or have an HTTP server serving identical files).
Any thread raising any type of exception will cause the code to hang, since not all parts will ever be marked "completed" (given the busy-wait loop design). Granted, you may have redacted some other logic that guards against this, but otherwise it is a potential problem with the code.
You aren't closing any of the streams that you've opened. This can mean that the underlying sockets are also left open. Besides being a resource leak, if the server enforces a maximum number of simultaneous connections, the old, never-closed transfers will cause new connections to fail.
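As promised above, if the busy-wait design were kept, the race on the maps alone could be patched by swapping in concurrent maps - a sketch of that single change, not a fix for the other issues:

// Sketch: concurrent maps give cross-thread puts/gets safe publication;
// the busy-wait, the missing stream closes, and the error handling remain as-is.
// Requires java.util.concurrent.ConcurrentHashMap.
final Map<Integer, Boolean> done = new ConcurrentHashMap<Integer, Boolean>();
final Map<Integer, byte[]> parts = new ConcurrentHashMap<Integer, byte[]>();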
Based on the issues above, I propose moving the download logic into a Callable task and running them through an ExecutorService as follows:
LinkedList<Callable<byte[]>> tasksToExecute = new LinkedList<>();
// Populate tasks to run
for (int i = 0; i < numParts; i++) {
    final int part = i;
    // Lambda to download one part and return its bytes
    tasksToExecute.add(() -> {
        InputStream is = null;
        try {
            is = FTP.getAssetInputStream(asset, part);
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            byte[] buf = new byte[DOWNLOAD_BUFFER_SIZE];
            int len = 0;
            while ((len = is.read(buf)) > 0) {
                baos.write(buf, 0, len);
                curDownload.addAndGet(len);
                totAssets.addAndGet(len);
            }
            return baos.toByteArray();
        } catch (IOException e) {
            // handle exception
        } catch (FTPException e) {
            // handle exception
        } finally {
            if (is != null) {
                try {
                    is.close();
                } catch (IOException ignored) {}
            }
        }
        return null;
    });
}
// Retrieve an ExecutorService instance; note the work-stealing pool is Java 8 only.
// It can be substituted with newFixedThreadPool(nThreads) for tight control over the number of simultaneous links.
ExecutorService executor = Executors.newWorkStealingPool(4);
// Tell the executor to execute all the tasks and give us the results
List<Future<byte[]>> resultFutures = executor.invokeAll(tasksToExecute);
// Populate the file
File assetFile = new File(dir, "assets/" + asset + ".raf");
assetFile.createNewFile();
try (FileOutputStream fos = new FileOutputStream(assetFile)) {
    // Iterate through the futures, writing them to file in order
    for (Future<byte[]> result : resultFutures) {
        byte[] partData = result.get();
        if (partData == null) {
            // exception occurred while downloading this part, handle appropriately
        } else {
            fos.write(partData);
        }
    }
} catch (IOException | InterruptedException | ExecutionException ex) {
    // handle exception (get() can throw InterruptedException/ExecutionException)
}
Using the executor service, you further optimize your multi-threading scenario, since the output file starts being written as soon as the pieces are available (in order), and the threads themselves are reused, saving on thread-creation costs.
As mentioned, there could be a case where too many simultaneous links cause the server to reject connections (or, even more dangerously, to write an EOF that makes you think a part was fully downloaded). In that case, the number of worker threads can be tweaked via newFixedThreadPool(nThreads) to ensure that, at any given time, only nThreads downloads can happen concurrently.
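For instance, that substitution is a one-liner (4 is an arbitrary choice here):

// At most 4 parts download concurrently; remaining tasks wait in the pool's queue.
ExecutorService executor = Executors.newFixedThreadPool(4);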

Calculating the bandwidth by sending several packets through linear regression

I implemented a TCP client-server model to test the bandwidth between client and server by sending a number of packets of different sizes, measuring the RTT, and then estimating the bandwidth through linear regression.
Here is the server code:
import java.io.*;
import java.net.*;

public class Server implements Runnable {

    ServerSocket welcomeSocket;
    String clientSentence;
    Thread thread;
    Socket connectionSocket;
    BufferedReader inFromClient;
    DataOutputStream outToClient;

    public Server() throws IOException {
        welcomeSocket = new ServerSocket(6588);
        connectionSocket = welcomeSocket.accept();
        inFromClient = new BufferedReader(new InputStreamReader(connectionSocket.getInputStream()));
        outToClient = new DataOutputStream(connectionSocket.getOutputStream());
        thread = new Thread(this);
        thread.start();
    }

    @Override
    public void run() {
        // TODO Auto-generated method stub
        while (true) {
            try {
                clientSentence = inFromClient.readLine();
                if (clientSentence != null) {
                    System.out.println("Received: " + clientSentence);
                    outToClient.writeBytes(clientSentence + '\n');
                }
            } catch (IOException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        new Server();
    }
}
And this is the method in the Client class that returns an array of the RTTs, one per packet:
public int[] getResponseTime() throws UnknownHostException, IOException {
    timeArray = new int[sizes.length];
    for (int i = 0; i < sizes.length; i++) {
        sentence = StringUtils.leftPad("", sizes[i], '*');
        long start = System.nanoTime();
        outToServer.writeBytes(sentence + '\n');
        modifiedSentence = inFromServer.readLine();
        long end = System.nanoTime();
        System.out.println("FROM SERVER: " + modifiedSentence);
        timeArray[i] = (int) (end - start);
        simpleReg.addData(timeArray[i] * Math.pow(10, -9), sizes[i] * 2); // each char is 2 bytes
    }
    return timeArray;
}
When I compute the slope, it gives me a bandwidth in the kilobyte range; however, the machines are on the same network and the bandwidth should be much higher. What am I doing wrong?
Are you obliged to use linear regression, or could it be a different estimator? I am actually not sure linear regression is the best approach here. I am curious: do you happen to know any sources that suggest using it in this kind of situation?
Note that especially the initial BW measurements are much smaller than the real maximal goodput (due to TCP slow-start), so it is important to use an estimator that takes large wrong outliers into account.
In previous work I have used the harmonic mean to monitor the bandwidth over a longer period of time and it worked pretty well (also on links with a large bandwidth). The advantage of the harmonic mean over other means is that, while it is still very easy to compute, it mitigates the impact of large outliers, meaning the estimate is not as easily falsified.
Given a series of bandwidth measurements R_i, where i = 0, 1, 2, ..., the running harmonic mean R_total over the first n measurements is updated incrementally when a new measurement R_n arrives:

R_total <- (n + 1) / ( n / R_total + 1 / R_n )

For example, after measurements of 10 MB/s and 40 MB/s, the harmonic mean is 2 / (1/10 + 1/40) = 16 MB/s, noticeably below the arithmetic mean of 25 MB/s, so a single large outlier cannot inflate the estimate much.
It is also good practice to skip the first few measurement values (depending on how often you measure), e.g., R_0 through R_5, since you might see initial bursts due to initial preparations in the different layers and are in the slow-start phase anyway.
Here is an example implementation in Java. Even though in this case the measurement is done through a file download, it can easily be applied to your environment too - simply use your echo server instead of the file download:
public class Estimator {

    private static double R;      // harmonic mean of all bandwidth measurements
    private static int n = 0;     // number of measurements
    private static int skips = 5; // skip measurements for first 5 socket.read() operations

    // size in bytes
    // start/end in ns
    public static double harmonicMean(long start, long end, double size) {
        // check if we need to skip this initial value, since it might falsify our estimate
        if (skips-- > 0) return 0;
        // get current value of R
        double curR = (size/(1024*1024))/(double)((end - start)*Math.pow(10, -9));
        System.out.println(curR);
        if (n == 0) {
            // initial value
            R = curR;
        } else {
            // use harmonic mean
            R = (n+1)/((n/R)+(1/curR));
        }
        n++;
        return R;
    }

    public static void main(String[] args) {
        // temporary buffer to hold bytes
        byte[] buffer = new byte[1024*1024*10]; // 10MB buffer - just in case ...
        Socket socket = null;
        try {
            // measurement done through file download from server
            // prepare request
            socket = new Socket("yourserver.com", 80);
            PrintWriter pw = new PrintWriter(socket.getOutputStream());
            InputStream is = socket.getInputStream();
            pw.println("GET /test_blob HTTP/1.1"); // a test file, e.g., 1MB big
            pw.println("Host: yourserver.com");
            pw.println("");
            pw.flush();
            // prepare measurement
            long start, end;
            double bytes = 0;
            double totalBytes = 0;
            start = System.nanoTime();
            while ((bytes = is.read(buffer)) != -1) {
                // socket.read() occurred -> calculate harmonic mean
                end = System.nanoTime();
                totalBytes += bytes;
                harmonicMean(start, end, totalBytes);
            }
            // clean up
            is.close();
            pw.close();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (socket != null) {
                try {
                    socket.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
        System.out.println(R + " MB/s");
    }
}
Additionally, for the sake of completeness: as I already mentioned in the comments, it is important that the test messages/files are big enough for TCP to reach the full goodput potential of the link.
Please also note that this is a simplified way to estimate the bandwidth. In this example we start measuring (taking the first timestamp) when the request is sent, meaning we include the link propagation and server processing delay, which in turn reduces the overall estimated value. Anyway, since you seem to use a local network, I expect the sum of these delays to be rather small, which means they will not falsify the final estimate too much.
I wrote a small blog post concerning measuring TCP connection metrics inside an application layer. Everything is described in more detail there (though the code examples are in C).

Reducing CPU overhead while reading from Sockets using JAVA

I have two threads that increase the CPU overhead.
1. Reading from the socket in a synchronous way.
2. Waiting to accept connections from other clients
Problem 1: I'm just trying to read any data that comes from the client, and I cannot use readLine(), because the incoming data contains newlines that I use to mark the end of a message header. So I'm reading it this way in a thread, but it increases the CPU overhead.
public static String convertStreamToString(TCPServerConnectionListner socket) throws UnsupportedEncodingException, IOException, InterruptedException {
    BufferedReader reader = new BufferedReader(new InputStreamReader(socket.getSocket().getInputStream()));
    // At this point it is too early to read. So it most likely return false
    System.out.println("Buffer Reader ready? " + reader.ready());
    // StringBuilder to hold the response
    StringBuilder sb = new StringBuilder();
    // Indicator to show if we have started to receive data or not
    boolean dataStreamStarted = false;
    // How many times we went to sleep waiting for data
    int sleepCounter = 0;
    // How many times (max) we will sleep before bailing out
    int sleepMaxCounter = 5;
    // Sleep max counter after data started
    int sleepMaxDataCounter = 50;
    // How long to sleep for each cycle
    int sleepTime = 5;
    // Start time
    long startTime = System.currentTimeMillis();
    // This is a tight loop. Not sure what it will do to CPU
    while (true) {
        if (reader.ready()) {
            sb.append((char) reader.read());
            // Once started we do not expect server to stop in the middle and restart
            dataStreamStarted = true;
        } else {
            Thread.sleep(sleepTime);
            if (dataStreamStarted && (sleepCounter >= sleepMaxDataCounter)) {
                System.out.println("Reached max sleep time of " + (sleepMaxDataCounter * sleepTime) + " ms after data started");
                break;
            } else {
                if (sleepCounter >= sleepMaxCounter) {
                    System.out.println("Reached max sleep time of " + (sleepMaxCounter * sleepTime) + " ms. Bailing out");
                    // Reached max timeout waiting for data. Bail..
                    break;
                }
            }
            sleepCounter++;
        }
    }
    long endTime = System.currentTimeMillis();
    System.out.println(sb.toString());
    System.out.println("Time " + (endTime - startTime));
    return sb.toString();
}
Problem 2: I don't know the best way to do this. I just have a thread that constantly waits for other clients and accepts them, but this also takes a lot of CPU overhead.
// Listener to accept any client connection
@Override
public void run() {
    while (true) {
        try {
            mutex.acquire();
            if (!welcomeSocket.isClosed()) {
                connectionSocket = welcomeSocket.accept();
                // Thread.sleep(5);
            }
        } catch (IOException ex) {
            Logger.getLogger(TCPServerConnectionListner.class.getName()).log(Level.SEVERE, null, ex);
        } catch (InterruptedException ex) {
            Logger.getLogger(TCPServerConnectionListner.class.getName()).log(Level.SEVERE, null, ex);
        } finally {
            mutex.release();
        }
    }
}
}
A profiler picture would also help, but I'm wondering why the SwingWorker thread takes that much time?
Updated code for Problem 1:
public static String convertStreamToString(TCPServerConnectionListner socket) throws UnsupportedEncodingException, IOException, InterruptedException {
    byte[] resultBuff = new byte[0];
    byte[] buff = new byte[65534];
    int k = -1;
    k = socket.getSocket().getInputStream().read(buff, 0, buff.length);
    byte[] tbuff = new byte[resultBuff.length + k]; // temp buffer size = bytes already read + bytes last read
    System.arraycopy(resultBuff, 0, tbuff, 0, resultBuff.length); // copy previous bytes
    System.arraycopy(buff, 0, tbuff, resultBuff.length, k); // copy current lot
    resultBuff = tbuff; // call the temp buffer as your result buff
    return new String(resultBuff);
}
}
Just get rid of the ready() call and block. Everything you do while ready() is false is literally a complete waste of time, including the sleep. The read() will block for exactly the right amount of time. A sleep() won't. You are either not sleeping for long enough, which wastes CPU time, or too long, which adds latency. Once in a while you may sleep for the correct time, but this is 100% luck, not good management. If you want a read timeout, use a read timeout.
You appear to be waiting until there is no more data after some timeout.
I suggest you use Socket.setSoTimeout() (note that the timeout is in milliseconds, not seconds).
A better solution is to not need this at all, by having a protocol that tells you when the end of the data has been reached. You would only fall back on a timeout if the server is poorly implemented and you have no way to fix it.
For Problem 1: 100% CPU may be because you are reading a single char at a time from BufferedReader.read(). Instead, you can read a chunk of data into an array and append it to your StringBuilder.
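A minimal sketch of that suggestion, combined with the read-timeout advice above (the 5000 ms timeout and 8 KB chunk size are arbitrary assumptions, not values from the question):

// Sketch: block on read() with a socket timeout instead of polling ready().
socket.getSocket().setSoTimeout(5000); // timeout is in milliseconds
Reader reader = new InputStreamReader(socket.getSocket().getInputStream());
StringBuilder sb = new StringBuilder();
char[] chunk = new char[8192]; // read a chunk at a time instead of single chars
int n;
try {
    while ((n = reader.read(chunk)) != -1) {
        sb.append(chunk, 0, n);
        // a real protocol would check here whether a complete message has arrived
    }
} catch (SocketTimeoutException e) {
    // no data arrived within 5000 ms; treat what we have as complete
}
return sb.toString();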
