I have two threads that drive up the CPU overhead:
1. Reading from the socket synchronously.
2. Waiting to accept connections from other clients.
For problem 1, I'm just trying to read any data that comes from the client, and I can't use readLine(), because the incoming data contains newlines that I use to mark the end of a message header. So I'm reading this way in a thread, but it increases the CPU overhead:
public static String convertStreamToString(TCPServerConnectionListner socket) throws UnsupportedEncodingException, IOException, InterruptedException {
    BufferedReader reader = new BufferedReader(new InputStreamReader(socket.getSocket().getInputStream()));
    // At this point it is too early to read. So it most likely returns false
    System.out.println("Buffer Reader ready? " + reader.ready());
    // StringBuilder to hold the response
    StringBuilder sb = new StringBuilder();
    // Indicator to show if we have started to receive data or not
    boolean dataStreamStarted = false;
    // How many times we went to sleep waiting for data
    int sleepCounter = 0;
    // How many times (max) we will sleep before bailing out
    int sleepMaxCounter = 5;
    // Sleep max counter after data started
    int sleepMaxDataCounter = 50;
    // How long to sleep for each cycle
    int sleepTime = 5;
    // Start time
    long startTime = System.currentTimeMillis();
    // This is a tight loop. Not sure what it will do to CPU
    while (true) {
        if (reader.ready()) {
            sb.append((char) reader.read());
            // Once started we do not expect server to stop in the middle and restart
            dataStreamStarted = true;
        } else {
            Thread.sleep(sleepTime);
            if (dataStreamStarted && (sleepCounter >= sleepMaxDataCounter)) {
                System.out.println("Reached max sleep time of " + (sleepMaxDataCounter * sleepTime) + " ms after data started");
                break;
            } else {
                if (sleepCounter >= sleepMaxCounter) {
                    System.out.println("Reached max sleep time of " + (sleepMaxCounter * sleepTime) + " ms. Bailing out");
                    // Reached max timeout waiting for data. Bail..
                    break;
                }
            }
            sleepCounter++;
        }
    }
    long endTime = System.currentTimeMillis();
    System.out.println(sb.toString());
    System.out.println("Time " + (endTime - startTime));
    return sb.toString();
}
For problem 2, I don't know the best way to do it. I just have a thread that constantly waits for other clients and accepts them, but this also takes a lot of CPU overhead:
// Listener to accept any client connection
@Override
public void run() {
    while (true) {
        try {
            mutex.acquire();
            if (!welcomeSocket.isClosed()) {
                connectionSocket = welcomeSocket.accept();
                // Thread.sleep(5);
            }
        } catch (IOException ex) {
            Logger.getLogger(TCPServerConnectionListner.class.getName()).log(Level.SEVERE, null, ex);
        } catch (InterruptedException ex) {
            Logger.getLogger(TCPServerConnectionListner.class.getName()).log(Level.SEVERE, null, ex);
        } finally {
            mutex.release();
        }
    }
}
A profiler picture would also help (see below), but I'm wondering why the SwingWorker thread takes that much time.
Updated code for problem one:
public static String convertStreamToString(TCPServerConnectionListner socket) throws UnsupportedEncodingException, IOException, InterruptedException {
    byte[] resultBuff = new byte[0];
    byte[] buff = new byte[65534];
    int k = socket.getSocket().getInputStream().read(buff, 0, buff.length);
    if (k == -1) {
        // End of stream: nothing was read, so avoid a negative array size below
        return "";
    }
    byte[] tbuff = new byte[resultBuff.length + k]; // temp buffer size = bytes already read + bytes last read
    System.arraycopy(resultBuff, 0, tbuff, 0, resultBuff.length); // copy previous bytes
    System.arraycopy(buff, 0, tbuff, resultBuff.length, k); // copy current lot
    resultBuff = tbuff; // treat the temp buffer as your result buffer
    return new String(resultBuff);
}
(profiler snapshot)
Just get rid of the ready() call and block. Everything you do while ready() is false is literally a complete waste of time, including the sleep. The read() will block for exactly the right amount of time. A sleep() won't. You are either not sleeping for long enough, which wastes CPU time, or too long, which adds latency. Once in a while you may sleep for the correct time, but this is 100% luck, not good management. If you want a read timeout, use a read timeout.
You appear to be waiting until there is no more data after some timeout.
I suggest you use Socket.setSoTimeout(), noting that the timeout is specified in milliseconds.
A better solution is to not need this at all, by having a protocol that lets you know when the end of the data is reached. You would only fall back on timeouts if the server is poorly implemented and you have no way to fix it.
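A minimal sketch of that blocking approach, assuming the same Socket as in the question (the method name, buffer size, and charset are illustrative):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class BlockingReadSketch {
    // Block on read() and let a socket timeout bound the wait; no polling, no sleeping.
    public static String readWithTimeout(Socket socket, int timeoutMillis) throws IOException {
        socket.setSoTimeout(timeoutMillis); // read() now throws SocketTimeoutException if no data arrives in time
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        InputStream in = socket.getInputStream();
        try {
            int n;
            while ((n = in.read(buf)) != -1) { // blocks, consuming no CPU while waiting
                out.write(buf, 0, n);
            }
        } catch (SocketTimeoutException e) {
            // no data within timeoutMillis; treat what we have as the complete message
        }
        return out.toString("UTF-8");
    }
}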
For problem 1, the 100% CPU may be because you are reading a single char at a time with BufferedReader.read(). Instead, you can read a chunk of data into an array and append it to your StringBuilder, as sketched below.
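A sketch of that chunked read, assuming the same BufferedReader as in the question (the chunk size is an arbitrary choice):
char[] chunk = new char[8192];
StringBuilder sb = new StringBuilder();
int n;
// Each call blocks until at least one char is available, then returns up to 8192 at once
while ((n = reader.read(chunk, 0, chunk.length)) != -1) {
    sb.append(chunk, 0, n);
}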
I'm extending the BaseIOIOLooper to open up a UART device and send messages. I'm testing with a readback, where I send a packet over a line and receive that packet on another line and print it out. Because I don't want the InputStream.read() method to block, I am handling packet formation and input in a different thread. I have narrowed my problem down to the InputStream.read() method, which returns -1 (no bytes read, but no exception).
Here is what it looks like in the Looper thread:
@Override
protected void setup() throws ConnectionLostException, InterruptedException {
    log_.write_log_line(log_header_ + "Beginning IOIO setup.");
    // Initialize IOIO UART pins
    // Input at pin 1, output at pin 2
    try {
        inQueue_ = MinMaxPriorityQueue.orderedBy(new ComparePackets())
                .maximumSize(QUEUESIZE).create();
        outQueue_ = MinMaxPriorityQueue.orderedBy(new ComparePackets())
                .maximumSize(QUEUESIZE).create();
        ioio_.waitForConnect();
        uart_ = ioio_.openUart(1, 2, 38400, Uart.Parity.NONE, Uart.StopBits.ONE);
        // Start InputHandler. Takes packets from ELKA on inQueue_
        in_ = new InputHandler(inQueue_, uart_.getInputStream());
        in_.start();
        // Start OutputHandler. Takes packets from subprocesses on outQueue_
        out_ = new OutputHandler(outQueue_);
        out_.start();
        // Get output stream
        os_ = uart_.getOutputStream();
        // Set default target state
        setTargetState(State.TRANSFERRING);
        currInPacket_[0] = 1; // Initial value to start transferring
        log_.write_log_line(log_header_ + "IOIO setup complete.\n\t" +
                "Input pin set to 1\n\tOutput pin set to 2\n\tBaud rate set to 38400\n\t" +
                "Parity set to none\n\tStop bits set to 1");
    } catch (IncompatibilityException e) {
        log_.write_log_line(log_header_ + e.toString());
    } catch (ConnectionLostException e) {
        log_.write_log_line(log_header_ + e.toString());
    } catch (Exception e) {
        log_.write_log_line(log_header_ + "mystery exception: " + e.toString());
    }
}
And in the InputHandler thread:
@Override
public void run() {
    boolean notRead;
    byte i;
    log_.write_log_line(log_header_ + "Beginning InputHandler thread");
    while (!stop) {
        i = 0;
        notRead = true;
        nextInPacket = new byte[BUFFERSIZE];
        readBytes = -1;
        //StringBuilder s = new StringBuilder();
        //TODO re-implement this with signals
        while (i < READATTEMPTS && notRead) {
            try {
                // Make sure to adjust packet size. Done manually here for speed.
                readBytes = is_.read(nextInPacket, 0, BUFFERSIZE);
                /* Debugging
                for (int j = 0; j < nextInPacket.length; j++)
                    s.append(Byte.toString(nextInPacket[j]));
                log_.write_log_line(log_header_ + s.toString());
                */
                if (readBytes != -1) {
                    notRead = false;
                    nextInPacket = new byte[]{1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0};
                    synchronized (q_) {
                        q_.add(nextInPacket);
                    }
                    //log_.write_log_line(log_header_ + "Incoming packet contains valid data.");
                } else i++;
            } catch (IOException e) {
                log_.write_log_line(log_header_ + "mystery exception:\n\t" + e.toString());
            }
        }
        if (i >= READATTEMPTS)
            log_.write_log_line(log_header_ + "Too many read attempts from input stream.");
        /*
        try {
            sleep(100);
        } catch (InterruptedException e) {
            log_.write_log_line(log_header_ + "fuck");
        }
        */
    }
}
On an oscilloscope, pins 1 and 2 both read an oscillating voltage, albeit at a very high amplitude, which is of some concern. The point is that nothing is available to be read from the InputStream in the InputHandler class. Any ideas?
-1 returned from read() should only happen when the UART is closed. The closure can happen as a result of explicitly calling close() on the Uart object or calling softReset() on the IOIO object.
The Android log might give you some clues about what's going on.
The reading you're seeing on the oscilloscope is suspicious: how high is "very high amplitude"? You should only ever see 0V or 3.3V on those pins, or a floating voltage in case the pins were not opened (or were closed) for some reason.
I implemented a TCP client-server model to test the bandwidth to the server by sending a number of packets of different sizes, measuring the RTT, and then calculating the bandwidth through linear regression.
Here is the server code:
import java.io.*;
import java.net.*;

public class Server implements Runnable {

    ServerSocket welcomeSocket;
    String clientSentence;
    Thread thread;
    Socket connectionSocket;
    BufferedReader inFromClient;
    DataOutputStream outToClient;

    public Server() throws IOException {
        welcomeSocket = new ServerSocket(6588);
        connectionSocket = welcomeSocket.accept();
        inFromClient = new BufferedReader(new InputStreamReader(connectionSocket.getInputStream()));
        outToClient = new DataOutputStream(connectionSocket.getOutputStream());
        thread = new Thread(this);
        thread.start();
    }

    @Override
    public void run() {
        while (true) {
            try {
                clientSentence = inFromClient.readLine();
                if (clientSentence != null) {
                    System.out.println("Received: " + clientSentence);
                    outToClient.writeBytes(clientSentence + '\n');
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        new Server();
    }
}
And this is the method in the Client class that returns an array of the RTTs, one per packet:
public int[] getResponseTime() throws UnknownHostException, IOException {
    timeArray = new int[sizes.length];
    for (int i = 0; i < sizes.length; i++) {
        sentence = StringUtils.leftPad("", sizes[i], '*');
        long start = System.nanoTime();
        outToServer.writeBytes(sentence + '\n');
        modifiedSentence = inFromServer.readLine();
        long end = System.nanoTime();
        System.out.println("FROM SERVER: " + modifiedSentence);
        timeArray[i] = (int) (end - start);
        simpleReg.addData(timeArray[i] * Math.pow(10, -9), sizes[i] * 2); // each char is 2 bytes
    }
    return timeArray;
}
When I get the slope, it gives me a bandwidth on the order of kilobytes, yet the two machines are on the same network and the bandwidth should be much higher. What am I doing wrong?
Are you obliged to use linear regression, or could it be a different estimator? I am actually not sure that linear regression is the best approach here. I am curious: do you happen to know any sources that suggest using it in this kind of situation?
Note that the initial BW measurements in particular are much smaller than the real maximal goodput (due to TCP slow start), so it is important to use an estimator that copes with large outliers.
In previous work I have used the harmonic mean to monitor the bandwidth over a longer period of time and it worked pretty well, also on links with a large bandwidth. The advantage of the harmonic mean over other means is that, while it is still very easy to compute, it mitigates the impact of large outliers, meaning the estimate is not as easily skewed.
Given a series of bandwidth measurements R_i, where i = 0, 1, 2, ..., the running harmonic mean R_total is updated with each new measurement R_n as follows (the R_total on the right-hand side is the previous value, computed over the first n measurements):
R_total = (n + 1) / ((n / R_total) + (1 / R_n))
It is also good practice to skip the first few measurement values (depending on how often you measure), e.g., R_0..R_5, since you might see initial bursts due to setup work in the different layers, and you are in the slow-start phase anyway.
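As a quick worked example (the numbers are made up): if the running mean is R_total = 2 MB/s after n = 2 measurements and an outlier measurement of 50 MB/s arrives, the update gives 3 / (2/2 + 1/50) ≈ 2.9 MB/s, whereas an arithmetic mean over the same three samples would jump to about 18 MB/s.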
Here is an example implementation in Java. Even though in this case the measurement is done through a file download, it can easily be applied to your environment too - simply use your echo server instead of the file download:
import java.io.InputStream;
import java.io.PrintWriter;
import java.net.Socket;

public class Estimator {

    private static double R;      // harmonic mean of all bandwidth measurements
    private static int n = 0;     // number of measurements
    private static int skips = 5; // skip measurements for first 5 read() operations

    // size in bytes
    // start/end in ns
    public static double harmonicMean(long start, long end, double size) {
        // check if we need to skip this initial value, since it might falsify our estimate
        if (skips-- > 0) return 0;
        // get current value of R
        double curR = (size / (1024 * 1024)) / ((end - start) * Math.pow(10, -9));
        System.out.println(curR);
        if (n == 0) {
            // initial value
            R = curR;
        } else {
            // use harmonic mean
            R = (n + 1) / ((n / R) + (1 / curR));
        }
        n++;
        return R;
    }

    public static void main(String[] args) {
        // temporary buffer to hold bytes
        byte[] buffer = new byte[1024 * 1024 * 10]; // 10MB buffer - just in case ...
        Socket socket = null;
        try {
            // measurement done through file download from server
            // prepare request
            socket = new Socket("yourserver.com", 80);
            PrintWriter pw = new PrintWriter(socket.getOutputStream());
            InputStream is = socket.getInputStream();
            pw.println("GET /test_blob HTTP/1.1"); // a test file, e.g., 1MB big
            pw.println("Host: yourserver.com");
            pw.println("");
            pw.flush();
            // prepare measurement
            long start, end;
            int bytes;
            double totalBytes = 0;
            start = System.nanoTime();
            while ((bytes = is.read(buffer)) != -1) {
                // a read() completed -> update the harmonic mean
                end = System.nanoTime();
                totalBytes += bytes;
                harmonicMean(start, end, totalBytes);
            }
            // clean up
            is.close();
            pw.close();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (socket != null) {
                try {
                    socket.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
        System.out.println(R + " MB/s");
    }
}
Additionally, for the sake of completeness: as I already mentioned in the comments, it is important that the test messages/files are big enough that TCP can reach the full goodput potential of the link.
Please also note that this is a simplified way to estimate the bandwidth. In this example we start measuring (taking the first timestamp) when the request is sent, meaning we include the link propagation and the server processing delay, which in turn reduces the estimated value. Anyway, since you seem to be on a local network, I expect the sum of these delays to be rather small, which means they will not skew the final estimate too much.
I wrote a small blog post concerning measuring TCP connection metrics inside an application layer. Everything is described in more detail there (though the code examples are in C).
I'm working on a homework assignment whose purpose is to show how increasing the number of threads can help or hurt a program's performance. The basic idea is to thread individual requests for data from a website, then determine how long it takes to perform all the queries when one runs n queries simultaneously.
I think I have the threading and the timing done properly, but something odd is going on with the requests. I am using java.net.URLConnection to connect to the databases. My first three thousand or so connections succeed and load. Then several hundred or so calls fail without any evidence of Java having tried for the specified timeout period.
The code I run in a thread is as follows:
/* This code to get the contents from an URL was adapted from a
 * StackOverflow question found at http://goo.gl/QPqR4 .
 */
private static String loadContent(String address) throws Exception {
    String toReturn = "";
    try {
        URL url = new URL(address);
        URLConnection con = url.openConnection();
        con.setConnectTimeout(5000);
        con.setReadTimeout(5000);
        InputStream stream = con.getInputStream();
        Reader r = new InputStreamReader(stream, "ISO-8859-1");
        while (true) {
            int ch = r.read();
            if (ch < 0) {
                break;
            }
            toReturn += (char) ch;
        }
        r.close();
        stream.close();
    } catch (Exception e) {
        System.out.println(address + ": " + e.getMessage());
        throw e;
    }
    return toReturn;
}
The code for running the threads is as follows. The NormalPerformance class is one I wrote to simplify calculating the mean and variance of a series of observations.
/* This code is patterned after code provided by my professor.
 */
private static NormalPerformance performExperiment(int threads, int runs)
        throws Exception {
    NormalPerformance toReturn = new NormalPerformance();
    for (int i = 0; i < runs; i++) {
        final List<Callable<Void>> tasks = new ArrayList<Callable<Void>>();
        for (int j = 0; j < URLS.length; j++) {
            final String url = URLS[j];
            tasks.add(new Callable<Void>() {
                public Void call() throws Exception {
                    loadContent(url);
                    return null;
                }
            });
        }
        long start = System.nanoTime();
        final ExecutorService executorPool = Executors.newFixedThreadPool(threads);
        executorPool.invokeAll(tasks);
        executorPool.shutdown();
        double time = (System.nanoTime() - start) / 1000000000.;
        toReturn.addObservation(time);
        System.out.println("" + threads + " " + (i + 1) + ": " + time);
    }
    return toReturn;
}
Why am I seeing this odd pattern of success and failure? Even stranger, there are times when killing the program and restarting it does nothing to stop the run of failures. I've tried things like forcing threads to sleep, calling System.gc(), and increasing the connection and read timeout values, but none of these, alone or combined, has fixed the problem.
How can I guarantee that my connections have the best chance possible of connecting?
Environment:
Windows 7 64-bit,
Eclipse Juno 64-bit,
JRE 7
I'm reading a file which contains 500,000 rows.
I'm testing to see how multiple threads speed up the process...
private void multiThreadRead(int num) {
    for (int i = 1; i <= num; i++) {
        new Thread(readIndivColumn(i), "" + i).start();
    }
}

private Runnable readIndivColumn(final int colNum) {
    return new Runnable() {
        @Override
        public void run() {
            try {
                long startTime = System.currentTimeMillis();
                System.out.println("From Thread no:" + colNum + " Start time:" + startTime);
                RandomAccessFile raf = new RandomAccessFile("./src/test/test1.csv", "r");
                String line = "";
                //System.out.println("From Thread no:" + colNum);
                while ((line = raf.readLine()) != null) {
                    //System.out.println(line);
                    //System.out.println(StatUtils.getCellValue(line, colNum));
                }
                long elapsedTime = System.currentTimeMillis() - startTime;
                String formattedTime = String.format("%d min, %d sec",
                        TimeUnit.MILLISECONDS.toMinutes(elapsedTime),
                        TimeUnit.MILLISECONDS.toSeconds(elapsedTime) -
                                TimeUnit.MINUTES.toSeconds(TimeUnit.MILLISECONDS.toMinutes(elapsedTime))
                );
                System.out.println("From Thread no:" + colNum + " Finished Time:" + formattedTime);
            } catch (Exception e) {
                System.out.println("From Thread no:" + colNum + "===>" + e.getMessage());
                e.printStackTrace();
            }
        }
    };
}

private void sequentialRead(int num) {
    try {
        long startTime = System.currentTimeMillis();
        System.out.println("Start time:" + startTime);
        for (int i = 0; i < num; i++) {
            RandomAccessFile raf = new RandomAccessFile("./src/test/test1.csv", "r");
            String line = "";
            while ((line = raf.readLine()) != null) {
                //System.out.println(line);
            }
        }
        long elapsedTime = System.currentTimeMillis() - startTime;
        String formattedTime = String.format("%d min, %d sec",
                TimeUnit.MILLISECONDS.toMinutes(elapsedTime),
                TimeUnit.MILLISECONDS.toSeconds(elapsedTime) -
                        TimeUnit.MINUTES.toSeconds(TimeUnit.MILLISECONDS.toMinutes(elapsedTime))
        );
        System.out.println("Finished Time:" + formattedTime);
    } catch (Exception e) {
        e.printStackTrace();
    }
}

public TesterClass() {
    sequentialRead(1);
    this.multiThreadRead(1);
}
For num = 1 I get the following result:
Start time:1326224619049
Finished Time:2 min, 14 sec
Sequential read ENDS...........
Multi-Thread read starts:
From Thread no:1 Start time:1326224753606
From Thread no:1 Finished Time:2 min, 13 sec
Multi-Thread read ENDS.....
For num = 5 I get the following result:
formatted Time:10 min, 20 sec
Sequential read ENDS...........
Multi-Thread read starts:
From Thread no:1 Start time:1326223509574
From Thread no:3 Start time:1326223509574
From Thread no:4 Start time:1326223509574
From Thread no:5 Start time:1326223509574
From Thread no:2 Start time:1326223509574
From Thread no:4 formatted Time:5 min, 54 sec
From Thread no:2 formatted Time:6 min, 0 sec
From Thread no:3 formatted Time:6 min, 7 sec
From Thread no:5 formatted Time:6 min, 23 sec
From Thread no:1 formatted Time:6 min, 23 sec
Multi-Thread read ENDS.....
My question is: shouldn't the multi-threaded read also take approx. 2 min, 13 sec?
Can you please explain why the multi-threaded solution takes so much longer?
Thanks in advance.
The reason you are seeing a slowdown when reading in parallel is that the magnetic hard disk head needs to seek to the next read position (taking about 5 ms) for each thread. Thus, reading with multiple threads effectively bounces the disk between seeks, slowing it down. The only recommended way to read a file from a single disk is to read it sequentially with one thread.
Since file reading is mainly waiting for disk I/O, you have the problem that the disk won't spin faster just because it's used by many threads :)
Reading from a file is an inherently serial process (assuming no caching), meaning there is a limit to how fast you can retrieve data from it. Even without file locks (i.e., opening the file read-only), all the threads after the first will just block on the disk read, so you make all the other threads wait, and whichever one is active when the data becomes available is the one that processes the next block. A single-reader design, sketched below, avoids this contention.
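A sketch of that single-reader pattern (the file name matches the question; the worker count, queue size, and poison-pill scheme are illustrative assumptions, and the parse step is left as a comment):
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SingleReaderSketch {
    private static final String POISON = new String("EOF"); // unique sentinel object

    public static void main(String[] args) throws Exception {
        final BlockingQueue<String> queue = new ArrayBlockingQueue<String>(1024);
        final int workers = 4;
        // Workers do the CPU-bound parsing in parallel
        for (int w = 0; w < workers; w++) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        String line;
                        while ((line = queue.take()) != POISON) {
                            // parse the CSV line here
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }).start();
        }
        // A single thread reads sequentially, keeping the disk streaming
        BufferedReader reader = new BufferedReader(new FileReader("./src/test/test1.csv"));
        String line;
        while ((line = reader.readLine()) != null) {
            queue.put(line);
        }
        reader.close();
        for (int w = 0; w < workers; w++) {
            queue.put(POISON); // one sentinel per worker signals shutdown
        }
    }
}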
I have to make web calls to an external server at a rate of 5 TPS. Each call usually takes around 7 seconds to complete. How do I implement this? Would you recommend PHP for it?
Here's a Java solution, since you tagged your question with Java. It makes a request to a web site 5 times per second. Since you indicated that those requests can take a long time, it uses up to 50 threads at once so the submissions don't get blocked.
final URL url = new URL("http://whitefang34.com");
Runnable runnable = new Runnable() {
    public void run() {
        try {
            InputStream in = url.openStream();
            // process input
            in.close();
        } catch (IOException e) {
            // deal with exception
        }
    }
};

ExecutorService service = Executors.newFixedThreadPool(50);
long nextTime = System.currentTimeMillis();
while (true) {
    service.submit(runnable);
    long waitTime = nextTime - System.currentTimeMillis();
    Thread.sleep(Math.max(0, waitTime));
    nextTime += 200;
}