I implemented a TCP client-server model to test the bandwidth to my server: I send a number of packets of different sizes, measure the RTT for each, and then estimate the bandwidth through linear regression.
Here is the server code:
import java.io.*;
import java.net.*;
public class Server implements Runnable {
ServerSocket welcomeSocket;
String clientSentence;
Thread thread;
Socket connectionSocket;
BufferedReader inFromClient;
DataOutputStream outToClient;
public Server() throws IOException {
welcomeSocket = new ServerSocket(6588);
connectionSocket = welcomeSocket.accept();
inFromClient = new BufferedReader(new InputStreamReader(connectionSocket.getInputStream()));
outToClient = new DataOutputStream(connectionSocket.getOutputStream());
thread = new Thread(this);
thread.start();
}
@Override
public void run() {
// TODO Auto-generated method stub
while(true)
{
try {
clientSentence = inFromClient.readLine();
if (clientSentence != null) {
System.out.println("Received: " + clientSentence);
outToClient.writeBytes(clientSentence + '\n');
}
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
public static void main(String[] args) throws IOException {
new Server();
}
}
And this is the method in the Client class that returns an array of the RTTs, one per packet:
public int [] getResponseTime() throws UnknownHostException, IOException {
timeArray = new int[sizes.length];
for (int i = 0; i < sizes.length; i++) {
sentence = StringUtils.leftPad("", sizes[i], '*');
long start = System.nanoTime();
outToServer.writeBytes(sentence + '\n');
modifiedSentence = inFromServer.readLine();
long end = System.nanoTime();
System.out.println("FROM SERVER: " + modifiedSentence);
timeArray[i] = (int) (end - start);
simpleReg.addData(timeArray[i]* Math.pow(10, -9), sizes[i] * 2); // each char is 2 bytes
}
return timeArray;
}
When I compute the slope, it gives me a bandwidth on the order of kilobytes, yet the machines are on the same network and the bandwidth should be much higher. What am I doing wrong?
Are you obliged to use linear regression, or could it be a different estimator? I am actually not sure that linear regression is the best approach here. I am curious: do you happen to know any sources that suggest its use in this kind of situation?
Note that especially the initial BW measurements will be much smaller than the real maximal goodput (due to TCP slow start), so it is important to use an estimator that is robust against such large outliers.
In previous work I have used the harmonic mean to monitor the bandwidth over a longer period of time and it worked pretty well (also on links with a large bandwidth). The advantage of the harmonic mean over other means is that, while it is still very easy to compute, it mitigates the impact of large outliers, meaning the estimate is not as easily falsified.
Given a series of bandwidth measurements R_i, where i = 0, 1, 2, ..., n-1, the running harmonic mean R_total is updated with each new measurement R_n as:
R_total = (n+1) / (n/R_total + 1/R_n)
where the R_total on the right-hand side is the harmonic mean of the previous n measurements.
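As a quick numeric illustration (my example, not from the measurements above): for bandwidth samples of 10, 10 and 1000 MB/s the harmonic mean is 3/(1/10 + 1/10 + 1/1000) ≈ 14.9 MB/s, whereas the arithmetic mean of 340 MB/s would be dominated by the single large outlier.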
It is also good practice to skip the first few measurement values (depending on how often you measure), e.g., R_0..R_5, since you might see initial bursts due to preparations in the different layers and you are in the slow-start phase anyway.
Here is an example implementation in Java. Even though in this case the measurement is done through a file download, it can easily be applied to your environment as well; simply use your echo server instead of the file download:
import java.io.InputStream;
import java.io.PrintWriter;
import java.net.Socket;

public class Estimator
{
private static double R; // harmonic mean of all bandwidth measurements
private static int n = 0; // number of measurements
private static int skips = 5; // skip measurements for first 5 socket.read() operations
// size in bytes
// start/end in ns
public static double harmonicMean(long start, long end, double size){
// check if we need to skip this initial value, since it might falsify our estimate
if(skips-- > 0) return 0;
// get current value of R
double curR = (size/(1024*1024))/(double)((end - start)*Math.pow(10, -9));
System.out.println(curR);
if(n == 0) {
// initial value
R = curR;
} else {
// use harmonic mean
R = (n+1)/((n/R)+(1/curR));
}
n++;
return R;
}
public static void main(String[] args)
{
// temporary buffer to hold bytes
byte[] buffer = new byte[1024*1024*10]; // 10MB buffer - just in case ...
Socket socket = null;
try {
// measurement done through file download from server
// prepare request
socket = new Socket("yourserver.com",80);
PrintWriter pw = new PrintWriter(socket.getOutputStream());
InputStream is = socket.getInputStream();
pw.println("GET /test_blob HTTP/1.1"); // a test file, e.g., 1MB big
pw.println("Host: yourserver.com");
pw.println("");
pw.flush();
// prepare measurement
long start,end;
int bytes; // number of bytes returned by the last read() call
double totalBytes = 0;
start = System.nanoTime();
while((bytes = is.read(buffer)) != -1) {
// socket.read() occurred -> calculate harmonic mean
end = System.nanoTime();
totalBytes += bytes;
harmonicMean(start, end, totalBytes);
}
// clean up
is.close();
pw.close();
}
catch(Exception e){
e.printStackTrace();
}
finally {
if(socket != null) {
try{
socket.close();
}
catch(Exception e){
e.printStackTrace();
}
}
}
System.out.println(R+" MB/s");
}
}
Additionally, for the sake of completeness: as I already mentioned in the comments, it is important that the test messages/files are big enough for TCP to reach the full goodput potential of the link.
Please also note that this is a simplified way to estimate the bandwidth. In this example we start measuring (taking the first timestamp) from when the request is sent, meaning we include the link propagation and server processing delay, which in turn will reduce the overall estimated value. Anyway, since you seem to be on a local network, I expect the sum of these delays to be rather small, which means they will not falsify the final estimate too much.
I wrote a small blog post concerning measuring TCP connection metrics inside an application layer. Everything is described in more detail there (though the code examples are in C).
My code throws an OutOfMemoryError when running the following line:
int numBytes = socketChannel.write(_send_buffer);
where socketChannel is an instance of java.nio.channels.SocketChannel
and _send_buffer is an instance of java.nio.ByteBuffer
The code arrives at this point via a non-blocking selector write operation, and throws this on the first attempt to write when the capacity of _send_buffer is large. I have no issues with the code when _send_buffer is less than 20Mb, but when attempting to test this with larger buffers (e.g. > 100Mb) it fails.
According to the docs for java.nio.channels.SocketChannel.write():
An attempt is made to write up to r bytes to the channel, where r is the number of bytes remaining in the buffer, that is, src.remaining(), at the moment this method is invoked.
Suppose that a byte sequence of length n is written, where 0 <= n <= r. This byte sequence will be transferred from the buffer starting at index p, where p is the buffer's position at the moment this method is invoked; the index of the last byte written will be p + n - 1. Upon return the buffer's position will be equal to p + n; its limit will not have changed.
Unless otherwise specified, a write operation will return only after writing all of the r requested bytes. Some types of channels, depending upon their state, may write only some of the bytes or possibly none at all. A socket channel in non-blocking mode, for example, cannot write any more bytes than are free in the socket's output buffer.
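In other words, a non-blocking write normally has to be driven in a loop across readiness events. A generic sketch of that pattern (placeholder names, not my actual code):
// Write as much of the pending buffer as the kernel will accept right now;
// when write() returns 0 the send buffer is full, so keep OP_WRITE armed
// and retry on the next readiness event instead of looping.
void onWritable(SelectionKey key, ByteBuffer pending) throws IOException {
    SocketChannel ch = (SocketChannel) key.channel();
    while (pending.hasRemaining()) {
        int n = ch.write(pending); // may write 0 bytes in non-blocking mode
        if (n == 0) {
            key.interestOps(SelectionKey.OP_WRITE); // wait for the next OP_WRITE
            return;
        }
    }
    key.interestOps(SelectionKey.OP_READ); // drained; stop asking for OP_WRITE
}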
My channels should be set up to be non-blocking, so I would think the write operation should only attempt to write up to the capacity of the socket's output buffer. As I had not previously specified this, I tried setting it to 1024 bytes via the setOption method with the SO_SNDBUF option, i.e.:
socketChannel.setOption(SO_SNDBUF, 1024);
I am still getting the OutOfMemoryError, though. Here is the full error message:
2021-04-22 11:52:44.260 11591-11733/jp.oist.abcvlib.serverLearning I/.serverLearnin: Clamp target GC heap from 195MB to 192MB
2021-04-22 11:52:44.260 11591-11733/jp.oist.abcvlib.serverLearning I/.serverLearnin: Alloc concurrent copying GC freed 2508(64KB) AllocSpace objects, 0(0B) LOS objects, 10% free, 171MB/192MB, paused 27us total 12.714ms
2021-04-22 11:52:44.261 11591-11733/jp.oist.abcvlib.serverLearning W/.serverLearnin: Throwing OutOfMemoryError "Failed to allocate a 49915610 byte allocation with 21279560 free bytes and 20MB until OOM, target footprint 201326592, growth limit 201326592" (VmSize 5585608 kB)
2021-04-22 11:52:44.261 11591-11733/jp.oist.abcvlib.serverLearning I/.serverLearnin: Starting a blocking GC Alloc
2021-04-22 11:52:44.261 11591-11733/jp.oist.abcvlib.serverLearning I/.serverLearnin: Starting a blocking GC Alloc
Now I can debug inline and stop at the write line without anything crashing, so I believe there is no problem holding the memory for the _send_buffer itself; but when attempting to write, something in the background creates another allocation that is too much to handle.
Maybe I'm thinking about this wrong and need to limit my _send_buffer size to something smaller, but I'd think there should be a way to limit the allocation made by the write command, no? Or at least some way to allocate more of the Android memory to my app. I'm using a Pixel 3a, which according to the specs has 4 GB of RAM. I realize that has to be shared with the rest of the system, but this is a bare-bones test device (no games, personal apps, etc. are installed), so I'd assume I have access to a fairly large chunk of that 4 GB. As I'm crashing with a growth limit of 201,326,592 bytes (according to the logcat above), it seems strange that I'm crashing at 0.2 / 4.0 = 5% of the spec'd memory.
Any tips in the right direction about a fundamental flaw in my approach, or recommendations for avoiding the OutOfMemoryError would be much appreciated!
Edit 1:
Adding some code context as requested in the comments. Note this is not a runnable example, as the code base is quite large and I am not allowed to share it all due to company policies. Just note that the _send_buffer has nothing to do with the send buffer of the socketChannel itself (i.e. what is referenced by getSendBufferSize); it is just a ByteBuffer that I use to bundle everything together before sending it via the channel. As I can't share all the code related to generating the contents of _send_buffer, just note it is a ByteBuffer that can be very large (> 100Mb). If this is fundamentally a problem, then please point this out and why.
So with the above in mind, the NIO related code is pasted below. Note this is very prototype alpha code, so I apologize for the overload of comments and log statements.
SocketConnectionManager.java
(Essentially a Runnable in charge of the Selector)
Note the sendMsgToServer method is overridden (without modification) and called from the main Android activity (not shown). The byte[] episode arg is what gets wrapped into a ByteBuffer within SocketMessage.java (next section) which later gets put into the _send_buffer instance within the write method of SocketMessage.java.
package jp.oist.abcvlib.util;
import android.util.Log;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.SocketOption;
import java.nio.channels.CancelledKeyException;
import java.nio.channels.ClosedSelectorException;
import java.nio.channels.IllegalBlockingModeException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Set;
import static java.net.StandardSocketOptions.SO_SNDBUF;
public class SocketConnectionManager implements Runnable{
private SocketChannel sc;
private Selector selector;
private SocketListener socketListener;
private final String TAG = "SocketConnectionManager";
private SocketMessage socketMessage;
private final String serverIp;
private final int serverPort;
public SocketConnectionManager(SocketListener socketListener, String serverIp, int serverPort){
this.socketListener = socketListener;
this.serverIp = serverIp;
this.serverPort = serverPort;
}
@Override
public void run() {
try {
selector = Selector.open();
start_connection(serverIp, serverPort);
do {
int eventCount = selector.select(0); // number of keys whose state changed
Set<SelectionKey> events = selector.selectedKeys();
if (eventCount != 0){
Set<SelectionKey> selectedKeys = selector.selectedKeys();
for (SelectionKey selectedKey : selectedKeys){
try{
SocketMessage socketMessage = (SocketMessage) selectedKey.attachment();
socketMessage.process_events(selectedKey);
}catch (ClassCastException e){
Log.e(TAG,"Error", e);
Log.e(TAG, "selectedKey attachment not a SocketMessage type");
}
}
}
} while (selector.isOpen()); //todo remember to close the selector somewhere
} catch (IOException e) {
Log.e(TAG,"Error", e);
}
}
private void start_connection(String serverIp, int serverPort){
try {
InetSocketAddress inetSocketAddress = new InetSocketAddress(serverIp, serverPort);
sc = SocketChannel.open();
sc.configureBlocking(false);
sc.setOption(SO_SNDBUF, 1024);
socketMessage = new SocketMessage(socketListener, sc, selector);
Log.v(TAG, "registering with selector to connect");
int ops = SelectionKey.OP_CONNECT;
sc.register(selector, ops, socketMessage);
Log.d(TAG, "Initializing connection with " + inetSocketAddress);
boolean connected = sc.connect(inetSocketAddress);
Log.v(TAG, "socketChannel.isConnected ? : " + sc.isConnected());
} catch (IOException | ClosedSelectorException | IllegalBlockingModeException
| CancelledKeyException | IllegalArgumentException e) {
Log.e(TAG, "Initial socket connect and registration:", e);
}
}
public void sendMsgToServer(byte[] episode){
boolean writeSuccess = socketMessage.addEpisodeToWriteBuffer(episode);
}
/**
* Should be called prior to exiting app to ensure zombie threads don't remain in memory.
*/
public void close(){
try {
Log.v(TAG, "Closing connection: " + sc.getRemoteAddress());
selector.close();
sc.close();
} catch (IOException e) {
Log.e(TAG,"Error", e);
}
}
}
SocketMessage.java
This is greatly inspired from the example Python code given here, in particular the libclient.py and app-client.py. This is because the server is running python code and clients are running Java. So if you want the reasoning behind why things are the way they are, reference the RealPython socket tutorial. I essentially used the app-server.py as a template for my code, and translated (with modifications) to Java for the clients.
package jp.oist.abcvlib.util;
import android.util.Log;
import org.json.JSONException;
import org.json.JSONObject;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.text.DecimalFormat;
import java.util.Vector;
public class SocketMessage {
private final SocketChannel sc;
private final Selector selector;
private final ByteBuffer _recv_buffer;
private ByteBuffer _send_buffer;
private int _jsonheader_len = 0;
private JSONObject jsonHeaderRead; // Will tell Java at which points in msgContent each model lies (e.g. model1 is from 0 to 1018, model2 is from 1019 to 2034, etc.)
private byte[] jsonHeaderBytes;
private ByteBuffer msgContent; // Should contain ALL model files. Parse to individual files after reading
private final Vector<ByteBuffer> writeBufferVector = new Vector<>(); // List of episodes
private final String TAG = "SocketConnectionManager";
private JSONObject jsonHeaderWrite;
private boolean msgReadComplete = false;
private SocketListener socketListener;
private long socketWriteTimeStart;
private long socketReadTimeStart;
public SocketMessage(SocketListener socketListener, SocketChannel sc, Selector selector){
this.socketListener = socketListener;
this.sc = sc;
this.selector = selector;
this._recv_buffer = ByteBuffer.allocate(1024);
this._send_buffer = ByteBuffer.allocate(1024);
}
public void process_events(SelectionKey selectionKey){
SocketChannel sc = (SocketChannel) selectionKey.channel();
// Log.i(TAG, "process_events");
try{
if (selectionKey.isConnectable()){
sc.finishConnect();
Log.d(TAG, "Finished connecting to " + ((SocketChannel) selectionKey.channel()).getRemoteAddress());
Log.v(TAG, "socketChannel.isConnected ? : " + sc.isConnected());
}
if (selectionKey.isWritable()){
// Log.i(TAG, "write event");
write(selectionKey);
}
if (selectionKey.isReadable()){
// Log.i(TAG, "read event");
read(selectionKey);
// int ops = SelectionKey.OP_WRITE;
// sc.register(selectionKey.selector(), ops, selectionKey.attachment());
}
} catch (ClassCastException | IOException | JSONException e){
Log.e(TAG,"Error", e);
}
}
private void read(SelectionKey selectionKey) throws IOException, JSONException {
SocketChannel socketChannel = (SocketChannel) selectionKey.channel();
while(!msgReadComplete){
// At this point the _recv_buffer should have been cleared (pointer 0 limit=cap, no mark)
int bitsRead = socketChannel.read(_recv_buffer);
if (bitsRead > 0 || _recv_buffer.position() > 0){
if (bitsRead > 0){
// Log.v(TAG, "Read " + bitsRead + " bytes from " + socketChannel.getRemoteAddress());
}
// If you have not determined the length of the header via the 2 byte short protoheader,
// try to determine it, though there is no gaurantee it will have enough bytes. So it may
// pass through this if statement multiple times. Only after it has been read will
// _jsonheader_len have a non-zero length;
if (this._jsonheader_len == 0){
socketReadTimeStart = System.nanoTime();
process_protoheader();
}
// _jsonheader_len will only be larger than 0 if set properly (finished being set).
// jsonHeaderRead will be null until the buffer gathering it has filled and converted it to
// a JSONobject.
else if (this.jsonHeaderRead == null){
process_jsonheader();
}
else if (!msgReadComplete){
process_msgContent(selectionKey);
} else {
Log.e(TAG, "bitsRead but don't know what to do with them");
}
}
}
}
private void write(SelectionKey selectionKey) throws IOException, JSONException {
if (!writeBufferVector.isEmpty()){
SocketChannel socketChannel = (SocketChannel) selectionKey.channel();
Log.v(TAG, "writeBufferVector contains data");
if (jsonHeaderWrite == null){
int numBytesToWrite = writeBufferVector.get(0).limit();
// Create JSONHeader containing length of episode in Bytes
Log.v(TAG, "generating jsonheader");
jsonHeaderWrite = generate_jsonheader(numBytesToWrite);
byte[] jsonBytes = jsonHeaderWrite.toString().getBytes(StandardCharsets.UTF_8);
// Encode length of JSONHeader to first two bytes and write to socketChannel
int jsonLength = jsonBytes.length;
// Add up length of protoHeader, JSONheader and episode bytes
int totalNumBytesToWrite = Integer.BYTES + jsonLength + numBytesToWrite;
// Create new buffer that compiles protoHeader, JsonHeader, and Episode
_send_buffer = ByteBuffer.allocate(totalNumBytesToWrite);
Log.v(TAG, "Assembling _send_buffer");
// Assemble all bytes and flip to prepare to read
_send_buffer.putInt(jsonLength);
_send_buffer.put(jsonBytes);
_send_buffer.put(writeBufferVector.get(0));
_send_buffer.flip();
Log.d(TAG, "Writing to server ...");
// Write Bytes to socketChannel //todo shouldn't be while as should be non-blocking
if (_send_buffer.remaining() > 0){
int numBytes = socketChannel.write(_send_buffer); // todo memory dump error here!
int percentDone = (int) Math.ceil((((double) _send_buffer.limit() - (double) _send_buffer.remaining())
/ (double) _send_buffer.limit()) * 100);
int total = _send_buffer.limit() / 1000000;
// Log.d(TAG, "Sent " + percentDone + "% of " + total + "Mb to " + socketChannel.getRemoteAddress());
}
} else{
// Write Bytes to socketChannel
if (_send_buffer.remaining() > 0){
socketChannel.write(_send_buffer);
}
}
if (_send_buffer.remaining() == 0){
int total = _send_buffer.limit() / 1000000;
double timeTaken = (System.nanoTime() - socketWriteTimeStart) * 10e-10;
DecimalFormat df = new DecimalFormat();
df.setMaximumFractionDigits(2);
Log.i(TAG, "Sent " + total + "Mb in " + df.format(timeTaken) + "s");
// Remove episode from buffer so as to not write it again.
writeBufferVector.remove(0);
// Clear sending buffer
_send_buffer.clear();
// make null so as to catch the initial if statement to write a new one.
jsonHeaderWrite = null;
// Set socket to read now that writing has finished.
Log.d(TAG, "Reading from server ...");
int ops = SelectionKey.OP_READ;
sc.register(selectionKey.selector(), ops, selectionKey.attachment());
}
}
}
private JSONObject generate_jsonheader(int numBytesToWrite) throws JSONException {
JSONObject jsonHeader = new JSONObject();
jsonHeader.put("byteorder", ByteOrder.nativeOrder().toString());
jsonHeader.put("content-length", numBytesToWrite);
jsonHeader.put("content-type", "flatbuffer"); // todo Change to flatbuffer later
jsonHeader.put("content-encoding", "flatbuffer"); //Change to flatbuffer later
return jsonHeader;
}
/**
* recv_buffer may contain 0, 1, or several bytes. If it has more than hdrlen, then process
* the first two bytes to obtain the length of the jsonheader. Else exit this function and
* read from the buffer again until it fills past length hdrlen.
*/
private void process_protoheader() {
Log.v(TAG, "processing protoheader");
int hdrlen = 2;
if (_recv_buffer.position() >= hdrlen){
_recv_buffer.flip(); //pos at 0 and limit set to bitsRead
_jsonheader_len = _recv_buffer.getShort(); // Read 2 bytes converts to short and move pos to 2
// allocate new ByteBuffer to store full jsonheader
jsonHeaderBytes = new byte[_jsonheader_len];
_recv_buffer.compact();
Log.v(TAG, "finished processing protoheader");
}
}
/**
* As with the process_protoheader we will check if _recv_buffer contains enough bytes to
* generate the jsonHeader objects, and if not, leave it alone and read more from socket.
*/
private void process_jsonheader() throws JSONException {
Log.v(TAG, "processing jsonheader");
// If you have enough bytes in the _recv_buffer to write out the jsonHeader
if (_jsonheader_len - _recv_buffer.position() < 0){
_recv_buffer.flip();
_recv_buffer.get(jsonHeaderBytes);
// jsonheaderBuffer should now be full and ready to convert to a JSONobject
jsonHeaderRead = new JSONObject(new String(jsonHeaderBytes));
Log.d(TAG, "JSONheader from server: " + jsonHeaderRead.toString());
try{
int msgLength = (int) jsonHeaderRead.get("content-length");
msgContent = ByteBuffer.allocate(msgLength);
}catch (JSONException e) {
Log.e(TAG, "Couldn't get content-length from jsonHeader sent from server", e);
}
}
// Else return to selector and read more bytes into the _recv_buffer
// If there are any bytes left over (part of the msg) then move them to the front of the buffer
// to prepare for another read from the socket
_recv_buffer.compact();
}
/**
* Here it is a bit different, as it may take multiple full _recv_buffers to fill the msgContent.
* So check if msgContent.remaining() is larger than 0 and if so, dump everything from _recv_buffer into it.
* @param selectionKey : Used to reference the instance and selector
* @throws ClosedChannelException :
*/
private void process_msgContent(SelectionKey selectionKey) throws IOException {
if (msgContent.remaining() > 0){
_recv_buffer.flip(); //pos at 0 and limit set to bitsRead set ready to read
msgContent.put(_recv_buffer);
_recv_buffer.clear();
}
if (msgContent.remaining() == 0){
// msgContent should now be full and ready to convert to a various model files.
socketListener.onServerReadSuccess(jsonHeaderRead, msgContent);
// Clear for next round of communication
_recv_buffer.clear();
_jsonheader_len = 0;
jsonHeaderRead = null;
msgContent.clear();
int totalBytes = msgContent.capacity() / 1000000;
double timeTaken = (System.nanoTime() - socketReadTimeStart) * 10e-10;
DecimalFormat df = new DecimalFormat();
df.setMaximumFractionDigits(2);
Log.i(TAG, "Entire message containing " + totalBytes + "Mb recv'd in " + df.format(timeTaken) + "s");
msgReadComplete = true;
// Set socket to write now that reading has finished.
int ops = SelectionKey.OP_WRITE;
sc.register(selectionKey.selector(), ops, selectionKey.attachment());
}
}
//todo should send this to the mainactivity listener so it can be customized/overridden
private void onNewMessageFromServer(){
// Take info from JSONheader to parse msgContent into individual model files
// After parsing all models notify MainActivity that models have been updated
}
// todo should be able to deal with ByteBuffer from FlatBuffer rather than byte[]
public boolean addEpisodeToWriteBuffer(byte[] episode){
boolean success = false;
try{
ByteBuffer bb = ByteBuffer.wrap(episode);
success = writeBufferVector.add(bb);
Log.v(TAG, "Added data to writeBuffer");
int ops = SelectionKey.OP_WRITE;
socketWriteTimeStart = System.nanoTime();
sc.register(selector, ops, this);
// I want this to trigger the selector that this channel is writeReady.
} catch (NullPointerException | ClosedChannelException e){
Log.e(TAG,"Error", e);
Log.e(TAG, "SocketConnectionManager.data not initialized yet");
}
return success;
}
}
Stumbled upon this in the Android docs, which answers the question of why I get the OutOfMemoryError:
To maintain a functional multi-tasking environment, Android sets a hard limit on the heap size for each app. The exact heap size limit varies between devices based on how much RAM the device has available overall. If your app has reached the heap capacity and tries to allocate more memory, it can receive an OutOfMemoryError.
In some cases, you might want to query the system to determine exactly how much heap space you have available on the current device—for example, to determine how much data is safe to keep in a cache. You can query the system for this figure by calling getMemoryClass(). This method returns an integer indicating the number of megabytes available for your app's heap.
After running the ActivityManager.getMemoryClass method, I see that my Pixel 3a has a hard limit of 192 MB. As I was trying to allocate just over 200 MB, I hit this limit.
I also checked ActivityManager.getLargeMemoryClass and see a hard limit of 512 MB there. So I can set my app to use a "largeHeap", but despite having 4 GB of RAM, I have a hard limit of 512 MB to work around.
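For reference, querying those limits looks roughly like this (standard Android API; context is whatever Context is at hand):
ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
int heapMb = am.getMemoryClass();           // 192 on my Pixel 3a
int largeHeapMb = am.getLargeMemoryClass(); // 512 with android:largeHeap="true"
Log.i(TAG, "heap limit: " + heapMb + "MB, largeHeap limit: " + largeHeapMb + "MB");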
Unless someone knows a way around this, I'll have to write some logic to piecewise write the episode to a file if it grows above a certain point, and piecewise send it over the channel later (sketched below). This will slow things down a fair bit I guess, so if anyone has an answer that avoids this, or can tell me why it won't slow things down if done properly, I'm happy to give you the answer. Just posting this as an answer since it does answer my original question, albeit unsatisfactorily.
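A rough sketch of the piecewise idea, assuming the existing writeBufferVector queue (the chunk size is an arbitrary choice of mine, and this only addresses the allocation; the header/framing logic would still need to handle multiple chunks):
private static final int CHUNK_SIZE = 4 * 1024 * 1024; // 4MB per chunk (arbitrary)

// Queue the episode as fixed-size slices; wrap() shares the episode array,
// so no second >100MB allocation is needed the way the single big
// _send_buffer required.
public void queueEpisodeChunks(byte[] episode) {
    int offset = 0;
    while (offset < episode.length) {
        int len = Math.min(CHUNK_SIZE, episode.length - offset);
        writeBufferVector.add(ByteBuffer.wrap(episode, offset, len));
        offset += len;
    }
}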
I currently have a game, for which I have implemented a client and a server.
The server sends data to the client about its position, the client sends movement inputs to the server, etc.
The problem is that the CPU skyrockets to 100%. I have traced the high usage directly to the following code, which is in an update() method that is called ten times per second:
try{
sendToClientUDP(("ID:" + String.valueOf(uid)));
sendToClientUDP(("Scale:" + GameServer.scale));
for (Clients cl : GameServer.players){
//sendToClient(("newShip;ID:" + cl.uid).getBytes(), packet.getAddress(), packet.getPort());
sendToClientUDP((("UID:" + cl.uid +";x:" + cl.x)));
sendToClientUDP((("UID:" + cl.uid +";y:" + cl.y)));
sendToClientUDP((("UID:" + cl.uid +";z:" + cl.z)));
sendToClientUDP((("UID:" + cl.uid +";Rotation:" + (cl.rotation))));
cl.sendToClientUDP(new String("newShip;ID:" + uid));
sendToClientUDP(new String("newShip;ID:" + cl.uid));
}
}catch (Exception e){
e.printStackTrace();
}
Removing the code, and the high CPU usage disappears.
Here is my sendToClientUDP() method.
public void sendToClientUDP(String str){
if (!NPC){ //NPC is checking if it is a computer-controlled player.
UDP.sendData(str.getBytes(), ip, port);
}
}
And here is my UDP.sendData() method:
public static void sendData(String data, InetAddress ip, int port) {
sendData(data.getBytes(), ip, port);
}
public static void sendData(byte[] data, InetAddress ip, int port) {
DatagramPacket packet = new DatagramPacket(data, data.length, ip, port);
try {
socket.send(packet);
} catch (IOException e) {
e.printStackTrace();
}
}
Why is so much CPU being used simply by sending UDP packets? And what, if anything, can I do to reduce it?
I suggest you take out or optimise the code which is producing so much CPU. A CPU profiler is the best place to start, but these are the likely causes of the CPU consumption:
Creating Strings and byte[] is expensive; I would avoid doing that.
Creating multiple packets instead of batching them is also expensive.
Creating a new DatagramPacket each time can be avoided.
I would remove duplication between messages, as it adds redundant work you can avoid.
You might consider using a binary format to avoid the overhead of converting to/from text.
There is almost never a good time to use new String(); it is almost certainly redundant.
EDIT: This is what I had in mind. Instead of sending 5 packets per client, you send just one packet, total. For ten clients you send 1/50 of the packets, reducing the overhead.
import java.io.IOException;
import java.net.*;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.ArrayList;
import java.util.List;
/**
* Created by peter on 31/07/15.
*/
public class PacketSender {
public static void main(String[] args) throws IOException {
PacketSender ps = new PacketSender(InetAddress.getByName("localhost"), 12345);
List<Client> clients = new ArrayList<>();
for(int i=0;i<10;i++)
clients.add(new Client());
for(int t = 0; t< 3;t++) {
long start = System.nanoTime();
int tests = 100000;
for (int i = 0; i < tests; i++) {
ps.sendData(1234, 1, clients);
}
long time = System.nanoTime() - start;
System.out.printf("Sent %,d messages per second%n", (long) (tests * 1e9 / time));
}
}
final ThreadLocal<ByteBuffer> bufferTL = ThreadLocal.withInitial(() -> ByteBuffer.allocate(8192).order(ByteOrder.nativeOrder()));
final ThreadLocal<DatagramSocket> socketTL;
final ThreadLocal<DatagramPacket> packetTL;
public PacketSender(InetAddress address, int port) {
socketTL = ThreadLocal.withInitial(() -> {
try {
return new DatagramSocket(port, address);
} catch (SocketException e) {
throw new AssertionError(e);
}
});
packetTL = ThreadLocal.withInitial(() -> new DatagramPacket(bufferTL.get().array(), 0, address, port));
}
public void sendData(int uid, int scale, List<Client> clients) throws IOException {
ByteBuffer b = bufferTL.get();
b.clear();
b.putInt(uid);
b.putInt(scale);
b.putInt(clients.size());
for (Client cl : clients) {
b.putInt(cl.x);
b.putInt(cl.y);
b.putInt(cl.z);
b.putInt(cl.rotation);
b.putInt(cl.uid);
}
DatagramPacket dp = packetTL.get();
dp.setData(b.array(), 0, b.position());
socketTL.get().send(dp);
}
static class Client {
int x,y,z,rotation,uid;
}
}
When this performance test runs it prints
Sent 410,118 messages per second
Sent 458,126 messages per second
Sent 459,499 messages per second
Edit: to write/read text you can do the following.
import java.nio.ByteBuffer;
/**
* Created by peter on 09/08/2015.
*/
public enum ByteBuffers {
;
/**
* Writes in ISO-8859-1 encoding. This assumes the string is at most 127 bytes long.
*
* @param bb to write to
* @param cs to write from
*/
public static void writeText(ByteBuffer bb, CharSequence cs) {
// change to stop bit encoding to have lengths > 127
assert cs.length() < 128;
bb.put((byte) cs.length());
for (int i = 0, len = cs.length(); i < len; i++)
bb.put((byte) cs.charAt(i));
}
public static StringBuilder readText(ByteBuffer bb, StringBuilder sb) {
int len = bb.get();
assert len >= 0;
sb.setLength(0);
for (int i = 0; i < len; i++)
sb.append((char) (bb.get() & 0xFF));
return sb;
}
private static final ThreadLocal<StringBuilder> SB = ThreadLocal.withInitial(StringBuilder::new);
public static String readText(ByteBuffer bb) {
// TODO use a string pool to reduce String garbage.
return readText(bb, SB.get()).toString();
}
}
If you need something more complicated you should consider using Chronicle-Bytes which I wrote. It has
support for 64-bit memory sizes, including memory mapping 64-bit.
thread safe operation off heap.
UTF-8 encoding of strings.
compressed types such as stop bit encoding.
automatic string pooling to reduce garbage.
deterministic clean up of off heap resources via reference counting.
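For reference, the stop-bit encoding that the comment in writeText alludes to can be sketched generically as below. This is the underlying varint-style idea (7 payload bits per byte, high bit set means "more bytes follow"), not Chronicle's actual API:
// Encode a non-negative length in as few bytes as possible.
public static void writeStopBit(ByteBuffer bb, long n) {
    while ((n & ~0x7FL) != 0) {
        bb.put((byte) ((n & 0x7F) | 0x80)); // low 7 bits plus continuation flag
        n >>>= 7;
    }
    bb.put((byte) n); // final byte, high bit clear
}

public static long readStopBit(ByteBuffer bb) {
    long value = 0;
    int shift = 0;
    byte b;
    do {
        b = bb.get();
        value |= (long) (b & 0x7F) << shift;
        shift += 7;
    } while ((b & 0x80) != 0); // continue while the stop bit is set
    return value;
}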
Info
I'm trying to find a way to read blocks of data from an incoming socket stream at a set interval, but ignoring the rest of the data and not closing the connection between reads. I was wondering if anyone had some advice?
The reason I ask is that I have been given a network-connected analogue-to-digital converter (ADC) and I want to write a simple oscilloscope application.
Basically once I connect to the ADC and send a few initialisation commands it then takes a few minutes to stabilise, at which point it starts throwing out measurements in a byte stream.
I want to read 1MB of data every few seconds and discard the rest. If I don't discard the rest, the ADC will buffer 512kB of readings and then pause, so any subsequent reads will be of old data. If I close the connection between reads, the ADC takes a while before it sends data again.
Problem
I wrote a simple Python script as a test; it used a continuously running thread which would read bytes into an unused buffer whenever a flag was set, and that seems to work fine.
When I tried this on Android I ran into problems: it seems that only some of the data is being discarded, and the ADC still pauses if the update interval is too long.
Where have I made the mistake(s)? My first guess is synchronisation, as I'm not sure it is working as intended (see the ThreadBucket class). I'll admit to having spent many hours playing with this, trying different sync permutations, buffer sizes, BufferedInputStream, and NIO, but with no luck.
Any input on this would be appreciated, I'm not sure if using a thread like this is the right way to go in Java.
Code
The Reader class sets up the thread, connects to the ADC, reads data on request and in between activates the bit bucket thread (I've omitted the initialisation and closing for clarity).
class Reader {
private static final int READ_SIZE = 1024 * 1024;
private String mServer;
private int mPort;
private Socket mSocket;
private InputStream mIn;
private ThreadBucket mThreadBucket;
private byte[] mData = new byte[1];
private final byte[] mBuffer = new byte[READ_SIZE];
Reader(String server, int port) {
mServer = server;
mPort = port;
}
void setup() throws IOException {
mSocket = new Socket(mServer, mPort);
mIn = mSocket.getInputStream();
mThreadBucket = new ThreadBucket(mIn);
mThreadBucket.start();
// Omitted: send a few init commands and look at the response
// Start discarding data
mThreadBucket.bucket(true);
}
private int readRaw(int samples) throws IOException {
int current = 0;
// Probably fixed size but may change
if (mData.length != samples)
mData = new byte[samples];
// Stop discarding data
mThreadBucket.bucket(false);
// Read in number of samples to mData
while (current < samples) {
int len = mIn.read(mBuffer);
if (current > samples)
current = samples;
if (current + len > samples)
len = samples - current;
System.arraycopy(mBuffer, 0, mData, current, len);
current += mBuffer.length;
}
// Discard data again until the next read
mThreadBucket.bucket(true);
return current;
}
}
The ThreadBucket class runs continuously, slurping data into the bit bucket while mBucket is true.
The synchronisation is meant to stop either thread from reading data whilst the other one is.
public class ThreadBucket extends Thread {
private static final int BUFFER_SIZE = 1024;
private final InputStream mIn;
private Boolean mBucket = false;
private boolean mCancel = false;
public ThreadBucket(final InputStream in) throws IOException {
mIn = in;
}
@Override
public void run() {
while (!mCancel && !Thread.currentThread().isInterrupted()) {
synchronized (this) {
if (mBucket)
try {
mIn.skip(BUFFER_SIZE);
} catch (final IOException e) {
break;
}
}
}
}
public synchronized void bucket(final boolean on) {
mBucket = on;
}
public void cancel() {
mCancel = true;
}
}
Thank you.
You need to read continuously, period, as fast as you can code it, and then manage what you do with the data separately. Don't mix the two up.
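A minimal sketch of that structure (my own names, untested against the ADC): one thread always drains the socket, and a flag decides whether the bytes are captured or discarded, so the device never stalls on a full buffer:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicBoolean;

class ContinuousReader extends Thread {
    private final InputStream in;
    private final AtomicBoolean capture = new AtomicBoolean(false);
    private final ByteArrayOutputStream captured = new ByteArrayOutputStream();

    ContinuousReader(InputStream in) { this.in = in; }

    @Override
    public void run() {
        byte[] buf = new byte[64 * 1024];
        int n;
        try {
            while ((n = in.read(buf)) != -1) { // always reading, never pausing
                if (capture.get()) {
                    synchronized (captured) {
                        captured.write(buf, 0, n);
                    }
                } // else: discard the bytes
            }
        } catch (IOException ignored) {
        }
    }

    void setCapture(boolean on) { capture.set(on); }

    byte[] drainCaptured() { // called by the oscilloscope thread every few seconds
        synchronized (captured) {
            byte[] data = captured.toByteArray();
            captured.reset();
            return data;
        }
    }
}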
I have two threads that increase the CPU overhead.
1. Reading from the socket in a synchronous way.
2. Waiting to accept connections from other clients
Problem 1: I'm just trying to read any data that comes from the client, and I cannot use readLine because the incoming data contains newlines that I use to mark the end of a message header. So I'm using the following approach in a thread, but it increases the CPU overhead:
public static String convertStreamToString(TCPServerConnectionListner socket) throws UnsupportedEncodingException, IOException, InterruptedException {
BufferedReader reader = new BufferedReader(new InputStreamReader(socket.getSocket().getInputStream()));
// At this point it is too early to read, so it most likely returns false
System.out.println("Buffer Reader ready? " + reader.ready());
// StringBuilder to hold the response
StringBuilder sb = new StringBuilder();
// Indicator to show if we have started to receive data or not
boolean dataStreamStarted = false;
// How many times we went to sleep waiting for data
int sleepCounter = 0;
// How many times (max) we will sleep before bailing out
int sleepMaxCounter = 5;
// Sleep max counter after data started
int sleepMaxDataCounter = 50;
// How long to sleep for each cycle
int sleepTime = 5;
// Start time
long startTime = System.currentTimeMillis();
// This is a tight loop. Not sure what it will do to CPU
while (true) {
if (reader.ready()) {
sb.append((char) reader.read());
// Once started we do not expect server to stop in the middle and restart
dataStreamStarted = true;
} else {
Thread.sleep(sleepTime);
if (dataStreamStarted && (sleepCounter >= sleepMaxDataCounter)) {
System.out.println("Reached max sleep time of " + (sleepMaxDataCounter * sleepTime) + " ms after data started");
break;
} else {
if (sleepCounter >= sleepMaxCounter) {
System.out.println("Reached max sleep time of " + (sleepMaxCounter * sleepTime) + " ms. Bailing out");
// Reached max timeout waiting for data. Bail..
break;
}
}
sleepCounter++;
}
}
long endTime = System.currentTimeMillis();
System.out.println(sb.toString());
System.out.println("Time " + (endTime - startTime));
return sb.toString();
}
Problem 2: I don't know the best way to do this. I just have a thread that constantly waits for other clients and accepts them, but this also causes a lot of CPU overhead:
// Listener to accept any client connection
@Override
public void run() {
while (true) {
try {
mutex.acquire();
if (!welcomeSocket.isClosed()) {
connectionSocket = welcomeSocket.accept();
// Thread.sleep(5);
}
} catch (IOException ex) {
Logger.getLogger(TCPServerConnectionListner.class.getName()).log(Level.SEVERE, null, ex);
} catch (InterruptedException ex) {
Logger.getLogger(TCPServerConnectionListner.class.getName()).log(Level.SEVERE, null, ex);
}
finally
{
mutex.release();
}
}
}
}
A profiler picture might also help; I'm wondering why the SwingWorker thread takes that much time.
Update Code For Problem One:
public static String convertStreamToString(TCPServerConnectionListner socket) throws UnsupportedEncodingException, IOException, InterruptedException {
byte[] resultBuff = new byte[0];
byte[] buff = new byte[65534];
int k = -1;
k = socket.getSocket().getInputStream().read(buff, 0, buff.length);
byte[] tbuff = new byte[resultBuff.length + k]; // temp buffer size = bytes already read + bytes last read
System.arraycopy(resultBuff, 0, tbuff, 0, resultBuff.length); // copy previous bytes
System.arraycopy(buff, 0, tbuff, resultBuff.length, k); // copy current lot
resultBuff = tbuff; // call the temp buffer as your result buff
return new String(resultBuff);
}
}
[profiler snapshot omitted]
Just get rid of the ready() call and block. Everything you do while ready() is false is literally a complete waste of time, including the sleep. The read() will block for exactly the right amount of time. A sleep() won't. You are either not sleeping for long enough, which wastes CPU time, or too long, which adds latency. Once in a while you may sleep for the correct time, but this is 100% luck, not good management. If you want a read timeout, use a read timeout.
You appear to be waiting until there is no more data after some timeout.
I suggest you use Socket.setSoTimeout(); note that the timeout is given in milliseconds.
A better solution is to not need this at all, by having a protocol which lets you know when the end of the data has been reached. You would only resort to a timeout like this if the server is poorly implemented and you have no way to fix it.
For Problem 1: the 100% CPU may be because you are reading a single char at a time with BufferedReader.read(). Instead you can read a chunk of data into an array and append it to your StringBuilder.
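A sketch combining that with the timeout suggestion above (variable names are placeholders):
// A blocking, chunked read with a socket timeout instead of ready()/sleep polling.
socket.setSoTimeout(500); // milliseconds of silence before read() gives up
BufferedReader reader = new BufferedReader(
        new InputStreamReader(socket.getInputStream()));
StringBuilder sb = new StringBuilder();
char[] chunk = new char[8192];
try {
    int n;
    while ((n = reader.read(chunk)) != -1) { // blocks, no busy-wait
        sb.append(chunk, 0, n);
    }
} catch (SocketTimeoutException e) {
    // nothing arrived within the timeout: treat what we have as complete
}
return sb.toString();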
I have created normal publishers and subscribers implemented in Java. The publisher reads the contents of a 5MB file in 1MB chunks and publishes each 1MB chunk to the subscriber. The data is published successfully. Now I'm facing an issue with appending the content to the existing file: in the end I find only the last 1MB of data in the file. Please let me know how to solve this issue. I have attached the source code for the publisher and subscriber.
Publisher:
public class MessageDataPublisher {
static StringBuffer fileContent;
static RandomAccessFile randomAccessFile ;
public static void main(String[] args) throws IOException {
MessageDataPublisher msgObj=new MessageDataPublisher();
String fileToWrite="test.txt";
msgObj.towriteDDS(fileToWrite);
}
public void towriteDDS(String fileName) throws IOException{
DDSEntityManager mgr=new DDSEntityManager();
String partitionName="PARTICIPANT";
// create Domain Participant
mgr.createParticipant(partitionName);
// create Type
BinaryFileTypeSupport binary=new BinaryFileTypeSupport();
mgr.registerType(binary);
// create Topic
mgr.createTopic("Serials");
// create Publisher
mgr.createPublisher();
// create DataWriter
mgr.createWriter();
// Publish Events
DataWriter dwriter = mgr.getWriter();
BinaryFileDataWriter binaryWriter=BinaryFileDataWriterHelper.narrow(dwriter);
int bufferSize=1024*1024;
File readfile=new File(fileName);
FileInputStream is = new FileInputStream(readfile);
byte[] totalbytes = new byte[is.available()];
is.read(totalbytes);
byte[] readbyte = new byte[bufferSize];
BinaryFile binaryInstance;
int k=0;
for(int i=0;i<totalbytes.length;i++){
readbyte[k]=totalbytes[i];
k++;
if(k>(bufferSize-1)){
binaryInstance=new BinaryFile();
binaryInstance.name="sendpublisher.txt";
binaryInstance.contents=readbyte;
int status = binaryWriter.write(binaryInstance, HANDLE_NIL.value);
ErrorHandler.checkStatus(status, "MsgDataWriter.write");
k=0;
}
}
if(k < (bufferSize-1)){
byte[] remaingbyte = new byte[k];
for(int j=0;j<(k-1);j++){
remaingbyte[j]=readbyte[j];
}
binaryInstance=new BinaryFile();
binaryInstance.name="sendpublisher.txt";
binaryInstance.contents=remaingbyte;
int status = binaryWriter.write(binaryInstance, HANDLE_NIL.value);
ErrorHandler.checkStatus(status, "MsgDataWriter.write");
}
is.close();
try {
Thread.sleep(4000);
} catch (InterruptedException e) {
e.printStackTrace();
}
// clean up
mgr.getPublisher().delete_datawriter(binaryWriter);
mgr.deletePublisher();
mgr.deleteTopic();
mgr.deleteParticipant();
}
}
Subscriber:
public class MessageDataSubscriber {
static RandomAccessFile randomAccessFile ;
public static void main(String[] args) throws IOException {
DDSEntityManager mgr = new DDSEntityManager();
String partitionName = "PARTICIPANT";
// create Domain Participant
mgr.createParticipant(partitionName);
// create Type
BinaryFileTypeSupport msgTS = new BinaryFileTypeSupport();
mgr.registerType(msgTS);
// create Topic
mgr.createTopic("Serials");
// create Subscriber
mgr.createSubscriber();
// create DataReader
mgr.createReader();
// Read Events
DataReader dreader = mgr.getReader();
BinaryFileDataReader binaryReader=BinaryFileDataReaderHelper.narrow(dreader);
BinaryFileSeqHolder binaryseq=new BinaryFileSeqHolder();
SampleInfoSeqHolder infoSeq = new SampleInfoSeqHolder();
boolean terminate = false;
int count = 0;
while (!terminate && count < 1500) {
// To run indefinitely
binaryReader.take(binaryseq, infoSeq, 10,
ANY_SAMPLE_STATE.value, ANY_VIEW_STATE.value,ANY_INSTANCE_STATE.value);
for (int i = 0; i < binaryseq.value.length; i++) {
toWrtieXML(binaryseq.value[i].contents);
terminate = true;
}
try
{
Thread.sleep(200);
}
catch(InterruptedException ie)
{
}
++count;
}
binaryReader.return_loan(binaryseq,infoSeq);
// clean up
mgr.getSubscriber().delete_datareader(binaryReader);
mgr.deleteSubscriber();
mgr.deleteTopic();
mgr.deleteParticipant();
}
private static void toWrtieXML(byte[] bytes) throws IOException {
// TODO Auto-generated method stub
File Writefile=new File("samplesubscriber.txt");
if(!Writefile.exists()){
randomAccessFile = new RandomAccessFile(Writefile, "rw");
randomAccessFile.write(bytes, 0, bytes.length);
randomAccessFile.close();
}
else{
randomAccessFile = new RandomAccessFile(Writefile, "rw");
long i=Writefile.length();
randomAccessFile.seek(i);
randomAccessFile.write(bytes, 0, bytes.length);
randomAccessFile.close();
}
}
}
Thanks in advance
It is hard to give a conclusive answer to your question, because your issue could be the result of several different causes. Also, once the cause of the problem has been identified, you will probably have multiple options to mitigate it.
The first place to look is the reader side. The code does a take() in a loop with a 200 millisecond pause between iterations. Depending on the QoS settings on your DataReader, you might be facing a situation where samples get overwritten in the DataReader while your application is sleeping for those 200 milliseconds. If you are doing this over gigabit Ethernet, a typical DDS product would be able to deliver all 5 chunks of 1 megabyte within that sleep period, meaning that your default, one-place buffer gets overwritten 4 times while you sleep.
This scenario is likely if you used the default history QoS settings for your BinaryFileDataReader, which mean history.kind = KEEP_LAST and history.depth = 1. Increasing the latter to a larger value, for example 20, would result in a queue capable of holding 20 chunks of your file while you are sleeping; that should be sufficient for now. A sketch of the change follows.
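The exact API varies by DDS vendor; this sketch follows the classic OMG Java PSM that your DDSEntityManager presumably wraps, so treat the names as assumptions:
// Deepen the reader's history so samples survive the 200 ms sleep.
DataReaderQosHolder qosHolder = new DataReaderQosHolder();
subscriber.get_default_datareader_qos(qosHolder);
qosHolder.value.history.kind = HistoryQosPolicyKind.KEEP_LAST_HISTORY_QOS;
qosHolder.value.history.depth = 20; // room for 20 chunks instead of 1
DataReader dreader = subscriber.create_datareader(
        topic, qosHolder.value, null, STATUS_MASK_NONE.value);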
If this does not resolve your issue, other possible causes can be explored.