SocketChannel.write() throwing OutOfMemoryError when attempting to write large buffer - java

My code throws an OutOfMemoryError when running the following line:
int numBytes = socketChannel.write(_send_buffer);
where socketChannel is an instance of java.nio.channels.SocketChannel
and _send_buffer is an instance of java.nio.ByteBuffer
The code arrives at this point via a non-blocking selector write operation, and it throws this on the first attempt to write when the capacity of _send_buffer is large. I have no issues with the code when _send_buffer is less than 20 MB, but when attempting to test this with larger buffers (e.g. > 100 MB) it fails.
According to the docs for java.nio.channels.SocketChannel.write():
An attempt is made to write up to r bytes to the channel, where r is the number of bytes remaining in the buffer, that is, src.remaining(), at the moment this method is invoked.
Suppose that a byte sequence of length n is written, where 0 <= n <= r. This byte sequence will be transferred from the buffer starting at index p, where p is the buffer's position at the moment this method is invoked; the index of the last byte written will be p + n - 1. Upon return the buffer's position will be equal to p + n; its limit will not have changed.
Unless otherwise specified, a write operation will return only after writing all of the r requested bytes. Some types of channels, depending upon their state, may write only some of the bytes or possibly none at all. A socket channel in non-blocking mode, for example, cannot write any more bytes than are free in the socket's output buffer.
My channels should be set up as non-blocking, so I would think the write operation should only attempt to write up to the capacity of the socket's output buffer. As I had not previously specified this, I tried setting it to 1024 bytes via the setOption method with the SO_SNDBUF option, i.e.:
socketChannel.setOption(SO_SNDBUF, 1024);
I am still getting the OutOfMemoryError, though. Here is the full error message:
2021-04-22 11:52:44.260 11591-11733/jp.oist.abcvlib.serverLearning I/.serverLearnin: Clamp target GC heap from 195MB to 192MB
2021-04-22 11:52:44.260 11591-11733/jp.oist.abcvlib.serverLearning I/.serverLearnin: Alloc concurrent copying GC freed 2508(64KB) AllocSpace objects, 0(0B) LOS objects, 10% free, 171MB/192MB, paused 27us total 12.714ms
2021-04-22 11:52:44.261 11591-11733/jp.oist.abcvlib.serverLearning W/.serverLearnin: Throwing OutOfMemoryError "Failed to allocate a 49915610 byte allocation with 21279560 free bytes and 20MB until OOM, target footprint 201326592, growth limit 201326592" (VmSize 5585608 kB)
2021-04-22 11:52:44.261 11591-11733/jp.oist.abcvlib.serverLearning I/.serverLearnin: Starting a blocking GC Alloc
2021-04-22 11:52:44.261 11591-11733/jp.oist.abcvlib.serverLearning I/.serverLearnin: Starting a blocking GC Alloc
Now I can debug and pause at the write line and nothing crashes, so I believe there is no problem holding the _send_buffer itself in memory; but when the write is attempted, something in the background creates another allocation that is too much to handle.
Maybe I'm thinking about this wrong and need to limit my _send_buffer size to something smaller, but I'd think there should be a way to limit the allocation made by the write call, no? Or at least some way to allocate more of the Android memory to my app. I'm using a Pixel 3a, which according to the specs has 4 GB of RAM. I realize that has to be shared with the rest of the system, but this is a bare-bones test device (no games, personal apps, etc. are installed), so I'd assume I should have access to a fairly large chunk of that 4 GB. As I'm crashing with a growth limit of 201,326,592 bytes (according to the logcat above), it seems strange that I'm crashing at 0.2 / 4.0 = 5% of the spec'd memory.
Any tips in the right direction about a fundamental flaw in my approach, or recommendations for avoiding the OutOfMemoryError would be much appreciated!
Edit 1:
Adding some code context as requested in the comments. Note this is not a runnable example, as the code base is quite large and I am not allowed to share it all due to company policies. Just note that _send_buffer has nothing to do with the send buffer of the socketChannel itself (i.e. what is referenced by getSendBufferSize); it is just a ByteBuffer that I use to bundle everything together before sending it via the channel. As I can't share all the code related to generating the contents of _send_buffer, just note it is a ByteBuffer that can be very large (> 100 MB). If this is fundamentally a problem, then please point it out and explain why.
So with the above in mind, the NIO-related code is pasted below. Note this is very much prototype/alpha code, so I apologize for the overload of comments and log statements.
SocketConnectionManager.java
(Essentially a Runnable in charge of the Selector)
Note the sendMsgToServer method is overridden (without modification) and called from the main Android activity (not shown). The byte[] episode arg is what gets wrapped into a ByteBuffer within SocketMessage.java (next section) which later gets put into the _send_buffer instance within the write method of SocketMessage.java.
package jp.oist.abcvlib.util;
import android.util.Log;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.SocketOption;
import java.nio.channels.CancelledKeyException;
import java.nio.channels.ClosedSelectorException;
import java.nio.channels.IllegalBlockingModeException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Set;
import static java.net.StandardSocketOptions.SO_SNDBUF;
public class SocketConnectionManager implements Runnable{
private SocketChannel sc;
private Selector selector;
private SocketListener socketListener;
private final String TAG = "SocketConnectionManager";
private SocketMessage socketMessage;
private final String serverIp;
private final int serverPort;
public SocketConnectionManager(SocketListener socketListener, String serverIp, int serverPort){
this.socketListener = socketListener;
this.serverIp = serverIp;
this.serverPort = serverPort;
}
@Override
public void run() {
try {
selector = Selector.open();
start_connection(serverIp, serverPort);
do {
int eventCount = selector.select(0);
Set<SelectionKey> events = selector.selectedKeys(); // events is int representing how many keys have changed state
if (eventCount != 0){
Set<SelectionKey> selectedKeys = selector.selectedKeys();
for (SelectionKey selectedKey : selectedKeys){
try{
SocketMessage socketMessage = (SocketMessage) selectedKey.attachment();
socketMessage.process_events(selectedKey);
}catch (ClassCastException e){
Log.e(TAG,"Error", e);
Log.e(TAG, "selectedKey attachment not a SocketMessage type");
}
}
}
} while (selector.isOpen()); //todo remember to close the selector somewhere
} catch (IOException e) {
Log.e(TAG,"Error", e);
}
}
private void start_connection(String serverIp, int serverPort){
try {
InetSocketAddress inetSocketAddress = new InetSocketAddress(serverIp, serverPort);
sc = SocketChannel.open();
sc.configureBlocking(false);
sc.setOption(SO_SNDBUF, 1024);
socketMessage = new SocketMessage(socketListener, sc, selector);
Log.v(TAG, "registering with selector to connect");
int ops = SelectionKey.OP_CONNECT;
sc.register(selector, ops, socketMessage);
Log.d(TAG, "Initializing connection with " + inetSocketAddress);
boolean connected = sc.connect(inetSocketAddress);
Log.v(TAG, "socketChannel.isConnected ? : " + sc.isConnected());
} catch (IOException | ClosedSelectorException | IllegalBlockingModeException
| CancelledKeyException | IllegalArgumentException e) {
Log.e(TAG, "Initial socket connect and registration:", e);
}
}
public void sendMsgToServer(byte[] episode){
boolean writeSuccess = socketMessage.addEpisodeToWriteBuffer(episode);
}
/**
* Should be called prior to exiting app to ensure zombie threads don't remain in memory.
*/
public void close(){
try {
Log.v(TAG, "Closing connection: " + sc.getRemoteAddress());
selector.close();
sc.close();
} catch (IOException e) {
Log.e(TAG,"Error", e);
}
}
}
SocketMessage.java
This is greatly inspired by the example Python code given here, in particular libclient.py and app-client.py. This is because the server is running Python code and the clients are running Java. So if you want the reasoning behind why things are the way they are, reference the RealPython socket tutorial. I essentially used app-server.py as a template for my code and translated it (with modifications) to Java for the clients.
package jp.oist.abcvlib.util;
import android.util.Log;
import org.json.JSONException;
import org.json.JSONObject;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.text.DecimalFormat;
import java.util.Vector;
public class SocketMessage {
private final SocketChannel sc;
private final Selector selector;
private final ByteBuffer _recv_buffer;
private ByteBuffer _send_buffer;
private int _jsonheader_len = 0;
private JSONObject jsonHeaderRead; // Will tell Java at which points in msgContent each model lies (e.g. model1 is from 0 to 1018, model2 is from 1019 to 2034, etc.)
private byte[] jsonHeaderBytes;
private ByteBuffer msgContent; // Should contain ALL model files. Parse to individual files after reading
private final Vector<ByteBuffer> writeBufferVector = new Vector<>(); // List of episodes
private final String TAG = "SocketConnectionManager";
private JSONObject jsonHeaderWrite;
private boolean msgReadComplete = false;
private SocketListener socketListener;
private long socketWriteTimeStart;
private long socketReadTimeStart;
public SocketMessage(SocketListener socketListener, SocketChannel sc, Selector selector){
this.socketListener = socketListener;
this.sc = sc;
this.selector = selector;
this._recv_buffer = ByteBuffer.allocate(1024);
this._send_buffer = ByteBuffer.allocate(1024);
}
public void process_events(SelectionKey selectionKey){
SocketChannel sc = (SocketChannel) selectionKey.channel();
// Log.i(TAG, "process_events");
try{
if (selectionKey.isConnectable()){
sc.finishConnect();
Log.d(TAG, "Finished connecting to " + ((SocketChannel) selectionKey.channel()).getRemoteAddress());
Log.v(TAG, "socketChannel.isConnected ? : " + sc.isConnected());
}
if (selectionKey.isWritable()){
// Log.i(TAG, "write event");
write(selectionKey);
}
if (selectionKey.isReadable()){
// Log.i(TAG, "read event");
read(selectionKey);
// int ops = SelectionKey.OP_WRITE;
// sc.register(selectionKey.selector(), ops, selectionKey.attachment());
}
} catch (ClassCastException | IOException | JSONException e){
Log.e(TAG,"Error", e);
}
}
private void read(SelectionKey selectionKey) throws IOException, JSONException {
SocketChannel socketChannel = (SocketChannel) selectionKey.channel();
while(!msgReadComplete){
// At this point the _recv_buffer should have been cleared (pointer 0 limit=cap, no mark)
int bitsRead = socketChannel.read(_recv_buffer);
if (bitsRead > 0 || _recv_buffer.position() > 0){
if (bitsRead > 0){
// Log.v(TAG, "Read " + bitsRead + " bytes from " + socketChannel.getRemoteAddress());
}
// If you have not determined the length of the header via the 2 byte short protoheader,
// try to determine it, though there is no guarantee it will have enough bytes. So it may
// pass through this if statement multiple times. Only after it has been read will
// _jsonheader_len have a non-zero length;
if (this._jsonheader_len == 0){
socketReadTimeStart = System.nanoTime();
process_protoheader();
}
// _jsonheader_len will only be larger than 0 if set properly (finished being set).
// jsonHeaderRead will be null until the buffer gathering it has filled and converted it to
// a JSONobject.
else if (this.jsonHeaderRead == null){
process_jsonheader();
}
else if (!msgReadComplete){
process_msgContent(selectionKey);
} else {
Log.e(TAG, "bitsRead but don't know what to do with them");
}
}
}
}
private void write(SelectionKey selectionKey) throws IOException, JSONException {
if (!writeBufferVector.isEmpty()){
SocketChannel socketChannel = (SocketChannel) selectionKey.channel();
Log.v(TAG, "writeBufferVector contains data");
if (jsonHeaderWrite == null){
int numBytesToWrite = writeBufferVector.get(0).limit();
// Create JSONHeader containing length of episode in Bytes
Log.v(TAG, "generating jsonheader");
jsonHeaderWrite = generate_jsonheader(numBytesToWrite);
byte[] jsonBytes = jsonHeaderWrite.toString().getBytes(StandardCharsets.UTF_8);
// Encode length of JSONHeader to first two bytes and write to socketChannel
int jsonLength = jsonBytes.length;
// Add up length of protoHeader, JSONheader and episode bytes
int totalNumBytesToWrite = Integer.BYTES + jsonLength + numBytesToWrite;
// Create new buffer that compiles protoHeader, JsonHeader, and Episode
_send_buffer = ByteBuffer.allocate(totalNumBytesToWrite);
Log.v(TAG, "Assembling _send_buffer");
// Assemble all bytes and flip to prepare to read
_send_buffer.putInt(jsonLength);
_send_buffer.put(jsonBytes);
_send_buffer.put(writeBufferVector.get(0));
_send_buffer.flip();
Log.d(TAG, "Writing to server ...");
// Write Bytes to socketChannel //todo shouldn't be while as should be non-blocking
if (_send_buffer.remaining() > 0){
int numBytes = socketChannel.write(_send_buffer); // todo memory dump error here!
int percentDone = (int) Math.ceil((((double) _send_buffer.limit() - (double) _send_buffer.remaining())
/ (double) _send_buffer.limit()) * 100);
int total = _send_buffer.limit() / 1000000;
// Log.d(TAG, "Sent " + percentDone + "% of " + total + "Mb to " + socketChannel.getRemoteAddress());
}
} else{
// Write Bytes to socketChannel
if (_send_buffer.remaining() > 0){
socketChannel.write(_send_buffer);
}
}
if (_send_buffer.remaining() == 0){
int total = _send_buffer.limit() / 1000000;
double timeTaken = (System.nanoTime() - socketWriteTimeStart) * 10e-10;
DecimalFormat df = new DecimalFormat();
df.setMaximumFractionDigits(2);
Log.i(TAG, "Sent " + total + "Mb in " + df.format(timeTaken) + "s");
// Remove episode from buffer so as to not write it again.
writeBufferVector.remove(0);
// Clear sending buffer
_send_buffer.clear();
// make null so as to catch the initial if statement to write a new one.
jsonHeaderWrite = null;
// Set socket to read now that writing has finished.
Log.d(TAG, "Reading from server ...");
int ops = SelectionKey.OP_READ;
sc.register(selectionKey.selector(), ops, selectionKey.attachment());
}
}
}
private JSONObject generate_jsonheader(int numBytesToWrite) throws JSONException {
JSONObject jsonHeader = new JSONObject();
jsonHeader.put("byteorder", ByteOrder.nativeOrder().toString());
jsonHeader.put("content-length", numBytesToWrite);
jsonHeader.put("content-type", "flatbuffer"); // todo Change to flatbuffer later
jsonHeader.put("content-encoding", "flatbuffer"); //Change to flatbuffer later
return jsonHeader;
}
/**
* recv_buffer may contain 0, 1, or several bytes. If it has more than hdrlen, then process
* the first two bytes to obtain the length of the jsonheader. Else exit this function and
* read from the buffer again until it fills past length hdrlen.
*/
private void process_protoheader() {
Log.v(TAG, "processing protoheader");
int hdrlen = 2;
if (_recv_buffer.position() >= hdrlen){
_recv_buffer.flip(); //pos at 0 and limit set to bitsRead
_jsonheader_len = _recv_buffer.getShort(); // Read 2 bytes converts to short and move pos to 2
// allocate new ByteBuffer to store full jsonheader
jsonHeaderBytes = new byte[_jsonheader_len];
_recv_buffer.compact();
Log.v(TAG, "finished processing protoheader");
}
}
/**
* As with the process_protoheader we will check if _recv_buffer contains enough bytes to
* generate the jsonHeader objects, and if not, leave it alone and read more from socket.
*/
private void process_jsonheader() throws JSONException {
Log.v(TAG, "processing jsonheader");
// If you have enough bytes in the _recv_buffer to write out the jsonHeader
if (_jsonheader_len - _recv_buffer.position() < 0){
_recv_buffer.flip();
_recv_buffer.get(jsonHeaderBytes);
// jsonheaderBuffer should now be full and ready to convert to a JSONobject
jsonHeaderRead = new JSONObject(new String(jsonHeaderBytes));
Log.d(TAG, "JSONheader from server: " + jsonHeaderRead.toString());
try{
int msgLength = (int) jsonHeaderRead.get("content-length");
msgContent = ByteBuffer.allocate(msgLength);
}catch (JSONException e) {
Log.e(TAG, "Couldn't get content-length from jsonHeader sent from server", e);
}
}
// Else return to selector and read more bytes into the _recv_buffer
// If there are any bytes left over (part of the msg) then move them to the front of the buffer
// to prepare for another read from the socket
_recv_buffer.compact();
}
/**
* Here a bit different as it may take multiple full _recv_buffers to fill the msgContent.
* So check if msgContent.remaining is larger than 0 and if so, dump everything from _recv_buffer to it
* @param selectionKey : Used to reference the instance and selector
* @throws ClosedChannelException :
*/
private void process_msgContent(SelectionKey selectionKey) throws IOException {
if (msgContent.remaining() > 0){
_recv_buffer.flip(); //pos at 0 and limit set to bitsRead set ready to read
msgContent.put(_recv_buffer);
_recv_buffer.clear();
}
if (msgContent.remaining() == 0){
// msgContent should now be full and ready to convert to a various model files.
socketListener.onServerReadSuccess(jsonHeaderRead, msgContent);
// Clear for next round of communication
_recv_buffer.clear();
_jsonheader_len = 0;
jsonHeaderRead = null;
msgContent.clear();
int totalBytes = msgContent.capacity() / 1000000;
double timeTaken = (System.nanoTime() - socketReadTimeStart) * 10e-10;
DecimalFormat df = new DecimalFormat();
df.setMaximumFractionDigits(2);
Log.i(TAG, "Entire message containing " + totalBytes + "Mb recv'd in " + df.format(timeTaken) + "s");
msgReadComplete = true;
// Set socket to write now that reading has finished.
int ops = SelectionKey.OP_WRITE;
sc.register(selectionKey.selector(), ops, selectionKey.attachment());
}
}
//todo should send this to the mainactivity listener so it can be customized/overridden
private void onNewMessageFromServer(){
// Take info from JSONheader to parse msgContent into individual model files
// After parsing all models notify MainActivity that models have been updated
}
// todo should be able deal with ByteBuffer from FlatBuffer rather than byte[]
public boolean addEpisodeToWriteBuffer(byte[] episode){
boolean success = false;
try{
ByteBuffer bb = ByteBuffer.wrap(episode);
success = writeBufferVector.add(bb);
Log.v(TAG, "Added data to writeBuffer");
int ops = SelectionKey.OP_WRITE;
socketWriteTimeStart = System.nanoTime();
sc.register(selector, ops, this);
// I want this to trigger the selector that this channel is writeReady.
} catch (NullPointerException | ClosedChannelException e){
Log.e(TAG,"Error", e);
Log.e(TAG, "SocketConnectionManager.data not initialized yet");
}
return success;
}
}

Stumbled upon this in the Android Docs, which answers the question of why I get the OutOfMemoryError.
To maintain a functional multi-tasking environment, Android sets a hard limit on the heap size for each app. The exact heap size limit varies between devices based on how much RAM the device has available overall. If your app has reached the heap capacity and tries to allocate more memory, it can receive an OutOfMemoryError.
In some cases, you might want to query the system to determine exactly how much heap space you have available on the current device—for example, to determine how much data is safe to keep in a cache. You can query the system for this figure by calling getMemoryClass(). This method returns an integer indicating the number of megabytes available for your app's heap.
After calling ActivityManager.getMemoryClass(), I see that on my Pixel 3a I have a hard limit of 192 MB. As I was trying to allocate just over 200 MB, I hit this limit.
I also checked ActivityManager.getLargeMemoryClass() and see I have a hard limit of 512 MB. So I can set my app to use a "largeHeap", but despite the device having 4 GB of RAM, I have a hard limit of 512 MB to work around.
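For reference, here is a minimal sketch of querying those limits at runtime (assuming a Context is available; the helper class name is just illustrative):
import android.app.ActivityManager;
import android.content.Context;
import android.util.Log;

public final class HeapLimits {
    // Logs the per-app heap limits that the system reports for this device.
    public static void log(Context context) {
        ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        int normalMb = am.getMemoryClass();      // limit with the default heap (192 MB in my case)
        int largeMb  = am.getLargeMemoryClass(); // limit when android:largeHeap="true" is set in the manifest
        Log.i("HeapLimits", "heap limit: " + normalMb + " MB, largeHeap limit: " + largeMb + " MB");
    }
}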
Unless someone else knows a way around this, I'll have to write some logic to piecewise write the episode to a file if it grows above a certain point, and send it over the channel piecewise later. I guess this will slow things down a fair bit, so if anyone has an answer that avoids this, or can tell me why it won't slow things down if done properly, I'm happy to accept that answer. Just posting this as an answer as it does answer my original question, albeit rather unsatisfactorily.
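One thing that might at least shrink the peak allocation in the write() method above is to avoid building the combined _send_buffer at all: the episode is already wrapped in its own ByteBuffer inside writeBufferVector, so the protoheader and JSON header could go into a small separate buffer and both buffers could be handed to the channel as a gathering write (SocketChannel implements GatheringByteChannel, so write(ByteBuffer[]) is available). This is only a rough, untested sketch against the code above; it avoids duplicating the 100+ MB episode on the heap, but the episode byte[] itself of course still has to fit under the heap limit:
// Sketch: header in its own small buffer, episode left in the buffer that already wraps it.
ByteBuffer header = ByteBuffer.allocate(Integer.BYTES + jsonBytes.length);
header.putInt(jsonBytes.length);
header.put(jsonBytes);
header.flip();
ByteBuffer episode = writeBufferVector.get(0); // already wraps the byte[] passed to addEpisodeToWriteBuffer
ByteBuffer[] parts = { header, episode };
// Non-blocking gathering write: writes whatever fits in the socket's output buffer and returns.
long written = socketChannel.write(parts);
// If header.hasRemaining() || episode.hasRemaining(), return to the selector and continue
// writing on the next OP_WRITE event instead of looping here.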

Related

How to programmatically limit the download speed?

I use the following code to limit the download speed of a file in java:
package org;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
class MainClass {
public static void main(String[] args) {
download("https://speed.hetzner.de/100MB.bin");
}
public static void download(String link) {
try {
URL url = new URL(link);
HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.setConnectTimeout(5000);
con.setReadTimeout(5000);
InputStream is = con.getInputStream();
CustomInputStream inputStream = new CustomInputStream(is);
byte[] buffer = new byte[2024];
int len;
while ((len = inputStream.read(buffer)) != -1) {
System.out.println("downloaded : " + len);
//save file
}
} catch (IOException e) {
e.printStackTrace();
}
}
public static class CustomInputStream extends InputStream {
private static final int MAX_SPEED = 8 * 1024;
private final long ONE_SECOND = 1000;
private long downloadedWhithinOneSecond = 0L;
private long lastTime = System.currentTimeMillis();
private InputStream inputStream;
public CustomInputStream(InputStream inputStream) {
this.inputStream = inputStream;
lastTime = System.currentTimeMillis();
}
@Override
public int read() throws IOException {
long currentTime;
if (downloadedWhithinOneSecond >= MAX_SPEED
&& (((currentTime = System.currentTimeMillis()) - lastTime) < ONE_SECOND)) {
try {
Thread.sleep(ONE_SECOND - (currentTime - lastTime));
} catch (InterruptedException e) {
e.printStackTrace();
}
downloadedWhithinOneSecond = 0;
lastTime = System.currentTimeMillis();
}
int res = inputStream.read();
if (res >= 0) {
downloadedWhithinOneSecond++;
}
return res;
}
@Override
public int available() throws IOException {
return inputStream.available();
}
@Override
public void close() throws IOException {
inputStream.close();
}
}
}
The download speed is successfully limited, but a new problem arises. When the download is in progress and I disconnect from the internet, the download does not end and continues for a while. After I disconnect the internet connection, it takes more than 10 seconds for a java.net.SocketTimeoutException to be thrown. I do not really understand what happens in the background.
Why does this problem arise?
Your rate limit doesn't actually work like you think it does, because the data is not actually sent byte-per-byte, but in packets. These packets are buffered, and what you observe (download continues without connection) is just your stream reading the buffer. Once it reaches the end of your buffer, it waits 5 seconds before the timeout is thrown (because that is what you configured).
You set the rate to 8 kB/s, and a typical packet is around 1 kB and can go up to 64 kB, so there could be 8 seconds during which you are still reading the same packet. Additionally, it is possible that multiple packets were already sent and buffered. There is also a receive buffer, which can be as small as 8-32 kB or as large as several MB. So really you are just reading from the buffer.
[EDIT]
Just to clarify, you are doing the right thing. On average, the rate will be limited to what you specify. The server will send a bunch of data, then wait until the client has emptied its buffer enough to receive more data.
You apparently want to limit download speed on the client side, and you also want the client to respond immediately to the connection being closed.
AFAIK, this is not possible ... without some compromises.
The problem is that the only way that the client application can detect that the connection is closed is by performing a read operation. That read is going to deliver data. But if you have already reached your limit for the current period, then that read will push you over the limit.
Here are a couple of ideas:
If you "integrate" the download rate over a short period (e.g. 1kbytes every second versus 10kbytes every 10 seconds) then you can reduce the length of time for the sleep calls.
When you are close to your target download rate, you could fall back to doing tiny (e.g. 1 byte) reads and small sleeps.
Unfortunately, both of these will be inefficient on the client side (more syscalls), but this is the cost you must pay if you want your application to detect connection closure quickly.
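As a rough sketch of the first idea (shorter accounting windows, so each sleep is short and a dead connection is noticed sooner), the read() override could look something like this, reusing the inputStream field from the CustomInputStream above; the window length and per-window budget are illustrative, not tuned values:
// Sketch: 8 kB/s enforced as 1 kB per 125 ms window instead of 8 kB per second.
private static final int MAX_BYTES_PER_WINDOW = 1024;
private static final long WINDOW_MILLIS = 125;
private long windowStart = System.currentTimeMillis();
private long bytesThisWindow = 0;

@Override
public int read() throws IOException {
    long now = System.currentTimeMillis();
    if (now - windowStart >= WINDOW_MILLIS) {
        // Start a new accounting window.
        windowStart = now;
        bytesThisWindow = 0;
    } else if (bytesThisWindow >= MAX_BYTES_PER_WINDOW) {
        // Budget for this window is used up: sleep only until the window ends.
        try {
            Thread.sleep(WINDOW_MILLIS - (now - windowStart));
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        windowStart = System.currentTimeMillis();
        bytesThisWindow = 0;
    }
    int b = inputStream.read(); // still blocks, but at most ~125 ms of sleeping is added on top
    if (b >= 0) {
        bytesThisWindow++;
    }
    return b;
}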
In a comment you said:
I'd expect the connection to be reset as soon as the internet connection is disabled.
I don't think so. Normally, the client-side protocol stack will deliver any outstanding data received from the network before telling the application code that the connection it is reading has been closed.

CPU-Wise, How can I optimize UDP packet sending?

I currently have a game, for which I have implemented a client and a server.
I then have the server sending data to the client about its position, the client sending movement inputs to the server, etc.
The problem is that the CPU skyrockets to 100%. I have directly connected the high usage to the following code, which is in an update() method that is called ten times per second:
try{
sendToClientUDP(("ID:" + String.valueOf(uid)));
sendToClientUDP(("Scale:" + GameServer.scale));
for (Clients cl : GameServer.players){
//sendToClient(("newShip;ID:" + cl.uid).getBytes(), packet.getAddress(), packet.getPort());
sendToClientUDP((("UID:" + cl.uid +";x:" + cl.x)));
sendToClientUDP((("UID:" + cl.uid +";y:" + cl.y)));
sendToClientUDP((("UID:" + cl.uid +";z:" + cl.z)));
sendToClientUDP((("UID:" + cl.uid +";Rotation:" + (cl.rotation))));
cl.sendToClientUDP(new String("newShip;ID:" + uid));
sendToClientUDP(new String("newShip;ID:" + cl.uid));
}
}catch (Exception e){
e.printStackTrace();
}
Removing the code, and the high CPU usage disappears.
Here is my sendToClientUDP() method.
public void sendToClientUDP(String str){
if (!NPC){ //NPC is checking if it is a computer-controlled player.
UDP.sendData(str.getBytes(), ip, port);
}
}
And here is my UDP.sendData() method:
public static void sendData(String data, InetAddress ip, int port) {
sendData(data.getBytes(), ip, port);
}
public static void sendData(byte[] data, InetAddress ip, int port) {
DatagramPacket packet = new DatagramPacket(data, data.length, ip, port);
try {
socket.send(packet);
} catch (IOException e) {
e.printStackTrace();
}
}
Why is so much CPU being used simply by sending UDP packets? And what, if anything, can I do to reduce it?
I suggest you take out or optimise the code which is producing so much CPU load. A CPU profiler is the best place to start, but these are the likely causes of the CPU consumption:
creating Strings and byte[] are expensive, I would avoid doing those.
creating multiple packets instead of batching them is also expensive.
Creating a new DatagramPacket can be avoided.
I would remove duplication between messages as this adds redundant work you can avoid.
you might consider using a binary format to avoid the translation overhead of convert to/from text.
There is almost never a good time to use new String(); it is almost certainly redundant.
EDIT: This is what I had in mind. Instead of sending 5 packets per client, you send just one packet, total. For ten clients you send 1/50 of the packets, reducing the overhead.
import java.io.IOException;
import java.net.*;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.ArrayList;
import java.util.List;
/**
* Created by peter on 31/07/15.
*/
public class PacketSender {
public static void main(String[] args) throws IOException {
PacketSender ps = new PacketSender(InetAddress.getByName("localhost"), 12345);
List<Client> clients = new ArrayList<>();
for(int i=0;i<10;i++)
clients.add(new Client());
for(int t = 0; t< 3;t++) {
long start = System.nanoTime();
int tests = 100000;
for (int i = 0; i < tests; i++) {
ps.sendData(1234, 1, clients);
}
long time = System.nanoTime() - start;
System.out.printf("Sent %,d messages per second%n", (long) (tests * 1e9 / time));
}
}
final ThreadLocal<ByteBuffer> bufferTL = ThreadLocal.withInitial(() -> ByteBuffer.allocate(8192).order(ByteOrder.nativeOrder()));
final ThreadLocal<DatagramSocket> socketTL;
final ThreadLocal<DatagramPacket> packetTL;
public PacketSender(InetAddress address, int port) {
socketTL = ThreadLocal.withInitial(() -> {
try {
return new DatagramSocket(port, address);
} catch (SocketException e) {
throw new AssertionError(e);
}
});
packetTL = ThreadLocal.withInitial(() -> new DatagramPacket(bufferTL.get().array(), 0, address, port));
}
public void sendData(int uid, int scale, List<Client> clients) throws IOException {
ByteBuffer b = bufferTL.get();
b.clear();
b.putInt(uid);
b.putInt(scale);
b.putInt(clients.size());
for (Client cl : clients) {
b.putInt(cl.x);
b.putInt(cl.y);
b.putInt(cl.z);
b.putInt(cl.rotation);
b.putInt(cl.uid);
}
DatagramPacket dp = packetTL.get();
dp.setData(b.array(), 0, b.position());
socketTL.get().send(dp);
}
static class Client {
int x,y,z,rotation,uid;
}
}
When this performance test runs it prints
Sent 410,118 messages per second
Sent 458,126 messages per second
Sent 459,499 messages per second
Edit: to write/read text you can do the following.
import java.nio.ByteBuffer;
/**
* Created by peter on 09/08/2015.
*/
public enum ByteBuffers {
;
/**
* Writes in ISO-8859-1 encoding. This assumes string up to 127 bytes long.
*
* @param bb to write to
* @param cs to write from
*/
public static void writeText(ByteBuffer bb, CharSequence cs) {
// change to stop bit encoding to have lengths > 127
assert cs.length() < 128;
bb.put((byte) cs.length());
for (int i = 0, len = cs.length(); i < len; i++)
bb.put((byte) cs.charAt(i));
}
public static StringBuilder readText(ByteBuffer bb, StringBuilder sb) {
int len = bb.get();
assert len >= 0;
sb.setLength(0);
for (int i = 0; i < len; i++)
sb.append((char) (bb.get() & 0xFF));
return sb;
}
private static final ThreadLocal<StringBuilder> SB = new ThreadLocal<StringBuilder>() {
@Override
protected StringBuilder initialValue() {
return new StringBuilder();
}
};
public static String readText(ByteBuffer bb) {
// TODO use a string pool to reduce String garbage.
return readText(bb, SB.get()).toString();
}
}
If you need something more complicated you should consider using Chronicle-Bytes which I wrote. It has
support for 64-bit memory sizes, including memory mapping 64-bit.
thread safe operation off heap.
UTF-8 encoding of strings.
compressed types such as stop bit encoding.
automatic string pooling to reduce garbage.
deterministic clean up of off heap resources via reference counting.

Calculating the bandwidth by sending several packets through linear regression

I implemented a TCP client-server model to test the bandwidth to the server by sending a number of packets of different sizes, measuring the RTT, and then calculating the bandwidth through linear regression.
Here is the server code:
import java.io.*;
import java.net.*;
public class Server implements Runnable {
ServerSocket welcomeSocket;
String clientSentence;
Thread thread;
Socket connectionSocket;
BufferedReader inFromClient;
DataOutputStream outToClient;
public Server() throws IOException {
welcomeSocket = new ServerSocket(6588);
connectionSocket = welcomeSocket.accept();
inFromClient = new BufferedReader(new InputStreamReader(connectionSocket.getInputStream()));
outToClient = new DataOutputStream(connectionSocket.getOutputStream());
thread = new Thread(this);
thread.start();
}
@Override
public void run() {
// TODO Auto-generated method stub
while(true)
{
try {
clientSentence = inFromClient.readLine();
if (clientSentence != null) {
System.out.println("Received: " + clientSentence);
outToClient.writeBytes(clientSentence + '\n');
}
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
public static void main(String[] args) throws IOException {
new Server();
}
}
And this is the method in the Client class that returns an array of the RTTs for each packet:
public int [] getResponseTime() throws UnknownHostException, IOException {
timeArray = new int[sizes.length];
for (int i = 0; i < sizes.length; i++) {
sentence = StringUtils.leftPad("", sizes[i], '*');
long start = System.nanoTime();
outToServer.writeBytes(sentence + '\n');
modifiedSentence = inFromServer.readLine();
long end = System.nanoTime();
System.out.println("FROM SERVER: " + modifiedSentence);
timeArray[i] = (int) (end - start);
simpleReg.addData(timeArray[i]* Math.pow(10, -9), sizes[i] * 2); // each char is 2 bytes
}
return timeArray;
}
When I take the slope, it gives me a bandwidth on the order of kilobytes, even though the machines are on the same network and the bandwidth should be much higher. What am I doing wrong?
Are you obliged to use linear regression, or could it be a different estimator? I am actually not sure if linear regression is the best approach here. I am curious, do you happen to know any sources that suggest using it in this kind of situation?
Note that especially the initial BW measurements are much smaller than the real maximal goodput (due to TCP slow-start), so it is important to use an estimation metric that takes large, wrong outliers into account.
In previous work I have used the harmonic mean to monitor the bandwidth over a longer period of time and it worked pretty well (also on links with a large bandwidth). The advantage of the harmonic mean over other means is that, while it is still very easy to compute, it mitigates the impact of large outliers, meaning the estimate is not as easily falsified.
Given a series of bandwidth measurements R_i, where i = 0, 1, 2, ..., n, the harmonic mean can be updated incrementally: if R_total is the harmonic mean of the first n measurements (R_0 ... R_(n-1)), then after adding the new measurement R_n it becomes
R_total' = (n + 1) / ( (n / R_total) + (1 / R_n) )
It is also good practice to skip the first few measurement values (depending on how often you measure...), e.g., R_(0..5), since you might have initial bursts due to initial preparations in the different layers and are in the slow-start phase anyways.
Here an example implementation in Java. Even though in this case the measurement is done through a file download, it can be easily applied to your environment too - simply use your echo server instead of the file download:
public class Estimator
{
private static double R; // harmonic mean of all bandwidth measurements
private static int n = 0; // number of measurements
private static int skips = 5; // skip measurements for first 5 socket.read() operations
// size in bytes
// start/end in ns
public static double harmonicMean(long start, long end, double size){
// check if we need to skip this initial value, since it might falsify our estimate
if(skips-- > 0) return 0;
// get current value of R
double curR = (size/(1024*1024))/(double)((end - start)*Math.pow(10, -9));
System.out.println(curR);
if(n == 0) {
// initial value
R = curR;
} else {
// use harmonic mean
R = (n+1)/((n/R)+(1/curR));
}
n++;
return R;
}
public static void main(String[] args)
{
// temporary buffer to hold bytes
byte[] buffer = new byte[1024*1024*10]; // 10MB buffer - just in case ...
Socket socket = null;
try {
// measurement done through file download from server
// prepare request
socket = new Socket("yourserver.com",80);
PrintWriter pw = new PrintWriter(socket.getOutputStream());
InputStream is = socket.getInputStream();
pw.println("GET /test_blob HTTP/1.1"); // a test file, e.g., 1MB big
pw.println("Host: yourserver.com");
pw.println("");
pw.flush();
// prepare measurement
long start,end;
double bytes = 0;
double totalBytes = 0;
start = System.nanoTime();
while((bytes = is.read(buffer)) != -1) {
// socket.read() occurred -> calculate harmonic mean
end = System.nanoTime();
totalBytes += bytes;
harmonicMean(start, end, totalBytes);
}
// clean up
is.close();
pw.close();
}
catch(Exception e){
e.printStackTrace();
}
finally {
if(socket != null) {
try{
socket.close();
}
catch(Exception e){
e.printStackTrace();
}
}
}
System.out.println(R+" MB/s");
}
}
Additionally, for the sake of completeness, as I already mentioned in the comments it is important that the test messages/files are big enough, so TCP reaches the full goodput potential of the link.
Please also note, that this is a simplified way to estimate the bandwidth. In this example we start measuring (taking the first timestamp) from when the request was sent, meaning we include the link propagation and server processing delay, which in return will reduce the overall estimated value. Anyways, since you seem to use a local network, I expect the sum of these delays to be rather small, which means they will not falsify the final estimate too much.
I wrote a small blog post concerning measuring TCP connection metrics inside an application layer. Everything is described in more detail there (though the code examples are in C).

How to read (all available) data from serial connection when using JSSC?

I'm trying to work with JSSC.
I built my app according to this link:
https://code.google.com/p/java-simple-serial-connector/wiki/jSSC_examples
My event handler looks like:
static class SerialPortReader implements SerialPortEventListener {
public void serialEvent(SerialPortEvent event) {
if(event.isRXCHAR()){//If data is available
try {
byte buffer[] = serialPort.readBytes();
}
catch (SerialPortException ex) {
System.out.println(ex);
}
}
}
}
The problem is that I never get the incoming data in one piece. (If the message has a length of 100 bytes, I'm getting 48 and 52 bytes in 2 separate calls.)
- The other side sends me messages of different lengths.
- In the ICD I'm working with, there is a field which tells us the length of the message (from byte #10 to byte #13).
- I can read 14 bytes:
serialPort.readBytes(14);
parse the message length, and then read the rest of the message:
serialPort.readBytes(messageLength-14);
But if I do that, I will not have the message in one piece (I will have 2 separate byte[]s, and I need it in one piece (byte[]) without the work of a copy function).
Is it possible?
When working with Ethernet (SocketChannel) we can read data using a ByteBuffer. But with JSSC we can't.
Is there a good alternative to JSSC?
Thanks
You can't rely on any library to give you all the content you need at once, because:
the library doesn't know how much data you need
the library will give you data as it comes, depending on buffers, hardware, etc.
You must develop your own business logic to handle the reception of your packets. It will of course depend on how your packets are defined: are they always the same length, are they terminated by the same ending character, etc.
Here is an example that should work with your system (note you should take this as a starting point, not a full solution; it doesn't include a timeout, for example):
static class SerialPortReader implements SerialPortEventListener
{
private int m_nReceptionPosition = 0;
private boolean m_bReceptionActive = false;
private byte[] m_aReceptionBuffer = new byte[2048];
@Override
public void serialEvent(SerialPortEvent p_oEvent)
{
byte[] aReceiveBuffer = new byte[2048];
int nLength = 0;
int nByte = 0;
switch(p_oEvent.getEventType())
{
case SerialPortEvent.RXCHAR:
try
{
aReceiveBuffer = serialPort.readBytes();
for(nByte = 0;nByte < aReceiveBuffer.length;nByte++)
{
//System.out.print(String.format("%02X ",aReceiveBuffer[nByte]));
m_aReceptionBuffer[m_nReceptionPosition] = aReceiveBuffer[nByte];
// Buffer overflow protection
if(m_nReceptionPosition >= 2047)
{
// Reset for next packet
m_bReceptionActive = false;
m_nReceptionPosition = 0;
}
else if(m_bReceptionActive)
{
m_nReceptionPosition++;
// Receive at least the start of the packet including the length
if(m_nReceptionPosition >= 14)
{
nLength = (short)((short)m_aReceptionBuffer[10] & 0x000000FF);
nLength |= ((short)m_aReceptionBuffer[11] << 8) & 0x0000FF00;
nLength |= ((short)m_aReceptionBuffer[12] << 16) & 0x00FF0000;
nLength |= ((short)m_aReceptionBuffer[13] << 24) & 0xFF000000;
//nLength += ..; // Depending if the length in the packet include ALL bytes from the packet or only the content part
if(m_nReceptionPosition >= nLength)
{
// You received at least all the content
// Reset for next packet
m_bReceptionActive = false;
m_nReceptionPosition = 0;
}
}
}
// Start receiving only if this is a Start Of Header
else if(m_aReceptionBuffer[0] == '\0')
{
m_bReceptionActive = true;
m_nReceptionPosition = 1;
}
}
}
catch(Exception e)
{
e.printStackTrace();
}
break;
default:
break;
}
}
}
After writing data to the serial port, it needs to be flushed. Check the timing, and note that the read should occur only after the other end has written. The read size is just a hint to the read system call and is not guaranteed. The data may have arrived and be sitting in the serial port's hardware buffer, but may not yet have been transferred to the operating system buffer, and hence not to the application. Consider using the scm library, which flushes data after each write: http://www.embeddedunveiled.com/
Try this:
Write your data to the serial port (using serialPort.writeBytes()) and if you are expecting a response, use this:
byte[] getData() throws SerialPortException, IOException {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] b;
try {
while ((b = serialPort.readBytes(1, 100)) != null) {
baos.write(b);
// System.out.println ("Wrote: " + b.length + " bytes");
}
// System.out.println("Returning: " + Arrays.toString(baos.toByteArray()));
} catch (SerialPortTimeoutException ex) {
; //don't want to catch it, it just means there is no more data to read
}
return baos.toByteArray();
}
Do what you want with the returned byte array; in my case I just display it for testing.
I found it works just fine if you read one byte at a time, using a 100ms timeout, and when it does time out, you've read all data in the buffer.
Source: trying to talk to an Epson serial printer using jssc and ESC/POS.

Java sending byte[] through sockets...wrong length read

For my homework assignment, I have a network of Nodes that are passing messages to each other. Each Node is connected to a set amount of other Nodes (I'm using 4 for testing). Each Link has a weight, and all the Nodes have computed the shortest path for how they want their messages sent. Every Packet that is sent is composed of the message protocol (a hard-coded int), an integer that tells how many messages have passed through the sending Node, and the routing path for the Packet.
Every Node has a Thread for each of its Links. There is an active Socket in each Link. The Packets are sent by adding a 4-byte int to the beginning of the message telling the message's length.
Everything works fine until I stress the network. For my test, there are 10 Nodes, and I get 5 of them to send 10000 packets in a simple while() loop with no Thread.sleep(). Without exception, there is always an error at some point during execution at the if(a!=len) statement.
Please let me know if I can clarify anything. Thanks in advance! Here is the code (from the Link Thread; send() and forward() are called from the Node itself):
protected void listen(){
byte[] b;
int len;
try{
DataInputStream in = new DataInputStream(sock.getInputStream());
while(true){
len = in.readInt();
b = new byte[len];
int a = in.read(b,0,len);
if(a!=len){
System.out.println("ERROR: " + a + "!=" + len);
throw new SocketException(); //may have to fix...this will happen when message is corrupt/incomplete
}
Message m = new Message(b);
int p = m.getProtocol();
switch (p){
case CDNP.PACKET:
owner.incrementTracker();
System.out.print("\n# INCOMMING TRACKER: " + m.getTracker() + "\n>>> ");
owner.forward(m);
}
}
}catch (IOException e){
e.printStackTrace();
}
}
public void send(int tracker){
String[] message = { Conv.is(CDNP.PACKET), Conv.is(tracker), owner.getMST().toString() };
Message m = new Message(message);
forward(m);
}
public synchronized void forward(Message m){
try{
OutputStream out = sock.getOutputStream();
//convert length to byte array of length 4
ByteBuffer bb = ByteBuffer.allocate(4+m.getLength());
bb.putInt(m.getLength());
bb.put(m.getBytes());
out.write(bb.array());
out.flush();
}catch (UnknownHostException e){
System.out.println("ERROR: Could not send to Router at " + sock.getRemoteSocketAddress().toString());
return;
}catch (IOException e1){
}
}
int a = in.read(b,0,len);
if(a!=len){
That won't work. The InputStream may not read all the bytes you want; it may read only what is available right now and return that much without blocking.
To quote the Javadocs (emphasis mine):
Reads up to len bytes of data from the input stream into an array of bytes. An attempt is made to read as many as len bytes, but a smaller number may be read, possibly zero. The number of bytes actually read is returned as an integer.
You need to continue reading in a loop until you have all the data you want (or the stream is finished).
Or, since you are using a DataInputStream, you can also use
in.readFully(b, 0, len);
which always reads exactly len bytes (blocking until those have arrived, throwing an exception when there is not enough data).
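If you would rather keep plain read() calls instead of readFully(), the loop would look roughly like this (a minimal sketch; EOFException is java.io.EOFException):
// Sketch: keep reading until exactly len bytes have arrived or the stream ends.
byte[] b = new byte[len];
int off = 0;
while (off < len) {
    int n = in.read(b, off, len - off);
    if (n == -1) {
        // Stream ended before the full message arrived.
        throw new EOFException("expected " + len + " bytes, got " + off);
    }
    off += n;
}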
