Write int from Java client to C server over socket

I thought it might be byte ordering, but it doesn't look like it. I am not sure what else it could be.
Java client on Linux:
private static final int CODE = 0;
Socket socket = new Socket("10.10.10.10", 50505);
DataOutputStream output = new DataOutputStream(socket.getOutputStream());
output.writeInt(CODE);
C server, also on Linux:
int sd = createSocket();
int code = -1;
int bytesRead = 0;
int result;
while (bytesRead < sizeof(int))
{
result = read(sd, &code + bytesRead, sizeof(int) - bytesRead);
bytesRead += result;
}
int ntolCode = ntohl(code); //test for byte order issue
printf("\n%i\n%i\n%i\n", code, ntolCode, bytesRead);
Which prints out:
-256
16777215
4
Not sure what else to try.
Solution
This solution is not intuitive in the least for me, but thanks for the down votes anyway!
Java side
Socket socket = new Socket("10.10.10.10", 50505);
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
int x = 123456;
ByteBuffer buff = ByteBuffer.allocate(4);
byte[] b = buff.order(ByteOrder.LITTLE_ENDIAN).putInt(x).array();
out.write(b);
C side
int sd = createSocket();
unsigned char buff[4]; // unsigned so the bytes don't sign-extend when the int is reassembled below
int bytesRead = 0;
int result;
while (bytesRead < 4){
result = read(sd, buff + bytesRead, sizeof(buff) - bytesRead);
if (result < 1) {
return -1;
}
bytesRead += result;
}
int answer = (buff[3] << 24 | buff[2] << 16 | buff[1] << 8 | buff[0]);
I am still interested in a simpler solution if anyone has anything, preferably using BufferedWriter if that is possible.

The problem is here:
&code + bytesRead
This will increment the address of code in steps of 4 (sizeof code), not 1, so every read after the first one writes past code. You need a byte array, or a cast such as (char *)&code + bytesRead.

You forgot to design and implement a protocol! You wrote one piece of code that sends data in one format and another piece of code that receives data in an entirely different format. Decide on a format, document that format, then write code that sends in that format, then write code that receives in that format.
Do not skip the documentation step. That is the most important one. Document precisely what bytes will be used to communicate the information.
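For example, a minimal protocol for this question could simply say: "the client sends one 4-byte signed integer in network byte order (big-endian)". On the Java side that is exactly what DataOutputStream.writeInt() already produces, so no ByteBuffer juggling is needed. A sketch (host, port and value taken from the question; treat it as an illustration, not the asker's final code):
import java.io.DataOutputStream;
import java.net.Socket;

public class SendCode {
    public static void main(String[] args) throws Exception {
        // Protocol: exactly 4 bytes, big-endian (network byte order).
        try (Socket socket = new Socket("10.10.10.10", 50505);
             DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
            out.writeInt(123456); // writeInt is big-endian by specification
            out.flush();
        }
    }
}
The C side then reads exactly 4 bytes into an unsigned char buffer and converts with ntohl(), which matches the ntohl() call already present in the question.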

Related

Buffer to String

I send/receive data between Android and another device over USB.
The code that I use to receive data:
StringBuilder stringBuilder = new StringBuilder();
int i = 0;
int s = buffer[0];
for (; i < s; i++) {
stringBuilder.append(String.valueOf((char)buffer[i]));
}
byte[] b = String.valueOf(stringBuilder).getBytes();
I receive all of the bytes fine, except when a byte is greater than 127. How can I handle that?
I tried using:
stringBuilder2.append(String.valueOf((int)buffer[i] & 0xFF));
That works fine when I read String.valueOf(stringBuilder), but not when I create the byte[].
If all the bytes you're receiving are ASCII, your stringBuilder already holds the text of the String you need.
Otherwise, assuming that buffer[0] is the size of your payload, you could try something like this:
byte[] tmp = new byte[buffer[0]];
System.arraycopy(buffer, 1, tmp, 0, buffer[0]);
String result = new String(tmp);
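If bytes above 127 show up, two things matter: buffer[0] is a signed byte (so mask it), and new String(byte[]) uses the platform default charset. A hedged sketch, assuming the device sends single-byte data, where ISO-8859-1 maps every byte value 0-255 to a char unchanged (bufferToString is a hypothetical helper, not part of the original code):
import java.nio.charset.StandardCharsets;

// Assumes buffer[0] holds the payload length and the data starts at index 1.
static String bufferToString(byte[] buffer) {
    int len = buffer[0] & 0xFF;                          // mask so a length above 127 stays positive
    byte[] tmp = new byte[len];
    System.arraycopy(buffer, 1, tmp, 0, len);
    return new String(tmp, StandardCharsets.ISO_8859_1); // 1 byte -> 1 char
}
Converting back with result.getBytes(StandardCharsets.ISO_8859_1) returns the original bytes, which is where a platform default such as UTF-8 would otherwise mangle values above 127.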

Servlet getContentLength() returns > 0 but getInputStream().available() returns 0 [duplicate]

How do I read an entire InputStream into a byte array?
You can use Apache Commons IO to handle this and similar tasks.
The IOUtils type has a static method to read an InputStream and return a byte[].
InputStream is;
byte[] bytes = IOUtils.toByteArray(is);
Internally this creates a ByteArrayOutputStream and copies the bytes to the output, then calls toByteArray(). It handles large files by copying the bytes in blocks of 4KiB.
You need to read each byte from your InputStream and write it to a ByteArrayOutputStream.
You can then retrieve the underlying byte array by calling toByteArray():
InputStream is = ...
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
int nRead;
byte[] data = new byte[16384];
while ((nRead = is.read(data, 0, data.length)) != -1) {
buffer.write(data, 0, nRead);
}
return buffer.toByteArray();
Finally, after twenty years, there’s a simple solution without the need for a 3rd party library, thanks to Java 9:
InputStream is;
…
byte[] array = is.readAllBytes();
Note also the convenience methods readNBytes(byte[] b, int off, int len) and transferTo(OutputStream) addressing recurring needs.
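A small sketch of those helpers, using an in-memory stream purely for illustration:
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class Java9StreamDemo {
    public static void main(String[] args) throws IOException {
        InputStream is = new ByteArrayInputStream(new byte[] {1, 2, 3, 4, 5});

        byte[] head = new byte[3];
        int n = is.readNBytes(head, 0, head.length); // reads up to 3 bytes, returns how many it got

        ByteArrayOutputStream rest = new ByteArrayOutputStream();
        is.transferTo(rest);                         // copies whatever is left

        System.out.println(n + " byte(s) read, " + rest.size() + " byte(s) transferred");
    }
}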
Use vanilla Java's DataInputStream and its readFully method (it has existed since at least Java 1.4):
...
byte[] bytes = new byte[(int) file.length()];
DataInputStream dis = new DataInputStream(new FileInputStream(file));
dis.readFully(bytes);
...
There are some other flavors of this method, but I use this all the time for this use case.
If you happen to use Google Guava, it'll be as simple as using ByteStreams:
byte[] bytes = ByteStreams.toByteArray(inputStream);
Safe solution (close streams correctly):
Java 9 and newer:
final byte[] bytes;
try (inputStream) {
bytes = inputStream.readAllBytes();
}
Java 8 and older:
public static byte[] readAllBytes(InputStream inputStream) throws IOException {
final int bufLen = 4 * 0x400; // 4KB
byte[] buf = new byte[bufLen];
int readLen;
IOException exception = null;
try {
try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) {
while ((readLen = inputStream.read(buf, 0, bufLen)) != -1)
outputStream.write(buf, 0, readLen);
return outputStream.toByteArray();
}
} catch (IOException e) {
exception = e;
throw e;
} finally {
if (exception == null) inputStream.close();
else try {
inputStream.close();
} catch (IOException e) {
exception.addSuppressed(e);
}
}
}
Kotlin (when Java 9+ isn't accessible):
@Throws(IOException::class)
fun InputStream.readAllBytes(): ByteArray {
val bufLen = 4 * 0x400 // 4KB
val buf = ByteArray(bufLen)
var readLen: Int = 0
ByteArrayOutputStream().use { o ->
this.use { i ->
while (i.read(buf, 0, bufLen).also { readLen = it } != -1)
o.write(buf, 0, readLen)
}
return o.toByteArray()
}
}
To avoid nested use see here.
Scala (when Java 9+ isn't accessible) (By @Joan. Thx):
def readAllBytes(inputStream: InputStream): Array[Byte] =
Stream.continually(inputStream.read).takeWhile(_ != -1).map(_.toByte).toArray
As always, the Spring framework also has something for you (spring-core since 3.2.2): StreamUtils.copyToByteArray()
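A minimal usage sketch (it assumes spring-core is on the classpath; copyToByteArray leaves the stream open, so the caller still closes it):
import java.io.IOException;
import java.io.InputStream;
import org.springframework.util.StreamUtils;

public static byte[] toBytes(InputStream in) throws IOException {
    // Copies the whole stream into a byte[]; the stream itself is not closed here.
    return StreamUtils.copyToByteArray(in);
}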
public static byte[] getBytesFromInputStream(InputStream is) throws IOException {
ByteArrayOutputStream os = new ByteArrayOutputStream();
byte[] buffer = new byte[0xFFFF];
for (int len = is.read(buffer); len != -1; len = is.read(buffer)) {
os.write(buffer, 0, len);
}
return os.toByteArray();
}
In case someone is still looking for a solution without a dependency, and you have a file:
DataInputStream
byte[] data = new byte[(int) file.length()];
DataInputStream dis = new DataInputStream(new FileInputStream(file));
dis.readFully(data);
dis.close();
ByteArrayOutputStream
InputStream is = new FileInputStream(file);
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
int nRead;
byte[] data = new byte[(int) file.length()];
while ((nRead = is.read(data, 0, data.length)) != -1) {
buffer.write(data, 0, nRead);
}
RandomAccessFile
RandomAccessFile raf = new RandomAccessFile(file, "r");
byte[] data = new byte[(int) raf.length()];
raf.readFully(data);
Do you really need the image as a byte[]? What exactly do you expect in the byte[] - the complete content of an image file, encoded in whatever format the image file is in, or RGB pixel values?
Other answers here show you how to read a file into a byte[]. Your byte[] will contain the exact contents of the file, and you'd need to decode that to do anything with the image data.
Java's standard API for reading (and writing) images is the ImageIO API, which you can find in the package javax.imageio. You can read in an image from a file with just a single line of code:
BufferedImage image = ImageIO.read(new File("image.jpg"));
This will give you a BufferedImage, not a byte[]. To get at the image data, you can call getRaster() on the BufferedImage. This will give you a Raster object, which has methods to access the pixel data (it has several getPixel() / getPixels() methods).
Lookup the API documentation for javax.imageio.ImageIO, java.awt.image.BufferedImage, java.awt.image.Raster etc.
ImageIO supports a number of image formats by default: JPEG, PNG, BMP, WBMP and GIF. It's possible to add support for more formats (you'd need a plug-in that implements the ImageIO service provider interface).
See also the following tutorial: Working with Images
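A hedged sketch of that ImageIO/Raster route (the file name is a placeholder):
import java.awt.image.BufferedImage;
import java.awt.image.Raster;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class PixelDemo {
    public static void main(String[] args) throws IOException {
        BufferedImage image = ImageIO.read(new File("image.jpg")); // placeholder path
        Raster raster = image.getRaster();
        // Read the samples (e.g. R, G, B) of the top-left pixel.
        int[] pixel = raster.getPixel(0, 0, (int[]) null);
        System.out.println("Bands per pixel: " + pixel.length);
    }
}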
If you don't want to use the Apache commons-io library, this snippet is taken from the sun.misc.IOUtils class. It's nearly twice as fast as the common implementation using ByteBuffers:
public static byte[] readFully(InputStream is, int length, boolean readAll)
throws IOException {
byte[] output = {};
if (length == -1) length = Integer.MAX_VALUE;
int pos = 0;
while (pos < length) {
int bytesToRead;
if (pos >= output.length) { // Only expand when there's no room
bytesToRead = Math.min(length - pos, output.length + 1024);
if (output.length < pos + bytesToRead) {
output = Arrays.copyOf(output, pos + bytesToRead);
}
} else {
bytesToRead = output.length - pos;
}
int cc = is.read(output, pos, bytesToRead);
if (cc < 0) {
if (readAll && length != Integer.MAX_VALUE) {
throw new EOFException("Detect premature EOF");
} else {
if (output.length != pos) {
output = Arrays.copyOf(output, pos);
}
break;
}
}
pos += cc;
}
return output;
}
ByteArrayOutputStream out = new ByteArrayOutputStream();
byte[] buffer = new byte[1024];
while (true) {
int r = in.read(buffer);
if (r == -1) break;
out.write(buffer, 0, r);
}
byte[] ret = out.toByteArray();
@Adamski: You can avoid the buffer entirely.
Code copied from http://www.exampledepot.com/egs/java.io/File2ByteArray.html (Yes, it is very verbose, but it needs half as much memory as the other solution.)
// Returns the contents of the file in a byte array.
public static byte[] getBytesFromFile(File file) throws IOException {
InputStream is = new FileInputStream(file);
// Get the size of the file
long length = file.length();
// You cannot create an array using a long type.
// It needs to be an int type.
// Before converting to an int type, check
// to ensure that file is not larger than Integer.MAX_VALUE.
if (length > Integer.MAX_VALUE) {
// File is too large
}
// Create the byte array to hold the data
byte[] bytes = new byte[(int)length];
// Read in the bytes
int offset = 0;
int numRead = 0;
while (offset < bytes.length
&& (numRead=is.read(bytes, offset, bytes.length-offset)) >= 0) {
offset += numRead;
}
// Ensure all the bytes have been read in
if (offset < bytes.length) {
throw new IOException("Could not completely read file "+file.getName());
}
// Close the input stream and return bytes
is.close();
return bytes;
}
Given an InputStream in:
ByteArrayOutputStream bos = new ByteArrayOutputStream();
int next = in.read();
while (next > -1) {
bos.write(next);
next = in.read();
}
bos.flush();
byte[] result = bos.toByteArray();
bos.close();
Java 9 finally gives you a nice method:
InputStream in = ...;
ByteArrayOutputStream bos = new ByteArrayOutputStream();
in.transferTo( bos );
byte[] bytes = bos.toByteArray();
We are seeing some delay for a few AWS transactions while converting an S3 object to a byte array.
Note: the S3 object is a PDF document (max size is 3 MB).
We are using option #1 (org.apache.commons.io.IOUtils) to convert the S3 object to a byte array. We have noticed that S3 provides a built-in IOUtils method for this; please confirm what the best way is to convert the S3 object to a byte array so that we avoid the delay.
Option #1:
import org.apache.commons.io.IOUtils;
is = s3object.getObjectContent();
content =IOUtils.toByteArray(is);
Option #2:
import com.amazonaws.util.IOUtils;
is = s3object.getObjectContent();
content =IOUtils.toByteArray(is);
Also let me know if there is any better way to convert the S3 object to a byte array.
I know it's too late, but here, I think, is a cleaner solution that's more readable...
/**
* Converts an {@link InputStream} into a byte[] array.
*
* @param stream the {@link InputStream} to read.
* @return the byte[] array representation of the received {@link InputStream}.
* @throws IOException if an error occurs.
*/
public static byte[] streamToByteArray(InputStream stream) throws IOException {
byte[] buffer = new byte[1024];
ByteArrayOutputStream os = new ByteArrayOutputStream();
int line = 0;
// read bytes from stream, and store them in buffer
while ((line = stream.read(buffer)) != -1) {
// Writes bytes from byte array (buffer) into output stream.
os.write(buffer, 0, line);
}
stream.close();
os.flush();
os.close();
return os.toByteArray();
}
I tried to edit @numan's answer with a fix for writing garbage data, but the edit was rejected. While this short piece of code is nothing brilliant, I can't see any better answer. Here's what makes most sense to me:
ByteArrayOutputStream out = new ByteArrayOutputStream();
byte[] buffer = new byte[1024]; // you can configure the buffer size
int length;
while ((length = in.read(buffer)) != -1) out.write(buffer, 0, length); //copy streams
in.close(); // call this in a finally block
byte[] result = out.toByteArray();
By the way, ByteArrayOutputStream need not be closed; try/finally constructs are omitted for readability (see the sketch below).
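For completeness, a sketch of the same loop with the omitted resource handling written out as try-with-resources:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public static byte[] copyToByteArray(InputStream in) throws IOException {
    // try-with-resources closes the input even if read() or write() throws
    try (InputStream input = in) {
        ByteArrayOutputStream out = new ByteArrayOutputStream(); // needs no close()
        byte[] buffer = new byte[1024];
        int length;
        while ((length = input.read(buffer)) != -1) {
            out.write(buffer, 0, length);
        }
        return out.toByteArray();
    }
}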
See the InputStream.available() documentation:
It is particularly important to realize that you must not use this
method to size a container and assume that you can read the entirety
of the stream without needing to resize the container. Such callers
should probably write everything they read to a ByteArrayOutputStream
and convert that to a byte array. Alternatively, if you're reading
from a file, File.length returns the current length of the file
(though assuming the file's length can't change may be incorrect,
reading a file is inherently racy).
Wrap it in a DataInputStream. If that is off the table for some reason, just use read() to hammer on it until it gives you a -1 or the entire block you asked for:
public int readFully(InputStream in, byte[] data) throws IOException {
int offset = 0;
int bytesRead;
boolean read = false;
while ((bytesRead = in.read(data, offset, data.length - offset)) != -1) {
read = true;
offset += bytesRead;
if (offset >= data.length) {
break;
}
}
return (read) ? offset : -1;
}
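The DataInputStream wrapping mentioned above could look like this (a sketch that assumes the caller already knows how many bytes to expect):
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public static byte[] readExactly(InputStream in, int expectedLength) throws IOException {
    byte[] data = new byte[expectedLength];
    // readFully loops internally and throws EOFException if the stream ends early.
    new DataInputStream(in).readFully(data);
    return data;
}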
Java 8 way (thanks to BufferedReader and Adam Bien)
private static byte[] readFully(InputStream input) throws IOException {
try (BufferedReader buffer = new BufferedReader(new InputStreamReader(input))) {
return buffer.lines().collect(Collectors.joining("\n")).getBytes(<charset_can_be_specified>);
}
}
Note that this solution wipes carriage return ('\r') and can be inappropriate.
Another case is getting the correct byte array via a stream, after sending a request to the server and waiting for the response.
/**
* Begin setup TCP connection to PC app
* to open integrate connection between mobile app and pc app (or mobile app)
*/
mSocket = new Socket(IP, port);
// mSocket.setSoTimeout(30000);
DataOutputStream mDos = new DataOutputStream(mSocket.getOutputStream());
String str = "MobileRequest#" + params[0] + "#<EOF>";
mDos.write(str.getBytes());
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
/* Since the data arrive as bytes, they will all be collected in the
following byte array, which is initialised with the length of the available data. */
DataInputStream mDis = new DataInputStream(mSocket.getInputStream());
byte[] data = new byte[mDis.available()];
// Collecting data into byte array
for (int i = 0; i < data.length; i++)
data[i] = mDis.readByte();
// Converting collected data in byte array into String.
String RESPONSE = new String(data);
You're doing an extra copy if you use ByteArrayOutputStream. If you know the length of the stream before you start reading it (e.g. the InputStream is actually a FileInputStream, and you can call file.length() on the file, or the InputStream is a zipfile entry InputStream, and you can call zipEntry.getSize()), then it's far better to write directly into the byte[] array -- it uses half the memory, and saves time.
// Read the file contents into a byte[] array
byte[] buf = new byte[inputStreamLength];
int bytesRead = Math.max(0, inputStream.read(buf));
// If needed: for safety, truncate the array if the file may somehow get
// truncated during the read operation
byte[] contents = bytesRead == inputStreamLength ? buf
: Arrays.copyOf(buf, bytesRead);
N.B. the last line above deals with the file getting truncated while the stream is being read, if you need to handle that possibility. If the file gets longer while the stream is being read, however, the contents of the byte[] array will not be lengthened to include the new file content; the array will simply be truncated to the old length inputStreamLength.
I use this.
public static byte[] toByteArray(InputStream is) throws IOException {
ByteArrayOutputStream output = new ByteArrayOutputStream();
try {
byte[] b = new byte[4096];
int n = 0;
while ((n = is.read(b)) != -1) {
output.write(b, 0, n);
}
return output.toByteArray();
} finally {
output.close();
}
}
This is my copy-paste version:
@SuppressWarnings("empty-statement")
public static byte[] inputStreamToByte(InputStream is) throws IOException {
if (is == null) {
return null;
}
// Define a size if you have an idea of it.
ByteArrayOutputStream r = new ByteArrayOutputStream(2048);
byte[] read = new byte[512]; // Your buffer size.
for (int i; -1 != (i = is.read(read)); r.write(read, 0, i));
is.close();
return r.toByteArray();
}
Java 7 and later:
import sun.misc.IOUtils;
...
InputStream in = ...;
byte[] buf = IOUtils.readFully(in, -1, false);
You can try Cactoos:
byte[] array = new BytesOf(stream).bytes();
Here is an optimized version, that tries to avoid copying data bytes as much as possible:
private static byte[] loadStream (InputStream stream) throws IOException {
int available = stream.available();
int expectedSize = available > 0 ? available : -1;
return loadStream(stream, expectedSize);
}
private static byte[] loadStream (InputStream stream, int expectedSize) throws IOException {
int basicBufferSize = 0x4000;
int initialBufferSize = (expectedSize >= 0) ? expectedSize : basicBufferSize;
byte[] buf = new byte[initialBufferSize];
int pos = 0;
while (true) {
if (pos == buf.length) {
int readAhead = -1;
if (pos == expectedSize) {
readAhead = stream.read(); // test whether EOF is at expectedSize
if (readAhead == -1) {
return buf;
}
}
int newBufferSize = Math.max(2 * buf.length, basicBufferSize);
buf = Arrays.copyOf(buf, newBufferSize);
if (readAhead != -1) {
buf[pos++] = (byte)readAhead;
}
}
int len = stream.read(buf, pos, buf.length - pos);
if (len < 0) {
return Arrays.copyOf(buf, pos);
}
pos += len;
}
}
Solution in Kotlin (will work in Java too, of course), covering both cases: when you know the size and when you don't:
fun InputStream.readBytesWithSize(size: Long): ByteArray? {
return when {
size < 0L -> this.readBytes()
size == 0L -> ByteArray(0)
size > Int.MAX_VALUE -> null
else -> {
val sizeInt = size.toInt()
val result = ByteArray(sizeInt)
readBytesIntoByteArray(result, sizeInt)
result
}
}
}
fun InputStream.readBytesIntoByteArray(byteArray: ByteArray, bytesToRead: Int = byteArray.size) {
var offset = 0
while (true) {
val read = this.read(byteArray, offset, bytesToRead - offset)
if (read == -1)
break
offset += read
if (offset >= bytesToRead)
break
}
}
If you know the size, this saves you from briefly using double the memory compared to the other solutions. That's because otherwise you have to read the entire stream to the end and then convert it to a byte array (similar to an ArrayList that you convert to a plain array).
So, if you are on Android, for example, and you have some Uri to handle, you can try to get the size using this:
fun getStreamLengthFromUri(context: Context, uri: Uri): Long {
context.contentResolver.query(uri, arrayOf(MediaStore.MediaColumns.SIZE), null, null, null)?.use {
if (!it.moveToNext())
return#use
val fileSize = it.getLong(it.getColumnIndex(MediaStore.MediaColumns.SIZE))
if (fileSize > 0)
return fileSize
}
//if you wish, you can also get the file-path from the uri here, and then try to get its size, using this: https://stackoverflow.com/a/61835665/878126
FileUtilEx.getFilePathFromUri(context, uri, false)?.use {
val file = it.file
val fileSize = file.length()
if (fileSize > 0)
return fileSize
}
context.contentResolver.openInputStream(uri)?.use { inputStream ->
if (inputStream is FileInputStream)
return inputStream.channel.size()
else {
var bytesCount = 0L
while (true) {
val available = inputStream.available()
if (available == 0)
break
val skip = inputStream.skip(available.toLong())
if (skip < 0)
break
bytesCount += skip
}
if (bytesCount > 0L)
return bytesCount
}
}
return -1L
}
You can use the Cactoos library, which provides reusable object-oriented Java components.
This library emphasizes OOP: no static methods, no NULLs, and so on, only real objects and their contracts (interfaces).
A simple operation like reading an InputStream can be performed like this:
final InputStream input = ...;
final Bytes bytes = new BytesOf(input);
final byte[] array = bytes.asBytes();
Assert.assertArrayEquals(
array,
new byte[]{65, 66, 67}
);
Having a dedicated type Bytes for working with the byte[] data structure enables us to use OOP tactics for the task at hand,
something that a procedural "utility" method would prevent.
For example, suppose you need to encode the bytes you've read from this InputStream in Base64.
In that case you use the Decorator pattern and wrap the Bytes object in an implementation for Base64.
cactoos already provides such implementation:
final Bytes encoded = new BytesBase64(
new BytesOf(
new InputStreamOf("XYZ")
)
);
Assert.assertEquals(new TextOf(encoded).asString(), "WFla");
You can decode them in the same manner, by using Decorator pattern
final Bytes decoded = new Base64Bytes(
new BytesBase64(
new BytesOf(
new InputStreamOf("XYZ")
)
)
);
Assert.assertEquals(new TextOf(decoded).asString(), "XYZ");
Whatever your task is, you will be able to create your own implementation of Bytes to solve it.

Java input and output binary file byte size are not matching after transformation

My input data set is 1201x1201 elements of 16-bit integers (2 bytes) in binary format. The total file size is 2884802 bytes. I read this data into Java using a ByteBuffer and then wrote it out as a 2-dimensional array of unsigned shorts using ObjectOutputStream's writeShort() method. Now my file size is 2898893 bytes. Why the difference?
FileChannel fileInputChannel = new FileInputStream(fileInput).getChannel();
ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream(fileOutput));
short[][] data = new short[1201][1201];
ByteBuffer bb = ByteBuffer.allocateDirect(2884802);
while (bb.remaining() > 0)
fileInputChannel.read(bb);
fileInputChannel.close();
bb.flip();
ShortBuffer sb=null;
if (ByteOrder.nativeOrder().equals(ByteOrder.BIG_ENDIAN))
{
sb = bb.order(ByteOrder.BIG_ENDIAN).asShortBuffer();
}
else
{
sb = bb.order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
}
for (int i=0;i<1201;i++)
{
for (int j=0;j<1201;j++)
{
data[i][j] = sb.get();
oos.writeShort(data[i][j] & 0xFFFF);
}
}
Don't use an ObjectOutputStream; that is meant to be used for Java object serialization. Use the FileOutputStream you already have, something like this:
FileChannel fileInputChannel = new FileInputStream(fileInput).getChannel();
FileOutputStream fos = new FileOutputStream(fileOutput);
short[][] data = new short[1201][1201];
ByteBuffer bb = ByteBuffer.allocateDirect(2884802);
while (bb.remaining() > 0)
fileInputChannel.read(bb);
fileInputChannel.close();
bb.flip();
ShortBuffer sb=null;
if (ByteOrder.nativeOrder().equals(ByteOrder.BIG_ENDIAN))
{
    sb = bb.order(ByteOrder.BIG_ENDIAN).asShortBuffer();
    for (int i=0;i<1201;i++)
    {
        for (int j=0;j<1201;j++)
        {
            data[i][j] = sb.get();
            fos.write((data[i][j] >> 8) & 0xFF);
            fos.write(data[i][j] & 0xFF);
        }
    }
}
else
{
    sb = bb.order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
    for (int i=0;i<1201;i++)
    {
        for (int j=0;j<1201;j++)
        {
            data[i][j] = sb.get();
            fos.write(data[i][j] & 0xFF);
            fos.write((data[i][j] >> 8) & 0xFF);
        }
    }
}
Use a DataOutputStream, not an ObjectOutputStream (or use a ByteBuffer). An ObjectOutputStream writes serialization protocol data as well (a stream header and block markers, plus class information when objects are involved), which is where the extra bytes come from.
In fact the above code does a simple Files.copy, apart from the byte-order handling, but I assume you intend to do some processing.
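A hedged sketch of the DataOutputStream variant: writeShort() emits exactly two big-endian bytes per value, so 1201 x 1201 shorts come out as exactly 2884802 bytes, with no stream header or block markers. (data and fileOutput stand for the array and file name from the question.)
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

static void writeShortsBigEndian(short[][] data, String fileOutput) throws IOException {
    try (DataOutputStream dos = new DataOutputStream(
            new BufferedOutputStream(new FileOutputStream(fileOutput)))) {
        for (short[] row : data) {
            for (short value : row) {
                dos.writeShort(value); // two bytes, high byte first, no extra framing
            }
        }
    }
}
If the original little-endian layout has to be preserved instead, the two-byte writes from the answer above (or a ByteBuffer set to ByteOrder.LITTLE_ENDIAN) are the way to go.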

Java Socket synchronization behavior

I have tried to solve the problem in many ways without success, and I have also looked for information in this forum with the same result, so here we go.
I am writing a server daemon that accepts client requests and then transfers (the server does the sending) all the files contained in a specific folder. I'm going to post the code of sendFileData (on the server) and receiveFileData (on the client).
The server uses:
public static void sendFileData(File file, Socket socket) throws FileNotFoundException, IOException, SocketException {
byte[] auxiliar = new byte[8192];
byte[] mybytearray = new byte[(int) file.length()];
int longitud = mybytearray.length;
BufferedInputStream bis = new BufferedInputStream(new FileInputStream(file));
bis.read(mybytearray, 0, longitud);
DataOutputStream os = new DataOutputStream(socket.getOutputStream());
int paquetes = longitud / 8187;
int resto = longitud % 8187;
int i = 0;
while(i<paquetes){//The length goes on the first 4 bytes and the 5th tells if there are more packets to send (8192 bytes or less).
byte[] bytes = ByteBuffer.allocate(4).putInt(8187).array();
auxiliar[0] = bytes[0];
auxiliar[1] = bytes[1];
auxiliar[2] = bytes[2];
auxiliar[3] = bytes[3];
auxiliar[4] = 1;
for(int j = 5; j < 8192; j++){
auxiliar[j] = mybytearray[i*8187+(j-5)];
}
os.write(auxiliar, 0, 8192);
i+=1;
}
if(resto > 0){
byte[] bytes = ByteBuffer.allocate(4).putInt(resto).array();
auxiliar[0] = bytes[0];
auxiliar[1] = bytes[1];
auxiliar[2] = bytes[2];
auxiliar[3] = bytes[3];
auxiliar[4] = 0;
for(int j = 5; j < resto+5; j++){
auxiliar[j] = mybytearray[i*8187+(j-5)];
}
os.write(auxiliar, 0, resto+5);
}
os.flush();
}
And in the client side:
public static void receiveFileData(String nombreFichero, Socket s) throws IOException{
File monitored = new File(nombreFichero);
if(monitored.exists() == false){
monitored.createNewFile();
}
byte[] mybytearray;
DataInputStream is = new DataInputStream(s.getInputStream());
FileOutputStream fos = new FileOutputStream(monitored);
BufferedOutputStream bos = new BufferedOutputStream(fos);
int bytesRead = 0;
int hasNext = 1;
do {
bytesRead = is.readInt();// Read the length
try {
Thread.sleep(1);// HERE!!!!
} catch (InterruptedException e) {
}
// System.out.println("Bytes read "+bytesRead );
if(bytesRead <= 8187 && bytesRead > 0){
// System.out.println("Bytes leídos "+bytesRead );
hasNext = is.readByte();// Read whether there is more data to send
mybytearray = new byte[bytesRead];
is.read(mybytearray);
if(monitored.exists()){
synchronized(monitored){
bos.write(mybytearray, 0, mybytearray.length);
}
}
mybytearray = null;
}else{
System.out.println("Fuera de rango "+bytesRead);
}
}while(hasNext == 1);
bos.close();
mybytearray = null;
System.out.println("Fichero recibido: "+monitored.getAbsolutePath());
}
In the receiveFileData code, if I do not put a Thread.sleep(1), a System.out.println(), or anything else that takes time to execute, I do not receive the data correctly on the client: readInt() returns a very large number, randomly negative or positive (which leads to heap out-of-memory and other exceptions).
It must be something about synchronization, but I think the transfer scheme between the two methods is correct (maybe the client is too slow and the server too fast).
What is happening? I do not want to keep the Thread.sleep; that is not good programming, I think.
Thank you so much!
is.read(bytes) is not guaranteed to fill the supplied byte array. You need to check its return value to see how many bytes were read or (better) use readFully().
The sleep() probably just allows time for all bytes to have been returned from the socket.
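Concretely, the client loop could drop the sleep and replace the plain read() with readFully(), which blocks until the whole chunk has arrived. A sketch of just the affected lines, reusing the variables from the question:
bytesRead = is.readInt();          // length of this chunk
if (bytesRead <= 8187 && bytesRead > 0) {
    hasNext = is.readByte();       // 1 = more chunks follow, 0 = last chunk
    mybytearray = new byte[bytesRead];
    is.readFully(mybytearray);     // waits for all bytesRead bytes instead of whatever happens to be buffered
    bos.write(mybytearray, 0, mybytearray.length);
}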

Ensuring no packet loss between TCP client and server

I am writing a Java TCP client which sends chunks of data to a C server. The client and server worked very well on my development PC, but after deployment on a hardware board the code showed packet loss. I only have the logs, and from them I know that the server did not receive all packets.
I do not have the hardware to test with. Therefore, as a first step, I want to be very sure the client code sends all the required data.
Here is my code (the client part, in Java). How do I make sure this is done? Are there resend commands, timeouts, and so on?
Socket mySocket = new Socket("10.0.0.2",2800);
OutputStream os = mySocket.getOutputStream();
System.out.println(" Sending 8 byte Header Msg with length of following data to Server");
os.write(hdr, 0, 8);
os.flush();
System.out.println(" Sending Data ");
start = 0;
for(int index=0; index < ((rbuffer.length/chucksize)+1); index++){
if(start + chucksize > rbuffer.length) {
System.arraycopy(rbuffer, start, val, 0, rbuffer.length - start);
} else {
System.arraycopy(rbuffer, start, val, 0, chucksize);
}
start += chucksize ;
os.write(val,0,chucksize);
os.flush();
}
Here is the C snippet which receives this data:
while ((bytes_received = recv(connected, rMsg, sizeof(rMsg),0)) > 0){
if (bytes_received > 0) // zero indicates end of transmission */
{
/* get length of message (2 bytes) */
tmpVal = 0;
tmpVal |= rMsg[idx++];
tmpVal = tmpVal << 8;
tmpVal |= rMsg[idx++];
msg_len = tmpVal;
len = msg_len;
//printf("msg_len = %d\n", len);
printf("length of following message from header message : %d\n", len);
char echoBuffer[RCVBUFSIZE] ;
memset(echoBuffer, 0, RCVBUFSIZE);
int recvMsgsize = 0;
plain=(char *)malloc(len+1);
if (!plain)
{
fprintf(stderr, "Memory error!");
}
for( i = RCVBUFSIZE; i < (len+RCVBUFSIZE); i=i+RCVBUFSIZE){
if(i>=len){
recvMsgSize = recv(connected, echoBuffer, (len - (i-RCVBUFSIZE)), 0);
memcpy(&plain[k], echoBuffer, recvMsgSize);
k = k+recvMsgSize;
}
else{
recvMsgSize = recv(connected, echoBuffer, RCVBUFSIZE, 0);
memcpy(&plain[k], echoBuffer, recvMsgSize);
k = k+recvMsgSize;
}
}
}//closing if
}//closing while
First of all, there is no such thing as packet loss in TCP/IP. The protocol was designed to reliably deliver a stream of bytes in the correct order, so the problem must be with your application or with the other side.
I am not really in the mood to analyze this whole arraycopy() madness (C, anyone?), but why aren't you just sending the whole rbuffer in one go through a BufferedOutputStream?
OutputStream os = new BufferedOutputStream(mySocket.getOutputStream());
and then:
os.write(rbuffer);
Believe me, BufferedOutputStream is doing the exact same thing (collecting bytes into chunks and sending them in one go). Or maybe I am missing something?
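Putting that suggestion together with the 8-byte header the question's protocol needs (a sketch; hdr and rbuffer stand for the arrays from the question, and the host/port are the ones shown there):
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

static void send(byte[] hdr, byte[] rbuffer) throws IOException {
    try (Socket socket = new Socket("10.0.0.2", 2800);
         OutputStream os = new BufferedOutputStream(socket.getOutputStream())) {
        os.write(hdr, 0, 8);   // 8-byte header carrying the length of the payload
        os.write(rbuffer);     // the whole payload in one call
        os.flush();            // push out anything still sitting in the buffer
    }
}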
I changed the C side program in the following way and it now works:
printf("length of following message from header message : %d\n", len);
plain=(char *)malloc(len+1);
if (!plain)
{
fprintf(stderr, "Memory error!");
}
memset(plain, 0, len+1);
int remain = len;
k= 0;
while (remain){
int toGet = remain > RCVBUFSIZE ? RCVBUFSIZE : remain;
remain -= toGet;
int recvd = 0;
while(recvd < toGet) {
if((recvMsgSize = recv(connected, echoBuffer, toGet-recvd, 0)) < 0){
printf("error receiving data\n");
}
memcpy(&plain[k], echoBuffer, recvMsgSize);
k += recvMsgSize;
printf("Total data accumulated after recv input %d\n", k);
recvd += recvMsgSize;
}
}
