In my team, we have an issue with a specific endpoint which, when called with certain parameters, returns a huge JSON in chunks. For example, if the JSON has 1,000 rows, about 30 seconds after opening the URL in our browser (it's a GET endpoint) we get 100 rows, then after a few more seconds the next 200, and so on until the JSON is exhausted. This is a problem for us because our application times out before retrieving the full JSON. We want to emulate the behavior of the endpoint with an example endpoint of our own, for debugging purposes.
So far, the following is what I have. For simplicity, I'm not even reading a JSON, just a randomly generated string. The logs show me that I'm reading the data a few bytes at a time, writing it, and flushing the OutputStream. The crucial difference is that my browser (or Postman) shows me the data all at once at the very end, not in chunks. Is there anything I can do so that I can see the data coming back in chunks?
private static final int readBufSize = 10;
private static final int generatedStringSize = readBufSize * 10000;

@GetMapping(path = "/v2/payload/mocklargepayload")
public void simulateLargePayload(HttpServletResponse response) {
    try (InputStream inputStream = IOUtils.toInputStream(RandomStringUtils.randomAlphanumeric(generatedStringSize));
         OutputStream outputStream = response.getOutputStream()) {
        final byte[] buffer = new byte[readBufSize];
        for (int i = 0; i < generatedStringSize; i += readBufSize) {
            inputStream.read(buffer, 0, readBufSize - 1);
            buffer[buffer.length - 1] = '\n';
            log.info("Read bytes: {}", buffer);
            outputStream.write(buffer);
            log.info("Wrote bytes {}", buffer);
            Thread.sleep(500);
            log.info("Flushing stream");
            outputStream.flush();
        }
    } catch (IOException | InterruptedException e) {
        log.error("Received exception: {}", e.getClass().getSimpleName());
    }
}
Your endpoint should return a "Content-Length" header specifying the total size of the data it will return. That informs your client how much data to expect. Also, you can read the data chunk by chunk as it becomes available. I had the reverse problem: I was writing a large upload into my endpoint (POST), and the endpoint was reading faster than I was writing, so at some point, having read all the data available so far, it stopped reading and assumed that was the end. So I wrote the code below, and you can implement the same logic on your client side:
@PostMapping
public ResponseEntity<String> uploadTest(HttpServletRequest request) {
    try {
        String lengthStr = request.getHeader("content-length");
        // TextUtils and TimeUtils come from the MgntUtils library mentioned below
        int length = TextUtils.parseStringToInt(lengthStr, -1);
        if (length > 0) {
            byte[] buff = new byte[length];
            ServletInputStream sis = request.getInputStream();
            int counter = 0;
            while (counter < length) {
                // Read only what has actually arrived so far
                int chunkLength = sis.available();
                byte[] chunk = new byte[chunkLength];
                sis.read(chunk);
                for (int i = counter, j = 0; i < counter + chunkLength; i++, j++) {
                    buff[i] = chunk[j];
                }
                counter += chunkLength;
                if (counter < length) {
                    // Give the sender time to deliver the next chunk
                    TimeUtils.sleepFor(5, TimeUnit.MILLISECONDS);
                }
            }
            Files.write(Paths.get("C:\\Michael\\tmp\\testPic.jpg"), buff);
        }
    } catch (Exception e) {
        System.out.println(TextUtils.getStacktrace(e));
    }
    return ResponseEntity.ok("Success");
}
Also, I wrote a general read/write feature for the same problem (again for the server side), but you can implement the same logic on the client side as well. The feature reads the data in chunks as it becomes available. It comes with the open-source library MgntUtils (written and maintained by me); see class WebUtils. The library with source code and Javadoc is available on Github here. Javadoc is here. It is also available as a Maven artifact here.
I am using C# to create a server software for Windows and Java to create the client software.
It works fine most of the time, except for those few exceptions that I don't understand.
I am generally using .ReadLine() and .WriteLine() on both ends to communicate, unless I try to send binary data. That's when I write and read the bytes directly.
This is how the software is supposed to work:
Client requests the binary data
Server responds with the length of the binary data as a string
Client receives the length and converts it into an integer and starts reading (length) bytes
Server starts writing (length) bytes
It works in most cases, but sometimes the client app doesn't receive the full data and blocks. The server always immediately flushes after writing data, so flushing is not the problem.
Furthermore, I've noticed this usually happens with larger files; small files (up to ~1 MB) are usually not a problem.
NOTE It seems like the C# server does send the data completely, so the problem is most likely somewhere in the Java code.
EDIT - Here are some logs from the client side
Working download: pastebin.com/hFd5TvrF
Failing download: pastebin.com/Q3zFWRLB
It seems like the client is waiting for 2048 bytes at the end (as it should be, as length - processed = 2048 in this case), but for some reason the client blocks.
Any ideas what I'm doing wrong? Below are the source codes of both server and client:
C# Server:
public void Write(BinaryWriter str, byte[] data)
{
    int BUFFER = 2048;
    int PROCESSED = 0;
    // WriteString sends the String using a StreamWriter (+ flushing)
    WriteString(data.Length.ToString());
    while (PROCESSED < data.Length)
    {
        if (PROCESSED + BUFFER > data.Length)
            BUFFER = data.Length - PROCESSED;
        str.Write(data, PROCESSED, BUFFER);
        str.Flush();
        PROCESSED += BUFFER;
    }
}
Java Client:
public byte[] ReadBytes(int length) {
    byte[] buffer = new byte[length];
    int PROCESSED = 0;
    int READBUF = 2048;
    TOTAL = length;
    progress.setMax(TOTAL);
    InputStream m;
    try {
        m = clientSocket.getInputStream();
        while (PROCESSED < length) {
            if (PROCESSED + READBUF > length)
                READBUF = length - PROCESSED;
            try {
                PROCESSED += m.read(buffer, PROCESSED, READBUF);
            } catch (IOException e) {
            }
            XPROCESSED = PROCESSED;
        }
    } catch (IOException e1) {
        // Removed because of sensitive data
    }
    return decryptData(buffer);
}
I've found a fix. Previously, the server sent the length and immediately afterwards sent the byte array. For some reason this does not work.
So what I've changed is:
Send length and wait for the client to respond with "OK"
Start writing bytes
Not sure why, but it works. I ran it in a while(true) loop and it sent data 1000 times in 4 minutes straight with no problems, so I guess it's fixed.
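For illustration, here is a minimal sketch of the client side of that handshake (reader and writer are hypothetical wrappers around the socket streams, standing in for the ReadLine/WriteLine helpers mentioned above):

int length = Integer.parseInt(reader.readLine()); // 1. receive the length as a string
writer.println("OK");                             // 2. acknowledge so the server starts writing
writer.flush();
byte[] data = ReadBytes(length);                  // 3. read exactly 'length' bytes as before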
I'm trying to create a java program that downloads certain asset files from an FTP server to a local file. Because my (free) FTP server doesn't support file sizes over a few megabytes, I decided to split up the files when they are uploaded and recombine them when the program downloads them. This works, but it is rather slow, because for each file, it has to get the InputStream, which takes some time.
The FTP server I use has a way to download the files without actually logging into the server, so I'm using this code to get the InputStream:
private static final InputStream getInputStream(String file) throws IOException {
    return new URL("http://site.website.com/path/" + file).openStream();
}
To get the InputStream of a part of the asset file I'm using this code:
public static InputStream getAssetInputStream(String asset, int num) throws IOException, FTPException {
    try {
        return getInputStream("assets/" + asset + "_" + num + ".raf");
    } catch (Exception e) {
        // error handling
    }
}
Because the getAssetInputStream(String, int) method takes some time to run (especially if the file size is more than a megabyte), I decided to make the code that actually downloads the file multi-threaded. Here is where my problem lies.
final Map<Integer, Boolean> done = new HashMap<Integer, Boolean>();
final Map<Integer, byte[]> parts = new HashMap<Integer, byte[]>();
for (int i = 0; i < numParts; i++) {
    final int part = i;
    done.put(part, false);
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                InputStream is = FTP.getAssetInputStream(asset, part);
                ByteArrayOutputStream baos = new ByteArrayOutputStream();
                byte[] buf = new byte[DOWNLOAD_BUFFER_SIZE];
                int len = 0;
                while ((len = is.read(buf)) > 0) {
                    baos.write(buf, 0, len);
                    curDownload.addAndGet(len);
                    totAssets.addAndGet(len);
                }
                parts.put(part, baos.toByteArray());
                done.put(part, true);
            } catch (IOException e) {
                // error handling
            } catch (FTPException e) {
                // error handling
            }
        }
    }, "Download-" + asset + "-" + i).start();
}
while (done.values().contains(false)) {
    try {
        Thread.sleep(100);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
File assetFile = new File(dir, "assets/" + asset + ".raf");
assetFile.createNewFile();
FileOutputStream fos = new FileOutputStream(assetFile);
for (int i = 0; i < numParts; i++) {
    fos.write(parts.get(i));
}
fos.close();
This code works, but not always. When I run it on my desktop computer, it works almost always. Not 100% of the time, but often it works just fine. On my laptop, which has a far worse internet connection, it almost never works. The result is a file that is incomplete. Sometimes, it downloads 50% of the file. Sometimes, it downloads 90% of the file, it differs every time.
Now, if I replace the .start() by .run(), the code works just fine, 100% of the time, even on my laptop. It is, however, incredibly slow, so I'd rather not use .run().
Is there a way I could change my code so it does work multi-threaded? Any help will be appreciated.
Firstly, get your FTP server replaced, there are plenty of free FTP servers that support arbitrary file size serving with additional features, but I digress...
Your code seems to have many unrelated problems that could potentially all cause the behavior you are seeing, addressed below:
You have race conditions from accessing the done and parts maps from multiple threads without any synchronization. This can cause data corruption and stale values between threads, potentially causing done.values().contains(false) to return true even when every part has really finished.
You are calling done.values().contains() repeatedly at a high frequency. While the javadoc doesn't explicitly state it, checking whether a hash map contains a value likely traverses every value in O(n) fashion. Coupled with the fact that other threads are modifying the map, you'll get undefined behavior. According to the values() javadoc:
If the map is modified while an iteration over the collection is in progress (except through the iterator's own remove operation), the results of the iteration are undefined.
You are somehow calling new URL("http://site.website.com/path/" + file).openStream(); while stating that you are using FTP. The http:// prefix in the URL determines the protocol openStream() uses, and http:// is not ftp://. Not sure if this is a typo, or whether you really meant HTTP (or have an HTTP server serving identical files).
Any thread raising any type of Exception will cause the code to hang, since the corresponding part will never be marked "completed" (given your busy-wait loop design). Granted, you may have redacted some other logic that guards against this, but otherwise it is a potential problem with the code.
You aren't closing any streams that you've opened. This could mean that the underlying socket itself is also left open. Not only does this constitute resource leakage, if the server itself has some sort of maximum number of simultaneous connection limit, you are only causing new connections to fail because the old, completed transfers are not closed.
Based on the issues above, I propose moving the download logic into a Callable task and running them through an ExecutorService as follows:
LinkedList<Callable<byte[]>> tasksToExecute = new LinkedList<>();
// Populate tasks to run
for (int i = 0; i < numParts; i++) {
    final int part = i;
    // Lambda that downloads one part and returns its bytes
    tasksToExecute.add(() -> {
        InputStream is = null;
        try {
            is = FTP.getAssetInputStream(asset, part);
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            byte[] buf = new byte[DOWNLOAD_BUFFER_SIZE];
            int len = 0;
            while ((len = is.read(buf)) > 0) {
                baos.write(buf, 0, len);
                curDownload.addAndGet(len);
                totAssets.addAndGet(len);
            }
            return baos.toByteArray();
        } catch (IOException e) {
            // handle exception
        } catch (FTPException e) {
            // handle exception
        } finally {
            // Always close the stream so connections are not leaked
            if (is != null) {
                try {
                    is.close();
                } catch (IOException ignored) {}
            }
        }
        return null;
    });
}
// Retrieve an ExecutorService instance; note that work-stealing pools are Java 8+ only.
// Substitute Executors.newFixedThreadPool(nThreads) on Java < 8, or for tight control over the number of simultaneous connections
ExecutorService executor = Executors.newWorkStealingPool(4);
try {
    // Tell the executor to execute all the tasks and hand back the results
    List<Future<byte[]>> resultFutures = executor.invokeAll(tasksToExecute);
    // Populate the file
    File assetFile = new File(dir, "assets/" + asset + ".raf");
    assetFile.createNewFile();
    try (FileOutputStream fos = new FileOutputStream(assetFile)) {
        // Iterate through the futures, writing them to the file in order
        for (Future<byte[]> result : resultFutures) {
            byte[] partData = result.get();
            if (partData == null) {
                // an exception occurred while downloading this part, handle appropriately
            } else {
                fos.write(partData);
            }
        }
    }
} catch (IOException | InterruptedException | ExecutionException ex) {
    // handle exception
} finally {
    executor.shutdown();
}
Using the executor service, you further optimize the multi-threading scenario: the output file starts being written as soon as the pieces are available (in order), and the threads themselves are reused, saving on thread-creation costs.
As mentioned, too many simultaneous connections may cause the server to reject them (or, more dangerously, write an EOF that makes you think the part was fully downloaded). In that case, the number of worker threads can be tweaked via newFixedThreadPool(nThreads) to ensure that, at any given time, only nThreads downloads can happen concurrently.
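Concretely, that substitution is a one-liner (the pool size of 4 here is arbitrary; tune it to what the server tolerates):

// At most 4 simultaneous connections, no matter how many parts there are
ExecutorService executor = Executors.newFixedThreadPool(4);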
I'm trying to work with JSSC.
I built my app according to this link:
https://code.google.com/p/java-simple-serial-connector/wiki/jSSC_examples
My event handler looks like:
static class SerialPortReader implements SerialPortEventListener {
    public void serialEvent(SerialPortEvent event) {
        if (event.isRXCHAR()) { // If data is available
            try {
                byte buffer[] = serialPort.readBytes();
            } catch (SerialPortException ex) {
                System.out.println(ex);
            }
        }
    }
}
The problem is that I never get the incoming data in one piece. (If the message has a length of 100 bytes, I get 48 and 52 bytes in two separate calls.)
- The other side sends me messages of different lengths.
- In the ICD I'm working with, there is a field which tells us the length of the message (from byte #10 to byte #13).
- I can read 14 bytes (serialPort.readBytes(14);), parse the message length, and then read the rest of the message (serialPort.readBytes(messageLength - 14);). But if I do that, I will not have the message in one piece: I will have two separate byte[] arrays, and I need it in one piece (byte[]) without the work of a copy function.
Is that possible?
When working with Ethernet (SocketChannel) we can read data using a ByteBuffer, but with JSSC we can't.
Is there a good alternative to JSSC?
Thanks
You can't rely on any library to give you all the content you need at once, because:
- the library doesn't know how much data you need
- the library gives you data as it arrives, which also depends on buffers, hardware, etc.
You must develop your own business logic to handle packet reception. It will of course depend on how your packets are defined: are they always the same length, are they separated by the same ending character, and so on.
Here is an example that should work with your system (take this as a starting point, not a full solution; it doesn't include a timeout, for example):
static class SerialPortReader implements SerialPortEventListener {
    private int m_nReceptionPosition = 0;
    private boolean m_bReceptionActive = false;
    private byte[] m_aReceptionBuffer = new byte[2048];

    @Override
    public void serialEvent(SerialPortEvent p_oEvent) {
        byte[] aReceiveBuffer = new byte[2048];
        int nLength = 0;
        int nByte = 0;
        switch (p_oEvent.getEventType()) {
            case SerialPortEvent.RXCHAR:
                try {
                    aReceiveBuffer = serialPort.readBytes();
                    for (nByte = 0; nByte < aReceiveBuffer.length; nByte++) {
                        //System.out.print(String.format("%02X ", aReceiveBuffer[nByte]));
                        m_aReceptionBuffer[m_nReceptionPosition] = aReceiveBuffer[nByte];
                        // Buffer overflow protection
                        if (m_nReceptionPosition >= 2047) {
                            // Reset for next packet
                            m_bReceptionActive = false;
                            m_nReceptionPosition = 0;
                        } else if (m_bReceptionActive) {
                            m_nReceptionPosition++;
                            // Receive at least the start of the packet including the length
                            if (m_nReceptionPosition >= 14) {
                                nLength = (short) ((short) m_aReceptionBuffer[10] & 0x000000FF);
                                nLength |= ((short) m_aReceptionBuffer[11] << 8) & 0x0000FF00;
                                nLength |= ((short) m_aReceptionBuffer[12] << 16) & 0x00FF0000;
                                nLength |= ((short) m_aReceptionBuffer[13] << 24) & 0xFF000000;
                                //nLength += ..; // Depending on whether the length in the packet includes ALL bytes of the packet or only the content part
                                if (m_nReceptionPosition >= nLength) {
                                    // You received at least all the content
                                    // Reset for next packet
                                    m_bReceptionActive = false;
                                    m_nReceptionPosition = 0;
                                }
                            }
                        }
                        // Start receiving only if this is a Start Of Header
                        else if (m_aReceptionBuffer[0] == '\0') {
                            m_bReceptionActive = true;
                            m_nReceptionPosition = 1;
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
                break;
            default:
                break;
        }
    }
}
After writing data to the serial port, it needs to be flushed. Check the timing, and pay attention to the fact that a read should occur only after the other end has written. The read size is just a hint to the read system call and is not guaranteed: the data may have arrived and be sitting in the serial port's hardware buffer without having been transferred to the operating system buffer yet, and hence not to the application. Consider using the scm library, which flushes data after each write: http://www.embeddedunveiled.com/
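To illustrate the point that the requested read size is only a hint, here is a generic read-fully loop (plain java.io, not JSSC-specific; a sketch rather than drop-in code):

import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// Keeps reading until exactly buf.length bytes have arrived; a single
// read() may legally return fewer bytes than requested.
static void readFully(InputStream in, byte[] buf) throws IOException {
    int off = 0;
    while (off < buf.length) {
        int n = in.read(buf, off, buf.length - off);
        if (n < 0) {
            throw new EOFException("stream ended before " + buf.length + " bytes arrived");
        }
        off += n;
    }
}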
Try this:
Write your data to the serial port (using serialPort.writeBytes()) and if you are expecting a response, use this:
byte[] getData() throws SerialPortException, IOException {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] b;
try {
while ((b = serialPort.readBytes(1, 100)) != null) {
baos.write(b);
// System.out.println ("Wrote: " + b.length + " bytes");
}
// System.out.println("Returning: " + Arrays.toString(baos.toByteArray()));
} catch (SerialPortTimeoutException ex) {
; //don't want to catch it, it just means there is no more data to read
}
return baos.toByteArray();
}
Do what you want with the returned byte array; in my case I just display it for testing.
I found it works just fine if you read one byte at a time, using a 100ms timeout, and when it does time out, you've read all data in the buffer.
Source: trying to talk to an Epson serial printer using jssc and ESC/POS.
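For completeness, a minimal usage sketch of the getData() helper above (the command bytes are hypothetical; ESC @ happens to be the ESC/POS initialize command; exception handling omitted for brevity):

byte[] command = new byte[]{0x1B, 0x40};  // ESC @ : hypothetical "initialize printer" command
serialPort.writeBytes(command);           // send the request
byte[] response = getData();              // read until a 100 ms quiet period
System.out.println("Received " + response.length + " bytes");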
My application loops through about 200 URLs that are all JPG images.
In the simulator it reads OK, then stores the byte array in persistentStore with no problems.
On the device, it gives java.io.IOException: TCP read timed out on basically every image.
Every now and then, one gets through. Not even sure how. The image sizes don't give insight either. Some are 6k, some are 11k. Size doesn't seem to matter for timing out.
I'll try to post what I believe to be the relevant code, but I am not really an expert here, so if I left something out, please say so.
Call the HTTP connection in a loop and join the thread:
for (int i = 0; i < images.size(); i++) {
    try {
        String url = images.elementAt(i).toString();
        HttpRequest data3 = new HttpRequest(url, "GET", false);
        data3.start();
        data3.join();
    } catch (IOException e) {
        Dialog.inform("wtf " + e);
    }
}
Make the actual connection in HttpConnection class with the proper suffix:
try {
    HttpConnection connection = (HttpConnection) Connector.open(url + updateConnectionSuffix());
    int responseCode = connection.getResponseCode();
    if (responseCode != HttpConnection.HTTP_OK) {
        connection.close();
        return;
    }
    String contentType = connection.getHeaderField("Content-type");
    long length = connection.getLength();
    InputStream responseData = connection.openInputStream();
    connection.close();
    outputFinal(responseData, contentType, length);
} catch (IOException ex) {
} catch (SAXException ex) {
} catch (ParserConfigurationException ex) {
}
Finally, read the stream and write the bytes to a byte array:
else if (contentType.equals("image/png") || contentType.equals("image/jpeg") || contentType.equals("image/gif")) {
    try {
        if ((int) length < 1)
            length = 15000;
        byte[] responseData = new byte[(int) length];
        int offset = 0;
        int numRead = 0;
        StringBuffer rawResponse = new StringBuffer();
        int chunk = responseData.length - offset;
        if (chunk < 1)
            chunk = 1024;
        while (offset < length && (numRead = result.read(responseData, offset, chunk)) >= 0) {
            rawResponse.append(new String(responseData, offset, numRead));
            offset += numRead;
        }
        String resultString = rawResponse.toString();
        byte[] dataArray = resultString.getBytes();
        result.close();
        database db = new database();
        db.storeImage(venue_id, dataArray);
    } catch (Exception e) {
        System.out.println(">>>>>>>----------------> total image fail: " + e);
    }
}
Things to consider:
In the simulator, length is always the byte length; on the device it is always -1.
The chunk variable is a test: if I force a 15k byte array, will it read as expected, since new byte[-1] gave an out-of-bounds exception? The results are the same: sometimes it writes, but mostly it times out.
Any help is appreciated.
You can adjust the length of TCP timeouts on Blackberry using the parameter 'ConnectionTimeout'.
In your code here:
HttpConnection connection = (HttpConnection)Connector.open(url + updateConnectionSuffix());
You'll want to append ConnectionTimeout. You might write it into updateConnectionSuffix() or just append it.
HttpConnection connection = (HttpConnection)Connector.open(url + updateConnectionSuffix() + ";ConnectionTimeout=54321");
This sets the timeout to 54321 milliseconds.
Timeouts occur when the client is waiting for the server to send an ack and it doesn't get one in a specified amount of time.
edit: also, are you able to use the browser and stuff? You may also want to play with the deviceside parameter.
I think the problem may be that you're closing the connection before reading the bytes from the input stream. Try moving the connection.close() after the bytes have been read in.
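A sketch of that reordering, reusing names from the question's code (the read loop here is illustrative, not the asker's exact logic):

InputStream responseData = connection.openInputStream();
try {
    // Consume the stream completely while the connection is still open
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] buf = new byte[1024];
    int n;
    while ((n = responseData.read(buf)) > 0) {
        baos.write(buf, 0, n);
    }
    byte[] dataArray = baos.toByteArray();
    db.storeImage(venue_id, dataArray);
} finally {
    responseData.close();
    connection.close(); // close only after the bytes have been read
}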
I have a C++ client which needs to send a file to a Java server. I'm splitting the file into chunks of PACKET_SIZE (=1024) bytes and sending them over a TCP socket. On the server side I read at most PACKET_SIZE bytes into a buffer. When the client sends files which are smaller than PACKET_SIZE, the server receives more bytes than were sent. Even when I limit the number of bytes to exactly the size of the file, the files differ. I know the problem does not lie with the client, because I've tested it against a C++ server and it works flawlessly.
Thanks.
Server:
public void run() {
    DataInputStream input = null;
    PrintWriter output = null;
    try {
        input = new DataInputStream(_client.getInputStream());
    } catch (Exception e) {/* Error handling code */}
    FileHeader fh = recvHeader(input);
    size = fh._size;
    filename = fh._name;
    try {
        output = new PrintWriter(_client.getOutputStream(), true);
    } catch (Exception e) {/* Error handling code */}
    output.write(HEADER_ACK);
    output.flush();
    FileOutputStream file = null;
    try {
        file = new FileOutputStream(filename);
    } catch (FileNotFoundException fnfe) {/* Error handling code */}
    int total_bytes_rcvd = 0, bytes_rcvd = 0, packets_rcvd = 0;
    byte[] buf = new byte[PACKET_DATA_SIZE];
    try {
        int max = (size > PACKET_DATA_SIZE) ? PACKET_DATA_SIZE : size;
        bytes_rcvd = input.read(buf, 0, max);
        while (total_bytes_rcvd < size) {
            if (-1 == bytes_rcvd) {...}
            ++packets_rcvd;
            total_bytes_rcvd += bytes_rcvd;
            file.write(buf, 0, bytes_rcvd);
            if (total_bytes_rcvd < size)
                bytes_rcvd = input.read(buf);
        }
        file.close();
    } catch (Exception e) {/* Error handling code */}
}
Client:
char packet[PACKET_SIZE];
file.open(filename, ios::in | ios::binary); //fopen(file_path, "rb");
int max = 0;
if (file.is_open()) {
    if (size > PACKET_SIZE)
        max = PACKET_SIZE;
    else
        max = size;
    file.read(packet, max);
}
else {...}
int sent_packets = 0;
while (sent_packets < (int) ceil(((float) size) / PACKET_SIZE)) {
    _write = send(_sd, packet, max, 0);
    if (_write < 0) {...}
    else {
        ++sent_packets;
        if (size > PACKET_SIZE * sent_packets) {
            if (size - PACKET_SIZE * sent_packets >= PACKET_SIZE)
                max = PACKET_SIZE;
            else
                max = size - PACKET_SIZE * sent_packets;
            file.read(packet, max);
        }
    }
}
Is the sending socket closed at the end of the file, or is the next file streamed over the same socket? If more than one file is streamed, you could pick up data from the next file if you have the endianness wrong for the file size in recvHeader(), i.e. you send a file of length 0x0102 and try to read one of length 0x0201.
Other question, why do you provide a max for the first read, but not for the following reads on the same file?
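On the endianness point, a small sketch of reading the 4-byte size explicitly on the Java side, assuming the C++ client writes it little-endian (adjust to your actual header layout):

// DataInputStream.readInt() is big-endian; if the C++ side wrote the
// size little-endian, reassemble the bytes manually instead.
static int readLittleEndianInt(DataInputStream input) throws IOException {
    int b0 = input.readUnsignedByte();
    int b1 = input.readUnsignedByte();
    int b2 = input.readUnsignedByte();
    int b3 = input.readUnsignedByte();
    return b0 | (b1 << 8) | (b2 << 16) | (b3 << 24);
}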
One issue I see is that you appear to assume that if send() returns without an error, it has sent the entire chunk you asked it to send. This is not necessarily true, especially with stream sockets. How large are the packets you are sending, and how many? The most likely reason this could occur would be the sndbuf for the socket filling up while your socket _sd is set to non-blocking. I'm not positive (it depends on the stack implementation), but I believe it could also occur if the TCP transmit window was full for your connection and TCP couldn't enqueue your entire packet.
You should probably loop on the send until max is sent.
Thusly:
int send_ct = 0;
while ((_write = send(_sd, packet + send_ct, max - send_ct, 0)) > 0) {
    send_ct += _write;
    if (send_ct >= max) {
        break;
    } else {
        // Had to do another send
    }
}
The code is not complete. E.g. you have omitted the sending of the filename and the file size, as well as the parsing of those values. Are those values correct? If not, first ensure that these values are right before investigating further.