HttpConnection truncating messages in Sony Ericsson W580 - java

I've been having a problem while using HttpConnection with a Sony Ericsson W580.
The response to my http requests is application/octet-stream, and I'm sending a quite large array of bytes.
In this mobile phone however, it is consistently being cut down to 210 bytes...
I've tested the MIDP application in a large number of different mobile phones, using different mobile carriers and wi-fi, and no other mobile has shown this behavior.

Ok, I found the problem. Entirely my fault...
How I was reading the stream:
while (true) {
    int bytesRead = stream.read(tmpBuffer);
    // if -1, EOF
    if (bytesRead < 0)
        break;
    (...)
    // WRONG LOGIC !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    // if we read the last chunk of data, and found EOF
    if (bytesRead < tmpBufferArrayLength)
        break;
    // WRONG LOGIC !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
}
See the part between the "WRONG LOGIC !!!" comments? I was assuming that if read(tmpBuffer) could not fill the tmp buffer entirely, it was because EOF had been reached. The API does not guarantee this at all: it just states that EOF is signaled by read(tmpBuffer) returning -1.
I hadn't seen this before because all the mobiles (and emulators) I'd tested were able to completely fill the buffer on every call.
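A reading loop that relies only on the documented -1 EOF sentinel might look like the sketch below (`readFully` and its 1024-byte buffer size are illustrative choices, not the original code; `stream` stands for the InputStream obtained from the HttpConnection):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamReader {
    // Read until EOF (read() returning -1), regardless of how many
    // bytes each individual call happens to deliver.
    public static byte[] readFully(InputStream stream) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] tmpBuffer = new byte[1024];
        while (true) {
            int bytesRead = stream.read(tmpBuffer);
            if (bytesRead < 0) {
                break; // -1 is the only reliable EOF signal
            }
            out.write(tmpBuffer, 0, bytesRead);
            // No early exit on a short read: the API allows read()
            // to return fewer bytes than the buffer can hold.
        }
        return out.toByteArray();
    }
}
```

A short read simply loops again; only -1 ends the transfer, which is exactly what the W580 exposed.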

Related

How to properly handle IOException with NFCA.transceive?

I'm a beginner in Android development and I'm trying to configure a "pass-through" mode for NFC. Basically, I2C writes something to an NFC tag, the mobile phone picks it up, new data is written by I2C, and so on. What I struggle with is the window while the tag is being written: during that time, the phone gets a NAK and transceive fails with an IOException. How can I handle it properly? I tried using "thread.millis" to wait until I2C is done, but that solution looks pretty crappy and only works with my Arduino and phone.
while (Schleife < 1000) {
    try {
        answer = ultralight.transceive(command); // This one throws an IOException if the data is not ready yet
        Schleife = Schleife + 1;
    } catch (IOException ioe) {
        //Log.e("UnsupportedEncoding", ioe.toString());
    }
}
I want the program to re-execute the process. One thing I tried was to move the catch statement inside the while loop, but sometimes it took forever to rerun the loop.
I'm thankful for every answer.
Kind regards
My reading of the datasheet for the chip is that you are looping around transceiving the wrong command.
After a READ or FAST_READ command involving the terminator page of the SRAM, bit SRAM_RF_READY and bit RF_LOCKED are automatically reset to 0b, allowing the I2C interface to further write data into the SRAM buffer. To signal to the host that further data is ready to be written, the following mechanisms are in place: the NFC interface polls/reads the bit SRAM_RF_READY from NS_REG (see Table 14) to know if new data has been written by the I2C interface in the SRAM.
Your loop should read block E8h and check whether the SRAM is ready to be read over the RF interface, then read 64 bytes with FAST_READ once the right bits are set in byte 0.
This is how the chip implements a flow control mechanism between the I2C interface and the RF interface to prevent errors.
Update
OK, the implementation sheet shows how to do it without flow control.
As for how to handle a NACK: first you need to check for it. Below is how I check for a NACK:
if ((answer != null) && (answer.length == 1) && ((answer[0] & 0x0A) != 0x0A)) {
    // Got NACK
    Log.e("Nack", "NACK on iteration " + Schleife); // added to identify the iteration
}
It would be helpful to also log the iteration number of any IOException; I suspect the NACK and the IOException occur on different iterations, since a proper NACK is not an IOException.
Also, Android handles the anticollision under the hood, so the only thing you can try when receiving a NACK is to close and connect again. Alternatively, a low-level transceive of "0x95 0x70 (UID bytes)" might be correct
(taken from https://android.googlesource.com/kernel/common/+/android-3.18/net/nfc/digital_technology.c#349 )
I think "0x95 0x70" is the correct anticollision command for this card type.

Raspberry: How to check remaining bytes in serial port's write buffer

What I'm doing:
Implementing RS-485 protocol, I have to:
Set a write pin to high and go into write mode.
Write my data (of 16 bytes length) to serial port.
Set my pin to low again and go into read mode.
Wait until 16 bytes have arrived.
End transmission.
The details (why a write pin must be set high and low, why exactly 16 bytes, and so on) are hardware implementation details I cannot change.
Current solution:
// .....
private final Serial serial;
private final GpioPinDigitalOutput pin;
private final long DELAY_M = 2;
private final int DELAY_N = 0;
// .....
public boolean send(final byte[] packet) {
    boolean result = false;
    try {
        this.pin.high();
        this.serial.write(packet);
        Thread.sleep(DELAY_M, DELAY_N); // -> THE TROUBLE MAKER
        this.pin.low();
        result = true;
    } catch (InterruptedException e) {
        // .....
    }
    // ... Now in read mode, recv 16 bytes in a pre-defined time window
    return result;
}
The problem
Before I go into read mode (setting the pin low) I must wait until all data in the serial buffer has been transmitted. I am using the Pi4J library, and it has no facility for checking the remaining bytes in the buffer. The dirty solution is to wait a constant DELAY_M milliseconds, BUT this constant changes across environments, hardware, and so on.
Looking into Pi4J's code, the native implementation (JNI) calls WiringPi's API. WiringPi in turn treats the serial port as a regular Linux file and writes into that file. Again, WiringPi provides no method to check the remaining bytes in the buffer. So it must be a Linux/hardware/kernel thing and not necessarily Pi4J's responsibility. So: how do you check the remaining data in the Raspberry Pi's serial port buffer, which is /dev/ttyAMA0?
P.S: the serial interface in Pi4j has a method flush() with this documentation:
Forces the transmission of any remaining data in the serial port transmit buffer.
Please note that this does not force the transmission of data, it discards it!
Update:
Regarding what @sawdust pointed out in the comments, I've found this tutorial. It enables what's called RTS and CTS (more about these flags here and here), BUT it is not working yet. My oscilloscope shows no signal on the CTS and RTS pins.
Also note that the article dates back to 2013 and gpio_setfunc won't even compile. It needs some strange scripts not available anywhere. But do look in the apt package list with apt-cache search gpio and you'll find the required tools.
You can hold the receive enable low, which means you receive your own transmission. That way you know when the transmission is complete and can then take Tx enable low. Then just filter your own transmission out of the response.
eg. for your transmit routine:
synchronized (mutex) {
    transmitEnable.high();
    awaitingEcho = true;
    expectedEcho = "test\n";
    serial.writeln("test");
}
and the receive:
synchronized (mutex) {
    data = event.getAsciiString();
    if (awaitingEcho && data.contains(expectedEcho)) {
        transmitEnable.low();
        data = data.replace(expectedEcho, EMPTY);
        expectedEcho = null;
    }
}
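If echoing your own transmission back is not an option, another fallback is to compute the worst-case shift-out time from the baud rate instead of hard-coding the delay (a sketch; the class and method names are mine, and it assumes 8N1 framing, i.e. 1 start + 8 data + 1 stop = 10 bits on the wire per byte):

```java
public class Rs485Timing {
    // Milliseconds needed to shift out byteCount bytes at baudRate
    // with 8N1 framing, rounded up (ceiling division) so we never
    // drop the write-enable pin before the UART has finished.
    public static long transmitMillis(int byteCount, int baudRate) {
        long bits = (long) byteCount * 10L;
        return (bits * 1000L + baudRate - 1) / baudRate;
    }
}
```

For the 16-byte packet above at 9600 baud this yields 17 ms; sleeping that amount (plus a small safety margin for the FIFO) adapts automatically when the baud rate changes, unlike a fixed DELAY_M.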

Weird issues with gzip encoded responses

Ok, so I'm running my own fork of NanoHttpd (a minimalist java web server, the fork is quite complex though), and I had to implement gzip compression on top of it.
It has worked fine, but it just turned out that Firefox 33.0 on Linux Mint 17.1 will not execute gzipped js files at all, although they load just fine and the headers look OK, etc. This does not happen on the same PC with Chrome, or with any other browser I've tried, but it's still something I must get fixed.
Also, the js resources execute just fine if I disable gzipping. I also tried removing Connection: keep-alive, but that did not have any effect.
Here's the code responsible for gzipping:
private void sendAsFixedLength(OutputStream outputStream) throws IOException {
    int pending = data != null ? data.available() : 0; // This is to support partial sends, see serveFile()
    headerLines.add("Content-Length: " + pending + "\r\n");
    boolean acceptEncoding = shouldAcceptEnc();
    if (acceptEncoding) {
        headerLines.add("Content-Encoding: gzip\r\n");
    }
    headerLines.add("\r\n");
    dumpHeaderLines(outputStream); // writes header to outputStream
    if (acceptEncoding)
        outputStream = new java.util.zip.GZIPOutputStream(outputStream);
    if (requestMethod != Method.HEAD && data != null) {
        int BUFFER_SIZE = 16 * 1024;
        byte[] buff = new byte[BUFFER_SIZE];
        while (pending > 0) {
            int read = data.read(buff, 0, ((pending > BUFFER_SIZE) ? BUFFER_SIZE : pending));
            if (read <= 0) {
                break;
            }
            outputStream.write(buff, 0, read);
            pending -= read;
        }
    }
    outputStream.flush();
    outputStream.close();
}
FWIW, the example I copied this from did not close the outputStream, but without doing that the gzipped resources did not load at all, while non-gzipped resources still loaded OK. So I'm guessing that part is off in some way.
EDIT: Firefox won't give any errors, it just does not execute the script, e.g.:
index.html:
<html><head><script src="foo.js"></script></head></html>
foo.js:
alert("foo");
Does not do anything, despite the resources being loaded OK. No warnings in the console, nothing at all. Works fine when gzip is disabled, and in other browsers.
EDIT 2:
If I request foo.js directly, it loads just fine.
EDIT 3:
Tried checking the responses & headers with TemperData while having gzipping on/off.
The only difference was that when gzipping is turned on, there is Content-Encoding: gzip in the response header, which is not very surprising. Other than that, the responses were 100% identical.
EDIT 4:
Turns out that removing Content-Length from the header made it work again... Not sure of the side effects though, but at least this pinpoints it better.
I think the cause of your problem is that you are writing the Content-Length header before compressing the data, which gives the browser inconsistent information. I guess each browser implementation handles this situation in its own way, and it seems that Firefox does it the strict way.
If you don't know the size of the compressed data (which is understandable), you'd better avoid writing the Content-Length header; it is not mandatory.
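If the response is small enough to buffer, another option is to compress first and only then emit the now-known length. A minimal sketch, independent of NanoHttpd's internals (the class and method names are mine):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class GzipBody {
    // Compress the whole body up front so that Content-Length can
    // describe the compressed size, i.e. what actually goes on the wire.
    public static byte[] compress(byte[] body) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        GZIPOutputStream gz = new GZIPOutputStream(buf);
        gz.write(body);
        gz.close(); // close() finishes the gzip trailer
        return buf.toByteArray();
    }
}
```

The header line then becomes `"Content-Length: " + compressed.length + "\r\n"`, and the compressed bytes are written to the raw output stream with no wrapping GZIPOutputStream.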

How to do good real-time data streaming using Java Android SDK

I have a home-made Bluetooth device measuring ECG at 500 Hz: every 2 ms the device sends 9 bytes of data (header, ECG measurement, footer). So this is roughly a 9*500 = 4.5 kbytes/s data stream.
I have a C++ Windows program able to connect to the device and retrieve the data stream (displaying it with Qt/qwt). In this case, I use the Windows control panel to bond the device and I connect to it via a virtual COM port using boost's serial_port interface. This works perfectly and I'm receiving my data stream in real time: I get a measurement point every 2 ms or so.
I ported the whole program to Android via QtCreator 3.0.1 (Qt 5.2.1). It appears that virtual COM ports cannot be accessed by boost (probably the SDK permissions won't allow it), so I wrote a piece of Java code to open and manage the Bluetooth connection. So my app remains C++/Qt, but the layer connecting to and reading data from the device was reworked in Java (opening the connection with createInsecureRfcommSocketToServiceRecord):
Java code to read the data:
public int readData( byte[] buffer )
{
    if ( mInputStream == null )
    {
        traceErrorString("No connection, can't receive data");
    }
    else
    {
        try
        {
            final boolean verbose = false;
            int available = mInputStream.available();
            if ( verbose )
            {
                Calendar c = Calendar.getInstance();
                Date date = new Date();
                c.setTime(date);
                SimpleDateFormat sdf = new SimpleDateFormat("HH:mm:ss");
                String currentTime = sdf.format(date);
                traceDebugString( currentTime + ":" + c.get(Calendar.MILLISECOND) + " - " + available + " bytes available, requested " + buffer.length );
            }
            if ( available >= buffer.length )
                return mInputStream.read( buffer ); // only call read if we know it's not blocking
            else
                return 0;
        }
        catch (IOException e)
        {
            traceDebugString( "Failed to read data...disconnected?" );
        }
    }
    return -1;
}
Called from C++ like that:
bool ReceiveData( JNIEnv* env,
                  char* data,
                  size_t length,
                  bool& haserror )
{
    bool result = false;
    jbyteArray array = env->NewByteArray(length);
    jint res = env->CallIntMethod(j_object, s_patchIfReceiveDataID, array);
    if ( static_cast<size_t>(res) == length )
    {
        env->GetByteArrayRegion(array, 0, length, reinterpret_cast<jbyte*>(data));
        result = true;
    }
    else if ( res == -1 )
    {
        haserror = true;
    }
    else
    {
        // not enough data in the stream buffer
        haserror = false;
    }
    return result;
}

bool readThread( size_t blockSize )
{
    BTGETANDCHECKENV // retrieving environment
    char* buf = new char[blockSize];
    bool haserror = false;
    while ( !haserror )
    {
        if ( !ReceiveData( env, buf, blockSize, haserror ) )
        {
            // could not read data
            if ( haserror )
            {
                // will stop this thread soon
            }
            else
            {
                boost::this_thread::sleep( boost::posix_time::milliseconds( 10 ) );
            }
        }
    }
    delete [] buf;
    return true;
}
This works pretty well... for the first five seconds I get values in more or less real time, then:
Sometimes it freezes forever, meaning the mInputStream.available() value remains lower than requested.
Sometimes it freezes only for a second or so and then continues, but data arrive in blocks of ~1 second. Meaning mInputStream.available() can jump from 0 to more than 3000 between two calls (10 ms apart). Actually, I see the same thing during the first 5 seconds, except that the buffer availability never exceeds 150 bytes; after 5 seconds, it can go up to 3000 bytes.
Here is what the log can look like when verbose is set to true:
14:59:30:756 - 0 bytes available, requested 3
14:59:30:767 - 0 bytes available, requested 3
14:59:30:778 - 0 bytes available, requested 3
14:59:30:789 - 1728 bytes available, requested 3
14:59:30:790 - 1725 bytes available, requested 6
14:59:30:792 - 1719 bytes available, requested 3
My ECG device definitely did not send 1728 bytes in 11 ms!
I know my device sends 9 bytes every 2 ms (otherwise, it would not work in my PC application). It looks like Java does some unexpected buffering and does not make 9 bytes available every 2 ms...
It's also strange that things appear to work fine for only the first 5 seconds.
Note that I tried using read() without checking available() (blocking version) but experienced exactly the same behaviour.
So I'm wondering what I'm doing wrong...
Is there a way to force a Java input stream to update itself?
Is there a way to ask Java to process its pending events (like we have QApplication::processEvents)?
Is there any global setting to specify buffer sizes for streams? (I did not find any at the BluetoothDevice/BluetoothSocket level.)
On the PC, when opening the virtual COM port, I have to specify baud rate, stop bits, handshaking and so on. On Android I just open the RFCOMM socket with no options; could this be the problem (the ECG device and smartphone not being synced)?
Any help or idea would be welcomed!
Edit: I'm experiencing this on a Nexus 5 phone, Android 4.4.2. I just tested the same apk package on different devices:
a Galaxy S4 with Android 4.4.2: same problem.
a Galaxy S3 with custom CyanogenMod 11 (Android 4.4.2): data streaming seems perfect, no freezing after 5 s, and data arrive in real time... so it looks like the whole system is able to achieve what I want, but Android's default setup makes things too slow. I don't know if there is a setting to change at the OS level to fix this issue.
Edit: As I got no answer :-( I tried to do the same thing using a pure Java program (no C++, no Qt). I had the same problem: Real-time Bluetooth SPP data streaming on Android only works for 5 seconds
This problem is apparently similar to the one reported here.
After 5 seconds, either the connection was lost, or real-time streaming was dramatically slowed down.
As said here, Android > 4.3 apparently does not like one-way communication exceeding 5 seconds. So I'm now sending a dummy command to the device every second (a kind of "keep-alive" command), and now Android is happy because it's no longer one-way communication... and data streaming is as good after the fifth second as before!
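The keep-alive can be scheduled off the read path, e.g. with a ScheduledExecutorService. A sketch (the KEEP_ALIVE byte is a placeholder for whatever dummy command your device tolerates, and `out` stands for the BluetoothSocket's OutputStream):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class KeepAlive {
    private static final byte[] KEEP_ALIVE = {0x00}; // placeholder command
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Write a dummy command every second so the link carries two-way
    // traffic and Android does not throttle it after ~5 s.
    public void start(final OutputStream out) {
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                try {
                    out.write(KEEP_ALIVE);
                    out.flush();
                } catch (IOException e) {
                    scheduler.shutdown(); // stop on a dead connection
                }
            }
        }, 1, 1, TimeUnit.SECONDS);
    }

    public void stop() {
        scheduler.shutdown();
    }
}
```

Using a dedicated scheduler thread keeps the 2 ms read loop untouched; only the period (here 1 s, matching what worked above) matters.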

USB HID on Android

I'm trying to read data from a custom-made USB device (working as a slave) on Android. I was able to write data to the device with this code:
UsbRequest request = new UsbRequest();
request.initialize(_outConnection, _outEndpoint);
int bufferDataLength = _outEndpoint.getMaxPacketSize();
ByteBuffer buffer = ByteBuffer.allocate(bufferDataLength + 1);
buffer.put(bytes);
if (request.queue(buffer, bufferDataLength)) {
    UsbRequest req = _outConnection.requestWait();
}
I see the result on the debug board that my device is connected to.
I'm trying the same approach for reading data, but apparently that doesn't work:
int siz = 1;
ByteBuffer res = ByteBuffer.allocate(siz + 1);
UsbRequest request = new UsbRequest();
request.initialize(_inConnection, _inEndpoint);
request.queue(res, siz); // This always returns false
What am I doing wrong? I have no idea of the size of the packet sent back, but I assume I would always be able to read at least 1 byte.
My device has HID interface with two interrupt endpoints (IN and OUT)
I have no idea what fixed the problem, but now it works. I rewrote everything from scratch. I think the main reason it didn't work before was that I hadn't used this code (I thought it was only for user notification and I didn't need that, but apparently it is something else):
// Create and intent and request a permission.
PendingIntent mPermissionIntent = PendingIntent.getBroadcast(_context, 0, new Intent(ACTION_USB_PERMISSION), 0);
IntentFilter filter = new IntentFilter(ACTION_USB_PERMISSION);
_context.registerReceiver(mUsbReceiver, filter);
Several things that I did which helped me implement a stable connection are:
1) Closing and opening the connection each time I need it. I know this sounds strange, but it was the only way I could make it stable. If I try to use a long-lived connection, for some reason it gets corrupted and stops working after some time.
2) Reading continuously in a never-ending while loop. I also put some short sleeps in all my threads; that helped to read in a more real-time manner.
3) Locking the device (synchronized). I do not open both write and read connections simultaneously.
I didn't have many hours assigned to this project, and it is only a demo, so all of this suited us well. I think with a little more time, some of these things could have been rewritten more nicely.
