I'm building a client-server Android application. One app is installed on Google Glass and sends video frames captured by the camera over a Bluetooth connection. The other application is installed on an Android device and reads the video frames. When reading from the stream with len = mmInStream.read(buffer, 0, bufferSize); it seems like the maximum byte count read is 990, no matter how large I set bufferSize.
I'm sending a 320x240 image frame with 4 channels, so this is a total of 307200 bytes. When reading the entire image, it is being read in chunks of 990 bytes, which I think affects the speed of my app: it takes 1-3 seconds to read all the data. Is there a way to change the maximum bytes read? Is this an application setting, or is it controlled by the Android OS? I'm not sure if reading all the data at once would affect performance, but I am just curious.
UPDATE:
I notice the same thing when sending the data from Google Glass using an OutputStream. It takes about 2 seconds to write to the OutputStream. Is this normal performance for a Bluetooth connection? Is there a better way to transmit captured camera frames between two devices?
UPDATE 2:
I think the delay is in the write speed. Writing the data to the stream takes about 2 seconds. When the other app tries to read the data, it probably waits for the complete data to be written to the stream. I'm still not sure whether this is expected or can be improved.
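For context: a single read() only returns whatever bytes the Bluetooth socket has buffered so far (apparently around 990 here), so a complete frame has to be accumulated over many calls. A minimal sketch of such a loop, assuming the fixed 307200-byte frame size and the mmInStream from the question:

byte[] frame = new byte[307200]; // 320 * 240 * 4 channels
int offset = 0;
while (offset < frame.length) {
    // read() returns only what has arrived so far, not bufferSize
    int len = mmInStream.read(frame, offset, frame.length - offset);
    if (len == -1) {
        throw new IOException("stream closed before the full frame arrived");
    }
    offset += len;
}

(java.io.DataInputStream wraps this same loop as readFully(frame).)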
Related
In my current project, a local VoIP app, I have to stream audio from one device and play it as fast as possible on another device on the same network, using UDP. So I made a very basic demo Android app using AudioRecord and DatagramSocket, and a very simple C++ program that plays the received audio through a fairly small circular buffer.
An important point is that the minimum delay seems to be dictated by AudioRecord.getMinBufferSize(), which in my configuration (48 kHz 16-bit mono PCM on a real Android phone) came out to 3840 bytes = 1920 samples = 40 ms of audio.
byte[] audioBuffer = new byte[minBufferSize];
while (isRecording) {
    // read() returns the actual number of bytes placed in the buffer
    int read = mAudioRecord.read(audioBuffer, 0, minBufferSize);
    // wrap the recorded bytes in a UDP packet
    DatagramPacket packet = new DatagramPacket(audioBuffer, read, address, port);
    // send the packet
    mDatagramSocket.send(packet);
}
I managed to achieve near real-time audio (under 50 ms latency), but after a few tests I came to the conclusion that playing the AudioRecord buffers as fast as possible is not optimal: it results in gaps in the audio (underflows) as well as clicks and audio being cut off (overflows/overwriting of audio). After some investigation, the culprit was simply the unstable ping between the two devices; even a small lag could result in terrible audio distortion.
The only solution I found is to manually delay the playback of EVERY packet by 10 ms, which solves the problem in the scenario above but increases the overall latency to 40 + 10 = 50 ms. And if I need to keep good audio with up to 20 ms ping spikes, I would have to increase the latency to 60 ms, and so on, resulting in delayed audio that will be even more delayed if redirected and sent online, which is not really acceptable for VoIP (I'm trying to keep it below the 150 ms bar).
So I thought the perfect solution would be to reduce the amount of time it takes to record a single packet, so I could add the desired latency on top of it (e.g. if each audio buffer held only 20 ms of audio, I could add up to 30 ms of delay to each playback and still keep it only 50 ms delayed, which is pretty good for VoIP).
But I'm not sure if this is possible; I wonder if there is a trick to achieve it. I noticed that AudioRecord.getMinBufferSize() on the Android Studio emulator (on Windows 10) gives 640 bytes (~7 ms) with the same PCM configuration, which is an amazing number, but on the Genymotion emulator (on Debian) the minimum buffer size is 4480 bytes (~47 ms).
The solution is to read smaller amounts of data from the AudioRecord and send them as soon as possible to the server.
The AudioRecord.getMinBufferSize() JavaDoc states:
Returns the minimum buffer size required for the successful creation of an AudioRecord object, in byte units. Note that this size doesn't guarantee a smooth recording under load, and higher values should be chosen according to the expected frequency at which the AudioRecord instance will be polled for new data.
So this is the minimum size for the buffer that AudioRecord should allocate. Depending on how often you can fetch data from the AudioRecord instance, it might even be necessary to specify a bigger buffer.
The AudioRecord JavaDoc in turn states (emphasis added):
Upon creation, an AudioRecord object initializes its associated audio buffer that it will fill with the new audio data. The size of this buffer, specified during the construction, determines how long an AudioRecord can record before "over-running" data that has not been read yet. Data should be read from the audio hardware in chunks of sizes inferior to the total recording buffer size.
So the documentation explicitly tells you not to try to read the whole buffer at once!
You can, for example, set up the AudioRecord object as
mAudioRecord = new AudioRecord(
        MediaRecorder.AudioSource.MIC,
        48000,                           // sampleRate
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        9600                             // buffer size: 100 ms of audio
);
and the polling code can read 480 samples (960 bytes) every 10 ms and send them to the server:
final int BUFFER_SIZE = 960; // one packet every 10 ms
byte[] audioBuffer = new byte[BUFFER_SIZE];
while (isRecording) {
    // read() returns the actual number of bytes placed in the buffer
    int read = mAudioRecord.read(audioBuffer, 0, BUFFER_SIZE);
    // wrap the recorded bytes in a UDP packet
    DatagramPacket packet = new DatagramPacket(audioBuffer, read, address, port);
    // send the packet
    mDatagramSocket.send(packet);
}
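For completeness, the receiving side can mirror this with an AudioTrack in streaming mode. A minimal sketch (port and isPlaying are illustrative names, and the asker's actual player is a C++ program, not this Java code):

DatagramSocket socket = new DatagramSocket(port);
AudioTrack track = new AudioTrack(
        AudioManager.STREAM_MUSIC,
        48000,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        9600,                          // track buffer: 100 ms, same as the recorder
        AudioTrack.MODE_STREAM);
track.play();

byte[] buffer = new byte[960];         // one 10 ms packet
DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
while (isPlaying) {
    socket.receive(packet);            // blocks until a packet arrives
    // write() queues the samples; AudioTrack plays them in the background
    track.write(packet.getData(), 0, packet.getLength());
}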
Currently I am using this code on both the server and client side. The client is an Android device.
BufferedOutputStream os = new BufferedOutputStream(socket.getOutputStream(), 10000000);
BufferedInputStream sin = new BufferedInputStream(socket.getInputStream(), 10000000);

os.write("10000000\n".getBytes());
os.flush();

for (int i = 0; i < 10000000; i++) {
    os.write((sampleRead[i] + " ").getBytes());
}
os.flush();
The problem is that this code takes about 80 seconds to transfer the data from the Android client to the server, while it takes only 8 seconds to transfer the data back from the server to the client. The code is the same on both sides, and the buffer is the same too. I also tried different buffer sizes, but the problem is with this segment:
for (int i = 0; i < 10000000; i++) {
    os.write((sampleRead[i] + " ").getBytes());
}
The buffering takes most of the time, while the actual transfer takes only about 6-7 seconds on a 150 Mbps hotspot connection. What could be the problem, and how can I solve it?
First of all, as a commenter has already noted, using a monstrously large buffer is likely to be counterproductive. Once your stream buffer is bigger than the size of a network packet, app-side buffering loses its effectiveness. (The data in your "big" buffer needs to be split into packet-sized chunks by the TCP/IP stack before it goes onto the network.) Indeed, if the app-side buffer is really large, you may find that your data gets stuck in the buffer for a long time waiting for the buffer to fill ... while the network is effectively idle.
(The Buffered... readers, writers and streams are primarily designed to avoid lots of syscalls that each transfer tiny amounts of data. Above 10K or so, the buffering doesn't help performance much.)
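To illustrate that point, here is a sketch of the client-side write with a modest buffer and binary output. This assumes sampleRead is an int[] as in the question, and that the receiving side is adapted to read binary ints instead of space-separated text:

// 8 KB already batches thousands of samples per syscall
DataOutputStream out = new DataOutputStream(
        new BufferedOutputStream(socket.getOutputStream(), 8192));
out.writeInt(sampleRead.length);  // tell the receiver how many samples follow
for (int sample : sampleRead) {
    out.writeInt(sample);         // 4 bytes per sample, no String allocation
}
out.flush();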
The other thing to note is that in a lot of OS environments, network throughput is actually limited by virtualization and default network-stack tuning parameters. To get better throughput, you may need to tune at the OS level.
Finally, if your traffic is going over a network path that is congested, has high end-to-end latency, or has links with constrained data rates, then you are unlikely to get fast transfers no matter how you tune things.
(Compression might help ... if you can afford the CPU overhead at both ends ... but some data links already do compression transparently.)
You could compress the data before transferring it; it will save a lot of memory, and transferring a compressed stream of data is cheaper. For that you need to implement compression logic on the client side and decompression logic on the server side; see GZIPInputStream. Also try reducing the buffer size: it is huge for a mobile device.
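A minimal sketch of that idea using the standard java.util.zip streams (socket is assumed to be the same Socket as in the question, and data is an illustrative byte array):

// client side: wrap the socket stream in a GZIPOutputStream
GZIPOutputStream gzOut = new GZIPOutputStream(
        new BufferedOutputStream(socket.getOutputStream(), 8192));
gzOut.write(data);  // the bytes to send
gzOut.finish();     // flush the remaining compressed blocks

// server side: mirror it with a GZIPInputStream
GZIPInputStream gzIn = new GZIPInputStream(
        new BufferedInputStream(socket.getInputStream(), 8192));
byte[] chunk = new byte[8192];
int n;
while ((n = gzIn.read(chunk)) != -1) {
    // process n bytes of decompressed data
}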
I'm creating an app which communicates with an ECG monitor. Data is read at a rate of 250 samples per second. Each packet from the ECG monitor contains 80 bytes, and packets are received 40 times per second.
I've tried using a RandomAccessFile, but packets were lost in both synchronous (RandomAccessFile(outputFile, "rws")) and asynchronous (RandomAccessFile(outputFile, "rw")) mode.
In a recent experiment I tried using a MappedByteBuffer. This should be extremely performant, but when I create the buffer I have to specify a size, map(FileChannel.MapMode.READ_WRITE, 0, 10485760) for a 10 MB buffer, and this results in a file that's always 10 MB in size. Is it possible to use a MappedByteBuffer where the file size is only the actual amount of data stored?
Or is there another way to achieve this? Is it naive to write to a file this often?
On a side note, this wasn't an issue at all on iOS; there it can be achieved with no buffering at all.
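For scale: 80 bytes 40 times per second is only 3200 bytes per second, which an ordinary buffered stream handles trivially and which sidesteps the fixed-size mapping problem entirely. A sketch under that assumption (outputFile as in the question; ecgPacket is an illustrative name for each incoming 80-byte packet):

// an 8 KB buffer absorbs the 40 small writes per second
BufferedOutputStream out = new BufferedOutputStream(
        new FileOutputStream(outputFile), 8192);

// called for each incoming packet, 40 times per second:
out.write(ecgPacket, 0, ecgPacket.length);

// when recording stops:
out.close(); // flushes the buffer; the file ends up exactly the size of the data written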
The basic idea is to create an application that can record audio on one device and send it over WLAN, using sockets, to another device that will play it. In a nutshell, a LAN voice chat program.
I am recording live audio from the mic using an AudioRecord object, reading the recorded data into a byte array, and then writing the byte array to a TCP socket. The receiving device then reads that byte array from the socket and writes it to the buffer of an AudioTrack object.
It's like:
AudioRecord --> byte array --> socket --> LAN --> socket --> byte array --> AudioTrack
The process is repeated using while loops.
Although the audio plays, it lags between frames; i.e. when I say "Hello" the receiver hears "He--ll--O". The audio is complete, but there is a lag between the buffer blocks.
As far as I know, the lag is due to delays in LAN transmission.
How do I improve it?
What approach should I use to make it as smooth as it is in commercial chat applications like Skype and GTalk?
Sounds like you need a longer buffer somewhere to deal with the variance of the audio transmission over the LAN. To deal with this, you could create an intermediary buffer between the socket byte array and the AudioTrack. This buffer can be x times the size of the buffer used in the AudioTrack object. So, something like this:
Socket bytes -> audio buffer -> buffer fed to the AudioTrack -> AudioTrack
When audio recording starts, don't play anything back until the longer buffer has completely filled up. After that, you can feed blocks the size of your AudioTrack buffer to your AudioTrack object.
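A minimal sketch of that intermediary (jitter) buffer, built on java.util.concurrent.LinkedBlockingQueue. The packet, audioTrack and isPlaying names and the prefill depth are illustrative:

final int PREFILL = 4; // e.g. 4 blocks of slack before playback starts
BlockingQueue<byte[]> jitterBuffer = new LinkedBlockingQueue<>();

// network thread: copy each received block into the queue
byte[] block = new byte[packet.getLength()];
System.arraycopy(packet.getData(), 0, block, 0, block.length);
jitterBuffer.put(block);

// playback thread: wait until the buffer has filled once, then drain it steadily
while (jitterBuffer.size() < PREFILL) {
    Thread.sleep(5);
}
while (isPlaying) {
    byte[] chunk = jitterBuffer.take(); // blocks if the buffer runs dry
    audioTrack.write(chunk, 0, chunk.length);
}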
I am using RTP to send BufferedImages from the client to the server with jlibrtp.
Each packet is limited to 1480 bytes, so I need to divide each image into several parts and send the bytes to the server; the server side then has to wait until it receives all the bytes and reassemble them into a BufferedImage.
The problem is that, very often, when the BufferedImage is too large, some of the packets are lost. When I reduce the size, this problem does not happen.
The images I send are continuous frames captured from a webcam, so when I drop the incomplete images, the video displays in a very discontinuous way, which is not acceptable.
So I would like to ask: is there any way to improve this situation?
Thank you very much!
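For reference, a sketch of the kind of fragmentation described above, assuming a 2-byte header (chunk index plus chunk count, so at most 255 chunks per image) is prepended to each payload so the receiver can reassemble the frame and detect missing parts. The header layout is illustrative, not part of jlibrtp:

// imageBytes: the encoded frame; 2 bytes of each packet are reserved for the header
final int MAX_PAYLOAD = 1478;
int total = (imageBytes.length + MAX_PAYLOAD - 1) / MAX_PAYLOAD; // ceiling division
for (int i = 0; i < total; i++) {
    int offset = i * MAX_PAYLOAD;
    int len = Math.min(MAX_PAYLOAD, imageBytes.length - offset);
    byte[] chunk = new byte[len + 2];
    chunk[0] = (byte) i;     // chunk index
    chunk[1] = (byte) total; // total chunks, so the receiver knows when it's done
    System.arraycopy(imageBytes, offset, chunk, 2, len);
    rtpSession.sendData(chunk); // jlibrtp's send call; check the exact overload in your version
}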
Switch to a video encoding format like MPEG-2, MPEG-4, Theora or WebM.
You can send a message back to the sender saying that you didn't get some packets and need them resent. The advantage over TCP is that you can easily time out and stop asking when a packet is too old. I cannot quickly find it for jlibrtp, but other RTP libraries have a retransmission request function.
You can reduce the image resolution and send less data.
You can start sending more data. If the extra data is part of a forward error correction scheme, you may be able to reconstruct the lost packets (a minimal sketch follows below).
You can do both: reduce the resolution and increase the error correction, so the bandwidth stays similar.
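To illustrate the forward error correction idea: sending one XOR parity packet per group of N equal-sized packets lets the receiver rebuild any single lost packet in the group. A minimal sketch (the method and group representation are illustrative):

// build one parity packet over a group of equal-sized packets:
// parity = group[0] ^ group[1] ^ ... ^ group[N-1]
static byte[] buildParity(byte[][] group, int packetSize) {
    byte[] parity = new byte[packetSize];
    for (byte[] p : group) {
        for (int i = 0; i < packetSize; i++) {
            parity[i] ^= p[i];
        }
    }
    return parity;
}

// recovery: if exactly one packet of the group is lost, XOR-ing the parity
// packet with all the packets that did arrive reproduces the missing one,
// because x ^ x = 0 cancels every received packet out of the parity.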