Ngrok streaming audio exception - java

I've written client/server code using a Java server socket (TCP).
The server works like a radio: it listens to the mic and sends the bytes to connected clients.
When I run the code using "localhost" as the server name, it works very well, and I can hear the voice in the speakers without any issues.
Now, when I expose localhost to the internet using ngrok:
Forwarding tcp://0.tcp.ngrok.io:11049 -> localhost:5000
I start getting the exception below on the client side:
java.lang.IllegalArgumentException: illegal request to write non-integral number of frames (1411 bytes, frameSize = 2 bytes)
at com.sun.media.sound.DirectAudioDevice$DirectDL.write(Unknown Source)
at client.Client.Start(Client.java:79)
at client.Receiver.main(Receiver.java:17)
Does anyone know why this happens and how I can fix it?
I tried changing the byte array length.
// server code
byte _buffer[] = new byte[(int) (_mic.getFormat().getSampleRate() * 0.4)];
// byte _buffer[] = new byte[1024];
_mic.start();
while (_running) {
    // read() returns the number of bytes copied into the buffer
    int count = _mic.read(_buffer, 0, _buffer.length);
    // if data is available, send it to all connected clients
    if (count > 0) {
        server.SendToAll(_buffer, 0, count);
    }
}
// client code where the exception happens:
_streamIn = _server.getInputStream();
_speaker.start();
byte[] data = new byte[8000];
System.out.println("Waiting for data...");
while (_running) {
    // check whether data is available to play
    if (_streamIn.available() <= 0)
        continue; // no data available, so go back to the start of the loop
    // count of the data bytes read
    int readCount = _streamIn.read(data, 0, data.length);
    if (readCount > 0) {
        _speaker.write(data, 0, readCount); // the exception is thrown here
    }
}
The client should play the sound through the speaker.
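The root cause: TCP is a stream protocol and does not preserve write boundaries. On localhost, each read() happened to return an even byte count matching the 2-byte frame size of the audio format; through the ngrok tunnel the data arrives re-chunked, so a read can return an odd count such as 1411, and SourceDataLine.write() rejects anything that is not a whole number of frames. A minimal sketch of a fix, assuming _speaker and _streamIn as in the question: write only whole frames and carry any partial frame over to the next read. (The available() poll can also be dropped, since read() already blocks until data arrives.)

int frameSize = _speaker.getFormat().getFrameSize(); // 2 bytes here
byte[] data = new byte[8000];
int leftover = 0; // bytes of a partial frame kept from the last read
while (_running) {
    int readCount = _streamIn.read(data, leftover, data.length - leftover);
    if (readCount < 0)
        break; // the server closed the connection
    int available = leftover + readCount;
    int writable = (available / frameSize) * frameSize; // whole frames only
    if (writable > 0) {
        _speaker.write(data, 0, writable);
    }
    leftover = available - writable;
    // move any partial frame to the front of the buffer for the next read
    System.arraycopy(data, writable, data, 0, leftover);
}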

Related

C# Sockets programming, received sockets length isn't correct

I'm writing a chat application with C# clients and a Java server.
I need to send a lot of messages from the server to a client when it connects: I want to send the day's logs. I have all the logs in a file.txt, and I send them to the newly connected client.
To send them, I have a for loop that runs until all the logs are sent. Here is the loop:
for (String item : Logs) {
    client.send("log:" + item);
}
And for the send method:
public void send(String text) {
    // 'os' is the Socket.getOutputStream()
    // what the server will send to the client
    PrintWriter Out = new PrintWriter(os);
    // 0 is the offset, not needed
    Out.write(text, 0, text.length());
    Out.flush();
    System.out.println(text.length());
}
Up to there, all works well.
Now my problem: the server sends messages with lengths like 30, 100, 399 (that is, text.length()), and the C# client receives all the data, but pastes 2 or 3 messages together into one.
E.g., if I send these as separate messages (each line is one Out.write() and Out.flush(), because I call the send method for each line):
(Server-side)
log:abcdefghijklmnopqrstuvwxyz123456789101112131415
log:abcdefghijklmnopqrstuvwxyz
log:abcdefghijklmnopqrstuvwxyz123456789101
log:abcdefghijklmnopqrstuvwxyz1234567891011121314151617
log:abcdefghijklmnopqrst
log:abcdefghijklmnopqrstuvwxyzyxwvu
At the other end, the messages will be:
(Client-side)
log:abcdefghijklmnopqrstuvwxyz123456789101112131415log:abcdefghijklmnopqrstuvwxyzlog:abcdefghijklmnopqrstuvwxyz123456789101
log:abcdefghijklmnopqrstuvwxyz1234567891011121314151617log:abcdefghijklmnopqrst
log:abcdefghijklmnopqrstuvwxyzyxwvu
And if I check the message lengths on the server side, I get something like:
20
12
15
17
20
But on the client side:
32
15
37
Each is the sum of several messages put together (sometimes it's 3 messages put together, sometimes 2, sometimes 4...). I can't understand why...
Here's my async method for receiving data from the server:
private void callBack(IAsyncResult aResult)
{
    String message = "";
    try
    {
        int size = sck.EndReceiveFrom(aResult, ref ip);
        if (size > 0)
        {
            byte[] receive = new byte[size];
            receive = (byte[])aResult.AsyncState;
            message = Encoding.Default.GetString(receive, 0, size);
            Debug.WriteLine(message.Length);
        }
        byte[] buffer = new byte[1024];
        // restart the async task
        sck.BeginReceiveFrom(buffer, 0, buffer.Length, SocketFlags.None, ref ip, new AsyncCallback(callBack), buffer);
    }
    catch (Exception) { }
}
The int 'size' contains the size of the byte[] received, and there is the problem: how can I get the messages exactly as I sent them from the server?
If I send each message with a delay on the server side (like 15 ms), the client gets them one by one, but only if you have a good connection. If your connection has something like 200 ms of latency, you will get the messages grouped... So the problem is on the client side (I think...). The Java server side works correctly; the flush method always sends the message!
UPDATE:
Here is how I set up the socket:
// Global vars
EndPoint ip;
public Socket sck;

// How I connect the socket
private void connect()
{
    ip = new IPEndPoint(IPAddress.Parse("127.0.0.1"), mysql.selectPort());
    sck = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    try
    {
        sck.Connect(ip);
    }
    catch (Exception e)
    {
        Debug.WriteLine(e.Message);
    }
}
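This is the stream-protocol behavior the answers further down this page describe: a TCP socket carries a byte stream, not discrete messages, so consecutive writes can be coalesced (or split) anywhere along the path. The usual fix is to frame the messages yourself. Since each log entry is a line, a delimiter is the simplest frame; a minimal sketch of the Java sending side (an illustration, not the asker's code):

// Sketch: terminate every message with an explicit '\n' so the client
// can split the byte stream back into messages, however TCP chunks it.
PrintWriter out = new PrintWriter(new OutputStreamWriter(os, StandardCharsets.UTF_8));
for (String item : Logs) {
    out.print("log:" + item + "\n"); // explicit delimiter
    out.flush();
}

The C# callback then appends incoming bytes to a buffer and only extracts a message once it sees the '\n' delimiter, regardless of how many receive calls that takes.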

Transferring Data Between Client and Server and Dynamically Flush

I've been playing around with transferring data between a test client (written in Java) and a server (written in C#/.NET).
I tried TCP clients and servers, but there has been, and currently is, a problem flushing the stream. I realize flush() doesn't always flush the stream, so I'm wondering if there is any way to flush/send a stream without .flush(), or in a more reliable way?
Currently, the important part of the client looks like this (message is a String, serverSocket is a Socket object):
OutputStream output = serverSocket.getOutputStream();
byte[] buffer = message.getBytes();
int length = buffer.length;
output.write(ByteBuffer.allocate(4).putInt(length).array());
output.write(buffer);
output.flush();
and the server looks like this:
NetworkStream stream = client.GetStream ();
byte[] sizeBuffer = new byte[4];
int read = stream.Read (sizeBuffer, 0, 4);
int size = BitConverter.ToInt32 (sizeBuffer, 0);
Databaser.log ("recieved byte message denoting size: " + size);
byte[] messageBuffer = new byte[size];
read = stream.Read (messageBuffer, 0, size);
string result = BitConverter.ToString (messageBuffer);
Databaser.log ("\tmessage is as follows: '" + result + "'");
If it's not evident from the code: the client sends 4 bytes, which are combined into a 32-bit integer that is the length of the message. I then read the message based on that length and have built-in converters translate it into a string.
As I said, I'm wondering how to flush the connection. I know this code isn't perfect, and I can change it back to when I used TCP and UTF-exclusive string messaging over the network, but either way, the connection doesn't send anything from the client until the client shuts down or closes the connection.
Maybe the problem is the byte order. I have an application that sends from a tablet (Java) to a C# application (Windows/Intel); I used something similar to what you've done, except for the following:
ByteBuffer iLength = ByteBuffer.allocate(4);
iLength.order(ByteOrder.LITTLE_ENDIAN);
iLength.putInt(length);
output.write(iLength.array(), 0, 4);
output.write(buffer);
output.flush();
Java uses big-endian and Intel uses little-endian byte order.
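An alternative sketch (my suggestion, not part of the answer): keep the wire format big-endian, which is conventional network byte order and exactly what DataOutputStream.writeInt() produces, and convert on the C# side instead.

import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Sketch: length-prefixed send in network (big-endian) byte order.
// writeInt() is always big-endian, so the C# reader can decode with
// IPAddress.NetworkToHostOrder(BitConverter.ToInt32(sizeBuffer, 0)).
static void sendMessage(Socket socket, String message) throws IOException {
    DataOutputStream out = new DataOutputStream(socket.getOutputStream());
    byte[] payload = message.getBytes(StandardCharsets.UTF_8);
    out.writeInt(payload.length); // 4-byte big-endian length prefix
    out.write(payload);
    out.flush();
}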

Java getInputStream Read not Exiting

Hi, I have created a server socket and read a byte array from it using getInputStream(), but the read() call does not return after the end of the data is reached. Below is my code.
class imageReciver extends Thread {
    private ServerSocket serverSocket;
    InputStream in;

    public imageReciver(int port) throws IOException {
        serverSocket = new ServerSocket(port);
    }

    public void run() {
        try {
            Socket server = serverSocket.accept();
            in = server.getInputStream();
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            byte buffer[] = new byte[1024];
            while (true) {
                int s = in.read(buffer); // not exiting from here
                if (s < 0) break;
                baos.write(buffer, 0, s);
            }
            server.close();
        } catch (IOException e) { // accept(), read() and close() can all throw
            e.printStackTrace();
        }
    }
}
If I send 2048 bytes from the client, the line in.read(buffer) should return -1 after reading two times, but instead it waits there for a third read. How can I solve this?
Thanks in advance....
Your server will need to close the connection, basically. If you're trying to send multiple "messages" over the same connection, you'll need some way to indicate the size/end of a message - e.g. length-prefixing or using a message delimiter. Remember that you're using a stream protocol - the abstraction is just that this is a stream of data; it's up to you to break it up as you see fit.
See the "network packets" section of Marc Gravell's IO blog post for more information.
EDIT: Now that we know that you have an expected length, you probably want something like this:
int remainingBytes = expectedBytes;
while (remainingBytes > 0) {
    int bytesRead = in.read(buffer, 0, Math.min(buffer.length, remainingBytes));
    if (bytesRead < 0) {
        throw new IOException("Unexpected end of data");
    }
    baos.write(buffer, 0, bytesRead);
    remainingBytes -= bytesRead;
}
Note that this will also avoid overreading, i.e. if the server starts sending the next bit of data, we won't read into that.
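Equivalently (a sketch, assuming the expected length is known up front and the data fits in memory), DataInputStream.readFully() performs this same loop for you:

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: block until exactly expectedBytes arrive; readFully() throws
// EOFException if the stream ends early, mirroring the loop above.
static byte[] readExactly(InputStream in, int expectedBytes) throws IOException {
    byte[] data = new byte[expectedBytes];
    new DataInputStream(in).readFully(data);
    return data;
}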
If I send 2048 bytes, the line 'in.read(buffer)' should return -1 after reading two times.
You are mistaken on at least two counts here. If you send 2048 bytes, the line 'in.read(buffer)' should execute an indeterminate number of times, to read a total of 2048 bytes, and then block. It should only return -1 when the peer has closed the connection.

Java - InputStream - Test for input

I am sending data to a server in two steps:
1) The length of what I will send, as 4 bytes (byte[4])
2) The data.
The server listens for the exact length of the data (shipped first) and then replies.
So I listen to the InputStream and try to get the data.
My problem:
Whatever I do, I only get back the stream I sent, but the server definitely sends a new string.
It seems I cannot wait for -1 (end of stream), as the program would time out, and I am sure the server does not send anything like that.
Therefore I am using inputStream.available() to find out how many bytes are left in the buffer.
Once I call inputStream.read() after reading all the data, it times out with "Network idle timeout".
But I need to listen to the inputStream to make sure I am not missing information.
Why am I only receiving the information I sent and not what is sent by the server?
How can I listen to the connection for new items coming in?
Here is my code:
private void sendData(byte[] sendBytes) {
    try {
        os.write(sendBytes);
        os.flush();
    } catch (IOException ex) {
    }
}
Please help
THD
This is how you normally read all data from a reader (until the other end closes):
// 'is' is a BufferedReader
StringBuilder data = new StringBuilder();
char[] buffer = new char[1024 * 32];
int len = 0;
while ((len = is.read(buffer)) != -1) {
    data.append(buffer, 0, len);
}
// at this point, data contains everything received from the server
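For the asker's actual protocol (a 4-byte length followed by the payload), here is a minimal sketch using DataInputStream, assuming the length is sent in big-endian order:

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: read one length-prefixed message. readInt() consumes the
// 4-byte big-endian length; readFully() blocks until the whole payload
// has arrived, so there is no need to poll available().
static byte[] readMessage(InputStream in) throws IOException {
    DataInputStream din = new DataInputStream(in);
    int length = din.readInt();
    byte[] payload = new byte[length];
    din.readFully(payload);
    return payload;
}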

java socket server and embedded device - can't handle disconnect properly

I'm writing a server that is supposed to communicate with some embedded devices. The communication protocol is based on a fixed length header. The problem is I can't get my server to handle sudden disconnects of the devices properly (by "sudden" I mean situations when I just turn the device off). Here is the code for the client thread main loop:
while (!terminate) {
    try {
        // Receive the header
        while (totalBytesRead < ServerCommon.HEADER_SIZE) {
            bytesRead = dis.read(headerBuffer, bytesRead, ServerCommon.HEADER_SIZE - bytesRead);
            if (bytesRead == -1) {
                // Can't get here!
            } else {
                totalBytesRead += bytesRead;
            }
        }
        totalBytesRead = 0;
        bytesRead = 0;
        type = Conversion.byteArrayToShortOrder(headerBuffer, 0);
        length = Conversion.byteArrayToShortOrder(headerBuffer, 2);
        // Receive the payload
        while (totalBytesRead < length) {
            bytesRead = dis.read(receiveBuffer, bytesRead, length - bytesRead);
            if (bytesRead == -1) {
                // Can't get here!
            } else {
                totalBytesRead += bytesRead;
            }
        }
        totalBytesRead = 0;
        bytesRead = 0;
        // Pass the received frame to the FrameDispatcher
Even if I turn the device off, the read method keeps returning 0, not -1. How can this be?
When you close a socket normally, a sequence of control messages is exchanged between client and server to coordinate the shutdown (starting with a FIN from the closing end). Here that never happens, since you simply turn the device off, and consequently the server is left wondering what has happened.
You may want to investigate configuring timeouts etc., or some sort of timed protocol that identifies a disconnect through the absence of a response (perhaps out-of-band heartbeats using ICMP/UDP?). Or would a connectionless protocol like UDP be of use for your communication?
read() is supposed to return 0 only if the supplied length is 0. On error, -1 should be returned or an exception thrown.
I suggest you debug your server program first. Create a Java client application (it should be easy to do), then kill the client and see what happens. Even better, use two PCs and suddenly unplug one; that simulates your situation more closely.
TCP has a 30-second timeout for communication partners that are not reachable. I suppose that if you wait 30 seconds, you should get your -1.
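A related sketch (the timeout value and method name are illustrative assumptions, not from the question): Socket.setSoTimeout() turns a silent disconnect into a SocketTimeoutException that the server thread can handle explicitly.

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Sketch: fail any read that stalls, instead of blocking forever on a
// device that was simply switched off.
static void receiveLoop(Socket socket) throws IOException {
    socket.setSoTimeout(30_000); // abort reads that stall for 30 s
    DataInputStream dis = new DataInputStream(socket.getInputStream());
    byte[] header = new byte[4]; // stands in for ServerCommon.HEADER_SIZE
    try {
        while (true) {
            dis.readFully(header); // blocks; throws EOFException on close
            // ... parse the header, then readFully() the payload ...
        }
    } catch (SocketTimeoutException e) {
        socket.close(); // no traffic within the timeout: assume device gone
    } catch (EOFException e) {
        socket.close(); // orderly close from the peer
    }
}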
