I'm running a Java server that uses the Twitter API and collects search results for any given keyword. My goal is to send the results to my website's PHP script. However, some Tweets contain text whose bytes are less than 0 (Java bytes are signed); these appear to be Unicode (non-ASCII) characters. I've had to replace all of those characters with a space for the packet to be sent at all - if a byte less than 0 is sent, the PHP script just reads "null". I need to be able to send bytes of any value, even if they're below 0.
Java: Replace bytes below value 0 with a space
// Get the Tweet text
String text = content.getData(2);
// Get the bytes
byte [] bytes = text.getBytes();
// Replace any byte below 0 with a space
for(int a = 0; a < bytes.length; ++a) {
if(bytes[a] < 0) {
bytes[a] = " ".getBytes()[0];
}
}
// Put the bytes back into a String
text = new String(bytes);
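For context, a byte below 0 here is just how Java's signed byte type displays the values 0x80-0xFF, which are exactly the bytes UTF-8 uses for non-ASCII characters; they are not corrupt data. A minimal, self-contained sketch (not tied to the Twitter code above) showing that such text survives a round trip when both the encode and the decode step name the charset explicitly:
import java.nio.charset.StandardCharsets;

public class Utf8RoundTrip {
    public static void main(String[] args) {
        // A string with non-ASCII characters; their UTF-8 bytes show up as values
        // below 0 because Java's byte type is signed.
        String text = "caf\u00e9 \u2603";
        byte[] utf8 = text.getBytes(StandardCharsets.UTF_8);    // contains "negative" bytes
        String back = new String(utf8, StandardCharsets.UTF_8); // decodes cleanly, no spaces needed
        System.out.println(back.equals(text));                   // prints true
    }
}
Note that getBytes() and new String(bytes) without a charset argument, as used above, fall back to the platform default charset, which can differ between the Java server and whatever decodes the data on the PHP side.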
Java: Server that listens to commands and replies with output
ServerSocket socket = null;
InputStreamReader inputStream = null;
BufferedReader input = null;
try {
socket = new ServerSocket(port);
Logger.log("Server running on port " + port);
while(running) {
connection = socket.accept();
inputStream = new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8);
input = new BufferedReader(inputStream);
// Run the command we're given, in this case the command will request Twitter search results
String reply = runCommand(input.readLine());
// Reply(String) will reply with the results
reply(reply);
}
} catch(Exception e) {
e.printStackTrace();
} finally {
try {
if(connection != null) {
connection.close();
}
if(response != null) {
response.close();
}
if(inputStream != null) {
inputStream.close();
}
if(input != null) {
input.close();
}
if(socket != null) {
socket.close();
}
} catch(IOException e) {
e.printStackTrace();
}
}
Java: Reply method to send the results of the command execution (in this case it'll send the Twitter search results)
private void reply(String reply) {
try {
response = new DataOutputStream(connection.getOutputStream());
response.writeUTF(reply);
response.flush();
} catch(IOException e) {
e.printStackTrace();
}
}
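It may be worth noting that writeUTF() does not send raw UTF-8: it writes a 2-byte length prefix followed by the string in Java's modified UTF-8, which is presumably why the PHP code below strips the first two bytes with substr($next, 2). A sketch of an alternative reply method (reusing the connection field from the code above) that frames the message as a 4-byte big-endian length plus standard UTF-8 bytes, so any client can read it without knowing about modified UTF-8:
private void reply(String reply) {
    try {
        byte[] payload = reply.getBytes(java.nio.charset.StandardCharsets.UTF_8);
        DataOutputStream response = new DataOutputStream(connection.getOutputStream());
        response.writeInt(payload.length); // 4-byte big-endian length prefix
        response.write(payload);           // raw UTF-8 bytes, including values below 0
        response.flush();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
On the PHP side the client would then read the 4 length bytes, decode them with unpack('N', ...), and read exactly that many body bytes; this is only one possible framing, not the only way to do it.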
PHP: Send commands (Twitter search query) via sockets and get the reply (Tweet data)
$socket = socket_create(AF_INET, SOCK_STREAM, getprotobyname('tcp'));
try {
socket_connect($socket, $address, $port);
// Encode the message in UTF-8 - is this the correct thing to do?
$message = utf8_encode($message);
// Get the result of sending this message to my Java server
$status = socket_sendto($socket, $message, strlen($message), MSG_EOF, $address, $port);
// Decode and return the results, is this the correct way to do this?
if($status != false) {
if($next = utf8_decode(socket_read($socket, $port))) {
return substr($next, 2);
}
}
} catch(Exception $e) {
}
// Even when I console.log the return of this method and it comes out "null", it never actually comes from this line. I've tried changing this to return "-1" and other values, and it still always returned "null", as if the string "null" were being returned from the if statements above.
return null;
I believe it may be important to note the bottom comment in the PHP code. I've been searching Google about this for a while now, and I'm not sure whether I'm doing things wrong or just searching for the wrong thing.
How would I send bytes that are less than 0 through this system?
Related
For quite a long time now I've been struggling with handling the TFTP protocol in my Android app. Its main feature is downloading files from a custom-designed device which hosts a TFTP server.
I browsed the internet hoping to find a good, already-written implementation. First I tried the TFTP library that is part of Apache Commons. Unfortunately, no luck - constant timeouts or even a complete freeze. After some further research I found some code on GitHub - please take a look. I adapted the code to Android and, after some tweaking, I managed to finally receive some files.
The creator of the device stated that the block size should be exactly 1015 bytes. So I increased the packet size to 1015 and updated the method that creates the read request packet:
DatagramPacket createReadRequestPacket(String strFileName) {
byte[] filename = strFileName.getBytes();
byte[] mode = currentMode.getBytes();
int len = rOpCode.length + filename.length + mode.length + 2;
ByteArrayOutputStream outputStream = new ByteArrayOutputStream(len);
try {
outputStream.write(rOpCode);
outputStream.write(filename);
byte term = 0;
outputStream.write(term);
outputStream.write(mode); // "octet"
outputStream.write(term);
outputStream.write("blksize".getBytes());
outputStream.write(term);
outputStream.write("1015".getBytes());
outputStream.write(term);
} catch (IOException e) {
e.printStackTrace();
}
byte[] readPacketArray = outputStream.toByteArray();
return new DatagramPacket(readPacketArray, readPacketArray.length, serverAddr, port);
}
Chunks are being downloaded, but there is one major issue: the files I'm downloading come in parts of 512 kB each (except the last one), and each part I receive on the Android device is around 0.5 kB larger. It seems like there is either one extra byte each time or one whole extra append. Apparently I don't understand this completely and I'm missing something.
This is my method for file receiving:
byte previousBlockNumber = (byte) -1;
try {
PktFactory pktFactory;
DatagramSocket clientSocket;
byte[] buf;
DatagramPacket sendingPkt;
DatagramPacket receivedPkt;
System.out.print(ftpHandle);
if (isConnected) {
System.out.println("You're already connected to " + hostname.getCanonicalHostName());
}
try {
hostname = InetAddress.getByName(host);
if (!hostname.isReachable(4000)) {
System.out.println("Hostname you provided is not responding. Try again.");
return false;
}
} catch (UnknownHostException e) {
System.out.println("tftp: nodename nor servname provided, or not known");
return false;
}
clientSocket = new DatagramSocket();
pktFactory = new PktFactory(PKT_LENGTH + 4, hostname, TFTP_PORT);
System.out.println("Connecting " +
hostname.getCanonicalHostName() + " at the port number " + TFTP_PORT);
isConnected = true;
ftpHandle = "tftp#" + hostname.getCanonicalHostName() + "> ";
System.out.println("mode " + PktFactory.currentMode);
if (!isConnected) {
System.out.println("You must be connected first!");
}
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
buf = new byte[PKT_LENGTH + 4];
/* Sending the reading request with the filename to the server. **/
try {
/* Sending a RRQ with the filename. **/
System.out.println("Sending request to server.");
sendingPkt = pktFactory.createReadRequestPacket(filename);
clientSocket.setSoTimeout(4500);
clientSocket.send(sendingPkt);
} catch (Exception e) {
e.printStackTrace();
System.out.println("Connection with server failed");
}
boolean receivingMessage = true;
while (true) {
try {
receivedPkt = new DatagramPacket(buf, buf.length);
clientSocket.setSoTimeout(10000);
clientSocket.receive(receivedPkt);
byte[] dPkt = receivedPkt.getData();
byte[] ropCode = pktFactory.getOpCode(dPkt);
/* rPkt either a DATA or an ERROR pkt. If an error then print the error message and
* terminate the program finish get command. **/
if (ropCode[1] == 5) {
String errorMsg = pktFactory.getErrorMessage(dPkt);
System.out.println(errorMsg);
return false;
}
if (receivedPkt.getLength() < PKT_LENGTH + 4 && ropCode[1] == 3) {
byte[] fileDataBytes = pktFactory.getDataBytes(dPkt);
outputStream.write(fileDataBytes);
if (isListFile) {
listBytes = outputStream.toByteArray();
} else {
FileOutputStream fstream = new FileOutputStream(Constants.EEG_DATA_PATH.concat("file.bin"), true);
// Let's get the last data pkt for the current transferring file.
fstream.write(outputStream.toByteArray());
fstream.close();
}
// It's time to send the last ACK message before Normal termination.
byte[] bNum = pktFactory.getBlockNum(dPkt);
DatagramPacket sPkt = pktFactory.createAckPacket(bNum, receivedPkt.getPort());
clientSocket.send(sPkt);
disconnect();
return true;
}
if (ropCode[1] == 3) {
if (receivingMessage) {
System.out.println("Receiving the file now..");
receivingMessage = false;
}
byte[] bNum = pktFactory.getBlockNum(dPkt);
// I've added this if and it reduces the file size a little (it was more than 0.5 kB bigger)
if (previousBlockNumber != bNum[1]) {
byte[] fileDataBytes = pktFactory.getDataBytes(dPkt);
previousBlockNumber = bNum[1];
outputStream.write(fileDataBytes);
}
/* For each received DATA pkt we need to send ACK pkt back. **/
DatagramPacket sPkt = pktFactory.createAckPacket(bNum, receivedPkt.getPort());
clientSocket.send(sPkt);
}
} catch (SocketTimeoutException e) {
disconnect();
System.out.println("Server didn't respond and a timeout occurred.");
return false;
}
}
} catch (Exception e) {
System.out.println(e.getMessage());
return false;
}
I found out what was wrong. That strange behavior was the result of this line when the last packet was received:
byte[] fileDataBytes = pktFactory.getDataBytes(dPkt);
The returned array size was always equal to the specified packet length, even if the received data was smaller. In my case the last packet was 0 bytes (+4 bytes for the TFTP header), but even then an extra 512 bytes was added to the output stream.
To resolve this I overloaded the method with an extra parameter - the actual size of the received packet - used whenever the received data is larger than 4 bytes and smaller than the specified packet size (512 bytes). With this change the array for the last packet has the correct size, so the received file ends up with the correct size at the end of the operation.
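In other words, the overload only has to copy up to the datagram's actual length instead of the whole fixed-size buffer. A rough sketch of the idea, assuming the standard 4-byte TFTP DATA header (2-byte opcode plus 2-byte block number) and receivedPkt being the DatagramPacket returned by receive():
import java.net.DatagramPacket;
import java.util.Arrays;

// Copy only the bytes that actually arrived, skipping the 4-byte DATA header,
// rather than the full fixed-size receive buffer.
static byte[] getDataBytes(DatagramPacket receivedPkt) {
    return Arrays.copyOfRange(receivedPkt.getData(), 4, receivedPkt.getLength());
}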
So I want to have a TCP connection between a Java client and a C++ server. Think of the client as an input device and the C++ server should receive JSON objects, parse them and use them in a game.
It seems like the connection is established successfully, but 1) there is an error ("parse error - unexpected ''") when I try to parse the JSON objects (I'm using nlohmann's json), and 2) when I don't even call doStuff, i.e. just print out the buffer, only weird characters are printed.
I assume I messed up something in the sending/receiving of the data (this is the first time I've used C++), but I've lost two days and really can't figure it out!
In the Java client I have:
private void connect() {
try {
hostname = conn.getHostname();
portnumber = conn.getPortNr();
socket = new Socket(hostname, portnumber);
out = new OutputStreamWriter(socket.getOutputStream());
in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
} catch (Exception e) {
e.printStackTrace();
Log.e(debugString, e.getMessage());
}
}
public void sendMessage(String json) {
try {
//connect();
out.write(json.length());
Log.d(debugString, String.valueOf(json.length()));
out.flush();
out.write(json);
out.flush();
Log.d(debugString, json);
in.read();
this.close();
} catch (Exception e) {
e.printStackTrace();
Log.e(debugString, e.getMessage());
}
}
And in the C++ server:
void Server::startConnection() {
if (listen(s, 1) != 0) {
perror("Error on listen");
exit(EXIT_FAILURE);
}
listen(s, 1);
clilen = sizeof(cli_addr);
newsockfd = accept(s, (struct sockaddr *) &cli_addr, &clilen);
if (newsockfd < 0) {
close(newsockfd);
perror("Server: ERROR on accept");
exit(EXIT_FAILURE);
}
puts("Connection accepted");
int numbytes;
char buffer[MAXDATASIZE];
while (1)
{
numbytes = recv(s,buffer,MAXDATASIZE-1,0);
buffer[numbytes]='\0';
//Here's where the weird stuff happens
//cout << buffer;
//doStuff(numbytes,buffer);
if (numbytes==0)
{
cout << "Connection closed"<< endl;
break;
}
}
}
bool Server::sendData(char *msg) {
int len = strlen(msg);
int bytes_sent = send(s,msg,len,0);
if (bytes_sent == 0) {
return false;
} else {
return true;
}
}
void Server::doStuff(int numbytes, char * buf) {
json jdata;
try {
jdata.clear();
jdata = nlohmann::json::parse(buf);
if (jdata["type"] == "life") {
life = jdata["value"];
puts("json parsed");
}
} catch (const std::exception& e) {
cerr << "Unable to parse json: " << e.what() << std::endl;
}
}
Since your char "buffer" shows weird characters after recv() on the C++ server, it seems to me the issue is a character-encoding mismatch between the Java client and the C++ server. To verify, check the "numbytes" returned by recv() on the C++ server: it should be greater than the number of characters in the JSON string on the Java client.
You are writing the JSON length as a single character (Writer.write(int) keeps only its low-order bits), but you're never doing anything with it at the receiver. This is almost certainly a mistake anyway: you shouldn't need to send the length at all, since JSON is self-describing.
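Following that advice, a sketch of what the Java sendMessage could look like without the length prefix - writing the JSON as plain UTF-8 and terminating each message with a newline so the receiver has an explicit boundary to split on. The socket, debugString and Log names are the ones from the client code above, and the newline terminator is just one possible convention:
public void sendMessage(String json) {
    try {
        OutputStreamWriter out =
                new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8);
        out.write(json);
        out.write('\n'); // explicit message terminator for the receiver
        out.flush();
    } catch (Exception e) {
        e.printStackTrace();
        Log.e(debugString, e.getMessage());
    }
}
On the C++ side the server would then append each recv() result to a growing buffer and parse only complete lines ending in '\n'.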
I hope to find some help with an old, annoying problem.
I have a TCP server program in Java and a client program in C#.
The packet protocol between the two simply consists of a 4-byte length header followed by an ASCII body.
The problem is that the C# client hits a FormatException when parsing the length header. Looking at the error on the client side, the client is trying to parse something from the middle of the body rather than the length header.
But apparently the server does not send broken packets.
Meanwhile, on the server, I see a broken pipe error whenever this kind of problem happens.
Unfortunately this error does not always happen, and I have not been able to recreate the problem, which makes it difficult to find the exact cause.
Please see below codes for server side
public class SimplifiedServer {
private Map<InetAddress, DataOutputStream> outMap;
private Map<InetAddress,DataInputStream> inMap;
protected void onAcceptNewClient(Socket client) {
DataOutputStream out = null;
DataInputStream in = null;
try {
out = new DataOutputStream(client.getOutputStream());
in = new DataInputStream(client.getInputStream());
} catch (IOException e) {
e.printStackTrace();
}
outMap.put(client.getInetAddress(), out);
inMap.put(client.getInetAddress(), in);
}
public void writeToAll(String packet) {
outMap.forEach((key, out) -> {
try {
byte[] body = packet.getBytes("UTF-8");
int len = body.length;
if (len > 9999) {
throw new IllegalArgumentException("packet length is longer than 10000, this try will be neglected");
}
String lenStr = String.format("%04d%s", len, packet);
byte[] obuf = lenStr.getBytes();
synchronized (out) {
out.write(obuf);
out.flush();
}
} catch (IOException e) {
e.printStackTrace();
}
});
}
public void listenClient(Socket client) {
try {
DataOutputStream out = outMap.get(client.getInetAddress());
DataInputStream in = inMap.get(client.getInetAddress());
while (true) {
byte[] received = SimplePacketHandler.receiveLpControlerData(in);
byte[] lenBytes = new byte[4];
for( int i = 0 ; i < 4 ; i ++){
lenBytes[i] = in.readByte();
}
String lenString = new String(lenBytes);
int length = Integer.parseInt(lenString);
byte[] data = new byte[length];
for ( int i = 0 ; i < length ; i ++){
data[i] = in.readByte();
}
if ( data == null ){
System.out.println("NetWork error, closing socket :" + client.getInetAddress());
in.close();
out.close();
outMap.remove(client.getInetAddress());
inMap.remove(client.getInetAddress());
return;
}
doSomethingWithData(out, data);
}
} catch (NumberFormatException e) {
e.printStackTrace();
} catch ( Exception e ) {
e.printStackTrace();
} finally {
try {
System.out.println(client.getRemoteSocketAddress().toString() + " closing !!! ");
// remove stream handler from map
outMap.remove(client.getInetAddress());
inMap.remove(client.getInetAddress());
//close socket.
client.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
And here is client side code
public class ClientSide
{
public TcpClient client;
public String ip;
public int port;
public NetworkStream ns;
public BinaryWriter writer;
public BinaryReader reader;
public Boolean isConnected = false;
public System.Timers.Timer t;
public String lastPacketSucceeded = String.Empty;
public ClientSide(String ip, int port)
{
this.ip = ip;
this.port = port;
client = new TcpClient();
}
public bool connect()
{
try
{
client.Connect(ip, port);
}
catch (SocketException e)
{
Console.WriteLine(e.ToString());
return false;
}
Console.WriteLine("Connection Established");
reader = new BinaryReader(client.GetStream());
writer = new BinaryWriter(client.GetStream());
isConnected = true;
return true;
}
public void startListen()
{
Thread t = new Thread(new ThreadStart(listen));
t.Start();
}
public void listen()
{
byte[] buffer = new byte[4];
while (true)
{
try
{
reader.Read(buffer, 0, 4);
String len = Encoding.UTF8.GetString(buffer);
int length = Int32.Parse(len);
byte[] bodyBuf = new byte[length];
reader.Read(bodyBuf, 0, length);
String body = Encoding.UTF8.GetString(bodyBuf);
doSomethingWithBody(body);
}
catch (FormatException e)
{
Console.WriteLine(e.Message);
}
}
}
public void writeToServer(String bodyStr)
{
byte[] body = Encoding.UTF8.GetBytes(bodyStr);
int len = body.Length;
if (len > 10000)
{
Console.WriteLine("Send Abort:" + bodyStr);
}
len = len + 10000;
String lenStr = Convert.ToString(len);
lenStr = lenStr.Substring(1);
byte[] lengthHeader = Encoding.UTF8.GetBytes(lenStr);
String fullPacket = lenStr + bodyStr;
byte[] full = Encoding.UTF8.GetBytes(fullPacket);
try
{
writer.Write(full);
}
catch (Exception)
{
reader.Close();
writer.Close();
client.Close();
reader = null;
writer = null;
client = null;
Console.WriteLine("Send Fail" + fullPacket);
}
Console.WriteLine("Send complete " + fullPacket);
}
}
Since I cannot recreate the problem, my guess is that it is a multithreading issue, but I could not find any further clues to fix it.
Please let me know if you need any more information to sort this out.
Any help will be greatly appreciated; thanks in advance.
A broken pipe exception is caused by closing the connection on the other side. Most likely the C# client has a bug, causing the format exception which causes it to close the connection and therefore the broken pipe on the server side. See what is the meaning of Broken pipe Exception?.
Check the return value of this read:
byte[] bodyBuf = new byte[length];
reader.Read(bodyBuf, 0, length);
According to Microsoft documentation for BinaryReader.Read https://msdn.microsoft.com/en-us/library/ms143295%28v=vs.110%29.aspx
[The return value is ] The number of bytes read into buffer. This might be less than the number of bytes requested if that many bytes are not available, or it might be zero if the end of the stream is reached.
If it reads fewer than length bytes, then the next iteration will end up parsing the length from data somewhere in the middle of the previous message.
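The same pitfall exists with a raw InputStream.read(byte[], int, int) in Java. For comparison, here is the read-until-complete loop (essentially what DataInputStream.readFully does internally) that the C# client needs around BinaryReader.Read, sketched in Java:
import java.io.IOException;
import java.io.InputStream;

// Read exactly len bytes into buf, or throw; a single read() call may return fewer
// bytes than requested, so keep reading until the message is complete.
static void readExactly(InputStream in, byte[] buf, int len) throws IOException {
    int off = 0;
    while (off < len) {
        int n = in.read(buf, off, len - off);
        if (n < 0) {
            throw new IOException("Stream closed before " + len + " bytes were read");
        }
        off += n;
    }
}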
These broken pipe exceptions happen when the client (browser) has closed the connection, but the server continues to try to write to the stream.
This usually happens when someone clicks Back, Stop, etc. in the browser and it disconnects from the server before the request is finished. Sometimes, it can happen because, for example, the Content-Length header is incorrect (and the browser takes its value as true).
Usually, this is a non-event, and nothing to worry about. But if you are seeing them in your dev environment when you know you have not interrupted your browser, you might dig a bit more to find out why.
WLS server will try to filter these exceptions from the web container out of the log, since it is due to client (browser) action and we can't do anything about it. But the server doesn't catch all of them.
Reference: https://community.oracle.com/thread/806884
I am sending data over a socket, but the Java socket seems to change the ordering and lose data, and I can't fix it.
Here is my java code:
Socket socket;
...
while(isSending){
try {
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
String data = getMyData();
out.writeBytes(data);//data is a csv string parsed on server-side
out.flush();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
Server.cpp:
while(1){
char recv_buffer[4096];
memset(recv_buffer,0,4096);
//receive data from socket
int ret = recv(socket , recv_buffer , 4095 , 0);
if (ret == 0){
error_print("Socket not connected");
ret = 0;
} else if (ret < 0) {
error_print("Error reading from socket!");
ret = 0;
}
if(ret<=0) break;
recv_buffer[ret]='\0';
//parse recv_buffer
}
If I put a Thread.sleep(2000) in the Java while-loop, the values are received correctly. What could be the reason for this behavior, and how can I fix it?
Just as I suspected. You are not handling the value returned by the recv() function correctly. It can be -1 indicating an error, zero indicating end of stream, or a positive integer indicating the number of bytes received. Instead you are assuming not only that the read succeeded but also that it delivered a complete, null-terminated string.
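For what it's worth, TCP preserves byte order but not write boundaries, so one recv() can return half a CSV string or several of them glued together; with the Thread.sleep(2000) each write simply happens to arrive alone. A sketch of one way to make record boundaries explicit on the Java side (a hypothetical sendRecord helper, not part of the answer above), so the C++ parser can split the receive buffer on '\n':
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

// Terminate each CSV record with '\n' so the server can split records no matter
// how recv() chunks the byte stream.
static void sendRecord(Socket socket, String csv) throws IOException {
    DataOutputStream out = new DataOutputStream(socket.getOutputStream());
    out.writeBytes(csv);
    out.writeBytes("\n"); // record terminator for the C++ side to split on
    out.flush();
}
The C++ side would then accumulate recv() results in a buffer and only parse complete lines.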
This is what you usually do when sending text data
// Receiver code
while (mRun && (response = in.readLine()) != null && socket.isConnected()) {
// Do stuff
}
// Sender code
printWriter.println(mMessage);
printWriter.flush();
but when working with DataOutputStream#write(byte[]) to send a byte[], how do you write a while loop to receive the sent data?
All I have found is this, but it doesn't loop, so I'm guessing this will just run on the first sent message:
int length = in.readInt();
byte[] data = new byte[length];
in.readFully(data);
How can I achieve this?
PS: yep, I'm new to socket programming.
EDIT: I'm sending a byte array every 3 to 5 seconds. This is what I've got so far.
// On the client side, to send the byte[]. This is executed every 3 seconds.
if(out != null) {
try {
out.writeInt(encrypted.length);
out.write(encrypted);
out.writeInt(0);
out.flush();
return true;
} catch (IOException e) {
e.printStackTrace();
return false;
}
}
// On the server side, to receive the byte[] sent from the client (also executed every 3 to 5 seconds, since bytes are sent at that rate). "client" is the Socket instance.
while(true && client.isConnected()) {
byte[] data = null;
while(true) {
int length = in.readInt();
if(length == 0)
break;
data = new byte[length];
in.readFully(data);
}
if(data != null) {
String response = new String(data);
if(listener != null) {
listener.onMessageReceived(response);
}
}
}
Assuming you're trying to handle a stream of messages, it sounds like what you're missing is a way of specifying (in the stream) how big your messages are (or where they end).
I suggest you just write a prefix before each message, specifying the length:
output.writeInt(data.length);
output.write(data);
Then when reading:
while (true)
{
int length = input.readInt();
byte[] buffer = new byte[length];
input.readFully(buffer, 0, length);
// Process buffer
}
You'll also need to work out a way of detecting the end of input. DataInputStream doesn't have a clean way of detecting that as far as I can tell. There are various options - the simplest may well be to write out a message of length 0, and break out of the loop if you read a length of 0.
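A sketch of that reader loop, using the zero-length sentinel just described and also treating a plain end-of-stream (EOFException from readInt) as the end of input in case the sender simply closes the socket:
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Reader loop with a zero-length sentinel; EOF is also treated as the end of input.
static void readMessages(DataInputStream input) throws IOException {
    while (true) {
        int length;
        try {
            length = input.readInt();
        } catch (EOFException e) {
            break;                                   // stream ended without a sentinel
        }
        if (length == 0) {
            break;                                   // explicit end-of-input sentinel
        }
        byte[] buffer = new byte[length];
        input.readFully(buffer, 0, length);
        String message = new String(buffer, StandardCharsets.UTF_8);
        // handle message here
    }
}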