I have made a class in Java that wraps an existing socket with the WebSocket protocol. I have everything working for the RFC 6455 protocol, and it works in Chrome and Firefox. However, Safari and iOS use the hixie-76 / HyBi-00 protocol (according to Wikipedia).
I have everything working and Safari and iOS correctly handshake and start sending/receiving messages... well, at least most of the time.
About 20-30% of the time, the handshake fails and Safari closes the connection (Java reads -1 when trying to read the first frame). Safari does not report any errors in the console; it just calls the onclose event handler.
Why would the handshakes only work part of the time?
Here is my handshake code:
Note: No exceptions are thrown and "Handshake Complete" is written to the console. But then, upon trying to read the first frame, the connection is closed (Java returns -1 on inst.read()).
// Headers are read in a previous method which wraps the socket using the RFC 6455
// protocol. If it detects 2 keys it will call this and pass in the headers.
public static MessagingWebSocket wrapOldProtocol(HashMap<String, String> headers, PushbackInputStream pin, Socket sock) throws IOException, NoSuchAlgorithmException {
// SPEC
// https://datatracker.ietf.org/doc/html/draft-hixie-thewebsocketprotocol-76#page-32
// Read the "key3" value. This is 8 random bytes after the headers.
byte[] key3 = new byte[8];
for ( int i=0; i<key3.length; i++ ) {
key3[i] = (byte)pin.read();
}
// Grab the two keys we need to use for the handshake
String key1 = headers.get("Sec-WebSocket-Key1");
String key2 = headers.get("Sec-WebSocket-Key2");
// Count the spaces in both keys
// Abort the connection if either key has 0 spaces
int spaces1 = StringUtils.countMatches(key1, " ");
int spaces2 = StringUtils.countMatches(key2, " ");
if ( spaces1 == 0 || spaces2 == 0 ) {
throw new IOException("Bad Handshake Request, Possible Cross-protocol attack");
}
// Strip all non-digit characters from each key
// Use the remaining value as a base-10 integer.
// Abort if either number is not a multiple of its #spaces counterpart
// Need to use long because the values are unsigned
long num1 = Long.parseLong( key1.replaceAll("\\D", "") );
long num2 = Long.parseLong( key2.replaceAll("\\D", "") );
if ( !(num1 % spaces1 == 0) || !(num2 % spaces2 == 0) ) {
throw new IOException("Bad Handshake Request. Possible non-conforming client");
}
// Part1/2 is key num divided by the # of spaces
int part1 = (int)(num1 / spaces1);
int part2 = (int)(num2 / spaces2);
// Now calculate the challenge response
// MD5( num1 + num2 + key3 ) ... concat, not add
MessageDigest md = MessageDigest.getInstance("MD5");
md.update(ByteBuffer.allocate(4).putInt(part1));
md.update(ByteBuffer.allocate(4).putInt(part2));
md.update(key3);
byte[] response = md.digest();
// Now build the server handshake response
// Ignore Sec-WebSocket-Protocol (we don't use this)
String origin = headers.get("Origin");
String location = "ws://" + headers.get("Host") + "/";
StringBuilder sb = new StringBuilder();
sb.append("HTTP/1.1 101 WebSocket Protocol Handshake").append("\r\n");
sb.append("Upgrade: websocket").append("\r\n");
sb.append("Connection: Upgrade").append("\r\n");
sb.append("Sec-WebSocket-Origin: ").append(origin).append("\r\n");
sb.append("Sec-WebSocket-Location: ").append(location).append("\r\n");
sb.append("\r\n");
// Anything left in the buffer?
if ( pin.available() > 0 ) {
throw new IOException("Unexpected bytes after handshake!");
}
// Send the handshake & challenge response
OutputStream out = sock.getOutputStream();
out.write(sb.toString().getBytes());
out.write(response);
out.flush();
System.out.println("[MessagingWebSocket] Handshake Complete.");
// Return the wrapper socket class.
MessagingWebSocket ws = new MessagingWebSocket(sock);
ws.oldProtocol = true;
return ws;
}
Thanks!
Note: I am not looking for third-party alternatives for WebSockets such as jWebSocket, Jetty and Socket.IO. I already know about many of these.
Your MD5 digest method has a bug:
The protocol is described here: https://datatracker.ietf.org/doc/html/draft-hixie-thewebsocketprotocol-76#section-5.2
byte[] bytes = new byte[16];
BytesUtil.fillBytesWithArray(bytes, 0, 3, BytesUtil.intTobyteArray(part1));
BytesUtil.fillBytesWithArray(bytes, 4, 7, BytesUtil.intTobyteArray(part2));
BytesUtil.fillBytesWithArray(bytes, 8, 15, key3);
I think your problem is caused by little-endian vs. big-endian byte ordering.
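Something along these lines should produce the correct challenge (a sketch using plain ByteBuffer instead of my BytesUtil helper; note that after putInt() the buffer's position is at its end, so digesting the buffer object directly would consume zero bytes unless you use its backing array or flip() it first):
// Build the 16-byte challenge: part1 and part2 as big-endian 32-bit integers,
// followed by the 8 bytes of key3, then MD5 the whole thing.
ByteBuffer challenge = ByteBuffer.allocate(16);
challenge.order(ByteOrder.BIG_ENDIAN);   // network byte order, as the draft requires
challenge.putInt(part1);
challenge.putInt(part2);
challenge.put(key3);
MessageDigest md = MessageDigest.getInstance("MD5");
byte[] response = md.digest(challenge.array());   // digest the backing array, not the un-flipped buffer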
I have been attempting to set up a basic server using Java's ServerSocket, Socket, and InputStream. In reading the InputStream, the expected result was a repeating series of byte 0x0b and 10 bytes of associated data (0x0b-data-0x0b-data repeating). The issue is that a small amount of the bytes are entirely dropped somewhere within the Java application, leaving only 9 bytes of data in some packets (after checking with Wireshark, the bytes are present in the original packets, just not the output of the InputStream).
The context in which this is happening is during a sequence of around a hundred packets sent in quick succession in response to certain behavior. I believe this is simply because there are more bytes that have an opportunity to be dropped, not because of the speed at which the data is received.
After some searching, I found the same issue at Java Socket InputStream read missing bytes, but that thread died with requests for further information (and hence no useful answers).
The entirety of the code causing this problem is below. The most important sections are the while true loop and the readData function (excluding the else if chain).
To clarify, the question is: what is the cause of this weird behaviour?
package com.kevycat.minerria;
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Arrays;
public class Minerria {
private static Socket client;
public static void main(String[] args) throws IOException {
ServerSocket socket = new ServerSocket(7777);
System.out.println("Listening");
client = socket.accept();
InputStream stream = client.getInputStream();
System.out.println("Connected");
byte[] extraData = new byte[0];
while (true) {
int available = stream.available();
byte[] data = new byte[available + extraData.length];
stream.read(data, extraData.length, available);
if (extraData.length > 0) {
for (int i = 0; i < extraData.length; i++) {
data[i] = extraData[i];
}
}
if (data.length > 0) {
for (int i = 0; i < data.length; i++) {
System.out.print(data[i] + " ");
}
System.out.println(" ");
}
if (data.length > 0) {
extraData = readData(data);
}
}
}
private static byte[] readData(byte[] data) throws IOException {
if (data.length < 3) {
return data;
}
int length = data[0] + data[1] * 256;
int type = data[2];
String payload = new String(Arrays.copyOfRange(data, 4, length));
System.out.println(length + " " + type + " " + payload);
if (type == 1) {
client.getOutputStream().write(new byte[] { 5, 0, 3, 0, 0 });
} else if (type == 4) {
client.getOutputStream().write(data);
} else if (type == 5) {
client.getOutputStream().write(data);
} else if (type == 68) {
client.getOutputStream().write(data);
} else if (type == 16) {
client.getOutputStream().write(data);
} else if (type == 42) {
client.getOutputStream().write(data);
} else if (type == 50) {
client.getOutputStream().write(data);
} else if (type == 6) {
byte[] b = new byte[80];
b[0] = 80;
b[2] = 7;
client.getOutputStream().write(b);
} else if (type == 8) {
client.getOutputStream().write(new byte[] { 11, 0, 9, 0, 1, 0, 0, 0, 'e', 'e', 'e' });
}
return data.length > length ? Arrays.copyOfRange(data, length, data.length - 1) : new byte[0];
}
}
int available = stream.available();
Don't do this. available() does nothing useful. If you don't believe me, I shall quote the javadoc:
Returns an estimate of the number of bytes that can be read (or skipped over) from this input stream without blocking by the next invocation of a method for this input stream.
'Estimate'. That's programmer jargon. In plain English it translates as 'mostly useless'.
stream.read(data, extraData.length, available);
There's your error. You can't ignore the returned value of a read call. Read the javadoc: That read call will guarantee:
It reads at least 1 byte, unless the stream is closed / ended (then it reads nothing, and returns -1).
It will never read more than available.
But that is where it ends. It is perfectly legitimate for this method to only read half of available.
The ACTUAL number of bytes read is returned; if it read nothing (only possible if the stream is closed), it returns -1.
The reason it's so convoluted is to get the data to you as fast as possible. If a packet arrives on your network card with 6 bytes and you ask for 10, it'll give you 6.
Use DataInputStream's .readFully() if you want to read exactly X bytes (such as 10 bytes, which sounds useful in your protocol) and have the stream wait as long as needed (specifically, return only when either the stream ends or all 10 bytes are read).
For your protocol, I see two easy options:
Wrap the stream into a BufferedInputStream, and invoke only read(), the no-args one. That is a much simpler call: It returns -1 if stream ends, and a byte otherwise, easy peasy. It'll wait as long as needed until there's either data, or the stream is closed.
Alternatively, use .readFully() (wrap the stream in a DataInputStream). If you know that the data arrives in exact chunks of 11 every time, that'll work just as well. Although, calling a 'short' read (11 bytes is very short) on a non-buffered stream can be rather inefficient; it depends on the underlying stream.
Door #1 is less messy. It definitely does not suffer from inefficiency due to asking for too few bytes at a time, and it's hard to mess up your code.
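For illustration, a rough sketch of both options (handleByte is a placeholder for whatever your processing is):
// Option 1: BufferedInputStream + the no-args read(), one byte at a time.
InputStream in = new BufferedInputStream(client.getInputStream());
int b;
while ((b = in.read()) != -1) {
    handleByte((byte) b);   // b is 0..255; -1 means the stream has ended
}

// Option 2: readFully() via DataInputStream, if every frame is exactly 11 bytes
// (the 0x0b marker plus 10 data bytes).
DataInputStream din = new DataInputStream(new BufferedInputStream(client.getInputStream()));
byte[] frame = new byte[11];
din.readFully(frame);   // blocks until all 11 bytes arrive; throws EOFException if the stream ends first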
I'm porting some C# code to Java, and I think I've done it right (hopefully), but I'm still clueless about the behaviour, or about what I'm missing in the Java port.
Background: It's a server socket reading & sending packets (as a byte[]). I have no access to the client layer, but the structure is right on the server side (so the client can accept)
C# code uses NetworkStream to write/read.
Java code uses InputStream & OutputStream to write/read.
C# code: (the thread method is started via Thread.start)
public override void ReceivingThread(object o)
{
Client client = (Client)o;
PacketStream clientStream = client.PacketStream;
byte[] clientBuffer = new byte[4096];
packetHandler.SendFirstPacket(client);
while ((!this.Stopped) && (clientStream.Read(clientBuffer, 0, 8) != 0))
{
if (BitConverter.ToInt16(clientBuffer, 6) > 0)
{
clientStream.Read(clientBuffer, 8, BitConverter.ToInt16(clientBuffer, 6));
}
Packet packet = new Packet(clientBuffer);
Console.WriteLine($"RECV [{packet.PacketId:X4}] {BitConverter.ToString(packet.GetRawPacket(), 0, packet.DataLength + 8)}");
packetHandler.HandlePacket(client, packet);
}
Console.WriteLine("outside of loop");
}
Scenario: The server sends the first packet, the client reads it and sends credentials (in this case), and the server checks them. To make it easier for now, I send a packet with a code indicating that the credentials are invalid (code -4 as a short -> 4002). The client accepts it immediately and the thread is discarded (the "outside of loop" line is hit). The client doesn't hang, nor does the server.
I replicated, or better said, ported that code and functionality to Java.
Since in Java you cannot start a thread from an arbitrary method the way the C# code does, I'm starting the thread with the help of Java 8 lambdas: new Thread(() -> receivingThread(client)).start();
#Override
public void receivingThread(Object o) {
Client client = (Client) o;
PacketStream clientStream = client.getPacketStream();
byte[] clientBuffer = new byte[4096];
packetHandler.sendWelcomePacket(client);
while ((!this.stopped) && (clientStream.read(clientBuffer, 0, 8) != -1)) {
if (BitKit.bytesToShort(clientBuffer, 6) > -1)
clientStream.read(clientBuffer, 8, BitKit.bytesToShort(clientBuffer, 6));
Packet packet = new Packet(clientBuffer);
logger.info("RECV [" + String.format("0x%x", (int)packet.getPacketId()) + "] " + BitKit.toString(packet.getRawPacket(), 0, packet.getDataLength() + 8));
packetHandler.handlePacket(client, packet);
}
logger.info("outside of loop");
}
NetworkStream.Read returns 0, Java InputStream.read returns -1 for eof.
The thing is, in the Java port, it hangs after the second packet is sent, as if the read blocks in some abnormal way. It hangs for about a minute or so, and then the client throws a connection error instead of receiving the code I sent with the second write.
The streams are set up in the constructor of the Client class, which is passed down. The write and read operations are for raw data -> read(byte[] buffer, int offset, int size).
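To be clear about the framing I expect (8-byte header, with a 16-bit payload length at offset 6), here is a rough sketch of the reads I am trying to achieve; the way I obtain rawInputStream is illustrative, since mine is wrapped inside PacketStream:
// Sketch only: read the full 8-byte header, then the payload whose 16-bit
// length sits at offset 6, so a short read can never split a packet.
DataInputStream in = new DataInputStream(rawInputStream);
byte[] clientBuffer = new byte[4096];
in.readFully(clientBuffer, 0, 8);              // blocks until all 8 header bytes arrive
int dataLength = BitKit.bytesToShort(clientBuffer, 6);
if (dataLength > 0) {
    in.readFully(clientBuffer, 8, dataLength); // blocks until the whole payload arrives
}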
Any ideas about it?
Why does it work on the C# end and not in Java? Does C# work with packets differently than Java? Any help is appreciated, and thanks in advance!
P.S: Sorry for the long text :)
I use NIO with the reactor pattern to connect a server to a client. My code is as follows.
Server-side code, in the block of if (selectionKey.isWritable()) {}:
public void isWritable(SelectionKey selectionKey) throws Exception {
SocketChannel socketChannel =
(SocketChannel) selectionKey.channel();
Integer myInteger = (Integer) selectionKey.attachment();
if (myInteger == null){
int myJob = jobFacade.isAnyJob(socketChannel, 100 /*deadline*/);
if (myJob > 0){
ByteBuffer inputBuffer = ByteBuffer.wrap("available\n".getBytes("UTF-8"));
socketChannel.write(inputBuffer);
myInteger = myJob;
socketChannel.register(
selector, SelectionKey.OP_WRITE, myInteger);
}else if (myJob == -1){
ByteBuffer inputBuffer = ByteBuffer.wrap("unavailable\n".getBytes("UTF-8"));
socketChannel.write(inputBuffer);
socketChannel.close();
UnsupportedOperationException un = new UnsupportedOperationException();
throw un;
}else if (myJob == -2){
ByteBuffer inputBuffer = ByteBuffer.wrap("pending\n".getBytes("UTF-8"));
inputBuffer.flip();
socketChannel.write(inputBuffer);
myInteger = null;
socketChannel.register(
selector, SelectionKey.OP_WRITE, myInteger);
}
// is there any new job to do?
}else{
int myInt = myInteger.intValue();
if ( myInt > 0 ){
long startRange = jobFacade.findByID(myInt);
sendTextFile(startRange, Integer.parseInt(properties.getProperty("workUnit")),
properties.getProperty("textPath"), socketChannel);
myInteger = -3;
socketChannel.register(
selector, SelectionKey.OP_WRITE, myInteger);
}else if (myInt == -3){
sendAlgorithmFile(socketChannel, properties.getProperty("algorithmPath"));
myInteger = -4;
socketChannel.register(
selector, SelectionKey.OP_WRITE, myInteger);
// send algorithm file
}else if (myInt == -4){
int isOK = jobFacade.isAccepted(socketChannel.socket().getInetAddress().toString(),
Long.parseLong(properties.getProperty("deadline")));
if(isOK == -1){
ByteBuffer inputBuffer = ByteBuffer.wrap("notaccepted\n".getBytes("UTF-8"));
socketChannel.write(inputBuffer);
myInteger = null;
socketChannel.register(
selector, SelectionKey.OP_WRITE, myInteger);
}else {
ByteBuffer inputBuffer = ByteBuffer.wrap("accepted\n".getBytes("UTF-8"));
socketChannel.write(inputBuffer);
myInteger = isOK;
socketChannel.register(
selector, SelectionKey.OP_READ, myInteger);
}
// send "accepted" or "not accepted"
}
}
}
There is no need to know what my methods in each block do, except that they produce a number in this order: 1) myInteger = null, 2) myInteger > 0, 3) myInteger = -3, 4) myInteger = -4.
In this order, OP_WRITE will be registered four times in a row, and this part is important. So let's look at my client-side code, and then I will describe my problem:
BufferedReader inFromServer = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
sentence = inFromServer.readLine();
System.out.println("Response from Server : " + sentence);
if (sentence.equals("available")){
BufferedReader inFromServer1 = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
while ((sentence = inFromServer1.readLine()) != null) {
myJob = myJob + sentence ;
}
inFromServer = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
String acception = inFromServer.readLine();
if (acception.equals("accepted")){
File file = new File("account.json");
byte[] bytes = new byte[2048];
InputStream inputStream = new FileInputStream(file);
OutputStream outputStream = clientSocket.getOutputStream();
int count;
try {
while ((count = inputStream.read(bytes)) > 0){
outputStream.write(bytes, 0, count);
}
outputStream.close();
inputStream.close();
}catch (IOException io){}
continue;
}else if (acception.equals("notaccepted")){
continue;
}
Now, my problem is that when I run my server and then my client, the server runs without waiting for my client to read the input stream. First, the client gets "available", but by the time the second getInputStream is reached in the client, the server has already passed through all the OP_WRITE registration phases and is waiting for the client to read the streams of data (as I defined in my code).
Actually, my server does its job well. It passes all the stages in the required order. But the problem is that sending and receiving data is not synchronized.
I do not know what my problem is, but I guess that when I register OP_WRITE consecutively, my server has not yet sent all the bytes of data, so only the first getInputStream gets the data.
On the other hand, I need this order to run my program. So, are there any ideas?
I found my problem. There is no problem with my code: OP_WRITE can be registered at any time, in any order. The most important thing is to write to the buffer and read from the socket correctly.
Actually, when I sent something to my client for the second time, I did not clear the buffer. I found that case and corrected it.
But when I send some characters to my client and then want to send a file, the content of the file is consumed by the same loop, because on my client side I have a loop that reads all the characters.
The question here is: how can I keep them separate?
I will help you clarify the problem before thinking about patterns:
You have one thread/process that passes a message asking another thread/process to act upon the message.
The receiver needs to read the message and maybe start some child threads of its own to perform that work because it can receive other requests.
It would be nice to send the sender an acknowledgment that the request was received.
It seems necessary that the message passing is protected, because if another request comes in while you are reading, you could end up processing garbage.
You can configure NIO to have several readers and just one writer, read only one portion of a buffer, etc. Check the how-tos and API docs. It is plenty powerful.
exactly after sending a message
There is no such thing as a message in TCP. It is a byte stream. Two writes at the sender are very likely to be read by one read at the receiver. If you want messages you have to implement them yourself, with count words, terminators, STX/ETX, XML, etc.
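For example, a minimal sketch of the count-word (length-prefix) approach; the class name is only illustrative:
import java.io.*;

// Length-prefixed framing: each message is a 4-byte big-endian length
// followed by exactly that many payload bytes.
class LengthPrefixedFraming {
    static void writeMessage(DataOutputStream out, byte[] payload) throws IOException {
        out.writeInt(payload.length);   // the count word
        out.write(payload);
        out.flush();
    }
    static byte[] readMessage(DataInputStream in) throws IOException {
        int length = in.readInt();      // blocks until all 4 length bytes arrive
        byte[] payload = new byte[length];
        in.readFully(payload);          // blocks until the whole message arrives
        return payload;
    }
}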
I have a very strange situation. I connect my Java software to a device, let's call it a "Black Box" (because I cannot look into it or trace what happens inside it). I am addressing a specific port (5550) and send commands as byte sequences on a socket. As a result, I get an answer from the Black Box on the same socket.
Both my commands and the replies are prefixed in a pre-defined way (according to the API) and have an XOR checksum.
When I run the code from Windows, all is fine: Command 1 gets its Reply 1 and Command 2 gets its Reply 2.
When I run the code from Android (which is actually my target; Windows came into play to track down the error) it gets STRANGE: Command 1 gets its Reply 1, but Command 2 does not. When I play with Command 2 (change the prefix illegally, violate the checksum) the Black Box reacts as expected (with an error reply). But with the correct Command 2 being issued from Android, the reply is totally malformed: wrong prefix and missing checksum.
Trying to analyse the error, I used Wireshark, which shows that on the network interface the Black Box is sending the RIGHT Reply 2; but when I evaluate this reply in Java from the socket, it is wrong. How can this be when all is fine for Command/Reply 1?
Strange is, that parts of the expected data are present:
Expected: ff fe e4 04 00 11 00 f1
Received: fd fd fd 04 00 11 00 // byte 8 missing
I am attaching the minimal code that forces the problem. What could corrupt the bytes that I receive? Is there "raw" access in Java to the socket which could reveal the problem?
I am totally confused so any help would be appreciated:
String address = "192.168.1.10";
int port = 5550;
Socket socket;
OutputStream out;
BufferedReader in;
try {
socket = new Socket(address, port);
out = socket.getOutputStream();
in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
// This is "Command 1" which is receiving the right reply
// byte[] allesAn = new byte[] {(byte)0xff, (byte)0xfe, (byte)0x21, (byte)0x81, (byte)0xa0};
// out.write(allesAn);
// This is "Command 2" which will not receive a right reply
byte[] getLokInfo3 = new byte[] {(byte)0xff, (byte)0xfe, (byte)0xe3, (byte)0, (byte)0, (byte)3, (byte)0xe0};
out.write(getLokInfo3);
out.flush();
while (true) {
String received = "";
final int BufSize = 1000;
char[] buffer = new char[BufSize];
int charsRead = 0;
charsRead = in.read(buffer, 0, BufSize);
// Convert to hex presentation
for (int i=0; i < charsRead; i++) {
byte b = (byte)buffer[i];
received += hexByte((b + 256) % 256) + " ";
}
String result = charsRead + ">" + received + "<";
Log.e("X", "Read: " + result);
}
} catch (Exception e) {
Log.e("X", e.getMessage() + "");
}
with
private static String hexByte(int value) {
String s = Integer.toHexString(value);
return s.length() % 2 == 0 ? s : "0" + s;
}
For reference, Wireshark shows the expected 8 bytes on the wire.
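To clarify what I mean by "raw" access: I could read bytes straight from the socket's InputStream instead of going through the Reader; a sketch of that (reusing the socket and hexByte from above):
// Read raw bytes straight from the socket, bypassing any character decoding.
InputStream rawIn = socket.getInputStream();
byte[] buf = new byte[1000];
int bytesRead = rawIn.read(buf);   // -1 would mean the connection was closed
StringBuilder received = new StringBuilder();
for (int i = 0; i < bytesRead; i++) {
    received.append(hexByte(buf[i] & 0xff)).append(" ");
}
Log.e("X", "Raw read: " + bytesRead + ">" + received + "<");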
I have an SSL function on my C side that receives only the exact number of bytes sent. The problem is that I'm sending strings & JSON of different byte lengths from my servlet.
My Approach: I'm sending the length of each string first then the actual message.
//For Json String (349 bytes)
outputstreamwriter.write(jsonLength);
outputstreamwriter.write(jsonData);
outputstreamwriter.flush();
// For other request strings
final String path = request.getPathInfo();
String all_Users = "GET ALL USERS";
String user_length = Integer.toString(all_Users.length());
String all_Keys = "GET ALL KEYS";
String key_length = Integer.toString(all_Keys.length());
if (path == null || path.endsWith("/users"))
{ outputstreamwriter.write(user_length);
outputstreamwriter.write(all_Users);
}
else if (path.endsWith("/keys")) {
outputstreamwriter.write(key_length);
outputstreamwriter.write(all_Keys);
}
On the C side: I first read the incoming bytes for the incoming string length and then call the function again to receive that message. Now, my JSON length is 349 while the other requests are 12 or 13; as strings, these lengths are 3 and 2 bytes respectively.
debug_print("Clearing Buffer\n", NULL);
memset(inBuf, 0, 1024);
ssl_recv_exactly(rs_ssl, inBuf, 2, &ssllog))
lengthValue = atoi(inBuf);
printf("successfully received %d bytes of request:\n<%s>\n", lengthValue, inBuf);
}
if (lengthValue == 13)
{
// Call this function
else if (lengthValue == 12)
{
// call this function
}
Current solution: I'm adding a leading zero to my 2-byte length strings to make them 3 bytes. Is there any smarter way of doing this?
Going by what Serge and Drew suggested, I was able to do this using a binary method. Here's how I'm sending the length in binary format:
String user_length = String.format("%9s",
Integer.toBinaryString(all_Users.length())).replace(' ', '0');
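For comparison, a fixed-width decimal prefix would work the same way (a sketch reusing outputstreamwriter and all_Users from above; the width of 4 is just an example, and the C side would then always read exactly 4 bytes for the length before reading the payload):
// Zero-padded, fixed-width decimal length prefix.
String payload = all_Users;                               // e.g. "GET ALL USERS"
String prefix = String.format("%04d", payload.length());  // 13 -> "0013"
outputstreamwriter.write(prefix);
outputstreamwriter.write(payload);
outputstreamwriter.flush();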