When trying to write with Netty, the written data never ends up at the remote side; this is confirmed with Wireshark.
I have tried:
// Directly using writeAndFlush
channel.writeAndFlush(new Packet());

// Manually flushing
channel.write(new Packet());
channel.flush();

// Even sending bytes won't work:
channel.writeAndFlush(new byte[]{1, 2, 3});
No exception is caught when I wrap it in try{...}catch(Throwable e){e.printStackTrace();}
What can I do to debug this problem?
Netty is asynchronous, meaning that it won't throw an exception when a write fails. Instead of throwing exceptions, it returns a Future<?> that will be updated when the request is done. As a first debugging step, make sure to log any exception coming from this future:
channel.writeAndFlush(...).addListener(new GenericFutureListener<Future<Object>>() {
    @Override
    public void operationComplete(Future<Object> future) {
        // TODO: Use a proper logger in production here
        if (future.isSuccess()) {
            System.out.println("Data written successfully");
        } else {
            System.out.println("Data failed to write:");
            future.cause().printStackTrace();
        }
    }
});
Or more simply:
channel.writeAndFlush(...).addListener(ChannelFutureListener.FIRE_EXCEPTION_ON_FAILURE);
After you get the root cause of the exception, there could be multiple problems:
java.lang.UnsupportedOperationException: unsupported message type: <type> (expected: ...)
Notice: this is also thrown when using an ObjectEncoder while your object does not implement Serializable.
A default Netty channel can only send ByteBufs and FileRegions. You need to convert your objects to these types, either by adding more handlers to the pipeline or by converting them manually to ByteBufs.
A ByteBuf is the Netty variant of a byte array, with the potential for better performance because it can be stored in direct memory.
The following handlers are commonly used:
To convert a String use a StringEncoder
To convert a Serializable use an ObjectEncoder (warning: not compatible with normal Java object streams)
To convert a byte[] use a ByteArrayEncoder
Notice: Since TCP is a stream-based protocol, you usually want some form of packet size attached, since you may not receive the exact packets that you write. See Dealing with a Stream-based Transport in the Netty wiki for more information; a pipeline sketch combining these handlers follows below.
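For illustration only, here is a minimal sketch of a client pipeline that sends length-prefixed UTF-8 strings. The handler choice and the initializer class are assumptions, not taken from the question; if you send your own Packet type you would write your own encoder instead of StringEncoder.

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LengthFieldPrepender;
import io.netty.handler.codec.string.StringEncoder;
import io.netty.util.CharsetUtil;

public class ClientInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(
                new LengthFieldPrepender(4),           // prepend a 4-byte length before each message
                new StringEncoder(CharsetUtil.UTF_8)); // convert outgoing Strings to ByteBufs
    }
}

With a channel bootstrapped through an initializer like this, channel.writeAndFlush("hello").addListener(ChannelFutureListener.FIRE_EXCEPTION_ON_FAILURE); would actually put bytes on the wire.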
Related
I am trying to connect to a TCP/IP interface which sends many different packet types. Each packet differs in length and content. I simply want to process each packet type and generate POJOs which will be processed again by another state handler.
So far I am not sure if there is any structure in Netty which supports this kind of packet/frame processing. One solution I could think of was to create one inbound decoder handler which manipulates the pipeline depending on the first byte (which is the type field). Which structure or algorithm in Netty could help me realize such a simple switch-case problem?
thx,
Tom
If your connection is supposed to handle a stream of the same packet type as the first packet (i.e. the first packet determines the state of the connection), you could take a look at the port unification example.
If your connection is supposed to handle a stream of arbitrary packet types, you'd better write a decoder that understands all packet types and converts them into POJOs. Unless the number of packet types to handle is too large, it shouldn't be very difficult. Once a decoder decodes a packet, the last handler in your pipeline will look like the following:
public class MyPacketHandler extends SimpleChannelInboundHandler<Object> {
    @Override
    public void channelRead0(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof MsgA) {
            handleA(ctx, (MsgA) msg);
        } else if (msg instanceof MsgB) {
            handleB(ctx, (MsgB) msg);
        } ...
    }

    private void handleA(ChannelHandlerContext ctx, MsgA msg) {
        ...
    }
    ...
}
If you do not like the tedious if-else blocks, you could make use of a java.util.Map and Class.isAssignableFrom().
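For illustration only, here is a sketch of that map-based dispatch as it might live inside MyPacketHandler. The MessageHandler interface is hypothetical, and the usual java.util imports are assumed.

// Hypothetical lookup table keyed by message class; replaces the if-else chain above.
private final Map<Class<?>, MessageHandler<?>> handlers = new HashMap<>();

interface MessageHandler<T> {
    void handle(ChannelHandlerContext ctx, T msg);
}

@SuppressWarnings("unchecked")
private <T> void dispatch(ChannelHandlerContext ctx, T msg) {
    for (Map.Entry<Class<?>, MessageHandler<?>> entry : handlers.entrySet()) {
        if (entry.getKey().isAssignableFrom(msg.getClass())) {
            ((MessageHandler<T>) entry.getValue()).handle(ctx, msg);
            return;
        }
    }
    throw new IllegalStateException("No handler registered for " + msg.getClass());
}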
Check the port unification example, which does something similar:
https://github.com/netty/netty/blob/4.0/example/src/main/java/io/netty/example/portunification/PortUnificationServerHandler.java
From the Socket documentation:
shutdownInput
public void shutdownInput()
throws IOException
Places the input stream for this socket at "end of stream". Any data sent to the input stream side of the socket is acknowledged and then silently discarded.
If you read from a socket input stream after invoking shutdownInput() on the socket, the stream will return EOF.
In order to test interaction between clients in a server, I've written some client bots.
These bots generate somewhat random client requests. Since they only write to the server, they have no need for the input stream: they do not need to read the updates the server sends. This is the main body of code for the bots:
private void runWriteBot(PrintWriter out) throws IOException {
    //socket.shutdownInput();
    String request;
    System.out.println("Write bot ready.");
    while (!quit) {
        request = randomRequest();
        out.println(request);
        sleep();
    }
}
If I uncomment the shutdownInput, an exception is thrown in the server's client handler:
Connection reset
I wasn't expecting an exception to be thrown on the other side of the socket. The documentation suggests (to me, at least) that anything sent by the other side will just be silently discarded, causing no interference with the other end's activity, i.e. without having the other side throw an exception.
Can I just ignore what the server sends, or should I drain what comes to the input stream?
Is there any automagic way of doing it, or do I need to regularly read and ignore?
The behaviour when you call shutdownInput() is platform-dependent.
BSD Unix will silently discard any further input.
Linux will keep buffering the input, which will eventually block the sender, or cause it to get EAGAIN/EWOULDBLOCK if it is in non-blocking mode.
Windows will reset the connection if any further data arrives.
This is determined by the platform, not by Java.
I don't see any need for calling shutdownInput() in most situations. The only thing it is really useful for is unblocking a read. In your situation you are going to have to read the server responses.
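As a hedged illustration (the socket field name and the buffer size are assumptions), a background thread could read and discard whatever the server sends, so the bot's receive buffer never fills up and blocks the server:

// Hypothetical drain thread: read and ignore everything the server sends.
Thread drain = new Thread(() -> {
    byte[] scratch = new byte[4096];
    try {
        InputStream in = socket.getInputStream();
        while (in.read(scratch) != -1) {
            // discard the data; we only care that it is consumed
        }
    } catch (IOException ignored) {
        // connection dropped or socket closed elsewhere; nothing to do for a throwaway bot
    }
});
drain.setDaemon(true);
drain.start();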
I'm trying to write out to URLConnection#getOutputStream; however, no data is actually sent until I call URLConnection#getInputStream. Even if I set URLConnection#doInput to false, it still will not send. Does anyone know why this is? There's nothing in the API documentation that describes this.
Java API Documentation on URLConnection: http://download.oracle.com/javase/6/docs/api/java/net/URLConnection.html
Java's Tutorial on Reading from and Writing to a URLConnection: http://download.oracle.com/javase/tutorial/networking/urls/readingWriting.html
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.URL;
import java.net.URLConnection;

public class UrlConnectionTest {

    private static final String TEST_URL = "http://localhost:3000/test/hitme";

    public static void main(String[] args) throws IOException {
        URLConnection urlCon = null;
        URL url = null;
        OutputStreamWriter osw = null;
        try {
            url = new URL(TEST_URL);
            urlCon = url.openConnection();
            urlCon.setDoOutput(true);
            urlCon.setRequestProperty("Content-Type", "text/plain");

            ////////////////////////////////////////
            // SETTING THIS TO FALSE DOES NOTHING //
            ////////////////////////////////////////
            // urlCon.setDoInput(false);

            osw = new OutputStreamWriter(urlCon.getOutputStream());
            osw.write("HELLO WORLD");
            osw.flush();

            /////////////////////////////////////////////////
            // MUST CALL THIS OTHERWISE WILL NOT WRITE OUT //
            /////////////////////////////////////////////////
            urlCon.getInputStream();

            ////////////////////////////////////////////////////////////////////////////////////////////////////////
            // If getInputStream is called while doInput=false, the following exception is thrown:                 //
            // java.net.ProtocolException: Cannot read from URLConnection if doInput=false (call setDoInput(true)) //
            ////////////////////////////////////////////////////////////////////////////////////////////////////////
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (osw != null) {
                osw.close();
            }
        }
    }
}
The APIs for URLConnection and HttpURLConnection are (for better or worse) designed for the user to follow a very specific sequence of events:
Set Request Properties
(Optional) getOutputStream(), write to the stream, close the stream
getInputStream(), read from the stream, close the stream
If your request is a POST or PUT, you need the optional step #2.
To the best of my knowledge, the OutputStream is not like a socket; it is not directly connected to an InputStream on the server. Instead, after you close or flush the stream AND call getInputStream(), your output is built into a request and sent. The semantics are based on the assumption that you will want to read the response. Every example that I've seen shows this order of events. I would certainly agree with you and others that this API is counterintuitive compared to the normal stream I/O API.
The tutorial you link to states that "URLConnection is an HTTP-centric class". I interpret that to mean that the methods are designed around a Request-Response model, and make the assumption that is how they will be used.
For what it's worth, I found this bug report that explains the intended operation of the class better than the javadoc documentation. The evaluation of the report states "The only way to send out the request is by calling getInputStream."
Although the getInputStream() method can certainly cause a URLConnection object to initiate an HTTP request, it is not a requirement to do so.
Consider the actual workflow:
Build a request
Submit
Process the response
Step 1 includes the possibility of including data in the request, by way of an HTTP entity. It just so happens that the URLConnection class provides an OutputStream object as the mechanism for providing this data (and rightfully so, for many reasons that aren't particularly relevant here). Suffice it to say that the streaming nature of this mechanism gives the programmer a degree of flexibility when supplying the data, including the ability to close the output stream (and any input streams feeding it) before finishing the request.
In other words, step 1 allows for supplying a data entity for the request, then continuing to build it (such as by adding headers).
Step 2 is really a virtual step, and can be automated (like it is in the URLConnection class), since submitting a request is meaningless without a response (at least within the confines of the HTTP protocol).
Which brings us to Step 3. When processing an HTTP response, the response entity -- retrieved by calling getInputStream() -- is just one of the things we might be interested in. A response consists of a status, headers, and optionally an entity. The first time any one of these is requested, the URLConnection will perform virtual step 2 and submit the request.
Whether or not an entity is being sent via the connection's output stream, and whether or not a response entity is expected back, a program will ALWAYS want to know the result (as provided by the HTTP status code). Calling getResponseCode() on the URLConnection provides this status, and switching on the result may end the HTTP conversation without ever calling getInputStream().
So, if data is being submitted, and a response entity is not expected, don't do this:
// request is now built, so...
InputStream ignored = urlConnection.getInputStream();
... do this:
// request is now built, so...
int result = urlConnection.getResponseCode();
// act based on this result
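To make that concrete, here is a minimal sketch of a POST that submits data and checks only the status code. The URL and payload are placeholders taken from the question, and error handling is reduced to the essentials.

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PostAndCheckStatus {
    public static void main(String[] args) throws IOException {
        URL url = new URL("http://localhost:3000/test/hitme"); // placeholder URL from the question
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/plain");
        try (OutputStream out = conn.getOutputStream()) {
            out.write("HELLO WORLD".getBytes(StandardCharsets.UTF_8));
        }
        int result = conn.getResponseCode(); // the request is actually sent here
        System.out.println("Server answered with status " + result);
        conn.disconnect();
    }
}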
As my experiments have shown (Java 1.7.0_01), the code:
osw = new OutputStreamWriter(urlCon.getOutputStream());
osw.write("HELLO WORLD");
osw.flush();
doesn't send anything to the server. It just saves what's written to an in-memory buffer. Thus, if you're going to upload a large file via POST, you need to be sure that you have enough memory. On a desktop or server that may not be such a big problem, but on Android it may result in an out-of-memory error. Here's an example of how the stack trace looks when writing to the output stream after memory runs out.
Exception in thread "Thread-488" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOf(Arrays.java:2271)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
at sun.net.www.http.PosterOutputStream.write(PosterOutputStream.java:78)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282)
at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:135)
at java.io.OutputStreamWriter.write(OutputStreamWriter.java:220)
at java.io.Writer.write(Writer.java:157)
at maxela.tables.weboperations.POSTRequest.makePOST(POSTRequest.java:138)
At the bottom of the trace you can see the makePOST() method, which does the following:
writer = new OutputStreamWriter(conn.getOutputStream());
for (int j = 0; j < 3000 * 100; j++)
{
    writer.write("&var" + j + "=garbagegarbagegarbage_" + j);
}
writer.flush();
And writer.write() throws the exception.
My experiments have also shown that any exception related to the actual connection/IO with the server is thrown only after urlCon.getOutputStream() is called. Even urlCon.connect() seems to be a "dummy" method which doesn't make any physical connection.
However, if you call urlCon.getContentLengthLong(), which returns the Content-Length header field from the server's response headers, then URLConnection.getOutputStream() will be called automatically, and if there is an exception it will be thrown.
The exceptions thrown by urlCon.getOutputStream() are all IOExceptions, and I have met the following ones:
try
{
    urlCon.getOutputStream();
}
catch (UnknownServiceException ex)
{
    System.out.println("UnknownServiceException(): " + ex.getMessage());
}
catch (ConnectException ex)
{
    System.out.println("ConnectException()");
    Logger.getLogger(POSTRequest.class.getName()).log(Level.SEVERE, null, ex);
}
catch (IOException ex)
{
    System.out.println("IOException(): " + ex.getMessage());
    Logger.getLogger(POSTRequest.class.getName()).log(Level.SEVERE, null, ex);
}
Hopefully my little research helps people, as the URLConnection class is a bit counter-intuitive in some cases; when working with it, one needs to know what it is actually doing.
The second reason is that work with a server may fail for many reasons (connection, DNS, firewall, HTTP responses, the server not being able to accept a connection, the server not being able to process the request in time), so it is important to understand how the exceptions that are raised explain what is actually happening with the connection.
Calling getInputStream() signals that the client is finished sending its request and is ready to receive the response (per the HTTP spec). It seems that the URLConnection class has this notion built into it, and must be flush()ing the output stream when the input stream is asked for.
As the other responder noted, you should be able to call flush() yourself to trigger the write.
The fundamental reason is that it has to compute a Content-length header automatically (unless you are using chunked or streaming mode). It can't do that until it has seen all the output, and it has to send it before the output, so it has to buffer the output. And it needs a decisive event to know when the last output has actually been written. So it uses getInputStream() for that. At that time it writes the headers including the content-length, then the output, then it starts reading the input.
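As an aside, the chunked and fixed-length streaming modes mentioned above are the usual way to avoid that buffering (and the OutOfMemoryError shown earlier). A minimal sketch, assuming an HttpURLConnection and the usual java.io/java.net imports; the URL is a placeholder:

// Hypothetical helper: stream the request body instead of letting URLConnection buffer it all in memory.
static int streamPost(byte[] payload) throws IOException {
    HttpURLConnection conn =
            (HttpURLConnection) new URL("http://localhost:3000/upload").openConnection(); // placeholder URL
    conn.setDoOutput(true);
    conn.setRequestMethod("POST");
    // Either declare the exact length up front...
    // conn.setFixedLengthStreamingMode(payload.length);
    // ...or send in 8 KB chunks when the length is not known in advance:
    conn.setChunkedStreamingMode(8192);
    try (OutputStream out = conn.getOutputStream()) {
        out.write(payload);
    }
    return conn.getResponseCode(); // still needed to complete the HTTP exchange
}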
(Repost from your first question. Shameless self-plug)
Don't fiddle around with URLConnection yourself, let Resty handle it.
Here's the code you would need to write (I assume you are getting text back):
import static us.monoid.web.Resty.*;
import us.monoid.web.Resty;
...
new Resty().text(TEST_URL, content("HELLO WORLD")).toString();
How can I detect that the client side of a tomcat servlet request has disconnected? I've read that I should do a response.getOutputStream().print(), then a response.getOutputStream().flush() and catch an IOException, but is there a way I can detect this without writing any data?
EDIT:
The servlet sends out a data stream that doesn't necessarily end, but doesn't necessarily have any data flowing through it (it's a stream of real time events). I need to actually detect when the client disconnects because I have some cleanup I have to do at that point (resources to release, etcetera). If I have the HttpServletRequest available, will trying to read from that throw an IOException if the client disconnects?
is there a way I can detect this without writing any data?
No, because there isn't a way in TCP/IP to detect it without writing any data.
Don't worry about it. Just complete the request actions and write the response. If the client has disappeared, that will cause an IOException: connection reset, which will be thrown into the servlet container. There is nothing you have to do about that.
I need to actually detect when the client disconnects because I have some cleanup I have to do at that point (resources to release, etcetera).
That is what the finally block is for. It will be executed regardless of the outcome. E.g.:
OutputStream output = null;
try {
    output = response.getOutputStream();
    // ...
    output.flush();
    // ...
} finally {
    // Do your cleanup here.
}
If I have the HttpServletRequest available, will trying to read from that throw an IOException if the client disconnects?
Depends on how you're reading from it and how much of the request body is already in server memory. In the case of normal form-encoded requests, if you call getParameter() beforehand, the body will usually be fully parsed and stored in server memory, and calling getInputStream() won't be useful at all. Better to do it on the response instead.
Have you tried flushing the buffer of the response?
response.flushBuffer();
It seems to throw an IOException when the client has disconnected.
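To make that concrete for the event-streaming case, here is a sketch only: the loop, the nextEvent() call, and the releaseResources() cleanup are hypothetical, but the idea is that the IOException from flushBuffer() becomes the disconnect signal and cleanup lands in finally.

// Hypothetical event-streaming servlet method.
@Override
protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
    resp.setContentType("text/plain");
    try {
        while (true) {
            String event = nextEvent();  // hypothetical blocking call for the next real-time event
            if (event != null) {
                resp.getOutputStream().print(event);
            }
            resp.flushBuffer();          // throws IOException once the client is gone
            Thread.sleep(1000);          // heartbeat interval; an assumption, not a requirement
        }
    } catch (IOException clientGone) {
        // the client disconnected
    } catch (InterruptedException stopped) {
        Thread.currentThread().interrupt();
    } finally {
        releaseResources();              // hypothetical cleanup of the resources mentioned above
    }
}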
In Java, how would you set up a socket listener that listens to a socket for a series of bytes representing a command and, on receiving one, calls a method which parses the incoming data and invokes the appropriate command?
Clarification:
My issue is not with handling the commands (Which might also be error codes or responses to commands from the server) but with creating the socket and listening to it.
More Clarification:
What I want to do is mimic the following line of .Net (C#) code:
_stream.BeginRead(_data, 0, _data.Length, new AsyncCallback(this.StreamEventHandler), _stream);
Where:
_stream is a network stream created from a socket
_data is an array of Byte of length 9
this.StreamEventHandler is a delegate (function pointer) which gets executed when data is read.
I am rewriting a library from C# into Java, and the component I am currently writing passes commands to a server over TCP/IP but also has to be able to bubble up events/responses to the layer above it.
In C# this seems to be trivial and it's looking less and less so in Java.
Starting from my other answer: the specific part you request is the one that goes into the "Magic goes here" section. It can be done in oh so many ways, but one is:
final InputStream in = socket.getInputStream();
// This creates a new thread to service the request.
new Thread(new Runnable() {
    public void run() {
        try {
            byte[] retrievedData = new byte[ITEM_LENGTH];
            // Note: read() may return fewer than ITEM_LENGTH bytes; use
            // DataInputStream.readFully() if you need the whole buffer filled.
            in.read(retrievedData, 0, ITEM_LENGTH);
            in.close();
            // Here call your delegate or something to process the data
            callSomethingWithTheData(retrievedData);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}).start();
Have a small main method which sets up the socket and listens for incoming connections. Pass each connection to a worker object (possibly in its own thread).
The worker object should have two APIs: The server and the client. The client API gets a connection and reads data from it, the server API takes a connection and writes data to it.
I like to keep these two in a single class because that makes it much more simple to keep the two in sync. Use a helper class to encode/decode the data for transmission, so you have single point to decide how to transmit integers, commands, options, etc.
If you want to go further, define a command class and write code to serialize that to a socket connection and read it from it. This way, you worker objects just need to declare which command class they handle and the server/client API gets even more simple (at the expense of the command class).
I would
put each command into a class of its own, where each class implements a specific interface (e.g. Command)
create a Map<String,Command> which contains a lookup table from each command string to an instance of the class that implements that command (a sketch of this follows below)
This should help.
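A minimal sketch of that lookup-table approach; the Command interface, the command names, and the actions are placeholders for illustration only.

import java.util.HashMap;
import java.util.Map;

// Hypothetical command registry: each command string received on the wire maps to a Command object.
interface Command {
    void execute(String[] args);
}

class CommandDispatcher {
    private final Map<String, Command> commands = new HashMap<>();

    CommandDispatcher() {
        // Java 8 lambdas; the command names and actions are placeholders
        commands.put("LOGIN", args -> System.out.println("login as " + args[0]));
        commands.put("PING", args -> System.out.println("pong"));
    }

    void dispatch(String name, String[] args) {
        Command cmd = commands.get(name);
        if (cmd == null) {
            throw new IllegalArgumentException("Unknown command: " + name);
        }
        cmd.execute(args);
    }
}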
Lesson 1: Socket Communications
The TCP connection provides you with one InputStream and one OutputStream. You could just poll the InputStream continuously for the next command (and its inputs) on a dedicated thread. ByteBuffer.wrap(byte[] array) may be useful in interpreting the bytes as chars, ints, longs, etc. You could also pass objects around using serialization.
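For example, here is a sketch of interpreting the 9-byte frame mentioned in the question as one command byte followed by an 8-byte long argument; it assumes a connected Socket named 'socket' and the usual java.io/java.nio imports.

// Hypothetical 9-byte frame: 1 command byte + an 8-byte long argument.
byte[] frame = new byte[9];
new DataInputStream(socket.getInputStream()).readFully(frame);

ByteBuffer buf = ByteBuffer.wrap(frame);
byte command = buf.get();       // first byte: the command code
long argument = buf.getLong();  // remaining 8 bytes: the argument
System.out.println("command=" + command + " argument=" + argument);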
Any naive approach most likely will not scale well.
Consider using a REST approach with a suitable small web server. Jetty is usually a good choice.
To create and listen to a socket, in a very naive way:
mServerSocket = new ServerSocket(port);
listening = true;
while (listening) {
    // This call blocks until a connection is made
    Socket socket = mServerSocket.accept();
    OutputStream out = socket.getOutputStream();
    InputStream in = socket.getInputStream();
    // Here you do your magic, reading and writing what you need from the streams
    // You would set listening to false if you have some command to close the server
    // remotely
    out.close();
    in.close();
    socket.close();
}
Normally it is a good idea to delegate the processing of the input stream to some other thread, so you can answer the next request. Otherwise, you will answer all requests serially.
You also need to define some kind of protocol of what bytes you expect on the input and output streams, but from your question it looks like you already have one.
You could create an enum with one member per command
interface Command {
    // whatever you expect all commands to know to perform their function
    void perform(Context context);
}

enum Commands implements Command {
    ACTIONONE() {
        @Override
        public void perform(Context context) {
            System.out.println("Action One");
        }
    },
    ACTIONTWO() {
        @Override
        public void perform(Context context) {
            System.out.println("Action Two");
        }
    }
}

// initialise
DataInputStream in = new DataInputStream(socket.getInputStream());

// in a loop
byte[] retrievedData = new byte[ITEM_LENGTH];
in.readFully(retrievedData);
// ISO-8859-1 reproduces the behaviour of the deprecated String(byte[], int) constructor
String command = new String(retrievedData, StandardCharsets.ISO_8859_1);
Commands.valueOf(command).perform(context);