Getting HTTP POST to work when making my own Java server

I'm trying to make a Java server that can accept GET and POST HTTP requests. I've managed to get the GET method to work, but not the POST method. My server reads the request header but doesn't seem to read the body of the message, i.e. what was posted. Here is the code:
int port = 1991;
ServerSocket serverSocket = new ServerSocket(port);
System.err.println("The Server is on and listening on port " + port);
System.out.println(" ");

while (true)
{
    Socket ClientSocketConnection = serverSocket.accept();
    System.err.println("We have established a connection with a client!");
    System.out.println(" ");

    BufferedReader ServerInput = new BufferedReader(new InputStreamReader(ClientSocketConnection.getInputStream()));
    DataOutputStream ServerOutput = new DataOutputStream(ClientSocketConnection.getOutputStream());

    String StringInput;
    int iCount = 0;
    int CountNull = 0;
    while ((StringInput = ServerInput.readLine()) != null)
    {
        System.out.println(StringInput);
    }
Now I simply display everything that is sent through the socket, but for some reason I just don't get the request's message body, and I know the body is sent because Chrome's developer tools show the Form Data for the request.
I'm not sure how to get that "Form Data". Any help would really be appreciated!
UPDATE:
Here is the problem, further narrowed down. The form sends the HTTP request fine. With an HTTP POST we have the request headers, a \r\n (empty line), and then the message data. The problem is that when my BufferedReader variable ServerInput reads in the \r\n (empty line), it stops reading from ServerInput. Any way to fix this?

You need to read about the HTTP protocol. You could take a look at the HttpServlet API for that.
The whole purpose of servlets is to bridge from a socket to the HTTP protocol. Are you sure you want to do that job again?
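For illustration, a minimal sketch of what that looks like (class name and form-field name are made up; the servlet container does the socket handling and HTTP parsing for you):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class FormEchoServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // The container has already read the request line, headers and body;
        // form fields arrive decoded and ready to use.
        String value = req.getParameter("someField");   // "someField" is a hypothetical form field
        resp.setContentType("text/plain");
        resp.getWriter().println("You posted: " + value);
    }
}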

I'd highly recommend you take a look at Jetty. It's an embeddable HTTP server which will abstract all of this away for you.
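If you go that route, a rough sketch of an embedded server (written against the Jetty 9-style API, so details vary between versions; the handler body is just a placeholder):

import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;

public class EmbeddedJettySketch {
    public static void main(String[] args) throws Exception {
        Server server = new Server(1991);                 // same port as in the question
        server.setHandler(new AbstractHandler() {
            @Override
            public void handle(String target, Request baseRequest,
                               HttpServletRequest request, HttpServletResponse response)
                    throws IOException {
                // Jetty has already parsed the request line, headers and body.
                response.setContentType("text/plain");
                response.getWriter().println(request.getMethod() + " " + target);
                baseRequest.setHandled(true);
            }
        });
        server.start();
        server.join();
    }
}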

As mentioned in the post, the problem I had was that the server I made read the HTTP request header but somehow never managed to read the POST information being sent to the server by Google Chrome.
Now an HTTP POST request has the following structure:
request header
\r\n
post information
The reason I was not able to read the POST information was the .readLine() function! Once the function reads in the \r\n it treats that as the end of the message and stops reading the POST information being sent, hence the error. To fix this problem I had to use the .read() function instead of .readLine(). The .read() function reads in every character from the HTTP request, which includes the POST information.
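For anyone hitting the same thing, here is a rough sketch of that idea (reusing the ServerInput reader from the question, assuming a simple non-chunked request, and treating Content-Length bytes as one character each, which only holds for single-byte text; a production server would read the body as raw bytes):

// Read the request line and headers with readLine(), remembering Content-Length.
int contentLength = 0;
String headerLine;
while ((headerLine = ServerInput.readLine()) != null && !headerLine.isEmpty()) {
    System.out.println(headerLine);
    if (headerLine.toLowerCase().startsWith("content-length:")) {
        contentLength = Integer.parseInt(headerLine.substring("content-length:".length()).trim());
    }
}

// Then read exactly that many characters of body with read() instead of readLine().
char[] body = new char[contentLength];
int read = 0;
while (read < contentLength) {
    int n = ServerInput.read(body, read, contentLength - read);
    if (n == -1) break;   // client closed the connection early
    read += n;
}
System.out.println(new String(body, 0, read));   // the posted form data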

How to send file to web browser using java sockets?

I'm trying to build an HTTP server in Java out of curiosity.
I know that HTTP uses sockets underneath (correct me if I'm wrong), so I started programming with the ServerSocket class.
import java.io.IOException;
import java.io.PrintStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Scanner;

public class Server
{
    public static void main(String[] args) throws IOException
    {
        System.out.println("Listening.....");
        ServerSocket ss = new ServerSocket(80);
        while (true)
        {
            Socket s = ss.accept();
            Scanner sc = new Scanner(s.getInputStream());
            // Read the request line and headers until the blank line that ends them.
            while (sc.hasNextLine())
            {
                String line = sc.nextLine();
                if (line.equals(""))
                    break;
                else
                    System.out.println(line);
            }
            System.out.println("-------------------------------");
            PrintStream ps = new PrintStream(s.getOutputStream());
            ps.println("Hello from Server");
            s.close();
            ps.close();
            sc.close();
        }
    }
}
(I'm using Thread in my actual code to serve multiple users. I've just provided the basic code.)
I'm getting all the headers from the web browser. But how can I send files and images?
For simple HTML I can read the file and use PrintStream to print it to the web browser.
But how can I send JavaScript, images, etc. to the browser?
HTTP is a protocol, and you need to follow it. The HTTP 1.1 spec still in wide use is RFC 2616 (though it has officially been replaced by the newer RFCs 7230, 7231, 7232, 7233, 7234, and 7235).
In my answer to another question, I show the correct way to read an inbound HTTP request from a Java Socket directly.
When sending a reply back, you can use a PrintStream or PrintWriter to send the response HTTP headers. However, the body content is sent as raw bytes, based on the format specified by the Content-Type and Transfer-Encoding response headers. Typically, you would just send the raw bytes directly to the socket's OutputStream, or at least to a BufferedOutputStream attached to it. If you are sending a pre-existing file from disk, regardless of its type, you could just open an InputStream for the file and then copy its data directly to the socket's OutputStream. If you are generating data dynamically, then you would send the data to the socket's OutputStream using whatever intermediate classes are appropriate. Print... classes are only appropriate for textual data, not binary data, like images.
That being said, Java has its own HttpServer and HttpsServer classes. You should consider using them.
Basically the same way. You should "print" the raw bytes to the socket's OutputStream.
However, for the browser to be able to understand it, you need to shape your response according to the HTTP/1.1 protocol. Specifying a Content-Type header will tell the browser what it is receiving from you. Specifying a Content-Length header will tell the browser how many bytes it is receiving from you. Etc.
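To make that concrete, here is a rough sketch of serving a static file over the raw socket (method name, file and content type are placeholders; the usual java.io and java.net imports are assumed):

// Send the response headers as ASCII text, then the file bytes untouched.
static void sendFile(Socket socket, File file, String contentType) throws IOException {
    OutputStream out = socket.getOutputStream();
    PrintStream header = new PrintStream(out, false, "US-ASCII");
    header.print("HTTP/1.1 200 OK\r\n");
    header.print("Content-Type: " + contentType + "\r\n");   // e.g. text/html, text/javascript, image/png
    header.print("Content-Length: " + file.length() + "\r\n");
    header.print("Connection: close\r\n");
    header.print("\r\n");
    header.flush();
    // Copy the file bytes unchanged; this works for HTML, JavaScript and images alike.
    try (FileInputStream fis = new FileInputStream(file)) {
        byte[] buf = new byte[8192];
        int n;
        while ((n = fis.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }
    out.flush();
    socket.close();
}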

Error 503 in HTTP during page parsing in java

Today I'm developing a Java RMI server (and also the client) that gets info from a page and returns what I want. I put the code right down here. The problem is that sometimes the URL I pass to the method throws an IOException saying that the given URL produced a 503 HTTP error. It would be easy if it happened every time, but the thing is that it only appears sometimes.
I have this method structure because the page I parse is from a weather company and I want info for many cities, not just one; some cities work perfectly on the first try and others fail. Any suggestions?
public ArrayList<Medidas> parse(String url){
    medidas = new ArrayList<Medidas>();
    int v = 0;
    String sourceLine;
    String content = "";
    try{
        // The URL address of the page to open.
        URL address = new URL(url);
        // Open the address and create a BufferedReader with the source code.
        InputStreamReader pageInput = new InputStreamReader(address.openStream());
        BufferedReader source = new BufferedReader(pageInput);
        // Append each new HTML line into one string. Add a tab character.
        while ((sourceLine = source.readLine()) != null){
            if (sourceLine.contains("<tbody>")) v = 1;
            else if (sourceLine.contains("</tbody>"))
                break;
            else if (v == 1)
                content += sourceLine + "\n";
        }
        ........................
        ........................ NOW THE PARSING CODE, NOT IMPORTANT
}
HTTP 5xx errors (and 503 is one of them) reflect server errors, so this likely has nothing to do with your client code.
You would get a 400 error if you were passing invalid parameters on your request.
503 is "Service Unavailable" and may be sent by the server when it is overloaded and cannot process your request. From a publicly accessible server, that could explain the erratic behavior.
Edit
Build a retry handler into your code for when you detect a 503. Apache HttpClient can do that automatically for you.
List of HTTP Status Codes
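For example, a hand-rolled retry (a rough sketch; the attempt count and back-off delay are arbitrary, a real client would also honour any Retry-After header, and the usual java.io/java.net imports are assumed):

// Retry a GET a few times when the server answers 503 Service Unavailable.
static String fetchWithRetry(String url) throws IOException, InterruptedException {
    for (int attempt = 1; attempt <= 3; attempt++) {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        int status = conn.getResponseCode();
        if (status == HttpURLConnection.HTTP_UNAVAILABLE) {    // 503
            conn.disconnect();
            Thread.sleep(2000L * attempt);                     // back off, then try again
            continue;
        }
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                sb.append(line).append('\n');
            }
            return sb.toString();
        }
    }
    throw new IOException("Still getting 503 after 3 attempts: " + url);
}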
Check that the IOException is really not a MalformedURLException. Try printing out the URLs to verify a bad URL is not causing the IOException.
How large is the file you are parsing? Perhaps your JVM is running out of memory.

My socket client only seems to receive the first response from an HTTP server. Why?

I am trying to develop a class that lets me run a socket on a thread, send data through it at any time, and be notified when data arrives. It should not make assumptions such as only receiving a message after it has sent one first, etc.
For some reason, the following code prints the response for only the first request:
public static void main(String[] args) throws IOException {
    TCPClient client = new TCPClient(new TCPClientObserver());
    client.connect("www.microsoft.com", 80);
    sleep(1000);
    client.send("HTTP GET");
    sleep(5000);
    client.send("XYZ");
}
It prints:
echo: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
echo: <HTML><HEAD><TITLE>Bad Request</TITLE>
echo: <META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
echo: <BODY><h2>Bad Request - Invalid URL</h2>
echo: <hr><p>HTTP Error 400. The request URL is invalid.</p>
echo: </BODY></HTML>
Here is the core logic of the socket:
echoSocket = new Socket("www.microsoft.com", 80);
out = new PrintWriter(echoSocket.getOutputStream(), true);
in = new BufferedReader(new InputStreamReader(echoSocket.getInputStream()));

while (true) {
    String response = in.readLine();
    if (response != null)
        System.out.println("echo: " + response);
}
I guess the problem lies in my loop?
The full test code of my app can be seen here:
http://codepad.org/bmHwct35
Thanks
The problem is not the loop, but rather the first request you send. "HTTP GET" is an invalid request, so the server responds with "400 Bad Request" and then closes the connection. That's why you don't get a response to your second message. Try a valid HTTP request instead, e.g. "GET / HTTP/1.1\r\nHost: www.microsoft.com\r\n\r\n". HTTP 1.1 connections are keep-alive by default, so you'll be able to send several requests and receive the corresponding responses.
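For example, written straight to a plain socket (independent of the TCPClient wrapper in the question; assumes the usual java.net/java.io/java.nio.charset imports):

// One well-formed HTTP/1.1 request; the blank line after the headers is required.
Socket socket = new Socket("www.microsoft.com", 80);
OutputStream out = socket.getOutputStream();
out.write("GET / HTTP/1.1\r\nHost: www.microsoft.com\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
out.flush();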
Assuming the HTTP 400 is what you expected, you need to break out of your loop if readLine() returns null. This is usually written like this:
while ((line = in.readLine()) != null)
{
    // ...
}
The first request you send is invalid, and after replying the server will close the connection. Your code is then stuck in the while (true) loop, because readLine keeps returning null once the connection/stream is closed (end of stream reached).
while (true) { // <-- never terminates
    String response = in.readLine();
    if (response != null) // <- now null all the time
        System.out.println("echo: " + response);
}
The problem is that HTTP servers talk the HTTP protocol, and your client code is not talking the protocol properly. Instead it is opening a plain socket and writing random stuff that seems to be based on guesswork.
The HTTP protocol is specified in this document. If you read it, you will see that what you are doing is nothing like what a client is supposed to do.
I recommend that you don't attempt to implement your client this way. Implementing the client side of the HTTP protocol from scratch is too much work ... and the world does not need yet another flakey HTTP client. Either use the standard URL / HttpURLConnection APIs, or use a 3rd-party HTTP client library.
In your case, the 400 response is most likely a consequence of sending a malformed request. An HTTP GET request (indeed any HTTP request) is supposed to include the target URL in the first line. Read the spec.
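For comparison, the same fetch through the standard library (a rough sketch; HttpURLConnection is available if you also need status codes and headers, and the usual java.io/java.net imports are assumed):

// Let java.net.URL speak HTTP for you instead of hand-writing the request.
URL url = new URL("http://www.microsoft.com/");
try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
    String line;
    while ((line = in.readLine()) != null) {
        System.out.println(line);
    }
}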

Problem in converting InputStream to String

I have created a Java program which acts as a REST web server. It gets the HTTP request and sends a response. I receive the HTTP request as an InputStream inside my server. I want to convert this InputStream to a String and then parse the String according to some predefined pattern. The problem is that when I get the InputStream and try to convert it to a String, the operation does not finish until a new request comes in or the original request is terminated. Only if one of those two events happens is the InputStream successfully converted to a String; otherwise it just hangs there. Am I missing anything? Any suggestions would be very helpful.
ServerSocket service = new ServerSocket(Integer.parseInt(argv[0]));
Socket connection = service.accept();
InputStream is = connection.getInputStream();
String ss = IOUtils.toString(is);
System.out.println("PRINT : "+ss);
Now ss is only printed when the old request is terminated or a new request is accepted on the socket. I want to convert it to a String within the same request.
Please suggest what I am doing wrong.
Thanks,
Tara Singh
You should read the request in steps. First read the headers, line by line. Then, if it's a POST request, there will be a request body. In that case you will already have read a Content-Length header, which says how long the body is in bytes. Read exactly that number of bytes from the input stream.
Most of that is already handled for you if you write this app as a servlet, or, if that's not possible, if you use an HTTP server library.
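A rough byte-level sketch of that approach, using the variable names from the question (chunked requests and other corner cases are ignored, and exception handling is omitted):

InputStream is = connection.getInputStream();

// 1. Read the header block up to the blank line (\r\n\r\n) that ends it.
ByteArrayOutputStream headerBytes = new ByteArrayOutputStream();
final byte[] SEP = {'\r', '\n', '\r', '\n'};
int b, matched = 0;
while (matched < 4 && (b = is.read()) != -1) {
    headerBytes.write(b);
    if (b == SEP[matched]) {
        matched++;
    } else {
        matched = (b == '\r') ? 1 : 0;   // a stray '\r' may start a new separator
    }
}
String headers = headerBytes.toString("US-ASCII");

// 2. Find the Content-Length header.
int contentLength = 0;
for (String line : headers.split("\r\n")) {
    if (line.toLowerCase().startsWith("content-length:")) {
        contentLength = Integer.parseInt(line.substring("content-length:".length()).trim());
    }
}

// 3. Read exactly that many body bytes; no need to wait for the connection to close.
byte[] body = new byte[contentLength];
int off = 0;
while (off < contentLength) {
    int n = is.read(body, off, contentLength - off);
    if (n == -1) break;
    off += n;
}
String ss = headers + new String(body, 0, off, "UTF-8");
System.out.println("PRINT : " + ss);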
What you are doing wrong is that you want to convert a stream to a string, an operation that is only possible once the stream is finished. That is why you get your string when the connection is terminated. How else is the toString method supposed to know that no more data is coming and it should begin the conversion? What if it spat out a string and more data then arrived on the stream? I guess you wouldn't be happy with that either :)
In short: you must somehow know when you are finished receiving before converting to a string. Redesign your app.

HTTP 1.1 Persistent Connections using Sockets in Java

Let's say I have a Java program that makes an HTTP request to a server using HTTP 1.1 and doesn't close the connection. I make one request and read all the data returned from the input stream I have bound to the socket. However, upon making a second request, I get no response from the server (or there's a problem with the stream - it doesn't provide any more input). If I make the requests in order (request, request, read) it works fine, but (request, read, request, read) doesn't.
Could someone shed some insight onto why this might be happening? (Code snippets follow.) No matter what I do, the second read loop's isr_reader.read() only ever returns -1.
try {
    connection = new Socket("SomeServer", port);
    con_out = connection.getOutputStream();
    con_in = connection.getInputStream();

    PrintWriter out_writer = new PrintWriter(con_out, false);
    out_writer.print("GET http://somesite HTTP/1.1\r\n");
    out_writer.print("Host: thehost\r\n");
    //out_writer.print("Content-Length: 0\r\n");
    out_writer.print("\r\n");
    out_writer.flush();

    // If we were not interpreting this data as a character stream, we might need to adjust byte ordering here.
    InputStreamReader isr_reader = new InputStreamReader(con_in);
    char[] streamBuf = new char[8192];
    int amountRead;
    StringBuilder receivedData = new StringBuilder();
    while ((amountRead = isr_reader.read(streamBuf)) > 0) {
        receivedData.append(streamBuf, 0, amountRead);
    }
    // Response is processed here.

    if (connection != null && !connection.isClosed()) {
        //System.out.println("Connection Still Open...");
        out_writer.print("GET http://someSite2\r\n");
        out_writer.print("Host: somehost\r\n");
        out_writer.print("Connection: close\r\n");
        out_writer.print("\r\n");
        out_writer.flush();

        streamBuf = new char[8192];
        amountRead = 0;
        receivedData.setLength(0);
        while ((amountRead = isr_reader.read(streamBuf)) > 0 || amountRead < 1) {
            if (amountRead > 0)
                receivedData.append(streamBuf, 0, amountRead);
        }
    }
    // Process response here
}
Responses to questions:
Yes, I'm receiving chunked responses from the server.
I'm using raw sockets because of an outside restriction.
Apologies for the mess of code - I was rewriting it from memory and seem to have introduced a few bugs.
So the consensus is I have to either do (request, request, read) and let the server close the stream once I hit the end, or, if I do (request, read, request, read) stop before I hit the end of the stream so that the stream isn't closed.
According to your code, the only time you'll even reach the statements dealing with sending the second request is when the server closes the output stream (your input stream) after receiving/responding to the first request.
The reason for that is that your code that is supposed to read only the first response
while ((amountRead = isr_reader.read(streamBuf)) > 0) {
    receivedData.append(streamBuf, 0, amountRead);
}
will block until the server closes the output stream (i.e., when read returns -1) or until the read timeout on the socket elapses. In the case of the read timeout, an exception will be thrown and you won't even get to sending the second request.
The problem with HTTP responses is that they don't tell you how many bytes to read from the stream until the end of the response. This is not a big deal for HTTP 1.0 responses, because the server simply closes the connection after the response thus enabling you to obtain the response (status line + headers + body) by simply reading everything until the end of the stream.
With HTTP 1.1 persistent connections you can no longer simply read everything until the end of the stream. You first need to read the status line and the headers, line by line, and then, based on the status code and the headers (such as Content-Length) decide how many bytes to read to obtain the response body (if it's present at all). If you do the above properly, your read operations will complete before the connection is closed or a timeout happens, and you will have read exactly the response the server sent. This will enable you to send the next request and then read the second response in exactly the same manner as the first one.
P.S. Request, request, read might be "working" in the sense that your server supports request pipelining and thus receives and processes both requests, and you, as a result, read both responses into one buffer as your "first" response.
P.P.S. Make sure your PrintWriter is using the US-ASCII encoding. Otherwise, depending on your system encoding, the request line and headers of your HTTP requests might be malformed (wrong encoding).
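For the P.P.S., one way to pin the writer to ASCII, using the variable names from the question (java.io.OutputStreamWriter and java.nio.charset.StandardCharsets are assumed to be imported):

// Build the PrintWriter on an OutputStreamWriter with an explicit charset,
// so the request line and headers always go out as US-ASCII bytes.
PrintWriter out_writer = new PrintWriter(
        new OutputStreamWriter(con_out, StandardCharsets.US_ASCII), false);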
Writing a simple HTTP/1.1 client that respects the RFC is not such a difficult task.
To solve the problem of blocking I/O when reading a socket in Java, you must use the java.nio classes.
SocketChannels give you the possibility of performing non-blocking I/O.
This is necessary to send HTTP requests on a persistent connection.
Furthermore, the nio classes will give better performance.
My stress tests give the following results:
HTTP/1.0 (java.io) -> HTTP/1.0 (java.nio) = +20% faster
HTTP/1.0 (java.io) -> HTTP/1.1 (java.nio with persistent connection) = +110% faster
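For reference, a minimal non-blocking sketch of that setup (not a full HTTP client: the host is a placeholder, partial writes are not handled, and the response is just dumped to the console):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class NioHttpSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        SocketChannel channel = SocketChannel.open();
        channel.configureBlocking(false);                 // reads and writes never block the thread
        channel.connect(new InetSocketAddress("www.example.com", 80));
        channel.register(selector, SelectionKey.OP_CONNECT);

        ByteBuffer buffer = ByteBuffer.allocate(8192);
        // Keep polling until a 5 s select() timeout expires with nothing ready.
        while (selector.select(5000) > 0) {
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isConnectable() && channel.finishConnect()) {
                    // Connected: send one request (a real client would handle partial writes).
                    channel.write(ByteBuffer.wrap(
                            "GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n"
                                    .getBytes(StandardCharsets.US_ASCII)));
                    key.interestOps(SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    buffer.clear();
                    int n = channel.read(buffer);          // returns 0 immediately if no data yet
                    if (n > 0) {
                        buffer.flip();
                        System.out.print(StandardCharsets.US_ASCII.decode(buffer));
                    } else if (n == -1) {                  // server closed the connection
                        channel.close();
                        return;
                    }
                }
            }
            selector.selectedKeys().clear();
        }
    }
}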
Make sure you have a Connection: keep-alive in your request. This may be a moot point though.
What kind of response is the server returning? Are you using chunked transfer? If the server doesn't know the size of the response body, it can't provide a Content-Length header and has to close the connection at the end of the response body to indicate to the client that the content has ended. In this case, the keep-alive won't work. If you're generating content on-the-fly with PHP, JSP etc., you can enable output buffering, check the size of the accumulated body, push the Content-Length header and flush the output buffer.
Is there a particular reason you're using raw sockets and not Java's URL Connection or Commons HTTPClient?
HTTP isn't easy to get right. I know Commons HTTP Client can re-use connections like you're trying to do.
If there isn't a specific reason for you using Sockets this is what I would recommend :)
Writing your own correct client HTTP/1.1 implementation is nontrivial; historically most people who I've seen attempt it have got it wrong. Their implementation usually ignores the spec and just does what appears to work with one particular test server - in particular, they usually ignore the requirement to be able to handle chunked responses.
Writing your own HTTP client is probably a bad idea, unless you have some VERY strange requirements.
