I'm trying to build an HTTP server in Java out of curiosity.
I know that HTTP uses sockets underneath (correct me if I'm wrong), so I started programming with the ServerSocket class.
import java.io.IOException;
import java.io.PrintStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Scanner;

public class Server
{
    public static void main(String[] args) throws IOException
    {
        System.out.println("Listening.....");
        ServerSocket ss = new ServerSocket(80);
        while (true)
        {
            Socket s = ss.accept();
            Scanner sc = new Scanner(s.getInputStream());
            while (sc.hasNextLine())
            {
                String line = sc.nextLine();
                if (line.equals(""))
                    break;
                else
                    System.out.println(line);
            }
            System.out.println("-------------------------------");
            PrintStream ps = new PrintStream(s.getOutputStream());
            ps.println("Hello from Server");
            ps.close();
            sc.close();
            s.close();
        }
    }
}
(I'm using threads in my actual code to serve multiple users; I've just provided the basic code.)
I'm getting all the headers from the web browser, but how can I send files and images?
For simple HTML I can read the file and use a PrintStream to send it to the browser.
But how can I send JavaScript, images, etc. to the browser?
HTTP is a protocol; you need to follow that protocol. The HTTP/1.1 spec still in wide use is RFC 2616 (though it has officially been superseded by the newer RFCs 7230, 7231, 7232, 7233, 7234, and 7235).
In my answer to another question, I show the correct way to read an inbound HTTP request from a Java Socket directly.
When sending a reply back, you can use a PrintStream or PrintWriter to send the response HTTP headers. However, the body content is sent as raw bytes, based on the format specified by the Content-Type and Transfer-Encoding response headers. Typically, you would just send the raw bytes directly to the socket's OutputStream, or at least to a BufferedOutputStream attached to it.
If you are sending a pre-existing file from disk, regardless of its type, you could just open an InputStream for the file and then copy its data directly to the socket's OutputStream. If you are generating data dynamically, then you would send the data to the socket's OutputStream using whatever intermediate classes are appropriate. Print... classes are only appropriate for textual data, not binary data, like images.
That being said, Java has its own HttpServer and HttpsServer classes. You should consider using them.
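For example, a minimal sketch using the built-in com.sun.net.httpserver.HttpServer; the port and response text here are arbitrary illustrations:

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class BuiltInServer {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", (HttpExchange exchange) -> {
            byte[] body = "Hello from Server".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/plain; charset=utf-8");
            exchange.sendResponseHeaders(200, body.length);   // status line + headers
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);                              // body as raw bytes
            }
        });
        server.start();
        System.out.println("Listening on http://localhost:8080/");
    }
}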
Basically the same way. You should "print" the raw bytes to the socket's OutputStream.
However, for the browser to be able to understand it, you need to shape your response according to the HTTP/1.1 protocol. Specifying a Content-Type header will tell the browser what it is receiving from you. Specifying a Content-Length header will tell the browser how many bytes it is receiving from you. Etc.
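For example, here is a minimal sketch of writing such a response over a raw socket; the class and method names are illustrative, not part of your code:

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

class StaticFileResponder {
    // Writes one complete HTTP/1.1 response for the given file over the socket.
    // The Content-Type must match the file (e.g. text/html, text/javascript, image/png).
    static void sendFile(Socket s, File file, String contentType) throws IOException {
        OutputStream out = new BufferedOutputStream(s.getOutputStream());

        String headers =
                "HTTP/1.1 200 OK\r\n" +
                "Content-Type: " + contentType + "\r\n" +
                "Content-Length: " + file.length() + "\r\n" +
                "Connection: close\r\n" +
                "\r\n";
        out.write(headers.getBytes(StandardCharsets.US_ASCII));

        // The body is just the raw bytes of the file, whatever its type.
        try (InputStream in = new FileInputStream(file)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
        out.flush();
    }
}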
I need to send an HTTP/2 request over a TCP socket from my Java code. I've adapted a piece of code that works for plain HTTP/1.1, but with HTTP/2 it produces no response and no error.
Can you see anything wrong with it? The server I'm trying to reach is at https://localhost:8443.
Socket s = new Socket(InetAddress.getByName("localhost"), 8443);
PrintWriter pw = new PrintWriter(s.getOutputStream());
pw.print("GET / HTTP/2.0\r\n");
pw.print("Host: localhost:8443\r\n\r\n");
pw.flush();
BufferedReader br = new BufferedReader(new InputStreamReader(s.getInputStream()));
String t;
while((t = br.readLine()) != null) System.out.println(t);
br.close();
Thanks!
That will not work.
HTTP/2 is a binary protocol, not a textual protocol, so in order to use a raw socket you have to generate the proper bytes that form an HTTP/2 request.
This is quite complicated as it requires that you implement HPACK to compress the headers, so you will be far better off using a Java library that does HTTP/2 for you, with a higher level API (rather than using raw sockets).
[Disclaimer: I'm the HTTP/2 implementer in Jetty].
Jetty offers a low-level HTTP/2 client that allows you to deal with HTTP/2 frames, and a high-level HTTP client that can send generic HTTP requests over HTTP/2.
For the first you can find an example here: https://github.com/eclipse/jetty.project/blob/jetty-9.4.18.v20190429/jetty-alpn/jetty-alpn-java-client/src/test/java/org/eclipse/jetty/alpn/java/client/JDK9HTTP2ClientTest.java
For the second one there is this section of the documentation: https://www.eclipse.org/jetty/documentation/9.4.x/http-client-transport.html#_http_2_transport
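If you are on Java 11 or newer and do not need frame-level control, the JDK's built-in java.net.http.HttpClient can also negotiate HTTP/2 for you. A minimal sketch (it assumes the server at https://localhost:8443 presents a certificate the JVM already trusts):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Http2Get {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)   // prefer HTTP/2, fall back to HTTP/1.1
                .build();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://localhost:8443/"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.version());      // HTTP_2 if negotiated via ALPN
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}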
I'm trying to make a Java server that can accept GET and POST HTTP requests. I've managed to get GET working, but not POST: my server reads the request headers but doesn't seem to read the body of the message, i.e. what was posted. Here is the code:
int port = 1991;
ServerSocket serverSocket = new ServerSocket(port);
System.err.println("The Server is on and listening on port " + port);
System.out.println(" ");

while (true)
{
    Socket ClientSocketConnection = serverSocket.accept();
    System.err.println("We have established a connection with a client!");
    System.out.println(" ");

    BufferedReader ServerInput = new BufferedReader(new InputStreamReader(ClientSocketConnection.getInputStream()));
    DataOutputStream ServerOutput = new DataOutputStream(ClientSocketConnection.getOutputStream());

    String StringInput;
    int iCount = 0;
    int CountNull = 0;
    while ((StringInput = ServerInput.readLine()) != null)
    {
        System.out.println(StringInput);
    }
Now I simply print everything that is sent through the socket, but for some reason I just don't get the request's message body, and I know the body is sent because in Chrome's developer tools I can see this:
I'm not sure how to get that "Form Data". Any help would really be appreciated!
UPDATE:
Here is the problem narrowed down further. The form sends the HTTP request fine. An HTTP POST request consists of the request headers, a blank line (\r\n), and then the message data. The problem is that once my BufferedReader ServerInput reaches that blank line, it stops delivering anything more from the stream. Is there any way to fix this?
You need to read about the HTTP protocol. You could take a look at the HttpServlet API for that.
The whole purpose of servlets is to bridge from a socket to the HTTP protocol. Are you sure you want to do that job again?
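To illustrate, a minimal servlet sketch; the class name and form field name are assumptions. With a servlet container in front of it, the request line, headers and form body are already parsed for you:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class FormServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // For application/x-www-form-urlencoded POSTs the container decodes the
        // body into parameters, so no manual Content-Length handling is needed.
        String value = req.getParameter("someField");   // hypothetical field name
        resp.setContentType("text/plain");
        resp.getWriter().println("Received: " + value);
    }
}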
I'd highly recommend you take a look at Jetty. It's an embeddable http server which will abstract all of this away for you.
As mentioned in the post, the problem I had was that my server read the HTTP request headers but somehow never managed to read the POST data sent to it by Google Chrome.
Now, an HTTP POST request has the following structure:
request headers
\r\n (blank line)
post data
The reason I was not able to read the POST data was the .readLine() call. readLine() only returns once it sees a line terminator, and the POST body is not terminated by one, so after the blank line that ends the headers the call blocks and the body never comes through. To fix this I had to use .read() instead of .readLine(): after the headers, read the number of characters given by the Content-Length header, which is exactly the POST data.
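To make that concrete, here is a minimal sketch of the approach, assuming the POST carries a Content-Length header; the class and variable names are illustrative:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

class PostBodyReader {
    // ISO-8859-1 keeps one byte per char, so Content-Length (in bytes) can be
    // used directly as a character count here.
    static String readRequest(Socket clientSocket) throws IOException {
        BufferedReader in = new BufferedReader(
                new InputStreamReader(clientSocket.getInputStream(), StandardCharsets.ISO_8859_1));

        int contentLength = 0;
        String line;
        while ((line = in.readLine()) != null && !line.isEmpty()) {   // headers end at the blank line
            System.out.println(line);
            if (line.toLowerCase().startsWith("content-length:")) {
                contentLength = Integer.parseInt(line.substring("content-length:".length()).trim());
            }
        }

        // Read exactly Content-Length characters with read(); readLine() would
        // block here because the body is not terminated by a newline.
        char[] body = new char[contentLength];
        int read = 0;
        while (read < contentLength) {
            int n = in.read(body, read, contentLength - read);
            if (n == -1) break;
            read += n;
        }
        return new String(body, 0, read);
    }
}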
I'm trying to build a "full-duplex" HTTP streaming request using Apache HTTPClient.
In my first attempt, I tried using the following request code:
URL url = new URL(/* code goes here */);
HttpPost request = new HttpPost(url.toString());
request.addHeader("Connection", "close");

PipedOutputStream requestOutput = new PipedOutputStream();
PipedInputStream requestInput = new PipedInputStream(requestOutput, DEFAULT_PIPE_SIZE);
ContentType requestContentType = getContentType();
InputStreamEntity requestEntity = new InputStreamEntity(requestInput, -1, requestContentType);
request.setEntity(requestEntity);

HttpEntity responseEntity = null;
HttpResponse response = getHttpClient().execute(request); // <-- Hanging here
try {
    if (response.getStatusLine().getStatusCode() != 200)
        throw new IOException("Unexpected status code: " + response.getStatusLine().getStatusCode());
    responseEntity = response.getEntity();
}
finally {
    if (responseEntity == null)
        request.abort();
}

InputStream responseInput = responseEntity.getContent();
ContentType responseContentType;
if (responseEntity.getContentType() != null)
    responseContentType = ContentType.parse(responseEntity.getContentType().getValue());
else
    responseContentType = DEFAULT_CONTENT_TYPE;

Reader responseStream = decode(responseInput, responseContentType);
Writer requestStream = encode(requestOutput, getContentType());
The request hangs at the line indicated above. It seems that the code is trying to send the entire request before it gets the response. In retrospect, this makes sense. However, it's not what I was hoping for. :)
Instead, I was hoping to send the request headers with Transfer-Encoding: chunked, receive a response header of HTTP/1.1 200 OK with a Transfer-Encoding: chunked header of its own, and then I'd have a full-duplex streaming HTTP connection to work with.
Happily, Apache HttpClient also has an NIO-based asynchronous client with good usage examples (like this one). My questions are:
Is my interpretation of the synchronous HTTPClient behavior correct? Or is there something I can do to continue using the (simpler) synchronous HTTPClient in the manner I described?
Does the NIO-based client wait to send the whole request before seeking a response? Or will I be able to send the request incrementally and receive the response incrementally at the same time?
If HTTPClient will not support this modality, is there another HTTP client library that will? Or should I be planning to write a (minimal) HTTP client to support this modality?
Here is my view from skim-reading the code:
I can't agree that a non-200 response means failure; any 2xx response is generally a success. Check the Wikipedia list of status codes for more details.
For any TCP request, I would recommend receiving the entire response to confirm that it is valid. A partial response must usually be treated as a bad response, since most client implementations cannot make use of it (imagine a server that goes down while sending 2 MB of data).
A separate thread must be writing to the OutputStream for your code to work (a sketch of this fix follows the note below).
The code above gives HttpClient a PipedInputStream. A PipedInputStream only makes bytes available as they are written to the corresponding PipedOutputStream, and the code above never writes to that OutputStream (which must be done by a separate thread). Therefore the code hangs exactly where your comment is.
Under the hood, the Apache client calls inputStream.read(), which in the case of piped streams requires that outputStream.write(bytes) was called previously by a separate thread. Since you aren't pumping bytes into the associated OutputStream from a separate thread, the InputStream just sits and waits for the OutputStream to be written to by "some other thread."
From the JavaDocs:
A piped input stream should be connected to a piped output stream; the piped input stream then provides whatever data bytes are written to the piped output stream. Typically, data is read from a PipedInputStream object by one thread and data is written to the corresponding PipedOutputStream by some other thread. Attempting to use both objects from a single thread is not recommended, as it may deadlock the thread.
The piped input stream contains a buffer, decoupling read operations from write operations, within limits. A pipe is said to be "broken" if a thread that was providing data bytes to the connected piped output stream is no longer alive.
Note: since piped streams and concurrency were not mentioned in your problem statement, it seems they may not be necessary. As a sanity check, try wrapping a ByteArrayInputStream in the entity instead first; that should help you narrow down the issue.
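To illustrate the separate-writer-thread fix with plain Apache HttpClient 4.x, here is a hedged sketch; the URL, payload and buffer size are placeholders, not taken from your code:

import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.InputStreamEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class PipedPostExample {
    public static void main(String[] args) throws Exception {
        ExecutorService writerPool = Executors.newSingleThreadExecutor();

        PipedOutputStream requestOutput = new PipedOutputStream();
        PipedInputStream requestInput = new PipedInputStream(requestOutput, 4096);

        HttpPost request = new HttpPost("http://localhost:8080/stream");   // placeholder URL
        request.setEntity(new InputStreamEntity(requestInput, -1, ContentType.APPLICATION_OCTET_STREAM));

        // The writer thread feeds the pipe while execute() reads from it; without
        // this, execute() blocks waiting for bytes that never arrive.
        writerPool.submit(() -> {
            try (PipedOutputStream out = requestOutput) {
                out.write("hello from the writer thread".getBytes(StandardCharsets.UTF_8));
            }
            return null;
        });

        try (CloseableHttpClient client = HttpClients.createDefault();
             CloseableHttpResponse response = client.execute(request)) {
            System.out.println(response.getStatusLine());
            EntityUtils.consume(response.getEntity());
        } finally {
            writerPool.shutdown();
        }
    }
}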
Update
Incidentally, I wrote an inversion of Apache's HTTP Client API [PipedApacheClientOutputStream] which provides an OutputStream interface for HTTP POST using Apache Commons HTTP Client 4.3.4. This may be close to what you are looking for...
Calling-code looks like this:
// Calling-code manages thread-pool
ExecutorService es = Executors.newCachedThreadPool(
        new ThreadFactoryBuilder()
                .setNameFormat("apache-client-executor-thread-%d")
                .build());

// Build configuration
PipedApacheClientOutputStreamConfig config = new PipedApacheClientOutputStreamConfig();
config.setUrl("http://localhost:3000");
config.setPipeBufferSizeBytes(1024);
config.setThreadPool(es);
config.setHttpClient(HttpClientBuilder.create().build());

// Instantiate OutputStream
PipedApacheClientOutputStream os = new PipedApacheClientOutputStream(config);

// Write to OutputStream
os.write(...);

try {
    os.close();
} catch (IOException e) {
    logger.error(e.getLocalizedMessage(), e);
}

// Do stuff with HTTP response
...

// Close the HTTP response
os.getResponse().close();

// Finally, shut down thread pool
// This must occur after retrieving the response if interested in the POST result
es.shutdown();
Note - In practice the same client, executor service, and config will likely be reused throughout the life of the application, so the outer prep and close code in the above example will likely live in bootstrap/init and finalization code rather than directly inline with the OutputStream instantiation.
I have a GWT page where users enter data (start date, end date, etc.); this data then goes to the server via an RPC call. On the server I want to generate an Excel report with POI and let the user save that file on their local machine.
This is my test code for streaming the file back to the client, but for some reason it does not seem to stream the file to the client when I'm using RPC:
public class ReportsServiceImpl extends RemoteServiceServlet implements ReportsService {

    public String myMethod(String s) {
        File f = new File("/excelTestFile.xls");
        String filename = f.getName();
        int length = 0;
        try {
            HttpServletResponse resp = getThreadLocalResponse();
            ServletOutputStream op = resp.getOutputStream();
            ServletContext context = getServletConfig().getServletContext();

            resp.setContentType("application/octet-stream");
            resp.setContentLength((int) f.length());
            resp.setHeader("Content-Disposition", "attachment; filename*=\"utf-8''" + filename + "");

            byte[] bbuf = new byte[1024];
            DataInputStream in = new DataInputStream(new FileInputStream(f));
            while ((in != null) && ((length = in.read(bbuf)) != -1)) {
                op.write(bbuf, 0, length);
            }
            in.close();
            op.flush();
            op.close();
        }
        catch (Exception ex) {
            ex.printStackTrace();
        }
        return "Server says: " + filename;
    }
}
I've read somewhere on the internet that you can't stream a file over RPC and that I have to use a servlet for that. Is there any example of how to use a servlet and how to call that servlet from ReportsServiceImpl? Do I really need to make a servlet, or is it possible to stream the file back with my RPC?
You have to make a regular Servlet, you cannot stream binary data from ReportsServiceImpl. Also, there is no way to call the servlet from ReportsServiceImpl - your client code has to directly invoke the servlet.
On the client side, you'd create a normal anchor link with the parameters passed via the query string, something like <a href="http://myserver.com/myservlet?parm1=value1&..">...</a>.
On the server side, move your code to a standard servlet, one that does NOT inherit from RemoteServiceServlet. Read the parameters from the request object, create the Excel file and send it back to the client. The browser will automatically pop up the file download dialog.
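A hedged sketch of what that standalone download servlet might look like; the class name, parameter names and file path are placeholders, and in a real application the POI-generated workbook would be written instead of a file read from disk:

import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ReportDownloadServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Placeholder parameters; these would drive the report generation.
        String startDate = req.getParameter("startDate");
        String endDate = req.getParameter("endDate");

        Path report = Paths.get("/excelTestFile.xls");    // placeholder: build with POI instead
        resp.setContentType("application/vnd.ms-excel");
        resp.setContentLength((int) Files.size(report));
        resp.setHeader("Content-Disposition", "attachment; filename=\"report.xls\"");

        try (OutputStream out = resp.getOutputStream()) {
            Files.copy(report, out);   // the headers above make the browser show a save dialog
        }
    }
}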
You can do this with just GWT RPC and Data URIs:
In your example, make myMethod return the file content.
On the client side, format a Data URI with the file content received.
Use Window.open to open a file-save dialog, passing the formatted Data URI (see the sketch below).
Take a look at this reference, to understand the Data URI usage:
Export to csv in jQuery
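To make the client side concrete, here is a hedged sketch, assuming the RPC method is changed to return the Excel content as a Base64 string and that reportsService is the asynchronous GWT-RPC interface (both are assumptions, not part of the original code):

// Assumes: import com.google.gwt.user.client.Window;
//          import com.google.gwt.user.client.rpc.AsyncCallback;
// reportsService is the ReportsServiceAsync instance; myMethod is assumed to
// return the Excel content Base64-encoded.
reportsService.myMethod("excelTestFile", new AsyncCallback<String>() {
    @Override
    public void onSuccess(String base64Content) {
        // Data URI: the browser decodes it and offers the file for download/open.
        String dataUri = "data:application/vnd.ms-excel;base64," + base64Content;
        Window.open(dataUri, "_blank", "");
    }

    @Override
    public void onFailure(Throwable caught) {
        Window.alert("Report generation failed: " + caught.getMessage());
    }
});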
It's possible to get the binary data back through the RPC channel in a number of ways (uuencode it, for instance), but you would still have to get the browser to handle the file as a download.
Based on your code, it appears you are trying to trigger the browser's standard mechanism for handling the given MIME type by modifying the response on the server so the browser recognizes it as a download and opens a save dialog, for instance. To do that, you need the browser itself to make the request, and you need a servlet there to handle it. It can be done with REST URLs, but ultimately you will still need a servlet.
You need, in effect, to point a browser window's URL at the URL that sends back the modified response object.
So this question (about streaming) is not really compatible with the code sample; one or the other (the communication protocol or the server-modified response object) has to be adjusted.
The easiest one to adjust is the communication method.
Let's say I have a Java program that makes an HTTP request to a server using HTTP 1.1 and doesn't close the connection. I make one request and read all the data returned from the input stream I have bound to the socket. However, upon making a second request, I get no response from the server (or there's a problem with the stream; it doesn't provide any more input). If I make the requests in order (request, request, read) it works fine, but (request, read, request, read) doesn't.
Could someone shed some insight into why this might be happening? (Code snippets follow.) No matter what I do, the second read loop's isr_reader.read() only ever returns -1.
try {
    connection = new Socket("SomeServer", port);
    con_out = connection.getOutputStream();
    con_in = connection.getInputStream();

    PrintWriter out_writer = new PrintWriter(con_out, false);
    out_writer.print("GET http://somesite HTTP/1.1\r\n");
    out_writer.print("Host: thehost\r\n");
    //out_writer.print("Content-Length: 0\r\n");
    out_writer.print("\r\n");
    out_writer.flush();

    // If we were not interpreting this data as a character stream, we might need to adjust byte ordering here.
    InputStreamReader isr_reader = new InputStreamReader(con_in);
    char[] streamBuf = new char[8192];
    int amountRead;
    StringBuilder receivedData = new StringBuilder();
    while ((amountRead = isr_reader.read(streamBuf)) > 0) {
        receivedData.append(streamBuf, 0, amountRead);
    }
    // Response is processed here.

    if (connection != null && !connection.isClosed()) {
        //System.out.println("Connection Still Open...");

        out_writer.print("GET http://someSite2\r\n");
        out_writer.print("Host: somehost\r\n");
        out_writer.print("Connection: close\r\n");
        out_writer.print("\r\n");
        out_writer.flush();

        streamBuf = new char[8192];
        amountRead = 0;
        receivedData.setLength(0);
        while ((amountRead = isr_reader.read(streamBuf)) > 0 || amountRead < 1) {
            if (amountRead > 0)
                receivedData.append(streamBuf, 0, amountRead);
        }
    }
    // Process response here
}
Responses to questions:
Yes, I'm receiving chunked responses from the server.
I'm using raw sockets because of an outside restriction.
Apologies for the mess of code - I was rewriting it from memory and seem to have introduced a few bugs.
So the consensus is I have to either do (request, request, read) and let the server close the stream once I hit the end, or, if I do (request, read, request, read) stop before I hit the end of the stream so that the stream isn't closed.
According to your code, the only time you'll even reach the statements dealing with sending the second request is when the server closes the output stream (your input stream) after receiving/responding to the first request.
The reason for that is that your code that is supposed to read only the first response
while((amountRead = isr_reader.read(streamBuf)) > 0) {
receivedData.append(streamBuf, 0, amountRead);
}
will block until the server closes the output stream (i.e., when read returns -1) or until the read timeout on the socket elapses. In the case of the read timeout, an exception will be thrown and you won't even get to sending the second request.
The problem with HTTP responses is that they don't tell you how many bytes to read from the stream until the end of the response. This is not a big deal for HTTP 1.0 responses, because the server simply closes the connection after the response thus enabling you to obtain the response (status line + headers + body) by simply reading everything until the end of the stream.
With HTTP 1.1 persistent connections you can no longer simply read everything until the end of the stream. You first need to read the status line and the headers, line by line, and then, based on the status code and the headers (such as Content-Length) decide how many bytes to read to obtain the response body (if it's present at all). If you do the above properly, your read operations will complete before the connection is closed or a timeout happens, and you will have read exactly the response the server sent. This will enable you to send the next request and then read the second response in exactly the same manner as the first one.
P.S. Request, request, read might be "working" in the sense that your server supports request pipelining and thus, receives and processes both request, and you, as a result, read both responses into one buffer as your "first" response.
P.P.S Make sure your PrintWriter is using the US-ASCII encoding. Otherwise, depending on your system encoding, the request line and headers of your HTTP requests might be malformed (wrong encoding).
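To make that concrete, here is a minimal sketch of reading one response off a persistent connection by parsing the status line and headers and then reading exactly Content-Length bytes, as described above; the class and method names are illustrative, and chunked transfer-encoding (which needs its own decoding loop) is deliberately not handled:

import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

class ResponseReader {
    static byte[] readOneResponse(Socket socket) throws IOException {
        InputStream in = socket.getInputStream();

        // Read the status line and headers byte by byte so nothing is read past
        // the end of the header section (no read-ahead buffering).
        int contentLength = 0;
        String line;
        while (!(line = readLine(in)).isEmpty()) {
            System.out.println(line);   // status line first, then the headers
            if (line.toLowerCase().startsWith("content-length:")) {
                contentLength = Integer.parseInt(line.substring("content-length:".length()).trim());
            }
        }

        // Read exactly Content-Length body bytes and stop; the connection stays
        // open and ready for the next request.
        byte[] body = new byte[contentLength];
        new DataInputStream(in).readFully(body);
        return body;
    }

    // Reads a single CRLF-terminated line as ISO-8859-1 text.
    private static String readLine(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1 && b != '\n') {
            if (b != '\r') {
                buf.write(b);
            }
        }
        return new String(buf.toByteArray(), StandardCharsets.ISO_8859_1);
    }
}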
Writing a simple HTTP/1.1 client that respects the RFC is not such a difficult task.
To avoid blocking I/O when reading from a socket in Java, you can use the java.nio classes: SocketChannels let you perform non-blocking I/O, which helps when sending HTTP requests over a persistent connection. Furthermore, the nio classes can give better performance; a sketch follows the results below.
My stress tests gave the following results:
HTTP/1.0 (java.io) -> HTTP/1.0 (java.nio) = +20% faster
HTTP/1.0 (java.io) -> HTTP/1.1 (java.nio with persistent connections) = +110% faster
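For illustration, a rough sketch of a non-blocking GET with java.nio; for brevity it sends Connection: close and assumes the whole response fits in one 8 KB buffer, whereas a persistent-connection client would keep the channel registered and parse each response as described in the answer above:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class NioHttpGet {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        SocketChannel channel = SocketChannel.open();
        channel.configureBlocking(false);
        channel.connect(new InetSocketAddress("example.com", 80));
        channel.register(selector, SelectionKey.OP_CONNECT);

        ByteBuffer request = ByteBuffer.wrap(
                "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
                        .getBytes(StandardCharsets.US_ASCII));
        ByteBuffer response = ByteBuffer.allocate(8192);

        while (selector.select() > 0) {
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isConnectable() && channel.finishConnect()) {
                    key.interestOps(SelectionKey.OP_WRITE);          // connected, now send
                } else if (key.isWritable()) {
                    channel.write(request);
                    if (!request.hasRemaining()) {
                        key.interestOps(SelectionKey.OP_READ);       // request sent, now read
                    }
                } else if (key.isReadable()) {
                    if (channel.read(response) == -1) {              // server closed: response complete
                        channel.close();
                        response.flip();
                        System.out.println(StandardCharsets.UTF_8.decode(response));
                        return;
                    }
                }
            }
            selector.selectedKeys().clear();
        }
    }
}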
Make sure you have a Connection: keep-alive in your request. This may be a moot point though.
What kind of response is the server returning? Are you using chunked transfer? If the server doesn't know the size of the response body, it can't provide a Content-Length header and has to close the connection at the end of the response body to indicate to the client that the content has ended. In this case, the keep-alive won't work. If you're generating content on-the-fly with PHP, JSP etc., you can enable output buffering, check the size of the accumulated body, push the Content-Length header and flush the output buffer.
Is there a particular reason you're using raw sockets and not Java's URLConnection or Commons HttpClient?
HTTP isn't easy to get right, and I know Commons HttpClient can re-use connections the way you're trying to do.
If there isn't a specific reason for you to use raw sockets, that is what I would recommend :)
Writing your own correct HTTP/1.1 client implementation is nontrivial; historically, most people I've seen attempt it have got it wrong. Their implementations usually ignore the spec and just do what appears to work with one particular test server; in particular, they usually ignore the requirement to be able to handle chunked responses.
Writing your own HTTP client is probably a bad idea, unless you have some VERY strange requirements.