I'm trying to write a simple HTTP server but can't figure out how to read the body of a POST request. I'm having trouble reading beyond the empty line that follows the headers.
Here's what I do:
BufferedReader br = new BufferedReader(new InputStreamReader(client.getInputStream()));
StringBuilder request = new StringBuilder();
String line;
while(!(line = br.readLine()).isEmpty()) {
request.append(line).append(CRLF);
System.out.println(line);
}
// read body ?
So this basically loads the Request and headers in a String. But I can't figure out how to skip that one line that separates the headers from the body.
I've tried using readLine() != null as the loop condition, and also manually reading two more lines after the loop terminates, but that just results in an endless loop.
Try parsing the Content-Length header to get the number of bytes in the body. After the blank line you'll want to read exactly that many bytes. Using readLine() won't work because the body isn't terminated by a CRLF.
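For example, a minimal sketch along those lines, reusing the question's BufferedReader (this assumes a simple single-byte/ASCII body, since Content-Length counts bytes while a Reader counts characters; a production server would read the body from the raw InputStream):
int contentLength = 0;
String line;
while (!(line = br.readLine()).isEmpty()) {
    // remember the Content-Length header while reading up to the blank line
    if (line.toLowerCase().startsWith("content-length:")) {
        contentLength = Integer.parseInt(line.substring("content-length:".length()).trim());
    }
}
// read exactly contentLength characters of the body instead of calling readLine()
char[] body = new char[contentLength];
int read = 0;
while (read < contentLength) {
    int n = br.read(body, read, contentLength - read);
    if (n == -1) break; // client closed the connection early
    read += n;
}
System.out.println(new String(body, 0, read));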
I'm using the latest Apache Commons Net to make use of its FTP functionality.
My goal is to upload CSV files (semicolon-delimited) which might contain Latin characters such as ñ, á or Ú. The thing is that when I upload them to the FTP server, those characters get transformed into other characters.
The following line:
12345678A;IÑIGO;PÉREZ;JIMÉNEZ;X
gets transformed into this:
12345678A;IÃ‘IGO;PÃ‰REZ;JIMÃ‰NEZ;X
My code looks something like this:
// pFile is passed as parameter to the current method
InputStream is = new FileInputStream(pFile);
ftp.setFileType(FTP.BINARY_FILE_TYPE);
ftp.setControlEncoding("UTF-8");
if (ftp.storeFile("some\\path", is)) {
is.close();
...
}
I've dug around for some hours to find a solution (I thought setFileType() and/or setControlEncoding() would do the trick), but no luck...
I've tried printing the file to standard output (with a logger and with System.out), suspecting that the InputStream wasn't reading those characters correctly. But executing the following code printed the mentioned characters just fine:
InputStreamReader isr = new InputStreamReader(is, StandardCharsets.UTF_8);
BufferedReader in = new BufferedReader(isr);
String line = null;
while((line = in.readLine()) != null){
System.out.print(line);
logger.debug(line);
}
in.close();
isr.close();
But how do I tell the FTP client or storeFile() to use UTF-8?
Thank you all.
Sorry, but I've found the answer myself.
When I said that I saw some characters transformed, like
12345678A;IÃ‘IGO;PÃ‰REZ;JIMÃ‰NEZ;X
I meant that this is how they were displayed in an FTP client application (I use WinSCP). The issue is that the client's default character encoding was selected, and it wasn't UTF-8.
Now, after realising that, I select the proper encoding (UTF-8) and the text appears well-formed.
Thanks for your help.
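For reference, a minimal sketch of a binary upload with Commons Net (host, credentials and remote path below are placeholders): with FTP.BINARY_FILE_TYPE the file bytes are transferred unchanged, so a UTF-8 file stays UTF-8 on the server, and setControlEncoding() only affects the control channel (commands and file names), not file contents.
// host, credentials and paths below are placeholders, not from the original post
FTPClient ftp = new FTPClient();
ftp.setControlEncoding("UTF-8");        // affects commands and file names, not file contents
ftp.connect("ftp.example.com");
ftp.login("user", "password");
ftp.setFileType(FTP.BINARY_FILE_TYPE);  // transfer bytes verbatim
try (InputStream is = new FileInputStream(pFile)) {
    ftp.storeFile("some/path/file.csv", is);
}
ftp.logout();
ftp.disconnect();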
I was wondering if it's possible to send a single string with newline characters using BufferedWriter.
For context I need to send the following string over the network:
String message = "LOOKREPLY\nX...X\n.....\n.....\n.....\nX...X";
socketOutput = new BufferedWriter(new OutputStreamWriter(socket.getOutputStream()));
socketOutput.write(message + "\n");
But this sends 6 messages over the network because of the newline characters. I could re-build it on the client side but was wondering if there's a neater/simpler approach.
In your code:
bw.write(message + "\n");
the trailing \n is simply appended after the whole message has been written.
BufferedWriter will not send 6 messages; it writes the string in a single call, but since you put \n characters in your string, the receiver sees those line breaks when it reads line by line.
As delv said, you can escape the \n so that only one literal line is written:
String message = "LOOKREPLY\\nX...X\\n.....\\n.....\\n.....\\nX...X";
That way the message will be written like this:
LOOKREPLY\nX...X\n.....\n.....\n.....\nX...X
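Alternatively, if you want to keep the real newlines, you can send the string as-is in one write() and have the receiver collect lines until an agreed terminator; the blank-line terminator below is an assumed convention, not part of the original protocol:
// sender: one write() call, flush() pushes the whole buffer out at once
socketOutput.write(message + "\n\n");   // blank line marks the end of the message (assumed convention)
socketOutput.flush();

// receiver: socket here is the receiving end's connection
BufferedReader socketInput = new BufferedReader(new InputStreamReader(socket.getInputStream()));
StringBuilder received = new StringBuilder();
String line;
while ((line = socketInput.readLine()) != null && !line.isEmpty()) {
    received.append(line).append("\n");
}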
I'm getting a strange issue with a loop that reads from a BufferedReader and never ends...
This is the code:
BufferedReader in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
int b;
StringBuilder buff = new StringBuilder();
while ((b = in.read()) != -1 ) {
buff.append((char) b);
}
System.out.println(buff.toString());
But it never reaches the last line to print buff.toString().
Is there anything wrong in this code?
Thanks.
Can you try changing the while condition like this?
while ((b = in.read()) > -1 ) {
buff.append((char) b);
}
Your loop is trying to read until EOF (that is the only reason for an input stream/reader to return -1 for the read() method).
The problem is that your HTTP connection (and your socket) might be left open (for a while), also leaving your input stream/reader open. So instead of reaching the end condition for your loop, the in.read() call will just block, waiting for input.
If you control the "other side" of the connection, you could close it, to see what happens. But I doubt that would work for the use case in general. Instead, you need to parse the message according to the HTTP protocol (see HTTP/1.1 RFC2616).
If you only need to parse the headers, then you could use your BufferedReader and read only full lines, until you find an empty line. This will work because HTTP puts a line break (CRLF in this case) after each header name/value pair and ends the header section with an empty line, i.e. two consecutive CRLFs. Everything after that is the message body.
PS: This is the easy/happy case. Note that a single connection/socket may be re-used for multiple HTTP requests/responses. You may have to handle this as well, depending on your requirements.
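A minimal sketch of that header-only approach (it reuses the question's clientSocket and a java.util.LinkedHashMap to keep header order; body handling is omitted):
BufferedReader in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
Map<String, String> headers = new LinkedHashMap<>();
String requestLine = in.readLine();              // e.g. "GET /index.html HTTP/1.1"
String line;
while ((line = in.readLine()) != null && !line.isEmpty()) {
    int colon = line.indexOf(':');
    if (colon > 0) {
        headers.put(line.substring(0, colon).trim(), line.substring(colon + 1).trim());
    }
}
// everything after the empty line (if any) is the message body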
I am getting a PDF attachment in a Soap response message. I need to generate a PDF back out of it. However, the generated PDF is of the following form:
%PDF-1.4
%
2 0 obj
<</Type/XObject/ColorSpace/DeviceRGB/Subtype/Image/BitsPerComponent 8/Width
278/Length 7735/Height 62/Filter/DCTDecode>>stream
How can I solve this issue?
Here is the code showing how I am embedding a PDF as an attachment:
message = messageFactory.createMessage();
SOAPBody body = message.getSOAPBody();
header.detachNode();
AttachmentPart attachment1 = message.createAttachmentPart();
fr = new FileReader(new File(pathName));
br = new BufferedReader(fr);
String stringContent = "";
line = br.readLine();
while (line != null) {
stringContent = stringContent.concat(line);
stringContent = stringContent.concat("\n");
line = br.readLine();
}
fr.close();
br.close();
attachment1.setMimeHeader("Content-Type", "application/pdf");
attachment1.setContent(stringContent, "application/pdf");
The code below shows how I am getting the PDF back from the SOAP message:
Object content = attachment1.getContent();
writePdf(content);
private void writePdf(Object content) throws IOException, PrintException,
DocumentException {
String str = content.toString();
//byte[] b = Base64.decode(str);
//byteArrayToFile(b);
OutputStream file = new FileOutputStream(new File
(AppConfig.getInstance().getConfigValue("webapp.root") +
File.separator + "temp" + File.separator + "hede.pdf"));
//String s2 = new String(bytes, "UTF-8");
//System.out.println("S2::::::::::"+s2);
Document document = new Document();
PdfWriter.getInstance(document, file);
document.open();
document.add(new Paragraph(str));
document.close();
file.close();
}
Can anyone help me out?
There are several faults in the supplied code:
In the code showing how you are embedding the PDF as an attachment, you are using a Reader (a FileReader wrapped in a BufferedReader) to read the file to attach line by line, concatenate these lines using \n as separator, and send the result of the concatenation as attachment content of type "application/pdf".
This is a procedure you may consider for text files (even though it isn't a good choice there either), but binary files read like this most likely get broken beyond repair (and PDFs are binary files, in spite of a phase early in their history where handling them as text was fairly harmless):
When reading a file, a Reader interprets its bytes according to some character encoding (as none is given explicitly here, most likely the platform default encoding is used) to transform them into Unicode characters collected in a String. Already at this point the binary data is most likely damaged.
When using readLine you read these Unicode characters until the Reader recognizes a line break. A line is considered to be terminated by any one of a line feed ('\n'), a carriage return ('\r'), or a carriage return followed immediately by a line feed (Java API JavaDocs). When you then concatenate these lines uniformly using \n as separator, you essentially replace all single carriage return characters and all carriage return/line feed pairs with single line feed characters, damaging the binary data even further.
When the attachment API you use encodes this string as the content of an attachment part, it transforms your Unicode characters back into bytes. If by chance the same character encoding is assumed as was used by the Reader before, this might heal some of the damage done back then, but surely not all, and the line break mangling of the step in between certainly isn't healed either. If a different encoding is used, the data is damaged once again.
Thus, check what other argument types your AttachmentPart.setContent method (or its siblings) accepts, choose something which does not damage binaries (e.g. an InputStream, ByteBuffer, byte[], ...) and use that, e.g. a FileInputStream.
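For example, a hedged sketch of a binary-safe attachment, assuming a SAAJ 1.3+ AttachmentPart (which offers setRawContentBytes) and reusing the question's message and pathName:
// needs: import java.nio.file.Files; import java.nio.file.Paths;
byte[] pdfBytes = Files.readAllBytes(Paths.get(pathName));   // read the PDF as raw bytes
AttachmentPart attachment1 = message.createAttachmentPart();
attachment1.setRawContentBytes(pdfBytes, 0, pdfBytes.length, "application/pdf");
message.addAttachmentPart(attachment1);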
The code which describes how you are getting the PDF back from the SOAP message is even weirder... You assume that toString of the attachment content returns some meaningful string representation (very unlikely here), and then continue to create a new PDF containing that string representation as the text content of the first and only paragraph of the PDF. Thus, while your attachment creation code discussed above at least 'merely' damaged the PDF, your attachment retrieval code completely ignores the nature of the attachment and destroys it beyond recognition.
You should instead check the actual type of the content object, retrieve the binary data it holds according to its type, and store that content using a FileOutputStream (not a Writer, not using Strings in between, and not copying 'line' by 'line').
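A minimal sketch of such a binary-safe retrieval, assuming SAAJ 1.3+ getRawContent() and reusing the question's target path:
try (InputStream in = attachment1.getRawContent();
     OutputStream out = new FileOutputStream(
             AppConfig.getInstance().getConfigValue("webapp.root")
                     + File.separator + "temp" + File.separator + "hede.pdf")) {
    byte[] buffer = new byte[8192];
    int n;
    while ((n = in.read(buffer)) != -1) {
        out.write(buffer, 0, n);   // copy bytes verbatim, no Reader/String conversion
    }
}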
And whatever source gave you the impression that your code was appropriate for the task... well, either you completely misunderstood it or you should shun it from now on.
Hi all,
I wonder how to quickly read the last line of an online file, such as "http://www.17500.cn/getData/ssq.TXT".
I know the RandomAccessFile class, but it seems that it can only read local files. Any suggestions? Thanks in advance.
You'll have to read through the whole reader, and only keep the last line:
String line;
String lastLine = null;
while ((line = reader.readLine()) != null) {
lastLine = line;
}
EDIT: as Joachim says in his comment, if you know that the last line will never be longer than (for example) 500 bytes, you could set a Range header in your HTTP request asking for only the last 500 bytes, and thus avoid downloading the whole file. The same algorithm as above could then be used.
I don't know, however, whether it would deal correctly with a stream starting in the middle of a multi-byte encoded character if the encoding is multi-byte (like UTF-8). With ASCII or ISO-8859-1, you won't have any problem.
Also note that the server is not forced to honor the range request, and could return the whole file.
httpConnection.setRequestProperty("Range", "bytes=-500");
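Putting it together, a minimal sketch using HttpURLConnection (the 500-byte limit is an assumption about the maximum line length, and a single-byte charset is assumed):
URL url = new URL("http://www.17500.cn/getData/ssq.TXT");
HttpURLConnection httpConnection = (HttpURLConnection) url.openConnection();
httpConnection.setRequestProperty("Range", "bytes=-500");   // ask for just the last 500 bytes
// the server may ignore the Range header and return the whole file; the loop works either way
try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(httpConnection.getInputStream(), StandardCharsets.ISO_8859_1))) {
    String line;
    String lastLine = null;
    while ((line = reader.readLine()) != null) {
        lastLine = line;
    }
    System.out.println(lastLine);
}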