I am sending a file from a server to a client using Java socket programming.
Here is my server side code:
public void fileSendingProtocol(String filePath) {
File myFile = new File(filePath);
byte[] mybytearray = new byte[(int) myFile.length()];
FileInputStream fis = null;
try {
fis = new FileInputStream(myFile);
} catch (FileNotFoundException ex) {
System.err.println(ex);
}
BufferedInputStream bis = new BufferedInputStream(fis);
try {
bis.read(mybytearray, 0, mybytearray.length);
os.write(mybytearray, 0, mybytearray.length);
os.flush();
System.out.println(filePath + " Submitted");
// File sent, exit the main method
} catch (IOException ex) {
// Do exception handling
System.out.println(ex.toString());
} finally {
try {
os.close();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
Here I have closed os in the finally block, because if I omit os.close() then I am not able to receive the file on the client side.
Here is my client file receiving code:
public static void fileReceivingProtocol(String filePath) {
try {
fos = new FileOutputStream(filePath);
} catch (FileNotFoundException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
bos = new BufferedOutputStream(fos);
/* read question paper from the server. */
try {
bytesRead = is.read(aByte, 0, aByte.length);
do {
baos.write(aByte);
bytesRead = is.read(aByte);
} while (bytesRead != -1);
bos.write(baos.toByteArray());
bos.flush();
fos.close();
} catch (IOException e) {
System.err.println("IOException: " + e);
}
}
I need the server to send a file first. Then, after receiving that file, the client needs to send another file back to the server a few minutes later. But if I call os.close() on the server side, my socket gets closed and I am not able to continue any further communication between them.
You do not need to close os (the OutputStream), just call its flush() method. Streams are often buffered for performance reasons. A call to flush() will instruct the implementation to send all cached data.
Most likely the file you send is small (maybe a few KB at most), which is less than the typical buffer size. If you write less data than the buffer size (or less than can be transmitted in a TCP packet, in the case of a Socket's OutputStream), the implementation will likely not send it for some time. flush() will send whatever data is currently buffered.
If your client does not know the file size (does not know how many bytes to wait for), you have to implement some kind of "protocol" to exchange this information. A very basic one would be to first send the file size (the number of bytes) in 4 bytes (the size of a Java int), and then send the content of the file.
The client will know that the first 4 bytes are the file size, and it will wait for / read exactly that many bytes.
How to convert int to bytes: Convert integer into byte array (Java) or Java integer to byte array
Modified file sender
// First write the file's length (4 bytes)
int length = (int) myFile.length();
os.write((length >>> 24) & 0xff);
os.write((length >>> 16) & 0xff);
os.write((length >>> 8) & 0xff);
os.write(length & 0xff);
// And now send the content of the file just as you did
Modified file receiver
// First read the file's length (4 bytes)
int b1 = is.read();
int b2 = is.read();
int b3 = is.read();
int b4 = is.read();
if (b1 < 0 || b2 < 0 || b3 < 0 || b4 < 0)
throw new EOFException(); // Less than 4 bytes received, end of stream
int length = (b1 << 24) + (b2 << 16) + (b3 << 8) + b4;
// And now read the content of the file which must be exactly length bytes
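For completeness, here is a minimal sketch of that read loop on the client side. It assumes `is` is the socket's InputStream and `fos` the FileOutputStream from the question's receiver; if you wrap the streams in DataOutputStream/DataInputStream you can also write and read the 4-byte prefix with writeInt()/readInt() instead of assembling it by hand.
byte[] chunk = new byte[4096];
int remaining = length;
while (remaining > 0) {
    int count = is.read(chunk, 0, Math.min(chunk.length, remaining));
    if (count == -1)
        throw new EOFException(); // stream ended before the whole file arrived
    fos.write(chunk, 0, count);
    remaining -= count;
}
fos.flush();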
Related
I am working on a school project where I want to make a personal storage server. At the moment, what I am trying to achieve is being able to transfer a file from the client machine to the server. However, when testing this with an image, the file only partially transfers and arrives corrupted.
Please bear in mind that I am a reasonably new programmer and that my technical knowledge may be somewhat limited.
I am using a byte array through a DataOutputStream to transfer the file. I want to use this method as it should work for any file type. I've tried setting the buffer size to the exact size of the file and larger, but neither has worked.
Server:
public void run() {
try {
System.out.println("ip: " + clientSocket.getInetAddress().getHostAddress());
out = new DataOutputStream(clientSocket.getOutputStream());
in = new DataInputStream(clientSocket.getInputStream());
in.read(buffer, 0, buffer.length);
fileOut = new FileOutputStream("X:\\My Documents\\My Pictures\\gradient.jpg");
fileOut.write(buffer, 0, buffer.length);
in.close();
out.close();
clientSocket.close();
} catch (IOException ex) {
System.out.println(ex.getMessage());
}
}
Client:
public void startConnection(String ip, int port) {
try {
clientSocket = new Socket(ip, port);
out = new DataOutputStream(clientSocket.getOutputStream());
in = new DataInputStream(clientSocket.getInputStream());
x = false;
Path filePath = Paths.get("C:\\Users\\georg\\Documents\\gradient.jpg");
buffer = Files.readAllBytes(filePath);
Thread.sleep(3000);
//Files.write(filePath, buffer);
//out.write(buffer,0,buffer.length);
x = true;
sendMessage(buffer);
} catch (IOException ex) {
System.out.println(ex.getMessage());
} catch (InterruptedException ex) {
Logger.getLogger(PCS_Client.class.getName()).log(Level.SEVERE, null, ex);
}
}
public byte[] sendMessage(byte[] buffer) {
if (x==true){
try {
out.write(buffer,0,buffer.length);
} catch (IOException ex) {
System.out.println(ex.getMessage());
}
}
return null;
}
Here is a comparison of the files I've tried to send vs the files I receive:
https://imgur.com/gallery/T7nUUJT
Curiously, sending a single-colour image produces a single-colour image on the server. I believe the issue may lie in the timing of code execution; however, I am not sure and do not know how to go about fixing it.
The issue is in your server code, at this line:
in.read(buffer, 0, buffer.length);
You expect to read all the data at once, but if you read the doc you will find this:
public final int read(byte[] b,
int off,
int len)
throws IOException
Reads up to len bytes of data from the contained input stream into an
array of bytes. An attempt is made to read as many as len bytes, but a
smaller number may be read, possibly zero. The number of bytes
actually read is returned as an integer.
The important part is Reads up to len bytes of data.
You must use the return value of read() and call it repeatedly until there is nothing more to read.
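A minimal sketch of such a loop on the server side, using the field names from the code above and assuming `buffer` has been allocated with the expected file length (if the length is known up front, DataInputStream.readFully(buffer) does the same thing):
int totalRead = 0;
while (totalRead < buffer.length) {
    int count = in.read(buffer, totalRead, buffer.length - totalRead);
    if (count == -1)
        break; // client closed the connection before sending everything
    totalRead += count;
}
fileOut.write(buffer, 0, totalRead); // write only the bytes actually received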
For quite a long time now I have been struggling with handling the TFTP protocol in my Android app. Its main feature is downloading files from a custom-designed device which hosts a TFTP server.
I browsed the internet hoping to find a good, already written implementation. First I tried the TFTP library that is part of Apache Commons. Unfortunately, no luck - constant timeouts or even a complete freeze. After some further research I found some code on GitHub - please take a look. I adapted the code to Android and, after some tweaking, I finally managed to receive some files.
The creator of the device stated that the block size should be exactly 1015 bytes. So I increased the packet size to 1015 and updated the method that creates the read request packet:
DatagramPacket createReadRequestPacket(String strFileName) {
byte[] filename = strFileName.getBytes();
byte[] mode = currentMode.getBytes();
int len = rOpCode.length + filename.length + mode.length + 2;
ByteArrayOutputStream outputStream = new ByteArrayOutputStream(len);
try {
outputStream.write(rOpCode);
outputStream.write(filename);
byte term = 0;
outputStream.write(term);
outputStream.write(mode); // "octet"
outputStream.write(term);
outputStream.write("blksize".getBytes());
outputStream.write(term);
outputStream.write("1015".getBytes());
outputStream.write(term);
} catch (IOException e) {
e.printStackTrace();
}
byte[] readPacketArray = outputStream.toByteArray();
return new DatagramPacket(readPacketArray, readPacketArray.length, serverAddr, port);
}
Chunks are being downloaded, but there is one major issue - the files I'm downloading come in parts of 512 kB each (except the last one), and each part I receive on the Android device is around 0.5 kB larger. It seems like there is either one extra byte each time or one whole extra append. Apparently I don't understand it completely and I'm missing something.
This is my method for file receiving:
byte previousBlockNumber = (byte) -1;
try {
PktFactory pktFactory;
DatagramSocket clientSocket;
byte[] buf;
DatagramPacket sendingPkt;
DatagramPacket receivedPkt;
System.out.print(ftpHandle);
if (isConnected) {
System.out.println("You're already connected to " + hostname.getCanonicalHostName());
}
try {
hostname = InetAddress.getByName(host);
if (!hostname.isReachable(4000)) {
System.out.println("Hostname you provided is not responding. Try again.");
return false;
}
} catch (UnknownHostException e) {
System.out.println("tftp: nodename nor servname provided, or not known");
return false;
}
clientSocket = new DatagramSocket();
pktFactory = new PktFactory(PKT_LENGTH + 4, hostname, TFTP_PORT);
System.out.println("Connecting " +
hostname.getCanonicalHostName() + " at the port number " + TFTP_PORT);
isConnected = true;
ftpHandle = "tftp#" + hostname.getCanonicalHostName() + "> ";
System.out.println("mode " + PktFactory.currentMode);
if (!isConnected) {
System.out.println("You must be connected first!");
}
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
buf = new byte[PKT_LENGTH + 4];
/* Sending the reading request with the filename to the server. **/
try {
/* Sending a RRQ with the filename. **/
System.out.println("Sending request to server.");
sendingPkt = pktFactory.createReadRequestPacket(filename);
clientSocket.setSoTimeout(4500);
clientSocket.send(sendingPkt);
} catch (Exception e) {
e.printStackTrace();
System.out.println("Connection with server failed");
}
boolean receivingMessage = true;
while (true) {
try {
receivedPkt = new DatagramPacket(buf, buf.length);
clientSocket.setSoTimeout(10000);
clientSocket.receive(receivedPkt);
byte[] dPkt = receivedPkt.getData();
byte[] ropCode = pktFactory.getOpCode(dPkt);
/* rPkt either a DATA or an ERROR pkt. If an error then print the error message and
* terminate the program finish get command. **/
if (ropCode[1] == 5) {
String errorMsg = pktFactory.getErrorMessage(dPkt);
System.out.println(errorMsg);
return false;
}
if (receivedPkt.getLength() < PKT_LENGTH + 4 && ropCode[1] == 3) {
byte[] fileDataBytes = pktFactory.getDataBytes(dPkt);
outputStream.write(fileDataBytes);
if (isListFile) {
listBytes = outputStream.toByteArray();
} else {
FileOutputStream fstream = new FileOutputStream(Constants.EEG_DATA_PATH.concat("file.bin"), true);
// Let's get the last data pkt for the current transfering file.
fstream.write(outputStream.toByteArray());
fstream.close();
}
// It's time to send the last ACK message before Normal termination.
byte[] bNum = pktFactory.getBlockNum(dPkt);
DatagramPacket sPkt = pktFactory.createAckPacket(bNum, receivedPkt.getPort());
clientSocket.send(sPkt);
disconnect();
return true;
}
if (ropCode[1] == 3) {
if (receivingMessage) {
System.out.println("Receiving the file now..");
receivingMessage = false;
}
byte[] bNum = pktFactory.getBlockNum(dPkt);
//I've added this if and it reduces file size a little (it was more than 0,5kB bigger)
if (previousBlockNumber != bNum[1]) {
byte[] fileDataBytes = pktFactory.getDataBytes(dPkt);
previousBlockNumber = bNum[1];
outputStream.write(fileDataBytes);
}
/* For each received DATA pkt we need to send ACK pkt back. **/
DatagramPacket sPkt = pktFactory.createAckPacket(bNum, receivedPkt.getPort());
clientSocket.send(sPkt);
}
} catch (SocketTimeoutException e) {
disconnect();
System.out.println("Server didn't respond and timeout occured.");
return false;
}
}
} catch (Exception e) {
System.out.println(e.getMessage());
return false;
}
I found out what was wrong. That strange behavior was the result of this line when the last packet was received:
byte[] fileDataBytes = pktFactory.getDataBytes(dPkt);
The returned array size was always equal to the specified packet length, even if the received data was smaller. In my case the last packet was 0 bytes (+4 bytes for the TFTP header), but even then an extra 512 bytes was added to the output stream.
To resolve this I overloaded the mentioned method with an extra parameter - the actual size of the received packet - used when the received data is larger than 4 bytes but smaller than the specified packet size (512 bytes). This change produces a correctly sized array for the last packet, so the received file has the correct size at the end of the operation.
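Since the PktFactory source isn't shown here, the following is only a hypothetical sketch of that overload, based on the standard TFTP DATA packet layout (2-byte opcode, 2-byte block number, then data):
byte[] getDataBytes(byte[] packet, int packetLength) {
    // packetLength is DatagramPacket.getLength(), i.e. the number of bytes actually received
    int dataLength = packetLength - 4; // strip the 4-byte TFTP header
    byte[] data = new byte[dataLength];
    System.arraycopy(packet, 4, data, 0, dataLength);
    return data;
}
In the receive loop this would be called as pktFactory.getDataBytes(dPkt, receivedPkt.getLength()), so the final short (or empty) DATA packet contributes only its real payload instead of a full 512-byte buffer.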
What is really the correct way of reading a file from a socket? The loop reading the file doesn't end even though the client side has finished writing the file. I even tried printing the buffer position and length to see whether I still had data to read.
Here is my code for reading the file.
private void readActualData(SocketChannel socketChannel) {
RandomAccessFile aFile = null;
System.out.println("Reading actual Data");
try {
aFile = new RandomAccessFile(path, "rw");
ByteBuffer buffer = ByteBuffer.allocate(50000000);
FileChannel fileChannel = aFile.getChannel();
int length;
while ((length = socketChannel.read(buffer)) >= 0 || buffer.position() > 0) {
buffer.flip();
fileChannel.write(buffer);
buffer.compact();
System.out.println("Length : "+length+" and Buffer position : "+buffer.position());
}
fileChannel.close();
System.out.println("End of file reached..Done Reading");
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
This code assumes the peer closes the socket when the file has been completely sent. If that isn't the case, you need something a lot more complex, starting by transmitting the file length ahead of the file and then limiting the amount read from the socket to exactly that length. I provided an example for blocking-mode sockets here, but adapting it to NIO is non-trivial.
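As a rough illustration only (not the asker's protocol), a length-prefixed transfer over a blocking-mode SocketChannel could look like this; it assumes the sender first writes an 8-byte length, e.g. via DataOutputStream.writeLong():
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

static void readLengthPrefixedFile(SocketChannel socketChannel, Path path) throws IOException {
    ByteBuffer header = ByteBuffer.allocate(Long.BYTES);
    while (header.hasRemaining()) {
        if (socketChannel.read(header) == -1)
            throw new EOFException("stream closed before the length prefix arrived");
    }
    header.flip();
    long remaining = header.getLong(); // big-endian, matching DataOutputStream.writeLong()
    ByteBuffer buffer = ByteBuffer.allocate(64 * 1024);
    try (FileChannel fileChannel = FileChannel.open(path,
            StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
        while (remaining > 0) {
            buffer.clear();
            buffer.limit((int) Math.min(buffer.capacity(), remaining));
            int count = socketChannel.read(buffer);
            if (count == -1)
                throw new EOFException("stream closed mid-file");
            buffer.flip();
            while (buffer.hasRemaining())
                fileChannel.write(buffer);
            remaining -= count;
        }
    }
}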
I am trying to archive a list of files in zip format and then download it for the user on the fly...
I am facing an out-of-memory issue when downloading a zip of 1 GB in size.
Please help me resolve this without increasing the JVM heap size. I would like to flush the stream periodically.
I am trying to flush periodically, but this is not working for me.
Please find my code attached below:
try{
ServletOutputStream out = response.getOutputStream();
ZipOutputStream zip = new ZipOutputStream(out);
response.setContentType("application/octet-stream");
response.addHeader("Content-Disposition",
"attachment; filename=\"ResultFiles.zip\"");
//adding multiple files to zip
ZipUtility.addFileToZip("c:\\a", "print1.txt", zip);
ZipUtility.addFileToZip("c:\\a", "print2.txt", zip);
ZipUtility.addFileToZip("c:\\a", "print3.txt", zip);
ZipUtility.addFileToZip("c:\\a", "print4.txt", zip);
zip.flush();
zip.close();
out.close();
} catch (ZipException ex) {
System.out.println("zip exception");
} catch (Exception ex) {
System.out.println("exception");
ex.printStackTrace();
}
public class ZipUtility {
static public void addFileToZip(String path, String srcFile,
ZipOutputStream zip) throws Exception {
File file = new File(path + "\\" + srcFile);
boolean exists = file.exists();
if (exists) {
long fileSize = file.length();
int buffersize = (int) fileSize;
byte[] buf = new byte[buffersize];
int len;
FileInputStream fin = new FileInputStream(path + "\\" + srcFile);
zip.putNextEntry(new ZipEntry(srcFile));
int bytesread = 0, bytesBuffered = 0;
while ((bytesread = fin.read(buf)) > -1) {
zip.write(buf, 0, bytesread);
bytesBuffered += bytesread;
if (bytesBuffered > 1024 * 1024) { //flush after 1mb
bytesBuffered = 0;
zip.flush();
}
}
zip.closeEntry();
zip.flush();
fin.close();
}
}
}
You want to use chunked encoding to send a file that large; otherwise the servlet container will try to figure out the size of the data you are sending before sending it, so it can set the Content-Length header. Since you are compressing files, you don't know the size of the data you're sending. Chunked encoding allows you to send the response in smaller pieces. Don't set the content length of the stream. You might try using curl or something similar to see the HTTP headers in the response you're getting from the server. If it isn't chunked, then you'll want to figure that out. You'll want to research how to force the servlet container to send chunked encoding; you might have to add this to the response header to make it do so:
response.setHeader("Transfer-Encoding", "chunked");
The other option would be to compress the file into a temporary file with File.createTempFile(), and then send the contents of that. If you compress to a temp file first, you know how big the file is and can set the content length for the servlet.
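A rough sketch of that second option, assuming a servlet context and reusing the asker's ZipUtility helper (only a couple of the files are shown):
File tempZip = File.createTempFile("ResultFiles", ".zip");
try {
    ZipOutputStream zip = new ZipOutputStream(
            new BufferedOutputStream(new FileOutputStream(tempZip)));
    try {
        ZipUtility.addFileToZip("c:\\a", "print1.txt", zip);
        ZipUtility.addFileToZip("c:\\a", "print2.txt", zip);
        // ... add the remaining files
    } finally {
        zip.close();
    }
    response.setContentType("application/octet-stream");
    response.addHeader("Content-Disposition", "attachment; filename=\"ResultFiles.zip\"");
    response.setContentLength((int) tempZip.length()); // the size is known now
    ServletOutputStream out = response.getOutputStream();
    FileInputStream fin = new FileInputStream(tempZip);
    try {
        byte[] buf = new byte[8192];
        int count;
        while ((count = fin.read(buf)) != -1) {
            out.write(buf, 0, count);
        }
    } finally {
        fin.close();
    }
} finally {
    tempZip.delete();
}
Note that setContentLength() takes an int, so for archives larger than 2 GB you would have to set the Content-Length header as a string instead.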
I guess you are digging in the wrong direction. Try replacing the servlet output stream with a file stream and see if the issue is still there. I suspect your web container tries to collect the whole servlet output to calculate Content-Length before sending the HTTP headers.
Another thing: you are performing your close() inside your try/catch block. This leaves a chance for the stream to stay open on your files if you have an exception, as well as not giving the stream the chance to flush to disk.
Always make sure your close is in a finally block (at least until you can get Java 7 with its try-with-resources block)
//build the byte buffer for transferring the data from the file
//to the zip.
final int BUFFER = 2048;
byte [] data = new byte[BUFFER];
File zipFile = new File("C:\\myZip.zip");
BufferedInputStream in = null;
ZipOutputStream zipOut = null;
try {
//create the out stream to send the file to and zip it.
//we want it buffered as that is more efficient.
FileOutputStream destination = new FileOutputStream(zipFile);
zipOut = new ZipOutputStream(new BufferedOutputStream(destination));
zipOut.setMethod(ZipOutputStream.DEFLATED);
//create the input stream (buffered) to read in the file so we
//can write it to the zip.
in = new BufferedInputStream(new FileInputStream(fileToZip), BUFFER);
//now "add" the file to the zip (in object speak only).
ZipEntry zipEntry = new ZipEntry(fileName);
zipOut.putNextEntry(zipEntry);
//now actually read from the file and write the file to the zip.
int count;
while((count = in.read(data, 0, BUFFER)) != -1) {
zipOut.write(data, 0, count);
}
}
catch (FileNotFoundException e) {
throw e;
}
catch (IOException e) {
throw e;
}
finally {
//whether we succeed or not, close the streams.
if(in != null) {
try {
in.close();
}
catch (IOException e) {
//note and do nothing.
e.printStackTrace();
}
}
if(zipOut != null) {
try {
zipOut.close();
}
catch (IOException e) {
//note and do nothing.
e.printStackTrace();
}
}
}
Now if you need to loop, you can just loop around the part where you add more files. Perhaps pass in an array of files and loop over it. This code worked for me for zipping up a file.
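If Java 7's try-with-resources is available, the same structure can be written more compactly; a sketch using the same variable names as above:
final int BUFFER = 2048;
byte[] data = new byte[BUFFER];
File zipFile = new File("C:\\myZip.zip");
try (BufferedInputStream in = new BufferedInputStream(new FileInputStream(fileToZip), BUFFER);
     ZipOutputStream zipOut = new ZipOutputStream(
             new BufferedOutputStream(new FileOutputStream(zipFile)))) {
    zipOut.setMethod(ZipOutputStream.DEFLATED);
    zipOut.putNextEntry(new ZipEntry(fileName));
    int count;
    while ((count = in.read(data, 0, BUFFER)) != -1) {
        zipOut.write(data, 0, count);
    }
    // both streams are closed automatically, even if an exception is thrown
}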
Don't size your buf based on the file size, use a fixed size buffer.
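Applied to the asker's utility, that might look like the following sketch (same signature, a fixed 8 KB buffer, and the input stream closed in a finally block):
static public void addFileToZip(String path, String srcFile, ZipOutputStream zip) throws Exception {
    File file = new File(path, srcFile);
    if (!file.exists()) {
        return;
    }
    byte[] buf = new byte[8192]; // fixed size, independent of the file's length
    FileInputStream fin = new FileInputStream(file);
    try {
        zip.putNextEntry(new ZipEntry(srcFile));
        int bytesread;
        while ((bytesread = fin.read(buf)) > -1) {
            zip.write(buf, 0, bytesread);
        }
        zip.closeEntry();
    } finally {
        fin.close();
    }
}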
I am currently writing a Java TCP server to handle communication with a client (which I didn't write). When the server, hosted on Windows, responds to the client with the number of records received, the client doesn't read the integer correctly and instead reads it as an empty packet. When the same server code, hosted on my Mac, responds to the client with the number of records received, the client reads the packet and responds correctly. Through my research I haven't found an explanation that solves the issue. I have tried reversing the bytes (Integer.reverseBytes) before calling the writeInt method, and that didn't resolve the issue. Any ideas are appreciated.
Brian
After comparing the pcap files there are no obvious differences in how they are sent. The first byte is sent followed by the last 3. Both systems send the correct number of records.
Yes I'm referring to the DataOutputStream.writeInt() method. //Code added
public void run() {
try {
InputStream in = socket.getInputStream();
DataOutputStream datOut = new DataOutputStream(socket.getOutputStream());
datOut.writeByte(1); //sends correctly and read correctly by client
datOut.flush();
//below is used to read bytes to determine length of message
int bytesRead=0;
int bytesToRead=25;
byte[] input = new byte[bytesToRead];
while (bytesRead < bytesToRead) {
int result = in.read(input, bytesRead, bytesToRead - bytesRead);
if (result == -1) break;
bytesRead += result;
}
try {
inputLine = getHexString(input);
String hexLength = inputLine.substring(46, 50);
System.out.println("hexLength: " + hexLength);
System.out.println(inputLine);
//used to read entire sent message
bytesRead = 0;
bytesToRead = Integer.parseInt(hexLength, 16);
System.out.println("bytes to read " + bytesToRead);
byte[] dataInput = new byte[bytesToRead];
while (bytesRead < bytesToRead) {
int result = in.read(dataInput, bytesRead, bytesToRead - bytesRead);
if (result == -1) break;
bytesRead += result;
}
String data = getHexString(dataInput);
System.out.println(data);
//Sends received data to class to process
ProcessTel dataValues= new ProcessTel(data);
String[] dataArray = new String[10];
dataArray = dataValues.dataArray();
//assigns returned number of records to be written to client
int towrite = Integer.parseInt(dataArray[0].trim());
//Same write method on Windows & Mac...works on Mac but not Windows
datOut.writeInt(towrite);
System.out.println("Returned number of records: " + Integer.parseInt(dataArray[0].trim()) );
datOut.flush();
} catch (Exception ex) {
Logger.getLogger(ServerThread.class.getName()).log(Level.SEVERE, null, ex);
}
datOut.close();
in.close();
socket.close();
} catch (IOException e) {
e.printStackTrace();
}
}
As described in its Javadoc, DataOutputStream.writeInt() uses network byte order as per the TCP/IP RFCs. Is that the method you are referring to?
No, x86 processors only use little-endian byte order; it doesn't vary with the OS. Something else is wrong.
I suggest using Wireshark to capture the stream from a working Mac server and a non-working Windows server and comparing the two.
Some general comments on your code:
int bytesRead=0;
int bytesToRead=25;
byte[] input = new byte[bytesToRead];
while (bytesRead < bytesToRead) {
int result = in.read(input, bytesRead, bytesToRead - bytesRead);
if (result == -1) break;
bytesRead += result;
}
This EOF handling is hokey. It means that you don't know whether or not you've actually read the full 25 bytes. And if you don't, you'll assume that the bytes-to-send is 0.
Worse, you copy-and-paste this code lower down, relying on proper initialization of the same variables. If there's a typo, you'll never know it. You could refactor it into its own method (with tests), or you could call DataInputStream.readFully().
inputLine = getHexString(input);
String hexLength = inputLine.substring(46, 50);
You're converting to hex in order to extract an integer? Why? And more importantly, if you have any endianness issues, this is probably the reason.
I was originally going to recommend using a ByteBuffer to extract values, but on a second look I think you should wrap your input stream with a DataInputStream. That would allow you to read complete byte[] buffers without the need for a loop, and it would let you get rid of the byte-to-hex-to-integer conversions: you'd simply call readInt().
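A sketch of what that might look like, assuming `socket` is the same socket as in the run() method (the names `dataIn`, `header`, `bytesToRead`, and `dataInput` are for illustration). It also assumes, based on the substring(46, 50) of the hex dump, that the message length is the last two bytes of the 25-byte header in big-endian order; that layout is an inference, not something confirmed by the question:
DataInputStream dataIn = new DataInputStream(socket.getInputStream());
byte[] header = new byte[23];
dataIn.readFully(header); // throws EOFException if the full header never arrives
int bytesToRead = dataIn.readUnsignedShort(); // the 2-byte length previously parsed from hex
byte[] dataInput = new byte[bytesToRead];
dataIn.readFully(dataInput); // blocks until the whole message has been read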
But, continuing on:
String[] dataArray = new String[10];
dataArray = dataValues.dataArray();
Do you realize that the new String[10] is being thrown away by the very next line? Is that what you want?
int towrite = Integer.parseInt(dataArray[0].trim());
datOut.writeInt(towrite);
System.out.println("Returned number of records: " + Integer.parseInt(dataArray[0].trim()) );
If you're using logging statements, print what you're actually using (towrite). Don't recalculate it. There's too much of a chance to make a mistake.
} catch (Exception ex) {
Logger.getLogger(ServerThread.class.getName()).log(Level.SEVERE, null, ex);
}
// ...
} catch (IOException e) {
e.printStackTrace();
}
Do either or both of these catch blocks get invoked? And why do they send their output to different places? For that matter, if you have a logger, why are you inserting System.out.println() statements?