Java - FTP program corrupting file during transfer

I've made a basic client-server FTP program using sockets, but for some reason files are getting corrupted during the transfer. In the case below, I'm pushing a file to the server from the client. It almost works: some files (such as a .png) transfer and open fine, but others (a .docx) don't. Any file I transfer ends up with a different MD5 hash from the one I sent.
Client code:
File file = null;
FTPDataBlock transferBlock;
int numBytesRead = 0;
int blockNumber = 1;
int blockSize = 1024;
byte[] block = new byte[blockSize];

fc = new JFileChooser();

// select file to upload
int returnVal = fc.showOpenDialog(Client.this);

if (returnVal == JFileChooser.APPROVE_OPTION) {
    file = fc.getSelectedFile();
    try {
        // get total number of blocks and send to server
        int totalNumBlocks = (int) Math.ceil((file.length() * 1.0) / blockSize);
        System.out.println("File length is: " + file.length());
        FTPCommand c = new FTPCommand("PUSH", Integer.toString(totalNumBlocks));
        oos = new ObjectOutputStream(sock.getOutputStream());
        oos.writeObject(c);
        oos.flush();

        // send to server block by block
        FileInputStream fin = new FileInputStream(file);
        while ((numBytesRead = fin.read(block)) != -1) {
            transferBlock = new FTPDataBlock(file.getName(), blockNumber, block);
            blockNumber++;
            System.out.println("Sending block " + transferBlock.getBlockNumber() + " of " + totalNumBlocks);
            oos = new ObjectOutputStream(sock.getOutputStream());
            oos.writeObject(transferBlock);
            oos.flush();
        }
        fin.close();
        System.out.println("PUSH Complete");

        // get response from server
        ois = new ObjectInputStream(sock.getInputStream());
        FTPResponse response = (FTPResponse) ois.readObject();
        statusArea.setText(response.getResponse());
    } catch (IOException | ClassNotFoundException e1) {
        // TODO Auto-generated catch block
        e1.printStackTrace();
    }
}
Server Code:
else if (cmd.getCommand().equals("PUSH")) {
    // get total number of file blocks
    int totalNumBlocks = Integer.parseInt(cmd.getParameters());

    // get first block
    in = new ObjectInputStream(sock.getInputStream());
    FTPDataBlock currentBlock = (FTPDataBlock) in.readObject();

    // create file and write first block to file
    File file = new File(workingDirectory + File.separator + currentBlock.getFilename());
    FileOutputStream fOut = new FileOutputStream(file);
    fOut.write(currentBlock.getData());
    fOut.flush();

    // get remaining blocks
    while (currentBlock.getBlockNumber() + 1 <= totalNumBlocks) {
        in = new ObjectInputStream(sock.getInputStream());
        currentBlock = (FTPDataBlock) in.readObject();
        fOut.write(currentBlock.getData());
        fOut.flush();
    }
    fOut.close();

    // send response
    FTPResponse response = new FTPResponse("File Received OK");
    out = new ObjectOutputStream(sock.getOutputStream());
    out.writeObject(response);
}
FTPDataBlock class:
public class FTPDataBlock implements Serializable {
    private static final long serialVersionUID = 1L;

    private String filename;
    private int blockNumber; // current block number
    private byte[] data;

    // constructors & accessors
}
I'm sure it's something small that I'm missing here. Any ideas?

This happened because the server was writing whole 1024-byte blocks to the file, even when fewer than 1024 bytes had actually been read into the block.
The solution (thanks to @kdgregory) was to use the return value of FileInputStream.read() to populate a new attribute in my FTPDataBlock class, int bytesWritten.
Then on the server side I could use:
FileOutputStream.write(currentBlock.getData(), 0, currentBlock.getBytesWritten());
to write the exact number of bytes to the file, instead of the whole block every time.
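For reference, here is a minimal, self-contained sketch of the idea. The bytesWritten field, the extra constructor argument, and the local file-to-file harness are assumptions based on the description above, not the original socket code:

import java.io.*;

class FTPDataBlock implements Serializable {
    private static final long serialVersionUID = 1L;
    private final String filename;
    private final int blockNumber;
    private final byte[] data;
    private final int bytesWritten; // how many bytes of data[] are valid

    FTPDataBlock(String filename, int blockNumber, byte[] data, int bytesWritten) {
        this.filename = filename;
        this.blockNumber = blockNumber;
        this.data = data.clone(); // copy, so reusing the caller's buffer is safe
        this.bytesWritten = bytesWritten;
    }

    byte[] getData() { return data; }
    int getBytesWritten() { return bytesWritten; }
}

public class BlockCopyDemo {
    public static void main(String[] args) throws IOException {
        byte[] block = new byte[1024];
        int numBytesRead;
        int blockNumber = 1;
        try (FileInputStream fin = new FileInputStream(args[0]);
             FileOutputStream fOut = new FileOutputStream(args[1])) {
            while ((numBytesRead = fin.read(block)) != -1) {
                FTPDataBlock b = new FTPDataBlock(args[0], blockNumber++, block, numBytesRead);
                // receiver side: write only the valid bytes, not the whole 1024-byte array
                fOut.write(b.getData(), 0, b.getBytesWritten());
            }
        }
    }
}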

I think there may be a problem with the file extension. Provide an option on the client side, like:
FILE_TO_RECEIVED = JOptionPane.showInputDialog("Please enter the Drive followed by the file name to be saved. Eg: D:/xyz.jpg");
This helps you supply the correct file name and extension.
Then I think you should also provide the file size on the client side, like:
public final static int FILE_SIZE = 6022386;
Then, in the byte-array block you used, you can make the following changes:
try {
    sock = new Socket(SERVER, SOCKET_PORT);
    byte[] mybytearray = new byte[FILE_SIZE];
    InputStream is = sock.getInputStream();
    fos = new FileOutputStream(FILE_TO_RECEIVED);
    bos = new BufferedOutputStream(fos);
    bytesRead = is.read(mybytearray, 0, mybytearray.length);
    current = bytesRead;
    do {
        bytesRead = is.read(mybytearray, current, (mybytearray.length - current));
        if (bytesRead >= 0) current += bytesRead;
    } while (bytesRead > -1);
    bos.write(mybytearray, 0, current);
    bos.flush();
}

Related

How to create File from list of byte arrays in Android?

I am trying to transfer a .mp4 file using WebRTC and its DataChannel. In order to do that, I am breaking the file into chunks like below:
FileInputStream is = new FileInputStream(file);
byte[] chunk = new byte[260000];
int chunkLen = 0;
sentFileByte = new ArrayList<>();
while ((chunkLen = is.read(chunk)) != -1) {
    sentFileByte.add(chunk);
}
After that, sending the chunks by index like:
byte[] b = sentFileByte.get(index);
ByteBuffer bb = ByteBuffer.wrap(b);
bb.put(b);
bb.flip();
dataChannel.send(new DataChannel.Buffer(bb, true));
On the receiver end I am receiving the chunks and adding them to an ArrayList:
receivedFileByteArr.add(chunkByteArr);
After receiving all the chunks successfully I am trying to convert these in to a file like below:
String path = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS).getAbsolutePath() + "/" + fileName;
File file = new File(path);
try {
    FileOutputStream fileOutputStream = new FileOutputStream(file);
    for (int i = 0; i < receivedFileByteArr.size(); i++) {
        fileOutputStream.write(receivedFileByteArr.get(i));
    }
    fileOutputStream.close();
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
After completing all these steps, the file is created successfully and the file size is the same, but the file is not playable in any video player. I guess I am making some mistake with FileInputStream and FileOutputStream. I need help fixing this.
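For comparison with the accepted fix above, here is a sketch of a chunking loop that copies only the bytes each read() actually returned into its own array (an illustration, not a posted answer to this question; the class and method names are placeholders):

import java.io.*;
import java.util.*;

public class FileChunker {
    // Illustrative helper: reads a file into chunks, each holding only the bytes actually read.
    static List<byte[]> chunkFile(File file, int chunkSize) throws IOException {
        List<byte[]> chunks = new ArrayList<>();
        try (FileInputStream is = new FileInputStream(file)) {
            byte[] buffer = new byte[chunkSize];
            int chunkLen;
            while ((chunkLen = is.read(buffer)) != -1) {
                // copy the valid portion into its own array instead of re-adding the shared buffer
                chunks.add(Arrays.copyOf(buffer, chunkLen));
            }
        }
        return chunks;
    }
}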

Resuming a file upload in java Socket

package experiments;

import java.io.*;
import java.net.Socket;

/**
 * @author User
 */
/* THE CLIENT */
public class UploadManager {

    public static void setup(String address, int port, String path) {
        try (Socket s = new Socket(address, port);
             OutputStream out = s.getOutputStream();
             InputStream in = s.getInputStream();
             DataOutputStream dos = new DataOutputStream(out);
             DataInputStream dis = new DataInputStream(in)) {

            /* send file name and size to server */
            System.out.println("processing file now");
            File from = new File(path);
            if (from.exists() && from.canRead()) {
                String FILENAME = from.getName();
                long FILESIZE = from.length();
                // String FILEHASH

                /* write all, 2, to SERVER */
                dos.writeLong(FILESIZE);
                dos.writeUTF(FILENAME);
                // dos.writeUTF(FILEHASH);
                dos.flush();

                /* what does SERVER HAS TO SAY? */
                String RESPONSE_FROM_SERVER_1 = dis.readUTF();
                if (RESPONSE_FROM_SERVER_1.equalsIgnoreCase("SEND")) {
                    uploadFresh(from, dos);
                }

                /* in case there was an interruption we need to resume from an offset */
                if (RESPONSE_FROM_SERVER_1.equalsIgnoreCase("RESUME")) {
                    /* length already sent */
                    System.out.println("resuming upload");
                    String LENGTH_ALREADY_READ = dis.readUTF();
                    long OFF_SET = Long.parseLong(LENGTH_ALREADY_READ);
                    /* get the file name so we know the exact file */
                    String FILE_REAL_NAME = dis.readUTF();
                    resume(FILE_REAL_NAME, OFF_SET, dos);
                }
            } else {
                System.out.println("Couldn't read: err");
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    /** the resume method **/
    private static void resume(String FILE_REAL_NAME, long OFF_SET, DataOutputStream to) throws Exception {
        /* copy from the stream or disk to server */
        FileInputStream reader = null;
        try {
            reader = new FileInputStream(new File("c:\\movies\\hallow.mp4", FILE_REAL_NAME));
            byte[] size = new byte[8192];
            int COUNT_BYTES_READ;
            reader.skip(OFF_SET); // skip the length already written
            while ((COUNT_BYTES_READ = reader.read(size)) > 0) {
                to.write(size);
                to.flush();
            }
        } finally {
            if (reader != null) reader.close();
            to.close();
        }
    }

    /** and the normal send method which works as expected **/
    private static void uploadFresh(File from, DataOutputStream to) throws Exception {
        /* copy from the stream or disk to server */
        FileInputStream reader = null;
        try {
            reader = new FileInputStream(from);
            byte[] size = new byte[1024];
            int COUNT_BYTES_READ;
            while ((COUNT_BYTES_READ = reader.read(size)) > 0) {
                to.write(size, 0, COUNT_BYTES_READ);
                to.flush();
            }
        } finally {
            if (reader != null) reader.close();
            to.close();
        }
    }
}

// THE SERVER
public class UserServer {

    String ip_Adress;
    int Port;
    ServerSocket ss;
    String path;

    public UserServer(String ip_Adress, int Port) {
        this.Port = Port;
        this.ip_Adress = ip_Adress;
        this.path = path;
    }

    public void startserver() throws IOException {
        FileOutputStream fos = null;
        // String realhash;
        out.println("starting server");
        this.ss = new ServerSocket(Port);
        out.println("starting server on port" + Port);
        out.println("waiting for client conection......");
        Socket visitor = this.ss.accept();
        out.println("1 User connected to port");

        /* reading file name and size from upload */
        try (OutputStream ot = visitor.getOutputStream();
             InputStream in = visitor.getInputStream();
             DataOutputStream dos = new DataOutputStream(ot);
             DataInputStream dis = new DataInputStream(in)) {

            System.out.println("processing debug");
            long FILE_SIZE = dis.readLong();
            System.out.println(FILE_SIZE + "processing");
            String FILE_NAME = dis.readUTF();
            System.out.println(FILE_NAME + "processing");
            // String FILEHASH = dis.readUTF();
            // System.out.println(FILEHASH + "processing");
            File file = new File("c:\\aylo\\" + FILE_NAME);

            /* what do We Have TO Say? */
            if (file.exists()) {
                long SIZE = file.length();
                if (SIZE < FILE_SIZE) {
                    String RESUME = "RESUME";
                    // tell client to resume the upload
                    dos.writeUTF(RESUME);
                    /* sending the resuming length in string */
                    String size = String.valueOf(SIZE);
                    dos.writeUTF(size);
                    dos.flush();
                    fos = new FileOutputStream(file, true); // append to existing file
                    byte[] b = new byte[1024];
                    int COUNT_BYTES_READ;
                    while ((COUNT_BYTES_READ = dis.read(b)) > 0) {
                        fos.write(b);
                        fos.flush();
                    }
                }
            } else {
            }

            if (!file.exists()) {
                file.getParentFile().mkdirs();
                file.createNewFile();
                String SEND = "SEND";
                dos.writeUTF(SEND);
                dos.flush();
                fos = new FileOutputStream(file);
                byte b[] = new byte[1024];
                int s;
                while ((s = dis.read(b)) > 0) {
                    fos.write(b, 0, s);
                    fos.flush();
                    System.out.println(s + "processing");
                }
            }
            System.out.println("i'm done reading ");
        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            fos.close();
            if (this.ss != null) this.ss.close();
        }
    }

    // the main
    public static void main(String[] arg) {
        UserServer us = new UserServer("localhost", 2089);
        us.startserver();
        // start client
        UploadManager.setup("localhost", 2089, "c:\\movies\\hallow.mp4");
    }
}
I've searched and searched most forums and OS about this and found nothing.
What I'm trying to do is to learn or figure out how I can resume a file interrupted on upload through java socket.
Right now my code partially works, but there's always a break or gap, which I assumed is caused by the new FileOutputStream(from, true) constructor.
Can anyone please check my methods and help point out my mistakes? A sample code demo would be nice too. Thanks!
Please review Java conventions on variable names; it took me quite a while to find your problem, and a lot of it is down to the fact that your code looks unfamiliar. Also, really, review your names. Naming a byte buffer used to transfer bytes 'size' is similar to naming your favourite cat 'horse'. It's going to cause confusion.
The problem: You have quite a few places where you do this:
byte[] buffer = new byte[8192];
int bytesRead = networkIn.read(buffer);
fileOut.write(buffer);
and this is incorrect. The 'write' call will write the entire buffer, all 8192 bytes, even if fewer than 8192 bytes were actually read. What you need to do is this:
byte[] buffer = new byte[8192];
int bytesRead = networkIn.read(buffer);
fileOut.write(buffer, 0, bytesRead);
NB: It is possible there are more bugs in your code; the above is one that's definitely going to cause issues and would explain why a resume operation produces a corrupted file.
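For completeness, a method-sized sketch of the counted copy loop this answer describes (the class and method names are placeholders, not from the poster's code):

import java.io.*;

class StreamCopy {
    // Copies everything from `in` to `out`, writing only the bytes each read() actually returned.
    static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        int bytesRead;
        while ((bytesRead = in.read(buffer)) > 0) {
            out.write(buffer, 0, bytesRead);
        }
        out.flush();
    }
}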

Error while sending large files through socket

I'm trying to send large files via a socket. The program works fine for small files (such as HTML pages or PDFs), but when I send files over 3/4 MB the output is always corrupted (viewing it with a text editor, I noticed that the last few lines are always missing).
Here's the code of the server:
BufferedInputStream in = null;
FileOutputStream fout = null;
try {
    server = new ServerSocket(port);
    sock = server.accept();
    in = new BufferedInputStream(sock.getInputStream());
    setPerc(0);
    received = 0;
    int incByte = -1;
    fout = new FileOutputStream(path + name, true);
    long size = length;
    do {
        int buffSize;
        if (size >= 4096) {
            buffSize = 4096;
        } else {
            buffSize = 1;
        }
        byte[] o = new byte[buffSize];
        incByte = in.read(o, 0, buffSize);
        fout.write(o);
        received += buffSize;
        setPerc(calcPerc(received, length));
        size -= buffSize;
        //d("BYTE LETTI => " + incByte);
    } while (size > 0);
    server.close();
} catch (IOException e) {
    e("Errore nella ricezione file: " + e);
} finally {
    try {
        fout.flush();
        fout.close();
        in.close();
    } catch (IOException e) {
        e("ERRORE INCOMINGFILE");
    }
}
pr.release(port);
And here's the code of the client:
FileInputStream fin = null;
BufferedOutputStream out = null;
try {
    sock = new Socket(host, port);
    fin = new FileInputStream(file);
    out = new BufferedOutputStream(sock.getOutputStream());
    long size = file.length();
    int read = -1;
    do {
        int buffSize = 0;
        if (size >= 4096) {
            buffSize = 4096;
        } else {
            buffSize = (int) size;
        }
        byte[] o = new byte[buffSize];
        for (int i = 0; i < o.length; i++) {
            o[i] = (byte) 0;
        }
        read = fin.read(o, 0, buffSize);
        out.write(o);
        size -= buffSize;
        //d("BYTE LETTI DAL FILE => " + read);
    } while (size > 0);
} catch (UnknownHostException e) {
} catch (IOException e) {
    d("ERRORE NELL'INVIO DEL FILE: " + e);
    e.printStackTrace();
} finally {
    try {
        out.flush();
        out.close();
        fin.close();
    } catch (IOException e) {
        d("Errore nella chiusura dei socket invio");
    }
}
I think it's something related to the buffer size, but I can't figure out what's wrong here.
This is incorrect:
byte[] o = new byte[buffSize];
incByte = in.read(o, 0, buffSize);
fout.write(o);
You are reading up to buffSize bytes and then writing exactly buffSize bytes.
You are doing the same thing at the other end as well.
You may be able to get away with this when reading from a file1, but when you read from a socket then a read is liable to give you a partially filled buffer, especially if the writing end can't always keep ahead of the reading end 'cos you are hammering the network with a large transfer.
The right way to do it is:
incByte = in.read(o, 0, buffSize);
fout.write(o, 0, incByte);
1 - It has been observed that when you read from a local file, a read call will typically give you all of the bytes that you requested (subject to the file size, etc). So, if you set buffSize to the length of the file, this code would probably work when reading from a local file. But doing this is a bad idea, because you are relying on behaviour that is not guaranteed by either Java or a typical operating system.
You might have a problem e.g. here.
read = fin.read(o, 0, buffSize);
out.write(o);
Here read gives you the count of bytes you've actually just read.
On the next line you should write out only as many bytes as you've read.
In other words, you cannot expect the size of the file you're reading to be a multiple of your buffer size.
Review your server code too for the same issue.
The correct way to copy streams in Java is as follows:
while ((count = in.read(buffer)) > 0)
{
out.write(buffer, 0, count);
}
where count is an int, and buffer is a byte[] array of length > 0, typically 8k. You don't need to allocate byte arrays inside the loop, and you don't need a byte array of a specific size. Specifically, it's a complete waste of space to allocate a buffer as large as the file; it only works up to files of Integer.MAX_VALUE bytes, and it doesn't scale.
You do need to save the count returned by 'read()' and use it in the 'write()' method as shown above.
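Applied to the server loop in this question, a rough sketch of the receive side (the method wrapper and names are illustrative; it assumes, as in the original, that the expected file size is known up front):

import java.io.*;

class Receive {
    // Sketch: read exactly `length` bytes from `in`, writing only what each read() actually returned.
    static void receiveFile(InputStream in, OutputStream fout, long length) throws IOException {
        byte[] buffer = new byte[4096];
        long remaining = length;
        while (remaining > 0) {
            int incByte = in.read(buffer, 0, (int) Math.min(buffer.length, remaining));
            if (incByte == -1) {
                throw new EOFException("connection closed before the whole file arrived");
            }
            fout.write(buffer, 0, incByte); // write only the bytes actually read
            remaining -= incByte;
        }
        fout.flush();
    }
}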

Reading file and adding text to random character location

I have a text file with a sequence of 4194304 letters ranging from A-D all on one line (4 MB).
How would I randomly point to a character and replace the following characters with the contents of another file that is 100 characters long, then write the result out to a file?
I'm actually currently able to do this, but I feel it's really inefficient when I iterate it several times.
Here's an illustration of what I mentioned above:
Link to Imageshack
Here's how I'm currently achieving this:
Random rnum = new Random();
FileInputStream fin = null;
FileOutputStream fout = null;
int count = 10000;
FileInputStream fin1 = null;
File file1 = new File("fileWithSet100C.txt");
int randChar = 0;
while (cnt > 0) {
    try {
        int c = 4194304 - 100;
        randChar = rnum.nextInt(c);
        File file = new File("file.txt");
        // seems inefficient to initiate these guys over and over
        fin = new FileInputStream(file);
        fin1 = new FileInputStream(file1);
        // would like to remove this and have it just replace the original
        fout = new FileOutputStream("newfile.txt");
        int byte_read;
        int byte_read2;
        byte[] buffer = new byte[randChar];
        byte[] buffer2 = new byte[(int) file1.length()]; // 4m
        byte_read = fin.read(buffer);
        byte_read2 = fin1.read(buffer2);
        fout.write(buffer, 0, byte_read);
        fout.write(buffer2, 0, byte_read2);
        byte_read = fin.read(buffer2);
        buffer = new byte[4096]; // 4m
        while ((byte_read = (fin.read(buffer))) != -1) {
            fout.write(buffer, 0, byte_read);
        }
        cnt--;
    } catch (...) {
        ...
    } finally {
        ...
    }
    try {
        File file = new File("newfile.txt");
        fin = new FileInputStream(file);
        fout = new FileOutputStream("file.txt");
        int byte_read;
        byte[] buffer = new byte[4096]; // 4m
        byte_read = fin.read(buffer);
        while ((byte_read = (fin.read(buffer))) != -1) {
            fout.write(buffer, 0, byte_read);
        }
    } catch (...) {
        ...
    } finally {
        ...
    }
Thanks for reading!
EDIT:
For those curious, here's the code I used to solve the aforementioned problem:
String stringToInsert = "insertSTringHERE";
byte[] answerByteArray = stringToInsert.getBytes();
ByteBuffer byteBuffer = ByteBuffer.wrap(answerByteArray);
Random rnum = new Random();
randChar = rnum.nextInt(4194002); // 4MB worth of bytes
File fi = new File("file.txt");
RandomAccessFile raf = null;
try {
    raf = new RandomAccessFile(fi, "rw");
} catch (FileNotFoundException e1) {
    // TODO error handling and logging
}
FileChannel fo = raf.getChannel();
// Move to the chosen position in the file and write out the contents
// of the byteBuffer.
try {
    fo.position(randChar);
    while (byteBuffer.hasRemaining()) {
        fo.write(byteBuffer);
    }
} catch (IOException e) {
    // TODO error handling and logging
}
try {
    fo.close();
} catch (IOException e) {
    // TODO error handling and logging
}
try {
    raf.close();
} catch (IOException e) {
    // TODO error handling and logging
}
You probably want to use Java's random-access file features. Sun/Oracle has a Random Access Files tutorial that will probably be useful to you.
If you can't use Java 7, then look at RandomAccessFile which also has seek functionality and has existed since Java 1.0.
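As a rough illustration of the seek-and-overwrite approach with RandomAccessFile (file names and sizes simply mirror the question; this is a sketch, not tested against the asker's setup):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Random;

public class OverwriteAtRandomOffset {
    public static void main(String[] args) throws IOException {
        byte[] replacement = new byte[100];
        // read the 100-character block from the second file
        try (RandomAccessFile blockFile = new RandomAccessFile("fileWithSet100C.txt", "r")) {
            blockFile.readFully(replacement);
        }
        // overwrite 100 bytes of the big file in place, no full-file copy needed
        try (RandomAccessFile raf = new RandomAccessFile("file.txt", "rw")) {
            long offset = new Random().nextInt(4194304 - 100); // stay at least 100 bytes from the end
            raf.seek(offset);       // jump straight to the random position
            raf.write(replacement); // replace the next 100 bytes
        }
    }
}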
First off, for your files you could have the Files as global variables. This would allow you to use the file whenever you needed it without reading it again. Also note that if you keep making new files then you will lose the data that you have already acquired.
For example:
public class Foo {
    // Global Vars //
    File file;

    public Foo(String location) {
        // Do Something
        file = new File(location);
    }

    public void add() {
        // Add
    }
}
Answering your question, I would first read both files and then make all the changes you want in memory. After you have made all the changes, I would then write the changes to the file.
However, if the files are very large, then I would make all the changes one by one on the disk... it will be slower, but you will not run out of memory this way. For what you are doing I doubt you could use a buffer to help counter how slow it would be.
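A small sketch of that read-modify-write approach, assuming the whole 4 MB file fits comfortably in memory (file names mirror the question):

import java.nio.file.*;
import java.util.Random;

public class ReplaceInMemory {
    public static void main(String[] args) throws Exception {
        // Load the whole 4 MB file and the 100-character block.
        byte[] data = Files.readAllBytes(Paths.get("file.txt"));
        byte[] block = Files.readAllBytes(Paths.get("fileWithSet100C.txt"));
        // Overwrite 100 bytes at a random offset in memory, then write the result back.
        int offset = new Random().nextInt(data.length - block.length);
        System.arraycopy(block, 0, data, offset, block.length);
        Files.write(Paths.get("file.txt"), data);
    }
}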
My overall suggestion would be to use arrays. For example I would do the following...
public char[] addCharsToString(String str, char[] newChars, int index) {
    char[] string = str.toCharArray();
    char[] tmp = new char[string.length + newChars.length];
    System.arraycopy(string, 0, tmp, 0, index);                               // original chars before the index
    System.arraycopy(newChars, 0, tmp, index, newChars.length);               // the inserted chars
    System.arraycopy(string, index, tmp, index + newChars.length, string.length - index); // the rest
    return tmp;
}
Hope this helps!

How to split file into chunks while still writing into it?

I tried to create byte array blocks from a file while the process was still using the file for writing. Actually, I am storing video into the file and I would like to create chunks from the same file while recording.
The following method was supposed to read blocks of bytes from file:
private byte[] getBytesFromFile(File file) throws IOException {
    InputStream is = new FileInputStream(file);
    long length = file.length();
    int numRead = 0;
    byte[] bytes = new byte[(int) length - mReadOffset];
    numRead = is.read(bytes, mReadOffset, bytes.length - mReadOffset);
    if (numRead != (bytes.length - mReadOffset)) {
        throw new IOException("Could not completely read file " + file.getName());
    }
    mReadOffset += numRead;
    is.close();
    return bytes;
}
But the problem is that all array elements are set to 0, and I guess it is because the writing process locks the file.
I would be very thankful if anyone could show any other way to create file chunks while writing into the file.
Solved the problem:
private void getBytesFromFile(File file) throws IOException {
    FileInputStream is = new FileInputStream(file); // videorecorder stores video to file
    java.nio.channels.FileChannel fc = is.getChannel();
    java.nio.ByteBuffer bb = java.nio.ByteBuffer.allocate(10000);
    int chunkCount = 0;
    byte[] bytes;
    while (fc.read(bb) >= 0) {
        bb.flip();
        // save the part of the file into a chunk
        bytes = bb.array();
        storeByteArrayToFile(bytes, mRecordingFile + "." + chunkCount); // mRecordingFile is the (String) path to file
        chunkCount++;
        bb.clear();
    }
}

private void storeByteArrayToFile(byte[] bytesToSave, String path) throws IOException {
    FileOutputStream fOut = new FileOutputStream(path);
    try {
        fOut.write(bytesToSave);
    } catch (Exception ex) {
        Log.e("ERROR", ex.getMessage());
    } finally {
        fOut.close();
    }
}
If it were me, I would have it chunked by the process/thread writing to the file. This is how Log4j seems to do it, at any rate. It should be possible to make an OutputStream which automatically starts writing to a new file every N bytes.
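A minimal sketch of such a rolling OutputStream; the class and its behaviour are my own illustration, not an existing library class:

import java.io.*;

// Illustration only: an OutputStream that rolls over to a new numbered file every maxBytes bytes.
public class RollingFileOutputStream extends OutputStream {
    private final String basePath;
    private final long maxBytes;
    private long written;
    private int chunkIndex;
    private OutputStream current;

    public RollingFileOutputStream(String basePath, long maxBytes) throws IOException {
        this.basePath = basePath;
        this.maxBytes = maxBytes;
        openNextChunk();
    }

    private void openNextChunk() throws IOException {
        if (current != null) {
            current.close();
        }
        current = new FileOutputStream(basePath + "." + chunkIndex++);
        written = 0;
    }

    @Override
    public void write(int b) throws IOException {
        if (written >= maxBytes) {
            openNextChunk();
        }
        current.write(b);
        written++;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        // split a write that would cross a chunk boundary
        while (len > 0) {
            if (written >= maxBytes) {
                openNextChunk();
            }
            int n = (int) Math.min(len, maxBytes - written);
            current.write(b, off, n);
            written += n;
            off += n;
            len -= n;
        }
    }

    @Override
    public void flush() throws IOException {
        current.flush();
    }

    @Override
    public void close() throws IOException {
        current.close();
    }
}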
