I need to download .png files from an FTP server in Java.
I have 3 different servers, each containing a folder with exactly the same .png files.
On server 1 :
If I download my 4686-byte .png file stored on this server with FTPClient (org.apache.commons.net.ftp), I get a 4706-byte .png file that I can't open.
If I download it with Total Commander, I get a 4686-byte .png file that opens fine.
On servers 2 and 3:
With both FTPClient and Total Commander, I get a 4686-byte file in each case, and I can open it without problems.
My code :
FTPClient ftpClient = new FTPClient();
ftpClient.connect("...", PORT);
ftpClient.login("...", "...");
ftpClient.enterLocalPassiveMode();
FTPFile[] imageFiles = ftpClient.listFiles(distantPathForImages);
for (FTPFile imageFile : imageFiles) {
    InputStream inputStream = ftpClient.retrieveFileStream(distantPathForImages + imageFile.getName());
    OutputStream outputStream = new BufferedOutputStream(
            new FileOutputStream(new File(PATHDESTCSS + imageFile.getName())));
    byte[] bytesArray = new byte[65536];
    int bytesRead;
    while ((bytesRead = inputStream.read(bytesArray)) != -1) {
        outputStream.write(bytesArray, 0, bytesRead);
    }
    outputStream.close();
    inputStream.close();
    ftpClient.completePendingCommand();
}
Why does my file have these extra bytes only when I download it from server 1, and how can I fix this?
FTPClient uses ASCII mode by default. You have to use binary mode to transfer binary files:
ftpClient.setFileType(FTP.BINARY_FILE_TYPE);
Your current code can, by chance, work on some servers even in ASCII mode: if the server uses the Windows EOL sequence, no conversion takes place. And even then, probably only if the file happens not to contain any lone #13 (CR) bytes.
One of your servers probably attempts to transmit the file as text, and your FTP client also thinks it is receiving text.
Here is an excerpt from the javadoc:
If the current file type is ASCII, the returned InputStream
will convert line separators in the file to the local
representation.
If you are on Windows, every line feed will be expanded to CR + LF, wreaking havoc on the data structures in the PNG file.
The expected number of bytes for this scenario is 4686 * (1 + 1/256) ≈ 4704.3, because on average every 256th byte in a PNG file looks like an ASCII line feed and therefore gets an extra byte added. Your file ends up at 4706 bytes, which is pretty close.
Setting file type to FTP.BINARY_FILE_TYPE should fix this: https://commons.apache.org/proper/commons-net/apidocs/org/apache/commons/net/ftp/FTPClient.html#setFileType(int)
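The arithmetic above can be checked locally. This is a minimal sketch, not Commons Net code (the class name is made up): it naively expands every LF byte to CR + LF, which is roughly what an ASCII-mode transfer does on its way to a Windows client, ignoring the special handling of existing CR + LF pairs:

```java
import java.io.ByteArrayOutputStream;

public class AsciiModeExpansion {
    // Simulate the corruption: every LF (0x0A) byte becomes CR + LF (0x0D 0x0A).
    public static byte[] expandLfToCrLf(byte[] input) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte b : input) {
            if (b == 0x0A) {
                out.write(0x0D);  // insert a CR before each LF
            }
            out.write(b);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // A pseudo-binary payload of the question's size: all 256 byte
        // values, cycled. It contains 19 LF bytes (18 full cycles plus
        // one in the partial cycle), so 19 extra bytes are inserted.
        byte[] data = new byte[4686];
        for (int i = 0; i < data.length; i++) {
            data[i] = (byte) (i % 256);
        }
        System.out.println(data.length + " -> " + expandLfToCrLf(data).length);
    }
}
```

A real PNG has a less uniform byte distribution, which is why the observed 4706 differs slightly from the estimate.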
Related
I'm using FTPClient to upload files to an FTP server. First, I have a TempFile object (it doesn't matter what this object is) that has a getBytes() method. This method returns a List of bytes:
List<Byte> byteList = tempFile.getBytes();
Byte[] byteObjects = byteList.toArray(new Byte[byteList.size()]);
byte[] byteArray = ArrayUtils.toPrimitive(byteObjects);
InputStream stream = new ByteArrayInputStream(byteArray);
ftpClient.storeFile(file.getName() + "." + file.getExtension(), stream);
After executing the last line, a file is created with the expected name, but when I want to open that file, I see "The image test.jpg can not be displayed, because it contains errors" in the browser. What is the problem?
It sounds like you need to set your FTP client to transfer in binary mode:
ftpClient.setFileType(FTP.BINARY_FILE_TYPE);
As per the docs
If the current file type is ASCII, line separators in the file are transparently converted to the NETASCII format
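The List&lt;Byte&gt; to byte[] conversion can also be done without the Commons Lang dependency. A sketch (the helper class name is illustrative):

```java
import java.util.List;

public class UploadHelper {
    // Unbox a List<Byte> into a primitive byte[] with a plain loop,
    // replacing the Byte[] + ArrayUtils.toPrimitive() round trip.
    public static byte[] toByteArray(List<Byte> byteList) {
        byte[] bytes = new byte[byteList.size()];
        for (int i = 0; i < bytes.length; i++) {
            bytes[i] = byteList.get(i);
        }
        return bytes;
    }
}
```

With the conversion in place, the fix is to switch modes before storing: call ftpClient.setFileType(FTP.BINARY_FILE_TYPE) after login, then ftpClient.storeFile(name, new ByteArrayInputStream(toByteArray(byteList))).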
I am implementing a Direct Connect client using the NMDC protocol. I can connect to a hub and to other connected clients. I am trying to retrieve the file list from each client; I understand that, to do that, one must download the file files.xml.bz2 from the other client. The protocol to download a file is as follows:
-> $ADCGET file <filename> <params>|
<- $ADCSND file <fileName> <params>|
<- (*** binary data is now transfered from client B to client A ***)
I am trying to create a file named files.xml.bz2 using the binary data received. Here's my code:
// filesize is provided through the $ADCSND response from the other client
byte[] data = new byte[filesize];
/*
 * Reading binary data from the socket input stream
 */
int read = 0;
for (int i = 0; read < filesize;) {
    int available = in2.available();
    int leftspace = filesize - read;
    if (available > 0) {
        in2.read(data, read, available > leftspace ? leftspace : available);
        ++i;
    }
    read += (available > leftspace ? leftspace : available) + 1;
}
/*
writing the bytes to an actual file
*/
ByteArrayInputStream f = new ByteArrayInputStream(data);
FileOutputStream file = new FileOutputStream("files.xml.bz2");
file.write(data);
file.close();
The file is created; however, the contents (files.xml) are not readable. Opening it in Firefox gives:
XML Parsing Error: not well-formed
Viewing the contents in a terminal shows only binary data. What am I doing wrong?
EDIT
I also tried decompressing the file using the bzip2 classes from Apache Commons Compress.
ByteArrayInputStream f = new ByteArrayInputStream(data);
BZip2CompressorInputStream bzstream = new BZip2CompressorInputStream(f);
FileOutputStream xmlFile = new FileOutputStream("files.xml");
byte[] bytes = new byte[1024];
while ((bzstream.read(bytes)) != -1) {
    xmlFile.write(bytes);
}
xmlFile.close();
bzstream.close();
I get an error, here's the stacktrace:
java.io.IOException: Stream is not in the BZip2 format
at org.apache.commons.compress.compressors.bzip2.BZip2CompressorInputStream.init(BZip2CompressorInputStream.java:240)
at org.apache.commons.compress.compressors.bzip2.BZip2CompressorInputStream.<init>(BZip2CompressorInputStream.java:132)
at org.apache.commons.compress.compressors.bzip2.BZip2CompressorInputStream.<init>(BZip2CompressorInputStream.java:109)
at control.Controller$1.run(Controller.java:196)
This is the usual, typical misuse of available(). All you need to copy a stream in Java is the following:
while ((count = in.read(buffer)) >= 0) {
    out.write(buffer, 0, count);
}
Use this with any buffer size greater than zero, preferably several kilobytes. You don't need a new buffer per iteration, and you don't need to know how much data is available to read without blocking: you have to block, otherwise you're just smoking the CPU. But you do need to know how much data was actually read in each iteration, and this is the first place where your code falls down.
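The canonical loop above, wrapped into a complete method (a sketch; the class name is illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopier {
    // Copy everything from in to out. read() blocks until data arrives and
    // returns the number of bytes actually read, or -1 at end of stream, so
    // no call to available() is needed.
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int count;
        while ((count = in.read(buffer)) >= 0) {
            out.write(buffer, 0, count);  // write only what was read
            total += count;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] src = "hello stream".getBytes();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        System.out.println(copy(new ByteArrayInputStream(src), out));
    }
}
```

Used on the socket stream, this replaces the available()-based loop entirely; the loop ends when the peer closes the stream, or you can stop once the running total reaches the announced filesize.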
The error java.io.IOException: Stream is not in the BZip2 format is generated by the constructor of the class BZip2CompressorInputStream. I decided to scan the bytes, looking for the magic number to make sure the file was in bz2 format; it turns out Java was right: it wasn't in bz2 format.
Upon examining the source code of Jucy, I saw that the reason for this was a slight error in the command I sent to the other client. In essence, the error was caused by a mistake in my protocol implementation. The solution was:
Replace:
$ADCGET file files.xml.bz2 0 -1 ZL1|
With:
$ADCGET file files.xml.bz2 0 -1|
ZL1 requests compression of the files being sent (not necessary).
I'm trying to transfer files with a DatagramSocket in Java. I'm reading the files in 4096-byte pieces. We are using ACKs, so all pieces arrive in the right order; we have successfully tried pdf, exe, jpg and lots of other formats, but iso, zip and 7z are not working, even though they have exactly the same size afterwards. Do you have any idea?
Reading the Parts:
byte[] b = new byte[FileTransferClient.PACKAGE_SIZE - 32];
FileInputStream read = new FileInputStream(file);
read.skip((part - 1) * (FileTransferClient.PACKAGE_SIZE - 32));
read.read(b);
content = b;
Writing the Parts:
stream = new FileOutputStream(new File(this.filePath));
stream.write(output);
...
stream.write(output);
stream.close();
(Sorry for the bad grammar, I'm German.)
Your write() method calls assume that the entire buffer was filled by receive(). You must use the length provided by the DatagramPacket:
datagramSocket.receive(packet);
stream.write(packet.getData(), packet.getOffset(), packet.getLength());
If there is overhead in the packet, e.g. a sequence number, which there should be, you will need to adjust the offset and length accordingly.
NB TCP will ensure 'everything gets transferred and is not damaged'.
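A minimal loopback sketch of the receive side, assuming each datagram carries only payload with no sequence-number header (the class and method names are made up): the buffer is 4096 bytes, but only packet.getLength() bytes of it are valid.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.util.Arrays;

public class UdpReceiveDemo {
    // Receive one datagram and return only the bytes that were actually
    // delivered, never the whole 4096-byte buffer.
    public static byte[] receiveOne(DatagramSocket socket) throws Exception {
        byte[] buffer = new byte[4096];
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
        socket.receive(packet);  // blocks; fills in offset and length
        return Arrays.copyOfRange(packet.getData(), packet.getOffset(),
                packet.getOffset() + packet.getLength());
    }

    public static void main(String[] args) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket();
             DatagramSocket sender = new DatagramSocket()) {
            byte[] payload = {1, 2, 3, 4, 5};  // a short final chunk
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));
            System.out.println(receiveOne(receiver).length);  // 5, not 4096
        }
    }
}
```

Writing getLength() bytes instead of the full buffer is what keeps the short last piece of an iso or zip from being padded with stale garbage.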
To make my life easier at work, I'm making a Java program to download some modules from the server (sometimes they get deleted from my local machine, and it takes 15 minutes to build them all).
Following is my code for downloading the files:
Note that all files are less than a megabyte big.
URL url = new URL("http://www.url.com/ModuleName.swf");
URLConnection connection = url.openConnection();
InputStream input = connection.getInputStream();
byte[] buffer = new byte[4096];
int n = -1;
OutputStream output = new FileOutputStream(new File("dlFile.swf"));
while ((n = input.read(buffer)) != -1)
{
output.write(buffer, 0, n);
output.flush();
}
output.close();
If I use a hex editor to compare the file downloaded via Java and via Firefox, they're almost the same at first, but later on there are many differences.
Now, the strange thing is this: if I use Firefox to download the file and upload that file to Dropbox, my application then downloads it correctly.
Any idea what could possibly cause this?
I'm trying to put a local file onto a remote host via XML-RPC with Base64 encoding/decoding. This works perfectly fine for binary files, but when I try to send over a text file, all the line endings are removed. Why is this happening?
On the client side,
my $buf;
my $encoded = '';
while (read($FILE, $buf, 60 * 57)) {
$encoded .= encode_base64($buf);
}
The client then sends this over to my Redstone XML-RPC server, which takes it and writes it out:
// Create file
File file = new File(path);
file.createNewFile();
// Decode the encoded data sent over into bytes
byte[] bytes = Base64.decode(data.getBytes());
// Write them out to the file
FileOutputStream os = new FileOutputStream(file);
os.write(bytes);
os.flush();
os.close();
Try setting $FILE to binary mode; you need to call binmode after the open command:
open my $FILE, '<', 'the_file_name.extension';
binmode $FILE;
# your code ...
The problem was that I was opening the file in Notepad, which wasn't recognizing the line endings.
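On the Java side, it is easy to confirm that the Base64 layer itself is not the culprit: java.util.Base64 round-trips CRLF bytes unchanged. A quick check, not part of the original Redstone setup:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64RoundTrip {
    public static void main(String[] args) {
        String text = "line one\r\nline two\r\n";
        // Encode and decode the raw bytes; no line-ending translation
        // happens in either direction.
        byte[] original = text.getBytes(StandardCharsets.UTF_8);
        String encoded = Base64.getEncoder().encodeToString(original);
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(new String(decoded, StandardCharsets.UTF_8).equals(text));
    }
}
```

So if line endings change, they were altered before encoding (for example by a text-mode read in Perl without binmode) or only appear altered in the viewer, not in the Base64 transport.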