Upload progress in FTPClient - Java

I'm using commons-net FTPClient to upload some files.
How can I get the progress of the upload (the number of bytes uploaded so far)?
Thanks

Sure, just use a CopyStreamListener. Below is an example (copied from the Apache Commons wiki) of file retrieval, which you can easily change to work the other way round; an upload sketch follows the example.
try {
    InputStream stO =
        new BufferedInputStream(
            ftp.retrieveFileStream("foo.bar"),
            ftp.getBufferSize());
    OutputStream stD =
        new FileOutputStream("bar.foo");

    org.apache.commons.net.io.Util.copyStream(
        stO,
        stD,
        ftp.getBufferSize(),
        /* I'm using the UNKNOWN_STREAM_SIZE constant here, but you can use the size of the file too */
        org.apache.commons.net.io.CopyStreamEvent.UNKNOWN_STREAM_SIZE,
        new org.apache.commons.net.io.CopyStreamAdapter() {
            public void bytesTransferred(long totalBytesTransferred,
                                         int bytesTransferred,
                                         long streamSize) {
                // Your progress control code here
            }
        });

    ftp.completePendingCommand();
} catch (Exception e) { /* ... */ }
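For the upload direction asked about here, the same pattern works with storeFileStream(); a rough sketch (file names are placeholders, and the local file's size is passed instead of UNKNOWN_STREAM_SIZE):
try {
    InputStream local =
        new BufferedInputStream(
            new FileInputStream("bar.foo"),
            ftp.getBufferSize());
    OutputStream remote = ftp.storeFileStream("foo.bar");

    org.apache.commons.net.io.Util.copyStream(
        local,
        remote,
        ftp.getBufferSize(),
        new File("bar.foo").length(), // known size of the local file
        new org.apache.commons.net.io.CopyStreamAdapter() {
            public void bytesTransferred(long totalBytesTransferred,
                                         int bytesTransferred,
                                         long streamSize) {
                // Your progress control code here
            }
        });

    remote.close();
    local.close();
    ftp.completePendingCommand(); // required after storeFileStream()
} catch (Exception e) { /* ... */ }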

I think it is perhaps better to use CountingOutputStream, since it seems intended for this very purpose?
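For what it's worth, a rough sketch of that idea, assuming commons-io is on the classpath (file names are placeholders; poll getByteCount() from another thread while the copy is running to report progress):
OutputStream remote = ftp.storeFileStream("foo.bar");
CountingOutputStream counting = new CountingOutputStream(remote);

// IOUtils.copy blocks until the whole file is written; meanwhile another
// thread can call counting.getByteCount() to read the bytes uploaded so far.
IOUtils.copy(new BufferedInputStream(new FileInputStream("bar.foo")), counting);

counting.close();
ftp.completePendingCommand();
System.out.println("Uploaded " + counting.getByteCount() + " bytes");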
This is answered by someone here: Monitoring progress using Apache Commons FTPClient

Related

Solution for downloading hundreds of files asynchronously

I have an app in which the user may need to download up to 760 files, totaling around 350 MB. It is not possible to zip these files; they must be downloaded as loose files!
I'm currently using Android Asynchronous Http Client to download individual files, and AsyncTask to run the entire process.
Here's an example of a DownloadThread object which handles downloading hundreds of files in the background:
public class DownloadThread extends AsyncTask<String, String, String> {

    ArrayList<String> list;
    AsyncHttpClient client;
    String[] allowedContentTypes = new String[] { "audio/mpeg" };
    BufferedOutputStream bos;
    FileOutputStream fos;

    @Override
    protected String doInBackground(String... params) {
        DownloadTask task;
        for (String file : list) {
            // the "list" variable has already been populated with hundreds of strings
            task = new DownloadTask(file);
            task.execute("");
            while (!task.isdone) {
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
        return null;
    }

    class DownloadTask extends AsyncTask<String, String, String> {

        String character, filename;
        boolean isdone = false;

        public DownloadTask(String file) {
            // file = something like "Whale/sadwhale.mp3"
            character = file.split("/")[0];
            filename = file.split("/")[1];
        }

        @Override
        protected void onPreExecute() {
        }

        @Override
        protected void onPostExecute(String result) {
            if (!result.equals("Error")) {
                // Do something on success
            }
            isdone = true;
        }

        @Override
        protected String doInBackground(String... str) {
            client = new AsyncHttpClient();
            client.get("http://some-site.com/sounds/" + character + "/"
                    + filename, new BinaryHttpResponseHandler(allowedContentTypes) {
                @Override
                public void onSuccess(byte[] fileData) {
                    try {
                        // Make file/folder and create stream
                        File folder = new File(Environment.getExternalStorageDirectory()
                                + CharSelect.directory + character);
                        folder.mkdirs();
                        File dest = new File(folder, filename);
                        fos = new FileOutputStream(dest);
                        bos = new BufferedOutputStream(fos);
                        // Transfer data to file
                        bos.write(fileData);
                        bos.flush();
                        bos.close();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
            return "Success";
        }
    }
}
DownloadThread runs in the background and also launches hundreds of its own AsyncTasks. It waits until each task is done downloading, then continues the for loop with the next download.
This works, kind of. Some downloads appear not to finish properly, or not to start at all. Out of a list of 760 downloads, an average of 100 complete properly, and I have to restart the process to download another 100 or so until that attempt fails as well. I have a feeling this is due to timing issues, as the Thread.sleep(10) line seems a little "hackish".
Surely, calling hundreds of AsyncTasks from another AsyncTask is not the most efficient way to do this. How can I alter this code, or use a third-party solution, to fit this task?
Try the DownloadManager API. It should be what you need.
Here is the thing you need to keep in mind:
Computers have limited resources: network bandwidth, CPU, memory, disk, and so on.
Downloading the 760 files one at a time can never logically take longer overall than downloading them all simultaneously, because the same bandwidth is shared either way.
However, by spawning a whole lot of background tasks/threads you incur a lot of thread thrashing and overhead, as each one needs to be context-switched in and out. CPU time is consumed by the switching instead of actually moving data on and off the network interface. In addition, each thread consumes its own memory and potentially needs to be created if it is not part of a pool.
Basically, the reason your app isn't working reliably (or at all) is almost certainly that it runs out of CPU/disk-I/O/memory resources well before it finishes the downloads or fully utilizes the network.
Solution: find a library that does this, or make use of the Executor suite of classes with a limited pool of threads so that only a few files are downloaded at a time (see the sketch at the end of this answer).
Here is some good evidence in the wild that what you're trying to do is not advised:
Google play updates are all serialized
Amazon MP3 file downloader is totally serialized
default scp client in Linux is serialized file transfer
Windows update downloads serially
Getting the picture? Spewing all those threads is a recipe for problems in return for a perceived speed improvement.
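To illustrate the limited-pool idea, here is a sketch using a plain ExecutorService and java.net.URL instead of the async HTTP client; it is not a drop-in replacement for the code above, and the URL and target path are placeholders:
// Download the list with at most three concurrent transfers.
ExecutorService pool = Executors.newFixedThreadPool(3);
for (final String file : list) {
    pool.submit(new Runnable() {
        @Override
        public void run() {
            InputStream in = null;
            OutputStream out = null;
            try {
                URL url = new URL("http://some-site.com/sounds/" + file);
                File dest = new File(Environment.getExternalStorageDirectory(), file);
                dest.getParentFile().mkdirs();
                in = new BufferedInputStream(url.openStream());
                out = new BufferedOutputStream(new FileOutputStream(dest));
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                try { if (out != null) out.close(); } catch (IOException ignored) {}
                try { if (in != null) in.close(); } catch (IOException ignored) {}
            }
        }
    });
}
pool.shutdown(); // accept no new tasks; queued downloads still finish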

Write Log4j output to HDFS

Has anyone tried to write log4j log files directly to the Hadoop Distributed File System?
If so, please explain how to achieve this.
I think I will have to create an Appender for it.
Is that the right way?
What I need is to write logs to a file at particular intervals and query that data at a later stage.
I recommend using Apache Flume for this task. There is a Flume appender for Log4j. This way, you send logs to Flume, and it writes them to HDFS. The good thing about this approach is that Flume becomes the single point of communication with HDFS, which makes it easy to add new data sources without writing a bunch of HDFS interaction code again and again.
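For reference, the Log4j side of that setup is just an appender entry in log4j.properties; something along these lines, where the hostname and port are placeholders for your Flume agent's Avro source:
log4j.rootLogger=INFO, flume
log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname=flume-agent.example.com
log4j.appender.flume.Port=41414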
Standard log4j (1.x) does not support writing to HDFS, but luckily log4j is very easy to extend. I have written an HDFS FileAppender to write logs to MapRFS (which is compatible with Hadoop); the file name can be something like "maprfs:///projects/example/root.log". It works well in our projects. I have extracted the appender part of the code and pasted it below. The snippet may not run as-is, but it should give you an idea of how to write your own appender. Essentially, you only need to extend org.apache.log4j.AppenderSkeleton and implement append(), close(), and requiresLayout(). For more information, you can also download the log4j 1.2.17 source code and see how AppenderSkeleton is defined; that will give you all the details. Good luck!
Note: an alternative way to write to HDFS is to mount HDFS on all your nodes, so that you can write the logs just as you would to a local directory. This may be the better approach in practice.
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;
import org.apache.log4j.Layout;
import org.apache.hadoop.conf.Configuration;
import java.io.*;

public class HDFSFileAppender extends AppenderSkeleton {

    private String filepath = null;
    private Layout layout = null;

    public HDFSFileAppender(String filePath, Layout layout) {
        this.filepath = filePath;
        this.layout = layout;
    }

    @Override
    protected void append(LoggingEvent event) {
        String log = this.layout.format(event);
        try {
            InputStream logStream = new ByteArrayInputStream(log.getBytes());
            writeToFile(filepath, logStream, false);
            logStream.close();
        } catch (IOException e) {
            System.err.println("Exception when appending log to log file: " + e.getMessage());
        }
    }

    @Override
    public void close() {}

    @Override
    public boolean requiresLayout() {
        return true;
    }

    // here we write to HDFS
    // filePathStr: the file path in MapR, like 'maprfs:///projects/aibot/1.log'
    private boolean writeToFile(String filePathStr, InputStream inputStream, boolean overwrite) throws IOException {
        boolean success = false;
        int bytesRead = -1;
        byte[] buffer = new byte[64 * 1024 * 1024];
        try {
            Configuration conf = new Configuration();
            org.apache.hadoop.fs.FileSystem fs = org.apache.hadoop.fs.FileSystem.get(conf);
            org.apache.hadoop.fs.Path filePath = new org.apache.hadoop.fs.Path(filePathStr);
            org.apache.hadoop.fs.FSDataOutputStream fsDataOutputStream = null;
            if (overwrite || !fs.exists(filePath)) {
                fsDataOutputStream = fs.create(filePath, overwrite, 512, (short) 3, 64 * 1024 * 1024);
            } else { // append to existing file
                fsDataOutputStream = fs.append(filePath, 512);
            }
            while ((bytesRead = inputStream.read(buffer)) != -1) {
                fsDataOutputStream.write(buffer, 0, bytesRead);
            }
            fsDataOutputStream.close();
            success = true;
        } catch (IOException e) {
            throw e;
        }
        return success;
    }
}
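Since this version takes the file path and layout through the constructor rather than through bean setters, the easiest way to try it is to register it programmatically; a small sketch (the path and pattern are just examples):
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class HDFSLoggingDemo {
    public static void main(String[] args) {
        // Attach the custom appender to the root logger.
        Logger.getRootLogger().addAppender(new HDFSFileAppender(
                "maprfs:///projects/example/root.log",
                new PatternLayout("%d{ISO8601} %-5p %c - %m%n")));
        Logger.getLogger(HDFSLoggingDemo.class).info("Hello HDFS");
    }
}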

FTP Apache Commons progress bar in Java

I'm working on a little program which can upload a file to my FTP server and do some other stuff with it.
Now... it all works. I'm using the org.apache.commons.net.ftp FTPClient class for uploading.
ftp = new FTPClient();
ftp.connect(hostname);
ftp.login(username, password);
ftp.setFileType(FTP.BINARY_FILE_TYPE);
ftp.changeWorkingDirectory("/shares/public");

int reply = ftp.getReplyCode();
if (FTPReply.isPositiveCompletion(reply)) {
    addLog("Uploading...");
} else {
    addLog("Failed connection to the server!");
}

File f1 = new File(location);
in = new FileInputStream(f1);
ftp.storeFile(jTextField1.getText(), in);
addLog("Done");

ftp.logout();
ftp.disconnect();
The file which should be uploaded is named in jTextField1.
Now... how do I add a progress bar? I mean, there is no stream in ftp.storeFile... how do I handle this?
Thanks for any help! :)
Greetings
You can do it using a CopyStreamListener, which according to the Apache Commons docs is the listener to be used when performing store/retrieve operations.
CopyStreamAdapter streamListener = new CopyStreamAdapter() {
    @Override
    public void bytesTransferred(long totalBytesTransferred, int bytesTransferred, long streamSize) {
        // this method will be called every time some bytes are transferred
        int percent = (int) (totalBytesTransferred * 100 / yourFile.length());
        // update your progress bar with this percentage
    }
};
ftp.setCopyStreamListener(streamListener);
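If the progress bar is a Swing JProgressBar, update it on the Event Dispatch Thread; a minimal variant of the same listener (progressBar and the local File f1 from your code are assumed):
CopyStreamAdapter streamListener = new CopyStreamAdapter() {
    @Override
    public void bytesTransferred(long totalBytesTransferred, int bytesTransferred, long streamSize) {
        final int percent = (int) (totalBytesTransferred * 100 / f1.length());
        SwingUtilities.invokeLater(new Runnable() {
            @Override
            public void run() {
                progressBar.setValue(percent); // assumed JProgressBar in your UI
            }
        });
    }
};
ftp.setCopyStreamListener(streamListener);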
Hope this helps

Why is a file uploaded through Apache FTPClient smaller than the original file on the local machine?

I am uploading files (.csv, .zip, .rar, .doc, .png, .jpg, ...) to an FTP server. The strange thing is that everything appears to succeed, but some data is missing.
Does anybody know why this happens and how to fix it?
public static void uploadWithCommonsFTP(File fileToBeUpload) {
    FTPClient f = new FTPClient();
    try {
        f.connect(server.getServer());
        f.login(server.getUsername(), server.getPassword());
        f.changeWorkingDirectory("user");
        f.setFileType(FTP.BINARY_FILE_TYPE);
        f.setFileTransferMode(FTP.BINARY_FILE_TYPE); // this is part of Mohammad Adil's solution
        f.enterLocalPassiveMode();

        ByteArrayInputStream in = new ByteArrayInputStream(FileUtils.readFileToByteArray(fileToBeUpload));
        boolean reply = f.storeFile(fileToBeUpload.getName(), in);

        if (!f.completePendingCommand()) {
            f.logout();
            f.disconnect();
            System.err.println("File transfer failed.");
            System.exit(1);
        }
        if (reply) {
            JOptionPane.showMessageDialog(null, "Uploaded successfully.");
        } else {
            JOptionPane.showMessageDialog(null, "Upload failed.");
        }

        // Logout and disconnect from server
        in.close(); // this is part of Mohammad Adil's solution
        f.logout();
        f.disconnect();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
It's often forgotten that FTP has two modes of operation: one for text files and the other for binary files (jpg, csv, pdf, zip).
Your code doesn't work because the default transfer mode for FTPClient is FTP.ASCII_FILE_TYPE. You just need to update the configuration to transfer in binary mode.
Add this in your code :
f.setFileTransferMode(FTP.BINARY_FILE_TYPE);
just put that line after f.setFileType(FTP.BINARY_FILE_TYPE);
and it should work then.
EDIT:
You are not closing the InputStream in your code. Just call in.close() before calling logout().

Serving a file with Netty - response is truncated by one byte

I'm serving files (images, HTML) from Android assets via a Netty server.
Text files such as HTML are saved with an .mp3 extension to disable compression (I need an InputStream!).
My pipeline is looking like this:
pipeline.addLast("decoder", new HttpRequestDecoder());
pipeline.addLast("aggregator", new HttpChunkAggregator(65536));
pipeline.addLast("encoder", new HttpResponseEncoder());
pipeline.addLast("chunkedWriter", new ChunkedWriteHandler());
pipeline.addLast("handler", new AssetsServerHandler(context));
My handler is:
public class AssetsServerHandler extends SimpleChannelUpstreamHandler {

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        // some checks
        final FileInputStream is;
        final AssetFileDescriptor afd;
        try {
            afd = assetManager.openFd(path);
            is = afd.createInputStream();
        } catch (IOException exc) {
            sendError(ctx, NOT_FOUND);
            return;
        }
        final long fileLength = afd.getLength();
        HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK);
        setContentLength(response, fileLength);

        final Channel ch = e.getChannel();
        final ChannelFuture future;
        ch.write(response);
        future = ch.write(new ChunkedStream(is));
        future.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                future.getChannel().close();
            }
        });
        if (!isKeepAlive(request)) {
            future.addListener(ChannelFutureListener.CLOSE);
        }
    }

    // other stuff
}
With that handler my responses get truncated by at least one byte. If I change ChunkedStream to ChunkedNioFile (and thus pass is.getChannel() instead of is to its constructor), everything works perfectly.
Please help me understand what is wrong with ChunkedStream.
Your code looks right to me. Does the FileInputStream returned by AssetFileDescriptor contain "all the bytes"? You could check this with a unit test (a sketch follows below). If there is no bug there, then it's a bug in Netty. I make heavy use of ChunkedStream and have never had such a problem yet, but maybe it really depends on the nature of the InputStream.
It would be nice if you could write a test case and open an issue on Netty's GitHub.
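A quick way to make that check is to count what the stream actually yields against AssetFileDescriptor.getLength(); a rough sketch (the asset path is a placeholder, assertEquals is from JUnit):
AssetFileDescriptor afd = assetManager.openFd("sounds/foo.mp3"); // placeholder path
InputStream is = afd.createInputStream();
long counted = 0;
byte[] buf = new byte[8192];
int n;
while ((n = is.read(buf)) != -1) {
    counted += n;
}
is.close();
// If this fails, the stream itself is short; if it passes, suspect ChunkedStream.
assertEquals(afd.getLength(), counted);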
