Writing into a text file on an FTP server in Java

I have a text file on my FTP server. I am trying to write into this file but can't. This is my code:
URL url = new URL("ftp://username:pass@thunder.cise.ufl.edu/public/foler/a.txt;type=i");
URLConnection urlc = url.openConnection();
OutputStream os = urlc.getOutputStream(); // To upload
OutputStream buffer = new BufferedOutputStream(os);
ObjectOutput output = new ObjectOutputStream(buffer);
output.writeChars("hello");
buffer.close();
os.close();
output.close();

The ObjectOutputStream class is intended to write object data so that it can be reconstructed by ObjectInputStream; it is not meant for writing text files. If all you need is to write a String to a stream, use PrintStream instead:
URL url = new URL("ftp://username:pass@thunder.cise.ufl.edu/public/foler/a.txt;type=i");
URLConnection urlc = url.openConnection();
OutputStream os = urlc.getOutputStream(); // To upload
OutputStream buffer = new BufferedOutputStream(os);
PrintStream output = new PrintStream(buffer);
output.print("hello");
output.close(); // closing the PrintStream flushes and closes the wrapped streams too
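In Java 7 and later, try-with-resources closes the whole chain in the correct order automatically; a minimal sketch of the same upload (same placeholder URL as above):

URL url = new URL("ftp://username:pass@thunder.cise.ufl.edu/public/foler/a.txt;type=i");
URLConnection urlc = url.openConnection();
// Closing the PrintStream flushes and closes the wrapped streams as well
try (PrintStream output = new PrintStream(
        new BufferedOutputStream(urlc.getOutputStream()))) {
    output.print("hello");
}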

Which library are you using?
For FTP connections you are usually better off with a proper Java library.
I used Apache Commons Net's FTPClient in my previous projects.
Feel free to ask if you have problems using it.
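For example, a minimal upload sketch with Commons Net; the host, credentials, and paths are placeholders taken from the question:

import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class FtpUpload {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.connect("thunder.cise.ufl.edu");
        ftp.login("username", "pass");
        ftp.enterLocalPassiveMode();
        ftp.setFileType(FTP.BINARY_FILE_TYPE);
        // Stream a local file to the remote path
        try (InputStream in = new FileInputStream("a.txt")) {
            ftp.storeFile("/public/foler/a.txt", in);
        }
        ftp.logout();
        ftp.disconnect();
    }
}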

Take a look at the question Uploading to FTP using Java, and at the answer by user Loša.
The only thing Loša's answer is missing is the definition of the variable BUFFER_SIZE:
final int BUFFER_SIZE = 1024; // or whatever size you think it should be
plus the imports and a basic class definition around it, as sketched below.
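For completeness, a self-contained skeleton along those lines might look like this; the URL, file name, and buffer size are placeholders:

import java.io.*;
import java.net.URL;
import java.net.URLConnection;

public class FtpUrlUpload {
    static final int BUFFER_SIZE = 1024; // or whatever size you think it should be

    public static void main(String[] args) throws IOException {
        URL url = new URL("ftp://username:pass@host/path/a.txt;type=i");
        URLConnection urlc = url.openConnection();
        try (InputStream in = new FileInputStream("a.txt");
             OutputStream out = urlc.getOutputStream()) {
            byte[] buffer = new byte[BUFFER_SIZE];
            int len;
            // Copy the local file to the FTP output stream block by block
            while ((len = in.read(buffer)) > 0) {
                out.write(buffer, 0, len);
            }
        }
    }
}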
Some simple searching here, or via DuckDuckGo or Google would have found what you're looking for.
Also, you aren't asking a question so much as saying "This doesn't work and I don't know why. Fix it for me."

Related

FileOutputStream - Insufficient system resources exist to complete the requested service

I'm using the following code to write files to the disk.
FileOutputStream fileOutputStream = null;
try {
    fileOutputStream = new FileOutputStream(filePath);
    fileOutputStream.write(fileData);
    fileOutputStream.flush();
} finally {
    // The declaration has to sit outside the try block for the finally block to see it
    if (fileOutputStream != null) {
        fileOutputStream.close();
    }
}
The problem is that I'm getting the following error intermittently:
Insufficient system resources exist to complete the requested service.
I have already checked a few cases in which this problem can happen, such as a lack of Paged Pool Memory, but none of them applies here. I'm running Windows Server 2003 R2 SP2, x86 architecture.
Should I try to write the file in smaller chunks? What is the best way to do that?
A few things.
First, you should consider using buffers. Try wrapping your FileOutputStream with a BufferedOutputStream.
BufferedOutputStream outputBuffer = null;
try {
    outputBuffer = new BufferedOutputStream(new FileOutputStream(filePath));
    outputBuffer.write(fileData);
    outputBuffer.flush();
} finally {
    // Declare the stream outside the try block so the finally block can close it
    if (outputBuffer != null) {
        outputBuffer.close();
    }
}
Second, try checking if you really are running out of handles. I left a comment with a link regarding this.
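As for writing in smaller chunks: you can slice the byte array yourself. A minimal sketch on Java 7+, reusing filePath and fileData from the question (the chunk size is arbitrary):

try (BufferedOutputStream out =
         new BufferedOutputStream(new FileOutputStream(filePath))) {
    final int CHUNK = 64 * 1024; // 64 KB per write; tune as needed
    for (int offset = 0; offset < fileData.length; offset += CHUNK) {
        int len = Math.min(CHUNK, fileData.length - offset);
        out.write(fileData, offset, len);
    }
    out.flush();
}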

Upload to S3 using Gzip in Java

I'm new to Java and I'm trying to upload a large file (~10GB) to Amazon S3. Could anyone please help me with how to use a GZIP output stream for it?
I've been through some documentation but got confused about byte streams and gzip streams. Must they be used together? Can anyone help me with this piece of code?
Thanks in advance.
Have a look at this,
Is it possible to gzip and upload this string to Amazon S3 without ever being written to disk?
ByteArrayOutputStream byteOut = new ByteArrayOutputStream();
GZIPOutputStream gzipOut = new GZIPOutputStream(byteOut); // java.util.zip.GZIPOutputStream
// write your stuff to gzipOut
gzipOut.close(); // finishes the gzip stream; required before reading the bytes back
byte[] bytes = byteOut.toByteArray();
// write the bytes to the Amazon stream
Since it's a large file, you might also want to have a look at multipart upload.
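For instance, a sketch with the SDK's TransferManager, which splits large uploads into parts automatically; the bucket, key, and file name are placeholders:

import java.io.File;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

TransferManager tm = new TransferManager(); // uses the default credentials chain
Upload upload = tm.upload("my-bucket", "my-key.gz", new File("data.gz"));
upload.waitForCompletion(); // blocks until all parts are uploaded; throws InterruptedException
tm.shutdownNow();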
This question could have been more specific, and there are several ways to achieve this. One approach might look like the one below.
The example depends on the commons-io and commons-compress libraries, and uses classes from the java.nio.file package.
public static void compressAndUpload(AmazonS3 s3, InputStream in)
        throws IOException {
    // Create temp file
    Path tmpPath = Files.createTempFile("prefix", "suffix");

    // Create and write to gzip compressor stream
    OutputStream out = Files.newOutputStream(tmpPath);
    GzipCompressorOutputStream gzOut = new GzipCompressorOutputStream(out);
    IOUtils.copy(in, gzOut);
    gzOut.close(); // must be closed so the gzip trailer is written

    // Read content from temp file
    InputStream fileIn = Files.newInputStream(tmpPath);
    long size = Files.size(tmpPath);
    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentType("application/x-gzip");
    metadata.setContentLength(size);

    // Upload file to S3
    s3.putObject(new PutObjectRequest("bucket", "key", fileIn, metadata));
}
Buffering, error handling, and closing of the remaining streams are omitted for brevity.

How to download monthly Treasury Files

Up until early this year, the US Treasury web site posted monthly US Receipts and Outlays data in txt format. It was easy to write a program to read and store the info. All I used was:
URL url = new URL("https://www.fiscal.treasury.gov/fsreports/rpt/mthTreasStmt/mts1214.txt");
URLConnection connection = url.openConnection();
InputStream is = connection.getInputStream();
Then I just read the InputStream into a local file.
Now, when I try the same code for May, I get an InputStream with nothing in it.
Just clicking on "https://www.fiscal.treasury.gov/fsreports/rpt/mthTreasStmt/mts0415.xlsx" opens an Excel worksheet (the download path has since changed).
Which is great if you don't mind clicking on each link separately ... saving the file somewhere ... opening it manually to enable editing ... then saving it again as a real .xlsx file (because they really hand you an .xls file).
But when I create a URL from that link and use it to get an InputStream, the stream is empty. I also tried url.openStream() directly. No difference.
Can anyone see a way I can resume using a program to read the new format?
In case it's of interest, I now use this code to write the stream to the file bit by bit... but there are no bits, so I don't know if it works.
static void copyInputStreamToFile(InputStream in, File file) {
    try {
        OutputStream out = new FileOutputStream(file);
        byte[] buf = new byte[1024];
        // Note: this diagnostic read consumes the first chunk of the stream
        System.out.println("reading: " + in.read(buf));
        // This is what tells me it is empty, i.e. the loop below is ignored.
        int len;
        while ((len = in.read(buf)) > 0) {
            out.write(buf, 0, len);
        }
        out.close();
        in.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Any help is appreciated.
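One thing worth checking is the HTTP status code and headers before reading the body; an empty stream often means the server sent a redirect or an error page instead of the file. A minimal diagnostic sketch (the User-Agent header is an assumption, since some servers reject Java's default):

HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.setInstanceFollowRedirects(true);
con.setRequestProperty("User-Agent", "Mozilla/5.0"); // assumption: the server may filter Java's default agent
System.out.println("HTTP status: " + con.getResponseCode());
System.out.println("Content-Length: " + con.getContentLengthLong());
InputStream in = con.getInputStream();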

How to get file size of http file path

I'm using the below code to download files from a remote location via HTTP. Some assets are not fully downloading and appear to be corrupt; it happens maybe 5% of the time. I'm thinking it'd be good to ensure I've downloaded the full file by getting the file size in advance and comparing it to what I've downloaded, to be sure nothing was missed.
Through some Google searches, and looking at the objects I'm already working with, I don't see an obvious way to obtain this file size. Can someone point me in the right direction?
HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.setInstanceFollowRedirects(true);
InputStream is = con.getInputStream();
file = new File(destinationPath + "." + remoteFile.getExtension());
BufferedInputStream bis = new BufferedInputStream(is);
BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(file.getAbsolutePath()));
int i; // was missing a declaration in the original snippet
while ((i = bis.read()) != -1) {
    bos.write(i);
}
bos.flush();
bis.close();
bos.close(); // the output stream needs to be closed as well
con.getContentLength() may give you what you want, but only if the server provided it as a response header. If the server used "chunked" encoding instead of providing a Content-Length header then the total length is not available up-front.
Check out the getContentLength() method on HttpURLConnection, which it inherits from URLConnection.
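Putting that together with the download loop from the question, a sketch of verifying the byte count; note this only works when the server actually sends a Content-Length header:

long expected = con.getContentLengthLong(); // -1 if the server didn't send it
long received = 0;
int i;
while ((i = bis.read()) != -1) {
    bos.write(i);
    received++;
}
bos.flush();
bos.close();
bis.close();
if (expected >= 0 && received != expected) {
    throw new IOException("Incomplete download: got " + received
            + " of " + expected + " bytes");
}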
You can use InputStream#available() method.
It returns an estimate of the number of bytes that can be read (or skipped over) from this input stream without blocking by the next invocation of a method for this input stream.
The next invocation might be the same thread or another thread. A single read or skip of this many bytes will not block, but may read or skip fewer bytes.
FileInputStream fis = new FileInputStream(destinationPath+"."+remoteFile.getExtension());
System.out.println("Total file size to read (in bytes) : "+ fis.available());

Client-Server File Transfer in Java

I'm looking for an efficient way to transfer files between client and server processes using TCP in Java. My server code looks something like this:
socket = serverSocket.accept();
InputStream is = socket.getInputStream();
OutputStream os = socket.getOutputStream();
FileInputStream fis = new FileInputStream(new File(filename));
I'm just unsure of how to proceed. I know I want to read bytes from fis and then write them to os, but I'm unsure about the best way to read and write bytes using byte streams in Java; I'm only familiar with reading and writing text using Readers and Writers. Can anyone tell me the appropriate way to do this? What should I wrap os and fis in (if anything), and how do I keep reading bytes until the end of file without a hasNext() method (or equivalent)?
You could do something like:
byte[] contents = new byte[BUFFER_SIZE]; // e.g. final int BUFFER_SIZE = 8192;
int numBytes = 0;
while ((numBytes = is.read(contents)) > 0) {
    os.write(contents, 0, numBytes);
}
You could use Apache's IOUtils.copy(in, out), or:
import org.apache.commons.fileupload.util.Streams;
...
Streams.copy(in, out, false);
Inspecting the source might prove interesting (http://koders.com, perhaps?).
There is also java.nio.channels.FileChannel with a transferTo method; opinions in the community are mixed on whether it is better for smaller or larger files.
A simple block-wise copy between the input and output streams would be fine; you can wrap them in buffered streams.
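A sketch of the transferTo variant for the server side, reusing filename and os from the question (Java 7+ for try-with-resources):

import java.io.FileInputStream;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

try (FileInputStream fis = new FileInputStream(filename)) {
    FileChannel fileChannel = fis.getChannel();
    WritableByteChannel socketChannel = Channels.newChannel(os);
    long position = 0;
    long size = fileChannel.size();
    while (position < size) {
        // transferTo may move fewer bytes than requested, so loop until done
        position += fileChannel.transferTo(position, size - position, socketChannel);
    }
}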
