I'm using the Java URL and URLConnection classes to upload a file to a server using FTP. I don't need to do anything other than simply upload the file, so I'd like to avoid any external libraries, and I'm wary of using the unsupported sun.net.ftp class.
Is there any way to use absolute paths in the FTP connection string? I'd like to put my files in something like "/ftptransfers/..." but the FTP path is relative to the user home directory.
Sample upload code:
URL url = new URL("ftp://username:password#host/file.txt");
URLConnection uc = url.openConnection();
uc.setDoOutput(true);
OutputStream out = uc.getOutputStream() ;
out.write("THIS DATA WILL BE WRITTEN TO FILE".getBytes());
out.close();
I did actually find out there is a semi-standard way to do it that worked for me.
Short answer: replace the leading slash with "%2F"
Long answer: per the "A FTP URL Format" document:
For example, the URL "ftp://myname@host.dom/%2Fetc/motd" is
interpreted by FTP-ing to "host.dom", logging in as "myname"
(prompting for a password if it is asked for), and then executing
"CWD /etc" and then "RETR motd".
This has a different meaning from
"ftp://myname#host.dom/etc/motd" which would "CWD etc" and then
"RETR motd"; the initial "CWD" might be executed relative to the
default directory for "myname".
On the other hand,
"ftp://myname#host.dom//etc/motd", would "CWD " with a null
argument, then "CWD etc", and then "RETR motd".
I think that your best bet is to use the Apache Commons FTP component and do a 'cd' after you make the connection.
You can always write a wrapper so that the URL can be specified in the format above, if you so wish.
I am trying to load some data into an AWS Lambda and am using getClass().getResource() to do so. This returns a nice URL that seemingly prints out a plausible path in the logs; however, when I try to make a File based on that path, I get a file that returns false when I call .exists() on it.
If I run the code below, the first print statement gives "exists: false".
Meanwhile, the second print statement gives something along the lines of "test path: /file:/var/task/lib/MyLambda-1.0.jar!/com/my/package/folders/file.end"
File test = new File(cFile);
System.out.println("exists: " + test.exists());
System.out.println("test path: " + test.getAbsolutePath());
Not sure why this would be. If Java finds a file, then I would assume that the file exists...
Short answer: don't assume that the "path" of a URL is a file system pathname.
I am trying to load some data into an AWS Lambda and am using getClass().getResource() to do so. This returns a nice URL that seemingly prints out a plausible path in the logs;
Yes. (It would be nice if you showed us what the original URL looks like ... though I can guess.)
However, when I try to make a File based on that path, I get a file that returns false when I call .exists() on it.
OK, unless the URL has the protocol "file:", I would NOT expect that to work.
The path in a URL is a path that is intended for the protocol handler to resolve. The idea is that you use URL::openStream to open a stream to the resource named by the URL and then read it. The protocol handler takes care of interpreting the path (etc) and setting up the stream.
For a "file:" URL, the protocol handler will resolve the path in the file system, and provide you a stream to read the file.
For a "http:" URL, the protocol handler establishes a connection to the server, sends a GET request, and returns you a stream to read the response body.
For a "jar:" URL, the protocol handler opens the JAR file, finds the entry within the JAR file, and hands you a stream to read it.
And so on.
If you look at these, it is only in the "file:" case that there is a reasonable expectation that treating the path component of the URL as a file system pathname could work.
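For example (a minimal sketch; the absolute resource name is assumed from the path in the question), the protocol-handler route looks like this:
URL url = getClass().getResource("/com/my/package/folders/file.end");
try (InputStream in = url.openStream()) {
    // the handler (file:, jar:, http:, ...) interprets the path and sets up the stream
    byte[] data = in.readAllBytes(); // Java 9+; on older versions, read in a loop
}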
Looking at the pathname in your question:
file:/var/task/lib/MyLambda-1.0.jar!/com/my/package/folders/file.end
I surmise that the original URL was:
jar:file:/var/task/lib/MyLambda-1.0.jar!/com/my/package/folders/file.end
So what that says to the "jar:" protocol handler is:
Find the resource identified by the URL "file:/var/task/lib/MyLambda-1.0.jar"
Open it as a JAR file stream
Find the entry "/com/my/package/folders/file.end" in the JAR file's namespace
Open a stream to read that entry's content.
The JAR file protocol handler knows how to do that. But (clearly) the File class doesn't ... because that "path" is not a file system pathname.
How you solve this depends on what you really need.
If you just need a stream to read the resource, use getClass().getResourceAsStream(...) instead.
If it must be a file in the file system, you may have to get hold of the stream (see above), copy it to a temporary file, and use a File for the temporary file.
If you are doing this because you want to write to the "file", I would suggest that you give up on that idea. It is a bad idea for an application to try to update its resources, and in some cases it simply won't / cannot work.
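A minimal sketch of the first two options above, using java.nio.file.Files (the resource name is the one from the question; the temporary-file handling is an assumption about what you need):
// Option 1: read the resource directly as a stream
InputStream in = getClass().getResourceAsStream("/com/my/package/folders/file.end");

// Option 2: if an actual File is required, copy the stream to a temporary file first
Path tmp = Files.createTempFile("file", ".end");
try (InputStream in2 = getClass().getResourceAsStream("/com/my/package/folders/file.end")) {
    Files.copy(in2, tmp, StandardCopyOption.REPLACE_EXISTING);
}
File asFile = tmp.toFile();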
In your File test = new File(cFile), is cFile constructed correctly with a proper path? Maybe the last print statement is just picking up the incorrect path you built, but in reality you don't actually have a file there. Have you checked manually?
I have an XSLT file stored in the folder Project/tools. (I'm using the NetBeans IDE.)
I try to access this file in my code, but at run time, I get an AccessControlException: access denied.
The code is:
java.net.URI xsltURI = new java.net.URI(myUtil.getUri("xsltFile.xslt"));
Transformer transformer = factory.newTransformer(new StreamSource(new File(xsltURI)));
The myUtil instance must be used to access the URI for reasons not important here. I printed its output, and it correctly gives the relative path of the file.
I have tried to prefix the relative path with file:/// and file:///[fulldomain], but in each of these cases, it actually tries to access a hard drive on the server, even though I did not give a drive name anywhere. (!) It tries to access C:[relative-path], which isn't even where the file is anyway.
If I omit file:/// then I get that the URI is not absolute, and if I just give the full web address of the file I get a NullPointerException.
Any help at all would be greatly appreciated.
UPDATE: Following my comment below, my code resembles
java.net.URI xsltURI = new java.net.URI("https://host" + myB2U.getUri("xsltFile.xslt"));
java.net.URL xsltURL = xsltURI.toURL();
java.net.URLConnection myConnection = xsltURL.openConnection();
myConnection.connect(); //AccessControlException: access denied ("java.net.SocketPermission"...
java.io.InputStream xsltStream = myConnection.getInputStream();
Transformer transformer = factory.newTransformer(new StreamSource(xsltStream));
Is there something obvious that is wrong?
The file:// protocol tells Java to use file access to open the stream. If you don't want file access you should use a different protocol such as http://.
If you're using a relative path the URI should look something like file://./My/Relative/Path. The 3rd slash means that it is relative to the root.
From what I've gathered, I'm supposed to instantiate a URL object with the path of the file. From there, I'm supposed to be able to initialize a URLConnection from the URL. After I call the URLConnection's connect() method, I'm supposed to be able to obtain an InputStream by calling the getInputStream() method.
I have an application that requires a user to upload a zip file containing an XML report file, among other files.
What I want to do is verify that it is a zip, then open it and check whether there is an XML file, and verify a few nodes which are required in that XML.
I want to do this before I save the zip file to disk/filesystem, and without creating a temporary file. I will only save the file if it passes the validation.
I am using Spring multipart CommonsMultipartFile to manage uploads.
The application uses Java, JSP, and Tomcat.
Thanks.
See my comment on the OP about the wisdom of buffering the entire file in memory.
One quick first check for a valid zip file would be to check the first 4 bytes for the appropriate "magic" bytes. A zip file should start with the 4 bytes {(byte)0x50, (byte)0x4b, (byte)0x03, (byte)0x04}. The only way to really check it, however, is to attempt to unzip it.
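A sketch of both checks, using java.util.zip.ZipInputStream (multipartFile is an assumed variable name for the uploaded CommonsMultipartFile):
byte[] bytes = multipartFile.getBytes(); // buffers the whole upload in memory, as discussed above

// quick check: a ZIP file starts with the local file header signature 0x50 0x4B 0x03 0x04 ("PK")
boolean looksLikeZip = bytes.length >= 4
        && bytes[0] == 0x50 && bytes[1] == 0x4b
        && bytes[2] == 0x03 && bytes[3] == 0x04;

// real check: try to walk the entries without touching the file system
try (ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(bytes))) {
    ZipEntry entry;
    while ((entry = zis.getNextEntry()) != null) {
        if (entry.getName().endsWith(".xml")) {
            // parse and validate the required nodes from zis here (do not close zis itself)
        }
    }
}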
If you want to check whether a file is a ZIP file, perhaps you can use getContentType() method of the URLConnection class? Something like this:
URL u = new URL(fileUrl);
URLConnection uc = u.openConnection();
String type = uc.getContentType();
But it would be much faster to detect the magic bytes which, for the ZIP format, are 50 4B.
Java 7 ships with a default Path implementation for local files. Is there a Path implementation for URLs?
For example, I should be able to copy a remote resource using the following code:
Path remote = Paths.get(new URI("http://www.example.com/foo/bar.html"));
Path local = Paths.get(new URI("/bar.html"));
Files.copy(remote, local);
Currently, this throws java.nio.file.FileSystemNotFoundException: Provider "http" not installed. I could probably implement this myself but I'd rather not reinvent the wheel.
It seems like what you're really trying to do is accomplish what FTP does - copy files from one place to another. I would suggest you find better ways to do this with existing FTP code libraries.
URIs are not file system paths, so you can't treat them as such. They are addresses/resource locators that, when you go there with your browser (or another client that handles them), trigger some action as defined by the server that's behind them. There's no standard for what that server does, hence the flexibility of web services. Therefore, if your server is going to accept HTTP requests in this manner to facilitate file copies, you're going to have to roll your own, and pass the file data in a POST request.
To say it another way: (1) don't treat URIs like they are file system paths - they aren't; (2) find an FTP library to copy files; and/or (3) if you really want to build a web service that does this, abstract the details of the file copying behind a POST request. If you do #3, understand that what you're building is pretty close to custom, and that it will probably only work on the subset of sites that follow your particular design (i.e. the ones you build yourself). There's no standard "copy a file via POST" command or parameter set that I'm aware of that you can leverage to make this "just work" - you're going to have to match your HTTP request up with the web service on the server side.
You can do:
URI uri = new URI("http://www.example.com/foo/bar.html");
try (InputStream is = uri.toURL().openStream()) {
// ...
}
It will work for http, https and file out of the box, and probably for a few more.
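To get the copy that the question is after, you can feed that stream to Files.copy (a sketch; the local target path is a placeholder):
URI uri = new URI("http://www.example.com/foo/bar.html");
Path local = Paths.get("bar.html");
try (InputStream is = uri.toURL().openStream()) {
    Files.copy(is, local, StandardCopyOption.REPLACE_EXISTING);
}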
For relative URIs, you have to resolve them first:
URI relative = new URI("bar.html");
URI base = new URI("http://www.example.com/foo/");
URI absolute = base.resolve(relative);
System.out.println(absolute); // prints "http://www.example.com/foo/bar.html"
Now you can call toURL().openStream() on the absolute URI.
I had to deal with a similar problem and wrote a little method to solve it, which you can find below.
It works fine for concatenating a URL and a relative suffix. Be careful not to pass an absolute suffix, because of how the URI resolve function behaves.
private URI resolveURI(String root, String suffix) {
    try {
        // the trailing "/" keeps the last segment of root when resolving the suffix
        return new URI(root + "/").resolve(suffix);
    } catch (URISyntaxException e) {
        // log the error as appropriate
        return null;
    }
}
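For example (host and file names are just illustrations):
URI page = resolveURI("http://www.example.com/foo", "bar.html");
System.out.println(page); // prints "http://www.example.com/foo/bar.html"
An absolute suffix such as "/bar.html" would instead resolve against the host root, which is why the warning above applies.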
The default output of File.toURL() is
file:/c:/foo/bar
This doesn't appear to work on Windows, and needs to be changed to
file:///c:/foo/bar
Does the format
file:/foo/bar
work correctly on Unix (I don't have a Unix machine to test on)? Is there a library that can take care of generating a URL from a File that is in the correct format for the current environment?
I've considered using a regex to fix the problem, something like:
fileUrl.replaceFirst("^file:/", "file:///")
However, this isn't quite right, because it will convert a correct URL like:
file:///c:/foo/bar
to:
file://///c:/foo/bar
Update
I'm using Java 1.4. In this version, File.toURL() is not deprecated, and both File.toURL().toString() and File.toURI().toString() generate the same (incorrect) URL on Windows.
The File(String) constructor expects a pathname, not a URL. If you want to construct a File based on a String which actually represents a URL, then you'll need to convert the String back to a URL first and make use of File(URI) to construct the File based on URL#toURI().
String urlAsString = "file:/c:/foo/bar";
URL url = new URL(urlAsString);
File file = new File(url.toURI());
Update: since you're on Java 1.4, and URL#toURI() is actually a Java 1.5 method (sorry, overlooked that bit), better to use URL#getPath() instead, which returns the pathname, so that you can use File(String).
String urlAsString = "file:/c:/foo/bar";
URL url = new URL(urlAsString);
File file = new File(url.getPath());
The File.toURL() method is deprecated - it is recommended that you use the toURI() method instead. If you use that, does your problem go away?
Edit:
I understand: you are using Java 1.4. However, your question did not explain what you were trying to do. If, as you state in the comments, you are attempting to simply read a file, use a FileReader to do so (or a FileInputStream if the file is in a binary format).
What do you actually mean by "Does the format file:/c:/foo/bar work correctly on Unix"?
Some examples from Unix.
File file = new File("/tmp/foo.txt"); // this file exists
System.out.println(file.toURI()); // "file:/tmp/foo.txt"
However, you cannot e.g. do this:
File file = new File("file:/tmp/foo.txt");
System.out.println(file.exists()); // false
(If you need a URL instance, do file.toURI().toURL() as the Javadoc says.)
Edit: how about the following, does it help?
URL url = new URL("file:/tmp/foo.txt");
System.out.println(url.getFile()); // "/tmp/foo.txt"
File file = new File(url.getFile());
System.out.println(file.exists()); // true
(Basically very close to BalusC's example which used new File(url.toURI()).)