I'm using Java to download a file. I have tried three different code snippets, which I'm posting below, but the result is the same for all of them: the file is only downloaded partially, so it can't be opened afterwards. How should I fix this problem?
My first code:
public class FileDownloader {
    final static int size = 1024;

    public static void main(String[] args) {
        fileUrl("http://textfiles.com/holiday", "holiday.tar.gz", "C:\\Users\\Me\\Downloads");
    }

    public static void fileUrl(String fAddress, String localFileName, String destinationDir) {
        OutputStream outStream = null;
        URLConnection uCon = null;
        InputStream is = null;
        try {
            URL Url;
            byte[] buf;
            int ByteRead, ByteWritten = 0;
            Url = new URL(fAddress);
            outStream = new BufferedOutputStream(
                    new FileOutputStream(destinationDir + "\\" + localFileName));
            uCon = Url.openConnection();
            is = uCon.getInputStream();
            buf = new byte[size];
            while ((ByteRead = is.read(buf)) != -1) {
                outStream.write(buf, 0, ByteRead);
                ByteWritten += ByteRead;
            }
            System.out.println("Downloaded Successfully.");
            System.out.println("File name: \"" + localFileName + "\"\nNo of bytes: " + ByteWritten);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                is.close();
                outStream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
Second code I used:
public class downloadFile {

    public static void main(String[] args) {
        downloadFile dfs = new downloadFile();
        dfs.downloadFileAndStore("C:\\Users\\Me\\Downloads", "Sign&ValidateVersion2.docx");
    }

    /**
     * Download and save the file.
     * @param fileUrl : String containing URL
     * @param destinationDirectory : directory to store the downloaded file
     * @param fileName : file name without extension
     */
    public void downloadFileAndStore(String destinationDirectory, String fileName) {
        // URL url = null;
        FileOutputStream fos = null;
        // convert the string to URL
        try {
            // url = new URL(fileUrl);
            HttpClient client = HttpClientBuilder.create().build();
            HttpGet request = new HttpGet("https://mail.uvic.ca/owa/#path=/mail");
            HttpResponse response = client.execute(request);
            if (response.getEntity().getContent().read() == -1) {
                Log.error("Response is empty");
            } else {
                BufferedReader rd = new BufferedReader(
                        new InputStreamReader(response.getEntity().getContent()));
                StringBuffer result = new StringBuffer();
                String line = "";
                while ((line = rd.readLine()) != null) {
                    result.append(line);
                }
                fos = new FileOutputStream(destinationDirectory + "\\" + fileName);
                fos.write(result.toString().getBytes());
                fos.close();
            }
        } catch (MalformedURLException e) {
            // TODO Auto-generated catch block
            Log.error("PDF " + fileName + " error: " + e.getMessage());
        } catch (ClientProtocolException e) {
            // TODO Auto-generated catch block
            Log.error("PDF " + fileName + " error: " + e.getMessage());
        } catch (UnsupportedOperationException e) {
            // TODO Auto-generated catch block
            Log.error("PDF " + fileName + " error: " + e.getMessage());
        } catch (FileNotFoundException e) {
            // TODO Auto-generated catch block
            Log.error("PDF " + fileName + " error: " + e.getMessage());
        } catch (IOException e) {
            // TODO Auto-generated catch block
            Log.error("PDF " + fileName + " error: " + e.getMessage());
        }
    }
}
The third code I used:
public class download {

    public static void main(String[] args) {
        download dfs = new download();
        dfs.downloadFile();
    }

    public void downloadFile() {
        try {
            IOUtils.copy(
                    new URL("https://archive.org/details/alanoakleysmalltestvideo").openStream(),
                    new FileOutputStream("C:\\Users\\Me\\Downloads\\spacetestSMALL.wmv"));
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
I need to download a video file of about 6-7 MB, but I have also tried these three snippets on small text files and other file types, and none of them worked. Does anyone have any idea?
I tried your first example and it works for me. I guess you are misinterpreting what you are actually doing. With
fileUrl("http://textfiles.com/holiday","holiday.tar.gz","C:\\Users\\Me\\Downloads");
you are downloading the HTML page of textfiles.com/holiday to a file called "holiday.tar.gz". The actual tar archive has the URL archives.textfiles.com/holiday.tar.gz.
In one of your comments you say you actually want to download a video under archive.org/details/alanoakleysmalltestvideo, but using this URL you simply download the HTML page (successfully) where the video is embedded.
If you look into the HTML sources, you can find an actual video URL, e.g., archive.org/download/alanoakleysmalltestvideo/spacetestSMALL_512kb.mp4, and successfully download it with your existing code.
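For reference, a minimal sketch of a binary-safe download against that direct file URL (the target path below is just a placeholder, reusing the Downloads folder from the question):

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;

public class DirectDownload {
    public static void main(String[] args) throws Exception {
        // Point at the media file itself, not at the HTML page that embeds it.
        URL url = new URL("https://archive.org/download/alanoakleysmalltestvideo/spacetestSMALL_512kb.mp4");
        try (InputStream in = new BufferedInputStream(url.openStream());
             OutputStream out = new BufferedOutputStream(
                     new FileOutputStream("C:\\Users\\Me\\Downloads\\spacetestSMALL_512kb.mp4"))) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n); // copy raw bytes; no Reader/String conversion
            }
        }
    }
}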
I was misinterpreting the URL, as you told me, fhissen. I fixed that; my URL now points to the file itself: "http://ia601409.us.archive.org/8/items/alanoakleysmalltestvideo/spacetestSMALL_512kb.mp4". Using the first code snippet, I have: outStream = new BufferedOutputStream(new FileOutputStream(destinationDir + "spacetestSMALL_512kb.mp4")); and destinationDir is "C:\download\". After I solved this, I got another error, a FileNotFoundException (access denied). This was about the permissions on the folder I wanted to download the file into, so I gave myself full permission on that folder and also on the Java folder in Program Files, as explained on this page: Access is denied java.io.FileNotFoundException. It now works with the first code; I think codes 2 and 3 will work too, but anyone can test that. I can now download the file properly. Thanks everyone :-)
Related
I am working on file upload using a Java API.
I want to upload a file to a server using FTPS. I found that I can use the Apache Commons Net library, but I am facing an issue with org.apache.commons.net.ftp.FTPSClient when I upload the file.
The following is the error:
Exception in thread "main" java.lang.NullPointerException
This is my code:
public static void main(String[] args) {
    String server = "HOST";
    int port = 21;
    String user = "USER";
    String pass = "PASS";
    FTPSClient ftpClient;
    try {
        ftpClient = new FTPSClient();
        try {
            ftpClient.connect(server, port);
            ftpClient.login(user, pass);
            ftpClient.enterLocalPassiveMode();
            ftpClient.setFileType(FTP.BINARY_FILE_TYPE);

            // APPROACH #1: uploads first file using an InputStream
            File firstLocalFile = new File("TEST1.CSV");
            String firstRemoteFile = "TEST1.txt";
            InputStream inputStream = new FileInputStream(firstLocalFile);

            System.out.println("Start uploading first file");
            boolean done = ftpClient.storeFile(firstRemoteFile, inputStream);
            inputStream.close();
            if (done) {
                System.out.println("The first file is uploaded successfully.");
            }

            // APPROACH #2: uploads second file using an OutputStream
            File secondLocalFile = new File("TEST2.CSV");
            String secondRemoteFile = "TEST2.TXT";
            inputStream = new FileInputStream(secondLocalFile);

            System.out.println("Start uploading second file");
            OutputStream outputStream = ftpClient.storeFileStream(secondRemoteFile);
            byte[] bytesIn = new byte[4096];
            int read = 0;
            while ((read = inputStream.read(bytesIn)) != -1) {
                outputStream.write(bytesIn, 0, read);
            }
            inputStream.close();
            //outputStream.close();
            boolean completed = ftpClient.completePendingCommand();
            if (completed) {
                System.out.println("The second file is uploaded successfully.");
            }
        } catch (IOException ex) {
            System.out.println("Error: " + ex.getMessage());
            ex.printStackTrace();
        } finally {
            try {
                if (ftpClient.isConnected()) {
                    ftpClient.logout();
                    ftpClient.disconnect();
                }
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }
    } catch (NoSuchAlgorithmException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
Any support is appreciated.
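One thing that may be worth checking (a guess at a common cause, not a confirmed diagnosis): FTPSClient.storeFileStream() returns null when the server rejects the transfer, and writing to that null stream produces exactly this kind of NullPointerException. A small sketch of the second upload with an explicit guard, using only Commons Net calls shown above plus getReplyString():

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.commons.net.ftp.FTPSClient;

public class FtpsUploadCheck {
    // Upload via storeFileStream(), but fail loudly instead of dereferencing null.
    static boolean upload(FTPSClient ftpClient, File localFile, String remoteName) throws IOException {
        try (InputStream in = new FileInputStream(localFile);
             OutputStream out = ftpClient.storeFileStream(remoteName)) {
            if (out == null) {
                // The reply string usually explains the refusal
                // (for many FTPS servers the data channel must be secured first).
                System.out.println("storeFileStream failed: " + ftpClient.getReplyString());
                return false;
            }
            byte[] buf = new byte[4096];
            int read;
            while ((read = in.read(buf)) != -1) {
                out.write(buf, 0, read);
            }
        }
        // The stream must be closed (done by try-with-resources) before this call.
        return ftpClient.completePendingCommand();
    }
}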
This is my code that saves an image to a directory:
@POST
@Path("/imagestore")
@Consumes(MediaType.MULTIPART_FORM_DATA)
// @Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public JSONObject uploadFile(@FormDataParam("file") InputStream file) {
    String dirPath = servletContext.getContextPath() + "/images";
    File imagesDir = new File(dirPath);
    boolean dirCreated = true;
    if (!imagesDir.exists()) {
        try {
            dirCreated = imagesDir.mkdirs();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    JSONObject obj = new JSONObject();
    if (dirCreated) {
        String filePath = dirPath + "/1.jpg";
        // save the file to the server
        try {
            File newFile = new File(filePath);
            boolean fileCreated = true;
            if (!newFile.exists()) {
                fileCreated = newFile.createNewFile();
            }
            if (fileCreated) {
                FileOutputStream outputStream = new FileOutputStream(newFile);
                int read = 0;
                byte[] bytes = new byte[1024];
                while ((read = file.read(bytes)) != -1) {
                    outputStream.write(bytes, 0, read);
                }
                outputStream.flush();
                outputStream.close();
            }
        } catch (IOException e) {
            try {
                obj.put("error", e.getMessage());
            } catch (JSONException e1) {
                // TODO Auto-generated catch block
                e1.printStackTrace();
            }
            e.printStackTrace();
        }
        String output = "File saved to server location : " + filePath;
        try {
            obj.put("output", output);
            return obj;
        } catch (JSONException e) {
            e.printStackTrace();
            return null;
        }
    }
    return obj;
}
Now this works perfectly, but I also need to save data to the database, and for that I need to consume JSON data as well. I don't know how to do both of these things at the same time, because you can only write one @Consumes.
In simple words, I want to consume both JSON (containing user info) and multipart/form-data (containing the image to upload to the server). How do I do that? I'll appreciate the help :)
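For what it's worth, one common pattern (just a sketch, assuming the Jersey multipart support already used above) is to keep a single multipart/form-data request and send the JSON as an additional named part alongside the file; the "userInfo" part name below is hypothetical and the client would have to send it as a second form field:

@POST
@Path("/imagestore")
@Consumes(MediaType.MULTIPART_FORM_DATA)
@Produces(MediaType.APPLICATION_JSON)
public JSONObject uploadFile(@FormDataParam("file") InputStream file,
                             @FormDataParam("userInfo") String userInfoJson) throws JSONException {
    // "userInfo" is a hypothetical part name carrying the JSON payload as plain text.
    JSONObject userInfo = new JSONObject(userInfoJson);
    // ... save the image exactly as before, then persist userInfo to the database ...
    JSONObject obj = new JSONObject();
    obj.put("output", "received user " + userInfo.optString("name"));
    return obj;
}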
I have got links like this link, which directly ask for the filename to save with, and start downloading in the browser.
How can I download or save this file programmatically?
I tried with the following method:
static void DownloadFile(String url, String fileName) throws MalformedURLException, IOException {
    url = "http://dbpedia.org/sparql?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=DESCRIBE+<" + url + ">&format=text%2Fcsv";
    URL link = new URL(url); // The file that you want to download
    InputStream in = new BufferedInputStream(link.openStream());
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buf = new byte[1024];
    int n = 0;
    while (-1 != (n = in.read(buf))) {
        out.write(buf, 0, n);
    }
    out.close();
    in.close();
    byte[] response = out.toByteArray();
    FileOutputStream fos = new FileOutputStream(fileName);
    fos.write(response);
    fos.close();
    System.out.println("Finished");
}
but this saves the file with only the first line, "subject","predicate","object", and not the complete content.
EDIT:
As suggested in an answer I tried the following, but that too gave only the first line of the file:
static void DownloadFile(String s_url, String fileName) throws MalformedURLException, IOException {
    s_url = "http://dbpedia.org/sparql?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=DESCRIBE+<" + s_url + ">&format=text%2Fcsv";
    //url = "http://dbpedia.org/data/Sachin_Tendulkar.rdf";
    try {
        URL url = new URL(s_url); // The file that you want to download
        // read text returned by server
        BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
        PrintWriter out = new PrintWriter(fileName);
        String line;
        while ((line = in.readLine()) != null) {
            out.println(line);
        }
        in.close();
        out.close();
    } catch (MalformedURLException e) {
        System.out.println("Malformed URL: " + e.getMessage());
    } catch (IOException e) {
        System.out.println("I/O Error: " + e.getMessage());
    }
    System.out.println("Finished");
}
EDIT:
I tried with Apache FileUtils too, but that too gave only the first line of the file.
static void DownloadFile(String s_url, String fileName) throws MalformedURLException, IOException {
    s_url = "http://dbpedia.org/sparql?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=DESCRIBE+<" + s_url + ">&format=text%2Fcsv";
    URL url = new URL(s_url); // The file that you want to download
    FileUtils.copyURLToFile(url, new File(fileName));
    System.out.println("Finished");
}
If you want to download a file, Apache Commons has just what you are looking for, and it works great:
org.apache.commons.io.FileUtils.copyURLToFile(new URL("URL"), new File("path/to/file"));
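A slightly fuller sketch of the same call, with a placeholder URL and path; the timeout overload assumes Commons IO 2.0 or later:

import java.io.File;
import java.net.URL;
import org.apache.commons.io.FileUtils;

public class CommonsIoDownload {
    public static void main(String[] args) throws Exception {
        URL source = new URL("http://example.com/some/file.csv"); // placeholder URL
        File destination = new File("path/to/file.csv");          // placeholder path
        // Overload with connect/read timeouts in milliseconds.
        FileUtils.copyURLToFile(source, destination, 10000, 10000);
    }
}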
If the URL returns text, you can try something like this:
try {
    URL url = new URL("http://www.google.com:80/");
    // read text returned by server
    BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
    PrintWriter out = new PrintWriter("filename.txt");
    String line;
    while ((line = in.readLine()) != null) {
        out.println(line);
    }
    in.close();
    out.close();
} catch (MalformedURLException e) {
    System.out.println("Malformed URL: " + e.getMessage());
} catch (IOException e) {
    System.out.println("I/O Error: " + e.getMessage());
}
I'm developing a server that should receive multiple files simultaneously and save them to the local hard drive. After a file is received, the server should send a response to the client confirming that the file was transferred. When I try to send multiple files at the same time, one client receives the answer meant for the other, and vice versa.
Does anyone have a clue what the problem with this server is?
Here is my servlet code:
protected void doPost(final HttpServletRequest request, final HttpServletResponse response) throws ServletException, IOException {
    try {
        request.setCharacterEncoding("UTF-8");
    } catch (UnsupportedEncodingException e2) {
        // TODO Auto-generated catch block
        e2.printStackTrace();
    }
    response.setCharacterEncoding("UTF-8");
    PrintWriter writer = null;
    try {
        writer = response.getWriter();
    } catch (IOException e2) {
        // TODO Auto-generated catch block
        e2.printStackTrace();
    }
    try {
        // get access to file that is uploaded from client
        Part p1 = request.getPart("File");
        InputStream is = p1.getInputStream();
        // read filename which is sent as a part
        Part p2 = request.getPart("MetaData");
        Scanner s = new Scanner(p2.getInputStream());
        String stringJson = s.nextLine(); // read filename from stream
        s.close();
        json = new JSONObject(stringJson);
        fileName = new String(json.getString("FileName").getBytes("UTF-8"));
        fileDirectory = BASE + request.getSession().getId();
        File dir = new File(fileDirectory);
        dir.mkdir();
        // get filename to use on the server
        String outputfile = BASE + dir.getName() + "/" + fileName; // get path on the server
        FileOutputStream os = new FileOutputStream(outputfile);
        // write bytes taken from uploaded file to target file
        byte[] buffer = new byte[1024];
        int ch = is.read(buffer);
        final Object lock = new Object();
        while (ch != -1) {
            synchronized (lock) {
                os.write(buffer);
                ch = is.read(buffer);
            }
        }
        os.close();
        is.close();
    } catch (Exception ex) {
        writer.println("Exception -->" + ex.getMessage());
    } finally {
        try {
            myRequest = request;
            try {
                printFile(request.getSession().getId(), writer);
            } catch (IOException e) {
                // TODO Auto-generated catch block
                writer.println("Exception -->" + e.getMessage());
            }
            writer.close();
        } catch (InterruptedException e) {
            writer.println("Exception -->" + e.getMessage());
        }
    }
}
Thanks in advance :)
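A hedged guess at the cross-client mix-up (based only on the snippet above, not the full class): a servlet is a shared singleton, so any per-request state kept in instance fields, which is what json, fileName, fileDirectory and myRequest appear to be here since they are assigned without being declared, can be overwritten by concurrent requests. Keeping everything request-scoped, i.e. in local variables, is the usual fix; a skeleton sketch:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.json.JSONObject;

public class UploadServlet extends HttpServlet {
    // No per-request fields here: one servlet instance serves all clients concurrently.

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setCharacterEncoding("UTF-8");
        PrintWriter writer = response.getWriter();

        // All per-request state stays in local variables, declared inside doPost().
        JSONObject json;
        String fileName;
        String fileDirectory;

        // ... read the parts and write the file exactly as in the code above,
        //     passing these locals (and the request) to helper methods instead of
        //     storing them in fields ...

        writer.close();
    }
}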
Some PHP sites use a page that acts as a middle man for handling file downloads.
With a browser this works transparently; there is just a slight pause while the PHP page processes the request.
However, attempting the download from Java using a URL or HttpURLConnection returns a plain HTML page. How can I get the file download to work the same way?
Edit: Here is an example link:
http://depot.eice.be/index.php?annee_g=jour&cours=poo
Edit: Here is some of the code I've been testing:
// This returns an HTML page
private void downloadURL(String theURL) {
    URL url;
    InputStream is = null;
    DataInputStream dis;
    String s;
    StringBuffer sb = new StringBuffer();
    try {
        url = new URL(theURL);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.connect();
        if (conn.getResponseCode() != HttpURLConnection.HTTP_OK)
            return;
        InputStream in = conn.getInputStream();
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        int i;
        while ((i = in.read()) != -1) {
            bos.write(i);
        }
        byte[] b = bos.toByteArray();
        FileOutputStream fos = new FileOutputStream(getNameFromUrl(theURL));
        fos.write(b);
        fos.close();
        conn.disconnect();
    } catch (MalformedURLException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}

// This will throw Exceptions if the URL isn't in the expected format
public String getNameFromUrl(String url) {
    int slashIndex = url.lastIndexOf('/');
    int dotIndex = url.lastIndexOf('.');
    System.out.println("url:" + url + "," + slashIndex + "," + dotIndex);
    if (dotIndex == -1) {
        return url.substring(slashIndex + 1);
    } else {
        try {
            return url.substring(slashIndex + 1, url.length());
        } catch (StringIndexOutOfBoundsException e) {
            return "";
        }
    }
}
Considering no other constraints, you can read the redirect URL from the HTTP response header and connect to that URL directly from Java.
There is an API setting to follow redirects automatically, but it should be true by default. How do you access the URL?
See the Java API docs...
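A rough sketch of inspecting the redirect manually with HttpURLConnection, under the assumption that the PHP page answers with a 3xx status and a Location header (the output file name is a placeholder):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RedirectDownload {
    public static void main(String[] args) throws IOException {
        URL url = new URL("http://depot.eice.be/index.php?annee_g=jour&cours=poo");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setInstanceFollowRedirects(false); // inspect the redirect ourselves
        int code = conn.getResponseCode();
        if (code == HttpURLConnection.HTTP_MOVED_TEMP
                || code == HttpURLConnection.HTTP_MOVED_PERM
                || code == HttpURLConnection.HTTP_SEE_OTHER) {
            // Resolve a possibly relative Location against the original URL.
            URL target = new URL(url, conn.getHeaderField("Location"));
            System.out.println("Redirected to: " + target);
            try (InputStream in = target.openStream();
                 OutputStream out = new FileOutputStream("downloaded.file")) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
        } else {
            System.out.println("No redirect; response code " + code);
        }
        conn.disconnect();
    }
}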
I think I've found a solution using HttpUnit. The source of the framework is available if you wish to see how this is handled.
public void downloadURL(String url) throws IOException {
    WebConversation wc = new WebConversation();
    WebResponse indexResp = wc.getResource(new GetMethodWebRequest(url));
    WebLink[] links = new WebLink[1];
    try {
        links = indexResp.getLinks();
    } catch (SAXException ex) {
        // Log
    }
    for (WebLink link : links) {
        try {
            link.click();
        } catch (SAXException ex) {
            // Log
        }
        WebResponse resp = wc.getCurrentPage();
        String fileName = resp.getURL().getFile();
        fileName = fileName.substring(fileName.lastIndexOf("/") + 1);
        System.out.println("filename:" + fileName);
        File file = new File(fileName);
        BufferedInputStream bis = new BufferedInputStream(resp.getInputStream());
        BufferedOutputStream bos = new BufferedOutputStream(
                new FileOutputStream(file.getName()));
        int i;
        while ((i = bis.read()) != -1) {
            bos.write(i);
        }
        bis.close();
        bos.close();
    }
    System.out.println("Done downloading.");
}