I am using the code below, but it's not working.
When I open imageUrl in a browser it redirects somewhere else, and then it works.
But I have n number of Facebook IDs, and the redirected URL is different every time.
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;
public class SaveImageFromUrl {
public static void main(String[] args) throws Exception {
String imageUrl = "http://graph.facebook.com/67563683055/picture?type=square";
String destinationFile = "C:\\Users\\emtx\\Desktop\\Nxg-pic.png";
saveImage(imageUrl, destinationFile);
}
public static void saveImage(String imageUrl, String destinationFile) throws IOException {
URL url = new URL(imageUrl);
InputStream is = url.openStream();
OutputStream os = new FileOutputStream(destinationFile);
byte[] b = new byte[2048];
int length;
while ((length = is.read(b)) != -1) {
os.write(b, 0, length);
}
is.close();
os.close();
}
}
In addition to Phsemo's answer, you can also disable the redirection with the redirect parameter and use the following API call to get the real URL as JSON:
https://graph.facebook.com/67563683055/picture?type=square&redirect=false
This is what the JSON looks like:
{
"data": {
"is_silhouette": false,
"url": "https://scontent.xx.fbcdn.net/hprofile-xaf1/v/t1.0-1/c44.44.544.544/s50x50/316295_10151906553973056_2129080216_n.jpg?oh=04888b6ef5d631447227b42d82ebd35d&oe=57250EA4"
}
}
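If you go that route, the download becomes a two-step affair: fetch the JSON, pull out data.url, then stream that URL to disk. A minimal sketch, assuming the org.json library is on the classpath (any JSON parser would do):

import java.io.InputStream;
import java.net.URL;
import java.util.Scanner;
import org.json.JSONObject; // assumption: org.json is available

public static String fetchRealPictureUrl(String facebookId) throws Exception {
    String api = "https://graph.facebook.com/" + facebookId + "/picture?type=square&redirect=false";
    try (InputStream in = new URL(api).openStream();
         Scanner scanner = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
        // The CDN address sits under data.url in the JSON shown above
        JSONObject json = new JSONObject(scanner.next());
        return json.getJSONObject("data").getString("url");
    }
}

The returned URL can then be passed straight to the existing saveImage method.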
The address you are using redirects you to another location, and the redirect switches protocols (HTTP to HTTPS), which HttpURLConnection will not follow automatically. So what you get is
headers containing information about the redirection (like Location, which points you to the new address),
and possibly a body (but in this case it is empty).
You may want to create a method like this (based on: http://www.mkyong.com/java/java-httpurlconnection-follow-redirect-example/):
public static String getFinalLocation(String address) throws IOException{
URL url = new URL(address);
HttpURLConnection conn = (HttpURLConnection)url.openConnection();
int status = conn.getResponseCode();
if (status != HttpURLConnection.HTTP_OK)
{
if (status == HttpURLConnection.HTTP_MOVED_TEMP
|| status == HttpURLConnection.HTTP_MOVED_PERM
|| status == HttpURLConnection.HTTP_SEE_OTHER)
{
String newLocation = conn.getHeaderField("Location");
return getFinalLocation(newLocation);
}
}
return address;
}
and change your
URL url = new URL(imageUrl);
to
URL url = new URL(getFinalLocation(imageUrl));
Related
I'm trying to write code based on a manual operation. Manually, I have a URL, and when I paste it into the Chrome browser, the browser automatically downloads the PDF file from that URL and saves it to the "download" folder without prompting for any user input. With the code below, I'm able to accomplish the same thing as the manual operation. However, I would like the code to save the PDF into a specific folder instead of the default "download" folder. Is that possible?
public static void browseURL() {
try {
String url ="mycompanyURL";
System.out.println("url " + url );
Desktop desktop = Desktop.getDesktop();
URI uri = new URI (url);
desktop.browse(uri);
}catch(Exception err) {
System.out.println("exception " + err.getMessage());
}
}
When I had to do that in old versions of Java, I used the following snippet (pure Java, source: Baeldung).
public void streamFromUrl(String downloadUrl, String filePath) throws IOException {
File file = new File(filePath);
try (BufferedInputStream in = new BufferedInputStream(new URL(downloadUrl).openStream());
FileOutputStream fileOutputStream = new FileOutputStream(file)) {
byte[] dataBuffer = new byte[1024];
int bytesRead;
while ((bytesRead = in.read(dataBuffer, 0, 1024)) != -1) {
fileOutputStream.write(dataBuffer, 0, bytesRead);
}
}
}
The above opens an input stream on the URL and writes the bytes of that stream into a file output stream (the file can be wherever you wish).
Alternatively, there are many libraries doing that in one/two liners (the article I posted shows some of those alternatives).
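For example, with Apache Commons IO on the classpath (an assumption; it is one of the libraries covered there), the whole method body collapses to one line:

// One-liner with Apache Commons IO, assuming commons-io is on the classpath
FileUtils.copyURLToFile(new URL(downloadUrl), new File(filePath));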
Also, starting from more recent versions of Java, there are other shorter options:
public void streamFromUrl(String downloadUrl, String filePath) throws IOException {
try (InputStream in = new URL(downloadUrl).openStream()) {
Files.copy(in, Paths.get(filePath), StandardCopyOption.REPLACE_EXISTING);
}
}
Depending on the version of Java you have, you may pick one of those. Generally speaking, I suggest reading through Baeldung's article and checking which option best suits you.
Here you go. It handles redirects and so on; use and modify it as you wish. Have fun with it. All in native Java. I wrote this to download some media easily. It can download media like images, videos and documents.
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpRequest.Builder;
import java.net.http.HttpResponse.BodyHandlers;
import java.nio.file.Files;
import java.nio.file.Path;
public class Downloader {
public static void download(String url) {
final HttpClient hc = HttpClient.newHttpClient();
final Builder requestBuilder = HttpRequest.newBuilder().version(HttpClient.Version.HTTP_1_1);
Path path = Path.of("myfilepath");
handleGet(hc, "myfile.pdf", url, path, requestBuilder);
}
private static void handleGet(
final HttpClient hc,
final String fileName,
final String url,
final Path filePath,
final Builder requestBuilder
) {
final HttpRequest request = requestBuilder.uri(URI.create(url)).build();
hc.sendAsync(request, BodyHandlers.ofInputStream())
.thenApply(resp -> {
int sc = resp.statusCode();
System.out.println("STATUSCODE: "+sc+" for url '"+url+"'");
if(sc >= 200 && sc < 300) return resp;
if(sc == 302) {
System.out.println("Handling 302...");
String newUrl = resp.headers().firstValue("location").get();
handleGet(hc, fileName, newUrl, filePath, requestBuilder);
}
return resp;
})
.thenAccept(resp -> {
int sc = resp.statusCode();
if(sc >= 200 && sc < 300) {
try {
System.out.println("Im fine here");
Files.copy(resp.body(), filePath);
} catch (IOException e) {
throw new RuntimeException(e);
}
} else {
System.err.println("STATUSCODE: "+ sc +" for file "+ fileName);
}
}).join();
}
}
In Java, this code throws an exception when the HTTP result is in the 404 range:
URL url = new URL("http://stackoverflow.com/asdf404notfound");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.getInputStream(); // throws!
In my case, I happen to know that the content is 404, but I'd still like to read the body of the response anyway.
(In my actual case the response code is 403, but the body of the response explains the reason for rejection, and I'd like to display that to the user.)
How can I access the response body?
Here is the bug report (closed, will not fix, not a bug).
Their advice there is to code like this:
HttpURLConnection httpConn = (HttpURLConnection)_urlConnection;
InputStream _is;
if (httpConn.getResponseCode() < HttpURLConnection.HTTP_BAD_REQUEST) {
_is = httpConn.getInputStream();
} else {
/* error from server */
_is = httpConn.getErrorStream();
}
It's the same problem I was having:
HttpURLConnection throws a FileNotFoundException if you try to read getInputStream() from the connection.
You should instead use getErrorStream() when the status code is 400 or higher.
More than this, be careful: 200 is not the only success status code; 201, 204, etc. are also often used as success statuses.
Here is an example of how I managed it:
... connection code code code ...
// Get the response code
int statusCode = connection.getResponseCode();
InputStream is = null;
if (statusCode >= 200 && statusCode < 400) {
// Create an InputStream in order to extract the response object
is = connection.getInputStream();
}
else {
is = connection.getErrorStream();
}
... callback/response to your handler....
In this way, you'll be able to get the needed response in both success and error cases.
Hope this helps!
In .NET you have the Response property of the WebException, which gives access to the stream when an exception occurs. So I guess this is a good way for Java too:
private InputStream dispatch(HttpURLConnection http) throws Exception {
try {
return http.getInputStream();
} catch(Exception ex) {
return http.getErrorStream();
}
}
Or an implementation I used. (Might need changes for encoding or other things; it works in my current environment.)
private String dispatch(HttpURLConnection http) throws Exception {
try {
return readStream(http.getInputStream());
} catch(Exception ex) {
readAndThrowError(http);
return null; // <- never gets here, previous statement throws an error
}
}
private void readAndThrowError(HttpURLConnection http) throws Exception {
if (http.getContentLengthLong() > 0 && http.getContentType().contains("application/json")) {
String json = this.readStream(http.getErrorStream());
Object oson = this.mapper.readValue(json, Object.class);
json = this.mapper.writer().withDefaultPrettyPrinter().writeValueAsString(oson);
throw new IllegalStateException(http.getResponseCode() + " " + http.getResponseMessage() + "\n" + json);
} else {
throw new IllegalStateException(http.getResponseCode() + " " + http.getResponseMessage());
}
}
private String readStream(InputStream stream) throws Exception {
StringBuilder builder = new StringBuilder();
try (BufferedReader in = new BufferedReader(new InputStreamReader(stream))) {
String line;
while ((line = in.readLine()) != null) {
builder.append(line); // + "\r\n"(no need, json has no line breaks!)
}
}
System.out.println("JSON: " + builder.toString());
return builder.toString();
}
I know that this doesn't answer the question directly, but instead of using the HTTP connection library provided by Sun, you might want to take a look at Commons HttpClient, which (in my opinion) has a far easier API to work with.
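For illustration, with a current Apache HttpClient (4.x) the error body is read exactly like a success body, so the getInputStream()/getErrorStream() split disappears; a sketch, assuming httpclient is on the classpath:

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

CloseableHttpClient client = HttpClients.createDefault();
try (CloseableHttpResponse response = client.execute(new HttpGet("http://stackoverflow.com/asdf404notfound"))) {
    int status = response.getStatusLine().getStatusCode(); // 404 here
    String body = EntityUtils.toString(response.getEntity()); // body is available regardless of status
}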
First check the response code and then use HttpURLConnection.getErrorStream():
InputStream is = null;
if (httpConn.getResponseCode() != 200) {
    /* error from server */
    is = httpConn.getErrorStream();
} else {
    is = httpConn.getInputStream();
}
My running code.
HttpURLConnection httpConn = (HttpURLConnection) urlConn;
if (httpConn.getResponseCode() < HttpURLConnection.HTTP_BAD_REQUEST) {
in = new InputStreamReader(httpConn.getInputStream());
} else {
/* error from server */
in = new InputStreamReader(httpConn.getErrorStream());
}
BufferedReader bufferedReader = new BufferedReader(in);
int cp;
while ((cp = bufferedReader.read()) != -1) {
sb.append((char) cp);
}
bufferedReader.close();
in.close();
System.out.println("sb="+sb);
How to read a 404 response body in Java:
Use the Apache library - https://hc.apache.org/httpcomponents-client-4.5.x/httpclient/apidocs/
or
Java 11 - https://docs.oracle.com/en/java/javase/11/docs/api/java.net.http/java/net/http/HttpClient.html
The snippet given below uses Apache:
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.util.EntityUtils;
CloseableHttpClient client = HttpClients.createDefault();
CloseableHttpResponse resp = client.execute(new HttpGet(domainName + "/blablablabla.html"));
String response = EntityUtils.toString(resp.getEntity());
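For the Java 11 route mentioned above, a minimal sketch; java.net.http does not throw on a 404, the body is simply there:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder(URI.create(domainName + "/blablablabla.html")).build();
HttpResponse<String> resp = client.send(request, HttpResponse.BodyHandlers.ofString());
int status = resp.statusCode(); // e.g. 404
String body = resp.body();      // the error page's body, no exception thrown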
I am using the Jsoup Java HTML parser to fetch images from a particular URL. But some of the images return a status 502 error code and are not saved to my machine. Here is the code snippet I have used:
String url = "http://www.jabong.com";
String html = Jsoup.connect(url).get().html();
Document doc = Jsoup.parse(html, url);
images = doc.select("img");
for (Element element : images) {
String imgSrc = element.attr("abs:src");
log.info(imgSrc);
if (!imgSrc.isEmpty()) {
saveFromUrl(imgSrc, dirPath+"/" + nameCounter + ".jpg");
try {
Thread.sleep(3000);
} catch (InterruptedException e) {
log.error("error in sleeping");
}
nameCounter++;
}
}
And the saveFromUrl function looks like this:
public static void saveFromUrl(String Url, String destinationFile) {
try {
URL url = new URL(Url);
InputStream is = url.openStream();
OutputStream os = new FileOutputStream(destinationFile);
byte[] b = new byte[2048];
int length;
while ((length = is.read(b)) != -1) {
os.write(b, 0, length);
}
is.close();
os.close();
} catch (IOException e) {
log.error("Error in saving file from url:" + Url);
//e.printStackTrace();
}
}
I searched the internet about status code 502, and it says the error is due to a bad gateway. I don't understand this. One possibility I'm considering is that the error occurs because I am sending GET requests to the images in a loop; maybe the web server can't handle that much load and denies the requests while a previous image is still being sent. So I tried sleeping after fetching every image, but no luck. :(
Any advice, please?
Here's a full code example that works for me...
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.Authenticator;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.MalformedURLException;
import java.net.Proxy;
import java.net.SocketAddress;
import java.net.URL;
public class DownloadImage {
public static void main(String[] args) {
// URLs for Images we wish to download
String[] urls = {
"http://cdn.sstatic.net/stackoverflow/img/apple-touch-icon.png",
"http://www.google.co.uk/images/srpr/logo3w.png",
"http://i.microsoft.com/global/en-us/homepage/PublishingImages/sprites/microsoft_gray.png"
};
for(int i = 0; i < urls.length; i++) {
downloadFromUrl(urls[i]);
}
}
/*
Extract the file name from the URL
*/
private static String getOutputFileName(URL url) {
String[] urlParts = url.getPath().split("/");
return "c:/temp/" + urlParts[urlParts.length-1];
}
/*
Assumes there is no Proxy server involved.
*/
private static void downloadFromUrl(String urlString) {
InputStream is = null;
FileOutputStream fos = null;
try {
URL url = new URL(urlString);
System.out.println("Reading..." + url);
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
is = conn.getInputStream();
String filename = getOutputFileName(url);
fos = new FileOutputStream(filename);
byte[] readData = new byte[1024];
int i = is.read(readData);
while(i != -1) {
fos.write(readData, 0, i);
i = is.read(readData);
}
System.out.println("Created file: " + filename);
}
catch (MalformedURLException e) {
e.printStackTrace();
}
catch (IOException e) {
e.printStackTrace();
}
finally {
if(is != null) {
try {
is.close();
} catch (IOException e) {
System.out.println("Big problems if InputStream cannot be closed");
}
}
if(fos != null) {
try {
fos.close();
} catch (IOException e) {
System.out.println("Big problems if FileOutputSream cannot be closed");
}
}
}
System.out.println("Completed");
}
}
You should see the following output on your console...
Reading...http://cdn.sstatic.net/stackoverflow/img/apple-touch-icon.png
Created file: c:/temp/apple-touch-icon.png
Completed
Reading...http://www.google.co.uk/images/srpr/logo3w.png
Created file: c:/temp/logo3w.png
Completed
Reading...http://i.microsoft.com/global/en-us/homepage/PublishingImages/sprites/microsoft_gray.png
Created file: c:/temp/microsoft_gray.png
Completed
So that's a working example without a Proxy server involved.
Only if you require authentication with a proxy server, here's an additional class you'll need, based on this Oracle technote:
import java.net.Authenticator;
import java.net.PasswordAuthentication;
public class ProxyAuthenticator extends Authenticator {
private String userName, password;
public ProxyAuthenticator(String userName, String password) {
this.userName = userName;
this.password = password;
}
protected PasswordAuthentication getPasswordAuthentication() {
return new PasswordAuthentication(userName, password.toCharArray());
}
}
And to use this new class, you would use the following code in place of the call to openConnection() shown above:
...
try {
URL url = new URL(urlString);
System.out.println("Reading..." + url);
Authenticator.setDefault(new ProxyAuthenticator("username", "password"));
SocketAddress addr = new InetSocketAddress("proxy.server.com", 80);
Proxy proxy = new Proxy(Proxy.Type.HTTP, addr);
HttpURLConnection conn = (HttpURLConnection)url.openConnection(proxy);
...
Your problem sounds like an HTTP communication issue, so you are probably better off using a library to handle the communication side of things. Take a look at Apache Commons HttpClient.
Some notes about your code example: you haven't used a URLConnection object, so it's not clear what the behaviour will be with regard to web/proxy servers, closing resources cleanly, etc. The HttpClient library mentioned will help in that respect.
There also seem to be some examples of doing what you want using J2ME libraries. Not something I have used personally, but it may also help you out.
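Incidentally, since you already use Jsoup, it can fetch the image bytes itself; a sketch reusing the variables from the question, where the browser-style User-Agent is an assumption (some servers reject Java's default agent, which can surface as gateway errors):

import org.jsoup.Connection;
import org.jsoup.Jsoup;
import java.nio.file.Files;
import java.nio.file.Paths;

// Fetch raw image bytes with Jsoup; ignoreContentType is required because
// Jsoup otherwise only accepts HTML/XML responses.
Connection.Response resp = Jsoup.connect(imgSrc)
        .userAgent("Mozilla/5.0") // assumption: a browser-like UA string
        .ignoreContentType(true)
        .execute();
Files.write(Paths.get(dirPath + "/" + nameCounter + ".jpg"), resp.bodyAsBytes());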
I'm trying to find Java's equivalent to Groovy's:
String content = "http://www.google.com".toURL().getText();
I want to read content from a URL into string. I don't want to pollute my code with buffered streams and loops for such a simple task. I looked into apache's HttpClient but I also don't see a one or two line implementation.
Now that some time has passed since the original answer was accepted, there's a better approach:
String out = new Scanner(new URL("http://www.google.com").openStream(), "UTF-8").useDelimiter("\\A").next();
If you want a slightly fuller implementation, which is not a single line, do this:
public static String readStringFromURL(String requestURL) throws IOException
{
try (Scanner scanner = new Scanner(new URL(requestURL).openStream(),
StandardCharsets.UTF_8.toString()))
{
scanner.useDelimiter("\\A");
return scanner.hasNext() ? scanner.next() : "";
}
}
This answer refers to an older version of Java. You may want to look at ccleve's answer.
Here is the traditional way to do this:
import java.net.*;
import java.io.*;
public class URLConnectionReader {
public static String getText(String url) throws Exception {
URL website = new URL(url);
URLConnection connection = website.openConnection();
BufferedReader in = new BufferedReader(
new InputStreamReader(
connection.getInputStream()));
StringBuilder response = new StringBuilder();
String inputLine;
while ((inputLine = in.readLine()) != null)
response.append(inputLine);
in.close();
return response.toString();
}
public static void main(String[] args) throws Exception {
String content = URLConnectionReader.getText(args[0]);
System.out.println(content);
}
}
As #extraneon has suggested, ioutils allows you to do this in a very eloquent way that's still in the Java spirit:
InputStream in = new URL( "http://jakarta.apache.org" ).openStream();
try {
System.out.println( IOUtils.toString( in ) );
} finally {
IOUtils.closeQuietly(in);
}
Or just use Apache Commons IOUtils.toString(URL url), or the variant that also accepts an encoding parameter.
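That variant is a genuine one-liner (the Charset overload assumes commons-io 2.3+):

String content = IOUtils.toString(new URL("http://jakarta.apache.org"), StandardCharsets.UTF_8);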
There's an even better way as of Java 9:
URL u = new URL("http://www.example.com/");
try (InputStream in = u.openStream()) {
return new String(in.readAllBytes(), StandardCharsets.UTF_8);
}
Like the original Groovy example, this assumes that the content is UTF-8 encoded. (If you need something more clever than that, you need to create a URLConnection and use it to figure out the encoding.)
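A minimal sketch of that fallback, reading the charset out of the Content-Type header and assuming UTF-8 when none is declared:

URLConnection conn = new URL("http://www.example.com/").openConnection();
String contentType = conn.getContentType(); // e.g. "text/html; charset=ISO-8859-1"
String charset = "UTF-8"; // assumed default when the header names no charset
if (contentType != null) {
    for (String param : contentType.split(";")) {
        param = param.trim();
        if (param.toLowerCase().startsWith("charset=")) {
            charset = param.substring("charset=".length());
        }
    }
}
try (InputStream in = conn.getInputStream()) {
    String text = new String(in.readAllBytes(), charset); // readAllBytes is Java 9+, as above
}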
Now that more time has passed, here's a way to do it in Java 8:
URLConnection conn = url.openConnection();
try (BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
pageText = reader.lines().collect(Collectors.joining("\n"));
}
Additional example using Guava:
URL xmlData = ...
String data = Resources.toString(xmlData, Charsets.UTF_8);
Java 11+:
URI uri = URI.create("http://www.google.com");
HttpRequest request = HttpRequest.newBuilder(uri).build();
String content = HttpClient.newHttpClient().send(request, BodyHandlers.ofString()).body();
If you have the input stream (see Joe's answer), also consider IOUtils.toString(inputStream).
http://commons.apache.org/io/api-1.4/org/apache/commons/io/IOUtils.html#toString(java.io.InputStream)
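For instance, with the charset overload (commons-io 2.3+, an assumption; older versions default to the platform encoding):

String content = IOUtils.toString(in, StandardCharsets.UTF_8); // 'in' being the URL's input stream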
The following works with Java 7/8, secure URLs, and shows how to add a cookie to your request as well. Note this is mostly a direct copy of this other great answer on this page, but adds the cookie example and clarifies that it works with secure URLs as well ;-)
If you need to connect to a server with an invalid or self-signed certificate, this will throw security errors unless you import the certificate. If you need this functionality, you could consider the approach detailed in this answer to this related question on StackOverflow.
Example
String result = getUrlAsString("https://www.google.com");
System.out.println(result);
outputs
<!doctype html><html itemscope="" .... etc
Code
import java.net.URL;
import java.net.URLConnection;
import java.io.BufferedReader;
import java.io.InputStreamReader;
public static String getUrlAsString(String url)
{
try
{
URL urlObj = new URL(url);
URLConnection con = urlObj.openConnection();
// Note: setDoOutput(true) is for sending a request body (it would turn the GET into a POST); it is not needed just to read the response.
con.setRequestProperty("Cookie", "myCookie=test123");
con.connect();
BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
StringBuilder response = new StringBuilder();
String inputLine;
String newLine = System.getProperty("line.separator");
while ((inputLine = in.readLine()) != null)
{
response.append(inputLine + newLine);
}
in.close();
return response.toString();
}
catch (Exception e)
{
throw new RuntimeException(e);
}
}
Here's Jeanne's lovely answer, but wrapped in a tidy function for muppets like me:
private static String getUrl(String aUrl) throws MalformedURLException, IOException
{
String urlData = "";
URL urlObj = new URL(aUrl);
URLConnection conn = urlObj.openConnection();
try (BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8)))
{
urlData = reader.lines().collect(Collectors.joining("\n"));
}
return urlData;
}
URL to String in pure Java
Example call to get the payload from an HTTP GET call:
String str = getStringFromUrl("YourUrl");
Implementation
You can use the method described in this answer, on How to read URL to an InputStream and combine it with this answer on How to read InputStream to String.
The outcome will be something like
public String getStringFromUrl(URL url) throws IOException {
return inputStreamToString(urlToInputStream(url,null));
}
public String inputStreamToString(InputStream inputStream) throws IOException {
try(ByteArrayOutputStream result = new ByteArrayOutputStream()) {
byte[] buffer = new byte[1024];
int length;
while ((length = inputStream.read(buffer)) != -1) {
result.write(buffer, 0, length);
}
return result.toString("UTF-8");
}
}
private InputStream urlToInputStream(URL url, Map<String, String> args) {
HttpURLConnection con = null;
InputStream inputStream = null;
try {
con = (HttpURLConnection) url.openConnection();
con.setConnectTimeout(15000);
con.setReadTimeout(15000);
if (args != null) {
for (Entry<String, String> e : args.entrySet()) {
con.setRequestProperty(e.getKey(), e.getValue());
}
}
con.connect();
int responseCode = con.getResponseCode();
/* By default the connection will follow redirects. The following
* block is only entered if the implementation of HttpURLConnection
* does not perform the redirect. The exact behavior depends to
* the actual implementation (e.g. sun.net).
* !!! Attention: This block allows the connection to
* switch protocols (e.g. HTTP to HTTPS), which is <b>not</b>
* default behavior. See: https://stackoverflow.com/questions/1884230
* for more info!!!
*/
if (responseCode < 400 && responseCode > 299) {
String redirectUrl = con.getHeaderField("Location");
try {
URL newUrl = new URL(redirectUrl);
return urlToInputStream(newUrl, args);
} catch (MalformedURLException e) {
URL newUrl = new URL(url.getProtocol() + "://" + url.getHost() + redirectUrl);
return urlToInputStream(newUrl, args);
}
}
/*!!!!!*/
inputStream = con.getInputStream();
return inputStream;
} catch (Exception e) {
throw new RuntimeException(e);
}
}
Pros
It is pure java
It can be easily enhanced by adding different headers as a map (instead of passing a null object, like the example above does), authentication, etc.
Handling of protocol switches is supported
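A hypothetical call that exercises the header map (header names and values here are purely illustrative):

Map<String, String> headers = new HashMap<>();
headers.put("Accept", "application/json"); // illustrative header
String body = inputStreamToString(urlToInputStream(new URL("http://www.example.com/"), headers));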
I'm making calls to Alfresco Webscripts which return JSON. I do this using GET requests, which all work perfectly. If I do a file POST, however, the Alfresco server receives the file correctly and sends back a JSON response, but this time the response causes the browser to prompt for a download instead of letting the JavaScript process the callback.
Now, all these calls go through a "home made" reverse proxy (see below) which uses HttpURLConnection. This proxy routes all the calls to an Alfresco instance running on another host. Everything else works fine (PNGs, text, HTML, GET requests, even authentication). In both GET and POST responses the Content-Type is "application/json;charset=UTF-8".
Many thanks for any responses.
import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.*;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.commons.codec.binary.Base64;
public class ReverseProxy extends GenericServlet{
public static final String SERVER_URL = "serverURL";
protected String serverURL;
protected boolean debug;
public ReverseProxy(){
}
public void init(ServletConfig config) throws ServletException {
super.init(config);
debug = Boolean.valueOf(config.getInitParameter("debug")).booleanValue();
serverURL = config.getInitParameter("serverURL");
if(serverURL == null){
throw new ServletException("ReverseProxy servlet initialization parameter 'serverURL' not defined");
}
}
public void service(ServletRequest req, ServletResponse resp) throws ServletException, IOException {
InputStream inputStream;
OutputStream outputStream;
if(debug){System.out.println("ReverseProxy.service()");}
HttpServletRequest request;
HttpServletResponse response;
try{
request = (HttpServletRequest)req;
response = (HttpServletResponse)resp;
}
catch(ClassCastException e){
throw new ServletException("non-HTTP request or response");
}
String method = request.getMethod();
StringBuffer urlBuffer = new StringBuffer();
urlBuffer.append(serverURL);
urlBuffer.append(request.getServletPath());
if(request.getPathInfo() != null)
urlBuffer.append(request.getPathInfo());
if(request.getQueryString() != null){
urlBuffer.append('?');
urlBuffer.append(request.getQueryString());
}
URL url = new URL(urlBuffer.toString());
//pass authentication
String user=null, password=null;
Set entrySet = req.getParameterMap().entrySet();
Map headers = new HashMap();
for ( Object anEntrySet : entrySet ) {
Map.Entry header = (Map.Entry) anEntrySet;
String key = (String) header.getKey();
String value = ((String[]) header.getValue())[0];
if ("user".equals(key)) {
user = value;
} else if ("password".equals(key)) {
password = value;
}else {
headers.put(key, value);
}
}
String userpass = null;
if (user != null && password != null) {
userpass = user+":"+password;
}
String auth = request.getHeader("Authorization");
if(auth != null){
if (auth.toUpperCase().startsWith("BASIC ")){
String userpassEncoded = auth.substring(6);
userpass = new String(Base64.decodeBase64(userpassEncoded.getBytes()));
}
}
String digest=null;
if (userpass!=null) {
if(debug){System.out.println("ReverseProxy found userpass:" + userpass);}
digest = "Basic " + new String(Base64.encodeBase64((userpass).getBytes()));
}
else{
if(debug){System.out.println("ReverseProxy found no auth credentials");}
}
//do connection
HttpURLConnection connection = null;
connection = (HttpURLConnection) url.openConnection();
if (digest != null) {connection.setRequestProperty("Authorization", digest);}
connection.setRequestMethod(method);
connection.setDoInput(true);
if(method.equals("POST")){
if(request.getHeader("Content-Type") != null){
if(debug){System.out.println("ReverseProxy Content-Type: " + request.getHeader("Content-Type"));}
if(debug){System.out.println("ReverseProxy Content-Length: " + request.getHeader("Content-Length"));}
if(request.getHeader("Content-Type").indexOf("multipart/form-data") != -1){
connection.setRequestProperty("Content-Type", request.getHeader("Content-Type"));
connection.setRequestProperty("Content-Length", request.getHeader("Content-Length"));
}
}
connection.setDoOutput(true);
}
if(debug){
System.out.println((new StringBuilder()).append("ReverseProxy: URL=").append(url).append(" method=").append(method).toString());
}
//set headers
Set headersSet = headers.entrySet();
for ( Object aHeadersSet : headersSet ) {
Map.Entry header = (Map.Entry) aHeadersSet;
connection.setRequestProperty((String) header.getKey(), (String) header.getValue());
}
connection.connect();
inputStream = null;
outputStream = null;
try{
if(method.equals("POST")){
javax.servlet.ServletInputStream servletInputStream = request.getInputStream();
outputStream = connection.getOutputStream();
copy(servletInputStream, outputStream);
}
response.setContentLength(connection.getContentLength());
response.setContentType(connection.getContentType());
if(debug){System.out.println("ReverseProxy Connection Content-Type: " + connection.getContentType());}
response.setCharacterEncoding(connection.getContentEncoding());
String cacheControl = connection.getHeaderField("Cache-Control");
if(cacheControl != null){
response.setHeader("Cache-Control", cacheControl);
}
int responseCode = connection.getResponseCode();
response.setStatus(responseCode);
if(responseCode == 401){
response.setHeader("WWW-Authenticate", "Basic realm=\"Login Required\"");
}
for( Iterator i = connection.getHeaderFields().entrySet().iterator() ; i.hasNext() ;){
Map.Entry mapEntry = (Map.Entry)i.next();
if(mapEntry.getKey()!=null){
response.setHeader(mapEntry.getKey().toString(), ((List)mapEntry.getValue()).get(0).toString());
}
}
//if(debug){System.out.println("ReverseProxy Connection Content-Disposition: " + connection.getHeaderField("Content-Disposition"));}
if(debug){System.out.println((new StringBuilder()).append("ReverseProxy: response code '").append(responseCode).append("' from ").append(url).toString());}
if (responseCode == 200 || responseCode == 201) {
inputStream = connection.getInputStream();
}
else{
inputStream = connection.getErrorStream();
}
javax.servlet.ServletOutputStream servletOutputStream = response.getOutputStream();
copy(inputStream, servletOutputStream);
}
catch(IOException ex){
if(debug)
ex.printStackTrace();
throw ex;
}
finally{
if(inputStream != null){
inputStream.close();
}
if(outputStream != null){
outputStream.close();
}
}
}
public long copy(InputStream input, OutputStream output) throws IOException{
byte buffer[] = new byte[4096];
long count = 0L;
for(int n = 0; -1 != (n = input.read(buffer));){
output.write(buffer, 0, n);
count += n;
}
output.flush();
if(debug)
System.err.println((new StringBuilder()).append("copy ").append(count).append(" bytes").toString());
return count;
}
}
I guess the problem is more on the client side, or a misconception on your side. It's correct behaviour for the browser to prompt to download the file when it has a content type of application/json, because the browser itself doesn't know how to handle it. The browser can only display content whose type matches at least text/* or image/*.
Normally, JSON responses are handled internally by JavaScript, which can perfectly handle ajax responses with a content type of application/json. You can test it by changing the type to text/plain or text/javascript; you'll see that the browser displays it (because it matches text/*). But for JSON the correct content type is indeed application/json. Just keep it as is and use the right tools to download/open the JSON ;)
Solved (as per my comment).
If the request is an XmlHttpRequest sent from JavaScript, then the "application/json" content type will be understood and a download will not occur. This is true for both GET and POST requests. If one is doing a file upload, libraries such as jQuery, ExtJS, etc. create a hidden form with a setting of "application/x-www-form-urlencoded" and post it (all without the user's interaction). This means the response is interpreted by the browser, not by JavaScript. The only way around this is to set the content type of the returning JSON to "text/html" (NOT "text/plain", or else the browser tries to add tags).
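Applied to the proxy above, that amounts to rewriting the Content-Type only when the response is JSON and the request did not come from XmlHttpRequest; a hedged sketch (the X-Requested-With check is an assumption, most JS libraries set it but it is not guaranteed):

String respType = connection.getContentType();
boolean isAjax = "XMLHttpRequest".equals(request.getHeader("X-Requested-With")); // assumption: set by most JS libraries
if (respType != null && respType.startsWith("application/json") && !isAjax) {
    // Form-based file upload: let the browser render the JSON instead of offering a download
    response.setContentType("text/html");
} else {
    response.setContentType(respType);
}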