I'm writing a simple web application that makes just one GET request with custom headers. When I tried making the request using AJAX, it gave me a cross-domain error like so:
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'http://localhost:8080' is therefore not allowed access.
When I make the same request in Java using custom headers, it works completely fine.
public static String executeGET() {
    String response = "";
    try {
        URL url = new URL("http://....");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("GET");
        // set custom headers
        con.setRequestProperty("header1", "2.0");
        con.setRequestProperty("header2", "sellingv2");
        con.connect();

        InputStreamReader reader = new InputStreamReader(con.getInputStream());
        Scanner scanner = new Scanner(reader);
        while (scanner.hasNext()) {
            response += scanner.next();
        }
        scanner.close();
        con.disconnect();
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    return response;
}
Why does this work in Java and not with AJAX?
This request works in Java and not with AJAX because AJAX calls are made from within a web browser. Web browsers enforce a "same-origin policy", which prevents front-end scripts from performing potentially malicious cross-origin requests. Your Java application is not subject to this restriction, so it can make the request just fine. The Access-Control-Allow-Origin response header lets a server relax this policy, but your server is not configured to send it. It is most likely the case that the protocol, host, or port in your URL string does not match what is hosting your front-end files. If you change your URL to a relative path, it should work.
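If you do control the server, the fix belongs there. Below is a sketch of a server opting in to cross-origin access, using the JDK's built-in com.sun.net.httpserver as a stand-in for whatever actually serves the API; the origin and the custom header names are taken from the question, and the /data path is made up:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class CorsDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in server (JDK built-in) that opts in to cross-origin access.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/data", ex -> {
            // Whitelist the front-end's origin; "*" would allow any origin.
            ex.getResponseHeaders().add("Access-Control-Allow-Origin", "http://localhost:8080");
            // Custom request headers must also be allowed for the preflight check.
            ex.getResponseHeaders().add("Access-Control-Allow-Headers", "header1, header2");
            byte[] body = "ok".getBytes("UTF-8");
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();

        // A non-browser client ignores CORS entirely, but the header is now
        // present for browsers to check.
        URL url = new URL("http://localhost:" + server.getAddress().getPort() + "/data");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        System.out.println(con.getHeaderField("Access-Control-Allow-Origin"));
        server.stop(0);
    }
}
```

Note that the browser, not the server, enforces the policy: the server merely declares which origins are allowed, which is why the Java client sees the response either way.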
Related
I'm validating links by trying to hit them and getting the response codes (in Java). But I get invalid response codes (403 or 404) from my code, whereas in the browser I get a 200 status code when I inspect the network activity. Here's my code that gets the response code. [I do basic validations on URLs beforehand, like making them lowercase, etc.]
static int getResponseCode(String link) throws IOException {
    URL url = new URL(link);
    HttpURLConnection http = (HttpURLConnection) url.openConnection();
    return http.getResponseCode();
}
For a link like http://science.sciencemag.org/content/220/4599/868, I get a 403 status when I run this code, but in the browser (Chrome) I get a 200 status. If I use the curl command below, I also get a 200 status code.
curl -Is http://science.sciencemag.org/content/220/4599/868
The only way to overcome that is to:
check what HTTP headers your program sends (for instance, by sending queries to http://scooterlabs.com/echo and checking the response)
check what HTTP headers your browser sends (for instance, by visiting https://www.whatismybrowser.com/detect/what-http-headers-is-my-browser-sending )
spot the differences
change your program to send the same headers as your browser (the ones that work)
I made this analysis for you, and it turns out this website requires an Accept header that resembles the Accept header of an actual browser. By default Java sends something valid, but not resembling that.
You just need to change your program as so:
static int getResponseCode(String link) throws IOException {
    URL url = new URL(link);
    HttpURLConnection http = (HttpURLConnection) url.openConnection();
    http.setRequestProperty("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
    return http.getResponseCode();
}
(Or any other value that an actual browser uses)
I'm trying to open an InputStream to a certain URL as given by the service's API. However, the URL does not have a set protocol (it's not http or https), and without one I am getting the error below.
Is there any way to make a request without a protocol?
Exception:
Exception in thread "main" java.net.MalformedURLException: no protocol.
Code:
String url = "maple.fm/api/2/search?server=1";
InputStream is = new URL(url).openStream();
UPDATE: I now updated the code to:
Code:
String url = "http://maple.fm/api/2/search?server=1";
InputStream is = new URL(url).openStream();
and now I'm getting the following error:
Exception:
Exception in thread "main" java.io.IOException: Server returned HTTP response code: 403 for URL: http://maple.fm/api/2/search?server=1
A URL without a protocol is not a valid URL. It is actually a relative URI, and you can only use a relative URI if you have an absolute URI (or equivalent) to provide the context for resolving it.
Is there any way to [make] a request without a protocol?
Basically .... No. The protocol tells the client-side libraries how to perform the request. If there is no protocol, the libraries would not know what to do.
The reason that "urls without protocols" work in a web browser's URL bar is that the browser is being helpful, and filling in the missing protocol with "http:" ... on the assumption that that is what the user probably means. (Plus a whole bunch of other stuff, like adding "www.", adding ".com", escaping spaces and other illegal characters, ... or trying a search instead of a normal HTTP GET request.)
Now you could try to do the same kind of normalization in your code before passing the URL string to the URL class. But IMO, the correct solution, if you are writing code to talk to a service, is to just fix the URL. Put the correct protocol on the front.
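If you do want to mimic the browser's "assume http:" behavior, a minimal version of that normalization might look like this (the scheme-detection regex is a simplification: a bare "host:port" string would also match it and need extra handling):

```java
public class UrlNormalizer {
    // Sketch: prepend "http://" when the string names no scheme, before
    // handing it to java.net.URL.
    static String withProtocol(String url) {
        if (url.matches("^[a-zA-Z][a-zA-Z0-9+.-]*:.*")) {
            return url; // already has a scheme (http:, https:, ftp:, ...)
        }
        return "http://" + url; // assume plain HTTP, as a browser's address bar does
    }

    public static void main(String[] args) {
        System.out.println(withProtocol("maple.fm/api/2/search?server=1"));
        System.out.println(withProtocol("https://example.com"));
    }
}
```

This only fills in a missing scheme; it deliberately skips the browser's other guesses (adding "www.", escaping spaces, falling back to a search).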
The 403 error you are now getting means Forbidden. The server is saying "you are not permitted to do this".
Check the documentation for the service you are trying to use. (Perhaps you need to go through some kind of login procedure. Perhaps what you are trying to do is only permitted for certain users, or something.)
Try the example URL on this page ... which incidentally works for me from my web browser.
When you say it does not have a set protocol, I am a little bit suspicious of what that means. If it can use multiple protocols, I would hope the API documentation mentions some way of determining what the protocol should be.
I hit the URL http://maple.fm/api/2/search?server=1 and it simply returns JSON over HTTP. I think your actual problem is that you are trying to open a raw InputStream to talk to a web server. I believe the solution to your problem, of handling JSON over HTTP, can be found here.
I decided to dig into this because I was curious. Combining this answer and this answer, we have the following code which will print out the JSON output from your URL. Of course, you still need a JSON library to parse it, but that's a separate problem.
import java.net.*;
import java.io.*;

public class Main {
    public static String getHTML(String urlToRead) {
        String result = "";
        try {
            URL url = new URL(urlToRead);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11");
            BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            String line;
            while ((line = rd.readLine()) != null) {
                result += line;
            }
            rd.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return result;
    }

    public static void main(String[] args) {
        String url = "http://maple.fm/api/2/search?server=1";
        System.out.println(getHTML(url));
    }
}
You need to surround it with a try/catch block:
try {
    String url = "maple.fm/api/2/search?world=1";
    InputStream is = new URL(url).openStream();
} catch (MalformedURLException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
I have a situation where an intermediate servlet needs to be introduced that will handle requests from an existing project and redirect the manipulated response to either the existing project or the new one. This servlet will act as an interface for logging into the new project from some other application.
So currently I use the following code to get back response in jsp as an xml.
var jqxhr = $.post("http://abhishek:15070/abc/login.action",
    { emailaddress: "ars#gmail.com",
      projectid: "123" },
    function(xml) {
        if ($(xml).find('isSuccess').text() == "true") {
            sessiontoken = $(xml).find('sessiontoken').text();
            setCookie("abcsessionid", sessiontoken, 1);
            setCookie("abcusername", e_add, 1);
        }
    }
).error(function() {
    if (jqxhr.responseText == 'INVALID_SESSION') {
        alert("Your session has timed out");
        window.location.replace("http://abhishek:15070/abc/index.html");
    } else {
        alert(jqxhr.responseText);
    }
});
xml content
<Response>
<sessiontoken>334465683124</sessiontoken>
<isSuccess>true</isSuccess>
</Response>
but now I want the same thing to be done using a servlet. Is that possible?
String emailid=(String) request.getParameter("emailaddress");
String projectid=(String) request.getParameter("projectid");
Update
I just came up with something.
Is it possible to return an HTML page with a form (from the servlet) that submits itself on body load, and on submission receives the response XML, which then gets processed?
Use java.net.URLConnection or Apache HttpComponents Client. Then parse the returned HTTP response with an XML tool such as JAXB or similar.
Kickoff example:
String emailaddress = request.getParameter("emailaddress");
String projectid = request.getParameter("projectid");
String charset = "UTF-8";
String query = String.format("emailaddress=%s&projectid=%s",
    URLEncoder.encode(emailaddress, charset),
    URLEncoder.encode(projectid, charset));

URLConnection connection = new URL("http://abhishek:15070/abc/login.action").openConnection();
connection.setDoOutput(true);
connection.setRequestProperty("Accept-Charset", charset);
connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded;charset=" + charset);
try {
    connection.getOutputStream().write(query.getBytes(charset));
} finally {
    connection.getOutputStream().close();
}

InputStream response = connection.getInputStream();
// ...
See also:
Using java.net.URLConnection to fire and handle HTTP requests
HttpClient tutorial and examples
Actually, what you probably want is not an intermediate servlet at all. What you probably want is called a servlet filter and writing one is not particularly hard. I've written one in the past and I just started on a new one yesterday.
An article like this one or this one lays out pretty simply how you can use a servlet filter to intercept calls to specific URLs and then redirect or reject them from there. If the incoming URL matches the filter's pattern, the filter gets a shot at the request and response, and it can then choose whether or not to pass them on to the next filter in line.
I don't know if all third party security solutions do it like this, but at least CAS seemed to be implemented that way.
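The heart of such a filter is one decision per request: pass it along the chain, or short-circuit with a redirect. The sketch below models just that decision in plain Java; the URL pattern, session check, and return strings are illustrative stand-ins for the real doFilter()/sendRedirect() calls, not servlet API code:

```java
public class FilterDecisionDemo {
    // Hypothetical stand-in for Filter.doFilter(): requests under /secure/
    // without a valid session get redirected; everything else proceeds.
    static String decide(String uri, boolean hasValidSession) {
        if (uri.startsWith("/secure/") && !hasValidSession) {
            return "redirect:/login";  // short-circuit: never reaches the servlet
        }
        return "chain.doFilter";       // pass the request on down the chain
    }

    public static void main(String[] args) {
        System.out.println(decide("/secure/report", false));
        System.out.println(decide("/secure/report", true));
    }
}
```

In a real filter, the same logic lives in doFilter(ServletRequest, ServletResponse, FilterChain), and the URL pattern is declared in web.xml or with @WebFilter.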
String url = "http://maps.googleapis.com/maps/api/directions/xml?origin=Chicago,IL&destination=Los+Angeles,CA&waypoints=Joplin,MO|Oklahoma+City,OK&sensor=false";
URL google = new URL(url);
HttpURLConnection con = (HttpURLConnection) google.openConnection();
When I use a BufferedReader to print the content, I get a 403 error.
The same URL works fine in the browser. Could anyone suggest what's wrong?
The reason it works in a browser but not in java code is that the browser adds some HTTP headers which you lack in your Java code, and the server requires those headers. I've been in the same situation - and the URL worked both in Chrome and the Chrome plugin "Simple REST Client", yet didn't work in Java. Adding this line before the getInputStream() solved the problem:
connection.addRequestProperty("User-Agent", "Mozilla/4.0");
..even though I have never used Mozilla. Your situation might require a different header. It might be related to cookies ... I was getting text in the error stream advising me to enable cookies.
Note that you might get more information by looking at the error text. Here's my code:
try {
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.addRequestProperty("User-Agent", "Mozilla/4.0");
    InputStream input;
    if (connection.getResponseCode() == 200) // this must be called before 'getErrorStream()' works
        input = connection.getInputStream();
    else
        input = connection.getErrorStream();
    BufferedReader reader = new BufferedReader(new InputStreamReader(input));
    String msg;
    while ((msg = reader.readLine()) != null)
        System.out.println(msg);
} catch (IOException e) {
    System.err.println(e);
}
HTTP 403 is a Forbidden status code. You would have to read HttpURLConnection.getErrorStream() to see the response from the server (which can tell you why you were given an HTTP 403), if any.
This code should work fine. If you have been making a number of requests, it is possible that Google is just throttling you. I have seen Google do this before. You can try using a proxy to verify.
Most browsers automatically encode URLs when you enter them, but Java's URL class doesn't.
You should encode the URL with URLEncoder.
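For example, the commas, pipes, and spaces in the directions URL above are exactly what a browser would silently percent-encode. In Java you encode each query value yourself before assembling the URL (a sketch, encoding only two of the parameters):

```java
import java.net.URLEncoder;

public class EncodeDemo {
    public static void main(String[] args) throws Exception {
        // Encode each query VALUE (not the whole URL, which would mangle
        // the '?', '=' and '&' separators).
        String origin = URLEncoder.encode("Chicago,IL", "UTF-8");
        String waypoints = URLEncoder.encode("Joplin,MO|Oklahoma City,OK", "UTF-8");
        System.out.println("http://maps.googleapis.com/maps/api/directions/xml"
                + "?origin=" + origin + "&waypoints=" + waypoints);
    }
}
```

The '|' between waypoints becomes %7C and the commas become %2C, which is what the server expects.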
I know this is a bit late, but the easiest way to get the contents of a URL is to use the Apache HttpComponents HttpClient project: http://hc.apache.org/httpcomponents-client-ga/index.html
Your original page (the one with the link) and the linked target page are not on the same domain: call them original-domain and target-domain.
I found that the difference is in the request header. When I get the 403 Forbidden error, the request header contains this line:
Referer: http://original-domain/json2tree/ipfs/ipfsList.html
When I enter the URL directly and get no 403, the request header does NOT contain that Referer line.
I finally figured out how to fix this error: on your original-domain web page, you have to add
<meta name="referrer" content="no-referrer" />
This prevents the Referer header from being sent; it works both for links and for AJAX requests.
I am trying to visit a site and have the request follow the redirect.
I visit the "I agree" page, but it doesn't seem to continue past that, and keeps redirecting me.
Here is my code:
public static void main(String[] args) {
    System.out.println("results");
    //String targetConfirmation18 = "";
    URL url;
    HttpURLConnection connection;
    OutputStreamWriter osw = null;
    BufferedReader br = null;
    String line;
    try {
        url = new URL("");
        //url = new URL(targetConfirmation);
        connection = (HttpURLConnection) url.openConnection();
        connection.setDoInput(true);
        connection.setDoOutput(true);
        osw = new OutputStreamWriter(connection.getOutputStream());
        osw.write("");
        osw.flush();
        br = new BufferedReader(new InputStreamReader(connection.getInputStream()));
        while ((line = br.readLine()) != null) {
            System.out.println(line);
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            br.close();
        } catch (IOException ioe) {
            // nothing to see here
        }
    }
}
I suspect that you are violating the Tabcorp Terms of Service. They say:
You may, using an industry-standard web browser, download and view the Content for your personal, non-commercial use only.
and
All rights not expressly granted herein are reserved.
The site sets cookies after you POST on the 18+ URL. You must remember them and submit them with subsequent requests. You can easily figure this out with FireBug.
As a result, you will need a more advanced HTTP client than a plain URL, for example Apache HttpClient, which allows cookie manipulation.
This section in the HttpClient tutorial specifically covers cookies.
I am pretty sure that your problem here is the HTTP session.
When you surf to the site using a browser, the server creates an HTTP session and sends its ID as one of the cookies. The browser then sends the cookies back on each request, so the server can recognize the existing session.
I think the server always redirects you to the 18+ page when the session is unknown.
So why is the session unknown in your case? Because all your requests are independent. You should behave like a browser. Do not start by posting to the 18+ confirmation page; start with an HTTP GET, which will redirect you to that page. Take the cookies from the Set-Cookie response header and send them back using the "Cookie" request header.
You can also use higher level tools like Jakarta HTTP client that does this work for you automatically, but it is a good exercise to implement it yourself. I tried this technique several times and saw that it works also with standard HttpUrlConnection.
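A sketch of that manual round-trip, using a throwaway local server (the JDK's built-in com.sun.net.httpserver) in place of the real site; the cookie name and value here are made up:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class CookieRoundTrip {
    public static void main(String[] args) throws Exception {
        // Stand-in server: issues a session cookie on the first visit and
        // echoes back any cookie it receives.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", ex -> {
            String cookie = ex.getRequestHeaders().getFirst("Cookie");
            if (cookie == null) {
                ex.getResponseHeaders().add("Set-Cookie", "JSESSIONID=abc123; Path=/");
            }
            byte[] body = (cookie == null ? "new session" : "known: " + cookie).getBytes("UTF-8");
            ex.sendResponseHeaders(200, body.length);
            ex.getResponseBody().write(body);
            ex.close();
        });
        server.start();
        URL url = new URL("http://localhost:" + server.getAddress().getPort() + "/");

        // Request 1: no cookie yet, so the server assigns one via Set-Cookie.
        HttpURLConnection first = (HttpURLConnection) url.openConnection();
        String session = first.getHeaderField("Set-Cookie").split(";")[0]; // "JSESSIONID=abc123"
        first.getInputStream().close();

        // Request 2: send the cookie back by hand, as a browser would.
        HttpURLConnection second = (HttpURLConnection) url.openConnection();
        second.setRequestProperty("Cookie", session);
        BufferedReader in = new BufferedReader(new InputStreamReader(second.getInputStream(), "UTF-8"));
        System.out.println(in.readLine());
        in.close();
        server.stop(0);
    }
}
```

Against the real site you would follow the redirect to the 18+ page, collect all Set-Cookie values, and replay them on each subsequent request.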
BTW, I hope that this is not your case but sometimes you have to mimic the User-Agent: present yourself as one of the known browsers. Otherwise some sites redirect you to page that says that your browser is unsupported.
Good luck.