So this is currently how my app is set up:
1.) Login Activity.
2.) Once logged in, other activities may be fired up that use PHP scripts that require the cookies sent from logging in.
I am using one HttpClient across my app to ensure that the same cookies are used, but my problem is that I am getting 2 of the 3 cookies rejected. I do not care about the validity of the cookies, but I do need them to be accepted. I tried setting the CookiePolicy, but that hasn't worked either. This is what logcat is saying:
11-26 10:33:57.613: WARN/ResponseProcessCookies(271): Cookie rejected: "[version: 0] [name: cookie_user_id][value: 1][domain: www.trackallthethings.com][path: trackallthethings][expiry: Sun Nov 25 11:33:00 CST 2012]". Illegal path attribute "trackallthethings". Path of origin: "/mobile-api/login.php"
11-26 10:33:57.593: WARN/ResponseProcessCookies(271): Cookie rejected: "[version: 0][name: cookie_session_id][value: 1985208971][domain: www.trackallthethings.com][path: trackallthethings][expiry: Sun Nov 25 11:33:00 CST 2012]". Illegal path attribute "trackallthethings". Path of origin: "/mobile-api/login.php"
I am sure that my actual code is correct (my app still logs in correctly, just doesn't accept the aforementioned cookies), but here it is anyway:
HttpGet httpget = new HttpGet(url); // MY URL
HttpResponse response;
response = Main.httpclient.execute(httpget);
HttpEntity entity = response.getEntity();
InputStream in = entity.getContent();
BufferedReader reader = new BufferedReader(new InputStreamReader(in));
StringBuilder sb = new StringBuilder();
From here I use the StringBuilder to simply get the String of the response. Nothing fancy.
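For completeness, a sketch of what that elided part presumably looks like (it is not shown in the question; sb and reader are the variables from the snippet above):
// Hypothetical completion: drain the reader into the StringBuilder.
String line;
while ((line = reader.readLine()) != null) {
    sb.append(line);
}
String result = sb.toString(); // the response body as a String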
I understand that the reason my cookies are being rejected is the "Illegal path attribute" (the script runs at /mobile-api/login.php, while the cookie comes back with a path attribute of "trackallthethings" rather than a path such as "/"), but I would like to accept the cookies anyhow. Is there a way to do this?
The issue that you are facing seems to be by design, for privacy/security purposes. In general, a resource is not allowed to set a cookie that it would not itself be able to receive. Here you are trying to set a cookie with the path trackallthethings from the resource /mobile-api/login.php, which obviously does not work.
Here you have the following two options:
Set the cookie with a path that is accessible to both resources (this may be the root '/'), OR
Define a custom cookie policy and register your own cookie support (see the related documentation and example, and the sketch just below).
Hope this helps.
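For the older DefaultHttpClient API that the question uses (HttpClient 4.0/4.1, as shipped on Android), option 2 could look roughly like the sketch below. The "lenient" policy name is arbitrary, and the empty validate() simply accepts everything, so treat this as a starting point rather than a drop-in fix:
// Register a cookie spec that never rejects a cookie, then tell the shared client to use it.
CookieSpecFactory lenientFactory = new CookieSpecFactory() {
    public CookieSpec newInstance(HttpParams params) {
        return new BrowserCompatSpec() {
            @Override
            public void validate(Cookie cookie, CookieOrigin origin) {
                // Accept every cookie, including ones with an "illegal" path attribute.
            }
        };
    }
};
DefaultHttpClient httpclient = new DefaultHttpClient();
httpclient.getCookieSpecs().register("lenient", lenientFactory);
httpclient.getParams().setParameter(ClientPNames.COOKIE_POLICY, "lenient");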
Since the API of HttpClient seems to change very fast, here is some working example code for HttpClient 4.5.1 to allow all (malformed) cookies:
class EasyCookieSpec extends DefaultCookieSpec {
    @Override
    public void validate(Cookie cookie, CookieOrigin origin) throws MalformedCookieException {
        // allow all cookies
    }
}

class EasySpecProvider implements CookieSpecProvider {
    @Override
    public CookieSpec create(HttpContext context) {
        return new EasyCookieSpec();
    }
}
Registry<CookieSpecProvider> r = RegistryBuilder.<CookieSpecProvider>create()
        .register("easy", new EasySpecProvider())
        .build();
CookieStore cookieStore = new BasicCookieStore();
RequestConfig requestConfig = RequestConfig.custom()
        .setCookieSpec("easy")
        .build();
CloseableHttpClient httpclient = HttpClients.custom()
        .setDefaultCookieStore(cookieStore)
        .setDefaultCookieSpecRegistry(r)
        .setDefaultRequestConfig(requestConfig)
        .build();
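With the registry, cookie store, and request config wired in as defaults, every request issued through this client should keep cookies that the standard specs would have rejected. A hypothetical smoke test (the URL is a placeholder):
try (CloseableHttpResponse response = httpclient.execute(new HttpGet("https://www.example.com/login"))) {
    // The previously rejected cookies should now show up in the store.
    System.out.println(cookieStore.getCookies());
}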
Related
I am using the Apache HttpClient to send requests to our internal API servers. The servers require authentication and need a cookie to be set with an auth token.
Up to HttpClient 4.3.6 this has been working fine, but on 4.4 and above it has stopped sending the cookies on requests. My cookie domain is set to .subdomain.mycompany.com, which works for 4.3.6, but not 4.4 and above. If I'm more specific and give the full host as the cookie domain, i.e. host.subdomain.mycompany.com it works, but this is not a solution.
Here's a code snippet similar to what I'm doing:
public CloseableHttpResponse execute(CloseableHttpClient httpClient) throws IOException {
    BasicClientCookie cookie = new BasicClientCookie("cookieName", "myAuthtoken");
    cookie.setPath("/");
    cookie.setDomain(".subdomain.mycompany.com");
    cookie.setSecure(false);
    HttpContext localContext = new BasicHttpContext(parentContext);
    CookieStore cookieStore = new BasicCookieStore();
    cookieStore.addCookie(cookie);
    localContext.setAttribute(HttpClientContext.COOKIE_STORE, cookieStore);
    return httpClient.execute(target, request, localContext);
}
The httpClient is already constructed and passed into this code which sets the auth cookie.
I saw this similar question, Cookies getting ignored in Apache httpclient 4.4, but in my case the cookies aren't being sent to the server.
After turning on wire logging in the HttpClient I can see the following in 4.3.6, but not in 4.4 and above:
DEBUG [org.apache.http.client.protocol.RequestAddCookies] Cookie [version: 0][name: cookieName][value: authToken][domain: .subdomain.mycompany.com][path: /][expiry: Wed Jul 15 16:07:05 IST 2015] match [host.subdomain.mycompany.com:80/myApi]
Which leads me to think it's something to do with cookie domain matching. Anyone have any ideas? Thanks.
I have debugged the example code. The problem is at BasicDomainHandler.match(Cookie, CookieOrigin) line: 129 as it expects org.apache.http.cookie.ClientCookie.DOMAIN_ATTR to be set in order to match full host name from URL to cookie domain. So you need to add the following line to your code, after you set the domain:
cookie.setAttribute(ClientCookie.DOMAIN_ATTR, "true");
The change was added with revision 1646864 on 12/19/14, 10:59 PM:
RFC 6265 compliant cookie spec
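Applied to the snippet from the question, the fix would look roughly like this (a sketch; per this answer the attribute's presence is what the 4.4+ domain handler checks, while the answer below sets it to the domain string instead):
BasicClientCookie cookie = new BasicClientCookie("cookieName", "myAuthtoken");
cookie.setPath("/");
cookie.setDomain(".subdomain.mycompany.com");
cookie.setSecure(false);
// Mark the domain as explicitly set, so BasicDomainHandler.match() performs
// domain-suffix matching instead of an exact host comparison.
cookie.setAttribute(ClientCookie.DOMAIN_ATTR, "true");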
As suggested by the other answer, setting something like this should resolve it:
cookie.setAttribute(ClientCookie.DOMAIN_ATTR, ".subdomain.mycompany.com");
The necessity of setting ClientCookie.DOMAIN_ATTR is documented in HttpComponents Chapter 3, HTTP state management:
Here is an example of creating a client-side cookie object:
BasicClientCookie cookie = new BasicClientCookie("name", "value");
// Set effective domain and path attributes
cookie.setDomain(".mycompany.com");
cookie.setPath("/");
// Set attributes exactly as sent by the server
cookie.setAttribute(ClientCookie.PATH_ATTR, "/");
cookie.setAttribute(ClientCookie.DOMAIN_ATTR, ".mycompany.com");
We are using HttpHead to get the info from our customer's website, but for some reason we are getting a cookie in the response as well. Is that expected? Is there a way to configure it not to return the cookie?
The following is the code we have:
HttpClient httpclient = new DefaultHttpClient();
// the time it takes to open TCP connection.
httpclient.getParams().setParameter(CoreConnectionPNames.CONNECTION_TIMEOUT, this.timeout);
// timeout when server does not send data.
httpclient.getParams().setParameter(CoreConnectionPNames.SO_TIMEOUT, this.timeout);
// the head method
HttpHead httphead = new HttpHead(url);
HttpResponse response = httpclient.execute(httphead);
And we are getting the following warning, indicating that a cookie was returned with the response as well.
[WARN] ResponseProcessCookies - Cookie rejected: "[version: 0][name: DXFXFSG][value: AUR][domain: ...omitted...][path: /][expiry: null]". Illegal domain attribute "...omitted...". Domain of origin: "...omitted..."
Yes it is expected; you should get the same response as for the equivalent GET except that there is no body. If the GET would include a cookie, you should see it.
As an aside, I believe the warning you are seeing, from the redacted message you gave, is that the server is trying to set a cookie for a different domain.
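If the goal is simply to stop the client from processing (and warning about) cookies on a HEAD-only check, one option with the DefaultHttpClient from the question is to switch cookie handling off entirely; a minimal sketch:
// Ignore cookies altogether: the Set-Cookie headers are still present in the
// response, but HttpClient no longer parses them, so the warning goes away.
httpclient.getParams().setParameter(ClientPNames.COOKIE_POLICY, CookiePolicy.IGNORE_COOKIES);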
I want to get the id cookie that Google issues when you opt-in at the ads settings page (if you're already accepting target advertisement, you must opt out first to see the page to which I am referring).
I've found that, in order to get this cookie, you have to perform an HTTP GET to the action URL of the form on that page. The problem is that this URL contains a hash that changes for every new HTTP connection, so first I must go to this page and get this URL, and then perform the GET to that URL.
I'm using HttpComponents to get http://www.google.com/ads/preferences, but when I parse the contents with Jsoup there is only a script, and no form can be found.
I'm afraid this happens because the contents are loaded dynamically, using some sort of timeout... Does anyone know a workaround for this?
EDIT: by the way, the code that I am using so far is:
HttpClient httpclient = new DefaultHttpClient();
// Create a local instance of cookie store
CookieStore cookieStore = new BasicCookieStore();
// Set the custom cookie store on the client
((AbstractHttpClient) httpclient).setCookieStore(cookieStore);
CookieSpecFactory csf = new CookieSpecFactory() {
    public CookieSpec newInstance(HttpParams params) {
        return new BrowserCompatSpec() {
            @Override
            public void validate(Cookie cookie, CookieOrigin origin)
                    throws MalformedCookieException {
                // Allow all cookies
                System.out.println("Allowed cookie: " + cookie.getName() + " "
                        + cookie.getValue() + " " + cookie.getPath());
            }
        };
    }
};
((AbstractHttpClient) httpclient).getCookieSpecs().register("EASY", csf);
// Create local HTTP context
HttpContext localContext = new BasicHttpContext();
// Bind custom cookie store to the local context
localContext.setAttribute(ClientContext.COOKIE_STORE, cookieStore);
HttpGet httpget = new HttpGet(doubleClickURL);
// Override the default cookie policy for this request
httpclient.getParams().setParameter(
        ClientPNames.COOKIE_POLICY, "EASY");
// Pass local context as a parameter
HttpResponse response = httpclient.execute(httpget, localContext);
HttpEntity entity = response.getEntity();
if (entity != null) {
    InputStream instream = entity.getContent();
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(instream));
    // Find action attribute of form
    Document document = Jsoup.parse(reader.readLine());
    Element form = document.select("form").first();
    String optinURL = form.attr("action");
    // Close the stream only after the content has been read
    instream.close();
    URL connection = new URL(optinURL);
    // ... get id Cookie
}
You may have more luck using HtmlUnit, Selenium or jWebUnit for such a task. Jsoup does not interpret JavaScript, and the Google page you're pointing to is full of JavaScript that has to be executed by a browser to produce what you're seeing.
HtmlUnit is OS independent and does not need anything else installed, but I've never used it for complicated JavaScript sites. HtmlUnit can also extract data from the web page like Jsoup does, but you can still feed the HTML to Jsoup if you prefer using it.
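A rough HtmlUnit sketch of that approach (version-dependent; the options API and the waitForBackgroundJavaScript timeout may need adjusting for your HtmlUnit release):
// Let HtmlUnit execute the page's JavaScript, then hand the resulting HTML to Jsoup.
WebClient webClient = new WebClient();
webClient.getOptions().setJavaScriptEnabled(true);
webClient.getOptions().setThrowExceptionOnScriptError(false);
HtmlPage page = webClient.getPage("http://www.google.com/ads/preferences");
webClient.waitForBackgroundJavaScript(10000); // give the dynamic content time to load
Document document = Jsoup.parse(page.asXml());
Element form = document.select("form").first();
String optinURL = (form != null) ? form.attr("action") : null;
webClient.closeAllWindows();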
Finally I found it! I found the following site describing the doubleclick cookie protocol:
Privacy Advisory
Then it is as easy as setting a cookie in that domain with the name id and the value A. Then make an HTTP request to http://www.google.com/ads/preferences and they'll set a correct id value.
It is a very specific question, but I hope this serves future readers.
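For those future readers, in Java that might look roughly like the sketch below; the .doubleclick.net domain is an assumption based on the Privacy Advisory page, so double-check it there:
// Pre-seed the marker cookie described above, then hit the preferences URL
// so that a real id value gets set.
DefaultHttpClient httpclient = new DefaultHttpClient();
CookieStore cookieStore = new BasicCookieStore();
BasicClientCookie idCookie = new BasicClientCookie("id", "A");
idCookie.setDomain(".doubleclick.net"); // assumed domain, see the advisory
idCookie.setPath("/");
cookieStore.addCookie(idCookie);
httpclient.setCookieStore(cookieStore);
HttpResponse response = httpclient.execute(new HttpGet("http://www.google.com/ads/preferences"));
for (Cookie c : cookieStore.getCookies()) {
    System.out.println(c.getName() + " = " + c.getValue());
}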
By the way, I found that amazon.com, for example, is a member of the AdSense network. An HTTP request to DoubleClick is sent by means of a script in the main page to:
http://ad.doubleclick.net/adj/amzn.us.gw.atf
There you can find a script that appears to be the actual code that gives you the id cookie. Nevertheless, if you access it with the cookie set to the value A, it will set the DoubleClick id.
I am having a problem getting the Apache HttpClient to connect to a service external to my virtualised development environment.
To access the internet (e.g. api.twitter.com) I need to call a local URL (e.g. api.twitter.com.dev.mycompany.net), which then forwards the request to the real host.
The problem is that, whatever request I send, I get a 404 Not Found response.
I have tried debugging it using wget, and it appears the problem is that the destination server identifies the desired resource using both the request URL and the hostname in the Host header. Since the hostname does not match, it is unable to locate the resource.
I have (unsuccessfully) tried to override the Host header by setting the http.virtual-host parameter on the client like this:
HttpClient client = new DefaultHttpClient();
if (envType.isWithProxy()) {
    client.getParams().setParameter(ClientPNames.VIRTUAL_HOST, "api.twitter.com");
}
Technical details:
The client is used as an executor in RESTeasy to call the REST API, so "manually" setting the virtual host (as described here) is not an option.
Everything is done via HTTPS/SSL - not that I think it makes a difference.
Edit 1: Using an HttpHost instead of a String does not have the desired effect either:
HttpClient client = new DefaultHttpClient();
if (envType.isWithProxy()) {
    HttpHost realHost = new HttpHost("api.twitter.com", port, scheme);
    client.getParams().setParameter(ClientPNames.VIRTUAL_HOST, realHost);
}
Edit 2: Further investigation has revealed that the parameter needs to be set on the request object. The following is the code from HttpClient v. 4.2-alpha1 that sets the virtual host:
HttpRequest orig = request;
RequestWrapper origWrapper = wrapRequest(orig);
origWrapper.setParams(params);
HttpRoute origRoute = determineRoute(target, origWrapper, context);
virtualHost = (HttpHost) orig.getParams().getParameter(
        ClientPNames.VIRTUAL_HOST);
params are the parameters passed from the client, but the value for virtualHost is read from the request parameters.
So this changes the nature of the question to: How do I set the VIRTUAL_HOST property on the requests?
ClientPNames.VIRTUAL_HOST is the right parameter for overriding the physical host name in HTTP requests. I would just recommend setting this parameter on the request object instead of the client object. If that does not produce the desired effect, please post the complete wire / context log of the session (see the logging guide for instructions) either here or to the HttpClient user list.
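A minimal sketch of setting it per request, using the api.twitter.com target from the question (HttpClient 4.1-era params API):
// Set the virtual host on the request rather than on the client.
HttpGet request = new HttpGet(internalUrl); // the api.twitter.com.dev.mycompany.net URL
request.getParams().setParameter(ClientPNames.VIRTUAL_HOST, new HttpHost("api.twitter.com"));
HttpResponse response = client.execute(request);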
Follow-up
OK. Let's take a larger sledgehammer. One can override the content of the Host header using an interceptor.
DefaultHttpClient client = new DefaultHttpClient();
client.addRequestInterceptor(new HttpRequestInterceptor() {
    public void process(
            final HttpRequest request,
            final HttpContext context) throws HttpException, IOException {
        request.setHeader(HTTP.TARGET_HOST, "www.whatever.com");
    }
});
One can make the interceptor clever enough to override the header selectively, only for specific hosts.
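For example, the interceptor could look up the actual target from the execution context and rewrite the header only when the request goes to the internal forwarding host (a sketch using the host names from the question):
client.addRequestInterceptor(new HttpRequestInterceptor() {
    public void process(
            final HttpRequest request,
            final HttpContext context) throws HttpException, IOException {
        // Only rewrite the Host header when talking to the internal forwarder.
        HttpHost target = (HttpHost) context.getAttribute(ExecutionContext.HTTP_TARGET_HOST);
        if (target != null && target.getHostName().endsWith(".dev.mycompany.net")) {
            request.setHeader(HTTP.TARGET_HOST, "api.twitter.com");
        }
    }
});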
I'm trying to use the Apache/Jakarta HttpClient 4.1.1 to connect to an arbitrary web page using the given credentials. To test this, I have a minimal install of IIS 7.5 on my dev machine running where only one authentication mode is active at a time. Basic authentication works fine, but Digest and NTLM return 401 error messages whenever I try to log in. Here is my code:
DefaultHttpClient httpclient = new DefaultHttpClient();
HttpContext localContext = new BasicHttpContext();
HttpGet httpget = new HttpGet("http://localhost/");
CredentialsProvider credsProvider = new BasicCredentialsProvider();
credsProvider.setCredentials(AuthScope.ANY,
        new NTCredentials("user", "password", "", "localhost"));
if (!new File(System.getenv("windir") + "\\krb5.ini").exists()) {
    List<String> authtypes = new ArrayList<String>();
    authtypes.add(AuthPolicy.NTLM);
    authtypes.add(AuthPolicy.DIGEST);
    authtypes.add(AuthPolicy.BASIC);
    httpclient.getParams().setParameter(AuthPNames.PROXY_AUTH_PREF, authtypes);
    httpclient.getParams().setParameter(AuthPNames.TARGET_AUTH_PREF, authtypes);
}
localContext.setAttribute(ClientContext.CREDS_PROVIDER, credsProvider);
HttpResponse response = httpclient.execute(httpget, localContext);
System.out.println("Response code: " + response.getStatusLine());
The one thing I've noticed in Fiddler is that the hashes sent by Firefox versus by HttpClient are different, making me think that maybe IIS 7.5 is expecting stronger hashing than HttpClient provides? Any ideas? It'd be great if I could verify that this would work with NTLM. Digest would be nice too, but I can live without that if necessary.
I am not an expert on the subject, but during NTLM authentication with HttpComponents I have seen that, in my case, the client needs 3 attempts in order to connect to an NTLM endpoint. It is somewhat described here for SPNEGO, but it is a bit different for NTLM authentication.
For NTLM, in the first attempt the client will make a request with Target auth state: UNCHALLENGED, and the web server returns an HTTP 401 status and a header: WWW-Authenticate: NTLM
The client will check the configured authentication schemes; NTLM should be configured in the client code.
In the second attempt, the client will make a request with Target auth state: CHALLENGED, and will send an authorization header with a token encoded in base64 format: Authorization: NTLM TlRMTVNTUAABAAAAAYIIogAAAAAoAAAAAAAAACgAAAAFASgKAAAADw==
The server again returns an HTTP 401 status, but the WWW-Authenticate: NTLM header is now populated with encoded information.
In the third attempt, the client will use the information from the WWW-Authenticate: NTLM header and will make the final request with Target auth state: HANDSHAKE and an Authorization: NTLM header that contains more information for the server.
In my case I receive an HTTP/1.1 200 OK after that.
In order to avoid all this on every request, the documentation in chapter 4.7.1 states that the same execution token must be used for logically related requests. For me it did not work.
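For reference, "the same execution token" there means reusing one context object across the logically related requests, roughly as sketched below with the client and endPoint from the code that follows (again, this did not help in my case):
// Reuse a single context so the NTLM handshake state can be carried across requests.
HttpClientContext sharedContext = HttpClientContext.create();
try (CloseableHttpResponse first = this.httpclient.execute(new HttpPost(endPoint.trim()), sharedContext)) {
    EntityUtils.consume(first.getEntity());
}
try (CloseableHttpResponse second = this.httpclient.execute(new HttpPost(endPoint.trim()), sharedContext)) {
    EntityUtils.consume(second.getEntity());
}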
My code:
I initialize the client once in a @PostConstruct method of an EJB:
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(18);
cm.setDefaultMaxPerRoute(6);
RequestConfig requestConfig = RequestConfig.custom()
        .setSocketTimeout(30000)
        .setConnectTimeout(30000)
        .setTargetPreferredAuthSchemes(Arrays.asList(AuthSchemes.NTLM))
        .setProxyPreferredAuthSchemes(Arrays.asList(AuthSchemes.BASIC))
        .build();
CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
credentialsProvider.setCredentials(AuthScope.ANY,
        new NTCredentials(userName, password, hostName, domainName));
// Finally we instantiate the client. The client is a thread-safe object and can be used by several threads at the same time.
// The client can be used for several requests. The life span of the client must be equal to the life span of this EJB.
this.httpclient = HttpClients.custom()
        .setConnectionManager(cm)
        .setDefaultCredentialsProvider(credentialsProvider)
        .setDefaultRequestConfig(requestConfig)
        .build();
Use the same client instance in every request:
HttpPost httppost = new HttpPost(endPoint.trim());
// HttpClientContext is not thread safe, one per request must be created.
HttpClientContext context = HttpClientContext.create();
response = this.httpclient.execute(httppost, context);
Deallocate the resources and return the connections back to the connection manager in the @PreDestroy method of my EJB:
this.httpclient.close();
I had the same problem with HttpClient 4.1.x. After upgrading it to HttpClient 4.2.6 it worked like a charm. Below is my code:
DefaultHttpClient httpclient = new DefaultHttpClient();
HttpContext localContext = new BasicHttpContext();
HttpGet httpget = new HttpGet("url");
CredentialsProvider credsProvider = new BasicCredentialsProvider();
credsProvider.setCredentials(AuthScope.ANY,
        new NTCredentials("username", "pwd", "", "domain"));
List<String> authtypes = new ArrayList<String>();
authtypes.add(AuthPolicy.NTLM);
httpclient.getParams().setParameter(AuthPNames.TARGET_AUTH_PREF, authtypes);
localContext.setAttribute(ClientContext.CREDS_PROVIDER, credsProvider);
HttpResponse response = httpclient.execute(httpget, localContext);
HttpEntity entity = response.getEntity();
The easiest way to troubleshoot such situations, I have found, is Wireshark. It is a very big hammer, but it really will show you everything. Install it, make sure your server is on another machine (it does not work with localhost) and start logging.
Run the request that fails, then run one that works. Filter by http (just put http in the filter field), find the first GET request, find the other GET request, and compare them. Identify the meaningful differences; you now have specific keywords or issues to search your code and the net for. If that is not enough, narrow down to the first TCP conversation and look at the full request/response, then do the same with the other one.
I have solved an unbelievable number of problems with that approach, and Wireshark is a very useful tool to know, with lots of super-advanced functions to make network debugging easier.
You can also run it on either the client or the server end, whichever will show you both requests so that you can compare them.
I had a similar problem with HttpClient 4.1.2. For me, it was resolved by reverting to HttpClient 4.0.3. I could never get NTLM working with 4.1.2 using either the built-in implementation or using JCIFS.
Updating our application to use the jars in httpcomponents-client-4.5.1 resolved this issue for me.
I finally figured it out. Digest authentication requires that, if you use a full URL in the request, the proxy also uses the full URL. I did not leave the proxy code in the sample, but it was pointed at "localhost", which caused it to fail. Changing this to 127.0.0.1 made it work.
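For anyone hitting the same thing, the proxy setup (not included in the sample above) was along these lines; the original code is not shown, so the port 8888 here is just a placeholder for whatever your local proxy listens on:
// Pointing the proxy at the IP instead of "localhost" is what made Digest authentication work.
HttpHost proxy = new HttpHost("127.0.0.1", 8888); // placeholder port
httpclient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, proxy);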