Using Apache HttpClient for an OPTIONS request - java

I am trying to use HttpClient to verify whether a specific endpoint is reachable. It seems I can only check whether the server is up, but not whether the actual resource is available.
Here is my code:
HttpClient client = new DefaultHttpClient();
client.execute(new HttpOptions(url)).getStatusLine().getStatusCode();
According to the protocol (http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html) it should have been checking the status of the specific resource, but I always get a 200 response as long as the actual host is reachable. What am I doing wrong here?
Thank you.
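A hedged sketch (not part of the original question) of how the same OPTIONS call could also inspect the Allow header, which RFC 2616 section 9.2 says an OPTIONS response may carry; the URL is a placeholder:

import org.apache.http.Header;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpOptions;
import org.apache.http.impl.client.DefaultHttpClient;

public class OptionsCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = new DefaultHttpClient();
        // Placeholder URL standing in for the poster's endpoint.
        HttpOptions options = new HttpOptions("http://example.com/some/resource");
        HttpResponse response = client.execute(options);

        // Status line of the OPTIONS response itself.
        System.out.println(response.getStatusLine().getStatusCode());

        // Per RFC 2616, an OPTIONS response may include an Allow header
        // listing the methods the resource supports.
        Header allow = response.getFirstHeader("Allow");
        System.out.println(allow != null ? allow.getValue() : "no Allow header");
    }
}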

Related

407 Proxy Authentication error - Unirest Java 8

Hi everyone, I'm pretty hopeless here, so I'm asking you guys.
I'm trying to do a simple HTTP request, but with my proxies I get a 407 error code.
I use Unirest and Java 8.
Unirest.config().proxy(host, port, usernameProxy, passwordProxy);
HttpResponse<JsonNode> response = Unirest.post(url).asJson();
String body = response.getBody().toString();
That's it. My URL is private, but I wrote it like this: "https://myurl.com/?param1=param&param2....
It works without a proxy, but I'm stuck when the proxies are in place.
Thanks a lot
It seems the proxy server expects the proxy credentials in the request headers, which Unirest doesn't seem to propagate.
The headers must specifically contain the "Proxy-Authorization" key for the handshake to even be started.
String proxyCred = "user:password";
String baseCred = Base64.getEncoder().encodeToString(proxyCred.getBytes());
HttpHeaders headers = new HttpHeaders();
headers.add("Proxy-Authorization", "Basic " + baseCred); // the proxy server needs this
This solution uses the Basic mechanism; it may not work, as the proxy may expect another type of authentication. You'll know which one it is by reading the Proxy-Authenticate header in the server's response.
If the communication is not secured (HTTP and not HTTPS), you could read the response by sniffing the packets with a tool such as Wireshark. Once you locate the 407 packet, you can read the Proxy-Authenticate value and adjust your authorization method accordingly.
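A hedged sketch of attaching that header directly to the Unirest request; this assumes the Unirest 3.x API (kong.unirest package) and uses placeholder proxy details and a placeholder URL, and Basic is only an assumption about what the proxy accepts:

import java.util.Base64;
import kong.unirest.HttpResponse;
import kong.unirest.JsonNode;
import kong.unirest.Unirest;

public class ProxyAuthExample {
    public static void main(String[] args) {
        // Placeholder proxy settings.
        Unirest.config().proxy("proxy.example.com", 8080, "user", "password");

        // Pre-compute the Basic credentials the proxy is assumed to expect.
        String proxyCred = "user:password";
        String baseCred = Base64.getEncoder().encodeToString(proxyCred.getBytes());

        // Attach the Proxy-Authorization header explicitly on the request.
        HttpResponse<JsonNode> response = Unirest.post("https://myurl.com/?param1=param")
                .header("Proxy-Authorization", "Basic " + baseCred)
                .asJson();
        System.out.println(response.getStatus());
    }
}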

HttpClient can't get response from server

This problem has blocked our whole team for half a day!
We use Apache HttpClient 4.3.x to post and get data from a storage server which provides an HTTP API. In order to improve performance, we used PoolingHttpClientConnectionManager:
public HttpClient createHttpClient() {
    Registry registry = RegistryBuilder.create()....build();
    PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager(registry);
    connectionManager.setMaxTotal(50);
    connectionManager.setDefaultMaxPerRoute(50);
    CloseableHttpClient httpClient = HttpClients.custom()
            .setConnectionManager(connectionManager)
            .build();
    return httpClient;
}
Then we hold an instance of the httpClient in our program, reuse it with every http request:
Global httpClient:
HttpClient httpClient = createHttpClient();
Post some data:
HttpPost httpPut = new HttpPost("...");
HttpResponse response = httpClient.execute(httpPut);
// Notice we get the response content here!
String content = EntityUtils.toString(response.getEntity());
System.out.println(content);
httpPut.releaseConnection();
response.close();
Then get:
HttpGet httpGet = new HttpGet("...");
// Blocked at this line !!!!
HttpResponse response = httpClient.execute(httpGet);
String content = EntityUtils.toString(response.getEntity());
System.out.println(content);
httpPut.releaseConnection();
response.close();
Please notice the line: // Blocked at this line !!!!
The program blocks at that line and never goes to the next line. In debugging mode, I can see it is blocked at:
SocketInputStream.socketRead0()
I've searched through a lot of questions and documents, but no luck.
My colleague just fixed it by setting NoConnectionReuseStrategy.INSTANCE:
HttpClients.custom()
.setConnectionManager(connectionManager)
// Following line fixed the problem, but why?
.setConnectionReuseStrategy(NoConnectionReuseStrategy.INSTANCE)
.build();
Now it doesn't block, but why?
What does "reuse connection" mean? And is there a performance issue with using NoConnectionReuseStrategy?
Thank you, guys~
I tried to reproduce the blocking http-get (also as an exercise for myself) but even without closing responses I could not get it to block. The ONLY time I managed to make the http-get block is by doing a response.getEntity().getContent() without reading from the returned InputStream and without closing the returned InputStream.
For my tests I used Tomcat 7.0.47 with two very simple servlets (one responding "OK" to a get, the other echoing a post) as a server. The client started 50 threads with each thread performing 30 alternating http-get and http-post request (total of 1500 requests). The client did not use the RegistryBuilder, instead the default one is used (created by the PoolingHttpClientConnectionManager itself).
About the NoConnectionReuseStrategy: by default (HttpClient created with HttpClients.createDefault(), I used org.apache.httpcomponents:httpclient:4.3.1) a connection pool is used with a maximum of 2 connections to 1 server. E.g. even if 5 threads are doing all kinds of requests at the same time to 1 server, the connection pool opens only 2 connections, re-uses them for all requests and ensures that 1 connection is used by 1 thread at any given time. This can have a very positive impact on client performance and significantly reduces load on the server. The only thing you must make sure is to call response.close() in a finally-block (this ensures the connection is returned to the connection pool). By using the NoConnectionReuseStrategy you basically disable the connection pool: for each request a new connection will be created. I recommend you enable debug-logging for category org.apache.http.impl.conn.PoolingHttpClientConnectionManager, it is very informative.
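To make that concrete, a minimal sketch of the recommended pattern, assuming HttpClient 4.3.x and a placeholder URL: read the entity fully, then close the response in a finally block so the connection goes back to the pool:

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class PooledGetExample {
    public static void main(String[] args) throws Exception {
        CloseableHttpClient httpClient = HttpClients.createDefault();
        HttpGet httpGet = new HttpGet("http://example.com/"); // placeholder URL
        CloseableHttpResponse response = httpClient.execute(httpGet);
        try {
            // Reading the entity fully frees the underlying connection for reuse;
            // EntityUtils.toString() consumes the whole stream.
            String content = EntityUtils.toString(response.getEntity());
            System.out.println(content);
        } finally {
            // Returns the connection to the pool (does not shut the pool down).
            response.close();
        }
        httpClient.close();
    }
}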
A note about httpPut.releaseConnection(): this does not actually release a connection, it only ensures that you can re-use the "httpPut" object in a next request (see the apidocs, follow the shown link). Also note that in your code for the "httpGet", you call releaseConnection() on "httpPut" instead of "httpGet".
Ran into this problem just a while back. In case someone else comes across this problem, this post might be useful.
I am using a Java Servlet to service my requests. When I wrote to the response stream using the PrintWriter instance, my client blocked. I tried writing to the OutputStream directly, response.getOutputStream().write("myresponse".getBytes()), and it worked.
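For what it's worth, a hedged sketch of that server-side workaround (the servlet and response body are made up for illustration):

import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet illustrating writing to the OutputStream directly.
public class EchoServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        byte[] body = "myresponse".getBytes(StandardCharsets.UTF_8);
        resp.setContentType("text/plain");
        // Declare the body length up front (Content-Length header),
        // so the client can tell when the response is complete.
        resp.setContentLength(body.length);
        OutputStream out = resp.getOutputStream();
        out.write(body);
        out.flush();
    }
}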

Why getting cookie back for HttpHead method?

We are using HttpHead to get info from our customer's website, but for some reason we are getting a cookie in the response as well. Is that expected? Is there a way to configure it so cookies are not returned?
The following is the code we have
HttpClient httpclient = new DefaultHttpClient();
// the time it takes to open TCP connection.
httpclient.getParams().setParameter(CoreConnectionPNames.CONNECTION_TIMEOUT, this.timeout);
// timeout when server does not send data.
httpclient.getParams().setParameter(CoreConnectionPNames.SO_TIMEOUT, this.timeout);
// the head method
HttpHead httphead = new HttpHead(url);
HttpResponse response = httpclient.execute(httphead);
And we are getting the following warning, indicating that a cookie was returned with the response as well.
[WARN] ResponseProcessCookies - Cookie rejected: "[version: 0][name: DXFXFSG][value: AUR][domain: ...omitted...][path: /][expiry: null]". Illegal domain attribute "...omitted...". Domain of origin: "...omitted..."
Yes, it is expected; you should get the same response as for the equivalent GET, except that there is no body. If the GET would include a cookie, you should see it too.
As an aside, I believe the warning you are seeing, from the redacted message you gave, is that the server is trying to set a cookie for a different domain.
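If the goal is just to stop the client from processing those cookies (and silence the warning), one option with the same deprecated params API used in the question is to set the cookie policy to ignore cookies. This is a sketch, not something from the original answer; the URL is a placeholder:

import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpHead;
import org.apache.http.client.params.ClientPNames;
import org.apache.http.client.params.CookiePolicy;
import org.apache.http.impl.client.DefaultHttpClient;

public class HeadWithoutCookies {
    public static void main(String[] args) throws Exception {
        HttpClient httpclient = new DefaultHttpClient();
        // Tell the client not to process Set-Cookie headers at all;
        // the server may still send them, but they are ignored locally.
        httpclient.getParams().setParameter(ClientPNames.COOKIE_POLICY, CookiePolicy.IGNORE_COOKIES);
        HttpHead httphead = new HttpHead("http://example.com/"); // placeholder URL
        System.out.println(httpclient.execute(httphead).getStatusLine());
    }
}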

Configuring Apache HttpClient to access service through proxy/load-balancer (overriding Host header)

I am having a problem getting the Apache HttpClient to connect to a service external to my virtualised development environment.
To access the internet (e.g. api.twitter.com) I need to call a local URL (e.g. api.twitter.com.dev.mycompany.net), which then forwards the request to real host.
The problem is that, whatever request I send, I get a 404 Not Found response.
I have tried debugging it using wget, and it appears the problem is that the destination server identifies the desired resource by using both the request URL and the hostname in the Host header. Since the hostname does not match, it is unable to locate the resource.
I have (unsuccessfully) tried to override the Host header by setting the http.virtual-host parameter on the client like this:
HttpClient client = new DefaultHttpClient();
if (envType.isWithProxy()) {
client.getParams().setParameter(ClientPNames.VIRTUAL_HOST, "api.twitter.com");
}
Technical details:
The client is used as an executor in RESTeasy to call the REST API, so "manually" setting the virtual host (as described here) is not an option.
Everything is done via HTTPS/SSL - not that I think it makes a difference.
Edit 1: Using an HttpHost instead of a String does not have the desired effect either:
HttpClient client = new DefaultHttpClient();
if (envType.isWithProxy()) {
HttpHost realHost = new HttpHost("api.twitter.com", port, scheme);
client.getParams().setParameter(ClientPNames.VIRTUAL_HOST, realHost);
}
Edit 2: Further investigation has revealed that the parameter needs to be set on the request object. The following is the code from HttpClient 4.2-alpha1 that sets the virtual host:
HttpRequest orig = request;
RequestWrapper origWrapper = wrapRequest(orig);
origWrapper.setParams(params);
HttpRoute origRoute = determineRoute(target, origWrapper, context);
virtualHost = (HttpHost) orig.getParams().getParameter(
ClientPNames.VIRTUAL_HOST);
params are the parameters passed from the client, but the value for virtualHost is read from the request parameters.
So this changes the nature of the question to: How do I set the VIRTUAL_HOST property on the requests?
ClientPNames.VIRTUAL_HOST is the right parameter for overriding physical host name in HTTP requests. I would just recommend setting this parameter on the request object instead of the client object. If that does not produce the desired effect please post the complete wire / context log of the session (see logging guide for instructions) either here or to the HttpClient user list.
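A hedged sketch of that suggestion, using the same 4.1-era params API as the question (the port, scheme, and local forwarding URL are assumptions):

import org.apache.http.HttpHost;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.params.ClientPNames;
import org.apache.http.impl.client.DefaultHttpClient;

public class VirtualHostOnRequest {
    public static void main(String[] args) throws Exception {
        HttpClient client = new DefaultHttpClient();
        // The request goes to the local forwarding host...
        HttpGet request = new HttpGet("https://api.twitter.com.dev.mycompany.net/"); // placeholder path
        // ...but the Host header should name the real target.
        request.getParams().setParameter(ClientPNames.VIRTUAL_HOST,
                new HttpHost("api.twitter.com", 443, "https"));
        HttpResponse response = client.execute(request);
        System.out.println(response.getStatusLine());
    }
}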
Follow-up
OK, let's take a larger sledgehammer. One can override the content of the Host header using an interceptor.
DefaultHttpClient client = new DefaultHttpClient();
client.addRequestInterceptor(new HttpRequestInterceptor() {
    public void process(
            final HttpRequest request,
            final HttpContext context) throws HttpException, IOException {
        request.setHeader(HTTP.TARGET_HOST, "www.whatever.com");
    }
});
One can make the interceptor clever enough to override the header selectively, only for specific hosts.
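For example, a hedged sketch of such a selective interceptor (the host names are placeholders, and it assumes the Host header has already been set by the default protocol interceptors by the time this one runs):

import java.io.IOException;
import org.apache.http.Header;
import org.apache.http.HttpException;
import org.apache.http.HttpRequest;
import org.apache.http.HttpRequestInterceptor;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.protocol.HTTP;
import org.apache.http.protocol.HttpContext;

public class SelectiveHostOverride {
    public static void main(String[] args) {
        DefaultHttpClient client = new DefaultHttpClient();
        client.addRequestInterceptor(new HttpRequestInterceptor() {
            public void process(final HttpRequest request, final HttpContext context)
                    throws HttpException, IOException {
                Header host = request.getFirstHeader(HTTP.TARGET_HOST);
                // Only rewrite the Host header for requests that go through the internal forwarder.
                if (host != null && host.getValue().endsWith(".dev.mycompany.net")) {
                    request.setHeader(HTTP.TARGET_HOST, "api.twitter.com");
                }
            }
        });
    }
}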

HttpClient 4.1.1 returns 401 when authenticating with NTLM, browsers work fine

I'm trying to use the Apache/Jakarta HttpClient 4.1.1 to connect to an arbitrary web page using the given credentials. To test this, I have a minimal install of IIS 7.5 running on my dev machine, where only one authentication mode is active at a time. Basic authentication works fine, but Digest and NTLM return 401 error messages whenever I try to log in. Here is my code:
DefaultHttpClient httpclient = new DefaultHttpClient();
HttpContext localContext = new BasicHttpContext();
HttpGet httpget = new HttpGet("http://localhost/");
CredentialsProvider credsProvider = new BasicCredentialsProvider();
credsProvider.setCredentials(AuthScope.ANY,
new NTCredentials("user", "password", "", "localhost"));
if (!new File(System.getenv("windir") + "\\krb5.ini").exists()) {
List<String> authtypes = new ArrayList<String>();
authtypes.add(AuthPolicy.NTLM);
authtypes.add(AuthPolicy.DIGEST);
authtypes.add(AuthPolicy.BASIC);
httpclient.getParams().setParameter(AuthPNames.PROXY_AUTH_PREF,
authtypes);
httpclient.getParams().setParameter(AuthPNames.TARGET_AUTH_PREF,
authtypes);
}
localContext.setAttribute(ClientContext.CREDS_PROVIDER, credsProvider);
HttpResponse response = httpclient.execute(httpget, localContext);
System.out.println("Response code: " + response.getStatusLine());
The one thing I've noticed in Fiddler is that the hashes sent by Firefox versus by HttpClient are different, making me think that maybe IIS 7.5 is expecting stronger hashing than HttpClient provides? Any ideas? It'd be great if I could verify that this would work with NTLM. Digest would be nice too, but I can live without that if necessary.
I am not an expert on the subject, but during NTLM authentication using HttpComponents I have seen that the client needs 3 attempts in order to connect to an NTLM endpoint in my case. It is somewhat described here for SPNEGO, but it is a bit different for NTLM authentication.
For NTLM, on the first attempt the client will make a request with Target auth state: UNCHALLENGED, and the web server returns an HTTP 401 status and a header: WWW-Authenticate: NTLM.
The client will check the configured authentication schemes; NTLM should be configured in the client code.
On the second attempt, the client will make a request with Target auth state: CHALLENGED, and will send an authorization header with a token encoded in Base64 format: Authorization: NTLM TlRMTVNTUAABAAAAAYIIogAAAAAoAAAAAAAAACgAAAAFASgKAAAADw==
The server again returns an HTTP 401 status, but the WWW-Authenticate: NTLM header is now populated with encoded information.
On the third attempt, the client will use the information from the WWW-Authenticate: NTLM header and will make the final request with Target auth state: HANDSHAKE and an Authorization: NTLM header which contains more information for the server.
In my case I receive an HTTP/1.1 200 OK after that.
In order to avoid all this on every request, the documentation at chapter 4.7.1 states that the same execution context must be used for logically related requests. For me it did not work.
My code:
I initialize the client once in a @PostConstruct method of an EJB
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(18);
cm.setDefaultMaxPerRoute(6);
RequestConfig requestConfig = RequestConfig.custom()
.setSocketTimeout(30000)
.setConnectTimeout(30000)
.setTargetPreferredAuthSchemes(Arrays.asList(AuthSchemes.NTLM))
.setProxyPreferredAuthSchemes(Arrays.asList(AuthSchemes.BASIC))
.build();
CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
credentialsProvider.setCredentials(AuthScope.ANY,
new NTCredentials(userName, password, hostName, domainName));
// Finally we instantiate the client. Client is a thread safe object and can be used by several threads at the same time.
// Client can be used for several request. The life span of the client must be equal to the life span of this EJB.
this.httpclient = HttpClients.custom()
.setConnectionManager(cm)
.setDefaultCredentialsProvider(credentialsProvider)
.setDefaultRequestConfig(requestConfig)
.build();
Use the same client instance in every request:
HttpPost httppost = new HttpPost(endPoint.trim());
// HttpClientContext is not thread safe, one per request must be created.
HttpClientContext context = HttpClientContext.create();
response = this.httpclient.execute(httppost, context);
Deallocate the resources and return the connection back to the connection manager, in the @PreDestroy method of my EJB:
this.httpclient.close();
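For reference, a hedged sketch of what the tutorial's advice about reusing the same execution context for logically related requests could look like with the client configured above (the URLs are placeholders; as noted, it did not work in my case):

// Reuse one HttpClientContext for a series of logically related requests,
// so authentication state can be carried over instead of re-negotiated each time.
HttpClientContext context = HttpClientContext.create();

HttpPost first = new HttpPost("https://service.example.com/endpoint"); // placeholder
try (CloseableHttpResponse response = this.httpclient.execute(first, context)) {
    EntityUtils.consume(response.getEntity());
}

HttpPost second = new HttpPost("https://service.example.com/endpoint"); // placeholder
try (CloseableHttpResponse response = this.httpclient.execute(second, context)) {
    EntityUtils.consume(response.getEntity());
}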
I had the same problem with HttpClient 4.1.x. After upgrading to HttpClient 4.2.6 it worked like a charm. Below is my code:
DefaultHttpClient httpclient = new DefaultHttpClient();
HttpContext localContext = new BasicHttpContext();
HttpGet httpget = new HttpGet("url");
CredentialsProvider credsProvider = new BasicCredentialsProvider();
credsProvider.setCredentials(AuthScope.ANY,
new NTCredentials("username", "pwd", "", "domain"));
List<String> authtypes = new ArrayList<String>();
authtypes.add(AuthPolicy.NTLM);
httpclient.getParams().setParameter(AuthPNames.TARGET_AUTH_PREF,authtypes);
localContext.setAttribute(ClientContext.CREDS_PROVIDER, credsProvider);
HttpResponse response = httpclient.execute(httpget, localContext);
HttpEntity entity=response.getEntity();
The easiest way I found to troubleshoot such situations is Wireshark. It is a very big hammer, but it really will show you everything. Install it, make sure your server is on another machine (it does not work with localhost), and start capturing.
Run your request that fails, then run one that works. Filter by http (just put http in the filter field), find the first GET request, find the other GET request, and compare. Identify the meaningful difference; you now have specific keywords or issues to search your code and the net for. If that is not enough, narrow down to the first TCP conversation and look at the full request/response. Do the same with the other one.
I solved an unbelievable number of problems with that approach. And Wireshark is a very useful tool to know, with lots of super-advanced functions to make your network debugging easier.
You can also run it on either the client or the server end, whichever will show you both requests so you can compare.
I had a similar problem with HttpClient 4.1.2. For me, it was resolved by reverting to HttpClient 4.0.3. I could never get NTLM working with 4.1.2 using either the built-in implementation or using JCIFS.
Updating our application to use the jars in the httpcomponents-client-4.5.1 resolved this issue for me.
I finally figured it out. Digest authentication requires that if you use a full URL in the request, the proxy also needs to use the full URL. I did not leave the proxy code in the sample, but it was directed to "localhost", which caused it to fail. Changing this to 127.0.0.1 made it work.
