Performance issue with HttpURLConnection - java

I'm establishing an HttpURLConnection to a web server with basically the following two methods:
private HttpURLConnection establishConnection(URL url) {
    HttpURLConnection conn = null;
    try {
        conn = (HttpURLConnection) url.openConnection();
        conn = authenticate(conn);
        conn.setRequestMethod(httpMethod);
        conn.setConnectTimeout(50000);
        conn.connect();
        input = conn.getInputStream();
        return conn;
    } catch (IOException e1) {
        e1.printStackTrace();
    }
    return null;
}
private HttpURLConnection authenticate(HttpURLConnection conn) {
    String userpass = webServiceUserName + ":" + webServicePassword;
    byte[] authEncBytes = Base64.encodeBase64(userpass.getBytes());
    String authStringEnc = new String(authEncBytes);
    conn.setRequestProperty("Authorization", "Basic " + authStringEnc);
    return conn;
}
This works quite well: the server sends an XML file and I can continue working with it. The problem I'm encountering is that I have to do about 220 of these requests, and they add up to about 25 s of processing time. The data is used in a web page, so a 25 s response time is not really acceptable.
The code above takes about 86,000,036 ns (~86 ms), so I'm searching for a way to improve the speed somehow. I tried using the org.apache.http.* package, but that was a bit slower than my current implementation.
Thanks
Markus
Edit: input = conn.getInputStream(); is responsible for ~82-85 ms of that delay. Is there any way around it?
Edit 2: I used the connection manager as well:
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(200);
cm.setDefaultMaxPerRoute(20);
HttpHost localhost = new HttpHost(webServiceHostName, 443);
cm.setMaxPerRoute(new HttpRoute(localhost), 50);
CredentialsProvider credsProvider = new BasicCredentialsProvider();
credsProvider.setCredentials(
        new AuthScope(webServiceHostName, 443),
        new UsernamePasswordCredentials(webServiceUserName, webServicePassword));
httpclient = HttpClients.custom().setConnectionManager(cm).setDefaultCredentialsProvider(credsProvider).build();
But the runtime increases to ~40 s, and I get a warning from my Tomcat after every request that the cookie was rejected because of an "Illegal path attribute".

You may be able to get a substantial boost by downloading a number of files in parallel.
I had a project where I had to download 20 resources from a server over a satellite backhaul (around 700ms round-trip delay). Downloading them sequentially took around 30 seconds; 5 at a time took 6.5 seconds, 10 at a time took 3.5 seconds, and all 20 at once was a bit over 2.5 seconds.
Here is an example that performs multiple downloads concurrently and, if supported by the server, uses connection keep-alive.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.protocol.BasicHttpContext;
import org.apache.http.protocol.HttpContext;
import org.apache.http.util.EntityUtils;
public class Downloader {
private static final int MAX_REQUESTS_PER_ROUTE = 10;
private static final int MAX_REQUESTS_TOTAL = 50;
private static final int MAX_THREAD_DONE_WAIT = 60000;
public static void main(String[] args) throws IOException,
InterruptedException {
long startTime = System.currentTimeMillis();
// create connection manager and http client
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setDefaultMaxPerRoute(MAX_REQUESTS_PER_ROUTE);
cm.setMaxTotal(MAX_REQUESTS_TOTAL);
CloseableHttpClient httpclient = HttpClients.custom()
.setConnectionManager(cm).build();
// list of download items
List<DownloadItem> items = new ArrayList<DownloadItem>();
items.add(new DownloadItem("http://www.example.com/file1.xml"));
items.add(new DownloadItem("http://www.example.com/file2.xml"));
items.add(new DownloadItem("http://www.example.com/file3.xml"));
items.add(new DownloadItem("http://www.example.com/file4.xml"));
// create and start download threads
DownloadThread[] threads = new DownloadThread[items.size()];
for (int i = 0; i < items.size(); i++) {
threads[i] = new DownloadThread(httpclient, items.get(i));
threads[i].start();
}
// wait for all threads to complete
for (int i = 0; i < items.size(); i++) {
threads[i].join(MAX_THREAD_DONE_WAIT);
}
// use content
for (DownloadItem item : items) {
System.out.println("uri: " + item.uri + ", status-code: "
+ item.statusCode + ", content-length: "
+ item.content.length);
}
// done with http client
httpclient.close();
System.out.println("Time to download: "
+ (System.currentTimeMillis() - startTime) + "ms");
}
static class DownloadItem {
String uri;
byte[] content;
int statusCode;
DownloadItem(String uri) {
this.uri = uri;
content = null;
statusCode = -1;
}
}
static class DownloadThread extends Thread {
private final CloseableHttpClient httpClient;
private final DownloadItem item;
public DownloadThread(CloseableHttpClient httpClient, DownloadItem item) {
this.httpClient = httpClient;
this.item = item;
}
@Override
public void run() {
try {
HttpGet httpget = new HttpGet(item.uri);
HttpContext context = new BasicHttpContext();
CloseableHttpResponse response = httpClient.execute(httpget,
context);
try {
item.statusCode = response.getStatusLine().getStatusCode();
HttpEntity entity = response.getEntity();
if (entity != null) {
item.content = EntityUtils.toByteArray(entity);
}
} finally {
response.close();
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
}

Without knowing what kind of work your web requests do, I assume that more than 99% of the 25 seconds consists of network time and waiting for various resources to respond (disk systems, LDAP servers, name servers, etc.).
The Speed of Light
I see you use a userid/password against the web server. Is this an external web server? If so, the network distance itself could account for the 86 ms. With many requests you start to feel the restriction of the speed of light.
The way to optimize your program is to minimize all the waiting time that stacks up. This might be done by running requests in parallel, or by allowing multiple requests in one request (if you can change the web server).
Connection pooling itself won't solve the problem if you still run the requests in sequence.
A possible solution
Based on further description in the comments, you might use the following sequence:
1. Request the overview XML.
2. Extract the list of devices from the overview XML.
3. Request the device details for all devices in parallel (see the sketch after this list).
4. Collect the responses from all requests.
5. Run through the XML again, and this time update it with the responses.
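A minimal sketch of steps 3 and 4 with a fixed thread pool (the device ids, the pool size, and fetchDeviceDetails() are placeholders standing in for the question's own HTTP code):
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
public class ParallelDeviceFetch {
    public static void main(String[] args) throws Exception {
        // step 2: device ids extracted from the overview XML (placeholder values)
        List<String> deviceIds = Arrays.asList("device-1", "device-2", "device-3");
        // step 3: request the device details for all devices in parallel
        ExecutorService pool = Executors.newFixedThreadPool(20);
        Map<String, Future<String>> pending = new LinkedHashMap<>();
        for (String id : deviceIds) {
            pending.put(id, pool.submit(() -> fetchDeviceDetails(id)));
        }
        // step 4: collect the responses (get() blocks until each request has finished)
        Map<String, String> detailsById = new LinkedHashMap<>();
        for (Map.Entry<String, Future<String>> entry : pending.entrySet()) {
            detailsById.put(entry.getKey(), entry.getValue().get());
        }
        pool.shutdown();
        // step 5: run through the overview XML again and merge in detailsById
        System.out.println(detailsById);
    }
    // placeholder for the establishConnection()/getInputStream() logic from the question
    private static String fetchDeviceDetails(String deviceId) {
        return "<device id=\"" + deviceId + "\"/>";
    }
}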

Related

what is 'proxy.mycompany1.local'

I just started working with Java networking protocols. I am trying to connect to the internet using my proxy server. When I look at the post at 'https://www.tutorialspoint.com/javaexamples/net_poxy.htm', they set the http.proxyHost property to 'proxy.mycompany1.local'. I know I can set this to my proxy server IP, but I am curious to know why my program still works even though I set it to some random string like "abcd".
A. What does 'proxy.mycompany1.local' stand for?
B. How come my program works, even though I set http.proxyHost to "abcd"?
Following is my working program:
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.ProxySelector;
import java.net.URI;
import java.net.URL;
public class TestProxy {
public static void main(String s[]) throws Exception {
try {
System.setProperty("http.proxyHost", "abcd");
System.setProperty("http.proxyPort", "8080");
URL u = new URL("http://www.google.com");
HttpURLConnection con = (HttpURLConnection) u.openConnection();
System.out.println(con.getResponseCode() + " : " + con.getResponseMessage());
} catch (Exception e) {
e.printStackTrace();
System.out.println(false);
}
Proxy proxy = (Proxy) ProxySelector.getDefault().select(new URI("http://www.google.com")).iterator().next();
System.out.println("proxy Type : " + proxy.type());
InetSocketAddress addr = (InetSocketAddress) proxy.address();
if (addr == null) {
System.out.println("No Proxy");
} else {
System.out.println("proxy hostname : " + addr.getHostName());
System.out.println("proxy port : " + addr.getPort());
}
}
}
This is the output:
200 : OK
proxy Type : HTTP
proxy hostname : abcd
proxy port : 8080
First of all, according to the System Properties tutorial:
Warning: Changing system properties is potentially dangerous and
should be done with discretion. Many system properties are not reread
after start-up and are there for informational purposes. Changing some
properties may have unexpected side-effects.
And in my experience you can get unpleasant issues on your system when you change the *.proxyHost properties, so I would strongly recommend not changing system properties for this task.
It is much better to use something like:
//Proxy instance, proxy ip = 127.0.0.1 with port 8080
Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("127.0.0.1", 8080));
conn = new URL(urlString).openConnection(proxy);
and for authorization on the proxy:
Authenticator authenticator = new Authenticator() {
    @Override
    public PasswordAuthentication getPasswordAuthentication() {
        return new PasswordAuthentication("user",
                "mypassword".toCharArray());
    }
};
Authenticator.setDefault(authenticator);
Now back to the main questions:
A. 'proxy.mycompany1.local' is just an example. You can use any hostname.
B. The URL class uses java.net.PlainSocketImpl via Socket. It tries to resolve the proxy hostname 'abcd', swallows the error, and goes to Google directly. Just try playing with this code:
import java.net.*;
import java.io.*;
public class RequestURI {
public static void main(String[] args) {
int port = 8181;
long startTime = System.currentTimeMillis();
try {
// System.getProperties().setProperty("http.proxyHost", "abcd");
// System.getProperties().setProperty("http.proxyPort", Integer.toString(port));
URL url = new URL("http://google.com");
HttpURLConnection uc = (HttpURLConnection) url.openConnection();
int resp = uc.getResponseCode();
if (resp != 200) {
throw new RuntimeException("Failed: Fragment is being passed as part of the RequestURI");
}
} catch (IOException e) {
e.printStackTrace();
}
System.out.println("Run time in ms ="+ (System.currentTimeMillis() - startTime));
}
}
You can see that the run time is bigger when you uncomment the setProperty section: the unsuccessful attempt to resolve the hostname increases the execution time.
First of all, proxy.mycompany1.local is just a host name; it is a sample, nothing special.
I tried your code on a non-proxied network, and it worked as you described. I guess the url.openConnection() method ignores the proxy settings, because if you manage your own proxy and use url.openConnection(proxy), it fails with a java.net.UnknownHostException.
Here is a piece of code that fails:
SocketAddress addr = new InetSocketAddress("abcd", 8080);
Proxy proxy = new Proxy(Proxy.Type.HTTP, addr);
URL url = new URL("http://google.com");
URLConnection conn = url.openConnection(proxy);
InputStream in = conn.getInputStream();
in.close();
You can read more about Java Networking and Proxies.

What's the difference between CloseableHttpResponse.close() and httpPost.releaseConnection()?

CloseableHttpResponse response = null;
HttpPost request = null;
try {
    // do some thing ....
    request = new HttpPost("some url");
    response = getHttpClient().execute(request);
    // do some other thing ....
} catch (Exception e) {
    // deal with exception
} finally {
    if (response != null) {
        try {
            response.close(); // (1)
        } catch (Exception e) {}
    }
    if (request != null) {
        request.releaseConnection(); // (2)
    }
}
I've made an HTTP request like the above.
In order to release the underlying connection, is it correct to call (1) and (2)? And what's the difference between the two invocations?
Short answer:
request.releaseConnection() releases the underlying HTTP connection so that it can be reused. response.close() closes a stream (not a connection); this stream is the response content we are streaming from the network socket.
Long answer:
The correct pattern to follow in any recent version (> 4.2, and probably even before that) is not to use releaseConnection.
request.releaseConnection() releases the underlying HTTP connection so the request can be reused; however, the Javadoc says:
A convenience method to simplify migration from HttpClient 3.1 API...
Instead of releasing the connection, we ensure the response content is fully consumed, which in turn ensures the connection is released and ready for reuse. A short example is shown below:
CloseableHttpClient httpclient = HttpClients.createDefault();
HttpGet httpGet = new HttpGet("http://targethost/homepage");
CloseableHttpResponse response1 = httpclient.execute(httpGet);
try {
    System.out.println(response1.getStatusLine());
    HttpEntity entity1 = response1.getEntity();
    // do something useful with the response body
    String bodyAsString = EntityUtils.toString(entity1);
    System.out.println(bodyAsString);
    // and ensure it is fully consumed (this is how the stream is released)
    EntityUtils.consume(entity1);
} finally {
    response1.close();
}
CloseableHttpResponse.close() closes the TCP socket
HttpPost.releaseConnection() closes the TCP socket
EntityUtils.consume(response.getEntity()) allows you to re-use the TCP socket
Details
CloseableHttpResponse.close() closes the TCP socket, preventing the connection from being re-used. You need to establish a new TCP connection in order to initiate another request.
This is the call chain that led me to the above conclusion:
HttpResponseProxy.close()
-> ConnectionHolder.close()
-> ConnectionHolder.releaseConnection(reusable=false)
-> managedConn.close()
-> BHttpConnectionBase.close()
-> Socket.close()
HttpPost.releaseConnection() also closes the socket. This is the call chain that led me to the above conclusion:
HttpPost.releaseConnection()
-> HttpRequestBase.releaseConnection()
-> AbstractExecutionAwareRequest.reset()
-> ConnectionHolder.cancel()
-> ConnectionHolder.abortConnection()
-> HttpConnection.shutdown()
Here is experimental code that also demonstrates the above three facts:
import java.lang.reflect.Constructor;
import java.net.Socket;
import java.net.SocketImpl;
import java.net.SocketImplFactory;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.util.EntityUtils;
public class Main {
private static SocketImpl newSocketImpl() {
try {
Class<?> defaultSocketImpl = Class.forName("java.net.SocksSocketImpl");
Constructor<?> constructor = defaultSocketImpl.getDeclaredConstructor();
constructor.setAccessible(true);
return (SocketImpl) constructor.newInstance();
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public static void main(String[] args) throws Exception {
// this is a hack that lets me listen to Tcp socket creation
final List<SocketImpl> allSockets = Collections.synchronizedList(new ArrayList<>());
Socket.setSocketImplFactory(new SocketImplFactory() {
public SocketImpl createSocketImpl() {
SocketImpl socket = newSocketImpl();
allSockets.add(socket);
return socket;
}
});
System.out.println("num of sockets after start: " + allSockets.size());
CloseableHttpClient client = HttpClientBuilder.create().build();
System.out.println("num of sockets after client created: " + allSockets.size());
HttpGet request = new HttpGet("http://www.google.com");
System.out.println("num of sockets after get created: " + allSockets.size());
CloseableHttpResponse response = client.execute(request);
System.out.println("num of sockets after get executed: " + allSockets.size());
response.close();
System.out.println("num of sockets after response closed: " + allSockets.size());
response = client.execute(request);
System.out.println("num of sockets after request executed again: " + allSockets.size());
request.releaseConnection();
System.out.println("num of sockets after release connection: " + allSockets.size());
response = client.execute(request);
System.out.println("num of sockets after request executed again for 3rd time: " + allSockets.size());
EntityUtils.consume(response.getEntity());
System.out.println("num of sockets after entityConsumed: " + allSockets.size());
response = client.execute(request);
System.out.println("num of sockets after request executed again for 4th time: " + allSockets.size());
}
}
pom.xml
<project>
<modelVersion>4.0.0</modelVersion>
<groupId>org.joseph</groupId>
<artifactId>close.vs.release.conn</artifactId>
<version>1.0.0</version>
<properties>
<maven.compiler.target>1.8</maven.compiler.target>
<maven.compiler.source>1.8</maven.compiler.source>
</properties>
<build>
<plugins>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpclient</artifactId>
<version>4.5.13</version>
</dependency>
</dependencies>
</project>
Output:
num of sockets after start: 0
num of sockets after client created: 0
num of sockets after get created: 0
num of sockets after get executed: 1
num of sockets after response closed: 1
num of sockets after request executed again: 2
num of sockets after release connection: 2
num of sockets after request executed again for 3rd time: 3
num of sockets after entityConsumed: 3
num of sockets after request executed again for 4th time: 3
Notice that both .close() and .releaseConnection() result in a new TCP connection; only consuming the entity allows you to reuse the TCP connection.
If you want the connection to be reusable after each request, then you need to do what @Matt recommended and consume the entity.

HttpClient: How to have only one connection to the server?

This code creates a new connection to the RESTful server for each request rather than just reusing the existing connection. How do I change the code so that there is only one connection?
The line "response = oClientCloseable.execute(...)" not only does the task but also creates a connection.
I checked the server daemon log, and the only activity is generated by the .execute() method.
import org.apache.http.HttpResponse;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.HttpDelete;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.utils.HttpClientUtils;
import org.apache.http.conn.ConnectionPoolTimeoutException;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
...
String pathPost = "http://someurl";
String pathDelete = "http://someurl2";
String xmlPost = "myxml";
HttpResponse response = null;
BufferedReader rd = null;
String line = null;
CloseableHttpClient oClientCloseable = HttpClientBuilder.create().setDefaultRequestConfig(defaultRequestConfig).build();
for (int iLoop = 0; iLoop < 25; iLoop++)
{
HttpPost hPost = new HttpPost(pathPost);
hPost.setHeader("Content-Type", "application/xml");
StringEntity se = new StringEntity(xmlPost);
hPost.setEntity(se);
line = "";
try
{
response = oClientCloseable.execute(hPost);
rd = new BufferedReader(new InputStreamReader(response.getEntity().getContent()));
while ((line = rd.readLine()) != null)
{
System.out.println(line);
}
}
catch (ClientProtocolException e)
{
e.printStackTrace();
}
catch (ConnectionPoolTimeoutException e)
{
e.printStackTrace();
}
catch (IOException e)
{
e.printStackTrace();
}
finally
{
HttpClientUtils.closeQuietly(response);
}
HttpDelete hDelete = new HttpDelete(pathDelete);
hDelete.setHeader("Content-Type", "application/xml");
try
{
response = oClientCloseable.execute(hDelete);
}
catch (ClientProtocolException e)
{
e.printStackTrace();
}
catch (IOException e)
{
e.printStackTrace();
}
finally
{
HttpClientUtils.closeQuietly(response);
}
}
oClientCloseable.close();
The server daemon log emits the following for whatever it is worth, when connecting.
HTTP connection from [192.168.20.86]...ALLOWED
POST [/linx] SIZE 248
LINK-18446744073709551615: 2 SEND-BMQs, 2 RECV-BMQs
THREAD-LINK_CONNECT-000, TID: 7F0F1B7FE700 READY
NODE connecting to [192.168.30.20]:9099...
LINK-0-CONTROL-NODE-0 connected to 192.168.30.20(192.168.30.20 IPv4 address: 192.168.30.20):9099
Auth accepted, protocol compatible
NODE connecting to [192.168.30.20]:9099...
This article seems the most relevant, as it talks about consuming (closing) connections, which ties in with the response. That article is also out of date, as consumeContent() is deprecated. It seems that response.close() is the proper way, but that closes the connection, and a new response creates a new connection.
It seems that I need to somehow create one connection to the server daemon and then change the action (GET, POST, PUT, or DELETE).
Thoughts on how the code should change?
Here are some other links that I used:
link 1
link 2
link 3
I implemented the suggestion of Robert Rowntree (sorry, not sure how to properly reference the name) by replacing the beginning code with:
// Increase max total connection to 200 and increase default max connection per route to 20.
// Configure total max or per route limits for persistent connections
// that can be kept in the pool or leased by the connection manager.
PoolingHttpClientConnectionManager oConnectionMgr = new PoolingHttpClientConnectionManager();
oConnectionMgr.setMaxTotal(200);
oConnectionMgr.setDefaultMaxPerRoute(20);
oConnectionMgr.setMaxPerRoute(new HttpRoute(new HttpHost("192.168.20.120", 8080)), 20);
RequestConfig defaultRequestConfig = RequestConfig.custom()
.setSocketTimeout(5000)
.setConnectTimeout(5000)
.setConnectionRequestTimeout(5000)
.setStaleConnectionCheckEnabled(true)
.build();
//HttpClient client = HttpClientBuilder.create().setDefaultRequestConfig(defaultRequestConfig).build();
CloseableHttpClient oClientCloseable = HttpClientBuilder.create()
.setConnectionManager(oConnectionMgr)
.setDefaultRequestConfig(defaultRequestConfig)
.build();
I still saw the bunch of authenticate messages.
I contacted the vendor, shared the log from the modified version with them, and my code was clean.
My test sample created a connection (to a remote server), then deleted the connection, and repeated that however many times. Their code dumps the authenticate message each time a connection-creation request arrives.
I was pointed to what I technically already knew: the line that indicates a new RESTful connection to the service is always "XXXXX connection allowed". There was one of those, two if you count my going to the browser-based interface afterwards to make sure that all my links were gone.
Sadly, I am not sure that I can use the Apache client. Apache does not support message bodies inside a GET request. To the simple-minded here (me, in this case), Apache does not allow:
GET http://www.example.com/whatevermethod:myport?arg1=data1&arg2=data2
Apache HttpClient's HttpGet does not have a setEntity method. Research showed this being done as a POST request, but the service is the way it is and will not change, so...
You can definitely use query parameters in Apache HttpClient:
URIBuilder builder = new URIBuilder("http://www.example.com/whatevermehtod");
builder.addParameter("arg1", "data1");
URI uri = builder.build();
HttpGet get = new HttpGet(uri);
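If the service really does require a message body on the GET itself, one workaround is to subclass HttpEntityEnclosingRequestBase and report "GET" as the method name. This is only a sketch; whether the server honours a GET body, and the example URL and body here, are assumptions:
import java.net.URI;
import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
public class HttpGetWithEntity extends HttpEntityEnclosingRequestBase {
    public HttpGetWithEntity(String uri) {
        setURI(URI.create(uri));
    }
    @Override
    public String getMethod() {
        return "GET"; // same entity-enclosing machinery as HttpPost, but sent as GET
    }
    public static void main(String[] args) throws Exception {
        CloseableHttpClient client = HttpClients.createDefault();
        HttpGetWithEntity get = new HttpGetWithEntity("http://www.example.com/whatevermethod");
        get.setEntity(new StringEntity("arg1=data1&arg2=data2")); // request body on the GET
        client.execute(get).close(); // execute and discard the response
        client.close();
    }
}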

Timer in Java/ATG - Web Services call

I have generated a request for web services. I need to do a check on my call: if the response is not returned within 5 seconds, another request should be sent.
Pseudo code:
webServiceClass response = xyz.getData();
If the response is not obtained in 5 seconds, send another request, CheckData(), to the web service. This should be done a maximum of 5 times.
I need to do this without using threads.
Try something like this (not tested but should give you the idea):
final MultiThreadedHttpConnectionManager httpConnections = new MultiThreadedHttpConnectionManager();
final HttpConnectionManagerParams connParams = httpConnections.getParams();
final HttpClient httpClient = new HttpClient(httpConnections);
final int connectionTimeout = 5000;
connParams.setConnectionTimeout(connectionTimeout);
try
{
    // your web service call goes here (using httpClient and an HttpMethod called "method")
}
catch (ConnectTimeoutException cte)
{
    if (isLoggingError())
    {
        logError(cte.getMessage());
    }
}
catch (IOException ioe)
{
    if (isLoggingError())
    {
        logError(ioe.getMessage());
    }
}
finally
{
    // make sure we always release the connection
    method.releaseConnection();
}
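The snippet above only covers the timeout itself. For the retry-up-to-5-times part, here is a minimal, self-contained sketch that uses plain HttpURLConnection timeouts so no extra threads are needed; SERVICE_URL and the 5-second values are assumptions, and the body read stands in for the real web service call:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;
public class RetryingCaller {
    // hypothetical endpoint; replace with the real web service URL
    private static final String SERVICE_URL = "http://example.com/service";
    public static String callWithRetry() throws IOException {
        SocketTimeoutException lastTimeout = null;
        for (int attempt = 1; attempt <= 5; attempt++) { // at most 5 attempts
            HttpURLConnection conn = null;
            try {
                conn = (HttpURLConnection) new URL(SERVICE_URL).openConnection();
                conn.setConnectTimeout(5000); // 5 s to establish the connection
                conn.setReadTimeout(5000);    // 5 s to receive the response
                try (InputStream in = conn.getInputStream()) {
                    ByteArrayOutputStream out = new ByteArrayOutputStream();
                    byte[] buffer = new byte[4096];
                    int n;
                    while ((n = in.read(buffer)) != -1) {
                        out.write(buffer, 0, n);
                    }
                    return out.toString("UTF-8"); // response arrived in time
                }
            } catch (SocketTimeoutException timeout) {
                lastTimeout = timeout; // no response within 5 s: try again
            } finally {
                if (conn != null) {
                    conn.disconnect();
                }
            }
        }
        throw lastTimeout; // all 5 attempts timed out
    }
}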

JUnit test on URLConnection, use EasyMock?

Hey, I have been trying to work this out for the last day or so but am hitting a brick wall. I'm trying to unit test this bit of code, but I'm not sure whether I need to use EasyMock or not. The few examples I've seen online seem to use older techniques.
public boolean verifyConnection(final String url) {
boolean result;
final int timeout = getConnectionTimeout();
if (timeout < 0) {
log.info("No need to verify connection to client. Supplied timeout = {}", timeout);
result = true;
} else {
try {
log.debug("URL: {} Timeout: {} ", url, timeout);
final URL targetUrl = new URL(url);
final HttpURLConnection connection = (HttpURLConnection) targetUrl.openConnection();
connection.setConnectTimeout(timeout);
connection.connect();
result = true;
} catch (ConnectException e) {
log.warn("Could not connect to client supplied url: " + url, e);
result = false;
} catch (MalformedURLException e) {
log.error("Malformed client supplied url: " + url, e);
result = false;
} catch (IOException e) {
log.warn("Could not connect to client supplied url: " + url, e);
result = false;
}
}
return result;
}
It just takes in a URL, checks that it is reachable, and returns true or false.
I have always observed that mocking should be avoided as much as possible, because it can lead to hard-to-maintain JUnit tests and defeat the whole purpose.
My suggestion would be to create a temporary server on your local machine from the JUnit test itself.
At the beginning of the test you can create a server (not more than 10-15 lines of code required) using Java sockets, and then pass the URL for the local server to your code. This way you reduce mocking and ensure maximum code coverage.
Something like this -
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
public class SimpleServer extends Thread {
    private final ServerSocket serverSocket;
    public SimpleServer(int port) throws IOException {
        // bind in the constructor so the port is already listening when the test connects
        serverSocket = new ServerSocket(port);
    }
    public void run() {
        try {
            while (true) {
                // accept and close each connection; enough for a connect()-only check
                Socket s = serverSocket.accept();
                s.close();
            }
        } catch (IOException e) {
            // thrown when close() shuts the server socket down; ends the accept loop
        }
    }
    public void close() throws IOException {
        serverSocket.close();
    }
}
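And a sketch of how a test might drive it against the verifyConnection(String) method from the question (assuming that method lives in the WebClient1 class referenced later in this thread; the port is arbitrary):
import static org.junit.Assert.assertTrue;
import org.junit.Test;
public class VerifyConnectionTest {
    @Test
    public void verifyConnectionSucceedsAgainstLocalServer() throws Exception {
        SimpleServer server = new SimpleServer(8181); // arbitrary free port
        server.start();
        try {
            WebClient1 client = new WebClient1(); // class under test from the question
            assertTrue(client.verifyConnection("http://localhost:8181"));
        } finally {
            server.close(); // unbind the port again
        }
    }
}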
If you want to mock this method, I'd recommend passing in the URL rather than the String. Don't have your method create the URL it needs; let the client create the URL for you and pass it in. That way your test can substitute a mock if it needs to.
It's almost a dependency injection idea - your method should be given its dependencies and not create them on its own. The call to "new" is the dead giveaway.
It's not a drastic change. You could overload the method and have two signatures: one that accepts a URL string and another that accepts the URL itself. Have the first method create the URL and call the second. That way you can test it and still have the method with the String signature in your API for convenience.
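A sketch of that overload pair (it assumes it sits in the same class as the original method, reusing its log and getConnectionTimeout(); the negative-timeout branch is left out for brevity):
// Convenience overload kept in the API: builds the URL, then delegates.
public boolean verifyConnection(final String url) {
    try {
        return verifyConnection(new URL(url));
    } catch (MalformedURLException e) {
        log.error("Malformed client supplied url: " + url, e);
        return false;
    }
}
// The testable variant: the caller (or the test) supplies the URL itself.
public boolean verifyConnection(final URL targetUrl) {
    try {
        final HttpURLConnection connection = (HttpURLConnection) targetUrl.openConnection();
        connection.setConnectTimeout(getConnectionTimeout());
        connection.connect();
        return true;
    } catch (IOException e) {
        log.warn("Could not connect to client supplied url: " + targetUrl, e);
        return false;
    }
}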
I'm trying to set up a mock implementation of HttpURLConnection, like:
public class MockHttpURLConnection extends HttpURLConnection { ... }
and then added a method to the class under test to override:
protected HttpURLConnection createHttpURLConnection(URL url) throws IOException {
    return (HttpURLConnection) url.openConnection();
}
So the test looks something like this:
@Test
public void testGetContentOk() throws Exception
{
String url = "http://localhost";
MockHttpURLConnection mockConnection = new MockHttpURLConnection();
TestableWebClient client = new TestableWebClient();
client.setHttpURLConnection(mockConnection);
boolean result = client.verify(url);
assertEquals(true, result);
}
@Test
public void testDoesNotGetContentOk() throws Exception
{
String url = "http://1.2.3.4";
MockHttpURLConnection mockConnection = new MockHttpURLConnection();
TestableWebClient client = new TestableWebClient();
client.setHttpURLConnection(mockConnection);
boolean result = client.verify(url);
assertEquals(false, result);
}
/**
* An inner, private class that extends WebClient and allows us
* to override the createHttpURLConnection method.
*/
private class TestableWebClient extends WebClient1 {
private HttpURLConnection connection;
/**
* Setter method for the HttpURLConnection.
*
* @param connection
*/
public void setHttpURLConnection(HttpURLConnection connection)
{
this.connection = connection;
}
/**
* A method that we override to create the URL connection.
*/
@Override
public HttpURLConnection createHttpURLConnection(URL url) throws IOException
{
return this.connection;
}
}
The first test passes, but the second one gets true where the dummy test expects false. Thanks for the feedback so far; this is the best site I have found for help. Let me know if you think I'm on the right track.
