I'm looking for the best way to test a class which internally makes HTTP requests to a pre-defined URL. Generally, the class in question looks more or less like this:
public class ServiceAccess {

    private static final String SERVICE_URL = "http://someservice.com/";

    public ServiceAccess(String username)
            throws IOException, UserNotFoundException, MalformedURLException {
        URL url = new URL(SERVICE_URL + username);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        if (conn.getResponseCode() == HTTP_NOT_FOUND) {
            throw new UserNotFoundException("user not found: " + username);
        }
        // and some more checks
    }
}
I would like to test that the class properly reacts to the HTTP server's responses, including response codes, header fields, and such. I found the MockWebServer library, which looks like just what I need. However, in order to use it, I would need to somehow change the URL that the class connects to.
The only sensible option that I see is to pass the URL in the constructor; however, it seems to me that this does not play too well in terms of design, since requiring the client to pass a URL to such a class looks fishy. Furthermore, I have not seen any other web service access libraries (Twitter4J, RestFB) that would require their clients to pass the URL in order to actually use them.
I'm not a Java whiz, but I'd like to get it as right as possible. All answers welcome.
What is fishy about passing the URL? Not sure I get that.
Generally for things like this, don't you want the URL to be a property? I would think that in the same way the database URL for your instance is constructed from properties, you would want to do the same here. In which case, in your test, you just override the property or properties.
The other interesting thing about these kinds of tests: I think it's a really good idea to have tests of the actual protocol (which is what you are doing with the mock), and also tests of the actual service, and then run the service tests on a schedule, just as a way to make sure that the downstream services you are consuming are still there and honoring their end of the contract. I was reading the excellent Continuous Delivery book from Addison-Wesley and contemplating making this part of a pipeline today.
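A minimal sketch of that property-driven approach (the property name "service.url" is invented for illustration):

public class ServiceAccess {

    // Read the base URL from a system property, falling back to production.
    private static final String SERVICE_URL =
            System.getProperty("service.url", "http://someservice.com/");

    // ... constructor exactly as in the question ...
}

A test would set the property to point at a local stub before using the class, e.g. System.setProperty("service.url", "http://localhost:" + port + "/"). Note that a static final field is resolved when the class is loaded, so the override must happen before that, or the field should stop being static.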
If you had written your tests first, you would never have written such code :)
Your class violates the Single Responsibility Principle. Refactor it: extract the part responsible for networking (in your code, obtaining the connection) into its own class, and have ServiceAccess use that class. Then you can easily test ServiceAccess in unit tests. Unit testing the networking code itself is pointless; the folks at Oracle have already done that. All you can test is that you have provided the correct parameters, and that is the role of integration tests.
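A rough sketch of that extraction (the interface and class names are invented for illustration):

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Owns the networking concern; a test can substitute a stub implementation.
interface ConnectionFactory {
    HttpURLConnection open(String username) throws IOException;
}

class HttpConnectionFactory implements ConnectionFactory {

    private static final String SERVICE_URL = "http://someservice.com/";

    @Override
    public HttpURLConnection open(String username) throws IOException {
        URL url = new URL(SERVICE_URL + username);
        return (HttpURLConnection) url.openConnection();
    }
}

public class ServiceAccess {

    public ServiceAccess(ConnectionFactory factory, String username)
            throws IOException, UserNotFoundException {
        HttpURLConnection conn = factory.open(username);
        if (conn.getResponseCode() == HttpURLConnection.HTTP_NOT_FOUND) {
            throw new UserNotFoundException("user not found: " + username);
        }
        // the same further checks as before
    }
}

A unit test can now hand ServiceAccess a ConnectionFactory stub that returns a connection with any response code it likes, without any real networking.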
If you can't change the code, you could use PowerMock to mock HttpURLConnection.
I have made a simple application for study purposes and I want to write some unit/integration tests. I have read that I can mock the database instead of creating a new database for the tests. Below is the code I wrote; I hope someone can explain to me how to mock the database.
public class UserServiceImpl implements UserService {

    @Autowired
    private UserOptionsDao uod;

    @Override
    public User getUser(int id) throws Exception {
        if (id < 1) {
            throw new InvalidParameterException();
        }
        return uod.getUser(id);
    }

    @Override
    public User changeUserEmail(int id, String email) {
        if (id < 1) {
            throw new InvalidParameterException();
        }
        String[] emailParts = email.split("@");
        if (emailParts[0].length() < 5) {
            throw new InvalidParameterException();
        } else if (!emailParts[1].equals("email.com")) {
            throw new InvalidParameterException();
        }
        return uod.changeUserEmail(id, email);
    }
}
The above is the part of the code that I want to test with the mocked database.
Generally you have three options:
Mock the data returned by UserOptionsDao, as @Betlista suggested, thus creating a "fake" DAO object.
Use an in-memory database like HSQLDB to create a database with mock data when the test starts, or
Use something like a Docker container to spin up an instance of MySQL or the like and populate it with data, so you can restart it as necessary.
None of these solutions are perfect.
With #1, your test will skip the intermediate steps of authenticating to the database and looking for data. That leaves a part of your code untested, and as they say, "the devil is in the details." Often people who mock DAOs like this run into problems when they try to deploy.
With #2, you connect to an actual database, but you have to make sure that you are using either the exact same type of database as in your production code or something compatible. It also makes debugging a pain, because you have to pause the test to see the contents of the database if something goes wrong.
With #3, you avoid all the problems of #1 and #2, but then you have to wire up all the Docker stuff. (I'm doing this right now, and I'm having problems too.) The advantage, though, is that like #2 you can set up all of your test data at once, and be guaranteed that the database you test against is exactly the same as the one in production.
In your case, I would go with #2 since the application is for study purposes. Yes, I know this is a long-winded answer, but as you gain experience, you will probably want to know how to "scale up."
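For illustration, a minimal sketch of option #2 with HSQLDB (the schema, data, and class names are invented; this assumes the HSQLDB driver is on the test classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.junit.Test;

public class UserServiceImplDbTest {

    @Test
    public void getUserAgainstInMemoryDatabase() throws Exception {
        // An in-memory HSQLDB database lives only as long as the JVM.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hsqldb:mem:testdb", "sa", "");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE users (id INT PRIMARY KEY, email VARCHAR(100))");
            st.execute("INSERT INTO users VALUES (1, 'someone@email.com')");
        }
        // Point the DAO's DataSource at jdbc:hsqldb:mem:testdb, then call
        // getUser(1) on the service and assert against the known row.
    }
}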
What you can do very easily is to have your own implementation of UserOptionsDao in the test package and set that one on UserServiceImpl. This new implementation can return a fixed set of data, for example...
This is a high-level idea. You probably do not want many hand-written implementations (one per test, in general), so you should use a mocking framework such as Mockito or EasyMock; see their documentation for more details.
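For example, a minimal Mockito sketch (the setter used for injection is hypothetical; a constructor parameter or Spring's test support would work just as well):

import static org.junit.Assert.assertSame;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.junit.Test;

public class UserServiceImplTest {

    @Test
    public void getUserDelegatesToDao() throws Exception {
        // Stub the DAO so no database is involved at all.
        UserOptionsDao dao = mock(UserOptionsDao.class);
        User expected = new User();
        when(dao.getUser(1)).thenReturn(expected);

        UserServiceImpl service = new UserServiceImpl();
        service.setUserOptionsDao(dao); // hypothetical injection point

        assertSame(expected, service.getUser(1));
    }
}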
I know this question has been asked quite a few times; however, I have a different approach in mind for what I want to achieve.
Since Play 1.1, you're able to match hosts. This is very useful, but it means that for every controller, I will need to pass through the subdomain route param. This is quite a burden, and repetitive, if I have hundreds of controllers which use the subdomain param.
Is there not a way to create a filter which looks at the host name before everything else is executed and then sets an on-the-fly config value for that request?
For example (brainstorming), a filter would do the following:
// use request host, but hard-coded for now...
String host = "test.example.com";
Pattern p = Pattern.compile("^([a-z0-9]+)\\.example\\.com$");
Matcher m = p.matcher(host);
if (m.matches()) {
    // OUT: test
    System.out.println(m.group(1));
    System.setProperty("host", m.group(1));
}
And in the models I'd do something like System.getProperty("host");
I know this isn't how it should be done, but I'm just brainstorming.
At least with this way:
- I don't have to pass the subdomain param through to every controller.
- I don't have to pass the subdomain param through to any models either.
- Models have direct access to the subdomain value, so I can filter out objects that belong to the client.
Also, I'm aware that System.setProperty() always applies to the entire JVM, which is a problem: I only want this value to be available for the duration of the request. What should I use instead?
Let's analyse. How would you do it? What would be a good approach? Is this possible with Play? I'm sure there are quite a few people running into this problem. Your input is highly appreciated.
I think you are close. If I had to do this, I would write a controller with a method annotated with @Before, and have that method extract the hostname from the request headers and put it in renderArgs.
Something like this (I haven't tested it):
public class HostExtractor extends Controller {

    @Before
    public static void extractHost() {
        // Read from the request headers and extract whatever you need here.
        String host = request.domain; // e.g. "test.example.com"
        renderArgs.put("hostname", host);
    }
}
Then, in your other controllers, you tell Play that you want to use the controller above as a filter.
@With(HostExtractor.class)
public class MyController extends Controller {

    public static void homepage() {
        String hostname = renderArgs.get("hostname", String.class);
        // Do whatever logic you need to render the page here.
    }
}
Again, I haven't tested this, but I'm doing something similar to cache objects in memcache. I hope that helps!
I was wondering how people with more experience and more complex projects get along with this "ugliness" in REST communication. Imagine the following problem:
We'll need a fair amount of functionality for one specific resource within our REST infrastructure; in my case that's about 50+ functions that result in different queries and different responses. I tried to think of a meaningful resource tree and assigned these to methods that will do "stuff". Afterwards, the server resource class looks like this:
#Path("/thisResource")
public class SomeResource {
#GET/POST/PUT/DELETE
#Path("meaningfulPath")
public Response resourceFunction1 ( ...lots of Params) {
... logic ....
}
//
// lots of functions ...
//
#GET/POST/PUT/DELETE
#Path("meaningfulPath")
public Response resourceFunctionN ( ...lots of Params) {
... logic ....
}
}
To construct the URLs my client will call, I made a little function to prevent typos and make better use of constants,
so my client looks like this:
public class Client {

    public ReturnType function1() {
        client.resource = ResourceClass.build(Constants.RESOURCE, "meaningfulPath");
        // ...
        return response.getEntity(ReturnType.class);
    }
}
Now the question that bothers me is: how could I link the client function and the server function better?
The only connection between these two blocks of code is the URL that will be called by the client and mapped by the server, and since even this URL is generated somewhere else, this leads to a lot of confusion.
When one of my colleagues needs to get into this code, he has a hard time figuring out which of the 50+ client functions leads to which server function. It is also hard to determine whether there are obsolete functions in the code, etc. I guess most of you know about the problems of unclean code better than I do.
How do you deal with this? How would you keep this code clean, maintainable and gorgeous?
Normally, this would be addressed by EJB or similar technologies.
Or at least by "real" web services, which would provide at least WSDL and schemas (with a kind of mapping to Java interfaces, or "ports").
But REST communication is very loosely typed and loosely structured.
The only thing I can think of right now is this: define a project (let's call it "Definitions") which would be referenced (hence known) by both client and server. In this project you could define a class with a lot of public static final String constants, such as:
public static final String SOME_METHOD_NAME = "/someMethodName";
public static final String SOME_OTHER_METHOD_NAME = "/someOtherMethodName";
Note: a static final String can very well be referenced by an annotation (in that case it is considered to be a constant by the compiler). So use the "constants" to annotate your @Path, such as:
@Path(Definitions.SOME_METHOD_NAME)
Same for the client:
ResourceClass.build(Constants.RESOURCE, Definitions.SOME_METHOD_NAME);
You are missing the idea behind REST. What you are doing is not REST but RPC over HTTP. Generally, you are not supposed to construct URLs using out-of-band knowledge. Instead, you should follow links received in the responses from the server. Read about HATEOAS:
http://en.wikipedia.org/wiki/HATEOAS
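To illustrate the idea with the JAX-RS 2.0 client API (the entry-point URL and the "accounts" link relation are invented; a real API documents its own relations):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Link;
import javax.ws.rs.core.Response;

public class HateoasClientSketch {

    public void followAccountsLink() {
        Client client = ClientBuilder.newClient();
        Response root = client.target("http://api.example.com/").request().get();

        // Follow the link the server advertises instead of building the URL.
        Link accountsLink = root.getLink("accounts");
        Response accounts = client.target(accountsLink).request().get();
    }
}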
Let's say I'm writing an application and I need to be able to do something like this:
String url = "https://someurl/";
GetMethod method = new GetMethod(URLEncoder.encode(url));
String content = method.getResponseBodyAsString();
Is there a way to provide a mock server that would let me handle the https request? What I'm looking for is a way to write unit tests, but I need to be able to mock the part that actually goes out to https://someurl so I can get a known response back.
Take a look at jadler (http://jadler.net), an HTTP stubbing/mocking library I've been working on for some time. The 1.0.0 stable version has just been released; it should provide the capabilities you requested:
@Test
public void getAccount() {
    onRequest()
        .havingMethodEqualTo("GET")
        .havingURIEqualTo("/accounts/1")
        .havingBody(isEmptyOrNullString())
        .havingHeaderEqualTo("Accept", "application/json")
    .respond()
        .withTimeout(2, SECONDS)
        .withStatus(200)
        .withBody("{\"account\":{\"id\" : 1}}")
        .withEncoding(Charset.forName("UTF-8"))
        .withContentType("application/json; charset=UTF-8");

    final AccountService service = new AccountServiceRestImpl("http", "localhost", port());
    final Account account = service.getAccount(1);

    assertThat(account, is(notNullValue()));
    assertThat(account.getId(), is(1));
}
@Test
public void deleteAccount() {
    onRequest()
        .havingMethodEqualTo("DELETE")
        .havingPathEqualTo("/accounts/1")
    .respond()
        .withStatus(204);

    final AccountService service = new AccountServiceRestImpl("http", "localhost", port());
    service.deleteAccount(1);

    verifyThatRequest()
        .havingMethodEqualTo("DELETE")
        .havingPathEqualTo("/accounts/1")
    .receivedOnce();
}
You essentially have two options:
1. Abstract the call to the framework and test this.
E.g. refactor the code to allow you to inject a mock implementation at some point. There are many ways to do this, e.g. create a getUrlAsString() method and mock that (also suggested above), or create a URL-getter factory that returns a GetMethod object; the factory can then be mocked.
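A minimal sketch of such a seam (the interface and class names are invented for illustration):

import java.io.IOException;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;

// The code under test depends on this seam instead of on HttpClient directly.
interface UrlFetcher {
    String getUrlAsString(String url) throws IOException;
}

// Production implementation, wrapping commons-httpclient as in the question.
class HttpUrlFetcher implements UrlFetcher {

    @Override
    public String getUrlAsString(String url) throws IOException {
        GetMethod method = new GetMethod(url);
        new HttpClient().executeMethod(method);
        return method.getResponseBodyAsString();
    }
}

A test then hands the code under test a stub UrlFetcher that returns a canned string, with no network involved.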
2. Start up an app server as part of the test and then run your method against it. (This will be more of an integration test.)
This can be achieved in a number of ways. It can be external to the test, e.g. the Maven Jetty plugin, or the test can programmatically start up the server. See: http://docs.codehaus.org/display/JETTY/Embedding+Jetty
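A sketch of the programmatic route against the embedded Jetty 9 API (details vary by Jetty version; the canned response is arbitrary):

import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.server.handler.AbstractHandler;
import org.junit.Test;

public class EmbeddedJettyTest {

    @Test
    public void servesCannedResponse() throws Exception {
        Server server = new Server(0); // port 0 = let the OS pick a free port
        server.setHandler(new AbstractHandler() {
            @Override
            public void handle(String target, Request baseRequest,
                               HttpServletRequest request, HttpServletResponse response)
                    throws IOException {
                response.setContentType("text/plain");
                response.getWriter().write("known response");
                baseRequest.setHandled(true);
            }
        });
        server.start();
        try {
            int port = ((ServerConnector) server.getConnectors()[0]).getLocalPort();
            // Point the code under test at http://localhost:<port>/ here.
        } finally {
            server.stop();
        }
    }
}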
Running it over HTTPS will complicate this, but it will still be possible with self-signed certs. But I'd ask yourself: what exactly do you want to test? I doubt you actually need to test HTTPS functionality itself; it's a proven technology.
Personally I'd go for option 1 - you are attempting to test the functionality of an external library. That is usually unnecessary. Also, it's good practice to abstract out your dependencies on external libraries.
Hope this helps.
If you are writing a unit test, you don't want any external dependencies. From the API, GetMethod extends HttpMethod, so you can easily mock it with your favorite mocking library. Your method.getResponseBodyAsString() call can be mocked to return any data you want.
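For instance, with Mockito (a sketch; the canned body is arbitrary):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.apache.commons.httpclient.methods.GetMethod;
import org.junit.Test;

public class GetMethodMockTest {

    @Test
    public void returnsCannedBody() throws Exception {
        // Stub the response body so no network access is needed.
        GetMethod method = mock(GetMethod.class);
        when(method.getResponseBodyAsString()).thenReturn("known response");

        // Hand the mock to the code under test and assert on the result.
        assertEquals("known response", method.getResponseBodyAsString());
    }
}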
You can wrap that code in some class, give it a WebClient.getUrl() method, and then mock that method (e.g. with jMock) to return stored files, say:
context.checking(new Expectations() {{
    oneOf(webClient).getUrl("https://someurl/");
    will(returnValue(someHTML));
}});
Take a look at JWebUnit http://jwebunit.sourceforge.net/
Here is an example of a test... It's really quite intuitive.
public class ExampleWebTestCase extends WebTestCase {

    public void setUp() {
        super.setUp();
        setBaseUrl("http://localhost:8080/test");
    }

    public void test1() {
        beginAt("/home");
        clickLink("login");
        assertTitleEquals("Login");
        setTextField("username", "test");
        setTextField("password", "test123");
        submit();
        assertTitleEquals("Welcome, test!");
    }
}
You could always launch a thttpd server as part of your unit test to serve the requests locally. Though, ideally, you have a well-tested GetMethod, and then you can just mock it, and not have to actually have a remote server around for ALL of your tests.
Resources
thttpd: http://www.acme.com/software/thttpd/
To what extent are you interested in mocking this "Get" call? If you are looking for a general-purpose mocking framework for Java which integrates well with JUnit and allows you to set up expectations that are automatically asserted when incorporated into a JUnit suite, then you really ought to take a look at jMock.
Now, without more code it's hard to determine whether this is actually what you are looking for, but a (somewhat useless) example of something similar to the example code you wrote would go like this:
public class GetMethodTest {

    @Rule public JUnitRuleMockery context = new JUnitRuleMockery();

    @Test
    public void testGetMethod() throws Exception {
        // Set up the mocked object with expectations.
        final GetMethod method = context.mock(GetMethod.class);
        context.checking(new Expectations() {{
            oneOf(method).getResponseBodyAsString();
            will(returnValue("Response text goes here"));
        }});

        // Now do the checking against the mocked object.
        String content = method.getResponseBodyAsString();
    }
}
Use the XML Mimic stub server, which can simulate static HTTP responses based on request parameters, headers, etc. It is very simple to configure and use:
http://xmlmimic.sourceforge.net/
http://sourceforge.net/projects/xmlmimic/
Thank you all for your help. A number of you posted (as I should have expected) answers indicating my whole approach was wrong, or that low-level code should never have to know whether or not it is running in a container. I would tend to agree. However, I'm dealing with a complex legacy application and do not have the option of doing a major refactoring for the current problem.
Let me step back and ask the question that motivated my original question.
I have a legacy application running under JBoss, and have made some modifications to lower-level code. I have created a unit test for my modification. In order to run the test, I need to connect to a database.
The legacy code gets the data source this way:
(jndiName is a defined string)
Context ctx = new InitialContext();
DataSource dataSource = (DataSource) ctx.lookup(jndiName);
My problem is that when I run this code under unit test, the Context has no data sources defined. My solution to this was to try to see if I'm running under the application server and, if not, create the test DataSource and return it. If I am running under the app server, then I use the code above.
So, my real question is: What is the correct way to do this? Is there some approved way the unit test can set up the context to return the appropriate data source so that the code under test doesn't need to be aware of where it's running?
For Context: MY ORIGINAL QUESTION:
I have some Java code that needs to know whether or not it is running under JBoss. Is there a canonical way for code to tell whether it is running in a container?
My first approach was developed through experimentation and consists of getting the initial context and testing that it can look up certain values.
private boolean isRunningUnderJBoss(Context ctx) {
    boolean runningUnderJBoss = false;
    try {
        // The following throws a naming exception when not running under JBoss.
        ctx.getNameInNamespace();
        // The URL packages must contain the string "jboss".
        String urlPackages = (String) ctx.lookup("java.naming.factory.url.pkgs");
        if ((urlPackages != null) && (urlPackages.toUpperCase().contains("JBOSS"))) {
            runningUnderJBoss = true;
        }
    } catch (Exception e) {
        // If we get here, we are not under JBoss.
        runningUnderJBoss = false;
    }
    return runningUnderJBoss;
}
Context ctx = new InitialContext();
if (isRunningUnderJBoss(ctx)) {
    // .........
}
Now, this seems to work, but it feels like a hack. What is the "correct" way to do this? Ideally, I'd like a way that would work with a variety of application servers, not just JBoss.
The whole concept is back to front. Lower-level code should not be doing this sort of testing. If you need a different implementation, pass it down at a relevant point.
Some combination of Dependency Injection (whether through Spring, config files, or program arguments) and the Factory Pattern would usually work best.
As an example, I pass an argument to my Ant scripts that sets up config files depending on whether the EAR or WAR is going into a development, testing, or production environment.
The whole approach feels wrong headed to me. If your app needs to know which container it's running in you're doing something wrong.
When I use Spring I can move from Tomcat to WebLogic and back without changing anything. I'm sure that with proper configuration I could do the same trick with JBoss as well. That's the goal I'd shoot for.
Perhaps something like this (ugly, but it may work):
private boolean isRunningOn(String thatServerName) {
    String uniqueClassName = getSpecialClassNameFor(thatServerName);
    try {
        Class.forName(uniqueClassName);
    } catch (ClassNotFoundException cnfe) {
        return false;
    }
    return true;
}
The getSpecialClassNameFor method would return a class name that is unique to each application server (and may return new class names when more app servers are added).
Then you use it like:
if( isRunningOn("JBoss")) {
createJBossStrategy....etcetc
}
Context ctx = new InitialContext();
DataSource dataSource = (DataSource) ctx.lookup(jndiName);
Who constructs the InitialContext? Its construction must happen outside the code that you are trying to test; otherwise you won't be able to mock the context.
Since you said that you are working on a legacy application, first refactor the code so that you can easily dependency inject the context or data source to the class. Then you can more easily write tests for that class.
You can transition the legacy code by having two constructors, as in the below code, until you have refactored the code that constructs the class. This way you can more easily test Foo and you can keep the code that uses Foo unchanged. Then you can slowly refactor the code, so that the old constructor is completely removed and all dependencies are dependency injected.
public class Foo {

    private final DataSource dataSource;

    public Foo() { // production code calls this - no changes needed to callers
        try {
            Context ctx = new InitialContext();
            this.dataSource = (DataSource) ctx.lookup(jndiName);
        } catch (NamingException e) {
            throw new IllegalStateException("JNDI lookup failed", e);
        }
    }

    public Foo(DataSource dataSource) { // test code calls this
        this.dataSource = dataSource;
    }

    // methods that use dataSource
}
But before you start doing that refactoring, you should have some integration tests to cover your back. Otherwise you can't know whether even the simple refactorings, such as moving the DataSource lookup to the constructor, break something. Then when the code gets better, more testable, you can write unit tests. (By definition, if a test touches the file system, network or database, it is not a unit test - it is an integration test.)
The benefit of unit tests is that they run fast - hundreds or thousands per second - and are very focused, testing just one behaviour at a time. That makes it possible to run them often (if you hesitate to run all unit tests after changing one line, they run too slowly), so that you get quick feedback. And because they are very focused, you will know just by looking at the name of the failing test exactly where in the production code the bug is.
The benefit of integration tests is that they make sure that all parts are plugged together correctly. That is also important, but you cannot run them very often, because things like touching the database make them very slow. But you should still run them at least once a day on your continuous integration server.
There are a couple of ways to tackle this problem. One is to pass a Context object to the class when it is under unit test. If you can't change the method signature, refactor the creation of the initial context into a protected method and test a subclass that overrides that method to return the mocked context object, as sketched below. That can at least put the class under test, so you can refactor towards better alternatives from there.
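A sketch of that "extract and override" move (the class and method names are invented):

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class LegacyService {

    public DataSource findDataSource(String jndiName) throws NamingException {
        // Existing logic, now routed through the protected seam below.
        return (DataSource) createContext().lookup(jndiName);
    }

    // Extracted so a test subclass can substitute a stub context.
    protected Context createContext() throws NamingException {
        return new InitialContext();
    }
}

In the test, an anonymous subclass overrides createContext() to return a mocked Context that hands back a test DataSource.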
The next option is to obtain database connections from a factory that can tell whether it is in a container or not and does the appropriate thing in each case.
One thing to think about is - once you have this database connection out of the container, what are you going to do with it? It is easier, but it isn't quite a unit test if you have to carry the whole data access layer.
For further help in this direction of moving legacy code under unit test, I suggest you look at Michael Feathers' Working Effectively with Legacy Code.
A clean way to do this would be to have lifecycle listeners configured in web.xml. These can set global flags if you want. For example, you could define a ServletContextListener in your web.xml and in the contextInitialized method, set a global flag that you're running inside a container. If the global flag is not set, then you are not running inside a container.
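One way such a listener might look (the class name and flag are invented; register it with a <listener> element in web.xml):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class ContainerDetector implements ServletContextListener {

    private static volatile boolean inContainer = false;

    public static boolean isRunningInContainer() {
        return inContainer;
    }

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Only the servlet container ever calls this, so it is a safe signal.
        inContainer = true;
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        inContainer = false;
    }
}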