I have a few APIs I'm trying to thoroughly integration test -- I'm hitting the remote service which is running in a test environment (not the same box that is running the tests, the tests make real service calls), I can't just use dependency injection to solve all my problems. We already have a good suite of unit tests on these classes.
The API I'm testing is SaveThingy. It saves a Thingy if it's valid, and returns you the id of it. One of the checks is that you can only save a thingy at certain times, say only on weekdays. If you call SaveThingy on the weekend, it insults you personally instead of saving a Thingy. The implementation looks something like the following
ThingyId saveThingy(Thingy thingy) {
    if (isWeekend(LocalDate.now().getDayOfWeek())) {
        throw new PersonalInsultException("your mother wears army boots");
    }
    return thingyDao.save(thingy);
}
I'd ideally like to have both cases tested each time we run integration tests, without any waiting. In some code, I want tests similar to the following to run each time.
@Test
public void saveThingy_validThingyOnWeekday_savesThingy() {
    ThingyId id = serviceUnderTest.saveThingy(THINGY);
    assertThat(serviceUnderTest.getThingyById(id)).isEqualTo(THINGY);
}

@Test(expected = PersonalInsultException.class)
public void saveThingy_validThingyOnWeekend_receivePersonalInsult() {
    serviceUnderTest.saveThingy(THINGY);
}
Are there any standard ways that allow complete testing of such APIs? I've considered a few options (below), but wanted to get additional opinions.
say no to integration testing, live with only unit tests for these APIs
change the remote clock, either using a private API or by literally ssh-ing into the host before running each test
write tests that are time dependent; only testing one of the possible behaviors, or testing one behavior then sleeping until other conditions are met
invent dummy data that will always save or always throw an exception
I suppose in your ThingyService class you have a public or protected isWeekend method. Probably something like this:
public boolean isWeekend(DayOfWeek dayOfWeek) {
    return dayOfWeek == DayOfWeek.SATURDAY || dayOfWeek == DayOfWeek.SUNDAY;
}
In your ThingyServiceTest you can then create two specialized ThingyService instances with mocked isWeekend methods.
In your test-cases you can use either of these:
// service with weekday behavior
private ThingyService serviceUnderTest_weekday = new ThingyService() {
    @Override
    public boolean isWeekend(DayOfWeek dayOfWeek) {
        return false;
    }
};

// service with weekend behavior
private ThingyService serviceUnderTest_weekend = new ThingyService() {
    @Override
    public boolean isWeekend(DayOfWeek dayOfWeek) {
        return true;
    }
};
@Test
public void saveThingy_validThingyOnWeekday_savesThingy() {
    ThingyId id = serviceUnderTest_weekday.saveThingy(THINGY);
    assertThat(serviceUnderTest_weekday.getThingyById(id)).isEqualTo(THINGY);
}

@Test(expected = PersonalInsultException.class)
public void saveThingy_validThingyOnWeekend_receivePersonalInsult() {
    serviceUnderTest_weekend.saveThingy(THINGY);
}
You are trying to achieve black-box testing with white-box testing requirements: it's simply not possible. Stick to white-box testing and mock out your DateProvider locally. Maybe not what you want to hear, but otherwise you will waste a great deal of time trying to align the stars to produce the output you want in your assertions.
If, on the other hand, you really want to do this, run your application inside a Docker container, change the system clock there, and tear the container down between tests.
Alternatively, open up a separate endpoint that allows you to specify the current time to your service, and only deploy that endpoint when you are testing. Or have a configuration interface responsible for determining the source of your current time, and configure the app accordingly when you deploy to your test environment.
Instead of using LocalDate.now(), Instant.now(), Whatever.now(), or passing around long timestamps, consider adding a java.time.Clock field to your class, and either add an initialization parameter to the constructor or provide a setter.
Clock.instant() is essentially Instant.now(), and you can convert the instant into any other temporal class.
In production, you will use Clock.systemUTC() as the value of that field.
In some tests you can use Clock.fixed(), and in all tests you can mock it however the test requires.
This approach is mostly suited to unit tests, but you can still inject a custom Clock implementation in the integration environment, so that the application as a whole thinks the current time is whatever you need.
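A minimal sketch of that approach, assuming a ThingyService-like class whose weekend check reads the injected Clock (the class and method names here are illustrative):

```java
import java.time.Clock;
import java.time.DayOfWeek;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

// Sketch of the Clock-based version of the weekend check. The real service
// would also hold its DAO as before; only the time source changes.
class WeekendChecker {
    private final Clock clock;

    // Production passes Clock.systemUTC(); tests pass Clock.fixed(...).
    public WeekendChecker(Clock clock) {
        this.clock = clock;
    }

    public boolean isWeekend() {
        DayOfWeek day = LocalDate.now(clock).getDayOfWeek();
        return day == DayOfWeek.SATURDAY || day == DayOfWeek.SUNDAY;
    }
}
```

A test then constructs one instance pinned to a Saturday and one pinned to a weekday, so both branches run on every test execution with no waiting.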
Consider the scenario where I am mocking a certain service and its methods.
Employee emp = mock(Employee.class);
when(emp.getName(1)).thenReturn("Jim");
when(emp.getName(2)).thenReturn("Mark");

// assert
assertEquals("Jim", emp.getName(1));
assertEquals("Mark", emp.getName(2));
In the above code, when emp.getName(1) is called the mock returns "Jim", and when emp.getName(2) is called it returns "Mark". My question is: since I am declaring the behavior of the mock myself, what is the point of having the assert statements above (or ones like them)? They are obviously going to pass. It is like checking 3 == (1 + 2). When would these tests ever fail (apart from a change to the return type or parameter types)?
As you noted, these kind of tests are pointless (unless you're writing a unit test for Mockito itself, of course :-)).
The point of mocking is to eliminate external dependencies so you can unit-test your code without depending on other classes' code. For example, let's assume you have a class that uses the Employee class you described:
public class EmployeeExaminer {
    public boolean isJim(Employee e, int i) {
        return "Jim".equals(e.getName(i));
    }
}
And you'd like to write a unit test for it. Of course, you could use the actual Employee class, but then your test won't be a unit-test any more - it would depend on Employee's implementation. Here's where mocking comes in handy - it allows you to replace Employee with a predictable behavior so you could write a stable unit test:
// The object under test
EmployeeExaminer ee = new EmployeeExaminer();

// A mock Employee used for tests:
Employee emp = mock(Employee.class);
when(emp.getName(1)).thenReturn("Jim");
when(emp.getName(2)).thenReturn("Mark");

// Assert EmployeeExaminer's behavior:
assertTrue(ee.isJim(emp, 1));
assertFalse(ee.isJim(emp, 2));
In your case you are testing a getter; I don't know why you are testing it, let alone why you would mock it. From the code you have provided, the test is useless.
There are many scenarios where mocking makes sense. When you write unit tests you have to be pragmatic: test behaviors and mock dependencies.
Here you aren't testing behavior and you are mocking the class under test.
There is no point in that test.
Mocks are only useful for injecting dependencies into classes and testing that a particular behaviour interacts with that dependency correctly, or for allowing you to test some behaviour that requires an interface you don't care about in the test you are writing.
Mocking the class under test means you aren't even really testing that class.
If the emp variable was being injected into another class and then that class was being tested, then I could see some kind of point to it.
The test case above is trying to test a POJO.
Actually, you can skip testing POJOs; in other words, they are tested automatically while testing other basic functionality. (There are also utilities, such as MeanBean, for testing POJOs.)
The goal of unit testing is to test functionality without connecting to any external systems. If you are connecting to any external system, that is considered integration testing.
Mocking helps by creating objects that cannot be created during unit testing, and by letting you test behavior/logic based on whatever data the mocked object (or the real object, when connecting to an external system) returns.
Mocks are structures that simulate behaviour of external dependencies that you don't/can't have or which can't operate properly in the context of your test, because they depend on other external systems themselves (e.g. a connection to a server). Therefore a test like you've described is indeed not very helpful, because you basically try to verify the simulated behaviour of your mocks and nothing else.
A better example would be a class EmployeeValidator that depends on another system EmployeeService, which sends a request to an external server. The server might not be available in the current context of your test, so you need to mock the service that makes the request and simulate the behaviour of that.
class EmployeeValidator {

    private final EmployeeService service;

    public EmployeeValidator(EmployeeService service) {
        this.service = service;
    }

    public List<Employee> employeesWithMaxSalary(int maxSalary) {
        // Possible call to an external system via HTTP or so.
        List<Employee> allEmployees = service.getAll();
        List<Employee> filtered = new LinkedList<>();
        for (Employee e : allEmployees) {
            if (e.getSalary() <= maxSalary) {
                filtered.add(e);
            }
        }
        return filtered;
    }
}
Then you can write a test which mocks the EmployeeService and simulates the call to the external system. Afterwards, you can verify that everything went as planned.
@Test
public void shouldContainAllEmployeesWithSalaryFiveThousand() {
    // Given - define behaviour
    EmployeeService mockService = mock(EmployeeService.class);
    when(mockService.getAll()).thenReturn(createEmployeeList());

    // When - operate the system under test; inject the mock
    EmployeeValidator ev = new EmployeeValidator(mockService);
    // The system calls EmployeeService#getAll() internally, but that is mocked away here
    List<Employee> filtered = ev.employeesWithMaxSalary(5000);

    // Then - check correct results
    assertThat(filtered.size(), is(3)); // there are only 3 employees with salary <= 5000
    verify(mockService, times(1)).getAll(); // the service method was called exactly once
}

private List<Employee> createEmployeeList() {
    // create some dummy Employees
}
I have written the following code to publish some metrics around AWS Step Functions (it's a Java Lambda for AWS):
@Override
public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
    int inProgressStateMachines = 0;
    LocalDateTime now = LocalDateTime.now();
    long alarmThreshold = getAlarmThreshold(input, context.getLogger());
    AWSStepFunctions awsStepFunctions = AWSStepFunctionsClientBuilder.standard().build();
    ListStateMachinesRequest listStateMachinesRequest = new ListStateMachinesRequest();
    ListStateMachinesResult listStateMachinesResult = awsStepFunctions.listStateMachines(listStateMachinesRequest);
    for (StateMachineListItem stateMachineListItem : listStateMachinesResult.getStateMachines()) {
        ListExecutionsRequest listExecutionRequest = new ListExecutionsRequest()
                .withStateMachineArn(stateMachineListItem.getStateMachineArn())
                .withStatusFilter(ExecutionStatus.RUNNING);
        ListExecutionsResult listExecutionsResult = awsStepFunctions.listExecutions(listExecutionRequest);
        for (ExecutionListItem executionListItem : listExecutionsResult.getExecutions()) {
            LocalDateTime stateMachineStartTime = LocalDateTime.ofInstant(
                    executionListItem.getStartDate().toInstant(), ZoneId.systemDefault());
            long elapsedTime = ChronoUnit.SECONDS.between(stateMachineStartTime, now);
            if (elapsedTime > alarmThreshold) {
                inProgressStateMachines++;
            }
        }
        publishMetrics(inProgressStateMachines);
    }
}
Now I am trying to unit-test this method and am having some issues.
First of all, I get an error that Mockito cannot mock a final class when I try to mock AWSStepFunctionsClientBuilder.
Secondly, I have private methods which are called with specific params.
The questions are:
How can I unit test this code? I have read that if code isn't unit-testable then it's a bad design. How can I improve this code so that it's easily testable? I would prefer to keep those helper methods private.
How can I mock final classes from the AWS SDK to test this code? I cannot use any framework other than Mockito.
You actually don't want to mock AWSStepFunctionsClientBuilder because you are actually calling AWSStepFunctions, which you'll have to mock anyway even after mocking the builder.
So make AWSStepFunctions an instance variable:
// add appropriate getter/setter as well
private AWSStepFunctions awsStepFunctions;
Where you currently call the builder to initialize awsStepFunctions, change to:
if (awsStepFunctions == null) {
    awsStepFunctions = AWSStepFunctionsClientBuilder.standard().build();
}
Now, during unit test, you can set awsStepFunctions to a mocked instance, bypassing the conditional initialization above.
[Edit] Some more thoughts based on @kdgregory's comment below:
The answer above is meant to provide a solution given the existing code structure, without requiring any major refactoring. In general though, ideally you would want to move the bulk of the code into another plain, more testable Java class, where you can properly inject dependencies, manage life cycles, etc.
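The setter-plus-lazy-init pattern can be sketched as follows. Because the real AWS SDK types aren't available here, StepFunctionsClient is a hypothetical stand-in interface with one method, and MetricsHandler is an illustrative name; the real code would inject a mocked AWSStepFunctions the same way:

```java
import java.util.Arrays;
import java.util.List;

// Minimal stand-in for the AWS client, so the sketch is self-contained.
interface StepFunctionsClient {
    List<String> listRunningExecutionArns();
}

// Handler with a test seam: tests inject a stub/mock via the setter,
// production falls through to the real builder (represented here by a
// placeholder factory method).
class MetricsHandler {
    private StepFunctionsClient client;

    void setClient(StepFunctionsClient client) { // test seam
        this.client = client;
    }

    private StepFunctionsClient client() {
        if (client == null) {
            // Real code: AWSStepFunctionsClientBuilder.standard().build()
            client = createRealClient();
        }
        return client;
    }

    private StepFunctionsClient createRealClient() {
        throw new UnsupportedOperationException("real AWS client not available in tests");
    }

    public int countRunning() {
        return client().listRunningExecutionArns().size();
    }
}
```

In a real test you would pass `mock(AWSStepFunctions.class)` to the setter instead of the hand-rolled lambda stub used below; the point is only that the conditional initialization is bypassed.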
I have made a simple application for study purposes and I want to write some unit/integration tests. I read that I can mock the database instead of creating a new db for tests. I will copy the code which I wrote. I hope that someone will explain to me how to mock the database.
public class UserServiceImpl implements UserService {

    @Autowired
    private UserOptionsDao uod;

    @Override
    public User getUser(int id) throws Exception {
        if (id < 1) {
            throw new InvalidParameterException();
        }
        return uod.getUser(id);
    }

    @Override
    public User changeUserEmail(int id, String email) {
        if (id < 1) {
            throw new InvalidParameterException();
        }
        String[] emailParts = email.split("@");
        if (emailParts[0].length() < 5) {
            throw new InvalidParameterException();
        } else if (!emailParts[1].equals("email.com")) {
            throw new InvalidParameterException();
        }
        return uod.changeUserEmail(id, email);
    }
}
The above is the part of the code that I want to test with the mocked database.
Generally you have three options:
Mock the data returned by UserOptionsDao as @Betlista suggested, thus creating a "fake" DAO object.
Use an in-memory database like HSQLDB to create a database with mock data when the test starts, or
Use something like a Docker container to spin up an instance of MySQL or the like and populate it with data, so you can restart it as necessary.
None of these solutions are perfect.
With #1, your test will skip the intermediate steps of authenticating to the database and looking for data. That leaves a part of your code untested, and as they say, "the devil is in the details." Often people who mock DAOs like this run into problems when they deploy.
With #2, you connect to an actual database, but you have to make sure that either you are using the exact same type of database in your production code or something compatible. It also makes debugging a pain because you have to pause the test to see the contents of the database if something goes wrong.
With #3, you avoid all the problems with #1 and #2, but then you have to wire up all the Docker stuff. (I'm doing this right now, and I'm having problems too). The advantage, though, is that like #2 you can set up all of your test data at once, and be guaranteed that the production database you choose will be exactly the same as your unit test.
In your case, I would go with #2 since the application is for study purposes. Yes, I know this is a long-winded answer, but as you gain experience, you will probably want to know how to "scale up."
What you can do very easily is have your own implementation of UserOptionsDao in the test package and set that one on UserServiceImpl. This new implementation can return a fixed set of data, for example...
This is a high-level idea. You probably do not want to maintain many implementations (a different one for each test, in general), so you should use a mocking framework like Mockito or EasyMock; look at the documentation for more details.
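A self-contained sketch of that fake-DAO idea follows. User, UserOptionsDao, and the field names are simplified stand-ins for the classes in the question, not their real definitions:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the question's domain class.
class User {
    final int id;
    String email;
    User(int id, String email) { this.id = id; this.email = email; }
}

// Simplified stand-in for the question's DAO interface.
interface UserOptionsDao {
    User getUser(int id);
    User changeUserEmail(int id, String email);
}

// The fake keeps everything in memory, so the service logic (id and
// email validation) can be exercised without any database at all.
class InMemoryUserOptionsDao implements UserOptionsDao {
    private final Map<Integer, User> users = new HashMap<>();

    void add(User u) { users.put(u.id, u); }

    @Override public User getUser(int id) {
        return users.get(id);
    }

    @Override public User changeUserEmail(int id, String email) {
        User u = users.get(id);
        u.email = email;
        return u;
    }
}
```

In a test you would hand an InMemoryUserOptionsDao, pre-populated with known users, to UserServiceImpl in place of the Spring-injected DAO.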
I recently stumbled upon this interesting concept that may save me much testing efforts.
What I do not understand is: how can the provider be injected at runtime?
The scenario is trivial: I am constructing a mock object at run-time with my mocking framework of choice, but I do not know the name of the generated class in advance because it is a mock (so I can't configure it in advance, nor do I want to).
Did anybody make successful use of this technique in unit tests?
Thank you.
The concept described in that article is an Ambient Context that uses a Service Locator in the background.
Because of the static property and the use of the Service Locator, this pattern is very inconvenient for unit testing. To run a test that verifies code using this singleton, you need to set up a valid Service Locator and configure it with the singleton (probably a mock instance) that you care about during testing.
Even the example given by the article already suffers from these problems, since the "Do you like singletons?" code, is hard to test:
if (DialogDisplayer.getDefault().yesOrNo(
        "Do you like singletons?"
)) {
    System.err.println("OK, thank you!");
} else {
    System.err.println(
        "Visit http://singletons.apidesign.org to"
        + " change your mind!"
    );
}
A better alternative is to use constructor injection to inject that singleton (please excuse my French, but I'm not a native Java speaker):
public class AskTheUserController {

    private DialogDisplayer dialogDisplayer;
    private MessageDisplayer messageDisplayer;

    public AskTheUserController(DialogDisplayer dialogDisplayer,
            MessageDisplayer messageDisplayer) {
        this.dialogDisplayer = dialogDisplayer;
        this.messageDisplayer = messageDisplayer;
    }

    public void AskTheUser() {
        if (this.dialogDisplayer.yesOrNo("Do you like singletons?")) {
            this.messageDisplayer.display("OK, thank you!");
        } else {
            this.messageDisplayer.display(
                "Visit http://singletons.apidesign.org to"
                + " change your mind!");
        }
    }
}
There was another 'hidden' dependency in that code: System.err.println. It got abstracted using a MessageDisplayer interface. This code has a few clear advantages:
By injecting both dependencies, the consumer doesn't even need to know that those dependencies are singletons.
The code clearly communicates the dependencies it takes.
The code can easily be tested using mock objects.
The test code doesn't need to configure a service locator.
Your tests might look like this:
@Test
public void AskTheUser_WhenUserSaysYes_WeThankHim() {
    // Arrange
    boolean answer = true;
    MockMessageDisplayer message = new MockMessageDisplayer();
    MockDialogDisplayer dialog = new MockDialogDisplayer(answer);
    AskTheUserController controller =
        new AskTheUserController(dialog, message);

    // Act
    controller.AskTheUser();

    // Assert
    assertEquals("OK, thank you!", message.displayedMessage);
}

@Test
public void AskTheUser_WhenUserSaysNo_WeLetHimChangeHisMind() {
    // Arrange
    boolean answer = false;
    MockMessageDisplayer message = new MockMessageDisplayer();
    MockDialogDisplayer dialog = new MockDialogDisplayer(answer);
    AskTheUserController controller =
        new AskTheUserController(dialog, message);

    // Act
    controller.AskTheUser();

    // Assert
    assertTrue(message.displayedMessage.contains("change your mind"));
}
Your test code will never be as intent-revealing as the code above when you're using the 'injectable singleton' pattern shown in the article.
There is nothing wrong with singletons, which are useful and necessary concepts in any software. The problem is that you shouldn't implement them with static fields and methods.
I use Guice to inject my singletons and I haven't had to use static in my code base and tests in a very long time.
Here are a couple of links you might find useful that explain how to achieve testable singletons with Guice:
Rehabilitating the Singleton pattern.
Guice and TestNG
Thank you all for your help. A number of you posted (as I should have expected) answers indicating my whole approach was wrong, or that low-level code should never have to know whether or not it is running in a container. I would tend to agree. However, I'm dealing with a complex legacy application and do not have the option of doing a major refactoring for the current problem.
Let me step back and ask the question the motivated my original question.
I have a legacy application running under JBoss, and have made some modifications to lower-level code. I have created a unit test for my modification. In order to run the test, I need to connect to a database.
The legacy code gets the data source this way:
(jndiName is a defined string)
Context ctx = new InitialContext();
DataSource dataSource = (DataSource) ctx.lookup(jndiName);
My problem is that when I run this code under unit test, the Context has no data sources defined. My solution to this was to try to see if I'm running under the application server and, if not, create the test DataSource and return it. If I am running under the app server, then I use the code above.
So, my real question is: What is the correct way to do this? Is there some approved way the unit test can set up the context to return the appropriate data source so that the code under test doesn't need to be aware of where it's running?
For Context: MY ORIGINAL QUESTION:
I have some Java code that needs to know whether or not it is running under JBoss. Is there a canonical way for code to tell whether it is running in a container?
My first approach was developed through experimentation and consists of getting the initial context and testing that it can look up certain values.
private boolean isRunningUnderJBoss(Context ctx) {
    boolean runningUnderJBoss = false;
    try {
        // The following throws a naming exception when not running under JBoss.
        ctx.getNameInNamespace();
        // The URL packages must contain the string "jboss".
        String urlPackages = (String) ctx.lookup("java.naming.factory.url.pkgs");
        if ((urlPackages != null) && (urlPackages.toUpperCase().contains("JBOSS"))) {
            runningUnderJBoss = true;
        }
    } catch (Exception e) {
        // If we get here, we are not under JBoss.
        runningUnderJBoss = false;
    }
    return runningUnderJBoss;
}
Context ctx = new InitialContext();
if (isRunningUnderJBoss(ctx)) {
    .........
Now, this seems to work, but it feels like a hack. What is the "correct" way to do this? Ideally, I'd like a way that would work with a variety of application servers, not just JBoss.
The whole concept is back to front. Lower level code should not be doing this sort of testing. If you need a different implementation pass it down at a relevant point.
Some combination of Dependency Injection (whether through Spring, config files, or program arguments) and the Factory Pattern would usually work best.
As an example I pass an argument to my Ant scripts that setup config files depending if the ear or war is going into a development, testing, or production environment.
The whole approach feels wrong headed to me. If your app needs to know which container it's running in you're doing something wrong.
When I use Spring I can move from Tomcat to WebLogic and back without changing anything. I'm sure that with proper configuration I could do the same trick with JBOSS as well. That's the goal I'd shoot for.
Perhaps something like this (ugly, but it may work):

private boolean isRunningOn(String thatServerName) {
    String uniqueClassName = getSpecialClassNameFor(thatServerName);
    try {
        Class.forName(uniqueClassName);
    } catch (ClassNotFoundException cnfe) {
        return false;
    }
    return true;
}
The getSpecialClassNameFor method would return a class name that is unique to each application server (and may return new class names when more app servers are added).
Then you use it like:
if (isRunningOn("JBoss")) {
    createJBossStrategy....etc
}
Context ctx = new InitialContext();
DataSource dataSource = (DataSource) ctx.lookup(jndiName);
Who constructs the InitialContext? Its construction must be outside the code that you are trying to test, or otherwise you won't be able to mock the context.
Since you said that you are working on a legacy application, first refactor the code so that you can easily dependency inject the context or data source to the class. Then you can more easily write tests for that class.
You can transition the legacy code by having two constructors, as in the below code, until you have refactored the code that constructs the class. This way you can more easily test Foo and you can keep the code that uses Foo unchanged. Then you can slowly refactor the code, so that the old constructor is completely removed and all dependencies are dependency injected.
public class Foo {

    private final DataSource dataSource;

    public Foo() { // production code calls this - no changes needed to callers
        Context ctx = new InitialContext();
        this.dataSource = (DataSource) ctx.lookup(jndiName);
    }

    public Foo(DataSource dataSource) { // test code calls this
        this.dataSource = dataSource;
    }

    // methods that use dataSource
}
But before you start doing that refactoring, you should have some integration tests to cover your back. Otherwise you can't know whether even the simple refactorings, such as moving the DataSource lookup to the constructor, break something. Then when the code gets better, more testable, you can write unit tests. (By definition, if a test touches the file system, network or database, it is not a unit test - it is an integration test.)
The benefit of unit tests is that they run fast - hundreds or thousands per second - and are very focused, testing just one behaviour at a time. That makes it possible to run them often (if you hesitate to run all unit tests after changing one line, they are too slow), so that you get quick feedback. And because they are very focused, the name of the failing test alone will tell you exactly where in the production code the bug is.
The benefit of integration tests is that they make sure that all parts are plugged together correctly. That is also important, but you can not run them very often because things like touching the database make them very slow. But you should still run them at least once a day on your continuous integration server.
There are a couple of ways to tackle this problem. One is to pass a Context object to the class when it is under unit test. If you can't change the method signature, refactor the creation of the inital context to a protected method and test a subclass that returns the mocked context object by overriding the method. That can at least put the class under test so you can refactor to better alternatives from there.
The next option is to make database connections a factory that can tell if it is in a container or not, and do the appropriate thing in each case.
One thing to think about is - once you have this database connection out of the container, what are you going to do with it? It is easier, but it isn't quite a unit test if you have to carry the whole data access layer.
For further help in this direction of moving legacy code under unit test, I suggest you look at Michael Feathers' Working Effectively with Legacy Code.
A clean way to do this would be to have lifecycle listeners configured in web.xml. These can set global flags if you want. For example, you could define a ServletContextListener in your web.xml and in the contextInitialized method, set a global flag that you're running inside a container. If the global flag is not set, then you are not running inside a container.
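A minimal sketch of that flag idea follows. ContainerFlag is a hypothetical holder class; the actual ServletContextListener registered in web.xml (not shown here, since it needs the servlet API on the classpath) would call markInsideContainer() from its contextInitialized method:

```java
// Global flag set by a container lifecycle listener. Outside a container
// the listener never runs, so the flag stays false.
class ContainerFlag {
    private static volatile boolean insideContainer = false;

    // A web.xml-registered ServletContextListener would call this from
    // contextInitialized(ServletContextEvent):
    public static void markInsideContainer() {
        insideContainer = true;
    }

    public static boolean isInsideContainer() {
        return insideContainer;
    }
}
```

Code that needs a test DataSource can then branch on `ContainerFlag.isInsideContainer()`, though as other answers note, injecting the DataSource is usually the cleaner fix.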