I have a method where I want to factor out some code into its own method.
This is what I have:
public class TD0301AssignmentForm extends Form {
public TD0301AssignmentForm(TD0301AssignmentDAO dao, STKUser authenticatedUser) {
this.dao = dao;
this.authenticatedUser = authenticatedUser;
}
public Object insert(HttpServletRequest request) {
TD0301Assignment tdas = new TD0301Assignment();
TD0301Assignment tdas_orig = null;
Date dateNow = new Date();
try {
// Get the input from the HTML form
tdas.setCalc_num(FormUtil.getFieldValue(request, FIELD_CALC_NUM));
processDate(request, tdas);
tdas.setCalc_dept(FormUtil.getFieldValue(request, FIELD_CALC_DEPT));
tdas.setYear_oi(Integer.toString(DateUtil.getIntYear(dateNow)));
processCalcSafetyRequirements(request, tdas);
...etc...
if (isSuccess()) {
// Instantiate a base work flow instance!
WorkflowInstance wfi = new WorkflowInstance();
WorkflowInstanceDAO wfiDAO = new WorkflowInstanceDAO();
wfi.setWorkflow_class_id(tdas.getCalc_level());
wfi.setStarted_by(authenticatedUser.getBadge());
wfi.setStatus("0");
wfi.setLast_date(dateNow);
// Insert the WorkFlowInstance into the database, db sets returned sequence number into the wfi object.
wfiDAO.insert(wfi, authenticatedUser);
// Insert the TD0301Assignment into the db
tdas.setWorkflow_instance_id(wfi.getWorkflow_instance_id());
}
I'd like to remove the WorkflowInstance code out into its own method (still in this Class) like this:
if (isSuccess()) {
insertWorkFlowInstance(request, tdas);
tdas.setWorkflow_instance_id(wfi.getWorkflow_instance_id());
but wfi is now marked by Eclipse as not available. Should I do something like this to fix the error so that I can still get the wfi.getWorkflow_instance_id() in the isSuccess block above? I know it removes the error, but I am trying to apply best practices.
public class TD0301AssignmentForm extends Form {
private WorkflowInstance wfi = new WorkflowInstance();
private WorkflowInstanceDAO wfiDAO = new WorkflowInstanceDAO();
Instance variables ("properties" or "fields") are not necessarily the way to go if they're not used throughout the entire class.
Variables should have the smallest scope possible--this makes code easier to reason about.
With some noise elided, and also guessing, it seems like the WorkflowInstance and WorkflowInstanceDao could be localized (names changed to match Java conventions):
public class TD0301AssignmentForm extends Form {
public Object insert(HttpServletRequest request) {
TD0301Assignment tdas = new TD0301Assignment();
Date dateNow = new Date();
try {
tdas.setCalcNum(FormUtil.getFieldValue(request, FIELD_CALC_NUM));
processDate(request, tdas);
tdas.setCalcDept(FormUtil.getFieldValue(request, FIELD_CALC_DEPT));
tdas.setYearOi(Integer.toString(DateUtil.getIntYear(dateNow)));
processCalcSafetyRequirements(request, tdas);
if (isSuccess()) {
WorkflowInstance wf = buildWorkflow(tdas);
tdas.setWorkflowInstanceId(wf.getId());
}
}
}
private WorkflowInstance buildWorkflow(TD0301Assignment tdas) {
WorkflowInstance wfi = new WorkflowInstance();
wfi.setWorkflowClassId(tdas.getCalcLevel());
wfi.setStartedBy(authenticatedUser.getBadge());
wfi.setStatus("0");
wfi.setLastDate(new Date());
WorkflowInstanceDao wfiDao = new WorkflowInstanceDao();
wfiDao.insert(wfi, authenticatedUser);
return wfi;
}
}
Whether or not this is appropriate depends on how/if the WorkflowInstance is used in the rest of the method snippet you show. The DAO is almost certainly able to be localized.
As methods become smaller and easier to think about, they become more testable.
For example, buildWorkflow is almost easy to test, except that the DAO is instantiated "manually". This means that testing the method will either (a) depend on having a working DAO layer, or (b) require a framework that can intercept the construction of the DAO (several can).
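One way to avoid that (a sketch only, assuming you are free to add a constructor parameter; the Dao name follows the renamed convention above) is to inject the DAO instead of newing it inside the method, so a test can pass in a mock:

public class TD0301AssignmentForm extends Form {

    private final WorkflowInstanceDao wfiDao; // injected, so tests can supply a mock

    public TD0301AssignmentForm(TD0301AssignmentDAO dao, STKUser authenticatedUser,
                                WorkflowInstanceDao wfiDao) {
        this.dao = dao;
        this.authenticatedUser = authenticatedUser;
        this.wfiDao = wfiDao;
    }

    private WorkflowInstance buildWorkflow(TD0301Assignment tdas) {
        WorkflowInstance wfi = new WorkflowInstance();
        wfi.setWorkflowClassId(tdas.getCalcLevel());
        wfi.setStartedBy(authenticatedUser.getBadge());
        wfi.setStatus("0");
        wfi.setLastDate(new Date());
        wfiDao.insert(wfi, authenticatedUser); // no 'new WorkflowInstanceDao()' inside the method any more
        return wfi;
    }
}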
Without seeing all your code it's not easy to see exactly what you are trying to achieve. The reason Eclipse is complaining is that it no longer has a wfi instance to work with, because you've moved the local instance into your new method; creating another wfi instance is not likely to be your answer.
To get this working, either change wfi to be a class-level field and use its id directly, or return wfi.getWorkflow_instance_id() from insertWorkFlowInstance() and then pass that value into tdas.setWorkflow_instance_id().
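For example, the second option might look like this (a sketch; the id type is assumed and should match whatever getWorkflow_instance_id() actually returns):

// Hypothetical: the helper returns the generated id, so the caller never needs wfi.
private long insertWorkFlowInstance(HttpServletRequest request, TD0301Assignment tdas) {
    WorkflowInstance wfi = new WorkflowInstance();
    WorkflowInstanceDAO wfiDAO = new WorkflowInstanceDAO();
    wfi.setWorkflow_class_id(tdas.getCalc_level());
    wfi.setStarted_by(authenticatedUser.getBadge());
    wfi.setStatus("0");
    wfi.setLast_date(new Date());
    wfiDAO.insert(wfi, authenticatedUser); // db sets the returned sequence number on wfi
    return wfi.getWorkflow_instance_id();  // long assumed here
}

// Caller:
if (isSuccess()) {
    tdas.setWorkflow_instance_id(insertWorkFlowInstance(request, tdas));
}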
Related
I have a class like below, with hundreds of methods:
public class APIMethods {
public ToView toView;
public APIMethods(ToView toView) {
this.toView = toView;
}
public static final int SUCCESS = 1;
public static final int ERROR = 0;
public void registerAnonymous(String deviceId, String installRef, final int requestCode) {
APIInterface apiService =
RetrofitClientInstance.getRetrofitInstance().create(APIInterface.class);
JsonObject obj = new JsonObject();
obj.addProperty("androidId", deviceId);
obj.addProperty("projectId", 0);
obj.addProperty("ChannelName", installRef);
Call<Response<BasicUser>> call = apiService.registerAnonymous("application/json", Utils.getFlavorId(), obj);
call.enqueue(new Callback<Response<BasicUser>>() {
@Override
public void onResponse(Call<Response<BasicUser>> call, Response<Response<BasicUser>> response) {
Response<BasicUser> mResponse;
try {
mResponse = response.body();
if (mResponse.getErrorCode() == 0)
toView.updateView(requestCode, SUCCESS, mResponse);
else
toView.updateView(requestCode, ERROR, mResponse);
} catch (Exception e) {
mResponse = new Response<>();
mResponse.setErrorCode(-1);
toView.updateView(requestCode, ERROR, mResponse);
e.printStackTrace();
}
}
@Override
public void onFailure(Call<Response<BasicUser>> call, Throwable t) {
Response<BasicUser> numberValidationResponse = new Response<BasicUser>();
numberValidationResponse.setErrorCode(-1);
toView.updateView(requestCode, ERROR, numberValidationResponse);
}
});
}
///And dozens of such method
}
So in my other classes everywhere in my application, I simply instantiate the class and call the method that I want:
APIMethods api = new APIMethods(this);
api.registerAnonymous(Utils.getAndroidId(this), BuildConfig.FLAVOR, STATE_REGISTER_ANONYMOUS);
My question is: how expensive is this object (api)? Note that in each class, only a few methods of the object are called.
The object is not expensive at all.
An object contains a pointer to the object's class, and the methods are stored with the class. Essentially, the methods are all shared. An object of a class with no methods and an object of a class with 10000 methods are the same size (assuming everything else is equal).
The situation would be different if you had 100 fields instead of 100 methods.
You may want to think about whether having hundreds of methods in a single class is a good idea. Is the code easy to understand and maintain? Is this an example of the "God object" anti-pattern? https://en.m.wikipedia.org/wiki/God_object
This seems like a classic example of the XY problem. Your actual problem is how to make the code readable, but you're actually asking about whether a class with hundreds of methods is expensive.
It being expensive is the least of your concerns - you should be more worried about maintenance. There's no reason at all that any class should ever be that large, especially if you have a lot of independent methods and each class is only calling a few of them. This will make the class very hard to understand - having them all in one place will not improve the situation.
Some of the comments have already pointed this out, but you should, at a minimum, break this up topically.
Even better, refactor this to the Strategy pattern and use a Factory to pick which one to use. That will meet your goal of ease of use while avoiding the problem of having hundreds of unrelated methods in one place.
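A rough sketch of what that could look like (all names here are made up for illustration, not taken from your code): each call family becomes its own small strategy class behind a common interface, and a factory hands out the right one.

public interface ApiAction {
    void execute(ToView toView, int requestCode);
}

public class RegisterAnonymousAction implements ApiAction {
    private final String deviceId;
    private final String installRef;

    public RegisterAnonymousAction(String deviceId, String installRef) {
        this.deviceId = deviceId;
        this.installRef = installRef;
    }

    @Override
    public void execute(ToView toView, int requestCode) {
        // the body of the old registerAnonymous(...) moves here
    }
}

public class ApiActionFactory {
    public ApiAction registerAnonymous(String deviceId, String installRef) {
        return new RegisterAnonymousAction(deviceId, installRef);
    }
    // one factory method per group of related calls
}

// Usage sketch:
// ApiAction action = new ApiActionFactory().registerAnonymous(deviceId, installRef);
// action.execute(this, STATE_REGISTER_ANONYMOUS);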
Try to define a cohesive class: keep only methods that are relevant to the class and that support its purpose.
The link below describes the importance of cohesion for a class:
https://www.decodejava.com/coupling-cohesion-java.htm
Currently, my notification request is like this:
public class EmailRequest{
public enum EmailType{
TYPE_1,
TYPE_2,
...
}
EmailType emailType;
String toAddress;
EmailRenderer renderer;
}
where EmailRenderer is an interface
public interface EmailRenderer{
EmailMessage render()
}
Now, each type of email has a separate implementation of the renderer interface and each implementation contains some rendering data that has to be provided by the client. This data can be different for each implementation.
Example:
public class Type1EmailRenderer implements EmailRenderer{
String param1;
String param2;
@Override
EmailMessage render(){
//rendering logic using the params
}
}
But, it seems redundant to me for the user to set the email type and renderer as well. Choosing the renderer should automatically get me the emailType. How should I restructure the request to be free of this redundancy? Also, can I use any design pattern for providing the renderers to my users?
I'll base my answer on a claim that,
putting aside programming-related questions, at the level of human logic, it looks to me strange that if I want to send an email I should know about renderers at all.
In my understanding If I have emails of different types (you've called them TYPE_1 and TYPE_2, let's give more "business" names for better clarity, like "dailyReport" or "advertisement", you'll see later why) I should just prepare a request with my data (param1, param2) and send it. I shouldn't care about renderers at all as long as the same email type assumes that the same type of renderer will be used.
So, let's say type "advertisement" has a mandatory parameter String topic and an optional parameter String targetAudience, and type "dailyReport" has a mandatory Integer totalUsersCount and an optional String mostActiveUserName.
In this case, I propose the somewhat hybrid approach mainly based on Builder creation pattern:
public class EmailRequestBuilder {
private String toAddress;
private EmailRequestBuilder(String to) {
this.toAddress = to;
}
public static EmailRequestBuilder newEmailRequest(String to) {
return new EmailRequestBuilder(to);
}
public AdvertisementBuilder ofAdvertisementType(String topic) {
return new AdvertisementBuilder(topic, this);
}
public DailyReportBuilder ofDailyReportType(Integer totalUsersCount) {
return new DailyReportBuilder(totalUsersCount, this);
}
// all builders in the same package, hence package private build method,
// concrete email type builders will call this method, I'll show at the end
EmailRequest build(EmailType type, EmailRenderer emailRenderer) {
return new EmailRequest(toAddress, type, emailRenderer);
}
}
public class AdvertisementBuilder {
private String topic;
private String audience; // optional parameter, set via withTargetAudience
private EmailRequestBuilder emailRequestBuilder;
// package private, so that only EmailRequestBuilder will be able to create it;
// mandatory parameters go in the constructor, plus a reference to the already gathered data
AdvertisementBuilder(String topic, EmailRequestBuilder emailRequestBuilder) {
this.topic = topic;
this.emailRequestBuilder = emailRequestBuilder;
}
// for optional parameters provide an explicit method that can be called
// but its not a mandatory call
public AdvertisementBuilder withTargetAudience(String audience) {
this.audience = audience;
return this;
}
public EmailRequest buildRequest() {
EmailRenderer renderer = new AdvertisementRenderer(topic, audience);
return emailRequestBuilder.build(EmailType.ADVERTISEMENT, renderer);
}
}
// A similar builder for DailyReport (I'll omit it, but assume there is such a class)
class DailyReportBuilder {}
Now, the good part is that you can't go wrong as a user. A typical interaction with this construction will be:
EmailRequest request = EmailRequestBuilder.newEmailRequest("john.smith@gmail.com")
.ofAdvertisementType("sample topic") // its a mandatory param, you have to supply, can't go wrong
.withTargetAudience("target audience") // non-mandatory call
.buildRequest();
Couple of notes:
Once you pick a type by calling ofDailyReportType / ofAdvertisementType, the user can't really supply parameters of a different email type, because they get "routed" to a builder that doesn't have methods for the wrong parameters. An immediate implication is that autocomplete will work in your IDE, and the people who use this method will thank you for it ;)
It's easy to add new email types this way, no existing code will change.
Maybe with this approach, an enum EmailType will be redundant. I've preserved it in my solution but probably you'll drop it if it's not required.
Since I sometimes restrict the visibility (package-private build methods, constructors, and so forth), this will be the only way to create the request, which means that no one will create "internal" objects just because it's possible to do so. At least a malicious programmer will think twice before breaking encapsulation :)
For example, you can use a "factory method":
EmailRenderer createRenderer(EmailType type) {
switch (type) {
case TYPE_1:
return new RendererType1();
case TYPE_2:
return new RendererType2();
...
}
}
Also, you can probably introduce caching of these objects so that they are not created every time, perhaps with some lazy initialization (you create the appropriate Renderer the first time it is needed and after that always return the same instance).
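A minimal sketch of that caching idea (class and method names are assumed for illustration; note that caching whole renderer instances only pays off if the renderers themselves are stateless, which is not the case for the param-carrying renderers in the question):

import java.util.EnumMap;
import java.util.Map;

public class RendererFactory {
    // one lazily created renderer per email type
    private final Map<EmailType, EmailRenderer> cache = new EnumMap<>(EmailType.class);

    public synchronized EmailRenderer rendererFor(EmailType type) {
        return cache.computeIfAbsent(type, this::createRenderer);
    }

    private EmailRenderer createRenderer(EmailType type) {
        switch (type) {
            case TYPE_1: return new RendererType1();
            case TYPE_2: return new RendererType2();
            default: throw new IllegalArgumentException("Unknown email type: " + type);
        }
    }
}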
I need to unit test a method, and I would like to mock the behavior so that I can test the necessary part of the code in the method.
For this I would like to access the object returned by a private method inside the method I am trying to test. I created a sample code to give a basic idea of what I am trying to achieve.
Main.class
class Main {
public String getUserName(String userId) {
User user = null;
user = getUser(userId);
if(user.getName().equals("Stack")) {
throw new CustomException("StackOverflow");
}
return user.getName();
}
private User getUser(String userId) {
// find the user details in database
String name = ""; // Get from db
String address = ""; // Get from db
return new User(name, address);
}
}
Test Class
@Test(expected = CustomException.class)
public void getUserName_UserId_ThrowsException() {
Main main = new Main();
// I need to access the user object returned by getUser(userId)
// and spy it, so that when user.getName() is called it returns Stack
main.getUserName("124");
}
There are only two ways to access private members:
using reflection
extend the scope
maybe waiting for Java 9 to use new scope mechanisms?
I would change the scope modifier from private to package scope. Using reflection is not stable for refactoring. It doesn't matter if you use helpers like PowerMock. They only reduce the boiler-plate code around reflection.
But the most important point is that you should NOT test too deep in whitebox tests. This can make the test setup explode. Try to slice your code into smaller pieces.
The only information the method "getUserName" needs from the User-object is the name. It will validate the name and either throw an exception or return it. So it should not be necessary to introduce a User-object in the test.
So my suggestion is to extract the code retrieving the name from the User-object into a separate method and give this method package scope. Now there is no need to mock a User-object, just the Main-object, and the method has the minimal information it needs to work properly.
class Main {
public String getUserName(String userId) {
String userName = getUserNameFromInternal(userId);
if (userName.equals("Stack")) {
throw new CustomException("StackOverflow");
}
return userName;
}
String getUserNameFromInternal(String userId) {
User user = getUser(userId);
return user.getName();
}
...
}
The test:
@Test(expected = CustomException.class)
public void getUserName_UserId_ThrowsException() {
Main main = Mockito.spy(new Main());
Mockito.doReturn("Stack").when(main).getUserNameFromInternal("124");
main.getUserName("124");
}
Your problem is that call to new within your private method.
And the answer is not to turn to PowerMock; or to change the visibility of that method.
The reasonable answer is to "extract" that dependency on "something that gives me a User object" into its own class; and provide an instance of that class to your "Main" class. Because then you are able to simply mock that "factory" object; and have it do whatever you want it to do.
Meaning: your current code is simply hard-to-test. Instead of working around the problems that are caused by this, you invest time in learning how to write easy-to-test code; for example by watching these videos as a starting point.
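A rough sketch of that extraction (the class and method names here are made up for illustration):

// Hypothetical "factory"/repository that owns the User lookup.
class UserRepository {
    User findUser(String userId) {
        String name = "";    // get from db
        String address = ""; // get from db
        return new User(name, address);
    }
}

class Main {
    private final UserRepository users;

    Main(UserRepository users) {
        this.users = users;
    }

    public String getUserName(String userId) {
        User user = users.findUser(userId);
        if (user.getName().equals("Stack")) {
            throw new CustomException("StackOverflow");
        }
        return user.getName();
    }
}

// In the test, the repository can be mocked with plain Mockito:
// UserRepository repo = Mockito.mock(UserRepository.class);
// Mockito.when(repo.findUser("124")).thenReturn(new User("Stack", ""));
// new Main(repo).getUserName("124"); // throws CustomException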
Given your latest comment: when you are dealing with legacy code, then you are really looking towards using PowerMockito. The key part to understand: you don't "mock" that private method; you rather look into mocking the call to new User() instead; as outlined here.
You can use PowerMock to mock the private method, but I don't recommend it.
If you have such a problem, it usually means that your design is bad.
Why not make the method protected?
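For instance, making getUser protected lets a test subclass replace it (a sketch only, assuming you are free to widen its visibility):

// Production class: getUser is now protected so a test subclass can override it.
class Main {
    public String getUserName(String userId) {
        User user = getUser(userId);
        if (user.getName().equals("Stack")) {
            throw new CustomException("StackOverflow");
        }
        return user.getName();
    }

    protected User getUser(String userId) {
        String name = "";    // get from db
        String address = ""; // get from db
        return new User(name, address);
    }
}

// Test: override getUser to return a canned User.
@Test(expected = CustomException.class)
public void getUserName_UserId_ThrowsException() {
    Main main = new Main() {
        @Override
        protected User getUser(String userId) {
            return new User("Stack", "some address");
        }
    };
    main.getUserName("124");
}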
OK, so I have an interesting problem. I am using java/maven/spring-boot/cassandra... and I am trying to create a dynamic instantiation of the Mapper setup they use.
I.E.
//Users.java
import com.datastax.driver.mapping.annotations.Table;
@Table(keyspace="mykeyspace", name="users")
public class Users {
@PartitionKey
public UUID id;
//...
}
Now, in order to use this I would have to explicitly say ...
Users user = (DB).mapper(Users.class);
obviously replacing (DB) with my db class.
Which is a great model, but I am running into the problem of code repetition. My Cassandra database has 2 keyspaces, both keyspaces have the exact same tables with the exact same columns in the tables, (this is not my choice, this is an absolute must have according to my company). So when I need to access one or the other based on a form submission it becomes a mess of duplicated code, example:
//myWebController.java
import ...;
@RestController
public class MyRestController {
@RequestMapping(value="/orders", method=RequestMethod.POST)
public String getOrders(...) {
if (Objects.equals(client, "first_client_name")) {
//do all the things to get first keyspace objects like....
FirstClientUsers users = (db).Mapper(FirstClientUsers.class);
//...
} else if (Objects.equals(client, "second_client_name")) {
SecondClientUsers users = (db).Mapper(SecondClientUsers.class);
//....
}
return "";
}
I have been trying to use methods like...
Class cls = Class.forName(STRING_INPUT_VARIABLE_HERE);
and that works ok for base classes but when trying to use the Accessor stuff it no longer works because Accessors have to be interfaces, so when you do Class cls, it is no longer an interface.
I am trying to find any other solution on how to make this work dynamically and not have to duplicate code for every possible client. Each client will have its own keyspace in Cassandra, with the exact same tables as all the others.
I cannot change the database model, this is a must according to the company.
With PHP this is extremely simple since it doesn't care about typecasting as much, I can easily do...
function getData($name) {
$className = $name . 'Accessor';
$class = new $className();
}
and poof I have a dynamic class, but the problem I am running into is the Type specification where I have to explicitly say...
FirstClientUsers users = new FirstClientUsers();
//or even
FirstClientUsers users = Class.forName("FirstClientUsers");
I hope this is making sense, I can't imagine that I am the first person to have this problem, but I can't find any solutions online. So I am really hoping that someone knows how I can get this accomplished without duplicating the exact same logic for every single keyspace we have. It makes the code not maintainable and unnecessarily long.
Thank you in advance for any help you can offer.
Do not specify the keyspace in your model classes, and instead, use the so-called "session per keyspace" pattern.
Your model class would look like this (note that the keyspace is left undefined):
@Table(name = "users")
public class Users {
@PartitionKey
public UUID id;
//...
}
Your initialization code would have something like this:
Map<String, Mapper<Users>> mappers = new ConcurrentHashMap<String, Mapper<Users>>();
Cluster cluster = ...;
Session firstClientSession = cluster.connect("keyspace_first_client");
Session secondClientSession = cluster.connect("keyspace_second_client");
MappingManager firstClientManager = new MappingManager(firstClientSession);
MappingManager secondClientManager = new MappingManager(secondClientSession);
mappers.put("first_client", firstClientManager.mapper(Users.class));
mappers.put("second_client", secondClientManager.mapper(Users.class));
// etc. for all clients
You would then store the mappers object and make it available through dependency injection to other components in your application.
Finally, your REST service would look like this:
import ...
#RestController
public class MyRestController {
@javax.inject.Inject
private Map<String, Mapper<Users>> mappers;
@RequestMapping(value = "/orders", method = RequestMethod.POST)
public String getOrders(...) {
Mapper<Users> usersMapper = getUsersMapperForClient(client);
// process the request with the right client's mapper
}
private Mapper<Users> getUsersMapperForClient(String client) {
if (mappers.containsKey(client))
return mappers.get(client);
throw new RuntimeException("Unknown client: " + client);
}
}
Note how the mappers object is injected.
Small nit: I would name your class User in the singular instead of Users (in the plural).
I'm unit testing a class where I need a certain amount of time to pass before I can check results. Specifically I need x minutes to pass before I can tell whether the test worked or not. I have read that in unit testing we should be testing the interface and not the implementation, so we should not be accessing private variables, but other than putting a sleep in my unit test I don't know how to test without modifying private variables.
My test is set up like this:
@Test
public void testClearSession() {
final int timeout = 1;
final String sessionId = "test";
sessionMgr.setTimeout(timeout);
try {
sessionMgr.createSession(sessionId);
} catch (Exception e) {
e.printStackTrace();
}
DBSession session = sessionMgr.getSession(sessionId);
sessionMgr.clearSessions();
assertNotNull(sessionMgr.getSession(sessionId));
Calendar accessTime = Calendar.getInstance();
accessTime.add(Calendar.MINUTE, - timeout - 1);
session.setAccessTime(accessTime.getTime()); // MODIFY PRIVATE VARIABLE VIA PROTECTED SETTER
sessionMgr.clearSessions();
assertNull(sessionMgr.getSession(sessionId));
}
Is it possible to test this other than modifying the accessTime private variable (via creating the setAccessTime setter or reflection), or inserting a sleep in the unit test?
EDIT 11-April-2012
I am specifically trying to test that my SessionManager object clears sessions after a specific period of time has passed. The database I am connecting to will drop connections after a fixed period of time. When I get close to that timeout, the SessionManager object will clear the sessions by calling a "finalise session" procedure on the database and removing the sessions from its internal list.
The SessionManager object is designed to be run in a separate thread. The code I am testing looks like this:
public synchronized void clearSessions() {
log.debug("clearSessions()");
Calendar cal = Calendar.getInstance();
cal.add(Calendar.MINUTE, - timeout);
Iterator<Entry<String, DBSession>> entries = sessionList.entrySet().iterator();
while (entries.hasNext()) {
Entry<String, DBSession> entry = entries.next();
DBSession session = entry.getValue();
if (session.getAccessTime().before(cal.getTime())) {
// close connection
try {
connMgr.closeconn(session.getConnection(), entry.getKey());
} catch (Exception e) {
e.printStackTrace();
}
entries.remove();
}
}
}
The call to connMgr (ConnectionManager object) is a bit convoluted, but I am in the process of refactoring legacy code and it is what it is at the moment. The Session object stores a connection to the database as well as some associated data.
The test could do with some refactoring to make the intent clearer. If what I comprehend is correct...
public void TestClearSessionsMaintainsSessionsUnlessLastAccessTimeIsOverThreshold() {
final int timeout = 1;
final String sessionId = "test";
sessionMgr = GetSessionManagerWithTimeout(timeout);
DBSession session = CreateSession(sessionMgr, sessionId);
sessionMgr.clearSessions();
assertNotNull(sessionMgr.getSession(sessionId));
session.setAccessTime(PastInstantThatIsOverThreshold()); // MODIFY PRIVATE VARIABLE VIA PROTECTED SETTER
sessionMgr.clearSessions();
assertNull(sessionMgr.getSession(sessionId));
}
Now to the matter of testing without having to expose private state
How is the private variable modified in real life ? Is there some other public method you could call which updates the access time?
Since the clock/time is an important concept, why not make that explicit as a role. So you could pass a Clock object to the Session, which it uses to update its internal access time. In your tests, you could pass in a MockClock, whose getCurrentTime() method would return whatever value you wish. I'm making up the mocking syntax.. so update with whatever you are using.
public void TestClearSessionsMaintainsSessionsUnlessLastAccessTimeIsOverThreshold() {
final int timeout = 1;
final String sessionId = "test";
expect(mockClock).GetCurrentTime(); willReturn(CurrentTime());
sessionMgr = GetSessionManagerWithTimeout(timeout, mockClock);
DBSession session = CreateSession(sessionMgr, sessionId);
sessionMgr.clearSessions();
assertNotNull(sessionMgr.getSession(sessionId));
expect(mockClock).GetCurrentTime(); willReturn(PastInstantThatIsOverThreshold());
session.DoSomethingThatUpdatesAccessTime();
sessionMgr.clearSessions();
assertNull(sessionMgr.getSession(sessionId));
}
It looks like the functionality being tested is "SessionManager evicts all expired sessions".
I would consider creating a test class extending DBSession.
class AlwaysExpiredDBSession extends DBSession {
....
// access time to be somewhere older 'NOW'
}
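Fleshed out a little, such a subclass might look like this (a sketch; it assumes DBSession has a usable constructor and that getAccessTime() can be overridden):

// Test-only session that always reports an access time far in the past,
// so clearSessions() treats it as expired.
class AlwaysExpiredDBSession extends DBSession {

    @Override
    public Date getAccessTime() {
        Calendar cal = Calendar.getInstance();
        cal.add(Calendar.YEAR, -1); // well beyond any reasonable timeout
        return cal.getTime();
    }
}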
EDIT: I like Gishu's answer better. He also encourages you to mock the time, but he treats it as a first class object.
What exactly is the rule you're trying to test? If I'm reading your code right, it looks like your desire is to verify that the session associated with the ID "test" expires after a given timeout, correct?
Time is a tricky thing in unit tests because it's essentially global state, so this is a better candidate for an acceptance test (like zerkms suggested).
If you still want to have a unit test for it, generally I try to abstract and/or isolate references to time, so I can mock them in my tests. One way to do this is by subclassing the class under test. This is a slight break in encapsulation, but it works cleaner than providing a protected setter method, and far better than reflection.
An example:
class MyClass {
public void doSomethingThatNeedsTime(int timeout) {
Date now = getNow();
if (new Date().getTime() > now.getTime() + timeout) {
// timed out!
}
}
Date getNow() {
return new Date();
}
}
class TestMyClass {
@Test
public void testDoSomethingThatNeedsTime() {
MyClass mc = new MyClass() {
Date getNow() {
// return a time appropriate for my test
}
};
mc.doSomethingThatNeedsTime(1);
// assert
}
}
This is a bit of a contrived example, but hopefully you get the point. By subclassing the getNow() method, my test is no longer subject to the global time. I can substitute whatever time I want.
Like I said, this breaks encapsulation a little, because the REAL getNow() method never gets tested, and it requires the test to know something about the implementation. That's why it's good to keep such a method small and focused, with no side effects. This example also assumes the class under test is not final.
Despite the drawbacks, it's cleaner (in my opinion) than providing a scoped setter for a private variable, which can actually allow a programmer to do harm. In my example, if some rogue process invokes the getNow() method, there's no real harm done.
I basically followed Gishu's suggestion https://stackoverflow.com/a/10023832/1258214, but I thought I would document the changes for the benefit of anyone else reading this (and so anyone can comment on issues with the implementation). Thank you to the comments pointing me to JodaTime and Mockito.
The relevant idea was to recognise the dependency of the code on time and to extract that out (see: https://stackoverflow.com/a/5622222/1258214). This was done by creating an interface:
import org.joda.time.DateTime;
public interface Clock {
public DateTime getCurrentDateTime() ;
}
Then creating an implementation:
import org.joda.time.DateTime;
public class JodaClock implements Clock {
@Override
public DateTime getCurrentDateTime() {
return new DateTime();
}
}
This was then passed into SessionManager's constructor:
SessionManager(ConnectionManager connMgr, SessionGenerator sessionGen,
ObjectFactory factory, Clock clock) {
I was then able to use code similar to what Gishu suggested (note the lower case 't' at the beginning of testClear... my unit tests were very successful with the upper case 'T' until I realised that the test wasn't running...):
@Test
public void testClearSessionsMaintainsSessionsUnlessLastAccessTimeIsOverThreshold() {
final String sessionId = "test";
final Clock mockClock = mock(Clock.class);
when(mockClock.getCurrentDateTime()).thenReturn(getNow());
SessionManager sessionMgr = getSessionManager(connMgr,
sessionGen, factory, mockClock);
createSession(sessionMgr, sessionId);
sessionMgr.clearSessions(defaultTimeout);
assertNotNull(sessionMgr.getSession(sessionId));
when(mockClock.getCurrentDateTime()).thenReturn(getExpired());
sessionMgr.clearSessions(defaultTimeout);
assertNull(sessionMgr.getSession(sessionId));
}
This ran great, but my removal of Session.setAccessTime() created an issue with another test, testOnlyExpiredSessionsCleared(), where I wanted one session to expire but not the other. This link https://stackoverflow.com/a/6060814/1258214 got me thinking about the design of the SessionManager.clearSessions() method, and I refactored the check for whether a session is expired out of SessionManager and into the DBSession object itself.
From:
if (session.getAccessTime().before(cal.getTime())) {
To:
if (session.isExpired(expireTime)) {
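Inside DBSession, that method might be as small as this (a sketch; the accessTime field and its JodaTime type are assumed from the surrounding code):

// DBSession decides for itself whether it has expired.
public boolean isExpired(DateTime expireTime) {
    // accessTime assumed to be stored as an org.joda.time.DateTime
    return accessTime.isBefore(expireTime);
}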
Then I inserted a mockSession object (similar to Jayan's suggestion https://stackoverflow.com/a/10023916/1258214)
@Test
public void testOnlyOldSessionsCleared() {
final String sessionId = "test";
final String sessionId2 = "test2";
ObjectFactory mockFactory = spy(factory);
SessionManager sm = factory.createSessionManager(connMgr, sessionGen,
mockFactory, clock);
// create expired session
NPIISession session = factory.createNPIISession(null, clock);
NPIISession mockSession = spy(session);
// return session expired
doReturn(true).when(mockSession).isExpired((DateTime) anyObject());
// get factory to return mockSession to sessionManager
doReturn(mockSession).when(mockFactory).createDBSession(
(Connection) anyObject(), eq(clock));
createSession(sm, sessionId);
// reset factory so return normal session
reset(mockFactory);
createSession(sm, sessionId2);
assertNotNull(sm.getSession(sessionId));
assertNotNull(sm.getSession(sessionId2));
sm.clearSessions(defaultTimeout);
assertNull(sm.getSession(sessionId));
assertNotNull(sm.getSession(sessionId2));
}
Thanks to everyone for their help with this. Please let me know if you see any issues with the changes.