Unit Testable convention for Service "Helper Classes" in DDD pattern - java

I'm fairly new to Java and joining a project that leverages the DDD pattern (supposedly). I come from a strong Python background and am fairly anal about unit-test-driven design. That said, one of the challenges of moving to Java is the testability of service layers.
Our REST-like project stack is laid out as follows:
ServiceHandlers, which handle request/response, etc. and call specific implementations of IService (e.g. DocumentService)
DocumentService - handles auditing, permission checking, etc. with methods such as makeOwner(session, user, doc)
Currently, something like DocumentService has repository dependencies injected via Guice. In a public method like DocumentService.makeOwner, we want to ensure the session user is an admin as well as check whether the target user is already an owner (leveraging the injected repositories). This results in some duplicated code - for each of the two users involved we have to resolve the user and check membership, permissions, etc. To eliminate this redundant code, I want to make a sort of super-simple isOwner(user, doc) call that I can concisely mock out for various test scenarios (such as throwing an exception when the user can't be resolved). Here is where my googling fails me.
If I put this in the same class as DocumentService, I can't mock it while testing makeOwner in the same class (due to Mockito limitations), even though it somewhat feels like it should go here (option 1).
If I put it in a lower class like DocumentHelpers, it feels slightly funny, but I can easily mock it out. Also, DocumentHelpers needs the injected repository as well, which is fine with Guice (option 2).
I should add that there are numerous spots of this nature in our infant code base that are currently untestable, because methods non-statically call helper-like methods in the same *Service class that aren't used by the upper ServiceHandler class. However, at this stage, I can't tell if this is poor design or just fine.
So I ask more experienced Java developers:
Does introducing "Service Helpers" seem like a valid solution?
Is this counter to DDD principles?
If not, is there a more DDD-friendly naming convention for this aside from "Helpers"?
3 bits to add:
My googling has mostly come up with debates over "helpers" as static utility methods for stateless operations like date formatting, which doesn't fit my issue.
I don't want to use PowerMock since it breaks code coverage and is pretty ugly to use.
In python I'd probably call the "Service Helper" layer described above as internal_api, but that seems to have a different meaning in Java, especially since I need the classes to be public to unit test them.
Any guidance is appreciated.

That the user who initiates the action must be an admin looks like an application-level access control concern. DDD doesn't have much of an opinion about how you should do that. For testability and separation of concerns purposes, it might be a better idea to have some kind of separate non-static class rather than a method in the same service or a static helper, though.
Checking that the future owner is already an owner (if I understand correctly) might be a different animal. It could be an invariant in your domain. If so, the preferred way is to rely on an Aggregate to enforce that rule. However, it's not clear from your description whether Document is an aggregate and whether it or another aggregate contains the data needed to tell if a user is an owner.
Alternatively, you could verify the rule at the Application layer level but it means that your domain model could go inconsistent if the state change is triggered by something else than that Application layer.

As I learn more about DDD, my question doesn't seem to be all that DDD related and more just about general hierarchy of the code structure and interactions of the layers. We ended up going with a separate DocumentServiceHelpers class that could be mocked out. This contains methods like isOwner that we can mock to return true or false as needed to test our DocumentService handling more easily. Thanks to everyone for playing along.
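For anyone landing here later, here is a minimal sketch of the shape we ended up with. The class, method and exception names below (DocumentRepository, findUser, getOwnerIds, AlreadyOwnerException, Session) are placeholders, not our real code; imports are omitted for brevity (Guice, JUnit 4, Mockito).

public class DocumentServiceHelpers {

    private final DocumentRepository documentRepository; // hypothetical repository type

    @Inject
    public DocumentServiceHelpers(DocumentRepository documentRepository) {
        this.documentRepository = documentRepository;
    }

    // Resolves the user and answers one narrow question.
    // Throws if the user cannot be resolved.
    public boolean isOwner(User user, Document doc) {
        User resolved = documentRepository.findUser(user.getId()); // hypothetical repository call
        return doc.getOwnerIds().contains(resolved.getId());       // hypothetical accessor
    }
}

// In the DocumentService test, the helper is then just another mocked dependency:

@RunWith(MockitoJUnitRunner.class)
public class DocumentServiceTest {

    @Mock
    private DocumentServiceHelpers helpers;

    @InjectMocks
    private DocumentService documentService;

    @Test(expected = AlreadyOwnerException.class) // hypothetical exception type
    public void makeOwner_failsWhenTargetIsAlreadyAnOwner() {
        Session adminSession = mock(Session.class);
        User targetUser = mock(User.class);
        Document doc = mock(Document.class);

        when(helpers.isOwner(targetUser, doc)).thenReturn(true);

        documentService.makeOwner(adminSession, targetUser, doc);
    }
}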

Why do we not mock domain objects in unit tests?

Here are two tests whose sole purpose is to confirm that when service.doSomething is called, emailService.sendEmail is called with the person's email as a parameter.
@Mock
private EmailService emailService;

@InjectMocks
private Service service;

@Captor
private ArgumentCaptor<String> stringCaptor;

@Test
public void test_that_when_doSomething_is_called_sendEmail_is_called_NO_MOCKING() {
    final String email = "billy.tyne@myspace.com";

    // There is only one way of building an Address and it requires all these fields
    final Address crowsNest = new Address("334", "Main Street", "Gloucester", "MA", "01930", "USA");
    // There is only one way of building a Phone and it requires all these fields
    final Phone phone = new Phone("1", "978-281-2965");
    // There is only one way of building a Vessel and it requires all these fields
    final Vessel andreaGail = new Vessel("Andrea Gail", "Fishing", 92000);
    // There is only one way of building a Person and it requires all these fields
    final Person captain = new Person("Billy", "Tyne", email, crowsNest, phone, andreaGail);

    service.doSomething(captain); // <-- This requires only the person's email to be initialised, it doesn't care about anything else

    verify(emailService, times(1)).sendEmail(stringCaptor.capture());
    assertThat(stringCaptor.getValue(), equalTo(email));
}

@Test
public void test_that_when_doSomething_is_called_sendEmail_is_called_WITH_MOCKING() {
    final String email = "billy.tyne@myspace.com";

    final Person captain = mock(Person.class);
    when(captain.getEmail()).thenReturn(email);

    service.doSomething(captain); // <-- This requires only the person's email to be initialised, it doesn't care about anything else

    verify(emailService, times(1)).sendEmail(stringCaptor.capture());
    assertThat(stringCaptor.getValue(), equalTo(email));
}
Why is my team telling me not to mock the domain objects that are required to run my tests but are not part of the actual test? I am told mocks are for the dependencies of the tested service only. In my opinion, the resulting test code is leaner, cleaner and easier to understand; there is nothing to distract from the purpose of the test, which is to verify that the call to emailService.sendEmail occurs. This is something I have heard and accepted as gospel for a long time, over many jobs, but I still cannot agree with it.
I think I understand your team's position.
They are probably saying that you should reserve mocks for things that have hard-to-instantiate dependencies. That includes repositories that make calls to a database, and other services that can potentially have their own rat's nest of dependencies. It doesn't include domain objects that can be instantiated (even if filling out all the constructor arguments is a pain).
If you mock the domain objects then the test doesn't give you any code coverage of them. I know I'd rather get these domain objects covered by tests of services, controllers, repositories, etc. as much as possible and minimize tests written just to exercise their getters and setters directly. That lets tests of domain objects focus on any actual business logic.
That does mean that if the domain object has an error then tests of multiple components can fail. I think that's ok. I would still have tests of the domain objects (because it's easier to test those in isolation than to make sure all paths are covered in a test of a service), but I don't want to depend entirely on the domain object tests to accurately reflect how those objects are used in the service, it seems like too much to ask.
You have a point that the mocks allow you to make the objects without filling in all their data (and I'm sure the real code can get a lot worse than what is posted). It's a trade-off, but having code coverage that includes the actual domain objects as well as the service under test seems like a bigger win to me.
It seems to me like your team has chosen to err on the side of pragmatism vs purity. If everybody else has arrived at this consensus you need to respect that. Some things are worth making waves over. This isn't one of them.
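If the main objection is that building a real Person is painful, a test-data builder is a common middle ground between the two examples. Here is a rough sketch using the types from the question; the builder class and its default values are mine, not part of the original code:

public class PersonTestBuilder {

    // Sensible defaults so a test only has to mention what it actually cares about
    private String firstName = "Billy";
    private String lastName = "Tyne";
    private String email = "billy.tyne@myspace.com";
    private Address address = new Address("334", "Main Street", "Gloucester", "MA", "01930", "USA");
    private Phone phone = new Phone("1", "978-281-2965");
    private Vessel vessel = new Vessel("Andrea Gail", "Fishing", 92000);

    public PersonTestBuilder withEmail(String email) {
        this.email = email;
        return this;
    }

    public Person build() {
        return new Person(firstName, lastName, email, address, phone, vessel);
    }
}

The first test then shrinks to service.doSomething(new PersonTestBuilder().withEmail(email).build()) while still exercising the real domain object.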
It is a tradeoff, and you have designed your example nicely to be 'on the edge'. Generally, mocking should be done for a reason. Good reasons are:
You cannot easily make the depended-on component (DOC) behave as intended for your tests.
Calling the DOC causes non-deterministic behaviour (date/time, randomness, network connections).
The test setup is overly complex and/or maintenance-intensive (like a need for external files) (* see below).
The original DOC brings portability problems for your test code.
Using the original DOC causes unacceptably long build/execution times.
The DOC has stability (maturity) issues that make the tests unreliable, or, worse, the DOC is not even available yet.
For example, you (typically) don't mock standard library math functions like sin or cos, because they don't have any of the abovementioned problems.
Why is it recommendable to avoid mocking where unnecessary?
For one thing, mocking increases test complexity.
Secondly, mocking makes your tests dependent on the inner workings of your code, namely, on how the code interacts with the DOCs (like, in your case, that the captain's email is obtained using getEmail, although possibly another way might exist to get that information).
And, as Nathan mentioned, it may be seen as a plus that - without mocking - DOCs are tested for free - although I would be careful here: There is a risk that your tests lose focus if you get tempted to also test the DOCs. The DOCs should have tests of their own.
Why is your scenario 'on the edge'?
One of the abovementioned good reasons for mocking is marked with (*): "The test setup is overly complex ...", and your example is constructed to have a test setup that is a bit complex. Complexity of the test setup is obviously not a hard criterion and developers will simply have to make a choice. If you want to look at it this way, you could say that either way has some risks when it comes to future maintenance scenarios.
Summarized, I would say that neither position (generally to mock or generally not to mock) is right. Instead, developers should understand the decision criteria and then apply them to the specific situation. And, when the scenario is in the grey zone such that the criteria don't lead to a clear decision, don't fight over it.
There are two mistakes here.
First, testing that when a service method is called, it delegates to another method. That is a bad specification. A service method should be specified in terms of the values it returns (for getters) or the values that could be subsequently got (for mutators) through that service interface. The service layer should be treated as a Facade. In general, few methods should be specified in terms of which methods they delegate to and when they delegate. The delegations are implementation details and so should not be tested.
Unfortunately, the popular mocking frameworks encourage this erroneous approach. And so does overzealous use of Behaviour Driven Development.
The second mistake is centered around the very concept of unit testing. We would like each of our unit tests to test one thing, so when there is a fault in one thing, we have one test failure, and locating the fault is easy. And we tend to think of "unit" meaning the same as "method" or "class". This leads people to think that a unit test should involve only one real class, and all other classes should be mocked.
This is impossible for all but the simplest of classes. Almost all Java code uses classes from the standard library, such as String or HashSet. Most professional Java code uses classes from various frameworks, such as Spring. Nobody seriously suggests mocking those. We accept that those classes are trustworthy, and so do not need mocking. We accept that it is OK not to mock "trustworthy" classes that the code of our unit uses. But, you say, our classes are not trustworthy, so we must mock them. Not so. You can trust those other classes, by having good unit tests for them.
But how do you avoid a tangle of interdependent classes that causes a confusing mass of test failures when only one fault is present? That would be a nightmare to debug! Use a concept from 1970s programming (called a virtual machine hierarchy, which is now a rather confusing term, given the additional meanings of virtual machine): arrange your software in layers from low level to high level, with higher layers performing operations using lower layers. Each layer provides a more expressive or advanced means of abstractly describing operations and objects. So domain objects are in a low level, and the service layer is at a higher level. When several tests fail, start debugging the lowest-level test failure(s): the fault will probably be in that layer, possibly (but probably not) in a lower layer, and not in a higher layer.
Reserve mocks only for input and output interfaces that would make the tests very expensive to run (typically, this means mocking the repository layer and the logging interface).
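As a rough sketch of that policy: only the expensive boundaries are mocked, while the domain objects from the question stay real. The RegistrationService, PersonRepository and their methods below are made up purely for illustration; imports (JUnit 4, Mockito) are omitted.

@RunWith(MockitoJUnitRunner.class)
public class RegistrationServiceTest {

    @Mock
    private PersonRepository personRepository; // expensive I/O boundary: mocked

    @Mock
    private EmailService emailService;         // outgoing interface: mocked

    @InjectMocks
    private RegistrationService service;

    @Test
    public void register_persists_the_person_and_sends_a_confirmation() {
        final Person captain = new Person("Billy", "Tyne", "billy.tyne@myspace.com",
                new Address("334", "Main Street", "Gloucester", "MA", "01930", "USA"),
                new Phone("1", "978-281-2965"),
                new Vessel("Andrea Gail", "Fishing", 92000)); // real domain object, not mocked

        service.register(captain);

        verify(personRepository).save(captain);
        verify(emailService).sendEmail("billy.tyne@myspace.com");
    }
}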
The intention of an automated test is to reveal that the intended behavior of some unit of software is no longer performing as expected (aka reveal bugs.)
The granularity/size/bounds of units under test in a given test suite is to be decided by you and your team.
Once that is decided, if something outside of that scope can be mocked without sacrificing the behavior being tested, then that means it is clearly irrelevant to the test, and it should be mocked. This will help with making your tests more:
Isolated
Fast
Readable (as you mentioned)
...and most importantly, when the test fails, it will reveal that the intended behavior of some unit of software is no longer performing as expected. Given a sufficiently small unit under test, it will be obvious where the bug has occurred and why.
If your test-without-mocks example were to fail, it could indicate an issue with Address, Phone, Vessel, or Person. This will cause wasted time tracking down exactly where the bug has occurred.
One thing I will mention is that your example with mocks is actually a bit unreadable IMO, because you are asserting that a captured String equals the stubbed email value, but it is unclear from the test alone why that value matters.

Domain Driven Design - testability and the "new" keyword

I have been trying to follow a domain driven design approach in my new project. I have always generally used Spring for dependency injection, which nicely separates my application code from the construction code; however, with DDD I always seem to have one domain object wanting to create another domain object, both of which have state and behaviour.
For example, given a media file, we want to encode it to a different format - the media asset calls on a transcode service and receives a callback:
class MediaAsset implements TranscodingResultListener {

    private NetworkLocation permanentStorage;
    private Transcoder transcoder;

    public void transcodeTo(Format format) {
        transcoder.transcode(this, format);
    }

    public void onSuccessfulTranscode(TranscodeResult result) {
        Rendition rendition = new Rendition(this, result.getPath(), result.getFormat());
        rendition.moveTo(permanentStorage);
    }
}
Which raises two problems:
If the rendition needs some dependencies (like the MediaAsset requires a "Transcoder") and I want to use something like Spring to inject them, then I have to use AOP in order for my program to run, which I don't like.
If I want a unit test for MediaAsset that tests that a new format is moved to temporary storage, then how do I do that? I cannot mock the Rendition class to verify that it had its method called... the real Rendition class will be created.
Having a factory to create this class is something that I've considered, but it is a lot of code overhead just to contain the "new" keyword, which is what causes the problems.
Is there an approach here that I am missing, or am I just doing it all wrong?
I think that the injection of a RenditionFactory is the right approach in this case. I know it requires extra work, but you also remove an SRP violation from your class. It is often tempting to construct objects inside business logic, but my experience is that injection of the object or an object factory pays off 99 times out of 100. Especially if the mentioned object is complex, and/or if it interacts with system resources.
I assume your approach for unit testing is to test the MediaAsset in isolation. Doing this, I think a factory is the common solution.
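A rough sketch of the factory variant, reusing the code from the question. The RenditionFactory interface is mine, and the path parameter is assumed to be a String, simply mirroring the constructor call above:

public interface RenditionFactory {
    Rendition create(MediaAsset asset, String path, Format format);
}

class MediaAsset implements TranscodingResultListener {

    private NetworkLocation permanentStorage;
    private Transcoder transcoder;
    private RenditionFactory renditionFactory; // injected the same way as the transcoder

    public void transcodeTo(Format format) {
        transcoder.transcode(this, format);
    }

    public void onSuccessfulTranscode(TranscodeResult result) {
        Rendition rendition = renditionFactory.create(this, result.getPath(), result.getFormat());
        rendition.moveTo(permanentStorage);
    }
}

In a unit test, the factory is mocked to return a mock Rendition, so the test can verify rendition.moveTo(permanentStorage) without the real constructor ever running.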
Another approach is to test the whole system (or almost the whole system). Let your test access the outer interface[1] (user interface, web service interface, etc) and create test doubles for all external systems that the system accesses (database, file system, external services, etc). Then let the test inject these external dependencies.
Doing this, you can let the tests be all about behaviour. The tests become decoupled from implementation details. For instance, you can use dependency injection for Rendition, or not: the tests don't care. Also, you might discover that MediaAsset and Rendition are not the correct concepts[2], and you might need to split MediaAsset in two and merge half of it with Rendition. Again, you can do it without worrying about the tests.
(Disclaimer: Testing on the outer level does not always work. Sometimes you need to test common concepts, which requires you to write micro tests. And then you might run into this problem again.)
[1] The best level might actually be a "domain interface", a level below the user interface where you can use the domain language instead of strings and integers, and where you can talk domain actions instead of button clicks and focus events.
[2] Perhaps this is actually your problem: Are MediaAsset and Rendition the correct concepts? If you ask your domain expert, does he know what these are? If not, are you really doing DDD?

The missing "framework level" access modifier

Here's the scenario. As a creator of publicly licensed, open source APIs, my group has created a Java-based web user interface framework (so what else is new?). To keep things nice and organized as one should in Java, we have used packages with the naming convention org.mygroup.myframework.x, with the x being things like components, validators, converters, utilities, and so on (again, what else is new?).
Now, somewhere in class org.mygroup.myframework.foo.Bar is a method void doStuff() that I need to perform logic specific to my framework, and I need to be able to call it from a few other places in my framework, for example org.mygroup.myframework.far.Boo. Given that Boo is neither a subclass of Bar nor in the exact same package, the method doStuff() must be declared public to be callable by Boo.
However, my framework exists as a tool to allow other developers to create simpler, more elegant R.I.A.s for their clients. But if com.yourcompany.yourapplication.YourComponent calls doStuff(), it could have unexpected and undesirable consequences. I would prefer that this never be allowed to happen. Note that Bar contains other methods that are genuinely public.
In an ivory tower world, we would re-write the Java language and insert a tokenized analogue to default access, that would allow any class in a package structure of our choice to access my method, maybe looking similar to:
[org.mygroup.myframework.*] void doStuff() { .... }
where the wildcard would mean any class whose package begins with org.mygroup.myframework can call, but no one else.
Given that this world does not exist, what other good options might we have?
Note that this is motivated by a real-life scenario; names have been changed to protect the guilty. There exists a real framework where peppered throughout its Javadoc one will find public methods commented as "THIS METHOD IS INTERNAL TO MYFRAMEWORK AND NOT PART OF ITS PUBLIC API. DO NOT CALL!!!!!!" A little research shows these methods are called from elsewhere within the framework.
In truth, I am a developer using the framework in question. Although our application is deployed and is a success, my team experienced so many challenges that we want to convince our bosses to never use this framework again. We want to do this in a well thought out presentation of the poor design decisions made by the framework's developers, and not just as a rant. This issue would be one (of several) of our points, but we just can't put a finger on how we might have done it differently. There has already been some lively discussion here at my workplace, so I wondered what the rest of the world would think.
Update: No offense to the two answerers so far, but I think you've missed the mark, or I didn't express it well. Either way, allow me to try to illuminate things. Put as simply as I can: how should the framework's developers have refactored the following? Note this is a really rough example.
package org.mygroup.myframework.foo;

public class Bar {

    /** Adds a Bar component to application UI */
    public boolean addComponentHTML() {
        // Code that adds the HTML for a Bar component to a UI screen
        // returns true if successful
        // I need users of my framework to be able to call this method, so
        // they can actually add a Bar component to their application's UI
        return true; // placeholder so the example compiles
    }

    /** Not really public, do not call */
    public void doStuff() {
        // Code that performs internal logic to my framework
        // If other users call it, Really Bad Things could happen!
        // But I need it to be public so org.mygroup.myframework.far.Boo can call it
    }
}
Another update: So I just learned that C# has the "internal" access modifier. So perhaps a better way to have phrased this question might have been, "How to simulate/emulate internal access in Java?" Nevertheless, I am not in search of new answers. Our boss ultimately agreed with the concerns mentioned above.
You get closest to the answer when you mention the documentation problem. The real issue isn't that you can't "protect" your internal methods; rather, it is that the internal methods pollute your documentation and introduce the risk that a client module may call an internal method by mistake.
Of course, even if you did have fine-grained permissions, you still aren't going to be able to prevent a client module from calling internal methods: the JVM doesn't protect against reflection-based calls to private methods anyway.
The approach I use is to define an interface for each problematic class, and have the class implement it. The interface can be documented solely in terms of client modules, while the implementing class can provide what internal documentation you desire. You don't even have to include the implementation javadoc in your distribution bundle if you don't want to, but either way the boundary is clearly demarcated.
As long as you ensure that at runtime only one implementation is loaded per documentation interface, a modern JVM will guarantee you don't suffer any performance penalty for using it; and, you can load harness/stub versions during testing for an added bonus.
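Using the names from the update above, the split might look roughly like this; BarImpl is my name for the implementing class, not something from the question:

// Bar.java - the published, documented API that client modules code against
package org.mygroup.myframework.foo;

public interface Bar {
    /** Adds a Bar component to application UI */
    boolean addComponentHTML();
}

// BarImpl.java - framework-internal; its javadoc never has to ship to clients
package org.mygroup.myframework.foo;

public class BarImpl implements Bar {

    @Override
    public boolean addComponentHTML() {
        // Code that adds the HTML for a Bar component to a UI screen
        return true;
    }

    // Still technically public, but only framework code such as
    // org.mygroup.myframework.far.Boo ever holds a BarImpl reference.
    public void doStuff() {
        // internal framework logic
    }
}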
The only idea that I can think of to supply this missing "framework-level access modifier" is CDI and a better design.
If you have to use a method from very different classes and packages in various (but few) situations, there will certainly be a way to redesign those classes in order to make those methods "private" and inaccessible.
There is no support in the Java language for this kind of access level (you would like something like "internal" with a namespace). You can only restrict access to package level (or use the familiar public-protected-private inheritance model).
From my experience, you can use the Eclipse convention:
create a package called "internal"; the entire class hierarchy under this package (including sub-packages) is considered non-API code and can change at any time with no guarantees for your users. In that non-API code, use public methods whenever you like. Since it is only a convention and it is not enforced by the JVM or the Java compiler, you cannot prevent users from using the code, but at least you let them know that these classes were not meant to be used by third parties.
By the way, in the Eclipse platform source code, there is a complex plugin model that keeps you from using internal code of other plugins: each plugin gets a custom class loader that refuses to load classes that are meant to be "internal" to those plugins.
Interfaces and dynamic proxies are sometimes used to make sure you only expose methods that you do want to expose.
However, that comes at a fairly hefty performance cost if your methods are called very often.
Using the @Deprecated annotation might also be an option; although it won't stop external users from invoking your "framework private" methods, at least they can't say they weren't warned.
In general I don't think you should worry about your users deliberately shooting themselves in the foot too much, so long as you made it clear to them that they shouldn't use something.

How many GWT services

Starting a new GWT application and wondering if I can get some advice from someone's experience.
I have a need for a lot of server-side functionality through RPC services...but I am wondering where to draw the line.
I can make a service for every little call or I can make fewer services which handle more operations.
Let's say I have Customer, Vendor and Administration services. I could make 3 services or a service for each function in each category.
I noticed that much of the service implementation does not provide compile-time help and at times is troublesome to get going, but it provides good modularity. When I have a larger service, I don't have the modularity I described, but I don't have the service-creation issues and I reduce the entries in my web.xml file.
Is there a resource issue with using a lot of services? What is the best practice to determine what level of granularity to use?
In my opinion, you should make an RPC service for "logical" things.
In your example:
one for customers, another for vendors and a third one for admin.
That way, you keep several services grouped by meaning, and you will have only a few lines to maintain in the web.xml file (and this is good news :-)
More seriously, RPC services are usually wrappers that call database stuff, so you could even make a single "MagicBlackBoxRpc" with a single web.xml entry and thousands of operations!
But making a separate RPC for admin operations, like you suggest, seems like a good thing.
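For example, the customer area might be a single RemoteService grouping its related operations; the method and type names below are only illustrative:

@RemoteServiceRelativePath("customerService")
public interface CustomerService extends RemoteService {
    Customer getCustomer(long id);
    void saveCustomer(Customer customer);
    List<Order> getOrdersForCustomer(long customerId);
}

// GWT also requires the matching async interface on the client side:
public interface CustomerServiceAsync {
    void getCustomer(long id, AsyncCallback<Customer> callback);
    void saveCustomer(Customer customer, AsyncCallback<Void> callback);
    void getOrdersForCustomer(long customerId, AsyncCallback<List<Order>> callback);
}

Each such logical service maps to one servlet entry in web.xml, rather than one entry per operation.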
Read general advice on "how big should a class be?", which is available in any decent software engineering book.
In my opinion:
One class = One Subject (i.e. a group of functions or behaviours that are related)
A class should not deal with more than one subject. For example:
Class PersonDao -> Subject: interface between the DB and Java code.
It WILL NOT:
- cache Person instances
- update fields automatically (for example, update the field 'lastModified')
- find the database
Why?
Because for all these other things, there will be other classes doing it! Respectively:
- a cache around the PersonDao is concerned with the efficient storage of information to avoid hitting the DB more often than necessary
- the Service class which uses the DAO is responsible for modifying anything that needs to be modified automagically.
- Finding the database is the responsibility of the DataSource (usually part of a framework like Spring), and your DAO should NOT be worried about that. It's not part of its subject.
TDD is the answer
The need for this kind of separation becomes really clear when you do TDD (Test-Driven Development). Try to do TDD on bad code where a single class does all sorts of things! You can't even get started with one unit test! So this is my final hint: use TDD and that will tell you how big a class should be.
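To make the "one class = one subject" split concrete, here is a small sketch; the interface and the decorating cache class are mine, just to show each class keeping to its single subject:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Subject: moving Person data between the DB and Java code - nothing else.
public interface PersonDao {
    Person findById(long id);
    void save(Person person);
}

// Caching is a different subject, so it lives in a different class that wraps the DAO.
public class CachingPersonDao implements PersonDao {

    private final PersonDao delegate;
    private final Map<Long, Person> cache = new ConcurrentHashMap<>();

    public CachingPersonDao(PersonDao delegate) {
        this.delegate = delegate;
    }

    @Override
    public Person findById(long id) {
        return cache.computeIfAbsent(id, delegate::findById);
    }

    @Override
    public void save(Person person) {
        delegate.save(person);
        cache.put(person.getId(), person); // assumes Person exposes getId()
    }
}

Each class can then be unit tested against its own subject: the DAO against the database access, the cache against its hit/miss behaviour with the DAO mocked.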
I think the thing to optimize for is that you can accomplish a result in one round trip to the server. I have an ad-hoc collection of methods on my service object, one for each situation the client finds itself in when it has to get something done. You do not want the client to RPC to the server several times in a row while the user is sitting there waiting.
REST makes things orthogonal, but orthogonality has a cost: there is a reason that the frequently used verbs in languages are irregular. In terms of maintaining a clean orthogonal structure in your app, make sure your schema is well designed. That is, each class should have semantics orthogonal to those of the other classes. When the semantics of each RPC call can be stated cleanly in the schema, there will be no confusion as to what they mean, even if they aren't REST-fully ideal.

How to handle internal calls on Spring/EJB/Mockito... proxies?

As many of you know, when you proxy an object, like when you create a bean with transactional attributes for Spring/EJB, or even when you create a partial mock with some frameworks, the proxied object doesn't know about the proxy, so internal calls are not redirected through it and therefore not intercepted either...
That's why if you do something like that in Spring:
@Transactional
public void doSomething() {
    doSomethingInNewTransaction();
    doSomethingInNewTransaction();
    doSomethingInNewTransaction();
}

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void doSomethingInNewTransaction() {
    ...
}
When you call doSomething, you expect to have 3 new transactions in addition to the main one, but actually, due to this problem you only get one...
So I wonder how you handle this kind of problem...
I'm actually in a situation where I must handle a complex transactional system, and I don't see any better way than splitting my service into many small services, so that I'm sure to pass through all the proxies...
That bothers me a lot because all the code belongs to the same functional domain and should not be split...
I've found this related question with interesting answers:
Spring - #Transactional - What happens in background?
Rob H says that we can inject the spring proxy inside the service, and call proxy.doSomethingInNewTransaction(); instead.
It's quite easy to do and it works, but I don't really like it...
Yunfeng Hou says this:
So I write my own version of CglibSubclassingInstantiationStrategy and proxy creator so that it will use CGLIB to generate a real subclass which delegates call to its super rather than another instance, which Spring is doing now. So I can freely annotate on any methods (as long as it is not private), and from wherever I call these methods, they will be taken care of. Well, I still have price to pay: 1. I must list all annotations that I want to enable the new CGLIB sub class creation. 2. I can not annotate on a final method since I am now generating subclass, so a final method can not be intercepted.
What does he mean by "which spring is doing now"? Does this mean internal transactional calls are now intercepted?
What do you think is better?
Do you split your classes when you need some transactional granularity?
Or do you use some workaround like above? (please share it)
I'll talk about Spring and @Transactional, but the advice applies to many other frameworks as well.
This is an inherent problem with proxy-based aspects. It is discussed in the Spring documentation here:
http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/aop.html#aop-understanding-aop-proxies
There are a number of possible solutions.
Refactor your classes to avoid the self-invocation calls that bypass the proxy.
The Spring documentation describes this as "The best approach (the term best is used loosely here)".
Advantages of this approach are its simplicity and that there are no ties to any framework. However, it may not be appropriate for a very transaction-heavy code base, as you'd end up with many trivially small classes.
Internally in the class get a reference to the proxy.
This can be done by injecting the proxy or with a hard-coded AopContext.currentProxy() call (see the Spring docs above).
This method allows you to avoid splitting the classes, but in many ways negates the advantages of using the transactional annotation. My personal opinion is that this is one of those things that is a little ugly, but the ugliness is self-contained and it might be the pragmatic approach if lots of transactions are used.
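For reference, a sketch of that second variant, assuming the two methods from the question live in a class called SomeService (my placeholder name). Note that AopContext.currentProxy() only works if proxy exposure is switched on (expose-proxy="true" in the aop config, or @EnableAspectJAutoProxy(exposeProxy = true) in newer Spring versions):

@Transactional
public void doSomething() {
    // go through the proxy so the REQUIRES_NEW advice is actually applied
    SomeService self = (SomeService) AopContext.currentProxy();
    self.doSomethingInNewTransaction();
    self.doSomethingInNewTransaction();
    self.doSomethingInNewTransaction();
}

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void doSomethingInNewTransaction() {
    // ...
}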
Switch to using AspectJ
As AspectJ does not use proxies, self-invocation is not a problem.
This is a very clean method, though it comes at the expense of introducing another framework. I've worked on a large project where AspectJ was introduced for this very reason.
Don't use @Transactional at all
Refactor your code to use manual transaction demarcation - possibly using the decorator pattern.
An option - but one that requires moderate refactoring, introducing additional framework ties and increased complexity - so probably not a preferred option
My Advice
Usually splitting up the code is the best answer and can also be a good thing for separation of concerns. However, if I had a framework/application that relied heavily on nested transactions, I would consider using AspectJ to allow self-invocation.
As always when modelling and designing complex use cases - focus on understandable and maintainable design and code. If you prefer a certain pattern or design but it clashes with the underlying framework, consider if it's worth a complex workaround to shoehorn your design into the framework, or if you should compromise and conform your design to the framework where necessary. Don't fight the framework unless you absolutely have to.
My advice - if you can accomplish your goal with such an easy compromise as to split out into a few extra service classes - do it. It sounds a lot cheaper in terms of time, testing and agony than the alternative. And it sure sounds a lot easier to maintain and less of a headache for the next guy to take over.
I usually make it simple, so I split the code into two objects.
The alternative is to demarcate the new transaction yourself, if you need to keep everything in the same file, using a TransactionTemplate. A few more lines of code, but not more than defining a new bean. And it sometimes makes the point more obvious.
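A sketch of the TransactionTemplate variant, keeping everything in one class; SomeService is a placeholder name and Java 8 lambdas are used for brevity:

@Service
public class SomeService {

    private final TransactionTemplate newTxTemplate;

    public SomeService(PlatformTransactionManager txManager) {
        this.newTxTemplate = new TransactionTemplate(txManager);
        this.newTxTemplate.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
    }

    public void doSomething() {
        // each execute() call runs in its own new transaction - no proxy, no self-invocation issue
        newTxTemplate.execute(status -> { doSomethingInNewTransaction(); return null; });
        newTxTemplate.execute(status -> { doSomethingInNewTransaction(); return null; });
        newTxTemplate.execute(status -> { doSomethingInNewTransaction(); return null; });
    }

    private void doSomethingInNewTransaction() {
        // ...
    }
}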
