I've read through the definitions of transactions on this site, as well as some other external sources. But I'm having a hard time grappling with the specific notion of a transaction when writing code.
I have a BuyService class that is declared transactional; its only method is buyWidget(String widgetId). This method calls the ExampleService class, which has a deleteWidget method, and the InvoiceService class, which has a writeInvoice method. Here is the code:
BuyService class:
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;
@Transactional
public class BuyService implements BuyServiceInterface
{
    private ExampleServiceInterface exampleService;
    private InvoiceServiceInterface invoiceService;

    public void setExampleService(ExampleServiceInterface exampleService)
    {
        this.exampleService = exampleService;
    }

    public void setInvoiceService(InvoiceServiceInterface invoiceService)
    {
        this.invoiceService = invoiceService;
    }

    @Override
    @Transactional(propagation=Propagation.REQUIRED)
    public void buyWidget(String widgetId)
    {
        try
        {
            Widget purchasedWidget = this.exampleService.getWidgetById(widgetId);
            this.exampleService.deleteWidget(purchasedWidget);
            this.invoiceService.writeInvoice(purchasedWidget);
        }
        catch (WidgetNotFoundException e)
        {
            System.out.println("Widget with widgetId " + widgetId + " not found.");
        }
    }
}
I am pretty sure that the buyWidget method constitutes a transaction. It requires the deletion of a widget in a database (in exampleService) and the insertion of data in the purchase database (in invoiceService). But I am confused about terminology after this point. Are the methods deleteWidget and writeInvoice themselves transactions as well?
ExampleService class:
public class ExampleService implements ExampleServiceInterface
{
    private ExampleServiceDaoInterface dao;

    public void setExampleServiceDao(ExampleServiceDaoInterface dao)
    {
        this.dao = dao;
    }

    @Override
    public void deleteWidget(Widget oldWidget)
        throws WidgetNotFoundException
    {
        this.dao.delete(oldWidget);
    }

    @Override
    public Widget getWidgetById(String widgetId)
    {
        return this.dao.getById(widgetId);
    }
}
InvoiceService class:
public class InvoiceService implements InvoiceServiceInterface
{
    private InvoiceServiceDaoInterface invoiceServiceDao;

    public void setInvoiceServiceDao(InvoiceServiceDaoInterface invoiceServiceDao)
    {
        this.invoiceServiceDao = invoiceServiceDao;
    }

    @Override
    public void writeInvoice(Widget purchasedWidget)
    {
        Date purchaseDate = new Date(new java.util.Date().getTime());
        String isbn = purchasedWidget.getIsbn();
        Purchases newPurchase = new Purchases(purchaseDate, isbn);
        this.invoiceServiceDao.savePurchase(newPurchase);
    }
}
Are the two methods called by buyWidget transactions as well? That is, are they transactions even though neither of them is declared as such?
What are some potential pitfalls of not declaring the two child methods as transactional, given that they already appear to be part of a transaction?
Are the methods deleteWidget and writeInvoice themselves transactions as well?
They will take part in the buyWidget transaction, but they are not transactions by themselves.
Are the two methods called by buyWidget transactions as well?
The transaction is started before entering the buyWidget method and committed or rolled back when the method completes. The two methods take part in the buyWidget transaction, but are not themselves transactions.
Are the two methods called by buyWidget transactions as well?
None of the methods are transactions. The annotation @Transactional(propagation=Propagation.REQUIRED) means "Support a current transaction, create a new one if none exists." This transaction will include anything called from the buyWidget() method. Basically, when the method is entered a transaction is started, and when it is exited that transaction is committed (if everything works out) or rolled back (if an exception is thrown, by default an unchecked one; if there is a problem on the DB side; or if the transaction is rolled back programmatically).
That is, are they transactions even though neither of them is declared as such?
As long as those methods work against a transaction-aware resource (for example the same Spring-managed DataSource, or a JTA-managed one), they will participate in the existing transaction.
What are some potential pitfalls of not declaring the two child methods as transactional, given that they already appear to be part of a transaction?
If those methods are called directly, they will not be part of a transaction. This could result in an inconsistent state of the DB if those methods issue multiple SQL statements (it doesn't look like they do here, but that cannot be ruled out definitively just by looking at the code).
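One way to guard against such direct calls is to also annotate the child service methods. This is just a sketch reusing the classes from the question; Propagation.MANDATORY is only one possible choice (Propagation.REQUIRED would instead make a direct call open a transaction of its own):
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

public class ExampleService implements ExampleServiceInterface
{
    private ExampleServiceDaoInterface dao;

    public void setExampleServiceDao(ExampleServiceDaoInterface dao)
    {
        this.dao = dao;
    }

    // Joins the transaction started by buyWidget() when called from BuyService,
    // and fails fast (instead of silently running without a transaction) if called directly.
    @Override
    @Transactional(propagation = Propagation.MANDATORY)
    public void deleteWidget(Widget oldWidget)
        throws WidgetNotFoundException
    {
        this.dao.delete(oldWidget);
    }
}
The same annotation could go on InvoiceService.writeInvoice(). Whether MANDATORY or REQUIRED is the better fit depends on whether calling those methods outside buyWidget() is ever a legitimate use case.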
Related
I have 3 methods defined the following way. methodX and methodY are defined in different classes; methodY and methodZAsync are defined in the same class.
@Transactional(propagation = Propagation.REQUIRED)
public void methodX() {
    .....
    methodY();
    methodZAsync();
}

@Transactional(propagation = Propagation.REQUIRED)
public void methodY() {
    .....
    someDatabaseOperations();
    .....
}

@Async
public void methodZAsync() {
    .....
    pollingBasedOnDataOperationsOfMethodY();
    .....
}
The problem is that methodZAsync() requires methodY()'s DB operations to be committed before it can start its work, and this fails because methodZAsync() runs in a different thread.
One option is to make methodY() use @Transactional(propagation = Propagation.REQUIRES_NEW). But since methodY() is used in multiple places with different use cases, I'm not allowed to do that.
I've checked this question, but TransactionSynchronization is an interface and I'm not sure what to do with the rest of the unimplemented methods.
So I thought that instead of changing methodY(), if I can somehow make methodX() tell methodY() to use a new transaction (naive way: close the currently running transaction), it would fix things for me.
Is this doable from methodX() without having to modify methodY()?
You can create a new wrapper method around methodY() that creates a new transaction; then you can call this new method from methodX() without impacting any other use cases. Put the wrapper in the class that contains methodY() (or in a bean of its own) so that the call from methodX() still goes through the Spring proxy and the REQUIRES_NEW annotation actually takes effect.
@Transactional(propagation = Propagation.REQUIRED)
public void methodX() {
    .....
    methodYInNewTxn();
    methodZAsync();
}

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void methodYInNewTxn() {
    methodY();
}

@Transactional(propagation = Propagation.REQUIRED)
public void methodY() {
    .....
    someDatabaseOperations();
    .....
}
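If REQUIRES_NEW is ruled out entirely, the TransactionSynchronization approach mentioned in the question is also workable without touching methodY(): register an afterCommit callback from methodX(). Since Spring 5.3 the interface's other methods have default implementations, so only afterCommit() needs to be written (on older versions, TransactionSynchronizationAdapter serves the same purpose). A rough sketch, reusing the method names from the question:
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.support.TransactionSynchronization;
import org.springframework.transaction.support.TransactionSynchronizationManager;

@Transactional(propagation = Propagation.REQUIRED)
public void methodX() {
    methodY();

    // Defer the async polling until the surrounding transaction has committed,
    // so that methodZAsync() can see methodY()'s database changes.
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
        @Override
        public void afterCommit() {
            methodZAsync();
        }
    });
}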
Spring also offers @Transactional(propagation = Propagation.REQUIRES_NEW) for exactly this kind of case; have a look at that. I hope this helps you find the right answer.
I've recently had to implement a cache invalidation system, and ended up hesitating between several ways of doing it.
I'd like to know what the best practice is in my case. I have a classic Java back-end, with entities, services and repositories.
Let's say I have a Person object, with the usual setters and getters, persisted in a database.
public class Person {
    private Long id;
    private String firstName;
    private String lastName;
    ...
}
A PersonRepository that extends JpaRepository<Person, Long>.
public interface PersonRepository extends JpaRepository<Person, Long> {
    // save(), findById() and delete() are inherited from JpaRepository
}
I have a PersonService, with the usual save(), find(), delete() methods and other more functional methods.
public class PersonService {
    public Person save(Person person) {
        doSomeValidation(person);
        return personRepository.save(person);
    }
    ...
}
Now I also have some jobs that run periodically and manipulate the Person objects. One of them runs every second and uses a cache of Person objects that needs to be rebuilt only if the firstName attribute of a Person has been modified elsewhere in the application.
public class EverySecondPersonJob {
    private List<Person> cache;
    private boolean cacheValid;

    public void invalidateCache() {
        cacheValid = false;
    }

    public void execute() { // runs every second
        if (!cacheValid) {
            cache = buildCache();
            cacheValid = true;
        }
        doStuff(cache);
    }
}
There are lots of places in the code that manipulate Person objects and persist them; some may change the firstName attribute, which requires invalidating the cache, and some change other things, which does not. For example:
public class ServiceA {
    public void doStuffA(Person person) {
        doStuff();
        person.setFirstName("aaa");
        personRepository.save(person);
    }

    public void doStuffB(Person person) {
        doStuff();
        person.setLastName("aaa");
        personService.save(person);
    }
}
What is the best way of invalidating the cache?
First idea:
Create a PersonService.saveAndInvalidateCache() method, then check every method that calls personService.save() to see whether it modifies an attribute the cache depends on, and if so, make it call PersonService.saveAndInvalidateCache() instead:
public class PersonService {
    public Person save(Person person) {
        doSomeValidation(person);
        return personRepository.save(person);
    }

    public Person saveAndInvalidateCache(Person person) {
        doSomeValidation(person);
        Person saved = personRepository.save(person);
        everySecondPersonJob.invalidateCache();
        return saved;
    }
    ...
}
public class ServiceA {
    public void doStuffA(Person person) {
        doStuff();
        person.setFirstName("aaa");
        personService.saveAndInvalidateCache(person);
    }

    public void doStuffB(Person person) {
        doStuff();
        person.setLastName("aaa");
        personService.save(person);
    }
}
It requires lots of modifications and becomes error prone whenever a doStuffX() method is modified or added: every doStuffX() has to know whether it must invalidate the cache of an entirely unrelated job.
Second idea:
Modify setFirstName() to track the state of the Person object, and make PersonService.save() handle the cache invalidation:
public class Person {
    private Long id;
    private String firstName;
    private String lastName;
    private boolean mustInvalidateCache;

    public void setFirstName(String firstName) {
        this.firstName = firstName;
        this.mustInvalidateCache = true;
    }
    ...
}
public class PersonService {
    public Person save(Person person) {
        doSomeValidation(person);
        Person saved = personRepository.save(person);
        if (person.isMustInvalidateCache())
            everySecondPersonJob.invalidateCache();
        return saved;
    }
    ...
}
That solution is less error prone, since the doStuffX() methods no longer need to know whether they must invalidate the cache, but it makes the setter do more than just set the attribute, which seems to be a big no-no.
Which solution is the best practice and why?
Thanks in advance.
Clarification: if the cache is invalid, my every-second job calls a method that retrieves the Person objects from the database and builds a cache of other objects based on their properties (here, firstName); it does not modify the Person objects.
The job then uses that cache of other objects for its work and doesn't persist anything in the database either, so there is no potential consistency issue.
1) You don't
In the usage scenario you described, the best practice is not to do any hand-rolled caching but to use the cache inside the JPA implementation. Many JPA implementations provide one (e.g. Hibernate, EclipseLink, DataNucleus, Apache OpenJPA).
Now I also have some jobs that run periodically and manipulate the Person objects
You would never manipulate a cached object. To manipulate it, you need a session/transaction context, and the JPA implementation makes sure that you get the current object.
If you do "invalidation" as you described, you lose the transactional properties and get inconsistencies. What happens if a transaction fails after you have already updated the cache with the new value? And if you only update the cache after the transaction has gone through, concurrent jobs read the old value in the meantime.
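For illustration, with Hibernate as the provider this mostly comes down to marking the entity cacheable and enabling the cache in configuration; the concrete region factory and cache provider depend on your Hibernate version, so treat the property value marked below as a placeholder:
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Entity marked for the second-level cache; Hibernate keeps the cached state
// consistent with the ongoing transactions instead of a hand-rolled invalidation flag.
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    private String firstName;
    private String lastName;

    // getters and setters omitted
}

// persistence.xml (or application.properties) then needs something along these lines:
//   hibernate.cache.use_second_level_cache = true
//   hibernate.cache.region.factory_class   = <region factory of your cache provider>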
2) Different Usage Scenario with an Eventually Consistent View
You could do caching "on top" of your data storage layer that provides an eventually consistent view. But you cannot write data back into the same object.
JPA always updates (and caches) the complete object.
Maybe you can store the data that your "doStuff" code derives in a separate entity?
If that is a possibility, then you have several options. I would "wire in" the cache invalidation via JPA entity listeners (lifecycle callbacks) or via the "Change Data Capture" capabilities of the database. Entity listeners are similar to your second idea, except that not all code has to go through your PersonService. If you run the listener inside the application, your application cannot have multiple instances, so I would prefer getting change events from the database. You should also reread everything from time to time in case you miss an event.
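A rough sketch of such an entity listener; the markCacheDirty() hook is hypothetical (plain JPA listeners are not necessarily container-managed beans, hence the static call):
import javax.persistence.PostPersist;
import javax.persistence.PostRemove;
import javax.persistence.PostUpdate;

// Registered on the entity with @EntityListeners(PersonCacheListener.class).
public class PersonCacheListener {

    @PostPersist
    @PostUpdate
    @PostRemove
    public void personChanged(Person person) {
        // Runs in the same JVM only; for multiple application instances,
        // database change events are the more robust option (see above).
        EverySecondPersonJob.markCacheDirty(); // hypothetical static hook
    }
}
Note that the listener still fires for every change to a Person, so if only firstName matters you would have to compare old and new state yourself, or fall back to the database-side change capture mentioned above.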
I have 2 methods in my service
public void updateAll() {
    long[] ids = new long[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    for (long id : ids) {
        updateId(id);
    }
}

public void updateId(long id) {
    repository.update(id);
}
Let's assume that the 5th update throws an exception; I would like the first 4 operations to be committed anyway.
I'm using the @Transactional annotation, but if I put the annotation on both methods it doesn't work.
Do I need another parameter? Maybe propagation?
Could you show me how to set up these methods?
Thank you!!
Just have:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void updateId(long id) {
}
But, the important bit: call that method from another class.
i.e. move your loop out of this class.
The transactional annotations only kick in when a public method is called from outside the class. Within the same class, calling one transactional method from another will still only use the transaction of the first method.
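A minimal sketch of that layout, with made-up class and repository names; the point is only that the loop and updateId() live in different beans, so each call crosses the Spring proxy:
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

// In practice these are two separate files/beans.
@Service
public class BatchUpdater {

    private final SingleUpdateService singleUpdateService;

    public BatchUpdater(SingleUpdateService singleUpdateService) {
        this.singleUpdateService = singleUpdateService;
    }

    // No transaction here: each updateId() call gets its own transaction,
    // so ids 1-4 stay committed even if id 5 throws. Wrap the call in a
    // try/catch if you want to continue with the remaining ids instead.
    public void updateAll() {
        long[] ids = new long[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        for (long id : ids) {
            singleUpdateService.updateId(id);
        }
    }
}

@Service
public class SingleUpdateService {

    private final MyRepository repository; // hypothetical repository type

    public SingleUpdateService(MyRepository repository) {
        this.repository = repository;
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void updateId(long id) {
        repository.update(id);
    }
}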
You need a separate @Transactional on updateId with REQUIRES_NEW.
I have three different classes:
Managed bean (singleton scope)
Managed bean (session scope)
Spring @Controller
I read a few posts here about synchronization, but I still don't understand how it should be done and how it works.
Short examples:
1) Managed bean (singleton scope).
Here all class fields should be the same for all users. All users work with one instance of this object, or with copies of it(?).
public class CategoryService implements Serializable {
    private CategoryDao categoryDao;
    private TreeNode root; // should be the same for all users
    private List<String> categories = new ArrayList<String>(); // should be the same for all users
    private List<CategoryEntity> mainCategories = new ArrayList<CategoryEntity>(); // should be the same for all users

    public void initCategories() {
        // get categories from database
    }

    public List<CategoryEntity> getMainCategories() {
        return mainCategories;
    }
}
2) Managed bean (session scope)
In this case, every user has his own instance of the object.
When a user tries to delete a category, should he check whether another user is trying to delete the same category, i.e. do we need a synchronized block?
public class CategoryServiceSession implements Serializable {
    private CategoryDao categoryDao;
    private CategoryService categoryService;
    private TreeNode selectedNode;

    public TreeNode getSelectedNode() {
        return selectedNode;
    }

    public void setSelectedNode(TreeNode selectedNode) {
        this.selectedNode = selectedNode;
    }

    public void deleteCategory() {
        CategoryEntity current = (CategoryEntity) selectedNode.getData();
        synchronized (this) {
            // configure tree
            selectedNode = null;
            categoryDao.delete(current);
        }
        categoryService.initCategories();
    }
}
3) Spring @Controller
Here, do all users share one instance, or does each user have his own? And when some admin tries to change a parameter of some user, should he check whether another admin is trying to do the same operation?
@Controller
@RequestMapping("/rest")
public class UserResource {

    @Autowired
    private UserDao userDao;

    @RequestMapping(value = "/user/{id}", method = RequestMethod.PUT)
    public @ResponseBody UserEntity changeBannedStatus(@PathVariable Long id) {
        UserEntity user = userDao.findById(id);
        synchronized (id) {
            user.setBanned(!user.getBanned());
            userDao.update(user);
        }
        return user;
    }
}
So, how should it be done?
Sorry for my English.
In the code that you've posted, nothing in particular needs to be synchronised, and the synchronised blocks you've defined won't protect you from anything. Your controller's scope is singleton by default.
If your singletons change shared objects (mostly just their fields), then you should likely flag the whole method as synchronised.
Method-level variables and final parameters will likely never need synchronisation (at least in the programming model you seem to be using), so don't worry about them.
The session object is mostly guarded by serialisation, but you can still have data races if your user has concurrent requests; you'll have to imagine creative ways to deal with this.
You may/will have concurrency issues in the database (multiple users trying to delete or modify a database row concurrently), but this should be handled by a pessimistic or optimistic locking and transaction policy in your DAO.
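To illustrate the locking point: with JPA, optimistic locking usually just means adding a version attribute to the entity. The fields below are an assumption about your CategoryEntity, not code from your post:
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class CategoryEntity {

    @Id
    @GeneratedValue
    private Long id;

    private String name; // illustrative field

    // The provider increments this on every update and throws an
    // OptimisticLockException if another transaction modified the row
    // in the meantime, so no synchronized blocks are needed in the services.
    @Version
    private Long version;
}
Two users deleting or editing the same category then simply results in one of the transactions failing with that exception, which you can catch and report to the user.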
Luck.
Generally speaking, using synchronized statements in your code reduces scalability. If you ever try to use multiple server instances, your synchronized blocks will most likely be useless. Transaction semantics (using either optimistic or pessimistic locking) should be enough to ensure that your objects remain consistent, so in 2 and 3 you don't need them.
As for the shared variables in CategoryService, it may be possible to synchronize access to them, but your categories look like some kind of cache. If that is the case, you might try to use a cache of your persistence provider (e.g. in Hibernate a second-level cache or query cache) or of your database.
Also, calling categoryService.initCategories() in deleteCategory() probably means you are reloading the whole list, which is not a good idea, especially if you have many categories.
In my Stateful bean, I have the following lines:
@Stateful(mappedName = "ejb/RegistrationBean")
@StatefulTimeout(unit = TimeUnit.MINUTES, value = 30)
@TransactionManagement(value = TransactionManagementType.CONTAINER)
public class RegistrationStateful implements RegistrationStatefulRemote {

    @PersistenceContext
    EntityManager em;

    private List<Event> reservedSessions = new ArrayList<Event>();
    private boolean madePayment = false;
    ...

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    private void cancelReservation() {
        if (reservedSessions.size() != 0) {
            Teacher theTeacher;
            for (Event session : reservedSessions) {
                if ((theTeacher = session.teacher) == null) theTeacher = bestTeacher.teacher;
                theTeacher = em.merge(theTeacher); // The exception is thrown here
                // Make changes to theTeacher
                em.flush(); // The exception is also thrown here
            }
            // Clear the reservedSessions list
            reservedSessions.clear();
        }
    }

    @Remove
    public void endRegistration() {}

    @PreDestroy
    public void destroy() {
        // Cancel outstanding reservations if payment has not been made
        if (!madePayment) cancelReservation();
    }
}
The line em.merge(someEntity) throws a TransactionRequiredException. Could someone please tell me why this happens? I thought that with TransactionAttributeType.REQUIRED, a transaction would automatically be created if there isn't an active one. I tried to use em.joinTransaction(), but it throws the same exception. I'm a beginner at this transaction thing, so I'd be very grateful if someone could explain it to me.
UPDATE: I'd like to add a bit more information.
The Stateful bean actually also has the following method:
@TransactionAttribute(TransactionAttributeType.REQUIRED)
private void reserveSession(List<Event> sessions) throws ReservationException {
    // Reserve the sessions
    Teacher theTeacher;
    for (Event session : sessions) {
        if ((theTeacher = session.teacher) == null) theTeacher = bestTeacher.teacher;
        theTeacher = em.merge(theTeacher);
        // Make changes to theTeacher
        em.flush();
    }
}
The flow is as follows: the user tells me his free time and I reserve some seats for him. After that, I show him his reserved seats and he can choose to make the payment or cancel the reservations.
The reserveSession() method worked perfectly as expected, but cancelReservation() did not.
UPDATE 2: I fixed the problem last night by commenting out the lines "@TransactionAttribute(TransactionAttributeType.REQUIRED)", "em.merge(theTeacher)" and "em.flush()" in cancelReservation(). The result is perfect. Would it be safe to leave those lines out? I was afraid I would get a "detached entity" exception, which is why I used em.merge() in the first place.
The only thing that springs to mind (if you'll excuse the pun) is that if you're calling cancelReservation() from another method inside the bean, then I'm not sure the transaction annotation will be observed. The annotation ultimately works by summoning an interceptor, and I believe interceptors are only applied to calls between different classes (this is something I should really check).
So, if you have a non-transactional method on the bean which calls a transactional method, then a transaction won't be started when the transactional method is called.
I could be completely wrong about this. I'll go and have a bit of a read of the spec and get back to you.
EDIT: I had a read of the spec, and it reminded me what a disaster zone the J2EE specs are. Horrific. However, the section on transactions does seem to imply that the transaction attributes only apply to calls made through an EJB's business interface. I believe calls from one method to another inside a bean are not considered to go through the business interface, even when the method being called is part of that interface. Therefore, you wouldn't expect them to attract transactions.
Something you could try would be to route them through the interface; there is no nice way of doing this, but you should be able to inject a business-interface self-reference like this:
public class RegistrationStateful implements RegistrationStatefulRemote {

    @EJB
    private RegistrationStatefulRemote self;
You can then change your @PreDestroy method to look like this:
@PreDestroy
public void destroy() {
    if (!madePayment) self.cancelReservation();
}
And I believe that should count as a normal business interface call, with transactions and so on.
I have never actually tried this, so this could be complete rubbish. If you try it, let me know how it works out!