I have three different classes:
Managed bean (singleton scope)
Managed bean (session scope)
Spring @Controller
I've read a few posts here about synchronization, but I still don't understand how it should be done and how it works.
Short examples:
1) Managed bean (singleton scope).
Here all class fields should be the same for all users. Do all users work with one instance of this object, or does each user get their own copy?
public class CategoryService implements Serializable {
    private CategoryDao categoryDao;
    private TreeNode root; // should be the same for all users
    private List<String> categories = new ArrayList<String>(); // should be the same for all users
    private List<CategoryEntity> mainCategories = new ArrayList<CategoryEntity>(); // should be the same for all users

    public void initCategories() {
        // get categories from database
    }

    public List<CategoryEntity> getMainCategories() {
        return mainCategories;
    }
}
2) Managed bean (session scope)
In this case, every user has his own instance of the object.
When a user tries to delete a category, we need to check whether another user is trying to delete the same category at the same time, so do we need a synchronized block?
public class CategoryServiceSession implements Serializable {
    private CategoryDao categoryDao;
    private CategoryService categoryService;
    private TreeNode selectedNode;

    public TreeNode getSelectedNode() {
        return selectedNode;
    }

    public void setSelectedNode(TreeNode selectedNode) {
        this.selectedNode = selectedNode;
    }

    public void deleteCategory() {
        CategoryEntity current = (CategoryEntity) selectedNode.getData();
        synchronized (this) {
            // configure tree
            selectedNode = null;
            categoryDao.delete(current);
        }
        categoryService.initCategories();
    }
}
3) Spring @Controller
Here all users may share one instance (or does each user have his own instance?). But when some admin tries to change a parameter of some user, should he check whether another admin is trying to do the same operation?
@Controller
@RequestMapping("/rest")
public class UserResource {

    @Autowired
    private UserDao userDao;

    @RequestMapping(value = "/user/{id}", method = RequestMethod.PUT)
    public @ResponseBody UserEntity changeBannedStatus(@PathVariable Long id) {
        UserEntity user = userDao.findById(id);
        synchronized (id) {
            user.setBanned(!user.getBanned());
            userDao.update(user);
        }
        return user;
    }
}
So, how should it be done?
Sorry for my English.
In the code that you've posted, nothing in particular needs to be synchronized, and the synchronized blocks you've defined won't protect you from anything. Your controller scope is singleton by default.
If your singletons change shared objects (mostly just their fields), then you should likely flag the whole method as synchronized.
Method-level variables and final parameters will likely never need synchronization (at least in the programming model you seem to be using), so don't worry about them.
The session object is mostly guarded by serialization, but you can still have data races if your user has concurrent requests; you'll have to find creative ways to deal with this.
You may (or will) have concurrency issues in the database (multiple users trying to delete or modify a database row concurrently), but this should be handled by a pessimistic or optimistic locking and transaction policy in your DAO.
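The optimistic-locking idea mentioned above can be illustrated without any persistence framework. This is a hypothetical plain-Java sketch (none of these names come from the question's code): each record carries a version number, and a write only succeeds if the version observed at read time is still current. JPA's @Version implements the same check as an UPDATE ... WHERE version = ? statement.

```java
// Hypothetical sketch: optimistic locking means a write only succeeds if the
// version observed at read time is still the current one.
class VersionedRecord {
    private Object value;
    private long version = 0;

    synchronized long readVersion() { return version; }
    synchronized Object read() { return value; }

    /** Applies the write only if no other writer committed since the read. */
    synchronized boolean writeIfVersionMatches(long expectedVersion, Object newValue) {
        if (version != expectedVersion) {
            return false; // concurrent modification detected: re-read and retry
        }
        value = newValue;
        version++;
        return true;
    }
}

class OptimisticLockingDemo {
    public static void main(String[] args) {
        VersionedRecord record = new VersionedRecord();
        long v = record.readVersion();
        System.out.println(record.writeIfVersionMatches(v, "first"));  // true
        System.out.println(record.writeIfVersionMatches(v, "second")); // false: stale version
    }
}
```

The second write fails because it still holds the old version number; the caller must re-read and retry, which is exactly what the database-level locking policy automates.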
Good luck.
Generally speaking, using synchronized statements in your code reduces scalability. If you ever try to use multiple server instances, your synchronized blocks will most likely be useless. Transaction semantics (using either optimistic or pessimistic locking) should be enough to ensure that your objects remain consistent. So in cases 2 and 3 you don't need them.
As for the shared variables in CategoryService, it may be possible to synchronize them, but your categories seem to be some kind of cache. If that is the case, you might try to use a cache of your persistence provider (e.g. in Hibernate, the second-level cache or the query cache) or of your database.
Also, calling categoryService.initCategories() in deleteCategory() probably means you are reloading the whole list, which is not a good idea, especially if you have many categories.
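As an illustration of avoiding the full reload, the in-memory copy can be updated incrementally after a successful delete. This is a hypothetical sketch (the real CategoryEntity and tree handling are reduced to plain ids); CopyOnWriteArrayList suits read-mostly data shared by all sessions:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch: rather than rebuilding the whole category list after a delete,
// remove the single entry from a thread-safe in-memory copy.
class CategoryCache {
    private final List<Long> categoryIds = new CopyOnWriteArrayList<>();

    void loadAll(List<Long> fromDatabase) {  // full load, done once at startup
        categoryIds.clear();
        categoryIds.addAll(fromDatabase);
    }

    void removeCategory(long id) {           // called after the DAO delete commits
        categoryIds.remove(Long.valueOf(id));
    }

    List<Long> snapshot() {
        return List.copyOf(categoryIds);
    }
}

class CategoryCacheDemo {
    public static void main(String[] args) {
        CategoryCache cache = new CategoryCache();
        cache.loadAll(List.of(1L, 2L, 3L));
        cache.removeCategory(2L);
        System.out.println(cache.snapshot()); // [1, 3]
    }
}
```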
Related
My Spring Boot application implements a service class which is passed a request object from a RestController. This service method is responsible for updating an entity.
As there are a lot of fields to be updated, I separated the updating logic into several private methods for better readability, like this:
@Transactional
public void updateUser(UserRequest userRequest) {
    final User user = userRepository.findById(userRequest.getId())
            .orElseThrow(() -> new EntityNotFoundException());
    updateUserFromRequest(user, userRequest);
}
private void updateUserFromRequest(User user, UserRequest userRequest) {
    updateUserMainData(user, userRequest);
    updateUserAdditionalData(user, userRequest);
}

private void updateUserMainData(User user, UserRequest userRequest) {
    user.setProperty1(userRequest.getProperty1());
    user.setProperty2(userRequest.getProperty2());
    user.setProperty3(userRequest.getProperty3());
}

private void updateUserAdditionalData(User user, UserRequest userRequest) {
    user.setProperty4(userRequest.getProperty4());
    user.setProperty5(userRequest.getProperty5());
    user.setProperty6(userRequest.getProperty6());
}
While this works just fine, it feels "awkward" to pass the User object around into private methods. Is this approach considered good practice, or are there other patterns?
One possibility could be hiding this complexity in the User class itself by adding an update(UserRequest userRequest) method. No more User being passed around into private methods (which could be considered to have side effects).
If you don't want to add logic to your entity class, another possibility to make it less "awkward" would be to have all the updateXXXX() methods return the updated User. Of course, this changes nothing in practical terms, but it hints that the User was indeed updated in the method, reducing the "methods with side effects" feeling a little.
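For illustration, the entity-owned variant suggested above could look roughly like this (a hedged sketch; the property names are just the question's placeholders, and getters/setters are trimmed):

```java
// Sketch of the entity-owned update: the entity applies the request to itself
// and returns this, so no half-updated User is passed between private methods.
class UserRequest {
    String property1;
    String property2;
}

class User {
    private String property1;
    private String property2;

    User updateFrom(UserRequest request) {
        this.property1 = request.property1;
        this.property2 = request.property2;
        return this; // returning the user hints at the mutation
    }

    String getProperty1() { return property1; }
}
```

The transactional service method then shrinks to a single call such as userRepository.findById(...).orElseThrow(...).updateFrom(userRequest).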
I've recently had to implement a cache invalidation system, and ended up hesitating between several ways of doing it.
I'd like to know what the best practice is in my case. I have a classic java back-end, with entities, services and repositories.
Let's say I have a Person object, with the usual setters and getters, persisted in a database.
public class Person {
    private Long id;
    private String firstName;
    private String lastName;
    ...
}
A PersonRepository, extending JpaRepository<Person, Long> (Spring Data repositories are interfaces, so save(), findById() and delete() are inherited rather than implemented):
public interface PersonRepository extends JpaRepository<Person, Long> {
}
I have a PersonService, with the usual save(), find(), delete() methods and other more functional methods.
public class PersonService {
    public Person save(Person person) {
        doSomeValidation(person);
        return personRepository.save(person);
    }
    ...
}
Now I also have some jobs that run periodically and manipulate the Person objects. One of them runs every second and uses a cache of Person objects that needs to be rebuilt only if the firstName attribute of a Person has been modified elsewhere in the application.
public class EverySecondPersonJob {
    private List<Person> cache;
    private boolean cacheValid;

    public void invalidateCache() {
        cacheValid = false;
    }

    public void execute() { // runs every second
        if (!cacheValid)
            cache = buildCache();
        doStuff(cache);
    }
}
There are lots of places in the code that manipulate Person objects and persist them; some may change the firstName attribute, requiring an invalidation of the cache, and some change other things, not requiring it. For example:
public class ServiceA {
    public void doStuffA(Person person) {
        doStuff();
        person.setFirstName("aaa");
        personRepository.save(person);
    }

    public void doStuffB(Person person) {
        doStuff();
        person.setLastName("aaa");
        personService.save(person);
    }
}
What is the best way of invalidating the cache?
First idea:
Create a PersonService.saveAndInvalidateCache() method, then check every method that calls personService.save(), see if it modifies a relevant attribute, and if so, make it call PersonService.saveAndInvalidateCache() instead:
public class PersonService {
    public Person save(Person person) {
        doSomeValidation(person);
        return personRepository.save(person);
    }

    public Person saveAndInvalidateCache(Person person) {
        doSomeValidation(person);
        Person saved = personRepository.save(person);
        everySecondPersonJob.invalidateCache();
        return saved;
    }
    ...
}
public class ServiceA {
    public void doStuffA(Person person) {
        doStuff();
        person.setFirstName("aaa");
        personService.saveAndInvalidateCache(person);
    }

    public void doStuffB(Person person) {
        doStuff();
        person.setLastName("aaa");
        personService.save(person);
    }
}
This requires lots of modifications and becomes error prone whenever a doStuffX() method is modified or added: every doStuffX() has to know whether it must invalidate the cache of an entirely unrelated job.
Second idea:
Modify setFirstName() to track the state of the Person object, and make PersonService.save() handle the cache invalidation:
public class Person {
    private Long id;
    private String firstName;
    private String lastName;
    private boolean mustInvalidateCache;

    public void setFirstName(String firstName) {
        this.firstName = firstName;
        this.mustInvalidateCache = true;
    }
    ...
}

public class PersonService {
    public Person save(Person person) {
        doSomeValidation(person);
        Person saved = personRepository.save(person);
        if (person.isMustInvalidateCache())
            everySecondPersonJob.invalidateCache();
        return saved;
    }
    ...
}
That solution is less error prone, because no doStuffX() needs to know whether it must invalidate the cache, but it makes the setter do more than just change the attribute, which seems to be a big no-no.
Which solution is the best practice and why?
Thanks in advance.
Clarification: if the cache is invalid, my job running every second calls a method that retrieves the Person objects from the database, builds a cache of other objects based upon the properties of the Person objects (here, firstName), and doesn't modify the Persons.
The job then uses that cache of other objects for its work, and doesn't persist anything in the database either, so there is no potential consistency issue.
1) You don't
In the usage scenario you described, the best practice is not to do any home-grown caching but to use the cache inside the JPA implementation. Many JPA implementations provide one (e.g. Hibernate, EclipseLink, DataNucleus, Apache OpenJPA).
Now I also have some jobs that run periodically and manipulate the Person objects
You would never manipulate a cached object. To manipulate, you need a session/transaction context, and the JPA implementation makes sure that you have the current object.
If you do "invalidation" as you described, you lose transactional properties and get inconsistencies. What happens if a transaction fails and you have already updated the cache with the new value? But if you update the cache after the transaction went through, concurrent jobs read the old value.
2) Different Usage Scenario with an Eventually Consistent View
You could do caching "on top" of your data storage layer that provides an eventually consistent view. But you cannot write data back into the same objects.
JPA always updates (and caches) the complete object.
Maybe you can store the data that your "doStuff" code derives in another entity?
If this is a possibility, then you have several options. I would "wire in" the cache invalidation via JPA entity listeners or the change data capture capabilities of the database. JPA entity listeners are similar to your second idea, except that not all code needs to go through your PersonService. If you run the listener inside the application, your application cannot have multiple instances, so I would prefer getting change events from the database. You should also reread everything from time to time in case you miss an event.
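Whichever event source is chosen (an entity listener or database change data capture), the consuming side can stay the same: the job subscribes to "firstName changed" events and merely flips its validity flag. A framework-free sketch of that wiring, with all names hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch: a publisher emits a change event after the transaction committed;
// the cache-holding job subscribes and only invalidates its flag.
class ChangeEvents {
    private final List<Consumer<Long>> listeners = new ArrayList<>();

    void subscribe(Consumer<Long> listener) { listeners.add(listener); }

    void firstNameChanged(Long personId) {    // fired after the tx committed
        listeners.forEach(l -> l.accept(personId));
    }
}

class EverySecondJob {
    private volatile boolean cacheValid = true;

    void register(ChangeEvents events) {
        events.subscribe(personId -> cacheValid = false);
    }

    boolean isCacheValid() { return cacheValid; }
}
```

Note that none of the services writing Person need to know about the job; they only publish the event, which addresses the coupling problem of the question's first idea.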
I have a design/implementation issue that I just can't wrap my head around. I am currently working on a text-based game with multiple players. I kind of understand how it works Player-to-Server; I mean that the Server treats every individual Player the same way.
I'm using spring-boot 2, spring-web, thymeleaf, hibernate.
I implemented a custom UserDetails that is returned after the user logs in.
@Entity
@Table(name = "USER")
public class User implements Serializable {
    @Id
    private long userId;

    @Column(unique = true, nullable = false)
    private String userName;

    @OneToOne(cascade = CascadeType.ALL)
    @JoinColumn(name = "playerStatsId")
    private PlayerStats stats;
}
public class CurrentUserDetailsService implements UserDetailsService {
    @Override
    public CurrentUser loadUserByUsername(String userName) {
        User user = this.accountRepository.findByUserName(userName)
                .orElseThrow(() ->
                        new UsernameNotFoundException("User details not found with the provided username: " + userName));
        return new CurrentUser(user);
    }
}
public class CurrentUser implements UserDetails {
    private static final long serialVersionUID = 1L;
    private User user = new User();

    public CurrentUser(User user) {
        this.user = user;
    }

    public PlayerStats getPlayerStats() {
        return this.user.getStats();
    }

    // removed the rest for brevity
}
Hence, in my controller, I can do this to get the CurrentUser.
*Note: each User is also a player.
@GetMapping("/attackpage")
public String viewAttackPage(@AuthenticationPrincipal CurrentUser currentUser) {
    // return the page view for the list of attacks
    return "someview";
}
The currentUser here reflects the current user (Player 1 or 2 or 3 and so on), which works fine for most of the things that happen to the players themselves, such as purchasing some items or updating their profile.
But what I can't get or know how to achieve is when 2 players interact.
For example, Player 1 attacks Player 2. If I am Player 1, what I'll do is click "Attack" in the View, select Player 2, and submit the command. Hence, in the controller, it will be something like this.
@GetMapping("/attack")
public String launchAttack(@AuthenticationPrincipal CurrentUser currentUser, @RequestParam("playername") String player2) {
    updatePlayerState(player2);
    return "someview";
}

public void updatePlayerState(String player) {
    User user = getUserByPlayername(player);
    // perform some update to the player's state (say health, etc.)
    // update back to the DB?
}
Here's is what really got me confused.
As seen previously, when each User/Player logs in, a set of the user's (player's) current state is pulled from the DB and stored "in memory".
Hence, when Player 1 attacks Player 2,
1. How do I "notify" or update Player 2 that his stats have changed, so that Player 2 pulls the updated stats from the DB into memory?
2. How do I tackle the possible concurrency issue here? For example, Player 2's health is 50 in the DB. Player 2 then performs some action (say, purchasing a health potion, +30), which then updates the DB (health to 80). However, just before the DB is updated, Player 1 has already launched the attack and grabbed Player 2's state from the DB, which returns 50 since the DB has yet to be updated. So now, whatever changes are made in getUserByPlayername() and written back to the DB will be wrong, and the entire state of the Player will be "de-synced". I hope I am making sense here.
I understand that there is @Version in Hibernate for optimistic locking, but I'm not sure if it's applicable in this case. And would spring-session be useful in such a case?
Should I not store any data in memory when the user logs in? Should I always retrieve data from the DB only when some action is performed? Like when viewProfile, then I pull from accountRepository; or when viewStats, then I pull from statsRepository, and so on.
Do point me in the right direction. I would appreciate any concrete examples, or some kind of video/articles. If any additional information is required, do let me know and I'll try to explain my case better.
Thank you.
I think that you should not be updating the currentUser in your controller methods, and should not be relying on the data in that object to represent a player's current state. There are probably ways to get that to work, but you'd need to mess around with updating the security context.
I also recommend that you look up Users by id instead of userName, so I will write the rest of this answer with that approach. If you insist on finding Users by userName, adjust where necessary.
So, keeping it simple, I would keep a reference to the accountRepository in the controller and then, whenever you need to get or update a player's state, use
User user = accountRepository.findById(currentUser.getId());
Yes, @Version and optimistic locking will help with the concurrency issues you're concerned about. You can reload the entity from the database and retry the operation if you catch an OptimisticLockException. Or you may want to respond to Player 1 with something like "Player 2 has just purchased a potion of healing and is now at 80 health; do you still want to attack?"
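The reload-and-retry approach can be sketched generically. This is a hedged plain-Java outline (the conflict exception is simulated here; with JPA you would catch the provider's OptimisticLockException around a transactional call instead):

```java
import java.util.function.Supplier;

// Sketch of retry-on-conflict: run the read-modify-write again on an
// optimistic-lock conflict, up to a fixed budget of attempts.
class OptimisticConflict extends RuntimeException {}

class Retry {
    static <T> T withRetries(int maxAttempts, Supplier<T> operation) {
        OptimisticConflict last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return operation.get(); // must re-read fresh state on each attempt
            } catch (OptimisticConflict e) {
                last = e;               // someone else won the race; try again
            }
        }
        throw last;                     // budget exhausted: surface the conflict
    }
}
```

The essential point is that the supplier re-reads the entity on every attempt, so the second try operates on Player 2's post-potion state (80 health) rather than the stale 50.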
I'm not a Spring user, but I think that the problem is more conceptual than technical.
I'll try to provide an answer using a general approach, writing the examples in a Java EE style so that they should be understandable and, hopefully, portable to Spring.
First of all: every single DETACHED entity is stale data. And stale data cannot be trusted.
So:
Each method that modifies the state of an object should re-fetch the object from the DB inside the transaction:
updatePlayerState() should be a transaction-boundary method (or be called inside a tx), and getUserByPlayername(player) should fetch the target object from the DB.
JPA speaking: em.merge() is forbidden (without proper locking, i.e. @Version).
If you (or Spring) are doing this already, there's little to add.
WRT the "lost update problem" you mention in your point 2: be aware that this covers the application-server side (JPA/Hibernate), but the very same problem can be present on the DB side, which should be properly configured for at least repeatable-read isolation. Take a look at "MySQL does not conform to Repeatable Read really" if you are using it.
You have to handle controller fields that reference stale Players/Users/Objects. You have at least two options.
Re-fetch on each request: suppose Player 1 has attacked Player 2 and diminished Player 2's HP by 30. When Player 2 goes to a view that shows his HP, the controller behind that view should have re-fetched the Player2/User2 entity before rendering the view.
In other words, all of your presentation (detached) entities should be, sort of, request-scoped.
I.e., you can use a @WebListener to reload your Player/User:
@WebListener
public class CurrentUserListener implements ServletRequestListener {
    @Override
    public void requestInitialized(ServletRequestEvent sre) {
        CurrentUser currentUser = getCurrentUser();
        currentUser.reload();
    }

    @Override
    public void requestDestroyed(ServletRequestEvent sre) {
        // nothing to do
    }

    public CurrentUser getCurrentUser() {
        // return the CurrentUser
    }
}
or a request-scoped bean (or whatever the Spring equivalent is):
@RequestScoped
public class RefresherBean {
    @Inject
    private CurrentUser currentUser;

    @PostConstruct
    public void init() {
        currentUser.reload();
    }
}
Notify other controller instances: if the update succeeded, a notification should be sent to the other controllers.
I.e., using CDI @Observes (if you have CDI available):
public class CurrentUser implements UserDetails {
    private static final long serialVersionUID = 1L;
    private User user = new User();

    public CurrentUser(User user) {
        this.user = user;
    }

    public PlayerStats getPlayerStats() {
        return this.user.getStats();
    }

    public void onUpdate(@Observes(during = TransactionPhase.AFTER_SUCCESS) User user) {
        if (this.user.getId() == user.getId()) {
            this.user = user;
        }
    }

    // removed the rest for brevity
}
Note that CurrentUser should be a server-managed object.
I have a service (which for some reason I call a controller) that is injected into the Jersey resource method.
@Named
@Transactional
public class DocCtrl {
    ...
    public void changeDocState(List<String> uuids, EDocState state, String shreddingCode) throws DatabaseException, WebserviceException, RepositoryException, ExtensionException, LockException, AccessDeniedException, PathNotFoundException, UnknowException {
        List<Document2> documents = doc2DAO.getManyByUUIDs(uuids);
        for (Document2 doc : documents) {
            if (EDocState.SOFT_DEL == state) {
                computeShreddingFor(doc, shreddingCode); // here the state change happens and it is persisted to the DB
            }
            if (EDocState.ACTIVE == state)
                unscheduleShredding(doc);
        }
    }
}
doc2DAO.getManyByUUIDs(uuids) gets entity objects from the database.
@Repository
public class Doc2DAO {
    @PersistenceContext(name = Vedantas.PU_NAME, type = PersistenceContextType.EXTENDED)
    private EntityManager entityManager;

    public List<Document2> getManyByUUIDs(List<String> uuids) {
        if (uuids.isEmpty())
            uuids.add("-3");
        TypedQuery<Document2> query = entityManager.createNamedQuery("getManyByUUIDs", Document2.class);
        query.setParameter("uuids", uuids);
        return query.getResultList();
    }
}
However, when I make a second request to my API, I see the state of this entity object unchanged, i.e. the same as before the logic above occurred.
In the DB the status is still the changed one.
After a restart of the API service, I get the entity in the correct state.
As I understand it, Hibernate uses its L2 cache for the managed objects.
So can you please point me to what I am doing wrong here? Obviously, I need to get the cached entity with the changed state without a service restart, and I would like to keep entities attached to the persistence context for performance reasons.
In the logic I am making some changes to this object. After the completion of the changeDocState method, the state is properly changed and persisted in the database.
Thanks for the answers.
This question is about the injection of repositories and utility classes into an entity.
Background
The repository isn't directly used by the entity, but instead is passed through to a strategy which needs to enforce certain business rules.
The utility classes in question help with validation of the model and with image-handling functionality. The validation utility applies JSR 303 validation to the model, which checks that the entity is valid when created (custom JSR 303 annotations are used for certain business rules too). The image utility is called once the Order is saved (post persist) and uploads the image related to the order. This is done post persist because the ID of the Order in question is needed.
Question
All these dependencies injected into the entity don't feel right. The dilemma is whether to move them all (or some, and if so, which ones?) out of the domain object and elsewhere (if so, what would be the right place?), or whether this is a case of analysis paralysis.
For example, the strategy ensures certain business rules are met. Should it be taken out because it needs the repository? I mean, it needs to be run whenever the entity performs that update, so do I really want to lose that encapsulation?
public class OrderFactory {
    // This could be autowired
    private IOrderRepository orderRepository;
    private ValidationUtils validationUtils;
    private ImageUtils imageUtils;

    public OrderFactory(IOrderRepository orderRepository, ValidationUtils validationUtils, ImageUtils imageUtils) {
        this.orderRepository = orderRepository;
        this.validationUtils = validationUtils;
        this.imageUtils = imageUtils;
    }

    public Order createOrderFromSpecialOrder(SpecialOrderDTO dto) {
        Order order = (dto.hasNoOrderId()) ? new Order() : orderRepository.findOrderById(dto.getOrderId());
        order.updateFromSpecialOrder(orderRepository, validationUtils, imageUtils, dto.getSpecialOrderAttributes(), dto.getImage());
        return order;
    }
}
public class Order {
    private IOrderRepository orderRepository;
    private ValidationUtils validationUtils;
    private ImageUtils imageUtils;
    private byte[] image;

    protected Order() {}

    protected void updateFromSpecialOrder(IOrderRepository orderRepository, ValidationUtils validationUtils, ImageUtils imageUtils, ISpecialOrderAttributes attrs, byte[] image) {
        this.orderRepository = orderRepository;
        this.imageUtils = imageUtils;
        // This uses the orderRepository to do some voodoo based on the SpecialOrderAttributes
        SpecialOrderStrategy specialOrderStrategy = new SpecialOrderStrategy(orderRepository);
        specialOrderStrategy.handleAttributes(attrs);
        this.coupon = coupon;
        this.image = image;
        validationUtils.validate(this);
    }

    @PostPersist
    public void postPersist() {
        if (imageUtils != null) imageUtils.handleImageUpload(image, id, ImageType.Order);
    }
}
What you are looking for is a domain service that plays a coordinating role and depends on the aforementioned services/repositories. It's a classic mistake to try to stuff too much responsibility into an aggregate due to a twisted view of what invariants really are. The post-persist processing might benefit from a domain event to kick it off (either in or out of process; you've given me too little to go on here). You also might want to assign identifiers from the outside, or allocate one as part of the domain service's method execution. As for the validation, ditch the annotations and hand-roll it (if the rules really are about the entity). YMMV.
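For illustration, the coordinating domain service could look roughly like this sketch (all names here are invented, not from the question): the entity keeps only its own state, while the service owns the repository, validation and image handling.

```java
// Hypothetical sketch: the coordinating logic leaves the entity.
interface OrderRepository { Order findById(long id); void save(Order order); }
interface OrderValidator { void validate(Order order); }
interface ImageStore { void upload(long orderId, byte[] image); }

class Order {
    long id;
    byte[] image;

    void applyAttributes(String attrs) {
        // pure state change; no repositories or utilities needed in here
    }
}

class OrderUpdateService {
    private final OrderRepository orders;
    private final OrderValidator validator;
    private final ImageStore images;

    OrderUpdateService(OrderRepository orders, OrderValidator validator, ImageStore images) {
        this.orders = orders;
        this.validator = validator;
        this.images = images;
    }

    Order updateFromSpecialOrder(long orderId, String attrs, byte[] image) {
        Order order = orders.findById(orderId);
        order.applyAttributes(attrs);      // entity mutates itself only
        order.image = image;
        validator.validate(order);         // validation lives outside the entity
        orders.save(order);
        images.upload(order.id, image);    // replaces the @PostPersist callback
        return order;
    }
}
```

With this shape the entity has no injected collaborators at all, and the post-save image upload is an explicit step of the service (or a handler of a domain event) instead of a JPA lifecycle callback.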