In short, when @CacheEvict is invoked on a method and the key for the entry is not found, GemFire throws an EntryNotFoundException.
Now in detail,
I have a class
class Person {
String mobile;
int dept;
String name;
}
I have two cache regions defined, personRegion and personByDeptRegion, and the service is as below:
@Service
class PersonServiceImpl {

    @Cacheable(value = "personRegion")
    public Person findByMobile(String mobile) {
        return personRepository.findByMobile(mobile);
    }

    @Cacheable(value = "personByDeptRegion")
    public List<Person> findByDept(int deptCode) {
        return personRepository.findByDept(deptCode);
    }

    @Caching(
        evict = { @CacheEvict(value = "personByDeptRegion", key = "#p0.dept") },
        put = { @CachePut(value = "personRegion", key = "#p0.mobile") }
    )
    public Person updatePerson(Person p1) {
        return personRepository.save(p1);
    }
}
When there is a call to updatePerson and there are no entries in the personByDeptRegion, this throws an EntryNotFoundException for the key 1 (or whatever the dept code is). There is a very good chance that this method will be called before the @Cacheable methods are called, and I want to avoid this exception.
Is there any way we could tweak the GemFire behavior to return gracefully when the key does not exist in a given region?
Alternatively, I am also eager to know if there is a better implementation of the above scenario using GemFire as the cache.
Spring Data GemFire: 1.7.4
GemFire version: v8.2.1
Note: The above code is for representation purposes only, and I have multiple services with the same issue in the actual project.
First, I commend you for using Spring's Caching annotations on your application @Service components. All too often developers enable caching in their Repositories, which I think is bad form, especially if complex business rules (or even additional IO; e.g. calling a web service from a service component) are involved prior to or after the Repository interaction(s), particularly in cases where caching behavior should not be affected (or determined).
I also think your caching use case (updating one cache (personRegion) while invalidating another (personByDeptRegion) on a data store update) by following a @CachePut with a @CacheEvict seems reasonable to me. Though, I would point out that the seemingly intended use of the @Caching annotation is to combine multiple caching annotations of the same type (e.g. multiple @CacheEvict or multiple @CachePut) as explained in the core Spring Framework Reference Guide. Still, there is nothing preventing your intended use.
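For illustration, here is a minimal sketch of that documented usage, grouping multiple annotations of the same type (the deletePerson method is hypothetical, not from the question):

@Caching(evict = {
    @CacheEvict(value = "personRegion", key = "#p0.mobile"),
    @CacheEvict(value = "personByDeptRegion", key = "#p0.dept")
})
public Person deletePerson(Person p1) {
    personRepository.delete(p1);
    return p1;
}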
I created a similar test class here, modeled after your example above, to verify the problem. Indeed the jonDoeUpdateSuccessful test case fails (with the GemFire EntryNotFoundException, shown below) since no people in Department "R&D" were previously cached in the "DepartmentPeople" GemFire Region prior to the update, unlike the janeDoeUpdateSuccessful test case, which causes the cache to be populated before the update (even if the entry has no values, which is of no consequence).
com.gemstone.gemfire.cache.EntryNotFoundException: RESEARCH_DEVELOPMENT
at com.gemstone.gemfire.internal.cache.AbstractRegionMap.destroy(AbstractRegionMap.java:1435)
NOTE: My test uses GemFire as both a "cache provider" and a System of Record (SOR).
The problem really lies in SDG's use of Region.destroy(key) in the GemfireCache.evict(key) implementation rather than, and perhaps more appropriately, Region.remove(key).
GemfireCache.evict(key) has been implemented with Region.destroy(key) since inception. However, Region.remove(key) was not introduced until GemFire v5.0. Still, I can see no discernible difference between Region.destroy(key) and Region.remove(key) other than the EntryNotFoundException thrown by Region.destroy(key). Essentially, they both destroy the local entry (both key and value) as well as distribute the operation to other caches in the cluster (providing a non-LOCAL Scope is used).
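To illustrate the difference, a minimal sketch against the GemFire Region API (the region and key are illustrative):

Region<String, Person> region = cache.getRegion("personByDeptRegion");
region.remove("missingKey");   // absent key: returns null, no exception
region.destroy("missingKey");  // absent key: throws EntryNotFoundException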
So, I have filed SGF-539 to change SDG to call Region.remove(key) in GemfireCache.evict(key) rather than Region.destroy(key).
As for a workaround, well, there are basically only two things you can do:
Restructure your code and your use of the #CacheEvict annotation, and/or...
Make use of the condition on #CacheEvict.
It is unfortunate that a condition cannot be specified using a class type, something akin to a Spring Condition (in addition to SpEL), but that interface is intended for another purpose, and the @CacheEvict condition attribute does not accept a class type.
At the moment, I don't have a good example of how this might work, so I am moving forward on SGF-539.
You can follow this ticket for more details and progress.
Sorry for the inconvenience.
-John
I am building a REST API with Spring Boot and am using this controller.
@RestController
class EmployeeController {

    private final EmployeeRepository repository;

    EmployeeController(EmployeeRepository repository) {
        this.repository = repository;
    }

    @GetMapping("/employees")
    List<Employee> all() {
        return repository.findAll();
    }

    @PostMapping("/employees")
    Employee newEmployee(@RequestBody Employee newEmployee) {
        return repository.save(newEmployee);
    }
}
I want to ensure that API consumers cannot spam multiple concurrent POST requests with the same Employee. I know that I can check whether the entity already exists in the database before saving it, but I am afraid the performance will be bad. I have also noticed that you can use an annotation like @Version on your entity to make updates on existing entities safer.
But is there also a way, or a best practice in Spring, to handle such POST requests with a potentially new entity?
What kind of request throughput are you expecting to the POST /employees endpoint? While performance is important, premature optimization is almost always going to make your code uglier than it needs to be, for little gain.
As your code currently stands, multiple concurrent POST /employees requests would be handled on a first-come, first-served basis: the first user with the given UNIQUE constraint value (which is hopefully enforced by your underlying DBMS; a sketch of declaring such a constraint follows the list below) is created, and all requests after that (for the same user) would fail with e.g. a ConstraintViolationException (mapped to e.g. a DataIntegrityViolationException). From this point of view (as long as you do not have a complicated distributed DBMS setup), the consistency of the data is still guaranteed.
The downside, of course, is that the error messages that would be returned would be:
Vendor-specific and leak the underlying implementation (e.g. we're showing the client that we're using Hibernate)
Potentially difficult for the client to parse.
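For reference, a minimal sketch of declaring such a UNIQUE constraint at the JPA entity level, using standard javax.persistence annotations (entity and column names are illustrative, not taken from the question):

@Entity
@Table(uniqueConstraints = @UniqueConstraint(columnNames = "name"))
public class Employee {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // getters and setters omitted
}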
If you instead change the implementation to something like the following:
@PostMapping("/employees")
Employee newEmployee(@RequestBody Employee newEmployee) {
    verifyEmployeeDoesNotExist(newEmployee);
    return repository.save(newEmployee);
}

private void verifyEmployeeDoesNotExist(Employee employee) {
    // query-by-example check; assumes the repository extends QueryByExampleExecutor
    if (repository.exists(Example.of(employee))) {
        throw new EmployeeAlreadyExistsException("Employee " + employee.getName() + " already exists");
    }
}
then you could more easily control the flow of your endpoint and the underlying process, which would potentially allow for more easily digestible exception handling. This could be further improved by adding e.g. custom exceptions that also carry some pre-defined error code, such as error 409, code 1010, Employee already exists.
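A minimal sketch of such a custom exception, assuming Spring Web's @ResponseStatus is used to map it to HTTP 409 Conflict:

@ResponseStatus(HttpStatus.CONFLICT)
class EmployeeAlreadyExistsException extends RuntimeException {

    EmployeeAlreadyExistsException(String message) {
        super(message);
    }
}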
Of course, Spring's built-in exception translation for Hibernate (e.g. HibernateExceptionTranslator) might already be good enough for your use case; it could even be extended, and that extension can even be generalized.
In the end, the best practice is making your code clean, readable and maintainable. Then start adding functionality to monitor your code. After that, and only if you have a problem with performance, you can still optimize it.
I'd like to learn whether there are some rules/conditions under which a Spring component is wrapped (proxied) by CGLIB. For example, take this case:
@Component
public class TestComponent {
}

@Service
// @Transactional(rollbackFor = Throwable.class)
public class ProcessComponent {

    @Autowired
    private TestComponent testComponent;

    public void doSomething(int key) {
        // try to debug "testComponent" instance here ...
    }
}
If we let it like this and debug the testComponent field inside the method, then we'll see that it's not wrapped by CGLIB.
Now if we uncomment the @Transactional annotation and debug, we'll find that the instance is wrapped: it's of type ProcessComponent$$EnhancerByCGLIB$$14456 or something like that. It's clearly because Spring needs to create a proxy class to handle the transaction support.
But I'm wondering: is there any way we can detect how and when this wrapping happens? For example, some specific locations in Spring's source code to debug into to find more information, or some documentation on the rules by which Spring decides to create a proxy?
For your information, I need to know about this because I'm facing a situation where some component (not @Transactional; the example above is just for demonstration purposes) in my application has suddenly become proxied (I found a revision a bit in the past where it was not). The most important issue is that this affects components that also contain public final methods, and another issue (also important) is that there must have been some unexpected change in the design/structure of the classes. For these kinds of issues, we must of course try to find out what happened, who made the change that led to this, etc.
One note is that we have just upgraded our application from Spring Boot 2.1.0.RELEASE to 2.1.10.RELEASE. Checking the code revision by revision up till now is not feasible, because there have been quite a lot of commits.
Any kind of help would be appreciated, thanks in advance.
You could debug into org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.getAdvicesAndAdvisorsForBean(Class, String, TargetSource).
If any advisor is found, the bean will be proxied.
If you use @Lookup method injection, the component class will also be proxied.
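If you just need to check a given bean at runtime, here is a minimal sketch using Spring's org.springframework.aop.support.AopUtils and org.springframework.aop.framework.AopProxyUtils (testComponent is the field from the question):

boolean isProxy = AopUtils.isAopProxy(testComponent);               // true for JDK or CGLIB proxies
boolean isCglib = AopUtils.isCglibProxy(testComponent);             // true only for CGLIB subclass proxies
Class<?> targetClass = AopProxyUtils.ultimateTargetClass(testComponent); // the original, unproxied class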
I'd like to have a method in my Repository that returns a single value.
Like this:
TrainingMode findByTrainingNameAndNameEng( String trainingName, String nameEng );
http://docs.spring.io/spring-data/data-jpa/docs/current/reference/html/
Spring Data Docs describe that in this case the method can return null if no entity is found.
I'd like to throw an exception with a generic message like No TrainingMode found by %trainingName% and %nameEng%, or something like that.
I can use Optional<TrainingMode> as the return value and then use orElseThrow:
Optional<TrainingMode> findByTrainingNameAndNameEng( String trainingName, String nameEng );
repository.findByTrainingNameAndNameEng(name, nameEng).orElseThrow(() -> new RuntimeException(...));
But then I'd have to call orElseThrow every time the repository method is called. It's not clean; the DRY principle is broken.
How to get nonnull single value with orElseThrow using Spring Data?
The DRY principle would be violated if you duplicated the null handling throughout the application logic wherever the method is invoked. If the DRY principle is the thing you are most worried about, then I can think of the following:
You can make a "Service" class which delegates calls to the annotated repository and handles the null-response logic, and use that service class instead of calling the repositories directly. The drawback would be introducing another layer to your application (though it would decouple repositories from your app logic).
There is the possibility of adding custom behavior to your data repositories, which is described in the "3.6.1. Adding custom behavior to single repositories" section of the documentation. Sorry for not posting the snippet.
The issue I personally have with the second approach is that it pollutes the app with interfaces, forces you to follow certain naming patterns (I never liked 'Impl' suffixes), and can make migrating code more time-consuming (when the app becomes big, it gets harder to track which interface is responsible for which custom behavior, and then people simply start creating their own behavior, which turns out to duplicate another).
I found a solution.
First, Spring Data processes getByName and findByName equally, and we can use that: in my case, find* may return null (or a non-null Optional, as you wish), while get* must return a value: if null is returned, an exception is thrown.
I decided to use AOP for this case.
Here's the aspect:
import java.util.Arrays;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class GetFromRepositoryAspect {

    @Around("execution(public !void org.springframework.data.repository.Repository+.get*(..))")
    public Object aroundDaoMethod(ProceedingJoinPoint joinpoint) throws Throwable {
        Object result = joinpoint.proceed();
        if (null == result) {
            // FormattedException is a custom exception type
            throw new FormattedException("No entity found with args %s",
                    Arrays.toString(joinpoint.getArgs()));
        }
        return result;
    }
}
That's all.
You can achieve this by using the Spring nullability annotations. If the method return type is just some Entity and it's not a wrapper type, such as Optional<T>, then org.springframework.dao.EmptyResultDataAccessException will be thrown in case of no results.
Read more about Null Handling of Repository Methods.
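For completeness, a minimal sketch of opting a repository package into non-null semantics (the package name is illustrative); with this in place, a query with no result throws EmptyResultDataAccessException instead of returning null:

// package-info.java
@org.springframework.lang.NonNullApi
package com.example.repositories;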
I'm trying to implement a few tests with jBPM 6. I'm currently working on a simple hello world BPMN2 file, which is loaded correctly.
My understanding of the documentation is that persistence should be disabled by default: "By default, if you do not configure the process engine otherwise, process instances are not made persistent."
However, when I try to implement it, and without doing anything special to enable persistence, I hit persistence related problems every time I try to do anything.
javax.persistence.PersistenceException: No Persistence provider for EntityManager named org.jbpm.persistence.jpa
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:69)
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:47)
at org.jbpm.runtime.manager.impl.jpa.EntityManagerFactoryManager.getOrCreate(EntityManagerFactoryManager.java:33)
at org.jbpm.runtime.manager.impl.DefaultRuntimeEnvironment.init(DefaultRuntimeEnvironment.java:73)
at org.jbpm.runtime.manager.impl.RuntimeEnvironmentBuilder.get(RuntimeEnvironmentBuilder.java:400)
at org.jbpm.runtime.manager.impl.RuntimeEnvironmentBuilder.get(RuntimeEnvironmentBuilder.java:74)
I create my runtime environment the following way:
RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
.newDefaultInMemoryBuilder()
.persistence(false)
.addAsset(ResourceFactory.newClassPathResource("examples/helloworld.bpmn2.xml"), ResourceType.BPMN2)
.addAsset(ResourceFactory.newClassPathResource("examples/newBPMNProcess.bpmn"), ResourceType.BPMN2)
.get();
As my understanding is that persistence should be disabled by default, I don't see what I'm doing wrong. It could be linked to something included in one of my dependencies, but I haven't found anything on that either.
Has anybody faced the same issue already, or does anyone have any advice?
Thanks
A RuntimeManager is a combination of a process engine and a human task service. The human task service needs persistence (to start the human tasks etc.), that's why it's still asking for a datasource, even if you configure the engine to not use persistence.
If you want to use an engine without the human task service, you don't need persistence at all, but I wouldn't use a RuntimeManager in that case; simply create a ksession from the kbase directly:
http://docs.jboss.org/jbpm/v6.1/userguide/jBPMCoreEngine.html#d0e1805
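A minimal sketch of that approach, assuming a classpath kmodule defining a knowledge base named "my-knowledge-base" and a process with the given id (both names are illustrative):

KieServices ks = KieServices.Factory.get();
KieBase kbase = ks.getKieClasspathContainer().getKieBase("my-knowledge-base");
KieSession ksession = kbase.newKieSession();   // plain in-memory session: no RuntimeManager, no persistence
ksession.startProcess("com.sample.helloworld");
ksession.dispose();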
The InMemoryBuilder which you use in your code is supposed to (as per the API documentation) not be persistent, but it actually adds a persistence manager to the environment, just with an InMemoryMapper instead of a JPAMapper, because of the way the init() method in DefaultRuntimeEnvironment is implemented:
public void init() {
    if (emf == null && getEnvironmentTemplate().get(EnvironmentName.CMD_SCOPED_ENTITY_MANAGER) == null) {
        emf = EntityManagerFactoryManager.get().getOrCreate("org.jbpm.persistence.jpa");
    }
    addToEnvironment(EnvironmentName.ENTITY_MANAGER_FACTORY, emf);
    if (this.mapper == null) {
        if (this.usePersistence) {
            this.mapper = new JPAMapper(emf);
        } else {
            this.mapper = new InMemoryMapper();
        }
    }
}
As you can see above, this still tries to getOrCreate() a persistence unit. (I have seen a better implementation that also checks the value of the persistence attribute somewhere, but the issue here is that DefaultRuntimeEnvironment doesn't do that.)
What you need to start with to get away without persistence is a newEmptyBuilder():
RuntimeEnvironment env = RuntimeEnvironmentBuilder.Factory.get()
.newEmptyBuilder()
.knowledgeBase(KieServices.Factory.get().getKieClasspathContainer().getKieBase("my-knowledge-base"))
// ONLY REQUIRED FOR PER-REQUEST AND PER-INSTANCE STRATEGY
//.addEnvironmentEntry("IS_JTA_TRANSACTION", false)
.persistence(false)
.get();
Do mind, though, that this will only work for Singleton runtime managers: PerProcessInstance and PerRequest expect to be able to suspend a running transaction if necessary, which is only possible if you have an entity manager to persist state.
For testing with those two strategies, also use the addEnvironmentEntry() shown above.
I need to write an application with which I can run complex queries using Spring Data and MongoDB. I started by using MongoRepository, but struggled to find examples of complex queries or to actually understand the syntax.
I'm talking about queries like this:
@Repository
public interface UserRepositoryInterface extends MongoRepository<User, String> {
    List<User> findByEmailOrLastName(String email, String lastName);
}
or the use of JSON-based queries, which I attempted by trial and error because I can't get the syntax right, even after reading the MongoDB documentation (the following example doesn't work due to wrong syntax):
@Repository
public interface UserRepositoryInterface extends MongoRepository<User, String> {
    @Query("'$or':[{'firstName':{'$regex':?0,'$options':'i'}},{'lastName':{'$regex':?0,'$options':'i'}}]")
    List<User> findByEmailOrFirstnameOrLastnameLike(String searchText);
}
After reading through all the documentation, it seems that MongoTemplate is far better documented than MongoRepository. I'm referring to the following documentation:
http://static.springsource.org/spring-data/data-mongodb/docs/current/reference/html/
Can you tell me which is more convenient and powerful to use, MongoTemplate or MongoRepository? Are both equally mature, or does one of them lack features the other has?
"Convenient" and "powerful to use" are contradicting goals to some degree. Repositories are by far more convenient than templates but the latter of course give you more fine-grained control over what to execute.
As the repository programming model is available for multiple Spring Data modules, you'll find more in-depth documentation for it in the general section of the Spring Data MongoDB reference docs.
TL;DR
We generally recommend the following approach:
Start with the repository abstraction and just declare simple queries using the query derivation mechanism or manually defined queries.
For more complex queries, add manually implemented methods to the repository (as documented here). For the implementation use MongoTemplate.
Details
For your example this would look something like this:
Define an interface for your custom code:
interface CustomUserRepository {
    List<User> yourCustomMethod();
}
Add an implementation class for this interface and follow the naming convention to make sure we can find the class.
class UserRepositoryImpl implements CustomUserRepository {

    private final MongoOperations operations;

    @Autowired
    public UserRepositoryImpl(MongoOperations operations) {
        Assert.notNull(operations, "MongoOperations must not be null!");
        this.operations = operations;
    }

    public List<User> yourCustomMethod() {
        // custom implementation here
    }
}
Now let your base repository interface extend the custom one and the infrastructure will automatically use your custom implementation:
interface UserRepository extends CrudRepository<User, Long>, CustomUserRepository {
}
This way you essentially get the choice: everything that's easy to declare goes into UserRepository; everything that's better implemented manually goes into CustomUserRepository. The customization options are documented here.
FWIW, regarding updates in a multi-threaded environment:
MongoTemplate provides "atomic" out-of-the-box operations updateFirst, updateMulti, findAndModify, upsert... which allow you to modify a document in a single operation. The Update object used by these methods also allows you to target only the relevant fields.
MongoRepository only gives you the basic CRUD operations find, insert, save, delete, which work with POJOs containing all the fields. This forces you to either update the documents in several steps (1. find the document to update, 2. modify the relevant fields from the returned POJO, and then 3. save it), or define your own update queries by hand using #Query.
In a multi-threaded environment, like e.g. a Java back-end with several REST endpoints, single-method updates are the way to go, in order to reduce the chances of two concurrent updates overwriting one another's changes.
Example: given a document like this: { _id: "ID1", field1: "a string", field2: 10.0 } and two different threads concurrently updating it...
With MongoTemplate it would look somewhat like this:
THREAD_001 THREAD_002
| |
|update(query("ID1"), Update().set("field1", "another string")) |update(query("ID1"), Update().inc("field2", 5))
| |
| |
and the final state of the document is always { _id: "ID1", field1: "another string", field2: 15.0 }, since each thread accesses the DB only once and only the specified field is changed.
Whereas the same case scenario with MongoRepository would look like this:
THREAD_001 THREAD_002
| |
|pojo = findById("ID1") |pojo = findById("ID1")
|pojo.setField1("another string") /* field2 still 10.0 */ |pojo.setField2(pojo.getField2()+5) /* field1 still "a string" */
|save(pojo) |save(pojo)
| |
| |
and the final document being either { _id: "ID1", field1: "another string", field2: 10.0 } or { _id: "ID1", field1: "a string", field2: 15.0 } depending on which save operation hits the DB last.
(NOTE: Even if we used Spring Data's #Version annotation as suggested in the comments, not much would change: one of the save operations would throw an OptimisticLockingFailureException, and the final document would still be one of the above, with only one field updated instead of both.)
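For concreteness, a minimal sketch of the single-operation MongoTemplate update described above (the document class and field names are illustrative):

Query query = Query.query(Criteria.where("_id").is("ID1"));
// each call is a single atomic operation that touches only the named field
mongoTemplate.updateFirst(query, new Update().set("field1", "another string"), MyDocument.class);
mongoTemplate.updateFirst(query, new Update().inc("field2", 5), MyDocument.class);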
So I'd say that MongoTemplate is the better option, unless you have a very elaborate POJO model or need the custom query capabilities of MongoRepository for some reason.
This answer may be a bit delayed, but I would recommend avoiding the whole repository route. You get very few implemented methods of any great practical value, and in order to make it work you run into Java configuration nonsense that you can spend days and weeks on without much help from the documentation.
Instead, go with the MongoTemplate route and create your own data access layer, which frees you from the configuration nightmares faced by Spring programmers. MongoTemplate is really the savior for engineers who are comfortable architecting their own classes and interactions, since there is a lot of flexibility. The structure can be something like this:
Create a MongoClientFactory class that will run at the application level and give you a MongoClient object. You can implement this as a Singleton or using an Enum Singleton (the latter is thread-safe).
Create a data access base class from which you can inherit a data access object for each domain object. The base class can implement a method for creating a MongoTemplate object, which your class-specific methods can use for all DB accesses.
Each data access class for each domain object can implement the basic methods, or you can implement them in the base class.
The controller methods can then call methods in the data access classes as needed, as sketched below.
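A minimal sketch of that layering (all names are illustrative; assumes the MongoDB sync driver and Spring Data MongoDB 2.x or later):

enum MongoClientFactory {   // enum singleton: thread-safe by JVM guarantees
    INSTANCE;

    private final MongoClient client = MongoClients.create("mongodb://localhost:27017");

    MongoClient getClient() { return client; }
}

abstract class AbstractMongoDao {
    // one template per DAO, built on the shared client
    protected final MongoTemplate template =
            new MongoTemplate(MongoClientFactory.INSTANCE.getClient(), "mydb");
}

class UserDao extends AbstractMongoDao {
    User findByEmail(String email) {
        return template.findOne(Query.query(Criteria.where("email").is(email)), User.class);
    }
}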