I am using Spring Boot and created an API method that looks like the below:
@RestController
@RequestMapping("/api/products/")
@Api(value = "ProductControllerApi", produces = MediaType.APPLICATION_JSON_VALUE)
public class ProductController {
    @PostMapping
    public ResponseEntity<ProductDto> createProduct(@RequestBody Product product) {
        URI location = ServletUriComponentsBuilder.fromCurrentRequest().path("/{id}").buildAndExpand(product.getId()).toUri();
        return ResponseEntity.created(location).body(productService.createProduct(product));
    }
}
But when a user calls my method (/api/products/) from two devices at the same time, it creates duplicate records with the same data. For example, when the client sends
{
  "name": "Samsung",
  "cost": "26$"
}
it creates two records in the database with the same data. How can I detect duplicate data coming from different sources (for example, a user on two mobile devices calling the same method at the same time with the same data)? And how can I avoid it, so that if the method is called at the same time with the same data, only one record is inserted into the database?
This isn't a problem for Spring Boot but rather for your persistence layer. The best practice is for your DB tables to be modeled in such a way that two identical requests would create exactly the same primary key, or violate the same unique constraint. Then your application code deals with the exception from the DB layer at transaction commit time.
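For example, here is a minimal sketch of that idea, assuming the product name is the natural/business key (adjust to whatever really identifies a product in your domain):

import javax.persistence.*;

// The unique constraint makes the database reject the second concurrent insert.
@Entity
@Table(name = "products", uniqueConstraints = @UniqueConstraint(columnNames = "name"))
public class Product {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(nullable = false)
    private String name;

    private String cost;
    // getters/setters omitted
}

Then in the controller (or better, a service), catch Spring's translated exception and map it to a response; the losing request fails at commit time:

try {
    ProductDto created = productService.createProduct(product);
    return ResponseEntity.created(location).body(created);
} catch (org.springframework.dao.DataIntegrityViolationException e) {
    // the duplicate insert violated the unique constraint
    return ResponseEntity.status(HttpStatus.CONFLICT).build();
}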
So I'm trying, for the first time in a not-so-complex project, to implement Domain Driven Design by separating all my code into application, domain, infrastructure and interfaces packages.
I also went with the whole separation of the JPA entities from the domain models that hold my business logic as rich models, and used the Builder pattern to instantiate them. This approach has given me a headache, and I can't figure out if I'm doing it all wrong when using JPA + ORM and Spring Data with DDD.
Process explanation
The application is a REST API consumer (without any user interaction) that processes a fairly big amount of data resources daily through scheduled tasks and stores or updates them in MySQL. I'm using RestTemplate to fetch and convert the JSON responses into domain objects, and from there I apply any business logic within the domain itself, e.g. validation, events, etc.
From what I have read, the aggregate root object should have an identity for its whole lifecycle and should be unique. I have used the id of the REST API object because it is already something that I use to identify and track it in my business domain. I have also created a property for the technical id, so when I convert entities to domain objects it can hold a reference for the update process.
When I need to persist the domain objects to the data source (MySQL) for the first time, I convert them into entity objects and persist them using the save() method. So far so good.
Now, when I need to update those records in the data source, I first fetch them as a list of Employees from the data source and convert the entity objects to domain objects; then I fetch the list of Employees from the REST API as domain models. Up until now I have two lists of the same domain type, List<Employee>. I iterate over them using streams, checking whether corresponding objects are not equal() between the two lists; if so, a third list is built with the Employee objects that need to be updated. At this point I have already passed the technical id to the domain objects in the third list, so Hibernate can identify them and use it to update the records that already exist.
Up to here it is all fairly simple stuff, until I use the saveAll() method to update the records.
Questions
I always see Hibernate using INSERT instead of updating the list of records. So, if I'm correct, the Hibernate session is not recognising the objects that I'm throwing into it, because I detached them when I converted them to domain objects?
Does anyone have a better idea of how I can implement this differently, or fix this problem?
Or should I stop using this approach of two different objects and continue using them as rich entity models?
Simple classes to explain it with code
EmployeeDO.java
@Entity
@Table(name = "employees")
public class EmployeeDO implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    public EmployeeDO() {}
    ...omitted getter/setters
}
Employee.java
public class Employee {
    private Long persistId;
    private Long employeeId;
    private String name;

    private Employee() {}
    ...omitted getters and Builder
}
EmployeeConverter.java
public class EmployeeConverter {

    public static EmployeeDO serialize(Employee employee) {
        EmployeeDO target = new EmployeeDO();
        if (employee.getPersistId() != null) {
            target.setId(employee.getPersistId());
        }
        target.setName(employee.getName());
        return target;
    }

    public static Employee deserialize(EmployeeDO employee) {
        return new Employee.Builder(employee.getEmployeeId())
                .withPersistId(employee.getId()) // <-- technical id setter
                .withName(employee.getName())
                .build();
    }
}
EmployeeRepository.java
@Component
public class EmployeeRepositoryImpl implements EmployeeRepository {

    @Autowired
    EmployeeJpaRepository db;

    @Override
    public List<Employee> findAll() {
        return db.findAll().stream()
                .map(employee -> EmployeeConverter.deserialize(employee))
                .collect(Collectors.toList());
    }

    @Override
    public void saveAll(List<Employee> employees) {
        db.saveAll(employees.stream()
                .map(employee -> EmployeeConverter.serialize(employee))
                .collect(Collectors.toList()));
    }
}
EmployeeJpaRepository.java
@Repository
public interface EmployeeJpaRepository extends JpaRepository<EmployeeDO, Long> {
}
I use the same approach on my project: two different models for the domain and the persistence.
First, I would suggest that you don't use the converter approach but use the Memento pattern instead. Your domain entity exports a memento object and can be restored from that same object. Yes, the domain gets two functions that aren't related to the domain (they exist just to satisfy a non-functional requirement), but, on the other side, you avoid exposing functions, getters and constructors that the domain business logic never uses.
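A minimal sketch of the idea, using the Employee from this question (EmployeeMemento and the method names are illustrative, not from the original code):

public class Employee {
    private Long persistId;
    private Long employeeId;
    private String name;

    private Employee() {}

    // The only two "technical" members on the domain object:
    public EmployeeMemento toMemento() {
        return new EmployeeMemento(persistId, employeeId, name);
    }

    public static Employee fromMemento(EmployeeMemento m) {
        Employee e = new Employee();
        e.persistId = m.getPersistId();
        e.employeeId = m.getEmployeeId();
        e.name = m.getName();
        return e;
    }
}

// A dumb, immutable state carrier; only the infrastructure layer works with it.
public class EmployeeMemento {
    private final Long persistId;
    private final Long employeeId;
    private final String name;

    public EmployeeMemento(Long persistId, Long employeeId, String name) {
        this.persistId = persistId;
        this.employeeId = employeeId;
        this.name = name;
    }

    public Long getPersistId() { return persistId; }
    public Long getEmployeeId() { return employeeId; }
    public String getName() { return name; }
}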
For the persistence part, I don't use JPA, exactly for this reason: you have to write a lot of code to reload, update and persist the entities correctly. I write the SQL directly: I can write and test it fast, and once it works I'm sure that it does what I want. With the memento object I have exactly what I will use in the insert/update query, and I save myself a lot of headaches about JPA handling complex table structures.
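To illustrate, a sketch using the memento above and Spring's JdbcTemplate against the employees table from the question (not the code I actually run):

// Infrastructure layer: the memento carries exactly the state the SQL needs.
// jdbcTemplate is an injected org.springframework.jdbc.core.JdbcTemplate.
public void insert(EmployeeMemento m) {
    jdbcTemplate.update("INSERT INTO employees (name) VALUES (?)", m.getName());
}

public void update(EmployeeMemento m) {
    jdbcTemplate.update("UPDATE employees SET name = ? WHERE id = ?",
            m.getName(), m.getPersistId());
}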
Anyway, if you want to use JPA, the only solution is to (see the sketch after this list):
load the persistence entities and transform them into domain entities
update the domain entities according to the changes that you have to make in your domain
save the domain entities, which means:
reload the persistence entities
change them, or create them if there are new ones, with the changes that you get from the updated domain entities
save the persistence entities
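Here is a sketch of that flow for the Employee example (assuming Spring Data JPA 2.x, where findById returns an Optional):

@Transactional
public void updateAll(List<Employee> changedEmployees) {
    for (Employee domain : changedEmployees) {
        if (domain.getPersistId() == null) {
            // brand new record: a plain INSERT is fine
            db.save(EmployeeConverter.serialize(domain));
        } else {
            // reload the managed entity and copy the changes onto it,
            // so Hibernate issues an UPDATE instead of an INSERT
            EmployeeDO managed = db.findById(domain.getPersistId())
                    .orElseThrow(IllegalStateException::new);
            managed.setName(domain.getName());
            // managed entities are flushed automatically at commit; no saveAll() needed
        }
    }
}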
I've tried a mixed solution, where the domain entities are extended by the persistence ones (a bit complex to do). A lot of care must be taken to avoid the domain model having to adapt to the restrictions of JPA that come from the persistence model.
Here is an interesting read about the splitting of the two models.
Finally, my suggestion is to think about how complex the domain is and use the simplest solution for the problem:
is it big and full of complex behaviours? Is it expected to grow into a big one? Use two models, domain and persistence, and manage the persistence directly with SQL. It avoids a lot of chaos in the read/update/save phase.
is it simple? Then, first, should you use the DDD approach at all? If really yes, I would let the JPA annotations slip inside the domain. Yes, it's not pure DDD, but we live in the real world, and the time to do something simple the pure way should not be some orders of magnitude bigger than the time needed to do it with some compromises. And, on the other side, I can write all this mapping stuff in an XML file in the infrastructure layer, avoiding cluttering the domain with it, as it's done in the Spring DDD sample here.
When you want to update an existing object, you first have to load it through entityManager.find() and apply the changes on that managed object, or use entityManager.merge(), since you are working with detached entities.
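With the converter from the question, both variants look roughly like this (a sketch, inside a transaction):

// Option A: load the managed instance and copy the changes onto it
EmployeeDO managed = entityManager.find(EmployeeDO.class, employee.getPersistId());
managed.setName(employee.getName()); // flushed as an UPDATE at commit

// Option B: merge the detached instance; note that merge() returns the managed copy
EmployeeDO detached = EmployeeConverter.serialize(employee); // technical id is set
EmployeeDO merged = entityManager.merge(detached);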
Anyway, modelling rich domain models based on JPA is the perfect use case for Blaze-Persistence Entity Views.
Blaze-Persistence is a query builder on top of JPA which supports many of the advanced DBMS features on top of the JPA model. I created Entity Views on top of it to allow easy mapping between JPA models and custom interface-defined models, something like Spring Data Projections on steroids. The idea is that you define your target structure the way you like and map attributes (getters) via JPQL expressions to the entity model. Since the attribute name is used as the default mapping, you mostly don't need explicit mappings, as 80% of the use cases are DTOs that are a subset of the entity model.
The interesting point here is that entity views can also be updatable and support automatic translation back to the entity/DB model.
A mapping for your model could look as simple as the following:
@EntityView(EmployeeDO.class)
@UpdatableEntityView
interface Employee {
    @IdMapping("persistId")
    Long getId();
    Long getEmployeeId();
    String getName();
    void setName(String name);
}
Querying is a matter of applying the entity view to a query, the simplest being just a query by id.
Employee dto = entityViewManager.find(entityManager, Employee.class, id);
The Spring Data integration allows you to use it almost like Spring Data Projections: https://persistence.blazebit.com/documentation/entity-view/manual/en_US/index.html#spring-data-features and it can also be saved back. Here is a sample repository:
@Repository
interface EmployeeRepository {
    Employee findOne(Long id);
    void save(Employee e);
}
It will only fetch the mappings that you tell it to fetch and also only update the state that you make updatable through setters.
With the Jackson integration you can deserialize your payload onto a loaded entity view, or you can avoid loading altogether and use the Spring MVC integration to capture just the state that was transferred and flush that. It could look like the following:
@RequestMapping(path = "/employee/{id}", method = RequestMethod.PUT, consumes = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<String> updateEmp(@EntityViewId("id") @RequestBody Employee emp) {
    employeeRepository.save(emp);
    return ResponseEntity.ok(emp.getId().toString());
}
Here you can see an example project: https://github.com/Blazebit/blaze-persistence/tree/master/examples/spring-data-webmvc
How do I validate path variables (storeId, customerId, accountId) given in the following URL, or anything similar?
/store/{storeId}/customers/{customerId}/accounts/{accountId}
In case a user starts writing random storeIds/customerIds and tries to create resources like
POST /store/478489/customers/56423/accounts (supposing 478489 and 56423 don't point to a valid resource),
I want to return the correct error code, e.g. HttpStatus.NOT_FOUND or HttpStatus.BAD_REQUEST.
I am using Java with Spring Boot.
The following question explains my problem in more detail; however, it does not have many responses.
Validating path to nested resource
From the provided URL /store/{storeId}/customers/{customerId}/accounts/{accountId}, it is evident that a store has customers and those customers have accounts.
The following approach includes extra database calls for validating the store by id and the customer by id, but it is the appropriate approach, because if you use a single query with joins on the STORE and CUSTOMER tables you might not be able to tell exactly whether it is the given storeId or the customerId that is incorrect/not in the database.
If you go step by step you can show appropriate error messages:
If storeId is incorrect - There exists no store with given storeId: XYZ
If customerId is incorrect - There exists no customer with given customerId: XYZ
Since you mentioned that you are using Spring Boot, your code should look something like this:
#RequestMapping(value = "/store/{storeId}/customers/{customerId}/accounts",
method = RequestMethod.POST)
public ResponseEntity<Account> persistAccount(#RequestBody Account account, #PathVariable("storeId") Integer storeId,
#PathVariable("customerId") Integer customerId) {
// Assuming you have some service class #Autowired that will query store by ID.
// Assuming you have classes like Store, Customer, Account defined
Store store = service.getStoreById(storeId);
if(store==null){
//Throw your exception / Use some Exception handling mechanism like #ExceptionHandler etc.
//Along with proper Http Status code.
//Message will be something like: *There exists no store with given storeId: XYZ*
}
Customer customer = service.getAccountById(storeId, customerId);
if(customer==null){
//Throw your exception with proper message.
//Message will be something like: *There exists no store with given customerID: XYZ*
}
// Assuming you already have some code to save account info in database.
// for convenience I am naming it as saveAccountAgainstStoreAndCustomer
Account account = service.saveAccountAgainstStoreAndCustomer(storeId, customerId, account);
ResponseEntity<Account> responseEntity = new ResponseEntity<Account>(account, HttpStatus.CREATED);
}
The above code snippet is just a skeleton of what your code should look like; you can structure it in a better way than given above by following some good coding practices.
I hope it helps.
I was following an example showing how one can build an OData service with Olingo in Java (a Maven project). The provided example doesn't have any database interaction; it uses a Storage class, which contains hard-coded data.
You can find the sample code on git; please refer to example p0_all at the provided URL.
Does anyone know how we can connect the git example to a database and, furthermore, perform CRUD operations?
Please do help me with some good examples or concepts.
Thanking you in advance.
I recently built an oData producer using Olingo and found myself similarly frustrated. I think that part of the issue is that there really are a lot of different ways to build an oData service with Olingo, and the data access piece is entirely up to the developer to sort out in their own project.
Firstly, you need an application that has a database connection set up. So, completely disregarding Olingo, you should have an app that connects to and can query a database. If you are uncertain how to build a Java application that can query a MySQL datasource, then you should Google around for tutorials related to that problem, which have nothing to do with Olingo.
Next you need to write the methods and queries to perform CRUD operations in your application. Again, these methods have nothing to do with Olingo.
Where Olingo starts to come into play is in your implementation of the processor classes: EntityCollectionProcessor, EntityProcessor etc. (Note that there are other concerns, such as setting up your CsdlEntityTypes and Schema/Service Document etc., but those are outside the scope of your question.)
Let's start by looking at EntityCollectionProcessor. By implementing the EntityCollectionProcessor interface you need to override the readEntityCollection() function. The purpose of this function is to parse the oData URI for the entity name, fetch an EntityCollection for that entity, and then serialize the EntityCollection into an oData-compliant response. Here's the implementation of readEntityCollection() from your example link:
public void readEntityCollection(ODataRequest request, ODataResponse response, UriInfo uriInfo, ContentType responseFormat)
        throws ODataApplicationException, SerializerException {

    // 1st: retrieve the requested EntitySet from the uriInfo object
    // (representation of the parsed service URI)
    List<UriResource> resourcePaths = uriInfo.getUriResourceParts();
    UriResourceEntitySet uriResourceEntitySet = (UriResourceEntitySet) resourcePaths.get(0);
    // in our example, the first segment is the EntitySet
    EdmEntitySet edmEntitySet = uriResourceEntitySet.getEntitySet();

    // 2nd: fetch the data from the backend for this requested EntitySetName;
    // it has to be delivered as an EntityCollection object
    EntityCollection entitySet = getData(edmEntitySet);

    // 3rd: create a serializer based on the requested format (json)
    ODataSerializer serializer = odata.createSerializer(responseFormat);

    // 4th: serialize the content: transform from the EntitySet object to an InputStream
    EdmEntityType edmEntityType = edmEntitySet.getEntityType();
    ContextURL contextUrl = ContextURL.with().entitySet(edmEntitySet).build();
    final String id = request.getRawBaseUri() + "/" + edmEntitySet.getName();
    EntityCollectionSerializerOptions opts = EntityCollectionSerializerOptions.with().id(id).contextURL(contextUrl).build();
    SerializerResult serializerResult = serializer.entityCollection(serviceMetadata, edmEntityType, entitySet, opts);
    InputStream serializedContent = serializerResult.getContent();

    // Finally: configure the response object: set the body, headers and status code
    response.setContent(serializedContent);
    response.setStatusCode(HttpStatusCode.OK.getStatusCode());
    response.setHeader(HttpHeader.CONTENT_TYPE, responseFormat.toContentTypeString());
}
You can ignore (and reuse) everything in this example except for the "2nd" step:
EntityCollection entitySet = getData(edmEntitySet);
This line of code is where Olingo finally starts to interact with our underlying system, and the pattern that we see here informs how we should set up the rest of our CRUD operations.
The function getData(edmEntitySet) can be anything you want, in any class you want. The only restriction is that it must return an EntityCollection. So what you need to do is call a function that queries your MySQL database and returns all records for the given entity (using the string name of the entity). Then, once you have a List, or Set (or whatever) of your records, you need to convert it to an EntityCollection.
As an aside, I think that this is probably where the disconnect between the Olingo examples and real world application comes from. The code behind that getData(edmEntitySet); call can be architected in infinitely different ways, depending on the design pattern used in the underlying system (MVC etc.), styling choices, scalability requirements etc.
Here's an example of how I created an EntityCollection from a List returned by my query (keep in mind that I am assuming you know how to query your MySQL datasource and have already coded a function that retrieves all records for a given entity):
private List<Foo> getAllFoos(){
    // ... code that queries the datasource and retrieves all Foo records
}

// loop over List<Foo>, converting each instance of Foo into an Olingo Entity
private EntityCollection makeEntityCollection(List<Foo> fooList){
    EntityCollection entitySet = new EntityCollection();
    for (Foo foo : fooList){
        entitySet.getEntities().add(createEntity(foo));
    }
    return entitySet;
}

// Convert an instance of the Foo object into an Olingo Entity
// (createPrimitive is a small helper that wraps a name/value pair in an Olingo Property,
// as in the Olingo tutorials)
private Entity createEntity(Foo foo){
    Entity tmpEntity = new Entity()
            .addProperty(createPrimitive(Foo.FIELD_ID, foo.getId()))
            .addProperty(createPrimitive(Foo.FIELD_FOO_NAME, foo.getFooName()));
    return tmpEntity;
}
Just for added clarity, getData(edmEntitySet) might look like this:
public EntityCollection getData(String edmEntitySet){
    // ... code to determine which query to call based on the entity name
    List<Foo> foos = getAllFoos();
    EntityCollection entitySet = makeEntityCollection(foos);
    return entitySet;
}
If you can find an Olingo example that uses a DataProvider class, there are some basic examples of how you might set up the // ...code to determine which query to call based on entity name. I ended up modifying that pattern heavily using Java reflection, but that is totally unrelated to your question.
So getData(edmEntitySet) is a function that takes an entity name, queries the datasource for all records of that entity (returning a List<Foo>), and then converts that List<Foo> into an EntityCollection. The EntityCollection is made by calling the createEntity() function which takes the instance of my Foo object and turns it into an Olingo Entity. The EntityCollection is then returned to the readEntityCollection() function and can be properly serialized and returned as an oData response.
This example exposes a bit of the architecture problem that Olingo has with its own examples. In my example, Foo is an object that has constants that are used to identify the field names, which are used by Olingo to generate the oData Schema and Service Document. This object has a method to return its own CsdlEntityType, as well as a constructor, its own properties, getters/setters etc. You don't have to set your system up this way, but for the scalability requirements of my project this is how I chose to do things.
This is the general pattern that Olingo uses. Override methods of an interface, then call functions in a separate part of your system that interact with your data in the desired manner. Then convert the data into Olingo readable objects so they can do whatever "oData stuff" needs to be done in the response. If you want to implement CRUD for a single entity, then you need to implement EntityProcessor and its various CRUD methods, and inside those methods, you need to call the functions in your system (totally separate from any Olingo code) that create(), read() (single entity), update(), or delete().
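For instance, the create side of that pattern could look roughly like this (a sketch following the Olingo v4 tutorial signatures; storage.create(...) stands in for your own MySQL insert plus the Foo-to-Entity conversion shown above):

@Override
public void createEntity(ODataRequest request, ODataResponse response, UriInfo uriInfo,
        ContentType requestFormat, ContentType responseFormat)
        throws ODataApplicationException, ODataLibraryException {
    // resolve the target entity set from the URI
    UriResourceEntitySet uriResource = (UriResourceEntitySet) uriInfo.getUriResourceParts().get(0);
    EdmEntitySet edmEntitySet = uriResource.getEntitySet();
    EdmEntityType edmEntityType = edmEntitySet.getEntityType();

    // deserialize the request body into an Olingo Entity
    ODataDeserializer deserializer = odata.createDeserializer(requestFormat);
    Entity requestEntity = deserializer.entity(request.getBody(), edmEntityType).getEntity();

    // hand off to YOUR code: run the INSERT, return the stored row as an Olingo Entity
    Entity createdEntity = storage.create(edmEntitySet, requestEntity);

    // serialize the created entity into an oData-compliant 201 response
    ContextURL contextUrl = ContextURL.with().entitySet(edmEntitySet).build();
    EntitySerializerOptions opts = EntitySerializerOptions.with().contextURL(contextUrl).build();
    ODataSerializer serializer = odata.createSerializer(responseFormat);
    SerializerResult result = serializer.entity(serviceMetadata, edmEntityType, createdEntity, opts);
    response.setContent(result.getContent());
    response.setStatusCode(HttpStatusCode.CREATED.getStatusCode());
    response.setHeader(HttpHeader.CONTENT_TYPE, responseFormat.toContentTypeString());
}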
We are using Spring and iBatis, and I have discovered something interesting in the way a service method with @Transactional handles multiple DAO calls that return the same record. Here is an example of a method that does not work:
@Transactional
public void processIndividualTrans(IndvTrans trans) {
    Individual individual = individualDAO.selectByPrimaryKey(trans.getPartyId());
    individual.setFirstName(trans.getFirstName());
    individual.setMiddleName(trans.getMiddleName());
    individual.setLastName(trans.getLastName());
    Individual oldIndvRecord = individualDAO.selectByPrimaryKey(trans.getPartyId());
    individualHistoryDAO.insert(oldIndvRecord);
    individualDAO.updateByPrimaryKey(individual);
}
The problem with the above method is that the 2nd execution of the line
individualDAO.selectByPrimaryKey(trans.getPartyId())
returns the exact object returned from the first call.
This means that oldIndvRecord and individual are the same object, and the line
individualHistoryDAO.insert(oldIndvRecord);
adds a row to the history table that contains the changes (which we do not want).
In order for it to work, it must look like this:
@Transactional
public void processIndividualTrans(IndvTrans trans) {
    Individual individual = individualDAO.selectByPrimaryKey(trans.getPartyId());
    individualHistoryDAO.insert(individual);
    individual.setFirstName(trans.getFirstName());
    individual.setMiddleName(trans.getMiddleName());
    individual.setLastName(trans.getLastName());
    individualDAO.updateByPrimaryKey(individual);
}
We wanted to write a service called updateIndividual that we could use for all updates of this table that would store a row in the IndividualHistory table before performing the update.
@Transactional
public void updateIndividual(Individual individual) {
    Individual oldIndvRecord = individualDAO.selectByPrimaryKey(individual.getPartyId());
    individualHistoryDAO.insert(oldIndvRecord);
    individualDAO.updateByPrimaryKey(individual);
}
But it does not store the row as it was before the object changed. We can even explicitly instantiate different objects before the DAO calls and the second one becomes the same object as the first.
I have looked through the Spring documentation and cannot determine why this is happening.
Can anyone explain this?
Is there a setting that can allow the 2nd DAO call to return the database contents and not the previously returned object?
You are using Hibernate as the ORM, and this behavior is perfectly described in the Hibernate documentation, in the Transaction chapter:
Through Session, which is also a transaction-scoped cache, Hibernate provides repeatable reads for lookup by identifier and entity queries and not reporting queries that return scalar values.
The same goes for iBatis/MyBatis:
MyBatis uses two caches: a local cache and a second level cache. Each time a new session is created, MyBatis creates a local cache and attaches it to the session. Any query executed within the session will be stored in the local cache, so further executions of the same query with the same input parameters will not hit the database. The local cache is cleared upon update, commit, rollback and close.
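If you are on MyBatis 3 (rather than the legacy iBATIS), there is a setting for exactly this: narrowing the local cache to statement scope makes the second selectByPrimaryKey() hit the database again. In mybatis-config.xml it is <setting name="localCacheScope" value="STATEMENT"/>; the programmatic equivalent, set while building the SqlSessionFactory, is sketched below:

import org.apache.ibatis.session.Configuration;
import org.apache.ibatis.session.LocalCacheScope;

// per-statement local cache: repeatable reads within a session are disabled
Configuration configuration = new Configuration();
configuration.setLocalCacheScope(LocalCacheScope.STATEMENT);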
I have a very simple task.
I have a "User" entity.
This user has tons of fields, for example:
firstName
age
country
.....
My goal is to expose a simple controller for update:
@RequestMapping(value = "/mywebapp/updateUser")
public void updateUser(data)
I would like clients to call my controller with updates that might include one or more fields to be updated.
What are the best practices for implementing such a method?
One naive solution would be to send the whole entity from the client and, on the server, just override all the fields, but that seems very inefficient.
Another naive and bad solution might be the following:
@Transactional
@RequestMapping(value = "/mywebapp/updateUser")
public void updateUser(int userId, String[] fieldNames, String[] values) {
    User user = this.userDao.findById(userId);
    for (int i = 0; i < fieldNames.length; i++) {
        switch (fieldNames[i]) {
            case "age":
                user.setAge(values[i]);
                break;
            case "firstName":
                user.setFirstName(values[i]);
                break;
            // ...
        }
    }
}
Obviously these solutions aren't serious; there must be a more robust/generic way of doing this (reflection maybe).
Any ideas?
I once did this generically using Jackson. It has a very convenient ObjectMapper.readerForUpdating(Object) method that can read values from a JsonNode/tree onto an existing object.
The controller/service
@PATCH
@Transactional
public DomainObject partialUpdate(Long id, JsonNode data) throws IOException {
    DomainObject o = repository.get(id);
    return objectMapper.readerForUpdating(o).readValue(data);
}
That was it. We used Jersey to expose the services as REST web services, hence the @PATCH annotation.
As to whether this is a controller or a service: it handles raw transfer data (the JsonNode), but to work efficiently it needs to be transactional. (Changes made by the reader are flushed to the database when the transaction commits; reading the object in the same transaction allows Hibernate to dynamically update only the changed fields.)
If your User entity doesn't contain any security fields like login or password, you can simply use it as a model attribute. In that case all fields will be updated automatically from the form inputs; fields that are not supposed to be updated, like id, should be hidden fields on the form.
If you don't want to expose all your entity properties to the presentation layer, you can use a POJO (aka command) to map all the needed fields from the User entity.
BTW, it is really bad practice to make your controller methods transactional. You should separate your application layers: you need a service layer, which is where the @Transactional annotation belongs. You do all the logic there before the CRUD operations.
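A sketch of that layering (illustrative names, not a definitive implementation):

@Service
public class UserService {

    private final UserDao userDao;

    public UserService(UserDao userDao) {
        this.userDao = userDao;
    }

    @Transactional // the transaction boundary belongs here, not in the controller
    public void updateUser(int userId, Map<String, String> changes) {
        User user = userDao.findById(userId);
        // apply 'changes' via your chosen mechanism (e.g. Jackson's readerForUpdating above),
        // run validation/business logic, then persist
        userDao.update(user);
    }
}

@RestController
public class UserController {

    private final UserService userService;

    public UserController(UserService userService) {
        this.userService = userService;
    }

    @RequestMapping(value = "/mywebapp/updateUser")
    public void updateUser(@RequestParam int userId, @RequestBody Map<String, String> changes) {
        userService.updateUser(userId, changes); // the controller only delegates
    }
}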