I'm trying to implement a partial update of the Manager entity based on the following:
Entity
public class Manager {
private int id;
private String firstname;
private String lastname;
private String username;
private String password;
// getters and setters omitted
}
SaveManager method in Controller
@RequestMapping(value = "/save", method = RequestMethod.PATCH)
public @ResponseBody void saveManager(@RequestBody Manager manager) {
    managerService.saveManager(manager);
}
saveManager method in the DAO implementation
@Override
public void saveManager(Manager manager) {
    sessionFactory.getCurrentSession().saveOrUpdate(manager);
}
When I save the object, the username and password are updated correctly, but the other values end up empty.
So what I need is to update the username and password while keeping all the remaining data.
If you are truly using a PATCH, then you should use RequestMethod.PATCH, not RequestMethod.POST.
Your patch mapping should contain the id with which you can retrieve the Manager object to be patched. Also, it should only include the fields you want to change. In your example you are sending the entire entity, so you can't discern which fields are actually changing (does empty mean leave this field alone, or actually change its value to empty?).
Perhaps an implementation as such is what you're after?
@RequestMapping(value = "/manager/{id}", method = RequestMethod.PATCH)
public @ResponseBody void saveManager(@PathVariable Long id, @RequestBody Map<Object, Object> fields) {
    Manager manager = someServiceToLoadManager(id);
    // map key is the field name, value is the new field value
    fields.forEach((k, v) -> {
        // use reflection to get field k on the manager and set it to value v
        Field field = ReflectionUtils.findField(Manager.class, (String) k);
        field.setAccessible(true);
        ReflectionUtils.setField(field, manager, v);
    });
    managerService.saveManager(manager);
}
Update
I want to provide an update to this post as there is now a project that simplifies the patching process.
The artifact is
<dependency>
<groupId>com.github.java-json-tools</groupId>
<artifactId>json-patch</artifactId>
<version>1.13</version>
</dependency>
The implementation to patch the Manager object in the OP would look like this:
Controller
@Operation(summary = "Patch a Manager")
@PatchMapping("/{managerId}")
public Manager patchManager(@PathVariable Long managerId, @RequestBody JsonPatch jsonPatch)
        throws JsonPatchException, JsonProcessingException {
    return managerService.patch(managerId, jsonPatch);
}
Service
public Manager patch(Long managerId, JsonPatch jsonPatch) throws JsonPatchException, JsonProcessingException {
    Manager manager = managerRepository.findById(managerId).orElseThrow(EntityNotFoundException::new);
    JsonNode patched = jsonPatch.apply(objectMapper.convertValue(manager, JsonNode.class));
    return managerRepository.save(objectMapper.treeToValue(patched, Manager.class));
}
The patch request follows the JSON Patch specification, RFC 6902, so this is a true PATCH implementation; details can be found in the RFC.
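For illustration, here is a rough sketch (mine, not from the original answer) of what such a patch document for the Manager above could look like and how the json-patch library applies it. It assumes the ObjectMapper and the loaded manager from the service above, and exception handling is omitted:

String patchDocument = "["
        + "{\"op\": \"replace\", \"path\": \"/username\", \"value\": \"newUsername\"},"
        + "{\"op\": \"replace\", \"path\": \"/password\", \"value\": \"newPassword\"}"
        + "]"; // sent with Content-Type: application/json-patch+json

// parse the patch and apply it to a JSON view of the entity, then map it back
JsonPatch jsonPatch = JsonPatch.fromJson(objectMapper.readTree(patchDocument));
JsonNode patched = jsonPatch.apply(objectMapper.convertValue(manager, JsonNode.class));
Manager patchedManager = objectMapper.treeToValue(patched, Manager.class);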
With this, you can patch your changes
1. Autowire `ObjectMapper` in controller;
2. @PatchMapping("/manager/{id}")
   ResponseEntity<?> saveManager(@RequestBody Map<String, String> manager) {
       Manager toBePatchedManager = objectMapper.convertValue(manager, Manager.class);
       managerService.patch(toBePatchedManager);
       return ResponseEntity.ok().build(); // return whatever response is appropriate
   }
3. Create new method `patch` in `ManagerService`
4. Autowire `NullAwareBeanUtilsBean` in `ManagerService`
5. public void patch(Manager toBePatched) {
       Optional<Manager> optionalManager = managerRepository.findById(toBePatched.getId());
       if (optionalManager.isPresent()) {
           Manager fromDb = optionalManager.get();
           // bean utils will copy the non-null values from toBePatched onto the fromDb manager
           beanUtils.copyProperties(fromDb, toBePatched);
           updateManager(fromDb);
       }
   }
You will have to extend BeanUtilsBean to implement the copy-only-non-null-values behaviour.
public class NullAwareBeanUtilsBean extends BeanUtilsBean {

    @Override
    public void copyProperty(Object dest, String name, Object value)
            throws IllegalAccessException, InvocationTargetException {
        if (value == null) {
            return;
        }
        super.copyProperty(dest, name, value);
    }
}
and finally, mark NullAwareBeanUtilsBean as @Component
or
register NullAwareBeanUtilsBean as a bean:
@Bean
public NullAwareBeanUtilsBean nullAwareBeanUtilsBean() {
    return new NullAwareBeanUtilsBean();
}
First, you need to know if you are doing an insert or an update. Insert is straightforward. On update, use get() to retrieve the entity. Then update whatever fields. At the end of the transaction, Hibernate will flush the changes and commit.
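A minimal sketch of that load-then-modify approach for the Manager and DAO in the question (my own illustration; it assumes the call runs inside a transaction, and the choice of fields is just an example):

@Override
public void saveManager(Manager incoming) {
    Session session = sessionFactory.getCurrentSession();
    Manager persistent = session.get(Manager.class, incoming.getId());
    if (persistent == null) {
        session.save(incoming); // insert is straightforward
        return;
    }
    // update only the fields you actually want to change
    persistent.setUsername(incoming.getUsername());
    persistent.setPassword(incoming.getPassword());
    // no explicit save needed: the managed entity is flushed and committed at the end of the transaction
}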
You can write custom update query which updates only particular fields:
@Override
public void saveManager(Manager manager) {
    Query query = sessionFactory.getCurrentSession().createQuery(
            "update Manager set username = :username, password = :password where id = :id");
    query.setParameter("username", manager.getUsername());
    query.setParameter("password", manager.getPassword());
    query.setParameter("id", manager.getId());
    query.executeUpdate();
}
ObjectMapper.updateValue provides all you need to partially map your entity with values from a DTO.
In addition, you can use either of the two here: Map<String, Object> fields or String json, so your service method may look like this:
@Autowired
private ObjectMapper objectMapper;

@Override
@Transactional
public Foo save(long id, Map<String, Object> fields) throws JsonMappingException {
    Foo foo = fooRepository.findById(id)
            .orElseThrow(() -> new ResourceNotFoundException("Foo not found for this id: " + id));
    return objectMapper.updateValue(foo, fields);
}
As a second solution and addition to Lane Maxwell's answer you could use Reflection to map only properties that exist in a Map of values that was sent, so your service method may look like this:
@Override
@Transactional
public Foo save(long id, Map<String, Object> fields) {
    Foo foo = fooRepository.findById(id)
            .orElseThrow(() -> new ResourceNotFoundException("Foo not found for this id: " + id));
    fields.forEach((key, value) -> {
        // look up the setter for the property; the value's runtime type has to match the setter's parameter type
        Method method = ReflectionUtils.findMethod(Foo.class, "set" + StringUtils.capitalize(key), value.getClass());
        if (method != null) {
            ReflectionUtils.invokeMethod(method, foo, value);
        }
    });
    return foo;
}
The second solution lets you insert additional business logic into the mapping process, such as conversions or calculations.
Also, unlike finding the field by name with Field field = ReflectionUtils.findField(Foo.class, k); and then making it accessible, finding the property's setter actually invokes the setter method, which may contain additional logic, and it avoids writing values directly into private fields.
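For instance, a hypothetical setter like the following (the names are mine, not from the post) would be bypassed entirely by field-level reflection, while the setter-based approach keeps its validation and normalization:

public class Foo {

    private String email;

    public void setEmail(String email) {
        // validation and normalization that ReflectionUtils.setField() on the private field would skip
        if (email != null && !email.contains("@")) {
            throw new IllegalArgumentException("Invalid email: " + email);
        }
        this.email = email == null ? null : email.toLowerCase();
    }

    public String getEmail() {
        return email;
    }
}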
I am attempting to use the DataLoader feature within the graphql-java-kickstart library:
https://github.com/graphql-java-kickstart
My application is a Spring Boot application using 2.3.0.RELEASE, and I am using version 7.0.1 of the graphql-spring-boot-starter library.
The library is pretty easy to use and it works when I don't use the data loader. However, I am plagued by the N+1 SQL problem and as a result need to use the data loader to help alleviate this issue. When I execute a request, I end up getting this:
Can't resolve value (/findAccountById[0]/customers) : type mismatch error, expected type LIST got class com.daluga.api.account.domain.Customer
I am sure I am missing something in the configuration but really don't know what that is.
Here is my graphql schema:
type Account {
id: ID!
accountNumber: String!
customers: [Customer]
}
type Customer {
id: ID!
fullName: String
}
I have created a CustomGraphQLContextBuilder:
@Component
public class CustomGraphQLContextBuilder implements GraphQLServletContextBuilder {

    private final CustomerRepository customerRepository;

    public CustomGraphQLContextBuilder(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    @Override
    public GraphQLContext build(HttpServletRequest httpServletRequest, HttpServletResponse httpServletResponse) {
        return DefaultGraphQLServletContext.createServletContext(buildDataLoaderRegistry(), null).with(httpServletRequest).with(httpServletResponse).build();
    }

    @Override
    public GraphQLContext build(Session session, HandshakeRequest handshakeRequest) {
        return DefaultGraphQLWebSocketContext.createWebSocketContext(buildDataLoaderRegistry(), null).with(session).with(handshakeRequest).build();
    }

    @Override
    public GraphQLContext build() {
        return new DefaultGraphQLContext(buildDataLoaderRegistry(), null);
    }

    private DataLoaderRegistry buildDataLoaderRegistry() {
        DataLoaderRegistry dataLoaderRegistry = new DataLoaderRegistry();
        dataLoaderRegistry.register("customerDataLoader",
                new DataLoader<Long, Customer>(accountIds ->
                        CompletableFuture.supplyAsync(() ->
                                customerRepository.findCustomersByAccountIds(accountIds), new SyncTaskExecutor())));
        return dataLoaderRegistry;
    }
}
I have also created an AccountResolver:
public CompletableFuture<List<Customer>> customers(Account account, DataFetchingEnvironment dfe) {
final DataLoader<Long, List<Customer>> dataloader = ((GraphQLContext) dfe.getContext())
.getDataLoaderRegistry().get()
.getDataLoader("customerDataLoader");
return dataloader.load(account.getId());
}
And here is the Customer Repository:
public List<Customer> findCustomersByAccountIds(List<Long> accountIds) {
Instant begin = Instant.now();
MapSqlParameterSource namedParameters = new MapSqlParameterSource();
String inClause = getInClauseParamFromList(accountIds, namedParameters);
String sql = StringUtils.replace(SQL_FIND_CUSTOMERS_BY_ACCOUNT_IDS,"__ACCOUNT_IDS__", inClause);
List<Customer> customers = jdbcTemplate.query(sql, namedParameters, new CustomerRowMapper());
Instant end = Instant.now();
LOGGER.info("Total Time in Millis to Execute findCustomersByAccountIds: " + Duration.between(begin, end).toMillis());
return customers;
}
I can put a breakpoint in the Customer Repository, see the SQL execute, and it returns a List of Customer objects. You can also see that the schema wants an array of customers. If I remove the code above and instead fetch the customers one by one in the resolver, it works, but it is really slow.
What am I missing in the configuration that would cause this?
Can't resolve value (/findAccountById[0]/customers) : type mismatch error, expected type LIST got class com.daluga.api.account.domain.Customer
Thanks for your help!
Dan
Thanks, @Bms bharadwaj! The issue was on my side, in understanding how the data is returned by the data loader. I ended up using a MappedBatchLoader to bring the data in a map, with the accountId as the key.
private DataLoader<Long, List<Customer>> getCustomerDataLoader() {
    MappedBatchLoader<Long, List<Customer>> customerMappedBatchLoader = accountIds -> CompletableFuture.supplyAsync(() -> {
        List<Customer> customers = customerRepository.findCustomersByAccountId(accountIds);
        Map<Long, List<Customer>> groupByAccountId = customers.stream()
                .collect(Collectors.groupingBy(cust -> cust.getAccountId()));
        return groupByAccountId;
    });
    return DataLoader.newMappedDataLoader(customerMappedBatchLoader);
}
This seems to have done the trick, because before I was issuing hundreds of SQL statements and now I am down to 2 (one for the driver SQL...accounts and one for the customers).
In the CustomGraphQLContextBuilder,
I think you should have registered the DataLoader as :
...
dataLoaderRegistry.register("customerDataLoader",
new DataLoader<Long, List<Customer>>(accountIds ->
...
because, you are expecting a list of Customers for one account Id.
That should work I guess.
Hello everyone, I'm new to the Spring world. I want to know how we can use a converter to update an object instead of updating each field one by one with get and set. Right now in my controller I have:
@PostMapping("/edit/{userId}")
public Customer updateCustomer(@RequestBody Customer newCustomer, @PathVariable final String userId) {
    return customerService.update(userId, newCustomer);
}
and this is how I'm updating the customer object :
@Override
public Customer update(String id, Customer newCustomer) {
    Customer customer = customerRepository.findById(id).get();
    customer.setFirstName(newCustomer.getFirstName());
    customer.setLastName(newCustomer.getLastName());
    customer.setEmail(newCustomer.getEmail());
    return customerRepository.save(customer);
}
Instead of using set and get each time, I want to use a converter.
The approach of passing the entity's id as a path variable when you're updating it isn't really right. Think about it: you have a @RequestBody, so why don't you include the id inside this body too? Why do you want to specify a path variable for it?
Now, if you have the full Customer with its id from the body, you don't have to make any calls to your repository, because Hibernate already adds it to a persistent state based on its id, and a simple
public Customer update(Customer newCustomer) {
    return customerRepository.save(newCustomer);
}
should work.
Q: What is a persistent state?
A: A persistent entity has been associated with a database table row and it's being managed by the current running Persistence Context. (customerRepository.findById() just asks the DB whether the entity with the specified id exists and adds it to a persistent state. Hibernate manages all of this if you have an @Id annotated field and it is filled; in other words:
Customer customer = new Customer();
customer.setId(1);
is ALMOST the same thing as :
Customer customer = customerRepository.findById(1).get();
)
TIP: Anyway, you shouldn't (if you didn't know) expose a model/entity in the controller layer. Why? Let's say your Customer model can have multiple permissions. One possible structure could look like this:
@Entity
public class Customer {
    // private fields here;
    @OneToMany(mappedBy = "customer" /* other configs here */)
    private List<Permission> permissions;
}
and
@Entity
public class Permission {
    @Id
    private Long id;
    private String name;
    private String creationDate;
    @ManyToOne(/* configs here */)
    private Customer customer;
}
You can see that you have a cross-reference between the Customer and Permission entities, which, when serialized to JSON, will eventually lead to a stack overflow exception (if you don't understand this, think about a recursive function that has no stopping condition and is called over and over again => stack overflow; the same thing is happening here).
What can you do? Create a so-called DTO class that you want the client to receive instead of the model. How can you create this DTO? Think about what the user NEEDS to know.
1) Is "creationDate" from Permission a necessary field for the user? Not really.
2) Is "id" from Permission a necessary field for the user? In some cases yes, in others, not.
A possible CustomerDTO could look like this:
public class CustomerDTO
{
private String firstName;
private String lastName;
private List<String> permissions;
}
and you can notice that I'm using a List<String> instead of List<Permission> for the customer's permissions, which are in fact the permissions' names.
public CustomerDTO convertModelToDto(Customer customer) {
    // hard way
    CustomerDTO customerDTO = new CustomerDTO();
    customerDTO.setFirstName(customer.getFirstName());
    customerDTO.setLastName(customer.getLastName());
    customerDTO.setPermissions(
            customer.getPermissions()
                    .stream()
                    .map(permission -> permission.getName())
                    .collect(Collectors.toList())
    );
    // easy way => using a ModelMapper
    customerDTO = modelMapper.map(customer, CustomerDTO.class);
    return customerDTO;
}
Use ModelMapper to map one model into another.
First define a function that can map source data into the target model. Use it as a utility whenever you want.
public static <T> void merge(T source, T target) {
    ModelMapper modelMapper = new ModelMapper();
    modelMapper.getConfiguration().setMatchingStrategy(MatchingStrategies.STRICT);
    modelMapper.map(source, target);
}
Use merge for mapping data
Customer customer = customerRepository.findById(id).get();
merge(newCustomer, customer);
customerRepository.save(customer);
Add the ModelMapper dependency in pom.xml:
<dependency>
<groupId>org.modelmapper</groupId>
<artifactId>modelmapper</artifactId>
<version>2.3.4</version>
</dependency>
I would like to have Documents stored with an UUID id and createdAt / updatedAt fields. My solution was working with Spring Boot 2.1.x. After I upgraded from Spring Boot 2.1.11.RELEASE to 2.2.0.RELEASE my test for MongoAuditing failed with createdAt = null. What do I need to do to get the createdAt field filled again?
This is not just a test problem. I ran the application and it has the same behaviour as my test. All auditing fields stay null.
I have a Configuration to enable MongoAuditing and UUID generation:
@Configuration
@EnableMongoAuditing
public class MongoConfiguration {

    @Bean
    public GenerateUUIDListener generateUUIDListener() {
        return new GenerateUUIDListener();
    }
}
The listener hooks into onBeforeConvert - I guess that's where the trouble starts.
public class GenerateUUIDListener extends AbstractMongoEventListener<IdentifiableEntity> {

    @Override
    public void onBeforeConvert(BeforeConvertEvent<IdentifiableEntity> event) {
        IdentifiableEntity entity = event.getSource();
        if (entity.isNew()) {
            entity.setId(UUID.randomUUID());
        }
    }
}
The document itself (I dropped the getter and setters):
@Document
public class MyDocument extends InsertableEntity {
    private String name;
}

public abstract class InsertableEntity extends IdentifiableEntity {
    @CreatedDate
    @JsonIgnore
    private Instant createdAt;
}

public abstract class IdentifiableEntity implements Persistable<UUID> {
    @Id
    private UUID id;

    @JsonIgnore
    public boolean isNew() {
        return getId() == null;
    }
}
A complete minimal example (including a test) can be found here: https://github.com/mab/auditable
With 2.1.11.RELEASE the test succeeds; with 2.2.0.RELEASE it fails.
For me the best solution was to switch from event-based UUID generation to a callback-based one. By implementing Ordered we can make the new callback execute after the AuditingEntityCallback.
public class IdEntityCallback implements BeforeConvertCallback<IdentifiableEntity>, Ordered {

    @Override
    public IdentifiableEntity onBeforeConvert(IdentifiableEntity entity, String collection) {
        if (entity.isNew()) {
            entity.setId(UUID.randomUUID());
        }
        return entity;
    }

    @Override
    public int getOrder() {
        return 101;
    }
}
I registered the callback in the MongoConfiguration. For a more general solution you might want to take a look at how the AuditingEntityCallback is registered by the `MongoAuditingBeanDefinitionParser`.
@Configuration
@EnableMongoAuditing
public class MongoConfiguration {

    @Bean
    public IdEntityCallback registerCallback() {
        return new IdEntityCallback();
    }
}
MongoTemplate works in the following way on doInsert():
this.maybeEmitEvent - emits an event (onBeforeConvert, onBeforeSave and such) so that any AbstractMappingEventListener can catch and act upon it, like you did with GenerateUUIDListener
this.maybeCallBeforeConvert - calls the before-convert callbacks, like the Mongo auditing one
as you can see in the source code of MongoTemplate (lines 831-832):
protected <T> T doInsert(String collectionName, T objectToSave, MongoWriter<T> writer) {
    BeforeConvertEvent<T> event = new BeforeConvertEvent(objectToSave, collectionName);
    T toConvert = ((BeforeConvertEvent) this.maybeEmitEvent(event)).getSource(); // emit event
    toConvert = this.maybeCallBeforeConvert(toConvert, collectionName); // call some before-convert handlers
    ...
}
Mongo auditing marks createdAt only on new entities, by checking whether entity.isNew() == true.
Because your code (the UUID listener) already set the id, the entity is not considered new and createdAt is not populated.
You can do the following (ordered from best to worst):
1. Forget about the UUID and use String for your id; let Mongo itself create and manage its entities' ids (this is how MongoTemplate actually works, lines 811-812).
2. Keep the UUID at the code level and convert from/to String when inserting into and retrieving from the db.
3. Create a custom repository like in this post.
4. Stay with 2.1.11.RELEASE.
5. Set createdAt/updatedAt in GenerateUUIDListener as well as the id (rename it NewEntityListener or something), i.e. implement the auditing yourself.
6. Implement a new isNew() logic that doesn't depend only on the entity id (see the sketch after this list).
In version 2.1.11.RELEASE the order of the two methods was flipped (MongoTemplate.class 804-805), so your code worked fine.
As a general point, the nature of an event is to be sort of send-and-forget (async compatible), so it's a very bad practice to change the object itself; there is NO guarantee for the order of computation, if any.
This is why the auditing is built on callbacks and not events, and that's why Pivotal doesn't (need to) keep the order stable between versions.
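A minimal sketch of option 6 above (my own illustration, not from the answer), assuming the entity keeps a transient flag that the UUID listener/callback sets right before it generates the new id:

public abstract class IdentifiableEntity implements Persistable<UUID> {

    @Id
    private UUID id;

    // not persisted; only true while the instance lives in memory before its first save
    @Transient
    private boolean newEntity;

    public void markNew() {
        this.newEntity = true;
    }

    @Override
    public boolean isNew() {
        // no longer depends only on the id, so auditing still treats the entity as new
        return newEntity || getId() == null;
    }

    public UUID getId() {
        return id;
    }

    public void setId(UUID id) {
        this.id = id;
    }
}

The GenerateUUIDListener (or the callback variant) would then call markNew() before setId(UUID.randomUUID()).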
I am using r2dbc, r2dbc-h2 and the experimental spring-boot-starter-data-r2dbc
implementation 'org.springframework.boot.experimental:spring-boot-starter-data-r2dbc:0.1.0.M1'
implementation 'org.springframework.data:spring-data-r2dbc:1.0.0.RELEASE' // starter-data provides old version
implementation 'io.r2dbc:r2dbc-h2:0.8.0.RELEASE'
implementation 'io.r2dbc:r2dbc-pool:0.8.0.RELEASE'
I have created reactive repositories
public interface IJsonComparisonRepository extends ReactiveCrudRepository<JsonComparisonResult, String> {}
I also added a custom script that creates a table in H2 on startup
@SpringBootApplication
public class JsonComparisonApplication {

    public static void main(String[] args) {
        SpringApplication.run(JsonComparisonApplication.class, args);
    }

    @Bean
    public CommandLineRunner startup(DatabaseClient client) {
        return (args) -> client
                .execute(() -> {
                    var resource = new ClassPathResource("ddl/script.sql");
                    try (var is = new InputStreamReader(resource.getInputStream())) {
                        return FileCopyUtils.copyToString(is);
                    } catch (IOException e) {
                        throw new RuntimeException(e);
                    }
                })
                .then()
                .block();
    }
}
My r2dbc configuration looks like this
@Configuration
@EnableR2dbcRepositories
public class R2dbcConfiguration extends AbstractR2dbcConfiguration {

    @Override
    public ConnectionFactory connectionFactory() {
        return new H2ConnectionFactory(
                H2ConnectionConfiguration.builder()
                        .url("mem:testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE")
                        .username("sa")
                        .build());
    }
}
My service where I perform the logic looks like this
@Override
public Mono<JsonComparisonResult> updateOrCreateRightSide(String comparisonId, String json) {
    return updateComparisonSide(comparisonId, storedComparisonResult -> {
        storedComparisonResult.setRightSide(json);
        return storedComparisonResult;
    });
}

private Mono<JsonComparisonResult> updateComparisonSide(String comparisonId,
        Function<JsonComparisonResult, JsonComparisonResult> updateSide) {
    return repository.findById(comparisonId)
            .defaultIfEmpty(createResult(comparisonId))
            .filter(result -> ComparisonDecision.NONE == result.getDecision()) // if not NONE - it means it was found and completed
            .switchIfEmpty(Mono.error(new NotUpdatableCompleteComparisonException(comparisonId)))
            .map(updateSide)
            .flatMap(repository::save);
}

private JsonComparisonResult createResult(String comparisonId) {
    LOGGER.info("Creating new comparison result: {}.", comparisonId);
    var newResult = new JsonComparisonResult();
    newResult.setDecision(ComparisonDecision.NONE);
    newResult.setComparisonId(comparisonId);
    return newResult;
}
The domain looks like this
@Table("json_comparison")
public class JsonComparisonResult {

    @Column("comparison_id")
    @Id
    private String comparisonId;

    @Column("left")
    private String leftSide;

    @Column("right")
    private String rightSide;

    // @Enumerated(EnumType.STRING) - no support for now
    @Column("decision")
    private ComparisonDecision decision;

    private String differences;
The problem is that when I try to add any object to the database it fails with the exception
org.springframework.dao.TransientDataAccessResourceException: Failed to update table [json_comparison]. Row with Id [4] does not exist.
at org.springframework.data.r2dbc.repository.support.SimpleR2dbcRepository.lambda$save$0(SimpleR2dbcRepository.java:91) ~[spring-data-r2dbc-1.0.0.RELEASE.jar:1.0.0.RELEASE]
at reactor.core.publisher.FluxHandle$HandleSubscriber.onNext(FluxHandle.java:96) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:73) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.MonoUsingWhen$MonoUsingWhenSubscriber.deferredComplete(MonoUsingWhen.java:276) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.FluxUsingWhen$CommitInner.onComplete(FluxUsingWhen.java:536) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onComplete(Operators.java:1858) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.Operators.complete(Operators.java:132) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.MonoEmpty.subscribe(MonoEmpty.java:45) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
For some reason, during save in the SimpleR2dbcRepository library class, it doesn't consider the objectToSave as new, but then the update fails because the row doesn't actually exist.
// SimpleR2dbcRepository#save
@Override
@Transactional
public <S extends T> Mono<S> save(S objectToSave) {
    Assert.notNull(objectToSave, "Object to save must not be null!");
    if (this.entity.isNew(objectToSave)) { // not new
        ....
    }
}
Why is it happening and what is the problem?
TL;DR: How should Spring Data know if your object is new or whether it should exist?
Relational Spring Data repositories (both JDBC and R2DBC) must decide on [Reactive]CrudRepository.save(…) whether the given object is new or whether it already exists in your database. Performing a save(…) operation results in either an INSERT or an UPDATE statement. Issuing the wrong statement causes either a primary key violation or a no-op, as standard SQL has no way to express an upsert.
Spring Data JDBC|R2DBC use by default the presence/absence of the @Id value. Generated primary keys are a widely used mechanism. If the primary key is provided, the entity is considered existing. If the id value is null, the entity is considered new.
Read more in the reference documentation about Entity State Detection Strategies.
You have to implement Persistable because you've provided the @Id. The library needs to figure out whether the row is new or whether it should already exist. If your entity implements Persistable, then save(…) will use the outcome of isNew() to determine whether to issue an INSERT or an UPDATE.
For example:
public class Product implements Persistable<Integer> {

    @Id
    private Integer id;
    private String description;
    private Double price;

    @Transient
    private boolean newProduct;

    @Override
    @Transient
    public boolean isNew() {
        return this.newProduct || id == null;
    }

    public Product setAsNew() {
        this.newProduct = true;
        return this;
    }
}
Maybe you should consider this:
Choose the data type of your id/primary key as INT/LONG and set it to AUTO_INCREMENT (something like below):
CREATE TABLE PRODUCT(id INT PRIMARY KEY AUTO_INCREMENT NOT NULL, modelname VARCHAR(30) , year VARCHAR(4), owner VARCHAR(50));
In your POST request body, do not include the id field.
Removing the @Id value made it issue an INSERT statement.
I have a Spring managed bean...
@Component("Foobean")
@Scope("prototype")
public class Foobean {

    private String bar1;
    private String bar2;

    public String getBar1() {
        return bar1;
    }

    public void setBar1(String bar1) {
        this.bar1 = bar1;
    }

    public String getBar2() {
        return bar2;
    }

    public void setBar2(String bar2) {
        this.bar2 = bar2;
    }
}
...and because I am using a Dojo Dgrid to display an ArrayList of this bean, I am returning it from the controller as a JSON string:
@Controller
@RequestMapping("/bo")
public class FooController {

    @Autowired
    private FooService fooService;

    @RequestMapping("action=getListOfFoos*")
    @ResponseBody
    public String clickDisplayFoos(Map<String, Object> model) {
        List<Foobean> foobeans = fooService.getFoobeans();
        ObjectMapper objMapper = new ObjectMapper();
        String FooJson = null;
        try {
            FooJson = objMapper.writeValueAsString(foobeans);
        } catch (JsonGenerationException e) {
            // etc.
        }
However, my grid needs an additional column which will contain a valid action for each Foo; that action is not really dependent on any data in individual Foos -- they'll all have the same valid action -- repeated on each line of the resulting DGrid -- but that value is actually dependent upon security roles on the session...which can't be sent to the front end in a Json. So, my solution is twofold:
First I need to add a "virtual" Json property to the bean... which I can do in the bean with @JsonProperty on a method...
@JsonProperty("validActions")
public String writeValidActions() {
    return "placeHolderForSerializerToChange";
}
...but it just generates a placeholder. To really generate a valid value, I need to reference the security role of the session, which I am very reluctant to code in the above method. (A service call in the domain bean itself? Seems very wrong.) I think I should create a custom serializer and put the logic -- and the reference to the Session.Security role -- in there. Are my instincts right, not to inject session info into a domain bean method? And if so, what would such a custom serializer look like?
Yes, I wouldn't put session info into the domain or access the session directly in my domain.
Unless there is a specific reason, you could simply add the logic in your action class.
public String clickDisplayFoos() {
    List<Foo> foos = service.getFoos();
    for (Foo foo : foos) {
        foo.setValidAction(session.hasSecurityRole());
    }
    String json = objMapper.writeValueAsString(foos);
    return json;
}
I don't like the idea of setting new values as part of the serialization process. I feel custom serializers are meant to transform the representation of a particular property rather than add new values to a property.
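For completeness, a property-level serializer in that spirit (a hedged sketch of my own; the class and field names are hypothetical) only reshapes a value that already exists on the bean, rather than injecting new data:

public class MaskingSerializer extends JsonSerializer<String> {

    @Override
    public void serialize(String value, JsonGenerator gen, SerializerProvider serializers)
            throws IOException {
        // write only the last 4 characters, mask the rest
        int visible = Math.min(4, value.length());
        gen.writeString("****" + value.substring(value.length() - visible));
    }
}

// usage on a bean property:
// @JsonSerialize(using = MaskingSerializer.class)
// private String accountNumber;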