I need to map the result of a native query to an object.
Here is my native query:
@Query(value = "select collector from relation;", nativeQuery = true)
Stream<RelationStatistics> findRelationsStatistics();
Here is my object
public class RelationStatistics {

    private String collector;

    public RelationStatistics(String collector) {
        this.collector = collector;
    }

    public String getCollector() {
        return collector;
    }

    public void setCollector(String collector) {
        this.collector = collector;
    }
}
Here is my test
@Test
public void test() {
    Stream<RelationStatistics> test = relations.findRelationsStatistics();
    test.forEach(item -> System.out.println(item));
}
This test returns:
org.springframework.core.convert.ConverterNotFoundException: No converter found capable of converting from type [org.springframework.data.jpa.repository.query.AbstractJpaQuery$TupleConverter$TupleBackedMap] to type [RelationStatistics]
This is an example with only one String attribute, but the original native query is a big request, so creating an entity would be too difficult.
I found SqlResultSetMapping but I don't really understand how to use it properly.
If someone has an idea of what's possible to do 0_o
I found a solution here: JPA : How to convert a native query result set to POJO class collection
Use a projection: an interface with a getCollector() method serves as the return type, and Spring Data maps each row to it.
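For reference, a minimal sketch of that projection approach (the interface and repository names here are placeholders, not from the original code):

// Interface-based projection: Spring Data maps each result row to this interface,
// matching getters to the column aliases of the native query.
public interface RelationStatisticsView {
    String getCollector(); // matches the "collector" column
}

public interface RelationRepository extends JpaRepository<Relation, Long> {

    @Query(value = "select collector from relation", nativeQuery = true)
    Stream<RelationStatisticsView> findRelationsStatistics();
}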
I am attempting to use the DataLoader feature within the graphql-java-kickstart library:
https://github.com/graphql-java-kickstart
My application is a Spring Boot application using 2.3.0.RELEASE, and I am using version 7.0.1 of the graphql-spring-boot-starter library.
The library is pretty easy to use and it works when I don't use the data loader. However, I am plagued by the N+1 SQL problem and as a result need to use the data loader to help alleviate this issue. When I execute a request, I end up getting this:
Can't resolve value (/findAccountById[0]/customers) : type mismatch error, expected type LIST got class com.daluga.api.account.domain.Customer
I am sure I am missing something in the configuration but really don't know what that is.
Here is my graphql schema:
type Account {
    id: ID!
    accountNumber: String!
    customers: [Customer]
}

type Customer {
    id: ID!
    fullName: String
}
I have created a CustomGraphQLContextBuilder:
@Component
public class CustomGraphQLContextBuilder implements GraphQLServletContextBuilder {

    private final CustomerRepository customerRepository;

    public CustomGraphQLContextBuilder(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    @Override
    public GraphQLContext build(HttpServletRequest httpServletRequest, HttpServletResponse httpServletResponse) {
        return DefaultGraphQLServletContext.createServletContext(buildDataLoaderRegistry(), null).with(httpServletRequest).with(httpServletResponse).build();
    }

    @Override
    public GraphQLContext build(Session session, HandshakeRequest handshakeRequest) {
        return DefaultGraphQLWebSocketContext.createWebSocketContext(buildDataLoaderRegistry(), null).with(session).with(handshakeRequest).build();
    }

    @Override
    public GraphQLContext build() {
        return new DefaultGraphQLContext(buildDataLoaderRegistry(), null);
    }

    private DataLoaderRegistry buildDataLoaderRegistry() {
        DataLoaderRegistry dataLoaderRegistry = new DataLoaderRegistry();
        dataLoaderRegistry.register("customerDataLoader",
            new DataLoader<Long, Customer>(accountIds ->
                CompletableFuture.supplyAsync(() ->
                    customerRepository.findCustomersByAccountIds(accountIds), new SyncTaskExecutor())));
        return dataLoaderRegistry;
    }
}
I have also created an AccountResolver:
public CompletableFuture<List<Customer>> customers(Account account, DataFetchingEnvironment dfe) {
    final DataLoader<Long, List<Customer>> dataloader = ((GraphQLContext) dfe.getContext())
            .getDataLoaderRegistry().get()
            .getDataLoader("customerDataLoader");
    return dataloader.load(account.getId());
}
And here is the Customer Repository:
public List<Customer> findCustomersByAccountIds(List<Long> accountIds) {
    Instant begin = Instant.now();
    MapSqlParameterSource namedParameters = new MapSqlParameterSource();
    String inClause = getInClauseParamFromList(accountIds, namedParameters);
    String sql = StringUtils.replace(SQL_FIND_CUSTOMERS_BY_ACCOUNT_IDS, "__ACCOUNT_IDS__", inClause);
    List<Customer> customers = jdbcTemplate.query(sql, namedParameters, new CustomerRowMapper());
    Instant end = Instant.now();
    LOGGER.info("Total Time in Millis to Execute findCustomersByAccountIds: " + Duration.between(begin, end).toMillis());
    return customers;
}
I can put a breakpoint in the Customer Repository and see the SQL execute, and it returns a List of Customer objects. You can also see that the schema wants an array of customers. If I remove the code above and have the resolver fetch the customers one by one, it works, but it is really slow.
What am I missing in the configuration that would cause this?
Thanks for your help!
Dan
Thanks, @Bms bharadwaj! The issue was on my side, in understanding how the data is returned by the data loader. I ended up using a MappedBatchLoader to bring the data in as a map, with the accountId as the key.
private DataLoader<Long, List<Customer>> getCustomerDataLoader() {
    MappedBatchLoader<Long, List<Customer>> customerMappedBatchLoader = accountIds -> CompletableFuture.supplyAsync(() -> {
        List<Customer> customers = customerRepository.findCustomersByAccountIds(new ArrayList<>(accountIds));
        // group the flat result list by account id, so each key maps to its own customers
        Map<Long, List<Customer>> groupByAccountId = customers.stream()
                .collect(Collectors.groupingBy(Customer::getAccountId));
        return groupByAccountId;
    });
    return DataLoader.newMappedDataLoader(customerMappedBatchLoader);
}
This seems to have done the trick because before I was issuing hundreds of SQL statement and now down to 2 (one for the driver SQL...accounts and one for the customers).
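For completeness, the mapped loader can then be registered in the existing buildDataLoaderRegistry() method; a sketch, assuming the registry key stays the same:

private DataLoaderRegistry buildDataLoaderRegistry() {
    DataLoaderRegistry dataLoaderRegistry = new DataLoaderRegistry();
    // register the map-based loader instead of the plain list-based DataLoader
    dataLoaderRegistry.register("customerDataLoader", getCustomerDataLoader());
    return dataLoaderRegistry;
}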
In the CustomGraphQLContextBuilder,
I think you should have registered the DataLoader as :
...
dataLoaderRegistry.register("customerDataLoader",
new DataLoader<Long, List<Customer>>(accountIds ->
...
because you are expecting a list of Customers for one account id.
That should work, I guess.
I am using r2dbc, r2dbc-h2, and the experimental spring-boot-starter-data-r2dbc:
implementation 'org.springframework.boot.experimental:spring-boot-starter-data-r2dbc:0.1.0.M1'
implementation 'org.springframework.data:spring-data-r2dbc:1.0.0.RELEASE' // starter-data provides old version
implementation 'io.r2dbc:r2dbc-h2:0.8.0.RELEASE'
implementation 'io.r2dbc:r2dbc-pool:0.8.0.RELEASE'
I have created a reactive repository:
public interface IJsonComparisonRepository extends ReactiveCrudRepository<JsonComparisonResult, String> {}
I also added a custom script that creates a table in H2 on startup:
@SpringBootApplication
public class JsonComparisonApplication {

    public static void main(String[] args) {
        SpringApplication.run(JsonComparisonApplication.class, args);
    }

    @Bean
    public CommandLineRunner startup(DatabaseClient client) {
        return (args) -> client
            .execute(() -> {
                var resource = new ClassPathResource("ddl/script.sql");
                try (var is = new InputStreamReader(resource.getInputStream())) {
                    return FileCopyUtils.copyToString(is);
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            })
            .then()
            .block();
    }
}
My R2DBC configuration looks like this:
@Configuration
@EnableR2dbcRepositories
public class R2dbcConfiguration extends AbstractR2dbcConfiguration {

    @Override
    public ConnectionFactory connectionFactory() {
        return new H2ConnectionFactory(
            H2ConnectionConfiguration.builder()
                .url("mem:testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE")
                .username("sa")
                .build());
    }
}
My service where I perform the logic looks like this
@Override
public Mono<JsonComparisonResult> updateOrCreateRightSide(String comparisonId, String json) {
    return updateComparisonSide(comparisonId, storedComparisonResult -> {
        storedComparisonResult.setRightSide(json);
        return storedComparisonResult;
    });
}

private Mono<JsonComparisonResult> updateComparisonSide(String comparisonId,
        Function<JsonComparisonResult, JsonComparisonResult> updateSide) {
    return repository.findById(comparisonId)
        .defaultIfEmpty(createResult(comparisonId))
        .filter(result -> ComparisonDecision.NONE == result.getDecision()) // if not NONE - it means it was found and completed
        .switchIfEmpty(Mono.error(new NotUpdatableCompleteComparisonException(comparisonId)))
        .map(updateSide)
        .flatMap(repository::save);
}

private JsonComparisonResult createResult(String comparisonId) {
    LOGGER.info("Creating new comparison result: {}.", comparisonId);
    var newResult = new JsonComparisonResult();
    newResult.setDecision(ComparisonDecision.NONE);
    newResult.setComparisonId(comparisonId);
    return newResult;
}
The domain looks like this
@Table("json_comparison")
public class JsonComparisonResult {

    @Column("comparison_id")
    @Id
    private String comparisonId;

    @Column("left")
    private String leftSide;

    @Column("right")
    private String rightSide;

    // @Enumerated(EnumType.STRING) - no support for now
    @Column("decision")
    private ComparisonDecision decision;

    private String differences;
}
The problem is that when I try to add any object to the database, it fails with the exception:
org.springframework.dao.TransientDataAccessResourceException: Failed to update table [json_comparison]. Row with Id [4] does not exist.
at org.springframework.data.r2dbc.repository.support.SimpleR2dbcRepository.lambda$save$0(SimpleR2dbcRepository.java:91) ~[spring-data-r2dbc-1.0.0.RELEASE.jar:1.0.0.RELEASE]
at reactor.core.publisher.FluxHandle$HandleSubscriber.onNext(FluxHandle.java:96) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:73) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.MonoUsingWhen$MonoUsingWhenSubscriber.deferredComplete(MonoUsingWhen.java:276) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.FluxUsingWhen$CommitInner.onComplete(FluxUsingWhen.java:536) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onComplete(Operators.java:1858) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.Operators.complete(Operators.java:132) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.MonoEmpty.subscribe(MonoEmpty.java:45) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52) ~[reactor-core-3.3.1.RELEASE.jar:3.3.1.RELEASE]
For some reason, during save, the SimpleR2dbcRepository library class doesn't consider the objectToSave as new, but then the update fails because the row doesn't actually exist.
// SimpleR2dbcRepository#save
@Override
@Transactional
public <S extends T> Mono<S> save(S objectToSave) {
    Assert.notNull(objectToSave, "Object to save must not be null!");
    if (this.entity.isNew(objectToSave)) { // not new
        ....
    }
}
Why is this happening, and what is the problem?
TL;DR: How should Spring Data know if your object is new or whether it should exist?
Relational Spring Data repositories (both JDBC and R2DBC) must decide on [Reactive]CrudRepository.save(…) whether the given object is new or whether it already exists in your database. Performing a save(…) operation results in either an INSERT or an UPDATE statement. Issuing the wrong statement causes either a primary key violation or a no-op, as standard SQL has no way to express an upsert.
Spring Data JDBC and R2DBC use by default the presence/absence of the @Id value. Generated primary keys are a widely used mechanism: if the primary key is provided, the entity is considered existing; if the id value is null, the entity is considered new.
Read more in the reference documentation about Entity State Detection Strategies.
You have to implement Persistable because you've provided the @Id. The library needs to figure out whether the row is new or whether it should exist. If your entity implements Persistable, then save(…) will use the outcome of isNew() to determine whether to issue an INSERT or UPDATE.
For example:
public class Product implements Persistable<Integer> {

    @Id
    private Integer id;
    private String description;
    private Double price;

    @Transient
    private boolean newProduct;

    @Override
    @Transient
    public boolean isNew() {
        return this.newProduct || id == null;
    }

    public Product setAsNew() {
        this.newProduct = true;
        return this;
    }
}
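Usage then looks something like this (a sketch; the repository and the existing instance are assumed names):

Product product = new Product();
// ... populate fields, including a pre-assigned id ...
productRepository.save(product.setAsNew()); // isNew() == true, so an INSERT is issued

// an entity loaded from the database keeps newProduct == false
productRepository.save(existingProduct);    // isNew() == false, so an UPDATE is issued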
Maybe you should consider this:
Choose the data type of your id/primary key as INT/LONG and set it to AUTO_INCREMENT (something like below):
CREATE TABLE PRODUCT(id INT PRIMARY KEY AUTO_INCREMENT NOT NULL, modelname VARCHAR(30) , year VARCHAR(4), owner VARCHAR(50));
In your POST request body, do not include the id field.
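For example, the request body would then carry only the other columns (hypothetical values):

{
    "modelname": "Model S",
    "year": "2020",
    "owner": "John Doe"
}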
Removing the @Id value issued an INSERT statement.
So I'd like a "void repository" through which to gain access to stored procedures that don't necessarily operate on entities:
@Repository
public interface StoredProceduresRepository extends CrudRepository<Void, Long> {

    @Procedure("my_answer_giver")
    String getMyAnswer(@Param("input") String input);
}
But that does, of course, not work because the CrudRepository expects Void to be an entity.
Is there a way to use the @Procedure annotation without having to create dummy entities, or am I stuck with an implemented class that makes use of the EntityManager to query via prepared statements?
Because let's be honest, that's fugly:
@Repository
public class StoredProceduresRepository {

    @PersistenceContext
    EntityManager em;

    public String getMyAnswer(String input) {
        Query myAnswerGiver = em
            .createStoredProcedureQuery("my_answer_giver")
            .registerStoredProcedureParameter("input", String.class, ParameterMode.IN)
            .setParameter("input", input);
        Object result = ((Object[]) myAnswerGiver.getSingleResult())[0];
        return (String) result;
    }
}
If it is OK for you, you can use any entity you have in place of Void; the entity provided there should not matter:
public interface StoredProceduresRepository extends JpaRepository<SomeUnrelatedEntity, Long> {

    @Procedure("my_answer_giver")
    String getMyAnswer(@Param("input") String input);
}
I have been using it like this with database views.
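Invoking the procedure is then an ordinary repository call; a sketch with hypothetical wiring:

@Service
public class AnswerService {

    private final StoredProceduresRepository repository;

    public AnswerService(StoredProceduresRepository repository) {
        this.repository = repository;
    }

    public String answerFor(String input) {
        // delegates to the my_answer_giver stored procedure via @Procedure
        return repository.getMyAnswer(input);
    }
}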
We are working on a web application using Spring Data JPA with Hibernate.
In the application there is a compid field in each entity,
which means every DB call (through the Spring Data methods) has to be checked against compid.
I need a way for this where compid = ? check to be injected automatically into every find method,
so that we won't have to bother about the compid checks explicitly.
Is this possible to achieve with the Spring Data JPA framework?
Maybe Hibernate's @Where annotation will help you. It adds the given condition to any JPA queries related to the entity. For example:
@Entity
@Where(clause = "isDeleted='false'")
public class Customer {
    //...

    @Column
    private Boolean isDeleted;
}
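Applied to the question's compid it would look like the sketch below; note that @Where only accepts a static SQL fragment, so a per-request compid value needs the filter-based approach shown in the next answer (the entity name is hypothetical):

@Entity
@Where(clause = "compid = 1") // static fragment: the value cannot be bound per request
public class SomeCompanyScopedEntity {
    // ...
}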
Agree with Abhijit Sarkar.
You can achieve your goal with Hibernate listeners and aspects. I can suggest the following:
1. Create an annotation @Compable (or whatever you call it) to mark service methods.
2. Create a CompAspect, which should be a bean and an @Aspect. It should have something like this:
@Around("@annotation(compable)")
public Object enableClientFilter(ProceedingJoinPoint pjp, Compable compable) throws Throwable {
    Session session = (Session) em.getDelegate();
    try {
        if (session.isOpen()) {
            session.enableFilter("compid_filter_name")
                   .setParameter("comp_id", your_comp_id);
        }
        return pjp.proceed();
    } finally {
        if (session.isOpen()) {
            // disable the same filter that was enabled above
            session.disableFilter("compid_filter_name");
        }
    }
}
Here em is the EntityManager.
3. You also need to provide Hibernate filters. Using annotations, this can look like this:

@FilterDef(name = "compid_filter_name", parameters = @ParamDef(name = "comp_id", type = "java.lang.Long"))
@Filters(@Filter(name = "compid_filter_name", condition = "comp_id = :comp_id"))
So your where compid = ? condition will be applied inside the @Service method below:

@Compable
public void someServiceMethod() {
    List<YourEntity> l = someRepository.findAllWithNamesLike("test");
}
That's basically it for selects.
For updates/deletes, this scheme requires an EntityListener.
Like other people have said, there is no built-in method for this.
One option is to look at Query by Example; from the Spring Data documentation:
Person person = new Person();
person.setFirstname("Dave");
Example<Person> example = Example.of(person);
So you could default compid in the probe object, or in a parent JPA object; see the sketch below.
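A hypothetical sketch (setCompid, currentCompId, and personRepository are assumed names):

// The probe always carries the current compid, so every
// Example-based lookup is scoped to it.
Person probe = new Person();
probe.setCompid(currentCompId);
probe.setFirstname("Dave");
List<Person> people = personRepository.findAll(Example.of(probe));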
Another option is a custom repository.
I can contribute a 50% solution: 50% because it seems not easy to wrap query methods, and custom JPA queries are also an issue for this global approach. If the standard finders are sufficient, it is possible to extend your own SimpleJpaRepository:
public class CustomJpaRepositoryImpl<T, ID extends Serializable> extends SimpleJpaRepository<T, ID> {

    private JpaEntityInformation<T, ?> entityInformation;

    @Autowired
    public CustomJpaRepositoryImpl(JpaEntityInformation<T, ?> entityInformation, EntityManager entityManager) {
        super(entityInformation, entityManager);
        this.entityInformation = entityInformation;
    }

    private Sort applyDefaultOrder(Sort sort) {
        if (sort == null) {
            return null;
        }
        if (sort.isUnsorted()) {
            return Sort.by("insert whatever is a default").ascending();
        }
        return sort;
    }

    private Pageable applyDefaultOrder(Pageable pageable) {
        if (pageable.getSort().isUnsorted()) {
            Sort defaultSort = Sort.by("insert whatever is a default").ascending();
            pageable = PageRequest.of(pageable.getPageNumber(), pageable.getPageSize(), defaultSort);
        }
        return pageable;
    }

    @Override
    public Optional<T> findById(ID id) {
        // filterOperatorUserAccess() supplies the restricting Specification
        // (e.g. the compid check); returning null means "no filtering"
        Specification<T> filterSpec = filterOperatorUserAccess();
        if (filterSpec == null) {
            return super.findById(id);
        }
        return findOne(filterSpec.and((Specification<T>) (root, query, criteriaBuilder) -> {
            Path<?> path = root.get(entityInformation.getIdAttribute());
            return criteriaBuilder.equal(path, id);
        }));
    }

    @Override
    protected <S extends T> TypedQuery<S> getQuery(Specification<S> spec, Class<S> domainClass, Sort sort) {
        sort = applyDefaultOrder(sort);
        Specification<T> filterSpec = filterOperatorUserAccess();
        if (filterSpec != null) {
            spec = (Specification<S>) filterSpec.and((Specification<T>) spec);
        }
        return super.getQuery(spec, domainClass, sort);
    }
}
This implementation is picked up by registering it with Spring Boot, e.g.:
@SpringBootApplication
@EnableJpaRepositories(repositoryBaseClass = CustomJpaRepositoryImpl.class)
public class ServerStart {
...
If you need this kind of filtering for Querydsl as well, it is also possible to implement and register a QuerydslPredicateExecutor.
I'm trying to implement a partial update of the Manager entity based on the following:
Entity
public class Manager {

    private int id;
    private String firstname;
    private String lastname;
    private String username;
    private String password;

    // getters and setters omitted
}
SaveManager method in Controller
@RequestMapping(value = "/save", method = RequestMethod.PATCH)
public @ResponseBody void saveManager(@RequestBody Manager manager) {
    managerService.saveManager(manager);
}
Saving the manager object in the DAO implementation:
@Override
public void saveManager(Manager manager) {
    sessionFactory.getCurrentSession().saveOrUpdate(manager);
}
When I save the object, the username and password are updated correctly, but the other values are emptied.
So what I need is to update the username and password and keep all the remaining data.
If you are truly using a PATCH, then you should use RequestMethod.PATCH, not RequestMethod.POST.
Your PATCH mapping should contain the id with which you can retrieve the Manager object to be patched. Also, it should only include the fields you want to change. In your example you are sending the entire entity, so you can't discern the fields that are actually changing (does empty mean "leave this field alone" or "actually change its value to empty"?).
Perhaps an implementation as such is what you're after?
@RequestMapping(value = "/manager/{id}", method = RequestMethod.PATCH)
public @ResponseBody void saveManager(@PathVariable Long id, @RequestBody Map<String, Object> fields) {
    Manager manager = someServiceToLoadManager(id);
    // Map key is the field name, value is the new field value
    fields.forEach((k, v) -> {
        // use reflection to get field k on manager and set it to value v
        Field field = ReflectionUtils.findField(Manager.class, k);
        field.setAccessible(true);
        ReflectionUtils.setField(field, manager, v);
    });
    managerService.saveManager(manager);
}
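With this mapping, a request body containing only the fields to change (hypothetical values) might look like:

{
    "username": "newUsername",
    "password": "newPassword"
}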
Update
I want to provide an update to this post as there is now a project that simplifies the patching process.
The artifact is
<dependency>
    <groupId>com.github.java-json-tools</groupId>
    <artifactId>json-patch</artifactId>
    <version>1.13</version>
</dependency>
The implementation to patch the Manager object in the OP would look like this:
Controller
@Operation(summary = "Patch a Manager")
@PatchMapping("/{managerId}")
public Manager patchManager(@PathVariable Long managerId, @RequestBody JsonPatch jsonPatch)
        throws JsonPatchException, JsonProcessingException {
    return managerService.patch(managerId, jsonPatch);
}
Service
public Manager patch(Long managerId, JsonPatch jsonPatch) throws JsonPatchException, JsonProcessingException {
    Manager manager = managerRepository.findById(managerId).orElseThrow(EntityNotFoundException::new);
    JsonNode patched = jsonPatch.apply(objectMapper.convertValue(manager, JsonNode.class));
    return managerRepository.save(objectMapper.treeToValue(patched, Manager.class));
}
The patch request follows the specification in RFC 6902, so this is a true PATCH implementation. Details can be found here.
With this, you can patch your changes.
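For the Manager in the OP, a patch body that changes only the username and password (hypothetical values) would look like:

[
    { "op": "replace", "path": "/username", "value": "newUsername" },
    { "op": "replace", "path": "/password", "value": "newPassword" }
]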
1. Autowire `ObjectMapper` in the controller;
2. Add a PATCH endpoint that converts the incoming map into a `Manager`:

@PatchMapping("/manager/{id}")
ResponseEntity<?> saveManager(@RequestBody Map<String, String> manager) {
    Manager toBePatchedManager = objectMapper.convertValue(manager, Manager.class);
    managerService.patch(toBePatchedManager);
    return ResponseEntity.noContent().build();
}

3. Create a new method `patch` in `ManagerService`;
4. Autowire `NullAwareBeanUtilsBean` in `ManagerService`;
5. Implement `patch`:

public void patch(Manager toBePatched) {
    Optional<Manager> optionalManager = managerRepository.findById(toBePatched.getId());
    if (optionalManager.isPresent()) {
        Manager fromDb = optionalManager.get();
        // bean utils will copy non-null values from toBePatched to the fromDb manager
        beanUtils.copyProperties(fromDb, toBePatched);
        updateManager(fromDb);
    }
}
You will have to extend BeanUtilsBean to implement the copy-only-non-null-values behaviour:
public class NullAwareBeanUtilsBean extends BeanUtilsBean {

    @Override
    public void copyProperty(Object dest, String name, Object value)
            throws IllegalAccessException, InvocationTargetException {
        if (value == null)
            return;
        super.copyProperty(dest, name, value);
    }
}
Finally, mark NullAwareBeanUtilsBean as @Component, or register it as a bean:
@Bean
public NullAwareBeanUtilsBean nullAwareBeanUtilsBean() {
    return new NullAwareBeanUtilsBean();
}
First, you need to know if you are doing an insert or an update. An insert is straightforward. On an update, use get() to retrieve the entity, then update whatever fields you need. At the end of the transaction, Hibernate will flush the changes and commit.
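A minimal sketch of that approach, assuming the sessionFactory setup from the question:

// Load the managed entity and mutate only the fields to change;
// Hibernate's dirty checking issues the UPDATE at commit time.
Manager managed = sessionFactory.getCurrentSession().get(Manager.class, manager.getId());
managed.setUsername(manager.getUsername());
managed.setPassword(manager.getPassword());
// no explicit save/update call is needed inside the transaction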
You can write a custom update query which updates only particular fields:
@Override
public void saveManager(Manager manager) {
    Query query = sessionFactory.getCurrentSession().createQuery(
        "update Manager set username = :username, password = :password where id = :id");
    query.setParameter("username", manager.getUsername());
    query.setParameter("password", manager.getPassword());
    query.setParameter("id", manager.getId());
    query.executeUpdate();
}
ObjectMapper.updateValue provides all you need to partially map your entity with values from a DTO.
As an addition, you can use either of the two here, Map<String, Object> fields or String json, so your service method may look like this:
@Autowired
private ObjectMapper objectMapper;

@Override
@Transactional
public Foo save(long id, Map<String, Object> fields) throws JsonMappingException {
    Foo foo = fooRepository.findById(id)
        .orElseThrow(() -> new ResourceNotFoundException("Foo not found for this id: " + id));
    return objectMapper.updateValue(foo, fields);
}
As a second solution, and an addition to Lane Maxwell's answer, you could use reflection to map only the properties that exist in the Map of values that was sent, so your service method may look like this:
@Override
@Transactional
public Foo save(long id, Map<String, Object> fields) {
    Foo foo = fooRepository.findById(id)
        .orElseThrow(() -> new ResourceNotFoundException("Foo not found for this id: " + id));
    fields.keySet()
        .forEach(k -> {
            // look up the setter for property k; null paramTypes matches any signature
            Method method = ReflectionUtils.findMethod(Foo.class, "set" + StringUtils.capitalize(k), (Class<?>[]) null);
            if (method != null) {
                ReflectionUtils.invokeMethod(method, foo, fields.get(k));
            }
        });
    return foo;
}
The second solution allows you to insert some additional business logic into the mapping process, such as conversions or calculations.
Also, unlike finding a field by reflection with Field field = ReflectionUtils.findField(Foo.class, k); and then making it accessible, finding the property's setter actually calls the setter method, which might contain additional logic, and avoids setting values on private properties directly.