Querying Spring Data Elasticsearch for nested properties

I am trying to query Spring Data Elasticsearch repositories for nested properties. My repository looks like this:
public interface PersonRepository extends ElasticsearchRepository<Person, Long> {
    List<Person> findByAddressZipCode(String zipCode);
}
The domain objects Person and Address (without getters/setters) are defined as follows:
@Document(indexName = "person")
public class Person {

    @Id
    private Long id;
    private String name;

    @Field(type = FieldType.Nested, store = true, index = FieldIndex.analyzed)
    private Address address;
}

public class Address {
    private String zipCode;
}
My test saves one Person document and tries to read it with the repository method. But no results are returned. Here is the test method:
@Test
public void testPersonRepo() throws Exception {
    Person person = new Person();
    person.setName("Rene");
    Address address = new Address();
    address.setZipCode("30880");
    person.setAddress(address);
    personRepository.save(person);
    elasticsearchTemplate.refresh(Person.class, true);
    assertThat(personRepository.findByAddressZipCodeContaining("30880"), hasSize(1));
}
Does Spring Data Elasticsearch support the default Spring Data query generation?

Elasticsearch indexes new documents asynchronously and makes them searchable in near real time; the default refresh interval is typically 1s. So you must explicitly request a refresh if you want a document to be immediately searchable, as in a unit test. Your unit test therefore needs the ElasticsearchTemplate bean so that you can call refresh explicitly. Make sure you set waitForOperation to true to force a synchronous refresh. See this related answer. Kinda like this:
elasticsearchTemplate.refresh("myindex",true);
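Applied to the test in the question, the wiring could look like this (a sketch only; refresh(Class, boolean) is the same older Spring Data Elasticsearch API the question already uses, and the assertion calls the findByAddressZipCode method actually declared on the repository):

@Autowired
private PersonRepository personRepository;

@Autowired
private ElasticsearchTemplate elasticsearchTemplate;

@Test
public void findsPersonAfterExplicitRefresh() {
    Person person = new Person();
    person.setName("Rene");
    Address address = new Address();
    address.setZipCode("30880");
    person.setAddress(address);
    personRepository.save(person);

    // waitForOperation = true forces a synchronous refresh before searching
    elasticsearchTemplate.refresh(Person.class, true);

    assertThat(personRepository.findByAddressZipCode("30880"), hasSize(1));
}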

Related

Spring Boot : how to update object efficently?

Hello everyone, I'm new to the Spring world. I want to know how I can use a converter to update an object instead of updating each field one by one with getters and setters. Right now in my controller I have:
@PostMapping("/edit/{userId}")
public Customer updateCustomer(@RequestBody Customer newCustomer, @PathVariable final String userId) {
    return customerService.update(userId, newCustomer);
}
and this is how I'm updating the customer object :
@Override
public Customer update(String id, Customer newCustomer) {
    Customer customer = customerRepository.findById(id).get();
    customer.setFirstName(newCustomer.getFirstName());
    customer.setLastName(newCustomer.getLastName());
    customer.setEmail(newCustomer.getEmail());
    return customerRepository.save(customer);
}
Instead of calling set and get each time, I want to use a converter.
The approach of passing the entity's id as a path variable when you're updating it isn't really right. Think about it: you already have a @RequestBody, so why not include the id inside that body too? Why specify a separate path variable for it?
Now, if you have the full Customer with its id from the body, you don't have to make any calls to your repository, because Hibernate already manages it in a persistent state based on its id, and a simple
public Customer update(Customer newCustomer) {
    return customerRepository.save(newCustomer);
}
should work.
Q: What is a persistent state?
A: A persistent entity has been associated with a database table row and is being managed by the current running Persistence Context. (customerRepository.findById() just asks the DB whether an entity with the specified id exists and puts it into a persistent state. Hibernate manages this whole process as long as you have an @Id-annotated field and it is filled; in other words:

Customer customer = new Customer();
customer.setId(1);

is ALMOST the same thing as:

Customer customer = customerRepository.findById(1).get();
)
Tip: anyway, you shouldn't expose a model (entity) in the controller layer, if you didn't know. Why? Let's say your Customer model can have multiple permissions. One possible structure could look like this:
@Entity
public class Customer {
    //private fields here;
    @OneToMany(mappedBy = "customer", --other configs here--)
    private List<Permission> permissions;
}
and
@Entity
public class Permission {
    @Id
    private Long id;
    private String name;
    private String creationDate;
    @ManyToOne(--configs here--)
    private Customer customer;
}
You can see that you have a cross-reference between the Customer and Permission entities, which will eventually lead to a stack overflow exception when serializing (if you don't understand this, think of a recursive function that has no stopping condition and is called over and over again => stack overflow; the same thing is happening here).
What can you do? Create a so-called DTO class that the client receives instead of the model. How do you create this DTO? Think about what the user NEEDS to know.
1) Is "creationDate" from Permission a necessary field for the user? Not really.
2) Is "id" from Permission a necessary field for the user? In some cases yes, in others, not.
A possible CustomerDTO could look like this:
public class CustomerDTO {
    private String firstName;
    private String lastName;
    private List<String> permissions;
}
and you can notice that I'm using a List<String> instead of a List<Permission> for the customer's permissions; these are in fact the permissions' names.
public CustomerDTO convertModelToDto(Customer customer) {
    //hard way
    CustomerDTO customerDTO = new CustomerDTO();
    customerDTO.setFirstName(customer.getFirstName());
    customerDTO.setLastName(customer.getLastName());
    customerDTO.setPermissions(
            customer.getPermissions()
                    .stream()
                    .map(permission -> permission.getName())
                    .collect(Collectors.toList())
    );
    //easy way => using a ModelMapper
    customerDTO = modelMapper.map(customer, CustomerDTO.class);
    return customerDTO;
}
Use ModelMapper to map one model into another.
First define a function that can map source data into the target model; keep it as a utility you can reuse whenever you want.
public static <T> void merge(T source, T target) {
    ModelMapper modelMapper = new ModelMapper();
    modelMapper.getConfiguration().setMatchingStrategy(MatchingStrategies.STRICT);
    modelMapper.map(source, target);
}
Use merge for mapping data
Customer customer = customerRepository.findById(id).get();
merge(newCustomer, customer);
customerRepository.save(customer);
Add the ModelMapper dependency in pom.xml:

<dependency>
    <groupId>org.modelmapper</groupId>
    <artifactId>modelmapper</artifactId>
    <version>2.3.4</version>
</dependency>

Spring Boot - partial update best practice?

I am using Spring Boot v2 with a Mongo database. I was wondering what the best way is to do partial updates on the data model. Say I have a model with x attributes; depending on the request, I may want to update only 1, 2, or all x of those attributes. Should I expose an endpoint for each type of update operation, or is it possible to expose one endpoint and do it in a generic way? Note that I will need to validate the contents of the request attributes (e.g. tel no must be numbers only).
Thanks,
HTTP PATCH is a nice way to update a resource by specifying only the properties that have changed.
The following blog explains it very well:
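For illustration only, here is a minimal sketch of such a PATCH endpoint (the Customer type, repository, and field names are hypothetical, not from the blog); it applies only the non-null fields of the request body and validates before writing:

@PatchMapping("/customers/{id}")
public ResponseEntity<Customer> patchCustomer(@PathVariable String id,
                                              @RequestBody Customer patch) {
    return customerRepository.findById(id)
            .map(existing -> {
                if (patch.getEmail() != null) {
                    existing.setEmail(patch.getEmail());
                }
                // validate before applying, e.g. tel no must be numbers only
                if (patch.getTelNo() != null && patch.getTelNo().matches("\\d+")) {
                    existing.setTelNo(patch.getTelNo());
                }
                return ResponseEntity.ok(customerRepository.save(existing));
            })
            .orElse(ResponseEntity.notFound().build());
}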
You can actually expose just one endpoint. This is the situation I had a few months ago:
I wanted people to be able to modify any (or even all) fields of a Projects document (who am I to force the users to manually supply all fields lol). So I have my model,
Project.java:
package com.foxxmg.jarvisbackend.models;

//imports

@Document(collection = "Projects")
public class Project {

    @Id
    public String id;
    public String projectTitle;
    public String projectOverview;
    public Date startDate;
    public Date endDate;
    public List<String> assignedTo;
    public String progress;

    //constructors
    //getters & setters
}
I have my repository:
ProjectRepository.java
package com.foxxmg.jarvisbackend.repositories;

//imports

@Repository
public interface ProjectRepository extends MongoRepository<Project, String>, QuerydslPredicateExecutor<Project> {
    //please note, we are going to use the findById(String) method, inherited from MongoRepository, for updating
    //other abstract methods
}
Now to my Controller, ProjectController.java:
package com.foxxmg.jarvisbackend.controllers;

//imports

@RestController
@RequestMapping("/projects")
@CrossOrigin("*")
public class ProjectController {

    @Autowired
    private ProjectRepository projectRepository;

    @PutMapping("update/{id}")
    public ResponseEntity<Project> update(@PathVariable("id") String id, @RequestBody Project project) {
        Optional<Project> optionalProject = projectRepository.findById(id);
        if (optionalProject.isPresent()) {
            Project p = optionalProject.get();
            if (project.getProjectTitle() != null)
                p.setProjectTitle(project.getProjectTitle());
            if (project.getProjectOverview() != null)
                p.setProjectOverview(project.getProjectOverview());
            if (project.getStartDate() != null)
                p.setStartDate(project.getStartDate());
            if (project.getEndDate() != null)
                p.setEndDate(project.getEndDate());
            if (project.getAssignedTo() != null)
                p.setAssignedTo(project.getAssignedTo());
            return new ResponseEntity<>(projectRepository.save(p), HttpStatus.OK);
        } else
            return new ResponseEntity<>(HttpStatus.NOT_FOUND);
    }
}
That will allow partial update in MongoDB with Spring Boot.
If you are using Spring Data MongoDB, you have two options: use the MongoRepository as shown above, or use the MongoTemplate; a sketch of the latter follows.
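A minimal sketch of the MongoTemplate alternative (assuming the same Project document as above; the method name and progress value are otherwise hypothetical):

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;

public Project updateProgress(MongoTemplate mongoTemplate, String id, String newProgress) {
    Query query = new Query(Criteria.where("_id").is(id));
    Update update = new Update().set("progress", newProgress); // only this field is written
    mongoTemplate.updateFirst(query, update, Project.class);
    return mongoTemplate.findById(id, Project.class);
}

Unlike the repository approach, updateFirst touches only the listed fields in the stored document instead of rewriting the whole object.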

How to set tableName dynamically using environment variable in spring boot?

I am using AWS ECS to host my application and DynamoDB for all database operations. So I'll have the same database with different table names for different environments, such as "dev_users" (for the dev env), "test_users" (for the test env), etc. (This is how our company uses the same Dynamo account for different environments.)
So I would like to change the "tableName" of the model class using the environment variable passed through the "AWS ECS task definition" environment parameters.
For Example.
My model class is:

@DynamoDBTable(tableName = "dev_users")
public class User {

Now I need to replace "dev" with "test" when I deploy my container in the test environment. I know I can use

@Value("${DOCKER_ENV:dev}")

to access environment variables, but I'm not sure how to use variables outside the class. Is there any way that I can use the Docker env variable to select my table prefix?
My intent was to use it like this:
I know this isn't possible as-is. But is there any other way or workaround for this?
Edit 1:
I am working on Rahul's answer and facing some issues. Before describing the issues, I'll explain the process I followed.
Process:
I created the beans in my config class (com.myapp.users.config). As I don't have repositories, I gave my model class's package name as the "basePackage" path.
For 1) I replaced the table name over-rider bean injection to avoid the error.
For 2) I printed the name that is passed to this method, but it is null. So I am checking all the possible ways to pass the value here.
I haven't changed anything in my User model class, as the beans should replace the name of the DynamoDBTable when they are executed. But the table name overriding is not happening: data is still pulled from the table name given at the model class level only.
What am I missing here?
The table names can be overridden via a custom DynamoDBMapperConfig bean.
For your case, where you have to prefix each table with a literal, you can add the bean as follows; here the prefix can be the environment name.
@Bean
public TableNameOverride tableNameOverrider() {
    String prefix = ... // Use @Value to inject values via Spring or use any logic to define the table prefix
    return TableNameOverride.withTableNamePrefix(prefix);
}
For more details check out the complete details here:
https://github.com/derjust/spring-data-dynamodb/wiki/Alter-table-name-during-runtime
I was able to achieve table names prefixed with the active profile name.
First, I added a TableNameResolver class as below:
@Component
public class TableNameResolver extends DynamoDBMapperConfig.DefaultTableNameResolver {

    private String envProfile;

    public TableNameResolver() {}

    public TableNameResolver(String envProfile) {
        this.envProfile = envProfile;
    }

    @Override
    public String getTableName(Class<?> clazz, DynamoDBMapperConfig config) {
        String stageName = envProfile.concat("_");
        String rawTableName = super.getTableName(clazz, config);
        return stageName.concat(rawTableName);
    }
}
Then I set up the DynamoDBMapper bean as below:
@Bean
@Primary
public DynamoDBMapper dynamoDBMapper(AmazonDynamoDB amazonDynamoDB) {
    DynamoDBMapper mapper = new DynamoDBMapper(amazonDynamoDB,
            new DynamoDBMapperConfig.Builder().withTableNameResolver(new TableNameResolver(envProfile)).build());
    return mapper;
}
The envProfile variable holds the active profile property value accessed from the application.properties file:

@Value("${spring.profiles.active}")
private String envProfile;
We had the same issue with needing to change table names at runtime. We are using spring-data-dynamodb 5.0.2, and the following configuration provides the solution we need.
First I annotated my bean accessor:
@EnableDynamoDBRepositories(dynamoDBMapperConfigRef = "getDynamoDBMapperConfig", basePackages = "my.company.base.package")
I also set up an environment variable called ENV_PREFIX, which is wired in by Spring via SpEL:
@Value("#{systemProperties['ENV_PREFIX']}")
private String envPrefix;
Then I set up a TableNameOverride bean:

@Bean
public DynamoDBMapperConfig.TableNameOverride getTableNameOverride() {
    return DynamoDBMapperConfig.TableNameOverride.withTableNamePrefix(envPrefix);
}
Finally, I set up the DynamoDBMapperConfig bean using TableNameOverride injection. In 5.0.2, we had to set a standard DynamoDBTypeConverterFactory in the DynamoDBMapperConfig builder to avoid an NPE:
@Bean
public DynamoDBMapperConfig getDynamoDBMapperConfig(DynamoDBMapperConfig.TableNameOverride tableNameOverride) {
    DynamoDBMapperConfig.Builder builder = new DynamoDBMapperConfig.Builder();
    builder.setTableNameOverride(tableNameOverride);
    builder.setTypeConverterFactory(DynamoDBTypeConverterFactory.standard());
    return builder.build();
}
In hindsight, I could have set up a DynamoDBTypeConverterFactory bean that returns a standard DynamoDBTypeConverterFactory and injected that into the getDynamoDBMapperConfig() method using the DynamoDBMapperConfig builder. But this will also do the job.
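For what it's worth, a sketch of that alternative wiring (my own untested assumption, using the same builder API as above):

@Bean
public DynamoDBTypeConverterFactory dynamoDBTypeConverterFactory() {
    // expose the standard converter factory as its own bean
    return DynamoDBTypeConverterFactory.standard();
}

@Bean
public DynamoDBMapperConfig getDynamoDBMapperConfig(DynamoDBMapperConfig.TableNameOverride tableNameOverride,
                                                    DynamoDBTypeConverterFactory typeConverterFactory) {
    // both collaborators are injected instead of being built inline
    return new DynamoDBMapperConfig.Builder()
            .withTableNameOverride(tableNameOverride)
            .withTypeConverterFactory(typeConverterFactory)
            .build();
}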
I upvoted the other answer, but here is an idea: create a base class with all your user details:
@MappedSuperclass
public abstract class AbstractUser {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    private String firstName;
    private String lastName;
}
Create 2 implementations with different table names and Spring profiles:
@Profile(value = {"dev", "default"})
@Entity(name = "dev_user")
public class DevUser extends AbstractUser {
}

@Profile(value = {"prod"})
@Entity(name = "prod_user")
public class ProdUser extends AbstractUser {
}
Create a single JPA repository that uses the mapped superclass:

public interface UserRepository extends CrudRepository<AbstractUser, Long> {
}

Then switch the implementation with the Spring profile:
@RunWith(SpringJUnit4ClassRunner.class)
@DataJpaTest
@Transactional
public class UserRepositoryTest {

    @Autowired
    protected DataSource dataSource;

    @BeforeClass
    public static void setUp() {
        System.setProperty("spring.profiles.active", "prod");
    }

    @Test
    public void test1() throws Exception {
        DatabaseMetaData metaData = dataSource.getConnection().getMetaData();
        ResultSet tables = metaData.getTables(null, null, "PROD_USER", new String[] { "TABLE" });
        tables.next();
        assertEquals("PROD_USER", tables.getString("TABLE_NAME"));
    }
}

Java Spring REST API Handling Many Optional Parameters

I'm currently messing around with a Spring Boot REST API project for instructional purposes. I have a rather large table with 22 columns loaded into a MySQL database and am trying to give the user the ability to filter the results by multiple columns (let's say 6 for the purposes of this example).
I am currently extending a Repository and have initialized methods such as findByParam1, findByParam2, findByParam1OrderByParam2Desc, etc., and have verified that they work as intended. My question is: what is the best way to let the user leverage all 6 optional RequestParams without writing a ridiculous number of conditionals/repository method variants? For example, I want to give the user the ability to hit the url home/get-data to get all results, home/get-data?param1=xx to filter based on param1, and potentially home/get-data?param1=xx&param2=yy...&param6=zz to filter on all the optional parameters.
For reference, here is what the relevant chunk of my controller looks like (roughly).
@RequestMapping(value = "/get-data", method = RequestMethod.GET)
public List<SomeEntity> getData(@RequestParam Map<String, String> params) {
    String p1 = params.get("param1");
    if (p1 != null) {
        return this.someRepository.findByParam1(p1);
    }
    return this.someRepository.findAll();
}
My issue so far is that the way I am proceeding means I will basically need n! methods in my repository to support this functionality, with n equalling the number of fields/columns I want to filter on. Is there a better way to handle this, perhaps one where I can filter the repository 'in place' as I check the Map to see which filters the user actually populated?
EDIT: So I'm currently implementing a 'hacky' solution that might be related to J. West's comment below. I assume that the user will specify all n parameters in the request URL, and if they do not (for example, they specify p1-p4 but not p5 and p6), I generate SQL that matches the missing params against LIKE '%'. It would look something like...
@Query("select u from User u where u.p1 = :p1 and u.p2 = :p2 ... and u.p6 = :p6")
List<User> findWithComplicatedQueryAndSuch(String p1, ..., String p6);

and in the controller, I would detect whether p5 and p6 were null in the Map and, if so, simply change them to the String '%'. I'm sure there is a more precise and intuitive way to do this, although I haven't been able to find anything of the sort yet.
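A slightly cleaner variant of the same idea (a sketch only, abbreviated to two parameters; the entity and property names are the placeholders from the question): let each clause collapse to true when its parameter is null, so missing filters need no '%' substitution.

@Query("select u from User u where (:p1 is null or u.p1 = :p1) and (:p2 is null or u.p2 = :p2)")
List<User> findByOptionalParams(@Param("p1") String p1, @Param("p2") String p2);

A null parameter then simply disables its clause, and the controller can pass the Map values straight through.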
You can do this easily with a JpaSpecificationExecutor and a custom Specification: https://spring.io/blog/2011/04/26/advanced-spring-data-jpa-specifications-and-querydsl/
I would replace the HashMap with a DTO containing all the optional GET params, then build the Specification based on that DTO; obviously you can also keep the HashMap and build the Specification based on it.
Basically:
public class VehicleFilter implements Specification<Vehicle> {

    private String art;
    private String userId;
    private String vehicle;
    private String identifier;

    @Override
    public Predicate toPredicate(Root<Vehicle> root, CriteriaQuery<?> query, CriteriaBuilder cb) {
        ArrayList<Predicate> predicates = new ArrayList<>();
        if (StringUtils.isNotBlank(art)) {
            predicates.add(cb.equal(root.get("art"), art));
        }
        if (StringUtils.isNotBlank(userId)) {
            predicates.add(cb.equal(root.get("userId"), userId));
        }
        if (StringUtils.isNotBlank(vehicle)) {
            predicates.add(cb.equal(root.get("vehicle"), vehicle));
        }
        if (StringUtils.isNotBlank(identifier)) {
            predicates.add(cb.equal(root.get("identifier"), identifier));
        }
        return predicates.isEmpty() ? null : cb.and(predicates.toArray(new Predicate[0]));
    }

    // getter & setter
}
And the controller:
@RequestMapping(value = "/{ticket}/count", method = RequestMethod.GET)
public long getItemsCount(
        @PathVariable String ticket,
        VehicleFilter filter,
        HttpServletRequest request
) throws Exception {
    return vehicleService.getCount(filter);
}
Service:
@Override
public long getCount(VehicleFilter filter) {
    return vehicleRepository.count(filter);
}
Repository:
@Repository
public interface VehicleRepository extends JpaRepository<Vehicle, Integer>, JpaSpecificationExecutor<Vehicle> {
}
Just a quick example adapted from company code, you get the idea!
Another solution with less coding would be to use the Querydsl integration with Spring MVC.
Using this approach, all your request parameters are automatically resolved to your domain properties and appended to the query.
For reference check the documentation https://spring.io/blog/2015/09/04/what-s-new-in-spring-data-release-gosling#querydsl-web-support and the example project https://github.com/spring-projects/spring-data-examples/tree/master/web/querydsl
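A minimal sketch of that approach (adapted from the linked documentation; the User domain type and endpoint path are assumptions carried over from the question):

// Repository: QuerydslPredicateExecutor adds findAll(Predicate)
public interface UserRepository extends JpaRepository<User, Long>,
        QuerydslPredicateExecutor<User> {
}

@RestController
public class UserController {

    private final UserRepository repository;

    public UserController(UserRepository repository) {
        this.repository = repository;
    }

    // ?param1=xx&param2=yy is bound automatically to a Predicate over User's properties
    @GetMapping("/get-data")
    public Iterable<User> getData(@QuerydslPredicate(root = User.class) Predicate predicate) {
        return repository.findAll(predicate);
    }
}

Omitted request parameters simply don't contribute to the Predicate, which is exactly the optional-filter behavior asked for.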
You can do it even more easily using the Query by Example (QBE) technique, as long as your repository implements the JpaRepository interface: that interface extends QueryByExampleExecutor, which provides a findAll method that takes an Example<T> as an argument.
This approach is really applicable to your scenario, since your entity has a lot of fields and you want the user to fetch those matching a filter represented as a subset of the entity's fields with their corresponding values.
Let's say the entity is User (like in your example) and you want to create an endpoint for fetching users whose attribute values equal the ones specified. That can be accomplished with the following code:
Entity class:
@Entity
public class User implements Serializable {
    @Id
    private Long id;
    private String firstName;
    private String lastName;
    private Integer age;
    private String city;
    private String state;
    private String zipCode;
}
Controller class:
@RestController
public class UserController {

    private final UserRepository repository;

    public UserController(UserRepository repository) {
        this.repository = repository;
    }

    @GetMapping
    public List<User> getMatchingUsers(@RequestBody User userFilter) {
        return repository.findAll(Example.of(userFilter));
    }
}
Repository class:
@Repository
public interface UserRepository extends JpaRepository<User, Long> {
}
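As a possible refinement (a sketch, not from the original answer): an ExampleMatcher can relax the default exact matching, e.g. skip null fields explicitly and match strings as "contains".

import org.springframework.data.domain.Example;
import org.springframework.data.domain.ExampleMatcher;

ExampleMatcher matcher = ExampleMatcher.matching()
        .withIgnoreNullValues()                                       // fields left null in the filter are skipped
        .withStringMatcher(ExampleMatcher.StringMatcher.CONTAINING);  // substring matching for strings
List<User> users = repository.findAll(Example.of(userFilter, matcher));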

Spring Data JPARepository: How to conditionally fetch child entities

How can one configure JPA entities to not fetch related entities unless a certain execution parameter is provided?
According to Spring's documentation, 4.3.9. Configuring Fetch- and LoadGraphs, you need to use the @EntityGraph annotation to specify the fetch policy for queries; however, this doesn't let me decide at runtime whether I want to load those entities.
I'm okay with getting the child entities in a separate query, but in order to do that I would need to configure my repository or entities to not retrieve any children. Unfortunately, I cannot seem to find any strategies on how to do this. FetchType.LAZY is ignored, and EntityGraph only helps when specifying which entities I want to eagerly retrieve.
For example, assume Account is the parent and Contact is the child, and an Account can have many Contacts.
I want to be able to do this:
if (fetchPolicy.contains("contacts")) {
    account.setContacts(contactRepository.findByAccountId(account.getAccountId()));
}

The problem is Spring Data eagerly fetches the contacts anyway.
The Account entity class looks like this:
@Entity
@Table(name = "accounts")
public class Account {

    protected String accountId;
    protected Collection<Contact> contacts;

    @OneToMany
    //@OneToMany(fetch=FetchType.LAZY) --> doesn't work, Spring repositories ignore this
    @JoinColumn(name = "account_id", referencedColumnName = "account_id")
    public Collection<Contact> getContacts() {
        return contacts;
    }

    //getters & setters
}
The AccountRepository class looks like this:
public interface AccountRepository extends JpaRepository<Account, String> {
    //@EntityGraph ... <-- has type = LOAD or FETCH, but neither can help me prevent retrieval
    Account findOne(String id);
}
Lazy fetching should work properly as long as no methods are called on the collection returned by getContacts().
If you prefer more manual work and really want control over this (maybe more contexts depending on the use case), I would suggest you remove contacts from the Account entity and map the account on the Contact side instead. One way to tell Hibernate to ignore the field is to map it with the @Transient annotation.
@Entity
@Table(name = "accounts")
public class Account {

    protected String accountId;
    protected Collection<Contact> contacts;

    @Transient
    public Collection<Contact> getContacts() {
        return contacts;
    }

    //getters & setters
}
Then in your service class, you could do something like:
public Account getAccountById(String accountId, Set<String> fetchPolicy) {
    Account account = accountRepository.findOne(accountId);
    if (fetchPolicy.contains("contacts")) {
        account.setContacts(contactRepository.findByAccountId(account.getAccountId()));
    }
    return account;
}
Hope this is what you are looking for. Btw, the code is untested, so you should probably check again.
You can use @Transactional for that.
For this you need to fetch your Account entity lazily.
@Transactional annotations should be placed around all operations that are inseparable.
Write a method in your service layer that accepts a flag for fetching contacts eagerly:
@Transactional
public Account getAccount(String id, boolean fetchEagerly) {
    Account account = accountRepository.findOne(id);
    //If you want to fetch contacts then send fetchEagerly as true
    if (fetchEagerly) {
        //Touching the collection inside the transaction initializes it
        account.getContacts().size();
    }
    return account;
}
@Transactional keeps the session open for the whole service method, so it can make multiple calls in a single transaction without closing the connection in between.
Hope you find this useful. :)
For more details, refer to this link.
Please find below an example which runs with JPA 2.1.
Set the attribute(s) you want loaded (in the attributeNodes list):
Your entity, with entity graph annotations:

@Entity
@NamedEntityGraph(name = "accountGraph", attributeNodes = {
        @NamedAttributeNode("accountId")})
@Table(name = "accounts")
public class Account {

    protected String accountId;
    protected Collection<Contact> contacts;

    @OneToMany(fetch = FetchType.LAZY)
    @JoinColumn(name = "account_id", referencedColumnName = "account_id")
    public Collection<Contact> getContacts() {
        return contacts;
    }
}
Your custom interface:

public interface AccountRepository extends JpaRepository<Account, String> {
    @EntityGraph("accountGraph")
    Account findOne(String id);
}
Only the "accountId" property will be loaded eagerly. All others properties will be loaded lazily on access.
Spring Data does not ignore fetch = FetchType.LAZY.
My problem was that I was using dozer-mapping to convert my entities to graphs. Evidently dozer calls the getters and setters to map two objects, so I needed to add a custom field mapper configuration to ignore uninitialized PersistentCollections...
GlobalCustomFieldMapper.java:

public class GlobalCustomFieldMapper implements CustomFieldMapper {

    public boolean mapField(Object source, Object destination, Object sourceFieldValue, ClassMap classMap, FieldMap fieldMapping) {
        if (!(sourceFieldValue instanceof PersistentCollection)) {
            // Allow dozer to map as normal
            return false;
        }
        if (((PersistentCollection) sourceFieldValue).wasInitialized()) {
            // Allow dozer to map as normal
            return false;
        }
        // Tell dozer the field is already handled, leaving the destination field unset
        return true;
    }
}
If you are trying to send the result set of your entities to a client, I recommend using data transfer objects (DTOs) instead of the entities. You can create the DTO directly within the HQL/JPQL.
For example:

select new com.test.MyTableDto(my.id, my.name) from MyTable my

and if you want to pass the child:

select new com.test.MyTableDto(my.id, my.name, my.child) from MyTable my

That way you have full control over what is created and passed to the client.
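For completeness, a sketch of the DTO such a constructor expression targets (the field types are assumed from context):

package com.test;

public class MyTableDto {

    private final Long id;
    private final String name;

    // matched by "select new com.test.MyTableDto(my.id, my.name) ..."
    public MyTableDto(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    public Long getId() { return id; }
    public String getName() { return name; }
}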
