JPA Specification to find subset of a field's value - java

I am writing a webapp using Spring Data JPA on the persistence layer; more specifically, my DAOs extend the JpaSpecificationExecutor interface, so I am able to implement some kind of filter. Imagine a list of Items with several attributes (I omit annotations and other metadata for the sake of clarity):
data class Item(var tags: MutableList<String>)
On my service layer, my filter method looks like this:
fun findBy(tagsToFilterBy: List<String>): List<Item> {
    return dao.findAll { root, query, builder ->
        builder.//??
    }
}
What I want to achieve is to retrieve only the Items that contain all of tagsToFilterBy; in other words, tagsToFilterBy should be a subset of Item.tags.
I know about the isMember(...) method, but I think its usage wouldn't be very pleasant with many tags, as it accepts only a single "entity" per call. Could you advise me something?
My other question is whether it is safe to use user input directly in, let's say, builder.like(someExpression, inputFromUser), or whether I have to put it in builder.parameter(...) and then query.setParameter(...).
Thank you for any ideas.

So I managed to write it myself. I'm not saying that it is pretty, but it is the prettiest one I could come up with:
dao.findAll { root, query, builder ->
    val predicates = mutableListOf<Predicate>()
    val tagsPath = root.get<List<Tag>>("tags")
    tagsToFilterBy.forEach {
        predicates.add(builder.isMember(it, tagsPath))
    }
    // every given tag must be a member of the item's tags
    builder.and(*predicates.toTypedArray())
}
This basically goes through the given tags and checks that each one is a member of the item's tags.

One way is to use filter and test each element to see if your filter list contains it.
val result = dao.filter { tagsToFilterBy.contains(it.tag) }
To speed it up, you could sort your filter list and maybe use binarySearch, but whether that improves performance (or not) depends on the size of the filter list. For example, assuming tagsToFilterBy is sorted, then:
val result2 = dao.filter { tagsToFilterBy.binarySearch(it.tag) >= 0 }
The Kotlin Collection page describes each of these extension methods.


Is there any way to remove a specific rule when building the RelNode using RelBuilder?

I am building a RelNode using RelBuilder.
FrameworkConfig frameworkConfig = config().build();
final RelBuilder builder = RelBuilder.create(frameworkConfig);
RelNode root =
    builder.scan("ITEM", "TEST")
        .filter(
            builder.equals(builder.field("ITEM_ID"), builder.literal("AM_TestItem_21Aug17_0614")))
        .filter(
            builder.equals(builder.field("OBJECT_TYPE"), builder.literal("Item")))
        .build();
PreparedStatement preparedStatement = RelRunners.run(root);
preparedStatement.executeQuery();
public static Frameworks.ConfigBuilder config() {
    CalciteSchema rootSchema1 = CalciteSchema.createRootSchema(true);
    rootSchema1 = rootSchema1.add("ITEM", new MySchema());
    // MySchema consists of a Map of MyFilterableTable, which extends AbstractTable and implements FilterableTable
    return Frameworks.newConfigBuilder().defaultSchema(rootSchema1.plus());
}
The scan method in MyFilterableTable
public Enumerable<Object[]> scan(DataContext root, List<RexNode> filters)
gets the merged filter as [AND(=($0, 'AM_TestItem_21Aug17_0614'), =($3, 'Item'))].
I want separate entries for these filters, not a merged one.
I want planner to ignore the FilterMergeRule.
Is there any way to do that?
There is no easy way to selectively enable/disable rules when using the RelBuilder API, and this is normal given that this API is meant only to create plans.
Moreover, it is not a good idea to rely on the fact that the filters will not be merged, since there is a high chance that in the future the merge currently done via FilterMergeRule could be done by the RelBuilder itself when there are two adjacent filter operators.
Nevertheless, inside the scan operation you could call the RelOptUtil#conjunctions method, which splits the filter into its separate conjunctions, which is more or less what you would like to obtain if I understand your use case correctly.
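For illustration, a rough sketch of what that could look like inside the scan method (the abstract class layout and the scanWithConjuncts helper are assumptions made for this example, not part of the original question):
import java.util.ArrayList;
import java.util.List;

import org.apache.calcite.DataContext;
import org.apache.calcite.linq4j.Enumerable;
import org.apache.calcite.plan.RelOptUtil;
import org.apache.calcite.rex.RexNode;
import org.apache.calcite.schema.FilterableTable;
import org.apache.calcite.schema.impl.AbstractTable;

public abstract class MyFilterableTable extends AbstractTable implements FilterableTable {

    @Override
    public Enumerable<Object[]> scan(DataContext root, List<RexNode> filters) {
        // Split each (possibly merged) filter back into its individual conjuncts,
        // e.g. AND(=($0, '...'), =($3, 'Item')) becomes [=($0, '...'), =($3, 'Item')].
        List<RexNode> conjuncts = new ArrayList<>();
        for (RexNode filter : filters) {
            conjuncts.addAll(RelOptUtil.conjunctions(filter));
        }
        // Evaluate each conjunct separately against the rows.
        return scanWithConjuncts(root, conjuncts);
    }

    // Hypothetical helper: how rows are actually produced depends on the table implementation.
    protected abstract Enumerable<Object[]> scanWithConjuncts(DataContext root, List<RexNode> conjuncts);
}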

Should I always have a separate "DataService" that invokes another service?

I am building a new RESTful service that interacts with other microservices.
The routine task is to fetch some data from another RESTful service, filter it, match it against existing data, and return a response.
My question is: is it a good design pattern to always separate the steps "get data" and "filter it" into two different classes, and name one EntityDataService and the other simply EntityService?
For instance, I can make a call to a service that returns a list of countries that has to be filtered against some conditions, such as inclusion in the EU or date of creation, etc.
In this case, which option is better:
- a separate CountryDataService class that has only one method, getAllCountries, and an EUCountryService that filters the results
- one class CountryService with public methods getEUCountries and getCountriesCreatedInDateRange and a private getAllCountries
Which one is better?
I'm trying to follow KISS pattern but also want to make my solution maintainable and extensible.
In systems with lots of data, having a getAllSomething method is not that good an idea.
If you don't have lots of data, it's OK to have it, but still be careful.
If you have 50 records, it's not that bad, but if you have millions of records that would be a problem.
Having a Service or Repository with getBySomeCriteria methods is the better way to go.
If you have lots of different queries that you want to perform, you may end up with lots of methods: getByCriteria1, getByCriteria2, ..., getByCriteria50. Also, each time you need a different query, you will have to add a new method to the Service.
In this case you can use the Specification Pattern. Here's an example:
public enum Continent { None, Europe, Africa, NorthAmerica, SouthAmerica, Asia }

public class CountrySpecification {
    public DateRange CreatedInRange { get; set; }
    public Continent Location { get; set; }
}

public class CountryService {
    public IEnumerable<Country> Find(CountrySpecification spec) {
        var url = "https://api.myapp.com/countries";
        url = AddQueryParametersFromSpec(url, spec);
        var results = SendGetRequest(url);
        return CreateCountryFromApiResults(results);
    }
}

Create a custom Aggregate Function with jOOQ

Context
I am working with jOOQ against a PostgreSQL database.
I want to use jsonb_object_agg(name, value) on the result set of a LEFT OUTER JOIN.
Problem
The join being an OUTER one, sometimes the name component of the aggregation function is simply null: that can't work. I would then go for:
COALESCE(
  json_object_agg(table.name, table.value) FILTER (WHERE table.name IS NOT NULL),
  '{}'
)::json
As of now, the code I use to call jsonb_object_agg is (not exactly, but boils down to) the following:
public static Field<?> jsonbObjectAgg(final Field<?> key, final Select<?> select) {
    return DSL.field("jsonb_object_agg({0}, ({1}))::jsonb", JSON_TYPE, key, select);
}
... where JSON_TYPE is:
private static final DataType<JsonNode> JSON_TYPE = SQLDataType.VARCHAR.asConvertedDataType(/* a custom Converter */);
Incomplete solution
I would love to leverage jOOQ's AggregateFilterStep interface, and in particular, to be able to use its AggregateFilterStep#filterWhere(Condition... conditions).
However, the org.jooq.impl.Function class that implements AggregateFilterStep (indirectly via AggregateFunction and ArrayAggOrderByStep) is restricted in visibility to its package, so I can't just blindly recycle the implementation of DSL#arrayAgg:
public static <T> ArrayAggOrderByStep<T[]> arrayAgg(Field<T> field) {
    return new org.jooq.impl.Function<T[]>(Term.ARRAY_AGG, field.getDataType().getArrayDataType(), nullSafe(field));
}
Attempts
The closest I got to something reasonable is... building my own coalesceAggregation function that specifically coalesces aggregated fields:
// Can't quite use AggregateFunction for these Field<T> types
public static <T> Field<T> coalesceAggregation(final Field<T> agg, final Condition coalesceWhen, @NonNull final T coalesceTo) {
    return DSL.coalesce(DSL.field("{0} FILTER (WHERE {1})", agg.getType(), agg, coalesceWhen), coalesceTo);
}

public static <T> Field<T> coalesceAggregation(final Field<T> agg, @NonNull final T coalesceTo) {
    return coalesceAggregation(agg, agg.isNotNull(), coalesceTo);
}
... But I then ran into issues with my T type being JsonNode, where DSL#coalesce seems to CAST my coalesceTo to varchar.
Or, you know:
DSL.field("COALESCE(jsonb_object_agg({0}, ({1})) FILTER (WHERE {0} IS NOT NULL), '{}')::jsonb", JSON_TYPE, key, select)
But that'd be the very last resort: it'd feel like I'd merely be one step away from letting the user inject any SQL they want into my database 🙄
In short
Is there a way in jOOQ to "properly" implement one's own aggregate function, as an actual org.jooq.AggregateFunction?
I'd like to avoid having it generated by jooq-codegen as much as possible (not that I don't like it – it's just our pipeline that's horrible).
Starting with jOOQ 3.14.0
The JSON_OBJECTAGG aggregate function is supported natively in jOOQ now:
DSL.jsonObjectAgg(TABLE.NAME, TABLE.VALUE).filterWhere(TABLE.NAME.isNotNull());
Support for the FILTER clause was added in jOOQ 3.14.8.
Starting with jOOQ 3.14.8 and 3.15.0
If jOOQ doesn't implement a specific aggregate function, you can now use DSL.aggregate() to create custom aggregate functions.
DSL.aggregate("json_object_agg", SQLDataType.JSON, TABLE.NAME, TABLE.VALUE)
.filterWhere(TABLE.NAME.isNotNull());
This was implemented with https://github.com/jOOQ/jOOQ/issues/1729
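For the '{}' fallback from the question, a hedged sketch (reusing the placeholder TABLE.NAME and TABLE.VALUE columns from the snippets above) would wrap the native aggregate in COALESCE:
Field<JSON> objectOrEmpty = DSL.coalesce(
    DSL.jsonObjectAgg(TABLE.NAME, TABLE.VALUE).filterWhere(TABLE.NAME.isNotNull()),
    // fall back to an empty JSON object when no rows pass the filter
    DSL.inline(JSON.valueOf("{}")));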
Pre jOOQ 3.14.0
There's a missing feature in the jOOQ DSL API, namely to create plain SQL aggregate functions. The reason why this is not available yet (as of jOOQ 3.11) is because there are a lot of delicate internals to specifying a vendor agnostic aggregate function that supports all of the vendor-specific options including:
FILTER (WHERE ...) clause (as you mentioned in the question), which has to be emulated using CASE
OVER (...) clause to turn an aggregate function into a window function
WITHIN GROUP (ORDER BY ...) clause to support ordered set aggregate functions
DISTINCT clause, where supported
Other, vendor-specific extensions to aggregate functions
The easy workaround in your specific case is to use plain SQL templating all the way as you mentioned in your question as well:
DSL.field("COALESCE(jsonb_object_agg({0}, ({1})) FILTER (WHERE {0} IS NOT NULL), '{}')::jsonb", JSON_TYPE, key, select)
Or you do the thing you've mentioned previously. Regarding that concern:
... But I then ran into issues with my T type being JsonNode, where DSL#coalesce seems to CAST my coalesceTo to varchar.
That's probably because you used agg.getType() which returns Class<?> instead of agg.getDataType() which returns DataType<?>.
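In other words, a hedged sketch of the earlier helper with the data type passed through explicitly (not a verified fix, just the idea):
public static <T> Field<T> coalesceAggregation(final Field<T> agg, final Condition coalesceWhen, final T coalesceTo) {
    // agg.getDataType() keeps the converted JSON data type, unlike agg.getType()
    return DSL.coalesce(
        DSL.field("{0} FILTER (WHERE {1})", agg.getDataType(), agg, coalesceWhen),
        DSL.val(coalesceTo, agg.getDataType()));
}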
But that'd be the very last resort: it'd feel like I'd merely be one step away from letting the user inject any SQL they want into my database
I'm not sure why that is an issue here. You will still be able to control your plain SQL API usage yourself, and users won't be able to inject arbitrary things into key and select because you control those elements as well.

Design to dynamically apply filters on list of products

I need some of your input in deciding the right approach for the following case.
I am currently working on an ecommerce application (an online shopping website). Here, on the homepage of the application, I have to display a list of the products available in the shop. The end user can apply filters (include/exclude) to the product list so that only the products satisfying the applied filters are displayed.
So, my question is about the correct design for applying these filters to the products in the Java layer. Please note that I keep all the data in memory (a hashmap) in the Java layer. I just want to apply the filters passed as input from the UI to this in-memory data store and return the filtered results.
Backend: Java application hosted on Tomcat. This application has a periodic thread running that reads/refreshes product data from the file system every 30 seconds and keeps it in the memory of the Java process.
Frontend: React application hosted on Nginx. Makes REST calls to the backend server to fetch the data.
Approaches I considered:
Create a class called "FilteredProducts" that has an attribute (say, the filtering criteria). Have a different implementation for each possible filtering criterion using the strategy pattern, and apply the filtering criteria to the products based on the criteria passed as input.
Can anyone please guide me if there is a recommended way to handle this requirement? Any input is highly appreciated. Please let me know if more information is required in this context.
You could create a filter interface or abstract class, and for each filter element you could create a specific class that implements or extends that interface/superclass.
Then you could implement a matches method and check which articles meet the criteria, e.g.:
import java.util.ArrayList;
import java.util.List;

public interface FilterCriteria {
    List<ShopElement> getMatchedProducts(int minPrice, int maxPrice);
}

public class PriceFilterCriteria implements FilterCriteria {
    @Override
    public List<ShopElement> getMatchedProducts(int minPrice, int maxPrice) {
        List<ShopElement> matchedProducts = new ArrayList<>();
        for (ShopElement se : YourDataSource.getAllElements()) {
            if (se.getPrice() > minPrice && se.getPrice() < maxPrice) {
                matchedProducts.add(se);
            }
        }
        return matchedProducts;
    }
}
This pattern is actually called the filter pattern.
Your strategy pattern approach may work, but I think it is somewhat misplaced in this context, because the strategy pattern is mostly for swapping concrete implementations within a client and not for combining various cross-cutting filters.
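As a hedged illustration of combining such filters (the Criteria/AndCriteria names and the list-in/list-out signature are assumptions for this sketch, not part of the answer above), the criteria can be made composable:
import java.util.List;
import java.util.stream.Collectors;

interface Criteria {
    List<ShopElement> meetCriteria(List<ShopElement> items);
}

class PriceCriteria implements Criteria {
    private final int minPrice;
    private final int maxPrice;

    PriceCriteria(int minPrice, int maxPrice) {
        this.minPrice = minPrice;
        this.maxPrice = maxPrice;
    }

    @Override
    public List<ShopElement> meetCriteria(List<ShopElement> items) {
        return items.stream()
                .filter(se -> se.getPrice() > minPrice && se.getPrice() < maxPrice)
                .collect(Collectors.toList());
    }
}

class AndCriteria implements Criteria {
    private final Criteria first;
    private final Criteria second;

    AndCriteria(Criteria first, Criteria second) {
        this.first = first;
        this.second = second;
    }

    @Override
    public List<ShopElement> meetCriteria(List<ShopElement> items) {
        // apply the second criterion to whatever survived the first one
        return second.meetCriteria(first.meetCriteria(items));
    }
}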

How to customize ModelMapper

I want to use ModelMapper to convert an entity to a DTO and back. Mostly it works, but how do I customize it? It has so many options that it's hard to figure out where to start. What's best practice?
I'll answer it myself below, but if another answer is better I'll accept it.
First, here are some links:
modelmapper getting started
api doc
blog post
random code examples
My impression of mm is that it is very well engineered. The code is solid and a pleasure to read. However, the documentation is very terse, with very few examples. Also, the API is confusing because there seem to be 10 ways to do anything, and no indication of why you'd do it one way or another.
There are two alternatives: Dozer is the most popular, and Orika gets good reviews for ease of use.
Assuming you still want to use mm, here’s what I’ve learned about it.
The main class, ModelMapper, should be a singleton in your app. For me, that meant a @Bean using Spring. It works out of the box for simple cases. For example, suppose you have two classes:
class DogData
{
    private String name;
    private int mass;
}

class DogInfo
{
    private String name;
    private boolean large;
}
with appropriate getters/setters. You can do this:
ModelMapper mm = new ModelMapper();
DogData dd = new DogData();
dd.setName("fido");
dd.setMass(70);
DogInfo di = mm.map(dd, DogInfo.class);
and the "name" will be copied from dd to di.
There are many ways to customize mm, but first you need to understand how it works.
The mm object contains a TypeMap for each ordered pair of types; for example, <DogInfo, DogData> and <DogData, DogInfo> would be two different TypeMaps.
Each TypeMap contains a PropertyMap with a list of mappings. So in the example the mm will automatically create a TypeMap<DogData, DogInfo> that contains a PropertyMap that has a single mapping.
We can write this
TypeMap<DogData, DogInfo> tm = mm.getTypeMap(DogData.class, DogInfo.class);
List<Mapping> list = tm.getMappings();
for (Mapping m : list)
{
    System.out.println(m);
}
and it will output
PropertyMapping[DogData.name -> DogInfo.name]
When you call mm.map(), this is what it does:
1. see if the TypeMap exists yet; if not, create the TypeMap for the <S, D> source/destination types
2. call the TypeMap Condition; if it returns FALSE, do nothing and STOP
3. call the TypeMap Provider to construct a new destination object if necessary
4. call the TypeMap PreConverter if it has one
5. do one of the following:
   a. if the TypeMap has a custom Converter, call it
   b. or, generate a PropertyMap (based on Configuration flags plus any custom mappings that were added), and use it
   (Note: the TypeMap also has optional custom Pre/PostPropertyConverters that I think will run at this point before and after each mapping.)
6. call the TypeMap PostConverter if it has one
Caveat: This flowchart is sort of documented but I had to guess a lot, so it might not be all correct!
You can customize every single step of this process. But the two most common are
step 5a. – write custom TypeMap Converter, or
step 5b. – write custom Property Mapping.
Here is a sample of a custom TypeMap Converter:
Converter<DogData, DogInfo> myConverter = new Converter<DogData, DogInfo>()
{
    public DogInfo convert(MappingContext<DogData, DogInfo> context)
    {
        DogData s = context.getSource();
        DogInfo d = context.getDestination();
        d.setName(s.getName());
        d.setLarge(s.getMass() > 25);
        return d;
    }
};
mm.addConverter(myConverter);
Note the converter is one-way. You have to write another if you want to customize DogInfo to DogData.
Here is a sample of a custom PropertyMap:
Converter<Integer, Boolean> convertMassToLarge = new Converter<Integer, Boolean>()
{
    public Boolean convert(MappingContext<Integer, Boolean> context)
    {
        // If the dog weighs more than 25, then it must be large
        return context.getSource() > 25;
    }
};

PropertyMap<DogData, DogInfo> mymap = new PropertyMap<DogData, DogInfo>()
{
    protected void configure()
    {
        // Note: this is not normal code. It is "EDSL" so don't get confused
        map(source.getName()).setName(null);
        using(convertMassToLarge).map(source.getMass()).setLarge(false);
    }
};
mm.addMappings(mymap);
The PropertyMap.configure() function is really funky. It's not actual code; it is dummy EDSL code that gets interpreted somehow. For instance, the parameter to the setter is not relevant; it is just a placeholder. You can do lots of stuff in here, such as
when(condition).map(getter).setter
when(condition).skip().setter – safely ignore field.
using(converter).map(getter).setter – custom field converter
with(provider).map(getter).setter – custom field constructor
Note the custom mappings are added to the default mappings, so you do not need, for example, to specify
map(source.getName()).setName(null);
in your custom PropertyMap.configure().
In this example, I had to write a Converter to map Integer to Boolean. In most cases this will not be necessary because mm will automatically convert Integer to String, etc.
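As a quick hedged sketch of the when(...).skip() construct from the list above (reusing the Dog classes; Conditions.isNull() is the condition helper provided by ModelMapper):
PropertyMap<DogData, DogInfo> skipMap = new PropertyMap<DogData, DogInfo>()
{
    protected void configure()
    {
        // skip the destination name whenever the source name is null
        when(Conditions.isNull()).skip().setName(null);
    }
};
mm.addMappings(skipMap);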
I'm told you can also create mappings using Java 8 lambda expressions. I tried, but I could not figure it out.
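For what it's worth, here is a hedged sketch of how that might look with the typeMap/addMappings API (method references instead of a PropertyMap; this assumes a ModelMapper version that has TypeMap.addMappings):
Converter<Integer, Boolean> massToLarge = ctx -> ctx.getSource() > 25;
mm.typeMap(DogData.class, DogInfo.class).addMappings(mapper ->
        mapper.using(massToLarge).map(DogData::getMass, DogInfo::setLarge));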
Final Recommendations and Best Practice
By default, mm uses MatchingStrategies.STANDARD, which is dangerous. It can easily choose the wrong mapping and cause strange, hard-to-find bugs. And what if next year someone else adds a new column to the database? So don't do it. Make sure you use STRICT mode:
mm.getConfiguration().setMatchingStrategy(MatchingStrategies.STRICT);
Always write unit tests and ensure that all mappings are validated.
DogInfo di = mm.map(dd, DogInfo.class);
mm.validate(); // make sure nothing in the destination is accidentally skipped
Fix any validation failures with mm.addMappings() as shown above.
Put all your mappings in a central place, where the mm singleton is created.
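Put together, a minimal sketch of those recommendations (assuming Spring; MapperConfig and the example mapping registration are illustrative, not prescribed by ModelMapper):
import org.modelmapper.ModelMapper;
import org.modelmapper.convention.MatchingStrategies;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MapperConfig
{
    @Bean
    public ModelMapper modelMapper()
    {
        ModelMapper mm = new ModelMapper();
        mm.getConfiguration().setMatchingStrategy(MatchingStrategies.STRICT);
        // register all custom mappings here, e.g. mm.addMappings(new DogDataToDogInfoMap());
        mm.validate(); // fail fast at startup if any registered mapping leaves a destination property unmapped
        return mm;
    }
}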
I faced a problem while mapping with ModelMapper: not only the properties but also my source and destination types were different. I solved this problem as follows.
If the source and destination types are different, for example:
@Entity
class Student {
    private Long id;

    @OneToOne
    @JoinColumn(name = "laptop_id")
    private Laptop laptop;
}
And the DTO:
class StudentDto {
    private Long id;
    private LaptopDto laptopDto;
}
Here, the source and destination types are different. So, if your matching strategy is STRICT, you won't be able to map between these two different types.
Now, to solve this, simply put the code below in the constructor of your controller class, or in any class where you want to use ModelMapper:
private ModelMapper modelMapper;

public StudentController(ModelMapper modelMapper) {
    this.modelMapper = modelMapper;
    this.modelMapper.typeMap(Student.class, StudentDto.class)
            .addMapping(Student::getLaptop, StudentDto::setLaptopDto);
}
That's it. Now you can use modelMapper.map(source, destination) easily. It will map automatically:
modelMapper.map(student, studentDto);
I've been using it for the last 6 months, and I'm going to explain some of my thoughts about it:
First of all, it is recommended to use it as a single instance (singleton, Spring bean, ...); that's explained in the manual, and I think everyone agrees with that.
ModelMapper is a great mapping library and widely flexible. Due to its flexibility, there are many ways to get the same result, and that's why the manual should include best practices for when to use one way or another of doing the same thing.
Getting started with ModelMapper is a little bit difficult; it has a steep learning curve, and sometimes it is not easy to understand the best way to do something, or how to do something else. So, to start, you need to read and understand the manual carefully.
You can configure your mapping as you want using the following settings:
Access level
Field matching
Naming convention
Name transformer
Name tokenizer
Matching strategy
The default configuration is simply the best (http://modelmapper.org/user-manual/configuration/), but if you want to customise it, you are able to do so.
Just one thing related to the Matching Strategy configuration: I think this is the most important configuration, and you need to be careful with it. I would use Strict or Standard but never Loose. Why?
Because Loose is the most flexible and intelligent matching strategy, it could map some properties you do not expect. So, definitely, be careful with it. I think it is better to create your own PropertyMap and use Converters if needed, instead of configuring it as Loose.
It is also important to validate all property matches to verify that everything works. With ModelMapper this is even more necessary, because the intelligent mapping is done via reflection, so you will not have the compiler's help: the code will keep compiling, but the mapping will fail without you realising it. That's one of the things I like least, but it is needed to avoid boilerplate and manual mapping.
Finally, if you are sure you want to use ModelMapper in your project, you should use it the way it proposes; don't mix it with manual mappings (for example), just use ModelMapper, and if you don't know how to do something, be sure it is possible (investigate, ...). Sometimes it is harder to do something with ModelMapper (which I also don't like) than by hand, but that is the price you pay to avoid boilerplate mappings in other POJOs.
import org.modelmapper.ModelMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class EntityDtoConversionUtil {

    @Autowired
    private ModelMapper modelMapper;

    public Object convert(Object object, Class<?> type) {
        Object mapperObject = modelMapper.map(object, type);
        return mapperObject;
    }
}
Here is how you can make a customized conversion class and then autowire it wherever you would like to convert an object to a DTO and vice versa.
@Component
public class ConversionUtil {

    @Bean
    public ModelMapper modelMapper() {
        return new ModelMapper();
    }

    public <T, D> D mapItem(T item, Class<D> cl) {
        return modelMapper().map(item, cl);
    }

    public <T, D> List<D> map(List<T> list, Class<D> cl) {
        return list.stream()
                .map(item -> modelMapper().map(item, cl))
                .collect(Collectors.toList());
    }
}
