Efficient way of mimicking Hibernate criteria on a cached map - Java

I have just written some code to cache a table in memory (a simple Java HashMap). One of the pieces of code I am trying to replace finds objects based on criteria: it receives multiple field parameters, and whenever those fields were not empty and not null, they were added to the Hibernate query criteria.
To replace this, what I am thinking of doing is:
For each valid param (not null and not empty), I will create a HashSet that satisfies that criterion.
Once I am done building HashSets for all valid criteria, I will call Set.retainAll(second_set) across all the sets, so that at the end I am left with the set that is the intersection of all valid criteria.
Does this sound like the best approach, or is there a better way to implement it?
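Roughly, the sketch I have in mind looks like this (the criteria names, the objCache map, and the StringUtil helper are placeholders for my real code):

// Sketch only: one HashSet per active filter, then a single intersection pass.
// Requires java.util.stream.Collectors.
List<Set<MyObj>> candidateSets = new ArrayList<>();
if (criteria1 != null) {
    candidateSets.add(objCache.values().stream()
            .filter(o -> criteria1.equals(o.getCriteria1()))
            .collect(Collectors.toSet()));
}
if (!StringUtil.isBlank(criteria2)) {
    candidateSets.add(objCache.values().stream()
            .filter(o -> criteria2.equals(o.getCriteria2()))
            .collect(Collectors.toSet()));
}
// ... one set per remaining criterion ...
Set<MyObj> result = candidateSets.isEmpty()
        ? new HashSet<>(objCache.values())        // no active filter: everything matches
        : new HashSet<>(candidateSets.get(0));
for (Set<MyObj> s : candidateSets) {
    result.retainAll(s);                          // intersect with every candidate set
}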
EDIT
Though my original question still stands and I am still looking for an answer to it, I ended up implementing it the following way. The reason is that the set approach turned out to be somewhat cumbersome: after creating all the sets, I first had to figure out which set was non-empty so that retainAll could be called on it, and that resulted in lots of if-else statements. My current implementation looks like this:
private List<MyObj> getCachedObjs(Long criteria1, String criteria2, String criteria3) {
    List<MyObj> results = new ArrayList<>();
    int totalActiveFilters = 0;
    if (criteria1 != null) {
        totalActiveFilters++;
    }
    if (!StringUtil.isBlank(criteria2)) {
        totalActiveFilters++;
    }
    if (!StringUtil.isBlank(criteria3)) {
        totalActiveFilters++;
    }
    for (Map.Entry<Long, MyObj> objEntry : objCache.entrySet()) {
        MyObj obj = objEntry.getValue();
        int matchedFilters = 0;
        if (criteria1 != null) {
            if (obj.getCriteria1().equals(criteria1)) {
                matchedFilters++;
            }
        }
        if (!StringUtil.isBlank(criteria2)) {
            if (obj.getCriteria2().equals(criteria2)) {
                matchedFilters++;
            }
        }
        if (!StringUtil.isBlank(criteria3)) {
            if (obj.getCriteria3().equals(criteria3)) {
                matchedFilters++;
            }
        }
        if (matchedFilters == totalActiveFilters) {
            results.add(obj);
        }
    }
    return results;
}
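Just as a sketch (not what I currently run), the same filtering could also be expressed by composing the active filters into a single Predicate, which avoids the counting; it needs java.util.function.Predicate and java.util.stream.Collectors:

private List<MyObj> getCachedObjs(Long criteria1, String criteria2, String criteria3) {
    // One predicate per active filter; an object must satisfy all of them.
    List<Predicate<MyObj>> filters = new ArrayList<>();
    if (criteria1 != null) {
        filters.add(o -> criteria1.equals(o.getCriteria1()));
    }
    if (!StringUtil.isBlank(criteria2)) {
        filters.add(o -> criteria2.equals(o.getCriteria2()));
    }
    if (!StringUtil.isBlank(criteria3)) {
        filters.add(o -> criteria3.equals(o.getCriteria3()));
    }
    Predicate<MyObj> combined = filters.stream().reduce(o -> true, Predicate::and);
    return objCache.values().stream()
            .filter(combined)
            .collect(Collectors.toList());
}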

Related

How to sort a particular page number in Spring Data JPA?

I have a paging problem: when I try to sort a table by a column header on a particular page, PageRequest.of(page - 1, 10, sort) sorts the entire table, not just that page. As a result, the records returned on that page differ from the records that were on it before sorting.
Code:
@Override
public Page<User> getPageAndSort(String field, String direction, int page) {
    Sort sort = direction.equalsIgnoreCase(Sort.Direction.ASC.name())
            ? Sort.by(field).ascending()
            : Sort.by(field).descending();
    Pageable pageable = PageRequest.of(page - 1, 10, sort);
    return userRepo.findAll(pageable);
}
For example, I want to sort only page 1 by id, returning the sorted records from page 1. The rest of the pages (and the record set as a whole) shouldn't be affected.
Thank you.
Edit:
I have a workaround for this problem. After getting a page from:
Page<User> page = userService.findPage(currentPage);
I take the page.getContent() list and pass it to the sortList method:
userService.sortList(new ArrayList<>(page.getContent()), field, sortDir)
sort implementation:
public ArrayList<User> sortList(ArrayList<User> users, String field, String direction) {
    users.sort((User user1, User user2) -> {
        try {
            Field field1 = user1.getClass().getDeclaredField(field);
            field1.setAccessible(true);
            Object object1 = field1.get(user1);
            Field field2 = user2.getClass().getDeclaredField(field);
            field2.setAccessible(true);
            Object object2 = field2.get(user2);
            int result = 0;
            if (isInt(object1.toString())) {
                result = Integer.parseInt(object1.toString()) - Integer.parseInt(object2.toString());
            } else {
                result = object1.toString().compareToIgnoreCase(object2.toString());
            }
            if (result > 0) {
                return direction.equalsIgnoreCase("asc") ? 1 : -1;
            }
            if (result < 0) {
                return direction.equalsIgnoreCase("asc") ? -1 : 1;
            }
            return 0;
        } catch (Exception e) {
            Log.error(e.toString());
            return 0;
        }
    });
    return users;
}
With this workaround I can sort a particular page by its column header without affecting the rest of the pages. It's not standard, though, since it doesn't use PageRequest.of() from Spring Data JPA, so I recommend testing and reviewing the code thoroughly.
I think an if condition could solve the problem: create the Pageable instance according to the condition.
@Override
public Page<User> getPageAndSort(String field, String direction, int page) {
    Sort sort = direction.equalsIgnoreCase(Sort.Direction.ASC.name())
            ? Sort.by(field).ascending()
            : Sort.by(field).descending();
    Pageable pageable = (page == 1) ? PageRequest.of(page - 1, 10, sort)
                                    : PageRequest.of(page - 1, 10);
    return userRepo.findAll(pageable);
}
References: https://www.baeldung.com/spring-data-jpa-pagination-sorting#:~:text=We%20can%20create%20a%20PageRequest%20object%20by%20passing,%280%2C%202%29%3B%20Pageable%20secondPageWithFiveElements%20%3D%20PageRequest.of%20%281%2C%205%29%3B
I think this helps.
I don't think there is an easy way to do this kind of sorting in the database, and since you are dealing with a single page, which is in memory anyway because you render it to the UI, I would just sort it in memory.
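For a single known column, that in-memory sort is short; a sketch (the accessor and the direction variable are just examples):

// Sort only the rows of the already-fetched page; other pages are untouched.
List<User> content = new ArrayList<>(page.getContent());
content.sort(Comparator.comparing(User::getId));   // pick the accessor for the requested field
if ("desc".equalsIgnoreCase(direction)) {
    Collections.reverse(content);
}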
Alternatively, you can go with a custom SQL statement structured like this:
SELECT * FROM (
    SELECT * FROM WHATEVER
    ORDER BY ...          -- sort clause defining the pagination
    OFFSET ... LIMIT ...  -- note that this clause is database dependent
) ORDER BY ...            -- your sort criteria within the page goes here
You'll have to construct this SQL statement programmatically, so you can't use Spring Data's special features like annotated queries or query derivation.
I'm not sure I got your question, but if you want a certain sorted page, the database definitely has to create the query plan, sort all the data, and return a certain offset (page) of the sorted data.
It's impossible to get a sorted page without sorting the whole data set.
I believe you want to sort the data only on a given page. This is difficult to do with a database query, which will sort the whole data set and give you the nth page of that ordering. I would suggest reversing the given page after retrieving it in a fixed order:
Retrieve the nth page from the database, always sorted ascending.
Depending on the requested direction, reverse the page contents if needed.
This should be faster than relying on the database for the sort operation.
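A minimal sketch of that idea, reusing the userRepo and page size from the question (PageImpl is Spring Data's concrete Page implementation):

@Override
public Page<User> getPageAndSort(String field, String direction, int page) {
    // Always fetch the page sorted ascending so the page boundaries stay stable.
    Pageable pageable = PageRequest.of(page - 1, 10, Sort.by(field).ascending());
    Page<User> result = userRepo.findAll(pageable);
    if (direction.equalsIgnoreCase(Sort.Direction.DESC.name())) {
        // Reverse only this page's contents; the other pages are untouched.
        List<User> reversed = new ArrayList<>(result.getContent());
        Collections.reverse(reversed);
        return new PageImpl<>(reversed, pageable, result.getTotalElements());
    }
    return result;
}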

How to compare GraphQL query tree in java

My Spring Boot application generates GraphQL queries; however, I want to compare such a query in my test.
So basically I have two strings, where the first one contains the actual value and the second one the expected value.
I want to parse them into a class or tree node so that I can check whether they are equal.
So even if the order of the fields is different, I need to know whether it's the same query.
So for example we have these two queries:
Actual:
query Query {
    car {
        brand
        color
        year
    }
    person {
        name
        age
    }
}
Expected:
query Query {
    person {
        age
        name
    }
    car {
        brand
        color
        year
    }
}
I expect these two queries to be considered semantically the same.
I tried
Parser parser = new Parser();
Document expectedDocument = parser.parseDocument(expectedValue);
Document actualDocument = parser.parseDocument(actualValue);
if (expectedDocument.isEqualTo(actualDocument)) {
    return MatchResult.exactMatch();
}
But I found out that it doesn't help, since isEqualTo is implemented like this:
public boolean isEqualTo(Node o) {
    if (this == o) {
        return true;
    } else {
        return o != null && this.getClass() == o.getClass();
    }
}
I know that with JSON I can use Jackson for this purpose and compare tree nodes, or parse it into a Java object and provide my own equals() implementation, but I don't know how to do that for GraphQL Java.
How can I parse my GraphQL query string into an object so that I can compare it?
I have recently solved this problem myself. You can reduce the query to a hash and compare the hashes, accounting for varied field order by using a sorted tree structure. The QueryTraverser and QueryReducer classes can be used to accomplish this.
First, create the QueryTraverser; the exact way you create it depends on your execution point. Assuming you do it in the AsyncExecutor with access to the ExecutionContext, the snippet below will suffice, but you can also do this in instrumentation or in the data fetcher itself if you so choose:
val queryTraverser = QueryTraverser.newQueryTraverser()
    .schema(context.graphQLSchema)
    .document(context.document)
    .operationName(context.operationDefinition.name)
    .variables(context.executionInput?.variables ?: emptyMap())
    .build()
Next you will need to provide an implementation of the reducer, and some accumulation object that can add each field to a tree structure. Here is a simplified version of an accumulation object:
class MyAccumulation {
    /**
     * A sorted map of the field paths of the query and their arguments.
     */
    private val fieldPaths = TreeMap<String, String>()

    /**
     * Add a given field and its arguments to the sorted map.
     */
    fun addFieldPath(path: String, arguments: String) {
        fieldPaths[path] = arguments
    }

    /**
     * Generate the query hash.
     */
    fun toHash(): String {
        val joinedFields = fieldPaths.entries
            .joinToString("") { "${it.key}[${it.value}]" }
        return HashingLibrary.hashingfunction(joinedFields)
    }
}
A sample reducer implementation would look like this:
class MyReducer : QueryReducer<MyAccumulation> {
    override fun reduceField(
        fieldEnvironment: QueryVisitorFieldEnvironment,
        acc: MyAccumulation
    ): MyAccumulation {
        if (fieldEnvironment.isTypeNameIntrospectionField) {
            return acc
        }
        // Get your field path. This should account for the same node appearing under
        // different parents, so recursively traverse the parent environment to construct it.
        val fieldPath = getFieldPath(fieldEnvironment)
        // Produce a reproducible stringified representation of the arguments.
        val arguments = getArguments(fieldEnvironment.arguments)
        acc.addFieldPath(fieldPath, arguments)
        return acc
    }
}
Finally, put it all together:
val queryHash = queryTraverser
    .reducePreOrder(MyReducer(), MyAccumulation())
    .toHash()
You can now generate a reproducible hash for a query that does not care about the query structure, only about the actual fields that were requested.
Note: these code snippets are in Kotlin, but they are easily transposable to Java.
Depending on how important this comparison is, you can inspect all the elements of the Document to determine equality.
If this is just to optimize and return the same result for the same input, I would totally recommend just comparing the strings and keeping two entries (one for each string input).
If you really want to go the deep-compare route, you can look at the selectionSet and compare each selection.
You can also give EqualsBuilder.reflectionEquals(Object, Object) a try, but it might inspect too deep (I tried it and it returned false).
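Building on the deep-compare idea: graphql-java also ships an AstSorter that normalizes field order (worth verifying for your graphql-java version), so one possible sketch is to sort both documents and compare their printed form:

import graphql.language.AstPrinter;
import graphql.language.AstSorter;
import graphql.language.Document;
import graphql.parser.Parser;

// Sketch: sort both ASTs, then compare the normalized printed output.
public static boolean semanticallyEqual(String expectedValue, String actualValue) {
    Parser parser = new Parser();
    AstSorter sorter = new AstSorter();
    Document expected = sorter.sort(parser.parseDocument(expectedValue));
    Document actual = sorter.sort(parser.parseDocument(actualValue));
    return AstPrinter.printAst(expected).equals(AstPrinter.printAst(actual));
}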

How to tell Java's TreeSet or HashMap that the order by which its contents are indexed is the insertion order (I don't want LinkedHashMap)?

E.g., I want my TreeSet / HashMap to store 1, 19, 3, 4, 2, 0 in that order, since that is the order in which they are added to the map.
I hear that LinkedHashMap is the go-to solution. But my question is: can we produce the same result with TreeSet/HashMap, with some modification to compareTo()?
No, you cannot do that. You must use LinkedHashMap or another custom map implementation; TreeMap and HashMap cannot support insertion order.
You could do it with a very, very complicated compareTo(...) method, but it would involve keeping track of the state of the tree yourself, manually, somewhere external to the tree. That's just re-creating a LinkedHashMap without using a LinkedHashMap: the same thing, but with much more complex, harder-to-read, harder-to-maintain code. There's no reason to do that.
"You cannot do that." Well, you can. All you need to do is add the insertion order to the object you're storing and account for it in the compareTo or comparator.
@Override
public int compareTo(Object o) {
    int returnValue = 0;
    if (o instanceof InsertionObject) {
        InsertionObject newObject = (InsertionObject) o;
        if (this.insertionOrder != newObject.insertionOrder) {
            if (this.insertionOrder < newObject.insertionOrder) {
                returnValue = -1;
            } else {
                returnValue = 1;
            }
        }
    } else {
        throw new RuntimeException("Insert error message here.");
    }
    return returnValue;
}
I'm not saying you should, but you could. This smells like an interview question or a homework problem, so they're probably just testing whether the poster understands what is going on inside the underlying collections.
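For illustration only, a sketch of that comparator idea with an explicit insertion counter (the class names are made up; LinkedHashSet/LinkedHashMap remain the sane choice):

import java.util.Comparator;
import java.util.TreeSet;

public class InsertionOrderDemo {
    static final class Entry {
        private static long counter = 0;          // not thread-safe; sketch only
        final long insertionOrder = counter++;
        final int value;
        Entry(int value) { this.value = value; }
    }

    public static void main(String[] args) {
        // The comparator ignores the value entirely and orders by insertion index.
        TreeSet<Entry> set =
                new TreeSet<>(Comparator.comparingLong((Entry e) -> e.insertionOrder));
        for (int i : new int[]{1, 19, 3, 4, 2, 0}) {
            set.add(new Entry(i));
        }
        set.forEach(e -> System.out.print(e.value + " "));   // prints 1 19 3 4 2 0
    }
}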

Creating method filters

In my code I have a List<Person>. Attributes of the objects in this list may include something along the lines of:
ID
First Name
Last Name
In a part of my application, I will allow the user to search for a specific person using any combination of those three values. At the moment, I have a switch statement that checks which fields are filled out and calls the method designated for that combination of values.
i.e.:
switch (typeOfSearch) {
    case 0: lookById(); break;
    case 1: lookByIdAndName(); break;
    case 2: lookByFirstName(); break;
}
and so on. There are actually 7 different types.
This leaves me with one method for each case. Is this a 'good' way to do this? Should I instead use a parameter or some sort of 'filter'? It may not make a difference, but I'm coding this in Java.
You can do something more elegant with maps and interfaces. Try this, for example:
interface LookUp {
    Person lookUpBy(HttpRequest req);
}

Map<Integer, LookUp> map = new HashMap<Integer, LookUp>();
map.put(0, new LookUpById());
map.put(1, new LookUpByIdAndName());
...
In your controller you can then do:
int type = Integer.parseInt(request.getParameter("type"));
Person person = map.get(type).lookUpBy(request);
This way you can quickly look up the right implementation with a map. Of course you could also use a long switch, but I feel this is more manageable.
If 'good' means "the language does it for me", no.
If 'good' means 'readable', I would define a match() method in Person that returns true if the object matches your search criteria. It's probably also a good idea to create a Criteria class that encapsulates the search criteria (which fields you are looking for and with which values) and pass it to match(Criteria criteria).
That approach quickly becomes unmanageable, since the number of combinations grows fast.
Instead, create a PersonFilter class holding all the possible query parameters, and visit each person in the list:
private class PersonFilter {
    private String id;
    private String firstName;
    private String lastName;
    // constructor omitted

    public boolean accept(Person p) {
        if (this.id != null && !this.id.equals(p.getId())) {
            return false;
        }
        if (this.firstName != null && !this.firstName.equals(p.getFirstName())) {
            return false;
        }
        if (this.lastName != null && !this.lastName.equals(p.getLastName())) {
            return false;
        }
        return true;
    }
}
The filtering is now implemented by
public List<Person> filter(List<Person> list, PersonFilter filter) {
    List<Person> result = new ArrayList<Person>();
    for (Person p : list) {
        if (filter.accept(p)) {
            result.add(p);
        }
    }
    return result;
}
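Usage would then look something like this (the PersonFilter constructor arguments are hypothetical; the stream version needs java.util.stream.Collectors):

// Only the fields the user filled in are non-null; the rest stay null and are ignored.
PersonFilter byFirstName = new PersonFilter(null, "John", null);   // id, firstName, lastName
List<Person> matches = filter(people, byFirstName);

// Java 8 equivalent of the filter(...) loop above:
List<Person> matches8 = people.stream()
        .filter(byFirstName::accept)
        .collect(Collectors.toList());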
At some point you should take a look at something like Lucene, which will give you the best scalability, manageability, and performance for this type of searching. Not knowing the amount of data you're dealing with, I'd only recommend it as a longer-term solution for a larger set of objects to search. It's an amazing tool!

Simple database-like collection class in Java

The problem: Maintain a bidirectional many-to-one relationship among java objects.
Something like the Google/Commons Collections bidi maps, but I want to allow duplicate values on the forward side, and have sets of the forward keys as the reverse side values.
It would be used something like this:
// maintaining disjoint areas on a gameboard. Location is a space on the
// gameboard; Regions refer to disjoint collections of Locations.
MagicalManyToOneMap<Location, Region> forward = // the game universe
Map<Region, Set<Location>> inverse = forward.getInverse(); // live, not a copy
Location parkplace = Game.chooseSomeLocation(...);
Region mine = forward.get(parkplace); // assume !null; should be O(log n)
Region other = Game.getSomeOtherRegion(...);
// moving a Location from one Region to another:
forward.put(parkplace, other);
// or equivalently:
inverse.get(other).add(parkplace); // should also be O(log n) or so
// expected consistency:
assert !inverse.get(mine).contains(parkplace);
assert forward.get(parkplace) == other;
// and this should be fast, not iterate every possible location just to filter for mine:
for (Location l : mine) { /* do something clever */ }
The simple Java approaches are: 1. maintain only one side of the relationship, either as a Map<Location, Region> or a Map<Region, Set<Location>>, and collect the inverse relationship by iteration when needed; or 2. make a wrapper that maintains both sides' Maps and intercepts all mutating calls to keep both sides in sync.
Approach 1 is O(n) instead of O(log n), which is becoming a problem. I started in on 2 and was in the weeds straightaway. (Do you know how many different ways there are to alter a Map entry?)
This is almost trivial in the SQL world (the Location table gets an indexed RegionID column). Is there something obvious I'm missing that makes it trivial for normal objects?
I might be misunderstanding your model, but if your Location and Region have correct equals() and hashCode() implementations, then Location -> Region is just a classic Map (multiple distinct keys can point to the same object value), and Region -> Set of Location is a Multimap (available in Google Collections/Guava). You could compose your own class with the proper add/remove methods to keep both sub-maps in sync.
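A rough sketch of that composition, assuming Guava's SetMultimap (the class and method names here are just for illustration; it needs com.google.common.collect.HashMultimap/SetMultimap and java.util.HashMap/Map/Set):

// Keeps both directions inside one class so they cannot drift apart.
class RegionIndex {
    private final Map<Location, Region> forward = new HashMap<>();
    private final SetMultimap<Region, Location> inverse = HashMultimap.create();

    public void put(Location loc, Region region) {
        Region old = forward.put(loc, region);
        if (old != null) {
            inverse.remove(old, loc);   // undo the previous mapping
        }
        inverse.put(region, loc);
    }

    public Region get(Location loc) {
        return forward.get(loc);
    }

    public Set<Location> locationsIn(Region region) {
        return inverse.get(region);     // live view; wrap it if callers shouldn't mutate it
    }
}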
Maybe overkill, but you could also use an in-memory SQL database (HSQLDB, etc.). It allows you to create indexes on many columns.
I think you could achieve what you need with the following two classes. While it does involve two maps, they are not exposed to the outside world, so there shouldn't be a way for them to get out of sync. As for storing the same "fact" twice, I don't think you'll get around that in any efficient implementation, whether the fact is stored twice explicitly as it is here, or implicitly as it would be when your database creates an index to make joins on your two tables more efficient. You can add new things to the MagicSet and it will update both mappings, or you can add things to the MagicMapper, which will then update the inverse map automatically. The girlfriend is calling me to bed now, so I couldn't run this through a compiler; it should be enough to get you started. What puzzle are you trying to solve?
public class MagicSet<L, R> {
    private Map<L, R> forward;
    private R r;
    private Set<L> set;

    public MagicSet(Map<L, R> forward, R r) {
        this.forward = forward;
        this.r = r;
        this.set = new HashSet<L>();
    }

    public void add(L l) {
        set.add(l);
        forward.put(l, r);
    }

    public void remove(L l) {
        set.remove(l);
        forward.remove(l);
    }

    public int size() {
        return set.size();
    }

    public boolean contains(L l) {
        return set.contains(l);
    }

    // Caution: do not use the remove method from this iterator. If this class were going to be
    // reused often, you would want to return a wrapped iterator that handles remove properly.
    // In fact, if you did that, I think you could then extend AbstractSet and MagicSet would
    // fully implement java.util.Set.
    public Iterator<L> iterator() {
        return set.iterator();
    }
}
public class MagicMapper<L, R> { // note that it doesn't implement Map, though it could with some extra work; I don't get the impression you need that though
    private Map<L, R> forward;
    private Map<R, MagicSet<L, R>> inverse;

    public MagicMapper() {
        forward = new HashMap<L, R>();
        inverse = new HashMap<R, MagicSet<L, R>>();
    }

    public R getForward(L key) {
        return forward.get(key);
    }

    public MagicSet<L, R> getBackward(R key) {
        return inverse.get(key); // this assumes you want a null if you use a key that has
                                 // no mapping; otherwise you'd return a blank MagicSet
    }

    public void put(L l, R r) {
        R oldVal = forward.get(l);
        // if the L already belonged to an R, we need to undo that mapping
        MagicSet<L, R> oldSet = inverse.get(oldVal);
        if (oldSet != null) {
            oldSet.remove(l);
        }
        // now get the set the R belongs to, and add the L to it
        MagicSet<L, R> newSet = inverse.get(r);
        if (newSet == null) {
            newSet = new MagicSet<L, R>(forward, r);
            inverse.put(r, newSet);
        }
        newSet.add(l); // magically updates the "forward" map
    }
}
