How to compare GraphQL query trees in Java

My Spring Boot application generates GraphQL queries, and I want to compare the generated query against an expected query in my test.
So basically I have two strings: the first contains the actual value and the second the expected value.
I want to parse them into a class or tree node so that I can check whether they are equal.
Even if the order of the fields differs, I need to know whether it is the same query.
For example, take these two queries:
Actual:
query Query {
  car {
    brand
    color
    year
  }
  person {
    name
    age
  }
}
Expected:
query Query {
  person {
    age
    name
  }
  car {
    brand
    color
    year
  }
}
I expect both queries to be considered semantically the same.
I tried:
Parser parser = new Parser();
Document expectedDocument = parser.parseDocument(expectedValue);
Document actualDocument = parser.parseDocument(actualValue);
if (expectedDocument.isEqualTo(actualDocument)) {
    return MatchResult.exactMatch();
}
But I found out that this does not help, since isEqualTo only does this:
public boolean isEqualTo(Node o) {
    if (this == o) {
        return true;
    } else {
        return o != null && this.getClass() == o.getClass();
    }
}
I know that with JSON I can use Jackson for this purpose and compare tree nodes, or parse it into a Java object and provide my own equals() implementation, but I don't know how to do that for GraphQL Java.
How can I parse my GraphQL query string into an object so that I can compare it?

I have recently solved this problem myself. You can reduce the query to a hash and compare the values, and you can account for varied field order by using a tree structure. The QueryTraverser and QueryReducer classes can be used to accomplish this.
First, create the QueryTraverser. The exact way to create it depends on your execution point. Assuming you are doing it in the AsyncExecutor with access to the ExecutionContext, the snippet below will suffice, but you can also do this in instrumentation or in the data fetcher itself:
val queryTraverser = QueryTraverser.newQueryTraverser()
    .schema(context.graphQLSchema)
    .document(context.document)
    .operationName(context.operationDefinition.name)
    .variables(context.executionInput?.variables ?: emptyMap())
    .build()
Next you will need to provide an implementation of the reducer, and some accumulation object that can add each field to a tree structure. Here is a simplified version of an accumulation object:
class MyAccumulation {
    /**
     * A sorted map from the field path of the query to its arguments.
     */
    private val fieldPaths = TreeMap<String, String>()

    /**
     * Add a given field and its arguments to the sorted map.
     */
    fun addFieldPath(path: String, arguments: String) {
        fieldPaths[path] = arguments
    }

    /**
     * Generate the query hash.
     */
    fun toHash(): String {
        val joinedFields = fieldPaths.entries
            .joinToString("") { "${it.key}[${it.value}]" }
        return HashingLibrary.hashingfunction(joinedFields)
    }
}
A sample reducer implementation would look like this:
class MyReducer : QueryReducer<MyAccumulation> {
    override fun reduceField(
        fieldEnvironment: QueryVisitorFieldEnvironment,
        acc: MyAccumulation
    ): MyAccumulation {
        if (fieldEnvironment.isTypeNameIntrospectionField) {
            return acc
        }
        // Get your field path. This should account for the same node having
        // different parents, so recursively traverse the parent environments
        // to construct it.
        val fieldPath = getFieldPath(fieldEnvironment)
        // Provide a reproducible stringified representation of the arguments.
        val arguments = getArguments(fieldEnvironment.arguments)
        acc.addFieldPath(fieldPath, arguments)
        return acc
    }
}
Finally, put it all together:
val queryHash = queryTraverser
    .reducePreOrder(MyReducer(), MyAccumulation())
    .toHash()
You can now generate a reproducible hash for a query that does not depend on the query's structure, only on the actual fields that were requested.
Note: These code snippets are in kotlin but are transposable to Java.
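The getFieldPath and getArguments helpers above are left to the reader; a minimal Java sketch (my own names, assuming graphql-java's QueryVisitorFieldEnvironment API) could look like this:
import graphql.analysis.QueryVisitorFieldEnvironment;

import java.util.Map;
import java.util.TreeMap;

final class FieldPathSupport {

    // Builds a "parent/child" path by walking up the parent environments,
    // so the same field name under different parents yields different paths.
    static String getFieldPath(QueryVisitorFieldEnvironment environment) {
        StringBuilder path = new StringBuilder(environment.getField().getName());
        QueryVisitorFieldEnvironment parent = environment.getParentEnvironment();
        while (parent != null) {
            path.insert(0, parent.getField().getName() + "/");
            parent = parent.getParentEnvironment();
        }
        return path.toString();
    }

    // Produces a reproducible argument string by sorting the arguments by name.
    static String getArguments(Map<String, Object> arguments) {
        return new TreeMap<>(arguments).toString();
    }
}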

Depending on how important this comparison is, you can inspect all the elements of the Document to determine equality.
If this is only to optimize and return the same result for the same input, I would totally recommend just comparing the strings and keeping two entries (one for each string input).
If you really want to go the deep-compare route, you can check the selectionSet and compare each selection; a sketch of that idea follows at the end of this answer.
You can also give EqualsBuilder.reflectionEquals(Object, Object) a try, but it might inspect too deep (I tried it and it returned false).
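A minimal sketch of the deep-compare idea with graphql-java: flatten each parsed Document into a sorted set of field paths and compare the sets. This deliberately ignores arguments, aliases, fragments and variables, which you would need to fold into the path string as well if they matter for your tests.
import graphql.language.Definition;
import graphql.language.Document;
import graphql.language.Field;
import graphql.language.OperationDefinition;
import graphql.language.Selection;
import graphql.parser.Parser;

import java.util.List;
import java.util.TreeSet;

public final class QueryShapeComparator {

    public static boolean sameSelections(String expectedQuery, String actualQuery) {
        Parser parser = new Parser();
        return fieldPaths(parser.parseDocument(expectedQuery))
                .equals(fieldPaths(parser.parseDocument(actualQuery)));
    }

    // Collects every field as a "parent.child" path into a sorted set,
    // so differently ordered selections end up identical.
    private static TreeSet<String> fieldPaths(Document document) {
        TreeSet<String> paths = new TreeSet<>();
        for (Definition definition : document.getDefinitions()) {
            if (definition instanceof OperationDefinition) {
                OperationDefinition operation = (OperationDefinition) definition;
                collect(operation.getSelectionSet().getSelections(), "", paths);
            }
        }
        return paths;
    }

    private static void collect(List<Selection> selections, String prefix, TreeSet<String> paths) {
        for (Selection selection : selections) {
            if (selection instanceof Field) {
                Field field = (Field) selection;
                String path = prefix.isEmpty() ? field.getName() : prefix + "." + field.getName();
                paths.add(path);
                if (field.getSelectionSet() != null) {
                    collect(field.getSelectionSet().getSelections(), path, paths);
                }
            }
        }
    }
}
With the two queries from the question, both documents reduce to the same sorted set of paths, so sameSelections returns true.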

Related

Hazelcast not working correctly with SqlPredicate and Index on optional field

We are storing complex objects in Hazelcast maps and need the possibility to search for objects not only based on the key but also on the content of these complex objects. In order to not take too large a performance hit, we are using indices on those search terms.
We are also using spring-data-hazelcast which provides repositories that allow us to use findByAbcXyz() type semantic queries. For some of the more complex queries we are using the @Query annotation (which spring-data-hazelcast internally translates to SqlPredicates).
We have now encountered an issue where under certain situations these #Query based search methods did not return any values, even if we could verify that the searched objects did in fact exist in the map.
I have managed to reproduce this issue with core hazelcast (i.e. without the use of spring-data-hazelcast).
Here is our object structure:
BetriebspunktKey.java
public class BetriebspunktKey implements Serializable {
    private Integer uicLand;
    private Integer nummer;

    public BetriebspunktKey(final Integer uicLand, final Integer nummer) {
        this.uicLand = uicLand;
        this.nummer = nummer;
    }

    public Integer getUicLand() {
        return uicLand;
    }

    public Integer getNummer() {
        return nummer;
    }
}
Betriebspunkt.java
public class Betriebspunkt implements Serializable {
    private BetriebspunktKey key;
    private List<BetriebspunktVersion> versionen;

    public Betriebspunkt(final BetriebspunktKey key, final List<BetriebspunktVersion> versionen) {
        this.key = key;
        this.versionen = versionen;
    }

    public BetriebspunktKey getKey() {
        return key;
    }
}
BetriebspunktVersion.java
public class BetriebspunktVersion implements Serializable {
    private List<BetriebspunktKey> zusatzbetriebspunkte;

    public BetriebspunktVersion(final List<BetriebspunktKey> zusatzbetriebspunkte) {
        this.zusatzbetriebspunkte = zusatzbetriebspunkte;
    }
}
In my main file, I am now setting up hazelcast:
Config config = new Config();
final MapConfig mapConfig = config.getMapConfig("points");
mapConfig.addMapIndexConfig(new MapIndexConfig("versionen[any].zusatzbetriebspunkte[any].nummer", false));
HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
IMap<BetriebspunktKey, Betriebspunkt> map = instance.getMap("points");
I am also preparing my search criteria for later on:
Predicate equalPredicate = Predicates.equal("versionen[any].zusatzbetriebspunkte[any].nummer", 53090);
Predicate sqlPredicate = new SqlPredicate("versionen[any].zusatzbetriebspunkte[any].nummer=53090");
Next, I am creating two objects, one with the "full depth" of information, the other does not contain any "zusatzbetriebspunkte":
final Betriebspunkt abc = new Betriebspunkt(
        new BetriebspunktKey(80, 166),
        Collections.singletonList(new BetriebspunktVersion(
                Collections.singletonList(new BetriebspunktKey(80, 53090))
        ))
);
final Betriebspunkt def = new Betriebspunkt(
        new BetriebspunktKey(83, 141),
        Collections.singletonList(new BetriebspunktVersion(
                Collections.emptyList()
        ))
);
Here is where things become interesting. If I first insert the "full" object into the map, the search using both the EqualPredicate and the SqlPredicate works:
map.put(abc.getKey(), abc);
map.put(def.getKey(), def);
Collection<Betriebspunkt> equalResults = map.values(equalPredicate);
Collection<Betriebspunkt> sqlResults = map.values(sqlPredicate);
assertEquals(1, equalResults.size()); // contains "abc"
assertEquals(1, sqlResults.size()); // contains "abc"
However, if I insert the objects into my map in reverse order (i.e. first the "partial" object and then the "full" one), only the EqualPredicate works correctly, the SqlPredicate returns an empty list, no matter what the content of the map or the search criteria.
map.put(def.getKey(), def);
map.put(abc.getKey(), abc);
Collection<Betriebspunkt> equalResults = map.values(equalPredicate);
Collection<Betriebspunkt> sqlResults = map.values(sqlPredicate);
assertEquals(1, equalResults.size()); // contains "abc"
assertEquals(1, sqlResults.size()); // --> this fails, it returns an empty list
What is the reason for this behaviour? It looks like a bug in the hazelcast code.
The reason for failing
After a lot of debugging, I have found the reason for this issue. The reasons can indeed be found in the hazelcast code.
When putting a value into a hazelcast map, DefaultRecordStore.putInternal is called. At the end of this method DefaultRecordStore.saveIndex is called, which finds the corresponding indexes and then calls Indexes.saveEntryIndex. This method iterates over each index and calls InternalIndex.saveEntryIndex (or rather its implementation, IndexImpl.saveEntryIndex). The interesting lines of that method are the following:
if (this.converter == null || this.converter == TypeConverters.NULL_CONVERTER) {
    this.converter = entry.getConverter(this.attributeName);
}
Apparently each index stores a converter when the first element is put into the map. Looking at QueryableEntry.getConverter explains what happens:
TypeConverter getConverter(String attributeName) {
    Object attribute = this.getAttributeValue(attributeName);
    if (attribute == null) {
        return TypeConverters.NULL_CONVERTER;
    } else {
        AttributeType attributeType = this.extractAttributeType(attributeName, attribute);
        return attributeType == null ? TypeConverters.IDENTITY_CONVERTER : attributeType.getConverter();
    }
}
When first inserting the "full" object, extractAttributeType() will follow the "path" of our index definition "versionen[any].zusatzbetriebspunkte[any].nummer" and find out that nummer is an integer type; accordingly, a TypeConverters.IntegerConverter will be returned and stored.
When first inserting the "partial" object, "zusatzbetriebspunkte[any]" is empty, and there is no way for extractAttributeType to find out what type nummer has. It therefore returns null, which means that TypeConverters.IdentityConverter is used.
Also, whenever a "full" element is inserted, an entry is written into the index map using nummer as the key, i.e. the index map is keyed by an Integer.
So much for writing to the map. Let's now look at how data is read from the map. When calling map.values(predicate) we will eventually get to QueryRunner.runUsingGlobalIndexSafely which contains a line:
Collection<QueryableEntry> entries = indexes.query(predicate);
This will in turn, after some boilerplate code, call
Set<QueryableEntry> result = indexAwarePredicate.filter(queryContext);
For both of our predicates we will eventually get to IndexImpl.getRecords() which looks as follows:
public Set<QueryableEntry> getRecords(Comparable attributeValue) {
    long timestamp = this.stats.makeTimestamp();
    if (this.converter == null) {
        this.stats.onIndexHit(timestamp, 0L);
        return new SingleResultSet((Map) null);
    } else {
        Set<QueryableEntry> result = this.indexStore.getRecords(this.convert(attributeValue));
        this.stats.onIndexHit(timestamp, (long) result.size());
        return result;
    }
}
The crucial call is this.convert(attributeValue) where attributeValue is the value of the predicate.
If we compare our two predicates, we can see that the EqualPredicate has two members:
attributeName = "versionen[any].zusatzbetriebspunkte[any].nummer"
value = {Integer} 53090
The SqlPredicate contains the initial string (which we passed to its constructor), which at construction was also parsed and mapped to an internal EqualPredicate (which is eventually used when evaluating the predicate and passed to getRecords() above):
sql = "versionen[any].zusatzbetriebspunkte[any].nummer=53090"
predicate = {EqualPredicate}
attributeName = "versionen[any].zusatzbetriebspunkte[any].nummer"
value = {String} "53090"
And this explains why the manually created EqualPredicate works in both cases: Its value is an integer. When passed to the converter, it does not matter whether it is the IntegerConverter or the IdentityConverter, as both will return the integer which can then be used as key in the index-map (which uses an integer as key).
With the SqlPredicate however, the value is a String. If this is passed to the IntegerConverter, it is converted to its corresponding integer value and accessing the index-map works. If it is passed to the IdentityConverter, the string is returned by the conversion and trying to access the index-map with a string will never find any results.
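The effect is the same as querying an Integer-keyed map with a String key; a trivial illustration of this mismatch:
Map<Object, String> index = new HashMap<>();
index.put(53090, "abc");   // IntegerConverter path: the key is stored as an Integer
index.get(53090);          // finds "abc": Integer equals Integer
index.get("53090");        // IdentityConverter path: a String never equals an Integer, so null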
A possible solution
How can we solve this issue? I see several possibilities:
insert a "fully built" dummy value into our map during startup to ensure the converter is correctly initialised. While this works, it is ugly and not maintenance friendly
avoid using SqlPredicate and use the integer-based EqualPredicate. This is not an option when working with spring-data-hazelcast, as it always converts @Query based searches to SqlPredicates. We could of course use hazelcast directly and circumvent the spring-data wrapper, but while that would work it means having two ways of accessing hazelcast, which is also not very maintainable
use hazelcast's ValueExtractor class. This is the elegant solution that works both natively and using spring-data-hazelcast. I will outline what that looks like:
First we need to implement a value extractor which returns all zusatzbetriebspunkte of our Betriebspunkt in a form suitable for us:
public class BetriebspunktExtractor extends ValueExtractor<Betriebspunkt, String> implements Serializable {
    @Override
    public void extract(final Betriebspunkt betriebspunkt, final String argument, final ValueCollector valueCollector) {
        betriebspunkt.getVersionen().stream()
                .map(BetriebspunktVersion::getZusatzbetriebspunkte)
                .flatMap(List::stream)
                .map(zbp -> zbp.getUicLand() + "_" + zbp.getNummer())
                .forEach(valueCollector::addObject);
    }
}
You'll notice that I am not only returning the nummer field but also including the uicLand field; this is something we really wanted but couldn't get working using the "...[any]..." notation. We could of course return only the nummer if we wanted exactly the same behavior as outlined above.
Now we need to modify our hazelcast configuration slightly:
Config config = new Config();
final MapConfig mapConfig = config.getMapConfig("points");
//mapConfig.addMapIndexConfig(new MapIndexConfig("versionen[any].zusatzbetriebspunkte[any].nummer", false));
mapConfig.addMapIndexConfig(new MapIndexConfig("zusatzbetriebspunkt", false));
mapConfig.addMapAttributeConfig(new MapAttributeConfig("zusatzbetriebspunkt", BetriebspunktExtractor.class.getName()));
You'll notice that the "long" index definition using the "...[any]..." notation is no longer needed.
Now we can use this "pseudo attribute" to query our values and it doesn't matter in which order the objects have been added to the map:
Predicate keyPredicate = Predicates.equal("zusatzbetriebspunkt", "80_53090");
Collection<Betriebspunkt> keyResults = map.values(keyPredicate);
assertEquals(1, keyResults.size()); // always contains "abc"
And in our spring-data-hazelcast repository we can now do this:
@Query("zusatzbetriebspunkt=%d_%d")
List<StammdatenBetriebspunkt> findByZusatzbetriebspunkt(Integer uicLand, Integer nummer);
If you do not need to use spring-data-hazelcast, instead of returning a string to the ValueCollector, you could return the BetriebspunktKey directly and then use it in the predicate as well. That would be the cleanest solution:
public class BetriebspunktExtractor extends ValueExtractor<Betriebspunkt, String> implements Serializable {
    @Override
    public void extract(final Betriebspunkt betriebspunkt, final String argument, final ValueCollector valueCollector) {
        betriebspunkt.getVersionen().stream()
                .map(BetriebspunktVersion::getZusatzbetriebspunkte)
                .flatMap(List::stream)
                //.map(zbp -> zbp.getUicLand() + "_" + zbp.getNummer())
                .forEach(valueCollector::addObject);
    }
}
and then
Predicate keyPredicate = Predicates.equal("zusatzbetriebspunkt", new BetriebspunktKey(80, 53090));
However, for this to work, BetriebspunktKey needs to implement Comparable and must also provide its own equals and hashCode methods.
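A minimal sketch of what that could look like, reusing the fields from the class at the top of the question (the exact compareTo ordering is an assumption):
import java.io.Serializable;
import java.util.Objects;

public class BetriebspunktKey implements Serializable, Comparable<BetriebspunktKey> {
    private final Integer uicLand;
    private final Integer nummer;

    public BetriebspunktKey(final Integer uicLand, final Integer nummer) {
        this.uicLand = uicLand;
        this.nummer = nummer;
    }

    public Integer getUicLand() { return uicLand; }
    public Integer getNummer() { return nummer; }

    @Override
    public boolean equals(final Object o) {
        if (this == o) return true;
        if (!(o instanceof BetriebspunktKey)) return false;
        final BetriebspunktKey other = (BetriebspunktKey) o;
        return Objects.equals(uicLand, other.uicLand) && Objects.equals(nummer, other.nummer);
    }

    @Override
    public int hashCode() {
        return Objects.hash(uicLand, nummer);
    }

    @Override
    public int compareTo(final BetriebspunktKey other) {
        // Order by country first, then by number.
        final int byLand = uicLand.compareTo(other.uicLand);
        return byLand != 0 ? byLand : nummer.compareTo(other.nummer);
    }
}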

find Unique Object matching a property value using Java 7

I have a list of Entity objects, and all entities have a unique name. Currently, to get a unique value, I am using a Map of entity name to Entity object. I don't want to use a map just for filtering purposes.
I found one solution, but it uses Java 8.
There is an API in Google Guava, com.google.common.collect.Sets.filter(), but it returns a Set and in this case I have to get the 0th element.
Can anyone suggest a better approach?
Using the Map approach gives you a time benefit, as lookup time is reduced, but it uses more memory.
If you are open to Guava, try something like:
Optional<Entity> result = FluentIterable.from(entityList).firstMatch(new Predicate<Entity>() {
    @Override
    public boolean apply(Entity entity) {
        return entity.getName().equals(input); // input can be a parameter of the enclosing method
    }
});
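Note that this is Guava's Optional (java.util.Optional is not available on Java 7); you can unwrap the result like this:
Entity match = result.orNull(); // null if no entity with that name exists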
Something like this can solve it. Try the method below:
public static Entity findByName(String name, List<Entity> entities) {
    if (entities != null && name != null) {
        for (Entity e : entities) {
            if (name.equals(e.getName())) {
                return e;
            }
        }
    }
    return null;
}

Efficient way of mimicking hibernate criteria on cached map

I have just written code to cache a table in memory (a simple Java HashMap). One piece of code that I am trying to replace finds objects based on criteria: it receives multiple field parameters, and if those fields are not empty and not null, they are added to the Hibernate query criteria.
To replace this, what I am thinking of doing is:
For each valid param (not null and not empty) I will create a HashSet which satisfies that criterion.
Once I am done making hash sets for all valid criteria, I will call Set.retainAll(second_set) on all sets, so that at the end I have only the set which is the intersection of all valid criteria (a rough sketch of this idea is shown below).
Does it sound like the best approach or is there any better way to implement this?
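For reference, a rough sketch of the set-intersection idea described above, borrowing the criteria and cache names from the edit below:
// One candidate set per active criterion (only criteria1 shown; the others follow the same pattern).
List<Set<MyObj>> candidateSets = new ArrayList<>();
if (criteria1 != null) {
    Set<MyObj> byCriteria1 = new HashSet<>();
    for (MyObj obj : objCache.values()) {
        if (criteria1.equals(obj.getCriteria1())) {
            byCriteria1.add(obj);
        }
    }
    candidateSets.add(byCriteria1);
}
// ... build one set per non-blank criterion in the same way ...

// Intersect all candidate sets: the first set seeds the result, the rest narrow it down.
Set<MyObj> intersection = new HashSet<>();
if (!candidateSets.isEmpty()) {
    intersection.addAll(candidateSets.get(0));
    for (Set<MyObj> candidates : candidateSets.subList(1, candidateSets.size())) {
        intersection.retainAll(candidates);
    }
}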
EDIT
Though my original question is still valid and I am looking for that answer, I ended up implementing it in the following way. The reason is that it was kind of cumbersome with sets: after creating all the sets, I had to first figure out which set was non-empty so that retainAll could be called, which resulted in lots of if-else statements. My current implementation is like this:
private List<MyObj> getCachedObjs(Long criteria1, String criteria2, String criteria3) {
    List<MyObj> results = new ArrayList<>();
    int totalActiveFilters = 0;
    if (criteria1 != null) {
        totalActiveFilters++;
    }
    if (!StringUtil.isBlank(criteria2)) {
        totalActiveFilters++;
    }
    if (!StringUtil.isBlank(criteria3)) {
        totalActiveFilters++;
    }
    for (Map.Entry<Long, MyObj> objEntry : objCache.entrySet()) {
        MyObj obj = objEntry.getValue();
        int matchedFilters = 0;
        if (criteria1 != null) {
            if (obj.getCriteria1().equals(criteria1)) {
                matchedFilters++;
            }
        }
        if (!StringUtil.isBlank(criteria2)) {
            if (obj.getCriteria2().equals(criteria2)) {
                matchedFilters++;
            }
        }
        if (!StringUtil.isBlank(criteria3)) {
            if (obj.getCriteria3().equals(criteria3)) {
                matchedFilters++;
            }
        }
        if (matchedFilters == totalActiveFilters) {
            results.add(obj);
        }
    }
    return results;
}

java enum string matching

I have an enum as follows:
public enum ServerTask {
    HOOK_BEFORE_ALL_TASKS("Execute"),
    COPY_MASTER_AND_SNAPSHOT_TO_HISTORY("Copy master db"),
    PROCESS_CHECKIN_QUEUE("Process Check-In Queue"),
    ...
}
I also have a string (let's say string = "Execute") which I would like to turn into an instance of the ServerTask enum, based on which enum value's string it matches. Is there a better way to do this than doing equality checks between the string I want to match and every item in the enum? It seems like this would be a lot of if statements, since my enum is fairly large.
At some level you're going to have to iterate over the entire set of enumerations that you have and compare them for equality, either via a mapping structure (populated up front) or through a rudimentary loop.
It's fairly easy to accomplish with a rudimentary loop, so I don't see any reason why you wouldn't want to go this route. The code snippet below assumes the field is named friendlyTask.
public static ServerTask forTaskName(String friendlyTask) {
    for (ServerTask serverTask : ServerTask.values()) {
        if (serverTask.friendlyTask.equals(friendlyTask)) {
            return serverTask;
        }
    }
    return null;
}
The caveat to this approach is that the data won't be stored internally, and depending on how many enums you actually have and how many times you want to invoke this method, it would perform slightly worse than initializing with a map.
However, this approach is the most straightforward. If you find yourself in a position where you have several hundred enums (even more than 20 is a smell to me), consider what it is those enumerations represent and what one should do to break it out a bit more.
Create a static reverse lookup map.
public enum ServerTask {
    HOOK_BEFORE_ALL_TASKS("Execute"),
    COPY_MASTER_AND_SNAPSHOT_TO_HISTORY("Copy master db"),
    PROCESS_CHECKIN_QUEUE("Process Check-In Queue"),
    ...
    FINAL_ITEM("Final item");

    // For static data always prefer to use Guava's Immutable library
    // http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/ImmutableMap.html
    static ImmutableMap<String, ServerTask> REVERSE_MAP;

    static
    {
        ImmutableMap.Builder<String, ServerTask> reverseMapBuilder =
            ImmutableMap.builder();
        // Build the reverse map by iterating all the values of your enum
        for (ServerTask cur : values())
        {
            reverseMapBuilder.put(cur.taskName, cur);
        }
        REVERSE_MAP = reverseMapBuilder.build();
    }

    // Now the lookup method
    public static ServerTask fromTaskName(String friendlyName)
    {
        // Will return the enum constant if friendlyName matches what you stored
        // with the enum
        return REVERSE_MAP.get(friendlyName);
    }
}
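Usage is then just a map lookup (returning null for an unknown name):
ServerTask task = ServerTask.fromTaskName("Execute"); // ServerTask.HOOK_BEFORE_ALL_TASKS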
If you have to get the enum from the String often, then creating a reverse map like Alexander suggests might be worth it.
If you only have to do it once or twice, looping over the values with a single if statement might be your best bet (like Nizil's comment insinuates)
for (ServerTask task : ServerTask.values())
{
    // Check here if the strings match
}
However there is a way to not iterate over the values at all. If you can ensure that the name of the enum instance and its String value are identical, then you can use:
ServerTask.valueOf("EXECUTE")
which will give you ServerTask.EXECUTE.
Refer this answer for more info.
Having said that, I would not recommend this approach unless you're OK with instances having the same String representations as their identifiers and yours is a performance-critical application, which is most often not the case.
You could write a method like this:
static ServerTask getServerTask(String name)
{
    switch (name)
    {
        case "Execute": return HOOK_BEFORE_ALL_TASKS;
        case "Copy master db": return COPY_MASTER_AND_SNAPSHOT_TO_HISTORY;
        case "Process Check-In Queue": return PROCESS_CHECKIN_QUEUE;
        default: return null; // needed so the method compiles when nothing matches
    }
}
It's smaller, but not automatic like @Alexander_Pogrebnyak's solution. If the enum changes, you would have to update the switch.

Creating method filters

In my code I have a List<Person>. Attributes to the objects in this list may include something along the lines of:
ID
First Name
Last Name
In a part of my application, I will be allowing the user to search for a specific person by using any combination of those three values. At the moment, I have a switch statement simply checking which fields are filled out, and calling the method designated for that combination of values.
i.e.:
switch typeOfSearch
if 0, lookById()
if 1, lookByIdAndName()
if 2, lookByFirstName()
and so on. There are actually 7 different types.
This makes me have one method for each statement. Is this a 'good' way to do this? Is there a way that I should use a parameter or some sort of 'filter'? It may not make a difference, but I'm coding this in Java.
You can do something more elegant with maps and interfaces. Try this, for example:
interface LookUp {
    Person lookUpBy(HttpRequest req);
}
Map<Integer, LookUp> map = new HashMap<Integer, LookUp>();
map.put(0, new LookUpById());
map.put(1, new LookUpByIdAndName());
...
Then in your controller you can do:
int type = Integer.parseInt(request.getParameter("type"));
Person person = map.get(type).lookUpBy(request);
This way you can quickly look up the method with a map. Of course you can also use a long switch but I feel this is more manageable.
If good means "the language does it for me", no.
If good means 'readable', I would define in Person a match() method that returns true if the object matches your search criteria. It is also probably a good idea to create a Criteria class that encapsulates the search criteria (which fields you are looking for and with which values) and pass it to match(Criteria criteria); a sketch follows below.
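A small sketch of that idea, with field names borrowed from the question; a null field means it is not part of the search:
class Criteria {
    String id;
    String firstName;
    String lastName;
}

class Person {
    String id;
    String firstName;
    String lastName;

    boolean match(Criteria criteria) {
        return (criteria.id == null || criteria.id.equals(id))
            && (criteria.firstName == null || criteria.firstName.equals(firstName))
            && (criteria.lastName == null || criteria.lastName.equals(lastName));
    }
}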
This way of doing things quickly becomes unmanageable, since the number of combinations quickly becomes huge.
Create a PersonFilter class holding all the possible query parameters, and visit each person in the list:
private class PersonFilter {
    private String id;
    private String firstName;
    private String lastName;

    // constructor omitted

    public boolean accept(Person p) {
        if (this.id != null && !this.id.equals(p.getId())) {
            return false;
        }
        if (this.firstName != null && !this.firstName.equals(p.getFirstName())) {
            return false;
        }
        if (this.lastName != null && !this.lastName.equals(p.getLastName())) {
            return false;
        }
        return true;
    }
}
The filtering is now implemented by:
public List<Person> filter(List<Person> list, PersonFilter filter) {
    List<Person> result = new ArrayList<Person>();
    for (Person p : list) {
        if (filter.accept(p)) {
            result.add(p);
        }
    }
    return result;
}
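Usage might then look like this, assuming a three-argument constructor (id, firstName, lastName), which the snippet above omits:
List<Person> smiths = filter(people, new PersonFilter(null, null, "Smith")); // search by last name only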
At some point you should take a look at something like Lucene, which will give you the best scalability, manageability and performance for this type of searching. Not knowing the amount of data you're dealing with, I only recommend this as a longer-term solution for a larger set of objects to search. It's an amazing tool!
