Has anyone used the FilteredQueryBuilder class to create a Cypher query in Java?
I'm trying to create this query using neo4j-ogm:
MATCH (n:Message) WHERE n.messageContext = 'RECEBER_BOLETO_EM_ABERTO'
MATCH (n)-[r0:NEXT]->(m) WHERE r0.response = 'SIM'
return m
Map<String, Object> parameters = new HashMap<>();
parameters.put("messageContext", "RECEBER_BOLETO_EM_ABERTO");
parameters.put("response", "SIM");
Filters filtersNode = new Filters();
Filter filterStartNode = new Filter("messageContext", ComparisonOperator.EQUALS, "RECEBER_BOLETO_EM_ABERTO");
filterStartNode.setNestedEntityTypeLabel("Message");
filterStartNode.setNestedPropertyName("messageContext");
filterStartNode.setRelationshipDirection(Relationship.OUTGOING);
filterStartNode.setBooleanOperator(BooleanOperator.AND);
filtersNode.add(filterStartNode);
Filter filterEndNode = new Filter("response", ComparisonOperator.EQUALS, "SIM");
filterEndNode.setNestedPropertyName("response");
filterEndNode.setRelationshipDirection(Relationship.TYPE);
filterEndNode.setBooleanOperator(BooleanOperator.AND);
filtersNode.add(filterEndNode);
FilteredQuery fq = FilteredQueryBuilder.buildRelationshipQuery("NEXT", filtersNode);
fq.setReturnClause("return m");
The builder class doesn't bind the parameters into the Cypher query, and the following exception is thrown:
org.neo4j.ogm.exception.CypherException: Error executing Cypher; Code:
Neo.ClientError.Statement.ParameterMissing; Description: Expected a
parameter named messageContext_messageContext_0
Thanks in advance.
The query builders are internal classes of OGM. Don't rely on them; they could change in the future. Using Filters with the OGM session is fine, though.
To build custom Cypher queries, you might be interested in the Cypher DSL.
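For reference, here is a minimal sketch of how the query from the question might look with Neo4j's Cypher DSL. The class and method names are from org.neo4j.cypherdsl.core and may vary by version; treat this as an illustration, not verified against a specific release:

```java
import org.neo4j.cypherdsl.core.Cypher;
import org.neo4j.cypherdsl.core.Node;
import org.neo4j.cypherdsl.core.Relationship;
import org.neo4j.cypherdsl.core.Statement;

public class CypherDslSketch {
    public static void main(String[] args) {
        // MATCH (n:Message)-[r0:NEXT]->(m)
        Node n = Cypher.node("Message").named("n");
        Node m = Cypher.anyNode("m");
        Relationship r0 = n.relationshipTo(m, "NEXT").named("r0");

        // WHERE n.messageContext = 'RECEBER_BOLETO_EM_ABERTO' AND r0.response = 'SIM' RETURN m
        Statement statement = Cypher.match(r0)
                .where(n.property("messageContext").isEqualTo(Cypher.literalOf("RECEBER_BOLETO_EM_ABERTO")))
                .and(r0.property("response").isEqualTo(Cypher.literalOf("SIM")))
                .returning(m)
                .build();

        System.out.println(statement.getCypher());
    }
}
```

Unlike the internal FilteredQueryBuilder, the generated statement is part of the DSL's public API, so it is safe to build on.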
I have indexed an IntPoint field using Lucene, which I am able to fetch using the query below:
Query query = IntPoint.newRangeQuery("field1", 0, 40);
TopDocs topDocs = searcher.search(query, 10);
System.out.println(topDocs.totalHits);
It fetches the relevant documents correctly.
However, if I build the query using a parser, it doesn't work:
Query query = new QueryParser(Version.LUCENE_8_11_0.toString(), new StandardAnalyzer()).parse("field1:[0 TO 40]");
I checked the string representations of both queries; they look identical:
field1:[0 TO 40]
Does anyone know what am I doing wrong?
An IntPoint field requires a custom query parser.
The following solves the problem:
// StandardQueryParser supports range queries on point fields via a PointsConfig map
StandardQueryParser parser = new StandardQueryParser();
parser.setAnalyzer(new StandardAnalyzer());
PointsConfig pointsConfig = new PointsConfig(new DecimalFormat(), Integer.class);
Map<String, PointsConfig> pointsConfigMap = new HashMap<>();
pointsConfigMap.put("field1", pointsConfig);
parser.setPointsConfigMap(pointsConfigMap);
Query query = parser.parse("field1:[0 TO 40]", "field1");
I am trying to use the Lookback API. I want to get all features changed during a release.
What I tried:
LookbackQuery query = lookbackApi.newSnapshotQuery();
query.addFindClause("_TypeHierarchy", "PortfolioItem");
query.addFindClause("ObjectID", "280075838440");
Map previousValue = new HashMap();
previousValue.put("$exists", "true");
query.addFindClause("_PreviousValues.Release", previousValue);
query.requireFields("_SnapshotDate", "_SnapshotNumber", "FormattedID",
"Name", "Release","_PreviousValues.Release").hydrateFields("Release, _PreviousValues.Release");
LookbackResult resultSet = query.execute();
I get this exception:
Exception in thread "main" com.rallydev.lookback.LookbackException:
Query Error: incomplete intersection between 'hydrate' clause of
[Release, _PreviousValues.Release] with 'fields' clause of
[_SnapshotNumber, _PreviousValues.Release, _SnapshotDate, FormattedID,
Release, Name] at
com.rallydev.lookback.LookbackResult.validate(LookbackResult.java:101)
at
com.rallydev.lookback.LookbackApi.executeQuery(LookbackApi.java:233)
at
com.rallydev.lookback.LookbackQuery.validateAndRun(LookbackQuery.java:243)
at com.rallydev.lookback.LookbackQuery.execute(LookbackQuery.java:59)
at fr.mipih.rally.TestLoockback.main(TestLoockback.java:38)
But when I tried directly via: https://eu1.rallydev.com/analytics/v2.0/service/rally/workspace/9396539899/artifact/snapshot/query.js?hydrate=["Release","_PreviousValues.Release"]&start=1&pagesize=2000&find={$and: [{"ObjectID": 280075838440},{"_PreviousValues.Release": {"$exists":true}}]}&fields=["_SnapshotDate","_SnapshotNumber","FormattedID","Name","Release","_PreviousValues.Release"]
then I get some results!
Could you please show me what I did wrong?
The issue is in the query: each field in hydrateFields must be passed as a separate quoted argument, not as one comma-separated string:
query.requireFields("_SnapshotDate", "_SnapshotNumber", "FormattedID",
"Name", "Release","_PreviousValues.Release").hydrateFields("Release", "_PreviousValues.Release");
Sorry for my English.
I'm working on an Android application that stores data in Google Cloud Datastore. I want to run a query on my Datastore that mixes StContainsFilter and FilterPredicate, but it does not work. Here is my code:
DatastoreService service = DatastoreServiceFactory.getDatastoreService();
Query q = new Query("utilisateurs");
Query.Filter filtrage1 = new Query.FilterPredicate("sexe", Query.FilterOperator.EQUAL, "M");
Query.Filter filtrage2 = new Query.FilterPredicate("datenaissance", Query.FilterOperator.LESS_THAN_OR_EQUAL, datemin);
Query.Filter filtrage3 = new Query.FilterPredicate("datenaissance", Query.FilterOperator.GREATER_THAN_OR_EQUAL, datemax);
GeoPt center = new GeoPt(Float.parseFloat(lat), Float.parseFloat(lng));
double radius = km*1000;
Query.Filter filtrage4 = new Query.StContainsFilter("location", new GeoRegion.Circle(center, radius));
Query.Filter present = Query.CompositeFilterOperator.and(filtrage2,filtrage3,filtrage1,filtrage4);
q.setFilter(present);
PreparedQuery pq = service.prepare(q);
List<Entity> results = pq.asList(FetchOptions.Builder.withDefaults());
To mix different filters you can use a CompositeFilter. You can read more about Datastore queries here. With a CompositeFilter you can connect multiple filters, which then act as one. However, keep in mind that you cannot set inequality filters on more than one property.
To create a CompositeFilter use this syntax:
CompositeFilter nameOfFilter = CompositeFilterOperator.and(Collection<Filter>);
The Collection can also be a List or an array, or you can separate the Filters by commas.
Here is an example of how to create a CompositeFilter:
Filter filter1 = new FilterPredicate("someProperty", FilterOperator.EQUAL, someValue);
Filter filtrage4 = new StContainsFilter("location", new GeoRegion.Circle(center, radius));
Filter filtrage2 = new FilterPredicate("datenaissance", Query.FilterOperator.LESS_THAN_OR_EQUAL, datemin);
CompositeFilter filter = CompositeFilterOperator.or(filter1, filtrage4, filtrage2);
Use CompositeFilterOperator.and if you need all Filters to apply and .or if one applying Filter is enough.
Technically your solution should work, because StContainsFilter is a direct subclass of Query.Filter. The reason for your problem is a wrong import. Check your imports and change any that contain "repackaged" (I had the same problem too).
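To illustrate the import pitfall: the filter classes should come from the public com.google.appengine.api.datastore package. A sketch of what to look for (the "wrong" line below is a hypothetical example of a repackaged path):

```java
// Correct: the public Datastore API package
import com.google.appengine.api.datastore.GeoPt;
import com.google.appengine.api.datastore.Query;
import com.google.appengine.api.datastore.Query.CompositeFilterOperator;
import com.google.appengine.api.datastore.Query.FilterPredicate;
import com.google.appengine.api.datastore.Query.StContainsFilter;

// Wrong: anything containing "repackaged" is an internal copy of the API
// and its Filter types are not compatible with the public Query.Filter:
// import com.google.appengine.repackaged.com.google.appengine.api.datastore.Query;
```

With the repackaged import, the StContainsFilter is a different class from the Query.Filter the public CompositeFilterOperator expects, which is why the combination fails.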
I am using spring-data-mongodb and I want to use a cursor for an aggregate operation.
MongoTemplate.stream() takes a Query, so I tried creating the Aggregation instance, converting it to a DBObject using Aggregation.toDbObject(), creating a BasicQuery from the DBObject, and then invoking the stream() method.
This returns an empty cursor.
Debugging the spring-data-mongodb code shows that MongoTemplate.stream() uses the FindOperation, which makes me think spring-data-mongodb does not support streaming an aggregation operation.
Has anyone been able to stream the results of an aggregate query using spring-data-mongodb?
For the record, I can do it using the Java mongodb driver, but I prefer using spring-data.
EDIT Nov 10th - adding sample code:
MatchOperation match = Aggregation.match(Criteria.where("type").ne("AType"));
GroupOperation group = Aggregation.group("name", "type");
group = group.push("color").as("colors");
group = group.push("size").as("sizes");
TypedAggregation<MyClass> agg = Aggregation.newAggregation(MyClass.class, Arrays.asList(match, group));
MongoConverter converter = mongoTemplate.getConverter();
MappingContext<? extends MongoPersistentEntity<?>, MongoPersistentProperty> mappingContext = converter.getMappingContext();
QueryMapper queryMapper = new QueryMapper(converter);
AggregationOperationContext context = new TypeBasedAggregationOperationContext(MyClass.class, mappingContext, queryMapper);
// create a BasicQuery to be used in the stream() method by converting the Aggregation to a DbObject
BasicQuery query = new BasicQuery(agg.toDbObject("myClass", context));
// spring-mongo routes the stream() method to find() operations, not to aggregate() operations, so the stream returns an empty cursor
CloseableIterator<MyClass> iter = mongoTemplate.stream(query, MyClass.class);
// this is an empty cursor
while(iter.hasNext()) {
System.out.println(iter.next().getName());
}
The following code, which does not use the stream() method, returns the expected non-empty result of the aggregation:
AggregationResults<HashMap> result = mongoTemplate.aggregate(agg, "myClass", HashMap.class);
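For completeness, the driver-level approach the question mentions (streaming the same aggregation with the plain MongoDB Java driver) might look roughly like this. The connection string, database name, and batch size are placeholder assumptions; the pipeline mirrors the $match/$group stages from the sample code above:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import org.bson.Document;
import java.util.Arrays;

public class DriverAggregationStream {
    public static void main(String[] args) {
        MongoClient client = MongoClients.create("mongodb://localhost:27017");
        MongoCollection<Document> collection =
                client.getDatabase("test").getCollection("myClass");

        // Same pipeline as the TypedAggregation above: $match then $group
        try (MongoCursor<Document> cursor = collection.aggregate(Arrays.asList(
                new Document("$match", new Document("type", new Document("$ne", "AType"))),
                new Document("$group", new Document("_id",
                        new Document("name", "$name").append("type", "$type"))
                        .append("colors", new Document("$push", "$color"))
                        .append("sizes", new Document("$push", "$size")))))
                .batchSize(100)   // fetch in batches so results are streamed, not buffered
                .iterator()) {
            while (cursor.hasNext()) {
                System.out.println(cursor.next().toJson());
            }
        }
    }
}
```

aggregate() returns a lazy iterable, so documents are pulled from the server batch by batch as the cursor advances.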
For those who are still trying to find the answer to this:
From spring-data-mongodb version 2.0.0.M4 onwards (AFAIK), MongoTemplate has an aggregateStream method.
So you can do the following:
AggregationOptions aggregationOptions = Aggregation.newAggregationOptions()
// this is very important: if you do not set the batch size, you'll get all the objects at once and you might run out of memory if the returning data set is too large
.cursorBatchSize(mongoCursorBatchSize)
.build();
data = mongoTemplate.aggregateStream(Aggregation.newAggregation(
Aggregation.group("person_id").count().as("count")).withOptions(aggregationOptions), collectionName, YourClazz.class);
I want to write a FacetQuery that may have no criteria except one filter condition (fq). The following query is an example of what I want to build using the spring-data-solr API:
http://localhost:8983/solr/jpevents/select?q=*:*&fq=categoryIds:(1101)&facet=true&facet.mincount=1&facet.limit=1&facet.field=primaryCategoryId
How can I set query parameter (q=*:*) in FacetQuery?
Environment: I'm writing a Spring MVC based Search API using spring-data-solr 1.0.0.RELEASE with Solr 4.4.0 and Spring 3.2.4.RELEASE.
You can do this by combining @Query and @Facet:
@Facet(fields = {"primaryCategoryId"}, minCount = 1, limit = 1)
@Query(value = "*:*", filters = "categoryIds:(?0)")
public FacetPage<JPEvents> XYZ(List<Long> categories, Pageable page);
Or execute a FacetQuery using SolrTemplate:
FacetQuery query = new SimpleFacetQuery(new SimpleStringCriteria("*:*"))
.setFacetOptions(new FacetOptions("primaryCategoryId")
.setFacetMinCount(1).setFacetLimit(1));
query.setPageRequest(pageable);
solrTemplate.queryForFacetPage(query, JPEvents.class);
I have done something like this:
public static void main(String[] args) {
String url = "http://localhost:8983/solr/autocomplete";
SolrServer solrServer = new HttpSolrServer(url);
SolrQuery query = new SolrQuery();
query.set("q", "*");
query.addFilterQuery("name:*");
query.setFacet(true);
query.addFacetField("name");
System.out.println(query);
QueryResponse queryResponse = solrServer.query(query);
List<FacetField> facetFields = queryResponse.getFacetFields();
FacetField cnameMainFacetField = queryResponse.getFacetField("name");
for (Count cnameAndCount : cnameMainFacetField.getValues()) {
String cnameMain = cnameAndCount.getName();
System.out.println(cnameMain);
System.out.println(cnameAndCount.getCount());
    }
}
This gives me the correct counts for the faceted fields.
Hope you can follow what I am doing. Adding the output for better understanding:
q=*&fq=name%3A*&facet=true&facet.field=name
a
10
an
7
w
7
m
6
and
5
c
5
p
5
d
4