I am wondering how I can search between two dates in Hibernate Search using a range query, or whether there is a filter I have to implement. Following is my field in the Record entity:
/**
* When the analysis started.
*/
@Temporal(TemporalType.TIMESTAMP)
@Field(index = Index.UN_TOKENIZED)
@DateBridge(resolution = Resolution.MILLISECOND)
private Date startTS;
My requirement is to find the records analysed between two dates, e.g. 11/11/2011 to 11/11/2012. I am confused about how to do this.
You should use a range query using from and to.
query = monthQb
.range()
.onField( "startTS" ).ignoreFieldBridge()
.from( DateTools.dateToString( from, DateTools.Resolution.MILLISECOND ) )
.to( DateTools.dateToString( to, DateTools.Resolution.MILLISECOND ) ).excludeLimit()
.createQuery();
The ignoreFieldBridge() call is needed since you build the string-based search terms yourself using DateTools.
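For completeness, from and to above are plain java.util.Date values. A minimal sketch of building them for the 11/11/2011 to 11/11/2012 range (the Calendar setup and the UTC time zone are assumptions, adjust to your application):
// Build the Date bounds passed to DateTools.dateToString() above.
Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
cal.clear();
cal.set(2011, Calendar.NOVEMBER, 11);
Date from = cal.getTime();
cal.set(2012, Calendar.NOVEMBER, 11);
Date to = cal.getTime();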
I need to fetch all data rows from my collection ("data") matching my ID and a date range, as below:
Date start = new Date(01/4/2022);
Date end = new Date(30/4/2022);
fStore.collection("data").whereEqualTo("ID", userId).where("date", ">=",start ).where("date", "<=", end).get().addOnCompleteListener(new OnCompleteListener<QuerySnapshot>() {
But it complains about using 2 where clauses.
The solution that helped me is using .whereGreaterThanOrEqualTo("date", "01/5/2022").whereLessThanOrEqualTo("date", "31/5/2022").
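For reference, a hedged sketch of the full query with the range applied to a single field, using the collection and field names from the question; Firestore may ask you to create a composite index for the equality + range combination:
fStore.collection("data")
    .whereEqualTo("ID", userId)
    .whereGreaterThanOrEqualTo("date", "01/5/2022")
    .whereLessThanOrEqualTo("date", "31/5/2022")
    .get()
    .addOnCompleteListener(task -> {
        if (task.isSuccessful()) {
            for (QueryDocumentSnapshot doc : task.getResult()) {
                // process each matching document
            }
        }
    });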
I have an index in my Elasticsearch and I want a query that compares 2 date fields.
Assuming the field names are creationDate and modifiedDate, I want to get all documents in which these 2 dates are the same.
I know it was possible to use FilteredQuery, which is deprecated now, with something like the following code:
FilteredQueryBuilder query = QueryBuilders.filteredQuery(null,
FilterBuilders.scriptFilter("doc['creationDate'].value = doc['modifiedDate'].value"));
It may also be possible to write manual scripts as strings, but I doubt that this is the right solution. Any ideas on how to build the query properly would be appreciated.
Filtered queries have been replaced by bool/filter queries. You can do it like this:
BoolQueryBuilder bqb = QueryBuilders.boolQuery()
    .filter(QueryBuilders.scriptQuery(
        new Script("doc['creationDate'].value == doc['modifiedDate'].value")));
However, instead of using scripts at search time, you'd be better off creating a new field at indexing time that records whether creationDate and modifiedDate are the same. Then you could simply check that flag at query time, which would be much faster.
If you don't want to reindex all your data, you can update every document with that flag by running an update by query like this:
POST my-index/_update_by_query
{
"script": {
"source": """
def creationDate = Instant.parse(ctx._source.creationDate);
def modifiedDate = Instant.parse(ctx._source.modifiedDate);
ctx._source.modified = ChronoUnit.MICROS.between(creationDate, modifiedDate) > 0;
""",
"lang": "painless"
},
"query": {
"match_all": {}
}
}
And then your query will simply be
BoolQueryBuilder bqb = QueryBuilders.boolQuery()
    .filter(QueryBuilders.termQuery("modified", "false"));
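A hedged usage sketch, assuming you execute it with a transport client instance (client) against the same index as above:
SearchResponse response = client.prepareSearch("my-index")
    .setQuery(bqb)
    .get();
// response.getHits() then contains only documents whose flag is false,
// i.e. documents whose modifiedDate is not after their creationDate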
I have an implementation of hibernate-search-orm (5.9.0.Final) with hibernate-search-elasticsearch (5.9.0.Final).
I defined a custom analyzer on an entity (see below) and I indexed two entities:
id: "1"
title: "Médiatiques : récit et société"
abstract:...
id: "2"
title: "Mediatique Com'7"
abstract:...
The search works fine when I search on the title field:
"title:médiatique" => 2 results.
"title:mediatique" => 2 results.
My problem is when I do a global search, with or without accents:
search on "médiatique" => 1 result (id:1)
search on "mediatique" => 1 result (id:2)
Is there a way to resolve this?
Thanks.
Entity definition:
@Entity
@Table(name="bibliographic")
@DynamicUpdate
@DynamicInsert
@Indexed(index = "bibliographic")
@FullTextFilterDefs({
@FullTextFilterDef(name = "fieldsElasticsearchFilter",
impl = FieldsElasticsearchFilter.class)
})
@AnalyzerDef(name = "customAnalyzer",
tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
filters = {
@TokenFilterDef(factory = LowerCaseFilterFactory.class),
@TokenFilterDef(factory = ASCIIFoldingFilterFactory.class),
})
@Analyzer(definition = "customAnalyzer")
public class BibliographicHibernate implements Bibliographic {
...
@Column(name="title", updatable = false)
@Fields( {
@Field,
@Field(name = "titleSort", analyze = Analyze.NO, store = Store.YES)
})
@SortableField(forField = "titleSort")
private String title;
...
}
Search method:
FullTextEntityManager ftem = Search.getFullTextEntityManager(entityManager);
QueryBuilder qb = ftem.getSearchFactory().buildQueryBuilder().forEntity(Bibliographic.class).get();
QueryDescriptor q = ElasticsearchQueries.fromQueryString(queryString);
FullTextQuery query = ftem.createFullTextQuery(q, Bibliographic.class).setFirstResult(start).setMaxResults(rows);
if (filters!=null){
filters.stream().map((filter) -> filter.split(":")).forEach((f) -> {
query.enableFullTextFilter("fieldsElasticsearchFilter")
.setParameter("field", f[0])
.setParameter("value", f[1]);
}
);
}
if (facetFields!=null){
facetFields.stream().map((facet) -> facet.split(":")).forEach((f) ->{
query.getFacetManager()
.enableFaceting(qb.facet()
.name(f[0])
.onField(f[0])
.discrete()
.orderedBy(FacetSortOrder.COUNT_DESC)
.includeZeroCounts(false)
.maxFacetCount(10)
.createFacetingRequest() );
}
);
}
List<Bibliographic> bibs = query.getResultList();
To be honest I'm more surprised document 1 would match at all, since there's a trailing "s" on "Médiatiques" and you don't use any stemmer.
You are in a special case here: you are using a query string and passing it directly to Elasticsearch (that's what ElasticsearchQueries.fromQueryString(queryString) does). Hibernate Search has very little impact on the query being run, it only impacts the indexed content and the Elasticsearch mapping here.
When you run a QueryString query on Elasticsearch and you don't specify any field, it uses all fields in the document. I wouldn't bet that the analyzer used when analyzing your query is the same analyzer that you defined on your "title" field. In particular, it may not be removing accents.
An alternative solution would be to build a simple query string query using the QueryBuilder. The syntax of queries is a bit more limited, but is generally enough for end users. The code would look like this:
FullTextEntityManager ftem = Search.getFullTextEntityManager(entityManager);
QueryBuilder qb = ftem.getSearchFactory().buildQueryBuilder().forEntity(Bibliographic.class).get();
Query q = qb.simpleQueryString()
.onFields("title", "abstract")
.matching(queryString)
.createQuery();
FullTextQuery query = ftem.createFullTextQuery(q, Bibliographic.class).setFirstResult(start).setMaxResults(rows);
Users would still be able to target specific fields, but only in the list you provided (which, by the way, is probably safer, otherwise they could target sort fields and so on, which you probably don't want to allow). By default, all the fields in that list would be targeted.
This may lead to the exact same result as the query string, but the advantage is, you can override the analyzer being used for the query. For instance:
FullTextEntityManager ftem = Search.getFullTextEntityManager(entityManager);
QueryBuilder qb = ftem.getSearchFactory().buildQueryBuilder().forEntity(Bibliographic.class)
.overridesForField("title", "customAnalyzer")
.overridesForField("abstract", "customAnalyzer")
.get();
Query q = qb.simpleQueryString()
.onFields("title", "abstract")
.matching(queryString)
.createQuery();
FullTextQuery query = ftem.createFullTextQuery(q, Bibliographic.class).setFirstResult(start).setMaxResults(rows);
... and this will use your analyzer when querying.
As an alternative, you can also use a more advanced JSON query by replacing ElasticsearchQueries.fromQueryString(queryString) with ElasticsearchQueries.fromJsonQuery(json). You will have to craft the JSON yourself, though, taking some precautions to avoid any injection from the user (use Gson to build the Json), and taking care to follow the Elasticsearch query syntax.
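For illustration, a hedged sketch of building such a JSON query with Gson (a simple match query on the title field; the JSON structure follows the Elasticsearch query DSL and is an assumption, not something Hibernate Search generates for you):
JsonObject match = new JsonObject();
match.addProperty("title", queryString);
JsonObject matchWrapper = new JsonObject();
matchWrapper.add("match", match);
JsonObject json = new JsonObject();
json.add("query", matchWrapper);
// pass the crafted JSON to Hibernate Search
QueryDescriptor q = ElasticsearchQueries.fromJsonQuery(json.toString());
FullTextQuery query = ftem.createFullTextQuery(q, Bibliographic.class);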
You can find more information about simple query string queries in the official documentation.
Note: you may want to add FrenchMinimalStemFilterFactory to your list of token filters in your custom analyzer. It's not the cause of your problem, but once you manage to use your analyzer in search queries, you will very soon find it useful.
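If you go that route, the analyzer definition simply gains one more token filter; a hedged sketch (the filter order shown is an assumption):
@AnalyzerDef(name = "customAnalyzer",
tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
filters = {
@TokenFilterDef(factory = LowerCaseFilterFactory.class),
@TokenFilterDef(factory = ASCIIFoldingFilterFactory.class),
@TokenFilterDef(factory = FrenchMinimalStemFilterFactory.class)
})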
I can already execute the desired query in the mongo shell, but I need to make the same query using Java and MongoOperations.
I have checked this question, which is very similar, but it only has one condition, whereas mine has two and uses the $gte and $lt operators. Here's the working mongo query:
db.getCollection('example').update({"idVar": "desiredValue"}, { $pull: { "listaHoras": { $gte: ISODate("2016-11-06T05:50:00.000Z"), $lt: ISODate("2016-11-06T06:30:00.000Z")}}})
Sample doc:
"_id" : ObjectId("58221b4610a3c71f1894ce75"),
"idVar" : "56b11259272f5515b05d70bc",
"date" : ISODate("2016-11-06T03:00:00.000Z"),
"listaHoras" : [
ISODate("2016-11-06T05:40:00.000Z"),
ISODate("2016-11-06T06:30:00.000Z"),
ISODate("2016-11-06T06:40:00.000Z")
]
Where I'll have the ISODate as a Date variable in Java, and the desiredValue as a String variable.
So far, I have done the following, using the previously mentioned question as an example:
BasicDBObject match = new BasicDBObject("idVar", desiredValue); // to match your document
BasicDBObject update = new BasicDBObject("listaHoras", new BasicDBObject("itemID", "1"));
coll.update(match, new BasicDBObject("$pull", update));
But, as you can see, this is NOT equivalent to the desired query, since the match for the $pull is matching "itemID" with "1". I do not know, nor was I able to find, how to properly use $gte and $lt in the same query, nor how to use just one or both of them. I know it CAN be done, as seen in the MongoOperations API, which says:
"update - the update document that contains the updated object or $ operators to manipulate the existing object."
Does anyone know how it can be done? And does the Date type in Java match the ISODate in Mongo?
I managed to find a solution. It is similar to what Veeram posted as an answer, but slightly different: I simply removed his updateCriteria and used a BasicDBObject in its place.
Here's how the full code looks:
Query findQuery = new Query();
Criteria findCriteria = Criteria.where("idVar").is(idVar);
findQuery.addCriteria(findCriteria);
Update update = new Update().pull("listaHoras", new BasicDBObject( "$gte", start).append("$lte", end));
mongoOps.updateMulti(findQuery, update, "CollectionName");
Where start and end are Date variables received by the method. It is also important to note that Mongo uses UTC as the default time zone, so we must properly format the time in order for it to remove the desired values.
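For example, the boundaries can be built explicitly in UTC before calling the method (the exact window below is just an illustration):
// Hypothetical boundary values for the range being pulled, expressed in UTC.
Date start = Date.from(LocalDateTime.of(2016, 11, 6, 5, 50).toInstant(ZoneOffset.UTC));
Date end = Date.from(LocalDateTime.of(2016, 11, 6, 6, 30).toInstant(ZoneOffset.UTC));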
You can try something like below. This will remove the two items from the listaHoras array.
Query findQuery = new Query();
Criteria findCriteria =
Criteria.where("idVar").is("56b11259272f5515b05d70bc");
findQuery.addCriteria(findCriteria);
LocalDate startDt = LocalDate.of(2016, 11, 6);
LocalTime startTm = LocalTime.of(5, 40, 0);
LocalDate endDt = LocalDate.of(2016, 11, 6);
LocalTime endTm = LocalTime.of(6, 35, 0);
Date start = Date.from(LocalDateTime.of(startDt, startTm).toInstant(ZoneOffset.UTC));
Date end = Date.from(LocalDateTime.of(endDt, endTm).toInstant(ZoneOffset.UTC));
Query updateQuery = new Query();
Criteria updateCriteria = new Criteria().gte(start).lt(end);
updateQuery.addCriteria(updateCriteria);
Update update = new Update().pull("listaHoras", updateQuery);
mongoOperations.updateMulti(findQuery, update, "example");
I want to find users by most recent date (Assume the User object has a date field). The data is stored in MongoDB and accessed via a Spring MongoTemplate.
Example of the raw data:
{userId:1, date:10}
{userId:1, date:20}
{userId:2, date:50}
{userId:2, date:10}
{userId:3, date:10}
{userId:3, date:30}
The query should return
{{userId:1, date:20}, {userId:2, date:50}, {userId:3, date:30}}
The aggregation method I am using is:
db.table1.aggregate({$group:{'_id':'$userId', 'max':{$max:'$date'}}},
{$sort:{'max':1}}).result
You could sort it first by date descending and select the first value while grouping by userId:
final Aggregation aggregation = newAggregation(
Aggregation.sort(Sort.Direction.DESC, "date"),
Aggregation.group("userId").first("date").as("Date")
);
final AggregationResults<User> results = mongoTemplate.aggregate(aggregation, "user", User.class);
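If your User class keeps the userId and date field names from the sample documents, you may need a projection step to map the group key back to userId before reading the mapped results; a hedged sketch (the "user" collection name and field names are assumptions from the question):
// Rename the group key (_id) back to "userId" so results map onto User.
final Aggregation aggregation = newAggregation(
    Aggregation.sort(Sort.Direction.DESC, "date"),
    Aggregation.group("userId").first("date").as("date"),
    Aggregation.project("date").and("_id").as("userId")
);
final List<User> users = mongoTemplate.aggregate(aggregation, "user", User.class).getMappedResults();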