I'm learning the Hibernate Search Query DSL, and I'm not sure how to construct queries using boolean arguments such as AND or OR.
For example, let's say that I want to return all person records that have a firstName value of "bill" or "bob".
Following the Hibernate docs, one example uses the bool() method with two subqueries, such as:
QueryBuilder b = fts.getSearchFactory().buildQueryBuilder().forEntity(Person.class).get();
Query luceneQuery = b.bool()
.should(b.keyword().onField("firstName").matching("bill").createQuery())
.should(b.keyword().onField("firstName").matching("bob").createQuery())
.createQuery();
logger.debug("query 1:{}", luceneQuery.toString());
This ultimately produces the Lucene query that I want, but is this the proper way to use boolean logic with Hibernate Search? Is should() the equivalent of "OR" (and similarly, does must() correspond to "AND")?
Also, writing a query this way feels cumbersome. For example, what if I had a collection of firstNames to match against? Is this type of query a good match for the DSL in the first place?
Yes, your example is correct. The boolean operator is called should instead of OR because of the name it has in the Lucene API and documentation, and because it is more appropriate: it does not only influence a boolean decision, it also affects the scoring of the results.
For example, if you search for cars "of brand Fiat" OR "blue", the cars branded Fiat AND blue will also be returned, and they will have a higher score than those which are blue but not Fiat.
It might feel cumbersome because it's programmatic and exposes many detailed options. A simpler alternative is to use a plain string for your query and let the QueryParser create the query. Generally the parser is useful for parsing user input, while the programmatic API is easier when dealing with well-defined fields; for example, for the collection you mention, it's easy to build the junction in a for loop.
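A rough sketch of that loop (using the same DSL as in the question; fts and the firstNames collection are assumed to be in scope):
QueryBuilder b = fts.getSearchFactory()
        .buildQueryBuilder().forEntity(Person.class).get();

// OR the names together by accumulating should() clauses on one junction
BooleanJunction<?> junction = b.bool();
for (String firstName : firstNames) {
    junction.should(b.keyword().onField("firstName").matching(firstName).createQuery());
}
Query luceneQuery = junction.createQuery();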
You can also use BooleanQuery. I would prefer this because you can use it in a loop over a list.
org.hibernate.search.FullTextQuery hibque = null;
org.apache.lucene.search.BooleanQuery bquery = new BooleanQuery();
QueryBuilder qb = fulltextsession.getSearchFactory().buildQueryBuilder()
        .forEntity(entity.getClass()).get();
for (String keyword : list) {
    bquery.add(qb.keyword().wildcard().onField(entityColumn).matching(keyword)
            .createQuery(), BooleanClause.Occur.SHOULD);
}
if (!filterColumn.equals("") && !filterValue.equals("")) {
    bquery.add(qb.keyword().wildcard().onField(filterColumn).matching(filterValue)
            .createQuery(), BooleanClause.Occur.MUST);
}
hibque = fulltextsession.createFullTextQuery(bquery, entity.getClass());
int num = hibque.getResultSize();
To answer your secondary question:
For example, what if I had a collection of firstNames to match against?
I'm not an expert, but according to (the third example from the end of) 5.1.2.1. Keyword queries in the Hibernate Search documentation, you should be able to build the query like so:
Collection<String> namesCollection = getNames(); // Contains "billy" and "bob", for example
StringBuilder names = new StringBuilder(100);
for (String name : namesCollection) {
    names.append(name).append(" "); // Never mind the space at the end of the resulting string.
}
QueryBuilder b = fts.getSearchFactory().buildQueryBuilder().forEntity(Person.class).get();
Query luceneQuery = b.bool()
    .should(
        // Searches for multiple possible values in the same field
        b.keyword().onField("firstName").matching(names.toString()).createQuery()
    )
    .must(b.keyword().onField("lastName").matching("thornton").createQuery())
    .createQuery();
and, as a result, get Persons with (firstName preferably "billy" or "bob") AND (lastName = "thornton"), although I don't think it will give good ol' Billy Bob Thornton a higher score ;-).
I was facing a similar but somewhat different issue than the one presented: I needed an actual OR junction. The should case didn't work for me, because results that didn't match either of the two expressions were still returned, just with a lower score. I wanted to completely omit those results. You can, however, create an actual boolean OR expression by using a separate boolean junction for which you disable scoring:
val booleanQuery = cb.bool();
val packSizeSubQuery = cb.bool();
packSizes.stream()
        .map(packSize -> cb.phrase()
                .onField(LUCENE_FIELD_PACK_SIZES)
                .sentence(packSize.name())
                .createQuery())
        .forEach(packSizeSubQuery::should);
booleanQuery.must(packSizeSubQuery.createQuery()).disableScoring();
val persistenceQuery = fullTextEntityManager
        .createFullTextQuery(booleanQuery.createQuery(), Product.class);
return persistenceQuery.getResultList();
I would like to be able to find an entity based on any part of its indexed fields, and the fields must not lose any content while indexing.
Let's say I have the following sample entity class:
@Entity
public class E {
    private String f;
    // ...
}
And if the value of f in one entity is "This is a nice field!", I would like to be able to find it by any of these queries:
"this"
"a"
"IC"
"!"
"This is a nice field!"
The most obvious solution is to annotate the entity this way:
@Entity
@Indexed
@AnalyzerDef(name = "a",
    tokenizer = @TokenizerDef(factory = KeywordTokenizerFactory.class),
    filters = @TokenFilterDef(factory = LowerCaseFilterFactory.class)
)
@Analyzer(definition = "a")
public class E {
    @Field
    private String f;
    // ...
}
And then search the following way:
String queryString;
// ...
org.apache.lucene.search.Query query = queryBuilder
.keyword()
.wildcard()
.onField("f")
.matching("*" + queryString.toLowerCase() + "*")
.createQuery();
But it is stated in the documentation that for performance purposes, it is recommended that the query does not start with either ? or *.
So, as I understand it, this method is inefficient.
The other idea is to use n-grams like this:
@Entity
@Indexed
@AnalyzerDef(name = "a",
    tokenizer = @TokenizerDef(factory = KeywordTokenizerFactory.class),
    filters = {
        @TokenFilterDef(factory = LowerCaseFilterFactory.class),
        @TokenFilterDef(factory = NGramFilterFactory.class,
            params = {
                @Parameter(name = "minGramSize", value = "1"),
                @Parameter(name = "maxGramSize", value = E.MAX_LENGTH)
            })
    }
)
@Analyzer(definition = "a")
public class E {
    static final String MAX_LENGTH = "42";

    @Field
    private String f;
    // ...
}
And create queries this way:
String queryString;
// ...
org.apache.lucene.search.Query query = queryBuilder
.keyword()
.onField("f")
.ignoreAnalyzer()
.matching(queryString.toLowerCase())
.createQuery();
This time no wildcard queries are used and the analyzer is ignored at query time. I'm not sure whether ignoring the analyzer is good or bad, but it works this way.
Another possible solution would be to use WhitespaceTokenizerFactory instead of KeywordTokenizerFactory when using n-grams, then split queryString by spaces and combine the searches for each substring using MUST.
In this approach, as I understand it, far fewer n-grams will be built if the string contained in f is as long as E.MAX_LENGTH, which should be good for performance. And I will also be able to find the previously described entity by, for example, the "hi ield" query. And that would be ideal.
So what would be the best way to deal with my problem? Or are all my ideas bad?
P.S. Should one ignore the analyzer in queries when using n-grams?
Another possible solution would be to use WhitespaceTokenizerFactory instead of KeywordTokenizerFactory when using n-grams, then split queryString by spaces and combine the searches for each substring using MUST. In this approach, as I understand it, far fewer n-grams will be built if the string contained in f is as long as E.MAX_LENGTH, which should be good for performance. And I will also be able to find the previously described entity by, for example, the "hi ield" query. And that would be ideal.
This is more or less the ideal solution, except for one thing: you shouldn't ignore the analyzer when querying. What you should do is define another analyzer without the ngram filter, but with the tokenizer, lowercase filter, etc., and explicitly instruct Hibernate Search to use that analyzer at query time.
The other solutions are too expensive, either in I/O and CPU at query time (first solution) or in storage space (second solution). Note that this third solution may still be rather expensive in storage space, depending on the value of E.MAX_LENGTH. It's generally recommended to only have a difference of one or two between minGramSize and maxGramSize, to avoid the indexing of too many grams.
Just define another analyzer, name it something like "ngram_query", and when you need to build the query, create the query builder like this:
QueryBuilder queryBuilder = fullTextEntityManager.getSearchFactory().buildQueryBuilder()
        .forEntity(E.class)
        .overridesForField("f" /* name of the field */, "ngram_query")
        .get();
Then create your query as usual.
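For reference, the query-only analyzer definition itself could look like the following sketch, which reuses the keyword tokenizer and lowercase filter from the indexing analyzer in the question but leaves out the ngram filter (place it next to the existing definition, e.g. wrapped in @AnalyzerDefs):
@AnalyzerDef(name = "ngram_query",
    tokenizer = @TokenizerDef(factory = KeywordTokenizerFactory.class),
    filters = @TokenFilterDef(factory = LowerCaseFilterFactory.class)
)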
Note that, if you rely on Hibernate Search to push the index schema and analyzers to Elasticsearch, you will have to use a hack in order for the query-only analyzer to be pushed: by default only the analyzers that are actually used during indexing are pushed. See https://discourse.hibernate.org/t/cannot-find-the-overridden-analyzer-when-using-overridesforfield/1043/4
Right now, I have successfully configured a basic Hibernate Search index to be able to search for full words on various fields of my JPA entity:
@Entity
@Indexed
class Talk {
    @Field String title
    @Field String summary
}
And my query looks something like this:
List<Talk> search(String text) {
    FullTextEntityManager fullTextEntityManager = Search.getFullTextEntityManager(entityManager)
    QueryBuilder queryBuilder = fullTextEntityManager.getSearchFactory().buildQueryBuilder().forEntity(Talk).get()
    Query query = queryBuilder
        .keyword()
        .onFields("title", "summary")
        .matching(text)
        .createQuery()
    FullTextQuery jpaQuery = fullTextEntityManager.createFullTextQuery(query, Talk)
    return jpaQuery.getResultList()
}
Now I would like to fine-tune this setup so that when I search for "test" it still finds talks where title or summary contains "test" even as the prefix of another word. So talks titled "unit testing", or whose summary contains "testicle" should still appear in the search results, not just talks whose title or summary contains "test" as a full word.
I've tried to look at the documentation, but I can't figure out whether I should change something in the way my entity is indexed, or whether it has something to do with the query. Note that I wanted to do something like the following, but then it's hard to search on several fields:
Query query = queryBuilder
.keyword().wildcard()
.onField("title")
.matching(text + "*")
.createQuery()
EDIT:
Based on Hardy's answer, I configured my entity like so:
@Indexed
@Entity
@AnalyzerDefs([
    @AnalyzerDef(name = "ngram",
        tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
        filters = [
            @TokenFilterDef(factory = LowerCaseFilterFactory.class),
            @TokenFilterDef(factory = NGramFilterFactory.class,
                params = [
                    @Parameter(name = "minGramSize", value = "3"),
                    @Parameter(name = "maxGramSize", value = "3")
                ])
        ])
])
class Talk {
    @Field(analyzer = @Analyzer(definition = "ngram")) String title
    @Field(analyzer = @Analyzer(definition = "ngram")) String summary
}
Thanks to that configuration, when I search for 'arti', I get Talks whose title or summary contains words of which 'arti' is a subword (artist, artisanal, etc.). Unfortunately, after those I also get Talks whose title or summary contains words that are subwords of my search term (arts, fart, etc.). There's probably some fine-tuning needed to eliminate those, but at least I get results sooner now, and they are in a sensible order.
There are multiple things you can do here. A lot can be done via proper analysis at index time.
For example, you want to apply a stemmer appropriate for your language. For English this is generally the Snowball stemmer. The idea is that during indexing all words are reduced to their stem, testing and tested to test, for example. This gets you a bit along your way.
The other thing you can look into is n-gram indexing. According to your description, you want to find matches within words as well. The idea here is to index "subwords" of each word, so that they can be found later.
Regarding analyzers, you want to look at the named analyzers section of the Hibernate Search docs. The key here is the @AnalyzerDef annotation.
On the query side you can also apply some "tricks". You can indeed use wildcard queries; however, if you are using the Hibernate Search query DSL, you cannot use a plain keyword query for this, you need to use a wildcard query. Again, check the Hibernate Search docs.
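As an illustration, a named analyzer definition combining the lowercase filter with the Snowball stemmer mentioned above might look like this sketch (the analyzer name is made up; the pattern follows the @AnalyzerDef examples in the Hibernate Search docs):
@AnalyzerDef(name = "english_stemmed",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
        @TokenFilterDef(factory = LowerCaseFilterFactory.class),
        @TokenFilterDef(factory = SnowballPorterFilterFactory.class,
            params = @Parameter(name = "language", value = "English"))
    })
It can then be applied to a field with @Field(analyzer = @Analyzer(definition = "english_stemmed")).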
You should use an NGram or EdgeNGram filter for indexing, as you correctly noted in your answer. But you should use a different analyzer for your queries, as suggested in the Elasticsearch guide (see search_analyzer):
https://www.elastic.co/guide/en/elasticsearch/guide/current/_index_time_search_as_you_type.html
This way your search query wouldn't be tokenized to ngrams and your results would be more like %text% or text% in SQL.
Unfortunately, for unknown reasons, Hibernate Search currently doesn't support specifying a search_analyzer on fields. You can only specify an analyzer for indexing, and it is also used to analyze search queries.
I plan to implement this functionality myself.
EDIT:
You can specify a search-time analyzer (the equivalent of search_analyzer) like this:
List<Talk> search(String text) {
    FullTextEntityManager fullTextEntityManager = Search.getFullTextEntityManager(entityManager)
    EntityContext entityContext = fullTextEntityManager.getSearchFactory().buildQueryBuilder().forEntity(Talk)
    entityContext.overridesForField("myField", "myNamedAnalyzerDef")
    QueryBuilder queryBuilder = entityContext.get()
    Query query = queryBuilder
        .keyword()
        .onFields("title", "summary")
        .matching(text)
        .createQuery()
    FullTextQuery jpaQuery = fullTextEntityManager.createFullTextQuery(query, Talk)
    return jpaQuery.getResultList()
}
I have used this technique to effectively simulate Lucene search_analyzer property.
In Lucene version 4.9 I used the EnglishAnalyzer for this. I think it is an English-only implementation of the SnowballAnalyzer, but I'm not 100% certain. I used it for both creating and searching the indexes. There is nothing special needed to use it.
Analyzer analyzer = new EnglishAnalyzer(Version.LUCENE_4_9);
IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_4_9, analyzer);
and
analyzer = new EnglishAnalyzer(Version.LUCENE_4_9);
parser = new StandardQueryParser(analyzer);
You can see it in action at Guided Code Search. This runs exclusively off Lucene.
Lucene can be integrated into Hibernate searches, but I haven't yet tried to do that myself. It seems like it would be powerful, but I don't know: see Apache Lucene™ Integration.
I've also read that Lucene can be plugged into SQL engines, but I haven't tried that either. Example: Indexing Databases with Lucene.
I need to determine which part of a Lucene BooleanQuery failed if the entire query returns no results.
I'm using a BooleanQuery made up of 4 NumericRangeQueries and a PhraseQuery. Each is added to the query with Occur.MUST.
If I don't get any results for a query, is there a way to tell which part of the query failed to match anything? Do I need to run queries individually and compare results to get the one that failed?
Edit - Added PhraseQuery code.
if (row.getPropertykey_tx() != null && !row.getPropertykey_tx().trim().isEmpty()) {
    PhraseQuery pQuery = new PhraseQuery();
    String[] words = row.getPropertykey_tx().trim().split(" ");
    for (String word : words) {
        pQuery.add(new Term(TitleRecordColumns.SA_SITE_ADDR.toString(), word));
    }
    pQuery.setSlop(2);
    topBQuery.add(pQuery, BooleanClause.Occur.MUST);
}
Running individual parts of the query is probably the simplest approach, to my mind.
Another available tool is getting an Explanation. You can call IndexSearcher.explain to get an Explanation of the scoring of the query against a particular document. If you can provide the docid of a document you believe should match the query, you can analyze Explanation.toString (or toHtml, if you prefer) to determine which subqueries are not matching against it.
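For example (a rough sketch; searcher is assumed to be your IndexSearcher, docId the ID of a document you expected to match, and topBQuery the boolean query from the question):
Explanation explanation = searcher.explain(topBQuery, docId);
if (!explanation.isMatch()) {
    // the explanation text describes how each clause scored against this document
    System.out.println(explanation.toString());
}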
If you want to automatically keep a record of which clause of a BooleanQuery doesn't produce results, I believe you will need to run each query independently. If you no longer have access to the subqueries used to create it, you can get its clauses instead:
void findTroublesomeQuery(BooleanQuery query) {
    for (BooleanClause clause : query.clauses()) {
        Query subquery = clause.getQuery();
        TopDocs docs = searchHoweverYouDo(subquery);
        if (docs.totalHits == 0) {
            // If you want to dig down recursively...
            if (subquery instanceof BooleanQuery)
                findTroublesomeQuery((BooleanQuery) subquery);
            else
                log(subquery); // Or do whatever you want to keep track of it.
        }
    }
}
DisjunctionMaxQuery is a commonly used query that wraps multiple subqueries as well, so might be worth considering for this sort of approach.
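If you take that route, the recursive check above can be extended to look inside a DisjunctionMaxQuery too, since it iterates over its wrapped subqueries (a sketch, to be merged into findTroublesomeQuery):
else if (subquery instanceof DisjunctionMaxQuery) {
    for (Query disjunct : (DisjunctionMaxQuery) subquery) {
        // run each disjunct on its own and log it if it matches nothing
        if (searchHoweverYouDo(disjunct).totalHits == 0) {
            log(disjunct);
        }
    }
}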
I have a problem: I create my SQL queries dynamically, based on user input options. The user has 5 parameters (actually more) and can choose to use some of them (all, if he wants) or none, and to specify their values in the query. So I construct my query string (basically the WHERE conditions) by checking whether a parameter was selected and whether a value was provided. However, now there is the problem of special characters like '. I could try to use replaceAll("'", "\\"), but this is quite dull and I know that preparedStatement.setString() does the job better. However, I would then need to check again whether the parameter was provided and whether the previous ones were as well (to determine the position of each ? and connect it to the right parameter). This causes a lot of combinations and does not look elegant.
So my question is: can I somehow get the string that preparedStatement.setString() produces? Or is there a similar function that would do the same job and give me the String, so I can put it in the query manually?
Maybe the intro was too long but someone might have a better idea and I wanted to explain why I need it.
What you can do is construct the basic, unparameterized SQL query based on whether the parameters were specified, and then use the prepared statement to fill in the parameters.
It could look something like this (rough sketch):
Map<String, Object> parameterValues = /* from user */;
List<String> parameterNames = Arrays.asList("field1", "field2", "field3");
List<Object> valueList = new ArrayList<Object>();

// "where 1 = 1" lets us prefix every condition with " and " unconditionally
StringBuilder statementBuilder = new StringBuilder("select * from table where 1 = 1");
for (String parameterName : parameterNames) {
    if (parameterValues.containsKey(parameterName)) {
        statementBuilder.append(" and ").append(parameterName).append(" = ?");
        valueList.add(parameterValues.get(parameterName));
    }
}

PreparedStatement st = conn.prepareStatement(statementBuilder.toString());
// set each parameter here
for (int i = 0; i < valueList.size(); i++) {
    st.setObject(i + 1, valueList.get(i));
}
It's only hard the first time; then you can make it generic. That said, there are probably query builders that abstract all of this away for you. I use QueryDSL, but that doesn't have bindings for pure JDBC, only for JPA, JDO, etc.
On another forum I was given a different, simpler and cleaner approach that works perfectly.
Here are some links for others with the same problem:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1669972300346534908
http://www.akadia.com/services/dyn_modify_where_clause.html
The title asks it all... I want to do a multi-field phrase search in Lucene. How do I do it?
For example:
I have fields such as String s[] = {"title","author","content"};
I want to search for "harry potter" across all fields. How do I do it?
Can someone please provide an example snippet?
Use MultiFieldQueryParser; it's a QueryParser which constructs queries that search multiple fields.
Another way is to create a BooleanQuery consisting of one query per field (in your case a PhraseQuery), as sketched below.
A third way is to include the content of the other fields into your default content field.
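A sketch of the second option, a BooleanQuery combining one PhraseQuery per field (field names taken from the question; the phrase terms are lowercase on the assumption that the index analyzer lowercases them):
BooleanQuery combined = new BooleanQuery();
for (String field : new String[] {"title", "author", "content"}) {
    PhraseQuery phrase = new PhraseQuery();
    phrase.add(new Term(field, "harry"));
    phrase.add(new Term(field, "potter"));
    combined.add(phrase, BooleanClause.Occur.SHOULD); // a match in any field is enough
}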
Add
Generally speaking, querying on multiple fields isn’t the best practice for user-entered queries. More commonly, all words you want searched are indexed into a contents or keywords field by combining various fields.
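For that catch-all approach, indexing might look roughly like this (Lucene 3.x style to match the example below; the field names are illustrative):
Document doc = new Document();
doc.add(new Field("title", title, Field.Store.YES, Field.Index.ANALYZED));
doc.add(new Field("author", author, Field.Store.YES, Field.Index.ANALYZED));
doc.add(new Field("content", content, Field.Store.YES, Field.Index.ANALYZED));
// additionally index everything into one catch-all field and query only that field
doc.add(new Field("contents", title + " " + author + " " + content,
        Field.Store.NO, Field.Index.ANALYZED));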
Update
Usage:
Query query = MultiFieldQueryParser.parse(Version.LUCENE_30,
        new String[] {"harry potter", "harry potter", "harry potter"},
        new String[] {"title", "author", "content"},
        new SimpleAnalyzer());
IndexSearcher searcher = new IndexSearcher(...);
TopDocs hits = searcher.search(query, 10);
The MultiFieldQueryParser will resolve the query in this way: (See javadoc)
Parses a query which searches on the fields specified. If x fields are specified, this effectively constructs:
(field1:query1) (field2:query2) (field3:query3)...(fieldx:queryx)
Hope this helps.
Intensified googling revealed this:
http://lucene.472066.n3.nabble.com/Phrase-query-on-multiple-fields-td2292312.html.
Since it is the latest and best, I'll go with that approach, I guess. Nevertheless, it might help someone who is looking for something like I am...
You need to use MultiFieldQueryParser with an escaped (quoted) query string. I have tested it with Lucene 8.8.1 and it works like magic.
String queryStr = "harry potter";
queryStr = "\"" + queryStr.trim() + "\"";
Query query = new MultiFieldQueryParser(new String[]{"title","author","content"}, new StandardAnalyzer()).parse(queryStr);
System.out.println(query);
It will print:
(title:"harry potter") (author:"harry potter") (content:"harry potter")