I want to get all documents (millions) in an Elasticsearch index based on some condition. I used the query below in Elasticsearch.
GET /<index-name>/_search
{
"from" : 99550, "size" : 500,
"query" : {
"term" : { "CC_ENGAGEMENT_NUMBER" : "1967" }
}
}
And below is my Java implementation.
public IndexSearchResult findByStudIdAndcollageId(final String studId, final String collageId,
Integer Page_Number_Start_Index, Integer Total_No_Of_Records) {
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
List<Map<String, Object>> searchResults = new ArrayList<Map<String, Object>>();
IndexSearchResult indexSearchResult = new IndexSearchResult();
try {
QueryBuilder qurBd = new BoolQueryBuilder().minimumShouldMatch(2)
.should(QueryBuilders.matchQuery("STUD_ID", studId).operator(Operator.AND))
.should(QueryBuilders.matchQuery("CLG_ID", collageId).operator(Operator.AND));
sourceBuilder.from(Page_Number_Start_Index).size(Total_No_Of_Records);
sourceBuilder.query(qurBd);
sourceBuilder.sort(new FieldSortBuilder("ROLL_NO.keyword").order(SortOrder.DESC));
SearchRequest searchRequest = new SearchRequest();
searchRequest.indices("clgindex");
searchRequest.source(sourceBuilder);
SearchResponse response;
response = rClient.search(searchRequest, RequestOptions.DEFAULT);
response.getHits().forEach(searchHit -> {
searchResults.add(searchHit.getSourceAsMap());
});
indexSearchResult.setListOfIndexes(searchResults);
log.info("searchResultsHits {}", searchResults.size());
} catch (Exception e) {
log.error("search :: Search on clg flat index. {}", e.getMessage());
}
return indexSearchResult;
}
So with from at 99550 and size at 500, it will not fetch anything beyond 1L (100,000) records, and I get this error:
Error: "reason" : "Result window is too large, from + size must be less than or equal to: [100000] but was [100050]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting."
I don't want to change [index.max_result_window]. I only want a Java-side solution to search all the docs in the index based on conditions, using the Elasticsearch API.
Thanks in advance..
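One Java-side option that avoids changing index.max_result_window is the Scroll API, which several answers below demonstrate. A minimal sketch against the Java High Level REST Client (reusing rClient and the term query from the question; the one-minute keep-alive and 500 batch size are assumptions):

SearchRequest searchRequest = new SearchRequest("clgindex");
searchRequest.scroll(TimeValue.timeValueMinutes(1L)); // keep the scroll context alive between batches
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder()
        .query(QueryBuilders.termQuery("CC_ENGAGEMENT_NUMBER", "1967"))
        .size(500); // batch size per round trip, not a cap on total results
searchRequest.source(sourceBuilder);

SearchResponse response = rClient.search(searchRequest, RequestOptions.DEFAULT);
String scrollId = response.getScrollId();
SearchHit[] hits = response.getHits().getHits();
List<Map<String, Object>> allDocs = new ArrayList<>();
while (hits != null && hits.length > 0) {
    for (SearchHit hit : hits) {
        allDocs.add(hit.getSourceAsMap());
    }
    SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId);
    scrollRequest.scroll(TimeValue.timeValueMinutes(1L));
    response = rClient.scroll(scrollRequest, RequestOptions.DEFAULT);
    scrollId = response.getScrollId();
    hits = response.getHits().getHits();
}
ClearScrollRequest clearScrollRequest = new ClearScrollRequest();
clearScrollRequest.addScrollId(scrollId);
rClient.clearScroll(clearScrollRequest, RequestOptions.DEFAULT); // release the server-side scroll context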
This is my previous question: how to insert data in an Elasticsearch index.
The index mapping is as follows:
{
  "test" : {
    "mappings" : {
      "properties" : {
        "name" : {
          "type" : "keyword"
        },
        "info" : {
          "type" : "nested"
        },
        "joining" : {
          "type" : "date"
        }
      }
    }
  }
}
How can I check whether a field's value is already present before uploading a document to the index?
Note: I don't have an id field maintained in the index. I need to check the name in each document; if it is already present, then don't insert the document into the index.
Thanks in advance.
As you don't have an id field in your mapping, you have to search on the name field, and you can use the code below to search on it.
public List<SearchResult> search(String searchTerm) throws IOException {
    SearchRequest searchRequest = new SearchRequest(INDEX_NAME);
    SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
    // the field name comes first, then the value to match
    MatchQueryBuilder matchQueryBuilder = new MatchQueryBuilder("name", searchTerm);
    searchSourceBuilder.query(matchQueryBuilder);
    searchRequest.source(searchSourceBuilder);
    SearchResponse searchResponse = esclient.search(searchRequest, RequestOptions.DEFAULT);
    return getSearchResults(searchResponse);
}
Note: as your name field is a keyword field, you can use a TermQueryBuilder instead of a match query.
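For example, a one-line sketch (reusing searchSourceBuilder and the name field from your mapping):

searchSourceBuilder.query(QueryBuilders.termQuery("name", searchTerm)); // exact match on the keyword field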
It uses a utility method to parse the SearchResponse of ES, the code of which is below:
private List<SearchResult> getSearchResults(SearchResponse searchResponse) {
    RestStatus status = searchResponse.status();
    TimeValue took = searchResponse.getTook();
    Boolean terminatedEarly = searchResponse.isTerminatedEarly();
    boolean timedOut = searchResponse.isTimedOut();
    // Start fetching the documents matching the search results.
    // https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high-search.html#java-rest-high-search-response-search-hits
    SearchHits hits = searchResponse.getHits();
    SearchHit[] searchHits = hits.getHits();
    List<SearchResult> results = new ArrayList<>();
    for (SearchHit hit : searchHits) {
        // do something with the SearchHit
        String index = hit.getIndex();
        String id = hit.getId();
        float score = hit.getScore();
        Map<String, Object> sourceAsMap = hit.getSourceAsMap();
        String name = (String) sourceAsMap.get("name");
        results.add(new SearchResult(id, name, score)); // build your own result object here
    }
    return results;
}
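Putting the two together, here is a hedged sketch of the insert-if-absent flow you asked about (assuming a 7.x high-level REST client where getTotalHits() returns a TotalHits object; index name "test" is taken from your mapping). Note that this check-then-insert is not atomic, so concurrent writers could still create duplicates:

public void insertIfAbsent(String name, Map<String, Object> document) throws IOException {
    SearchSourceBuilder sourceBuilder = new SearchSourceBuilder()
            .query(QueryBuilders.termQuery("name", name))
            .size(0); // we only need the total hit count, not the documents
    SearchRequest searchRequest = new SearchRequest("test").source(sourceBuilder);
    SearchResponse response = esclient.search(searchRequest, RequestOptions.DEFAULT);
    if (response.getHits().getTotalHits().value == 0) { // no document with this name yet
        IndexRequest indexRequest = new IndexRequest("test").source(document);
        esclient.index(indexRequest, RequestOptions.DEFAULT);
    }
}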
I have the following example code, and the number of queries that I send depends on how many SearchRequestBuilders I construct and add to the MultiSearchRequest.
public static void requestBuilder(ArrayList<String> formulae) {
Client client = TransportClient.builder().build()
.addTransportAddress(new InetSocketTransportAddress(new InetSocketAddress("localhost",9300)));
SearchRequestBuilder srb1 = client
.prepareSearch(index)
.setSource(formulae.get(1));
SearchRequestBuilder srb2 = client
.prepareSearch(index)
.setSource(formulae.get(2));
MultiSearchResponse sr = client.prepareMultiSearch()
.add(srb1)
.add(srb2)
.execute()
.actionGet();
long nbHits = 0;
for (MultiSearchResponse.Item item : sr.getResponses()){
SearchResponse response = item.getResponse();
nbHits += response.getHits().getTotalHits();
System.out.println(response);
}
System.out.println(nbHits);
System.out.println(formulae.size());
client.close();
}
Is there any way that I can generate [size of formulae] SearchRequestBuilders, so I can query each element of my ArrayList?
You can simply iterate over your formulae and add them to your multi search one by one and then fire the multi search request.
MultiSearchRequestBuilder sr = client.prepareMultiSearch();
for (String formula : formulae) {
SearchRequestBuilder srb = client
.prepareSearch(index)
.setSource(formula);
sr.add(srb);
}
MultiSearchResponse resp = sr.execute().actionGet();
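As a small follow-up sketch (not part of the original answer): when reading the responses back, it can be worth checking each item for failure, since one sub-query can fail while the others succeed:

for (MultiSearchResponse.Item item : resp.getResponses()) {
    if (item.isFailure()) {
        System.err.println("Sub-search failed: " + item.getFailureMessage());
        continue; // skip failed sub-queries instead of dereferencing a null response
    }
    SearchResponse response = item.getResponse();
    // process response.getHits() here
}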
I want to get all data from Elasticsearch with filters, without paging. Which way is the best to get it? I've got the default limit set to 2000. I read that I should use scan, but I don't know how to use it. How should I use scan and scroll to get all the data?
public Map searchByIndexParams(AuctionIndexSearchParams searchParams, Pageable pageable) {
final List<FilterBuilder> filters = Lists.newArrayList();
final NativeSearchQueryBuilder searchQuery = new NativeSearchQueryBuilder().withQuery(matchAllQuery());
Optional.ofNullable(searchParams.getCategoryId()).ifPresent(v -> filters.add(boolFilter().must(termFilter("cat", v))));
Optional.ofNullable(searchParams.getCurrency()).ifPresent(v -> filters.add(boolFilter().must(termFilter("curr", v))));
Optional.ofNullable(searchParams.getTreeCategoryId()).ifPresent(v -> filters.add(boolFilter().must(termFilter("tcat", v))));
Optional.ofNullable(searchParams.getUid()).ifPresent(v -> filters.add(boolFilter().must(termFilter("uid", v))));
//access for many uids
if(searchParams.getUids() != null){
Optional.ofNullable(searchParams.getUids().split(",")).ifPresent(v -> {
filters.add(boolFilter().must(termsFilter("uid", v)));
});
}
//access for many categories
if(searchParams.getCategories() != null){
Optional.ofNullable(searchParams.getCategories().split(",")).ifPresent(v -> {
filters.add(boolFilter().must(termsFilter("cat", v)));
});
}
final BoolQueryBuilder boolQueryBuilder = new BoolQueryBuilder();
if (Optional.ofNullable(searchParams.getTitle()).isPresent()) {
boolQueryBuilder.should(queryStringQuery(searchParams.getTitle()).analyzeWildcard(true).field("title"));
}
if (Optional.ofNullable(searchParams.getStartDateFrom()).isPresent()
|| Optional.ofNullable(searchParams.getStartDateTo()).isPresent()) {
filters.add(rangeFilter("start_date").from(searchParams.getStartDateFrom()).to(searchParams.getStartDateTo()));
}
if (Optional.ofNullable(searchParams.getEndDateFrom()).isPresent()
|| Optional.ofNullable(searchParams.getEndDateTo()).isPresent()) {
filters.add(rangeFilter("end_date").from(searchParams.getEndDateFrom()).to(searchParams.getEndDateTo()));
}
if (Optional.ofNullable(searchParams.getPriceFrom()).isPresent()
|| Optional.ofNullable(searchParams.getPriceTo()).isPresent()) {
filters.add(rangeFilter("price").from(searchParams.getPriceFrom()).to(searchParams.getPriceTo()));
}
searchQuery.withQuery(boolQueryBuilder);
FilterBuilder[] filterArr = new FilterBuilder[filters.size()];
filterArr = filters.toArray(filterArr);
searchQuery.withFilter(andFilter(filterArr));
final FacetedPage<AuctionIndex> search = auctionIndexRepository.search(searchQuery.build());
response.put("content", search.map(index ->auctionRepository
.findAuctionById(Long.valueOf(index.getId())))
.getContent());
return response;
}
Edit:
I've got:
String scrollId = searchTemplate.scan(searchQuery.build(), 1000, false);
Page<AuctionIndex> page = searchTemplate.scroll(scrollId, 15000L, AuctionIndex.class);
Integer i = 0;
if (page != null && page.hasContent()) {
while(page.hasContent()){
page = searchTemplate.scroll(scrollId, 15000L, AuctionIndex.class);
if(page.hasContent()){
System.out.println(i);
i++;
}
}
}
but the iteration only goes to 166 and stops. What's wrong?
The Scroll API is the most efficient way to go through all the documents. Using the scroll_id, you can find the session that is stored on the server for your specific scroll request.
Here is a sample of how you can use the Elasticsearch Java scroll API in your code to fetch all the results matching your query.
SearchResponse searchResponse = client.prepareSearch(<INDEX>)
.setQuery(<QUERY>)
.setSearchType(SearchType.SCAN)
.setScroll(SCROLL_TIMEOUT)
.setSize(SCROLL_SIZE)
.execute()
.actionGet();
while (true) {
searchResponse = client
.prepareSearchScroll(searchResponse.getScrollId())
.setScroll(SCROLL_TIMEOUT)
.execute().actionGet();
if (searchResponse.getHits().getHits().length == 0) {
break; //Break condition: No hits are returned
}
for (SearchHit hit : searchResponse.getHits()) {
// process response
}
}
Sample using Spring-data-elasticsearch
@Autowired
private ElasticsearchTemplate searchTemplate;
String scrollId = searchTemplate.scan(<SEARCH_QUERY>, 1000, false);
Page<ExampleItem> page = searchTemplate.scroll(scrollId, 5000L, ExampleItem.class);
if (page != null && page.hasContent()) {
// process first batch
while (page != null && page.hasContent()) {
page = searchTemplate.scroll(scrollId, 5000L, ExampleItem.class);
if (page != null && page.hasContent()) {
// process remaining batches
}
}
}
Here, ExampleItem specifies the entity that is to be fetched.
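One addition worth making once you are done iterating (hedged: clearScroll(String) exists on ElasticsearchTemplate in the Spring Data Elasticsearch versions of this era): release the server-side scroll context instead of letting it expire on its own.

searchTemplate.clearScroll(scrollId); // free the scroll session on the server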
I have a database in Elasticsearch and want to get all records on my web site page. I wrote a bean which connects to the Elasticsearch node, searches records, and returns some response. My simple Java code, which does the searching, is
SearchResponse response = getClient().prepareSearch(indexName)
.setTypes(typeName)
.setQuery(queryString("*:*"))
.setExplain(true)
.execute().actionGet();
But Elasticsearch sets the default size to 10, so I have only 10 hits in the response, while there are more than 10 records in my database. If I set size to Integer.MAX_VALUE, my search becomes very slow, and this is not what I want.
How can I get all the records in one action in an acceptable amount of time without setting size of response?
public List<Map<String, Object>> getAllDocs(){
int scrollSize = 1000;
List<Map<String,Object>> esData = new ArrayList<Map<String,Object>>();
SearchResponse response = null;
int i = 0;
while( response == null || response.getHits().hits().length != 0){
response = client.prepareSearch(indexName)
.setTypes(typeName)
.setQuery(QueryBuilders.matchAllQuery())
.setSize(scrollSize)
.setFrom(i * scrollSize)
.execute()
.actionGet();
for(SearchHit hit : response.getHits()){
esData.add(hit.getSource());
}
i++;
}
return esData;
}
The current highest-ranked answer works, but it requires loading the whole list of results into memory, which can cause memory issues for large result sets, and is in any case unnecessary.
I created a Java class that implements a nice Iterator over SearchHits, which allows you to iterate through all results. Internally, it handles pagination by issuing queries that include the from: field, and it keeps only one page of results in memory.
Usage:
// build your query here -- no need for setFrom(int)
SearchRequestBuilder requestBuilder = client.prepareSearch(indexName)
        .setTypes(typeName)
        .setQuery(QueryBuilders.matchAllQuery());
SearchHitIterator hitIterator = new SearchHitIterator(requestBuilder);
while (hitIterator.hasNext()) {
SearchHit hit = hitIterator.next();
// process your hit
}
Note that, when creating your SearchRequestBuilder, you don't need to call setFrom(int), as this will be done internally by the SearchHitIterator. If you want to specify the size of a page (i.e. the number of search hits per page), you can call setSize(int); otherwise Elasticsearch's default value is used.
SearchHitIterator:
import java.util.Iterator;
import org.elasticsearch.action.search.SearchRequestBuilder;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.search.SearchHit;
public class SearchHitIterator implements Iterator<SearchHit> {
private final SearchRequestBuilder initialRequest;
private int searchHitCounter;
private SearchHit[] currentPageResults;
private int currentResultIndex;
public SearchHitIterator(SearchRequestBuilder initialRequest) {
this.initialRequest = initialRequest;
this.searchHitCounter = 0;
this.currentResultIndex = -1;
}
@Override
public boolean hasNext() {
if (currentPageResults == null || currentResultIndex + 1 >= currentPageResults.length) {
SearchRequestBuilder paginatedRequestBuilder = initialRequest.setFrom(searchHitCounter);
SearchResponse response = paginatedRequestBuilder.execute().actionGet();
currentPageResults = response.getHits().getHits();
if (currentPageResults.length < 1) return false;
currentResultIndex = -1;
}
return true;
}
@Override
public SearchHit next() {
if (!hasNext()) return null;
currentResultIndex++;
searchHitCounter++;
return currentPageResults[currentResultIndex];
}
}
In fact, realizing how convenient it is to have such a class, I wonder why ElasticSearch's Java client does not offer something similar.
You could use the scrolling API.
The other suggestion of using a SearchHit iterator would also work great, but only when you don't want to update those hits.
import static org.elasticsearch.index.query.QueryBuilders.*;
QueryBuilder qb = termQuery("multi", "test");
SearchResponse scrollResp = client.prepareSearch("test")
.addSort(FieldSortBuilder.DOC_FIELD_NAME, SortOrder.ASC)
.setScroll(new TimeValue(60000))
.setQuery(qb)
.setSize(100).execute().actionGet(); //max of 100 hits will be returned for each scroll
//Scroll until no hits are returned
do {
for (SearchHit hit : scrollResp.getHits().getHits()) {
//Handle the hit...
}
scrollResp = client.prepareSearchScroll(scrollResp.getScrollId()).setScroll(new TimeValue(60000)).execute().actionGet();
} while(scrollResp.getHits().getHits().length != 0); // Zero hits mark the end of the scroll and the while loop.
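A small hedged addition to this sample: once the loop ends, you can release the scroll context explicitly instead of waiting for the keep-alive to expire:

client.prepareClearScroll().addScrollId(scrollResp.getScrollId()).get(); // free the scroll session on the server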
It has been a long time since you asked this question, but I would like to post my answer for future readers.
As mentioned in the answers above, it is better to load documents with size and from when you have thousands of documents in the index. In my project, the search loads 50 results as the default size, starting from index zero; if a user wants to load more data, the next 50 results are loaded. Here is what I have done in the code:
public List<CourseDto> searchAllCourses(int startDocument) {
final int searchSize = 50;
final SearchRequest searchRequest = new SearchRequest("course_index");
final SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.matchAllQuery());
if (startDocument != 0) {
startDocument += searchSize;
}
searchSourceBuilder.from(startDocument);
searchSourceBuilder.size(searchSize);
// sort the document
searchSourceBuilder.sort(new FieldSortBuilder("publishedDate").order(SortOrder.ASC));
searchRequest.source(searchSourceBuilder);
List<CourseDto> courseList = new ArrayList<>();
try {
final SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
final SearchHits hits = searchResponse.getHits();
// Do you want to know how many documents (results) matched in total? Here is how you get it:
TotalHits totalHits = hits.getTotalHits();
long numHits = totalHits.value;
final SearchHit[] searchHits = hits.getHits();
final ObjectMapper mapper = new ObjectMapper();
for (SearchHit hit : searchHits) {
// convert json object to CourseDto
courseList.add(mapper.readValue(hit.getSourceAsString(), CourseDto.class));
}
} catch (IOException e) {
logger.error("Cannot search by all mach. " + e);
}
return courseList;
}
Info:
- Elasticsearch version 7.5.0
- Java High Level REST Client is used as the client.
I hope this will be useful for someone.
You will have to trade off the number of returned results vs the time you want the user to wait and the amount of available server memory. If you've indexed 1,000,000 documents, there isn't a realistic way to retrieve all those results in one request. I'm assuming your results are for one user. You'll have to consider how the system will perform under load.
To query all, you should build a CountRequestBuilder to get the total number of records (via a CountResponse), then set that number as the size of your search request.
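A minimal sketch of that count-then-search flow under the old TransportClient API (indexName and queryBuilder are placeholders for your own index and query):

CountResponse countResponse = client.prepareCount(indexName)
        .setQuery(queryBuilder)
        .execute().actionGet();
SearchResponse searchResponse = client.prepareSearch(indexName)
        .setQuery(queryBuilder)
        .setSize((int) countResponse.getCount()) // fetch everything in one request
        .execute().actionGet();

Keep in mind that pulling the full result set in one response can strain both the cluster and client memory.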
If your primary focus is on exporting all records you might want to go for a solution which does not require any kind of sorting, as sorting is an expensive operation.
You could use the scan and scroll approach with ElasticsearchCRUD as described here.
For version 6.3.2, the following worked:
public List<Map<String, Object>> getAllDocs(String indexName, String searchType) throws FileNotFoundException, UnsupportedEncodingException {
    int scrollSize = 1000;
    List<Map<String, Object>> esData = new ArrayList<>();
    PrintWriter writer = new PrintWriter("docIds.txt", "UTF-8"); // the file name is just an example
    int i = 0;
    SearchResponse response = client.prepareSearch(indexName)
            .setScroll(new TimeValue(60000))
            .setTypes(searchType) // the document types to execute the search against; defaults to all types
            .setQuery(QueryBuilders.matchAllQuery())
            .setSize(scrollSize).get(); // at most scrollSize (1000) hits are returned per scroll batch
    // Scroll until no hits are returned
    do {
        for (SearchHit hit : response.getHits().getHits()) {
            ++i;
            writer.println(i + " " + hit.getId());
            esData.add(hit.getSourceAsMap()); // collect the document source so the method actually returns data
        }
        response = client.prepareSearchScroll(response.getScrollId()).setScroll(new TimeValue(60000)).execute().actionGet();
    } while (response.getHits().getHits().length != 0); // zero hits mark the end of the scroll and the while loop
    writer.close();
    return esData;
}
SearchResponse response = restHighLevelClient.search(new SearchRequest("Index_Name"), RequestOptions.DEFAULT);
SearchHit[] hits = response.getHits().getHits();
Note that without setting a size on the request, this returns only the first 10 hits by default.
1. Set a very large max size, e.g.:
private static final int MAXSIZE=1000000;
#Override
public List getAllSaleCityByCity(int cityId) throws Exception {
List<EsSaleCity> list=new ArrayList<EsSaleCity>();
Client client=EsFactory.getClient();
SearchResponse response= client.prepareSearch(getIndex(EsSaleCity.class)).setTypes(getType(EsSaleCity.class)).setSize(MAXSIZE)
.setQuery(QueryBuilders.filteredQuery(QueryBuilders.matchAllQuery(), FilterBuilders.boolFilter()
.must(FilterBuilders.termFilter("cityId", cityId)))).execute().actionGet();
SearchHits searchHits=response.getHits();
SearchHit[] hits=searchHits.getHits();
for(SearchHit hit:hits){
Map<String, Object> resultMap=hit.getSource();
EsSaleCity saleCity=setEntity(resultMap, EsSaleCity.class);
list.add(saleCity);
}
return list;
}
2. Count the documents before you search:
CountResponse countResponse = client.prepareCount(getIndex(EsSaleCity.class)).setTypes(getType(EsSaleCity.class)).setQuery(queryBuilder).execute().actionGet();
int size = (int) countResponse.getCount(); // this is the size you want
Then you can:
SearchResponse response = client.prepareSearch(getIndex(EsSaleCity.class)).setTypes(getType(EsSaleCity.class)).setQuery(queryBuilder).setSize(size).execute().actionGet();