Spring Boot Stream API large data - Java

I'm using JPA with a native query that returns about 13k records. I thought about using the Java 8 Stream API, but the result never arrives.
I can't paginate because the result will populate a combo box.
My repository returns the Stream.
I added @Transactional(readOnly = true) to make it work:
@Query(value = "select * from mytable", nativeQuery = true)
Stream<MyTable> getTableStream();
@Transactional(readOnly = true)
public Stream<MyTable> getTableStream() {
    return repository.getTableStream();
}
@GetMapping(value = "/table", produces = MediaType.APPLICATION_STREAM_JSON_VALUE)
@Transactional(readOnly = true)
public ResponseEntity<Stream<MyTable>> getMailingClient() {
    Stream<MyTable> body = service.getTableStream();
    return ResponseEntity.ok(body);
}
All the links and resources I found about streams do not show how to return the stream as JSON from a Spring REST API.
My frontend is Angular 6, and the closest I got was a custom object with no result.

Loading all 13k records into a combo box sounds like a very slow solution. I would recommend implementing a search based on a LIKE query, something like this:
#Query("SELECT * FROM mytable WHERE name like ':name%'")
Stream<MyTable> getTableStream(#Param("name") String name);
But if you really want to load all of the records, you can use a java.util.Collection or java.util.List instead of a stream.
Collection<MyTable> getTableStream();
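If you go the collection route, the rest of the stack can return the list directly and let Jackson serialize it as a plain JSON array. Below is a minimal sketch reusing the repository/service/controller names from the question (the method name getTableList is assumed for illustration):
// Repository: materialize the result instead of streaming it
@Query(value = "select * from mytable", nativeQuery = true)
List<MyTable> getTableList();

// Service: the transaction can close once the list is materialized
public List<MyTable> getTableList() {
    return repository.getTableList();
}

// Controller: Jackson serializes the list as ordinary JSON
@GetMapping(value = "/table", produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<List<MyTable>> getMailingClient() {
    return ResponseEntity.ok(service.getTableList());
}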

Related

How can I get Spring Boot transaction data in a jOOQ query

I have a problem getting data with jOOQ inside a Spring Boot transaction.
I use a transaction to save the base data, and then I want to read that data back with jOOQ, but what I fetched was empty.
String sql = dslContext.select().from(Tables.SALES_INVENTORY).getSQL();
System.out.println(sql);
Result<Record> fetch1 = dslContext.select().from(Tables.SALES_INVENTORY).fetch();
System.out.println(fetch1);
String groupbySql =
dslContext
.select(Tables.SALES_INVENTORY.ITEM_ID, sum(Tables.SALES_INVENTORY.ON_HAND_QTY))
.from(Tables.SALES_INVENTORY)
.groupBy(Tables.SALES_INVENTORY.ITEM_ID)
.getSQL();
System.out.println(groupbySql);
Result<Record2<UUID, BigDecimal>> fetch =
dslContext
.select(Tables.SALES_INVENTORY.ITEM_ID, sum(Tables.SALES_INVENTORY.ON_HAND_QTY))
.from(Tables.SALES_INVENTORY)
.groupBy(Tables.SALES_INVENTORY.ITEM_ID)
.fetch();
System.out.println(fetch);
List<SalesInventoryEntity> all = salesInventoryRepository.findAll();
all.forEach(s -> System.out.println(s));
jOOQ's SQL is correct, but it returns no data when I run it inside a JPA @Transactional test method, while reading the same data through the JPA repository returns the right rows.
So my main problem is: how can I read that data with jOOQ inside a JPA transaction?
Here is what I used to initialize the base data. It is a test method, and its class is annotated with @Transactional, so does that mean the method is also @Transactional?
public void initSalesInventories() {
List<SalesInventoryEntity> salesInventories = Lists.newArrayList();
ItemEntity itemEntity = itemRepository.findById(itemId1).get();
int i = 0;
for (StockLocationEntity stockLocationEntity : stockLocationEntities) {
SalesInventoryEntity salesInventoryEntity = new SalesInventoryEntity();
salesInventoryEntity.setStockLocation(stockLocationEntity);
salesInventoryEntity.setSalesOrganization(usSalesOrgEntity);
salesInventoryEntity.setItem(itemEntity);
salesInventoryEntity.setItemClass(ItemClass.SALEABLE);
salesInventoryEntity.setOnHandQty(100);
salesInventoryEntity.setReservedQty(0);
salesInventoryEntity.setAvailableQty(100);
salesInventoryEntity.setLeadTime(5);
DocumentType[] values = DocumentType.values();
salesInventoryEntity.setDocType(values[i % 7]);
String code = "TO-201906112010000" + i;
i++;
salesInventoryEntity.setDocCode(code);
salesInventories.add(salesInventoryEntity);
}
salesInventoryRepository.saveAll(salesInventories);
}
After initializing the base data I used jOOQ to read it and found nothing. I don't know whether jOOQ cannot read another transaction's data or whether it just reads the data committed to the database. If you know something about this, please give me some advice.
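One thing worth checking (an assumption on my part, not stated in the question): JPA queues the inserts in its persistence context and may not have flushed them to the JDBC connection yet, so a jOOQ query in the same transaction sees nothing, while a jOOQ query running on a separate, non-transaction-aware connection would only ever see committed data. A minimal sketch of forcing the flush before the jOOQ read, assuming the DSLContext is configured on the same Spring-managed DataSource and transaction:
@PersistenceContext
private EntityManager entityManager;

@Transactional
public void initAndRead() {
    initSalesInventories();      // JPA saves are queued in the persistence context
    entityManager.flush();       // push the pending INSERTs to the database connection

    // A jOOQ query running on the same transaction/connection can now see the rows
    Result<Record> rows = dslContext.select().from(Tables.SALES_INVENTORY).fetch();
    System.out.println(rows);
}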

Correct way to implement paging for Cassandra with CassandraRepository from Spring Data

I'm looking for a solution to implement paging for our Spring Boot based REST service with a Cassandra (version 3.11.3) database. We are using Spring Boot 2.0.5.RELEASE with spring-boot-starter-data-cassandra as a dependency.
As Spring Data's CassandraRepository<T, ID> interface does not extend PagingAndSortingRepository, we don't get the full paging functionality we have with JPA.
I read the Spring Data Cassandra documentation and found a possible way to implement paging, since the CassandraRepository interface exposes the method Slice<T> findAll(Pageable pageable);. I am aware that Cassandra cannot jump to a specific page ad hoc and always has to start at page zero and iterate through the pages, as documented in CassandraPageRequest:
Cassandra-specific {@link PageRequest} implementation providing access to {@link PagingState}. This class allows creation of the first page request; because Cassandra paging is based on the progress of fetched pages, it allows forward-only navigation. Accessing a particular page requires fetching of all pages until the desired page is reached.
In my use case we have over 1,000,000 database entries and want to display them paged in our single-page application.
My current approach looks like the following:
@RestController
@RequestMapping("/users")
public class UsersResource {

    @Autowired
    UserRepository userRepository;

    @GetMapping
    public ResponseEntity<List<User>> getAllTests(
            @RequestParam(defaultValue = "0", name = "page") @Positive int requiredPage,
            @RequestParam(defaultValue = "500", name = "size") int size) {
        Slice<User> resultList = userRepository.findAll(CassandraPageRequest.first(size));
        int currentPage = 0;
        // Walk forward page by page until the requested page is reached
        while (resultList.hasNext() && currentPage < requiredPage) {
            System.out.println("Current Page Number: " + currentPage);
            resultList = userRepository.findAll(resultList.nextPageable());
            currentPage++;
        }
        return ResponseEntity.ok(resultList.getContent());
    }
}
BUT with this approach I have to fetch all preceding pages into memory and iterate until I reach the requested page. Is there a different approach to find the correct page, or do I have to use my current solution?
My Cassandra table definition looks like the following:
CREATE TABLE user (
    id int,
    firstname varchar,
    lastname varchar,
    code varchar,
    PRIMARY KEY (id)
);
What I have done is to create a page object that holds the content and the pagingState hash.
For the initial page, we use simple paging:
Pageable pageRequest = CassandraPageRequest.of(0, 5);
Once the find is performed we get the slice:
Slice<Group> slice = groupRepository.findAll(pageRequest);
With the slice you can get the paging state:
page.setPageHash(getPageHash((CassandraPageRequest) slice.getPageable()));
where
private String getPageHash(CassandraPageRequest pageRequest) {
    return Base64.toBase64String(pageRequest.getPagingState().toBytes());
}
Finally, return a Page object with the List content and the pagingState as pageHash.
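To resume from that hash on the next request, the client sends the pageHash back and the server rebuilds the Cassandra page request from it. A minimal sketch under the same assumptions (DataStax driver PagingState, a hypothetical GroupPage DTO holding content and pageHash); here the paging state is serialized with its own toString()/fromString() instead of Base64:
@GetMapping("/groups")
public GroupPage getGroups(@RequestParam(required = false) String pageHash,
                           @RequestParam(defaultValue = "5") int size) {
    // First call: no hash; subsequent calls: hash taken from the previous response
    CassandraPageRequest pageRequest = (pageHash == null)
            ? CassandraPageRequest.first(size)
            : CassandraPageRequest.of(PageRequest.of(0, size), PagingState.fromString(pageHash)).next();

    Slice<Group> slice = groupRepository.findAll(pageRequest);

    GroupPage page = new GroupPage();
    page.setContent(slice.getContent());
    CassandraPageRequest next = (CassandraPageRequest) slice.getPageable();
    if (next.getPagingState() != null) {
        page.setPageHash(next.getPagingState().toString()); // web-safe string form of the paging state
    }
    return page;
}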
See the code below; it may help.
#GetMapping("/loadData")
public Mono<DataTable> loadData(#RequestParam boolean reset, #RequestParam(required = false) String tag, WebSession session) {
final String sessionId = session.getId();
IMap<String, String> map = Context.get(HazelcastInstance.class).getMap("companygrouping-pageable-map");
int pageSize = Context.get(EnvProperties.class).getPageSize();
Pageable pageRequest;
if (reset)
map.remove(sessionId);
String serializedPagingState = map.compute(sessionId, (k, v) -> (v == null) ? null : map.get(session.getId()));
pageRequest = StringUtils.isBlank(serializedPagingState) ? CassandraPageRequest.of(0, pageSize)
: CassandraPageRequest.of(PageRequest.of(0, pageSize), PagingState.fromString(serializedPagingState)).next();
Mono<Slice<TagMerge>> sliceMono = StringUtils.isNotBlank(tag)
? Context.get(TagMergeRepository.class).findByKeyStatusAndKeyTag(Status.NEW, tag, pageRequest)
: Context.get(TagMergeRepository.class).findByKeyStatus(Status.NEW, pageRequest);
Flux<TagMerge> flux = sliceMono.map(t -> convert(t, map, sessionId)).flatMapMany(Flux::fromIterable);
Mono<DataTable> dataTabelMono = createTableFrom(flux).doOnError(e -> log.error("{}", e));
if (reset) {
Mono<Long> countMono = Mono.empty();
if (StringUtils.isNotBlank(tag))
countMono = Context.get(TagMergeRepository.class).countByKeyStatusAndKeyTag(Status.NEW, tag);
else
countMono = Context.get(TagMergeRepository.class).countByKeyStatus(Status.NEW);
dataTabelMono = dataTabelMono.zipWith(countMono, (t, k) -> {
t.setTotalRows(k);
return t;
});
}
return dataTabelMono;
}
private List<TagMerge> convert(Slice<TagMerge> slice, IMap<String, String> map, String id) {
PagingState pagingState = ((CassandraPageRequest) slice.getPageable()).getPagingState();
if (pagingState != null)
map.put(id, pagingState.toString());
return slice.getContent();
}
Cassandra supports forward pagination, which means you can fetch the first n rows, then the rows between n+1 and 2n, and so on until your data ends, but you cannot fetch the rows between n+1 and 2n directly.

How to define a custom analyzer for global search with hibernate-search and elasticsearch

I have an implementation of hibernate-search-orm (5.9.0.Final) with hibernate-search-elasticsearch (5.9.0.Final).
I defined a custom analyzer on an entity (see below) and indexed two entities:
id: "1"
title: "Médiatiques : récit et société"
abstract:...
id: "2"
title: "Mediatique Com'7"
abstract:...
The search works fine when I search on the title field:
"title:médiatique" => 2 results.
"title:mediatique" => 2 results.
My problem is when I do a global search, with or without accents:
search on "médiatique" => 1 result (id:1)
search on "mediatique" => 1 result (id:2)
Is there a way to resolve this?
Thanks.
Entity definition:
@Entity
@Table(name = "bibliographic")
@DynamicUpdate
@DynamicInsert
@Indexed(index = "bibliographic")
@FullTextFilterDefs({
    @FullTextFilterDef(name = "fieldsElasticsearchFilter",
            impl = FieldsElasticsearchFilter.class)
})
@AnalyzerDef(name = "customAnalyzer",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
        @TokenFilterDef(factory = LowerCaseFilterFactory.class),
        @TokenFilterDef(factory = ASCIIFoldingFilterFactory.class),
    })
@Analyzer(definition = "customAnalyzer")
public class BibliographicHibernate implements Bibliographic {
    ...
    @Column(name = "title", updatable = false)
    @Fields({
        @Field,
        @Field(name = "titleSort", analyze = Analyze.NO, store = Store.YES)
    })
    @SortableField(forField = "titleSort")
    private String title;
    ...
}
Search method:
FullTextEntityManager ftem = Search.getFullTextEntityManager(entityManager);
QueryBuilder qb = ftem.getSearchFactory().buildQueryBuilder().forEntity(Bibliographic.class).get();
QueryDescriptor q = ElasticsearchQueries.fromQueryString(queryString);
FullTextQuery query = ftem.createFullTextQuery(q, Bibliographic.class).setFirstResult(start).setMaxResults(rows);
if (filters!=null){
filters.stream().map((filter) -> filter.split(":")).forEach((f) -> {
query.enableFullTextFilter("fieldsElasticsearchFilter")
.setParameter("field", f[0])
.setParameter("value", f[1]);
}
);
}
if (facetFields!=null){
facetFields.stream().map((facet) -> facet.split(":")).forEach((f) ->{
query.getFacetManager()
.enableFaceting(qb.facet()
.name(f[0])
.onField(f[0])
.discrete()
.orderedBy(FacetSortOrder.COUNT_DESC)
.includeZeroCounts(false)
.maxFacetCount(10)
.createFacetingRequest() );
}
);
}
List<Bibliographic> bibs = query.getResultList();
To be honest I'm more surprised document 1 would match at all, since there's a trailing "s" on "Médiatiques" and you don't use any stemmer.
You are in a special case here: you are using a query string and passing it directly to Elasticsearch (that's what ElasticsearchQueries.fromQueryString(queryString) does). Hibernate Search has very little impact on the query being run, it only impacts the indexed content and the Elasticsearch mapping here.
When you run a QueryString query on Elasticsearch and you don't specify any field, it uses all fields in the document. I wouldn't bet that the analyzer used when analyzing your query is the same analyzer that you defined on your "title" field. In particular, it may not be removing accents.
An alternative solution would be to build a simple query string query using the QueryBuilder. The syntax of queries is a bit more limited, but is generally enough for end users. The code would look like this:
FullTextEntityManager ftem = Search.getFullTextEntityManager(entityManager);
QueryBuilder qb = ftem.getSearchFactory().buildQueryBuilder().forEntity(Bibliographic.class).get();
Query q = qb.simpleQueryString()
.onFields("title", "abstract")
.matching(queryString)
.createQuery();
FullTextQuery query = ftem.createFullTextQuery(q, Bibliographic.class).setFirstResult(start).setMaxResults(rows);
Users would still be able to target specific fields, but only in the list you provided (which, by the way, is probably safer, otherwise they could target sort fields and so on, which you probably don't want to allow). By default, all the fields in that list would be targeted.
This may lead to the exact same result as the query string, but the advantage is, you can override the analyzer being used for the query. For instance:
FullTextEntityManager ftem = Search.getFullTextEntityManager(entityManager);
QueryBuilder qb = ftem.getSearchFactory().buildQueryBuilder().forEntity(Bibliographic.class)
.overridesForField("title", "customAnalyzer")
.overridesForField("abstract", "customAnalyzer")
.get();
Query q = qb.simpleQueryString()
.onFields("title", "abstract")
.matching(queryString)
.createQuery();
FullTextQuery query = ftem.createFullTextQuery(q, Bibliographic.class).setFirstResult(start).setMaxResults(rows);
... and this will use your analyzer when querying.
As an alternative, you can also use a more advanced JSON query by replacing ElasticsearchQueries.fromQueryString(queryString) with ElasticsearchQueries.fromJsonQuery(json). You will have to craft the JSON yourself, though, taking some precautions to avoid any injection from the user (use Gson to build the Json), and taking care to follow the Elasticsearch query syntax.
You can find more information about simple query string queries in the official documentation.
Note: you may want to add FrenchMinimalStemFilterFactory to your list of token filters in your custom analyzer. It's not the cause of your problem, but once you manage to use your analyzer in search queries, you will very soon find it useful.
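For illustration, the analyzer definition from the question could then look something like this (just a sketch, assuming Lucene's FrenchMinimalStemFilterFactory is available on the classpath):
@AnalyzerDef(name = "customAnalyzer",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
        @TokenFilterDef(factory = LowerCaseFilterFactory.class),
        @TokenFilterDef(factory = ASCIIFoldingFilterFactory.class),
        // Light French stemmer, so "médiatiques" and "médiatique" end up as the same token
        @TokenFilterDef(factory = FrenchMinimalStemFilterFactory.class)
    })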

Search users under organization in Liferay

I have to search users under a specific organization in Liferay. At present we have a search available with
UserLocalService.search()
which is based on the companyId. I was wondering if there is any other way, perhaps using DynamicQueryFactoryUtil, to fetch users with an organization filter.
The dynamic query looks good, but I found another way: we can pass the organization id through the params map.
LinkedHashMap<String, Object> params = new LinkedHashMap<>();
params.put("usersOrgs", orgId);
List<User> searchResult = liferayUserLocalService.search(companyId, keyword, WorkflowConstants.STATUS_APPROVED, params, 0, -1, (OrderByComparator<User>) null);
This will filter the users based on the organization.
Of course you can use DynamicQuery to achieve this.
This can be done in two phases:
Fetch the user ids associated with the given organization.
Use a search criterion along with the ids received in the first phase.
So the code will look like the following:
// Fetch the userId list for the given organization id
long[] organizationUserIds = UserLocalServiceUtil.getOrganizationUserIds(orgId);
DynamicQuery searchQuery = DynamicQueryFactoryUtil.forClass(User.class, UserLocalServiceUtil.class.getClassLoader());
Criterion searchCriteria = PropertyFactoryUtil.forName("companyId").eq(companyid);
// Add the organization's user ids to the criterion
if (organizationUserIds.length != 0) {
    searchCriteria =
        RestrictionsFactoryUtil.and(RestrictionsFactoryUtil.in("userId", ArrayUtils.toObject(organizationUserIds)), searchCriteria);
}
if (!firstName.isEmpty()) {
searchCriteria = RestrictionsFactoryUtil.or(RestrictionsFactoryUtil.eq("firstName", firstName), searchCriteria);
}
if (!middleName.isEmpty()) {
searchCriteria = RestrictionsFactoryUtil.or(RestrictionsFactoryUtil.eq("middleName", middleName), searchCriteria);
}
if (!lastName.isEmpty()) {
searchCriteria = RestrictionsFactoryUtil.or(RestrictionsFactoryUtil.eq("lastName", lastName), searchCriteria);
}
if (!screenName.isEmpty()) {
searchCriteria = RestrictionsFactoryUtil.or(RestrictionsFactoryUtil.eq("screenName", screenName), searchCriteria);
}
searchQuery.add(searchCriteria);
UserLocalServiceUtil.dynamicQuery(searchQuery);
P.S.
I haven't tested this code, but this is the way to do it.
I hope it helps.

How to implement proper pagination in Google App Engine (Java)?

I tried to implement pagination in Google App Engine (Java), but I was not able to get it fully working. Forward pagination works, but I cannot get reverse pagination to work.
I tried storing the previous cursor value and passing it through the HTTP request as below:
JSP file:
<a href='/myServlet?previousCursor=${previousCursor}'>Previous page</a>
<a href='/myServlet?nextCursor=${nextCursor}'>Next page</a>
Servlet file:
String previousCursor = req.getParameter("previousCursor");
String nextCursor = req.getParameter("nextCursor");
String startCursor = null;
if(previousCursor != null){
startCursor = previousCursor;
}
if(nextCursor != null){
startCursor = nextCursor;
}
int pageSize = 3;
FetchOptions fetchOptions = FetchOptions.Builder.withLimit(pageSize);
if (startCursor != null) {
fetchOptions.startCursor(Cursor.fromWebSafeString(startCursor));
}
Query q = new Query("MyQuery");
PreparedQuery pq = datastore.prepare(q);
QueryResultList<Entity> results = pq.asQueryResultList(fetchOptions);
for (Entity entity : results) {
//Get the properties from the entity
}
String endCursor = results.getCursor().toWebSafeString();
req.setAttribute("previousCursor", startCursor);
req.setAttribute("nextCursor", endCursor);
With this I am able to retain the previous cursor value, but unfortunately the previous cursor seems to be invalid.
I also tried using the reverse() method, but it was of no use; it behaves the same as forward.
So is there any way to implement proper pagination (forward and backward) in Google App Engine (Java)?
I found a similar question posted in 2010. There the answer was also to use a Cursor, but as shown above it is not working for me.
Pagination in Google App Engine with Java
If you are familiar with JPA you can give it a try.
I have tested it and pagination works in GAE.
I think they support JPA 1.0 as of now.
What I tried: I created an Employee entity, created a DAO layer, and persisted a few employee entities.
To do a paginated fetch, I did this:
Query query = em.createQuery("select e from Employee e");
query.setFirstResult(0);
query.setMaxResults(2);
List<Employee> resultList = query.getResultList();
(In this example we get the first page, which has 2 entities. The argument to setFirstResult is the start index and the argument to setMaxResults is your page size.)
You can easily change the arguments to query.setFirstResult and setMaxResults
and build pagination logic around them, as sketched below.
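A minimal sketch of such a helper, with the method name and zero-based page numbering assumed for illustration:
// Fetch one page of employees; pageNumber is zero-based, pageSize is the number of rows per page
@SuppressWarnings("unchecked")
public List<Employee> findEmployees(EntityManager em, int pageNumber, int pageSize) {
    Query query = em.createQuery("select e from Employee e");
    query.setFirstResult(pageNumber * pageSize); // start index of the requested page
    query.setMaxResults(pageSize);               // rows per page
    return (List<Employee>) query.getResultList();
}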
Hope this helps,
Regards,
