I'm trying to implement a Java REST API that allows the UI to list documents from Firestore (potentially ordered on multiple fields).
I'm following the official documentation, but I'm struggling with how to handle/generate a next-page token from the response (since the UI will potentially need to iterate page after page). Is there any way to implement this behavior with the gRPC client? Should I switch to the REST client (which does seem to expose a nextPageToken field)?
Here is a workaround I found to mimic pagination-like behavior:
public <T extends InternalModel> Page<T> paginate(@NonNull Integer maxResults, @Nullable String pageToken) {
    try (Firestore db = getFirestoreService()) {
        CollectionReference collectionReference = db.collection(getCollection(type));
        Query query = collectionReference.limit(maxResults);
        // The page token is the id of the last document of the previous page
        if (!Strings.isNullOrEmpty(pageToken)) {
            DocumentSnapshot lastDocument = collectionReference.document(pageToken).get().get();
            query = query.startAfter(lastDocument);
        }
        List<InternalModel> items = (List<InternalModel>) query.get().get().toObjects(type);
        String nextPageToken = "";
        if (!CollectionUtils.isEmpty(items) && maxResults.equals(items.size())) {
            nextPageToken = items.get(items.size() - 1).getId();
        }
        return Page.create(items, nextPageToken);
    }
}
I'm open to any better solution, since this might not be the most optimal way.
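For the multi-field ordering case, the document ID alone may not be enough as a token: Query.startAfter(Object...) needs the values of every ordered field. One option is to pack those values into an opaque token yourself. A minimal JDK-only sketch (the class name and delimiter are my own, not a Firestore API; in the real flow the values would come from the last DocumentSnapshot of the page and be fed back via matching orderBy clauses):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PageTokenCodec {

    // Encodes the values of the ordering fields of the last returned document
    // into one opaque, URL-safe token. Assumes values do not contain '|'.
    static String encode(String... lastFieldValues) {
        String joined = String.join("|", lastFieldValues);
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(joined.getBytes(StandardCharsets.UTF_8));
    }

    // Decodes the token back into the field values; they would be handed to
    // Query.startAfter(Object...) in the same order as the orderBy clauses.
    static String[] decode(String token) {
        byte[] raw = Base64.getUrlDecoder().decode(token);
        return new String(raw, StandardCharsets.UTF_8).split("\\|", -1);
    }

    public static void main(String[] args) {
        String token = encode("2023-01-15", "doc-42");
        String[] values = decode(token);
        System.out.println(values[0] + " / " + values[1]); // prints 2023-01-15 / doc-42
    }
}
```

This keeps the token opaque to the UI, which only ever echoes it back unchanged.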
I have a Java method in my code in which I am using the following line to fetch data from Azure Cosmos DB:
Iterable<FeedResponse<Object>> feedResponseIterator =
    cosmosContainer
        .queryItems(sqlQuery, queryOptions, Object.class)
        .iterableByPage(continuationToken, pageSize);
The whole method looks like this:
public List<LinkedHashMap> getDocumentsFromCollection(
        String containerName, String partitionKey, String sqlQuery) {
    List<LinkedHashMap> documents = new ArrayList<>();
    String continuationToken = null;
    do {
        CosmosQueryRequestOptions queryOptions = new CosmosQueryRequestOptions();
        CosmosContainer cosmosContainer = createContainerIfNotExists(containerName, partitionKey);
        Iterable<FeedResponse<Object>> feedResponseIterator =
            cosmosContainer
                .queryItems(sqlQuery, queryOptions, Object.class)
                .iterableByPage(continuationToken, pageSize);
        int pageCount = 0;
        for (FeedResponse<Object> page : feedResponseIterator) {
            long startTime = System.currentTimeMillis();
            // Access all the documents in this result page
            page.getResults().forEach(document -> documents.add((LinkedHashMap) document));
            // Along with the page results, get a continuation token
            // which enables the client to "pick up where it left off"
            // in accessing query response pages.
            continuationToken = page.getContinuationToken();
            pageCount++;
            log.info(
                "Cosmos Collection {} page {} with {} records collected in {} ms",
                containerName,
                pageCount,
                page.getResults().size(),
                (System.currentTimeMillis() - startTime));
        }
    } while (continuationToken != null);
    log.info(containerName + " Collection has been collected successfully");
    return documents;
}
My question: can we use the same line of code to execute a delete query like DELETE * FROM c? If yes, what would the Iterable<FeedResponse<Object>> feedResponseIterator object then contain?
SQL statements can only be used for reads. Delete operations must be done using deleteItem().
Here are Java SDK samples (sync and async) for all document operations in Cosmos DB.
Java v4 SDK Document Samples
I am trying to use Change Streams in MongoDB 4.4 and Camel 3.12.0. According to the Camel documentation, the Exchange body will contain the full document of any change. I am building my route as below:
from("mongodb:mongoClient?consumerType=changeStreams&database=test&collection=accounts")
.process(new MongoIdProcessor())
.to("solrCloud://minikube:8983/solr?zkHost=minikube:2181,minikube:2182,minikube:2183&collection=accounts&autoCommit=true")
What I noticed is that if I issue an update (updateOne, updateMany) command on the "accounts" collection, there isn't any data in the Exchange object during processing.
Message message = exchange.getMessage(); // Null
Message in = exchange.getIn();
ObjectId objectId = in.getHeader("_id", ObjectId.class); // Present
Digging a little deeper, it seems that in MongoDbChangeStreamsThread.java, the collection being watched does not have the options set correctly.
@Override
protected MongoCursor initializeCursor() {
    ChangeStreamIterable<Document> iterable = bsonFilter != null
        ? dbCol.watch(bsonFilter)
        : dbCol.watch();
It should be this instead
@Override
protected MongoCursor initializeCursor() {
    ChangeStreamIterable<Document> iterable = bsonFilter != null
        ? dbCol.watch(bsonFilter).fullDocument(FullDocument.UPDATE_LOOKUP)
        : dbCol.watch().fullDocument(FullDocument.UPDATE_LOOKUP);
Do I really have to make the change in this class, or is there some configuration somewhere I can set? I'm concerned about having to maintain a modified copy of this Camel class.
I have the below Stream that is returned from the DB:
Stream<Transaction> transctions = transRepository.findByTransctionId();

public class Transaction {
    String transctionId;
    String accountId;
    String transName;
    String accountName;
}
Now my requirement is as below:
The Transaction entity has 4 fields, so JPA fetches all 4 of them from the DB. But the client who needs this data has sent a list of the column names he wants from the Transaction model:
List<String> columnNames = Arrays.asList("transctionId", "accountName");
I have to post this data to Kafka, taking each Transaction from the stream and posting it to Kafka. But the client wants only the 2 fields "transctionId" and "accountName" to go as part of the Transaction to Kafka, instead of all 4 fields.
The data should go to Kafka as JSON in the below format:
{
"transctionId":"1234",
"accountName" :"test-account"
}
Basically, only the fields they have asked for should go to Kafka, instead of converting the whole POJO to JSON and sending it.
Is there any way to achieve that?
If you need to invoke a method, but you only have its name, the only way I know is via reflection. I would do it like this:
Stream<Transaction> transactions = transRepository.findByTransctionId();
List<Transaction> outTransactions = new ArrayList<>();
// The columns requested by the client
List<String> columnNames = Arrays.asList("transctionId", "accountName");
transactions.forEach(tr -> {
    Transaction outTransaction = new Transaction();
    columnNames.forEach(col -> {
        try {
            var getMethod = tr.getClass().getMethod("get" + StringUtils.capitalize(col));
            Object value = getMethod.invoke(tr);
            // The setter takes one String parameter, so its type must be passed to getMethod
            var setMethod = outTransaction.getClass()
                .getMethod("set" + StringUtils.capitalize(col), String.class);
            setMethod.invoke(outTransaction, value);
        } catch (NoSuchMethodException | InvocationTargetException | IllegalAccessException e) {
            e.printStackTrace();
        }
    });
    outTransactions.add(outTransaction);
});
There is a lot of traversing, but with your requirements this is the most generic solution I can come up with. Another shortcoming is the creation of new Transaction objects: if there are many transactions, memory usage can grow. Maybe this solution can be optimised to take advantage of streaming the transactions from the DB.
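Since the end goal is JSON containing only the requested fields, you can also skip the intermediate Transaction copy and build the JSON directly from the getters. A JDK-only sketch (the Transaction stand-in and the hand-rolled JSON string are illustrative; production code should use Jackson's ObjectMapper, which handles escaping):

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FieldFilter {

    // Minimal stand-in for the Transaction entity (illustrative only).
    public static class Transaction {
        private final String transctionId;
        private final String accountName;
        public Transaction(String transctionId, String accountName) {
            this.transctionId = transctionId;
            this.accountName = accountName;
        }
        public String getTransctionId() { return transctionId; }
        public String getAccountName() { return accountName; }
    }

    // Builds a JSON object containing only the requested properties, read via
    // the bean getters. No escaping is done here; real code should use Jackson.
    static String toFilteredJson(Object bean, List<String> columnNames) throws Exception {
        List<String> pairs = new ArrayList<>();
        for (String col : columnNames) {
            String getter = "get" + Character.toUpperCase(col.charAt(0)) + col.substring(1);
            Method m = bean.getClass().getMethod(getter);
            pairs.add("\"" + col + "\":\"" + m.invoke(bean) + "\"");
        }
        return "{" + String.join(",", pairs) + "}";
    }

    public static void main(String[] args) throws Exception {
        Transaction tr = new Transaction("1234", "test-account");
        System.out.println(toFilteredJson(tr, Arrays.asList("transctionId", "accountName")));
        // prints {"transctionId":"1234","accountName":"test-account"}
    }
}
```

The resulting string can be posted to Kafka as-is, with no second Transaction instance per record.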
Another way to do it is to have different endpoints for each known set of properties that the client sends you. For example:
@GetMapping("/transaction_id_and_name")
List<Transaction> getTransactionsIdAndName() {
    // ... obtain Transactions, return a new list of Transactions with transactionId and name ...
}

@GetMapping("/transaction_id_and_status")
List<Transaction> getTransactionsNameAndStatus() { ... }
I'm looking for a solution to implement paging for our Spring Boot based REST-Service with a Cassandra (version 3.11.3) database. We are using Spring Boot 2.0.5.RELEASE with spring-boot-starter-data-cassandra as a dependency.
As Spring Data's CassandraRepository<T, ID> interface does not extend the PagingAndSortingRepository we don't get the full paging functionality like we have with JPA.
I read the Spring Data Cassandra documentation and found a possible way to implement paging with Cassandra and Spring Data, as the CassandraRepository interface offers the method Slice<T> findAll(Pageable pageable);. I am aware that Cassandra is not able to jump to a specific page ad hoc and always needs to start from page zero and iterate through all pages, as documented in CassandraPageRequest:
Cassandra-specific {@link PageRequest} implementation providing access to {@link PagingState}. This class allows creation of the first page request; because Cassandra paging is based on the progress of fetched pages, it allows forward-only navigation. Accessing a particular page requires fetching of all pages until the desired page is reached.
In my use case we have > 1,000,000 database entries and want to display them paged in our single-page application.
My current approach looks like the following:
@RestController
@RequestMapping("/users")
public class UsersResource {

    @Autowired
    UserRepository userRepository;

    @GetMapping
    public ResponseEntity<List<User>> getAllTests(
            @RequestParam(defaultValue = "0", name = "page") @Positive int requiredPage,
            @RequestParam(defaultValue = "500", name = "size") int size) {
        Slice<User> resultList = userRepository.findAll(CassandraPageRequest.first(size));
        int currentPage = 0;
        while (resultList.hasNext() && currentPage <= requiredPage) {
            System.out.println("Current Page Number: " + currentPage);
            resultList = userRepository.findAll(resultList.nextPageable());
            currentPage++;
        }
        return ResponseEntity.ok(resultList.getContent());
    }
}
BUT with this approach I have to fetch all preceding database entries into memory and iterate until I reach the requested page. Is there a different approach to get to the correct page, or do I have to stick with my current solution?
My Cassandra table definition looks like the following:
CREATE TABLE user (
    id int,
    firstname varchar,
    lastname varchar,
    code varchar,
    PRIMARY KEY(id)
);
What I have done is to create a page object that has the content and the pagingState hash.
For the initial page, we create a simple page request:
Pageable pageRequest = CassandraPageRequest.of(0, 5);
Once the find is performed, we get the slice:
Slice<Group> slice = groupRepository.findAll(pageRequest);
With the slice you can get the paging state:
page.setPageHash(getPageHash((CassandraPageRequest) slice.getPageable()));
where
private String getPageHash(CassandraPageRequest pageRequest) {
    return Base64.toBase64String(pageRequest.getPagingState().toBytes());
}
Finally, return a Page object with the List content and the paging state as the pageHash.
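The pageHash round-trip itself needs nothing beyond Base64: the bytes from PagingState.toBytes() become the client-facing token, and on the next request the decoded bytes can be turned back into a paging state on the driver side (which driver method applies, e.g. PagingState.fromString, depends on your driver version). A JDK-only sketch with stand-in bytes (the answer above uses Bouncy Castle's Base64.toBase64String; java.util.Base64 works just as well):

```java
import java.util.Arrays;
import java.util.Base64;

public class PagingStateToken {

    // PagingState.toBytes() output becomes a URL-safe client token.
    static String toToken(byte[] pagingStateBytes) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(pagingStateBytes);
    }

    // On the next request, decode the token back to the raw paging state bytes.
    static byte[] fromToken(String token) {
        return Base64.getUrlDecoder().decode(token);
    }

    public static void main(String[] args) {
        byte[] state = {0x00, 0x10, 0x7f};  // stand-in for real PagingState bytes
        String token = toToken(state);
        System.out.println(Arrays.equals(state, fromToken(token)));  // prints true
    }
}
```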
See the code below; it may help.
@GetMapping("/loadData")
public Mono<DataTable> loadData(@RequestParam boolean reset, @RequestParam(required = false) String tag, WebSession session) {
    final String sessionId = session.getId();
    IMap<String, String> map = Context.get(HazelcastInstance.class).getMap("companygrouping-pageable-map");
    int pageSize = Context.get(EnvProperties.class).getPageSize();
    Pageable pageRequest;
    if (reset)
        map.remove(sessionId);
    String serializedPagingState = map.compute(sessionId, (k, v) -> (v == null) ? null : map.get(session.getId()));
    pageRequest = StringUtils.isBlank(serializedPagingState) ? CassandraPageRequest.of(0, pageSize)
        : CassandraPageRequest.of(PageRequest.of(0, pageSize), PagingState.fromString(serializedPagingState)).next();
    Mono<Slice<TagMerge>> sliceMono = StringUtils.isNotBlank(tag)
        ? Context.get(TagMergeRepository.class).findByKeyStatusAndKeyTag(Status.NEW, tag, pageRequest)
        : Context.get(TagMergeRepository.class).findByKeyStatus(Status.NEW, pageRequest);
    Flux<TagMerge> flux = sliceMono.map(t -> convert(t, map, sessionId)).flatMapMany(Flux::fromIterable);
    Mono<DataTable> dataTableMono = createTableFrom(flux).doOnError(e -> log.error("{}", e));
    if (reset) {
        Mono<Long> countMono = Mono.empty();
        if (StringUtils.isNotBlank(tag))
            countMono = Context.get(TagMergeRepository.class).countByKeyStatusAndKeyTag(Status.NEW, tag);
        else
            countMono = Context.get(TagMergeRepository.class).countByKeyStatus(Status.NEW);
        dataTableMono = dataTableMono.zipWith(countMono, (t, k) -> {
            t.setTotalRows(k);
            return t;
        });
    }
    return dataTableMono;
}

private List<TagMerge> convert(Slice<TagMerge> slice, IMap<String, String> map, String id) {
    PagingState pagingState = ((CassandraPageRequest) slice.getPageable()).getPagingState();
    if (pagingState != null)
        map.put(id, pagingState.toString());
    return slice.getContent();
}
Cassandra supports forward pagination: you can fetch the first n rows, then the rows between n+1 and 2n, and so on until your data ends, but you cannot fetch the rows between n+1 and 2n directly without fetching what precedes them.
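A practical consequence for a UI that also needs a "previous" button: the server (or client) has to remember the paging state that started each page it has already seen, so it can replay it. A minimal JDK-only sketch of such a per-session cache (class and method names are my own, not a Spring Data API):

```java
import java.util.HashMap;
import java.util.Map;

public class PageTokenCache {

    // Maps a zero-based page number to the paging state that starts that page.
    private final Map<Integer, String> tokens = new HashMap<>();

    void remember(int page, String pagingState) {
        tokens.put(page, pagingState);
    }

    // Returns the stored state, or null meaning "start from the first page".
    String tokenFor(int page) {
        return tokens.get(page);
    }

    public static void main(String[] args) {
        PageTokenCache cache = new PageTokenCache();
        cache.remember(1, "state-after-page-0");
        System.out.println(cache.tokenFor(1)); // prints state-after-page-0
        System.out.println(cache.tokenFor(5)); // prints null
    }
}
```

Going "back" then means replaying a remembered state instead of paging backwards, which Cassandra cannot do.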
I tried to implement pagination in Google App Engine (Java), but I was only able to achieve forward pagination; reverse pagination does not work.
I tried storing the previous cursor value and passing it through the HTTP request, as below:
JSP file:
<a href='/myServlet?previousCursor=${previousCursor}'>Previous page</a>
<a href='/myServlet?nextCursor=${nextCursor}'>Next page</a>
Servlet file:
String previousCursor = req.getParameter("previousCursor");
String nextCursor = req.getParameter("nextCursor");
String startCursor = null;
if (previousCursor != null) {
    startCursor = previousCursor;
}
if (nextCursor != null) {
    startCursor = nextCursor;
}
int pageSize = 3;
FetchOptions fetchOptions = FetchOptions.Builder.withLimit(pageSize);
if (startCursor != null) {
    fetchOptions.startCursor(Cursor.fromWebSafeString(startCursor));
}
Query q = new Query("MyQuery");
PreparedQuery pq = datastore.prepare(q);
QueryResultList<Entity> results = pq.asQueryResultList(fetchOptions);
for (Entity entity : results) {
    // Get the properties from the entity
}
String endCursor = results.getCursor().toWebSafeString();
req.setAttribute("previousCursor", startCursor);
req.setAttribute("nextCursor", endCursor);
With this I am able to retain the previous cursor value, but unfortunately that previous cursor seems to be invalid.
I also tried the reverse() method, but it is of no use; it behaves the same as forward.
So, is there any way to implement proper pagination (forward and backward) in Google App Engine (Java)?
I found a similar question posted in 2010. There, too, the answer was to use a Cursor, but as I showed above it is not working:
Pagination in Google App Engine with Java
If you are familiar with JPA, you can give it a try; I have tested it, and pagination works in GAE. I think they support JPA 1.0 as of now.
What I tried: I created an Employee entity, created a DAO layer, and persisted a few Employee entities.
To have a paginated fetch, I did this:
Query query = em.createQuery("select e from Employee e");
query.setFirstResult(0);
query.setMaxResults(2);
List<Employee> resultList = query.getResultList();
(In this example we get the first page, which has 2 entities. The argument to setFirstResult is the start index and the argument to setMaxResults is your page size.)
You can easily change the arguments to query.setFirstResult and query.setMaxResults and build pagination logic around them.
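The page-to-offset arithmetic behind that logic is simple; a small sketch of the mapping you would wrap around setFirstResult/setMaxResults (class and method names are my own):

```java
public class OffsetPaging {

    // Start index for a zero-based page number, as passed to setFirstResult;
    // setMaxResults always receives pageSize.
    static int firstResult(int page, int pageSize) {
        return page * pageSize;
    }

    public static void main(String[] args) {
        System.out.println(firstResult(0, 2)); // prints 0 (first page: rows 0..1)
        System.out.println(firstResult(3, 2)); // prints 6 (fourth page: rows 6..7)
    }
}
```

Note that, unlike cursors, this offset style lets you jump to any page directly, which is why it sidesteps the forward-only problem above.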
Hope this helps,
Regards,