I am working with Firestore right now and have a little bit of a problem with pagination.
Basically, I have a collection (assume 10 items) where each item has some data and a timestamp.
Now, I am fetching the first 3 items like this:
Firestore.firestore()
.collection("collectionPath")
.order(by: "timestamp", descending: true)
.limit(to: 3)
.addSnapshotListener(snapshotListener())
Inside my snapshot listener, I save the last document from the snapshot, in order to use that as a starting point for my next page.
So, at some time I will request the next page of items like this:
Firestore.firestore()
.collection("collectionPath")
.order(by: "timestamp", descending: true)
.start(afterDocument: lastDocument)
.limit(to: 3)
.addSnapshotListener(snapshotListener2()) // Note that this is a new snapshot listener, I don't know how I could reuse the first one
Now I have the items from index 0 to index 5 (in total 6) in my frontend. Neat!
If the document at index 4 now updates its timestamp to the newest timestamp of the whole collection, things start to go wrong.
Remember that the timestamp determines its position on account of the order clause!
What I expected to happen was that, after the change is applied, I would still show 6 items (still ordered by their timestamps).
What actually happened was that, after the change is applied, only 5 items remain, because the item that got pushed out of the first snapshot is not added to the second snapshot automatically.
Am I missing something about Pagination with Firestore?
EDIT: As requested, here is some more code:
This is my function that returns a snapshot listener. The two queries I use to request the first and the second page are already posted above.
private func snapshotListener() -> FIRQuerySnapshotBlock {
let index = self.index
return { querySnapshot, error in
guard let snap = querySnapshot, error == nil else {
log.error(error)
return
}
// Save the last doc, so we can later use pagination to retrieve further chats
if snap.count == self.limit {
self.lastDoc = snap.documents.last
} else {
self.lastDoc = nil
}
let offset = index * self.limit
snap.documentChanges.forEach() { diff in
switch diff.type {
case .added:
log.debug("added chat at index: \(diff.newIndex), offset: \(offset)")
self.tVHandler.dataManager.insert(item: Chat(dictionary: diff.document.data() as NSDictionary), at: IndexPath(row: Int(diff.newIndex) + offset, section: 0), in: nil)
case .removed:
log.debug("deleted chat at index: \(diff.oldIndex), offset: \(offset)")
self.tVHandler.dataManager.remove(itemAt: IndexPath(row: Int(diff.oldIndex) + offset, section: 0), in: nil)
case .modified:
if diff.oldIndex == diff.newIndex {
log.debug("updated chat at index: \(diff.oldIndex), offset: \(offset)")
self.tVHandler.dataManager.update(item: Chat(dictionary: diff.document.data() as NSDictionary), at: IndexPath(row: Int(diff.oldIndex) + offset, section: 0), in: nil)
} else {
log.debug("moved chat at index: \(diff.oldIndex), offset: \(offset) to index: \(diff.newIndex), offset: \(offset)")
self.tVHandler.dataManager.move(item: Chat(dictionary: diff.document.data() as NSDictionary), from: IndexPath(row: Int(diff.oldIndex) + offset, section: 0), to: IndexPath(row: Int(diff.newIndex) + offset, section: 0), in: nil)
}
}
}
self.tableView?.reloadData()
}
}
So again, I am asking whether I can have one snapshot listener that listens for changes in more than one page I requested from Firestore.
Well, I contacted the folks over at the Firebase Google Group for help, and they were able to tell me that my use case is not yet supported.
Thanks to Kato Richardson for attending to my problem!
For anyone interested in the details, see this thread
I came across the same use case today and I have successfully implemented a working solution in an Objective-C client. Below is the algorithm, in case anyone wants to apply it in their program; I would really appreciate it if the google-cloud-firestore team could put this solution on their page.
Use Case: A feature to allow paginating a long list of recent chats, along with the option to attach real-time listeners that keep the chat with the most recent message on top.
Solution: This can be made possible by using pagination logic like we do for other long lists and attaching a real-time listener with the limit set to 1:
Step 1: On page load fetch the chats using pagination query as below:
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view.
[self fetchChats];
}
-(void)fetchChats {
__weak typeof(self) weakSelf = self;
FIRQuery *paginateChatsQuery = [[[self.db collectionWithPath:MAGConstCollectionNameChats]queryOrderedByField:MAGConstFieldNameTimestamp descending:YES]queryLimitedTo:MAGConstPageLimit];
if(self.arrChats.count > 0){
FIRDocumentSnapshot *lastChatDocument = self.arrChats.lastObject;
paginateChatsQuery = [paginateChatsQuery queryStartingAfterDocument:lastChatDocument];
}
[paginateChatsQuery getDocumentsWithCompletion:^(FIRQuerySnapshot * _Nullable snapshot, NSError * _Nullable error) {
if (snapshot == nil) {
NSLog(#"Error fetching documents: %#", error);
return;
}
///2. Observe chat updates if not attached
if(weakSelf.chatObserverState == ChatObserverStateNotAttached) {
weakSelf.chatObserverState = ChatObserverStateAttaching;
[weakSelf observeChats];
}
if(snapshot.documents.count < MAGConstPageLimit) {
weakSelf.noMoreData = YES;
}
else {
weakSelf.noMoreData = NO;
}
[weakSelf.arrChats addObjectsFromArray:snapshot.documents];
[weakSelf.tblVuChatsList reloadData];
}];
}
Step 2: In the success callback of the "fetchChats" method, attach the observer for real-time updates only once, with the limit set to 1.
-(void)observeChats {
__weak typeof(self) weakSelf = self;
self.chatsListener = [[[[self.db collectionWithPath:MAGConstCollectionNameChats]queryOrderedByField:MAGConstFieldNameTimestamp descending:YES]queryLimitedTo:1]addSnapshotListener:^(FIRQuerySnapshot * _Nullable snapshot, NSError * _Nullable error) {
if (snapshot == nil) {
NSLog(#"Error fetching documents: %#", error);
return;
}
if(weakSelf.chatObserverState == ChatObserverStateAttaching) {
weakSelf.chatObserverState = ChatObserverStateAttached;
}
for (FIRDocumentChange *diff in snapshot.documentChanges) {
if (diff.type == FIRDocumentChangeTypeAdded) {
///New chat added
NSLog(#"Added chat: %#", diff.document.data);
FIRDocumentSnapshot *chatDoc = diff.document;
[weakSelf handleChatUpdates:chatDoc];
}
else if (diff.type == FIRDocumentChangeTypeModified) {
NSLog(#"Modified chat: %#", diff.document.data);
FIRDocumentSnapshot *chatDoc = diff.document;
[weakSelf handleChatUpdates:chatDoc];
}
else if (diff.type == FIRDocumentChangeTypeRemoved) {
NSLog(#"Removed chat: %#", diff.document.data);
}
}
}];
}
Step 3. In the listener callback, check for document changes and handle only the FIRDocumentChangeTypeAdded and FIRDocumentChangeTypeModified events; ignore FIRDocumentChangeTypeRemoved. We do this by calling the "handleChatUpdates" method for both handled event types: it first tries to find the matching chat document in the local list, removes it if it exists, and then inserts the document received from the listener callback at the beginning of the list.
-(void)handleChatUpdates:(FIRDocumentSnapshot *)chatDoc {
NSInteger chatIndex = [self getIndexOfMatchingChatDoc:chatDoc];
if(chatIndex != NSNotFound) {
///Remove this object
[self.arrChats removeObjectAtIndex:chatIndex];
}
///Insert this chat object at the beginning of the array
[self.arrChats insertObject:chatDoc atIndex:0];
///Refresh the tableview
[self.tblVuChatsList reloadData];
}
-(NSInteger)getIndexOfMatchingChatDoc:(FIRDocumentSnapshot *)chatDoc {
NSInteger chatIndex = 0;
for (FIRDocumentSnapshot *chatDocument in self.arrChats) {
if([chatDocument.documentID isEqualToString:chatDoc.documentID]) {
return chatIndex;
}
chatIndex++;
}
return NSNotFound;
}
Step 4. Reload the tableview to see the changes.
My solution is to create one maintainer query listener that observes the items removed from the first query; it gets updated every time a new message comes in.
To make pagination work with a snapshot listener, we first create a reference-point document from the collection. After that, we listen to the collection based on that reference-point document.
Let's say you have a collection called messages, and each document in it has a timestamp field called createdAt.
//get messages
async getMessages(){
//first we will fetch the very last/latest document.
//to hold listeners
this.listenerArray = [];
const very_last_document = await this.afs.collection('messages')
.ref
.limit(1)
.orderBy('createdAt','desc')
.get({ source: 'server' });
//if very_last_document.empty is true, there are no messages yet,
//so we can run the query without a limit;
//otherwise we have to apply the limit
if (!very_last_document.empty) {
const start = very_last_document.docs[very_last_document.docs.length - 1].data().createdAt;
//listener for new messages
//all new messages will be registered on this listener
const listner_1 = this.afs.collection('messages')
.ref
.orderBy('createdAt','desc')
.endAt(start) // this makes sure the query fetches up to the 'start' point (including the 'start' document)
.onSnapshot(messages => {
for (const message of messages.docChanges()) {
if (message.type === "added")
//do the job...
if (message.type === "modified")
//do the job...
if (message.type === "removed")
//do the job ....
}
},
err => {
//on error
})
//older messages will be registered on this listener
const listner_2 = this.afs.collection('messages')
.ref
.orderBy('createdAt','desc')
.limit(20)
.startAfter(start) // this makes sure the query fetches after the 'start' point
.onSnapshot(messages => {
for (const message of messages.docChanges()) {
if (message.type === "added")
//do the job...
if (message.type === "modified")
//do the job...
if (message.type === "removed")
//do the job ....
}
},
err => {
//on error
})
this.listenerArray.push(listner_1, listner_2);
} else {
//no document found!
//very_last_document.empty = true
const listner_1 = this.afs.collection('messages')
.ref
.orderBy('createdAt','desc')
.onSnapshot(messages => {
for (const message of messages.docChanges()) {
if (message.type === "added")
//do the job...
if (message.type === "modified")
//do the job...
if (message.type === "removed")
//do the job ....
}
},
err => {
//on error
})
this.listenerArray.push(listner_1);
}
}
//to load more messages
LoadMoreMessage(){
//Assuming the messages array holds the messages we have fetched so far.
//getting the last element from the array messages.
//that will be the starting point of our next batch
const endAt = this.messages[this.messages.length-1].createdAt
const listner_2 = this.getService
.collection('messages')
.ref
.limit(20)
.orderBy('createdAt', "asc") // should be in 'asc' order
.endBefore(endAt) // gets the 20 documents (the limit we applied) before the 'endAt' point
.onSnapshot(messages => {
if (messages.empty && this.messages.length)
this.messages[this.messages.length - 1].hasMore = false;
for (const message of messages.docChanges()) {
if (message.type === "added")
//do the job...
if (message.type === "modified")
//do the job
if (message.type === "removed")
//do the job
}
},
err => {
//on error
})
this.listenerArray.push(listner_2)
}
Related
I am new to Drools. I am expecting sensor data from a tracking device (like a tag device). I am using a Drools entry point to track the sensor data, and I need to raise alerts for certain events based on this data.
The DRL file is as below:
import com.sample.AlertRuleModel;
declare AlertRuleModel
@role( event )
@timestamp( timestamp )
end
rule "No signals are coming from any entry-point for more than 10s"
when
$f : AlertRuleModel() from entry-point "AlertRuleStream"
not(AlertRuleModel(this != $f, this after[0s, 10s] $f) from entry-point "AlertRuleStream")
then
$f.setRuleId(1);
<Do alert here>
end
rule "Rule on Tag1 has not been in zone1 for more than 1 minutes"
when
$f : AlertRuleModel( tagId == 1, zoneId == 1 ) from entry-point "AlertRuleStream"
not(AlertRuleModel(this != $f, tagId == 1, zoneId != 1, this after[0s, 1m] $f) from entry-point "AlertRuleStream")
then
$f.setRuleId(2);
<Do alert here>
end
Java code
kSession = RuleExecutionService.getKieSession(packetProcessorData.getAlertRuleDrlPath());
ruleStream = kSession.getEntryPoint("AlertRuleStream");
kSession.addEventListener(new DefaultAgendaEventListener() {
public void afterMatchFired(AfterMatchFiredEvent event) {
super.afterMatchFired(event);
onPostExecution(event, RuleTypeEnum.ALERT_RULE.getName());
}
});
new Thread() {
@Override
public void run() {
kSession.fireUntilHalt();
}
}.start();
Stream data insertion part
private BlockingQueue<AlertRuleModel> alertFactQueue;
.
.
AlertRuleModel alertRuleModel = null;
while (true) {
alertRuleModel = alertFactQueue.poll(1, TimeUnit.SECONDS);
if (alertRuleModel != null) {
//LOGGER.debug("Inserting alertRuleModel into \"AlertRuleStream\"");
ruleStream.insert(alertRuleModel);
continue;
}
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
LOGGER.error("Exception while sleeping thread during alertFactQueue polling..", e);
}
}
But when I run the application:
The first rule, "No signals are coming from any entry-point for more than 10s", is not firing at all. I don't know why; please tell me if I am doing anything wrong or if there is a syntax error in the first rule.
In the case of the second rule, "Tag1 has not been in zone1 for more than 1 minutes", it always fires immediately when I insert a fact with tagId == 1 and zoneId == 1. I tried different time gaps, like after[0s, 10m], but it still fires immediately after inserting a fact with the above values.
Please tell me where I am making a mistake.
I am learning about DDS using RTI (still very new to this topic). I am creating a Publisher that writes to a Subscriber, and the Subscriber outputs the message. One thing I would like to simulate is dropped packets. As an example, let's say the Publisher writes to the Subscriber 4 times a second, but the Subscriber can only read once a second (the most recent message).
As of now, I am able to create a Publisher and Subscriber without any packets being dropped.
I read through some documentation and found HistoryQosPolicyKind.KEEP_LAST_HISTORY_QOS.
Correct me if I am wrong, but I was under the impression that this would essentially keep the most recent message received from the Publisher. Instead, the Subscriber is receiving all the messages but delayed by 1 second.
I don't want to cache the messages; I want to drop them. How can I simulate the "dropped" packets?
BTW: I don't want to change anything in the .xml file. I want to do it programmatically.
Here are some snippets of my code.
//Publisher.java
//writer = (MsgDataWriter)publisher.create_datawriter(topic, Publisher.DATAWRITER_QOS_DEFAULT,null /* listener */, StatusKind.STATUS_MASK_NONE);
writer = (MsgDataWriter)publisher.create_datawriter(topic, write, null,
StatusKind.STATUS_MASK_ALL);
if (writer == null) {
System.err.println("create_datawriter error\n");
return;
}
// --- Write --- //
String[] messages= {"1", "2", "test", "3"};
/* Create data sample for writing */
Msg instance = new Msg();
InstanceHandle_t instance_handle = InstanceHandle_t.HANDLE_NIL;
/* For a data type that has a key, if the same instance is going to be
written multiple times, initialize the key here
and register the keyed instance prior to writing */
//instance_handle = writer.register_instance(instance);
final long sendPeriodMillis = (long) (.25 * 1000); // 4 per second
for (int count = 0;
(sampleCount == 0) || (count < sampleCount);
++count) {
if (count == 11)
{
return;
}
System.out.println("Writing Msg, count " + count);
/* Modify the instance to be written here */
instance.message = messages[count % messages.length]; // cycle through the sample messages
instance.sender = "some user";
/* Write data */
writer.write(instance, instance_handle);
try {
Thread.sleep(sendPeriodMillis);
} catch (InterruptedException ix) {
System.err.println("INTERRUPTED");
break;
}
}
//writer.unregister_instance(instance, instance_handle);
} finally {
// --- Shutdown --- //
if(participant != null) {
participant.delete_contained_entities();
DomainParticipantFactory.TheParticipantFactory.
delete_participant(participant);
}
//Subscriber
// Customize time & Qos for receiving info
DataReaderQos readerQ = new DataReaderQos();
subscriber.get_default_datareader_qos(readerQ);
Duration_t minTime = new Duration_t(1,0);
readerQ.time_based_filter.minimum_separation.sec = minTime.sec;
readerQ.time_based_filter.minimum_separation.nanosec = minTime.nanosec;
readerQ.history.kind = HistoryQosPolicyKind.KEEP_LAST_HISTORY_QOS;
readerQ.reliability.kind = ReliabilityQosPolicyKind.BEST_EFFORT_RELIABILITY_QOS;
reader = (MsgDataReader)subscriber.create_datareader(topic, readerQ, listener, StatusKind.STATUS_MASK_ALL);
if (reader == null) {
System.err.println("create_datareader error\n");
return;
}
// --- Wait for data --- //
final long receivePeriodSec = 1;
for (int count = 0;
(sampleCount == 0) || (count < sampleCount);
++count) {
//System.out.println("Msg subscriber sleeping for "+ receivePeriodSec + " sec...");
try {
Thread.sleep(receivePeriodSec * 1000); // in millisec
} catch (InterruptedException ix) {
System.err.println("INTERRUPTED");
break;
}
}
} finally {
// --- Shutdown --- //
On the subscriber side, it is useful to distinguish three different types of interaction between your application and the DDS Domain: polling, Listeners and WaitSets
Polling means that the application decides when it reads available data. This is often a time-driven mechanism.
Listeners are basically callback functions that get invoked as soon as data becomes available, by an infrastructure thread, to read that data.
WaitSets implement a mechanism similar to the socket select mechanism: an application thread waits (blocks) for data to become available and after unblocking reads the new data.
Your application uses a Listener mechanism. You did not post the implementation of the callback function, but from the overall picture, it is likely that the listener implementation immediately tries to read the data at the moment that the callback is invoked. There is no time for the data to be "pushed out" or "dropped" as you called it. This reading happens in a different thread than your main thread, which is sleeping most of the time. You can find a Knowledge Base article about it here.
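If the goal is to see only the most recent sample once per second, one option is to skip the listener entirely (create the reader with a null listener and STATUS_MASK_NONE) and poll from the wait loop you already have. Below is a minimal sketch under that assumption, using the RTI Connext Java API class names and the generated Msg/MsgSeq/MsgDataReader types from your snippets; with KEEP_LAST history and the default depth of 1, only the newest sample is retained, so everything the writer published in between is effectively dropped:
MsgSeq dataSeq = new MsgSeq();
SampleInfoSeq infoSeq = new SampleInfoSeq();
for (int count = 0; (sampleCount == 0) || (count < sampleCount); ++count) {
    try {
        Thread.sleep(receivePeriodSec * 1000); // poll once per second
    } catch (InterruptedException ix) {
        break;
    }
    try {
        // take() empties the reader cache; with depth 1 it returns at most the latest sample
        reader.take(dataSeq, infoSeq,
                ResourceLimitsQosPolicy.LENGTH_UNLIMITED,
                SampleStateKind.ANY_SAMPLE_STATE,
                ViewStateKind.ANY_VIEW_STATE,
                InstanceStateKind.ANY_INSTANCE_STATE);
        for (int i = 0; i < dataSeq.size(); ++i) {
            SampleInfo info = (SampleInfo) infoSeq.get(i);
            if (info.valid_data) {
                System.out.println("Latest Msg: " + ((Msg) dataSeq.get(i)).message);
            }
        }
    } catch (RETCODE_NO_DATA noData) {
        // nothing arrived during the last second
    } finally {
        reader.return_loan(dataSeq, infoSeq);
    }
}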
The only thing that is not clear is the impact of the time_based_filter QoS setting. You did not mention that in your question, but it does show up in the code. I would expect this to filter out some of your samples. That is a different mechanism than the pushing out of the history though. The behavior for the time based filter may be implemented differently for different DDS implementations. Which product do you use?
I want to make an async call to a Cassandra DB with the executeAsync call.
In the manual I found this code, but I couldn't understand how to collect all rows into a list.
It's a really basic query, like SELECT * FROM table, and I want to store all the results.
https://docs.datastax.com/en/developer/java-driver/4.4/manual/core/async/
CompletionStage<CqlSession> sessionStage = CqlSession.builder().buildAsync();
// Chain one async operation after another:
CompletionStage<AsyncResultSet> responseStage =
sessionStage.thenCompose(
session -> session.executeAsync("SELECT release_version FROM system.local"));
// Apply a synchronous computation:
CompletionStage<String> resultStage =
responseStage.thenApply(resultSet -> resultSet.one().getString("release_version"));
// Perform an action once a stage is complete:
resultStage.whenComplete(
(version, error) -> {
if (error != null) {
System.out.printf("Failed to retrieve the version: %s%n", error.getMessage());
} else {
System.out.printf("Server version: %s%n", version);
}
sessionStage.thenAccept(CqlSession::closeAsync);
});
You need to refer to the section about asynchronous paging: you need to provide a callback that collects the data into a list supplied as an external object. The documentation has the following example:
CompletionStage<AsyncResultSet> futureRs =
session.executeAsync("SELECT * FROM myTable WHERE id = 1");
futureRs.whenComplete(this::processRows);
void processRows(AsyncResultSet rs, Throwable error) {
if (error != null) {
// The query failed, process the error
} else {
for (Row row : rs.currentPage()) {
// Process the row...
}
if (rs.hasMorePages()) {
rs.fetchNextPage().whenComplete(this::processRows);
}
}
}
In this case processRows can store data in a list that is part of the current object, something like this:
class Abc {
List<Row> rows = new ArrayList<>();
// call to executeAsync
void processRows(AsyncResultSet rs, Throwable error) {
....
for (Row row : rs.currentPage()) {
rows.add(row);
}
....
}
}
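If you prefer to get everything back as a single future instead of mutating a field, here is a minimal self-contained sketch for driver 4.x; the class and method names are mine, only the CqlSession/AsyncResultSet/Row API comes from the driver:
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.AsyncResultSet;
import com.datastax.oss.driver.api.core.cql.Row;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class AsyncRowCollector {

    /** Pages through the whole result set and completes the future with all collected rows. */
    public static CompletionStage<List<Row>> collectAll(CqlSession session, String query) {
        CompletableFuture<List<Row>> done = new CompletableFuture<>();
        List<Row> rows = new ArrayList<>();
        session.executeAsync(query).whenComplete((rs, error) -> onPage(rs, error, rows, done));
        return done;
    }

    private static void onPage(AsyncResultSet rs, Throwable error,
                               List<Row> rows, CompletableFuture<List<Row>> done) {
        if (error != null) {
            done.completeExceptionally(error);
            return;
        }
        for (Row row : rs.currentPage()) {
            rows.add(row); // accumulate the current page
        }
        if (rs.hasMorePages()) {
            // fetch the next page and repeat
            rs.fetchNextPage().whenComplete((next, err) -> onPage(next, err, rows, done));
        } else {
            done.complete(rows);
        }
    }
}
Usage would be something like collectAll(session, "SELECT * FROM myTable").thenAccept(rows -> ...), with the same caveat about unbounded result sets as below.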
But you'll need to be very careful with SELECT * FROM table, as it may return a lot of results, plus it may time out if you have too much data; in that case it's better to perform a token range scan (I have an example for driver 3.x, but not for 4.x yet).
Here is a sample for 4.x (you'll also find a sample of reactive code, available since 4.4, BTW):
https://github.com/datastax/cassandra-reactive-demo/blob/master/2_async/src/main/java/com/datastax/demo/async/repository/AsyncStockRepository.java
I have a server which streams data for a given request; below is the method that does that:
@Override
public void getChangeFeed(ChangeFeedRequest request, StreamObserver<ChangeFeedResponse> responseObserver) {
long queryDate = request.getFromDate();
long offset = request.getPageNo();
ChangeFeedResponse changeFeedResponse = processData(responseObserver, queryDate, offset);
while(true){
if(changeFeedResponse!=null && !changeFeedResponse.getFinalize()){
responseObserver.onNext(changeFeedResponse);
changeFeedResponse = processData(responseObserver, changeFeedResponse.getToDate(), changeFeedResponse.getPageNo());
}else{
break;
}
}
responseObserver.onNext(changeFeedResponse);
responseObserver.onCompleted();
}
When the client gets disconnected, the server still keeps on processing; this might be an issue when multiple clients are fetching data. I need to know how to tell the server to stop processing.
There are two fairly equivalent ways. One is to use the Context, which is cancelled when the RPC is completed or cancelled:
while(!Context.current().isCancelled()){ // THIS LINE CHANGED
if(changeFeedResponse!=null && !changeFeedResponse.getFinalize()){
responseObserver.onNext(changeFeedResponse);
changeFeedResponse = processData(responseObserver, changeFeedResponse.getToDate(), changeFeedResponse.getPageNo());
}else{
break;
}
}
The other would be to use the ServerCallStreamObserver:
// THE NEXT TWO LINES CHANGED
ServerCallStreamObserver<ChangeFeedResponse> scso = (ServerCallStreamObserver<ChangeFeedResponse>) responseObserver;
while(!scso.isCancelled()){
if(changeFeedResponse!=null && !changeFeedResponse.getFinalize()){
responseObserver.onNext(changeFeedResponse);
changeFeedResponse = processData(responseObserver, changeFeedResponse.getToDate(), changeFeedResponse.getPageNo());
}else{
break;
}
}
Both approaches can also provide notification when a cancellation occurs, but polling is easiest in your case.
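For the notification style, here is a minimal sketch with ServerCallStreamObserver.setOnCancelHandler; register it at the top of getChangeFeed, before the streaming loop (the println is just a placeholder for whatever cleanup you need):
ServerCallStreamObserver<ChangeFeedResponse> scso =
        (ServerCallStreamObserver<ChangeFeedResponse>) responseObserver;
scso.setOnCancelHandler(() -> {
    // Runs when the client cancels or disconnects; stop any background work tied to this call here.
    System.out.println("Change feed cancelled by the client");
});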
I have a GCM-backend Java server and I'm trying to send a notification message to all users. Is my approach right, to just split them into batches of 1,000 before issuing the send requests? Or is there a better approach?
public void sendMessage(@Named("message") String message) throws IOException {
int count = ofy().load().type(RegistrationRecord.class).count();
if(count<=1000) {
List<RegistrationRecord> records = ofy().load().type(RegistrationRecord.class).limit(count).list();
sendMsg(records,message);
}else
{
int msgsDone=0;
List<RegistrationRecord> records = ofy().load().type(RegistrationRecord.class).list();
do {
List<RegistrationRecord> regIdsParts = regIdTrim(records, msgsDone);
msgsDone+=1000;
sendMsg(regIdsParts,message);
}while(msgsDone<count);
}
}
The regIdTrim method
private List<RegistrationRecord> regIdTrim(List<RegistrationRecord> wholeList, final int start) {
List<RegistrationRecord> parts = wholeList.subList(start,(start+1000)> wholeList.size()? wholeList.size() : start+1000);
return parts;
}
The sendMsg method
private void sendMsg(List<RegistrationRecord> records, @Named("message") String message) throws IOException {
if (message == null || message.trim().length() == 0) {
log.warning("Not sending message because it is empty");
return;
}
Sender sender = new Sender(API_KEY);
// crop longer messages before building the payload
if (message.length() > 1000) {
message = message.substring(0, 1000) + "[...]";
}
Message msg = new Message.Builder().addData("message", message).build();
for (RegistrationRecord record : records) {
Result result = sender.send(msg, record.getRegId(), 5);
if (result.getMessageId() != null) {
log.info("Message sent to " + record.getRegId());
String canonicalRegId = result.getCanonicalRegistrationId();
if (canonicalRegId != null) {
// if the regId changed, we have to update the datastore
log.info("Registration Id changed for " + record.getRegId() + " updating to " + canonicalRegId);
record.setRegId(canonicalRegId);
ofy().save().entity(record).now();
}
} else {
String error = result.getErrorCodeName();
if (error.equals(Constants.ERROR_NOT_REGISTERED)) {
log.warning("Registration Id " + record.getRegId() + " no longer registered with GCM, removing from datastore");
// if the device is no longer registered with Gcm, remove it from the datastore
ofy().delete().entity(record).now();
} else {
log.warning("Error when sending message : " + error);
}
}
}
}
Quoting from Google Docs:
GCM supports up to 1,000 recipients for a single message. This capability makes it much easier to send out important messages to your entire user base. For instance, let's say you had a message that needed to be sent to 1,000,000 of your users, and your server could handle sending out about 500 messages per second. If you send each message with only a single recipient, it would take 1,000,000/500 = 2,000 seconds, or around half an hour. However, attaching 1,000 recipients to each message, the total time required to send a message out to 1,000,000 recipients becomes (1,000,000/1,000) / 500 = 2 seconds. This is not only useful, but important for timely data, such as natural disaster alerts or sports scores, where a 30 minute interval might render the information useless.
Taking advantage of this functionality is easy. If you're using the GCM helper library for Java, simply provide a List collection of registration IDs to the send or sendNoRetry method, instead of a single registration ID.
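As a sketch of what that looks like with the gcm-server helper library already used in the question (Sender, Message and Result come from that library, MulticastResult is its multicast counterpart of Result, and API_KEY, records, message and log are the same objects as above):
// Send one multicast request per batch of up to 1,000 registration IDs.
Sender sender = new Sender(API_KEY);
Message msg = new Message.Builder().addData("message", message).build();
List<String> regIds = new ArrayList<>();
for (RegistrationRecord record : records) {
    regIds.add(record.getRegId());
}
for (int start = 0; start < regIds.size(); start += 1000) {
    List<String> batch = regIds.subList(start, Math.min(start + 1000, regIds.size()));
    MulticastResult multicast = sender.send(msg, batch, 5);
    // getResults() comes back in the same order as the IDs in the batch
    List<Result> results = multicast.getResults();
    for (int i = 0; i < results.size(); i++) {
        if (results.get(i).getMessageId() == null) {
            log.warning("Error sending to " + batch.get(i) + ": " + results.get(i).getErrorCodeName());
        }
    }
}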
We cannot send more than 1,000 push notifications at a time. I searched a lot but found no other solution, so I went with the same approach: split the whole list into sublists of 1,000 items and send the push notifications batch by batch.