I have a server which streams data for a given request. Below is the method that does that:
@Override
public void getChangeFeed(ChangeFeedRequest request, StreamObserver<ChangeFeedResponse> responseObserver) {
    long queryDate = request.getFromDate();
    long offset = request.getPageNo();
    ChangeFeedResponse changeFeedResponse = processData(responseObserver, queryDate, offset);
    while (true) {
        if (changeFeedResponse != null && !changeFeedResponse.getFinalize()) {
            responseObserver.onNext(changeFeedResponse);
            changeFeedResponse = processData(responseObserver, changeFeedResponse.getToDate(), changeFeedResponse.getPageNo());
        } else {
            break;
        }
    }
    responseObserver.onNext(changeFeedResponse);
    responseObserver.onCompleted();
}
When the client gets disconnected, the server still keeps on processing. This could become an issue when multiple clients are fetching data. I need to know how to tell the server to stop processing.
There are two fairly equivalent ways. One is to use the Context, which is cancelled when the RPC is completed/cancelled:
while (!Context.current().isCancelled()) { // THIS LINE CHANGED
    if (changeFeedResponse != null && !changeFeedResponse.getFinalize()) {
        responseObserver.onNext(changeFeedResponse);
        changeFeedResponse = processData(responseObserver, changeFeedResponse.getToDate(), changeFeedResponse.getPageNo());
    } else {
        break;
    }
}
The other would be to use the ServerCallStreamObserver:
// THE NEXT TWO LINES CHANGED
ServerCallStreamObserver<ChangeFeedResponse> scso = (ServerCallStreamObserver<ChangeFeedResponse>) responseObserver;
while (!scso.isCancelled()) {
    if (changeFeedResponse != null && !changeFeedResponse.getFinalize()) {
        responseObserver.onNext(changeFeedResponse);
        changeFeedResponse = processData(responseObserver, changeFeedResponse.getToDate(), changeFeedResponse.getPageNo());
    } else {
        break;
    }
}
Both approaches can also provide notification when a cancellation occurs, but polling is easiest in your case.
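If you'd rather be notified than poll, ServerCallStreamObserver also has setOnCancelHandler (and the Context has addListener). Here is a rough sketch of the handler-based version, reusing your processData calls; the cancelled flag is just illustrative, a cancel can still race with the last onNext/onCompleted, and you'd need imports for io.grpc.stub.ServerCallStreamObserver and java.util.concurrent.atomic.AtomicBoolean:

@Override
public void getChangeFeed(ChangeFeedRequest request, StreamObserver<ChangeFeedResponse> responseObserver) {
    ServerCallStreamObserver<ChangeFeedResponse> scso =
            (ServerCallStreamObserver<ChangeFeedResponse>) responseObserver;

    // Flag flipped by gRPC's callback when the client cancels or disconnects.
    // Must be registered before any onNext call.
    AtomicBoolean cancelled = new AtomicBoolean(false);
    scso.setOnCancelHandler(() -> cancelled.set(true));

    long queryDate = request.getFromDate();
    long offset = request.getPageNo();
    ChangeFeedResponse changeFeedResponse = processData(responseObserver, queryDate, offset);

    while (!cancelled.get()
            && changeFeedResponse != null
            && !changeFeedResponse.getFinalize()) {
        responseObserver.onNext(changeFeedResponse);
        changeFeedResponse = processData(responseObserver,
                changeFeedResponse.getToDate(), changeFeedResponse.getPageNo());
    }

    if (!cancelled.get()) {
        if (changeFeedResponse != null) {
            responseObserver.onNext(changeFeedResponse);
        }
        responseObserver.onCompleted();
    }
}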
I am learning about DDS using RTI (still very new to this topic). I am creating a Publisher that writes to a Subscriber, and the Subscriber outputs the message. One thing I would like to simulate is dropped packages. As an example, let's say the Publisher writes to the Subscriber 4 times a second but the Subscriber can only read once a second (the most recent message).
As of now, I am able to create a Publisher & Subscriber w/o any packages being dropped.
I read through some documentation and found HistoryQosPolicyKind.KEEP_LAST_HISTORY_QOS.
Correct me if I am wrong, but I was under the impression that this would essentially keep the most recent message received from the Publisher. Instead, the Subscriber is receiving all the messages but delayed by 1 second.
I don't want to cache the messages but drop the messages. How can I simulate the "dropped" package?
BTW: I don't want to change anything in the .xml file. I want to do it programmatically.
Here are some snippets of my code.
//Publisher.java
//writer = (MsgDataWriter)publisher.create_datawriter(topic, Publisher.DATAWRITER_QOS_DEFAULT,null /* listener */, StatusKind.STATUS_MASK_NONE);
writer = (MsgDataWriter)publisher.create_datawriter(topic, write, null,
StatusKind.STATUS_MASK_ALL);
if (writer == null) {
System.err.println("create_datawriter error\n");
return;
}
// --- Write --- //
String[] messages= {"1", "2", "test", "3"};
/* Create data sample for writing */
Msg instance = new Msg();
InstanceHandle_t instance_handle = InstanceHandle_t.HANDLE_NIL;
/* For a data type that has a key, if the same instance is going to be
written multiple times, initialize the key here
and register the keyed instance prior to writing */
//instance_handle = writer.register_instance(instance);
final long sendPeriodMillis = (long) (.25 * 1000); // 4 per second
for (int count = 0;
(sampleCount == 0) || (count < sampleCount);
++count) {
if (count == 11)
{
return;
}
System.out.println("Writing Msg, count " + count);
/* Modify the instance to be written here */
instance.message = messages[count];
instance.sender = "some user";
/* Write data */
writer.write(instance, instance_handle);
try {
Thread.sleep(sendPeriodMillis);
} catch (InterruptedException ix) {
System.err.println("INTERRUPTED");
break;
}
}
//writer.unregister_instance(instance, instance_handle);
} finally {
// --- Shutdown --- //
if(participant != null) {
participant.delete_contained_entities();
DomainParticipantFactory.TheParticipantFactory.
delete_participant(participant);
}
//Subscriber
// Customize time & Qos for receiving info
DataReaderQos readerQ = new DataReaderQos();
subscriber.get_default_datareader_qos(readerQ);
Duration_t minTime = new Duration_t(1,0);
readerQ.time_based_filter.minimum_separation.sec = minTime.sec;
readerQ.time_based_filter.minimum_separation.nanosec = minTime.nanosec;
readerQ.history.kind = HistoryQosPolicyKind.KEEP_LAST_HISTORY_QOS;
readerQ.reliability.kind = ReliabilityQosPolicyKind.BEST_EFFORT_RELIABILITY_QOS;
reader = (MsgDataReader)subscriber.create_datareader(topic, readerQ, listener, StatusKind.STATUS_MASK_ALL);
if (reader == null) {
System.err.println("create_datareader error\n");
return;
}
// --- Wait for data --- //
final long receivePeriodSec = 1;
for (int count = 0;
(sampleCount == 0) || (count < sampleCount);
++count) {
//System.out.println("Msg subscriber sleeping for "+ receivePeriodSec + " sec...");
try {
Thread.sleep(receivePeriodSec * 1000); // in millisec
} catch (InterruptedException ix) {
System.err.println("INTERRUPTED");
break;
}
}
} finally {
// --- Shutdown --- //
On the subscriber side, it is useful to distinguish three different types of interaction between your application and the DDS Domain: polling, Listeners and WaitSets.
Polling means that the application decides when it reads available data. This is often a time-driven mechanism.
Listeners are basically callback functions that get invoked as soon as data becomes available, by an infrastructure thread, to read that data.
WaitSets implement a mechanism similar to the socket select mechanism: an application thread waits (blocks) for data to become available and after unblocking reads the new data.
Your application uses a Listener mechanism. You did not post the implementation of the callback function, but from the overall picture, it is likely that the listener implementation immediately tries to read the data at the moment that the callback is invoked. There is no time for the data to be "pushed out" or "dropped" as you called it. This reading happens in a different thread than your main thread, which is sleeping most of the time. You can find a Knowledge Base article about it here.
The only thing that is not clear is the impact of the time_based_filter QoS setting. You did not mention that in your question, but it does show up in the code. I would expect this to filter out some of your samples. That is a different mechanism than the pushing out of the history though. The behavior for the time based filter may be implemented differently for different DDS implementations. Which product do you use?
We're having some trouble trying to implement a Pool of SftpConnections for our application.
We're currently using SSHJ (Schmizz) as the transport library, and facing an issue we simply cannot simulate in our development environment (but the error keeps showing up randomly in production, sometimes after three days, sometimes after just 10 minutes).
The problem is, when trying to send a file via SFTP, the thread gets locked in the init method from schmizz' TransportImpl class:
@Override
public void init(String remoteHost, int remotePort, InputStream in, OutputStream out)
        throws TransportException {
    connInfo = new ConnInfo(remoteHost, remotePort, in, out);

    try {
        if (config.isWaitForServerIdentBeforeSendingClientIdent()) {
            receiveServerIdent();
            sendClientIdent();
        } else {
            sendClientIdent();
            receiveServerIdent();
        }
        log.info("Server identity string: {}", serverID);
    } catch (IOException e) {
        throw new TransportException(e);
    }

    reader.start();
}
isWaitForServerIdentBeforeSendingClientIdent is FALSE for us, so the client (us) sends its identification first, as appears in the logs:
"Client identity String: blabla"
Then it's receiveServerIdent's turn:
private void receiveServerIdent() throws IOException {
    final Buffer.PlainBuffer buf = new Buffer.PlainBuffer();
    while ((serverID = readIdentification(buf)).isEmpty()) {
        int b = connInfo.in.read();
        if (b == -1)
            throw new TransportException("Server closed connection during identification exchange");
        buf.putByte((byte) b);
    }
}
The thread never gets control back, as the server never replies with its identity. It seems the code is stuck in this while loop. No timeouts or SSH exceptions are thrown; my client just keeps waiting forever, and the thread stays blocked.
This is the readIdentification method's implementation:
private String readIdentification(Buffer.PlainBuffer buffer) throws IOException {
    String ident = new IdentificationStringParser(buffer, loggerFactory).parseIdentificationString();

    if (ident.isEmpty()) {
        return ident;
    }

    if (!ident.startsWith("SSH-2.0-") && !ident.startsWith("SSH-1.99-"))
        throw new TransportException(DisconnectReason.PROTOCOL_VERSION_NOT_SUPPORTED,
                "Server does not support SSHv2, identified as: " + ident);

    return ident;
}
It seems ConnInfo's InputStream never gets data to read, as if the server closed the connection (even though, as said earlier, no exception is thrown).
I've tried to simulate this error by saturating the negotiation, closing sockets while connecting, using conntrack to kill established connections while the handshake is being made, but with no luck at all, so any help would be HIGHLY appreciated.
: )
I bet following code creates a problem:
String ident = new IdentificationStringParser(buffer, loggerFactory).parseIdentificationString();
if (ident.isEmpty()) {
    return ident;
}
If IdentificationStringParser.parseIdentificationString() returns an empty string, it will be returned to the caller. The caller will keep looping in while ((serverID = readIdentification(buf)).isEmpty()) since the string is always empty. The only way to break the loop would be if the call to int b = connInfo.in.read(); returned -1... but if the server keeps sending (or resending) data, this condition is never met.
If this is the case I would add some kind of artificial way to detect this like:
private String readIdentification(Buffer.PlainBuffer buffer, AtomicInteger numberOfAttempts)
        throws IOException {
    String ident = new IdentificationStringParser(buffer, loggerFactory).parseIdentificationString();
    numberOfAttempts.incrementAndGet();
    if (ident.isEmpty()) {
        if (numberOfAttempts.intValue() >= 1000) { // arbitrary cap on attempts
            throw new TransportException("Too many attempts to read the server ident");
        }
        return ident;
    }
    if (!ident.startsWith("SSH-2.0-") && !ident.startsWith("SSH-1.99-"))
        throw new TransportException(DisconnectReason.PROTOCOL_VERSION_NOT_SUPPORTED,
                "Server does not support SSHv2, identified as: " + ident);
    return ident;
}
This way you would at least confirm that this is the case and could dig further into why .parseIdentificationString() returns an empty string.
Faced a similar issue where we would see:
INFO [net.schmizz.sshj.transport.TransportImpl : pool-6-thread-2] - Client identity string: blablabla
INFO [net.schmizz.sshj.transport.TransportImpl : pool-6-thread-2] - Server identity string: blablabla
But on some occasions, there was no server response.
Our service would typically wake up and transfer several files simultaneously, one file per connection / thread.
The issue was in the sshd server config: we increased MaxStartups from its default value of 10 (we noticed the problems started shortly after batch sizes increased to above 10).
Default in /etc/ssh/sshd_config:
MaxStartups 10:30:100
Changed to:
MaxStartups 30:30:100
MaxStartups
Specifies the maximum number of concurrent unauthenticated connections to the SSH daemon. Additional connections will be dropped until authentication succeeds or the LoginGraceTime expires for a connection. The default is 10:30:100. Alternatively, random early drop can be enabled by specifying the three colon separated values start:rate:full (e.g. "10:30:60"). sshd will refuse connection attempts with a probability of rate/100 (30%) if there are currently start (10) unauthenticated connections. The probability increases linearly and all connection attempts are refused if the number of unauthenticated connections reaches full (60).
If you cannot control the server, you might have to find a way to limit your concurrent connection attempts in your client code instead.
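For example, a crude client-side guard is to cap the number of concurrent handshakes with a semaphore, and to set connect/read timeouts so a silent server should make the ident read fail instead of hanging forever. This is only a sketch; the limit of 8, the 30-second wait and the PromiscuousVerifier are placeholders to adapt to your pool code:

import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

import net.schmizz.sshj.SSHClient;
import net.schmizz.sshj.transport.verification.PromiscuousVerifier;

public class ThrottledSshConnector {

    // Keep concurrent *unauthenticated* handshakes below sshd's MaxStartups "start" value.
    private static final int MAX_CONCURRENT_HANDSHAKES = 8; // assumption: tune to your server
    private final Semaphore handshakePermits = new Semaphore(MAX_CONCURRENT_HANDSHAKES, true);

    public SSHClient connect(String host, int port, String user, String password) throws Exception {
        if (!handshakePermits.tryAcquire(30, TimeUnit.SECONDS)) {
            throw new IllegalStateException("Timed out waiting for a handshake slot");
        }
        try {
            SSHClient client = new SSHClient();
            client.addHostKeyVerifier(new PromiscuousVerifier()); // placeholder, verify keys properly
            client.setConnectTimeout(15_000); // fail fast on TCP connect
            client.setTimeout(15_000);        // socket read timeout, also covers the ident exchange
            client.connect(host, port);
            client.authPassword(user, password);
            return client;
        } finally {
            // Once authenticated (or failed), the connection no longer counts against MaxStartups.
            handshakePermits.release();
        }
    }
}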
I have a GCM backend Java server and I'm trying to send a notification message to all users. Is my approach right, to just split them into batches of 1,000 before issuing the send request? Or is there a better approach?
public void sendMessage(@Named("message") String message) throws IOException {
    int count = ofy().load().type(RegistrationRecord.class).count();

    if (count <= 1000) {
        List<RegistrationRecord> records = ofy().load().type(RegistrationRecord.class).limit(count).list();
        sendMsg(records, message);
    } else {
        int msgsDone = 0;
        List<RegistrationRecord> records = ofy().load().type(RegistrationRecord.class).list();
        do {
            List<RegistrationRecord> regIdsParts = regIdTrim(records, msgsDone);
            msgsDone += 1000;
            sendMsg(regIdsParts, message);
        } while (msgsDone < count);
    }
}
The regIdTrim method
private List<RegistrationRecord> regIdTrim(List<RegistrationRecord> wholeList, final int start) {
    List<RegistrationRecord> parts = wholeList.subList(start, (start + 1000) > wholeList.size() ? wholeList.size() : start + 1000);
    return parts;
}
The sendMsg method
private void sendMsg(List<RegistrationRecord> records, @Named("message") String message) throws IOException {
    if (message == null || message.trim().length() == 0) {
        log.warning("Not sending message because it is empty");
        return;
    }
    // crop longer messages before building the payload
    if (message.length() > 1000) {
        message = message.substring(0, 1000) + "[...]";
    }
    Sender sender = new Sender(API_KEY);
    Message msg = new Message.Builder().addData("message", message).build();
    for (RegistrationRecord record : records) {
        Result result = sender.send(msg, record.getRegId(), 5);
        if (result.getMessageId() != null) {
            log.info("Message sent to " + record.getRegId());
            String canonicalRegId = result.getCanonicalRegistrationId();
            if (canonicalRegId != null) {
                // if the regId changed, we have to update the datastore
                log.info("Registration Id changed for " + record.getRegId() + " updating to " + canonicalRegId);
                record.setRegId(canonicalRegId);
                ofy().save().entity(record).now();
            }
        } else {
            String error = result.getErrorCodeName();
            if (error.equals(Constants.ERROR_NOT_REGISTERED)) {
                log.warning("Registration Id " + record.getRegId() + " no longer registered with GCM, removing from datastore");
                // if the device is no longer registered with GCM, remove it from the datastore
                ofy().delete().entity(record).now();
            } else {
                log.warning("Error when sending message : " + error);
            }
        }
    }
}
Quoting from the GCM documentation:
GCM supports up to 1,000 recipients for a single message. This capability makes it much easier to send out important messages to your entire user base. For instance, let's say you had a message that needed to be sent to 1,000,000 of your users, and your server could handle sending out about 500 messages per second. If you send each message with only a single recipient, it would take 1,000,000/500 = 2,000 seconds, or around half an hour. However, attaching 1,000 recipients to each message, the total time required to send a message out to 1,000,000 recipients becomes (1,000,000/1,000) / 500 = 2 seconds. This is not only useful, but important for timely data, such as natural disaster alerts or sports scores, where a 30 minute interval might render the information useless.
Taking advantage of this functionality is easy. If you're using the GCM helper library for Java, simply provide a List collection of registration IDs to the send or sendNoRetry method, instead of a single registration ID.
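In other words, instead of looping over registration IDs one at a time, you can pass up to 1,000 IDs per send. Here is a sketch of what a multicast variant of your sendMsg could look like (it assumes the same RegistrationRecord, ofy() and Constants as your code, plus imports for com.google.android.gcm.server.MulticastResult and java.util.ArrayList; results come back in the same order as the IDs you pass):

private void sendMsgMulticast(List<RegistrationRecord> records, String message) throws IOException {
    Sender sender = new Sender(API_KEY);
    Message msg = new Message.Builder().addData("message", message).build();

    // Collect the registration IDs for this batch (max 1,000 per GCM request).
    List<String> regIds = new ArrayList<String>();
    for (RegistrationRecord record : records) {
        regIds.add(record.getRegId());
    }

    // One HTTP request for the whole batch, retried up to 5 times.
    MulticastResult multicastResult = sender.send(msg, regIds, 5);

    // Per-recipient results, in the same order as regIds.
    List<Result> results = multicastResult.getResults();
    for (int i = 0; i < results.size(); i++) {
        Result result = results.get(i);
        RegistrationRecord record = records.get(i);
        if (result.getMessageId() != null) {
            String canonicalRegId = result.getCanonicalRegistrationId();
            if (canonicalRegId != null) {
                record.setRegId(canonicalRegId);
                ofy().save().entity(record).now();
            }
        } else if (Constants.ERROR_NOT_REGISTERED.equals(result.getErrorCodeName())) {
            ofy().delete().entity(record).now();
        }
    }
}

You would still keep the regIdTrim() batching, since each multicast call is capped at 1,000 registration IDs.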
We cannot send more than 1,000 push notifications at a time. I searched a lot without finding anything better, then did it with the same approach: split the whole list into sub-lists of 1,000 items and send the push notifications per batch.
I have a Java agent that processes a huge number of documents, so it could run overnight. The problem is that I need the agent to retry if the network suddenly gets disconnected briefly. The retries should have a maximum count.
int numberOfRetries = 0;

try {
    while (nextdoc != null) {
        // process documents
        numberOfRetries = 0;
    }
} catch (NotesException e) {
    numberOfRetries++;
    if (numberOfRetries > 4) {
        // max number of retries reached; did not finish successfully
    } else {
        // go back and reprocess the current document
    }
}
Also, of course, I do not want to retry the whole process. Basically I need to continue from the document it was processing and move on to the next iteration of the loop.
You should do a retry loop around each piece of code that gets a document. Since the Notes classes generally require a getFirst and getNext paradigm, that means you need two separate retry loops. E.g.,
numberOfRetries = 0;
maxRetries = 4;

// get first document, with retries
needToRetry = true;
while (needToRetry)
{
    try
    {
        nextDoc = myView.getFirstDocument();
        needToRetry = false;
    }
    catch (NotesException e)
    {
        numberOfRetries++;
        if (numberOfRetries < maxRetries) {
            // you might want to sleep here to wait for the network to recover
            // you could use numberOfRetries as a factor to sleep longer on
            // each failure
        } else {
            // write "Max retries have been exceeded getting first document" to log
            nextDoc = null;      // we won't go into the processing loop
            needToRetry = false;
        }
    }
}

// process all documents
while (nextDoc != null)
{
    // process nextDoc
    // insert your code here

    // now get next document, with retries
    needToRetry = true;
    while (needToRetry)
    {
        try
        {
            nextDoc = myView.getNextDocument(nextDoc);
            needToRetry = false;
        }
        catch (NotesException e)
        {
            numberOfRetries++;
            if (numberOfRetries < maxRetries) {
                // you might want to sleep here to wait for the network to recover
                // you could use numberOfRetries as a factor to sleep longer on
                // each failure
            } else {
                // write "Max retries have been exceeded getting next document" to log
                nextDoc = null;  // we'll exit the processing loop without finishing all docs
                needToRetry = false;
            }
        }
    }
}
Note that I'm treating maxRetries as the max total retries across all documents in the data set, not the max for each document.
Also note that it's probably cleaner to break this up a little. E.g.
numberOfRetries = 0;
maxRetries = 4;
nextDoc = getFirstDocWithRetries(view); // this contains the while loop and try-catch
while (nextDoc != null)
{
    processOneDoc(nextDoc);
    nextDoc = getNextDocWithRetries(view, nextDoc); // and so does this
}
I would not recommend what you are doing at all.
The NotesException can fire for a number of reasons, and there is no guarantee you will be returning to a safe state.
Also, the fact that the agent needs to run for such a long time means you need to change the server's "Maximum execution timeout" to allow it to run correctly. Setting that to a very high value makes the server more prone to performance/deadlock issues.
A better solution would be to batch the workload and have the agent process one batch per run. Update your progress as you go, so that when the agent comes back it knows to work on the next batch.
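As a rough sketch only (the view name, flag field and batch size below are all made up): process at most N documents from a "pending" view per scheduled run, flag each one as done so it falls out of the view, and let the next run pick up the remainder:

import lotus.domino.AgentBase;
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.Session;
import lotus.domino.View;

public class BatchAgent extends AgentBase {
    private static final int BATCH_SIZE = 500; // assumption: tune so one batch stays under the timeout

    public void NotesMain() {
        try {
            Session session = getSession();
            Database db = session.getAgentContext().getCurrentDatabase();
            View pending = db.getView("PendingDocuments"); // hypothetical view of unprocessed docs
            Document doc = pending.getFirstDocument();
            int processed = 0;
            while (doc != null && processed < BATCH_SIZE) {
                // fetch the next entry before modifying the current one,
                // so the view navigation is not disturbed by the update
                Document next = pending.getNextDocument(doc);
                processOneDoc(doc);                     // your existing per-document logic
                doc.replaceItemValue("Processed", "1"); // flag it so it drops out of the view
                doc.save(true, false);
                doc.recycle();
                doc = next;
                processed++;
            }
            // Anything left simply waits for the next scheduled run of the agent.
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void processOneDoc(Document doc) throws Exception {
        // insert your processing here
    }
}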
I am using Google App Engine's Channel API. I am having an issue re-establishing a connection lost due to the user's network. When you lose the internet connection, the channel calls onError but it will not call onClose. As far as the JavaScript object is concerned, the channel socket is open.
How do you handle a connection lost due to an internet issue? I am thinking of 1) triggering a close of the channel and re-opening it when an RPC unrelated to the channel somewhere in the application succeeds for the first time (which indicates the internet is alive again), or 2) using a timer that runs all the time and pings the server for network status (though avoiding that kind of resource consumption was the point of introducing long polling in the first place). Any other ideas would be great.
Observation:
When the internet connection is dead, onError is called a couple of times at increasing intervals (10 sec, 20 sec, 40 sec). Once the internet connection is back, the channel does not resume the connection. It stops working without any indication that it is dead.
Thanks.
If you look at the JavaScript console, presumably you will see "400 Unknown SID Error".
If so, here is my workaround for this. This is a service module for AngularJS, but please look at the onerror callback. Please try this workaround and let me know if it works or not.
Added: I neglected to answer your main question, but in my opinion it is hard to determine whether you're connected to the internet without actually pinging something on the internet. So you may want to use retry logic similar to the following code, with some tweaks. In the example below I just retry 3 times, but you can do more with some backoff. However, I think the best way to handle this is: when the app has used up the maximum retry count, indicate to the user that the app lost the connection, ideally showing a button or a link to reconnect to the channel service.
And, you can also track the connection on the server side, see:
https://developers.google.com/appengine/docs/java/channel/#Java_Tracking_client_connections_and_disconnections
app.factory('channelService', ['$http', '$rootScope', '$timeout',
  function($http, $rootScope, $timeout) {

    var service = {};
    var isConnectionAlive = false;
    var callbacks = new Array();
    var retryCount = 0;
    var MAX_RETRY_COUNT = 3;

    service.registerCallback = function(pattern, callback) {
      callbacks.push({pattern: pattern, func: callback});
    };

    service.messageCallback = function(message) {
      for (var i = 0; i < callbacks.length; i++) {
        var callback = callbacks[i];
        if (message.data.match(callback.pattern)) {
          $rootScope.$apply(function() {
            callback.func(message);
          });
        }
      }
    };

    service.channelTokenCallback = function(channelToken) {
      var channel = new goog.appengine.Channel(channelToken);
      service.socket = channel.open();
      isConnectionAlive = false;
      service.socket.onmessage = service.messageCallback;
      service.socket.onerror = function(error) {
        console.log('Detected an error on the channel.');
        console.log('Channel Error: ' + error.description + '.');
        console.log('Http Error Code: ' + error.code);
        isConnectionAlive = false;
        if (error.description == 'Invalid+token.' || error.description == 'Token+timed+out.') {
          console.log('It should be recovered with onclose handler.');
        } else {
          // In this case, we need to manually close the socket.
          // See also: https://code.google.com/p/googleappengine/issues/detail?id=4940
          console.log('Presumably it is "Unknown SID Error". Try closing the socket manually.');
          service.socket.close();
        }
      };
      service.socket.onclose = function() {
        isConnectionAlive = false;
        console.log('Reconnecting to a new channel');
        openNewChannel();
      };
      console.log('A channel was opened successfully. Will check the ping in 20 secs.');
      $timeout(checkConnection, 20000, false);
    };

    function openNewChannel(isRetry) {
      console.log('Retrieving a clientId.');
      if (isRetry) {
        retryCount++;
      } else {
        retryCount = 0;
      }
      $http.get('/rest/channel')
        .success(service.channelTokenCallback)
        .error(function(data, status) {
          console.log('Can not retrieve a clientId');
          if (status != 403 && retryCount <= MAX_RETRY_COUNT) {
            console.log('Retrying to obtain a client id');
            openNewChannel(true);
          }
        });
    }

    function pingCallback() {
      console.log('Got a ping from the server.');
      isConnectionAlive = true;
    }

    function checkConnection() {
      if (isConnectionAlive) {
        console.log('Connection is alive.');
        return;
      }
      if (service.socket == undefined) {
        console.log('will open a new connection in 1 sec');
        $timeout(openNewChannel, 1000, false);
        return;
      }
      // Ping didn't arrive
      // Assuming onclose handler automatically open a new channel.
      console.log('Not receiving a ping, closing the connection');
      service.socket.close();
    }

    service.registerCallback(/P/, pingCallback);
    openNewChannel();

    return service;
  }]);
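For the server-side tracking mentioned above, the Channel API can POST to /_ah/channel/connected/ and /_ah/channel/disconnected/ once channel_presence is enabled in appengine-web.xml. A minimal servlet sketch (Java, com.google.appengine.api.channel); what you actually do with the clientId is up to your app:

import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.google.appengine.api.channel.ChannelPresence;
import com.google.appengine.api.channel.ChannelService;
import com.google.appengine.api.channel.ChannelServiceFactory;

// Map this servlet to /_ah/channel/connected/ and /_ah/channel/disconnected/ in web.xml,
// and enable the channel_presence inbound service in appengine-web.xml.
public class ChannelPresenceServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        ChannelService channelService = ChannelServiceFactory.getChannelService();
        ChannelPresence presence = channelService.parsePresence(req);
        if (presence.isConnected()) {
            // e.g. mark the clientId as online in the datastore or memcache
            log("Channel connected: " + presence.clientId());
        } else {
            // e.g. clean up, or record that this client needs to reconnect
            log("Channel disconnected: " + presence.clientId());
        }
    }
}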