I've registered a trial account to test Cumulocity and its MQTT API.
I want to send an operation to a device (currently emulated by a Java service) and receive the operation result.
As a guide I'm using the following links:
https://www.cumulocity.com/guides/users-guide/device-management/#-a-name-operation-monitoring-a-working-with-operations
https://cumulocity.com/guides/device-sdk/mqtt#hello-mqtt-java
The following code is used to send the operation response to Cumulocity.
if (payload.startsWith("510")) {
    // 510 = restart operation request received from Cumulocity
    System.out.println("Simulating device restart...");
    // 501: set the c8y_Restart operation to EXECUTING
    client.publish("s/us", "501,c8y_Restart".getBytes(), 2, false);
    System.out.println("...restarting...");
    Thread.sleep(TimeUnit.SECONDS.toMillis(1));
    // 503: set the c8y_Restart operation to SUCCESSFUL
    client.publish("s/us", "503,c8y_Restart".getBytes(), 2, false);
    System.out.println("...done...");
}
Code 501 means the restart operation has started, and code 503 means the device restarted successfully.
But in the Cumulocity UI the operation status only changes to Pending.
If I send the restart operation again, the previous operation changes to Success, but the new one stays Pending.
So, what am I doing wrong?
I expect to mark the operation as Failed or Success.
The "PENDING" state is always the initial state of every operation in Cumulocity.
SmartREST MQTT operations always follow this order:
PENDING -> EXECUTING -> SUCCESSFUL/FAILED
SmartREST will always update the oldest operation (as we want to execute operations in historical order).
So if you send 501 it will look for the oldest matching operation in PENDING state.
If you send 503 it will look for the oldest matching operation in EXECUTING state.
From your explanation it is not fully clear whether there were already two restart operations when your code was executed. Your code is fully correct, but if there were already two restart operations, this would explain why one is now SUCCESSFUL and the other is still PENDING.
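To also cover the FAILED case you mention, the same pattern applies with static template 502 (set to FAILED, with a failure reason) instead of 503. A minimal sketch, assuming the same Paho client as in your snippet and a hypothetical restartDevice() helper:

if (payload.startsWith("510")) {
    // 501: move the PENDING c8y_Restart operation to EXECUTING
    client.publish("s/us", "501,c8y_Restart".getBytes(), 2, false);
    try {
        restartDevice(); // hypothetical device-specific restart logic
        // 503: mark the EXECUTING operation as SUCCESSFUL
        client.publish("s/us", "503,c8y_Restart".getBytes(), 2, false);
    } catch (Exception e) {
        // 502: mark the EXECUTING operation as FAILED, with a failure reason
        client.publish("s/us", ("502,c8y_Restart," + e.getMessage()).getBytes(), 2, false);
    }
}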
I'm running a Hyperledger Fabric private network and submitting transactions to the ledger from a Java application using the Fabric Java SDK.
Occasionally, roughly 1 in 10,000 times, the Java application throws an exception when I'm submitting the transaction to the ledger, with the message below:
ERROR 196664 --- [Thread-4] org.hyperledger.fabric.sdk.Channel : Future completed exceptionally: sendTransaction

java.lang.IllegalArgumentException: The proposal responses have 2 inconsistent groups with 0 that are invalid. Expected all to be consistent and none to be invalid.
    at org.hyperledger.fabric.sdk.Channel.doSendTransaction(Channel.java:5574) ~[fabric-sdk-java-2.1.1.jar:na]
    at org.hyperledger.fabric.sdk.Channel.sendTransaction(Channel.java:5533) ~[fabric-sdk-java-2.1.1.jar:na]
    at org.hyperledger.fabric.gateway.impl.TransactionImpl.commitTransaction(TransactionImpl.java:138) ~[fabric-gateway-java-2.1.1.jar:na]
    at org.hyperledger.fabric.gateway.impl.TransactionImpl.submit(TransactionImpl.java:96) ~[fabric-gateway-java-2.1.1.jar:na]
    at org.hyperledger.fabric.gateway.impl.ContractImpl.submitTransaction(ContractImpl.java:50) ~[fabric-gateway-java-2.1.1.jar:na]
    at com.apidemoblockchain.RepositoryDao.BaseFunctions.Implementations.PairTrustBaseFunction.sendTrustTransactionMessage(PairTrustBaseFunction.java:165) ~[classes/:na]
    at com.apidemoblockchain.RepositoryDao.Implementations.PairTrustDataAccessRepository.run(PairTrustDataAccessRepository.java:79) ~[classes/:na]
    at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
My submitting method goes like this:
public void sendTrustTransactionMessage(Gateway gateway, Contract trustContract, String payload) throws TimeoutException, InterruptedException, InvalidArgumentException, TransactionException, ContractException {
    // Prepare
    checkIfChannelIsReady(gateway);
    // Execute
    trustContract.submitTransaction(getCreateTrustMethod(), payload);
}
I'm using a 4-org network with 2 peers each, and 3 channels, one per chaincode DataType, in order to keep things clean.
I think the error coming from the Channel doesn't make sense, because I am using the Contract to submit it. I open the gateway and then keep it open to continuously submit the transactions:
try (Gateway gateway = getBuilder(getTrustPeer()).connect()) {
    Contract trustContract = gateway.getNetwork(getTrustChaincodeChannelName()).getContract(getTrustChaincodeId(), getTrustChaincodeName());
    while (!terminateLoop) {
        if (message) {
            String payload = preparePayload();
            sendTrustTransactionMessage(gateway, trustContract, payload);
        }
        ...
        wait();
    }
    ...
}
EDIT:
After reading #bestbeforetoday's advice, I've managed to catch the ContractException and analyze the logs. Still, I don't fully understand where the bug might be and, therefore, how to fix it.
I'll add three screenshots I've taken of the ProposalResponses received in the exception, with a comment after them.
[Image: ProposalResponses-1]
[Image: ProposalResponses-2]
[Image: ProposalResponses-3]
So, in the first picture, I can see that 3 proposal responses were received in the exception, and the exception's cause message says:
"The proposal responses have 2 inconsistent groups with 0 that are invalid. Expected all to be consistent and none to be invalid."
Pictures 2 and 3 show the content of those responses, and I notice that two fields hold null values, namely "ProposalResponsePayload" and "timestamp_"; however, I don't know if those are the "two groups" referred to in the exception's cause message.
Thanks in advance...
It seems that, while the endorsing peers all successfully endorsed your transaction proposal, those peer responses were not all byte-for-byte identical.
There are several things that might differ, including read/write sets or the value returned from the transaction function invocation. There are several reasons why differences might occur, including non-deterministic transaction function implementation, different transaction function behaviour between peers, or different ledger state at different peers.
To figure out what caused this specific failure you probably need to look at the peer responses to identify how they differ. You should be getting a ContractException thrown back from your transaction submit call, and this should allow you to access the proposal responses by calling e.getProposalResponses():
https://hyperledger.github.io/fabric-gateway-java/release-2.2/org/hyperledger/fabric/gateway/ContractException.html#getProposalResponses()
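A minimal sketch of what that inspection might look like, assuming the fabric-gateway-java 2.x API (the ProposalResponse accessors shown are from fabric-sdk-java; treat the exact getters as an assumption):

import org.hyperledger.fabric.gateway.ContractException;
import org.hyperledger.fabric.sdk.ProposalResponse;

try {
    trustContract.submitTransaction(getCreateTrustMethod(), payload);
} catch (ContractException e) {
    // Print each endorsing peer's response so the inconsistent group is visible
    for (ProposalResponse response : e.getProposalResponses()) {
        System.out.println("peer=" + response.getPeer().getName()
                + " status=" + response.getStatus()
                + " message=" + response.getMessage());
    }
    throw e;
}

Diffing those per-peer responses (read/write sets and returned values) should show which peers disagree and point at the cause.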
I have a long-running application that uses the Azure Event Hubs SDK (5.1.0), continually publishing data to an Azure event hub. The service threw the exception below after a few days. What could be the cause of this, and how can we overcome it?
Stack Trace:
Exception in thread "SendTimeout-timer" reactor.core.Exceptions$BubblingException: com.azure.core.amqp.exception.AmqpException: Entity(abc): Send operation timed out, errorContext[NAMESPACE: abc-eventhub.servicebus.windows.net, PATH: abc-metrics, REFERENCE_ID: 70288acf171a4614ab6dcfe2884ee9ec_G2S2, LINK_CREDIT: 210]
at reactor.core.Exceptions.bubble(Exceptions.java:173)
at reactor.core.publisher.Operators.onErrorDropped(Operators.java:612)
at reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.onError(FluxTimeout.java:203)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.secondError(MonoFlatMap.java:185)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onError(MonoFlatMap.java:251)
at reactor.core.publisher.FluxHide$SuppressFuseableSubscriber.onError(FluxHide.java:132)
at reactor.core.publisher.MonoCreate$DefaultMonoSink.error(MonoCreate.java:185)
at com.azure.core.amqp.implementation.ReactorSender$SendTimeout.run(ReactorSender.java:565)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
Caused by: com.azure.core.amqp.exception.AmqpException: Entity(abc-metrics): Send operation timed out, errorContext[NAMESPACE: abc-eventhub.servicebus.windows.net, PATH: abc-metrics, REFERENCE_ID: 70288acf171a4614ab6dcfe2884ee9ec_G2S2, LINK_CREDIT: 210]
at com.azure.core.amqp.implementation.ReactorSender$SendTimeout.run(ReactorSender.java:562)
... 2 more
I'm using the Azure Event Hubs Java SDK 5.1.0.
According to the official documentation:
A TimeoutException indicates that a user-initiated operation is taking longer than the operation timeout.

For Event Hubs, the timeout is specified either as part of the connection string, or through ServiceBusConnectionStringBuilder. The error message itself might vary, but it always contains the timeout value specified for the current operation.

Common causes

There are two common causes for this error: incorrect configuration, or a transient service error.

Incorrect configuration: The operation timeout might be too small for the operational condition. The default value for the operation timeout in the client SDK is 60 seconds. Check to see if your code has the value set to something too small. The condition of the network and CPU usage can affect the time it takes for a particular operation to complete, so the operation timeout should not be set to a small value.

Transient service error: Sometimes the Event Hubs service can experience delays in processing requests; for example, during periods of high traffic. In such cases, you can retry your operation after a delay, until the operation is successful. If the same operation still fails after multiple attempts, visit the Azure service status site to see if there are any known service outages.
If you are seeing this error frequently, I would suggest reaching out to Azure support for a deeper look.
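If the cause is a too-small try timeout, it can be raised when building the client. A minimal sketch, assuming the azure-messaging-eventhubs 5.x builder API (the connection string and the "abc-metrics" hub name are placeholders):

import com.azure.core.amqp.AmqpRetryOptions;
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubProducerClient;
import java.time.Duration;

public class ProducerFactory {
    public static EventHubProducerClient create(String connectionString) {
        // Give each send attempt more time, and retry transient failures
        AmqpRetryOptions retryOptions = new AmqpRetryOptions()
                .setTryTimeout(Duration.ofSeconds(120)) // default is 60 seconds
                .setMaxRetries(5);
        return new EventHubClientBuilder()
                .connectionString(connectionString, "abc-metrics") // placeholder hub name
                .retry(retryOptions)
                .buildProducerClient();
    }
}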
At medium to high load (test and production), when using the Vert.x Redis client, I get the following warning after a few hundred requests.
2019-11-22 11:30:02.320 [vert.x-eventloop-thread-1] WARN io.vertx.redis.client.impl.RedisClient - No handler waiting for message: [null, 400992, <data from redis>]
As a result, the handler supplied to the Redis call (see below) does not get called and the incoming request times out.
Handler<AsyncResult<String>> handler = res -> {
    // success handler
};
redis.get(key, res -> {
    handler.handle(res);
});
The real issue is that once the "No handler ..." warning comes up, the Redis client becomes useless, because all further calls to Redis made via the client fail with the same warning, resulting in the handler not getting called. I have an exception handler set on the client to attempt reconnection, but I do not see any reconnection being attempted.
How can one recover from this problem? Any workarounds to alleviate the severity would also be great.
I'm on vertx-core and vertx-redis-client 3.8.1.
The upcoming 4.0 release has addressed this issue, and a release should be happening soon; how soon, I can't really tell.
The problem is that we can't easily backport the fix from the master branch to the 3.8 branch, because a major refactoring has happened on the client and the codebases are very different.
The new code uses a connection pool and has been tested for concurrent access (which is where the issue you're seeing comes from). Under load, requests are routed across all event loops, and the queue that maintains the state between in-flight requests (requests sent to Redis) and waiting handlers could get out of sync under very specific conditions.
So I'd first see whether you can already start moving your code to 4.0. You can try the 4.0.0-milestone3 version, but to be on the safe side, run with the latest master, which has more fixes in this area.
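Until then, a common mitigation on 3.8.x is to recreate the client whenever its connection fails, since the out-of-sync queue state cannot be recovered. A rough sketch of that pattern, assuming the 3.8 Redis.createClient/connect/exceptionHandler API (the backoff policy is just an example):

import io.vertx.core.AsyncResult;
import io.vertx.core.Handler;
import io.vertx.core.Vertx;
import io.vertx.redis.client.Redis;
import io.vertx.redis.client.RedisOptions;

public class RedisHolder {
    private final Vertx vertx;
    private final RedisOptions options;
    private volatile Redis client;

    public RedisHolder(Vertx vertx, RedisOptions options) {
        this.vertx = vertx;
        this.options = options;
    }

    void connect(Handler<AsyncResult<Redis>> handler) {
        Redis.createClient(vertx, options).connect(onConnect -> {
            if (onConnect.succeeded()) {
                client = onConnect.result();
                // Replace the broken client on any connection-level failure
                client.exceptionHandler(e -> reconnectWithBackoff(0));
            }
            handler.handle(onConnect);
        });
    }

    private void reconnectWithBackoff(int retry) {
        // Exponential backoff capped at roughly one minute
        long delay = Math.min(1000L * (1L << Math.min(retry, 6)), 64000L);
        vertx.setTimer(delay, id -> connect(ar -> {
            if (ar.failed()) {
                reconnectWithBackoff(retry + 1);
            }
        }));
    }
}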
I have a process where, at some point, two different kinds of message can occur, and if neither appears in time, the workflow times out.
Based on the documentation, I have modelled the process using an event gateway.
To advance my Activiti workflow, I am using the Activiti REST API. However, I cannot find in the documentation how to send a message to the gateway in order to continue to either Message 1 or Message 2. I tried triggering the message for all execution IDs linked to my process ID, but to no avail.
What is the right REST API command to advance this workflow?
Thanks for your support.
Edit 1 :
It seems that the event gateway is subscribed to only one event.
It reacts to:
POST http://localhost:8082/activiti-rest/service/runtime/executions/20178
{"action":"messageEventReceived","messageName":"Message 1"}
and continues the process along the Message 1 path. However, with Message 2 defined in exactly the same way (but with another message name), it returns a missing-subscription error:
Execution with id '20178' does not have a subscription to a message event with name 'Message 2'
For an event-based gateway (https://www.activiti.org/userguide/#bpmnEventbasedGateway), the intermediate message/signal catching events are mutually exclusive: the process follows only one path out of the gateway, depending on which event is received first. In your case, you have already fired Message 1, so the execution continues along the Message 1 path and the other message subscriptions are deleted. That is why you are getting the error.
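If you want to check which subscriptions still exist before triggering a message, the Activiti REST documentation reportedly lets you filter the execution collection by subscription name (treat the exact parameter name as an assumption):

GET http://localhost:8082/activiti-rest/service/runtime/executions?messageEventSubscriptionName=Message%202

An empty result means the subscription is gone, for example because the gateway already took the Message 1 path, and triggering Message 2 would fail with the error above.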
This might be a simple problem, but I can't seem to find a good solution right now.
I've got:
OldApp - a Java application started from the command line (no web front here)
NewApp - a Java application with a REST api behind Apache
I want OldApp to call NewApp through its REST API, and when NewApp is done, OldApp should continue.
My problem is that NewApp does a lot of work that can take a long time, which in some cases causes a timeout in Apache, which then sends a 502 error to OldApp. The computations continue in NewApp, but OldApp does not know when NewApp is done.
One solution I thought of is to fork a thread in NewApp, store some kind of ID for the API request, and return it to OldApp. Then OldApp could poll NewApp to see if the thread is done, and if so, continue; otherwise, keep polling.
Are there any good design patterns for something like this? Am I complicating things? Any tips on how to think?
If NewApp is taking a long time, it should immediately return a 202 Accepted. The response should contain a Location header indicating where the user can go to look up the result when it's done, and an estimate of when the request will be done.
OldApp should wait until the estimate time is reached, then submit a new GET call to the location. The response from that GET will either be the expected data, or an entity with a new estimated time. OldApp can then try again at the later time, repeating until the expected data is available.
So the conversation might look like this:
POST /widgets
response:
202 Accepted
Location: "http://server/v1/widgets/12345"
{
"estimatedAvailableAt": "<whenever>"
}
GET /widgets/12345
response:
200 OK
Location: "http://server/v1/widgets/12345"
{
"estimatedAvailableAt": "<wheneverElse>"
}
GET /widgets/12345
response:
200 OK
Location: "http://server/v1/widgets/12345"
{
"myProperty": "myValue",
...
}
Yes, that's exactly what people are doing with REST now. Because there is no way to connect from server to client, the client just polls frequently. There is also an improved method called "long polling", where the connection between client and server has a long timeout, and the server sends information back to the connected client as soon as it becomes available.
The question is about Java and servlets, so I would suggest looking at Servlet 3.0 asynchronous support.
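A minimal sketch of that suggestion, assuming the javax.servlet 3.0 API (the doLongRunningWork() helper is hypothetical). Note that this frees the container's request threads but still keeps the HTTP connection open, so the proxy timeout in front still has to be large enough:

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

@WebServlet(urlPatterns = "/work", asyncSupported = true)
public class AsyncWorkServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        // Detach from the request thread; the container keeps the response open
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(0); // disable the container-side async timeout; tune as needed
        ctx.start(() -> {
            try {
                String result = doLongRunningWork(); // hypothetical long computation
                ctx.getResponse().getWriter().write(result);
            } catch (IOException e) {
                // log the failure; the response is completed either way
            } finally {
                ctx.complete();
            }
        });
    }

    private String doLongRunningWork() {
        return "done";
    }
}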
Talking from a design perspective, you would need to return a 202 Accepted with an ID and a URL to the job. OldApp then needs to check for the result of the operation using that URL.
The task that you fork on the server should implement the Callable interface, and I would also recommend using a thread pool for this. The GET URL for the forked job can then check the Future's status and return the result to the user.
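A minimal sketch of that idea, assuming a plain ExecutorService and an in-memory job registry (the JobRegistry name and its methods are made up for illustration):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class JobRegistry {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    // POST handler: start the long-running work and return a job id for the 202 response
    public String submit(Callable<String> work) {
        String id = UUID.randomUUID().toString();
        jobs.put(id, pool.submit(work));
        return id;
    }

    // GET handler: the result if the job is done, or null (answer "not ready yet")
    public String poll(String id) throws ExecutionException, InterruptedException {
        Future<String> future = jobs.get(id);
        if (future == null || !future.isDone()) {
            return null;
        }
        jobs.remove(id);
        return future.get();
    }
}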