Asana Events autosync - Java

I want to check events and handle the sync parameter of the Asana Events API in Java, per the Asana docs (https://developers.asana.com/docs/events). We have:
Client client = Client.accessToken("PERSONAL_ACCESS_TOKEN");
List<JsonElement> result = client.events.getEvents(sync, resource)
    .option("pretty", true)
    .execute();
In Java I get an error and don't see the new "sync" parameter:
com.asana.errors.InvalidTokenError: Sync token invalid or too old
(Sync token invalid or too old. If you are attempting to keep
resources in sync, you must fetch the full dataset for this query now
and use the new sync token for the next sync.)
In the Asana API explorer (https://developers.asana.com/explorer) I see this:
GET /events?resource=1164252
{
"errors": [
{
"message": "Sync token invalid or too old. If you are attempting to keep resources in sync, you must fetch the full dataset for this query now and use the new sync token for the next sync."
}
],
"sync": "c66c5705fb8286666f944f8e314a82c6:0"
}
So in Java I get the error message, but not the "sync" parameter. If I use a try/catch (catch (IOException ex)), then in the debugger I can see the "sync" value that I need.
[Picture of ex at a breakpoint.] But how do I get this info from the Exception object (it's not the main message of the exception)?
Has anybody worked with the Asana library and run into this problem (how to get the sync parameter the first time, and later from server responses)?
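One workaround, assuming you can get at the raw error response body from your catch block, is to parse the sync token out of the JSON yourself. The field name "sync" comes from the error payload shown in the API explorer above; the helper below is an illustrative sketch (not part of the Asana SDK) and uses only a regex, where a real JSON parser such as Gson would be more robust:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SyncTokenExtractor {
    // Pulls the "sync" value out of the raw error body. Per the docs, the
    // InvalidTokenError response carries the fresh sync token to use on the
    // next request, alongside the error message.
    static String extractSyncToken(String errorBody) {
        Matcher m = Pattern.compile("\"sync\"\\s*:\\s*\"([^\"]+)\"").matcher(errorBody);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Error body as seen in the API explorer above.
        String body = "{\"errors\":[{\"message\":\"Sync token invalid or too old.\"}],"
                + "\"sync\":\"c66c5705fb8286666f944f8e314a82c6:0\"}";
        System.out.println(extractSyncToken(body)); // prints the token
    }
}
```

The overall flow would then be: call getEvents without a token (or with a stale one), catch the error, extract the new token from the response body, and pass it as the sync argument on the next call.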


Azure blob services - AuthorizationFailed - /Microsoft.Storage/storageAccounts/<ACCOUNT_NAME>/blobServices/default - JAVA sdk

I am trying to obtain the "Versioning" status of my Storage account using azure-sdk-for-java
// Azure environment URL is ".core.windows.net" hence used "AzureEnvironment.AZURE"
// this.clientSecretCredential -> is built using AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID
AzureProfile profile = new AzureProfile("<TENANT_ID>", "<SUBSCRIPTION_ID>", AzureEnvironment.AZURE);
StorageManager manager = StorageManager.authenticate(this.clientSecretCredential, profile);
BlobServicesClient blobServicesClient = manager.serviceClient().getBlobServices();
// Exception is thrown at the following line
BlobServicePropertiesInner blobServicePropertiesInner = blobServicesClient.getServiceProperties("<RESOURCE_GROUP_NAME>", "<ACCOUNT_NAME>");
boolean versionFlag = blobServicePropertiesInner.isVersioningEnabled();
Azure configuration details:
Subscription: "<SUBSCRIPTION_ID>" is created in the subscriptions.
Resource Group: "<RESOURCE_GROUP_NAME>" is configured with the "<SUBSCRIPTION_ID>".
Storage Account: "<ACCOUNT_NAME>" is configured with the "<SUBSCRIPTION_ID>".
App Registration: "<APP_REGISTRATION>" is created to provide the "<AZURE_CLIENT_ID>", "<AZURE_CLIENT_SECRET>", and "<AZURE_TENANT_ID>".
Role Assignments: "DEVELOPER" has "Reader" access across subscriptions, resource groups, and storage accounts, but I still have no idea how the App registration is configured into the subscription.
Error Message:
{
"code": "ERROR",
"message": "Status code 403, "{"error":{"code":"AuthorizationFailed","message":"The client '<APP_REGISTRATION_OBJECT_ID>' with object id '<APP_REGISTRATION_OBJECT_ID>' does not have authorization to perform action 'Microsoft.Storage/storageAccounts/blobServices/read' over scope '/subscriptions/{"<SUBSCRIPTION_ID>"}/resourceGroups/Titaniam-Sandbox/providers/Microsoft.Storage/storageAccounts/sandboxtestaccount/blobServices/default' or the scope is invalid. If access was recently granted, please refresh your credentials."}}""
}
Kindly let me know what mistake I am making, whether it is a code issue or a configuration issue.
Things I need clarification on:
Is there any other way to get the "is versioning enabled" value using azure-sdk-for-java?
How is the App registration connected with the subscription?
How are roles connected with the App registration as well as the subscription?
How do I set the scope?
How do I add the application, and how do I identify the application?
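Regarding "How do I set the scope": the AuthorizationFailed error names the exact scope at which the app registration's service principal needs a role that includes the 'Microsoft.Storage/storageAccounts/blobServices/read' action. The sketch below only illustrates the standard ARM scope string format for a storage account; the placeholder values and helper name are mine, not from the SDK:

```java
public class ScopeBuilder {
    // Builds the ARM resource scope at which a role (e.g. "Reader") must be
    // assigned to the app registration's service principal for the call above
    // to succeed. Assigning at subscription or resource-group scope also
    // covers resources beneath it.
    static String storageAccountScope(String subscriptionId, String resourceGroup, String accountName) {
        return String.format(
            "/subscriptions/%s/resourceGroups/%s/providers/Microsoft.Storage/storageAccounts/%s",
            subscriptionId, resourceGroup, accountName);
    }

    public static void main(String[] args) {
        System.out.println(storageAccountScope("<SUBSCRIPTION_ID>", "<RESOURCE_GROUP_NAME>", "<ACCOUNT_NAME>"));
    }
}
```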
Thanks in advance.

Failed to use Regions.AP_SOUTH_1 sending raw email with AWS SES on Android

I use the following Java code to send user emails. It works as expected if I use Regions.US_EAST_1 and the related identity pool id.
AmazonSimpleEmailServiceAsyncClient client = new AmazonSimpleEmailServiceAsyncClient(
        new CognitoCachingCredentialsProvider(context, identityPoolId, Regions.AP_SOUTH_1));
client.setRegion(Region.getRegion(Regions.AP_SOUTH_1));
client.sendRawEmailAsync(new SendRawEmailRequest(rawMessage), new AsyncHandler<SendRawEmailRequest, SendRawEmailResult>()
{
    @Override
    public void onError(Exception exception)
    {
        exception.printStackTrace();
    }

    @Override
    public void onSuccess(SendRawEmailRequest request, SendRawEmailResult sendEmailResult)
    {
    }
});
But after I changed the region to AP_SOUTH_1 (and also changed the identity pool id accordingly), the code stopped working: the email is not sent, and I see a log saying
com.amazonaws.AmazonServiceException: The security token included in
the request is invalid. (Service: AmazonSimpleEmailService; Status
Code: 403; Error Code: InvalidClientTokenId; Request ID:
1168c9c2-a940-4bef-be36-8568787bc130)
Why does US_EAST_1 work but AP_SOUTH_1 not? How do I get the region AP_SOUTH_1 to work? How can I identify and fix this problem?
Important note:
I have verified the sender email address in both regions.
I have granted the role ses:sendRawEmail permission.
I would like to post the findings from my two days of investigation to help others.
Tested with
AWS SDK for Android:
Using the latest SDK (implementation 'com.amazonaws:aws-android-sdk-ses:2.16.12'), you can only use three regions (us-east-1, us-west-2, eu-west-1) to successfully send emails in production. Unfortunately, ap-south-1 is not on the list. Even though the SES console page shows you can use more regions, if you do it by code you will always get the AmazonServiceException saying "The security token included in the request is invalid."
AWS SDK for Java:
Using the latest SDK (implementation 'com.amazonaws:aws-java-sdk-ses:1.11.789'), users can use region ap-south-1 to send raw emails.
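To fail fast instead of hitting the InvalidClientTokenId error at send time, you could guard the client setup with a check against the regions observed to work. The region list below simply reflects the findings above and may change with newer SDK releases; the class and method names are illustrative:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SesRegionCheck {
    // Regions where sending succeeded with the Android SES SDK in the tests
    // described above. This is an observation, not an official list.
    static final Set<String> ANDROID_SES_REGIONS =
            new HashSet<>(Arrays.asList("us-east-1", "us-west-2", "eu-west-1"));

    static boolean isSupportedOnAndroid(String region) {
        return ANDROID_SES_REGIONS.contains(region);
    }

    public static void main(String[] args) {
        System.out.println(isSupportedOnAndroid("us-east-1"));  // true
        System.out.println(isSupportedOnAndroid("ap-south-1")); // false
    }
}
```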

Best way to handle badly written external exception?

I have an external service I'm calling that just returns 500s with the SAME exception each time, no matter the issue.
For example (my API to their service):
Action: Fetch image that does not exist
IMGException: Status code: 500, ErrMsg: File not found
Action: Fetch image that does exist but there are server side issues
IMGException: Status code: 500, ErrMsg: Cannot grab img at this time
Action: Fetch image that does exist but is expired
IMGException: Status code: 500, ErrMsg: Img is expired
What would be the best way to handle this? I was catching them and giving them more descriptive messages to throw to my @ExceptionHandler for logging, etc. Should I just rethrow them and never catch them, since I cannot dependably predict what caused the exception and therefore cannot correctly change the status code or message?
You can parse the ErrMsg and throw your own exceptions. But since the response is from an external service, you could also include the message from the external service in the response to your API in a separate field, like ExternalMessage.
This will help the users in case the response from the external API changes and you end up throwing a different exception.
I recommend you simply rethrow these exceptions with the information that the server sends to you and add any information you have. But do not add new information based on what you received (with ifs, for example), because if they change something, your code will just become outdated.
Of course, never show cryptic messages to your final user. In that case, add a generic message with instructions about what they can do.
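The separate-field idea above can be sketched as a custom exception that carries the external service's raw message alongside your own. The class and field names here are illustrative, not from any framework:

```java
public class ImageServiceException extends RuntimeException {
    // Keeps the external service's verbatim message in its own field, so the
    // API response can expose both our description and theirs without the two
    // being conflated.
    private final String externalMessage;

    public ImageServiceException(String message, String externalMessage) {
        super(message);
        this.externalMessage = externalMessage;
    }

    public String getExternalMessage() {
        return externalMessage;
    }

    public static void main(String[] args) {
        try {
            // Wrap whatever the external service threw; do not try to
            // interpret its message, just pass it through.
            throw new ImageServiceException("Image fetch failed",
                    "Status code: 500, ErrMsg: Img is expired");
        } catch (ImageServiceException e) {
            System.out.println(e.getMessage() + " | " + e.getExternalMessage());
        }
    }
}
```

An @ExceptionHandler could then map this to a response body with both fields, leaving the external text untouched.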

Error while making Azure IoT Hub Device Identities in bulk

I am following https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-bulk-identity-mgmt to do a bulk upload of device identities in Azure IoT Hub. All the code given there is in C#, so I am converting it to the Java equivalent.
Using the Import devices example (bulk device provisioning), I am getting the following JSON:
{"id":"d3d78b0d-6c8c-4ef5-a321-91fbb6a4b7d1","importMode":"create","status":"enabled","authentication":{"symmetricKey":{"primaryKey":"f8/UZcYbhPxnNdbSl2J+0Q==","secondaryKey":"lbq4Y4Z8qWmfUxAQjRsDjw=="}}}
{"id":"70bbe407-8d65-4f57-936f-ef402aa66d07","importMode":"create","status":"enabled","authentication":{"symmetricKey":{"primaryKey":"9e7fDNIFbMu/NmOfxo/vGg==","secondaryKey":"nwFiKR4HV9KYHzkeyu8nLA=="}}}
To import the file from blob storage, the following function is called:
CompletableFuture<JobProperties> importJob = registryManager
.importDevicesAsync(inURI, outURI);
In the above code, we need to provide a URI with a SAS token. The Java equivalent of Get the container SAS URI is below:
static String GetContainerSasUri(CloudBlobContainer container) {
    SharedAccessBlobPolicy sasConstraints = new SharedAccessBlobPolicy();
    sasConstraints.setSharedAccessExpiryTime(new Date(new Date().getTime() + TimeUnit.DAYS.toMillis(1)));
    sasConstraints.setPermissions(EnumSet.of(SharedAccessBlobPermissions.READ, SharedAccessBlobPermissions.WRITE,
            SharedAccessBlobPermissions.LIST, SharedAccessBlobPermissions.DELETE));

    BlobContainerPermissions permissions = new BlobContainerPermissions();
    permissions.setPublicAccess(BlobContainerPublicAccessType.CONTAINER);
    permissions.getSharedAccessPolicies().put("testpolicy", sasConstraints);
    try {
        container.uploadPermissions(permissions);
    } catch (StorageException e1) {
        e1.printStackTrace();
    }

    String sasContainerToken = null;
    try {
        sasContainerToken = container.generateSharedAccessSignature(sasConstraints, "testpolicy");
    } catch (InvalidKeyException e) {
        e.printStackTrace();
    } catch (StorageException e) {
        e.printStackTrace();
    }

    System.out.println("URI " + container.getUri() + "?" + sasContainerToken);
    return container.getUri() + "?" + sasContainerToken;
}
Now the problem comes here. For the output container I am getting the following error:
java.util.concurrent.ExecutionException: com.microsoft.azure.sdk.iot.service.exceptions.IotHubBadFormatException: Bad message format! ErrorCode:BlobContainerValidationError;Unauthorized to write to output blob container. Tracking ID:2dcb2efbf1e14e33ba60dc8415dc03c3-G:4-TimeStamp:11/08/2017 16:16:10
Please help me understand why I am getting the Bad message format error. Is there a problem with the SAS-generating code, or does my blob container not have write permission?
Are you using a service SAS or an account-level SAS? The error thrown suggests the service isn't authorized, or doesn't have the delegated permissions, to write to the designated blob container. Check out the resource here on how to set up an account-level SAS and how to delegate read, write, and delete operations on blob containers: https://learn.microsoft.com/en-us/rest/api/storageservices/Delegating-Access-with-a-Shared-Access-Signature?redirectedfrom=MSDN
Snipped content: "An account-level SAS, introduced with version 2015-04-05. The account SAS delegates access to resources in one or more of the storage services. All of the operations available via a service SAS are also available via an account SAS. Additionally, with the account SAS, you can delegate access to operations that apply to a given service, such as Get/Set Service Properties and Get Service Stats. You can also delegate access to read, write, and delete operations on blob containers, tables, queues, and file shares that are not permitted with a service SAS. See Constructing an Account SAS for more information about account SAS."
I was facing the same issue while using a private storage account as the import/output container.
It works smoothly after I started using a public storage account.
Anyway, it should work even with a private storage account, so I have raised an issue. For more info, you may refer to this link.

Error 204 in a Google App Engine API in Java

We have an API on Google App Engine. The API consists of a search engine: when a user requests a productID, the API returns a JSON with a group of other productIDs (matching a specific criterion). This is the current configuration:
<instance-class>F4_1G</instance-class>
<automatic-scaling>
<min-idle-instances>3</min-idle-instances>
<max-idle-instances>automatic</max-idle-instances>
<min-pending-latency>automatic</min-pending-latency>
<max-pending-latency>automatic</max-pending-latency>
</automatic-scaling>
We use app_engine_release=1.9.23.
The process is as follows: we make two calls to the datastore and one call with urlfetch (to an external API).
The problem is that from time to time we receive an error 204 with this trace:
ms=594 cpu_ms=0 exit_code=204 app_engine_release=1.9.23
A problem was encountered with the process that handled this request, causing it to exit. This is likely to cause a new process to be used for the next request to your application. (Error code 204)
This is what we got in the client:
{
"error": {
"errors": [
{
"domain": "global",
"reason": "backendError",
"message": ""
}
],
"code": 503,
"message": ""
}
}
We changed the number of resident instances from 3 to 7 and got the same error. The errors also occur on the same instances; we see 4 errors within a very small amount of time.
We found that the problem was with the urlfetch call. If we set a high timeout, it returns a lot of errors.
Any idea why this is happening?
I believe I have found the problem. It was related to the urlfetch call. I did many tests until I isolated the problem. When I made calls only to the datastore, everything worked as expected. However, when I added the urlfetch call, it produced the 204 errors. It happened consistently, so I believe it could be a bug.
What I did to get rid of the error was to remove the Google Cloud Endpoint and use a basic servlet. I found that when mixing the servlet with the urlfetch call we don't get the error; therefore the problem might not be related only to urlfetch but to a combination of urlfetch and Google Cloud Endpoints.
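Since the failures were tied to the urlfetch call, one general mitigation is to bound the external call with explicit timeouts so a slow upstream API cannot hold the request for long. On App Engine the call would normally go through the URLFetch service; this plain-Java sketch with HttpURLConnection only illustrates the timeout-bounding idea, and the URL and timeout values are placeholders:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class TimeoutFetch {
    // Opens a connection with both connect and read timeouts set, so a hung
    // external API fails fast instead of stalling the request handler.
    // Note: openConnection() does not contact the server yet.
    static HttpURLConnection openWithTimeouts(String url, int timeoutMillis) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setConnectTimeout(timeoutMillis);
        conn.setReadTimeout(timeoutMillis);
        return conn;
    }

    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = openWithTimeouts("http://example.com/api", 5000);
        System.out.println(conn.getReadTimeout()); // 5000
    }
}
```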
