So I have a Kinesis consumer running in ECS Fargate that I am trying to instrument with X-Ray. I have added the X-Ray daemon sidecar to the task definition in my CloudFormation template, and it shows up in the task and is running:
{
  "name": "xray-daemon",
  "image": "************.dkr.ecr.us-east-1.amazonaws.com/xray-daemon",
  "cpu": 32,
  "memoryReservation": 256,
  "portMappings": [
    {
      "containerPort": 2000,
      "protocol": "udp"
    }
  ]
},
I then wrapped an SNS publish in a subsegment:
AWSXRay.beginSubsegment("SNS Publish");
// do the publish
AWSXRay.endSubsegment();
And still no luck.
Finally, I added the following at the start of my app, which I believe records the entire ECS process to X-Ray:
AWSXRayRecorderBuilder builder = AWSXRayRecorderBuilder.standard().withPlugin(new ECSPlugin());
AWSXRay.setGlobalRecorder(builder.build());
So far, everything runs fine (the consumer is unaffected and running fine), but nothing is showing up in X-Ray. Any ideas on what I might be missing?
Thanks
You need to wrap your Kinesis consumer's code in a segment to be able to see any of the trace data. The segment will denote your consumer as a node in the X-Ray service map.
https://github.com/aws/aws-xray-sdk-java#applications-not-using-javaxservlet-may-include-custom-interceptors-to-begin-and-end-trace-segments
Use the AWSXRay.beginSegment and AWSXRay.endSegment APIs (similar to the subsegment APIs you're already using) to create a segment around the process. Subsegments require a segment to be present; you're probably getting X-Ray ContextMissing errors in your logs when trying to create the subsegment.
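For illustration, a minimal sketch of what that could look like in the consumer's processing loop (the class and segment names here are hypothetical, not from the original post):

import com.amazonaws.xray.AWSXRay;
import com.amazonaws.xray.entities.Subsegment;

public class RecordProcessor {
    // Wrap each unit of work in a segment so that subsegments
    // (like the SNS publish) have a trace context to attach to.
    public void processRecord() throws Exception {
        AWSXRay.beginSegment("KinesisConsumer"); // names the node in the service map
        try {
            Subsegment subsegment = AWSXRay.beginSubsegment("SNS Publish");
            try {
                // do the publish here
            } catch (Exception e) {
                subsegment.addException(e);
                throw e;
            } finally {
                AWSXRay.endSubsegment();
            }
        } finally {
            AWSXRay.endSegment(); // sends the segment to the daemon
        }
    }
}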
Since you add this in your app:
AWSXRayRecorderBuilder builder = AWSXRayRecorderBuilder.standard().withPlugin(new ECSPlugin());
AWSXRay.setGlobalRecorder(builder.build());
I'm no AWS expert, but didn't you miss this?
Since Fargate is a service that manages the instances your tasks run on, access to the underlying host is prohibited. Consequently, the ECSPlugin and EC2Plugins for X-Ray will not work.
If not, take a look at this code snippet showing how to add the X-Ray SDK to your app and run X-Ray on Fargate:
Sending tracing information to AWS X-Ray
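Concretely, given the Fargate limitation quoted above, the fix may be as simple as dropping the plugin when building the recorder (a sketch, not a guaranteed fix):

import com.amazonaws.xray.AWSXRay;
import com.amazonaws.xray.AWSXRayRecorderBuilder;

// On Fargate, skip withPlugin(new ECSPlugin()): the underlying host
// metadata the plugin reads is not accessible.
AWSXRayRecorderBuilder builder = AWSXRayRecorderBuilder.standard();
AWSXRay.setGlobalRecorder(builder.build());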
Related
I am using AWS IVS for live streaming. When the stream ends I need to get a notification. I have configured EventBridge with IVS as the source and the DEV, QA, and PROD endpoints as destinations. When a stream ends, I get the notification at all of the endpoints.
But my requirement is: if the stream starts from dev, only the dev endpoint should receive the stream-end notification; if it starts from QA, only the QA endpoint should receive it. How can I achieve this? Thanks in advance.
I had a similar issue, and we ended up creating one event bus for production, one for development, and another for staging.
Depending on the environment, the producers send to one event bus or another, and the same goes for the consumers.
The price remains the same.
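For illustration, a sketch of the producer side with the AWS SDK for Java v2, assuming your own producers emit the events (the bus naming scheme and event fields here are assumptions):

import software.amazon.awssdk.services.eventbridge.EventBridgeClient;
import software.amazon.awssdk.services.eventbridge.model.PutEventsRequest;
import software.amazon.awssdk.services.eventbridge.model.PutEventsRequestEntry;

public class EnvEventPublisher {
    public static void main(String[] args) {
        // Which environment this producer belongs to (dev | qa | prod).
        String env = System.getenv().getOrDefault("APP_ENV", "dev");
        EventBridgeClient client = EventBridgeClient.create();

        PutEventsRequestEntry entry = PutEventsRequestEntry.builder()
                .eventBusName("streaming-events-" + env) // one bus per environment
                .source("my.app.streaming")
                .detailType("Stream State Change")
                .detail("{\"state\":\"ended\"}")
                .build();

        client.putEvents(PutEventsRequest.builder().entries(entry).build());
    }
}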
In Azure DevOps Pipelines, when deploying an app to Azure Functions, the app may restart while it is still processing messages.
Is there a way for the pipeline to check whether the Functions app is currently processing, wait until it is done, and only then deploy the app?
Conditions
Functions runtime: Java
Trigger: Service Bus Trigger
I tried to check the lock status of Service Bus messages or the processing status of the Functions app with the Azure CLI, but it seems that there is no interface to check the processing status.
https://learn.microsoft.com/en-us/cli/azure/functionapp?view=azure-cli-latest
https://learn.microsoft.com/en-us/cli/azure/servicebus/queue?view=azure-cli-latest
You should never rely on that; better to build your functions to execute fast.
When an Azure Function receives the signal to stop, in C# we have a CancellationToken, so we can add extra code to implement a graceful shutdown. Otherwise, as soon as the Functions host gets the stop signal it will stop accepting new events from Service Bus but will continue executing the functions already in flight, and if they don't finish within some time they can be terminated (I can't find the exact time, but I will update the answer).
I would also suggest you utilise deployment slots; this way you can minimise your downtime.
I have a long-running AWS Lambda function that I execute from my web app. Using the documentation [1], it works fine; however, this particular Lambda function does not return anything back to the application: its output is saved to S3, and it runs for a long time (20-30 s). Is there a way to trigger the Lambda and not wait for the return value, since I don't want to wait/block my app while the Lambda is running? Right now I am using an ExecutorService as a queue to execute Lambda requests, since I have to wait for each invocation; when the app crashes or restarts, I lose jobs that are waiting to be executed.
[1] https://aws.amazon.com/blogs/developer/invoking-aws-lambda-functions-from-java/
Tracking status is not necessarily a difficult issue. Use a simple S3 "file exists" call after each job execution to know whether the Lambda is done.
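For example, a sketch with the AWS SDK for Java v2 (the bucket and key layout are assumptions):

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;

public class JobStatus {
    // Returns true once the lambda has written its output object.
    public static boolean isJobDone(S3Client s3, String bucket, String jobId) {
        try {
            s3.headObject(HeadObjectRequest.builder()
                    .bucket(bucket)
                    .key("results/" + jobId + ".json") // key scheme is an assumption
                    .build());
            return true;  // object exists: job finished
        } catch (S3Exception e) {
            if (e.statusCode() == 404) {
                return false; // no output yet: job still running
            }
            throw e; // anything else is a real error
        }
    }
}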
However, as you've pointed out, you might lose job information at some point. To remove this issue, you need some persistence layer outside your JVM. A KV store would work: store (timestamp, jobId, status) fields in a database, periodically check from your web server, and only update from the Lambda.
Alternatively, to reduce the end-to-end time frame further, a queuing mechanism would be better (unless you also want the full history of jobs, but this can be constructed along with the queue). As mentioned in the comments, AWS offers many built-in solutions that can be used directly with Lambda, or you can add infrastructure like RabbitMQ / Redis to build a task event bus.
With that, Lambda is now optional. You'd effectively pull events off periodically into a worker queue; the workers can either be very dumb passthroughs that invoke the Lambda, or do the work themselves directly. Combine this with ECS/EKS/EC2 autoscaling and it might actually run faster than Lambda, since you can scale in/out based on queue size. Then you write the output events to a success/error notification "channel" after the S3 file is written.
Back in the web server, you'll have to modify the code to listen asynchronously for messages from that channel; when you get a success message, you'll know you can access the S3 resources.
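If the workers stay as passthroughs, the invocation itself doesn't need to block either; a sketch of a fire-and-forget call with the AWS SDK for Java v1 (the function name and payload are placeholders):

import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.InvocationType;
import com.amazonaws.services.lambda.model.InvokeRequest;

public class AsyncInvoker {
    public static void main(String[] args) {
        AWSLambda lambda = AWSLambdaClientBuilder.defaultClient();
        InvokeRequest request = new InvokeRequest()
                .withFunctionName("my-long-running-function") // placeholder name
                .withInvocationType(InvocationType.Event)     // async: returns immediately
                .withPayload("{\"jobId\":\"123\"}");
        lambda.invoke(request); // 202 Accepted; output lands in S3 later
    }
}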
I have a service that sends data to SQS, which is working perfectly (the code is the same as in the Amazon Java SDK examples). While writing the consumer to read these messages from another queue, I am facing issues: the function is never called. Again, the consumer code is also the same as that from the SDK; do I need to provide something else, or are some more configurations required that are not present in the SDK?
I have also attached the code, which I took from the SDK. I am doing long-polling as well.
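For reference, a minimal long-polling consumer along the lines of the SDK example (the queue URL here is a placeholder):

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class SqsConsumer {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue";

        while (true) {
            ReceiveMessageRequest request = new ReceiveMessageRequest(queueUrl)
                    .withWaitTimeSeconds(20)       // long polling
                    .withMaxNumberOfMessages(10);
            for (Message message : sqs.receiveMessage(request).getMessages()) {
                System.out.println("Received: " + message.getBody());
                sqs.deleteMessage(queueUrl, message.getReceiptHandle());
            }
        }
    }
}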
I'm using AWS SDK for Java.
Imagine I create an RDS instance as described in the AWS documentation.
AmazonRDS client = AmazonRDSClientBuilder.standard().build();
CreateDBInstanceRequest request = new CreateDBInstanceRequest()
        .withDBInstanceIdentifier("mymysqlinstance")
        .withAllocatedStorage(5)
        .withDBInstanceClass("db.t2.micro")
        .withEngine("MySQL")
        .withMasterUsername("MyUser")
        .withMasterUserPassword("MyPassword");
DBInstance response = client.createDBInstance(request);
If I call getEndpoint() on the returned DBInstance right after making the request, it returns null, because AWS is still creating the database. I need to know this endpoint when it becomes available, but I can't figure out how to do it.
Is there a way, using the AWS SDK, to be notified when the instance has finally been created?
You can use the RDS SNS notifications:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html#USER_Events.Messages
Subscribing to Amazon RDS Event Notification
You can create an Amazon RDS event notification subscription so you can be notified when an event occurs for a given DB instance, DB snapshot, DB security group, or DB parameter group. The simplest way to create a subscription is with the RDS console. If you choose to create event notification subscriptions using the CLI or API, you must create an Amazon Simple Notification Service topic and subscribe to that topic with the Amazon SNS console or Amazon SNS API. You will also need to retain the Amazon Resource Name (ARN) of the topic because it is used when submitting CLI commands or API actions. For information on creating an SNS topic and subscribing to it, see Getting Started with Amazon SNS.
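If you'd rather not set up SNS, another option (not from the quoted docs, just a sketch) is to poll DescribeDBInstances with the same v1 client until the instance reports "available":

import com.amazonaws.services.rds.AmazonRDS;
import com.amazonaws.services.rds.AmazonRDSClientBuilder;
import com.amazonaws.services.rds.model.DBInstance;
import com.amazonaws.services.rds.model.DescribeDBInstancesRequest;

public class WaitForEndpoint {
    public static void main(String[] args) throws InterruptedException {
        AmazonRDS client = AmazonRDSClientBuilder.standard().build();
        DescribeDBInstancesRequest describe = new DescribeDBInstancesRequest()
                .withDBInstanceIdentifier("mymysqlinstance");

        while (true) {
            DBInstance instance = client.describeDBInstances(describe)
                    .getDBInstances().get(0);
            if ("available".equals(instance.getDBInstanceStatus())) {
                System.out.println("Endpoint: " + instance.getEndpoint().getAddress());
                break;
            }
            Thread.sleep(30_000); // creation can take several minutes
        }
    }
}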
Disclaimer: Opinionated Answer
IMO creating infrastructure at runtime in code like this is devil's work. Stacks are the way to go here; they're much more modular, and you will get some of the following benefits:
If you start creating more than one table per customer, you will be able to logically group them into a stack and clean them up more easily as needed
If for some reason the creation of a resource fails you can see this very easily in the stack console
Management is much easier, as searching through stacks uses a console already built for you
Updating a stack in AWS is much easier as well than updating tables individually
MOST IMPORTANT: If an error occurs, the stack functionality already has rollback and redundancy built in, and you control its behaviour. If something goes wrong in your code during your onboarding process, it will be a mess to clean up: what if one table succeeded and the other didn't? You will have to trawl through logs (if they exist) to find out what happened.
You can also combine this approach with something like AWS CodePipeline or even AWS Simple Workflow Service to add custom steps to your custom onboarding process, e.g. run a Lambda function, send a notification when completed, wait for some payment. This builds on my last point: if the pipeline does fail, you will be able to see which step failed and why. You will also be able to see if things time out.
Lastly, I want to advise caution in creating infrastructure per customer. It's much more work and adds a lot more ways in which things can break. Also make sure you put limits in place in AWS so you don't end up in a situation in which your bill sky-rockets because of some bug creating infrastructure.
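For illustration, per-customer stack creation might look something like this with the v1 SDK (the template URL, stack naming, and parameter names are assumptions):

import com.amazonaws.services.cloudformation.AmazonCloudFormation;
import com.amazonaws.services.cloudformation.AmazonCloudFormationClientBuilder;
import com.amazonaws.services.cloudformation.model.CreateStackRequest;
import com.amazonaws.services.cloudformation.model.Parameter;

public class CustomerOnboarding {
    public static void createCustomerStack(String customerId) {
        AmazonCloudFormation cfn = AmazonCloudFormationClientBuilder.standard().build();
        CreateStackRequest request = new CreateStackRequest()
                .withStackName("customer-" + customerId)
                .withTemplateURL("https://s3.amazonaws.com/my-bucket/customer-tables.yaml")
                .withParameters(new Parameter()
                        .withParameterKey("CustomerId")
                        .withParameterValue(customerId))
                // roll back automatically if any resource fails to create
                .withOnFailure("ROLLBACK");
        cfn.createStack(request);
    }
}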