How to make AWS Lambda run continuously - Java

I have an AWS Lambda function deployed through the console. Since it should consume messages from some external queue automatically, I want it to keep running continuously instead of being scheduled.
How can I make AWS Lambda run continuously?

You can add a trigger to your Lambda and set the SQS queue you want it to respond to. In the AWS console (on the web), you can do this either from the Lambda function itself or from SQS (I'd advise the latter). The console will guide you through the details of setting up the proper permissions.
More info on the setup:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-lambda-function-trigger.html
Some general info on consuming SQS messages in Lambda:
https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
Your preferred programming language probably has a client library that implements the API for you.
IMPORTANT: If you want to process your queue sequentially, make sure you set the reserved concurrency of your Lambda to 1 (setting it to 0 would stop the function from being invoked at all).
Also, if you use CLI or other automatic deploy tools, make sure to update your config files, so you don't overwrite your Lambda's settings on deploy.
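As a rough illustration, here is a minimal sketch of the Java handler for an SQS-triggered Lambda, using the aws-lambda-java-events library (the class name is just an example):

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;

// Lambda invokes this handler once per batch of SQS messages.
public class QueueConsumer implements RequestHandler<SQSEvent, Void> {
    @Override
    public Void handleRequest(SQSEvent event, Context context) {
        for (SQSEvent.SQSMessage message : event.getRecords()) {
            // Each record is one message taken off the queue.
            context.getLogger().log("Processing message: " + message.getBody());
            // ... your business logic here ...
        }
        // Returning normally marks the batch as succeeded, so Lambda
        // deletes the messages from the queue for you.
        return null;
    }
}
```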
EDIT: when you say external queue, do you mean a non-SQS queue?
I guess it could still be done with another system. The best way is to raise an event: trigger the Lambda with an HTTP request when a message is added. If for some reason you can't do this, you could add a schedule for your Lambda and run it, say, every 5 minutes. More info on scheduling: https://docs.aws.amazon.com/eventbridge/latest/userguide/run-lambda-schedule.html

If, instead of an external queue, you can use the SQS service provided by AWS, you could set that queue as a trigger for your Lambda function and have it execute whenever a new item is placed on the queue.

Related

How do I control TaskProtection when using Java Spring and AWS Autoscaling?

I am working on a Java Spring web service using @SqsListener to handle items on an SQS queue. My cluster is configured to start and stop tasks based on the depth of the SQS queue. This part works fine.
I am worried about a task being terminated while the previously dequeued item is still being processed.
Does Spring support this in any way? Is there a preferred way of handling this scenario?
You have to implement that yourself in your tasks. Basically it is done through ECS task termination protection, and AWS provides examples of how to do it:
Elastic Container Service (ECS) Task Protection Examples
The examples are not for Spring, but you have to implement something similar yourself in your application.
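The gist of those examples is to toggle protection from inside the task around each unit of work. Here is a hedged sketch in plain Java (11+), assuming the ECS agent's task-protection endpoint exposed via the ECS_AGENT_URI environment variable; the 60-minute expiry is illustrative:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TaskProtection {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Toggles ECS task scale-in protection by PUTting to the agent endpoint.
    static void setProtected(boolean enabled) throws Exception {
        String agentUri = System.getenv("ECS_AGENT_URI"); // injected by ECS
        String body = String.format(
            "{\"ProtectionEnabled\": %b, \"ExpiresInMinutes\": 60}", enabled);
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(agentUri + "/task-protection/v1/state"))
            .header("Content-Type", "application/json")
            .method("PUT", HttpRequest.BodyPublishers.ofString(body))
            .build();
        HttpResponse<String> response =
            CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        // A non-2xx status means the protection state was not changed.
        System.out.println("ECS agent replied: " + response.body());
    }
}
```

In your @SqsListener method you would call setProtected(true) before processing a dequeued item and setProtected(false) once it finishes, so a scale-in event waits for in-flight work.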

AWS Lambda function used with a DLQ

I have an SQS queue with a DLQ configured for failures. I want to write a custom Lambda function in Java (Spring Boot) to get the messages from this DLQ, write them to a file, upload the file to an S3 bucket, and send the file as an alert to a specified webhook.
I'm new to Lambda and I hope this design can be implemented.
One requirement is that I want to execute the Lambda only once per day, say at 6:00 am every day, and have all the messages in the queue written to a file.
I'm trying to find examples of RequestHandler implementations where the messages in the queue are received and iterated over to be saved to the file one at a time.
I'm not sure how to configure the Lambda so that it runs only once per day instead of each time a message enters the DLQ.
Any documentation relating to these queries will be really helpful. Please critique my expected implementation and offer any better solutions for the same.
You can have your Lambda code run on any schedule (once per day in your case) using a CloudWatch Events schedule.
To create the schedule, follow this link
In your Lambda code, you can fetch the messages from the DLQ and process them iteratively.
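A minimal sketch of that fetch-and-iterate loop, assuming the AWS SDK for Java v2 and a hypothetical DLQ_URL environment variable holding the queue URL:

```java
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest;
import software.amazon.awssdk.services.sqs.model.Message;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;
import java.util.List;

public class DlqDrainer {
    private final SqsClient sqs = SqsClient.create();
    private final String queueUrl = System.getenv("DLQ_URL"); // hypothetical

    public void drain() {
        while (true) {
            List<Message> messages = sqs.receiveMessage(
                ReceiveMessageRequest.builder()
                    .queueUrl(queueUrl)
                    .maxNumberOfMessages(10) // SQS caps one receive at 10
                    .waitTimeSeconds(2)
                    .build())
                .messages();
            if (messages.isEmpty()) {
                break; // the queue is drained
            }
            for (Message message : messages) {
                // ... append message.body() to the file / S3 object ...
                sqs.deleteMessage(DeleteMessageRequest.builder()
                    .queueUrl(queueUrl)
                    .receiptHandle(message.receiptHandle())
                    .build());
            }
        }
    }
}
```

Keep an eye on the Lambda timeout if the DLQ can grow large; a day's worth of messages has to fit in one invocation with this approach.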
You don't need the Spring framework in AWS Lambda; plain Java is enough.
Use Lambda with a CloudWatch cron expression and schedule a daily run (for example, cron(0 6 * * ? *) fires at 6:00 AM UTC every day).
Write your own logic; these may help:
https://docs.aws.amazon.com/lambda/latest/dg/java-samples.html
https://www.freecodecamp.org/news/using-lambda-functions-as-cronjobs/
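For completeness, a hedged sketch of the scheduled entry point, assuming the aws-lambda-java-events library and a drain routine like the DlqDrainer sketched in the answer above:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.ScheduledEvent;

// Invoked once per day by the CloudWatch/EventBridge cron rule.
public class DailyDlqHandler implements RequestHandler<ScheduledEvent, Void> {
    private final DlqDrainer drainer = new DlqDrainer();

    @Override
    public Void handleRequest(ScheduledEvent event, Context context) {
        context.getLogger().log("Daily DLQ drain triggered at " + event.getTime());
        drainer.drain();
        return null;
    }
}
```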

How to make an IBM MQ listener process slowly using a Spring Java application

We are trying to migrate our legacy system to microservices.
In our PaaS environment, we have scheduler jobs that trigger and put messages into MQ one by one, and we have an MQ listener in our microservice that gets each message, creates a request, and sends it to an external party.
Here is the problem: our microservice is capable of making asynchronous calls to the external service, but the external service cannot handle asynchronous calls, so it returns wrong data.
For example, we are hitting the external service with 40 to 60 requests per minute, while the external service can handle only 6 requests per minute.
So how can I make the MQ listener process more slowly?
I have tried reducing setMaxConcurrentConsumers to 1 and
used observable.toBlocking().single() to make the processing run in only one thread.
We use RxJava in our microservice.
It sounds like either your microservice or the external service is not following the use case for request-reply messaging.
(1) Is the external service setting the reply message's Correlation ID from the request message's Message ID?
(2) Is your microservice performing an MQGET with the match option of getting by Correlation ID?
You can blame the external service for the error, but if your microservice is actually picking up the wrong message then it is your application's fault. I.e., does your microservice simply get the "next" message on the queue?
Read this answer: How to match MQ Server reply messages to the correct request
Here's an explanation (it looks like it's from the 90s, but it has good information): https://www.enterpriseintegrationpatterns.com/patterns/messaging/RequestReplyJmsExample.html
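For illustration, the requester side of that pattern in plain JMS looks roughly like this (the destinations and the timeout are illustrative):

```java
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class RequestReply {
    public String requestAndWait(Session session, Queue requestQ, Queue replyQ,
                                 String payload) throws JMSException {
        MessageProducer producer = session.createProducer(requestQ);
        TextMessage request = session.createTextMessage(payload);
        request.setJMSReplyTo(replyQ);
        producer.send(request);

        // The selector guarantees we only pick up the reply whose
        // Correlation ID matches this request's Message ID, never
        // just the "next" message on the queue.
        String selector = "JMSCorrelationID = '" + request.getJMSMessageID() + "'";
        MessageConsumer consumer = session.createConsumer(replyQ, selector);
        TextMessage reply = (TextMessage) consumer.receive(30_000);
        return reply != null ? reply.getText() : null;
    }
}
```

The external service's side of the contract is to copy the request's Message ID into the reply's JMSCorrelationID before sending it back.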
As a long-term approach, we are planning to migrate the external service as well.
In the short term I have fixed it using observable.toBlocking().single(), Thread.sleep(), and setMaxConcurrentConsumers(1), so only one thread runs at a time, which avoids asynchronous calls to the external service. The sleep time will be set dynamically based on some analysis of the external service.
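For anyone with the same problem, a hedged sketch of that short-term fix in Spring JMS terms: pin the listener container to a single consumer and pause after each call (the queue name and the 10-second delay are illustrative; it assumes @EnableJms is configured, and Spring Boot 3+ uses jakarta.jms instead of javax.jms):

```java
import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.stereotype.Component;

@Configuration
class MqThrottleConfig {
    @Bean
    public DefaultJmsListenerContainerFactory throttledFactory(ConnectionFactory cf) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(cf);
        // "1-1" pins both min and max concurrent consumers to one thread.
        factory.setConcurrency("1-1");
        return factory;
    }
}

@Component
class ThrottledListener {
    @JmsListener(destination = "DEV.QUEUE.1", containerFactory = "throttledFactory")
    public void onMessage(String payload) throws InterruptedException {
        callExternalService(payload); // blocking call to the external party
        // Pause so the external service sees at most ~6 requests/minute.
        Thread.sleep(10_000);
    }

    private void callExternalService(String payload) {
        // ... synchronous HTTP call ...
    }
}
```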

RabbitMQ and Application Decoupling

I need to set up RabbitMQ in an attempt to redesign our architecture using asynchronous messaging.
Existing Application Flow:
JEE web application (via browser) creates a new thread.
This thread creates a new OS process to invoke a Perl script to do some processing.
Perl script writes its output in a file and the control comes back to the thread.
The thread then reads the output file and loads the results to the database.
The control passes to the servlet which displays the result to the UI.
All of this is synchronous and time-consuming, and we need to convert it to asynchronous messaging communication.
Now, I am planning to break this down to the following different components but not sure if this would work with RabbitMQ:
Application Breakdown:
JEE Web Application which is the Producer for RabbitMQ
Separate the Perl script into its own application that supports RabbitMQ communication. This Perl client will consume the message, process it, and place a new message in RabbitMQ for the next step
Separate the output-file-to-database loader into its own Java application that supports RabbitMQ communication. This would consume the message from the queue corresponding to the Perl client's message from the previous step.
This way, the output would be available in the database and the asynchronous flow would be completed.
Is it possible to separate the applications this way and have it work with RabbitMQ?
Are there any better ways to do this?
Please suggest some framework components for RabbitMQ and Perl
I'd appreciate your inputs on this.
Yes, you can do it that way. If it's not too much work, I'd include the database load in the Perl step; that probably avoids having to handle an intermediate file, but I don't know if that's viable for your project.
In order to use RabbitMQ from Perl, I'd recommend the AnyEvent::RabbitMQ CPAN module. As the documentation states, you can use AnyEvent::RabbitMQ to:
Declare and delete exchanges
Declare, delete, bind and unbind queues
Set QoS and confirm mode
Publish, consume, get, ack, recover and reject messages
Select, commit and rollback transactions
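For the Java side of your breakdown (the JEE producer in step 1, and similarly the loader in step 3), a minimal sketch using the official RabbitMQ Java client (com.rabbitmq:amqp-client); the queue name and host are illustrative:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class JobProducer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // durable = true so queued jobs survive a broker restart
            channel.queueDeclare("perl-jobs", true, false, false, null);
            channel.basicPublish("", "perl-jobs", null,
                "input for the Perl script".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```

The Perl consumer would declare the same queue with AnyEvent::RabbitMQ, consume each message, run the processing, and publish its result to a second queue that the Java loader consumes.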

Configuring a Storm cluster for production

We have configured a Storm cluster with one Nimbus server and three supervisors, and published three topologies that do different calculations, as follows:
Topology1: Reads raw data from MongoDB, does some calculations, and stores back the result
Topology2: Reads the result of topology1, does some calculations, and publishes the results to a queue
Topology3: Consumes the output of topology2 from the queue, calls a REST service, gets the reply from the REST service, updates the result in a MongoDB collection, and finally sends an email.
As a newbie to Storm, I'm looking for expert advice on the following questions:
Is there a way to externalize all configuration, for example into a config.json that can be referenced by all topologies?
Currently the configuration for connecting to MongoDB, MySQL, the MQ, and the REST URLs is hard-coded in the Java files. It is not good practice to customize source files for each customer.
We want to log at each stage [spouts and bolts]. Where should the log4j.xml be placed/stored so it can be used by the cluster?
Is it right to execute a blocking call like a REST call from a bolt?
Any help would be much appreciated.
Since each topology is just a Java program, simply pass the configuration as arguments to the Java jar, or pass a path to a config file. The topology can read the file at startup and pass any configuration to components as it instantiates them.
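As a rough sketch of that startup step, assuming Jackson for JSON parsing and the org.apache.storm.Config class (the package is backtype.storm in very old Storm releases); the file layout and keys are illustrative:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.storm.Config;
import java.io.File;
import java.util.Map;

public class TopologyLauncher {
    public static void main(String[] args) throws Exception {
        // args[0] is the path to config.json, supplied when submitting the jar
        Map<String, Object> settings =
            new ObjectMapper().readValue(new File(args[0]), Map.class);

        Config stormConf = new Config();
        // Entries placed in the Storm config are shipped to every worker
        // and are visible in each bolt/spout's prepare()/open() conf map.
        stormConf.put("mongo.uri", settings.get("mongo.uri"));
        stormConf.put("rest.url", settings.get("rest.url"));
        // ... build the topology and submit it with stormConf ...
    }
}
```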
Storm uses slf4j out of the box, and it should be easy to use within your topology as such. If you use the default configuration, you should be able to see logs either through the UI, or dumped to disk. If you can't find them, there are a number of guides to help, e.g. http://www.saurabhsaxena.net/how-to-find-storm-worker-log-directory/.
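Using slf4j from a component is just the standard pattern, e.g.:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CalculationStage {
    private static final Logger LOG = LoggerFactory.getLogger(CalculationStage.class);

    public void process(Object tuple) {
        // Parameterized logging avoids building the string when the
        // level is disabled.
        LOG.info("Processing tuple: {}", tuple);
    }
}
```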
With Storm, you have the flexibility to push concurrency out to the component level and get multiple executors by instantiating multiple bolts. This is likely the simplest approach, and I'd advise you to start there, and only later introduce the complexity of an executor inside your topology for making HTTP calls asynchronously.
See http://storm.apache.org/documentation/Understanding-the-parallelism-of-a-Storm-topology.html for the canonical overview of parallelism in Storm. Start simple, and then tune as necessary, as with anything.
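A rough sketch of that component-level concurrency (QueueSpout and RestCallBolt are hypothetical stand-ins for your existing queue-reading spout and REST-calling bolt):

```java
import org.apache.storm.topology.TopologyBuilder;

public class ParallelismExample {
    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("queue-spout", new QueueSpout(), 1);
        // A parallelism hint of 4 gives the bolt four executors, so each
        // blocking REST call only stalls its own thread.
        builder.setBolt("rest-caller", new RestCallBolt(), 4)
               .shuffleGrouping("queue-spout");
        // ... build and submit the topology with your Config as usual ...
    }
}
```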
