I'm building a cloud stream application using Spring with Azure Event Hub and Service Bus.
In my use case I'm trying to achieve the following functionality:
The application receives messages from a single binder (Event Hub)
Process the messages in a few steps, step A, B, C for example
Each step in the process creates objects
Stream the objects created in each step to different binders; if sending messages fails in any step, don't proceed
The question is: is the message sending synchronous or asynchronous? Will it wait until all messages are sent in step A before executing the next steps?
To make sure the message production happens synchronously, you can simply set the spring.cloud.stream.eventhub.bindings.<channel-name>.producer.sync property to true.
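For example, assuming a hypothetical output binding named step-a-output, the application.properties entries might look like this (the binding and destination names are made up):

spring.cloud.stream.bindings.step-a-output.destination=step-a-events
spring.cloud.stream.eventhub.bindings.step-a-output.producer.sync=true

With sync set to true the producer blocks on each send until it is acknowledged, so all of step A's messages are confirmed before the next step runs.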
The related documentation can be found here.
You can see the property referenced in the code here and here.
The requirement is to create a background listener process that will receive and process messages from the Service Bus subscriptions.
I have searched many resources but couldn't find a solution. The messages should not be received in RECEIVE_AND_DELETE mode but in PEEK_LOCK mode, so that we can abandon them whenever an error occurs in the processing logic and they will be re-queued.
Service Bus tier: Standard.
Thank you.
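For reference, a minimal sketch of such a background listener with the azure-messaging-servicebus SDK might look like the following (the connection string, topic/subscription names, and the process call are placeholders):

ServiceBusProcessorClient processor = new ServiceBusClientBuilder()
        .connectionString(connectionString) // placeholder
        .processor()
        .topicName("my-topic")
        .subscriptionName("my-subscription")
        .receiveMode(ServiceBusReceiveMode.PEEK_LOCK)
        .disableAutoComplete() // settle messages explicitly
        .processMessage(context -> {
            try {
                process(context.getMessage()); // application-specific logic (placeholder)
                context.complete();            // success: remove the message
            } catch (Exception e) {
                context.abandon();             // failure: release the lock so the message is redelivered
            }
        })
        .processError(context -> System.err.println(context.getException()))
        .buildProcessorClient();

processor.start(); // runs as a background listener until stop() is called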
I have a scenario in a Spring Boot application: I will be getting requests from other applications via a REST "Run Process" service, and these will be placed on MQ queues. Consumers then process them one by one, and each consumer calls another REST "Initiate Request" service which takes around 10 minutes to return a response.
I am looking for ideas/solutions where I can fire the REST "Initiate Request" service, forget it, and then stop the consumer. Once "Initiate Request" completes its processing, the system will send an event notification indicating that the process has completed/failed. Based on this I would like to proceed with the next queue item. Is there a way to stop and start consumers based on that notification, to avoid long-running threads (see the sketch after the list below)? If you have come across this problem, let me know how you resolved it.
There are other solutions, like:
Consumers persist the data to a database and process it row by row.
Using WebFlux we can avoid the JMS consumer thread, but not the consumer's blocking REST call.
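Regarding stopping and starting consumers on notification, a minimal sketch with Spring JMS might look like this (the listener id, queue name, and initiateRequest are placeholders; wiring the completion event into onProcessFinished is left out):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.config.JmsListenerEndpointRegistry;
import org.springframework.stereotype.Component;

@Component
public class RunProcessConsumer {

    @Autowired
    private JmsListenerEndpointRegistry registry;

    @JmsListener(id = "runProcess", destination = "run.process.queue")
    public void onMessage(String payload) {
        // stop consuming before firing the long-running call; the container
        // finishes this message and then stops pulling new ones
        registry.getListenerContainer("runProcess").stop();
        initiateRequest(payload); // fire-and-forget REST call (assumed non-blocking)
    }

    // called when the completed/failed event notification arrives
    public void onProcessFinished() {
        registry.getListenerContainer("runProcess").start(); // resume with the next queue item
    }

    private void initiateRequest(String payload) { /* omitted */ }
}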
I am not able to find a way to send/broadcast a message to all application instances in Pivotal Cloud Foundry. How can we notify all app instances of some event? If we use an HTTP request, the PCF router will dispatch it to a single instance of the app. How can we solve this problem?
What @Florian said is probably the safer option, but if you want something quick and easy, you can send HTTP requests directly to an app instance by using the X-CF-APP-INSTANCE header. The format for the header is YOUR-APP-GUID:YOUR-INSTANCE-INDEX.
https://docs.cloudfoundry.org/concepts/http-routing.html#app-instance-routing
So given an app GUID, you could iterate over the number of instances, say 0 to 5, and send an HTTP request to each one. Make sure to check the response to confirm that each one succeeded.
This also requires that you know the app GUID for your app (i.e. cf app <name> --guid) and the number of instances of your app.
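A rough sketch with Java's built-in HTTP client, assuming you already have the app GUID, the instance count, and the app's route (all placeholders here):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Broadcast {
    public static void main(String[] args) throws Exception {
        String appGuid = "YOUR-APP-GUID"; // from: cf app <name> --guid
        int instanceCount = 5;            // known number of instances
        HttpClient client = HttpClient.newHttpClient();

        for (int i = 0; i < instanceCount; i++) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://my-app.example.com/api/event")) // placeholder route
                    .header("X-CF-APP-INSTANCE", appGuid + ":" + i)          // GUID:INDEX
                    .POST(HttpRequest.BodyPublishers.ofString("{\"event\":\"refresh\"}"))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            // check each response to confirm the instance received the event
            if (response.statusCode() >= 300) {
                System.err.println("Instance " + i + " failed: " + response.statusCode());
            }
        }
    }
}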
CF, out of the box, does not provide any event queue mechanism that apps can subscribe to.
What I would do (assuming you have two app instances, A and B):
Provide an event endpoint in your application code, e.g. POST /api/event (alternatively, if the event should arise from another app, e.g. another microservice, that app could put messages directly onto the queue)
All app instances listen on an internal event queue for new events
Instance A receives the call from the CF router and processes it by publishing an event on the internal event queue; the instance does not react to the event yet
When A publishes the event, both A and B receive it and process it accordingly
Now, which internal event queue you can use depends highly on your deployment. On AWS you can probably use SQS or SNS or something similar. PCF, as far as I know, also provides a messaging service that would suit here: RabbitMQ. You could also use features of other services that allow you to subscribe to events, such as Redis (pub/sub commands) or similar (see the sketch below).
If you provide more concrete information about what you want to achieve, a more detailed answer would be possible, though.
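To make the Redis option concrete, a sketch with the Jedis client might look like this (the host, channel name, and handleEvent are placeholders):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class EventFanout {

    // every instance subscribes on startup; subscribe() blocks, so run it on its own thread
    public static void listen() {
        new Thread(() -> {
            try (Jedis subscriber = new Jedis("redis-host", 6379)) {
                subscriber.subscribe(new JedisPubSub() {
                    @Override
                    public void onMessage(String channel, String message) {
                        handleEvent(message); // every instance, including the publisher, reacts here
                    }
                }, "app-events");
            }
        }).start();
    }

    // the instance that received POST /api/event publishes to all subscribers
    public static void publish(String event) {
        try (Jedis publisher = new Jedis("redis-host", 6379)) {
            publisher.publish("app-events", event);
        }
    }

    private static void handleEvent(String message) { /* application-specific */ }
}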
I'm using AWS SDK for Java.
Imagine I create an RDS instance as described in the AWS documentation.
AmazonRDS client = AmazonRDSClientBuilder.standard().build();

CreateDBInstanceRequest request = new CreateDBInstanceRequest()
        .withDBInstanceIdentifier("mymysqlinstance")
        .withAllocatedStorage(5)
        .withDBInstanceClass("db.t2.micro")
        .withEngine("MySQL")
        .withMasterUsername("MyUser")
        .withMasterUserPassword("MyPassword");

DBInstance response = client.createDBInstance(request);
If I call response.getEndpoint() right after making the request, it will return null, because AWS is still creating the database. I need to know the endpoint when it becomes available, but I can't figure out how to do it.
Is there a way, using the AWS SDK, to be notified when the instance was finally created?
You can use the RDS SNS notifications:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html#USER_Events.Messages
Subscribing to Amazon RDS Event Notification
You can create an Amazon RDS event notification subscription so you can be notified when an event occurs for a given DB instance, DB snapshot, DB security group, or DB parameter group. The simplest way to create a subscription is with the RDS console. If you choose to create event notification subscriptions using the CLI or API, you must create an Amazon Simple Notification Service topic and subscribe to that topic with the Amazon SNS console or Amazon SNS API. You will also need to retain the Amazon Resource Name (ARN) of the topic because it is used when submitting CLI commands or API actions. For information on creating an SNS topic and subscribing to it, see Getting Started with Amazon SNS.
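As a sketch of the API route, reusing the client from the question above (the subscription name, topic ARN, and event category are assumptions; CreateEventSubscriptionRequest lives in com.amazonaws.services.rds.model):

CreateEventSubscriptionRequest subscription = new CreateEventSubscriptionRequest()
        .withSubscriptionName("mydb-events")                              // placeholder name
        .withSnsTopicArn("arn:aws:sns:us-east-1:123456789012:rds-events") // placeholder ARN
        .withSourceType("db-instance")
        .withSourceIds("mymysqlinstance")
        .withEventCategories("creation"); // notify on instance-creation events

client.createEventSubscription(subscription);

When the creation event reaches the topic's subscribers, a describeDBInstances call for mymysqlinstance should then return the populated endpoint.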
Disclaimer: Opinionated Answer
IMO creating infrastructure at runtime in code like this is devil's work. Stacks (i.e. CloudFormation) are the way to go here: they are much more modular, and you will get some of the following benefits:
If you start creating more than one table per customer, you will be able to logically group them into a stack and clean them up more easily as needed
If for some reason the creation of a resource fails, you can see this very easily in the stack console
Management is much easier, as you can search through stacks in a console that is already built for you
Updating a stack in AWS is also much easier than updating tables individually
MOST IMPORTANT: If an error occurs, the stack functionality already has rollback and redundancy built in, and you control its behaviour. If something goes wrong in your code during your onboarding process, it will be a mess to clean up: what if one table succeeded and the other didn't? You will have to trawl through logs (if they exist) to find out what happened.
You can also combine this approach with something like AWS Pipelines or even AWS Simple Workflow Service to add custom steps to your custom onboarding process, e.g. run a Lambda function, send a notification when completed, or wait for some payment. This builds on my last point: if the pipeline does fail, you will be able to see which step failed and why. You will also be able to see if things time out.
Lastly, I want to advise caution in creating infrastructure per customer. It's much more work and adds a lot more ways in which things can break. Also make sure you put limits in place in AWS so that you don't end up in a situation where your bill sky-rockets because of some bug creating infrastructure.
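To illustrate the stack approach in the same Java SDK, a per-customer stack creation might look roughly like this (the stack name, template URL, and parameter are made up):

import com.amazonaws.services.cloudformation.AmazonCloudFormation;
import com.amazonaws.services.cloudformation.AmazonCloudFormationClientBuilder;
import com.amazonaws.services.cloudformation.model.CreateStackRequest;
import com.amazonaws.services.cloudformation.model.OnFailure;
import com.amazonaws.services.cloudformation.model.Parameter;

String customerId = "acme-42"; // placeholder
AmazonCloudFormation cfn = AmazonCloudFormationClientBuilder.standard().build();

CreateStackRequest request = new CreateStackRequest()
        .withStackName("customer-" + customerId) // one stack groups all of a customer's resources
        .withTemplateURL("https://s3.amazonaws.com/my-bucket/customer-stack.yaml") // placeholder template
        .withParameters(new Parameter()
                .withParameterKey("CustomerId")
                .withParameterValue(customerId))
        .withOnFailure(OnFailure.ROLLBACK); // the built-in rollback behaviour mentioned above

cfn.createStack(request);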
I'm starting out with Grails and want to build a sample application.
Below is the flow of the application I'm envisioning. I'll follow up with questions.
The flow of the app:
User uploads a file
controller gets the file and just sends a response back saying "uploaded"
File is put in a JMS queue
Java service running separately fetches the file from the queue and processes it (just reads the first word)
Java service puts the response back (where does it put the response?)
Grails app will read the response and present it to the user
Questions
Where does the Java service put the data after reading the file?
How does the Grails app read the data put there by the Java service?
Is there something missing from my understanding? I plan to use the Grails JMS plugin and ActiveMQ.
Can something be improved in this architecture? This is a prototype I'm putting together for a bigger application.
I would really appreciate any articles/tutorials with an example of a simple app like the one above.
In your case JMS is used in a synchronous way, so whether you can do this depends on your JMS provider. If the JMS provider is capable of synchronous communication, you put the answer into a reply queue after the file processing.
In the synchronous JMS way, the calling side waits for a response from the JMS provider, so the response from the service can be presented back to the controller and then to the user...
So:
User uploads a file
Controller gets the file, sends it to the JMS queue, and waits for a response!
Java service running separately fetches the file from the queue and processes it (just reads the first word)
Java service puts the response back in a reply queue
Controller gets the response from the reply queue and presents it to the user
Your page could be a nice Ajax page that shows the user a processing spinner while it waits.
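A bare-bones sketch of that round trip with plain JMS and ActiveMQ (the broker URL, queue name, and file path are placeholders; the Grails JMS plugin builds on the same API):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FileRequestClient {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // send the request with a temporary queue as the reply destination
        Queue requestQueue = session.createQueue("file.process");
        TemporaryQueue replyQueue = session.createTemporaryQueue();
        TextMessage request = session.createTextMessage("/uploads/file-123.txt"); // placeholder
        request.setJMSReplyTo(replyQueue);
        session.createProducer(requestQueue).send(request);

        // block until the Java service puts its answer on the reply queue
        Message reply = session.createConsumer(replyQueue).receive(30_000); // 30s timeout
        if (reply != null) {
            System.out.println("First word: " + ((TextMessage) reply).getText());
        }
        connection.close();
    }
}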