Any patterns to follow to share huge DB data over REST - java

I have a situation where I need to share huge data (probably billions of records) over REST. Is there a particular design pattern to follow? I read about the scroll API available in Elasticsearch, where you can initialize a scroll once and then use it to retrieve subsequent data in batches.
I am using a MySQL database where all my data is stored, and I need to share all of it over REST services. I am using Spring Boot for the REST layer.
I have thought of a way where I open a connection, start fetching the data and write it to a blocking queue in a producer-consumer pattern, and then give back a handle (id) to the client for subsequent calls that fetch the next batch.
Is there a better way to achieve this?
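To illustrate the batched retrieval idea, here is a minimal sketch assuming Spring Boot with JdbcTemplate and an indexed numeric primary key; the endpoint, table and column names are hypothetical. The last id the client has seen acts as the "handle" for the next batch (keyset pagination), which avoids expensive OFFSET scans in MySQL.

// A hypothetical scroll-style endpoint: the client passes the last id it has
// seen and receives the next batch, using keyset pagination on the primary key.
import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ScrollController {

    private final JdbcTemplate jdbcTemplate;

    public ScrollController(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @GetMapping("/records")
    public List<Map<String, Object>> nextBatch(
            @RequestParam(defaultValue = "0") long afterId,
            @RequestParam(defaultValue = "1000") int batchSize) {
        // WHERE id > ? ... LIMIT ? uses the primary-key index, so each batch is
        // cheap even when the table holds billions of rows.
        return jdbcTemplate.queryForList(
                "SELECT * FROM records WHERE id > ? ORDER BY id LIMIT ?",
                afterId, batchSize);
    }
}

A streaming response (writing the JDBC result set straight to the HTTP output) is another common option when a single client can consume one very large response.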

Related

Best Practice for Kafka rollback scenario in microservices [duplicate]

We have a micro-services architecture, with Kafka used as the communication mechanism between the services. Some of the services have their own databases. Say the user makes a call to Service A, which should result in a record (or set of records) being created in that service’s database. Additionally, this event should be reported to other services, as an item on a Kafka topic. What is the best way of ensuring that the database record(s) are only written if the Kafka topic is successfully updated (essentially creating a distributed transaction around the database update and the Kafka update)?
We are thinking of using spring-kafka (in a Spring Boot WebFlux service), and I can see that it has a KafkaTransactionManager, but from what I understand this is more about Kafka transactions themselves (ensuring consistency across the Kafka producers and consumers), rather than synchronising transactions across two systems (see here: “Kafka doesn't support XA and you have to deal with the possibility that the DB tx might commit while the Kafka tx rolls back.”). Additionally, I think this class relies on Spring’s transaction framework which, at least as far as I currently understand, is thread-bound, and won’t work if using a reactive approach (e.g. WebFlux) where different parts of an operation may execute on different threads. (We are using reactive-pg-client, so are manually handling transactions, rather than using Spring’s framework.)
Some options I can think of:
Don’t write the data to the database: only write it to Kafka. Then use a consumer (in Service A) to update the database. This seems like it might not be the most efficient, and will have problems in that the service which the user called cannot immediately see the database changes it should have just created.
Don’t write directly to Kafka: write to the database only, and use something like Debezium to report the change to Kafka. The problem here is that the changes are based on individual database records, whereas the business significant event to store in Kafka might involve a combination of data from multiple tables.
Write to the database first (if that fails, do nothing and just throw the exception). Then, when writing to Kafka, assume that the write might fail. Use the built-in auto-retry functionality to get it to keep trying for a while. If that eventually completely fails, try to write to a dead letter queue and create some sort of manual mechanism for admins to sort it out. And if writing to the DLQ fails (i.e. Kafka is completely down), just log it some other way (e.g. to the database), and again create some sort of manual mechanism for admins to sort it out.
Anyone got any thoughts or advice on the above, or able to correct any mistakes in my assumptions above?
Thanks in advance!
I'd suggest using a slightly altered variant of approach 2.
Write into your database only, but in addition to the actual table writes, also write "events" into a special table within that same database; these event records would contain the aggregations you need. In the simplest case, you'd insert another entity, e.g. mapped by JPA, which contains a JSON property with the aggregate payload. Of course this could be automated by some means of a transaction listener / framework component.
Then use Debezium to capture the changes just from that table and stream them into Kafka. That way you have both: eventually consistent state in Kafka (the events in Kafka may trail behind or you might see a few events a second time after a restart, but eventually they'll reflect the database state) without the need for distributed transactions, and the business level event semantics you're after.
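For illustration, a minimal sketch of such an "event"/outbox entity, assuming JPA is used; the field names and the JSON payload shape are purely illustrative:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Lob;

// Hypothetical outbox entity; one row per business event, written in the same
// transaction as the actual table changes.
@Entity
public class OutboxEvent {

    @Id
    @GeneratedValue
    private Long id;

    private String aggregateType;   // e.g. "Order"
    private String aggregateId;     // e.g. the order id
    private String eventType;       // e.g. "OrderCreated"

    @Lob
    private String payload;         // the aggregated data as JSON

    protected OutboxEvent() {
        // required by JPA
    }

    public OutboxEvent(String aggregateType, String aggregateId,
                       String eventType, String payload) {
        this.aggregateType = aggregateType;
        this.aggregateId = aggregateId;
        this.eventType = eventType;
        this.payload = payload;
    }
}

Because the outbox row is persisted in the same transaction as the business data, both commit or roll back together; Debezium then streams only this table, as described above.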
(Disclaimer: I'm the lead of Debezium; funnily enough I'm just in the process of writing a blog post discussing this approach in more detail)
Here are the posts
https://debezium.io/blog/2018/09/20/materializing-aggregate-views-with-hibernate-and-debezium/
https://debezium.io/blog/2019/02/19/reliable-microservices-data-exchange-with-the-outbox-pattern/
First of all, I have to say that I'm neither a Kafka nor a Spring expert, but I think it's more of a conceptual challenge when writing to independent resources, and the solution should be adaptable to your technology stack. Furthermore, I should say that this solution tries to solve the problem without an external component like Debezium, because in my opinion each additional component brings challenges in testing, maintaining and running an application, which is often underestimated when choosing such an option. Also, not every database can be used as a Debezium source.
To make sure that we are talking about the same goals, let's clarify the situation with a simplified airline example, where customers can buy tickets. After a successful order the customer will receive a message (mail, push notification, ...) that is sent by an external messaging system (the system we have to talk to).
In a traditional JMS world with an XA transaction between our database (where we store orders) and the JMS provider, it would look like the following: the client sends the order to our app, where we start a transaction. The app stores the order in its database. Then the message is sent to JMS and we can commit the transaction. Both operations participate in the transaction even though they're talking to their own resources. As the XA transaction guarantees ACID, we're fine.
Let's bring Kafka (or any other resource that is not able to participate in the XA transaction) into the game. As there is no longer a coordinator that syncs both transactions, the main idea of the following is to split the processing into two parts with a persistent state.
When you store the order in your database, you can also store the message (with aggregated data) that you want to send to Kafka afterwards in the same database (e.g. as JSON in a CLOB column). Same resource, so ACID is guaranteed and everything is fine so far. Now you need a mechanism that polls your "KafkaTasks" table for new tasks that should be sent to a Kafka topic (e.g. with a timer service; in Spring the @Scheduled annotation can be used). After the message has been successfully sent to Kafka, you can delete the task entry. This ensures that the message to Kafka is only sent when the order is also successfully stored in the application database. Did we achieve the same guarantees as with an XA transaction? Unfortunately no, as there is still the chance that writing to Kafka works but the deletion of the task fails. In this case the retry mechanism (you would need one, as mentioned in your question) would reprocess the task and send the message twice. If your business case is happy with this "at-least-once" guarantee, you're done here with an (imho) semi-complex solution that could easily be implemented as framework functionality so not everyone has to bother with the details.
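As a rough sketch of that polling mechanism, assuming Spring's @Scheduled, JdbcTemplate and spring-kafka are available; the "kafka_tasks" table and the topic name are hypothetical:

import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class KafkaTaskPublisher {

    private final JdbcTemplate jdbc;
    private final KafkaTemplate<String, String> kafkaTemplate;

    public KafkaTaskPublisher(JdbcTemplate jdbc, KafkaTemplate<String, String> kafkaTemplate) {
        this.jdbc = jdbc;
        this.kafkaTemplate = kafkaTemplate;
    }

    // Poll the task table, publish each pending message and delete the row only
    // after the broker has acknowledged the send.
    @Scheduled(fixedDelay = 5000)
    public void publishPendingTasks() throws Exception {
        List<Map<String, Object>> tasks = jdbc.queryForList(
                "SELECT id, payload FROM kafka_tasks ORDER BY id LIMIT 100");
        for (Map<String, Object> task : tasks) {
            // Blocking send: if it fails, the row stays and is retried on the next run.
            kafkaTemplate.send("orders", String.valueOf(task.get("id")),
                    (String) task.get("payload")).get();
            jdbc.update("DELETE FROM kafka_tasks WHERE id = ?", task.get("id"));
        }
    }
}

Because the task row is deleted only after the blocking send returns, a crash between the send and the delete leads to a resend on the next run, which is exactly the at-least-once behaviour described above.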
If you need “exactly-once” then you cannot store your state in the application database (in this case “deletion of a task” is the “state”) but instead you must store it in Kafka (assuming that you have ACID guarantees between two Kafka topics). An example: Let’s say you have 100 tasks in the table (IDs 1 to 100) and the task job processes the first 10. You write your Kafka messages to their topic and another message with the ID 10 to “your topic”. All in the same Kafka-transaction. In the next cycle you consume your topic (value is 10) and take this value to get the next 10 tasks (and delete the already processed tasks).
If there are easier (in-application) solutions with the same guarantees I’m looking forward to hear from you!
Sorry for the long answer but I hope it helps.
All the approaches described above are good ways to tackle the problem and are well-defined patterns. You can explore them via the links provided below.
Pattern: Transactional outbox
Publish an event or message as part of a database transaction by saving it in an OUTBOX in the database.
http://microservices.io/patterns/data/transactional-outbox.html
Pattern: Polling publisher
Publish messages by polling the outbox in the database.
http://microservices.io/patterns/data/polling-publisher.html
Pattern: Transaction log tailing
Publish changes made to the database by tailing the transaction log.
http://microservices.io/patterns/data/transaction-log-tailing.html
Debezium is a valid answer, but (as I've experienced) it can carry the extra overhead of running an extra pod and making sure that pod doesn't fall over. This could just be me griping about a few back-to-back incidents where pods OOM-errored and didn't come back up, networking rule rollouts dropped some messages, and WAL access to an AWS Aurora DB started behaving oddly... It seems that everything that could have gone wrong, did. I'm not saying Debezium is bad, it's fantastically stable, but for devs, running it often becomes a networking skill rather than a coding skill.
A KISS solution using normal coding techniques that will work 99.99% of the time (and inform you of the 0.01%) would be:
Start a transaction.
Synchronously save to the DB.
-> If that fails, bail out.
Asynchronously send the message to Kafka.
Block until the topic reports that it has received the message.
-> If it times out or fails, abort the transaction.
-> If it succeeds, commit the transaction.
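A minimal sketch of those steps, assuming Spring @Transactional and spring-kafka; the Order entity, its repository and the topic are hypothetical. The blocking send makes a Kafka failure or timeout throw inside the transaction, so the DB write is rolled back:

import java.util.concurrent.TimeUnit;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    private final OrderRepository orderRepository;
    private final KafkaTemplate<String, String> kafkaTemplate;

    public OrderService(OrderRepository orderRepository,
                        KafkaTemplate<String, String> kafkaTemplate) {
        this.orderRepository = orderRepository;
        this.kafkaTemplate = kafkaTemplate;
    }

    // rollbackFor = Exception.class so the checked exceptions thrown by the
    // blocking Kafka send also roll the DB transaction back.
    @Transactional(rollbackFor = Exception.class)
    public void createOrder(Order order) throws Exception {
        // 1. Synchronous save; if this fails, we bail out immediately.
        orderRepository.save(order);

        // 2. Send to Kafka and block until the broker acknowledges or we time out.
        //    Any failure throws, aborting the surrounding transaction.
        kafkaTemplate.send("orders", order.getId(), order.toJson())
                     .get(10, TimeUnit.SECONDS);

        // 3. Returning normally lets Spring commit the DB transaction.
    }
}

// Hypothetical collaborators, shown only to make the sketch self-contained.
interface OrderRepository {
    void save(Order order);
}

class Order {
    private final String id;
    private final String json;
    Order(String id, String json) { this.id = id; this.json = json; }
    String getId() { return id; }
    String toJson() { return json; }
}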
I'd suggest using a newer approach: the 2-phase message. With this approach much less code is needed, and you no longer need Debezium.
https://betterprogramming.pub/an-alternative-to-outbox-pattern-7564562843ae
For this new approach, what you need to do is:
When writing to your database, also write an event record to an auxiliary table.
Submit a 2-phase message to DTM
Write a service to query whether an event is saved in the auxiliary table.
With the help of the DTM SDK, you can accomplish the above 3 steps with 8 lines of Go, much less code than other solutions:
// Compose the 2-phase message and register the branch to invoke.
msg := dtmcli.NewMsg(DtmServer, gid).
	Add(busi.Busi+"/TransIn", &TransReq{Amount: 30})
// Perform the local DB update and submit the message in one step.
err := msg.DoAndSubmitDB(busi.Busi+"/QueryPrepared", db, func(tx *sql.Tx) error {
	return AdjustBalance(tx, busi.TransOutUID, -req.Amount)
})
// Back-check endpoint that DTM calls to find out whether the local transaction committed.
app.GET(BusiAPI+"/QueryPrepared", dtmutil.WrapHandler2(func(c *gin.Context) interface{} {
	return MustBarrierFromGin(c).QueryPrepared(db)
}))
Each of your original options has its disadvantage:
The user cannot immediately see the database changes it has just created.
Debezium will capture the log of the database, which may be much larger than the events you want. Also, deployment and maintenance of Debezium is not an easy job.
The "built-in auto-retry functionality" is not cheap; it may require a lot of code or maintenance effort.

Domino java xpage - caching values server-wide

I have a Java XPages application with a REST service that functions as an API for a rooms & resources database (getting appointments for a specific room, creating them, etc.).
The basic workflow is that an HTTP request is made to a specific REST action, with the room's mail address in the search query. Then, in the Java code, I iterate over all documents of the rooms & resources database until I find a document whose InternetAddress field contains the searched mail address.
This isn't as fast as I would like it to be, and there are multiple queries like this being made all the time.
I'd like to do some sort of caching in my application, so that once a room has been found, its document UNID is stored in a server-wide cache; the next time a request is made for that mail address, I can go directly to the document using getDocumentByUNID(), which I think should be way faster than searching the entire database.
Is it possible to have such a persistent lookup table in Java XPages without any additional applications, while keeping it as fast as possible? A hash table would be perfect for this.
To clarify: I don't want caching within a single request, because I'm not doing more than one database lookup per request; I want the caching to be server-wide, so it is kept across multiple requests.
Yes, it is possible to store persistent data. What you are looking for is called an application scoped managed bean.
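As a minimal sketch, assuming the bean is registered in faces-config.xml with application scope; the class and property names are hypothetical:

import java.io.Serializable;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical cache bean; register it in faces-config.xml with
// <managed-bean-scope>application</managed-bean-scope> so one instance is
// shared by all requests on the server.
public class RoomCache implements Serializable {

    private static final long serialVersionUID = 1L;

    // mail address -> document UNID
    private final ConcurrentHashMap<String, String> unidByMail =
            new ConcurrentHashMap<String, String>();

    public String getUnid(String mailAddress) {
        return unidByMail.get(mailAddress);
    }

    public void putUnid(String mailAddress, String unid) {
        unidByMail.put(mailAddress, unid);
    }
}

Your REST code can then resolve the bean by its managed-bean name, look up the UNID first, and fall back to the full database search (populating the cache) only on a miss.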

How to deal with multiple database results from different servers for a request

I have cloud statistics information (structured data :: CSV) which I have to expose to administrators and users.
But for scalability, data will be collected by multiple machines (perf monitors), each connected to its own DB.
Now a Manager (Mgr) is responsible for multicasting the request to all perf monitors, collecting the overall stats data to satisfy a single UI request.
So my questions are:
1) How will I sort the data from multiple monitors according to the client request at the Mgr? Each monitor may return results matching the client request, but how do I merge the data from multiple machines in Java? That is, how do I perform in-memory SQL aggregate/scalar functions (e.g. GROUP BY, ORDER BY, AVG) on all the results retrieved from multiple clusters at the Mgr? How do I implement SQL aggregate/scalar functionality on the Java side; any known APIs? I think what I need is the Reduce part of the MapReduce technique in Hadoop.
2) A request from the UI (assume SELECT COUNT(*) FROM DB WHERE Memory > 1000MB) has to be forwarded to multiple machines. How do I send parallel requests to the individual monitors and consume the results only when all nodes have responded? That is, how do I make the user thread wait until all responses from the perf monitors have been consumed? How do I trigger parallel REST requests for a single UI request on the Mgr?
3) Do I have to authenticate the UI user at both the Mgr and the perf monitors?
4) Do you see any drawbacks in this approach?
Notes:
1) I didn't go for NoSQL because the data is structured and no joins are required.
2) I didn't go for Node.js since I am new to it and it might take more time to develop. Also, I am not developing anything concurrency-critical where single-threaded is best suited; only pushing/retrieving of data is done, no modification happens.
3) I want an individual DB for each monitor, or at least two DB instances with multiple clusters per instance, to support faster access to real-time big statistical data.
You want to scale your app, but you designed an inherent bottleneck. Namely: the Mgr.
What I would do is split the Mgr into at least two parts: front end and back end. The front end could simply be an aggregator and/or controller which collects all the requests from all the different UI servers, timestamps those requests and puts them in a queue (RabbitMQ, Kafka, Redis, whatever), creating a message with the UI session ID or something similar which uniquely identifies the source of the request. Then you just have to wait until you get a response on the queue (with a different topic, of course).
Then on your back end (the other side of the queue) you can set up as many nodes as your load requires and have them perform the same task: pull requests off the queue and call those performance-monitoring APIs as necessary. You can scale these back-end nodes as much as you wish since they don't have any state; all the state which needs to be stored is already part of the messages in the queue, which will be automagically persisted for you by Redis/Kafka/RabbitMQ or whatever else you choose.
You can also use Apache Storm or something similar to do this for you in the back end, since it was designed for exactly this kind of application.
Apache Storm has also built-in merging capability exposed through the Trident API.
Note on the authentication: you should authenticate the HTTP requests on the front-end side and then you will be all right. Just assign unique IDs (session IDs most probably) to the users connected to your mgr and use this internal ID when you forward your requests further to downstream servers.
How do I send parallel requests to the individual monitors and consume the results only when all nodes have responded? That is, how do I make the user thread wait until all responses from the perf monitors have been consumed? How do I trigger parallel REST requests for a single UI request on the Mgr?
Well, if you have so many questions regarding handling user connections and serving those clients with responses, then I would suggest picking up a book on the Java Servlet API. You might want to read this one, for example: Servlet & JSP: A Tutorial (A Tutorial series). It is a bit outdated but well written.
But with all due respect, if you have so many questions on these quite fundamental topics, then it might be better to leave the architecture design to someone more experienced.
Don't reinvent the wheel; use good existing BAM and database monitoring tools. They have lots of built-in dashboards and statistics, and are easy to connect with Java and workflows.
But for scalability, data will be collected by multiple machines (perf monitors), each connected to its own DB.
Approximately what sort of scaling do you anticipate... is it hundreds of GBs, or multiple terabytes? The reason I ask is that these days SQL Server and Oracle can handle really large volumes of data. Once the data is collected in a central DB, it's game over as far as searching and crunching are concerned.
Now a Manager (Mgr) is responsible for multicasting the request to all perf monitors, collecting the overall stats data to satisfy a single UI request.
This will be a major task to write and it will be really complex, IMHO. That said, I am not an expert in this area.
What I would do is put a layer of Hazelcast or Infinispan (or something like this) into your performance monitor, in front of the MySQL. The performance monitor logic itself can be part of the data grid. The MySQL then works as the persistent storage of this data grid. This way you can have more than one MySQL, and each MySQL will just hold a portion of the data; it simply works as an extension to go beyond your maximum RAM. As you scale your performance monitors over time, you also scale your persistence capabilities.
You can then use MapReduce or other distributed functions for aggregation, which leads to a massive amount of parallelism and the ability to serve significantly more requests. Such an architecture also scales horizontally. In the end it should look something like this:
And just as another note, it is not necessary in general to have one MySQL for each Hazelcast node; that depends on what the goal is. I also left the Manager out of the diagram, but that part is simple: it can either work as a gateway to the data grid, or alternatively be merged with the grid.
Not sure if my answer will be useful for you since this question was posted some time back.
I would like to answer it based on your questions, the problems in the current approach, and a proposed solution...
1) How will I sort the data from multiple monitors according to the client request at the Mgr? Each monitor may return results matching the client request, but how do I merge the data from multiple machines in Java? That is, how do I perform in-memory SQL aggregate/scalar functions (e.g. GROUP BY, ORDER BY, AVG) on all the results retrieved from multiple clusters at the Mgr? How do I implement SQL aggregate/scalar functionality on the Java side; any known APIs? I think what I need is the Reduce part of the MapReduce technique in Hadoop.
Java provides the built-in Java DB as part of the Java distribution, which is also available as the Apache Derby database. It can be used as an in-memory SQL database. Java DB and Apache Derby also store the data on disk, so you won't lose the data after a restart.
Check here http://www.oracle.com/technetwork/java/javadb/overview/index.html https://db.apache.org/derby/
For map-reduce, a simple Java-collection-based approach would work; I don't think you need any special MapReduce framework in this case. You should, however, consider out-of-memory situations, network bandwidth, etc. when you read data from multiple sources.
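As a small sketch of that collection-based approach, using plain Java streams to do the "reduce" step on the Mgr; StatRecord and its fields are hypothetical:

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical record type returned by each perf monitor.
class StatRecord {
    final String host;
    final long memoryMb;

    StatRecord(String host, long memoryMb) {
        this.host = host;
        this.memoryMb = memoryMb;
    }
}

public class Aggregator {

    // Merge the per-monitor result lists and compute the average memory grouped
    // by host: roughly GROUP BY host / AVG(memoryMb), done in memory on the Mgr.
    static Map<String, Double> avgMemoryByHost(List<List<StatRecord>> perMonitorResults) {
        return perMonitorResults.stream()
                .flatMap(List::stream)
                .collect(Collectors.groupingBy(r -> r.host,
                        Collectors.averagingLong(r -> r.memoryMb)));
    }
}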
2) A request from the UI (assume SELECT COUNT(*) FROM DB WHERE Memory > 1000MB) has to be forwarded to multiple machines. How do I send parallel requests to the individual monitors and consume the results only when all nodes have responded? That is, how do I make the user thread wait until all responses from the perf monitors have been consumed? How do I trigger parallel REST requests for a single UI request on the Mgr?
Ideally a Node.js-style application is best suited to this case, where the application gets a callback whenever there is a response to an HTTP call. However, you can implement the Observer pattern in Java as explained here: How do I perform a JAVA callback between classes?
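Alternatively (this is not the Observer-pattern code from the linked answer), here is a minimal sketch of fanning out one REST call per monitor and blocking until all have responded, using CompletableFuture; the monitor URLs and queryMonitor() are hypothetical:

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class FanOut {

    // Fire one asynchronous call per monitor and block the calling (user) thread
    // until every monitor has answered.
    static List<String> queryAll(List<String> monitorUrls) {
        List<CompletableFuture<String>> futures = monitorUrls.stream()
                .map(url -> CompletableFuture.supplyAsync(() -> queryMonitor(url)))
                .collect(Collectors.toList());

        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();

        return futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList());
    }

    // Hypothetical: issue the actual REST call (RestTemplate, HttpClient, ...).
    static String queryMonitor(String url) {
        return "result from " + url;
    }
}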
3) Do I have to authenticate the UI user at both the Mgr and the perf monitors?
That should be based on your requirements.
4) Do you see any drawbacks in this approach?
There are several drawbacks with this approach
Data should not be pulled on demand from the UI. At the least, the data should already be available in a centralized database whenever there is a request to generate it; pulling data from various endpoints on demand is expensive.
Stats must be collected periodically to maintain history, and reports must be generated based on a moving time window.
The JVM might run out of memory if large amounts of data need to be processed; proper handling is required.
Large amounts of data might get transferred over the network every time there is a new request, possibly for the same data again.
Notes:
1) I didn't go for NoSQL because the data is structured and no joins are required.
NoSQL doesn't mean no structure is followed. A NoSQL database is actually a good fit for data like this, where you don't update records and transactions are not required.
2) I didn't go for Node.js since I am new to it and it might take more time to develop. Also, I am not developing anything concurrency-critical where single-threaded is best suited; only pushing/retrieving of data is done, no modification happens.
Node.js won't be a good choice since it is single-threaded; it should not be used when you have a CPU-intensive job to perform, like yours.
3) I want an individual DB for each monitor, or at least two DB instances with multiple clusters per instance, to support faster access to real-time big statistical data.
I would rather suggest you store the data in a database which can scale horizontally, and process the data either as it arrives or in batches, so that your user experience is good.

Optimizing async multi-request operations calling the same service

We are developing a document management web application and right now we are thinking about how to tackle actions on multiple documents. For example, let's say a user multi-selects 100 documents and wants to delete all of them. Until now (when we did not support multiple selection), the deleteDoc action made an AJAX request to a deleteDocument service according to the docId. The service in turn calls the corresponding utility function, which does the required permission checking and proceeds to delete the document from the database. When it comes to multiple deletion we are not sure what the best way to proceed is. We have come up with several solutions but don't know which one is the best(-practice), and I'm looking for advice. Mind you, we are keen on keeping the back-end code as intact as possible:
Creating a new multipleDeleteDocument service which calls the single doc delete utility function a number of times according to the amount of documents we want to delete (ugly in my opinion and counter-intuitive with modern practices).
Keep the back end code as is and instead, for every document, make an ajax request on the service.
Somehow (I have no idea if this is even possible) batch the requests into one but still have the server execute the deleteDocument service X amount of times.
Use WebSockets for the multi-delete action, essentially cutting down on the communication overhead and time. Our application generally runs over LAN networks with low latency, which is optimal for WebSockets (when latency is introduced, WebSockets tend to match HTTP request speeds).
Something we haven't thought of?
Sending N AJAX calls or N WebSocket messages when all the data could be combined into a single call or message is never the most optimal solution, so options 2 and 4 are certainly not ideal. I see no particular reason to use a WebSocket over an AJAX call. If you already have a WebSocket connection, then you can certainly just send a single delete message with a list of document IDs over it, but an AJAX call could work just as well, so I wouldn't create a WebSocket connection just for this purpose.
Options 1 and 3 both require a new service endpoint that lets you make a single call to delete multiple documents. This would be recommended.
If I were designing an API like this, I'd design a single delete endpoint that takes one or more document IDs. That way the same API call can be used whether deleting a single document or multiple documents.
Then, from the client anytime you have multiple documents to delete, always collect them together and make one API call to delete all of them at once.
Internal to the server, how you implement that API depends upon your data store. If your data store also permits sending multiple documents to delete, then you would likewise call the data store that way. If it only supports single deletes, then you would just loop and delete each one individually.
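For illustration, here is a minimal sketch of such a single batch-delete endpoint, assuming a Spring MVC backend (the question does not name a server framework); DocumentService and the URL are hypothetical:

import java.util.List;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DocumentController {

    private final DocumentService documentService;

    public DocumentController(DocumentService documentService) {
        this.documentService = documentService;
    }

    // One call handles both the single-document and the multi-document case.
    @DeleteMapping("/documents")
    public void deleteDocuments(@RequestBody List<Long> documentIds) {
        // If the data store supports bulk deletes, forward the whole list;
        // otherwise loop and delete each id individually inside the service.
        documentService.deleteAll(documentIds);
    }
}

// Hypothetical service interface, shown only to make the sketch self-contained.
interface DocumentService {
    void deleteAll(List<Long> documentIds);
}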
Doing option 3 would be the most elegant solution for me.
Assuming you send requests like POST /deleteDocument where you have docId as a parameter, you could instead pass an array of document ids to remove.
Then in the backend you would only have to iterate through the list of ids and perform the deletions. You should be able to keep the deletion code relatively intact.

Listen for Changes In Cassandra Datastore?

I wonder if it is possible to add a listener to Cassandra to get the table and the primary key of changed entries? It would be great to have such a mechanism.
Checking the Cassandra documentation, I only find the option of adding StateListener(s) to the Cluster instance.
Does anyone know how to do this without hacking Cassandra's data store, or without wrapping the driver and doing something on my own?
Check out this future JIRA:
https://issues.apache.org/jira/browse/CASSANDRA-8844
If you like it vote for it : )
CDC
"In databases, change data capture (CDC) is a set of software design
patterns used to determine (and track) the data that has changed so
that action can be taken using the changed data. Also, Change data
capture (CDC) is an approach to data integration that is based on the
identification, capture and delivery of the changes made to enterprise
data sources."
-Wikipedia
As Cassandra is increasingly being used as the Source of Record (SoR) for mission critical data in large enterprises, it is increasingly being called upon to act as the central hub of traffic and data flow to other systems. In order to try to address the general need, we propose implementing a simple data logging mechanism to enable per-table CDC patterns.
If clients need to know about changes, the world has mostly gone to the message broker model-- a middleman which connects producers and consumers of arbitrary data. You can read about Kafka, RabbitMQ, and NATS here. There is an older DZone article here. In your case, the client writing to the database would also send out a change message. What's nice about this model is you can then pull whatever you need from the database.
Kafka is interesting because it can also store data. In some cases, you might be able to dispose of the database altogether.
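As a minimal sketch of that idea (the client writing to the database also publishes a change message), assuming the DataStax Java driver and the Kafka producer client; the keyspace, table, topic and message format are hypothetical. Note that the two writes are not atomic, which is the trade-off of this model:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class ChangePublisher {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("demo")) {

            // 1. Write to Cassandra as usual.
            session.execute("UPDATE users SET email = ? WHERE user_id = ?",
                    "a@example.com", 42L);

            // 2. Publish a change message so interested consumers learn about it.
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("user-changes", "users:42",
                        "{\"table\":\"users\",\"key\":42}"));
            }
        }
    }
}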
Are you looking for something like triggers?
https://github.com/apache/cassandra/tree/trunk/examples/triggers
A database trigger is procedural code that is automatically executed in response to certain events on a particular table or view in a database. The trigger is mostly used for maintaining the integrity of the information on the database. For example, when a new record (representing a new worker) is added to the employees table, new records should also be created in the tables of the taxes, vacations and salaries.
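A minimal skeleton of such a trigger, based on the linked examples and the Cassandra 3.x ITrigger interface (older versions use a different augment() signature); what you do with the partition update is up to you:

import java.util.Collection;
import java.util.Collections;

import org.apache.cassandra.db.Mutation;
import org.apache.cassandra.db.partitions.Partition;
import org.apache.cassandra.triggers.ITrigger;

// Hypothetical trigger class; it is packaged as a jar, dropped into Cassandra's
// triggers directory and attached to a table with CREATE TRIGGER ... USING '...'.
public class ChangeListenerTrigger implements ITrigger {

    @Override
    public Collection<Mutation> augment(Partition update) {
        // 'update' carries the table metadata and the changed partition key;
        // inspect it here and notify your own listener (log, queue, ...).
        return Collections.emptyList(); // no extra mutations to apply
    }
}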
