I have a Java application that is connected to a view on a remote Oracle DB.
Does anyone know of a way in Java to monitor this table for changes? I.e. if there are inserts or updates etc. I would need to react.
Look at Oracle Database Change Notification, a very interesting Oracle feature.
From the Oracle documentation: "Database Change Notification is a feature that enables client applications to register queries with the database and receive notifications in response to DML or DDL changes on the objects associated with the queries. The notifications are published by the database when the DML or DDL transaction commits."
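In JDBC terms, a registration looks roughly like the sketch below. It assumes the Oracle JDBC driver (ojdbc) on the classpath and a user granted the CHANGE NOTIFICATION privilege; connection setup and error handling are elided, and the view/table names are illustrative:

```java
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;
import oracle.jdbc.OracleConnection;
import oracle.jdbc.OracleStatement;
import oracle.jdbc.dcn.DatabaseChangeRegistration;

class ChangeNotificationSketch {
    static void register(OracleConnection conn) throws Exception {
        Properties props = new Properties();
        props.setProperty(OracleConnection.DCN_NOTIFY_ROWIDS, "true");

        // Register with the database; the listener fires when a DML/DDL
        // transaction touching the registered objects commits.
        DatabaseChangeRegistration dcr = conn.registerDatabaseChangeNotification(props);
        dcr.addListener(event -> System.out.println("Change detected: " + event));

        // Associate a query with the registration: the objects it references
        // are what get watched.
        Statement stmt = conn.createStatement();
        ((OracleStatement) stmt).setDatabaseChangeRegistration(dcr);
        try (ResultSet rs = stmt.executeQuery("SELECT id FROM my_view")) {
            while (rs.next()) { /* nothing to do; the query seeds the registration */ }
        }

        // When no longer interested:
        // conn.unregisterDatabaseChangeNotification(dcr);
    }
}
```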
You can place an INSERT/UPDATE/DELETE trigger on the table to perform some action when 'data' changes are made to the table (as opposed to changes to the structure of the table).
I believe 10g also supports (INSTEAD OF) triggers on views.
But I'm not sure how you can notify the Java process of this other than by polling, sorry.
You could possibly build a solution where the Java app runs a 'listen' server and the database pushes a message back to it, but that sounds hard to maintain.
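For what it's worth, the 'listen' server side of that idea is not much code. A minimal sketch in plain Java (names are illustrative; the database side, e.g. a trigger calling out over TCP, is not shown):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal "listen" server: the database side would connect and send one
// line per change notification; the application drains the queue.
class ChangeListenServer {
    private final ServerSocket serverSocket;
    private final BlockingQueue<String> notifications = new LinkedBlockingQueue<>();

    ChangeListenServer(int port) throws Exception {
        serverSocket = new ServerSocket(port);
        Thread acceptor = new Thread(this::acceptLoop);
        acceptor.setDaemon(true);
        acceptor.start();
    }

    int getPort() { return serverSocket.getLocalPort(); }

    private void acceptLoop() {
        try {
            while (true) {
                try (Socket client = serverSocket.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(
                             client.getInputStream(), StandardCharsets.UTF_8))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        notifications.put(line); // hand off to the application
                    }
                }
            }
        } catch (Exception e) {
            // socket closed or interrupted: shut down quietly
        }
    }

    /** Blocks until the next change notification arrives. */
    String awaitNotification() throws InterruptedException {
        return notifications.take();
    }
}
```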
Justin Cave in the comments suggests you could configure Oracle Streams to send out logical change records (LCRs) that a Java app could subscribe to via JMS. Or the trigger could write records to an Advanced Queue that Java could subscribe to via JMS.
You will still need to be wary of Oracle transactions: due to the way they work, the trigger will fire on the change, but it may also fire multiple times,
and in any case the Java app would not be able to 'see' the changes until a commit had been performed.
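If you do fall back to polling, it can be a small scheduled task that re-reads a change marker (for example the MAX of a last-modified column) and fires a callback when it moves. A sketch in plain Java, with the actual query abstracted behind a supplier so nothing here is Oracle-specific:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.LongSupplier;

// Fires onChange whenever the supplied "change marker" (e.g. the result of
// SELECT MAX(last_modified) FROM my_view) differs from the last value seen.
class TablePoller {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private volatile long lastSeen;

    TablePoller(LongSupplier changeMarkerQuery, Runnable onChange, long periodMillis) {
        this.lastSeen = changeMarkerQuery.getAsLong();
        scheduler.scheduleAtFixedRate(() -> {
            long current = changeMarkerQuery.getAsLong();
            if (current != lastSeen) {   // something committed since the last poll
                lastSeen = current;
                onChange.run();
            }
        }, periodMillis, periodMillis, TimeUnit.MILLISECONDS);
    }

    void stop() { scheduler.shutdownNow(); }
}
```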
Related
I want to create a Spring Boot app that has an API that clients query to get data from. I'd also like any change made on the DB to appear in the front end in real time, something like Firebase does when updating a document. I'm using the in-memory H2 database at the moment, as it is not something that I need to persist between runs (for now, I guess...)
The API is not a problem, as that part is already done, but the real-time updates are.
I thought about implementing a pub-sub strategy or something like that, but I'm actually a bit lost about it.
I also know about WebFlux and have read up on it, but I'm not sure it fulfills my needs.
I implemented CDC (Change Data Capture) using Debezium.
To set it up, you create the relevant connector for your DB (example); it uses Kafka Connect and keeps track of your DB operations.
When any DB operation (insert, update, delete) occurs, it publishes CDC messages to a Kafka topic.
You can set up a Spring Kafka consumer application that listens to this Kafka topic, consumes the CDC events, and reacts based on the type of operation [ op = c (create), u (update), d (delete) ].
Here is a sample project that I have created. You can use it as a reference.
This is what the Kafka messages look like - link
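On the consuming side, reacting to the op code can be as simple as a small dispatcher. A sketch (field names follow Debezium's change-event envelope; JSON deserialization is elided, with a plain Map standing in for the parsed payload):

```java
import java.util.Map;
import java.util.function.Consumer;

// Routes a deserialized Debezium change event by its "op" code:
// c = create (insert), u = update, d = delete.
class CdcEventDispatcher {
    private final Consumer<Map<String, Object>> onCreate;
    private final Consumer<Map<String, Object>> onUpdate;
    private final Consumer<Map<String, Object>> onDelete;

    CdcEventDispatcher(Consumer<Map<String, Object>> onCreate,
                       Consumer<Map<String, Object>> onUpdate,
                       Consumer<Map<String, Object>> onDelete) {
        this.onCreate = onCreate;
        this.onUpdate = onUpdate;
        this.onDelete = onDelete;
    }

    @SuppressWarnings("unchecked")
    void dispatch(Map<String, Object> envelope) {
        String op = (String) envelope.get("op");
        switch (op) {
            case "c": onCreate.accept((Map<String, Object>) envelope.get("after")); break;
            case "u": onUpdate.accept((Map<String, Object>) envelope.get("after")); break;
            case "d": onDelete.accept((Map<String, Object>) envelope.get("before")); break;
            default:  /* "r" (snapshot read) and others ignored here */ break;
        }
    }
}
```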
I need to build a web application in Java that offers a dashboard based on the content of a DB table.
It needs to be auto-refreshing and always synchronized with the actual data in the DB.
For the browser <-> servlet interaction I can use WebSockets, or at least long polling, to achieve the "freshness", but I'm stuck on the Java <-> DB communication.
I could do some polling, but I would really like some "notification" from the DB itself.
Is there some way, or some library, to achieve this?
In my case the DB is Oracle, but I'm also interested in solutions for Postgres.
To monitor DB changes, the Debezium connector is a good fit. Using it, you will get every change event of the database in Kafka topics.
For Oracle, look at this tutorial
For PostgreSQL, look at this tutorial
The problem statement:
Example: I have a table called "STUDENT" that has 10 rows; consider that one of the rows has the name "Jack". When my server starts and is running, I load the DB table into cache memory, so my application has the value "Jack" and uses it all over the application.
Now an external source changes my "STUDENT" table, changing the name "Jack" into "Prabhu Jack". I want the updated information in my application as soon as possible, without reloading/refreshing the application. I don't want to run a constant thread to monitor and update my application. What I want is something that is part of Hibernate, or any other feasible solution, to achieve this.
What you describe is the classic case of whether to pull or push updates.
Pull
This approach relies on the application using some background thread or task system that periodically polls a resource and requests the desired information. It's the responsibility of the application to perform this task.
In order to use a pull mechanism in conjunction with a cache implementation with Hibernate, this would mean that you'd want your Hibernate query results to be stored in a L2 cache implementation, such as ehcache.
Your ehcache would specify the storage capacity and expiration details and you simply query for the student data at each point you require it. The L2 cache would be consulted first, which lives on the application server side, and would only consult the database if the L2 cache had expired.
The downside is that you would need to choose a reasonable time-to-live setting for the L2 cache so that the cache is refreshed by a query reasonably soon after the rows are updated. Depending on the frequency of change and usage, maybe a 5-minute window is sufficient.
Using the L2 cache avoids the need for a dedicated background poll thread and allows you to specify a reasonable poll interval, all within the Hibernate framework, backed by a cache implementation.
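The pull pattern an L2 cache provider implements can be seen in miniature in plain Java. A sketch (this is not ehcache's actual API; the loader function stands in for the Hibernate query against the database):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Tiny time-to-live cache illustrating the "pull" pattern: reads hit the
// cache first and only fall through to the loader (the database query)
// when the cached entry is missing or older than the TTL.
class TtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long loadedAtMillis;
        Entry(V value, long loadedAtMillis) {
            this.value = value;
            this.loadedAtMillis = loadedAtMillis;
        }
    }

    private final Map<K, Entry<V>> store = new ConcurrentHashMap<>();
    private final Function<K, V> loader;
    private final long ttlMillis;

    TtlCache(Function<K, V> loader, long ttlMillis) {
        this.loader = loader;
        this.ttlMillis = ttlMillis;
    }

    V get(K key) {
        long now = System.currentTimeMillis();
        Entry<V> cached = store.get(key);
        if (cached != null && now - cached.loadedAtMillis < ttlMillis) {
            return cached.value;          // fresh enough: no database hit
        }
        V loaded = loader.apply(key);     // expired or absent: "query the database"
        store.put(key, new Entry<>(loaded, now));
        return loaded;
    }
}
```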
Push
This approach relies on the point where a change occurs to be capable of notifying interested parties that something changed and allowing the interested party to perform some action.
In order to use a push mechanism, your application would need to expose a way to be told a change occurred and preferably what the change actually was. Then when your external source modifies the table in question, that operation would need to raise an event and notify interested parties.
One way to architect this would be to use a JMS broker: have the external source submit a JMS message to a queue, and have your application subscribe to the JMS queue to read the message when it's sent.
Another solution would be to couple the place where the external source manipulates the data tightly with your application such that the external source doesn't just manipulate the data in question, but also sends a JSON request to your application, allowing it to update its internal cache immediately.
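Whatever the transport (JMS or a JSON request), the application side of push reduces to the observer pattern: something receives the message and notifies every interested party. A minimal in-process sketch, with illustrative names:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// The application side of a push design, with the transport stripped away:
// whatever receives the JMS message or JSON request calls publish(), and
// every registered party (e.g. the cache) is told what changed.
class ChangePublisher<E> {
    private final List<Consumer<E>> subscribers = new CopyOnWriteArrayList<>();

    void subscribe(Consumer<E> subscriber) {
        subscribers.add(subscriber);
    }

    void publish(E changeEvent) {
        for (Consumer<E> subscriber : subscribers) {
            subscriber.accept(changeEvent);   // e.g. evict or update a cache entry
        }
    }
}
```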
Conclusion
Using a push design could require the introduction of additional middleware components, should you want to efficiently decouple the external source from your application. But it comes with the added benefit that eventual consistency between the database and your application's cache happens in near real time. This solution also has no need to query the database again after startup for those rows.
Using a pull situation doesn't require anything more than what you're likely already using in your application, other than maybe using a supported L2 cache provider rather than some homegrown solution. However, the eventual consistency between the database and your application's cache is completely dependent on your TTL configuration for that entity's cache. But be aware that this solution will continue to query the database to refresh the cache once your TTL has expired.
We have a command and control system which persists historical data in a database. We'd like to make the system independent of the database. So if the database is there, great we will persist data there, if it is not, we will do some backup storage to files and memory until the database is back. The command and control functionality must be able to continue uninterrupted by the loss or restoration of the database; it should not even know the database exists. So the database and DAO functionality needs to be decoupled from the rest of the application.
We are using RESTful service calls, the Spring framework, ActiveMQ, and JdbcTemplate with a SQL Server database, currently following standard connection practices with a Hikari datasource and the jTDS driver. The problem is that if the database goes down or the database connection is lost, we start to have data issues, as too many service calls (mainly the getters) are still too dependent on the database's existence. This dependence is what we'd like to eliminate.
What are the best practices/technologies for totally decoupling the database from the application? We are considering using AMQ to broadcast data updates and have the DAO listen for those messages and then do the update to the database if it is available or flat files as a backup. Then for the getters, provide replies based on what is available either from the actual database or from the short-term backup.
My team has little experience with this and we want to know what others have done that works well.
I would like to ask for a starting point: which technology or framework should I research?
What I need to accomplish is the following:
We have a Java EE 6 application using JPA for persistence. We would like to use a primary database as some sort of scratchpad, where users insert/delete records according to the tasks they are given. Then, at the end of the day, an administrator will do some kind of check on their work, approving or disapproving it. If he approves the work, all changes become permanent and the primary database will be synced/replicated to another one (for security reasons). Otherwise, if the administrator does not approve the changes, they will be rolled back.
Now here I got two problems to figure out:
First: Is it possible to roll back a bunch of JPA operations performed over a certain period of time?
Second: Trigger the replication process (which RDBMS engines can do) from code.
Now, if RDBMS replication is not possible (maybe because of a client requirement), we would need a sync framework for JPA as a backup. I was looking at some JMS solutions, but I'm not clear about the exact process or how to make them work with JPA.
Any help would be greatly appreciated,
Thanks.
I think your design steps carry too much risk of losing data. What I understand is that you are talking about holding data in memory until the admin approves or rejects it. You must think about a disaster scenario and how to save your data in that case.
Rather, this problem statement is more inclined towards a workflow design, where:
1. Data is entered by one entity and persisted.
2. Another entity approves or rejects the data.
3. All the approved data is further replicated to the next database.
All three steps could be implemented as 3 modules, backed by persistent storage / JMS technology. Depending on how real-time each of these steps needs to be, you could come up with an elegant design that accomplishes this in a cost-effective manner.
Add a "workflow state" column to your table. States: Wait for approval, approved, replicated
Persist your data normally using JPA (state: wait for approval)
Approver approves: Update using JPA, change to approved state
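The state column maps naturally onto an enum with guarded transitions. A sketch (names are illustrative; the JPA mapping, e.g. a column annotated with @Enumerated(EnumType.STRING), is elided):

```java
// Workflow states for a row, with guarded transitions: a row can only be
// approved while waiting, and only replicated once approved.
enum WorkflowState {
    WAITING_FOR_APPROVAL, APPROVED, REPLICATED;

    /** Returns the next state, rejecting transitions the workflow does not allow. */
    WorkflowState transitionTo(WorkflowState next) {
        boolean legal = (this == WAITING_FOR_APPROVAL && next == APPROVED)
                     || (this == APPROVED && next == REPLICATED);
        if (!legal) {
            throw new IllegalStateException("Cannot go from " + this + " to " + next);
        }
        return next;
    }
}
```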
As for the replication
In the approve method you could replicate the data synchronously to the other database (using JPA)
You could copy as well the approved data to another table, and use some RDBMS functionality to have the RDBMS replicate the data of that table
You could as well send a JMS message. At the end of the day a job reads the queue and persists the data into the other database
Anyway, I suggest using a normal RDBMS cluster with synchronous replication. In that scenario you don't have to develop a self-made replication scheme, you always have a copy of your data, and you always have the workflow state.