This is my first Stack Overflow post, so please forgive me if I am not being specific enough; I'm sure I will learn the process gradually.
I have built a JSON payload that is displayed in an Angular data grid. This JSON comes from a complex query over a materialized view. My plan for refreshing the JSON as the underlying data changes is as follows:
a) Register the query for Oracle CQRN (Continuous Query Result Change Notification) at application startup.
b) When the underlying data changes, the Oracle Database Change Listener on the Java side is invoked; I re-query the data (with the change) and push it to the WebSocket endpoint. That way the JSON is refreshed with the latest data.
This works fine with a simple query.
Issues are:
a) In my case the query is very complex, involving multiple materialized views with UNION ALL and complex JOINs, and CQRN does not support registering a materialized view for query result change notification.
b) The query I register at startup for change notification is static. It does not meet the requirement of the various parameterized queries behind the data grid.
Can anyone suggest an alternative? For example, caching the grid data in the middle tier and refreshing the cache whenever the underlying grid data changes. I need to be notified when the underlying grid data changes, so that I can re-query and send the updated data to the WebSocket endpoint, which will refresh the grid.
Since I have to show the grid-data changes in real time, I have used Java WebSocket (JSR 356).
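For context, the re-query-and-push step in (b) can be sketched in plain Java as below. This is only an illustrative stand-in, not the actual code: the Supplier stands in for the CQRN-triggered re-query, and the Consumers stand in for JSR 356 WebSocket sessions.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Sketch of step (b): when a change event fires, re-run the grid query and
// broadcast the fresh JSON to every connected client. Consumer<String> is a
// stand-in for a JSR 356 Session; Supplier<String> for the grid query.
public class GridPushSketch {
    private final List<Consumer<String>> sessions = new CopyOnWriteArrayList<>();
    private final Supplier<String> queryToJson; // hypothetical: runs the grid query

    public GridPushSketch(Supplier<String> queryToJson) {
        this.queryToJson = queryToJson;
    }

    public void register(Consumer<String> session) {
        sessions.add(session);
    }

    // Called from the database change listener when the underlying data changes.
    public void onDatabaseChange() {
        String json = queryToJson.get();    // re-query with the change included
        for (Consumer<String> s : sessions) {
            s.accept(json);                 // push to each socket endpoint
        }
    }
}
```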
Technology stack:
UI: JavaScript/AngularJS
Middle-tier: Java 1.7
Server: Jetty 9.2
Database: Oracle 11g R2
Build Platform: Maven 3.3
Suggestions for any other suitable approach will also be much appreciated.
Thanks & regards,
- Joy
While not directly answering your question: we just implemented a real-time data grid involving multiple data sources and CQRN. CQRN's built-in notification is driven by a table changing. Our technique was:
Add an on-insert trigger (our data feed was real-time: no deletes, no updates) to the base tables.
Have the trigger call a stored procedure to manipulate the data; you would use the logic from your materialized view here. The procedure inserts the data into a destination table, and that destination table is the one whose changes drive the CQRN notification.
Often with real-time data you also need to delete old data so everything stays fast.
We have a PLM system where users create/update objects (i.e. Products, Colorways, etc.). These objects eventually get stored in a SQL Server database. The tables have a modifyTimeStamp column, which holds the timestamp of the last time a user updated the object.
We are integrating this tool with another application. This other application needs to know when someone creates/updates objects in our PLM system.
What's the best way to achieve this? Writing some kind of listener that keeps listening and sends a notification when there is a change in the table?
The other approach could be a trigger. But then how would my code interact with that trigger, given that triggers are only within the scope of their table?
I think there are many ways to solve this problem. I will try to describe a few.
1) Create a scheduler in the listening application that runs at a given interval, fetches the latest data according to the modify time, and processes it.
2) Create a new API on the listening application and call it from the creating/updating application.
3) Use a messaging service between the applications (as in a microservice architecture) to inform one another of creation/update events.
I hope this helps, and good luck!
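Option 1 (a polling scheduler) might look roughly like the following sketch. The fetchSince function is a stand-in for the real query (something like SELECT ... WHERE modifyTimeStamp > ?); the class and row layout here are illustrative assumptions.

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

// Sketch of a polling scheduler: each pass fetches rows whose modifyTimeStamp
// is newer than the last one processed, then advances the high-water mark.
// fetchSince stands in for the SQL query; rows are [id, modifyTimeStamp] pairs.
public class ChangePoller {
    private long lastSeen = 0L; // high-water mark of modifyTimeStamp

    private final Function<Long, List<long[]>> fetchSince;

    public ChangePoller(Function<Long, List<long[]>> fetchSince) {
        this.fetchSince = fetchSince;
    }

    // One polling pass: fetch changes since lastSeen and advance the mark.
    public List<long[]> poll() {
        List<long[]> changes = fetchSince.apply(lastSeen);
        for (long[] row : changes) {
            lastSeen = Math.max(lastSeen, row[1]);
            // ... notify the other application about row[0] here ...
        }
        return changes;
    }

    // Wire the pass into a scheduler so it runs at a fixed interval.
    public ScheduledExecutorService start(long intervalSeconds) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(this::poll, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
        return ses;
    }
}
```

The high-water mark is what keeps the poller from reprocessing rows it has already seen between passes.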
SQL Server has a feature called "Change Tracking". It must first be activated for a database. If it is activated, you can issue special queries that return information about data changes in a specific table.
According to the example in the docs, the query
DECLARE @last_sync_version bigint;
SET @last_sync_version = <value obtained from query>;
SELECT [Emp ID], SSN,
SYS_CHANGE_VERSION, SYS_CHANGE_OPERATION,
SYS_CHANGE_COLUMNS, SYS_CHANGE_CONTEXT
FROM CHANGETABLE (CHANGES Employees, @last_sync_version) AS C;
would return the data changes in the Employees table since @last_sync_version.
I have a Java web app (WAR deployed to Tomcat) that keeps a cache (Map<Long,Widget>) in memory. I have a Postgres database that contains a widgets table:
widget_id (INT) | widget_name (VARCHAR 50) | widget_value (INT)
To O/R map between Widget POJOs and widgets table records, I am using MyBatis. I would like to implement a solution whereby the Java cache (the Map) is updated in real-time whenever a value in the widgets table changes. I could have a polling component that checks the table every, say, 30 seconds, but polling just doesn't feel like the right solution here. So here's what I'm proposing:
Write a Postgres trigger that calls a stored procedure (run_cache_updater())
The procedure in turns runs a shell script (run_cache_updater.sh)
The script base-64 encodes the changed widgets record and then cURLs the encoded record to an HTTP URL
The Java WAR has a servlet listening on the cURLed URL and handles any HttpServletRequests sent to it. It base-64 decodes the record and somehow transforms it into a Widget POJO.
The cache (Map<Long,Widget>) is updated with the correct key/value.
This solution feels awkward, and so I am first wondering how any Java/Postgres gurus out there would handle such a situation. Is polling the better/simpler choice here (am I just being stubborn?) Is there another/better/more standard solution I am overlooking?
If not, and this solution is the standard way of pushing changed records from Postgres to the application layer, then I'm choking on how to write the trigger, stored procedure, and shell script so that the entire widgets record gets passed into the cURL statement. Thanks in advance for any help here.
I can't speak to MyBatis, but I can tell you that PostgreSQL has a publish/subscribe system baked in, which would let you do this with much less hackery.
First, set up a trigger on widgets that runs on every insert, update, and delete operation. Have it extract the primary key and NOTIFY widgets_changed, id. (Well, from PL/pgSQL, you'd probably want PERFORM pg_notify(...).) PostgreSQL will broadcast your notification if and when that transaction commits, making both the notification and the corresponding data changes visible to other connections.
In the client, you'd want to run a thread dedicated to keeping this map up-to-date. It would connect to PostgreSQL, LISTEN widgets_changed to start queueing notifications, SELECT * FROM widgets to populate the map, and wait for notifications to arrive. (Checking for notifications apparently involves polling the JDBC driver, which sucks, but not as bad as you might think. See PgNotificationPoller for a concrete implementation.) Once you see a notification, look up the indicated record and update your map. Note that it's important to LISTEN before the initial SELECT *, since records could be changed between SELECT * and LISTEN.
This approach doesn't require PostgreSQL to know anything about your application. All it has to do is send notifications; your application does the rest. There are no shell scripts, no HTTP, and no callbacks, letting you reconfigure/redeploy your application without also having to reconfigure the database. It's just a database, and it can be backed up, restored, replicated, etc. with no extra complications. Similarly, your application has no extra complexities: all it needs is a connection to PostgreSQL, which you already have.
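The map-maintenance side of this can be sketched as below. This is an assumption-laden stand-in: a String takes the place of the Widget POJO, reFetch stands in for "SELECT * FROM widgets WHERE widget_id = ?", and the wiring that pulls payloads off org.postgresql.PGConnection.getNotifications() is not shown.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of the cache-maintenance logic: each notification carries the primary
// key of a changed widgets row; on receipt we re-fetch that one row and update
// the map. reFetch returns null when the row was deleted.
public class WidgetCache {
    private final Map<Long, String> cache = new ConcurrentHashMap<>();
    private final Function<Long, String> reFetch; // id -> widget, null if deleted

    public WidgetCache(Function<Long, String> reFetch) {
        this.reFetch = reFetch;
    }

    // Called for each payload pulled off the LISTEN widgets_changed channel.
    public void onNotification(String payload) {
        long id = Long.parseLong(payload.trim());
        String widget = reFetch.apply(id);
        if (widget == null) {
            cache.remove(id);      // row was deleted
        } else {
            cache.put(id, widget); // row inserted or updated
        }
    }

    public String get(long id) {
        return cache.get(id);
    }
}
```

Re-fetching by key on every notification also handles the case where several changes to one row collapse into a single visible state: whatever is current at fetch time wins.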
I have written Java/JDBC code which performs simple/basic operations on a database.
I want to add code which helps me keep track of when a particular database was accessed, updated, modified, etc. by this program.
I am thinking of creating another database inside my DBMS where these details or logs will be stored for each database involved.
Is this the best way to do it? Are there any other (preferably simple) ways to do this?
EDIT:
For now I am using MySQL, but I also want my code to work with at least Oracle and MS SQL Server.
It is pretty standard to add a "last_modified" column to a table and then add an update trigger on the table that sets it to the current database time; then your apps don't need to worry about it. A "create_time" column is also common, populated by an insert trigger.
Update after comment:
It seems you are looking for audit logs. Some shops write apps where data manipulation only happens through stored procedures, not through direct inserts and updates: a fixed API. So when you want to add an item to a table, you call the stored proc:
addItem(itemName, itemDescription)
The proc then inserts into the item table and does whatever logging is necessary.
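The fixed-API idea can be sketched in Java as below. In a real system addItem would be a CallableStatement invoking the stored procedure; here two in-memory lists stand in for the item table and the audit table, and all names are illustrative.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Sketch of the fixed-API approach: every write goes through one method,
// which both performs the change and records an audit entry. The two lists
// stand in for the item table and the audit table.
public class ItemStore {
    private final List<String> items = new ArrayList<>();
    private final List<String> auditLog = new ArrayList<>();

    // Equivalent of calling the addItem(itemName, itemDescription) proc.
    public void addItem(String name, String description) {
        items.add(name + ": " + description);             // the insert
        auditLog.add(Instant.now() + " addItem " + name); // the logging
    }

    public List<String> items() {
        return items;
    }

    public List<String> auditLog() {
        return auditLog;
    }
}
```

Because the write and the audit entry happen in the same call (in the database, in the same transaction), the log can never drift out of sync with the data.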
Another technique, if you are using some kind of framework for your jdbc access (say Spring) might be to intercept at that layer.
In almost all tables, I have the following columns:
CreatedBy
CreatedAt
These columns have default values of the current user and current time, respectively. They are populated when a row is added.
This solves only part of your problem. You can start adding triggers, but that gets complicated. Another method is to force modification access to the database through stored procedures, and then log the stored procedures. This has other advantages, in terms of controlling what users can do. But, you might want more flexibility.
A third possibility is auditing tools that keep track of all queries run on the database. I think most databases have a way of turning on internal auditing, although these are very specific to the database. There are also third-party tools that let you see what has happened. Note, though, that these methods will affect performance if your database handles high-volume transactions.
For more information, you should revise your question to specify which database you are using or planning on using.
I have a problem with SQL Server performance because of a query with heavy calculations, so we decided to put Solr in between as an intermediate layer and index all the data, either from Hibernate or directly from SQL Server.
Can anybody suggest whether this is possible, or help me get started?
Please suggest any tutorial links for this.
You can use DataImportHandler to transfer data, which you can schedule using DataImportScheduler.
I had a similar problem where a SQL Server stored procedure took 12 hours to update relationships between objects (rows), so we ended up using Neo4j (an open-source graph database), which exactly matched our data model.
We needed object relationships to be reflected in Solr searches, e.g. give me all objects whose name starts with "obj" and whose parent is of type "typ".
I have listing screens in my web app that pull quite a lot of data from an Oracle database.
Each time a listing screen loads, it goes to the DB and pulls the data.
What I want is a caching technique that extracts data from the DB and keeps it in memory, so that subsequent requests are served from there. Just like with the DB, I should be able to filter that data with any SQL query; the only difference is that it would pull from memory rather than going to the DB. The extracted data set would act like a view of the table, and it should continuously monitor the corresponding tables so that if any update is made to a table, it fetches a fresh set of data from the DB and serves that.
Is there any API in Java to achieve this?
ADO.NET has something like a RecordSet; I don't know much about that.
Is there any way out? My app is J2EE-based with Oracle as the DB, and we have JBoss as the server. Any suggestion is welcome. Thanks.
Try using Ehcache; it supports JDBC caching. Avoid creating custom solutions unless you're a JDBC guru.
You could cache the results of your query in memcached.
When your application modifies the table that you're caching, delete the cached item out of your memcached instances.
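The read-through/invalidate pattern described above can be sketched as below, with a plain map standing in for the memcached client (a real client library would replace the map operations; the class and names here are illustrative).

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of cache-aside with delete-on-write: reads go through the cache,
// and any write to the underlying table deletes the cached item so the next
// read repopulates it. The map stands in for memcached get/set/delete.
public class QueryCache {
    private final Map<String, String> cached = new ConcurrentHashMap<>();
    private final Function<String, String> runQuery; // stand-in for the real DB query

    public QueryCache(Function<String, String> runQuery) {
        this.runQuery = runQuery;
    }

    // Read path: return the cached result, or run the query and cache it.
    public String get(String key) {
        return cached.computeIfAbsent(key, runQuery);
    }

    // Write path: after modifying the table, delete the cached item.
    public void invalidate(String key) {
        cached.remove(key);
    }
}
```

Deleting rather than updating the cached item on write is the simpler choice: the next read pays the query cost once, and you never risk caching a value computed from a half-applied change.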
I found this quick guide to be useful: http://pragprog.com/titles/memcd/using-memcached
You can store that data in an in-memory dataset.
Give this library a try:
http://casperdatasets.googlecode.com
You can iterate and scroll through the results just like a ResultSet, issue queries against it, sort the data, and create indexes to optimize searches; it's all memory-based.
I have 2 options for this:
1) JBoss Cache; you can check all the details at the following link:
JBOSS Cache