We have a legacy system that uses Java and an Oracle database.
I now want to set up an integration environment where we can run tests through HTTP calls.
Before the whole test cycle starts, the database is set up from scratch; we already have functionality for this.
Now, after every test, only the data modified by that test should be rolled back. Is this possible on an Oracle database?
If you are on Oracle 10g or later you can use the FLASHBACK TABLE command to restore a table to a point in time. Usage looks like this:
FLASHBACK TABLE employees_test
TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' minute);
Note that the table must have row movement enabled first (ALTER TABLE employees_test ENABLE ROW MOVEMENT). Your user will also need special privileges:
SELECT ANY DICTIONARY or FLASHBACK ANY TABLE, or the SELECT_CATALOG_ROLE.
I assume this is for a development or test instance, as these privileges should not be granted to all users in production.
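If the tests are driven from Java, one way to apply this per test is to remember the current system change number (SCN) before each test and flash the touched tables back to it afterwards. A minimal sketch, with class and method names of my own invention; it assumes row movement is enabled on the tables and EXECUTE on DBMS_FLASHBACK:

import java.sql.*;

public class FlashbackRollback {
    private final Connection con;
    private long scnBeforeTest;

    public FlashbackRollback(Connection con) {
        this.con = con;
    }

    // Call before each test: remember the current SCN.
    public void mark() throws SQLException {
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT dbms_flashback.get_system_change_number FROM dual")) {
            rs.next();
            scnBeforeTest = rs.getLong(1);
        }
    }

    // Call after each test: flash every table the test may have touched back to the mark.
    public void rollbackTo(String... tables) throws SQLException {
        try (Statement st = con.createStatement()) {
            for (String table : tables) {
                // FLASHBACK TABLE is DDL, so the SCN cannot be a bind variable.
                st.execute("FLASHBACK TABLE " + table + " TO SCN " + scnBeforeTest);
            }
        }
    }
}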
Two options I see for this:
Flashback Database: you can do a sort of global "rollback" of your database to a predefined restore point.
Use Data Pump: when your schema is ready, export it. After each test, import it with CONTENT=DATA_ONLY and TABLE_EXISTS_ACTION=TRUNCATE (for example). This can be fairly fast and doesn't require setting anything up for flashback; a rough sketch follows.
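For the Data Pump option, the per-test reset can be as simple as shelling out to impdp from the test harness. A rough sketch; the user, schema, directory object, and dump file names are placeholders:

import java.io.IOException;

public class DataPumpReset {
    public static void reset() throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
                "impdp", "testuser/testpw",
                "SCHEMAS=TESTSCHEMA",
                "DIRECTORY=DP_DIR",            // Oracle directory object pointing at the dump
                "DUMPFILE=reference.dmp",
                "CONTENT=DATA_ONLY",           // keep the objects, reload only the rows
                "TABLE_EXISTS_ACTION=TRUNCATE" // truncate each table before loading
        ).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IllegalStateException("impdp failed");
        }
    }
}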
In my Java application I have set:
flyway.setBaselineVersionAsString("7")
however, on a brand-new database that doesn't yet have the schema_version table, Flyway ignores the baseline setting and runs all migrations.
Is there a way to force the creation of the schema_version table before migrations start? I tried creating the table manually and the code worked fine. Or is there another solution to this problem?
Which command are you running, baseline or migrate?
If you are running baseline then you need to post more of your configuration in order to establish what is wrong, as creating a schema_version table with a baseline version is exactly what it does.
If you are running migrate the observed behaviour is correct: on a non-Flyway-managed database the schema_version table will be created and all migrations run. The one exception is if you have set baselineOnMigrate, which effectively runs an implicit baseline before the migrate starts.
Creating the schema_version table yourself is certainly something you should not be doing; you will completely compromise Flyway's intelligence.
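If migrate is indeed what you are running, here is a minimal sketch of the baselineOnMigrate route, using the same pre-5.x setter API as your setBaselineVersionAsString call (connection details are placeholders):

import org.flywaydb.core.Flyway;

public class MigrateWithBaseline {
    public static void main(String[] args) {
        Flyway flyway = new Flyway();
        flyway.setDataSource("jdbc:h2:./appdb", "sa", ""); // placeholder connection
        flyway.setBaselineVersionAsString("7");
        // On a non-empty schema without a schema_version table this performs an
        // implicit baseline at version 7 before migrating, instead of running
        // all migrations from scratch.
        flyway.setBaselineOnMigrate(true);
        flyway.migrate();
    }
}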
I have been in this kind of situation too many times. The way I do it is to check the result of flyway.info(): if it reports no current version, it means the schema has objects but no "schema_version" table, and then I set the baseline like you did.
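In code it looks roughly like this, with the same pre-5.x API (the connection details and the version string depend on your setup):

import org.flywaydb.core.Flyway;
import org.flywaydb.core.api.MigrationInfoService;

public class BaselineIfNeeded {
    public static void main(String[] args) {
        Flyway flyway = new Flyway();
        flyway.setDataSource("jdbc:h2:./appdb", "sa", ""); // placeholder connection
        MigrationInfoService info = flyway.info();
        // No applied migration recorded: existing objects but no schema_version table.
        if (info.current() == null) {
            flyway.setBaselineVersionAsString("7");
            flyway.baseline(); // creates schema_version with just the baseline row
        }
        flyway.migrate();
    }
}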
When my app runs, it first checks for the existence of the db. The first time it runs, the db should not exist; in that case it creates the db's tables and then populates specific tables with various support data. In testing this works fine. I then delete the db through adb shell, rerun the app, and it determines that the db still exists! I have two different methods that check existence, and both behave in the same aberrant way.
Method 1 simply wraps the db path in a Java File and uses its exists method to check. Method 2 is a bit more elaborate, passing the db path and name as args to SQLiteDatabase.openDatabase.
Both methods fail to determine that the db does not exist after I delete it in adb shell.
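In outline, the two checks look like this (simplified, and the names are changed):

import java.io.File;
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteException;

public class DbExistence {
    // Method 1: plain file check against the app's database directory.
    static boolean existsAsFile(Context ctx, String dbName) {
        File dbFile = ctx.getDatabasePath(dbName);
        return dbFile.exists();
    }

    // Method 2: try to actually open the database read-only.
    static boolean existsByOpening(Context ctx, String dbName) {
        try {
            SQLiteDatabase db = SQLiteDatabase.openDatabase(
                    ctx.getDatabasePath(dbName).getPath(), null,
                    SQLiteDatabase.OPEN_READONLY);
            db.close();
            return true;
        } catch (SQLiteException e) {
            return false; // openDatabase throws if the file isn't there
        }
    }
}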
I can provide the full code if needed, but thought I'd see if anyone has ideas about this behavior. I have restarted the Genymotion emulator, but this did not fix it.
Thanks for any suggestions.
No. I stop the app, delete the db, then restart (all in Android Studio). By the way, do you know why the db path, as reported by Genymotion, is not the same as my app's db path? Both do point to the same db, as shown in adb.
At the moment, I have a little JavaFX app that generates reports and statistics from the data on a remote MySQL server. I use EclipseLink for persistence. Since the access is read-only and the data doesn't always need to be fresh, I thought I could speed things up by using an embedded DB (H2) that can be synchronized to the remote server when and if the user wishes. The problem is, I don't have a clue how to go about it.
What I came up with so far is to execute mysqldump against the remote server and run the resulting SQL script locally. This is surely far from elegant, so: is there an off-the-shelf solution for this task?
Well, 50 tables with possibly a considerable number of relations between them: this can be tricky. As far as I know there is nothing that automates this for you; you will very likely have to build your own logic for it.
When I did something similar I used a "last update" approach: the local data carries the timestamp of the last time it was synced with the remote, and the remote data carries the timestamp of the last time it was updated there (on the table itself, or in a One-To-One relation to it). With that in place, every time the local user enters a part of the system that can be outdated, the client connects to the server and checks whether the remote last-update timestamp is newer than the local synced timestamp; if so, it refreshes the full object and its relations. It took some time to develop, but in the end it worked like a charm. There may be other ways to do it, but this was the way I found at the time. Hope it helps you with your problem.
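As a rough illustration of that check (all table and column names are made up):

import java.sql.*;

public class SyncCheck {
    // Refresh the local copy only when the remote row is newer than the last sync.
    static boolean needsRefresh(Connection remote, Connection local, long id) throws SQLException {
        Timestamp remoteUpdated =
                queryTimestamp(remote, "SELECT last_updated FROM report_data WHERE id = ?", id);
        Timestamp localSynced =
                queryTimestamp(local, "SELECT last_synced FROM report_data WHERE id = ?", id);
        return localSynced == null
                || (remoteUpdated != null && remoteUpdated.after(localSynced));
    }

    private static Timestamp queryTimestamp(Connection con, String sql, long id) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getTimestamp(1) : null;
            }
        }
    }
}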
I am writing a sample application in which all the SQL lives in script files.
Whenever I add new functionality I create a new SQL script.
What technology should I use so that, when my application runs, it first checks all the scripts: if the database doesn't exist it creates it, and if any new script has been added it executes only that one? I name the files script1.sql, script2.sql, and so on.
You are using a set of SQL scripts to set up your DB, and every time you have a modification you add another, incremental script.
You can use DbMaintain for managing them. It will keep track of which scripts have been executed and will know to execute only the latest patch. It seems to be exactly what you are looking for:
Keeps track of which scripts were executed (the DB version)
Can ease deployment to another environment (with a lower DB version)
Can do much more than that, but it seems this is what you need.
PS: I'm not sure whether there are other frameworks that can do the same.
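To make the bookkeeping concrete, here is a toy version of what such a tool does internally; DbMaintain itself is driven by configuration rather than code like this, and the H2 URL, folder, and table names are made up:

import java.nio.file.*;
import java.sql.*;
import java.util.*;

public class IncrementalScriptRunner {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:h2:./appdb", "sa", "")) {
            // Remember which scripts already ran.
            try (Statement st = con.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS executed_scripts (name VARCHAR(255) PRIMARY KEY)");
            }
            Set<String> done = new HashSet<>();
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT name FROM executed_scripts")) {
                while (rs.next()) done.add(rs.getString(1));
            }
            // Collect script1.sql, script2.sql, ... and sort them numerically,
            // so script10.sql comes after script2.sql.
            List<Path> scripts = new ArrayList<>();
            try (DirectoryStream<Path> ds = Files.newDirectoryStream(Paths.get("sql"), "script*.sql")) {
                ds.forEach(scripts::add);
            }
            scripts.sort(Comparator.comparingInt(
                    (Path p) -> Integer.parseInt(p.getFileName().toString().replaceAll("\\D", ""))));
            // Execute only the scripts that have not run yet.
            for (Path p : scripts) {
                String name = p.getFileName().toString();
                if (done.contains(name)) continue;
                try (Statement st = con.createStatement()) {
                    st.execute(new String(Files.readAllBytes(p)));
                }
                try (PreparedStatement ps =
                             con.prepareStatement("INSERT INTO executed_scripts (name) VALUES (?)")) {
                    ps.setString(1, name);
                    ps.executeUpdate();
                }
            }
        }
    }
}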
Does anyone have a code sample for a multithreaded case with about 25 threads that monitor a specific table in the database for changes and then execute a process based on each change?
If you just want to be notified in the client application that something has changed in the database and you need to react to that in the application itself (so that triggers are not an option), you can use Oracle's change notification.
In order to do that, you register a listener with the JDBC driver specifying the "result set" that should be monitored. That listener will be called whenever something changes in the database.
For details on how this works, see the manual:
http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/dbmgmnt.htm#CHDEJECF
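The registration itself looks roughly like this; the connection details and the query are placeholders, and the user needs the CHANGE NOTIFICATION privilege:

import java.sql.*;
import java.util.Properties;
import oracle.jdbc.OracleConnection;
import oracle.jdbc.OracleStatement;
import oracle.jdbc.dcn.DatabaseChangeEvent;
import oracle.jdbc.dcn.DatabaseChangeListener;
import oracle.jdbc.dcn.DatabaseChangeRegistration;

public class TableWatcher {
    public static void main(String[] args) throws SQLException {
        OracleConnection con = (OracleConnection) DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger"); // placeholders

        Properties props = new Properties();
        props.setProperty(OracleConnection.DCN_NOTIFY_ROWIDS, "true");
        DatabaseChangeRegistration reg = con.registerDatabaseChangeNotification(props);

        reg.addListener(new DatabaseChangeListener() {
            @Override
            public void onDatabaseChangeNotification(DatabaseChangeEvent event) {
                // Called by the driver whenever the monitored result set changes;
                // hand the event off to your worker threads from here.
                System.out.println("Change detected: " + event);
            }
        });

        // The query executed with this registration defines what is monitored.
        Statement stmt = con.createStatement();
        ((OracleStatement) stmt).setDatabaseChangeRegistration(reg);
        try (ResultSet rs = stmt.executeQuery("SELECT id FROM some_table")) {
            while (rs.next()) { /* drain the result set to complete the registration */ }
        }
        stmt.close();
    }
}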
If you want to monitor the table and also react to changes inside the database itself, then you should really consider triggers.
Simply put, triggers are a kind of stored procedure that runs automatically BEFORE or AFTER changes on a table. You can monitor UPDATE, INSERT, or DELETE statements and then take your action.
Here is a simple tutorial on Oracle triggers.
Try using a database trigger directly instead.
Check this question about getting events in Java from a database.