I have a locally installed MySQL server on my laptop, and I want to use the information in it for a unit test, so I want to create a script to generate all the data automatically. I'm using MySQL Workbench, which already generates the tables (from the model). Is it possible to use it, or another tool, to create an automatic script to populate them with data?
EDIT: I see now that I wasn't clear. I do have meaningful data for the unit test. When I said "generate all the data automatically", I meant the tool should take the meaningful data I have in my local DB today and create a script to generate the same data in other developers' DBs.
The most useful unit tests are those that reflect data you expect or have seen in practice. Pumping your schema full of random bits is not a substitute for carefully crafted test data. As @McWafflestix suggested, mysqldump is a useful tool, but if you want something simpler, consider LOAD DATA INFILE, which populates a table from a CSV file.
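For example, each developer could keep a shared CSV next to the test code and reload the table before each run. A minimal sketch with plain JDBC, assuming a customers.csv file and a customer table (all names and connection settings are placeholders; recent Connector/J versions also need allowLoadLocalInfile=true in the URL, and the server must have local_infile enabled):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CsvLoader {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/testdb?allowLoadLocalInfile=true",
                "user", "password");
             Statement st = con.createStatement()) {
            // Reload the table from the shared CSV before the tests run
            st.executeUpdate("DELETE FROM customer");
            st.execute("LOAD DATA LOCAL INFILE 'customers.csv' "
                    + "INTO TABLE customer "
                    + "FIELDS TERMINATED BY ',' "
                    + "LINES TERMINATED BY '\\n'");
        }
    }
}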
Some other things to think about:
Test with a database in a known state. Wrap all your database-interaction unit tests in transactions that always roll back (see the sketch after this list).
Use dbUnit to achieve the same end.
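For the rollback pattern above, a minimal sketch with plain JDBC and JUnit 5 (connection details and table names are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

import org.junit.jupiter.api.Test;

class RollbackTest {

    @Test
    void insertIsRolledBack() throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/testdb", "user", "password")) {
            con.setAutoCommit(false); // open a transaction
            try (Statement st = con.createStatement()) {
                st.executeUpdate("INSERT INTO customer (name) VALUES ('test')");
                // ... run assertions against the uncommitted state here ...
            } finally {
                con.rollback(); // leave the database exactly as we found it
            }
        }
    }
}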
Update
If you're in a Java environment, dbUnit is a good solution:
You can import and export data in an XML format through its APIs, which solves the issue of moving your data from your computer to other members of your team.
It's designed to restore database state: it snapshots the database before tests are executed and restores it at the end, so tests are free of side effects (i.e., they don't permanently change data).
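A minimal sketch of that export/import round trip (these are real dbUnit classes; the JDBC URL and table name are placeholders):

import java.io.File;
import java.io.FileOutputStream;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.database.QueryDataSet;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

public class DbUnitRoundTrip {
    public static void main(String[] args) throws Exception {
        IDatabaseConnection con = new DatabaseConnection(
                DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/mydb", "user", "password"));

        // Export: snapshot the tables you care about on your machine...
        QueryDataSet export = new QueryDataSet(con);
        export.addTable("customer"); // one line per table to export
        FlatXmlDataSet.write(export, new FileOutputStream("dataset.xml"));

        // Import: ...and load the same file on a teammate's machine.
        IDataSet dataSet = new FlatXmlDataSetBuilder().build(new File("dataset.xml"));
        DatabaseOperation.CLEAN_INSERT.execute(con, dataSet);
    }
}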
You can populate a table with defaults (if they are defined). This example uses SQL Server syntax:
CREATE TABLE #t(c1 int DEFAULT 0, c2 varchar(10) DEFAULT '-')
GO
-- GO 50 repeats the batch below 50 times, inserting 50 rows of defaults
INSERT INTO #t
DEFAULT VALUES
GO 50
SELECT * FROM #t
DROP TABLE #t
I came across this problem:
For our integration tests, we have an older database that is already populated with data. Some of the data doesn't have the right values (for example, a boolean column that also contains null values). Now, when creating some integration tests, they fail because the data doesn't have correct values.
What I thought would be a good idea was to have some scripts in the data.sql file that correct the data (for example, UPDATE my_table SET my_column = 0 WHERE my_column IS NULL). But the problem is that this update also commits to the database, and thus the data is changed (now there are no more null values). Changing the database data is not an option, so what I'm trying to do is some sort of rollback of the data.sql file at the end of each test / class. Can you please advise?
The version is Spring Boot 2.0.7.RELEASE, the dependency for testing is spring-boot-starter-test, the tests are annotated with @SpringBootTest, and the database is Oracle.
application.yml:
spring:
  datasource:
    driver-class-name: oracle.jdbc.OracleDriver
    url: ${URL}
    username: ${USERNAME}
    password: ${PASSWORD}
    continue-on-error: true
You might be able to use Oracle's flashback table feature to roll back all the DML changes that happened since data.sql was run. I'm not sure how Spring Boot testing works, but I assume there is some way to register pre- and post-test actions that can run the Oracle commands below.
First, you will likely need to enable row movement on the relevant tables. This step is only needed once for each table and cannot be undone. (But the change is also pretty harmless. If I recall correctly, the only downside is a very tiny increase in metadata space.)
alter table my_table1 enable row movement;
alter table my_table2 enable row movement;
Right before the test begins, create a uniquely named restore point that is used to record the exact system change number to roll back to.
create restore point restore_point_1;
Then run data.sql and all the other testing changes.
When testing is done, run a FLASHBACK TABLE command that will restore all of the relevant tables back to their state as of the restore point.
flashback table my_table1, my_table2 to restore point restore_point_1;
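If the tests run on JUnit 5, the pre- and post-test hooks might look like the following hypothetical sketch, where the table and restore point names are placeholders (@TestInstance(PER_CLASS) makes @BeforeAll/@AfterAll instance methods so the JdbcTemplate can be autowired):

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.TestInstance;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.test.context.junit.jupiter.SpringExtension;

@SpringBootTest
@ExtendWith(SpringExtension.class)
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class FlashbackIntegrationTest {

    @Autowired
    private JdbcTemplate jdbc;

    @BeforeAll
    void createRestorePoint() {
        // Record the point to roll back to when the class finishes
        jdbc.execute("CREATE RESTORE POINT restore_point_1");
    }

    @AfterAll
    void flashbackTables() {
        // Undo all DML performed since the restore point, then clean up
        jdbc.execute("FLASHBACK TABLE my_table1, my_table2 TO RESTORE POINT restore_point_1");
        jdbc.execute("DROP RESTORE POINT restore_point_1");
    }

    // ... @Test methods ...
}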
As commenters have suggested, there are cleaner, more modern ways to instantly recreate data. But not everybody has containers or build scripts set up, and I've seen this flashback approach used successfully by testers with only a small amount of effort.
There are some potential gotchas when using flashback. The restore point and the undo data behind it only last so long; if your tests run for days, you may need to look into guaranteed restore points and adjust your database's UNDO tablespace. FLASHBACK TABLE only undoes DML: it will not work if a table has been altered, and it will not restore things like stored procedures or sequence values. If you need everything flashed back, you might be able to use FLASHBACK DATABASE, but that command has complications of its own.
The situation looks like this:
the Spring production application uses a table with a varchar column (MySQL),
I need to change the column to binary blob,
and encrypt the existing data (using Java, not the database).
The two known steps of this process are:
I need to update the Entity, changing the annotations (from varchar to blob)
I need to run the migration changing the column format (using SQL with Flyway)
all this will be done when I stop the application, replace the application jar (with the new version containing the changes), and run it again.
The problem:
I need to take the old data from the column when it is still a varchar, encrypt it with Java, and after migration store it again in the blob column. (The new data which will be entered after the changes is not a problem, it will be automatically encrypted; the problem is with the old data.)
What approach should I use to deal with this update? What general steps would be correct?
This is more a general Spring question - how do you deal with the situation, when you need to change the structure existing on production and manipulate the old data to fit the new format?
For example, in PHP I use a terminal script integrated with the application and run in the application environment (the "artisan" command in Laravel); I can easily put the actions in the proper order: read the old data and remember it, change the database structure, transform the old data, and insert it into the new structure - all in one script and one transaction. I don't know how to do this in Spring.
I've found an answer - the Flyway Java-based migrations: https://flywaydb.org/documentation/migrations#java-based-migrations
Java-based migrations are a great fit for all changes that cannot easily be expressed using SQL. These would typically be things like BLOB & CLOB changes, advanced bulk data changes (recalculations, advanced format changes, …)
It looks like this:
package db.migration;

import org.flywaydb.core.api.migration.spring.SpringJdbcMigration;
import org.springframework.jdbc.core.JdbcTemplate;

/**
 * Example of a Spring Jdbc migration.
 */
public class V1_2__Another_user implements SpringJdbcMigration {
    public void migrate(JdbcTemplate jdbcTemplate) throws Exception {
        jdbcTemplate.execute("INSERT INTO test_user (name) VALUES ('Obelix')");
    }
}
Database manipulation can be interleaved here with Java code and data manipulation.
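Applied to the question, the old-data conversion could live in one such migration. A hypothetical sketch, assuming a table secrets with a varchar column payload and an application-provided encryption routine (all names are illustrative, and the ALTER statement uses MySQL syntax):

package db.migration;

import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Map;

import org.flywaydb.core.api.migration.spring.SpringJdbcMigration;
import org.springframework.jdbc.core.JdbcTemplate;

public class V2__Encrypt_payload implements SpringJdbcMigration {
    public void migrate(JdbcTemplate jdbcTemplate) throws Exception {
        // 1. Read the old plaintext values while the column is still varchar
        List<Map<String, Object>> rows =
                jdbcTemplate.queryForList("SELECT id, payload FROM secrets");

        // 2. Change the column type to a binary blob (MySQL syntax)
        jdbcTemplate.execute("ALTER TABLE secrets MODIFY payload BLOB");

        // 3. Encrypt each value in Java and write it back
        for (Map<String, Object> row : rows) {
            byte[] cipherText = encrypt(
                    ((String) row.get("payload")).getBytes(StandardCharsets.UTF_8));
            jdbcTemplate.update("UPDATE secrets SET payload = ? WHERE id = ?",
                    cipherText, row.get("id"));
        }
    }

    private byte[] encrypt(byte[] plainText) {
        // Stand-in for the application's real encryption routine
        throw new UnsupportedOperationException("plug in your encryption here");
    }
}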
I suspect that Liquibase has similar functionality.
I have a large amount of data in one table and a small amount of data in the other; is there any way to run a GoldenGate initial load so that the data that is already the same in both tables won't be changed, and the rest of the data gets transferred from one table to the other?
Initial loads are typically for when you are setting up the replication environment; however, you can do this for single tables as well. Everything in the Oracle database is driven by the System Change Number / Commit Sequence Number (SCN/CSN).
By using the SCN/CSN, you can identify what the starting point in the table should be and start CDC from there. Anything prior to the SCN/CSN will not get captured and would require you to move that data manually in some fashion; that can be done using Oracle Data Pump (Export/Import).
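For example, the boundary SCN can be read from the source database with one query. A minimal JDBC sketch (connection details are placeholders; the query needs SELECT access to V$DATABASE):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CurrentScn {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/ORCL", "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT current_scn FROM v$database")) {
            if (rs.next()) {
                // Use this value as the initial-load boundary / CDC starting point
                System.out.println("Current SCN: " + rs.getLong(1));
            }
        }
    }
}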
Oracle GoldenGate also provides a parameter called SQLPREDICATE that allows you to use a "where" clause against a table. This is handy with initial-load extracts, because you would do something like TABLE <owner>.<table>, SQLPREDICATE "AS OF SCN <scn>". Data as of that point would then be captured and moved to the target side for a replicat to apply into a table. You can see an example of that here:
https://www.dbasolved.com/2018/05/loading-tables-with-oracle-goldengate-and-rest-apis/
Official Oracle Doc: https://docs.oracle.com/en/middleware/goldengate/core/19.1/admin/loading-data-file-replicat-ma-19.1.html
On the replicat side, you would use HANDLECOLLISIONS to kick out any duplicates. Then, once the load is complete, remove it from the parameter file.
Lots of details, but I'm sure this is a good starting point for you.
That would require programming in Java:
1) First, read your database.
2) Decide which data has to be added to which table, based on the data that was read.
3) Execute update / insert queries to submit the data to the tables (see the sketch below).
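A minimal sketch of that loop with plain JDBC; all table, column, and connection names are illustrative:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class CopyNewRows {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "password");
             Statement read = con.createStatement();
             PreparedStatement write = con.prepareStatement(
                     "INSERT INTO target_table (id, val) VALUES (?, ?)")) {

            // 1) read the source data
            try (ResultSet rs = read.executeQuery("SELECT id, val FROM source_table")) {
                while (rs.next()) {
                    // 2) decide what belongs in the target (here: copy everything)
                    write.setLong(1, rs.getLong("id"));
                    write.setString(2, rs.getString("val"));
                    // 3) submit the data
                    write.executeUpdate();
                }
            }
        }
    }
}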
If you want to run an initial load using GoldenGate, the target tables should be empty. From the Oracle documentation:
Make certain that the target tables are empty. Otherwise, there may be duplicate-row errors or conflicts between existing rows and rows that are being loaded.
If they are not empty, you have to handle conflicts. For instance, if a row you are inserting already exists in the target table (INSERTROWEXISTS), you should discard it, if that's what you want to do.
I wrote Java/JDBC code which performs simple/basic operations on a database.
I want to add code which helps me keep track of when a particular database was accessed, updated, modified, etc. by this program.
I am thinking of creating another database inside my DBMS where these details or logs will be stored for each database involved.
Is this the best way to do it? Are there any other (preferably simple) ways to do this?
EDIT:
For now, I am using MySQL. But I also want my code to work with at least Oracle and MS SQL Server.
It is pretty standard to add a "last_modified" column to a table and then add an update trigger on the table that sets it to the current database time; then your apps don't need to worry about it. A "create_time" column, populated by an insert trigger, is often used as well.
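In MySQL specifically (5.6 and later), both columns can even be maintained without any triggers. A hypothetical sketch via JDBC, where the table name and connection details are placeholders (Oracle and SQL Server need different DDL plus the triggers described above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AddAuditColumns {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "password");
             Statement st = con.createStatement()) {
            // MySQL fills and maintains these automatically on INSERT/UPDATE
            st.execute("ALTER TABLE my_table "
                    + "ADD COLUMN create_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP, "
                    + "ADD COLUMN last_modified TIMESTAMP DEFAULT CURRENT_TIMESTAMP "
                    + "ON UPDATE CURRENT_TIMESTAMP");
        }
    }
}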
Update after comment:
Seems you are looking for audit logs. Some people write apps where data manipulation only happens through stored procedures, not through direct inserts and updates: a fixed API. So when you want to add an item to a table, you call the stored proc:
addItem(itemName, itemDescription)
Then the proc inserts into the item table and does whatever logging is necessary.
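From JDBC, such a fixed API would be invoked with a CallableStatement. A hypothetical sketch where the proc name and parameters mirror the example above (connection details are placeholders):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class AddItemCall {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "password");
             CallableStatement cs = con.prepareCall("{call addItem(?, ?)}")) {
            cs.setString(1, "widget");
            cs.setString(2, "a test item");
            cs.execute(); // the proc inserts the row and writes the audit entry
        }
    }
}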
Another technique, if you are using some kind of framework for your JDBC access (say, Spring), might be to intercept at that layer.
In almost all tables, I have the following columns:
CreatedBy
CreatedAt
These columns have default values of the current user and current time, respectively. They are populated when a row is added.
This solves only part of your problem. You can start adding triggers, but that gets complicated. Another method is to force modification access to the database through stored procedures, and then log the stored procedures. This has other advantages, in terms of controlling what users can do. But, you might want more flexibility.
A third possibility is auditing tools that keep track of all queries being run on the database. I think most databases have a way of turning on internal auditing, although these are very specific to the database. There are also third-party tools that let you see what has happened. Note, though, that these methods will affect performance if your database handles high transaction volumes.
For more information, you should revise your question to specify which database you are using or planning on using.
I have two databases. Changes like edits and insertions made to one need to be made to the second one as well, and vice versa.
One database is an old legacy database (with a very bad entity-relationship structure) behind a legacy app front-end currently used by users.
The second database is a newly built, better-structured rework of the legacy one, with a separate app front-end.
I want both apps (accessing the legacy and the new database, respectively) to run simultaneously, so users have the option to use either application, and changes made in one app are visible in the other.
I want to write triggers which call stored procedures, which restructure the data and put it in the opposite database.
My question is:
Is my line of execution as it is supposed to be? That is: triggers call stored procedures, which write to the opposite database.
Can triggers / stored procedures be written in Java?
Any good / recommended tips, tutorials, etc. out there?
There are many links on Google, but none of them are useful. I wonder whether MySQL and Java work together when it comes to MySQL triggers: is it possible or not? Is there a better way of achieving what I need?
Triggers are named database objects. They define an action that the database should take when certain database-related events occur. They are written in SQL, and their execution is transparent to the user: you write your Java JDBC code as usual, and the DBMS automatically executes the appropriate trigger whenever necessary.
mysql> delimiter //
mysql> CREATE TRIGGER insert_trigger AFTER INSERT ON Customer
    -> FOR EACH ROW
    -> BEGIN
    ->   -- Mirror the new row into the other schema on the same server
    ->   -- ("legacy_db" is a placeholder for your second database)
    ->   INSERT INTO legacy_db.Customer (Price, CustomerGroup, CityCode)
    ->   VALUES (NEW.Price, NEW.CustomerGroup, NEW.CityCode);
    -> END;
    -> //
This example shows how a trigger can write to another database (schema) on the same MySQL server: each row inserted into the new database is copied into the placeholder legacy_db schema. Be careful with auto-increment attributes.
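On the Java side nothing changes: a plain JDBC insert fires the trigger automatically (connection details are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class TriggerDemo {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/new_db", "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO Customer (Price, CustomerGroup, CityCode) "
                             + "VALUES (?, ?, ?)")) {
            ps.setBigDecimal(1, new java.math.BigDecimal("99.90"));
            ps.setInt(2, 32);
            ps.setInt(3, 11);
            ps.executeUpdate(); // the AFTER INSERT trigger mirrors this row into legacy_db
        }
    }
}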
I think you should forget about Java stored procedures in MySQL (it doesn't support them), but you can always move the business logic into your own Java program.