jOOQ continuous integration approaches - java

I've set up a Java project using jOOQ. We are about to create a CI pipeline on Jenkins.
Ideally, we would not like to commit the generated code to the repository, but to generate it as part of the build. However, jOOQ needs to connect to the database in order to generate the code.
The first approach would be to allow Jenkins to connect to the database. In case we are forbidden to access the DB from Jenkins, which approaches should we consider?
Any comments or hints are welcome and much appreciated.

Why not commit generated code to a repository?
There are pros and cons to each approach, as you have noticed, but in general committing the generated code has more pros. Look at that code like any other library with its own release cycle and versioning. You might have such libraries, call them libraryAbc-1.3.17.jar, and you don't have any issues committing that jar file to the repository, right? Especially when it's a third-party dependency.
Here's an interesting article illustrating the above with more details:
https://blog.jooq.org/2014/09/08/look-no-further-the-final-answer-to-where-to-put-generated-code
And a recent discussion on the jOOQ user group:
https://groups.google.com/d/msg/jooq-user/M3PKEhrXnZ8/0PyFVMfQAgAJ
Options for regenerating code without a database connection
Notice how that discussion references an option for re-generating the code from a meta model that is not the database, e.g.:
The XMLDatabase
The JPADatabase
The DDLDatabase
All of these have the advantage of taking the meta model from the file system, at the price of not supporting all the vendor-specific functionality that would be available when connecting directly to the database.
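As an illustration, here is a minimal sketch of configuring the code generator programmatically against a checked-in DDL script instead of a live database. It assumes a recent jOOQ version where the DDLDatabase lives in the jooq-meta-extensions artifact; the package name, target directory and script path are placeholders.

    import org.jooq.codegen.GenerationTool;
    import org.jooq.meta.jaxb.*;

    public class Codegen {
        public static void main(String[] args) throws Exception {
            // Reads the schema from a checked-in DDL script; no database connection needed
            Configuration configuration = new Configuration()
                .withGenerator(new Generator()
                    .withDatabase(new Database()
                        .withName("org.jooq.meta.extensions.ddl.DDLDatabase")
                        .withProperties(new Property()
                            .withKey("scripts")
                            .withValue("src/main/resources/db/schema.sql")))
                    .withTarget(new Target()
                        .withPackageName("com.example.generated")
                        .withDirectory("target/generated-sources/jooq")));

            GenerationTool.generate(configuration);
        }
    }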
But why not use testcontainers with your actual database product? An example can be seen here.
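For the Testcontainers route, here is a rough sketch of what a CI build step could do; the container image, credentials handling and the way the JDBC URL is passed to the generator are assumptions, not a prescribed setup.

    import org.testcontainers.containers.PostgreSQLContainer;

    public class CodegenWithTestcontainers {
        public static void main(String[] args) {
            // Spin up a throwaway database on the CI agent, apply the migrations,
            // then point the jOOQ generator's Jdbc configuration at it.
            try (PostgreSQLContainer<?> db = new PostgreSQLContainer<>("postgres:15")) {
                db.start();
                String url = db.getJdbcUrl();
                String user = db.getUsername();
                String password = db.getPassword();
                // ... run the schema migrations against `url`, then call
                // GenerationTool with a Jdbc section (url/user/password)
                // instead of the DDLDatabase.
            }
        }
    }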

Related

How to move from Hibernate ddl-auto to production

I'm building a Spring Boot web app with MySQL, and until now I have used
spring.jpa.hibernate.ddl-auto=create-drop
in my properties file. Now I want to move to production and I can't use this setting anymore because, as you all know, the worst thing about it is that it destroys all the data and recreates the tables on every deploy.
For dev purposes it's wonderful, but what do I need to do next? I want it to behave exactly as it did with ddl-auto, but to persistently save the data and, most importantly, never drop it.
P.S. Does hibernate.ddl-auto have anything to do with the JPA repositories?
I use CrudRepository a lot and I need it to keep working. Will it?
The best thing to do, in my opinion, is:
use the option spring.jpa.hibernate.ddl-auto=create-drop to create the DB schema and the default data (if any) in the development environment
export the created DB schema as a normal DDL script (e.g. with a SchemaExport sketch like the one below)
give the DDL to the DBAs to check whether any improvements are needed (e.g. add some indexes, review some FKs, etc.)
adapt the JPA models after the DBAs' review
give the final DDL to the "production DBAs" so they can create the final, correct schema in the production environment as well
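A minimal sketch of that export step, assuming Hibernate 5.2+; the dialect, the entity class Customer and the output file name are placeholders:

    import java.util.EnumSet;
    import org.hibernate.boot.Metadata;
    import org.hibernate.boot.MetadataSources;
    import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
    import org.hibernate.tool.hbm2ddl.SchemaExport;
    import org.hibernate.tool.schema.TargetType;

    public class ExportSchema {
        public static void main(String[] args) {
            // Build the Hibernate metadata from the mapped entities only (no live DB needed)
            Metadata metadata = new MetadataSources(
                    new StandardServiceRegistryBuilder()
                        .applySetting("hibernate.dialect", "org.hibernate.dialect.MySQL57Dialect")
                        .build())
                .addAnnotatedClass(Customer.class)
                .buildMetadata();

            // Write the CREATE statements to a script the DBAs can review
            SchemaExport export = new SchemaExport();
            export.setOutputFile("create-schema.sql");
            export.setFormat(true);
            export.setDelimiter(";");
            export.createOnly(EnumSet.of(TargetType.SCRIPT), metadata);
        }
    }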
Regarding your question:
Does hibernate.ddl-auto have anything to do with the JPA repositories? I use CrudRepository a lot and I need it to keep working. Will it?
You can of course keep using the CRUD repositories; this option does not influence your business logic.
I hope this is useful.
Angelo
You don't want to be sending DDLs around. You will end up trying to invent a version control system for your scripts by naming them in a special way or putting them in folders, you will struggle communicating with the DBAs, scripts will break, and so on.
You want your database definition code to be part of your code base so you can put it under version control (yes, git).
Try Liquibase for this. It will help you do automatic updates of the schema, the data, everything DB-related, and it knows how to migrate your app's DB, let's say, from 1.1 to 1.2 but also from 1.1 to 1.6. There are also other DB migration tools like Flyway; you can look them up and play around.
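To give an idea of how little code such a tool needs on the application side, here is a minimal Flyway sketch; the connection details and the script location are placeholders, and Liquibase has an equivalent API.

    import org.flywaydb.core.Flyway;

    public class Migrate {
        public static void main(String[] args) {
            // Apply every pending versioned script (V1__init.sql, V2__add_index.sql, ...)
            // found under db/migration to bring the schema up to date.
            Flyway flyway = Flyway.configure()
                    .dataSource("jdbc:mysql://localhost:3306/mydb", "user", "password")
                    .locations("classpath:db/migration")
                    .load();
            flyway.migrate();
        }
    }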

Execute .sql files using JDBC

I want to test a handwritten DAO that uses the SQLite JDBC driver. My plan was to keep the schema and data insertion in version control as .sql files and execute them before the tests to get a populated database that I can use for testing.
Searching for a solution to execute a whole SQL script using JDBC turned up a bunch of Stack Overflow threads saying that it is not possible, and providing some parsing scripts that split the SQL script into separate SQL statements (SQLScriptRunner).
These posts were mostly 3+ years old, so I am wondering if there still is no "easy" way to execute SQL scripts using the JDBC API.
I am asking because SQLite gives me the option to clone a database from an existing one, which I would prefer over a big script-executor implementation (the executor would probably be bigger than all my data access code combined).
So, is there an easy, out-of-the-box approach to execute SQL scripts using JDBC, or is it still only possible with some parsing script?
I guess the Spring Framework ScriptUtils class might do the trick for you. Please check:
http://docs.spring.io/autorepo/docs/spring/4.0.9.RELEASE/javadoc-api/org/springframework/jdbc/datasource/init/ScriptUtils.html
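A minimal sketch of what that could look like for the SQLite case described in the question; the file names and the JDBC URL are assumptions.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.springframework.core.io.ClassPathResource;
    import org.springframework.jdbc.datasource.init.ScriptUtils;

    public class PopulateTestDb {
        public static void main(String[] args) throws Exception {
            // Run the checked-in schema and data scripts against a fresh SQLite database
            try (Connection connection = DriverManager.getConnection("jdbc:sqlite:target/test.db")) {
                ScriptUtils.executeSqlScript(connection, new ClassPathResource("schema.sql"));
                ScriptUtils.executeSqlScript(connection, new ClassPathResource("data.sql"));
            }
        }
    }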
My plan was to keep the schema and data insertion in version control as .sql files and execute them before the tests to get a populated database that I can use for testing.
For this purpose there is a library called DbUnit: http://dbunit.sourceforge.net/
I personally find it a little bit tricky to use without a proper wrapper. Some of those wrappers:
Spring Test DbUnit https://springtestdbunit.github.io/spring-test-dbunit/
Unitils DbUnit http://www.unitils.org/tutorial-database.html
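For instance, here is a rough sketch of a test using the Spring Test DbUnit wrapper; the DAO, the test configuration class and the dataset file are hypothetical.

    import static org.junit.Assert.assertNotNull;

    import com.github.springtestdbunit.DbUnitTestExecutionListener;
    import com.github.springtestdbunit.annotation.DatabaseSetup;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.TestExecutionListeners;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
    import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(classes = TestConfig.class)
    @TestExecutionListeners({ DependencyInjectionTestExecutionListener.class,
                              DbUnitTestExecutionListener.class })
    @DatabaseSetup("/users-dataset.xml")   // flat XML dataset loaded before each test
    public class UserDaoTest {

        @Autowired
        private UserDao userDao;

        @Test
        public void findsSeededUser() {
            assertNotNull(userDao.findByName("alice"));
        }
    }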
While not an "out of the box" JDBC solution, I would be inclined to use SqlTool. It is especially friendly with HSQLDB, but it can be used with (almost?) any JDBC driver. (The documentation shows examples on how to connect with PostgreSQL, SQL Server, Oracle, Derby, etc.) It's available via Maven, and the JAR file is only ~150KB. For an example of how to run an SQL script with it, see my other answer here.
Regarding the proposed Spring ScriptUtils solution, in light of this comment in your question ...
I would prefer [some other solution] over a big script-executor implementation (the executor would probably be bigger than all my data access code combined).
... note that the dependencies for spring-jdbc include spring-core, spring-beans, spring-tx, and commons-logging-1.2, for a total of ~2.5 MB. That is over 15 times larger than the SqlTool JAR file, and it doesn't even include any additional DbUnit "wrappers" that may or may not be needed to make ScriptUtils easier to use.

What is the proper way to test a hibernate dialect?

I have written my own Hibernate dialect for an RDBMS. What is the best way to test the dialect? Are there any test suites/tests that could be helpful for me? What is the best way to make sure that my implementation is correct and supports all necessary features?
This is purely from reading stuff from the Hibernate GitHub repos, not from experience with "doing" Hibernate testing. However, it may be sufficient to get you started ...
The Hibernate matrix testing framework allows you to run tests against specific database backends; see https://github.com/hibernate/hibernate-matrix-testing. The README.md file says how to configure the framework for a specific database.
The Hibernate ORM tree includes a number of tests for the core of Hibernate; see https://github.com/hibernate/hibernate-orm. The README.md for that project mentions Gradle tasks for running the tests. (I haven't looked at the available tests in detail, but since the ORM tree includes the "dialect" classes for a range of databases, I would imagine that includes the corresponding tests.)
Hibernate's build and test framework is implemented using Gradle, so I expect that you will need to get your head around that technology to figure out how it all works.
Not aware of any such suite or tests. To start with:
test various query scenarios and see if the queries generated for that DB run fine when executed standalone (e.g. with a minimal bootstrap like the sketch below);
test the exceptions expected in all scenarios;
check how the dialect behaves with a new/old version of the DB driver.
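A minimal sketch of such a standalone check, assuming Hibernate 5.2+; the dialect class, the entity and the connection settings are placeholders.

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public class DialectSmokeTest {
        public static void main(String[] args) {
            // Boot Hibernate against the target RDBMS with the custom dialect and run
            // a simple query so the generated SQL (hibernate.show_sql) can be inspected.
            Configuration cfg = new Configuration()
                    .setProperty("hibernate.dialect", "com.example.MyCustomDialect")
                    .setProperty("hibernate.connection.url", "jdbc:mydb://localhost/testdb")
                    .setProperty("hibernate.connection.username", "test")
                    .setProperty("hibernate.connection.password", "test")
                    .setProperty("hibernate.show_sql", "true")
                    .addAnnotatedClass(SomeEntity.class);

            try (SessionFactory sessionFactory = cfg.buildSessionFactory();
                 Session session = sessionFactory.openSession()) {
                session.createQuery("from SomeEntity", SomeEntity.class).list();
            }
        }
    }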
Best of luck!

java library to maintain database structure

My application is always evolving, so occasionally, when the version is upgraded, some tables need to be created/altered/deleted, some data modified, etc. In general, some SQL code needs to be executed.
Is there a Java library that can be used to keep my database structure up to date (by analyzing something like "db structure version" information and executing the custom SQL needed to update from one version to another)?
Also, it would be great to have some basic actions (like add/remove column) ready to use with minimal configuration, i.e. name/type and no SQL code.
Try DBDeploy. Although I haven't used it in the past, it sounds like this project would help in your case. DBDeploy is a database refactoring manager that:
"Automates the process of establishing
which database refactorings need to be
run against a specific database in
order to migrate it to a particular
build."
It is known to integrate with both Ant and Maven.
Try Liquibase.
Liquibase is an open source (Apache 2.0 licensed), database-independent library for tracking, managing and applying database changes. It is built on a simple premise: all database changes are stored in a human readable yet trackable form and checked into source control.
Supported features:
Extensibility
Merging changes from multiple developers
Code branches
Multiple Databases
Managing production data as well as various test datasets
Cluster-safe database upgrades
Automated updates or generation of SQL scripts that can be approved and applied by a DBA
Update rollbacks
Database "diff"s
Generating starting change logs from existing databases
Generating database change documentation
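To give an idea of how it can be embedded in an application, here is a rough sketch using the classic Liquibase Java API (Liquibase 3.x style); the changelog path and connection details are placeholders, and many projects would instead use the Maven/Gradle plugin or Spring integration.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import liquibase.Contexts;
    import liquibase.Liquibase;
    import liquibase.database.Database;
    import liquibase.database.DatabaseFactory;
    import liquibase.database.jvm.JdbcConnection;
    import liquibase.resource.ClassLoaderResourceAccessor;

    public class ApplyChangelog {
        public static void main(String[] args) throws Exception {
            // Apply every changeset from the checked-in changelog that has not yet
            // been recorded in Liquibase's DATABASECHANGELOG tracking table.
            try (Connection connection =
                     DriverManager.getConnection("jdbc:mysql://localhost/mydb", "user", "password")) {
                Database database = DatabaseFactory.getInstance()
                        .findCorrectDatabaseImplementation(new JdbcConnection(connection));
                Liquibase liquibase = new Liquibase("db/changelog.xml",
                        new ClassLoaderResourceAccessor(), database);
                liquibase.update(new Contexts());
            }
        }
    }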
We use a piece of software called Liquibase for this. It's very flexible and you can set it up pretty much however you want it. We have it integrated with Maven so our database is always up to date.
You can also check Flyway (400 questions tagged on SO) or MyBatis (1049 questions tagged). To add to the comparison, the other options mentioned: Liquibase (663 questions tagged) and DBDeploy (24 questions tagged).
Another resource you may find useful is the feature comparison on the Flyway website (other related projects are mentioned there as well).
You should take a look at OR mapping libraries, e.g. Hibernate.
Most ORM mappers have logic to do schema upgrades for you. I have successfully used Hibernate, which gets at least the basic stuff right automatically.
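For example, a minimal sketch of enabling Hibernate's built-in schema update; the connection settings and entity are placeholders, and note that this mechanism only adds what is missing rather than performing full migrations.

    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public class SchemaUpdateExample {
        public static void main(String[] args) {
            // "update" makes Hibernate add missing tables and columns at startup;
            // it never drops or migrates existing data.
            SessionFactory sessionFactory = new Configuration()
                    .setProperty("hibernate.hbm2ddl.auto", "update")
                    .setProperty("hibernate.connection.url", "jdbc:mysql://localhost/mydb")
                    .setProperty("hibernate.connection.username", "user")
                    .setProperty("hibernate.connection.password", "password")
                    .addAnnotatedClass(SomeEntity.class)
                    .buildSessionFactory();
            sessionFactory.close();
        }
    }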

JDBC SQL Server Database Migrations

I'm working on a Java web application (Adobe Flex front-end, JPA/Hibernate/BlazeDS/Spring MVC backend) and will soon reach the point where I can no longer wipe the database and regenerate it.
What's the best approach for handling changes to the DB schema? The production and test databases are SQL Server 2005, devs use MySQL, and unit tests run against an HSQLDB in-memory database. I'm fine with having dev machines continue to wipe and reload the DB from sample data using Hibernate to regenerate the tables. However, for a production deploy the DBA would like to have a DDL script that he can execute manually.
So, my ideal solution would be one where I can write Rails-style migrations, execute them against the test servers, and after verifying that they work, write out SQL Server DDL that the DBA can execute on the production servers (and which has already been validated against the test servers).
What's a good tool for this? Should I be writing the DDL manually (and just let dev machines use Hibernate to regenerate the DB)? Can I use a tool like migrate4j (which seems to have limited support for SQL Server, if any)?
I'm also looking to integrate DB manipulation scripts into this process (for example, converting a "Name" column into "First Name" and "Last Name" columns via a JDBC script that splits all the existing strings).
Any suggestions would be much appreciated!
What's the best approach for handling changes to the DB schema?
Idempotent change scripts with a version table (and a tool that applies all the change scripts with a number greater than the version currently stored in the version table). Also check the post Bulletproof Sql Change Scripts Using INFORMATION_SCHEMA Views.
To implement this, you could roll your own solution or use existing tools like DbUpdater (mentioned in the comments on the change scripts post), LiquiBase or dbdeploy. The latter has my preference.
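A rough sketch of the "roll your own" variant of that approach; the schema_version table, the script directory and the NNN_description.sql naming scheme are assumptions, and whether a whole script can be run in one execute call depends on the JDBC driver.

    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;
    import java.sql.*;
    import java.util.*;

    public class SchemaUpgrader {

        // Applies every script whose numeric prefix is greater than the version
        // recorded in the schema_version table, in ascending order.
        public static void upgrade(Connection conn, Path scriptDir) throws Exception {
            int current;
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COALESCE(MAX(version), 0) FROM schema_version")) {
                rs.next();
                current = rs.getInt(1);
            }

            // Scripts are named e.g. 003_split_name_column.sql
            SortedMap<Integer, Path> pending = new TreeMap<>();
            try (DirectoryStream<Path> scripts = Files.newDirectoryStream(scriptDir, "*.sql")) {
                for (Path script : scripts) {
                    int version = Integer.parseInt(script.getFileName().toString().split("_")[0]);
                    if (version > current) {
                        pending.put(version, script);
                    }
                }
            }

            for (Map.Entry<Integer, Path> entry : pending.entrySet()) {
                try (Statement st = conn.createStatement()) {
                    st.execute(new String(Files.readAllBytes(entry.getValue()), StandardCharsets.UTF_8));
                    st.executeUpdate("INSERT INTO schema_version (version) VALUES (" + entry.getKey() + ")");
                }
            }
        }
    }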
I depend on Hibernate to create whatever it needs on the production server. There's no risk of losing data because it never removes anything: it only adds what is missing.
On the current project, we have established a convention by which any feature that requires a change in the database (schema or data) must provide its own DDL/DML snippets, meaning that all we need to do is aggregate the snippets into a single script and execute it to bring production up to date. None of this works at a very large scale (the order of the snippets becomes critical, not everyone follows the convention, etc.), but in a small team with an iterative process it works just fine.
