It adds new ones, but as far as I can see it does not drop the old ones?
When I say old ones, I mean properties of entity objects that are now completely removed, where previously they were present and annotated with @Column.
Are my only options to drop the column manually or change the config value to create? Neither of which is particularly charming.
Or something else?
For what it's worth, never EVER use hbm2ddl.auto on any live/production database.
Yes, it is "working as intended" that "update" doesn't drop any columns that are not referenced (probably to allow you to use "legacy" databases that have columns that are not used by your hibernate app, but may be used by external applications). However, in certain circumstances, hibernate can drop and recreate columns if, for instance, you change the datatype in your entity. That is one of the reasons you should never use it for any production system.
Personally, I would never trust an automated "black box" framework to handle changes to the data model in anything but strictly local/dev environments. I have always set it up so that in local dev environments you may use create-drop. Once it's time to start promoting your app to central test/stage and then prod, all database changes are done by DBAs with good old fashioned DDL scripts. Data is far too valuable to risk on a potential bug or unexpected behavior in hibernate (or any other ORM/automated framework). I even make sure that the database user configured in my applications doesn't even have create/drop/alter privileges in the database, just to prevent disasters happening due to bad configuration in hibernate.
So, to answer your question - if you want hibernate to always maintain your database reflecting your entities exactly, "create-drop" is your only option. Just don't ever use it on anything but local dev databases.
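For illustration, here is a minimal sketch of choosing that setting per environment when building a SessionFactory programmatically (the property name is the standard Hibernate setting; the isLocalDev flag is just a hypothetical stand-in for however you detect your environment):

    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public class SessionFactoryBuilder {
        public static SessionFactory build(boolean isLocalDev) {
            Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
            // "create-drop" only for throwaway local databases; "validate" everywhere else,
            // so Hibernate checks the mappings against the schema but never issues DDL itself.
            cfg.setProperty("hibernate.hbm2ddl.auto", isLocalDev ? "create-drop" : "validate");
            return cfg.buildSessionFactory();
        }
    }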
I'd have a look into Liquibase for keeping your database in sync with your entities. Maybe a bit of an overkill, but well worth it.
Related
The answer above says not to trust the hibernate.hbm2ddl.auto setting for production.
My understanding of using an ORM:
1) To avoid designing & normalising the DB schema at the database layer (say, an RDBMS). In the MongoDB world, an ODM is used instead.
2) To avoid embedding the SQL query language in code (say, Java).
3) To just think about storing and retrieving objects (in the OOP sense).
Running DDL scripts manually seems to break the purpose of using an ORM tool and looks similar to the plain JDBC approach, except that the ORM provides the SQL dialect for the vendor-specific database.
For production, is running DDL scripts manually mandatory for safety?
Running DDL scripts manually breaks the purpose of using an ORM tool.
No, it does not.
An Object-Relational Mapping tool is a tool that helps translate data from your tables into objects that you can use in your object-oriented programming language - it has nothing to do with database administration.
Hibernate can generate a DDL based on what your classes look like right now, but it has no sense of history.
If all you're doing is adding new columns or tables you'll probably be fine, but the minute you rename a column you're out of luck: Hibernate will see the old column, won't find a mapping for it, and will remove it and then create a new column using the new name. If you have a non-null constraint on that column you're screwed, because you can't tell Hibernate what the default value is (well, there's a hack, but please don't do this).
You're also very limited in how you can change the types of columns - if the contents of the column can't be translated automatically by the database you're out of luck.
As an example, we switched our databases from storing UUIDs in binary to storing them as VARCHAR a while back, and we had to manually convert them from binary to hexadecimal notation because MySQL can't do that automatically - you'd be properly screwed if you tried to do that with Hibernate's auto-DDL.
There's also no way of telling Hibernate where to create indexes - you'll get an index on each primary key column but if you want extra indexes you'll have to add these manually.
The DDL auto-generation of Hibernate is good for validating that your classes map correctly to your tables, but it should never be used to alter your production databases.
So to answer your question:
For production, is a manual run of DDL scripts mandatory for safety?
Yes! And I recommend you use a management tool like Liquibase or Flyway to aid with it.
Yes, they are required - if you want to work efficiently, that is.
Running DDL scripts manually breaks the purpose of using an ORM tool
No it doesn't. ORM stands for Object Relational Mapping, meaning it maps the relational data of the RDBMS to Objects. Nowhere does it imply that the database schema must be changed by the ORM, even though the possibility exists (and works in very simple cases).
Besides you're not going to be running anything manually. There are database migration/refactoring products like Flyway and Liquibase that attempt to solve the problem of a database schema changing over time. They're also separate products, so you don't need to care whether you're using Hibernate or some other method of data access. They also try to provide some amount of transactionality, meaning you can revert a change to the schema in some cases.
In any non-trivial project one would try to make sure they can improve the database without being permanently locked into a legacy schema, as well as making incredibly sure that the data stays safe. A proper tool designed for that purpose makes it a lot easier, an ORM's half-baked mechanism does not.
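As a rough illustration of what such a tool looks like in code, here is a minimal Flyway sketch (modern Flyway 5+ fluent API; the JDBC URL and credentials are placeholders, and the versioned SQL scripts are assumed to live under db/migration on the classpath):

    import org.flywaydb.core.Flyway;

    public class DbMigrator {
        public static void main(String[] args) {
            // Versioned scripts (V1__init.sql, V2__add_index.sql, ...) are picked up
            // from classpath:db/migration and applied in order, once each.
            Flyway flyway = Flyway.configure()
                    .dataSource("jdbc:postgresql://localhost:5432/mydb", "app_user", "secret")
                    .load();
            flyway.migrate();
        }
    }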
I have a much-used project that I am currently updating. There are several places where this project can be installed, and in the future it is not certain which version is used where, or to what version it might be updated. Right now they are all the same, though.
My problem stems from the fact that there might be many changes to the hibernate entity classes, and it must be easy to update to a newer version without any hassle and without loss of database content. Just replace the WAR, start it, and it should migrate itself.
To my knowledge, Hibernate does not alter tables unless hibernate.hbm2ddl.auto=create, but that actually throws away all the data?
So right now, when the Spring context has fully loaded, it executes a bean that migrates the database to the current version by going through all the changes from versionX to versionY (the version it previously was at is saved in the database) and manually altering the tables.
It's not much hassle doing a few hard-coded ALTER TABLE to add some columns, but when it comes to adding complete new tables, it feels silly to have to write all that...
So my question(s) is this:
Is there any way to send an entity class and a dialect to Hibernate code somewhere, and get back a valid SQL query for creating a table? And even better, somehow create an SQL string for adding a column to a table, dialect-safe?
I hope this is not a silly question, and I have not missed something obvious when it comes to Hibernate...
Have you tried
hibernate.hbm2ddl.auto=update
It retains the database with its data and only appends the columns and tables you have added or changed in your entities.
I don't think you'll be able to fully automate this. Hibernate has the hbm2ddl tool (available as an ant task or a maven plugin) to generate the required DDL statements from your hibernate configuration to create an empty database but I'm not aware of any tools that can do an automatic "diff" between two versions. In any case you're probably better off doing the diff carefully by hand, as only you know your object model well enough to be able to pick the right defaults for new properties of existing entities etc.
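For reference, a hedged sketch of driving that DDL generation from Java with the classic org.hibernate.tool.hbm2ddl.SchemaExport class (Hibernate 3.x/4.x API; newer versions moved the schema tooling, and the output file name is just an example):

    import org.hibernate.cfg.Configuration;
    import org.hibernate.tool.hbm2ddl.SchemaExport;

    public class DdlGenerator {
        public static void main(String[] args) {
            Configuration cfg = new Configuration().configure(); // hibernate.cfg.xml with dialect + mappings
            SchemaExport export = new SchemaExport(cfg);
            export.setOutputFile("create-schema.sql");
            export.setDelimiter(";");
            // create(script, export): write/print the DDL but don't touch a live database
            export.create(true, false);
        }
    }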
Once you have worked out your diffs you can use a tool like liquibase to manage them and handle actually applying the updates to a database at application start time.
Maybe you should try a different approach. Instead of generating a schema at runtime update, make one 'by hand' (it could be based on a Hibernate-generated script, though).
Store a version number in the database and create an update script for every next version. The only thing you have to do now is determine in which version the database currently is and sequentially run the necessary update scripts to get it to the current version.
To make it extra robust you can make a unit/integration test which runs every possible database update and checks the integrity of the resulting database.
I used this method for an application I built and it works flawlessly. Another example of an implementation of this pattern is Android - they have an upgrade method in their API:
http://developer.android.com/reference/android/database/sqlite/SQLiteOpenHelper.html#onUpgrade(android.database.sqlite.SQLiteDatabase, int, int)
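A bare-bones sketch of that version-check-and-upgrade pattern in plain JDBC (the schema_version table, the scripts, and the version numbers are made up for illustration):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SchemaUpgrader {
        // Hypothetical ordered upgrade scripts; index i upgrades version i to i + 1.
        private static final String[] UPGRADES = {
            "ALTER TABLE customer ADD COLUMN email VARCHAR(255)",                  // v0 -> v1
            "CREATE TABLE audit_log (id BIGINT PRIMARY KEY, msg VARCHAR(1000))"    // v1 -> v2
        };

        public static void upgrade(Connection conn) throws Exception {
            int current;
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT version FROM schema_version")) {
                rs.next(); // assume a single row holding the current version
                current = rs.getInt(1);
            }
            try (Statement st = conn.createStatement()) {
                for (int v = current; v < UPGRADES.length; v++) {
                    st.executeUpdate(UPGRADES[v]);
                    st.executeUpdate("UPDATE schema_version SET version = " + (v + 1));
                }
            }
        }
    }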
Don't use Hibernate's DDL generation. It throws away your data if you want to migrate. I suggest you take a look at Liquibase. Liquibase is a database version control tool. It works using changesets. Each changeset can be created manually, or you can let Liquibase read your Hibernate config and generate a changeset.
Liquibase can be started via Spring so it should fit right in with your project ;-)
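For example, a minimal Spring Java-config sketch using the SpringLiquibase integration class (the changelog path is illustrative):

    import javax.sql.DataSource;
    import liquibase.integration.spring.SpringLiquibase;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class LiquibaseConfig {
        @Bean
        public SpringLiquibase liquibase(DataSource dataSource) {
            // Runs the changelog when the context starts, before the rest of the app touches the schema.
            SpringLiquibase liquibase = new SpringLiquibase();
            liquibase.setDataSource(dataSource);
            liquibase.setChangeLog("classpath:db/changelog-master.xml");
            return liquibase;
        }
    }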
I've got an Oracle database that has two schemas in it which are identical. One is essentially the "on" schema, and the other is the "off" schema. We update data in the off schema and then switch the schemas behind an alias which our production servers use. Not a great solution, but it's what I've been given to work with.
My problem is that there is a separate application that will now be streaming data to the database (also handed to me) which is currently only updating the alias, which means it is only updating the "on" schema at any given time. That means that when the schemas get switched, all the data from this separate application vanishes from production (the schema it is in is now the "off" schema).
This application is using Hibernate 3.3.2 to update the database. There's Spring 3.0.6 in the mix as well, but not for the database updates. Finally, we're running on Java 1.6.
Can anyone point me in a direction to updating both "on" and "off" schemas simultaneously that does not involve rewriting the whole DAO layer using Spring JDBC to load two separate connection pools? I have not been able to find anything about getting hibernate to do this. Thanks in advance!
You shouldn't be updating two separate databases this way, especially from the application's point of view. All it should know/care about is whether or not the data is there, not having to mess with two separate databases.
Frankly, this sounds like you may need to purchase an ETL tool. Even if you can't get it to update the 'on' schema from the 'off' one (fast enough to be practical), you will likely be able to use it to keep the two in sync (mirror changes from 'on' to 'off').
HA-JDBC is a replicating JDBC Driver we investigated for a short while. It will automatically replicate all inserts and updates, and distribute all selects. There are other database specific master-slave solutions as well.
On the other hand, I wouldn't recommend doing this for 4-8 hour procedures. Better to lock the database first, update one database, then backup-restore a copy, and then unlock again.
I just wanted to hear the opinion of Hibernate experts about DB schema generation best practices for Hibernate/JPA based projects. Especially:
What strategy to use when the project has just started? Is it recommended to let Hibernate automatically generate the schema in this phase or is it better to create the database tables manually from earliest phases of the project?
Supposing that throughout the project the schema has been generated using Hibernate, is it better to disable automatic schema generation and manually create the database schema just before the system is released into production?
And after the system has been released into production, what is the best practice for maintaining the entity classes and the DB schema (e.g. adding/renaming/updating columns, renaming tables, etc.)?
It's always recommended to generate the schema manually, preferably with a tool that supports database schema revisions, such as the great Liquibase. Generating the schema from the entities is great in theory, but it is fragile in practice and causes lots of problems in the long run (trust me on this).
In production it's always best to have a manually generated and reviewed schema.
You make an update to an entity and create a matching update script (revision) to update your database schema to reflect the entity change. You can create a custom solution (I've written a few) or use something more popular like Liquibase (it even supports rollbacks of schema changes). If you're using a build tool such as Maven or Ant, it's recommended to plug the DB schema update utility into the build process so that fresh builds stay in sync with the schema.
Although disputable, I'd say that the answer to all 3 questions is: let hibernate automatically generate the tables in the schema.
I haven't had any problems with that so far. You might need to clean some fields up manually from time to time, but this is no headache compared to separately keeping track of DDL scripts - i.e. managing their revisions and synchronizing them with entity changes (and vice versa).
For deploying to production - an obvious tip - first make sure everything is generated OK in the test environment, and then deploy to production.
Manually, because:
The same database may be used by different applications, and not all of them will be using Hibernate or even Java. The database schema should not be dictated by the ORM; it should be designed around the data and business requirements.
The datatypes chosen by hibernate might not be best suited for the application.
As mentioned in an earlier comment, changes to the entities would require manual intervention if data loss is not acceptable.
Things such as additional properties (in the generic sense, not Java properties) on join tables work wonderfully in an RDBMS but are somewhat complex and inefficient to use through an ORM. Doing such a mapping from ORM -> RDBMS might create tables that are not efficient. In theory it is possible to build the exact same join table using Hibernate-generated code, but it requires some special care while writing the entities (a sketch follows below).
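To make that last point concrete, one common workaround is to promote the join table to an entity of its own so the extra attribute has somewhere to live. A hedged sketch with made-up names (Student and Course are assumed to be existing @Entity classes):

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.ManyToOne;

    // Instead of a bare @ManyToMany join table, the association itself becomes
    // an entity so the extra column (grade) can be mapped explicitly.
    @Entity
    public class Enrollment {
        @Id @GeneratedValue
        private Long id;

        @ManyToOne private Student student;
        @ManyToOne private Course course;

        private int grade; // the "additional property" on the join table
    }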
I would use automatic generation for standalone applications or for databases that are accessed via the same ORM layer, and also if the app needs to be ported to different databases. It would save a lot of time by not requiring one to write and maintain DB-vendor-specific DDL scripts.
Like Bozhidar said, don't let Hibernate create & update the database schema.
Let your application create and update the database schema.
For Java, the best tool to do this is Flyway. You need to create one or more SQL files with DDL statements describing your database schema. These SQL files are then executed by Flyway. For more information, look at the Flyway site.
I believe that a lot of what is being discussed or argued here also comes down to whether you are more comfortable with the code-first or the database-first approach.
Personally, I am more inclined to go for the latter and, making a reference to the Single Responsibility Principle (SRP), I prefer having a DB specialist handle the DB and an application specialist handle the application, rather than having the application handle the DB. Additionally, I am of the opinion that taking too many shortcuts will work fine at the beginning but create unmanageable problems as things grow/evolve.
I'm introducing a DAO layer in our application currently working on SQL Server because I need to port it to Oracle.
I'd like to use Hibernate and write a factory (or use dependency injection) to pick the correct DAOs according to the deployment configuration. What are the best practices in this case? Should I have two packages with different hibernate.cfg.xml and *.hbm.xml files and pick them accordingly in my factory? Is there any chance that my DAOs will work correctly with both DBMS without (too much) hassle?
Assuming that the table names and columns are the same between the two, you should be able to use the same hbm.xml files. However, you will certainly need to supply a different Hibernate configuration (hibernate.cfg.xml), as you will need to change Hibernate's dialect from SQL Server to Oracle.
If there are slight name differences between the two, then I would create two sets of mapping files - one per Database server - and package these up into separate JARs (such as yourproject-sqlserver-mappings.jar and yourproject-oracle-mappings.jar), and deploy the application with one JAR or the other depending on the environment.
I did this for a client a while back -- at deployment, depending on a property set in a production.properties file, I swapped out the hibernate.dialect in the cfg file using Ant (you can use any XML transformer). However, this only works if the Hibernate code is seamless between both DBs, i.e. no DB-specific function calls etc. HQL/JPAQL has standard function calls that help in this regard, like UPPER(s) and LENGTH(s).
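For instance, a small sketch of keeping a query portable by sticking to those standard functions (the User entity and its property are made up):

    import java.util.List;
    import org.hibernate.Session;

    public class UserQueries {
        // upper() and length() are standard HQL/JPAQL functions that each dialect
        // translates, so the same query runs unchanged on SQL Server and Oracle.
        public static List<?> findLongNames(Session session, String name) {
            return session.createQuery(
                    "from User u where upper(u.userName) = :name and length(u.userName) > 3")
                    .setParameter("name", name.toUpperCase())
                    .list();
        }
    }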
If the DB implementations must necessarily be different, then you'd have to do something like what @matt suggested.
I've worked on an app that supports a lot of databases (Oracle, Informix, SQL Server, MySQL). We have one configuration file and one set of mappings. We use jndi for the database connection so we don't have to deal with different connection URLs in the app. When we initialize the SessionFactory we have a method that deduces the type of database from the underlying connection. For example, manually get a connection via JNDI and then use connection.getMetaData().getDatabaseProductName() to find out what the database is. You could also use a container environment variable to explicitly set it. Then set the dialect using configuration.setProperty(Environment.DIALECT, deducedDialect) and initialize the SessionFactory as normal.
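A hedged sketch of that deduction step (the JNDI name and the two dialects are illustrative; real code would cover more products and handle errors properly):

    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;
    import org.hibernate.cfg.Environment;

    public class DialectAwareSessionFactory {
        public static SessionFactory build() throws Exception {
            Configuration cfg = new Configuration().configure();
            DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/appDS");
            Connection conn = ds.getConnection();
            try {
                String product = conn.getMetaData().getDatabaseProductName().toLowerCase();
                String dialect = product.contains("oracle")
                        ? "org.hibernate.dialect.Oracle10gDialect"
                        : "org.hibernate.dialect.SQLServerDialect";
                cfg.setProperty(Environment.DIALECT, dialect);
            } finally {
                conn.close();
            }
            return cfg.buildSessionFactory();
        }
    }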
Some things you have to deal with:
Primary key generation. We use a customized version of the TableGenerator strategy, so we have one key table with columns for table name and next key. This way every database can use the same strategy rather than sequences in Oracle, native generation for SQL Server, etc. (a sketch follows after this list).
Functions specific to databases. We avoid them when possible. Hibernate dialects handle the most common ones. Occasionally we'll have to add our own to our custom dialect classes, e.g. date arithmetic is pretty non-standard, so we'll just make up a function name and map it to each database's way of doing it.
Schema generation - we use the Hibernate schema generation class - it works with the dialects to create the correct DDL for each type of database and forces the database to match the mappings. You have to be aware of the keywords for each database, e.g. don't try to have a USER table in Oracle (USERS will work), or a TRANSLATION table in MySQL.
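As a concrete illustration of the key-generation point above, a portable table-based generator can be declared with standard JPA annotations; a hedged sketch with illustrative table and column names:

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.TableGenerator;

    @Entity
    public class Customer {
        // One shared key table (KEY_TABLE) holds a row per entity with the next key,
        // so the same strategy works on Oracle, SQL Server, MySQL, etc.
        @Id
        @GeneratedValue(strategy = GenerationType.TABLE, generator = "customerKeyGen")
        @TableGenerator(name = "customerKeyGen", table = "KEY_TABLE",
                pkColumnName = "TABLE_NAME", valueColumnName = "NEXT_KEY",
                pkColumnValue = "CUSTOMER", allocationSize = 50)
        private Long id;

        private String name;
    }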
There is a table mapping the differences between Oracle and SQLServer here: http://psoug.org/reference/sqlserver.html
In my opinion the biggest pitfalls are:
1) Dates. The functions and mechanics are completely different. You will have to use different code for each DB.
2) Key generation - Oracle and SQL Server use different mechanics, and if you try to avoid "native" generation altogether by having your own keys table - well, you just completely serialized all your inserts. Not good for performance.
3) Concurrency/locking is a bit different. Parts of the code that is performance sensitive will probably be different for each DB.
4) Oracle is case sensitive, SQLServer is not. You need to be careful with that.
There are lots more :)
Writing SQL code that will run on two DBs is challenging. Making it fast can seem nearly impossible at times.