I am working on a Spring project and we already have data in production. I am using annotation configuration for my entities. I want to add new data types and modify some existing types. How can I do that smoothly, without having to manually export the data and create an import script for the new schema?
You can add tables, add columns to existing tables, and switch the data types of some columns, but you should take care of these points:
Add a new column: if the column is not nullable, you have to either enable the NOT NULL constraint after the data inserts or use a default dummy value. You can also remove the DEFAULT clause once the good values have been inserted.
Switch of type: if your DB provider is capable of doing the cast directly, everything is fine, but if you get an error you should check your DB provider's documentation to see whether you can provide a hint for the cast. For instance, PostgreSQL provides the USING keyword for that.
Last thing: if you try to change the type of a column serving as a FK, you should drop the FK constraint, switch the type of both columns, and recreate the FK.
Finally, I would advise you to use a database migration tool, such as Flyway, to handle these changes; see the sketch below.
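As a rough sketch of how the points above could be expressed as a versioned migration - here using Flyway's Java-based migration API, though plain SQL migration files work just as well - with made-up table, column and constraint names:

import java.sql.Statement;

import org.flywaydb.core.api.migration.BaseJavaMigration;
import org.flywaydb.core.api.migration.Context;

// Hypothetical Flyway migration illustrating the three points above
// (table, column and constraint names are invented for the example).
public class V2__Alter_person_schema extends BaseJavaMigration {

    @Override
    public void migrate(Context context) throws Exception {
        try (Statement stmt = context.getConnection().createStatement()) {
            // 1. New NOT NULL column: add it with a dummy default first,
            //    then drop the default once real values are in place.
            stmt.execute("ALTER TABLE person ADD COLUMN status VARCHAR(20) NOT NULL DEFAULT 'UNKNOWN'");
            stmt.execute("ALTER TABLE person ALTER COLUMN status DROP DEFAULT");

            // 2. Type switch with a casting hint (PostgreSQL USING clause).
            stmt.execute("ALTER TABLE person ALTER COLUMN zip_code TYPE integer USING zip_code::integer");

            // 3. Changing the type of a column used as a FK:
            //    drop the constraint, change both sides, recreate it.
            stmt.execute("ALTER TABLE orders DROP CONSTRAINT fk_orders_person");
            stmt.execute("ALTER TABLE orders ALTER COLUMN person_id TYPE bigint");
            stmt.execute("ALTER TABLE person ALTER COLUMN id TYPE bigint");
            stmt.execute("ALTER TABLE orders ADD CONSTRAINT fk_orders_person FOREIGN KEY (person_id) REFERENCES person (id)");
        }
    }
}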
I am using a Cassandra database integrated into a Spring Boot application.
My question is about schema actions. If I need to make structural changes to the DB, say add a column to a table, the database needs to be recreated; however, this means all the existing data gets deleted:
schema-action: CREATE_IF_NOT_EXISTS
The only way I have managed to solve this is by using the RECREATE schema action, but as mentioned earlier, this results in data loss.
What would be the best approach to handle this, i.e. to make structural changes such as adding a column without having to recreate the database and lose all existing data?
Thanks
Cassandra does allow you to modify the schema of an existing table without recreating it from scratch, using the ALTER TABLE statement via cqlsh. However, as explained in that link, there are some important limitations on the kinds of changes you can make: you cannot modify the primary key of the table at all, you can add or drop regular columns, and you cannot change the type of a column to an incompatible one.
The reason for most of these limitations is how Cassandra needs to deal with the old data that already exists in the table. For example, it doesn't make sense to say that a column A that until now contained strings will now contain integers: how are we supposed to handle all the old values in column A which weren't integers?
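If you do want to issue one of the allowed changes from Java code rather than from cqlsh, it is just an ordinary CQL statement. A minimal sketch with the DataStax Java driver follows; the keyspace, table and column names are placeholders, and the builder assumes default connection settings for a locally reachable cluster:

import com.datastax.oss.driver.api.core.CqlSession;

public class AddColumnExample {
    public static void main(String[] args) {
        // Assumes a reachable cluster with default driver configuration.
        try (CqlSession session = CqlSession.builder().build()) {
            // Adding a regular column is allowed; changing the primary key is not.
            session.execute("ALTER TABLE my_keyspace.equipment ADD serial_number text");
        }
    }
}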
As Aaron rightly said in a comment, it is unlikely you'll want to do these schema changes as part of your application. These are usually rare operations which are done manually, or via some management tool - not your regular application code.
Currently there are schema actions that let you recreate tables on each startup, but dropping them obviously means you lose all rows of that table.
In CQL you can run a statement like
CREATE TABLE IF NOT EXISTS keyspace.tablename(....)
But I can't find any way of achieving a similar result with spring-data-cassandra, one that would let me start my app for the first time, and on subsequent starts, without changing anything.
Is there any way to create a table defined in a POJO with @Table ONLY if said table does not already exist?
See DATACASS-219.
I just recently added support for CREATE TABLE IF NOT EXISTS keyspace.tablename (..);. This will be available in SD Cassandra 1.5 M1 (Ingalls). I'll consider back-porting this to 1.4 for the 1.4.2.RELEASE.
The only other way to accomplish this for the time being (if you are not using the 1.5.0.BUILD-SNAPSHOT containing the DATACASS-219 fix) is to set your SchemaAction to NONE and provide your own raw CQL initialization scripts to the CassandraSessionFactoryBean using the setStartupScripts(:List) method.
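A rough sketch of that workaround is below. Exact package names and available setters vary between Spring Data Cassandra versions, and the keyspace name, table definition and the surrounding Cluster/CassandraConverter beans are assumptions - adjust them to your project:

import java.util.Arrays;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.cassandra.config.CassandraSessionFactoryBean;
import org.springframework.data.cassandra.config.SchemaAction;
import org.springframework.data.cassandra.convert.CassandraConverter;

import com.datastax.driver.core.Cluster;

@Configuration
public class CassandraSessionConfig {

    // Cluster and CassandraConverter beans are assumed to be defined elsewhere.
    @Bean
    public CassandraSessionFactoryBean session(Cluster cluster, CassandraConverter converter) {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster);
        session.setConverter(converter);
        session.setKeyspaceName("my_keyspace");
        // Stop Spring Data from generating the schema itself...
        session.setSchemaAction(SchemaAction.NONE);
        // ...and run idempotent CQL on startup instead.
        session.setStartupScripts(Arrays.asList(
            "CREATE TABLE IF NOT EXISTS my_keyspace.equipment (id uuid PRIMARY KEY, name text)"));
        return session;
    }
}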
Hibernate automatically performs some updates, such as creating tables or columns, but it does not change the types of existing columns. For example, we changed a column type from long to int, and the column type in the database is still bigint (PostgreSQL 9.5). Also, we added a type converter for LocalDateTime fields, and Hibernate creates new fields as timestamp but doesn't change the type of old fields. How can we configure Hibernate to let it automatically manage such things?
While I think this practice is pretty bad and very dangerous, the reality is that you just need the right permissions.
Most SQL databases store the database information in a system schema. The user for your app would have to have permission to use, and possibly CRUD, that schema. Once you have that, it is just a matter of writing the Hibernate classes to manage the tables.
For example, if I wanted to change the schema a particular table belongs to, I could do that by executing this statement in PostgreSQL:
update pg_tables set schemaname = 'newSchema' where tablename = 'xxx';
Allowing your application to do this opens you up to all kinds of pain and suffering, including failures in code that expects a certain data model which has since been dynamically updated; and, if your application is hacked, you could have all your tables dropped.
Is there a way to tell Hibernate to first check if the current primary key generated by a Table Generator is usable or outdated?
I have an application which uses Hibernate to create new entries in several tables in my database, but sometimes the generated values are outdated and already in use. This happens because this database is used by quite a few applications and scripts, and some of these use the "SELECT MAX(ID)+1" key-generation "strategy". It is not really an option to change all the other components to use the table generator (although that would solve the problem), so I have to make sure that the values I get from the table generator are really usable.
Is there any way to tell Hibernate to check the validity of the generated values before it tries to insert a new record into the database (and throws a ConstraintViolationException)?
Or, alternatively, is there a way to manually update the generator tables before hibernate uses them to generate new Ids?
The obvious way would be to run a native query like UPDATE pk_generator SET value=(SELECT MAX(ID)+1 from members) WHERE column='members'
When you save an object with saveOrUpdate(), the object's id field will be updated with the auto-generated id if it was a create operation, so it will never conflict with an id which was already generated and used.
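As a small illustration of that behavior, assuming an open Hibernate Session and a hypothetical Member entity whose id is mapped with the table generator:

// Hypothetical entity whose id is produced by the table generator.
Member member = new Member();
member.setName("new member");

// On a transient instance this performs an insert; Hibernate fetches the
// next value from the generator table and writes it back into the object.
session.saveOrUpdate(member);

Long generatedId = member.getId(); // populated after the save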
I'm currently using ORMLite to work with a SQLite database on Android. As part of this I am downloading a bunch of data from a backend server, and I'd like to have this data added to the SQLite database in exactly the same format it has on the backend server (i.e. the IDs are the same, etc.).
So, my question is: if I populate my database entry object (we'll call it Equipment), including Equipment's generatedId/primary key field via setId(), and I then run DAO.create() with that Equipment entry, will that ID be saved correctly? I tried it this way and it seems that this was not the case. If it should work, I will try again and look for other problems, but in my first few passes over the code I was not able to find one. So essentially, if I call DAO.create() on a database object with an ID set, will that ID be sent to the database, and if not, how can I insert a row with a primary key value already filled out?
Thanks!
@Femi is correct that an object can have either a generated-id or an id, but not both. The issue is not just how ORMLite stores the object; it also has to match the schema that the database was generated with.
ORMLite supports an allowGeneratedIdInsert=true option to the @DatabaseField annotation that allows this behavior. This is not supported by some database types (Derby, for example) but works under Android/SQLite.
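For example, a sketch of the annotation in use (class and field names are just illustrative):

import com.j256.ormlite.field.DatabaseField;
import com.j256.ormlite.table.DatabaseTable;

@DatabaseTable(tableName = "equipment")
public class Equipment {

    // Normally generated by SQLite, but allowGeneratedIdInsert = true also
    // lets Dao.create() store an id that was set explicitly via setId().
    @DatabaseField(generatedId = true, allowGeneratedIdInsert = true)
    private long id;

    @DatabaseField
    private String name;

    Equipment() {
        // ORMLite requires a no-arg constructor
    }

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }
}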
For posterity, you can also create two objects that share the same table -- one with a generated-id and one without. Then you can insert using the generated-id DAO to get that behavior and the other DAO to take the id value set by the caller. Here's another answer talking about that. The issue for you is that this will create a lot of extra DAOs; a sketch follows.
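A rough sketch of that two-classes-one-table approach (again with made-up names):

import com.j256.ormlite.field.DatabaseField;
import com.j256.ormlite.table.DatabaseTable;

// Both classes map to the same "equipment" table.
@DatabaseTable(tableName = "equipment")
class EquipmentAutoId {
    @DatabaseField(generatedId = true)
    long id;                 // id is generated by the database
    @DatabaseField
    String name;
    EquipmentAutoId() { }
}

@DatabaseTable(tableName = "equipment")
class EquipmentFixedId {
    @DatabaseField(id = true)
    long id;                 // id is set by the caller before create()
    @DatabaseField
    String name;
    EquipmentFixedId() { }
}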
The only other solution is to not use the id for your own purposes. Let the database generate the id, and then have an additional field that is set externally for your purposes. Forcing the database id in certain circumstances seems to me to be a bad pattern.
From http://ormlite.com/docs/generated-id:
Boolean whether the field is an auto-generated id field. Default is false. Only one field can have this set in a class. This tells the database to auto-generate a corresponding id for every row inserted. When an object with a generated-id is created using the Dao.create() method, the database will generate an id for the row which will be returned and set in the object by the create method. Some databases require sequences for generated ids in which case the sequence name will be auto-generated. To specify the name of the sequence use generatedIdSequence. Only one of this, id, and generatedIdSequence can be specified.
You must use either generatedId (in which case it appears all ids must be generated) or id (in which case you can set them) but not both.