Android Room: Create table based on external input - java

I am developing a Java-based Android App where I am using Room. The App is connected to a server from which it downloads project-specific configurations. One of these configurations is the setup of a table. I have a table whose number and types of columns differ for each project. I need to have a local copy of this table on the phone to store data in case no internet connection is available. The configuration of the table contains the name of the table and the column composition, like
[{
"name":"column1",
"datatype":"VARCHAR(20)"
},
{
"name":"column2",
"datatype":"INT(5)"
},
{
"name":"column3",
"datatype":"DOUBLE"
}]
How can I generate such a table with Room? Generating the create query is not a problem, but where should I execute it? Additionally, how can I insert, update and query data from the table? Is it possible to generate such SQL queries and execute them? Is there something like a row mapper which can be used to read the queried data from the table?
If this is not possible, any idea how I can solve it otherwise?
Thank you for your support.

You won't be able to do this with Room and use Room's object mapping, because Room builds tables according to the mapping of objects.
That is, a table is defined according to a class annotated with @Entity and registered as an entity of the database.
Room undertakes much at compile time, such as verifying queries and building its component-creation SQL. At run time, as part of opening the database, it checks for the expected components and, if it finds a difference, it will fail/crash.
I did at one time have an ongoing project that built entities/classes based upon a database, which could be copied into a project, but with Room changing and introducing features (e.g. DEFAULT constraints, which were ignored at first but then introduced) it was hard to keep up to date.
You can have components, including tables, that are not controlled/known by Room, which will not violate the run-time schema checking, but then you would have to use a SupportSQLiteDatabase, so you might as well use native SQLite.
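As a rough illustration (not part of the original answer), such an unmanaged table could be driven through the SupportSQLiteDatabase that Room itself has opened; getOpenHelper().getWritableDatabase() exposes it from any RoomDatabase. The table and column names below are placeholders derived from the question's configuration :-
import android.database.Cursor;
import androidx.room.RoomDatabase;
import androidx.sqlite.db.SupportSQLiteDatabase;

public class DynamicTableHelper {
    /* createSql would be generated from the downloaded configuration, e.g.
       "CREATE TABLE IF NOT EXISTS project_table (column1 TEXT, column2 INTEGER, column3 REAL)" */
    public static void createAndDump(RoomDatabase roomDatabase, String createSql, String tableName) {
        SupportSQLiteDatabase db = roomDatabase.getOpenHelper().getWritableDatabase();
        db.execSQL(createSql);                                   /* table unknown to Room, so no schema check */
        Cursor cursor = db.query("SELECT * FROM " + tableName);  /* returns a plain Cursor */
        while (cursor.moveToNext()) {
            String firstColumn = cursor.getString(0);            /* columns are read manually, no row mapper */
        }
        cursor.close();
    }
}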
How can I generate such a table with Room?
In the case of the example, you would need a class annotated with @Entity. However, one rule that Room imposes is that there must be a PRIMARY KEY, so an option could be to introduce one; let's say a column called id.
Another rule that Room enforces is that column types can only be INTEGER, TEXT, REAL or BLOB. However, this is determined from the variable type.
So VARCHAR(20) would be for a String,
INT(5) would likely store a long or an int; long would cover all,
DOUBLE would likely store a DOUBLE (REAL), so double.
So the class(entity) could be :-
@Entity
class TableX {
    @PrimaryKey
    Long id = null;
    String column1;
    long column2;
    double column3;
}
the table name would be TableX
How can I insert, update and query data from the table?
You use an interface or abstract class annotated with @Dao; so for the above you could, for example, have:-
@Dao
abstract class TableXDao {
    @Insert
    abstract long insert(TableX tableX);
    @Insert
    abstract long[] insert(TableX...tableX);
    @Query("INSERT INTO TableX (column1,column2,column3) VALUES(:column1,:column2,:column3)")
    abstract long insert(String column1, long column2, double column3);
    @Update
    abstract int update(TableX tableX);
    @Update
    abstract int update(TableX...tableX);
    @Query("UPDATE tablex SET column1=:newColumn1, column2=:newColumn2, column3=:newColumn3 WHERE id=:id")
    abstract int update(long id, String newColumn1, long newColumn2, double newColumn3);
    @Query("SELECT * FROM tablex")
    abstract List<TableX> getAllTableXRows();
}
Note the three forms of insert/update: the @Insert/@Update forms use the convenience methods (based upon passing the object or objects), while the @Query forms use a more free-format/adaptable approach.
Although not asked for, Room needs to know about the database itself, so another class, annotated with @Database, is required. This annotation defines the entities that form the database and the version number (and other options). The class should extend the RoomDatabase class, and it must either be abstract or implement the abstract method createOpenHelper (typically the former). So :-
@Database(entities = {TableX.class}, version = 1)
abstract class TheDatabase extends RoomDatabase {
    abstract TableXDao getTableXDao();

    /* Often :- */
    private static volatile TheDatabase instance = null;
    public static TheDatabase getInstance(Context context) {
        if (instance == null) {
            instance = Room.databaseBuilder(
                    context, TheDatabase.class, "thedatabase.db"
                )
                .build();
        }
        return instance;
    }
}
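For completeness, a brief usage sketch (hypothetical values; note that Room forbids database access on the main thread unless allowMainThreadQueries() is added to the builder, so run this off the main thread; context is whatever Context is at hand) :-
TableXDao dao = TheDatabase.getInstance(context).getTableXDao();

TableX row = new TableX();
row.column1 = "first row";
row.column2 = 5;
row.column3 = 1.5;
long newId = dao.insert(row);                           /* convenience @Insert, returns the new rowid */

dao.insert("second row", 10, 2.5);                      /* free-format @Query insert */
int updated = dao.update(newId, "changed", 20, 3.5);    /* free-format @Query update */
List<TableX> allRows = dao.getAllTableXRows();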
When the above is compiled, much is undertaken; the build will include a warning:-
E:\AndroidStudioApps\SO70351715JavaSQLite\app\src\main\java\a\a\so70351715javasqlite\TheDatabase.java:10: warning: Schema export directory is not provided to the annotation processor so we cannot export the schema. You can either provide `room.schemaLocation` annotation processor argument OR set exportSchema to false.
abstract class TheDatabase extends RoomDatabase {
^
1 warning
This was purposely allowed to happen to demonstrate the extensive compile-time checking.
Additionally it will generate quite a bit of Java code :-
the TableXDao_Impl class, being the code that is invoked when the Daos are used;
the TheDatabase_Impl class, being the code for accessing the database, including the creation of the tables in the createAllTables method:-
@Override
public void createAllTables(SupportSQLiteDatabase _db) {
    _db.execSQL("CREATE TABLE IF NOT EXISTS `TableX` (`id` INTEGER, `column1` TEXT, `column2` INTEGER NOT NULL, `column3` REAL NOT NULL, PRIMARY KEY(`id`))");
    _db.execSQL("CREATE TABLE IF NOT EXISTS room_master_table (id INTEGER PRIMARY KEY,identity_hash TEXT)");
    _db.execSQL("INSERT OR REPLACE INTO room_master_table (id,identity_hash) VALUES(42, '5f1c580621c8b86aef3b3cccc44d8d76')");
}
As you can see, the room_master_table is created and populated with a row that stores a hash. This is part of the verification: if the hash changes, Room will know that the schema has changed (i.e. the source code has been changed).
Is there something like a row mapper which can be used to read the queried data from the table?
As can be seen, it's all done by the compiled code via the annotations, so there is no row mapper as such; rather, the expectation is that everything is known/defined at compile time.
If this is not possible, any idea how I can solve it otherwise?
Use native SQLite, or manage the server database and the Room database as a whole.
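If the schema really is only known at run time, a plain SQLiteOpenHelper sidesteps Room's compile-time model entirely. A minimal sketch (the database name, table name and column list are placeholders that would come from the downloaded configuration; the Cursor takes the place of a row mapper) :-
import android.content.ContentValues;
import android.content.Context;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class DynamicDbHelper extends SQLiteOpenHelper {
    /* CREATE TABLE statement built from the downloaded configuration */
    private final String createSql;

    public DynamicDbHelper(Context context, String createSql) {
        super(context, "project_local.db", null, 1);
        this.createSql = createSql;
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        /* e.g. CREATE TABLE project_table (column1 TEXT, column2 INTEGER, column3 REAL) */
        db.execSQL(createSql);
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        /* handle a changed configuration here, e.g. drop and re-create the table */
    }

    /* insert: the column/value pairs come straight from the configuration-driven data */
    public long insertRow(String table, ContentValues values) {
        return getWritableDatabase().insert(table, null, values);
    }

    /* the closest thing to a "row mapper": read each row from the Cursor by column name */
    public Cursor queryAll(String table) {
        return getReadableDatabase().rawQuery("SELECT * FROM " + table, null);
    }
}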

Related

JPA Query with several different @Id columns

Problem
To make my code cleaner I want to introduce a generic repository that each repository could extend, and therefore reduce the code I have to have in each of them. The problem is that the ids differ from class to class. On one (see example below) it would be id, on another randomNumber, and on another it may even be an @EmbeddedId. I want to have a derived (or non-derived) query in the repository that gets one by id.
Preferred solution
I imagine having something like:
public interface IUniversalRepository<T, K> {
    @Query("select t from #{#entityName} t where #id = ?1")
    public T findById(K id);
}
Example Code
(this does not work because attribute id cannot be found on Settings)
public interface IUniversalRepository<T, K> {
    // should return the object with the id, regardless of the column name
    public T findById(K id);
}

// two example classes with different @Id fields
public class TaxRate {
    @Id
    @Column()
    private Integer id;
    ...
}

public class Settings {
    @Id
    @Column() // cannot rename this column because it has to be named exactly as it is for backup reasons
    private String randomNumber;
    ...
}

// the repository would be used like this
public interface TaxRateRepository extends IUniversalRepository<TaxRate, Integer> {
}

public interface SettingsRepository extends IUniversalRepository<Settings, String> {
}
Happy for suggestions.
The idea of retrieving JPA entities via an "id query" is not as good as you might think; the main problem is that it is much slower, especially when you are hitting the same entity multiple times within a transaction. If the flush mode is set to AUTO (which is actually the reasonable default), Hibernate needs to perform dirty checking and flush changes into the database before executing a JPQL query. Moreover, Hibernate doesn't guarantee that entities retrieved via an "id query" are not stale: if the entity was already present in the persistence context, Hibernate basically ignores the DB data.
The best way to retrieve entities by id is to call the EntityManager#find(java.lang.Class<T>, java.lang.Object) method, which in turn backs the CrudRepository#findById method; so a findByIdAndType(K id, String type) method should actually look like:
default Optional<T> findByIdAndType(K id, String type) {
return findById(id)
.filter(e -> Objects.equals(e.getType(), type));
}
However, the desire to place some kind of id placeholder in a JPQL query is not so bad - one of its applications could be preserving order stability in queries with pagination. I would suggest you file a corresponding CR with the spring-data project.
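For the simpler goal stated in the question (look an entity up by its id regardless of which property carries @Id), a plain generic base interface is usually enough, since the inherited findById goes through EntityManager#find and never needs the column name. A sketch (each interface in its own file):
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.repository.NoRepositoryBean;

@NoRepositoryBean // base interface only, Spring Data must not instantiate it
public interface IUniversalRepository<T, K> extends JpaRepository<T, K> {
    // findById(K id) is inherited and works for id, randomNumber or an @EmbeddedId alike
}

public interface TaxRateRepository extends IUniversalRepository<TaxRate, Integer> {
}

public interface SettingsRepository extends IUniversalRepository<Settings, String> {
}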

How to prepopulate database using Room? I don't see tutorials that explain it in detail using Java

I checked this link: https://developer.android.com/training/data-storage/room/prepopulate
and I don't know the step-by-step process of doing it.
I don't know what I will write in the db file; I still need some examples.
I don't know how to access the prepopulated db, like what functions/methods I will use. I really have no idea. Please help.
Most tutorials are done in Kotlin.
You can create a database using other tools like "DB Browser for SQLite"; there you will be able to create tables, fill them with data, and save the database as a .db file, which you can then move into your Android project.
You may wish to consider the following, even though it may appear to be in the wrong order (it may overcome the perplexing issues with the dreaded Expected/Found mismatch that is often associated with trying to adapt Room's entities to cater for an existing database).
If you already have a populated pre-packaged database, you may find it easier to add steps to create the Room-expected tables, copy the data from the existing tables, and drop them afterwards (still probably less perplexing than trying to ascertain the nuances of Room).
Design and create your schema for the database.
Using the schema, create the entities for Room, that is a class annotated with @Entity per table.
Create an appropriate @Database annotated abstract class that includes ALL of the @Entity annotated classes in the entities = parameter of the @Database annotation.
Compile (successfully) the project.
In the Android View of Android Studio, look at the generated java; there will be a class that has the same name as the @Database annotated class but suffixed with _Impl. In this class there will be a method called createAllTables, and you will see an execSQL statement for each table and any supplementary indexes.
Note: if you use views, then you need to create the @DatabaseView annotated classes and follow a similar procedure. However, views are not common and hence they have not been included here.
Use the SQL from the execSQL statements to create your tables in your favourite SQL tool (e.g. DB Browser for SQLite, Navicat, DBeaver).
Populate the database using your SQLite tool.
Close and save the database (it is suggested that you then open the database and check that it is as expected, and then close and save the database again, as some tools sometimes don't close the database).
Create the assets folder in your project.
Copy the saved database file into the assets folder.
Add/amend where you invoke Room's databaseBuilder so that .createFromAsset("the_database_file_name_including_extension_if_one") precedes the build() call.
This tells Room, when building the database, to copy the database from the assets folder if the database does not exist.
Save, compile and then run the App.
Following the above order, especially using Room to generate the SQL, will circumvent having to understand the nuances of Room in regards to column type affinities and column constraints.
For example, you CANNOT have a column type that is not INTEGER, TEXT, REAL or BLOB.
Here's an example with a single table named Table1 that has 6 columns
The Table1 @Entity annotated class :-
@Entity /*(
    primaryKeys = {"column_name","more_columns_for_a_composite_primary"},
    tableName = "alternative_table_name"
)*/
class Table1 {
    /* in ROOM every table MUST have a column either annotated with @PrimaryKey
       or have a primary key specified in the primaryKeys parameter of the @Entity annotation
    */
    @PrimaryKey
    Long id = null; /* this will be a typical id column that will, if no value is specified, be generated (must be Long not long) */
    String text_column;
    @NonNull
    String notnull_text_column;
    double real_column;
    byte[] blob_column;
    @NonNull
    @ColumnInfo(index = true)
    double notnull_real_column_also_indexed;
}
The @Database annotated class, TheDatabase, including a singleton for getting an instance.
@Database(entities = {Table1.class}, version = 1, exportSchema = false)
abstract class TheDatabase extends RoomDatabase {
    private volatile static TheDatabase instance = null;
    public static TheDatabase getInstance(Context context) {
        if (instance == null) {
            instance = Room.databaseBuilder(context, TheDatabase.class, "the_database.db")
                .allowMainThreadQueries() /* for convenience and brevity */
                .createFromAsset("the_database.db")
                .build();
        }
        return instance;
    }
}
With the above, the project can be compiled (Ctrl+F9), and then the generated Java associated with the @Database annotated class can be located (in the Android View of Android Studio, under the generated java).
Within the class there will be a method named createAllTables :-
@SuppressWarnings({"unchecked", "deprecation"})
public final class TheDatabase_Impl extends TheDatabase {
  @Override
  protected SupportSQLiteOpenHelper createOpenHelper(DatabaseConfiguration configuration) {
    final SupportSQLiteOpenHelper.Callback _openCallback = new RoomOpenHelper(configuration, new RoomOpenHelper.Delegate(1) {
      @Override
      public void createAllTables(SupportSQLiteDatabase _db) {
        _db.execSQL("CREATE TABLE IF NOT EXISTS `Table1` (`id` INTEGER, `text_column` TEXT, `notnull_text_column` TEXT NOT NULL, `real_column` REAL NOT NULL, `blob_column` BLOB, `notnull_real_column_also_indexed` REAL NOT NULL, PRIMARY KEY(`id`))");
        _db.execSQL("CREATE INDEX IF NOT EXISTS `index_Table1_notnull_real_column_also_indexed` ON `Table1` (`notnull_real_column_also_indexed`)");
        _db.execSQL("CREATE TABLE IF NOT EXISTS room_master_table (id INTEGER PRIMARY KEY,identity_hash TEXT)");
        _db.execSQL("INSERT OR REPLACE INTO room_master_table (id,identity_hash) VALUES(42, 'a6ca75e8ee6037ad13c258cdc0405ef1')");
      }

      @Override
      public void dropAllTables(SupportSQLiteDatabase _db) {
        _db.execSQL("DROP TABLE IF EXISTS `Table1`");
        if (mCallbacks != null) {
          for (int _i = 0, _size = mCallbacks.size(); _i < _size; _i++) {
            mCallbacks.get(_i).onDestructiveMigration(_db);
          }
        }
      }
      ....
As can be seen, there is an execSQL for the Table1 table and another for the index (as index = true was specified in the @ColumnInfo annotation).
The room_master_table is used by Room to store a hash of the schema; it is not needed here and should NOT be created in the pre-packaged database that will be copied into the assets folder.
The hash will change if the schema (the @Entity annotated classes) changes.
Nuances
If you look closely, you will see that both the real_column and the notnull_real_column_also_indexed columns have the NOT NULL constraint, but only the latter has the @NonNull annotation. This is because double is a primitive and ALWAYS has a value, so Room implicitly applies the NOT NULL constraint. If the NOT NULL constraint is not coded when creating the pre-packaged database, then after the asset has been copied an exception will occur when running the App, because the database that was found (the one copied from the asset) will differ (in Room's view) from what Room expected (the schema according to the @Entity annotated classes defined in the list of entities in the @Database annotation). Hence the suggestion to create the schema via Room, extract the generated SQL and use it to create the pre-packaged database; this ensures that the database schema is as expected.
Note that this is just one example of the nuances.
Continuing with a working example
One thing that often trips up new users of SQLite, and also of Room, is that instantiating the database class does not then create or open the database. It is not until an attempt is made to access the database (changing or extracting data) that the database is opened and, if necessary, created and, in the case of a pre-populated database, copied from the asset (or a file, normally the former).
As such, in preparation for this, an interface or abstract class (there can be one or more) annotated with @Dao is created; in this case AllDAO, as per:-
@Dao
abstract class AllDAO {
    @Insert
    abstract long insert(Table1 table1);
    @Query("SELECT * FROM table1")
    abstract List<Table1> getAllTable1s();
}
Using either insert or getAllTable1s will access the database.
The @Dao annotated class(es) have to be known/defined to Room; typically the @Database annotated class includes this, so the TheDatabase class could then be:-
@Database(entities = {Table1.class}, version = 1, exportSchema = false)
abstract class TheDatabase extends RoomDatabase {
    abstract AllDAO getAllDAO(); //<<<<< ADDED to allow use of the database
    private volatile static TheDatabase instance = null;
    public static TheDatabase getInstance(Context context) {
        if (instance == null) {
            instance = Room.databaseBuilder(context, TheDatabase.class, "the_database.db")
                .allowMainThreadQueries() /* for convenience and brevity */
                .createFromAsset("the_database.db")
                .build();
        }
        return instance;
    }
}
So the App is virtually ready (using it in an activity will be dealt with later).
Now the pre-packaged database can be prepared/built using an SQLite tool (Navicat for SQLite has been used in this case; it shouldn't matter which tool).
A connection is made and opened, specifying where the database file will be stored (see the tool's help if needed). In this case the database is named SOQuestions (it already exists).
New Query is clicked, and the SQL for the user-defined tables, as well as the indexes, is pasted in (i.e. the execSQL statements from createAllTables, excluding the room_master_table ones).
So the table(s) and indexes now exist but are unpopulated. So now to populate the database by inserting some records. A query will be used (as this is only an example, the queries won't be saved; they could be if desired).
So the existing SQL is deleted and replaced with the populate SQL (note that it deletes all rows first, so it is rerunnable) and then run.
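The populate SQL was shown as a screenshot in the original; a sketch of what it might look like for the Table1 schema above (the values are arbitrary, chosen to roughly match the logged output later on) :-
DELETE FROM Table1;
INSERT INTO Table1 (text_column, notnull_text_column, real_column, blob_column, notnull_real_column_also_indexed)
VALUES
    ('some text', 'some notnull text', 10.3333, x'00', 3.1),
    (NULL, 'more not null text', 11.3333, NULL, 4.1);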
The resultant data can then be viewed in the tool.
The database should be saved. It is suggested that the database/connection is closed and then reopened to check that the data has been saved, and then finally closed again (some tools sometimes don't close the database); this was done.
The database is now ready to be copied into the assets folder (which doesn't yet exist). So, back in Android Studio, create the assets folder in the project; File/Directory was used to select src\main\assets.
The file is copied from Windows Explorer (right click on the file and copy) and pasted (right click on the assets folder in Android Studio and Paste), renaming it to the_database.db (the database name, as per the createFromAsset; soquestions.db could also have been used as the asset file name).
Now to run the App, using the database in an activity (note that for brevity and convenience this is run on the main thread).
The activity code :-
public class MainActivity extends AppCompatActivity {
    TheDatabase dbInstance;
    AllDAO dao;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        dbInstance = TheDatabase.getInstance(this); /* get DB Instance */
        dao = dbInstance.getAllDAO(); /* get the appropriate Dao */
        logAllRowsFromTable1("PRENEWDATA"); /* WILL INITIATE THE ASSET COPY (if DB does not exist) */
        Table1 newTable1Row = new Table1();
        newTable1Row.text_column = "a new row";
        newTable1Row.blob_column = new byte[30];
        newTable1Row.notnull_text_column = " the new nottnull_text_column";
        newTable1Row.real_column = 4444.55555;
        newTable1Row.notnull_real_column_also_indexed = 7777.8888;
        dao.insert(newTable1Row); /* IF NOT INITIATED ABOVE WILL INITIATE THE ASSET COPY (if DB does not exist) */
        logAllRowsFromTable1("POSTNEWDATA");
    }

    void logAllRowsFromTable1(String suffix) {
        for (Table1 t : dao.getAllTable1s()) {
            Log.d("DB-" + suffix,
                    "ID is " + t.real_column
                            + "\n\tTEXT_COLUMN is " + t.text_column
                            + "\n\t NOTNULL_TEXT_COLUMN is " + t.notnull_text_column
                            + "\n\t REAL_COLUMN is " + t.real_column
                            + "\n\t NOTNULL_REAL_COLUMN... is " + t.notnull_real_column_also_indexed
                    /* not doing the blob so as not to complicate matters */
            );
        }
    }
}
This will first output some of the data, for all rows, from the pre-packaged database to the log.
It will then add a new row (on every run; it is only a demo/example) and then output some of the data, for all rows, from the updated (new row) database.
e.g. :-
2022-04-22 11:00:43.689 D/DB-PRENEWDATA: ID is 10.3333
TEXT_COLUMN is some text
NOTNULL_TEXT_COLUMN is some notnull text
REAL_COLUMN is 10.3333
NOTNULL_REAL_COLUMN... is 3.1
2022-04-22 11:00:43.689 D/DB-PRENEWDATA: ID is 11.3333
TEXT_COLUMN is null
NOTNULL_TEXT_COLUMN is more not null text
REAL_COLUMN is 11.3333
NOTNULL_REAL_COLUMN... is 4.1

2022-04-22 11:00:43.692 D/DB-POSTNEWDATA: ID is 10.3333
TEXT_COLUMN is some text
NOTNULL_TEXT_COLUMN is some notnull text
REAL_COLUMN is 10.3333
NOTNULL_REAL_COLUMN... is 3.1
2022-04-22 11:00:43.692 D/DB-POSTNEWDATA: ID is 11.3333
TEXT_COLUMN is null
NOTNULL_TEXT_COLUMN is more not null text
REAL_COLUMN is 11.3333
NOTNULL_REAL_COLUMN... is 4.1
2022-04-22 11:00:43.692 D/DB-POSTNEWDATA: ID is 4444.55555
TEXT_COLUMN is a new row
NOTNULL_TEXT_COLUMN is the new nottnull_text_column
REAL_COLUMN is 4444.55555
NOTNULL_REAL_COLUMN... is 7777.8888
blank lines added to distinguish between the two sets of output
Android Studio's App Inspection can also be used to see the actual data.

How to save entities with manually assigned identifiers using Spring Data JPA?

I'm updating existing code that handles the copy of raw data from one table into multiple objects within the same database.
Previously, every kind of object had a generated PK using a sequence for each table.
Something like this:
@Id
@Column(name = "id")
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Integer id;
In order to reuse existing IDs from the import table, we removed @GeneratedValue for some entities, like this:
@Id
@Column(name = "id")
private Integer id;
For this entity, I did not change my JpaRepository, looking like this :
public interface EntityRepository extends JpaRepository<Entity, Integer> {
<S extends Entity> S save(S entity);
}
Now I'm struggling to understand the following behaviour, within a Spring transaction (@Transactional) with the default propagation and isolation level:
With @GeneratedValue on the entity, when I call entityRepository.save(entity) I can see, with Hibernate show_sql activated, that an insert request is fired (however it seems to be only in the cache, since the database does not change).
Without @GeneratedValue on the entity, only a select request is fired (no insert attempt).
This is a big issue when my Entity (without a generated value) is mapped to MyOtherEntity (with a generated value) in a one-to-many relationship.
I thus have the following error :
ERROR: insert or update on table "t_other_entity" violates foreign key constraint "other_entity_entity"
Détail : Key (entity_id)=(110) is not present in table "t_entity"
Seems legit, since the insert has not been sent for Entity, but why? Again, if I change the ID of the Entity and use @GeneratedValue, I don't get any error.
I'm using Spring Boot 1.5.12, Java 8 and PostgreSQL 9
You're basically switching from automatically assigned identifiers to manually defined ones which has a couple of consequences both on the JPA and Spring Data level.
Database operation timing
On the plain JPA level, the persistence provider doesn't necessarily need to execute a single insert immediately, as it doesn't have to obtain an identifier value. That's why it usually delays the execution of the statement until it needs to flush, which is on an explicit call to EntityManager.flush(), on a query execution (as that requires the data in the database to be up to date to deliver correct results), or on transaction commit.
Spring Data JPA repositories automatically use default transactions on the call to save(…). However, if you're calling repositories within a method annotated with @Transactional in turn, the database interaction might not occur until that method is left.
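If the point is simply to see the statement go out before the outer transaction ends, flushing explicitly is one option; a minimal sketch using JpaRepository's saveAndFlush (repository and entity names taken from the question, the service class is hypothetical). Note that this only changes the timing and does not address how Spring Data decides between persist and merge, which is covered next:
@Service
public class ImportService {
    private final EntityRepository entityRepository; // repository from the question

    public ImportService(EntityRepository entityRepository) {
        this.entityRepository = entityRepository;
    }

    @Transactional
    public void importRow(Entity entity) {
        // saveAndFlush (a JpaRepository method) forces the statement out immediately,
        // rather than leaving it queued until the surrounding transaction commits
        entityRepository.saveAndFlush(entity);
    }
}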
EntityManager.persist(…) VS. ….merge(…)
JPA requires the EntityManager client code to differentiate between persisting a completely new entity and applying changes to an existing one. Spring Data repositories want to free the client code from having to deal with this distinction, as business code shouldn't be overloaded with that implementation detail. That means Spring Data will somehow have to differentiate new entities from existing ones itself. The various strategies are described in the reference documentation.
In the case of manually assigned identifiers, the default of inspecting the identifier property for null values will not work, as the property will never be null by definition. A standard pattern is to tweak the entities to implement Persistable, keep a transient is-new flag around, and use entity callback annotations to flip the flag.
@MappedSuperclass
public abstract class AbstractEntity<ID extends SalespointIdentifier> implements Persistable<ID> {

    private @Transient boolean isNew = true;

    @Override
    public boolean isNew() {
        return isNew;
    }

    @PrePersist
    @PostLoad
    void markNotNew() {
        this.isNew = false;
    }

    // More code…
}
isNew is declared transient so that it doesn't get persisted. The type implements Persistable so that the Spring Data JPA implementation of the repository's save(…) method will use that. The code above results in entities created from user code via new having the flag set to true, while any kind of database interaction (saving or loading) turns the entity into an existing one, so that save(…) will trigger EntityManager.persist(…) initially but ….merge(…) for all subsequent operations.
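A sketch of how an entity with a manually assigned id, as in the question, might apply that pattern directly (the class name ImportedEntity is hypothetical; Persistable is implemented on the entity itself rather than via the generic superclass above, whose ID type is bound to SalespointIdentifier):
@Entity
public class ImportedEntity implements Persistable<Integer> {

    @Id
    @Column(name = "id")
    private Integer id; // assigned from the import table, never generated

    @Transient
    private boolean isNew = true;

    @Override
    public Integer getId() {
        return id;
    }

    @Override
    public boolean isNew() {
        return isNew;
    }

    @PrePersist
    @PostLoad
    void markNotNew() {
        this.isNew = false;
    }
}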
I took the chance to create DATAJPA-1600 and added a summary of this description to the reference docs.

@GeneratedValue polymorphic abstract superclass over MySQL

In a Spring MVC application using Hibernate and MySQL, I have an abstract superclass BaseEntity that manages the values of the IDs for all the other entities in the model. The id field uses @GeneratedValue. I am encountering a problem whenever my code tries to save any of the subclasses that extend BaseEntity. The problem comes with the choice of GenerationType for the @GeneratedValue.
At every place in my code where a subclass of BaseEntity tries to save to the underlying MySQL database, I get the following error:
ERROR SqlExceptionHelper - Table 'docbd.hibernate_sequences' doesn't exist
I have read many postings about this on SO and on google, but they either deal with other databases (not MySQL) or they do not deal with abstract superclasses. I cannot solve the problem by using GenerationType.IDENTITY because I am using an abstract superclass to manage id fields for all entities in the model. Similarly, I cannot use GenerationType.SEQUENCE because MySQL does not support sequences.
So how do I solve this problem?
Here is the code for BaseEntity.java:
@Entity
@Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
public abstract class BaseEntity {
    @Id
    @GeneratedValue(strategy = GenerationType.TABLE)
    protected Integer id;

    public void setId(Integer id) {this.id = id;}
    public Integer getId() {return id;}
    public boolean isNew() {return (this.id == null);}
}
Here is an example of the code for one of the entities that extends BaseEntity:
@Entity
@Table(name = "ccd")
public class CCD extends BaseEntity{
    //other stuff
}
Here is the DDL:
CREATE TABLE IF NOT EXISTS ccd(
id int(11) UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
#other stuff
)engine=InnoDB;SHOW WARNINGS;
Here is the JPQL code in the DAO:
@Override
@Transactional
public void saveCCD(CCD ccd) {
    if (ccd.getId() == null) {
        System.out.println("[[[[[[[[[[[[ about to persist CCD ]]]]]]]]]]]]]]]]]]]]");
        this.em.persist(ccd);
        this.em.flush();
    }
    else {
        System.out.println("]]]]]]]]]]]]]]]]]] about to merge CCD [[[[[[[[[[[[[[[[[[[[[");
        this.em.merge(ccd);
        this.em.flush();
    }
}
EDIT:
The reason I cannot use @MappedSuperclass in this situation is that I need to have ManyToOne relationships that allow multiple subtypes to be used interchangeably. Look at the AccessLog class below as an example. It has an actor_entity and a target_entity. There can be many types of actor entities and many types of target entities, but they all inherit from BaseEntity. This inheritance enables the underlying accesslogs data table in MySQL to have just one actor_entity_id field and just one target_entity_id field instead of having to have several fields for each. When I change @Entity above BaseEntity to @MappedSuperclass, a different error gets thrown indicating that AccessLog cannot find BaseEntity. BaseEntity needs the @Entity annotation in order for AccessLog to have polymorphic properties.
@Entity
@Table(name = "accesslogs")
public class AccessLog extends BaseEntity{
    @ManyToOne
    @JoinColumn(name = "actorentity_id")
    private BaseEntity actor_entity;

    @ManyToOne
    @JoinColumn(name = "targetentity_id")
    private BaseEntity target_entity;

    @Column(name="action_code")
    private String action;

    //getters, setters, & other stuff
}
SECOND EDIT:
As per JBNizet's suggestion, I created a hibernate_sequences table as follows:
CREATE TABLE IF NOT EXISTS hibernate_sequences(
sequence_next_hi_value int(11) UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
)engine=InnoDB;SHOW WARNINGS;
But now I am getting the following error:
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 'sequence_name' in 'where clause'
Here is the hibernate sql causing the error, followed by the next 2 lines of the stack trace:
Hibernate: select sequence_next_hi_value from hibernate_sequences where sequence_name = 'BaseEntity' for update
ERROR MultipleHiLoPerTableGenerator - HHH000351: Could not read or init a hi value
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 'sequence_name' in 'where clause'
How do I resolve this?
What a mess... AUTO_INCREMENT is MySQL's hidden sequence. The fundamental problem is that MySQL cannot insert and return the PK at the same time, but Hibernate needs this while INSERTing a new entity.
The problems you run into:
If Hibernate saves a new entity, it tries to immediately set the id on the new entity bean. Therefore Hibernate must know which ID the database will use before it saves the new tuple to the table.
If you have multiple servers accessing the database, you have to decide whether to let Hibernate's session factory use the built-in sequence (AUTO_INCREMENT), or let Hibernate decide (GenerationType.AUTO/GenerationType.IDENTITY) how large the open range of reserved PKs is (a job for a DB architect). (We have about 20 servers against one database, so on a heavily used table we use a PK distance of +100.) If only one server has access to the database, GenerationType.TABLE should be correct.
Otherwise, Hibernate must calculate the next id itself using max(id)+1, but:
what if two requests ask for max(id)+1 at the same time and get the same result? Right: the later insert will fail.
So you need a table LAST_IDS in the database that stores the last PK for each table. If you want to allocate a new id, you must follow these steps:
Start a read-optimistic transaction.
SELECT MAX(address_id) FROM LAST_IDS
Store the maximum in a Java variable, e.g. $oldID.
$newID = $oldID + 1 (+100 with a pessimistic lock).
UPDATE LAST_IDS SET address_id = $newID WHERE address_id = $oldID
Commit the read-optimistic transaction.
If the commit was successful, store $newID via setId() in the Hibernate bean you want to save.
Finally, let Hibernate call the insert.
This is the only way I know.
BTW: Hibernate entities should only use inheritance if the database supports inheritance between tables, like PostgreSQL or Oracle.
Because you use the TABLE identifier generator, you need to have that table created. If you are not using the enhanced identifier generators, chances are you are going to use the MultipleHiLoPerTableGenerator.
The MultipleHiLoPerTableGenerator can use one table for all table identifier generators.
My suggestion is to grab the table DDL from your integration tests, in case you use hbm2ddl to build the test schema. If you use Flyway or Liquibase for testing, you can add a Maven plugin to generate the DDL schema.
Once you have the schema, you need to take the exact create table command and add it to your MySQL database.
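As a sketch (not from the original answer), a hibernate_sequences definition matching the query shown in the error above would need both a sequence_name and a sequence_next_hi_value column, and no AUTO_INCREMENT:
CREATE TABLE IF NOT EXISTS hibernate_sequences(
    sequence_name VARCHAR(255) NOT NULL PRIMARY KEY,
    sequence_next_hi_value INT(11) UNSIGNED NOT NULL
)engine=InnoDB;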

Can I remove the discriminator column in a Hibernate single table inheritance?

We use single table inheritance for every table in our application. This allows different instances of the same application stack to work with the same DAOs while their entities might differ slightly potentially containing information unique to that instance. An abstract class defines the basic table structure and an extension defines additional columns, if needed by that instance:
@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@Table(name = "client")
public abstract class Client extends AbstractPersistable<Long> {
    // ...
}
application A:
@Entity
public class ClientSimple extends Client {
    private String name;
    // getter, setter
}
application B:
@Entity
public class ClientAdvanced extends Client {
    private String description;
    // getter, setter
}
Now a DAO can work with Client objects for application A and B but application B can define additional information for its client object that may be read by a manager method unique to application B:
application A:
Client client = new ClientSimple();
clientDao.save(client);
application B:
Client client = new ClientAdvanced();
clientDao.save(client);
Unfortunately this means there is a DTYPE column in every table (or any other name that I might choose). Is there any way to get rid of this? We don't need it and it's using up DB space...
Thanks!
EDIT
Important to note: @MappedSuperclass won't work. We're using QueryDSL as our HQL abstraction layer. This requires automatically generated query type classes for type-safe querying. These, however, will only be generated correctly if the abstract class is annotated with @Entity.
This is necessary because we want to query against the abstract class Client while in truth querying ClientSimple in application A and ClientAdvanced in application B:
So in any application this will work:
query.where(QClient.client.name.equals("something");
and in application B this will work:
query.where(QClientSimple.client.description.equals("something else");
EDIT2 - boil down
It seems to boil down to this: can I configure Hibernate at deploy time to set the discriminator type for an inherited entity to a fixed value? Going with my example, a Client will always be a ClientSimple in one application and a ClientAdvanced in the other, so that I don't have to store that information in the database.
Like I said: each application will be an instance of the base application stack. Each application might define additional columns for its local database, but ALL objects will be of the same type for that instance, so we guarantee that the discriminator is always the same, making it redundant in the database and a use case for Hibernate configuration.
I know, this is a very old question, but I encountered this problem recently and this might prove useful to someone.
This can be done using Hibernate's @DiscriminatorFormula annotation. The following description is based on the book Java Persistence with Hibernate, section 5.1.3; the relevant part begins at the last paragraph on page 202.
With @DiscriminatorFormula you can provide an SQL statement that determines the value of the discriminator while fetching the relevant rows from the database. In your case, it would have to be a simple string that evaluates to some arbitrarily selected value. For this to work, you need to decide upon a name that would be used for your Client entity. Suppose that you select 'GenericClient' as the name of the entity. This is the name that should appear within the @Entity annotation as the value of the name attribute. So the complete example, in your case, would look like the following.
@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@Table(name = "client")
@DiscriminatorFormula("'GenericClient'") // *1*
public abstract class Client extends AbstractPersistable<Long> {
    // ...
}

// Application A
@Entity
@DiscriminatorValue("GenericClient") // *2*
public class SimpleClient extends Client {
    // ...
}

// Application B
@Entity
@DiscriminatorValue("GenericClient") // *3*
public class AdvancedClient extends Client {
    // ...
}
The line denoted by *1* is part of the SQL snippet that will always return 'GenericClient' as its value. The subclasses of Client should always be annotated with @DiscriminatorValue("GenericClient"). What this means is that when Hibernate fetches the rows from the DB, the type of the object to be constructed will always be the specific subclass of Client.
If the package where the subclasses of Client reside and the names of the subclasses are fixed:
In that case, the @DiscriminatorValue("GenericClient") on the subclasses wouldn't be required; all you would need to do is:
@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@Table(name = "client")
@DiscriminatorFormula("'com.example.fixed.path.FixedSubClassName'")
public abstract class Client extends AbstractPersistable<Long> {
    // ...
}
The subclasses wouldn't need any annotations. The discriminator-value defaults to the entity-name, which itself defaults to the fully-qualified class-name.
Note: the SQL statement inside @DiscriminatorFormula() can be any valid SQL statement for your targeted DB server.
If you never need to use both ClientSimple and ClientAdvanced in the same application you can declare Client as @MappedSuperclass rather than @Entity.
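For completeness, a sketch of that alternative (bearing in mind the EDIT above: with QueryDSL this would break the generated QClient query type, since the superclass is no longer an @Entity):
@MappedSuperclass
public abstract class Client extends AbstractPersistable<Long> {
    // common fields; no client table of its own and no DTYPE column
}

@Entity
@Table(name = "client")
public class ClientSimple extends Client {
    private String name;
    // getter, setter
}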
In Hibernate, Single Table per Class hierarchy would always need a discriminator column to distinguish between the entities as all classes in one hierarchy are stored in one table.
Here is an example of Hibernate Single Table per Class Hierarchy.
But you may want to consider a different Hierarchy scheme like below:
Hibernate Single Table per Subclass
Advantages
Using this hierarchy does not require complex changes to the database schema when a single parent class is modified.
It works well with a shallow hierarchy.
Disadvantages
As the hierarchy grows, it may result in poor performance.
The number of joins required to construct a subclass also grows.
Hibernate Single Table per Concrete class
Advantages
This is the easiest method of Inheritance mapping to implement.
Disadvantages
Data that belongs to a parent class is scattered across a number of subclass tables, which represent the concrete classes.
This hierarchy is not recommended for most cases.
Changes to a parent class are reflected in a large number of tables.
A query couched in terms of the parent class is likely to cause a large number of select operations.
I would suggest you have a look at the Single Table per Subclass scheme. Although I am not sure about your exact requirements, this may help.
