Maybe someone has faced the same situation? In my project I have two services. I create the database tables using Liquibase, and when creating the permissions table I immediately insert 4 rows into it, and everything works fine. Now, in the second service (an admin panel), I created an API that allows creating permissions. When I try to insert a record into the permissions table, which already contains those 4 records, I get ERROR: duplicate key value violates unique constraint "permissions_pkey". The error keeps occurring; if I repeat the request, the record is finally added on the fifth attempt. Thanks in advance.
<insert tableName="permissions">
<column name="id" value="1"/>
<column name="name" value="USERS_READ"/>
</insert>
<insert tableName="permissions">
<column name="id" value="2"/>
<column name="name" value="USERS_GET"/>
</insert>
<insert tableName="permissions">
<column name="id" value="3"/>
<column name="name" value="USERS_CREATE"/>
</insert>
<insert tableName="permissions">
<column name="id" value="4"/>
<column name="name" value="USERS_DELETE"/>
</insert>
public class Permission {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
}
If it's an identity column, you don't have to provide values for the id column in the Liquibase script. Let the DB calculate them for you. That's the whole point of using an identity column, after all.
If you need to reference the ids later in the script somehow, look up the way of obtaining the last inserted id from the RDBMS you're using; quite often there will be a function like LAST_INSERT_ID(). Or look up the inserted rows using the name column.
You are using generation type IDENTITY, which maps to the data type serial in Postgres. When you insert permissions created from your admin panel, Postgres uses a sequence that (from your description) starts at 1, and ids 1 through 4 are already taken by your manual inserts. Leave the ids out of your manual inserts, or explicitly set ids on the entities created from your admin panel, e.g. by querying the current maximum id used in the table. Consider changing the generation type.
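If you would rather keep the explicit ids in the changelog, another option (not mentioned above, and assuming PostgreSQL with the default serial-backed sequence) is to advance that sequence once past the seeded rows, for example from the admin service. A rough sketch, with the class name being hypothetical:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical helper: bumps the sequence behind permissions.id so that the
// next generated id starts after the rows seeded by Liquibase (1..4).
@Service
public class PermissionSequenceFixer {

    @PersistenceContext
    private EntityManager em;

    @Transactional
    public void syncSequenceWithSeededRows() {
        // setval and pg_get_serial_sequence are standard PostgreSQL functions
        em.createNativeQuery(
                "SELECT setval(pg_get_serial_sequence('permissions', 'id'), "
                + "(SELECT COALESCE(MAX(id), 1) FROM permissions))")
          .getSingleResult();
    }
}

Either way, Permission entities created from the admin API should be saved with the id left unset so the database assigns it.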
Let's say I have a many-to-many relation in spring-data-jpa between a Post entity and a Post_Tag entity.
Now, if I persist a post with the tags java and testing, the post_tags java and testing will be persisted along with the post (cascade type persist). If I then save another post with the tags php and testing, will the testing row be duplicated in the Post_Tag table, or will the previous entry be reused?
It depends on the id. If you have a tagName field in the Post_Tag entity and use it as the id, only then will the previous entry be reused; otherwise a duplicate entry will be created.
<id name="tagName" type="string">
<column name="tag_name" />
<generator class="assigned" />
</id>
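In annotation terms, the same idea looks roughly like this (the class and field names are assumed, not taken from the question):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Hypothetical annotated equivalent: the tag name itself is the primary key,
// so a post that references "testing" again points at the existing row
// instead of inserting a duplicate.
@Entity
@Table(name = "post_tag")
public class PostTag {

    @Id
    private String tagName;   // assigned by the application, not generated

    protected PostTag() {
    }

    public PostTag(String tagName) {
        this.tagName = tagName;
    }
}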
I am new to Hibernate and was writing some test program.
I am wondering if it's a must for a table to have one column that is populated using some kind of sequence.
For example, I created a table
create table course(course_name varchar2(20));
and when I am defining Course.hbm.xml in the following way
<class name="Course" table="COURSE" >
<property name="course">
<column name="course"/>
</property>
</class>
I am getting an error in the XML file saying a declaration of "id" or something similar is expected. I can give the whole error message if required.
You need an ID column so Hibernate can identify each row in the table. I'm not fluent in the old-school Hibernate XML mapping, but it should look roughly like this:
create table course(id integer primary key, course_name varchar2(20));
<class name="Course" table="COURSE" >
<id name="id">
<!-- uses sequence, auto increment or whatever your DBMS uses for id generation -->
<generator class="native"/>
</id>
<property name="course">
<column name="course"/>
</property>
</class>
As a side note: mapping your entities with annotations is a bit more common nowadays. It makes things easier, especially for beginners.
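For comparison, a minimal annotation-based version of the same mapping might look like this (field names assumed):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

// Sketch of the Course entity with annotations: the id is generated by the
// database (sequence, identity, etc. depending on the dialect), and the name
// maps to the COURSE_NAME column from the create statement above.
@Entity
@Table(name = "COURSE")
public class Course {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @Column(name = "COURSE_NAME")
    private String course;
}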
In my Java application, I am using a Hibernate .hbm file to access the database. Is it possible to update the primary key 'id' column in the table? The 'id' column in my .hbm file is mapped like this:
<hibernate-mapping package="org.jems.user.model">
<class name="Student_Details" table="t_student">
<id name="id" type="int" column="id">
<generator class="increment"/>
</id>
<property name="name" column="name" type="string" unique="true" not-null="true" />
<property name="description" column="description" type="string" />
<property name="comments" column="comments" type="string" />
<property name="active" column="isActive" type="boolean" not-null="true" />
</class>
</hibernate-mapping>
Try this:
String hql = "update Student_Details set id = ? where id = ?";
Session session = HibernateSessionFactory.getSession();
Transaction tx = session.beginTransaction();
Query query = session.createQuery(hql);
query.setInteger(0, 1);   // new id value
query.setInteger(1, 2);   // id of the row to change
query.executeUpdate();
tx.commit();
or just use a native SQL query:
String sql = "update t_student set id = ? where id = ?";
Session session = HibernateSessionFactory.getSession();
Transaction tx = session.beginTransaction();
SQLQuery query = session.createSQLQuery(sql);
query.setParameter(0, 1);   // new id value
query.setParameter(1, 2);   // id of the row to change
query.executeUpdate();
tx.commit();
No. Hibernate doesn't allow you to change the primary key. In general, a primary key value should never change; if it needs to be changed, then the column(s) involved are not good candidates for a primary key.
There is a workaround if you prefer to update via an entity instead of a query (a rough sketch follows the steps below):
1) clone the entity to a new entity.
2) delete the old entity (be careful with cascading to child entities).
3) change the primary key of the new entity (or leave it null, depending on your generation strategy).
4) save the new entity.
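A rough sketch of those steps (the setters and the example id values are assumptions, since the question doesn't show the rest of the Student_Details class):

// Hypothetical illustration of the clone-delete-save workaround using the
// Student_Details mapping from the question.
Session session = HibernateSessionFactory.getSession();
Transaction tx = session.beginTransaction();

Student_Details old = (Student_Details) session.get(Student_Details.class, 1);

// 1) clone the entity into a new instance
Student_Details copy = new Student_Details();
copy.setName(old.getName());
copy.setDescription(old.getDescription());
copy.setComments(old.getComments());
copy.setActive(old.isActive());

// 2) delete the old entity first (mind any cascaded children)
session.delete(old);
session.flush();

// 3) with the "increment" generator the new id is assigned on save;
//    with an "assigned" strategy you would call copy.setId(newId) here
// 4) save the new entity
session.save(copy);
tx.commit();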
In Hibernate, id column values are automatically generated when session.save() is executed, based on which generation strategy you use. Check this post for a simple example.
Try writing a query like:
update table_name set id = value where ... (specify the remaining conditions)
Basically, Hibernate does not allow you to update or change the primary key of a database entity.
The reason is that the entity data you fetch from the database via a query or the .get() / .load() methods goes into the persistence context.
So from Hibernate's perspective, updating the primary key of such a persistent entity object means deleting the old row from the database and creating a new one.
It is better to run a normal update query, for example:
Transaction tx = HibernateSessionFactory.getSession().beginTransaction();
Query query = HibernateSessionFactory.getSession()
        .createQuery("update Student_Details set id = :newId where id = :oldId");
query.setInteger("newId", 2).setInteger("oldId", 1);
query.executeUpdate();
tx.commit();
Hibernate mapping question where the behavior is ambiguous and/or dangerous. I have a one-to-many relationship that has a cascade-delete-orphan condition AND a where condition to limit the items in the collection. Mapping here -
<hibernate-mapping>
<class name="User" table="user" >
<!-- properties and id ... -->
<set table="email" inverse="true" cascade="all,delete-orphan" where="deleted!=true">
<key column="user_id">
<one-to-many class="Email"/>
</set>
</class>
</hibernate-mapping>
Now suppose that I have a User object which is associated with one or more Email objects, at least one of which has a true value for the deleted property. Which of the following will happen when I call session.delete() on the User object?
1. The User and all the Email objects, including those with deleted=true, are deleted.
2. The User and only the Email objects with deleted!=true are deleted.
On one hand, scenario 1) ignores the where condition, which may not be correct according to the domain model. BUT in scenario 2) if the parent is deleted, and there's a foreign key constraint on the child (email) table's join key, then the delete command will fail. Which happens and why? Is this just another example of how Hibernate's features can be ambiguous?
I didn't test the mapping but in my opinion, the correct (default) behavior should be to ignore the where condition and to delete all the child records (that's the only option to avoid FK constraints violations when deleting the parent). That's maybe not "correct" from a business point of view but the other option is not "correct" either as it just doesn't work.
To sum up, the mapping itself looks incoherent. You should either not cascade the delete operation and handle the deletion of the child Email entities manually, or (and I think this might be the most correct behavior) implement a soft delete of both the User and the associated Email. Something like this:
<hibernate-mapping>
<class name="User" table="user" where="deleted<>'1'">
<!-- properties and id ... -->
<set table="email" inverse="true" cascade="all,delete-orphan" where="deleted<>'1'">
<key column="user_id">
<one-to-many class="Email"/>
</set>
<sql-delete>UPDATE user SET deleted = '1' WHERE id = ?</sql-delete>
</class>
<class name="Email" table="email" where="deleted<>'1'">
<!-- properties and id ... -->
<sql-delete>UPDATE email SET deleted = '1' WHERE id = ?</sql-delete>
</class>
</hibernate-mapping>
What is done here:
We override the default delete using sql-delete to update a flag instead of a real delete (the soft delete).
We filter the entities and the association(s) using the where to only fetch entities that haven't been soft deleted.
This is inspired by Soft deletes using Hibernate annotations. Not tested though.
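For reference, the annotation-based equivalent of the same trick looks roughly like this (a sketch only; the flag column is assumed to be a boolean here, whereas the XML above uses a '1' flag):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;
import org.hibernate.annotations.SQLDelete;
import org.hibernate.annotations.Where;

// Sketch: session.delete() runs the UPDATE from @SQLDelete instead of a real
// DELETE, and @Where keeps soft-deleted rows out of queries and collections.
@Entity
@Table(name = "email")
@SQLDelete(sql = "UPDATE email SET deleted = true WHERE id = ?")
@Where(clause = "deleted = false")
public class Email {

    @Id
    @GeneratedValue
    private Long id;

    private boolean deleted;
}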
References
5.1.3. Class
6.2. Collection mappings
16.3. Custom SQL for create, update and delete
I have a user object that has a one-to-many relationship with string types. I believe they are simple mappings. The types table holds the associated user_id and the type names, with a primary key 'id' that is basically a counter.
<class name="Users" table="users">
<id column="id" name="id" />
...
<set name="types" table="types" cascade="save-update">
<key column="id" />
<one-to-many class="Types" />
</set>
</class>
<class name="Types" table="types">
<id column="id" name="id" />
<property column="user_id" name="user_id" type="integer" />
<property column="type" name="type" type="string" />
</class>
This is the Java code I used for adding to the database:
User u = new User();
u.setId(user_id);
...
Collection<Types> t = new HashSet<Types>();
t.add(new Type(auto_incremented_id, user_id, type_name));
u.setTypes(t);
getHibernateTemplate().saveOrUpdate(u);
When I run it, it gives this error:
61468 [http-8080-3] WARN org.hibernate.util.JDBCExceptionReporter - SQL Error: 1062, SQLState: 23000
61468 [http-8080-3] ERROR org.hibernate.util.JDBCExceptionReporter - Duplicate entry '6' for key 'PRIMARY'
61468 [http-8080-3] ERROR org.hibernate.event.def.AbstractFlushingEventListener - Could not synchronize database state with session
org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update
When I check the SQL, it shows:
Hibernate: insert into users (name, id) values (?, ?)
Hibernate: insert into types (user_id, type, id) values (?, ?, ?)
Hibernate: update types set id=? where id=?
Why does Hibernate try to update the types' id?
The error says Duplicate entry '6' for key 'PRIMARY', but there really isn't one. I made sure the ids are incremented each time, and the users and types are added into the database correctly.
I logged the information going in, and the type being added has an id of 7 and a user id of 6. Could it be that Hibernate takes the user_id of 6 and tries to update types, setting id=6 where id=7, hence the duplicate primary key error?
But why would it do something so strange? Is there a way to stop it from updating?
Should I set the id manually? If not, then how should I add the types? It gives other errors when I add a type object that only has a type string in it and no ids.
Thanks guys. Been mulling over it for days...
Your biggest problem is an incorrect column in the <key> mapping: it should be "user_id", not "id". That said, your whole mapping seems a bit strange to me.
First of all, if you want IDs auto-generated, you should really let Hibernate take care of that by specifying an appropriate generator:
<id column="id" name="id">
<generator class="native"/>
</id>
Read Hibernate Documentation on generators for various available options.
Secondly, if all you need is a set of string types, consider re-mapping them as a collection of elements rather than a one-to-many relationship:
<set name="types" table="types">
<key column="user_id"/>
<element column="type" type="string"/>
</set>
That way you won't need an explicit "Types" class or a mapping for it. Even if you do want additional attributes on "Types", you can still map it as a component rather than an entity.
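With that element mapping, the Java side is just a collection of strings on the user class (sketch, property names assumed):

import java.util.HashSet;
import java.util.Set;

// Sketch: no separate Types entity is needed; Hibernate stores each string
// in the types table together with the owning user_id.
public class Users {

    private int id;
    private Set<String> types = new HashSet<String>();

    public Set<String> getTypes() { return types; }
    public void setTypes(Set<String> types) { this.types = types; }
}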
Finally, if "Types" must be an entity due to some requirement you have not described, the relationship between "Users" and "Types" is bi-directional and needs to be mapped as such:
<set name="types" table="types" inverse="true">
<key column="user_id"/>
<one-to-many class="Types"/>
</set>
...
in Types mapping:
<many-to-one name="user" column="user_id" not-null="true"/>
In the latter case "Types" would have to have a "user" property of type "Users".
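The Java side of that bi-directional version might look roughly like this (a sketch; the addType helper is an assumption, added only to keep both sides of the association in sync):

import java.util.HashSet;
import java.util.Set;

// Sketch of the bi-directional relationship (shown together for brevity;
// normally one class per file): Types owns the foreign key via its "user"
// property, and Users exposes a helper that sets both sides.
class Types {
    private int id;
    private String type;
    private Users user;

    public void setUser(Users user) { this.user = user; }
}

class Users {
    private int id;
    private Set<Types> types = new HashSet<Types>();

    // hypothetical convenience method, not part of the original mapping
    public void addType(Types type) {
        types.add(type);
        type.setUser(this);
    }
}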
Here is a detailed example.
The solution that worked for me was to separately save the two parts (without adding type to user):
getHibernateTemplate().save(user);
getHibernateTemplate().save(new Type(user.id, type_name));
And with <generator class="native"/> only on the types id.
Bad practice?
I had mapped it as a collection of elements, but it was wrongly adding the user_id into the types id column, which sometimes caused a duplicate-key error, since the types id is the only primary key column. Weird! I vaguely remember there was some other column error too, but I forgot about it because I immediately reverted back to one-to-many. Couldn't grasp the strange workings of Hibernate...
I will try the bi-directional solution sometime... Thanks a lot for all the help :)