I am working on a large codebase using Spring MVC with EclipseLink 2.5.2 on a MySQL database. The database and its structure are created directly, not through any code-first approach. My problem concerns two tables in a one-to-many relationship.
CREATE TABLE ROLE (
    ID BIGINT(20) PRIMARY KEY,
    -- OTHER FIELDS --
);
CREATE TABLE ROLE_DOMAIN (
    ID BIGINT(20) PRIMARY KEY,
    ROLE_ID BIGINT(20) NOT NULL,
    DOMAIN VARCHAR(255) NOT NULL
    -- OTHER FIELDS --
);
ALTER TABLE ROLE_DOMAIN ADD CONSTRAINT FK_ROLE_DOMAIN_ROLE_ID FOREIGN KEY (ROLE_ID) REFERENCES ROLE (ID) ON DELETE CASCADE;
ALTER TABLE ROLE_DOMAIN ADD CONSTRAINT UQ_ROLE_DOMAIN_ROLE_ID_DOMAIN UNIQUE (ROLE_ID, DOMAIN);
And in Java, this is how I've got the two entities configured:
@Entity
public class Role {

    private Long id;
    private Set<RoleDomain> roleDomains = new HashSet<>();

    @Id
    @TableGenerator(name = "ROLE.ID", allocationSize = 1, initialValue = 1)
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "ROLE.ID")
    public Long getId() {
        return this.id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    @OneToMany(fetch = FetchType.LAZY, cascade = CascadeType.ALL, orphanRemoval = true)
    @JoinColumn(name = "ROLE_ID", referencedColumnName = "ID", insertable = false, updatable = false)
    public Set<RoleDomain> getRoleDomains() {
        return roleDomains;
    }

    public void setRoleDomains(Set<RoleDomain> roleDomains) {
        this.roleDomains = roleDomains;
    }
}
@Entity
@Table(name = "ROLE_DOMAIN")
public class RoleDomain {

    private Long id;
    private Long roleId;
    private String domain;

    @Id
    @TableGenerator(name = "ROLE_DOMAIN.ID", allocationSize = 1, initialValue = 1)
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "ROLE_DOMAIN.ID")
    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    @Column(name = "ROLE_ID", nullable = false)
    public Long getRoleId() {
        return roleId;
    }

    public void setRoleId(Long roleId) {
        this.roleId = roleId;
    }

    @Column(name = "DOMAIN", length = 255)
    public String getDomain() {
        return domain;
    }

    public void setDomain(String domain) {
        this.domain = domain;
    }
}
Say that in this table structure, I already have a record in ROLE and a record in ROLE_DOMAIN that references it, translating to a Role object named myRole containing the RoleDomain in roleDomains.
Now, when I add a new RoleDomain and save using a Spring Data repository like this:
myRole.add(new RoleDomain("some string"));
roleRepository.save(myRole);
I get an exception for a duplicate insert violating my unique constraint on ROLE_ID and DOMAIN in the database.
[EL Warning]: 2020-10-22 14:53:22.405--UnitOfWork(994047815)--Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.8.v20190620-d6443d8be7): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLIntegrityConstraintViolationException: Duplicate entry '198732-some string' for key 'UQ_ROLE_DOMAIN_ROLE_ID_DOMAIN'
Error Code: 1062
Call: INSERT INTO ROLE_DOMAIN (ID, DOMAIN, ROLE_ID) VALUES (?, ?, ?)
bind => [27, some other string, 198732]
The weirdest thing about this problem is that if I remove the unique constraint from the database (note: keeping the Java annotation configuration EXACTLY the same, literally just a DROP CONSTRAINT in the db), then the save call works just fine. It doesn't create duplicates in ROLE_DOMAIN; it does exactly what it's supposed to and just adds the new record to ROLE_DOMAIN.
I don't understand how a unique constraint in the db could make EclipseLink behave this inconsistently. Have I misconfigured something? Thanks.
EDIT:
I have just now tried replacing the @Table annotation on the RoleDomain class with this:
@Table(name = "ROLE_DOMAIN", uniqueConstraints =
    @UniqueConstraint(columnNames = {"ROLE_ID", "DOMAIN"}))
It didn't change anything.
The issue with your constraint is that EclipseLink orders statements for batching and puts deletes last; this gives you a chance to clean up other constraints and modify existing rows before rows get deleted. This can be changed so that deletes are issued first, using the setShouldPerformDeletesFirst method on the UnitOfWork. As this is native API, you will have to unwrap the EntityManager to get at it, using
em.unwrap(org.eclipse.persistence.sessions.UnitOfWork.class)
if you are in a transaction. This will only be set for the UnitOfWork within that EntityManager, so if you need it everywhere, always, you will want a session event listener with your own session adapter class that listens for postAcquireUnitOfWork and calls setShouldPerformDeletesFirst on it.
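A minimal sketch of both approaches, assuming EclipseLink's native API is on the classpath; the Role and roleRepository names come from the question, while the RoleRepository type, method names and the persistence-unit wiring below are illustrative assumptions:
import javax.persistence.EntityManager;
import org.eclipse.persistence.sessions.SessionEvent;
import org.eclipse.persistence.sessions.SessionEventAdapter;
import org.eclipse.persistence.sessions.UnitOfWork;

public class DeletesFirstExamples {

    // Option 1: flip the flag for the current transaction only.
    public void saveWithDeletesFirst(EntityManager em, Role myRole, RoleRepository roleRepository) {
        // Must be called inside an active transaction so there is a UnitOfWork to unwrap.
        UnitOfWork uow = em.unwrap(UnitOfWork.class);
        uow.setShouldPerformDeletesFirst(true);
        roleRepository.save(myRole);
    }

    // Option 2: flip the flag for every UnitOfWork the session acquires.
    // Register this listener via the "eclipselink.session-event-listener"
    // property in persistence.xml, pointing at this class's fully qualified name.
    public static class DeletesFirstListener extends SessionEventAdapter {
        @Override
        public void postAcquireUnitOfWork(SessionEvent event) {
            ((UnitOfWork) event.getSession()).setShouldPerformDeletesFirst(true);
        }
    }
}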
I cannot get the OneToMany mappings below to work properly, even though they supposedly validate (hibernate.ddl-auto=validate passes). I can insert all entities in the application with no problems, but when doing a findAll or findById, the queries Hibernate generates for me are wrong and result in exceptions. This is very likely due to a problem with my OneToMany mappings, or the lack of a ManyToOne mapping, but I don't see how to make it work.
Currently, the following tables exist in my Postgres 12 database:
CREATE TABLE battlegroups (
id uuid,
gameworld_id uuid,
name varchar(255),
PRIMARY KEY(id)
);
CREATE TABLE battlegroup_players (
id uuid,
battlegroup_id uuid,
player_id integer,
name varchar(255),
tribe varchar(255),
PRIMARY KEY (id)
);
CREATE TABLE battlegroup_player_villages(
battlegroup_id uuid,
player_id integer,
village_id integer,
x integer,
y integer,
village_name varchar(255),
tribe varchar(255),
PRIMARY KEY(battlegroup_id, player_id, village_id, x, y)
);
These are mapped to the following entities in Kotlin:
@Entity
@Table(name = "battlegroups")
class BattlegroupEntity(
    @Id
    val id: UUID,
    @Column(name = "gameworld_id")
    val gameworldId: UUID,
    val name: String? = "",
    @OneToMany(mappedBy = "battlegroupId", cascade = [CascadeType.ALL], fetch = FetchType.EAGER)
    private val players: MutableList<BattlegroupPlayerEntity>)

@Entity
@Table(name = "battlegroup_players")
class BattlegroupPlayerEntity(
    @Id
    val id: UUID,
    @Column(name = "battlegroup_id")
    val battlegroupId: UUID,
    @Column(name = "player_id")
    val playerId: Int,
    val name: String,
    @Enumerated(EnumType.STRING)
    val tribe: Tribe,
    @OneToMany(mappedBy = "id.playerId", cascade = [CascadeType.ALL], fetch = FetchType.EAGER)
    val battlegroupPlayerVillages: MutableList<BattlegroupPlayerVillageEntity>)

@Entity
@Table(name = "battlegroup_player_villages")
class BattlegroupPlayerVillageEntity(
    @EmbeddedId
    val id: BattlegroupPlayerVillageId,
    @Column(name = "village_name")
    val villageName: String,
    @Enumerated(EnumType.STRING)
    val tribe: Tribe)

@Embeddable
data class BattlegroupPlayerVillageId(
    @Column(name = "battlegroup_id")
    val battlegroupId: UUID,
    @Column(name = "player_id")
    val playerId: Int,
    @Column(name = "village_id")
    val villageId: Int,
    val x: Int,
    val y: Int
) : Serializable
This is the SQL Hibernate generates when I do a findAll/findById on a battlegroup:
select
battlegrou0_.id as id1_2_0_,
battlegrou0_.gameworld_id as gameworl2_2_0_,
battlegrou0_.name as name3_2_0_,
players1_.battlegroup_id as battlegr2_1_1_,
players1_.id as id1_1_1_,
players1_.id as id1_1_2_,
players1_.battlegroup_id as battlegr2_1_2_,
players1_.name as name3_1_2_,
players1_.player_id as player_i4_1_2_,
players1_.tribe as tribe5_1_2_,
battlegrou2_.player_id as player_i2_0_3_,
battlegrou2_.battlegroup_id as battlegr1_0_3_,
battlegrou2_.village_id as village_3_0_3_,
battlegrou2_.x as x4_0_3_,
battlegrou2_.y as y5_0_3_,
battlegrou2_.battlegroup_id as battlegr1_0_4_,
battlegrou2_.player_id as player_i2_0_4_,
battlegrou2_.village_id as village_3_0_4_,
battlegrou2_.x as x4_0_4_,
battlegrou2_.y as y5_0_4_,
battlegrou2_.tribe as tribe6_0_4_,
battlegrou2_.village_name as village_7_0_4_
from
battlegroups battlegrou0_
left outer join
battlegroup_players players1_
on battlegrou0_.id=players1_.battlegroup_id
left outer join
battlegroup_player_villages battlegrou2_
on players1_.id=battlegrou2_.player_id -- ERROR: comparing integer to uuid
where
battlegrou0_.id=?
This results in an exception:
PSQLException: ERROR: operator does not exist: integer = uuid
Which makes perfect sense, since it is comparing battlegroup_players.id, which is a uuid, to battlegroup_player_villages.player_id, which is an integer. It should instead be joining battlegroup_players.player_id to battlegroup_player_villages.player_id.
If I change the sql to reflect that and manually execute the above query with the error line replaced:
on players1_.player_id=battlegrou2_.player_id
I get exactly the results I want. How can I change the OneToMany mappings so that it does exactly that?
Is it possible to do this without having a BattlegroupPlayerEntity object in my BattlegroupPlayerVillageEntity class?
Bonus points if you can get the left outer joins to become regular inner joins.
EDIT:
I tried the current answer; I had to slightly adjust my embedded id because my code would not compile otherwise, but it should amount to the same thing:
@Embeddable
data class BattlegroupPlayerVillageId(
    @Column(name = "battlegroup_id")
    val battlegroupId: UUID,
    @Column(name = "village_id")
    val villageId: Int,
    val x: Int,
    val y: Int
) : Serializable {
    @ManyToOne
    @JoinColumn(name = "player_id")
    var player: BattlegroupPlayerEntity? = null
}
Using this still results in a comparison between int and uuid, for some reason.
Schema-validation: wrong column type encountered in column [player_id] in table [battlegroup_player_villages]; found [int4 (Types#INTEGER)], but expecting [uuid (Types#OTHER)]
Interestingly, if I try to put a referencedColumnName = "player_id" in there, I get a StackOverflowError instead.
I did some digging and found some issues with the mapping as well as with the classes; I will try to explain as much as possible.
Warning: long answer ahead.
I will use Java for the code; converting it to Kotlin should not be a problem, I hope.
There are also some issues with the classes themselves (hint: Serializable), so the classes must implement Serializable.
I used Lombok to reduce the boilerplate.
Here is the changed BattleGroup entity:
@Entity
@Getter
@NoArgsConstructor
@Table(name = "battle_group")
public class BattleGroup implements Serializable {

    private static final long serialVersionUID = 6396336405158170608L;

    @Id
    private UUID id;

    private String name;

    @OneToMany(mappedBy = "battleGroupId", cascade = CascadeType.ALL, orphanRemoval = true, fetch = FetchType.LAZY)
    private List<BattleGroupPlayer> players = new ArrayList<>();

    public BattleGroup(UUID id, String name) {
        this.id = id;
        this.name = name;
    }

    public void addPlayer(BattleGroupPlayer player) {
        players.add(player);
    }
}
and the BattleGroupVillage and BattleGroupVillageId classes:
@AllArgsConstructor
@Entity
@Getter
@NoArgsConstructor
@Table(name = "battle_group_village")
public class BattleGroupVillage implements Serializable {

    private static final long serialVersionUID = -4928557296423893476L;

    @EmbeddedId
    private BattleGroupVillageId id;

    private String name;
}

@Embeddable
@EqualsAndHashCode
@Getter
@NoArgsConstructor
public class BattleGroupVillageId implements Serializable {

    private static final long serialVersionUID = -6375405007868923427L;

    @Column(name = "battle_group_id")
    private UUID battleGroupId;

    @Column(name = "player_id")
    private Integer playerId;

    @Column(name = "village_id")
    private Integer villageId;

    public BattleGroupVillageId(UUID battleGroupId, Integer playerId, Integer villageId) {
        this.battleGroupId = battleGroupId;
        this.villageId = villageId;
        this.playerId = playerId;
    }
}
Now, Serializable needs to be implemented in every class: we have used @EmbeddedId, which requires the id class to be Serializable, and hence every containing class implements Serializable as well; otherwise it would give an error.
Now we can solve the problem using the @JoinColumn annotation, like below:
@OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
@JoinColumn(name = "player_id", referencedColumnName = "player_id")
private List<BattleGroupVillage> villages = new ArrayList<>();
name -> the column in the child table, and referencedColumnName -> the column in the parent table.
This will join on the player_id column in both entities; the SQL Hibernate now generates looks like this:
SELECT
battlegrou0_.id AS id1_0_0_,
battlegrou0_.name AS name2_0_0_,
players1_.battle_group_id AS battle_g2_1_1_,
players1_.id AS id1_1_1_,
players1_.id AS id1_1_2_,
players1_.battle_group_id AS battle_g2_1_2_,
players1_.player_id AS player_i3_1_2_,
villages2_.player_id AS player_i4_2_3_,
villages2_.battle_group_id AS battle_g1_2_3_,
villages2_.village_id AS village_2_2_3_,
villages2_.battle_group_id AS battle_g1_2_4_,
villages2_.player_id AS player_i4_2_4_,
villages2_.village_id AS village_2_2_4_,
villages2_.name AS name3_2_4_
FROM
battle_group battlegrou0_
LEFT OUTER JOIN
battle_group_player players1_ ON battlegrou0_.id = players1_.battle_group_id
LEFT OUTER JOIN
battle_group_village villages2_ ON players1_.player_id = villages2_.player_id
WHERE
battlegrou0_.id = 1;
But this would give 2 players if you check BattleGroup#getPlayers(); below is a test case to verify it.
UUID battleGroupId = UUID.randomUUID();

doInTransaction(em -> {
    BattleGroupPlayer player = new BattleGroupPlayer(UUID.randomUUID(), battleGroupId, 1);
    BattleGroupVillageId villageId1 = new BattleGroupVillageId(battleGroupId, 1, 1);
    BattleGroupVillageId villageId2 = new BattleGroupVillageId(battleGroupId, 1, 2);
    BattleGroupVillage village1 = new BattleGroupVillage(villageId1, "Village 1");
    BattleGroupVillage village2 = new BattleGroupVillage(villageId2, "Village 2");
    player.addVillage(village1);
    player.addVillage(village2);
    BattleGroup battleGroup = new BattleGroup(battleGroupId, "Takeshi Castle");
    battleGroup.addPlayer(player);
    em.persist(battleGroup);
});

doInTransaction(em -> {
    BattleGroup battleGroup = em.find(BattleGroup.class, battleGroupId);
    assertNotNull(battleGroup);
    assertEquals(2, battleGroup.getPlayers().size());
    BattleGroupPlayer player = battleGroup.getPlayers().get(0);
    assertEquals(2, player.getVillages().size());
});
If your use case is to get the single player from the BattleGroup, then you would have to use FetchType.LAZY, which is by the way also good for performance.
Why does LAZY work?
Because lazy loading issues a separate select statement only when you actually access the collection. EAGER loads the whole graph wherever you have it: it will try to load every relationship mapped with this fetch type, hence the outer joins (which may result in 2 rows for players, since the joined rows are only unique because of villageId, which you cannot know before querying).
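For reference, the answer mentions BattleGroupPlayer but never shows it in full. A minimal sketch of what it might look like with the @JoinColumn mapping above switched to lazy fetching; the field names are assumptions inferred from mappedBy = "battleGroupId", the answer's SQL, and the test case shown earlier:
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
import javax.persistence.*;
import lombok.Getter;
import lombok.NoArgsConstructor;

@Entity
@Getter
@NoArgsConstructor
@Table(name = "battle_group_player")
public class BattleGroupPlayer implements Serializable {

    private static final long serialVersionUID = 1L; // illustrative value

    @Id
    private UUID id;

    // Referenced by BattleGroup's mappedBy = "battleGroupId"
    @Column(name = "battle_group_id")
    private UUID battleGroupId;

    @Column(name = "player_id")
    private Integer playerId;

    // Joins battle_group_village.player_id to this entity's player_id column;
    // LAZY so loading a BattleGroup does not fan out into village rows.
    @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY)
    @JoinColumn(name = "player_id", referencedColumnName = "player_id")
    private List<BattleGroupVillage> villages = new ArrayList<>();

    public BattleGroupPlayer(UUID id, UUID battleGroupId, Integer playerId) {
        this.id = id;
        this.battleGroupId = battleGroupId;
        this.playerId = playerId;
    }

    public void addVillage(BattleGroupVillage village) {
        villages.add(village);
    }
}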
If you have more than one such field, i.e. you also want to join on battle_group_id, you would need this:
@JoinColumns({
    @JoinColumn(name = "player_id", referencedColumnName = "player_id"),
    @JoinColumn(name = "battle_group_id", referencedColumnName = "battle_group_id")
})
NOTE: I used an H2 in-memory db for the test case.
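The doInTransaction helper used in the test is also not shown. A minimal sketch, assuming a plain JPA EntityManagerFactory; the class name and persistence-unit name below are illustrative:
import java.util.function.Consumer;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public final class TransactionHelper {

    // Illustrative persistence-unit name; use whatever your persistence.xml defines.
    private static final EntityManagerFactory EMF =
            Persistence.createEntityManagerFactory("battlegroups-test");

    // Runs the given work in its own transaction, rolling back on failure.
    public static void doInTransaction(Consumer<EntityManager> work) {
        EntityManager em = EMF.createEntityManager();
        try {
            em.getTransaction().begin();
            work.accept(em);
            em.getTransaction().commit();
        } catch (RuntimeException e) {
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback();
            }
            throw e;
        } finally {
            em.close();
        }
    }
}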
I have two entities:
RawDeviceMessage, which represents a raw message from a device
TagDetail, which represents the message after it has been parsed
A TagDetail may or may not be associated with a RawDeviceMessage, because it may be created directly without a raw message to parse. Thus, I have an optional bi-directional OneToOne relation between RawDeviceMessage and TagDetail.
In the database I have the following tables:
raw_device_message (id + other columns)
tag_detail (id + other columns)
tag_detail_has_raw_device_message (tag_detail_id, raw_device_message_id): this table is a join table with the proper SQL constraints and foreign keys to enforce the OneToOne relation at the database level.
I have mapped my Java classes like this:
RawDeviceMessage
@Entity
@Table(name = "raw_device_message")
public class RawDeviceMessage implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id", unique = true, updatable = false, nullable = false)
    private Long id;

    @OneToOne(mappedBy = "rawDeviceMessage", fetch = FetchType.LAZY)
    private TagDetail tagDetail;

    public RawDeviceMessage() {}

    public Long getId() {...}
    public void setId(final Long id) {...}
    public TagDetail getTagDetail() {...}
    public RawDeviceMessage setTagDetail(TagDetail tagDetail) {...}
}
TagDetail
@Entity
@Table(name = "tag_detail")
public class TagDetail implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id", unique = true, updatable = false, nullable = false)
    private Long id;

    @OneToOne(fetch = FetchType.EAGER, cascade = { CascadeType.REFRESH, CascadeType.MERGE })
    @JoinTable(
        name = "tag_detail_has_raw_device_message",
        joinColumns = @JoinColumn(name = "tag_detail_id"),
        inverseJoinColumns = @JoinColumn(name = "raw_device_message_id"))
    private RawDeviceMessage rawDeviceMessage;

    public TagDetail() {}

    public Long getId() {...}
    public void setId(final Long id) {...}
    public RawDeviceMessage getRawDeviceMessage() {...}
    public void setRawDeviceMessage(RawDeviceMessage rawDeviceMessage) {...}
}
The issue
My issue is that when performing a find all on the RawDeviceMessage resource, Hibernate generates the wrong SQL query:
SELECT rawdevicem0_.id AS id1_15_,
rawdevicem0_2_.tag_detail_id AS tag_deta0_37_
FROM raw_device_message rawdevicem0_
LEFT OUTER JOIN tag_detail_has_raw_device_message rawdevicem0_2_ ON rawdevicem0_.id=rawdevicem0_2_.tag_detail_id
CROSS JOIN tag_detail tagdetail1_
LEFT OUTER JOIN tag_detail_has_raw_device_message tagdetail1_1_ ON tagdetail1_.id=tagdetail1_1_.tag_detail_id
WHERE rawdevicem0_2_.tag_detail_id=tagdetail1_.id
ORDER BY rawdevicem0_.id ASC
As you can see, in the first LEFT OUTER JOIN the join condition is rawdevicem0_.id=rawdevicem0_2_.tag_detail_id.
It tries to join raw_device_message.id with tag_detail_has_raw_device_message.tag_detail_id, which makes no sense and messes up all the results.
Instead, the join condition should be rawdevicem0_.id=rawdevicem0_2_.raw_device_message_id.
This condition would correctly join raw_device_message.id with tag_detail_has_raw_device_message.raw_device_message_id.
I have shortened the query generated by Hibernate to remove all unrelated fields, but nowhere in the generated query does the column raw_device_message_id appear, so there is definitely something wrong.
Is this a Hibernate bug, or am I doing my mapping wrong?
If the purpose of the tag_detail_has_raw_device_message table is only to link the two tables, then you can drop it: you can have a one-to-one with just the two tables.
More details here:
Setting up a One To Many Joins Against a Bridge Table using JPA
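A minimal sketch of that direct mapping, assuming you add a raw_device_message_id foreign-key column to tag_detail and drop the join table (the column name is an assumption):
import java.io.Serializable;
import javax.persistence.*;

@Entity
@Table(name = "tag_detail")
public class TagDetail implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id", unique = true, updatable = false, nullable = false)
    private Long id;

    // Owning side: the FK lives directly on tag_detail, so no join table is needed.
    // The inverse side in RawDeviceMessage can keep mappedBy = "rawDeviceMessage".
    @OneToOne(fetch = FetchType.EAGER, cascade = { CascadeType.REFRESH, CascadeType.MERGE })
    @JoinColumn(name = "raw_device_message_id")
    private RawDeviceMessage rawDeviceMessage;

    // getters and setters as before
}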
However, if you want to keep an intermediate mapping table because it carries additional info for that relationship, then there are more details here:
http://what-when-how.com/hibernate/advanced-entity-association-mappings-hibernate/
I am trying to fetch the list of records from a view which has a composite primary key with three columns.
I tried to embed the composite key in the entity class, but I am getting the errors mentioned below. The columns of the view (VW_ALERTS) are C_ID, MAT_ID, P_MONTH, CO_TYPE, CO_SUBTYPE.
Here the composite key columns are C_ID, MAT_ID, P_MONTH; I map them as properties of the embeddable class.
Please help me resolve the issue.
org.hibernate.QueryException: could not resolve property: coreId of: com.sp.cpem.dto.VwAlerts [FROM com.ct.cpem.dto.VwAlerts d ORDER BY d.cId ASC]
The following code is used to execute the HQL:
Session session = sessionFactory.openSession();
String hql = "FROM VwAlerts d ORDER BY d.coId ASC";
Query query = session.createQuery(hql);
return query.list();
The entity class:
@SuppressWarnings("unchecked")
@Entity
@Table(schema = "TIGER", name = "VW_ALERTS")
public class VwAlerts {

    @Embedded
    private VwAlertsPK vwAlertsPK;

    @Basic
    @Column(name = "CO_TYPE", nullable = true)
    private String coType;

    @Basic
    @Column(name = "CO_SUBTYPE", nullable = true)
    private String coSubType;
The class used for the composite key:
@Embeddable
public class VwAlertsPK implements Serializable {

    @Basic
    @Column(name = "C_ID", nullable = false)
    private BigDecimal cId;

    @Basic
    @Column(name = "MAT_ID", nullable = true)
    private BigDecimal matId;

    @Basic
    @Column(name = "P_MONTH", nullable = true)
    private BigDecimal pMonth;
I am expecting to get all the records from the view.
I also tried with an @Id column in the entity class, but that failed by returning only duplicates of the first row of the view.
Your entity VwAlerts has only 3 properties: vwAlertsPK, coType, coSubType,
but in your HQL you are trying to access a property coreId which does not exist in your entity:
FROM com.ct.cpem.dto.VwAlerts d ORDER BY d.coreId ASC
So either add the property coreId to your entity, or update the ORDER BY clause so it points to a property that actually exists on your entity.
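For example, to order by the C_ID column you would go through the embedded key's property path. A minimal sketch, reusing the session/query style from the question (the DAO class and method names are illustrative):
import java.util.List;
import org.hibernate.Query;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class VwAlertsDao {

    private final SessionFactory sessionFactory;

    public VwAlertsDao(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // Orders by the C_ID column through the embedded key's property path.
    @SuppressWarnings("unchecked")
    public List<VwAlerts> findAllOrderedByCId() {
        Session session = sessionFactory.openSession();
        try {
            Query query = session.createQuery(
                    "FROM VwAlerts d ORDER BY d.vwAlertsPK.cId ASC");
            return query.list();
        } finally {
            session.close();
        }
    }
}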
It is common practice to map the same table twice or even three times, each time with the subset of columns needed for processing. I have found that with Hibernate 3.5.1, every time a @ManyToOne or a @OneToMany exists in two entities mapping the same table, the foreign key is created twice. This has no impact on MySQL and SQL Server, but Oracle refuses the second creation statement.
Here is an example:
@Entity
@javax.persistence.SequenceGenerator(name = "SEQ_STORE", sequenceName = "SEQ_ENTITY")
@Table(name = "ENTITIES")
class Entity {
    // All columns
    // And then.....
    @ManyToMany(fetch = FetchType.EAGER)
    @JoinTable(name = "BRIDGE_TABLE", joinColumns = { @JoinColumn(name = "ENTITY_ID") }, inverseJoinColumns = { @JoinColumn(name = "ROLE_ID") })
    @OrderBy("id DESC")
    private Set<Role> roles = new HashSet<Role>();
}

@Entity
@javax.persistence.SequenceGenerator(name = "SEQ_STORE", sequenceName = "SEQ_ENTITY")
@Table(name = "ENTITIES")
class EntityListItem {
    // Only a subset of the previous columns
    // And then.....
    @ManyToMany(fetch = FetchType.EAGER)
    @JoinTable(name = "BRIDGE_TABLE", joinColumns = { @JoinColumn(name = "ENTITY_ID") }, inverseJoinColumns = { @JoinColumn(name = "ROLE_ID") })
    @OrderBy("id DESC")
    private Set<Role> roles = new HashSet<Role>();
}
Currently, Role is designed not to be navigable to Entity (otherwise I guess there would be 4 foreign keys).
Here are the statements being issued by Hibernate:
create table BRIDGE_TABLE (ENTITY_ID number(19,0) not null, ROLE_ID varchar2(60 char) not null, primary key (ENTITY_ID, ROLE_ID)); -- creates the table
alter table BRIDGE_TABLE add constraint FK47CFB9F0B068EF3F foreign key (ENTITY_ID) references ENTITIES;
alter table BRIDGE_TABLE add constraint FK47CFB9F0B068EF3F foreign key (ENTITY_ID) references ENTITIES;
I'm not sure whether this is a Hibernate bug. We cannot currently move to Hibernate 4. Can this be fixed via code, or does it need a new Hibernate version?
I have made a workaround:
Add a @ForeignKey annotation with the same FK name to both entities (e.g. @ForeignKey(name = "FK_TO_ENTITY", inverseName = "FK_TO_ROLE"))
Extend LocalSessionFactoryBean like the following:
@Override
public void createDatabaseSchema() throws DataAccessException {
    logger.info("Creating database schema for Hibernate SessionFactory");
    SessionFactory sessionFactory = getSessionFactory();
    final Dialect dialect = ((SessionFactoryImplementor) sessionFactory).getDialect();
    final LinkedHashSet<String> sql = new LinkedHashSet<String>();
    for (String query : getConfiguration().generateSchemaCreationScript(dialect))
        sql.add(query);
    HibernateTemplate hibernateTemplate = new HibernateTemplate(sessionFactory);
    hibernateTemplate.execute(new HibernateCallback<Void>() {
        @Override
        public Void doInHibernate(Session session) throws SQLException {
            session.doWork(new Work() {
                @Override
                public void execute(Connection conn) throws SQLException {
                    PhoenixAnnotationSessionFactoryBean.this.executeSchemaScript(conn, sql.toArray(new String[0]));
                }
            });
            return null;
        }
    });
}
Reason: the @ForeignKey annotation ensures that the FKs will have the same name, hence the generated SQL statements are identical. The overridden LSFB stores the SQL queries needed to create the schema in a Set, so no duplicate statements are allowed.
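A sketch of where the @ForeignKey annotation from the first step would sit, using Hibernate 3.x's org.hibernate.annotations.ForeignKey with the FK names suggested above; the same annotation goes on the roles mapping of both Entity and EntityListItem:
// Inside Entity, and identically inside EntityListItem:
@ManyToMany(fetch = FetchType.EAGER)
@JoinTable(name = "BRIDGE_TABLE",
        joinColumns = { @JoinColumn(name = "ENTITY_ID") },
        inverseJoinColumns = { @JoinColumn(name = "ROLE_ID") })
// Same constraint names on both mappings, so the generated ALTER statements are
// identical and the LinkedHashSet in the overridden factory bean deduplicates them.
@org.hibernate.annotations.ForeignKey(name = "FK_TO_ENTITY", inverseName = "FK_TO_ROLE")
@OrderBy("id DESC")
private Set<Role> roles = new HashSet<Role>();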