I'm trying to map a HashMap similar to the one specified as example 3 in the JavaDoc for @MapKeyJoinColumn (see http://www.objectdb.com/api/java/jpa/MapKeyJoinColumn):
@Entity
public class Student {
    @Id int studentId;
    ...
    @ManyToMany // students and courses are also many-many
    @JoinTable(name = "ENROLLMENTS",
        joinColumns = @JoinColumn(name = "STUDENT"),
        inverseJoinColumns = @JoinColumn(name = "SEMESTER"))
    @MapKeyJoinColumn(name = "COURSE")
    Map<Course, Semester> enrollment;
    ...
}
The join table generated with EclipseLink 2.3 has the following layout:
TABLE enrollments (
  student_id bigint NOT NULL,
  semester_id bigint NOT NULL,
  course_id bigint,
  CONSTRAINT enrollments_pkey PRIMARY KEY (student_id, semester_id)
)
Why is the primary key generated on STUDENT and SEMESTER and not on STUDENT and COURSE? That doesn't make sense here: with this primary key a Student can take only one course per semester. student_id and course_id should form the primary key instead. That would also match the Java map definition (the key must be unique, but the same value may be assigned to different keys).
JPA sees the relationship as being between Student and Semester, just as in a traditional @ManyToMany without the @MapKeyJoinColumn. In a traditional @ManyToMany duplicates are not allowed, and items are deleted by source/target ids, so the PK/index is desired on those columns.
For a finer level of control over the model, consider mapping the ENROLLMENTS table to an Enrollment entity instead (a sketch follows below).
I can see from the Java model how you might desire something different, so please log a bug/enhancement request for this.
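A minimal sketch of that alternative, assuming an Enrollment entity keyed by student and course (the id class and field names are illustrative; Student, Course and Semester are the entities from the example):

import java.io.Serializable;
import javax.persistence.*;

@Entity
@IdClass(EnrollmentId.class)
public class Enrollment {

    // Composite key on student + course, matching the uniqueness of the map key.
    @Id
    @ManyToOne
    @JoinColumn(name = "STUDENT")
    private Student student;

    @Id
    @ManyToOne
    @JoinColumn(name = "COURSE")
    private Course course;

    @ManyToOne
    @JoinColumn(name = "SEMESTER")
    private Semester semester;

    // getters and setters omitted
}

// Id class: attribute names mirror the @Id associations, types mirror the parents' id types.
public class EnrollmentId implements Serializable {
    private int student;  // Student.studentId is an int
    private int course;   // assumes Course also has an int id
    // equals() and hashCode() required
}

With this, the primary key of ENROLLMENTS is (STUDENT, COURSE), which matches the map semantics the question asks for.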
I'm trying to achieve the following with JPA: I have Users, Organizations and Roles. A User can have multiple Roles in a given Organization. He can also belong to multiple Organizations, and of course have different Roles per Organization.
Currently I would think that a schema for this should look like this (but I'm also open to alternative approaches):
CREATE TABLE user
(
  id INT NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (id)
);

CREATE TABLE role
(
  id INT NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (id)
);

CREATE TABLE organization
(
  id INT NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (id)
);

CREATE TABLE `user_and_organization_to_role`
(
  `id` INT NOT NULL AUTO_INCREMENT,
  `fk_user` INT NOT NULL REFERENCES user (id),
  `fk_organization` INT NOT NULL REFERENCES organization (id),
  `fk_role` INT NOT NULL REFERENCES role (id),
  PRIMARY KEY (`id`),
  UNIQUE KEY (`fk_user`, `fk_organization`, `fk_role`)
);
I wouldn't have problems checking roles with native SQL Queries, but I would like to model this in JPA to use the Hibernate Metamodel and Criteria API to implement permission checks.
I thought that something like this would be achievable, even though I'm not 100% sure whether I'd reach my goal with the Criteria API that way:
@Entity
public class Organization {
}

@Entity
public class Role {
}

@Entity
public class User {
    private Map<Organization, List<Role>> organizationToRoles;
}
Unfortunately I didn't manage to find the right annotations to get organizationToRoles mapped correctly. And even though I would think this is a common problem, I didn't find a tutorial that explains how to do it.
Could somebody tell me if such a map is doable with JPA at all, and maybe give an example?
Or, if it is not possible to directly have a Map<Organization, List<Role>> organizationToRoles in User, how could such a mapping be achieved, e.g. with an intermediate Entity that forms the relation between User, Organization and Role?
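For the intermediate-entity variant, a hedged sketch mirroring the user_and_organization_to_role table above could look like this (the class name UserOrganizationRole is made up for illustration):

import javax.persistence.*;

@Entity
@Table(name = "user_and_organization_to_role",
       uniqueConstraints = @UniqueConstraint(columnNames = {"fk_user", "fk_organization", "fk_role"}))
public class UserOrganizationRole {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @ManyToOne(optional = false)
    @JoinColumn(name = "fk_user")
    private User user;

    @ManyToOne(optional = false)
    @JoinColumn(name = "fk_organization")
    private Organization organization;

    @ManyToOne(optional = false)
    @JoinColumn(name = "fk_role")
    private Role role;

    // getters and setters omitted
}

Permission checks then become Criteria queries against UserOrganizationRole, filtered by user, organization and role, which also works with the generated metamodel.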
In Spring Data JDBC, if an entity (e.g. Customer) has a value (e.g. Address), as in the example here, the value gets a back-reference column (column customer in table address) pointing to the entity in the DB schema:
CREATE TABLE "customer" (
"id" BIGSERIAL NOT NULL,
"name" VARCHAR(255) NOT NULL,
PRIMARY KEY (id)
);
CREATE TABLE "address" (
"customer" BIGINT,
"city" VARCHAR(255) NOT NULL
);
The problem with this is that if you use the Address value more than once in one entity, or even in different entities, you have to define an extra column for each usage. Only the primary id of the entity is stored in these columns, and otherwise there is no way to tell which entity and field a given address belongs to. In my actual implementation I have five of these columns for the Address value:
"order_address" BIGINT, -- backreference for orderAddress to customer id
"service_address" BIGINT, -- backreference for serviceAddress to customer id
"delivery_address" BIGINT, -- backreference for deliveryAddress to customer id
"installation_address" BIGINT, -- backreference for installationAddress to provider_change id
"account_address" BIGINT, -- backreference for accountAddress to payment id
I understand how it works, but I don't understand the idea behind this back-reference implementation. Can someone please shed some light on this? Thanks!
As with most good questions, there are many sides to the answer.
The historical/symmetry answer
When it comes to references between entities, Spring Data JDBC supports 1:1 (the one you ask about) and 1:N (lists, sets and maps).
For the latter, anything but a back-reference would be just weird/wrong.
And by using a back-reference, the 1:1 case becomes basically the same as 1:N, which simplifies the code, and that is a good thing.
The DML process answer
With the back-reference, the process of inserting and deleting becomes much easier: insert the aggregate root (Customer in your example) first, then all referenced entities. This keeps working when those entities have further nested entities. Deletes work the other way round but are equally straightforward.
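A small sketch to make that concrete (the repository name is illustrative; Customer and Address are the classes from the question):

import org.springframework.data.repository.CrudRepository;

// The aggregate is saved and deleted as a whole through the root's repository.
interface CustomerRepository extends CrudRepository<Customer, Long> {}

// customerRepository.save(customer)  -> INSERT customer, then INSERT its address row(s)
// customerRepository.deleteById(id)  -> DELETE the address row(s), then the customer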
The dependency answer
Referenced entities in an aggregate can only exist as part of that aggregate. In that sense they depend on the aggregate root. Without the aggregate root there is no inner entity, while the aggregate root can very often exist just fine without the inner entity. It therefore makes sense that the inner entity carries the reference.
The ID answer
With this design, the inner entity doesn't even need an id. Its identity is perfectly given by the identity of the aggregate root and, in the case of multiple one-to-one relationships to the same entity class, the back-reference column used.
Alternatives
All these reasons are more or less based on a single one-to-one relationship. I certainly agree that it looks a little weird for two such relationships to the same class, and with five, as in your example, it becomes ridiculous. In such cases you might want to look into alternatives:
Use a map
Instead of modelling your Customer class like this:
class Customer {
    @Id
    Long id;
    String name;
    Address orderAddress;
    Address serviceAddress;
    Address deliveryAddress;
    Address installationAddress;
    Address accountAddress;
}
Use a map like this:
class Customer {
    @Id
    Long id;
    String name;
    Map<String, Address> addresses;
}
Which would result in an address table like so:
CREATE TABLE "address" (
  "customer" BIGINT,
  "customer_key" VARCHAR(20) NOT NULL,
  "city" VARCHAR(255) NOT NULL
);
You may control the column names with a @MappedCollection annotation, and you may add transient getters and setters for individual addresses if you want.
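A sketch of that, reusing the column names from the generated table above (the "order" key in the getter is just an example):

import java.util.Map;
import org.springframework.data.annotation.Id;
import org.springframework.data.relational.core.mapping.MappedCollection;

class Customer {
    @Id
    Long id;
    String name;

    // back-reference column and map-key column named explicitly
    @MappedCollection(idColumn = "customer", keyColumn = "customer_key")
    Map<String, Address> addresses;

    // optional transient accessor for one particular address
    Address getOrderAddress() {
        return addresses.get("order");
    }
}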
Make it a true value
You refer to Address as a value while I referred to it as an entity. If it should be considered a value, I think you should map it as embedded, like so:
class Customer {
    @Id
    Long id;
    String name;

    @Embedded(onEmpty = USE_NULL, prefix = "order_")
    Address orderAddress;

    @Embedded(onEmpty = USE_NULL, prefix = "service_")
    Address serviceAddress;

    @Embedded(onEmpty = USE_NULL, prefix = "delivery_")
    Address deliveryAddress;

    @Embedded(onEmpty = USE_NULL, prefix = "installation_")
    Address installationAddress;

    @Embedded(onEmpty = USE_NULL, prefix = "account_")
    Address accountAddress;
}
This would make the address table superfluous since the data would be folded into the customer table:
CREATE TABLE "customer" (
"id" BIGSERIAL NOT NULL,
"name" VARCHAR(255) NOT NULL,
"order_city" VARCHAR(255) NOT NULL,
"service_city" VARCHAR(255) NOT NULL,
"deliver_city" VARCHAR(255) NOT NULL,
"installation_city" VARCHAR(255) NOT NULL,
"account_city" VARCHAR(255) NOT NULL,
PRIMARY KEY (id)
);
Or is it an aggregate?
But maybe you need addresses on their own, not as part of a customer.
If that is the case an address is its own aggregate.
And references between aggregates should be modelled as ids or AggregateReference. This is described in more detail in "Spring Data JDBC, References, and Aggregates".
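A sketch of that last variant, assuming Address is its own aggregate with its own id and repository:

import org.springframework.data.annotation.Id;
import org.springframework.data.jdbc.core.mapping.AggregateReference;

class Customer {
    @Id
    Long id;
    String name;

    // stored as a plain id column; resolved explicitly via the Address repository
    AggregateReference<Address, Long> orderAddress;
}

// usage: customer.orderAddress = AggregateReference.to(savedAddress.id);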
I have a problem understanding how to represent an SQL table in my Java web app and how to handle the various types of relationships.
Let's suppose to have a department table:
CREATE TABLE IF NOT EXISTS `department` (
  `name` varchar(15) NOT NULL,
  `number` int(11) NOT NULL,
  PRIMARY KEY (`number`)
) ENGINE=InnoDB;
A department has a lot of employees and one employee can work for only one department, so there is a 1:N relationship between department and employees. This could be the employees table:
CREATE TABLE IF NOT EXISTS `employee` (
  `first_name` varchar(15) NOT NULL,
  `last_name` varchar(15) NOT NULL,
  `SSN` char(9) NOT NULL,
  `address` varchar(30) DEFAULT NULL,
  `gender` char(1) DEFAULT NULL,
  `salary` decimal(10,0) DEFAULT NULL,
  `N_D` int(11) NOT NULL,
  PRIMARY KEY (`SSN`),
  FOREIGN KEY (`N_D`) REFERENCES `department` (`number`)
) ENGINE=InnoDB;
No problem so far, but let's get to Java now. My application will have DAOs to read the selected elements from the database, maybe two classes called DepartmentDAO and EmployeeDAO. My question is: how do I correctly represent the department and employee entities? How should I handle the relationship?
I've seen people use an array for the 1:N side and a single object for the opposite case:
public class Department {
    private String name;
    private Long number;
    /* 1:N */
    private Employee[] employees;
    /* getters and setters */
}
The employee class:
public class Employee {
    private String firstName;
    private String lastName;
    private String SSN;
    private String address;
    private String gender;
    private int salary;
    /* N:1 */
    private Department department;
    /* getters and setters */
}
It seems okay, but how do I read nested objects? In the employee case I could use a join in my query to read both objects, but in the other case I've got an array.
I want, for example, to display the number of employees of a certain department. When I read the department object I would also have to read N employee objects and then get the number via employees.length. Isn't that a bit too expensive? Would it be wrong to put another attribute in my department class (private int numberOfEmployees;) and fill it using COUNT in SQL?
Thanks in advance for the help.
If I got it right, you want to count the employees for each department, correct?
If so, I recommend using the DAO pattern.
In addition to the DAO pattern, you can use a key/value collection such as a Map, for example a HashMap.
The Map should contain the specific department as key and its employees as value. Besides, you have to synchronize the database and the Java objects to add pairs to the map.
Summarized, you need to create a data management class which contains a Map with a key (department) and a value (employees).
Besides, it must be able to do transactions with the database to synchronize the data.
Therefore, you can use the DAO pattern and a HashMap; references below.
I do not know if this is best practice, but it will work.
Dao Pattern
HashMap
EDIT: As I read your question again, the HashMap should be in your Department class, but you can still use the DAO pattern for synchronization and data management.
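A minimal sketch of such a DAO method, using plain JDBC and the COUNT approach mentioned in the question (the DataSource wiring is assumed; table and column names follow the schema above):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class DepartmentDAO {

    private final DataSource dataSource; // assumed to be configured elsewhere

    public DepartmentDAO(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Counts the employees of one department without loading Employee objects.
    public int countEmployees(int departmentNumber) throws SQLException {
        String sql = "SELECT COUNT(*) FROM employee WHERE N_D = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, departmentNumber);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        }
    }
}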
I am new to Hibernate. I have a OneToMany relationship with bidirectional mapping between Account and Transaction. I am not using @JoinColumn in either class, and I am using mappedBy in the non-owning Account class. Everything is working fine: with the H2 in-memory database, a new join column is created in the Transaction table. So what is the use of @JoinColumn in OneToMany relationships? Is it for unidirectional mapping only? Below is the code. For reference I also read JPA JoinColumn vs mappedBy.
public class Account {
    @OneToMany(mappedBy = "account", cascade = CascadeType.ALL)
    List<Transaction> list = new ArrayList<Transaction>();
}

public class Transaction {
    @ManyToOne
    Account account;
}
Application class:
Account a = new Account("savings");
Transaction t1 = new Transaction("shoe purchase", 45);
t1.setAccount(a);
a.getList().add(t1);
accountRepository.save(a);
Output:
The Transaction table has an entry whose foreign key is the account number of the corresponding row in the Account table. An ACCOUNT_ID column is created in the Transaction table.
No extra tables are created.
JPA works on the idea of configuration by convention: it performs configuration on your behalf whenever it can. Think of the @Column annotation: you don't have to apply it to every entity attribute; you only need it when you want to change something about an attribute's mapping.
It's the same with @JoinColumn: when you add @ManyToOne, JPA already knows that you will need a join column, so it is added for you, and the default naming convention for the foreign key is applied (attributeName_primaryKeyOfTheOtherType).
Use of mappedBy instructs the framework to enable a bidirectional relationship. Because of @ManyToOne on the Transaction class, your Transaction table will have a foreign key referring to the Account table's primary key. By default, Hibernate generates the name of the foreign key column based on the name of the relationship-mapping attribute and the name of the primary key attribute. In this example, Hibernate would use a column named account_id to store the foreign key to the Account entity.
@JoinColumn can be used if you would like to override the default foreign key name, e.g. @JoinColumn(name = "acc_id").
mappedBy instructs Hibernate that the key used for the association is on the other side of the association; in this case the ACCOUNT_ID column is created in the Transaction table.
That means that although you associate two tables with each other, only one of them has a foreign key constraint to the other one.
mappedBy still lets you navigate the association from the table that does not have the foreign key constraint to the other table.
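For illustration, overriding the default column name on the owning side of the code above would look like this:

public class Transaction {

    // Owning side: the foreign key column lives in the Transaction table.
    // Without @JoinColumn it would default to ACCOUNT_ID.
    @ManyToOne
    @JoinColumn(name = "acc_id")
    Account account;
}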
How can I model in Hibernate a table whose primary key is also a foreign key, as in this schema:
CREATE TABLE policies (
  policy_id int,
  date_issued datetime,
  -- // other common attributes ...
);

CREATE TABLE policy_motor (
  policy_id int,
  vehicle_reg_no varchar(20),
  -- // other attributes specific to motor insurance ...
  FOREIGN KEY (policy_id) REFERENCES policies (policy_id)
);

CREATE TABLE policy_property (
  policy_id int,
  property_address varchar(20),
  -- // other attributes specific to property insurance ...
  FOREIGN KEY (policy_id) REFERENCES policies (policy_id)
);
Is this possible with a Hibernate mapping, or is it best to separate them and use a new PK?
Also, is it okay if I use auto-increment on the main table?
Sure you can. This is useful for one-to-one associations where you can have:
@Id
@Column(name = "policy_id")
private Long policyId;

@MapsId
@OneToOne
@JoinColumn(name = "policy_id")
private Policy policy;
You can use AUTO-INCREMENT on the main table, which means you need to use an IDENTITY generator in JPA/Hibernate.
The @MapsId annotation allows you to share the same SQL column with the entity identifier property. It instructs Hibernate to use this column for resolving the @OneToOne or @ManyToOne association. Changes should always be propagated through the @Id property, rather than the to-one side (which is ignored during inserts/updates).
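Putting it together, a hedged sketch of the parent and one child mapping (entity and column names follow the schema above; the IDENTITY generator reflects the AUTO_INCREMENT column; the Long id type is an assumption):

import javax.persistence.*;

@Entity
@Table(name = "policies")
public class Policy {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // AUTO_INCREMENT on the main table
    @Column(name = "policy_id")
    private Long id;

    // date_issued and other common attributes ...
}

@Entity
@Table(name = "policy_motor")
public class PolicyMotor {

    @Id
    @Column(name = "policy_id")
    private Long policyId;

    // shares its primary key with the referenced Policy
    @MapsId
    @OneToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "policy_id")
    private Policy policy;

    @Column(name = "vehicle_reg_no")
    private String vehicleRegNo;
}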