This may seem a stupid question to some people, but I couldn't find information anywhere on why we should use mappings (@OneToOne, @OneToMany, etc.) in JPA when defining entity classes. I know one advantage is code reduction: we don't have to explicitly write queries to fetch data from related tables. But is there any other benefit (from an SQL optimisation perspective)?
The reason is to load object trees or graphs. That's the goal of object-relational mapping: to bridge the gap between database tables and objects. And, as you said, it reduces code.
In summary, the idea of an ORM is to map tables to objects, so the developer works with objects instead of tables.
Tables in SQL are related through key columns; in JPA these relationships are expressed via @OneToMany, @OneToOne, etc.
This means that if you want to fetch a row from one table and join it with the corresponding row from another table (via a relationship), the JPA implementation (most often Hibernate) can do that for you by looking at the relationships you have defined on your entities.
You need to describe the entities' relationships because they are part of the DB schema you are mapping to application-level objects. As you mention, it saves you writing the SQL queries yourself, but that's not the main point.
The main point is that you can model/represent one domain (database tables, rows, relationships, SQL commands) as another type of domain (objects/classes, OOP paradigm, programming language commands) which completely shifts the way you work with it.
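As a sketch of what this looks like in practice (the entity and column names here are made up for illustration, not taken from the question), a one-to-many relationship between two JPA entities might be declared like this:

```java
import java.util.ArrayList;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

// Hypothetical entities used only to illustrate the annotations.
@Entity
class Department {
    @Id @GeneratedValue
    Long id;

    // One department row relates to many employee rows via the
    // employee.department_id foreign key; the provider derives the
    // join from this declaration.
    @OneToMany(mappedBy = "department")
    List<Employee> employees = new ArrayList<>();
}

@Entity
class Employee {
    @Id @GeneratedValue
    Long id;

    // Owning side: maps the department_id foreign key column.
    @ManyToOne
    @JoinColumn(name = "department_id")
    Department department;
}
```

With this in place, loading a Department and touching its employees collection makes the provider issue the corresponding join/select for you, instead of you writing it by hand.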
Related
I have two tables, table_1 and table_2, and on each table I have to perform some insert, delete, and update operations.
Can anyone please let me know whether I should create two different DAO (Data Access Object) implementations or only one? And what are the advantages and disadvantages of each approach?
If rows can be inserted/updated/deleted independently in both tables, then yes, you should go ahead with separate DAO classes. Below are the advantages:
It promotes the separation-of-concerns principle.
Spring Data JPA also uses the same design: it works with one repository per entity (table, in our case).
If you have any functionality that requires querying both table_1 and table_2, it should ideally go into the service layer and call both DAOs. Also, if you have any foreign-key relationships between these tables, you can map them using the @OneToMany, @ManyToMany, etc. annotations.
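A minimal sketch of the one-DAO-per-table idea (all names here are hypothetical, and the in-memory maps stand in for the tables; a real DAO would delegate to an EntityManager or a Spring Data repository):

```java
import java.util.HashMap;
import java.util.Map;

// One DAO per table, so each table's persistence logic has a single home.
class Table1Dao {
    private final Map<Long, String> rows = new HashMap<>(); // stand-in for table_1
    void insert(long id, String v) { rows.put(id, v); }
    String find(long id) { return rows.get(id); }
    void delete(long id) { rows.remove(id); }
}

class Table2Dao {
    private final Map<Long, String> rows = new HashMap<>(); // stand-in for table_2
    void insert(long id, String v) { rows.put(id, v); }
    String find(long id) { return rows.get(id); }
}

// Logic touching both tables lives in the service layer, which
// composes the two DAOs rather than merging them into one class.
class ReportService {
    private final Table1Dao dao1;
    private final Table2Dao dao2;
    ReportService(Table1Dao d1, Table2Dao d2) { this.dao1 = d1; this.dao2 = d2; }

    String combined(long id) {
        return dao1.find(id) + "/" + dao2.find(id);
    }
}
```

The point of the split is that either DAO can change (or be tested) without touching the other; only the service knows that both tables participate in one use case.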
I am developing a Java EE application and I use JPA/Hibernate as the persistence engine. While developing the application, some questions came to mind.
The application consists of users and their roles in an N:M relationship. Here is the database subset of the above tables.
I am relatively new to Hibernate and at first I asked IntelliJ IDEA to generate the mapping for me. What it did was generate the following Java classes:
UserEntity.java
RoleEntity.java
UserXRoleEntity.java
UserXRoleEntityPK.java
Thus, it generated a mapping for the relation table and two 1:N relationships, one between user and userXrole and one between role and userXrole.
After some research, I found that, by using the @ManyToMany annotation, I could omit mapping the userXrole table to a Java class and just declare it within the annotation as the @JoinTable.
So the question is:
Why does IntelliJ generate the entities that way?
Is it just a more generic way that helps generation, or does it have other advantages? Would you argue in favour of one way or the other?
Is it just a more generic way that helps generation, or does it have other advantages?
JPA doesn't know whether a table is just a join table; that's why you have to tell it (using @JoinTable). The generator might guess, but it will probably only generate @ManyToMany if your table names match JPA's defaults.
Would you argue in favour of one way or the other?
I'd use @ManyToMany unless I had a reason (finer-grained control over lazy/eager fetching, maybe?) for separate mapping entities, mostly because less code = fewer errors.
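For this user/role case, the @ManyToMany variant would look roughly like this (a sketch; the join-table and column names are guesses that would need to match the actual schema):

```java
import java.util.HashSet;
import java.util.Set;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.JoinTable;
import javax.persistence.ManyToMany;

@Entity
class UserEntity {
    @Id @GeneratedValue
    Long id;

    // The userXrole join table is declared inline instead of being
    // mapped as its own entity class with a composite key.
    @ManyToMany
    @JoinTable(name = "userXrole",
               joinColumns = @JoinColumn(name = "user_id"),
               inverseJoinColumns = @JoinColumn(name = "role_id"))
    Set<RoleEntity> roles = new HashSet<>();
}

@Entity
class RoleEntity {
    @Id @GeneratedValue
    Long id;

    // Inverse side: navigable from roles back to users, but the
    // join table itself is owned by UserEntity.roles.
    @ManyToMany(mappedBy = "roles")
    Set<UserEntity> users = new HashSet<>();
}
```

Compared with the generated UserXRoleEntity/UserXRoleEntityPK pair, this removes two classes, at the cost of losing a place to hang extra columns on the join table later.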
In my Java application I have some serialized entity classes with inheritance. When saving instances of these classes, I convert them to a byte array and save it to a longblob column in my database table. Is there any advantage to using Hibernate for this? As far as I understand, Hibernate is used to map entities to database tables, but here I don't have a relational model to map the entities' attributes to; I am saving them as opaque objects. Am I missing something? Please clarify. Thanks in advance.
If you don't have a relational data model to save those objects and you can't change your schema, then you can use your current approach.
If you use PostgreSQL, you might be interested in JSON storage as well. That way you can store your hierarchies as JSON objects and even run native SQL queries against them (the queries are not inheritance-aware, but you can cope with that by using some _class column to distinguish between object types).
The cleanest approach is to have the relation model in sync with your business domain model. That way you can benefit from:
optimistic locking (preventing the lost-update phenomenon)
caching (2nd level cache and query cache)
query-able hierarchies
an external DBA could run an update on your hierarchies using plain SQL
auditing
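To illustrate the first point in the list above, optimistic locking only requires a version attribute on the mapped entity (a minimal sketch; the class and fields are hypothetical):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
class StoredDocument {
    @Id @GeneratedValue
    Long id;

    String content;

    // The provider increments this on every update and appends
    // "where version = ?" to the UPDATE statement; a stale concurrent
    // write then fails with an OptimisticLockException instead of
    // silently overwriting the newer data.
    @Version
    int version;
}
```

None of this is available when the object graph is an opaque blob, because the database only ever sees one byte-array column.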
There's an enterprise application using Java + Hibernate + PostgreSQL. Hibernate is configured via annotations in the Java source code. So far the database schema has been fixed, but I am now facing the problem that it needs to be dynamic: I can receive data from different locations and I have to store it in different tables. This means that I have to create tables at run time.
Fortunately, it seems that all of these data coming from the different institutes can have the same schema. But I still don't know how to do that using Hibernate. There are two main problems:
How do I tell Hibernate that many different tables have the same structure? For example, the "Patient" class can be mapped not just to the "patient" table, but also to the "patient_mayo_clinic" table, the "patient_northwestern" table, etc. I can feel that this causes ambiguity: how does Hibernate know which table to access when I perform operations on the Patient class? It can be any (but only one) of the tables listed above.
How can I dynamically create tables with Hibernate and bind a class to them?
Response to suggestions:
Thanks for all of the suggestions. So far all of the answers have discouraged the dynamic creation of tables. I'll mark Axel's answer, since it achieves the essential goals and is a supported solution. More specifically, it's called multi-tenancy. Sometimes it's important to know the phrases that describe our problem (or part of it).
Here are some links about multi-tenancy:
Multi-tenancy in Hibernate
Hibernate Chapter 16. Multi-tenancy
Multi-tenancy Design
EclipseLink JPA multi-tenancy
In a real-world scenario, multi-tenancy also involves isolating the sets of data from each other (including in terms of access and authorization with different credentials) once they are stored in one table.
You can't do this with Hibernate.
Why not extend your patient table with an institute column?
This way you'll be able to differentiate, without running into mapping issues.
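Sketched as an entity (names are illustrative), the single patient table gains an institute column and queries filter on it:

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "patient")
class Patient {
    @Id @GeneratedValue
    Long id;

    // Identifies the data source, e.g. "mayo_clinic" or
    // "northwestern", replacing the table-per-institute idea.
    @Column(name = "institute", nullable = false)
    String institute;
}

// A typical lookup then filters on the new column:
//   em.createQuery("select p from Patient p where p.institute = :inst",
//                  Patient.class)
//     .setParameter("inst", "mayo_clinic")
//     .getResultList();
```

One static mapping then covers every institute, and no tables need to be created at run time.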
I am afraid you can't do this easily in Hibernate. You would have to generate the Java source, compile it, add it to your classpath, and load it dynamically via the java.lang.reflect package. If that works at all, which I doubt, it will be an ugly solution (IMHO).
Have you considered using a schema-less database, i.e. a NoSQL or RDF database? They are much more flexible in terms of what you can store in them; basically, things are not tied to a relational schema.
In most environments it is not a good idea to create tables dynamically, simply because DBAs will not give you the rights to create tables in production.
Axel's answer may be right for you. Also look into Inheritance Mapping for Hibernate.
I agree that it's not advisable to create tables dynamically; nevertheless, it's doable.
Personally I would do as Axel Fontaine proposed, but if dynamic tables are a must-have for you, I would consider using partitioning.
PostgreSQL allows you to create one main table and a few child tables (partitions). Records are disjoint between the child tables, but every record from any child table is visible in the parent table. This means that you can insert rows into whichever child table you want using a simple insert statement (it's not elegant, but it has the same level of complexity as composing and persisting an entity, so it's acceptable in your case) and query the database using HQL.
I have a query that joins 5 tables.
Then I fill my hand-made object with the column values that I need.
What are the widely used solutions to this problem, and are there specific tools for it?
I'm only beginning to learn Hibernate, so my question would be: is Hibernate the right decision for this problem?
Hibernate maps a table to a class, so there's no difference if I have 5 classes instead of 5 tables; it would still be difficult to join the query result into a single class.
Could Hibernate be used to map THE QUERY onto a structure (class) I define beforehand, as we do with table mapping? Or even better, can it map the query result onto meaningful fields (auto-creating the class with its fields) as it does with reverse engineering?
I've been thinking about views, but creating a new view every time we need a complex query is too verbose.
As S.Lott asked, here is a simple version of a question:
General problem:
select A.field_a, B.field_b, C.field_c
from table_a A inner join table_b B inner join table_c C
where ...
every table contains 100 fields
the query returns 3 fields, each belonging to a different table
How do I solve that problem in an OO style?
Design a new object with properties corresponding to the returning values of the query.
I want to know whether it is the right (and the only possible) decision, and whether there are any common solutions.
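Designing a small read-only class for the result is indeed a common solution; JPQL even supports filling it directly via a constructor expression ("select new"). A sketch, with hypothetical names:

```java
// Plain DTO holding only the three columns the query returns.
class AbcView {
    private final String fieldA;
    private final String fieldB;
    private final String fieldC;

    AbcView(String fieldA, String fieldB, String fieldC) {
        this.fieldA = fieldA;
        this.fieldB = fieldB;
        this.fieldC = fieldC;
    }

    String getFieldA() { return fieldA; }
    String getFieldB() { return fieldB; }
    String getFieldC() { return fieldC; }
}

// Assuming mapped entities TableA/TableB/TableC with associations,
// a JPQL constructor expression would populate it in one query
// (in real JPQL the class name must be fully qualified):
//   select new AbcView(a.fieldA, b.fieldB, c.fieldC)
//   from TableA a join a.b b join b.c c
//   where ...
```

The DTO stays outside the entity model: it is never persisted, so inserts and updates still go through the mapped entities.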
See also my comments.
The point of ORM is to Map Objects to Relations.
The point of ORM is -- explicitly -- not to sweat the details of a specific SQL join.
One fundamental guideline to understanding ORM is this.
SQL joins are a hack because SQL doesn't have proper navigation.
To do ORM design, we intentionally set the SQL join considerations aside as (largely) irrelevant. Give up the old ways. It's okay, really. The SQL crutches aren't supporting us very well.
Step 1. Define the domain of discourse. The real-world objects.
Step 2. Define implementation classes that are a high-fidelity model of real-world things.
Step 3. Map the objects to relations. Here's where the hack-arounds start. SQL doesn't have a variety of collections -- it only has tables. SQL doesn't have subclasses, it only has tables. So you have to design a "good-enough" mapping between object classes and tables. Ideally, this is one-to-one. But in reality, it doesn't work out that way. Often you will have some denormalization to handle class hierarchies. Other than that, it should work out reasonably well.
Yes you have to add many-to-many association tables that have no object mapping.
Step 4. You're done. Write your application.
"But what about my query that joins 5 (or 3) tables but only takes one attribute from each table?"
What about it? One of those tables is the real object you're dealing with. The other tables among the 5 (or 3) are either parts of 1-m nested collections, m-1 containers, or m-m associations. That's just navigation among objects.
A 1-m nested collection is the kind of thing that SQL treats as a "result set". In ORM it will become a proper object collection.
An m-1 container is the typical FK relationship. In ORM it's just a fetch of a related object through ordinary object navigation.
A m-m association is also an object collection. It's a strange collection because two objects are members of each other's collections, but it's just an object collection.
At no time do you design an object that matches a query. You design an object that matches the real world, and map that to the database.
"What about performance?" It's only a problem when you subvert the ORM's simple mapping rules. Once in a blue moon you have to create a special-purpose view to handle really big batch-oriented joins among objects. But this is really rare. Often, rethinking your Java program's navigation patterns will improve performance.
Remember, ORMs cache your results. Each "navigation" may not be a complete round trip to the database. Some queries may be batched by the ORM for you.
There are a few options:
Create a single table mapping using <join> elements for the related tables. A join in that way will allow other tables to contribute properties to your class.
Use a database view as previously suggested.
Use a Hibernate mapping view: instead of <class name=... table=...> you can use <class name=... select="select A.field_a, B.field_b, ... from A, B, ...">. It essentially creates a view on the Hibernate side, so the database doesn't have to change. The generated SQL ends up looking like "select * from (select A.field_a, B.field_b from A, B, ...)". I know that works in Oracle, DB2, and MySQL.
All that is fine for selecting; if you need to do insert/update, you'll probably need to rethink your data model or your object model.
I think you could use the Criteria API in Hibernate to map the results of your join into your target class.
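With the classic Hibernate Criteria API, that is typically done with a projection plus an alias-to-bean result transformer. This is only a sketch: TargetDto, TableA, and the property names are placeholders for your own mapped types, and TargetDto would need setters matching the aliases:

```java
import java.util.List;
import org.hibernate.Session;
import org.hibernate.criterion.Projections;
import org.hibernate.transform.Transformers;

class JoinQueryMapper {
    // Selects two columns across a mapped association and lets
    // Hibernate populate TargetDto instances via their setters.
    @SuppressWarnings("unchecked")
    static List<TargetDto> load(Session session) {
        return session.createCriteria(TableA.class, "a")
                .createAlias("a.b", "b") // navigate the mapped association
                .setProjection(Projections.projectionList()
                        .add(Projections.property("a.fieldA"), "fieldA")
                        .add(Projections.property("b.fieldB"), "fieldB"))
                .setResultTransformer(Transformers.aliasToBean(TargetDto.class))
                .list();
    }
}
```

The alias names passed to Projections.property(...) must match the DTO's bean property names, since the transformer matches them by name.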