In my hypothetical I have an annotated User model class. This User model also holds references to two sets:
A set of Pet objects (a Pet object is also an annotated model represented in the data layer)
A set of Food objects (a Food object is also an annotated model represented in the data layer)
When I pull the User entity from the database (entityManager.find(User.class, id)) it will automatically fill all the User fields, but it obviously won't fill the two sets.
Do I need to do entityManager.createQuery and just use a normal SQL join query then manually create the User object?
Thanks in advance
If you map your relations from User to Pet and Food using OneToMany, you can choose whether to have the fields automatically collected or not.
See the API doc for javax.persistence OneToMany.
Depending on how you constructed the mapping (PK-FK or join tables, etc.), you may or may not get good performance with this. Having two OneToMany relations that are fetched with joins means you may end up with a ridiculous number of rows when you read your user.
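A minimal sketch of such a mapping, assuming hypothetical User, Pet and Food entities where Pet and Food each carry an owner field pointing back at User (all names are illustrative):

import java.util.HashSet;
import java.util.Set;
import javax.persistence.*;

@Entity
public class User {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // LAZY is the default for collections: the sets are only loaded when accessed.
    // Switch to FetchType.EAGER to have them collected together with the User.
    @OneToMany(mappedBy = "owner", fetch = FetchType.LAZY)
    private Set<Pet> pets = new HashSet<>();

    @OneToMany(mappedBy = "owner", fetch = FetchType.LAZY)
    private Set<Food> foods = new HashSet<>();

    // getters and setters omitted
}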
Mmm, no? That's probably not how you want to do it. I don't know why you say "it obviously won't fill the two sets." It's quite capable of filling in the sets for you; that's sort of the point behind using an ORM like Hibernate in the first place. Your objects do what they look like they should in code, and 'databasey' things are handled automatically as much as possible.
It is true that Hibernate will complain if you mark more than one collection as EAGER fetched, but it's not really clear you actually need either of them to be eager. Essentially, once they are mapped, just accessing them causes the queries to be run to fill them in with data (assuming the Session is still open, and so forth). If you explain how you want it to work, it would be easier to help with a solution.
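For example, with a lazy mapping, simply touching a collection while the persistence context is still open triggers the query for you; a small sketch, assuming a hypothetical userId and a getPets() accessor on User:

EntityManager em = entityManagerFactory.createEntityManager();
em.getTransaction().begin();

User user = em.find(User.class, userId);   // loads only the User row
Set<Pet> pets = user.getPets();            // still an uninitialized lazy collection
int petCount = pets.size();                // touching it here fires the query for the pets

em.getTransaction().commit();
em.close();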
Related
I have a dozen tables like Product, Category, Customer, Order, ...
with not-null relationships to one another. I created the ORM mapping and right now I am in the middle of testing.
However, I find it pretty tedious, because in order to test, for example, the Order entity, which has to belong to a Customer (namely, to perform a persist operation), I have to create a Customer instance as well. Let's go further: because an Order cannot exist separately from a Product, I have to create a Product and add it to the Order. The Product has to be in some Category, and so on. So you can see a chain of mandatory relationships, which makes testing an individual entity very difficult.
A natural solution would be to defer the constraint checks until commit (Oracle):
alter session set constraints=deferred;
However, I found information saying that Hibernate doesn't support deferred constraints.
Does that mean persistence testing has to be this problematic, or can I do it better/differently?
I believe that DB constraints are sacred, so giving them up because Hibernate does not support deferring them sounds bad.
You can create factories for test purposes. Their job is to create a graph of objects that are valid. Such a class can be used in any kind of testing (not only the repositories):
ProductFactory.withType(...).withOtherImportantOption(...).create()
Such a factory could leverage randomization if you wish, but it's not mandatory, though some amount of randomization will have to be introduced for fields with unique constraints.
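A minimal sketch of such a factory, assuming hypothetical Product and Category entities; the builder methods, defaults and setters are illustrative:

import java.util.UUID;

// Hypothetical test factory: builds a valid object graph with sensible defaults.
public class ProductFactory {

    private String type = "BOOK";
    private Category category = new Category("default-category");

    public static ProductFactory withType(String type) {
        ProductFactory factory = new ProductFactory();
        factory.type = type;
        return factory;
    }

    public ProductFactory withCategory(Category category) {
        this.category = category;
        return this;
    }

    public Product create() {
        Product product = new Product();
        // random suffix keeps unique constraints (e.g. on name) satisfied across tests
        product.setName("product-" + UUID.randomUUID());
        product.setType(type);
        product.setCategory(category);
        return product;
    }
}

A test would then call something like ProductFactory.withType("BOOK").withCategory(books).create() and persist the result.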
PS: not satisfying FKs seems like the wrong path - you'll eventually need tests that persist many objects, maybe even commit transactions. And you may also want to test cascades.
Why not just create your test database prepopulated with a bunch of test data?
You can define test data in a file named import.sql, or in a script specified via javax.persistence.sql-load-script-source, and it will be loaded automatically by the schema export tool.
UPDATE: To be clear, you only need a couple of rows of test data in each table, that you'll use in order to set up valid references from the objects you're testing.
For example, if you want to test persisting a Book, you would write:
Book b = new Book();
b.setTitle("Feersum Endjin");
b.setAuthor( em.getReference(Author.class, AUTHOR_ID) );
em.persist(b);
Where AUTHOR_ID is the id of a row of test data.
This is a lot better, IMO, than writing tests that work with objects in an inconsistent state (i.e. with null attributes) and test data that violates the database constraints.
If you're using JPA, then your repositories are tested automatically. If you really need to, then have a look here.
Hope this answers your question.
I am starting to use JPA and I always get confused by the term "entity" and its usage. I have read a lot but I still don't quite get it.
I read the Oracle documentation but it does not really explain an entity's role in a transaction.
What are JPA entities? Do they actually hold the data for each row, I mean, are they stored instances that hold the row data? Or do they just map the tables of the DB and then insert into and delete from them?
For example, if I use this:
entity.setUserName("michel");
Then persist it, then change the user name, and persist it again (i.e. merge it).
Does this change the previously entered user name, or does it create a new row in the DB?
An Entity is roughly the same thing as an instance of a class when you are thinking from a code perspective or a row in a table (basically) when you are thinking from a database perspective.
So, it's essentially a persisted / persistable instance of a class. Changing values on it works just like changing values on any other class instance. The difference is that you can persist those changes and, in general, the current state of the class instance (entity) will overwrite the values that the row for that instance (entity) had in the database, based on the primary key in the database matching the "id" or similar field in the class instance (entity).
There are exceptions to this behavior, of course, but this is true in general.
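A minimal sketch of the scenario from the question, assuming a hypothetical User entity with a generated id and a userName field:

EntityManager em = entityManagerFactory.createEntityManager();

em.getTransaction().begin();
User entity = new User();
entity.setUserName("michel");
em.persist(entity);                 // INSERT: a new row, the id is assigned here
em.getTransaction().commit();

em.getTransaction().begin();
entity.setUserName("michael");
em.merge(entity);                   // UPDATE: the same row, matched by id; no new row is created
em.getTransaction().commit();

em.close();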
It's a model. It's a domain object that can be persisted. Don't over think it. Akin to a Rails model. And remember, models (in this paradigm) are mutable!
I have an odd business requirement.
We have multiple, unrelated entity types that will need to be displayed in a unified list, with some basic information from the entity, sorted by the only field they are all guaranteed to have, DATE. These entities may or may not even be in the same database. The result set needs to be pageable.
Is there any feasible way of achieving this through Criteria, HQL or some sane means?
Normally you would let all these classes extend a common base class and use a polymorphic Hibernate query. From your description this doesn't seem to be feasible.
Of course, if you want to go the Hibernate way, you would have to first fetch the size of each unrelated table, determine which table (or tables) the records of the requested page lie in, and manually fetch the proper page. This is really cumbersome and definitely should be hidden behind some deep DAO.
Looks like the only sane solution is good old SQL with UNION, mapping the native query to your domain objects. Hibernate supports native queries quite well.
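A rough sketch of that approach through the native query support, assuming the tables live in the same database and that two hypothetical tables orders and invoices both expose an id, a label/subject and a created_date column (all names, plus pageNumber and pageSize, are illustrative):

String sql =
    "select id, label, created_date from orders " +
    "union all " +
    "select id, subject, created_date from invoices " +
    "order by created_date desc";

// Paging is applied to the combined, sorted result.
// Native queries return an untyped list, so this cast is unchecked.
List<Object[]> page = entityManager.createNativeQuery(sql)
        .setFirstResult(pageNumber * pageSize)
        .setMaxResults(pageSize)
        .getResultList();

for (Object[] row : page) {
    // row[0] = id, row[1] = label/subject, row[2] = date
}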
I have the following use case: There's a class called Template and with that class I can create instances of the ActualObject class (ActualObject copies its initial data from the Template). The Template class has a list of Products.
Now here comes the tricky part, the user should be able to delete Products from the database but these deletions may not affect the content of a Template. In other words, even if a Product is deleted, the Template should still have access to it. This could be solved by adding a flag "deleted" to the Product. If a Product is deleted, then it may not be searched explicitly from the database, but it can be fetched implicitly (for example via the reference in the Template class).
The idea behind this is that when an ActualObject is created from a template, the user is notified in the user interface that "The Template X had a Product Z with the parameters A, B and C, but this product has been deleted and cannot be added as such in ActualObject Z".
My problem is how I should mark these deleted objects as deleted. Before someone suggests just updating the delete flag instead of doing an actual delete query: my problem is not that simple. The delete flag and its behaviour should exist in all POJOs, not just in Product. This means I'll be getting cascade problems. For example, if I delete a Template, then the Products should also be deleted, and each Product has a reference to a Price object which should also be deleted, and each Price may have a reference to a VAT object, and so forth. All these cascaded objects should be marked as deleted.
My question is how I can accomplish this in a sensible manner. Going through every object that is being deleted, checking each field for references which should also be deleted, going through their references, etc. is quite laborious, and bugs can easily slip in.
I'm using Hibernate, and I was wondering whether Hibernate has any such built-in features. Another idea that came to mind was to use Hibernate interceptors to turn an actual SQL delete query into an update query (I'm not even 100% sure this is possible). My only concern is whether Hibernate relies on cascades in foreign keys, in other words, whether the cascaded deletes are done by the database and not by Hibernate.
My problem is how I should mark these deleted objects as deleted.
I think you have chosen a very complex way to solve the task. It would be easier to introduce a ProductTemplate. Place into this object all the required properties you need, and also a reference to a Product instance. Then, instead of marking the Product, you can just delete it (and delete all other entities, such as prices). And, of course, you should clear the reference in the ProductTemplate. When you create an instance of ActualObject you will be able to notify the user with an appropriate message.
I think you're trying to make things much more complicated than they should be... anyway, what you're trying to do can be done by handling Hibernate events; take a look at Chapter 12 of the Hibernate Reference, where you can choose to use interceptors or the event system.
In any case... well good luck :)
public interface Deletable {
    public void delete();
}
Have all your deletable objects implement this interface. In their implementations, update the deleted flag and have them call their children's delete() method also - which implies that the children must be Deletable too.
Of course, upon implementation you'll have to manually figure which children are Deletable. But this should be straightforward, at least.
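A minimal sketch of what the implementations could look like, assuming hypothetical Template, Product and Price classes with the parent/child relations described in the question:

import java.util.HashSet;
import java.util.Set;

class Template implements Deletable {

    private boolean deleted;
    private Set<Product> products = new HashSet<>();

    @Override
    public void delete() {
        this.deleted = true;          // soft delete: only flip the flag
        for (Product product : products) {
            product.delete();         // cascade to the Deletable children by hand
        }
    }
}

class Product implements Deletable {

    private boolean deleted;
    private Price price;

    @Override
    public void delete() {
        this.deleted = true;
        if (price != null) {
            price.delete();           // Price (and VAT, and so on) must also be Deletable
        }
    }
}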
If I understand what you are asking for: if you add a @OneToMany relationship between the template and the product and select your cascade rules, you will be able to delete all associated products for a given template. In your product class, you can add the "deleted" flag as you suggested. This deleted flag would be leveraged by your service/DAO layer, e.g. you could use a getProducts(boolean includeDeleted) type of method to determine whether the "deleted" records should be included in the result. In this fashion you can control what end users see, but still expose full functionality to internal business users.
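A rough sketch of that service/DAO-level filtering, assuming a hypothetical ProductDao and a boolean "deleted" property mapped on Product:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

public class ProductDao {

    @PersistenceContext
    private EntityManager em;

    public List<Product> getProducts(boolean includeDeleted) {
        // internal business users can ask for everything, end users only see live records
        String jpql = includeDeleted
                ? "select p from Product p"
                : "select p from Product p where p.deleted = false";
        return em.createQuery(jpql, Product.class).getResultList();
    }
}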
The delete flag should be part of the Template class itself. That way, all the objects that you create have a way to be flagged as alive or deleted. The marking of an object as deleted should go higher up, into the base class.
I have a query that joins 5 tables.
Then I fill my hand-made object with the column values that I need.
What are the common solutions to that problem using specific tools? Are there such tools?
I'm only beginning to learn Hibernate, so my question would be: is Hibernate the right decision for this problem?
Hibernate maps a table to a class. So there's no difference if I had 5 classes instead of 5 tables; it would still be difficult to join the query result into a class.
Could Hibernate be used to map THE QUERY into a structure (class) I would define beforehand, as we do with table mapping? Or even better, can it map the query result into meaningful fields [auto-create the class with fields] as it does with reverse engineering?
I've been thinking about views, but creating a new view every time we need a complex query is too verbose.
As S.Lott asked, here is a simplified version of the question:
General problem:
select A.field_a, B.field_b, C.field_c
from table_a A inner join table_b B inner join table_c C
where ...
every table contains 100 fields
the query returns 3 fields, but each field belongs to a different table
How do I solve that problem in an OO style?
Design a new object with properties corresponding to the returning values of the query.
I want to know whether that is the right [and the only possible] decision, and whether there are any common solutions.
See also my comments.
The point of ORM is to Map Objects to Relations.
The point of ORM is -- explicitly -- not to sweat the details of a specific SQL join.
One fundamental guideline to understanding ORM is this.
SQL joins are a hack because SQL doesn't have proper navigation.
To do ORM design, we have to intentionally set the SQL join considerations aside as (largely) irrelevant. Give up the old ways. It's okay, really. The SQL crutches aren't supporting us very well.
Step 1. Define the domain of discourse. The real-world objects.
Step 2. Define implementation classes that are a high-fidelity model of real-world things.
Step 3. Map the objects to relations. Here's where the hack-arounds start. SQL doesn't have a variety of collections -- it only has tables. SQL doesn't have subclasses, it only has tables. So you have to design a "good-enough" mapping between object classes and tables. Ideally, this is one-to-one. But in reality, it doesn't work out that way. Often you will have some denormalization to handle class hierarchies. Other than that, it should work out reasonably well.
Yes you have to add many-to-many association tables that have no object mapping.
Step 4. You're done. Write your application.
"But what about my query that joins 5 (or 3) tables but only takes one attribute from each table?"
What about it? One of those tables is the real object you're dealing with. The others of those 5 (or 3) tables are either part of 1-m nested collections, m-1 containers or m-m associations. That's just navigation among objects.
A 1-m nested collection is the kind of thing that SQL treats as a "result set". In ORM it will become a proper object collection.
An m-1 container is the typical FK relationship. In ORM it's just a fetch of a related object through ordinary object navigation.
An m-m association is also an object collection. It's a strange collection because two objects are members of each other's collections, but it's just an object collection.
At no time do you design an object that matches a query. You design an object that matches the real world, map that to the database.
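A small sketch of what that navigation looks like in code, assuming hypothetical Order, OrderLine, Customer and Product entities with ordinary mapped associations (all names are illustrative):

// Instead of a hand-written multi-table join, load the root object and navigate.
Order order = entityManager.find(Order.class, orderId);

String customerName = order.getCustomer().getName();    // m-1 container: plain FK navigation

for (OrderLine line : order.getLines()) {                // 1-m nested collection
    String productName = line.getProduct().getName();    // another m-1 navigation
    // the ORM issues (and may cache or batch) the queries behind these calls
}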
"What about performance?" It's only a problem when you subvert the ORM's simple mapping rules. Once in a blue moon you have to create a special-purpose view to handle really big batch-oriented joins among objects. But this is really rare. Often, rethinking your Java program's navigation patterns will improve performance.
Remember, ORMs cache your results. Each "navigation" may not require a complete round trip to the database. Some queries may be batched by the ORM for you.
There are a few options:
Create a single table mapping using <join> elements for the related tables. A join in that way will allow other tables to contribute properties to your class.
Use a database view as previously suggested.
Use a Hibernate mapping view - instead of <class name=... table=... you can use <class name=... select="select A.field_a, B.field_b, ... from A, B, ...">. It's essentially creating a view on the Hibernate side so the database doesn't have to change. The generated SQL will end up looking like "select * from (select A.field_a, B.field_b from A, B, ...)". I know that works in Oracle, DB2, and MySQL.
All that is fine for selecting; if you need to do insert/update, you'll probably need to rethink your data model or your object model.
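If you're using annotations rather than hbm.xml, a similar Hibernate-side "view" can be sketched with @Subselect; a minimal, read-only example, assuming the hypothetical tables from the question, an id column on table_a to serve as the identifier, and join conditions and column aliases that are purely illustrative:

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Immutable;
import org.hibernate.annotations.Subselect;

@Entity
@Immutable   // backed by a query, so treat the entity as read-only
@Subselect(
    "select A.id as id, A.field_a as fieldA, B.field_b as fieldB, C.field_c as fieldC " +
    "from table_a A " +
    "join table_b B on B.a_id = A.id " +
    "join table_c C on C.b_id = B.id")
public class AbcSummary {

    @Id
    private Long id;

    private String fieldA;
    private String fieldB;
    private String fieldC;

    // getters omitted
}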
I think you could use the Criteria API in Hibernate to map the results of your join into your target class.
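For instance, the JPA Criteria API (the standardized relative of Hibernate's own Criteria) can project joined columns straight into a small result class via a constructor expression; a sketch, assuming hypothetical entities A and B (with B reachable from A through a mapped association named "b") and a result class AbView with a matching constructor; the types used are from javax.persistence.criteria:

CriteriaBuilder cb = entityManager.getCriteriaBuilder();
CriteriaQuery<AbView> cq = cb.createQuery(AbView.class);

Root<A> a = cq.from(A.class);
Join<A, B> b = a.join("b");                   // join via the mapped association

cq.select(cb.construct(AbView.class,          // effectively: new AbView(fieldA, fieldB)
        a.get("fieldA"),
        b.get("fieldB")));

List<AbView> result = entityManager.createQuery(cq).getResultList();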