I have a database with 3 tables. The main table is Contract, and it is joined with pairs of keys from two tables: Languages and Regions.
Each pair is unique, but it is possible for one contract to have the following pair ids:
{ (1,1), (1,2), (2,1), (2,2) }
Today, the three tables are linked via a connecting entity called ContractLanguages. It contains a sequence id, and triplets of ids from the three tables.
However, in large enough contracts this causes a serious performance issue, as Hibernate creates a staggering number of objects.
Therefore, we would like to remove this connecting entity, so that Contract will hold some collection of these pairs.
Our proposed solution: create an @Embeddable class containing the Language and Region ids, and store a collection of them in the Contract entity.
The idea behind this is that there is a relatively small number of languages and regions.
We are assuming that Hibernate manages a list of such pairs and does not create duplicates, thereby substantially reducing the number of objects created.
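Whatever mapping is chosen, an embeddable-pair approach only behaves sensibly if the pair is a proper value object with equals/hashCode based on both ids. A minimal sketch of such a pair as a plain Java class (JPA annotations omitted so the snippet stays self-contained; in a real mapping the class would carry @Embeddable and the Contract field @ElementCollection):

```java
import java.util.Objects;

// Value object for a (language, region) pair. Correct equals/hashCode give
// it value semantics, so a Set<LanguageRegionPair> on Contract cannot hold
// duplicate pairs. Field and class names here are illustrative.
final class LanguageRegionPair {
    final long languageId;
    final long regionId;

    LanguageRegionPair(long languageId, long regionId) {
        this.languageId = languageId;
        this.regionId = regionId;
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof LanguageRegionPair)) return false;
        LanguageRegionPair p = (LanguageRegionPair) o;
        return languageId == p.languageId && regionId == p.regionId;
    }

    @Override public int hashCode() {
        return Objects.hash(languageId, regionId);
    }
}
```

Note that this only guarantees no duplicates within one contract's collection; whether Hibernate shares instances across contracts is a separate question.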
However, we have the following questions:
Will this solution work? Will Hibernate know to create the correct objects?
Assuming the solution works (the link is created correctly), will Hibernate optimize object creation and stop creating duplicate objects?
If this solution does not work, how do we solve the problem mentioned above without a connecting entity?
From your post and comments I assume the following situation, please correct me if I'm wrong:
You have a limited set of Languages + Regions combinations (currently modelled as ContractLanguages entities)
You have a huge amount of Contract entities
Each contract can reference multiple Languages and Regions
You have problems loading all the contract languages because currently the combination consists of contract + language + region
Based on those assumptions, several possible optimizations come to my mind:
You could create a LanguageRegion entity which has a unique id and each contract references a set of those. That way you'd get one more table but Hibernate would just create one entity per LanguageRegion and load it once per session, even if multiple contracts would reference it. For that to work correctly you should employ lazy loading and maybe load those LanguageRegion entities into the first level cache before loading the contracts.
Alternatively, you could load just the columns that are needed, i.e. parts of an entity. You'd employ lazy loading as well, but instead of accessing the contract languages directly, you'd load them in a separate query, e.g. (names are guessed):
SELECT c.id, lang.id, lang.name, region.id, region.name FROM Contract c
JOIN c.contractLanguages cl
JOIN cl.language lang
JOIN cl.region region
WHERE c.id in (:contractIds)
Then you load the contracts, get their ids, and load the language and region details using that query (it returns a List<Object[]>, with each object array containing the column values as selected). You put those into an appropriate data structure and access them as needed. That way you bypass entity creation and get just the data that is needed.
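The "appropriate data structure" step could be sketched like this (the column order follows the guessed query above, and all names are assumptions):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Groups the raw Object[] rows returned by the query by contract id.
// Each row is assumed to be { contractId, langId, langName, regionId, regionName }.
class ContractLanguageRows {
    static Map<Long, List<Object[]>> groupByContract(List<Object[]> rows) {
        Map<Long, List<Object[]>> byContract = new LinkedHashMap<>();
        for (Object[] row : rows) {
            Long contractId = (Long) row[0];
            byContract.computeIfAbsent(contractId, id -> new ArrayList<>()).add(row);
        }
        return byContract;
    }
}
```

Lookups by contract id are then a plain map access, with no entity instantiation involved.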
Related
Supposed I have the following class:
class Example {
    List<ObjectA> objectsOfA;
    List<ObjectB> objectsOfB;
    List<ObjectC> objectsOfC;
    ...
}
I would usually have these tables:
Example (Id, other-attributes)
ObjectA (some attributes, ExampleId)
ObjectB (some attributes, ExampleId)
ObjectC (some attributes, ExampleId)
If I want to restore an Example-object, I imagine I have two options:
join every table together, resulting in a lot of entries and organizing it in hashmaps to reassemble the objects
loading every Example and for every example doing single requests for the lists of ObjectA, ObjectB, ObjectC.
If the number of Example entries is low, option 2 might be the best. But for every single entry in Example, I need to do x more requests, where x is the number of tables in my class.
Otherwise, having everything in a single join requires me to reorganize all the data myself - creating hashmaps, iterating through the data, and so on - which is usually a lot of work in code.
I also get the possibility of lazy loading with option 2.
Do I have other choices? Did I miss something? (Of course I know about ORMs, but I decided to not use them on Android)
If I understood correctly, your question is, which of these ways is better to load data from a main table and related tables:
Load a JOIN of the main table and related tables
Load the data of the main table and related tables separately and join them in Java
Something else
Unless you are extremely constrained for bandwidth, I'd say option 1 is simple and I would go with that. It's easier to let the DB do the joining and, in Java, just map the records to objects.
If saving bandwidth between the application and the database is important, then option 2 is better, because every piece of data will be fetched only once, without duplication - the result of a JOIN in option 1 is essentially denormalized data.
In any case, I recommend following Occam's razor: the simplest solution is often the best.
Google App Engine offers the Google Datastore as its only NoSQL database (I think it is based on BigTable).
In my application I have a social-like data structure and I want to model it as I would do in a graph database. My application must save heterogeneous objects (users,files,...) and relationships among them (such as user1 OWNS file2, user2 FOLLOWS user3, and so on).
I'm looking for a good way to model this typical situation, and I have thought of two families of solutions:
List-based solutions: Each object contains a list of other related objects, and an object's presence in the list is itself the relationship (as Google describes in the JDO documentation: https://developers.google.com/appengine/docs/java/datastore/jdo/relationships).
Graph-based solution: Both nodes and relationships are objects. The objects exist independently from the relationships while each relationship contain a reference to the two (or more) connected objects.
What are strong and weak points of these two approaches?
About approach 1: This is the simplest approach one can think of, and it is also presented in the official documentation, but:
Each directed relationship makes the object record grow: are there any limitations on the number of possible relationships, imposed for instance by the entity size limit?
Is that a JDO feature, or does the Datastore structure also allow that approach to be implemented naturally?
The relationship search time will increase with the length of the list; is this solution suitable for large numbers (millions) of relationships?
About approach 2: Each relationship can be characterized in more detail (it is an object and can have properties), and I assume storage size is not a problem for Google, but:
Each relationship requires its own record, so the search time for each related couple will increase as the total number of relationships increases. Is this suitable for large numbers of relationships (millions, billions)? That is, does Google have good tricks for searching among records if they are well structured? Or will I soon be in a situation where, if I want to find a friend of User1 called User4, I have to wait seconds?
On the other hand, each object does not grow in size as new relationships are added.
Could you help me find other important points about the two approaches, so that I can choose the best model?
First, the search time in the Datastore does not depend on the number of entities that you store, only on the number of entities that you retrieve. Therefore, if you need to find one relationship object out of a billion, it will take the same time as if you had just one object.
Second, the list approach has a serious limitation called "exploding indexes". You will have to index the property that contains a list to make it searchable. If you ever use a query that references more than just this property, you will run into this issue - google it to understand the implications.
Third, the list approach is much more expensive. Every time you add a new relationship, you will rewrite the entire entity at considerable writing cost. The reading costs will be higher too if you cannot use keys-only queries. With the object approach you can use keys-only queries to find relationships, and such queries are now free.
UPDATE:
If your relationships are directed, you may consider making Relationship entities children of User entities, and using an Object id as an id for a Relationship entity as well. Then your Relationship entity will have no properties at all, which is probably the most cost-efficient solution. You will be able to retrieve all objects owned by a user using keys-only ancestor queries.
I have an AppEngine application and I use both approaches. Which is better depends on two things: the practical limits of how many relationships there can be and how often the relationships change.
NOTE 1: My answer is based on experience with Objectify and heavy use of caching. Mileage may vary with other approaches.
NOTE 2: I've used the term 'id' instead of the proper Datastore term 'name' here. 'Name' would have been confusing, and 'id' matches Objectify terms better.
Consider users linked to the schools they've attended and vice versa. In this case, you would do both. Link the users to schools with a variation of the 'List' method. Store the list of school ids the user attended as a UserSchoolLinks entity with a different type/kind but with the same id as the user. For example, if the user's id = '6h30n' store a UserSchoolLinks object with id '6h30n'. Load this single entity by key lookup any time you need to get the list of schools for a user.
However, do not do the reverse for the users that attended a school. For that relationship, insert a link entity. Use a combination of the school's id and the user's id for the id of the link entity. Store both id's in the entity as separate properties. For example, the SchoolUserLink for user '6h30n' attending school 'g3g0a3' gets id 'g3g0a3~6h30n' and contains the fields: school=g3g0a3 and user=6h30n. Use a query on the school property to get all the SchoolUserLinks for a school.
Here's why:
Users will see their schools frequently but change them rarely. Using this approach, the user's schools will be cached and won't have to be fetched every time they hit their profile.
Since you will be getting the user's schools via a key lookup, you won't be using a query. Therefore, you won't have to deal with eventual consistency for the user's schools.
Schools may have many users that attended them. By storing this relationship as link entities, we avoid creating a huge single object.
The users that attended a school will change a lot. This way we don't have to write a single, large entity frequently.
By using the id of the User entity as the id for the UserSchoolLinks entity we can fetch the links knowing just the id of the user.
By combining the school id and the user id as the id for the SchoolUserLink, we can do a key lookup to see if a user and school are linked. Once again, no need to worry about eventual consistency for that.
By including the user id as a property of the SchoolUserLink we don't need to parse the SchoolUserLink object to get the id of the user. We can also use this field to check consistency between both directions and have a fallback in case somehow people are attending hundreds of schools.
Downsides:
1. This approach violates the DRY principle. Seems like the least of evils here.
2. We still have to use a query to get the users who attended a school. That means dealing with eventual consistency.
Don't forget to update the UserSchoolLinks entity and add/remove the SchoolUserLink entity in a single transaction.
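The id scheme above can be sketched as plain string composition (the '~' separator and the example ids come from the answer; entity classes and Objectify calls are omitted):

```java
// Builds and parses SchoolUserLink ids of the form "<schoolId>~<userId>",
// e.g. "g3g0a3~6h30n". A deterministic id like this is what makes the
// "are these two linked?" check a key lookup rather than a query.
class SchoolUserLinkIds {
    static String linkId(String schoolId, String userId) {
        return schoolId + "~" + userId;
    }

    // The user id can be recovered from the key alone, without loading
    // or parsing the entity's properties.
    static String userIdOf(String linkId) {
        return linkId.substring(linkId.indexOf('~') + 1);
    }
}
```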
Your question is complex, but I'll try to explain what I think is the best solution (I'll answer in Python, but the same can be done in Java).
class User(db.Model):
    followers = db.StringListProperty()
Adding a follower is simple:
user = User.get(key)
user.followers.append(str(followerKey))
user.put()  # persist the change
This allows fast answers in both directions: a user's followers are already on the entity, and the users someone follows can be found with a query:
User.all().filter('followers =', str(followerKey))  # -> users followed by followerKey
That query is costly in read I/O; you can make it faster, but at the price of more complexity and more write I/O:
class User(db.Model):
    followers = db.StringListProperty()
    follows = db.StringListProperty()
However, this complicates changes: deleting a user now requires updating the follows lists as well, so you need two writes.
You can also store relationships as separate entities, but that is the worst scenario, since it is even more complex than the second example with followers and follows. Keep in mind that an entity can hold up to 1 MB; that is usually not the limiting factor, but it can be.
Here's the case: I am creating a batch script that runs daily, parsing logfiles and exporting the data to a database. The format of this file is basically
std_prop1;std_prop2;std_prop3;[opt_prop1;[opt_prop2;[opt_prop3;[..]]]
The standard properties map to a table with a column for each property, where each line in the logfile basically maps to a corresponding row. It might look like LOGDATA(id, timestamp, systemId, methodName, callLength). Since we should be able to log as many optional properties as we like, we cannot map them to the same table, since that would mean adding a column to the table every time a new property was introduced. Not to mention the number of NULL values ...
So the additional properties go in another table, say EXTRA_PROPS(logdata_foreign_key,propname,value). In reality, most of the optional properties are the same (e.g. os version, app container, etc), making it somewhat wasteful to log for instance 4 rows in EXTRA_PROPS for each row in LOGDATA (in the case that one on average had 4 extra properties). So what I would like my batch job to do is
for each additionalProperty in logRow:
    if additionalProperty already exists in EXTRA_PROPS:
        create a reference to it in the reference table
    else:
        add the property to EXTRA_PROPS
        create a reference to it in the reference table
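The loop above could be sketched in Java like this (an in-memory map stands in for the existence check against EXTRA_PROPS; real code would query the table, and the class name is made up):

```java
import java.util.HashMap;
import java.util.Map;

// Deduplicates (propname, value) pairs, handing out one id per distinct
// pair. The returned id is what would be written into
// LOGDATA_HAS_EXTRA_PROPS; the pair is only "inserted" (assigned a new id)
// the first time it is seen.
class ExtraPropsCache {
    private final Map<String, Long> idsByProp = new HashMap<>();
    private long nextId = 1;

    long idFor(String propname, String value) {
        return idsByProp.computeIfAbsent(propname + "=" + value, k -> nextId++);
    }
}
```

For a 100K+ row batch, keeping such a cache in memory avoids one existence query per optional property.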
I would then probably have three slightly different tables:
LOGDATA(id, timestamp, systemId, methodName, callLength)
EXTRA_PROPS(id,propname,value)
LOGDATA_HAS_EXTRA_PROPS(logid,extra_prop_id)
I am not 100% sure this is a better way of doing it - I would still create N rows in the LOGDATA_HAS_EXTRA_PROPS table for N properties, but at least I would not add any new rows to EXTRA_PROPS.
Even if this might not be the best way (what is?), I am still wondering about the technical side: how would I implement this using Hibernate? It does not have to be superfast, but it would need to chew through 100K+ rows.
Firstly, I would not recommend using Hibernate for this type of logic. Hibernate is a great product, but this kind of high-load data operation may not be its strongest point.
From a data modeling standpoint, it appears to me that (propname, value) is actually the primary key of EXTRA_PROPS. Basically, you want to express the logic that, for example, the combination of hostname and foo.bar.com will appear only once in the table. Am I right? That would be the PK. So you will need to use that pair in LOGDATA_HAS_EXTRA_PROPS; the name alone will not be sufficient as a reference.
In Hibernate (if you choose to use it), that can be expressed via a composite key, using @EmbeddedId or @Embeddable on the object mapped to EXTRA_PROPS. Then you can have a many-to-many relationship that uses LOGDATA_HAS_EXTRA_PROPS as the association table.
What is the convention for this? Say for example I have the following, where an item bid can only be a bid on one item:
public class Item {
    @OneToMany(mappedBy = "item")
    Set<ItemBid> itemBids = new HashSet<ItemBid>();
}
If I am given the name of the item bidder (which is stored in ItemBid), should I A) load the item using a DAO and iterate over its collection of itemBids until I find the one with the name I want, or B) create an ItemBid DAO where the item and the bidder name are used in Criteria or HQL?
I would presume that B) would be the most efficient with very large collections, so would this be standard for retrieving very specific items from large collections? If so, can I have a general guideline for when I should use the collections and when I should use DAOs/Criteria?
Yes, you should definitely query bids directly. Here are the guidelines:
If you are searching for a specific bid, use a query.
If you need a subset of bids, use a query.
If you want to display all the bids for a given item - it depends. If the number of bids is reasonably small, fetch the item and use the collection. Otherwise, query directly.
Of course, from an OO perspective you should always use a collection (preferably with findBy*() methods on Item that access the bids collection internally) - which is also more convenient. However, if the number of bids per item is significant, the cost of (even lazy) loading will be significant and you will soon run out of memory. This approach is also very wasteful.
You should have asked yourself this question much sooner: at the time you were doing the mapping. Mapping for an ORM should be intellectual work, not a matter of copying all the foreign keys onto attributes on both sides (if only because of YAGNI, but there are many other good reasons).
Chances are, the bid-item mapping would be better as unidirectional (then again, maybe not).
In many cases we find that certain entities are strongly associated with an almost fixed number of some other entities (they would probably be called "aggregates" in DDD parlance) - for example, invoices and invoice items, a person and a list of their hobbies, or a post and a set of tags for that post. We do not expect that the number of items on a given invoice will grow over time, nor will the number of tags. So these are all good places to map a @OneToMany. On the other hand, the number of invoices for each client will keep growing - so we would just map a unidirectional @ManyToOne from invoice to client - and query.
Repositories (DAOs, whatever) that do queries are perfectly good OO (nothing wrong with a query; it is just an object describing your requirements in a storage-neutral way); finders in entities - not so much. From a practical point of view, they bind your entities to the data access layer (DAOs or even JPA classes), which makes them unusable in many use cases (GWT) or tricky to use when detached (you will have to guess which methods work outside a session). From a philosophical point of view, this violates the single responsibility principle and turns your JPA entities into a sort of active record wannabe.
So, my answer would be:
if you need a single bid, query directly,
if you want to display all the bids for a given item - fetch the item and use the collection. This does not depend on the number of bids per item, as the query performed by JPA will be identical to a query you might perform yourself. If this approach needs tuning (as in a case where you need to fetch a lot of items and want to avoid the "N + 1 selects" problem), there are plenty of ways (join fetch, eager fetching, hints) to make it right without changing the part of the code that uses getBids().
The simplest way to think about it is: if you think that some collection will never be displayed with paging (like tags on a post, items on an invoice, hobbies of a person), map it with @OneToMany and access it as a collection.
A very common use case in many web applications is that domain objects can be ordered by a user in the web interface - and that order is persisted for later; but I've noticed that every time I need to implement this, I always end up coming up with a solution that is different and adds considerable weight and complexity to a simple domain object. As an example, suppose I have the persistent entity Person
Person {
    long id
    String name
}
A user goes to /myapp/persons and sees all the people in the system in the order in which they receive compensation (or something) - all of the people can be clicked, dragged, and dropped into another position, and when the page is loaded again, the order is remembered. The problem is that relational databases just don't seem to have a good way of doing this; nor do ORMs (Hibernate is what I use).
I've been thinking of a generic ordering methodology - but there will be some overhead as the data would be persisted separately which could slow down access in some use cases. My question is: has anyone come up with a really good way to model the order of persistent domain objects?
I work in JavaEE with Hibernate or JDBCTemplate - so any code examples would be most useful with those technologies as their basis. Or any conceptual ideas would be welcome too.
UPDATE: I'm not sure where I went wrong, but it seems I was unclear, as most have responded with answers that don't really match my question (as it is in my head). The problem is not how to order rows or columns when I fetch them - it is that the order of the domain objects changes: someone clicks and drags a "person" from the bottom of the list to the top, they refresh the page, and the list is now in the order they specified.
When fetching results, just build HQL queries with a different ORDER BY clause, depending on what the user has used last time
The problem is that relational databases just don't seem to have a good way of doing this; nor do ORMs (hibernate is what I use)
I'm not sure where you got this impression. Hibernate specifically supports mapping indexed collections (a "list" by another name), which usually boils down to storing a "list-index" column in the table holding the collection of items.
An example taken directly from the manual:
<list name="carComponents"
      table="CarComponents">
    <key column="carId"/>
    <list-index column="sortOrder"/>
    <composite-element class="CarComponent">
        <property name="price"/>
        <property name="type"/>
        <property name="serialNumber" column="serialNum"/>
    </composite-element>
</list>
This would allow a List<CarComponent> to be associated with your root entity, stored in the CarComponents table with a sortOrder column.
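On the Java side, the mapping corresponds to a plain ordered list on the root entity; a sketch (the composite element matches the properties in the mapping, while the root entity name `Car` is an assumption, since the manual's example only shows the collection):

```java
import java.util.ArrayList;
import java.util.List;

// Composite element matching the <composite-element> mapping above.
class CarComponent {
    String type;
    int price;
    String serialNumber;

    CarComponent(String type, int price, String serialNumber) {
        this.type = type;
        this.price = price;
        this.serialNumber = serialNumber;
    }
}

// Root entity holding the ordered collection; Hibernate persists each
// element's position in the sortOrder column, so the List order survives
// a save/load round trip.
class Car {
    List<CarComponent> carComponents = new ArrayList<>();
}
```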
One possible generic solution:
Create a table to persist sort information; the simplest case would be one sortable field per entity, with a direction:
table 'sorts'
* id: PK
* entity: String
* field: String
* direction: ASC/DESC enumeration (or ascending boolean flag)
It could be made more complicated by adding a userId to do per-user sorting or by adding a sort_items table with a foreign key to support sorting by multiple fields at a time.
Once you're persisting the sort information, it's a simple matter of adding Order instances to criteria (if that's what you're using) or concatenating order by statements to your HQL.
This also keeps the entities themselves free of any ordinal information, which in this case sounds like the right approach, since the ordering is purely for user-interaction purposes.
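The "concatenating order by statements" step could be sketched as plain string assembly (the helper name is made up; the entity and field values would come from the 'sorts' table above, and real code should whitelist field names rather than concatenate arbitrary input):

```java
// Turns a persisted sort record (entity, field, direction) into an HQL
// query string with the matching ORDER BY clause.
class SortClauses {
    static String orderedHql(String entity, String field, boolean ascending) {
        return "from " + entity + " order by " + field + (ascending ? " asc" : " desc");
    }
}
```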
Update - Persisting entity order
Given the fact that you want to be able to reorder entities, not just define a sort for them, then you really do need to make an ordinal or index value part of the entity's definition.
The problem, as I'm sure you realize, is the number of entities that would need to be updated, with the worst-case scenario being moving the last entity to the top of the list.
You could use an increment value other than 1 (say 10) so you would have:
ordinal | name
10 | Crosby
20 | Stills
30 | Nash
40 | Young
Most of the time, updating the order will involve selecting two items and updating one. If I want to move Young to position 2, I select the current item 2 and the item before it from the database to get the ordinals 10 and 20, and use these to compute the new ordinal ((20 - 10) / 2 + 10 = 15). Now do a single update of Young with an ordinal of 15.
If you get to the point where the division by two yields the same ordinal as one of the entities you just loaded, that means it's time to spawn a task that normalizes the ordinal values back to your original increment.
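The midpoint arithmetic above could be sketched like this (class and method names are made up; SPACING mirrors the increment of 10 used in the example):

```java
// Gap-based ordinals: new items are placed halfway between their
// neighbours, and the list is renumbered only when a gap runs out.
class OrdinalGaps {
    static final int SPACING = 10;  // the original increment between rows

    // New ordinal for an item moved between prevOrdinal and nextOrdinal,
    // e.g. between 10 and 20 -> 15.
    static int midpoint(int prevOrdinal, int nextOrdinal) {
        return (nextOrdinal - prevOrdinal) / 2 + prevOrdinal;
    }

    // True when integer division collapses the midpoint onto a neighbour,
    // i.e. no room is left and the ordinals should be normalized.
    static boolean needsNormalization(int prevOrdinal, int nextOrdinal) {
        int mid = midpoint(prevOrdinal, nextOrdinal);
        return mid == prevOrdinal || mid == nextOrdinal;
    }
}
```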
As far as I know, JPA 2.0 provides support for ordered lists:
https://secure.wikimedia.org/wikibooks/en/wiki/Java_Persistence/Relationships#Order_Column_.28JPA_2.0.29
I think that relational databases cannot do better than a dedicated ordering column.
The idea of "order" is not really defined in SQL for anything but cursors, and they are not a core relational concept but rather an implementation detail.
For all I know, the only thing to do is to abstract the ordering column away with @OrderColumn (JPA 2.0, so Hibernate 3.5+ compatible).