ORMLite - force read objects to have same identity - java

I'm reading a hierarchy of objects with ORMLite. It is shaped like a tree: parents have a ForeignCollection of 0+ children, and every child refers to its parent with @DatabaseField(foreign = true). I'm reading and saving the whole hierarchy at once.
As I'm new to ORM in general, and to ORMLite as well, I didn't know that when rows with the same ID are read from the database, they aren't materialized as one and the same object (same identity) but as several duplicates that merely share the same ID. So I'm now facing the problem that (where "->" stands for "refers to") A -> B -> C != C -> B -> A.
I was thinking of solving the problem by manually reading the objects through the provided DAOs and putting them together by their ID, ensuring that objects with the same ID share the same identity.
Is there an ORMLite-native way of solving this? If yes, what is it? If not, what are common ways of solving this problem? Is this a general problem of ORM? Does it have a name (I'd like to learn more about it)?
Edit:
My hierarchy is such that one building contains several floors, where each floor knows its building, and each floor contains several zones, where every zone knows its floor.
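For reference, here is a minimal sketch of how such a hierarchy is typically declared with ORMLite. The class, table, and column names are my own illustration, not taken from the actual project:

import com.j256.ormlite.dao.ForeignCollection;
import com.j256.ormlite.field.DatabaseField;
import com.j256.ormlite.field.ForeignCollectionField;
import com.j256.ormlite.table.DatabaseTable;

@DatabaseTable(tableName = "building")
public class Building {
    @DatabaseField(generatedId = true)
    private int id;

    // children: filled in by ORMLite when the building is queried
    @ForeignCollectionField
    private ForeignCollection<Floor> floors;

    public int getId() {
        return id;
    }
}

@DatabaseTable(tableName = "floor")
class Floor {
    @DatabaseField(generatedId = true)
    private int id;

    // back-reference to the parent; stored as a plain foreign-key column
    @DatabaseField(foreign = true, columnName = "building_id")
    private Building building;

    @ForeignCollectionField
    private ForeignCollection<Zone> zones;
}

@DatabaseTable(tableName = "zone")
class Zone {
    @DatabaseField(generatedId = true)
    private int id;

    @DatabaseField(foreign = true, columnName = "floor_id")
    private Floor floor;
}

It is the building/floor and floor/zone back-references in a shape like this that end up pointing at duplicate instances when the tree is read.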

Is this a general problem of ORM? Does it have a name (I'd like to learn more about it)?
It is a general pattern for ORMs and is called “Identity Map”: within a session, no matter where in your code you got a mapped object from the ORM, there will be only one object representing a specific row in the db (i.e. one object per primary key).
I love this pattern: you can retrieve something from the db in one part of your code, even modify it, store that object in an instance variable, etc. And in another part of the code, if you get hold of an object for the same “db row” (by whatever means: it was passed to you as an argument, you made a bulk query to the db, you created a “new” mapped object with the same primary key and added it to the session), you will end up with the same object. Even the modifications from before (including unflushed ones) will be there.
(Adding a mapped object to the session may fail because of this, and depending on the ORM and programming language the add may hand you back a different object as “the same one”.)

Unfortunately there is no ORMLite-native way of solving this problem. More complex ORM systems (such as Hibernate) have caching layers which exist specifically for this reason. ORMLite does not have a cache layer, so it doesn't know that it just returned an object with the same id "recently". Here's the documentation on Hibernate caching:
http://docs.jboss.org/hibernate/core/3.3/reference/en/html/performance.html
However, ORMLite is designed to be Lite and cache layers violate that designation IMO. About the only [unfortunate] solution that I see to your issue in ORMLite is to do what you are doing -- rebuilding the object tree based on the ids. If you give more details about your hierarchy we may be able to help more specifically.
So after thinking about your case a bit more @riwi, it occurred to me that if you have a Building that contains a collection of Floors, there is no reason why the Building object on each of the Floors in the collection cannot be set to the parent Building object. Duh. ORMLite has all of the information it needs to make this happen. I implemented this behavior and it was released in version 4.24.
Edit:
As of ORMLite version 4.26 we added an initial take on an object cache that can support the features asked for here. Here are the docs:
http://ormlite.com/docs/object-cache
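A hedged sketch of what using that object cache looks like, reusing the Building sketch from the question and assuming ORMLite 4.26+ with an in-memory H2 database; the point is that with the cache enabled, two queries for the same id hand back the identical instance:

import com.j256.ormlite.dao.Dao;
import com.j256.ormlite.dao.DaoManager;
import com.j256.ormlite.jdbc.JdbcConnectionSource;
import com.j256.ormlite.support.ConnectionSource;
import com.j256.ormlite.table.TableUtils;

public class ObjectCacheDemo {
    public static void main(String[] args) throws Exception {
        // in-memory H2 database, just for the demonstration
        ConnectionSource source = new JdbcConnectionSource("jdbc:h2:mem:demo");
        TableUtils.createTableIfNotExists(source, Building.class);

        Dao<Building, Integer> dao = DaoManager.createDao(source, Building.class);
        dao.setObjectCache(true); // per-DAO object cache, available from 4.26 on

        Building created = new Building();
        dao.create(created);

        Building a = dao.queryForId(created.getId());
        Building b = dao.queryForId(created.getId());
        System.out.println(a == b); // true with the cache on: same identity, not just equal ids

        source.close();
    }
}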

Related

Pragmatic Programmer: avoid data source duplication by using map?

In the Pragmatic Programmer book, chapter “Data source duplication” the authors state:
Many Data sources allow you to introspect on their data schema. This can be used to remove much of the duplication between them and your code. Rather than manually creating the code to contain this stored data, you can generate the containers directly from the schema. Many persistence frameworks will do this heavy lifting for you.
So far so good. We can achieve this easily by connecting our IDE to the DB and letting it create our entities for us.
Then it continues:
There’s another option, and one we often prefer. Rather than writing code that represents external data in a fixed structure (an instance of a struct or class, for example), just stick it into a key/value data structure (your language might call it a map, hash, dictionary, or even object). On its own this is risky .... we recommend adding a second layer to this solution: a simple table-driven validation suite that verifies that the map you’ve created contains at least the data you need. Your API documentation tool might be able to generate this.
The idea, if I got it right, is to avoid having an Entity represent the table in the DB (so as to avoid duplication of knowledge) and rather to use a map, so that if we add a new column to the schema we don’t also need to update our representation of that schema (i.e. the entity) in our application.
Then comes the part that is not clear to me: they talk about an autogenerated “table-driven validation suite that verifies that the map you’ve created contains at least the data you need”.
Do any of you know what an implementation of this concept would look like?
The closest thing I could find on Google about this topic is this question on Stack Overflow, but the answers skipped the second part.
I think it really depends on the language you’re using and on the data you need to read. For Java, if you’re mapping the raw data to a Map, what you could do is use validators (e.g. Hibernate validators or Spring validators) to define your own custom annotations and enforce that the schema’s constraints are respected when creating the in-memory representation (e.g. if you’re reading a user table with id as the primary key, the map must then contain the id key with a valid value).
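As a rough sketch of what such a table-driven validation suite could look like in plain Java (the rule table and column names are invented; the same idea can be expressed with Hibernate Validator or Spring validators, and the rules could be generated from the schema or the API docs):

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class RowValidator {

    // table-driven rules: column name -> constraint the value must satisfy
    private static final Map<String, Predicate<Object>> USER_RULES = new LinkedHashMap<>();
    static {
        USER_RULES.put("id", v -> v instanceof Number);  // primary key must be present and numeric
        USER_RULES.put("email", v -> v instanceof String && ((String) v).contains("@"));
    }

    /** Returns the names of the columns whose rules the row violates (empty list = valid). */
    public static List<String> violations(Map<String, Object> row) {
        return USER_RULES.entrySet().stream()
                .filter(e -> !row.containsKey(e.getKey()) || !e.getValue().test(row.get(e.getKey())))
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}

Each row read from the data source stays a Map<String, Object>; the only thing that knows about the schema is the generated rule table.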

Spring Data Neo4j memory consumption on "supernode" entities

As far as I know, once a NodeEntity in Spring Data Neo4j is loaded, the default behaviour is to lazily load its relations by fetching only ids of related nodes.
While that seems fine in most situations, I have doubts about it in the case of so-called "supernodes" - nodes that have numerous relations to other nodes. Such nodes, even if small in themselves, will hold a huge collection of ids, using more memory than we would like and, in effect, possibly not being "lazily loaded enough"...
So my question is - how shall I deal with that kind of supernode?
My first idea is to simply remove all @RelatedTo/@RelatedToVia mappings (or at least the ones with relationship types that are "numerous") from those kinds of nodes, bypass SDN when operations on those relations are needed, and use SDN in all other cases.
Does that make sense? Do you have other suggestions, or experience with this kind of situation?
I have not worked with SDN, but I would give the meta-node approach a try. With this approach you build a structure that splits the total number of relations across a number of meta-nodes: if a node has 1000 connections and you use 10 meta-nodes, each meta-node will have 100 connections while the supernode itself only connects to the 10 meta-nodes. You can see a graphic representation in the following image: http://i.stack.imgur.com/DMQGs.png.
In this way you have good control over how many relations a node can have, and therefore how many nodes SDN will load at most.
You can read more about it at http://neo4j.com/book-learning-neo4j/ and also in this similar post: Neo4j how to avoid supernodes.
For supernodes I'd simply not specify the relationship on the supernode entity, but only on the related nodes.
If you're interested in the relationship, you either look up the related node and follow it to the supernode,
or, if you really need to load the millions of relationships, use a Cypher statement.
You can also put the many relationships on a separate node for that purpose, or add a tree-like substructure, which also lets you deal with sub-selections.
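A rough sketch of that suggestion against SDN 3.x-style annotations (the entities, relationship type, and query are invented for illustration; double-check the API against the SDN version you actually use):

import org.springframework.data.neo4j.annotation.GraphId;
import org.springframework.data.neo4j.annotation.NodeEntity;
import org.springframework.data.neo4j.annotation.Query;
import org.springframework.data.neo4j.annotation.RelatedTo;
import org.springframework.data.neo4j.repository.GraphRepository;

@NodeEntity
class Tag {                     // the "supernode": no @RelatedTo back to its many posts
    @GraphId Long id;
    String name;
}

@NodeEntity
class Post {                    // only the related node keeps the mapping
    @GraphId Long id;
    @RelatedTo(type = "TAGGED_WITH")
    Tag tag;
}

interface TagRepository extends GraphRepository<Tag> {
    // when the full fan-out really is needed, go through Cypher instead of the mapping
    @Query("MATCH (p:Post)-[:TAGGED_WITH]->(t:Tag) WHERE id(t) = {0} RETURN p LIMIT {1}")
    Iterable<Post> findPosts(Long tagId, int limit);
}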
First, can you provide the version of SDN you are using, so we can target the question to the right maintainers of the library?
Secondly, while I don't really know the internals of SDN, I have worked heavily with other OGMs, and my understanding of lazy loading is quite different from the one you describe, for the simple reason that lazily loading the ids can be very harmful: you can end up with corrupted data if another process deletes one of the nodes behind those ids.
Generally, and this is quite common in other OGMs, in the case where an object has no annotations representing relationships, you would just recreate the object from its metadata and the loaded node.
However if it has relationships, you would then create a proxy of that object that will extend the entity itself.
The entity values on the proxy are not instantiated at first; instead, you override all getters and add to the proxy the methods for retrieving the related nodes (so the entity manager is injected into the proxy).
So basically, a proxy will be empty until you call one of the getters on it.
You can also "fine-grain" this behavior by creating Custom repositories that extend the default one, in the sense you can choose to only LAZY_LOAD one type of relationships and EAGER_LOAD the others.
The method described by albert makes a lot of sense in some cases; however, it is hard to accomplish on the basic OGM side. You would be better off with a BehaviorComponent that handles this for you during lifecycle events, or with some kind of pagination added to the getter method, which I don't think is part of the OGM right now.
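To make the proxy idea concrete, here is a generic, hand-rolled sketch of the pattern being described; it is not SDN's actual implementation, and NodeLoader stands in for whatever entity manager the OGM would inject:

import java.util.List;

class Person {
    Long id;
    List<Person> friends;

    List<Person> getFriends() {
        return friends;
    }
}

interface NodeLoader {
    List<Person> loadFriends(Long personId);
}

class LazyPersonProxy extends Person {
    private final NodeLoader loader;
    private boolean friendsLoaded;

    LazyPersonProxy(Long id, NodeLoader loader) {
        this.id = id;
        this.loader = loader;
    }

    @Override
    List<Person> getFriends() {
        if (!friendsLoaded) {                  // the proxy stays empty until the first getter call
            friends = loader.loadFriends(id);  // related nodes are fetched on demand
            friendsLoaded = true;
        }
        return friends;
    }
}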

Simulating DELETE cascades with WeakHashMaps

I'm developing a service that monitors computers. Computers can be added to or removed from monitoring by a web GUI. I keep reported data basically in various maps like Map<Computer, Temperature>. Now that the collected data grows and the data structures become more sophisticated (including computers referencing each other) I need a concept for what happens when removing computers from monitoring. Basically I need to delete all data reported by the removed computer. The most KISS-like approach would be removing the data manually from memory, like
public void onRemove(Computer computer) {
    temperatures.remove(computer);
    // ...
}
This method has to be changed whenever I add features :-( I know Java has a WeakHashMap, so I could store reported data like so:
Map<Computer, Temperature> temperatures = new WeakHashMap<>();
I could call System.gc() whenever a computer is removed from monitoring in order to have all associated data eagerly removed from these maps.
While the first approach seems a bit like primitive MyISAM tables, the second one resembles DELETE cascades in InnoDB tables. But still it feels a bit uncomfortable and is probably the wrong approach. Could you point out advantages or disadvantages of WeakHashMaps or propose other solutions to this problem?
Not sure if it is possible in your case, but couldn't your Computer class hold all the attributes itself, and then you keep a list of monitoredComputers (or a wrapper class called MonitoredComputers, where you can wrap any logic needed, like getTemperatures())? That way a computer can be removed from that list and you don't have to look through all the attribute maps. If the computer is referenced from another computer, then you have to loop through that list and remove the references from those that hold it.
I'm not sure using a WeakHashMap is a good idea. As you say you may reference Computer objects from several places, so you'll need to make sure all references except one go through weak references, and to remove the hard reference when the Computer is deleted. As you have no control over when weak references are deleted, you may not get consistent results.
If you don't want to maintain the removal manually, you could have a flag on Computer objects, like isAlive(). Then you store Computers in special subclasses of Maps and Collections that check at read time whether the Computer is alive, and silently remove it if not. For example, on a Map<Computer, ?>, the get method would check if the computer is alive, and if not, remove it and return null.
Or the subclasses of Maps and Collections could just register themselves to a single computerRemoved() event, and automatically know how to remove the deleted computers, and you wouldn't have to manually code the removal. Just make sure you keep references to Computer only inside your special maps and collections.
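A minimal sketch of that idea, assuming the Computer class exposes the proposed isAlive() flag (the stub below stands in for the real class); the wrapper drops dead entries at read time and can also do a bulk sweep from a computerRemoved() event:

import java.util.HashMap;
import java.util.Map;

class Computer {
    private volatile boolean alive = true;

    boolean isAlive() {
        return alive;
    }

    void markRemoved() {
        alive = false;
    }
}

class LiveComputerMap<V> {
    private final Map<Computer, V> delegate = new HashMap<>();

    void put(Computer c, V value) {
        delegate.put(c, value);
    }

    V get(Computer c) {
        if (c != null && !c.isAlive()) {   // silently drop entries for removed computers
            delegate.remove(c);
            return null;
        }
        return delegate.get(c);
    }

    void removeDead() {                    // optional bulk sweep, e.g. from a computerRemoved() event
        delegate.keySet().removeIf(k -> !k.isAlive());
    }
}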
Why not use an actual SQL database? You could use an embedded database engine such as H2, Apache Derby / Java DB, HSQLDB, or SQLite. Using an embedded database engine has these added benefits:
You could inspect the live contents of the monitoring data at any time using the corresponding DB engine's command line client.
You could build a new tool to access and manipulate the data by connecting to a shared database instance.
The schema itself is a form of documentation as to the structure of the monitoring data and the relationships between entities.
You could store different types of data for different types of computers by way of schema normalization.
You can back up the monitoring data.
If you need to restart the monitoring server, you won't lose all of the monitoring data.
Your Web UI could use a JPA implementation such as Hibernate to access the monitoring data and add new records. Or, for a more lightweight solution, you might consider using Spring Framework's JdbcTemplate and SimpleJdbcInsert classes. There are also ORMLite, ActiveJDBC, and jOOQ, which each aim to offer simpler access to databases than raw JDBC.
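A minimal sketch of the embedded-database route, using H2 with Spring's JdbcTemplate; the table layout is invented for illustration, and the ON DELETE CASCADE is what replaces the manual onRemove() bookkeeping:

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

public class MonitoringStore {
    public static void main(String[] args) {
        // embedded, file-backed H2 database; the data survives service restarts
        DriverManagerDataSource ds = new DriverManagerDataSource("jdbc:h2:./monitoring", "sa", "");
        JdbcTemplate jdbc = new JdbcTemplate(ds);

        jdbc.execute("CREATE TABLE IF NOT EXISTS computer (id INT PRIMARY KEY, name VARCHAR(255))");
        jdbc.execute("CREATE TABLE IF NOT EXISTS temperature ("
                + "computer_id INT, reported_at TIMESTAMP, celsius DOUBLE, "
                + "FOREIGN KEY (computer_id) REFERENCES computer(id) ON DELETE CASCADE)");

        jdbc.update("INSERT INTO computer VALUES (?, ?)", 42, "web-01");
        jdbc.update("INSERT INTO temperature VALUES (?, CURRENT_TIMESTAMP, ?)", 42, 57.5);

        // removing the computer cascades to all of its readings -- the DB does the cleanup
        jdbc.update("DELETE FROM computer WHERE id = ?", 42);
    }
}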
The problem with WeakHashMap is that managing the references to Computer objects seems difficult and easily breakable. From the WeakHashMap Javadoc:
Hash table based implementation of the Map interface, with weak keys. An entry in a WeakHashMap will automatically be removed when its key is no longer in ordinary use. More precisely, the presence of a mapping for a given key will not prevent the key from being discarded by the garbage collector, that is, made finalizable, finalized, and then reclaimed. When a key has been discarded its entry is effectively removed from the map, so this class behaves somewhat differently from other Map implementations.
It could be the case that a reference to a Computer object still exists somewhere, so the entry will not be removed from the WeakHashMaps. I would prefer a more deterministic approach.
But if you decide to go down this route, you can mitigate the problem I point out by wrapping all these Computer keys in a class with strict controls: this wrapper object would create and store the keys and take care never to let references to those keys leak out.
Novice coder here, so maybe this is too clunky:
Why not keep the monitored computers in a HashMap, and move removed computers to a WeakHashMap? That way all removed computers are separate and easy to work with, with the GC cleaning up the oldest entries.

Saving tree-structures in Databases

I use Hibernate/Spring and a MySQL Database for my data management.
Currently I display a tree structure in a JTable. A tree can have several branches; a branch, in turn, can again have several branches (up to nine levels) or leaves. Lately I have performance problems as soon as I want to create new branches at deeper levels.
At the moment a branch has a foreign key to its parent. The domain object has access to its parent by calling getParent(), which returns the parent branch. The deeper the level, the longer it takes to create a new branch.
Microbenchmark results for creating a new branch are like:
Level 1: 32 ms.
Level 3: 80 ms.
Level 9: 232 ms.
Obviously the level (that is, the number of parents) is responsible for this. So I wanted to ask if there are any approaches to work around this kind of problem. I don’t understand why Hibernate needs to know about the whole object tree (all parents up to the root) when creating a new branch. But as far as I know this can be the only reason for the delay when creating a new branch, because a branch doesn’t have any other relations to any other objects.
I would be very thankful for any workarounds or suggestions.
greets,
ymene
Basically you have some sort of many-to-one relationship structure, right?
In Hibernate, everything depends on the mapping. Tweak your mapping: use a one-to-many relationship from parent to child with a java.util.Set.
Do not use an ArrayList, because a List is ordered, so Hibernate will add an extra column just for that ordering.
Also check your lazy property. If you load a parent and you have set lazy="false" on its child set property, then all of its children will be loaded from the DB, which can affect performance.
Also check the 'inverse' property for the children. If inverse="true" is set on the collection, the child side owns the association and you can manage the child entity separately; otherwise you have to do that through the parent only.
Google around for 'inverse'; it will surely help you.
Thanks.
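In annotation terms, that advice corresponds roughly to the following sketch (the entity is invented; the original poster may well be using XML mappings, where mappedBy corresponds to inverse="true" on the collection):

import java.util.HashSet;
import java.util.Set;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

@Entity
class Branch {
    @Id
    @GeneratedValue
    Long id;

    // the child owns the foreign key (the "inverse" collection in XML terms),
    // and LAZY keeps Hibernate from pulling in whole subtrees on load
    @OneToMany(mappedBy = "parent", fetch = FetchType.LAZY)
    Set<Branch> children = new HashSet<>();

    @ManyToOne(fetch = FetchType.LAZY)
    Branch parent;
}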
I don't know how Hibernate handles this internally. However, there are different ways to store tree structures in a database. One which is quite efficient for many queries done on the tree is using a "nested set" approach - but this would basically yield the performance issues that you're seeing (e.g. expensive insertion). If you need fast insertion or removal I'd go with what you have, e.g. a simple parent-ID, and try to see what Hibernate is doing all this time.
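For reference, the nested-set layout mentioned above looks roughly like this sketch (not tied to the poster's schema): every node stores a left/right interval, a subtree is a single range query, and that is exactly why inserts are expensive, since adding a node shifts the intervals of a large part of the tree.

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
class TreeNode {
    @Id
    @GeneratedValue
    Long id;

    String name;

    int lft;   // left boundary of this node's interval
    int rgt;   // right boundary; all descendants lie strictly between lft and rgt
}

// Fetching a whole subtree is then a single range query, e.g. in JPQL:
//   SELECT n FROM TreeNode n WHERE n.lft BETWEEN :lft AND :rgt ORDER BY n.lft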
If you don't need to report on your data in SQL, you could just serialize your JTable to the database instead (perhaps using something like XStream). That way you wouldn't have to worry about expensive database queries that deal with trees.
One thing you can do is use the XML support in MySQL. This will give you native support for hierarchies. I've never used XML support in MySQL, so I don't know if it is as full-featured as in other DBMSes (I know SQL Server and DB2 have great support; probably Oracle too, I would guess).
Note that I have never used Hibernate, so I don't know whether you could interface it with that, or whether you would have to write your own DB code in this case (my guess is you're going to be writing your own queries).

Hibernate: best collection type to use - bag, idbag, set, list, map

I am looking for what most people use as their collection type when making one-to-many associations in Hibernate. The legacy application I am maintaining uses bags exclusively, but keeps them as lists in code. The associated tables have an id field, so an idbag seems more appropriate, but the documentation recommends a Set.
EDIT: I mistakenly wrote that the documentation recommends a set. In reality, the official documentation is equally vague on all collection types. What I find is that some websites seem to imply that Set is the most common, and the Hibernate book I am reading explicitly says this about sets:
This is the most common persistent collection in a typical Hibernate application. (see: page 242 of 'Java Persistence with Hibernate' by Christian Bauer and Gavin King)
I guess that is what threw me and made me seek out what others are using.
EDIT2: note that Gavin King is the creator of Hibernate
Based on my experience of using both, I would recommend using a List. If you are getting data out of the database and displaying/manipulating it, it nearly always needs to be kept in a consistent order. You can use SortedSet, but that can add a whole world of pain (overriding equals, hashCode, etc. and sorting in different ways) compared to just adding an order-by and storing it in a List. Lists are easier to manipulate - if a user deletes line 3 on the page, then just remove item 3 in the List. Working with a Set seems to involve lots of unnecessary code and messing about with iterators.
When I have used Sets with Hibernate, I have frequently found myself ripping all the Sets out after a few weeks and replacing them with Lists, because Sets gave me too many limitations.
The Hibernate documentation and third party tools seem to use Sets by default but from hard experience I have found it much more productive to use Lists.
OK, after quite some time I have found a reason NOT to use a Set as the collection type. Due to problems with hashCode/equals overrides and the way Hibernate persists objects, using any Java API functionality that calls hashCode/equals is a bad idea: there is no good way to consistently compare objects pre- and post-persistence. Stick with collections that do not rely on equals/hashCode, like bag.
More info here:
http://community.jboss.org/wiki/EqualsandHashCode (this link makes it sound like a business key is the way to go, but read the next link fully to see why that is not always a good idea)
https://forum.hibernate.org/viewtopic.php?f=1&t=928172 (read the whole discussion to make your head spin)
I'm guessing people use all kinds of things :-) - different collection types serve different purposes so the "best" one depends on what you need it for.
That said, using List in code is usually more convenient than using Set even though said List is unordered. If nothing else, '.get(0)' is easier on the eyes than .iterator().next() :-) Hibernate bag support is definitely adequate for this purpose plus you can even add an order-by declaration (if applicable) and have your list sorted.
idbag is a whole different animal used for many-to-many associations; you can't really compare it to regular Set or List.
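To make the bag-plus-order-by point concrete, a small annotation sketch (entity names invented): a List without an index column is treated as a bag, and @OrderBy only sorts the rows when they are loaded.

import java.util.ArrayList;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;
import javax.persistence.OrderBy;

@Entity
class PurchaseOrder {
    @Id
    @GeneratedValue
    Long id;

    // no @OrderColumn, so Hibernate maps this List as a bag: no index column,
    // no positional bookkeeping, but still a predictable iteration order on load
    @OneToMany(mappedBy = "purchaseOrder")
    @OrderBy("position")
    List<OrderLine> lines = new ArrayList<>();
}

@Entity
class OrderLine {
    @Id
    @GeneratedValue
    Long id;

    @ManyToOne
    PurchaseOrder purchaseOrder;

    int position;
}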
I would recommend using a Set, because a set is defined as a collection of unique items and that's normally what you deal with.
Also, iterating with .iterator() can be guarded with hasNext() when there is no element in your collection,
whereas .get(0) will throw an IndexOutOfBoundsException if you access an empty list.
