Why is buildSessionFactory() deprecated? - java

Why is buildSessionFactory() replaced by buildSessionFactory(ServiceRegistry)? What is the importance of ServiceRegistry?

The reasons for this are explained on Hibernate's Jira
https://hibernate.onjira.com/browse/HHH-2578
Currently a SessionFactory is built by throwing a bunch of stuff into a Configuration object, stirring it, letting it come to a boil, and then pulling out the SessionFactory. In seriousness, there are a few problems with the way we currently operate within a Configuration and how we use it to build a SessionFactory:
The general issue that there is no "lifecycle" to when various pieces of information will be available. This is an important omission in a number of ways:
1) Consider schema generation. Currently we cannot even know the dialect when a lot of DB object names are being determined. Knowing it would be nice because it would allow us to transparently handle table/column names which are also keywords/reserved words in the dialect, for example.
2) The static-ness of types and the type-mappings, because we currently have nothing to which to scope them. Ideally a type instance would be aware of the SessionFactory to which it is bound. Instead, what we have now is to change API methods quite often to add the SessionFactory as a passed parameter whenever it is discovered to be needed.
3) Also, most (all?) of the "static" configuration parameters in Hibernate are currently required to be so because of their use from within these static types; thus scoping types would allow us to also scope those config parameters (things like bytecode provider, use of binary streams, etc.).
Ideally what I see happening is a scheme where users build an org.hibernate.cfg.Settings (or something similar) instance themselves. Additionally, they would apply metadata to a registry of some sort (let's call it MetadataRegistry for now). Then, in order to build a SessionFactory, they would supply these two pieces of information (via ctor? via builder?). The important aspect, though, is that the information in MetadataRegistry would not be dealt with until that point in time, which would allow us to guarantee that resolving schema object names, types, etc. would have access to the runtime Settings (and specifically the dialect).
You can also read the comments on this one: https://hibernate.onjira.com/browse/HHH-7580
It's too much to copy-paste, and I guess Jira won't go down, so this answer should stay valid.
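For reference, a minimal bootstrap sketch in the newer style (Hibernate 4.x API, assuming a hibernate.cfg.xml on the classpath):

import org.hibernate.SessionFactory;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.cfg.Configuration;
import org.hibernate.service.ServiceRegistry;

public class HibernateBootstrap {

    public static SessionFactory buildSessionFactory() {
        // Read mappings and settings from hibernate.cfg.xml.
        Configuration configuration = new Configuration().configure();

        // The ServiceRegistry scopes services such as the dialect and the connection
        // provider, giving them the defined lifecycle the Jira issue asks for.
        ServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder()
                .applySettings(configuration.getProperties())
                .build();

        // The deprecated no-arg buildSessionFactory() is replaced by this overload.
        return configuration.buildSessionFactory(serviceRegistry);
    }
}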

Related

Spring properties hot reloading

In a project, I have an org.apache.commons.configuration.PropertiesConfiguration object registered as a bean, to provide configuration values around the application, with hot-reloading capabilities.
Example: I defined a DataSource singleton bean. I then created a ReloadingDataSource object, which wraps and delegates to the "real" DataSource, and each time the configuration file changes, it is able to recreate it in a thread-safe manner.
I'd like to do something similar for simple property values.
I'd like to create a simple, autowirable object that delegates retrieval to the Apache PropertiesConfiguration bean.
The usage should be similar to:
@Property("my.config.database")
private Property<String> database;
And the call site would simply be:
final String databaseValue = database.get();
You'll say, just pass around the PropertiesConfiguration object. Maybe you're right, but I'd like to provide another abstraction over that, a simpler-to-use one.
I know that with ProxyFactoryBean it is possible to create an AOP proxy for method calls. Is this the right path, or are there better alternatives? Maybe pure Spring AOP/AspectJ?
I don't want to use Spring Cloud or similar dependencies.
Spring Cloud will recreate the beans, so keep in mind that, whatever solution you come up with, if you have another bean which reads this value only once (for instance when it is initialized), it won't re-initialize itself; that is the problem Spring Cloud Config takes care of.
AOP only works at the method level, as I understand it, so you can definitely intercept a call to somebean.getFoo(). But within somebean there is no way to proxy access to the variable itself, somebean.foo. You would have to reset foo every time your PropertiesConfiguration changed, and again keep in mind that if anything else needs the new value of foo, you would need to handle this yourself or bite the bullet and use Spring Cloud.
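If every read goes through a thin wrapper instead of a cached field, no proxying is needed at all. A minimal sketch of that idea (the Property class and its wiring are hypothetical, not a Spring or commons-configuration API):

import org.apache.commons.configuration.PropertiesConfiguration;

public class Property<T> {

    private final PropertiesConfiguration configuration;
    private final String key;
    private final Class<T> type;

    public Property(PropertiesConfiguration configuration, String key, Class<T> type) {
        this.configuration = configuration;
        this.key = key;
        this.type = type;
    }

    // Every call re-reads the hot-reloaded configuration, so callers always
    // see the latest value without any bean being recreated. The cast is a
    // simplistic conversion; commons-configuration also has typed getters
    // such as getString(key).
    public T get() {
        return type.cast(configuration.getProperty(key));
    }
}

Resolving an annotation like @Property("my.config.database") into such instances could be done with a BeanPostProcessor, but anything that copies the returned value into its own field still has the staleness problem described above.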
The overhead of changing things at run-time to avoid a re-deploy should really be thought about carefully. For Netflix this makes sense because they have thousands and thousands of servers, but for smaller players I can't see the justification; the decision adds a lot of complexity and is a nightmare to test.
Do you test changing your configuration at run-time or accept the risk and assume it works?
Do you test changing from A -> B whilst under load of a user performing a transaction to the database?
Do you test other race conditions where foo is changing?
Some things to think about.

string decoupling and field names

I have a number of domain/business objects which, when used in a Hibernate Criteria query, are referenced by the field name as a string, for example:
Criteria crit = session.createCriteria(User.class);
Order myOrdering = Order.desc("firstname");
crit.addOrder(myOrdering);
Where firstname is a field/property of User.class.
I could manually create an enum and store all the strings in there; is there any other way that I am missing that requires less work? (I'll probably forget to maintain the enum.)
I'm afraid there is no good way to do that.
Even if you decide to use reflection, you'll discover the problem only when the query runs.
But there is a slightly better way to discover the problem early: if you use named queries (javax.persistence.NamedQueries), all your queries are compiled as soon as your entities are processed by Hibernate, which basically happens during the server's start-up. So if some object was changed in a way that breaks a query, you'll know about it the next time you start the server and not when the query is actually run.
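For example, a named query like this is parsed when Hibernate processes the entity, so renaming the firstname field breaks at start-up instead of at query time (a brief sketch; the entity and query name are just for illustration):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.NamedQuery;

@Entity
@NamedQuery(name = "User.byFirstname",
            query = "select u from User u order by u.firstname desc")
public class User {

    @Id
    private Long id;

    private String firstname;
}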
Hope it helps.
This is one of the things that irritates me about Hibernate.
In any case, I've solved this in the past using one of two mechanisms: either customizing the templates used to generate base classes from Hibernate config files, or interrogating my Hibernate classes for annotations/properties and generating appropriate enums, classes, constants, etc. from that. It's pretty straightforward.
It adds a step to the build process, but IMO it was exactly what I needed when I did it. (The last few projects I haven't done it, but for large, multi-dev things I really like it.)
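As an illustration, the generated artifact can be as simple as a constants class that the build step rewrites from the mapped entities (a hypothetical sketch; the names are made up):

// Generated from User.class by the build step (hypothetical).
public final class UserFields {

    public static final String FIRSTNAME = "firstname";

    private UserFields() {
    }
}

The criteria code then becomes Order.desc(UserFields.FIRSTNAME), and a renamed field shows up as a compile error wherever the old constant is used instead of failing at query time.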

Using Dependency Injection in Spring to replace Factory pattern

I am currently working on an application in which an instance of the domain object D is injected into the application. The domain object can contain many classes together in different combinations and permutations, as defined by its bean, leading to many different final objects D, which I refer to as different versions of D. For a given version of D, I have to fill in the primitive values in it and then save it to the database. Saving it to the database is pretty simple using JPA and Hibernate. The problem is filling in the values in D. The values are fetched over the network using SNMP and then filled in. For each version of D there is a different strategy to follow, since each version of D may have a different MIB. I am currently following the factory pattern: the factory takes a version of D and returns a valueRetriever specific to that version of D, which is then used to fetch the values and fill D.
The other obvious way is to inject a configuration retriever in with D and then use it to retrieve the configuration. But I also need to use the retriever at runtime to re-fetch the configurations, which makes it necessary to store the retriever in the database too, hence creating a new table for each retriever, which currently seems like an overhead.
My question is: is there a better way to retrieve the configurations, i.e. to obtain a valueRetriever for the above scenario using dependency injection?
Edit: Can AOP be of any use here?
It seems that some of the objects you need to create have complex creation logic. You may want to look at the Spring FactoryBean interface, since a FactoryBean can fetch all the complex details over the network while allowing you to create an instance and then inject it into other beans.
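A minimal sketch of that idea (DomainObject and ValueRetriever are hypothetical stand-ins for D and its MIB-specific retrieval strategy):

import org.springframework.beans.factory.FactoryBean;

// Hypothetical stand-ins for the domain object D and its retrieval strategy.
class DomainObject {
}

interface ValueRetriever {
    void fill(DomainObject d); // e.g. fetch values over SNMP and populate D
}

public class DomainObjectFactoryBean implements FactoryBean<DomainObject> {

    private final ValueRetriever retriever;

    public DomainObjectFactoryBean(ValueRetriever retriever) {
        this.retriever = retriever;
    }

    @Override
    public DomainObject getObject() throws Exception {
        DomainObject d = new DomainObject();
        retriever.fill(d); // the complex creation logic stays inside the factory
        return d;
    }

    @Override
    public Class<?> getObjectType() {
        return DomainObject.class;
    }

    @Override
    public boolean isSingleton() {
        return true;
    }
}

Each version of D can then be declared as its own bean definition wired with the matching retriever, and consumers simply have the finished object injected.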
The basis for Spring's DI is the Bean Factory/Application Context, so it's entirely possible to replace what you're doing.
The difference will be that you'll have to be able to put all your permutations into the Spring configuration and give control over to the application context. If you can't do that, perhaps the solution you've got is preferred.
UPDATE: I would start to fear that your Spring solution is adding in too many unfamiliar technologies into what might be an overly complicated situation.
Take a breath and think "simple".
I wouldn't worry about the database for now. The Spring application context will be the database if you can get all the combinations you need into the bean factory. I'm assuming these configurations are read-only and not altered once you declare them. If that's not the case all bets are off.

What's the best way to read a UDT from a database with Java?

I thought I knew everything about UDTs and JDBC until someone on SO pointed out some details of the Javadoc of java.sql.SQLInput and java.sql.SQLData to me. The essence of that hint was (from SQLInput):
An input stream that contains a stream of values representing an instance of an SQL structured type or an SQL distinct type. This interface, used only for custom mapping, is used by the driver behind the scenes, and a programmer never directly invokes SQLInput methods.
This is quite the opposite of what I am used to doing (which is also used and stable in production systems with the Oracle JDBC driver): implement SQLData and provide this implementation in a custom mapping to
ResultSet.getObject(int index, Map mapping)
The JDBC driver will then call back on my custom type using the
SQLData.readSQL(SQLInput stream, String typeName)
method. I implement this method and read each field from the SQLInput stream. In the end, getObject() will return a correctly initialised instance of my SQLData implementation holding all data from the UDT.
To me, this seems like the perfect way to implement such a custom mapping. Good reasons for going this way:
I can use the standard API, instead of using vendor-specific classes such as oracle.sql.STRUCT, etc.
I can generate source code from my UDTs, with appropriate getters/setters and other properties
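A minimal sketch of the SQLData approach described above (the UDT name AUTHOR_T and its two VARCHAR attributes are assumptions):

import java.sql.SQLData;
import java.sql.SQLException;
import java.sql.SQLInput;
import java.sql.SQLOutput;

public class Author implements SQLData {

    private String firstName;
    private String lastName;

    @Override
    public String getSQLTypeName() {
        return "AUTHOR_T"; // the database UDT this class maps to (assumed name)
    }

    @Override
    public void readSQL(SQLInput stream, String typeName) throws SQLException {
        // Attributes must be read in the order they are declared in the UDT.
        firstName = stream.readString();
        lastName = stream.readString();
    }

    @Override
    public void writeSQL(SQLOutput stream) throws SQLException {
        stream.writeString(firstName);
        stream.writeString(lastName);
    }
}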
My questions:
What do you think about my approach, implementing SQLData? Is it viable, even if the Javadoc states otherwise?
What other ways of reading UDTs in Java do you know of? E.g. what does Spring do? What does Hibernate do? What does JPA do? What do you do?
Addendum:
UDT support and integration with stored procedures is one of the major features of jOOQ. jOOQ aims at hiding the more complex "JDBC facts" from client code, without hiding the underlying database architecture. If you have similar questions like the above, jOOQ might provide an answer to you.
The advantage of configuring the driver so that it works behind the scenes is that the programmer does not need to pass the type map into ResultSet.getObject(...) and therefore has one less detail to remember (most of the time). The driver can also be configured at runtime using properties to define the mappings, so the application code can be kept independent of the details of the SQL type to object mappings. If the application could support several different databases, this allows different mappings to be supported for each database.
Your method is viable; its main characteristic is that the application code uses explicit type mappings.
In the behind-the-scenes approach, the ResultSet.getObject(int) method will use the type mappings defined on the connection rather than those passed by the application code in ResultSet.getObject(int index, Map mapping). Otherwise the approaches are the same.
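A short sketch of that behind-the-scenes variant, registering the mapping once on the connection so that plain getObject(int) applies it (AUTHOR_T and Author are the assumed names from the sketch above; some drivers may want a fresh map rather than the one returned by getTypeMap()):

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Map;

public class UdtTypeMapSetup {

    public static void register(Connection connection) throws SQLException {
        Map<String, Class<?>> typeMap = connection.getTypeMap();
        typeMap.put("AUTHOR_T", Author.class);
        connection.setTypeMap(typeMap);
        // From here on, resultSet.getObject(columnIndex) can return an Author
        // without passing an explicit map at each call site.
    }
}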
Other Approaches
I have seen another approach used with JBoss 4 based on these classes:
org.jboss.ejb.plugins.cmp.jdbc.JDBCParameterSetter
org.jboss.ejb.plugins.cmp.jdbc.JDBCResultSetReader.AbstractResultSetReader
The idea is the same but the implementation is non-standard (it probably pre-dates the version of the JDBC standard defining SQLData/SQLInput).
What other ways of reading UDTs in Java do you know of? E.g. what does Spring do? What does Hibernate do? What does JPA do? What do you do?
An example of how something similar to this can be done in Hibernate/JPA is shown in this answer to another question:
Java Enums, JPA and Postgres enums - How do I make them work together?
I know what Spring does: you write implementations of their RowMapper interface. I've never used SQLData with Spring. Your post was the first time I'd ever heard of or thought about that interface.
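For completeness, a brief sketch of that Spring RowMapper style, which reads the columns by hand and involves no driver-level mapping at all (the column names and result type are assumptions):

import java.sql.ResultSet;
import java.sql.SQLException;
import org.springframework.jdbc.core.RowMapper;

// Hypothetical flat result type for the mapped row.
class AuthorRow {
    final String firstName;
    final String lastName;

    AuthorRow(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }
}

public class AuthorRowMapper implements RowMapper<AuthorRow> {

    @Override
    public AuthorRow mapRow(ResultSet rs, int rowNum) throws SQLException {
        return new AuthorRow(rs.getString("first_name"), rs.getString("last_name"));
    }
}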

How to know what made a hibernate persisted object dirty?

An object I mapped with Hibernate has strange behavior. In order to know why the object behaves strangely, I need to know what makes that object dirty. Can somebody help and give me a hint?
The object is a Java class in a Java/Spring context. So I would prefer an answer targetting the Java platform.
Edit: I would like to gain access to the Hibernate dirty state and how it changes on an object attached to a session. I don't know how a piece of code would help.
As for the actual problem: inside a transaction managed by a Spring TransactionManager, I do some (read) queries on objects, and without any explicit save on these objects they are saved by the TransactionManager because Hibernate thinks that some of them (and not all) are dirty. Now I need to know why Hibernate thinks those objects are dirty.
I would use an interceptor. The onFlushDirty method gets the current and previous state so you can compare them. Implement the Interceptor interface and extend EmptyInterceptor, overriding onFlushDirty. Then add an instance of that class using configuration.setInterceptor (Spring may require you to do this differently). You can also add an interceptor to the session rather than at startup.
Here is the documentation on interceptors.
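A minimal sketch of such an interceptor, logging which properties differ between the previous and current state at flush time (the logging itself is just an example):

import java.io.Serializable;
import java.util.Objects;

import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

public class DirtyLoggingInterceptor extends EmptyInterceptor {

    @Override
    public boolean onFlushDirty(Object entity, Serializable id,
                                Object[] currentState, Object[] previousState,
                                String[] propertyNames, Type[] types) {
        for (int i = 0; i < propertyNames.length; i++) {
            Object previous = previousState == null ? null : previousState[i];
            Object current = currentState[i];
            if (!Objects.equals(previous, current)) {
                System.out.printf("Dirty property on %s#%s: %s (%s -> %s)%n",
                        entity.getClass().getSimpleName(), id,
                        propertyNames[i], previous, current);
            }
        }
        return false; // observe only; the entity state is not modified here
    }
}

Register it with configuration.setInterceptor(new DirtyLoggingInterceptor()) as described above, or attach it to the session.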
1) Create a test case or similar, so you can reproduce the problem with a single click.
2) Enable logging for org.hibernate and check the log for the string "dirty" (you probably don't need all of org.hibernate, but I don't know the exact logger).
3) Find two spots in the program, one where the entity is not dirty and one where it is dirty. Find the middle of the code between the two points and put a logging statement there, logging the dirty value. Continue with this strategy until you have narrowed it down to a single line.
4) Check out the Hibernate code, find the code that does the dirty checking, and use a debugger to step through it.
Assuming that the state of the object cannot be accessed directly (e.g. no public or package protected fields) and is not fiddled with by reflection, you can put a breakpoint at the start of all of the object's methods and run through the scenario that makes the object dirty in the debugger.
