I want to develop a simple ORM which performs CRUD functionality. Should I use reflection?
Do libraries like Hibernate use reflection?
Will using reflection cause the speed to drop by a large extent?
Yes, Hibernate uses reflection and annotations (or XML configuration files), but it only indexes and reads all of the meta information once, at startup. I would recommend, though, looking at the existing ORM solutions first before you start rolling your own.
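As a rough illustration of that "scan once at startup" idea, here is a sketch that caches annotated fields in a map; the @MappedColumn annotation and MetadataRegistry class are invented for the example and are not Hibernate's own types:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.lang.reflect.Field;
    import java.util.HashMap;
    import java.util.Map;

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface MappedColumn {
        String name();
    }

    class MetadataRegistry {
        // Cache of entity class -> (column name -> field), built once at startup
        private final Map<Class<?>, Map<String, Field>> cache = new HashMap<>();

        void register(Class<?> entityClass) {
            Map<String, Field> columns = new HashMap<>();
            for (Field field : entityClass.getDeclaredFields()) {
                MappedColumn column = field.getAnnotation(MappedColumn.class);
                if (column != null) {
                    field.setAccessible(true);       // pay the reflection cost here, once
                    columns.put(column.name(), field);
                }
            }
            cache.put(entityClass, columns);
        }

        Map<String, Field> columnsOf(Class<?> entityClass) {
            return cache.get(entityClass);           // later lookups are plain map reads
        }
    }

After startup, CRUD code only consults the cached map, so reflection never sits on the per-query hot path.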
A simple form of ORM is the DAO (Data Access Object) pattern. It lets you specify your CRUD operations very cleanly.
For more ORM patterns and methodology, read Martin Fowler's book Patterns of Enterprise Application Architecture.
Also, you can build on the existing JPA (Java Persistence API) specification and write your own implementation of it.
Reflection, dynamic proxies, cglib, ASM, Javassist: all are used in ORM tools.
But you really don't want to create a new one, because you can't create a simple ORM. ORMs aren't simple to create, and you will realize that once you reach a certain point. So don't waste your time. Use an existing one. There are plenty, some more complicated, some less complicated (and less powerful).
You can google for "simple ORM" and you will have plenty of choices that are (more or less) easy to use, though not to implement.
Well, not so long ago I wrote an ORM layer for GAE named gaedo. The framework is modular enough to also fit relational databases. Luckily, it was my third attempt at such a job. So, here is what is needed and why.
Reflection is the root of all ORM mapping tools, since it allows you to explore classes looking for their attribute names and values. That is its first use. It will also allow you to instantiate objects loaded from your datastore, provided your bean has a convenient constructor (usually ORM frameworks rely upon JavaBeans, since those guarantee a no-arg constructor exists). Finally, reflection will allow you to load values from the datastore into beans, which is, I think, the most important thing. Unfortunately, you'll quickly be faced with the issue of a query loading the whole database, which will require the next two steps.
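As a sketch of that hydration step (the Map stands in for whatever row or entity type the datastore returns; a no-arg constructor and fields named like the columns are assumed, per the JavaBean convention mentioned above):

    import java.lang.reflect.Field;
    import java.util.Map;

    public class ReflectionHydrator {
        public static <T> T hydrate(Class<T> beanClass, Map<String, Object> row) throws Exception {
            T bean = beanClass.getDeclaredConstructor().newInstance(); // relies on the no-arg constructor
            for (Field field : beanClass.getDeclaredFields()) {
                if (row.containsKey(field.getName())) {
                    field.setAccessible(true);
                    field.set(bean, row.get(field.getName()));         // copy the stored value into the bean
                }
            }
            return bean;
        }
    }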
Considering graph loading, you'll quickly need to rely upon dynamic proxies to create lazily loadable objects. Obviously, if you rely solely upon the JDK, you will only be able to use that on objects implementing well-known interfaces (collections and maps are very good examples of objects benefiting from dynamic proxies implementing their interfaces).
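A sketch of that lazy-loading trick using nothing but JDK dynamic proxies (the Supplier stands in for whatever callback actually hits the datastore):

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;
    import java.util.List;
    import java.util.function.Supplier;

    public class LazyCollections {
        @SuppressWarnings("unchecked")
        public static <T> List<T> lazyList(Supplier<List<T>> loader) {
            InvocationHandler handler = new InvocationHandler() {
                private List<T> target;                              // loaded on first use
                @Override
                public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                    if (target == null) {
                        target = loader.get();                       // the datastore is hit here, not before
                    }
                    return method.invoke(target, args);
                }
            };
            return (List<T>) Proxy.newProxyInstance(
                    List.class.getClassLoader(), new Class<?>[] { List.class }, handler);
        }
    }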
Finally, annotations will be of smaller use. They'll allow you to define key elements (used to generate the database key for an object, as an example), define parent-child relationships, or even define the lazy-loading strategy, in association with the previously mentioned dynamic proxies.
This is an interesting, but mostly useless, research effort. Interesting, because it will teach you tons of concepts regarding reflection, proxies, and all those things people ignore and tend to consider reserved for so-called dynamic languages.
But useless, because you'll always encounter corner cases that require you to hack your code.
As Emmanuel Bernard said on "Les Cast Codeurs" (a French Java podcast), I think, each year someone comes along with a "reimplementation" of Hibernate. And each year, that implementation reveals itself to be lacking some important pieces, like transactions (local or distributed), cache handling, and so on.
So, try to code it, but never forget that it may soon be dropped because it overlaps too much with established frameworks.
To answer the last part of your question: yes, reflection is a serious performance hit. All the work that you normally have the compiler do for you instead has to be done at run time, so use reflection sparingly (cache the reflected class metadata, for example, so you only build it once, preferably at startup).
I haven't looked through Hibernate's code, but I expect it uses reflection as well, but as optimized as possible.
My recommendation is that you write a working dead-simple solution first, then start optimizing as you go along.
Try JLibs-JDBC.
This is a simple ORM which doesn't use reflection or XML configuration.
I'm looking for some best practices for developing a clean domain object model. By 'clean', I mean a domain (or business) model that isn't cluttered up with a bunch of DB persistence, XML/JSON serialization/deserialization, and dependency injection stuff. For example, I've read through several 'how-to' tutorials about implementing a REST API. When they get to the point of implementing the 'model', they all end up with annotations for transforming the 'pojo/poco' into the XML/JSON view via [XmlAttribute], or for making a field more user friendly in the UI via a [Display/Display Type] attribute. The platform doesn't matter; I've seen the same cluttering in the Java world (I'm not familiar with other languages).
I'm aware of the Data Transfer Object design pattern, as those objects could carry these attributes, but is that the only method? DTOs seem like they would require a lot of object mapping to and from the view and the business layer. If that's what it takes to have a clean domain layer, then great; I'm just looking for feedback.
Thanks
The simple truth is that all of that "annotation clutter" rose up out of a rejection of all the "XML clutter".
Taking both JPA and JAXB in Java as examples, all of those annotations can be replaced by external XML files describing the same metadata for the underlying frameworks. In both cases, the frameworks offer "ok" defaults for unannotated data, but the truth is that few are really satisfied with the convention-over-configuration default mappings the frameworks offer, and thus more explicit configuration needs to be done.
And all of that configuration has to be captured somewhere, somehow.
For many folks and many applications, embedded metadata via annotations is cleaner and easier to use than the external XML mapping approach.
In the end, from a Java perspective, the domain models are "just" objects, the annotations have no bearing, in general, outside of the respective frameworks. But in truth, there's always some coupling with the frameworks, and they have a tendency to influence implementation details within the model. These aren't particularly glaring, but the simple fact is that when there may be two ways to model something, and one way is "more friendly" to the framework, for many that's enough to tilt the decision to go in that direction rather than fighting for purity above the framework.
As part of my Java program, I need to run a lot of queries against an (Oracle) database.
Currently, we write a mix of SQL and Java, which (I know) is a bad, bad thing.
What is the right way to handle something like this? If possible, include examples.
Thank you.
EDIT:
A bit more information about the application: it is a web application that derives its content mainly from the database (it takes user input and renders the content to be seen next based on what the database believes to be true).
The biggest concern I have with how it's done today is that the mix of Java code and SQL queries looks "out of place" when coupled as tightly as it is (queries hardcoded as part of the source code).
I am looking for a cleaner way to handle this situation, one that would improve the maintainability and clarity of the project at hand.
For what you've described, incorporating an object-relational mapper (ORM) or rewriting as stored procedures is probably more work than you want to embrace. Both have non-trivial learning curves.
Instead, a good practice is consolidating SQL into a class per table or purpose. Take a look at the Table Data Gateway and Data Access Object design patterns to see how this is done in practice.
The benefits of this approach are myriad. You are better positioned for reuse because queries are in one spot. Client code becomes more readable as you replace several lines of JDBC and SQL with a method call (e.g. userTableDataGateway.getContentToShow(pageId)). Finally, this will help you see more clearly the problem an ORM helps solve.
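A minimal sketch of such a table data gateway with plain JDBC (the table, the columns, and the getContentToShow method are invented to match the example above):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;
    import javax.sql.DataSource;

    public class UserTableDataGateway {
        // All SQL touching this table lives in this one class
        private static final String CONTENT_TO_SHOW =
                "SELECT content FROM user_content WHERE page_id = ?";

        private final DataSource dataSource;

        public UserTableDataGateway(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public List<String> getContentToShow(long pageId) throws SQLException {
            List<String> result = new ArrayList<>();
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(CONTENT_TO_SHOW)) {
                ps.setLong(1, pageId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        result.add(rs.getString("content"));
                    }
                }
            }
            return result;
        }
    }

Client code then calls userTableDataGateway.getContentToShow(pageId) and never sees SQL at all.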
Well, one thing you could consider is an Object Relational Mapper (for example, Hibernate). This would allow you to map your database schema to Java objects, which would generally clean up your Java code.
However, if performance and speed is of the essence, you might be better off using a plain JDBC driver.
This would of course also depend upon the task your application is trying to accomplish. If, for example, you need to do batch updates based on a CSV file, I might go with a pure JDBC solution. If you're designing a web application, I would definitely go with an ORM solution.
Also, note that a pure JDBC solution would involve having SQL in your Java code. For that matter, you would have some form of SQL, be it HQL, JPQL, or plain SQL, in any ORM solution as well. The point being, there's nothing wrong with some SQL in your Java application.
Edit in response to the OP's edits
If I were writing a web application from scratch, I would use an ORM. However, since you already have a working application, making the transition from a pure JDBC solution to an ORM would be pretty painful. It would clean up your code, but there is a significant learning curve involved and it takes quite a bit of set-up. Some of the pain from setting-up would be alleviated if you are working with some sort of bean-management system, like Spring, but it would still be pretty significant.
It would also depend on where you want to go with your application. If you plan on maintaining and adding to this code for a significant period, a refactor may be in order. I would not, however, recommend a re-write of your system just because you don't like having SQL hard-coded in your application.
Based on your updates, I concur with Tim Pote's edits regarding the learning curve to integrate an ORM. However, instead of integrating an ORM, you could do things like use prepared statements whose SQL you keep in a properties file, or even store your queries in the DB itself so that subtle updates to them can be picked up immediately without restarting your app server. Both of these strategies would declutter your Java code of hard-coded SQL.
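For instance, here is a sketch of the properties-file idea; assume a queries.properties file on the classpath containing a line like find.user.by.id=SELECT id, name, email FROM users WHERE id = ? (the file name, key, and query are invented for illustration):

    import java.io.InputStream;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.Properties;

    public class QueryRepository {
        private final Properties queries = new Properties();

        public QueryRepository() throws Exception {
            // The SQL text lives outside the Java source and is loaded once
            try (InputStream in = getClass().getResourceAsStream("/queries.properties")) {
                queries.load(in);
            }
        }

        public String findUserNameById(Connection connection, long id) throws Exception {
            String sql = queries.getProperty("find.user.by.id");
            try (PreparedStatement ps = connection.prepareStatement(sql)) {
                ps.setLong(1, id);                       // parameters are bound, never concatenated
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString("name") : null;
                }
            }
        }
    }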
Ultimately though, I don't think there's a clear answer to your question, because there's nothing inherently wrong with what you're doing. It's just a bit inflexible, but perhaps acceptably so for your circumstances.
That said, I'm posting this as an answer!
I'm not sure of the state of the project, but you may also want to look at an 'alternative' object-relational mapper called MyBatis. It has a lower learning curve than the popular Hibernate or EclipseLink and lets you actually write the queries, so you know what the code is doing. That is, if an ORM is your thing.
I'm working with JPA right now (mainly because it is the current trend and it needs to be learned). JPA is the Java standard for ORM. If you are going to learn what is currently the typical ORM way of doing things, JPA is probably the best way to go. Frameworks like Hibernate and EclipseLink implement it. Depending on which framework you choose to underpin your JPA app, you can use proprietary features, but that will tie you to that framework pretty much for good. JPA is not hard to start using, but it can be very cryptic when it doesn't work, since it obfuscates the interaction with the database quite a bit (mind you, it does allow the option of using native SQL queries, but that somewhat negates the reasons people say JPA-style DB access is good).
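For a flavour of what that looks like in code, here is a minimal JPA sketch; the Customer entity and the "demo-unit" persistence unit are invented names, and a real application also needs a persistence.xml plus a provider such as Hibernate or EclipseLink on the classpath:

    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Persistence;

    @Entity
    class Customer {
        @Id @GeneratedValue Long id;
        String name;
    }

    public class JpaSketch {
        public static void main(String[] args) {
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("demo-unit");
            EntityManager em = emf.createEntityManager();

            em.getTransaction().begin();
            Customer c = new Customer();
            c.name = "Alice";
            em.persist(c);                               // INSERT is generated by the provider
            em.getTransaction().commit();

            // JPQL works against the object model rather than the tables
            List<Customer> all =
                    em.createQuery("SELECT c FROM Customer c", Customer.class).getResultList();

            // Native SQL is still available, but it bypasses much of what JPA buys you
            Object count = em.createNativeQuery("SELECT COUNT(*) FROM Customer").getSingleResult();

            em.close();
            emf.close();
        }
    }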
And yes, there are still people using JDBC with prepared statements. Normally there are practices/patterns you will adopt when programming with plain old JDBC that act like a very, very minimalist ORM, or really, something closer to MyBatis. Again, if you go this route, use prepared statements; they negate a number of dangers.
This is a religious kind of question, so you will hear a lot of proselytizing given the way you wrote it. In fact, someone might shoot your question down for this. I think the only thing you could ask that might be worse is whether emacs or vi is better, in a crowd of Unix geeks.
Your question seems too generic; however, if you have a mix of direct SQL on Oracle and SQL embedded in Java, it would be better to invest some time in an ORM like Hibernate or Apache Cayenne. An ORM is a separate design approach that segregates database operations from the Java side: the DB interactions and DB design are implemented in the ORM, and all the access and business logic resides in Java. This is just a suggestion; I'm still unclear about your actual problem.
The biggest concern I have with how it's done today is that the mix of Java code and SQL queries looks "out of place" when coupled as tightly as it is (queries hardcoded as part of the source code).
This assumption of yours is not really "correct", in the sense that there is no true/false answer to your question. The question linked here explains that there are several ways of dealing with mixing Java and SQL:
Java Programming - Where should SQL statements be stored?
It essentially distinguishes between SQL being:
Hardcoded in business objects
Embedded in SQLJ clauses
Encapsulated in separate classes e.g. Data Access Objects
Metadata driven (decouple the object schema from the data schema - describe the mappings between them in metadata)
Put into external files (e.g. Properties or Resource files)
Put into stored procedures
I'll add to that:
Embedded in CriteriaQuery statements (a sketch follows this list)
Embedded in jOOQ statements.
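As a sketch of the CriteriaQuery option, where the query is assembled from Java objects rather than an SQL string (the User entity and its status attribute are invented purely for illustration):

    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.criteria.CriteriaBuilder;
    import javax.persistence.criteria.CriteriaQuery;
    import javax.persistence.criteria.Root;

    @Entity
    class User {
        @Id Long id;
        String status;
    }

    public class CriteriaSketch {
        // The query is built with Java objects, so no SQL string appears anywhere
        public List<User> activeUsers(EntityManager em) {
            CriteriaBuilder cb = em.getCriteriaBuilder();
            CriteriaQuery<User> query = cb.createQuery(User.class);
            Root<User> user = query.from(User.class);
            query.select(user).where(cb.equal(user.get("status"), "ACTIVE"));
            return em.createQuery(query).getResultList();
        }
    }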
Apache Cayenne is one of the easiest ORMs to use. It comes with Cayenne Modeler to model data objects and do the mappings. I would recommend Cayenne for a beginner in ORM. It can create the mapping classes and sync with the DB through the modeler.
I have taken over some code that has been using the Firestorm DAO code generator from CodeFutures. I believe that the license for this is going to be up soon, and I was wondering if anyone could recommend any alternatives, open source or not, so that I can get an idea of what's out there and make a better-informed decision.
This is probably a bit late for your concrete decision in April, but if you are used to Firestorm DAO, with generated code for every database entity, you might find it easy to switch over to jOOQ. jOOQ omits the "DAO layer" entirely, generating classes that directly represent your relational model. This is generally referred to as the Active Record pattern. Instead of writing DAOs, you can query your database directly from Java using jOOQ's built-in DSL, similar to Microsoft's LINQ.
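A rough sketch of that DSL style is below. jOOQ normally generates table and field classes from your schema; the plain table()/field() calls here stand in for those generated artefacts, and the AUTHOR table and LAST_NAME column are just examples:

    import static org.jooq.impl.DSL.field;
    import static org.jooq.impl.DSL.table;

    import java.sql.Connection;
    import org.jooq.DSLContext;
    import org.jooq.Record;
    import org.jooq.Result;
    import org.jooq.SQLDialect;
    import org.jooq.impl.DSL;

    public class JooqSketch {
        public Result<Record> authorsNamed(Connection connection, String lastName) {
            DSLContext create = DSL.using(connection, SQLDialect.ORACLE);
            return create.select()
                         .from(table("AUTHOR"))
                         .where(field("LAST_NAME").eq(lastName))
                         .fetch();   // the query is composed and executed from Java
        }
    }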
I agree with JavadocMD, that JPA (or Hibernate) is what is currently considered "best practice". But maybe you don't want to add object-relational mapping to your application for well-known reasons...
I would strongly suggest not switching off of Firestorm. Firestorm makes writing DAOs a thing of the past for about 90% of the use cases. For all the other cases, just subclass the DAO that Firestorm generates and add the functionality you want, using the inherited helper methods. You don't need a license for this; you can use the free license.
No, I'm not from Firestorm, but Firestorm helped me get my project off the ground with about a 40% time savings. Once I get into more complex queries, it will save me about 20% of the dev time, but hey, that's still a 20% savings over other solutions. Also, it translates into raw JDBC, so when something goes wrong, it's much easier to debug if you're familiar with ODBC/JDBC.
One option would be to completely change directions and go with a persistence framework like JPA. You create your Java object model, add the appropriate annotations, and JPA handles everything else for you without any messy generated code.
Granted, depending on the specifics of your architecture and business situation this kind of change might not be feasible for you. However if you can manage it, JPA seems to be much more in line with current best-practices for Java persistence.
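A small sketch of what "add the appropriate annotations" can look like, with entity, table, and column names invented for illustration:

    import java.util.ArrayList;
    import java.util.List;
    import javax.persistence.CascadeType;
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.ManyToOne;
    import javax.persistence.OneToMany;
    import javax.persistence.Table;

    @Entity
    @Table(name = "purchase_orders")
    public class PurchaseOrder {

        @Id
        @GeneratedValue
        private Long id;

        @Column(name = "customer_name", nullable = false)
        private String customerName;

        // One order has many lines; the provider derives the join from the annotations
        @OneToMany(cascade = CascadeType.ALL, mappedBy = "order")
        private List<OrderLine> lines = new ArrayList<>();

        protected PurchaseOrder() { }        // JPA requires a no-arg constructor

        public PurchaseOrder(String customerName) {
            this.customerName = customerName;
        }
    }

    @Entity
    class OrderLine {
        @Id @GeneratedValue Long id;
        @ManyToOne PurchaseOrder order;
        String sku;
        int quantity;
    }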
I've used OpenJPA in a production environment: http://openjpa.apache.org/
And we considered TopLink (Oracle's implementation) but ran into a few issues that I can't recall. http://www.oracle.com/technology/products/ias/toplink/index.html
Hi everybody. Let me do a bit of "concept mining" here: I am involved in maintaining/extending an application whose functionality is distributed across several servers. For example, we have a machine running the ApplicationServer, another running the DataServer, and so on.
This application has a web interface. The current UI is implemented entirely in Java, and in a way that makes adding new functionality hard. One of my goals is to extend this interface, and we're considering shifting the whole thing to another platform, like Rails, for example.
The problem is that the database manipulated by the UI (possibly Rails in the future) is also manipulated by the ApplicationServer (Java).
So, my main question is: both Rails and Java can access databases through their own ORMs (ActiveRecord for Rails, and Hibernate or similar for Java). Is there any way to guarantee that the mappings are consistent?*
Even if the answer is a hard "no", I'd also like to hear your thoughts on how you'd approach this scenario.
I hope the question is clear enough, but warn me if it isn't and I'll edit accordingly. =D
*Edit: per request, I'm extending this explanation. What I mean is: how do we make sure things don't break when someone needs to add a new field to the database and edits the Hibernate mapping because of it? I know that Rails "guesses" the entity attributes pretty much by itself (making things easier), but I was wondering if there was some "magical way" to connect ActiveRecord directly to the Hibernate mapping.
It depends on your case and how important it is to actually ensure that things won't break. I would probably code the Rails app to do its best, and then write a good set of DB integration test cases for Rails to test against breakage.
Because Hibernate needs a mapping configuration whereas Rails uses the database layout directly, it's best to make the DB changes on the Hibernate/mapped-Java-class side and then run the test suite on the Rails side after the changes.
This might be coming too late to the party, but ActiveJDBC is an ActiveRecord-like implementation in Java which reads metadata and configures itself pretty much the same way ActiveRecord does: http://code.google.com/p/activejdbc/
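As a very rough sketch of that ActiveRecord-like style (exact package and method names are from memory of a later org.javalite release and may differ between ActiveJDBC versions, so treat the API details as assumptions and check the project documentation):

    import org.javalite.activejdbc.Base;
    import org.javalite.activejdbc.Model;

    // Table and column names are inferred from metadata read at runtime,
    // much as ActiveRecord does; a "people" table with a "first_name" column is assumed here.
    public class Person extends Model { }

    class ActiveJdbcSketch {
        public static void main(String[] args) {
            Base.open("oracle.jdbc.OracleDriver",
                      "jdbc:oracle:thin:@localhost:1521:xe", "user", "password");
            try {
                Person p = new Person();
                p.set("first_name", "Ada");              // no hand-written getters/setters
                p.saveIt();                              // INSERT built from the discovered metadata

                Person found = Person.findById(p.getId());
                System.out.println(found.get("first_name"));
            } finally {
                Base.close();
            }
        }
    }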
You should look at using DataMapper instead of ActiveRecord. DataMapper and Hibernate follow roughly the same pattern, so the mappings would be similar. Also, DataMapper defines the mapping in the class itself rather than inferring it from the database. This is much closer to Hibernate, and you could probably write a simple hbm-to-dm converter and just eval the output at the top of your model classes. If you didn't design your original data model with Rails in mind, none of the convention-over-configuration standards are likely to be there; with DataMapper, the default seems to be to map properties and relationships the way Hibernate does.
Another idea: if you use Hibernate annotations instead of XML mappings, maybe you could use JRuby as the bridge to build the Ruby model from the Java one.
But either way, if you have good tests, it should be obvious when a data model change breaks something.
I've observed the strange fact (based on the questions in the hibernate tag) that people are still actively using xml files instead of annotations to specify their ORM (Hibernate/JPA) mappings.
There are a few cases, where this is necessary:
you are using classes that are provided, and you want to map them.
you are writing an API, whose domain classes can be used without a JPA provider, so you don't want to force a JPA/Hibernate dependency.
But these are not common cases, I think.
My assumptions are:
people are used to xml files and don't feel comfortable / don't want to bother learning to use the annotation approach.
Java pre-1.5 is forced upon the project and there is nothing to do about it
people don't know that annotations are a full-featured replacement of xml mapping.
legacy systems are supported and hence changing the approach is considered risky
people fear that mixing annotations (meta-information) with their classes is wrong.
Any other possible explanations?
The domain layer and the persistence layer are considered by some to be separate concerns. Using the pure XML approach keeps the two layers as loosely coupled as possible; using annotations couples the two layers more tightly as you have persistence-related code embedded in the domain code.
Lack of an overview of what's been mapped; you need to dig into the source code.
people don't know that annotations are a full-featured replacement of xml mapping.
Ah, but they're not. Three cases off the top of my head (there are probably more) that you can't do (well) with annotations:
Use a formula as part of an association key (admittedly, rather esoteric).
Join-via-subselect: @Loader is not an adequate replacement. Not too common but quite useful. Envers provides a viable alternative approach.
Losing column order for schema generation. This one's an absolute killer. I understand why it's done this way, but it still annoys me no end.
Don't get me wrong, though: annotations are great; doubly so when they're coupled with Validator (though, again, point 3 above kills the buzz on this one). They also provide certain aspects of functionality that XML mappings do not.
Using XML to complement the annotations, where environment or system specific configuration is needed.
Some information is carried nicely in annotations, such as the cardinality of relationships between entities. These annotations provide more detail about the model itself, rather than how the model relates to something else.
However, bindings, whether to a persistence store or XML or anything else, are extrinsic to the model. They change depending on the context in which the model is used. Including them in the model is as bad as using inline style definitions in HTML. I use external binding (usually—though not necessarily—XML) documents for the same reasons I reference an external CSS.
I initially found the annotation syntax very weird. It looks like line noise and mixes in with where I usually put comments. It's vastly better than dealing with the XML files, though, because all of the changes are in one place: the model file. Perhaps one limitation of annotations is possible collision with other annotations, but I haven't seen that yet.
I think the real reason that it isn't used more is that it isn't really considered the default. You have to use an additional jar file. It should be part of core and the XML approach should be the optional one.
I've switched to annotations, but sometimes I miss the XML mappings, mainly because the documentation was so much more comprehensive, with examples of many scenarios. With annotations, I stick to pretty basic mappings (which is great if you control the data and object model), but I've done some very complex things in the XML that I don't know if I could replicate in the annotations.
So what if you want to deploy your class to multiple datastores, and you want to annotate column definitions into it? Different datastores have different conventions, etc., and using XML is the only sane option in that situation: you can have one mapping file for MySQL, one for Derby, one for Oracle, or whatever. You can still put the basic persistence/relation annotations in if you wish, but the schema-specific stuff would go into XML in that case.
--Andy (DataNucleus)
I have a new one: http://www.summerofnhibernate.com/
It is a very nice screencast series that does not yet cover annotations. I have written some apps with it to learn the basics, not for my job but out of curiosity, and I never migrated to annotations. The series was suggested as still relevant on SO. I will still migrate to annotations if I get some more spare time, but for the time being I could be one of the people asking questions about it.
I worked on a project where the database would change very frequently, and we had to regenerate the Java files and configuration files each time that happened. We did not actually use all the relationships and configurations generated by the Hibernate tool, so basically we used the tool and then modified/tweaked the output.
When you want to modify/tweak the default configurations, it is easier to do so in the XML file than through annotations.
I also feel that the code is much more readable if we do not use annotations. Annotations can really help if the configuration info changes frequently, but take the case of web.xml: how often does the info in that change? So why use annotations for servlets?
We continue to use XML because, for deployed sites, getting a patch (binary code) approved for installation typically takes time that you may not have. Updates to ASCII files (e.g. XML files) are considered configuration changes and not patches...