Currently there are two popular Java object-to-object mapping frameworks that supersede Dozer (http://dozer.sourceforge.net/documentation/mappings.html):
Selma - http://www.selma-java.org/
MapStruct - http://mapstruct.org/
With the exception of this page (http://vytas.io/blog/java/java-object-to-object-mapping-which-framework-to-choose-part-2/) I haven't been able to find much online about which framework is better than the other, or under what circumstances one is preferable. I'm wondering if anyone can shed some light on this. In terms of functionality, based on the documentation they seem to do the same thing.
(Original author of Selma here, so a slightly different point of view.)
Selma and MapStruct do the same job, with some differences. First, it appears that Selma's generated code is just a bit faster than MapStruct's (http://javaetmoi.com/wp-content/uploads/2015/09/2015-09-mapping-objet-objet2.png). The 0.13 release number does not really reflect the maturity of the code: Selma is stable and robust, and has been in use in production for two years.
The main idea behind Selma is to prohibit magic conversions and just automate all mappings without any side effects. When a mapping turns out to be too complex, the developer should handle it themselves using custom mappings or an interceptor.
Selma's footprint is kept as small as possible: it depends only on JavaWriter and the JDK.
Selma only uses statically compiled generated code, with no reflection at runtime and no pseudo-code written in string fields.
You can use composition to build a chain of mappers, and inside a single mapper you can have a global configuration that can be overridden on a per-method basis.
Compiler messages are designed to give the developer early feedback, with tips for solving the issue and learning the API.
In the end, MapStruct is certainly more feature-rich, but Selma gives the developer all the tools needed for complex mapping, with the responsibility of writing the business logic. You may also find one of the two APIs nicer than the other from a user perspective, so the best thing to do is to try both and choose the one you feel more comfortable with. It won't be time-consuming.
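For a rough idea of what this looks like in code, here is a minimal sketch of a Selma mapper. The Person/PersonDto classes are invented for illustration, and the @Mapper annotation and Selma.builder(...) factory are written from memory of the Selma docs, so treat this as a sketch rather than authoritative usage:

import fr.xebia.extras.selma.Mapper;
import fr.xebia.extras.selma.Selma;

// Hypothetical bean and DTO, made up for this sketch
class Person {
    private String firstName;
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
}

class PersonDto {
    private String firstName;
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
}

// Selma's annotation processor generates the implementation at compile time
@Mapper
interface PersonMapper {
    PersonDto asPersonDto(Person in);
}

class SelmaDemo {
    public static void main(String[] args) {
        // Obtain the generated mapper; no reflection is involved at runtime
        PersonMapper mapper = Selma.builder(PersonMapper.class).build();

        Person person = new Person();
        person.setFirstName("Ada");
        System.out.println(mapper.asPersonDto(person).getFirstName());
    }
}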
(Original author of MapStruct here, so naturally I am biased)
Indeed, both projects are based on the same general idea of generating mapping code at compile time; I recommend MapStruct for the following reasons (a small usage sketch follows the list):
Proven and stable codebase: MapStruct is the older of the two and originally came up with the idea of generating mappings. It has been enhanced and polished over quite a long time, based on real-world feedback from usage in many different projects; we released the stable 1.0 Final last year.
Larger developer and user community as per the number of committers (MapStruct, Selma) and user questions (MapStruct, Selma)
Feature-rich (some things supported in MapStruct that I didn't find (to the same extent) in the Selma docs):
Many built-in type conversions, including advanced support for JAXB types such as JAXBElement
Support for default values and constants
Mapping customizations through inline expressions
Sharing configurations across mappers
Nicely integrates with CDI and JSR 330 (in addition to Spring)
Eclipse plug-in available: still a work in progress, but its quickfixes and auto-completions are already very helpful when designing mapper interfaces
IntelliJ plug-in: helps when editing mapper interfaces via auto-completion, go to referenced properties, refactoring support etc.
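For comparison, a minimal MapStruct mapper follows the same compile-time pattern. This sketch is modeled on the Car/CarDto example from the MapStruct reference documentation; the classes themselves are made up here:

import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import org.mapstruct.factory.Mappers;

// Hypothetical domain and DTO classes used only for this sketch
class Car {
    private String make;
    private int numberOfSeats;
    public String getMake() { return make; }
    public void setMake(String make) { this.make = make; }
    public int getNumberOfSeats() { return numberOfSeats; }
    public void setNumberOfSeats(int n) { this.numberOfSeats = n; }
}

class CarDto {
    private String make;
    private int seatCount;
    public String getMake() { return make; }
    public void setMake(String make) { this.make = make; }
    public int getSeatCount() { return seatCount; }
    public void setSeatCount(int n) { this.seatCount = n; }
}

// MapStruct's annotation processor generates CarMapperImpl at compile time
@Mapper
interface CarMapper {
    CarMapper INSTANCE = Mappers.getMapper(CarMapper.class);

    // Properties with differing names are mapped explicitly
    @Mapping(source = "numberOfSeats", target = "seatCount")
    CarDto carToCarDto(Car car);
}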
I'm looking for some best practices for developing a clean domain object model. By 'clean', I mean a domain (or business) model that isn't cluttered up with a bunch of DB persistence, XML/JSON serialization/deserialization, or dependency-injection concerns. For example, I've read through several 'how-to' tutorials about implementing a REST API. When they get to the point of implementing the 'model', they all end up with annotations for transforming the POJO/POCO into the XML/JSON view via [XmlAttribute], or for making a field more user-friendly in the UI via a [Display/DisplayType] attribute. The platform doesn't matter; I've seen the same clutter in the Java world (I'm not familiar with other languages).
I'm aware of the Data Transfer Object design pattern, as those objects could carry these attributes, but is this the only approach? DTOs seem like they would require a lot of object mapping between the view and the business layer. If that's what it takes to have a clean domain layer, then great; I'm just looking for feedback.
Thanks
The simple truth is that all of that "annotation clutter" rose up out of a rejection of all the "XML clutter".
Taking both JPA and JAXB in Java as examples, all of those annotations can be replaced by external XML files describing the same metadata for the underlying frameworks. In both cases, the frameworks offer "OK" defaults for unannotated data, but the truth is that few people are really satisfied with the convention-over-configuration default mappings the frameworks offer, so more explicit configuration has to be done (see the sketch at the end of this answer).
And all of that configuration has to be captured somewhere, somehow.
For many folks and many applications, embedding the metadata via annotations is cleaner and easier to use than the external XML mapping approach.
In the end, from a Java perspective, domain models are "just" objects; the annotations have no bearing, in general, outside of the respective frameworks. But in truth, there is always some coupling with the frameworks, and they have a tendency to influence implementation details within the model. These aren't particularly glaring, but the simple fact is that when there are two ways to model something, and one way is "more friendly" to the framework, for many that is enough to tilt the decision in that direction rather than fighting for purity above the framework.
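To make the annotations-versus-external-XML point concrete, here is a sketch of a JPA entity carrying its mapping metadata inline. The class and column names are invented; the same metadata could instead live in a META-INF/orm.xml file, leaving the class a plain POJO:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

// Mapping metadata embedded in the domain class via annotations.
// The equivalent information could be moved to META-INF/orm.xml
// (an <entity> element with <id> and <basic> mappings), leaving this
// class free of persistence imports.
@Entity
@Table(name = "CUSTOMER")
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "CUST_NAME", nullable = false, length = 100)
    private String name;

    protected Customer() { }               // no-arg constructor required by JPA

    public Customer(String name) { this.name = name; }

    public Long getId() { return id; }
    public String getName() { return name; }
}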
In my project I would like to have compile-time checks on my existing resource bundles. I already have a set of *.properties localized files and I'm about to hook them up to some i18n tool. I was thinking about regular ResourceBundles, but I don't like the fact that this mechanism does not guarantee any kind of checks, neither compile-time checks nor maintenance checks like finding duplicate or unused keys.
So, I'm looking for a library which would take my existing *.properties files and convert them into neat and clean Java code that I could use in my project.
The best possible outcome would be a mechanism similar to GWT's i18n support: one clean interface with all messages as separate methods.
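For illustration, the kind of interface I have in mind looks roughly like GWT's Messages mechanism (method names and messages below are invented):

import com.google.gwt.i18n.client.Messages;

// The desired shape: one interface, one method per message key.
// The compiler then catches typos in message identifiers and wrong
// argument counts, which plain ResourceBundle lookups cannot do.
// In GWT this is instantiated via GWT.create(AppMessages.class).
public interface AppMessages extends Messages {

    @DefaultMessage("Welcome, {0}!")
    String welcome(String userName);

    @DefaultMessage("You have {0} new messages")
    String newMessages(int count);
}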
I have looked at jlibs and ForgeRock. I really like jlibs, but it's not a separate lib, so it's hard for me to justify introducing such a huge library dependency just for i18n. ForgeRock does pretty much what I would like, but it produces constants rather than the clean interfaces that jlibs does.
This blog entry is also helpful in understanding which approach I would like to use. I have done a lot of research on the available i18n tools; I just cannot find 'the one' that suits my needs best.
Regards.
Another library satisfying your requirements of code generation would be i18n-binder.
Personally, I would approach this problem from another angle: using the gettext framework, you mark translatable strings in the source code and generate the resource bundles from them. There are tools and editors that can then update the translations based on the extracted strings, and detect strings that are no longer used or have been modified.
I am currently working on exactly the kind of library you are looking for; check it out. It's still a work in progress, but I should have my first release fairly soon.
The first release will just contain support for annotation-based translations. I don't yet have any ideas on how to migrate existing projects to the c10n style, though. Any ideas or suggestions are always welcome!
To solve this problem I implemented a Message Compiler, which creates the resource bundle files and the constant definitions for the keys (as a Java enum) from one single source file. The constants can then be used in the Java source code, which is a much safer way. The Message Compiler can be used not only for Java: it also creates resource files and constants for Objective-C or Swift, and can be extended for other programming environments.
I like JUnit. It's not exactly what you are looking for, but by creating tests you can be sure that all the items in the property files are available.
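A minimal sketch of that idea, assuming JUnit 4 and placeholder bundle file names, could compare the key sets of each localized file against the default one:

import static org.junit.Assert.assertEquals;

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
import java.util.Set;
import java.util.TreeSet;

import org.junit.Test;

public class MessageBundleTest {

    // Placeholder file names; adjust to the real bundle base name and locales
    private static final String DEFAULT_BUNDLE = "/messages.properties";
    private static final String[] LOCALIZED_BUNDLES = {
            "/messages_de.properties", "/messages_fr.properties" };

    @Test
    public void allLocalizedBundlesDeclareTheSameKeys() throws IOException {
        Set<String> expected = keysOf(DEFAULT_BUNDLE);
        for (String bundle : LOCALIZED_BUNDLES) {
            assertEquals("Key mismatch in " + bundle, expected, keysOf(bundle));
        }
    }

    // Loads a *.properties file from the classpath and returns its keys
    private Set<String> keysOf(String resource) throws IOException {
        Properties props = new Properties();
        try (InputStream in = getClass().getResourceAsStream(resource)) {
            if (in == null) {
                throw new IllegalArgumentException("Missing bundle: " + resource);
            }
            props.load(in);
        }
        return new TreeSet<>(props.stringPropertyNames());
    }
}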
I want to develop a simple ORM which performs CRUD functionality. Should I use reflection?
Do libraries like Hibernate use reflection?
Does using reflection cause speed to drop by a large extent?
Yes, Hibernate uses reflection and annotations (or XML configuration files), but it only indexes and reads all the meta-information once (at startup). I would recommend, though, looking at the existing ORM solutions before you start rolling your own.
A simple alternative to a full ORM is a DAO (Data Access Object), where you spell out your CRUD operations explicitly (see the sketch below).
For more ORM patterns and methodology, read Martin Fowler's book Patterns of Enterprise Application Architecture.
Also, you could use the existing JPA (Java Persistence API) and write your own JPA implementation.
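As an illustration of the DAO idea mentioned above, a generic CRUD contract might look like this sketch (names are purely illustrative):

import java.util.List;

// A minimal generic DAO contract; T is the entity type, K its key type.
public interface GenericDao<T, K> {
    T findById(K id);
    List<T> findAll();
    K create(T entity);
    void update(T entity);
    void delete(K id);
}

// A concrete implementation would typically use plain JDBC underneath,
// e.g. class UserDao implements GenericDao<User, Long> { ... }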
Reflection, dynamic proxies, cglib, ASM, Javassist - all are used in ORM tools.
But you really don't want to create a new one, because you can't create a simple ORM. ORMs aren't simple to build, and you will realize it once you reach a certain point. So don't waste your time; use an existing one. There are plenty, some more complicated, some less complicated (and less powerful).
You can google for "simple ORM" and you will have plenty of choices that are (more or less) easy to use (but not to implement).
Well, not so long ago I wrote an ORM layer for GAE named gaedo. The framework is modular enough to also fit relational databases. Fortunately, it was my third attempt at such a job. So, here is what is needed, and why.
Reflection is the root of all ORM tools, since it allows you to explore classes, looking for their attribute names and values. It also allows you to load values from your datastore, provided your bean has a convenient constructor (usually, ORM frameworks rely upon JavaBeans, since these ensure a no-arg constructor exists). Finally, and I think most importantly, reflection allows you to push values from the datastore into your beans. Unfortunately, you'll quickly be faced with the issue of a query loading the whole database, which leads to the next two points.
Considering graph loading, you'll quickly need to rely upon dynamic proxies to create lazily loadable objects. Obviously, if you rely solely upon the JDK, you will only be able to use that on objects implementing well-known interfaces (collections and maps are very good examples of objects benefiting from dynamic proxies implementing their interfaces).
Finally, annotations will be of smaller use. They'll allow you to define key elements (used to generate the database key for an object, for example), define parent-child relationships, or even define the lazy-loading strategy, in association with the previously mentioned dynamic proxies.
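A rough, hypothetical sketch of the first two building blocks (reflection-based loading and a lazily loaded collection behind a JDK dynamic proxy) could look like this; it is illustrative only, not how gaedo or any particular ORM actually does it:

import java.lang.reflect.Field;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

public final class TinyOrmSupport {

    // (1) Reflection: copy a row (column name -> value) into a freshly created bean.
    //     Assumes a no-arg constructor and fields named like the columns.
    public static <T> T load(Class<T> type, Map<String, Object> row) throws ReflectiveOperationException {
        T bean = type.getDeclaredConstructor().newInstance();
        for (Field field : type.getDeclaredFields()) {
            if (row.containsKey(field.getName())) {
                field.setAccessible(true);
                field.set(bean, row.get(field.getName()));
            }
        }
        return bean;
    }

    // (2) Dynamic proxy: a List that is only fetched from the datastore on first use.
    //     JDK proxies only work against interfaces, hence List rather than ArrayList.
    @SuppressWarnings("unchecked")
    public static <E> List<E> lazyList(Supplier<List<E>> loader) {
        InvocationHandler handler = new InvocationHandler() {
            private List<E> target; // loaded on demand

            @Override
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                if (target == null) {
                    target = loader.get(); // hits the datastore only when first touched
                }
                return method.invoke(target, args);
            }
        };
        return (List<E>) Proxy.newProxyInstance(
                TinyOrmSupport.class.getClassLoader(), new Class<?>[] { List.class }, handler);
    }

    private TinyOrmSupport() { }
}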
This is an interesting, but mostly useless, research effort. Interesting, because it will teach you tons of concepts regarding reflection, proxies, and all those things people ignore and tend to consider as reserved for so-called dynamic languages.
But useless, because you'll always encounter corner cases that require hacking your code.
As Emmanuel Bernard said on "Les Cast Codeurs" (a French Java podcast), I think that each year someone comes up with a "reimplementation" of Hibernate. And each year, this implementation reveals itself to be lacking some important pieces, like transactions (local or distributed), cache handling, ...
So, try to code it, but never forget that it may be dropped soon due to too great an overlap with established frameworks.
To answer the last part of your question: yes, reflection is a serious performance hit. All the work that the compiler normally does for you instead has to be done at run time, so use reflection sparingly (cache class metadata, for example, so you only build it once, preferably at startup; see the sketch below).
I haven't looked through Hibernate's code, but I expect it uses reflection as well, as optimized as possible.
My recommendation is that you write a working dead-simple solution first, then start optimizing as you go along.
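As a sketch of the caching advice above (all names invented), the per-class reflection metadata can be computed once and then reused on every mapping operation:

import java.lang.reflect.Field;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache the per-class reflection metadata once, instead of calling
// getDeclaredFields() on every mapping operation.
public final class FieldCache {

    private static final Map<Class<?>, Field[]> CACHE = new ConcurrentHashMap<>();

    public static Field[] fieldsOf(Class<?> type) {
        return CACHE.computeIfAbsent(type, t -> {
            Field[] fields = t.getDeclaredFields();
            for (Field f : fields) {
                f.setAccessible(true); // pay the accessibility check only once
            }
            return fields;
        });
    }

    private FieldCache() { }
}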
Try JLibs-JDBC.
It is a simple ORM which doesn't use reflection or XML configuration.
A few years ago, I did a survey of DbC (Design by Contract) packages for Java, and I wasn't wholly satisfied with any of them. Unfortunately I didn't keep good notes on my findings, and I assume things have changed. Would anybody care to compare and contrast different DbC packages for Java?
There is a nice overview on Wikipedia about Design by Contract; at the end there is a section regarding languages with third-party support libraries, which includes a nice series of Java libraries. Most of these Java libraries are based on Java assertions.
If you only need precondition checking, there is also a lightweight validate-method-arguments solution at SourceForge, under Java Argument Validation (a plain Java implementation).
Depending on your problem, the OVal framework for field/property constraint validation may be a good choice. This framework lets you declare constraints in all kinds of different forms (annotations, POJO, XML), and lets you create custom constraints through POJOs or scripting languages (JavaScript, Groovy, BeanShell, OGNL, MVEL). It also partly implements programming by contract.
Google has an open source library called Contracts for Java.
Contracts for Java is our new open source tool. Preconditions, postconditions, and invariants are added as Java boolean expressions inside annotations. By default these do nothing, but enabled via a JVM argument, they're checked at runtime.
• @Requires, @Ensures, @ThrowEnsures and @Invariant specify contracts as Java boolean expressions
• Contracts are inherited from both interfaces and classes and can be selectively enabled at runtime
Contracts for Java.
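A small, hypothetical usage sketch based on the annotations quoted above; the com.google.java.contract package name and the old(...) syntax are from memory of the Cofoja docs, so verify them against the project documentation:

import com.google.java.contract.Ensures;
import com.google.java.contract.Invariant;
import com.google.java.contract.Requires;

// Contracts are plain boolean expressions inside annotations; they are
// ignored unless contract checking is enabled at run time.
@Invariant("balance >= 0")
public class Account {

    private int balance;

    @Requires("amount > 0")
    @Ensures("balance == old(balance) + amount")
    public void deposit(int amount) {
        balance += amount;
    }

    public int getBalance() {
        return balance;
    }
}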
I tested Contract4J once and found it usable but not perfect.
You create contracts for before and after method calls, and invariants over the whole class.
The contract is created as an assertion for the method. The problem is that the contract itself is written in a string, so you don't have IDE support for the contracts or compile-time checking that the contract still works.
A link to the library
It's been a long time since I've looked at these, but I found some old links. One was for JASS.
The other one that I had used (and liked) was iContract by Reliable Systems. It had an Ant task that you would run as a preprocessor. However, I can't seem to find it with some Google searches; it looks like it has vanished. The original site is now a link farm. Check out this link for some possible ways to get to it.
I'd highly recommend considering the Java Modeling Language (JML).
There is a Groovy extension that enables Design by Contract(tm) in Groovy/Java code - GContracts. It uses so-called closure annotations to specify class invariants and pre- and postconditions. Examples can be found on the project's GitHub wiki.
Major advantage: it is only a single jar without external dependencies, and it can be resolved via Maven-compliant repositories since it has been placed in the central Maven repo.
If you want plain and simple basic support for expressing your contracts, have a look at valid4j (found on Maven Central as org.valid4j:valid4j). It lets you express your contracts using regular hamcrest matchers in plain code (no annotations, nor comments).
For preconditions and postconditions (basically assertions -> throwing AssertionError):
import static org.hamcrest.Matchers.*;
import static org.valid4j.Assertive.*;

require(inputList, hasSize(greaterThan(0)));
...
ensure(result, lessThan(4.0));
If you are not happy with the default global policy (throwing AssertionError), valid4j provides a customization mechanism that lets you provide your own implementation of org.valid4j.AssertiveProvider.
Links:
http://www.valid4j.org/
https://github.com/helsing/valid4j
I would suggest a combination of a few tools:
Java's assert condition..., or its more advanced Groovy cousin; Guava's Preconditions.checkXXXX(condition...) and Verify.verify(condition...); or a library like AssertJ - if all you need is to do simple checks in your 'main' or 'test' code (see the sketch after this list)
you'll get more features with a tool like OVal; it can check objects as well as method arguments and results, and you can also fire checks manually (e.g. to show validation errors in the UI before a method is called). It can understand existing annotations, e.g. from JPA or javax.validation (like @NotNull, @Pattern, @Column), or you can write inline constraints like @Pre(expr="x >= 0 && x <= y"). If the annotation is @Documented, the checks will also be visible in the Javadocs (so you don't have to describe them there as well).
OVal uses reflection, which can cause performance issues and other problems in some environments like Android; in that case you should consider a tool like Google's Cofoja, which has less functionality but depends on compile-time annotation processing instead of reflection
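Here is a small sketch of the "simple checks" option from the first bullet, assuming Guava is on the classpath; the class and method names are made up:

import com.google.common.base.Preconditions;
import com.google.common.base.Verify;

public class TransferService {

    public double transfer(double balance, double amount) {
        // Precondition on a public API argument: always checked, throws IllegalArgumentException
        Preconditions.checkArgument(amount > 0, "amount must be positive but was %s", amount);
        Preconditions.checkArgument(amount <= balance, "amount exceeds balance");

        double newBalance = balance - amount;

        // Internal sanity check: throws VerifyException if the invariant is broken
        Verify.verify(newBalance >= 0, "balance went negative: %s", newBalance);

        // Plain assert: only active when the JVM is started with -ea
        assert newBalance <= balance;

        return newBalance;
    }
}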
I think that many DbC libraries have been outclassed by the built-in assert keyword, introduced in Java 1.4 (a small example follows this list):
it is a built-in, no other library is required
it works with inheritance
you can activate/deactivate it on a per-package basis
easy to refactor (e.g. no assertions in comments)
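A tiny example of the built-in keyword (class name invented):

public class Stack {

    private int size;

    public void pop() {
        // Contract-style precondition using the built-in keyword;
        // compiled in, but only evaluated when assertions are enabled.
        assert size > 0 : "pop() called on an empty stack";
        size--;
    }
}

Assertions are off by default; they are enabled with java -ea MyApp, and can be scoped to a package with java -ea:com.example.domain... MyApp.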
I personally think that the DbC libraries available at present leave a lot to be desired; none of the libraries I looked at played well with the Bean Validation API.
The libraries I looked at are documented here.
The Bean Validation API has a lot of overlap with the concepts from DbC. In certain cases the Bean Validation API cannot be used, e.g. with simple POJOs (non-CDI-managed code). IMO a thin wrapper around the Bean Validation API should suffice.
I found that the existing libraries are a little tricky to add to existing web projects, given that they are implemented either via AOP or bytecode instrumentation. Probably with the advent of the Bean Validation API this kind of complexity to implement DbC is unwarranted.
I have also documented my rant in this post and hope to build a small library which leverages the Bean Validation API.
I've observed the strange fact (based on the questions in the hibernate tag) that people are still actively using XML files instead of annotations to specify their ORM (Hibernate/JPA) mappings.
There are a few cases, where this is necessary:
you are using classes that are provided, and you want to map them.
you are writing an API, whose domain classes can be used without a JPA provider, so you don't want to force a JPA/Hibernate dependency.
But these are not common cases, I think.
My assumptions are:
people are used to XML files and don't feel comfortable with / don't want to bother learning the annotation approach.
Java pre-1.5 is forced upon the project and there is nothing to do about it
people don't know that annotations are a full-featured replacement of xml mapping.
legacy systems are supported and hence changing the approach is considered risky
people fear that mixing annotations (meta-information) with their classes is wrong.
Any other possible explanations?
The domain layer and the persistence layer are considered by some to be separate concerns. Using the pure XML approach keeps the two layers as loosely coupled as possible; using annotations couples the two layers more tightly as you have persistence-related code embedded in the domain code.
Lack of an overview of what's been mapped; you need to dig into the source code.
people don't know that annotations are a full-featured replacement of xml mapping.
Ah, but they're not. Three cases off the top of my head (there are probably more) that you can't do (well) with annotations:
Use a formula as part of an association key (admittedly, rather esoteric).
Join-via-subselect - @Loader is not an adequate replacement. Not too common but quite useful. Envers provides a viable alternative approach.
Losing column order for schema generation. This one's an absolute killer. I understand why it's done this way, but it still annoys me to no end.
Don't get me wrong, though - annotations are great; doubly so when they're coupled with Validator (though, again, the third point above kills the buzz on this one). They also provide certain aspects of functionality that XML mappings do not.
Using XML to complement the annotations, where environment or system specific configuration is needed.
Some information is carried nicely in annotations, such as the cardinality of relationships between entities. These annotations provide more detail about the model itself, rather than how the model relates to something else.
However, bindings, whether to a persistence store or XML or anything else, are extrinsic to the model. They change depending on the context in which the model is used. Including them in the model is as bad as using inline style definitions in HTML. I use external binding (usually—though not necessarily—XML) documents for the same reasons I reference an external CSS.
I initially found the annotation syntax very weird. It looks like line noise and mixes in with where I usually put comments. It's vastly better than dealing with the XML files, though, because all of the changes are in one place, the model file. Perhaps one limitation of annotations is possible collisions with other annotations, but I haven't seen that yet.
I think the real reason it isn't used more is that it isn't really considered the default. You have to use an additional jar file. It should be part of core, and the XML approach should be the optional one.
I've switched to annotations, but sometimes I miss the XML mappings, mainly because the documentation was so much more comprehensive, with examples of many scenarios. With annotations, I stick to pretty basic mappings (which is great if you control the data and object model), but I've done some very complex things in the XML that I don't know if I could replicate in the annotations.
So what if you want to deploy your class to multiple datastores, and you want to annotate column definitions into it? Different datastores have different conventions, etc., and XML is the only sane place for that in such a situation, letting you have one mapping for MySQL, one for Derby, one for Oracle, or whatever. You can still put the basic persistence/relation annotations in if you wish, but the schema-specific stuff would go into XML in that case.
--Andy (DataNucleus)
I have a new one: http://www.summerofnhibernate.com/
A very nice screencast series, not yet covering annotations. I have written some apps with it to learn the basics, not for my job but out of curiosity, but I have never migrated to annotations yet. The series was suggested as still relevant on SO. I will still migrate to annotations if I get some more spare time, but for the time being I could be one of the people asking questions about it.
I worked on a project where the database changed very frequently, and we had to regenerate the Java files and configuration files each time that happened. Actually, we did not use all the relationships and configurations generated by the Hibernate tool, so basically we used the tool and then modified/tweaked the output.
So when you want to modify/tweak the default configurations, it is easier to do that in the XML file than through annotations.
I feel that it makes the code much more readable if we don't use annotations. Use of annotations can really help if the configuration info changes frequently, but take the case of web.xml: how often does the info in that change? So why use annotations for servlets?
We continue to use XML because, typically for deployed sites, getting a patch (binary code) approved for installation takes time that you may not have. Updates to ASCII files (e.g. XML files) are considered configuration changes and not patches...