In my project I would like to have compile-time checks on my existing resource bundles. I already have a set of localized *.properties files and I'm about to hook them up to some i18n tool. I was thinking about regular ResourceBundles, but I don't like the fact that this mechanism does not guarantee any kind of checks, neither compile-time checks nor maintenance checks such as finding duplicate or unused keys.
So, I'm looking for a library which would take my existing *.properties files and convert them into neat and clean Java code which I could use in my project.
The best possible outcome would be a mechanism similar to GWT's i18n support: one clean interface with all messages as separate methods.
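To illustrate the style I mean (a purely hypothetical sketch, not taken from any particular library): one interface, one method per message key, so the compiler catches typos and the IDE can spot unused messages.

public interface MyMessages {

    // backed by a key such as: greeting.hello=Hello, {0}!
    String helloUser(String userName);

    // backed by a key such as: error.missingFile=File {0} could not be found
    String missingFile(String fileName);
}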
I have looked at jlibs and ForgeRock. I really like jlibs, but it's not a separate lib, so it's hard for me to imagine introducing such a huge dependency just for i18n. ForgeRock does pretty much what I would like, but it produces constants rather than clean interfaces to work with, like jlibs does.
This blog entry is also helpful in understanding which approach I would like to use. I have done a lot of research on the available i18n tools; I just cannot find 'that one' which would suit my needs best.
Regards.
Another library satisfying your requirements of code generation would be i18n-binder.
Personally, I would approach this problem from another angle: using the gettext framework, you would mark translatable strings in the source code and generate the resource bundles from them. There are tools and editors that can then update the translations based on the extracted strings, and detect strings that are no longer used or have been modified.
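A minimal sketch of how marking strings could look with the gettext-commons library (assuming its I18n/I18nFactory API; the Greeter class is illustrative):

import org.xnap.commons.i18n.I18n;
import org.xnap.commons.i18n.I18nFactory;

public class Greeter {

    private static final I18n i18n = I18nFactory.getI18n(Greeter.class);

    public String greet(String name) {
        // xgettext-style extraction tools pick up this literal as a translatable string
        return i18n.tr("Hello, {0}!", name);
    }
}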
I am currently working on exactly the kind of library you are looking for; check it out. It's still a work in progress, but I should have my first release fairly soon.
The first release will just contain support for annotation-based translations. I don't yet have any ideas on how to migrate existing projects to the c10n style, though. Any ideas or suggestions are always welcome!
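To give a rough, purely illustrative idea of the annotation-based style (the annotation below is hypothetical and not the library's actual API):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AnnotationBasedI18nSketch {

    // hypothetical annotation carrying the default (English) text for a message
    @Retention(RetentionPolicy.RUNTIME)
    @interface DefaultText {
        String value();
    }

    // one interface, one method per message; a proxy or generated class supplies the text
    interface Messages {
        @DefaultText("Hello, {0}!")
        String greeting(String userName);
    }
}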
To solve this problem I implemented a Message Compiler, which creates the resource bundle files and constant definitions as a Java enum for the keys from one single source file, so the constants can be used in the Java source code, which is much safer. The Message Compiler can not only be used for Java: it also creates resource files and constants for Objective-C or Swift, and it can be extended for other programming environments.
I like JUnit. Not exactly what you are looking for, but by creating tests you can make sure that all the items in the property files are available.
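For example, a test along these lines (the bundle name and locales are assumptions to adapt to your project) fails the build whenever a locale is missing keys:

import static org.junit.Assert.assertEquals;

import java.util.Locale;
import java.util.ResourceBundle;
import org.junit.Test;

public class MessagesCompletenessTest {

    private static final String BUNDLE = "messages"; // hypothetical bundle name

    @Test
    public void allLocalesContainTheSameKeys() {
        ResourceBundle reference = ResourceBundle.getBundle(BUNDLE, Locale.ENGLISH);
        for (Locale locale : new Locale[] {Locale.GERMAN, Locale.FRENCH}) {
            ResourceBundle candidate = ResourceBundle.getBundle(BUNDLE, locale);
            assertEquals("key sets differ for " + locale,
                    reference.keySet(), candidate.keySet());
        }
    }
}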
Currently there are two main popular Java Object to Object mapping frameworks that supersede Dozer (http://dozer.sourceforge.net/documentation/mappings.html), they are:
Selma - http://www.selma-java.org/
MapStruct - http://mapstruct.org/
With the exception of this page (http://vytas.io/blog/java/java-object-to-object-mapping-which-framework-to-choose-part-2/) I haven't been able to find much online regarding which framework is better than the other, or under what circumstances one is better. Wondering if anyone can shed some light on this. In terms of functionality, based on the documentation, they seem to be doing the same thing.
(Original author of Selma here, so a slightly different point of view)
Selma and MapStruct do the same job, with some differences. First, it appears that Selma's generated code is just a bit faster than MapStruct's (http://javaetmoi.com/wp-content/uploads/2015/09/2015-09-mapping-objet-objet2.png). The 0.13 release number does not really reflect the maturity of the code: Selma is stable and robust, and it has been in use in production for 2 years.
The main idea behind Selma is to prohibit magic conversions and just automate all mappings without any side effects. When a mapping appears to be too complex, the developer should handle it himself using custom mappings or an interceptor.
The footprint of Selma is built to be as small as possible: we only depend on JavaWriter and the JDK.
Selma tries to use only statically compiled generated code, without any reflection at runtime or pseudo-code written in string fields.
You can use composition to build a chain of mappers, and inside a single mapper you can have a global configuration that can be overwritten on a per-method basis.
Compiler messages are built to give developer early feedback, tips to solve the issue and learn the API.
In the end, MapStruct is certainly more feature-rich, but Selma gives the developer all the tools needed for complex mapping, along with the responsibility of writing the business logic. You may also find one of the two APIs nicer than the other from a user perspective, so the best thing to do is to try both and choose the one you feel more comfortable with. It won't be time-consuming. A rough sketch of a Selma mapper follows.
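(The Person/PersonDto classes below are hypothetical, and the exact factory call is an assumption to be checked against the current Selma docs.)

import fr.xebia.extras.selma.Mapper;

@Mapper
public interface PersonMapper {

    // Selma's annotation processor generates the implementation at compile time
    PersonDto asPersonDto(Person person);
}

// Typical lookup of the generated implementation (assumed builder-style factory):
// PersonMapper mapper = Selma.builder(PersonMapper.class).build();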
(Original author of MapStruct here, so naturally I am biased)
Indeed, both projects are based on the same general idea of generating mapping code at compile time; I recommend MapStruct for the following reasons (a minimal usage sketch follows the list):
Proven and stable codebase: MapStruct is the older of the two and originally came up with the idea of mapping generation. It has been enhanced and polished over quite a long time, based on real-world feedback from usage in many different projects; we released the stable 1.0 Final last year.
Larger developer and user community as per the number of committers (MapStruct, Selma) and user questions (MapStruct, Selma)
Feature-rich (some things supported in MapStruct that I didn't find (to the same extent) in the Selma docs):
Many built-in type conversions, including advanced support for JAXB types such as JAXBElement
Support for default values and constants
Mapping customizations through inline expressions
Sharing configurations across mappers
Nicely integrates with CDI and JSR 330 (in addition to Spring)
Eclipse plug-in available: still a work in progress, but its quick fixes and auto-completions are already very helpful when designing mapper interfaces
IntelliJ plug-in: helps when editing mapper interfaces via auto-completion, go to referenced properties, refactoring support etc.
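As a minimal usage sketch (Car/CarDto are hypothetical domain classes; the annotations and the Mappers factory are part of the MapStruct API):

import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import org.mapstruct.factory.Mappers;

@Mapper
public interface CarMapper {

    CarMapper INSTANCE = Mappers.getMapper(CarMapper.class);

    // the annotation processor generates the field-by-field mapping at compile time
    @Mapping(source = "numberOfSeats", target = "seatCount")
    CarDto carToCarDto(Car car);
}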
I'm trying to evaluate different approaches to have some code in our Java project generated automatically from definitions in a domain-specific language while building the project. I have manually written a code generator or two in the past but I have no experience with existing code generation frameworks. We have not yet decided whether to use such a framework or build the generator by hand.
I need help with a conceptual problem; I would like to understand how a code generator can be built which allows the DSL to refer to existing (hand-written) Java classes, methods and fields. It should be possible to refer to classes that are in the same compilation unit (e.g. Maven project) as the generated Java classes. This means that those hand-written classes cannot be compiled before the code generator is run and the code generator would have to look at Java source files in addition to everything required to be on the classpath for compiling those classes.
How do existing frameworks handle such cases, if at all? Do they parse the Java source files themselves or do they re-use some machinery of the Java compiler?
I think this is the same problem that any (non-dynamic) non-Java language targeting the JVM faces, if it allows its own code to reference Java classes and vice versa in the same compilation unit. Maybe it is helpful to look at how those compilers work, unless they circumvent javac by including a Java compiler themselves.
There are multiple reasons why the code generator needs access to the classes in the Java files of the same compilation unit:
I would like to provide semantics similar to those in Java where I can import <package>.* and then use the names of those classes without fully qualifying the name of each of them.
I would like to reject code in the DSL if it refers to symbols that don't exist or don't meet some required criteria.
There will be cases where I want to generate code that depends on the members of a class or the signatures of methods. An example would be to automatically generate a decorator or builder or implement an interface but where the base class or interface is not generated by the code generator.
I may want to use the type information of referenced symbols in the generated code, e.g. generating different code depending on the signature of a method.
Our project uses Maven. I'm interested in general approaches to solving these problems but information or examples that apply to Maven are greatly appreciated.
How can I extend Java with a DSL that allows the DSL compiler to refer to external Java elements (classes, methods, fields)?
It's actually unclear what you're asking; furthermore, this question is more theoretical than programming-related.
In any case, from my experience with my own DSL implementation, there is no problem using Java class loaders for dynamic access to newly generated and compiled Java classes (see the sketch after the links below). Also, if you are using Maven, all dependencies with production scope are loaded in the main class loader and can be loaded using reflection.
Here are some useful links:
http://www.javaworld.com/article/2077260/learn-java/the-basics-of-java-class-loaders.html
http://tutorials.jenkov.com/java-reflection/dynamic-class-loading-reloading.html
http://docs.oracle.com/javase/tutorial/reflect/
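A minimal sketch of that idea (assuming the generated sources have already been compiled into a directory such as target/generated-classes; the class name is hypothetical):

import java.net.URL;
import java.net.URLClassLoader;

public class GeneratedClassLoading {

    public static void main(String[] args) throws Exception {
        URL generatedClassesDir = new URL("file:target/generated-classes/");
        try (URLClassLoader loader = new URLClassLoader(
                new URL[] {generatedClassesDir},
                GeneratedClassLoading.class.getClassLoader())) {
            // "com.example.GeneratedMapper" stands in for whatever class the generator produced
            Class<?> generated = loader.loadClass("com.example.GeneratedMapper");
            Object instance = generated.getDeclaredConstructor().newInstance();
            System.out.println("Loaded " + instance.getClass().getName());
        }
    }
}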
Do not parse Java programs; use compiled classes instead. The referenced classes can be written in different languages, including other DSLs - the only common denominator is the class file format.
This causes a circular dependency problem when a Java program refers to a DSL program and, at the same time, that DSL program refers back to the Java program. Possible solutions are:
do not analyse any other programs while converting the DSL to Java; all possible errors would then be reported while compiling the generated Java code
redirect references to common interfaces, thus breaking the dependency loop (see the sketch below)
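A sketch of the second option, with illustrative names (each class in its own file): the hand-written code depends only on a shared interface, and the generated code implements it, so neither side needs the other's sources at compile time.

// PriceCalculator.java - hand-written interface that both sides depend on
public interface PriceCalculator {
    long priceInCents(String productId);
}

// CheckoutService.java - hand-written code refers only to the interface
public class CheckoutService {

    private final PriceCalculator calculator;

    public CheckoutService(PriceCalculator calculator) {
        this.calculator = calculator;
    }

    public long totalFor(String productId) {
        return calculator.priceInCents(productId);
    }
}

// GeneratedPriceCalculator.java - produced by the DSL code generator
public class GeneratedPriceCalculator implements PriceCalculator {
    @Override
    public long priceInCents(String productId) {
        return 990; // placeholder standing in for generated pricing rules
    }
}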
Recently I came across this javalobby post http://java.dzone.com/articles/how-changing-java-package on packaging java code by feature.
I like the idea, but I have a few questions about this approach. I asked my question there but didn't get a satisfactory reply. I hope someone on Stack Overflow can clarify my questions.
I like the idea of package by feature, which greatly reduces the time spent moving across packages while coding, and keeps all the related stuff in one place (package). But what about interactions between services in different packages?
Suppose we are building a blog app and we put all user-related operations (controllers/services/repositories) in the com.mycompany.myblog.users package, and all blog-post-related operations (controllers/services/repositories) in the com.mycompany.myblog.posts package.
Now I want to show User Profile along with all the posts that he posted. Should I call myblog.posts.PostsService.getPostsByUser(userId) from myblog.users.UserController.showUserProfile()?
What about coupling between packages?
Also, wherever I read about package by feature, everyone says it's a good practice. Then why do many book authors and even frameworks encourage grouping by layers? Just curious to know :-)
Take a look at uncle Bob's Package Design Principles. He explains reasons and motivations behind those principles, which I have elaborated on below:
Classes that get reused together should be packaged together, so that the package can be treated as a sort of complete product available to you. And those which are reused together should be separated from the ones they are not reused with. For example, your logging utility classes are not necessarily used together with your file I/O classes, so package them separately. But the logging classes could be related to one another, so create a sort of complete product for logging, say, for want of a better name, commons-logging, packaged in a (re)usable jar, and another separate complete product for I/O utilities, again for want of a better name, say commons-io.jar.
If you update, say, the commons-io library to support java.nio, you may not necessarily want to make any changes to the logging library. So separating them is better.
Now, let's say you wanted your logging utility classes to support structured logging for some sort of log analysis by tools like Splunk. Some clients of your logging utility may want to upgrade to your newer version; others may not. So when you release a new version, package together all the classes which are needed and reused together for migration. That way, some clients of your utility classes can safely delete your old commons-logging jar and move to the new commons-logging jar, while other clients stay with the older jar. However, no client should be forced to keep both jars (new and old) just because you made them use some classes from the older packaged jar.
Avoid cyclic dependencies: a depends on b; b on c; c on d; but d depends on a. This scenario is obviously a deterrent, as it becomes very difficult to define layers or modules, etc., and you cannot vary them independently of each other.
Also, you could package your classes such that if a layer or module changes, other modules or layers do not necessarily have to change. So, for example, if you decide to move from an old MVC framework to REST APIs, only the view and controller may need changes; your model does not.
I personally like the "package by feature" approach, although you do need to apply quite a lot of judgement on where to draw the package boundaries. It's certainly a feasible and sensible approach in many circumstances.
You should probably achieve coupling between packages and modules using public interfaces - this keeps the coupling clean and manageable.
It's perfectly fine for the "blog posts" package to call into the "users" package as long as it uses well designed public interfaces to do so.
One big piece of advice though if you go down this approach: be very thoughtful about your dependencies and in particular avoid circular dependencies between packages. A good design should look like a dependency tree - with the higher-level areas of functionality depending on a set of common services which depend upon libraries of utility functions etc. To some extent, this will start to look like architectural "layers" with front-end packages calling into back-end services.
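A small sketch of what that can look like (package and class names are illustrative, each class in its own file): the posts feature exposes a public interface, and the users feature depends only on it.

// com/mycompany/myblog/posts/PostFinder.java - the public API of the posts feature
package com.mycompany.myblog.posts;

import java.util.List;

public interface PostFinder {
    List<String> findPostTitlesByUser(long userId);
}

// com/mycompany/myblog/users/UserController.java - depends only on the interface,
// not on any package-private implementation inside the posts package
package com.mycompany.myblog.users;

import com.mycompany.myblog.posts.PostFinder;

public class UserController {

    private final PostFinder postFinder;

    public UserController(PostFinder postFinder) {
        this.postFinder = postFinder;
    }

    public void showUserProfile(long userId) {
        postFinder.findPostTitlesByUser(userId).forEach(System.out::println);
    }
}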
There are many other aspects of package design besides coupling. I would suggest looking at OOAD principles, especially the package design principles, like:
REP (Release Reuse Equivalency Principle): the granule of reuse is the granule of release.
CCP (Common Closure Principle): classes that change together are packaged together.
CRP (Common Reuse Principle): classes that are used together are packaged together.
ADP (Acyclic Dependencies Principle): the dependency graph of packages must have no cycles.
SDP (Stable Dependencies Principle): depend in the direction of stability.
SAP (Stable Abstractions Principle): abstractness increases with stability.
For more information you can read the book "Agile Software Development, Principles, Patterns, and Practices".
I am dealing with a Java EE web application that needs some refactoring. I am in charge of doing this job and am currently at a loss about what needs to be done or changed in order to improve the application.
My question is: how can the frontend part be refactored?
I already refactored the CSS files so that they have generic rules and classes, removing unused or wrong rules. I refactored all JavaScript files using some patterns (not prototype inheritance, since it's not really useful here), adding PrototypeJS, and I still need to finish aggregating JS functions (when possible) into objects and included files.
Now I am finishing adding localization to pages that were missing it or where it wasn't complete, and I want to migrate the whole application to XHTML Transitional, following the W3C guidelines strictly.
I also have in mind starting to use Struts Tiles to add templates and, in the meantime, removing the old "table layout" the frontend is currently using, so actually redesigning the whole application.
But I am at a loss here: is what I am doing useful? Does all this work need to be done or am I just going too far? What would you add? What would you do instead?
I think this Stack Exchange thread (How to approach refactoring an existing web application?) would better answer your question.
Hope that helps.
A few years ago, I did a survey of DbC packages for Java, and I wasn't wholly satisfied with any of them. Unfortunately I didn't keep good notes on my findings, and I assume things have changed. Would anybody care to compare and contrast different DbC packages for Java?
There is a nice overview on Wikipedia about Design by Contract; at the end there is a section on languages with third-party support libraries, which includes a nice series of Java libraries. Most of these Java libraries are based on Java assertions.
If you only need precondition checking, there is also a lightweight validate-method-arguments solution at SourceForge under Java Argument Validation (a plain Java implementation).
Depending on your problem, maybe the OVal framework for field/property constraint validation is a good choice. This framework lets you express the constraints in all kinds of different forms (annotations, POJO, XML), create custom constraints through POJOs or scripting languages (JavaScript, Groovy, BeanShell, OGNL, MVEL), and it also partly implements Programming by Contract.
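A minimal sketch of OVal field-constraint validation (assuming the net.sf.oval annotations and Validator class; the Customer class is illustrative):

import java.util.List;
import net.sf.oval.ConstraintViolation;
import net.sf.oval.Validator;
import net.sf.oval.constraint.NotEmpty;
import net.sf.oval.constraint.NotNull;

public class OValExample {

    static class Customer {
        @NotNull
        @NotEmpty
        String name;
    }

    public static void main(String[] args) {
        Validator validator = new Validator();
        Customer customer = new Customer(); // name is null, so both constraints are violated
        List<ConstraintViolation> violations = validator.validate(customer);
        violations.forEach(v -> System.out.println(v.getMessage()));
    }
}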
Google has an open source library called Contracts for Java.
Contracts for Java is our new open source tool. Preconditions, postconditions, and invariants are added as Java boolean expressions inside annotations. By default these do nothing, but enabled via a JVM argument, they're checked at runtime.
• @Requires, @Ensures, @ThrowEnsures and @Invariant specify contracts as Java boolean expressions
• Contracts are inherited from both interfaces and classes and can be selectively enabled at runtime
contracts for java.
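A hedged sketch of what that looks like (assuming the annotations live in the com.google.java.contract package; the checks only run when the Cofoja agent/JVM argument is enabled):

import com.google.java.contract.Ensures;
import com.google.java.contract.Invariant;
import com.google.java.contract.Requires;

@Invariant("balance >= 0")
public class Account {

    private int balance;

    @Requires("amount > 0")
    @Ensures("balance == old(balance) + amount")
    public void deposit(int amount) {
        balance += amount;
    }
}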
I tested Contract4J once and found it usable but not perfect.
You create contracts for before and after method calls and invariants over the whole class.
The contract is created as an assertion for the method. The problem is that the contract itself is written in a string, so you don't have IDE support for the contracts or compile-time checking of whether the contract still works.
A link to the library
It's been a long time since I've looked at these, but I found some old links. One was for JASS.
The other one that I had used (and liked) was iContract by Reliable Systems. It had an Ant task that you would run as a preprocessor. However, I can't seem to find it with some Google searches; it looks like it has vanished. The original site is now a link farm. Check out this link for some possible ways to get to it.
I'd highly recommend considering the Java Modeling Language (JML).
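To give a flavour: JML contracts live in special comments that JML tools (e.g. OpenJML) can check statically or at runtime, for instance:

public class IntMath {

    //@ requires x >= 0;
    //@ ensures \result * \result <= x && x < (\result + 1) * (\result + 1);
    public static int isqrt(int x) {
        return (int) Math.sqrt(x);
    }
}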
There is a Groovy extension that enables Design by Contract(tm) in Groovy/Java code - GContracts. It uses so-called closure annotations to specify class invariants and pre- and postconditions. Examples can be found on the project's GitHub wiki.
Major advantage: it is only a single jar without external dependencies, and it can be resolved via Maven-compliant repositories since it's been placed in the central Maven repo.
If you want plain and simple basic support for expressing your contracts, have a look at valid4j (found on Maven Central as org.valid4j:valid4j). It lets you express your contracts using regular Hamcrest matchers in plain code (no annotations, nor comments).
For preconditions and postconditions (basically assertions -> throwing AssertionError):
import static org.hamcrest.Matchers.*;
import static org.valid4j.Assertive.*;

// precondition: fails with AssertionError if the input list is empty
require(inputList, hasSize(greaterThan(0)));
...
// postcondition: fails with AssertionError if the result is out of range
ensure(result, lessThan(4.0));
If you are not happy with the default global policy (throwing AssertionError), valid4j provides a customization mechanism that lets you provide your own implementation of org.valid4j.AssertiveProvider.
Links:
http://www.valid4j.org/
https://github.com/helsing/valid4j
I would suggest a combination of a few tools:
Java's assert condition..., or its more advanced Groovy cousin, Guava's Preconditions.checkXXXX(condition...) and Verify.verify(condition...), or a library like AssertJ, if all you need is just to do simple checks in your 'main' or 'test' code (see the sketch after this list)
you'll get more features with a tool like OVal; it can check both objects as well as method arguments and results, and you can also fire checks manually (e.g. to show validation errors in the UI before a method is called). It can understand existing annotations, e.g. from JPA or javax.validation (like @NotNull, @Pattern, @Column), or you can write inline constraints like @Pre(expr="x >= 0 && x <= y"). If the annotation is @Documented, the checks will also be visible in the Javadocs (so you don't have to describe them there as well).
OVal uses reflection, which can cause performance issues and other problems in some environments like Android; there you should consider a tool like Google's Cofoja, which has less functionality but relies on the compile-time Annotation Processing Tool instead of reflection
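As a sketch of the "simple checks" option from the first bullet, using Guava's Preconditions and Verify (the Account class and its methods are hypothetical):

import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Preconditions.checkNotNull;
import static com.google.common.base.Verify.verify;

public class TransferService {

    public void transfer(Account from, Account to, long amountInCents) {
        // preconditions on the arguments
        checkNotNull(from, "from account must not be null");
        checkNotNull(to, "to account must not be null");
        checkArgument(amountInCents > 0, "amount must be positive, was %s", amountInCents);

        from.withdraw(amountInCents);
        to.deposit(amountInCents);

        // postcondition-style sanity check on internal state
        verify(from.balance() >= 0, "from account overdrawn: %s", from.balance());
    }
}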
I think that many DbC libraries have been superseded by the built-in assert keyword, introduced in Java 1.4 (see the sketch after this list):
it is a built-in, no other library is required
it works with inheritance
you can activate/deactivate it on a per-package basis
easy to refactor (e.g. no assertions in comments)
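For instance (plain Java; assertions are enabled with the -ea JVM flag, optionally per package, e.g. -ea:com.example...):

public class Divider {

    public static int divide(int dividend, int divisor) {
        assert divisor != 0 : "precondition violated: divisor must be non-zero";
        int quotient = dividend / divisor;
        assert quotient * divisor + dividend % divisor == dividend : "postcondition violated";
        return quotient;
    }
}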
I personally think that the DbC libraries available at present leave a lot to be desired; none of the libraries I looked at played well with the Bean Validation API.
The libraries I looked at are documented here.
The Bean Validation API has a lot of overlap with the concepts from DbC. In certain cases the Bean Validation API cannot be used, for example with simple POJOs (non-CDI-managed code). IMO a thin wrapper around the Bean Validation API should suffice.
I found that the existing libraries are a little tricky to add to existing web projects, given that they are implemented either via AOP or bytecode instrumentation. With the advent of the Bean Validation API, this kind of complexity to implement DbC is probably unwarranted.
I have also documented my rant in this post and hope to build a small library which leverages the Bean Validation API (a rough sketch of the idea is below).
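As a rough sketch of that thin-wrapper idea, validating a plain POJO programmatically with the Bean Validation API (javax.validation) outside of any CDI-managed context (the Order class is illustrative):

import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;

public class PreconditionWrapper {

    static class Order {
        @NotNull
        String customerId;

        @Min(1)
        int quantity;
    }

    // precondition-style check: throw if the argument violates its constraints
    static <T> T requireValid(T argument) {
        Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
        Set<ConstraintViolation<T>> violations = validator.validate(argument);
        if (!violations.isEmpty()) {
            throw new IllegalArgumentException(violations.toString());
        }
        return argument;
    }

    public static void main(String[] args) {
        Order order = new Order(); // customerId is null and quantity is 0: two violations
        requireValid(order);       // fails fast
    }
}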