Writing a code generator that refers to existing Java classes [closed]

I'm trying to evaluate different approaches to have some code in our Java project generated automatically from definitions in a domain-specific language while building the project. I have manually written a code generator or two in the past but I have no experience with existing code generation frameworks. We have not yet decided whether to use such a framework or build the generator by hand.
I need help with a conceptual problem; I would like to understand how a code generator can be built which allows the DSL to refer to existing (hand-written) Java classes, methods and fields. It should be possible to refer to classes that are in the same compilation unit (e.g. Maven project) as the generated Java classes. This means that those hand-written classes cannot be compiled before the code generator is run and the code generator would have to look at Java source files in addition to everything required to be on the classpath for compiling those classes.
How do existing frameworks handle such cases, if at all? Do they parse the Java source files themselves or do they re-use some machinery of the Java compiler?
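(To make the first option concrete, here is a hedged sketch of the "parse the sources yourself" route, using the JavaParser library; the file path and any names are illustrative assumptions, not something the thread prescribes.)

import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.ast.CompilationUnit;
import com.github.javaparser.ast.body.MethodDeclaration;

import java.nio.file.Paths;

public class SourceInspector {
    public static void main(String[] args) throws Exception {
        // Parse a hand-written source file from the same compilation unit
        CompilationUnit cu = StaticJavaParser.parse(
                Paths.get("src/main/java/com/example/Handwritten.java"));
        // Collect method signatures the DSL compiler could validate against
        cu.findAll(MethodDeclaration.class).forEach(m ->
                System.out.println(m.getDeclarationAsString()));
    }
}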
I think this is the same problem that any (non-dynamic) non-Java language targeting the JVM faces, if it allows its own code to reference Java classes and vice versa in the same compilation unit. Maybe it is helpful to look at how those compilers work, unless they circumvent javac by also including a Java compiler themselves.
There are multiple reasons why the code generator needs access to the classes in the Java files of the same compilation unit:
I would like to provide semantics similar to those in Java where I can import <package>.* and then use the names of those classes without fully qualifying the name of each of them.
I would like to reject code in the DSL if it refers to symbols that don't exist or don't meet some required criteria.
There will be cases where I want to generate code that depends on the members of a class or the signatures of methods. An example would be to automatically generate a decorator or builder or implement an interface but where the base class or interface is not generated by the code generator.
I may want to use the type information of referenced symbols in the generated code, e.g. generating different code depending on the signature of a method.
Our project uses Maven. I'm interested in general approaches to solving these problems but information or examples that apply to Maven are greatly appreciated.
How can I extend Java with a DSL that allows the DSL compiler to refer to external Java elements (classes, methods, fields)?

It's actually unclear what you're asking; furthermore, this question is more theoretical than practical.
In any case, from my experience implementing my own DSL, there is no problem using Java class loaders to dynamically access newly generated and compiled Java classes. Also, if you are using Maven, all dependencies with compile (production) scope are loaded in the main class loader and are available for loading via reflection.
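A minimal sketch of that class-loader route, assuming the generated classes have already been compiled to a known output directory (the path and class name are illustrative):

import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;

public class GeneratedClassLoaderDemo {
    public static void main(String[] args) throws Exception {
        // Directory that the generated .class files were compiled into
        URL classesDir = new URL("file:///path/to/generated-classes/");
        try (URLClassLoader loader = new URLClassLoader(new URL[] { classesDir })) {
            Class<?> generated = loader.loadClass("com.example.GeneratedMapper");
            // Inspect the generated class via reflection
            for (Method method : generated.getDeclaredMethods()) {
                System.out.println(method);
            }
        }
    }
}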
Here are some useful links:
http://www.javaworld.com/article/2077260/learn-java/the-basics-of-java-class-loaders.html
http://tutorials.jenkov.com/java-reflection/dynamic-class-loading-reloading.html
http://docs.oracle.com/javase/tutorial/reflect/

Do not parse Java programs; use compiled classes instead. The referenced classes can be written in different languages, including other DSLs - the only common denominator is the class file format.
This causes a circular-dependency problem when a Java program refers to a DSL program and, at the same time, that DSL program refers back to the Java program. Possible solutions (the second is sketched below) are:
do not analyse any other programs while converting the DSL to Java; all possible errors would then be reported while compiling the generated Java code
redirect references to common interfaces, thus breaking the dependency loop
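A minimal sketch of the common-interface approach (all names are illustrative): both the hand-written and the generated code depend only on an interface, so neither side needs to see the other's source during generation.

// Hand-written interface that both sides depend on
public interface PriceSource {
    double priceOf(String symbol);
}

// Hand-written implementation, compiled independently of the generated code
class ManualPriceSource implements PriceSource {
    @Override
    public double priceOf(String symbol) {
        return 42.0; // illustrative stub
    }
}

// DSL-generated class that refers only to the interface
class GeneratedReport {
    private final PriceSource source;

    GeneratedReport(PriceSource source) {
        this.source = source;
    }

    String render(String symbol) {
        return symbol + " = " + source.priceOf(symbol);
    }
}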

Related

How can I access a java compile time parameter at runtime?

We are migrating a system written in C to Java and must retain existing processes (no debate). We currently "embed" compile-time information into the C application using the C preprocessor, for example:
cc -o xxx.o -DCOMP_ARG='"compile time arg"' xxx.c
The xxx.c file can then use "COMP_ARG" and its value will be embedded in the code and we have little worry about it being changed inadvertently.
We realize Java likes to use properties files; however, our requirements are such that some information **must** be embedded in the code, so properties files are not an option - these certain values cannot be specified at runtime. To illustrate the point, such data could be a date-stamp of when the file was compiled, but the exact data is irrelevant to the question.
We are looking for a way to specify at compile time various values that are available to the Java code. We are quite aware that Java does not have a pre-processor as does C, so the mechanism would be different.
Our current solution is using a code generation step (Maven), which does work; however, Eclipse is wreaking havoc trying to deal with the source files, so that we had to turn off "Build Automatically". We really want to find a more robust solution.
We appreciate any help, thanks.
The xxx.c file can then use "COMP_ARG" and its value will be embedded in the code and we have little worry about it being changed inadvertently.
...our requirements are such that some information must be embedded in the code....
We are looking for a way to specify at compile time various values that are available to the Java code. We are quite aware that Java does not have a pre-processor as does C, so the mechanism would be different.
It seems that the best way to solve this problem would be to make use of annotations in your code.
In Java, annotations are a kind of interface declaration, but they do not enforce a behavioral contract with an implementing class. Rather, they are meant to define a contract with some external framework, preprocessor, or with the compiler itself. Annotations are used extensively in Java EE 5.0 (and later) to specify configuration and behavior to the framework within which the developer's code runs. A similar idea appears in the JavaDoc documentation processor, where tags in doc comments let you specify and format the information that you intend to appear in the generated documentation.
Annotations can be defined to be accessible at runtime. In such a case, the primary mechanism for accessing annotations is the Java Reflection facility. For example, annotations with a retention policy of RUNTIME and defined on a class, can be accessed through that class's corresponding Class object:
import java.lang.annotation.Annotation;

Class<MyClass> myCls = MyClass.class; // the "class literal" for MyClass
Annotation[] annotations = myCls.getDeclaredAnnotations();
Annotations can include arguments for parameters to allow for more flexibility in configuration. The use of annotations is most convenient when the code itself can be so annotated.
A quick tutorial on how annotations are defined and used in Java is available here: https://docs.oracle.com/javase/tutorial/java/annotations/
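Putting the pieces together, here is a minimal sketch of the approach, assuming a build step generates the annotated class with the real value baked in (the CompiledWith annotation and BuildInfo class are illustrative assumptions):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Annotation that carries a compile-time value into the class file
@Retention(RetentionPolicy.RUNTIME)
@interface CompiledWith {
    String value();
}

// A build step would generate this class with the real value baked in
@CompiledWith("compile time arg")
class BuildInfo {
}

class ReadBuildInfo {
    public static void main(String[] args) {
        // Read the embedded value back at runtime via reflection
        CompiledWith info = BuildInfo.class.getAnnotation(CompiledWith.class);
        System.out.println("Compiled with: " + info.value());
    }
}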
I'm going to post my own answer, which seems to be "Can't be done" - what can't be done, apparently, is to provide to Java at compile time a set of parameters that is then available to the program at execution time. The solution appears to be to continue with what I am doing, which is to update a Java source file with the compile-time data, and to figure out how to coax Eclipse to stop over-writing the files.
Thanks to everyone who commented.

Java mapping: Selma vs MapStruct [closed]

Currently there are two popular Java object-to-object mapping frameworks that supersede Dozer (http://dozer.sourceforge.net/documentation/mappings.html); they are:
Selma - http://www.selma-java.org/
MapStruct - http://mapstruct.org/
With the exception of this page (http://vytas.io/blog/java/java-object-to-object-mapping-which-framework-to-choose-part-2/) I haven't been able to find much online about which framework is better than the other, or under what circumstances. I'm wondering if anyone can shed some light on this. In terms of functionality based on the documents, they seem to do the same thing.
(Original author of Selma, so a slightly different point of view)
Selma and MapStruct do the same job, with some differences. First, it appears that Selma's generated code is just a bit faster than MapStruct's (http://javaetmoi.com/wp-content/uploads/2015/09/2015-09-mapping-objet-objet2.png). The 0.13 release number does not really reflect the maturity of the code; Selma is stable and robust and has been in production use for two years.
The main idea behind Selma is to prohibit magic conversions and just automate all mappings without any side effects. When a mapping appears to be too complex, the developer should handle it himself using custom mappings or an interceptor.
Selma's footprint is built to be as small as possible; it depends only on JavaWriter and the JDK.
Selma tries to use only statically compiled generated code, without any reflection at runtime or pseudo-code written in string fields.
You can use composition to build a chain of mappers, and inside a single mapper you can have a global configuration that can be overridden on a per-method basis.
Compiler messages are built to give the developer early feedback, with tips to solve the issue and learn the API.
In the end, MapStruct is certainly more feature-rich, but Selma gives the developer all the tools needed for complex mapping, with the responsibility of writing the business logic. You could also find one of the two APIs nicer than the other from a user's perspective, so the best thing to do is to try both and choose the one you feel more comfortable with. It won't be time-consuming.
(Original author of MapStruct here, so naturally I am biased)
Indeed, both projects are based on the same general idea of generating mapping code at compile time; I recommend MapStruct for the following reasons (a minimal example follows the list):
Proven and stable codebase: MapStruct is the older of the two, having come up with the idea of mapping generation originally. It has been enhanced and polished over quite a long time, based on real-world feedback from usage in many different projects; we released the stable 1.0 Final last year
Larger developer and user community as per the number of committers (MapStruct, Selma) and user questions (MapStruct, Selma)
Feature-rich (some things supported in MapStruct that I didn't find (to the same extent) in the Selma docs):
Many built-in type conversions, including advanced support for JAXB types such as JAXBElement
Support for default values and constants
Mapping customizations through inline expressions
Sharing configurations across mappers
Nicely integrates with CDI and JSR 330 (in addition to Spring)
Eclipse plug-in available: still work in progress, but its quick fixes and auto-completions are already very helpful when designing mapper interfaces
IntelliJ plug-in: helps when editing mapper interfaces via auto-completion, go to referenced properties, refactoring support etc.
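For readers who haven't used either library, here is a minimal compile-time mapper in MapStruct's style; the @Mapper/@Mapping annotations and the Mappers factory are MapStruct's actual API, while the Car/CarDto types are illustrative assumptions:

import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import org.mapstruct.factory.Mappers;

// Illustrative source and target types
class Car {
    private String make;
    private int numberOfSeats;
    public String getMake() { return make; }
    public void setMake(String make) { this.make = make; }
    public int getNumberOfSeats() { return numberOfSeats; }
    public void setNumberOfSeats(int numberOfSeats) { this.numberOfSeats = numberOfSeats; }
}

class CarDto {
    private String make;
    private int seatCount;
    public String getMake() { return make; }
    public void setMake(String make) { this.make = make; }
    public int getSeatCount() { return seatCount; }
    public void setSeatCount(int seatCount) { this.seatCount = seatCount; }
}

// The annotation processor generates a CarMapperImpl class at compile time
@Mapper
interface CarMapper {
    CarMapper INSTANCE = Mappers.getMapper(CarMapper.class);

    @Mapping(source = "numberOfSeats", target = "seatCount")
    CarDto carToCarDto(Car car);
}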

Any library for clean code generation from i18n resource bundles? [closed]

In my project I would like to have compile-time checks on my existing resource bundle. I already have a set of *.properties localized files and I'm about to hook them up to some i18n tool. I was thinking about regular ResourceBundles, but I don't like the fact that this mechanism does not guarantee any kind of checks, neither compile-time checks nor maintenance checks like finding duplicates or finding unused keys.
So, I'm looking for a library which would take my existing *.properties files and convert them into neat and clean Java code which I could use in my project.
The best possible outcome would be to have a mechanism similar to GWT i18n support: one clean interface with all messages as separate methods, as sketched below.
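For illustration, something along these lines (a hypothetical generated interface plus one possible ResourceBundle-backed implementation; all names are made up):

import java.text.MessageFormat;
import java.util.ResourceBundle;

// Generated from messages.properties, e.g.:
//   greeting=Hello, {0}!
//   farewell=Goodbye!
interface Messages {
    String greeting(String name);
    String farewell();
}

// One possible generated implementation backed by a ResourceBundle
class MessagesImpl implements Messages {
    private final ResourceBundle bundle = ResourceBundle.getBundle("messages");

    @Override
    public String greeting(String name) {
        return MessageFormat.format(bundle.getString("greeting"), name);
    }

    @Override
    public String farewell() {
        return bundle.getString("farewell");
    }
}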
I have looked at jlibs and ForgeRock. I really like jlibs, but it's not a separate lib, so it's hard for me to imagine introducing such a huge library dependency just for i18n. ForgeRock does pretty much what I would like, but it produces constants rather than clean interfaces to work with, like jlibs does.
This blog entry is also helpful in understanding which approach I would like to use. I have done a lot of research on the available i18n tools; I just cannot find 'that one' which would suit my needs best.
Regards.
Another library satisfying your requirements of code generation would be i18n-binder.
Personally, I would approach this problem from another angle, using the gettext framework you would mark translatable strings in the source code and generate the resource bundles from them. There are tools and editors that can then update the translations based on the extracted strings, and detect no longer used or modified strings.
I am currently working on exactly the kind of library you are looking for; check it out. It's still a work in progress, but I should have my first release fairly soon.
The first release will contain just support for annotation-based translations. I don't yet have any ideas on how to migrate existing projects into c10n style, though. Any ideas and suggestions are always welcome!
To solve this problem I implemented a message compiler, which creates the resource bundle files and constant definitions (as a Java enum for the keys) from one single source file. The constants can then be used in the Java source code, which is a much safer way. The message compiler can not only be used for Java: it also creates resource files and constants for Objective-C or Swift, and it can be extended for other programming environments.
I like JUnit. Not exactly what you are looking for, but by creating tests you can be sure that all the items in the property files are available.
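A minimal sketch of that idea (key names are illustrative; getString throws MissingResourceException for an absent key, which makes the test fail):

import java.util.ResourceBundle;
import org.junit.Test;

public class MessagesTest {

    // Fails with MissingResourceException if an expected key is absent
    @Test
    public void allExpectedKeysArePresent() {
        ResourceBundle bundle = ResourceBundle.getBundle("messages");
        for (String key : new String[] { "greeting", "farewell" }) {
            bundle.getString(key);
        }
    }
}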

Reading bytecode from an AnnotationProcessor [duplicate]

Edit - this appears to be a dupe of Plugging in to Java compilers
I would like to implement an AnnotationProcessor, for use with the apt tool, that will be invoked after a class is compiled to bytecode and that can read and modify the bytecode.
The reason for doing this is that I want to translate annotated methods to another language and replace the Java methods with stubs that invoke the translated versions.
However, the AnnotationProcessorEnvironment interface only provides methods to generate new classes, not to read back a class file that was generated in a previous round.
The instrumentation API does something similar to what I want, but only at run-time. I am looking for a way to do this at compile time.
Related: Plugging in to Java compilers
I had a look when I wanted to do some manipulation in the compiler, but ended up using a post-processor.
You can manipulate the abstract syntax tree (AST) using the APT, but only with compiler-specific hacks. If you want a sample of how that's done, Project Lombok does it with the Sun javac and Eclipse compilers. At present, there doesn't seem to be a better method.
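To illustrate the post-processor route mentioned above, here is a hedged sketch that reads a compiled class with the ASM library after javac has run; ClassReader/ClassVisitor are ASM's actual API, while the class file path is an illustrative assumption:

import java.nio.file.Files;
import java.nio.file.Paths;
import org.objectweb.asm.AnnotationVisitor;
import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.Opcodes;

public class BytecodePostProcessor {
    public static void main(String[] args) throws Exception {
        byte[] classBytes = Files.readAllBytes(
                Paths.get("target/classes/com/example/Annotated.class"));
        // Report class-level annotations found in the compiled bytecode
        new ClassReader(classBytes).accept(new ClassVisitor(Opcodes.ASM9) {
            @Override
            public AnnotationVisitor visitAnnotation(String descriptor, boolean visible) {
                System.out.println("found class annotation: " + descriptor);
                return super.visitAnnotation(descriptor, visible);
            }
        }, 0);
    }
}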

A good Design-by-Contract library for Java? [closed]

A few years ago, I did a survey of DbC packages for Java, and I wasn't wholly satisfied with any of them. Unfortunately I didn't keep good notes on my findings, and I assume things have changed. Would anybody care to compare and contrast different DbC packages for Java?
There is a nice overview on Wikipedia about Design by Contract; at the end there is a section on languages with third-party support libraries, which includes a nice series of Java libraries. Most of these Java libraries are based on Java assertions.
In case you only need precondition checking, there is also a lightweight validate-method-arguments solution at SourceForge, under Java Argument Validation (a plain Java implementation).
Depending on your problem, maybe the OVal framework for field/property constraint validation is a good choice. This framework lets you declare the constraints in all kinds of different forms (annotations, POJO, XML), create custom constraints through POJOs or scripting languages (JavaScript, Groovy, BeanShell, OGNL, MVEL), and it also partly implements Programming by Contract.
Google has an open source library called Contracts for Java.
Contracts for Java is our new open source tool. Preconditions, postconditions, and invariants are added as Java boolean expressions inside annotations. By default these do nothing, but enabled via a JVM argument, they're checked at runtime.
• @Requires, @Ensures, @ThrowEnsures and @Invariant specify contracts as Java boolean expressions
• Contracts are inherited from both interfaces and classes and can be selectively enabled at runtime
contracts for java.
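A small usage sketch, assuming Cofoja's com.google.java.contract annotations (the Account class is illustrative):

import com.google.java.contract.Ensures;
import com.google.java.contract.Invariant;
import com.google.java.contract.Requires;

// Contract expressions are plain Java boolean expressions inside strings
@Invariant("balance >= 0")
class Account {
    private int balance;

    @Requires("amount > 0")
    @Ensures("balance == old(balance) + amount")
    void deposit(int amount) {
        balance += amount;
    }
}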
I tested Contract4J once and found it usable but not perfect.
You create contracts for before and after method calls, and invariants over the whole class.
The contract is created as an assertion for the method. The problem is that the contract itself is written in a string, so you don't have IDE support for the contracts or compile-time checking of whether the contract still works.
A link to the library
It's been a long time since I've looked at these, but found some old links. One was for JASS.
The other one that I had used (and liked) was iContract by Reliable Systems. It had an Ant task that you would run as a preprocessor. However, I can't seem to find it with some Google searches; it looks like it has vanished. The original site is now a link farm. Check out this link for some possible ways to get to it.
I'd highly recommend you to consider the Java Modeling Language (JML).
There is a Groovy extension that enables Design by Contract(tm) in Groovy/Java code - GContracts. It uses so-called closure annotations to specify class invariants, pre- and postconditions. Examples can be found on the project's GitHub wiki.
Major advantage: it is only a single jar without external dependencies, and it can be resolved via Maven-compliant repositories since it has been placed in the central Maven repo.
If you want plain and simple basic support for expressing your contracts, have a look at valid4j (found on Maven Central as org.valid4j:valid4j). It lets you express your contracts using regular hamcrest matchers in plain code (no annotations, nor comments).
For preconditions and postconditions (basically assertions -> throwing AssertionError):
import static org.hamcrest.Matchers.*; // hamcrest matchers used below
import static org.valid4j.Assertive.*;

require(inputList, hasSize(greaterThan(0)));
...
ensure(result, lessThan(4.0));
If you are not happy with the default global policy (throwing AssertionError), valid4j provides a customization mechanism that lets you provide your own implementation of org.valid4j.AssertiveProvider.
Links:
http://www.valid4j.org/
https://github.com/helsing/valid4j
I would suggest a combination of a few tools:
Java's assert condition..., or its more advanced Groovy cousin; Guava's Preconditions.checkXXXX(condition...) and Verify.verify(condition...); or a library like AssertJ - if all you need is to do simple checks in your 'main' or 'test' code (see the sketch after this list)
you'll get more features with a tool like OVal; it can check both objects and method arguments and results, and you can also fire checks manually (e.g. to show validation errors in the UI before a method is called). It can understand existing annotations, e.g. from JPA or javax.validation (like @NotNull, @Pattern, @Column), or you can write inline constraints like @Pre(expr="x >= 0 && x <= y"). If the annotation is @Documented, the checks will also be visible in Javadocs (you don't have to describe them there as well).
OVal uses reflection, which can cause performance issues and other problems in some environments like Android; in that case you should consider a tool like Google's Cofoja, which has less functionality but depends on the compile-time Annotation Processing Tool instead of reflection
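As promised above, a minimal sketch of the simple-checks option, using Guava's Preconditions and Verify (the Divider class is illustrative):

import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Preconditions.checkNotNull;
import static com.google.common.base.Verify.verify;

class Divider {
    static double divide(Double numerator, double denominator) {
        // Preconditions: reject bad input at the method boundary
        checkNotNull(numerator, "numerator must not be null");
        checkArgument(denominator != 0, "denominator must not be zero");
        double result = numerator / denominator;
        // Postcondition-style sanity check on the computed value
        verify(!Double.isNaN(result), "unexpected NaN result");
        return result;
    }
}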
I think that many DbC libraries were superseded by the built-in assert keyword, introduced in Java 1.4 (a small sketch follows the list):
it is built in, so no other library is required
it works with inheritance
you can activate/deactivate it on a per-package basis
it makes refactoring easy (e.g. no assertions in comments)
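The built-in form being referred to (assertions are no-ops unless the JVM runs with -ea, or -ea:com.example... for a single package; the Stack class is illustrative):

class Stack {
    private int size;

    void pop() {
        assert size > 0 : "pop() on empty stack"; // precondition
        size--;
        assert size >= 0;                         // invariant-style check
    }
}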
I personally think that the DbC libraries available at present leave a lot to be desired; none of the libraries I looked at played well with the Bean Validation API.
The libraries I looked at have been documented here.
The Bean Validation API has a lot of overlap with the concepts from DbC. In certain cases the Bean Validation API cannot be used, e.g. in simple POJOs (non-CDI-managed code). IMO a thin wrapper around the Bean Validation API should suffice.
I found that the existing libraries are a little tricky to add into existing web projects, given that they are implemented either via AOP or bytecode instrumentation. Probably with the advent of the Bean Validation API this kind of complexity to implement DbC is unwarranted.
I have also documented my rant in this post and hope to build a small library which leverages the Bean Validation API.
