A few years ago, I did a survey of DbC packages for Java, and I wasn't wholly satisfied with any of them. Unfortunately I didn't keep good notes on my findings, and I assume things have changed. Would anybody care to compare and contrast different DbC packages for Java?
There is a nice overview on Wikipedia about Design by Contract; at the end there is a section regarding languages with third-party support libraries, which includes a nice series of Java libraries. Most of these Java libraries are based on Java assertions.
If you only need precondition checking, there is also a lightweight method-argument validation solution at SourceForge, under Java Argument Validation (a plain Java implementation).
Depending on your problem, the OVal framework for field/property constraint validation may be a good choice. This framework lets you declare constraints in several different forms (annotations, POJO, XML), and create custom constraints through POJOs or scripting languages (JavaScript, Groovy, BeanShell, OGNL, MVEL). It also partly implements programming by contract.
Google has an open source library called Contracts for Java.
Contracts for Java is our new open source tool. Preconditions,
postconditions, and invariants are added as Java boolean expressions
inside annotations. By default these do nothing, but enabled via a JVM
argument, they’re checked at runtime.
• @Requires, @Ensures, @ThrowEnsures and @Invariant specify contracts as Java boolean expressions
• Contracts are inherited from both interfaces and classes and can be selectively enabled at runtime
Contracts for Java.
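For a feel of the annotation style, here is a minimal sketch based on the Cofoja documentation (the Account class and its members are made up; the annotations and the old() operator are from the project's docs):

import com.google.java.contract.Ensures;
import com.google.java.contract.Invariant;
import com.google.java.contract.Requires;

// Contract expressions are plain Java boolean expressions inside annotations;
// they are ignored by default and only checked when contract checking is enabled.
@Invariant("balance >= 0")
class Account {
    private int balance;

    @Requires("amount > 0")
    @Ensures("balance == old(balance) + amount")
    void deposit(int amount) {
        balance += amount;
    }
}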
I tested Contract4J once and found it usable but not perfect.
You create contracts for before and after method calls, and invariants over the whole class.
The contract is created as an assertion for the method. The problem is that the contract itself is written in a string, so you don't have IDE support for the contracts, nor compile-time checking that the contract still works.
A link to the library
It's been a long time since I've looked at these, but I found some old links. One was for JASS.
The other one that I had used (and liked) was iContract by Reliable Systems. It had an Ant task that you would run as a preprocessor. However, I can't seem to find it with a few Google searches; it looks like it has vanished. The original site is now a link farm. Check out this link for some possible ways to get to it.
I'd highly recommend considering the Java Modeling Language (JML).
There is a Groovy extension that enables Design by Contract(tm) in Groovy/Java code - GContracts. It uses so-called closure annotations to specify class invariants and pre- and postconditions. Examples can be found on the project's GitHub wiki.
Major advantage: it is only a single jar without external dependencies, and it can be resolved via Maven-compliant repositories since it has been placed in the central Maven repo.
If you want plain and simple basic support for expressing your contracts, have a look at valid4j (found on Maven Central as org.valid4j:valid4j). It lets you express your contracts using regular hamcrest matchers in plain code (no annotations, nor comments).
For preconditions and postconditions (basically assertions, throwing AssertionError):
import static org.hamcrest.Matchers.*;
import static org.valid4j.Assertive.*;

// precondition: fails with AssertionError unless the matcher matches
require(inputList, hasSize(greaterThan(0)));
...
// postcondition
ensure(result, lessThan(4.0));
If you are not happy with the default global policy (throwing AssertionError), valid4j provides a customization mechanism that lets you provide your own implementation of org.valid4j.AssertiveProvider.
Links:
http://www.valid4j.org/
https://github.com/helsing/valid4j
I would suggest a combination of a few tools (a small combined sketch follows this list):
Java's assert condition..., or its more advanced Groovy cousin; Guava's Preconditions.checkXXXX(condition...) and Verify.verify(condition...); or a library like AssertJ, if all you need is to do simple checks in your 'main' or 'test' code
you'll get more features with a tool like OVal; it can check both objects as well as method arguments and results, and you can also fire checks manually (e.g. to show validation errors in the UI before a method is called). It can understand existing annotations, e.g. from JPA or javax.validation (like @NotNull, @Pattern, @Column), or you can write inline constraints like @Pre(expr="x >= 0 && x <= y"). If the annotation is @Documented, the checks will also be visible in the Javadocs (so you don't have to describe them there as well).
OVal uses reflection, which can cause performance issues and other problems in some environments, like Android; there you should consider a tool like Google's Cofoja, which has less functionality but depends on the compile-time Annotation Processing Tool instead of reflection
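As a small combined sketch (all class and field names are made up; the OVal annotations and Validator API are taken from its documentation, the checkArgument call from Guava):

import java.util.List;
import com.google.common.base.Preconditions;
import net.sf.oval.ConstraintViolation;
import net.sf.oval.Validator;
import net.sf.oval.constraint.Min;
import net.sf.oval.constraint.NotNull;

class Order {
    @NotNull          // OVal field constraints, checked on demand
    String customerId;
    @Min(1)
    int quantity;
}

class OrderService {
    private final Validator validator = new Validator();

    void place(Order order) {
        // simple Guava-style precondition in 'main' code
        Preconditions.checkArgument(order != null, "order must not be null");
        // fire the OVal checks manually, e.g. before persisting
        List<ConstraintViolation> violations = validator.validate(order);
        if (!violations.isEmpty()) {
            throw new IllegalArgumentException(violations.toString());
        }
    }
}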
I think that many DbC libraries were superseded by the built-in assert keyword, introduced in Java 1.4 (a short example follows this list):
it is built in, so no other library is required
it works with inheritance
you can activate/deactivate it on a per-package basis
it is refactoring-friendly (e.g. no assertions hidden in comments)
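For example (hypothetical class; assertions are disabled by default and enabled with java -ea, or per package with -ea:com.example.mypackage...):

class BoundedStack {
    private int size;

    void push(Object element) {
        // precondition
        assert element != null : "element must not be null";
        int oldSize = size;
        // ... actual push logic elided ...
        size = oldSize + 1;
        // postcondition
        assert size == oldSize + 1 : "size must grow by exactly one";
    }
}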
I personally think that the DbC libraries available at present leave a lot to be desired; none of the libraries I looked at played well with the Bean Validation API.
The libraries I looked at have been documented here
The Bean Validation API has a lot of overlap with the concepts from DbC. In certain cases the Bean Validation API cannot be used, e.g. in simple POJOs (non-CDI-managed code). IMO a thin wrapper around the Bean Validation API should suffice.
I found that the existing libraries are a little tricky to add to existing web projects, given that they are implemented either via AOP or bytecode instrumentation. With the advent of the Bean Validation API, this kind of complexity to implement DbC is probably unwarranted.
I have also documented my rant in this post and hope to build a small library which leverages the Bean Validation API
Related
I am interested in modifying Java syntax and some implicit paradigms. Since I develop with Eclipse, which provides its own compiler that can also be used standalone, I was wondering whether it is possible to extend ecj to respect additional grammar rules (and correctly handle them).
My syntactical changes are all resolvable by removing elements from the AST and creating some new ones, so I assume that what I want to do is possible without diving into bytecode.
Essentially, what I want to do could be done by 'virtually' modifying the source code before the actual compilation. However, I suspect that doing so would mess up the source mapping, which would make debugging hell.
On a side note: I am aware of Project Lombok, which extends and alters class compilation, but Lombok uses annotations only and does not, strictly speaking, modify syntax. So what I want to do is more invasive to the language specs.
As Object Teams has been mentioned in comments:
(1) Object Teams itself extends JDT for its own language OT/J, which is an extension of Java. This is done with a dual strategy:
We maintain a fork of org.eclipse.jdt.core. While this is quite heavy lifting, it successfully demonstrates that the JDT architecture is suitable for modification.
We use our own concept of role objects to non-invasively adapt the behavior of other parts of the IDE (notably org.eclipse.jdt.ui) to reflect the semantics of OT/J.
(2) I have a few (oldish) blog posts that demonstrate how OT/J can be used for creating non-invasive variants of JDT including support for extended syntax:
IDE for your own language embedded in Java? (part 1)
IDE for your own language embedded in Java? (part 2)
Get for free what Coin doesn’t buy you
Disclaimer: I am the author of OT/J and lead of its implementation, and I later became a committer on Eclipse JDT.
For further questions, there's a forum.
I am trying to understand how the Checker Framework implements pluggable type checkers.
By reading the documentation,
Checker Framework (Maven)
I see a lot of setup involved, and it looks to me either outdated or not quite maintained.
As far as I have read, Java 8 supports both type annotations and pluggable type checkers via JSR 308 and JSR 269, allowing an interface to create custom annotations on almost every element and to process them with a snippet of interfacing code, using a simple flag on javac (-processor), which Maven supports through META-INF/services/javax.annotation.processing.Processor
Then why does the documentation state that Checker requires this much customization?:
- the com.google.errorprone.javac "error-prone" JDK - why, if javac already supports custom annotation processors (JSR 269)?
- the Maven dependency plugin
- the Maven compiler plugin with annotationProcessorPaths (which, as I understand it, overrides anything from the META-INF file) instead of `META-INF/services/javax.annotation.processing.Processor`
I presume the Checker Framework has effectively remained a collection of custom annotation processors since this Java 8 feature. Is that so? It no longer seems necessary to enable the compiler, to create custom checks (JSR 269), or to enable /* @Nullable */ and the like... I'll be happy to stand corrected
I see a lot of setup involved, and it looks to me either outdated or not quite maintained.
What, specifically, looks outdated or "not quite maintained"? What is your evidence?
If you just make unsubstantiated assertions, the community cannot help you.
As far as I have read, Java 8 supports both type annotations and pluggable type checkers via JSR 308 and JSR 269
Your reading is incorrect. JSR 308 supports Type Annotations, but JSR 269 does not support Pluggable Type Checkers. You need a third-party tool such as the Checker Framework to perform pluggable type-checking.
Can you point out the specific text that led you to this conclusion? Specifics would be helpful rather than just an assertion without support.
to enable /* @Nullable */
Support for annotations in comments was ended 20 months ago. Can you point out the text that led you to your question about annotations in comments?
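For reference, this is roughly what pluggable type-checking with the Checker Framework looks like today, with type annotations in code rather than in comments (the Demo class is made up):

import org.checkerframework.checker.nullness.qual.Nullable;

class Demo {
    // Compiled with:
    //   javac -processor org.checkerframework.checker.nullness.NullnessChecker Demo.java
    // the Nullness Checker reports the possible null dereference below.
    static int length(@Nullable String s) {
        return s.length(); // error: dereference of possibly-null reference s
    }
}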
Currently there are two main popular Java object-to-object mapping frameworks that supersede Dozer (http://dozer.sourceforge.net/documentation/mappings.html); they are:
Selma - http://www.selma-java.org/
MapStruct - http://mapstruct.org/
With the exception of this page (http://vytas.io/blog/java/java-object-to-object-mapping-which-framework-to-choose-part-2/) I haven't been able to find much online about which framework is better than the other, or under what circumstances each is better. Wondering if anyone can shed some light on this. In terms of functionality based on the documents, they seem to do the same thing.
(Original author of Selma, so a slightly different point of view)
Selma and MapStruct do the same job, with some differences. First, it appears that Selma-generated code is just a bit faster than MapStruct's (http://javaetmoi.com/wp-content/uploads/2015/09/2015-09-mapping-objet-objet2.png). The 0.13 release number does not really reflect the maturity of the code: Selma is stable and robust, and has been in use in production for 2 years.
The main idea behind Selma is to prohibit magic conversions and just automate all mappings without any side effects. When a mapping appears to be too complex, the developer should handle it himself using custom mappings or an interceptor.
The footprint of Selma is built to be as small as possible: it depends only on JavaWriter and the JDK.
Selma tries to use only statically compiled generated code, without any reflection at runtime or pseudo-code written in string fields.
You can use composition to build a chain of mappers, and inside a single mapper you can have a global configuration that can be overridden on a per-method basis.
Compiler messages are built to give the developer early feedback, tips to solve the issue, and help in learning the API.
In the end, MapStruct is surely more feature-rich, but Selma gives the developer all the tools needed for complex mapping, with the responsibility of writing the business logic. You could also find one of the two APIs nicer than the other from a user perspective, so the best thing to do is to try both and choose the one you feel more comfortable with. It won't be time-consuming.
(Original author of MapStruct here, so naturally I am biased)
Indeed, both projects are based on the same general idea of generating mapping code at compile time; I recommend MapStruct for the following reasons (a small sketch of the mapper style follows the list):
Proven and stable codebase: MapStruct is the older of the two, having come up with the idea of mapping generation originally. It has been enhanced and polished over quite a long time, based on real-world feedback from usage in many different projects; we released the stable 1.0 Final last year
Larger developer and user community as per the number of committers (MapStruct, Selma) and user questions (MapStruct, Selma)
Feature-rich (some things supported in MapStruct I didn't find (to the same extent) in the Selma docs):
Many built-in type conversions, including advanced support for JAXB types such as JAXBElement
Support for default values and constants
Mapping customizations through inline expressions
Sharing configurations across mappers
Nicely integrates with CDI and JSR 330 (in addition to Spring)
Eclipse plug-in available: Still a work in progress, but its quickfixes and auto-completions are already very helpful when designing mapper interfaces
IntelliJ plug-in: helps when editing mapper interfaces via auto-completion, go to referenced properties, refactoring support etc.
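To give a feel for the declarative style both answers describe, here is a minimal MapStruct mapper along the lines of its reference documentation (Car and CarDto are hypothetical types; the implementation is generated at compile time):

import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import org.mapstruct.factory.Mappers;

@Mapper
public interface CarMapper {

    CarMapper INSTANCE = Mappers.getMapper(CarMapper.class);

    // maps Car.numberOfSeats onto CarDto.seatCount; other properties map by name
    @Mapping(source = "numberOfSeats", target = "seatCount")
    CarDto carToCarDto(Car car);
}

Usage is then simply CarDto dto = CarMapper.INSTANCE.carToCarDto(car);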
In software development we all use libraries from software providers. Consider a class A with functions x, y and z. I want my development team to avoid using function x. Instead of just telling them not to use it, I had an idea: inherit the class and override all the functions; for function x throw an UnsupportedOperationException, and for the rest call the super methods. But there is still a problem: developers can use the base class A directly. How do I prevent class A from being used directly? I found similar functionality in OSGi, where library bundles can be brought in and then not exported, and so on. Is there any way to achieve this in Java?
I suppose code reviews exist for these reasons. Consider a situation where you cannot edit the source of a third party; what would you do? Like Siddharth says, subclass it, throw a meaningful exception, and document it with clear reasons. If someone is using the base class even after that, it is mostly not out of ignorance but out of curiosity. That kind of thing can be appreciated personally and for learning, but for the project's sake the developer has to follow the guidelines.
I think simply telling your developers what to do is preferred over a complex software solution. Sometimes the simple thing is better.
But, if you insist on going down this path, you can enforce your architecture standards using aspects if you're a Spring user. Weave the offending methods with an aspect that throws an exception if they're called.
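A sketch of such an aspect in AspectJ's annotation style (all names are made up; note that a call() pointcut against a library class needs AspectJ compile- or load-time weaving of your own code, since Spring's proxy-based AOP cannot advise calls into arbitrary third-party classes):

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class ForbiddenMethodGuard {

    // matches every place in the woven code that calls A.x(..)
    @Before("call(* com.thirdparty.A.x(..))")
    public void blockCallsToX() {
        throw new UnsupportedOperationException(
                "A.x() must not be used; see the team coding guidelines");
    }
}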
You can edit the library class file in a hex editor and modify its access modifier from public to package-private. You can also rename it and then use inheritance to wrap the class. Here you can find the class file specification. I once used this technique to substitute a JDBC driver class with a wrapper class that provided some additional logging and other useful tricks.
There is a variety of tools that check source code for adherence to certain rules, such as formatting, dead code, naming conventions for variables etc. Popular ones for Java include the Maven Enforcer plugin, checkstyle and PMD.
These might allow you to write a rule that forbids certain method calls; then you could check automatically at compile time. As far as I can tell, unfortunately none of the tools above supports "illegal method calls" out of the box; however, at least for PMD, writing new checks is fairly simple.
I have written a pretty extensive REST API using Java Jersey (and JAXB). I have also written the documentation using a wiki, but it's been a totally manual process, which is very error-prone; especially when we make modifications, people tend to forget to update the wiki.
From looking around, most other REST APIs' documentation also seems to be created manually. But I'm wondering if there's maybe a good solution to this.
The kind of things which need to be documented for each endpoint are:
Service Name
Category
URI
Parameter
Parameter Types
Response Types
Response Type Schema (XSD)
Sample requests and responses
Request type (GET/PUT/POST/DELETE)
Description
Error codes which may be returned
And then of course there are some general things which are global such as
Security
Overview of REST
Error handling
Etc
These general things are fine to describe once and don't need to be automated, but for the web service methods themselves it seems highly desirable to automate it.
I've thought of maybe using annotations and writing a small program which generates XML, and then an XSLT which generates the actual documentation in HTML. Does it make more sense to use a custom XDoclet?
Swagger is a beautiful option. It's a project on GitHub, has Maven integration and loads of other options to keep it flexible.
Integration guide: https://github.com/swagger-api/swagger-core/wiki
More Information: http://swagger.io/
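As a taste, a JAX-RS resource annotated for swagger-core might look like this (the resource and model names are made up; the annotations are from io.swagger.annotations):

import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import io.swagger.annotations.ApiResponse;
import io.swagger.annotations.ApiResponses;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Api("users")
@Path("/users")
public class UserResource {

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    @ApiOperation(value = "Find a user by id", response = User.class)
    @ApiResponses(@ApiResponse(code = 404, message = "User not found"))
    public Response getUser(@PathParam("id") String id) {
        // ... lookup elided ...
        return Response.ok().build();
    }
}

// hypothetical model class
class User {
    public String id;
    public String name;
}

Swagger then serves a machine-readable description (e.g. swagger.json) that tools like Swagger UI render as browsable documentation.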
Unfortunately, Darrel's answer is technically correct but is hocus-pocus in the real world. It's based on an ideal that only some agree on, and even if you were very careful about it, the chances are that for some reason outside your control you can't conform exactly.
Even if you could, other developers that might have to use your API may not care about or know the details of RESTful patterns... Remember that the point of creating the API is to make it easy for others to use, and good documentation is a must.
Achim's point about the WADL is good, however. Because it exists, we should be able to create a basic tool for generating documentation of the API.
Some folks have taken this route, and an XSL stylesheet has been developed to do the transform:
https://wadl.dev.java.net/
Although I'm not sure it will totally fit your needs, take a look at Enunciate. It seems like a good documentation generator for various web-service architectures.
EDIT: Enunciate is now available under the GitHub umbrella
You might be interested in Jersey's ability to provide a so-called WADL description of all published resources, in XML format, at runtime (generated automatically from annotations). This should already contain what you need for basic documentation. Further, you might be able to add additional JavaDoc, though that requires more configuration.
Please look here:
https://jersey.java.net/documentation/latest/wadl.html
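For instance, with a JAX-RS 2.0 client you can fetch the generated description at runtime (the base URL is hypothetical):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class WadlFetch {
    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        // Jersey publishes the auto-generated WADL at <base-uri>/application.wadl
        String wadl = client.target("http://localhost:8080/api")
                .path("application.wadl")
                .request(MediaType.APPLICATION_XML)
                .get(String.class);
        System.out.println(wadl);
    }
}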
Darrel's answer is exactly right. This kind of description must not be given to clients of a REST API, because it will lead the client developer to couple the implementation of the client to the current implementation of the service. This is what REST's hypermedia constraint aims to avoid.
You might still develop an API that is described that way, but you should be aware that the resulting system will not implement the REST architectural style and will therefore not have the properties (esp. evolvability) guaranteed by REST.
Your interface might still be a better solution than RPC for example. But be aware what it is that you are building.
Jan
You might find rest-tool useful.
It follows a language-agnostic approach to writing specifications, mock implementations and automated unit tests for RESTful APIs.
You can use it only for documenting your APIs, but the specification can immediately be used to quality-assure the implementation of the real services.
If your services are not fully implemented yet but should, for example, be used by a web frontend application, rest-tool provides instant mocking based on the service description. Content schema validation (JSON Schema) can also easily be added alongside the documentation, as well as used by the unit tests.
I hate to be the bearer of bad news, but if you feel the need to document the things you listed, then you probably did not create a REST interface.
REST interfaces are documented by identifying a single root URL and then by describing the media type of the representation that is returned from that URL and all the media types that can be accessed via links in that representation.
What media types are you using?
Also, put a link to RFC 2616 in your docs. That should explain to any consumer how to interact with your service.