The documentation of Maven Compiler plugin mentions the following:
annotationProcessors:
Names of annotation processors to run. Only applies to JDK 1.6+. If not
set, the default annotation processors discovery process applies.
What is the default annotation processors discovery process here? Is there any other way to set up annotation processors than this configuration tag?
I've found that the Getting Started with the Annotation Processing Tool (apt) documentation mentions a default discovery procedure, but it works with factory classes, not processors, and unfortunately it relies on tools.jar and com.sun packages from the JDK. Is this the default annotation processors discovery process?
The default way to make an annotation processor available to the compiler is to register it in a file named META-INF/services/javax.annotation.processing.Processor. The file can list multiple processors: each fully-qualified class name goes on its own line, with a newline at the end. The compiler defaults to using processors discovered this way if none are specified explicitly.
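For example, a jar shipping two processors (the class names below are purely illustrative) would contain a META-INF/services/javax.annotation.processing.Processor file with:
com.example.MyProcessor
com.example.AnotherProcessor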
Is there a way to overwrite a configuration in a Quarkus extension with a hard-coded value?
What I'm trying to do: I am creating a custom Quarkus extension for JSON logging, based on quarkus-logging-json but with additional (non-static) fields. I reuse some classes from that extension's runtime library, so it is a Maven dependency of my extension's runtime module (and its deployment module also has to be declared as a dependency of my deployment module, because the Quarkus extension plugin checks this).
It seems to work fine, except that I now have 2 formatters, and the following line is logged:
LogManager error of type GENERIC_FAILURE: Multiple console formatters were activated
I would like to disable the quarkus-logging-json extension completely by hard-coding these values:
quarkus.console.json.enable=false
quarkus.file.json.enable=false
Is there a way to do this?
Thank you.
An extension cannot override runtime configuration values; it can, however, set a default value using io.quarkus.deployment.builditem.RunTimeConfigurationDefaultBuildItem.
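A minimal sketch of such a build step in your extension's deployment module might look like this (the class name is made up, and the property keys are the ones from the question):

import io.quarkus.deployment.annotations.BuildStep;
import io.quarkus.deployment.builditem.RunTimeConfigurationDefaultBuildItem;

public class JsonLoggingProcessor {

    // Sets a default that users can still override in application.properties.
    @BuildStep
    RunTimeConfigurationDefaultBuildItem disableConsoleJson() {
        return new RunTimeConfigurationDefaultBuildItem("quarkus.console.json.enable", "false");
    }

    @BuildStep
    RunTimeConfigurationDefaultBuildItem disableFileJson() {
        return new RunTimeConfigurationDefaultBuildItem("quarkus.file.json.enable", "false");
    }
}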
I have a Spring Boot application that works as expected when run with the embedded Tomcat, but I noticed that if I try to run it from an existing Tomcat instance that I'm using for a previous project, it fails with a NoClassDefFoundError for a class that I don't use anywhere in my application.
I noticed that the /lib directory contained a single jar with a few Spring-annotated classes, so as a test I cleaned out the /lib directory, which resolved the issue. My assumption is that Spring sees some of the configurations/beans/imports on the classpath because they exist in the /lib directory, and is either trying to autoconfigure something on its own or actually trying to instantiate some of these classes.
So then my question is - assuming I can't always fully control the contents of everything on the classpath, how can I prevent errors like this from occurring?
EDIT
For a little more detail: the class not being found is DefaultCookieSerializer, which is part of the spring-session-implementation dependency. It is referenced by one of the classes in the jar located in /lib, but it is not part of my application at all.
Check the features provided by @EnableAutoConfiguration. You can explicitly configure the set of auto-configuration classes for your application. This tutorial can be a good starting point.
You can remove the @SpringBootApplication annotation from the main class and replace it with a @ComponentScan annotation plus an @Import annotation that explicitly lists only the configuration classes you want to load. For example, in a Spring Boot MVC app that uses metrics, web client, rest template, Jackson, etc., I was able to replace the @SpringBootApplication annotation with the code below and get it working exactly as before, with all functional tests passing:
@Import({ MetricsAutoConfiguration.class,
        InfluxMetricsExportAutoConfiguration.class,
        ServletWebServerFactoryAutoConfiguration.class,
        DispatcherServletAutoConfiguration.class,
        WebMvcAutoConfiguration.class,
        JacksonAutoConfiguration.class,
        WebClientAutoConfiguration.class,
        RestTemplateAutoConfiguration.class,
        RefreshAutoConfiguration.class,
        ValidationAutoConfiguration.class
})
@ComponentScan
The likely culprit of the mentioned exception is incompatible jars on the classpath.
Since we don't know which library you have the issue with, we can't tell you the exact reason, but the situation looks like this:
One of the Spring Boot auto-configuration classes is triggered by the presence of a class on the classpath.
The triggered configuration tries to create a bean of a class that is not present in the jar you have (but is present in the specific version mentioned in the Spring BOM).
Version incompatibilities may also cause NoSuchMethodError failures.
That's one of the reasons why it is good practice not to run Spring Boot applications inside a container ("make jar, not war"), but as a runnable jar with an embedded container.
Even before Spring Boot it was preferred to take into account the libraries present on the runtime classpath and mark them as provided in your project. Having different versions of a library on the classpath may cause weird ClassCastExceptions where the names match on both ends, but the rest doesn't.
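For example, marking a container-provided API as provided in the pom keeps it out of the deployed artifact (the servlet API is just an illustration here):

<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>4.0.1</version>
    <scope>provided</scope>
</dependency>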
You can resolve specific cases by disabling the auto-configuration that causes your issue, either by adding an exclude to your @SpringBootApplication annotation or via a property file.
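For example (DataSourceAutoConfiguration is used purely as an illustration):

import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;

@SpringBootApplication(exclude = { DataSourceAutoConfiguration.class })
public class MyApplication { }

or, equivalently, in application.properties:

spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration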
Edit:
Unless your Spring Boot application uses a very broad package scan (or scans a package name from outside your project), it is unlikely that Spring Boot simply imports configuration from the classpath.
As mentioned before, it is more likely some auto-configuration being triggered by the existence of a class on the classpath.
Theoretical solution:
You could use the Maven Shade plugin to relocate all packages into your own package space: see the docs.
The problems you'd have to face:
Defining a very broad relocation pattern that still excludes the JEE classes the container needs in order to know how to run your application.
Relocation most likely won't affect package names used as strings in Spring Boot annotations (like @ComponentScan or @ConditionalOnClass). As far as I know this is not implemented yet; you'd have to implement it yourself, maybe as some kind of Shade plugin resource processor.
When relocating classes you'd also have to replace package names in all relevant configuration files located in the jars, and possibly merge some of them.
You'd also have to take into account how the libraries you use (or Spring itself) refer to package names or files.
This is definitely not a trivial task, with many traps ahead. But if done right, it would possibly allow you to disregard what is on the container's classpath: Spring Boot would look for classes in the relocated packages, and ordinary jars wouldn't contain those.
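For what it's worth, a single relocation stanza in the Shade plugin looks like this (the pattern is illustrative; a real setup would need far broader rules, with the caveats above):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <relocations>
                    <relocation>
                        <pattern>com.thirdparty</pattern>
                        <shadedPattern>myapp.shaded.com.thirdparty</shadedPattern>
                    </relocation>
                </relocations>
            </configuration>
        </execution>
    </executions>
</plugin>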
I have an EAR artifact deployed on a WildFly server. On some beans I used the following configuration injection:
@Inject
private Config config;
I want to change the properties specified in the microprofile-config.properties file at runtime. It is not necessary to change the file itself; I just want to change the effective property values. I think there might be a way to do it using the console, but I cannot find whether there is one.
If you take a look at the spec, or at articles like this one, you will see that, by default, MicroProfile Config reads configuration values from the following three places, in this order (i.e. from wherever it finds a value first):
System.getProperties()
System.getenv()
The configuration file
So, you can override values in the configuration file in 2 ways:
Defining -D command line arguments to the VM (e.g. java -DXXX=yyy ...)
Defining system environment variables (e.g. export XXX=yyy in bash or set XXX=yyy in Windows)
Note that there are some rules for defining environment variables and matching them to actual configurations, e.g. for a configuration aaa.bbb.ccc you may need to set an environment variable as AAA_BBB_CCC. Read ch. 5.3.1 in the specs, and experiment a little.
You can always extend the configuration sources with your own custom ones (to read configuration from JNDI, DB, Zookeeper, whatever).
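As a sketch, a custom source that lets you change values at runtime could look like this (the class name and ordinal are my own choices; register it in a META-INF/services/org.eclipse.microprofile.config.spi.ConfigSource file):

import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.eclipse.microprofile.config.spi.ConfigSource;

// Hypothetical in-memory config source whose values can be mutated at runtime.
public class InMemoryConfigSource implements ConfigSource {
    private static final Map<String, String> PROPS = new ConcurrentHashMap<>();

    // Call this from your own code (e.g. a management endpoint) to change a value.
    public static void set(String key, String value) {
        PROPS.put(key, value);
    }

    @Override
    public Map<String, String> getProperties() {
        return PROPS;
    }

    @Override
    public Set<String> getPropertyNames() {
        return PROPS.keySet();
    }

    @Override
    public String getValue(String key) {
        return PROPS.get(key);
    }

    @Override
    public String getName() {
        return "in-memory";
    }

    @Override
    public int getOrdinal() {
        return 500; // higher than system properties (400), so these values win
    }
}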
I'm working on a multi-module Maven project in which one of the modules contains a few annotation processors for the custom annotations used by the other modules. When I add a dependency on the annotation processor module to any other module, the annotations of that module are processed by those annotation processors.
But recently I integrated the Checker Framework (for type annotations), and then all the custom annotation processors mentioned above stopped working. Any idea on how to get them to work alongside the Checker Framework would be greatly appreciated.
To clarify the scenario:
Let's say I have a Maven module named module_A. In this module I have a class-level annotation called @FoodItem. I need to enforce a rule that any class annotated with @FoodItem must implement the interface Food. So I wrote an annotation processor, FoodItemAnnotationProcessor, in the same module (module_A) which processes such classes and checks compliance with that rule.
Then let's say I have another module named module_B which has a Maven dependency on module_A. In this module I have a class called Pizza which is annotated with @FoodItem.
If I build the project (containing module_A and module_B) with the above configuration, FoodItemAnnotationProcessor is executed at compile time and validates the class Pizza against the rule mentioned above.
After that I integrated the Checker Framework into module_B (as mentioned here). The Checker Framework validations then run at compile time as expected, but FoodItemAnnotationProcessor ceased to work.
To understand the problem you must know how javac finds your annotation processors.
When you don't supply the -processor argument to javac (see doc-javac-options), the annotation-processor auto-discovery feature (see javac-doc: Annotation processing) is activated. This means that javac will search for all available annotation processors on your classpath (or processor path, if you have specified one).
Jars that include a META-INF/services/javax.annotation.processing.Processor file can specify their annotation processor classes, and javac will automatically use them.
The "problem" is that the Checker Framework ships multiple annotation processors for its checks, but you may only want to use some of them: in that case the annotation-discovery process cannot be used, and you must manually specify all annotation processors to run in your build file.
For a Maven build you can do it like this (see the checker-framework doc for Maven):
<annotationProcessors>
    <!-- Add all the checkers you want to enable here -->
    <annotationProcessor>org.checkerframework.checker.nullness.NullnessChecker</annotationProcessor>
</annotationProcessors>
This will explicitly set the -processor argument for javac (see doc-javac-options), which disables the default annotation-discovery process.
So the solution is to manually add all annotation processors that you want to run (in addition to the checker-framework checkers).
E.g. when you want to run the NullnessChecker and Dagger, you must specify both:
<annotationProcessors>
    <!-- Add all the checkers you want to enable here -->
    <annotationProcessor>org.checkerframework.checker.nullness.NullnessChecker</annotationProcessor>
    <!-- Add all your other annotation processors here -->
    <annotationProcessor>dagger.internal.codegen.ComponentProcessor</annotationProcessor>
</annotationProcessors>
Hint: to find out which annotation processors you are currently using, run your build and pass the non-standard javac option -XprintProcessorInfo.
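With the maven-compiler-plugin that would look something like this:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <compilerArgs>
            <arg>-XprintProcessorInfo</arg>
        </compilerArgs>
    </configuration>
</plugin>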
UPDATE:
The checkers also support a form of auto-discovery (doc-ref). Note: I have not used this yet.
2.2.3 Checker auto-discovery
“Auto-discovery” makes the javac compiler always run a checker plugin,
even if you do not explicitly pass the -processor command-line option.
This can make your command line shorter, and ensures that your code is
checked even if you forget the command-line option.
To enable auto-discovery, place a configuration file named
META-INF/services/javax.annotation.processing.Processor in your
classpath. The file contains the names of the checker plugins to be
used, listed one per line. For instance, to run the Nullness Checker
and the Interning Checker automatically, the configuration file should
contain:
org.checkerframework.checker.nullness.NullnessChecker
org.checkerframework.checker.interning.InterningChecker
In an Ivy dependency,
Q1.
What is the difference between
conf="runtime->compile"
vs
conf="runtime->compile(*)"
What does the extra bracketed wildcard do?
Q2.
What does the following do?
conf="compile->compile(*)"
Isn't it a cyclical/self dependency? What is the point of mapping a conf back to itself?
The brackets are a fallback:
since 1.3 a fallback mechanism can be used when you are not sure that
the dependency will have the required conf. You can indicate to Ivy
that you want one configuration, but if it isn't present, use another
one. The syntax for specifying this adds the fallback conf between
parentheses right after the required conf. For instance,
test->runtime(default) means that in the test configuration of the
module the runtime conf of the dependency is required, but if it
doesn't exist, the default conf will be used instead. If the default
conf doesn't exist either, it is considered an error. Note that the
* wildcard can be used as a fallback conf.
For Question 2:
A conf mapping is always read as:
ConfFromThisFile -> ConfFromDependency
So compile->compile maps the compile configuration of this file to the compile configuration of the dependency; it is not a cycle. The parenthesis says: if compile does not exist in the dependency, then use *.
See the Configuration Mapping section of the Ivy documentation for dependencies.
This syntax is for dependency fallback. runtime->compile means that the runtime configuration depends on the compile config of the dependency. That compile config must be present or Ivy will report an error. However, runtime->compile(*) will try the compile configuration first to satisfy dependencies, but if compile doesn't exist, it will try all the other configurations. See the Configurations mapping section of the Ivy docs for more info.
Based on that, compile->compile(*) would indicate that compile needs any (all?) configurations of the dependency. I am guessing that compile->(*) isn't valid syntax, so the extra compile guarantees the fallback is used, since compile isn't defined until after the configurations XML stanza is complete.
Note that it's not clear from the documentation whether (*) means 'any' or 'all' configurations, so I am not sure if Ivy stops at the first configuration that satisfies all dependencies (if there is one) or brings in all the other configurations as a union.
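For reference, such a mapping sits on the dependency element in ivy.xml (the coordinates below are made up):

<dependency org="org.example" name="some-lib" rev="1.0" conf="runtime->compile(*)"/>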