I have an unusual use case.
In my left hand, I have a String value (say, "5001"). In my right hand I have an Annotation literal with some interesting information. It basically defines a named "slot" for which (in this example) "5001" is an appropriate value.
Let us now also say that I have some javax.validation.constraints Annotation (like Digits) in my...other hand. Note that I do not have any reference to any field or method or any other AnnotatedElement—just the javax.validation.constraints Annotation literal itself. This is the weird part, but take it as fact.
Armed with these bits, I can almost use ConstraintValidator to see if "5001" is a valid value for the "slot" defined by my annotation literal. But not quite, as I cannot acquire a ConstraintValidatorContext for use in the isValid() method.
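Roughly, the attempt looks like the sketch below; SomeDigitsValidator is just a stand-in for whichever ConstraintValidator implementation handles Digits, not a real class:

boolean isValidForSlot(String value, Digits slotConstraint) {
    ConstraintValidator<Digits, CharSequence> validator = new SomeDigitsValidator();
    validator.initialize(slotConstraint);
    // The sticking point: isValid() wants a ConstraintValidatorContext,
    // and there is no supported way to obtain one out here on my own.
    return validator.isValid(value, null /* no ConstraintValidatorContext available */);
}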
(I have read this question on the subject, which suggests that I'm out of luck.)
I also cannot simply use the Validator API as I am not validating a bean instance but instead merely a value that, if everything goes well, may be used indirectly in an XSLT file, if you must know. :-) It's the "determining if everything is going to go well" part that I'd like to use javax.validation for here. But, as mentioned, I don't have a bean instance with an annotated element to validate—I just have the value that would go into that annotated element.
Is there a way forward here?
I am getting started with MapStruct. I am unable to understand when to use the "expression" attribute in MapStruct. Why do certain mappings use both the "target" and the "expression" attribute? Does it mean that expressions are used when you want to map two or more fields within a bean to a single property/field in the target, as mentioned in the documentation at http://mapstruct.org/documentation/stable/reference/html/#expressions?
Expressions are used when you can't map a source to a target property, or when a constant does not apply. MapStruct envisioned that several languages could be used for expressions. However, only plain Java is implemented (hence "java( ... )"). EL was envisioned but has not yet been realised.
A typical use case that I use is generating a UUID. But even there you could try the new @Context to achieve that goal.
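For illustration (the mapper and the Order/OrderDto types are made-up names, not from the question), the UUID case could look roughly like this:

import org.mapstruct.Mapper;
import org.mapstruct.Mapping;

@Mapper
public interface OrderMapper {

    // The expression body is copied verbatim into the generated mapper code.
    @Mapping(target = "id", expression = "java(java.util.UUID.randomUUID().toString())")
    OrderDto toDto(Order order);
}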
Remember, the stuff within the brackets is put directly in the generated code. The IDE can't check its correctness, and you will only spot problems during compilation.
Expressions are IMHO a fallback means / gap filler for stuff that is not yet implemented in MapStruct.
Note: mapping target-to-source by means of a custom method, as suggested in the other answers, can be done automatically. MapStruct will recognise the signature (return type, source type) and call your custom method. You can do this in the same interface (a default method) or in a used mapper.
In general, MapStruct expressions are used when you simply cannot write a MapStruct mapper. They should be used as a fallback approach when the library doesn't apply to your use case.
For example, as the documentation says, when a mapping requires more than one source variable, an expression can be used to "inject" them into a mapper method.
Another use case is when the source variable you need to use, say bar, is not a part of the source class but a member of one of its variables (here, classVar). You would map it to the target field foo using a custom myCustomMethod method with @Mapping(target = "foo", expression = "java(myCustomMethod(source.classVar.bar))").
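As a hedged sketch of that pattern (Source, Target, classVar and bar are just the placeholder names from this example, and a getter-based path is used instead of direct field access):

import org.mapstruct.Mapper;
import org.mapstruct.Mapping;

@Mapper
public interface FooMapper {

    // The expression calls a plain helper method defined in the same mapper.
    @Mapping(target = "foo", expression = "java(myCustomMethod(source.getClassVar().getBar()))")
    Target map(Source source);

    default String myCustomMethod(String bar) {
        return bar == null ? null : bar.trim();
    }
}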
I have a VariableElement field that is annotated with a generated Annotation (which is why I can't use field.getAnnotation(annotationClass)). I need to get all parameters passed to this annotation.
Note that by "a generated Annotation" I mean that literally the Annotation class itself (not the annotated one) has been generated by an Annotation Processor. The field/class that is being annotated is in the handwritten source code.
It didn't look like it'd be that hard; so far I've come up with this:
for (AnnotationMirror annotation : field.getAnnotationMirrors()) {
    Map<? extends ExecutableElement, ? extends AnnotationValue> annotationValueMap = annotation.getElementValues();
    messager.printMessage(Diagnostic.Kind.WARNING, annotation.toString() + ":" + annotationValueMap.toString());
}
I thought this would do it, but the output for the field is the following:
@MyAnnotation:{}
So, the processor does recognize that the field is annotated, but I'm unable to access the passed parameters. Even though the field is definitely annotated and does pass parameters with the annotation (it has to, since the annotation defines required parameters and no defaults):
@MyAnnotation(max = 387, min = 66876, ...)
private Integer myField;
Here's the generated annotation code:
@Retention(RetentionPolicy.SOURCE)
@Target(ElementType.FIELD)
public @interface MyAnnotation {
    int max();
    boolean allowAuto();
    int min();
}
I've clean-compiled the project multiple times, but the processor never sees the values. What am I overlooking here? The processor can obviously see the annotation itself, yet the parameters passed to it are hidden.
Recall that annotation processors run as part of the compiler, in steps called "rounds". This process runs iteratively until there is no new code to compile, and then processors get one last chance to run (not necessary for this answer, but helpful for more context). In each round, only the newly created types are directly given to the processor to examine.
What seems to be happening here is that during a round you are emitting a new annotation type, which should allow the processor to observe certain features about some code submitted to be compiled. However, any types created during a given round are not yet compiled until the next round begins.
For this question, we run into a conflict here - some Java sources are compiled which use an annotation that doesn't exist yet. The processor first creates the annotation, and then tries to read the newly-created annotation out of those partly-compiled sources. Unfortunately, until the annotation has been compiled, we can't actually read the annotation. Instead, we need to wait until the subsequent round (once the annotation itself has compiled), then go back to that class which has finished being compiled and examine it.
You can implement this yourself without too much trouble, but the easiest way is often to rely on the google/auto project (specifically the auto-common library, see https://github.com/google/auto/tree/master/common) and extend its BasicAnnotationProcessor class. One of the nice features it supports is automatically examining types and checking whether they have any compilation issues; if so, they are deferred until a later round so you can handle them without any type resolution issues.
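If you do roll this yourself, the core of the idea is just to remember what could not be resolved and retry it in a later round. A rough sketch, assuming plain javax.annotation.processing (newCandidateTypeNames() and tryToProcess() are hypothetical hooks, not auto-common API):

import java.util.HashSet;
import java.util.Set;

import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.lang.model.element.TypeElement;

public abstract class DeferringProcessor extends AbstractProcessor {

    // Qualified names of types whose annotation values could not be resolved yet.
    private final Set<String> deferredTypeNames = new HashSet<>();

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        Set<String> candidates = new HashSet<>(deferredTypeNames);
        deferredTypeNames.clear();
        candidates.addAll(newCandidateTypeNames(roundEnv));
        for (String typeName : candidates) {
            TypeElement type = processingEnv.getElementUtils().getTypeElement(typeName);
            // If the type (or the generated annotation on it) still cannot be
            // resolved, keep it for a later round instead of failing now.
            if (type == null || !tryToProcess(type)) {
                deferredTypeNames.add(typeName);
            }
        }
        return false;
    }

    /** Hypothetical hook: discover newly seen annotated types in this round. */
    protected abstract Set<String> newCandidateTypeNames(RoundEnvironment roundEnv);

    /** Hypothetical hook: returns false while the annotation values are still unresolved. */
    protected abstract boolean tryToProcess(TypeElement type);
}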
Use getAnnotation(MyAnnotation.class), available on VariableElement.
In your example code you can do this to get the min and max parameters:
MyAnnotation myAnnotation = field.getAnnotation(MyAnnotation.class);
int max = myAnnotation.max();
int min = myAnnotation.min();
This will work unless an annotation member returns a Class/Class[] value, in which case you will get an exception if you try to read it this way.
More about how to get class literal values can be found in this answer:
How to read a Class[] values from a nested annotation in an annotation processor
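The usual trick from that answer, sketched here for a single Class-valued member (someType() is a hypothetical member; the Class[] case works analogously with MirroredTypesException): call the member and catch the exception, which carries the TypeMirror you actually want.

TypeMirror typeMirror;
try {
    myAnnotation.someType();          // hypothetical Class<?>-returning member
    typeMirror = null;                // not reached at annotation-processing time
} catch (MirroredTypeException e) {
    typeMirror = e.getTypeMirror();   // the usable representation of the class
}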
Or, using annotation mirrors:
for (AnnotationMirror annotation : field.getAnnotationMirrors()) {
    Map<? extends ExecutableElement, ? extends AnnotationValue> annotationValueMap = annotation.getElementValues();
    annotationValueMap.forEach((element, annotationValue) -> {
        messager.printMessage(Diagnostic.Kind.WARNING, element.getSimpleName().toString() + ":" + annotationValue.getValue());
    });
}
If you have more than one annotation on the field, you can iterate over the annotation mirrors and use the check types.isSameType(annotationMirror.getAnnotationType(), elements.getTypeElement(MyAnnotation.class.getName()).asType()) to find the annotation you are interested in.
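A small sketch of that check, assuming the Types and Elements utilities have been obtained from processingEnv:

Elements elements = processingEnv.getElementUtils();
Types types = processingEnv.getTypeUtils();
TypeMirror myAnnotationType = elements.getTypeElement(MyAnnotation.class.getName()).asType();

for (AnnotationMirror mirror : field.getAnnotationMirrors()) {
    if (types.isSameType(mirror.getAnnotationType(), myAnnotationType)) {
        // This is the MyAnnotation mirror; read its element values here.
    }
}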
Yes, you will not be able to instantiate a Class object for a type which is not available in your annotation processor's classloader, and may not even have been compiled into a class file yet at all. A similar problem exists for retrieving enum constants.
There are a few wrinkles to dealing with this sort of thing:
Any annotation value that is declared as an array might come to you at compile time either as a single value or as a list of values, so any code needs a path to handle both the list and non-list case (see the sketch after these points).
What you get may be a generic type, and if you are generating Java code or similar that wants to insert a reference to Foo.class you need to get the erasure of that type, so you don't generate Foo<Bar>.class into your generated sources.
One of the places your annotation processor is going to get run is in an IDE, on broken code still being typed, so it is important to fail gracefully when code elements that, you would think, could not possibly be missing or broken turn out to be exactly that. In an IDE, your annotation processor may also be kept alive for a long time and reused, so it's important not to pile up objects modeling stuff that has already been generated and emitted.
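Here is a rough sketch of a helper for the first wrinkle, using only the javax.lang.model API to normalize either shape into a list of AnnotationValue before inspecting it:

static List<AnnotationValue> toList(AnnotationValue value) {
    Object raw = value.getValue();
    if (raw instanceof List<?>) {
        // Array-valued member: the wrapper holds a list of AnnotationValue.
        List<AnnotationValue> result = new ArrayList<>();
        for (Object item : (List<?>) raw) {
            result.add((AnnotationValue) item);
        }
        return result;
    }
    // Single value: wrap it so callers only ever deal with the list case.
    return Collections.singletonList(value);
}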
FWIW, I wrote a library to solve this and related problems, which can be found on Maven central at the coordinates com.mastfrog:annotations-tools:2.8.3.4 (check for newer versions). The usage pattern is simple:
Create an instance of AnnotationUtils in an override of the init() method of your annotation processor and store it in a field
Use it to, for example, resolve a Class<?>[] into a list of string class names that you can work with inside javac, and similar
It makes it pretty straightforward to write annotation processors that do not directly depend on the classes they process at all, which means the annotation processors (and their dependency graphs!) can be completely independent of what they process and can depend on whatever libraries they like without forcing those dependencies into the dependency graph of any project that uses them. The most common pattern is that someone writes some annotations and then puts the annotation processor in the same project, or even package, so anything the annotation processor uses becomes a dependency of every consumer of the annotations, even though those dependencies will probably never be used at runtime at all. That, it seems to me, is an antipattern worth avoiding.
Just another Java problem (I'm a noob, I know): is it possible to use dynamic property binding in a Custom Control with a dynamic property getter in a Java bean?
I'll explain. I use this feature extensively in my Custom Controls:
<xp:inputTextarea id="DF_TiersM">
    <xp:this.value><![CDATA[#{compositeData.dataSource[compositeData.fieldName]}]]></xp:this.value>
</xp:inputTextarea>
This is used in a control where both datasource and the name of the field are passed as parameters. This works, so far so good.
Now, in some cases, the datasource is a managed bean. When the above lines are interpreted, apparently code is generated to get or set the value of ... something. But what exactly?
I get this error: "Error getting property 'SomeField' from bean of type com.sjef.AnyRecord", which I guess is correct, for there is no public getSomeField() in my bean. All properties are defined dynamically in the bean.
So how can I make XPages read the properties? Is there a universal getter (and setter) that allows me to use the name of a property as a parameter instead of the inclusion in a fixed method name? If XPages doesn't find getSomeField(), will it try something else instead, e.g. just get(String name) or so?
As always: I really appreciate your help and answers!
The way the binding works depends on whether or not your Java object implements a supported interface. If it doesn't (if it's just some random Java object), then any properties are treated as "bean-style" names, so that, if you want to call ".getSomeField()", then the binding would be like "#{obj.someField}" (or "#{obj['someField']}", and so forth).
If you want it to fall back to a common method, that's a job for either the DataObject or Map interfaces - Map is larger to implement, but is more standard (and you could inherit from AbstractMap if applicable), while DataObject is basically an XPages-ism but one I'm a big fan of (for reference, document data sources are DataObjects). Be warned, though: if you implement one of those, EL will only bind to the get or getValue method and will ignore normal setters and getters. If you want to use those when present, you'll have to write reflection code to do that (I recommend using Apache BeanUtils).
I have a post describing this in more detail on my blog: https://frostillic.us/f.nsf/posts/expanding-your-use-of-el-%28part-1%29
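For what it's worth, here is a minimal sketch of the Map route, assuming the dynamic properties already live in a map inside your bean (the class name is just borrowed from the question):

import java.io.Serializable;
import java.util.AbstractMap;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class AnyRecord extends AbstractMap<String, Object> implements Serializable {

    private final Map<String, Object> values = new HashMap<>();

    @Override
    public Set<Map.Entry<String, Object>> entrySet() {
        return values.entrySet();
    }

    @Override
    public Object get(Object key) {
        return values.get(key);
    }

    @Override
    public Object put(String key, Object value) {
        return values.put(key, value);
    }
}

With something like that in place, #{compositeData.dataSource[compositeData.fieldName]} resolves through get()/put() rather than through getSomeField()/setSomeField().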
This question is essentially the opposite of this one.
I have a method like so:
public boolean isVacant() {
    return getEmployeeNum() != null && getEmployeeNum().equals("00000000");
}
When I load it up, Hibernate is complaining that I have no attribute called vacant. But I don't want an attribute called vacant - I have no need to store that data - it's simply logic.
Hibernate says:
org.hibernate.PropertyNotFoundException: Could not find a setter for property vacant in class com.mycomp.myclass...
Is there an annotation I can add to my isVacant() method to make Hibernate ignore it?
Add @Transient to the method and Hibernate should ignore it.
To quote the Hibernate Documentation:
Every non static non transient property (field or method depending on the access type) of an entity is considered persistent, unless you annotate it as @Transient.
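In this case it would be a minimal change (assuming javax.persistence.Transient, not java.beans.Transient):

@Transient
public boolean isVacant() {
    return getEmployeeNum() != null && getEmployeeNum().equals("00000000");
}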
RNJ is correct, but I might add why this happens:
I'm guessing that you have annotated the getters of your persistent class. The prefixes used by Java beans are "set" and "get", which are used to read and write variables, but there is also the prefix "is", which is used for boolean values (instead of "get"). When Hibernate sees your getter-annotated persistent class and finds a method "isVacant", it assumes that there is a property "vacant", and assumes that there is a "set" method as well.
So, to fix it, you could either add the @Transient annotation, or you could change the name of your method to something that doesn't start with "is". I don't think this would be a problem if your class was annotated on the fields instead of the get-methods.
Many frameworks (like Hibernate and Drools) are smart enough to understand that boolean properties need to be accessed with "is" instead of "get". But they don't always understand perfectly, and that is when "interesting" problems can develop. Or, worse yet, different frameworks that are supposed to work together interpret the methods slightly differently.
BTW, the @Transient solution is not guaranteed to solve all your problems. Most notably, say that you are adding it to a toString() that returns a huge and complex object. You might be getting a stack overflow not because the method is huge and complex, or even because all the sub-objects have their own toString() methods, but because your object graph contains circular references. That is what causes the stack overflows.
I'm working with three separate classes: Group, Segment and Field. Each group is a collection of one or more segments, and each segment is a collection of one or more fields. There are different types of fields that subclass the Field base class. There are also different types of segments that are all subclasses of the Segment base class. The subclasses define the types of fields expected in the segment. In any segment, some of the fields defined must have values inputted, while some can be left out. I'm not sure where to store this metadata (whether a given field in a segment is optional or mandatory).
What is the most clean way to store this metadata?
I'm not sure you are giving enough information about the complete application to get the best answer. However here are some possible approaches:
Define an isValid() method in your base class, which by default returns true. In your subclasses, you can code specific logic for each Segment or FieldType to return false if any requirements are missing. If you want to report an error message saying which fields are missing, you could add a List argument to the isValid method to allow each type to report the list of missing values (a sketch follows after these approaches).
Use Annotations (as AlexR suggests in another answer).
The benefit of the above 2 approaches is that meta data is within the code, tied directly to the objects that require it. The disadvantage is that if you want to change the required fields, you will need to update the code and deploy a new build.
If you need something which can be changed on the fly, then Gangus's suggestion of XML is a good start, because your application could reload the XML definition at run-time and produce different validation results.
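A minimal sketch of the first approach (AddressSegment and its fields are made-up names, and Field is the base class from the question):

abstract class Segment {

    /** Subclasses add the names of any missing mandatory fields and return false. */
    public boolean isValid(List<String> missingFields) {
        return true;
    }
}

class AddressSegment extends Segment {

    private Field street;    // mandatory in this segment type
    private Field comment;   // optional

    @Override
    public boolean isValid(List<String> missingFields) {
        if (street == null) {
            missingFields.add("street");
            return false;
        }
        return true;
    }
}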
I think the best place for such data is a normal XML file, and the best structure for working with it is also XML DOM with XPath. Working with classes would be too complicated.
Since Java 5, this kind of metadata can be stored using annotations. Define your own annotation @MandatoryField and mark all mandatory fields with it. Then you can walk the object field by field using reflection, check whether any uninitialised fields are mandatory, and throw an exception in that case.
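A rough sketch of that idea (the annotation and the checker below are illustrative, not an existing API):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface MandatoryField {
}

class MandatoryFieldChecker {

    /** Throws if any field marked @MandatoryField has not been initialised. */
    static void check(Object target) throws IllegalAccessException {
        for (java.lang.reflect.Field f : target.getClass().getDeclaredFields()) {
            if (f.isAnnotationPresent(MandatoryField.class)) {
                f.setAccessible(true);
                if (f.get(target) == null) {
                    throw new IllegalStateException("Mandatory field not set: " + f.getName());
                }
            }
        }
    }
}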