How to dynamically fill an annotation - Java

Sadly, I forgot to take the code from work with me today. But maybe this little example will clarify things.
I use Hibernate to map a bean to a table.
Example:
import javax.persistence.Column;
...
private String columnameA;
...
@Column(name = "columnameA")
public String getColumname() {
    return columnameA;
}
...
I do not want to hardcode the column name ("columnameA") in my source code, because I need to switch the column name without rebuilding the entire project.
I wanted to use something like:
@Column(name = getColumnName())
This does not work. The idea is to write the column name somewhere in the JNDI tree and read it at startup, so I only need to restart the application to change the column name.
The only way around this problem which I can think of is to write my own annotation that extends the Hibernate one. Is there a simpler way of doing this?

You can't achieve this with annotations, but a solution to your specific problem is to implement a custom NamingStrategy:
public class NamingStrategyWrapper implements NamingStrategy {

    private final NamingStrategy target;

    public NamingStrategyWrapper(NamingStrategy target) {
        this.target = target;
    }

    public String columnName(String arg0) {
        // Substitute the externally configured name for this one column,
        // and delegate everything else to the wrapped strategy.
        if ("columnameA".equals(arg0)) {
            return getColumnName();
        }
        return target.columnName(arg0);
    }

    // ... delegate the remaining NamingStrategy methods to target
}
Then register the wrapper when building the SessionFactory:
AnnotationConfiguration cfg = new AnnotationConfiguration();
cfg.setNamingStrategy(new NamingStrategyWrapper(cfg.getNamingStrategy()));
factory = cfg.configure().buildSessionFactory();

The only values you can assign to annotation attributes are compile-time constants: literals written by hand, or values stored in public static final fields.
Annotations do not define behavior; they only carry meta-information about classes, methods and the like. You can add behavior in annotation processors, which read your annotations and generate new source code or other files.
Writing an annotation processor is beyond my knowledge, but you can find more information in Sun's Annotation Processing Tool guide.
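For example (a minimal sketch; the constant-holder class is made up for illustration):

public final class ColumnNames {
    // A compile-time constant is allowed inside an annotation attribute
    public static final String COLUMN_A = "columnameA";
}

@Column(name = ColumnNames.COLUMN_A)   // compiles: constant expression
public String getColumname() {
    return columnameA;
}

// @Column(name = getColumnName())     // does not compile: not a constant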

Related

Accessing CDI from simple objects?

Assume I have a configuration class accessible via the stock CDI that defines some application-wide parameters:
@ApplicationScoped
class AppConfig {
    public double getMaxAllowedBrightness() { ... }
}
And I have a simple class for my data objects:
class LightSource {
    double brightness;
    ...
    boolean isValid() {
        double maxAllowedBrightness = ...; // Somehow use AppConfig#getMaxAllowedBrightness() here
        return brightness <= maxAllowedBrightness;
    }
}
How can my data object access the single AppConfig instance?
Somehow I hate the idea of autowiring AppConfig into every single data object (there are lots of them). Is there any other way to get access to AppConfig in the above example from my data object?
What's the best pattern to use here?
The simplest example is a runtime lookup akin to:
import jakarta.enterprise.inject.spi.CDI;
CDI.current().select(cls).get();
With cls being the class that you're looking up. (Note the package name: this is the newer CDI API in the jakarta namespace; the original lives in javax.)
It gets more detailed from there, but that's the gist of it.
Note that semantically there's little difference between autowiring something and doing a runtime lookup, especially for something that is mostly static at the instance level. It's still a dependency, and you still have to touch the code of the classes to pull it off.
A nice thing about relying on autowiring is that you can disable it situationally, and the class reverts to a simple bean that you can do with what you will. With the lookup coded in, it's a little bit more than that.
Dynamic lookup is more for special circumstances.
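Applied to the example above, the lookup inside the data object would look roughly like this (just a sketch; AppConfig is assumed to be a normal CDI bean as shown in the question):

import jakarta.enterprise.inject.spi.CDI;

class LightSource {

    double brightness;

    boolean isValid() {
        // Programmatic lookup of the application-scoped config bean
        AppConfig config = CDI.current().select(AppConfig.class).get();
        return brightness <= config.getMaxAllowedBrightness();
    }
}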
On my current project, our team has been doing this with the @Value annotation. In our case, we have all the properties in a properties bean, which I'll call mainAppConfiguration. The bean is populated from a properties file like main-app-config.properties (read into the bean with java.util.Properties and its load() method).
Assuming you have something like that set up, then we inject the properties into the classes that need them using a little SpEL magic something like:
private Integer refreshRateSeconds;

@Value("#{ mainAppConfiguration.getProperties()['funny-property-base-name.refreshRateSeconds'] }")
public void setRefreshRateSeconds(Integer refreshRateSeconds) {
    if (refreshRateSeconds == null) {
        throw new IllegalArgumentException("Required config property 'funny-property-base-name.refreshRateSeconds' was not found");
    }
    this.refreshRateSeconds = refreshRateSeconds;
}
Baeldung has examples (without defaults) and more with defaults.
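For simple cases where the properties are registered as a Spring property source, the shorthand placeholder form with a default also works, roughly like this (the default value of 30 is made up for illustration):

// Falls back to 30 if the property is not set
@Value("${funny-property-base-name.refreshRateSeconds:30}")
private Integer refreshRateSeconds;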

How to customize ModelMapper

I want to use ModelMapper to convert entities to DTOs and back. Mostly it works, but how do I customize it? It has so many options that it's hard to figure out where to start. What's best practice?
I'll answer it myself below, but if another answer is better I'll accept it.
First, here are some links:
modelmapper getting started
api doc
blog post
random code examples
My impression of mm is that it is very well engineered. The code is solid and a pleasure to read. However, the documentation is very terse, with very few examples. Also, the API is confusing because there seem to be ten ways to do anything, with no indication of why you'd do it one way or another.
There are two alternatives: Dozer is the most popular, and Orika gets good reviews for ease of use.
Assuming you still want to use mm, here’s what I’ve learned about it.
The main class, ModelMapper, should be a singleton in your app. For me, that meant a @Bean using Spring. It works out of the box for simple cases. For example, suppose you have two classes:
class DogData {
    private String name;
    private int mass;
}

class DogInfo {
    private String name;
    private boolean large;
}
with appropriate getters/setters. You can do this:
ModelMapper mm = new ModelMapper();
DogData dd = new DogData();
dd.setName("fido");
dd.setMass(70);
DogInfo di = mm.map(dd, DogInfo.class);
and the "name" will be copied from dd to di.
There are many ways to customize mm, but first you need to understand how it works.
The mm object contains a TypeMap for each ordered pair of types; for example, <DogData, DogInfo> and <DogInfo, DogData> are two different TypeMaps.
Each TypeMap contains a PropertyMap with a list of mappings. So in the example, mm will automatically create a TypeMap<DogData, DogInfo> that contains a PropertyMap with a single mapping.
We can write this
TypeMap<DogData, DogInfo> tm = mm.getTypeMap(DogData.class, DogInfo.class);
List<Mapping> list = tm.getMappings();
for (Mapping m : list) {
    System.out.println(m);
}
and it will output
PropertyMapping[DogData.name -> DogInfo.name]
When you call mm.map(), this is what it does:
1. see if the TypeMap exists yet; if not, create the TypeMap for the <S, D> source/destination types
2. call the TypeMap Condition; if it returns FALSE, do nothing and STOP
3. call the TypeMap Provider to construct a new destination object if necessary
4. call the TypeMap PreConverter if it has one
5. do one of the following:
   a. if the TypeMap has a custom Converter, call it
   b. or, generate a PropertyMap (based on Configuration flags plus any custom mappings that were added), and use it
   (Note: the TypeMap also has optional custom Pre/PostPropertyConverters that I think will run at this point before and after each mapping.)
6. call the TypeMap PostConverter if it has one
Caveat: This flowchart is sort of documented but I had to guess a lot, so it might not be all correct!
You can customize every single step of this process, but the two most common are:
step 5a – write a custom TypeMap Converter, or
step 5b – write a custom property mapping.
Here is a sample of a custom TypeMap Converter:
Converter<DogData, DogInfo> myConverter = new Converter<DogData, DogInfo>() {
    public DogInfo convert(MappingContext<DogData, DogInfo> context) {
        DogData s = context.getSource();
        DogInfo d = context.getDestination();
        d.setName(s.getName());
        d.setLarge(s.getMass() > 25);
        return d;
    }
};

mm.addConverter(myConverter);
Note the converter is one-way. You have to write another if you want to customize DogInfo to DogData.
Here is a sample of a custom PropertyMap:
Converter<Integer, Boolean> convertMassToLarge = new Converter<Integer, Boolean>() {
    public Boolean convert(MappingContext<Integer, Boolean> context) {
        // If the dog weighs more than 25, then it must be large
        return context.getSource() > 25;
    }
};

PropertyMap<DogData, DogInfo> mymap = new PropertyMap<DogData, DogInfo>() {
    protected void configure() {
        // Note: this is not normal code. It is "EDSL" so don't get confused
        map(source.getName()).setName(null);
        using(convertMassToLarge).map(source.getMass()).setLarge(false);
    }
};

mm.addMappings(mymap);
The pm.configure function is really funky. It's not actual code; it is dummy EDSL code that gets interpreted somehow. For instance, the parameter to the setter is not relevant, it is just a placeholder. You can do lots of stuff in here, such as:
when(condition).map(getter).setter – conditional mapping
when(condition).skip().setter – safely ignore a field
using(converter).map(getter).setter – custom field converter
with(provider).map(getter).setter – custom field constructor
Note the custom mappings are added to the default mappings, so you do not need, for example, to specify
map(source.getName()).setName(null);
in your custom PropertyMap.configure().
In this example, I had to write a Converter to map Integer to Boolean. In most cases this will not be necessary because mm will automatically convert Integer to String, etc.
I'm told you can also create mappings using Java 8 lambda expressions. I tried, but I could not figure it out.
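For reference, the lambda-based API looks roughly like the sketch below (untested; it uses the same typeMap mechanism shown in a later answer, and reuses the mm instance from above):

Converter<Integer, Boolean> massToLarge = ctx -> ctx.getSource() > 25;

mm.typeMap(DogData.class, DogInfo.class).addMappings(mapper ->
    // Equivalent of using(convertMassToLarge).map(source.getMass()).setLarge(...)
    mapper.using(massToLarge).map(DogData::getMass, DogInfo::setLarge)
);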
Final Recommendations and Best Practice
By default mm uses MatchingStrategies.STANDARD, which is dangerous. It can easily choose the wrong mapping and cause strange, hard-to-find bugs. And what if next year someone else adds a new column to the database? So don't do that. Make sure you use STRICT mode:
mm.getConfiguration().setMatchingStrategy(MatchingStrategies.STRICT);
Always write unit tests and ensure that all mappings are validated.
DogInfo di = mm.map(dd, DogInfo.class);
mm.validate(); // make sure nothing in the destination is accidentally skipped
Fix any validation failures with mm.addMappings() as shown above.
Put all your mappings in a central place, where the mm singleton is created.
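For example, a central Spring configuration might look roughly like this sketch (the class name and the DogDataToDogInfoMap PropertyMap subclass are made up for illustration; the latter stands in for "mymap" above):

import org.modelmapper.ModelMapper;
import org.modelmapper.convention.MatchingStrategies;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MappingConfig {

    @Bean
    public ModelMapper modelMapper() {
        ModelMapper mm = new ModelMapper();
        mm.getConfiguration().setMatchingStrategy(MatchingStrategies.STRICT);

        // Register every custom mapping in one place
        mm.addMappings(new DogDataToDogInfoMap());

        // Fail fast at startup if a registered mapping leaves destination properties unmapped
        mm.validate();
        return mm;
    }
}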
I faced a problem while mapping with ModelMapper: not only the properties but also my source and destination types were different. I solved it as follows.
For example, when the source and destination types are different:
@Entity
class Student {

    private Long id;

    @OneToOne
    @JoinColumn(name = "laptop_id")
    private Laptop laptop;
}
And the DTO:
class StudentDto {
    private Long id;
    private LaptopDto laptopDto;
}
Here, the source and destination types are different. So, if your matching strategy is STRICT, you won't be able to map between these two different types.
To solve this, simply put the code below in the constructor of your controller class (or any class where you want to use ModelMapper):
private ModelMapper modelMapper;

public StudentController(ModelMapper modelMapper) {
    this.modelMapper = modelMapper;
    this.modelMapper.typeMap(Student.class, StudentDto.class)
            .addMapping(Student::getLaptop, StudentDto::setLaptopDto);
}
That's it. Now you can call modelMapper.map(source, destination) and it will map automatically:
modelMapper.map(student, studentDto);
I've been using it for the last 6 months, and I'm going to share some of my thoughts about it:
First of all, it is recommended to use it as a single instance (singleton, Spring bean, ...); that's explained in the manual, and I think everyone agrees with that.
ModelMapper is a great and very flexible mapping library. Because of that flexibility there are many ways to get the same result, which is why the best-practices part of the manual should explain when to use one way or another to do the same thing.
Getting started with ModelMapper is a little bit difficult; it has a steep learning curve, and sometimes it is not easy to understand the best way to do something, or how to do something else. So, to start, you need to read and understand the manual carefully.
You can configure your mapping as you want using the following settings:
Access level
Field matching
Naming convention
Name transformer
Name tokenizer
Matching strategy
The default configuration is simply the best (http://modelmapper.org/user-manual/configuration/), but if you want to customise it you can.
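For example, customising a few of those settings looks roughly like this (a sketch based on the configuration page linked above):

import org.modelmapper.ModelMapper;
import org.modelmapper.config.Configuration.AccessLevel;
import org.modelmapper.convention.MatchingStrategies;
import org.modelmapper.convention.NamingConventions;

ModelMapper mm = new ModelMapper();
mm.getConfiguration()
  .setMatchingStrategy(MatchingStrategies.STRICT)                    // matching strategy
  .setFieldMatchingEnabled(true)                                     // field matching
  .setFieldAccessLevel(AccessLevel.PRIVATE)                          // access level
  .setSourceNamingConvention(NamingConventions.JAVABEANS_ACCESSOR);  // naming convention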
Just one thing related to the matching strategy configuration: I think this is the most important setting and you need to be careful with it. I would use Strict or Standard, but never Loose. Why?
Because Loose is the most flexible and aggressive matcher, it can map properties you don't expect. So, definitely, be careful with it. I think it is better to create your own PropertyMap and use Converters where needed instead of configuring it as Loose.
It is also important to validate all property matches and verify that everything works. With ModelMapper this is even more necessary because the intelligent mapping is done via reflection, so you don't have the compiler's help: the code keeps compiling while the mapping fails without you realising it. That's one of the things I like least, but it's the price of avoiding boilerplate and manual mapping.
Finally, if you are sure you want to use ModelMapper in your project, use it the way it proposes; don't mix it with manual mappings (for example), just use ModelMapper. If you don't know how to do something, make sure it is possible (investigate, ...). Sometimes it is harder to do something with ModelMapper than by hand (and I don't like that either), but that's the price you pay to avoid boilerplate mappings in other POJOs.
import org.modelmapper.ModelMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class EntityDtoConversionUtil {

    @Autowired
    private ModelMapper modelMapper;

    public Object convert(Object object, Class<?> type) {
        return modelMapper.map(object, type);
    }
}
Here is how you can make a custom conversion class and then autowire it wherever you want to convert an entity to a DTO and vice versa.
import java.util.List;
import java.util.stream.Collectors;
import org.modelmapper.ModelMapper;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;

@Component
public class ConversionUtil {

    @Bean
    public ModelMapper modelMapper() {
        return new ModelMapper();
    }

    public <T, D> D mapItem(T item, Class<D> cl) {
        return modelMapper().map(item, cl);
    }

    public <T, D> List<D> map(List<T> list, Class<D> cl) {
        return list.stream()
                .map(item -> modelMapper().map(item, cl))
                .collect(Collectors.toList());
    }
}

Spring dependency injection with static constructors

I have been using Guice for a few years now and just switched to a company that uses Spring. I am a fan of Dependency Injection but having a few issues figuring out how to get Spring to do what I want.
Here is what I have in the code right now (it's not actual Scala code; it's just shorter, so I'm using that syntax):
class A(b: B)
class B(exe: ExecutorService)
...
@Value("${search.threads}") var searchThreads: int
exe = Executors.newFixedThreadPool(searchThreads)
In Guice I could use @Named annotations to have different executors, or just use one executor for anyone that needed it and simply define:
final int searchThreads = readSearchThreadsFromConfigs()
bind(Executor.class).toInstance(Executors.newFixedThreadPool(searchThreads));
I am not sure how to set this up within Spring. It seems every example I see doesn't really cover generics, nor does it really go over static constructors or being able to "provide" the value.
What is the best way to get similar results to what I had above from Guice? Is there a notion of a "module" like Guice and Dagger use (other than the XML file; something statically checked)?
EDIT:
Here is a bit of the code currently used. It creates the executor within the constructor:
@Autowired
public LogsModule(@Value("${search.threads}") final int searchThreads) {
    searchPool = Executors.newFixedThreadPool(searchThreads);
}
In Spring it's basically the same.
Your example can be rewritten as follows using @Configuration:
@Bean(value = "searchExecutor", destroyMethod = "shutdownNow")
public ExecutorService executorService(Environment env) {
    final int searchThreads = env.getProperty("search.threads", Integer.class, 3);
    return Executors.newFixedThreadPool(searchThreads);
}
This example uses Environment - you can either add properties from your config to it, or use your config directly instead.
With XML configuration it would be more complex, but you can mix #Configuration with XML.
If you need multiple executors, you can use @Qualifier (or perhaps @Named) to distinguish between candidates by their bean names:
@Autowired
public LogsModule(@Qualifier("searchExecutor") ExecutorService e) { ... }
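Putting it together, a module-like @Configuration class with two named executors might look roughly like this (a sketch; the class, bean and property names are made up):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.Environment;

@Configuration
public class ExecutorConfig {

    @Bean(value = "searchExecutor", destroyMethod = "shutdownNow")
    public ExecutorService searchExecutor(Environment env) {
        return Executors.newFixedThreadPool(env.getProperty("search.threads", Integer.class, 3));
    }

    @Bean(value = "indexExecutor", destroyMethod = "shutdownNow")
    public ExecutorService indexExecutor(Environment env) {
        return Executors.newFixedThreadPool(env.getProperty("index.threads", Integer.class, 1));
    }
}

Each constructor that needs one then picks it by name with @Qualifier("searchExecutor") or @Qualifier("indexExecutor"), as in the LogsModule example above.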

What's the common way to deal with Jackson serialization

Currently I have a project that uses Spring-Hibernate and also Jackson to deal with JSON. The first time I tried to use Jackson I always got LazyInitializationException, and sometimes an infinite loop for entities that reference each other. Then I found @JsonIgnore and @JsonIdentityInfo.
Now the problem is that sometimes I need to ignore properties and sometimes I need those same properties to be serialized. Is there a way to sometimes ignore several fields and sometimes serialize them at runtime?
I found "Serialization and Deserialization with Jackson: how to programmatically ignore fields?"
But if I always have to use the mix-in annotation, it would be cumbersome for an object with dozens of properties to retrieve. E.g. on page1 I need propertyA, propertyB and propertyC; on page2 I need propertyA and propertyC; on page3 I only need propertyB. In those cases alone I would have to create one class for each page, resulting in 3 classes.
So in that case is there a way to define something like:
objectA.ignoreAllExcept('propertyA');
String[] properties = {'propertyA', 'propertyC'};
objectB.ignoreAllExcept(properties); // Retrieve propertyA and propertyC
objectC.ignore(properties);
What you might be looking for is a Module. The documentation says that Modules are
Simple interface for extensions that can be registered with ObjectMappers to provide a well-defined set of extensions to default functionality.
Following is an example of how you might use them to accomplish what you want. Note, there are other ways this can be achieved; this is just one of them.
A simple DTO that can be used for specifying the properties to filter:
public class PropertyFilter {

    public Class<?> classToFilter;
    public Set<String> propertiesToIgnore = Collections.emptySet();

    public PropertyFilter(Class<?> classToFilter, Set<String> propertiesToIgnore) {
        this.classToFilter = classToFilter;
        this.propertiesToIgnore = propertiesToIgnore;
    }
}
A custom module that filters out properties based on some attribute that you store in the current request.
public class MyModule extends Module {

    @Override
    public String getModuleName() {
        return "Test Module";
    }

    @Override
    public void setupModule(SetupContext context) {
        context.addBeanSerializerModifier(new MySerializerModifier());
    }

    @Override
    public Version version() {
        // Modify if you need to.
        return Version.unknownVersion();
    }

    public static class MySerializerModifier extends BeanSerializerModifier {

        @Override
        public BeanSerializerBuilder updateBuilder(SerializationConfig config,
                                                   BeanDescription beanDesc,
                                                   BeanSerializerBuilder builder) {
            // The filters are expected as an attribute of the current request
            List<PropertyFilter> filters = (List<PropertyFilter>) RequestContextHolder
                    .getRequestAttributes()
                    .getAttribute("filters", RequestAttributes.SCOPE_REQUEST);
            PropertyFilter filter = getPropertyFilterForClass(filters, beanDesc.getBeanClass());
            if (filter == null) {
                return builder;
            }
            // Keep only the properties that are not ignored for this class
            List<BeanPropertyWriter> propsToWrite = new ArrayList<BeanPropertyWriter>();
            for (BeanPropertyWriter writer : builder.getProperties()) {
                if (!filter.propertiesToIgnore.contains(writer.getName())) {
                    propsToWrite.add(writer);
                }
            }
            builder.setProperties(propsToWrite);
            return builder;
        }

        private PropertyFilter getPropertyFilterForClass(List<PropertyFilter> filters, Class<?> classToCheck) {
            for (PropertyFilter f : filters) {
                if (f.classToFilter.equals(classToCheck)) {
                    return f;
                }
            }
            return null;
        }
    }
}
Note: There is a changeProperties method in the BeanSerializerModifier class that is more appropriate for changing the property list (according to the documentation). So you can move the code written in the updateBuilder to changeProperties method with appropriate changes.
Now, you need to register this custom module with your ObjectMapper. You can get the Jackson HTTP message converter from your application context, and get its object mapper. I am assuming you already know how to do that as you have been dealing with the lazy-initialization issue as well.
// Figure out a way to get the ObjectMapper.
MappingJackson2HttpMessageConverter converter = ... // get the jackson-mapper;
converter.getObjectMapper().registerModule(new MyModule());
And you are done. When you want to customize the serialization of a particular type of object, create a PropertyFilter for it, put it in a List, and make that list available as an attribute of the current request. This is just a simple example; you might need to tweak it a bit to suit your needs.
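For example, a controller (or a servlet filter) could populate that request attribute roughly like this (a sketch; the entity class and property name are made up, and the attribute name "filters" matches the module above):

import java.util.Collections;
import java.util.List;
import org.springframework.web.context.request.RequestAttributes;
import org.springframework.web.context.request.RequestContextHolder;

// Hide "internalNotes" on MyEntity for this request only
List<PropertyFilter> filters = Collections.singletonList(
        new PropertyFilter(MyEntity.class, Collections.singleton("internalNotes")));

RequestContextHolder.getRequestAttributes()
        .setAttribute("filters", filters, RequestAttributes.SCOPE_REQUEST);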
In your question, you seem to be looking for a way to specify the properties-to-filter-out on the serialized objects themselves. That, in my opinion, should be avoided, as the list of properties to filter out doesn't belong to your entities. However, if you do want to do that, create an interface that provides getters and setters for the list of properties. Suppose the name of the interface is CustomSerialized. Then you can modify the MyModule class to look for instances of this CustomSerialized interface and filter out the properties accordingly.
Note: You might need to adjust/tweak a few things based on the versions of the libraries you are using.
I think there is a more flexible way to do it. You can configure Jackson in such a way that it silently ignores lazy-loaded properties instead of stopping the serialization process, so you can reuse the same class: just load all necessary properties/relations and pass the object to Jackson. You can do this by declaring your own ObjectMapper and turning off the SerializationFeature.FAIL_ON_EMPTY_BEANS feature. Hope it helps.
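A minimal sketch of that configuration (assuming Jackson 2.x):

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;

ObjectMapper mapper = new ObjectMapper();
// Don't fail on beans with no serializable properties (e.g. uninitialized proxies)
mapper.disable(SerializationFeature.FAIL_ON_EMPTY_BEANS);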
You can filter out properties without modifying your classes by creating a static interface for a mix-in annotation. Next, annotate that interface with the @JsonFilter annotation. Create a SimpleBeanPropertyFilter and a SimpleFilterProvider. Then create an ObjectWriter with your filter provider by invoking objectMapper.writer(filterProvider).
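A rough sketch of that approach (the target class, filter name and property names are made up for illustration):

import com.fasterxml.jackson.annotation.JsonFilter;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectWriter;
import com.fasterxml.jackson.databind.ser.impl.SimpleBeanPropertyFilter;
import com.fasterxml.jackson.databind.ser.impl.SimpleFilterProvider;

// Mix-in interface that attaches a named filter to the target class
@JsonFilter("dynamicFilter")
interface DynamicFilterMixIn {}

ObjectMapper mapper = new ObjectMapper();
mapper.addMixIn(MyEntity.class, DynamicFilterMixIn.class);

// Serialize only propertyA and propertyC with this writer
SimpleFilterProvider filterProvider = new SimpleFilterProvider()
        .addFilter("dynamicFilter",
                   SimpleBeanPropertyFilter.filterOutAllExcept("propertyA", "propertyC"));

ObjectWriter writer = mapper.writer(filterProvider);
String json = writer.writeValueAsString(myEntity);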

How do I use Local Variable Annotations for Wicket Authorization?

I'm rolling my own IAuthorizationStrategy for Wicket 1.5.x. I've set up type annotations for pages to use with isInstantiationAuthorized(). It works well, and I'd like to use annotations for isActionAuthorized() as well. Ideally I'd like to be able to annotate local variables and then check the annotations in my AuthStrategy. From what I've read, local variable annotations don't work that way.
Is there any kind of known workaround, maybe some sort of compile-time annotation processing, to turn an annotated local variable into an "anonymous" subclass with the annotation as a type annotation?
For the record, the annotation I'm trying to use looks like this:
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.TYPE, ElementType.LOCAL_VARIABLE })
public @interface AdminOnly
{
    int isVisible() default 0;
    int isEnabled() default 1;
}
UPDATE
So, based on @Xavi López's answer, what I was hoping to do isn't exactly possible.
Annotated local variables should be available at compile time, though. Is there some way I could use them as a shortcut for boiler-plating the metadata code examples that are available in Wicket Examples or the excellent Apache Wicket Cookbook?
I've struggled with a similar issue some time ago with Wicket 1.3.x and didn't find any way to achieve this with annotations. Annotations on local variables can't be retained at run time, as explained in the JLS (9.6.3.2, @Retention):
An annotation on a local variable declaration is never retained in the binary representation.
In the related question "How can I create an annotation processor that processes a Local Variable?" they talk about LAPT-javac, a patched javac version that allows this. On their site there's a link to the Type Annotations Specification (JSR 308), which will hopefully address this subject (JDK 8?).
I ended up defining a plain old interface with a related functionality code:
public interface RestrictedComponent {
    Integer getFunction();
}
The main problem with this approach is that it's not possible to make anonymous subclasses of a specific class implement other interfaces on the fly (such as Component c = new TextField() implements AdminOnly { }), but you can always define Component extensions that implement RestrictedComponent in a class:
public abstract class RestrictedTextField extends TextField implements RestrictedComponent { }
Finally, I ended up implementing a RestrictedContainer that just subclasses WebMarkupContainer, put every secured component inside one, and modelled it with a <wicket:container> in the markup.
public class RestrictedContainer extends WebMarkupContainer implements RestrictedComponent {

    private final Integer function;

    public RestrictedContainer(String id, IModel model, final Integer function) {
        super(id, model);
        this.function = function;
    }

    public RestrictedContainer(String id, final Integer function) {
        super(id);
        this.function = function;
    }

    public Integer getFunction() {
        return function;
    }
}
Then, in the authorization strategy, I checked for component instanceof RestrictedComponent and returned true or false depending on the user's permissions on the associated function.
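A rough sketch of what that check might look like in the strategy (the permission lookup is a made-up helper; in Wicket 1.5, IAuthorizationStrategy also requires isInstantiationAuthorized, which handles the page-level type annotations mentioned in the question):

import org.apache.wicket.Component;
import org.apache.wicket.authorization.Action;
import org.apache.wicket.authorization.IAuthorizationStrategy;
import org.apache.wicket.request.component.IRequestableComponent;

public class FunctionAuthorizationStrategy implements IAuthorizationStrategy {

    @Override
    public boolean isActionAuthorized(Component component, Action action) {
        if (component instanceof RestrictedComponent) {
            Integer function = ((RestrictedComponent) component).getFunction();
            return currentUserHasPermission(function, action);
        }
        return true;
    }

    @Override
    public <T extends IRequestableComponent> boolean isInstantiationAuthorized(Class<T> componentClass) {
        // Page-level check based on type annotations (omitted here)
        return true;
    }

    private boolean currentUserHasPermission(Integer function, Action action) {
        // Hypothetical lookup of the current user's permissions for this function
        return true;
    }
}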
