I'm trying to fix a bug in our company's homemade framework. Basically, we should be able to inject EclipseLink properties into the EntityManager through the following class which is part of our framework:
@ConfigurationProperties(prefix = "our.framework.eclipselink")
public class CustomEclipseLinkProperties {

    private Map<String, Object> properties;

    public Map<String, Object> getProperties() {
        return properties;
    }

    public void setProperties(Map<String, Object> properties) {
        this.properties = properties;
    }

    public String getBatchSize() {
        return (String) properties.get(PersistenceUnitProperties.BATCH_WRITING_SIZE);
    }
}
Our service built on top of that framework has the following properties file (application.properties):
our.framework.eclipselink.properties.eclipselink.logging.level=FINEST
our.framework.eclipselink.properties.eclipselink.logging.level.cache=FINEST
our.framework.eclipselink.properties.eclipselink.logging.level.sql=FINEST
our.framework.eclipselink.properties.eclipselink.logging.parameters=true
our.framework.eclipselink.properties.eclipselink.jdbc.batch-writing.size=1000
our.framework.eclipselink.properties.eclipselink.jdbc.bind-parameters=true
our.framework.eclipselink.properties.eclipselink.jdbc.batch-writing=JDBC
our.framework.eclipselink.properties.eclipselink.jpa.uppercase-column-names=false
our.framework.eclipselink.properties.eclipselink.jpa.uppercase-columns=false
When I put a breakpoint after the CustomEclipseLinkProperties has been initialized, I can see that getBatchSize() returns null. If I look into getProperties(), I do see the values were detected, but they were inserted as a LinkedHashMap.
The expected behavior would be to obtain a Map that uses the entire suffix as the String key, rather than this LinkedHashMap structure that has essentially split the property names on the dots. That would mean the call to getBatchSize() returns 1000.
I've seen a few answers such as this one, but they don't seem generic enough for my liking. Isn't there a way to simply get the entire suffix as the key when it is injected by @ConfigurationProperties? Otherwise, it seems like it would require intervention whenever we want to support a different type of property.
Turns out "suffix as key" is the default behavior if I swap from Map<String, Object> to Map<String, String>.
The Object value isn't actually useful in our case, so that solves this problem.
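For reference, a minimal sketch of the change (only the field and accessor types differ from the class above):

@ConfigurationProperties(prefix = "our.framework.eclipselink")
public class CustomEclipseLinkProperties {

    // Binding to Map<String, String> keeps the full remainder of the property
    // name (e.g. "eclipselink.jdbc.batch-writing.size") as the map key.
    private Map<String, String> properties;

    public Map<String, String> getProperties() {
        return properties;
    }

    public void setProperties(Map<String, String> properties) {
        this.properties = properties;
    }

    public String getBatchSize() {
        return properties.get(PersistenceUnitProperties.BATCH_WRITING_SIZE);
    }
}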
Related
I have an object called ReconciliationResult:
public class ReconciliationResult {

    Map<String, Object> recordValue;
    List<Object> keyValues;
    Origin origin;
    ReconciliationResultStatus status;

    public enum ReconciliationResultStatus {
        INVALID_KEY,
        DUPLICATE_KEY,
        MATCHING,
        NON_MATCHING;
    }

    public enum Origin {
        LEFT_SIDE,
        RIGHT_SIDE;
    }
}
I am comparing an instance of this object to the result of my class under test.
EDIT:
List<ReconciliationResult> results =
reconciliationRegistry.getRecon("BPSVsGPM_TradeDated").reconcile(TESTDATE);
assertThat(results).usingRecursiveComparison().ignoringFields("id").isEqualTo(expectedMatch);
However, I don't want to test the ID entry in the recordValue map inside my ReconciliationResult object. I don't want to test it because I have multiple tests in this class, and every time I insert something into my embedded PG database the ID increments, so the ID is never the same across assertions.
I have tried clearing the database after every run using JdbcTemplate, but that didn't work. I also added the @DirtiesContext annotation since the tests are transactional. Alas, those approaches didn't work either.
Any clarification on the matter would be greatly appreciated. Please let me know if you need any more information from me.
Aren't you invoking the methods in the wrong sequence here?
assertThat(results).isEqualTo(expectedMatch).usingRecursiveComparison().ignoringFields("id");
You probably need to invoke the recursive comparison and ignore the fields before calling isEqualTo.
assertThat(results).usingRecursiveComparison().ignoringFields("id").isEqualTo(expectedMatch);
I had to compare the map separately, since ignoringFields works on object members and can't reach into the map to ignore a specific key. I created a helper function to filter out the ID entry from my Map<String, Object> field during the comparison.
assertThat(results)
    .usingRecursiveComparison()
    .ignoringFields("recordValue", "keyValues")
    .isEqualTo(expectedBreak);

// compare the map contents separately, with the ignored keys filtered out
assertThat(filterIgnoredKeys(results.get(0).getRecordValue()))
    .usingRecursiveComparison()
    .isEqualTo(filterIgnoredKeys(expectedBreak.get(0).getRecordValue()));

// Guava's Maps.filterKeys returns a filtered view of the given map
private static <V> Map<String, V> filterIgnoredKeys(Map<String, V> map) {
    return Maps.filterKeys(map, key -> !IGNORED_FIELDS.contains(key));
}
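The IGNORED_FIELDS constant referenced in the helper isn't shown above; its assumed definition is just a set of the map keys to skip, e.g.:

// assumed definition: the map keys we never want to compare
private static final Set<String> IGNORED_FIELDS = ImmutableSet.of("id");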
I have a Map that I receive in my Spring controller from a browser redirect back from a third party, as below:
@RequestMapping(value = "/capture", method = RequestMethod.POST,
        consumes = MediaType.APPLICATION_FORM_URLENCODED_VALUE)
public void capture(@RequestParam final Map<String, String> response) {
    // TODO : perform validations first.
    captureResponse(response);
}
Before using this payload, I need to do non-trivial validation, involving first checking for non-null values of a map, and then using those values in a checksum validation. So, I would like to validate my payload programmatically using the Spring Validator interface. However, I could not find any validator example for validating a Map.
For validating a Java object, I understand how a Validator is invoked: the object and a BeanPropertyBindingResult to hold the errors are passed to the Validator, as below:
final Errors errors = new BeanPropertyBindingResult(object, objectName);
myValidator.validate(object, errors);
if (errors.hasErrors()) {
    throw new MyWebserviceValidationException(errors);
}
For a Map, I can see that there is a MapBindingResult class that extends AbstractBindingResult. Should I simply use it, pass my map in as the Object, and cast it back to a Map inside the validator? Also, how would the supports(final Class<?> clazz) method be implemented in my validator? Would it simply be like the snippet below, where there can only be one validator supporting the generic HashMap class? Somehow that doesn't feel right. (This doesn't really matter to me, since I will be injecting my validator and using it directly rather than through a validator registry, but I'm still curious.)
@Override
public boolean supports(final Class<?> clazz) {
    return HashMap.class.equals(clazz);
}
Since there is a MapBindingResult, I'm positive that Spring must support Maps for validation; I would like to know how. So is this the way to go, or am I heading in the wrong direction and there is a better way of doing this?
Please note I would like to do this programmatically and not via annotations.
Just like I thought, Spring Validator org.springframework.validation.Validator does support validation of a Map. I tried it out myself, and it works!
I created an org.springframework.validation.MapBindingResult by passing in the map I need to validate and an identifier name for that map (for global/root-level error messages). This Errors object is passed to the validator along with the map to be validated, as shown in the snippet below.
final Errors errors = new MapBindingResult(responseMap, "responseMap");
myValidator.validate(responseMap, errors);
if (errors.hasErrors()) {
    throw new MyWebserviceValidationException(errors);
}
MapBindingResult extends AbstractBindingResult and overrides getActualFieldValue to provide its own implementation for looking up a field in the map being validated:
private final Map<?, ?> target;

@Override
protected Object getActualFieldValue(String field) {
    return this.target.get(field);
}
So, inside the Validator I was able to use all the useful utility methods provided in org.springframework.validation.ValidationUtils just like we use in a standard object bean validator. For example -
ValidationUtils.rejectIfEmpty(errors, "checksum", "field.required");
where "checksum" is a key in my map. Ah, the beauty of inheritance! :)
For the other non-trivial validations, I simply cast the Object to Map and wrote my custom validation code.
So the validator looks something like -
@Override
public boolean supports(final Class<?> clazz) {
    return HashMap.class.equals(clazz);
}

@Override
public void validate(final Object target, final Errors errors) {
    ValidationUtils.rejectIfEmpty(errors, "transactionId", "field.required");
    ValidationUtils.rejectIfEmpty(errors, "checksum", "field.required");

    final Map<String, String> response = (HashMap<String, String>) target;

    // do custom validations with the map's attributes
    // ....
    // if validation fails, reject the whole map:
    errors.reject("response.map.invalid");
}
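Tying it back to the original controller, the validator can then be invoked programmatically at the start of the capture method (a sketch; myValidator is assumed to be injected into the controller):

@RequestMapping(value = "/capture", method = RequestMethod.POST,
        consumes = MediaType.APPLICATION_FORM_URLENCODED_VALUE)
public void capture(@RequestParam final Map<String, String> response) {
    final Errors errors = new MapBindingResult(response, "response");
    myValidator.validate(response, errors);
    if (errors.hasErrors()) {
        throw new MyWebserviceValidationException(errors);
    }
    captureResponse(response);
}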
Validate the parameters inside the map
For the validation of your Map against a specific mapping you will need a custom validator.
As this may be the use case for some, validation of @RequestParam values can also be done with bean validation (javax.validation) annotations, e.g.
@GetMapping(value = "/search")
public ResponseEntity<T> search(@RequestParam
        Map<@NotBlank String, @NotBlank String> searchParams,
While @NotBlank checks that your String is not "", @NotNull can validate non-null parameters, which I guess is what you needed.
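As a side note (not mentioned above), constraints placed directly on controller method parameters are generally only enforced when the controller class itself carries @Validated, which switches on Spring's method-level validation; a minimal sketch:

@Validated // enables method-parameter validation for this controller
@RestController
public class SearchController {

    @GetMapping("/search")
    public ResponseEntity<Map<String, String>> search(
            @RequestParam Map<@NotBlank String, @NotBlank String> searchParams) {
        return ResponseEntity.ok(searchParams);
    }
}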
An alternative is to create a custom constraint annotation for a Map.
You can take a look at the following link:
https://www.baeldung.com/spring-mvc-custom-validator
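A custom constraint along those lines might look roughly like this (a sketch; the annotation name, message, and the "checksum" key are illustrative):

@Target({ElementType.PARAMETER, ElementType.FIELD})
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy = ChecksumPresentValidator.class)
public @interface ChecksumPresent {
    String message() default "checksum is missing";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

public class ChecksumPresentValidator
        implements ConstraintValidator<ChecksumPresent, Map<String, String>> {

    @Override
    public boolean isValid(Map<String, String> value, ConstraintValidatorContext context) {
        // leave null maps to @NotNull; only reject maps without a usable checksum
        return value == null
                || (value.get("checksum") != null && !value.get("checksum").isEmpty());
    }
}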
I am using JsonDeserializer to deserialize my custom object, but in my method annotated with @KafkaListener I receive the object with its Map field set to null.
public ConsumerFactory<String, BizWebKafkaTopicMessage> consumerFactory(String groupId) {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(),
            new JsonDeserializer<>(BizWebKafkaTopicMessage.class));
}
and my BizWebKafkaTopicMessage is
@Data
public class BizWebKafkaTopicMessage {

    // Elastic Search index name
    private String indexName;

    // ElasticSearch index's type name
    private String indexType;

    // Source document to be used <=== This is being delivered as null.
    private Map<String, Object> source;

    // ElasticSearch document primary id
    private Long id;
}
and the listener method listenToKafkaMessages
@KafkaListener(topics = "${biz-web.kafka.message.topic.name}",
        groupId = "${biz-web.kafka.message.group.id}")
public void listenToKafkaMessages(BizWebKafkaTopicMessage message) {
    // ...
    // Here message.source is null
    // ...
}
Inside the listenToKafkaMessages method, the message argument looks like this:
message.indexName = 'neeraj';
message.indexType = 'jain';
message.id = 123;
message.source = null;
My strong suspicion would be that it is the polymorphic nature of your value, rather than Maps per se.
Spring uses Jackson under the hood for its serialisation/deserialisation, and by default Jackson does not encode the concrete class of an Object value when serialising.
Why? Well, it makes for bad compatibility issues, e.g. you serialised an Object (really a MyPojoV1.class) into the database a year ago, and then later read it out, but your code no longer has MyPojoV1.class because things have moved on. It can even cause issues if you move MyPojoV1 to a different package at any point within the lifetime of your application!
So when it comes to deserialising, Jackson doesn't know what class to deserialise the Object into.
A hacky idea would be to run the following somewhere:
// enableDefaultTyping is an instance method on your ObjectMapper
objectMapper.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
Or the nicer/more spring way would be to:
@Configuration
public class JacksonConfiguration {

    @Bean
    public ObjectMapper objectMapper() {
        ObjectMapper mapper = new ObjectMapper();
        // your configuration here, for example:
        // mapper.configure(DeserializationFeature.something, true);
        return mapper;
    }
}
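If you go the configuration route, the customised mapper still needs to reach the deserializer; a sketch of the consumer factory from the question, assuming spring-kafka's JsonDeserializer(Class, ObjectMapper) constructor:

return new DefaultKafkaConsumerFactory<>(props,
        new StringDeserializer(),
        new JsonDeserializer<>(BizWebKafkaTopicMessage.class, objectMapper));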
Finally, it's worth adding that deserialising arbitrary classes is generally a big security risk. There exist classes in Java which execute command-line or even reflection-based logic based on the values in their fields (which Jackson will happily fill for you). Hence someone can craft JSON such that you deserialise into a class that basically executes whatever command is in the value={} field.
You can read more about the exploits here, although I recognise it may not concern you, since your Kafka cluster and its producers may inherently be within your trusted boundaries:
https://www.nccgroup.trust/globalassets/our-research/us/whitepapers/2018/jackson_deserialization.pdf
I have a facade interface where users can ask for information about, let's say, Engineers. That information should be transferred as JSON, for which we made a DTO. Keep in mind that I have multiple datasources that can each provide an item to this list of DTOs.
So I believe right now that I can use the Decorator pattern by adding a handler for each datasource to myEngineerListDTO of type List<EngineerDTO>. By that I mean all the datasources share the same DTO.
The picture below shows that VerticalScrollbar and HorizontalScrollBar have different behaviours added, which means they add behaviour to the WindowDecorator interface.
My question: does my situation fit the Decorator pattern? Do I specifically need to add a behaviour to use this pattern? And is there another pattern that fits my situation better? I have already considered the Chain of Responsibility pattern, but because I don't need to terminate my chain at any given moment, I thought the Decorator pattern might be better.
Edit:
My end result should be a List<EngineersDTO> combining all datasources. The reason I want to use a pattern here is so that I can easily add another datasource behind the rest of the "pipeline". This datasource, just like the others, will have an addEngineersDTOToList method.
To further illustrate how you can use the Chain-of-Responsibility pattern, I put together a small example. I believe you should be able to adapt this solution to suit the needs of your real-world problem.
Problem Space
We have an unknown set of user requests which contain the names of properties to be retrieved. There are multiple datasources which each have varying amounts of properties. We want to search through all possible data sources until all of the properties from the request have been discovered. Some data types and data sources might look like below (note I am using Lombok for brevity):
@lombok.Data
class FooBarData {
    private final String foo;
    private final String bar;
}

@lombok.Data
class FizzBuzzData {
    private final String fizz;
    private final String buzz;
}

class FooBarService {
    public FooBarData invoke() {
        System.out.println("This is an expensive FooBar call");
        return new FooBarData("FOO", "BAR");
    }
}

class FizzBuzzService {
    public FizzBuzzData invoke() {
        System.out.println("This is an expensive FizzBuzz call");
        return new FizzBuzzData("FIZZ", "BUZZ");
    }
}
Our end user might require multiple ways to resolve the data. The following could be a valid user input and expected response:
// Input
"foobar", "foo", "fizz"
// Output
{
"foobar" : {
"foo" : "FOO",
"bar" : "BAR"
},
"foo" : "FOO",
"fizz" : "FIZZ"
}
A basic interface and a simple concrete implementation for our property resolver might look like below:
interface PropertyResolver {
    Map<String, Object> resolve(List<String> properties);
}

class UnknownResolver implements PropertyResolver {
    @Override
    public Map<String, Object> resolve(List<String> properties) {
        Map<String, Object> result = new HashMap<>();
        for (String property : properties) {
            result.put(property, "Unknown");
        }
        return result;
    }
}
Solution Space
Rather than the normal Decorator pattern, a better solution may be the Chain-of-Responsibility pattern. This pattern is similar to the Decorator pattern; however, each link in the chain is allowed to either work on the item, ignore the item, or end the execution. This is helpful for deciding whether a call needs to be made, or for terminating the chain once the work for the request is complete. Another difference from the Decorator pattern is that resolve will not be overridden by each of the concrete classes; our abstract class can call out to the subclass when required using abstract methods.
Back to the problem at hand... For each resolver we need two components. A way to fetch data from our remote service, and a way to extract all the required properties from the data retrieved. For fetching the data we can provide an abstract method. For extracting a property from the fetched data we can make a small interface and maintain a list of these extractors seeing as multiple properties can be pulled from a single piece of data:
interface PropertyExtractor<Data> {
    Object extract(Data data);
}

abstract class PropertyResolverChain<Data> implements PropertyResolver {

    private final Map<String, PropertyExtractor<Data>> extractors = new HashMap<>();
    private final PropertyResolver successor;

    protected PropertyResolverChain(PropertyResolver successor) {
        this.successor = successor;
    }

    protected abstract Data getData();

    protected final void setBinding(String property, PropertyExtractor<Data> extractor) {
        extractors.put(property, extractor);
    }

    @Override
    public Map<String, Object> resolve(List<String> properties) {
        ...
    }
}
The basic idea of the resolve method is to first evaluate which properties can be fulfilled by this PropertyResolver instance. If there are eligible properties, we fetch the data using getData. For each eligible property we extract the property value and add it to a result map. Each property which cannot be resolved is handed to the successor to resolve. Once all properties are resolved, the chain of execution ends.
@Override
public Map<String, Object> resolve(List<String> properties) {
    Map<String, Object> result = new HashMap<>();

    List<String> eligibleProperties = new ArrayList<>(properties);
    eligibleProperties.retainAll(extractors.keySet());
    if (!eligibleProperties.isEmpty()) {
        Data data = getData();
        for (String property : eligibleProperties) {
            result.put(property, extractors.get(property).extract(data));
        }
    }

    List<String> remainingProperties = new ArrayList<>(properties);
    remainingProperties.removeAll(eligibleProperties);
    if (!remainingProperties.isEmpty()) {
        result.putAll(successor.resolve(remainingProperties));
    }
    return result;
}
Implementing Resolvers
When we go to implement a concrete class for PropertyResolverChain we will need to implement the getData method and also bind PropertyExtractor instances. These bindings can act as an adapter for the data returned by each service. The data can follow the same structure as what the service returns, or have a custom schema. Using the FooBarService from earlier as an example, our class could be implemented like below (note that we can have many bindings which result in the same data being returned).
class FooBarResolver extends PropertyResolverChain<FooBarData> {

    private final FooBarService remoteService;

    FooBarResolver(PropertyResolver successor, FooBarService remoteService) {
        super(successor);
        this.remoteService = remoteService;

        // return the whole object
        setBinding("foobar", data -> data);

        // accept different spellings
        setBinding("foo", data -> data.getFoo());
        setBinding("bar", data -> data.getBar());
        setBinding("FOO", data -> data.getFoo());
        setBinding("__bar", data -> data.getBar());

        // create new properties all together!!
        setBinding("barfoo", data -> data.getBar() + data.getFoo());
    }

    @Override
    protected FooBarData getData() {
        return remoteService.invoke();
    }
}
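The FizzBuzzResolver used in the example below is not shown here; a sketch mirroring FooBarResolver might look like this:

class FizzBuzzResolver extends PropertyResolverChain<FizzBuzzData> {

    private final FizzBuzzService remoteService;

    FizzBuzzResolver(PropertyResolver successor, FizzBuzzService remoteService) {
        super(successor);
        this.remoteService = remoteService;
        setBinding("fizz", data -> data.getFizz());
        setBinding("buzz", data -> data.getBuzz());
    }

    @Override
    protected FizzBuzzData getData() {
        return remoteService.invoke();
    }
}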
Example Usage
Putting it all together, we can invoke the resolver chain as shown below. We can observe that the expensive getData call is performed at most once per resolver, and only when one of its bound properties is requested, and that the user gets back exactly the fields they asked for:
PropertyResolver resolver =
    new FizzBuzzResolver(
        new FooBarResolver(
            new UnknownResolver(),
            new FooBarService()),
        new FizzBuzzService());

Map<String, Object> result = resolver.resolve(Arrays.asList(
    "foobar", "foo", "__bar", "barfoo", "invalid", "fizz"));

ObjectMapper mapper = new ObjectMapper();
mapper.enable(SerializationFeature.INDENT_OUTPUT);

System.out.println(mapper
    .writerWithDefaultPrettyPrinter()
    .writeValueAsString(result));
Output
This is an expensive FizzBuzz call
This is an expensive FooBar call
{
"foobar" : {
"foo" : "FOO",
"bar" : "BAR"
},
"__bar" : "BAR",
"barfoo" : "BARFOO",
"foo" : "FOO",
"invalid" : "Unknown",
"fizz" : "FIZZ"
}
I am working on measuring my application metrics using the class below, in which I increment and decrement metrics.
public class AppMetrics {

    private final AtomicLongMap<String> metricCounter = AtomicLongMap.create();

    private static class Holder {
        private static final AppMetrics INSTANCE = new AppMetrics();
    }

    public static AppMetrics getInstance() {
        return Holder.INSTANCE;
    }

    private AppMetrics() {}

    public void increment(String name) {
        metricCounter.getAndIncrement(name);
    }

    public AtomicLongMap<String> getMetricCounter() {
        return metricCounter;
    }
}
I am calling increment method of AppMetrics class from multithreaded code to increment the metrics by passing the metric name.
Problem Statement:
Now I want to have a metricCounter for each clientId (a String). The same clientId can arrive multiple times, and sometimes it will be a new clientId, so I somehow need to get hold of the metricCounter map for that clientId and increment metrics on that particular map (which is the part I am not sure how to do).
What is the right way to do that, keeping in mind it has to be thread-safe and perform atomic operations? I was thinking of using a map like this instead:
private final Map<String, AtomicLongMap<String>> clientIdMetricCounterHolder = Maps.newConcurrentMap();
Is this the right way? If so, how can I populate this map, with clientId as the key and the per-metric counter map as the value?
I am on Java 7.
If you use a map then you'll need to synchronize the creation of new AtomicLongMap instances. I would recommend using a LoadingCache instead. You might not end up using any of the actual "caching" features, but the "loading" feature is extremely helpful, as it will synchronize the creation of AtomicLongMap instances for you, e.g.:
LoadingCache<String, AtomicLongMap<String>> clientIdMetricCounterCache =
        CacheBuilder.newBuilder().build(new CacheLoader<String, AtomicLongMap<String>>() {
            @Override
            public AtomicLongMap<String> load(String key) {
                // creation of the per-client counter map is synchronized by the cache
                return AtomicLongMap.create();
            }
        });
Now you can safely start updating metric counts for any client without worrying about whether the client is new or not, e.g.:

// getUnchecked is fine here because the loader above cannot throw a checked exception
clientIdMetricCounterCache.getUnchecked(clientId).incrementAndGet(metricName);
A Map<String, Map<String, T>> is just a Map<Pair<String, String>, T> in disguise. Create a MultiKey class:
class MultiKey {
    public String clientId;
    public String name;
    // be sure to add a constructor, hashCode, and equals
}
Then just use an AtomicLongMap<MultiKey>.
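Assuming MultiKey gets that two-argument constructor along with equals and hashCode, incrementing a metric then looks like this (names are illustrative):

AtomicLongMap<MultiKey> metrics = AtomicLongMap.create();

// one atomic counter per (clientId, metric name) pair
metrics.getAndIncrement(new MultiKey(clientId, metricName));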
Edited:
Provided the set of metrics is well defined, it wouldn't be too hard to use this data structure to view metrics for one client:
Set<String> possibleMetrics = // all the possible values for "name"

Map<String, Long> getMetricsForClient(String client) {
    return Maps.asMap(possibleMetrics, m -> metrics.get(new MultiKey(client, m)));
}
The returned map will be a live unmodifiable view. It might be a bit more verbose if you're using an older Java version, but it's still possible.
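Since the question mentions Java 7, the same view can be built without lambdas using an anonymous Guava Function (a sketch using the same assumed metrics field):

Map<String, Long> getMetricsForClient(final String client) {
    return Maps.asMap(possibleMetrics, new Function<String, Long>() {
        @Override
        public Long apply(String m) {
            return metrics.get(new MultiKey(client, m));
        }
    });
}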