Context:
A microservice exposes a REST API and handles incoming JSON requests sequentially. Each request is deserialized, validated, and put into a Kafka topic for further processing.
An example of the class to be validated:
// autogenerated from an OpenAPI schema using custom templates
public class DataPayload {

    @NotNull
    @Size(min = 1, max = 100)
    private String description;

    @Valid
    @Size(max = 1024)
    private List<DataLine> dataLines;

    // getters, setters, etc.

    public static class DataLine {
        // lots of fields to be validated..
    }
}
We run validation using JSR 303/JSR 380 Bean Validation:
public static void main(String[] args) {
    var validator = Validation.buildDefaultValidatorFactory().getValidator();
    var violations = validator.validate(getDataPayload());
}
Does anybody have an idea how validation of List<DataLine> dataLines could be parallelized with minimal effort?
Several (obvious) options I have so far:
1. Manually run validator.validate(dataLine) in parallel for each element of the list, alongside validating the DataPayload without dataLines: validator.validate(withoutDataLines(dataPayload)).
2. Similar to option 1, but with some tricks around validation groups.
3. (Not sure if it is possible.) A custom ConstraintValidator that validates the container's elements in parallel. Open question: how to delegate nested/cascaded validation to the default mechanism?
Although these options are viable, I wonder whether there is a smarter, more elegant way to solve this problem.
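A minimal sketch of option 1 (assumed names; a plain Function stands in for validator.validate so the snippet runs without a Bean Validation provider on the classpath — with a real Validator the per-element call would return Set<ConstraintViolation<T>> instead of Set<String>):

```java
import java.util.List;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ParallelValidation {

    // Fans the per-element validation out over the common ForkJoinPool
    // and merges all violations into one set.
    static <T> Set<String> validateInParallel(List<T> elements,
                                              Function<T, Set<String>> validateOne) {
        return elements.parallelStream()
                .map(validateOne)
                .flatMap(Set::stream)
                .collect(Collectors.toSet());
    }
}
```

The payload itself (minus the cascaded dataLines) would still be validated once, sequentially, and the two result sets merged. Note that a Validator from buildDefaultValidatorFactory() is thread-safe, so sharing one instance across the parallel tasks is fine.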
I am trying to fix some vulnerability findings, and there is one I could not resolve. I tried adding the @Valid annotation to the sync method, but I get the same error. This is the description from Fortify:
The framework binder used for binding the HTTP request parameters to
the model class has not been explicitly configured to allow, or
disallow certain attributes.
To ease development and increase productivity, most modern frameworks
allow an object to be automatically instantiated and populated with
the HTTP request parameters whose names match an attribute of the
class to be bound. Automatic instantiation and population of objects
speeds up development, but can lead to serious problems if implemented
without caution. Any attribute in the bound classes, or nested
classes, will be automatically bound to the HTTP request parameters.
Therefore, malicious users will be able to assign a value to any
attribute in bound or nested classes, even if they are not exposed to
the client through web forms or API contracts.
I am getting the error on this line:
public ResponseClass sync(@BeanParam MyClassRequest request) throws Exception {
MyClassResource.java
@Api(tags = "Relay")
@Stateless
public class MyClassResource extends AbstractService<MyClassRequest, ResponseClass> {

    @EJB
    private MyClassService myClassService;

    @POST
    @Path("/api/v1/service")
    @Produces({"application/json"})
    @ApiOperation(value = "Processes Conn",
            response = ResponseClass.class, responseContainer = "ResponseClass", hidden = true)
    @Override
    public ResponseClass sync(@BeanParam MyClassRequest request) throws Exception {
        myClassService.processFeed(request);
        return new RelayResponse(HttpStatuses.ACCEPTED.getStatus());
    }
}
MyClassRequest.java
In this file I have tried @FormParam("ccc"), but I get the same error:
public class MyClassRequest extends RelayRequest {

    public MyClassRequest() {
        super.setMessageType("not required");
    }

    private String myData;
    private String conneRid;
    private String connectionCreatedDate;
If someone could give me a hint on how to solve this, I would really appreciate it.
Do you expect all fields to be present in the request? You are using the @Valid annotation, but there are no validation annotations in the MyClassRequest model. Try adding some annotations such as @JsonIgnore for non-mandatory fields, or @JsonInclude on the class. If that does not help, try explicitly adding @JsonProperty to each field.
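The Fortify finding itself is about the binder accepting arbitrary request attributes. A hypothetical, stdlib-only sketch of the allowlist idea it asks for (names assumed; frameworks usually expose this via binder configuration rather than hand-written code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Explicitly configured binding: only allowlisted parameter names are
// copied from the request, so a malicious extra parameter can never
// populate an unexpected attribute of the model.
public class SafeBinder {

    static final Set<String> ALLOWED =
            Set.of("myData", "conneRid", "connectionCreatedDate");

    public static Map<String, String> bind(Map<String, String> requestParams) {
        Map<String, String> bound = new HashMap<>();
        for (Map.Entry<String, String> e : requestParams.entrySet()) {
            if (ALLOWED.contains(e.getKey())) { // drop anything not allowlisted
                bound.put(e.getKey(), e.getValue());
            }
        }
        return bound;
    }
}
```

The bound map would then feed only the three known fields of MyClassRequest; an attacker-supplied parameter such as isAdmin is silently discarded.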
I currently have a Spring service consuming a REST service via autogenerated code.
I placed an internal interface in between to abstract away the REST interface, since it is still in development.
The REST service is in fact two services on the same host, so I generate code twice and end up with two components that I can inject into my internal interface implementation.
In the interface implementation I adapt the base paths of the REST client components via @PostConstruct, since the URL depends on the deployment environment.
This works well so far, even though I believe it would be better to adapt the base path somewhere other than the internal interface implementation.
I am thankful for any hints here.
The Problem
Now the tricky part.
The REST services I consume exist multiple times in the environment, with different data. Sometimes there are two, sometimes three, and so on.
The user of our website should be able to select which backend he wants to consume, and the available service backends should be configurable per environment.
To configure this per environment, I thought about adding a map to the properties like:
service-name: url
second-name: url
and so on.
This map would contain a default entry for the always-existing service; via environment variables it can be overridden to list more backend services.
So now I want to route the website requests to the chosen backend service.
My idea is that I need some kind of service that holds the internal interfaces for the different backend instances and can identify which one to use based on its name.
The question is now, how to build this with Spring?
More specifically:
How do I construct my InternalRestClient multiple times with different dependencies?
How can I tell them apart, identify them, and use them?
Thank you very much for your suggestions in advance.
Code Examples
The internal Rest Interface
public interface InternalRestClient {
    String someAbstractMethodUsingBothServices(String someDate);
}
The Implementation
@Service
public class InternalRestClientImpl implements InternalRestClient {

    @Value("${url}")
    private String url;

    private FirstRestService firstService;
    private SecondRestService secondService;

    public InternalRestClientImpl(FirstRestService firstService, SecondRestService secondService) {
        this.firstService = firstService;
        this.secondService = secondService;
    }

    @PostConstruct
    void correctPaths() {
        firstService.setBasePath(url);
        secondService.setBasePath(url);
    }

    @Override
    public String someAbstractMethodUsingBothServices(String someDate) {
        return null;
    }
}
The autogenerated OpenAPI components
@Component
public class FirstRestService {

    private String basePath;

    public void setBasePath(String basePath) {
        this.basePath = basePath;
    }

    // some methods
}

@Component
public class SecondRestService {

    private String basePath;

    public void setBasePath(String basePath) {
        this.basePath = basePath;
    }

    // some other methods
}
We have "beans" that are meant to be serialized to JSON and then returned to our (Vue.js-based) UI layer. So far, my beans look like this:
public class ExampleBean {

    private final int id;
    private final String name;

    public ExampleBean(int id, String name) {
        this.id = id; ...
    }

    // getters for all fields
}
They are instantiated by some mapper:
public ExampleBean map(SomeInternalThing foo) {
    int id = getIdFromFoo(foo);
    String name = doSomethingElse(foo.itsBar());
    return new ExampleBean(id, name);
}
I then have some unit tests (for the mapper):
@Test
public void testGetId() {
    ... do some mocking setup so that the mapper can do its job
    assertThat(mapperUnderTest.map(someFoo).getId(), is(5));
}
The main advantage of this approach is that bean objects are immutable (and the compiler tells me when I forgot to initialize a field).
But: the number of fields in that bean keeps increasing. The SomeInternalThing context has maybe 30 to 50 "properties", and the number of fields required in the bean has gone from 3 to 5 to 8 by now.
What is really "killing" me is that the mapping code does something different for each required field, which forces me to maintain more and more "common" mock specifications.
By now I am wondering if there are better choices to implement such "data only objects".
Personally I prefer Lombok (https://projectlombok.org/) when creating data objects; it gets rid of the boilerplate code. You should take a look at the @Builder and @Data annotations.
Since using Lombok is always a team decision, you could start by implementing the builder pattern yourself (for such data objects).
This lets you set every property separately and test every property individually.
That being said, you probably shouldn't use a constructor with every field
(see @AllArgsConstructor in Lombok).
As you can see here (https://en.wikipedia.org/wiki/JavaBeans), beans should have a public default constructor.
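A hand-rolled builder for the ExampleBean from the question might look like this (a sketch; Lombok's @Builder generates essentially the same code):

```java
// Immutable bean: fields are final and only the builder can construct it,
// yet each property can be set (and tested) individually.
public class ExampleBean {

    private final int id;
    private final String name;

    private ExampleBean(Builder b) {
        this.id = b.id;
        this.name = b.name;
    }

    public int getId() { return id; }
    public String getName() { return name; }

    public static class Builder {
        private int id;
        private String name;

        public Builder id(int id) { this.id = id; return this; }
        public Builder name(String name) { this.name = name; return this; }

        public ExampleBean build() { return new ExampleBean(this); }
    }
}
```

Usage: new ExampleBean.Builder().id(5).name("foo").build(). A test that only cares about getId() then only needs to stub the mapping for id, not for every constructor argument.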
We are actually using Spring Boot's @ConfigurationProperties as, basically, a configuration mapper: it gives us an easy shortcut for mapping properties onto objects.
@ConfigurationProperties("my.service")
public class MyService {

    private String filePrefix;
    private Boolean coefficient;
    private Date beginDate;

    // getters/setters mandatory at the time of writing

    public void doBusinessStuff() {
        // ...
    }
}
Although this was a nice productivity boost while we were prototyping the app, we have come to question whether it is the right usage.
I mean, configuration properties have a special status in Spring Boot's context: they are exposed through actuator endpoints, they can be used to trigger conditional beans, and they seem oriented toward technical configuration properties.
Question: is it "correct" to use this mechanism for any business property/value, or is it plain misuse?
Are there any potential drawbacks we missed?
Right now our only concern is that we cannot use @ConfigurationProperties on immutable classes, which is closely related to this issue on Spring Boot's tracker: Allow field based @ConfigurationProperties binding.
If your property represents something that is configurable based on the environment/profile, that is exactly what the mechanism is there for, though I'm a little unclear what you mean by
"map properties on objects".
I would not favor this style in general, especially if your bean has multiple properties to set. A more standard idiom is to have a class that encapsulates the properties/settings used to create your bean:
@ConfigurationProperties("my.service")
public class MyServiceProperties {

    private String filePrefix;
    private Boolean coefficient;
    private Date beginDate;

    // getters/setters mandatory at the time of writing
}
then your Service class would look like this:
@EnableConfigurationProperties(MyServiceProperties.class)
public class MyService {

    @Autowired
    private MyServiceProperties properties;

    // do stuff with properties
    public void doBusinessStuff() {
        // ...
    }
}
This would at least allow you to pass the properties easily into an immutable class through its constructor (make copies of any mutable properties). The properties bean can also be reused if other parts of your app need some shared configuration.
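A minimal sketch of that constructor-copy idea (assumed names, no Spring on the classpath; beginDate is copied defensively because Date is mutable):

```java
import java.util.Date;

// Stand-in for the Spring-bound properties bean.
class MyServiceProperties {
    String filePrefix;
    Boolean coefficient;
    Date beginDate;
}

// Immutable service: all state is final, populated once from the
// properties bean in the constructor.
class MyService {

    private final String filePrefix;
    private final Boolean coefficient;
    private final Date beginDate;

    MyService(MyServiceProperties p) {
        this.filePrefix = p.filePrefix;
        this.coefficient = p.coefficient;
        // defensive copy so later mutation of the properties bean
        // cannot leak into the immutable service
        this.beginDate = p.beginDate == null ? null : new Date(p.beginDate.getTime());
    }

    String filePrefix() { return filePrefix; }
}
```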
I'm building a simple RESTful service, and to achieve that I need two things:
Get an instance of my resource (i.e. Book) from request parameters, so that instance can be persisted
Build an XML document from that instance to send the representation to clients
Right now, I'm doing both things in my POJO class:
public class Book implements Serializable {

    private Long id;

    public Book(Form form) {
        // initializing attributes
        id = Long.parseLong(form.getFirstValue(Book.CODE_ELEMENT));
    }

    public Element toXml(Document document) {
        // getting an XML representation of the Book
        Element bookElement = document.createElement(BOOK_ELEMENT);
        return bookElement;
    }
}
I remembered an OO principle that says behavior should live where the data is, but now my POJO depends on the request and XML APIs, and that doesn't feel right (the class also carries persistence annotations).
Is there any standard approach/pattern to solve that issue?
EDIT:
The libraries I'm using are Restlet and Objectify.
I agree with you when you say that behavior should be where the data is. But at the same time, as you say, I just don't feel comfortable polluting a POJO interface with methods used only for serialization (which can grow considerably depending on the formats you want to support: JSON, XML, etc.).
1) Build an XML document from that instance to send the representation to the clients
In order to decouple the object from serialization logic, I would adopt the Strategy Pattern:
interface BookSerializerStrategy {
    String serialize(Book book);
}

public class XmlBookSerializerStrategy implements BookSerializerStrategy {

    public String serialize(Book book) {
        // Do something to serialize your book to XML.
        return null;
    }
}

public class JsonBookSerializerStrategy implements BookSerializerStrategy {

    public String serialize(Book book) {
        // Do something to serialize your book to JSON.
        return null;
    }
}
Your POJO interface would become:
public class Book implements Serializable {

    private Long id;
    private BookSerializerStrategy serializer;

    public String serialize() {
        return serializer.serialize(this);
    }

    public void setSerializer(BookSerializerStrategy serializer) {
        this.serializer = serializer;
    }
}
Using this approach you isolate the serialization logic in one place and don't pollute your POJO with it. Additionally, by returning a String you won't need to couple your POJO to the Document and Element classes.
2) Get an instance of my resource (i.e Book) from request parameters, so I can get that instance to be persisted
Finding a pattern to handle deserialization is more complex, in my opinion. I really don't see a better way than creating a factory with static methods in order to keep this logic out of your POJO.
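A hypothetical sketch of such a factory (a plain Map stands in for the Restlet Form here, so the snippet is self-contained; with Restlet you would read form.getFirstValue(Book.CODE_ELEMENT) instead):

```java
import java.util.Map;

// The POJO knows nothing about the request API; a static factory
// builds it from already-extracted parameters.
class Book {

    static final String CODE_ELEMENT = "code";

    private final Long id;

    private Book(Long id) {
        this.id = id;
    }

    Long getId() {
        return id;
    }

    // All deserialization logic lives here, in one place.
    static Book fromParams(Map<String, String> params) {
        return new Book(Long.parseLong(params.get(CODE_ELEMENT)));
    }
}
```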
Another approach to both of your questions would be something like what JAXB uses: two separate objects, an Unmarshaller in charge of deserialization and a Marshaller for serialization. Since Java 1.6, JAXB has shipped with the JDK.
Finally, those are just suggestions. I've become really interested in your question actually and curious about other possible solutions.
Are you using Spring or any other framework in your project? If you used Spring, it would take care of serialization for you, as well as assigning request params to method params (parsing as needed).