I think I am falling into a rabbit hole.
Here is my situation.
I am planning on creating a somewhat generic event ApiRequestEvent that will get posted onto an event bus. The event will have 2 fields:
ApiRequestRepository.Request mApiRequest;
ApiRequestParams mApiRequestParams;
The receiver of the event will act like a "router". It will do the following:

1. look at the request field and figure out which API call needs to be made
2. extract the correct parameters from the parameter object
3. call the correct API method with those parameters
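For concreteness, the receiver I have in mind would look roughly like this (a self-contained sketch; all names here are made up for illustration, and the empty marker interface is what forces the downcast):

```java
// Hypothetical sketch of the "router" receiver described above: it switches on
// the request type, downcasts the marker-interface params, and dispatches.
public class ApiRouterSketch {
    public enum Request { SIGN_UP, GET_USER }

    public interface ApiRequestParams {} // marker interface, no methods

    public static class UserRequestParams implements ApiRequestParams {
        public final String name;
        public final String email;

        public UserRequestParams(String name, String email) {
            this.name = name;
            this.email = email;
        }
    }

    public static String route(Request request, ApiRequestParams params) {
        switch (request) {
            case SIGN_UP:
                // the unchecked downcast is exactly what makes this design feel wrong
                UserRequestParams p = (UserRequestParams) params;
                return "signUp(" + p.name + ", " + p.email + ")";
            default:
                throw new IllegalArgumentException("Unknown request: " + request);
        }
    }
}
```

The downcast compiles for any ApiRequestParams, so a mismatched request/params pair only fails at runtime with a ClassCastException; that runtime fragility is the usual objection to an empty marker interface.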
What I am ending up with are several parameter classes that all "implement" the ApiRequestParams interface, but this interface has no methods. I am just using the interface so I can pass any of the parameter classes into the event constructor.
Maybe this is totally correct, but it just feels like I am doing something wrong and this should be done some other way.
Can someone explain what's wrong with this approach, or why it is right, and if it is wrong, how it should be done?
Here are some examples of the code I have so far.
The request event class
public class ApiRequestEvent {
    protected ApiRequestRepository.Request mApiRequest;
    protected ApiRequestParams mApiRequestParams;

    public ApiRequestEvent(ApiRequestRepository.Request request, ApiRequestParams apiRequestParams) {
        mApiRequest = request;
        mApiRequestParams = apiRequestParams;
    }

    public ApiRequestParams getParams() {
        return mApiRequestParams;
    }

    public ApiRequestRepository.Request getRequest() {
        return mApiRequest;
    }
}
the interface with no methods
public interface ApiRequestParams {
}
one of several parameter classes that will implement this "interface"
public class UserRequestParams implements ApiRequestParams {
    private String api_access_token;
    private User user;

    public UserRequestParams(String apiAccessToken) {
        api_access_token = apiAccessToken;
        user = null;
    }

    public UserRequestParams(String name, String email, String password, String passwordConfirmation) {
        user = new User();
        user.name = name;
        user.email = email;
        user.password = password;
        user.password_confirmation = passwordConfirmation;
    }
}
creation of the event and posting it to the bus
mBus.post(new ApiRequestEvent(ApiRequestRepository.Request.SIGN_UP, new UserRequestParams(mName, mEmail, mPassword, mPasswordConfirmation)));
Related
In akka-typed, the convention is to create Behavior classes with static inner classes that represent the messages they receive. Here's a simple example:
public class HTTPCaller extends AbstractBehavior<HTTPCaller.Command> {
    public interface Command {}

    // this is the message the HTTPCaller receives
    public static final class MakeRequest implements Command {
        public final String query;
        public final ActorRef<Response> replyTo;

        public MakeRequest(String query, ActorRef<Response> replyTo) {
            this.query = query;
            this.replyTo = replyTo;
        }
    }

    // this is the response message
    public static final class Response implements Command {
        public final String result;

        public Response(String result) {
            this.result = result;
        }
    }

    public static Behavior<Command> create() {
        return Behaviors.setup(HTTPCaller::new);
    }

    private HTTPCaller(ActorContext<Command> context) {
        super(context);
    }

    @Override
    public Receive<Command> createReceive() {
        return newReceiveBuilder()
                .onMessage(MakeRequest.class, this::onMakeRequest)
                .build();
    }

    private Behavior<Command> onMakeRequest(MakeRequest message) {
        String result = // make HTTP request here using message.query
        message.replyTo.tell(new Response(result));
        return Behaviors.same();
    }
}
Let's say that 20 other actors send MakeRequest messages to the single HTTPCaller actor. Each of these other actors has inner classes that implement its own Command. Since MakeRequest is being used by all 20 classes, it must be a subtype of all 20 of those actors' Command inner interfaces.
This is not ideal. I'm wondering what the Akka way of getting around this is.
There's no requirement that a message (e.g. a command) which an actor sends (except for messages to itself...) has to conform to that actor's incoming message type. The commands sent to the HTTPCaller actor only have to (and in this case only do) extend HTTPCaller.Command.
So imagine that we have something like
public class SomeOtherActor extends AbstractBehavior<SomeOtherActor.Command> {
    public interface Command {}

    // yada yada yada

    ActorRef<HTTPCaller.Command> httpCallerActor = ...

    httpCallerActor.tell(new HTTPCaller.MakeRequest("someQuery", getContext().getSystem().ignoreRef()));
}
In general, when defining messages which are sent in reply, those are not going to extend the message type of the sending actor. In HTTPCaller, for instance, Response probably shouldn't implement Command: it can be a standalone class (alternatively, if it is something that might be received by the HTTPCaller actor, it should be handled in the receive builder).
My code above does bring up one question: if Response is to be received by SomeOtherActor, how can it extend SomeOtherActor.Command?
The solution there is message adaptation: a function to convert a Response to a SomeOtherActor.Command. For example:
// in SomeOtherActor

// the simplest possible adaptation:
public static final class ResponseFromHTTPCaller implements Command {
    public final String result;

    public ResponseFromHTTPCaller(HTTPCaller.Response response) {
        result = response.result;
    }
}

// at some point before telling the httpCallerActor...
// apologies if the Java lambda syntax is messed up...
ActorRef<HTTPCaller.Response> httpCallerResponseRef =
    getContext().messageAdapter(
        HTTPCaller.Response.class,
        response -> new ResponseFromHTTPCaller(response));

httpCallerActor.tell(new HTTPCaller.MakeRequest("someQuery", httpCallerResponseRef));
There is also the ask pattern, which is more useful for one-shot interactions between actors where there's a timeout.
I am on a project using Java and Spring Boot that processes several different message types from the same queue. Each message gets processed conditionally based on the message type, using an implementation of MessageProcessingService abstract class for each message type.
As of now, we have 5 different message types coming into the same consumer. We are using the same queue because we leverage group policies in JMS, and each message type has the same business key as the group policy.
So what we end up with is that every time a requirement calls for receiving a new message type, we add a new implementation of MessageProcessingService and another dependency to the consumer object. I want to find a better strategy for selectively choosing the message processing.
Here is an example similar to what we are doing. I do not guarantee the syntax is compilable or perfect; it is just demonstrating the problem. Notice all the messages revolve around a person.
Consumer:
@Component
public class PersonMessageConsumer {
    private MessageProcessingService<HeightUpdate> heightUpdateMessageProcessingService;
    private MessageProcessingService<WeightUpdate> weightUpdateMessageProcessingService;
    private MessageProcessingService<NameUpdate> nameUpdateMessageProcessingService;
    private MessageProcessingService<ShoeSizeUpdate> shoeSizeUpdateMessageProcessingService;

    public PersonMessageConsumer(
            MessageProcessingService<HeightUpdate> heightUpdateMessageProcessingService,
            MessageProcessingService<WeightUpdate> weightUpdateMessageProcessingService,
            MessageProcessingService<NameUpdate> nameUpdateMessageProcessingService,
            MessageProcessingService<ShoeSizeUpdate> shoeSizeUpdateMessageProcessingService) {
        this.heightUpdateMessageProcessingService = heightUpdateMessageProcessingService;
        this.weightUpdateMessageProcessingService = weightUpdateMessageProcessingService;
        this.nameUpdateMessageProcessingService = nameUpdateMessageProcessingService;
        this.shoeSizeUpdateMessageProcessingService = shoeSizeUpdateMessageProcessingService;
    }

    @JmsListener(destination = "${queueName}")
    public void receiveMessage(TextMessage message) throws JMSException {
        String messageType = message.getStringProperty("MessageType");
        switch (messageType) {
            case "HeightUpdate":
                heightUpdateMessageProcessingService.processMessage(message.getText());
                return;
            case "WeightUpdate":
                weightUpdateMessageProcessingService.processMessage(message.getText());
                return;
            // And other message types
            default:
                throw new UnknownMessageTypeException(messageType);
        }
    }
}
Message POJO example
public class HeightUpdate implements PersonMessage {
    @Getter
    @Setter
    private int height;
}
PersonMessage interface
public interface PersonMessage {
    int getPersonId();
}
MessageProcessingService
public abstract class MessageProcessingService<T extends PersonMessage> {

    public void processMessage(String messageText) {
        // Common message processing; we do some more involved work here,
        // but this is just a simple example
        T message = new ObjectMapper().readValue(messageText, getClassType());
        Person person = personRepository.load(message.getPersonId());
        Person originalPerson = person.deepCopy();
        processMessageLogic(person, message);
        if (originalPerson.isDifferentFrom(person)) {
            personRepository.update(person);
        }
    }

    protected abstract void processMessageLogic(Person person, T message);

    protected abstract Class<T> getClassType();
}
Abstract class implementation example
@Service("heightUpdateMessageProcessingService")
public class HeightUpdateMessageProcessingService extends MessageProcessingService<HeightUpdate> {

    @Override
    protected void processMessageLogic(Person person, HeightUpdate update) {
        person.setHeight(update.getHeight());
    }

    @Override
    protected Class<HeightUpdate> getClassType() {
        return HeightUpdate.class;
    }
}
So my question is whether there is a better design pattern or way of coding this in Java and Spring that is a little easier to maintain and keeps SOLID principles in mind.
1. Add an abstract method in the MessageProcessingService to return the messageType that each concrete implementation can handle.
2. Rather than wiring each individual service into PersonMessageConsumer, wire in a List<MessageProcessingService> so that you get all of them at once.
3. Transform that List into a Map<String, MessageProcessingService>, using the messageType as the key.
4. Replace the switch statement by looking up the appropriate service in the Map and then invoking its processMessage method.

In the future you can add new instances of MessageProcessingService without having to edit PersonMessageConsumer, because Spring will automatically add those new instances to the List<MessageProcessingService> that you wire in.
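The steps above can be sketched in plain Java (names are hypothetical, and the List is built by hand here where Spring would normally inject it):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Sketch of the Map-based dispatch: each service declares the message type it
// handles, and the consumer looks the service up instead of switching on type.
public class DispatchSketch {
    public interface MessageProcessingService {
        String getMessageType();                 // the new abstract method
        String processMessage(String messageText);
    }

    public static class HeightUpdateService implements MessageProcessingService {
        public String getMessageType() { return "HeightUpdate"; }
        public String processMessage(String text) { return "height:" + text; }
    }

    public static class WeightUpdateService implements MessageProcessingService {
        public String getMessageType() { return "WeightUpdate"; }
        public String processMessage(String text) { return "weight:" + text; }
    }

    private final Map<String, MessageProcessingService> servicesByType;

    // Spring would supply this List; here we pass it in by hand.
    public DispatchSketch(List<MessageProcessingService> services) {
        servicesByType = services.stream()
                .collect(Collectors.toMap(
                        MessageProcessingService::getMessageType,
                        Function.identity()));
    }

    public String receiveMessage(String messageType, String messageText) {
        MessageProcessingService service = servicesByType.get(messageType);
        if (service == null) {
            throw new IllegalArgumentException("Unknown message type: " + messageType);
        }
        return service.processMessage(messageText);
    }
}
```

Adding a sixth message type is then just a new MessageProcessingService implementation; the consumer never changes.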
I develop an Android application for a BLE device and implemented an interface to listen for write, read and subscribe operations. I add all my listener instances to a List and trigger the interface methods like this:
readWriteEvent = new BleDevice.ReadWriteEvent(true, status, characteristic.getValue());
for (ReadWriteListener listener : readWriteListener) {
    listener.onEvent(readWriteEvent);
}
But the problem is that every listener in the list gets triggered for each ReadWriteEvent. Is there some kind of identification with which I can trigger a specific listener? So that I can do something like this:
for (ReadWriteListener listener : readWriteListener) {
    // compare UUIDs with equals(), not ==
    if (listener.getUuid().equals(characteristic.getUuid())) {
        listener.onEvent(readWriteEvent);
    }
}
Or is there a better solution for my problem? This is how my interface looks now:
public interface ReadWriteListener {
    void onEvent(BleDevice.ReadWriteEvent event);
}
Use an abstract class instead:
public abstract class ReadWriteListener {
    private int uid;

    public ReadWriteListener(int uid) {
        this.uid = uid;
    }

    public int getUid() {
        return uid; // or just make uid final and public
    }

    public abstract void onEvent(BleDevice.ReadWriteEvent event);
}
This way, when you construct it, you can pass a UID and retrieve it, while the onEvent method remains abstract and necessary to implement. Of course, this means you can no longer implement the listener in a class that's already extending another class.
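A minimal, self-contained sketch of how dispatch over the uid then looks (BleDevice.ReadWriteEvent is stood in for by a plain String payload here, and the registry class is made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of uid-based dispatch over the abstract listener: only listeners
// whose uid matches the event's target get notified.
public class ListenerSketch {
    public abstract static class ReadWriteListener {
        private final int uid;

        public ReadWriteListener(int uid) { this.uid = uid; }

        public int getUid() { return uid; }

        public abstract void onEvent(String event);
    }

    private final List<ReadWriteListener> listeners = new ArrayList<>();

    public void register(ReadWriteListener listener) {
        listeners.add(listener);
    }

    public void fire(int targetUid, String event) {
        for (ReadWriteListener listener : listeners) {
            if (listener.getUid() == targetUid) {
                listener.onEvent(event);
            }
        }
    }
}
```

Each listener is constructed with its uid, so the loop can skip every listener that was not registered for the characteristic that fired.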
I have a Controller class with the two methods below for finding doctors (context changed). I am getting a
Mass Assignment: Insecure Binder Configuration (API Abuse, Structural) error on both methods.
@Controller
@RequestMapping(value = "/findDocSearch")
public class Controller {

    @Autowired
    private IFindDocService findDocService;

    @RequestMapping(value = "/byName", method = RequestMethod.GET)
    @ResponseBody
    public List<FindDocDTO> findDocByName(FindDocBean bean) {
        return findDocService.retrieveDocByName(bean.getName());
    }

    @RequestMapping(value = "/byLoc", method = RequestMethod.GET)
    @ResponseBody
    public List<FindDocDTO> findDocByLocation(FindDocBean bean) {
        return findDocService.retrieveDocByZipCode(bean.getZipcode(), bean.getDistance());
    }
}
and my Bean is :
public class FindDocBean implements Serializable {
    private static final long serialVersionUID = -1212xxxL;

    private String name;
    private String zipcode;
    private int distance;

    @Override
    public String toString() {
        return String.format("FindDocBean[name: %s, zipcode:%s, distance:%s]",
                name, zipcode, distance);
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getZipcode() {
        return zipcode;
    }

    public void setZipcode(String zipcode) {
        this.zipcode = zipcode;
    }

    public int getDistance() {
        return distance;
    }

    public void setDistance(int distance) {
        this.distance = distance;
    }
}
As per all the suggestions found so far, they suggest restricting the bean to the required parameters only, with something like this:
final String[] DISALLOWED_FIELDS = new String[]{"bean.name", "bean.zipcode", };

@InitBinder
public void initBinder(WebDataBinder binder) {
    binder.setDisallowedFields(DISALLOWED_FIELDS);
}
But my problem is that all 3 parameters of the bean will be used in one or the other of the methods supplied on the Controller.
Can someone please suggest some solution for this. Thanks in advance.
@InitBinder can be used per method. You can try this:
@InitBinder("findDocByName")
public void initBinderByName(WebDataBinder binder) {
    binder.setDisallowedFields(new String[]{"distance", "zipcode"});
}

@InitBinder("findDocByLocation")
public void initBinderByZipCode(WebDataBinder binder) {
    binder.setDisallowedFields(new String[]{"distance", "name"});
}
I was facing the same issue; then I added the code below in the same rest controller class:
@InitBinder
public void populateCustomerRequest(WebDataBinder binder) {
    binder.setDisallowedFields(new String[]{});
}
Now it's working fine for me and the mass assignment issue was fixed.
A simple question: how can your mapper instantiate the bean? Here is an answer / example. You can pass that data by query parameter or in a header; however, that would be strange. Better is to have those methods take @QueryParam arguments providing the location or name. That way it will be easier to protect your application.
As a side note, a query string has limited length, so if your search form is big and strange, @POST can be a good idea, and that way you can pass all the data. For this simple example it would be overkill.
This looks like an unfortunate false positive. The rule behind this error is meant to prevent properties that exist on an object, but are not intended as (unvalidated) user input, from being accidentally populated from a web request. An example would be a POST request creating a resource: if the request handler takes the full resource object and fills only missing properties, a malicious user could populate fields that she shouldn't be able to edit.
This case, however, does not match that scheme. You just use the same mechanism to capture your different arguments, and additionally populated properties will not even be read. In
GET http://yourhost/findDocSearch/byName?name=Abuse&zipCode=11111
the additional zipCode would just be ignored. Therefore the assumed risk is not present here.
To fix the warning, you could mark it as a false positive (if this is possible inside your setup). If that is not possible you could also just map the query parameters to method arguments directly. As you only have limited parameters that should not harm too much. If this is also no option you probably need to figure out the exact algorithm your code analysis uses to figure out what checks it will recognize. Unfortunately most scanners are only able to discover a limited set of ways to do input validation.
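To illustrate the "map the query parameters to method arguments directly" option: each handler then reads exactly the parameters it needs and nothing else, so there is no bean for the binder to over-populate. A plain-Java sketch of that idea (in Spring MVC the same effect comes from declaring @RequestParam method arguments instead of binding a bean):

```java
import java.util.Map;

// Illustration of binding only named parameters per endpoint, instead of
// letting a binder populate a whole shared bean from the request.
public class ParamBindingSketch {
    // byName only ever reads "name"; any extra parameters are simply ignored.
    public static String findDocByName(Map<String, String> query) {
        return "byName(" + query.get("name") + ")";
    }

    // byLoc reads exactly "zipcode" and "distance".
    public static String findDocByLocation(Map<String, String> query) {
        return "byLoc(" + query.get("zipcode") + ", " + query.get("distance") + ")";
    }
}
```

With this shape, a request like `byName?name=Abuse&zipCode=11111` cannot smuggle a zipCode into the name search, because the handler never looks at it.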
I'm new to RequestFactory, but with the generous help of Thomas Broyer, and after reviewing the documents below, it's getting much better :)
Getting Started with RequestFactory
Request Factory Moving Parts
RequestFactory changes in GWT 2.4
But could you please explain why Locator<>.find() is being called so (in my opinion) unnecessarily often?
In my sample project I have two entities, Organization and Person, that maintain a parent-child relationship. When I fetch an Organization, Objectify automatically fetches the child Person.
Also, I created two methods in my service layer, findOrganizationById and saveOrganization, that load and persist objects.
Now consider two scenarios:
When I call findOrganizationById in the client, the following calls occur on the server side:
OrderDao.findOrganizationById(1)
PojoLocator.getId(Key<?>(Organization(1)))
PojoLocator.getId(Key<?>(Organization(1)/Person(2)))
PojoLocator.getId(Key<?>(Organization(1)))
PojoLocator.find(Key<?>(Organization(1)))
PojoLocator.getId(Key<?>(Organization(1)/Person(2)))
PojoLocator.find(Key<?>(Organization(1)/Person(2)))
By calling OrderDao.findOrganizationById I've already received the full graph of objects. Why call .find twice in addition to that? It's extra load on the Datastore that costs me money. Of course I cache it, but it would be neat to fix this. How can I avoid these additional calls?
A similar thing happens when I save object(s) by calling saveOrganization in the client. The following calls occur on the server side:
PojoLocator.find(Key<?>(Organization(1)))
PojoLocator.find(Key<?>(Organization(1)/Person(2)))
OrderDao.saveOrganization(1)
PojoLocator.getId(Key<?>(Organization(1)))
PojoLocator.find(Key<?>(Organization(1)))
PojoLocator.getId(Key<?>(Organization(1)/Person(2)))
PojoLocator.find(Key<?>(Organization(1)/Person(2)))
I can understand the need for fetching the two objects from the Datastore before updating them: RequestFactory sends deltas to the server, so it needs the entire object before persisting it. Still, since I load the full graph at once, it would be nice not to have the second call, which is PojoLocator.find(Key<?>(Organization(1)/Person(2))). And I truly can't understand the need for the .find() calls after persisting.
Thoughts ?
My proxies
@ProxyFor(value = Organization.class, locator = PojoLocator.class)
public interface OrganizationProxy extends EntityProxy
{
    public String getName();
    public void setName(String name);

    public String getAddress();
    public void setAddress(String address);

    public PersonProxy getContactPerson();
    public void setContactPerson(PersonProxy contactPerson);

    public EntityProxyId<OrganizationProxy> stableId();
}

@ProxyFor(value = Person.class, locator = PojoLocator.class)
public interface PersonProxy extends EntityProxy
{
    public String getName();
    public void setName(String name);

    public String getPhoneNumber();
    public void setPhoneNumber(String phoneNumber);

    public String getEmail();
    public void setEmail(String email);

    public OrganizationProxy getOrganization();
    public void setOrganization(OrganizationProxy organization);
}
My service
public interface AdminRequestFactory extends RequestFactory
{
    @Service(value = OrderDao.class, locator = InjectingServiceLocator.class)
    public interface OrderRequestContext extends RequestContext
    {
        Request<Void> saveOrganization(OrganizationProxy organization);
        Request<OrganizationProxy> findOrganizationById(long id);
    }

    OrderRequestContext contextOrder();
}
and finally my Locator<>
public class PojoLocator extends Locator<DatastoreObject, String>
{
    @Inject Ofy ofy;

    @Override
    public DatastoreObject create(Class<? extends DatastoreObject> clazz)
    {
        try
        {
            return clazz.newInstance();
        } catch (InstantiationException e)
        {
            throw new RuntimeException(e);
        } catch (IllegalAccessException e)
        {
            throw new RuntimeException(e);
        }
    }

    @Override
    public DatastoreObject find(Class<? extends DatastoreObject> clazz, String id)
    {
        Key<DatastoreObject> key = Key.create(id);
        return ofy.load(key);
    }

    @Override
    public Class<DatastoreObject> getDomainType()
    {
        return null; // Never called
    }

    @Override
    public String getId(DatastoreObject domainObject)
    {
        Key<DatastoreObject> key = ofy.fact().getKey(domainObject);
        return key.getString();
    }

    @Override
    public Class<String> getIdType()
    {
        return String.class;
    }

    @Override
    public Object getVersion(DatastoreObject domainObject)
    {
        return domainObject.getVersion();
    }
}
The pairs of getId and find at the end are the default implementation of Locator#isLive: it assumes an object is live (i.e. still exists in the data store) if finding it by its ID returns a non-null value.
RF checks each EntityProxy it has ever seen during the request/response for liveness when constructing the response, to tell the client when an entity has been deleted (on the client side, it'd then fire an EntityProxyChange event with a DELETE write operation).
You can of course override isLive in your Locator with a more optimized implementation if you can provide one.