While generating a proxy class using SVCUTIL.exe, or by adding a service reference from Visual Studio, the generated data contract classes implement IExtensibleDataObject by default.
WCF data contract
[DataContract]
public class Employee
{
[DataMember(Order = 1)]
public string Id { get; set; }
}
WCF Service
public class Service1 : IService1
{
public Employee GetEmployeeById(Employee employee)
{
return employee;
}
}
The proxy class generated by adding a service reference from Visual Studio inherits the IExtensibleDataObject interface by default on the client-side Employee composite class, even though I haven't implemented it on the service end.
Client-side Employee class
[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.CodeDom.Compiler.GeneratedCodeAttribute("System.Runtime.Serialization", "4.0.0.0")]
[System.Runtime.Serialization.DataContractAttribute(Name="Employee", Namespace="http://schemas.datacontract.org/2004/07/WcfService1")]
[System.SerializableAttribute()]
public partial class Employee : object, System.Runtime.Serialization.IExtensibleDataObject, System.ComponentModel.INotifyPropertyChanged {
[System.NonSerializedAttribute()]
private System.Runtime.Serialization.ExtensionDataObject extensionDataField;
[System.Runtime.Serialization.OptionalFieldAttribute()]
private string IdField;
[global::System.ComponentModel.BrowsableAttribute(false)]
public System.Runtime.Serialization.ExtensionDataObject ExtensionData {
get {
return this.extensionDataField;
}
set {
this.extensionDataField = value;
}
}
[System.Runtime.Serialization.DataMemberAttribute()]
public string Id {
get {
return this.IdField;
}
set {
if ((object.ReferenceEquals(this.IdField, value) != true)) {
this.IdField = value;
this.RaisePropertyChanged("Id");
}
}
}
// (RaisePropertyChanged implementation and PropertyChanged event omitted for brevity.)
}
Now the question is: when generating a proxy from some other client (for example, Java), will it implement the IExtensibleDataObject interface by default?
No, because IExtensibleDataObject is an interface from the .NET Framework.
The service will run even without it. It provides a way to store extra data that is not present in the contract:
The IExtensibleDataObject interface provides a single property that
sets or returns a structure used to store data that is external to a
data contract. The extra data is stored in an instance of the
ExtensionDataObject class and accessed through the ExtensionData
property. In a roundtrip operation where data is received, processed,
and sent back, the extra data is sent back to the original sender
intact. This is useful to store data received from future versions of
the contract. If you do not implement the interface, any extra data is
ignored and discarded during a roundtrip operation.
You can read more here: https://msdn.microsoft.com/en-us/library/system.runtime.serialization.iextensibledataobject%28v=vs.100%29.aspx
Hope it helps
Related
I have an endpoint, let's call it "GetPersonInfo". GetPersonInfo is given a few parameters but one of them is "PersonType". Based on this PersonType, multiple downstream services are called. Some of these services could be shared between PersonType's but that is not a guarantee.
For example GetPersonInfo(...) #1:
PersonType = "Adult"
When GetPersonInfo is called for Adult, the API endpoint would need to make two downstream calls and populate the payload model with results:
"GetPersonName()" and "GetFavoriteAlcoholicBeverage()"
For example GetPersonInfo(...) #2:
PersonType = "Child"
When GetPersonInfo is called for Child, the API endpoint would need to make two downstream calls and populate the payload model with results:
"GetPersonName()" and "GetFavoriteToy()"
For example GetPersonInfo(...) #3:
PersonType = "NamelessPerson"
When GetPersonInfo is called for NamelessPerson, the API endpoint would need to make one downstream call:
"GetPersonIdNumber()"
Each of these calls would populate the same model, PersonInfo, but all of the fields are nullable in case a downstream call wasn't required for that person type.
Is there a pattern where I can achieve this without duplicating the common downstream calls in every single logic implementation for getting the person info by PersonType?
Below is the initial call
public PersonInfo getPersonInfo(int id, PersonType personType) {
// logic here based on personType to call necessary downstreams and populate person info model
}
Well, the poor man's approach would be an if-cascade. :-)
But thinking in design patterns, this clearly looks like the Strategy pattern. Each of your downstreams would define a strategy, and each PersonType would trigger one or more strategies. We will talk about this many-to-many relationship later on.
Let's start with a strategy interface...
public interface PersonStrategy {
void enrichPerson(String id, PersonInfo result);
}
... and let's have some downstreams implemented as strategies:
public class DefaultStrategy implements PersonStrategy {
@Override
public void enrichPerson(String id, PersonInfo result) {
// fetch basic person data ...
}
}
public class FavoriteToyStrategy implements PersonStrategy {
@Override
public void enrichPerson(String id, PersonInfo result) {
// fetch toys ...
}
}
public class FavoriteAlcoholicBeverageStrategy implements PersonStrategy {
@Override
public void enrichPerson(String id, PersonInfo result) {
// fetch beverages ...
}
}
Once this is provided, your initial method would look like this:
private final Map<PersonType, List<PersonStrategy>> strategies = new LinkedHashMap<>();
public PersonInfo getPersonInfo(String id, PersonType type) {
final PersonInfo result = new PersonInfo();
strategies.get(type).forEach(strategy -> strategy.enrichPerson(id, result));
return result;
}
As you can see, I implemented the many-to-many dependency as a multi-value map with the type as key. It's not populated yet; I think you can imagine how that works.
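To make the wiring concrete, here is a minimal, self-contained sketch of how that map could be populated and used. The enum values, the lambda strategies, and the use of a plain `List<String>` in place of `PersonInfo` are illustrative stand-ins, not part of the original answer:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class StrategyRegistry {

    // Illustrative stand-in for the question's PersonType.
    enum PersonType { ADULT, CHILD, NAMELESS_PERSON }

    interface PersonStrategy {
        void enrichPerson(String id, List<String> result);
    }

    // Runs every strategy registered for the given type against one result object.
    static List<String> getPersonInfo(Map<PersonType, List<PersonStrategy>> strategies,
                                      String id, PersonType type) {
        List<String> result = new ArrayList<>();
        strategies.getOrDefault(type, List.of())
                  .forEach(s -> s.enrichPerson(id, result));
        return result;
    }

    static Map<PersonType, List<PersonStrategy>> buildRegistry() {
        // Strategies are created once and shared between types (many-to-many).
        PersonStrategy name = (id, r) -> r.add("name");
        PersonStrategy beverage = (id, r) -> r.add("beverage");
        PersonStrategy toy = (id, r) -> r.add("toy");
        PersonStrategy idNumber = (id, r) -> r.add("idNumber");

        Map<PersonType, List<PersonStrategy>> strategies = new LinkedHashMap<>();
        strategies.put(PersonType.ADULT, List.of(name, beverage));
        strategies.put(PersonType.CHILD, List.of(name, toy));
        strategies.put(PersonType.NAMELESS_PERSON, List.of(idNumber));
        return strategies;
    }

    public static void main(String[] args) {
        System.out.println(getPersonInfo(buildRegistry(), "42", PersonType.CHILD));
        // prints [name, toy]
    }
}
```

The registry is built once; adding a new PersonType means adding one map entry, not another if-branch.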
Other possibilities:
have a factory that returns the strategies belonging to a specific type.
have each strategy decide if it reacts to a given type, or provide a list of types that it belongs to.
As you see: not a single if statement is needed here.
I am on a project using Java and Spring Boot that processes several different message types from the same queue. Each message gets processed conditionally based on the message type, using an implementation of MessageProcessingService abstract class for each message type.
As of now, we have 5 different message types coming into the same consumer. We are using the same queue because we leverage group policies in JMS, and each message type has the same business key as the group policy.
So what we end up with is that every time a requirement calls for receiving a new message type, we add a new implementation of MessageProcessingService and another dependency to the consumer object. I want to find a better strategy for selectively choosing the message processing.
Here is an example similar to what we are doing. I do not guarantee the syntax is compilable or perfect; it just demonstrates the problem. Notice that all the messages revolve around a person.
Consumer:
@Component
public class PersonMessageConsumer {
private MessageProcessingService<HeightUpdate> heightUpdateMessageProcessingService;
private MessageProcessingService<WeightUpdate> weightUpdateMessageProcessingService;
private MessageProcessingService<NameUpdate> nameUpdateMessageProcessingService;
private MessageProcessingService<ShoeSizeUpdate> shoeSizeUpdateMessageProcessingService;
public PersonMessageConsumer(
MessageProcessingService<HeightUpdate> heightUpdateMessageProcessingService,
MessageProcessingService<WeightUpdate> weightUpdateMessageProcessingService,
MessageProcessingService<NameUpdate> nameUpdateMessageProcessingService,
MessageProcessingService<ShoeSizeUpdate> shoeSizeUpdateMessageProcessingService) {
this.heightUpdateMessageProcessingService = heightUpdateMessageProcessingService;
this.weightUpdateMessageProcessingService = weightUpdateMessageProcessingService;
this.nameUpdateMessageProcessingService = nameUpdateMessageProcessingService;
this.shoeSizeUpdateMessageProcessingService = shoeSizeUpdateMessageProcessingService;
}
@JmsListener(destination = "${queueName}")
public void receiveMessage(TextMessage message) {
String messageType = message.getStringProperty("MessageType");
switch (messageType) {
case "HeightUpdate":
heightUpdateMessageProcessingService.processMessage(message.getText());
return;
case "WeightUpdate":
weightUpdateMessageProcessingService.processMessage(message.getText());
return;
// And other message types
default:
throw new UnknownMessageTypeException(messageType);
}
}
Message POJO example
public class HeightUpdate implements PersonMessage {
@Getter
@Setter
private int height;
}
PersonMessage interface
public interface PersonMessage {
int getPersonId();
}
MessageProcessingService
public abstract class MessageProcessingService<T extends PersonMessage> {
public void processMessage(String messageText) {
//Common message processing, we do some more involved work here but just as a simple example
T message = new ObjectMapper().readValue(messageText, getClassType());
Person person = personRepository.load(message.getPersonId());
Person originalPerson = person.deepCopy();
processMessageLogic(person, message);
if (originalPerson.isDifferentFrom(person)) {
personRepository.update(person);
}
}
protected abstract void processMessageLogic(Person person, T message);
protected abstract Class<T> getClassType();
}
Abstract class implementation example
@Service("heightUpdateMessageProcessingService")
public class HeightUpdateMessageProcessingService extends MessageProcessingService<HeightUpdate> {
@Override
protected void processMessageLogic(Person person, HeightUpdate update) {
person.setHeight(update.getHeight());
}
@Override
protected Class<HeightUpdate> getClassType() {
return HeightUpdate.class;
}
}
So my question is whether there is a better design pattern, or way of coding this in Java and Spring, that is a little easier to maintain and keeps SOLID principles in mind.
Add an abstract method in the MessageProcessingService to return the messageType that each concrete implementation can handle.
Rather than wiring each individual service into PersonMessageConsumer, wire in a List<MessageProcessingService> so that you get all of them at once.
Transform that List into a Map<String, MessageProcessingService>, using the messageType as the key.
Replace the switch statement by looking up the appropriate service in the Map and then invoking its processMessage method.
In the future you can add new instances of MessageProcessingService without having to edit PersonMessageConsumer because Spring will automatically add those new instances to the List<MessageProcessingService> that you wire in.
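The steps above can be sketched in plain Java, without the Spring wiring. The `messageType()` method is the proposed new abstract method, and `processMessage` returns a String here only so the demo has an observable result; both are assumptions made for illustration:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ConsumerSketch {

    // Stand-in for the abstract MessageProcessingService.
    interface MessageProcessingService {
        String messageType();
        String processMessage(String text);
    }

    // Step 3: turn the injected List into a Map keyed by message type.
    static Map<String, MessageProcessingService> byType(List<MessageProcessingService> services) {
        return services.stream()
                .collect(Collectors.toMap(MessageProcessingService::messageType, Function.identity()));
    }

    // Step 4: the switch statement becomes a map lookup.
    static String receiveMessage(Map<String, MessageProcessingService> registry,
                                 String messageType, String body) {
        MessageProcessingService service = registry.get(messageType);
        if (service == null) {
            throw new IllegalArgumentException("Unknown message type: " + messageType);
        }
        return service.processMessage(body);
    }

    static String demo() {
        MessageProcessingService height = new MessageProcessingService() {
            public String messageType() { return "HeightUpdate"; }
            public String processMessage(String text) { return "height:" + text; }
        };
        MessageProcessingService weight = new MessageProcessingService() {
            public String messageType() { return "WeightUpdate"; }
            public String processMessage(String text) { return "weight:" + text; }
        };
        return receiveMessage(byType(List.of(height, weight)), "HeightUpdate", "180");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // height:180
    }
}
```

In the real application the `List` would come from Spring's constructor injection, and the map would be built once in the consumer's constructor.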
I have CRUD methods that modify the data in the cache and database.
I also have a method that returns all entities; when I use it after changes to the cache and database, I get stale data.
As I understand it, the problem is in the method that returns all entities: it uses the default key, which is different from the keys used by the other methods.
What do I need to do so that this method returns up-to-date data?
@Service
@CacheConfig(cacheNames = "configuration")
class ServiceConfiguration{
@Cacheable // this method returns stale data
public List<MySomeConfiguration> getAllProxyConfigurations() {
return repository.getAllConfigurations();
}
@Cacheable(key = "#root.target.getConfigurationById(#id).serverId")
public MySomeConfiguration getConfigurationById(Long id) {
...
return configuration;
}
@CachePut(key = "#configuration.serverId", condition = "#result.id != null")
public MySomeConfiguration addOrUpdateConfiguration(Configuration configuration) {
return configuration;
}
@Cacheable(key = "#serverId")
public MySomeConfiguration getConfigurationByServerId(String serverId) {...
return configuration;
}
@CacheEvict(key = "#root.target.getConfigurationById(#id).serverId")
public void deleteConfigurationById(Long id) {
...
}
}//end class
p.s. sorry for my english
By default, the Redis cache manager uses StringRedisSerializer as the key serializer.
Your class's toString() is used to serialize the object's key, so don't give different keys to the different methods (put, get, evict, etc.); just rely on your toString() to produce the key, or override it using a Spring Cache custom KeyGenerator.
Refer
https://www.baeldung.com/spring-cache-custom-keygenerator
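Beyond key generation, another commonly used fix for the stale "return all" method is to give the cached list its own cache region and evict it on every write. A sketch, using Spring's @Caching to combine the existing @CachePut with an eviction; the cache name configurationList is an assumption, and this is not the only possible layout:

```java
@Service
@CacheConfig(cacheNames = "configuration")
class ServiceConfiguration {

    // The list lives in its own cache region so it can be invalidated on its own.
    @Cacheable(cacheNames = "configurationList")
    public List<MySomeConfiguration> getAllProxyConfigurations() {
        return repository.getAllConfigurations();
    }

    // Every write updates the per-entry cache and drops the cached list,
    // so the next getAllProxyConfigurations() call re-reads the database.
    @Caching(
        put = @CachePut(key = "#configuration.serverId", condition = "#result.id != null"),
        evict = @CacheEvict(cacheNames = "configurationList", allEntries = true))
    public MySomeConfiguration addOrUpdateConfiguration(Configuration configuration) {
        return configuration;
    }
}
```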
Our application is getting complex. It has mainly 3 flows, and it has to process based on which of the 3 types applies. Many of these functionalities overlap each other.
So currently the code is full of if-else statements; it is all messed up and not organised. How can we apply a pattern so that the 3 flows are clearly separated from each other while still making use of the power of reusability?
Please provide some thoughts. This is an MVC application, where we need to produce and consume web services using JAXB technology.
Maybe you can view the application as a single input object on which different strategies need to be applied based on a runtime value.
You did not specify what your if-else statements are doing. Say they are filtering depending on some value.
If I understand your question correctly, you want to look at Factory Pattern.
This is a clean approach, easy to maintain, and it produces readable code. Adding or removing a Filter is also easy: just remove the class and remove it from the FilterFactory hashmap.
Create an Interface : Filter
public interface Filter {
void filter();
}
Create a factory which returns the correct Filter according to your value. Instead of your if-else, you can now just use the following:
Filter filter = FilterFactory.getFilter(value);
filter.filter();
One common way to write FilterFactory is using a HashMap inside it.
public class FilterFactory{
static HashMap<Integer, Filter> filterMap;
static{
filterMap = new HashMap<>();
filterMap.put(0,new Filter0());
...
}
// this function will change depending on your needs
public static Filter getFilter(int value){
return filterMap.get(value);
}
}
Create your three (in your case) Filters like this (with meaningful names, though):
public class Filter0 implements Filter {
public void filter(){
//do something
}
}
NOTE: As you want to reuse some methods, create a FilterUtility class and make all your filters extend it so that you can use all of those functions without rewriting them.
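To illustrate that note, here is a small self-contained sketch. The Filter interface is adapted to take and return a String so the shared behaviour is visible, and all names (FilterUtility, normalize, the two concrete filters) are illustrative assumptions:

```java
public class FilterUtilityDemo {

    // Adapted Filter interface: returns a value so the demo has observable output.
    interface Filter {
        String filter(String input);
    }

    // Shared behaviour lives in the common base class; concrete filters reuse it.
    static abstract class FilterUtility implements Filter {
        protected String normalize(String s) {
            return s.trim().toLowerCase();
        }
    }

    static class UpperFilter extends FilterUtility {
        public String filter(String input) {
            return normalize(input).toUpperCase();
        }
    }

    static class ReverseFilter extends FilterUtility {
        public String filter(String input) {
            return new StringBuilder(normalize(input)).reverse().toString();
        }
    }

    static String demo() {
        // Both filters reuse normalize() from FilterUtility without rewriting it.
        return new UpperFilter().filter("  Hello ") + "|" + new ReverseFilter().filter("  Hello ");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // HELLO|olleh
    }
}
```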
Your question is very broad and almost impossible to answer without some description or overview of the structure of your application. However, I've been in a similar situation and this is the approach I took:
Replace conditions with Polymorphism where possible
it has mainly 3 flows and has to process based on one of the 3 types. Many of these functionalities overlap each other.
You say your project has 3 main flows and that much of the code overlaps each other. This sounds to me like a strategy pattern:
You declare an interface that defines the tasks performed by a Flow.
public interface Flow{
public Data getData();
public Error validateData();
public void saveData();
public Error gotoNextStep();
}
You create an abstract class that provides implementation that is common to all 3 flows. (methods in this abstract class don't have to be final, but you definitely want to consider it carefully.)
public abstract class AbstractFlow{
private FlowManager flowManager
public AbstractFlow(FlowManager fm){
flowManager = fm;
}
public final void saveData(){
Data data = getData();
saveDataAsXMl(data);
}
public final Error gotoNextStep(){
Error error = validateData();
if(error != null){
return error;
}
saveData();
flowManager.gotoNextStep();
return null;
}
}
Finally, you create 3 concrete classes that extend from the abstract class and define concrete implementation for the given flow.
public class BankDetailsFlow extends AbstractFlow{
public BankDetailsFlow(FlowManager fm){
super(fm);
}
public BankDetailsData getData(){
BankDetailsData data = new BankDetailsData();
data.setSwiftCode(/*get swift code somehow*/);
return data;
}
public Error validateData(){
BankDetailsData data = getData();
return validate(data);
}
public void onFormSubmitted(){
Error error = gotoNextStep();
if(error != null){
handleError(error);
}
}
}
Let's take an example. Suppose you have a model, say "Data" [which has some attributes, getters, setters, and optional methods]. In the context of a mobile application, in particular an Android application, there can be two modes: off-line and on-line. If the device is connected to the network, data is sent to the network; otherwise it is stored in the device's local database.
In a procedural way, someone could define two models, OnlineData and OfflineData, and write code like this [the code is not exact, it's just pseudo code]:
if(Connection.isConnected()){
OnlineData ond=new OnlineData();
ond.save();//save is called which stores data on server using HTTP.
}
else{
OfflineData ofd=new OfflineData();
ofd.save();//save is called which stores data in local database
}
A good approach to implement this is using OOP principles:
Program to an interface, not an implementation.
Let's see how to do this.
I am just writing code snippets that represent what I mean more effectively. The snippets are as follows:
public interface Model {
long save();//save method
//other methods .....
}
public class OnlineData implements Model {
//attributes
public long save(){
//on-line implementation of save method for Data model
}
//implementation of other methods.
}
public class OfflineData implements Model {
//attributes
public long save(){
//off-line implementation of save method for Data model
}
//implementation of other methods.
}
public class ObjectFactory{
public static Model getDataObject(){
if(Connection.isConnected())
return new OnlineData();
else
return new OfflineData();
}
}
and Here is code that your client class should use:
public class ClientClass{
public void someMethod(){
Model model=ObjectFactory.getDataObject();
model.save();// here polymorphism plays role...
}
}
This also follows:
Single Responsibility Principle [SRP]
because on-line and off-line are two different responsibilities, which would otherwise end up integrated into a single save() through an if-else statement.
After a long time, I found that open-source rule engine frameworks like Drools are a great alternative that fits my requirement.
I have the following Java servlet that performs what I call the "Addition Service":
public class AdditionService extends HttpServlet {
@Override
public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
// The request will have 2 Integers inside its body that need to be
// added together and returned in the response.
Integer addend = extractAddendFromRequest(request);
Integer augend = extractAugendFromRequest(request);
Integer sum = addend + augend;
PrintWriter writer = response.getWriter();
writer.print(sum);
}
}
I am trying to get GWT's RequestFactory to do the same thing (adding two numbers on the app server and returning the sum as a response) using a ValueProxy and AdditionService, and am running into a few issues.
Here's the AdditionRequest (client tier) which is a value object holding two Integers to be added:
// Please note the "tier" (client, shared, server) I have placed all of my Java classes in
// as you read through the code.
public class com.myapp.client.AdditionRequest {
private Integer addend;
private Integer augend;
public AdditionRequest() {
super();
this.addend = 0;
this.augend = 0;
}
// Getters & setters for addend/augend.
}
Next my proxy (client tier):
@ProxyFor(value=AdditionRequest.class)
public interface com.myapp.client.AdditionRequestProxy extends ValueProxy {
public Integer getAddend();
public Integer getAugend();
public void setAddend(Integer a);
public void setAugend(Integer a);
}
Next my service API (in the shared tier):
@Service(value=DefaultAdditionService.class)
public interface com.myapp.shared.AdditionService extends RequestContext {
Request<Integer> sum(AdditionRequest request);
}
Next my request factory (shared tier):
public class com.myapp.shared.ServiceProvider implements RequestFactory {
public AdditionService getAdditionService() {
return new DefaultAdditionService();
}
// ... but since I'm implementing RequestFactory, there's about a dozen
// other methods GWT is forcing me to implement: find, getEventBus, fire, etc.
// Do I really need to implement all these?
}
Finally where the magic happens (server tier):
public class com.myapp.server.DefaultAdditionService implements AdditionService {
@Override
public Request<Integer> sum(AdditionRequest request) {
Integer sum = request.getAddend() + request.getAugend();
return sum;
}
// And because AdditionService extends RequestContext there's another bunch of
// methods GWT is forcing me to implement here: append, create, isChanged, etc.
// Do I really need to implement all these?
}
Here are my questions:
Is my "tier" strategy correct? Have I packaged all the types in the correct client/shared/server packages?
I don't think my setup is correct because AdditionService (in shared) references DefaultAdditionService, which is on the server, which it shouldn't be doing. Shared types should be able to live both on the client and the server, but not have dependencies on either...
Should ServiceProvider be a class that implements RequestFactory, or should it be an interface that extends it? If the latter, where do I define the ServiceProvider impl, and how do I link it back to all these other classes?
What about all these methods in ServiceProvider and DefaultAdditionService? Do I need to implement all 20+ of these core GWT methods? Or am I using the API incorrectly or not as simply as I could be using it?
Where does service locator factor in here? How?
If you want to use RF as a simple RPC mechanism [*], you can (and you are right: only ValueProxys), but you need something more: ServiceLocators (introduced in GWT 2.1.1).
With ServiceLocator you can simply put your service implementation (like your servlet) into a real service instance, instead into an entity object (as you will use only ValueProxys, with no static getXyz() methods) as required by the RF protocol. Note the existence also of Locators, used to externalize all those methods from your server-side entities: not needed if you use ValueProxy everywhere.
A ServiceLocator looks something like (taken from official docs):
public class DefaultAdditionServiceLocator implements ServiceLocator {
@Override
public Object getInstance(Class<?> clazz) {
try {
return clazz.newInstance();
} catch (InstantiationException e) {
throw new RuntimeException(e);
} catch (IllegalAccessException e) {
throw new RuntimeException(e);
}
}
}
You need to annotate your DefaultAdditionService also with a locator param, so RF knows on what to rely when it comes to dispatch your request to your service. Something like:
@Service(value = DefaultAdditionService.class, locator = DefaultAdditionServiceLocator.class)
public interface com.myapp.shared.AdditionService extends RequestContext {
// Note here, you need to use the proxy type of your AdditionRequest.
Request<Integer> sum(AdditionRequestProxy request);
}
Your service will then be the simplest possible thing on Earth (no need to extend/implement anything RF-related):
public class com.myapp.server.DefaultAdditionService {
// The server-side AdditionRequest type.
public Integer sum(AdditionRequest request) {
Integer sum = request.getAddend() + request.getAugend();
return sum;
}
}
If you misspell sum() or do not implement a method declared in your RequestContext, you will get an error.
To instantiate RequestContexts you need to extend the RequestFactory interface, with a public factory method for com.myapp.shared.AdditionService. Something like:
public interface AdditionServiceRequestFactory extends RequestFactory {
public com.myapp.shared.AdditionService createAdditionServiceRequestContext();
}
All your client calls will start from this. See the docs, if not already.
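For illustration, a client-side call could look roughly like this (not runnable outside a GWT module; the names follow the answer's interfaces):

```java
// Obtain the factory and wire it to an event bus.
AdditionServiceRequestFactory factory = GWT.create(AdditionServiceRequestFactory.class);
factory.initialize(new SimpleEventBus());

// Create a context, build the value proxy, and fire the request.
AdditionService ctx = factory.createAdditionServiceRequestContext();
AdditionRequestProxy req = ctx.create(AdditionRequestProxy.class);
req.setAddend(2);
req.setAugend(3);
ctx.sum(req).fire(new Receiver<Integer>() {
    @Override
    public void onSuccess(Integer sum) {
        // the server-computed sum arrives here asynchronously
    }
});
```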
Now, RF works by totally separating the objects you want to pass on the client side (using EntityProxy and ValueProxy) and on the server side (the real objects, either Entity values or simple DTO classes). You will use proxy types (i.e., interfaces whose implementations are automatically generated) everywhere in the client/shared tier, and you use the relative domain object (the one referenced with @ProxyFor) only on the server side. RF will take care of the rest. So your AdditionRequest will be on your server side, while AdditionRequestProxy will be on your client side (see the note in the RequestContext). Also note that if you simply use primitive/boxed types as your RequestContext params or return types, you will not even need to create ValueProxys at all, as they are transportable by default.
The last bit you need, is to wire the RequestFactoryServlet on your web.xml. See the docs here. Note that you can extend it if you want to, say, play around with custom ExceptionHandlers or ServiceLayerDecorators, but you don't need to.
Speaking about where to put everything:
Locators, ServiceLocators, service instances, domain objects, and RequestFactoryServlet extensions, will be on your server-side;
The RequestContext, RequestFactory extensions and all your proxy types will be on the shared-side;
client side will initialize the RequestFactory extension and use it to obtain the factory instance for your service requests.
All in all... to create a simple RPC mechanism with RF, just:
create your service along with ServiceLocator;
create a RequestContext for your requests (annotated with service and locator values);
create a RequestFactory extension to return your RequestContext;
if you want to use more than primitive types in your RequestContext (like simple DTOs), just create client proxy interfaces for them, annotated with #ProxyFor, and remember where to use each type;
wire everything.
Much like that. Ok, I wrote too much and probably forgot something :)
For reference, see:
Official RF documentation;
Thomas Broyer's articles [1], [2];
RF vs GWT-RPC from the RF author point of view.
[*]: In this approach you shift your logic from data-oriented to service-oriented app. You give up using Entitys, IDs, versions and, of course, all the complex diff logic between client and server, when it comes to CRUD operations.