I am a bit confused about whether I should validate the data returned from a remote call to another microservice, or rely on the contract between these microservices.
I know extra checks won't hurt anyone, but I would like to know the right approach.
In theory, you don't even know how the data you get back from a microservice is created, since you only know the interface (API) and what it returns.
By that token, you should take the data this API responds with as given.
Sure, additional validation may not do any harm at first.
But consider a case where some business logic changes, leading to a change in one of the services. It could be something as simple as adapting the definition of a KPI, producing a different response (data-wise, not structure-wise) from the microservice.
Your validation would then fail as a false positive, and you would need to adapt it for basically nothing.
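If you do add checks, a middle ground is to validate only the structure of the response, never the business values. A minimal Java sketch of that idea, assuming Jackson is available; KpiResponse and its fields are hypothetical stand-ins for the other service's contract:

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical type mirroring the other service's documented contract.
class KpiResponse {
    public String name;
    public double value;
}

class KpiClient {
    private static final ObjectMapper MAPPER = new ObjectMapper()
            // Tolerate extra fields; the other service may add data at any time.
            .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

    KpiResponse parse(String json) throws Exception {
        // Structural check only: if the payload no longer matches the contract,
        // deserialization fails loudly. The values themselves are taken as given,
        // so a changed KPI definition does not produce a false positive.
        return MAPPER.readValue(json, KpiResponse.class);
    }
}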
What is the technical drawback of deleting a row on a GET method call in REST? I know it is not the standard way of doing things, but will there be any issues?
The technical drawback is that indexers (think Google) come along and GET all of the links they can find, just to see what's there. General-purpose components that see a link to your thing might do a GET on it as a way of pre-caching the results in case the client wants them.
Fielding, writing in 2002:
HTTP does not attempt to require the results of a GET to be safe. What it does is require that the semantics of the operation be safe, and therefore it is a fault of the implementation, not the interface or the user of that interface, if anything happens as a result that causes loss of property (money, BTW, is considered property for the sake of this definition).
I have to automate certain operations involving PUT/POST. In my case, those endpoints are already in place and do their part.
My plan is to have another method that drives this whole automation. Consider this method a new POST endpoint, which would call either the POST or the PUT endpoint from the same service I already mentioned.
I will call the existing PUT or POST based on the input: if the input is new, I will call the existing POST, and if the given input already exists in the database, I will call PUT.
So far, so good. But a question is bugging me a lot: my new endpoint, which is a POST, calls PUT as well as POST. Each method type is supposed to do only its own kind of operation, but here I am calling PUT as well as POST, while the parent calling method type is POST.
I am not sure whether I am working in the right direction to achieve my use case.
Please correct me if there is a better way.
Note: I have a Spring Boot application, which always needs some endpoint to trigger the logic I am talking about.
I have updated my question for better understanding.
I don't really know what you mean exactly. The HTTP methods are each meant to do a specific task, but it's still OK to use POST to update something; it might not be best practice, but it works. If you want to separate the concerns (adding, updating), then just implement two different endpoints, one handling creation and the other handling the update. The client (whether it's a web app, a desktop app, or whatever) has to handle this.
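A rough sketch of that in Spring Boot; Item and ItemService are hypothetical stand-ins for your own types. Note that the single "driver" POST can dispatch to the same service methods directly, so it never has to issue HTTP calls to its own PUT/POST endpoints:

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

record Item(String id, String name) {}

interface ItemService {
    boolean exists(String id);
    Item create(Item item);
    Item update(String id, Item item);
}

@RestController
@RequestMapping("/items")
class ItemController {

    private final ItemService service;

    ItemController(ItemService service) {
        this.service = service;
    }

    // One endpoint per concern: POST creates...
    @PostMapping
    ResponseEntity<Item> create(@RequestBody Item item) {
        return ResponseEntity.status(201).body(service.create(item));
    }

    // ...and PUT updates.
    @PutMapping("/{id}")
    ResponseEntity<Item> update(@PathVariable String id, @RequestBody Item item) {
        return ResponseEntity.ok(service.update(id, item));
    }

    // The "driver" endpoint decides internally; it reuses the service layer
    // rather than calling the HTTP endpoints above.
    @PostMapping("/upsert")
    ResponseEntity<Item> upsert(@RequestBody Item item) {
        return service.exists(item.id())
                ? ResponseEntity.ok(service.update(item.id(), item))
                : ResponseEntity.status(201).body(service.create(item));
    }
}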
We have a REST API that reads a record from the database, deletes it, and returns the read value back to the client, all in the same call. We have exposed it using HTTP POST. Should this be exposed as HTTP GET? What would the implications be in terms of caching if we expose it as GET?
First, you should keep in mind that one of the reasons we care whether a request is safe or idempotent is that the network is unreliable. Some non-zero number of responses to the query are going to be lost, and what do you want to do about that?
A protocol where the client uses GET to request the resource, and then DELETE to acknowledge receipt, may be a more reliable choice than burning the resource on a single response.
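A client-side sketch of that protocol, using java.net.http and a hypothetical resource URL; the GET can be retried freely if a response is lost, and the resource is only burned once the data has arrived:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AcknowledgingClient {
    private final HttpClient client = HttpClient.newHttpClient();

    public String fetchThenAcknowledge(String url) throws Exception {
        // Safe read: can be repeated as often as needed if the response is lost.
        HttpRequest get = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = client.send(get, HttpResponse.BodyHandlers.ofString());

        // Only after the data is safely in hand do we delete the resource.
        HttpRequest delete = HttpRequest.newBuilder(URI.create(url)).DELETE().build();
        client.send(delete, HttpResponse.BodyHandlers.discarding());

        return response.body();
    }
}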
Should this be exposed as HTTP GET?
Perhaps. I would not be overly concerned with the fact that the second GET returns a different response than the first. Safe/idempotent doesn't promise that the response will be the same every time; it just promises that the second request doesn't change the effects.
DELETE, for example, is idempotent, because deleting something twice is the same as deleting it once, even though you might return 200 to the first request and 404/410 to the second.
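To illustrate with a sketch (the in-memory map is a stand-in for a real datastore): calling this handler twice leaves the system in the same state, even though the status codes differ:

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

@RestController
public class RecordController {
    // Stand-in for a real datastore.
    private final Map<String, String> records = new ConcurrentHashMap<>();

    @DeleteMapping("/records/{id}")
    public ResponseEntity<Void> delete(@PathVariable String id) {
        // First call removes the record and returns 204; every later call
        // returns 404. The *effect* on the system is identical either way.
        return records.remove(id) != null
                ? ResponseEntity.noContent().build()
                : ResponseEntity.notFound().build();
    }
}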
Fielding again:
HTTP does not attempt to require the results of a GET to be safe. What it does is require that the semantics of the operation be safe, and therefore it is a fault of the implementation, not the interface or the user of that interface, if anything happens as a result that causes loss of property (money, BTW, is considered property for the sake of this definition).
I think the thing to pay attention to here is "loss of property". What kind of damage does it cause if generic components assume that GET means GET and act accordingly (for example, by pre-fetching the resource, or by crawling the API)?
But you definitely need to think about the semantics: are we reading the document, with the deletion of the database record as a side effect? Or are we deleting the record and receiving a last-known representation as the response?
POST, of course, is also fine -- POST can mean anything.
What would the implications be in terms of caching if we expose it as GET?
RFC 7234 covers HTTP caching. I don't believe there are any particularly unusual implications; you should be able to get the caching behavior you want by specifying the appropriate headers.
If I'm interpreting your use case correctly, then you may want to include a private directive, for example.
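For example, a Spring handler might mark the response private so shared caches (proxies, CDNs) don't store it; the endpoint path and payload here are just assumptions:

import org.springframework.http.CacheControl;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.concurrent.TimeUnit;

@RestController
public class ReportController {

    @GetMapping("/reports/latest")
    public ResponseEntity<String> latest() {
        return ResponseEntity.ok()
                // "private" keeps shared caches from storing a response
                // that is meant for one specific client only.
                .cacheControl(CacheControl.maxAge(0, TimeUnit.SECONDS).cachePrivate())
                .body("...");
    }
}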
As per the discussion above, this looks like a PUT request. You should not use GET: GET is supposed to be safe and idempotent, and here the same data is not available on a second call. POST is used to create a new resource, so it would be better to use the PUT HTTP method for this kind of requirement. Refer to the link below for more details.
https://restfulapi.net/http-methods/
I am building a service that depends on another service, a typical service-oriented architecture. The service I depend on exposes an API and data types. I am unsure whether I should convert the types exposed by that service into specific objects that my own service understands. I do expect their service to change over time, since these are two different services. I have two options:
Directly use those data types in my service and pass them around in my methods.
Transform them into specific data types that only my service understands (the objects will look exactly the same if I do this, with zero changes).
I have tried to answer these questions but still could not make the final call. I need help making this decision.
Why should I have encapsulated/transformed types?
To avoid rebuilding my service every time they ship changes in theirs.
To prevent widespread changes (adapter pattern): changes to the wire format would only require changing the encapsulating classes.
Why should I not have the types encapsulated?
The classes will look exactly the same as the wire-format classes (seemingly useless effort to maintain extra classes).
As I understand it, the impact will be the same with either approach. Help?
I am no architect or SOA specialist, so excuse me if I am saying anything stupid :-)
But I really think the way to go here is to keep your services simple.
In your shoes, I'd just use the existing API directly. I would not spend any time wrapping or adapting the methods into another API. The business logic of your second service (the one that uses the existing first service) should take care of this conversion, IMO, unless the existing API forces you to do something that is really expensive.
Remember that services are mutable. They're software. They have bugs, business logic changes as time goes by, and you'll have to change the API, sometimes keeping older methods compatible for other service consumers. You probably don't want to maintain two APIs that provide the same information without a good practical reason, and certainly not for twice the maintenance work.
Creating another API just to adapt the data format sounds to me a little like that old "DTOs are evil" flame war. And I think very few people write about the advantages of using DTOs nowadays :-)
This is a somewhat opinion-based question, so here is my opinion: you should define your own data types so that your code knows what should be contained in which variable.
I think of a service as a data provider: it accepts certain requests, fulfills our needs, and in return may give us some data. The role of the service is just to provide services to its clients.
It should be the responsibility of the client to accept the data returned by the service and store it in its own data structures, since there can be n different clients for a single service, each with different requirements, which may lead them to design client-specific data structures to hold the data.
Also, as you said, the service you depend on is subject to change over time. If you define your own data structures, you will only need to make changes in one place, and the rest of your code will be safe.
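A minimal sketch of that single place, with RemoteUser standing in for the type the other service exposes and LocalUser for what the rest of your code sees; even if the two look identical today, only the mapper changes when the wire format drifts:

// Stand-in for the type generated from the other service's API.
class RemoteUser {
    public String userName;
    public String mailAddress;
}

// The shape the rest of *this* service programs against.
class LocalUser {
    public final String name;
    public final String email;

    LocalUser(String name, String email) {
        this.name = name;
        this.email = email;
    }
}

// The only class that knows both shapes (adapter / anti-corruption layer).
final class UserMapper {
    static LocalUser fromRemote(RemoteUser remote) {
        return new LocalUser(remote.userName, remote.mailAddress);
    }
}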
I'm using a Java backend with a Flex frontend. When I want to use a labelFunction, it doesn't load the in-depth properties, such as a value object; it's as if they were lazy-loaded on the Flex side. I'm sure it is not coming from the backend, because I've checked there.
I also see in data grids that not all values are loaded at once.
For example:
class John {
    public var name:String;
    public var doe:Doe;
}

class Doe {
    public var lastName:String;
}
I ask my backend for all Johns, and the backend gives me all Johns, which contain the Does. On the Flex side I fire the result event from the CallResponder when I receive that data. But it still can't access the Does inside the Johns; the doe property of John is still null. When I ask a second time it knows about the Does, so it looks like lazy loading on the frontend side...
What am I doing wrong?
Greets
It's tricky to completely understand your question. However, I've had problems along these lines many, many times, so I get the general problem.
One thing to remember with BlazeDS is that the classes sent over the network are serialized and deserialized. Meaning, in simplified terms, that the only things written to and read from the network are the fields/properties of each class. You have to pay CLOSE attention to the basic data types in your classes, on both the Java side and the Flex side. Make sure all properties/fields and public getters/setters match, and make sure they're clear.
What I mean by "clear" is: BlazeDS gets confused when it can't figure out which variables to stick where.
Although your Doe class is not a String, it only contains a string. So, when it's sent over the network, it looks just like a string. In cases like this, I've seen BlazeDS get confused: it sees two strings come over the network and can't figure out which goes where. To you, John contains a Doe and a String, but all BlazeDS really sees, in the end, is a String and a String.
Just to test, in your basic example, change Doe.lastName to an Integer or some other object. Chances are it will stop coming up null on the other end. If it's still null, then your ActionScript and Java classes (John and Doe) don't match properly, or they're too ambiguous.
The basic point is: when things come up null when you receive data, you have a problem with serialization; BlazeDS can't figure out how to read what was written to the network. So either adjust your fields, properties, and public getters/setters, or write your own method for serializing your objects.
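For illustration, here is what an unambiguous Java-side pair might look like; the property names are assumptions, and the ActionScript classes must mirror them one-for-one (typically with a [RemoteClass] alias pointing at the Java class):

// Each class in its own file on the Java side; a matching ActionScript
// class with the same property names sits on the Flex side.
public class John {
    private String name;
    private Doe doe; // named "doe", not "lastName", to avoid ambiguity

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public Doe getDoe() { return doe; }
    public void setDoe(Doe doe) { this.doe = doe; }
}

public class Doe {
    private String lastName;

    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}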
This page describes BlazeDS serialization (and also how to handle it on your own) in GREAT detail:
http://livedocs.adobe.com/blazeds/1/blazeds_devguide/help.html?content=serialize_data_2.html
Once I fully understood this, I had far fewer errors of this kind.
Hope that helps,
-kg
OK, I still don't know why it did that, but I've solved it by using flat DTOs; now I'm using a complete MVC architecture...