How to solve obstacles related to API versioning - java

I have a RESTful service, and there are more than 20 clients using it.
#Path("/provider")
public class Provider{
#Path("/simpleprovider")
#GET
public String getProvider(){
return "Simple Provider";
}
}
Now I have decided to introduce versioning in the service. I have googled a lot and read many articles, but I am totally confused about how to do it. Suppose I change the URI for the new service to @Path("/provider/v1"), then how should I provide support for existing clients? Does this mean I have to change every client whenever a new API version goes live?
After googling, I found that there are 3 ways to provide versioning:
URL versioning
custom request header
content type
but I could not find any practical examples. Please help me in this regard:
http://stackoverflow.com/questions/389169/best-practices-for-api-versioning
http://www.narwhl.com/2015/03/the-ultimate-solution-to-versioning-rest-apis-content-negotiation/
http://restcookbook.com/Basics/versioning/
http://www.troyhunt.com/2014/02/your-api-versioning-is-wrong-which-is.html
http://www.lexicalscope.com/blog/2012/03/12/how-are-rest-apis-versioned/
Any help will be greatly appreciated

Versioning a service can be very tricky, and choosing the versioning strategy is always a big decision, especially if you already have people using the service. From my experience, here are some things to consider:
Understand and communicate when and if you plan on sunsetting versions of your API. The last thing you want is to end up having to maintain 10 different versions of an API.
Understand if a version change is absolutely necessary. A good rule of thumb is if core functionality is changing or if the contract can potentially break someone's software that is integrating with your API. Here are some things to consider when determining if you really need to version:
If you are removing fields from a resource.
If you are removing (or updating) a URL (or method).
If an existing endpoint (or method) logic is going to change such that it would require a consumer to re-implement.
Overall, if a change that you make would break someone integrating with your API.
Are you going to require database changes that would not be backwards compatible with your previous version(s)? This is when versioning can get really fun (sarcastically), because now you might have to make your database backwards compatible, which, in my experience, can be a difficult problem to deal with going forward.
To answer your question, though, I have found the best way to version to be in the URL. Be very simple and deliberate with your versioning so that it is crystal clear for your integrator. For example:
GET /v1/products/{id} // Version 1
GET /v2/products/{id} // Version 2
** If you decide on URL versioning then my advice is to use "v" for version plus a SINGLE number like 1 or 2. Don't get into minor versions, sub-versions, etc.; that will make your API seem like it revs a lot, which can cause concern for consumers. Also, keep the version as FAR left in the URL as possible. The idea is that everything to the right of the version is the new versioned resource.
I would AVOID using headers to version your service. You don't want to hide versions from your consumer. Be as transparent and explicit about versioning as you possibly can.
Versioning in the URL also lets you do some useful routing on your web server and proxies. You can either put the version in your actual code, for example:
[HttpGet("v1/products")]
public Product GetProduct(string id)
{
return _repository.GetProduct(id);
}
or you can version using your web server by setting up virtual directories (or whatever). Then your code might look something like this instead:
[HttpGet("products")]
public Product GetProduct(string id)
{
return _repository.GetProduct(id);
}
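(The snippets above are ASP.NET-style; since the question itself is about JAX-RS, a roughly equivalent sketch of the in-code variant is shown below. The Product and ProductRepository classes here are just placeholders for whatever domain and persistence types you already have.)
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;

    @Path("/v1/products")
    public class ProductsResourceV1 {

        // placeholder for whatever persistence layer you already have
        private final ProductRepository repository = new ProductRepository();

        @GET
        @Path("/{id}")
        public Product getProduct(@PathParam("id") String id) {
            return repository.getProduct(id);
        }
    }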
However you decide to version, it is very important to think about the pros and cons of each decision and weigh them, because if you are running an API that people are using, bad decisions will catch up with you in a hurry.

Related

In modern application design, how do you map between TransferObject and BusinessObject

With organizations that were slow to adopt modern technology finally junking EJBs and getting ready to move to Spring Boot, microservices, REST, and Angular, some questions about application design come up. One is about TransferObjects and BusinessObjects.
When the call comes to the REST Controller, is it still common to populate a TO (POJO) and then make a Service call, which in turn populates a BusinessObject and then calls a Repository service?
OR
At the REST Controller layer, do we directly populate the BO and send it to the Service? (This does not make any sense to me, because a BO is populated only during the execution of business logic.)
If nowadays it's still option 1, then how do we avoid writing two nearly identical POJO classes in most cases (in order to use BeanUtils.copyProperties()), with the BO decorated with @Id, @Column, etc.?
To elaborate on @Turing85's comments...
Option 1 usually (see the end of my answer) makes infinitely more sense. It's a question of responsibility (purpose) and change for the two logical components you refer to, a REST API and a repository / system service:
Responsibility: a REST service cares about working with its callers, so ideally when designing a REST API you should be involving someone from the client side (client as in caller), because if the API doesn't work for them it's not going to be an effective API. On the other hand, repositories are somewhat self-centered, and may need to consider things that are of no interest to API callers (and vice versa).
Change: if you pay attention to design principles, like SOLID, you'll know that each part of a system should do one job - as a way of limiting the reasons why it needs to change (see: SRP). Trying to use one object across both outward-facing APIs and inward-facing repositories is asking for trouble, because it's trying to do too much - it's trying to help solve problems in two very different parts of the wider solution, and those both have very different change drivers working against them. Turing85's comment about the persistence layer stems from the same idea.
"Option 1 usually makes infinitely more sense":
One case where the REST API's objects will / can bear a very close resemblance to those that hit the actual repository (or even be reused, I guess) is when the REST API is a System API - i.e. a dedicated façade / proxy to the repository. In this case, the System API is largely driven by the repository i.e. the main change driver is just the repository.
After researching a bit, I agree: keeping things simple will result in simpler code. I found a nice, simple way to take care of this manual work:
https://www.baeldung.com/entity-to-and-from-dto-for-a-java-spring-application
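For reference, the gist of that approach is a small conversion step kept in one place; a minimal hand-written sketch, where the Product entity and ProductDTO are hypothetical classes with matching getters and setters, might look like:
    // DTO deliberately kept free of persistence annotations such as @Id or @Column
    public class ProductDTO {
        private Long id;
        private String name;

        public Long getId() { return id; }
        public void setId(Long id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    // A single mapper keeps the TO <-> BO conversion in one place
    public class ProductMapper {

        public ProductDTO toDto(Product entity) {
            ProductDTO dto = new ProductDTO();
            dto.setId(entity.getId());
            dto.setName(entity.getName());
            return dto;
        }

        public Product toEntity(ProductDTO dto) {
            Product entity = new Product();
            entity.setId(dto.getId());
            entity.setName(dto.getName());
            return entity;
        }
    }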

Minimum required properties in ESAPI.properties

My web application uses only the following ESAPI encode methods:
ESAPI.encoder().encodeForLDAP()
ESAPI.encoder().encodeForHTML()
In this case, what are the minimum required properties in ESAPI.properties?
Right now I'm using ESAPI 2.1.0.1 and these properties.
If you are just using the encoder() function, the 3 lines in the encoder section are all you need: lines 99-119 (between all the comments).
Edit
Plus you must specify a default encoder. Example:
ESAPI.Encoder=org.owasp.esapi.reference.DefaultEncoder
Encoder.AllowMultipleEncoding=false
Encoder.AllowMixedEncoding=false
Encoder.DefaultCodecList=HTMLEntityCodec,PercentCodec,JavaScriptCodec
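For context, with those encoder properties in place the application-side usage stays limited to the two calls from the question; a minimal sketch (the input string and class name are just illustrations):
    import org.owasp.esapi.ESAPI;

    public class EncodingExample {
        public static void main(String[] args) {
            String userInput = "<script>alert('x')</script>";

            // HTML-encode untrusted data before writing it into a page
            String safeHtml = ESAPI.encoder().encodeForHTML(userInput);

            // LDAP-encode untrusted data before using it in an LDAP filter
            String safeLdap = ESAPI.encoder().encodeForLDAP(userInput);

            System.out.println(safeHtml);
            System.out.println(safeLdap);
        }
    }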
I think I answered this in a previous question.
Again you're the victim of some bad design choices back at the beginning of the ESAPI project between 2009-2011. Namely, the Singleton-based monolith.
ESAPI's current design is monolithic, meaning it tries to be everything to everyone. As you're well aware, this isn't the best design strategy, but that's where we're at.
We have several proposals for breaking various functions out into separate libraries, but that's future work towards building ESAPI 3.0.
For your current dilemma, there's too much of the library that is dependent upon functionality that it sounds like you don't need and don't intend to use. Unfortunately, that is simply the current fact of life. No one has ever seemed to use our authentication interface--but it's there for everybody, even if they don't need it. Most users use our encoding/decoding capability first, followed by the validation API and then crypto. The last couple are log injection and the WAF.
Most users of ESAPI take the non-prod test file, and leave it at that. (This is a really bad idea.)
The others take the one you reference and work through the exceptions, asking us questions on the mailing list.
This is not an ideal path to walk either, but it's the path we're on right now.
The danger, from my perspective, is if you choose to implement happy-path configurations for the parts ESAPI is throwing exceptions about, with the goal of JUST making it happy so you can get to your two narrow use cases.
Then you get promoted and another developer on your app is faced with a problem that she thinks is solved because you handled all the integration with ESAPI.
PAY ATTENTION TO THE PARTS OF ESAPI THAT DON'T PERTAIN TO YOUR USE CASE. This isn't ideal, but it's where we're at in 2017. Ask us questions on the user list.
Failure to do so, especially in the crypto portion, will leave your application vulnerable in the future.
RegEx used in ESAPI.validator().getValidInput(..) calls
Validator.COMPANY_ID_PTRN=[a-zA-Z0-9]+
Validator.USER_DN_PTRN=[a-zA-Z0-9=,]+
Validator.ROLE_DN_PTRN=[a-zA-Z0-9=,^\- ]+
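Assuming those patterns are defined in ESAPI.properties, the validator calls reference them by the part of the key after "Validator."; a rough sketch (the context string and maximum length are illustrative only):
    import org.owasp.esapi.ESAPI;
    import org.owasp.esapi.errors.ValidationException;

    public class ValidationExample {
        public String validateCompanyId(String input) throws ValidationException {
            // "COMPANY_ID_PTRN" refers to the Validator.COMPANY_ID_PTRN key above
            return ESAPI.validator().getValidInput(
                    "companyId",       // context used in error messages and logs
                    input,             // the untrusted value
                    "COMPANY_ID_PTRN", // validation rule name from ESAPI.properties
                    64,                // maximum length (illustrative)
                    false);            // allowNull
        }
    }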
Minimum default settings
ESAPI.Encoder=org.owasp.encoder.esapi.ESAPIEncoder
ESAPI.Logger=org.owasp.esapi.logging.slf4j.Slf4JLogFactory
Logger.ApplicationName=TrianzApp
Logger.LogEncodingRequired=false
Logger.LogApplicationName=false
Logger.LogServerIP=false
Logger.UserInfo=false
Logger.ClientInfo=false
IntrusionDetector.Disable=true
ESAPI.Validator=org.owasp.esapi.reference.DefaultValidator
Encoder.AllowMixedEncoding=false
Encoder.AllowMultipleEncoding=false
ESAPI.printProperties=false

Best way to handle JAX-RS REST API URI versioning

I did my search first on Stack Overflow and was not able to find any answers related to my question. All I could find were questions related to REST URI design.
My question is on the backend side.
Suppose we have two different versions of REST URIs:
http://api.abc.com/rest/v1/products
http://api.abc.com/rest/v2/products
What is the best approach to follow on the backend side (server-side code) for proper routing, manageability & reuse of the existing classes across these two sets of APIs based on version?
The approach I have thought of is to define resource classes with different @Path annotations, e.g. have separate packages for v1 & v2 and, in the ProductsResource class of each package, define:
package com.abc.api.rest.v1.products;

@Path("/rest/v1/products")
public class ProductsResource { ... }

package com.abc.api.rest.v2.products;

@Path("/rest/v2/products")
public class ProductsResource { ... }
& then have the implementation logic based on the versions. The problems with this approach is when we are only changing one particular resource api from the set of api's, we have to copy other classes to the v2 package also. Can we avoid it?
How about writing a custom annotation, say @Version, that holds the values of the versions it supports? Then whether it is v1 or v2, both requests will go to the same resource class.
For example:
package com.abc.api.rest.products;

@Path("/rest/{version: [0-9]+}/products")
@Version(1,2)
public class ProductsResource { ... }
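To make the idea concrete, here is a rough sketch (leaving aside for now how the custom @Version annotation would actually be enforced, and with placeholder return values) of a single resource class that reads the version from the path and branches only where behaviour differs:
    package com.abc.api.rest.products;

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;

    @Path("/rest/v{version: [0-9]+}/products")
    public class ProductsResource {

        @GET
        public String listProducts(@PathParam("version") int version) {
            // branch on the version only where the behaviour actually differs
            if (version >= 2) {
                return "v2 representation";
            }
            return "v1 representation";
        }
    }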
UPDATE:
There was an API versioning suggestion by Jarrod to handle the version in headers. That's also one way to do it; however, I am looking for best practices to use when we are following URI-based versioning.
The problem with putting it in the URL is that the URL is supposed to represent a resource by location. An API version is not a location, and it is not part of the identifier of the resource.
Sticking /v2/ in the URL breaks all existing links that came before.
There is one correct way to specify API versioning:
Put it in the mime-type for the Accept: header that you want. Something like Accept: application/myapp.2.0.1+json
The Chain of Responsibility pattern goes well here, especially if there will be a significant number of API versions that are different enough to need their own handler; that way methods don't get out of hand.
This blog post has an example of what is considered by some to be the correct approach, i.e. not having the version in the URI: http://codebias.blogspot.ca/2014/03/versioning-rest-apis-with-custom-accept.html
In short, it leverages the JAX-RS @Consumes annotation to associate the request for a particular version with a specific implementation, like:
#Consumes({"application/vnd.blog.v1+xml", "application/vnd.blog.v1+json"})
I was just wondering why not have a subclass of ProductService called
@Path("/v2/ProductService")
public class ProductServiceV2 extends ProductService {
}

@Path("/v1/ProductService")
public class ProductService {
}
and only override whatever is changed in v2. Everything unchanged will work the same as in v1/ProductService.
This definitely leads to more classes, but it is one easy way of coding only whatever changes in the new version of the API, while falling back to the old version's behaviour without duplicating code.
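As a rough illustration of that inheritance idea (the method, path and return values below are placeholders, not from the original post):
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;

    // ProductService.java
    @Path("/v1/ProductService")
    public class ProductService {

        @GET
        @Path("/{id}")
        public String getProduct(@PathParam("id") String id) {
            return "v1 product " + id;
        }
    }

    // ProductServiceV2.java
    @Path("/v2/ProductService")
    public class ProductServiceV2 extends ProductService {

        // Only what changed in v2 is overridden; the JAX-RS annotations are
        // repeated here because annotating the override hides the inherited ones.
        @Override
        @GET
        @Path("/{id}")
        public String getProduct(@PathParam("id") String id) {
            return "v2 product " + id;
        }
    }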

Need help improving a tightly coupled design

I have an in-house enterprise application (EJB2) that works with a certain BPM vendor. The current implementation of the in-house application involves pulling in an object that is only exposed by the vendor's API and making changes to it through the exposed methods in the API.
I'm thinking that I need to somehow map an internal object to this external one, but that seems too simple and I'm not quite sure of the best strategy to go about doing this. Can anyone shed some light on how they have handled such a situation in the past?
I want to "black box" this vendor's software so I can replace it easily if needed. What would be the best approach from a design point of view to somehow map an internal object to this exposed API object? Keep in mind that my in-house app needs to talk to the API still, so there is going to be some dependency between the two, but I want to reduce it so I can also test in isolation from this software using junit.
Thanks,
Jason
Create an interface for the service layer; internally, all your code can work against that. Then make a class that implements that interface and calls the third-party API methods, acting as the API facade.
i.e.
interface IAPIEndpoint {
    MyDomainDataEntity getData();
}

class MyAPIEndpoint implements IAPIEndpoint {
    @Override
    public MyDomainDataEntity getData() {
        MyDomainDataEntity dataEntity = new MyDomainDataEntity();
        // Call the third-party API and fill it
        return dataEntity;
    }
}
It is always a good idea to interface out third party apis so you don't get their funk invading your app domain, and you can swap out as needed. You could make another class implementation that uses a different service entirely.
To use it in code you just call
IAPIEndpoint endpoint = new MyAPIEndpoint(); // or get it specific to the lang you are using.
Basing your code on interfaces when it spans multiple implementations is the way to go. It works great for TDD as well, since you can swap the interface out for a local test implementation that can exercise your domain code entirely separately from the third-party API.
Abstraction; implement a DAL which will provide the transition from internal to external and back.
Then, if you switched vendors, your internals would remain valuable and you could swap out the vendor-specific code; assuming the vendors provide the same functionality and the data types relate to each other.
I will be the black sheep here and advocate for the YAGNI principle. The problem is that if you do an abstraction layer now, it will look so close to the third party API that it will just be a redundant layer. Since you don't know now what a hypothetical future second vendor's API will look like, you don't know what differences you need to account for, and any future port is likely to require a rework for those unforeseen differences anyway.
If you need a test framework, my recommendation is to make your own test implementation using the same API as the BPM vendor. Even better, almost all reputable API providers provide some sort of sandbox mode for testing. If they don't, you should ask for one.

Domain name interpretation utility for java

I find myself with a need for a java utility for taking a fully-qualified hostname, and producing the domain name from that.
In the simple case, that means turning host.company.com into company.com, but this gets rapidly more complicated with host.subdomain.company.com, for example, or host.company.co.uk, where the meaning of "domain name" gets a bit fuzzy. Throw in complications with the definition of SLD and ccSLD, and it gets messy.
So my question is whether there's a 3rd-party library out there that understands these things and can give me sensible interpretations.
Mozilla regularly maintains the rules that it uses in its browser for cookie security in a format that can be parsed and used by others:
http://publicsuffix.org/
Searching Google, there are probably Java libraries that can parse the list, but I don't know the quality of any of them.
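(For what it's worth, Guava's InternetDomainName class, which is backed by the publicsuffix.org list, is one Java option for exactly this kind of lookup; a small sketch, not from the original answer:)
    import com.google.common.net.InternetDomainName;

    public class DomainExample {
        public static void main(String[] args) {
            // topPrivateDomain() = one label below the public suffix
            System.out.println(InternetDomainName.from("host.company.com").topPrivateDomain());       // company.com
            System.out.println(InternetDomainName.from("host.sub.company.co.uk").topPrivateDomain()); // company.co.uk
        }
    }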
I don't think such a thing exists, since it's an administrative rather than a technical issue, and a very multi-lateral one at that.
If you end up rolling your own, this page on the Mozilla wiki looks like a good starting point, with lots of references. Looks like a major headache though. Just look at the rules for Japan. Ouch.
Not sure if it's for the same purpose, but I do something similar in my code. When I set cookies, I want to set the domain as close to the top as possible, so I need to find the domain one level below the public suffix. For example, the highest domain you can set a cookie on for host.div.example.com is .example.com; for host.div.example.co.jp it is .example.co.jp.
Unfortunately, the code is not in the public domain, but it's very easy to do. I basically use the following two classes from Apache HttpClient 4:
org.apache.http.impl.cookie.PublicSuffixFilter
org.apache.http.impl.cookie.PublicSuffixListParser
I forget the exact reason, but we had to make some very minor tweaks. You just walk the domain from top to bottom; the first valid cookie domain is what you need.
You need to download the public suffix list from here and include it in your JAR,
http://mxr.mozilla.org/mozilla-central/source/netwerk/dns/src/effective_tld_names.dat?raw=1
