How to handle the code for versioned SOAP web services?

Background:
our web services are company internal, but with a lot of different systems using them
we will strive to deprecate/remove old versions of the API as much as we can
There is a lot of information regarding versioning of web services, and our decision was to use the following approach to version our web services:
Keep version in the URL (I know some people are against this, but mainly with regard to REST services)
Keep version in namespace.
But now we are deciding how to actually implement this, and here we have not found that much information on best practices. We use (Java):
Annotations to define our web services (and the web service api)
POJO beans annotated with XML annotations, to define the content
Converter classes to convert between the business layer and the web service POJOs
Spring
So, to keep old versions of the web services, we need to keep old versions of the code. To do this, we have basically looked at two different approaches:
1) For each new version, make a complete new copy of the relevant code
This approach would look like this:
com.company.webservice.v3. - all of the web service classes, POJOs and converters go here
com.company.webservice.v4. - all of the web service classes, POJOs and converters go here
So, here we have the code duplicated. Our thoughts in short:
Code duplication. There will be several classes with identical code. Perhaps confusing in Eclipse.
Complete isolation, easy to determine what constitute a specific version
Minimized risk to affect functionality of previous versions of the services
2) Use spring to only make a copy of each class that is affected by a change
This approach means that we use Spring IoC and let all versions of the web services share as much code as possible. Only when we make a change that affects behavior or the API do we create new versions of those classes (a sketch of the Spring wiring is shown after this list). For example:
com.company.webservice.beans.MyXMLAnnotatedPOJOv3.java
com.company.webservice.beans.MyXMLAnnotatedPOJOv4.java
com.company.webservice.translators.MyXTranslatorv1.java
com.company.webservice.translators.MyXTranslatorv2.java
Could be difficult to clearly see what constitutes a specific version of a web service. Maybe easier to affect previous versions of the web services by mistake when maintaining the code
No code duplication. Only changes are implemented as new classes
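To illustrate, the Spring wiring for this approach could look roughly like the following sketch (the endpoint classes are made-up names; the translators are the ones listed above). Both service versions are wired explicitly and share everything that did not change:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class WebServiceVersionConfig {

    @Bean
    public MyXTranslatorv1 myXTranslatorv1() {
        return new MyXTranslatorv1();      // unchanged, still used by service v3
    }

    @Bean
    public MyXTranslatorv2 myXTranslatorv2() {
        return new MyXTranslatorv2();      // introduced together with service v4
    }

    @Bean
    public MyWebServiceEndpointV3 endpointV3() {
        return new MyWebServiceEndpointV3(myXTranslatorv1());
    }

    @Bean
    public MyWebServiceEndpointV4 endpointV4() {
        return new MyWebServiceEndpointV4(myXTranslatorv2());
    }
}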
Neither approach feels optimal, but we haven’t found much information regarding this.
So, my question is:
which of the two approaches would you use? Or would you take a completely different approach?

When generating WSDLs from Java, I would use the package solution:
com.company.webservice.v3.
It has the code duplication problem, but the POJOs and converters have differences between versions anyway, so code reuse might not be very feasible after all. The main advantage is that if you want to get rid of an old version, you just delete the relevant packages.
I would keep the version number in the URL, since you are not doing REST anyway. Furthermore, you could check in the access logs whether certain versions are still used.
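For illustration, a version 3 endpoint could then look roughly like this sketch (package, service name, namespace and URL are only examples; with Spring you would map the versioned URL in configuration instead of publishing it by hand):

package com.company.webservice.v3;

import javax.jws.WebService;
import javax.xml.ws.Endpoint;

@WebService(
        serviceName = "CustomerService",
        targetNamespace = "http://webservice.company.com/v3")
public class CustomerServiceV3 {

    // service operations for version 3 go here

    public static void main(String[] args) {
        // the version shows up in the endpoint URL as well
        Endpoint.publish("http://localhost:8080/ws/v3/customers", new CustomerServiceV3());
    }
}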

Related

Microservices with a shared lib dependency

I'm working on a microservice project, and I have a question about best practices.
We are using Java Spring, and all of our models are packaged in a single JAR. Each microservice depends on this JAR to function. Is it okay for a microservice to depend on a JAR containing models outside of its scope like this, or is it better practice to split this JAR up?
There is a very good article by Bartosz Jedrzejewski on this.
To quote a relevant part from his article...
If the service code should be completely separate, but we need to consume possibly complicated responses in the clients, then the clients should write their own libraries for consuming the service.
By using client-libraries for consuming the service the following benefits are achieved:
The service is fully decoupled from the clients and no services depend on one another; the library is separate and client-specific. It can even be technology-specific if we have a mix of technologies
Releasing a new version of the service is not coupled with the clients; they may not even need to know whether backward compatibility is still there, since it is the clients who maintain the library
The clients are now DRY; no needless code is copy-pasted
It is quicker to integrate with the service - this is achieved without losing any of the microservices benefits
This solution is not something entirely new; it was successfully implemented in Scott Logic projects, is recommended in “Building Microservices” by Sam Newman (highly recommended), and similar ideas can be seen in many successful microservices architectures.
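As a rough illustration of such a client-owned library (using Spring's RestTemplate; the base URL, endpoint and OrderView type are made up and specific to this one client):

import org.springframework.web.client.RestTemplate;

public class OrderServiceClient {

    // client-local view of the response, mapping only the fields this client needs
    public static class OrderView {
        public String id;
        public String status;
    }

    private final RestTemplate rest = new RestTemplate();
    private final String baseUrl;

    public OrderServiceClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    public OrderView getOrder(String orderId) {
        return rest.getForObject(baseUrl + "/orders/{id}", OrderView.class, orderId);
    }
}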
There are some pitfalls as well; better to read the entire article...
Sharing the domain models is an indicator of bad design. If services share a domain, they should not be split. With microservices, teams working on one service should be able to modify their domain objects anytime without impacting other services/teams.
There can be exceptions though, e.g. if the model objects are generic enough to be reusable in any service. As an example, a geometry domain could be contained in a geometry library. There can be other exceptions.

Where to put event upcasters in a microservice architecture?

I'm "playing" with Axon Framework with some small examples where the query and command services (and the logic behind them) are running as separated applications in several Docker containers.
Everything works fine so far and I started to evolve the event versioning topic. I haven't implemented that yet, but I like the idea to share the events as an API via JSON schema. But I've got stuck using that idea with the potential need of event upcasters.
If I understand that approach correctly every listening component has to upcasts the events independently, therefore it might be a good idea to share the upcasters, there is no need for different implementations, right? But then the upcasters seem to became a part of the API, or am I missing something?
How do you deal with that situation? Or generally, what are the best practices for API definitions in such scenario?
When working in a microservices environment with distinct repositories for the different services, I feel it is commonplace to have a dedicated module/package/repository for the API of a given microservice, or a dedicated module for the shared language within a Bounded Context.
Especially when following the notion of a Bounded Context, where every service within the context speaks the same language, this to me emphasizes the requirement to share the created upcasters as well.
So, in short: yes, I would group the upcasters together with the API in question.
Schema languages typically also have solutions in place to support several versions of a message, for example. Thus, if you were to use a schema language as your core API, that would also include a (albeit different) form of upcaster.
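To make that concrete, a shared upcaster could look something like the following sketch, assuming Axon's SingleEventUpcaster and a Jackson-based serializer (the event type, revisions and the added field are made up):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
import org.axonframework.serialization.SimpleSerializedType;
import org.axonframework.serialization.upcasting.event.IntermediateEventRepresentation;
import org.axonframework.serialization.upcasting.event.SingleEventUpcaster;

public class OrderCreatedEvent0To1Upcaster extends SingleEventUpcaster {

    private static final SimpleSerializedType OLD_TYPE =
            new SimpleSerializedType("com.example.api.OrderCreatedEvent", "0");
    private static final SimpleSerializedType NEW_TYPE =
            new SimpleSerializedType("com.example.api.OrderCreatedEvent", "1");

    @Override
    protected boolean canUpcast(IntermediateEventRepresentation intermediateRepresentation) {
        return intermediateRepresentation.getType().equals(OLD_TYPE);
    }

    @Override
    protected IntermediateEventRepresentation doUpcast(IntermediateEventRepresentation intermediateRepresentation) {
        // revision 1 added a "currency" field; give old events a sensible default
        return intermediateRepresentation.upcastPayload(NEW_TYPE, JsonNode.class,
                payload -> ((ObjectNode) payload).put("currency", "EUR"));
    }
}

Shipping a class like this together with the event schemas means every listener upcasts old events in the same way.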
This is my 2 cents on the situation; hope this helps you out!

Swagger codegen server-side workflow

I am trying to incorporate swagger-codegen in my new greenfield project, using Java (jaxrs-jersey2).
There are a lot of resources out there already documenting various parts of the project; however, I still haven't been able to find out any high-level advice on the best workflow to use with these tools.
As I understand it, swagger-codegen will be able to generate client-side code to interact with my API, such that I don't have to write this myself. This will happen by looking at a swagger.yaml (2.0) or openapi.yaml (3.0) file. This part is clear.
However, there seems to be multiple ways of generating this specification file. As I understand it, there are two primary ways:
Write a server implementation using a combination of JAX-RS and Swagger annotations, and have a maven plugin run as part of the compile step, generating a swagger.yaml specification file to be used by the client-generation plugin.
Write a swagger.yaml specification first, and generate server-stub code for Jersey, implementing only the business logic, separate from all server boilerplate.
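For reference, this is roughly how I picture the first (code-first) option: a plain JAX-RS resource with Swagger annotations, from which a maven plugin would derive swagger.yaml at build time (resource and method names below are made up):

import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Api("orders")
@Path("/orders")
public class OrderResource {

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    @ApiOperation("Fetch a single order by its id")
    public String getOrder(@PathParam("id") String id) {
        // real code would delegate to business logic and return a model object
        return "{\"id\": \"" + id + "\"}";
    }
}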
Which of these two ways is the recommended workflow? It sounds like (2) means writing less code and focusing on just application logic, without worrying too much about Jersey-specific glue to make the API work. This also means that the single source of truth for the API becomes a simple yaml file, rather than a bunch of Jersey code.
However, I'm unsure how to properly set this up:
Does my build need to have a compilation phase where server stubs are constantly regenerated?
Do I simply extend from the generated server stub and never worry about annotating API paths with @Path, @GET, etc.?
Am I misunderstanding the use-case for server-stub generation? I.e. is the first approach (Jersey code-first) more appropriate?
If there is no real difference between the two approaches, when would you pick (1) over (2) and vice-versa?
Many thanks.

How to use different versions of a class in the same application?

I'm currently working on a Java application which should have the capability to use different versions of a class at the same time (because of multi-tenancy support). I was wondering, is there any good approach to managing this? My basic approach is to have an interface, let's say Car, and implement the different versions as CarV1, CarV2, and so on. Every version gets its own class.
My approach seems kind of weird, I think, but I didn't find any literature on this topic, and I actually don't know what I should search for.
The interface idea is prudent. Combine it with a factory that can produce the required implementation instance depending on some external input, e.g. the tenant id. If you don't need to support multiple tenants in the same running instance of the application, you could also use something like the ServiceLoader from the JDK, which allows a file-based configuration approach.
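As a minimal sketch of the interface-plus-factory idea (class and tenant names are made up, and each type would normally live in its own file):

public interface Car {
    String describe();
}

class CarV1 implements Car {
    public String describe() { return "Car implementation for tenant schema v1"; }
}

class CarV2 implements Car {
    public String describe() { return "Car implementation for tenant schema v2"; }
}

class CarFactory {
    static Car forTenant(String tenantId) {
        // the external input (here the tenant id) decides which version is instantiated
        if ("tenant-a".equals(tenantId)) return new CarV1();
        if ("tenant-b".equals(tenantId)) return new CarV2();
        throw new IllegalArgumentException("Unknown tenant: " + tenantId);
    }
}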
If you are running in an application server, consider just firing up multiple instances, each configured for a different client. The server will then take care of the separation of instances, just fine.
Otherwise, if you really think you need multiple implementations at the same time (at runtime) in a non-Java EE application, this is a tricky problem. Maybe you want to take a look at OSGi containers, which provide features for having multiple versions of a class. However, an approach like this adds significant complexity if you are not already familiar with it.
In theory you can handle this using multiple class loaders, like JBoss for example does.
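For illustration only, a very simplified version of that class-loader trick (jar paths and the class name are made up):

import java.net.URL;
import java.net.URLClassLoader;

public class MultiVersionLoading {

    public static void main(String[] args) throws Exception {
        // each version of the class comes from its own jar, isolated by its own class loader
        URLClassLoader v1Loader = new URLClassLoader(new URL[]{new URL("file:libs/car-v1.jar")}, null);
        URLClassLoader v2Loader = new URLClassLoader(new URL[]{new URL("file:libs/car-v2.jar")}, null);

        Class<?> carV1 = v1Loader.loadClass("com.example.Car");
        Class<?> carV2 = v2Loader.loadClass("com.example.Car");

        // same fully qualified name, but two distinct Class objects
        System.out.println(carV1 == carV2);   // prints false
    }
}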
BUT: I would strongly advise against implementing this yourself. This is a rather complicated matter and easily gotten wrong. If you are talking about a web application, you can instead create one web app instance per tenant. If you are working on a stand-alone app, you should check whether running one instance per tenant might be feasible.

Concerns with managing JAX-WS artifacts

I'm developing an application that makes heavy use of web services. I will be developing both the client and server ends of this application. I'd like to use JAX-WS (which I am new to), because it seems to be the future for web services in Java, but I have a number of concerns related to the artifacts. None of these concerns is a deal-breaker, but collectively, JAX-WS seems to create a lot of inconvenience. I'm new to JAX-WS, so perhaps there are things I am unaware of that would alleviate my concerns.
Here are my concerns:
I anticipate having a fairly large number of POJOs that are passed between client and server (for lack of a better term, I'll call these transport objects). I would like to include documentation and business logic in these objects (for starters: equals, hashCode, toString). If I have business logic in these classes, then I cannot use wsimport to create the annotations for them, and I have to manage those by hand. Seems cumbersome and error-prone.
I have a choice of having the build system create artifacts, or having developers create artifacts and check them into source control. If artifacts are produced by the build system, then whenever a member of the team updates an API, everyone must generate artifacts in their own development environments. If artifacts are produced by developers and checked into source control, any time a member of the team renames or deletes an API, he must remember to delete wrapper artifacts. Either approach seems to be cumbersome. What's the best practice here?
wsimport creates all the artifacts in the same package. I will be creating multiple services, and I will have some transport objects that are shared, and therefore I need to wsimport all my services into the same package. If two services have an API with the same name, the wrapper artifacts will collide.
I anticipate having at least a hundred APIs in my services. This means at least 200 wrapper classes. Seems like a huge amount of clutter. Lots and lots of classes that are of no interest for development. To make matters worse, these wrapper classes will reside in the same package as the transport objects, which will be some of the most highly-used classes in my system. The signal-to-noise ratio is very low for the most important package in my system.
Any pointers anyone can give me to ease development of my application would be greatly appreciated.
If you have control over both the client and the server, you don't really have to generate the client with wsimport. I currently do it as follows: one project defines the API for the web service. The API consists of the interface and all classes of the "transfer objects". Another project implements the service. You can now distribute the API to the client, who can then use the service and leverage all your additional business methods.
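For example, the shared API project could contain something like this sketch (the namespace and method are made up; the JAXB-annotated transfer objects, including any extra business methods such as equals/hashCode/toString, would live in this project as well):

import javax.jws.WebMethod;
import javax.jws.WebService;

@WebService(targetNamespace = "http://example.com/your_namespace")
public interface ServiceInterface {

    @WebMethod
    String findCustomerName(String customerId);
}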
Assuming ServiceInterface is your service interface, a client might look like this:
import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

// create a dynamic proxy for the service; no wsimport-generated artifacts are needed
Service s = Service.create(
        new URL("http://example.com/your_service?wsdl"),
        new QName("http://example.com/your_namespace", "YourServiceName"));
ServiceInterface yourService = s.getPort(
        new QName("http://example.com/your_namespace", "YourPortName"),
        ServiceInterface.class);
And just like that you have a service client. That way you can use all your methods (1), you have full control over your packages (3) and you don't have any wrapper classes lying around as they are all generated at runtime (4). I think (2) is solved by this as well.
Your question is quite large, so if I fail to address a point sufficiently, leave a comment and I'll try to go into more detail.
