How to let a consumer select a specific third-party provider - Java

What could be a good OSGi implementation of the scenario below?
I have a general algorithm which is divided into multiple modules. The idea is that each module could be extended by a third party with specific configuration needs. My main algorithm is configured by a user, mainly to select which modules to include. As this configuration file could be difficult to write, I want to create a workbench that helps the user do that.
My first idea was to consider my main algorithm as a consumer of multiple module providers using DS (Declarative Services). The use case is: the user configures the main algorithm and the submodules they want to use; then, when they run the algorithm, the workbench should create the main algorithm service with the right configuration. But if I understand correctly, services in OSGi are designed to be provider-independent. Are services useful in my case?

Doing what you want the way you just described will cause you a lot of heartache. Instead, I'd suggest a more hands-on approach:
Define interfaces in your bundles that define the ways that your algorithm can be extended
Use the service layer of OSGi to collect the implementations of the interfaces (DS can help you here)
Have a configuration class/object that defines which of the above are selected/activated for a specific instance
When your algorithm is executed, look up the necessary services from the service layer and use them (see the sketch after this list).
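Here is a minimal DS sketch of those steps. AlgorithmModule, its getName()/execute() methods, and the "modules" configuration property are names I made up for the example, not anything from your code:

    // AlgorithmModule.java (shared API bundle): the contract third-party modules
    // implement and register as an OSGi service. Hypothetical name.
    public interface AlgorithmModule {
        String getName();
        void execute();
    }

    // MainAlgorithm.java: collects all providers, runs only the configured ones.
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;
    import org.osgi.service.component.annotations.ReferenceCardinality;
    import org.osgi.service.component.annotations.ReferencePolicy;

    @Component(service = MainAlgorithm.class)
    public class MainAlgorithm {

        // DS injects every registered AlgorithmModule provider, whoever supplied it.
        @Reference(cardinality = ReferenceCardinality.MULTIPLE,
                   policy = ReferencePolicy.DYNAMIC)
        private volatile List<AlgorithmModule> modules;

        private Set<String> selected = new HashSet<>();

        @Activate
        void activate(Map<String, Object> config) {
            // The workbench would push a configuration such as "modules=foo,bar"
            // through Configuration Admin when the user runs the algorithm.
            String value = (String) config.getOrDefault("modules", "");
            selected = new HashSet<>(Arrays.asList(value.split("\\s*,\\s*")));
        }

        public void run() {
            modules.stream()
                   .filter(m -> selected.contains(m.getName()))
                   .forEach(AlgorithmModule::execute);
        }
    }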
Also, if you are going to have a full workbench, you could directly use extensions and extension points that help coordinate a bit.

Related

Exchange vars between API and software's core

I am developing screenshot software which can load plugins from JARs. Those are developed using the API package, which is made of interfaces to implement, so the person who wants to make a plugin does not have to use the full source code.
This works well for adding actions (for example, "Upload to X host"), but what if I want to send variables the other way around, from a plugin TO the core? How am I supposed to do this?
The only solution I can think of would be to use callbacks, but I don't find this so clean...
By the way, is my approach of using interfaces that developers implement, and which I then instantiate, correct? Or is there a better way?
Your solution is the most common way to implement such a scenario. You give plugins an instance of a class (instantiated by the core) and they can store it for future use (e.g. to pass data to the core or trigger another action). Normally the names of such classes end with Context (e.g. BundleContext, PluginContext, etc.).
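A rough sketch of that context pattern; PluginContext, Plugin and sendToCore are made-up names rather than any real API, and each interface would live in its own file in your API package:

    // PluginContext.java: handed to every plugin by the core.
    public interface PluginContext {
        // A plugin calls this to pass data back to the core.
        void sendToCore(String key, Object value);
    }

    // Plugin.java: implemented by plugin authors.
    public interface Plugin {
        // The core instantiates the plugin and gives it the context; the plugin
        // keeps the reference for later callbacks.
        void init(PluginContext context);
    }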
Another pattern is to use a sort of Mediator class: a class with some static methods that plugins can use to send data to the core or trigger actions. I don't like it and it's not a very clean solution, but it makes it much easier for plugin developers to access the API, as they don't need to store the context instance and respect its life cycle. This pattern is used widely in the IntelliJ IDEA architecture.
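A rough sketch of that mediator idea, reusing the hypothetical PluginContext from the previous example (again, all names are illustrative):

    // CoreMediator.java: a static facade plugins can call without holding a
    // context reference themselves.
    public final class CoreMediator {

        private static volatile PluginContext context; // installed by the core at startup

        private CoreMediator() {
        }

        public static void install(PluginContext ctx) {
            context = ctx;
        }

        // Plugins can call this from anywhere to push data to the core.
        public static void sendToCore(String key, Object value) {
            context.sendToCore(key, value);
        }
    }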
As you're developing a plugin-based system, I highly recommend taking a look at the OSGi architecture and APIs. It can be helpful in this regard.

Designing java project for monoliths and microservices at same time

I would like to know how you divide project modules in Java for a monolith, with the possibility of transforming modules into microservices later.
My personal naming looks like this:
com.company.shopapp.product
...product.domain (ddd, services, repositories, entities, aggregates, command handlers - everything with package scope)
...product.api (everything with public scope)
...product.controller (CQRS endpoints for commands from the web perspective - package scope)
...product.query (CQRS - package scope)
com.company.shopapp.sales
- domain
- api
- controller
- query
What we have here is basically product management context and sales context as packages.
Modules communicate with each other using public interfaces (the api package) only. In my project I use "..api.ProductFacade" to centralize communication points.
When my "sales" module grows, I will turn it into a microservice by implementing the "..api.ProductFacade" interface as a REST or SOAP client, and on the other side I will create an Endpoint/RestController based on the ProductFacade interface.
The package "com.company.shopapp.product.api" will be extracted into a separate library and added to both projects.
Edit:
I can achieve this out of the box using the Feign library.
https://cloud.spring.io/spring-cloud-netflix/multi/multi_spring-cloud-feign.html#spring-cloud-feign-inheritance
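For illustration, a hedged sketch of what that inheritance looks like. ProductFacade matches the name above, but ProductDto, the endpoint path and the service name "product-service" are placeholders I invented, and each type would live in its own file/module:

    // --- shared com.company.shopapp.product.api module ---
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestMethod;

    public interface ProductFacade {
        @RequestMapping(method = RequestMethod.GET, value = "/products/{id}")
        ProductDto getProduct(@PathVariable("id") long id);
    }

    public class ProductDto {
        public long id;
        public String name;
    }

    // --- sales service: the facade becomes a REST client ---
    // (the import is org.springframework.cloud.netflix.feign.FeignClient or
    //  org.springframework.cloud.openfeign.FeignClient, depending on the version)
    @FeignClient("product-service")
    public interface ProductClient extends ProductFacade {
    }

    // --- product service: the same contract backs the REST controller ---
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class ProductController implements ProductFacade {
        @Override
        public ProductDto getProduct(long id) {
            ProductDto dto = new ProductDto();
            dto.id = id;
            dto.name = "example";
            return dto;
        }
    }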
The whole idea feels nice, but maybe you have a better way to design the project and ensure that breaking it into microservices will not break the whole application.
I think your module structure is good. But I would suggest you create a real 'multi module' project (link). This way, using code from another module will generate a compile-error. This will help you to keep your good intentions!
In order to do this, you'll have to split each module into a private module (implementations) and a public module (the api, only interfaces). By doing this, you don't need an 'api' package.
An implementation module can depend on any public module, but not on a private module.
If you wire your application together in the private module, with dependency injection, the private modules will have no 'internal' dependencies!
The private modules will not have any 'compile-time' dependencies, only 'runtime' dependencies.
Here's a quick module dependency graph:
I hope you find this useful!
Edit:
You will need an extra module only to bootstrap the application!
TLDR: Think components and modules separately and establish their "touch points"
Modules, as in your example, look like a cross-cutting structure, which corresponds well enough to recommended microservice practice. So they can all be parts of a single microservice. And if you are going to use DDD, then you'll want to include the bounded context name in your package path.
In my own source code I usually separate (at the top level) modules like config (to load and parse, well, config), functional for the functional core, domain model, operational for managing concurrency, the Akka actor structure, monitoring and so on, and adapters, where all the API, DB and MQ code lives. And, finally, an app module, where everything is launched and interfaces are bound to implementations. Also, you usually have some utils or commons module for lower-level boilerplate, algorithms and so on.
In some architecture schools there is explicit separation between modules and components. While the former are parts of the source code structure, the latter are runtime units, which consume resources and live in their specific way.
In your case microservices correspond to such components. These components can be run in the same JVM - and you get a monolith. Or they can be run in a separate JVM on a (maybe) separate host. Then you call them microservices.
So, you need to:
make each component's source code autonomous so that it could be launched in a separate runtime space (like classloader, thread, threadpool, actor system subtree). Hence, you need some launcher to bring it to life. Then you'll be able to call it from your public static void main(...).
introduce some modules in your code that would hold semantics of an individual component each. So that you could understand a component's scope from the code.
abstract communication between components, so that you could use adapters (source code modules) to talk over a network, or to use intra-JVM mechanisms like procedure call or Akka's message passing.
I should note that on the lower levels you can use common source code modules across your components, so they can have some intersections in code. But at the higher levels the source code will be distinct, so you can split it into modules according to components.
You can use Akka and run each of your components in a supervision subtree, where the subtree's supervisor is your component's main actor. Then that main actor definition would be your component's main module. If you need to let components communicate, you should pass corresponding ActorRefs to adapters as a config param.
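A rough sketch of that wiring, using the Akka classic Java API; the component and adapter names are illustrative only:

    import akka.actor.AbstractActor;
    import akka.actor.ActorRef;
    import akka.actor.ActorSystem;
    import akka.actor.Props;

    public class Launcher {

        // A component's "main" actor: the root of its supervision subtree.
        static class ProductComponent extends AbstractActor {
            @Override
            public Receive createReceive() {
                return receiveBuilder()
                        .match(String.class, id -> getSender().tell("ok: " + id, getSelf()))
                        .build();
            }
        }

        // An adapter in another component receives the ActorRef as a config param;
        // later it can be swapped for an HTTP client when components are split apart.
        static class SalesAdapter {
            private final ActorRef product;

            SalesAdapter(ActorRef product) {
                this.product = product;
            }

            void requestProduct(String id) {
                product.tell(id, ActorRef.noSender());
            }
        }

        public static void main(String[] args) {
            ActorSystem system = ActorSystem.create("monolith");
            ActorRef product = system.actorOf(Props.create(ProductComponent.class), "product");
            new SalesAdapter(product).requestProduct("42");
            system.terminate();
        }
    }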
You mention centralizing communication points, but in my opinion, if you stick to the microservices paradigm and a high level of autonomy for your components, then for every API call somebody has to own the contract. Enter the different DDD bounded context interaction patterns. If you leave it in some centrally managed module, which every component must use, then that's a case of API governance. As long as you are the only maintainer, that may be convenient. But when different developers (or even teams) take over their parts, you'll need to make this decision once again under the new conditions.
When later you take components apart - then you'll pass URL to adapters instead of ActorRefs.
Microservices are grouped by functionality and degree of connectivity.
I used this approach:
com.company.product:
    possible big service:
        config
        services
        domain
        etc
    possible second big service:
        config
        services
        domain
        etc
    config
    services   // which will likely never be separated
    domain     // common domain
    etc
When you split the project, you analyze the new common dependencies visible from the packages, extract a common library, copy the project for each microservice, delete unnecessary code, and perhaps change service implementations (for example, if "possible big service" uses "possible second big service"), configurations and the build.
"Big" in this context means a full-fledged functional implementation of something that could be horizontally scaled, or that needs to be a microservice for other reasons.

Dropwizard: handling multiple dropwizard instances

As I'm developing micro-services using Dropwizard I'm trying to find a balance between having many resources on one running instance/application of Dropwizard versus many instances.
For example, I have a project-A with 3 resources. In another project-B I would like to use one of the resources from project-A. The resource in common is related to user data.
Now I have options like :
make an HTTP call to the user resource in project-A from project-B; I can use the Dropwizard client approach here
as the user resource is common, I can take it out of project-A into, say, project-C; then I need to create client code in both project-A and project-B
I can extract a JAR containing the user code and use it in project-B; this will avoid making HTTP calls
Another point on which I would like expert opinion is how to balance/minimize the network calls associated with communication between different microservice instances. In general, should one use HTTP to communicate between different instances? Or can some other inter-process communication approach be used for performance, particularly if different instances are on the same system?
I feel this could be a common problem/confusion for newcomers in the world of microservices, and hence would like to know about any general guidelines or best practices.
many thanks
Pradeep
make an HTTP call to the user resource in project-A from project-B; I can use the Dropwizard client approach here
I would not pursue this option if I were you. It's going to slow down your service unnecessarily, create potential logging headaches, and just feels wrong. The only time this might make sense is when the code is out of your control (but even then there's probably a better solution).
as the user resource is common, I can take it out of project-A into, say, project-C; then I need to create client code in both project-A and project-B
I can extract a JAR containing the user code and use it in project-B; this will avoid making HTTP calls
It sounds like project A and project B are logically different units with some common dependencies. It might make sense to consider a multi-module project (or a multi-module Maven project if you're using Maven). You could have a module containing any common code (and resources) that gets referenced by separate project modules. This is where Maven really excels since it can manage all these dependencies for you. It's like a combination of the last two options you listed.
One of the main advantages of micro-services is the opportunity to release and deploy each of them separately. Whatever option you choose, make sure you don't lose this property.
Another property of a micro-service should be that it has only one responsibility. So it is all about finding the right boundaries for your services (in DDD-terms 'bounded contexts'), and indeed it is not easy to find the right boundaries. It is a balancing act.
For instance in your theoretical case:
If the communication between A and C will be very chatty, then it is not a great idea to extract C.
If A and C have a different lifecycle (business-wise), then it is a good idea to extract C.
That's essentially a design choice: are you ready to trade the simplicity of each of your small services against the complexity of having to orchestrate them and the resulting overall latency?
If you choose the small service approach, you could stick to the documentation guidelines at http://dropwizard.io/manual/core.html#organizing-your-project : 1 project with 3 modules for api (that can be referenced from consumers), application and the optional client (also potentially used in consumers)
Other questions you will have to answer:
- each of your services will be hosted in a separate SCM repository... or not
- each of your services could (should?) have its own version
If you feel that "user" is a bounded context of its own, covering user management such as registration and authentication, it can certainly be a separate microservice. However, you should invoke the user API from a single API gateway, convert the result to a JWT token, and pass it on to your other APIs in a header.
In another case, if your business use case requires invoking multiple microservices, that logic (orchestration) should be developed in a composite service layer.
Regarding inter-microservice communication: having services talk to each other through API calls takes you back to point-to-point communication, which introduces a lot of complexity and is difficult to manage in a large project.
As per bounded context theory, no transaction should span more than one microservice. However, in real-world scenarios I think we still have dependencies, at least for validating reference data. For example, an order service needs to validate product IDs. The best approach I can think of here is eventing between microservices, so they feed each other with the reference data. You can try event sourcing for generating business events and async I/O for publish/subscribe.
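A very rough sketch of that eventing idea; all names are illustrative, and a real setup would sit on a broker such as Kafka or RabbitMQ rather than a plain Java interface:

    // ProductEvents.java: the product service publishes reference-data changes,
    // the order service subscribes and keeps a local copy of valid product IDs.
    public interface ProductEvents {

        final class ProductChanged {
            public final String productId;
            public final boolean active;

            public ProductChanged(String productId, boolean active) {
                this.productId = productId;
                this.active = active;
            }
        }

        // Implemented by the messaging adapter in the product service.
        void publish(ProductChanged event);

        // Used by the order service to keep its local reference data fresh.
        void subscribe(java.util.function.Consumer<ProductChanged> listener);
    }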
Thanks,
Amit

How to use different versions of a class in the same application?

I'm currently working on a Java application which should have the capability to use different versions of a class at the same time (because of multi-tenancy support). I was wondering, is there any good approach to managing this? My basic approach is to have an interface, let's say Car, and implement the different versions as CarV1, CarV2, and so on. Every version gets its own class.
My approach is kind of weird, I think. I didn't find any literature on this topic, but then I don't actually know what I should search for.
The interface idea is prudent. Combine it with a factory that can produce the required implementation instance depending on some external input, e.g. the tenant id. If you don't need to support multiple tenants in the same running instance of the application, you could also use something like the ServiceLoader from the JDK, which allows a file-based configuration approach.
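A minimal sketch of such a factory, assuming the Car/CarV1/CarV2 names from your question and hypothetical tenant ids:

    import java.util.Map;
    import java.util.function.Supplier;

    public class CarFactory {

        // The tenant id selects which implementation of Car is produced.
        private static final Map<String, Supplier<Car>> VERSIONS = Map.of(
                "tenant-a", CarV1::new,
                "tenant-b", CarV2::new);

        public Car createFor(String tenantId) {
            Supplier<Car> supplier = VERSIONS.get(tenantId);
            if (supplier == null) {
                throw new IllegalArgumentException("Unknown tenant: " + tenantId);
            }
            return supplier.get();
        }
    }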
If you are running in an application server, consider just firing up multiple instances, each configured for a different client. The server will then take care of the separation of instances, just fine.
Otherwise, if you really think you need multiple implementations at the same time (at runtime) in a non-Java EE application, this is a tricky problem. Maybe you want to take a look at OSGi containers, which provide features for having multiple versions of a class. However, an approach like this adds significant complexity if you are not already familiar with it.
In theory you can handle this using multiple class loaders, as JBoss does, for example.
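For illustration only, here is roughly what the class-loader approach looks like (and a hint of why it gets tricky); the jar paths and the com.example.Car class name are made up:

    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Paths;

    public class TenantClassLoading {
        public static void main(String[] args) throws Exception {
            URL v1 = Paths.get("lib/car-v1.jar").toUri().toURL();
            URL v2 = Paths.get("lib/car-v2.jar").toUri().toURL();

            // Two loaders, two independent versions of the "same" class.
            try (URLClassLoader tenantA = new URLClassLoader(new URL[]{v1});
                 URLClassLoader tenantB = new URLClassLoader(new URL[]{v2})) {
                Class<?> carA = tenantA.loadClass("com.example.Car");
                Class<?> carB = tenantB.loadClass("com.example.Car");
                // Same name, different classes: instances of one are not
                // assignable to the other, which is where the pain starts.
                System.out.println(carA == carB); // false
            }
        }
    }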
BUT: I would strongly advise against implementing this yourself. This is a rather complicated matter and easily gotten wrong. If you are talking about a web application, you can instead create one web app instance per tenant. If you are working on a stand-alone app, you should check whether running one instance per tenant might be feasible.

How to modularize a JSF/Facelets/Spring application with OSGi?

I'm working with very large JSF/Facelets applications which use Spring for DI/bean management.
My applications have modular structure and I'm currently looking for approaches to standardize the modularization.
My goal is to compose a web application from a number of modules (possibly depending on each other). Each module may contain the following:
Classes;
Static resources (images, CSS, scripts);
Facelet templates;
Managed beans - Spring application contexts, with request, session and application-scoped beans (alternative is JSF managed beans);
Servlet API stuff - servlets, filters, listeners (this is optional).
What I'd like to avoid (almost at all costs) is the need to copy or extract module resources (like Facelets templates) to the WAR or to extend the web.xml for module's servlets, filters, etc. It must be enough to add the module (JAR, bundle, artifact, ...) to the web application (WEB-INF/lib, bundles, plugins, ...) to extend the web application with this module.
Currently I solve this task with a custom modularization solution which is heavily based on using classpath resources:
Special resources servlet serves static resources from classpath resources (JARs).
Special Facelets resource resolver allows loading Facelet templates from classpath resources.
Spring loads application contexts via the pattern classpath*:com/acme/foo/module/applicationContext.xml - this loads application contexts defined in module JARs (see the snippet after this list).
Finally, a pair of delegating servlets and filters delegate request processing to the servlets and filters configured in Spring application contexts from modules.
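For illustration, the third point above boils down to something like this hedged snippet (the context file path is the one quoted above; everything else is illustrative):

    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class ModuleContextLoader {
        public static void main(String[] args) {
            // The classpath*: prefix makes Spring aggregate every matching
            // applicationContext.xml contributed by module JARs on the classpath.
            ClassPathXmlApplicationContext ctx = new ClassPathXmlApplicationContext(
                    "classpath*:com/acme/foo/module/applicationContext.xml");
            System.out.println(ctx.getBeanDefinitionCount() + " bean definitions loaded from modules");
            ctx.close();
        }
    }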
In the last few days I have read a lot about OSGi, and I was considering how (and whether) I could use OSGi as a standardized modularization approach. I was thinking about how the individual tasks could be solved with OSGi:
Static resources - OSGi bundles which want to export static resources register ResourceLoader instances with the bundle context. A central ResourceServlet uses these resource loaders to load resources from bundles (see the sketch after this list).
Facelet templates - similar to above, a central ResourceResolver uses services registered by bundles.
Managed beans - I have no idea how to use an expression like #{myBean.property} if myBean is defined in one of the bundles.
Servlet API stuff - use something like WebExtender/Pax Web to register servlets, filters and so on.
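Here is a hedged sketch of the first bullet; ResourceLoader is a hypothetical interface (in a real setup it would live in a shared API bundle), not an OSGi API:

    import java.io.InputStream;

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    public class ModuleActivator implements BundleActivator {

        // In a real setup this interface would live in a shared API bundle so the
        // central ResourceServlet can see it.
        public interface ResourceLoader {
            InputStream open(String path);
        }

        @Override
        public void start(BundleContext context) {
            // Export this bundle's static resources to the central ResourceServlet.
            ResourceLoader loader = path -> getClass().getResourceAsStream("/web-resources" + path);
            context.registerService(ResourceLoader.class, loader, null);
        }

        @Override
        public void stop(BundleContext context) {
            // Services registered by this bundle are unregistered automatically.
        }
    }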
My questions are:
Am I inventing a bicycle here? Are there standard solutions for that? I've found a mentioning of Spring Slices but could not find much documentation about it.
Do you think OSGi is the right technology for the described task?
Is my sketch of an OSGi application more or less correct?
How should managed beans (especially request/session scope) be handled?
I'd be generally grateful for your comments.
What you're aiming to do sounds doable, with a few caveats:
The View Layer: First, your view layer sounds a little overstuffed. There are other ways to modularize JSF components by using custom components that will avoid the headaches involved with trying to create something as dramatic as late-binding managed beans.
The Modules Themselves: Second, your modules don't seem particularly modular. Your first bullet-list makes it sound as if you're trying to create interoperable web apps, rather than modules per se. My idea of a module is that each component has a well-defined, and more or less discrete, purpose. Like how ex underlies vi. If you're going down the OSGi route, then we should define modular like this: Modular, for the sake of this discussion, means that components are hot-swappable -- that is, they can be added and removed without breaking the app.
Dependencies: I'm a little concerned by your description of the modules as "possibly depending on each other." You probably (I hope) already know this, but your dependencies ought to form a directed acyclic graph. Once you introduce a circular dependency, you're asking for a world of hurt in terms of the app's eventual maintainability. One of the biggest weaknesses of OSGi is that it doesn't prevent circular dependencies, so it's up to you to enforce this. Otherwise your dependencies will grow like kudzu and gradually choke the rest of your system's ecosystem.
Servlets: Fuhgeddaboudit. You can't late-bind servlets into a web app, not until the Servlet 3.0 spec is in production (as Pascal pointed out). To launch a separate utility servlet, you'll need to put it into its own app.
OK, so much for the caveats. Let's think about how this might work:
You've defined your own JSF module to do... what, exactly? Let's give it a defined, fairly trivial purpose: a login screen. So you create your login screen, late-bind it using OSGi into your app and... then what? How does the app know the login functionality is there, if you haven't defined it in your .jspx page? How does the app know to navigate to something it can't know is there?
There are ways to get around this using conditional includes and the like (e.g., <c:if #{loginBean.notEmpty}>), but, like you said, things get a little hairy when your managed loginBean exists in another module that may not have even been introduced to the app yet. In fact, you'll get a servlet exception unless that loginBean exists. So what do you do?
You define an API in one of your modules. All the managed beans that you intend to share between modules must be specified as interfaces in this API layer. And all your modules must have default implementations of any of these interfaces that they intend to use. And this API must be shared between all interoperable modules. Then you can use OSGi and Spring to wire together the specified beans with their implementation.
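A small sketch of what that shared API layer could look like; LoginBean and DefaultLoginBean are hypothetical names, not JSF or Spring APIs:

    // LoginBean.java, in the shared API bundle.
    public interface LoginBean {
        boolean isNotEmpty();
        String getUserName();
    }

    // DefaultLoginBean.java, bundled with any module that references the bean, so
    // the EL expression still resolves when the "real" login module is absent.
    public class DefaultLoginBean implements LoginBean {
        @Override
        public boolean isNotEmpty() {
            return false;
        }

        @Override
        public String getUserName() {
            return "";
        }
    }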
I need to take a moment to point out that this is not how I would approach this problem. Not at all. Given something like as simple as a login page, or even as complicated as a stock chart, I'd personally prefer to create a custom JSF component. But if the requirement is "I want my managed beans to be modular (i.e., hot-swappable, etc)," this is the only way I know to make it work. And I'm not even entirely sure it will work. This email exchange suggests that it's a problem that JSF developers have only just started to work on.
I normally consider managed beans to be part of the view layer, and as such I use them only for view logic, and delegate everything else to the service layer. Making managed beans late-binding is, to my mind, promoting them out of the view layer and into the business logic. There's a reason why all those tutorials are so focused on services: because most of the time you want to consider what it would take for your app to run "headless," and how easy it would be to "skin" your view if, for instance, you wanted it to run, with all its functionality, on an Android phone.
But it sounds like a lot of what you're working with is itself view logic -- for instance, the need to swap in a different view template. OSGi/Spring should be able to help, but you'll need something in your app to choose between available implementations: pretty much what OSGi's Service Registry was built to do.
That leaves static resources. You can modularize these, but remember, you'll need to define an interface to retrieve these resources, and you'll need to provide a default implementation so your app doesn't choke if they're absent. If i18n is a consideration, this could be a good way to go. If you wanted to be really adventurous, then you could push your static resources into JNDI. That would make them completely hot-swappable, and save you the pain of trying to resolve which implementation to use programmatically, but there are some downsides: any failed lookup will cause your app to throw a NamingException. And it's overkill. JNDI is normally used in web apps for app configuration.
As for your remaining questions:
Am I inventing a bicycle here? Are there standard solutions for that?
You are, a little. I've seen apps that do this kind of thing, but you seem to have stumbled into a fairly unique set of requirements.
Do you think OSGi is the right technology for the described task?
If you need the modules to be hot-swappable, then your choices are OSGi and the lighter-weight ServiceLocator interface.
Is my sketch of an OSGi application more or less correct?
I can't really tell without knowing more about where your component boundaries are. At the moment, it sounds like you may be pushing OSGi to do more than it is capable of doing.
But don't take my word for it. I found other reading material in these places.
And since you ask about Spring Slices, this should be enough to get you started. You'll need a Git client, and it looks like you'll be training yourself on the app by looking through the source code. And it's very early prototype code.
I am facing the same problems in my current project. In my opinion, OSGi is the best and cleanest solution in terms of standards and future support, but currently you may hit some problems if you try using it in a web application:
there is no well-integrated solution between a web container and the OSGi platform yet.
OSGi may be too much for a custom build web application that is just searching for a simple modularized architecture. I would consider OSGi if my project needs to support third party extensions that are not 100% under our control, if the project needs hot redeployments, strict access rules between plugins, etc.
A custom solution based on class loaders and resource filters seems very appropriate to me.
As an example you can study the Hudson source code or the Java Plug-in Framework (JPF) project (http://jpf.sourceforge.net/).
As for extending the web.xml, we may be lucky with the Servlet 3.0 specification (http://today.java.net/pub/a/today/2008/10/14/introduction-to-servlet-3.html#pluggability-and-extensibility).
The "web module deployment descriptor fragment" (aka web-fragment.xml) introduced by the Servlet 3.0 specification would be nice here. The specification defines it as:
A web fragment is a logical partitioning of the web app in such a way that the frameworks being used within the web app can define all the artifacts without asking developers to edit or add information in the web.xml.
Java EE 6 is maybe not an option for you right now, though. Still, it would be the standardized solution.
Enterprise OSGi is a fairly new domain, so don't expect a solution that directly satisfies your need. That said, one of the things I found missing from Equinox (the OSGi engine behind Eclipse, and hence the one with the largest user base!) is a consistent configuration / DI service. In a recent project we had some similar needs and ended up building a simple configuration OSGi service.
One of the problems inherent to modular applications is DI, as module visibility can prevent class access in some cases. We got around this using a registered buddy policy, which is not ideal but works.
Other than configuration, you can take a look at the recently released Equinox book for guidance on using OSGi as a base for creating modular applications. The examples may be specific to Equinox, but the principles would work with any OSGi framework. Link - http://equinoxosgi.org/
You should look into Spring DM Server (it's being transitioned to Eclipse Virgo, but that hasn't been released yet). There are a lot of good things in the recent OSGi enterprise spec, which has also just been released.
Some of the Spring DM tutorials will help, I'd imagine. But yes, it's possible to have both resources and classes loaded from outside the web bundle using standard modularity. In that, it's a good fit.
As for the session context - it gets handled as you would expect in a session. However, you might run into problems with sharing that session between web bundles, to the extent that I'm not sure it's even possible.
You could also look at having a single web bundle and then using e.g. the Eclipse extension registry to extend the capabilities of your web app.
