Infinispan Unique Cache Manager for deployed Web Applications

I'm working with Infinispan 8.1 and WildFly 10.
I initialize my CacheManager programmatically using these code lines:
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class SessionManager {

    private static DefaultCacheManager cacheManager;

    public void initializeCache() {
        if (cacheManager == null) {
            GlobalConfigurationBuilder gcbLocal = new GlobalConfigurationBuilder();
            ConfigurationBuilder builderLocal = new ConfigurationBuilder();
            builderLocal.clustering().cacheMode(CacheMode.LOCAL);
            cacheManager = new DefaultCacheManager(gcbLocal.build(), builderLocal.build());
            cacheManager.getCache();
        }
    }
}
These lines live in a jar that is imported as a dependency by multiple web applications deployed on my server.
So every time I deploy a new application, the initialize method is invoked and Infinispan tries to create a new DefaultCacheManager, giving me this exception:
ISPN000034: There's already a JMX MBean instance type=CacheManager,name="DefaultCacheManager" already registered under 'org.infinispan' JMX domain. If you want to allow multiple instances configured with same JMX domain enable 'allowDuplicateDomains' attribute in 'globalJmxStatistics' config element
This issue can be resolved simply by adding this line:
gcbLocal.globalJmxStatistics().allowDuplicateDomains(true);
But now the effect is that Infinispan creates a new CacheManager in a separate domain, which means every application gets its own.
My goal is to have just one DefaultCacheManager serving all the web applications deployed on the server, so that if WebApplicationA stores some value in the Infinispan cache, WebApplicationB can read it.
Is this possible? How can I obtain a global cache manager?

Ernest is right - MBean servers are per JVM, not per ClassLoader, so you need to ignore duplicated domains. But what's more interesting - WildFly itself uses Infinispan for session clustering, so the default cache manager might already be running. I strongly recommend using your own cache manager name:
new GlobalConfigurationBuilder().globalJmxStatistics()
        .cacheManagerName(CACHE_NAME)
        .build();
Ernest also suggested using a HotRod server cluster and connecting to it with a HotRod client (which is by far faster than the REST interface). This sounds reasonable for the scenario you described; a minimal client sketch follows.
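For reference, a minimal HotRod client sketch against a standalone Infinispan Server; the host, port and key/value types are assumptions, not values from the question:
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class RemoteSessionStore {

    public static void main(String[] args) {
        // 11222 is the default HotRod port of Infinispan Server; adjust to your setup.
        ConfigurationBuilder cb = new ConfigurationBuilder();
        cb.addServer().host("127.0.0.1").port(11222);

        RemoteCacheManager remoteCacheManager = new RemoteCacheManager(cb.build());
        RemoteCache<String, String> cache = remoteCacheManager.getCache();

        // Any web application talking to the same server sees the same entries.
        cache.put("user-42", "some-session-state");
        System.out.println(cache.get("user-42"));

        remoteCacheManager.stop();
    }
}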

It seems obvious that you're running this code in web modules (.war), or in jars bundled inside WAR files. You cannot share instances across web modules because their class loaders are isolated (and that's good for you).
You have a few options:
Instead of deploying WAR files, build a single EAR with multiple web modules and one EJB that creates and owns the cache manager. Each web module then reaches the cache through that local EJB, with the Infinispan libs placed in ear/lib (see the sketch after this list).
Run Infinispan Server (a standalone WildFly installation dedicated to Infinispan) and change your code to use one of the remote clients:
-- the HotRod client, to connect to it externally (docs here: http://infinispan.org/docs/8.2.x/getting_started/getting_started.html#_using_hot_rod_to_access_an_infinispan_data_grid);
-- the REST client (docs here: http://infinispan.org/docs/8.2.x/user_guide/user_guide.html#_infinispan_rest_server).
Each web module can do this independently.
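A minimal sketch of the EAR approach, assuming a singleton startup EJB; the bean and method names are made up for illustration:
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.Singleton;
import javax.ejb.Startup;

import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

@Singleton
@Startup
public class SharedCacheManagerBean {

    private DefaultCacheManager cacheManager;

    @PostConstruct
    void init() {
        // One embedded cache manager per EAR, created exactly once.
        GlobalConfigurationBuilder gcb = new GlobalConfigurationBuilder();
        ConfigurationBuilder cb = new ConfigurationBuilder();
        cb.clustering().cacheMode(CacheMode.LOCAL);
        cacheManager = new DefaultCacheManager(gcb.build(), cb.build());
    }

    public Cache<String, Object> getCache() {
        return cacheManager.getCache();
    }

    @PreDestroy
    void shutdown() {
        cacheManager.stop();
    }
}
Each web module in the EAR can then inject this bean with @EJB and read or write the same cache instance.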

Related

External URL configuration in microservice

I have multiple microservices which communicate with each other through REST calls.
I have used Spring Boot and Spring REST, and have configured the URLs of the REST endpoints in the application.properties file.
Now the problem is that if the URL of one endpoint changes, I have to manually modify the property files of all the services calling that particular endpoint.
Is there a workaround so that the URLs can be placed in a centralized location, and a modification does not impact the other services using them?
You can use spring-cloud to achieve this. The usual way in spring-cloud is to configure the required properties in a Git repo. Those properties can then be accessed by any microservice you want with minimal configuration. You can refer to the projects in this repo.
limits-services acts as a client that needs certain properties which are configured in spring-cloud-config-server. Hope this helps.
In the case of microservices you can use Spring Cloud Config (Spring Cloud Config, Spring Cloud Config Server). It's very useful and you can update your configuration at runtime.
Spring Cloud Config provides server and client-side support for externalized configuration in a distributed system. With the Config Server you have a central place to manage external properties for applications across all environments. The concepts on both client and server map identically to the Spring Environment and PropertySource abstractions, so they fit very well with Spring applications, but can be used with any application running in any language. As an application moves through the deployment pipeline from dev to test and into production you can manage the configuration between those environments and be certain that applications have everything they need to run when they migrate.
As others have mentioned, you can use Spring Cloud Config Server to load your application configuration remotely. All you need is a Git repository containing your configuration.
Spring Cloud Config supports Git and databases as configuration stores.
The idea is to create a Spring Boot app that provides configuration to the other applications.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

@SpringBootApplication
@EnableConfigServer
public class ConfigServer {
    public static void main(String[] args) {
        SpringApplication.run(ConfigServer.class, args);
    }
}
You can configure the port and point to your Git repository using the spring.cloud.config.server keys:
server.port: 8888
spring.cloud.config.server.git.uri: file://${user.home}/config-repo
On the client side, if you have the Spring Cloud Config client on your classpath, the application will try to connect to a config server running on port 8888 to retrieve its configuration.
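A minimal client-side sketch, assuming the config server runs on localhost:8888 and the client application is called limits-service (both values are examples, not from the question):
# bootstrap.properties of the client service
spring.application.name: limits-service
spring.cloud.config.uri: http://localhost:8888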
More information can be found here.
You may also put the configuration inside a database.
You then need one centralized cache service, used by the other services (it can be packaged as a .jar), that loads the values into a cache class.
On the front-end side you need an update button that refreshes the cache after the URL value is modified in the database, so that all impacted services pick up the new value.
To make this easier, you may also build a standalone UI for updating the configuration rather than editing the database directly.
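A rough sketch of such a cache class, assuming a plain JDBC DataSource and a hypothetical service_config table with cfg_key/cfg_value columns:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.sql.DataSource;

public class ConfigCache {

    private final DataSource dataSource;
    private final Map<String, String> values = new ConcurrentHashMap<>();

    public ConfigCache(DataSource dataSource) {
        this.dataSource = dataSource;
        refresh();
    }

    public String get(String key) {
        return values.get(key);
    }

    // Called by the "update" button/endpoint after a URL is changed in the database.
    public synchronized void refresh() {
        try (Connection c = dataSource.getConnection();
             PreparedStatement ps = c.prepareStatement("SELECT cfg_key, cfg_value FROM service_config");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                values.put(rs.getString("cfg_key"), rs.getString("cfg_value"));
            }
        } catch (Exception e) {
            throw new IllegalStateException("Could not reload configuration", e);
        }
    }
}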
You can use Microconfig.IO to manage your service configuration, and its placeholder functionality to reference configuration values of one service from others. In your case you configure the deploy URL in your server's configuration and put placeholders on it in your clients. This lets you edit the value in only one place, and everyone who depends on it picks it up automatically.

When to put configuration in file.properties or Jndi

For a long time, in many IT departments, I have seen complex processes for managing Java EE application configuration across environments:
- custom tools, with or without a database, to manage replacements in the properties files (unzip the WAR, replace, re-zip the WAR...)
- properties files externalized to some obscure directory on the server (with some process to update them from time to time), sometimes combined with a JNDI configuration...
- Maven profiles and lots of big properties files
But for the database connection everybody uses a JNDI datasource.
Why is this not generalized to all configuration that depends on the environment?
Update: I want to deal with variables other than the datasource; there is no question about the datasource: it is configured in JNDI for Java EE applications. After that, if you want to hack JNDI...
Setting up database connectivity (like user name, password, URL, driver etc.) somewhere in the application server has several advantages over doing it yourself in the WAR:
The app server can be a central point where the DB is configured, and you might have several WARs running on that server sharing a DB. So you need to set it up only once.
The DB settings, especially the credentials (username, password) are stored somewhere in the app server instead of somewhere in the WAR. That can have security implications (for instance, restricting access to that file is easier done than in a WAR archive).
You can set up one JNDI path to retrieve a DataSource instance pointing to the DB and do not need to worry about username and password anymore. If you have multiple app servers (one live system, one test system, several developer machines) with different DB URLs and credentials, then you can just configure that in each app server individually and deploy the WAR files without the need to change DB settings (see below).
The server might provide additional services, like connection pools, container managed transactions, etc. So again, you don't have to do it on your own in the WAR.
This is true for other services provided by the app server as well, for example JavaMail.
There are other cases where you want to configure something that is specific to one web application and does not rely on the environment (the app server), like logging (although that may be set up in the app server, too). In those cases you might prefer static config files, for instance log4j.properties.
I want to illustrate the third bullet point a bit further ...
Suppose you have one WAR in three app servers (developer machine, test server, live server).
Option 1 (DB setup in WAR)
Create a database.properties :
db.url=jdbc:mysql://localhost:3306/localdb
db.user=myusername
db.pass=mysecretpassword
#db.url=jdbc:mysql://10.1.2.3:3306/testdb
#db.user=myusername
#db.pass=mysecretpassword
#db.url=jdbc:mysql://10.2.3.4:3306/livedb
#db.user=myusername
#db.pass=mysecretpassword
Before you deploy it somewhere, you need to check if your settings are pointing to the right DB!
Also, if you check this file in to some version control system, then you might not want to publish your DB username/password to your local machine.
Option 2 (DB setup in App Server)
Imagine you have configured the three servers with their individual DB settings, and each of them registers the DB with the JNDI path java:database/mydb.
Then you can retrieve the DataSource like so:
Context context = new InitialContext();
DataSource dataSource = (DataSource) context.lookup("java:database/mydb");
This works on every app server instance and you can deploy your WAR without the need to modify anything.
Conclusion
By moving the configuration to the app server you'll have the advantage of separating settings depending on the environment from your app code. I would prefer this whenever you have settings involving IP addresses, credentials, etc.
Using a static .properties file on the other hand is simpler to manage. I would prefer this option when dealing with settings that have no dependencies to the environment or are app specific.
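As an aside, the same JNDI approach also works for simple, non-datasource settings. A sketch, where the name config/reportServiceUrl is made up for illustration and would be bound per environment (for instance via an env-entry in web.xml or a server-managed binding):
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class EnvironmentSettings {

    // Hypothetical JNDI name; each app server binds its own value for this entry.
    public static String reportServiceUrl() throws NamingException {
        InitialContext ctx = new InitialContext();
        return (String) ctx.lookup("java:comp/env/config/reportServiceUrl");
    }
}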

Implementing SSO between Jetty9 WebAppContexts

The Jetty 9 application I am developing automatically scans a set of JarFiles for web.xml, then programmatically imports the contained webapps as WebAppContexts. I need to implement single sign-on between the individual webapps, as explained in the following tutorial for Jetty 6: http://docs.codehaus.org/display/JETTY/Single+Sign+On+-+Jetty+HashSSORealm. Unfortunately, HashSSORealm seems to have been removed from Jetty. Are there any viable alternatives for implementing simple SSO?
I did find this post recommending the Fediz jetty plugin, but would prefer to use a native jetty solution if such a thing exists: http://dev.eclipse.org/mhonarc/lists/jetty-users/msg03176.html
Further info:
The central issue seems to be that each WebAppContext must have its own SessionManager, making it impossible for the WebAppContexts to share information with one another even when using the same cookie.
I solved the issue: you simply have to assign the same SessionManager instance to each WebAppContext. It'll look a little something like this, assuming all WebAppContexts are grouped under the /webapps/ context path:
// To be passed to all scanned webapps. Ensures SSO between contexts
SessionManager sessManager = new HashSessionManager();
SessionCookieConfig config = sessManager.getSessionCookieConfig();
config.setPath("/webapps/"); // Ensures all webapps share the same cookie
// Create the Handler (a.k.a the WebAppContext).
App app = new App(deployer, provider, module.getFile().getAbsolutePath());
WebAppContext handler = (WebAppContext)app.getContextHandler(); // getContextHandler does the extraction
// Consolidating all scanned webapps under a single context path allows SSO
handler.setContextPath("/webapps" + handler.getContextPath());
// Cookies need to be shared between webapps for SSO
SessionHandler sessHandler = handler.getSessionHandler();
sessHandler.setSessionManager(sessManager);
If you share the SessionManager across WebAppContexts, then all of those WebAppContexts share exactly the same session instances. The Servlet Spec says that the WebAppContexts should share session ids, not session contents.
Jan

How to deploy the same web application twice on WebLogic 11g?

We have developed a JEE5 web application (WAR) and running it in production under WebLogic 11g (10.3.5).
Now the same application should be deployed as separate applications for different customers (different URLs, different data) on the same WebLogic.
I managed the first part by setting different context roots after deployment for each of them.
But I have yet to make them use different datasources - and since I want to avoid customer specific builds, the persistence.xml is the same for all applications, thus also the persistence unit name.
What is the best setup for this scenario? Am I forced to make separate builds, and thus different WARs, do I have to use separate Managed Servers or Domains within the server, or is there a better way to solve it?
I know this thread is very old, but I am replying so that it may help someone who stumbles on it with the same question.
The latest WebLogic 12.2.1 comes with Multi-tenancy (an add-on, I guess) which can let you run the same application multiple times in a single domain.
Edit: WebLogic 12.2.1 introduced a concept called Partitions. Partitions are both a configuration and a runtime subdivision of a WebLogic domain. In a single WebLogic domain you can create multiple partitions, and each partition has one or more resource groups. Resource groups are logical groupings of WebLogic resources such as data sources, JMS, Java EE apps, etc. For example, to achieve what the original post asked for, we create a Resource Group Template with the web application and the data source as the resources. In the data source configuration we can provide a placeholder variable instead of the actual DB URL. Then we create two partitions that refer to this Resource Group Template (each partition now has a separate web application and data source). Each partition overrides the DB URL property, thereby creating two data sources with the same JNDI name. In each partition we create a virtual host/port so that clients can use it to access the application running in the respective partition.
A better and more detailed information on this can be found in https://blogs.oracle.com/WebLogicServer/entry/domain_partitions_for_multi_tenancy
ServletContextListener.contextInitialized can look at the ServletContext and figure out which deployment is which
in web.xml, define a servlet context listener:
<listener>
<listener-class>com.path.YourServletContextListener</listener-class>
</listener>
and then in YourServletContextListener.java, add a contextInitialized method like this:
public void contextInitialized(ServletContextEvent sce)
{
    ServletContext sc = sce.getServletContext();
    String name = sc.getContextPath();
    ...
}
My thought is that you can use that name to select from multiple data sources that you have configured. Depending on how you've been deployed, you'll make a different database connection and have the correct application's data.
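A sketch of that idea; the listener name and the JNDI naming convention (jdbc/appDS_customerA) are assumptions for illustration only:
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.sql.DataSource;

public class CustomerAwareContextListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent sce) {
        String contextPath = sce.getServletContext().getContextPath(); // e.g. "/customerA"
        String jndiName = "jdbc/appDS" + contextPath.replace("/", "_"); // e.g. "jdbc/appDS_customerA"
        try {
            DataSource ds = (DataSource) new InitialContext().lookup(jndiName);
            sce.getServletContext().setAttribute("appDataSource", ds);
        } catch (NamingException e) {
            throw new IllegalStateException("No datasource configured for " + contextPath, e);
        }
    }

    public void contextDestroyed(ServletContextEvent sce) {
        // nothing to clean up in this sketch
    }
}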
It seems to me from what I saw in the Oracle documentation, that having several domains is the only way to separate data sources with the same persistence unit name - which is bad, since this basically means running two WLS in parallel.
For this reason I decided to go with building individual WAR files (which I tried to avoid initially), to include customer-specific persistence.xml files and specifying customer-specific datasources in the WLS.

Programmatically create datasource for JBoss 4.2.x

Would it be possible to programmatically create a data source in JBoss and still have a valid JNDI entry for the entity manager to use?
Creating the data source is where I am lost; I hope I can use an MBean that runs on start-up to handle this.
This would not be my preferred method, but the application I am working on has a global configuration file hosted on another server that I am supposed to use for configuration.
Update: In this instance I need to create a data source programmatically or change the JDBC URL of an existing datasource. I don't know the DB server URL until runtime.
Rather than poking around in the guts of JBoss in order to do this, I suggest using a 3rd-party connection pool utility, such as Apache Commons DBCP. There are instructions on how to programmatically register a DBCP datasource on JNDI here.
The first two lines of the sample code should be unnecessary, just create the default InitialContext and then rebind the datasource reference into it as described.
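A rough sketch of that approach, assuming Commons DBCP 1.x, a MySQL driver and a made-up JNDI name; note that binding non-serializable objects into JBoss's java: namespace may additionally require its NonSerializableFactory helper:
import javax.naming.InitialContext;
import org.apache.commons.dbcp.BasicDataSource;

public class DataSourceBinder {

    // Hypothetical start-up code (e.g. called from an MBean's start() method);
    // the URL/credentials would come from the external configuration read at runtime.
    public void bind(String jdbcUrl, String user, String password) throws Exception {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver"); // assumption: MySQL
        ds.setUrl(jdbcUrl);
        ds.setUsername(user);
        ds.setPassword(password);

        // Bind it where the entity manager / application code expects to find it.
        new InitialContext().rebind("java:/MyAppDS", ds);
    }
}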
Here's a post that describes how to create a JBoss service archive (SAR) that you can put in your EAR; it will deploy a data source when the EAR is deployed and remove it when the EAR is undeployed.
