Why do we need ibm-web-bnd.xml and ibm-web-ext.xml in an application that we want to run on a WAS server? I found that they contain things like the virtual host, context root, etc., but I want to know why they are required by the WAS server.
First of all, they are not required; they can be generated during installation, for example via the web admin console. However, they can provide some predefined settings or change the default behavior.
The ibm-web-bnd.xml file provides bindings between resource references used in the web module and actual components, like data sources, queues, etc. However, since Java EE 6 you can use the lookup attribute of the @Resource annotation to provide them in code. See some more info about bindings here - Application bindings
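For illustration, a minimal sketch of that Java EE 6 approach (the class and the JNDI name jdbc/OrderDS are made up for the example):
import javax.annotation.Resource;
import javax.sql.DataSource;

public class OrderDao {
    // Binds the reference directly in code instead of ibm-web-bnd.xml;
    // "jdbc/OrderDS" is an assumed JNDI name for the data source.
    @Resource(lookup = "jdbc/OrderDS")
    private DataSource dataSource;
}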
The ibm-web-ext.xml file allows you to configure settings for the web module, e.g. the context root, directory browsing, and JSP engine parameters.
The easiest way to create them is to use WebSphere Developer Tools for Eclipse (a free plugin), which has graphical/text editors for them.
Related
There's a web application and a number of environments in which it works. In each environment it has different settings, like the DB connection and SOAP end-points, which are in turn defined in properties files and accessed in the following way:
config.load(AppProp.class.getClassLoader().getResourceAsStream(
        PROPERTIES_FILE_PATH + PROPERTIES_FILE_NAME));
Thus the WAR files are different for every environment.
What we need is to build a unified WAR file that doesn't contain any configuration and works in any environment (for now, a Tomcat instance), getting its configuration from outside the WAR file.
The answer Java Web Application Configuration Patterns, to my mind, gives a full set of common approaches but with only a few examples. The most attractive one is configuring the JNDI lookup mechanism. As far as I can tell, it allows web applications to be configured separately by their context paths, but I couldn't find simple (step-by-step) instructions either on the Internet or in Tomcat's docs. Unfortunately I cannot spend much time studying this complicated stuff just to meet such a seemingly simple and natural demand :(
I would appreciate links to relevant descriptions or any alternative suggestions for the problem.
If it's a case of simply deploying your WAR in different environments (executed by different OS users), then you can put all your config files in the user's home folder and load them as:
config.load(new FileInputStream(new File(System.getProperty("user.home"), PROPERTIES_FILE_NAME)));
This gives you the isolation and security and makes your WAR completely portable. Ideally though, you should still provide built-in default configuration if that makes sense in your case.
The approach we've taken is based on our existing deployment method, namely to put the WAR files in the filesystem next to Tomcat, and deploy to Tomcat a context.xml pointing to the WAR file.
The context descriptor allows for providing init parameters which are easily accessible in a servlet. We've also done some work on making this work with CDI (for GlassFish and TomEE dependency injection).
If you only have a single WAR file deployed to this Tomcat instance, you can also add init parameters to the default global context XML. These will be global and you can then deploy the WAR file directly. This is very useful during development.
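For illustration, a minimal sketch of reading such an init parameter from a servlet (the parameter name dbUrl is an assumption; it would be declared as a <Parameter> element in the context descriptor):
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ConfigAwareServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Declared as <Parameter name="dbUrl" value="..." /> in the context descriptor
        String dbUrl = getServletContext().getInitParameter("dbUrl");
        resp.getWriter().println("Using DB: " + dbUrl);
    }
}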
I've been using an embedded neo4j server in my project so far.
Now I want to try out the new bolt protocol with a standalone server, however only for my deployed application. For convenience, I still want to use an embedded database when running from IDE (permanent) or when running tests (impermanent).
In order to support this, I've migrated from the Java-based configuration to an ogm.properties file. Depending on the environment I run in, I want to use the file that configures the respective driver/database location.
I have placed a default configuration in the root of my resources folder. However, I am not able to "override" it in other environments.
In order to do that, I placed a different ogm.properties in the root folder of the deployed application. This doesn't seem to work. It is the mechanism I previously used to have different application.properties and logback.xml configurations.
Is this not supported by neo4j-ogm? If not, how can one achieve this? It also isn't (trivially) possible with the Java-based configuration.
I am a bit confused, since this doesn't sound like such an unlikely requirement...
You can use Spring profiles for this, to configure different properties for different environments, and you can look here.
You can use application.properties (spring.profiles.active) to load a different profile, or a runtime argument if you are using Spring Boot with CommandLineRunner.
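A minimal sketch of that idea, assuming Spring is available (the profile names and property file names are illustrative, not taken from the question):
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.context.annotation.PropertySource;

// Each profile pulls in its own OGM settings file; activate one with
// -Dspring.profiles.active=prod (or via application.properties).
@Configuration
@Profile({"dev", "test"})
@PropertySource("classpath:ogm-embedded.properties")
class EmbeddedNeo4jProperties { }

@Configuration
@Profile("prod")
@PropertySource("classpath:ogm-bolt.properties")
class BoltNeo4jProperties { }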
I have a web service that uses Java, REST, Jersey and runs on Tomcat 8. The web service requires access to a database. Depending on where we are in the process, we may be using a test database, the production database or something else. Ideally we would like to be able to set which database to use without requiring a code change and recompile.
The approach we have tried is to have a properties file defining the database parameters and use an environment variable to point to the file. This has proved troublesome: first, we've had a hard time defining system properties on the Tomcat server that we can read from the application; also, it seems like all the files have to be on the classpath, i.e. already configured ahead of time and part of the codebase.
This seems like a fairly common scenario, so I'm sure there is a recommended way to handle situations like this?
Zack Macomber has a point here. Don't enable your app/service to look up its settings dynamically.
Make your build process dynamic instead.
Maven, Gradle and friends all provide simple ways to modify the output depending on build parameters and/or tasks/profiles.
In your code always link to the same file (name). The actual file will then be included based on your task and/or build environment. Test config for tests. Production config for production.
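A minimal sketch of what that looks like in code (the class and the resource name db.properties are assumptions; the build decides which actual file ends up in the artifact):
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class DbConfig {
    public static Properties load() throws IOException {
        Properties props = new Properties();
        // Always the same name; the Maven/Gradle profile or task swaps the packaged file.
        try (InputStream in = DbConfig.class.getResourceAsStream("/db.properties")) {
            props.load(in);
        }
        return props;
    }
}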
In many cases a complete recompilation is not necessary and will therefore be skipped (this depends on your tool, of course).
No code changes at all. Moreover the code will be dumb as hell as it does not need to know anything about context.
Especially when working on something with multiple people, this approach provides the most stable long-term solution. It is customizable for those who need some special, local config and, most importantly, transparent for all who don't need or don't want to know about runtime environment requirements!
We have a similar case. We have created a second web service on the same endpoint (/admin) which we call to set a few configuration parameters. We also have a DB for persisting the configuration once set. To make life easier, we also created a simple UI to set these values. The user configures the values in the UI, the UI calls the /admin web service, and the /admin service sets the configuration in memory (as properties) as well as in the DB. The main web service uses the properties as dynamic configuration.
Note: we use JWT-based authorization to prevent unauthorized access to /admin. But depending upon your needs you can keep it unsecured, use basic HTTP auth or go with something more detailed.
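A rough sketch of the idea, assuming JAX-RS/Jersey (the resource path, in-memory store and names are illustrative; the DB persistence and the JWT check are left out):
import java.util.concurrent.ConcurrentHashMap;
import javax.ws.rs.Consumes;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/admin/config")
public class AdminConfigResource {
    // In-memory view of the configuration; the real service also persists it to a DB.
    private static final ConcurrentHashMap<String, String> CONFIG = new ConcurrentHashMap<>();

    @PUT
    @Path("/{key}")
    @Consumes(MediaType.TEXT_PLAIN)
    public Response set(@PathParam("key") String key, String value) {
        CONFIG.put(key, value);
        return Response.noContent().build();
    }

    // Read side used by the main web service as dynamic configuration.
    public static String get(String key) {
        return CONFIG.get(key);
    }
}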
Not sure if it is wise in this particular case, but it is indeed possible to create a .properties file anywhere on the filesystem and link it into your application by means of a Resources element.
https://tomcat.apache.org/tomcat-8.0-doc/config/resources.html
The Resources element represents all the resources available to the web application. This includes classes, JAR files, HTML, JSPs and any other files that contribute to the web application. Implementations are provided to use directories, JAR files and WARs as the source of these resources and the resources implementation may be extended to provide support for files stored in other forms such as in a database or a versioned repository.
You would need a PreResources element here, linking to a folder, the contents of which will be made available to the application at /WEB-INF/classes.
<Context antiResourceLocking="false" privileged="true" docBase="${catalina.home}/webapps/myapp">
    <Resources className="org.apache.catalina.webresources.StandardRoot">
        <!-- external res folder (contains settings.properties) -->
        <PreResources className="org.apache.catalina.webresources.DirResourceSet"
                      base="/home/whatever/path/config/"
                      webAppMount="/WEB-INF/classes" />
    </Resources>
</Context>
Your application now 'sees' the files in /home/whatever/path/config/ as if they were located at /WEB-INF/classes.
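With that mount in place, the file can be read like any other web-app resource, for example (a short illustrative helper; the class and the file name settings.properties are assumptions):
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
import javax.servlet.ServletContext;

public final class SettingsLoader {
    // The external folder is mounted at /WEB-INF/classes, so
    // ServletContext#getResourceAsStream (and the class loader) can see settings.properties.
    public static Properties load(ServletContext ctx) throws IOException {
        Properties settings = new Properties();
        try (InputStream in = ctx.getResourceAsStream("/WEB-INF/classes/settings.properties")) {
            settings.load(in);
        }
        return settings;
    }
}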
Typically, the Resources element is put inside a Context element. The Context element must be put in a file located at:
$CATALINA_BASE/conf/[enginename]/[hostname]/ROOT.xml
See https://tomcat.apache.org/tomcat-8.0-doc/config/context.html#Defining_a_context
In an existing web application (JSP, Struts), localizations are managed through the JSTL tags fmt:setBundle and fmt:message and .properties files.
I'd like to get rid of the .properties files and use an alternative datasource for localizations.
For my goal I've created custom ResourceBundle and ResourceBundle.Control implementations (details on where the data comes from, XML or database, are out of scope), but I'm wondering how to register and use them in place of the default file-based implementation, so I'm not forced to modify markup code (fmt:message...) across the web application files.
I saw examples that suggest replacing the fmtResourceKey session value, but that is limited to only one bundle and it looks like a "hack".
Any good ideas?
Thanks for your help!
OK, it seems I sorted it out by subclassing/customizing java.util.ResourceBundle, together with implementations of a custom ResourceBundle.Control and ResourceBundleControlProvider (injected through the Service Provider Interface - SPI).
A similar solution is described on this page from Oracle:
https://docs.oracle.com/javase/tutorial/i18n/serviceproviders/resourcebundlecontrolprovider.html
but it was lacking an important hint: "put your JAR inside the VM" (i.e. install it as a JVM extension), since the ResourceBundle.getBundle method internally uses ServiceLoader.loadInstalled, which only searches for custom providers installed inside the Java VM, as stated in the loadInstalled documentation:
This method is intended for use when only installed providers are
desired. The resulting service will only find and load providers that
have been installed into the current Java virtual machine; providers
on the application's class path will be ignored.
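For reference, a bare-bones sketch of such a provider (the class names are illustrative, and DbResourceBundleControl stands in for the custom Control described above; the JAR also needs a META-INF/services/java.util.spi.ResourceBundleControlProvider entry and must be installed as a JVM extension):
import java.util.ResourceBundle;
import java.util.spi.ResourceBundleControlProvider;

public class DbResourceBundleControlProvider implements ResourceBundleControlProvider {
    @Override
    public ResourceBundle.Control getControl(String baseName) {
        // Only take over the bundles served from XML/DB; returning null falls
        // back to the default file-based lookup for everything else.
        if (baseName.startsWith("messages")) {
            return new DbResourceBundleControl(); // custom ResourceBundle.Control, not shown here
        }
        return null;
    }
}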
Thanks!
There is a team that develops an enterprise application with a web interface: Java, Tomcat, Struts, MySQL, REST and LDAP calls to external services, and so on.
All configuration is stored in context.xml, a Tomcat-specific file that contains variables available via the servlet context and objects available via JNDI resources.
Developers have no access to the production and QA platforms (as it should be), so context.xml is managed by the support/sysadmin team.
Each release has a config-notes.txt with instructions like:
please add "userLimit" variable to context.xml with value "123", rename "DB" resource to "fooDB" and add new database connection to our new server (you should know url and credentials) named "barDb"
That is not good.
Here is my idea of how to solve it.
Each release has a special config file with the required variable names, descriptions and default values (if any); even web.xml could be used.
Here is pseudo example:
foo=bar
userLimit=123
barDb=SET_MANUAL(connection to our new server)
And there is a special tool that the support team runs against the deployment artifact.
Look at it (text after ">" is typed by the support guy):
Config for version 123 of artifact "myServer".
Enter your config file location> /opt/tomcat/context/myServer.xml
+"foo" value "bar" -- already exists and would not be changed
+"userLimit" value "123" -- adding new
+"barDb"(connection to our new server) please type> jdbc:mysql:host/db
Saving your file as /opt/tomcat/context/myServer.xml
Your environment is now configured to run myServer-123.
That would give us the ability to deploy the application in any environment and update the configuration if needed.
Do you like my idea? What do you use for environment configuration management? Are there ready-to-use tools for that?
There are plenty of different strategies. All of them are good; it depends on what suits you best.
Build a single artifact and deploy the configs to a separate location. The artifact could have placeholder variables and, on deployment, the config could be read in. Have a look at Spring's property placeholder. It works fantastically for webapps that use Spring and doesn't require getting ops involved.
Have an externalised property config that lives outside of the webapp. Keep the location constant and always read from the property config. Update the config at any stage and a restart will pick up the new values (see the sketch below).
If you are modifying the environment (i.e. the application server being used or user/group permissions), look at using the above methods with Puppet or Chef. Also have a look at managing your config files with these tools.
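A small sketch that combines the first two strategies, assuming Spring (the system property name app.config and the file path are illustrative):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;
import org.springframework.core.io.FileSystemResource;

@Configuration
public class ExternalConfig {
    @Bean
    public static PropertySourcesPlaceholderConfigurer placeholders() {
        PropertySourcesPlaceholderConfigurer pc = new PropertySourcesPlaceholderConfigurer();
        // The location stays constant per environment, e.g. -Dapp.config=/etc/myapp/app.properties;
        // ops edit that file and restart to pick up new values.
        pc.setLocation(new FileSystemResource(System.getProperty("app.config")));
        return pc;
    }
}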
As for whether devs should be given access to prod, it really depends on the company. For smaller companies, where the dev is called every time there is a problem, regardless of whether that problem is server or application related, devs obviously require access to the box.
DevOps is not about giving devs access to the box; it's about giving devs the ability to use infrastructure as a service, the ability to spawn new instances of application X with config Y, and to push their applications into environments without ops. In a large company like ours, what it allows is for devs to manage the application they put on a server. Operations shouldn't care what version is on there; that's our job. Their job is all about keeping the server up and running.
I strongly disagree with your remark that devs shouldn't have access to prod or staging environments. It's this kind of attitude that leads to teams working against each other instead of with each other.
But to answer your question: you are thinking about what is typically called continuous integration ( http://en.wikipedia.org/wiki/Continuous_integration ) and moving towards devops. Ideally you should aim for the magic "1 click automated deployment". The guys from Flickr wrote a lot of blogs (and books) about how they achieved that.
Anyhow, there are a lot of tools in that sector. You may want to have a look at things like Hudson/Jenkins or Puppet/Chef.