Securing passwords in production environment - java

We have a Java web application running on JBoss and Linux. Production database connection parameters come from a configuration file that exists only on the production app servers. That config file is readable only by the user ID that also runs the application (let's call that user appuser), and the only people who can log into production servers and sudo to appuser are members of our Operations team. The production environment itself is firewalled off from all other environments.
We would like to make this more secure. Specifically we would like to prevent the operations team from reading the database connection password and other keys that are currently in the configuration file.
Another factor to keep in mind is that the operations team is responsible for building and deploying the application.
What are our options? The solution needs to support manually restarting the application as well as automatically starting the application if the OS reboots.
Update
The solution I am investigating now (hat tip to Adamski for his suggestion, which roughly translates into step 1):
Write a wrapper executable that is setuid to a user that starts/stops the applications and owns the configuration files and everything in the JBoss directory tree.
Use jarsigner to sign the WAR after it is built. The building of the WAR will be done by development. The setuid wrapper will verify the signature, validating that the WAR has not been tampered with.
Change the deployment process to only deploy the signed WAR. The setuid wrapper can also move the WAR into place in the JBoss deploy directory.
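The verify-then-deploy logic of steps 2 and 3 can be sketched in a small wrapper; all names, paths, and exit codes here are hypothetical, and the production version would need to be a compiled setuid program, since Linux ignores the setuid bit on interpreted scripts:

```shell
# Sketch of the wrapper's verify-then-deploy step. All names and paths
# are hypothetical; the real wrapper would be a compiled setuid binary.
deploy_war() {
    war="$1"
    deploy_dir="$2"

    if [ ! -f "$war" ]; then
        echo "error: WAR not found: $war" >&2
        return 1
    fi

    # jarsigner exits non-zero if the signature does not verify, i.e.
    # if the WAR was modified after development signed it.
    if ! jarsigner -verify -strict "$war" >/dev/null 2>&1; then
        echo "error: signature verification failed: $war" >&2
        return 2
    fi

    # Only a verified WAR is moved into the JBoss deploy directory.
    cp "$war" "$deploy_dir/" && echo "deployed: $(basename "$war")"
}
```

The wrapper, not Operations, then owns the config files and the JBoss tree, so the deploying user never needs read access to the secrets.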

Why not just create a second user for the Operations team to sudo to, which only has a subset of file permissions compared with your application's user ID?
No code changes necessary; nice and simple.

You might find it interesting to see how the Jetty folks have approached this problem:
http://wiki.eclipse.org/Jetty/Howto/Secure_Passwords
This at least ensures that you cannot just read the password directly, but would need some serious effort to recover a human-readable version.
If the Jetty license is compatible with what you want to do, you can just lift their code.

The easy way is to use Unix permissions to control who can read these files. However, sensitive data like passwords should never be stored in plain text. There are a few alternatives. They require some effort, but that's the approach followed by most commercial products.
Store the passwords encrypted on the file system. You can use either Java cryptography or XML encryption to do so.
OR
Store sensitive information such as passwords in a database along with other configuration details and encrypt it using database tools. You will still need to store the database password somewhere on the file system. Oracle provides a wallet to store the password. There are third-party wallets that can do this as well, if your database vendor does not provide one.
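The first alternative can be sketched at the command line with OpenSSL; all file names and the key material below are illustrative, and the key file would need the same appuser-only permissions the config file has today:

```shell
# Keep the DB password encrypted at rest; decrypt only at startup.
# All file names and the key material here are illustrative.
rm -f /tmp/db.pass.enc
printf 'dbPass123' > /tmp/db.pass.plain
printf 'startup-key-material' > /tmp/master.key   # protect like the old config file

# Encrypt; only the ciphertext lives in the config tree afterwards.
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass file:/tmp/master.key \
    -in /tmp/db.pass.plain -out /tmp/db.pass.enc
rm /tmp/db.pass.plain

# At startup, the app (or a wrapper) decrypts the password into memory only.
DB_PASS=$(openssl enc -d -aes-256-cbc -pbkdf2 \
    -pass file:/tmp/master.key -in /tmp/db.pass.enc)
```

Note this only relocates the secret: whoever can read the key file can still recover the password, which is why the wallet approach above matters for the root secret.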

Related

Is it possible to copy WebLogic and its domain from Linux to Windows using some tool?

I searched for cloning or copying WebLogic and its domain, and found two approaches that seem closest to what I'm after:
1. Packing and unpacking a WebLogic domain
2. Creating extension templates
WebLogic 12.2.4 is installed on a Linux server, and I want to copy its configuration and domain and create my own instance with the exact same configuration.
If it's possible, please give me a solution or some clues and keywords to search for.
Also, do I need to change some configs by hand, or does the provided tool do everything?
Is copying a domain different from copying the WebLogic configuration?
Thanks very much.
If you want to create a domain with the same configuration, but on Windows, you should use Weblogic Deploy Tooling.
The first step is to install Oracle Weblogic on your windows machine(s).
The second step is to use discoverDomain.sh to introspect the domain running on Linux.
The previous step will generate a YAML model representing your Linux domain as code; you will then have to customize it with proper values for WebLogic user passwords, data sources, etc.
Once you have the model ready, you can run createDomain.cmd on Windows to create the domain. By the way, if your domain is distributed across several machines, you will have to run pack and unpack after creating the domain with WebLogic Deploy Tooling, because it only works on the node that will host the AdminServer.
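The two steps might look like this; the paths are placeholders and the flags follow the WebLogic Deploy Tooling documentation, so double-check them against your WDT version:

```shell
# On the Linux host: introspect the running domain into a YAML model.
# Paths are placeholders; flags per the WebLogic Deploy Tooling docs.
./discoverDomain.sh \
    -oracle_home /u01/oracle \
    -domain_home /u01/domains/mydomain \
    -model_file mydomain.yaml \
    -archive_file mydomain.zip

# Copy mydomain.yaml and mydomain.zip to Windows, edit passwords and
# data source values in the model, then on the Windows host run:
#   createDomain.cmd -oracle_home C:\oracle -domain_parent C:\domains ^
#       -model_file mydomain.yaml -archive_file mydomain.zip
```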
Here is an example of using WebLogic Deploy Tooling with Ansible to create a domain with SOA: https://github.com/textanalyticsman/ansible-soa-wldt
Yes, it is theoretically possible to copy $DOMAIN_HOME (WebLogic domain) to or from Windows.
However, I would NOT copy the $WL_HOME directory. Here a fresh installation is the only way.
As you mention, the tools recommended by Oracle are pack and unpack, with the assumption that no WebLogic security realm is configured.
Another option would be to create a new domain (preferably with the same name) with the Configuration Wizard and then copy the XML fragments from the old $DOMAIN_HOME/config/* files into the fresh domain. Watch out for the encrypted passwords: don't mix encrypted fields between the old and new domains.
Another option would be to use Windows Subsystem for Linux (WSL2). In this case, you can copy your files 1:1: $JAVA_HOME, $WL_HOME and $DOMAIN_HOME in one go. Your WebLogic Server will start without any problems, except for some DNS name or IP address issues.

How can I set my weblogic's deployment mode to nostage?

My problem is that after every code change I have to build and deploy my Java web application (or at least some parts of it), which takes too much time.
JRebel would do the trick, but my company doesn't have a license for it.
I heard that weblogic's nostage mode can save some time, but how can I configure it?
I've changed my Managed Server's staging mode in the Admin Console, but how can I provide the path to my .wars? Or how can I get this thing to work?
Sorry for my lack of knowledge, but I'm pretty new to this topic.
You have now configured the default staging mode for new deployments; it would probably be easier to just change this during the individual deployments. If you are using the admin console to deploy, it is the section called "Source accessibility".
Basically, in nostage / "I will make the deployment accessible" you tell WebLogic where to find your deployment by passing it a file location, which must be accessible to every targeted server. In the default staging mode (aptly called "stage"), you tell the admin server where to find the files, and the admin server copies them to the managed servers.
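For scripted deployments, the same per-deployment choice can be made with the weblogic.Deployer command-line tool; the host, credentials, and names below are placeholders:

```shell
# Deploy in nostage mode from the command line. Host, credentials,
# target, and paths are placeholders; the WAR path must be readable
# by every targeted managed server.
java weblogic.Deployer \
    -adminurl t3://adminhost:7001 \
    -username weblogic -password welcome1 \
    -deploy -nostage \
    -name myapp \
    -targets ManagedServer1 \
    -source /shared/deployments/myapp.war
```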
Unless your limits are in your bandwidth, I don't think this will save you any time during deployments.

J2EE Cluster: Is there a generic way to handle central configuration?

We develop an application which is normally deployed on a single webserver. Now we check how it runs in a clustered environment, as some customers are using clusters.
The problem is that the app creates a local configuration (in the registry or a file) which does not make any sense in a cluster. The config is changed by the application.
Is there a generic way (like an interface) to provide a central configuration, so the config (file) itself is not duplicated on each node when the app is deployed in a cluster? Any other recommended options (doing it manually with config on a network share, in a database, via some MBean)?
Why generic? It must run on different application servers (like Tomcat, JBoss, WebSphere, WebLogic, ...), so we cannot use a server-specific feature.
Thanks.
Easiest way for central configuration is to put it on the file system. This way you can mount the file system to your OS and make it available to your app server no matter what the brand or version.
We do this for some of our applications: shared libraries and/or properties files that we care about, in our case. We set up either JVM parameters or JNDI environment variables (we are trying to move toward the latter) so we can look up the path to the mounted drive at runtime and load the data from the files.
Works pretty slick for us.
Now if you are writing information, that's a different story, as then you have to worry about how you are running your cluster (is it highly available only? load-balanced?). Is the app running on both cluster nodes as if it were one app, or is it running independently on each node? If so, you might have to worry about concurrent writes. Probably better to go with a database or one of the other solutions mentioned above.
But if all you are doing is reading configuration, then I would opt for the mounted file system as it is simplest.
You may use a library like Commons Configuration and choose an implementation which is cluster-friendly like JDBC or JNDI.
I would consider JDBC and JNDI first; however, if you want your servers to be able to run independently, I would suggest a file distribution system like Subversion/Git/Mercurial, i.e. if your central configuration server is down or unavailable, you don't want production to stop.
A version controlled system provides a history of who made what changes when and controlled releases (and roll back of releases)
One way to avoid the issue of the central server adding another point of failure is to use a database server which you already depend on (assuming you have one), on the basis that if it's not running, you won't be working anyway.

Best practices for deploying Java webapps with minimal downtime?

When deploying a large Java webapp (>100 MB .war), I currently use the following deployment process:
The application .war file is expanded locally on the development machine.
The expanded application is rsynced from the development machine to the live environment.
The app server in the live environment is restarted after the rsync. This step is not strictly needed, but I've found that restarting the application server on deployment avoids "java.lang.OutOfMemoryError: PermGen space" due to frequent class loading.
Good things about this approach:
The rsync minimizes the amount of data sent from the development machine to the live environment. Uploading the entire .war file takes over ten minutes, whereas an rsync takes a couple of seconds.
Bad things about this approach:
While the rsync is running, the application context is restarted, since the files are updated in place. Ideally the restart should happen after the rsync is complete, not while it is still running.
The app server restart causes roughly two minutes of downtime.
I'd like to find a deployment process with the following properties:
Minimal downtime during deployment process.
Minimal time spent uploading the data.
If the deployment process is app server specific, then the app server must be open-source.
Question:
Given the stated requirements, what is the optimal deployment process?
Update:
Since this answer was first written, a better way to deploy war files to tomcat with zero downtime has emerged. In recent versions of tomcat you can include version numbers in your war filenames. So for example, you can deploy the files ROOT##001.war and ROOT##002.war to the same context simultaneously. Everything after the ## is interpreted as a version number by tomcat and not part of the context path. Tomcat will keep all versions of your app running and serve new requests and sessions to the newest version that is fully up while gracefully completing old requests and sessions on the version they started with. Specifying version numbers can also be done via the tomcat manager and even the catalina ant tasks. More info here.
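The naming convention can be sketched as follows; the directory is a placeholder, and the parsing lines only illustrate how Tomcat splits the file name into context path and version:

```shell
# Illustration of Tomcat's "##" parallel-deployment naming convention.
# The webapps directory here is a placeholder.
mkdir -p /tmp/webapps-demo
: > '/tmp/webapps-demo/ROOT##001.war'
: > '/tmp/webapps-demo/ROOT##002.war'

# Tomcat splits each file name into context path and version like this:
name='ROOT##002.war'
base=${name%.war}
context=${base%%##*}   # part before "##": context path ("ROOT" -> "/")
version=${base##*##}   # part after "##": version; the newest fully
                       # started version receives new sessions
```

Dropping `ROOT##002.war` next to a running `ROOT##001.war` is the whole deployment step; Tomcat drains old sessions on 001 while serving new ones from 002.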
Original Answer:
Rsync tends to be ineffective on compressed files, since its delta-transfer algorithm looks for changes in files, and a small change in an uncompressed file can drastically alter the resultant compressed version. For this reason, it might make good sense to rsync an uncompressed war file rather than a compressed version, if network bandwidth proves to be a bottleneck.
What's wrong with using the Tomcat manager application to do your deployments? If you don't want to upload the entire war file directly to the Tomcat manager app from a remote location, you could rsync it (uncompressed for reasons mentioned above) to a placeholder location on the production box, repackage it to a war, and then hand it to the manager locally. There exists a nice ant task that ships with Tomcat allowing you to script deployments using the Tomcat manager app.
There is an additional flaw in your approach that you haven't mentioned: While your application is partially deployed (during an rsync operation), your application could be in an inconsistent state where changed interfaces may be out of sync, new/updated dependencies may be unavailable, etc. Also, depending on how long your rsync job takes, your application may actually restart multiple times. Are you aware that you can and should turn off the listening-for-changed-files-and-restarting behavior in Tomcat? It is actually not recommended for production systems. You can always do a manual or ant scripted restart of your application using the Tomcat manager app.
Your application will be unavailable to users during a restart, of course. But if you're so concerned about availability, you surely have redundant web servers behind a load balancer. When deploying an updated war file, you could temporarily have the load balancer send all requests to other web servers until the deployment is over. Rinse and repeat for your other web servers.
It has been noted that rsync does not work well when pushing changes to a WAR file. The reason for this is that WAR files are essentially ZIP files, and by default are created with compressed member files. Small changes to the member files (before compression) result in large scale differences in the ZIP file, rendering rsync's delta-transfer algorithm ineffective.
One possible solution is to use jar -0 ... to create the original WAR file. The -0 option tells the jar command to not compress the member files when creating the WAR file. Then, when rsync compares the old and new versions of the WAR file, the delta-transfer algorithm should be able to create small diffs. Then arrange that rsync sends the diffs (or original files) in compressed form; e.g. use rsync -z ... or a compressed data stream / transport underneath.
EDIT: Depending on how the WAR file is structured, it may also be necessary to use jar -0 ... to create component JAR files. This would apply to JAR files that are frequently subject to change (or that are simply rebuilt), rather than to stable 3rd party JAR files.
In theory, this procedure should give a significant improvement over sending regular WAR files. In practice I have not tried this, so I cannot promise that it will work.
The downside is that the deployed WAR file will be significantly bigger. This may result in longer webapp startup times, though I suspect that the effect would be marginal.
A different approach entirely would be to look at your WAR file to see if you can identify library JARs that are likely to (almost) never change. Take these JARs out of the WAR file, and deploy them separately into the Tomcat server's common/lib directory; e.g. using rsync.
In any environment where downtime is a consideration, you are surely running some sort of cluster of servers to increase reliability via redundancy. I'd take a host out of the cluster, update it, and then throw it back into the cluster. If you have an update that cannot run in a mixed environment (incompatible schema change required on the db, for example), you are going to have to take the whole site down, at least for a moment. The trick is to bring up replacement processes before dropping the originals.
Using tomcat as an example - you can use CATALINA_BASE to define a directory where all of tomcat's working directories will be found, separate from the executable code. Every time I deploy software, I deploy to a new base directory so that I can have new code resident on disk next to old code. I can then start up another instance of tomcat which points to the new base directory, get everything started up and running, then swap the old process (port number) with the new one in the load balancer.
If I am concerned about preserving session data across the switch, I can set up my system such that every host has a partner to which it replicates session data. I can drop one of those hosts, update it, bring it back up so that it picks the session data back up, and then switch the two hosts. If I've got multiple pairs in the cluster, I can drop half of all pairs, then do a mass switch, or I can do them a pair at a time, depending upon the requirements of the release, requirements of the enterprise, etc. Personally, however, I prefer to just allow end-users to suffer the very occasional loss of an active session rather than deal with trying to upgrade with sessions intact.
It's all a tradeoff between IT infrastructure, release process complexity, and developer effort. If your cluster is big enough and your desire strong enough, it is easy enough to design a system that can be swapped out with no downtime at all for most updates. Large schema changes often force actual downtime, since updated software usually cannot accommodate the old schema, and you probably cannot get away with copying the data to a new db instance, doing the schema update, and then switching the servers to the new db, since you will have missed any data written to the old after the new db was cloned from it. Of course, if you have resources, you can task developers with modifying the new app to use new table names for all tables that are updated, and you can put triggers in place on the live db which will correctly update the new tables with data as it is written to the old tables by the prior version (or maybe use views to emulate one schema from the other). Bring up your new app servers and swap them into the cluster. There are a ton of games you can play in order to minimize downtime if you have the development resources to build them.
Perhaps the most useful mechanism for reducing downtime during software upgrades is to make sure that your app can function in a read-only mode. That will deliver some necessary functionality to your users but leave you with the ability to make system-wide changes that require database modifications and such. Place your app into read-only mode, then clone the data, update schema, bring up new app servers against new db, then switch the load balancer to use the new app servers. Your only downtime is the time required to switch into read-only mode and the time required to modify the config of your load balancer (most of which can handle it without any downtime whatsoever).
My advice is to use rsync with exploded versions but deploy a war file.
Create a temporary folder in the live environment where you'll keep the exploded version of the webapp.
Rsync the exploded version.
After a successful rsync, create a war file in the temporary folder on the live machine.
Replace the old war in the server deploy directory with the new one from the temporary folder.
Replacing the old war with the new one is the recommended approach in a JBoss container (whose web container is based on Tomcat), because it's an atomic and fast operation, and it ensures that when the deployer starts, the entire application will be in a deployed state.
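The staging-and-atomic-replace part of the recipe can be sketched like this; the paths are placeholders, and a dummy file stands in for the `jar -cf` output over the rsynced exploded tree so the sketch is self-contained:

```shell
# Stage the freshly built WAR next to the deploy directory, then move
# it into place in one step. Paths are placeholders; a dummy file
# stands in for the real `jar -cf` output.
STAGE=/tmp/deploy-stage
DEPLOY=/tmp/jboss-deploy          # stands in for JBoss's deploy/ directory
mkdir -p "$STAGE" "$DEPLOY"

echo 'new-war-bytes' > "$STAGE/myapp.war"

# mv within one filesystem is atomic, so the deployment scanner never
# observes a half-written WAR.
mv "$STAGE/myapp.war" "$DEPLOY/myapp.war"
```

For the atomicity to hold, the staging folder must live on the same filesystem as the deploy directory; across filesystems, mv degrades to a copy-then-delete.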
Couldn't you make a local copy of the current web application on the web server, rsync to that directory, and then, perhaps even using symbolic links, point Tomcat to the new deployment in one "go", without much downtime?
Your approach of rsyncing the extracted war is pretty good, and so is the restart, since I believe a production server should not have hot-deployment enabled. So the only downside is the downtime when you need to restart the server, right?
I assume all state of your application is held in the database, so you have no problem with some users working on one app server instance while other users are on another. If so:
Run two app servers: Start up the second app server (which listens on other TCP ports) and deploy your application there. After deployment, update the Apache httpd's configuration (mod_jk or mod_proxy) to point to the second app server.
Gracefully restart the Apache httpd process. This way you will have no downtime, and new users and requests are automatically redirected to the new app server.
If you can make use of the app server's clustering and session replication support, it will even be smooth for users who are currently logged in, as the second app server will resync as soon as it starts. Then, when there are no more accesses to the first server, shut it down.
This is dependent on your application architecture.
One of my applications sits behind a load-balancing proxy, where I perform a staggered deployment - effectively eradicating downtime.
Hot Deploy a Java EAR to Minimize or Eliminate Downtime of an Application on a Server or How to “hot” deploy war dependency in Jboss using Jboss Tools Eclipse plugin might have some options for you.
Deploying to a cluster with no downtime is interesting too.
JavaRebel has hot-code deployment too.
If static files are a big part of your big WAR (100 MB is pretty big), then putting them outside the WAR and deploying them on a web server (e.g. Apache) in front of your application server might speed things up. On top of that, Apache usually does a better job at serving static files than a servlet engine does (even if most of them have made significant progress in that area).
So, instead of producing a big fat WAR, put it on diet and produce:
a big fat ZIP with static files for Apache
a less fat WAR for the servlet engine.
Optionally, go further in the process of making the WAR thinner: if possible, deploy Grails and other JARs that don't change frequently (which is likely the case of most of them) at the application server level.
If you succeed in producing a lighter WAR, I wouldn't bother with rsyncing directories rather than archives.
Strengths of this approach:
The static files can be hot "deployed" on Apache (e.g. use a symbolic link pointing on the current directory, unzip the new files, update the symlink and voilà).
The WAR will be thinner and it will take less time to deploy it.
Weakness of this approach:
There is one more server (the web server), so this adds (a bit) more complexity.
You'll need to change the build scripts (not a big deal IMO).
You'll need to change the rsync logic.
I'm not sure if this answers your question, but I'll just share on the deployment process I use or encounter in the few projects I did.
Similar to you, I don't ever recall doing a full war redeployment or update. Most of the time, my updates are restricted to a few JSP files, maybe a library, some class files. I am able to determine which artifacts are affected, and usually we package those updates in a zip file, along with an update script. I run the update script. The script does the following:
Back up the files that will be overwritten, maybe to a folder named with today's date and time.
Unpackage my files
Stop the application server
Move the files over
Start the application server
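A minimal version of such an update script might look like this; all paths are placeholders, the server stop/start commands are commented out, and fixture files are created inline so the sketch runs as-is:

```shell
# Sketch of the five update-script steps. Paths are placeholders;
# stop/start commands are commented out. Fixtures simulate step 2
# (the unpackaged update) so the sketch is self-contained.
APP_DIR=/tmp/demo-app
UPDATE_DIR=/tmp/demo-update
BACKUP_DIR=/tmp/demo-backups/$(date +%Y%m%d-%H%M%S)
rm -rf "$APP_DIR" "$UPDATE_DIR" /tmp/demo-backups
mkdir -p "$APP_DIR" "$UPDATE_DIR" "$BACKUP_DIR"
echo old > "$APP_DIR/index.jsp"
echo new > "$UPDATE_DIR/index.jsp"   # stands in for the unpackaged files

# 1. Back up the files that are about to be overwritten
for f in "$UPDATE_DIR"/*; do
    name=$(basename "$f")
    [ -f "$APP_DIR/$name" ] && cp "$APP_DIR/$name" "$BACKUP_DIR/"
done

# 3. Stop the application server (placeholder command)
# /etc/init.d/appserver stop

# 4. Move the files over
cp "$UPDATE_DIR"/* "$APP_DIR"/

# 5. Start the application server (placeholder command)
# /etc/init.d/appserver start
```

The timestamped backup folder doubles as a rollback point: copying its contents back reverses the update.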
If downtime is a concern, and it usually is, my projects are usually HA, even if they are not sharing state but using a router that provides sticky session routing.
Another thing I am curious about: why the need for rsync? You should be able to determine the required changes on your staging/development environment, rather than performing delta checks against live. In most cases you would have to tune your rsync to ignore files anyway, like certain property files that define the resources a production server uses, such as the database connection, SMTP server, etc.
I hope this is helpful.
What is your PermSpace set at? I would expect it to grow as well, but it should go down after the old classes are collected (or does the ClassLoader still sit around?).
Thinking out loud: you could rsync to a separate version- or date-named directory. If the container supports symbolic links, could you SIGSTOP the root process, switch over the context's filesystem root via the symbolic link, and then SIGCONT?
As for the early context restarts: all containers have configuration options to disable auto-redeploy on class file or static resource changes. You probably can't disable auto-redeploy on web.xml changes, so this file should be the last one to update. If you disable auto-redeploy and update web.xml last, you'll see the context restart only after the whole update.
We upload the new version of the webapp to a separate directory, then either move to swap it out with the running one, or use symlinks. For example, we have a symlink in the tomcat webapps directory named "myapp", which points to the current webapp named "myapp-1.23". We upload the new webapp to "myapp-1.24". When all is ready, stop the server, remove the symlink and make a new one pointing to the new version, then start the server again.
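The swap can be sketched like this (directories are placeholders; `ln -sfn` collapses the remove-and-recreate into a single step):

```shell
# Symlink swap between versioned webapp directories. Paths are
# placeholders for the tomcat webapps directory described above.
WEBAPPS=/tmp/demo-webapps
rm -rf "$WEBAPPS"
mkdir -p "$WEBAPPS/myapp-1.23" "$WEBAPPS/myapp-1.24"
ln -s myapp-1.23 "$WEBAPPS/myapp"      # current version

# On release: stop the server, repoint the link, start the server.
# -sfn replaces the existing symlink itself rather than descending
# into the directory it points to.
ln -sfn myapp-1.24 "$WEBAPPS/myapp"
```

Because the link is repointed in a single step, the server only ever sees a complete old tree or a complete new tree, never a mix.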
We disable auto-reload on production servers for performance, but even so, having files within the webapp changing in a non-atomic manner can cause issues, as static files or even JSP pages could change in ways that cause broken links or worse.
In practice, the webapps are actually located on a shared storage device, so clustered, load-balanced, and failover servers all have the same code available.
The main drawback for your situation is that the upload will take longer, since your method allows rsync to only transfer modified or added files. You could copy the old webapp folder to the new one first, and rsync to that, if it makes a significant difference, and if it's really an issue.
Tomcat 7 has a nice feature called "parallel deployment" that is designed for this use case.
The gist is that you expand the .war into a directory, either directly under webapps/ or symlinked. Successive versions of the application are in directories named app##version, for example myapp##001 and myapp##002. Tomcat will handle existing sessions going to the old version, and new sessions going to the new version.
The catch is that you have to be very careful with PermGen leaks. This is especially true with Grails that uses a lot of PermGen. VisualVM is your friend.
Just use two or more Tomcat servers with a proxy in front of them. The proxy can be Apache/nginx/HAProxy.
In each proxy server, the incoming and outgoing URLs and ports are configured.
First, copy your war into Tomcat without stopping the service. Once the war is copied, it is automatically deployed by the Tomcat engine.
Note: cross-check that unpackWARs="true" and autoDeploy="true" are set on the "Host" node inside server.xml.
It looks like this:
<Host name="localhost" appBase="webapps"
      unpackWARs="true" autoDeploy="true"
      xmlValidation="false" xmlNamespaceAware="false">
Now watch the Tomcat logs. If there are no errors, the new version is up.
Hit all your APIs for testing.
Then go to your proxy server.
Simply change the backend URL mapping to the new war's name. Since reconfiguring proxy servers like Apache/nginx/HAProxy takes very little time, you will see minimal downtime.
See https://developers.google.com/speed/pagespeed/module/domains for mapping URLs.
If you're using Resin, it has built-in support for web app versioning.
http://www.caucho.com/resin-4.0/admin/deploy.xtp#VersioningandGracefulUpgrades
Update: Its watchdog process can help with PermGen space issues too.
Not a "best practice" but something I just thought of.
How about deploying the webapp through a DVCS such as git?
This way you can let git figure out which files to transfer to the server. You also have a nice way to back out of it if it turns out to be busted, just do a revert!
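A minimal sketch of the idea, with a second clone playing the role of the deployed webapp directory (all paths here are placeholders):

```shell
# Push-style deployment via git: only changed objects cross the wire,
# and `git revert` backs out a bad release. All paths are placeholders.
set -e
ROOT=/tmp/git-deploy-demo
rm -rf "$ROOT"; mkdir -p "$ROOT"

# "Development" repository holding the exploded webapp content
git init -q "$ROOT/dev"
cd "$ROOT/dev"
git config user.email dev@example.com
git config user.name dev
echo v1 > app.txt
git add app.txt && git commit -qm "release v1"

# "Server" clone standing in for the deployed webapp directory
git clone -q "$ROOT/dev" "$ROOT/deployed"

# A new release on the dev side...
echo v2 > app.txt
git commit -qam "release v2"

# ...and on the server, only the delta is transferred
cd "$ROOT/deployed"
git pull -q
```

In practice you would pull over SSH from a central repository and restart the context afterwards, but the transfer and rollback mechanics are the same.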
I wrote a bash script that takes a few parameters and rsyncs the file between servers. Speeds up rsync transfer a lot for larger archives:
https://gist.github.com/3985742

Should I implement source control for j2ee application server configuration files?

For a typical J2EE web application, the datasource connection settings are stored as part of the application server configuration.
Is there a way to version control these configuration details? I want more control on the datasource and other application server config changes.
What is the standard practice for doing this?
Tracking configuration changes to your application server through version control is a good thing to ask for. However, it does imply that all changes are done via scripting instead of the administrative web interface. I recommend
http://www.ibm.com/developerworks/java/library/j-ap01139/index.html?ca=drs-
as a good background information article on this topic.
Update: Just recently, part 2 has been published here: http://www.ibm.com/developerworks/java/library/j-ap02109/index.html?ca=drs-
When working with WebSphere we found the best approach was to script the deployment and place the script under version control plus the response files for each of the target environments.
WebSphere can be tricky, as the directory structure is a mess of files: there often appear to be duplicates, and it's hard to figure out which is the magic file you need to back up/restore. The question of how to go about this should not detract from the need to do it, which is a definite yes.
Our (Spring) apps have a hardcoded jndi name in the spring config file. That way, the same ear can be deployed to dev, qa and prod environments, and you don't have to worry about database connection details.
The app server admins ensure that a datasource is registered against that jndi name, with the connection details as appropriate on each environment.
But how does this let me manage changes to datasource configurations in the application servers? Here's a scenario:
DBAs change the connection password of the database server.
The WebSphere/WebLogic administrator makes corresponding changes to the server configuration through the administration console.
The above change is not version controlled, so there is no clean way of knowing the history of such changes.
The problem is not about how the application should be configured, but about how the configuration changes should be version controlled. Perhaps it sounds like overkill for simple projects, but for some projects, controlling changes like these really becomes a problem.
Any time you ask yourself "should X be in version control" the default answer is "yes".
For a more refined answer, ask yourself this: is the file created by a person (like a source file or a document) or is it generated by another program (like an object file or a distribution PDF)?
Files that are created, and/or maintained, by a human should be under configuration control.
We always keep our app server settings under version control, using WLST (WebLogic Scripting Tool), which is part of the WebLogic Server distribution. The domain configuration is stored in a Jython script, which can easily be executed via the command line and therefore integrates superbly with our build tool, Maven.
Creating a preconfigured running WebLogic domain only requires executing a Maven goal. All those annoying problems of misconfigured JDBC connections or wrong JMS destination parameters are gone. You will always have an app server configuration that matches the source code at a given time. You will never need to remember which app server setting must be applied for a specific version of the project you are working on.
I really recommend this.
I would also like to know if there are similar solutions available for other application servers. As far as I know, there is a way for GlassFish via Ant. How can this be achieved for JBoss?
