We use JNLP applications in our business. Our current process requires manually signing the jars for each release, which inevitably leads to different certificates, expired certificates, and so on.
We ran a proof of concept using Maven to sign an application automatically with the Maven Jarsigner Plugin.
Now, what is the best approach to industrialize this process? I'd like to have the certificate shared among all applications instead of recreating one every time.
In particular:
Is it correct to have one certificate for a bunch of corporate applications, or should I consider having one per application?
Could we store the certificate(s) as dependencies (in our corporate repository) and have both dev and release certificates fetched upon build? Say, the dev certificate for local builds and the release certificate for releases.
What are the flaws of such an approach?
Is there any other/better solution?
Thanks for your answers.
There are many ways to solve the problem, so I can only share my thoughts on the subject.
a) I would assume different releases would be on different branches, so in essence we only deal with one release version at a time
b) I then assume per version, that you have different certificates per environment. The per environment part can be handled using maven profiles (http://maven.apache.org/guides/introduction/introduction-to-profiles.html), so...
Whether to have multiple certificates or a single one is a matter of preference. Since a certificate provides the level of trust between any given user and the given app, it is essentially a judgment of risk versus maintainability.
Risk, in that multiple apps sharing the same certificate means higher exposure, including to malicious exposure, and a breach of one is a breach of all; so it may matter what the certificates guard. Maintainability, in that all apps follow the same update cycle, and a change to one means a change to all.
So the coupling is a bit higher, the risk is higher, and maintenance is simpler. If you were global enterprise Acme Inc., the risk would probably be higher than if you were local enterprise Icme Inc., and if other people's data or money were at stake, that would probably invite the safest option available.
I see no reason why certificates cannot be stored, either in the repository, in some other safe store, or simply lying around. What is more interesting is the private keys, which you can specify as properties and have the dev ones bound in the dev profile and the release ones omitted, so you would have to provide them on the command line.
Assuming you use the Maven Jarsigner Plugin, you could have ${my.keypass} and ${my.keystore}, and then the dev profile with both properties set and the release profile with only the keystore set.
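To make that concrete, here is a rough sketch of what such a POM fragment could look like (the property names, paths, and alias are illustrative, not prescriptive):

```xml
<!-- dev profile active by default; the release profile deliberately omits
     the key password so it must be supplied with -Dmy.keypass=<secret> -->
<profiles>
  <profile>
    <id>dev</id>
    <activation><activeByDefault>true</activeByDefault></activation>
    <properties>
      <my.keystore>${basedir}/certs/dev-keystore.jks</my.keystore>
      <my.keypass>devpassword</my.keypass>
    </properties>
  </profile>
  <profile>
    <id>release</id>
    <properties>
      <my.keystore>/secure/certs/release-keystore.jks</my.keystore>
    </properties>
  </profile>
</profiles>

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jarsigner-plugin</artifactId>
      <configuration>
        <keystore>${my.keystore}</keystore>
        <keypass>${my.keypass}</keypass>
        <alias>corp</alias>
      </configuration>
    </plugin>
  </plugins>
</build>
```

A release build would then look something like `mvn -Prelease -Dmy.keypass=<secret> deploy`, so the release password never lives in the POM.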
Last time I used certificates in a similar manner I had:
- a set of individual components
- in a single repository
- which could be built as a single complete entity.
So sharing the certificates was an easy takeaway. All certificates except the final production one were in the source code repository. The certificate for releases was on a secure server, where we had a batch process that only a few people had access to.
As for security compromises.. I don't think we ever encountered one, but we were prepared :)
Related
Recently we found a jar in our application phoning home. See details: https://github.com/nextgenhealthcare/connect/issues/3431
This is very undesired behaviour.
What would be the best approach to detect this in one of our dependencies?
ps. I can unzip all the jars and scan for HttpClient or UrlConnection, but there are many ways to connect to external systems, and preferably I don't want to reinvent the wheel.
I am aware of the OWASP Dependency-Check, but phoning home is not a CVE per se.
If you scan your jars and they do have network connectivity, then what can you do? You can't recompile the source, as you don't have it. It's a case of finding something you can do nothing about (apart from finding an alternative).
The only way is to firewall your application or network, use containers, and have fine-grained control over what your application talks to. Basically, run your jars with zero trust!
I guess it really boils down to trusting your jar files, and that in turn means trusting the humans behind everything that goes into the jar file (design, coding, build, distribution, maintenance): the whole SDLC.
If you approach the problem with zero trust, you can get the JVM (security manager), the operating system (SELinux/system capabilities/Docker), or the network (firewall/proxy/IDS), or all three, to control and audit access attempts, and either deny or permit each access depending on a policy that you set.
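For the JVM option, a security manager policy can express exactly that: anything not granted is denied. A hedged sketch of such a policy file follows (the paths and hostname are made up; also note that the SecurityManager has been deprecated for removal since JDK 17, so on current JVMs the OS and network layers are the more future-proof choices):

```
// run with: java -Djava.security.manager -Djava.security.policy==app.policy ...
grant codeBase "file:/opt/myapp/lib/-" {
    permission java.util.PropertyPermission "*", "read";
    permission java.io.FilePermission "/opt/myapp/data/-", "read,write";
    // the only outbound connection this policy permits
    permission java.net.SocketPermission "backend.example.com:443", "connect,resolve";
};
```

Any jar in that code base trying to phone home anywhere else would get a SecurityException, and the attempt can be logged and audited.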
Scanning the jars for network calls can be done, but I'm sure that if a jar really wants to obfuscate its network behaviour, it will be able to, especially if it can run shell commands or dynamically load jars itself.
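With that caveat in mind, a crude first-pass scan is still cheap to build: compiled classes keep the names of the classes they reference as readable strings in the constant pool, so grepping the bytes of each .class entry catches the non-obfuscated cases. A sketch (the class name and the suspect list are my own, and the list is far from complete):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

// Naive scanner: flags .class entries whose bytes mention well-known
// networking classes. Obfuscated or reflective calls will slip through,
// so treat a clean result as "no obvious network code", not as proof.
public class JarNetScanner {

    // class names (JVM internal form) that indicate outbound connectivity
    private static final String[] SUSPECTS = {
        "java/net/Socket", "java/net/URL", "java/net/HttpURLConnection",
        "java/nio/channels/SocketChannel", "java/net/http/HttpClient"
    };

    public static List<String> scan(String jarPath) throws IOException {
        List<String> findings = new ArrayList<>();
        try (ZipFile jar = new ZipFile(jarPath)) {
            Enumeration<? extends ZipEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                ZipEntry entry = entries.nextElement();
                if (!entry.getName().endsWith(".class")) continue;
                // constant-pool strings survive as readable bytes in the file
                String body = new String(readAll(jar, entry), StandardCharsets.ISO_8859_1);
                for (String suspect : SUSPECTS) {
                    if (body.contains(suspect)) {
                        findings.add(entry.getName() + " references " + suspect);
                    }
                }
            }
        }
        return findings;
    }

    private static byte[] readAll(ZipFile jar, ZipEntry entry) throws IOException {
        try (InputStream in = jar.getInputStream(entry)) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) > 0; ) out.write(buf, 0, n);
            return out.toByteArray();
        }
    }

    public static void main(String[] args) throws IOException {
        scan(args[0]).forEach(System.out::println);
    }
}
```

Running it over a dependency at least tells you which jars to look at first; a jar with zero hits and observed network traffic is itself a red flag.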
A jar you scan today might not have the same behaviour on the next update: the classic supply-chain attack.
If you don't trust your jars, then you must establish that trust, either through scanning or by auditing the source code.
There are many tools for this. I'm not sure if I'm allowed to recommend a particular product here that I've had success with, so I won't.
What is the best way to store parameters and data for an EE7 application? I have to provide the web applications with information like a member fee or similar data (which may be altered several times a year). The owner of the application should also have a central place where these data are stored, and an application to change them.
Thanks in advance for any input
Franz
This is one question we are currently struggling with as we re-architect some of our back-end systems, and I do agree with the comment from @JB Nizet that it should be stored in the database. However, I will try to add some additional considerations and options to help you make the decision that is right for you. The right option will depend on a few factors.
If you are delivering source code and automation to build and deploy your software, the configuration can be stored in a source code repository (e.g. as YAML or XML) and bundled with your deployable during the build process. This is a bit archaic but certainly a widely adopted practice, and it works well for the most part.
If you are delivering deployable binaries, you have a couple of options.
First one is to have a predetermined place in the file system where your application will look for an "override" configuration file (i.e. home directory of the user used to run your application server). This way you can have your binary deployable file completely separate from your configuration, but you will still need to build some sort of automation and version control for that configuration file so that your customer can roll back versions if/when necessary. This can also be one or many configuration files (i.e. separate files for your app server, vs. the application itself).
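A minimal sketch of that lookup order, assuming a defaults file bundled in the deployable plus an optional override file on disk (the file names, path, and property key are all illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

// Sketch of the "override file in a predetermined place" idea:
// bundled defaults first, then an optional per-machine override wins.
public class AppConfig {

    public static Properties load(Path overrideFile) throws IOException {
        Properties props = new Properties();
        // 1. defaults bundled inside the deployable, if present
        try (InputStream in = AppConfig.class.getResourceAsStream("/defaults.properties")) {
            if (in != null) props.load(in);
        }
        // 2. optional override file, e.g. ~/myapp/app.properties
        if (Files.isReadable(overrideFile)) {
            try (InputStream in = Files.newInputStream(overrideFile)) {
                props.load(in); // later loads win: overrides replace defaults
            }
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        Path override = Paths.get(System.getProperty("user.home"), "myapp", "app.properties");
        Properties cfg = load(override);
        System.out.println("member.fee = " + cfg.getProperty("member.fee", "unset"));
    }
}
```

The deployable stays identical across environments; only the override file differs per machine, which is exactly what makes version control of that file worth the effort.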
The option we are contemplating currently is having a configuration database where all of our applications can query for their own configuration. This can either be a very simple or complex solution depending on your particular needs - for us these are internal applications and we manage the entire lifecycles ourselves, but we have a need to have a central repository since we have tens of services and applications running with a good number of common configuration keys, and updating these keys independently can be error prone.
We are looking at a few different solutions, but I would certainly not store the configuration in our main database because: 1) I don't think SQL is the best repository for configuration, and 2) I believe we can get better performance from NoSQL databases, which can be critical if you need to load some of those configuration keys for every request.
MongoDB and CouchDB both come to mind as good candidates for storing our configuration keys if you need a clearly defined hierarchy for your options, whereas Redis or Memcached are great options if you just need key-value storage for your configuration (faster than document-based too). We will also likely build a small app to help us configure and version the configuration and push changes to existing/active servers, but we haven't spec'd out all the requirements for that.
There are also some OSS solutions that may work for you, although some of them add too much complexity for what we are trying to achieve at this point. If you are using the Spring Framework, take a look at the Spring Cloud Config project; it is very interesting and worth looking into.
This is a very interesting discussion and I am very willing to continue it if you have more questions on how to achieve distributed configurations. Food for thought, here are some of my personal must haves and nice to haves for our new configuration architecture design:
Global configuration per environment (dev,staging,prod)
App specific configuration per environment (dev,staging,prod)
Auto-discovery (auto environment selection depending on requestor)
Access control and versioning
Ability to push updates live to different services
Roger, thanks a lot. Do you have an example for the "predetermined place in the file system"? Does it make sense to use a singleton which reads the configuration file (using the @Startup annotation) and then provides the configuration data? But this does not support a dynamic solution. Kind regards, Franz
Our company is currently using RAD to develop our Java apps, but we're looking to move to Eclipse with the WebSphere Developer Tools. The pilot for our transition is going pretty well, except we're running into a classloader policy issue for new applications that are originally created in Eclipse, not RAD. Our projects that were originally created by RAD are deployed with the correct classloader policy (PARENT_LAST) when published via Eclipse because we originally used the Deployment Descriptor Editor in RAD which set the proper classloader policy in /src/main/application/META-INF/ibmconfig/cells/defaultCell/applications/defaultApp/deployments/defaultApp/deployment.xml. But now with Eclipse & WebSphere Developer Tools, we no longer have the nice Deployment Descriptor Editor UI to create or modify this file for us (apparently it's not included with the WDT plugin).
So, my question then is: what is the best way to go about setting this classloader policy? We still need some new apps to have the classloader policy of PARENT_LAST set when we deploy them to our local servers. Our team has thought about this a bit, and we can see four options at the moment.
Open the Admin Console after every publish and change it. This would be a huge pain, and is pretty much not even a real option.
Change the server profile setting to use a PARENT_LAST classloader policy for all apps. This however is not the case for all the apps at our company, and would not work for all groups.
Run a jython script after every publish to set the classloader policy. This is slightly better than option 1, but not by much.
Manually create a deployment.xml file in the same location as the other apps created by RAD with the same structure as the deployment.xml files created by RAD, and modify it as necessary for each app.
Option 4 seems to be the best of the bunch, but it's still a manual process and somewhat error prone. Even if most of our developers can grok this approach for new apps, it would be most ideal if this were a simple one button click type process.
So given the fact that IBM has omitted the Deployment Descriptor Editor from the WDT plugin it would seem as if option 4 is our only hope, but I'll ask once more, is there any other better way to set a WebSphere classloader policy to PARENT_LAST for an app when that app is created in Eclipse? Any help is appreciated, thanks.
Well, Eclipse is free, while Rational Application Developer costs about $5,000 per year (per developer). The nice Deployment Editor (which wasn't that nice; it tends to include all sorts of things that aren't needed. Who needs that Derby DataSource defined there, anyway?) is one of the things you have to give up for saving tons of cash on an annual basis.
I'm digressing.
Option (1) is a complete no-no. You don't want to rely on manual steps for deployments; you should strive to automate deployments to the extent possible.
Option (2) might do. I am not sure which flavour of WebSphere you're using, but if you're using the Network Deployment edition, then you can design a WebSphere topology that consists of multiple servers and clusters. You could, theoretically, come up with such a topology whereby PARENT_LAST applications run on a specific server (or cluster) and PARENT_FIRST applications run on another server (or cluster).
You may be able to combine option (2) with a technical initiative to have all of your applications work with PARENT_LAST. This is the recommended approach if your application is using popular third-party libraries that WebSphere happens to use as well (for its own internal purposes). For example, if you're using Commons Lang, then you're already recommended to switch to PARENT_LAST because WebSphere uses its own internal copy of Commons Lang that might conflict with yours.
Option (3) - it's of course better than option (1) but isn't necessarily worse than option (2) if you can get your WebSphere topology right.
Option (4) is harder to implement but I believe it's the best approach overall:
It's a one-time setup effort for each EAR (and for each WAR that exists within the EAR).
Once it's done, deployment can easily be automated as no extra steps are needed.
If you're working with a local test environment to test your code, and you're rapidly publishing applications from your workspace into your local test environment, then this approach is the only approach (other than option (2)) that will work for you without extra manual work.
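For reference, the deployment.xml that RAD generates has roughly this shape. The exact xmi attributes vary by WebSphere version, so the safest route is to copy a RAD-generated file into the ibmconfig path mentioned above and adjust the module uri, rather than typing it from scratch:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<appdeployment:Deployment xmi:version="2.0"
    xmlns:xmi="http://www.omg.org/XMI"
    xmlns:appdeployment="http://www.ibm.com/websphere/appserver/schemas/5.0/appdeployment.xmi"
    xmi:id="Deployment_1">
  <deployedObject xmi:type="appdeployment:ApplicationDeployment"
      xmi:id="ApplicationDeployment_1"
      warClassLoaderPolicy="SINGLE">
    <!-- PARENT_LAST: the application's classes win over WebSphere's copies -->
    <classloader xmi:id="Classloader_1" mode="PARENT_LAST"/>
    <modules xmi:type="appdeployment:WebModuleDeployment"
        xmi:id="WebModuleDeployment_1" uri="mywebapp.war">
      <classloader xmi:id="Classloader_2" mode="PARENT_LAST"/>
    </modules>
  </deployedObject>
</appdeployment:Deployment>
```

Once this file is in the EAR, every publish from Eclipse carries the policy along, which is what makes option (4) automatable.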
If none of these works... consider paying $5,000 per year (per user) and get option (5): use IBM's editor. Or, better yet... hire someone to write an Eclipse plugin that will do it for you. It shouldn't take more than a week or two to develop.
Um, neither answer is useful.
Go into the WAS console and pick your application, for example:
Enterprise Applications > my_application_ear > Class loader, and change the "Class loader order" and "WAR class loader policy".
Open the admin console within Eclipse, click Servers >> your server >> scroll down, and under Server Infrastructure >> Java Process Management, select Class loader >> select New. You can change it there.
I have a Java-based server transmitting data from many remote devices to one app via TCP/IP. I need to develop several versions of it. How can I develop and then maintain them without having to code two separate projects? I'm asking not only about that project, but about different approaches in general.
Where the behaviour differs, make the behaviour "data driven" - typically by externalizing the data that drives the behaviour into properties files that are read at runtime/startup.
The goal is to have a single binary whose behaviour varies depending on the properties files found in the runtime environment.
Java supports this pattern through the Properties class, which offers convenient ways of loading properties. In fact, most websites operate this way; for example, the production database user/pass details are never (should never be) in the code. The sysadmins will edit a properties file that is read at startup, and which is protected by the operating system's file permissions.
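A small sketch of the pattern, with made-up file and key names: one binary, behaviour driven by a properties file that sysadmins edit per environment.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Behaviour-driving settings loaded once at startup; editing the file
// changes the server's behaviour without recompiling anything.
public class ServerSettings {
    public final int port;
    public final boolean compressPayloads;

    ServerSettings(Properties p) {
        // defaults apply when a key is absent from the file
        this.port = Integer.parseInt(p.getProperty("listen.port", "9000"));
        this.compressPayloads = Boolean.parseBoolean(p.getProperty("compress.payloads", "false"));
    }

    public static ServerSettings fromFile(Path file) throws IOException {
        Properties p = new Properties();
        try (InputStream in = Files.newInputStream(file)) {
            p.load(in);
        }
        return new ServerSettings(p);
    }
}
```

The file itself would be something like `listen.port=7777` and `compress.payloads=true`, owned by the sysadmin and readable only by the server's user account.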
Other options are to use a database to store the data that drives behaviour.
It can be a very powerful pattern, but it can be abused too, so some discretion is advised.
I think you need to read up on Source Control Management (SCM) and Version Control Systems (VCS).
I would recommend setting up a git or Subversion repository and adding the code initially to trunk and then branching it off to the number of branches (versions you'll be working on).
The idea of different versions is this:
You're developing your code and have it in your SCM's trunk (or otherwise known as a HEAD). At some point you consider the code stable enough for a release. You therefore create a tag (let's call it version 1.0). You cannot (should not) make changes to tags -- they're only there as a marker in time for you. If you have a client who has version 1.0 and reports bugs which you would like to fix, you create a branch based on a copy of your tag. The produced version would (normally) be 1.x (1.1, 1.2, etc). When you're done with your fixes, you tag again and release the new version.
Usually, most of the development happens on your trunk.
When you are ready with certain fixes, or know that certain fixes have already been applied to your trunk, you can merge these changes to other branches, if necessary.
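In git terms, the trunk/tag/branch workflow described above looks roughly like this (repository, tag, and branch names are arbitrary):

```shell
# sketch: release from trunk, branch for fixes, merge back
git init demo && cd demo
git config user.name "Demo" && git config user.email "demo@example.com"
git commit --allow-empty -m "development on trunk"
git tag v1.0                      # release marker: never changes
git checkout -b fixes-1.x v1.0    # maintenance branch based on the tag
git commit --allow-empty -m "bugfix for the 1.0 client"
git tag v1.1                      # patched release
git checkout -                    # back to trunk for day-to-day work
git merge fixes-1.x               # pull the fixes back into trunk
```

In Subversion the same idea uses `svn copy` into tags/ and branches/ directories, but the flow is identical.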
Base any other version on the previous one by reusing the code base, configurations, and any other assets. If several versions must be in place at one time, use configuration management practices. You should probably also consider some routing and client version checks on the server side. This is where 'backward compatibility' comes into play.
The main approach is first to find and extract the code that won't change from one version to another. It is best to maximize this part, sharing as much of the code base as possible to ease maintenance (correcting a bug for one means correcting it for all).
Then it depends on what really changes from one version to another. Ideally, in the main project you can use some abstract classes or interfaces that you then implement for each specific project.
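As a sketch of that last point, with invented names: the shared server code depends only on an interface, and each version supplies its own implementation, so the common code base never forks.

```java
// The shared code programs against this interface only.
interface DeviceProtocol {
    byte[] encode(String payload);
}

// version 1 behaviour: plain ASCII bytes
class V1Protocol implements DeviceProtocol {
    public byte[] encode(String payload) {
        return payload.getBytes(java.nio.charset.StandardCharsets.US_ASCII);
    }
}

// version 2 adds a one-byte length prefix; shared server code is untouched
class V2Protocol implements DeviceProtocol {
    public byte[] encode(String payload) {
        byte[] body = payload.getBytes(java.nio.charset.StandardCharsets.UTF_8);
        byte[] framed = new byte[body.length + 1];
        framed[0] = (byte) body.length;   // assumes payloads under 128 bytes
        System.arraycopy(body, 0, framed, 1, body.length);
        return framed;
    }
}

public class ProtocolFactory {
    // the chosen version could come from configuration rather than code
    public static DeviceProtocol forVersion(int version) {
        return version >= 2 ? new V2Protocol() : new V1Protocol();
    }
}
```

Fixing a bug in the shared server fixes it for both versions at once; only the protocol implementations diverge.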
I am using Maven and Jenkins to manage deployment of my web application. Essentially:
When deploy is triggered, CI box checks the code out of version control.
If the code passes tests, it triggers the Maven Release Plugin to build a versioned war and puts it in our local Nexus repo.
In the same build, it pulls the artifact from Nexus and copies it into Tomcat, triggering Tomcat to re-explode the war.
This works fine, and using this technique I can use Maven to replace the appropriate environment-specific configurations, so long as they are within the project. However, my sysadmin considers it a security risk to have production credentials in VC. Instead, we would prefer to store the production credentials on the production machines that will be using them. I can imagine writing a simple bash script to ssh into the service box and soft-link the conf file onto the classpath, but this seems like a pretty inelegant solution.
Is this reasonable? Is there a better/more standard way of achieving this? Is it actually a security risk to hold production credentials in VC?
You have your conf file on your production server at some location. This location could be a property too.
If there is no specific reason not to load it as a file from disk rather than as a resource from the classpath, you could create a separate Maven profile, production, that would filter the location, replacing it with the file path on your production server.
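A sketch of such a profile, combined with resource filtering so the path is substituted into a bundled properties file at build time (profile ids, paths, and the property key are illustrative):

```xml
<!-- src/main/resources/app.properties would contain a line like:
     config.location=${config.location}
     which filtering replaces with the profile's value at build time -->
<profiles>
  <profile>
    <id>dev</id>
    <activation><activeByDefault>true</activeByDefault></activation>
    <properties>
      <config.location>${basedir}/src/test/resources/dev-credentials.properties</config.location>
    </properties>
  </profile>
  <profile>
    <id>production</id>
    <properties>
      <config.location>/etc/myapp/credentials.properties</config.location>
    </properties>
  </profile>
</profiles>

<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
    </resource>
  </resources>
</build>
```

The war then carries only the path; the credentials file itself lives on the production box, outside version control.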
Yes, it's a security risk to have production credentials in version control. It frees your developers to do pretty much whatever they want to production. Regulations like HIPAA in medicine, PCI for e-commerce, or SOX for public US companies would frown on that, and your sysadmin is reasonable to frown on it as well.
The basic strategy is to externalize this configuration and have the deployment process roll in the environment specific data.
Having that information on the production server itself is an OK, but not great, solution. It's a good fit when you have just one target server; once you have a bunch, there's a maintenance headache. Whenever environment-specific data changes, it has to be updated on every server. You also need to be sure to keep only environment-specific information in there, or else changes developers make in early environments may not be communicated to the sysadmin at deployment time, leading to production deployment errors.
This is where, I think, Hudson lets you down from a continuous-delivery perspective. Some of the commercial tools, including my company's uBuild/AnthillPro, formally track different environments and let the sysadmin securely configure the production credentials and the developers configure the dev credentials within the tool. Likewise, application release automation tools like our uDeploy, which pull builds out of Hudson and deploy them, have this kind of per-environment configuration baked in.
In these scenarios, most of the property / xml files have generic config, and the deployment engine substitutes env. specific data in as it deploys.
Adding a new tool for just this problem is probably overkill, but the basic strategy of externalizing environment-specific info into a central place where it can be looked up at deployment time could work. Since you're a Maven shop, you might consider stashing some of this in your Maven repo in an area locked down for access by operations only. Then pull the latest config for the appropriate environment at deployment time.
You have a range of options here. Consider how things vary by environment, what varies by server, what needs to be secured, what changes over time on the dev side, etc. And please, please, please sit down with your sysadmin and work out a solution together. You each have insight the other doesn't, and the end solution will be better for the cooperation.