I'm trying to find a solution for configuring a server-side Java application such that different users of the system interact with the system as if it were configured differently (Multitenancy). For example, when my application services a request from user1, I wish my application to respond in Klingon, but for all other users I want it to reply in English. (I've picked a deliberately absurd example, to avoid specifics: the important thing is that I want the app to behave differently for different requests).
Ideally there's a generic solution (i.e. one that allows me to add
user-specific overrides to any part of my config without having to change code).
I've had a look at Apache Commons Configuration, which has built-in support for multitenant configuration, but as far as I can tell this is done by combining some base config with some set of overrides. This means that I'd have a config specifying:
application.lang=english
and, say, a user1.properties override file:
application.lang=klingon
Unfortunately it's much easier for our support team if they can see all related configurations in one place, with overrides specified somehow inline, rather than having separate files for base vs. overrides.
I think some combination of Commons Config's multitenancy + something like a Velocity template to describe the conditional elements within the underlying config is roughly what I'm aiming for - Commons Config for the ease of interacting with my configuration, and Velocity for very expressively describing any overrides, in a single configuration, e.g.:
#if ($user=="user1")
application.lang=klingon
#else
application.lang=english
#end
What solutions are people using for this kind of problem?
Is it acceptable for you to code each server operation as in the following?
void op1(String username, ...)
{
    String userScope = getConfigurationScopeForUser(username);
    String language = cfg.lookupString(userScope, "language");
    int fontSize = cfg.lookupInt(userScope, "font_size");
    ... // business logic expressed in terms of language and fontSize
}
(The above pseudocode assumes the name of a user is passed as a parameter, but you might pass it via another mechanism, for example, thread-local storage.)
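For example, here is a minimal sketch of the thread-local alternative (the UserContext class below is a hypothetical helper, not part of Config4*):

public final class UserContext {
    private static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<String>();

    public static void set(String username) { CURRENT_USER.set(username); }
    public static String get() { return CURRENT_USER.get(); }
    public static void clear() { CURRENT_USER.remove(); }
}

The request-handling layer would call UserContext.set(username) when a request arrives and UserContext.clear() when it completes, and op1() could then call UserContext.get() instead of taking a username parameter.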
If the above is acceptable, then Config4* could satisfy your requirements. Using Config4*, the getConfigurationScopeForUser() method used in the above pseudocode can be implemented as follows (this assumes cfg is a Configuration object that has been previously initialized by parsing a configuration file):
String getConfigurationScopeForUser(String username)
{
    if (cfg.type("user", username) == Configuration.CFG_SCOPE) {
        return Configuration.mergeNames("user", username);
    } else {
        return "user.default";
    }
}
Here is a sample configuration file to work with the above. Most users get their configuration from the "user.default" scope, but Mary and John have their own overrides of some of those default values:
user.default {
    language = "English";
    font_size = "12";
    # ... many other configuration settings
}
user.John {
    @copyFrom "user.default";
    language = "Klingon"; # override a default value
}
user.Mary {
    @copyFrom "user.default";
    font_size = "18"; # override a default value
}
If the above sounds like it might meet your needs, then I suggest you read Chapters 2 and 3 of the "Getting Started Guide" to get a good-enough understanding of the Config4* syntax and API to be able to confirm/refute the suitability of Config4* for your needs. You can find that documentation on the Config4* website.
Disclaimer: I am the maintainer of Config4*.
Edit: I am providing more details in response to comments by bacar.
I have not put Config4* in a Maven repository. However, it is trivial to build Config4* with its bundled Ant build file, because Config4* does not have any dependencies on third-party libraries.
Another approach for using Config4* in a server application (prompted by a comment by bacar) is as follows...
Implement each server operation as in the following pseudocode:
void op1(String username, ...)
{
    Configuration cfg = getConfigurationForUser(username);
    String language = cfg.lookupString("settings", "language");
    int fontSize = cfg.lookupInt("settings", "font_size");
    ... // business logic expressed in terms of language and fontSize
}
The getConfigurationForUser() method used above can be implemented as shown in the following pseudocode:
HashMap<String,Configuration> map = new HashMap<String,Configuration>();

synchronized String getConfigurationForUser(String username)
{
    Configuration cfg = map.get(username);
    if (cfg == null) {
        // Create a config object tailored for the user & add to the map
        cfg = Configuration.create();
        cfg.insertString("", "user", username); // in global scope
        cfg.parse("/path/to/file.cfg");
        map.put(username, cfg);
    }
    return cfg;
}
Here is a sample configuration file to work with the above.
user ?= ""; # will be set via insertString()
settings {
    @if (user @in ["John", "Sam", "Jane"]) {
        language = "Klingon";
    } @else {
        language = "English";
    }
    @if (user == "Mary") {
        font_size = "12";
    } @else {
        font_size = "10";
    }
    ... # many other configuration settings
}
The main comments I have on the two approaches are as follows:
The first approach (one Configuration object that contains lots of variables and scopes) is likely to use slightly less memory than the second approach (many Configuration objects, each with a small number of variables). But my guess is that the memory usage of either approach will be measured in KB or tens of KB, and this will be insignificant compared to the overall memory footprint of your server application.
I prefer the first approach because a single Configuration object is initialized just once, and then it is accessed via read-only lookup()-style operations. This means you don't have to worry about synchronizing access to the Configuration object, even if your server application is multi-threaded. In contrast, the second approach requires you to synchronize access to the HashMap if your server application is multi-threaded.
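As an aside, if you do use the second approach in a multi-threaded server, one way to avoid synchronizing the whole lookup method is a ConcurrentHashMap. A rough sketch, reusing the pseudocode above and simply wrapping the checked parse exception:

import java.util.concurrent.ConcurrentHashMap;

private final ConcurrentHashMap<String, Configuration> map = new ConcurrentHashMap<String, Configuration>();

Configuration getConfigurationForUser(String username)
{
    return map.computeIfAbsent(username, name -> {
        Configuration cfg = Configuration.create();
        cfg.insertString("", "user", name); // in global scope
        try {
            cfg.parse("/path/to/file.cfg");
        } catch (Exception ex) { // parse errors are reported via an exception
            throw new RuntimeException(ex);
        }
        return cfg;
    });
}

This does not change the memory or parsing trade-offs discussed here; it only narrows the locking.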
The overhead of a lookup()-style operation is in the order of, say, nanoseconds or microseconds, while the overhead of parsing a configuration file is in the order of, say, milliseconds or tens of milliseconds (depending on the size of the file). The first approach performs that relatively expensive parsing of a configuration file only once, and that is done in the initialization of the application. In contrast, the second approach performs that relatively expensive parsing of a configuration file "N" times (once for each of "N" users), and that repeated expense occurs while the server is processing requests from clients. That performance hit may or may not be an issue for your application.
I think ease of use is more important than ease of implementation. So, if you feel that the second approach will make it easier to maintain the configuration file, then I suggest you use that approach.
In the second approach, you may wonder why I put most of the variables in a named scope (settings) rather than in the global scope along with the "injected" user variable. I did that for a reason that is outside the scope of your question: separating the "injected" variables from the application-visible variables makes it easier to perform schema validation on the application-visible variables.
Normally, user profiles go into a DB and the user must open a session with a login. The user name may go into the HTTP session (or cookies), and on every request the server can get the user name and read the profile from the DB. Sure, the DB can be some config files like joe.properties, jim.properties, admin.properties, etc.
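A rough sketch of that idea, assuming the user name has already been stored in the session and the per-user files live in a known directory (both are assumptions for illustration):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
import javax.servlet.http.HttpServletRequest;

public class UserProfileLoader {
    // Loads <username>.properties for the user stored in the HTTP session.
    public Properties loadProfile(HttpServletRequest request) throws IOException {
        String username = (String) request.getSession().getAttribute("username");
        Properties profile = new Properties();
        try (FileInputStream in = new FileInputStream("/path/to/profiles/" + username + ".properties")) {
            profile.load(in);
        }
        return profile;
    }
}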
At the top of the WebSphere log file, I see a couple of lines:
WebSphere Platform 8.5 blah blah running with process name abc\xyz\pqr and process id 1234
Full server name is abc\xyz\pqr-1234
I would like to get the value pqr shown in the above two lines using Java code in my application that runs on the WebSphere server. I found that I could get the values abc and xyz by doing JNDI lookup, based on this answer to another question:
(new InitialContext()).lookup("thisNode/cell/cellname").toString(); // returns "abc"
(new InitialContext()).lookup("thisNode/nodename").toString(); // returns "xyz"
However, JNDI lookup of "servername" does not return pqr or any of the values above, but something else entirely.
How can I get the value pqr (or the entire value abc\xyz\pqr or abc\xyz\pqr-1234, whichever is possible)? I would prefer to get the value by doing a JNDI lookup rather than by using a WebSphere class like com.ibm.websphere.runtime.ServerName as mentioned here, but if that is not possible I can use any solution that works.
I realize there may be questions about why I need to get the value and perhaps even opinions that it may not be a good practice to get that value etc. However, I have a valid and unavoidable reason for doing that.
Here is a link to a document about how to capture a WebSphere namespace dump, including example output, showing entries such as,
(top)/nodes/outpost/nodename
(top)/nodes/outpost/servers/server1/servername
Have you tried a lookup of the following?
thisNode/servers/thisServer/servername
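That is, something along these lines (untested; it mirrors the lookups already shown in the question):

String serverName = (new InitialContext()).lookup("thisNode/servers/thisServer/servername").toString(); // hopefully "pqr"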
Well, this answer is not a JNDI solution; however, it is a solution to this problem. WebSphere provides the class com.ibm.websphere.runtime.ServerName, which is used for exactly this scenario. It has a bunch of utility methods like:
getDisplayName()
getServerId()
getFullName()
So how do you use this class in your project while still being able to deploy the project to non-WebSphere environments? By checking at runtime whether you are running within WebSphere and, if you are, then invoking methods on ServerName.
In order not to pollute your project with unnecessary dependencies on the WAS runtime, create a new utility jar project and add these dependencies:
com.ibm.ws.runtime-xxxx.jar as a provided dependency (part of WAS or the WAS client)
spring-core-xxxx.jar as a runtime dependency
The rest of the solution is in the following two classes: one that checks for the presence of WebSphere, and one that interacts with it:
import java.util.HashMap;
import java.util.Map;
import org.springframework.util.ClassUtils;

public class WasInfo {
    /**
     * @return a map populated with relevant WebSphere names
     *         if running on WebSphere, or an empty one if not
     */
    public Map<String, String> about() {
        ClassLoader currentClassLoader = this.getClass().getClassLoader();
        boolean isWebsphere = ClassUtils.isPresent("com.ibm.websphere.runtime.ServerName", currentClassLoader);
        if (!isWebsphere) {
            return new HashMap<>();
        }
        WebSphereConfig wc = new WebSphereConfig();
        return wc.resolveServerName();
    }
}

import java.util.HashMap;
import java.util.Map;
import com.ibm.websphere.runtime.ServerName;

public class WebSphereConfig {
    public Map<String, String> resolveServerName() {
        // expecting a 'cell\node\server' pattern, e.g. "abc\xyz\pqr-1234"
        String serverFullName = ServerName.getFullName();
        String serverName = ServerName.getDisplayName();
        Map<String, String> map = new HashMap<>();
        map.put("serverFullName", serverFullName);
        map.put("serverName", serverName);
        String[] segments = serverFullName.split("\\\\");
        if (segments.length == 3) {
            map.put("cellName", segments[0]);
            map.put("nodeName", segments[1]);
        }
        return map;
    }
}
I used Spring's ClassUtils to get rid of some boring code in this example. As an exercise, one could invoke the ServerName methods using reflection. That would remove the need for the ServerName import and make the code even "simpler". But the idea would remain the same.
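For illustration, a rough sketch of that reflection variant (untested; it assumes the ServerName methods listed above are static, as they are used statically in WebSphereConfig):

import java.lang.reflect.Method;

public class WebSphereConfigViaReflection {
    public String resolveServerFullName() throws Exception {
        // Load the WebSphere class by name and invoke the static getFullName() method,
        // so no compile-time import of ServerName is needed.
        Class<?> serverNameClass = Class.forName("com.ibm.websphere.runtime.ServerName");
        Method getFullName = serverNameClass.getMethod("getFullName");
        return (String) getFullName.invoke(null);
    }
}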
I want to create a log statement such as System.out.println("RuleName : " + ruleName); in the IBM ODM rule engine.
Here is what I did:
1- Create a BOM virtual method which is static and takes a parameter named instance whose type is
ilog.rules.engine.IlrRuleInstance:
instance ilog.rules.engine.IlrRuleInstance
2- Create the BOM to XOM mapping with the following:
System.out.println("Log icinde");
String ruleName = "";
if (instance != null )
ruleName = instance.getRuleName();
else
System.out.println("instance null!");
if (ruleName != null) {
System.out.println("RuleName: "+ ruleName);
}
return;
3- Call it in the rule flow as an initial or final action.
utility.logla(ruleInstance);
But when I execute the flow, my log doesn't work: instance is null and ruleName is also null.
How should I configure and set up the logging feature using the BOM? Could you give me an example of it?
Thanks.
You could use Decision Warehouse, which is part of the execution server, to trace each execution. This can include which rules were fired during the execution, but it depends on what filters you apply.
Here is the documentation on DW and how to set it up: http://pic.dhe.ibm.com/infocenter/dmanager/v8r5/topic/com.ibm.wodm.dserver.rules.res.managing/topics/con_res_dw_overview.html
From what I can see from your description, it is because you are calling getRuleName() outside of the context of a rule instance within your rule flow.
If you had a BOM method called from within the action of a rule, you could then call IlrRuleInstance.getRuleName() and it would return the name of the rule (I have done such a thing myself before).
What are you trying to achieve with this logging?
There is a much better way of logging from rules. In your virtual method, pass the name of the rule itself rather than ruleInstance. You can also verbalize your method and use the same verbalization in each rule.
For example:
From BAL:
Log the name of this rule ;
From IRL:
Log(ilog.rules.brl.IlrNameUtil.getBusinessIdentifier(?instance.ruleName));
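On the XOM side, the implementation behind such a virtual method can then be as simple as this sketch (the method name logRuleName is only an example, not an ODM convention):

public static void logRuleName(String ruleName) {
    // Receives "the name of this rule" from the rule action.
    System.out.println("RuleName: " + ruleName);
}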
Another approach is to use the BAL provided above (the name of this rule) within the Rule Flow (Orchestration) for your Rule Application or Module.
Of course this solution should be used only for debugging or troubleshooting scenarios.
Hope this helps.
I'm using a third party software library with a log prototype like this:
runtime.getInstance().log(int logtype, String moduleName, String logtext);
I have a utility library that I want to be library independent, but I also want to be able to log things to the software package from my own classes. This is fine and good, as the text messages are pretty universal, things like "you've passed bad data!" and "blah blah was successful!" Additionally, I've already wrapped the software vendor's logging functionality, so I'm not even worried about conforming to some random API.
What I am worried about (why I'm writing this post) is that there are going to be various different modules throughout my system. So the problem is like:
ModuleFoo extends com.thirdpartyvendor.BaseModule
ModuleBar extends com.thirdpartyvendor.BaseModule
ModuleFoo ---contains instance of---> IndependentDataStructure ---tries to write a log entry to my WrappedLogger ---> but data structure doesn't have a reference to ModuleFoo.
ModuleBar ---contains instance of---> IndependentDataStructure ---tries to write a log entry to my WrappedLogger ---> but data structure doesn't have a reference to ModuleBar.
Currently my system passes a field String moduleName around which quite frankly makes me sick... but I want the log entries to tell me what my module is! How can the logger know whether the IndependentDataStructure instance is working with ModuleFoo and not ModuleBar (or some other module) without IndependentDataStructure containing a reference to a BaseModule (or a String moduleName)?
Logging APIs such as Log4J and SLF4J have the concept of a diagnostic context, a way to store various bits of contextual information in a ThreadLocal map which the log message formatters can access to decorate the messages. Typical uses for this are things like putting the name of the currently authenticated user into log messages in a web application (using a servlet filter to store the username in the MDC for each request), would you be able to use a similar concept in your system?
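For example, a rough sketch with SLF4J's MDC, assuming each module records its own name before invoking the shared data structure (the key "module" and the method names on the module and data structure are assumptions for illustration):

import org.slf4j.MDC;

public class ModuleFoo extends com.thirdpartyvendor.BaseModule {
    private final IndependentDataStructure structure = new IndependentDataStructure();

    public void doWork() {
        MDC.put("module", "ModuleFoo"); // record the active module for this thread
        try {
            structure.process(); // any log call made in here can read MDC.get("module")
        } finally {
            MDC.remove("module"); // avoid leaking the value to later work on this thread
        }
    }
}

Your wrapped logger (or a log pattern such as %X{module}) can then pull the module name from the MDC instead of having it passed around as a String field.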
runtime.getInstance().log(logtype,
    this.getClass().getSimpleName(),
    logtext);
If I've got it right...
EDIT: and for an automated (but a bit slower) way to get the method name:
Thread.currentThread().getStackTrace()[level].getMethodName();
(where level is an integer specifying the number of stack frames through which the log request has been passed)
I have some data triplets that I want to write into some sort of basic OWL ontology. I have triplets like:
Delhi is part of India
or
India is an Asian country
Note that I have relations like "is-a", "part-of", or "related-to". What's the simplest way to build an ontology? Any working example or a reference to an example website would be a great help!
There are a lot of different things mixed up in your question, I strongly suggest you take a bit of time (away from the keyboard!) to think through what you're trying to achieve here.
Firstly, geographic ontologies can get quite complex, and a lot of work has already been done in this area. Probably the obvious starting point is the GeoNames ontology, which gives names to geographic features, including cities like Delhi and countries like India. At the very least you should re-use those names for the places in your application, as that will maximise the chances that your data can be successfully joined with other available linked-data sources.
However, you probably don't want the whole of GeoNames in your application (I'm guessing), so you also need to be clear why you need an ontology at all. A good way to approach this is from the outside of your application: rather than worry about which kind of Jena model to use, start by thinking through ways to complete the sentence "using the ontology, a user of my application will be able to ...". That should then lead you on to establishing some competency questions (see, for example, section 3 of this guide) for your ontology. Once you know what kinds of information you want to represent, and what kinds of queries you need to apply to it, your technology choices will be much clearer. I realise that these applications are typically developed iteratively, and you'll want to try some code out fairly early on, but I still advocate getting your destination more clearly in mind before you start your coding journey.
You imply that you want to use Jena to drive a web site. There are many choices here. Don't be misled by the term semantic web - this actually means bringing web-like qualities to interlinked data sets, rather than putting semantics into human-readable web pages per se. While you can do so, and many people do, you'll need some additional layers in your architecture. We typically use one of two approaches: using Jena with a templating engine, such as Velocity, in a servlets container, or using a Ruby web framework and driving Jena via JRuby. There are many other ways to solve this particular problem: Jena doesn't address web publishing directly, but it can be used within any Java-based web framework.
Finally, regarding namespaces, you should really re-use existing vocabularies, and hence namespaces, where possible. Don't make up new names for things which already have representations on the web of data somewhere. Use GeoNames, or DbPedia, or any of the many other published vocabularies where they fit. If they don't fit, then you should create a new name rather than use an existing name in a non-compatible way. In this case, you should use the web domain of your application (e.g. your company or university) as the basis for the namespace. Ideally, you should publish your ontology at the base URL of the namespace, but this can sometimes be hard to arrange depending on local web policies.
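As a concrete starting point, here is a minimal Jena sketch for one of your triplets (the namespace is made up for illustration; recent Jena releases use the org.apache.jena packages, older ones com.hp.hpl.jena):

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class SimpleGeoModel {
    public static void main(String[] args) {
        String ns = "http://example.org/geo#"; // placeholder namespace; prefer GeoNames/DbPedia URIs where they fit
        Model model = ModelFactory.createDefaultModel();

        Property isPartOf = model.createProperty(ns, "isPartOf");
        Resource india = model.createResource(ns + "India");
        Resource delhi = model.createResource(ns + "Delhi");

        delhi.addProperty(isPartOf, india); // "Delhi is part of India"
        model.write(System.out, "TURTLE");  // serialize the triples
    }
}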
I suggest the OWL API from the University of Manchester. With it you can create your ontology "on the fly" in Java, and with a single method invocation you can serialize it in your preferred format (RDF, Manchester Syntax, etc.) if you need to, or work directly on the in-memory representation. In this way you can rapidly prototype and experiment with your ontology in the context of your program.
For an overview of the library and its main components, I suggest the tutorial (code tutorial) provided by the creator of the library; it covers 90% of the basic needs.
PS: Protégé is based on the OWL API. You can also try it as suggested, but especially in the beginning I preferred to rapidly play with ontologies and switch to an engineering environment like Protégé once my mind was clear enough. In addition, with an external ontology you would need to learn how to navigate it, which IMHO is really not worth it in the very beginning.
Have a look at Stanford's Protege. It's an ontology editor.
You'd just declare a triplet class consisting of a subject, predicate, and object. "is-in" and "is-a" are predicates, so your ontology elements would look like:
"Delhi", "is-in", "India"
"India", "is-in", "Asia"
"India", "is-a", "country"
This doesn't address queries, of course, but given a decent data store (even a database would do) you could start to build a flexible ontology with a decent query mechanism.
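A minimal sketch of such a triplet class (the names are illustrative):

import java.util.ArrayList;
import java.util.List;

public class Triple {
    public final String subject;
    public final String predicate;
    public final String object;

    public Triple(String subject, String predicate, String object) {
        this.subject = subject;
        this.predicate = predicate;
        this.object = object;
    }

    public static void main(String[] args) {
        List<Triple> ontology = new ArrayList<Triple>();
        ontology.add(new Triple("Delhi", "is-in", "India"));
        ontology.add(new Triple("India", "is-in", "Asia"));
        ontology.add(new Triple("India", "is-a", "country"));
    }
}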
JENA is far, far more capable than what this would create, of course; it does provide the semantic query stuff, as well as far better resource definition and resolution. However, it's a lot more involved than a simple triplet structure; it all depends on what you need.
/**
 * These are the Maven dependencies for the OWL API:
 *
 * <dependency>
 *     <groupId>net.sourceforge.owlapi</groupId>
 *     <artifactId>owlapi-api</artifactId>
 * </dependency>
 * <dependency>
 *     <groupId>net.sourceforge.owlapi</groupId>
 *     <artifactId>owlapi-apibinding</artifactId>
 * </dependency>
 *
 * First of all, you need to initialize the ontology:
 */
private OWLDataFactory factory;
private PrefixManager pm;
private OWLOntology ontology;
private String pmString = "#";
private OWLOntologyManager manager;
private OWLReasoner reasoner;
private ShortFormEntityChecker entityChecker;
private BidirectionalShortFormProviderAdapter bidirectionalShortFormProviderAdapter;
private void initializeOntology(String fileContent)
        throws OWLOntologyCreationException {
    InputStream bstream = new ByteArrayInputStream(fileContent.getBytes());
    this.manager = OWLManager.createOWLOntologyManager();
    this.ontology = this.manager.loadOntologyFromOntologyDocument(bstream);
    IRI ontologyIRI = this.ontology.getOntologyID().getOntologyIRI();
    this.pm = new DefaultPrefixManager(ontologyIRI.toString() + this.pmString);
    this.factory = this.manager.getOWLDataFactory();
    ReasonerFactory reasonerFactory = new ReasonerFactory(); // distinct name: the 'factory' field holds the OWLDataFactory
    this.reasoner = reasonerFactory.createReasoner(this.ontology);
    Set<OWLOntology> onts = new HashSet<>();
    onts.add(this.ontology);
    DefaultPrefixManager defaultPrefixManager = new DefaultPrefixManager(this.pm);
    ShortFormProvider shortFormProvider = new ManchesterOWLSyntaxPrefixNameShortFormProvider(defaultPrefixManager);
    this.bidirectionalShortFormProviderAdapter = new BidirectionalShortFormProviderAdapter(
            this.manager, onts, shortFormProvider);
    this.entityChecker = new ShortFormEntityChecker(this.bidirectionalShortFormProviderAdapter);
}
/*
After that you need to define your classes and the relations between the classes. These relations are called object properties in an ontology. An instance of an ontology class is called an individual, and the attributes of the classes (for a person: name, age, address) are called data properties.
*/
// To create a new individual of an ontology class :
public OWLClass getClass(String className) {
    return this.factory.getOWLClass(":" + className, this.pm);
}

public OWLNamedIndividual createInvidual(String cls, String invname) {
    OWLNamedIndividual res = this.factory.getOWLNamedIndividual(":" + invname, this.pm);
    this.manager.addAxiom(this.ontology, this.factory.getOWLDeclarationAxiom(res));
    OWLClassAssertionAxiom axiom = this.factory.getOWLClassAssertionAxiom(getClass(cls), res);
    this.manager.addAxiom(this.ontology, axiom);
    return res;
}
// To create an object property:
// This method will create an object property named prop if it does not exist.
public OWLObjectProperty getObjectProperty(String prop) {
    return this.factory.getOWLObjectProperty(":" + prop, this.pm);
}

public void addObjectProperty(String propname, OWLNamedIndividual prop,
        OWLNamedIndividual obj) {
    OWLObjectPropertyAssertionAxiom axiom = this.factory
            .getOWLObjectPropertyAssertionAxiom(getObjectProperty(propname), obj, prop);
    this.manager.addAxiom(this.ontology, axiom);
}

// And finally, to add a data property to individuals:
public OWLDataProperty getDataProperty(String prop) {
    return this.factory.getOWLDataProperty(":" + prop, this.pm);
}

public void addDataProperty(String propname, boolean propvalue,
        OWLNamedIndividual inv) {
    OWLAxiom axiom = this.factory.getOWLDataPropertyAssertionAxiom(
            getDataProperty(propname), inv, propvalue);
    this.manager.addAxiom(this.ontology, axiom);
}
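Tying the helpers above together, here is a usage sketch for the triplets in the question (the class and property names are just examples; note that, given how addObjectProperty is written above, its third argument becomes the subject of the assertion):

public void buildGeoFacts() {
    OWLNamedIndividual india = createInvidual("Country", "India");
    OWLNamedIndividual delhi = createInvidual("City", "Delhi");
    OWLNamedIndividual asia = createInvidual("Continent", "Asia");

    // addObjectProperty(propname, object, subject): "Delhi isPartOf India", "India locatedIn Asia"
    addObjectProperty("isPartOf", india, delhi);
    addObjectProperty("locatedIn", asia, india);

    // Example boolean data property on an individual.
    addDataProperty("isCapital", true, delhi);
}

The updated ontology can then be serialized via the OWLOntologyManager in whichever format you prefer.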
Thank you all for your help. A number of you posted (as I should have expected) answers indicating my whole approach was wrong, or that low-level code should never have to know whether or not it is running in a container. I would tend to agree. However, I'm dealing with a complex legacy application and do not have the option of doing a major refactoring for the current problem.
Let me step back and ask the question that motivated my original question.
I have a legacy application running under JBoss, and have made some modifications to lower-level code. I have created a unit test for my modification. In order to run the test, I need to connect to a database.
The legacy code gets the data source this way:
(jndiName is a defined string)
Context ctx = new InitialContext();
DataSource dataSource = (DataSource) ctx.lookup(jndiName);
My problem is that when I run this code under unit test, the Context has no data sources defined. My solution to this was to try to see if I'm running under the application server and, if not, create the test DataSource and return it. If I am running under the app server, then I use the code above.
So, my real question is: What is the correct way to do this? Is there some approved way the unit test can set up the context to return the appropriate data source so that the code under test doesn't need to be aware of where it's running?
For Context: MY ORIGINAL QUESTION:
I have some Java code that needs to know whether or not it is running under JBoss. Is there a canonical way for code to tell whether it is running in a container?
My first approach was developed through experimentation and consists of getting the initial context and testing that it can look up certain values.
private boolean isRunningUnderJBoss(Context ctx) {
    boolean runningUnderJBoss = false;
    try {
        // The following throws a naming exception when not running under
        // JBoss.
        ctx.getNameInNamespace();
        // The URL packages must contain the string "jboss".
        String urlPackages = (String) ctx.lookup("java.naming.factory.url.pkgs");
        if ((urlPackages != null) && (urlPackages.toUpperCase().contains("JBOSS"))) {
            runningUnderJBoss = true;
        }
    } catch (Exception e) {
        // If we get here, we are not under JBoss.
        runningUnderJBoss = false;
    }
    return runningUnderJBoss;
}
Context ctx = new InitialContext();
if (isRunningUnderJBoss(ctx)) {
    .........
Now, this seems to work, but it feels like a hack. What is the "correct" way to do this? Ideally, I'd like a way that would work with a variety of application servers, not just JBoss.
The whole concept is back to front. Lower level code should not be doing this sort of testing. If you need a different implementation pass it down at a relevant point.
Some combination of Dependency Injection (whether through Spring, config files, or program arguments) and the Factory Pattern would usually work best.
As an example, I pass an argument to my Ant scripts that sets up config files depending on whether the ear or war is going into a development, testing, or production environment.
The whole approach feels wrong-headed to me. If your app needs to know which container it's running in, you're doing something wrong.
When I use Spring I can move from Tomcat to WebLogic and back without changing anything. I'm sure that with proper configuration I could do the same trick with JBoss as well. That's the goal I'd shoot for.
Perhaps something like this (ugly, but it may work):
private boolean isRunningOn(String thatServerName) {
    String uniqueClassName = getSpecialClassNameFor(thatServerName);
    try {
        Class.forName(uniqueClassName);
    } catch (ClassNotFoundException cnfe) {
        return false;
    }
    return true;
}
The getSpecialClassNameFor method would return the name of a class that is unique to each application server (and may return new class names when more app servers are added).
Then you use it like:
if (isRunningOn("JBoss")) {
    createJBossStrategy....etcetc
}
Context ctx = new InitialContext();
DataSource dataSource = (DataSource) ctx.lookup(jndiName);
Who constructs the InitialContext? Its construction must be outside the code that you are trying to test; otherwise you won't be able to mock the context.
Since you said that you are working on a legacy application, first refactor the code so that you can easily dependency inject the context or data source to the class. Then you can more easily write tests for that class.
You can transition the legacy code by having two constructors, as in the below code, until you have refactored the code that constructs the class. This way you can more easily test Foo and you can keep the code that uses Foo unchanged. Then you can slowly refactor the code, so that the old constructor is completely removed and all dependencies are dependency injected.
public class Foo {
    private final DataSource dataSource;

    public Foo() { // production code calls this - no changes needed to callers
        try {
            Context ctx = new InitialContext();
            this.dataSource = (DataSource) ctx.lookup(jndiName);
        } catch (NamingException e) {
            throw new IllegalStateException("JNDI lookup of " + jndiName + " failed", e);
        }
    }

    public Foo(DataSource dataSource) { // test code calls this
        this.dataSource = dataSource;
    }

    // methods that use dataSource
}
But before you start doing that refactoring, you should have some integration tests to cover your back. Otherwise you can't know whether even the simple refactorings, such as moving the DataSource lookup to the constructor, break something. Then when the code gets better, more testable, you can write unit tests. (By definition, if a test touches the file system, network or database, it is not a unit test - it is an integration test.)
The benefit of unit tests is that they run fast - hundreds or thousands per second - and are very focused, testing just one behaviour at a time. That makes it possible to run them often (if you hesitate to run all unit tests after changing one line, they run too slowly) so that you get quick feedback. And because they are very focused, you will know just by looking at the name of the failing test exactly where in the production code the bug is.
The benefit of integration tests is that they make sure that all parts are plugged together correctly. That is also important, but you can not run them very often because things like touching the database make them very slow. But you should still run them at least once a day on your continuous integration server.
There are a couple of ways to tackle this problem. One is to pass a Context object to the class when it is under unit test. If you can't change the method signature, refactor the creation of the initial context into a protected method and test a subclass that returns the mocked context object by overriding the method. That can at least put the class under test so you can refactor to better alternatives from there.
The next option is to make database connections a factory that can tell if it is in a container or not, and do the appropriate thing in each case.
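For example, a rough sketch of such a factory (the test fallback is a placeholder; substitute whatever test DataSource you use, such as an in-memory database):

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class DataSourceFactory {
    private final String jndiName;

    public DataSourceFactory(String jndiName) {
        this.jndiName = jndiName;
    }

    public DataSource getDataSource() {
        try {
            // Inside the container the JNDI lookup succeeds.
            return (DataSource) new InitialContext().lookup(jndiName);
        } catch (NamingException e) {
            // Outside the container (e.g. in a unit test), fall back to a test DataSource.
            return createTestDataSource();
        }
    }

    private DataSource createTestDataSource() {
        // Placeholder: construct and return the DataSource used for tests.
        throw new UnsupportedOperationException("configure a test DataSource here");
    }
}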
One thing to think about is - once you have this database connection out of the container, what are you going to do with it? It is easier, but it isn't quite a unit test if you have to carry the whole data access layer.
For further help in this direction of moving legacy code under unit test, I suggest you look at Michael Feathers' Working Effectively with Legacy Code.
A clean way to do this would be to have lifecycle listeners configured in web.xml. These can set global flags if you want. For example, you could define a ServletContextListener in your web.xml and in the contextInitialized method, set a global flag that you're running inside a container. If the global flag is not set, then you are not running inside a container.
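For example, a minimal sketch of such a listener (the flag holder is an example, not a standard API); register it with a <listener> element in web.xml:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class ContainerFlagListener implements ServletContextListener {
    // Only set to true when a web container initializes the servlet context.
    public static volatile boolean runningInContainer = false;

    public void contextInitialized(ServletContextEvent sce) {
        runningInContainer = true;
    }

    public void contextDestroyed(ServletContextEvent sce) {
        runningInContainer = false;
    }
}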