(Too) complex configuration management (Java properties) - java

I'm working at a company with an overly complex configuration management process:
In each module there is an application.properties file. There are properties for the developers like: database.host = localhost
Properties which change in other environments are maintained in an application.properties file in an override-properties folder (for each module) like: database.host=#dbhost#
There is a default-deployment.properties file with default values for other environments like: database.HOST=noValueConfigured.DB_HOST
A postconfigure.properties file with DATABASE_HOST=#configure.DB_HOST#
Those files are only needed if a property value depends on the environments (is different for development, testing, live).
Finally, there is an Excel document with a sheet for every environment and a row like: configure.DB_HOST - a comment ... - 127.0.0.1 (just as an example). The Excel document is responsible for generating the correct property files for the RPM packages.
This process is not only complex but also error prone.
How could it be simplified/improved?
The approach should be compatible with Spring DI.

I would start with a master configuration file and generate the properties files from it.
Ultimately you could have a single set of properties files which can be deployed in all environments, e.g.
database.host = localhost
database.host.prod = proddb1
database.host.uat = uatdb1
i.e. use the environment/host/region/service at the end as a search path. This has the advantage that you can see the variations between environments.
You can implement this lookup like this:
public class SearchProperties extends Properties {
    private final List<String> searchList;

    public SearchProperties(List<String> searchList) {
        this.searchList = searchList;
    }

    @Override
    public String getProperty(String key) {
        for (String s : searchList) {
            String property = super.getProperty(key + "." + s);
            if (property != null)
                return property;
        }
        return super.getProperty(key);
    }
}
You might construct this like
Properties prop = new SearchProperties(Arrays.asList(serverName, environment));
This way, if there is a match for that server, it overrides the environment value, which in turn overrides the default.
In Java 8 you can do
public String getProperty(String key) {
    return searchList.stream()
            .map(s -> key + "." + s)
            .map(super::getProperty)
            .filter(s -> s != null)
            .findFirst()
            .orElseGet(() -> super.getProperty(key));
}
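For reference, here is a self-contained version of the class together with a small usage example; the suffixes "server1" and "prod" are made-up search-path entries, not values from the question:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

// Search-path properties: getProperty(key) first tries key + "." + suffix for
// each suffix in the search list, then falls back to the plain key.
public class SearchProperties extends Properties {
    private final List<String> searchList;

    public SearchProperties(List<String> searchList) {
        this.searchList = searchList;
    }

    @Override
    public String getProperty(String key) {
        for (String suffix : searchList) {
            String property = super.getProperty(key + "." + suffix);
            if (property != null) {
                return property;
            }
        }
        return super.getProperty(key);
    }

    public static void main(String[] args) {
        // Hypothetical server name and environment; in practice these would
        // come from the runtime environment of the deployed module.
        Properties props = new SearchProperties(Arrays.asList("server1", "prod"));
        props.setProperty("database.host", "localhost");      // default for developers
        props.setProperty("database.host.prod", "proddb1");   // production override
        // The ".prod" entry wins over the default.
        System.out.println(props.getProperty("database.host")); // prints "proddb1"
    }
}
```

A nice property of this layout is that all variations between environments sit next to each other in one file, so a diff of environments is just a visual scan of adjacent lines.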

There should be only one file, even if it has a lot of properties. Also, there should be only one property for each functionality, like database.host, not database.host and database_host, or anything similar.
You need to create a hierarchy for every property in order to know which value will be used. For example, if there is a global value for database.host, it should be used for that property. If not, check the next level in the hierarchy, such as the environment-specific value (e.g. the production value). If that does not exist, check the next level, such as the local or test level. At the bottom level, have a default value. This way you have two dimensions for resolving properties, which decreases the chance of error dramatically.
At one company I used to work for, we had an automated deployer which handled such a level setup: we would just set a variable on its web site for the level we wanted, and it would go from top to bottom and set them. We never had problems with this setup, and we had more than 50 variables in the app.properties file.

Setting aside all the redesign approaches mentioned in the previous answers, you could wrap all the complexity into the Tomtit task manager, which is good at these types of tasks.
Just create properties file templates and populate them per environment.


Archunit package A can't access package B, except subpackage A.X can access B.Y

We are in the process of refactoring our code to a hexagonal architecture, with each domain in the code being a separate Gradle module. Currently all the code is in one module with a package for each domain, and modules interact with each other freely.
We want to first get the architectural boundaries correct on the package level before extracting each domain package to a domain module. To do this we want to enforce the architecture using ArchUnit.
Inside each domain package, I've reconstructed the rules of our new architecture using layers:
@ParameterizedTest(name = "The hex architecture should be respected in module {0}.")
@MethodSource("provideDomains")
void hexArchRespectedInEveryModule(String pkg) {
    layeredArchitecture()
        .consideringOnlyDependenciesInAnyPackage(ROOT_DOTDOT)
        .withOptionalLayers(true)
        .layer("api").definedBy(ROOT_DOT + pkg + ".api..")
        .layer("usecases").definedBy(ROOT_DOT + pkg + ".usecases..")
        .layer("domain").definedBy(ROOT_DOT + pkg + ".domain..")
        .layer("incoming infra").definedBy(ROOT_DOT + pkg + ".infrastructure.incoming..")
        .layer("outgoing infra").definedBy(ROOT_DOT + pkg + ".infrastructure.outgoing..")
        .layer("vocabulary").definedBy(ROOT_DOT + pkg + ".vocabulary..")
        .whereLayer("incoming infra").mayOnlyAccessLayers("api", "vocabulary")
        .whereLayer("usecases").mayOnlyAccessLayers("api", "domain", "vocabulary")
        .whereLayer("domain").mayOnlyAccessLayers("domain", "vocabulary")
        .whereLayer("outgoing infra").mayOnlyAccessLayers("domain", "vocabulary")
        .check(new ClassFileImporter().importPackages(toPackage(pkg)));
}
I am however failing to write the check that a domain can only access another domain by calling it from an outgoing infra adapter, using the API provided by the other domain.
For that case I'd have to write something that says: "no packages in domain A may call domain B, except for the packages in outgoing infra of A, which may call the API subpackage in B."
I've tried several different variations, eventually thinking that this would be correct, but violations still don't seem to get caught:
@ParameterizedTest(name = "Module {0} should only depend on itself, except outgoing infra on other apis")
@MethodSource("provideDomains")
void domainModuleBoundariesAreRespected(String module) {
    noClasses()
        .that().resideInAPackage(includeSubPackages(toPackage(module)))
        .should().dependOnClassesThat().resideInAnyPackage(includeSubPackages(getPackagesOtherThan(module)))
        .allowEmptyShould(true)
        .check(allButOutgoingInfra);
    classes()
        .that().resideInAPackage(outGoingInfra(toPackage(module)))
        .should().dependOnClassesThat().resideInAnyPackage(toApi(getPackagesOtherThan(module)))
        .allowEmptyShould(true)
        .check(allClasses);
}
There are some helper static methods I've added that append the ".." to include subpackages, or add a suffix to a package to indicate a subpackage. getPackagesOtherThan(module) returns a list of the other domains' packages.
I found this question, but my problem seems different because I have two subpackages in relation to each other, not just one.
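The helpers are not shown in the question; a hypothetical reconstruction (the names, the root package com.myapp, and the exact suffixes are all assumptions) might look like:

```java
import java.util.List;

// Hypothetical reconstructions of the helper methods mentioned in the question.
// The real implementations are not shown, so everything here is an assumption.
public final class PackagePatterns {
    private static final String ROOT = "com.myapp"; // assumed root package

    private PackagePatterns() {}

    public static String toPackage(String module) {
        return ROOT + "." + module;
    }

    // Appends ".." so the pattern matches the package and all its subpackages.
    public static String includeSubPackages(String pkg) {
        return pkg + "..";
    }

    public static String[] includeSubPackages(List<String> packages) {
        return packages.stream()
                .map(PackagePatterns::includeSubPackages)
                .toArray(String[]::new);
    }

    // Points at the outgoing-infrastructure subpackage of a domain.
    public static String outGoingInfra(String pkg) {
        return pkg + ".infrastructure.outgoing..";
    }

    // Points at the api subpackage of each of the given domains.
    public static String[] toApi(List<String> packages) {
        return packages.stream()
                .map(p -> p + ".api..")
                .toArray(String[]::new);
    }
}
```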
I would actually use the "slices" API for this use case (see https://www.archunit.org/userguide/html/000_Index.html#_slices). I.e. the same thing that also allows you to ("vertically") slice your application by domain and assert that these slices are cycle-free.
The easiest way would be if you have a very consistent code structure, where you can just match your domain packages with a simple package pattern, like com.myapp.(*).. where your domains are something like com.myapp.customer.., com.myapp.shop.., etc. Otherwise, you can always highly customize this by using SliceAssignment (compare the user guide link above).
In any case, once you create your slices by domain, you can define a custom ArchCondition on them:
slices()
    .matching("com.myapp.(*)..")
    .as("Domain Modules")
    .namingSlices("Module[$1]")
    .should(only_depend_on_each_other_through_allowed_adapters())
// ...
ArchCondition<Slice> only_depend_on_each_other_through_allowed_adapters() {
    return new ArchCondition<Slice>("only depend on each other through allowed adapters") {
        final PackageMatcher validOutgoingAdapter = PackageMatcher.of("..outgoing..");
        final PackageMatcher validIncomingAdapter = PackageMatcher.of("..incoming..");
        final Map<JavaClass, Slice> classesToModules = new HashMap<>();

        @Override
        public void init(Collection<Slice> allModules) {
            // create some reverse lookup so we can easily find the module of each class
            allModules.forEach(module -> module.forEach(clazz -> classesToModules.put(clazz, module)));
        }

        @Override
        public void check(Slice module, ConditionEvents events) {
            module.getDependenciesFromSelf().stream()
                    // The following are dependencies to other modules
                    .filter(it -> classesToModules.containsKey(it.getTargetClass()))
                    // The following violate either the outgoing or incoming adapter rule
                    .filter(it ->
                            !validOutgoingAdapter.matches(it.getOriginClass().getPackageName()) ||
                            !validIncomingAdapter.matches(it.getTargetClass().getPackageName())
                    )
                    .forEach(dependency -> events.add(SimpleConditionEvent.violated(dependency,
                            String.format("%s depends on %s in an illegal way: %s",
                                    module, classesToModules.get(dependency.getTargetClass()), dependency.getDescription()))));
        }
    };
}
This might not exactly match your needs (e.g. maybe you need more precise checks on what the incoming/outgoing adapters should look like), but I hope it helps to get you started.

Regarding a data structure for O(1) get on prefixes

So I am trying to write a little utility in Scala that constantly listens on a bunch of directories for file system changes (deletes, creates, modifications etc) and rsyncs it immediately across to a remote server. (https://github.com/Khalian/LockStep)
My configuration is stored in JSON as follows:
{
  "localToRemoteDirectories": {
    "/workplace/arunavs/third_party": {
      "remoteDir": "/remoteworkplace/arunavs/third_party",
      "remoteServerAddr": "some Remote server address"
    }
  }
}
This configuration is stored in a Scala Map (key = localDir, value = (remoteDir, remoteServerAddr)). The tuple is represented as a case class
sealed case class RemoteLocation(remoteDir:String, remoteServerAddr:String)
I am using an actor from a third party:
https://github.com/lloydmeta/schwatcher/blob/master/src/main/scala/com/beachape/filemanagement/FileSystemWatchMessageForwardingActor.scala
It listens on these directories (e.g. /workplace/arunavs/third_party) and then outputs a Java 7 WatchEvent.Kind event (ENTRY_CREATE, ENTRY_MODIFY, etc.). The problem is that the events sent contain the absolute path (for instance, if I create a file helloworld in the third_party dir, the message sent by the actor is (ENTRY_CREATE, /workplace/arunavs/third_party/helloworld)).
I need a way to write a getter that gets the nearest prefix from the configuration map stored above. The obvious way to do it is to filter on the map:
def getRootDirsAndRemoteAddrs(localDir: String): Map[String, RemoteLocation] =
  localToRemoteDirectories.filter(e => localDir.startsWith(e._1))
This simply returns the subset of keys that are a prefix of the localDir (in the above example this method is called with localDir = /workplace/arunavs/third_party/helloworld). While this works, this implementation is O(n), where n is the number of items in my configuration. I am looking for better computational complexity (I looked at radix and Patricia tries, but they don't cut it, since I'm feeding in a string and trying to get the keys which are prefixes of it; tries solve the opposite problem).
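One way to avoid scanning the whole configuration is to walk up the path components of the incoming path and probe a hash map at each step. The cost is then proportional to the depth of the path (a handful of O(1) average-case lookups), independent of the number of configured directories. A sketch of the idea, in Java for illustration since the question's code is Scala:

```java
import java.util.HashMap;
import java.util.Map;

// Nearest-prefix lookup by stripping one path segment at a time from the query
// and probing a HashMap. For a path with d segments this does at most d lookups,
// regardless of how many directories are configured.
public class NearestPrefixLookup {
    private final Map<String, String> config = new HashMap<>();

    public void put(String localDir, String remoteLocation) {
        config.put(localDir, remoteLocation);
    }

    public String nearestPrefix(String path) {
        String candidate = path;
        while (true) {
            String value = config.get(candidate);
            if (value != null) {
                return value; // longest configured prefix wins
            }
            int slash = candidate.lastIndexOf('/');
            if (slash <= 0) {
                return null; // walked up to the root without a match
            }
            candidate = candidate.substring(0, slash); // drop the last segment
        }
    }
}
```

This relies on the configured keys being exact directory prefixes of the event paths, which matches the example in the question (a watched directory is always an ancestor of the created file).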

Tomcat 7 - Get the application name during runtime without login via java-agent/aspectj

I'm trying to get a list of all deployed applications, and specifically the name of the application mapped to tomcat root.
I want to be able to do it during runtime, using a java agent that collects information on the tomcat server.
I tried using this code sample:
private Iterable<String> collectAllDeployedApps() {
    try {
        final Set<String> result = new HashSet<>();
        final Set<ObjectName> instances = findServer()
                .queryNames(new ObjectName("Tomcat:j2eeType=WebModule,*"), null);
        for (ObjectName each : instances) {
            // the name is in a format like //localhost/appname
            result.add(substringAfterLast(each.getKeyProperty("name"), "/"));
        }
        return result;
    } catch (MalformedObjectNameException e) {
        // handle
        return Collections.emptySet();
    }
}
taken from a similar question. But since I'm not logged into the manager app, I don't have the right permissions, so I get an empty list.
What I actually want - I have a java agent (based on aspectJ), and I'd like during runtime/deployment time etc. to be able to get the list of all deployed apps without actually logging in to the manager myself.
How can I do this? I don't mind instrumenting tomcat's deployment code (which doesn't require any login from my side as I'm already instrumenting the code), but I'm not sure which function to instrument.
Thanks,
Lin
The question consists of 2 parts:
Get a list of all deployed applications - After reviewing Tomcat's API, I found several relevant deployment code parts which can be instrumented:
WarWatcher.java (allows detecting changes), and we can also see the apps from UserConfig.java, which is called on startup (instrumentation can be done on setDirectoryName etc.), and of course HostConfig.java, which is also called on startup:
protected void org.apache.catalina.startup.HostConfig.deployWARs(java.io.File, java.lang.String[])
protected void org.apache.catalina.startup.HostConfig.deployApps()
protected void org.apache.catalina.startup.HostConfig.deployWAR(org.apache.catalina.util.ContextName, java.io.File)
In addition - you can check the argument for:
protected boolean org.apache.catalina.startup.HostConfig.deploymentExists(java.lang.String)
It includes the war/folder name (which usually corresponds, more or less, to the application name).
Get the root application name - This can be done by using ServletContext.getRealPath() - It returns the folder name, from which the war name can be extracted (and can be used, in my case at least as the app name).
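To illustrate the second part, here is a hedged sketch of deriving the application name from the value returned by ServletContext.getRealPath("/"); the helper name and the path layout are assumptions based on a typical Tomcat webapps directory, not Tomcat API:

```java
// Extracts the last path segment (the exploded-war folder name, which typically
// matches the application name) from a real path such as
// /var/lib/tomcat7/webapps/myapp/. Layout assumptions: the webapp is deployed
// as a directory under webapps, and the path may or may not end with a separator.
public class AppNameFromPath {
    public static String appName(String realPath) {
        if (realPath == null || realPath.isEmpty()) {
            return null;
        }
        // Normalize Windows separators so the logic below only deals with '/'.
        String normalized = realPath.replace('\\', '/');
        if (normalized.endsWith("/")) {
            normalized = normalized.substring(0, normalized.length() - 1);
        }
        return normalized.substring(normalized.lastIndexOf('/') + 1);
    }
}
```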

Writing to $HOME from a jar file

I'm trying to write to a file located in my $HOME directory. The code to write to that file has been packaged into a jar file. When I run the unit tests to package the jar file, everything works as expected - namely the file is populated and can be read from again.
When I try to run this code from another application where the jar file is contained in the lib directory, it fails. The file is created, but it is never written to. When the app goes to read the file, it fails parsing it because it is empty.
Here is the code that writes to the file:
logger.warn("TestNet wallet does not exist creating one now in the directory: " + walletPath)
testNetFileName.createNewFile()
logger.warn("Wallet file name: " + testNetFileName.getAbsolutePath)
logger.warn("Can write: "+ testNetFileName.canWrite())
logger.warn("Can read: " + testNetFileName.canRead)
val w = Wallet.fromWatchingKey(TestNet3Params.get(), testNetSeed)
w.autosaveToFile(testNetFileName, savingInterval, TimeUnit.MILLISECONDS, null)
w
}
Here is the relevant log from the above method:
2015-12-30 15:11:46,416 - [WARN] - from class com.suredbits.core.wallet.ColdStorageWallet$ in play-akka.actor.default-dispatcher-9
TestNet wallet exists, reading in the one from disk
2015-12-30 15:11:46,416 - [WARN] - from class com.suredbits.core.wallet.ColdStorageWallet$ in play-akka.actor.default-dispatcher-9
Wallet file name: /home/chris/testnet-cold-storage.wallet
then it bombs.
Here is the definition for autoSaveToFile
public WalletFiles autosaveToFile(File f, long delayTime, TimeUnit timeUnit,
                                  @Nullable WalletFiles.Listener eventListener) {
    lock.lock();
    try {
        checkState(vFileManager == null, "Already auto saving this wallet.");
        WalletFiles manager = new WalletFiles(this, f, delayTime, timeUnit);
        if (eventListener != null)
            manager.setListener(eventListener);
        vFileManager = manager;
        return manager;
    } finally {
        lock.unlock();
    }
}
and the definition for WalletFiles
https://github.com/bitcoinj/bitcoinj/blob/master/core/src/main/java/org/bitcoinj/wallet/WalletFiles.java#L68
public WalletFiles(final Wallet wallet, File file, long delay, TimeUnit delayTimeUnit) {
    // An executor that starts up threads when needed and shuts them down later.
    this.executor = new ScheduledThreadPoolExecutor(1, new ContextPropagatingThreadFactory("Wallet autosave thread", Thread.MIN_PRIORITY));
    this.executor.setKeepAliveTime(5, TimeUnit.SECONDS);
    this.executor.allowCoreThreadTimeOut(true);
    this.executor.setExecuteExistingDelayedTasksAfterShutdownPolicy(false);
    this.wallet = checkNotNull(wallet);
    // File must only be accessed from the auto-save executor from now on, to avoid simultaneous access.
    this.file = checkNotNull(file);
    this.savePending = new AtomicBoolean();
    this.delay = delay;
    this.delayTimeUnit = checkNotNull(delayTimeUnit);
    this.saver = new Callable<Void>() {
        @Override public Void call() throws Exception {
            // Runs in an auto save thread.
            if (!savePending.getAndSet(false)) {
                // Some other scheduled request already beat us to it.
                return null;
            }
            log.info("Background saving wallet, last seen block is {}/{}", wallet.getLastBlockSeenHeight(), wallet.getLastBlockSeenHash());
            saveNowInternal();
            return null;
        }
    };
}
I'm guessing it is some sort of permissions issue but I cannot seem to figure this out.
EDIT: This is all being run on the exact same Ubuntu 14.04 machine - no added complexity of different operating systems.
You cannot generally depend on the existence or writability of $HOME. There are really only two portable ways to identify (i.e. provide a path to) an external file.
Provide an explicit path using a property set on the invocation command line or provided in the environment, or
Provide the path in a configuration properties file whose location is itself provided as a property on the command line or in the environment.
The problem with using $HOME is that you cannot know what userID the application is running under. The user may or may not even have a home directory, and even if the user does, the directory may or may not be writable. In your specific case, your process may have the ability to create a file (write access on the directory itself) but write access to a file may be restricted by the umask and/or ACLs (on Windows) or selinux (on Linux).
Put another way, the installer/user of the library must explicitly provide a known writable path for your application to use.
Yet another way to think about it is that you are writing library code that may be used in completely unknown environments. You cannot assume ANYTHING about the external environment except what is in the explicit contract between you and the user. You can declare in your interface specification that $HOME must be writable, but that may be highly inconvenient for some users whose environment doesn't have $HOME writable.
A much better and portable solution is to say
specify -Dcom.xyz.workdir=[path] on the command line to indicate the work path to be used
or
The xyz library will look for its work directory in the path specified by the XYZ_WORK environment variable
Ideally, you do BOTH of these to give the user some flexibility.
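A minimal sketch of that dual lookup, using the property and environment-variable names from the examples above (the fallback order, property first, is an assumption):

```java
// Resolves the work directory from a -D system property first, then from an
// environment variable, and fails loudly if neither is set. The pure resolve()
// overload takes the raw values so the precedence logic is easy to test.
public class WorkDirResolver {
    public static String resolve(String propValue, String envValue) {
        if (propValue != null && !propValue.isEmpty()) {
            return propValue; // command-line property wins
        }
        if (envValue != null && !envValue.isEmpty()) {
            return envValue; // fall back to the environment
        }
        throw new IllegalStateException(
                "No work directory configured: set -Dcom.xyz.workdir=... or XYZ_WORK");
    }

    public static String resolveFromRuntime() {
        return resolve(System.getProperty("com.xyz.workdir"), System.getenv("XYZ_WORK"));
    }
}
```

Failing fast with an explicit message keeps the contract visible: the installer/user must supply a writable path, and the library never silently guesses one.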
savePending is always false. At the beginning of call() you check that it is false and return null, so the actual save code is never executed. I am guessing you meant to check whether it was true there, and also set it to true, not false. You then also need to reset it back to false at the end.
Now, why this works in your unit test is a different story. The test must be executing different code.
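For illustration, here is a self-contained sketch of the save-coalescing pattern being discussed: writers mark the flag when data changes, and a background task claims it with getAndSet(false), skipping the save when nothing is pending. Note this is only the core idea; bitcoinj's WalletFiles wires the flag to a scheduled executor, which is not reproduced here.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Save coalescing with an AtomicBoolean: many markDirty() calls between two
// saver runs collapse into a single save, and getAndSet(false) guarantees only
// one concurrent saver actually performs the work.
public class SaveCoalescer {
    private final AtomicBoolean savePending = new AtomicBoolean(false);
    private int saveCount = 0;

    public void markDirty() {
        savePending.set(true); // a change happened; a save is now pending
    }

    public boolean saveIfPending() {
        if (!savePending.getAndSet(false)) {
            return false; // nothing pending, or another run already claimed it
        }
        saveCount++; // stand-in for the real save-to-disk work
        return true;
    }

    public int getSaveCount() {
        return saveCount;
    }
}
```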

System.getProperty(param) returns wrong value - Android

Steps I take:
In the code I do:
System.setProperty("myproperty", "1");
and then in a shell script I set the property "myproperty" to 3,
like this:
# setprop "myproperty" 3
and then in the code I try to read the property like this:
System.getProperty("myproperty");
I get the value of 1, which means that the set from the shell didn't actually work.
but when I print all props from shell with
# getprop
I see in the list that myproperty equals 3.
In short: I want to change the value of a property from a script, and I see that the script actually changes the property, but in the Java code I get the old value.
Any ideas?
Android provides the System.getProperty and System.setProperty functions in the Java library, but it's important to note that although these Java APIs are semantically equivalent to the native version, the Java version stores data in a totally different place. Actually, a hashtable is employed by the Dalvik VM to store properties. So Java properties are separate: Java code can't get or set native properties, and vice versa.
The android.os.SystemProperties class can manipulate native properties, though it's intended for internal usage only. It calls through JNI into the native property library to get/set properties.
getprop/setprop work on android.os.SystemProperties, not on java.lang.System.
Unfortunately, this class is not available to third-party applications. Apparently you have rooted your device, so you may still access it.
You can use this snippet to run getprop as a shell command and get the value of any property:
private String getSystemProperty(String propertyName) {
    String propertyValue = "[UNKNOWN]";
    try {
        Process getPropProcess = Runtime.getRuntime().exec("getprop " + propertyName);
        BufferedReader osRes =
                new BufferedReader(new InputStreamReader(getPropProcess.getInputStream()));
        propertyValue = osRes.readLine();
        osRes.close();
    } catch (Exception e) {
        // Do nothing - can't get property value
    }
    return propertyValue;
}
