I have some data triplets that I want to write in some sort of basic OWL ontology. I have triplets like:
Delhi is part of India
or
India is an Asian country
Note that I have relations like "is-a", "part-of", or "related-to". What's the simplest way to build an ontology? A working example or a reference to an example website would be a great help!
There are a lot of different things mixed up in your question, so I strongly suggest you take a bit of time (away from the keyboard!) to think through what you're trying to achieve here.
Firstly, geographic ontologies can get quite complex, and a lot of work has already been done in this area. Probably the obvious starting point is the GeoNames ontology, which gives names to geographic features, including cities like Delhi and countries like India. At the very least you should re-use those names for the places in your application, as that will maximise the chances that your data can be successfully joined with other available linked-data sources.
However, you probably don't want the whole of GeoNames in your application (I'm guessing), so you also need to be clear why you need an ontology at all. A good way to approach this is from the outside of your application: rather than worry about which kind of Jena model to use, start by thinking through ways to complete the sentence "using the ontology, a user of my application will be able to ...". That should then lead you on to establishing some competency questions (see, for example, section 3 of this guide) for your ontology. Once you know what kinds of information you want to represent, and what kinds of queries you need to apply to it, your technology choices will be much clearer. I realise that these applications are typically developed iteratively, and you'll want to try some code out fairly early on, but I still advocate getting your destination more clearly in mind before you start your coding journey.
You imply that you want to use Jena to drive a web site. There are many choices here. Don't be misled by the term semantic web - this actually means bringing web-like qualities to interlinked data sets, rather than putting semantics into human-readable web pages per se. While you can do so, and many people do, you'll need some additional layers in your architecture. We typically use one of two approaches: using Jena with a templating engine, such as Velocity, in a servlet container, or using a Ruby web framework and driving Jena via JRuby. There are many other ways to solve this particular problem: Jena doesn't address web publishing directly, but it can be used within any Java-based web framework.
Finally, regarding namespaces, you should really re-use existing vocabularies, and hence namespaces, where possible. Don't make up new names for things which already have representations on the web of data somewhere. Use GeoNames, or DbPedia, or any of the many other published vocabularies where they fit. If they don't fit, then you should create a new name rather than use an existing name in a non-compatible way. In this case, you should use the web domain of your application (e.g. your company or university) as the basis for the namespace. Ideally, you should publish your ontology at the base URL of the namespace, but this can sometimes be hard to arrange depending on local web policies.
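To make the reuse point concrete, here is a minimal Jena sketch that borrows GeoNames names instead of inventing new ones (the feature URIs and the parentCountry property are illustrative; look up the real identifiers on geonames.org before relying on them):
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class GeoNamesExample {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();

        // Illustrative GeoNames feature URIs -- check geonames.org for the real ones.
        Resource delhi = model.createResource("http://sws.geonames.org/1273294/");
        Resource india = model.createResource("http://sws.geonames.org/1269750/");

        // Reuse the GeoNames vocabulary rather than coining a new "part-of" property.
        Property parentCountry = model.createProperty("http://www.geonames.org/ontology#parentCountry");
        model.add(delhi, parentCountry, india);

        model.write(System.out, "TURTLE");
    }
}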
I suggest the OWL API from the University of Manchester. With it you can create your ontology "on the fly" in Java, and with a single method invocation you can serialize it in your preferred format (RDF/XML, Manchester syntax, etc.) if you need to, or keep working directly on the in-memory representation. This lets you rapidly prototype and experiment with your ontology in the context of your program.
For an overview of the library and its main components, I suggest the tutorial (code tutorial) provided by the creator of the library; it covers 90% of the basic needs.
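For example, creating an empty ontology in memory and serializing it takes only a few lines (a minimal sketch; the ontology IRI is made up):
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.io.SystemOutDocumentTarget;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;

public class OwlOnTheFly {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();

        // Create an empty ontology in memory (illustrative IRI).
        OWLOntology ontology = manager.createOntology(IRI.create("http://example.com/geo"));

        // A single call serializes it, here to standard output in the default format.
        manager.saveOntology(ontology, new SystemOutDocumentTarget());
    }
}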
PS: Protégé is built on the OWL API, so you can also try it as suggested, but especially in the beginning I preferred to play with ontologies quickly in code and switch to an engineering environment like Protégé once my mind was clear enough. In addition, with an external ontology you would need to learn how to navigate it, which IMHO is really not worth it in the very beginning.
Have a look at Stanford's Protege. It's an ontology editor.
You'd just declare a triplet class consisting of a subject, predicate, and object. "is-in" is a predicate, so your ontology elements would look like:
"Delhi", "is-in", "India"
"India", "is-in", "Asia"
"India", "is-a", "country"
This doesn't address queries, of course, but given a decent data store (even a database would do) you could start to build a flexible ontology with a decent query mechanism.
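For illustration, a minimal sketch of such a triplet structure, with a toy query on top (names are made up; a real data store would replace the in-memory list):
import java.util.ArrayList;
import java.util.List;

// Each statement is just a subject/predicate/object triple.
record Triple(String subject, String predicate, String object) {}

class TripleStore {
    private final List<Triple> triples = new ArrayList<>();

    void add(String s, String p, String o) {
        triples.add(new Triple(s, p, o));
    }

    // Find all objects related to a subject by a given predicate,
    // e.g. query("Delhi", "is-in") returns ["India"].
    List<String> query(String subject, String predicate) {
        return triples.stream()
                .filter(t -> t.subject().equals(subject) && t.predicate().equals(predicate))
                .map(Triple::object)
                .toList();
    }
}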
Jena is far, far more capable than what this would create, of course; it provides the semantic query machinery, as well as far better resource definition and resolution. However, it's a lot more involved than a simple triplet structure; it all depends on what you need.
/*
 * Maven dependencies for the OWL API:
 *
 * <dependency>
 *     <groupId>net.sourceforge.owlapi</groupId>
 *     <artifactId>owlapi-api</artifactId>
 * </dependency>
 * <dependency>
 *     <groupId>net.sourceforge.owlapi</groupId>
 *     <artifactId>owlapi-apibinding</artifactId>
 * </dependency>
 *
 * First of all you need to initialize the ontology:
 */
private OWLDataFactory factory;
private PrefixManager pm;
private OWLOntology ontology;
private String pmString = "#";
private OWLOntologyManager manager;
private OWLReasoner reasoner;
private ShortFormEntityChecker entityChecker;
private BidirectionalShortFormProviderAdapter bidirectionalShortFormProviderAdapter;

private void initializeOntology(String fileContent)
        throws OWLOntologyCreationException {
    // Load the ontology from an in-memory string.
    InputStream bstream = new ByteArrayInputStream(fileContent.getBytes());
    this.manager = OWLManager.createOWLOntologyManager();
    this.ontology = this.manager.loadOntologyFromOntologyDocument(bstream);

    // Use the ontology's own IRI plus "#" as the default prefix.
    IRI ontologyIRI = this.ontology.getOntologyID().getOntologyIRI();
    this.pm = new DefaultPrefixManager(ontologyIRI.toString() + this.pmString);
    this.factory = this.manager.getOWLDataFactory();

    // Attach a reasoner (e.g. HermiT's ReasonerFactory) to the ontology.
    ReasonerFactory reasonerFactory = new ReasonerFactory();
    this.reasoner = reasonerFactory.createReasoner(this.ontology);

    // Set up short-form (prefixed-name) rendering and entity lookup,
    // which is useful for parsing Manchester-syntax expressions later.
    Set<OWLOntology> onts = new HashSet<>();
    onts.add(this.ontology);
    DefaultPrefixManager defaultPrefixManager = new DefaultPrefixManager(this.pm);
    ShortFormProvider shortFormProvider =
            new ManchesterOWLSyntaxPrefixNameShortFormProvider(defaultPrefixManager);
    this.bidirectionalShortFormProviderAdapter =
            new BidirectionalShortFormProviderAdapter(this.manager, onts, shortFormProvider);
    this.entityChecker = new ShortFormEntityChecker(this.bidirectionalShortFormProviderAdapter);
}
/*
 * After that you need to define your classes and the relations between them.
 * In an ontology, these relations are called object properties, an instance
 * of an ontology class is called an individual, and the attributes of a
 * class (e.g. a person's name, age, address) are called data properties.
 */
// To create a new individual of an ontology class:
public OWLClass getClass(String className) {
    return this.factory.getOWLClass(":" + className, this.pm);
}

public OWLNamedIndividual createIndividual(String cls, String invname) {
    OWLNamedIndividual res = this.factory.getOWLNamedIndividual(":" + invname, this.pm);
    this.manager.addAxiom(this.ontology, this.factory.getOWLDeclarationAxiom(res));
    // Assert that the new individual is an instance of the given class.
    OWLClassAssertionAxiom axiom = this.factory.getOWLClassAssertionAxiom(getClass(cls), res);
    this.manager.addAxiom(this.ontology, axiom);
    return res;
}
// To create an object property:
// This method gives you the object property named prop, whether or not
// it already exists in the ontology.
public OWLObjectProperty getObjectProperty(String prop) {
    return this.factory.getOWLObjectProperty(":" + prop, this.pm);
}

public void addObjectProperty(String propname, OWLNamedIndividual subject,
        OWLNamedIndividual object) {
    // Asserts: subject propname object
    OWLObjectPropertyAssertionAxiom axiom = this.factory
            .getOWLObjectPropertyAssertionAxiom(getObjectProperty(propname), subject, object);
    this.manager.addAxiom(this.ontology, axiom);
}
// And finally, to add a data property to an individual:
public OWLDataProperty getDataProperty(String prop) {
    return this.factory.getOWLDataProperty(":" + prop, this.pm);
}

public void addDataProperty(String propname, boolean propvalue,
        OWLNamedIndividual inv) {
    // Asserts: inv propname propvalue (a boolean-valued data property)
    OWLAxiom axiom = this.factory.getOWLDataPropertyAssertionAxiom(
            getDataProperty(propname), inv, propvalue);
    this.manager.addAxiom(this.ontology, axiom);
}
I am creating a client API in Java using: Apache Jena Framework + Hydra (for the hypermedia-driven part) + my private vocab, similar to Markus Lanthaler's Event-API vocab, instead of schema.org (for the ontology/vocabulary part).
Section 1:
After looking at the Markus Lanthaler EventDemo repo and hydra-java, I found that they create classes for each hydra:Class, which can break the client in the future. For example:
A Person class (Person.java)
public class Person
{
    String name;
}
But a future requirement might turn name into a class of its own, e.g.:
public class Name
{
    String firstName;
    String lastName;
}
So to fulfill this requirement I would have to update the Person class like this:
public class Person
{
    Name name;
}
Question 1:
Is my understanding of this section correct? If yes, then what is the way to deal with this part?
Section 2:
To avoid the above problem I created a GenericResource class (GenericResource.java):
public class GenericResource
{
    private Model model;

    public void addProperty(String propertyName, Object propertyValue)
    {
        propertyName = "myvocab:" + propertyName;
        // Because the caller will pass only the property name, e.g. "name",
        // and I will map it to "myvocab:name".
        // Some logic to add propertyName and propertyValue to the model.
    }

    public GenericResource retrieveProperty(String propertyName)
    {
        propertyName = "myvocab:" + propertyName;
        // Some logic to query the model, copy the data for propertyName
        // into a new GenericResource object, and return it.
        return null; // placeholder
    }

    public GenericResource performAction(String actionName, String postData)
    {
        // Some logic to make an HTTP call and return the response.
        return null; // placeholder
    }
}
But again I am stuck on lots of problems:
Problem 1: It is not necessary that every propertyName maps to myvocab:propertyName. Some may map to other vocabs, e.g. hydra:propertyName, schema:propertyName, rdfs:propertyName, newVocab:propertyName, etc.
Problem 2: How do I validate whether a propertyName belongs to a given class?
Suggestion: Put a type field/variable in the GenericResource class, and then check supportedProperty in the vocab corresponding to that class. For clarity, assume the above Person class is also defined in the vocab and has supportedProperty: [name, age, etc.]. My GenericResource would then have type "Person", and at the time of addProperty (or some other operation) I would check the vocab for whether that property is in the supportedProperty list, or in the supportedOperation list in the case of performAction().
Is this the correct way? Any other suggestions will be most welcome!
Question 1: Is my understanding of this section correct? If yes,
then what is the way to deal with this part?
Yes, that seems to be correct. Just because hydra-java decided to create classes doesn't mean you have to do the same in your implementation, though. I would rather write a mapper and annotate an internal class that can then stay stable (you need to update the mapping instead). Your GenericResource approach also looks good, btw.
Problem 1: It is not necessary that every propertyName maps to
myvocab:propertyName. Some may map to other vocabs, e.g.
hydra:propertyName, schema:propertyName, rdfs:propertyName,
newVocab:propertyName, etc.
Why don't you store and access the properties with full URLs, i.e., including the vocab? You can of course implement some convenience methods to simplify the work with your vocab.
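For instance, a small helper could expand prefixed names into full URIs before they reach the model (a sketch; the prefix table is illustrative):
import java.util.Map;

public class VocabHelper {
    // Illustrative prefix-to-namespace table; extend as needed.
    private static final Map<String, String> PREFIXES = Map.of(
            "myvocab", "http://example.com/vocab#",
            "schema",  "http://schema.org/",
            "hydra",   "http://www.w3.org/ns/hydra/core#");

    // Expands e.g. "schema:name" to "http://schema.org/name".
    public static String expand(String prefixedName) {
        int colon = prefixedName.indexOf(':');
        String namespace = PREFIXES.get(prefixedName.substring(0, colon));
        return namespace + prefixedName.substring(colon + 1);
    }
}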
Problem 2: How do I validate whether a propertyName belongs to a given
class?
Suggestion: Put a type field/variable in the GenericResource class
JSON-LD's @type in node objects (not in @value objects) corresponds to rdf:type. So simply add it like every other property.
And then check supportedProperty in the vocab corresponding to that class.
Please keep in mind that supportedProperty only tells you which properties are known to be supported. It doesn't tell you which aren't. In other words, it is valid to have properties other than the ones listed as supportedProperty on an object/resource.
Ad Q1:
For the flexibility you want, the client has to be prepared for semantic and structural changes.
In HTML that is possible. The server can change the structure of an HTML form in the way outlined by you, by having firstName and lastName fields rather than just a name field. The client does not break; rather, it adjusts its UI, following the new semantics. The trick is that the UI is generated, not fixed.
A client which tries to unmarshal the incoming message into a fixed representation, such as a Java bean, is out of luck, and I do not think there is any solution by which you could deserialize into a Java bean and survive a change like yours.
If you do not try to deserialize, but stick to reading and processing the incoming message into a more flexible representation, then you can achieve the kind of evolvability you're after. The client must be able to handle the flexible representation accordingly. It could generate UIs rather than binding data to fixed markup, which means, it makes no assumptions about the semantics and structure of the data. If the client absolutely has to know what a data element means, then the server cannot change the related semantics, it can only add new items with the new semantics while keeping the old ones around.
If there were a way for a server to hand out a new structure along with a code-on-demand adapter for existing clients, then the server would gain a lot of evolvability. But I am not aware of any such solution yet.
Ad Q2:
If your goal is to read an incoming json-ld response into a Jena Model on the client side, please see https://jena.apache.org/documentation/io/rdf-input.html
Model model = ModelFactory.createDefaultModel();
String base = null;
model.read(inputStream, base, "JSON-LD");
Thus your client will not break in the sense that it cannot read the incoming response. I think that is what your GenericResource achieves, too. But you could use Jena directly on the client side. Basically, you would avoid unmarshalling into a fixed type.
Many architects and engineers recommend Dependency Injection and other Inversion of Control patterns as a way to improve the testability of your code. There's no denying that Dependency Injection makes code more testable; however, isn't it also a competing goal to Abstraction in general?
I feel conflicted! I wrote an example to illustrate this; it's not super-realistic and I wouldn't design it this way, but I needed a quick and simple example of a class structure with multiple dependencies. The first example is without Dependency Injection, and the second uses Injected Dependencies.
Non-DI Example
package com.stackoverflow.di;

public class EmployeeInventoryAnswerer
{
    /* In reality, at least the store name and product name would be
     * passed in, but this example can't be 8 pages long or the point
     * may be lost.
     */
    public void myEntryPoint()
    {
        Store oaklandStore = new Store("Oakland, CA");
        StoreInventoryManager inventoryManager = new StoreInventoryManager(oaklandStore);
        Product fancyNewProduct = new Product("My Awesome Product");

        if (inventoryManager.isProductInStock(fancyNewProduct))
        {
            System.out.println("Product is in stock.");
        }
    }
}
public class StoreInventoryManager
{
    protected Store store;
    protected InventoryCatalog catalog;

    public StoreInventoryManager(Store store)
    {
        this.store = store;
        this.catalog = new InventoryCatalog();
    }

    public void addProduct(Product product, int quantity)
    {
        this.catalog.addProduct(this.store, product, quantity);
    }

    public boolean isProductInStock(Product product)
    {
        return this.catalog.isInStock(this.store, product);
    }
}
public class InventoryCatalog
{
    protected Database db;

    public InventoryCatalog()
    {
        this.db = new Database("productReadWrite");
    }

    public void addProduct(Store store, Product product, int initialQuantity)
    {
        this.db.query(
            "INSERT INTO store_inventory SET store_id = %d, product_id = %d, quantity = %d"
        ).format(
            store.id, product.id, initialQuantity
        );
    }

    public boolean isInStock(Store store, Product product)
    {
        QueryResult qr = this.db.query(
            "SELECT quantity FROM store_inventory WHERE store_id = %d AND product_id = %d"
        ).format(
            store.id, product.id
        );
        return qr.quantity.toInt() > 0;
    }
}
Dependency-Injected Example
package com.stackoverflow.di;

public class EmployeeInventoryAnswerer
{
    public void myEntryPoint()
    {
        Database db = new Database("productReadWrite");
        InventoryCatalog catalog = new InventoryCatalog(db);
        Store oaklandStore = new Store("Oakland, CA");
        StoreInventoryManager inventoryManager = new StoreInventoryManager(oaklandStore, catalog);
        Product fancyNewProduct = new Product("My Awesome Product");

        if (inventoryManager.isProductInStock(fancyNewProduct))
        {
            System.out.println("Product is in stock.");
        }
    }
}
public class StoreInventoryManager
{
    protected Store store;
    protected InventoryCatalog catalog;

    public StoreInventoryManager(Store store, InventoryCatalog catalog)
    {
        this.store = store;
        this.catalog = catalog;
    }

    public void addProduct(Product product, int quantity)
    {
        this.catalog.addProduct(this.store, product, quantity);
    }

    public boolean isProductInStock(Product product)
    {
        return this.catalog.isInStock(this.store, product);
    }
}
public class InventoryCatalog
{
    protected Database db;

    public InventoryCatalog(Database db)
    {
        this.db = db;
    }

    public void addProduct(Store store, Product product, int initialQuantity)
    {
        this.db.query(
            "INSERT INTO store_inventory SET store_id = %d, product_id = %d, quantity = %d"
        ).format(
            store.id, product.id, initialQuantity
        );
    }

    public boolean isInStock(Store store, Product product)
    {
        QueryResult qr = this.db.query(
            "SELECT quantity FROM store_inventory WHERE store_id = %d AND product_id = %d"
        ).format(
            store.id, product.id
        );
        return qr.quantity.toInt() > 0;
    }
}
(Please feel free to make my example better if you have any ideas! It might not be the best example.)
In my example, I feel that Abstraction has been completely violated by EmployeeInventoryAnswerer having knowledge of the underlying implementation details of StoreInventoryManager.
Shouldn't EmployeeInventoryAnswerer have the perspective of, "Okay, I'll just grab a StoreInventoryManager, give it the name of the product the customer is looking for and the store I want to check, and it will tell me if the product is in stock."? Shouldn't it know nothing about Databases or InventoryCatalogs, since from its perspective those are implementation details it need not concern itself with?
So, where's the balance between testable code with injected dependencies, and information hiding as a principle of abstraction? Even if the middle classes are merely passing through dependencies, the constructor signature alone reveals irrelevant details, right?
More realistically, let's say this is a long-running background application processing data from a DBMS; at what "layer" of the call graph is it appropriate to create and pass around a database connector, while still keeping your code testable without a running DBMS?
I'm very interested in learning about both OOP theory and practicality here, as well as clarifying what seems to be a paradox between DI and Information Hiding/Abstraction.
The Dependency Inversion Principle and, more specifically, Dependency Injection tackle the problem of how to make application code loosely coupled. This means that in many cases you want to prevent the classes in your application from depending on other concrete types, in case those dependent types contain volatile behavior. A volatile dependency is a dependency that, among other things, communicates with out-of-process resources, is non-deterministic, or needs to be replaceable. Tightly coupling to volatile dependencies hinders testability, but also limits the maintainability and flexibility of your application.
But no matter what you do, and no matter how many abstractions you introduce, somewhere in your application you need to take a dependency on a concrete type. So you can’t get rid of this coupling completely—but this shouldn't be a problem: An application that is 100% abstract is also 100% useless.
What this means is that you want to reduce the amount of coupling between classes and modules in your application, and the best way of doing that is to have a single place in the application that depends on all concrete types and instantiates them for you. This is most beneficial because:
You will only have one place in the application that knows about the composition of object graphs, instead of having this knowledge scattered throughout the application
You will only have one place to change if you want to change implementations, or intercept/decorate instances to apply cross-cutting concerns.
This place where you wire everything up should be in your entry-point assembly. It should be the entry-point assembly, because this assembly already depends on all other assemblies anyway, making it already the most volatile part of your application.
According to the Stable-Dependencies Principle (2) dependencies should point in direction of stability, and since the part of the application where you compose your object graphs will be the most volatile part, nothing should depend on it. That’s why this place where you compose your object graphs should be in your entry point assembly.
This entry point in the application where you compose your object graphs is commonly referred to as the Composition Root.
If you feel that EmployeeInventoryAnswerer should not know anything about databases and InventoryCatalogs, it might be the case that EmployeeInventoryAnswerer is mixing infrastructural logic (to build up the object graphs) and application logic. In other words, it might be violating the Single Responsibility Principle. In that case your EmployeeInventoryAnswerer should not be the entry point. Instead you should have a different entry point, and the EmployeeInventoryAnswerer should only get a StoreInventoryManager injected. Your new entry point can then build up the object graph starting with EmployeeInventoryAnswerer and call its AnswerInventoryQuestion method (or whatever you decide to call it).
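To sketch that with the classes from the question (assuming EmployeeInventoryAnswerer is refactored to take a StoreInventoryManager in its constructor, and using the method name suggested above):
public class Program
{
    // The Composition Root: the only place that knows about concrete types.
    public static void main(String[] args)
    {
        Database db = new Database("productReadWrite");
        InventoryCatalog catalog = new InventoryCatalog(db);
        Store oaklandStore = new Store("Oakland, CA");
        StoreInventoryManager inventoryManager = new StoreInventoryManager(oaklandStore, catalog);

        // EmployeeInventoryAnswerer now only receives what it actually uses.
        EmployeeInventoryAnswerer answerer = new EmployeeInventoryAnswerer(inventoryManager);
        answerer.answerInventoryQuestion(new Product("My Awesome Product"));
    }
}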
where's the balance between testable code with injected dependencies,
and information hiding as a principle of abstraction?
The constructor is an implementation detail. Only the Composition Root knows about concrete types and it is, therefore, the only one calling those constructors. When a consuming class depends on abstractions as its incoming/injected dependencies (e.g. by specifying its constructor arguments as abstractions), the consumer knows nothing about the implementation, and that makes it easier to prevent leaking implementation details onto the consumer. If the abstraction itself leaked implementation details, on the other hand, it would violate the Dependency Inversion Principle. And if the consumer decided to cast the dependency back to the implementation, it would in turn violate the Liskov Substitution Principle. Both violations should be prevented.
But even if you would have a consumer that depends on a concrete component, that component can still do information-hiding—it doesn't have to expose its own dependencies (or other values) through public properties. And the fact that this component has a constructor that takes in the component's dependencies, does not make it violate information-hiding, because it is impossible to retrieve the component's dependencies through its constructor (you can only insert dependencies through the constructor; not receive them). And you can't change the component's dependencies, because that component itself will be injected into the consumer, and you can't call a constructor on an already created instance.
As I see it, when talking about a "balance," you are providing a false choice. Instead, it's a matter of applying the SOLID principles correctly, because without applying the SOLID principles, you'll be in a bad place (from a maintainability perspective) anyway—and application of the SOLID principles undoubtedly leads to Dependency Injection.
at what "layer" of the call-graph is it appropriate to create and pass around a database connector
At the very least, the entry point knows about the database connection, because it is only the entry point that should read from the configuration file. Reading from the configuration file should be done up front and in one single place. This allows the application to fail fast if it is misconfigured and prevents you from having reads to the config file scattered throughout your application.
But whether the entry point should be responsible of creating the database connection, that can depend on a lot of factors. I usually have some sort of ConnectionFactory abstraction for this, but YMMV.
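One possible shape for such an abstraction (a sketch; the names are mine, not a standard API):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// The abstraction consumers depend on.
public interface ConnectionFactory {
    Connection createConnection() throws SQLException;
}

// The concrete implementation lives near the Composition Root; only it
// knows the connection string that was read from configuration at startup.
class JdbcConnectionFactory implements ConnectionFactory {
    private final String connectionString;

    JdbcConnectionFactory(String connectionString) {
        this.connectionString = connectionString;
    }

    @Override
    public Connection createConnection() throws SQLException {
        return DriverManager.getConnection(connectionString);
    }
}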
UPDATE
I don't want to pass around a Context or an AppConfig to everything and end up passing dependencies classes don't need
Passing dependencies a class itself doesn't need is typically not the best solution, and might indicate that you are violating the Dependency Inversion Principle and applying the Control Freak anti-pattern. Here's an example of such a problem:
public class Service : ServiceAbs
{
    private IOtherService otherService;

    public Service(IDep1 dep1, IDep2 dep2, IDep3 dep3) {
        this.otherService = new OtherService(dep1, dep2, dep3);
    }
}
Here you see a class Service that takes in 3 dependencies, but it doesn't use them at all. It only forwards them to the OtherService constructor, which it invokes itself. When OtherService is not local to Service (i.e. it lives in a different module or layer), Service violates the Dependency Inversion Principle: Service is now tightly coupled to OtherService. Instead, this is how Service should look:
public class Service : IService
{
    private IOtherService otherService;

    public Service(IOtherService otherService) {
        this.otherService = otherService;
    }
}
Here Service only takes in what it really needs and doesn't depend on any concrete types.
but I also don't want to pass the same 4 things to several different classes
If you have a group of dependencies that are often all injected together into a consumer, chances are that you are violating the Single Responsibility Principle: the consumer might do too much, know too much.
There are several solutions for this, depending on the problem at hand. One thing that comes to mind is refactoring to Facade Services.
It might also be the case that those injected dependencies are cross-cutting concerns. It's often much better to apply cross-cutting concerns transparently, instead of injecting them into dozens or hundreds of consumers (which is a violation of the Open/Closed Principle). You can use the Decorator design pattern, the Chain-of-Responsibility design pattern, or dynamic interception for this.
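For example, a logging concern can be wrapped around an abstraction with a decorator instead of being injected into every consumer (a sketch; an IService interface with a single execute() method is assumed here):
public class LoggingServiceDecorator implements IService
{
    private final IService decoratee;

    public LoggingServiceDecorator(IService decoratee)
    {
        this.decoratee = decoratee;
    }

    @Override
    public void execute()
    {
        System.out.println("Calling service...");
        decoratee.execute(); // delegate to the wrapped implementation
        System.out.println("Service returned.");
    }
}
The Composition Root can then wrap any IService implementation in this decorator without the consumers knowing anything about logging.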
I guess I'm new to basic Java -- Android and .NET both have simple ways of storing application-specific data to private databases, but I'm seeing no such method for data storage in plain ol' Java.
What am I missing here? Hibernate and JPA seem a little excessive.
Edit: I was not looking for a lot of data storage in my application (yet) -- just a persistent record of a few settings like the current user should be good enough. I was basically looking for an analog to Android's shared preferences or .Net's properties.
(This is after the fact, however, since I've received a wealth of responses.)
If the Preferences API is not comprehensive enough and Hibernate/JPA seem excessive, then probably the simplest intermediate alternative, and the easiest to learn, is storing application data in a database via plain Java JDBC.
Simple example below:
// Database URL including driver, username, password, host, port and SID.
String url = "jdbc:oracle:thin:username/password@amrood:1521:EMP";
Connection conn = DriverManager.getConnection(url);

// Create your SQL query.
PreparedStatement st = conn.prepareStatement("UPDATE YourTable SET YourColumn = 12 WHERE TableID = 1337");

// Execute (if your statement doesn't need to return results).
st.execute();
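If the statement does need to return results, reading them back is similar (a sketch using an SQLite JDBC URL; the table and column names are the same made-up ones as above):
import java.sql.*;

public class JdbcReadExample {
    public static void main(String[] args) throws SQLException {
        // SQLite example URL; the driver JAR must be on the classpath.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:app.db");
             PreparedStatement st = conn.prepareStatement(
                     "SELECT YourColumn FROM YourTable WHERE TableID = ?")) {
            st.setInt(1, 1337);
            try (ResultSet rs = st.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("YourColumn"));
                }
            }
        }
    }
}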
SQLite can run locally and within your application, and the driver can be bundled with it. No server required. So between the two you have simple, local persistence in the form of an SQL database.
Preferences Tutorial
Official JDBC Tutorials
SQLite website
Edit:
After seeing that you only require persisting settings/preferences, I would recommend the newer Preferences API (tutorial linked above) over the older Properties API, because the Preferences API handles file location more elegantly for you.
Example:
// This will define a node in which the preferences can be stored
Preferences prefs = Preferences.userRoot().node(this.getClass().getName());

// First we read the values:
// read a boolean value, defaulting to true if it is not set
prefs.getBoolean("Test1", true);
// read a string, defaulting to "Hello World"
prefs.get("Test2", "Hello World");
// read an integer, defaulting to 50
prefs.getInt("Test3", 50);

// Now set the values:
prefs.putBoolean("Test1", false);
prefs.put("Test2", "Hello Europa");
prefs.putInt("Test3", 45);

// Delete the preference setting for the first value:
prefs.remove("Test1");
Are you trying to save a java object to a file or DB? Maybe look at the Serializable interface
http://www.mkyong.com/java/how-to-write-an-object-to-file-in-java/
Properties is a great suggestion for simple key/value configuration type data.
Since you suggested Hibernate/JPA, I'll throw out embedded Derby. It is a database you can embed in your application, even in memory. http://db.apache.org/derby/
Serialization is better avoided for this purpose. I'd save it for clustering or sending objects between VM's.
If you need a simple key-value store, consider Properties.
Here's an example simple usage of Properties.
public class Program {

    static final Properties Prefs = new Properties();
    static final String Default_Path = "prefs.properties";

    static void loadPreferences() throws IOException {
        try (FileInputStream fis = new FileInputStream(Default_Path)) {
            Prefs.loadFromXML(fis);
        } catch (FileNotFoundException ex) {
            // Okay - if not found we'll save it later
        }
    }

    static void savePreferences() throws FileNotFoundException, IOException {
        try (FileOutputStream fos = new FileOutputStream(Default_Path)) {
            Prefs.storeToXML(fos, "Program Preferences");
        }
    }

    public static void main(String[] args) throws IOException {
        loadPreferences();

        if (Prefs.getProperty("FavoriteColor") == null) {
            Prefs.setProperty("FavoriteColor", "black");
        }

        System.out.println(Prefs.getProperty("FavoriteColor"));

        savePreferences();
    }
}
Which generates an XML file, prefs.properties:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Program Preferences</comment>
<entry key="FavoriteColor">black</entry>
</properties>
Key notes
Don't use Properties.put(). Use Properties.setProperty() and Properties.getProperty().
Tend to prefer methods declared as part of Properties and not those inherited from Hashtable.
You specifically asked about serialization in the comments. I would not use it for this task. (I'd probably not use it for any task, actually, but...)
Serialization allows your class's invariants to be broken, i.e. serialize an instance, edit the serialized form outside, then deserialize it back into an object.
implements Serializable effectively gives your class a constructor, so it doesn't work with Singletons and other creational patterns.
implements Serializable also exposes all your fields, except those you declare as transient. That means that private parts of your class are part of the public interface. Boo on you if you change it later (by that I mean, later versions will likely break it without careful controls.)
this list is incomplete...
Some argue that serialization can be done right, but it's a hard one to get right for sure.
The project I'm currently working on is a web application (mostly web services, not JSP or anything like that) which functions as a façade between an older interface at the front (a web page it exposes a couple of web services to) and a more complex interface at the bottom (which it calls via web services).
A lot of the web services at the bottom have quite complex structures (part of a standard I can't change).
What I like to do in cases like that is create simple JAXB POJOs and use XSLT to convert them to their complex structures.
For example:
@XmlRootElement(name = "simplequery", namespace = "http://www.foo.com")
@XmlAccessorType(XmlAccessType.FIELD)
public class SimpleQuery {

    @XmlElement(name = "id")
    private String id;

    public void setId(String id) {
        this.id = id;
    }
}
And I can call it like this:
SimpleQuery sq = new SimpleQuery();
sq.setId("12264736376374");
org.w3c.dom.Document sqDoc = new XsltTransformer("SimpleQuery-to-VeryComplexQuery.xsl").transform(sq);
QueryConsumer.sendQuery(sqDoc);
You'll just have to assume those classes exist and do what you think they do.
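For what it's worth, a minimal XsltTransformer along these lines could be built on javax.xml.transform and JAXBSource (purely illustrative; the real class may look different):
import javax.xml.bind.JAXBContext;
import javax.xml.bind.util.JAXBSource;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMResult;
import javax.xml.transform.stream.StreamSource;
import org.w3c.dom.Document;

public class XsltTransformer {

    private final Transformer transformer;

    public XsltTransformer(String xsltPath) throws Exception {
        // Compile the stylesheet once; reuse it for every transform.
        this.transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(xsltPath));
    }

    // Marshal the JAXB object and run it through the stylesheet.
    public Document transform(Object jaxbObject) throws Exception {
        JAXBContext ctx = JAXBContext.newInstance(jaxbObject.getClass());
        DOMResult result = new DOMResult();
        transformer.transform(new JAXBSource(ctx, jaxbObject), result);
        return (Document) result.getNode();
    }
}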
Here's a relevant snippet from my example XSLT:
<xsl:template match="foo:simplequery">
    <some><complex><structure><id><xsl:value-of select="foo:id"/></id></structure></complex></some>
</xsl:template>
I like doing it like this for 2 reasons:
The "configuration" is done in XSLT and not in compiled code so I can easily maintain it
I don't have to create horrible (what I like to call) "pyramid code" like this with a complex JAXB object:
Some s = new Some();
s.setComplex(new Complex());
s.getComplex().setStructure(new Structure());
s.getComplex().getStructure().setId(new Id());
s.getComplex().getStructure().getId().setValue("1222333232");
Time to get to the point. There are a couple of complex interfaces pointing to different web services and even servers, all in their respective namespaces.
During some refactoring OCD I want to place this part:
SimpleQuery sq = new SimpleQuery();
sq.setId("12264736376374");
org.w3c.dom.Document sqDoc = new XsltTransformer("SimpleQuery-to-VeryComplexQuery.xsl").transform();
...into a specific static class with a static method.
Now the challenge: Find a good name for that class!
What the class does is very simple: it takes a simplified object and converts it to the complex object. Hence my "conveyor belt" pattern. In this case it just spits out DOM Documents, but it could potentially also create JAXB'd versions of that Document (my XsltTransformer can already do this).
I tried a name like ComplexQueryFactory, but it just didn't seem right.
I'm trying to find a solution for configuring a server-side Java application such that different users of the system interact with the system as if it were configured differently (Multitenancy). For example, when my application services a request from user1, I wish my application to respond in Klingon, but for all other users I want it to reply in English. (I've picked a deliberately absurd example, to avoid specifics: the important thing is that I want the app to behave differently for different requests).
Ideally there's a generic solution (i.e. one that allows me to add
user-specific overrides to any part of my config without having to change code).
I've had a look at Apache Commons Configuration, which has built-in support for multitenant configuration, but as far as I can tell this is done by combining some base config with some set of overrides. This means that I'd have a config specifying:
application.lang=english
and, say a user1.properties override file:
application.lang=klingon
Unfortunately it's much easier for our support team if they can see all related configurations in one place, with overrides specified somehow inline, rather than having separate files for base vs. overrides.
I think some combination of Commons Config's multitenancy + something like a Velocity template to describe the conditional elements within underlying config is kind of what I'm aiming for - Commons Config for the ease of interacting with my configuration and Velocity for very expressively describing any overrides, in a single configuration, e.g.:
#if ($user=="user1")
application.lang=klingon
#else
application.lang=english
#end
What solutions are people using for this kind of problem?
Is it acceptable for you to code each server operation like in the following?
void op1(String username, ...)
{
    String userScope = getConfigurationScopeForUser(username);
    String language = cfg.lookupString(userScope, "language");
    int fontSize = cfg.lookupInt(userScope, "font_size");
    ... // business logic expressed in terms of language and fontSize
}
(The above pseudocode assumes the name of a user is passed as a parameter, but you might pass it via another mechanism, for example, thread-local storage.)
If the above is acceptable, then Config4* could satisfy your requirements. Using Config4*, the getConfigurationScopeForUser() method used in the above pseudocode can be implemented as follows (this assumes cfg is a Configuration object that has been previously initialized by parsing a configuration file):
String getConfigurationScopeForUser(String username)
{
    if (cfg.type("user", username) == Configuration.CFG_SCOPE) {
        return Configuration.mergeNames("user", username);
    } else {
        return "user.default";
    }
}
Here is a sample configuration file to work with the above. Most users get their configuration from the "user.default" scope, but Mary and John have their own overrides of some of those default values:
user.default {
    language = "English";
    font_size = "12";
    # ... many other configuration settings
}
user.John {
    @copyFrom "user.default";
    language = "Klingon"; # override a default value
}
user.Mary {
    @copyFrom "user.default";
    font_size = "18"; # override a default value
}
If the above sounds like it might meet your needs, then I suggest you read Chapters 2 and 3 of the "Getting Started Guide" to get a good-enough understanding of the Config4* syntax and API to be able to confirm/refute the suitability of Config4* for your needs. You can find that documentation on the Config4* website.
Disclaimer: I am the maintainer of Config4*.
Edit: I am providing more details in response to comments by bacar.
I have not put Config4* in a Maven repository. However, it is trivial to build Config4* with its bundled Ant build file, because Config4* does not have any dependencies on third-party libraries.
Another approach for using Config4* in a server application (prompted by a comment by bacar) is as follows...
Implement each server operation like in the following pseudo-code:
void op1(String username, ...)
{
    Configuration cfg = getConfigurationForUser(username);
    String language = cfg.lookupString("settings", "language");
    int fontSize = cfg.lookupInt("settings", "font_size");
    ... // business logic expressed in terms of language and fontSize
}
The getConfigurationForUser() method used above can be implemented as shown in the following pseudocode:
HashMap<String,Configuration> map = new HashMap<String,Configuration>();

synchronized Configuration getConfigurationForUser(String username)
{
    Configuration cfg = map.get(username);
    if (cfg == null) {
        // Create a config object tailored for the user & add it to the map
        cfg = Configuration.create();
        cfg.insertString("", "user", username); // in the global scope
        cfg.parse("/path/to/file.cfg");
        map.put(username, cfg);
    }
    return cfg;
}
Here is a sample configuration file to work with the above.
user ?= ""; # will be set via insertString()
settings {
    @if (user @in ["John", "Sam", "Jane"]) {
        language = "Klingon";
    } @else {
        language = "English";
    }
    @if (user == "Mary") {
        font_size = "12";
    } @else {
        font_size = "10";
    }
    ... # many other configuration settings
}
The main comments I have on the two approaches are as follows:
The first approach (one Configuration object that contains lots of variables and scopes) is likely to use slightly less memory than the second approach (many Configuration objects, each with a small number of variables). But my guess is that the memory usage of either approach will be measured in KB or tens of KB, and this will be insignificant compared to the overall memory footprint of your server application.
I prefer the first approach because a single Configuration object is initialized just once, and then it is accessed via read-only lookup()-style operations. This means you don't have to worry about synchronizing access to the Configuration object, even if your server application is multi-threaded. In contrast, the second approach requires you to synchronize access to the HashMap if your server application is multi-threaded.
The overhead of a lookup()-style operation is in the order of, say, nanoseconds or microseconds, while the overhead of parsing a configuration file is in the order of, say, milliseconds or tens of milliseconds (depending on the size of the file). The first approach performs that relatively expensive parsing of a configuration file only once, and that is done in the initialization of the application. In contrast, the second approach performs that relatively expensive parsing of a configuration file "N" times (once for each of "N" users), and that repeated expense occurs while the server is processing requests from clients. That performance hit may or may not be an issue for your application.
I think ease of use is more important than ease of implementation. So, if you feel that the second approach will make it easier to maintain the configuration file, then I suggest you use that approach.
In the second approach, you may wonder why I put most of the variables in a named scope (settings) rather than in the global scope along with the "injected" user variable. I did that for a reason that is outside the scope of your question: separating the "injected" variables from the application-visible variables makes it easier to perform schema validation on the application-visible variables.
Normally, user profiles go into a DB and the user must open a session with a login. The user name may go into the HTTP session (i.e. via cookies), and on every request the server can get the user name and read the profile from the DB. Sure, the "DB" can be a set of config files like joe.properties, jim.properties, admin.properties, etc.
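A minimal sketch of that config-file-per-user variant (file names and keys are illustrative):
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class UserProfileLoader {

    // Load <username>.properties from the working directory (illustrative layout).
    static Properties loadProfile(String username) throws IOException {
        Properties profile = new Properties();
        try (FileInputStream in = new FileInputStream(username + ".properties")) {
            profile.load(in);
        }
        return profile;
    }

    public static void main(String[] args) throws IOException {
        Properties profile = loadProfile("joe");
        System.out.println(profile.getProperty("application.lang", "english"));
    }
}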