I've written a custom User Store Manager to interface WSO2 to a database managed by ASP.Net's Simple Membership Provider. My main issue is that SMP uses PBKDF2 for password hashing and the standard JDBC User Store doesn't seem to support that.
I basically used https://docs.wso2.com/display/IS530/Writing+a+Custom+User+Store+Manager as a template, since that example implements a different password hashing algorithm, which is exactly my use case.
You can find my POC implementation here: github project wso2_custom_userstore. I built a jar, put it into the dropins directory and restarted the server. The server complained about missing bundle headers, but that was all. When adding a new user store, only the standard store types are offered, nothing more. I then configured a JDBC user store and changed the class to the one I wrote; the only effect I saw was that my previously configured user store vanished. I also tried putting the .jar into the libs directory instead, which didn't change anything.
As that didn't seem to work and the server complained about missing bundle headers, I built an OSGi bundle that exports my CustomUserStoreManager package (the source can be found on GitHub as well; I'm not allowed to add more than two URLs). Now the bundle gets loaded and activated, but nothing more. My class is still nowhere to be seen: it isn't offered as an available class in the "add user store" dialog, it doesn't appear as an available class in the log config, and there is no hint of it anywhere else, not in the log files, not in the server's startup output. Nada.
Am I doing something wrong?
I should add that I am by no means a Java developer, nor a developer at all. I'm evaluating WSO2 for a customer and this is supposed to be a PoC. Once it works and I know that using PBKDF2 hashes is possible, someone more competent is going to build a production version.
Thanks in advance,
SunTsu
WSO2 IS 5.3.0 is based on org.wso2.carbon.user.core 4.4.11 [1], while you are using 4.2.0; the documentation should be updated in this regard. You can find sample code [2] written for WSO2 IS 5.1.0 and an article in [3].
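As for the PBKDF2 part itself, the plain JDK crypto API is enough. Below is a minimal, hedged sketch of the password check a custom user store manager could delegate to; it assumes SimpleMembershipProvider's usual format (Base64 of a 0x00 version byte, a 16-byte salt and a 32-byte subkey derived with PBKDF2-HMAC-SHA1 and 1000 iterations), so please verify those parameters against your actual database before relying on them.

import java.security.spec.KeySpec;
import java.util.Arrays;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class SimpleMembershipPasswordCheck {

    // Assumed SimpleMembershipProvider defaults; adjust if your data says otherwise.
    private static final int ITERATIONS = 1000;
    private static final int SALT_BYTES = 16;
    private static final int SUBKEY_BYTES = 32;

    public static boolean verify(String plainPassword, String storedHashBase64) throws Exception {
        byte[] decoded = Base64.getDecoder().decode(storedHashBase64);
        if (decoded.length != 1 + SALT_BYTES + SUBKEY_BYTES || decoded[0] != 0x00) {
            return false; // unexpected format
        }
        byte[] salt = Arrays.copyOfRange(decoded, 1, 1 + SALT_BYTES);
        byte[] storedSubkey = Arrays.copyOfRange(decoded, 1 + SALT_BYTES, decoded.length);

        // Recompute the subkey from the candidate password and the stored salt.
        KeySpec spec = new PBEKeySpec(plainPassword.toCharArray(), salt, ITERATIONS, SUBKEY_BYTES * 8);
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        byte[] computedSubkey = factory.generateSecret(spec).getEncoded();

        return Arrays.equals(storedSubkey, computedSubkey);
    }
}

A custom store manager built against the 4.4.x user.core would then call something like this when validating credentials, instead of the digest logic used by the default JDBC store.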
I am using Xero's Java SDK to build my application. My application now has to work with several Xero private apps, so I need to manage and perform authentication (OAuth) via the key certificate file and the appropriate consumer key and secret.
I was thinking of simply storing these details in a database table and retrieving them appropriately, more or less as follows:
// create a Xero config instance
Config config = JsonConfig.getInstance();
// build config file - details will be obtained from database
config.setConsumerKey("key");
config.setConsumerSecret("secret");
// this line will have me authenticate with the Xero service using the config file built
XeroClient client = new XeroClient(config);
The problem with this approach is that I am not pointing at the public_privatekey.pfx key file which is another essential element required to authenticate.
The reason I am not doing so is that the SDK does not seem to support this via the Config instance shown above: there is no option to select the appropriate public_private.pfx file (nor an option to just load the contents of the file). It doesn't make sense to me that an SDK would be missing such a feature, which makes me question my approach: have I overlooked a detail, or am I approaching the problem incorrectly?
Take a look at the README under the heading Customize Request Signing:
https://github.com/XeroAPI/Xero-Java/blob/master/README.md
You can provide your own signing mechanism by using the public XeroClient(Config config, SignerFactory signerFactory) constructor. Simply implement the SignerFactory interface with your implementation.
You can also provide a RsaSignerFactory using the public RsaSignerFactory(InputStream privateKeyInputStream, String privateKeyPassword) constructor to fetch keys from any InputStream.
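Putting that together with the question's database-driven setup, a rough sketch could look like the one below. It only uses the two constructors named above; the com.xero.api package name and the load* helpers are assumptions, and depending on what key format RsaSignerFactory expects you may first have to extract the private key from the .pfx container.

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import com.xero.api.*; // assumed package for Config, JsonConfig, RsaSignerFactory, XeroClient

public class MultiTenantXeroClientFactory {

    // Builds a client for one of several Xero private apps, with all secrets coming from a database.
    public XeroClient buildClientFor(String tenantId) {
        Config config = JsonConfig.getInstance();
        config.setConsumerKey(loadConsumerKey(tenantId));
        config.setConsumerSecret(loadConsumerSecret(tenantId));

        // Key material loaded from the database rather than from a file on disk.
        InputStream keyStream = new ByteArrayInputStream(loadPrivateKeyBytes(tenantId));
        String keyPassword = loadPrivateKeyPassword(tenantId);

        // Constructors named in the README excerpt above.
        return new XeroClient(config, new RsaSignerFactory(keyStream, keyPassword));
    }

    // Hypothetical lookups against your own credentials table.
    private String loadConsumerKey(String tenantId) { throw new UnsupportedOperationException("TODO"); }
    private String loadConsumerSecret(String tenantId) { throw new UnsupportedOperationException("TODO"); }
    private byte[] loadPrivateKeyBytes(String tenantId) { throw new UnsupportedOperationException("TODO"); }
    private String loadPrivateKeyPassword(String tenantId) { throw new UnsupportedOperationException("TODO"); }
}

One design note: JsonConfig.getInstance() looks like a shared singleton, so when juggling several private apps it may be safer to supply your own Config implementation per tenant rather than mutating the shared instance.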
In an existing web application (JSP, Struts), localizations are managed through the JSTL tags fmt:setBundle and fmt:message and .properties files.
I'd like to get rid of the .properties files and use an alternative datasource for localizations.
For my goal I've created custom ResourceBundle and ResourceBundle.Control implementations (details of where the data comes from, XML or a database, are out of scope), but I'm wondering how to register and use them in place of the default file-based implementation, so that I'm not forced to modify markup (fmt:message...) across the web application's files.
I saw examples that replace the fmtResourceKey session value, but that is limited to a single bundle and it looks like a hack.
Any good ideas?
Thanks for your help!
OK, it seems I've sorted it out by subclassing/customizing java.util.ResourceBundle, combined with implementations of a custom ResourceBundle.Control and a ResourceBundleControlProvider (injected through the Service Provider Interface, SPI).
A similar solution is described on this page from Oracle:
https://docs.oracle.com/javase/tutorial/i18n/serviceproviders/resourcebundlecontrolprovider.html
but it was lacking an important hint: install your JAR into the Java VM itself (as an extension), since the ResourceBundle.getBundle method internally uses ServiceLoader.loadInstalled, which only finds custom providers installed inside the Java VM, as stated in the loadInstalled documentation:
This method is intended for use when only installed providers are desired. The resulting service will only find and load providers that have been installed into the current Java virtual machine; providers on the application's class path will be ignored.
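For anyone landing here later, this is roughly what the SPI wiring looks like (a sketch assuming Java 8's java.util.spi.ResourceBundleControlProvider; MyDatabaseControl and the base name are hypothetical stand-ins for whatever custom ResourceBundle.Control you wrote):

import java.util.ResourceBundle;
import java.util.spi.ResourceBundleControlProvider;

public class DatabaseResourceBundleControlProvider implements ResourceBundleControlProvider {

    @Override
    public ResourceBundle.Control getControl(String baseName) {
        // Take over only the bundles you care about; returning null falls back
        // to the default file-based lookup.
        if (baseName.startsWith("com.example.messages")) { // hypothetical base name
            return new MyDatabaseControl();                // your custom ResourceBundle.Control
        }
        return null;
    }
}

The provider is declared in META-INF/services/java.util.spi.ResourceBundleControlProvider inside the JAR, and the JAR has to be installed as an extension (for example via java.ext.dirs), precisely because ResourceBundle.getBundle goes through ServiceLoader.loadInstalled.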
Thanks!
I'm starting to use Cloudify and, in the spirit of DevOps where infrastructure is code, I want to have passwords stored in a safe and centralized place.
It seems to me that I am supposed to put the credentials in the .properties file of the relevant service, but versioning the plain-text password seems like a bad idea, and not versioning it also seems like a bad idea (unversioned code).
I know Chef has encrypted data bags and I was wondering whether Cloudify has something similar? If not, is there a different best practice I should be aware of?
Thanks
With the upcoming Cloudify 2.3.0 release, you will be able to add overrides for property settings on the install-* command line. So your recipe should include a properties file with a default, possibly empty, password; this default should not actually do anything.
When you actually install the service, use the overrides to set the real password. This keeps the clear-text password out of your versioned properties file.
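For illustration only, since the exact file layout and override syntax should be checked against the Cloudify 2.3.0 documentation, the split would look roughly like this:

# service .properties file, versioned with the recipe: a harmless placeholder only
dbPassword=CHANGE_ME_AT_INSTALL_TIME

# unversioned overrides kept by ops and supplied via the install-* command line
dbPassword=the-real-secret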
I was going through the details of the CAS project and found that it uses something called Inspektr. I googled for some time but could not find much information about it.
Can anyone provide more details about it and its usage?
Thanks in advance.
Inspektr can be found here: https://github.com/dima767/inspektr with details for usage here: https://github.com/dima767/inspektr/wiki/Inspektr-Auditing
As I understand the project, it collects information from your web flow and lets you save that data through the @Audit annotations it provides. If the configuration is copied from the CAS project you linked, nearly everything is configured to log to a file. Sample data logged would be the client's IP, the remote IP, the action being performed (as configured via Spring and the @Audit annotation), as well as various other things.
If you're familiar with Spring Aspects, it should be a breeze to look through the Inspektr source code to find other uses.
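As a rough illustration of what a usage looks like in application code (the annotation and attribute names below are the ones the CAS configuration uses; treat this as a sketch and double-check against the Inspektr version you have):

import org.jasig.inspektr.audit.annotation.Audit;
import org.springframework.stereotype.Service;

@Service
public class TicketService {

    // The aspect shipped with Inspektr intercepts this call on Spring-managed beans and
    // hands an audit record (principal, action, resource, client IP, timestamp, ...) to the
    // configured audit trail manager(s), e.g. a file/Slf4j logger or a JDBC manager.
    @Audit(action = "TICKET_CREATED",
           actionResolverName = "TICKET_CREATED_ACTION_RESOLVER",
           resourceResolverName = "TICKET_CREATED_RESOURCE_RESOLVER")
    public String createTicket(String username) {
        return "TGT-" + username; // hypothetical business logic
    }
}

The two resolver names refer to Spring beans configured alongside the aspect, which decide how the action and resource strings end up in the audit record.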
Inspektr is a framework that lets you drive audit records from annotations, using an aspect that is provided with the framework. This works for Spring-managed beans only!
Here is the GitHub project wiki:
https://github.com/dima767/inspektr/wiki/Inspektr-Auditing
A good practical reference for config: https://wiki.jasig.org/display/CASUM/Auditing+and+Statistics+Via+Inspektr
The basic principle here is that Inspektr allows these audit frames to be logged to the console, a database or the application server log; you can even define your own managers to log to a different medium if required.
There is a team that develops an enterprise application with a web interface: Java, Tomcat, Struts, MySQL, REST and LDAP calls to external services, and so on.
All configuration is stored in context.xml, a Tomcat-specific file that contains variables available via the servlet context and objects available via JNDI resources.
Developers have no access to the production and QA platforms (as it should be), so context.xml is managed by the support/sysadmin team.
Each release has config-notes.txt with instructions like:
please add "userLimit" variable to context.xml with value "123", rename "DB" resource to "fooDB" and add new database connection to our new server (you should know url and credentials) named "barDb"
That is not good.
Here is my idea for how to solve it.
Each release would include a special config file with the required variable names, descriptions and default values (if any); even web.xml could be used.
Here is a pseudo example:
foo=bar
userLimit=123
barDb=SET_MANUAL(connection to our new server)
And there would be a special tool that the support team runs against the deployment artifact.
Here is how it would look (text after ">" is typed by the support engineer):
Config for version 123 of artifact "mySever".
Enter your config file location> /opt/tomcat/context/myServer.xml
+"foo" value "bar" -- already exists and would not be changed
+"userLimit" value "123" -- adding new
+"barDb"(connection to our new server) please type> jdbc:mysql:host/db
Saving your file as /opt/tomcat/context/myServer.xml
Your environment is now configured to run myServer-123.
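Just to make the idea concrete, here is a rough, purely illustrative sketch of such a merge tool working on flat key=value files (a real version would have to understand Tomcat's context.xml instead):

import java.io.Console;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.Properties;

public class ConfigMerger {

    public static void main(String[] args) throws Exception {
        // args[0]: template shipped with the release (required keys, defaults, SET_MANUAL markers)
        // args[1]: the environment's existing config file, updated in place
        Properties template = load(args[0]);
        Properties target = load(args[1]);
        Console console = System.console(); // assumes the tool is run from an interactive terminal

        for (String key : template.stringPropertyNames()) {
            String defaultValue = template.getProperty(key);
            if (target.containsKey(key)) {
                System.out.println("+\"" + key + "\" -- already exists and will not be changed");
            } else if (defaultValue.startsWith("SET_MANUAL")) {
                String entered = console.readLine("+\"%s\" %s please type> ", key, defaultValue);
                target.setProperty(key, entered);
            } else {
                System.out.println("+\"" + key + "\" value \"" + defaultValue + "\" -- adding new");
                target.setProperty(key, defaultValue);
            }
        }

        try (FileOutputStream out = new FileOutputStream(args[1])) {
            target.store(out, "updated by ConfigMerger");
        }
        System.out.println("Saving your file as " + args[1]);
    }

    private static Properties load(String path) throws Exception {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        return props;
    }
}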
That would give us the ability to deploy the application in any environment and update the configuration when needed.
Do you like my idea? What do you use for environment configuration management? Are there ready-to-use tools for that?
There are plenty of different strategies. All of them are good; it depends on what suits you best.
Build a single artifact and deploy configs to a separate location. The artifact could have placeholder variables and, on deployment, the config could be read in. Have a look at Spring's property placeholder (see the sketch after these options); it works fantastically for webapps that use Spring and doesn't require getting ops involved.
Have an externalised property config that lives outside of the webapp. Keep the location constant and always read from that property config. Update the config at any stage and a restart will pick up the new values.
If you are modifying the environment (i.e. the application server being used, or user/group permissions), look at combining the above methods with Puppet or Chef. Also have a look at managing your config files with those tools.
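For the first two options, a small sketch of the Spring wiring could look like this (Java config; the /etc/myapp/app.properties path is only an example of an ops-owned location outside the webapp):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;
import org.springframework.core.io.FileSystemResource;

@Configuration
public class ExternalConfig {

    // Reads key=value pairs from a file that lives outside the webapp;
    // ops edit that file, and a restart picks up the new values.
    @Bean
    public static PropertySourcesPlaceholderConfigurer properties() {
        PropertySourcesPlaceholderConfigurer configurer = new PropertySourcesPlaceholderConfigurer();
        configurer.setLocation(new FileSystemResource("/etc/myapp/app.properties")); // hypothetical path
        return configurer;
    }

    // Placeholders are then resolved against that file.
    @Value("${userLimit}")
    private int userLimit;
}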
As for whether devs should be given access to prod, it really depends on the company. For smaller companies, where the dev is called every time there is a problem, regardless of whether that problem is server or application related, devs obviously require access to the box.
DevOps is not about giving devs access to the box; it's about giving devs the ability to use infrastructure as a service, to spawn new instances with application X and config Y, and to push their applications into environments without ops. In a large company like ours, it gives devs the ability to manage the applications they put on a server. Operations shouldn't care what version is on there, that's our job; their job is all about keeping the server up and running.
I strongly disagree with your remark that devs shouldn't have access to prod or staging environments. It's this kind of attitude that leads to teams working against each other instead of with each other.
But to answer your question: you are thinking about what is typically called continuous integration ( http://en.wikipedia.org/wiki/Continuous_integration ) and moving towards DevOps. Ideally you should aim for the magic "1-click automated deployment". The guys from Flickr wrote a lot of blog posts (and books) about how they achieved that.
Anyhow, there are a lot of tools in that area. You may want to have a look at things like Hudson/Jenkins or Puppet/Chef.