How to incorporate environments with the JClouds-Chef API?

I am using JClouds-Chef to:
Bootstrap a newly-provisioned Linux VM; and then
Run the chef-client on that node to configure it
It's important to note that all I'm currently configuring Chef with is a role (that's all it needs; everything else is set up on the Chef server for me):
public class ChefClient {
    public void configure() throws IOException {
        String vmIp = "myapp01.example.com";
        String vmSshUsername = "myuser";
        String vmSshPassword = "12345";

        String endpoint = "https://mychefserver.example.com";
        String client = "myuser";
        String validator = "chef-validator";
        String clientCredential = Files.toString(new File("C:\\Users\\myuser\\sandbox\\chef\\myuser.pem"), Charsets.UTF_8);
        String validatorCredential = Files.toString(new File("C:\\Users\\myuser\\sandbox\\chef\\chef-validator.pem"), Charsets.UTF_8);

        Properties props = new Properties();
        props.put(ChefProperties.CHEF_VALIDATOR_NAME, validator);
        props.put(ChefProperties.CHEF_VALIDATOR_CREDENTIAL, validatorCredential);
        props.put(Constants.PROPERTY_RELAX_HOSTNAME, "true");
        props.put(Constants.PROPERTY_TRUST_ALL_CERTS, "true");

        // Connect to the Chef server
        ChefContext ctx = ContextBuilder.newBuilder("chef")
                .endpoint(endpoint)
                .credentials(client, clientCredential)
                .overrides(props)
                .modules(ImmutableSet.of(new SshjSshClientModule()))
                .buildView(ChefContext.class);
        ChefService chef = ctx.getChefService();

        // Build the run list (currently just a single role)
        List<String> runlist = new RunListBuilder().addRole("platformcontrol_dev").build();
        ArrayList<String> runList2 = new ArrayList<String>(runlist);

        // Store the bootstrap config for the group and render the bootstrap script
        BootstrapConfig bootstrapConfig = BootstrapConfig.builder().runList(runList2).build();
        chef.updateBootstrapConfigForGroup("jclouds-chef", bootstrapConfig);
        Statement bootstrap = chef.createBootstrapScriptForGroup("jclouds-chef");

        // Run the bootstrap script on the target VM over SSH
        SshClient.Factory sshFactory = ctx.unwrap().utils()
                .injector().getInstance(Key.get(new TypeLiteral<SshClient.Factory>() {}));
        SshClient ssh = sshFactory.create(HostAndPort.fromParts(vmIp, 22),
                LoginCredentials.builder().user(vmSshUsername).password(vmSshPassword).build());
        ssh.connect();
        try {
            StringBuilder rawScript = new StringBuilder();
            Map<String, String> resolvedFunctions = ScriptBuilder.resolveFunctionDependenciesForStatements(
                    new HashMap<String, String>(), ImmutableSet.of(bootstrap), OsFamily.UNIX);
            ScriptBuilder.writeFunctions(resolvedFunctions, OsFamily.UNIX, rawScript);
            rawScript.append(bootstrap.render(OsFamily.UNIX));

            ssh.put("/tmp/chef-bootstrap.sh", rawScript.toString());
            ExecResponse result = ssh.exec("bash /tmp/chef-bootstrap.sh");
        } catch (Throwable t) {
            System.out.println("Exception: " + t.getMessage());
        } finally {
            ssh.disconnect();
        }
    }
}
Our in-house "Chef" (our devops guy) now wants to add the concept of Chef "environments" to all our recipes in addition to the existing roles. This is so that we can specify environment-specific roles for each node. My question: does the JClouds-Chef API handle environments? If so, how might I modify the code to incorporate environment-specific roles?
Is it just as simple as:
BootstrapConfig bootstrapConfig = BootstrapConfig.builder()
.environment("name-of-env-here?").runList(runList2).build();

Yes, it is that simple. That will tell the bootstrap script to register the node in the specified environment.
Bear in mind, though, that the environment must already exist on the Chef Server. If you want to create nodes in new environments, you can also create them programmatically, as follows:
ChefApi api = ctx.unwrapApi(ChefApi.class);
if (api.getEnvironment("environment-name") == null) {
    Environment env = Environment.builder()
        .name("environment-name")
        .description("Some description")
        .build();
    api.createEnvironment(env);
}
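Putting the two together, a rough sketch of the combined flow could look like this ("dev" is just a placeholder environment name; ctx, chef, and runList2 are the objects already built in the question):
// Make sure the target environment exists on the Chef Server first.
ChefApi api = ctx.unwrapApi(ChefApi.class);
if (api.getEnvironment("dev") == null) {
    api.createEnvironment(Environment.builder()
            .name("dev")
            .description("Development environment")
            .build());
}

// Then register the group's bootstrap config against that environment.
BootstrapConfig bootstrapConfig = BootstrapConfig.builder()
        .environment("dev")
        .runList(runList2)
        .build();
chef.updateBootstrapConfigForGroup("jclouds-chef", bootstrapConfig);
Statement bootstrap = chef.createBootstrapScriptForGroup("jclouds-chef");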

Related

Java Mail Listener: This email server does not support RECENT or USER flags

This is a continuation of Java mail listener using Spring Integration: mail isn't received by multiple app instances. I'm using the ImapMailReceiver code below:
@Bean
public ImapMailReceiver receiver() {
    ImapMailReceiver receiver = new ImapMailReceiver(
            "imaps://username:pwd@mail.company.com/INBOX");
    receiver.setShouldMarkMessagesAsRead(false);
    receiver.setSimpleContent(true);
    receiver.setUserFlag("test-flag");
    //receiver.setJavaMailProperties(javaMailProperties());
    return receiver;
}
My application has been deployed to dev and stage servers. According to the debug logs: "This email server does not support RECENT or USER flags." Hence whatever user flag I'm setting via the code above isn't useful, and mails are received by only one instance of my application (either dev or stage), not all instances. So mails are getting dropped by one instance. How can I make it work so that all of my application instances receive new emails? Should I set any JavaMail properties?
UPDATE: I used the custom SearchTermStrategy below. On every poll, the list of new messages plus a set of old messages is received. I haven't tested it on multiple application instances yet.
private class CustomSearchTermStrategy implements SearchTermStrategy {

    CustomSearchTermStrategy() {
    }

    @Override
    public SearchTerm generateSearchTerm(Flags supportedFlags, Folder folder) {
        SearchTerm searchTerm = null;
        boolean recentFlagSupported = false;
        if (supportedFlags != null) {
            recentFlagSupported = supportedFlags.contains(Flags.Flag.RECENT);
            if (recentFlagSupported) {
                searchTerm = new FlagTerm(new Flags(Flags.Flag.RECENT), true);
            }
            if (supportedFlags.contains(Flags.Flag.ANSWERED)) {
                NotTerm notAnswered = new NotTerm(new FlagTerm(new Flags(Flags.Flag.ANSWERED), true));
                if (searchTerm == null) {
                    searchTerm = notAnswered;
                } else {
                    searchTerm = new AndTerm(searchTerm, notAnswered);
                }
            }
            if (supportedFlags.contains(Flags.Flag.DELETED)) {
                NotTerm notDeleted = new NotTerm(new FlagTerm(new Flags(Flags.Flag.DELETED), true));
                if (searchTerm == null) {
                    searchTerm = notDeleted;
                } else {
                    searchTerm = new AndTerm(searchTerm, notDeleted);
                }
            }
            if (supportedFlags.contains(Flags.Flag.SEEN)) {
                NotTerm notSeen = new NotTerm(new FlagTerm(new Flags(Flags.Flag.SEEN), true));
                if (searchTerm == null) {
                    searchTerm = notSeen;
                } else {
                    searchTerm = new AndTerm(searchTerm, notSeen);
                }
            }
        }
        // if (!recentFlagSupported) {
        //     searchTerm = applyTermsWhenNoRecentFlag(folder, searchTerm);
        // }
        return searchTerm;
    }
}
The simplest solution would be to use different accounts for each environment (forward mails from one to the other so both get them).
If that's not possible, the issue is with the FLAGGED flag, which is unconditionally set and excluded in the default search term.
Unfortunately, the method that sets that flag is private so you can't change that behavior.
I think the only solution is a custom search strategy that does not include NOT (FLAGGED), combined with keeping state locally to ignore messages that you have already read.
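As a rough sketch of that idea (everything here except the SearchTermStrategy interface itself is a hypothetical example, not existing Spring Integration API):
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

import javax.mail.Flags;
import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.search.AndTerm;
import javax.mail.search.FlagTerm;
import javax.mail.search.NotTerm;
import javax.mail.search.SearchTerm;

import org.springframework.integration.mail.SearchTermStrategy;

// Sketch: exclude only SEEN and DELETED (deliberately NOT excluding FLAGGED),
// and remember Message-IDs we have already handled so each application
// instance can skip duplicates on its own.
public class StatefulSearchTermStrategy implements SearchTermStrategy {

    private final Set<String> processedMessageIds =
            Collections.synchronizedSet(new HashSet<String>());

    @Override
    public SearchTerm generateSearchTerm(Flags supportedFlags, Folder folder) {
        SearchTerm notSeen = new NotTerm(new FlagTerm(new Flags(Flags.Flag.SEEN), true));
        SearchTerm notDeleted = new NotTerm(new FlagTerm(new Flags(Flags.Flag.DELETED), true));
        return new AndTerm(notSeen, notDeleted);
    }

    /** Call this from your message handler to filter duplicates locally. */
    public boolean isNew(Message message) throws MessagingException {
        String[] ids = message.getHeader("Message-ID");
        if (ids == null || ids.length == 0) {
            return true; // no id available, treat as new
        }
        return processedMessageIds.add(ids[0]);
    }
}
The strategy can then be set on the ImapMailReceiver (setSearchTermStrategy), with the message handler calling isNew(message) before processing.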

Spring Boot 2 properties configuration

I have some code that works properly on Spring Boot versions prior to 2, and I'm finding it hard to convert it to work with Spring Boot 2.
Can somebody assist?
public static MutablePropertySources buildPropertySources(String propertyFile, String profile)
{
    try
    {
        Properties properties = new Properties();
        YamlPropertySourceLoader loader = new YamlPropertySourceLoader();

        // load common properties
        PropertySource<?> applicationYamlPropertySource = loader.load("properties", new ClassPathResource(propertyFile), null);
        Map<String, Object> source = ((MapPropertySource) applicationYamlPropertySource).getSource();
        properties.putAll(source);

        // load profile properties
        if (null != profile)
        {
            applicationYamlPropertySource = loader.load("properties", new ClassPathResource(propertyFile), profile);
            if (null != applicationYamlPropertySource)
            {
                source = ((MapPropertySource) applicationYamlPropertySource).getSource();
                properties.putAll(source);
            }
        }

        MutablePropertySources propertySources = new MutablePropertySources();
        propertySources.addLast(new PropertiesPropertySource("apis", properties));
        return propertySources;
    }
    catch (Exception e)
    {
        log.error("{} file cannot be found.", propertyFile);
        return null;
    }
}
public static <T> void handleConfigurationProperties(T bean, MutablePropertySources propertySources) throws BindException
{
    ConfigurationProperties configurationProperties = bean.getClass().getAnnotation(ConfigurationProperties.class);
    if (null != configurationProperties && null != propertySources)
    {
        String prefix = configurationProperties.prefix();
        String value = configurationProperties.value();
        if (null == value || value.isEmpty())
        {
            value = prefix;
        }
        PropertiesConfigurationFactory<?> configurationFactory = new PropertiesConfigurationFactory<>(bean);
        configurationFactory.setPropertySources(propertySources);
        configurationFactory.setTargetName(value);
        configurationFactory.bindPropertiesToTarget();
    }
}
PropertiesConfigurationFactory doesn't exist anymore, and the YamlPropertySourceLoader load method no longer accepts 3 parameters.
(The response is not the same either; when I tried invoking the new method, the response objects were wrapped instead of giving me the direct strings/integers, etc.)
PropertiesConfigurationFactory should be replaced with the Binder class.
Sample code:
ConfigurationPropertySource source = new MapConfigurationPropertySource(
        loadProperties(resource));
Binder binder = new Binder(source);
return binder.bind("initializr", InitializrProperties.class).get();
We were also using PropertiesConfigurationFactory to bind a POJO to a
prefix of the Environment. In 2.0, a brand new Binder API was
introduced that is more flexible and easier to use. Our binding that
took 10 lines of code could be reduced to 3 simple lines.
YamlPropertySourceLoader:
Yes, this class has been changed in version 2. It no longer accepts the third profile parameter, and the method signature has been changed to return List<PropertySource<?>> rather than PropertySource<?>. If you are expecting a single source, take the first element of the list.
Load the resource into one or more property sources. Implementations
may either return a list containing a single source, or in the case of
a multi-document format such as yaml a source for each document in the
resource.
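For example, under Spring Boot 2 the loading-and-binding from the question could be re-sketched roughly like this (MyProps and "my.prefix" are placeholder names standing in for your own @ConfigurationProperties class and prefix):
import java.io.IOException;
import java.util.List;

import org.springframework.boot.context.properties.bind.Binder;
import org.springframework.boot.context.properties.source.ConfigurationPropertySources;
import org.springframework.boot.env.YamlPropertySourceLoader;
import org.springframework.core.env.MutablePropertySources;
import org.springframework.core.env.PropertySource;
import org.springframework.core.io.ClassPathResource;

public class YamlBinding {

    // Placeholder target type; substitute your own @ConfigurationProperties class.
    public static class MyProps {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static MyProps load(String propertyFile) throws IOException {
        YamlPropertySourceLoader loader = new YamlPropertySourceLoader();
        // Boot 2: load(name, resource) returns one PropertySource per YAML document;
        // the old third "profile" argument is gone, so profile-specific documents
        // have to be selected by the caller.
        List<PropertySource<?>> yamlSources =
                loader.load("properties", new ClassPathResource(propertyFile));

        MutablePropertySources propertySources = new MutablePropertySources();
        for (PropertySource<?> source : yamlSources) {
            propertySources.addLast(source);
        }

        // Values may come back wrapped (origin-tracked) in Boot 2; handing the
        // property sources to the Binder avoids dealing with the wrappers directly.
        Binder binder = new Binder(ConfigurationPropertySources.from(propertySources));
        return binder.bind("my.prefix", MyProps.class).get();
    }
}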
Since there is no accepted answer yet, I'll post my full solution, which builds upon the answer from @nationquest:
private ConfigClass loadConfiguration(String path) {
    MutablePropertySources sources = new MutablePropertySources();
    Resource res = new FileSystemResource(path);
    PropertiesFactoryBean propFactory = new PropertiesFactoryBean();
    propFactory.setLocation(res);
    propFactory.setSingleton(false);

    // resolve potential references to local environment variables
    // ('env' is an injected org.springframework.core.env.Environment)
    Properties properties = null;
    try {
        properties = propFactory.getObject();
        for (String p : properties.stringPropertyNames()) {
            properties.setProperty(p, env.resolvePlaceholders(properties.getProperty(p)));
        }
    } catch (IOException e) {
        e.printStackTrace();
    }

    sources.addLast(new PropertiesPropertySource("prefix", properties));
    ConfigurationPropertySource propertySource = new MapConfigurationPropertySource(properties);
    return new Binder(propertySource).bind("prefix", ConfigClass.class).get();
}
The last three lines are the relevant part here.
Dirty code, but this is how I solved it:
https://github.com/mamaorha/easy-wire/tree/master/src/main/java/co/il/nmh/easy/wire/core/utils/properties

How to find the WebLogic JMX attributes for an ObjectName

I am writing a simple client application to connect to WebLogic and list all the libraries that a webapp depends on. However, I am having difficulty finding the right attributes for an ObjectName. For example, look at the sample code below from oracle.com for connecting to the MBeanServer:
public static void initConnection(String hostname, String portString,
        String username, String password) throws IOException,
        MalformedURLException {
    String protocol = "t3";
    Integer portInteger = Integer.valueOf(portString);
    int port = portInteger.intValue();
    String jndiroot = "/jndi/";
    String mserver = "weblogic.management.mbeanservers.edit";
    JMXServiceURL serviceURL = new JMXServiceURL(protocol, hostname, port,
            jndiroot + mserver);
    Hashtable h = new Hashtable();
    h.put(Context.SECURITY_PRINCIPAL, username);
    h.put(Context.SECURITY_CREDENTIALS, password);
    h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
            "weblogic.management.remote");
    connector = JMXConnectorFactory.connect(serviceURL, h);
    connection = connector.getMBeanServerConnection();
}

public ObjectName startEditSession() throws Exception {
    // Get the object name for ConfigurationManagerMBean.
    ObjectName cfgMgr = (ObjectName) connection.getAttribute(service,
            "ConfigurationManager");
    // Instruct MBeanServerConnection to invoke
    // ConfigurationManager.startEdit(int waitTime, int timeout).
    // The startEdit operation returns a handle to DomainMBean, which is
    // the root of the edit hierarchy.
    ObjectName domainConfigRoot = (ObjectName) connection.invoke(cfgMgr,
            "startEdit",
            new Object[] { new Integer(60000), new Integer(120000) },
            new String[] { "java.lang.Integer", "java.lang.Integer" });
    if (domainConfigRoot == null) {
        // Couldn't get the lock
        throw new Exception("Somebody else is editing already");
    }
    return domainConfigRoot;
}
The line
ObjectName cfgMgr = (ObjectName) connection.getAttribute(service,
        "ConfigurationManager");
is referring to a JMX attribute, ConfigurationManager. How can we find all the attributes that are under a given ObjectName in WebLogic?
Thanks for your help!
Never mind! I found the solution.
You get the attributes for an ObjectName by calling getMBeanInfo on the MBeanServerConnection.
Example:
MBeanAttributeInfo[] beanInfo = connection.getMBeanInfo(objectName).getAttributes();
for (MBeanAttributeInfo info : beanInfo) {
    System.out.println(info.getType() + " " + info.getName());
}
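If you also want the current values, not just the names and types, the same connection can read each attribute with getAttribute; a small sketch reusing the connection and objectName from above:
// List every readable attribute of the MBean together with its current value.
MBeanAttributeInfo[] attributeInfos = connection.getMBeanInfo(objectName).getAttributes();
for (MBeanAttributeInfo info : attributeInfos) {
    try {
        Object value = connection.getAttribute(objectName, info.getName());
        System.out.println(info.getType() + " " + info.getName() + " = " + value);
    } catch (Exception e) {
        // Some attributes are write-only or unsupported; skip those.
        System.out.println(info.getType() + " " + info.getName() + " = <unreadable>");
    }
}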
Maybe the WebLogic Classloader Analysis Tool (CAT) can give additional insight out of the box ...

Shiro LDAP Authorization config

Could you please help me with the following situation?
Background information:
I'm using the Vaadin framework.
I'm using the Java security framework Shiro
I'm using ssl.
Authentication works.
Username syntax = pietj@.lcl, jank@.lcl
memberOf field is being used as role.
shiro.ini
[main]
contextFactory = org.apache.shiro.realm.ldap.JndiLdapContextFactory
contextFactory.url = ldaps://<SERVER>:636
contextFactory.systemUsername = <USERNAME>@<COMPANY>
contextFactory.systemPassword = <PASSWORD>
contextFactory.environment[java.naming.security.protocol] = ssl
realm = org.apache.shiro.realm.activedirectory.ActiveDirectoryRealm
realm.ldapContextFactory = $contextFactory
realm.searchBase = "OU=<APPDIR>,DC=<COMPANY>,DC=lcl"
realm.groupRolesMap = "CN=<ROLE>,OU=<APPDIR>,DC=<COMPANY>,DC=lcl":"Admin"
[roles]
# 'Admin' role has permissions *
Admin = *
Goal
Authorization mapping based on the memberOf field from the currentUser.
Problem
currentUser.hasRole("Admin") always returns false.
Questions
Is the above shiro.ini correct?
How do I fix the problem?
I ran into a similar issue using Shiro 1.2.4. Your Shiro configuration is probably OK; the problem most likely lies in the Active Directory configuration.
In my setup some users had the userPrincipalName attribute set, while others hadn't. You can check yours on the AD server, for example with Sysinternals Active Directory Explorer.
This attribute is the one Shiro uses to search for a particular user; it then looks for the groups defined in the memberOf attribute.
Take a look at the ActiveDirectoryRealm.java source code, in particular the method Set<String> getRoleNamesForUser(String username, LdapContext ldapContext); the exact query used is
String searchFilter = "(&(objectClass=*)(userPrincipalName={0}))";
So you have two solutions:
Set userPrincipalName attribute on every user
Change how Shiro searches for users
I went for the second solution. Changing the search query is harder than it should be: you have to customize the queryForAuthorizationInfo and getRoleNamesForUser (because it's private) methods of the ActiveDirectoryRealm class. This is how I did it:
public class CustomActiveDirectoryRealm extends ActiveDirectoryRealm {

    @Override
    protected AuthorizationInfo queryForAuthorizationInfo(PrincipalCollection principals, LdapContextFactory ldapContextFactory) throws NamingException {
        String username = (String) getAvailablePrincipal(principals);
        // Perform context search
        LdapContext ldapContext = ldapContextFactory.getSystemLdapContext();
        Set<String> roleNames = null;
        try {
            roleNames = getRoleNamesForUser(username, ldapContext);
        } finally {
            LdapUtils.closeContext(ldapContext);
        }
        return buildAuthorizationInfo(roleNames);
    }

    // Customize your search query here
    private static final String USER_SEARCH_FILTER = "(&(objectClass=*)(sn={0}))";

    private Set<String> getRoleNamesForUser(String username, LdapContext ldapContext) throws NamingException {
        Set<String> roleNames = new LinkedHashSet<String>();
        SearchControls searchCtls = new SearchControls();
        searchCtls.setSearchScope(SearchControls.SUBTREE_SCOPE);
        String userPrincipalName = username.replace("acegas\\", "");
        if (principalSuffix != null) {
            userPrincipalName += principalSuffix;
        }
        Object[] searchArguments = new Object[]{userPrincipalName};
        NamingEnumeration answer = ldapContext.search(searchBase, USER_SEARCH_FILTER, searchArguments, searchCtls);
        while (answer.hasMoreElements()) {
            SearchResult sr = (SearchResult) answer.next();
            Attributes attrs = sr.getAttributes();
            if (attrs != null) {
                NamingEnumeration ae = attrs.getAll();
                while (ae.hasMore()) {
                    Attribute attr = (Attribute) ae.next();
                    if (attr.getID().equals("memberOf")) {
                        Collection<String> groupNames = LdapUtils.getAllAttributeValues(attr);
                        Collection<String> rolesForGroups = getRoleNamesForGroups(groupNames);
                        roleNames.addAll(rolesForGroups);
                    }
                }
            }
        }
        return roleNames;
    }
}
And then, of course, use this class as the realm in shiro.ini (the contextFactory definition stays as before):
[main]
realm = your.package.CustomActiveDirectoryRealm
realm.ldapContextFactory = $contextFactory
realm.searchBase = "OU=<APPDIR>,DC=<COMPANY>,DC=lcl"
realm.groupRolesMap = "CN=<ROLE>,OU=<APPDIR>,DC=<COMPANY>,DC=lcl":"Admin"

Running Apache DS embedded in my application

I'm trying to run an embedded ApacheDS in my application. After reading http://directory.apache.org/apacheds/1.5/41-embedding-apacheds-into-an-application.html I built this:
public void startDirectoryService() throws Exception {
    service = new DefaultDirectoryService();
    service.getChangeLog().setEnabled( false );

    Partition apachePartition = addPartition("apache", "dc=apache,dc=org");
    addIndex(apachePartition, "objectClass", "ou", "uid");

    service.startup();

    // Inject the apache root entry if it does not already exist
    try
    {
        service.getAdminSession().lookup( apachePartition.getSuffixDn() );
    }
    catch ( LdapNameNotFoundException lnnfe )
    {
        LdapDN dnApache = new LdapDN( "dc=Apache,dc=Org" );
        ServerEntry entryApache = service.newEntry( dnApache );
        entryApache.add( "objectClass", "top", "domain", "extensibleObject" );
        entryApache.add( "dc", "Apache" );
        service.getAdminSession().add( entryApache );
    }
}
But I can't connect to the server after running it. What is the default port? Or am I missing something?
Here is the solution:
service = new DefaultDirectoryService();
service.getChangeLog().setEnabled( false );
Partition apachePartition = addPartition("apache", "dc=apache,dc=org");
LdapServer ldapService = new LdapServer();
ldapService.setTransports(new TcpTransport(389));
ldapService.setDirectoryService(service);
service.startup();
ldapService.start();
Here is an abbreviated version of how we use it:
File workingDirectory = ...;
Partition partition = new JdbmPartition();
partition.setId(...);
partition.setSuffix(...);
DirectoryService directoryService = new DefaultDirectoryService();
directoryService.setWorkingDirectory(workingDirectory);
directoryService.addPartition(partition);
LdapService ldapService = new LdapService();
ldapService.setSocketAcceptor(new SocketAcceptor(null));
ldapService.setIpPort(...);
ldapService.setDirectoryService(directoryService);
directoryService.startup();
ldapService.start();
I wasn't able to make it run with cringe's, Kevin's, or Jörg Pfünder's version; I constantly got NPEs from within my JUnit test. I debugged it and combined all of them into a working solution:
public class DirContextSourceAnonAuthTest {

    private static DirectoryService directoryService;
    private static LdapServer ldapServer;

    @BeforeClass
    public static void startApacheDs() throws Exception {
        String buildDirectory = System.getProperty("buildDirectory");
        File workingDirectory = new File(buildDirectory, "apacheds-work");
        workingDirectory.mkdir();

        directoryService = new DefaultDirectoryService();
        directoryService.setWorkingDirectory(workingDirectory);

        SchemaPartition schemaPartition = directoryService.getSchemaService()
                .getSchemaPartition();

        LdifPartition ldifPartition = new LdifPartition();
        String workingDirectoryPath = directoryService.getWorkingDirectory()
                .getPath();
        ldifPartition.setWorkingDirectory(workingDirectoryPath + "/schema");

        File schemaRepository = new File(workingDirectory, "schema");
        SchemaLdifExtractor extractor = new DefaultSchemaLdifExtractor(
                workingDirectory);
        extractor.extractOrCopy(true);

        schemaPartition.setWrappedPartition(ldifPartition);

        SchemaLoader loader = new LdifSchemaLoader(schemaRepository);
        SchemaManager schemaManager = new DefaultSchemaManager(loader);
        directoryService.setSchemaManager(schemaManager);
        schemaManager.loadAllEnabled();

        schemaPartition.setSchemaManager(schemaManager);

        List<Throwable> errors = schemaManager.getErrors();
        if (!errors.isEmpty())
            throw new Exception("Schema load failed : " + errors);

        JdbmPartition systemPartition = new JdbmPartition();
        systemPartition.setId("system");
        systemPartition.setPartitionDir(new File(directoryService
                .getWorkingDirectory(), "system"));
        systemPartition.setSuffix(ServerDNConstants.SYSTEM_DN);
        systemPartition.setSchemaManager(schemaManager);
        directoryService.setSystemPartition(systemPartition);

        directoryService.setShutdownHookEnabled(false);
        directoryService.getChangeLog().setEnabled(false);

        ldapServer = new LdapServer();
        ldapServer.setTransports(new TcpTransport(11389));
        ldapServer.setDirectoryService(directoryService);

        directoryService.startup();
        ldapServer.start();
    }

    @AfterClass
    public static void stopApacheDs() throws Exception {
        ldapServer.stop();
        directoryService.shutdown();
        directoryService.getWorkingDirectory().delete();
    }

    @Test
    public void anonAuth() throws NamingException {
        DirContextSource.Builder builder = new DirContextSource.Builder(
                "ldap://localhost:11389");
        DirContextSource contextSource = builder.build();

        DirContext context = contextSource.getDirContext();
        assertNotNull(context.getNameInNamespace());
        context.close();
    }
}
The 2.x sample is located at the following link:
http://svn.apache.org/repos/asf/directory/sandbox/kayyagari/embedded-sample-trunk/src/main/java/org/apache/directory/seserver/EmbeddedADSVerTrunk.java
The default port for LDAP is 389.
Since ApacheDS 1.5.7 you will get a NullPointerException. Please use the tutorial at
http://svn.apache.org/repos/asf/directory/documentation/samples/trunk/embedded-sample
This project helped me:
Embedded sample project
I use this dependency in pom.xml:
<dependency>
<groupId>org.apache.directory.server</groupId>
<artifactId>apacheds-server-integ</artifactId>
<version>1.5.7</version>
<scope>test</scope>
</dependency>
Further, in 2.0.* the working directory and other paths are no longer defined on DirectoryService, but rather in a separate class, InstanceLayout, which you need to instantiate and then set:
InstanceLayout il = new InstanceLayout(BASE_PATH);
directoryService.setInstanceLayout(il);
