Running Apache DS embedded in my application - java

I'm trying to run an embedded ApacheDS in my application. After reading http://directory.apache.org/apacheds/1.5/41-embedding-apacheds-into-an-application.html I built this:
public void startDirectoryService() throws Exception {
service = new DefaultDirectoryService();
service.getChangeLog().setEnabled( false );
Partition apachePartition = addPartition("apache", "dc=apache,dc=org");
addIndex(apachePartition, "objectClass", "ou", "uid");
service.startup();
// Inject the apache root entry if it does not already exist
try
{
service.getAdminSession().lookup( apachePartition.getSuffixDn() );
}
catch ( LdapNameNotFoundException lnnfe )
{
LdapDN dnApache = new LdapDN( "dc=Apache,dc=Org" );
ServerEntry entryApache = service.newEntry( dnApache );
entryApache.add( "objectClass", "top", "domain", "extensibleObject" );
entryApache.add( "dc", "Apache" );
service.getAdminSession().add( entryApache );
}
}
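(For reference, addPartition and addIndex above are helper methods from the linked tutorial; they look roughly like this, though exact signatures vary between 1.5.x releases:)
private Partition addPartition(String partitionId, String partitionDn) throws Exception {
    // Create a new JDBM-backed partition and register it with the service
    Partition partition = new JdbmPartition();
    partition.setId(partitionId);
    partition.setSuffix(partitionDn);
    service.addPartition(partition);
    return partition;
}
private void addIndex(Partition partition, String... attrs) {
    // Index the given attributes on the partition
    HashSet<Index<?, ServerEntry>> indexedAttributes = new HashSet<Index<?, ServerEntry>>();
    for (String attribute : attrs) {
        indexedAttributes.add(new JdbmIndex<String, ServerEntry>(attribute));
    }
    ((JdbmPartition) partition).setIndexedAttributes(indexedAttributes);
}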
But I can't connect to the server after running it. What is the default port? Or am I missing something?
Here is the solution:
service = new DefaultDirectoryService();
service.getChangeLog().setEnabled( false );
Partition apachePartition = addPartition("apache", "dc=apache,dc=org");
LdapServer ldapService = new LdapServer();
ldapService.setTransports(new TcpTransport(389));
ldapService.setDirectoryService(service);
service.startup();
ldapService.start();
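To verify connectivity once both are started, a plain JNDI bind is enough. A minimal sketch, assuming the default ApacheDS admin account (uid=admin,ou=system with password secret) and port 389 as configured above:
Hashtable<String, String> env = new Hashtable<String, String>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL, "ldap://localhost:389");
env.put(Context.SECURITY_PRINCIPAL, "uid=admin,ou=system"); // ApacheDS default admin
env.put(Context.SECURITY_CREDENTIALS, "secret");            // default admin password
DirContext ctx = new InitialDirContext(env);
System.out.println(ctx.getAttributes("ou=system"));         // entry that always exists
ctx.close();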

Here is an abbreviated version of how we use it (this appears to use an older 1.5.x API, where the server class was still called LdapService rather than LdapServer):
File workingDirectory = ...;
Partition partition = new JdbmPartition();
partition.setId(...);
partition.setSuffix(...);
DirectoryService directoryService = new DefaultDirectoryService();
directoryService.setWorkingDirectory(workingDirectory);
directoryService.addPartition(partition);
LdapService ldapService = new LdapService();
ldapService.setSocketAcceptor(new SocketAcceptor(null));
ldapService.setIpPort(...);
ldapService.setDirectoryService(directoryService);
directoryService.startup();
ldapService.start();

I wasn't able to get it running with cringe's, Kevin's, or Jörg Pfünder's version; I constantly received NPEs from within my JUnit test. I debugged the problem and combined all of them into a working solution:
public class DirContextSourceAnonAuthTest {
private static DirectoryService directoryService;
private static LdapServer ldapServer;
@BeforeClass
public static void startApacheDs() throws Exception {
String buildDirectory = System.getProperty("buildDirectory");
File workingDirectory = new File(buildDirectory, "apacheds-work");
workingDirectory.mkdir();
directoryService = new DefaultDirectoryService();
directoryService.setWorkingDirectory(workingDirectory);
SchemaPartition schemaPartition = directoryService.getSchemaService()
.getSchemaPartition();
LdifPartition ldifPartition = new LdifPartition();
String workingDirectoryPath = directoryService.getWorkingDirectory()
.getPath();
ldifPartition.setWorkingDirectory(workingDirectoryPath + "/schema");
File schemaRepository = new File(workingDirectory, "schema");
SchemaLdifExtractor extractor = new DefaultSchemaLdifExtractor(
workingDirectory);
extractor.extractOrCopy(true);
schemaPartition.setWrappedPartition(ldifPartition);
SchemaLoader loader = new LdifSchemaLoader(schemaRepository);
SchemaManager schemaManager = new DefaultSchemaManager(loader);
directoryService.setSchemaManager(schemaManager);
schemaManager.loadAllEnabled();
schemaPartition.setSchemaManager(schemaManager);
List<Throwable> errors = schemaManager.getErrors();
if (!errors.isEmpty())
throw new Exception("Schema load failed : " + errors);
JdbmPartition systemPartition = new JdbmPartition();
systemPartition.setId("system");
systemPartition.setPartitionDir(new File(directoryService
.getWorkingDirectory(), "system"));
systemPartition.setSuffix(ServerDNConstants.SYSTEM_DN);
systemPartition.setSchemaManager(schemaManager);
directoryService.setSystemPartition(systemPartition);
directoryService.setShutdownHookEnabled(false);
directoryService.getChangeLog().setEnabled(false);
ldapServer = new LdapServer();
ldapServer.setTransports(new TcpTransport(11389));
ldapServer.setDirectoryService(directoryService);
directoryService.startup();
ldapServer.start();
}
@AfterClass
public static void stopApacheDs() throws Exception {
ldapServer.stop();
directoryService.shutdown();
directoryService.getWorkingDirectory().delete();
}
@Test
public void anonAuth() throws NamingException {
DirContextSource.Builder builder = new DirContextSource.Builder(
"ldap://localhost:11389");
DirContextSource contextSource = builder.build();
DirContext context = contextSource.getDirContext();
assertNotNull(context.getNameInNamespace());
context.close();
}
}

The 2.x sample is located at the following link:
http://svn.apache.org/repos/asf/directory/sandbox/kayyagari/embedded-sample-trunk/src/main/java/org/apache/directory/seserver/EmbeddedADSVerTrunk.java

The default port for LDAP is 389. Note that binding to ports below 1024 requires elevated privileges on Unix-like systems, which is why the examples here use unprivileged ports such as 11389.

Since ApacheDS 1.5.7 you will get a NullPointerException. Please use the tutorial at
http://svn.apache.org/repos/asf/directory/documentation/samples/trunk/embedded-sample

This project helped me:
Embedded sample project
I use this dependency in pom.xml:
<dependency>
<groupId>org.apache.directory.server</groupId>
<artifactId>apacheds-server-integ</artifactId>
<version>1.5.7</version>
<scope>test</scope>
</dependency>

Furthermore, in 2.0.* the working directory and other paths are no longer defined on DirectoryService but in the separate class InstanceLayout, which you need to instantiate and then set on the directory service:
InstanceLayout il = new InstanceLayout(BASE_PATH);
directoryService.setInstanceLayout(il);
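A minimal 2.0-style startup then looks roughly like this (a sketch against the 2.0.* API; the instance path is arbitrary):
DirectoryService directoryService = new DefaultDirectoryService();
directoryService.setInstanceLayout(new InstanceLayout(new File("/tmp/apacheds-instance")));
directoryService.startup();
LdapServer ldapServer = new LdapServer();
ldapServer.setTransports(new TcpTransport(10389));
ldapServer.setDirectoryService(directoryService);
ldapServer.start();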

Related

How to run Spock integration tests with a standalone Tomcat runner?

Our project does not currently use the Spring framework, so we test it against a standalone Tomcat runner.
However, since integration tests such as @SpringBootTest are not possible, Tomcat is started in advance and the HTTP API tests are run with Spock.
Is there a way to make this behave like @SpringBootTest?
TomcatRunner
private Tomcat tomcat = null;
private int port = 8080;
private String contextPath = null;
private String docBase = null;
private Context rootContext = null;
public Tomcat8Launcher(){
init();
}
public Tomcat8Launcher(int port, String contextPath, String docBase){
this.port = port;
this.contextPath = contextPath;
this.docBase = docBase;
init();
}
private void init(){
tomcat = new Tomcat();
tomcat.setPort(port);
tomcat.enableNaming();
if(contextPath == null){
contextPath = "";
}
if(docBase == null){
File base = new File(System.getProperty("java.io.tmpdir"));
docBase = base.getAbsolutePath();
}
rootContext = tomcat.addContext(contextPath, docBase);
}
public void addServlet(String servletName, String uri, HttpServlet servlet){
Tomcat.addServlet(this.rootContext, servletName, servlet);
rootContext.addServletMapping(uri, servletName);
}
public void addListenerServlet(ServletContextListener listener){
rootContext.addApplicationListener(listener.getClass().getName());
}
public void startServer() throws LifecycleException {
tomcat.start();
tomcat.getServer().await();
}
public void stopServer() throws LifecycleException {
tomcat.stop();
}
public static void main(String[] args) throws Exception {
System.setProperty("java.util.logging.manager", "org.apache.logging.log4j.jul.LogManager");
System.setProperty(javax.naming.Context.INITIAL_CONTEXT_FACTORY, "org.apache.naming.java.javaURLContextFactory");
System.setProperty(javax.naming.Context.URL_PKG_PREFIXES, "org.apache.naming");
Tomcat8Launcher tomcatServer = new Tomcat8Launcher();
tomcatServer.addListenerServlet(new ConfigInitBaseServlet());
tomcatServer.addServlet("restServlet", "/rest/*", new RestServlet());
tomcatServer.addServlet("jsonServlet", "/json/*", new JsonServlet());
tomcatServer.startServer();
}
Spock API Test example
class apiTest extends Specification {
//static final Tomcat8Launcher tomcat = new Tomcat8Launcher()
static final String testURL = "http://localhost:8080/api/"
@Shared
def restClient
def setupSpec() {
// tomcat.main()
restClient = new RESTClient(testURL)
}
def 'findAll user'() {
when:
def response = restClient.get([path: 'user/all'])
then:
with(response){
status == 200
contentType == "application/json"
}
}
}
The test does not work if the comment markers are removed from the lines below.
// static final Tomcat8Launcher tomcat = new Tomcat8Launcher()
This line is at the top of the API test.
// tomcat.main()
This line is in the setupSpec() method.
I don't know why, but once Tomcat is running only logs are recorded and the test method is never executed.
Is there a way to fix this?
I would suggest creating a Spock extension to encapsulate everything you need. See Writing Custom Extensions in the Spock docs, as well as the built-in extensions, for inspiration.
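A likely cause of the hang described above: startServer() calls tomcat.getServer().await(), which blocks the calling thread until the server shuts down, so setupSpec() never returns and no feature method ever runs. A minimal sketch of a test-friendly variant:
public void startServer() throws LifecycleException {
    tomcat.start();
    // Deliberately no tomcat.getServer().await() here: await() blocks the
    // calling thread until shutdown, which is what stalls Spock's setupSpec().
}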

Cannot find symbol: method of(), location: interface java.util.List (unit tests)

I'm currently working on a Spring Boot application and am now writing the unit tests. But when I run 'mvn test', I get the 'cannot find symbol: method of()' error from the title.
This is the code
BoardServiceTest.java
@Test
public void testGetBoards() {
Set<Lists> list = new HashSet<Lists>();
List<Board> expectedBoards = List.of(
new Board(1, "Studie", list),
new Board(2, "Todo Applicatie", list)
);
when(this.boardRepo.findAll()).thenReturn(expectedBoards);
var receivedBoards = this.boardService.getBoards();
assertEquals(expectedBoards, receivedBoards);
verify(this.boardRepo, times(1)).findAll();
}
ListsServiceTest.java
@Test
public void testGetLists() throws Exception {
int boardId = 1;
Set<Task> task = new HashSet<>();
List<Lists> expectedLists = List.of(
new Lists(1, "registered", new Board(), task),
new Lists(2, "open", new Board(), task)
);
when(this.listRepository.findAll()).thenReturn(expectedLists);
var receivedLists = this.listService.getLists(boardId);
assertEquals(expectedLists, receivedLists);
verify(this.listRepository, times(1)).findAll();
}
List.of was added in Java 9; you need to make sure that you use JDK 9+ to compile your sources and that the appropriate Java version is set in the Maven pom.xml.
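For Maven that means setting the compiler release in the pom.xml; a sketch (11 is just an example, anything 9+ works for List.of):
<properties>
    <maven.compiler.release>11</maven.compiler.release>
</properties>
If you have to stay on Java 8, Arrays.asList(...) is the usual substitute for List.of(...).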

AWS Athena ClientExecutionTimeoutException

I am trying to do a POC for AWS Athena using Java. I am using the sample code given in https://docs.aws.amazon.com/athena/latest/ug/code-samples.html
BasicAWSCredentials awsCredentials = new BasicAWSCredentials("accesskey","secretkey");
private final AmazonAthenaClientBuilder builder = AmazonAthenaClientBuilder.standard()
.withRegion(Regions.US_EAST_1)
.withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
.withClientConfiguration(new ClientConfiguration().withClientExecutionTimeout(5000));
public AmazonAthena createClient()
{
return builder.build();
}
=============================================================
public class StartQueryExample
{
public static void main(String[] args) throws InterruptedException
{
AthenaClientFactory factory = new AthenaClientFactory();
AmazonAthena client = factory.createClient();
String queryExecutionId = submitAthenaQuery(client);
}
private static String submitAthenaQuery(AmazonAthena client)
{
QueryExecutionContext queryExecutionContext = new QueryExecutionContext().withDatabase("DB_Name");
ResultConfiguration resultConfiguration = new ResultConfiguration()
.withOutputLocation("s3://bucket_name/results");
StartQueryExecutionRequest startQueryExecutionRequest = new StartQueryExecutionRequest()
.withQueryString("select * from tablename")
.withQueryExecutionContext(queryExecutionContext)
.withResultConfiguration(resultConfiguration);
StartQueryExecutionResult startQueryExecutionResult = client.startQueryExecution(startQueryExecutionRequest);
return startQueryExecutionResult.getQueryExecutionId();
}
}
The sample table has only 6 rows and 2 columns.
I tried running the code using Boto3 and it works perfectly fine.
But when running from Java I get ClientExecutionTimeoutException:
Exception in thread "main" com.amazonaws.http.timers.client.ClientExecutionTimeoutException: Client execution did not complete before the specified timeout configuration.
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleAbortedException(AmazonHttpClient.java:813)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:703)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.athena.AmazonAthenaClient.doInvoke(AmazonAthenaClient.java:813)
at com.amazonaws.services.athena.AmazonAthenaClient.invoke(AmazonAthenaClient.java:789)
at com.amazonaws.services.athena.AmazonAthenaClient.executeStartQueryExecution(AmazonAthenaClient.java:694)
at com.amazonaws.services.athena.AmazonAthenaClient.startQueryExecution(AmazonAthenaClient.java:669)
at com.capitalone.aws.athena.StartQueryExample.submitAthenaQuery(StartQueryExample.java:60)
at com.capitalone.aws.athena.StartQueryExample.main(StartQueryExample.java:32)
I have tried running it from Eclipse, and also tried creating a jar and running it on the EC2 instance using the Athena IAM role.
Any help would be useful.
Thanks
Fixed the above issue by changing these lines; apparently the client could not reach the Athena endpoint without the proxy settings below, so the call stalled until the client execution timeout fired:
.withClientConfiguration(buildClientConfig().withClientExecutionTimeout(5000));
private ClientConfiguration buildClientConfig() {
ClientConfiguration clientConfiguration = new ClientConfiguration();
clientConfiguration.setProxyHost("host");
clientConfiguration.setProxyPort(port);
clientConfiguration.setProxyUsername("");
clientConfiguration.setProxyPassword("");
clientConfiguration.setPreemptiveBasicProxyAuth(false);
clientConfiguration.setConnectionTimeout(2000);
clientConfiguration.setRequestTimeout(2000);
return clientConfiguration;
}
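Once the request goes through, the usual next step (as in the AWS sample code) is to poll the query status until it finishes; a rough sketch:
GetQueryExecutionRequest getRequest = new GetQueryExecutionRequest()
        .withQueryExecutionId(queryExecutionId);
String state;
do {
    Thread.sleep(1000L); // simple fixed polling interval
    state = client.getQueryExecution(getRequest)
            .getQueryExecution().getStatus().getState();
} while ("QUEUED".equals(state) || "RUNNING".equals(state));
// state is now SUCCEEDED, FAILED, or CANCELLED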

Using AWS Java's SDKs, how can I terminate the CloudFormation stack of the current instance?

Using the online documentation, I came up with the following code to terminate the current EC2 instance:
public class Ec2Utility {
static private final String LOCAL_META_DATA_ENDPOINT = "http://169.254.169.254/latest/meta-data/";
static private final String LOCAL_INSTANCE_ID_SERVICE = "instance-id";
static public void terminateMe() throws Exception {
TerminateInstancesRequest terminateRequest = new TerminateInstancesRequest().withInstanceIds(getInstanceId());
AmazonEC2 ec2 = new AmazonEC2Client();
ec2.terminateInstances(terminateRequest);
}
static public String getInstanceId() throws Exception {
//SimpleRestClient, is an internal wrapper on http client.
SimpleRestClient client = new SimpleRestClient(LOCAL_META_DATA_ENDPOINT);
HttpResponse response = client.makeRequest(METHOD.GET, LOCAL_INSTANCE_ID_SERVICE);
return IOUtils.toString(response.getEntity().getContent(), "UTF-8");
}
}
My issue is that my EC2 instance sits under an Auto Scaling group, which in turn is under a CloudFormation stack; that is due to my organisation's deployment standards, though this single EC2 instance is all there is for this feature.
So I want to terminate the entire CloudFormation stack from the Java SDK. Keep in mind that I don't have the CloudFormation stack name in advance (just as I didn't have the EC2 instance ID), so I will have to obtain it from the code using API calls.
How can I do that, if I can do it?
You should be able to use the deleteStack method from the CloudFormation SDK:
DeleteStackRequest request = new DeleteStackRequest();
request.setStackName(<stack_name_to_be_deleted>);
AmazonCloudFormationClient client = new AmazonCloudFormationClient (<credentials>);
client.deleteStack(request);
If you don't have the stack name, you should be able to retrieve it from the tags of your instance:
DescribeInstancesRequest request =new DescribeInstancesRequest();
request.setInstanceIds(instancesList);
DescribeInstancesResult disresult = ec2.describeInstances(request);
List <Reservation> list = disresult.getReservations();
for (Reservation res : list) {
    List<Instance> instancelist = res.getInstances();
    for (Instance instance : instancelist) {
        List<Tag> tags = instance.getTags();
        for (Tag tag : tags) {
            if (tag.getKey().equals("aws:cloudformation:stack-name")) {
                tag.getValue(); // name of the stack
            }
        }
    }
}
In the end I achieved the desired behaviour using the following util functions I wrote:
/**
* Delete the CloudFormationStack with the given name.
*
* @param stackName
* @throws Exception
*/
static public void deleteCloudFormationStack(String stackName) throws Exception {
AmazonCloudFormationClient client = new AmazonCloudFormationClient();
DeleteStackRequest deleteStackRequest = new DeleteStackRequest().withStackName(stackName);
client.deleteStack(deleteStackRequest);
}
static public String getCloudFormationStackName() throws Exception {
AmazonEC2 ec2 = new AmazonEC2Client();
String instanceId = getInstanceId();
List<Tag> tags = getEc2Tags(ec2, instanceId);
for (Tag t : tags) {
if (t.getKey().equalsIgnoreCase(TAG_KEY_STACK_NAME)) {
return t.getValue();
}
}
throw new Exception("Couldn't find stack name for instanceId:" + instanceId);
}
static private List<Tag> getEc2Tags(AmazonEC2 ec2, String instanceId) throws Exception {
DescribeInstancesRequest describeInstancesRequest = new DescribeInstancesRequest().withInstanceIds(instanceId);
DescribeInstancesResult describeInstances = ec2.describeInstances(describeInstancesRequest);
List<Reservation> reservations = describeInstances.getReservations();
if (reservations.isEmpty()) {
throw new Exception("DescribeInstances didn't returned reservation for instanceId:" + instanceId);
}
List<Instance> instances = reservations.get(0).getInstances();
if (instances.isEmpty()) {
throw new Exception("DescribeInstances didn't returned instance for instanceId:" + instanceId);
}
return instances.get(0).getTags();
}
// XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
// Example of usage from the code:
deleteCloudFormationStack(getCloudFormationStackName());
// XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
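Note that TAG_KEY_STACK_NAME is not defined in the snippet above; presumably it is the tag key CloudFormation applies to stack resources, as used in the previous answer:
static private final String TAG_KEY_STACK_NAME = "aws:cloudformation:stack-name";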

How to incorporate Environments with JClouds-Chef API?

I am using JClouds-Chef to:
Bootstrap a newly-provisioned Linux VM; and then
Run the chef-client on that node to configure it
It's important to note that all I'm currently configuring Chef with is a role (that's all it needs; everything else is set up on the Chef server for me):
public class ChefClient {
public void configure() throws Exception {
String vmIp = "myapp01.example.com";
String vmSshUsername = "myuser";
String vmSshPassword = "12345";
String endpoint = "https://mychefserver.example.com";
String client = "myuser";
String validator = "chef-validator";
String clientCredential = Files.toString(new File("C:\\Users\\myuser\\sandbox\\chef\\myuser.pem"), Charsets.UTF_8);
String validatorCredential = Files.toString(new File("C:\\Users\\myuser\\sandbox\\chef\\chef-validator.pem"), Charsets.UTF_8);
Properties props = new Properties();
props.put(ChefProperties.CHEF_VALIDATOR_NAME, validator);
props.put(ChefProperties.CHEF_VALIDATOR_CREDENTIAL, validatorCredential);
props.put(Constants.PROPERTY_RELAX_HOSTNAME, "true");
props.put(Constants.PROPERTY_TRUST_ALL_CERTS, "true");
ChefContext ctx = ContextBuilder.newBuilder("chef")
.endpoint(endpoint)
.credentials(client, clientCredential)
.overrides(props)
.modules(ImmutableSet.of(new SshjSshClientModule())) //
.buildView(ChefContext.class);
ChefService chef = ctx.getChefService();
List<String> runlist = new RunListBuilder().addRole("platformcontrol_dev").build();
ArrayList<String> runList2 = new ArrayList<String>();
for(String item : runlist) {
runList2.add(item);
}
BootstrapConfig bootstrapConfig = BootstrapConfig.builder().runList(runList2).build();
chef.updateBootstrapConfigForGroup("jclouds-chef", bootstrapConfig);
Statement bootstrap = chef.createBootstrapScriptForGroup("jclouds-chef");
SshClient.Factory sshFactory = ctx.unwrap().utils()
.injector().getInstance(Key.get(new TypeLiteral<SshClient.Factory>() {}));
SshClient ssh = sshFactory.create(HostAndPort.fromParts(vmIp, 22),
LoginCredentials.builder().user(vmSshUsername).password(vmSshPassword).build());
ssh.connect();
try {
StringBuilder rawScript = new StringBuilder();
Map<String, String> resolvedFunctions = ScriptBuilder.resolveFunctionDependenciesForStatements(
new HashMap<String, String>(), ImmutableSet.of(bootstrap), OsFamily.UNIX);
ScriptBuilder.writeFunctions(resolvedFunctions, OsFamily.UNIX, rawScript);
rawScript.append(bootstrap.render(OsFamily.UNIX));
ssh.put("/tmp/chef-bootstrap.sh", rawScript.toString());
ExecResponse result = ssh.exec("bash /tmp/chef-bootstrap.sh");
} catch(Throwable t) {
System.out.println("Exception: " + t.getMessage());
} finally {
ssh.disconnect();
}
}
}
Our in-house "Chef" (our devops guy) now wants to add the concept of Chef "environments" to all our recipes in addition to the existing roles. This is so that we can specify environment-specific roles for each node. My question: does the JClouds-Chef API handle environments? If so, how might I modify the code to incorporate environment-specific roles?
Is it just as simple as:
BootstrapConfig bootstrapConfig = BootstrapConfig.builder()
.environment("name-of-env-here?").runList(runList2).build();
Yes, it is that simple. That will tell the bootstrap script to register the node in the specified environment.
Take into account, though, that the environment must already exist on the Chef Server. If you want to create nodes in new environments, you can also create the environments programmatically first:
ChefApi api = ctx.unwrapApi(ChefApi.class);
if (api.getEnvironment("environment-name") == null) {
Environment env = Environment.builder()
.name("environment-name")
.description("Some description")
.build();
api.createEnvironment(env);
}
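Putting the two together, once the environment exists (or has just been created as above) you reference it in the bootstrap config exactly as in the question:
BootstrapConfig bootstrapConfig = BootstrapConfig.builder()
        .environment("environment-name")
        .runList(runList2)
        .build();
chef.updateBootstrapConfigForGroup("jclouds-chef", bootstrapConfig);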
