Setting up Spring Data LDAP Embedded for tests with base DN - java

I am seeing some strange behaviour with Spring Data LDAP and am wondering how to work around it.
From the looks of it, the base DN is either lost or handled differently depending on whether I use a "proper" LDAP server or the embedded one.
The embedded server is meant for some of my integration tests. Everything works perfectly fine when I configure my LDAP server like this:
spring:
  ldap:
    urls: ldap://localhost:389
    base: dc=e-mehlbox,dc=eu
    username: cn=admin,dc=e-mehlbox,dc=eu
    password: root
in my application.yml. But once I set up the embedded server, my searches fail:
spring:
  ldap:
    urls: ldap://localhost:9321
    base: dc=e-mehlbox,dc=eu
    username: uid=admin
    password: secret
    embedded:
      base-dn: dc=e-mehlbox,dc=eu
      credential:
        username: uid=admin
        password: secret
      ldif: classpath:test-schema.ldif
      port: 9321
      validation:
        enabled: false
With debug logging enabled, the missing base DN becomes visible. Here are the corresponding lines for the working configuration against a "real" LDAP server:
2018-01-10 18:06:55.296 DEBUG 23275 --- [ main] o.s.ldap.core.LdapTemplate : Searching - base=ou=internal,ou=Users, finalFilter=(&(&(objectclass=inetOrgPerson)(objectclass=organizationalPerson)(objectclass=person)(objectclass=qmailUser))(uid=big.bird)), scope=javax.naming.directory.SearchControls@6a013bdd
2018-01-10 18:06:55.311 DEBUG 23275 --- [ main] o.s.l.c.support.AbstractContextSource : Got Ldap context on server 'ldap://localhost:389/dc=e-mehlbox,dc=eu'
The interesting bit is the LDAP context, which includes the base DN.
And this is the output when I switch to the embedded LDAP server:
2018-01-10 18:08:42.836 DEBUG 23569 --- [ main] o.s.ldap.core.LdapTemplate : Searching - base=ou=internal,ou=Users, finalFilter=(&(&(objectclass=inetOrgPerson)(objectclass=organizationalPerson)(objectclass=person)(objectclass=qmailUser))(uid=big.bird)), scope=javax.naming.directory.SearchControls@55202ba6
2018-01-10 18:08:42.871 DEBUG 23569 --- [ main] o.s.l.c.support.AbstractContextSource : Got Ldap context on server 'ldap://localhost:9321'
I am a bit lost, as I cannot find any other configuration option to set the base DN.
Some details of my project:
Right now, I am using the following Spring Data LDAP related dependencies (my project is Gradle driven):
compile (
"org.springframework.boot:spring-boot-starter-data-ldap:1.5.9.RELEASE",
"org.springframework.data:spring-data-ldap:1.0.9.RELEASE"
)
testCompile (
"org.springframework.ldap:spring-ldap-test:2.3.2.RELEASE",
"com.unboundid:unboundid-ldapsdk:4.0.3"
)
And here is one of my entity classes:
@Builder
@AllArgsConstructor
@NoArgsConstructor
@Getter
@Setter
@EqualsAndHashCode(doNotUseGetters = true)
@ToString(doNotUseGetters = true)
@Entry(
        objectClasses = {"inetOrgPerson", "organizationalPerson", "person", "qmailUser"},
        base = "ou=internal,ou=Users")
public class User implements Serializable {

    @Id
    private Name dn;

    @Attribute(name = "entryUuid", readonly = true)
    private String entryUuid;

    @Attribute(name = "uid")
    private String username;

    @Attribute(name = "userPassword")
    private byte[] password;

    @Attribute(name = "mail")
    private String internalMailAddress;

    @Attribute(name = "mailAlternateAddress")
    private List<String> mailAddresses;

    @Attribute(name = "displayName")
    private String displayName;

    @Attribute(name = "accountStatus")
    private String status;

    @Attribute(name = "givenName")
    private String firstName;

    @Attribute(name = "sn")
    private String lastName;

    @Attribute(name = "mailMessageStore")
    private String mailboxHome;
}
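For reference, the entity is queried through a Spring Data LDAP repository; a minimal sketch of such a repository (the interface name and the findByUsername query method are only illustrations, not copied verbatim from my project) could look like this:
import java.util.Optional;
import org.springframework.data.ldap.repository.LdapRepository;

// Sketch only: a Spring Data LDAP repository for the User entity above.
// The derived query method is an assumption for illustration purposes.
public interface UserRepository extends LdapRepository<User> {

    // Derived query on the "uid" attribute (mapped to the username field above)
    Optional<User> findByUsername(String username);
}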
Any ideas? Is this a bug or just me not seeing the obvious?

Thanks to @vdubus and this question, I got it working.
It seems that the embedded LDAP server does not pick up the configured base DN (see the other SO question). Adding the following class to my project fixes this:
import com.unboundid.ldap.listener.InMemoryDirectoryServer;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.ldap.LdapProperties;
import org.springframework.boot.autoconfigure.ldap.embedded.EmbeddedLdapProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;
import org.springframework.core.env.Environment;
import org.springframework.ldap.core.ContextSource;
import org.springframework.ldap.core.support.LdapContextSource;
@Configuration
@EnableConfigurationProperties({LdapProperties.class, EmbeddedLdapProperties.class})
@ConditionalOnClass(InMemoryDirectoryServer.class)
public class EmbeddedLdapConf {

    private final Environment environment;
    private final LdapProperties properties;

    public EmbeddedLdapConf(Environment environment, LdapProperties properties) {
        this.environment = environment;
        this.properties = properties;
    }

    @Bean
    @DependsOn("directoryServer")
    public ContextSource ldapContextSource() {
        final LdapContextSource source = new LdapContextSource();
        source.setUrls(this.properties.determineUrls(this.environment));
        source.setBase(this.properties.getBase());
        return source;
    }
}
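To make the workaround concrete, here is a rough sketch of how it can be exercised in an integration test with Spring Boot 1.5 and JUnit 4 (the test class, the repository, and the assumption that the test LDIF contains a matching entry are illustrative, not copied from my project):
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

// Sketch only: the embedded server starts because of the spring.ldap.embedded.*
// properties, and EmbeddedLdapConf supplies a ContextSource that carries the base DN.
@RunWith(SpringRunner.class)
@SpringBootTest
public class UserRepositoryIT {

    @Autowired
    private UserRepository userRepository; // hypothetical repository, see the sketch above

    @Test
    public void findsUserBelowTheConfiguredBase() {
        // "big.bird" is taken from the search filter in the debug log; the assertion
        // assumes the test LDIF contains a matching entry.
        assertThat(userRepository.findByUsername("big.bird")).isPresent();
    }
}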

If you prefer to solve it purely in the test properties, you can append the base DN to the LDAP URL instead.
I don't know why configuring the embedded LDAP breaks the normal LDAP configuration, but this way you can at least verify that a properties-only fix works without any additional code:
spring:
  ldap:
    urls:
      - ldap://localhost:12345/dc=stuff,dc=test,dc=my
    embedded:
      base-dn: dc=stuff,dc=test,dc=my
      ldif: classpath:test.ldif
      port: 12345
      validation:
        enabled: false

Related

Digital Ocean Spaces listObjects not working

My application.yml
## PostgreSQL
spring:
  datasource:
    url: jdbc:postgresql://myip:5432/db?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false
    username: wqeqe
    password: qweqwewqe$qwewqe
  jpa:
    hibernate:
      ddl-auto: update
    properties:
      hibernate:
        dialect: org.hibernate.dialect.PostgreSQLDialect
    show-sql: true
---
server:
  port: 8085
## DO properties
do:
  spaces:
    key: mykey
    secret: mysecret
    endpoint: digitaloceanspaces.com
    region: myregion
    bucket: mybusket
My config class that reads application.yml:
@Configuration
@PropertySource("classpath:application.yml")
@ConfigurationProperties(prefix = "do.spaces")
public class DoConfig {

    @Value("${key}")
    private String doSpaceKey;

    @Value("${secret}")
    private String doSpaceSecret;

    @Value("${endpoint}")
    private String doSpaceEndpoint;

    @Value("${region}")
    private String doSpaceRegion;

    @Bean
    public AmazonS3 getS3() {
        BasicAWSCredentials creds = new BasicAWSCredentials(doSpaceKey, doSpaceSecret);
        return AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(doSpaceEndpoint, doSpaceRegion))
                .withCredentials(new AWSStaticCredentialsProvider(creds)).build();
    }
}
My service
@Override
public List<TemplateDto> getAll() {
    return (List<TemplateImage>) templateImageRepository.findAll();
}
My repo
@Repository
public interface TemplateImageRepository extends PagingAndSortingRepository<TemplateImage, Long> {
}
Hi everyone, I am using Digital Ocean Spaces in this project, but the problem is that listObjects is not working and does not return my images. What is the problem? Please help me, I am new to working with file servers.
Thanks!
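For reference, a typical way to list the objects in a Space with the AmazonS3 bean configured above is sketched below; the bucket name parameter stands in for the do.spaces.bucket value, and this is only an illustration, not the original code:
import java.util.List;
import java.util.stream.Collectors;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class SpacesLister {

    private final AmazonS3 s3;

    public SpacesLister(AmazonS3 s3) {
        this.s3 = s3;
    }

    // Lists the keys of all objects in the given bucket ("Space").
    // The bucket name is a placeholder for the do.spaces.bucket property.
    public List<String> listKeys(String bucket) {
        ObjectListing listing = s3.listObjects(bucket);
        return listing.getObjectSummaries().stream()
                .map(S3ObjectSummary::getKey)
                .collect(Collectors.toList());
    }
}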

How to create a free text search in MongoDB with Micronaut

I am using reactive MongoDB, with the dependency
implementation("io.micronaut.mongodb:micronaut-mongo-reactive")
and I am trying to implement free text search based on index weights for the POJO below:
public class Product {

    @BsonProperty("_id")
    @BsonId
    private ObjectId id;
    private String name;
    private float price;
    private String description;
}
I tried this simple example:
public Flowable<List<Product>> findByFreeText(String text) {
    LOG.info(String.format("Listener --> Listening value = %s", text));
    Flowable.fromPublisher(this.repository.getCollection("product", List.class)
            .find(new Document("$text", new Document("$search", text)
                    .append("$caseSensitive", false)
                    .append("$diacriticSensitive", false)))).subscribe(item -> {
        System.out.println(item);
    }, error -> {
        System.out.println(error);
    });
    return Flowable.just(List.of(new Product()));
}
I don't think this is the correct way of implementing the Free Text Search.
First, you don't need a Flowable with a List of Product, because a Flowable can emit more than one value, unlike a Single. So it is enough to have Flowable<Product>. Then you can simply return the Flowable instance from the find method.
Text search can then be implemented like this:
public Flowable<Product> findByFreeText(final String query) {
    return Flowable.fromPublisher(repository.getCollection("product", Product.class)
            .find(new Document("$text",
                    new Document("$search", query)
                            .append("$caseSensitive", false)
                            .append("$diacriticSensitive", false)
            )));
}
Then it is up to the consumer of the method how it subscribes to the resulting Flowable. In a controller you can directly return the Flowable instance. If you need to consume it somewhere in your code, you can call subscribe() or blockingSubscribe() and so on.
And you can of course test it with JUnit like this:
@MicronautTest
class SomeServiceTest {

    @Inject
    SomeService service;

    @Test
    void findByFreeText() {
        service.findByFreeText("test")
                .test()
                .awaitCount(1)
                .assertNoErrors()
                .assertValue(p -> p.getName().contains("test"));
    }
}
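One more point worth stating explicitly: $text queries only work when the collection has a text index. Since the question mentions weights, a rough sketch of creating a weighted text index with the reactive driver could look like the following; the field names, the weight values, and the SomeRepository type (standing in for whatever gives you the collection, as in the code above) are assumptions:
import com.mongodb.client.model.IndexOptions;
import com.mongodb.client.model.Indexes;
import io.reactivex.Flowable;
import org.bson.Document;

public class ProductIndexInitializer {

    private final SomeRepository repository; // assumption: same collection access as above

    public ProductIndexInitializer(SomeRepository repository) {
        this.repository = repository;
    }

    // Creates a text index over "name" and "description" where matches in "name"
    // count ten times more than matches in "description". Run once, e.g. at startup.
    public void createTextIndex() {
        Flowable.fromPublisher(repository.getCollection("product", Product.class)
                .createIndex(
                        Indexes.compoundIndex(Indexes.text("name"), Indexes.text("description")),
                        new IndexOptions().weights(new Document("name", 10).append("description", 1))))
                .blockingSubscribe(indexName -> System.out.println("Created index: " + indexName));
    }
}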
Update: you can debug the communication with MongoDB by adding this to the logback.xml logging config file (Micronaut uses Logback as its default logging framework):
<configuration>
....
<logger name="org.mongodb" level="debug"/>
</configuration>
Then you will see this in the log file:
16:20:21.257 [Thread-5] DEBUG org.mongodb.driver.protocol.command - Sending command '{"find": "product", "filter": {"$text": {"$search": "test", "$caseSensitive": false, "$diacriticSensitive": false}}, "batchSize": 2147483647, "$db": "some-database"}' with request id 6 to database some-database on connection [connectionId{localValue:3, serverValue:1634}] to server localhost:27017
16:20:21.258 [Thread-7] DEBUG org.mongodb.driver.protocol.command - Execution of command with request id 6 completed successfully in 2.11 ms on connection [connectionId{localValue:3, serverValue:1634}] to server localhost:27017
Then you can copy the command from the log and try it in the MongoDB CLI, or install MongoDB Compass, where you can experiment further and see whether the command is correct.

How to get values for spring.boot.admin.client.username/password from a database?

I have a Spring Boot Admin project and currently the username and passwords are hardcoded in the application.properties file like this:
spring.boot.admin.client.username=user
spring.boot.admin.client.password=pass
spring.boot.admin.client.instance.metadata.user.name=user
spring.boot.admin.client.instance.metadata.user.password=pass
But I want to read those values from a database instead of hardcoding them. I need these configs so the application can register itself with the admin server as a client. I am a beginner with Spring Boot. How can I do it? Thanks.
Every setting in an application.properties file can also be configured via Java code. First you have to create a DataSource for your project: add the spring-data-jpa dependency and configure the datasource.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
You can find more here: A Guide to JPA with Spring.
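If you want to see what "configure the datasource" can look like in code rather than in application.properties, here is a minimal sketch, assuming Spring Boot 2 (in Boot 1.x DataSourceBuilder lives in a different package); the URL, username, and password are placeholders:
import javax.sql.DataSource;

import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {

    // Placeholder values only; in practice these come from the environment or a secrets store.
    @Bean
    public DataSource dataSource() {
        return DataSourceBuilder.create()
                .url("jdbc:postgresql://localhost:5432/admindb")
                .username("admin")
                .password("secret")
                .build();
    }
}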
To configure, for example, the two properties spring.boot.admin.client.username=user and spring.boot.admin.client.password=pass, you need to create a @Configuration class which creates a ClientProperties bean.
@Configuration
public class AdminClientConfig {

    private final JdbcTemplate jdbcTemplate;
    private final Environment environment;

    public AdminClientConfig(JdbcTemplate jdbcTemplate, Environment environment) {
        super();
        this.jdbcTemplate = jdbcTemplate;
        this.environment = environment;
    }

    @Bean
    public ClientProperties clientProperties() {
        ClientProperties cp = new ClientProperties(environment);
        cp.setUsername(getUsername());
        cp.setPassword(getPassword());
        return cp;
    }

    private String getUsername() {
        String username = jdbcTemplate.queryForObject(
                "select username from AnyTable where id = ?",
                new Object[] { "123" }, String.class);
        return username;
    }

    private String getPassword() {
        String password = jdbcTemplate.queryForObject(
                "select password from AnyTable where id = ?",
                new Object[] { "123" }, String.class);
        return password;
    }
}
The JdbcTemplate already has a database connection and runs the queries to fetch the username and password from the database; the ClientProperties bean can then be populated with them.
P.S.: This code is not tested, but it gives you some hints to get the job done.

Spring boot application properties with different keys [duplicate]

This question already has answers here: Spring Boot - inject map from application.yml (8 answers). Closed 4 years ago.
How can I define an object to read the databases in my application.yml by key?
swaw:
  stage: dev
  ip: x.x.x.x
  port: 3306
  databases:
    mysql:
      url: "jdbc:mysql://...."
      password:
        dba: AAA
        read: BBB
    mssql:
      url: "jdbc:mssql://...."
      password:
        dba: CCC
        read: DDD
    informix:
      ....
I tried with this object:
@ConfigurationProperties(prefix = "swaw.databases")
public class Databases {

    private Map<String, DatabasesConfig> map;

    public static class DatabasesConfig {
        private String url;
        private Password password;
        // getters and setters
    }
}
Per request I get: {"ip":"1x.x.x.x","port":"3306","databases":null}
@emoleumassi - try this one:
@ConfigurationProperties(prefix = "databases")
public class Databases {

    private String url;
    private Password password;
    // getters and setters
}
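For completeness, the linked duplicate (Spring Boot - inject map from application.yml) binds the nested keys into a Map; a rough sketch of that approach against the YAML above, assuming getters and setters are present, could look like this:
import java.util.HashMap;
import java.util.Map;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

// Sketch only: binds swaw.databases.mysql.*, swaw.databases.mssql.*, ... into a map
// keyed by the database name. Getters and setters are required for binding.
@Component
@ConfigurationProperties(prefix = "swaw")
public class SwawProperties {

    private String stage;
    private String ip;
    private int port;
    private Map<String, DatabaseConfig> databases = new HashMap<>();

    public static class DatabaseConfig {
        private String url;
        private Password password;
        // getters and setters
    }

    // getters and setters
}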

My Kafka sink connector for Neo4j fails to load

Introduction:
Let me start by apologizing for any vagueness in my question. I will try to provide as much information on this topic as I can (hopefully not too much); please let me know if I should provide more. Also, I am quite new to Kafka and will probably stumble over terminology.
So, from my understanding on how the sink and source work, I can use the FileStreamSourceConnector provided by the Kafka Quickstart guide to write data(Neo4j commands) to a topic held in a Kafka cluster. Then I can write my own Neo4j sink connector and task to read those commands and send them to one or more Neo4j servers. To keep the project as simple as possible, for now, I based the sink connector and task off of the Kafka Quickstart guide's FileStreamSinkConnector and FileStreamSinkTask.
Kafka's FileStream:
FileStreamSourceConnector
FileStreamSourceTask
FileStreamSinkConnector
FileStreamSinkTask
My Neo4j Sink Connector:
package neo4k.sink;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigDef.Importance;
import org.apache.kafka.common.config.ConfigDef.Type;
import org.apache.kafka.common.utils.AppInfoParser;
import org.apache.kafka.connect.connector.Task;
import org.apache.kafka.connect.sink.SinkConnector;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
public class Neo4jSinkConnector extends SinkConnector {

    public enum Keys {
        ;
        static final String URI = "uri";
        static final String USER = "user";
        static final String PASS = "pass";
        static final String LOG = "log";
    }

    private static final ConfigDef CONFIG_DEF = new ConfigDef()
            .define(Keys.URI, Type.STRING, "", Importance.HIGH, "Neo4j URI")
            .define(Keys.USER, Type.STRING, "", Importance.MEDIUM, "User Auth")
            .define(Keys.PASS, Type.STRING, "", Importance.MEDIUM, "Pass Auth")
            .define(Keys.LOG, Type.STRING, "./neoj4sinkconnecterlog.txt", Importance.LOW, "Log File");

    private String uri;
    private String user;
    private String pass;
    private String logFile;

    @Override
    public String version() {
        return AppInfoParser.getVersion();
    }

    @Override
    public void start(Map<String, String> props) {
        uri = props.get(Keys.URI);
        user = props.get(Keys.USER);
        pass = props.get(Keys.PASS);
        logFile = props.get(Keys.LOG);
    }

    @Override
    public Class<? extends Task> taskClass() {
        return Neo4jSinkTask.class;
    }

    @Override
    public List<Map<String, String>> taskConfigs(int maxTasks) {
        ArrayList<Map<String, String>> configs = new ArrayList<>();
        for (int i = 0; i < maxTasks; i++) {
            Map<String, String> config = new HashMap<>();
            if (uri != null)
                config.put(Keys.URI, uri);
            if (user != null)
                config.put(Keys.USER, user);
            if (pass != null)
                config.put(Keys.PASS, pass);
            if (logFile != null)
                config.put(Keys.LOG, logFile);
            configs.add(config);
        }
        return configs;
    }

    @Override
    public void stop() {
    }

    @Override
    public ConfigDef config() {
        return CONFIG_DEF;
    }
}
My Neo4j Sink Task:
package neo4k.sink;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;
import org.neo4j.driver.v1.AuthTokens;
import org.neo4j.driver.v1.Driver;
import org.neo4j.driver.v1.GraphDatabase;
import org.neo4j.driver.v1.Session;
import org.neo4j.driver.v1.StatementResult;
import org.neo4j.driver.v1.exceptions.Neo4jException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Collection;
import java.util.Map;
public class Neo4jSinkTask extends SinkTask {

    private static final Logger log = LoggerFactory.getLogger(Neo4jSinkTask.class);

    private String uri;
    private String user;
    private String pass;
    private String logFile;

    private Driver driver;
    private Session session;

    public Neo4jSinkTask() {
    }

    @Override
    public String version() {
        return new Neo4jSinkConnector().version();
    }

    @Override
    public void start(Map<String, String> props) {
        uri = props.get(Neo4jSinkConnector.Keys.URI);
        user = props.get(Neo4jSinkConnector.Keys.USER);
        pass = props.get(Neo4jSinkConnector.Keys.PASS);
        logFile = props.get(Neo4jSinkConnector.Keys.LOG);
        driver = null;
        session = null;
        try {
            driver = GraphDatabase.driver(uri, AuthTokens.basic(user, pass));
            session = driver.session();
        } catch (Neo4jException ex) {
            log.trace(ex.getMessage(), logFilename());
        }
    }

    @Override
    public void put(Collection<SinkRecord> sinkRecords) {
        StatementResult result;
        for (SinkRecord record : sinkRecords) {
            result = session.run(record.value().toString());
            log.trace(result.toString(), logFilename());
        }
    }

    @Override
    public void flush(Map<TopicPartition, OffsetAndMetadata> offsets) {
    }

    @Override
    public void stop() {
        if (session != null)
            session.close();
        if (driver != null)
            driver.close();
    }

    private String logFilename() {
        return logFile == null ? "stdout" : logFile;
    }
}
The Issue:
After writing that, I built it, including all of its dependencies except the Kafka ones, into a jar (or uber jar? It was a single file). Then I edited the plugin path in connect-standalone.properties to include that artifact and wrote a properties file for my Neo4j sink connector. I did all of this in an attempt to follow these guidelines.
My Neo4j sink connector properties file:
name=neo4k-sink
connector.class=neo4k.sink.Neo4jSinkConnector
tasks.max=1
uri=bolt://localhost:7687
user=neo4j
pass=Hunter2
topics=connect-test
But upon running Connect in standalone mode, I get this error in the output that shuts down the stream (the error is on line 5 of the log below):
[2017-08-14 12:59:00,150] INFO Kafka version : 0.11.0.0 (org.apache.kafka.common.utils.AppInfoParser:83)
[2017-08-14 12:59:00,150] INFO Kafka commitId : cb8625948210849f (org.apache.kafka.common.utils.AppInfoParser:84)
[2017-08-14 12:59:00,153] INFO Source task WorkerSourceTask{id=local-file-source-0} finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:143)
[2017-08-14 12:59:00,153] INFO Created connector local-file-source (org.apache.kafka.connect.cli.ConnectStandalone:91)
[2017-08-14 12:59:00,153] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:100)
java.lang.IllegalArgumentException: Malformed \uxxxx encoding.
at java.util.Properties.loadConvert(Properties.java:574)
at java.util.Properties.load0(Properties.java:390)
at java.util.Properties.load(Properties.java:341)
at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:429)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:84)
[2017-08-14 12:59:00,156] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:65)
[2017-08-14 12:59:00,156] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:154)
[2017-08-14 12:59:00,168] INFO Stopped ServerConnector@540accf4{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:306)
[2017-08-14 12:59:00,173] INFO Stopped o.e.j.s.ServletContextHandler@6d548d27{/,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:865)
Edit: I should mention that, during the part of connector loading where the output declares which plugins have been added, I do not see any mention of the jar that I built earlier and added to the plugin path in connect-standalone.properties. Here's a snippet for context:
[2017-08-14 12:58:58,969] INFO Added plugin 'org.apache.kafka.connect.file.FileStreamSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-08-14 12:58:58,969] INFO Added plugin 'org.apache.kafka.connect.tools.MockSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-08-14 12:58:58,969] INFO Added plugin 'org.apache.kafka.connect.tools.VerifiableSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-08-14 12:58:58,969] INFO Added plugin 'org.apache.kafka.connect.tools.VerifiableSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-08-14 12:58:58,970] INFO Added plugin 'org.apache.kafka.connect.tools.MockConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
Conclusion:
I am at a loss. I've been testing and researching for a couple of hours and I'm not exactly sure what question to ask, so I'll say thank you for reading if you've gotten this far. If you noticed anything glaring that I may have done wrong in code or in method (e.g. packaging the jar), or think I should provide more context, console logs, or anything else, please let me know. Thank you again.
As pointed out by @Randall Hauch, my properties file had hidden characters in it because it was a rich text document. I fixed this by duplicating the connect-file-sink.properties file provided with Kafka, which I believe is a plain text document, and then renaming and editing that duplicate for my Neo4j sink properties.
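If you want to check a properties file for this kind of problem yourself, a small throwaway sketch like the one below prints the position of any byte outside printable ASCII, which is usually enough to spot a file that was saved as rich text instead of plain text; the file path is a placeholder:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class PropertiesFileChecker {

    public static void main(String[] args) throws IOException {
        // Placeholder path; point this at the suspect properties file.
        byte[] bytes = Files.readAllBytes(Paths.get("config/neo4j-sink.properties"));
        for (int i = 0; i < bytes.length; i++) {
            int b = bytes[i] & 0xFF;
            // Anything outside printable ASCII (plus tab/CR/LF) is suspicious in a
            // .properties file and suggests it is not a plain text document.
            if (b > 126 || (b < 32 && b != '\t' && b != '\r' && b != '\n')) {
                System.out.printf("Suspicious byte 0x%02X at offset %d%n", b, i);
            }
        }
    }
}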
