I have a very simple Quarkus application which accepts input and inserts it into MongoDB using MongoClient.
Controller:
@ApplicationScoped
@Path("/endpoint")
public class A {

    @Inject
    B service;

    @POST
    @Produces(MediaType.APPLICATION_JSON)
    @Consumes(MediaType.APPLICATION_JSON)
    public Document add(List<? extends Document> list) {
        return service.add(list);
    }
}
Service Class:
@ApplicationScoped
public class B {

    @Inject
    MongoClient mongoClient;

    private MongoCollection<Document> getCollection() {
        return mongoClient.getDatabase(DBname).getCollection(coll);
    }

    public Document add(List<? extends Document> list) {
        Document response = new Document();
        getCollection().deleteMany(new BasicDBObject());
        getCollection().insertMany(list);
        response.append("count", list.size());
        return response;
    }
}
As you can see, my service removes the existing data and inserts the new data. For JUnit testing, I am trying to set up an embedded MongoDB and want my service call to use the embedded Mongo, but with no success.
I tried out many approaches discussed on the internet to set up the embedded Mongo, but none worked for me. I want to invoke my POST service, but the actual MongoDB must not get connected. My JUnit class is as below:
@QuarkusTest
public class test {

    @Test
    public void testAdd() {
        List<Document> request = new ArrayList<Document>();
        Document doc = new Document();
        doc.append("Id", "007")
           .append("name", "Nitin");
        request.add(doc);

        given()
            .body(request)
            .header("Content-Type", MediaType.APPLICATION_JSON)
        .when()
            .post("/endpoint")
        .then()
            .statusCode(200);
    }
}
You need to use a different connection-string for your test than for your regular (production) run.
Quarkus can use profiles to do this; the %test profile is automatically selected when running @QuarkusTest tests.
So you can add something like this in your application.properties:
quarkus.mongodb.connection-string=mongodb://host:port
%test.quarkus.mongodb.connection-string=mongodb://localhost:27017
Here mongodb://host:port will be used on a normal run of your application, and mongodb://localhost:27017 will be used from inside your tests.
Then you can use flapdoodle or Testcontainers to launch a MongoDB database on localhost during your test.
More information on configuration profiles: https://quarkus.io/guides/config#configuration-profiles
More information on how to start an external service from a Quarkus test: https://quarkus.io/guides/getting-started-testing#quarkus-test-resource
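For instance, a Quarkus test resource along these lines could start Flapdoodle before the tests and point the %test profile at it. This is a minimal sketch, not from the original answer: the class name is made up, and it reuses the Flapdoodle API that appears in the next answer.

import java.util.Collections;
import java.util.Map;

import de.flapdoodle.embed.mongo.MongodExecutable;
import de.flapdoodle.embed.mongo.MongodProcess;
import de.flapdoodle.embed.mongo.MongodStarter;
import de.flapdoodle.embed.mongo.config.IMongodConfig;
import de.flapdoodle.embed.mongo.config.MongodConfigBuilder;
import de.flapdoodle.embed.mongo.config.Net;
import de.flapdoodle.embed.mongo.distribution.Version;
import de.flapdoodle.embed.process.runtime.Network;
import io.quarkus.test.common.QuarkusTestResourceLifecycleManager;

public class EmbeddedMongoResource implements QuarkusTestResourceLifecycleManager {

    private MongodExecutable mongodExe;
    private MongodProcess mongod;

    @Override
    public Map<String, String> start() {
        try {
            // Start an embedded mongod on the port the %test profile expects
            IMongodConfig config = new MongodConfigBuilder()
                    .version(Version.Main.PRODUCTION)
                    .net(new Net("localhost", 27017, Network.localhostIsIPv6()))
                    .build();
            mongodExe = MongodStarter.getDefaultInstance().prepare(config);
            mongod = mongodExe.start();
        } catch (Exception e) {
            throw new RuntimeException("Could not start embedded MongoDB", e);
        }
        // Override the connection string for the duration of the tests
        return Collections.singletonMap("quarkus.mongodb.connection-string",
                "mongodb://localhost:27017");
    }

    @Override
    public void stop() {
        if (mongod != null) {
            mongod.stop();
            mongodExe.stop();
        }
    }
}

Registering it on the test class with @QuarkusTestResource(EmbeddedMongoResource.class) makes Quarkus start the embedded instance before the @QuarkusTest runs.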
Have you tried Flapdoodle?
package com.example.mongo;
import com.mongodb.BasicDBObject;
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import de.flapdoodle.embed.mongo.MongodExecutable;
import de.flapdoodle.embed.mongo.MongodProcess;
import de.flapdoodle.embed.mongo.MongodStarter;
import de.flapdoodle.embed.mongo.config.IMongodConfig;
import de.flapdoodle.embed.mongo.config.MongodConfigBuilder;
import de.flapdoodle.embed.mongo.config.Net;
import de.flapdoodle.embed.mongo.distribution.Version;
import de.flapdoodle.embed.process.runtime.Network;
import java.util.Date;
import org.junit.After;
import static org.junit.Assert.*;
import org.junit.Before;
import org.junit.Test;
public class EmbeddedMongoTest {

    private static final String DATABASE_NAME = "embedded";

    private MongodExecutable mongodExe;
    private MongodProcess mongod;
    private MongoClient mongo;

    @Before
    public void beforeEach() throws Exception {
        MongodStarter starter = MongodStarter.getDefaultInstance();
        String bindIp = "localhost";
        int port = 12345;
        IMongodConfig mongodConfig = new MongodConfigBuilder()
                .version(Version.Main.PRODUCTION)
                .net(new Net(bindIp, port, Network.localhostIsIPv6()))
                .build();
        this.mongodExe = starter.prepare(mongodConfig);
        this.mongod = mongodExe.start();
        this.mongo = new MongoClient(bindIp, port);
    }

    @After
    public void afterEach() throws Exception {
        if (this.mongod != null) {
            this.mongod.stop();
            this.mongodExe.stop();
        }
    }

    @Test
    public void shouldCreateNewObjectInEmbeddedMongoDb() {
        // given
        MongoDatabase db = mongo.getDatabase(DATABASE_NAME);
        db.createCollection("testCollection");
        MongoCollection<BasicDBObject> col = db.getCollection("testCollection", BasicDBObject.class);

        // when
        col.insertOne(new BasicDBObject("testDoc", new Date()));

        // then
        assertEquals(1L, col.countDocuments());
    }
}
Reference: Embedded MongoDB when running integration tests
Thanks everyone for the suggestions. I declared test collections in the application.properties file. The %test profile gets activated automatically when JUnit tests run, so my services automatically picked up the test collections. I deleted the test collections after my JUnit test cases completed.
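For illustration, the resulting application.properties could look roughly like this; the app.collection-name property is hypothetical, standing in for however the service selects its collection:

quarkus.mongodb.connection-string=mongodb://prod-host:27017
%test.quarkus.mongodb.connection-string=mongodb://localhost:27017
# hypothetical property used by the service to pick the collection
app.collection-name=coll
%test.app.collection-name=coll_test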
Related
I'm running an AWS Elasticsearch service (with OpenSearch 1.1.x) and I'm trying to connect to it from a Spring application using spring-data-elasticsearch. I configured the bean according to the documentation.
Locally, I used an SSH tunnel from my AWS account, with this command:
ssh -4 -i my-creds.pem ec2-user@xxxx.xxxx.xxxx.xxxx -N -L 9200:vpc-my-custom-domain-etc.us-east-1.es.amazonaws.com:443
This lets me connect to the OpenSearch dashboard over localhost in my browser through port 9200.
Using the RestHighLevelClient from OpenSearch and disabling SSL verification, I can connect and it works just fine. Here is the config with the OpenSearch RestHighLevelClient:
import org.apache.http.HttpHost;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.util.Map;
public class OSSCLientWorks {

    private static final Logger log = LoggerFactory.getLogger(OSSCLientWorks.class);

    public void request(String indexName, Map<String, Object> doc) throws IOException {
        // Create a client.
        RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "https"))
                .setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder
                        //.addInterceptorFirst(interceptor) // -> for AwsRequestInterceptor due to some struggles I had, not necessary to work with localhost
                        .setSSLHostnameVerifier((hostname, session) -> true));
        try (RestHighLevelClient hlClient = new RestHighLevelClient(builder)) {
            CreateIndexRequest createIndexRequest = new CreateIndexRequest(indexName);
            var createIndexResp = hlClient.indices().create(createIndexRequest, RequestOptions.DEFAULT);
            log.info("Create index resp {}", createIndexResp);

            IndexRequest indexRequest = new IndexRequest(createIndexResp.index())
                    .id(String.valueOf(doc.get("id")))
                    .source(doc);
            var response = hlClient.index(indexRequest, RequestOptions.DEFAULT);
            var resp = response.toString();
            log.info("response is {}", resp);
        }
    }
}
But when I try with Spring and its reactive client, I get this error:
reactor.core.Exceptions$ErrorCallbackNotImplemented: org.springframework.data.elasticsearch.client.NoReachableHostException: Host 'localhost:9200' not reachable. Cluster state is offline.
Caused by: org.springframework.data.elasticsearch.client.NoReachableHostException: Host 'localhost:9200' not reachable. Cluster state is offline.
Here is the config I used with spring-data-elasticsearch:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.elasticsearch.client.ClientConfiguration;
import org.springframework.data.elasticsearch.client.reactive.ReactiveElasticsearchClient;
import org.springframework.data.elasticsearch.client.reactive.ReactiveRestClients;
import org.springframework.data.elasticsearch.config.AbstractReactiveElasticsearchConfiguration;
import org.springframework.data.elasticsearch.core.ReactiveElasticsearchOperations;
import org.springframework.data.elasticsearch.core.ReactiveElasticsearchTemplate;
import org.springframework.data.elasticsearch.repository.config.EnableElasticsearchRepositories;
import org.springframework.data.elasticsearch.repository.config.EnableReactiveElasticsearchRepositories;
@Configuration
@EnableReactiveElasticsearchRepositories(basePackages = {"com.elastic.repo"})
public class ElasticRestHighLevelClientConfig extends AbstractReactiveElasticsearchConfiguration {

    @Override
    @Bean
    public ReactiveElasticsearchClient reactiveElasticsearchClient() {
        final ClientConfiguration clientConfiguration = ClientConfiguration.builder()
                .connectedTo("localhost:9200")
                .build();
        return ReactiveRestClients.create(clientConfiguration);
    }

    @Bean
    public ReactiveElasticsearchOperations elasticsearchOperations(ReactiveElasticsearchClient reactiveElasticsearchClient) {
        return new ReactiveElasticsearchTemplate(reactiveElasticsearchClient);
    }
}
I also tried some solutions other people posted here on SO and GitHub, but the problem persists. Does anybody have a workaround for this? What am I doing wrong?
Here is a demo I made of the problem.
Thank you very much in advance!
EDIT: clarity
You have to configure the reactive client to use SSL with one of the usingSsl() methods:

@Override
@Bean
public ReactiveElasticsearchClient reactiveElasticsearchClient() {
    final ClientConfiguration clientConfiguration = ClientConfiguration.builder()
            .connectedTo("localhost:9200")
            .usingSsl() // <--
            .build();
    return ReactiveRestClients.create(clientConfiguration);
}
NoReachableHostException is just a generic error thrown when lookupActiveHost (in the HostProvider interface) fails.
You should debug what happens before - for you it's probably here:
@Override
public Mono<ClusterInformation> clusterInfo() {
    return createWebClient(endpoint) //
            .head().uri("/") //
            .exchangeToMono(it -> {
                if (it.statusCode().isError()) {
                    state = ElasticsearchHost.offline(endpoint);
                } else {
                    state = ElasticsearchHost.online(endpoint);
                }
                return Mono.just(state);
            }).onErrorResume(throwable -> {
                state = ElasticsearchHost.offline(endpoint);
                clientProvider.getErrorListener().accept(throwable);
                return Mono.just(state);
            }).map(elasticsearchHost -> new ClusterInformation(Collections.singleton(elasticsearchHost)));
}
See what the real exception is in onErrorResume.
I bet you will get an SSLHandshakeException; you can fix it in the ClientConfiguration with .usingSsl({SSL CONTEXT HERE}).
You can create an insecure context like this (convert to Java if needed):
SSLContext.getInstance("TLS")
    .apply { init(null, InsecureTrustManagerFactory.INSTANCE.trustManagers, SecureRandom()) }
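Converted to Java and dropped into the configuration class from the question, an insecure context might look like this. A sketch for the localhost tunnel only: InsecureTrustManagerFactory (from io.netty.handler.ssl.util) trusts every certificate, so do not use it in production.

@Override
@Bean
public ReactiveElasticsearchClient reactiveElasticsearchClient() {
    try {
        // Trust-everything context, mirroring the Kotlin snippet above (test use only)
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, InsecureTrustManagerFactory.INSTANCE.getTrustManagers(), new SecureRandom());
        final ClientConfiguration clientConfiguration = ClientConfiguration.builder()
                .connectedTo("localhost:9200")
                .usingSsl(sslContext)
                .build();
        return ReactiveRestClients.create(clientConfiguration);
    } catch (GeneralSecurityException e) {
        throw new IllegalStateException("Could not build insecure SSLContext", e);
    }
}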
I have this Spring Boot and Pact test example from Writing Contract Tests with Pact in Spring Boot:
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE,
        properties = "user-service.base-url:http://localhost:8080",
        classes = UserServiceClient.class)
public class UserServiceContractTest {

    @Rule
    public PactProviderRuleMk2 provider = new PactProviderRuleMk2("user-service", null,
            8080, this);

    @Autowired
    private UserServiceClient userServiceClient;

    @Pact(consumer = "messaging-app")
    public RequestResponsePact pactUserExists(PactDslWithProvider builder) {
        return builder.given("User 1 exists")
                .uponReceiving("A request to /users/1")
                .path("/users/1")
                .method("GET")
                .willRespondWith()
                .status(200)
                .body(LambdaDsl.newJsonBody((o) -> o
                        .stringType("name", "user name for CDC")
                ).build())
                .toPact();
    }

    @PactVerification(fragment = "pactUserExists")
    @Test
    public void userExists() {
        final User user = userServiceClient.getUser("1");
        assertThat(user.getName()).isEqualTo("user name for CDC");
    }
}
In order to generate the PACT file I need to start a mock Provider, which is set up as:
public PactProviderRuleMk2 provider = new PactProviderRuleMk2("user-service", null,
8080, this);
The @SpringBootTest annotation provides a mock web environment running on http://localhost:8080:

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE,
        properties = "user-service.base-url:http://localhost:8080",
        classes = UserServiceClient.class)
Is it possible to do something similar in Micronaut? Can I use an EmbeddedServer running on a specified port, such as http://localhost:8080, so my Pact mock provider can listen on that port?
I would like to specify the port in the test class, not in an application.yml file.
Any ideas?
You can use Micronaut and Pact with JUnit 5. Here is a simple example based on hello-world-java:
Add pact dependencies to build.gradle:
// pact
compile 'au.com.dius:pact-jvm-consumer-junit5_2.12:3.6.10'
compile 'au.com.dius:pact-jvm-provider-junit5_2.12:3.6.10'
// client for target example
compile 'io.micronaut:micronaut-http-client'
FooService.java:
import io.micronaut.http.client.RxHttpClient;
import io.micronaut.http.client.annotation.Client;
import javax.inject.Inject;
import javax.inject.Singleton;
import static io.micronaut.http.HttpRequest.GET;
@Singleton
public class FooService {

    @Inject
    @Client("http://localhost:8080")
    private RxHttpClient httpClient;

    public String getFoo() {
        return httpClient.retrieve(GET("/foo")).blockingFirst();
    }
}
FooServiceTest.java:
import au.com.dius.pact.consumer.Pact;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.model.RequestResponsePact;
import io.micronaut.test.annotation.MicronautTest;
import org.junit.jupiter.api.extension.ExtendWith;
import org.junit.jupiter.api.Test;
import javax.inject.Inject;
import static org.junit.jupiter.api.Assertions.assertEquals;
@MicronautTest
@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "foo", hostInterface = "localhost", port = "8080")
public class FooServiceTest {

    @Inject
    FooService fooService;

    @Pact(provider = "foo", consumer = "foo")
    public RequestResponsePact pact(PactDslWithProvider builder) {
        return builder
                .given("test foo")
                .uponReceiving("test foo")
                .path("/foo")
                .method("GET")
                .willRespondWith()
                .status(200)
                .body("{\"foo\":\"bar\"}")
                .toPact();
    }

    @Test
    public void testFoo() {
        // assertEquals expects the expected value as the first argument
        assertEquals("{\"foo\":\"bar\"}", fooService.getFoo());
    }
}
I haven't been able to find a comprehensive example of connecting to and then querying a remote Apache TinkerPop graph database with Gremlin and Java, and I can't quite get it to work. Can anyone that's done something like this before offer any advice?
I've set up an Azure Cosmos database in Graph-DB mode, which expects Gremlin queries in order to modify and access its data. I have the database host name, port, username, and password, and I'm able to execute queries, but only if I pass in a big ugly query string. I would like to be able to leverage the org.apache.tinkerpop.gremlin.structure.Graph traversal methods, but I can't quite get it working.
import java.util.List;
import java.util.concurrent.CompletableFuture;
import org.apache.tinkerpop.gremlin.driver.Result;
import org.apache.tinkerpop.gremlin.driver.ResultSet;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
//More imports...
@Service
public class SearchService {

    private final static Logger log = LoggerFactory.getLogger(SearchService.class);

    @Autowired
    private GraphDbConnection graphDbConnection;

    @Autowired
    private Graph graph;

    public Object workingQuery() {
        try {
            String query = "g.V('1234').outE('related').inV().both().as('v').project('vertex').by(select('v')).by(bothE().fold())";
            log.info("Submitting this Gremlin query: {}", query);
            ResultSet results = graphDbConnection.executeQuery(query);
            CompletableFuture<List<Result>> completableFutureResults = results.all();
            List<Result> resultList = completableFutureResults.get();
            Result result = resultList.get(0);
            log.info("Query result: {}", result.toString());
            return result.toString();
        } catch (Exception e) {
            log.error("Error fetching data.", e);
        }
        return null;
    }

    public Object failingQuery() {
        return graph.traversal().V(1234).outE("related").inV()
                .both().as("v").project("vertex").by("v").bothE().fold()
                .next();
        /* I get an Exception:
           "org.apache.tinkerpop.gremlin.process.remote.RemoteConnectionException:
           java.lang.RuntimeException: java.lang.RuntimeException:
           java.util.concurrent.TimeoutException: Timed out while waiting for an
           available host - check the client configuration and connectivity to the
           server if this message persists" */
    }
}
This is my configuration class:
import java.util.HashMap;
import java.util.Map;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.MessageSerializer;
import org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection;
import org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV2d0;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.util.GraphFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class GraphDbConfig {

    private final static Logger log = LoggerFactory.getLogger(GraphDbConfig.class);

    @Value("${item.graph.hostName}")
    private String hostName;

    @Value("${item.graph.port}")
    private int port;

    @Value("${item.graph.username}")
    private String username;

    @Value("${item.graph.password}")
    private String password;

    @Value("${item.graph.enableSsl}")
    private boolean enableSsl;

    @Bean
    public Graph graph() {
        Map<String, String> graphConfig = new HashMap<>();
        graphConfig.put("gremlin.graph",
                "org.apache.tinkerpop.gremlin.process.remote.RemoteGraph");
        graphConfig.put("gremlin.remoteGraph.remoteConnectionClass",
                "org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection");
        Graph g = GraphFactory.open(graphConfig);
        g.traversal().withRemote(DriverRemoteConnection.using(cluster()));
        return g;
    }

    @Bean
    public Cluster cluster() {
        Cluster cluster = null;
        try {
            MessageSerializer serializer = new GraphSONMessageSerializerGremlinV2d0();
            Cluster.Builder clusterBuilder = Cluster.build().addContactPoint(hostName)
                    .serializer(serializer)
                    .port(port).enableSsl(enableSsl)
                    .credentials(username, password);
            cluster = clusterBuilder.create();
        } catch (Exception e) {
            log.error("Error in connecting to host address.", e);
        }
        return cluster;
    }
}
And currently I have to define this connection component in order to send queries to the database:
import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.ResultSet;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
@Component
public class GraphDbConnection {

    private final static Logger log = LoggerFactory.getLogger(GraphDbConnection.class);

    @Autowired
    private Cluster cluster;

    public ResultSet executeQuery(String query) {
        Client client = connect();
        ResultSet results = client.submit(query);
        closeConnection(client);
        return results;
    }

    private Client connect() {
        Client client = null;
        try {
            client = cluster.connect();
        } catch (Exception e) {
            log.error("Error in connecting to host address.", e);
        }
        return client;
    }

    private void closeConnection(Client client) {
        client.close();
    }
}
You cannot leverage the remote API with CosmosDB yet, as it does not support Gremlin bytecode.
https://github.com/Azure/azure-documentdb-dotnet/issues/439
https://feedback.azure.com/forums/263030-azure-cosmos-db/suggestions/33632779-support-gremlin-bytecode-to-enable-the-fluent-api
You would have to continue with strings until then. Though, since you are using Java, you could try a somewhat unadvertised feature: GroovyTranslator.
gremlin> g = EmptyGraph.instance().traversal()
==>graphtraversalsource[emptygraph[empty], standard]
gremlin> translator = GroovyTranslator.of('g')
==>translator[g:gremlin-groovy]
gremlin> translator.translate(g.V().out('knows').has('person','name','marko').asAdmin().getBytecode())
==>g.V().out("knows").has("person","name","marko")
As you can see, it takes Gremlin bytecode and converts it into a string of Gremlin that you could submit to CosmosDB. Later, when CosmosDB supports bytecode, you could drop the GroovyTranslator, change from EmptyGraph construction of your GraphTraversalSource, and everything should start working. To make this really seamless, you could go the extra step and write a TraversalStrategy that would do something similar to TinkerPop's RemoteStrategy. Instead of submitting bytecode as that strategy does, you would just use GroovyTranslator and submit the string of Gremlin. That approach would make it even easier to switch over when CosmosDB supports bytecode, because then all you would have to do is remove your custom TraversalStrategy and reconfigure your remote GraphTraversalSource in the standard way.
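For reference, a rough Java equivalent of the console session above, wired into the question's GraphDbConnection; the traversal itself is just an example, and GroovyTranslator lives in org.apache.tinkerpop.gremlin.groovy.jsr223 in this TinkerPop generation:

import org.apache.tinkerpop.gremlin.driver.ResultSet;
import org.apache.tinkerpop.gremlin.groovy.jsr223.GroovyTranslator;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.util.empty.EmptyGraph;

// Build traversals against an empty graph purely to capture their bytecode
GraphTraversalSource g = EmptyGraph.instance().traversal();

// Translate the bytecode into a Gremlin string CosmosDB can accept
String gremlin = GroovyTranslator.of("g")
        .translate(g.V("1234").outE("related").inV().asAdmin().getBytecode());

// Submit the translated script the same way workingQuery() does
ResultSet results = graphDbConnection.executeQuery(gremlin);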
Please help me. I want to send JSON data to an API which uses basic auth, and I want to catch the response from that API. This is my code:
@Inject
WSClient ws;

public Result testWS() {
    JsonNode task = Json.newObject()
        .put("id", 123236)
        .put("name", "Task ws")
        .put("done", true);
    WSRequest request = ws.url("http://localhost:9000/json/task").setAuth("user", "password", WSAuthScheme.BASIC).post(task);
    return ok(request.tojson);
}
The question is: how do I get the return from the WS call above and process it as JSON? That code still errors. I'm using Play Framework 2.5.
.post(task) results in a CompletionStage<WSResponse>, so you can't just call toJson on it. You have to get the eventual response from the completion stage (think of it as a promise). Note the change to the method signature too.
import java.util.concurrent.CompletionStage;
import javax.inject.Inject;
import javax.inject.Singleton;
import com.fasterxml.jackson.databind.JsonNode;
import play.libs.Json;
import play.libs.ws.WSAuthScheme;
import play.libs.ws.WSClient;
import play.libs.ws.WSResponse;
import play.mvc.Controller;
import play.mvc.Result;
import scala.concurrent.ExecutionContextExecutor;
@Singleton
public class FooController extends Controller {

    private final WSClient ws;
    private final ExecutionContextExecutor exec;

    @Inject
    public FooController(final ExecutionContextExecutor exec,
                         final WSClient ws) {
        this.exec = exec;
        this.ws = ws;
    }

    public CompletionStage<Result> index() {
        final JsonNode task = Json.newObject()
                .put("id", 123236)
                .put("name", "Task ws")
                .put("done", true);
        final CompletionStage<WSResponse> eventualResponse = ws.url("http://localhost:9000/json/task")
                .setAuth("user",
                        "password",
                        WSAuthScheme.BASIC)
                .post(task);
        return eventualResponse.thenApplyAsync(response -> ok(response.asJson()),
                exec);
    }
}
Check the documentation for more details of working with asynchronous calls to web services.
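One extra note, not from the original answer: because the call is asynchronous, failures surface through the CompletionStage, so you may want a recovery step. internalServerError is Play's standard result helper; the error message here is just an example.

// Recover from WS failures so a failed call still produces an HTTP response
return eventualResponse
        .thenApplyAsync(response -> ok(response.asJson()), exec)
        .exceptionally(t -> internalServerError("Call to task API failed: " + t.getMessage()));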
Is there a way to start elasticsearch within a gradle build before running integration tests and afterwards stop elasticsearch?
My approach so far is the following, but this blocks the further execution of the gradle build.
task runES(type: JavaExec) {
    main = 'org.elasticsearch.bootstrap.Elasticsearch'
    classpath = sourceSets.main.runtimeClasspath
    systemProperties = ["es.path.home": "$buildDir/elastichome",
                        "es.path.data": "$buildDir/elastichome/data"]
}
For my purposes, I decided to start Elasticsearch within my integration test in Java code.
I tried ElasticsearchIntegrationTest, but that didn't work with Spring because it doesn't harmonize with SpringJUnit4ClassRunner.
I found it easier to start Elasticsearch in the before method:
My test class testing some 'dummy' productive code (indexing a document):
import static org.hamcrest.CoreMatchers.notNullValue;
import static org.junit.Assert.assertThat;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.ImmutableSettings.Builder;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.indices.IndexAlreadyExistsException;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
public class MyIntegrationTest {

    private Node node;
    private Client client;

    @Before
    public void before() {
        createElasticsearchClient();
        createIndex();
    }

    @After
    public void after() {
        this.client.close();
        this.node.close();
    }

    @Test
    public void testSomething() throws Exception {
        // do something with elasticsearch
        final String json = "{\"mytype\":\"bla\"}";
        final String type = "mytype";
        final String id = index(json, type);
        assertThat(id, notNullValue());
    }

    /**
     * some productive code
     */
    private String index(final String json, final String type) {
        // create Client
        final Settings settings = ImmutableSettings.settingsBuilder().put("cluster.name", "mycluster").build();
        final TransportClient tc = new TransportClient(settings).addTransportAddress(new InetSocketTransportAddress(
                "localhost", 9300));

        // index a document
        final IndexResponse response = tc.prepareIndex("myindex", type).setSource(json).execute().actionGet();
        return response.getId();
    }

    private void createElasticsearchClient() {
        final NodeBuilder nodeBuilder = NodeBuilder.nodeBuilder();
        final Builder settingsBuilder = nodeBuilder.settings();
        settingsBuilder.put("network.publish_host", "localhost");
        settingsBuilder.put("network.bind_host", "localhost");
        final Settings settings = settingsBuilder.build();
        this.node = nodeBuilder.clusterName("mycluster").local(false).data(true).settings(settings).node();
        this.client = this.node.client();
    }

    private void createIndex() {
        try {
            this.client.admin().indices().prepareCreate("myindex").execute().actionGet();
        } catch (final IndexAlreadyExistsException e) {
            // index already exists => we ignore this exception
        }
    }
}
It is also very important to use Elasticsearch version 1.3.3 or higher. See Issue 5401.
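Assuming the Elasticsearch dependency is managed in the same Gradle build as the runES task from the question, pinning the version would look something like this (standard org.elasticsearch coordinates; adjust the version as needed):

dependencies {
    // 1.3.3 or higher, per the note above (see Issue 5401)
    testCompile 'org.elasticsearch:elasticsearch:1.3.3'
}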