Issue connecting to Cassandra from Spark in Java

I have a server running Docker, on which I have created 3 Cassandra nodes, 2 Spark worker nodes and one Spark master node.
Now I want to connect to Spark from my laptop with a Java application.
My Java code is:
public SparkTestPanel(String id, User user) {
    super(id);
    form = new Form("form");
    form.setOutputMarkupId(true);
    this.add(form);
    SparkConf conf = new SparkConf(true);
    conf.setAppName("Spark Test");
    conf.setMaster("spark://172.11.100.156:9050");
    conf.set("spark.cassandra.connection.host", "cassandra-0");
    conf.set("spark.cassandra.connection.port", "9042");
    conf.set("spark.cassandra.auth.username", "cassandra");
    conf.set("spark.cassandra.auth.password", "cassandra");
    JavaSparkContext sc = null;
    try {
        sc = new JavaSparkContext(conf);
        CassandraTableScanJavaRDD<com.datastax.spark.connector.japi.CassandraRow> cassandraTable = javaFunctions(sc).cassandraTable("test", "test_table");
        List<com.datastax.spark.connector.japi.CassandraRow> collect = cassandraTable.collect();
        for (com.datastax.spark.connector.japi.CassandraRow cassandraRow : collect) {
            Logger.getLogger(SparkTestPanel.class).error(cassandraRow.toString());
        }
    } finally {
        sc.stop();
    }
}
I know that the application connects to the Spark master, because I can see my app in the Spark web UI, but on the line:
CassandraTableScanJavaRDD<com.datastax.spark.connector.japi.CassandraRow> cassandraTable = javaFunctions(sc).cassandraTable("test", "test_table");
I get the error below:
2017-08-17 12:14:31,906 ERROR CassandraConnectorConf:72 - Unknown host 'cassandra-0'
java.net.UnknownHostException: cassandra-0: nodename nor servname provided, or not known
...
And another error:
Caused by: java.lang.IllegalArgumentException: Cannot build a cluster without contact points
at com.datastax.driver.core.Cluster.checkNotEmpty(Cluster.java:119)
at com.datastax.driver.core.Cluster.<init>(Cluster.java:112)
at com.datastax.driver.core.Cluster.buildFrom(Cluster.java:178)
at com.datastax.driver.core.Cluster$Builder.build(Cluster.java:1335)
at com.datastax.spark.connector.cql.DefaultConnectionFactory$.createCluster(CassandraConnectionFactory.scala:131)
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:159)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$8.apply(CassandraConnector.scala:154)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$8.apply(CassandraConnector.scala:154)
at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:32)
at com.datastax.spark.connector.cql.RefCountedCache.syncAcquire(RefCountedCache.scala:69)
at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:57)
at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:79)
at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:111)
at com.datastax.spark.connector.cql.CassandraConnector.withClusterDo(CassandraConnector.scala:122)
at com.datastax.spark.connector.cql.Schema$.fromCassandra(Schema.scala:330)
at com.datastax.spark.connector.cql.Schema$.tableFromCassandra(Schema.scala:350)
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.tableDef(CassandraTableRowReaderProvider.scala:50)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef$lzycompute(CassandraTableScanRDD.scala:62)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef(CassandraTableScanRDD.scala:62)
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.verify(CassandraTableRowReaderProvider.scala:137)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.verify(CassandraTableScanRDD.scala:62)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:262)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
I then went to the server (172.11.100.156), entered the spark-master container and pinged cassandra-0, and saw:
root@708d210056af:/# ping cassandra-0
PING cassandra-0 (21.1.0.21): 56 data bytes
64 bytes from 21.1.0.21: icmp_seq=0 ttl=64 time=0.554 ms
64 bytes from 21.1.0.21: icmp_seq=1 ttl=64 time=0.117 ms
64 bytes from 21.1.0.21: icmp_seq=2 ttl=64 time=0.116 ms
64 bytes from 21.1.0.21: icmp_seq=3 ttl=64 time=0.093 ms
What is happening in my application that causes this error?
Can anyone help?
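For reference, the UnknownHostException above is raised on the driver side: the Spark driver (the laptop) tries to resolve the contact point cassandra-0 itself, and that name only exists inside the Docker network on the server. A minimal sketch of the same configuration using an address that both the driver and the workers can resolve and reach (the IP below is a placeholder, not a confirmed fix):
SparkConf conf = new SparkConf(true)
        .setAppName("Spark Test")
        .setMaster("spark://172.11.100.156:9050")
        // placeholder: a Cassandra address that is resolvable and reachable
        // on port 9042 from both the laptop and the Spark workers
        .set("spark.cassandra.connection.host", "172.11.100.156")
        .set("spark.cassandra.connection.port", "9042")
        .set("spark.cassandra.auth.username", "cassandra")
        .set("spark.cassandra.auth.password", "cassandra");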

Related

Trouble with MYSQL

I'm an intermediate Java programmer currently working with Java and MySQL to create an app. I'm using XAMPP and the phpMyAdmin that comes with it.
The server is on 127.0.0.1 without any routers, Wi-Fi systems or network. My app is also on 127.0.0.1.
Every time I try to connect to MySQL using Java, this message is displayed:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
But the MySQL server is running fine. When I log into phpMyAdmin, these errors are shown:
#2002 - Only one usage of each socket address (protocol/network address/port) is normally permitted.
— The server is not responding (or the local server's socket is not correctly configured).
mysqli_real_connect(): (HY000/2002): Only one usage of each socket address (protocol/network address/port) is normally permitted.
Connection for controluser as defined in your configuration failed.
mysqli_real_connect(): (HY000/2002): Only one usage of each socket address (protocol/network address/port) is normally permitted.
Retry to connect
Warning in .\libraries\dbi\DBIMysqli.php#629
mysqli_real_escape_string() expects parameter 1 to be mysqli, boolean given
Backtrace
.\libraries\dbi\DBIMysqli.php#629: mysqli_real_escape_string(boolean false, string 'root')
.\libraries\DatabaseInterface.php#2670: PMA\libraries\dbi\DBIMysqli->escapeString(boolean false, string 'root')
.\libraries\Menu.php#142: PMA\libraries\DatabaseInterface->escapeString(string 'root')
.\libraries\Menu.php#110: PMA\libraries\Menu->_getAllowedTabs(string 'server')
.\libraries\Menu.php#83: PMA\libraries\Menu->_getMenu()
.\libraries\Response.php#316: PMA\libraries\Menu->getHash()
.\libraries\Response.php#441: PMA\libraries\Response->_ajaxResponse()
PMA\libraries\Response::response()
Warning in .\libraries\dbi\DBIMysqli.php#629
mysqli_real_escape_string() expects parameter 1 to be mysqli, boolean given
Backtrace
.\libraries\dbi\DBIMysqli.php#629: mysqli_real_escape_string(boolean false, string 'root')
.\libraries\DatabaseInterface.php#2670: PMA\libraries\dbi\DBIMysqli->escapeString(boolean false, string 'root')
.\libraries\Menu.php#142: PMA\libraries\DatabaseInterface->escapeString(string 'root')
.\libraries\Menu.php#110: PMA\libraries\Menu->_getAllowedTabs(string 'server')
.\libraries\Menu.php#71: PMA\libraries\Menu->_getMenu()
.\libraries\Response.php#327: PMA\libraries\Menu->getDisplay()
.\libraries\Response.php#441: PMA\libraries\Response->_ajaxResponse()
PMA\libraries\Response::response()
The MySQL my.ini file:
[mysqld]
port= 3306
socket = "E:/xampp/mysql/mysql.sock"
basedir = "E:/xampp/mysql"
tmpdir = "E:/xampp/tmp"
datadir = "E:/xampp/mysql/data"
pid_file = "mysql.pid"
# enable-named-pipe
key_buffer = 16M
max_allowed_packet = 1M
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
log_error = "mysql_error.log"
plugin_dir = "E:/xampp/mysql/lib/plugin/"
innodb_data_home_dir = "E:/xampp/mysql/data"
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = "E:/xampp/mysql/data"
innodb_buffer_pool_size = 16M
innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 5M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[isamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[myisamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[mysqlhotcopy]
interactive-timeout
And here is how I connect from Java:
try {
    Class.forName("java.sql.DriverManager");
    Connection conn = (Connection) DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/", "root", "mypassword");
    Statement stmt = (Statement) conn.createStatement();
} catch (Exception e) {
    /* handling */
}
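For reference, a minimal JDBC sketch of the same connection (assuming MySQL Connector/J is on the classpath; loading com.mysql.jdbc.Driver rather than java.sql.DriverManager, the placeholder schema name mydb, and the try-with-resources cleanup are illustrative additions, not taken from the original code):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// ...
try {
    Class.forName("com.mysql.jdbc.Driver"); // optional with JDBC 4+ drivers
    try (Connection conn = DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/mydb", "root", "mypassword");
         Statement stmt = conn.createStatement()) {
        // run queries with stmt here
    }
} catch (Exception e) {
    e.printStackTrace();
}
Note that the #2002 "Only one usage of each socket address" messages above usually indicate that something else is already bound to port 3306, which no client-side change will fix.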

EMR cluster bootstrap failure (timeout) occurs most of the times I initialize a cluster

I'm writing an app that consists of 4 chained MapReduce jobs, which run on Amazon EMR. I'm using the JobFlow interface to chain the jobs. Each job is contained in its own class and has its own main method. All of these are packed into a .jar which is saved in S3, and the cluster is initialized from a small local app on my laptop, which configures the JobFlowRequest and submits it to EMR.
For most of the attempts I make to start the cluster, it fails with the error message Terminated with errors On the master instance (i-<cluster number>), bootstrap action 1 timed out executing. I looked up info on this issue, and all I could find is that this exception is thrown if the combined bootstrap time of the cluster exceeds 45 minutes. However, this occurs only about 15 minutes after the request is submitted to EMR, regardless of the requested cluster size, be it 4 EC2 instances, 10 or even 20. This makes no sense to me at all; what am I missing?
Some tech specs:
- The project is compiled with Java 1.7.79
- The requested EMR image is 4.6.0, which uses Hadoop 2.7.2
- I'm using the AWS SDK for Java v. 1.10.64
This is my local main method, which sets up and submits the JobFlowRequest:
import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.ec2.model.InstanceType;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClient;
import com.amazonaws.services.elasticmapreduce.model.*;
public class ExtractRelatedPairs {
    public static void main(String[] args) throws Exception {
        if (args.length != 1) {
            System.err.println("Usage: ExtractRelatedPairs: <k>");
            System.exit(1);
        }
        int outputSize = Integer.parseInt(args[0]);
        if (outputSize < 0) {
            System.err.println("k should be positive");
            System.exit(1);
        }
        AWSCredentials credentials = null;
        try {
            credentials = new ProfileCredentialsProvider().getCredentials();
        } catch (Exception e) {
            throw new AmazonClientException(
                    "Cannot load the credentials from the credential profiles file. " +
                    "Please make sure that your credentials file is at the correct " +
                    "location (~/.aws/credentials), and is in valid format.",
                    e);
        }
        AmazonElasticMapReduce mapReduce = new AmazonElasticMapReduceClient(credentials);
        HadoopJarStepConfig jarStep1 = new HadoopJarStepConfig()
                .withJar("s3n://dsps162assignment2benasaf/jars/ExtractRelatedPairs.jar")
                .withMainClass("Phase1")
                .withArgs("s3://datasets.elasticmapreduce/ngrams/books/20090715/eng-gb-all/5gram/data/", "hdfs:///output1/");
        StepConfig step1Config = new StepConfig()
                .withName("Phase 1")
                .withHadoopJarStep(jarStep1)
                .withActionOnFailure("TERMINATE_JOB_FLOW");
        HadoopJarStepConfig jarStep2 = new HadoopJarStepConfig()
                .withJar("s3n://dsps162assignment2benasaf/jars/ExtractRelatedPairs.jar")
                .withMainClass("Phase2")
                .withArgs("shdfs:///output1/", "hdfs:///output2/");
        StepConfig step2Config = new StepConfig()
                .withName("Phase 2")
                .withHadoopJarStep(jarStep2)
                .withActionOnFailure("TERMINATE_JOB_FLOW");
        HadoopJarStepConfig jarStep3 = new HadoopJarStepConfig()
                .withJar("s3n://dsps162assignment2benasaf/jars/ExtractRelatedPairs.jar")
                .withMainClass("Phase3")
                .withArgs("hdfs:///output2/", "hdfs:///output3/", args[0]);
        StepConfig step3Config = new StepConfig()
                .withName("Phase 3")
                .withHadoopJarStep(jarStep3)
                .withActionOnFailure("TERMINATE_JOB_FLOW");
        HadoopJarStepConfig jarStep4 = new HadoopJarStepConfig()
                .withJar("s3n://dsps162assignment2benasaf/jars/ExtractRelatedPairs.jar")
                .withMainClass("Phase4")
                .withArgs("hdfs:///output3/", "s3n://dsps162assignment2benasaf/output4");
        StepConfig step4Config = new StepConfig()
                .withName("Phase 4")
                .withHadoopJarStep(jarStep4)
                .withActionOnFailure("TERMINATE_JOB_FLOW");
        JobFlowInstancesConfig instances = new JobFlowInstancesConfig()
                .withInstanceCount(10)
                .withMasterInstanceType(InstanceType.M1Small.toString())
                .withSlaveInstanceType(InstanceType.M1Small.toString())
                .withHadoopVersion("2.7.2")
                .withEc2KeyName("AWS")
                .withKeepJobFlowAliveWhenNoSteps(false)
                .withPlacement(new PlacementType("us-east-1a"));
        RunJobFlowRequest runFlowRequest = new RunJobFlowRequest()
                .withName("extract-related-word-pairs")
                .withInstances(instances)
                .withSteps(step1Config, step2Config, step3Config, step4Config)
                .withJobFlowRole("EMR_EC2_DefaultRole")
                .withServiceRole("EMR_DefaultRole")
                .withReleaseLabel("emr-4.6.0")
                .withLogUri("s3n://dsps162assignment2benasaf/logs/");
        System.out.println("Submitting the JobFlow Request to Amazon EMR and running it...");
        RunJobFlowResult runJobFlowResult = mapReduce.runJobFlow(runFlowRequest);
        String jobFlowId = runJobFlowResult.getJobFlowId();
        System.out.println("Ran job flow with id: " + jobFlowId);
    }
}
A while back, I encountered a similar issue, where even a vanilla EMR 4.6.0 cluster failed to get past startup and threw a timeout error on the bootstrap step.
I ended up creating a cluster on a different/new VPC in a different region and it worked fine, which led me to believe there may be a problem with either the original VPC itself or the software in 4.6.0.
Also, regarding the VPC, it specifically had trouble setting and resolving DNS names for the newly created cluster nodes, even though older versions of EMR did not have this problem.
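If the VPC is the suspect, one thing that may be worth checking is whether its DNS attributes (enableDnsSupport and enableDnsHostnames) are turned on, since EMR nodes generally need resolvable private DNS names. A hedged sketch using the same AWS SDK; the VPC id is a placeholder and this is not a guaranteed fix for the bootstrap timeout:
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.ModifyVpcAttributeRequest;

public class EnableVpcDns {
    public static void main(String[] args) {
        AmazonEC2 ec2 = new AmazonEC2Client(new ProfileCredentialsProvider().getCredentials());
        String vpcId = "vpc-xxxxxxxx"; // hypothetical VPC id
        // The EC2 API accepts only one attribute per ModifyVpcAttribute call,
        // so enable DNS support and DNS hostnames in two separate requests.
        ec2.modifyVpcAttribute(new ModifyVpcAttributeRequest()
                .withVpcId(vpcId).withEnableDnsSupport(true));
        ec2.modifyVpcAttribute(new ModifyVpcAttributeRequest()
                .withVpcId(vpcId).withEnableDnsHostnames(true));
    }
}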

Kafka 0.8.2.2 - Unable to publish messages

We have written a Java client for publishing messages to Kafka. The code is shown below:
Properties props = new Properties();
props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "202.xx.xx.xxx:9092");
props.setProperty(ProducerConfig.METADATA_FETCH_TIMEOUT_CONFIG, Integer.toString(5 * 1000));
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
// 1. create KafkaProducer
KafkaProducer producer = new KafkaProducer(props);
// 2. create callback
Callback callback = new Callback() {
    public void onCompletion(RecordMetadata metadata, Exception e) {
        // only report a failure when the send actually returned an exception
        if (e != null) {
            System.out.println("Error while sending data");
            e.printStackTrace();
        }
    }
};
// 'record' (a ProducerRecord for the "HelloWorld" topic) is constructed elsewhere in the original client
producer.send(record, callback);
When we execute this code, we get the following messages and exception:
ProducerConfig values:
compression.type = none
metric.reporters = []
metadata.max.age.ms = 300000
metadata.fetch.timeout.ms = 5000
acks = 1
batch.size = 16384
reconnect.backoff.ms = 10
bootstrap.servers = [202.xx.xx.xx:9092]
receive.buffer.bytes = 32768
retry.backoff.ms = 100
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
retries = 0
max.request.size = 1048576
block.on.buffer.full = true
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
metrics.sample.window.ms = 30000
send.buffer.bytes = 131072
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
linger.ms = 0
client.id =
Updated cluster metadata version 1 to Cluster(nodes = [Node(202.xx.xx.xx, 9092)], partitions = [])
Starting Kafka producer I/O thread.
The configuration metadata.broker.list = null was supplied but isn't a known config.
The configuration request.required.acks = null was supplied but isn't a known config.
Kafka producer started
Trying to send metadata request to node -1
Init connection to node -1 for sending metadata request in the next iteration
Initiating connection to node -1 at 202.xx.xx.xx:9092.
Trying to send metadata request to node -1
Completed connection to node -1
Trying to send metadata request to node -1
Sending metadata request ClientRequest(expectResponse=true, payload=null, request=RequestSend(header= {api_key=3,api_version=0,correlation_id=0,client_id=producer-1}, body={topics=[HelloWorld]})) to node -1
Updated cluster metadata version 2 to Cluster(nodes = [Node(0, 192.local, 9092)], partitions = [Partition(topic = HelloWorld, partition = 0, leader = 0, replicas = [0,], isr = [0,]])
Initiating connection to node 0 at 192.local:9092.
0 max latency = 219 ms, avg latency = 0.00022
1 records sent in 219 ms ms. 4.57 records per second (0.00 mb/sec).Error connecting to node 0 at 192.local:9092:
java.io.IOException: Can't resolve address: 192.local:9092
at org.apache.kafka.common.network.Selector.connect(Selector.java:138)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:417)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:116)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:165)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
at java.lang.Thread.run(Unknown Source)
Caused by: java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Unknown Source)
at sun.nio.ch.SocketChannelImpl.connect(Unknown Source)
at org.apache.kafka.common.network.Selector.connect(Selector.java:135)
... 5 more
Beginning shutdown of Kafka producer I/O thread, sending remaining records.
Initiating connection to node 0 at 192.local:9092.
Error connecting to node 0 at 192.local:9092:
java.io.IOException: Can't resolve address: 192.local:9092
at org.apache.kafka.common.network.Selector.connect(Selector.java:138)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:417)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:116)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:165)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:135)
at java.lang.Thread.run(Unknown Source)
Caused by: java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Unknown Source)
at sun.nio.ch.SocketChannelImpl.connect(Unknown Source)
at org.apache.kafka.common.network.Selector.connect(Selector.java:135)
... 5 more
Give up sending metadata request since no node is available
This happens in an infinite loop and the application hangs... When we checked the Kafka broker, we found that the topic was created... but we did not get the message... We have been stuck on this for a while... Please help.
We finally figured out the issue... We were running Kafka in a hybrid environment, as described in the following post:
https://medium.com/@thedude_rog/running-kafka-in-a-hybrid-cloud-environment-17a8f3cfc284
We changed host.name to the internal IP and advertised.host.name to the external IP.
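For reference, that broker-side change corresponds roughly to the following server.properties entries (the addresses are placeholders, not values taken from this post):
# internal IP the broker binds to (placeholder)
host.name=10.0.0.5
# external IP handed out to clients in metadata responses (placeholder)
advertised.host.name=202.xx.xx.xxx
advertised.port=9092
This matters because the client first reaches the broker via bootstrap.servers, but then connects to whatever host the broker advertises in its metadata, which in the log above was the unresolvable 192.local.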

Can't create InetSocketTransportAddress in TransportClient: NoNodeAvailableException[None of the configured nodes are available: []]

UPDATED: Hopefully clearer details and code...
I'm trying to make my first Java application talk to ElasticSearch, which is running on this node (timestamps and log levels removed):
$ bin/elasticsearch
[bootstrap ]Unable to lock JVM Memory: error=78,reason=Function not implemented
[bootstrap ]This can result in part of the JVM being swapped out.
[node ][clustername-node.01] version[2.0.0], pid[49252], build[de54438/2015-10-22T08:09:48Z]
[node ][clustername-node.01] initializing ...
[plugins ][clustername-node.01] loaded [license, marvel], sites []
[env ][clustername-node.01] using [1] data paths, mounts [[/ (/dev/disk1)]], net usable_space [164.4gb], net total_space [232.5gb], spins? [unknown], types [hfs]
[node ][clustername-node.01] initialized
[node ][clustername-node.01] starting ...
[transport ][clustername-node.01] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[discovery ][clustername-node.01] clustername/AM4lm0ZBS_6FofhC0UbNIA
[cluster.service ][clustername-node.01] new_master {clustername-node.01}{AM4lm0ZBS_6FofhC0UbNIA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[http ][clustername-node.01] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[node ][clustername-node.01] started
[license.plugin.core][clustername-node.01] license [3ff50767-f1a5-4bac-8e35-c7a131384fd9] - valid
[license.plugin.core][clustername-node.01]
[gateway ][clustername-node.01] recovered [14] indices into cluster_state
With DEBUG logging enabled, as suggested by @Val, these additional lines also appear in the above output:
[transport.netty][clustername.01] using profile[default], worker_count[8], port[9300-9400], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/3/6/1/1], receive_predictor[512kb->512kb]
[transport.netty][clustername.01] binding server bootstrap to: 127.0.0.1
[transport.netty][clustername.01] Bound profile [default] to address {127.0.0.1:9300}
The address portion:
publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
clustername/AM4lm0ZBS_6FofhC0UbNIA
new_master {clustername-node.01}{AM4lm0ZBS_6FofhC0UbNIA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
I've confirmed that the IP and port are listening:
$ bin/elasticsearch --version
Version: 2.0.0, Build: de54438/2015-10-22T08:09:48Z, JVM: 1.8.0_45
$ telnet 127.0.0.1 9300
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
^CConnection closed by foreign host.
$ telnet 127.0.0.1 9301
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host
$
9300 is there, 9301 isn't, as expected. I'm reasonably sure that port 9300 is correct for a Java TransportClient.
But no matter how I try to create the InetSocketTransportAddress...
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.NoNodeAvailableException;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;
public class TrivialClient {
    public static void main(String[] args) throws UnknownHostException {
        InetSocketTransportAddress transportAddress =
                new InetSocketTransportAddress(InetAddress.getLocalHost(), 9300);
        createClientPrintResponse("getLocalHost", transportAddress);

        transportAddress = new InetSocketTransportAddress(InetAddress.getByName("localhost"), 9300);
        createClientPrintResponse("getByName(\"localhost\")", transportAddress);

        // Does not compile in ElasticSearch 2.0
        // transportAddress = new InetSocketTransportAddress("localhost", 9300);
        // createClientPrintResponse("getByName(\"localhost\")", transportAddress);

        transportAddress = new InetSocketTransportAddress(
                InetAddress.getByAddress(new byte[]{127, 0, 0, 1}), 9300);
        createClientPrintResponse("getByAddress(new byte[] {127, 0, 0, 1})", transportAddress);

        transportAddress =
                new InetSocketTransportAddress(new InetSocketAddress("127.0.0.1", 9300));
        createClientPrintResponse("InetSocketAddress", transportAddress);
    }

    private static void createClientPrintResponse(String description,
                                                  InetSocketTransportAddress transportAddress) {
        Settings settings = Settings.settingsBuilder()
                .put("cluster.name", "clustername").build();
        Client client = TransportClient.builder().settings(settings).build()
                .addTransportAddress(transportAddress);
        try {
            GetResponse response = client.prepareGet("comicbook", "superhero", "1").get();
            System.out.println(description + ": " + response);
        } catch (NoNodeAvailableException e) {
            System.out.println(description + ": " + e);
            //e.printStackTrace();
        }
    }
}
...it fails with:
getLocalHost: NoNodeAvailableException[None of the configured nodes are available: []]
getByName("localhost"): NoNodeAvailableException[None of the configured nodes are available: []]
getByAddress(new byte[] {127, 0, 0, 1}): NoNodeAvailableException[None of the configured nodes are available: []]
InetSocketAddress: NoNodeAvailableException[None of the configured nodes are available: []]
The stack trace:
NoNodeAvailableException[None of the configured nodes are available: []]
getLocalHost: NoNodeAvailableException[None of the configured nodes are available: []]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:280)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:197)
at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:272)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:347)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:85)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:59)
at org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:67)
at springes.esonly.TrivialClient.createClientPrintResponse(TrivialClient.java:47)
at springes.esonly.TrivialClient.main(TrivialClient.java:19)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
What am I missing?
You'll have to specify the name of the cluster:
Settings settings = Settings.settingsBuilder()
.put("cluster.name", "my_cluster_name").build();
Client client = TransportClient.builder().settings(settings).build()
.addTransportAddress(new InetSocketTransportAddress(new InetSocketAddress("127.0.0.1", 9300)));
Refer:
https://discuss.elastic.co/t/elasticsearch-in-java-transportclient-nonodeavailableexception-none-of-the-configured-nodesare-available/34452
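One extra thing worth double-checking is that the cluster.name passed to the client really matches what the node reports; querying the HTTP root endpoint (port 9200 from the log above) shows the actual name. The response below is abridged and only illustrative, with the values taken from the startup log in the question:
$ curl 127.0.0.1:9200
{
  "name" : "clustername-node.01",
  "cluster_name" : "clustername",
  ...
}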

mongodb insert fails due to socket exception

I am working on a Java project in Eclipse. I have a staging server and a live server. Those two also have their own MongoDBs, which run on a different server on two different ports (29017 and 27017).
Via a JUnit test I want to copy data from the live mongo to the devel mongo.
Weirdest thing: sometimes it works and sometimes I get a socket error. I wonder why mongo sometimes completely refuses to write inserts and on other days works flawlessly. Here is an excerpt of the mongo log file (from the instance where the data gets inserted) and the JUnit test script:
mongo log:
Thu Mar 14 21:01:04 [initandlisten] connection accepted from xx.xxx.xxx.183:60848 #1 (1 connection now open)
Thu Mar 14 21:01:04 [conn1] run command admin.$cmd { isMaster: 1 }
Thu Mar 14 21:01:04 [conn1] command admin.$cmd command: { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:90 0ms
Thu Mar 14 21:01:04 [conn1] opening db: repgain
Thu Mar 14 21:01:04 [conn1] query repgain.editorconfigs query: { $and: [ { customer: "nokia" }, { category: "restaurant" } ] } ntoreturn:0 keyUpdates:0 locks(micros) W:5302 r:176 nreturned:0 reslen:20 0ms
Thu Mar 14 21:01:04 [conn1] Socket recv() errno:104 Connection reset by peer xx.xxx.xxx.183:60848
Thu Mar 14 21:01:04 [conn1] SocketException: remote: xx.xxx.xxx.183:60848 error: 9001 socket exception [1] server [xx.xxx.xxx.183:60848]
Thu Mar 14 21:01:04 [conn1] end connection xx.xxx.xxx.183:60848 (0 connections now open)
junit test script:
public class CopyEditorConfig {

    protected final Log logger = LogFactory.getLog(getClass());

    private static final String CUSTOMER = "customerx";
    private static final String CATEGORY = "categoryx";

    @Test
    public void test() {
        try {
            ObjectMapper om = new ObjectMapper();
            // script copies the config from m2 to m1.
            Mongo m1 = new Mongo("xxx.xxx.com", 29017); // devel
            Mongo m2 = new Mongo("yyy.yyy.com", 27017); // live
            Assert.assertNotNull(m1);
            Assert.assertNotNull(m2);
            logger.info("try to connect to db \"dbname\"");
            DB db2 = m2.getDB("dbname");
            logger.info("get collection \"config\"");
            DBCollection c2 = db2.getCollection("config");
            JacksonDBCollection<EditorTabConfig, ObjectId> ec2 = JacksonDBCollection.wrap(c2, EditorTabConfig.class, ObjectId.class);
            logger.info("find entry with customer {" + CUSTOMER + "} and category {" + CATEGORY + "}");
            EditorTabConfig config2 = ec2.findOne(DBQuery.and(DBQuery.is("customer", CUSTOMER), DBQuery.is("category", CATEGORY)));
            // config
            if (config2 == null) {
                logger.info("no customer found to copy.");
            } else {
                logger.info("Found config with id: {" + config2.objectId + "}");
                config2.objectId = null;
                logger.info("copy config");
                boolean found = false;
                DB db1 = m1.getDB("dbname");
                DBCollection c1 = db1.getCollection("config");
                JacksonDBCollection<EditorTabConfig, ObjectId> ec1 = JacksonDBCollection.wrap(c1, EditorTabConfig.class, ObjectId.class);
                EditorTabConfig config1 = ec1.findOne(DBQuery.and(DBQuery.is("customer", CUSTOMER), DBQuery.is("category", CATEGORY)));
                if (config1 != null) {
                    found = true;
                }
                if (found == false) {
                    WriteResult<EditorTabConfig, ObjectId> result = ec1.insert(config2);
                    ObjectId id = result.getSavedId();
                    logger.info("INSERT config with id: " + id);
                } else {
                    logger.info("UPDATE config with id: " + config1.objectId);
                    ec1.updateById(config1.objectId, config2);
                }
                StringWriter sw = new StringWriter();
                om.writeValue(sw, config2);
                logger.info(sw);
            }
        } catch (Exception e) {
            logger.error("exception occured: ", e);
        }
    }
}
Running this test looks like a success when I read the log in Eclipse. I get an id for both c1 and c2, and the data is also there. The log even states that it didn't find the config on devel and inserted it. That is also true if I put it there manually; it gets "updated" then. But the mongo log stays the same.
The socket exception occurs, and the data is never written to the db.
I am out of good ideas for debugging this. I'd be glad to get some tips on how to go on from here. Also, if any information is missing, please tell me and I'd be glad to share it.
Regards,
Alex
It seems you have a connection issue with the mongo server. The following may help you diagnose the mongo servers:
Try to get more information from the log files:
$ less /var/log/mongo/mongod.log
or from the customized log files defined in mongod.conf.
Try to use mongostat to monitor the server state:
$ mongostat -u ADMIN_USER -p ADMIN_PASS
Try to use the mongo CLI to check the server's running status:
$ mongo admin -u ADMIN_USER -p ADMIN_PASS
> db.serverStatus()
More useful commands are listed at: http://docs.mongodb.org/manual/reference/method/
Sometimes the problem comes down to Linux system configuration. Tuning Linux for more connections and higher limits may help.
To check the current Linux limits, run:
$ ulimit -a
The suggestions below may be helpful:
Each connection is seen by Linux as an open file. The default maximum number of open files is 1024. To increase this limit:
modify /etc/security/limits.conf:
root soft nofile 500000
root hard nofile 512000
root soft nproc 500000
root hard nproc 512000
modify /etc/sysctl.conf
fs.file-max=360000
net.ipv4.ip_local_port_range=1024 65000
Comment out the line in your mongod.conf that binds the IP to 127.0.0.1.
Usually, it is set to 127.0.0.1 by default.
For Linux, this config file should be at /etc/mongod.conf. Once you comment that line out, mongod will receive connections from all interfaces. This fixed it for me, as I was getting these socket exceptions as well.
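For reference, the change described above corresponds roughly to this in the old ini-style mongod.conf (whether the key is bind_ip or bindIp depends on the config format your installation uses):
# /etc/mongod.conf
# comment out the bind line so mongod listens on all interfaces, then restart mongod
# bind_ip = 127.0.0.1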
