Cassandra sstableloader - Connection refused - java

I am trying to bulk load a 4-node Cassandra 3.0.10 cluster with some data. I've successfully generated the SSTables following the documentation, but it seems I cannot get sstableloader to load them.
I get the following java.net.ConnectException: Connection refused:
bin/sstableloader -v -d localhost test-data/output/si_test/messages/
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-1-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-10-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-11-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-12-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-13-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-14-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-15-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-16-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-17-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-18-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-19-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-2-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-20-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-21-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-22-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-23-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-24-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-25-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-26-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-27-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-28-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-29-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-3-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-30-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-31-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-32-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-33-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-34-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-35-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-36-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-37-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-38-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-39-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-4-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-40-big-Data.db 
/home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-41-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-42-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-43-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-44-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-45-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-46-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-47-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-48-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-49-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-5-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-50-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-51-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-52-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-53-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-54-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-55-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-56-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-57-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-58-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-59-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-6-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-60-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-61-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-7-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-8-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-9-big-Data.db to [localhost/127.0.0.1]
ERROR 14:46:24 [Stream #3d0c24e0-cc43-11e6-8c9f-615437259231] Streaming error occurred
java.net.ConnectException: Connection refused
at sun.nio.ch.Net.connect0(Native Method) ~[na:1.8.0_111]
at sun.nio.ch.Net.connect(Net.java:454) ~[na:1.8.0_111]
at sun.nio.ch.Net.connect(Net.java:446) ~[na:1.8.0_111]
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648) ~[na:1.8.0_111]
at java.nio.channels.SocketChannel.open(SocketChannel.java:189) ~[na:1.8.0_111]
at org.apache.cassandra.tools.BulkLoadConnectionFactory.createConnection(BulkLoadConnectionFactory.java:60) ~[apache-cassandra-3.0.10.jar:3.0.10]
at org.apache.cassandra.streaming.StreamSession.createConnection(StreamSession.java:255) ~[apache-cassandra-3.0.10.jar:3.0.10]
at org.apache.cassandra.streaming.ConnectionHandler.initiate(ConnectionHandler.java:84) ~[apache-cassandra-3.0.10.jar:3.0.10]
at org.apache.cassandra.streaming.StreamSession.start(StreamSession.java:242) ~[apache-cassandra-3.0.10.jar:3.0.10]
at org.apache.cassandra.streaming.StreamCoordinator$StreamSessionConnector.run(StreamCoordinator.java:212) [apache-cassandra-3.0.10.jar:3.0.10]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
progress: total: 100% 0 MB/s(avg: 0 MB/s)
WARN 14:46:24 [Stream #3d0c24e0-cc43-11e6-8c9f-615437259231] Stream failed
Streaming to the following hosts failed:
[localhost/127.0.0.1]
java.util.concurrent.ExecutionException: org.apache.cassandra.streaming.StreamException: Stream failed
at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:120)
Caused by: org.apache.cassandra.streaming.StreamException: Stream failed
at org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85)
at com.google.common.util.concurrent.Futures$6.run(Futures.java:1310)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457)
at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
at org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:211)
at org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:187)
at org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:440)
at org.apache.cassandra.streaming.StreamSession.onError(StreamSession.java:540)
at org.apache.cassandra.streaming.StreamSession.start(StreamSession.java:248)
at org.apache.cassandra.streaming.StreamCoordinator$StreamSessionConnector.run(StreamCoordinator.java:212)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
The utility seems to connect to the cluster ("Established connection to initial hosts"), but it does not stream the data.
What I've tried so far to debug the issue:
Dropped the target keyspace (which produced a different error), then recreated it via cqlsh
Verified I can telnet to each node of the cluster on ports 9042 and 7000
Enabled Thrift using nodetool enablethrift
EDIT
Here is the output of netstat -an | grep 7000. The nodes have only one network interface and Cassandra is listening on it. The node has also established connections with all the other nodes on port 7000.
tcp 0 0 172.31.3.88:7000 0.0.0.0:* LISTEN
tcp 0 0 172.31.3.88:7000 172.31.3.86:54348 ESTABLISHED
tcp 0 0 172.31.3.88:7000 172.31.3.87:60661 ESTABLISHED
tcp 0 0 172.31.3.88:53061 172.31.3.87:7000 ESTABLISHED
tcp 0 0 172.31.3.88:7000 172.31.11.43:36984 ESTABLISHED
tcp 0 0 172.31.3.88:51412 172.31.11.43:7000 ESTABLISHED
tcp 0 0 172.31.3.88:54018 172.31.3.87:7000 ESTABLISHED
tcp 0 0 172.31.3.88:7000 172.31.3.87:40667 ESTABLISHED
tcp 0 0 172.31.3.88:34469 172.31.3.86:7000 ESTABLISHED
tcp 0 0 172.31.3.88:43658 172.31.3.86:7000 ESTABLISHED
tcp 0 0 172.31.3.88:7000 172.31.11.43:49487 ESTABLISHED
tcp 0 0 172.31.3.88:40798 172.31.11.43:7000 ESTABLISHED
tcp 0 0 172.31.3.88:7000 172.31.3.86:51537 ESTABLISHED
EDIT 2
Changing the initial host from 127.0.0.1 to the node's actual address on the network results in a com.datastax.driver.core.exceptions.TransportException:
bin/sstableloader -v -d 172.31.3.88 test-data/output/si_test/messages/
All host(s) tried for query failed (tried: /172.31.3.88:9042 (com.datastax.driver.core.exceptions.TransportException: [/172.31.3.88] Cannot connect))
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /172.31.3.88:9042 (com.datastax.driver.core.exceptions.TransportException: [/172.31.3.88] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:233)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1424)
at com.datastax.driver.core.Cluster.init(Cluster.java:163)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:334)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:309)
at com.datastax.driver.core.Cluster.connect(Cluster.java:251)
at org.apache.cassandra.utils.NativeSSTableLoaderClient.init(NativeSSTableLoaderClient.java:70)
at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:159)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:104)
Any suggestion is appreciated.
Thanks

That's it trying to connect to the storage port (7000). Cassandra is most likely binding to a different interface than 127.0.0.1. You can check what it's binding to with netstat -an | grep 7000. You may also want to double-check any firewall or iptables settings.
UPDATE: it's not bound to 127.0.0.1 (the default) but to 172.31.3.88. So call sstableloader -v -d 172.31.3.88 test-data/output/si_test/messages/
Also, if you have SSL enabled (server_encryption_options in cassandra.yaml), you need to use port 7001 and configure sstableloader to match. If you can telnet to 7000, it's most likely not that, though.
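For reference, a minimal sketch of what such a setup looks like in cassandra.yaml (paths and passwords are placeholders, not taken from the original post):
ssl_storage_port: 7001
server_encryption_options:
    internode_encryption: all
    keystore: /path/to/keystore.jks
    keystore_password: changeit
    truststore: /path/to/truststore.jks
    truststore_password: changeit
sstableloader has matching flags (-ts/--truststore, -ks/--keystore, etc.) that would need to agree with these settings.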
Worth noting that enabling Thrift is not necessary in 3.0.10; sstableloader no longer uses it (in older versions it was used to read the schema).

Solved by changing rpc_address in the cassandra.yaml file from the default localhost to the actual address of each node.
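For example, a minimal sketch of the change on the node at 172.31.3.88 (each node uses its own address, and needs a restart for the new bind address to take effect):
rpc_address: 172.31.3.88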

Related

The Kafka script kafka-consumer-groups.sh throws "timed out waiting for a node assignment"

I have a Kafka cluster running with 3 brokers.
<node-ip>:10092 //broker-0
<node-ip>:10093 //broker-1
<node-ip>:10094 //broker-2
The broker-1 <node-ip>:10093 is in a not-ready state (due to a readiness failure), but the other 2 brokers are running fine.
However, when I use the kafka-consumer-groups.sh script with a running broker's address as the bootstrap server, I get the following error.
kafka#mirror-maker-0:/opt/kafka/bin$ /opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server <node-ip>:10094 --describe --group c2-c1-consumer-group --state
[2022-03-14 10:24:16,008] WARN [AdminClient clientId=adminclient-1] Connection to node 1 (/<node-ip>:10093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2022-03-14 10:24:17,086] WARN [AdminClient clientId=adminclient-1] Connection to node 1 (/<node-ip>:10093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2022-03-14 10:24:18,206] WARN [AdminClient clientId=adminclient-1] Connection to node 1 (/<node-ip>:10093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2022-03-14 10:24:19,458] WARN [AdminClient clientId=adminclient-1] Connection to node 1 (/<node-ip>:10093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Error: Executing consumer group command failed due to org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: describeGroups(api=DESCRIBE_GROUPS)
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: describeGroups(api=DESCRIBE_GROUPS)
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.$anonfun$describeConsumerGroups$1(ConsumerGroupCommand.scala:543)
at scala.collection.StrictOptimizedMapOps.map(StrictOptimizedMapOps.scala:28)
at scala.collection.StrictOptimizedMapOps.map$(StrictOptimizedMapOps.scala:27)
at scala.collection.convert.JavaCollectionWrappers$AbstractJMapWrapper.map(JavaCollectionWrappers.scala:309)
at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.describeConsumerGroups(ConsumerGroupCommand.scala:542)
at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.collectGroupsState(ConsumerGroupCommand.scala:620)
at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.describeGroups(ConsumerGroupCommand.scala:373)
at kafka.admin.ConsumerGroupCommand$.run(ConsumerGroupCommand.scala:72)
at kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:59)
at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: describeGroups(api=DESCRIBE_GROUPS)
Could someone please help me understand:
Why is it connecting to a broker I did not mention (the log shows 10093 but I passed :10094)?
Is there any way to use only the specified bootstrap servers?
One more thing:
When I run kafka-topics.sh with the running broker's address as the bootstrap server, it returns a response.
Thanks
I faced a similar issue: I was able to read the topics but could not list or describe the groups. (The bootstrap server is only used for the initial cluster discovery; describing a group then requires talking to that group's coordinator broker, which here appears to be the broker that is down, so the tool contacts 10093 even though you passed 10094.) I solved the issue by adding a large timeout. Can you please also try a large timeout?
./kafka-consumer-groups.sh --command-config kafka.properties --bootstrap-server brokers --group group --describe --timeout 100000
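If you prefer, the same timeouts can go in the kafka.properties file passed via --command-config; a minimal sketch with illustrative values (standard client settings, not taken from the original post):
default.api.timeout.ms=100000
request.timeout.ms=100000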

Kafka AdminClient timeout

I have an AWS MSK cluster up and running. Connected to it and ran this command to create a test topic called topicoteste
/usr/local/kafka_2.13-2.5.0/bin/kafka-topics --create --bootstrap-server BOOTSTRAP_STRING_HERE --partitions 1 --replication-factor 3 --topic topicoteste
These are the two errors I get. Any suggestions?
Error while executing topic command : org.apache.kafka.common.errors.TimeoutException: Call(callName=listTopics, deadlineMs=1611587423888) timed out at 9223372036854775807 after 1 attempt(s)
[2021-01-25 15:09:24,312] ERROR Uncaught exception in thread 'kafka-admin-client-thread | adminclient-1': (org.apache.kafka.common.utils.KafkaThread)
java.lang.OutOfMemoryError: Java heap space
at java.base/java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:61)
at java.base/java.nio.ByteBuffer.allocate(ByteBuffer.java:348)
at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:113)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:448)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:398)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:678)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:580)
at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1272)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1203)
at java.base/java.lang.Thread.run(Thread.java:829)
[2021-01-25 15:09:24,314] ERROR java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Call(callName=listTopics, deadlineMs=1611587423888) timed out at 9223372036854775807 after 1 attempt(s)
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at kafka.admin.TopicCommand$AdminClientTopicService.createTopic(TopicCommand.scala:227)
at kafka.admin.TopicCommand$TopicService.createTopic(TopicCommand.scala:196)
at kafka.admin.TopicCommand$TopicService.createTopic$(TopicCommand.scala:191)
at kafka.admin.TopicCommand$AdminClientTopicService.createTopic(TopicCommand.scala:219)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:62)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
Caused by: org.apache.kafka.common.errors.TimeoutException: Call(callName=listTopics, deadlineMs=1611587423888) timed out at 9223372036854775807 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited.
(kafka.admin.TopicCommand$)
I had the same problem: the broker used TLS and the AdminClient was not configured to use TLS.
You can either run a PLAINTEXT listener next to the TLS listener and use that to create topics, or configure your admin client with --command-config <ssl.conf> and a file ssl.conf looking something like this:
ssl.endpoint.identification.algorithm=https
security.protocol=SSL
ssl.keystore.location=/path/to/keystore.jks
ssl.keystore.password=password
ssl.key.password=password
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=password
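The original create command would then become something like this (assuming the file is saved as ssl.conf; note that MSK's TLS listener typically uses port 9094 in the bootstrap string):
/usr/local/kafka_2.13-2.5.0/bin/kafka-topics --create --bootstrap-server BOOTSTRAP_STRING_HERE --command-config ssl.conf --partitions 1 --replication-factor 3 --topic topicoteste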

DefaultHttpClient call throws connection refused in the same tomcat with public ip

CentOS 7, Tomcat 8.5.
a.war and rest.war are deployed in the same Tomcat.
a.war uses the following code to call rest.war:
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.message.BasicHeader;
import org.apache.http.protocol.HTTP;

// build and send a JSON POST to the rest.war endpoint
DefaultHttpClient httpClient = new DefaultHttpClient();
HttpPost httpPost = new HttpPost(url);
httpPost.addHeader(HTTP.CONTENT_TYPE, "application/json");
StringEntity se = new StringEntity(json.toString());
se.setContentType("text/json");
se.setContentEncoding(new BasicHeader(HTTP.CONTENT_TYPE, "application/json"));
httpPost.setEntity(se);
HttpResponse response = httpClient.execute(httpPost);
However, if the url of HttpPost(url) is <public ip>:80, then httpClient.execute(httpPost) throws connection refused,
while if the url of HttpPost(url) is localhost:80 or 127.0.0.1:80, then httpClient.execute(httpPost) succeeds.
Why? And how can I solve this problem?
Note: if I access a.war from a browser on my computer with the public IP, like http://<public ip>/a, all operations succeed.
my tomcat connector is:
<Connector
port="80"
protocol="HTTP/1.1"
connectionTimeout="60000"
keepAliveTimeout="15000"
maxKeepAliveRequests="-1"
maxThreads="1000"
minSpareThreads="200"
maxSpareThreads="300"
minProcessors="100"
maxProcessors="900"
acceptCount="1000"
enableLookups="false"
executor="tomcatThreadPool"
maxPostSize="-1"
compression="on"
compressionMinSize="1024"
redirectPort="8443" />
my server has no domain, only has a public ip, its /etc/hosts is:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Updated with some commands run on the server:
ss -nltp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:111 *:* users:(("rpcbind",pid=643,fd=8))
LISTEN 0 128 *:80 *:* users:(("java",pid=31986,fd=53))
LISTEN 0 128 *:22 *:* users:(("sshd",pid=961,fd=3))
LISTEN 0 1 127.0.0.1:8005 *:* users:(("java",pid=31986,fd=68))
LISTEN 0 128 :::111 :::* users:(("rpcbind",pid=643,fd=11))
LISTEN 0 128 :::22 :::* users:(("sshd",pid=961,fd=4))
LISTEN 0 80 :::3306 :::* users:(("mysqld",pid=1160,fd=19))
netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 643/rpcbind
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 31986/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 961/sshd
tcp 0 0 127.0.0.1:8005 0.0.0.0:* LISTEN 31986/java
tcp6 0 0 :::111 :::* LISTEN 643/rpcbind
tcp6 0 0 :::22 :::* LISTEN 961/sshd
tcp6 0 0 :::3306 :::* LISTEN 1160/mysqld
ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 1396428 bytes 179342662 (171.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1396428 bytes 179342662 (171.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
p2p1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.25 netmask 255.255.255.0 broadcast 192.168.1.255
ether f8:bc:12:a3:4f:b7 txqueuelen 1000 (Ethernet)
RX packets 5352432 bytes 3009606926 (2.8 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2839034 bytes 559838396 (533.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: p2p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether f8:bc:12:a3:4f:b7 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.25/24 brd 192.168.1.255 scope global noprefixroute dynamic p2p1
valid_lft 54621sec preferred_lft 54621sec
route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 p2p1
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 p2p1
ip route
default via 192.168.1.1 dev p2p1 proto dhcp metric 100
192.168.1.0/24 dev p2p1 proto kernel scope link src 192.168.1.25 metric 100
iptables -L -n -v --line-numbers
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
You probably have configured one of these:
A firewall on the public IP's ports, so that nothing goes through.
Tomcat may be bound to a specific IP, e.g. localhost (see the Connector elements in Tomcat's server.xml).
Apache httpd, nginx or another reverse proxy might handle various virtual host names, and might also handle localhost differently than the public IP.
Port forwarding: if you only forward localhost:80 to localhost:8080 (Tomcat's default port), you might not have anything on publicip:80 that forwards that traffic as well.
Edit after your comment:
Incoming traffic seems to be fine, but outgoing you do have problems. Adding from #stringy05's comment: check whether the IP in question is routable from your server. You're connecting to that IP from the server itself, so use another means to create an outgoing connection, e.g. curl.
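For example, a quick check from the server itself (substitute your real public IP):
curl -v http://<public ip>/a
If this fails from the server while the same URL works from your browser, the problem is in the server's outgoing path rather than in Tomcat itself.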
Explanation for #1 & #3:
If you connect to an external http server, it will handle the request differently based on the hostname used. It might well be that the IP "hostname" is blocked, either by a high-level firewall or just handled differently than the URL by the webserver itself. In most cases you can check this by connecting to the webserver in question from any other system, e.g. your own browser.
If Tomcat is listening (bound) on your public IP address it should work, but maybe your public IP address belongs to some other device, like a SOHO router; then your problem is similar to this:
https://superuser.com/questions/208710/public-ip-address-answered-by-router-not-internal-web-server-with-port-forwardi
But without a DNS name you cannot simply add a line to /etc/hosts; you can, however, add the public IP address to one of your network interface cards (NICs) like lo (loopback), eth0, etc., as described in one of these articles:
https://www.garron.me/en/linux/add-secondary-ip-linux.html
https://www.thegeekdiary.com/centos-rhel-6-how-to-addremove-additional-ip-addresses-to-a-network-interface/
E.g. with public IP address 1.2.3.4 you would need the following (which will only be effective until the next reboot and in the worst case might interfere with your ability to connect to the server via e.g. SSH!):
sudo ip addr add 1.2.3.4/32 dev lo
It may be useful to have the output of these commands to better understand your setup; feel free to share it in your question, with your public IP address consistently anonymized.
Either one of these (ss = socket stat, the newer replacement for good old netstat):
ss -nltp
netstat -nltp
And one of these:
ifconfig
ip addr show
And last but not least either one of these:
route
ip route
I don't expect that we need to know your firewall config, but if you use one, it may be interesting to keep an eye on it while you are at it:
iptables -L -n -v --line-numbers
Try putting your public domain name into the local /etc/hosts file of your server like this:
127.0.0.1 localhost YOURPUBLIC.DOMAIN.NAME
This way your Java code does not need to try to use the external IP address but instead connects directly to Tomcat.
Good luck!
I think the curl timeout explains it: you have a firewall rule somewhere that is stopping the server from accessing the public IP address.
If there's no reason the service can't be accessed using localhost or the local hostname, then do that; but if you need to call the service via a public IP, then it's a matter of working out why the request gets a timeout from the server.
Some usual suspects:
The server might not actually have internet access - can you curl https://www.google.com?
There might be a forward proxy required - a sysadmin will know this sort of thing.
There might be IP whitelisting on some infra around your server - think AWS security groups, load balancer IP whitelists, that sort of thing. To fix that you need to know the public IP of your server (curl https://canihazip.com/s) and get it added to the whitelist.

elasticsearch with RSS plugin hangs after restart

I have a clean ES server on my Mac with the RSS plugin. After adding several sources, the server hangs whenever I restart it (which I do when I want to add additional plugins or restart the Mac).
Is there a way to prevent it / restore the current installation?
Log:
[2015-01-14 09:43:27,870][WARN ][discovery.zen.ping.multicast] [Mar-Vell] received ping response ping_response{node [[Asmodeus][JpUdhq0FRYKr7RqDqcK7WQ][Dorons-MBP.Home][inet[/10.0.0.7:9300]]], id[1089], master [[Asmodeus][JpUdhq0FRYKr7RqDqcK7WQ][Dorons-MBP.Home][inet[/10.0.0.7:9300]]], hasJoinedOnce [true], cluster_name[elasticsearch]} with no matching id [2]
[2015-01-14 09:43:30,724][DEBUG][action.admin.cluster.health] [Mar-Vell] no known master node, scheduling a retry
[2015-01-14 09:43:57,943][WARN ][discovery.zen ] [Mar-Vell] failed to connect to master [[Asmodeus][JpUdhq0FRYKr7RqDqcK7WQ][Dorons-MBP.Home][inet[/10.0.0.7:9300]]], retrying...
org.elasticsearch.transport.ConnectTransportException: [Asmodeus][inet[/10.0.0.7:9300]] connect_timeout[30s]
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:807)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:741)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:714)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:150)
at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:441)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:393)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$6000(ZenDiscovery.java:80)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1318)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.elasticsearch.common.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.7:9300
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:139)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
... 3 more
[2015-01-14 09:44:00,729][DEBUG][action.admin.cluster.health] [Mar-Vell] observer: timeout notification from cluster service. timeout setting [30s], time since start [30s]
[2015-01-14 09:44:01,048][WARN ][discovery.zen.ping.multicast] [Mar-Vell] received ping response ping_response{node [[Asmodeus][JpUdhq0FRYKr7RqDqcK7WQ][Dorons-MBP.Home][inet[/10.0.0.7:9300]]], id[1095], master [[Asmodeus][JpUdhq0FRYKr7RqDqcK7WQ][Dorons-MBP.Home][inet[/10.0.0.7:9300]]], hasJoinedOnce [true], cluster_name[elasticsearch]} with no matching id [3]

SSTable loader streaming failed giving java.io.IOException: Connection reset by peer

I am trying to use sstableloader to stream data to a Cassandra database, which is in fact on the same node. It used to work when I was using DSE 2.2, but when I upgraded to DSE 4.5 and made all the relevant changes in the cassandra.yaml file, it stopped working and now throws an error like this:
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of demo/test_yale/demo-test_yale-jb-2-Data.db demo/test_yale/demo-test_yale-jb-1-Data.db to [/127.0.0.1]
Streaming session ID: 02225ef0-1c17-11e4-a1ea-5f2d4f6a32c1
progress: [/127.0.0.1 1/2 (88%)] [total: 88% - 2147483647MB/s (avg: 14MB/s)]
ERROR 16:36:29,029 [Stream #02225ef0-1c17-11e4-a1ea-5f2d4f6a32c1] Streaming error occurred
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
at java.nio.channels.Channels.writeFullyImpl(Channels.java:78)
at java.nio.channels.Channels.writeFully(Channels.java:98)
at java.nio.channels.Channels.access$000(Channels.java:61)
at java.nio.channels.Channels$1.write(Channels.java:174)
at com.ning.compress.lzf.LZFChunk.writeCompressedHeader(LZFChunk.java:77)
at com.ning.compress.lzf.ChunkEncoder.encodeAndWriteChunk(ChunkEncoder.java:132)
at com.ning.compress.lzf.LZFOutputStream.writeCompressedBlock(LZFOutputStream.java:203)
at com.ning.compress.lzf.LZFOutputStream.write(LZFOutputStream.java:97)
at org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:151)
at org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:101)
at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:59)
at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:42)
at org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:383)
at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:363)
at java.lang.Thread.run(Thread.java:745)
WARN 16:36:29,032 [Stream #02225ef0-1c17-11e4-a1ea-5f2d4f6a32c1] Stream failed
Streaming to the following hosts failed:
[/127.0.0.1]
java.util.concurrent.ExecutionException: org.apache.cassandra.streaming.StreamException: Stream failed
I have even tried assigning the node's actual IP address to listen_address, broadcast_address, and rpc_address in the cassandra.yaml file, but the same error occurs.
Can anyone be of assistance please?
It's worth looking at your system.log, whose location is specified in cassandra/conf/logback.xml, as suggested by Zanson.
In my case the issue was simply exhausted disk space on the node:
ERROR [STREAM-IN-/xx.xx.xx.xx] 2016-08-02 10:50:31,125 StreamSession.java:505 - [Stream #8420bfa0-589c-11e6-9512-235b1f79cf1b] Streaming error occurred
java.io.IOException: No space left on device
at java.io.RandomAccessFile.writeBytes(Native Method) ~[na:1.8.0_101]
at java.io.RandomAccessFile.write(RandomAccessFile.java:525) ~[na:1.8.0_101]
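A quick way to rule this out before streaming is to check free space on the data and commitlog volumes; a sketch, assuming the default package-install locations:
df -h /var/lib/cassandra/data /var/lib/cassandra/commitlog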
