curl waits in cygwin before sending the actual request - java

I am running curl from Cygwin like this:
$ curl -v -i -X POST 'http://www.abc.com:10123/myapp/abc' -H "Content-Type: application/json" --data-binary "@input.json"
It starts connecting and hangs at the line STATE: CONNECT => WAITCONNECT handle 0x8001f150; line 1074 (connection #0) for 149986 ms, and only then connects and sends the request (trace shown below). Why does it wait that long, and how can I skip this wait time?
Thanks.
STATE: INIT => CONNECT handle 0x8001f150; line 1027 (connection #-5000)
About to connect() to www.abc.com port 10123 (#0)
Trying ::1...
Adding handle: conn: 0x80059548
Adding handle: send: 0
Adding handle: recv: 0
Curl_addHandleToPipeline: length: 1
0x8001f150 is at send pipe head!
- Conn 0 (0x80059548) send_pipe: 1, recv_pipe: 0
STATE: CONNECT => WAITCONNECT handle 0x8001f150; line 1074 (connection #0)
After 149986ms connect time, move on!
Trying 127.0.0.1...
Connected to www.abc.com (127.0.0.1) port 10
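The trace above shows curl trying the IPv6 loopback (::1) first and only moving on to 127.0.0.1 after the connect attempt times out. Two curl options commonly used to avoid that wait (sketched against the same example URL; the flags are standard curl, the host is the one from the question):

```shell
# Force IPv4 resolution so the ::1 attempt is skipped entirely
curl -4 -v -i -X POST 'http://www.abc.com:10123/myapp/abc' \
  -H "Content-Type: application/json" --data-binary "@input.json"

# Or cap the connect phase (in seconds) so curl falls back to the next address sooner
curl --connect-timeout 5 -v -i -X POST 'http://www.abc.com:10123/myapp/abc' \
  -H "Content-Type: application/json" --data-binary "@input.json"
```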

Related

FTPClient throws Connection reset by peer during storeFile

I am trying to upload a file to an FTP server using Java.
What I have so far is:
fun uploadData(): Boolean {
    val ftpClient = FTPSClient()
    ftpClient.addProtocolCommandListener(
        PrintCommandListener(PrintWriter(OutputStreamWriter(System.out, "UTF-8")), true)
    )
    ftpClient.connect(ftpProperties.server)
    try {
        ftpClient.enterLocalPassiveMode()
        ftpClient.login(ftpProperties.username, ftpProperties.password).takeIf { !it }?.let {
            log.error("cannot login to ftp")
            throw Exception("cannot login to ftp")
        }
        ftpClient.enterLocalPassiveMode()
        ftpClient.soTimeout = 10000
        ftpClient.dataTimeout = Duration.ofSeconds(10)
        ftpClient.execPBSZ(0)
        ftpClient.execPROT("P")
        ftpClient.changeWorkingDirectory("/")
        ftpClient.setFileType(FTP.BINARY_FILE_TYPE)
        ftpClient.enterLocalPassiveMode()
    } catch (ie: IOException) {
        log.error("ftp initialization error", ie)
        throw Exception("ftp initialization error")
    }
    val remoteFile = "top.txt.gz"
    val data = FileInputStream(File("top.txt.gz"))
    var done = false
    try {
        done = ftpClient.storeFile(remoteFile, data)
    } catch (e: Exception) {
        log.error(e) { "error" }
    }
    data.close()
    if (done) {
        return true
    }
    log.error { "${ftpClient.replyCode} ${ftpClient.replyString}" }
    throw RuntimeException("File not stored $remoteFile")
}
The code does not seem to work properly. When I run it, I get an error on this line:
done = ftpClient.storeFile(remoteFile, data)
org.apache.commons.net.io.CopyStreamException: IOException caught while copying.
Caused by: java.net.SocketException: Connection reset by peer
I could not find anything wrong in the FTP log:
220 Private FTP server
AUTH TLS
234 Proceed with negotiation.
USER *******
331 Please specify the password.
PASS *******
230 Login successful.
PBSZ 0
200 PBSZ set to 0.
PROT P
200 PROT now Private.
CWD /
250 Directory successfully changed.
TYPE I
200 Switching to Binary mode.
PASV
227 Entering Passive Mode (x,x,x,x,x,x).
STOR top.txt.gz
150 Ok to send data.
I am able to connect and upload files to that FTP server with FileZilla, from the same machine where I run the Java/Kotlin code.
The FileZilla log looks like this:
Status: Connection established, waiting for welcome message...
Response: 220 Private FTP server
Command: AUTH TLS
Response: 234 Proceed with negotiation.
Status: Initializing TLS...
Status: Verifying certificate...
Status: TLS connection established.
Command: USER k8s_search
Response: 331 Please specify the password.
Command: PASS ******************
Response: 230 Login successful.
Status: Server does not support non-ASCII characters.
Command: PBSZ 0
Response: 200 PBSZ set to 0.
Command: PROT P
Response: 200 PROT now Private.
Status: Logged in
Status: Starting upload of /home/x/x/top.txt.gz
Command: CWD /
Response: 250 Directory successfully changed.
Command: TYPE I
Response: 200 Switching to Binary mode.
Command: PASV
Response: 227 Entering Passive Mode (x,x,x,x,x,x).
Command: STOR top.txt.gz
Response: 150 Ok to send data.
Response: 226 Transfer complete.
Status: File transfer successful, transferred 2.6 MB in 1 second
I cannot figure out whether the problem is in my code or something else. Since FileZilla works correctly, the problem appears to lie in the code.

DefaultHttpClient call throws connection refused in the same tomcat with public ip

CentOS 7, Tomcat 8.5.
a.war and rest.war are deployed in the same Tomcat.
a.war uses the following code to call rest.war:
import org.apache.http.impl.client.DefaultHttpClient;
DefaultHttpClient httpClient = new DefaultHttpClient();
HttpPost httpPost = new HttpPost(url);
httpPost.addHeader(HTTP.CONTENT_TYPE, "application/json");
StringEntity se = new StringEntity(json.toString());
se.setContentType("text/json");
se.setContentEncoding(new BasicHeader(HTTP.CONTENT_TYPE, "application/json"));
httpPost.setEntity(se);
HttpResponse response = httpClient.execute(httpPost);
However, if the URL passed to HttpPost(url) is <public ip>:80, then httpClient.execute(httpPost) throws Connection refused,
while if the URL is localhost:80 or 127.0.0.1:80, then httpClient.execute(httpPost) succeeds.
Why, and how can I solve this problem?
Note: if I access a.war from a browser on my computer with the public IP, like http://<public ip>/a, all operations succeed.
My Tomcat connector is:
<Connector
port="80"
protocol="HTTP/1.1"
connectionTimeout="60000"
keepAliveTimeout="15000"
maxKeepAliveRequests="-1"
maxThreads="1000"
minSpareThreads="200"
maxSpareThreads="300"
minProcessors="100"
maxProcessors="900"
acceptCount="1000"
enableLookups="false"
executor="tomcatThreadPool"
maxPostSize="-1"
compression="on"
compressionMinSize="1024"
redirectPort="8443" />
My server has no domain name, only a public IP; its /etc/hosts is:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Updated with the output of some commands run on the server:
ss -nltp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:111 *:* users:(("rpcbind",pid=643,fd=8))
LISTEN 0 128 *:80 *:* users:(("java",pid=31986,fd=53))
LISTEN 0 128 *:22 *:* users:(("sshd",pid=961,fd=3))
LISTEN 0 1 127.0.0.1:8005 *:* users:(("java",pid=31986,fd=68))
LISTEN 0 128 :::111 :::* users:(("rpcbind",pid=643,fd=11))
LISTEN 0 128 :::22 :::* users:(("sshd",pid=961,fd=4))
LISTEN 0 80 :::3306 :::* users:(("mysqld",pid=1160,fd=19))
netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 643/rpcbind
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 31986/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 961/sshd
tcp 0 0 127.0.0.1:8005 0.0.0.0:* LISTEN 31986/java
tcp6 0 0 :::111 :::* LISTEN 643/rpcbind
tcp6 0 0 :::22 :::* LISTEN 961/sshd
tcp6 0 0 :::3306 :::* LISTEN 1160/mysqld
ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 1396428 bytes 179342662 (171.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1396428 bytes 179342662 (171.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
p2p1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.25 netmask 255.255.255.0 broadcast 192.168.1.255
ether f8:bc:12:a3:4f:b7 txqueuelen 1000 (Ethernet)
RX packets 5352432 bytes 3009606926 (2.8 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2839034 bytes 559838396 (533.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: p2p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether f8:bc:12:a3:4f:b7 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.25/24 brd 192.168.1.255 scope global noprefixroute dynamic p2p1
valid_lft 54621sec preferred_lft 54621sec
route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 p2p1
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 p2p1
ip route
default via 192.168.1.1 dev p2p1 proto dhcp metric 100
192.168.1.0/24 dev p2p1 proto kernel scope link src 192.168.1.25 metric 100
iptables -L -n -v --line-numbers
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
You have probably configured one of these:
A firewall on the public IP's ports, so that nothing gets through.
Tomcat may be bound to a specific IP, e.g. localhost (see the Connector elements in Tomcat's server.xml).
Apache httpd, nginx, or another reverse proxy might handle various virtual host names, and might also treat localhost differently from the public IP.
Port forwarding: if you only forward localhost:80 to localhost:8080 (Tomcat's default port), you might not have anything on publicip:80 that forwards that traffic as well.
Edit after your comment:
Incoming traffic seems to be fine, but outgoing traffic has those problems. Adding from @stringy05's comment: check whether the IP in question is routable from your server. You're connecting to that IP from the server, so use another means to create an outgoing connection, e.g. curl.
Explanation for #1 and #3:
If you connect to an external HTTP server, it will handle the request differently based on the hostname used. It might well be that the IP "hostname" is blocked, either by a high-level firewall, or just handled differently than the URL by the web server itself. In most cases you can check this by connecting to the web server in question from any other system, e.g. your own browser.
If Tomcat is listening (bound) on your public IP address it should work, but maybe your public IP address belongs to some other device, like a SOHO router; then your problem is similar to this:
https://superuser.com/questions/208710/public-ip-address-answered-by-router-not-internal-web-server-with-port-forwardi
But without a DNS name you cannot simply add a line to /etc/hosts. You can, however, add the public IP address to one of your network interface cards (NICs), like lo (loopback) or eth0, as described in one of these articles:
https://www.garron.me/en/linux/add-secondary-ip-linux.html
https://www.thegeekdiary.com/centos-rhel-6-how-to-addremove-additional-ip-addresses-to-a-network-interface/
E.g. with public IP address 1.2.3.4 you would need the following (which will only be effective until the next reboot and, in the worst case, might interfere with your ability to connect to the server via e.g. SSH!):
sudo ip addr add 1.2.3.4/32 dev lo
It may be useful to have the output of these commands to better understand your setup; feel free to share it in your question (with the public IP address consistently anonymized):
Either one of these (ss = socket statistics, the newer replacement for good old netstat):
ss -nltp
netstat -nltp
And one of these:
ifconfig
ip addr show
And last but not least either one of these:
route
ip route
I don't expect that we need to know your firewall config, but if you use it, it may be interesting to keep an eye on it while you are at it:
iptables -L -n -v --line-numbers
Try putting your public domain name into the local /etc/hosts file of your server, like this:
127.0.0.1 localhost YOURPUBLIC.DOMAIN.NAME
This way your Java code does not need to use the external IP address and instead connects directly to Tomcat.
Good luck!
I think the curl timeout explains it: you have a firewall rule somewhere that is stopping the server from accessing the public IP address.
If there's no reason the service can't be accessed using localhost or the local hostname, then do that; but if you need to call the service via a public IP, it's a matter of working out why the request times out from the server.
Some usual suspects:
The server might not actually have internet access: can you curl https://www.google.com?
There might be a forward proxy required; a sysadmin will know this sort of thing.
There might be IP whitelisting on some infrastructure around your server: think AWS security groups, load balancer IP whitelists, that sort of thing. To fix that you need to know the public IP of your server (curl https://canihazip.com/s) and get it added to the whitelist.

Why can OpenCV-Java not connect to an MJPEG stream (from Flask) while Python3/cv2 can?

A Python/Flask server (server.py) acts as a webcam proxy.
Python 3 + cv2 (OpenCV 4.1.1) can connect and display the stream using cv2.VideoCapture() (client.py).
Java + OpenCV (1.8.0_192 / opencv-4.1.1_vc14_vc15) seems to have connection issues (client.java).
Note: while cv2.open(url) sends just one GET request, with Java/OpenCV the server sees two GET requests and the capture stays closed.
Below are the source code and the command-line calls, running on Win7 Pro.
Any idea why it is not working in Java?
server.py
# Python 3.7.4 / cv2.__version__=4.1.1
# WebCam-Server providing MJPEG stream
# OK:Firefox_62.0.3 OK_SLOW:VLC_3.0.8 OK:Py3.7+cv2
# FAIL:Java+OpenCV_4.1.1
# python server.py
import cv2, time
from flask import Flask, Response, request

VIDEO_CAPTURE_ID = 0
app = Flask(__name__)

def get_frames():
    camera = cv2.VideoCapture(VIDEO_CAPTURE_ID, cv2.CAP_DSHOW)
    while True:
        val, img = camera.read()
        if not val:
            continue
        yield cv2.imencode('.jpg', img)[1].tobytes()
        time.sleep(0.01)

def gen():
    while True:
        for frame in get_frames():
            yield (b'--frame\r\n'
                   b'Content-Type:image/jpeg\r\n'
                   b'Content-Length: ' + (b'%d' % len(frame)) + b'\r\n'
                   b'\r\n' + frame + b'\r\n')

@app.route('/videofeed')
def f_videofeed():
    print("#app.route('/videofeed') .....")
    return Response(gen(), mimetype='multipart/x-mixed-replace;boundary=frame')

if __name__ == '__main__':
    app.run(host='localhost', threaded=True, debug=True)  # default port 5000
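As an aside, the multipart framing that gen() emits (boundary line, part headers, blank line, JPEG bytes, trailing CRLF) can be sanity-checked in isolation. This is a minimal standalone sketch; the frame contents are fake placeholder bytes, not a real JPEG:

```python
# Build one MJPEG multipart part the same way gen() does, then check the framing.
frame = b'\xff\xd8 fake jpeg bytes \xff\xd9'

part = (b'--frame\r\n'
        b'Content-Type: image/jpeg\r\n'
        b'Content-Length: ' + b'%d' % len(frame) + b'\r\n'
        b'\r\n' + frame + b'\r\n')

# A client splits each part at the blank line: headers above, JPEG body below.
headers, body = part.split(b'\r\n\r\n', 1)
assert body == frame + b'\r\n'
assert b'Content-Length: %d' % len(frame) in headers
print("framing OK")
```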
client.py
# Python 3.7.4 / cv2.__version__=4.1.1
# Client ...
# python client.py
import cv2

fg = cv2.VideoCapture()
print(fg)
addr = "http://localhost:5000/videofeed?cam.mjpg"
print(addr)
tst = fg.open(addr)
print(tst)
for ii in range(5):
    val, frm = fg.read()
    s1 = frm.tostring()
    print("%d : %r %r %r ... %r" % (ii, val, len(s1), s1[:6], s1[-6:]))
fg.release()
client.java
// Java 1.8.0_192 / opencv-4.1.1_vc14_vc15
// Client
// opencv-411.jar/opencv_java411.dll copied from the install dir to "."
// javac -cp opencv-411.jar client.java && java.exe -cp .;opencv-411.jar client
import org.opencv.core.Core;
import org.opencv.videoio.VideoCapture;
import org.opencv.videoio.Videoio;

public class client {
    public static void main(String args[]) throws InterruptedException {
        Boolean tst = false;
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        VideoCapture fg = new VideoCapture(Videoio.CAP_FFMPEG);
        System.out.println(fg);
        String addr = "http://localhost:5000/videofeed?cam.mjpg";
        tst = fg.open(addr);
        System.out.println(tst);
        System.out.println(fg.getBackendName());
    }
}
client console window
test_flask_opencv>
test_flask_opencv>javac -cp opencv-411.jar client.java && java.exe -cp .;opencv-411.jar client
org.opencv.videoio.VideoCapture#28d93b30
[ERROR:0] global C:\build\master_winpack-bindings-win64-vc14-static\opencv\modules\videoio\src\cap.cpp (116) cv::VideoCapture::open VIDEOIO(CV_IMAGES): raised OpenCV exception:
OpenCV(4.1.1) C:\build\master_winpack-bindings-win64-vc14-static\opencv\modules\videoio\src\cap_images.cpp:253: error: (-5:Bad argument) CAP_IMAGES: can't find starting number (in the name of file): http://localhost:5000/videofeed?cam.mjpg in function 'cv::icvExtractPattern'
false
Exception in thread "main" CvException [org.opencv.core.CvException: cv::Exception: OpenCV(4.1.1) C:\build\master_winpack-bindings-win64-vc14-static\opencv\modules\videoio\src\cap.cpp:220: error: (-215:Assertion failed) api != 0 in function 'cv::VideoCapture::getBackendName'
]
at org.opencv.videoio.VideoCapture.getBackendName_0(Native Method)
at org.opencv.videoio.VideoCapture.getBackendName(VideoCapture.java:164)
at client.main(client.java:23)
test_flask_opencv>
test_flask_opencv>
test_flask_opencv>python client.py
<VideoCapture 0013DDC0>
http://localhost:5000/videofeed?cam.mjpg
True
FFMPEG
0 : True 921600 b'\x00\x00\x00\x00\x00\x00' ... b'\x00\x00\x00\x00\x00\x00'
1 : True 921600 b'\x9e\xa9\x8c\x9a\xa5\x88' ... b'`vok\x81z'
2 : True 921600 b'\xc3\xa3\x91\xc1\xa1\x8f' ... b'\x90\x80\x85\x8d}\x82'
3 : True 921600 b'\xb4\x9d\x8f\xb0\x99\x8b' ... b'\x9d\x90\x8c\x8f\x82~'
4 : True 921600 b'\xcd\x94\x86\xd2\x99\x8b' ... b'\xb8\x89\x85\xac}y'
test_flask_opencv>
server console window
test_flask_opencv>
test_flask_opencv>python server.py
* Serving Flask app "server" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
* Restarting with stat
* Debugger is active!
* Debugger PIN: 156-364-938
* Running on http://localhost:5000/ (Press CTRL+C to quit)
#app.route('/videofeed') .....
127.0.0.1 - - [10/Nov/2019 17:45:44] "GET /videofeed?cam.mjpg HTTP/1.1" 200 -
#app.route('/videofeed') .....
127.0.0.1 - - [10/Nov/2019 17:45:45] "GET /videofeed?cam.mjpg HTTP/1.1" 200 -
test_flask_opencv>
test_flask_opencv>
test_flask_opencv>
test_flask_opencv>python server.py
* Serving Flask app "server" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
* Restarting with stat
* Debugger is active!
* Debugger PIN: 156-364-938
* Running on http://localhost:5000/ (Press CTRL+C to quit)
#app.route('/videofeed') .....
127.0.0.1 - - [10/Nov/2019 17:46:17] "GET /videofeed?cam.mjpg HTTP/1.1" 200 -

Retrieving JSON in Python in response to POST

I'm trying to get JSON from a server to use it in Python code. For test purposes, I did a POST with curl:
$ curl -u trial:trial -H "Content-Type: application/json"
-X POST -d '{"BP_TSM":"22"}' http://some-host --trace-ascii -
My Java code seems to correctly create the JSON response. Look at the result of the curl command:
== Info: About to connect() to localhost port 8080 (#0)
== Info: Trying ::1...
== Info: Connected to localhost (::1) port 8080 (#0)
== Info: Server auth using Basic with user 'trial'
=> Send header, 224 bytes (0xe0)
0000: POST /get/auth HTT
0040: P/1.1
0047: Authorization: Basic dHJpYWw6dHJpYWw=
006e: User-Agent: curl/7.29.0
0087: Host: localhost:8080
009d: Accept: */*
00aa: Content-Type: application/json
00ca: Content-Length: 15
00de:
=> Send data, 15 bytes (0xf)
0000: {"BP_TSM":"22"}
== Info: upload completely sent off: 15 out of 15 bytes
<= Recv header, 23 bytes (0x17)
0000: HTTP/1.1 202 Accepted
<= Recv header, 34 bytes (0x22)
0000: Server: Payara Micro #badassfish
<= Recv header, 32 bytes (0x20)
0000: Content-Type: application/json
<= Recv header, 37 bytes (0x25)
0000: Date: Thu, 22 Mar 2018 14:30:43 GMT
<= Recv header, 21 bytes (0x15)
0000: Content-Length: 108
<= Recv header, 29 bytes (0x1d)
0000: X-Frame-Options: SAMEORIGIN
<= Recv header, 2 bytes (0x2)
0000:
<= Recv data, 108 bytes (0x6c)
0000: {"title":"Free Music Archive - Albums","message":"","total":"112
0040: 59","total_pages":2252,"page":1,"limit":"5"}
{"title":"Free Music Archive - Albums","message":"","total":"11259","total_pages
":2252,"page":1,"limit":"5"}== Info: Connection #0 to host localhost left intact
Now I would like a Python script to receive the same message that curl did. I wrote the following Python code (note: I'm not a Python developer):
import pickle
import requests
import codecs
import json
from requests.auth import HTTPBasicAuth
from random import randint
req = requests.get('server/get/auth', auth=HTTPBasicAuth('trial', 'trial'))
return pickle.dumps(req)
Unfortunately, I get the error message 'unicode' object has no attribute 'copy' when the return pickle.dumps(req) statement is executed. I also tried return json.dumps(req), but then I get another error:
Traceback (most recent call last):
File "/tmp/tmp8DfLJ7/usercode.py", line 16, in the_function
return json.dumps(req)
File "/usr/lib64/python2.7/json/__init__.py", line 244, in dumps
return _default_encoder.encode(obj)
File "/usr/lib64/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib64/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
File "/usr/lib64/python2.7/json/encoder.py", line 184, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: <Response [405]> is not JSON serializable
Do I have an error in my Python code, or is my Java server returning incorrect JSON?
There are a number of errors in your Python code.
You are using requests.get to POST. Instead, use requests.post.
You are not passing the BP_TSM JSON string into your request. Use data= in your requests.post.
You are not emulating the -H switch of curl. Use headers= in your requests.post.
You are using pickle for no apparent reason. Don't do that.
You are using a return statement when you are not in a function. Don't do that. If you want to print to stdout, use print() or sys.stdout.write() instead.
If you actually want to use the returned variables from the JSON (as opposed to simply printing to stdout), you should invoke req.json().
Here is a version of your code with problems addressed.
import requests
import json
import sys
from requests.auth import HTTPBasicAuth

data = '{"BP_TSM": "22"}'                       # curl -d
headers = {'content-type': 'application/json'}  # curl -H
auth = HTTPBasicAuth('trial', 'trial')          # curl -u
req = requests.post(                            # curl -X POST
    'http://httpbin.org/post',
    auth=auth,
    data=data,
    headers=headers)
sys.stdout.write(req.text)       # Display JSON on stdout
returned_data = req.json()
my_ip = returned_data["origin"]  # Query value from JSON
print("My public IP is", my_ip)
You're calling dumps on a Response object.
Try returning req.json() or calling json.loads(req.text).
In order to load the JSON string, you'll need to use json.loads(req.text).
You must also ensure that the string is valid JSON, e.g.
'{"FOO":"BAR"}'
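For example, a well-formed JSON string parses cleanly into a dict:

```python
import json

raw = '{"FOO": "BAR"}'   # a valid JSON string
obj = json.loads(raw)    # parses into a Python dict
print(obj["FOO"])        # prints: BAR
```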
You can use the response's json() method to get the JSON response as a dict:
req = requests.get('http://yourdomain.com/your/path', auth=HTTPBasicAuth('trial', 'trial'))
mydict = req.json()

Cassandra sstableloader - Connection refused

I am trying to bulk load a 4-node Cassandra 3.0.10 cluster with some data. I've successfully generated the SSTables following the documentation, but it seems I cannot get sstableloader to load them.
I get the following java.net.ConnectException: Connection refused:
bin/sstableloader -v -d localhost test-data/output/si_test/messages/
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-1-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-10-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-11-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-12-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-13-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-14-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-15-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-16-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-17-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-18-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-19-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-2-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-20-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-21-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-22-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-23-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-24-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-25-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-26-big-Data.db 
/home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-27-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-28-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-29-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-3-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-30-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-31-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-32-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-33-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-34-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-35-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-36-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-37-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-38-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-39-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-4-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-40-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-41-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-42-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-43-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-44-big-Data.db 
/home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-45-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-46-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-47-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-48-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-49-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-5-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-50-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-51-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-52-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-53-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-54-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-55-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-56-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-57-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-58-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-59-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-6-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-60-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-61-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-7-big-Data.db 
/home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-8-big-Data.db /home/ubuntu/cassandra/apache-cassandra-3.0.10/test-data/output/si_test/messages/mc-9-big-Data.db to [localhost/127.0.0.1]
ERROR 14:46:24 [Stream #3d0c24e0-cc43-11e6-8c9f-615437259231] Streaming error occurred
java.net.ConnectException: Connection refused
at sun.nio.ch.Net.connect0(Native Method) ~[na:1.8.0_111]
at sun.nio.ch.Net.connect(Net.java:454) ~[na:1.8.0_111]
at sun.nio.ch.Net.connect(Net.java:446) ~[na:1.8.0_111]
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648) ~[na:1.8.0_111]
at java.nio.channels.SocketChannel.open(SocketChannel.java:189) ~[na:1.8.0_111]
at org.apache.cassandra.tools.BulkLoadConnectionFactory.createConnection(BulkLoadConnectionFactory.java:60) ~[apache-cassandra-3.0.10.jar:3.0.10]
at org.apache.cassandra.streaming.StreamSession.createConnection(StreamSession.java:255) ~[apache-cassandra-3.0.10.jar:3.0.10]
at org.apache.cassandra.streaming.ConnectionHandler.initiate(ConnectionHandler.java:84) ~[apache-cassandra-3.0.10.jar:3.0.10]
at org.apache.cassandra.streaming.StreamSession.start(StreamSession.java:242) ~[apache-cassandra-3.0.10.jar:3.0.10]
at org.apache.cassandra.streaming.StreamCoordinator$StreamSessionConnector.run(StreamCoordinator.java:212) [apache-cassandra-3.0.10.jar:3.0.10]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
progress: total: 100% 0 MB/s(avg: 0 MB/s)WARN 14:46:24 [Stream #3d0c24e0-cc43-11e6-8c9f-615437259231] Stream failed
Streaming to the following hosts failed:
[localhost/127.0.0.1]
java.util.concurrent.ExecutionException: org.apache.cassandra.streaming.StreamException: Stream failed
at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:120)
Caused by: org.apache.cassandra.streaming.StreamException: Stream failed
at org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85)
at com.google.common.util.concurrent.Futures$6.run(Futures.java:1310)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457)
at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
at org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:211)
at org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:187)
at org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:440)
at org.apache.cassandra.streaming.StreamSession.onError(StreamSession.java:540)
at org.apache.cassandra.streaming.StreamSession.start(StreamSession.java:248)
at org.apache.cassandra.streaming.StreamCoordinator$StreamSessionConnector.run(StreamCoordinator.java:212)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
The utility seems to connect to the cluster (Established connection to initial hosts), but it does not stream the data.
What I've tried so far to debug the issue:
Dropped the target keyspace (and got a different error), then created it again via cqlsh
I can telnet to each node of the cluster on ports 9042 and 7000
Enabled Thrift using nodetool enablethrift
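The telnet checks above can also be scripted non-interactively, e.g. with netcat (the host shown is the node address that appears later in the question):

```shell
# Probe the CQL (native protocol) and storage/streaming ports on a node
nc -zv 172.31.3.88 9042
nc -zv 172.31.3.88 7000
```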
EDIT
That's the output of netstat -an | grep 7000. The nodes have only one network interface and Cassandra is listening on it. It has also established connections with all the other nodes on port 7000.
tcp 0 0 172.31.3.88:7000 0.0.0.0:* LISTEN
tcp 0 0 172.31.3.88:7000 172.31.3.86:54348 ESTABLISHED
tcp 0 0 172.31.3.88:7000 172.31.3.87:60661 ESTABLISHED
tcp 0 0 172.31.3.88:53061 172.31.3.87:7000 ESTABLISHED
tcp 0 0 172.31.3.88:7000 172.31.11.43:36984 ESTABLISHED
tcp 0 0 172.31.3.88:51412 172.31.11.43:7000 ESTABLISHED
tcp 0 0 172.31.3.88:54018 172.31.3.87:7000 ESTABLISHED
tcp 0 0 172.31.3.88:7000 172.31.3.87:40667 ESTABLISHED
tcp 0 0 172.31.3.88:34469 172.31.3.86:7000 ESTABLISHED
tcp 0 0 172.31.3.88:43658 172.31.3.86:7000 ESTABLISHED
tcp 0 0 172.31.3.88:7000 172.31.11.43:49487 ESTABLISHED
tcp 0 0 172.31.3.88:40798 172.31.11.43:7000 ESTABLISHED
tcp 0 0 172.31.3.88:7000 172.31.3.86:51537 ESTABLISHED
EDIT 2
Changing the initial host from 127.0.0.1 to the actual address of the node in the network results in a com.datastax.driver.core.exceptions.TransportException:
bin/sstableloader -v -d 172.31.3.88 test-data/output/si_test/messages/
All host(s) tried for query failed (tried: /172.31.3.88:9042 (com.datastax.driver.core.exceptions.TransportException: [/172.31.3.88] Cannot connect))
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /172.31.3.88:9042 (com.datastax.driver.core.exceptions.TransportException: [/172.31.3.88] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:233)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1424)
at com.datastax.driver.core.Cluster.init(Cluster.java:163)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:334)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:309)
at com.datastax.driver.core.Cluster.connect(Cluster.java:251)
at org.apache.cassandra.utils.NativeSSTableLoaderClient.init(NativeSSTableLoaderClient.java:70)
at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:159)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:104)
Any suggestion is appreciated.
Thanks
That's it trying to connect to the storage port (7000). It's most likely bound to a different interface than 127.0.0.1. You can check what it's bound to with netstat -an | grep 7000. You may want to double-check any firewall or iptables settings.
UPDATE: it's not bound to 127.0.0.1 (the default) but to 172.31.3.88, so call sstableloader -v -d 172.31.3.88 test-data/output/si_test/messages/
Also, if you have SSL enabled (server_encryption_options in cassandra.yaml), you need to use port 7001 and configure it to match. If you can telnet to port 7000, it's most likely not that, though.
Worth noting that enabling Thrift is not necessary in 3.0.10; sstableloader no longer uses it (in older versions it was used to read the schema).
Solved by changing rpc_address in the cassandra.yaml file from the default localhost to the actual address of each node.
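For reference, the relevant line in cassandra.yaml would look roughly like this (the address is an example; each node uses its own):

```
# cassandra.yaml (per node)
rpc_address: 172.31.3.88   # address clients (and sstableloader's CQL connection) use; default is localhost
```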