I want to connect to HBase running in standalone mode in a Docker container, using Java and the HBase API.
I use this code to connect:
Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "163.172.142.199");
config.set("hbase.zookeeper.property.clientPort","2181");
HBaseAdmin.checkHBaseAvailable(config);
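For context, a self-contained version of this availability check looks roughly like this (the class name is illustrative; the addresses are the ones from my setup):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class HBaseAvailabilityCheck {
    public static void main(String[] args) throws Exception {
        // Point the client at the ZooKeeper instance bundled with standalone HBase
        Configuration config = HBaseConfiguration.create();
        config.set("hbase.zookeeper.quorum", "163.172.142.199");
        config.set("hbase.zookeeper.property.clientPort", "2181");

        // Throws MasterNotRunningException if the HMaster cannot be reached
        HBaseAdmin.checkHBaseAvailable(config);
        System.out.println("HBase is available");
    }
}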
Here is my /etc/hosts file
127.0.0.1 localhost
XXX.XXX.XXX.XXX hbase-srv
Here is the /etc/hosts file from my Docker container (named hbase-srv)
XXX.XXX.XXX.XXX hbase-srv
With this configuration, I get a connection refused error:
INFO | Initiating client connection, connectString=163.172.142.199:2181 sessionTimeout=90000 watcher=hconnection-0x6aba2b860x0, quorum=163.172.142.199:2181, baseZNode=/hbase
INFO | Opening socket connection to server 163.172.142.199/163.172.142.199:2181. Will not attempt to authenticate using SASL (unknown error)
INFO | Socket connection established to 163.172.142.199/163.172.142.199:2181, initiating session
INFO | Session establishment complete on server 163.172.142.199/163.172.142.199:2181, sessionid = 0x15602f8d8dc0002, negotiated timeout = 40000
INFO | Closing zookeeper sessionid=0x15602f8d8dc0002
INFO | Session: 0x15602f8d8dc0002 closed
INFO | EventThread shut down
org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1560)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1580)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1737)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isMasterRunning(ConnectionManager.java:948)
at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:3159)
at hbase.Benchmark.main(Benchmark.java:26)
However, if I remove the lines XXX.XXX.XXX.XXX hbase-srv from both /etc/hosts files, I get the error unknown host: hbase-srv.
I have also checked that I can successfully telnet to my HBase container on the client port.
In the container, all the ports used by HBase are open and bound to the same numbers (60000 to 60000, 2181 to 2181, etc.).
I also want to add that everything was fine when I used this configuration on localhost.
If you can't give me an answer to my problem, could you at least give me a procedure to deploy standalone HBase in a Docker container?
UPDATE: Here is my Dockerfile
FROM java:openjdk-8
ADD hbase-1.2.1 /hbase-1.2.1
WORKDIR /hbase-1.2.1
# ZooKeeper
EXPOSE 2181
# HMaster
EXPOSE 60000
# HMaster Web
EXPOSE 60010
# RegionServer
EXPOSE 60020
# RegionServer Web
EXPOSE 60030
EXPOSE 16010
RUN chmod 755 /hbase-1.2.1/bin/start-hbase.sh
CMD ["/hbase-1.2.1/bin/start-hbase.sh"]
My HBase shell is working. I also tried to open the ports using iptables for TCP and UDP, but I still have the same problem.
There are two problems with your Dockerfile:
1. use hbase master start instead of start-hbase.sh
2. the regionserver is actually not running on 60020
The second problem is not so easy to solve. If you run HBase standalone with version >= 1.2.0 (not entirely sure; I'm running 1.2.0), HBase will use an ephemeral port instead of the default port or the port you provide in hbase-site.xml, which makes it very hard to expose the HBase service from Docker using the original version.
I added a property named hbase.localcluster.port.ephemeral and managed to build a standalone HBase in Docker, which you can reference here.
I am running the latest Kafka on Ubuntu under WSL2 successfully. I can start ZooKeeper and the Kafka server, create topics, and console-produce and console-consume just fine from within the Ubuntu I have running on WSL. However, when I go into IntelliJ on Windows and create a simple Java producer, it does not seem to be able to connect to the broker.
Versions & Hostname
Java version: 1.8
Kafka Version: 2.6
hostname (from Ubuntu): KDAAPPDEV04
hostname (from Powershell): KDAAPPDEV04
java.net.InetAddress.getLocalHost().getHostName() = KDAAPPDEV04
java.net.InetAddress.getLocalHost().getCanonicalHostName() = KDAAPPDEV04
netstat from CMD:
TCP [::1]:9092 [::]:0 LISTENING
server.properties
I found these settings in another SO answer, but they did not work for me.
advertised.listeners=PLAINTEXT://127.0.0.1:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT
listeners=PLAINTEXT://0.0.0.0:9092
Then I tried the following (and restarted ZooKeeper and Kafka):
advertised.listeners=PLAINTEXT://KDAAPPDEV04:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT
listeners=PLAINTEXT://0.0.0.0:9092
Producer
I run this producer with three different values: hostname, localhost and 127.0.0.1, but it never connects to the broker.
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ProducerDemo {
    private static Logger logger = LoggerFactory.getLogger(ProducerDemo.class);

    public static void main(String[] args) throws UnknownHostException {
        System.out.println(InetAddress.getLocalHost().getHostName());
        System.out.println(InetAddress.getLocalHost().getCanonicalHostName());

        String bootstrapServers = "127.0.0.1:9092";
        // String bootstrapServers = "localhost:9092";
        // String bootstrapServers = "KDAAPPDEV04:9092";

        // create producer properties
        Properties properties = new Properties();
        properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // create the producer
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(properties);

        // create a producer record
        ProducerRecord<String, String> record = new ProducerRecord<String, String>("first-topic", "hola mundo");

        // send data
        producer.send(record);

        // flush + close
        producer.flush();
        producer.close();
    }
}
Error
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.6.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 62abe01bee039651
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1601666175706
[kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Connection to node -1 (KDAAPPDEV04/my-ipconfig-address-here:9092) could not be established. Broker may not be available.
Had this same issue. The root cause seems to be that WSL2 is broken with regards to IPv6 and localhost (See: https://github.com/microsoft/WSL/issues/4851)
The only fix I found that doesn't involve changing configs every time you reboot (per the "172.*" suggestion above) is to use the IPv6 loopback address ::1 in both the Kafka server config running in Linux and the Java client in Windows.
In server.properties I have this:
listeners=PLAINTEXT://[::1]:9092
And likewise in my Java client bootstrap server config I use
"[::1]:9092"
I had the exact problem you are having and I resolved it as follows:
1. I ran the following command in my WSL2 Ubuntu shell:
ip addr | grep "eth0"
and made a note of the IP address against the inet property, for example 172.27.10.68.
2. In my Kafka server.properties I replaced the listeners property value as follows:
listeners=PLAINTEXT://172.27.10.68:9092
I commented out the advertised.listeners property. Alternatively, you can assign the IP in question to advertised.listeners and set listeners to 0.0.0.0. But I assume you are using the Kafka installation for testing/learning purposes, so I would keep it simple.
3. I made no change to ZooKeeper's default ip:port.
4. I am using the Schema Registry, so I modified the Kafka bootstrap property as follows:
kafkastore.bootstrap.servers=PLAINTEXT://172.27.10.68:9092
I made no change to the default Schema Registry listener, listeners=http://0.0.0.0:8081.
5. I used the same IP (as listed above) in my IntelliJ Kafka producer.
It then happily connected to my Kafka broker in WSL2.
More information on WSL2 networking can be found at https://learn.microsoft.com/en-us/windows/wsl/compare-versions.
The only problem with this setup is that every time you shut down or restart your Windows machine, or close your Ubuntu terminal, the IP address for eth0 changes, which means redoing steps 2, 4 and 5. I am sure there is a better way, but everything I tried failed, except for this.
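One small mitigation on the client side (my own suggestion, not part of the answer above) is to read the broker address from an environment variable instead of hard-coding it, so an eth0 change after a reboot only means updating one value. The variable name is illustrative, and this is a drop-in for the bootstrapServers line in the ProducerDemo class shown earlier:

// KAFKA_BOOTSTRAP_SERVERS is an illustrative name; set it to e.g. 172.27.10.68:9092 in Windows.
String bootstrapServers = System.getenv().getOrDefault("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092");
properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);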
WSL2 runs on a hypervisor, and you need a port proxy to connect to a Kafka broker running on WSL2.
Step 1. Check your WSL2 IP using the following command and copy the inet value:
$ ifconfig
inet 172.X.X.X
Step 2. Open cmd with admin permissions and run:
netsh interface portproxy add v4tov4 listenport=9092 listenaddress=0.0.0.0 connectport=9092 connectaddress=172.X.X.X
You should be able to connect now.
Note: the WSL2 IP changes every time you restart the machine.
I was able to find a workaround, thanks to Goose's comments.
I ran the following command in my WSL2 Ubuntu shell: ip addr
Then I took the IP address against the inet property for scope global eth0, for example inet 172.20.XXX.XXX/20 .... scope global eth0
I replaced all occurrences of localhost with this IP address in the docker-compose.yml.
I replaced localhost with this IP address in the Spring Boot yml or properties file.
My Kafka producer and consumer are now able to connect from Windows to the Kafka running in Ubuntu on WSL2.
Stop Kafka and Zookeeper, then
Disable IPv6 on WSL2:
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
Start Kafka, and you're good to go!
I got this problem when running a Kafka producer in IntelliJ and a consumer in an Ubuntu terminal while on WSL2.
First, stop Kafka and Zookeeper. Then run these commands on WSL2, one by one:
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
After that, in the Kafka folder, go to config/server.properties and edit the file to add the line:
listeners=PLAINTEXT://localhost:9092
When these commands have succeeded, relaunch ZooKeeper and Kafka.
https://www.conduktor.io/kafka/kafka-fundamentals
This is not the optimal solution, but you will be able to connect if you run your producer in Ubuntu/WSL. That means writing the code in a Windows IDE, then switching to Ubuntu, compiling from the command line, and running the producer there. See this post: Error connecting to kafka server via IDE in WSL2
Edit the file /etc/sysctl.conf and add the following lines to it:
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
Replace listeners=PLAINTEXT://:9092 with listeners=PLAINTEXT://localhost:9092 in your server.properties.
Update the sysctl configuration using the following command. (Every time you restart your machine, this command needs to be run again to apply the configuration.)
sudo sysctl -p
I have a Java application running on a remote OpenShift cluster, and I want to debug the app from my local machine with IntelliJ IDEA. The app is built by a Jenkinsfile on another remote Jenkins server (Gradle build, Docker build, and push to OpenShift, where it is automatically deployed).
The Dockerfile exposes port 9009, and therefore my IntelliJ remote debug config looks like this:
Debug Config
With localhost in the debug config, I need OpenShift port forwarding:
oc port-forward my-pod 9009
Forwarding from 127.0.0.1:9009 -> 9009
When I start the Debugger I get the following error in Intellij:
Error running 'DTC Remote Debug':
Unable to open debugger port (localhost:9009): java.net.ConnectException "Connection refused: connect"
At the same time the terminal with the port forwarding shows:
Handling connection for 9009
E0927 09:52:33.711817 5996 portforward.go:331] an error occurred forwarding 9009 -> 9009: error forwarding port 9009 to pod ad370...c010, uid : exit status 1: 2019/09/27 03:52:33 socat[129691] E connect(5, AF=2 127.0.0.1:9009, 16): Connection refused
Doing an Nmap scan against the URL where I can reach the index.html of my application, I got the following:
nmap -sS my-openshift-url
Starting Nmap 7.80 ( https://nmap.org ) at 2019-09-27 15:01 Mitteleuropäische Sommerzeit
Nmap scan report for my-openshift-url (IP-Address)
Host is up (0.0043s latency).
rDNS record for IP-Address: dispatch-my-domain
Not shown: 997 filtered ports
PORT STATE SERVICE
80/tcp open http
443/tcp open https
9009/tcp closed pichat
Nmap done: 1 IP address (1 host up) scanned in 6.10 seconds
I guess the problem is the closed 9009 port, but I have no clue how I can open that port on my OpenShift cluster. I have already set several environment variables in the OpenShift web UI (just to be sure):
DEBUG TRUE
DEBUG true
DEBUGGING TRUE
DEBUGGING true
JAVA_DEBUG TRUE
JAVA_DEBUG true
JAVA_DEBUG_PORT 9009
But I can't get it to work. If I switch the port forwarding to 8080, I can access the index.html via localhost:8080 from my browser. I don't know if I need to change something in the project code (Gradle, Docker, Jenkins, etc.) or if I can just open the port on the deployed service in OpenShift somehow...
If anything isn't clear or if I missed something just tell me. I'm happy for every piece of advice.
Regards,
Christoph
Adding the following environment variable in OpenShift did the trick:
JAVA_TOOL_OPTIONS -agentlib:jdwp=transport=dt_socket,address=9009,server=y,suspend=n
All the other environment variables from above are obsolete...
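To double-check that the agent is actually listening before attaching the IDE, a quick TCP probe against the forwarded port can help. This is just a sketch, assuming the oc port-forward my-pod 9009 session from the question is still running:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class DebugPortProbe {
    public static void main(String[] args) {
        // Tries to open a TCP connection to the locally forwarded JDWP port.
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("localhost", 9009), 2000);
            System.out.println("Debug port 9009 is reachable");
        } catch (IOException e) {
            System.out.println("Debug port 9009 is NOT reachable: " + e.getMessage());
        }
    }
}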
I have a local Spring Boot application that connects to a local MySQL instance, and it works fine.
For connection I use the following property:
spring.datasource.url=jdbc:mysql://localhost:3306/pitstop?useSSL=false&useUnicode=true&characterEncoding=utf8&serverTimezone=UTC
I would like to put my app in Docker and have it connect to the local DB, so I need to modify the MySQL URL.
I used this command to obtain the local IP: ip route show | grep "default" | awk '{print $3}'; the result is 192.168.1.1. I modified my URL like this:
spring.datasource.url=jdbc:mysql://192.168.1.1:3306/pitstop?useSSL=false&useUnicode=true&characterEncoding=utf8&serverTimezone=UTC
and tried to start the Docker container with my app using the command docker run -p 9001:9001 --network=bizon4ik --rm bizon4ik/mycontainer. The result is this exception:
Caused by: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
I also tried to find another IP. I used the ip addr show command and found the following record:
3: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 30:52:cb:db:8d:e0 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.102/24 brd 192.168.1.255 scope global dynamic noprefixroute wlp3s0
So based on that, I took 192.168.1.102 and modified my URL, but the result is the same exception.
To be sure the IPs I found are correct, I ran a new container with plain Ubuntu (docker run -it --rm --network=bizon4ik ubuntu) and checked the IPs mentioned above:
root@268c4d544328:/# nmap -p 3306 192.168.1.1
Starting Nmap 7.60 ( https://nmap.org ) at 2019-07-28 17:28 UTC
Nmap scan report for 192.168.1.1
Host is up (0.053s latency).
PORT STATE SERVICE
3306/tcp filtered mysql
root@268c4d544328:/# nmap -p 3306 192.168.1.102
Starting Nmap 7.60 ( https://nmap.org ) at 2019-07-28 15:56 UTC
Nmap scan report for my-host (192.168.1.102)
Host is up (0.000088s latency).
PORT STATE SERVICE
3306/tcp closed mysql
So it looks fine; the Ubuntu container can reach my DB. Do you have any ideas why I cannot connect to the DB from the app?
NB:
I checked SHOW GRANTS; in the DB; the result is GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION
I also checked /etc/mysql/my.cnf file. It has only these records:
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/
I don't have ~/.my.cnf
The problem was the bind-address for MySQL. I looked for it in /etc/mysql/my.cnf; however, the right place is /etc/mysql/mysql.conf.d/mysqld.cnf.
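As a quick sanity check after editing mysqld.cnf and restarting MySQL, a minimal JDBC probe run from inside the container can confirm the server now accepts remote connections. This is only a sketch: the host and database are the ones from the question, the credentials are placeholders, and it assumes MySQL Connector/J is on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;

public class MySqlProbe {
    public static void main(String[] args) {
        // Host and database taken from the question; user/password are placeholders.
        String url = "jdbc:mysql://192.168.1.1:3306/pitstop?useSSL=false&serverTimezone=UTC";
        try (Connection conn = DriverManager.getConnection(url, "root", "secret")) {
            System.out.println("Connected: " + conn.getMetaData().getDatabaseProductVersion());
        } catch (Exception e) {
            System.out.println("Connection failed: " + e.getMessage());
        }
    }
}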
MacOS + Docker (Version 17.12.0-ce-mac49 (21995)) here. I am trying to Dockerize an existing Spring Boot app. Here's my Dockerfile:
FROM openjdk:8
RUN mkdir /opt/myapp
ADD build/libs/myapp.jar /opt/myapp
ADD application.yml /opt/myapp
ADD logback.groovy /opt/myapp
WORKDIR /opt/myapp
EXPOSE 9200
ENTRYPOINT ["java", "-Dspring.config=.", "-jar", "myapp.jar"]
Here's my Spring Boot application.yml config file. As you can see it expects Docker to inject environment variables from an env file:
logging:
  config: 'logback.groovy'
server:
  port: 9200
  error:
    whitelabel:
      enabled: true
spring:
  cache:
    type: none
  datasource:
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://${DB_HOST}:3306/myapp_db?useSSL=false&nullNamePatternMatchesAll=true
    username: ${DB_USERNAME}
    password: ${DB_PASSWORD}
    testWhileIdle: true
    validationQuery: SELECT 1
  jpa:
    show-sql: false
    hibernate:
      ddl-auto: none
      naming:
        physical-strategy: org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
        implicit-strategy: org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
    properties:
      hibernate.dialect: org.hibernate.dialect.MySQL5InnoDBDialect
      hibernate.cache.use_second_level_cache: false
      hibernate.cache.use_query_cache: false
      hibernate.generate_statistics: false
      hibernate.hbm2ddl.auto: validate
myapp:
  detailsMode: ${DETAILS_MODE}
  tokenExpiryDays:
    alert: 5
  jwtInfo:
    secret: ${JWT_SECRET}
    expiry: ${JWT_EXPIRY}
  topics:
    adminAlerts: admin-alerts
Here's my myapp-local.env file:
DB_HOST=localhost
DB_USERNAME=root
DB_PASSWORD=
DETAILS_MODE=Terse
JWT_SECRET=12345==
JWT_EXPIRY=86400000
It's worth noting that for DB_HOST in the env file above, I have tried localhost, 127.0.0.1 and 172.17.0.1, and all of them produce the identical errors below.
Then I build the container:
docker build -t myapp .
Success! Then I run the container:
docker run -it -p 9200:9200 --net="host" --env-file myapp-local.env --name myapp myapp
...and I watch as the container quickly dies with MySQL connection-related exceptions (can't connect to the MySQL machine running locally). I can confirm that the Spring Boot app has no problem connecting to MySQL when it runs as an executable ("fat") jar outside of Docker, and I can confirm that the local MySQL instance is up and running and is perfectly healthy.
Unable to connect to database. }com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:590)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:57)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:1606)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:633)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:347)
When I turn TRACE-level logging on, I see it is trying to connect to:
url=jdbc:mysql://localhost:3306/myapp?useSSL=false&nullNamePatternMatchesAll=true
So it does look like Docker is properly injecting the env file's vars into the Spring YAML-based config. So this doesn't feel like a config issue, but rather an issue with the container reaching the MySQL port on the Docker host.
Can anybody see where I'm going awry?
Accessing the host machine from within a container is not recommended. Usually it can be solved by wrapping the service you need in a container and accessing it via the container name.
There is no solution, there are only workarounds, you can use one of them:
On Mac you can access the host services using docker.for.mac.host.internal DNS name.
You need to set environment variable like this:
DB_HOST=docker.for.mac.host.internal
And refer to the DB_HOST from your connection string.
For more details see the documentation:
From 17.12 onwards our recommendation is to connect to the special
Mac-only DNS name docker.for.mac.host.internal, which resolves to the
internal IP address used by the host.
Note: having --net="host" doesn't let you reach the host machine via localhost. localhost always points to the local machine, but when it is resolved from within a container, it points to the container itself.
So basically the Docker app is not on the same network as the host you're running it from, and that's why you can't access MySQL by pointing to localhost (because that is a different network from Docker's point of view).
What you could try is to run Docker with the --net="host" option; then it will share the network with its host.
You can find better explanation on this issue in this topic From inside of a Docker container, how do I connect to the localhost of the machine?
Here is my code:
conf.set( "mongo.input.uri" , "mongodb://127.0.0.1/stackoverflow.mrtest" );
conf.set( "mongo.output.uri" , "mongodb://127.0.0.1/stackoverflow.mrtest_out2" );
The code runs without error when the host is localhost or 127.0.0.1. But when the host is changed to my wlan0 IP, 192.168.1.102, it returns the following error:
Cluster created with settings {hosts=[192.168.1.102:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
Exception in monitor thread while connecting to server 192.168.1.102:27017
com.mongodb.MongoSocketOpenException: Exception opening socket
at com.mongodb.connection.SocketStream.open(SocketStream.java:63)
at com.mongodb.connection.InternalStreamConnection.open(InternalStreamConnection.java:114)
at com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:127)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at com.mongodb.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:50)
at com.mongodb.connection.SocketStream.open(SocketStream.java:58)
... 3 more
I have opened port 27017:
sudo iptables -A INPUT -ptcp --dport 27017 -j ACCEPT
My OS is Ubuntu 14.04.
How should I fix it? Thank you!
By default MongoDB only binds to the loopback interface, which makes it accessible only from localhost. To change that, you need to edit this line in the mongod.conf file:
# /etc/mongod.conf
# Listen to local interface only. Comment out to listen on all interfaces.
bind_ip = 127.0.0.1
You can change it to bind_ip = 127.0.0.1,192.168.1.102 to allow LAN and local connections, or you can remove or comment out that line to allow all connections.
For more info: MongoDB – Allow remote access
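After changing bind_ip and restarting mongod, a minimal probe with the MongoDB Java driver can confirm the server is now reachable on the LAN address from the question. This is just a sketch, assuming the 3.x legacy driver (com.mongodb.MongoClient) is on the classpath:

import com.mongodb.MongoClient;

public class MongoProbe {
    public static void main(String[] args) {
        // 192.168.1.102 is the wlan0 address from the question.
        MongoClient client = new MongoClient("192.168.1.102", 27017);
        try {
            // listDatabaseNames() forces a round trip, so a refused connection fails here.
            for (String name : client.listDatabaseNames()) {
                System.out.println(name);
            }
        } finally {
            client.close();
        }
    }
}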
There could be several reasons for this, which in short come down to: your application is unable to communicate with the MongoDB service.
1. Check whether your MongoDB is using the same IP configured in your application.yml file. If not, configure the one used by MongoDB:
spring:
  profiles:
    active: dev
---
spring:
  profiles: dev
  data:
    mongodb:
      host: localhost
      port: 27017
Here I assumed my Mongo is running on localhost and port 27017, so I configured it accordingly.
2. Check whether your MongoDB service is up and running. How to check?
Execute the following command in your terminal:
sudo service mongodb status
● mongodb.service - An object/document-oriented database
Loaded: loaded (/lib/systemd/system/mongodb.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-07-03 20:10:15 IST; 1min 54s ago
Docs: man:mongod(1)
Main PID: 14305 (mongod)
Tasks: 23 (limit: 4915)
CGroup: /system.slice/mongodb.service
└─14305 /usr/bin/mongod --unixSocketPrefix=/run/mongodb --config /etc/mongodb.conf
If the status is not shown as active and running, you need to start/restart the service:
sudo service mongodb restart
I was able to determine that it was an issue with the bind parameter in /etc/mongod.conf. Instead of commenting it out, I set it to 0.0.0.0 to allow remote access.
# network interfaces
net:
port: 27017
bindIp: 0.0.0.0
Exception: com.mongodb.MongoSocketOpenException: Exception opening socket
Solution:
Verify whether you have started the mongo daemon (mongod) or not.
Windows terminal: mongod.exe
Linux terminal: mongod
If you're using Spring Boot and are following the Quick Start, make sure you put this configuration in your application.properties:
spring.data.mongodb.uri=[YOUR_URI]