Spring Boot MongoDB timeout while connecting with the Mongo driver - Java

While working with spring-boot-starter-data-mongodb, I always get a timeout exception. Could anybody tell me why I keep getting a timeout? Thanks so much. The log details are as follows:
2019-04-01 19:08:50.255 INFO 8336 --- [168.0.101:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server 192.168.0.101:27017
com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message
at com.mongodb.connection.InternalStreamConnection.translateReadException(InternalStreamConnection.java:530)
at com.mongodb.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:421)
2019-04-01 19:09:15.163 DEBUG 8336 --- [nio-8888-exec-1] o.s.b.w.s.f.OrderedRequestContextFilter : Cleared thread-bound request context: org.apache.catalina.connector.RequestFacade@4ce3ddaf
2019-04-01 19:09:15.165 ERROR 8336 --- [nio-8888-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.dao.DataAccessResourceFailureException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=192.168.0.101:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message}, caused by {java.net.SocketTimeoutException: Read timed out}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=192.168.0.101:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message}, caused by {java.net.SocketTimeoutException: Read timed out}}]] with root cause
com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=192.168.0.101:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message}, caused by {java.net.SocketTimeoutException: Read timed out}}]
at com.mongodb.connection.BaseCluster.getDescription(BaseCluster.java:167)
at com.mongodb.Mongo.getConnectedClusterDescription(Mongo.java:885)
at com.mongodb.Mongo.createClientSession(Mongo.java:877)
at com.mongodb.Mongo$3.getClientSession(Mongo.java:866)
My Spring Boot version is 2.0.8.RELEASE, and here is the content of my application.yml:
spring:
  data:
    mongodb:
      host: 192.168.0.101
      port: 27017
      username: test
      password: test
      database: test
server:
  port: 8888
management:
  health:
    mongo:
      enabled: false

you can try this:
<dependency>
    <groupId>com.spring4all</groupId>
    <artifactId>mongodb-plus-spring-boot-starter</artifactId>
    <version>1.0.0.RELEASE</version>
</dependency>
@EnableMongoPlus
@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
Then you have a bunch more configuration properties to play around with :)
spring.data.mongodb.option.min-connection-per-host=0
spring.data.mongodb.option.max-connection-per-host=100
spring.data.mongodb.option.threads-allowed-to-block-for-connection-multiplier=5
spring.data.mongodb.option.server-selection-timeout=30000
spring.data.mongodb.option.max-wait-time=120000
spring.data.mongodb.option.max-connection-idle-time=0
spring.data.mongodb.option.max-connection-life-time=0
spring.data.mongodb.option.connect-timeout=10000
spring.data.mongodb.option.socket-timeout=0
spring.data.mongodb.option.socket-keep-alive=false
spring.data.mongodb.option.ssl-enabled=false
spring.data.mongodb.option.ssl-invalid-host-name-allowed=false
spring.data.mongodb.option.always-use-m-beans=false
spring.data.mongodb.option.heartbeat-socket-timeout=20000
spring.data.mongodb.option.heartbeat-connect-timeout=20000
spring.data.mongodb.option.min-heartbeat-frequency=500
spring.data.mongodb.option.heartbeat-frequency=10000
spring.data.mongodb.option.local-threshold=15
I have not tried it yet... but maybe it's worth a try.
Or look in the repo to see how to do it without the dependency in your project ;)
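If you would rather not add that dependency, a minimal sketch of the same idea (assuming Spring Boot 2.0.x with the 3.x Mongo driver, whose auto-configuration will pick up a MongoClientOptions bean) could look like this; the timeout values are only illustrative:
import com.mongodb.MongoClientOptions;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MongoOptionsConfig {

    // Picked up by Spring Boot's Mongo auto-configuration when it builds the MongoClient.
    @Bean
    public MongoClientOptions mongoClientOptions() {
        return MongoClientOptions.builder()
                .connectTimeout(10_000)          // ms to establish a connection
                .socketTimeout(60_000)           // ms to wait for a response on the socket
                .serverSelectionTimeout(30_000)  // ms to wait for a suitable server
                .heartbeatConnectTimeout(20_000) // ms for the monitor thread's connects
                .heartbeatSocketTimeout(20_000)  // ms for the monitor thread's reads
                .build();
    }
}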

It is not a final solution, but you can try a longer timeout.
# The time to wait to establish a connection before timing out, in seconds.
# (default: 10)
connect_timeout: 99
If it connects successfully after changing the timeout, you should find out why it is taking so long to establish a connection and try to fix it.
If it does not connect even after setting a very long timeout, you should check your proxy and try to ping the machine where MongoDB is running.
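To try the longer-timeout suggestion against the setup from the question, a small standalone probe might look like the sketch below (assuming the 3.x Java driver; host, credentials and database are taken from the question, and the large values are only for diagnosis):
import com.mongodb.MongoClient;
import com.mongodb.MongoClientURI;
import org.bson.Document;

public class MongoTimeoutProbe {

    public static void main(String[] args) {
        // Standard MongoDB URI options raise the connect/socket/server-selection timeouts.
        MongoClientURI uri = new MongoClientURI(
                "mongodb://test:test@192.168.0.101:27017/test"
                        + "?connectTimeoutMS=120000&socketTimeoutMS=120000&serverSelectionTimeoutMS=120000");
        try (MongoClient client = new MongoClient(uri)) {
            // A simple ping shows whether the connection eventually succeeds at all.
            System.out.println(client.getDatabase("test").runCommand(new Document("ping", 1)));
        }
    }
}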

Related

Camel Elasticsearch Component throws Connection refused

I'm using the latest Camel 3.17 on Camel K with the Elasticsearch REST component and an Elasticsearch 7.7 instance, all on a Kubernetes cluster with Services.
I'm getting Connection refused when running a simple integration with this route:
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.component.elasticsearch.ElasticsearchComponent;

public class Routes extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        ElasticsearchComponent elasticsearchComponent = new ElasticsearchComponent();
        elasticsearchComponent.setHostAddresses("elasticsearch-rlam-service:9200");
        getContext().addComponent("elasticsearch-rest", elasticsearchComponent);

        from("kafka:dbz?brokers={{kafka.bootstrap.address}}&groupId=apps&autoOffsetReset=earliest")
            .choice()
                .when().simple("${body} == 'null'")
                    .log("Null!")
                .otherwise()
                    .log("Message: ${body}")
                    .to("elasticsearch-rest://elasticsearch?hostAddresses=elasticsearch-rlam-service:9200&operation=INDEX&indexName=dbz")
            .endChoice();
    }
}
Setting up the ElasticsearchComponent is not mandatory for this test, since I'm not connecting through port 9300 with credentials, but removing it and adding the hostAddresses property instead doesn't resolve the issue.
The stacktrace:
2022-06-20 00:11:35,837 INFO [route1] (Camel (camel-1) thread #1 - KafkaConsumer[dbz]) Null!
2022-06-20 00:11:35,849 INFO [route1] (Camel (camel-1) thread #1 - KafkaConsumer[dbz]) Message: {"last_name":"Ketchmar","id":1004,"first_name":"Anne","email":"annek@noanswer.org"}
2022-06-20 00:11:36,116 ERROR [org.apa.cam.pro.err.DefaultErrorHandler] (Camel (camel-1) thread #1 - KafkaConsumer[dbz]) Failed delivery for (MessageId: 274D73D47B2829F-0000000000000000 on ExchangeId: 274D73D47B2829F-0000000000000000). Exhausted after delivery attempt: 1 caught: java.net.ConnectException: Connection refused
Message History (source location and message history is disabled)
---------------------------------------------------------------------------------------------------------------------------------------
Source ID Processor Elapsed (ms)
route1/route1 from[kafka://dbz?autoOffsetReset=earliest&brokers= 272
...
route1/to1 elasticsearch-rest://elasticsearch?hostAddresses=e 0
Stacktrace
---------------------------------------------------------------------------------------------------------------------------------------: java.net.ConnectException: Connection refused
at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:918)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:299)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:287)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1632)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1602)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1572)
at org.elasticsearch.client.RestHighLevelClient.index(RestHighLevelClient.java:989)
at org.apache.camel.component.elasticsearch.ElasticsearchProducer.process(ElasticsearchProducer.java:170)
at org.apache.camel.support.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:66)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:172)
at org.apache.camel.processor.errorhandler.RedeliveryErrorHandler$SimpleTask.run(RedeliveryErrorHandler.java:471)
at org.apache.camel.impl.engine.DefaultReactiveExecutor$Worker.schedule(DefaultReactiveExecutor.java:193)
at org.apache.camel.impl.engine.DefaultReactiveExecutor.scheduleMain(DefaultReactiveExecutor.java:64)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:184)
at org.apache.camel.impl.engine.CamelInternalProcessor.process(CamelInternalProcessor.java:399)
at org.apache.camel.impl.engine.DefaultAsyncProcessorAwaitManager.process(DefaultAsyncProcessorAwaitManager.java:83)
at org.apache.camel.support.AsyncProcessorSupport.process(AsyncProcessorSupport.java:41)
at org.apache.camel.component.kafka.consumer.support.KafkaRecordProcessor.processExchange(KafkaRecordProcessor.java:109)
at org.apache.camel.component.kafka.consumer.support.KafkaRecordProcessorFacade.processRecord(KafkaRecordProcessorFacade.java:120)
at org.apache.camel.component.kafka.consumer.support.KafkaRecordProcessorFacade.processPolledRecords(KafkaRecordProcessorFacade.java:80)
at org.apache.camel.component.kafka.KafkaFetchRecords.startPolling(KafkaFetchRecords.java:280)
at org.apache.camel.component.kafka.KafkaFetchRecords.run(KafkaFetchRecords.java:181)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:174)
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:148)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:351)
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:221)
at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64)
... 1 more
I've tried reaching ES from another pod and it works just fine.
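To narrow down whether the refusal comes from the Service itself or from the Camel endpoint, one option is a tiny standalone check with the low-level Elasticsearch RestClient, run from the same network namespace as the route; this is only a sketch, reusing the Service name from the route above:
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class EsProbe {

    public static void main(String[] args) throws Exception {
        // Same host:port the Camel elasticsearch-rest endpoint points at.
        try (RestClient client = RestClient.builder(
                new HttpHost("elasticsearch-rlam-service", 9200, "http")).build()) {
            Response response = client.performRequest(new Request("GET", "/"));
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}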

Mongodb Atlas "Got Socket exception on Connection To Cluster"

I'm using Java & Spring Boot with MongoDB Atlas and created a database that handles CRUD for many objects.
When I do the POST for uploading an image, I get this error: Got Socket exception on Connection [connectionId{localValue:4, serverValue:114406}] to cluster0-shard-00-02.1c6kg.mongodb.net:27017
However, when I call the other objects' CRUD operations, they work totally fine, and I don't know why this exception is raised. BTW, all CRUD operations for all objects work well on localhost when not connecting to MongoDB Atlas, which means my ImageDAO should be fine; I just used mongoTemplate.insert(Image).
I searched online, and people said it might be the Atlas IP whitelist, so I opened my cluster to any IP address.
I also set my timeout and socket configuration like this in my .properties file:
spring.data.mongodb.uri=mongodb+srv://username:password@cluster0.1c6kg.mongodb.net/database?retryWrites=true&w=majority&keepAlive=true&pooSize=30&autoReconnect=true&socketTimeoutMS=361000000&connectTimeoutMS=3600000
It still doesn't work. I think the problem is definitely related to the socket timeout, but I don't know where else I can configure it.
The Error log is here:
2020-11-01 12:25:34.275 WARN 20242 --- [nio-8088-exec-1] org.mongodb.driver.connection : Got socket exception on connection [connectionId{localValue:4, serverValue:114406}] to cluster0-shard-00-02.1c6kg.mongodb.net:27017. All connections to cluster0-shard-00-02.1c6kg.mongodb.net:27017 will be closed.
2020-11-01 12:25:34.283 INFO 20242 --- [nio-8088-exec-1] org.mongodb.driver.connection : Closed connection [connectionId{localValue:4, serverValue:114406}] to cluster0-shard-00-02.1c6kg.mongodb.net:27017 because there was a socket exception raised by this connection.
2020-11-01 12:25:34.295 INFO 20242 --- [nio-8088-exec-1] org.mongodb.driver.cluster : No server chosen by WritableServerSelector from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=cluster0-shard-00-00.1c6kg.mongodb.net:27017, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=46076648, setName='atlas-d9ovwb-shard-0', canonicalAddress=cluster0-shard-00-00.1c6kg.mongodb.net:27017, hosts=[cluster0-shard-00-00.1c6kg.mongodb.net:27017, cluster0-shard-00-01.1c6kg.mongodb.net:27017, cluster0-shard-00-02.1c6kg.mongodb.net:27017], passives=[], arbiters=[], primary='cluster0-shard-00-02.1c6kg.mongodb.net:27017', tagSet=TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_WEST_2'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=1, lastWriteDate=Sun Nov 01 12:25:29 PST 2020, lastUpdateTimeNanos=104428017411386}, ServerDescription{address=cluster0-shard-00-02.1c6kg.mongodb.net:27017, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=cluster0-shard-00-01.1c6kg.mongodb.net:27017, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=41202444, setName='atlas-d9ovwb-shard-0', canonicalAddress=cluster0-shard-00-01.1c6kg.mongodb.net:27017, hosts=[cluster0-shard-00-00.1c6kg.mongodb.net:27017, cluster0-shard-00-01.1c6kg.mongodb.net:27017, cluster0-shard-00-02.1c6kg.mongodb.net:27017], passives=[], arbiters=[], primary='cluster0-shard-00-02.1c6kg.mongodb.net:27017', tagSet=TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_WEST_2'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=1, lastWriteDate=Sun Nov 01 12:25:29 PST 2020, lastUpdateTimeNanos=104428010234368}]}. Waiting for 30000 ms before timing out
2020-11-01 12:25:34.316 INFO 20242 --- [ngodb.net:27017] org.mongodb.driver.cluster : Discovered replica set primary cluster0-shard-00-02.1c6kg.mongodb.net:27017
2020-11-01 12:25:34.612 INFO 20242 --- [nio-8088-exec-1] org.mongodb.driver.connection : Opened connection [connectionId{localValue:5, serverValue:108547}] to cluster0-shard-00-02.1c6kg.mongodb.net:27017
2020-11-01 12:25:34.838 WARN 20242 --- [nio-8088-exec-1] org.mongodb.driver.connection : Got socket exception on connection [connectionId{localValue:5, serverValue:108547}] to cluster0-shard-00-02.1c6kg.mongodb.net:27017. All connections to cluster0-shard-00-02.1c6kg.mongodb.net:27017 will be closed.
2020-11-01 12:25:34.838 INFO 20242 --- [nio-8088-exec-1] org.mongodb.driver.connection : Closed connection [connectionId{localValue:5, serverValue:108547}] to cluster0-shard-00-02.1c6kg.mongodb.net:27017 because there was a socket exception raised by this connection.
2020-11-01 12:25:34.876 INFO 20242 --- [ngodb.net:27017] org.mongodb.driver.cluster : Discovered replica set primary cluster0-shard-00-02.1c6kg.mongodb.net:27017
2020-11-01 12:25:34.878 ERROR 20242 --- [nio-8088-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.data.mongodb.UncategorizedMongoDbException: Exception sending message; nested exception is com.mongodb.MongoSocketWriteException: Exception sending message] with root cause
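If it really is a socket timeout, a hedged alternative to piling options onto the URI (assuming a Spring Boot version based on the 4.x MongoDB driver, whose auto-configuration applies MongoClientSettingsBuilderCustomizer beans) is to set the socket settings programmatically; the values below are illustrative only:
import java.util.concurrent.TimeUnit;

import org.springframework.boot.autoconfigure.mongo.MongoClientSettingsBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MongoSocketConfig {

    // Applied by Spring Boot when it builds the MongoClientSettings for the Atlas URI.
    @Bean
    public MongoClientSettingsBuilderCustomizer socketTimeoutCustomizer() {
        return builder -> builder.applyToSocketSettings(socket -> socket
                .connectTimeout(30, TimeUnit.SECONDS)
                .readTimeout(120, TimeUnit.SECONDS));
    }
}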

Neo4J connection pool is closing after application starts

I'm running my application and Neo4j 4 in a Docker Compose environment. After starting my application, I get some weird logs saying the connection pool is closing the connection to the DB (Closing connection pool towards graphdb(172.21.0.4):7687), and after this neo4jClient is unable to query the DB (logs below). What is the reason for this behaviour?
BTW, I created a Spring health check (driver.verifyConnectivity()), but it always returns OK (no error is thrown).
Any ideas?
@Configuration
class Neo4jConfiguration {

    private val logger = LoggerFactory.getLogger(Neo4jConfiguration::class.java)

    @Bean
    fun neo4jDriver(
        @Value("\${spring.data.neo4j.host}") host: String?,
        @Value("\${spring.data.neo4j.port}") port: Int?): Driver {
        val connectionUrl = "neo4j://$host:$port"
        logger.info("Connecting to Neo4j on `$connectionUrl`")
        return GraphDatabase.driver(connectionUrl/*, AuthTokens.basic("neo4j", "secret")*/)
    }

    @Bean
    fun neo4jClient(): ReactiveNeo4jClient = ReactiveNeo4jClient.create(neo4jDriver(null, null))

    @Bean
    fun neo4jTransactionManager() = ReactiveNeo4jTransactionManager(neo4jDriver(null, null))
}
Docker compose:
version: '3.7'
services:
  graphdb:
    image: neo4j:4.0.0
    ports:
      - 7474:7474
      - 7687:7687
    environment:
      NEO4J_AUTH: none
      NEO4J_dbms_connectors_default__listen__address: 0.0.0.0
    volumes:
      - ./docker/neo4j/data:/data
    networks:
      - things
networks:
  things:
    name: things
Full logs:
2020-02-24 20:57:32.922 INFO 1 --- [ restartedMain] c.t.r.repo.neo4j.Neo4jConfiguration : Connecting to Neo4j on `neo4j://graphdb:7687`
2020-02-24 20:57:33.321 INFO 1 --- [ restartedMain] Driver : Routing driver instance 656417291 created for server address graphdb:7687
2020-02-24 20:57:43.329 INFO 1 --- [o4jDriverIO-2-3] LoadBalancer : Routing table for database 'system' is stale. Ttl 1582577863326, currentTime 1582577863328, routers AddressSet=[], writers AddressSet=[], readers AddressSet=[], database 'system'
2020-02-24 20:57:43.437 INFO 1 --- [o4jDriverIO-2-2] ConnectionPool : Closing connection pool towards graphdb(172.21.0.4):7687, it has no active connections and is not in the routing table registry.
2020-02-24 20:57:43.440 INFO 1 --- [o4jDriverIO-2-2] LoadBalancer : Updated routing table for database 'system'. Ttl 1582578163422, currentTime 1582577863439, routers AddressSet=[0.0.0.0:7687], writers AddressSet=[0.0.0.0:7687], readers AddressSet=[0.0.0.0:7687], database 'system'
2020-02-24 20:58:02.694 INFO 1 --- [ault-executor-1] LoadBalancer : Routing table for database '<default database>' is stale. Ttl 1582577882693, currentTime 1582577882694, routers AddressSet=[], writers AddressSet=[], readers AddressSet=[], database '<default database>'
2020-02-24 20:58:02.777 INFO 1 --- [o4jDriverIO-2-2] ConnectionPool : Closing connection pool towards graphdb(172.21.0.4):7687, it has no active connections and is not in the routing table registry.
2020-02-24 20:58:02.777 INFO 1 --- [o4jDriverIO-2-2] LoadBalancer : Updated routing table for database '<default database>'. Ttl 1582578182776, currentTime 1582577882777, routers AddressSet=[0.0.0.0:7687], writers AddressSet=[0.0.0.0:7687], readers AddressSet=[0.0.0.0:7687], database '<default database>'
2020-02-24 20:58:02.803 WARN 1 --- [o4jDriverIO-2-2] LoadBalancer : Failed to obtain a connection towards address 0.0.0.0:7687
org.neo4j.driver.exceptions.SessionExpiredException: Server at 0.0.0.0:7687 is no longer available
at org.neo4j.driver.internal.cluster.loadbalancing.LoadBalancer.lambda$acquire$9(LoadBalancer.java:204) ~[neo4j-java-driver-4.0.0.jar:4.0.0-d03d93ede8ad65657eeb90ed890757203ecfaa7a]
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(Unknown Source) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.uniWhenCompleteStage(Unknown Source) ~[na:na]
[... removed ...]
Caused by: org.neo4j.driver.exceptions.ServiceUnavailableException: Unable to connect to 0.0.0.0:7687, ensure the database is running and that there is a working network connection to it.
at org.neo4j.driver.internal.async.connection.ChannelConnectedListener.databaseUnavailableError(ChannelConnectedListener.java:76) ~[neo4j-java-driver-4.0.0.jar:4.0.0-d03d93ede8ad65657eeb90ed890757203ecfaa7a]
at org.neo4j.driver.internal.async.connection.ChannelConnectedListener.operationComplete(ChannelConnectedListener.java:70) ~[neo4j-java-driver-4.0.0.jar:4.0.0-d03d93ede8ad65657eeb90ed890757203ecfaa7a]
at org.neo4j.driver.internal.async.connection.ChannelConnectedListener.operationComplete(ChannelConnectedListener.java:37) ~[neo4j-java-driver-4.0.0.jar:4.0.0-d03d93ede8ad65657eeb90ed890757203ecfaa7a]
[... removed ...]
... 7 common frames omitted
Caused by: org.neo4j.driver.internal.shaded.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /0.0.0.0:7687
Caused by: java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:na]
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source) ~[na:na]
[... removed ...]
I have just come across the same problem. After a little experimenting, it turns out that the connection URL should be "bolt://$host:$port" and not "neo4j://$host:$port".
It seems like some of the spring.io tutorials are out of date.
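For a single Neo4j instance in Docker (no cluster), a minimal sketch of the direct-connection variant, written in plain Java for illustration (the Kotlin config above changes the same way, only the scheme differs):
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;

public class BoltConnectionCheck {

    public static void main(String[] args) {
        // bolt:// connects directly to the single instance and skips the routing-table
        // lookup that produced the 0.0.0.0:7687 entries in the logs above.
        try (Driver driver = GraphDatabase.driver("bolt://graphdb:7687", AuthTokens.none())) {
            driver.verifyConnectivity();
            System.out.println("Connected");
        }
    }
}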

No server chosen by WritableServerSelector from cluster

As a newbie to MongoDB, I'm trying to insert a simple document into my freshly installed MongoDB (v3.2.4). MongoDB driver 3.2.2 is used.
Here is my minimized code:
public <classname>()
{
    public static final String COLLECTION_NAME = "logs";

    MongoClient mongoClient = new MongoClient("<server-adress>");
    MongoDatabase db = mongoClient.getDatabase("test");

    Document data = new Document();
    data.append(<whatever>);
    //...inserting more into document...
    db.getCollection(COLLECTION_NAME).insertOne(data); // collection should be created new
    mongoClient.close();
}
The program executes, but I'm getting the following errors (and information) during execution:
Mär 16, 2016 3:50:06 PM com.mongodb.diagnostics.logging.JULLogger log
INFORMATION: Cluster created with settings {hosts=[<server-adress>:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
Mär 16, 2016 3:50:06 PM com.mongodb.diagnostics.logging.JULLogger log
INFORMATION: No server chosen by WritableServerSelector from cluster description ClusterDescription{type=UNKNOWN, connectionMode=SINGLE, all=[ServerDescription{address=<server-adress>:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
Mär 16, 2016 3:50:07 PM com.mongodb.diagnostics.logging.JULLogger log
INFORMATION: Exception in monitor thread while connecting to server <server-adress>:27017
com.mongodb.MongoSocketOpenException: Exception opening socket
at com.mongodb.connection.SocketStream.open(SocketStream.java:63)
at com.mongodb.connection.InternalStreamConnection.open(InternalStreamConnection.java:114)
at com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:128)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.ConnectException: Connection refused: connect
(...)
at com.mongodb.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:50)
at com.mongodb.connection.SocketStream.open(SocketStream.java:58)
... 3 more
Error: Timed out after 30000 ms while waiting for a server that matches WritableServerSelector. Client view of cluster state is {type=UNKNOWN, servers=[{address=<server-adress>:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused: connect}}]
The document is also not inserted.
When I comment out the line with the insertOne() call, the error disappears. So I assume it isn't a problem with the connection, is that right?
I have some ideas:
Does it have something to do with asynchronicity? I'd be surprised; nowhere did I say that I don't want to execute it synchronously.
Are any permissions missing (because of this WritableServerSelector thing)?
Does it have to do with the "test" database?
Are there problems with MongoDB's standalone mode? (I don't want to use replica sets.)
But for all these ideas I couldn't find real proof...
(Please improve the question title, I had no better idea...)
Solved it on my own.
I should also have mentioned that I'm trying to connect from another machine to the MongoDB instance. That is not allowed with the MongoDB defaults.
So I had to change the value of bindIp in the /etc/mongod.conf file from 127.0.0.1 to 0.0.0.0.
[Source]
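For reference, the relevant part of /etc/mongod.conf (YAML format) then looks roughly like this; the surrounding keys depend on your installation:
# /etc/mongod.conf (excerpt)
net:
  port: 27017
  bindIp: 0.0.0.0   # listen on all interfaces instead of only 127.0.0.1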

Java/Scala remote HDFS usage

I'm trying to connect to a remote HDFS cluster. I've read some documentation and getting-started guides, but didn't find a good solution for how to do that.
Situation: I have HDFS on xxx-something.com. I can connect to it via SSH and everything works.
But what I'm trying to do is get files from it to my local machine.
What I've done:
I've created core-site.xml in my conf folder (I'm building a Play! application). There I've changed the fs.default.name setting to hdfs://xxx-something.com:8020 (not sure about the port).
Then I'm trying to launch a simple test:
val conf = new Configuration()
conf.addResource(new Path("conf/core-site.xml"))
val fs = FileSystem.get(conf)
val status = fs.listStatus(new Path("/data/"))
And I'm getting errors:
13:56:09.012 [specs2.DefaultExecutionStrategy1] WARN org.apache.hadoop.conf.Configuration - conf/core-site.xml:a attempt to override final parameter: fs.trash.interval; Ignoring.
13:56:09.012 [specs2.DefaultExecutionStrategy1] WARN org.apache.hadoop.conf.Configuration - conf/core-site.xml:a attempt to override final parameter: hadoop.tmp.dir; Ignoring.
13:56:09.013 [specs2.DefaultExecutionStrategy1] WARN org.apache.hadoop.conf.Configuration - conf/core-site.xml:a attempt to override final parameter: fs.checkpoint.dir; Ignoring.
13:56:09.022 [specs2.DefaultExecutionStrategy1] DEBUG org.apache.hadoop.fs.FileSystem - Creating filesystem for hdfs://xxx-something.com:8021
13:56:09.059 [specs2.DefaultExecutionStrategy1] DEBUG org.apache.hadoop.conf.Configuration - java.io.IOException: config()
at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:226)
at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:213)
at org.apache.hadoop.security.SecurityUtil.<clinit>(SecurityUtil.java:53)
at org.apache.hadoop.net.NetUtils.<clinit>(NetUtils.java:62)
Thanks in advance!
Update:
The port was probably wrong. I have now set it to 22; I'm still getting the same errors, but after 3 attempts it does say:
14:01:01.877 [specs2.DefaultExecutionStrategy1] DEBUG org.apache.hadoop.ipc.Client - Connecting to xxx-something.com/someIp:22
14:01:02.187 [specs2.DefaultExecutionStrategy1] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to xxx-something.com/someIp:22 from britva sending #0
14:01:02.188 [IPC Client (47) connection to xxx-something.com/someIp:22 from britva] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to xxx-something.com/someIp:22 from britva: starting, having connections 1
14:01:02.422 [IPC Client (47) connection to xxx-something.com/someIp:22 from britva] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to xxx-something.com/someIp:22 from britva got value #1397966893
And afterwards:
Call to xxx-something.com/someIp:22 failed on local exception: java.io.EOFException
java.io.IOException: Call to xxx-something.com/someIp:22 failed on local exception: java.io.EOFException
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1103)
at org.apache.hadoop.ipc.Client.call(Client.java:1071)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:118)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:222)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:187)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:65)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1346)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:122)
at HdfsSpec$$anonfun$1$$anonfun$apply$3.apply(HdfsSpec.scala:33)
at HdfsSpec$$anonfun$1$$anonfun$apply$3.apply(HdfsSpec.scala:17)
at testingSupport.specs2.MyNotifierRunner$$anon$2$$anon$1.executeBody(MyNotifierRunner.scala:16)
at testingSupport.specs2.MyNotifierRunner$$anon$2$$anon$1.execute(MyNotifierRunner.scala:16)
Caused by: java.io.EOFException
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:807)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:745)
What does it mean?
You'll need to find the fs.default.name property in $HADOOP_HOME/conf/core-site.xml on the server running the NameNode (the HDFS master) to get the correct port. It might be 8020, or it could be something else; that's what you should use. (Port 22 is SSH, not the NameNode RPC port, which is most likely why the call ends in an EOFException.) Make sure there's no firewall between you and the server that blocks connections on that port.
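A minimal sketch of that advice in Java (the Scala test above is equivalent), pointing the client at the NameNode explicitly and assuming 8020 turns out to be the RPC port from the cluster's core-site.xml:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RemoteHdfsList {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Use the value of fs.default.name from the NameNode's core-site.xml;
        // 8020 is only the common default, and port 22 is SSH, not HDFS.
        conf.set("fs.default.name", "hdfs://xxx-something.com:8020");

        FileSystem fs = FileSystem.get(conf);
        for (FileStatus status : fs.listStatus(new Path("/data/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}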
