HBase: Zookeeper tells remote client to connect to localhost (Java)

Super new to HBase/Hadoop here. I got a two-node HBase test cluster up and running, and I'm now trying to connect to that cluster from a remote Java client. Here's where I'm stuck: the client successfully connects to the single-server Zookeeper quorum (running on the same server as the HBase master), but the address passed back to the client by Zookeeper is localhost, and (obviously) the client fails to connect to anything because HBase isn't running locally. Considering that I can't edit the client-side hosts file for administrative reasons (and in any case I'm not inclined to since that seems like an awful hack), is there a way to get Zookeeper to return a proper IP for the HBase master server?
Java code:
public static final String MASTER_IP = "10.3.248.105";
public static final String ZOOKEEPER_PORT = "2181";

Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", MASTER_IP);
config.set("hbase.zookeeper.property.clientPort", ZOOKEEPER_PORT);

System.out.println("Running connection test...");
try {
    HBaseAdmin.checkHBaseAvailable(config);
    System.out.println("HBase found!");
    HTable table = new HTable(config, "testTable");
    System.out.println("Table testTable obtained!");
} catch (MasterNotRunningException e) {
    System.out.println("HBase connection failed!");
    e.printStackTrace();
} catch (ZooKeeperConnectionException e) {
    System.out.println("Zookeeper connection failed!");
    e.printStackTrace();
} catch (Exception e) {
    e.printStackTrace();
}
Error dump:
13/06/27 11:20:25 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=10.3.248.105:2181 sessionTimeout=180000 watcher=hconnection
13/06/27 11:20:25 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 5896#HQNJVCVM0004
13/06/27 11:20:29 INFO zookeeper.ClientCnxn: Opening socket connection to server 10.3.248.105/10.3.248.105:2181. Will not attempt to authenticate using SASL (unknown error)
13/06/27 11:20:29 INFO zookeeper.ClientCnxn: Socket connection established to 10.3.248.105/10.3.248.105:2181, initiating session
13/06/27 11:20:29 INFO zookeeper.ClientCnxn: Session establishment complete on server 10.3.248.105/10.3.248.105:2181, sessionid = 0x13f8638485c0003, negotiated timeout = 180000
13/06/27 11:20:30 INFO client.HConnectionManager$HConnectionImplementation: getMaster attempt 0 of 1 failed; no more retrying.
java.net.UnknownHostException: unknown host: localhost.localdomain
HBase connection failed!
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.<init>(HBaseClient.java:276)
at org.apache.hadoop.hbase.ipc.HBaseClient.createConnection(HBaseClient.java:255)
at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1111)
at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
at com.sun.proxy.$Proxy5.getProtocolVersion(Unknown Source)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:138)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:712)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:126)
at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:1781)
at hbaseimagestore.HBaseImageStore.main(HBaseImageStore.java:44)
13/06/27 11:20:30 INFO client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x13f8638485c0003
13/06/27 11:20:30 INFO zookeeper.ZooKeeper: Session: 0x13f8638485c0003 closed
org.apache.hadoop.hbase.MasterNotRunningException: Retried 1 times
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:138)
at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:1781)
at hbaseimagestore.HBaseImageStore.main(HBaseImageStore.java:44)
13/06/27 11:20:30 INFO zookeeper.ClientCnxn: EventThread shut down
Edit: also, the /etc/hosts file on the master/zookeeper server:
10.3.248.105 master
10.3.248.106 slave
127.0.0.1 localhost

I don't know if this is the best way to do it, but it should do the trick. Change the master's hosts file to:
10.3.248.105 master localhost
10.3.248.106 slave
#127.0.0.1 localhost
Restart HBase after this change.
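For what it's worth, here is a quick way to check which hostname the master actually registered in ZooKeeper (before and after the change). This is a minimal sketch assuming the default znode layout (zookeeper.znode.parent is /hbase); the znode data has a few framing bytes, but the registered hostname is readable in the output.

import org.apache.zookeeper.ZooKeeper;

// Connect straight to the ZooKeeper quorum and dump the master znode.
// "/hbase/master" is the default location (zookeeper.znode.parent + "/master").
ZooKeeper zk = new ZooKeeper("10.3.248.105:2181", 180000, event -> { });
byte[] data = zk.getData("/hbase/master", false, null);
System.out.println(new String(data)); // the registered master hostname should appear here
zk.close();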

Related

Proper way to handle long running cleanup logic in response to Netty channelInactive event?

I have a Netty application that keeps track of connected web socket connections. When a web socket disconnects, I need to perform some logic that cleans up the stored data on that connection. I'm trying to be a good Netty citizen and not block the event loop with this cleanup logic, so I'm running everything in a CompletableFuture. The logic looks vaguely like this:
@Override
public void channelInactive(ChannelHandlerContext ctx) {
    String connectionId = ctx.channel().id().asLongText();
    LOGGER.info("Web socket disconnected: {}.", connectionId);
    connectionStore
        .deleteConnection(connectionId)
        .thenCompose(
            (connection) -> {
                LOGGER.info("Deleted connection {}, publishing event.", connectionId);
                return messageBusPublisher.publish(new DisconnectEvent(connectionId));
            })
        .thenAccept((v) -> LOGGER.info("Done cleaning up connection {}.", connectionId));
}
This results in log statements like this:
2020-02-10 17:47:23,091 [nioEventLoopGroup-6-2] INFO c.e.WebSocketFrameHandler - Web socket disconnected: acde48fffe001122-000080ec-00000004-d86b4b1c219b503a-d8239aa2.
2020-02-10 17:47:23,091 [nioEventLoopGroup-6-2] INFO c.e.store.ConnectionStore - Attempting to delete connection for acde48fffe001122-000080ec-00000004-d86b4b1c219b503a-d8239aa2
2020-02-10 17:47:23,092 [nioEventLoopGroup-6-1] INFO c.e.WebSocketFrameHandler - Web socket disconnected: acde48fffe001122-000080ec-00000003-60f2c3402164ad79-a7e03a19.
2020-02-10 17:47:23,092 [nioEventLoopGroup-6-1] INFO c.e.store.ConnectionStore - Attempting to delete connection for acde48fffe001122-000080ec-00000003-60f2c3402164ad79-a7e03a19
2020-02-10 17:47:23,093 [nioEventLoopGroup-6-3] INFO c.e.WebSocketFrameHandler - Web socket disconnected: acde48fffe001122-000080ec-00000005-6962838a219b52e9-e6ceac8b.
2020-02-10 17:47:23,093 [nioEventLoopGroup-6-3] INFO c.e.store.ConnectionStore - Attempting to delete connection for acde48fffe001122-000080ec-00000005-6962838a219b52e9-e6ceac8b
2020-02-10 17:47:23,093 [nioEventLoopGroup-6-5] INFO c.e.WebSocketFrameHandler - Web socket disconnected: acde48fffe001122-000080ec-00000007-ccfcc4a5219b57eb-d78fc5f5.
2020-02-10 17:47:23,093 [nioEventLoopGroup-6-4] INFO c.e.WebSocketFrameHandler - Web socket disconnected: acde48fffe001122-000080ec-00000006-6c933ab1219b554f-2b3cf759.
2020-02-10 17:47:23,093 [nioEventLoopGroup-6-5] INFO c.e.store.ConnectionStore - Attempting to delete connection for acde48fffe001122-000080ec-00000007-ccfcc4a5219b57eb-d78fc5f5
2020-02-10 17:47:23,093 [nioEventLoopGroup-6-4] INFO c.e.store.ConnectionStore - Attempting to delete connection for acde48fffe001122-000080ec-00000006-6c933ab1219b554f-2b3cf759
2020-02-10 17:47:23,097 [lettuce-kqueueEventLoop-4-1] INFO c.e.store.ConnectionStore - Successfully deleted connection for acde48fffe001122-000080ec-00000004-d86b4b1c219b503a-d8239aa2
2020-02-10 17:47:23,102 [lettuce-kqueueEventLoop-4-1] INFO c.e.WebSocketFrameHandler - Deleted connection acde48fffe001122-000080ec-00000004-d86b4b1c219b503a-d8239aa2, publishing event.
2020-02-10 17:47:23,127 [lettuce-kqueueEventLoop-4-1] INFO c.e.service.MessageBusPublisher - Attempting to publish message for acde48fffe001122-000080ec-00000004-d86b4b1c219b503a-d8239aa2
2020-02-10 17:47:23,866 [sdk-async-response-0-0] INFO c.e.service.MessageBusPublisher - Successfully published message for acde48fffe001122-000080ec-00000004-d86b4b1c219b503a-d8239aa2
2020-02-10 17:47:23,866 [sdk-async-response-0-0] INFO c.e.WebSocketFrameHandler - Done cleaning up connection acde48fffe001122-000080ec-00000004-d86b4b1c219b503a-d8239aa2
The problem I'm running into is that when multiple disconnects happen in short order, it seems that the full cleanup logic is only run for the first one or two connections, and beyond that, nothing after the initial call is processed. This means the connections are properly deleted from the store, but no follow up actions are processed beyond that. My assumption is that this is because the CompletableFuture that represents the full chain of calls is not bound to anything, so when the first calls in the chain complete, the callbacks are not executed.
I've tried various things like using ctx.channel().eventLoop().execute(...) to run the clean up code (and theoretically bind the execution to the channel's event loop), but I see the same results.
My question then is: what is the appropriate way to fire off I/O bound tasks in response to Netty channel events?
As discussed above in the comments, this had nothing to do with how I was handling the long-running work; it was instead the lack of error handling on my futures.
Remember to always catch exceptions from your futures, kids!
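For completeness, here is a minimal sketch of what that error handling could look like in the handler from the question (same connectionStore and messageBusPublisher as above): terminating the chain with exceptionally() (or whenComplete()) surfaces failures that a dangling CompletableFuture would otherwise swallow silently.

@Override
public void channelInactive(ChannelHandlerContext ctx) {
    String connectionId = ctx.channel().id().asLongText();
    LOGGER.info("Web socket disconnected: {}.", connectionId);
    connectionStore
        .deleteConnection(connectionId)
        .thenCompose(connection -> {
            LOGGER.info("Deleted connection {}, publishing event.", connectionId);
            return messageBusPublisher.publish(new DisconnectEvent(connectionId));
        })
        .thenAccept(v -> LOGGER.info("Done cleaning up connection {}.", connectionId))
        // Without a terminal handler, any exception thrown earlier in the chain is
        // captured by the returned CompletableFuture and never logged anywhere.
        .exceptionally(t -> {
            LOGGER.error("Failed to clean up connection {}.", connectionId, t);
            return null;
        });
}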

Spring Boot Mongo timeout while connecting with the Mongo driver

While working with spring-boot-starter-data-mongodb, I always get a timeout exception. The log detail is as follows:
Could anybody tell me why I always get a timeout? Thanks so much.
2019-04-01 19:08:50.255 INFO 8336 --- [168.0.101:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server 192.168.0.101:27017
com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message
at com.mongodb.connection.InternalStreamConnection.translateReadException(InternalStreamConnection.java:530)
at com.mongodb.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:421)
2019-04-01 19:09:15.163 DEBUG 8336 --- [nio-8888-exec-1] o.s.b.w.s.f.OrderedRequestContextFilter : Cleared thread-bound request context: org.apache.catalina.connector.RequestFacade#4ce3ddaf
2019-04-01 19:09:15.165 ERROR 8336 --- [nio-8888-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.dao.DataAccessResourceFailureException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=192.168.0.101:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message}, caused by {java.net.SocketTimeoutException: Read timed out}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=192.168.0.101:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message}, caused by {java.net.SocketTimeoutException: Read timed out}}]] with root cause
com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=192.168.0.101:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message}, caused by {java.net.SocketTimeoutException: Read timed out}}]
at com.mongodb.connection.BaseCluster.getDescription(BaseCluster.java:167)
at com.mongodb.Mongo.getConnectedClusterDescription(Mongo.java:885)
at com.mongodb.Mongo.createClientSession(Mongo.java:877)
at com.mongodb.Mongo$3.getClientSession(Mongo.java:866)
My Spring Boot version is 2.0.8.RELEASE, and here is the content of my application.yml:
spring:
  data:
    mongodb:
      host: 192.168.0.101
      port: 27017
      username: test
      password: test
      database: test
server:
  port: 8888
management:
  health:
    mongo:
      enabled: false
you can try this:
<dependency>
    <groupId>com.spring4all</groupId>
    <artifactId>mongodb-plus-spring-boot-starter</artifactId>
    <version>1.0.0.RELEASE</version>
</dependency>
@EnableMongoPlus
@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
Then you have a bunch more configuration properties to play around with :)
spring.data.mongodb.option.min-connection-per-host=0
spring.data.mongodb.option.max-connection-per-host=100
spring.data.mongodb.option.threads-allowed-to-block-for-connection-multiplier=5
spring.data.mongodb.option.server-selection-timeout=30000
spring.data.mongodb.option.max-wait-time=120000
spring.data.mongodb.option.max-connection-idle-time=0
spring.data.mongodb.option.max-connection-life-time=0
spring.data.mongodb.option.connect-timeout=10000
spring.data.mongodb.option.socket-timeout=0
spring.data.mongodb.option.socket-keep-alive=false
spring.data.mongodb.option.ssl-enabled=false
spring.data.mongodb.option.ssl-invalid-host-name-allowed=false
spring.data.mongodb.option.always-use-m-beans=false
spring.data.mongodb.option.heartbeat-socket-timeout=20000
spring.data.mongodb.option.heartbeat-connect-timeout=20000
spring.data.mongodb.option.min-heartbeat-frequency=500
spring.data.mongodb.option.heartbeat-frequency=10000
spring.data.mongodb.option.local-threshold=15
I have not tried it yet... but maybe it's worth a try.
Or look in the repo to see how to do it without the dependency in your project ;)
It is not a final solution, but you can try a longer timeout.
# The time to wait to establish a connection before timing out, in seconds.
# (default: 10)
connect_timeout: 99
If it connects successfully after changing the timeout, you should find out why it is taking so long to establish a connection and try to fix it.
If it does not connect even after setting a very long timeout, you should check your proxy and try to ping the machine where MongoDB is running.
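If you would rather not add another starter, here is a minimal sketch of the same idea with plain Spring Boot 2.0.x (assuming the synchronous 3.x Mongo driver pulled in by spring-boot-starter-data-mongodb): expose a MongoClientOptions bean with longer timeouts and let the auto-configuration pick it up. The values below are illustrative.

import com.mongodb.MongoClientOptions;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MongoTimeoutConfig {

    @Bean
    public MongoClientOptions mongoClientOptions() {
        return MongoClientOptions.builder()
                .connectTimeout(30_000)          // ms to establish the TCP connection
                .socketTimeout(60_000)           // ms to wait for a socket read
                .serverSelectionTimeout(60_000)  // ms to wait for a reachable server
                .build();
    }
}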

SolrJ hanging when connecting to zookeeper

I have a local two-instance SolrCloud setup with a single ZooKeeper instance. I am trying to connect via SolrJ to execute a query, but my code hangs for two minutes or so when executing the query and then fails. I have followed the basic example on the Solr wiki. The logs and code are below.
2016-07-24 13:29:01.932 INFO 83666 --- [qtp699221219-28] org.apache.zookeeper.ZooKeeper : Initiating client connection, connectString=localhost:2181 sessionTimeout=10000 watcher=org.apache.solr.common.cloud.SolrZkClient$3#496eab9
2016-07-24 13:29:01.948 INFO 83666 --- [qtp699221219-28] o.a.solr.common.cloud.ConnectionManager : Waiting for client to connect to ZooKeeper
2016-07-24 13:29:01.953 INFO 83666 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2016-07-24 13:29:01.955 INFO 83666 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Socket connection established to localhost/127.0.0.1:2181, initiating session
2016-07-24 13:29:01.967 INFO 83666 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1561cdd875e0004, negotiated timeout = 10000
2016-07-24 13:29:01.972 INFO 83666 --- [back-3-thread-1] o.a.solr.common.cloud.ConnectionManager : Watcher org.apache.solr.common.cloud.ConnectionManager#4bb95d56 name:ZooKeeperConnection Watcher:localhost:2181 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
2016-07-24 13:29:01.972 INFO 83666 --- [qtp699221219-28] o.a.solr.common.cloud.ConnectionManager : Client is connected to ZooKeeper
2016-07-24 13:29:01.973 INFO 83666 --- [qtp699221219-28] o.apache.solr.common.cloud.SolrZkClient : Using default ZkACLProvider
2016-07-24 13:29:01.974 INFO 83666 --- [qtp699221219-28] o.a.solr.common.cloud.ZkStateReader : Updating cluster state from ZooKeeper...
2016-07-24 13:29:01.990 INFO 83666 --- [qtp699221219-28] o.a.solr.common.cloud.ZkStateReader : Loaded empty cluster properties
2016-07-24 13:29:01.995 INFO 83666 --- [qtp699221219-28] o.a.solr.common.cloud.ZkStateReader : Updated live nodes from ZooKeeper... (0) -> (2)
2016-07-24 13:31:24.653 ERROR 83666 --- [qtp699221219-28] o.a.s.client.solrj.impl.CloudSolrClient : Request to collection foo failed due to (0) java.net.ConnectException: Operation timed out, retry? 0
and my code is:
String zkHostString = "localhost:2181";
CloudSolrClient solr = new CloudSolrClient.Builder().withZkHost(zkHostString).build();
solr.setDefaultCollection("foo");

SolrQuery query = new SolrQuery();
query.set("q", "*:*");

QueryResponse response = null;
try {
    response = solr.query(query);
} catch (SolrServerException e) {
    return null;
}
// Do something with the results...
Urgh, I'm an idiot: the ZooKeeper and Solr instances are inside Docker, but the code posted above is not. So ZooKeeper reported back the Solr URLs using the Docker containers' IPs... The host needs to connect via localhost, not the Docker container IPs.
Eg: Zookeeper responds [http://172.17.0.5:8983/solr/foo_shard1_replica2, http://172.17.0.6:8984/solr/foo_shard1_replica1]
but my code needs to call [http://localhost:8983/solr/foo_shard1_replica2, http://localhost:8984/solr/foo_shard1_replica1]
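For anyone hitting the same thing, a minimal workaround sketch (assuming SolrJ 6.x and that Solr's port 8983 is published to the host): query one node directly over HTTP on the host-mapped port instead of letting ZooKeeper hand back the container IPs. The cleaner long-term fix is to run the client on the same Docker network, or to have Solr advertise a host-reachable address.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

// Talk to a single Solr node through the port published on the host,
// bypassing the container IPs that ZooKeeper advertises.
HttpSolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/foo").build();

SolrQuery query = new SolrQuery("*:*");
QueryResponse response = solr.query(query);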

ActiveMQ timeout at connection

I have the following problem:
I try to connect to an ActiveMQ broker (which is now down) using the following piece of code
connectionFactory = new ActiveMQConnectionFactory(this.url + "?timeout=2000");
connection = connectionFactory.createConnection();
connection.start();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
LOGGER.info("Connected to " + this.url);
The problem is that the timeout does not have any effect; connection.start() blocks forever.
I inspected ActiveMQ log and found the following info:
2013-12-20 01:49:03,149 DEBUG [ActiveMQ Task-1] (FailoverTransport.java:786) - urlList connectionList:[tcp://localhost:61616?timeout=2000], from: [tcp://localhost:61616?timeout=2000]
2013-12-20 01:49:03,149 DEBUG [ActiveMQ Task-1] (FailoverTransport.java:1040) - Connect fail to: tcp://localhost:61616?timeout=2000, reason: java.lang.IllegalArgumentException: Invalid connect parameters: {timeout=2000}
The timeout parameter is specified here http://activemq.apache.org/cms/configuring.html
Has anybody any idea how to pass timeout argument to ActiveMQConnectionFactory?
Or how to set a timeout for connection.start() ?
Thank you!
Update: I found this on Stack Overflow: ActiveMQ - CreateSession failover timeout after a connection is resumed. I tried it, but the following exception is thrown:
javax.jms.JMSException: Could not create Transport. Reason: java.lang.IllegalArgumentException: Invalid connect parameters: {transport.timeout=5000}
at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:35)
I use ActiveMQ 5.8.0 from the Maven repo.
It appears that your URL is still invalid in both cases when attempting to set the timeout property.
If you're trying to use a failover URL, which it looks like you are since it is getting into the failover code, then you're probably looking for initialReconnectDelay (and possibly maxReconnectAttempts, which causes an exception to be thrown if the server is still down after that number of attempts).
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("failover://(tcp://localhost:61616)?initialReconnectDelay=2000&maxReconnectAttempts=2");
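A minimal usage sketch of that factory (parameter values are illustrative, and LOGGER is reused from the question): with maxReconnectAttempts set, the connection attempt fails with a JMSException once the attempts are exhausted, instead of blocking forever in connection.start().

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(
        "failover://(tcp://localhost:61616)?initialReconnectDelay=2000&maxReconnectAttempts=2");
try {
    Connection connection = connectionFactory.createConnection();
    connection.start();
    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    LOGGER.info("Connected to broker");
} catch (JMSException e) {
    // Thrown once the reconnect attempts are exhausted, rather than hanging forever.
    LOGGER.error("Could not connect to broker", e);
}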

Java/Scala remote HDFS usage

I'm trying to connect to a remote HDFS cluster. I've read some documentation and getting-started guides, but didn't find a good solution for how to do that.
Situation: I have HDFS on xxx-something.com. I can connect to it via SSH and everything works.
But what I'm trying to do, get the files from it to my local machine.
What I've done:
I've created core-site.xml in my conf folder (I'm creating a Play! application). There I've changed the fs.default.name config to hdfs://xxx-something.com:8020 (not sure about the port).
Then I'm trying to launch a simple test:
val conf = new Configuration()
conf.addResource(new Path("conf/core-site.xml"))
val fs = FileSystem.get(conf)
val status = fs.listStatus(new Path("/data/"))
And I'm getting errors:
13:56:09.012 [specs2.DefaultExecutionStrategy1] WARN org.apache.hadoop.conf.Configuration - conf/core-site.xml:a attempt to override final parameter: fs.trash.interval; Ignoring.
13:56:09.012 [specs2.DefaultExecutionStrategy1] WARN org.apache.hadoop.conf.Configuration - conf/core-site.xml:a attempt to override final parameter: hadoop.tmp.dir; Ignoring.
13:56:09.013 [specs2.DefaultExecutionStrategy1] WARN org.apache.hadoop.conf.Configuration - conf/core-site.xml:a attempt to override final parameter: fs.checkpoint.dir; Ignoring.
13:56:09.022 [specs2.DefaultExecutionStrategy1] DEBUG org.apache.hadoop.fs.FileSystem - Creating filesystem for hdfs://xxx-something.com:8021
13:56:09.059 [specs2.DefaultExecutionStrategy1] DEBUG org.apache.hadoop.conf.Configuration - java.io.IOException: config()
at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:226)
at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:213)
at org.apache.hadoop.security.SecurityUtil.<clinit>(SecurityUtil.java:53)
at org.apache.hadoop.net.NetUtils.<clinit>(NetUtils.java:62)
Thanks in advance!
Update:
probably the port was wrong. Now I've set it to 22; I'm still getting the same errors, but after 3 tries it does say:
14:01:01.877 [specs2.DefaultExecutionStrategy1] DEBUG org.apache.hadoop.ipc.Client - Connecting to xxx-something.com/someIp:22
14:01:02.187 [specs2.DefaultExecutionStrategy1] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to xxx-something.com/someIp:22 from britva sending #0
14:01:02.188 [IPC Client (47) connection to xxx-something.com/someIp:22 from britva] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to xxx-something.com/someIp:22 from britva: starting, having connections 1
14:01:02.422 [IPC Client (47) connection to xxx-something.com/someIp:22 from britva] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to xxx-something.com/someIp:22 from britva got value #1397966893
And afterwards:
Call to xxx-something.com/someIp:22 failed on local exception: java.io.EOFException
java.io.IOException: Call to xxx-something.com/someIp:22 failed on local exception: java.io.EOFException
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1103)
at org.apache.hadoop.ipc.Client.call(Client.java:1071)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:118)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:222)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:187)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:65)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1346)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:122)
at HdfsSpec$$anonfun$1$$anonfun$apply$3.apply(HdfsSpec.scala:33)
at HdfsSpec$$anonfun$1$$anonfun$apply$3.apply(HdfsSpec.scala:17)
at testingSupport.specs2.MyNotifierRunner$$anon$2$$anon$1.executeBody(MyNotifierRunner.scala:16)
at testingSupport.specs2.MyNotifierRunner$$anon$2$$anon$1.execute(MyNotifierRunner.scala:16)
Caused by: java.io.EOFException
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:807)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:745)
What does it mean?
You'll need to find the fs.default.name property in the $HADOOP_HOME/conf/core-site.xml on the server running the Name Node (HDFS master) to get the correct port. It might be 8020, or it could be something else. That's what you should use. Make sure there's no firewall between you and the server that disallows connections on the port.
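As a concrete sketch of that (the host and port are placeholders, shown in Java rather than Scala, and assuming the old Hadoop 1.x API the question already uses): set fs.default.name on the Configuration to the NameNode's RPC address taken from the server's core-site.xml, then list the directory. Port 22 is the SSH port, not the HDFS RPC port, which would explain the EOFException in the update.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
// Use the exact fs.default.name value from the NameNode's core-site.xml,
// e.g. hdfs://xxx-something.com:8020 (8020 here is only a guess).
conf.set("fs.default.name", "hdfs://xxx-something.com:8020");

FileSystem fs = FileSystem.get(conf);
for (FileStatus status : fs.listStatus(new Path("/data/"))) {
    System.out.println(status.getPath());
}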
