I am developing a Java application that saves result data to HDFS. The application should run on my Windows machine.
We are using Kerberos authentication, and we placed a keytab file on a NAS drive. We also saved the Hadoop config files on the same NAS drive.
My issue is that when I load the Hadoop config files from the NAS drive, it throws an authentication error, but my application runs fine if I load the config files from my local file system (I also saved the config files under C:\Hadoop).
Below is my working code snippet (keytab file on NAS, Hadoop config files on the local file system):
static String KeyTabPath = "\\\\path\\2\\keytabfile\\name.keytab";
Configuration config = new Configuration();
config.set("fs.defaultFS", "hdfs://xxx.xx.xx.com:8020");
config.addResource(new Path("C:\\Hadoop\\core-site.xml"));
config.addResource(new Path("C:\\Hadoop\\hdfs-site.xml"));
config.addResource(new Path("C:\\Hadoop\\mapred-site.xml"));
config.addResource(new Path("C:\\Hadoop\\yarn-site.xml"));
config.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
config.set("fs.file.impl",org.apache.hadoop.fs.LocalFileSystem.class.getName());
// Kerberos Authentication
config.set("hadoop.security.authentication", "Kerberos");
UserGroupInformation.setConfiguration(config);
UserGroupInformation.loginUserFromKeytab("name#xx.xx.COM", KeyTabPath);
I also tried loading the config files from the NAS drive, but I get a Kerberos authentication error.
Below is the code snippet that throws the error (keytab file on NAS, and Hadoop config files also on NAS):
static String KeyTabPath = "\\\\path\\2\\keytabfile\\name.keytab";
Configuration config = new Configuration();
config.set("fs.defaultFS", "hdfs://xxx.xx.xx.com:8020");
config.addResource(new Path("\\\\NASDrive\\core-site.xml"));
config.addResource(new Path("\\\\NASDrive\\hdfs-site.xml"));
config.addResource(new Path("\\\\NASDrive\\mapred-site.xml"));
config.addResource(new Path("\\\\NASDrive\\yarn-site.xml"));
config.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
config.set("fs.file.impl",org.apache.hadoop.fs.LocalFileSystem.class.getName());
// Kerberos Authentication
config.set("hadoop.security.authentication", "Kerberos");
UserGroupInformation.setConfiguration(config);
UserGroupInformation.loginUserFromKeytab("name#xx.xx.COM", KeyTabPath);
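As a sanity check, one can dump the configuration right after the addResource calls; Configuration.toString() lists the resources it has actually read, and hadoop.security.auth_to_local comes from core-site.xml (a diagnostic sketch, not part of the original code):
// Diagnostic sketch: if the NAS-hosted XML files were not read, the
// resource list will not include them and auth_to_local will be null.
System.out.println("loaded resources: " + config.toString());
System.out.println("auth_to_local: " + config.get("hadoop.security.auth_to_local"));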
Below is the error message:
java.io.IOException: Login failure for name#XX.XX.COM from keytab \\NASdrive\name.keytab: javax.security.auth.login.LoginException: java.lang.IllegalArgumentException: Illegal principal name name#XX.XX.COM: org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to name#XX.XX.COM
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:962)
at Appname.ldapLookupLoop(Appname.java:111)
at Appname.main(Appname.java:70)
Caused by: javax.security.auth.login.LoginException: java.lang.IllegalArgumentException: Illegal principal name name#XX.XX.COM: org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to name#XX.XX.COM
at org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:199)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at javax.security.auth.login.LoginContext.invoke(Unknown Source)
at javax.security.auth.login.LoginContext.access$000(Unknown Source)
at javax.security.auth.login.LoginContext$4.run(Unknown Source)
at javax.security.auth.login.LoginContext$4.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(Unknown Source)
at javax.security.auth.login.LoginContext.login(Unknown Source)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:953)
... 2 more
Caused by: java.lang.IllegalArgumentException: Illegal principal name name#XX.XX.COM: org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to name#XX.XX.COM
at org.apache.hadoop.security.User.<init>(User.java:51)
at org.apache.hadoop.security.User.<init>(User.java:43)
at org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:197)
... 14 more
Caused by: org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to name#XX.XX.COM
at org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:389)
at org.apache.hadoop.security.User.<init>(User.java:48)
... 16 more
Jul 06, 2016 4:29:14 PM com.XX.it.logging.JdkMapper info
INFO: IO Exception occured: java.io.IOException: Login failure for name#XX.XX.COM from keytab \\NASdrive\name.keytab: javax.security.auth.login.LoginException: java.lang.IllegalArgumentException: Illegal principal name name#XX.XX.COM: org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to name#XX.XX.COM
So the issue seems to be loading the config files. My application reads the keytab file fine from the NAS drive, but not the Hadoop config files. What could be the issue? I checked all the NAS drive permissions and file permissions, and everything is fine. I don't know where the issue is; please, can anyone help me find it?
You're missing the "DEFAULT" rule for auth_to_local Kerberos principal-name transformation:
org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to
See example here -
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html#Mapping_from_Kerberos_principals_to_OS_user_accounts
So basically, just add the word "DEFAULT" at the very end of hadoop.security.auth_to_local in your core-site.xml.
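For example, the relevant core-site.xml property could look like this (a sketch; any existing RULE: lines stay as they are, with DEFAULT appended last):
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    ...your existing RULE: lines, if any...
    DEFAULT
  </value>
</property>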
Also check auth_to_local in the Kerberos documentation.
PS. Here's where this exception happens in the Hadoop codebase, in case you're interested in digging deeper into this subject.
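For a quick standalone check of how the rules resolve a principal, here is a minimal sketch using Hadoop's own KerberosName class (the principal and realm below are placeholders; DEFAULT only matches principals in the cluster's default realm):
import org.apache.hadoop.security.authentication.util.KerberosName;

public class ShortNameCheck {
    public static void main(String[] args) throws Exception {
        // With a DEFAULT rule present, a principal in the default realm maps
        // to its first component; with no matching rule, getShortName()
        // throws the NoMatchingRule error shown in the question.
        KerberosName.setRules("DEFAULT");
        KerberosName kn = new KerberosName("name@XX.XX.COM");
        System.out.println(kn.getShortName()); // "name", if XX.XX.COM is the default realm
    }
}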
Related
I have a Docker container with Spring Boot 3.0.0 application running in the Kubernetes Deployment.
I have received a number of hardening rules that are required to be implemented, such as:
do not mount emptyDir volumes to pods
define runAsNonRoot in securityContext of pods
pods must declare that their filesystem is mounted read-only, using readOnlyRootFilesystem: true in the securityContext
My application stores data in the database and does not need any write permissions to the filesystem, so I have tried to define securityContext in my Deployment using:
securityContext:
  runAsNonRoot: true
  readOnlyRootFilesystem: true
However, running the Deployment failed, and it seems that Spring Boot is trying to create a temporary directory, which of course is not allowed:
[2023-02-01 15:12:31.762] ERROR [main] [org.springframework.boot.SpringApplication - 820]: Application run failed
org.springframework.context.ApplicationContextException: Unable to start web server
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.onRefresh(ServletWebServerApplicationContext.java:164)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:578)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:146)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:730)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:432)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:308)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1302)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1291)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.base/java.lang.reflect.Method.invoke(Unknown Source)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:95)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:58)
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:65)
Caused by: org.springframework.boot.web.server.WebServerException: Unable to create tempDir. java.io.tmpdir is set to /tmp
at org.springframework.boot.web.server.AbstractConfigurableWebServerFactory.createTempDir(AbstractConfigurableWebServerFactory.java:208)
at org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory.getWebServer(TomcatServletWebServerFactory.java:194)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.createWebServer(ServletWebServerApplicationContext.java:183)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.onRefresh(ServletWebServerApplicationContext.java:161)
... 16 common frames omitted
Caused by: java.nio.file.FileSystemException: /tmp/tomcat.8080.8311954706877871713: Read-only file system
at java.base/sun.nio.fs.UnixException.translateToIOException(Unknown Source)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(Unknown Source)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(Unknown Source)
at java.base/sun.nio.fs.UnixFileSystemProvider.createDirectory(Unknown Source)
at java.base/java.nio.file.Files.createDirectory(Unknown Source)
at java.base/java.nio.file.TempFileHelper.create(Unknown Source)
at java.base/java.nio.file.TempFileHelper.createTempDirectory(Unknown Source)
at java.base/java.nio.file.Files.createTempDirectory(Unknown Source)
at org.springframework.boot.web.server.AbstractConfigurableWebServerFactory.createTempDir(AbstractConfigurableWebServerFactory.java:202)
... 19 common frames omitted
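The trace bottoms out in Files.createTempDirectory under java.io.tmpdir; a minimal sketch of just that step (the class name is illustrative) reproduces the failure inside a read-only container:
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class TempDirProbe {
    public static void main(String[] args) throws Exception {
        // The same operation Tomcat's factory performs at startup: create a
        // per-server directory under java.io.tmpdir. With
        // readOnlyRootFilesystem: true and nothing writable mounted at /tmp,
        // this throws java.nio.file.FileSystemException: Read-only file system.
        Path tmp = Paths.get(System.getProperty("java.io.tmpdir"));
        Path dir = Files.createTempDirectory(tmp, "tomcat.");
        System.out.println("created " + dir);
    }
}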
I tried to find some relevant information on the web, and this issue suggests that read-only containers are not supported in Spring Boot: https://github.com/spring-projects/spring-boot/issues/8578.
There are some proposed solutions that I cannot apply, for example using an emptyDir mount or a PVC. It does not make sense to me to mount something just because Spring Boot needs to create a temporary directory.
Does anyone have experience with this, and can you suggest a good way to create a Deployment with a Spring Boot pod that works with readOnlyRootFilesystem: true?
I'm a total novice with graph databases and I'm giving OrientDB 2.2.34 a go. I'm using a Windows 10 machine with Java 10.0.1 (JRE and JDK). When I run the server.bat file I get the following errors, and I don't know where to start to solve them:
Can't load log handler "java.util.logging.FileHandler"
java.nio.file.AccessDeniedException: ..\log\orient-server.log.0.lck
java.nio.file.AccessDeniedException: ..\log\orient-server.log.0.lck
at java.base/sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
at java.base/sun.nio.fs.WindowsFileSystemProvider.newFileChannel(Unknown Source)
at java.base/java.nio.channels.FileChannel.open(Unknown Source)
at java.base/java.nio.channels.FileChannel.open(Unknown Source)
at java.logging/java.util.logging.FileHandler.openFiles(Unknown Source)
at java.logging/java.util.logging.FileHandler.<init>(Unknown Source)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.base/java.lang.reflect.Constructor.newInstance(Unknown Source)
at java.base/java.lang.Class.newInstance(Unknown Source)
at java.logging/java.util.logging.LogManager.createLoggerHandlers(Unknown Source)
at java.logging/java.util.logging.LogManager.access$1000(Unknown Source)
at java.logging/java.util.logging.LogManager$4.run(Unknown Source)
at java.logging/java.util.logging.LogManager$4.run(Unknown Source)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.logging/java.util.logging.LogManager.loadLoggerHandlers(Unknown Source)
at java.logging/java.util.logging.LogManager.initializeGlobalHandlers(Unknown Source)
at java.logging/java.util.logging.LogManager.access$1800(Unknown Source)
at java.logging/java.util.logging.LogManager$RootLogger.accessCheckedHandlers(Unknown Source)
at java.logging/java.util.logging.Logger.getHandlers(Unknown Source)
at com.orientechnologies.common.log.OLogManager.installCustomFormatter(OLogManager.java:84)
at com.orientechnologies.orient.server.OServer.<init>(OServer.java:135)
at com.orientechnologies.orient.server.OServer.<init>(OServer.java:118)
at com.orientechnologies.orient.server.OServerMain.create(OServerMain.java:28)
at com.orientechnologies.orient.server.OServerMain$1.run(OServerMain.java:47)
2018-05-01 21:47:35:110 INFO Loading configuration from: C:/Program Files/Orientdb-2.2.34/config/orientdb-server-config.xml...
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.sun.xml.bind.v2.runtime.reflect.opt.Injector$1 (file:/C:/Program%20Files/Orientdb-2.2.34/lib/jaxb-impl-2.2.3.jar) to method java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int)
WARNING: Please consider reporting this to the maintainers of com.sun.xml.bind.v2.runtime.reflect.opt.Injector$1
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2018-05-01 21:47:35:371 INFO OrientDB Server v2.2.34 (build f340442755a31eabc91b87cb3ef99eda5cee6ebd, branch 2.2.x) is starting up...
2018-05-01 21:47:35:377 INFO Databases directory: C:\Program Files\Orientdb-2.2.34\databases
2018-05-01 21:47:35:413 INFO Configuration of usage of soft references inside of containers of results of SQL execution
2018-05-01 21:47:35:426 INFO Initial and maximum values of heap memory usage are equal, containers of results of SQL executors will use soft references by default
2018-05-01 21:47:35:427 INFO Auto configuration of disk cache size.
2018-05-01 21:47:35:483 INFO 8449830912 B/8058 MB/7 GB of physical memory were detected on machine
2018-05-01 21:47:35:483 INFO Detected memory limit for current process is 8449830912 B/8058 MB/7 GB
2018-05-01 21:47:35:486 INFO OrientDB auto-config DISKCACHE=3,962MB (heap=2,048MB direct=524,288MB os=8,058MB)
2018-05-01 21:47:35:599 INFO {db=OSystem} Creating the system database 'OSystem' for current server
Exception 1E7ECDE6 in storage plocal:C:/Program Files/Orientdb-2.2.34/databases/OSystem: 2.2.34 (build f340442755a31eabc91b87cb3ef99eda5cee6ebd, branch 2.2.x)
com.orientechnologies.orient.core.exception.OStorageException: Cannot create folders in storage with path C:/Program Files/Orientdb-2.2.34/databases/OSystem
DB name="OSystem"
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.create(OLocalPaginatedStorage.java:127)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.create(ODatabaseDocumentTx.java:438)
at com.orientechnologies.orient.server.OSystemDatabase.init(OSystemDatabase.java:160)
at com.orientechnologies.orient.server.OSystemDatabase.<init>(OSystemDatabase.java:44)
at com.orientechnologies.orient.server.OServer.initSystemDatabase(OServer.java:1309)
at com.orientechnologies.orient.server.OServer.activate(OServer.java:367)
at com.orientechnologies.orient.server.OServerMain$1.run(OServerMain.java:48)
Error during server execution
com.orientechnologies.orient.core.exception.ODatabaseException: Cannot create database 'OSystem'
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.create(ODatabaseDocumentTx.java:506)
at com.orientechnologies.orient.server.OSystemDatabase.init(OSystemDatabase.java:160)
at com.orientechnologies.orient.server.OSystemDatabase.<init>(OSystemDatabase.java:44)
at com.orientechnologies.orient.server.OServer.initSystemDatabase(OServer.java:1309)
at com.orientechnologies.orient.server.OServer.activate(OServer.java:367)
at com.orientechnologies.orient.server.OServerMain$1.run(OServerMain.java:48)
Caused by: com.orientechnologies.orient.core.exception.OStorageException: Cannot create folders in storage with path C:/Program Files/Orientdb-2.2.34/databases/OSystem
DB name="OSystem"
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.create(OLocalPaginatedStorage.java:127)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.create(ODatabaseDocumentTx.java:438)
... 5 more
Looks like you need to edit java.util.logging.FileHandler.pattern in orientdb-server-log.properties to an absolute path instead of a relative one,
e.g. C:\Program Files\Orientdb-2.2.34\log
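For instance, the relevant line could look like this (a sketch only; the file lives under the config directory and the exact path below is illustrative):
java.util.logging.FileHandler.pattern = C:/Program Files/Orientdb-2.2.34/log/orient-server.log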
If you are running this on Windows with server.bat, then that .bat requires its working directory to be set there. You can edit orientdb-server-log.properties, and that's a correct answer.
Alternatively, you can just change the working directory to where bin is and start the server from there, without needing to change the config file; that's what I opted to do.
I have a small orientStart.ps1 file that I can run from anywhere that looks like this.
The ORIENTDB_HOME environment variable is always required.
Push-Location
$env:ORIENTDB_HOME="C:\orientdb-3.1.1"
Set-Location $env:ORIENTDB_HOME\bin
$SERVER = "server.bat"
cmd /c $SERVER
Pop-Location
Start OrientDB's server.bat from within a Windows CLI that has been started with Admin privileges.
I'm having trouble connecting to my Google Cloud SQL database instance from Eclipse.
I managed to access the database via the mysql command line, but not in Eclipse.
My code:
String instanceConnectionName = "****";
String databaseName = "****";
String username = "***";
String password = "***";
String jdbcUrl = String.format(
    "jdbc:mysql://google/%s?cloudSqlInstance=%s&"
        + "socketFactory=com.google.cloud.sql.mysql.SocketFactory",
    databaseName,
    instanceConnectionName);
Connection connection = DriverManager.getConnection(jdbcUrl, username, password);
The error I get:
jdbc:mysql://google/****?cloudSqlInstance=****e&socketFactory=com.google.cloud.sql.mysql.SocketFactory
May 03, 2017 10:53:07 AM com.google.cloud.sql.mysql.SocketFactory connect
INFO: Connecting to Cloud SQL instance [****].
May 03, 2017 10:53:07 AM com.google.cloud.sql.mysql.SslSocketFactory getInstance
INFO: First Cloud SQL connection, generating RSA key pair.
Exception in thread "main" com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
at com.mysql.jdbc.Util.getInstance(Util.java:387)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:917)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:896)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:885)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:860)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2332)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2085)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:795)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:44)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:400)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:327)
at java.sql.DriverManager.getConnection(Unknown Source)
at java.sql.DriverManager.getConnection(Unknown Source)
at neviTracker.program.ProgramConnections.main(ProgramConnections.java:27)
Caused by: java.lang.RuntimeException: Unable to obtain credentials to communicate with the Cloud SQL API
at com.google.cloud.sql.mysql.SslSocketFactory$ApplicationDefaultCredentialFactory.create(SslSocketFactory.java:545)
at com.google.cloud.sql.mysql.SslSocketFactory.getInstance(SslSocketFactory.java:138)
at com.google.cloud.sql.mysql.SocketFactory.connect(SocketFactory.java:47)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:298)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2253)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2286)
... 13 more
Caused by: java.io.IOException: The Application Default Credentials are not available. They are available if running on Google App Engine, Google Compute Engine, or Google Cloud Shell. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
at com.google.api.client.googleapis.auth.oauth2.DefaultCredentialProvider.getDefaultCredential(DefaultCredentialProvider.java:98)
at com.google.api.client.googleapis.auth.oauth2.GoogleCredential.getApplicationDefault(GoogleCredential.java:213)
at com.google.api.client.googleapis.auth.oauth2.GoogleCredential.getApplicationDefault(GoogleCredential.java:191)
at com.google.cloud.sql.mysql.SslSocketFactory$ApplicationDefaultCredentialFactory.create(SslSocketFactory.java:543)
... 18 more
You have an error message that says the environment variable GOOGLE_APPLICATION_CREDENTIALS is not set. Please create a .credentials file in a root folder and add your credential details to it.
Please refer to this link:
https://developers.google.com/identity/protocols/application-default-credentials
The Application Default Credentials are not available. They are available if running on Google App Engine, Google Compute Engine, or Google Cloud Shell. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
You need credentials to access Google Cloud SQL. You can create them from here.
After creating your credential, create your key file and point to it in your .bash_profile (read the block quote above for more info); then it should work.
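To verify that the credentials are visible to the JVM before going through JDBC, here is a minimal sketch (it uses the google-api-client classes the socket factory already depends on, per the stack trace, and fails with the same IOException as above when GOOGLE_APPLICATION_CREDENTIALS is unset):
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;

public class AdcCheck {
    public static void main(String[] args) throws Exception {
        // Resolves Application Default Credentials the same way
        // SslSocketFactory does in the trace above.
        GoogleCredential credential = GoogleCredential.getApplicationDefault();
        // getServiceAccountId() may be null for non-service-account credentials.
        System.out.println("ADC resolved; service account: " + credential.getServiceAccountId());
    }
}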
Alternatively, you can use the Cloud SQL Proxy so you don't have to work with environment variables; follow the steps in this link, and if something is unclear, please ask.
https://cloud.google.com/sql/docs/mysql/connect-admin-proxy
Hope this helps.
I would like to see the following code make a directory in my "/tmp" via HDFS.
I can, for instance, run
hadoop fs -mkdir hdfs://localhost:9000/tmp/newdir
and succeed.
jps shows that the NameNode and DataNode are running.
Hadoop version 0.20.1+169.89.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    conf.set("fs.default.name", "hdfs://localhost:9000");
    FileSystem fs = FileSystem.get(conf);
    fs.mkdirs(new Path("hdfs://localhost:9000/tmp/alex"));
}
I get the following error:
Exception in thread "main" java.io.IOException: Failed on local exception: java.io.EOFException; Host Details : local host is: "<my-machine-name>/192.168.2.6"; destination host is: "localhost":9000;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
at org.apache.hadoop.ipc.Client.call(Client.java:1351)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy9.mkdirs(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy9.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:467)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2394)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2365)
at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:817)
at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:813)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:813)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:806)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1933)
at com.twitter.amplify.core.dao.AccessHdfs.main(AccessHdfs.java:39)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:375)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:995)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:891)
You have a version mismatch - your question notes a NameNode running version 0.20.1+169.89 (which I think is from the Cloudera CDH2 distro - http://archive.cloudera.com/cdh/2/), while in IntelliJ you are using Apache Hadoop version 2.2.0.
Update your IntelliJ classpath to use the jars compatible with your cluster version - namely:
hadoop-0.20.1+169.89-core.jar
I had the same version of Hadoop (hadoop-2.2.0) installed on my master and slave nodes, but I was still getting the same exception. To get rid of it I followed the steps below (see the sketch after the list):
1. From $HADOOP_HOME, execute sbin/stop-all.sh to stop the cluster.
2. Delete the data directory from every problematic node. If you don't know where the data directory is, open core-site.xml, find the value of hadoop.tmp.dir, go to that directory, then cd dfs; there you will find a directory named data. Delete that data directory on every problematic DataNode.
3. Format the NameNode on the master node.
4. From $HADOOP_HOME, execute sbin/start-all.sh to start the cluster.
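In shell terms the sequence looks roughly like this (a sketch; the data-directory path is illustrative and comes from hadoop.tmp.dir in your core-site.xml):
$HADOOP_HOME/sbin/stop-all.sh                 # 1. stop the cluster
rm -rf /path/from/hadoop.tmp.dir/dfs/data     # 2. on each problematic DataNode
$HADOOP_HOME/bin/hdfs namenode -format        # 3. format the NameNode (master)
$HADOOP_HOME/sbin/start-all.sh                # 4. start the cluster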
I deployed my Play Framework 2.1 application on AppFog (the same issue also exists on Cloud Foundry, since they are essentially the same). My application allows file uploads, but I have encountered an error getting the file. It works fine when I run locally with play run.
My code is as follows:
MultipartFormData body = request().body().asMultipartFormData();
FilePart midi = body.getFile("midi");
File file = midi.getFile();
The error message in the log is:
play.api.Application$$anon$1: Execution exception[[IOException: No such file or directory]]
at play.api.Application$class.handleError(Application.scala:289) ~[play.play_2.10-2.1.0.jar:2.1.0]
at play.api.DefaultApplication.handleError(Application.scala:383) [play.play_2.10-2.1.0.jar:2.1.0]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$handleAction$1$4$$anonfun$apply$28.apply(PlayDefaultUpstreamHandler.scala:391) [play.play_2.10-2.1.0.jar:2.1.0]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$handleAction$1$4$$anonfun$apply$28.apply(PlayDefaultUpstreamHandler.scala:391) [play.play_2.10-2.1.0.jar:2.1.0]
at scala.Option.map(Option.scala:145) [org.scala-lang.scala-library-2.10.0.jar:na]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$handleAction$1$4.apply(PlayDefaultUpstreamHandler.scala:391) [play.play_2.10-2.1.0.jar:2.1.0]
Caused by: java.io.IOException: No such file or directory
at java.io.UnixFileSystem.createFileExclusively(Native Method) ~[na:1.7.0_11]
at java.io.File.createTempFile(Unknown Source) ~[na:1.7.0_11]
at java.io.File.createTempFile(Unknown Source) ~[na:1.7.0_11]
at play.api.libs.Files$TemporaryFile$.apply(Files.scala:60) ~[play.play_2.10-2.1.0.jar:2.1.0]
at play.api.mvc.BodyParsers$parse$Multipart$$anonfun$handleFilePartAsTemporaryFile$1.apply(ContentTypes.scala:624) ~[play.play_2.10-2.1.0.jar:2.1.0]
at play.api.mvc.BodyParsers$parse$Multipart$$anonfun$handleFilePartAsTemporaryFile$1.apply(ContentTypes.scala:622) ~[play.play_2.10-2.1.0.jar:2.1.0]
It looks like it's having a problem creating a temporary file from the multipart form. Is there a good way to solve this? My application is hosted on AWS, configured through AppFog.