I just ran an HDFS demo like this:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class HDFSRemoveDemo {
    public static void main(String[] args) throws Exception {
        Path root = new Path("hdfs://localhost:49000/");
        FileSystem fs = root.getFileSystem(new Configuration());
        fs.create(new Path("/tmp/test"));
        fs.delete(new Path("/tmp/test"), false);
        fs.close();
    }
}
A puzzling exception was thrown:
org.apache.hadoop.hdfs.DFSClient closeAllFilesBeingWritten
SEVERE: Failed to close file /tmp/test
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /tmp/test File does not exist. Holder DFSClient_NONMAPREDUCE_-1727094995_1 does not have any open files
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1999)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1990)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2045)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2033)
at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:805)
at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
at org.apache.hadoop.ipc.Client.call(Client.java:1113)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy1.complete(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
at com.sun.proxy.$Proxy1.complete(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:4121)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:4022)
at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:417)
at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:433)
at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:369)
When I removed fs.close(), it worked fine.
The environment is:
hadoop-core -- 1.2.1
jdk -- 1.6.0_21
What happens when the filesystem is closed?
Has anyone encountered this problem?
Generally, you should not call fs.close() on a FileSystem that you obtained via FileSystem.get(...).
FileSystem.get(...) doesn't actually open a "new" FileSystem object; it returns a cached instance that is shared within the JVM. When you call close() on that FileSystem, you close it for any other code using the same instance as well.
For example, if you close the FileSystem inside a mapper, your MapReduce driver will fail when it tries to close the same FileSystem again during cleanup.
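Here is a minimal sketch of the original demo reworked along those lines: close the stream returned by create() before deleting the file, and leave the shared, cached FileSystem open instead of calling fs.close() on it.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class HDFSRemoveDemoFixed {
    public static void main(String[] args) throws Exception {
        FileSystem fs = new Path("hdfs://localhost:49000/")
                .getFileSystem(new Configuration());
        FSDataOutputStream out = fs.create(new Path("/tmp/test"));
        out.close();                              // finish writing the file before deleting it
        fs.delete(new Path("/tmp/test"), false);
        // No fs.close(): this instance comes from the shared FileSystem cache,
        // so closing it would also close it for any other user in this JVM.
    }
}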
I have a clone of a repository on my computer already. I wish to create a Java program that pulls when it opens and pushes when it closes.
This is how I initiate JGit:
auth = new UsernamePasswordCredentialsProvider("[username]", "[pass]");
git = Git.open(new File(path_to_git + "\\.git"));
git.checkout().setName("master").call();
git.pull().setCredentialsProvider(auth).call();
This works the first time I start up, but if I close and reopen the program I get the following error:
Exception in thread "JavaFX Application Thread" Exception in thread "main" java.lang.ExceptionInInitializerError
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Unknown Source)
at com.sun.javafx.application.LauncherImpl.lambda$launchApplicationWithArgs$2(LauncherImpl.java:352)
at com.sun.javafx.application.PlatformImpl.lambda$runAndWait$7(PlatformImpl.java:326)
at com.sun.javafx.application.PlatformImpl.lambda$null$5(PlatformImpl.java:295)
at java.security.AccessController.doPrivileged(Native Method)
at com.sun.javafx.application.PlatformImpl.lambda$runLater$6(PlatformImpl.java:294)
at com.sun.glass.ui.InvokeLaterDispatcher$Future.run(InvokeLaterDispatcher.java:95)
at com.sun.glass.ui.win.WinApplication._runLoop(Native Method)
at com.sun.glass.ui.win.WinApplication.lambda$null$4(WinApplication.java:186)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.UnsupportedOperationException: Cannot check out from unborn branch
at org.eclipse.jgit.api.CheckoutCommand.call(CheckoutCommand.java:235)
at com.company.app.Main.<clinit>(Main.java:59)
... 11 more
When the program closes, I call the function pushToGit(), which is defined as:
public static void pushToGit() {
    try {
        git.remoteAdd()
           .setName("origin")
           .setUri(new URIish("https://github.com/[username]/[repo]"))
           .call();
        git.commit()
           .setMessage("from database application")
           .setAuthor(new PersonIdent("[name]", "[email]"))
           .call();
        git.push().setCredentialsProvider(auth).call();
    } catch (GitAPIException | URISyntaxException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
At the moment, every time I want to run the program I have to delete the local repo and re-clone it.
You can try the following example code snippet:
Git git = Git.cloneRepository()
    .setURI( "https://github.com/someRepoName" )
    .setDirectory( new File( "/path/to/repo" ) )
    .setBranch( "refs/heads/master" )
    .call();
You can refer to the link below for more details:
https://www.codeaffine.com/2015/11/30/jgit-clone-repository/
In newer versions of the JGit API you can use CloneCommand; see the link below:
https://www.programcreek.com/java-api-examples/?api=org.eclipse.jgit.api.CloneCommand
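A hedged sketch that combines the two suggestions, reusing the existing clone when it is present and cloning only when it is not; the local path, remote URL, and helper name below are placeholders:

import java.io.File;
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.transport.UsernamePasswordCredentialsProvider;

public final class RepoBootstrap {
    // Open the repository at localPath if it already exists, otherwise clone it.
    public static Git openOrClone(String localPath, String remoteUrl,
                                  UsernamePasswordCredentialsProvider auth) throws Exception {
        File gitDir = new File(localPath, ".git");
        if (gitDir.exists()) {
            Git git = Git.open(gitDir);
            git.pull().setCredentialsProvider(auth).call();   // refresh on startup
            return git;
        }
        return Git.cloneRepository()
                .setURI(remoteUrl)
                .setDirectory(new File(localPath))
                .setBranch("refs/heads/master")
                .setCredentialsProvider(auth)
                .call();
    }
}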
I'm trying to load data into Pig and dump it to the console. It worked without any errors in the Cloudera sandbox using the following commands.
raw_data = LOAD 'hdfs:/user/cloudera/sampledata' USING PigStorage(',') AS (
custno:chararray,
firstname:chararray,
lastname:chararray,
age:int,
profession:chararray
);
dump raw_data;
It dumps all the data in the sampledata file.
Trying to do the same on a MapR cluster with the following commands:
raw_data = LOAD '/hdfspath/input' USING PigStorage(',') AS (
custno:chararray,
firstname:chararray,
lastname:chararray,
age:int,
profession:chararray
);
dump raw_data;
I am getting the following error:
(RemoteException): org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.IllegalArgumentException: There is no queue named default
ERROR org.apache.hadoop.ipc.RPC - FailoverProxy: Failing this Call: getQueueAdmins for error(RemoteException): org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.IllegalArgumentException: There is no queue named default
at org.apache.hadoop.mapred.QueueManager.getQueueACL(QueueManager.java:413)
at org.apache.hadoop.mapred.JobTracker.getQueueAdmins(JobTracker.java:5346)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:993)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1326)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1322)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1320)
ERROR 2997: Unable to recreate exception from backend error: org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.IllegalArgumentException: There is no queue named default
at org.apache.hadoop.mapred.QueueManager.getQueueACL(QueueManager.java:413)
at org.apache.hadoop.mapred.JobTracker.getQueueAdmins(JobTracker.java:5346)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:993)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1326)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1322)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1320)
at org.apache.hadoop.ipc.Client.call(Client.java:1095)
at org.apache.hadoop.ipc.Client.call(Client.java:1041)
at org.apache.hadoop.ipc.RPC$FailoverInvoker.invoke(RPC.java:540)
at org.apache.hadoop.mapred.$Proxy0.getQueueAdmins(Unknown Source)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:939)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:885)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:885)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:859)
at org.apache.hadoop.mapred.jobcontrol.Job.submit(Job.java:378)
at org.apache.hadoop.mapred.jobcontrol.JobControl.startReadyJobs(JobControl.java:247)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.pig.backend.hadoop20.PigJobControl.mainLoopAction(PigJobControl.java:157)
at org.apache.pig.backend.hadoop20.PigJobControl.run(PigJobControl.java:134)
at java.lang.Thread.run(Thread.java:724)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:257)
Any help would be appreciated.
Thanks in advance.
Typically this happens if your scheduler has specific queues created without users assigned, and the user submitting the job doesn't specify a queue name.
If the job falls back to the default queue and the user has no permission to use it, you can end up with this error. You can avoid the issue with:
export PIG_OPTS="$PIG_OPTS -Dmapred.job.queue.name=my-queue"
or
pig -Dmapreduce.job.queuename=$queue_name -f path/to/script.pig
I am developing a project in Hadoop using Java. When I run my code (jar) on a local cluster it works fine, but when I run it on an Amazon multi-node cluster it throws the exception below.
My code for the MapReduce job:
job.setJarByClass(ReadActivityDriver.class);
job.setMapperClass(ReadActivityLogMapper.class);
job.setReducerClass(ReadActivityLogReducer.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
job.setInputFormatClass(ColumnFamilyInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
ConfigHelper.setInputRpcPort(job.getConfiguration(), pro.getProperty("port"));
ConfigHelper.setInputInitialAddress(job.getConfiguration(), pro.getProperty("server"));
ConfigHelper.setInputPartitioner(job.getConfiguration(), "org.apache.cassandra.dht.Murmur3Partitioner");
ConfigHelper.setInputColumnFamily(job.getConfiguration(), keyspace, columnFamily);
SlicePredicate predicate = new SlicePredicate().setColumn_names(cn);
ConfigHelper.setInputSlicePredicate(job.getConfiguration(), predicate);
FileSystem.get(job.getConfiguration()).delete(new Path("ReadOutput"), true);
FileOutputFormat.setOutputPath(job, new Path("ReadOutput"));
job.waitForCompletion(true);
The exception I am getting:
8020/home/ubuntu/hdfstmp/mapred/staging/ubuntu/.staging/job_201405080944_0010
java.lang.RuntimeException: org.apache.thrift.TApplicationException: Invalid method name: 'describe_local_ring'
at org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getRangeMap(AbstractColumnFamilyInputFormat.java:337)
at org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getSplits(AbstractColumnFamilyInputFormat.java:125)
at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
at com.cassandra.readActivity.ReadActivityDriver.run(ReadActivityDriver.java:117)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at com.cassandra.readActivity.ReadActivityDriver.main(ReadActivityDriver.java:33)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: org.apache.thrift.TApplicationException: Invalid method name: 'describe_local_ring'
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
at org.apache.cassandra.thrift.Cassandra$Client.recv_describe_local_ring(Cassandra.java:1277)
at org.apache.cassandra.thrift.Cassandra$Client.describe_local_ring(Cassandra.java:1264)
at org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getRangeMap(AbstractColumnFamilyInputFormat.java:329)
... 20 more
java.io.FileNotFoundException: File does not exist: /user/ubuntu/ReadOutput/part-r-00000;
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchLocatedBlocks(DFSClient.java:2006)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1975)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1967)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:735)
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:165)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:436)
at com.cassandra.readActivity.ReadActivityMySql.calculatePoint(ReadActivityMySql.java:65)
at com.cassandra.readActivity.ReadActivityDriver.main(ReadActivityDriver.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
java.io.FileNotFoundException: File does not exist: /user/ubuntu/ReadOutput/part-r-00000;
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchLocatedBlocks(DFSClient.java:2006)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1975)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1967)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:735)
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:165)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:436)
at com.cassandra.readActivity.MySqlSavePoint.setSavePoint(MySqlSavePoint.java:66)
at com.cassandra.readActivity.ReadActivityDriver.main(ReadActivityDriver.java:37)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
It looks like the input/output format jars and your cluster are not using the same Cassandra version. You need to either fix the jars, or upgrade the AWS Cassandra nodes.
I think the problem is in your Cassandra partitioner; try the RandomPartitioner:
ConfigHelper.setInputPartitioner(job.getConfiguration(),"org.apache.cassandra.dht.RandomPartitioner");
Finally I got the answer: use the RandomPartitioner
ConfigHelper.setInputPartitioner(job.getConfiguration(),"org.apache.cassandra.dht.RandomPartitioner");
instead of the Murmur3Partitioner:
ConfigHelper.setInputPartitioner(job.getConfiguration(),"org.apache.cassandra.dht.Murmur3Partitioner");
I have a command that reads *.avro files and loads them into a table:
loadavro *.avro tablename
When I try to run the code on a Vagrant cluster, it throws the following error:
[root@cdh4-cluster vagrant]# hadoop jar devcenter-store-load-0.2-SNAPSHOT.jar loadavro person.avro person_table
Unexpected exception: Cannot create job output directory /tmp/crunch-1880716375
java.lang.RuntimeException: Cannot create job output directory /tmp/crunch-1880716375
at org.apache.crunch.impl.dist.DistributedPipeline.createTempDirectory(DistributedPipeline.java:281)
at org.apache.crunch.impl.dist.DistributedPipeline.<init>(DistributedPipeline.java:88)
at org.apache.crunch.impl.mr.MRPipeline.<init>(MRPipeline.java:88)
at org.apache.crunch.impl.mr.MRPipeline.<init>(MRPipeline.java:76)
at com.cerner.devcenter.pipeline.crunch.PersonRecordPipeline.getPipelineInstance(PersonRecordPipeline.java:72)
at com.cerner.devcenter.pipeline.crunch.PersonRecordPipeline.writeIntoHBase(PersonRecordPipeline.java:53)
at com.cerner.console.commands.LoadAvroCommand.run(LoadAvroCommand.java:35)
at com.cerner.kepler.commands.Command.run(Command.java:112)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at com.cerner.kepler.commands.BasicCommandRunner.run(BasicCommandRunner.java:61)
at com.cerner.kepler.commands.CommandRunner.run(CommandRunner.java:19)
at com.cerner.console.main.ConsoleMain.main(ConsoleMain.java:26)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/crunch-1880716375. Name node is in safe mode.
The reported blocks 104 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 106. Safe mode will be turned off automatically.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2982)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2960)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2938)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:417)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44096)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
at org.apache.hadoop.ipc.Client.call(Client.java:1225)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at $Proxy10.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:425)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
at $Proxy11.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2121)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2092)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:546)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1902)
at org.apache.crunch.impl.dist.DistributedPipeline.createTempDirectory(DistributedPipeline.java:279)
... 16 more
If you look at the exception, you can see that the NameNode is in safe mode, so the Crunch job can't write any new files into HDFS. Take the NameNode out of safe mode and run the job again:
hadoop dfsadmin -safemode leave
import org.owasp.esapi.*;

public class esapiTest
{
    public static void main(String args[])
    {
        test();
    }

    public static void test()
    {
        String clean = ESAPI.encoder().canonicalize("someString");
        Randomizer r = ESAPI.randomizer();
        System.out.println(r);
        System.out.println(".....................");
        System.out.println(clean);
    }
}
Why do I get these errors at runtime? I'm using ESAPI-2.0.1.jar; I'm not trying to run it on a server, just testing it in Eclipse. The jars are in my build path and in Referenced Libraries. Any help would be great. Thanks.
Attempting to load ESAPI.properties via file I/O.
Attempting to load ESAPI.properties as resource file via file I/O.
Found in 'org.owasp.esapi.resources' directory: C:\resources\ESAPI.properties
Loaded 'ESAPI.properties' properties file
Attempting to load validation.properties via file I/O.
Attempting to load validation.properties as resource file via file I/O.
Found in 'org.owasp.esapi.resources' directory: C:\resources\validation.properties
Loaded 'validation.properties' properties file
Exception in thread "main" org.owasp.esapi.errors.ConfigurationException: java.lang.reflect.InvocationTargetException Encoder class (org.owasp.esapi.reference.DefaultEncoder) CTOR threw exception.
at org.owasp.esapi.util.ObjFactory.make(ObjFactory.java:129)
at org.owasp.esapi.ESAPI.encoder(ESAPI.java:99)
at esapiTest.test(esapiTest.java:12)
at esapiTest.main(esapiTest.java:7)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.owasp.esapi.util.ObjFactory.make(ObjFactory.java:86)
... 3 more
Caused by: org.owasp.esapi.errors.ConfigurationException: java.lang.reflect.InvocationTargetException LogFactory class (org.owasp.esapi.reference.Log4JLogFactory) CTOR threw exception.
at org.owasp.esapi.util.ObjFactory.make(ObjFactory.java:129)
at org.owasp.esapi.ESAPI.logFactory(ESAPI.java:137)
at org.owasp.esapi.ESAPI.getLogger(ESAPI.java:154)
at org.owasp.esapi.reference.DefaultEncoder.<init>(DefaultEncoder.java:75)
at org.owasp.esapi.reference.DefaultEncoder.getInstance(DefaultEncoder.java:59)
... 8 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.owasp.esapi.util.ObjFactory.make(ObjFactory.java:86)
... 12 more
Caused by: java.lang.NoClassDefFoundError: org/apache/log4j/spi/LoggerFactory
at org.owasp.esapi.reference.Log4JLogFactory.<init>(Log4JLogFactory.java:62)
at org.owasp.esapi.reference.Log4JLogFactory.getInstance(Log4JLogFactory.java:68)
... 17 more
Not sure what to do.
It seems that you have a problem finding the Log4j jar (org.apache.log4j). The root cause at the bottom of your stack trace is java.lang.NoClassDefFoundError: org/apache/log4j/spi/LoggerFactory, which means ESAPI's Log4JLogFactory cannot load Log4j at runtime. Add the Log4j jar to your build path and make sure it is on the runtime classpath in Eclipse.
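As a quick, hedged sanity check, you can verify from plain Java whether Log4j is visible on the runtime classpath; if the class below cannot be loaded, ESAPI's Log4JLogFactory will fail with the same NoClassDefFoundError:

public final class Log4jClasspathCheck {
    public static void main(String[] args) {
        try {
            // This is the exact class named in the NoClassDefFoundError above.
            Class.forName("org.apache.log4j.spi.LoggerFactory");
            System.out.println("log4j is on the classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("log4j is missing from the classpath: " + e);
        }
    }
}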