PySpark container works locally but not when deployed to AWS Lambda - java

I have created a PySpark batch pipeline and packaged it into a Docker container. Everything works fine locally, but it fails as soon as I deploy it to run as an AWS Lambda function.
The error message is as follows:
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found
and the traceback is:
Traceback (most recent call last):
File "/var/task/pipeline.py", line 71, in
df.write.mode("overwrite").parquet(parquet_output)
File "/var/lang/lib/python3.8/site-packages/pyspark/sql/readwriter.py", line 1140, in parquet
self._jwrite.parquet(path)
File "/var/lang/lib/python3.8/site-packages/py4j/java_gateway.py", line 1321, in call
return_value = get_return_value(
File "/var/lang/lib/python3.8/site-packages/pyspark/sql/utils.py", line 190, in deco
return f(*a, **kw)
File "/var/lang/lib/python3.8/site-packages/py4j/protocol.py", line 326, in get_return_value
raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o47.parquet.
: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2688)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3431)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3466)
at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
at org.apache.spark.sql.execution.datasources.DataSource.planForWritingFileFormat(DataSource.scala:461)
at org.apache.spark.sql.execution.datasources.DataSource.planForWriting(DataSource.scala:558)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:390)
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:363)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:239)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:793)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2592)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2686)
... 25 more
END RequestId: fde8d18e-0a38-4242-a919-8f94530a87e4
REPORT RequestId: fde8d18e-0a38-4242-a919-8f94530a87e4 Duration: 32409.88 ms Billed Duration: 32410 ms Memory Size: 5120 MB Max Memory Used: 472 MB
RequestId: fde8d18e-0a38-4242-a919-8f94530a87e4 Error: Runtime exited with error: exit status 1
Runtime.ExitError
My question here is:
How can the same container image work locally but not on Lambda? My understanding is that if a Docker image runs locally, it should also work when it is deployed to any cloud environment, since it is self-contained.
Am I misunderstanding something?
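For context, my understanding is that writing to an s3a:// path needs the hadoop-aws connector (and the matching AWS SDK bundle) on Spark's classpath, wired up roughly like the sketch below. This is a simplified sketch, not my exact pipeline; the version numbers and bucket name are placeholders and have to match the Hadoop build that ships with the Spark in the image:

from pyspark.sql import SparkSession

# Sketch only: the hadoop-aws / aws-java-sdk-bundle versions are placeholders
# and must match the Hadoop version bundled with the Spark in the image.
spark = (
    SparkSession.builder
    .appName("batch-pipeline")
    # Resolving the connector at runtime works locally (Ivy can download it),
    # but baking the jars into the image and listing them via spark.jars
    # avoids any download inside the Lambda sandbox.
    .config("spark.jars.packages",
            "org.apache.hadoop:hadoop-aws:3.3.4,"
            "com.amazonaws:aws-java-sdk-bundle:1.12.262")
    .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
    .getOrCreate()
)

df = spark.createDataFrame([(1, "a")], ["id", "value"])
df.write.mode("overwrite").parquet("s3a://my-bucket/output/")  # hypothetical bucket

If the connector is resolved at runtime like this, a local run can pick it up from the Ivy cache while the Lambda environment may not be able to download it, which is the kind of difference that could explain the behaviour above.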

Related

Why does pyspark fail with “Error while instantiating 'org.apache.spark.sql.internal.SessionStateBuilder'”?

While trying to get PySpark up and running in PyCharm (through Databricks with AWS), I am getting the following error:
It appears that the cluster you are trying to connect to does not have the Spark service enabled. To enable the Spark service on this cluster, go to
https://....cloud.databricks.com/?o=...#setting/clusters//#setting/clusters/.../configuration
and add the following to the cluster's Spark config:
spark.databricks.service.server.enabled true
I've set this to true. I even set up a new cluster on Databricks with it set to true from the start and still get the same error!
Full error message:
19/10/25 14:09:02 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
19/10/25 14:09:04 WARN MetricsSystem: Using default name SparkStatusTracker for source because neither spark.metrics.namespace nor spark.app.id is set.
Testing simple count
Traceback (most recent call last):
File "/Users/.../anaconda3/envs/dbconnect/lib/python3.5/site-packages/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/Users/.../anaconda3/envs/dbconnect/lib/python3.5/site-packages/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o20.range.
: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.internal.SessionStateBuilder':
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1178)
at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:170)
at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:169)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:169)
at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:166)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:193)
at org.apache.spark.sql.SparkSession.range(SparkSession.scala:609)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
at py4j.Gateway.invoke(Gateway.java:295)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:251)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.databricks.service.SparkServiceConnectionException: It appears that the cluster you are trying to connect to (...) does not have the
Spark service enabled. To enable the Spark service on this cluster, go to
...#setting/clusters//#setting/clusters/.../configuration
and add the following to the cluster's Spark config:
spark.databricks.service.server.enabled true
at com.databricks.service.SparkServiceRPCClient.doPost(SparkServiceRPCClient.scala:104)
at com.databricks.service.SparkServiceRPCClient.executeRPC0(SparkServiceRPCClient.scala:66)
at com.databricks.service.SparkServiceRPCClientStub.com$databricks$service$SparkServiceRPCClientStub$$executeRPC(SparkServiceRPCClientStub.scala:133)
at com.databricks.service.SparkServiceRPCClientStub$$anonfun$pollStatuses$1.apply(SparkServiceRPCClientStub.scala:486)
at com.databricks.service.SparkServiceRPCClientStub$$anonfun$pollStatuses$1.apply(SparkServiceRPCClientStub.scala:483)
at com.databricks.spark.util.Log4jUsageLogger.recordOperation(UsageLogger.scala:172)
at com.databricks.spark.util.UsageLogging$class.recordOperation(UsageLogger.scala:297)
at com.databricks.service.SparkServiceRPCClientStub.recordOperation(SparkServiceRPCClientStub.scala:60)
at com.databricks.service.SparkServiceRPCClientStub.pollStatuses(SparkServiceRPCClientStub.scala:483)
at com.databricks.service.SparkServiceRPCClientStub.com$databricks$service$SparkServiceRPCClientStub$$pollAndUpdateStatuses0(SparkServiceRPCClientStub.scala:454)
at com.databricks.service.SparkServiceRPCClientStub$$anonfun$pollAndUpdateStatuses$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SparkServiceRPCClientStub.scala:435)
at com.databricks.service.SparkServiceRPCClientStub$$anonfun$pollAndUpdateStatuses$1$$anonfun$apply$mcV$sp$1.apply(SparkServiceRPCClientStub.scala:433)
at com.databricks.service.SparkServiceRPCClientStub$$anonfun$pollAndUpdateStatuses$1$$anonfun$apply$mcV$sp$1.apply(SparkServiceRPCClientStub.scala:433)
at com.databricks.service.SparkServiceRPCClientStub.com$databricks$service$SparkServiceRPCClientStub$$withPollLock(SparkServiceRPCClientStub.scala:445)
at com.databricks.service.SparkServiceRPCClientStub$$anonfun$pollAndUpdateStatuses$1.apply$mcV$sp(SparkServiceRPCClientStub.scala:432)
at com.databricks.service.SparkServiceRPCClientStub$$anonfun$pollAndUpdateStatuses$1.apply(SparkServiceRPCClientStub.scala:430)
at com.databricks.service.SparkServiceRPCClientStub$$anonfun$pollAndUpdateStatuses$1.apply(SparkServiceRPCClientStub.scala:430)
at com.databricks.spark.util.Log4jUsageLogger.recordOperation(UsageLogger.scala:172)
at com.databricks.spark.util.UsageLogging$class.recordOperation(UsageLogger.scala:297)
at com.databricks.service.SparkServiceRPCClientStub.recordOperation(SparkServiceRPCClientStub.scala:60)
at com.databricks.service.SparkServiceRPCClientStub.pollAndUpdateStatuses(SparkServiceRPCClientStub.scala:430)
at com.databricks.service.SparkServiceRPCClientStub$$anonfun$getServerHadoopConf$1.apply(SparkServiceRPCClientStub.scala:408)
at com.databricks.service.SparkServiceRPCClientStub$$anonfun$getServerHadoopConf$1.apply(SparkServiceRPCClientStub.scala:407)
at com.databricks.service.SparkServiceRPCClientStub.com$databricks$service$SparkServiceRPCClientStub$$withPollLock(SparkServiceRPCClientStub.scala:445)
at com.databricks.service.SparkServiceRPCClientStub.getServerHadoopConf(SparkServiceRPCClientStub.scala:407)
at com.databricks.service.SparkClient$.getServerHadoopConf(SparkClient.scala:245)
at com.databricks.spark.util.SparkClientContext$.getServerHadoopConf(SparkClientContext.scala:222)
at org.apache.spark.SparkContext$$anonfun$hadoopConfiguration$1.apply(SparkContext.scala:317)
at org.apache.spark.SparkContext$$anonfun$hadoopConfiguration$1.apply(SparkContext.scala:312)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.SparkContext.hadoopConfiguration(SparkContext.scala:311)
at org.apache.spark.sql.internal.SharedState.<init>(SharedState.scala:67)
at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:145)
at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:145)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession.sharedState$lzycompute(SparkSession.scala:145)
at org.apache.spark.sql.SparkSession.sharedState(SparkSession.scala:144)
at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:291)
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1175)
... 18 more
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/.../Documents/School/Technical/Data Science/Spark - The Definitive Guide/Examples/Spark-The_Definitive_Guide/test.py", line 9, in <module>
print(spark.range(100).count())
File "/Users/.../anaconda3/envs/dbconnect/lib/python3.5/site-packages/pyspark/sql/session.py", line 337, in range
jdf = self._jsparkSession.range(0, int(start), int(step), int(numPartitions))
File "/Users/.../anaconda3/envs/dbconnect/lib/python3.5/site-packages/py4j/java_gateway.py", line 1257, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/Users/.../anaconda3/envs/dbconnect/lib/python3.5/site-packages/pyspark/sql/utils.py", line 79, in deco
raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.internal.SessionStateBuilder':"
I've set the config variable spark.databricks.service.server.enabled to true, but I am still getting this error.
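For reference, the script that hits this is essentially the minimal databricks-connect smoke test below (a sketch reconstructed from the traceback above, nothing more):

from pyspark.sql import SparkSession

# Minimal repro: with databricks-connect installed, this SparkSession is
# routed to the remote cluster, and the failure occurs on the first action.
spark = SparkSession.builder.getOrCreate()

print("Testing simple count")
print(spark.range(100).count())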
Update your PyCharm to the latest version. I ran into the same issue before.

Pyspark: java.lang.OutOfMemoryError: Java heap space when saving the DataFrame to parquet/csv

I have been using PySpark 2.3 in a Jupyter notebook on a Lenovo PC (Windows 10, 48 GB RAM). When I try to save a DataFrame in Parquet or CSV format, I get this error:
Py4JJavaError: An error occurred while calling o394.csv.
...
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 6 in stage 4.0 failed 1 times, most recent failure: Lost task 6.0 in stage 4.0 (TID 17, localhost, executor driver): java.lang.OutOfMemoryError: Java heap space
...
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599)
...
Caused by: java.lang.OutOfMemoryError: Java heap space
...
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 50525)
Traceback (most recent call last):
File "C:\Users\****\Anaconda3\lib\socketserver.py", line 317, in _handle_request_noblock
self.process_request(request, client_address)
File "C:\Users\****\Anaconda3\lib\socketserver.py", line 348, in process_request
self.finish_request(request, client_address)
File "C:\Users\****\Anaconda3\lib\socketserver.py", line 361, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "C:\Users\****\Anaconda3\lib\socketserver.py", line 696, in __init__
self.handle()
File "F:\spark\spark\python\pyspark\accumulators.py", line 235, in handle
num_updates = read_int(self.rfile)
File "F:\spark\spark\python\pyspark\serializers.py", line 683, in read_int
length = stream.read(4)
File "C:\Users\****\Anaconda3\lib\socket.py", line 586, in readinto
return self._sock.recv_into(b)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
Following one of the recommendations from other questions that have been posted here, I created the SparkSession as follows:
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .master("local[*]") \
    .appName("Python Spark SQL basic example") \
    .config("spark.memory.fraction", 0.8) \
    .config("spark.executor.memory", "16g") \
    .config("spark.driver.memory", "16g") \
    .config("spark.sql.shuffle.partitions", "800") \
    .config("spark.memory.offHeap.enabled", "true") \
    .config("spark.memory.offHeap.size", "16g") \
    .getOrCreate()
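As a sanity check (a small sketch, continuing from the session created above), the values that actually took effect can be read back from the running session:

# Read back the settings the running session actually uses.
print(spark.conf.get("spark.driver.memory"))
print(spark.conf.get("spark.sql.shuffle.partitions"))
print(spark.conf.get("spark.memory.offHeap.size"))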

Getting "java.lang.NoClassDefFoundError: javax/security/jacc/PolicyContext" when run node.js

I am trying to get a node.js + node-java app that calls an API from a commercial product working, and I have gotten it to the point where it almost runs. But now, when I run it, it gives me a "java.lang.NoClassDefFoundError".
Here's the ending of the output when it gets this error:
THIS IS CONSOLE.LOG!!
Finished STARTING TO DO MyFactoryImplClass.getPepRequestFactory()...
About to do pepReqF1.newPepRequest()...
/apps/Oracle/OESCLIENT/oes_sm_instances/smtest8-java-PULL-NODEJS/testOES.js:97
var pepReq1 = pepReqF1.newPepRequestSync("Administrators", "GET" , "test/foo2/fooresource" );
^
Error: Error running instance method
java.lang.NoClassDefFoundError: javax/security/jacc/PolicyContext
at oracle.security.jps.runtime.AppSecurityContext$2.run(AppSecurityContext.java:223)
at oracle.security.jps.runtime.AppSecurityContext$2.run(AppSecurityContext.java:221)
at java.security.AccessController.doPrivileged(Native Method)
at oracle.security.jps.runtime.AppSecurityContext.getApplicationID(AppSecurityContext.java:221)
at oracle.security.jps.internal.api.runtime.AppSecurityContext.getApplicationID(AppSecurityContext.java:66)
at oracle.security.jps.openaz.pep.SubjectObjMapper$2.run(SubjectObjMapper.java:234)
at oracle.security.jps.openaz.pep.SubjectObjMapper$2.run(SubjectObjMapper.java:231)
at java.security.AccessController.doPrivileged(Native Method)
at oracle.security.jps.openaz.pep.SubjectObjMapper.mapStringSubject(SubjectObjMapper.java:231)
at oracle.security.jps.openaz.pep.SubjectObjMapper.mapToJpsObject(SubjectObjMapper.java:173)
at oracle.security.jps.openaz.pep.PepRequestImpl.setAccessSubject(PepRequestImpl.java:429)
at oracle.security.jps.openaz.pep.PepRequestFactoryImpl.newPepRequest(PepRequestFactoryImpl.java:202)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at Error (native)
at Object.<anonymous> (/apps/Oracle/OESCLIENT/oes_sm_instances/smtest8-java-PULL-NODEJS/testOES.js:97:24)
at Module._compile (module.js:556:32)
at Object.Module._extensions..js (module.js:565:10)
at Module.load (module.js:473:32)
at tryModuleLoad (module.js:432:12)
at Function.Module._load (module.js:424:3)
at Module.runMain (module.js:590:10)
at run (bootstrap_node.js:394:7)
at startup (bootstrap_node.js:149:9)
[root@nodejs smtest8-java-PULL-NODEJS]#
I found that the PolicyContext class is in jacc-api.jar, so I put a copy of jacc-api.jar in my directory and then added:
java.classpath.push("/apps/node-v6.6.0-linux-x64/code/jacc-api.jsr");
However, even after doing that, I am still getting the same error when I run my node.js+node-java app.
I am guessing that when I call out to the commercial product's API, it is that API that is looking for this PolicyContext class, so I tried exporting CLASSPATH set to the path containing the jacc-api.jar file, but even then I get the same error.
Where can I add that JAR to the CLASSPATH so that I can eliminate this error?
Thanks,
Jim

NullPointerException in Camus Job [EtlMultiOutputRecordWriter] - ExceptionWritable

I am very new to Camus and Hadoop, and I am running into an exception. I am trying to write some Avro files to HDFS, and I keep getting the following error block:
[EtlMultiOutputRecordWriter] - ExceptionWritable key: topic=_schemas partition=0leaderId=0 server= service= beginOffset=0 offset=0 msgSize=1024 server= checksum=0 time=1450371931447 value: java.lang.Exception
at com.linkedin.camus.etl.kafka.common.KafkaReader.getNext(KafkaReader.java:108)
at com.linkedin.camus.etl.kafka.mapred.EtlRecordReader.nextKeyValue(EtlRecordReader.java:232)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
... 14 more
I looked up line 108 in com.linkedin.camus.etl.kafka.common.KafkaReader.getNext and found it to be this: MessageAndOffset msgAndOffset = messageIter.next();.
I am using io.confluent.camus.etl.kafka.coders.AvroMessageDecoder for my decoder and com.linkedin.camus.example.DummySchemaRegistry for my coder.
At the end of the logs I get another line indicating an error from one of the HDFS files: Error from file [hdfs://localhost:9000/user/username/exec/2015-12-17-17-05-25/errors-m-00000]. The errors-m-00000 file contains a somewhat readable beginning, but then changes to an undecipherable string:
SEQ*com.linkedin.camus.etl.kafka.common.EtlKey5com.linkedin.camus.etl.kafka.common.ExceptionWritable*org.apache.hadoop.io.compress.DefaultCodec|Ò ∫±ß˝}pºHí$ò¸·:0schemasQ∞∆øÿxúïîÀN√0E7l‡+∫»¢lFMõ>á*êxU®™ËzÍmàc[ÆÕ„XÚÕÿqZ%#[ÿD±gÓô…¯∆üGœ¯Ç¿Q,·Úçë2ô'«hZL¿3ëSöXÿ5ê·ê„Sé‡ÇÖpÎS¬î4,…LËÕ¥Î{û}wFßáâ*M)>%&uZÑCfi“˚#rKÌÔ¡flÌu^Í%†B∂"Xa*•⁄0ÔQÕpùGzùidy&ñªkT…śԈ≥-#0>›…∆RG∫.ˇÅ¨«JÚ®sÃ≥Ö¡\£Rîfi˚ßéT≥D#%T8ãW®ÚµÌ∫4N˙©W∫©mst√—Ô嶥óhÓ$C~#S+Ñâ{ãÇfl¡ßí⁄L´ÏíÙºÙΩ5wfÃjM¬∏_Äò5RØ£
Ë"Eeúÿëx{ÆÏ«{XW÷XM€O¨-C#É¡Òl•ù9§‰õö2ó:wɲ%Œ-N∫ˇbFXˆ∑:àá5fyQÑ‘ö™:roõ1⁄5•≠≈˚yM0±ú?»ÃW◊.h≈I´êöNæ
[û3
At the end it appears that a Hadoop job has run, but a commit never takes place, based on the timing report:
Job time (seconds):
pre setup 1.0 (11%)
get splits 1.0 (11%)
hadoop job 4.0 (44%)
commit 0.0 (0%)
Total: 0 minutes 9 seconds
Any help or an idea of where to look to resolve this would be greatly appreciated. Thank you.

Using PigServer to run mapreduce jobs

I want to use the PigServer Java class to run Pig scripts on a remote Hadoop cluster. At the moment I am using the Cloudera CDH 4.4.0 distribution for testing purposes. I'm running my Java program from within the Cloudera VM.
The directory with the hadoop config files is included in the classpath.
When I run the same script in MapReduce mode from the shell, it works just fine. Thanks in advance for your answers.
The Java code:
Properties props = new Properties();
props.setProperty("fs.default.name", "hdfs://localhost.localdomain:8020");
props.setProperty("mapred.job.tracker", "localhost.localdomain:8021");
PigServer pigServer = new PigServer(ExecType.MAPREDUCE, props);
pigServer.registerQuery("batting = load './Batting.csv' using PigStorage(',');");
pigServer.registerQuery("runs_raw = FOREACH batting GENERATE $0 as playerID, $1 as year, $8 as runs;");
pigServer.registerQuery("runs = FILTER runs_raw BY runs > 0;");
pigServer.registerQuery("grp_data = GROUP runs by (year);");
pigServer.registerQuery("max_runs = FOREACH grp_data GENERATE group as grp, MAX(runs.runs) as max_runs;");
pigServer.registerQuery("join_max_run = JOIN max_runs by ($0, max_runs), runs by (year,runs);");
pigServer.registerQuery("join_data = FOREACH join_max_run GENERATE $0 as year, $2 as playerID, $1 as runs;");
pigServer.store("join_data", "join_data");
I also added the following Maven dependency:
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>2.2.0</version>
</dependency>
Upon running my Java program, I get the following stack trace:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.conf.Configuration.deprecation).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1000: Error during parsing. Unable to check name hdfs://localhost.localdomain:8020/user/cloudera
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1608)
at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1547)
at org.apache.pig.PigServer.registerQuery(PigServer.java:518)
at org.apache.pig.PigServer.registerQuery(PigServer.java:531)
at MainTestPigServer.main(MainTestPigServer.java:24)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Caused by: Failed to parse: Pig script failed to parse:
<line 1, column 10> pig script failed to validate: org.apache.pig.backend.datastorage.DataStorageException: ERROR 6007: Unable to check name hdfs://localhost.localdomain:8020/user/cloudera
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:191)
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1600)
... 9 more
Caused by:
<line 1, column 10> pig script failed to validate: org.apache.pig.backend.datastorage.DataStorageException: ERROR 6007: Unable to check name hdfs://localhost.localdomain:8020/user/cloudera
at org.apache.pig.parser.LogicalPlanBuilder.buildLoadOp(LogicalPlanBuilder.java:835)
at org.apache.pig.parser.LogicalPlanGenerator.load_clause(LogicalPlanGenerator.java:3236)
at org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1315)
at org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:799)
at org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:517)
at org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:392)
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:184)
... 10 more
Caused by: org.apache.pig.backend.datastorage.DataStorageException: ERROR 6007: Unable to check name hdfs://localhost.localdomain:8020/user/cloudera
at org.apache.pig.backend.hadoop.datastorage.HDataStorage.isContainer(HDataStorage.java:207)
at org.apache.pig.backend.hadoop.datastorage.HDataStorage.asElement(HDataStorage.java:128)
at org.apache.pig.backend.hadoop.datastorage.HDataStorage.asElement(HDataStorage.java:138)
at org.apache.pig.parser.QueryParserUtils.getCurrentDir(QueryParserUtils.java:91)
at org.apache.pig.parser.LogicalPlanBuilder.buildLoadOp(LogicalPlanBuilder.java:827)
... 16 more
Caused by: java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).; Host Details : local host is: "localhost.localdomain/127.0.0.1"; destination host is: "localhost.localdomain":8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
at org.apache.hadoop.ipc.Client.call(Client.java:1351)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at $Proxy9.getFileInfo(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at $Proxy9.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1679)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1106)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
at org.apache.pig.backend.hadoop.datastorage.HDataStorage.isContainer(HDataStorage.java:200)
... 20 more
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).
at com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:89)
at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:108)
at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.<init>(RpcHeaderProtos.java:1398)
at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.<init>(RpcHeaderProtos.java:1362)
at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto$1.parsePartialFrom(RpcHeaderProtos.java:1492)
at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto$1.parsePartialFrom(RpcHeaderProtos.java:1487)
at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:200)
at com.google.protobuf.AbstractParser.parsePartialDelimitedFrom(AbstractParser.java:241)
at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:253)
at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:259)
at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:49)
at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.parseDelimitedFrom(RpcHeaderProtos.java:2364)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:996)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:891)
Process finished with exit code 1
