I am confronted with a strange problem. I have a MapReduce class that looks for patterns in a file (the pattern file goes into the DistributedCache). I wanted to reuse this class to run over 1000 pattern files, so I extended the pattern-matching class and overrode its main and run functions. In the run() of the child class I modify the command-line arguments and feed them to the parent's run(). Everything goes well up until iteration 45-50; then all TaskTrackers start to fail until no progress is made at all. I checked HDFS, but 70% of the space is still free. Does anybody have an idea why launching 50 jobs, one by one, causes difficulties for Hadoop?
@Override
public int run(String[] args) throws Exception {
    // expected arguments: -patterns patternsDIR input/ output/
    List<String> files = getFiles(args[1]);
    String inputDataset = args[2];
    String outputDir = args[3];

    for (int i = 0; i < files.size(); i++) {
        // build the argument list for this pattern file and run the parent job
        String[] newArgs = modifyArgs(args);
        super.run(newArgs);
    }
    return 0;
}
EDIT: I just checked the job logs; this is the first error that occurs:
2013-11-12 09:03:01,665 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser cause:java.io.IOException: java.lang.OutOfMemoryError: Java heap space
2013-11-12 09:03:32,971 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201311120807_0053_m_000053_0' has completed task_201311120807_0053_m_000053 successfully.
2013-11-12 09:07:51,717 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser cause:java.io.IOException: java.lang.OutOfMemoryError: Java heap space
2013-11-12 09:08:05,973 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201311120807_0053_m_000128_0' has completed task_201311120807_0053_m_000128 successfully.
2013-11-12 09:08:16,571 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201311120807_0053_m_000130_0' has completed task_201311120807_0053_m_000130 successfully.
2013-11-12 09:08:16,571 WARN org.apache.hadoop.hdfs.LeaseRenewer: Failed to renew lease for [DFSClient_NONMAPREDUCE_1595161181_30] for 30 seconds. Will retry shortly ...
2013-11-12 09:08:27,175 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201311120807_0053_m_000138_0' has completed task_201311120807_0053_m_000138 successfully.
2013-11-12 09:08:25,241 ERROR org.mortbay.log: EXCEPTION
java.lang.OutOfMemoryError: Java heap space
2013-11-12 09:08:25,241 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54311, call heartbeat(org.apache.hadoop.mapred.TaskTrackerStatus#7fcb9c0a, false, false, true, 9834) from 10.1.1.13:55028: error: java.io.IOException: java.lang.OutOfMemoryError: Java heap space
java.io.IOException: java.lang.OutOfMemoryError: Java heap space
at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:62)
at java.lang.StringBuilder.<init>(StringBuilder.java:97)
at org.apache.hadoop.util.StringUtils.escapeString(StringUtils.java:435)
at org.apache.hadoop.mapred.Counters.escape(Counters.java:768)
at org.apache.hadoop.mapred.Counters.access$000(Counters.java:52)
at org.apache.hadoop.mapred.Counters$Counter.makeEscapedCompactString(Counters.java:111)
at org.apache.hadoop.mapred.Counters$Group.makeEscapedCompactString(Counters.java:221)
at org.apache.hadoop.mapred.Counters.makeEscapedCompactString(Counters.java:648)
at org.apache.hadoop.mapred.JobHistory$MapAttempt.logFinished(JobHistory.java:2276)
at org.apache.hadoop.mapred.JobInProgress.completedTask(JobInProgress.java:2636)
at org.apache.hadoop.mapred.JobInProgress.updateTaskStatus(JobInProgress.java:1222)
at org.apache.hadoop.mapred.JobTracker.updateTaskStatuses(JobTracker.java:4471)
at org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3306)
at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:3001)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
2013-11-12 09:08:16,571 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 54311, call heartbeat(org.apache.hadoop.mapred.TaskTrackerStatus#3269c671, false, false, true, 9841) from 10.1.1.23:42125: error: java.io.IOException: java.lang.OutOfMemoryError: Java heap space
java.io.IOException: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$Packet.<init>(DFSClient.java:2875)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.writeChunk(DFSClient.java:3806)
at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:150)
at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:132)
at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:121)
at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:112)
at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:86)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:49)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:220)
at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:290)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:294)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:140)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
at java.io.BufferedWriter.flush(BufferedWriter.java:253)
at java.io.PrintWriter.flush(PrintWriter.java:293)
at java.io.PrintWriter.checkError(PrintWriter.java:330)
at org.apache.hadoop.mapred.JobHistory.log(JobHistory.java:847)
at org.apache.hadoop.mapred.JobHistory$MapAttempt.logStarted(JobHistory.java:2225)
at org.apache.hadoop.mapred.JobInProgress.completedTask(JobInProgress.java:2632)
at org.apache.hadoop.mapred.JobInProgress.updateTaskStatus(JobInProgress.java:1222)
at org.apache.hadoop.mapred.JobTracker.updateTaskStatuses(JobTracker.java:4471)
at org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3306)
at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:3001)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
at java.security.AccessController.doPrivileged(Native Method)
And after that we see a bunch of:
2013-11-12 09:13:48,204 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt_201311120807_0053_m_000033_0: Lost task tracker: tracker_n144-06b.wall1.ilabt.iminds.be:localhost/127.0.0.1:47567
EDIT2: Some ideas?
The heap space error is rather unexpected, since the mappers hardly require any memory.
I am calling the base class with super.run(); should I use a ToolRunner call for that instead? (See the sketch below.)
In every iteration a file with approximately 1000 words + scores is added to the DistributedCache, and I am not sure whether I should reset the cache somewhere. (Every job inside super.run() runs with job.waitForCompletion(); is the cache cleared then?)
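For what it's worth, here is a minimal sketch of how the loop could go through ToolRunner instead of super.run(), so that every iteration gets a brand-new Configuration (and therefore fresh DistributedCache state) rather than reusing the parent's. The class names (MultiPatternDriver, PatternMatchJob), the run_N output suffix, and the local-filesystem listing are hypothetical stand-ins for the existing code, assuming the parent class implements Tool as its run(String[]) signature suggests:

import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ToolRunner;

public class MultiPatternDriver {
    public static void main(String[] args) throws Exception {
        // expected arguments: -patterns patternsDIR input/ output/
        String patternsDir = args[1];
        String inputDataset = args[2];
        String outputDir = args[3];

        File[] patternFiles = new File(patternsDir).listFiles();
        if (patternFiles == null) {
            throw new IllegalArgumentException("not a readable directory: " + patternsDir);
        }
        for (int i = 0; i < patternFiles.length; i++) {
            String[] jobArgs = {
                "-patterns", patternFiles[i].getPath(), inputDataset, outputDir + "/run_" + i
            };
            // Each call builds a fresh Configuration, so per-job state such as
            // DistributedCache entries cannot accumulate across the runs.
            int rc = ToolRunner.run(new Configuration(), new PatternMatchJob(), jobArgs);
            if (rc != 0) {
                System.exit(rc); // stop at the first failed job
            }
        }
    }
}

That said, the OOM in the log above is in the JobTracker itself (heartbeat / JobHistory handling), so a fresh client-side Configuration may not be the whole story.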
EDIT3:
@Donald: I haven't resized the memory for the Hadoop daemons, so they should each have a heap of 1 GB. The map tasks have 800 MB of heap, of which 450 MB is used for io.sort.mb. (See the hadoop-env.sh sketch below.)
@Chris: I haven't modified anything about the counters; I am using the regular ones. There are 1764 map tasks with 16 counters each, and the job itself has another 20 or so. This might indeed add up after 50 consecutive jobs, but I would have thought it is not kept on the heap when running multiple consecutive jobs?
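For reference, since the traces above are inside JobTracker.heartbeat and JobHistory logging, I assume the relevant knob on Hadoop 1.x would be the daemon heap in conf/hadoop-env.sh rather than the task heap. A sketch with placeholder values, not recommendations:

# conf/hadoop-env.sh
# Raise the daemon heap (HADOOP_HEAPSIZE is in MB, default 1000) ...
export HADOOP_HEAPSIZE=2000
# ... or give only the JobTracker a larger heap:
export HADOOP_JOBTRACKER_OPTS="-Xmx2048m $HADOOP_JOBTRACKER_OPTS"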
Extra information:
The map tasks are extremely fast: they take only 3-5 seconds each, and I have jvm.reuse=-1. A map task processes a file with 10 records (the file is much smaller than the block size). Because of the small files I could consider building input files with 100 records to reduce the mapping overhead (see the sketch after this list).
The first thing I tried was to add a single identity reducer (1 reduce task) to reduce the number of files created in HDFS (otherwise there would be one per pattern, and therefore 1000 per job, which might create overhead for the datanodes).
The number of records per job is rather low: I am looking for specific words in 1764 files, and the total number of matches against the 1000 patterns is around 5000 map output records.
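As a hedged sketch of the small-files idea above: instead of physically merging files, a combining input format can pack many small files into one split, provided the Hadoop version in use ships org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat (Hadoop 2.x; on 1.x only the abstract CombineFileInputFormat is available and needs a custom RecordReader):

import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;

public class CombinedInputSketch {
    // Call this on the Job built inside the parent run(); values are examples only.
    static void useCombinedSplits(Job job) {
        job.setInputFormatClass(CombineTextInputFormat.class);
        // Cap each combined split at roughly 64 MB so several small files
        // share one mapper instead of each file getting its own.
        CombineTextInputFormat.setMaxInputSplitSize(job, 64L * 1024 * 1024);
    }
}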
@All: Thanks for helping me out, guys!
Related
I have been using an Elasticsearch cluster in my production environment for months now.
The cluster contains 2 nodes, which are Windows Server 2019 servers.
Sometimes a random node of this cluster suddenly gets stuck until I restart the Elasticsearch service, which is impossible by simply restarting the Windows service; I need to kill the process to be able to restart it right after.
When I look at thread contention by calling the Elastic API, I get this:
0.0% (0s out of 500ms) cpu usage by thread 'threadDeathWatcher-2-1'
10/10 snapshots sharing following 4 elements
java.lang.Thread.sleep(Native Method)
io.netty.util.ThreadDeathWatcher$Watcher.run(ThreadDeathWatcher.java:152)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
java.lang.Thread.run(Thread.java:748)
0.0% (0s out of 500ms) cpu usage by thread 'DestroyJavaVM'
unique snapshot
unique snapshot
unique snapshot
unique snapshot
unique snapshot
unique snapshot
unique snapshot
unique snapshot
unique snapshot
unique snapshot
0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[PRODUCTION_CRITQUE_2][refresh][T#1]'
10/10 snapshots sharing following 27 elements
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
org.apache.lucene.search.ReferenceManager.maybeRefreshBlocking(ReferenceManager.java:251)
org.elasticsearch.index.engine.InternalEngine.refresh(InternalEngine.java:910)
org.elasticsearch.index.shard.IndexShard.refresh(IndexShard.java:632)
org.elasticsearch.action.admin.indices.refresh.TransportShardRefreshAction.shardOperationOnReplica(TransportShardRefreshAction.java:65)
org.elasticsearch.action.admin.indices.refresh.TransportShardRefreshAction.shardOperationOnReplica(TransportShardRefreshAction.java:38)
org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.onResponse(TransportReplicationAction.java:494)
org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.onResponse(TransportReplicationAction.java:467)
org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:147)
org.elasticsearch.index.shard.IndexShard.acquireReplicaOperationLock(IndexShard.java:1673)
org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.doRun(TransportReplicationAction.java:566)
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:451)
org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:441)
org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69)
org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1544)
org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638)
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[PRODUCTION_CRITQUE_2][refresh][T#2]'
10/10 snapshots sharing following 39 elements
sun.nio.fs.WindowsNativeDispatcher.DeleteFile0(Native Method)
sun.nio.fs.WindowsNativeDispatcher.DeleteFile(WindowsNativeDispatcher.java:114)
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:249)
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
java.nio.file.Files.delete(Files.java:1126)
org.apache.lucene.store.FSDirectory.privateDeleteFile(FSDirectory.java:373)
org.apache.lucene.store.FSDirectory.deleteFile(FSDirectory.java:335)
org.apache.lucene.store.FilterDirectory.deleteFile(FilterDirectory.java:62)
org.apache.lucene.store.FilterDirectory.deleteFile(FilterDirectory.java:62)
org.elasticsearch.index.store.Store$StoreDirectory.deleteFile(Store.java:700)
org.elasticsearch.index.store.Store$StoreDirectory.deleteFile(Store.java:705)
org.apache.lucene.store.LockValidatingDirectoryWrapper.deleteFile(LockValidatingDirectoryWrapper.java:38)
org.apache.lucene.index.IndexFileDeleter.deleteFile(IndexFileDeleter.java:723)
org.apache.lucene.index.IndexFileDeleter.deleteFiles(IndexFileDeleter.java:717)
org.apache.lucene.index.IndexFileDeleter.deleteNewFiles(IndexFileDeleter.java:693)
org.apache.lucene.index.IndexWriter.deleteNewFiles(IndexWriter.java:4965)
org.apache.lucene.index.DocumentsWriter$DeleteNewFilesEvent.process(DocumentsWriter.java:771)
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5043)
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5034)
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:477)
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:291)
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:266)
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:256)
org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104)
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140)
org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:156)
org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:58)
org.apache.lucene.search.ReferenceManager.doMaybeRefresh(ReferenceManager.java:176)
org.apache.lucene.search.ReferenceManager.maybeRefreshBlocking(ReferenceManager.java:253)
org.elasticsearch.index.engine.InternalEngine.refresh(InternalEngine.java:910)
org.elasticsearch.index.shard.IndexShard.refresh(IndexShard.java:632)
org.elasticsearch.index.IndexService.maybeRefreshEngine(IndexService.java:690)
org.elasticsearch.index.IndexService.access$400(IndexService.java:92)
org.elasticsearch.index.IndexService$AsyncRefreshTask.runInternal(IndexService.java:832)
org.elasticsearch.index.IndexService$BaseAsyncTask.run(IndexService.java:743)
org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[PRODUCTION_CRITQUE_2][flush][T#4334]'
10/10 snapshots sharing following 16 elements
org.apache.lucene.index.IndexWriter.setLiveCommitData(IndexWriter.java:3116)
org.elasticsearch.index.engine.InternalEngine.commitIndexWriter(InternalEngine.java:1562)
org.elasticsearch.index.engine.InternalEngine.flush(InternalEngine.java:1063)
org.elasticsearch.index.shard.IndexShard.flush(IndexShard.java:780)
org.elasticsearch.indices.flush.SyncedFlushService.performPreSyncedFlush(SyncedFlushService.java:414)
org.elasticsearch.indices.flush.SyncedFlushService.access$1000(SyncedFlushService.java:70)
org.elasticsearch.indices.flush.SyncedFlushService$PreSyncedFlushTransportHandler.messageReceived(SyncedFlushService.java:696)
org.elasticsearch.indices.flush.SyncedFlushService$PreSyncedFlushTransportHandler.messageReceived(SyncedFlushService.java:692)
org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33)
org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69)
org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1544)
org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638)
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
It seems that a file delete is blocking (deadlocking?) Elasticsearch threads. I am not deleting any index in production, so I guess it's an internal Elasticsearch/Lucene process: when the replica node tries to synchronize with the master node, it should delete Lucene segments that no longer exist, or something like that.
I tried talking to the Elastic development team, but in their opinion being stuck on a file delete is an environment issue more than an Elastic issue, which is understandable, actually.
I stopped the antivirus and the backup process on these servers, but I am still getting these locks at least about once a month.
How can Java's internal DeleteFile call hang without returning any error? It just hangs forever, and the server did not seem to be under pressure at the time.
If anyone has encountered this kind of issue, or has an idea to help me investigate, it would be awesome.
Thanks!
Looks like others are experiencing this:
https://discuss.elastic.co/t/massive-queue-in-refresh-thread-pool-on-a-single-node-causing-timeouts/280732/4
Have you looked at the Windows Event Viewer Application Logs to see if any windows process is giving any insight?
Looks like it is trying to remove old index files.
org.elasticsearch.index.shard.IndexShard.refresh(IndexShard.java:632)
We are using Spark 2.0.2, managed by a DC/OS system, that fetches data from a Kafka 1.0.0 messaging service and writes Parquet files to an HDFS system.
Everything was working fine, but when we increased the number of topics in Kafka, our Spark executors began to crash constantly with OOM errors:
java.lang.OutOfMemoryError: Java heap space
at org.apache.parquet.column.values.dictionary.IntList.initSlab(IntList.java:90)
at org.apache.parquet.column.values.dictionary.IntList.<init>(IntList.java:86)
at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter.<init>(DictionaryValuesWriter.java:93)
at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter$PlainDoubleDictionaryValuesWriter.<init>(DictionaryValuesWriter.java:422)
at org.apache.parquet.column.ParquetProperties.dictionaryWriter(ParquetProperties.java:139)
at org.apache.parquet.column.ParquetProperties.dictWriterWithFallBack(ParquetProperties.java:178)
at org.apache.parquet.column.ParquetProperties.getValuesWriter(ParquetProperties.java:203)
at org.apache.parquet.column.impl.ColumnWriterV1.<init>(ColumnWriterV1.java:83)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.newMemColumn(ColumnWriteStoreV1.java:68)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.getColumnWriter(ColumnWriteStoreV1.java:56)
at org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.<init>(MessageColumnIO.java:183)
at org.apache.parquet.io.MessageColumnIO.getRecordWriter(MessageColumnIO.java:375)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.initStore(InternalParquetRecordWriter.java:109)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.<init>(InternalParquetRecordWriter.java:99)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:217)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:175)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:146)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:113)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:87)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:62)
at org.apache.parquet.avro.AvroParquetWriter.<init>(AvroParquetWriter.java:47)
at npm.parquet.ParquetMeasurementWriter.ensureOpenWriter(ParquetMeasurementWriter.java:91)
at npm.parquet.ParquetMeasurementWriter.write(ParquetMeasurementWriter.java:75)
at npm.ingestion.spark.StagingArea$Measurements.store(StagingArea.java:100)
at npm.ingestion.spark.StagingArea$StagingAreaStorage.store(StagingArea.java:80)
at npm.ingestion.spark.StagingArea.add(StagingArea.java:40)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.sendToStagingArea(Kafka2HDFSPM.java:207)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.consumeRecords(Kafka2HDFSPM.java:193)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.process(Kafka2HDFSPM.java:169)
at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:133)
at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:111)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
18/03/20 18:41:13 ERROR [Executor task launch worker-0] SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-0,5,main]
java.lang.OutOfMemoryError: Java heap space
at org.apache.parquet.column.values.dictionary.IntList.initSlab(IntList.java:90)
at org.apache.parquet.column.values.dictionary.IntList.<init>(IntList.java:86)
at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter.<init>(DictionaryValuesWriter.java:93)
at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter$PlainDoubleDictionaryValuesWriter.<init>(DictionaryValuesWriter.java:422)
at org.apache.parquet.column.ParquetProperties.dictionaryWriter(ParquetProperties.java:139)
at org.apache.parquet.column.ParquetProperties.dictWriterWithFallBack(ParquetProperties.java:178)
at org.apache.parquet.column.ParquetProperties.getValuesWriter(ParquetProperties.java:203)
at org.apache.parquet.column.impl.ColumnWriterV1.<init>(ColumnWriterV1.java:83)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.newMemColumn(ColumnWriteStoreV1.java:68)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.getColumnWriter(ColumnWriteStoreV1.java:56)
at org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.<init>(MessageColumnIO.java:183)
at org.apache.parquet.io.MessageColumnIO.getRecordWriter(MessageColumnIO.java:375)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.initStore(InternalParquetRecordWriter.java:109)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.<init>(InternalParquetRecordWriter.java:99)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:217)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:175)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:146)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:113)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:87)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:62)
at org.apache.parquet.avro.AvroParquetWriter.<init>(AvroParquetWriter.java:47)
at npm.parquet.ParquetMeasurementWriter.ensureOpenWriter(ParquetMeasurementWriter.java:91)
at npm.parquet.ParquetMeasurementWriter.write(ParquetMeasurementWriter.java:75)
at npm.ingestion.spark.StagingArea$Measurements.store(StagingArea.java:100)
at npm.ingestion.spark.StagingArea$StagingAreaStorage.store(StagingArea.java:80)
at npm.ingestion.spark.StagingArea.add(StagingArea.java:40)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.sendToStagingArea(Kafka2HDFSPM.java:207)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.consumeRecords(Kafka2HDFSPM.java:193)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.process(Kafka2HDFSPM.java:169)
at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:133)
at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:111)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
We tried increasing the executors' available memory and reviewed the code, but we couldn't find anything wrong.
Another piece of information: we are using RDDs in Spark.
Has anyone encountered a similar problem that has already been solved?
What is the heap configuration for the executor? By default, Java autotunes its heap according to the machine's memory; you need to change it to fit in your container with the -Xmx setting.
See this article about running Java in a container:
https://github.com/fabianenardon/docker-java-issues-demo/tree/master/memory-sample
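A hedged illustration of the point above, using standard Spark configuration keys; the 4g figure and the lowered memory fraction are placeholders to be sized against the DC/OS container limits, not recommendations:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class ExecutorMemorySketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("kafka-to-parquet-sketch")
                // spark.executor.memory becomes the executor JVM's -Xmx.
                .set("spark.executor.memory", "4g")
                // Lower the unified-memory fraction so user code (here, the
                // Parquet writers' dictionary/column buffers) keeps more heap.
                .set("spark.memory.fraction", "0.4");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... build the Kafka stream / RDD pipeline here ...
        sc.stop();
    }
}

The same values can also be passed on spark-submit with --executor-memory and --conf.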
Please, before marking this as a duplicate, read this: I have gone through all the answers provided for this error and nothing helped in my scenario.
I am doing a server migration, and the same thing that works well on the 32-bit server runs out of memory on the 64-bit one.
I have a Windows service that internally points to an .exe that spawns the Java process. I have made all the possible memory adjustments in the config file of my .exe, shown below.
I am not sure what different behavior causes this out-of-memory error on the 64-bit server (my Java version is 1.8.xx).
#Java Additional Parameters
wrapper.java.additional.1=-XX:+UseConcMarkSweepGC
wrapper.java.additional.2=-XX:+UseParNewGC
wrapper.java.additional.3=-XX:ParallelGCThreads=8
wrapper.java.additional.4=-verbose:gc
# wrapper.java.additional.!!! should be sequence !!!=-Xloggc:D:\apps\Logs\gc.log
# wrapper.java.additional.!!! should be sequence !!!=-XX:+PrintGCDetails
# wrapper.java.additional.!!! should be sequence !!!=-XX:+PrintGCTimeStamps
wrapper.java.additional.5=-XX:MaxDirectMemorySize=128m
wrapper.java.additional.6=-XX:+HeapDumpOnOutOfMemoryError
wrapper.java.additional.7=-Dcom.sun.management.jmxremote.port=34001
wrapper.java.additional.8=-Dcom.sun.management.jmxremote.ssl=false
wrapper.java.additional.9=-Dcom.sun.management.jmxremote.authenticate=false
wrapper.java.additional.10=-XX:CMSInitiatingOccupancyFraction=55
wrapper.java.additional.11=-XX:NewSize=474m
wrapper.java.additional.12=-XX:MaxNewSize=474m
#wrapper.java.additional.13=-XX:PermSize=128m
#wrapper.java.additional.14=-XX:MaxPermSize=128m
wrapper.java.additional.15=-Xss128k
wrapper.java.additional.16=-XX:+CMSIncrementalMode
wrapper.java.additional.17=-XX:+UseCompressedOops
# Initial Java Heap Size (in MB)
wrapper.java.initmemory=1638
# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=1638
Still I end up with:
[severe 2016/10/24 06:27:46.192 java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Unknown Source)
at com.gemstone.gemfire.internal.SocketCreator.asyncClose(SocketCreator.java:688)
The reading I have done on the concept is here:
Error reading
I am not much into Java things, but I have tried everything from my side. Any help on this will be highly appreciated; I spent a huge amount of time on this but was not able to reach any conclusion.
***********Update***************
So basically I figured out that this problem was caused by excessive thread creation from GemFire, which exceeds the threshold of ~800 threads for the GemFire Java process.
The JConsole tool helped to count the threads: I could see around 200-300 threads from different pools being created with no apparent purpose, apart from the usual threads, and they have a description like this:
Name: pool-9-thread-1
State: WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject#163b285
Total blocked: 0 Total waited: 2
Stack trace:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(Unknown Source)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(Unknown Source)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(Unknown Source)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(Unknown Source)
java.util.concurrent.ThreadPoolExecutor.getTask(Unknown Source)
java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
java.lang.Thread.run(Unknown Source)
I'll add more details if I can find out more about this!
*******Update 2 : ************
I managed to see all the threads created by GemFire using JConsole.
This number keeps increasing, and after a certain point in time I see the OOM issue. Is there any way I can stop this unnecessary thread creation and memory consumption?
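To watch that growth without keeping JConsole attached, the same numbers can be read through the standard ThreadMXBean. A small sketch (run it inside the GemFire JVM, or adapt it to a remote JMX connection against the port already opened in the wrapper config):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.HashMap;
import java.util.Map;

public class ThreadGrowthProbe {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // Group live threads by name with trailing digits collapsed, so
        // "pool-9-thread-1", "pool-9-thread-2", ... count into one bucket.
        Map<String, Integer> byPool = new HashMap<>();
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            String key = info.getThreadName().replaceAll("\\d+$", "N");
            byPool.merge(key, 1, Integer::sum);
        }
        System.out.println("live threads: " + mx.getThreadCount());
        byPool.forEach((name, count) -> System.out.println(count + "\t" + name));
    }
}

Sampling this every few minutes and diffing the buckets should show which pool is the one that never stops growing.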
I'm running a very simple Pig script (Pig 0.14, Hadoop 2.4):
customers = load '/some/hdfs/path' using SomeUDFLoader();
customers2 = foreach (group customers by customer_id) generate FLATTEN(group) as customer_id, MIN(dw_customer.date) as date;
store customers2 into '/hdfs/output' using PigStorage(',');
This launches a map-reduce job with ~60000 mappers and 999 reducers.
After the map-reduce job has finished its work (I know because the output has been written and the job manager says the job succeeded), there is a long pause and I get the following error in the Pig output:
2015-11-24 11:45:29,394 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at *********
2015-11-24 11:45:29,403 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2015-11-24 11:46:03,533 [Service Thread] INFO org.apache.pig.impl.util.SpillableMemoryManager - first memory handler call- Usage threshold init = 698875904(682496K) used = 520031456(507843K) committed = 698875904(682496K) max = 698875904(682496K)
2015-11-24 11:46:04,473 [Service Thread] INFO org.apache.pig.impl.util.SpillableMemoryManager - first memory handler call - Collection threshold init = 698875904(682496K) used = 575405920(561919K) committed = 698875904(682496K) max = 698875904(682496K)
2015-11-24 11:47:36,255 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2998: Unhandled internal error. GC overhead limit exceeded
The stack trace looks something like this (each time, the exception is in another function):
Pig Stack Trace
---------------
ERROR 2998: Unhandled internal error. Java heap space
java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.mapreduce.v2.api.records.impl.pb.CounterGroupPBImpl.initCounters(CounterGroupPBImpl.java:136)
at org.apache.hadoop.mapreduce.v2.api.records.impl.pb.CounterGroupPBImpl.getAllCounters(CounterGroupPBImpl.java:121)
at org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:240)
at org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:367)
at org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:388)
at org.apache.hadoop.mapred.ClientServiceDelegate.getTaskReports(ClientServiceDelegate.java:448)
at org.apache.hadoop.mapred.YARNRunner.getTaskReports(YARNRunner.java:551)
at org.apache.hadoop.mapreduce.Job$3.run(Job.java:533)
at org.apache.hadoop.mapreduce.Job$3.run(Job.java:531)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
at org.apache.hadoop.mapreduce.Job.getTaskReports(Job.java:531)
at org.apache.pig.backend.hadoop.executionengine.shims.HadoopShims.getTaskReports(HadoopShims.java:235)
at org.apache.pig.tools.pigstats.mapreduce.MRJobStats.addMapReduceStatistics(MRJobStats.java:352)
at org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil.addSuccessJobStats(MRPigStatsUtil.java:233)
at org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil.accumulateStats(MRPigStatsUtil.java:165)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:360)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:280)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1390)
...
The SET statements in my Pig script:
SET mapreduce.map.java.opts '-server -Xmx6144m -Djava.net.preferIPv4Stack=true -Duser.timezone=UTC'
SET mapreduce.reduce.java.opts '-server -Xmx6144m -Djava.net.preferIPv4Stack=true -Duser.timezone=UTC'
SET mapreduce.map.memory.mb '8192'
SET mapreduce.reduce.memory.mb '8192'
SET mapreduce.map.speculative 'true'
SET mapreduce.reduce.speculative 'true'
SET mapreduce.jobtracker.maxtasks.perjob '100000'
SET mapreduce.job.split.metainfo.maxsize '-1'
Why is this happening, and how can I fix it?
Thanks in advance for any help.
It looks like this happens in your ApplicationMaster, since you mention that the error is returned after all the mappers/reducers have finished executing. Try increasing the ApplicationMaster's memory.
In a YARN cluster, you can use the following two properties to control the amount of memory available to your ApplicationMaster:
yarn.app.mapreduce.am.command-opts
yarn.app.mapreduce.am.resource.mb
Again, you could set -Xmx (in the former) to 75% of the resource.mb value.
Details regarding the parameters can be found here.
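For example, in the same SET style as the script above (hedged values that follow the 75% rule of thumb, not tuned recommendations):

SET yarn.app.mapreduce.am.resource.mb '4096'
SET yarn.app.mapreduce.am.command-opts '-Xmx3072m'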
Simple job: kafka->flatmap->reduce->map.
The job runs fine with the default value of taskmanager.heap.mb (512 MB). According to the docs, this value should be as large as possible. Since the machine in question has 96 GB of RAM, I set it to 75000 (an arbitrary value).
Starting the job then gives this error:
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$5.apply$mcV$sp(JobManager.scala:563)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$5.apply(JobManager.scala:509)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$5.apply(JobManager.scala:509)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Not enough free slots available to run the job. You can decrease the operator parallelism or increase the number of slots per TaskManager in the configuration. Task to schedule: < Attempt #0 (Source: Custom Source (1/1)) # (unassigned) - [SCHEDULED] > with groupID < 95b239d1777b2baf728645df9a1c4232 > in sharing group < SlotSharingGroup [772c9ff1cf0b6cb3a361e3352f75fcee, d4f856f13654f424d7c49d0f00f6ecca, 81bb8c4310faefe32f97ebd6baa4c04f, 95b239d1777b2baf728645df9a1c4232] >. Resources available to scheduler: Number of instances=0, total number of slots=0, available slots=0
at org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleTask(Scheduler.java:255)
at org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleImmediately(Scheduler.java:131)
at org.apache.flink.runtime.executiongraph.Execution.scheduleForExecution(Execution.java:298)
at org.apache.flink.runtime.executiongraph.ExecutionVertex.scheduleForExecution(ExecutionVertex.java:458)
at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.scheduleAll(ExecutionJobVertex.java:322)
at org.apache.flink.runtime.executiongraph.ExecutionGraph.scheduleForExecution(ExecutionGraph.java:686)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:982)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:962)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:962)
... 8 more
Restoring the default value (512) for this parameter makes the job run fine. At 5000 it works; at 10000 it doesn't.
What did I miss?
Edit: This is more hit-and-miss than I thought. Setting the value to 50000 and resubmitting succeeds. In every test, the cluster is stopped and restarted.
What you are probably experiencing is submitting the job before the workers have registered with the master.
A 5 GB JVM heap is initialized quickly, so the TaskManager can register almost immediately. With a 70 GB heap, the JVM takes a while to initialize and boot, so the worker registers later, and the job cannot be executed when you submit it because no workers are available yet.
That is also why it works once you re-submit the job.
JVMs initialize faster if you start the cluster in "streaming" mode (standalone via start-cluster-streaming.sh), because then at least Flink's internal memory is initialized lazily.