NetBeans installer closes on a white screen after "Configuring the installer..." - java

I'm having trouble installing Apache NetBeans 11.2 (and, in general, any version of NetBeans) on my PC: when I run the installer, the error described in the title occurs.
There are no lock files.
I have JDK 13.0.1 installed (and Java 8u231).
I'm running the installer as administrator.
(Update) Since there is a 30,000-character limit, I've replaced the log with NetBeans\11.2\var\log\messages.txt and removed the "Turning on modules:" entries.
Any help is greatly appreciated.
-------------------------------------------------------------------------------
>Log Session: Monday, December 2, 2019 at 12:11:03 PM Central European Standard Time
>System Info:
Product Version = Apache NetBeans IDE 11.2
Operating System = Windows 10 version 10.0 running on amd64
Java; VM; Vendor = 13.0.1; Java HotSpot(TM) 64-Bit Server VM 13.0.1+9; Oracle Corporation
Runtime = Java(TM) SE Runtime Environment 13.0.1+9
Java Home = C:\Program Files\Java\jdk-13.0.1
System Locale; Encoding = hu_HU (nb); Cp1250
Home Directory = C:\Users\banhi
Current Directory = C:\netbeans\bin
User Directory = C:\Users\banhi\AppData\Roaming\NetBeans\11.2
Cache Directory = C:\Users\banhi\AppData\Local\NetBeans\Cache\11.2
Installation = C:\netbeans\nb
C:\netbeans\ergonomics
C:\netbeans\ide
C:\netbeans\extide
C:\netbeans\java
C:\netbeans\apisupport
C:\netbeans\webcommon
C:\netbeans\websvccommon
C:\netbeans\enterprise
C:\netbeans\profiler
C:\netbeans\php
C:\netbeans\harness
C:\netbeans\groovy
C:\netbeans\javafx
C:\netbeans\platform
Boot & Ext. Classpath =
Application Classpath = C:\netbeans\platform\lib\boot.jar;C:\netbeans\platform\lib\org-openide-modules.jar;C:\netbeans\platform\lib\org-openide-util-lookup.jar;C:\netbeans\platform\lib\org-openide-util-ui.jar;C:\netbeans\platform\lib\org-openide-util.jar
Startup Classpath = C:\netbeans\platform\core\asm-all-5.0.1.jar;C:\netbeans\platform\core\core-base.jar;C:\netbeans\platform\core\core.jar;C:\netbeans\platform\core\org-netbeans-libs-asm.jar;C:\netbeans\platform\core\org-openide-filesystems-compat8.jar;C:\netbeans\platform\core\org-openide-filesystems.jar;C:\netbeans\nb\core\org-netbeans-upgrader.jar;C:\netbeans\nb\core\locale\core_nb.jar
-------------------------------------------------------------------------------
WARNING [org.netbeans.core.startup.NbEvents]: The extension C:\netbeans\ide\modules\ext\jcodings-1.0.18.jar may be multiply loaded by modules: [C:\netbeans\ide\modules\org-netbeans-libs-bytelist.jar, C:\netbeans\ide\modules\org-netbeans-modules-textmate-lexer.jar]; see: http://www.netbeans.org/download/dev/javadoc/org-openide-modules/org/openide/modules/doc-files/classpath.html#class-path
INFO [org.netbeans.modules.netbinox]: Install area set to file:/C:/netbeans/
WARNING [org.netbeans.core.modules]: the modules [org.netbeans.modules.java.editor.lib, org.netbeans.modules.xml.text] use org.netbeans.modules.editor.deprecated.pre65formatting which is deprecated.
WARNING [org.netbeans.core.modules]: the modules [org.netbeans.modules.ide.kit, org.netbeans.modules.xml.text] use org.netbeans.modules.editor.structure which is deprecated.
WARNING [org.netbeans.core.modules]: the modules [org.netbeans.modules.ant.hints, org.netbeans.modules.java.hints, org.netbeans.modules.jshell.support, org.netbeans.modules.maven.hints] use org.netbeans.modules.java.hints.legacy.spi which is deprecated: Use Java Hints SPI (org.netbeans.spi.java.hints) instead.
WARNING [org.openide.filesystems.Ordering]: Found same position 127 for both Services/MIMEResolver/org-netbeans-modules-javascript-nodejs-file-PackageLockJsonDataObject-Registration.xml and Services/MIMEResolver/org-netbeans-modules-javascript-nodejs-file-NpmDebugLogDataObject-Registration.xml
WARNING [org.openide.filesystems.Ordering]: Found same position 127 for both Services/MIMEResolver/org-netbeans-modules-javascript-nodejs-file-PackageLockJsonDataObject-Registration.xml and Services/MIMEResolver/org-netbeans-modules-javascript-nodejs-file-PackageJsonDataObject-Registration.xml
WARNING [org.openide.filesystems.Ordering]: Found same position 127 for both Services/MIMEResolver/org-netbeans-modules-javascript-nodejs-file-NpmDebugLogDataObject-Registration.xml and Services/MIMEResolver/org-netbeans-modules-javascript-nodejs-file-PackageJsonDataObject-Registration.xml
WARNING [org.openide.filesystems.Ordering]: Found same position 127 for both Services/MIMEResolver/org-netbeans-modules-javascript-nodejs-file-PackageLockJsonDataObject-Registration.xml and Services/MIMEResolver/org-netbeans-modules-javascript-nodejs-file-NpmDebugLogDataObject-Registration.xml
WARNING [org.openide.filesystems.Ordering]: Found same position 127 for both Services/MIMEResolver/org-netbeans-modules-javascript-nodejs-file-PackageLockJsonDataObject-Registration.xml and Services/MIMEResolver/org-netbeans-modules-javascript-nodejs-file-PackageJsonDataObject-Registration.xml
WARNING [org.openide.filesystems.Ordering]: Found same position 127 for both Services/MIMEResolver/org-netbeans-modules-javascript-nodejs-file-NpmDebugLogDataObject-Registration.xml and Services/MIMEResolver/org-netbeans-modules-javascript-nodejs-file-PackageJsonDataObject-Registration.xml
WARNING [null]: Last record repeated again.
WARNING [org.openide.filesystems.Ordering]: Found same position 127 for both Services/MIMEResolver/org-netbeans-modules-javascript-nodejs-file-NpmDebugLogDataObject-Registration.xml and Services/MIMEResolver/org-netbeans-modules-javascript-nodejs-file-PackageLockJsonDataObject-Registration.xml
WARNING [org.openide.filesystems.Ordering]: Found same position 127 for both Services/MIMEResolver/org-netbeans-modules-javascript-nodejs-file-PackageJsonDataObject-Registration.xml and Services/MIMEResolver/org-netbeans-modules-javascript-nodejs-file-PackageLockJsonDataObject-Registration.xml
WARNING [org.netbeans.modules.java.j2seplatform.libraries.J2SELibraryTypeProvider]: Can not resolve URL: nbinst://org.netbeans.modules.j2ee.eclipselink/modules/ext/docs/javax.persistence-2.1.0-doc.zip
WARNING [null]: Last record repeated again.
WARNING [org.netbeans.modules.java.j2seplatform.libraries.J2SELibraryTypeProvider]: Can not resolve URL: nbinst://org.netbeans.modules.j2ee.eclipselink/modules/ext/eclipselink/eclipselink.jar
WARNING [null]: Last record repeated again.
WARNING [org.netbeans.modules.java.j2seplatform.libraries.J2SELibraryTypeProvider]: Can not resolve URL: nbinst://org.netbeans.libs.javacapi/modules/ext/nb-javac-api.jar
WARNING [null]: Last record repeated again.
WARNING [org.netbeans.modules.java.j2seplatform.libraries.J2SELibraryTypeProvider]: Can not resolve URL: nbinst://org.netbeans.libs.jaxb/modules/ext/jaxb/jaxb-impl.jar
WARNING [org.netbeans.modules.java.j2seplatform.libraries.J2SELibraryTypeProvider]: Can not resolve URL: nbinst://org.netbeans.libs.jaxb/modules/ext/jaxb/jaxb-xjc.jar
WARNING [org.netbeans.modules.java.j2seplatform.libraries.J2SELibraryTypeProvider]: Can not resolve URL: nbinst://org.netbeans.modules.xml.jaxb.api/modules/ext/jaxb/api/jsr173_1.0_api.jar
WARNING [org.netbeans.modules.java.j2seplatform.libraries.J2SELibraryTypeProvider]: Can not resolve URL: nbinst://org.netbeans.libs.jaxb/modules/ext/jaxb/jaxb-impl.jar
WARNING [org.netbeans.modules.java.j2seplatform.libraries.J2SELibraryTypeProvider]: Can not resolve URL: nbinst://org.netbeans.libs.jaxb/modules/ext/jaxb/jaxb-xjc.jar
WARNING [org.netbeans.modules.java.j2seplatform.libraries.J2SELibraryTypeProvider]: Can not resolve URL: nbinst://org.netbeans.modules.xml.jaxb.api/modules/ext/jaxb/api/jsr173_1.0_api.jar
WARNING [org.netbeans.modules.java.j2seplatform.libraries.J2SELibraryTypeProvider]: Can not resolve URL: nbinst://org.netbeans.modules.j2ee.eclipselink/modules/ext/docs/javax.persistence-2.1.0-doc.zip
WARNING [null]: Last record repeated again.
//Had to remove turning on modules... due to 30000 character limit
INFO [org.netbeans.core.netigso.Netigso]: bundle org.eclipse.mylyn.wikitext.markdown.core#2.6.0.v20150901-2143 resolved
INFO [org.netbeans.core.netigso.Netigso]: bundle org.eclipse.mylyn.wikitext.textile.core#2.6.0.v20150901-2143 resolved
INFO [org.netbeans.core.netigso.Netigso]: bundle org.eclipse.jgit.java7#3.6.2.201501210735-r resolved
INFO [org.netbeans.core.netigso.Netigso]: bundle org.eclipse.mylyn.wikitext.confluence.core#2.6.0.v20150901-2143 resolved
INFO [org.netbeans.core.netigso.Netigso]: bundle net.java.html.sound#1.6.1 resolved
INFO [org.netbeans.core.netigso.Netigso]: bundle net.java.html.boot.script#1.6.1 resolved
INFO [org.netbeans.core.netigso.Netigso]: bundle net.java.html.geo#1.6.1 resolved
INFO [org.netbeans.core.netigso.Netigso]: bundle org.eclipse.osgi#3.9.1.v20140110-1610 started
INFO [org.netbeans.core.network.proxy.NetworkProxyReloader]: System network proxy resolver: Windows
INFO [org.netbeans.core.network.proxy.windows.WindowsNetworkProxy]: Windows system proxy resolver: auto detect
INFO [org.netbeans.core.network.proxy.NetworkProxyReloader]: System network proxy reloading succeeded.
INFO [org.netbeans.core.network.proxy.NetworkProxyReloader]: System network proxy - mode: direct
INFO [org.netbeans.core.network.proxy.NetworkProxyReloader]: System network proxy: fell to default (correct if direct mode went before)
WARNING [org.netbeans.TopSecurityManager]: use of system property netbeans.home has been obsoleted in favor of InstalledFileLocator/Places at org.netbeans.modules.java.j2seplatform.platformdefinition.Util.removeNBArtifacts(Util.java:337)
Diagnostic information
Input arguments:
-Dnetbeans.importclass=org.netbeans.upgrade.AutoUpgrade
-XX:+UseStringDeduplication
-Djdk.lang.Process.allowAmbiguousCommands=true
-Xss2m
-Djdk.gtk.version=2.2
-Dapple.laf.useScreenMenuBar=true
-Dapple.awt.graphics.UseQuartz=true
-Dsun.java2d.noddraw=true
-Dsun.java2d.dpiaware=true
-Dsun.zip.disableMemoryMapping=true
-Dplugin.manager.check.updates=false
-Dnetbeans.extbrowser.manual_chrome_plugin_install=yes
--add-opens=java.base/java.net=ALL-UNNAMED
--add-opens=java.base/java.lang.ref=ALL-UNNAMED
--add-opens=java.base/java.lang=ALL-UNNAMED
--add-opens=java.base/java.security=ALL-UNNAMED
--add-opens=java.base/java.util=ALL-UNNAMED
--add-opens=java.desktop/javax.swing.plaf.basic=ALL-UNNAMED
--add-opens=java.desktop/javax.swing.text=ALL-UNNAMED
--add-opens=java.desktop/javax.swing=ALL-UNNAMED
--add-opens=java.desktop/java.awt=ALL-UNNAMED
--add-opens=java.desktop/java.awt.event=ALL-UNNAMED
--add-opens=java.prefs/java.util.prefs=ALL-UNNAMED
--add-opens=jdk.jshell/jdk.jshell=ALL-UNNAMED
--add-modules=jdk.jshell
--add-exports=java.desktop/sun.awt=ALL-UNNAMED
--add-exports=java.desktop/java.awt.peer=ALL-UNNAMED
--add-exports=java.desktop/com.sun.beans.editors=ALL-UNNAMED
--add-exports=java.desktop/sun.swing=ALL-UNNAMED
--add-exports=java.desktop/sun.awt.im=ALL-UNNAMED
--add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED
--add-exports=java.management/sun.management=ALL-UNNAMED
--add-exports=java.base/sun.reflect.annotation=ALL-UNNAMED
--add-exports=jdk.javadoc/com.sun.tools.javadoc.main=ALL-UNNAMED
-XX:+IgnoreUnrecognizedVMOptions
-Djdk.home=C:\Program Files\Java\jdk-13.0.1
-Dnetbeans.home=C:\netbeans\platform
-Dnetbeans.user=C:\Users\banhi\AppData\Roaming\NetBeans\11.2
-Dnetbeans.default_userdir_root=C:\Users\banhi\AppData\Roaming\NetBeans
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=C:\Users\banhi\AppData\Roaming\NetBeans\11.2\var\log\heapdump.hprof
-Dsun.awt.keepWorkingSetOnMinimize=true
-Dnetbeans.dirs=C:\netbeans\nb;C:\netbeans\ergonomics;C:\netbeans\ide;C:\netbeans\extide;C:\netbeans\java;C:\netbeans\apisupport;C:\netbeans\webcommon;C:\netbeans\websvccommon;C:\netbeans\enterprise;C:\netbeans\mobility;C:\netbeans\profiler;C:\netbeans\python;C:\netbeans\php;C:\netbeans\identity;C:\netbeans\harness;C:\netbeans\cnd;C:\netbeans\cndext;C:\netbeans\dlight;C:\netbeans\groovy;C:\netbeans\extra;C:\netbeans\javacard;C:\netbeans\javafx
exit
Compiler: HotSpot 64-Bit Tiered Compilers
Heap memory usage: initial 256,0MB maximum 4082,0MB
Non heap memory usage: initial 7,3MB maximum -1b
Garbage collector: G1 Young Generation (Collections=21 Total time spent=0s)
Garbage collector: G1 Old Generation (Collections=0 Total time spent=0s)
Classes: loaded=11437 total loaded=11442 unloaded 5
INFO [org.netbeans.core.ui.warmup.DiagnosticTask]: Total memory 17 114 759 168
INFO [org.netbeans.modules.parsing.impl.indexing.RepositoryUpdater]: Resolving dependencies took: 16 ms
INFO [org.netbeans.modules.parsing.impl.indexing.RepositoryUpdater]: Complete indexing of 32 binary roots took: 334 ms
INFO [org.netbeans.modules.parsing.impl.indexing.RepositoryUpdater]: Indexing of: C:\Users\banhi\Documents\NetBeansProjects\NASAInSight\src took: 280 ms (New or modified files: 0, Deleted files: 0) [Adding listeners took: 1 ms]
INFO [org.netbeans.modules.parsing.impl.indexing.RepositoryUpdater]: Indexing of: C:\Users\banhi\Documents\NetBeansProjects\test\src took: 46 ms (New or modified files: 0, Deleted files: 0) [Adding listeners took: 0 ms]
INFO [org.netbeans.modules.parsing.impl.indexing.RepositoryUpdater]: Indexing of: C:\Users\banhi\Documents\NetBeansProjects\NASAInSight\test took: 0 ms (New or modified files: 0, Deleted files: 0) [Adding listeners took: 0 ms]
INFO [org.netbeans.modules.parsing.impl.indexing.RepositoryUpdater]: Indexing of: C:\Users\banhi\Documents\NetBeansProjects\test\test took: 0 ms (New or modified files: 0, Deleted files: 0) [Adding listeners took: 0 ms]
INFO [org.netbeans.modules.parsing.impl.indexing.RepositoryUpdater]: Complete indexing of 4 source roots took: 326 ms (New or modified files: 0, Deleted files: 0) [Adding listeners took: 1 ms]
WARNING [org.netbeans.TopSecurityManager]: use of system property netbeans.user has been obsoleted in favor of InstalledFileLocator/Places at org.netbeans.modules.java.api.common.project.ActionProviderSupport.verifyUserPropertiesFile(ActionProviderSupport.java:927)
WARNING [org.netbeans.modules.options.keymap.LayersBridge]: Invalid shortcut: org.openide.loaders.XMLDataObject#66d7a096[MultiFileObject#286f4280[Actions/Help/master-help.xml]]
WARNING [org.netbeans.modules.options.keymap.LayersBridge]: Invalid shortcut: org.openide.loaders.BrokenDataShadow#b4648e[MultiFileObject#2acdee67[Keymaps/NetBeans/D-BACK_QUOTE.shadow]]
WARNING [null]: Last record repeated again.
INFO [org.netbeans.modules.parsing.impl.indexing.RepositoryUpdater]: Resolving dependencies took: 15 ms
INFO [org.netbeans.modules.parsing.impl.indexing.RepositoryUpdater]: Complete indexing of 0 binary roots took: 1 ms
INFO [org.netbeans.modules.parsing.impl.indexing.RepositoryUpdater]: Indexing of: C:\netbeans\webcommon\jsstubs\reststubs.zip took: 98 ms (New or modified files: 0, Deleted files: 0) [Adding listeners took: 0 ms]
INFO [org.netbeans.modules.parsing.impl.indexing.RepositoryUpdater]: Indexing of: C:\netbeans\webcommon\jsstubs\corestubs.zip took: 46 ms (New or modified files: 0, Deleted files: 0) [Adding listeners took: 0 ms]
INFO [org.netbeans.modules.parsing.impl.indexing.RepositoryUpdater]: Indexing of: C:\netbeans\webcommon\jsstubs\domstubs.zip took: 49 ms (New or modified files: 0, Deleted files: 0) [Adding listeners took: 0 ms]
INFO [org.netbeans.modules.parsing.impl.indexing.RepositoryUpdater]: Complete indexing of 3 source roots took: 193 ms (New or modified files: 0, Deleted files: 0) [Adding listeners took: 0 ms]
INFO [org.netbeans.modules.mercurial]: version: null
INFO [org.netbeans.modules.subversion]: Finished indexing svn cache with 0 entries. Elapsed time: 0 ms.
INFO [org.netbeans.modules.parsing.impl.indexing.RepositoryUpdater]: Resolving dependencies took: 48 ms
INFO [org.netbeans.modules.parsing.impl.indexing.RepositoryUpdater]: Complete indexing of 20 binary roots took: 2 938 ms
INFO [org.netbeans.modules.parsing.impl.indexing.RepositoryUpdater]: Indexing of: C:\Users\banhi\Documents\NetBeansProjects\NASAInSight\src took: 385 ms (New or modified files: 0, Deleted files: 0) [Adding listeners took: 0 ms]
INFO [org.netbeans.modules.parsing.impl.indexing.RepositoryUpdater]: Complete indexing of 1 source roots took: 385 ms (New or modified files: 0, Deleted files: 0) [Adding listeners took: 0 ms]
INFO [org.netbeans.core.netigso.Netigso]: bundle org.eclipse.osgi#3.9.1.v20140110-1610 256
WARNING [org.netbeans.modules.progress.spi.InternalHandle]: Cannot switch to silent mode when not running at org.netbeans.core.ui.warmup.MenuWarmUpTask$NbWindowsAdapter$1HandleBridge.run(MenuWarmUpTask.java:244)
INFO [org.netbeans.core.netigso.Netigso]: bundle org.eclipse.osgi#3.9.1.v20140110-1610 stopped
INFO [null]: Last record repeated again.
WARNING [org.netbeans.modules.progress.spi.InternalHandle]: Cannot switch to silent mode when not running at org.netbeans.core.ui.warmup.MenuWarmUpTask$NbWindowsAdapter$1HandleBridge.run(MenuWarmUpTask.java:244)
WARNING [null]: Last record repeated more than 10 times, further logs of this record are ignored until the log record changes.

Related

GATK: HaplotypeCaller IntelPairHmm only detecting 1 thread

I can't seem to get GATK to recognise the number of available threads. I am running GATK (4.2.4.1) in a conda environment as part of a Nextflow (v20.10.0) pipeline I'm writing. For whatever reason, I cannot get GATK to see that there is more than one thread. I've tried different node types, increasing and decreasing the number of CPUs available, providing Java arguments such as -XX:ActiveProcessorCount=16, and using taskset, but it always detects just 1.
Here is the command from the .command.sh:
gatk HaplotypeCaller \
    --tmp-dir tmp/ \
    -ERC GVCF \
    -R VectorBase-54_AgambiaePEST_Genome.fasta \
    -I AE12A_S24_BP.bam \
    -O AE12A_S24_BP.vcf
And here is the top of the .command.log file:
12:10:00.695 INFO HaplotypeCaller - ------------------------------------------------------------
12:10:00.695 INFO HaplotypeCaller - The Genome Analysis Toolkit (GATK) v4.2.4.1
12:10:00.695 INFO HaplotypeCaller - For support and documentation go to https://software.broadinstitute.org/gatk/
12:10:00.696 INFO HaplotypeCaller - Executing on Linux v4.18.0-193.6.3.el8_2.x86_64 amd64
12:10:00.696 INFO HaplotypeCaller - Java runtime: OpenJDK 64-Bit Server VM v11.0.13+7-b1751.21
12:10:00.696 INFO HaplotypeCaller - Start Date/Time: 9 February 2022 at 12:10:00 GMT
12:10:00.696 INFO HaplotypeCaller - ------------------------------------------------------------
12:10:00.696 INFO HaplotypeCaller - ------------------------------------------------------------
12:10:00.697 INFO HaplotypeCaller - HTSJDK Version: 2.24.1
12:10:00.697 INFO HaplotypeCaller - Picard Version: 2.25.4
12:10:00.697 INFO HaplotypeCaller - Built for Spark Version: 2.4.5
12:10:00.697 INFO HaplotypeCaller - HTSJDK Defaults.COMPRESSION_LEVEL : 2
12:10:00.697 INFO HaplotypeCaller - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false
12:10:00.697 INFO HaplotypeCaller - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true
12:10:00.697 INFO HaplotypeCaller - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false
12:10:00.697 INFO HaplotypeCaller - Deflater: IntelDeflater
12:10:00.697 INFO HaplotypeCaller - Inflater: IntelInflater
12:10:00.697 INFO HaplotypeCaller - GCS max retries/reopens: 20
12:10:00.698 INFO HaplotypeCaller - Requester pays: disabled
12:10:00.698 INFO HaplotypeCaller - Initializing engine
12:10:01.126 INFO HaplotypeCaller - Done initializing engine
12:10:01.129 INFO HaplotypeCallerEngine - Tool is in reference confidence mode and the annotation, the following changes will be made to any specified annotations: 'StrandBiasBySample' will be enabled. 'ChromosomeCounts', 'FisherStrand', 'StrandOddsRatio' and 'QualByDepth' annotations have been disabled
12:10:01.143 INFO HaplotypeCallerEngine - Standard Emitting and Calling confidence set to 0.0 for reference-model confidence output
12:10:01.143 INFO HaplotypeCallerEngine - All sites annotated with PLs forced to true for reference-model confidence output
12:10:01.162 INFO NativeLibraryLoader - Loading libgkl_utils.so from jar:file:/home/anaconda3/envs/NF_GATK/share/gatk4-4.2.4.1-0/gatk-package-4.2.4.1-local.jar!/com/intel/gkl/native/libgkl_utils.so
12:10:01.169 INFO NativeLibraryLoader - Loading libgkl_pairhmm_omp.so from jar:file:/home/anaconda3/envs/NF_GATK/share/gatk4-4.2.4.1-0/gatk-package-4.2.4.1-local.jar!/com/intel/gkl/native/libgkl_pairhmm_omp.so
12:10:01.209 INFO IntelPairHmm - Flush-to-zero (FTZ) is enabled when running PairHMM
12:10:01.210 INFO IntelPairHmm - Available threads: 1
12:10:01.210 INFO IntelPairHmm - Requested threads: 4
12:10:01.210 WARN IntelPairHmm - Using 1 available threads, but 4 were requested
12:10:01.210 INFO PairHMM - Using the OpenMP multi-threaded AVX-accelerated native PairHMM implementation
12:10:01.271 INFO ProgressMeter - Starting traversal
I found a thread on the Broad Institute website suggesting it might be the OMP library, but that is seemingly loaded, and I'm using the version they suggested updating to...
Needless to say, this is a little slow. I can always parallelise by using the -L option, but that doesn't address the fact that every step in the pipeline will be very slow.
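To illustrate, that interval-based workaround would look something like this (a sketch only; the interval name after -L is a placeholder):

# Hypothetical per-interval run: one HaplotypeCaller job per chromosome/interval,
# launched in parallel by the pipeline instead of relying on PairHMM threads.
gatk HaplotypeCaller \
    --tmp-dir tmp/ \
    -ERC GVCF \
    -L chr_interval \
    -R VectorBase-54_AgambiaePEST_Genome.fasta \
    -I AE12A_S24_BP.bam \
    -O AE12A_S24_BP.chr_interval.vcf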
Thanks in advance.
In case anyone else has the same problem, it turned out I had to configure the submission as an MPI job.
So, on the HPC I use, here is the Nextflow process:
process DNA_HCG {
    errorStrategy { sleep(Math.pow(2, task.attempt) * 600 as long); return 'retry' }
    maxRetries 3
    maxForks params.HCG_Forks
    tag { SampleID+"-"+chrom }
    executor = 'pbspro'
    clusterOptions = "-lselect=1:ncpus=${params.HCG_threads}:mem=${params.HCG_memory}gb:mpiprocs=1:ompthreads=${params.HCG_threads} -lwalltime=${params.HCG_walltime}:00:00"
    publishDir(
        path: "${params.HCDir}",
        mode: 'copy',
    )

    input:
    each chrom from chromosomes_ch
    set SampleID, path(bam), path(bai) from processed_bams
    path ref_genome
    path ref_dict
    path ref_index

    output:
    tuple chrom, path("${SampleID}_${chrom}.vcf") into HCG_ch
    path("${SampleID}_${chrom}.vcf.idx") into idx_ch

    beforeScript 'module load anaconda3/personal; source activate NF_GATK'

    script:
    """
    mkdir tmp
    n_slots=`expr ${params.GVCF_threads} / 2 - 3`
    if [ \$n_slots -le 0 ]; then n_slots=1; fi
    taskset -c 0-\${n_slots} gatk --java-options \"-Xmx${params.HCG_memory}G -XX:+UseParallelGC -XX:ParallelGCThreads=\${n_slots}\" HaplotypeCaller \\
        --tmp-dir tmp/ \\
        --pair-hmm-implementation AVX_LOGLESS_CACHING_OMP \\
        --native-pair-hmm-threads \${n_slots} \\
        -ERC GVCF \\
        -L ${chrom} \\
        -R ${ref_genome} \\
        -I ${bam} \\
        -O ${SampleID}_${chrom}.vcf ${params.GVCF_args}
    """
}
I think I solved this problem (at least for me; it worked well on SLURM). This comes from how GATK is configured for parallelizing jobs: it's based on OpenMP, so you should add something like this to the beginning of your script:
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
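For a SLURM submission script, a minimal sketch might look like the following (job name, resource numbers, and file names are placeholders; the GATK flags are the same ones used in the process above):

#!/bin/bash
#SBATCH --job-name=hc_gvcf
#SBATCH --cpus-per-task=8
#SBATCH --mem=16gb

# Let OpenMP (and therefore IntelPairHmm) see every CPU allocated to this task
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

gatk --java-options "-Xmx12G" HaplotypeCaller \
    --native-pair-hmm-threads $SLURM_CPUS_PER_TASK \
    -ERC GVCF \
    -R ref.fasta \
    -I sample.bam \
    -O sample.g.vcf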

Is there any solution regarding DL4J with CUDA support for this problem?

I am trying to execute MultiGpuLenetMnistExample.java
and I received the following error:
...
12:41:24.129 [main] INFO Test - Load data....
12:41:24.716 [main] INFO Test - Build model....
12:41:25.500 [main] INFO org.nd4j.linalg.factory.Nd4jBackend - Loaded [JCublasBackend] backend
ND4J CUDA build version: 10.1.243
CUDA device 0: [Quadro K4000]; cc: [3.0]; Total memory: [3221225472];
12:41:26.692 [main] INFO org.nd4j.nativeblas.NativeOpsHolder - Number of threads used for OpenMP: 32
12:41:26.746 [main] INFO org.nd4j.nativeblas.Nd4jBlas - Number of threads used for OpenMP BLAS: 0
12:41:26.755 [main] INFO org.nd4j.linalg.api.ops.executioner.DefaultOpExecutioner - Backend used: [CUDA]; OS: [Windows 8.1]
12:41:26.755 [main] INFO org.nd4j.linalg.api.ops.executioner.DefaultOpExecutioner - Cores: [24]; Memory: [3,5GB];
12:41:26.755 [main] INFO org.nd4j.linalg.api.ops.executioner.DefaultOpExecutioner - Blas vendor: [CUBLAS]
12:41:26.755 [main] INFO org.nd4j.linalg.jcublas.ops.executioner.CudaExecutioner - Device Name: [Quadro K4000]; CC: [3.0]; Total/free memory: [3221225472]
12:41:26.844 [main] INFO org.deeplearning4j.nn.multilayer.MultiLayerNetwork - Starting MultiLayerNetwork with WorkspaceModes set to [training: ENABLED; inference: ENABLED], cacheMode set to [NONE]
12:41:27.957 [main] DEBUG org.nd4j.jita.allocator.impl.MemoryTracker - Free memory on device_0: 2709856256
Exception in thread "main" java.lang.RuntimeException: cudaGetSymbolAddress(...) failed; Error code: [13]
at org.nd4j.linalg.jcublas.ops.executioner.CudaExecutioner.createShapeInfo(CudaExecutioner.java:2557)
at org.nd4j.linalg.api.shape.Shape.createShapeInformation(Shape.java:3282)
at org.nd4j.linalg.api.ndarray.BaseShapeInfoProvider.createShapeInformation(BaseShapeInfoProvider.java:76)
at org.nd4j.jita.constant.ProtectedCudaShapeInfoProvider.createShapeInformation(ProtectedCudaShapeInfoProvider.java:96)
at org.nd4j.jita.constant.ProtectedCudaShapeInfoProvider.createShapeInformation(ProtectedCudaShapeInfoProvider.java:77)
at org.nd4j.linalg.jcublas.CachedShapeInfoProvider.createShapeInformation(CachedShapeInfoProvider.java:44)
at org.nd4j.linalg.api.ndarray.BaseNDArray.<init>(BaseNDArray.java:211)
at org.nd4j.linalg.jcublas.JCublasNDArray.<init>(JCublasNDArray.java:383)
at org.nd4j.linalg.jcublas.JCublasNDArrayFactory.create(JCublasNDArrayFactory.java:1543)
at org.nd4j.linalg.jcublas.JCublasNDArrayFactory.create(JCublasNDArrayFactory.java:1538)
at org.nd4j.linalg.factory.Nd4j.create(Nd4j.java:4298)
at org.nd4j.linalg.factory.Nd4j.create(Nd4j.java:3986)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.init(MultiLayerNetwork.java:688)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.init(MultiLayerNetwork.java:604)
at Test.main(Test.java:80)
Process finished with exit code 1
Is there any workaround for this problem?
There are 2 options here: either build DL4J from source for your target compute capability (3.0), or wait for the next release, since we're going to bring it back for one additional release.
At this point CC 3.0 is just considered deprecated by most frameworks, AFAIK 😞

Tomcat 8: 100% CPU usage

I am having a problem with Tomcat since switching to a different package provider (Bitnami -> official Debian).
Someone seems to be hitting our servers with a request (with malicious intent):
59.111.29.6 - - [04/Feb/2017:16:17:58 +0000] "-" 400 -
where "-" is the request path, which coincides with
Feb 04, 2017 4:17:58 PM org.apache.coyote.http11.AbstractHttp11Processor process
INFO: Error parsing HTTP request header
Note: further occurrences of HTTP header parsing errors will be logged at DEBUG level.
which coincides with the increased CPU usage.
The server status shows the following:
JVM
Free memory: 355.58 MB; Total memory: 833.13 MB; Max memory: 2900.00 MB

Memory Pool    | Type            | Initial   | Total     | Maximum    | Used
Eden Space     | Heap memory     | 34.12 MB  | 229.93 MB | 800.00 MB  | 12.47 MB (1%)
Survivor Space | Heap memory     | 4.25 MB   | 28.68 MB  | 100.00 MB  | 2.22 MB (2%)
Tenured Gen    | Heap memory     | 85.37 MB  | 574.51 MB | 2000.00 MB | 462.84 MB (23%)
Code Cache     | Non-heap memory | 2.43 MB   | 7.00 MB   | 48.00 MB   | 6.89 MB (14%)
Perm Gen       | Non-heap memory | 128.00 MB | 128.00 MB | 512.00 MB  | 52.57 MB (10%)

"http-nio-8080"
Max threads: 200; Current thread count: 10; Current threads busy: 3; Kept-alive sockets count: 1
Max processing time: 301 ms; Processing time: 71.068 s; Request count: 10021; Error count: 2996; Bytes received: 0.00 MB; Bytes sent: 3.18 MB

Stage | Time             | B Sent | B Recv | Client (Forwarded) | Client (Actual) | VHost           | Request
F     | 1486364749526 ms | 0 KB   | 0 KB   | 185.40.4.169       | 185.40.4.169    | ?               | ? ? ?
F     | 1486364749526 ms | 0 KB   | 0 KB   | 185.40.4.169       | 185.40.4.169    | ?               | ? ? ?
R     | ?                | ?      | ?      | ?                  | ?               | ?               |
S     | 36 ms            | 0 KB   | 0 KB   | 106.51.39.130      | 106.51.39.130   | 104.197.119.177 | GET /manager/status?org.apache.catalina.filters.CSRF_NONCE=072F9F6884D94C5D7B30D1D34CE61BD9 HTTP/1.1
R     | ?                | ?      | ?      | ?                  | ?               | ?               |

(P: Parse and prepare request, S: Service, F: Finishing, R: Ready, K: Keepalive)
So it doesn't seem like an out of memory issue (but I could be wrong).
How can I stop someone from making the request in the first place, to avoid the issues I'm facing? My webapp running on Tomcat restricts HTTP methods to GET/POST, but how can I configure Tomcat as a whole to restrict them?
I would advise you to obtain a thread dump of your server:
Isolate the PID of the Tomcat server using:
jps -l
Obtain a thread dump using:
kill -3 PID
or jstack PID
Then check the thread dump; you should find the reason for the hogging thread.
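To tie the CPU usage to a specific thread in the dump, a common recipe looks like this (a sketch; 12345 stands for the real PID and 12389 for the hottest thread's ID):

# Show per-thread CPU usage for the Tomcat process
top -H -p 12345

# Convert the ID of the busiest thread (say 12389) to hex
printf '0x%x\n' 12389    # prints 0x3065

# Take a dump and search for that thread's nid
jstack 12345 > tomcat.tdump
grep 'nid=0x3065' tomcat.tdump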

How do I spawn custom mobs?

I'm trying to spawn an entity from my class, which extends EntityZombie.
World bukkit; // ...
CraftWorld craft = (CraftWorld)bukkit;
net.minecraft.server.v1_7_R3.World world = craft.getHandle();
Boss boss = new Boss(world);
Location spawn; // ...
boss.setLocation(spawn.getX(), spawn.getY(), spawn.getZ(), spawn.getYaw(), spawn.getPitch());
world.addEntity(boss);
public class Boss extends EntityZombie {

    public Boss(World world) {
        super(world);
    }

    // Empty right now, nothing more than the normal zombie
}
The result is:
If I run the code above while staying far away, when I come back I see nothing.
If I run it while standing close, the client crashes.
Nothing relevant appears in the console; here's the full crash report.
---- Minecraft Crash Report ----
// Who set us up the TNT?
Time: 06/03/15 14.57
Description: Ticking screen
java.lang.NullPointerException: Ticking screen
at bjb.a(SourceFile:514)
at fz.a(SourceFile:97)
at fz.a(SourceFile:15)
at ej.a(SourceFile:174)
at bcx.e(SourceFile:78)
at bao.p(SourceFile:1343)
at bao.ak(SourceFile:774)
at bao.f(SourceFile:728)
at net.minecraft.client.main.Main.main(SourceFile:148)
A detailed walkthrough of the error, its code path and all known details is as follows:
---------------------------------------------------------------------------------------
-- Head --
Stacktrace:
at bjb.a(SourceFile:514)
at fz.a(SourceFile:97)
at fz.a(SourceFile:15)
at ej.a(SourceFile:174)
at bcx.e(SourceFile:78)
-- Affected screen --
Details:
Screen name: ~~ERROR~~ NullPointerException: null
-- Affected level --
Details:
Level name: MpServer
All players: 1 total; [bjk['System_'/1, l='MpServer', x=-518,50, y=5,62, z=754,50]]
Chunk stats: MultiplayerChunkCache: 10, 10
Level seed: 0
Level generator: ID 01 - flat, ver 0. Features enabled: false
Level generator options:
Level spawn location: World: (-519,4,756), Chunk: (at 9,0,4 in -33,47; contains blocks -528,0,752 to -513,255,767), Region: (-2,1; contains chunks -64,32 to -33,63, blocks -1024,0,512 to -513,255,1023)
Level time: 45 game time, 45 day time
Level dimension: 0
Level storage version: 0x00000 - Unknown?
Level weather: Rain time: 0 (now: false), thunder time: 0 (now: false)
Level game mode: Game mode: survival (ID 0). Hardcore: false. Cheats: false
Forced entities: 1 total; [bjk['System_'/1, l='MpServer', x=-518,50, y=5,62, z=754,50]]
Retry entities: 0 total; []
Server brand: Spigot
Server type: Non-integrated multiplayer server
Stacktrace:
at bjf.a(SourceFile:289)
at bao.b(SourceFile:1972)
at bao.f(SourceFile:737)
at net.minecraft.client.main.Main.main(SourceFile:148)
-- System Details --
Details:
Minecraft Version: 1.7.10
Operating System: Windows 7 (amd64) version 6.1
Java Version: 1.8.0_25, Oracle Corporation
Java VM Version: Java HotSpot(TM) 64-Bit Server VM (mixed mode), Oracle Corporation
Memory: 136194320 bytes (129 MB) / 186380288 bytes (177 MB) up to 1060372480 bytes (1011 MB)
JVM Flags: 6 total; -XX:HeapDumpPath=MojangTricksIntelDriversForPerformance_javaw.exe_minecraft.exe.heapdump -Xmx1G -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:-UseAdaptiveSizePolicy -Xmn128M
AABB Pool Size: 0 (0 bytes; 0 MB) allocated, 0 (0 bytes; 0 MB) used
IntCache: cache: 0, tcache: 0, allocated: 0, tallocated: 0
Launched Version: 1.7.10
LWJGL: 2.9.1
OpenGL: GeForce GT 440/PCIe/SSE2 GL version 4.4.0, NVIDIA Corporation
GL Caps: Using GL 1.3 multitexturing.
Using framebuffer objects because OpenGL 3.0 is supported and separate blending is supported.
Anisotropic filtering is supported and maximum anisotropy is 16.
Shaders are available because OpenGL 2.1 is supported.
Is Modded: Probably not. Jar signature remains and client brand is untouched.
Type: Client (map_client.txt)
Resource Packs: []
Current Language: English (US)
Profiler Position: N/A (disabled)
Vec3 Pool Size: 0 (0 bytes; 0 MB) allocated, 0 (0 bytes; 0 MB) used
Anisotropic Filtering: On (2)
Perhaps I need to register the custom mob for each biome first? If so, how?
You could download the RemoteEntities API from:
HERE
You can use another custom mob API if you want, but I recommend this one.

Algebraic error when running "aggregate" function on dataset

I'm learning Hadoop/Pig/Hive by running through the tutorials on hortonworks.com.
I have indeed tried to find a link to the tutorial, but unfortunately it only ships with the ISO image that they provide; it's not actually hosted on their website.
batting = load 'Batting.csv' using PigStorage(',');
runs = FOREACH batting GENERATE $0 as playerID, $1 as year, $8 as runs;
grp_data = GROUP runs by (year);
max_runs = FOREACH grp_data GENERATE group as grp,MAX(runs.runs) as max_runs;
join_max_run = JOIN max_runs by ($0, max_runs), runs by (year,runs);
join_data = FOREACH join_max_run GENERATE $0 as year, $2 as playerID, $1 as runs;
dump join_data;
I've copied their code exactly as it was stated in the tutorial and I'm getting this output:
2013-06-14 14:34:37,969 [main] INFO org.apache.pig.Main - Apache Pig version 0.11.1.1.3.0.0-107 (rexported) compiled May 20 2013, 03:04:35
2013-06-14 14:34:37,977 [main] INFO org.apache.pig.Main - Logging error messages to: /hadoop/mapred/taskTracker/hue/jobcache/job_201306140401_0020/attempt_201306140401_0020_m_000000_0/work/pig_1371245677965.log
2013-06-14 14:34:38,412 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /usr/lib/hadoop/.pigbootup not found
2013-06-14 14:34:38,598 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://sandbox:8020
2013-06-14 14:34:38,998 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: sandbox:50300
2013-06-14 14:34:40,819 [main] WARN org.apache.pig.PigServer - Encountered Warning IMPLICIT_CAST_TO_DOUBLE 1 time(s).
2013-06-14 14:34:40,827 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: HASH_JOIN,GROUP_BY
2013-06-14 14:34:41,115 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2013-06-14 14:34:41,160 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.CombinerOptimizer - Choosing to move algebraic foreach to combiner
2013-06-14 14:34:41,201 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler$LastInputStreamingOptimizer - Rewrite: POPackage->POForEach to POJoinPackage
2013-06-14 14:34:41,213 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 3
2013-06-14 14:34:41,213 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - Merged 1 map-reduce splittees.
2013-06-14 14:34:41,214 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - Merged 1 out of total 3 MR operators.
2013-06-14 14:34:41,214 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 2
2013-06-14 14:34:41,488 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
2013-06-14 14:34:41,551 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2013-06-14 14:34:41,555 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Using reducer estimator: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator
2013-06-14 14:34:41,559 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator - BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=6398990
2013-06-14 14:34:41,559 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting Parallelism to 1
2013-06-14 14:34:44,244 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job5371236206169131677.jar
2013-06-14 14:34:49,495 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job5371236206169131677.jar created
2013-06-14 14:34:49,517 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up multi store job
2013-06-14 14:34:49,529 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
2013-06-14 14:34:49,530 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cacche
2013-06-14 14:34:49,530 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Setting key [pig.schematuple.classes] with classes to deserialize []
2013-06-14 14:34:49,755 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2013-06-14 14:34:50,144 [JobControl] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2013-06-14 14:34:50,145 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2013-06-14 14:34:50,256 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2013-06-14 14:34:50,316 [JobControl] INFO com.hadoop.compression.lzo.GPLNativeCodeLoader - Loaded native gpl library
2013-06-14 14:34:50,444 [JobControl] INFO com.hadoop.compression.lzo.LzoCodec - Successfully loaded & initialized native-lzo library [hadoop-lzo rev cf4e7cbf8ed0f0622504d008101c2729dc0c9ff3]
2013-06-14 14:34:50,665 [JobControl] WARN org.apache.hadoop.io.compress.snappy.LoadSnappy - Snappy native library is available
2013-06-14 14:34:50,666 [JobControl] INFO org.apache.hadoop.util.NativeCodeLoader - Loaded the native-hadoop library
2013-06-14 14:34:50,666 [JobControl] INFO org.apache.hadoop.io.compress.snappy.LoadSnappy - Snappy native library loaded
2013-06-14 14:34:50,680 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
2013-06-14 14:34:52,796 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_201306140401_0021
2013-06-14 14:34:52,796 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases batting,grp_data,max_runs,runs
2013-06-14 14:34:52,796 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: batting[1,10],runs[2,7],max_runs[4,11],grp_data[3,11] C: max_runs[4,11],grp_data[3,11] R: max_runs[4,11]
2013-06-14 14:34:52,796 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - More information at: http://sandbox:50030/jobdetails.jsp?jobid=job_201306140401_0021
2013-06-14 14:36:01,993 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 50% complete
2013-06-14 14:36:04,767 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2013-06-14 14:36:04,768 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_201306140401_0021 has failed! Stop running all dependent jobs
2013-06-14 14:36:04,768 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2013-06-14 14:36:05,029 [main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR 2106: Error executing an algebraic function
2013-06-14 14:36:05,030 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2013-06-14 14:36:05,042 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
1.2.0.1.3.0.0-107 0.11.1.1.3.0.0-107 mapred 2013-06-14 14:34:41 2013-06-14 14:36:05 HASH_JOIN,GROUP_BY
Failed!
Failed Jobs:
JobId Alias Feature Message Outputs
job_201306140401_0021 batting,grp_data,max_runs,runs MULTI_QUERY,COMBINER Message: Job failed! Error - # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201306140401_0021_m_000000
Input(s):
Failed to read data from "hdfs://sandbox:8020/user/hue/batting.csv"
Output(s):
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_201306140401_0021 -> null,
null
2013-06-14 14:36:05,042 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2013-06-14 14:36:05,043 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias join_data
Details at logfile: /hadoop/mapred/taskTracker/hue/jobcache/job_201306140401_0020/attempt_201306140401_0020_m_000000_0/work/pig_1371245677965.log
When I switch MAX(runs.runs) to avg(runs.runs), I get a completely different issue:
2013-06-14 14:38:25,694 [main] INFO org.apache.pig.Main - Apache Pig version 0.11.1.1.3.0.0-107 (rexported) compiled May 20 2013, 03:04:35
2013-06-14 14:38:25,695 [main] INFO org.apache.pig.Main - Logging error messages to: /hadoop/mapred/taskTracker/hue/jobcache/job_201306140401_0022/attempt_201306140401_0022_m_000000_0/work/pig_1371245905690.log
2013-06-14 14:38:26,198 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /usr/lib/hadoop/.pigbootup not found
2013-06-14 14:38:26,438 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://sandbox:8020
2013-06-14 14:38:26,824 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: sandbox:50300
2013-06-14 14:38:28,238 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1070: Could not resolve avg using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
Details at logfile: /hadoop/mapred/taskTracker/hue/jobcache/job_201306140401_0022/attempt_201306140401_0022_m_000000_0/work/pig_1371245905690.log
Anybody know what the issue might be?
I am sure a lot of people will have figured this out. I combined Eugene's solution with the original code from Hortonworks so that we get the exact output specified in the tutorial.
The following code works:
batting = LOAD 'Batting.csv' using PigStorage(',');
runs_raw = FOREACH batting GENERATE $0 as playerID, $1 as year, $8 as runs;
runs = FILTER runs_raw BY runs > 0;
grp_data = group runs by (year);
max_runs = FOREACH grp_data GENERATE group as grp, MAX(runs.runs) as max_runs;
join_max_run = JOIN max_runs by ($0, max_runs), runs by (year,runs);
join_data = FOREACH join_max_run GENERATE $0 as year, $2 as playerID, $1 as runs;
dump join_data;
Note: line "runs = FILTER runs_raw BY runs > 0;" is additional than what has been provided by Hortonworks, thanks to Eugene for sharing working code which I used to modify original Hortonworks code to make it work.
UDFs are case sensitive, so, to answer at least the second part of your question: you'll need to use AVG(runs.runs) instead of avg(runs.runs).
It's likely that once you correct your syntax you'll get the original error you reported...
I am having the exact same issue with the exact same log output, but this solution doesn't work for me, because I believe changing MAX to AVG here defeats the whole purpose of this hortonworks.com tutorial, which was to get the MAX runs by playerID for each year.
UPDATE
Finally, I got it resolved: you have to either remove the first line in Batting.csv (the column names; see the shell one-liner after the code below) or edit your Pig Latin code like this:
batting = LOAD 'Batting.csv' using PigStorage(',');
runs_raw = FOREACH batting GENERATE $0 as playerID, $1 as year, $8 as runs;
runs = FILTER runs_raw BY runs > 0;
grp_data = group runs by (year);
max_runs = FOREACH grp_data GENERATE group as grp, MAX(runs.runs) as max_runs;
dump max_runs;
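If you'd rather strip the header row instead, a one-liner like this works (a sketch; the output file name is arbitrary):

# Drop the first line (the column names) and keep everything else
tail -n +2 Batting.csv > Batting_noheader.csv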
After that you should be able to complete the tutorial correctly and get the proper result.
It also looks like this is due to a "bug" in the older version of Pig which was used in the tutorial.
Please specify appropriate data types for playerID, year & runs, like below:
runs = FOREACH batting GENERATE $0 as playerID:int, $1 as year:chararray, $8 as runs:int;
Now it should work.
