Installed Elasticsearch 5.5 and failed to start the service - java

I installed Elasticsearch v5.5 on CentOS and ran the following command to start the service.
sudo service elasticsearch start
I get the following error when running the above command.
Starting elasticsearch: OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000085330000, 2060255232, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 2060255232 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/hs_err_pid15359.log
Please suggest how I can fix this.

Elasticsearch 5.x starts with a 2 GB heap by default.
Assuming you are running in a virtual machine, it seems your VM has less than 2 GB of free memory. Either give the VM more memory or lower the Elasticsearch JVM heap in /etc/elasticsearch/jvm.options (for example, set -Xms512m -Xmx512m).
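A minimal sketch of that change, assuming the default package layout with jvm.options at /etc/elasticsearch/jvm.options:

# /etc/elasticsearch/jvm.options -- replace the default -Xms2g/-Xmx2g entries
-Xms512m
-Xmx512m

Then restart the service with sudo service elasticsearch restart. Keeping -Xms and -Xmx equal avoids heap resizing at runtime.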

Related

How to interpret jeprof output

Recently I came across a Java-related memory leak (continuously decreasing free memory on the server, eventually hitting the RAM warning we set up in Nagios). I investigated and found that the leak is not in the heap area, yet the Tomcat process's memory consumption keeps growing.
server memory graph - 7 days
I did a heap analysis and found nothing there (if I run jcmd <pid> GC.run, heap usage drops from about 2.8 GB to around 200 MB). heap memory graph - 7 days
I checked the Metaspace and other JVM memory areas, following the discussion in this video and post:
https://www.youtube.com/watch?t=2483&v=c755fFv1Rnk&feature=youtu.be
https://github.com/jeffgriffith/native-jvm-leaks/blob/master/README.md
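For reference, the standard way to inspect those JVM-internal areas is native memory tracking; a sketch (it requires restarting the JVM with the tracking flag enabled):

# add to the Tomcat JVM options, then restart
-XX:NativeMemoryTracking=summary
# then inspect the breakdown of heap, Metaspace, threads, code cache, etc.
jcmd <pid> VM.native_memory summary

Note that this only covers memory the JVM itself tracks; allocations made by native libraries outside the JVM will not appear, which is where jemalloc profiling comes in.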
Finally, I added jemalloc to profile native memory allocation, and here is some of the output that I got.
output 1
output 2
But I couldn't interpret this output, and I'm not sure whether it is even correct.
I also have doubts about whether jeprof works with the Oracle JDK.
Could you please help me with this?
Additional info:
server memory: 4GB
Xmx: 3072M (recently changed from 2048M; the memory behavior is similar in both cases)
Xms: 3072M (recently changed from 2048M; the memory behavior is similar in both cases)
javac -version: jdk1.8.0_72
java version: "1.8.0_72"
Java(TM) SE Runtime Environment (build 1.8.0_72-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.72-b15, mixed mode)
jemalloc configs:
jemalloc version: https://github.com/jemalloc/jemalloc/releases/download/5.2.1/jemalloc-5.2.1.tar.bz2
export LD_PRELOAD=/usr/local/lib/libjemalloc.so
export MALLOC_CONF=prof:true,lg_prof_interval:31,lg_prof_sample:17,prof_prefix:/opt/jemalloc/jeprof-output/jeprof
My application runs on a Tomcat server on an EC2 instance (it is the only application on that server).
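For reference, the README linked above renders such dumps with jeprof roughly like this (the java path and output file names here are illustrative):

jeprof --text --show_bytes $(which java) /opt/jemalloc/jeprof-output/jeprof.*.heap > jeprof-report.txt
jeprof --gif --show_bytes $(which java) /opt/jemalloc/jeprof-output/jeprof.*.heap > jeprof-graph.gif

Since jemalloc profiles the native allocator underneath the whole process, this should work the same under the Oracle JDK as under OpenJDK; Java-level (JIT-compiled) frames simply won't be symbolized in the output.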

couldn't start zookeeper server in vagrant ubuntu box due to insufficient memory error

I have a Vagrant box with the Ubuntu configuration below.
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.5 LTS
Release: 16.04
Codename: xenial
Below are the memory details -
$ free -m
total used free shared buff/cache available
Mem: 488 43 92 1 351 414
I've downloaded the file kafka_2.12-1.1.1.tgz from here.
Then, after extracting the archive, I try to start the ZooKeeper server using the command below.
$ sudo /home/vagrant/kafka/bin/zookeeper-server-start.sh /home/vagrant/kafka/config/zookeeper.properties
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000e0000000, 536870912, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 536870912 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/vagrant/hs_err_pid5404.log
I've provided the complete error log here.
The Vagrant machine always seems to have this swap-space problem.
How can I solve this and get Kafka installed successfully on the Vagrant machine?
You will need to show your Vagrantfile, but the default VM memory is not enough to start ZooKeeper, let alone Kafka alongside it.
Assuming your host machine has at least 4 GB of memory, you can take a look at the Vagrant + Ansible repo that I've forked from Confluent, which by default starts ZooKeeper and Kafka on separate machines.
https://github.com/cricket007/cp-ansible/blob/addVagrant/vagrant/README.md
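As a sketch of the two immediate workarounds (the VirtualBox provider and the 2048 MB figure are assumptions about your setup): either raise the VM memory in the Vagrantfile, or shrink the JVM heap, since zookeeper-server-start.sh defaults to KAFKA_HEAP_OPTS="-Xmx512M -Xms512M", which is exactly the 536870912-byte mapping that failed.

# Vagrantfile: give the guest more memory
config.vm.provider "virtualbox" do |vb|
  vb.memory = "2048"
end

# or: override the 512 MB default heap just for ZooKeeper
sudo KAFKA_HEAP_OPTS="-Xmx256M -Xms128M" /home/vagrant/kafka/bin/zookeeper-server-start.sh /home/vagrant/kafka/config/zookeeper.properties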

Jenkins: Out of memory issue

Suddenly I have started getting the following error from the integration test cases. I am using Java 8, so I added MAVEN_OPTS=-Xmx512m, but it did not work. What am I missing here, and how can I fix it? By the way, it works fine on my local machine.
SUREFIRE-859: Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c9800000, 54001664, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 54001664 bytes for committing reserved memory.
# An error report file with more information is saved as:
Looking at the error message, Java was not able to get memory from the operating system; it is not Java's heap limit that is in the way, but rather the OS has no more memory to give. Check that the machine is not running out of memory.
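One thing worth checking: the SUREFIRE-859 prefix suggests the crash happened in the JVM that Surefire forks for the tests, and MAVEN_OPTS does not apply to forked test JVMs. A sketch of capping the forked JVM's heap via the Surefire argLine instead (plugin version omitted; use whatever your build already declares):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <argLine>-Xmx512m</argLine>
  </configuration>
</plugin>

If the Jenkins agent itself is short on memory, reducing the number of concurrent executors or adding swap is the other half of the fix.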

Java Memory issue while executing sbt package in spark

Can you please suggest a solution for the issue below?
hduser@hduser-VirtualBox:/usr/local/spark1/project$ sbt package
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000a8000000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1073741824 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /usr/local/spark-1.1.0-bin-hadoop1/project/hs_err_pid26824.log
hduser@hduser-VirtualBox:/usr/local/spark1/project$ java -version
java version "1.7.0_65"
OpenJDK Runtime Environment (IcedTea 2.5.3) (7u71-2.5.3-0ubuntu0.14.04.1)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)
It looks like you're trying to run with quite a large Java heap (1 GB). I'd start by reducing that. If you really do need that much, you might be in trouble: it looks as though your machine simply doesn't have enough free RAM to allocate it.
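A sketch of both directions, assuming your sbt launcher script honors SBT_OPTS (the standard one does) and that you can add swap inside the VirtualBox guest:

# reduce the heap sbt starts with
export SBT_OPTS="-Xms256m -Xmx512m"
sbt package

# or add a 1 GB swap file so the original 1 GB commit can succeed
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile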

java file executed from command line but not from browser(apache)?

I have a Java program that is triggered from a shell script. If I execute the shell script at the command line, it runs the Java program without any issues, but if I execute the same shell script from the browser (I have an index.php that runs the shell script on the Linux server), the Java step does not execute. The shell script runs properly if I remove the Java line from it.
Below is the error I received when executing from the browser.
Error from browser:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007fcf589ac000, 2555904, 1) failed; error='Permission denied' (errno=13)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 2555904 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2726), pid=306, tid=140528680765184
#
# JRE version: (7.0_51-b13) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.51-b03 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /tmp/hs_err_pid306.log
Please help me figure out how to fix this problem. I have been stuck on this issue for the last week.
This is a permission problem.
When run from the browser, the script probably executes as a different user (the web server's user) than when you run it at the command line.
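A quick way to confirm, assuming the web server runs as the apache user (it may be www-data or nobody on your system; the script path is a placeholder):

# check which user the web server workers actually run as
ps aux | grep -E 'httpd|apache2'
# then run the script exactly as that user would
sudo -u apache sh /path/to/your_script.sh

If it fails the same way, note that the failed commit_memory call requested executable memory (the third argument is 1) and was denied with errno=13, which commonly points to /tmp (or the JVM's temp directory) being mounted noexec, or to SELinux restricting the web server's context.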
An error report file with more information is saved as: /tmp/hs_err_pid306.log
What does that error report say?
The issue you have is with heap memory: you have not set enough memory to run the application.
The default heap size in Java is 128 MB on most 32-bit Sun JVMs, but it varies widely from JVM to JVM. For example, the default initial and maximum heap sizes for the 32-bit Solaris Operating System (SPARC Platform Edition) are -Xms=3670K and -Xmx=64M, and the default heap size parameters on 64-bit systems are increased by approximately 30%. With the throughput garbage collector in Java 1.5, the default maximum heap size is physical memory / 4 and the default initial heap size is physical memory / 16. Another way to find the default heap size is to start an application with default heap parameters and monitor it with JConsole, which is available from JDK 1.5 onwards; on the VM Summary tab you can see the maximum heap size.
You can also increase the Java heap to suit your application's needs, and I always recommend setting it explicitly rather than relying on the JVM defaults. If your application is large and creates many objects, change the heap size with the JVM options -Xms and -Xmx: -Xms sets the starting heap size, while -Xmx sets the maximum. There is another parameter, -Xmn, which sets the size of the young generation of the Java heap. Note that you cannot change the heap size dynamically; you can only pass the heap size parameters when starting the JVM. I have shared some more useful JVM options related to Java heap space and garbage collection in my post "10 JVM options every Java programmer must know", which you may find useful.
Read more: http://javarevisited.blogspot.com/2011/05/java-heap-space-memory-size-jvm.html#ixzz30FsKCqeT
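As a concrete illustration of those flags (the sizes are arbitrary and app.jar is a placeholder):

# 256 MB initial heap, 1 GB maximum heap, 128 MB young generation
java -Xms256m -Xmx1024m -Xmn128m -jar app.jar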
If it's Tomcat, you have to set these memory variables in catalina.sh.
E.g., if you are starting the application through the command line:
/bin/java -Xms2048M -Xmx2048M -Djava.util.logging.config.file=
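In practice, rather than editing the java command line directly, the conventional Tomcat approach is to put the flags in CATALINA_OPTS, either in catalina.sh or (better) in a setenv.sh next to it, which catalina.sh sources automatically; the sizes here are just examples:

# $CATALINA_HOME/bin/setenv.sh
export CATALINA_OPTS="-Xms512M -Xmx2048M"

Then start Tomcat normally with catalina.sh start.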
