I installed OpenJDK and Cassandra via brew. I got this error when I started Cassandra with cassandra -f:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x000000010df65ab8, pid=52667, tid=0x0000000000008603
#
# JRE version: OpenJDK Runtime Environment (8.0_275) (build 1.8.0_275-bre_2020_11_16_16_29-b00)
# Java VM: OpenJDK 64-Bit Server VM (25.275-b00 mixed mode bsd-amd64 compressed oops)
# Problematic frame:
# V [libjvm.dylib+0x565ab8]
#
# Core dump written. Default location: /cores/core or core.52667
#
# An error report file with more information is saved as:
# /Users/my_laptop/hs_err_pid52667.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#
After I saw that, I tried uninstalling Cassandra and OpenJDK. I installed regular Java from Oracle. When I run java --version, I now see this:
java 15.0.1 2020-10-20
Java(TM) SE Runtime Environment (build 15.0.1+9-18)
Java HotSpot(TM) 64-Bit Server VM (build 15.0.1+9-18, mixed mode, sharing)
I compiled and ran a basic HelloWorld.java file to make sure my command-line java/javac were working correctly. I then reinstalled Cassandra via Homebrew. But alas, I see the same error. If I start it with brew services start cassandra, I get an error:
➜ ~ brew services list
Name Status User Plist
cassandra error
And with cassandra -f, the exact same error:
OpenJDK 64-Bit Server VM warning: Cannot open file /usr/local/Cellar/cassandra/3.11.9_1/libexec/logs/gc.log due to No such file or directory
CompilerOracle: dontinline org/apache/cassandra/db/Columns$Serializer.deserializeLargeSubset (Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/Columns;I)Lorg/apache/cassandra/db/Columns;
CompilerOracle: dontinline org/apache/cassandra/db/Columns$Serializer.serializeLargeSubset (Ljava/util/Collection;ILorg/apache/cassandra/db/Columns;ILorg/apache/cassandra/io/util/DataOutputPlus;)V
CompilerOracle: dontinline org/apache/cassandra/db/Columns$Serializer.serializeLargeSubsetSize (Ljava/util/Collection;ILorg/apache/cassandra/db/Columns;I)I
CompilerOracle: dontinline org/apache/cassandra/db/commitlog/AbstractCommitLogSegmentManager.advanceAllocatingFrom (Lorg/apache/cassandra/db/commitlog/CommitLogSegment;)V
CompilerOracle: dontinline org/apache/cassandra/db/transform/BaseIterator.tryGetMoreContents ()Z
CompilerOracle: dontinline org/apache/cassandra/db/transform/StoppingTransformation.stop ()V
CompilerOracle: dontinline org/apache/cassandra/db/transform/StoppingTransformation.stopInPartition ()V
CompilerOracle: dontinline org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.doFlush (I)V
CompilerOracle: dontinline org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.writeExcessSlow ()V
CompilerOracle: dontinline org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.writeSlow (JI)V
CompilerOracle: dontinline org/apache/cassandra/io/util/RebufferingInputStream.readPrimitiveSlowly (I)J
CompilerOracle: inline org/apache/cassandra/db/rows/UnfilteredSerializer.serializeRowBody (Lorg/apache/cassandra/db/rows/Row;ILorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/io/util/DataOutputPlus;)V
CompilerOracle: inline org/apache/cassandra/io/util/Memory.checkBounds (JJ)V
CompilerOracle: inline org/apache/cassandra/io/util/SafeMemory.checkBounds (JJ)V
CompilerOracle: inline org/apache/cassandra/utils/AsymmetricOrdering.selectBoundary (Lorg/apache/cassandra/utils/AsymmetricOrdering/Op;II)I
CompilerOracle: inline org/apache/cassandra/utils/AsymmetricOrdering.strictnessOfLessThan (Lorg/apache/cassandra/utils/AsymmetricOrdering/Op;)I
CompilerOracle: inline org/apache/cassandra/utils/BloomFilter.indexes (Lorg/apache/cassandra/utils/IFilter/FilterKey;)[J
CompilerOracle: inline org/apache/cassandra/utils/BloomFilter.setIndexes (JJIJ[J)V
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare (Ljava/nio/ByteBuffer;[B)I
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare ([BLjava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compareUnsigned (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/lang/Object;JILjava/lang/Object;JI)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/lang/Object;JILjava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/vint/VIntCoding.encodeVInt (JI)[B
INFO [main] 2020-12-29 22:29:28,063 YamlConfigurationLoader.java:92 - Configuration location: file:/usr/local/etc/cassandra/cassandra.yaml
INFO [main] 2020-12-29 22:29:28,283 Config.java:536 - Node configuration:[allocate_tokens_for_keyspace=null; authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_bootstrap=true; auto_snapshot=true; back_pressure_enabled=false; back_pressure_strategy=org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9, factor=5, flow=FAST}; batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; broadcast_address=null; broadcast_rpc_address=null; buffer_pool_use_heap_if_exhausted=true; cas_contention_timeout_in_ms=1000; cdc_enabled=false; cdc_free_space_check_interval_ms=250; cdc_raw_directory=null; cdc_total_space_in_mb=0; check_for_duplicate_rows_during_compaction=true; check_for_duplicate_rows_during_reads=true; client_encryption_options=<REDACTED>; cluster_name=Test Cluster; column_index_cache_size_in_kb=2; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_compression=null; commitlog_directory=null; commitlog_max_compression_buffers_in_pool=3; commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_batch_window_in_ms=NaN; commitlog_sync_period_in_ms=10000; commitlog_total_space_in_mb=null; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_compactors=null; concurrent_counter_writes=32; concurrent_materialized_view_writes=32; concurrent_reads=32; concurrent_replicates=null; concurrent_writes=32; counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1; credentials_validity_in_ms=2000; cross_node_timeout=false; data_file_directories=[Ljava.lang.String;#2aa5fe93; disk_access_mode=auto; disk_failure_policy=stop; disk_optimization_estimate_percentile=0.95; disk_optimization_page_cross_chance=0.1; disk_optimization_strategy=ssd; dynamic_snitch=true; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; enable_materialized_views=true; enable_sasi_indexes=true; enable_scripted_user_defined_functions=false; enable_user_defined_functions=false; enable_user_defined_functions_threads=true; encryption_options=null; endpoint_snitch=SimpleSnitch; file_cache_round_up=null; file_cache_size_in_mb=null; gc_log_threshold_in_ms=200; gc_warn_threshold_in_ms=1000; hinted_handoff_disabled_datacenters=[]; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; hints_compression=null; hints_directory=null; hints_flush_period_in_ms=10000; incremental_backups=false; index_interval=null; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; initial_token=null; inter_dc_stream_throughput_outbound_megabits_per_sec=200; inter_dc_tcp_nodelay=false; internode_authenticator=null; internode_compression=dc; internode_recv_buff_size_in_bytes=0; internode_send_buff_size_in_bytes=0; key_cache_keys_to_save=2147483647; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=localhost; listen_interface=null; listen_interface_prefer_ipv6=false; listen_on_broadcast_address=false; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; max_hints_file_size_in_mb=128; max_mutation_size_in_kb=null; max_streaming_retries=3; max_value_size_in_mb=256; memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=null; memtable_flush_writers=0; memtable_heap_space_in_mb=null; 
memtable_offheap_space_in_mb=null; min_free_space_per_drive_in_mb=50; native_transport_flush_in_batches_legacy=true; native_transport_max_concurrent_connections=-1; native_transport_max_concurrent_connections_per_ip=-1; native_transport_max_concurrent_requests_in_bytes=-1; native_transport_max_concurrent_requests_in_bytes_per_ip=-1; native_transport_max_frame_size_in_mb=256; native_transport_max_negotiable_protocol_version=-2147483648; native_transport_max_threads=128; native_transport_port=9042; native_transport_port_ssl=null; num_tokens=256; otc_backlog_expiration_interval_ms=200; otc_coalescing_enough_coalesced_messages=8; otc_coalescing_strategy=DISABLED; otc_coalescing_window_us=200; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_cache_max_entries=1000; permissions_update_interval_in_ms=-1; permissions_validity_in_ms=2000; phi_convict_threshold=8.0; prepared_statements_cache_size_mb=null; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; repair_session_max_tree_depth=18; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_scheduler_id=null; request_scheduler_options=null; request_timeout_in_ms=10000; role_manager=CassandraRoleManager; roles_cache_max_entries=1000; roles_update_interval_in_ms=-1; roles_validity_in_ms=2000; row_cache_class_name=org.apache.cassandra.cache.OHCProvider; row_cache_keys_to_save=2147483647; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=localhost; rpc_interface=null; rpc_interface_prefer_ipv6=false; rpc_keepalive=true; rpc_listen_backlog=50; rpc_max_threads=2147483647; rpc_min_threads=16; rpc_port=9160; rpc_recv_buff_size_in_bytes=null; rpc_send_buff_size_in_bytes=null; rpc_server_type=sync; saved_caches_directory=null; seed_provider=org.apache.cassandra.locator.SimpleSeedProvider{seeds=127.0.0.1}; server_encryption_options=<REDACTED>; slow_query_log_timeout_in_ms=500; snapshot_before_compaction=false; snapshot_on_duplicate_row_detection=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=false; storage_port=7000; stream_throughput_outbound_megabits_per_sec=200; streaming_keep_alive_period_in_secs=300; streaming_socket_timeout_in_ms=86400000; thrift_framed_transport_size_in_mb=15; thrift_max_message_length_in_mb=16; thrift_prepared_statements_cache_size_mb=null; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; transparent_data_encryption_options=org.apache.cassandra.config.TransparentDataEncryptionOptions#5c1a8622; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; unlogged_batch_across_partitions_warn_threshold=10; user_defined_function_fail_timeout=1500; user_defined_function_warn_timeout=500; user_function_timeout_policy=die; windows_timer_interval=1; write_request_timeout_in_ms=2000]
INFO [main] 2020-12-29 22:29:28,284 DatabaseDescriptor.java:381 - DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO [main] 2020-12-29 22:29:28,284 DatabaseDescriptor.java:439 - Global memtable on-heap threshold is enabled at 998MB
INFO [main] 2020-12-29 22:29:28,284 DatabaseDescriptor.java:443 - Global memtable off-heap threshold is enabled at 998MB
INFO [main] 2020-12-29 22:29:28,470 RateBasedBackPressure.java:123 - Initialized back-pressure with high ratio: 0.9, factor: 5, flow: FAST, window size: 2000.
INFO [main] 2020-12-29 22:29:28,470 DatabaseDescriptor.java:773 - Back-pressure is disabled with strategy org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9, factor=5, flow=FAST}.
INFO [main] 2020-12-29 22:29:28,669 JMXServerUtils.java:253 - Configured JMX server at: service:jmx:rmi://127.0.0.1/jndi/rmi://127.0.0.1:7199/jmxrmi
INFO [main] 2020-12-29 22:29:28,676 CassandraDaemon.java:490 - Hostname: Peters-Mac.hsd1.ca.comcast.net
INFO [main] 2020-12-29 22:29:28,676 CassandraDaemon.java:497 - JVM vendor/version: OpenJDK 64-Bit Server VM/1.8.0_275
INFO [main] 2020-12-29 22:29:28,677 CassandraDaemon.java:498 - Heap size: 3.900GiB/3.900GiB
INFO [main] 2020-12-29 22:29:28,678 CassandraDaemon.java:503 - Code Cache Non-heap memory: init = 2555904(2496K) used = 7167616(6999K) committed = 7208960(7040K) max = 251658240(245760K)
INFO [main] 2020-12-29 22:29:28,678 CassandraDaemon.java:503 - Metaspace Non-heap memory: init = 0(0K) used = 19574984(19116K) committed = 20054016(19584K) max = -1(-1K)
INFO [main] 2020-12-29 22:29:28,678 CassandraDaemon.java:503 - Compressed Class Space Non-heap memory: init = 0(0K) used = 2344344(2289K) committed = 2490368(2432K) max = 1073741824(1048576K)
INFO [main] 2020-12-29 22:29:28,679 CassandraDaemon.java:503 - Par Eden Space Heap memory: init = 859045888(838912K) used = 240576832(234938K) committed = 859045888(838912K) max = 859045888(838912K)
INFO [main] 2020-12-29 22:29:28,679 CassandraDaemon.java:503 - Par Survivor Space Heap memory: init = 107347968(104832K) used = 0(0K) committed = 107347968(104832K) max = 107347968(104832K)
INFO [main] 2020-12-29 22:29:28,679 CassandraDaemon.java:503 - CMS Old Gen Heap memory: init = 3221225472(3145728K) used = 0(0K) committed = 3221225472(3145728K) max = 3221225472(3145728K)
INFO [main] 2020-12-29 22:29:28,679 CassandraDaemon.java:505 - Classpath: /usr/local/etc/cassandra:/usr/local/Cellar/cassandra/3.11.9_1/libexec/build/classes/main:/usr/local/Cellar/cassandra/3.11.9_1/libexec/build/classes/thrift:/usr/local/Cellar/cassandra/3.11.9_1/libexec/HdrHistogram-2.1.9.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/ST4-4.0.8.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/airline-0.6.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/antlr-runtime-3.5.2.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/apache-cassandra-3.11.9.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/apache-cassandra-thrift-3.11.9.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/asm-5.0.4.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/caffeine-2.2.6.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/cassandra-driver-core-3.0.1-shaded.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/commons-cli-1.1.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/commons-codec-1.9.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/commons-lang3-3.1.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/commons-math3-3.2.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/compress-lzf-0.8.4.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/concurrent-trees-2.4.0.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/concurrentlinkedhashmap-lru-1.4.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/disruptor-3.0.1.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/ecj-4.4.2.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/guava-18.0.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/high-scale-lib-1.0.6.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/hppc-0.5.4.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/jackson-annotations-2.9.10.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/jackson-core-2.9.10.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/jackson-databind-2.9.10.4.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/jamm-0.3.0.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/javax.inject.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/jbcrypt-0.3m.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/jcl-over-slf4j-1.7.7.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/jctools-core-1.2.1.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/jflex-1.6.0.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/jna-4.2.2.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/joda-time-2.4.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/json-simple-1.1.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/jstackjunit-0.0.1.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/libthrift-0.9.2.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/log4j-over-slf4j-1.7.7.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/logback-classic-1.1.3.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/logback-core-1.1.3.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/lz4-1.3.0.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/metrics-core-3.1.5.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/metrics-jvm-3.1.5.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/metrics-logback-3.1.5.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/netty-all-4.0.44.Final.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/ohc-core-0.4.4.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/ohc-core-j8-0.4.4.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/reporter-config-base-3.0.3.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/reporter-config3-3.0.3.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/sigar-1.6.4.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/slf4j-api-1.7.7.jar:/usr/local/Cellar/cassandra/3.11.
9_1/libexec/snakeyaml-1.11.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/snappy-java-1.1.1.7.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/snowball-stemmer-1.3.0.581.1.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/stream-2.5.2.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/thrift-server-0.3.7.jar:/usr/local/Cellar/cassandra/3.11.9_1/libexec/lib/jsr223/*/*.jar::/usr/local/Cellar/cassandra/3.11.9_1/libexec/jamm-0.3.0.jar
INFO [main] 2020-12-29 22:29:28,681 CassandraDaemon.java:507 - JVM Arguments: [-Xloggc:/usr/local/Cellar/cassandra/3.11.9_1/libexec/logs/gc.log, -ea, -XX:+UseThreadPriorities, -XX:ThreadPriorityPolicy=42, -XX:+HeapDumpOnOutOfMemoryError, -Xss256k, -XX:StringTableSize=1000003, -XX:+AlwaysPreTouch, -XX:-UseBiasedLocking, -XX:+UseTLAB, -XX:+ResizeTLAB, -XX:+UseNUMA, -XX:+PerfDisableSharedMem, -Djava.net.preferIPv4Stack=true, -XX:+UseParNewGC, -XX:+UseConcMarkSweepGC, -XX:+CMSParallelRemarkEnabled, -XX:SurvivorRatio=8, -XX:MaxTenuringThreshold=1, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:CMSWaitDuration=10000, -XX:+CMSParallelInitialMarkEnabled, -XX:+CMSEdenChunksRecordAlways, -XX:+CMSClassUnloadingEnabled, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintHeapAtGC, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -XX:+PrintPromotionFailure, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=10, -XX:GCLogFileSize=10M, -Xms4096M, -Xmx4096M, -Xmn1024M, -XX:+UseCondCardMark, -XX:CompileCommandFile=/usr/local/etc/cassandra/hotspot_compiler, -javaagent:/usr/local/Cellar/cassandra/3.11.9_1/libexec/jamm-0.3.0.jar, -Dcassandra.jmx.local.port=7199, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password, -Djava.library.path=/usr/local/Cellar/cassandra/3.11.9_1/libexec/sigar-bin, -Dcassandra.libjemalloc=/usr/local/lib/libjemalloc.dylib, -XX:OnOutOfMemoryError=kill -9 %p, -Dlogback.configurationFile=logback.xml, -Dcassandra.logdir=/usr/local/var/log/cassandra, -Dcassandra.storagedir=/usr/local/var/lib/cassandra, -Dcassandra-foreground=yes]
INFO [main] 2020-12-29 22:29:28,786 StartupChecks.java:140 - jemalloc seems to be preloaded from /usr/local/lib/libjemalloc.dylib
WARN [main] 2020-12-29 22:29:28,787 StartupChecks.java:169 - JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
INFO [main] 2020-12-29 22:29:28,789 SigarLibrary.java:44 - Initializing SIGAR library
INFO [main] 2020-12-29 22:29:28,809 SigarLibrary.java:57 - Could not initialize SIGAR library org.hyperic.sigar.Sigar.getFileSystemListNative()[Lorg/hyperic/sigar/FileSystem;
INFO [main] 2020-12-29 22:29:28,809 SigarLibrary.java:185 - Sigar could not be initialized, test for checking degraded mode omitted.
INFO [main] 2020-12-29 22:29:28,941 QueryProcessor.java:116 - Initialized prepared statement caches with 15 MB (native) and 15 MB (Thrift)
INFO [main] 2020-12-29 22:29:29,477 ColumnFamilyStore.java:427 - Initializing system.IndexInfo
INFO [main] 2020-12-29 22:29:29,931 ColumnFamilyStore.java:427 - Initializing system.batches
INFO [main] 2020-12-29 22:29:29,935 ColumnFamilyStore.java:427 - Initializing system.paxos
INFO [main] 2020-12-29 22:29:29,947 ColumnFamilyStore.java:427 - Initializing system.local
INFO [SSTableBatchOpen:6] 2020-12-29 22:29:29,978 BufferPool.java:234 - Global buffer pool is enabled, when pool is exhausted (max is 512.000MiB) it will allocate on heap
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x0000000107d65ab8, pid=53674, tid=0x0000000000008203
#
# JRE version: OpenJDK Runtime Environment (8.0_275) (build 1.8.0_275-bre_2020_11_16_16_29-b00)
# Java VM: OpenJDK 64-Bit Server VM (25.275-b00 mixed mode bsd-amd64 compressed oops)
# Problematic frame:
# V [libjvm.dylib+0x565ab8]
#
# Core dump written. Default location: /cores/core or core.53674
#
# An error report file with more information is saved as:
# /Users/my_laptop/hs_err_pid53674.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#
I honestly have very little knowledge of Java or Cassandra, so I'm not sure where the issue might be. I find it weird that the output still begins with OpenJDK 64-Bit Server VM warning: ... even after I uninstalled OpenJDK via Homebrew and removed it with rm -rf /Library/Java/JavaVirtualMachines/adoptopenjdk-15.jdk.
Any idea what I might try to get it working?
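A few commands help diagnose which JVM is actually being picked up (all standard macOS/Homebrew tooling, none of it Cassandra-specific):
$ which java
$ java -version
$ /usr/libexec/java_home -V
$ echo $JAVA_HOME
If the wrong JDK shows up here, Cassandra's launch scripts will inherit it. The gc.log warning at the top of the output is a separate, usually harmless issue: the logs directory simply doesn't exist yet, and mkdir -p /usr/local/Cellar/cassandra/3.11.9_1/libexec/logs should silence it.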
I eventually got it to work:
Uninstalled Cassandra
$ brew install --cask adoptopenjdk/openjdk/adoptopenjdk8
$ /usr/libexec/java_home -V
Matching Java Virtual Machines (2):
15.0.1, x86_64: "Java SE 15.0.1" /Library/Java/JavaVirtualMachines/jdk-15.0.1.jdk/Contents/Home
1.8.0_275, x86_64: "AdoptOpenJDK 8" /Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home
$ export JAVA_HOME=`/usr/libexec/java_home -v 1.8.0_275`
$ brew install cassandra
$ vim /usr/local/Cellar/cassandra/3.11.9_1/share/cassandra/cassandra.in.sh
Change JAVA_HOME to:
JAVA_HOME=`/usr/libexec/java_home -v 1.8.0_275`
Then finally, cross your fingers and run:
brew services start cassandra
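Note that the export JAVA_HOME line only affects the current shell; to make it survive new sessions (assuming zsh, the macOS default shell), append it to your profile:
$ echo 'export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)' >> ~/.zshrc
Editing cassandra.in.sh as above pins the JVM for the brew service itself, independent of your shell environment.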
I don't think Java 15 is supported.
Cassandra 4 adds support for Java 11:
https://cassandra.apache.org/doc/latest/new/java11.html
Had a similar issue.
Make sure you have the right Java version, not only installed but also set in your environment variables (a quick check is sketched below).
Make sure your Cassandra version fits your Java version:
https://cassandra.apache.org/doc/latest/cassandra/new/java11.html
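For instance, a minimal pre-flight check (plain shell, nothing Cassandra-specific):
$ echo $JAVA_HOME
$ "$JAVA_HOME/bin/java" -version
Both should report a JDK your Cassandra version supports: Java 8 for Cassandra 3.x, Java 8 or 11 for Cassandra 4.x.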
When I try to run Cassandra with the cassandra -f command, I get these error messages:
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x0000000109d8bb3c, pid=98066, tid=0x0000000000001b03
#
# JRE version: OpenJDK Runtime Environment (8.0_312) (build 1.8.0_312-bre_2022_01_01_23_04-b00)
# Java VM: OpenJDK 64-Bit Server VM (25.312-b00 mixed mode bsd-amd64 compressed oops)
# Problematic frame:
# V [libjvm.dylib+0x545b3c]
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /Users/dvegamar/hs_err_pid98066.log
#
# If you would like to submit a bug report, please visit:
# https://github.com/Homebrew/homebrew-core/issues
And of course, it won't connect:
(base) MACPRO:~ dvegamar$ cqlsh
/usr/local/Cellar/cassandra/4.0.3/libexec/bin/cqlsh.py:460: DeprecationWarning: Legacy execution parameters will be removed in 4.0. Consider using execution profiles.
Connection error: ('Unable to connect to any servers', {'127.0.0.1:9042': ConnectionRefusedError(61, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})
But in this same situation, if I start it with brew services start cassandra, it works:
(base) MACPRO:~ dvegamar$ brew services start cassandra
==> Successfully started `cassandra` (label: homebrew.mxcl.cassandra)
(base) MACPRO:~ dvegamar$ cqlsh
/usr/local/Cellar/cassandra/4.0.3/libexec/bin/cqlsh.py:460: DeprecationWarning: Legacy execution parameters will be removed in 4.0. Consider using execution profiles.
/usr/local/Cellar/cassandra/4.0.3/libexec/bin/cqlsh.py:490: DeprecationWarning: Setting the consistency level at the session level will be removed in 4.0. Consider using execution profiles and setting the desired consitency level to the EXEC_PROFILE_DEFAULT profile.
Connected to Test Cluster at 127.0.0.1:9042
[cqlsh 6.0.0 | Cassandra 4.0.3 | CQL spec 3.4.5 | Native protocol v5]
Use HELP for help.
cqlsh>
cqlsh>
Connection closed by foreign host...
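If the foreground run keeps crashing while the service works, the service's log may carry more context. Judging by the JVM arguments shown earlier, Homebrew's Cassandra logs under /usr/local/var/log/cassandra, and the default logback configuration names the main file system.log (hedged; check your logback.xml):
$ tail -f /usr/local/var/log/cassandra/system.log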
We are finding that our JVM crashes quite inexplicably after a minor version update of the JRE. Initially the update was the suspect, but after correlating the time of each crash with syslog messages, I found that every time there was a crash, the kernel had logged this memory error. There is enough RAM, but I guess Linux still uses swap. The assumption is that the disk error caused the JVM to crash. Is this a fair assumption?
JVM Crash stack
be-7.2.1/lib/jdbc/mysql/mysql-connector-java-5.1.46.jar org.sonar.ce.app.CeServer /home/cicd/sonarqube-7.2.1/temp/sq-process3072857830430806886properties
Picked up _JAVA_OPTIONS: -Xmx60g
2020.02.06 11:51:16 INFO app[][o.s.a.SchedulerImpl] Process[ce] is up
2020.02.06 11:51:16 INFO app[][o.s.a.SchedulerImpl] SonarQube is up
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGBUS (0x7) at pc=0x00007f7df7918531, pid=13479, tid=0x00007f6e59cf9700
#
# JRE version: OpenJDK Runtime Environment (8.0_242-b08) (build 1.8.0_242-b08)
# Java VM: OpenJDK 64-Bit Server VM (25.242-b08 mixed mode linux-amd64 )
# Problematic frame:
# V [libjvm.so+0xa42531] Symbol::increment_refcount()+0x1
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
# An error report file with more information is saved as:
# /home/xxx/sonarqube-7.2.1/elasticsearch/hs_err_pid13479.log
[error occurred during error reporting , id 0x7]
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#
2020.02.18 07:35:17 WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 134
2020.02.18 07:35:17 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2020.02.18 07:35:21 INFO app[][o.s.a.SchedulerImpl] Process [ce] is stopped
2020.02.18 07:35:24 INFO app[][o.s.a.SchedulerImpl] Process [web] is stopped
2020.02.18 07:35:24 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
Error in syslog
"kernel: blk_update_request: critical medium error, dev sda, sector"
[libjvm.so+0xa42531] Symbol::increment_refcount()+0x1
Feb 18 07:35:14 xxx kernel: Read-error on swap-device (253:1:2227784)
Feb 18 07:35:14 xxx kernel: Read-error on swap-device (253:1:2227792)
Feb 18 07:35:14 xxx kernel: Read-error on swap-device (253:1:2227
The disk is older IBM hardware; the same hardware had failed before on other servers:
=== START OF INFORMATION SECTION ===
Vendor: IBM
Product: ServeRAID M5110
Revision: 3.45
Compliance: SPC-3
User Capacity: 5,996,996,984,832 bytes [5.99 TB]
Logical block size: 512 bytes
Logical Unit id: 0x600605b0072bb48022f34180127fc92d
Serial number: 002dc97f128041f32280b42b07b00506
Device type: disk
Local Time is: Wed Feb 19 06:44:22 2020 EET
SMART support is: Unavailable - device lacks SMART capability
As #user207421 has mentioned, it was the read error that caused the JVM to crash (the libjvm.so frame in the log nails it).
I have had this read error a few more times on this machine, but on those occasions there was no JVM crash and no libjvm.so entry in syslog either:
Mar 4 03:49:33 xxx kernel: blk_update_request: critical medium error, dev sda, sector 15250816
Mar 4 03:49:33 xxx kernel: XFS (dm-2): metadata I/O error: block 0x425d80 ("xfs_trans_read_buf_map") error 61 numblks 32
Mar 4 03:49:33 xxx kernel: XFS (dm-2): xfs_imap_to_bp: xfs_trans_read_buf() returned error -61.
Mar 4 03:49:37 xx kernel: megaraid_sas 0000:1b:00.0: 17102 (636601577s/0x0002/FATAL) - Unrecoverable
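If replacing the disk is not immediately possible, one hedged mitigation is to keep the JVM's pages out of swap so a failing swap sector cannot take the process down. A minimal sketch using standard Linux sysctl (the value is illustrative):
$ cat /proc/sys/vm/swappiness
$ sudo sysctl vm.swappiness=1
Lowering swappiness only reduces the likelihood of swapping; with -Xmx60g on this box, right-sizing the heap so it fits comfortably in RAM is the more robust fix.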
The previous version of SQ was 6.7.7 LTS, so I upgraded to 7.9.1 LTS.
I installed a new JDK:
java version "11.0.4" 2019-07-16 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.4+10-LTS)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.4+10-LTS, mixed mode)
but SQ still stopped.
This is a snippet of sonar.log:
2019.08.01 14:50:19 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /data/sonarqube/sonarqube-7.9.1/temp
2019.08.01 14:50:19 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2019.08.01 14:50:19 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/data/sonarqube/sonarqube-7.9.1/elasticsearch]: /data/sonarqube/sonarqube-7.9.1/elasticsearch/bin/elasticsearch
2019.08.01 14:50:19 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
2019.08.01 14:50:20 INFO app[][o.e.p.PluginsService] no modules loaded
2019.08.01 14:50:20 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2019.08.01 14:50:22 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 1
2019.08.01 14:50:22 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
2019.08.01 14:50:22 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
2019.08.01 14:50:22 INFO app[][o.e.c.t.TransportClientNodesService] failed to get node info for {#transport#-1}{GK1cYpSNTM-eycINcVQsOA}{127.0.0.1}{127.0.0.1:9001}, disconnecting...
java.lang.IllegalStateException: Future got interrupted
and a snippet of es.log:
2019.08.01 14:50:22 ERROR es[][o.e.b.Bootstrap] Exception
java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:103) ~[elasticsearch-6.8.0.jar:6.8.0]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170) ~[elasticsearch-6.8.0.jar:6.8.0]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:333) [elasticsearch-6.8.0.jar:6.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) [elasticsearch-6.8.0.jar:6.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) [elasticsearch-6.8.0.jar:6.8.0]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-6.8.0.jar:6.8.0]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) [elasticsearch-cli-6.8.0.jar:6.8.0]
at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-6.8.0.jar:6.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:116) [elasticsearch-6.8.0.jar:6.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:93) [elasticsearch-6.8.0.jar:6.8.0]
Of course, I start SQ (and ES) as a non-root user. That's why I don't understand the error in es.log.
I can't find the solution at all. Please can you help me?
With kind regards, William
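One thing worth verifying, as a hedged suggestion (the thread doesn't show the startup script): the user who invokes the script is not necessarily the user the wrapper runs the JVM as. The stock sonar.sh ships with a commented-out RUN_AS_USER setting, and a systemd/init unit may start it as root regardless. Checking who actually owns the Elasticsearch process settles it:
$ ps -eo user,pid,args | grep -i '[e]lasticsearch'
$ grep RUN_AS_USER /data/sonarqube/sonarqube-7.9.1/bin/linux-x86-64/sonar.sh
(The bin/linux-x86-64/sonar.sh path is the usual layout; adjust for your install.)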
I am trying Hadoop MapReduce on Linux (an Ubuntu virtual machine) by following the link.
I ran the wordcount example on a sample file. The process gets killed unexpectedly. How can I debug this?
Initially I was getting an insufficient memory error on a large data set:
15/11/28 19:24:27 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
15/11/28 19:24:27 INFO mapred.MapTask: Processing split: hdfs://localhost:54310/user/hduser/eg2/a.txt:0+1538
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000e6093000, 104861696, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 104861696 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /usr/local/hadoop/hs_err_pid7516.log
So I reduced the size of my files and tried again, which resulted in unexpected termination:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /user/hduser/eg2/ /user/hduser/eg2/eg2-output2
......
......
15/11/28 18:55:44 INFO mapred.LocalJobRunner: Waiting for map tasks
15/11/28 18:55:44 INFO mapred.LocalJobRunner: Starting task: attempt_local1996683170_0001_m_000000_0
15/11/28 18:55:44 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
15/11/28 18:55:44 INFO mapred.MapTask: Processing split: hdfs://localhost:54310/user/hduser/eg2/a.txt:0+1538
15/11/28 18:55:45 INFO mapreduce.Job: Job job_local1996683170_0001 running in uber mode : false
15/11/28 18:55:45 INFO mapreduce.Job: map 0% reduce 0%
Killed
Why is the process getting terminated?
Try listing the running jobs:
hadoop job -list
Kill all jobs and rerun:
hadoop job -kill <JobID>
Try checking the JobTracker logs for errors:
http://localhost:50070/ – web UI of the NameNode daemon
http://localhost:50030/ – web UI of the JobTracker daemon
http://localhost:50060/ – web UI of the TaskTracker daemon
The size of the data set didn't matter; Hadoop didn't have enough memory to start. I increased the memory of my VM and the issue was fixed.
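For anyone debugging the same silent "Killed" with no stack trace: that pattern usually means the kernel OOM killer ended the process, not Hadoop itself. Two quick checks with standard Linux tools before resizing the VM (neither is Hadoop-specific):
$ free -m
$ dmesg | grep -i 'killed process'
If dmesg shows the OOM killer taking out the Java process, either give the VM more memory, as above, or shrink Hadoop's heap, e.g. export HADOOP_HEAPSIZE=512 in hadoop-env.sh (a Hadoop 2.x setting; the value is illustrative).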
We always have a hard time with Bamboo upgrades because AIX agents are not officially supported, and this one, from 5.5.1 to 5.7.2, is no exception. The agent tries to receive/start a job and apparently fails to read some serialized XML. Last time, moving to 5.5.1, we had to use a previous version of JNA (3.4.0) instead of the one packaged with Bamboo (4.1.0). That was not needed this time, so we are using the latest one that comes with Bamboo 5.7. The agent comes online but fails to execute even simple commands. Any ideas would be appreciated!
The Java version on AIX is currently:
java version "1.7.0"
Java(TM) SE Runtime Environment (build pap6470sr6fp1-20140108_01(SR6 FP1))
IBM J9 VM (build 2.6, JRE 1.7.0 AIX ppc64-64 Compressed References 20140106_181350 (JIT enabled, AOT enabled)
J9VM - R26_Java726_SR6_20140106_1601_B181350
JIT - r11.b05_20131003_47443.02
GC - R26_Java726_SR6_20140106_1601_B181350_CMPRSS
J9CL - 20140106_181350)
JCL - 20140103_01 based on Oracle 7u51-b11
And here is the part of the log showing the error when the agent tries to execute its very first job:
2015-01-08 12:11:38,417 INFO [Thread-5] [AgentHeartBeatJobScheduler] Scheduled AgentHeartBeatJobScheduler to run every 60s. Next run at Thu Jan 08 12:11:38 PST 2015
2015-01-08 12:11:38,455 INFO [Thread-5] [RemoteAgent] **************************************************************************************************************************************************
2015-01-08 12:11:38,455 INFO [Thread-5] [RemoteAgent] * *
2015-01-08 12:11:38,455 INFO [Thread-5] [RemoteAgent] * Bamboo agent 'ibmaix71vm2.woods.ad' ready to receive builds.
2015-01-08 12:11:38,455 INFO [Thread-5] [RemoteAgent] * Remote Agent Home: /bamboo/bamboo-agent-home
2015-01-08 12:11:38,456 INFO [Thread-5] [RemoteAgent] * Broker URL: failover:(tcp://192.168.223.200:54663?wireFormat.maxInactivityDuration=300000)?initialReconnectDelay=15000&maxReconnectAttempts=10
2015-01-08 12:11:38,456 INFO [Thread-5] [RemoteAgent] * *
2015-01-08 12:11:38,456 INFO [Thread-5] [RemoteAgent] **************************************************************************************************************************************************
2015-01-08 12:11:38,511 INFO [scheduler_Worker-1] [AgentHeartBeatJob] executableBuildAgent still unavailable. Heartbeat skipped.
2015-01-08 12:16:38,425 INFO [0-BAM::ibmaix71vm2.woods.ad::Agent:pool-3-thread-1] [BuildAgentControllerImpl] Agent 747175938 checking build queue for executables...
2015-01-08 12:16:38,825 ERROR [0-BAM::ibmaix71vm2.woods.ad::Agent:pool-3-thread-1] [BuildAgentControllerImpl] Unknown exception occurred on 'ibmaix71vm2.woods.ad'. Agent will attempt to recover its normal operation...
com.thoughtworks.xstream.converters.ConversionException: java.lang.ref.ReferenceQueue$Null : java.lang.ref.ReferenceQueue$Null
---- Debugging information ----
message : java.lang.ref.ReferenceQueue$Null
cause-exception : com.thoughtworks.xstream.mapper.CannotResolveClassException
cause-message : java.lang.ref.ReferenceQueue$Null
class : com.atlassian.util.concurrent.ResettableLazyReference$InternalReference
required-type : com.atlassian.util.concurrent.ResettableLazyReference$InternalReference
converter-type : com.thoughtworks.xstream.converters.reflection.ReflectionConverter
path : /result/value/parentBuildContext/variableContext/effectiveStateRef/referrent/queue
line number : 234
class[1] : com.atlassian.bamboo.variable.VariableContextImpl$1
class[2] : com.atlassian.bamboo.variable.VariableContextImpl
class[3] : com.atlassian.bamboo.v2.build.BuildContextImpl
converter-type[1] : com.atlassian.bamboo.serialization.xstream.BuildContextXStreamConverter
class[4] : org.springframework.remoting.support.RemoteInvocationResult
version : not available
-------------------------------
at com.thoughtworks.xstream.core.TreeUnmarshaller.convert(TreeUnmarshaller.java:79)
at com.thoughtworks.xstream.core.AbstractReferenceUnmarshaller.convert(AbstractReferenceUnmarshaller.java:65)
at com.thoughtworks.xstream.core.TreeUnmarshaller.convertAnother(TreeUnmarshaller.java:66)
at com.thoughtworks.xstream.converters.reflection.AbstractReflectionConverter.unmarshallField(AbstractReflectionConverter.java:474)
at com.thoughtworks.xstream.converters.reflection.AbstractReflectionConverter.doUnmarshal(AbstractReflectionConverter.java:406)
at com.thoughtworks.xstream.converters.reflection.AbstractReflectionConverter.unmarshal(AbstractReflectionConverter.java:257)
at com.thoughtworks.xstream.core.TreeUnmarshaller.convert(TreeUnmarshaller.java:72)
at com.thoughtworks.xstream.core.AbstractReferenceUnmarshaller.convert(AbstractReferenceUnmarshaller.java:65)
Someone at Atlassian suggested trying OpenJDK, and it solved our problem. We didn't have time to try different JVM builds as #technomage suggested, but found this link for downloads.
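For completeness, a hedged sketch of how to point a Bamboo remote agent at a different JVM: the agent runs under the Java Service Wrapper, and wrapper.conf names the java binary it launches. The paths below follow the agent home shown in the log and an example OpenJDK location; adjust both for your install:
# in /bamboo/bamboo-agent-home/conf/wrapper.conf
wrapper.java.command=/opt/openjdk7/jre/bin/java
Restart the agent afterwards so the wrapper picks up the new JVM.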