Ehcache Initial table allocation failed - java

One of our web applications runs in Tomcat 7 deployed on an AS400 server, and it uses Ehcache as a cache component to swap data to disk and reduce memory usage.
A few weeks ago, when we tried to deploy this application for one of our customers, it failed at startup. The log shows:
Caused by: java.lang.IllegalStateException: Cache 'data' creation in EhcacheManager failed.
at org.ehcache.core.EhcacheManager.createCache(EhcacheManager.java:288)
at org.ehcache.core.EhcacheManager.init(EhcacheManager.java:567)
... 7 more
Caused by: org.ehcache.StateTransitionException: Initial table allocation failed.
Initial Table Size (slots) : 64
Allocation Will Require : 1KB
Table Page Source : org.terracotta.offheapstore.disk.paging.MappedPageSource#bc8a4ca2
at org.ehcache.core.StatusTransitioner$Transition.succeeded(StatusTransitioner.java:209)
at org.ehcache.core.Ehcache.init(Ehcache.java:567)
at org.ehcache.core.EhcacheManager.createCache(EhcacheManager.java:261)
... 8 more
Caused by: java.lang.IllegalArgumentException: Initial table allocation failed.
Initial Table Size (slots) : 64
Allocation Will Require : 1KB
Table Page Source : org.terracotta.offheapstore.disk.paging.MappedPageSource#bc8a4ca2
at org.terracotta.offheapstore.OffHeapHashMap.<init>(OffHeapHashMap.java:219)
at org.terracotta.offheapstore.AbstractLockedOffHeapHashMap.<init>(AbstractLockedOffHeapHashMap.java:71)
at org.terracotta.offheapstore.AbstractOffHeapClockCache.<init>(AbstractOffHeapClockCache.java:76)
at org.terracotta.offheapstore.disk.persistent.AbstractPersistentOffHeapCache.<init>(AbstractPersistentOffHeapCache.java:43)
at org.terracotta.offheapstore.disk.persistent.PersistentReadWriteLockedOffHeapClockCache.<init>(PersistentReadWriteLockedOffHeapClockCache.java:36)
at org.ehcache.impl.internal.store.disk.factories.EhcachePersistentSegmentFactory$EhcachePersistentSegment.<init>(EhcachePersistentSegmentFactory.java:73)
at org.ehcache.impl.internal.store.disk.factories.EhcachePersistentSegmentFactory.newInstance(EhcachePersistentSegmentFactory.java:60)
at org.ehcache.impl.internal.store.disk.factories.EhcachePersistentSegmentFactory.newInstance(EhcachePersistentSegmentFactory.java:37)
at org.terracotta.offheapstore.concurrent.AbstractConcurrentOffHeapMap.<init>(AbstractConcurrentOffHeapMap.java:106)
at org.terracotta.offheapstore.concurrent.AbstractConcurrentOffHeapCache.<init>(AbstractConcurrentOffHeapCache.java:48)
at org.terracotta.offheapstore.disk.persistent.AbstractPersistentConcurrentOffHeapCache.<init>(AbstractPersistentConcurrentOffHeapCache.java:52)
at org.ehcache.impl.internal.store.disk.EhcachePersistentConcurrentOffHeapClockCache.<init>(EhcachePersistentConcurrentOffHeapClockCache.java:52)
at org.ehcache.impl.internal.store.disk.OffHeapDiskStore.createBackingMap(OffHeapDiskStore.java:279)
at org.ehcache.impl.internal.store.disk.OffHeapDiskStore.getBackingMap(OffHeapDiskStore.java:167)
at org.ehcache.impl.internal.store.disk.OffHeapDiskStore.access$600(OffHeapDiskStore.java:95)
at org.ehcache.impl.internal.store.disk.OffHeapDiskStore$Provider.init(OffHeapDiskStore.java:460)
at org.ehcache.impl.internal.store.disk.OffHeapDiskStore$Provider.initStore(OffHeapDiskStore.java:456)
at org.ehcache.impl.internal.store.disk.OffHeapDiskStore$Provider.initAuthoritativeTier(OffHeapDiskStore.java:507)
at org.ehcache.impl.internal.store.tiering.TieredStore$Provider.initStore(TieredStore.java:472)
at org.ehcache.core.EhcacheManager$8.init(EhcacheManager.java:499)
at org.ehcache.core.StatusTransitioner.runInitHooks(StatusTransitioner.java:135)
at org.ehcache.core.StatusTransitioner.access$000(StatusTransitioner.java:33)
at org.ehcache.core.StatusTransitioner$Transition.succeeded(StatusTransitioner.java:194)
The code that triggered this is:
CacheConfiguration<String, String[]> dconf = CacheConfigurationBuilder
    .newCacheConfigurationBuilder(String.class, String[].class,
        ResourcePoolsBuilder.heap(11).disk(3, MemoryUnit.GB, false))
    .withExpiry(Expirations.timeToLiveExpiration(Duration.of(30, TimeUnit.MINUTES)))
    .build();
dataCacheManager = CacheManagerBuilder.newCacheManagerBuilder()
    .with(CacheManagerBuilder.persistence(new File(cacheFolder, "requestdata"))) //$NON-NLS-1$
    .withCache(CACHE_NAME_DATA, dconf)
    .build(true);
This surprised us because it had never happened before; we have deployed the application on other customers' servers (Windows, AS400, Linux) and none of them had this issue.
This is a real headache. We spent weeks trying to figure it out: reading source code, tuning JVM parameters, googling around... and found nothing except one unanswered post: https://groups.google.com/forum/#!topic/ehcache-users/ApFAe5nYxuA
Can anyone help us with this? Thanks in advance!

The Ehcache 3 disk store uses java.nio.MappedByteBuffer, which requires access to direct memory.
There is no documented default MaxDirectMemorySize in Java, and the same JVM can behave differently on different operating systems.
If you have not already set the flag -XX:MaxDirectMemorySize=3G when launching your application, that could be the cause of the exception you see.
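A quick way to check whether direct memory is the bottleneck is to allocate a direct buffer yourself, since such off-heap allocations are capped by -XX:MaxDirectMemorySize. This is a minimal sketch, not Ehcache-specific; the class and method names are ours:

```java
import java.nio.ByteBuffer;

public class DirectMemoryProbe {
    // Try a direct (off-heap) allocation of the given size; such allocations
    // count against -XX:MaxDirectMemorySize, unlike ordinary heap allocations.
    static boolean canAllocateDirect(int bytes) {
        try {
            ByteBuffer buf = ByteBuffer.allocateDirect(bytes);
            return buf.isDirect();
        } catch (OutOfMemoryError e) {
            // This is what hitting a too-small direct memory limit looks like
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("1 MB direct allocation ok? " + canAllocateDirect(1024 * 1024));
    }
}
```

Running this with a deliberately tiny limit (e.g. -XX:MaxDirectMemorySize=512k) should make the allocation fail, which mimics the conditions under which the disk store's table allocation fails.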

Related

Getting org.apache.openjpa.persistence.OptimisticLockException: Unable to obtain an object lock on "null"

I am getting the exception below while calling EntityManager.find(). We are using a DB2 database and a WAS 8.0 server for our application. Any help is greatly appreciated.
Caused by: <openjpa-2.1.2-SNAPSHOT-r422266:1709309 nonfatal store error> org.apache.openjpa.persistence.OptimisticLockException: Unable to obtain an object lock on "null".
at org.apache.openjpa.jdbc.sql.DBDictionary.narrow(DBDictionary.java:4930)
at org.apache.openjpa.jdbc.sql.DBDictionary.newStoreException(DBDictionary.java:4908)
at org.apache.openjpa.jdbc.sql.DB2Dictionary.newStoreException(DB2Dictionary.java:603)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:136)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:110)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:62)
at org.apache.openjpa.jdbc.kernel.PreparedSQLStoreQuery$PreparedSQLExecutor.executeQuery(PreparedSQLStoreQuery.java:139)
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:1012)
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:870)
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:801)
at org.apache.openjpa.kernel.DelegatingQuery.execute(DelegatingQuery.java:542)
at org.apache.openjpa.persistence.QueryImpl.execute(QueryImpl.java:315)
at org.apache.openjpa.persistence.QueryImpl.getResultList(QueryImpl.java:331)
... 116 more
Caused by: org.apache.openjpa.lib.jdbc.ReportingSQLException: DB2 SQL Error: SQLCODE=-913, SQLSTATE=57033, SQLERRMC=00C90088;00000304;ODNC001 .SNCPC145.X'200D65' '.X'43', DRIVER=4.15.134 {prepstmnt -1803801027
SELECT a.column1
FROM table_test a
WHERE (a.column2 = ? AND a.column3 = ?)
[params=(String) 00000, (String) 000011]} [code=-913, state=57033]
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:281)
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:265)
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.access$700(LoggingConnectionDecorator.java:72)
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator$LoggingConnection$LoggingPreparedStatement.executeQuery(LoggingConnectionDecorator.java:1183)
at org.apache.openjpa.lib.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:284)
at org.apache.openjpa.jdbc.kernel.JDBCStoreManager$CancelPreparedStatement.executeQuery(JDBCStoreManager.java:1787)
at org.apache.openjpa.lib.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:274)
at org.apache.openjpa.jdbc.kernel.PreparedSQLStoreQuery$PreparedSQLExecutor.executeQuery(PreparedSQLStoreQuery.java:118)
... 123 more
SQLCODE -913 with SQLERRMC=00C90088 means that your connection experienced a deadlock.
If your Db2 server is running on z/OS, ask your DBA for help finding the other Db2 connection and the SQL statements running in both transactions. The access plans and isolation levels used by both connections are also relevant, as are any applicable lock timeouts. The Db2 server DBA has access to diagnostic tools to help you.
There are many hits online giving advice on how to reduce the likelihood of Db2 deadlocks, so do your research.
You will need to know the isolation level used by the WebSphere connection (or package, or SQL statement(s)), and all the statements in the Db2 transaction for your connection.
The other tokens in the message are also relevant: ODNC001.SNCPC145 may be the involved table.
The version of the JDBC type 4 driver used by your WebSphere is out of date (it looks like a Db2 v10.1 fixpack 5 build), so consider upgrading it to a current version.
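Deadlock exposure is often aggravated by a stricter isolation level than the workload needs. As a hedged sketch (the helper method is illustrative, not taken from the question), the isolation level can be lowered per connection through the standard java.sql.Connection API:

```java
import java.sql.Connection;
import java.sql.SQLException;

public class IsolationTuning {
    // Lower the isolation level on a JDBC connection; READ_COMMITTED
    // (Db2's Cursor Stability) holds fewer read locks than
    // REPEATABLE_READ and so reduces deadlock exposure.
    static void useCursorStability(Connection conn) throws SQLException {
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
    }

    public static void main(String[] args) {
        // The JDBC isolation constants are plain ints defined by the JDBC spec
        System.out.println("READ_COMMITTED  = " + Connection.TRANSACTION_READ_COMMITTED);
        System.out.println("REPEATABLE_READ = " + Connection.TRANSACTION_REPEATABLE_READ);
    }
}
```

In a WAS/OpenJPA stack the isolation level is usually configured on the datasource or via OpenJPA properties rather than in code, but the JDBC semantics are the same.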

Spark on mesos Executors failing with OOM Errors

We are using Spark 2.0.2 managed by a DC/OS system that fetches data from a Kafka 1.0.0 messaging service and writes Parquet files to an HDFS system.
Everything was working fine, but when we increased the number of topics in Kafka, our Spark executors began to crash constantly with OOM errors:
java.lang.OutOfMemoryError: Java heap space
at org.apache.parquet.column.values.dictionary.IntList.initSlab(IntList.java:90)
at org.apache.parquet.column.values.dictionary.IntList.<init>(IntList.java:86)
at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter.<init>(DictionaryValuesWriter.java:93)
at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter$PlainDoubleDictionaryValuesWriter.<init>(DictionaryValuesWriter.java:422)
at org.apache.parquet.column.ParquetProperties.dictionaryWriter(ParquetProperties.java:139)
at org.apache.parquet.column.ParquetProperties.dictWriterWithFallBack(ParquetProperties.java:178)
at org.apache.parquet.column.ParquetProperties.getValuesWriter(ParquetProperties.java:203)
at org.apache.parquet.column.impl.ColumnWriterV1.<init>(ColumnWriterV1.java:83)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.newMemColumn(ColumnWriteStoreV1.java:68)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.getColumnWriter(ColumnWriteStoreV1.java:56)
at org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.<init>(MessageColumnIO.java:183)
at org.apache.parquet.io.MessageColumnIO.getRecordWriter(MessageColumnIO.java:375)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.initStore(InternalParquetRecordWriter.java:109)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.<init>(InternalParquetRecordWriter.java:99)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:217)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:175)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:146)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:113)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:87)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:62)
at org.apache.parquet.avro.AvroParquetWriter.<init>(AvroParquetWriter.java:47)
at npm.parquet.ParquetMeasurementWriter.ensureOpenWriter(ParquetMeasurementWriter.java:91)
at npm.parquet.ParquetMeasurementWriter.write(ParquetMeasurementWriter.java:75)
at npm.ingestion.spark.StagingArea$Measurements.store(StagingArea.java:100)
at npm.ingestion.spark.StagingArea$StagingAreaStorage.store(StagingArea.java:80)
at npm.ingestion.spark.StagingArea.add(StagingArea.java:40)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.sendToStagingArea(Kafka2HDFSPM.java:207)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.consumeRecords(Kafka2HDFSPM.java:193)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.process(Kafka2HDFSPM.java:169)
at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:133)
at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:111)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
18/03/20 18:41:13 ERROR [Executor task launch worker-0] SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-0,5,main]
java.lang.OutOfMemoryError: Java heap space
at org.apache.parquet.column.values.dictionary.IntList.initSlab(IntList.java:90)
at org.apache.parquet.column.values.dictionary.IntList.<init>(IntList.java:86)
at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter.<init>(DictionaryValuesWriter.java:93)
at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter$PlainDoubleDictionaryValuesWriter.<init>(DictionaryValuesWriter.java:422)
at org.apache.parquet.column.ParquetProperties.dictionaryWriter(ParquetProperties.java:139)
at org.apache.parquet.column.ParquetProperties.dictWriterWithFallBack(ParquetProperties.java:178)
at org.apache.parquet.column.ParquetProperties.getValuesWriter(ParquetProperties.java:203)
at org.apache.parquet.column.impl.ColumnWriterV1.<init>(ColumnWriterV1.java:83)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.newMemColumn(ColumnWriteStoreV1.java:68)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.getColumnWriter(ColumnWriteStoreV1.java:56)
at org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.<init>(MessageColumnIO.java:183)
at org.apache.parquet.io.MessageColumnIO.getRecordWriter(MessageColumnIO.java:375)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.initStore(InternalParquetRecordWriter.java:109)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.<init>(InternalParquetRecordWriter.java:99)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:217)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:175)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:146)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:113)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:87)
at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:62)
at org.apache.parquet.avro.AvroParquetWriter.<init>(AvroParquetWriter.java:47)
at npm.parquet.ParquetMeasurementWriter.ensureOpenWriter(ParquetMeasurementWriter.java:91)
at npm.parquet.ParquetMeasurementWriter.write(ParquetMeasurementWriter.java:75)
at npm.ingestion.spark.StagingArea$Measurements.store(StagingArea.java:100)
at npm.ingestion.spark.StagingArea$StagingAreaStorage.store(StagingArea.java:80)
at npm.ingestion.spark.StagingArea.add(StagingArea.java:40)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.sendToStagingArea(Kafka2HDFSPM.java:207)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.consumeRecords(Kafka2HDFSPM.java:193)
at npm.ingestion.spark.Kafka2HDFSPM$SubsetProcessor.process(Kafka2HDFSPM.java:169)
at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:133)
at npm.ingestion.spark.Kafka2HDFSPM$FetchSubsetsAndStore.call(Kafka2HDFSPM.java:111)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
We tried to increase the available executor memory and reviewed the code, but we couldn't find anything wrong.
Another detail: we are using RDDs in Spark.
Has someone encountered a similar problem that has already been solved?
What is the heap configuration for the executor? By default, Java autotunes its heap according to the machine's memory. You need to change it to fit in your container with the -Xmx setting.
See this article about running Java in a container:
https://github.com/fabianenardon/docker-java-issues-demo/tree/master/memory-sample
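In Spark terms, capping the executor heap explicitly can be sketched as the following settings (values are illustrative, not taken from the question; spark.executor.memory becomes the executor JVM's -Xmx, and the overhead property name varies by cluster manager, e.g. spark.mesos.executor.memoryOverhead on Mesos):

```
# spark-defaults.conf (or equivalent --conf flags on spark-submit)
# Explicit executor heap instead of JVM ergonomics; must fit the container.
spark.executor.memory                 4g
# Headroom for off-heap use (direct buffers, netty, JVM overhead).
spark.mesos.executor.memoryOverhead   1024
```

The sum of heap and overhead should stay below the memory the container is granted, otherwise the container manager kills the executor even without a Java-level OOM.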

org.h2.jdbc.JdbcSQLException: General error: "java.lang.StackOverflowError" [50000-176]

StackOverflowError while using an H2 database in a multi-threaded environment.
Our application has a service layer that queries an H2 database and retrieves the result set.
The service layer connects to the H2 database using the open-source clustering middleware "Sequoia" (which offers load balancing and transparent failover) and also manages database connections.
https://sourceforge.net/projects/sequoiadb/
Our service layer has 50 service methods, and we have exposed them as EJBs. When invoking the EJBs, we get the response from the service (including the H2 read) with an average response time of 0.2 seconds.
The DAO layer queries the database using Hibernate Criteria, and we also use a JPA 2.0 entity manager to manage the datasource.
For load testing, we created a test class (with a main method) that invokes all 50 EJB methods.
50 threads were created, and all of them invoked the test class. The execution was fine on the first run, and all 50 threads successfully completed invoking the 50 EJB methods.
When we triggered the test class again, we encountered a StackOverflowError. The detailed stack trace is shown below:
org.h2.jdbc.JdbcSQLException: General error: "java.lang.StackOverflowError" [50000-176]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:344)
at org.h2.message.DbException.get(DbException.java:167)
at org.h2.message.DbException.convert(DbException.java:290)
at org.h2.server.TcpServerThread.sendError(TcpServerThread.java:222)
at org.h2.server.TcpServerThread.run(TcpServerThread.java:155)
at java.lang.Thread.run(Thread.java:784)
Caused by: java.lang.StackOverflowError
at java.lang.Character.digit(Character.java:4505)
at java.lang.Integer.parseInt(Integer.java:458)
at java.lang.Integer.parseInt(Integer.java:510)
at java.text.MessageFormat.makeFormat(MessageFormat.java:1348)
at java.text.MessageFormat.applyPattern(MessageFormat.java:469)
at java.text.MessageFormat.<init>(MessageFormat.java:361)
at java.text.MessageFormat.format(MessageFormat.java:822)
at org.h2.message.DbException.translate(DbException.java:92)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:343)
at org.h2.message.DbException.get(DbException.java:167)
at org.h2.message.DbException.convert(DbException.java:290)
at org.h2.command.Command.executeUpdate(Command.java:262)
at org.h2.jdbc.JdbcPreparedStatement.execute(JdbcPreparedStatement.java:199)
at org.h2.server.TcpServer.addConnection(TcpServer.java:140)
at org.h2.server.TcpServerThread.run(TcpServerThread.java:152)
... 1 more
at org.h2.engine.SessionRemote.done(SessionRemote.java:606)
at org.h2.engine.SessionRemote.initTransfer(SessionRemote.java:129)
at org.h2.engine.SessionRemote.connectServer(SessionRemote.java:430)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:311)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:107)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:91)
at org.h2.Driver.connect(Driver.java:74)
at org.continuent.sequoia.controller.connection.DriverManager.getConnectionForDriver(DriverManager.java:266)
We then even added a random thread sleep (10-25 seconds) between EJB invocations. The execution was successful three times (all 50 EJB invocations), and when we triggered it a fourth time, it failed with the above error.
We see the above failure even with a thread count of 25.
The failure is random, and there doesn't seem to be a pattern. Kindly let us know if we have missed any configuration.
Please let me know if you need any additional information. Thanks in advance for any help.
Technology stack:
1) Java 1.6
2) h2-1.3.176
3) Sequoia middleware that manages DB connection open and close
- Variable connection pool manager
- Initial pool size: 250
Thanks Lance Java for your suggestions. Increasing the stack size didn't help in our scenario (the additional stack only allowed a few more executions), for the following reason.
In our app we are using an entity manager (JPA), and the transaction attribute was not set. Hence each query to the database created a thread to carry out the execution. In JVisualVM we observed the DB threads: the number of live threads was equal to the total threads started.
Eventually our app created more than 30K threads, which resulted in the StackOverflowError.
Upon setting the transaction attribute, the threads are killed after DB execution, and all transactions are then managed by only 25-30 threads.
The issue is resolved now.
There are two main causes for a stack overflow error:
A bug containing a non-terminating recursive call
The allocated stack size for the JVM isn't big enough
Looking at your stack trace, it doesn't look recursive, so I'm guessing you are running out of space. Have you set the -Xss flag for your JVM? You might need to increase this value.
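To see how the stack size bounds what a thread can do, here is a minimal sketch (ordinary recursion, nothing H2-specific); running it with different -Xss values changes the depth it reaches before overflowing:

```java
public class StackDepthDemo {
    static int depth = 0;

    // Each call consumes one stack frame until the thread's stack is exhausted.
    static void recurse() {
        depth++;
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // A larger stack, e.g. java -Xss4m StackDepthDemo, raises this number.
            System.out.println("overflowed after " + depth + " frames");
        }
    }
}
```

If raising -Xss only delays the failure (as the asker later reported), the stack consumption is a symptom of something else, such as runaway thread creation.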

error when opening a lucene index: java.io.IOException: Map failed

I have a problem which I think is the same as that described here:
Error when opening a lucene index: Map failed
However, the solution does not apply in this case, so I am providing more details and asking again.
The index was created using Solr 5.3.
The line of code causing the exception is:
IndexReader indexReader = DirectoryReader.open(FSDirectory.open(Paths.get("the_path")));
The exception stacktrace is:
Exception in thread "main" java.io.IOException: Map failed: MMapIndexInput(path="/mnt/fastdata/ac1zz/JATE/solr-5.3.0/server/solr/jate/data_aclrd/index/_5t.tvd") [this may be caused by lack of enough unfragmented virtual address space or too restrictive virtual memory limits enforced by the operating system, preventing us to map a chunk of 434505698 bytes. Please review 'ulimit -v', 'ulimit -m' (both should return 'unlimited'), and 'sysctl vm.max_map_count'. More information: http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:907)
at org.apache.lucene.store.MMapDirectory.map(MMapDirectory.java:265)
at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:239)
at org.apache.lucene.codecs.compressing.CompressingTermVectorsReader.<init>(CompressingTermVectorsReader.java:144)
at org.apache.lucene.codecs.compressing.CompressingTermVectorsFormat.vectorsReader(CompressingTermVectorsFormat.java:91)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:120)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:58)
at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:50)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:731)
at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:50)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63)
at uk.ac.shef.dcs.jate.app.AppATTF.extract(AppATTF.java:39)
at uk.ac.shef.dcs.jate.app.AppATTF.main(AppATTF.java:33)
The solutions suggested in the exception message do not work in this case because I am running the application on a server and do not have permission to change those settings.
Namely,
ulimit -v unlimited
prints: "-bash: ulimit: virtual memory: cannot modify limit: Operation not permitted"
and
sysctl -w vm.max_map_count=10000000
gives: "error: permission denied on key 'vm.max_map_count'"
Is there any way I can solve this?
Thanks
I have found a solution, so I am answering my own question.
If you really cannot set ulimit or vm.max_map_count, the only solution, according to http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html, is to configure Solr (or, if you work with the Lucene API, to choose explicitly) to use SimpleFSDirectory (on Windows) or NIOFSDirectory; both are slower than the default.
For example:
DirectoryReader.open(new NIOFSDirectory(Paths.get("path_to_index"), FSLockFactory.getDefault()))

Unable to get resource from jedis

After running my application, I get this error after around 5 minutes.
Even though I am returning the resource after use, I keep getting this.
I have built jedis-2.2.2-SNAPSHOT.jar from the Jedis code base, since it is not released yet.
I had set minIdle=100, maxIdle=200 and maxActive=200. At the time of this exception, the connection count from my application to Redis was 122.
redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
at redis.clients.util.Pool.getResource(Pool.java:42)
Caused by: java.util.NoSuchElementException: Timeout waiting for idle object
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:442)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:360)
at redis.clients.util.Pool.getResource(Pool.java:40)
... 6 more
Did you check that Redis is still up and running?
If not, investigate why it died.
Try redis-cli in a terminal if you can; "info" would give you more details.
