My application populates a Neo4j graph database at /tmp/import.db. In addition to my unit tests, I like to use the Neo4j browser (AKA Neo4j Community) to do some digging in that same database. When the browser is running, my application crashes at startup because the database is locked:
Exception in thread "main" java.lang.RuntimeException: Error starting org.neo4j.kernel.EmbeddedGraphDatabase, /tmp/import.db
at org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:330)
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:63)
at org.neo4j.graphdb.factory.GraphDatabaseFactory$1.newDatabase(GraphDatabaseFactory.java:92)
at org.neo4j.graphdb.factory.GraphDatabaseBuilder.newGraphDatabase(GraphDatabaseBuilder.java:198)
at org.neo4j.graphdb.factory.GraphDatabaseFactory.newEmbeddedDatabase(GraphDatabaseFactory.java:69)
at no.marcello.cmdb.Import.<init>(Import.java:34)
at no.marcello.cmdb.Main.main(Main.java:10)
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.StoreLockerLifecycleAdapter@5d20e46' was successfully initialized, but failed to start. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:509)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:115)
at org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:307)
... 6 more
Caused by: org.neo4j.kernel.StoreLockException: Unable to obtain lock on store lock file: /tmp/import.db/store_lock. Please ensure no other process is using this database, and that the directory is writable (required even for read-only access)
at org.neo4j.kernel.StoreLocker.checkLock(StoreLocker.java:82)
at org.neo4j.kernel.StoreLockerLifecycleAdapter.start(StoreLockerLifecycleAdapter.java:44)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:503)
... 8 more
Caused by: java.io.IOException: Unable to lock sun.nio.ch.FileChannelImpl@70b0b186
at org.neo4j.kernel.impl.nioneo.store.FileLock.wrapFileChannelLock(FileLock.java:38)
at org.neo4j.kernel.impl.nioneo.store.FileLock.getOsSpecificFileLock(FileLock.java:93)
at org.neo4j.kernel.DefaultFileSystemAbstraction.tryLock(DefaultFileSystemAbstraction.java:89)
at org.neo4j.kernel.StoreLocker.checkLock(StoreLocker.java:74)
... 10 more
Now I have to run neo4j stop and neo4j start between every run of my application to see the changes. My hands are getting tired of that.
Can I disable locking of the database when using the Neo4j browser? I'd like to do that for testing purposes, as it helps a lot to see how my database model evolves while I'm populating it.
Database systems (small ones, anyway) can often run in either of two modes: embedded or server. In embedded mode, the idea is that one program, and only one program, can read and write the database at a time. This is quite useful for many applications, and it lets the database dispense with the coordination code needed to arbitrate access among multiple programs, which eats up time, code, and processing power.
In server mode, the database management system itself runs as a separate program, and it is built to have multiple programs access it.
Based on the class in the error message above (EmbeddedGraphDatabase), you have an embedded database, so the answer to your question is "no, you can't do that in this mode". You can switch to running Neo4j in server mode, I expect, but connecting to it will involve some code changes, and you then have the minor chores of making sure the database server is running whenever your program runs, and so on.
So you can keep the same database files, but you have to change the mode in which you run the database management system.
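If you do switch, the code change on the client side amounts to talking to the server over HTTP instead of opening the store directory yourself. As a minimal, hedged sketch using only the JDK (it assumes a Neo4j 2.x server on the default port 7474 and its transactional Cypher endpoint; the class name and query are placeholders of mine):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

// Sketch: run a Cypher statement against a Neo4j server over REST,
// instead of embedding the database in-process.
public class CypherOverRest {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:7474/db/data/transaction/commit");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Accept", "application/json");
        conn.setDoOutput(true);

        // Placeholder query; replace with your import statements.
        String payload = "{\"statements\":[{\"statement\":\"MATCH (n) RETURN count(n)\"}]}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        try (Scanner in = new Scanner(conn.getInputStream(), "UTF-8")) {
            while (in.hasNextLine()) {
                System.out.println(in.nextLine());
            }
        }
    }
}

The server then holds the store lock itself, so the browser and your importer can both go through it at the same time.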
I have been struggling to connect to an H2 database from a Spring Boot app using the following connection string, as mentioned in the Database URL Overview section:
spring.datasource.url=jdbc:h2:tcp://localhost:9092/~/test-db
I also tried many different combinations for the tcp (server mode) connection, but I still get errors such as "Connection is broken: java.net.SocketTimeoutException: connect timed out: localhost:9092" when running the Spring Boot app.
import java.sql.SQLException;
import org.h2.tools.Server;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class Application {
    // code omitted
    @Bean(initMethod = "start", destroyMethod = "stop")
    public Server h2Server() throws SQLException {
        return Server.createTcpServer("-tcp", "-tcpAllowOthers", "-tcpPort", "9092");
    }
}
So, how can I fix this problem and connect to the H2 database in server mode?
You seem to be a little confused.
H2 can run in two different 'modes'.
Local mode
Local mode means H2 'just works', and you access this mode with the file: thing in the JDBC connect URL. The JDBC driver itself does all the database work, as in, it opens files, writes data, it does it all. There is no 'database server' at all. Or, if you prefer, the JDBC driver is its own server though it opens no ports.
Server mode
In this case you need a (separate) JVM and separately fire up H2 in server mode and then you can use the same library (still h2.jar) to serve as a JDBC server. In this mode, the two things are completely separate - if you want, you can run h2.jar on one machine to be the server, and run the same h2.jar on a completely different machine just to connect to the other H2 machine. The database server machine does the bulk of the work, with the 'client' H2 just being the JDBC driver. H2 is no different than e.g. mysql or postgres in such a mode: You have one 'app' / JVM process that runs as a database engine, allowing multiple different processes, even coming from completely different machines halfway around the world if you want to, to connect to it.
You access this mode with the tcp: thing in the JDBC string.
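To make the two modes concrete, here is a hedged sketch of both connect styles side by side (paths and credentials are placeholders; the tcp: variant only works if an H2 server is already listening on port 9092):

import java.sql.Connection;
import java.sql.DriverManager;

public class H2Modes {
    public static void main(String[] args) throws Exception {
        // Local mode: the driver opens the database file itself;
        // no server process is involved.
        try (Connection local = DriverManager.getConnection(
                "jdbc:h2:file:~/test-db", "sa", "")) {
            System.out.println("local mode ok");
        }

        // Server mode: the driver connects over TCP to an H2 server
        // that must already be running.
        try (Connection remote = DriverManager.getConnection(
                "jdbc:h2:tcp://localhost:9092/~/test-db", "sa", "")) {
            System.out.println("server mode ok");
        }
    }
}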
If you really want, you can run this mode and still have it all on a single machine, even a single JVM, but why would you want to? Whatever made you think this would 'solve lock errors', it won't be fixed by running all this stuff in a single JVM. There are only two options:
You're mis-analysing the problem.
You really do have multiple separate JVM processes (either one machine with 2 java processes in the activity monitor / ps auxww output / task manager, or 2+ machines) all trying to connect to a single database, in which case you certainly do need this, yes.
How to do server mode right
You most likely want a separate JVM that starts first and hosts the H2 database; it needs to be running before the 'client' JVMs (the ones that will connect to it) start. Catalina is not the 'server' you are looking for; it is org.h2.tools.Server, and if it says 'not found' you need to fix your Maven imports. This needs to be a separate JVM (you COULD write code that goes: oh, hey, there isn't a separate JVM running with the H2 server, so I'll start it in-process right here right now, but that means the process needs to stay in the air forever, which is just weird. Hence, you want a separate JVM process for this).
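As a sketch, that separate server JVM can be as small as this; equivalently, you can run java -cp h2.jar org.h2.tools.Server -tcp -tcpAllowOthers -tcpPort 9092 from the command line:

import org.h2.tools.Server;

// Standalone H2 server process: start this JVM first, keep it
// running, and let the client JVMs connect via jdbc:h2:tcp://...
public class H2ServerMain {
    public static void main(String[] args) throws Exception {
        Server server = Server.createTcpServer(
                "-tcp", "-tcpAllowOthers", "-tcpPort", "9092").start();
        System.out.println("H2 server running: " + server.getURL());
        // Block forever; kill the process to stop the server.
        Thread.currentThread().join();
    }
}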
You haven't explained what you're doing. But, let's say what you're doing is this:
I have a CI script that fires up multiple separate JVMs, some even in parallel, which run a bunch of integration and unit tests in parallel.
Even though they run in parallel (or perhaps intentionally so), you want them all to run off a single DB. This is usually a really bad idea (you want tests to be isolated, so that running them on their own behaves identically. You don't want a test to fail in a way that can only be reproduced by running the same batch of 18 separate tests with the same run code, where one unrelated test fails in a specific fashion, whilst it's Tuesday, a full moon, Beethoven is playing in your music player, and it's warmer than 24° in the room, affecting the CPU's throttling, of course. Which is exactly what tends to happen if you try to re-use resources across multiple tests!) Still, you somehow really want this.
... then edit the CI script to first launch a JVM that hosts an H2 server; once that's up and running, presumably run a process that fills this database with test data; once that's done, run all the tests in parallel; and once those are all done, shut down the JVM and delete the DB file.
Exactly how to do the third part is a separate question - if you need help with that, ask a new question and name the relevant tool(s) you are using to run this stuff, paste the config files, etc.
I am currently using Spring's Mongo persistence layer for querying MongoDB. The collection I query contains about 4GB of data. When I run the find code in my IDE, it retrieves the data. However, when I run the same code on my server, it freezes for about 15 to 20 minutes and eventually throws the error below.

My concern is that it runs without a hitch in my IDE on my 4GB-RAM Windows PC and fails on my 14GB-RAM server. I have looked through the Mongo log, and there's nothing there that points to the problem. I also assumed the problem might be environmental since it works in my local Spring IDE, but the libraries on my local PC are the same as the ones on my server.

Has anyone had this kind of issue, or can anyone point me to what I'm doing wrong? Also, weirdly, the find operation works when I revert to Mongo's Java driver find methods.
I'm using mongo-java-driver - 2.12.1
spring-data-mongodb - 1.7.0.RELEASE
See below sample find operation code and error message.
List<HTObject> empObjects =mongoOperations.find(new Query(Criteria.where("date").gte(dateS).lte(dateE)),HTObject.class);
The exception I get is:
09:42:01.436 [main] DEBUG o.s.data.mongodb.core.MongoDbUtils - Getting Mongo Database name=[Hansard]
Exception in thread "main" org.springframework.dao.DataAccessResourceFailureException: Cursor 185020098546 not found on server 172.30.128.155:27017; nested exception is com.mongodb.MongoException$CursorNotFound: Cursor 185020098546 not found on server 172.30.128.155:27017
at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:73)
at org.springframework.data.mongodb.core.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:2002)
at org.springframework.data.mongodb.core.MongoTemplate.executeFindMultiInternal(MongoTemplate.java:1885)
at org.springframework.data.mongodb.core.MongoTemplate.doFind(MongoTemplate.java:1696)
at org.springframework.data.mongodb.core.MongoTemplate.doFind(MongoTemplate.java:1679)
at org.springframework.data.mongodb.core.MongoTemplate.find(MongoTemplate.java:598)
at org.springframework.data.mongodb.core.MongoTemplate.find(MongoTemplate.java:589)
at com.sa.dbObject.TestDb.main(TestDb.java:74)
Caused by: com.mongodb.MongoException$CursorNotFound: Cursor 185020098546 not found on server 172.30.128.155:27017
at com.mongodb.QueryResultIterator.throwOnQueryFailure(QueryResultIterator.java:218)
at com.mongodb.QueryResultIterator.init(QueryResultIterator.java:198)
at com.mongodb.QueryResultIterator.initFromQueryResponse(QueryResultIterator.java:176)
at com.mongodb.QueryResultIterator.getMore(QueryResultIterator.java:141)
at com.mongodb.QueryResultIterator.hasNext(QueryResultIterator.java:127)
at com.mongodb.DBCursor._hasNext(DBCursor.java:551)
at com.mongodb.DBCursor.hasNext(DBCursor.java:571)
at org.springframework.data.mongodb.core.MongoTemplate.executeFindMultiInternal(MongoTemplate.java:1871)
... 5 more
In short
The MongoDB result cursor is no longer available on the server.
Explanation
This can happen when you use sharding and a connection to a mongos fails over, or when you run into the cursor timeout (see http://docs.mongodb.org/manual/core/cursors/#closure-of-inactive-cursors).
You're performing a query that loads all objects into one list (mongoOperations.find). Depending on the result size, this may take a long time. Using an Iterator can help keep memory pressure down, but even loading huge amounts through an Iterator hits a limit at some point.
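Since you mention the plain driver's find works, here is a hedged sketch of the iterator route with the 2.12 driver, which also turns off the server-side idle-cursor timeout that commonly causes CursorNotFound (the host and database name are from your post; the collection name and dates are guesses of mine):

import java.text.SimpleDateFormat;
import java.util.Date;
import com.mongodb.BasicDBObject;
import com.mongodb.Bytes;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;

// Sketch: stream documents one at a time instead of materialising
// a multi-GB result set into a single List.
public class CursorFindExample {
    public static void main(String[] args) throws Exception {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");
        Date dateS = fmt.parse("2015-01-01");
        Date dateE = fmt.parse("2015-01-31");

        MongoClient client = new MongoClient("172.30.128.155", 27017);
        try {
            DB db = client.getDB("Hansard");
            DBCollection col = db.getCollection("htObject"); // collection name is a guess

            DBObject query = new BasicDBObject("date",
                    new BasicDBObject("$gte", dateS).append("$lte", dateE));
            DBCursor cursor = col.find(query)
                    .addOption(Bytes.QUERYOPTION_NOTIMEOUT); // no idle-cursor timeout
            try {
                while (cursor.hasNext()) {
                    DBObject doc = cursor.next();
                    // process doc here rather than collecting everything
                }
            } finally {
                cursor.close();
            }
        } finally {
            client.close();
        }
    }
}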
If you have to query very large amounts of data, you should partition the results, either with paging (which gets slower the more records you skip) or by splitting your query range (you already have a date range, so this could work).
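And a sketch of the range-split idea against your existing Spring query (day-sized chunks are an arbitrary choice; HTObject is your class):

import java.util.ArrayList;
import java.util.Calendar;
import java.util.Date;
import java.util.List;
import org.springframework.data.mongodb.core.MongoOperations;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

// Sketch: break one huge date-range query into day-sized queries so
// no single server-side cursor has to survive the whole read.
public class ChunkedFind {
    public static List<HTObject> findInChunks(MongoOperations mongoOperations,
                                              Date dateS, Date dateE) {
        List<HTObject> results = new ArrayList<HTObject>();
        Calendar cal = Calendar.getInstance();
        cal.setTime(dateS);
        while (cal.getTime().before(dateE)) {
            Date chunkStart = cal.getTime();
            cal.add(Calendar.DAY_OF_MONTH, 1);
            boolean last = !cal.getTime().before(dateE);
            Date chunkEnd = last ? dateE : cal.getTime();
            // Half-open chunks avoid duplicates; the last chunk is
            // closed so documents stamped exactly at dateE are kept.
            Criteria c = last
                    ? Criteria.where("date").gte(chunkStart).lte(chunkEnd)
                    : Criteria.where("date").gte(chunkStart).lt(chunkEnd);
            results.addAll(mongoOperations.find(new Query(c), HTObject.class));
        }
        return results;
    }
}

In practice you would process each chunk as it arrives rather than accumulating the whole result list; the accumulation here only mirrors your original call.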
I have a web service application using Cassandra 2.0 and the DataStax Java driver 2.0.2. I sometimes get the stack trace below when trying to write to or read from the database, especially if the application has been sitting idle for a while (like overnight). The error usually goes away when I retry; however, it sometimes persists and I have to restart the web app to get rid of it.
I wonder if this is some sort of "stale connection" issue. However, the DataStax Java driver documentation indicates it is supposed to keep connections alive.
I did a Google search on the error message, and Google returned only two (!) hits. They are related. This is the answer in one of the results:
Sylvain Lebresne Apr 2 You're running into
https://datastax-oss.atlassian.net/browse/JAVA-250. We'll fix it soon
hopefully (I have some half-finished patch that I need to finish), but
currently, if you restart a whole cluster without doing queries during
the restat, it can sometimes happen that you'll get this before the
cluster properly reconnect. In the meantime and as a workaround, you
can always make sure to run a few trivial queries while you're doing
the cluster restart to avoid it.
However, this does not look like my scenario because we are not restarting the cluster at all. Does anyone have some insight into this error?
Stacktrace:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: ec2-54-197-xxx-xxx.compute-1.amazonaws.com/54.197.xxx.xxx:9042 (com.datastax.driver.core.ConnectionException: [ec2-54-197-xxx-xxx.compute-1.amazonaws.com/54.197.xxx.xxx:9042] Write attempt on defunct connection))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:65)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:256)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:172)
at com.datastax.driver.core.SessionManager.execute(SessionManager.java:92)
I have what I believe is the exact same issue (Write attempt on defunct connection) on my development machine intermittently.
It seems to happen when my dev machine goes to sleep while the server is up. Obviously there's no power management in the AWS cluster you're running, but it gives you a hint - the key is that something is breaking your control connection or intermittently preventing network connectivity between your hosts.
You should see the reconnection thread in your logs:
21:34:51.616 [Reconnection-1] ERROR c.d.driver.core.ControlConnection - [Control connection] Cannot connect to any host, scheduling retry in 2000 milliseconds
The next request after this will always succeed in my experience.
TL;DR: check for networking issues or any intermittent shutdown of servers that could break the control connection. The driver should do a better job of re-establishing broken control connections; it sounds like they're working on that in JAVA-250.
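Until that fix lands, you can also make the 2.0.x driver more aggressive about keeping connections alive and re-establishing them. A hedged sketch (the contact point is the redacted host from your trace; the class name is mine):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SocketOptions;
import com.datastax.driver.core.policies.ConstantReconnectionPolicy;

// Sketch: enable TCP keep-alive on the driver's connections and retry
// reconnects every second instead of backing off, so a broken control
// connection is re-established quickly.
public class ClusterFactory {
    public static Session connect() {
        Cluster cluster = Cluster.builder()
                .addContactPoint("ec2-54-197-xxx-xxx.compute-1.amazonaws.com")
                .withSocketOptions(new SocketOptions().setKeepAlive(true))
                .withReconnectionPolicy(new ConstantReconnectionPolicy(1000L))
                .build();
        return cluster.connect();
    }
}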
I got the JPA exception
"javax.persistence.PersistenceException:
Transaction failed to flush"
Then I deleted my local datastore (datastore-indexes-auto.xml and local_db.bin) from my system and recreated all the data. After that, the exception was gone. I want to know what just happened.
The following is the stack trace:
[RPC Fault faultString="org.springframework.orm.jpa.JpaSystemException : Transaction failed to flush; nested exception is javax.persistence.PersistenceException: Transaction failed to flush" faultCode="Server.Processing" faultDetail="null"]
at mx.rpc::AbstractInvoker/http://www.adobe.com/2006/flex/mx/internal::faultHandler()[C:\autobuild\3.5.0\frameworks\projects\rpc\src\mx\rpc\AbstractInvoker.as:290]
at mx.rpc::Responder/fault()[C:\autobuild\3.5.0\frameworks\projects\rpc\src\mx\rpc\Responder.as:58]
at mx.rpc::AsyncRequest/fault()[C:\autobuild\3.5.0\frameworks\projects\rpc\src\mx\rpc\AsyncRequest.as:103]
at NetConnectionMessageResponder/statusHandler()[C:\autobuild\3.5.0\frameworks\projects\rpc\src\mx\messaging\channels\NetConnectionChannel.as:581]
at mx.messaging::MessageResponder/status()[C:\autobuild\3.5.0\frameworks\projects\rpc\src\mx\messaging\MessageResponder.as:222]
I don't know Google App Engine, but I assume you have limited DB space there? Maybe you just ran out of space?
I believe it is due to this problem with the time it takes App Engine to start up a new instance, which causes timeout errors.
http://googleappengine.blogspot.com/2009/12/request-performance-in-java.html
If you've been following the App Engine Java runtime group, you may
have noticed some discussions about performance of the Java runtime.
Many of you have complained about hard-to-predict
DeadlineExceededExceptions, or unexpectedly slow requests that use a
high amount of CPU. These issues often have the same root cause: App
Engine is preparing a new instance of your code to respond an incoming
request.
It was reported on the Grails issue tracker: http://jira.grails.org/browse/GPAPPENGINE-67
There is an open issue Google has not fixed yet, even after several years.
https://code.google.com/p/googleappengine/issues/detail?id=7706
As a Java project becomes more complicated and requires loading more
classes & jars at startup, instance startup time degrades to the point
where instances blow the 60s user-facing request deadline.
You MIGHT be able to work around this by keeping an idle instance resident in memory so it doesn't have to spin up.
https://developers.google.com/appengine/docs/adminconsole/performancesettings#scheduler
https://appengine.google.com/settings
I have an application running in WebSphere Portal Server inside WebSphere Application Server 6.0 (WAS). In this application, for one particular piece of functionality that takes a long time to complete, I fire a new thread that performs the action. This new thread opens a new Hibernate Session and starts performing DB transactions with it. Sometimes (I haven't been able to see a pattern) the transactions inside the thread work fine and the process completes successfully. At other times, however, I get the errors below:
org.hibernate.exception.GenericJDBCException: could not load an entity: [OBJECT NAME#218294]
...
Caused by: com.ibm.websphere.ce.cm.ObjectClosedException: DSRA9110E: Connection is closed.
Method cleanup failed while trying to execute method cleanup on ManagedConnection WSRdbManagedConnectionImpl@642aa0d8 from resource jdbc/MyJDBCDataSource. Caught exception: com.ibm.ws.exception.WsException: DSRA0080E: An exception was received by the Data Store Adapter. See original exception message: Cannot call 'cleanup' on a ManagedConnection while it is still in a transaction..
How can I stop this from happening? Why does WAS seem to want to kill my connections even though they're not done? Is there a way I can stop WAS from attempting to close this particular connection?
Thanks
I mentioned two possible causes in my other answer: 1. the hibernate.connection.release_mode optional parameter, or 2. a problem with unmanaged threads. Now that I read this question, I really start to think that your problem is related to the fact that you're spawning your own threads. Since they aren't managed by the container, connections used in these threads may appear "leaked" (not closed properly), and I wouldn't be surprised if WAS tries to recover them at some point.
If you want to start a long-running job, you should use a WorkManager. Don't spawn threads yourself.
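For reference, a hedged sketch of the CommonJ WorkManager route on WAS; wm/default is the usual default WorkManager resource reference, but check the JNDI name configured for your server:

import javax.naming.InitialContext;
import commonj.work.Work;
import commonj.work.WorkManager;

// Sketch: hand the long-running job to a container-managed thread
// instead of spawning one yourself, so WAS keeps the connection
// context intact.
public class LongRunningJobLauncher {
    public void launch() throws Exception {
        InitialContext ctx = new InitialContext();
        WorkManager wm = (WorkManager) ctx.lookup("java:comp/env/wm/default");
        wm.schedule(new Work() {
            public void run() {
                // open the Hibernate Session and do the DB work here
            }
            public void release() {
                // called if the container wants to stop the work
            }
            public boolean isDaemon() {
                return false; // not a long-lived daemon thread
            }
        });
    }
}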