I currently have a Java application that updates a Neo4j database every day.
I then have another application that queries the database using traversals, by creating an embedded database with the same storage path.
How should I go about keeping the server running and directing the queries at the already-running instance every time the querying Java application runs? I'm unsure how to do this without creating an embedded server instance every time.
I could keep my current approach, but the problem is that it has to load the database every single time a user requests a query, and that is expensive.
Thanks!
You can run a server on top of an embedded database: http://docs.neo4j.org/chunked/milestone/server-embedded.html
That way you can keep your embedded app running, import the data with a timer task, and at the same time offer the server's web UI.
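For reference, a minimal sketch of what that page describes, assuming the Neo4j 1.x/2.x-era API (WrappingNeoServerBootstrapper was removed in later releases, so treat the class names as version-dependent):

```java
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.kernel.GraphDatabaseAPI;
import org.neo4j.server.WrappingNeoServerBootstrapper;

public class EmbeddedWithWebUi {
    public static void main(String[] args) {
        // Open the store once in this JVM; daily imports and user queries
        // both run against this single embedded instance.
        GraphDatabaseAPI graphDb = (GraphDatabaseAPI) new GraphDatabaseFactory()
                .newEmbeddedDatabase("/path/to/graph.db");

        // Wrap the embedded database so it also serves the REST API / web UI.
        WrappingNeoServerBootstrapper server =
                new WrappingNeoServerBootstrapper(graphDb);
        server.start();

        // ... keep the app running; run the daily update as a timer task ...
        // On shutdown: server.stop(); graphDb.shutdown();
    }
}
```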
Not only is it expensive, but if I understood your application concept correctly, you have a potential lock store error.
If your updating application is doing something in the database, and thus has an instance of the embedded database running, while at the same time your other application tries to create an instance of the embedded database to perform a query, you'd run into a lock store error.
I don't know whether you have taken any precautions to prevent this, or whether you've just been lucky so far that these actions have not occurred simultaneously, but I would look into it.
In my Java/Spring application, a database record is fetched at server init and stored as a static field. Currently we do an MBean refresh to refresh the database values across all instances. Is there any other way to programmatically refresh the database value across all instances of the server? I am reading about EntityManager refresh. Will that work across all instances? Any help would be greatly appreciated.
You could schedule a reload every 5 minutes, for example.
Or you could send events and have all instances react to them.
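A minimal sketch of the scheduled-reload option using Spring's @Scheduled support; Settings and SettingsRepository here are hypothetical stand-ins for your record and its DAO:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Configuration
@EnableScheduling
class SchedulingConfig { }

@Component
class SettingsCache {

    private final SettingsRepository repository; // hypothetical DAO
    private volatile Settings current;           // replaces the static field

    SettingsCache(SettingsRepository repository) {
        this.repository = repository;
    }

    // Every instance re-reads the record from the database every 5 minutes,
    // so all servers converge without any MBean call.
    @Scheduled(fixedDelay = 5 * 60 * 1000)
    public void reload() {
        current = repository.loadSettings();
    }

    public Settings get() {
        return current;
    }
}
```

Note that EntityManager.refresh only refreshes a managed entity in the local persistence context; it does not propagate anything to other instances, so each server still needs its own trigger (a timer or an event).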
Until now, communication between databases and servers has been one-sided, i.e. the app server requests data from the database. As you mentioned, this generally leads to the problem that the application servers cannot know about a database change when the application runs in cluster mode.
The current solution is to refresh the fields from time to time (a poll-based technique).
To make this a push-based model, we can create wrapper APIs over the database and let those wrappers pass the change on to all the application servers.
By this I mean: do not update database values directly from one application server; instead, on an update request, send the change to another application that keeps track of your application servers and pushes an event (via an API call or a queue) telling them to refresh the affected database table (sketched below).
Luckily, some newer databases (MongoDB, for example) now provide this update push to app servers out of the box.
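A rough sketch of that wrapper idea in plain Java; all names are hypothetical, and in a real deployment the push would go over a queue or an HTTP call to each server rather than in-process listeners:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical write-side wrapper: updates go through here instead of
// straight to the database, so every subscriber can be told to refresh.
class TableUpdateBroker {

    interface RefreshListener {
        // e.g. each app server re-reads its static fields for this table
        void onTableChanged(String tableName);
    }

    private final List<RefreshListener> listeners = new CopyOnWriteArrayList<>();

    void register(RefreshListener listener) {
        listeners.add(listener);
    }

    void update(String tableName, Runnable databaseWrite) {
        databaseWrite.run();                 // perform the actual DB update
        for (RefreshListener l : listeners) {
            l.onTableChanged(tableName);     // push the change out
        }
    }
}
```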
So I have a problem. Currently my application connects to a single database and supports multiple users, so for different landscapes we deploy entirely separate applications.
I need a solution where the application stays the same (a single WAR deployment) but is able to connect to different DBs across different landscapes.
For example, a user in the UK uses the application against the UK database, and when another user subsequently logs in from Bangladesh, he sees the data of the DB schema for Bangladesh, and so on.
Currently we create JDBC connections in a connection pool built in Java and use it throughout the application. We also load static data into hash maps during server start-up, but the same would not be possible with multiple DBs, since one would overwrite the other's static data.
I have been scratching around here and there; if someone can point me in the right direction, I would be grateful.
You have to understand that your application's start-up and a user's geography are not connected attributes. You simply need to switch to / pick the correct DB connection while doing CRUD operations for a user of a particular geography.
So in my opinion your app's memory requirement is going to be bigger now (than previously), but the rest of the setup is simple.
At app start-up, initialize DB connection pools for all databases and load static data for all geographies, then pick the connection and static data according to the logged-in user's geography.
There might be multiple ways to implement this switching / choosing logic, and it depends very much on which frameworks and libraries you are using.
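As one framework-neutral illustration, a plain-Java sketch that keeps one pool and one static-data map per landscape and picks them by the logged-in user's geography (all names are hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.sql.DataSource;

// Hypothetical registry, filled once at start-up for every landscape.
class LandscapeRegistry {

    private final Map<String, DataSource> pools = new ConcurrentHashMap<>();
    private final Map<String, Map<String, Object>> staticData = new ConcurrentHashMap<>();

    void register(String landscape, DataSource pool, Map<String, Object> data) {
        pools.put(landscape, pool);
        staticData.put(landscape, data);
    }

    // Called per request, keyed by the user's geography ("UK", "BD", ...).
    DataSource poolFor(String landscape) {
        return pools.get(landscape);
    }

    Map<String, Object> staticDataFor(String landscape) {
        return staticData.get(landscape);
    }
}
```

If you happen to be on Spring, its AbstractRoutingDataSource implements this kind of per-request DataSource switching for you.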
I just wonder whether I can (and whether it is a good idea to) put an embedded database on a server computer and run my desktop app on a computer that has access to the server's folders, getting and inserting data from that database.
For example, I have one server machine and 3 computers accessing it. I want them to insert/update data in the server's database, which is installed embedded-style.
If I can't, which method is the easier and free way of doing it?
EDIT: Actually that server is not a server; it is just a computer the others can access.
It isn't a good idea to share an embedded database's files between different applications. For most embedded database implementations it is not even possible, because the embedded database engine needs exclusive access to the underlying data files. Furthermore, accessing the database files over a shared folder carries a performance penalty.
I know of only two databases that allow shared database file access: SQLite and MS Access. Java and MS Access are not a good combination; avoid it, and use it only if you are forced to. For SQLite, I don't know how well it performs with different processes on the same machine, but over a shared folder I think it would work only in the simplest cases.
So if you have multiple client applications accessing the same database, you should install a database server. A database server is made for exactly such a scenario: it manages the server-local database files efficiently and can handle many clients at the same time. There are simple ones like Apache Derby or H2, which are Java-only implementations and very easy to use. If you need more performance, you can go with MySQL or PostgreSQL, but these are more complex to administer.
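For example, H2 can be started as a small TCP server on the machine that holds the files, and the three client computers then connect over JDBC instead of through the shared folder (host, port, and paths below are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import org.h2.tools.Server;

public class H2ServerExample {
    public static void main(String[] args) throws Exception {
        // On the "server" computer: serve its local database files over TCP.
        Server server = Server
                .createTcpServer("-tcpPort", "9092", "-tcpAllowOthers")
                .start();

        // On each client computer: connect over the network.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:tcp://server-host:9092/~/mydb", "sa", "")) {
            conn.createStatement().execute(
                    "CREATE TABLE IF NOT EXISTS t(id INT PRIMARY KEY)");
        }

        server.stop();
    }
}
```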
The word "embedded" normally means running inside a given JVM. To access it from clients, as opposed to from other code running in the same JVM, an method of connecting will need to be supplied, such as a connection protocol + port. Well, by the time you do all that, you have in fact rolled your own server.
If you just want filesystem access, well normally databases lock the files they're using. And if they don't, you will anyway be missing all of the control and ACID constraints that a database normally gives you.
H2 database can be run in different modes: embedded, in-memory, standalone and mixed.
I think you are asking about the last one, "mixed" mode.
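In mixed mode the first process opens the database embedded (fast, local) and automatically starts a server for the others; as far as I know this is switched on with the AUTO_SERVER flag in the JDBC URL:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class H2MixedMode {
    public static void main(String[] args) throws Exception {
        // The same URL works for every process: the first one opens the file
        // embedded and starts the automatic server, later ones are routed to it.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:/data/mydb;AUTO_SERVER=TRUE", "sa", "")) {
            System.out.println("Connected via " + conn.getMetaData().getURL());
        }
    }
}
```

Note that every process still needs access to the database file (that is how it discovers the automatic server), so this can fit a shared-folder setup, but a plain client/server setup is usually more robust.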
I am investigating a bug in my application, which runs on a WebLogic 10.3.4 server. For this investigation I sometimes need to clear some tables in the database directly (using SQL Navigator), but these changes are not reflected in the WebLogic server unless I restart it, and restarting every time I modify data in the database is time-consuming.
I was wondering whether there is an easy and quick way to clear the database cache in the WebLogic server and force it to reload the modified data. I think that if I add an EJB which calls the flush method for every entity, calling that method would do the task.
But do you have any suggestions, or any other way to do this task, maybe by changing a WebLogic server setting?
Is there a single method call that forces a flush of all the entities in the current container?
JPA 2.0 has a Cache API that allows you to clear the cache (evictAll).
EclipseLink also has its own API that predates JPA 2.0.
See:
http://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Caching/Cache_API
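A minimal sketch of the JPA 2.0 call; the persistence unit name is a placeholder:

```java
import javax.persistence.Cache;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class CacheEviction {
    public static void main(String[] args) {
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("my-unit"); // placeholder

        // Evict every entity from the shared (second-level) cache, so the next
        // query re-reads the rows modified directly in the database.
        Cache cache = emf.getCache();
        cache.evictAll();

        // Or evict just one entity type:
        // cache.evict(SomeEntity.class);
    }
}
```

In a container you would typically inject the EntityManagerFactory (e.g. with @PersistenceUnit) rather than create it yourself.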
In my Java-based application, I need a job that reads data from a set of tables and inserts it into another table. In my first design, I created an Oracle job and scheduled it to run the process frequently.
Unfortunately, when the job fails, there is not enough information available about the root cause of the failure. In addition, deploying the system across many system instances has made the work harder.
As an alternative, I am trying to move the job into my application server as a WebLogic job. Is this a good design or not?
Having moved my jobs into the application server, I have found the following advantages (a sketch of such a job follows below):
Tracking job failures is easier.
Non-DBA users can easily read the application server logs and fix the issues. (Many users do not have access to the DB in production.)
The logic of the job has moved from my data access layer into my business logic layer, which is more appropriate from a design-pattern point of view.
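For what it's worth, a minimal sketch of such a server-side job, assuming an EJB 3.1-capable server (on older servers the EJB 3.0 TimerService API is the equivalent); the copy logic is a placeholder:

```java
import java.util.logging.Logger;
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
public class CopyTablesJob {

    private static final Logger LOG = Logger.getLogger(CopyTablesJob.class.getName());

    // Runs inside the app server every night at 02:00; failures land in the
    // server log, where non-DBA users can read the full context.
    @Schedule(hour = "2", minute = "0", persistent = false)
    public void run() {
        try {
            copyRowsFromSourceTables(); // hypothetical: the read-and-insert logic
        } catch (Exception e) {
            LOG.severe("Nightly table copy failed: " + e);
        }
    }

    private void copyRowsFromSourceTables() {
        // JDBC/JPA code that reads the source tables and inserts into the target.
    }
}
```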