I am using Postgres 9.3 on my production servers. I would like to achieve high availability of Postgres DB using Master-Master configuration where each master would run in an Active-Active mode with bidirectional replication.
I have 2 Java Spring REST web services, each pointed at its own database engine with its own storage; for HA, each web service is also configured to point to the other service's database.
Now if either database fails, I want the surviving database server to keep working, and when the failed one recovers, the data should be synced back to it.
I tried bidirectional replication using Bucardo 5.3.1, but the recovered database does not get updated with the new data, and the Bucardo syncs have to be kicked off again manually (see bug: https://github.com/bucardo/bucardo/issues/88).
Is there any way I can achieve this with some other bi-directional replication tool?
Or is there any other way to run 2 Postgres engines in an Active-Active configuration against shared storage?
2ndQuadrant released Postgres-BDR, a patched version of PostgreSQL that can do multi-master replication using logical WAL decoding. You will find more information here: https://www.2ndquadrant.com/fr/resources/bdr/
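For reference, setting up a two-node BDR group looks roughly like this. This is a sketch assuming the BDR 1.x extension; the node names and DSNs are made up, and the exact function signatures vary between BDR versions, so check the docs for yours:

```sql
-- On the first node (hypothetical names/DSNs):
CREATE EXTENSION btree_gist;
CREATE EXTENSION bdr;
SELECT bdr.bdr_group_create(
    local_node_name   := 'node1',
    node_external_dsn := 'host=node1 dbname=mydb'
);

-- On the second node, join the group created above:
CREATE EXTENSION btree_gist;
CREATE EXTENSION bdr;
SELECT bdr.bdr_group_join(
    local_node_name   := 'node2',
    node_external_dsn := 'host=node2 dbname=mydb',
    join_using_dsn    := 'host=node1 dbname=mydb'
);
```

After the join completes, writes on either node are replicated to the other, which is the recovery-and-resync behaviour the question asks about.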
I have finally decided to move to EnterpriseDB's Postgres (a paid license), which provides replication tools with a GUI that are easy to use and configure.
I have an application written in Java.
We are using a MySQL DB.
Is it possible to integrate that MySQL DB with Apache Ignite as an in-memory cache, and to use that configuration without any changes to the Java application (of course, some DB connection details would have to change)?
In other words, would my application do the same stuff, the only difference being that it connects to Apache Ignite instead of MySQL?
Is this kind of configuration possible?
I suppose you are looking for the write-through feature. I'm not sure what your use case is, but you should be aware of some limitations, e.g. your data has to be preloaded into Ignite before you run SELECT queries. From a very abstract perspective, you need to define POJOs and implement a custom CacheStore interface. GridGain Control Center can do the latter for you automatically; check this demo as a reference.
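As a sketch of what the cache-store wiring could look like, assuming a Spring XML Ignite configuration and the built-in CacheJdbcPojoStoreFactory (the cache name and data-source bean name here are made up):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
  <property name="name" value="personCache"/>
  <!-- read-through: cache misses are loaded from MySQL;
       write-through: cache updates are written back to MySQL -->
  <property name="readThrough" value="true"/>
  <property name="writeThrough" value="true"/>
  <property name="cacheStoreFactory">
    <bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory">
      <property name="dataSourceBean" value="mysqlDataSource"/>
      <!-- the table-to-POJO type mappings go here; tooling can generate them -->
    </bean>
  </property>
</bean>
```

The application would then issue reads/writes against the Ignite cache instead of MySQL directly, with Ignite keeping the database in sync behind the scenes.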
Does anyone know whether GlassFish 5 supports global transactions with 2PC (the XA protocol), without installing extra tools?
I have looked for information on the GlassFish page ("The Open Source Java EE Reference Implementation"), where I downloaded the app server, and on other pages, but I have not had any luck.
I tried doing transactions in two microservices that each insert a value into the database. I configured GlassFish's JNDI with "com.mysql.jdbc.jdbc2.optional.MysqlXADataSource" and it looks like it is working, but when I check the database, only the value from one service has been added (the global transaction with 2PC does not work). I am beginning to think that GlassFish does not support 2PC.
I have read that it can be done with Tomcat, but there I need to add tools like Atomikos, Bitronix, etc. The idea is to do it with GlassFish without installing anything extra.
Regards.
Does anyone know whether GlassFish 5 supports global transactions with 2PC (the XA protocol), without installing extra tools?
GlassFish 5 supports transactions using XA datasources. You can create a program that executes transactions combining operations on multiple databases. For instance, you can create a transaction that performs operations on both an Oracle and an IBM DB2 database. If one of the operations in the transaction fails, the other operations (in the same and in the other databases) are either not executed or rolled back.
I tried doing transactions in two microservices that each insert a value into the database. I configured GlassFish's JNDI with "com.mysql.jdbc.jdbc2.optional.MysqlXADataSource" and it looks like it is working, but when I check the database, only the value from one service has been added.
If your program invokes a REST/web service within a transaction, the operations performed by that other REST/web service do not join the transaction. An error in your program will not roll back the operations already performed by the invoked REST/web service.
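For what it's worth, an XA-capable pool in GlassFish can be created from the command line roughly like this (the pool/resource names and credentials below are made up; the datasource class matches the Connector/J class quoted above):

```shell
asadmin create-jdbc-connection-pool \
  --restype javax.sql.XADataSource \
  --datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlXADataSource \
  --property user=app:password=secret:serverName=localhost:portNumber=3306:databaseName=mydb \
  mysql-xa-pool

asadmin create-jdbc-resource --connectionpoolid mysql-xa-pool jdbc/mysql-xa

# Verify the pool configuration
asadmin ping-connection-pool mysql-xa-pool
```

The key point remains: GlassFish's transaction manager coordinates 2PC only across XA resources enlisted in one transaction inside one server process; the transaction context does not propagate over a plain REST call to a second microservice.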
What we're trying to do is what Meteor is doing with Mongo with LiveQuery, which is this:
Livequery can connect to the database, pretend to be a replication slave, and consume the replication log. Most databases support some form of replication, so this is a widely applicable approach. This is the strategy that Livequery prefers with MongoDB, since MongoDB does not have triggers.
Source of that quote here
So is there a way with com.mongodb.*; in Java to create such replication slave so that it receives any notifications for each update that happens on the primary Mongo server?
Also, I don't see any replication log in the local database. Is there a way to turn it on?
If it's not possible to do it in Java, is it possible to create such solution in other languages (C++ or Node.js maybe)?
You need to start mongod with the --replSet rsName option and then run rs.initiate(). After that you will see the oplog.rs capped collection in the local database.
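Concretely, the steps look like this (the data path and the set name rs0 are arbitrary):

```shell
# 1. Start mongod as a (possibly single-node) replica set so it keeps an oplog
mongod --dbpath /data/db --replSet rs0

# 2. In the mongo shell, initialize the set once:
#      rs.initiate()
# 3. The oplog is then visible as a capped collection in the local database:
#      use local
#      db.oplog.rs.find().sort({$natural: -1}).limit(1)
```

Once the oplog exists, any driver (including the Java one) can open a tailable cursor on local.oplog.rs to receive updates as they happen.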
What you are describing is commonly referred to as "tailing the oplog", which is based on using a Tailable Cursor on a capped collection (the MongoDB oplog in this case). The mechanics are relatively simple, and there are numerous oplog-tailing examples out there written in Java; here are a few:
Event Streaming with MongoDB
TailableCursorExample
Wordnik mongo-admin-utils
IncrementalBackupUtil
We're trying to horizontally scale a JPA based application, but have encountered issues with the second level cache of JPA. We've looked at several solutions (EhCache, Terracotta, Hazelcast) but couldn't seem to find the right solution. Basically what we want to achieve is to have multiple application servers all pointing to a single cache server that serves as the JPA's second level cache.
From a non-Java perspective, it would look like several PHP servers all pointing to one centralised memcache server as its cache service. Is this currently possible with Java?
Thanks
This is in response to the comment above.
Terracotta will be deployed on its own server.
Each of the app servers will have Terracotta drivers, which store/retrieve data to and from the Terracotta server.
The Ehcache API, present in the application WAR, will invoke the Terracotta drivers to store data on the Terracotta server.
The Hibernate API will maintain the L1 cache; in addition, it will use the Ehcache API to save/retrieve data to and from the L2 cache, blissfully unaware of how the Ehcache API performs the task.
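As a sketch, the layering described above maps to configuration like this (Hibernate 4-era property names; the Terracotta host/port are made up):

```xml
<!-- persistence.xml: enable the L2 cache through Ehcache -->
<property name="hibernate.cache.use_second_level_cache" value="true"/>
<property name="hibernate.cache.region.factory_class"
          value="org.hibernate.cache.ehcache.EhCacheRegionFactory"/>

<!-- ehcache.xml: make Ehcache store/retrieve L2 data on the Terracotta server -->
<ehcache>
  <terracottaConfig url="terracotta-host:9510"/>
  <defaultCache maxEntriesLocalHeap="10000" eternal="false">
    <terracotta/>
  </defaultCache>
</ehcache>
```

Hibernate only ever talks to the Ehcache API; the `<terracotta/>` element is what redirects cache storage to the central Terracotta server, which is how every app server ends up sharing one L2 cache.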
Context: I'm working on a Spring MVC project, using Hibernate to generate the database schema from my annotated classes. It uses a MySQL server running on my local machine. I'm aiming to get hosting and make my website live.
Do I use the MySQL server of a hosting provider in that case to run my database?
What are the pros and cons? Would they normally do DB backups, or is it worth doing that myself and storing them on my machine?
Am I going to lose data in case of a server reboot?
Thanks in advance. I'm new to this, hence feel free to moderate questions if it sounds unreasonable.
Much of this will depend on how you host your site. I would recommend looking into Cloud Foundry, a free Platform as a Service (PaaS) provided by the folks at VMware. If you're using Spring to set up Hibernate, Cloud Foundry can automatically hook your application into a MySQL service it provides.
In any case, your database will most likely reside on the host's server, unless you establish a static IP for your machine and expose the database services. At that point, you might as well host your own site.
Where the data is stored depends on the type of host. For instance, if you use a PaaS, they choose where on the server your database is stored; it is transparent to you. If you go with a dedicated server, you will most likely have to install your database software yourself.
Most databases backing websites should provide persistent storage, or be configurable to do so. I'm not sure why your MySQL database loses data after you restart; out of the box it should not. If you're using Hibernate to auto-generate your DDL, I could see the data being blown away at each restart. You would want to move away from that configuration.
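Concretely, the usual suspect is the hibernate.hbm2ddl.auto setting: create and create-drop rebuild the schema (destroying the data) on every startup, while update and validate leave existing data alone. A sketch of the safer setting:

```xml
<!-- persistence.xml / hibernate.cfg.xml -->
<!-- "update" evolves the schema in place; "create"/"create-drop" wipe it each run -->
<property name="hibernate.hbm2ddl.auto" value="update"/>
```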
1. Do I use the MySQL server of a hosting provider in that case to run my database?
Yes. In your application you only change the JDBC connection URL and credentials.
There are other details about the level of service that you want for the database: security, backups, uptime. But that depends on your hosting provider and your application's needs.
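As a minimal sketch of that switch (the URLs and credentials below are made up; in a real project the text would live in a db.properties file rather than an inline string):

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

public class DbConfig {

    /** Parses JDBC settings; only this text changes between local and hosted setups. */
    static Properties load(String propsText) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(propsText));
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen for an in-memory reader
        }
        return props;
    }

    public static void main(String[] args) {
        // Hypothetical hosted-provider values; your provider supplies the real ones.
        Properties hosted = load(
                "jdbc.url=jdbc:mysql://db.example-host.com:3306/myapp\n"
              + "jdbc.user=prod_user\n"
              + "jdbc.password=s3cret\n");

        // The rest of the application just reads these keys; no code change needed.
        System.out.println(hosted.getProperty("jdbc.url"));
    }
}
```

The same pattern applies when Spring manages the DataSource: the property file changes per environment, the bean definitions do not.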
2. Is it stored somewhere on the server?
That depends on how your hosting provider hosts the database. The usual approach is to have the web server on one machine and the database on another machine inside the VPN.
From the Hibernate configuration perspective, it is just a change of the JDBC URL. But other quality attributes will be affected by your provider's infrastructure, and that depends on the level of service you contract.
3. Should I declare somehow that data must be stored, e.g., in a separate file on the server?
Probably not. If your provider gives you a database service, what you choose is the level of service: storage, uptime... they take care of providing the infrastructure. And yes, usually they do that using a separate machine for the database.
4. Am I going to lose data in case of a server reboot? (As, e.g., I do when I restart the server on my local machine.)
That depends on the kind of hosting you are using. BTW, why do you lose the data on reboot on your local machine? Probably you are re-creating the database each time (check your Hibernate usage), because the main feature of any database is, well... persistent storage :)
If you host your application in a virtual machine and you install MySQL inside that VM... yes, you can lose data, because in this kind of hosting (like Amazon EC2 with instance-store disks) you rent a VM for CPU execution and the local disk data is transient. If you want persistent data, you have to use a database located on another machine (it is done this way for architectural reasons, and cloud providers like Amazon also offer separate storage services).
But if the database is provided as a service, no: a persistent database is the usual level of service you should expect from a provider.