MySQL to Redis and Redis to MySQL - java

I want to optimize my Minecraft game servers. I have 150k users in the database, and about 15k of them join my servers daily.
I have read about Redis, and also that Redis is faster than MySQL. I know I can't give up MySQL, because my websites use the same database.
But what if I load all the MySQL data into Redis every 15 minutes, have all my server plugins work on that data, and then, after the next 15 minutes, export the data from Redis back to MySQL? I load the same data on 4 servers and into 3 plugins on each server, so maybe loading it all into one Redis server would be faster than sending requests to MySQL from 4 servers * 3 plugins?
Thanks for the help.

Redis is an effective way to cache data from a MySQL database. Even though Redis has persistence options, many will still favor MySQL for durable storage. Since Redis operates in memory, it will be much faster than a MySQL database, which (for the most part) does not operate in memory. People often store cache data in HashMaps, but since you have multiple servers, Redis is a much better option: this way, you don't have to maintain a near-identical in-process cache on each server.
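A minimal read-through sketch with Jedis, assuming a hypothetical players table and a "player:<uuid>" key scheme; loadFromMysql is a stand-in for your DAO:

```java
import redis.clients.jedis.Jedis;

public class PlayerCache {
    private static final int TTL_SECONDS = 15 * 60; // the 15-minute window from the question

    private final Jedis jedis = new Jedis("localhost", 6379);

    // Read from Redis first; on a miss, fall back to MySQL and cache the result.
    public String getPlayerData(String uuid) {
        String key = "player:" + uuid;           // hypothetical key scheme
        String cached = jedis.get(key);
        if (cached != null)
            return cached;
        String fromDb = loadFromMysql(uuid);     // stand-in for your MySQL DAO call
        jedis.setex(key, TTL_SECONDS, fromDb);   // entry expires after 15 minutes
        return fromDb;
    }

    private String loadFromMysql(String uuid) {
        // SELECT ... FROM players WHERE uuid = ? (omitted)
        return "";
    }
}
```

All 4 servers and their plugins can point at the same Redis instance, so the data is fetched from MySQL once per expiry window instead of once per server and plugin.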

Hi, as far as I understand, you have 4 servers and 3 plugins per server.
Redis is extremely fast, no doubt, but its use case is different from MySQL's. My advice is to load into Redis the data you use very frequently; it will be much faster than MySQL. But to make it fast you have to design your keys intelligently, so that Redis can look them up quickly. You can refresh the keys and values after a certain interval, and your system's performance will definitely improve; see the sketch below.
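For example, one Redis hash per player plus a scheduled re-sync; a sketch, assuming the 15-minute interval from the question and a hypothetical loadAllFromMysql helper:

```java
import redis.clients.jedis.Jedis;

import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RedisRefresher {
    private final Jedis jedis = new Jedis("localhost", 6379);
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start() {
        // Re-sync MySQL -> Redis every 15 minutes.
        scheduler.scheduleAtFixedRate(this::refresh, 0, 15, TimeUnit.MINUTES);
    }

    private void refresh() {
        // One hash per player keeps all of a player's fields under a single,
        // directly addressable key, e.g. HGET player:<uuid> coins.
        for (Map.Entry<String, Map<String, String>> row : loadAllFromMysql().entrySet())
            jedis.hset("player:" + row.getKey(), row.getValue());
    }

    private Map<String, Map<String, String>> loadAllFromMysql() {
        // SELECT ... FROM players (omitted); maps uuid -> {field -> value}
        return Map.of();
    }
}
```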

Related

Is there overhead in having multiple data sources against same data base instance when having an XA transaction?

A very high-level description of our test setup:
Java application running on JBoss (WildFly)
Using Oracle as the database server
XA transactions between IBM MQ and Oracle data sources
More than 100 concurrent transactions doing MQ GET/PUT and SQL inserts/reads/updates/deletes
More than 1000 transactions per second to be processed
Each transaction is doing about 100 SQL inserts plus some reads (some of the transactions also do a few deletes and updates)
2 JBoss nodes with 32 CPUs each and a 32-CPU Oracle database server
Each transaction will connect to about 3 data sources. However, all data sources point at the same database instance. I have been wondering if there is overhead in having multiple data sources against the same database instance in an XA transaction.
One of the reasons I ask is that we are now struggling with "enq: DX - contention" waits in Oracle.
I have been trying to google this without finding a clear answer.
It seems like Oracle has some optimizations for this scenario:
https://docs.oracle.com/en/database/oracle/oracle-database/19/jjdbc/distributed-transactions.html#GUID-2C258328-7DFD-42ED-AA03-01959FFE924A (32.3.4 Oracle XA Optimizations)
However, as mentioned above, we are struggling with "enq: DX - contention" waits
Any insight would be helpful.
I see several problems in your solution if you are using multiple data sources to the same database.
If you have 3 data sources, you have 3 separate connections to the database, and therefore 3 parallel database transactions.
First of all, you have overhead in your application, because the transactions have to be synchronized at the application level. On top of that, these 3 transactions are then synchronized via XA. This can cause a serious performance problem.
If you have 3 transactions, you need 3 commits, which can be slower than one commit.
The 3 DB transactions may also run into deadlocks, because all 3 are trying to modify the same or related data.
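Since all three logical data sources point at the same Oracle instance, one possible mitigation (a sketch, assuming the objects are reachable by schema-qualified names from a single account) is to collapse them into one XA data source, so the transaction has a single branch and a single prepare/commit; the schema and table names below are illustrative:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

public class SingleBranchExample {
    // One data source to the instance instead of three; APP_A/APP_B are
    // illustrative schema names, not from the question.
    void transfer(DataSource singleXaDataSource) throws Exception {
        try (Connection con = singleXaDataSource.getConnection()) {
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO APP_A.ORDERS (ID, STATUS) VALUES (?, ?)")) {
                ps.setLong(1, 42L);
                ps.setString(2, "NEW");
                ps.executeUpdate();
            }
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE APP_B.INVENTORY SET QTY = QTY - 1 WHERE ITEM_ID = ?")) {
                ps.setLong(1, 7L);
                ps.executeUpdate();
            }
        } // one connection -> one XA branch -> one prepare/commit round trip
    }
}
```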

How to initialize and fill Apache Ignite database by first node?

I would like to use Apache Ignite as a failover read-only store, so my application can still access the most sensitive data if the main storage (Oracle) is down.
So I need to
Start nodes
Create schema (execute DDL queries)
Load data from Oracle to Ignite
It seems like this is not the same as database caching, so I don't need a cache as such. However, this page says that I need to implement a store to load a large amount of data from 3rd parties.
So, my questions are:
How do I transfer data from Oracle to Ignite efficiently? Data streamers?
Who should initiate this transfer? The first started node? How do I do that? (Tutorials explain how to achieve this via clients; should I follow that advice?)
Actually, I think the use of a cache store without read/write-through would be a suitable option here. You can configure a CacheJdbcPojoStore, for example, and call IgniteCache#loadCache(...) on your cache once the cluster is up; a sketch follows. More on this topic: https://apacheignite.readme.io/docs/3rd-party-store
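A minimal sketch, assuming an Oracle table PERSON(ID, NAME) and a hypothetical Person POJO; the data source and field mappings are illustrative:

```java
import java.sql.Types;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.cache.store.jdbc.JdbcType;
import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
import org.apache.ignite.configuration.CacheConfiguration;

public class IgniteLoader {
    // Hypothetical POJO matching the PERSON table; the store needs a no-arg constructor.
    public static class Person {
        long id;
        String name;
        public Person() {}
        public Person(long id, String name) { this.id = id; this.name = name; }
    }

    static IgniteCache<Long, Person> createAndLoad(Ignite ignite, javax.sql.DataSource oracleDs) {
        CacheJdbcPojoStoreFactory<Long, Person> storeFactory = new CacheJdbcPojoStoreFactory<>();
        // Note: in a real cluster the factory must be resolvable on server nodes
        // (e.g. a named bean); capturing a local DataSource is just for the sketch.
        storeFactory.setDataSourceFactory(() -> oracleDs);

        JdbcType type = new JdbcType();
        type.setCacheName("personCache");
        type.setDatabaseTable("PERSON");
        type.setKeyType(Long.class);
        type.setValueType(Person.class);
        type.setKeyFields(new JdbcTypeField(Types.NUMERIC, "ID", Long.class, "id"));
        type.setValueFields(new JdbcTypeField(Types.VARCHAR, "NAME", String.class, "name"));
        storeFactory.setTypes(type);

        CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("personCache");
        ccfg.setCacheStoreFactory(storeFactory);
        ccfg.setReadThrough(false);  // failover copy only: no read-through to Oracle
        ccfg.setWriteThrough(false); // and no write-back

        IgniteCache<Long, Person> cache = ignite.getOrCreateCache(ccfg);
        cache.loadCache(null); // pulls all mapped rows from Oracle into the cluster
        return cache;
    }
}
```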
If you don't want to use a cache store, then IgniteDataStreamer is a good choice. It is the fastest way to upload a large amount of data to the cluster. Data loading is usually performed from a client node, once all server nodes are up and running.
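A sketch of that approach, run from a client node and reusing the Person POJO from the previous sketch; the SQL and cache name are illustrative:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class OracleToIgnite {
    static void load(Ignite ignite, DataSource oracleDs) throws Exception {
        try (IgniteDataStreamer<Long, IgniteLoader.Person> streamer =
                     ignite.dataStreamer("personCache");
             Connection con = oracleDs.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT id, name FROM person")) {
            while (rs.next())
                streamer.addData(rs.getLong("id"),
                        new IgniteLoader.Person(rs.getLong("id"), rs.getString("name")));
        } // closing the streamer flushes any remaining buffered entries
    }
}
```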

DB Scalability for a high load application?

I have seen applications use a clustered web tier (10 to 20 servers) for scalability, distributing the load among the web servers. But I have always seen all the web servers using a single DB.
Now consider any e-commerce or railway web application where millions of users are hitting the application at any point in time.
To scale the web tier we can cluster the servers, but how do we scale the DB? We cannot simply have multiple DBs the way we have multiple web servers, since one DB would end up in a different state than the others :)
UPDATE:
Is scaling the DB not possible in a relational DBMS, but only in NoSQL DBs like MongoDB etc.?
There are two different kinds of scalability on the database side: read scalability and write scalability. You can achieve both by scaling vertically, i.e. adding more CPU and RAM, up to a point. But if you need to scale to very large data, beyond the limits of a single machine, you should use read replicas for read scalability and sharding for write scalability.
Sharding does not work by putting some entities (shoes) on one server and others (t-shirts) on other servers. It works by putting some of the shoes and some of the t-shirts on one machine, and doing the same for the rest of the entities; see the sketch below.
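A minimal sketch of the routing rule, assuming hash-based sharding on the entity's ID; the class and method names are illustrative:

```java
import java.util.List;
import javax.sql.DataSource;

public class ShardRouter {
    private final List<DataSource> shards; // one DataSource per shard

    public ShardRouter(List<DataSource> shards) {
        this.shards = shards;
    }

    // Shoes and t-shirts use the same rule: each shard holds a slice of
    // every table, rather than one whole table per server.
    public DataSource shardFor(long entityId) {
        int idx = (int) Math.floorMod(entityId, (long) shards.size());
        return shards.get(idx);
    }
}
```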
Another solution for high-volume data management is microservices, which is closer to your example: a service for shoes and another service for t-shirts. With microservices you divide your code and data into different projects and different application and database servers, so you can handle the scalability of each part of your data differently.

Web Application Database Mem-Cache Suggestions

My website serves live information to users. This information can change dynamically (think of stock prices). Each query to get this information from the DB takes about 3-5 seconds, and getting all of it takes about 3 minutes. I serve this information to 6000 users. I am using a HashMap to store and serve it: I fetch everything from the DB every 5 minutes and store it in the HashMap. Everything is OK, but I want to use a more advanced cache system. What do you suggest? Can I use HSQLDB for that? INFO: I am using Spring MVC + Hibernate, so I don't want to use non-Java solutions such as Redis.
You may use Ehcache as a second-level cache for Hibernate, or as a "self-managed" cache. The Guava library also offers efficient cache capabilities; see the sketch below.
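A minimal sketch with Guava's LoadingCache, assuming the 5-minute refresh interval from the question; loadPriceFromDb is a stand-in for your Hibernate DAO:

```java
import java.math.BigDecimal;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class PriceCache {
    private final LoadingCache<String, BigDecimal> prices = CacheBuilder.newBuilder()
            .refreshAfterWrite(5, TimeUnit.MINUTES) // matches the 5-minute DB poll
            .build(new CacheLoader<String, BigDecimal>() {
                @Override
                public BigDecimal load(String symbol) {
                    return loadPriceFromDb(symbol); // stand-in for a Hibernate query
                }
            });

    public BigDecimal price(String symbol) {
        return prices.getUnchecked(symbol); // loads on first access, refreshes after 5 min
    }

    private BigDecimal loadPriceFromDb(String symbol) {
        // session.createQuery(...) (omitted)
        return BigDecimal.ZERO;
    }
}
```

Unlike the "fetch everything every 5 minutes" HashMap, refreshAfterWrite reloads an entry lazily on the first access after the interval, so only data actually being served is re-queried.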

Best approach for Spring+MyBatis with Multiple Databases to support failovers

I need to develop some services and expose an API to some third parties.
In those services I may need to fetch/insert/update/delete data with some complex calculations involved (not just simple CRUD). I am planning to use Spring and MyBatis.
But the real challenge is that there will be multiple DB nodes with the same data (some external setup will take care of keeping them in sync). When I get a request for some data, I need to pick one DB node at random, query it, and return the results. If the selected DB is unreachable, has network issues, or some other unknown problem, I need to try to connect to some other DB node.
I am aware of Spring's AbstractRoutingDataSource. But where do I inject the DB connection retry logic? Will Spring handle transactions properly if I switch the dataSource dynamically?
Or should I avoid the Spring & MyBatis out-of-the-box integration and do transaction management myself using MyBatis?
What do you guys suggest?
I propose using a NoSQL database like MongoDB. It is easy to cluster: you can, for example, use 10 servers and replicate the data 3 times.
That means that if 2 of your 10 servers fail, your data is still safe.
NoSQL databases are different from RDBMSs, but they can give high performance in a cluster.
Also, there is no transaction support in NoSQL - you have to handle that manually in the case of financial operations.
You really do have to think differently when developing with NoSQL.
Yes, it will work. Take AbstractRoutingDataSource and code your own. The only thing you cannot do is change the target database while a transaction is running.
So what you have to do is put the DB retry code in getConnection, as in the sketch below. If the connection becomes invalid during the transaction, you should let it fail.
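A minimal sketch of that idea, assuming the target data sources are registered under integer lookup keys 0..n-1 via setTargetDataSources; the class name and retry policy are illustrative:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.ThreadLocalRandom;

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class FailoverRoutingDataSource extends AbstractRoutingDataSource {
    private final int nodeCount;

    public FailoverRoutingDataSource(int nodeCount) {
        this.nodeCount = nodeCount;
    }

    @Override
    protected Object determineCurrentLookupKey() {
        // Pick a node at random; each getConnection() attempt re-evaluates this.
        return ThreadLocalRandom.current().nextInt(nodeCount);
    }

    @Override
    public Connection getConnection() throws SQLException {
        SQLException last = null;
        for (int attempt = 0; attempt < nodeCount; attempt++) {
            try {
                return super.getConnection(); // routes via determineCurrentLookupKey()
            } catch (SQLException e) {
                last = e; // node unreachable: loop and try another random node
            }
        }
        throw last != null ? last : new SQLException("No database nodes configured");
    }
}
```

Once a transaction has obtained its connection, Spring keeps using that same connection, so the retry only happens at the start of the transaction; mid-transaction failures propagate and roll back, as the answer suggests.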
