I'm using Firebase Firestore documents to publish the location of my users on a map, so they see each other. This works fine when all of them have good connectivity, but sometimes their mobiles can't connect to the Firebase servers and it seems that the writes are cached: whenever they recover connectivity all the pending location writes are sent in bulk.
The effect for other users is that they see the position of a person stop, and after a while they start moving really quick until the map position catches the real value. This is annoying and a waste of bandwidth.
I have tried disabling the persistence cache, but this doesn't help (it would only help if the transmitting app died; as long as it lives, the positions are cached in memory).
Maybe the problem is that I shouldn't be using documents for this purpose and there is another Firebase mechanism which allows discarding stale write data for the purposes of real time communication?
All write operations that are performed while the device is offline are queued until the connection with the Firebase servers is reestablished. Unfortunately, there is no API that can help you control which write operations are queued and which are not.
The simplest solution I can think of is to use Firestore transactions, which are currently not persisted to disk and thus will be lost when the application is offline.
So, transactions are not supported for offline use; they can't be cached or saved for later. This is because a transaction absolutely requires round-trip communication with the server in order to ensure that the code inside the transaction completes successfully. Transactions are network-dependent, so you can use them only while online.
You can work around this by only making requests if you can connect to Firestore. Here's a helper function that will determine whether you're connected. It's similar to using a transaction, since both methods involve making a read request.
If you don't plan on invoking the function that much, the read costs will probably be negligible. However, to save the cost of reads, you could also consider pinging some server or a Cloud Function instead of Firestore itself. Doing so might be a less accurate way of testing the connection to Firestore, though.
import {
  doc,
  getDocFromServer,
} from "firebase/firestore"

async function canConnectToFirestore() {
  // navigator.onLine can only say for certain if you're disconnected
  // For more info on navigator.onLine: https://developer.mozilla.org/en-US/docs/Web/API/Navigator/onLine
  if (!navigator.onLine)
    return false
  // db is initialized from getFirestore()
  try {
    await getDocFromServer(doc(db, "IrrelevantText", "IrrelevantText"))
    return true
  }
  catch (e) {
    return false
  }
}

async function example() {
  if (await canConnectToFirestore()) console.log("Do something with Firestore")
}
I am working on a Java web application that uses Weblogic to connect to an Informix database. In the application we have multiple threads creating records in a table.
It happens pretty often that it fails and the following error is thrown:
java.sql.SQLException: Could not do a physical-order read to fetch next row....
Caused by: java.sql.SQLException: ISAM error: record is locked.
I am assuming that both threads are trying to insert or update when the record is locked.
I did some research and found that there is an option to set on the database so that, instead of throwing an error, it waits for the lock to be released.
SET LOCK MODE TO WAIT;
SET LOCK MODE TO WAIT 17;
I don't think that there is an option in JDBC to use this setting. How do I go about using this setting in my java web app?
You can always just send that SQL straight up: create a Statement with createStatement() and execute that exact SQL.
The more 'normal' / modern approach to this problem is a combination of MVCC, the transaction level 'SERIALIZABLE', retry, and random backoff.
I have no idea if Informix is anywhere near that advanced, though. Modern DBs such as Postgres are (mysql does not count as modern for the purposes of MVCC/serializable/retry/backoff, and transactional safety).
Doing MVCC/Serializable/Retry/Backoff in raw JDBC is very complicated; use a library such as JDBI or JOOQ.
MVCC: A mechanism whereby transactions are shallow clones of the underlying data. 2 separate transactions can both read and write to the same records in the same table without getting in each other's way. Things aren't 'saved' until you commit the transaction.
SERIALIZABLE: A transaction level (also called isolation level), settable with jdbcDbObj.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE); - the safest level. If you know how version control systems work: You're asking the database to aggressively rebase everything so that the entire chain of commits is ordered into a single long line of events: Each transaction acts as if it was done after the previous transaction was completed. The simplest way to implement this level is to globally lock all the things. This is, of course, very detrimental to multithreaded performance. In practice, good DB engines (such as postgres) are smarter than that: Multiple threads can simultaneously run transactions without just being frozen and waiting for locks; the DB engine instead checks whether the things that the transaction did (not just writing, also reading) are conflict-free with simultaneous transactions. If yes, it's all allowed. If not, all but one simultaneous transaction throw a retry exception. This is the only level that lets you do this sequence of events safely:
Fetch the balance of isaace's bank account.
Fetch the balance of rzwitserloot's bank account.
subtract €10,- from isaace's number, failing if the balance is insufficient.
add €10,- to rzwitserloot's number.
Write isaace's new balance to the db.
Write rzwitserloot's new balance to the db.
commit the transaction.
Any level less than SERIALIZABLE will silently fail the job; if multiple threads do the above simultaneously, no SQLExceptions occur, but the sum of the balances of isaace and rzwitserloot will change over time (money is lost or created: between steps 1 & 2 and steps 5/6/7, another thread sets new balances, but these new balances are then lost to the writes in steps 5/6/7). With SERIALIZABLE, that cannot happen.
RETRY: The way smart DBs solve the problem is by failing (with a 'retry' error) all but one transaction: they check whether any of the SELECTs done by the entire transaction would be affected by transactions that have been committed to the db after this transaction was opened. If so (some SELECTs would have gone differently), the transaction fails. The point of this error is to tell the code that ran the transaction to just.. start from the top and do it again. Most likely this time there won't be a conflict and it will work. The assumption is that conflicts CAN occur but usually do not occur, so it is better to assume 'fair weather' (no locks, just do your stuff), check afterwards, and try again in the exotic scenario that it conflicted, vs. trying to lock rows and tables. Note that, for example, ethernet works the same way (assume fair weather, recover from errors afterwards).
BACKOFF: One problem with retry is that computers are too consistent: If 2 threads get in the way of each other, they can both fail, both try again, just to fail again, forever. The solution is that the threads twiddle their thumbs for a random amount of time, to guarantee that at some point, one of the two conflicting retriers 'wins'.
In other words, if you want to do it 'right' (see the bank account example) but also relatively 'fast' (not globally locking), get a DB that can do this and use JDBI or JOOQ. Otherwise, you'd have to write code that runs all DB work in a lambda block, catches the SQLException, checks the SQLState to see whether it indicates that you should retry (SQLState codes are DB-engine specific), and, if so, reruns that lambda after waiting an exponentially increasing amount of time that also includes a random factor. That's fairly complicated, which is why I strongly advise you to rely on JOOQ or JDBI to take care of this for you.
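The retry-with-backoff loop described above can be sketched as follows. This is a minimal sketch, not a full library: the names (SqlWork, Retry.withBackoff) are made up for this example, and "40001" is the standard SQLSTATE for a serialization failure, but the exact code your engine reports is DB-specific, so check its documentation.

```java
import java.sql.SQLException;
import java.util.concurrent.ThreadLocalRandom;

// A unit of transactional work that may fail with a serialization conflict.
interface SqlWork<T> {
    T run() throws SQLException;
}

class Retry {
    // Runs `work`; on a serialization conflict, waits (exponential backoff
    // plus random jitter) and tries again, up to maxAttempts times.
    static <T> T withBackoff(SqlWork<T> work, int maxAttempts)
            throws SQLException, InterruptedException {
        long delayMs = 10;
        for (int attempt = 1; ; attempt++) {
            try {
                return work.run(); // run the whole transaction
            } catch (SQLException e) {
                // Rethrow anything that isn't a serialization conflict,
                // or a conflict that persists past the attempt budget.
                if (!"40001".equals(e.getSQLState()) || attempt >= maxAttempts) {
                    throw e;
                }
                // Random jitter so two conflicting threads don't retry in lockstep.
                Thread.sleep(delayMs + ThreadLocalRandom.current().nextLong(delayMs));
                delayMs *= 2; // exponential backoff
            }
        }
    }
}
```

In real use, `work.run()` would open a SERIALIZABLE transaction, do the reads and writes, and commit; libraries like JDBI and JOOQ wrap exactly this loop for you.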
If you aren't ready for that level of DB usage, just make a statement and send "SET LOCK MODE TO WAIT 17;" as a SQL statement straight up, at the start of opening any connection. If you're using a connection pool, there is usually a place where you can configure SQL statements to be run on connection start.
The Informix JDBC driver does allow you to automatically set the lock wait mode when you connect to the server.
Simply pass via the DataSource or connection URL the following parameter
IFX_LOCK_MODE_WAIT=17
The values for JDBC are
(-1) Wait forever
(0) not wait (default)
(> 0) wait this many seconds
See https://www.ibm.com/support/knowledgecenter/SSGU8G_14.1.0/com.ibm.jdbc.doc/ids_jdbc_040.htm
Connection conn = DriverManager.getConnection(
    "jdbc:informix-sqli://cleo:1550:IFXHOST=cleo;PORTNO=1550;user=rdtest;password=my_passwd;IFX_LOCK_MODE_WAIT=17");
I am using the free Firebase database in my app. As it has limits on concurrent connections and database size, what kind of error will the user get when the app exceeds those limits, how do I handle it, and is there any error at all?
The main question should be: what happens when we reach the limit of concurrent connections, i.e. the maximum number of writes per second?
If you have a very popular app, you most likely already reached that limit, and at that moment the Firebase database will start to queue up the writes that cannot be written to disk straight away. In other words, Firebase builds a buffer of pending write operations. If the write volume goes down, it will start catching up with the buffer.
The answer to your question is no: there is no way to programmatically catch an exception when the Firebase Realtime Database reaches the limit, because at that moment no exception is thrown. There is also no method that returns the maximum number of connections.
But there is a workaround: attach a CompletionListener to the node that you are interested in. If you see that the time between starting a write operation and its completion goes up, it means that your writes are being buffered (queued). This is how you can tell when the 100-simultaneous-connections limit is reached.
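A small helper for that workaround might look like this. This is a sketch under assumptions: the class name, threshold, and the idea of flagging "buffered" past a fixed latency are all illustrative choices, not part of the Firebase SDK. You would call start() right before the write and looksBuffered() inside the CompletionListener's onComplete().

```java
// Hypothetical latency monitor: measures the time between starting a
// write and its CompletionListener firing. Consistently rising
// latencies suggest the writes are being queued server-side.
class WriteLatencyMonitor {
    private final long thresholdMs;

    WriteLatencyMonitor(long thresholdMs) {
        this.thresholdMs = thresholdMs;
    }

    // Call right before ref.setValue(...).
    long start() {
        return System.nanoTime();
    }

    // Call from the CompletionListener; true when the round trip
    // exceeded the configured threshold.
    boolean looksBuffered(long startNanos) {
        long elapsedMs = (System.nanoTime() - startNanos) / 1_000_000;
        return elapsedMs > thresholdMs;
    }
}
```

In an Android app, this would wrap a call like `ref.setValue(value, (error, committedRef) -> { if (monitor.looksBuffered(t0)) { /* writes appear queued */ } })`, with the threshold tuned to your normal round-trip time.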
If you've received an email alert or notification in the Firebase console that you've exceeded your Realtime Database usage limits, you can address it based on the usage limit you've exceeded. To see your Realtime Database usage, go to the Realtime Database usage section of the Firebase console.
If you're over your download limit, you can upgrade your Firebase plan or wait until your download limit resets at the start of your next billing cycle. To decrease your downloads, try the following steps:
Add queries to limit the data that your listen operations return.
Check for unindexed queries.
Use listeners that only download updates to data, for example on() instead of once().
Use security rules to block unauthorized downloads.
If you're over your storage limit, upgrade your plan to avoid service disruptions. To reduce the amount of data in your database, try the following steps:
Run periodic cleanup jobs.
Reduce any duplicate data in your database.
Note that it may take some time to see any data deletions reflected in your storage allotment.
If you're over your simultaneous database connections limit, upgrade your plan to avoid any service disruptions. To manage simultaneous connections to your database, try connecting users via the REST API if they don't require a realtime connection.
I need to create a clustered microservice (meaning a service that runs replicated on different nodes). This service needs to be "super fast" for the "GET" operation, e.g. receiving a username and responding with a user token.
The service can be slow on other token operations like save/update/delete.
Problems and assumptions:
Logged-in users (who have tokens) can number around 10,000 - 20,000.
Speed: the service needs to be super fast for the "getToken" operation, so I need some way to store the tokens in memory locally, e.g. a HashMap.
Stale data: since there are replicas, the other microservice instances are also supposed to have the same cache somehow.
Slow path: slow operations are acceptable for token update/delete/save, as long as afterwards all caches are up to date.
I have started to look at Redis, the buzzword for caching and for storing user sessions, BUT I have found that people are confused and don't have the right answer to the above problem.
Redis is an in-memory database, but it can't be used directly in my case because I am a clustered service whose replicas each hold local memory; Redis won't synchronize my local cache (or will it?). The Redis driver does not synchronize my HashMap. I thought about using Redis pub/sub to solve this, but still, that's stale data!
I think the "super fast" approach is to use a local Java HashMap: on startup every microservice loads the token data from some DB (who cares which), READs are served from local memory, and on update/delete/save a message is sent to a message queue, which fans it out to all subscribers (the other replicas).
I'm not sure if I'm reinventing the wheel here; this seems like a common distributed-cache problem. Is there any other known solution? Remember, only the GET operation needs to be fast.
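The replica-local cache described above might be sketched like this. All names here are hypothetical; the class just holds tokens in local memory for fast reads and applies the change messages the queue fans out.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch: every replica keeps tokens in local memory and
// applies change messages fanned out by the message queue.
class TokenCache {
    private final Map<String, String> tokens = new ConcurrentHashMap<>();

    // The fast path: served entirely from local memory, no network hop.
    String getToken(String username) {
        return tokens.get(username);
    }

    // Called on startup (bulk load from the DB) and whenever a fan-out
    // message announces a save/update (token != null) or delete (null).
    void apply(String username, String token) {
        if (token == null) {
            tokens.remove(username);
        } else {
            tokens.put(username, token);
        }
    }
}
```

The message-queue subscriber on each replica would simply call `apply(...)` for every message it receives; reads never leave the process.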
I am not quite sure how to ask this question but I hope you get my drift...
I am using OrientDB as an embedded database used by a single application. I would like to ensure that, should this application crash, the database is always in a consistent state, so that my application can be started again without having to perform any maintenance on the database or losing any data.
I.e., when I change the database and get a success message, I know that the changes have been written.
Is this supported by OrientDB, and if so, what is the option to enable it?
(P.S. if I knew the generally accepted term for this kind of setup, I could search for it myself...)
OrientDB uses a kind of rollback journal: by default it logs all operations performed on the data stored on disk and puts them into an append-only log, the WAL (write-ahead log). Records of this log are cached and flushed every second. If the application crashes, the WAL will be read and all operations will be applied once again. The WAL also has a notion of transactions, which means that if a transaction is not finished at the time of the crash, all of its applied changes will be rolled back. So in OrientDB you can be sure of the following:
All data written up to one second before the crash will be restored.
All data written inside a transaction will be in a consistent state.
You can lose part of the data from the last one-second interval.
The interval of WAL cache flushes can be changed, but shortening it may lead to a performance slowdown.
I have just started scaling an APNS provider program; unfortunately, I am really new to networking protocol implementation.
The provider currently runs on one thread and handles only a tiny number of notifications. Now I want to increase its capacity to send significantly more than before.
My questions are:
According to the Apple docs, I can maintain multiple connections to the gateways. My understanding is that I run multiple threads in the provider program and maintain a separate connection in each. Is this right?
If the first one is right, the real difficulty for me comes next: my program polls a queue database every 5 seconds to check for new messages to send. I don't think it's a good idea for all the threads to poll this same database, because duplicate messages would be sent to the same users. How do I solve this problem?
I have seen connection pooling, but I do not really understand what it is. Is that the thing I need to study and use? If so, can someone offer a brief explanation of what it is and how to use it?
Thanks guys!
Your first assumption is reasonable. Each thread should have its own connection.
As for the second point, the access to the DB that contains the new messages should be synchronized. For example, you can access that DB by a synchronized method that fetches a message or several messages that haven't been processed yet, and marks them as being processed. Two threads can't access that method at the same time, and therefore won't get the same messages.
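The synchronized fetch-and-mark idea could be sketched like this, with an in-memory list standing in for the real queue table (all names here are illustrative). Because claim() is synchronized, only one thread at a time can grab messages, so no two threads ever receive the same one.

```java
import java.util.ArrayList;
import java.util.List;

// In-memory stand-in for the queue database.
class MessageStore {
    static class Message {
        final String payload;
        boolean processing; // "being processed" marker
        Message(String payload) { this.payload = payload; }
    }

    private final List<Message> rows = new ArrayList<>();

    synchronized void add(String payload) {
        rows.add(new Message(payload));
    }

    // Atomically fetch up to `limit` unclaimed messages, marking each
    // as being processed before returning it. Two threads calling this
    // concurrently can never claim the same message.
    synchronized List<Message> claim(int limit) {
        List<Message> claimed = new ArrayList<>();
        for (Message m : rows) {
            if (!m.processing) {
                m.processing = true;
                claimed.add(m);
                if (claimed.size() == limit) break;
            }
        }
        return claimed;
    }
}
```

Against a real database, the same effect is usually achieved with a transaction that selects unprocessed rows and updates their status in one atomic step.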
Another option is to put the messages in memory in a blocking queue (with the DB only serving as backup in case of a crash). The threads can request an item from the queue, which blocks them until an item is available.
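The blocking-queue option might look like this sketch: a single poller thread fills the queue from the DB, and each sender thread blocks until a message is available, so every message goes to exactly one sender. The class name, queue capacity, and timeout are arbitrary example choices.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

class NotificationDispatcher {
    // Bounded queue: the poller blocks if senders fall far behind.
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1000);

    // Called by the DB-polling thread for each new message.
    void enqueue(String message) throws InterruptedException {
        queue.put(message); // blocks while the queue is full
    }

    // Each sender thread loops on this; returns null after idling
    // for a second with nothing to send.
    String nextMessage() throws InterruptedException {
        return queue.poll(1, TimeUnit.SECONDS);
    }
}
```

Each APNS connection thread would loop on `nextMessage()` and push whatever it receives; BlockingQueue guarantees a given item is handed to only one taker.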