I want to connect to a Redis Cluster from TIBCO BW5, but so far I have no idea where to start.
Is there a way to connect via JDBC or should I write my own custom Java code?
I haven't tried anything yet.
The easiest path to Redis integration via JDBC is through Redis SQL - https://github.com/redis-field-engineering/redis-sql-trino
Redis SQL Trino lets you easily integrate with visualization
frameworks — like Tableau and SuperSet — and platforms that support
JDBC-compatible databases (e.g., Mulesoft). Query support includes
SELECT statements across secondary indexes on both Redis hashes &
JSON, aggregations (e.g., count, min, max, avg), ordering, and more.
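Since Redis SQL Trino exposes a standard JDBC endpoint, plain java.sql code is all you need on the BW/Java side. A minimal sketch, assuming the Trino JDBC driver is on the classpath and a coordinator is running; the URL, catalog ("redis"), table, and column names are illustrative placeholders, not the project's actual defaults:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class RedisSqlTrinoExample {

    // Runs a query through whatever JDBC driver handles the URL and returns
    // the first column of the first row, or an error message on failure.
    static String tryQuery(String url, String sql) {
        try (Connection conn = DriverManager.getConnection(url, "myuser", null);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            return rs.next() ? rs.getString(1) : "(no rows)";
        } catch (SQLException e) {
            // Thrown if no driver claims the URL or the coordinator is unreachable.
            return "Could not connect: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        // Hypothetical coordinator URL; "redis/default" is a placeholder catalog/schema.
        System.out.println(tryQuery(
                "jdbc:trino://localhost:8080/redis/default",
                "SELECT name FROM mytable LIMIT 1"));
    }
}
```

The same pattern applies whether the caller is hand-written Java or a BW Java activity: only the JDBC URL is Trino-specific.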
However, if you want to build your own integration, then the TIBCO ActiveMatrix BusinessWorks™ Plug-in Development Kit (PDK) could be the way to go.
I am making a JavaFX application (rental management software) using a MySQL database.
I was wondering how I can make my application work on my friend's or client's PC, since the database is on my PC. Is there any way to configure the database on their PC without them going through the whole MySQL installation process? They are not good with PCs, and it's not reliable to make the client set up the database. I want to use a local database.
Server versus embedded
There are two kinds of database engines:
Those that run in their own process, as a separate app, accepting connections from any number of other apps on the same computer or over a network. This we call a database server. Postgres, MySQL, Microsoft SQL Server, Oracle, etc. run this way.
Those that run within the process of an app, being started and stopped from that parent app, accepting connections only from within that parent app. This we call an embedded database engine. SQLite runs this way.
Some database products can run in either fashion. H2 Database Engine is one such product.
Given your situation, and given that H2 is written in pure Java, I suggest replacing your use of MySQL with H2. Run H2 in embedded mode.
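For reference, running H2 embedded requires nothing on the client's PC beyond your app and the H2 jar; a sketch of the Maven dependency (the version shown is illustrative, use the current release):

```xml
<!-- H2 database engine; runs embedded inside the app's own JVM process -->
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>2.2.224</version>
</dependency>
```

The JDBC URL then points at a local file rather than a server, e.g. jdbc:h2:./data/rentaldb; H2 creates the file on first connection, so there is no installation step for your client at all.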
Cloud database
Another option is for you to set up a database (MySQL or other) available to your users over the internet. You can run your own server, or you can use any of several Database-as-a-Service (DBaaS) vendors such as DigitalOcean. This “cloud database” approach may not be practical for desktop apps because of unreliable internet connections, security issues around database passwords, and the challenges of multi-tenancy.
Repository design
By the way, you may want to learn about the Repository design approach in using interfaces and implementations as a layer of abstraction between your app and your database. This makes switching out database engines easier.
For example, your repository interfaces would declare methods such as fetchAllCustomers() and fetchCustomerForId( UUID id ). One implementation of that interface might be built for MySQL while another implementation is built for H2. The code calling methods on your repository interface knows nothing about MySQL or H2.
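A minimal Java sketch of that layering, using the method names mentioned above; the Customer record and the in-memory implementation are illustrative stand-ins for a real MySQL- or H2-backed one:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.UUID;

// Hypothetical domain type for the rental app.
record Customer(UUID id, String name) {}

// The repository interface: callers know nothing about the storage engine.
interface CustomerRepository {
    List<Customer> fetchAllCustomers();
    Optional<Customer> fetchCustomerForId(UUID id);
    void save(Customer customer);
}

// One implementation keeps everything in memory; siblings could be
// MySqlCustomerRepository or H2CustomerRepository with identical signatures.
class InMemoryCustomerRepository implements CustomerRepository {
    private final List<Customer> store = new ArrayList<>();

    @Override
    public List<Customer> fetchAllCustomers() {
        return List.copyOf(store);  // defensive copy, callers cannot mutate our state
    }

    @Override
    public Optional<Customer> fetchCustomerForId(UUID id) {
        return store.stream().filter(c -> c.id().equals(id)).findFirst();
    }

    @Override
    public void save(Customer customer) {
        store.add(customer);
    }
}
```

At startup you pick one concrete implementation and hand it to the rest of the app as a CustomerRepository; swapping MySQL for H2 later touches only that one line.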
I have an application written in Java.
We are using a MySQL DB.
Is it possible to integrate that MySQL DB with Apache Ignite as an in-memory cache, and use that configuration without any updates to the Java application (apart from some DB connection details)?
So my application would do the same stuff, with the only difference being that it connects to Apache Ignite instead of MySQL?
Is this kind of configuration possible?
I suppose you are looking for the write-through feature. I'm not sure what your use case is, but you should be aware of some limitations, e.g. your data has to be preloaded into Ignite before running SELECT queries. From a very abstract perspective, you need to define POJOs and implement a custom CacheStore interface. Though GridGain Control Center can do the latter for you automatically, check this demo as a reference.
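To make the CacheStore part concrete, here is a rough, untested sketch of a hand-written store bridging Ignite to MySQL over JDBC. It assumes the Ignite and MySQL JDBC jars are on the classpath; the table, columns, credentials, and key/value types (Long to String) are placeholders for your actual schema:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.cache.Cache;
import org.apache.ignite.cache.store.CacheStoreAdapter;

public class MySqlPersonStore extends CacheStoreAdapter<Long, String> {
    private static final String URL = "jdbc:mysql://localhost:3306/mydb";

    // Read-through: called on a cache miss.
    @Override
    public String load(Long key) {
        try (Connection c = DriverManager.getConnection(URL, "user", "pass");
             PreparedStatement ps = c.prepareStatement("SELECT name FROM person WHERE id = ?")) {
            ps.setLong(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Write-through: called when the cache entry is updated.
    @Override
    public void write(Cache.Entry<? extends Long, ? extends String> entry) {
        try (Connection c = DriverManager.getConnection(URL, "user", "pass");
             PreparedStatement ps = c.prepareStatement(
                     "REPLACE INTO person (id, name) VALUES (?, ?)")) {
            ps.setLong(1, entry.getKey());
            ps.setString(2, entry.getValue());
            ps.executeUpdate();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void delete(Object key) {
        try (Connection c = DriverManager.getConnection(URL, "user", "pass");
             PreparedStatement ps = c.prepareStatement("DELETE FROM person WHERE id = ?")) {
            ps.setLong(1, (Long) key);
            ps.executeUpdate();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

You register the store in the cache configuration (and enable readThrough/writeThrough); after that the application talks only to Ignite, and Ignite keeps MySQL in sync behind the scenes.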
I've read the following posts:
Is there a way to run MySQL in-memory for JUnit test cases?
Stubbing / mocking a database in .Net
SQL server stub for java
They seem to address unit/component level testing (or have no answers), but I'm doing system testing of an application which has few test hooks. I have a RESTful web service backed by a database with JPA. I'm using NUnit to run tests against the API, but those tests often need complex data setup and teardown. To reduce the cost of doing this within a test via API calls, I would like to create (ideally in memory) databases which can be connected to via a DB provider using a connection string. The idea would be to have a test resource management service which builds databases of specific types, allowing a test to re-point the SUT to a new database with the expected data when it starts - one which can simply be dropped on teardown.
Is there a way, using Oracle or MSSQL, to create a database in memory (could be something as simple as a C# DataSet) which the web server can talk to as if it were a production database? Quick/cheap creation and disposal would be as good as in memory, to be honest.
I feel like this is a question that should have an answer already, but can't find it/ don't understand enough to know that I've found it.
We have a REST API which needs to talk to MongoDB (as of now it's Postgres); right now we are hardcoding the DB password in the API's property/config file. We are using JDBC to connect to Postgres, and we need to decide whether to use the same JDBC approach or MongoClient to connect to MongoDB.
So the question is
Is there a way to encrypt the DB password and send it over the network to connect to MongoDB, instead of hardcoding the password in the property file?
Or should we use SSL to connect to MongoDB from the REST API, so that the password can remain plain text in the property file?
And which of the above is the better way to avoid security threats? We have both the API and the database in AWS.
There is no way to encrypt the MongoDB password alone; you need to encrypt your whole connection using SSL.
If you are administering your own MongoDB instance, take a look at this document: https://docs.mongodb.org/manual/tutorial/configure-ssl/
If you are using a hosted MongoDB provider (like MongoLab), they usually offer a way to enable SSL on your connections (though they often limit this feature to paid plans).
The usual way to store DB passwords is through environment variables. This way you won't commit those values to Git, and you can configure them directly on the server.
To set an environment variable in UNIX, export it like this:
export MONGODB_DB_URL_ADMIN=mongodb://myuser:mypassword@ds01345.mongolab.com:35123/my_database_name
And to use it inside your code (NodeJS + mongoose example):
var mongoose = require("mongoose");
var mySchema = new mongoose.Schema({ name: String });
var mongoDbURL = process.env.MONGODB_DB_URL_ADMIN || "mongodb://127.0.0.1/myLocalDB";
var db = mongoose.createConnection(mongoDbURL);
db.model("MyModel", mySchema, "myCollectionName");
If you are using a PaaS (like Heroku), they usually provide a way to set up environment variables through their interface. This way the variable gets configured in every instance you use. If you are setting up your own Linux instance, you need to put those values in a startup script (.bashrc) or use another method (for example /etc/environment).
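Since the API in question is on the JVM, the same pattern in Java uses System.getenv; the variable name matches the export above, and the localhost fallback is illustrative:

```java
public class MongoConfig {

    // Read the connection string from the environment; fall back to a
    // local development database when the variable is not set.
    public static String mongoDbUrl() {
        String fromEnv = System.getenv("MONGODB_DB_URL_ADMIN");
        return fromEnv != null ? fromEnv : "mongodb://127.0.0.1/myLocalDB";
    }

    public static void main(String[] args) {
        System.out.println(mongoDbUrl());
    }
}
```

The returned string is what you would hand to your MongoDB client at startup, so the password never appears in a file checked into version control.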
You can use SSL connections since you are hosting on AWS.
Normally MongoDB does not have SSL support, but if you use the Enterprise version of MongoDB, then SSL support is included.
Per Amazon's MongoDB Security Architecture White Paper, AWS does use MongoDB Enterprise which means it has SSL support.
The other answers seem outdated by now. As of 4.0, all editions of MongoDB support TLS 1.1, with only the Enterprise edition supporting "FIPS mode".
What we're trying to do is what Meteor is doing with Mongo with LiveQuery, which is this:
Livequery can connect to the database, pretend to be a replication slave, and consume the replication log. Most databases support some form of replication so this is a widely applicable approach. This is the strategy that Livequery prefers with MongoDB, since MongoDB does not have triggers.
Source of that quote here
So is there a way, with com.mongodb.* in Java, to create such a replication slave so that it receives a notification for each update that happens on the primary MongoDB server?
Also, I don't see any replication log in the local database. Is there a way to turn it on?
If it's not possible to do it in Java, is it possible to create such solution in other languages (C++ or Node.js maybe)?
You need to start your database with the --replSet rsName option, and then run rs.initiate(). After that you will see an oplog.rs collection in the local database.
What you are describing is commonly referred to as "tailing the oplog", which is based on using a Tailable Cursor on a capped collection (the MongoDB oplog in this case). The mechanics are relatively simple, there are numerous oplog tailing examples out there written in Java, here are a few:
Event Streaming with MongoDB
TailableCursorExample
Wordnik mongo-admin-utils
IncrementalBackupUtil
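For a sense of the mechanics, here is a minimal, untested sketch with the current Java driver (com.mongodb.client, in the mongodb-driver-sync artifact). It assumes that jar is on the classpath and that a replica-set member is running on localhost; the "op" and "ns" fields follow the server's oplog document format:

```java
import com.mongodb.CursorType;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import org.bson.Document;

public class OplogTail {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            // The oplog is a capped collection in the "local" database.
            MongoCollection<Document> oplog =
                    client.getDatabase("local").getCollection("oplog.rs");

            // A tailable-await cursor blocks waiting for new entries,
            // much like `tail -f` on a file.
            try (MongoCursor<Document> cursor = oplog.find()
                    .cursorType(CursorType.TailableAwait)
                    .iterator()) {
                while (cursor.hasNext()) {
                    Document entry = cursor.next();
                    // "op" is the operation type (i=insert, u=update, d=delete),
                    // "ns" is the namespace (database.collection) it applied to.
                    System.out.println(entry.getString("op") + " on " + entry.getString("ns"));
                }
            }
        }
    }
}
```

In production code you would also record the last-seen timestamp ("ts" field) so the tail can resume after a disconnect rather than replaying from the start of the oplog.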