We have a REST API that needs to talk to MongoDB (it currently uses PostgreSQL). Right now we hardcode the DB password in the API's property/config file. We use JDBC to connect to Postgres, and we need to decide whether to keep using JDBC or switch to MongoClient to connect to MongoDB.
So the question is:
Is there a way to encrypt the DB password and send it over the network when connecting to MongoDB, instead of hardcoding it in the property file?
or
Should we use SSL to connect to MongoDB from the REST API, so that the password can remain plain text in the property file?
And which of the two is the better way to avoid security threats? Both the API and the database are in AWS.
There is no way to encrypt the Mongo password alone; you need to encrypt the whole connection using SSL.
If you are administering your own MongoDB instance, take a look at this document: https://docs.mongodb.org/manual/tutorial/configure-ssl/
If you are using a hosted MongoDB provider (like MongoLab), they usually offer a way to enable SSL on your connections (but they often limit this feature to paid plans).
The usual way to store DB passwords is through environment variables. This way you won't commit those values to your Git repository, and you can configure them directly on the server.
To set an environment variable on UNIX, export it like this:
export MONGODB_DB_URL_ADMIN=mongodb://myuser:mypassword@ds01345.mongolab.com:35123/my_database_name
And to use it inside your code (Node.js + Mongoose example):
var mongoose = require("mongoose");

// Fall back to a local database when the variable is not set (e.g. in development)
var mongoDbURL = process.env.MONGODB_DB_URL_ADMIN || "mongodb://127.0.0.1/myLocalDB";
var db = mongoose.createConnection(mongoDbURL);
db.model("MyModel", mySchema, "myCollectionName");
If you are using a PaaS (like Heroku), they usually provide a way to set environment variables through their interface, so the variable gets configured on every instance you use. If you are setting up your own Linux instance, you need to put those values in a startup script (.bashrc) or use another mechanism (for example /etc/environment).
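Since the original API is in Java, the same environment-variable pattern there might look like the sketch below. The variable name and the localhost fallback mirror the Node example above; both are assumptions, not fixed names:

```java
public class MongoConfig {

    // Read the connection string from the environment, falling back to a
    // local development database when the variable is not set.
    static String mongoUrl() {
        String url = System.getenv("MONGODB_DB_URL_ADMIN");
        return (url != null && !url.isEmpty()) ? url : "mongodb://127.0.0.1/myLocalDB";
    }

    public static void main(String[] args) {
        // The URL (credentials included) never appears in the property file or in Git.
        System.out.println("Connecting to: " + mongoUrl());
        // With the Mongo Java driver you would then pass mongoUrl() to the client,
        // e.g. MongoClients.create(mongoUrl()) in recent driver versions.
    }
}
```

The same trick works for a JDBC URL: read it from the environment at startup instead of from the property file.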
You can use SSL connections since you are hosting on AWS.
MongoDB does not normally ship with SSL support, but the Enterprise version of MongoDB includes it.
Per Amazon's MongoDB Security Architecture white paper, AWS uses MongoDB Enterprise, which means SSL support is available.
The other answers are outdated by now. As of MongoDB 4.0, all editions support TLS 1.1 and newer, with only the Enterprise edition supporting "FIPS mode".
Related
I have a Java REST API application using Quarkus as the framework. The application uses a PostgreSQL database, which is configured via the application.properties config file for Hibernate entities (using the "quarkus-hibernate-orm" module) etc. However, there are cases where I will have to dynamically connect to a remote database (connection info supplied by parameters) to read and write data at runtime as well. How do I best go about this with Quarkus? For simplicity we can assume that the remote databases are of the same type (PostgreSQL), so we don't have to worry about whether the correct driver is locally available.
Is there something provided by Quarkus or the environment to establish these connections and read/write? I don't necessarily need an ORM layer here, as I may not know the structure beforehand either; simple queries are sufficient. When I research this subject I can only find information about static Hibernate or datasource configurations in Quarkus, but I won't know what these connections look like beforehand. Basically, is there some kind of "db connection provider" etc. I should use, or do I simply have to manually create new plain JDBC connections in my own code?
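For what it's worth, the plain-JDBC fallback the question mentions can be sketched as below. The host, database, and query are illustrative only, and this is not claimed to be a Quarkus-provided mechanism, just standard JDBC:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class DynamicDb {

    // Build a PostgreSQL JDBC URL from runtime parameters.
    static String jdbcUrl(String host, int port, String database) {
        return "jdbc:postgresql://" + host + ":" + port + "/" + database;
    }

    // Open a short-lived connection and run a single query.
    // try-with-resources closes everything even when the query fails.
    static void runQuery(String url, String user, String password, String sql) throws SQLException {
        try (Connection conn = DriverManager.getConnection(url, user, password);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }

    public static void main(String[] args) {
        String url = jdbcUrl("remote-host.example", 5432, "inventory");
        System.out.println(url);
        // runQuery(url, "user", "secret", "SELECT 1"); // needs a reachable server
    }
}
```

For occasional ad-hoc queries this is usually sufficient; if the same remote database is hit repeatedly, a programmatically built pool (Quarkus ships Agroal for its managed datasources) would avoid the per-call connection cost.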
I intend to have a console on my web app so I can run queries directly from my browser. I can only find guides on how to connect the H2 console to an in-memory DB instance. Is this possible? Security isn't an issue; this is strictly for testing purposes, and only my IP address will be allowed to connect to the site (for now).
I think you are confusing some things here: H2 is primarily an in-memory database, while MySQL is a full client-server RDBMS. I would not expect you to be able to connect to MySQL through that interface.
If you just need to execute queries from your web application, and it is not going public, simply create a page with a textarea and send its contents to the backend to execute over JDBC. If I have misunderstood your question, please add more details so we can provide a better answer.
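A minimal sketch of that textarea-plus-backend idea, using only the JDK's built-in HttpServer; the JDBC execution is stubbed out, and every name here is made up for illustration:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import com.sun.net.httpserver.HttpServer;

public class SqlConsole {

    // The page: a bare textarea that posts the query to /run.
    static final String PAGE =
        "<form method='post' action='/run'><textarea name='sql'></textarea>"
        + "<button>Run</button></form>";

    // Stub: a real app would open a JDBC connection here and execute the SQL,
    // e.g. try (Connection c = DriverManager.getConnection(url, user, pass)) { ... }
    static String execute(String sql) {
        return "would execute: " + sql;
    }

    public static void main(String[] args) throws IOException {
        // Port 0 picks any free port; use a fixed port in practice.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", ex -> {
            byte[] body = PAGE.getBytes(StandardCharsets.UTF_8);
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) { os.write(body); }
        });
        server.createContext("/run", ex -> {
            String sql = new String(ex.getRequestBody().readAllBytes(), StandardCharsets.UTF_8);
            byte[] body = execute(sql).getBytes(StandardCharsets.UTF_8);
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) { os.write(body); }
        });
        // server.start(); // uncomment to actually serve requests
    }
}
```

In a real framework (Spring MVC, Quarkus, etc.) the two handlers would simply be controller endpoints; the shape of the solution is the same.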
Context: I'm working on a Spring MVC project and using Hibernate to generate the database schema from my annotated classes. It uses a MySQL server running on my local machine. I'm aiming to get hosting and make my website live.
Do I use the MySQL server of a hosting provider in that case to run my database?
What are the pros and cons? Would they normally do DB backups, or is it worth doing that myself and storing them on my machine?
Am I going to lose data in case of a server reboot?
Thanks in advance. I'm new to this, so feel free to moderate the question if it sounds unreasonable.
Much of this will depend on how you host your site. I would recommend looking into Cloud Foundry, a free Platform as a Service (PaaS) provided by the folks at VMware. If you're using Spring to set up Hibernate, Cloud Foundry can automatically hook your application into a MySQL service it provides.
In any case, your database will most likely reside on the host's server, unless you establish a static IP for your machine and expose the database services yourself; at that point, you might as well host your own site.
Where the data is stored depends on the type of host. For instance, if you use a PaaS, they choose where your database lives on their servers; it is transparent to you. If you go with a dedicated server, you will most likely have to install the database software yourself.
Most databases backing websites provide persistent storage, or can be configured to do so. I'm not sure why your MySQL database loses data after you restart; out of the box it should not. If you're using Hibernate to auto-generate your DDL on startup, I could see the data being blown away at each restart. You would want to move away from that configuration.
1 Do I use the MySQL server of a hosting provider in that case to run my database?
Yes. In your application you only change the JDBC connection URL and credentials.
There are other details about the level of service that you want for the database: security, backup, up time. But that depends on your hosting provider and your application needs.
2 Is it stored somewhere on the server?
Depends on how your hosting provider hosts the database. The usual approach is to have the web server in one machine and the database in another machine inside the VPN.
From the Hibernate configuration perspective, it is just a change of the JDBC URL. But other quality attributes will be affected by your provider's infrastructure, depending on the level of service you contract.
3 Should I declare somehow that data must be stored, e.g., in a separate file on the server?
Probably not. If your provider gives you a database service, what you choose is the level of service: storage, uptime... they take care of providing the infrastructure. And yes, they usually do that using a separate machine for the database.
4 Am I going to lose data in case of a server reboot? (As I do, for example, when I restart the server on my local machine)
Depends on the kind of hosting you are using. BTW, why do you lose the data on reboot on your local machine? Probably you are re-creating the database each time (check your Hibernate usage), because the main feature of any database is, well... persistent storage :)
If you host your application in a virtual machine and install MySQL in that VM, then yes, you are going to lose data on reboot, because in this kind of hosting (like Amazon EC2) you rent a VM for CPU execution and the local disk data is transient. If you want persistent data you have to use a database located on another machine (this is done this way for architectural reasons, and cloud providers like Amazon also offer separate storage services).
But if the database is provided as a service, no: a persistent database is the usual level of service you should expect from a provider.
Can someone point me to example Java code that works with both a Memcached server and a Couchbase server? If I understand correctly, one can use spymemcached to communicate with both servers. Does that mean I can use the same code to connect (obviously with a different URL) and get and put values, or are there differences?
Any particular reason to use the memcached protocol directly?
The best practice when working with Couchbase is to use a client SDK (many languages are supported, including Java, as you can see here: http://www.couchbase.com/develop ).
The reason it is better to use the SDK (and, failing that, why you have to use Moxi) is to support clustering from your application.
The client SDK will direct operations to the correct cluster nodes, and the cluster map is updated automatically when you add new nodes (or when nodes fail).
The Java SDK tutorial will guide you through the different steps of developing an application using Couchbase:
- http://www.couchbase.com/docs/couchbase-sdk-java-1.1/tutorial.html
So, can you use the Java client SDK?
According to the Couchbase documentation, it supports the textual memcached protocol, so you can use any of the available Java memcached clients and reuse the same code you used for memcached. Note, however, that Couchbase supports the memcached protocol only through Moxi.
I have a connection pool set up in the Tomcat server's context.xml (the connection is used by several webapps, so that seems the best place for it).
However, I don't like having passwords hard-coded in the file. Is there any way for me to retrieve the password from elsewhere (a secure password store) and set it programmatically when the pooled connections are established?
I believe you are looking for a custom resource factory: you can code your factory to create a javax.sql.DataSource object (or a DBCP-style connection-pooling facade) and put your custom code for fetching the username/password for the connection there.
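A skeleton of such a custom resource factory is sketched below. The lookupPassword call into a secure store is hypothetical, and the Properties return value is a placeholder; a real factory would build and return a pooled javax.sql.DataSource (e.g. DBCP's BasicDataSource) from the same values:

```java
import java.util.Hashtable;
import java.util.Properties;
import javax.naming.Context;
import javax.naming.Name;
import javax.naming.Reference;
import javax.naming.spi.ObjectFactory;

public class SecureDataSourceFactory implements ObjectFactory {

    // Hypothetical hook into a secure password store; replace with a real lookup.
    static String lookupPassword(String alias) {
        return "password-for-" + alias;
    }

    @Override
    public Object getObjectInstance(Object obj, Name name, Context nameCtx,
                                    Hashtable<?, ?> environment) throws Exception {
        // Attributes declared on the <Resource> element arrive as RefAddr entries.
        Reference ref = (Reference) obj;
        String url = (String) ref.get("url").getContent();
        String user = (String) ref.get("username").getContent();
        String password = lookupPassword((String) ref.get("passwordAlias").getContent());

        // Placeholder return: a real implementation would configure and return
        // a pooled DataSource using url, user, and password.
        Properties p = new Properties();
        p.setProperty("url", url);
        p.setProperty("username", user);
        p.setProperty("password", password);
        return p;
    }
}
```

The Resource element in context.xml would then carry a factory attribute pointing at this class and a passwordAlias attribute instead of a literal password.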
Do note that if you're looking for extra security, the pragmatic way is to rely on filesystem permissions to secure your context.xml file. Adding extra layers (such as a custom resource factory implementation) won't make the system more secure, because you still need the credentials for the secure password store configured somewhere -- you end up with a chicken-and-egg problem.
You might want to implement single sign-on for your web application (e.g. using JOSSO). Note that it may be significant overhead for a small project, but it would solve your problem. Apart from that, there are vendor-specific solutions like Oracle's Secure External Password Store. Another platform-dependent example: you can configure PostgreSQL's pg_hba.conf. Try the following authentication options:
- Authenticate using SSL client certificates.
- Authenticate using the Pluggable Authentication Modules (PAM) service provided by the operating system.
- Authenticate using an LDAP server.
- ... and many others
Edit: in one of our projects we used 3DES to encrypt the password. And yes, the key was hardcoded in the application :)
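For completeness, that dated approach looks roughly like this with the JDK's built-in DESede cipher. The hardcoded key and the ECB mode are part of the weakness the answer admits, not a recommendation, and the key bytes here are purely illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.DESedeKeySpec;

public class PasswordCrypto {

    // 24-byte 3DES key hardcoded in the application -- exactly the admitted weakness.
    private static final byte[] KEY = "0123456789abcdefghijklmn".getBytes(StandardCharsets.UTF_8);

    private static Cipher cipher(int mode) throws Exception {
        SecretKey key = SecretKeyFactory.getInstance("DESede")
                .generateSecret(new DESedeKeySpec(KEY));
        // ECB leaks patterns across blocks; it mirrors the naive original approach only.
        Cipher c = Cipher.getInstance("DESede/ECB/PKCS5Padding");
        c.init(mode, key);
        return c;
    }

    static String encrypt(String plain) {
        try {
            byte[] enc = cipher(Cipher.ENCRYPT_MODE)
                    .doFinal(plain.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(enc);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    static String decrypt(String encoded) {
        try {
            byte[] dec = cipher(Cipher.DECRYPT_MODE)
                    .doFinal(Base64.getDecoder().decode(encoded));
            return new String(dec, StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        String stored = encrypt("db-password"); // what would sit in the property file
        System.out.println(decrypt(stored));    // prints db-password
    }
}
```

Anyone who can read the application binary can recover the key, so this only obfuscates the password rather than securing it; today AES-GCM with an externally supplied key would be the minimum.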