I'm working on a Spring Boot project that until now only used a local Postgres database. However, we are now working on deploying the application to Google Cloud Platform, which involves using Cloud SQL. I have found several guides on how to connect to Cloud SQL and decided to follow this one.
However, we can't afford to have separate development databases also running in Cloud SQL, so we would like to continue using local Postgres databases for development. For this, I wrote the following code:
import java.util.Properties;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class PersistenceContext {

    @Bean
    public DriverManagerDataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setUrl(System.getenv("DB_URL"));
        dataSource.setUsername(System.getenv("DB_USER"));
        dataSource.setPassword(System.getenv("DB_PASS"));

        // Only configure the Cloud SQL socket factory when an instance name is set.
        String dbInstance = System.getenv("DB_INSTANCE");
        if (dbInstance != null && !dbInstance.isEmpty()) {
            Properties connectionProperties = new Properties();
            connectionProperties.setProperty("socketFactory",
                    "com.google.cloud.sql.postgres.SocketFactory");
            connectionProperties.setProperty("cloudSqlInstance", dbInstance);
            dataSource.setConnectionProperties(connectionProperties);
        }
        return dataSource;
    }
}
With this I hoped that simply not specifying the Cloud SQL socket factory and dbInstance would allow me to use a local database. However, when attempting to run the application with only the DB URL, user, and password variables set, I run into the following exception:
java.lang.IllegalArgumentException: An instance connection name must be provided in the format <PROJECT_ID>:<REGION>:<INSTANCE_ID>.
The full stacktrace can be found here
The DB URL is configured as follows:
DB_URL: jdbc:postgresql://localhost:5432/soundshare
What can I do to switch between the databases programmatically? (I'd rather not store database details in config files if it can be avoided)
Thanks!
As @Thomas Andolf mentioned, doing it yourself in code is not a good idea; you'd basically be re-inventing the wheel. There are several methods for "externalising" your configuration.
https://docs.spring.io/spring-boot/docs/1.2.2.RELEASE/reference/html/boot-features-external-config.html
Personally I like using OS environment variables when I can, because I can easily swap them out when I deploy as containers as well.
This is a cross-section of the application.properties file of a project I'm working on:
spring.datasource.url=${ospec_db_url}
spring.datasource.username=${ospec_db_user:somedefault}
spring.datasource.password=${ospec_db_password}
On my dev system, I just set the environment variables and they get picked up. In my case, I have a text file with my variables like this:
export ospec_db_url=jdbc:postgresql://localhost:5432/ospec_db
export ospec_db_password=somebadasspassword
I just source the text file and it's applied. I can easily switch between projects this way. When you package for, say, K8s, you can read from a secrets file into your env variables, and when you go to the cloud, you can pass the values in as variables in your startup scripts.
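For the Cloud SQL question above, the same idea removes the hand-rolled DriverManagerDataSource entirely. A minimal sketch using profile-specific property files (assuming the Cloud SQL socket factory dependency is on the classpath; the soundshare database name and the DB_* variables are taken from the question):

# application-local.properties (run with spring.profiles.active=local)
spring.datasource.url=jdbc:postgresql://localhost:5432/soundshare
spring.datasource.username=${DB_USER}
spring.datasource.password=${DB_PASS}

# application-gcp.properties (run with spring.profiles.active=gcp)
spring.datasource.url=jdbc:postgresql:///soundshare?cloudSqlInstance=${DB_INSTANCE}&socketFactory=com.google.cloud.sql.postgres.SocketFactory
spring.datasource.username=${DB_USER}
spring.datasource.password=${DB_PASS}

Switching databases is then just a matter of changing spring.profiles.active, and Spring Boot builds the DataSource for you.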
Related
I have some java services which use environment variables for config values.
I'd like to migrate them to use Spring Cloud config instead of environment variables.
Currently, my config is all in application.yml files, as the following:
someKey: ${SOME_KEY_ENV_VAR}
If I were to migrate to using Spring Cloud Config, how would I modify the above line to load its value from the cloud config server, instead of environment variables? (Assuming I've separately setup the maven dependencies & other configuration, to hook them up)
All examples of cloud config clients only show Java code, e.g.:
#Value("${someKey}")
private String someKey
Is that enough, or will I also need to make any changes to the yaml?
What about things like datasource URLs which don't have a corresponding @Value but are only defined in YAML?
I am working on Java Spring Boot with MongoDB using Kubernetes. Currently I have just hard-coded the URI in the application properties, and I would like to know how I can access the MongoDB credentials on Kubernetes with Java.
The recommended way of passing credentials to Kubernetes pods is to use secrets and to expose them to the application either as environment variables, or as a volume. The link above describes in detail how each approach works.
If I properly understood the question, it is specifically about Java Spring Boot applications running on Kubernetes.
A few options come to mind, some of them not that secure or not exclusive to running on Kubernetes, but still mentioned here:
Environment variables with values in the deployment/pod configuration. Everyone with access to the configuration will be able to see them.
Use ${<env-var>} / ${<env-var>:<default-value>} to access the environment variables in Spring Boot's application.properties/.yaml file. For example, if DB_USERNAME and DB_PASSWORD are two such environment variables:
spring.data.mongodb.username = ${DB_USERNAME}
spring.data.mongodb.password = ${DB_PASSWORD}
...or
spring.data.mongodb.uri = mongodb://${DB_USERNAME}:${DB_PASSWORD}@<host>:<port>/<dbname>
This will work regardless of whether the application uses the spring.data.mongodb.* properties or custom-named properties injected in a @Configuration class with @Value.
Based on how the Java application is started in the container, startup arguments can be defined in the deployment/pod configuration, similarly to the bullet point above.
Environment variables with values populated from secret(s). Access the environment variables from Spring Boot as above.
Secrets as files: the secrets will "appear" in a file dynamically added to the container at some location/directory; this requires you to define your own @Configuration class that loads the user name and password from the file using @PropertySource (see the sketch after this list).
The whole application.properties could be put in a ConfigMap. Note that the properties will be in clear text. Then populate a Volume with the ConfigMap so that application.properties is added to the container at some location/directory. Point Spring Boot to that location using spring.config.location as an env var, system property, or program argument.
Spring Cloud Vault
Some other external vault-type of secure storage - an init container can fetch the db credentials and make them available to the Java application in a file on a shared volume in the same pod.
Spring Cloud Config...even though it is unlikely you'd want to put db credentials in its default implementation of the server storage backend - git.
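For the "secrets as files" option, a minimal sketch of such a configuration class (the mount point /etc/db-secret/mongo.properties and the key names are hypothetical; they depend on how the Secret and volume are defined):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;

// Assumes the Secret is projected into the container as a properties-formatted
// file at /etc/db-secret/mongo.properties (hypothetical path), containing e.g.
//   mongo.username=appuser
//   mongo.password=...
@Configuration
@PropertySource("file:/etc/db-secret/mongo.properties")
public class MongoCredentialsConfig {

    // Resolved from the mounted file when the context starts.
    @Value("${mongo.username}")
    private String username;

    @Value("${mongo.password}")
    private String password;
}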
I have a small API running on PCF using Spring JPA. Of course, within the code, I could use a JDBC connection running prepared statements to access a bound MySQL instance. Doing this requires a username and password, as per normal standards when connecting to a database via Java.
However, with Spring JPA, I don't have to do any of this. I simply bind the MySQL instance and can perform my queries using the JPA API.
For lack of a better question, what is this magic?
Cloud Foundry with Spring Cloud follows twelve-factor app patterns throughout.
For configuration, too, it uses the config pattern suggested by the twelve-factor methodology.
According to this pattern, properties should be stored outside the code, in the environment, as environment variables, so that the application bundle can be deployed to any environment once it's built, without modifications. Since the application picks up its configuration from environment variables, different environments have to define the same environment variables with different values.
Whenever you add a service to your application using cf bind-service, Cloud Foundry sets predefined environment variables related to that service in the virtual machine (or container, or whatever it runs on).
You can check these environment variables using cf env app-name. (Command Reference)
Sample output of cf env app-name
{
  "VCAP_APPLICATION": {
    "application_id": "fa05c1a9-0fc1-4fbd-bae1-139850dec7a3",
    "application_name": "my-app",
    "application_uris": [
      "my-app.10.244.0.34.xip.io"
    ],
    "application_version": "fb8fbcc6-8d58-479e-bcc7-3b4ce5a7f0ca",
    "limits": {
      "disk": 1024,
      "fds": 16384,
      "mem": 256
    },
    "name": "my-app",
    "space_id": "06450c72-4669-4dc6-8096-45f9777db68a",
    "space_name": "my-space",
    "uris": [
      "my-app.10.244.0.34.xip.io"
    ],
    "users": null,
    "version": "fb8fbcc6-8d58-479e-bcc7-3b4ce5a7f0ca"
  }
}
Using the Spring Actuator endpoints, you can inspect all environment variables via the /env endpoint. It lists more properties than cf env.
When Spring detects that:
the cloud profile is active (set by the spring.profiles.active environment property, or the spring.profile property in Spring Cloud),
auto-configuration is enabled (by @SpringBootApplication),
no in-memory DataSource dependency is present on the classpath (though I assume it would give the cloud DataSource configuration preference even if an in-memory dependency were present), and
no DataSource has been explicitly configured,
then Spring creates the DataSource bean itself from environment variables, provided a datasource service (like Postgres) has been bound to the application.
Below is the link to the environment properties that it uses for creating the DataSource.
https://docs.cloudfoundry.org/buildpacks/java/spring-service-bindings.html
Here is a list of the DataSource-related properties.
cloud.services.<database-service-name>.connection.hostname
cloud.services.<database-service-name>.connection.name
cloud.services.<database-service-name>.connection.password
cloud.services.<database-service-name>.connection.port
cloud.services.<database-service-name>.connection.username
cloud.services.<database-service-name>.plan
cloud.services.<database-service-name>.type
The database-service-name is defined in the manifest.yml file in the env: block.
In my experience, if there is only one database service bound to the application, there is no need to define the database service name in the environment variables section.
Note: by default Spring tries to use the servlet container's pooled connection support; however, most of the time we ourselves have to configure properties that are only supported by connection pool providers like Apache DBCP. In those cases we have to create the DataSource bean manually from the environment properties (using System.getProperty() or Spring's Environment.getProperty()), as sketched below.
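A minimal sketch of such a manual DataSource, assuming a bound MySQL service named mysql-db in manifest.yml and Apache DBCP2 on the classpath (both assumptions; the property names follow the list above):

import javax.sql.DataSource;

import org.apache.commons.dbcp2.BasicDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.Environment;

@Configuration
public class CloudDataSourceConfig {

    @Bean
    public DataSource dataSource(Environment env) {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl(String.format("jdbc:mysql://%s:%s/%s",
                env.getProperty("cloud.services.mysql-db.connection.hostname"),
                env.getProperty("cloud.services.mysql-db.connection.port"),
                env.getProperty("cloud.services.mysql-db.connection.name")));
        ds.setUsername(env.getProperty("cloud.services.mysql-db.connection.username"));
        ds.setPassword(env.getProperty("cloud.services.mysql-db.connection.password"));
        // Pool tuning like this is exactly what the auto-configured
        // DataSource does not expose, hence the manual bean.
        ds.setMaxTotal(20);
        return ds;
    }
}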
For a long time, in many IT shops, I have seen complex processes to manage Java EE application configuration per environment:
- custom tools, with or without a database, to manage replacements in the properties files (unzip the WAR, replace, re-zip the WAR...)
- properties files externalized to some obscure directory on the server (with some process to update them from time to time), sometimes combined with a JNDI configuration...
- Maven profiles and lots of big properties files
But for the database connection, everybody uses a JNDI datasource.
Why is this not generalized to all configuration that depends on the environment?
Update: I want to deal with variables other than the datasource; there is no question about the datasource: it's configured in JNDI for Java EE applications. Beyond that, if you want to hack JNDI...
Setting up database connectivity (like user name, password, URL, driver etc.) somewhere in the application server has several advantages over doing it yourself in the WAR:
The app server can be a central point where the DB is configured, and you might have several WARs running on that server sharing a DB. So you need to set it up only once.
The DB settings, especially the credentials (username, password) are stored somewhere in the app server instead of somewhere in the WAR. That can have security implications (for instance, restricting access to that file is easier done than in a WAR archive).
You can set up one JNDI path to retrieve a DataSource instance pointing to the DB and do not need to worry about username and password anymore. If you have multiple app servers (one live system, one test system, several developer machines) with different DB URLs and credentials, then you can just configure that in each app server individually and deploy the WAR files without the need to change DB settings (see below).
The server might provide additional services, like connection pools, container managed transactions, etc. So again, you don't have to do it on your own in the WAR.
This is true for other services provided by the app server as well, for example JavaMail.
There are other cases where you want to configure something that is specific to one web application and does not rely on the environment (the app server), like logging (although that may be set up in the app server, too). In those cases you might prefer using static config files, for instance log4j.properties.
I want to illustrate the third bullet point a bit further ...
Suppose you have one WAR in three app servers (developer machine, test server, live server).
Option 1 (DB setup in WAR)
Create a database.properties:
db.url=jdbc:mysql://localhost:3306/localdb
db.user=myusername
db.pass=mysecretpassword
#db.url=jdbc:mysql://10.1.2.3:3306/testdb
#db.user=myusername
#db.pass=mysecretpassword
#db.url=jdbc:mysql://10.2.3.4:3306/livedb
#db.user=myusername
#db.pass=mysecretpassword
Before you deploy it somewhere, you need to check if your settings are pointing to the right DB!
Also, if you check this file in to some version control system, then you might not want to publish the DB username/password of your local machine.
Option 2 (DB setup in App Server)
Imagine you have configured the three servers with their individual DB settings, and each of them registers the DB with the JNDI path java:database/mydb.
Then you can retrieve the DataSource like so:
Context context = new InitialContext();
DataSource dataSource = (DataSource) context.lookup("java:database/mydb");
This is working on every app server instance and you can deploy your WAR without the need to modify anything.
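As a side note, in container-managed classes (servlets, EJBs, CDI beans) the same lookup can be done declaratively; a minimal sketch, assuming a Java EE 6+ container (ReportDao is a hypothetical class):

import javax.annotation.Resource;
import javax.sql.DataSource;

public class ReportDao {

    // The container injects the DataSource registered under the JNDI path,
    // so no manual InitialContext lookup is needed.
    @Resource(lookup = "java:database/mydb")
    private DataSource dataSource;
}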
Conclusion
By moving the configuration to the app server, you gain the advantage of separating environment-dependent settings from your app code. I would prefer this whenever you have settings involving IP addresses, credentials, etc.
Using a static .properties file, on the other hand, is simpler to manage. I would prefer that option when dealing with settings that have no dependency on the environment or are app-specific.
I use JDBC and created an H2 database called usaDB from an SQL script. Then I filled all the tables via JDBC.
The problem is that after I connect to usaDB at localhost:8082, I cannot see my tables in the tree on the left. There is only the INFORMATION_SCHEMA database and the rootUser I specified when creating usaDB.
How can I view the content of the tables in my H2 database?
I tried the query SELECT * FROM INFORMATION_SCHEMA.TABLES, but it returned many table names, just not the ones I created.
I had the same issue and the answer seems to be really stupid: when you type your database name, you shouldn't add the ".h2.db" suffix. For example, if you have the db file "D:\somebase.h2.db", your connection string should be "jdbc:h2:file:/D:/somebase". Otherwise JDBC creates a new empty database file named "somebase.h2.db.h2.db" and you see what you see: only system tables.
You can use the SHOW command. It lists the schemas, tables, or columns of a table, e.g.:
SHOW TABLES
This problem drove me around the twist, and besides this page I read many (many!) others until I solved it.
My use case was to see how a Spring Batch project created in STS using :: Spring Boot :: (v1.3.1.RELEASE) was going to behave with the H2 database; to do that, I needed to get the H2 console running as well, to query the DB results of the batch run.
This is what I did and found out:
Created a Web project in STS using Spring Boot.
Added the H2 dependency to the pom.xml of the latter.
Added a Spring configuration class to the project:
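A minimal sketch of what that class needs to do (assuming Spring Boot 1.x package names): register H2's web console servlet under /console/*:

import org.h2.server.web.WebServlet;
import org.springframework.boot.context.embedded.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class WebConfiguration {

    // Exposes the H2 web console inside the Web project.
    @Bean
    public ServletRegistrationBean h2ConsoleServlet() {
        ServletRegistrationBean registration = new ServletRegistrationBean(new WebServlet());
        registration.addUrlMappings("/console/*");
        return registration;
    }
}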
This solves the Web project deficiencies in STS. If you run the project now, you can access the H2 console as follows: http://localhost:8080/console
Now create a Spring Batch project in STS (the alternative method creates a different template missing most of the classes for persisting data; this method creates two projects, one Complete and one initial. Use the Complete one in the following.)
The Spring Batch project created with STS uses an in-memory H2 database that it CLOSES once the application run ends; once you run it, you can see this in the logging output.
So what we need is to create a new DataSource that overrides the default one that ships with the project. (If you are interested, just have a look at the log messages and you will see that a default datasource is created by o.s.j.d.e.EmbeddedDatabaseFactory with the following parameters: Starting embedded database: url='jdbc:hsqldb:mem:testdb', username='sa'.)
So it starts an in-memory database and then closes it. You have no chance of seeing the data with the H2 console; it has come and gone.
So, create a DataSource as follows:
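A minimal sketch of a file-backed H2 DataSource that overrides the embedded default (the file path is an example; point it anywhere writable):

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class BatchDataSourceConfig {

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("org.h2.Driver");
        // A file-based URL so the data survives the batch run and the
        // H2 console can open it afterwards.
        dataSource.setUrl("jdbc:h2:file:~/springbatch/batchdb");
        dataSource.setUsername("sa");
        dataSource.setPassword("");
        return dataSource;
    }
}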
You can of course use a properties file to map the parameters, and profiles for different DataSource instances...but I digress.
Now, make sure you set the file path in the JDBC URL to a location on your computer where a file can be persisted.
Running the Spring Batch (Complete) project, you should now have a db file in that location after it runs (persisting the Person data).
Run the Web project you configured previously in these steps, and you WILL :=) see your data, and all the Batch job and step run data (et voila!).
Painful but rewarding. Hope it helps you to really BOOTSTRAP :=)
I have met exactly this problem.
From what you describe, I suppose that your JDBC code connects to the "real" H2 server, but in the web console you are connecting to the database in the wrong mode (embedded in-memory mode, aka h2mem). That means H2 will create a new in-memory database instead of using your real database stored elsewhere.
Please make sure that when you connect to this database, you use the mode Generic H2 (Server), NOT Generic H2 (Embedded).
The version of the H2 jar file and of the installed H2 database should be the same.
If you have created and populated an H2 database table using the Maven dependency in Spring Boot, then change the JDBC URL to jdbc:h2:mem:testdb when connecting to H2 using the web console.
It is an old question, but I came across the same problem. Eventually I found out that the default JDBC URL was pointing at a test server rather than at my application. After correcting it, I could access the right DB.
I tried both the Generic H2 (Embedded) and Generic H2 (Server) options; both worked as long as the JDBC URL is provided correctly.
In Grails 4.0.1 the JDBC URL for development is jdbc:h2:mem:devDb. Check your application.yml file for the exact URL.
For the people who are using H2 in embedded (persistent) mode and want to "connect" to it from IntelliJ (other IDEs probably apply too):
Use, for example, a JDBC URL as follows: jdbc:h2:./database.h2
Note that H2 does not allow implicit relative paths and requires adding an explicit ./
Relative paths are relative to the current workdir.
When you run your application, the workdir is most likely set to your project's root dir.
On the other hand, the IDE's workdir is most likely not your project's root.
Hence, in the IDE, when "connecting" to your database, you need to use an absolute path like: jdbc:h2:/Users/me/projects/MyAwesomeProject/database.h2
For some reason IntelliJ by default also adds ;MV_STORE=false, which disables the MVStore engine that is in fact used by default in H2.
So make sure that both your application and your IDE use the same storage engine, as MVStore and PageStore have different file layouts.
Note that you cannot "connect" to your database while your application is using it, because of locking. The other way around applies too.
In my case the issue was caused by the fact that I didn't set the H2 username and password in Java. Unfortunately, Spring didn't display any errors, so it was not easy to figure out. Adding these lines to the dataSource method fixed the issue:
dataSource.setUsername("sa");
dataSource.setPassword("");
Also, I should have specified the schema when creating tables in schema.sql
Selecting Generic H2 (Server) solved it for me. We were tempted to use the default Generic H2 (Embedded), which is wrong.