I've been trying to integrate the Firebase Admin SDK into our backend server, which runs on Java. The source of my question is this piece of code, provided by Google:
FileInputStream serviceAccount = new FileInputStream("path/to/serviceAccountKey.json");
FirebaseOptions options = new FirebaseOptions.Builder()
.setCredentials(GoogleCredentials.fromStream(serviceAccount))
.setDatabaseUrl("https://<DATABASE_NAME>.firebaseio.com/")
.build();
FirebaseApp.initializeApp(options);
I've already tested it and integrated it properly in my code. However, I dislike that I apparently need to leave serviceAccountKey.json on my server in order to use the Admin SDK.
I have two simple questions for you guys:
Is there a way to avoid storing the sensitive information (serviceAccountKey.json) on the server (since it could possibly be reverse-engineered)?
Is .setDatabaseUrl(...) necessary if I'm using my own MySQL DB? The only data Firebase effectively holds for me on their servers is my user base, since I use Firebase Authentication. I store the UID in my own DB to refer to users.
Yes, there are alternative ways to load the configuration needed for the Admin SDK, as mentioned in this post by Hiranya Jayathilaka:
You can create a JSON file similar to the one below:
{
"databaseURL": "https://database-name.firebaseio.com",
"projectId": "my-project-id",
"storageBucket": "bucket-name.appspot.com"
}
Then create an environment variable named FIREBASE_CONFIG on your server and set it to point to the JSON file.
Then you'd only need to call FirebaseApp.initializeApp() with no parameters.
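A minimal sketch of that no-argument initialization (assuming FIREBASE_CONFIG is set as above and credentials are resolved through Application Default Credentials, e.g. the GOOGLE_APPLICATION_CREDENTIALS environment variable):
// FIREBASE_CONFIG points at (or contains) the JSON config above;
// credentials come from Application Default Credentials rather than a hard-coded key path.
FirebaseApp app = FirebaseApp.initializeApp();
System.out.println("Initialized app: " + app.getName());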
As the name suggests, .setDatabaseUrl(...) is only needed to point the SDK at your Realtime Database. If you're not using the Realtime Database, it can be omitted.
I used StartApplicationRequest to create a sample request to start the application as given below:
StartApplicationRequest request = StartApplicationRequest.builder()
.applicationId("test-app-name")
.build();
Then, I used the ReactorCloudFoundryClient to start the application as shown below:
cloudFoundryClient.applicationsV3().start(request);
But my test application test-app-name is not getting started. I'm using the latest Java CF client version (v4.5.0.RELEASE), but I don't see a way to start the application.
Quite surprisingly, the outdated version works with the code below:
cfstatus = cfClient.startApplication("test-app-name"); //start app
cfstatus = cfClient.stopApplication("test-app-name"); //stop app
cfstatus = cfClient.restartApplication("test-app-name"); //restart app
I want to do the same with the latest CF client library, but I don't see any useful reference. I referred to the test cases in the official Cloud Foundry GitHub repo and arrived at the code below after checking a lot of docs:
StartApplicationRequest request = StartApplicationRequest.builder()
.applicationId("test-app-name")
.build();
cloudFoundryClient.applicationsV3().start(request);
Note that cloudFoundryClient is a ReactorCloudFoundryClient instance, since the latest library doesn't include the client class used in the outdated code. I would like to do all operations (start/stop/restart) with the latest library. The above code isn't working.
A couple things here...
Using the reactor-based client, your call to cloudFoundryClient.applicationsV3().start(request) returns a Mono<StartApplicationResponse>. That's not the actual response; it's the possibility of one. You need to do something to get the response. See here for more details.
If you would like behavior similar to the original cf-java-client, you can call .block() on the Mono<StartApplicationResponse> and it will wait for the response and return it.
Ex:
client.applicationsV3()
.start(StartApplicationRequest.builder()
.applicationId("test-app-name")
.build())
.block();
The second thing is that the builder takes .applicationId, not an application name. You need to pass in an application guid, not the name. As it is, you're going to get a 404 saying the application doesn't exist. You can use the client to fetch the guid, or you can use CloudFoundryOperations instead (see below).
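For completeness, a sketch of that guid lookup with the low-level client (an assumption on my part that the v3 list endpoint's name filter fits your case and that exactly one app matches the name):
// Resolve the guid by name, then start the app (blocking for simplicity)
String applicationId = client.applicationsV3()
    .list(ListApplicationsRequest.builder()
        .name("test-app-name")
        .build())
    .map(response -> response.getResources().get(0).getId())
    .block();

client.applicationsV3()
    .start(StartApplicationRequest.builder()
        .applicationId(applicationId)
        .build())
    .block();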
The CloudFoundryOperations interface is a higher-level API. It's easier to use, in general, and supports things like starting an app based on the name instead of the guid.
Ex:
ops.applications()
.start(StartApplicationRequest.builder()
.name("test-app-name").build())
.block();
What is the best and correct way to list Azure Database for PostgreSQL servers present in my Resource Group using Azure Java SDK?
Currently, we have deployments that happen using ARM templates and once the resources have been deployed we want to be able to get the information about those resources from Azure itself.
I have tried doing in the following way:
PagedList<SqlServer> azureSqlServers = azure1.sqlServers().listByResourceGroup("resourceGrpName");
//PagedList<SqlServer> azureSqlServers = azure1.sqlServers().list();
for(SqlServer azureSqlServer : azureSqlServers) {
System.out.println(azureSqlServer.fullyQualifiedDomainName());
}
System.out.println(azureSqlServers.size());
But the list size returned is 0.
However, for virtual machines, I am able to get the information in the following way:
PagedList<VirtualMachine> vms = azure1.virtualMachines().listByResourceGroup("resourceGrpName");
for (VirtualMachine vm : vms) {
System.out.println(vm.name());
System.out.println(vm.powerState());
System.out.println(vm.size());
System.out.println(vm.tags());
}
So, what is the right way of getting the information about the Azure Database for PostgreSQL using Azure Java SDK?
P.S.
Once I get the information regarding Azure Database for PostgreSQL, I would need similar information about the Azure Database for MySQL Servers.
Edit: I have seen this question, which was asked two years ago, and would like to know whether Azure has added support for Azure Database for PostgreSQL/MySQL servers since then:
Azure Java SDK for MySQL/PostgreSQL databases?
So, I implemented it in the following way; it can be treated as an alternative approach.
Looking at the Azure SDK for Java repo on GitHub (https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/postgresql), it looks like they have it in beta, so I searched for the pom in mvnrepository. I imported the following dependency into my project (azure-mgmt-postgresql is still in beta):
<!-- https://mvnrepository.com/artifact/com.microsoft.azure.postgresql.v2017_12_01/azure-mgmt-postgresql -->
<dependency>
<groupId>com.microsoft.azure.postgresql.v2017_12_01</groupId>
<artifactId>azure-mgmt-postgresql</artifactId>
<version>1.0.0-beta-5</version>
</dependency>
In code, the following is the gist of how I did it:
I already have a service principal created and have its information with me.
But anyone trying this will need the clientId, tenantId, clientSecret, and subscriptionId, the way @Jim Xu explained.
// create the credentials object
ApplicationTokenCredentials credentials = new ApplicationTokenCredentials(clientId, tenantId, clientSecret, AzureEnvironment.AZURE);
// build a rest client object configured with the credentials created above
RestClient restClient = new RestClient.Builder()
.withBaseUrl(credentials.environment(), AzureEnvironment.Endpoint.RESOURCE_MANAGER)
.withCredentials(credentials)
.withSerializerAdapter(new AzureJacksonAdapter())
.withResponseBuilderFactory(new AzureResponseBuilder.Factory())
.withInterceptor(new ProviderRegistrationInterceptor(credentials))
.withInterceptor(new ResourceManagerThrottlingInterceptor())
.build();
// use the PostgreSQLManager
PostgreSQLManager psqlManager = PostgreSQLManager.authenticate(restClient, subscriptionId);
PagedList<Server> azurePsqlServers = psqlManager.servers().listByResourceGroup(resourceGrpName);
for(Server azurePsqlServer : azurePsqlServers) {
System.out.println(azurePsqlServer.fullyQualifiedDomainName());
System.out.println(azurePsqlServer.userVisibleState().toString());
System.out.println(azurePsqlServer.sku().name());
}
Note: Server class refers to com.microsoft.azure.management.postgresql.v2017_12_01.Server
Also, if you take a look at the Azure class, you will notice this is how they do it internally.
For reference, look at how the Azure class uses SqlServerManager internally to build an authenticated manager, in case you want to use other services that are still in preview or beta.
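For the MySQL part of the question, my assumption (not verified, I only ran the PostgreSQL code) is that the sibling azure-mgmt-mysql beta artifact can be wired up the same way with its own manager class; a sketch:
// Assumes azure-mgmt-mysql (com.microsoft.azure.mysql.v2017_12_01) is on the classpath and
// restClient/subscriptionId are built exactly as above.
MySQLManager mysqlManager = MySQLManager.authenticate(restClient, subscriptionId);
for (com.microsoft.azure.management.mysql.v2017_12_01.Server mysqlServer
        : mysqlManager.servers().listByResourceGroup(resourceGrpName)) {
    System.out.println(mysqlServer.fullyQualifiedDomainName());
}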
According to my test, you can use the Java SDK azure-mgmt-resources to do this. For example:
Create a service principal
az login
# it will create a service principal and assign a contributor role to the sp
az ad sp create-for-rbac -n "MyApp" --scope "/subscriptions/<subscription id>" --sdk-auth
Code
String tenantId = "<the tenantId you copy>";
String clientId = "<the clientId you copy>";
String clientSecret = "<the clientSecret you copy>";
String subscriptionId = "<the subscription id you copy>";
ApplicationTokenCredentials creds =
    new ApplicationTokenCredentials(clientId, tenantId, clientSecret, AzureEnvironment.AZURE);
RestClient restClient =new RestClient.Builder()
.withBaseUrl(AzureEnvironment.AZURE, AzureEnvironment.Endpoint.RESOURCE_MANAGER)
.withSerializerAdapter(new AzureJacksonAdapter())
.withReadTimeout(150, TimeUnit.SECONDS)
.withLogLevel(LogLevel.BODY)
.withResponseBuilderFactory(new AzureResponseBuilder.Factory())
.withCredentials(creds)
.build();
ResourceManager resourceClient= ResourceManager.authenticate(restClient).withSubscription(subscriptionId);
ResourceManagementClientImpl client= resourceClient.inner();
String filter = "resourceType eq 'Microsoft.DBforPostgreSQL/servers'"; // the filter to apply on the operation
String expand = null; // the $expand query parameter; e.g. use "changedTime,createdTime" to expand both properties
Integer top = null; // the number of results to return; if null is passed, returns all resources
PagedList<GenericResourceInner> results = client.resources().list(filter, expand, top);
while (true) {
for (GenericResourceInner resource : results.currentPage().items()) {
System.out.println(resource.id());
System.out.println(resource.name());
System.out.println(resource.type());
System.out.println(resource.location());
System.out.println(resource.sku().name());
System.out.println("------------------------------");
}
if (results.hasNextPage()) {
results.loadNextPage();
} else {
break;
}
}
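The same loop should also cover the Azure Database for MySQL servers from the P.S.; my assumption is that only the resource type in the filter changes:
String mysqlFilter = "resourceType eq 'Microsoft.DBforMySQL/servers'";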
Besides, you can also use the Azure REST API to do this. For more details, please refer to https://learn.microsoft.com/en-us/rest/api/resources/resources
I have a Java application running in ECS in which I want to read data from a table in account 1 (source_table) and write it to a table in account 2 (destination_table). I created two DynamoDB clients with different credential providers: for the source_table client I'm using an STSAssumeRoleSessionCredentialsProvider with the ARN of a role in account 1; for the destination client I'm using DefaultAWSCredentialsProviderChain.
The assume-role bit works and I'm able to read using the source client, but the destination client does not work: it still tries to use the assumed-role credentials when writing to destination_table and fails with an unauthorized error (assumed-role is not authorized to perform PutItem).
I tried using EC2ContainerCredentialsProviderWrapper on the destination client, but I got the same error.
Should this work? Or are the credentials shared under the hood which makes it impossible to have two different AWSCredentialProviders running simultaneously like this?
I noticed this answer which uses static credentials and apparently works, so I'm at a loss why this doesn't work.
I figured it out with some help from AWS support. It was a problem with my IAM configuration on the role in account 2. I was misled by the error message, which said 'assumed-role is not authorized to perform PutItem', when in fact my original account 2 role itself was unable to do so.
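For anyone who lands here with the same symptom: the two-client pattern itself works once the IAM policies are right. A rough sketch of the setup with the AWS SDK for Java v1 (the role ARN, session name, and region are placeholders):
// Client for account 1 (source_table): assume a role in the source account
AWSCredentialsProvider sourceCredentials = new STSAssumeRoleSessionCredentialsProvider.Builder(
        "arn:aws:iam::111111111111:role/source-table-read", "cross-account-read")
    .build();
AmazonDynamoDB sourceClient = AmazonDynamoDBClientBuilder.standard()
    .withCredentials(sourceCredentials)
    .withRegion(Regions.US_EAST_1)
    .build();

// Client for account 2 (destination_table): use the task/instance credentials
AmazonDynamoDB destinationClient = AmazonDynamoDBClientBuilder.standard()
    .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
    .withRegion(Regions.US_EAST_1)
    .build();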
I do not want to block threads in my application, so I am wondering: are calls to Google Datastore async? For example, the docs show something like this to retrieve an entity:
// Key employeeKey = ...;
LookupRequest request = LookupRequest.newBuilder().addKey(employeeKey).build();
LookupResponse response = datastore.lookup(request);
if (response.getMissingCount() == 1) {
throw new RuntimeException("entity not found");
}
Entity employee = response.getFound(0).getEntity();
This does not look like an async call to me, so is it possible to make async calls to the database in Java? I noticed App Engine has some libraries for async calls in its Java API, but I am not using App Engine; I will be calling Datastore from my own instances. Also, if there is an async library, can I test it against my local server? (For example, I could not find a way to point App Engine's async library at my local server, since it can't pick up my environment variables.)
In your shoes, I'd give a try to Spotify's open-source Asynchronous Google Datastore Client -- I have not personally tried it, but it appears to meet all of your requirements, including being able to test on your local server. Please give it a try and let us all know how well it meets your needs, so we can all benefit and learn -- thanks!
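If pulling in another dependency isn't an option, one stopgap (not true async I/O, it simply keeps the blocking lookup off your caller threads) is to wrap the existing call in a CompletableFuture on a dedicated executor; a sketch reusing the code from the question:
ExecutorService datastorePool = Executors.newFixedThreadPool(8);

CompletableFuture<Entity> employeeFuture = CompletableFuture.supplyAsync(() -> {
    try {
        // Key employeeKey = ...;
        LookupRequest request = LookupRequest.newBuilder().addKey(employeeKey).build();
        LookupResponse response = datastore.lookup(request);
        if (response.getMissingCount() == 1) {
            throw new RuntimeException("entity not found");
        }
        return response.getFound(0).getEntity();
    } catch (Exception e) {
        throw new CompletionException(e);
    }
}, datastorePool);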
I'm trying to start a new MySQL instance on Amazon RDS using the Java API and the following code:
CreateDBInstanceRequest createDBInstanceRequest = new CreateDBInstanceRequest();
createDBInstanceRequest.setEngine("MySQL");
createDBInstanceRequest.setLicenseModel("general-public-license");
createDBInstanceRequest.setEngineVersion("5.5.25a");
createDBInstanceRequest.setDBInstanceClass("db.t1.micro");
createDBInstanceRequest.setMultiAZ(false);
createDBInstanceRequest.setAutoMinorVersionUpgrade(true);
createDBInstanceRequest.setAllocatedStorage(5);
createDBInstanceRequest.setDBInstanceIdentifier("mydbinstance");
createDBInstanceRequest.setMasterUsername("master");
createDBInstanceRequest.setMasterUserPassword("password");
createDBInstanceRequest.setDBName("dbname");
createDBInstanceRequest.setPort(3306);
createDBInstanceRequest.setDBParameterGroupName("default.mysql5.5");
createDBInstanceRequest.setDBSubnetGroupName("dev");
createDBInstanceRequest.setBackupRetentionPeriod(1);
DBInstance dbInstance = RDS.createDBInstance(createDBInstanceRequest);
The problem is that this always results in the following error:
AWS Error Code: InsufficientDBInstanceCapacity, AWS Error Message:
Cannot create a database instance because there is no availability
zone with sufficient capacity. Please try your request again at a
later time.
As suggested, I tried again at a later time, but I have never been able to launch a new instance programmatically. However, when I launch an instance using the Amazon Management Console with exactly the same parameters, it launches instantly.
I have also noticed that this problem only occurs with DB Instance Class "db.t1.micro".
Is this instance class not available through the API?
Are you certain this exact version of MySQL is available in any of the availability zones in your region?
I would suggest executing DescribeOrderableDBInstanceOptions for your engine of choice first, filtering by your own criteria (e.g. DBInstanceClass="db.t1.micro"), and then selecting the version from the results.
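A sketch of that check with the AWS SDK for Java, reusing the RDS client object from the question:
// List which db.t1.micro MySQL offerings actually exist in this region,
// along with the availability zones they can launch in.
DescribeOrderableDBInstanceOptionsRequest optionsRequest = new DescribeOrderableDBInstanceOptionsRequest()
    .withEngine("mysql")
    .withDBInstanceClass("db.t1.micro");
DescribeOrderableDBInstanceOptionsResult optionsResult = RDS.describeOrderableDBInstanceOptions(optionsRequest);
for (OrderableDBInstanceOption option : optionsResult.getOrderableDBInstanceOptions()) {
    System.out.println(option.getEngineVersion() + " -> " + option.getAvailabilityZones());
}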