Google appengine textsearch API is not setting the production namespace - java

I want to retrieve Google App Engine text search data from my local machine by setting the namespace and application ID on the Search API. But my code is pulling the local machine's data instead of the production data. Below is my code. Can anyone point out the mistake?
NamespaceManager.set("my_name_space");
SearchServiceConfig config = SearchServiceConfig.newBuilder().setNamespace("my_name_space").build();
AdminSearchServiceFactory searchServiceFactory = new AdminSearchServiceFactory();
final SearchService searchService = searchServiceFactory.getSearchService("my_app_id", config);
GetResponse<Index> response2 = searchService.getIndexes(GetIndexesRequest.newBuilder());
for (Index index : response2) {
    System.out.println("index name---" + index.getName());
    System.out.println("namespace---" + index.getNamespace());
}
From the above code I expect the existing indexes from the production environment, but it returns my local machine's indexes.

I would suggest trying the Remote API [1] or Google Cloud Endpoints [2], as they seem relevant to the issue you are facing.
Hope this helps.
[1] https://cloud.google.com/appengine/docs/standard/java/tools/remoteapi
[2] https://cloud.google.com/endpoints/docs/
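As a rough illustration, a minimal Remote API sketch might look like this; the host name is a placeholder for your production app's domain, and useApplicationDefaultCredential() assumes a recent appengine-remote-api version with application-default credentials configured locally:
import com.google.appengine.tools.remoteapi.RemoteApiInstaller;
import com.google.appengine.tools.remoteapi.RemoteApiOptions;

// Placeholder host; replace with your deployed app's domain.
RemoteApiOptions options = new RemoteApiOptions()
        .server("my_app_id.appspot.com", 443)
        .useApplicationDefaultCredential();
RemoteApiInstaller installer = new RemoteApiInstaller();
installer.install(options);
try {
    // App Engine API calls made here (including the Search API calls
    // from the question) are routed to the production app.
} finally {
    installer.uninstall();
}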

Related

Azure list Azure Database for PostgreSQL servers in Resource group using Azure Java SDK

What is the best and correct way to list Azure Database for PostgreSQL servers present in my Resource Group using Azure Java SDK?
Currently, we have deployments that happen using ARM templates and once the resources have been deployed we want to be able to get the information about those resources from Azure itself.
I have tried doing it in the following way:
PagedList<SqlServer> azureSqlServers = azure1.sqlServers().listByResourceGroup("resourceGrpName");
//PagedList<SqlServer> azureSqlServers = azure1.sqlServers().list();
for (SqlServer azureSqlServer : azureSqlServers) {
    System.out.println(azureSqlServer.fullyQualifiedDomainName());
}
System.out.println(azureSqlServers.size());
But the list size returned is 0.
However, for virtual machines, I am able to get the information in the following way:
PagedList<VirtualMachine> vms = azure1.virtualMachines().listByResourceGroup("resourceGrpName");
for (VirtualMachine vm : vms) {
    System.out.println(vm.name());
    System.out.println(vm.powerState());
    System.out.println(vm.size());
    System.out.println(vm.tags());
}
So, what is the right way of getting the information about the Azure Database for PostgreSQL using Azure Java SDK?
P.S.
Once I get the information regarding Azure Database for PostgreSQL, I would need similar information about the Azure Database for MySQL Servers.
Edit: I have seen this question, which was asked 2 years back, and would like to know whether Azure has added support for Azure Database for PostgreSQL/MySQL servers since then:
Azure Java SDK for MySQL/PostgreSQL databases?
So, I implemented it in the following way; it can be treated as an alternative approach.
Looking at the Azure SDK for Java repo on GitHub (https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/postgresql), it looks like PostgreSQL support is still in beta, so I searched for the artifact on mvnrepository and imported the following dependency into my project (azure-mgmt-postgresql is still in beta):
<!-- https://mvnrepository.com/artifact/com.microsoft.azure.postgresql.v2017_12_01/azure-mgmt-postgresql -->
<dependency>
    <groupId>com.microsoft.azure.postgresql.v2017_12_01</groupId>
    <artifactId>azure-mgmt-postgresql</artifactId>
    <version>1.0.0-beta-5</version>
</dependency>
In code, the following is the gist of how I did it.
I already have a service principal created and have its information with me.
Anyone trying this will need the clientId, tenantId, clientSecret, and subscriptionId, obtained the way @Jim Xu explained.
// create the credentials object
ApplicationTokenCredentials credentials = new ApplicationTokenCredentials(clientId, tenantId, clientSecret, AzureEnvironment.AZURE);
// build a rest client object configured with the credentials created above
RestClient restClient = new RestClient.Builder()
        .withBaseUrl(credentials.environment(), AzureEnvironment.Endpoint.RESOURCE_MANAGER)
        .withCredentials(credentials)
        .withSerializerAdapter(new AzureJacksonAdapter())
        .withResponseBuilderFactory(new AzureResponseBuilder.Factory())
        .withInterceptor(new ProviderRegistrationInterceptor(credentials))
        .withInterceptor(new ResourceManagerThrottlingInterceptor())
        .build();
// use the PostgreSQLManager to list the servers in the resource group
PostgreSQLManager psqlManager = PostgreSQLManager.authenticate(restClient, subscriptionId);
PagedList<Server> azurePsqlServers = psqlManager.servers().listByResourceGroup(resourceGrpName);
for (Server azurePsqlServer : azurePsqlServers) {
    System.out.println(azurePsqlServer.fullyQualifiedDomainName());
    System.out.println(azurePsqlServer.userVisibleState().toString());
    System.out.println(azurePsqlServer.sku().name());
}
Note: the Server class refers to com.microsoft.azure.management.postgresql.v2017_12_01.Server.
Also, if you take a look at the Azure class, you will notice this is how they do it internally.
For reference, look at how SqlServerManager is used in the Azure class to create an authenticated manager, in case you want to use services that are still in preview or beta.
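On the P.S. about MySQL: assuming the analogous beta artifact azure-mgmt-mysql exists for the same API version (I have not verified the exact Maven coordinates or class names, so treat this as a sketch), the code should look nearly identical:
// Assumes a MySQLManager analogous to PostgreSQLManager in the (unverified)
// beta artifact azure-mgmt-mysql; restClient and subscriptionId are from above.
MySQLManager mysqlManager = MySQLManager.authenticate(restClient, subscriptionId);
PagedList<Server> azureMysqlServers = mysqlManager.servers().listByResourceGroup(resourceGrpName);
for (Server azureMysqlServer : azureMysqlServers) {
    System.out.println(azureMysqlServer.fullyQualifiedDomainName());
}
// Here Server would refer to com.microsoft.azure.management.mysql.v2017_12_01.Server.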
According to my test, you can use the Java SDK azure-mgmt-resources to implement this. For example:
Create a service principal
az login
# this will create a service principal and assign the Contributor role to it
az ad sp create-for-rbac -n "MyApp" --scope "/subscriptions/<subscription id>" --sdk-auth
Code:
String tenantId = "<the tenantId you copied>";
String clientId = "<the clientId you copied>";
String clientSecret = "<the clientSecret you copied>";
String subscriptionId = "<the subscription id you copied>";
ApplicationTokenCredentials creds = new ApplicationTokenCredentials(clientId, tenantId, clientSecret, AzureEnvironment.AZURE);
RestClient restClient = new RestClient.Builder()
        .withBaseUrl(AzureEnvironment.AZURE, AzureEnvironment.Endpoint.RESOURCE_MANAGER)
        .withSerializerAdapter(new AzureJacksonAdapter())
        .withReadTimeout(150, TimeUnit.SECONDS)
        .withLogLevel(LogLevel.BODY)
        .withResponseBuilderFactory(new AzureResponseBuilder.Factory())
        .withCredentials(creds)
        .build();
ResourceManager resourceClient = ResourceManager.authenticate(restClient).withSubscription(subscriptionId);
ResourceManagementClientImpl client = resourceClient.inner();
String filter = "resourceType eq 'Microsoft.DBforPostgreSQL/servers'"; // the filter to apply on the operation
String expand = null; // the $expand query parameter, e.g. $expand=changedTime,createdTime
Integer top = null; // the number of results to return; null returns all resources
PagedList<GenericResourceInner> results = client.resources().list(filter, expand, top);
while (true) {
    for (GenericResourceInner resource : results.currentPage().items()) {
        System.out.println(resource.id());
        System.out.println(resource.name());
        System.out.println(resource.type());
        System.out.println(resource.location());
        System.out.println(resource.sku().name());
        System.out.println("------------------------------");
    }
    if (results.hasNextPage()) {
        results.loadNextPage();
    } else {
        break;
    }
}
Besides, you can also use the Azure REST API. For more details, please refer to https://learn.microsoft.com/en-us/rest/api/resources/resources

Getting all users from Openfire server using smack 4.2.2

Well, I'm trying to get all users from an Openfire server using Smack 4.2.2, but unfortunately I don't know how.
UserSearchManager usm = new UserSearchManager(connection);
DomainBareJid domainJid = JidCreate.domainBareFrom(connection.getServiceName());
Form searchForm = usm.getSearchForm(domainJid);
Form answerForm = searchForm.createAnswerForm();
answerForm.setAnswer("Username", true);
answerForm.setAnswer("search", "*");
ReportedData data = usm.getSearchResults(answerForm, domainJid);
if (data.getRows() != null) {
    for (ReportedData.Row row : data.getRows()) {
        for (String jid : row.getValues("jid")) {
            System.out.println(jid);
        }
    }
}
This code doesn't work because of:
java.lang.IllegalArgumentException: Must have a local (user) JID set. Either you didn't configure one or you where not connected at least once
You can't get all users through XEP-0055 (Jabber Search); it only works with a filter, so the usual trick is to search for something you are sure no username contains (like a special character). The only way I know to list everyone is the REST API Plugin of Openfire. You can read more about this plugin from the link. Good luck.
The error is self-explanatory: either you did not connect at least once (or got disconnected and did not reconnect), or your username is wrong.
Maybe you are trying to connect without a local JID. Please check this explanation of XMPP address formats:
https://xmpp.org/rfcs/rfc6122.html#addressing-localpart
Hope you have solved the problem. I got my code working with this little change:
DomainBareJid domainJid = JidCreate.domainBareFrom("search." + connection.getServiceName());
In your Openfire admin console, go to Plugins > Available Plugins and install the REST API plugin; then you can use the following URL to get all users:
http://localhost:9090/plugins/restapi/v1/users
Note: all REST endpoints can be found at the following link:
https://www.igniterealtime.org/projects/openfire/plugins/1.2.1/restAPI/readme.html
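If you go the REST API route from Java, a minimal sketch might look like the following; the Authorization header carrying the plugin's secret key is an assumption based on the plugin readme, so adjust to your configuration:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Assumes the REST API plugin is installed and a secret key is configured
// in the Openfire admin console; "YOUR_SECRET_KEY" is a placeholder.
URL url = new URL("http://localhost:9090/plugins/restapi/v1/users");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("GET");
conn.setRequestProperty("Authorization", "YOUR_SECRET_KEY");
conn.setRequestProperty("Accept", "application/json");
try (BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
    String line;
    while ((line = reader.readLine()) != null) {
        System.out.println(line); // raw JSON listing of all users
    }
}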

AWS ElasticSearch 2.3 Java HTTP bulk API

I'm attempting to use the bulk HTTP API from Java on AWS Elasticsearch 2.3.
When I use a REST client for the bulk load, I get the following error:
504 GATEWAY_TIMEOUT
When I run it as Lambda in Java, for HTTP Posts, I get:
{
    "errorMessage": "2017-01-09T19:05:32.925Z 8e8164a7-d69e-11e6-8954-f3ac8e70b5be Task timed out after 15.00 seconds"
}
Through testing I noticed the bulk API doesn't work with these settings:
"number_of_shards" : 5,
"number_of_replicas" : 5
When shards and replicas are set to 1, I can do a bulk load no problem.
I have also tried this setting to accommodate my bulk load:
"refresh_interval" : -1
but so far it has made no impact at all. In the Java Lambda, I load my data as an InputStream from an S3 location.
What are my options at this point for Java HTTP?
Is there anything else in index settings I could try?
Is there anything else in AWS access policy I could try?
Thank you for your time.
Edit 1:
I have also tried these params: _bulk?action.write_consistency=one&refresh, but they make no difference so far.
Edit 2:
Here is what made my bulk load work: setting the consistency param (I did NOT need to set refresh_interval):
URIBuilder uriBuilder = new URIBuilder(myuri);
uriBuilder = uriBuilder.addParameter("consistency", "one");
HttpPost post = new HttpPost(uriBuilder.build());
HttpEntity entity = new InputStreamEntity(myInputStream);
post.setEntity(entity);
From my experience, this issue can occur when your index replication settings cannot be satisfied by your cluster. That happens either during a network partition, or if you simply set a replication requirement that your physical cluster cannot meet.
In my case, it happens when I apply my production settings (number_of_replicas: 3) to my development cluster (which is a single-node cluster).
Your two solutions (setting replicas to 1, or setting consistency to one) resolve the issue because they allow Elasticsearch to continue the bulk index without waiting for additional replicas to come online.
Elasticsearch could probably have a more intuitive failure message; maybe it does in Elasticsearch 5.
Setting your cluster to a single node while still requiring replicas is the easiest way to reproduce the failure.
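For completeness, if lowering the replica count is acceptable, number_of_replicas is a dynamic index setting that can be changed on a live index. Here is a sketch in the same Apache HttpClient style as the question; the endpoint is a placeholder, and your AWS access policy must permit the request:
import org.apache.http.client.methods.HttpPut;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

// Placeholder endpoint; point it at your AWS Elasticsearch domain and index.
HttpPut put = new HttpPut("https://my-es-endpoint/my-index/_settings");
put.setEntity(new StringEntity("{\"index\":{\"number_of_replicas\":1}}", ContentType.APPLICATION_JSON));
try (CloseableHttpClient client = HttpClients.createDefault()) {
    client.execute(put); // reduces the replica requirement on the live index
}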

Java Google datastore async calls

I do not want to block threads in my application, so I am wondering: are calls to the Google Datastore async? For example, the docs show something like this to retrieve an entity:
// Key employeeKey = ...;
LookupRequest request = LookupRequest.newBuilder().addKey(employeeKey).build();
LookupResponse response = datastore.lookup(request);
if (response.getMissingCount() == 1) {
    throw new RuntimeException("entity not found");
}
Entity employee = response.getFound(0).getEntity();
This does not look like an async call to me, so is it possible to make async calls to the database in Java? I noticed App Engine has some libraries for async calls in its Java API, but I am not using App Engine; I will be calling the Datastore from my own instances. Also, if there is an async library, can I test it on my local server? (For example, I could not find a way to point App Engine's async library at my local server; it can't pick up my environment variables.)
In your shoes, I'd give a try to Spotify's open-source Asynchronous Google Datastore Client -- I have not personally tried it, but it appears to meet all of your requirements, including being able to test on your local server. Please give it a try and let us all know how well it meets your needs, so we can all benefit and learn -- thanks!
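If you end up staying on the synchronous client, one stopgap (not the Spotify client's API, just plain java.util.concurrent) is to push the blocking call onto a dedicated executor; this sketch reuses datastore and employeeKey from the question:
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Dedicated pool for Datastore I/O so application threads never block.
ExecutorService datastorePool = Executors.newFixedThreadPool(8);

CompletableFuture<Entity> employeeFuture = CompletableFuture.supplyAsync(() -> {
    LookupRequest request = LookupRequest.newBuilder().addKey(employeeKey).build();
    LookupResponse response = datastore.lookup(request);
    if (response.getMissingCount() == 1) {
        throw new RuntimeException("entity not found");
    }
    return response.getFound(0).getEntity();
}, datastorePool);

// Attach a callback instead of blocking on get().
employeeFuture.thenAccept(employee -> System.out.println("loaded: " + employee));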

Scope not allowed when calling API Method from API Explorer

I have a strange behaviour in Google App Engine. I am developing with Eclipse and Java, specifically with Google Cloud Endpoints. I created a sample API with the following settings. I was actually working with many other scopes, but I decided to try with only one to track down the error.
@Api(
    name = "adminmanagement",
    version = "v1",
    scopes = {AdminManagement.EMAIL_SCOPE},
    clientIds = {AdminManagement.WEB_CLIENT_ID, AdminManagement.API_EXPLORER_CLIENT_ID}
)
public static final String EMAIL_SCOPE = "https://www.googleapis.com/auth/userinfo.email";
public static final String WEB_CLIENT_ID = "***.apps.googleusercontent.com";
public static final String API_EXPLORER_CLIENT_ID = com.google.api.server.spi.Constant.API_EXPLORER_CLIENT_ID;
In the API method, as usual, I check whether the user object is null:
if (user == null) {
    throw new OAuthRequestException("Unauthorised Access!");
}
This is pretty much straightforward and it has always worked; however, this time it does not. If I try to call the API method through the API Explorer, I get the following error:
401 Unauthorized
And through the Eclipse Console I can see the following one:
INFO: getCurrentUser: AccessToken; scope not allowed
The SDK version is 1.9.1, but at the moment I have another application which uses the Drive API and works. I tried deleting and creating a new Cloud Console project, and deleting and creating a new App Engine application, but I always get this error. By the way, if I deploy the application to App Engine I get a 500 Internal Error with no details, and NOTHING shows up in the logs -- just the API call with no errors whatsoever.
This is driving me crazy, what am I missing?
EDIT: The bug DOES NOT occur in version 1.8.9 and below...
The problem magically resolved itself; I haven't changed a thing. However, I wasn't the only one with this problem, so I suppose Google must have fixed something.
