Azure Functions CosmosDB binding API keeps loading - Java

First of all, the API works as intended locally. When deployed to an Azure Functions app, the API endpoint keeps loading and eventually returns HTTP 504 (Gateway Timeout).
The page keeps loading; there is no response from Azure Functions.
Integration
I'm looking to fetch all data from the collection when I call the HttpTrigger.
Function.java
@FunctionName("get")
public HttpResponseMessage get(
        @HttpTrigger(name = "req",
                methods = {HttpMethod.GET, HttpMethod.POST},
                authLevel = AuthorizationLevel.ANONYMOUS)
        HttpRequestMessage<Optional<String>> request,
        @CosmosDBInput(name = "database",
                databaseName = "progMobile",
                collectionName = "news",
                partitionKey = "{Query.id}",
                connectionStringSetting = "CosmosDBConnectionString")
        Optional<String> item,
        final ExecutionContext context) {
    // Item list
    context.getLogger().info("Parameters are: " + request.getQueryParameters());
    context.getLogger().info("String from the database is " + (item.isPresent() ? item.get() : null));
    // Convert and display
    if (!item.isPresent()) {
        return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
                .body("Document not found.")
                .build();
    }
    else {
        // return JSON from Cosmos. Alternatively, we can parse the JSON string
        // and return an enriched JSON object.
        return request.createResponseBuilder(HttpStatus.OK)
                .header("Content-Type", "application/json")
                .body(item.get())
                .build();
    }
}
Function.json
{
"scriptFile" : "../ProgMobileBackend-1.0-SNAPSHOT.jar",
"entryPoint" : "com.function.Function.get",
"bindings" : [ {
"type" : "httpTrigger",
"direction" : "in",
"name" : "req",
"methods" : [ "GET", "POST" ],
"authLevel" : "ANONYMOUS"
}, {
"type" : "cosmosDB",
"direction" : "in",
"name" : "database",
"databaseName" : "progMobile",
"partitionKey" : "{Query.id}",
"connectionStringSetting" : "CosmosDBConnectionString",
"collectionName" : "news"
}, {
"type" : "http",
"direction" : "out",
"name" : "$return"
} ]
}
The Azure Functions monitor log does not show any error.
Running the function in the portal (Code + Test menu) does not show any error either.
The httpTrigger URL I'm using: https://johnmiguel.azurewebsites.net/api/get?id=id
I added the CosmosDBConnectionString value to the Function App configuration (did not check the "Deployment slot" option).
I'm using a Cosmos DB for NoSQL instance.
The Function App runtime is set to Java and the version is set to Java 8.

Figured it out: the Java function was built with Java 17 while the Function App was set to Java 8.
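For anyone hitting the same symptom, the fix is just to make the two versions agree: either compile the function for Java 8 or raise the Function App's Java version to match the build. A minimal sketch of the build-side option, assuming a standard Maven build (the property names below are stock Maven compiler settings, not anything specific to this project):

<properties>
    <!-- Compile bytecode for the Java version the Function App runtime is configured for (Java 8 here). -->
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
</properties>

The other direction, raising the Function App's Java version to 17 in the portal's general settings (or through the azure-functions-maven-plugin runtime configuration), works just as well.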

Related

How to upload Json Data or file in Elasticsearch using Java?

This is my sample JSON data coming from a .json file. I want to bulk-insert it into Elasticsearch dynamically so that I can perform operations on it. Can someone help me with Java code to add this data dynamically? This is just a sample of 5-6 objects; I have more than 500 objects like this.
[{
"data1" : "developer",
"data2" : "categorypos",
"data3" : "1001"
},
{
"data1" : "developer",
"data2" : "developerpos",
"data3" : "1002"
},
{
"data1" : "developer",
"data2" : "developpos",
"data3" : "1003"
},
{
"data1" : "support",
"data2" : "datapos",
"data3" : "1004"
}
]
Elasticsearch provides bulk operations; the following documentation might help:
https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html
You need to read the file from your application, iterate over the array, and send each document to Elasticsearch.
To do the latter, you should use the BulkProcessor class.
BulkProcessor bulkProcessor = BulkProcessor.builder(
        (request, bulkListener) -> esClient.bulkAsync(request, RequestOptions.DEFAULT, bulkListener),
        new BulkProcessor.Listener() {
            @Override
            public void beforeBulk(long executionId, BulkRequest request) { }
            @Override
            public void afterBulk(long executionId, BulkRequest request, BulkResponse response) { }
            @Override
            public void afterBulk(long executionId, BulkRequest request, Throwable failure) { }
        })
        .setBulkActions(10000)
        .setFlushInterval(TimeValue.timeValueSeconds(5))
        .build();
For each JSON document, call:
bulkProcessor.add(new IndexRequest("INDEXNAME").source(json, XContentType.JSON));
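Tying it together, here is a minimal sketch of reading the array from the .json file with Jackson and feeding each element to the bulk processor. The file name "data.json" is a placeholder, "INDEXNAME" is the index name from above, and esClient/bulkProcessor are assumed to be the ones built in the previous snippet (import package locations vary slightly across Elasticsearch client versions):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.common.xcontent.XContentType;
import java.io.File;
import java.util.concurrent.TimeUnit;

// Read the whole JSON array from the file, then hand each object to the BulkProcessor,
// which batches the index requests according to the settings above.
ObjectMapper mapper = new ObjectMapper();
JsonNode array = mapper.readTree(new File("data.json"));
for (JsonNode doc : array) {
    bulkProcessor.add(new IndexRequest("INDEXNAME")
            .source(mapper.writeValueAsString(doc), XContentType.JSON));
}
// Flush anything still buffered and release resources when done.
bulkProcessor.awaitClose(30, TimeUnit.SECONDS);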

Elastic Search autocompletion/suggestion Java

I want to implement search-as-you-type functionality in Elasticsearch using the Java API. The queries that I want to transform to Java are below.
Do you have any idea how I can execute these queries in Java?
The queries are very similar, but I want to solve at least one of them.
This is my initial approach:
SearchResponse response = client.prepareSearch("kal")
        .setTypes("products")
        .setQuery(multiMatchQuery("description_en", "name", "description_en")) // Query
        .setFrom(0).setSize(60).setExplain(true)
        .get();
SearchHit[] results = response.getHits().getHits();
for (SearchHit hit : results) {
    String sourceAsString = hit.getSourceAsString();
    Map<String, SearchHitField> responseFields = hit.getFields();
    SearchHitField field = responseFields.get("product_id");
    Map map = hit.getSource();
    System.out.println(map.toString());
}
Queries:
POST /kal/products/_search?pretty
{
"suggest": {
"name-suggest" : {
"prefix" : "wine",
"completion" : {
"field" : "suggest_name"
}
}
}
}
GET /kal/products/_search
{ "query": {
"prefix" : {
"name" : "wine",
"description": "wine"
}
}
}
GET /kal/products/_search
{
"query" : {
"multi_match" : {
"fields" : ["name", "description_en"],
"query" : "description_",
"type" : "phrase_prefix"
}
}
}
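As a hedged sketch, the third query (multi_match with phrase_prefix) and the completion suggester could look roughly like this with the same transport-client style as the initial approach above; the builder names follow the 5.x/6.x Java API and may differ across client versions:

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.index.query.MultiMatchQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.suggest.SuggestBuilder;
import org.elasticsearch.search.suggest.SuggestBuilders;

// multi_match / phrase_prefix over "name" and "description_en" with the prefix text "wine".
SearchResponse prefixResponse = client.prepareSearch("kal")
        .setTypes("products")
        .setQuery(QueryBuilders.multiMatchQuery("wine", "name", "description_en")
                .type(MultiMatchQueryBuilder.Type.PHRASE_PREFIX))
        .setFrom(0).setSize(60)
        .get();

// Completion suggester equivalent of the first query ("name-suggest" on the "suggest_name" field).
SearchResponse suggestResponse = client.prepareSearch("kal")
        .suggest(new SuggestBuilder().addSuggestion("name-suggest",
                SuggestBuilders.completionSuggestion("suggest_name").prefix("wine")))
        .get();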

apiKey as query param in Swagger UI 2.0

Context: Converting Swagger from the current REST documentation in the 1.2 spec to the 2.0 spec.
Environment: Java 8, swagger-maven-plugin 3.0.1, swagger annotations (com.wordnik)
Where I am stuck: I was able to generate the REST API documentation successfully. However, the REST APIs need an API key as a query param. In the 1.2 spec, this was added using the following snippet in index.html:
function addApiKeyAuthorization() {
    var key = $('#input_apiKey')[0].value;
    log("key: " + key);
    if (key && key.trim() != "") {
        log("added key " + key);
        //window.authorizations.add("api_key", new ApiKeyAuthorization("api_key", key, "query"));
        window.authorizations.add("apiKey", new ApiKeyAuthorization("apiKey", key, "header"));
    }
}
$('#input_apiKey').change(function() {
    addApiKeyAuthorization();
});
// if you have an apiKey you would like to pre-populate on the page for demonstration purposes...
var apiKey = "ABCD";
$('#input_apiKey').val(apiKey);
addApiKeyAuthorization();
However, for the 2.0 spec, my search led to the following changes in the YAML file:
securityDefinitions:
  UserSecurity:
    type: apiKey
    in: header
    name: myApiKey
The current index.html has the following in the window.onload function:
window.onload = function() {
    // Build a system
    const ui = SwaggerUIBundle({
        url: "http://someCoolsite.com/swagger.json",
        dom_id: '#swagger-ui',
        presets: [
            SwaggerUIBundle.presets.apis,
            SwaggerUIStandalonePreset
        ],
        plugins: [
            SwaggerUIBundle.plugins.DownloadUrl
        ],
        layout: "StandaloneLayout"
    })
    window.ui = ui
}
After further exploration, I have found the answer to my question above.
First, my index.html is as below:
<script>
$(function() {
    window.onload = function() {
        // Build a system
        const ui = SwaggerUIBundle({
            url: "http://www.webhostingsite.com/swagger.json",
            dom_id: '#swagger-ui',
            presets: [
                SwaggerUIBundle.presets.apis,
                SwaggerUIStandalonePreset
            ],
            plugins: [
                SwaggerUIBundle.plugins.DownloadUrl
            ],
            layout: "StandaloneLayout"
        })
        window.ui = ui
    }
    window.onFailure = function(data) {
        log("Unable to Load SwaggerUI");
    }
    function addApiKeyAuthorization() {
        var key = $('#input_apiKey')[0].value;
        log("key: " + key);
        if (key && key.trim() != "") {
            log("added key " + key);
            //window.authorizations.add("api_key", new ApiKeyAuthorization("api_key", key, "query"));
            window.authorizations.add("apiKey", new ApiKeyAuthorization("apiKey", key, "query"));
        }
    }
    $('#input_apiKey').change(function() {
        addApiKeyAuthorization();
    });
});
</script>
Then, I updated my swagger.json to be as below:
{
  "swagger" : "2.0",
  "securityDefinitions": {
    "apiKey": {
      "type": "apiKey",
      "name": "apiKey",
      "in": "query"
    }
  },
  "host" : "<api base path>",
  "basePath" : "/v1",
  "security": [{"apiKey": []}], // Global security (applies to all operations)
  .......
Third: Hosted index.html and swagger.json on AWS S3 for static web hosting.
The part where I went wrong was "security": [{"apiKey": []}].
I was doing "security": {"apiKey": []} all the while, forgetting that the value of "security" is a list.
Hope this helps.

Spring boot + Jersey parse Response JSON data

I am trying to build a simple page that will display some places on a map. I'm really new to REST in Java and I'm using the Spring Boot + Jersey framework.
I'm using the Google Places API to search for places. I made the GET respond to a lat/lng that I'm passing through Postman:
http://localhost:9000/res/-23.558712/-46.592235
Inside the code I'm calling:
@GET
@Path("/{lat}/{lng}")
@Produces("application/json")
public void getBook(@PathParam("lat") String lat, @PathParam("lng") String lng) {
    this.lat = lat;
    this.lng = lng;
    getdata();
}

public static void getdata() {
    Client client = ClientBuilder.newClient();
    String entity = client.target("https://maps.googleapis.com/maps/api/place/nearbysearch/json?location=" + lat + "," + lng + "&radius=5000&types=market&name=dia&key=MY_KEY")
            .request(MediaType.TEXT_PLAIN_TYPE)
            .header("some-header", "true")
            .get(String.class);
    System.out.println(entity);
}
And I get a really big JSON response:
"results" : [
{
"geometry" : {
"location" : {
"lat" : -23.5703071,
"lng" : -46.58153289999999
},
"viewport" : {
"northeast" : {
"lat" : -23.56895811970849,
"lng" : -46.5801839197085
},
"southwest" : {
"lat" : -23.5716560802915,
"lng" : -46.58288188029149
}
}
},....
And much more. My question is: how do I parse this JSON response in a way that lets me manage the data? I want to get the name, location, and so on, to plot on the map.
Will I have to create a model class with all the response parameters?
I really don't know if this is all wrong.
This resource falls under reverse geocoding, and if you need the name of the location you should get it from the "formatted_address" attribute; the address components will give you the exact address. The following link on reverse geocoding has some information which might help you further: https://developers.google.com/maps/documentation/geocoding/intro#ReverseGeocoding
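As for the parsing question itself: you don't strictly need a model class for every field; a tree-based parse is enough to pull out the name and location to plot on the map. A minimal sketch using Jackson (the field names follow the Places response shown above; using Jackson's ObjectMapper here is an assumption, though Spring Boot ships it by default):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// "entity" is the raw JSON string returned by the Places call above.
// readTree throws IOException, so declare or handle it in the calling method.
ObjectMapper mapper = new ObjectMapper();
JsonNode root = mapper.readTree(entity);
for (JsonNode result : root.path("results")) {
    String name = result.path("name").asText();
    double lat = result.path("geometry").path("location").path("lat").asDouble();
    double lng = result.path("geometry").path("location").path("lng").asDouble();
    // Collect these into a small DTO (name/lat/lng) and return them as JSON to plot on the map.
    System.out.println(name + " -> " + lat + "," + lng);
}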

Example of standalone Apache Qpid (amqp) Junit Test

Does anyone have an example of using Apache Qpid within a standalone JUnit test?
Ideally I want to be able to create a queue on the fly to which I can put and get messages within my test.
So I'm not testing Qpid within my test (I'll use integration tests for that), but it would be very useful to be able to test methods that handle messages without having to mock out a load of services.
Here is the setup method I use for Qpid 0.30 (I use this in a Spock test, but it should be portable to plain Java or JUnit with no problems). It supports SSL connections and the HTTP management plugin, and uses only in-memory startup; startup time is sub-second. Configuration for Qpid is awkward compared to using ActiveMQ for the same purpose, but Qpid is AMQP compliant and allows smooth, vendor-neutral testing of AMQP clients (obviously the use of exchanges cannot mimic RabbitMQ's implementation, but for basic purposes it is sufficient).
First I created a minimal test-config.json which I put in the resources folder:
{
"name": "${broker.name}",
"modelVersion": "2.0",
"defaultVirtualHost" : "default",
"authenticationproviders" : [ {
"name" : "passwordFile",
"type" : "PlainPasswordFile",
"path" : "${qpid.home_dir}${file.separator}etc${file.separator}passwd",
"preferencesproviders" : [{
"name": "fileSystemPreferences",
"type": "FileSystemPreferences",
"path" : "${qpid.work_dir}${file.separator}user.preferences.json"
}]
} ],
"ports" : [ {
"name" : "AMQP",
"port" : "${qpid.amqp_port}",
"authenticationProvider" : "passwordFile",
"keyStore" : "default",
"protocols": ["AMQP_0_10", "AMQP_0_8", "AMQP_0_9", "AMQP_0_9_1" ],
"transports" : [ "SSL" ]
}, {
"name" : "HTTP",
"port" : "${qpid.http_port}",
"authenticationProvider" : "passwordFile",
"protocols" : [ "HTTP" ]
}],
"virtualhostnodes" : [ {
"name" : "default",
"type" : "JSON",
"virtualHostInitialConfiguration" : "{ \"type\" : \"Memory\" }"
} ],
"plugins" : [ {
"type" : "MANAGEMENT-HTTP",
"name" : "httpManagement"
}],
"keystores" : [ {
"name" : "default",
"password" : "password",
"path": "${qpid.home_dir}${file.separator}keystore.jks"
}]
}
I also needed to create a keystore.jks file for localhost because the QPID broker and the RabbitMQ client do not like to communicate over an unencrypted channel. I also added a file called "passwd" in "integTest/resources/etc" that has this content:
guest:password
Here is the code from the unit test setup:
class level variables:
def tmpFolder = Files.createTempDir()
Broker broker
def amqpPort = PortFinder.findFreePort()
def httpPort = PortFinder.findFreePort()
def qpidHomeDir = 'src/integTest/resources/'
def configFileName = "/test-config.json"
code for the setup() method:
def setup() {
    broker = new Broker();
    def brokerOptions = new BrokerOptions()
    File file = new File(qpidHomeDir)
    String homePath = file.getAbsolutePath();
    log.info(' qpid home dir=' + homePath)
    log.info(' qpid work dir=' + tmpFolder.absolutePath)
    brokerOptions.setConfigProperty('qpid.work_dir', tmpFolder.absolutePath);
    brokerOptions.setConfigProperty('qpid.amqp_port', "${amqpPort}")
    brokerOptions.setConfigProperty('qpid.http_port', "${httpPort}")
    brokerOptions.setConfigProperty('qpid.home_dir', homePath);
    brokerOptions.setInitialConfigurationLocation(homePath + configFileName)
    broker.startup(brokerOptions)
    log.info('broker started')
}
code for cleanup()
broker.shutdown()
To make an AMQP connection from a RabbitMQ client:
ConnectionFactory factory = new ConnectionFactory();
factory.setUri("amqp://guest:password@localhost:${amqpPort}");
factory.useSslProtocol()
log.info('about to make connection')
def connection = factory.newConnection();
//get a channel for sending the "kickoff" message
def channel = connection.createChannel();
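For completeness, a small hedged sketch of exercising the embedded broker through that channel; the queue name and payload are arbitrary placeholders:

import com.rabbitmq.client.GetResponse;
import java.nio.charset.StandardCharsets;

// Declare a throwaway auto-delete queue, publish one message, and read it back.
channel.queueDeclare("test.queue", false, false, true, null);
channel.basicPublish("", "test.queue", null, "kickoff".getBytes(StandardCharsets.UTF_8));
GetResponse reply = channel.basicGet("test.queue", true);
assert reply != null && "kickoff".equals(new String(reply.getBody(), StandardCharsets.UTF_8));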
The Qpid project has a number of tests that use an embedded broker for testing. Whilst we use a base class to handle startup and shutdown, you could do the following to simply integrate a broker within your tests:
private int port = 1;

public void setUp()
{
    // config is a Configuration Application Registry object built from the configuration file.
    ApplicationRegistry.initialise(config, port);
    TransportConnection.createVMBroker(port);
}

public void test()
{ ... }

public void tearDown()
{
    TransportConnection.killVMBroker(port);
    ApplicationRegistry.remove(port);
}
Then for the connection you need to specify the connection URL for the broker, i.e. brokerlist='vm://1'.
My solution, using qpid-broker 6.1.1: add the dependency below to pom.xml.
<dependency>
<groupId>org.apache.qpid</groupId>
<artifactId>qpid-broker</artifactId>
<version>6.1.1</version>
<scope>test</scope>
</dependency>
The Qpid config file:
{
"name" : "${broker.name}",
"modelVersion" : "6.1",
"defaultVirtualHost" : "default",
"authenticationproviders" : [ {
"name" : "anonymous",
"type" : "Anonymous"
} ],
"ports" : [ {
"name" : "AMQP",
"port" : "${qpid.amqp_port}",
"authenticationProvider" : "anonymous",
"virtualhostaliases" : [ {
"name" : "defaultAlias",
"type" : "defaultAlias"
} ]
} ],
"virtualhostnodes" : [ {
"name" : "default",
"type" : "JSON",
"defaultVirtualHostNode" : "true",
"virtualHostInitialConfiguration" : "{ \"type\" : \"Memory\" }"
} ]
}
Code to start the Qpid server:
Broker broker = new Broker();
BrokerOptions brokerOptions = new BrokerOptions();
// I use fix port number
brokerOptions.setConfigProperty("qpid.amqp_port", "20179");
brokerOptions.setConfigurationStoreType("Memory");
// work_dir for qpid's log, configs, persist data
System.setProperty("qpid.work_dir", "/tmp/qpidworktmp");
// init config of qpid. Relative path for classloader resource or absolute path for non-resource
System.setProperty("qpid.initialConfigurationLocation", "qpid/qpid-config.json");
brokerOptions.setStartupLoggedToSystemOut(false);
broker.startup(brokerOptions);
Code to stop the Qpid server:
broker.shutdown();
Since I use anonymous mode, the client should do something like this:
SaslConfig saslConfig = new SaslConfig() {
    public SaslMechanism getSaslMechanism(String[] mechanisms) {
        return new SaslMechanism() {
            public String getName() {
                return "ANONYMOUS";
            }

            public LongString handleChallenge(LongString challenge, String username, String password) {
                return LongStringHelper.asLongString("");
            }
        };
    }
};

ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
factory.setPort(20179);
factory.setSaslConfig(saslConfig);

Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
That's all.
A little more on how to do it with other versions.
You can download the qpid-broker binary package from the official site. After downloading and unzipping it, you can run it as a server to test against your case. Once your case connects to that server properly, use the command line to generate (or just copy) the initial config file from QPID_WORK, remove the useless id field, and use it for the embedded server as above.
The most complicated thing is authentication. You can choose PLAIN mode, but then you have to add the username and password to the initial config. I chose anonymous mode, which needs a little extra code when connecting. For other authentication modes you have to specify a password file or key/cert store, which I didn't try.
If it still doesn't work, you can read the qpid-broker docs and the Main class code in the qpid-broker artifact, which show how the command line works for each setting.
The best I could figure out was:
PropertiesConfiguration properties = new PropertiesConfiguration();
properties.addProperty("virtualhosts.virtualhost.name", "test");
properties.addProperty("security.principal-databases.principal-database.name", "testPasswordFile");
properties.addProperty("security.principal-databases.principal-database.class", "org.apache.qpid.server.security.auth.database.PropertiesPrincipalDatabase");

ServerConfiguration config = new ServerConfiguration(properties);
ApplicationRegistry.initialise(new ApplicationRegistry(config) {
    @Override
    protected void createDatabaseManager(ServerConfiguration configuration) throws Exception {
        Properties users = new Properties();
        users.put("guest", "guest");
        users.put("admin", "admin");
        _databaseManager = new PropertiesPrincipalDatabaseManager("testPasswordFile", users);
    }
});
TransportConnection.createVMBroker(ApplicationRegistry.DEFAULT_INSTANCE);
With a URL of:
amqp://admin:admin@/test?brokerlist='vm://:1?sasl_mechs='PLAIN''
The big pain is with configuration and authorization. Mileage may vary.
