I have two dashboards: a basic one and an advanced one. I want to provide a link (URL) to the advanced dashboard inside the basic dashboard, built from data saved in Elasticsearch (which varies for each index pattern).
I tried using a Markdown (Text) panel, but there I am not able to add any data from Elasticsearch.
I am using Elasticsearch and Kibana version 7.16.
I'm on the Google Sheets quickstart page, and when I click the "Enable Google Sheets API" button, a menu called "Configure your OAuth client" comes up. There's a dropdown with a bunch of options like web browser, web server, iOS, Chrome app, etc. I'm new to programming with APIs, so I have no idea what this means. Which one should I use?
I'm making a web scraping program in the Eclipse IDE with Selenium, and I want to dump the data I scrape into a Google Sheet. Any help would be appreciated.
URL: https://developers.google.com/sheets/api/quickstart/java
In order to use any Google API, you need to create a Google Cloud Platform project.
Google offers two ways to authenticate.
In most cases you will want to use OAuth2 for safe authentication.
You will need to obtain credentials that the app will use to obtain the access and refresh tokens.
To obtain credentials, you need to set up the consent screen of your app first, as described here.
The exact way depends on several factors, e.g. whether you want to publish an external application or just use it yourself, and on which type of device you want to use it.
For most internal applications, specifying the application type, the application name, and the scopes (these can be edited later) is enough.
The possible scopes for the Sheets API are listed here.
For a start, the easiest way is to follow the quickstart for Java and set up the consent screen by simply clicking on the Enable the Google Sheets API button.
This will do all the configuration automatically in the background.
Now, as the application type:
This depends strongly on how you want to implement the application - your choice will influence how the user will be redirected after authentication.
For testing on your local machine, Desktop App will mostly be the correct choice
I am not very familiar with Selenium, but for this kind of integration Web Server Application is likely the correct choice.
I recommend reading how Google implements Web Server and Desktop applications to decide which type would be the appropriate one for you.
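To make the moving parts concrete, here is a minimal stdlib-only sketch of the authorization URL a Desktop App flow opens in the browser. The client ID and loopback port are placeholders (they come from your downloaded credentials), and the read-only Sheets scope is just one example from the scopes list mentioned above:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class AuthUrlSketch {
    // Builds the OAuth2 authorization URL for a Desktop App flow.
    // Desktop App credentials redirect the user back to a loopback
    // address where the app listens for the authorization code.
    static String buildAuthUrl(String clientId, String redirectUri, String scope) {
        return "https://accounts.google.com/o/oauth2/v2/auth"
                + "?client_id=" + URLEncoder.encode(clientId, StandardCharsets.UTF_8)
                + "&redirect_uri=" + URLEncoder.encode(redirectUri, StandardCharsets.UTF_8)
                + "&response_type=code"
                + "&scope=" + URLEncoder.encode(scope, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Placeholder client ID; the scope here is the read-only Sheets scope
        System.out.println(buildAuthUrl(
                "YOUR_CLIENT_ID.apps.googleusercontent.com",
                "http://127.0.0.1:8888",
                "https://www.googleapis.com/auth/spreadsheets.readonly"));
    }
}
```

In practice the Google API client library for Java drives this whole exchange for you (including exchanging the returned code for access and refresh tokens); the sketch only shows what the first redirect contains.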
I am automating our Salesforce application with Selenium (Java) in the Eclipse IDE, but I'm not able to figure out how to connect to the Salesforce database from my Selenium code through Eclipse. My requirement is to query Salesforce, check that a case has been created, and delete it.
Any help is appreciated.
TIA
(1) Note that Selenium is intended for UI automation of websites.
If you already have UI automation tests developed for Salesforce and, as per your question, want to check Create and Delete operations on records, then use either 'Workbench'
(provided by Salesforce) or the 'Salesforce Inspector' plugin.
(2) Salesforce Workbench and Salesforce Inspector are places where you can fire a SOQL query and check the records.
(3) Try playing around with the Salesforce IDs of objects. They are unique IDs which give you the ability to open any record/object directly in the browser.
For example, take the default Opportunity object in Salesforce:
You can fire this SOQL in 'Salesforce Workbench' or 'Salesforce Inspector':
select id, name from Opportunity limit 50
You will see a list of IDs that you can open in Salesforce.
See link for more info: https://intellipaat.com/blog/tutorial/salesforce-tutorial/salesforce-workbench/
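To sketch how those IDs map to URLs you can open in a browser (the instance host and the record ID below are hypothetical), a classic-style record link is just the instance host followed by the record ID:

```java
public class RecordUrl {
    // Classic-style direct link: https://<instance>/<recordId>.
    // The instance host is a placeholder -- substitute your org's domain.
    static String recordUrl(String instanceHost, String recordId) {
        return "https://" + instanceHost + "/" + recordId;
    }

    public static void main(String[] args) {
        // Hypothetical Opportunity ID, as returned by the SOQL query above
        System.out.println(recordUrl("na1.salesforce.com", "0065g00000AbCdEAAV"));
    }
}
```

This is handy in UI tests: instead of clicking through list views, you can navigate Selenium straight to the record under test.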
Better to integrate Rest Assured with your Selenium tests. Suppose we have a scenario where we want to update a particular account in Salesforce.
We can follow the steps below to automate this test using both Rest Assured and Selenium:
Use Rest Assured to integrate with Salesforce. You can find plenty of resources online to make this work.
Use Rest Assured to fetch the data (the account) and store the name in a variable.
Use Selenium to log in to the Salesforce instance and open the account.
Perform the changes that are part of your test and validate.
This is just a high-level approach. We can leverage the Salesforce APIs to access the data and perform modifications using Rest Assured itself; Rest Assured can also be used to fetch database values.
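As a rough sketch of the fetch step, the REST call Rest Assured would issue against the Salesforce query endpoint can be outlined with the JDK's own HTTP types. The instance URL, access token, API version, and SOQL below are placeholders, and the request is only built, not sent:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

public class SalesforceQuerySketch {
    // Builds (but does not send) the GET request that the Salesforce REST API
    // query resource expects: /services/data/vXX.X/query?q=<url-encoded SOQL>,
    // authenticated with an OAuth bearer token.
    static HttpRequest queryRequest(String instanceUrl, String accessToken, String soql) {
        String url = instanceUrl + "/services/data/v53.0/query?q="
                + URLEncoder.encode(soql, StandardCharsets.UTF_8);
        return HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = queryRequest("https://yourInstance.my.salesforce.com",
                "ACCESS_TOKEN", "SELECT Id, Subject FROM Case WHERE Status = 'New'");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

With Rest Assured, the same call is a one-liner (`given().header(...).get(...)`), and the JSON response carries the records you would then verify or clean up.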
I have multiple ElasticSearch indices sharded over multiple machines into which I ingest logs using logstash. I expose the data via a REST API. The API layer searches for several terms (using the ElasticSearch Java API) and presents the user with the results.
Now, while the user is looking at the results in a browser, there may be new logs ingested which match the same queries and filters that the user is looking at. I would like the API layer to be notified that there are new items which match the query. So, if I get the same API request, I can send back only the new data (the delta) or send an HTTP 304 "Not Modified" response.
I looked at the percolator API, but this does not seem to do what I want - it appears to give the list of queries that a document about to be inserted will match. My requirements are these:
The same user may want to check for new log messages after a few seconds or a few days.
Multiple users may search for the same terms, but they will be on different pages (I will implement pagination using the SearchRequestBuilder.setSize API). So, the changed items will be different for different users, as they are on different pages of the same output.
Is there any way to do this in a scalable manner?
There is a popular "Changes API" proposal in the Elasticsearch issue tracker, but it is not implemented yet - there are some major implementation problems and it has dependencies on other tasks.
There is a 3rd-party plugin which tries to implement this feature, but it looks unmaintained.
I hope we can finally have this feature available in one of the next major ES releases.
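Until then, a common workaround is for the API layer to remember the timestamp of the newest result each client has seen and re-run the search with a range filter, returning only the delta (or a 304 if nothing matched). A minimal sketch of such a query body, assuming the logs carry an @timestamp field and the user's terms match against a message field (both field names are assumptions):

```java
public class DeltaQuery {
    // Builds the JSON body of a bool query that combines the user's search
    // with a range filter on the timestamp, so the same search returns only
    // documents newer than the last result the client has already seen.
    static String deltaQuery(String userQuery, String lastSeenTimestamp) {
        return "{\"query\":{\"bool\":{"
                + "\"must\":{\"match\":{\"message\":\"" + userQuery + "\"}},"
                + "\"filter\":{\"range\":{\"@timestamp\":{\"gt\":\"" + lastSeenTimestamp + "\"}}}"
                + "}}}";
    }

    public static void main(String[] args) {
        System.out.println(deltaQuery("error", "2015-06-01T12:00:00Z"));
    }
}
```

This polls rather than pushes, but it scales with ordinary search traffic and works regardless of which page of the original result set each user is on.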
Would Elasticsearch X-Pack be the type of thing you are looking for?
Hello! I would like to upload data to an already-created CloudSearch domain.
I already process the data using a MapReduce job with Hadoop to produce a string in JSON format.
I can't find out how to upload a string directly to Amazon CloudSearch to get the data into the search domain.
You post the documents to the domain's document endpoint (the endpoints can be found in the AWS console dashboard for your domain).
You can do this using the command-line tools Amazon provides, using their Java API, using curl, or just by writing code in your favorite language to POST. There's also boto, a Python library for AWS.
Details here http://aws.amazon.com/cloudsearch/faqs/#How_do_I_upload_documents_to_my_search_domain
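For the "write code in your favorite language" route, here is a hedged Java sketch of the batch-upload POST; the document endpoint below is a placeholder, and the request is only built, not sent. CloudSearch expects a JSON array of add/delete operations posted to the /2013-01-01/documents/batch path of the document endpoint:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class CloudSearchUpload {
    // Builds (but does not send) the batch-upload POST. The host is your
    // domain's document endpoint, taken from the AWS console dashboard.
    static HttpRequest batchUpload(String docEndpoint, String jsonBatch) {
        return HttpRequest.newBuilder(
                        URI.create("https://" + docEndpoint + "/2013-01-01/documents/batch"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonBatch))
                .build();
    }

    public static void main(String[] args) {
        // A CloudSearch batch is a JSON array of operations, each with a
        // type ("add" or "delete"), a document id, and the document fields
        String batch = "[{\"type\":\"add\",\"id\":\"doc1\",\"fields\":{\"title\":\"hello\"}}]";
        HttpRequest req = batchUpload(
                "doc-mydomain-abc123.us-east-1.cloudsearch.amazonaws.com", batch);
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Since your MapReduce job already emits a JSON string, the main work is shaping it into that add-operations array before posting.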
I am building a social web application using Java and Cassandra DB. I want some of the data from my database to be visible to search engines.
Since my application is completely dynamic and contains data only in the DB and not in static pages, how do the crawlers read this data?
1.) How can I ensure that the data stored on my servers can be seen by the search engines? My application contains user-specific data.
2.) How do the search engines access that data?
3.) How can I limit the search engines to crawling only some specific data?
Read the explanations from Google.
The search engines access your data like any other user of your website: by browsing it and clicking all the links they find. Content accessible only through AJAX will be more difficult to make accessible to search engines.
Access can be restricted using a robots.txt file. Explanations are given in the link above.
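For example, a robots.txt along these lines (the paths are hypothetical) would keep crawlers away from user-specific pages while allowing the rest:

```
# Applies to all crawlers; keep user-specific areas out of the index
User-agent: *
Disallow: /profile/
Disallow: /settings/
Allow: /
```

robots.txt must be served from the root of the site (e.g. https://example.com/robots.txt); note it is advisory, not an access control mechanism.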
1) You need to separate user-specific info from public info: either you should have public and private pages, or you could decorate your public page with user specifics through session-based Ajax calls.
Meaning: the browser just loads the public version of the page, while JavaScript loads the user specifics and injects them into the page.
2) and 3) could be solved by uploading a sitemap to Google.
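A minimal sitemap is just an XML list of the public URLs you want indexed (the URL and date below are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- List only the public pages you want crawled -->
  <url>
    <loc>https://example.com/posts/123</loc>
    <lastmod>2013-01-15</lastmod>
  </url>
</urlset>
```

Since your pages are generated from the DB, you would generate this file from the same data and submit it via Google's webmaster tools.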
Or do you want Google to talk to Cassandra directly...? Then ignore all of the above - I think.