Getting Nova Server From Metadata
Hi,
I'm using the jclouds SDK with Java to retrieve OpenStack Nova servers. I can retrieve a server by its ID, but I haven't found any other way to get a Nova server.
I saw in the OpenStack documentation that I can get a server using the API /servers/{server_id}, or I can list all the servers. But suppose I have a case where I only need the servers designated with certain data, for example all servers marked as delete-able, which I can set in the metadata when I create the server.
In this case, is there any way to filter the servers by their metadata?
Thanks
I don't think you can filter directly by server metadata, but you should be able to filter using any of the query parameters that are available when listing servers.
You can call ServerApi.list(options), passing the query parameters you want. You can build the options object using the PaginationOptions.queryParameters method.
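For example, a minimal sketch, assuming serverApi is an already-built ServerApi instance and that a "deletable" key was set in the server metadata at creation time. I use listInDetail here so the results are full Server objects including metadata (plain list returns only id/name resources); since Nova will not filter on metadata server-side, that part is done client-side:

    import com.google.common.collect.ImmutableMultimap;
    import org.jclouds.openstack.nova.v2_0.domain.Server;
    import org.jclouds.openstack.nova.v2_0.features.ServerApi;
    import org.jclouds.openstack.v2_0.options.PaginationOptions;

    // Server-side filtering: pass any query parameter Nova supports
    // when listing, e.g. "name".
    PaginationOptions options = PaginationOptions.Builder.queryParameters(
            ImmutableMultimap.of("name", "web"));
    for (Server server : serverApi.listInDetail(options)) {
        System.out.println(server.getId() + " " + server.getName());
    }

    // Client-side filtering on metadata: list in detail and check the
    // metadata map yourself, since the API will not filter on it.
    for (Server server : serverApi.listInDetail().concat()) {
        if ("true".equals(server.getMetadata().get("deletable"))) {
            System.out.println("Deletable: " + server.getId());
        }
    }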
Related
I am working on a simple use case of transferring a file from S3 to an Azure Blob Storage container using the Azure SDK in a Java-based AWS Lambda. But before getting to the file transfer, I wanted to test the connectivity itself from my Lambda, so I decided to first try to list the blobs in a container.
I am using a "Shared Access Signature" token for authenticating access to the Azure Blob Storage container. I faced a lot of challenges establishing the connection locally, but at last I was able to make a successful connection and list all the blobs in a given container.
Now, when I merged the same code into my Lambda and ran it, it gave me an authorization error, as below.
Lambda Exception Trace
Since I am new to Azure, can someone help me understand whether there is any authentication or network configuration missing to establish this connection, or am I fundamentally missing something?
Code that is working in the Eclipse IDE locally
It appears to be an authentication failure. This includes the possibility that the SAS (Shared Access Signature) token you are using to connect is missing one or more permissions needed by the BlobContainerClient to execute a particular action. The actions are: Read, Write, Delete, List, Add, Create, Process, Immutable storage, Permanent delete. There are also different types of services you can interact with: blob, file, queue, table. Finally, when the SAS token is created, it can be configured with an expiration date, a set of allowed IP addresses, a restriction to a certain protocol, and a choice of signing key. Perhaps one of these conditions is not allowing the same code to behave the same way when it is executed from two different locations?
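To isolate the failure, here is a minimal sketch of listing blobs with a SAS token using the v12 Azure SDK (the account, container, and token values are placeholders); listing requires the List permission on the token:

    import com.azure.storage.blob.BlobContainerClient;
    import com.azure.storage.blob.BlobContainerClientBuilder;
    import com.azure.storage.blob.models.BlobItem;

    public class ListBlobsWithSas {
        public static void main(String[] args) {
            // Placeholders: substitute your storage account, container, and SAS token.
            BlobContainerClient containerClient = new BlobContainerClientBuilder()
                    .endpoint("https://<account>.blob.core.windows.net")
                    .containerName("<container>")
                    .sasToken("<sas-token>")
                    .buildClient();

            // Requires the List permission on the SAS token.
            for (BlobItem blob : containerClient.listBlobs()) {
                System.out.println(blob.getName());
            }
        }
    }

If this works locally but fails in Lambda with the same token, an IP restriction on the SAS is a likely culprit, since Lambda's egress IP will differ from your machine's.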
I am trying to access AWS Elasticsearch (not a normal Elasticsearch instance hosted on some machine, but the AWS-managed version) using Java. One thing I have identified is that we have to use RestTemplate instead of the TransportClient method, since AWS ES is exposed on port 80, and to get the data we have to send a POST request with a payload.
I am able to get simple data this way, but the request doesn't accept wildcard characters. It gives me the error below:
{"type":"parse_exception","reason":"Failed to derive xcontent"}
Questions:
1. Is my understanding correct about Java hitting AWS ES on port 80? Does this mean we have to use POST instead of GET to send requests with attribute-level filtering?
2. Is there a way to pass the attribute to Elasticsearch in the URL itself? I have tried the example below, which doesn't work:
e.g. : http://helloworld.amazon.com/customer/_search?q=emailID:*abc#gmail.*
3. How do we pass a wildcard character via a Java client to AWS ES to fetch the data?
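To make question 3 concrete, here is a rough sketch of the kind of RestTemplate call I mean (the endpoint and field name are from the example above; the JSON wildcard body is just my attempt, not a confirmed solution):

    import org.springframework.http.HttpEntity;
    import org.springframework.http.HttpHeaders;
    import org.springframework.http.MediaType;
    import org.springframework.web.client.RestTemplate;

    public class WildcardSearch {
        public static void main(String[] args) {
            RestTemplate restTemplate = new RestTemplate();

            // The body must be valid JSON; "Failed to derive xcontent"
            // typically means Elasticsearch could not parse the request body.
            HttpHeaders headers = new HttpHeaders();
            headers.setContentType(MediaType.APPLICATION_JSON);
            String body = "{ \"query\": { \"wildcard\": { \"emailID\": \"*abc*\" } } }";

            String response = restTemplate.postForObject(
                    "http://helloworld.amazon.com/customer/_search",
                    new HttpEntity<>(body, headers),
                    String.class);
            System.out.println(response);
        }
    }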
I am reading data from the Google Spreadsheets API using Java. I am able to read on my local machine, and the OAuth2 URL I get is below:
https://accounts.google.com/o/oauth2/auth?access_type=offline&client_id=679281701678-iacku5po12k0if70abstnthne9ia57kg.apps.googleusercontent.com&redirect_uri=http://localhost:39740/Callback&response_type=code&scope=https://www.googleapis.com/auth/spreadsheets
My callback URL is
http://localhost:62238/Callback?code=4/k6rwrqBFTJ310Yhy9EBpIA7eH9PqL-HXwC3hi9Q0How#
However, when I deploy my WAR on the production server, I am not able to see the callback being invoked.
If anyone knows about this, please suggest to me how to integrate it on the production server.
Whenever you integrate with any OAuth-enabled Google API, you need to configure the restrictions on the Google developer console, such as authorized origins and authorized redirect URIs. I think you might have registered your local port in your authorized origins, which differs from what is running in production; that is why it can connect from your local machine but not from your production server. You can cross-check this once.
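If it helps, here is a minimal sketch using the google-api-client library of building the authorization URL with a fixed redirect URI instead of a random localhost port (the client ID and redirect URI are placeholders; the redirect URI must exactly match one registered in the console):

    import com.google.api.client.googleapis.auth.oauth2.GoogleAuthorizationCodeRequestUrl;
    import java.util.Collections;

    public class AuthUrl {
        public static void main(String[] args) {
            // Placeholders: your client ID and a redirect URI registered in the console.
            String url = new GoogleAuthorizationCodeRequestUrl(
                    "<client-id>.apps.googleusercontent.com",
                    "https://www.example.com/Callback",
                    Collections.singleton("https://www.googleapis.com/auth/spreadsheets"))
                    .setAccessType("offline")
                    .build();
            System.out.println(url);
        }
    }

The URL in your question shows redirect_uri=http://localhost:39740/Callback, which only works locally; production needs a publicly reachable redirect URI registered in the console.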
I am using IBM Security Directory Integrator, formerly known as IBM Tivoli Directory Integrator. In the feed section I have one connector that connects to a MySQL database and provides data from it. I want the data from the MySQL database to be displayed on a web page using an HTTP Server connector; however, none of the attributes in the WORK object are available as output for the HTTP Server connector. There is not much documentation on this platform, and I would like to know how to route the data from a Database connector to an HTTP Server connector, which will then display the data on a web page.
None of the Attributes in the WORK object are available to scripts in the DATA FLOW section.
You need to update your question a little to reflect exactly what you want to do. Do you:
1. Want all the data from your SQL DB to be shown on every request that reaches your HTTP Server connector?
2. Want to display a specific entry from the DB depending on some input parameter of the request that reaches the HTTP Server connector?
A little background on the feed and data flow sections
In your feed section, you would normally have a connector in iterator mode that will go through a number of data entries from a source.
In your data flow section you will have a number of connectors/functions/scripts that do transformations on the data.
Each data entry that gets returned from the iterator connector in the feed section will go through the transformations described in the data flow section. This is configurable by mapping certain data attributes (columns in a DB, fields in a CSV, attributes in LDAP, parameters in HTTP requests) to attributes in the work entry.
Normally it is up to you to do something with the transformed data, such as write it to a file, DB, or LDAP server. Again, what gets written is configurable in the output map of the connector you use, where you map attributes of the work entry to output attributes for that connector.
Now, the HTTP Server connector in server mode is a slightly more complex beast, in that it needs to send back a response to the HTTP client, so it contains both an input map and an output map. What happens is that the request is read, the data flow section is executed, and then the HTTP Server connector instance itself returns a reply to the HTTP client. This is described in detail here http://www-01.ibm.com/support/knowledgecenter/?lang=en#!/SSCQGF_7.1.0/com.ibm.IBMDI.doc_7.1/referenceguide29.htm%23serverconnect2 so I will not go into too much detail.
Your specific scenario
If assumption 1 above is correct, then SDI is probably not the best tool for this, or at least not as you plan to use it. You can have one assembly line that reads the data from the DB and then a file connector in AddOnly mode in your data flow (using an XML parser) that will append the data in a specified form to a file. Then you run this once, or periodically, and serve the static HTML/XML file via a normal HTTP server. If you absolutely need to use SDI for this, read below for assumption 2.
If assumption 2 is correct, then you have your connectors in the wrong sections. The HTTP Server connector needs to be in the feed section, as this is the connector that will be listening all the time for incoming connections and will return something in response. The exact nature/data of the response is up to you to decide via the connectors you include in the data flow section.
If you want to return a specific entry depending on the parameters of the request, then you would have a JDBC connector in lookup mode, with the link criteria built from the parameters of the incoming request to the HTTP Server connector. Otherwise, you need to read all the DB entries using the JDBC connector in lookup mode with a generic SQL query (select * from ..) and then iterate over all the entries with a for-each attribute connector.
No matter how you do it, you will end up with some information you need to return to the client. For that you will use the output map of the HTTP Server connector and map the http.body and http.status attributes.
The website is developed with JSF and Servlets.
On my website, I accept data submissions from a few restricted websites using the HTTP POST method. We exchange a secret key to ensure that the data is coming from the correct source.
But is there any way to ensure that the data is submitted only from a specific domain / IP address?
At the application level I can check
request.getHeader("Referer")
but a proxy might hide the referer. Can this configuration be done at the firewall level?
E.g., say my website is a payment gateway, integrated with www.abc.com. I want only abc.com to submit data, so a user of abc.com should be able to submit data to my website only through abc.com, and not through any other website.
You can use the ServletRequest.getRemoteAddr() method to verify the client or the last proxy.
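For example, a minimal sketch of an allow-list check in a servlet (the IP address and URL mapping are placeholders):

    import java.io.IOException;
    import java.util.Set;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet("/submit")
    public class RestrictedSubmitServlet extends HttpServlet {

        // Placeholder: the IP address(es) the partner site submits from.
        private static final Set<String> ALLOWED_IPS = Set.of("203.0.113.10");

        @Override
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            // getRemoteAddr() returns the client's address, or the last
            // proxy's address if the client sits behind one.
            if (!ALLOWED_IPS.contains(request.getRemoteAddr())) {
                response.sendError(HttpServletResponse.SC_FORBIDDEN);
                return;
            }
            // ... process the trusted submission ...
        }
    }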
If you are using an Apache server, you could use the following to specify which hosts can access your application:
Order Deny,Allow
Deny from all
Allow from xxx.xxx.xxx.xxx
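Note that the Order/Deny/Allow directives are Apache 2.2 syntax; on Apache 2.4 and later, the equivalent access control is:
Require ip xxx.xxx.xxx.xxx
Either way, this restricts the source IP of the connection, not the domain of the page the user came from.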