I am using IBM Security Directory Integrator (formerly IBM Tivoli Directory Integrator). In the feed section I have one connector that connects to a MySQL database and provides data from it. I want the data from the MySQL database to be displayed on a web page using an HTTP Server connector; however, none of the attributes in the WORK object are available as output for the HTTP Server connector. There is not much documentation on this platform, and I would like to know how to route the data from a database connector to an HTTP Server connector, which will then display the data on a web page.
None of the attributes in the WORK object are available to scripts in the data flow section.
You need to update your question a little to reflect exactly what you want to do. Do you want to:
1. Show all the data from your SQL DB on every request that reaches your HTTP Server connector, or
2. Display a specific entry from the DB depending on some input parameter of the request that reaches the HTTP Server connector?
A little background on the feed and data flow sections
In your feed section, you would normally have a connector in Iterator mode that goes through a number of data entries from a source.
In your data flow section, you have a number of connectors/functions/scripts that perform transformations on the data.
Each data entry returned by the Iterator connector in the feed section goes through the transformations described in the data flow section. This is configurable by mapping certain data attributes (columns in a DB, fields in a CSV, attributes in LDAP, parameters in HTTP requests) to attributes in the work entry.
Normally it is up to you to do something with the transformed data, such as writing it to a file, a DB, or an LDAP server. Again, what gets written is configurable in the output map of the connector you use, where you map attributes of the work entry to output attributes for that connector.
Now, the HTTP Server connector in Server mode is a slightly more complex beast: because it needs to send a response back to the HTTP client, it contains both an input map and an output map. What happens is that the request is read, the data flow section is executed, and then the HTTP Server connector instance itself returns a reply to the HTTP client. This is described in detail here http://www-01.ibm.com/support/knowledgecenter/?lang=en#!/SSCQGF_7.1.0/com.ibm.IBMDI.doc_7.1/referenceguide29.htm%23serverconnect2 so I will not go into much detail.
Your specific scenario
If assumption 1 above is correct, then SDI is probably not the best tool for this, or at least not the way you plan to use it. You can have one assembly line that reads the data from the DB and then a file connector in AddOnly mode in your data flow (using an XML parser) that appends the data in a specified form to a file. You then run this once, or periodically, and serve the static HTML/XML file via a normal HTTP server. If you ABSOLUTELY need to use SDI for this, read the answer for assumption 2 below.
If assumption 2 is correct, then you have your connectors in the wrong sections. The HTTP Server connector needs to be in the feed section, as this is the connector that listens all the time for incoming connections and returns something in response. The exact nature/data of the response is up to you to decide via the connectors you include in the data flow section.
If you want to return a specific entry depending on the parameters of the request, you would use a JDBC connector in Lookup mode, with the link criteria built from the parameters of the incoming request in the HTTP Server connector. Otherwise, you need to read all the DB entries using the JDBC connector in Lookup mode with a generic SQL query (select * from ..) and then iterate over all the entries with a for-each attribute connector.
No matter how you do it, you will end up with some information you need to return to the client. For that, use the output map of the HTTP Server connector and map the http.body and http.status attributes.
I am working on a simple use case of transferring a file from S3 to an Azure Blob Storage container using the Azure SDK in a Java-based AWS Lambda. Before getting to the file transfer, I wanted to test the connectivity itself from my Lambda, so I decided to first try to list the blobs in a container.
I am using a Shared Access Signature (SAS) token for authenticating access to the Azure Blob Storage container. I faced a lot of challenges establishing the connection locally, but in the end I was able to connect successfully and list all the blobs in a given container.
Now, when I merged the same code into my Lambda and ran it, it gives me an authorization error as below.
Lambda Exception Trace
Since I am new to Azure, can someone help me understand whether there is some authentication or network configuration missing to establish this connection, or am I fundamentally missing something?
Code that is working in the Eclipse IDE locally
It appears to be an authentication failure. This includes the possibility that the SAS (Shared Access Signature) token you are using to connect is missing one or more permissions needed for a particular action performed by the BlobContainerClient. The possible permissions are: Read, Write, Delete, List, Add, Create, Process, Immutable storage, Permanent delete. There are also different types of services you can interact with: blob, file, queue, table. Finally, when the SAS token is created, it can be configured with an expiration date, a set of allowed IP addresses, a restriction to a certain protocol, and a choice of signing key. Perhaps one of these conditions is preventing the same code from behaving the same way when it is executed from two different locations?
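For reference, here is a minimal sketch of listing blobs with a SAS token, assuming the v12 azure-storage-blob client; the account, container, and token values below are placeholders, and the token is assumed to grant at least the Read and List permissions:

import com.azure.storage.blob.BlobContainerClient;
import com.azure.storage.blob.BlobContainerClientBuilder;

public class ListBlobsWithSas {
    public static void main(String[] args) {
        // Hypothetical account, container, and SAS token; substitute your own.
        BlobContainerClient containerClient = new BlobContainerClientBuilder()
                .endpoint("https://myaccount.blob.core.windows.net")
                .containerName("mycontainer")
                .sasToken("sv=...&sp=rl&sig=...") // needs at least Read + List
                .buildClient();

        // An authorization failure here usually means the token lacks a
        // permission, has expired, or the caller's IP is outside the
        // token's allowed range.
        containerClient.listBlobs().forEach(item -> System.out.println(item.getName()));
    }
}

If this works locally but fails in Lambda, it is worth comparing the token's allowed IP range and its start/expiry times against the Lambda environment, since the egress IPs will differ between the two locations.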
Getting Nova Server From Metadata
Hi,
I'm using the jclouds SDK with Java to retrieve OpenStack Nova servers. I can retrieve a server through its id, but I didn't find any other way to get a Nova server.
I saw in the OpenStack documentation that I can get a server using the API /servers/{server_id}, or I can list all the servers. But assume I have a case where I only need the servers designated with certain data. For example, I need to list all servers marked as delete-able, which I can set in the metadata when I create the server.
In this case, is there any way to apply some sort of filtering on the servers' metadata?
Thanks
I don't think you can filter directly by the server metadata, but you should be able to filter using any of the query parameters that are available when listing servers.
You can just call ServerApi.list(options), passing the query parameters you want. You can build the options object using the PaginationOptions.queryParameters method.
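A rough sketch of what that could look like; the endpoint, credentials, region, and the "name" filter are all placeholder values, and the exact accessor names can vary between jclouds versions:

import com.google.common.collect.ImmutableMultimap;
import org.jclouds.ContextBuilder;
import org.jclouds.openstack.nova.v2_0.NovaApi;
import org.jclouds.openstack.nova.v2_0.features.ServerApi;
import org.jclouds.openstack.v2_0.options.PaginationOptions;

public class ListServersFiltered {
    public static void main(String[] args) {
        // Hypothetical Keystone endpoint and credentials.
        NovaApi novaApi = ContextBuilder.newBuilder("openstack-nova")
                .endpoint("https://keystone.example.com:5000/v2.0/")
                .credentials("demo:demo", "password")
                .buildApi(NovaApi.class);

        ServerApi serverApi = novaApi.getServerApi("RegionOne");

        // Filter by a query parameter the Nova list API supports (e.g. "name");
        // metadata itself is not a supported filter, as noted above.
        PaginationOptions options = PaginationOptions.Builder
                .queryParameters(ImmutableMultimap.of("name", "web"));

        serverApi.list(options).forEach(server -> System.out.println(server.getName()));
    }
}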
I am downloading gigabytes of data (a large number of small files) at a time and would like to optimize download times by using HTTP requests instead of HTTPS, which is slower, especially when the handshake overhead is repeated thousands of times, once per transfer.
What is the default request protocol for the Java AWS SDK and how can I set it to HTTP?
When constructing a client object (for example AmazonEC2Client), you can pass in an optional com.amazonaws.ClientConfiguration object to customize the client's configuration.
Use the following constructor:
AmazonEC2Client(AWSCredentials awsCredentials, ClientConfiguration clientConfiguration)
Read more here.
Now, while creating the ClientConfiguration object, you can use setProtocol() to set the protocol to HTTP or HTTPS; the default is HTTPS. The client created with that configuration will then use that protocol. See here.
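A minimal sketch putting this together with the v1 Java SDK; the credentials below are placeholders:

import com.amazonaws.ClientConfiguration;
import com.amazonaws.Protocol;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.ec2.AmazonEC2Client;

public class HttpProtocolExample {
    public static void main(String[] args) {
        // The SDK defaults to Protocol.HTTPS; switch to plain HTTP explicitly.
        ClientConfiguration config = new ClientConfiguration();
        config.setProtocol(Protocol.HTTP);

        // Hypothetical credentials; use a real credential provider in practice.
        AmazonEC2Client ec2 = new AmazonEC2Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"), config);

        // All requests made through this client now go over HTTP.
        System.out.println(ec2.describeRegions().getRegions());
    }
}

Keep in mind that plain HTTP sends the payload unencrypted, so this trade-off only makes sense for non-sensitive data.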
I'm sending JSON data from my Java application to my local web server, where a PHP script receives the message. As far as I know, I can currently only view what has been received by, for example, inserting the data into a database. Is there a way/application to view the live POST requests sent to my PHP web server?
I like to use Fiddler for these kinds of tasks, provided the Java HTTP library supports proxies. Fiddler will list all available information about the HTTP requests. By default it logs all HTTP requests across your system, but it can be told to limit itself to one application.
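For the standard java.net HTTP stack, pointing the JVM at Fiddler is just a couple of system properties; a sketch, assuming Fiddler listens on its default port 8888 and a hypothetical receive.php endpoint:

public class FiddlerProxySetup {
    public static void main(String[] args) throws Exception {
        // Route the JVM's HTTP traffic through a locally running Fiddler instance.
        // 8888 is Fiddler's default listening port; adjust if you changed it.
        System.setProperty("http.proxyHost", "127.0.0.1");
        System.setProperty("http.proxyPort", "8888");
        // By default the JVM bypasses the proxy for localhost, so clear the
        // exclusion list if your PHP server runs on the same machine.
        System.setProperty("http.nonProxyHosts", "");

        // Any HttpURLConnection made after this point shows up in Fiddler.
        java.net.HttpURLConnection conn = (java.net.HttpURLConnection)
                new java.net.URL("http://localhost/receive.php").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.getOutputStream().write("{\"hello\":\"world\"}".getBytes("UTF-8"));
        System.out.println("HTTP " + conn.getResponseCode());
    }
}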
You can try setting your httpd logging level to verbose, or (depending on which httpd it is) use an extension that logs all the data sent in requests.
For debugging purposes why not just write the POST data to a file?
i.e.
file_put_contents(<some filename>, file_get_contents('php://input'));
(php://input is the portable way to read the raw request body; the older $HTTP_RAW_POST_DATA variable is deprecated and was removed in PHP 7.)
I am currently investigating possible solutions for flexible load balancing of a streaming application. I will have several nodes running the application that process and stream content to users. Documents are spread among the servers such that a single machine is responsible for serving requests for the set of documents it hosts.
Thus, I need some kind of broker that knows on which server a requested document resides and forwards the request to that server. I do NOT want to open an HttpConnection from the broker and then copy the response of that HTTP call into the response of the original request (this would obviously be a bottleneck).
So my question is: how can I forward a request to another server after having analyzed the request's data (POST/GET parameters, headers, or whatever) and determined the destination server?
Is there already a good load balancer out there that lets me use some form of hook to provide the logic for determining the destination server (round robin will obviously not work)?
I want to use Tomcat or something similar to host the streaming application and use only the fundamental servlet stack. Any hints to tools or patterns are appreciated.
Thanks
One option could be looking at Layer 7 switching or URL load-balancing hardware.
There are several load balancers that have well-defined node-selection mechanisms based on request parameters such as:
headers
request uri
query parameters
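If you want to stay entirely within the servlet stack mentioned in the question, one pattern worth sketching (my own suggestion, not a feature of any particular balancer) is a broker servlet that answers with an HTTP redirect, so the client fetches the stream directly from the owning node and no content passes through the broker. The servlet, routing logic, parameter names, and node addresses below are all hypothetical:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical broker servlet: it looks up which node hosts the requested
// document and redirects the client there.
public class BrokerServlet extends HttpServlet {

    // Placeholder routing logic; in practice this would be a lookup table,
    // consistent-hash ring, or registry of document-to-node assignments.
    private String nodeFor(String documentId) {
        return "http://node1.example.com:8080"; // hypothetical node address
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String documentId = req.getParameter("doc");
        if (documentId == null) {
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "missing doc parameter");
            return;
        }
        // 302 redirect: the client re-issues the request directly against the
        // owning node, so the broker never carries the streamed content.
        resp.sendRedirect(nodeFor(documentId) + "/stream?doc=" + documentId);
    }
}

The trade-off is one extra round trip per request and node addresses that are visible to clients, but the broker itself never becomes a data bottleneck.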