Find Universe Metadata Information in BO SDK R4 - Java

I am new to BO, and I need to find a universe's name and its corresponding metadata (table names, column names, join conditions, etc.). I am unable to find a proper way to start. I have looked at the Data Access SDK and the Semantic Layer SDK.
Can anyone please provide sample code or a procedure for getting started?
I have googled a lot, but I am unable to find any sample examples.
I looked into this link, but that code only works on an R2 server:
http://www.forumtopics.com/busobj/viewtopic.php?t=67088
Help is highly appreciated.

Assuming you're talking about IDT-based universes (.unx), you'll need to code some Java against the Semantic Layer SDK. The JavaDoc for the API is available here.
In a nutshell, you do something like this:
// Create the SL SDK context and get the local resource service.
SlContext context = SlContext.create();
LocalResourceService service = context.getService(LocalResourceService.class);
// Retrieve the universe into a local directory; this yields a .blx file.
String blxFile = service.retrieve("universe.unx", "output directory");
// Load the business layer and grab its root folder.
RelationalBusinessLayer businessLayer = (RelationalBusinessLayer) service.load(blxFile);
RootFolder rootFolder = businessLayer.getRootFolder();
Once you have a hook on the rootFolder, you can use the getChildren() method to drill into the folder structure and access the various subfolders/business objects available.
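For example, a minimal traversal sketch (the BlItem/Folder type names are assumptions taken from the SL SDK JavaDoc; verify them against your BI 4.x installation):
// Sketch only: recursively print the business layer tree from the root folder.
static void printTree(BlItem item, String indent) {
    System.out.println(indent + item.getName());
    if (item instanceof Folder) {
        for (BlItem child : ((Folder) item).getChildren()) {
            printTree(child, indent + "  ");
        }
    }
}
// usage: printTree(rootFolder, "");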
You may also want to check the CmsResourceService class to access universes stored in the repository.

Getting the information you are after will require a two-part solution. Part 1: use the ReBean SDK to inspect WebI reports for the universe and the object names used within them (see the sketch below).
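A rough Java sketch of Part 1 (the ReBean type and method names here are from memory; treat them as assumptions and verify against the JavaDoc for your release):
// Sketch only: open a WebI document and list the universes its data providers use.
ReportEngines engines = (ReportEngines) enterpriseSession.getService("ReportEngines");
ReportEngine wiEngine = engines.getService(ReportEngines.ReportEngineType.WI_REPORT_ENGINE);
DocumentInstance doc = wiEngine.openDocument(documentId);  // documentId = InfoObject ID
DataProviders dps = doc.getDataProviders();
for (int i = 0; i < dps.getCount(); i++) {
    DataProvider dp = dps.getItem(i);
    // getDataSource().getName() is an assumption -- check the ReBean JavaDoc.
    System.out.println(dp.getName() + " -> " + dp.getDataSource().getName());
}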
Part 2: break out your favorite COM programming tool (since I try to avoid COM, I use the Excel macro editor) and access the BusinessObjects Designer library. The main code snippet I currently have is:
' NOTE: the LogonDialog/Open calls are assumptions from the Designer COM library.
Dim boApp As New Designer.Application
Dim boUniv As Designer.Universe
Dim tbl As Designer.Table
boApp.LogonDialog                   ' prompt for CMS credentials
Set boUniv = boApp.Universes.Open   ' prompt for the universe to open
For Each tbl In boUniv.Tables
    Debug.Print tbl.Name
Next tbl
This prints all of the tables in a universe.
You will need to combine the two parts on your own to build a dependency list between WebI reports and universes.

How to create a tensorflow serving client for the 'wide and deep' model?

I've created a model based on the 'wide and deep' example (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/wide_n_deep_tutorial.py).
I've exported the model as follows:
m = build_estimator(model_dir)
m.fit(input_fn=lambda: input_fn(df_train, True), steps=FLAGS.train_steps)
results = m.evaluate(input_fn=lambda: input_fn(df_test, True), steps=1)
print('Model statistics:')
for key in sorted(results):
    print("%s: %s" % (key, results[key]))
print('Done training!!!')

# Export model
export_path = sys.argv[-1]
print('Exporting trained model to %s' % export_path)
m.export(
    export_path,
    input_fn=serving_input_fn,
    use_deprecated_input_fn=False,
    input_feature_key=INPUT_FEATURE_KEY)
My question is: how do I create a client to make predictions from this exported model? Also, have I exported the model correctly?
Ultimately I need to be able to do this in Java too. I suspect I can do that by creating Java classes from the proto files using gRPC.
The documentation is very sketchy, which is why I am asking here.
Many thanks!
I wrote a simple tutorial Exporting and Serving a TensorFlow Wide & Deep Model.
TL;DR
To export an estimator there are four steps:
Define features for export as a list of all features used during estimator initialization.
Create a feature config using create_feature_spec_for_parsing.
Build a serving_input_fn suitable for use in serving using input_fn_utils.build_parsing_serving_input_fn.
Export the model using export_savedmodel().
To run a client script properly, you need to complete the four following steps:
Create and place your script somewhere in the /serving/ folder, e.g. /serving/tensorflow_serving/example/
Create or modify corresponding BUILD file by adding a py_binary.
Build and run a model server, e.g. tensorflow_model_server.
Create, build and run a client that sends a tf.Example to our tensorflow_model_server for the inference.
For more details look at the tutorial itself.
Just spent a solid week figuring this out. First off, m.export is going to be deprecated in a couple of weeks, so instead of that block, use: m.export_savedmodel(export_path, input_fn=serving_input_fn).
This means you then have to define serving_input_fn(), which of course is supposed to have a different signature than the input_fn() defined in the wide and deep tutorial. Namely, moving forward, I guess it's recommended that input_fn()-type things return an InputFnOps object, defined here.
Here's how I figured out how to make that work:
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
from tensorflow.python.ops import array_ops
from tensorflow.python.framework import dtypes

def serving_input_fn():
    features, labels = input_fn()
    features["examples"] = tf.placeholder(tf.string)
    serialized_tf_example = array_ops.placeholder(dtype=dtypes.string,
                                                  shape=[None],
                                                  name='input_example_tensor')
    inputs = {'examples': serialized_tf_example}
    labels = None  # these are not known in serving!
    return input_fn_utils.InputFnOps(features, labels, inputs)
This is probably not 100% idiomatic, but I'm pretty sure it works. For now.
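As for the Java part of the question: below is a rough sketch of a gRPC client, assuming you have generated Java classes from the tensorflow_serving/apis and TensorFlow proto files with the protobuf and gRPC plugins. All package/class names come from that generated code and may differ between versions; the host, port, model name, and the "age" feature are placeholders.
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import org.tensorflow.example.Example;
import org.tensorflow.example.Feature;
import org.tensorflow.example.Features;
import org.tensorflow.example.FloatList;
import org.tensorflow.framework.DataType;
import org.tensorflow.framework.TensorProto;
import org.tensorflow.framework.TensorShapeProto;
import tensorflow.serving.Model;
import tensorflow.serving.Predict;
import tensorflow.serving.PredictionServiceGrpc;

public class WideDeepClient {
    public static void main(String[] args) {
        // Placeholder host/port -- point these at your tensorflow_model_server.
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 9000).usePlaintext(true).build();
        PredictionServiceGrpc.PredictionServiceBlockingStub stub =
                PredictionServiceGrpc.newBlockingStub(channel);

        // Build a tf.Example mirroring the features your input_fn() used
        // ("age" is a placeholder feature name).
        Example example = Example.newBuilder()
                .setFeatures(Features.newBuilder()
                        .putFeature("age", Feature.newBuilder()
                                .setFloatList(FloatList.newBuilder().addValue(35f))
                                .build()))
                .build();

        // Serialized tf.Examples travel as a DT_STRING tensor of shape [1],
        // fed to the 'examples' input defined by the serving_input_fn above.
        TensorProto tensor = TensorProto.newBuilder()
                .setDtype(DataType.DT_STRING)
                .setTensorShape(TensorShapeProto.newBuilder()
                        .addDim(TensorShapeProto.Dim.newBuilder().setSize(1)))
                .addStringVal(example.toByteString())
                .build();

        Predict.PredictRequest request = Predict.PredictRequest.newBuilder()
                .setModelSpec(Model.ModelSpec.newBuilder().setName("wide_deep"))
                .putInputs("examples", tensor)
                .build();

        Predict.PredictResponse response = stub.predict(request);
        System.out.println(response);
        channel.shutdownNow();
    }
}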

How to connect Protégé (OWL file) with Java

Can anyone give me at least one idea of how I can connect Java with Protégé?
How can I access OWL using the Jena API in Java?
The Jena website has plenty of tutorials available. If you have difficulties getting started, please post the code that does not work and we'll help you along.
First tutorial here
// imports for Jena 2.x (current when this was written);
// for Jena 3+ the packages start with org.apache.jena instead
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Resource;
import com.hp.hpl.jena.vocabulary.VCARD;

// some definitions
static String personURI = "http://somewhere/JohnSmith";
static String fullName = "John Smith";
// create an empty Model
Model model = ModelFactory.createDefaultModel();
// create the resource
Resource johnSmith = model.createResource(personURI);
// add the property
johnSmith.addProperty(VCARD.FN, fullName);
In order to make this work, you'll need the right imports (shown above). Assuming that the Java technicalities are not a problem, this example shows how to create a statement and add it to a model, i.e., an RDF file.
From the same page you can get to more complex material, including OWL tutorials.
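For the OWL side specifically, here's a minimal sketch (the file name is a placeholder, and you'll also need the com.hp.hpl.jena.ontology and java.util.Iterator imports):
// Load an OWL file (e.g. one saved from Protégé) into an ontology model.
OntModel ont = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
ont.read("file:myontology.owl");  // placeholder path
// List the classes the ontology defines.
for (Iterator<OntClass> it = ont.listClasses(); it.hasNext(); ) {
    System.out.println(it.next().getURI());
}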
You have not mentioned which task you're trying to carry out. Can you describe it?

How to query Custom Object in Salesforce?

OK, I am reposting this question because it is really driving me crazy.
I have the enterprise.wsdl downloaded from Salesforce and generated into some JARs.
I added those JARs to the build path of my Android project in Eclipse.
Here is my code:
ConnectorConfig config = new ConnectorConfig();
config.setAuthEndpoint(authEndPoint);
config.setUsername(userID);
config.setPassword(password + securityToken);
config.setCompression(true);
con = new EnterpriseConnection(config);
con.setSessionHeader(UserPreference.getSessionID(mContext));
String sql = "SELECT something FROM myNameSpace__myCustomObject__c";
con.query(sql);
but it returns me this error:
[InvalidSObjectFault [ApiQueryFault [ApiFault
exceptionCode='INVALID_TYPE' exceptionMessage='sObject type 'abc__c'
is not supported.'] row='-1' column='-1' ]]
I am pretty sure that my userID has been assigned with profile that has read, edit access to that custom object.
My code can also query standard objects.
Anyone can advise me what could be wrong?
From what I know, there are three reasons it may give this error:
1. User permissions, which you said are set up correctly.
2. Do you have the custom object deployed to the org where you are trying to establish the connection?
3. Check whether the enterprise WSDL contains the custom object name you are trying to query; if it doesn't, regenerate the WSDL (and your JARs) after deploying the object.
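To check points 2 and 3 from code, you can ask the API which sObjects the session actually sees, using the same EnterpriseConnection (con) as in the question; a sketch:
// List every sObject visible to this session; your custom object's full
// API name (including namespace) should appear here if it is deployed.
DescribeGlobalResult dgr = con.describeGlobal();
for (DescribeGlobalSObjectResult s : dgr.getSobjects()) {
    if (s.getName().endsWith("__c")) {
        System.out.println(s.getName());
    }
}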
Hope it helps.

JPivot Display Mondrian Result

I am trying to display the result of a Mondrian query using JPivot. Many examples show how to use the tag library from JSP, but I need to use the Java API. I looked at the documentation, but I cannot understand how to use it to display the results in a table. Here is my code:
Query query = connection.parseQuery(mdxQuery);
Result result = connection.execute(query);
result.print(new PrintWriter(System.out,true));
I would like to know if I can use the result object to build the jpivot table.
Thanks in advance!
First of all, using JPivot is a pretty bad idea: it was discontinued back in 2008.
There is a good project intended to replace JPivot, called Pivot4j. Although it is still under development (0.8 -> 0.9 version), Pivot4j can actually do the business.
However, to deal with your case:
result.print(new PrintWriter(System.out,true));
This line prints the HTML code for the OLAP cube to your System.out.
You can write the HTML to some other output stream (such as a FileOutputStream) and then display it:
OutputStream out = new FileOutputStream("result.html");
result.print(new PrintWriter(out, true));
//then display this file in a browser
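If you only need the raw cells to build your own table, you can also walk the Result object directly; a sketch for a two-axis (columns x rows) query, using the mondrian.olap types:
// Iterate a two-axis mondrian.olap.Result: axis 0 = columns, axis 1 = rows.
Axis[] axes = result.getAxes();
List<Position> columns = axes[0].getPositions();
List<Position> rows = axes[1].getPositions();
for (int r = 0; r < rows.size(); r++) {
    for (Member m : rows.get(r)) {
        System.out.print(m.getCaption() + "\t");  // row header(s)
    }
    for (int c = 0; c < columns.size(); c++) {
        Cell cell = result.getCell(new int[]{c, r});  // coordinates: {column, row}
        System.out.print(cell.getFormattedValue() + "\t");
    }
    System.out.println();
}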
However, if you want to have the same interface as in JPivot, I don't think there is an easy way to do it without .jsp. In that case I strongly recommend you try Pivot4j.
Good luck!

Required Java API to generate Database ER diagram

Is there any Java API/plugin which can generate a database ER diagram when a Java Connection object is provided as input?
Ex: InputStream generateDatabaseERDiagram(Connection connection) // where the InputStream will point to the generated ER diagram image
The API should work with Oracle, MySQL, and PostgreSQL.
I was going through the SchemaCrawler tool (http://schemacrawler.sourceforge.net/) but didn't find any API which could do this.
If no such API exists, let me know how I can write my own. I want to generate an ER diagram for all the schemas in a database, or for a specific schema if the schema name is provided as input.
It would be helpful if you could shed some light on how to achieve this task.
If I understood your question correctly, you might take a look at JGraph.
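JGraph only draws, though; you'd pair it with plain JDBC metadata. A rough sketch using the JGraphX (com.mxgraph) API, with tables as vertices and foreign keys as edges (layout and image export are left out, and the method name is mine):
import com.mxgraph.view.mxGraph;
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

// Sketch: build a graph of tables (vertices) and foreign keys (edges)
// from JDBC metadata.
static mxGraph buildErGraph(Connection conn) throws SQLException {
    mxGraph graph = new mxGraph();
    Object parent = graph.getDefaultParent();
    graph.getModel().beginUpdate();
    try {
        DatabaseMetaData md = conn.getMetaData();
        Map<String, Object> vertices = new HashMap<String, Object>();
        ResultSet tables = md.getTables(null, null, "%", new String[] {"TABLE"});
        while (tables.next()) {
            String name = tables.getString("TABLE_NAME");
            vertices.put(name, graph.insertVertex(parent, null, name, 0, 0, 120, 30));
        }
        for (String table : vertices.keySet()) {
            ResultSet fks = md.getImportedKeys(null, null, table);
            while (fks.next()) {
                String pkTable = fks.getString("PKTABLE_NAME");
                if (vertices.containsKey(pkTable)) {
                    graph.insertEdge(parent, null, "", vertices.get(table), vertices.get(pkTable));
                }
            }
        }
    } finally {
        graph.getModel().endUpdate();
    }
    return graph;
}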
This is an old question, but in case anyone else stumbles across it as I did when trying to do the same thing: I eventually figured out how to generate the ERD using SchemaCrawler's Java API.
//Get your java connection however
Connection conn = DriverManager.getConnection("DATABASE URL");
SchemaCrawlerOptions options = new SchemaCrawlerOptions();
// Set what details are required in the schema - this affects the
// time taken to crawl the schema
options.setSchemaInfoLevel(SchemaInfoLevelBuilder.standard());
// you can exclude/include objects using the options object e.g.
//options.setTableInclusionRule(new RegularExpressionExclusionRule(".*qrtz.*||.*databasechangelog.*"));
GraphExecutable ge = new GraphExecutable();
ge.setSchemaCrawlerOptions(options);
String outputFormatValue = GraphOutputFormat.png.getFormat();
OutputOptions outputOptions = new OutputOptions(outputFormatValue, new File("database.png").toPath());
ge.setOutputOptions(outputOptions);
ge.execute(conn);
This still requires Graphviz to be installed and on the PATH to work.
