I want to fetch the history of file elements (PDF files, DOC files, etc.) that are under ClearCase control, using the Rational CM API provided with ClearCase. I have written the following code to fetch the history, but it is incomplete, so please help me out here.
public void fetchFileElementHistory()
{
try
{
CcFile fetchElement = provider.ccFile(provider.filePathLocation(testFile)); // file under Clearcase control
PropertyRequest wantedProps = new PropertyRequest(CcFile.DISPLAY_NAME, CcFile.CREATION_DATE,CcFile.VIEW_RELATIVE_PATH,CcFile.CLIENT_PATH,CcFile.VERSION_HISTORY,CcFile.PREDECESSOR_LIST,CcFile.ELEMENT);
fetchElement = (CcFile) fetchElement.doReadProperties(wantedProps);
VersionHistory versionHistory = fetchElement.getVersionHistory();
versionHistory = (VersionHistory) versionHistory.doReadProperties(new PropertyRequest(VersionHistory.CHILD_LIST,VersionHistory.ROOT_VERSION,
VersionHistory.CHILD_MAP,VersionHistory.PARENT_LIST,VersionHistory.PROVIDER_LIST,VersionHistory.WORKSPACE_FOLDER_LIST));
/*
* what to do here ?
*/
}
catch(Exception e){
e.printStackTrace();
}
}
Thanks in advance
The official documentation for CM API 7.1.x.
Make sure you have selected the "CM Library Samples and Documentation" feature under the Client Components section of the install, in order to get the code examples included with the javadoc.
From the object model overview, check whether collections apply to your case.
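For illustration only, here is a rough sketch of what could go in the "what to do here?" spot, iterating over the versions of the element's version history. It assumes WVCM-style accessors (getChildMap() on the version history, getVersionName()/getCreationDate() on each Version) that correspond to the properties requested above; verify the exact names against the javadoc shipped with the CM Library samples.
// Sketch only: walk the versions of the element's version history.
// Assumes javax.wvcm-style getters; check the CM API javadoc for the exact names.
Map<String, Resource> children = versionHistory.getChildMap();
for (Map.Entry<String, Resource> entry : children.entrySet()) {
    Version version = (Version) entry.getValue();
    version = (Version) version.doReadProperties(new PropertyRequest(
            Version.VERSION_NAME, Version.CREATION_DATE, Version.COMMENT));
    System.out.println(entry.getKey() + " -> " + version.getVersionName()
            + " created " + version.getCreationDate());
}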
Using this link: Method: locations.updateAttributes, I need to call updateAttributes for every location I have in the Google My Business Information API.
I cannot find any code example. Would anyone please provide some code?
Thanks a million
I get that Attributes vs List<Attribute> might be confusing.
This should help:
public void updateAttributes(String attributePath, List<Attribute> attributes,
        String attributeMask) throws IOException {
    // Wrap the attribute list in an Attributes resource and set the attribute mask.
    UpdateAttributes request = myBusinessBusinessInformation.locations()
            .updateAttributes(attributePath, new Attributes().setAttributes(attributes))
            .setAttributeMask(attributeMask);
    request.execute();
}
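For example, here is a hypothetical call pushing a menu-URL attribute to one location. The location ID, attribute name, and model setters are placeholders taken from the REST resource, so double-check them against the generated model classes in your client library:
// Hypothetical usage; "locations/123456789/attributes" and "attributes/url_menu" are placeholders.
Attribute menuUrl = new Attribute()
        .setName("attributes/url_menu")
        .setValueType("URL")
        .setUriValues(Collections.singletonList(
                new UriAttributeValue().setUri("https://example.com/menu")));

updateAttributes("locations/123456789/attributes",
        Collections.singletonList(menuUrl),
        "attributes/url_menu");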
I'm looking for a way to get model metadata from all currently active models on TensorFlow Serving in Java (Maven).
I have some working code for retrieving metadata from a specific model and version, so if it were possible to get a list of all model names and versions through gRPC (or an API), that would be great. Working code using tensorflow-client (com.yesup.oss):
static ManagedChannel channel = ManagedChannelBuilder.forAddress(TF_SERVICE_HOST, TF_SERVICE_PORT)
.usePlaintext(true).build();
static PredictionServiceGrpc.PredictionServiceBlockingStub stub = PredictionServiceGrpc.newBlockingStub(channel);
public static void getMetadata(String model, Integer version) {
System.out.println("Create request");
GetModelMetadataRequest request = GetModelMetadataRequest.newBuilder()
.setModelSpec(ModelSpec.newBuilder()
.setName(model)
.setSignatureName("serving_default")
.setVersion(Int64Value.newBuilder().setValue(version))
)
.addMetadataField("signature_def")
.build();
System.out.println("Collecting metadata...");
GetModelMetadataResponse response = stub.getModelMetadata(request);
System.out.println("Done");
try {
SignatureDefMap sdef = SignatureDefMap.parseFrom(
response.getMetadataMap().get("signature_def").getValue());
System.out.println( sdef);
} catch (InvalidProtocolBufferException e1) {
e1.printStackTrace();
}
}
Own thoughts
I have thought about a couple of solutions; however, none of them is preferable.
Create a server on the same device running TensorFlow Serving that can share the content of the TensorFlow Serving config file. The config file contains model names and versions, but we will not know whether they are currently active (see the sketch after this list).
Use Jython or Python to access other libraries (tensorflow-serving-api), which seem to contain "list-all-model-names" and "retriveConfig" functionality.
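For the "are they currently active" part of the first option, TensorFlow Serving also exposes a ModelService with a GetModelStatus RPC. Assuming the client library also ships the stubs generated from model_service.proto and get_model_status.proto (an assumption I have not verified for com.yesup.oss), a per-model check could look roughly like this, reusing the channel from the code above:
// Sketch: ask TensorFlow Serving which versions of a named model are loaded.
// Assumes ModelServiceGrpc stubs generated from model_service.proto are on the classpath.
static ModelServiceGrpc.ModelServiceBlockingStub modelStub =
        ModelServiceGrpc.newBlockingStub(channel);

public static void printModelStatus(String model) {
    GetModelStatusRequest request = GetModelStatusRequest.newBuilder()
            .setModelSpec(ModelSpec.newBuilder().setName(model))
            .build();
    GetModelStatusResponse response = modelStub.getModelStatus(request);
    for (ModelVersionStatus status : response.getModelVersionStatusList()) {
        // State AVAILABLE means that version is currently being served.
        System.out.println(model + " v" + status.getVersion() + " : " + status.getState());
    }
}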
Any advice is appreciated, thanks in advance!
I have this code that works in Python:
X = numpy.loadtxt("compiledFeatures.csv", delimiter=",")
model = load_model("kerasnaive.h5")
predictions = model.predict(X)
print(predictions);
and I am trying to write code with the same functionality in Java.
I have written the code below, but it does not work; does anyone know what I am doing wrong, or is there another, simpler way to do it?
The code goes into the catch block, and while debugging it seems that all the information gained from the model file is null.
path = String.format("%s\\kerasnaive.h5", System.getProperty("user.dir"),
pAgents[i]);
try {
network = KerasModelImport.importKerasModelAndWeights(path, false);
}
catch (Exception e){
System.out.println("cannot build keras layers");
}
INDArray input = Nd4j.create(1);
input.add(featuresInput); //an NDarray that i got in the method
INDArray output = network[i].outputSingle(input);
It seems that the model is not built (the network is still null).
The Python code loads the model and it works;
in Java I get the error: "Could not determine number of outputs for layer: no output_dim or nb_filter field found. For more information, see http://deeplearning4j.org/model-import-keras."
although the same file is used in both cases.
Thanks,
Ori
You are currently importing the trained Keras model using importKerasModelAndWeights. I'm not sure how you trained your model, but in Keras there are two types of models available: the Sequential model, and the Model class that uses the functional API. You can read more here.
If you used the Sequential model when you created the network, you need to use the importKerasSequentialModel function instead. Keras Sequential models.
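A minimal sketch, assuming the .h5 file in the question really was saved from a Keras Sequential model, and reusing the path and featuresInput variables from the question:
// Sketch: import a Keras Sequential model (config + weights) as a MultiLayerNetwork.
MultiLayerNetwork network;
try {
    network = KerasModelImport.importKerasSequentialModelAndWeights(path, false);
} catch (Exception e) {
    System.out.println("cannot build keras layers");
    e.printStackTrace();
    return;
}

// featuresInput is the INDArray already built in the method; there is no need to
// create an empty array and add to it.
INDArray output = network.output(featuresInput);
System.out.println(output);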
I am trying to implement a Google Webmasters Searchanalytics query using the Java API, but I did not find any Java sample to use. On the Google website here there are only Python samples for Searchanalytics Query, and they do not say that it is unavailable in the Java API.
I found the class Webmasters.Searchanalytics.Query in the Java API, which I assume is equivalent to the Python function searchanalytics.query(), but I did not find any implementation of it.
My question is: is it possible to query data from Google Search Console using the Java API?
If yes, I wonder if someone could provide a Java sample, something like the Python sample provided by Google here.
Thank you in advance.
I succeeded in implementing Webmasters.Searchanalytics.Query
as follows.
First you need to create your query request using the SearchAnalyticsQueryRequest class, for example:
private static SearchAnalyticsQueryRequest createSearchAnalyticsQueryRequest() {
SearchAnalyticsQueryRequest searQueryRequest = new SearchAnalyticsQueryRequest();
searQueryRequest.setStartDate("2016-04-10");
searQueryRequest.setEndDate("2016-04-20");
List<String> dimensions = new ArrayList<String>();
dimensions.add("page");
dimensions.add("query");
dimensions.add("country");
dimensions.add("device");
dimensions.add("date");
searQueryRequest.setDimensions(dimensions);
return searQueryRequest;
}
Then execute the query as follows:
public static String Query(String site,
SearchAnalyticsQueryRequest searQueryRequest) throws Exception {
Webmasters.Searchanalytics.Query query = service.searchanalytics()
.query(site, searQueryRequest);
SearchAnalyticsQueryResponse queryResponse = query.execute();
return queryResponse.toPrettyString();
}
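For completeness, a hypothetical call tying the two together; the site URL is a placeholder, and service is assumed to be an authorized Webmasters client built elsewhere:
// Hypothetical usage; the site must be one verified in your Search Console account.
String json = Query("https://www.example.com/", createSearchAnalyticsQueryRequest());
System.out.println(json);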
I think you missed it, here. Actually, all you need to do is click the Java link on the left.
Is there any Java API / Java plugin that can generate a database ER diagram when a Java Connection object is provided as input?
Ex: InputStream generateDatabaseERDiagram(Connection connection) // where the InputStream will point to the generated ER diagram image
The API should work with Oracle, MySQL, and PostgreSQL.
I was going through the SchemaCrawler tool (http://schemacrawler.sourceforge.net/) but did not find any API which could do this.
If no such API exists, then let me know how I can write my own. I want to generate an ER diagram for all the schemas in a database, or for a specific schema if the schema name is provided as input.
It would be helpful if you could shed some light on how to achieve this task.
If I understood your question correctly, you might take a look at: JGraph
This is an old question, but in case anyone else stumbles across it as I did when trying to do the same thing: I eventually figured out how to generate the ERD using SchemaCrawler's Java API.
//Get your java connection however
Connection conn = DriverManager.getConnection("DATABASE URL");
SchemaCrawlerOptions options = new SchemaCrawlerOptions();
// Set what details are required in the schema - this affects the
// time taken to crawl the schema
options.setSchemaInfoLevel(SchemaInfoLevelBuilder.standard());
// you can exclude/include objects using the options object e.g.
//options.setTableInclusionRule(new RegularExpressionExclusionRule(".*qrtz.*||.*databasechangelog.*"));
GraphExecutable ge = new GraphExecutable();
ge.setSchemaCrawlerOptions(options);
String outputFormatValue = GraphOutputFormat.png.getFormat();
OutputOptions outputOptions = new OutputOptions(outputFormatValue, new File("database.png").toPath());
ge.setOutputOptions(outputOptions);
ge.execute(conn);
This still requires graphviz to be installed and on the path to work.