I am trying to create a basic Hello-World application in OpenDaylight. I created the app by following the steps from https://docs.opendaylight.org/en/stable-magnesium/developer-guide/developing-apps-on-the-opendaylight-controller.html.
I was able to run the basic Hello-World RPC as in the tutorial, and am trying to extend the app so that it can return the NETCONF nodes present at /restconf/operational/network-topology:network-topology/topology/topology-netconf/. For this, I am referring to the NCMount example from https://github.com/opendaylight/coretutorials/tree/master/ncmount.
While running the Maven build, I am getting the following error:
/home/ubuntu/ghost/odlapps/1.3.0-SS/hello/impl/src/main/java/org/opendaylight/hello/impl/HelloWorldImpl.java:[58,21] cannot find symbol
[ERROR] symbol: method getNode()
[ERROR] location: class java.util.Optional<org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.network.topology.Topology>
My HelloWorldImpl.java looks like:
package org.opendaylight.hello.impl;
import com.google.common.util.concurrent.ListenableFuture;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.ArrayList;
import java.util.List;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev191127.HelloService;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev191127.HelloWorldInput;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev191127.HelloWorldOutput;
import org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.hello.rev191127.HelloWorldOutputBuilder;
import org.opendaylight.yangtools.yang.common.RpcResult;
import org.opendaylight.yangtools.yang.common.RpcResultBuilder;
import org.opendaylight.yangtools.yang.binding.InstanceIdentifier;
import org.opendaylight.yangtools.yang.common.QName;
import org.opendaylight.mdsal.binding.api.DataBroker;
import org.opendaylight.mdsal.binding.api.ReadTransaction;
import org.opendaylight.mdsal.common.api.LogicalDatastoreType;
import org.opendaylight.mdsal.common.api.ReadFailedException;
import org.opendaylight.yang.gen.v1.urn.opendaylight.netconf.node.topology.rev150114.NetconfNode;
import org.opendaylight.yang.gen.v1.urn.opendaylight.netconf.node.topology.rev150114.NetconfNodeConnectionStatus.ConnectionStatus;
import org.opendaylight.yang.gen.v1.urn.opendaylight.netconf.node.topology.rev150114.network.topology.topology.topology.types.TopologyNetconf;
import org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.NetworkTopology;
import org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.NodeId;
import org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.TopologyId;
import org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.network.topology.Topology;
import org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.network.topology.TopologyKey;
import org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.network.topology.topology.Node;
import org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.network.topology.topology.NodeKey;
public class HelloWorldImpl implements HelloService {
private static final Logger LOG = LoggerFactory.getLogger(HelloWorldImpl.class);
private final DataBroker dataBroker;
public static final InstanceIdentifier<Topology> NETCONF_TOPO_IID =
InstanceIdentifier
.create(NetworkTopology.class)
.child(Topology.class,
new TopologyKey(new TopologyId(TopologyNetconf.QNAME.getLocalName())));
@Override
public ListenableFuture<RpcResult<HelloWorldOutput>> helloWorld(HelloWorldInput input) {
HelloWorldOutputBuilder helloBuilder = new HelloWorldOutputBuilder();
ReadTransaction tx = dataBroker.newReadOnlyTransaction();
List<Node> nodes;
// Get All nodes from Operational Database
try {
nodes = tx.read(LogicalDatastoreType.OPERATIONAL, NETCONF_TOPO_IID)
// .checkedGet()
.get()
.getNode();
} catch (ReadFailedException e) {
LOG.error("Failed to read node config from datastore", e);
throw new IllegalStateException(e);
}
List<String> results = new ArrayList<>();
for (Node node : nodes) {
LOG.info("Node: {}", node);
NetconfNode nnode = node.augmentation(NetconfNode.class);
if (nnode != null) {
// We have a device
ConnectionStatus csts = nnode.getConnectionStatus();
}
results.add(node.getNodeId().getValue());
}
helloBuilder.setGreeting(results);
return RpcResultBuilder.success(helloBuilder.build()).buildFuture();
}
}
I am not sure what org.opendaylight.yang.gen.v1.urn.tbd.params.xml.ns.yang.network.topology.rev131021.network.topology.Topology returns, because I don't seem to be able to find the apidocs for this package anywhere. I was getting the same error for checkedGet(), so I commented it out to see if it helps.
So,
Is there any documentation/javadocs that I can refer to understand the Optional that is returned after performing READ on the operational datastore?
Are there examples based on the current archetypes, since NCMount is based on pretty old versions of OpenDaylight?
Any help is greatly appreciated. Thanks!
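For reference, the Optional in the error message is a plain java.util.Optional wrapping the Topology read from the datastore; getNode() lives on the Topology *inside* it, so the Optional has to be unwrapped first. A minimal plain-Java sketch of the pattern (the Topology class here is a hypothetical stand-in for the binding-generated one, with getNode() returning a list of node ids for illustration):

```java
import java.util.Collections;
import java.util.List;
import java.util.Optional;

public class OptionalReadSketch {
    // Stand-in for the binding-generated Topology class; hypothetical.
    static class Topology {
        List<String> getNode() {
            return Collections.singletonList("node-1");
        }
    }

    // Unwrap the Optional before calling getNode();
    // an empty read result yields an empty list instead of an exception.
    static List<String> readNodes(Optional<Topology> readResult) {
        return readResult.map(Topology::getNode).orElse(Collections.emptyList());
    }

    public static void main(String[] args) {
        System.out.println(readNodes(Optional.of(new Topology()))); // [node-1]
        System.out.println(readNodes(Optional.empty()));            // []
    }
}
```

With the real binding classes the shape is the same: the future returned by tx.read(...) is unwrapped with get(), which yields the Optional, and then map(...).orElse(...) (or isPresent()/get()) guards against an absent topology.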
We are currently using Spring Boot 2.2.13 with Java 8-based dependencies. We are upgrading to Spring Boot 2.7.3 with Java 17 support.
The current development uses CouchbaseClient. In the newer version we hope to migrate to, some functionality seems to have been removed. I am specifically having an issue replacing the functionality mentioned below, previously provided in com.couchbase.client.java.Bucket.
How could I replicate the functionality to "Remove a {@link Document} from the Server identified by its ID" in the newer version? Is there a possible alternative method that could be used for such a scenario?
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.document.json.JsonArray;
import com.couchbase.client.java.document.json.JsonObject;
import com.couchbase.client.java.query.N1qlQuery;
import com.couchbase.client.java.query.N1qlQueryResult;
import com.couchbase.client.java.query.N1qlQueryRow;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.core.io.Resource;
import org.springframework.core.io.ResourceLoader;
import org.springframework.core.io.support.ResourcePatternUtils;
import org.springframework.stereotype.Component;
import org.springframework.util.FileCopyUtils;
import com.acme.dto.configuration.ConfigEnvironment;
import com.acme.dto.configuration.ConfigurationStatus;
import com.acme.dto.configuration.LoadDbConfigurationResponseDTO;
import com.acme.exception.ErrorCodes;
import com.acme.exception.ServiceRuntimeException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;
import java.util.concurrent.TimeUnit;
@Component
@Slf4j
public class DBDocumentLoader extends DBConfigurationChain{
@Autowired
private ResourceLoader resourceLoader;
@Value("${configuration.resource.db.root}")
private String dbConfigurationRoot;
@Autowired
private Bucket couchbaseBucket;
@Autowired
private Bucket masterConfigurationBucket;
private Bucket defaultBucket;
int successDocumentCount;
int failDocumentCount;
@Override
public DBConfigurationLogicData process(DBConfigurationLogicData logicData) {
log.info("Load Document process START");
defaultBucket = getDefaultBucket(logicData.getRequestDTO().getBucketName());
if(defaultBucket == null){
throw new ServiceRuntimeException(ErrorCodes.Constants.DB_CONFIGURATION_NOT_SUPPORT_BUCKET);
}
log.info("BUCKET : "+logicData.getRequestDTO().getBucketName());
Resource[] resources = loadResources(logicData.getRequestDTO().getEnv());
if(resources != null && resources.length > 0){
log.info("Resources found. Count : "+resources.length);
log.info("Flushing bucket "+logicData.getRequestDTO().getBucketName());
if(!flushBucket())
throw new ServiceRuntimeException(ErrorCodes.Constants.DB_CONFIGURATION_BUCKET_FLUSH_FAIL);
log.info("Bucket "+logicData.getRequestDTO().getBucketName() + "flushed");
processAvailableResources(resources);
log.info("Success resource count : "+successDocumentCount +"\n"+"Failed resource count : "+failDocumentCount +"\n");
setResponseDTO(logicData);
}else{
throw new ServiceRuntimeException(ErrorCodes.Constants.DB_CONFIGURATION_NO_RESOURCE);
}
return super.process(logicData);
}
private boolean flushBucket(){
try {
final String bucketName = "`"+defaultBucket.name()+"`";
String query = "SELECT " + "META(" + bucketName + ").id FROM " + bucketName;
N1qlQueryResult result = defaultBucket.query(N1qlQuery.simple(query), 1, TimeUnit.MINUTES);
final boolean isSuccess = result.finalSuccess();
if (isSuccess && !result.allRows().isEmpty()) {
Iterator<N1qlQueryRow> rows = result.rows();
while (rows.hasNext()) {
JsonObject jsonObject = rows.next().value();
if (jsonObject.containsKey("id")) {
defaultBucket.remove(jsonObject.getString("id")); // Deprecated functionality that needs to be replaced
}
}
}
return true;
}catch (Exception ex){
log.error("Error in DBDocumentLoader-flushBucket",ex);
}
return false;
}
}
Old API - https://docs.couchbase.com/sdk-api/couchbase-java-client-2.4.1/com/couchbase/client/java/Bucket.html
New API - https://docs.couchbase.com/sdk-api/couchbase-java-client/com/couchbase/client/java/Bucket.html
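For what it's worth, in the 3.x SDK the key-value operations moved from Bucket onto Collection, so the old bucket.remove(id) becomes collection.remove(id). A minimal sketch, untested since it needs a live cluster (the connection string, credentials, and bucket name are placeholders):

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.Collection;

public class RemoveByIdSketch {
    public static void main(String[] args) {
        // Placeholders -- substitute your own connection details.
        Cluster cluster = Cluster.connect("couchbase://localhost", "username", "password");
        Bucket bucket = cluster.bucket("my-bucket");
        Collection collection = bucket.defaultCollection();

        // SDK 3.x equivalent of the old Bucket#remove(String id).
        collection.remove("document-id");

        cluster.disconnect();
    }
}
```

N1QL queries similarly moved off Bucket (to cluster.query(...)), which would affect flushBucket() as well; and if the bucket is flush-enabled, the bucket-management API exposes a flush operation that may replace the manual delete loop entirely.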
I'm beginning with the Neo4j Java embedded graph.
I have made a first test, but I can't visualize my graph in neo4j-community.
Here is my code for creating the graph:
package connection;
import java.io.File;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.RelationshipType;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
public class embededdedGraph {
public static void main(String... args) throws Exception {
GraphDatabaseFactory graphDbFactory = new GraphDatabaseFactory();
File graphDir = new File("/home/nicolas/neo4j-community-3.5.14/data/databases/cars.db");
GraphDatabaseService graphDb = graphDbFactory.newEmbeddedDatabase(graphDir);
Transaction tx = graphDb.beginTx();
createNode(graphDb);
tx.close();
}
public static void createNode(GraphDatabaseService graphDb) {
Node car = graphDb.createNode();
car.addLabel(Label.label("Car"));
car.setProperty("make", "tesla");
car.setProperty("model", "model3");
Node owner = graphDb.createNode(Label.label("Person"));
owner.setProperty("firstName", "Oliver");
owner.setProperty("lastName", "John");
owner.createRelationshipTo(car, RelationshipType.withName("owner"));
}
}
Next I changed "#dbms.active_database" to "dbms.active_database=cars.db" in the /neo4j-community-3.5.14/conf/neo4j.conf file.
When I restart neo4j-community, the database name is "cars.db", but it indicates that there are no labels and relationships in it.
What is the problem? I cannot figure it out.
Nicolas
It looks like you need to call tx.success() or tx.failure() before tx.close().
https://neo4j.com/docs/java-reference/3.5/javadocs/org/neo4j/graphdb/Transaction.html
I am also new to this API; I hope it helps.
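A sketch of how the main method could look using try-with-resources, which is the idiomatic pattern in the 3.5 API (untested; it also adds graphDb.shutdown(), since the embedded store should be closed cleanly before another process opens it):

```java
try (Transaction tx = graphDb.beginTx()) {
    createNode(graphDb);
    tx.success(); // mark the transaction as committable; without it, close() rolls back
}
graphDb.shutdown(); // flush the store to disk so neo4j-community can open it afterwards
```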
I'm starting a new Eclipse RCP application; it's my first time and I have a problem. I want to display the list of my available databases (by the way, I'm using a NoSQL database, MongoDB), but my code doesn't seem to work. Can anyone help, please, or point me to a good tutorial?
Thanks for your time and help, guys.
package test2.parts;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.annotation.PostConstruct;
import javax.inject.Inject;
import org.eclipse.e4.ui.di.Focus;
import org.eclipse.e4.ui.di.Persist;
import org.eclipse.e4.ui.model.application.ui.basic.MPart;
import org.eclipse.jface.viewers.ArrayContentProvider;
import org.eclipse.jface.viewers.TableViewer;
import org.eclipse.swt.SWT;
import org.eclipse.swt.layout.GridData;
import org.eclipse.swt.layout.GridLayout;
import org.eclipse.swt.widgets.Composite;
import org.eclipse.swt.widgets.Text;
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCursor;
import org.eclipse.swt.widgets.Label;
public class SamplePart {
org.eclipse.swt.widgets.List list ;
private TableViewer tableViewer;
@Inject
private MPart part;
@PostConstruct
public void createComposite(Composite parent) {
parent.setLayout(new GridLayout(1, false));
Text txtInput = new Text(parent, SWT.BORDER);
txtInput.setMessage("Enter text to mark part as dirty");
txtInput.addModifyListener(e -> part.setDirty(true));
txtInput.setLayoutData(new GridData(GridData.FILL_HORIZONTAL));
list = new org.eclipse.swt.widgets.List(parent, SWT.BORDER);
tableViewer = new TableViewer(parent);
tableViewer.setContentProvider(ArrayContentProvider.getInstance());
tableViewer.setInput(createInitialDataModel());
tableViewer.getTable().setLayoutData(new GridData(GridData.FILL_BOTH));
}
@Focus
public void setFocus() {
tableViewer.getTable().setFocus();
}
@Persist
public void save() {
part.setDirty(false);
}
private List<String> createInitialDataModel() {
MongoClient mongoClient = new MongoClient("localhost", 27017);
ArrayList<String> dbs = new ArrayList<String>();
MongoCursor<String> dbsCursor = mongoClient.listDatabaseNames().iterator();
while (dbsCursor.hasNext()) {
list.add(dbsCursor.next());
}
return (List<String>) list;
}
}
The stack trace shows that the plug-in can't find the MongoClient class.
Eclipse plug-ins can only access code in other plug-ins or in jars included in the plug-in. They can't use jars which are just on the ordinary Java classpath.
So you will need to add the jar containing the MongoClient class into your plug-in and add it to the Bundle-ClassPath in the MANIFEST.MF. You can do that in the MANIFEST.MF editor, in the 'Classpath' section of the 'Runtime' tab.
You also need to include the jar in the build.properties file.
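For example, assuming the driver jar is copied into a lib/ folder inside the plug-in (the jar name and version here are illustrative — match whatever you actually bundle), the MANIFEST.MF entry would look like:

```
Bundle-ClassPath: .,
 lib/mongo-java-driver-3.12.11.jar
```

and build.properties would list the same jar, e.g. `bin.includes = META-INF/, ., lib/mongo-java-driver-3.12.11.jar`.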
I have integrated Stanford NER in UIMA and developed a pipeline.
The pipeline contains a FileSystemCollectionReader, an NERAnnotator, and a CasConsumer, but the output isn't as desired. In my input directory I have two files, and after running the pipeline I get two output files, but the first file's content also gets merged into the second output file. I don't know what's happening here.
The code for CasConsumer:
package org.gds.uima;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;
import java.util.UUID;
import org.apache.uima.UimaContext;
import org.apache.uima.analysis_component.AnalysisComponent_ImplBase;
import org.apache.uima.analysis_component.CasAnnotator_ImplBase;
import org.apache.uima.analysis_component.JCasAnnotator_ImplBase;
import org.apache.uima.analysis_engine.AnalysisEngineProcessException;
import org.apache.uima.cas.CAS;
import org.apache.uima.cas.CASException;
import org.apache.uima.cas.Type;
import org.apache.uima.cas.text.AnnotationFS;
import org.apache.uima.fit.component.CasConsumer_ImplBase;
import org.apache.uima.fit.component.JCasConsumer_ImplBase;
import org.apache.uima.fit.descriptor.ConfigurationParameter;
import org.apache.uima.fit.util.CasUtil;
import org.apache.uima.jcas.JCas;
import org.apache.uima.resource.ResourceInitializationException;
public class CasConsumer extends JCasConsumer_ImplBase
{
public final static String PARAM_OUTPUT="outputDir";
@ConfigurationParameter(name = PARAM_OUTPUT)
private String outputDirectory;
public final static String PARAM_ANNOTATION_TYPES = "annotationTypes";
@ConfigurationParameter(name = PARAM_ANNOTATION_TYPES,defaultValue="String")
public List<String> annotationTypes;
public void initialize(final UimaContext context) throws ResourceInitializationException
{
super.initialize(context);
}
@Override
public void process(JCas jcas)
{
String original = jcas.getDocumentText();
try
{
String onlyText="";
JCas sofaText = jcas.getView(NERAnnotator.SOFA_NAME);
onlyText = sofaText.getDocumentText();
String name = UUID.randomUUID().toString().substring(20);
File outputDir = new File(this.outputDirectory+"/"+name);
System.out.print("Saving file to "+outputDir.getAbsolutePath());
FileOutputStream fos = new FileOutputStream(outputDir.getAbsoluteFile());
PrintWriter pw = new PrintWriter(fos);
pw.println(onlyText);
pw.close();
}
catch(CASException cae)
{
System.out.println(cae);
}
catch(FileNotFoundException fne)
{
System.out.print(fne);
}
}
}
Hello:
I'm writing code in Java for Nutch (an open-source search engine) to remove the movements (diacritics) from Arabic words in the indexer.
I don't know what the error in it is.
This is the code:
package com.mycompany.nutch.indexing;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.log4j.Logger;
import org.apache.nutch.crawl.CrawlDatum;
import org.apache.nutch.crawl.Inlinks;
import org.apache.nutch.indexer.IndexingException;
import org.apache.nutch.indexer.IndexingFilter;
import org.apache.nutch.indexer.NutchDocument;
import org.apache.nutch.parse.getData().parse.getData();
public class InvalidUrlIndexFilter implements IndexingFilter {
private static final Logger LOGGER =
Logger.getLogger(InvalidUrlIndexFilter.class);
private Configuration conf;
public void addIndexBackendOptions(Configuration conf) {
// NOOP
return;
}
public NutchDocument filter(NutchDocument doc, Parse parse, Text url,
CrawlDatum datum, Inlinks inlinks) throws IndexingException {
if (url == null) {
return null;
}
char[] parse.getData() = input.trim().toCharArray();
for(int p=0;p<parse.getData().length;p++)
if(!(parse.getData()[p]=='َ'||parse.getData()[p]=='ً'||parse.getData()[p]=='ُ'||parse.getData()[p]=='ِ'||parse.getData()[p]=='ٍ'||parse.getData()[p]=='ٌ' ||parse.getData()[p]=='ّ'||parse.getData()[p]=='ْ' ||parse.getData()[p]=='"' ))
new String.append(parse.getData()[p]);
return doc;
}
public Configuration getConf() {
return conf;
}
public void setConf(Configuration conf) {
this.conf = conf;
}
}
I think that the error is in using parse.getData(), but I don't know what I should use instead of it.
The line
char[] parse.getData() = input.trim().toCharArray();
will give you a compile error because the left hand side is not a variable. Please replace parse.getData() by a unique variable name (e.g. parsedData) in this line and the following lines.
Second the import of
import org.apache.nutch.parse.getData().parse.getData();
will also fail. Looks a lot like a text replace issue.
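To make the fix concrete, here is a self-contained sketch of the stripping logic, with the invalid `char[] parse.getData() = ...` declaration replaced by a local variable and a StringBuilder (class and method names are illustrative; wiring it back into the Nutch filter, and the question's extra check for the double-quote character, are left to the asker):

```java
public class DiacriticStripper {
    // Remove the Arabic short-vowel marks (tashkeel), U+064B..U+0652,
    // which is the same set the question's condition enumerates character by character.
    static String stripDiacritics(String input) {
        StringBuilder sb = new StringBuilder(input.length());
        for (char c : input.trim().toCharArray()) {
            if (c < '\u064B' || c > '\u0652') {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(stripDiacritics("بَيت").equals("بيت")); // prints true
    }
}
```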