Error when I launch SPARQL query endpoint on the browser - java

I have a question:
I have two RDF files that I load into Jena TDB using this Java code:
public void store() {
    String directory = "C:\\tdb";
    Dataset dataset = openTDB(directory);
    String source = "C:\\file1.rdf";
    String source1 = "C:\\file2.rdf";
    Model tdb = loadModel(source, dataset);
    dataset.addNamedModel("C://File1", tdb);
    Model tdb1 = loadModel(source1, dataset);
    dataset.addNamedModel("C://File2", tdb1);
    tdb.close();
    tdb1.close();
    dataset.close();
}

public Dataset openTDB(String directory) {
    // open TDB dataset
    Dataset dataset = TDBFactory.createDataset(directory);
    return dataset;
}

public Model loadModel(String source, Dataset dataset) {
    Model model = ModelFactory.createDefaultModel();
    FileManager.get().readModel(model, source, "RDF/XML");
    return model;
}
As suggested in this post https://stackoverflow.com/questions/24798024/how-i-can-use-fuseki-with-jena-tdb, I launch this command from CMD:
fuseki-server --update --loc C:\tdb /ds
At localhost:3030 I see a different page. In particular, I see the "Control Panel" page, where I can choose the dataset and execute a query.
Now I'm reading this documentation http://jena.apache.org/documentation/serving_data/ and I see that to reach the SPARQL query endpoint I can open the http://host/dataset/query path in the browser.
But if I open this path (http://localhost:3030/ds/query), I get this error:
Error 404: Service Description: /ds/query
Fuseki - version 1.0.2 (Build date: 2014-06-02T10:57:10+0100)
Why?
I'm doing this research because I want to find a way to launch the Fuseki server from Java code. Is that possible?
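For illustration, a minimal sketch of how the /ds/query endpoint is normally used from Java rather than from the browser address bar, assuming the server started with the command above is running and using Jena 3.x-style imports (org.apache.jena.*; older releases used com.hp.hpl.jena.*); QueryExecutionFactory.sparqlService is standard Jena API:
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;

public class QueryFuseki {
    public static void main(String[] args) {
        // The query endpoint from the question; it answers SPARQL protocol
        // requests (e.g. ...?query=...), not a plain browser GET.
        String service = "http://localhost:3030/ds/query";
        // The data above was stored as named graphs, so query inside GRAPH.
        String queryString = "SELECT * WHERE { GRAPH ?g { ?s ?p ?o } } LIMIT 10";

        try (QueryExecution qe = QueryExecutionFactory.sparqlService(service, queryString)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row);
            }
        }
    }
}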

Related

How to create a Fuseki SPARQL server connection in Android Studio

How can I create a connection to a Fuseki server from Android Studio and upload my OWL file to the Fuseki server, in order to send SPARQL queries and get the results?
I did it from the command line and it works fine, but I need to do it from Android Studio.
I found some code, but DatasetAccessor and DatasetAccessorFactory cannot be resolved:
// Imports assume Jena 3.x package names (org.apache.jena.*);
// Jena 2.x puts the same classes under com.hp.hpl.jena.*.
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

import org.apache.jena.query.DatasetAccessor;
import org.apache.jena.query.DatasetAccessorFactory;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public static void uploadRDF(File rdf, String serviceURI) throws IOException {
    // parse the file
    Model m = ModelFactory.createDefaultModel();
    try (FileInputStream in = new FileInputStream(rdf)) {
        m.read(in, null, "RDF/XML");
    }
    // upload the resulting model to the service's graph store endpoint
    DatasetAccessor accessor = DatasetAccessorFactory.createHTTP(serviceURI);
    accessor.putModel(m);
}

Conversion from OWLOntology to Jena Model in Java

I need to convert data from an OWLOntology object (part of the OWL API) to a Model object (part of the Jena API). My Java program should be able to load an OWL file and send its content to a Fuseki server. According to what I read, working with a Fuseki server from a Java program is possible only with the Jena API, which is why I use it.
So I found an example of sending ontologies to a Fuseki server using the Jena API and modified it into this function:
private static void sendOntologyToFuseki(DatasetAccessor accessor, OWLOntology owlModel) {
    Model model;
    /*
     * ..
     * conversion from OWLOntology to Model
     * ..
     */
    if (accessor != null) {
        accessor.add(model);
    }
}
This function should add new ontologies to the Fuseki server. Any ideas how to fill in the missing conversion? Or any other ideas on how to send ontologies to a Fuseki server using the OWL API?
I read the solution to this:
Sparql query doesn't upadate when insert some data through java code
but the purpose of my Java program is to send these ontologies incrementally, because the data is quite big and if I load it all into local memory my computer cannot manage it.
The idea is to write to a Java OutputStream and pipe this into an InputStream. A possible implementation could look like this:
/**
 * Converts an OWL API ontology into a JENA API model.
 * @param ontology the OWL API ontology
 * @return the JENA API model
 */
public static Model getModel(final OWLOntology ontology) {
    Model model = ModelFactory.createDefaultModel();
    try (PipedInputStream is = new PipedInputStream(); PipedOutputStream os = new PipedOutputStream(is)) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    ontology.getOWLOntologyManager().saveOntology(ontology, new TurtleDocumentFormat(), os);
                    os.close();
                } catch (OWLOntologyStorageException | IOException e) {
                    e.printStackTrace();
                }
            }
        }).start();
        model.read(is, null, "TURTLE");
        return model;
    } catch (Exception e) {
        throw new RuntimeException("Could not convert OWL API ontology to JENA API model.", e);
    }
}
Alternatively, you could simply use ByteArrayOutputStream and ByteArrayInputStream instead of piped streams.
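For completeness, a minimal sketch of that alternative, under the same assumptions as the method above (OWL API's saveOntology and Jena's Model.read); the method name is just illustrative, and note the whole serialization is buffered in memory, so it only suits ontologies that fit comfortably in the heap:
public static Model getModelViaBuffer(final OWLOntology ontology) {
    Model model = ModelFactory.createDefaultModel();
    try {
        // Serialize the ontology into an in-memory buffer as Turtle...
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        ontology.getOWLOntologyManager().saveOntology(ontology, new TurtleDocumentFormat(), os);
        // ...and read that buffer back into a Jena model.
        ByteArrayInputStream is = new ByteArrayInputStream(os.toByteArray());
        model.read(is, null, "TURTLE");
        return model;
    } catch (OWLOntologyStorageException e) {
        throw new RuntimeException("Could not convert OWL API ontology to JENA API model.", e);
    }
}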
To avoid this kind of roundabout transformation through I/O streams, you can use ONT-API: it reads the OWL axioms directly from the underlying graph without any conversion.
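A rough sketch of that approach; the class and method names below (OntManagers, OntologyManager, asGraphModel) are from my recollection of ONT-API and may differ between ONT-API versions, so treat this as an assumption to verify against the ONT-API documentation:
// Assumed ONT-API usage (names vary by version; 1.x used OntManagers.createONT()).
OntologyManager manager = OntManagers.createManager();
Ontology ontology = manager.loadOntologyFromOntologyDocument(new File("C:\\ontology.owl"));
// An ONT-API Ontology is backed by a Jena graph, so no stream round-trip is needed;
// asGraphModel() exposes that graph as a Jena model view.
Model model = ontology.asGraphModel();
if (accessor != null) {
    accessor.add(model);
}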

Export activity table as excel to share point server in java using webservices

I am trying to export an Excel file to a SharePoint server.
I referred to this link: Upload a file to SharePoint through the built-in web services.
I am unable to import Uri, SharePointServer2007, security, etc. How can I import these?
public static void UploadFile2007(string destinationUrl, byte[] fileData)
{
    // List of destination Urls, just one in this example.
    string[] destinationUrls = { Uri.EscapeUriString(destinationUrl) };

    // Empty Field Information. This can be populated but not for this example.
    SharePoint2007CopyService.FieldInformation information =
        new SharePoint2007CopyService.FieldInformation();
    SharePoint2007CopyService.FieldInformation[] info = { information };

    // To receive the result Xml.
    SharePoint2007CopyService.CopyResult[] result;

    // Create the Copy web service instance configured from the web.config file.
    SharePoint2007CopyService.CopySoapClient CopyService2007 = new CopySoapClient("CopySoap");
    CopyService2007.ClientCredentials.Windows.ClientCredential =
        CredentialCache.DefaultNetworkCredentials;
    CopyService2007.ClientCredentials.Windows.AllowedImpersonationLevel =
        System.Security.Principal.TokenImpersonationLevel.Delegation;

    CopyService2007.CopyIntoItems(destinationUrl, destinationUrls, info, fileData, out result);

    if (result[0].ErrorCode != SharePoint2007CopyService.CopyErrorCode.Success)
    {
        // ...
    }
}

'SolrCoreState already closed' with unit test using EmbeddedSolrServer v 5.2.1

I'm trying to read 7-8 XML files that contain realistic data from our production server. I would then like to read this data into an EmbeddedSolrServer to test edge cases for our custom date search. The use of EmbeddedSolrServer is purely to separate the data testing from any environment that might change over time.
I would also like to avoid writing plumbing code to import each field from the XML, since I already have a working DIH.
Setting up an integration test with EmbeddedSolrServer, I get this error:
ERROR o.a.s.h.dataimport.DataImporter - Full Import failed:java.lang.RuntimeException: org.apache.solr.common.SolrException: SolrCoreState already closed
The code for setting up the integration test is:
public class SolrEmbeddedSearchTest extends AbstractSolrTestCase {

    static {
        System.setProperty("solr.allow.unsafe.resourceloading", "true");
        ClassLoader loader = SolrEmbeddedSearchTest.class.getClassLoader();
        loader.setPackageAssertionStatus("org.apache.solr", true);
        loader.setPackageAssertionStatus("org.apache.lucene", true);
    }

    private SolrClient server;
    private final String SolrProjectPath = "\\src\\test\\resources\\solr-5.2.1\\server\\solr\\nmdc";
    private final String userDir = System.getProperty("user.dir") + SolrProjectPath;

    @Override
    public String getSolrHome() {
        return userDir;
    }

    @Before
    @Override
    public void setUp() throws Exception {
        super.setUp();
        initCore("solrconfig.xml", "schema.xml", userDir, "collection1");
        server = new EmbeddedSolrServer(h.getCoreContainer(), h.getCore().getName());
        SolrQuery qry = new SolrQuery();
        qry.setRequestHandler("/dataimport2");
        qry.setParam("command", "full-import");
        qry.setParam("clean", false);
        server.query(qry);
    }

    @Test
    public void testThatResultsAreReturned() throws Exception {
        SolrParams params = new SolrQuery("Entry_ID:imr_1423");
        QueryResponse response = server.query(params);
        assertEquals(1L, response.getResults().getNumFound());
        assertEquals("1", response.getResults().get(0).get("Entry_ID"));
    }
}
And when run, it produces this stack trace:
DEBUG o.a.s.u.processor.LogUpdateProcessor - PRE_UPDATE add{,id=imr_1423} {qt=/dataimport2&expandMacros=false&config=dih-config.xml&command=full-import}
15:40:38.713 [Thread-2] WARN o.a.s.handler.dataimport.SolrWriter - Error creating document : SolrInputDocument(fields: [Entry_Title=...])
org.apache.solr.common.SolrException: SolrCoreState already closed
Here is the complete stack trace: https://gist.github.com/emoen/f6c2f80b7ba09a59fa6b - the exception starts at line 683.
schema.xml, solrconfig.xml, and dih-config.xml have been copied from a standalone Solr 5.2.1 instance that works.
Why is SolrCoreState closed before setup has finished and the data has been imported with the /dataimport2 handler?
What is the best way of doing a data import with embedded Solr? I've tried the method suggested here: Embedded Solr DIH, but Solr is closed before the HTTP request is sent.
Debugging the code, it looks like this line
initCore("solrconfig.xml", "schema.xml", userDir, "collection1");
initializes the core twice.
In solrconfig.xml I define dataDir with <dataDir>F:\prosjekt3\nmdc\source\central-api\src\test\resources\solr-5.2.1\server\solr\configsets\nmdc\data</dataDir>
When reading solrconfig.xml the log picks this up:
DEBUG org.apache.solr.core.Config - solrconfig.xml dataDir=F:\prosjekt3\nmdc\source\central-api\src\test\resources\solr-5.2.1\server\solr\nmdc\collection1\data
But further down the log it is using another dataDir:
INFO org.apache.solr.core.SolrCore - CORE DESCRIPTOR: {name=collection1, config=solrconfig.xml, transient=false, schema=schema.xml, loadOnStartup=true, instanceDir=collection1, collection=collection1, absoluteInstDir=F:\prosjekt3\nmdc\source\central-api\src\test\resources\solr-5.2.1\server\solr\nmdc\collection1\, dataDir=C:\Windows\Temp\no.nmdc.solr.request.SolrEmbeddedSearchTest_4D0704B125401CE0-001\init-core-data-001, shard=shard1}
And it starts loading the jar files, reading solrconfig.xml and schema.xml for the second time.
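Not something from this thread, just a hedged sketch of one thing worth trying: DIH's full-import normally runs asynchronously, so the import may still be in flight when the test runs and the core is torn down. Polling the same handler for its status until it reports idle (the "status" field is part of DIH's standard response) before leaving setUp() might avoid that race; the loop below is illustrative only and assumes the server and /dataimport2 handler from the test above:
// Illustrative only: wait inside setUp() for the asynchronous DIH full-import
// to finish before the tests (and the core teardown) proceed.
SolrQuery statusQuery = new SolrQuery();
statusQuery.setRequestHandler("/dataimport2");
statusQuery.setParam("command", "status");
for (int i = 0; i < 300; i++) {                        // give up after ~5 minutes
    QueryResponse status = server.query(statusQuery);
    Object state = status.getResponse().get("status"); // "busy" while importing, "idle" when done
    if ("idle".equals(state)) {
        break;
    }
    Thread.sleep(1000);
}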

How to browse local Java App Engine datastore?

It seems there is no equivalent of Python App Engine's _ah/admin for the Java implementation of Google App Engine.
Is there a manual way I can browse the datastore? Where are the files to be found on my machine? (I am using the App Engine plugin with Eclipse on OS X).
http://googleappengine.blogspot.com/2009/07/google-app-engine-for-java-sdk-122.html: "At long last, the dev appserver has a data viewer. Start your app locally and point your browser to http://localhost:8888/_ah/admin to check it out." (As of SDK 1.7.7, the viewer is at http://localhost:8000/datastore instead.)
There's currently no datastore viewer for the Java SDK - one should be coming in the next SDK release. In the meantime, your best bet is to write your own admin interface with datastore viewing code - or wait for the next SDK release.
Java App Engine now has a local datastore viewer, accessible at http://localhost:8080/_ah/admin.
I have the local datastore in my Windows+Eclipse environment at \war\WEB-INF\appengine-generated\local_db.bin.
As far as I understand, it uses an internal format named "protocol buffers". I don't have external tools to present the file in a human-readable format.
I'm using simple "viewer" code like this:
public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    resp.setContentType("text/plain");
    final DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
    final Query query = new Query("Table/Entity Name");
    //query.addSort(Entity.KEY_RESERVED_PROPERTY, Query.SortDirection.DESCENDING);
    for (final Entity entity : datastore.prepare(query).asIterable()) {
        resp.getWriter().println(entity.getKey().toString());
        final Map<String, Object> properties = entity.getProperties();
        final String[] propertyNames = properties.keySet().toArray(new String[properties.size()]);
        for (final String propertyName : propertyNames) {
            resp.getWriter().println("-> " + propertyName + ": " + entity.getProperty(propertyName));
        }
    }
}
In the newest versions of the SDK (1.7.6+), the admin interface that comes with the dev server has changed its location.
Analyzing the server output logs, we can see that it is accessible at:
http://localhost:8000
And the Datastore viewer at:
http://localhost:8000/datastore
It looks pretty neat, following Google's new design guidelines.
Because Google App Engine's Datastore viewer does not support displaying collections of referenced entities, I modified Paul's version to display all descendant entities:
public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    String entityParam = req.getParameter("e");
    resp.setContentType("text/plain");
    final DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

    // Original query
    final Query queryOrig = new Query(entityParam);
    queryOrig.addSort(Entity.KEY_RESERVED_PROPERTY, Query.SortDirection.ASCENDING);
    for (final Entity entityOrig : datastore.prepare(queryOrig).asIterable()) {
        // Query for this entity and all its descendant entities and collections
        final Query query = new Query();
        query.setAncestor(entityOrig.getKey());
        query.addSort(Entity.KEY_RESERVED_PROPERTY, Query.SortDirection.ASCENDING);
        for (final Entity entity : datastore.prepare(query).asIterable()) {
            resp.getWriter().println(entity.getKey().toString());
            // Print properties
            final Map<String, Object> properties = entity.getProperties();
            final String[] propertyNames = properties.keySet().toArray(new String[properties.size()]);
            for (final String propertyName : propertyNames) {
                resp.getWriter().println("-> " + propertyName + ": " + entity.getProperty(propertyName));
            }
        }
    }
}
It should be noted that nothing is displayed for empty collections/referenced entities.
Open the \war\WEB-INF\appengine-generated\local_db.bin file with a text editor, like Notepad++.
The data is scrambled, but at least you can read it and copy text out of it.
For me the fix was to log in using the gcloud command below:
gcloud auth application-default login
