I have just started working with SharePoint using Java. My firm uses SharePoint 2010, so I have to work through SOAP, and I have implemented the following functionality:
Download a document from SharePoint.
Upload a document to SharePoint.
But I am stuck at creating a new directory. I have tried some code found by googling, but no luck.
Here is my code:
public void createFolder(ListsSoap ls, String filePathToCreate,
        String fileName, LoginDO loginDO, HttpServletResponse response)
        throws Exception {
    try {
        String query = "";
        String folderName = "NewFolder";
        // 1. Prepare Query, Query Options and field Options
        if (CommonUtilities.isValidStr(folderName)) {
            // Prepare Query & Query Options for child folders
            query = "<Batch OnError=\"Continue\" PreCalc=\"TRUE\" ListVersion=\"0\" "
                    + "RootFolder=\"https://xxx/Shared%20Documents/FPL%20SOs%20-%20JD%20Documents\">"
                    + "<Method ID=\"1\" Cmd=\"New\">"
                    + "<Field Name=\"FSObjType\">1</Field>"
                    + "<Field Name=\"ID\">New</Field>"
                    + "<Field Name=\"BaseName\">" + folderName + "</Field>"
                    + "</Method></Batch>";
        } else {
            // Prepare Query & Query Options for parent folders
            query = "<Query><Where><Eq><FieldRef Name=\"FSObjType\" />"
                    + "<Value Type=\"Lookup\">1</Value></Eq></Where></Query>";
        }
        UpdateListItems.Updates updates = null;
        // 2. Wrap the CAML batch in an Updates object
        if (CommonUtilities.isValidStr(query)) {
            updates = new UpdateListItems.Updates();
            updates.getContent().add(sharepointUtil.createSharePointCAMLNode(query));
        }
        // 3. Call the web service with the prepared batch
        UpdateListItemsResult result = ls.updateListItems(SHAREPOINT_FOLDER_NAME, updates);
        // 4. Read the rows from the SharePoint result
        Element element = (Element) result.getContent().get(0);
        NodeList nl = element.getElementsByTagName("z:row");
        for (int i = 0; i < nl.getLength(); i++) {
            Node node = nl.item(i);
            System.out.println("Some.!");
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Anyhow, the updateListItems() method executes without error, but there is nothing in the result.
Any help is appreciated.
Thank you :)
It seems there was a problem in
"RootFolder=\"https://xxx/Shared%20Documents/FPL%20SOs%20-%20JD%20Documents\">"
while passing the root directory to create the folder within.
I found another way around it by passing folderName with the full directory hierarchy.
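For reference, a minimal sketch of the batch that worked, assuming the full folder hierarchy goes into BaseName (the path below is illustrative; your site and list names will differ):
// Sketch: create a nested folder by passing the full hierarchy in BaseName
// instead of relying on the RootFolder attribute.
String folderName = "Shared Documents/FPL SOs - JD Documents/NewFolder";
String query = "<Batch OnError=\"Continue\" PreCalc=\"TRUE\" ListVersion=\"0\">"
        + "<Method ID=\"1\" Cmd=\"New\">"
        + "<Field Name=\"FSObjType\">1</Field>"
        + "<Field Name=\"ID\">New</Field>"
        + "<Field Name=\"BaseName\">" + folderName + "</Field>"
        + "</Method></Batch>";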
Thank you all for your inputs :)
I have a Java method in which I am using the following line of code to fetch data from Azure Cosmos DB:
Iterable<FeedResponse<Object>> feedResponseIterator =
        cosmosContainer
                .queryItems(sqlQuery, queryOptions, Object.class)
                .iterableByPage(continuationToken, pageSize);
The whole method looks like this:
public List<LinkedHashMap> getDocumentsFromCollection(
        String containerName, String partitionKey, String sqlQuery) {
    List<LinkedHashMap> documents = new ArrayList<>();
    String continuationToken = null;
    do {
        CosmosQueryRequestOptions queryOptions = new CosmosQueryRequestOptions();
        CosmosContainer cosmosContainer = createContainerIfNotExists(containerName, partitionKey);
        Iterable<FeedResponse<Object>> feedResponseIterator =
                cosmosContainer
                        .queryItems(sqlQuery, queryOptions, Object.class)
                        .iterableByPage(continuationToken, pageSize);
        int pageCount = 0;
        for (FeedResponse<Object> page : feedResponseIterator) {
            long startTime = System.currentTimeMillis();
            // Access all the documents in this result page
            page.getResults().forEach(document -> documents.add((LinkedHashMap) document));
            // Along with the page results, get a continuation token, which
            // enables the client to "pick up where it left off" when
            // accessing query response pages.
            continuationToken = page.getContinuationToken();
            pageCount++;
            log.info(
                    "Cosmos collection {}: fetched page {} with {} records in {} ms",
                    containerName,
                    pageCount,
                    page.getResults().size(),
                    (System.currentTimeMillis() - startTime));
        }
    } while (continuationToken != null);
    log.info(containerName + " collection has been collected successfully");
    return documents;
}
My question is: can we use the same line of code to execute a delete query like DELETE * FROM c? If yes, what would the Iterable<FeedResponse<Object>> feedResponseIterator object contain?
SQL statements can only be used for reads. Delete operations must be done with deleteItem().
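For illustration, a minimal sketch of that pattern, reusing cosmosContainer from the question (the "id" and "pk" field names are assumptions for this example):
// Sketch: Cosmos DB has no SQL DELETE, so select the documents first and
// then call deleteItem() for each one.
CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
for (FeedResponse<LinkedHashMap> page : cosmosContainer
        .queryItems("SELECT * FROM c", options, LinkedHashMap.class)
        .iterableByPage(100)) {
    for (LinkedHashMap document : page.getResults()) {
        String id = (String) document.get("id");
        String pk = (String) document.get("pk"); // hypothetical partition key field
        cosmosContainer.deleteItem(id, new PartitionKey(pk), new CosmosItemRequestOptions());
    }
}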
Here are Java SDK samples (sync and async) for all document operations in Cosmos DB.
Java v4 SDK Document Samples
I'm continuing a project that has been going on for a few years at my university. One of the things this project does is collect some web pages using the Google bot user agent.
Due to a problem that I cannot understand, the project is not getting through this part. I have already researched a lot into what may be happening and whether some part of the code is outdated.
The code is in Java and uses Maven for project management.
I've tried to update some information in Maven's pom.
I've also tried to change the part of the code that uses the bot, but nothing works.
I'm posting the part of the code that isn't working as it should:
private List<JSONObject> querySearch(int numSeeds, String query) {
    List<JSONObject> result = new ArrayList<>();
    start = 0;
    do {
        String url = SEARCH_URL + query.replaceAll(" ", "+") + FILE_TYPE + "html" + START + start;
        Connection conn = Jsoup.connect(url)
                .userAgent("Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")
                .timeout(5000);
        try {
            Document doc = conn.get();
            result.addAll(formatter(doc));
        } catch (IOException e) {
            System.err.println("Could not search for seed pages in IO.");
            System.err.println(e);
        } catch (ParseException e) {
            System.err.println("Could not search for seed pages in Parse.");
            System.err.println(e);
        }
        start += 10;
    } while (result.size() < numSeeds);
    return result;
}
What some of the variables do:
private static final String SEARCH_URL = "https://www.google.com/search?q=";
private static final String FILE_TYPE = "&fileType=";
private static final String START = "&start=";
private QueryBuilder queryBuilder;
public GoogleAjaxSearch() {
    this.queryBuilder = new QueryBuilder();
}
Up to this part everything is OK: it connects as the bot and can get HTML back from Google. The problem is separating what it found and taking only the links, which should be under ("h3.r > a").
It does that in this part, with result.addAll(formatter(doc)):
public List<JSONObject> formatter(Document doc) throws ParseException {
    List<JSONObject> entries = new ArrayList<>();
    Elements results = doc.select("h3.r > a");
    for (Element result : results) {
        JSONObject entry = new JSONObject();
        entry.put("url", (result.attr("href").substring(6, result.attr("href").indexOf("&")).substring(1)));
        entry.put("anchor", result.text());
        entries.add(entry);
    }
    return entries;
}
So when it gets to this part, Elements results = doc.select("h3.r > a"), it probably finds no h3 elements and never enters the for loop, so the results list is not incremented. Then it goes back to the querySearch function and tries again, still without incrementing the results list. With that, it enters an infinite loop, trying to get the requested data and never finding it.
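For what it's worth, a minimal guard for the loop in querySearch, a sketch that reuses the variables above; it does not fix the outdated selector itself:
// Sketch: stop when a page yields no new links instead of looping forever.
int sizeBefore = result.size();
result.addAll(formatter(doc));
if (result.size() == sizeBefore) {
    System.err.println("No links parsed on this page; stopping to avoid an infinite loop.");
    break;
}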
If anyone here can help me: I've been trying for a while and I don't know what else to do. Thanks in advance.
I am using the latest version of Alfresco, 5.1.
One of my requirements is to create properties (key/value) where the user enters both the key and the value.
So I have done that like this:
Map<QName, Serializable> props = new HashMap<QName, Serializable>();
props.put(QName.createQName("customProp1"), "prop1");
props.put(QName.createQName("customProp2"), "prop2");
ChildAssociationRef associationRef = nodeService.createNode(nodeService.getRootNode(storeRef), ContentModel.ASSOC_CHILDREN, QName.createQName(GUID.generate()), ContentModel.TYPE_CMOBJECT, props);
Now what I want to do is search for the nodes with these newly created properties. I was able to search for the newly created property like this:
public List<NodeRef> findNodes() throws Exception {
    authenticate("admin", "admin");
    StoreRef storeRef = new StoreRef(StoreRef.PROTOCOL_WORKSPACE, "SpacesStore");
    List<NodeRef> nodeList = null;
    Map<QName, Serializable> props = new HashMap<QName, Serializable>();
    props.put(QName.createQName("customProp1"), "prop1");
    props.put(QName.createQName("customProp2"), "prop2");
    ChildAssociationRef associationRef = nodeService.createNode(nodeService.getRootNode(storeRef),
            ContentModel.ASSOC_CHILDREN, QName.createQName(GUID.generate()),
            ContentModel.TYPE_CMOBJECT, props);
    NodeRef nodeRef = associationRef.getChildRef();
    String query = "#cm\\:customProp1:\"prop1\"";
    SearchParameters sp = new SearchParameters();
    sp.addStore(storeRef);
    sp.setLanguage(SearchService.LANGUAGE_LUCENE);
    sp.setQuery(query);
    try {
        ResultSet results = serviceRegistry.getSearchService().query(sp);
        nodeList = new ArrayList<NodeRef>();
        for (ResultSetRow row : results) {
            nodeList.add(row.getNodeRef());
            System.out.println(row.getNodeRef());
        }
        System.out.println(nodeList.size());
    } catch (Exception e) {
        e.printStackTrace();
    }
    return nodeList;
}
The indexer configuration in alfresco-global.properties is:
index.subsystem.name=buildonly
index.recovery.mode=AUTO
dir.keystore=${dir.root}/keystore
Now my questions are:
Is it possible to achieve the same using the solr4 indexer?
Or is there any way to use the buildonly indexer for a particular query?
In your query
String query = "#cm\\:customProp1:\"prop1\"";
remove cm, because you are building the QName on the fly, so it does not fall under the cm (ContentModel) namespace. Your query will then be:
String query = "#\\:customProp1:\"prop1\"";
Hope this works for you.
First, double check if you're simply experiencing eventual consistency, as described below. If you are, and if this presents a problem for you, consider switching to CMIS queries while staying on SOLR.
http://docs.alfresco.com/5.1/concepts/solr-event-consistency.html
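If you do switch to CMIS, a minimal sketch of a lookup through the same SearchService (illustrative only; custom properties usually need a content model definition before they are queryable this way):
// Sketch: a search expressed as a CMIS query via the SearchService.
SearchParameters sp = new SearchParameters();
sp.addStore(storeRef);
sp.setLanguage(SearchService.LANGUAGE_CMIS_ALFRESCO);
sp.setQuery("SELECT * FROM cmis:document WHERE cmis:name = 'prop1'");
ResultSet results = serviceRegistry.getSearchService().query(sp);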
Other than this, check if the node has been indexed at all. If it has, take a closer look at how you build your query.
How to find List of unindexed file in alfresco
I need some help scraping a webpage with Jsoup. I want to parse player profiles from the hcfactions webpage and gather their kills and deaths. The problem I'm running into is that each profile page is dynamically created and will only have those tables if the player has kills or deaths. So in order to tell which table I'm parsing, I need to get the header text that's set with it.
example web page: http://www.hcfactions.net/index.php?action=playerinfo&player=Djmaddox.
Below is a html segment from the web page I'm scraping:
<table class='table-bordered'><h2 style='text-align:center'>Deaths</h2>
<tr><td>Date</td><td>Reason</td><td>Details</td></tr><tr><td>Dec 11 5:27pm CST</td>.....
I have this code that pulls the tables and counts entries, but it won't pull the h2 tags with them for me to select.
public void getPlayerDetails(String name) {
    String data = "";
    Avatar temp = _db.getPlayer(name);
    playerUrl = "http://www.hcfactions.net/index.php?action=playersearch&player=" + name;
    try {
        playerDoc = Jsoup.connect(playerUrl).get();
    } catch (IOException ex) {
        Logger.getLogger(JParser.class.getName()).log(Level.SEVERE, null, ex);
    }
    if (playerDoc.select("table").size() == 1) {
        return;
    } else if (playerDoc.select("table").size() >= 2) {
        for (int x = 1; x < playerDoc.select("table").size(); x++) {
            System.out.println("deaths");
            Element table = playerDoc.select("table").get(x);
            Iterator<Element> ite = table.select("tr").iterator();
            int count = 0;
            while (ite.hasNext()) {
                data = ite.next().text();
                count++;
            }
            if (count > 0) {
                temp.setDeaths(count - 1);
            }
        }
    }
}
The <h2> tag is in an invalid position; that's why JSoup cannot find it, I think. You have to extract it yourself with regular expressions. You can get the content of the <h2> with the following code:
String tableToString = "<table class='table-bordered'>"
        + "<h2 style='text-align:center'>Deaths</h2>"
        + "<tr><td>Date</td><td>Reason</td><td>Details</td></tr>"
        + "</table>";
String regex = "<h2.*>(.*)?</h2>";
Pattern pattern = Pattern.compile(regex);
Matcher matcher = pattern.matcher(tableToString);
if (matcher.find()) {
    System.out.println(matcher.group(1));
}
You can initialize tableToString with table.toString() from your code.
As ka3ak says, the <h2> is mispositioned. But you don't have to abandon your parser and resort to regex for that. Assuming JSoup is a decent HTML parser (I've never used it myself), the <h2> element should end up as the element immediately preceding the <table> element. Get your select statement to look for it there.
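A minimal sketch of that idea, assuming the parser hoists the misplaced <h2> out of the table (worth verifying against the live page):
// Sketch: read the header text from the element immediately preceding each table.
for (Element table : playerDoc.select("table")) {
    Element header = table.previousElementSibling();
    if (header != null && header.tagName().equals("h2")) {
        System.out.println("Table header: " + header.text());
    }
}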
Elements headers=playerDoc.select("div.span10.offset1 h2");
IMHO your selections seem to be a little overcomplicated, but maybe they have to be like that. Anyway, the snippet above will get you every h2 tag present in the proper container.
Later on you can select the required tables like this: Elements tables = playerDoc.select("div.span10.offset1 table"); and apply the proper data digging to them. The headers will be in corresponding order to the tables, of course. I think my job is done here :)
How can I delete the saved content in Eclipse secure storage programmatically? I need to reset all settings before I run some SWTBot tests.
I know that I can delete the folder, but isn't there another way?
../.eclipse/org.eclipse.equinox.security
EDIT:
Thanks to Kris, I solved the problem.
// part 1
try {
    AuthPlugin.getDefault().stop(null);
} catch (final Exception e) {
    e.printStackTrace();
}

// part 2
final ISecurePreferences rootNode = SecurePreferencesFactory.getDefault()
        .node(ROOT_NODE_NAME);
final String[] names = rootNode.childrenNames().clone();
for (int i = 0; i < names.length; i++) {
    rootNode.node(names[i]).removeNode();
}
The problem is solved in part 2. I also want to show a way to stop the authentication for the secure storage (part 1), because it is very annoying when testing with SWTBot.
You can remove stored values in the secure store by using ISecurePreferences. Have a look here:
ISecurePreferences root = org.eclipse.equinox.security.storage.SecurePreferencesFactory.getDefault();
if (root == null)
    return null;
// Get the node for your application, e.g. this.getClass().getCanonicalName()
ISecurePreferences node = root.node("/your.class.path.or.something.else");
node = node.node("some name");        // get a custom node from the tree
String value = node.get("key", null); // load (second argument is the default)
node.put("key", "value", true);       // store; the boolean chooses encryption (no save operation yet)
node.remove("key");                   // remove
node.flush();                         // save