So I have been looking around for days and I still can't find a simple working method. This is what I am trying to do:
1 - Search and find web services registered in UDDI based on keywords
2 - Decide which service fits and use/call it
All this using Java (Eclipse).
I don't want to create my own UDDI registry, nor do I want to publish services; I just want to find existing services stored in the public UDDI (I believe there is one, right?). I thought that these two tasks (find a WS, call a WS) would be easy and that it would be possible to find sample code to use, but I can't find any.
I came across jUDDI while searching, but I'm not sure whether it works for my case or whether it's worth installing.
Any tutorials or suggestions? I found the following code, but I can't find the JAR file that provides its libraries:
/*
* To change this template, choose Tools | Templates
* and open the template in the editor.
*/
package uddi.createbulk;
import javax.xml.bind.JAXB;
import org.apache.juddi.v3.client.config.UDDIClient;
import org.apache.juddi.v3.client.config.UDDIClientContainer;
import org.apache.juddi.v3.client.transport.Transport;
import org.apache.juddi.v3_service.JUDDIApiPortType;
import org.uddi.api_v3.*;
import org.uddi.v3_service.UDDIInquiryPortType;
import org.uddi.v3_service.UDDIPublicationPortType;
import org.uddi.v3_service.UDDISecurityPortType;
/**
*
* @author Alex
*/
public class UddiFindService {
private static UDDISecurityPortType security = null;
private static JUDDIApiPortType juddiApi = null;
private static UDDIPublicationPortType publish = null;
private static UDDIInquiryPortType inquiry = null;
public UddiFindService() {
try {
// create a manager and read the config in the archive;
// you can use your config file name
UDDIClient clerkManager = new UDDIClient("META-INF/simple-publish-uddi.xml");
// register the clerkManager with the client side container
UDDIClientContainer.addClient(clerkManager);
// a ClerkManager can be a client to multiple UDDI nodes, so
// supply the nodeName (defined in your uddi.xml).
// The transport can be WS, inVM, RMI etc., as defined in the uddi.xml
Transport transport = clerkManager.getTransport("default");
// Now you create a reference to the UDDI API
security = transport.getUDDISecurityService();
juddiApi = transport.getJUDDIApiService();
publish = transport.getUDDIPublishService();
inquiry = transport.getUDDIInquiryService();
} catch (Exception e) {
e.printStackTrace();
}
}
public void find() {
try {
// Setting up the values to get an authentication token for the 'root' user ('root' user has admin privileges
// and can save other publishers).
GetAuthToken getAuthTokenRoot = new GetAuthToken();
getAuthTokenRoot.setUserID("root");
getAuthTokenRoot.setCred("root");
// Making API call that retrieves the authentication token for the 'root' user.
AuthToken rootAuthToken = security.getAuthToken(getAuthTokenRoot);
System.out.println("root AUTHTOKEN = " + rootAuthToken.getAuthInfo());
GetServiceDetail fs = new GetServiceDetail();
fs.setAuthInfo(rootAuthToken.getAuthInfo());
fs.getServiceKey().add("mykey");
ServiceDetail serviceDetail = inquiry.getServiceDetail(fs);
if (serviceDetail == null || serviceDetail.getBusinessService().isEmpty()) {
System.out.println("mykey is not registered");
} else {
JAXB.marshal(serviceDetail, System.out);
}
} catch (Exception e) {
e.printStackTrace();
}
}
public static void main(String args[]) {
UddiFindService sp = new UddiFindService();
sp.find();
}
}
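The sample above only looks up a service by a key it already knows. For the keyword-search part of the question, the same jUDDI v3 client exposes the UDDI find_service inquiry; the classes come from the jUDDI client libraries (the org.apache.juddi:juddi-client artifact and its dependencies), which I believe is the JAR the code above needs. Below is a minimal sketch, not a tested implementation: it assumes the inquiry proxy is set up as in the constructor above, that the auth token is passed in (many public registries accept inquiries without one), and that org.apache.juddi.v3.client.UDDIConstants is imported.
public void findByKeyword(String keyword, String authInfo) {
    try {
        // ask the registry for services whose name approximately matches the keyword
        FindService fs = new FindService();
        fs.setAuthInfo(authInfo);
        Name name = new Name();
        name.setValue("%" + keyword + "%");
        fs.getName().add(name);
        FindQualifiers fq = new FindQualifiers();
        fq.getFindQualifier().add(UDDIConstants.APPROXIMATE_MATCH); // enables % wildcards
        fs.setFindQualifiers(fq);
        ServiceList serviceList = inquiry.findService(fs);
        if (serviceList.getServiceInfos() != null) {
            for (ServiceInfo info : serviceList.getServiceInfos().getServiceInfo()) {
                // each hit gives a service name and a key you can feed into getServiceDetail
                System.out.println(info.getName().get(0).getValue() + " : " + info.getServiceKey());
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
From the ServiceDetail returned for a chosen key you can then read the bindingTemplates, whose access points give you the endpoint (and usually the WSDL) needed to actually call the service.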
Using CuratorFramework, could someone explain how I can:
Create a new path
Set data for this path
Get this path
Using username foo and password bar? Those that don't know this user/pass would not be able to do anything.
I don't care about SSL or passwords being sent via plaintext for the purpose of this question.
ACLs in Apache Curator are for access control. ZooKeeper does not provide an authentication mechanism in the sense of refusing connections or ZNode creation to clients that lack the correct password; what it can do is prevent unauthorized clients from accessing particular ZNodes. To do that, you have to set up the CuratorFramework instance as described below. Remember, this guarantees that a ZNode created with a given ACL can later be accessed only by the same client or by a client presenting the same authentication information.
First, build the CuratorFramework instance as follows. Here, connectString is a comma-separated list of ip:port pairs for the ZooKeeper servers in your ensemble.
CuratorFrameworkFactory.Builder builder = CuratorFrameworkFactory.builder()
.connectString(connectString)
.retryPolicy(new ExponentialBackoffRetry(retryInitialWaitMs, maxRetryCount))
.connectionTimeoutMs(connectionTimeoutMs)
.sessionTimeoutMs(sessionTimeoutMs);
/*
* If authorization information is available, it will be added to the client. NOTE: this auth info is
* for access control, so no authentication happens when the client is started. The info is only
* required when a client accesses an already created ZNode. For a client on another node to make
* use of a ZNode created by this one, it must provide the same auth info.
*/
if (zkUsername != null && zkPassword != null) {
String authenticationString = zkUsername + ":" + zkPassword;
builder.authorization("digest", authenticationString.getBytes())
.aclProvider(new ACLProvider() {
@Override
public List<ACL> getDefaultAcl() {
return ZooDefs.Ids.CREATOR_ALL_ACL;
}
@Override
public List<ACL> getAclForPath(String path) {
return ZooDefs.Ids.CREATOR_ALL_ACL;
}
});
}
CuratorFramework client = builder.build();
Now you have to start it.
client.start();
Creating a path.
client.create().withMode(CreateMode.PERSISTENT).forPath("/your/ZNode/path");
Here, CreateMode specifies what type of node you want to create. Available types are PERSISTENT, EPHEMERAL, EPHEMERAL_SEQUENTIAL, PERSISTENT_SEQUENTIAL, and CONTAINER. See the Java Docs.
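For example, an ephemeral node (removed automatically when the client's session ends) could be created like this; the path is just an illustrative placeholder:
client.create().withMode(CreateMode.EPHEMERAL).forPath("/your/ZNode/ephemeralExample");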
If you are not sure whether the parent path up to /your/ZNode already exists, you can have it created as well.
client.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT).forPath("/your/ZNode/path");
Set Data
You can set data either when you are creating the ZNode or later. If you are setting data at creation time, pass it as a byte array in the second parameter of the forPath() method.
client.create().withMode(CreateMode.PERSISTENT).forPath("/your/ZNode/path","your data as String".getBytes());
If you are doing it later (the data should be given as a byte array):
client.setData().forPath("/your/ZNode/path",data);
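If you are not sure whether the ZNode already exists, a small sketch that checks first (same placeholder path, data as a byte array) might look like this:
if (client.checkExists().forPath("/your/ZNode/path") == null) {
    // node is missing, create it (and any missing parents) with the data
    client.create().creatingParentsIfNeeded().forPath("/your/ZNode/path", data);
} else {
    // node is already there, just overwrite its data
    client.setData().forPath("/your/ZNode/path", data);
}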
Finally
I don't understand what you mean by "get this path". Apache Curator is a Java client (more than that, with Curator Recipes) which uses Apache ZooKeeper in the background and hides ZooKeeper's edge cases and complexities. ZooKeeper uses the concept of ZNodes to store data; you can think of them as a Linux directory structure. All ZNode paths must start with / (the root), and you can nest directory-like ZNode paths as you like, e.g. /someName/another/test/sample.
ZNodes are organized in a tree structure, and every ZNode can store up to 1 MB of data. Therefore, if you want to retrieve data stored in a ZNode, you need to know the path to that ZNode (just as you need to know the table and column of a database in order to retrieve data).
If you want to retrieve the data at a given path:
client.getData().forPath("/path/to/ZNode");
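getData() returns a byte array; assuming the value was written as a UTF-8 string (as in the getBytes() examples above), decoding it back could look like this:
byte[] bytes = client.getData().forPath("/path/to/ZNode");
String value = new String(bytes, java.nio.charset.StandardCharsets.UTF_8);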
That's all you have to know when you want to work with Curator.
One more thing
ACLs in Apache Curator are for access control. That is, if you set the ACLProvider as follows,
new ACLProvider() {
@Override
public List<ACL> getDefaultAcl () {
return ZooDefs.Ids.CREATOR_ALL_ACL;
}
@Override
public List<ACL> getAclForPath (String path){
return ZooDefs.Ids.CREATOR_ALL_ACL;
}
}
only a client with credentials identical to the creator's will be given access to the corresponding ZNode later on. There are other ACL modes available, like OPEN_ACL_UNSAFE, which does no access control at all if you return it from the ACLProvider. Authorization details are set as follows (see the client-building example):
authorization("digest", authorizationString.getBytes())
They will be used later to control access to a given ZNode.
In short, if you want to prevent others from interfering with your ZNodes, set the ACLProvider to return CREATOR_ALL_ACL and set the authorization scheme to digest as shown above. Only CuratorFramework instances using the same authorization string ("username:password") will be able to access those ZNodes. It will not, however, prevent others from creating ZNodes in paths that do not interfere with yours.
Hope you found what you want :-)
It wasn't part of the original question, but I thought I would share a solution I came up with in which the credentials used determine the access level.
I didn't have much luck finding any examples and kept ending up on this page so maybe it will help someone else. I dug through the source code of Curator Framework and luckily the org.apache.curator.framework.recipes.leader.TestLeaderAcls class was there to point me in the right direction.
So in this example:
One generic client used across multiple apps which only needs to read data from ZK.
Another admin client has the ability to read, delete, and update nodes in ZK.
Read-only or admin access is determined by the credentials used.
FULL-CONTROL ADMIN CLIENT
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;
import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.api.ACLProvider;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;
import org.apache.zookeeper.server.auth.DigestAuthenticationProvider;
public class AdminClient {
protected static CuratorFramework client = null;
public void initializeClient() throws NoSuchAlgorithmException {
String zkConnectString = "127.0.0.1:2181";
RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
final List<ACL> acls = new ArrayList<>();
//full-control ACL
String zkUsername = "adminuser";
String zkPassword = "adminpass";
String fullControlAuth = zkUsername + ":" + zkPassword;
String fullControlDigest = DigestAuthenticationProvider.generateDigest(fullControlAuth);
ACL fullControlAcl = new ACL(ZooDefs.Perms.ALL, new Id("digest", fullControlDigest));
acls.add(fullControlAcl);
//read-only ACL
String zkReadOnlyUsername = "readuser";
String zkReadOnlyPassword = "readpass";
String readOnlyAuth = zkReadOnlyUsername + ":" + zkReadOnlyPassword;
String readOnlyDigest = DigestAuthenticationProvider.generateDigest(readOnlyAuth);
ACL readOnlyAcl = new ACL(ZooDefs.Perms.READ, new Id("digest", readOnlyDigest));
acls.add(readOnlyAcl);
//create the client with full-control access
client = CuratorFrameworkFactory.builder()
.connectString(zkConnectString)
.retryPolicy(retryPolicy)
.authorization("digest", fullControlAuth.getBytes())
.aclProvider(new ACLProvider() {
@Override
public List<ACL> getDefaultAcl() {
return acls;
}
@Override
public List<ACL> getAclForPath(String string) {
return acls;
}
})
.build();
client.start();
//Now create, read, delete ZK nodes
}
}
READ-ONLY CLIENT
import java.security.NoSuchAlgorithmException;
import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
public class ReadOnlyClient {
protected static CuratorFramework client = null;
public void initializeClient() throws NoSuchAlgorithmException {
String zkConnectString = "127.0.0.1:2181";
RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
String zkReadOnlyUsername = "readuser";
String zkReadOnlyPassword = "readpass";
String readOnlyAuth = zkReadOnlyUsername + ":" + zkReadOnlyPassword;
client = CuratorFrameworkFactory.builder()
.connectString(zkConnectString)
.retryPolicy(retryPolicy)
.authorization("digest", readOnlyAuth.getBytes())
.build();
client.start();
//Now read ZK nodes
}
}
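As a quick (hypothetical) way to verify the ACLs, the read-only client should be able to read a node that the admin client created, but should get a NoAuthException when it tries to delete it; the path below is just an example:
try {
    byte[] data = client.getData().forPath("/protected/node");   // allowed: READ permission
    System.out.println(new String(data));
    client.delete().forPath("/protected/node");                  // should fail for the read-only client
} catch (org.apache.zookeeper.KeeperException.NoAuthException e) {
    System.out.println("No permission to delete: " + e.getMessage());
} catch (Exception e) {
    e.printStackTrace();
}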
I am using the Java client to connect to the SoftLayer API. I am able to create a new VM with an OS using the code below.
guest.setHostname("vstest2");
guest.setDomain("softlayer.com");
guest.setStartCpus(2L);
guest.setHourlyBillingFlag(true);
guest.setLocalDiskFlag(true);
guest.setOperatingSystemReferenceCode("UBUNTU_14_64");
But I am unable to create a new VM from an already existing public image.
guest.setHostname("vstest2");
guest.setDomain("softlayer.com");
guest.setStartCpus(2L);
guest.setHourlyBillingFlag(true);
guest.setLocalDiskFlag(true);
Group blockDevice = new Group();
blockDevice.setGlobalIdentifier("ce3f5ea3-893a-4992-ad14-5bcd99d9b32a");
guest.setBlockDeviceTemplateGroup(blockDevice);
Please help me create a new VM using a public image. The error I got is:
Caused by: com.softlayer.api.ApiException$Internal: Invalid value provided for 'blockDevices'. Block devices may not be provided when using an image template.(code: SoftLayer_Exception_InvalidValue, status: 500)
I simply want to create a new VM based on the public image template, but I am unable to find a way to do it.
I was able to order a VSI using the global identifier ce3f5ea3-893a-4992-ad14-5bcd99d9b32a.
Here is the Java code that I used:
package VirtualGuest;
import com.softlayer.api.ApiClient;
import com.softlayer.api.RestApiClient;
import com.softlayer.api.service.Location;
import com.softlayer.api.service.virtual.Guest;
import com.softlayer.api.service.virtual.guest.block.device.template.Group;
/**
* Created by Ruber Cuellar on 5/3/2016.
*/
public class CreateObject {
public CreateObject(){
// Declare username and api key
String username = "set me";
String apiKey = "set me";
// Get Api Client and service
ApiClient client = new RestApiClient().withCredentials(username, apiKey);
Guest.Service guestService = Guest.service(client);
Guest guest = new Guest();
guest.setHostname("rcvtest-3");
guest.setDomain("softlayer.com");
guest.setStartCpus(2L);
guest.setHourlyBillingFlag(true);
guest.setLocalDiskFlag(true);
guest.setMaxMemory(1L);
// Setting datacenter
Location newLocation = new Location();
newLocation.setName("sjc03");
guest.setDatacenter(newLocation);
// Setting image template
Group blockDevice = new Group();
blockDevice.setGlobalIdentifier("ce3f5ea3-893a-4992-ad14-5bcd99d9b32a");
guest.setBlockDeviceTemplateGroup(blockDevice);
try{
Guest result = guestService.createObject(guest);
System.out.println(result.getId());
} catch (Exception e)
{
System.out.println("Error: " + e);
}
}
public static void main(String [] args)
{
new CreateObject();
}
}
Try double-checking, or could you provide the full code that you are trying, please?
I am trying to use crawler4j as it was shown to be used in this example, and no matter how I define the number of crawlers or change the root folder, I keep getting this error from the code, stating:
"Needed parameters:
rootFolder (it will contain intermediate crawl data)
numberOfCralwers (number of concurrent threads)"
The main code is below:
public class Controller {
public static void main(String[] args) throws Exception {
if (args.length != 2) {
System.out.println("Needed parameters: ");
System.out.println("\t rootFolder (it will contain intermediate crawl data)");
System.out.println("\t numberOfCralwers (number of concurrent threads)");
return;
}
/*
* crawlStorageFolder is a folder where intermediate crawl data is
* stored.
*/
String crawlStorageFolder = args[0];
/*
* numberOfCrawlers shows the number of concurrent threads that should
* be initiated for crawling.
*/
int numberOfCrawlers = Integer.parseInt(args[1]);
There was a similar question asking exactly what I want to know here, but I didn't quite understand the solution, such as where I was supposed to type java BasicCrawler Controller "arg1" "arg2". I am running this code in Eclipse and I am still fairly new to the world of programming. I would really appreciate it if someone helped me understand this problem.
If you aren't passing any arguments when you run the file, you will get that error.
Either comment out the following in your code or delete it:
if (args.length != 2) {
System.out.println("Needed parameters: ");
System.out.println("\t rootFolder (it will contain intermediate crawl data)");
System.out.println("\t numberOfCralwers (number of concurrent threads)");
return;
}
After that, set your root folder to the one where you want to store the metadata.
To use crawler4j in your project you must create two classes. One of them is CrawlController (which starts the crawler according to the parameters) and the other is Crawler.
Just run the main method in the Controller class and see the crawled pages.
Here is Controller.java file:
import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;
public class Controller {
public static void main(String[] args) throws Exception {
RobotstxtConfig robotstxtConfig2 = new RobotstxtConfig();
System.out.println(robotstxtConfig2.getCacheSize());
System.out.println(robotstxtConfig2.getUserAgentName());
String crawlStorageFolder = "/crawler/testdata";
int numberOfCrawlers = 4;
CrawlConfig config = new CrawlConfig();
config.setCrawlStorageFolder(crawlStorageFolder);
PageFetcher pageFetcher = new PageFetcher(config);
RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
System.out.println(robotstxtConfig.getCacheSize());
System.out.println(robotstxtConfig.getUserAgentName());
RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
CrawlController controller = new CrawlController(config,
pageFetcher, robotstxtServer);
controller.addSeed("http://cyesilkaya.wordpress.com/");
controller.start(Crawler.class, numberOfCrawlers);
}
}
Here is Crawler.java file:
import java.io.IOException;
import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.url.WebURL;
public class Crawler extends WebCrawler {
@Override
public boolean shouldVisit(WebURL url) {
// you can write your own filter to decide whether to crawl the incoming URL or not
return true;
}
@Override
public void visit(Page page) {
// print the URL of every visited page
String url = page.getWebURL().getURL();
System.out.println("URL: " + url);
}
}
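The shouldVisit comment above mentions writing your own filter. A minimal sketch of such a filter (the extension list and the seed domain are only examples) that skips binary resources and stays on the seed site could replace the shouldVisit method above:
private final static java.util.regex.Pattern FILTERS =
        java.util.regex.Pattern.compile(".*(\\.(css|js|gif|jpe?g|png|pdf|zip))$");

@Override
public boolean shouldVisit(WebURL url) {
    String href = url.getURL().toLowerCase();
    // skip style sheets, images and other binary resources,
    // and only follow links that stay on the seed domain
    return !FILTERS.matcher(href).matches()
            && href.startsWith("http://cyesilkaya.wordpress.com/");
}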
In Eclipse:
-> Click on Run
-> Click on Run Configurations...
In the pop-up window:
First, in the left column, make sure your application is selected under Java Application; otherwise create a new one (click New).
Then, in the central window, go to the "Arguments" tab.
Write your arguments under "Program arguments". Once you have written the first argument, press Enter before the second one, and so on (one argument per line, because args is an array).
Then click Apply.
And click Run.
I would like to be able to make a request to Twitter with Pentaho's REST Client step; however, this software has no concept of OAuth. I found this neat Java class (Implement OAuth in Java) that I would like to use in Pentaho's User Defined Java Class transformation step, but I am so new to Pentaho that this task will be very difficult. I am hoping someone can help me out with this.
I found this great Twitter Java library called Twitter4J, imported the core JARs into the Pentaho directory pentaho/design-tools/data-integration/libext, and wrote the custom User Defined Java Class below.
// NO COLLECTION TYPE SAFETY ALLOWED, MUST CAST ALL OBJECTS
import twitter4j.*;
import twitter4j.auth.*;
import twitter4j.conf.*;
//import other libs here
//put your vars here
// Variables (step-scoped fields)
private Twitter twitter = null;
private Paging paging = null;
private String oauth_user_key = null;
private String oauth_user_secret = null;
private String consumer_key = null;
private String consumer_secret = null;
private Long user_id = null;
public boolean processRow(StepMetaInterface smi, StepDataInterface sdi) throws KettleException
{
Object[] r = getRow();
if (r==null)
{
setOutputDone();
return false;
}
if (first) {
first=false;
paging = new Paging();
paging.setCount(100);
}
oauth_user_key = get(Fields.In, "oauth_user_key").getString(r);
oauth_user_secret = get(Fields.In, "oauth_user_secret").getString(r);
consumer_key = get(Fields.In, "consumer_key").getString(r);
consumer_secret = get(Fields.In, "consumer_secret").getString(r);
//weird Long/String conversion here (Pentaho compiles Java in its own way)
user_id = get(Fields.In, "source_user_id").getInteger(r);
Long user = user_id;
ConfigurationBuilder cb = new ConfigurationBuilder();
cb.setDebugEnabled(true)
.setIncludeEntitiesEnabled(true)
.setOAuthConsumerKey(consumer_key)
.setOAuthConsumerSecret(consumer_secret)
.setOAuthAccessToken(oauth_user_key)
.setOAuthAccessTokenSecret(oauth_user_secret);
twitter = new TwitterFactory(cb.build()).getInstance();
try {
//be creative with twitter4j here and output rows with results (may require a loop)
} catch (TwitterException e){
logDebug(e.getMessage());
return true;
}
logBasic("twitter collect done" );
return true;
}
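As an illustration of the "be creative with twitter4j here" comment, below is a hedged sketch (not tested) that could be dropped into that try block: it pulls the user's timeline with the Paging object set up above and writes one output row per tweet. It assumes the step defines an output String field named tweet_text; that field name is an invention for this example.
// inside the try block above:
ResponseList statuses = twitter.getUserTimeline(user.longValue(), paging);
for (int i = 0; i < statuses.size(); i++) {
    Status status = (Status) statuses.get(i);
    // clone the incoming row and append the tweet text to the output field
    Object[] outputRow = createOutputRow(r, data.outputRowMeta.size());
    get(Fields.Out, "tweet_text").setValue(outputRow, status.getText());
    putRow(data.outputRowMeta, outputRow);
}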
I have a task: work with a forum through the API of the IP.Board forum engine.
So I wrote the following code:
package ru.test;
import java.net.MalformedURLException;
import java.net.URL;
import org.apache.xmlrpc.XmlRpcException;
import org.apache.xmlrpc.client.XmlRpcClient;
import org.apache.xmlrpc.client.XmlRpcClientConfigImpl;
public class mainClass {
/**
* @param args
*/
public static void main(String[] args) {
XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
try {
config.setServerURL(new URL("http://hbf.by/interface/board/index.php"));
XmlRpcClient client = new XmlRpcClient();
client.setConfig(config);
Object[] params = new Object[]{"74600b7376c4b1db69eaf8f155f2d157", "ipb","','"};
Object result = client.execute("fetchOnlineUsers", params);
} catch (MalformedURLException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (XmlRpcException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
But I get an exception:
org.apache.xmlrpc.XmlRpcException: IP.Board could not locate an API module called ''
at org.apache.xmlrpc.client.XmlRpcStreamTransport.readResponse(XmlRpcStreamTransport.java:197)
at org.apache.xmlrpc.client.XmlRpcStreamTransport.sendRequest(XmlRpcStreamTransport.java:156)
at org.apache.xmlrpc.client.XmlRpcHttpTransport.sendRequest(XmlRpcHttpTransport.java:143)
at org.apache.xmlrpc.client.XmlRpcSunHttpTransport.sendRequest(XmlRpcSunHttpTransport.java:69)
at org.apache.xmlrpc.client.XmlRpcClientWorker.execute(XmlRpcClientWorker.java:56)
at org.apache.xmlrpc.client.XmlRpcClient.execute(XmlRpcClient.java:167)
at org.apache.xmlrpc.client.XmlRpcClient.execute(XmlRpcClient.java:137)
at org.apache.xmlrpc.client.XmlRpcClient.execute(XmlRpcClient.java:126)
at ru.test.mainClass.main(mainClass.java:23)
What's wrong?
The documentation (http://community.invisionpower.com/resources/documentation/index.html/_/developer-resources/miscellaneous-articles/xml-rpc-api-r246) says:
You should submit XML-RPC calls to the interface/board/index.php file.
You should send all parameters as a struct.
Every request must submit two parameters: api_key - this should be
the key set up earlier. api_module - this should be "ipb".
Theoretically, you can create other modules, but "ipb" is all that
ships with IP.Board.
Where is my mistake?
Also, how can I get the methods.php file?
I open the URL http://hbf.by/interface/board/modules/ipb/methods.php
but I get a blank page.
The documentation also says:
Open the interface/board/modules/ipb/methods.php file to see which
parameters each method expects to receive and will send back in
response
Maybe the server needs some configuration, but I can't find anything about it on the internet.
Your code does seem to match the documentation.
But XML-RPC APIs often specify the module in the call, like this:
Object result = client.execute("ipb.fetchOnlineUsers", params);
You could try that.
I found where the trouble is.
Don't use Object[] params = new Object[]{"74600b5f2d157", "ipb","','"};
Instead, use a HashMap and then:
Object result = client.execute("ipb.fetchOnlineUsers", new Object[] {hMap});
It works correctly.
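For completeness, a minimal sketch of building that HashMap (the parameter names follow the documentation quoted above; method-specific parameters are not shown, so check methods.php for the exact names):
java.util.Map hMap = new java.util.HashMap();
hMap.put("api_key", "74600b7376c4b1db69eaf8f155f2d157");
hMap.put("api_module", "ipb");
// add any method-specific parameters to the same struct
Object result = client.execute("ipb.fetchOnlineUsers", new Object[] { hMap });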