I'm refactoring (not by choice) a Jenkins plugin (that I didn't write) so it can run distributed. The plugin is simple: it basically takes the object that results from the job, creates a changelog, compresses both into a zip file, and sends it through an HTTP POST request to an app.
Following http://ccoetech.ebay.com/tutorial-dev-jenkins-plugin-distributed-jenkins I moved the code that's going to run on the node to a class like this:
private static class UploadLauncher implements Callable<Void, Exception> {
...
@Override
public Void call() throws Exception {
// Here's the code to run in the node.
...
}
}
The problem I'm having is that somewhere in the code the plugin tries to configure the proxy like this:
ProxyConfiguration proxy;
if (Jenkins.getInstance() != null && Jenkins.getInstance().proxy != null) {
proxy = Jenkins.getInstance().proxy;
} else {
proxy = new ProxyConfiguration("", 0, "", "");
}
//Continue proxy configuration code
Because I'm doing this on my local machine and I don't have a proxy configured (I actually don't know whether the production server uses one), the code always goes through the else branch, and it fails, throwing an exception on the instruction proxy = new ProxyConfiguration("", 0, "", "");
java.lang.IllegalStateException: cannot initialize confidential key store until Jenkins has started
at jenkins.security.ConfidentialStore.get(ConfidentialStore.java:68)
at jenkins.security.ConfidentialKey.load(ConfidentialKey.java:47)
at jenkins.security.CryptoConfidentialKey.getKey(CryptoConfidentialKey.java:32)
at jenkins.security.CryptoConfidentialKey.decrypt(CryptoConfidentialKey.java:67)
at hudson.util.Secret.decrypt(Secret.java:137)
at hudson.util.Secret.fromString(Secret.java:186)
at hudson.ProxyConfiguration.<init>(ProxyConfiguration.java:117)
at hudson.ProxyConfiguration.<init>(ProxyConfiguration.java:109)
at hudson.ProxyConfiguration.<init>(ProxyConfiguration.java:105)
at example.maduploader.MADRecorder$UploadLauncher.getHttpClient(MADRecorder.java:363)
This happens only when the plugin runs on the slave node; if the job executes on the master node, the plugin works fine.
Also, I'm a Python programmer; I've only done things in Java back in college, so maybe I'm approaching this the wrong way.
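For what it's worth, the direction I'm considering (a rough sketch based on my own assumptions, not the plugin's actual code) is to resolve the proxy settings on the master before dispatching, and hand the node only plain strings and ints, so the code running remotely never touches Jenkins.getInstance():
import hudson.ProxyConfiguration;
import hudson.remoting.Callable;
import jenkins.model.Jenkins;

// Nested inside the plugin class, as before
private static class UploadLauncher implements Callable<Void, Exception> {
    private final String proxyHost; // empty string means "no proxy"
    private final int proxyPort;

    UploadLauncher(String proxyHost, int proxyPort) {
        this.proxyHost = proxyHost;
        this.proxyPort = proxyPort;
    }

    @Override
    public Void call() throws Exception {
        // Configure the HTTP client from the plain fields here;
        // never call Jenkins.getInstance() on the node.
        return null;
    }
}

// On the master, before dispatching. The 'channel' parameter is my assumption for
// however the plugin obtains the node's hudson.remoting.Channel (e.g. launcher.getChannel()).
static void dispatch(hudson.remoting.Channel channel) throws Exception {
    ProxyConfiguration proxy = Jenkins.getInstance() == null ? null : Jenkins.getInstance().proxy;
    String host = proxy == null ? "" : proxy.name;
    int port = proxy == null ? 0 : proxy.port;
    channel.call(new UploadLauncher(host, port));
}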
Related
I'm looking for some help creating a Flex web application using BlazeDS and a Java server, with dynamic BlazeDS endpoint configuration.
First, I will try to explain my current situation.
I have a Flex 3.2 application that provides the GUI of the application. From the ActionScript I call Java methods using BlazeDS. To access BlazeDS I use a Config class that provides the endpoint, as shown below (this is its constructor):
public function Config(): void {
if (_serviceUrl == null) {
try {
var browser: IBrowserManager = BrowserManager.getInstance();
browser.init();
var url: String = browser.url;
var host: String = mx.utils.URLUtil.getServerName(url);
var port: uint = mx.utils.URLUtil.getPort(url);
var parts: Array = url.split('/');
if (parts[2] == '') {
url = DEFAULT_URL;
Alert.show("Unable to determine server location, using default URL: " + DEFAULT_URL, "Connection error");
}
else {
url = parts[0] + '//' + parts[2] + '/' + parts[3] + '/messagebroker/amf';
}
_serviceUrl = url;
} catch (e: Error) {
Alert.show("Exception while trying to determine server location, using default URL: " + DEFAULT_URL, "Connection exception");
_serviceUrl = DEFAULT_URL;
}
}
}
The idea of the class is to determine the endpoint from the request URL. I use a Delegate class to call the remote methods using BlazeDS like the following:
package {
import com.adobe.cairngorm.business.ServiceLocator;
import mx.rpc.IResponder;
import mx.rpc.remoting.RemoteObject;
public class AbstractRemoteDelegate
{
public function AbstractRemoteDelegate(responder:IResponder,serviceName:String)
{
_responder=responder;
_locator=ServiceLocator.getInstance();
_service=_locator.getRemoteObject(serviceName);
_service.showBusyCursor=true;
_service.endpoint = Config.instance.serviceUrl;
}
private var _responder:IResponder;
private var _locator:ServiceLocator;
private var _service:RemoteObject;
protected function send(operationName:String,... args:Array) : void {
_service.getOperation(operationName).send.apply(_service.getOperation(operationName),args).addResponder(_responder);
}
}
}
This approach actually works fine. However, I came across a situation where I can't use a dynamically determined URL. In such a situation I need a hard-coded URL in the Config.as file, and this is the problem: when deploying the application to another server, I always need to rebuild the application with a new URL configured in the ActionScript class Config.
Therefore I am searching for a way to define a static configuration for the Flex application to connect to a BlazeDS server, and a way to change that configuration without rebuilding the application, so I can give the customer his own way to reconfigure and move the Flex application.
I thought about using a configuration file, but Flex runs on the client side and there is no configuration file!
I thought about using database configuration, but I don't have any database on the client side!
To sum up, I am looking for a way to get the BlazeDS URL from a configuration so that it can be changed without rebuilding the whole app.
Thanks for any useful suggestions.
EDIT: Revised the question to be more up to date. I improved the way the URL is determined dynamically from the request URL, so it now works even behind a proxy server. However, my curiosity about configuring Flex without rebuilding persists.
Here is an old example of mine, Blaze DS Service, which does basically the same thing you did. It's just the string that needs to be created correctly. If the endpoint address is wrong, catch the error accordingly.
My project may currently not build because of Flexmojos ... I'm not able to test that yet.
Since I did not read your question properly, I misunderstood you: you can put a configuration file next to the SWF and load it via URLLoader, or pass the value via FlashVars. That should give you the freedom to set the endpoint dynamically.
This might be a very trivial question, but I'm having trouble finding an answer:
Using the Google Plugin for Eclipse, I would like to develop a plain old Java application (not a web-app), that uses AppEngine for cloud storage.
For this, I could, of course, simply create two projects, one containing the AppEngine server and one containing the Java application.
But I'm wondering whether it is possible to set up a single project in Eclipse that contains both the server and the client code (like for a GWT project). To execute it for local debugging, I would then want Eclipse to launch Tomcat to make my servlets available and then launch my Main.java from the client directory of the project as if the project was just a simple Java application. Is this what the "Launch and deploy from this directory" checkbox is for in the "Google" -> "Web Application" settings? If so, how do I use it?
I found one way to do it, but it's a bit cheesy.
First, add the following helper-class to the project:
import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;
import java.util.logging.Logger;
import com.google.appengine.tools.development.DevAppServerMain;
public class DevServer {
public static void launch(final String[] args) {
Logger logger = Logger.getLogger("");
logger.info("Launching AppEngine server...");
Thread server = new Thread() {
@Override
public void run() {
try {
DevAppServerMain.main(args); // run DevAppServer
} catch (Exception e) { e.printStackTrace(); }
}
};
server.setDaemon(true); // shut down server when rest of app completes
server.start(); // run server in separate thread
boolean running = false;
while (!running) { // maybe add a timeout in case the server fails to load
try {
// a URLConnection can't be retried after a failed connect, so open a fresh one per attempt
URLConnection cxn = new URL("http://localhost:8888").openConnection();
cxn.connect(); // try to connect to server
running = true;
} catch (IOException e) {
// server not up yet; maybe limit the rate with a Thread.sleep(...) here
}
}
logger.info("Server running.");
}
}
Then, add the following line to the entry class:
public static void main(String[] args) {
DevServer.launch(args); // launch AppEngine Dev Server (blocks until ready)
// Do everything else
}
Finally, create the appropriate Run Configuration:
Simply click "Run As" -> "Web Application" to create a default Run Configuration.
In the created Run Configuration, under the "Main"-tab select your own entry class as the "Main class" instead of the default "com.google.appengine.tools.development.DevAppServerMain".
Now, if you launch this Run Configuration, it will first bring up the AppEngine server and then continue with the rest of the main(...) method in the entry class. Since the server thread is marked as a daemon thread, once the other code in main(...) completes, the application quits normally, shutting down the server as well.
Not sure if this is the most elegant solution, but it works. If someone else has a way to achieve this without the DevServer helper-class, please do post it!
Also, there might be a more elegant way to check whether the AppEngine server is running, other than pinging it with a URL connection as I did above.
Note: The AppEngine Dev Server registers its own URLStreamHandlerFactory to automatically map Http(s)URLConnections onto AppEngine's URL-fetch infrastructure. This means you get errors complaining about missing url-fetch capabilities if you then use HttpURLConnections in your client code. Luckily, this can be fixed in two ways, as described here: Getting a reference to Java's default http(s) URLStreamHandler.
If you definitely want to use App Engine, then you will end up creating two projects: one on App Engine and another a standalone application (no servlets). In this case you can take a look at the App Engine Remote API.
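For illustration, a minimal sketch of a standalone client talking to the dev server through the Remote API (class names are from the com.google.appengine.tools.remoteapi package; the host, port, and credentials are placeholders you would adjust):
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.tools.remoteapi.RemoteApiInstaller;
import com.google.appengine.tools.remoteapi.RemoteApiOptions;

public class RemoteApiClient {
    public static void main(String[] args) throws Exception {
        // Placeholder host/credentials; point these at your dev server or deployed app
        RemoteApiOptions options = new RemoteApiOptions()
                .server("localhost", 8888)
                .credentials("user@example.com", "password");
        RemoteApiInstaller installer = new RemoteApiInstaller();
        installer.install(options);
        try {
            // Once installed, App Engine APIs work from this plain JVM
            DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
            ds.put(new Entity("Greeting"));
        } finally {
            installer.uninstall();
        }
    }
}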
I'm not quite sure whether this is more of an Openbravo issue or more of a Quartz issue, but we have some manual processes that run on schedules via Openbravo ProcessRequest objects (OB v2.50MP24), and it seems that the processes are running twice, at exactly the same time. Openbravo extends the Quartz platform for its scheduling. I've tried to resolve the issue on my own by ensuring that my process classes extend this class:
import java.util.List;
import org.openbravo.dal.service.OBDal;
import org.openbravo.model.ad.ui.ProcessRequest;
import org.openbravo.scheduling.ProcessBundle;
import org.openbravo.service.db.DalBaseProcess;
public abstract class RBDDalProcess extends DalBaseProcess {
@Override
protected void doExecute(ProcessBundle bundle) throws Exception {
org.quartz.Scheduler sched = org.openbravo.scheduling.OBScheduler
.getInstance().getScheduler();
int runCount = 0;
synchronized (sched) {
List<org.quartz.JobExecutionContext> currentlyExecutingJobs = (List<org.quartz.JobExecutionContext>) sched
.getCurrentlyExecutingJobs();
for (org.quartz.JobExecutionContext jec : currentlyExecutingJobs) {
ProcessRequest processRequest = OBDal.getInstance().get(
ProcessRequest.class, jec.getJobDetail().getName());
if (processRequest == null)
continue;
String processClass = processRequest.getProcess()
.getJavaClassName();
if (bundle.getProcessClass().getCanonicalName()
.equals(processClass)) {
runCount++;
}
}
}
if (runCount > 1) {
System.out.println("Process "
+ bundle.getProcessClass().getSimpleName()
+ " is already running. Cancelling.");
return;
}
doRun(bundle);
}
protected abstract void doRun(ProcessBundle bundle);
}
This worked fine when I tested it by requesting the process to run immediately, twice at the same time: one of the two cancelled. However, it's not working for the scheduled processes. I have System.out.println calls set up to log when the processes start, and the logs show each line of the output twice, one line right after the other.
I have a sneaking suspicion that the processes are running in two completely different threads that don't know about each other's processes; however, I'm not sure how to verify my suspicion or, if I am correct, what to do about it. I've already verified that there is only one instance of each ProcessRequest object stored in the database.
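To verify the suspicion, here is the kind of diagnostic I'm thinking of adding at the top of doExecute (just a sketch; it logs which thread, Scheduler instance, and JVM fire the process, so a double deployment would show up as two different JVM names):
// Sketch: log the thread, Scheduler identity, and JVM each time the process fires
org.quartz.Scheduler sched = org.openbravo.scheduling.OBScheduler.getInstance().getScheduler();
System.out.println("Process " + bundle.getProcessClass().getSimpleName()
        + " fired on thread " + Thread.currentThread().getName()
        + ", scheduler instance " + System.identityHashCode(sched)
        + ", JVM " + java.lang.management.ManagementFactory.getRuntimeMXBean().getName());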
Has anyone else experienced this, and does anyone know why the processes might be running twice, or what I can do to prevent them from running simultaneously?
The most common reasons for a double Job execution are the following:
EDITED:
Your application is deployed in a clustered environment and you have not configured Quartz to run in a cluster environment.
Your application is deployed more than once. There are many cases where the application is deployed twice, especially on a Tomcat server. As a consequence, the QuartzInitializerListener is invoked twice and the Jobs are executed twice. If you use Tomcat and you define contexts explicitly in server.xml, you should turn off automatic application deployment or specify deployIgnore: having both autoDeploy set to true and an explicit context element in server.xml causes the application to be deployed twice. Set autoDeploy to false or remove the context element from server.xml.
Your application has been redeployed without unscheduling the current processes.
I hope this helps you.
Quartz uses a thread pool for job execution. So, as you suspect, the RBDDalProcess will probably have separate instances in separate threads, and the counter check will fail.
One thing you can do is list the jobs registered in the Scheduler (you can get the Scheduler using the OB API as: OBScheduler.getScheduler()):
// Quartz 2.x API; this snippet needs:
// import org.quartz.JobKey;
// import static org.quartz.impl.matchers.GroupMatcher.groupEquals;

// enumerate each job group
for (String group : sched.getJobGroupNames()) {
    // enumerate each job in the group
    for (JobKey jobKey : sched.getJobKeys(groupEquals(group))) {
        System.out.println("Found job identified by: " + jobKey);
    }
}
If you see the same job added twice, check out org.quartz.spi.JobFactory and the org.quartz.Scheduler.setJobFactory method for controlling job instantiation.
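As a rough illustration (assuming Quartz 2.x signatures; Openbravo may bundle a different Quartz version), a custom JobFactory that logs every instantiation could look like this:
import org.quartz.Job;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.simpl.SimpleJobFactory;
import org.quartz.spi.TriggerFiredBundle;

// Logs every job instantiation; delegates the actual creation to SimpleJobFactory
public class LoggingJobFactory extends SimpleJobFactory {
    @Override
    public Job newJob(TriggerFiredBundle bundle, Scheduler scheduler) throws SchedulerException {
        System.out.println("Instantiating job: " + bundle.getJobDetail().getKey());
        return super.newJob(bundle, scheduler);
    }
}

// Registered once on the scheduler:
// sched.setJobFactory(new LoggingJobFactory());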
Also make sure you have only one entry for this process in the 'Report and Process' table in Openbravo.
I have used DalBaseProcess in Openbravo 3.0 and I cannot confirm the behavior you're describing. With this in mind, it would probably be a good idea to check the reported bugs for Openbravo v2.50MP24 and Quartz, or to post a thread in the Openbravo Forge forums describing your problem.
I am using the matlabcontrol-4.0.0.jar library to call MATLAB from Java. This is on Ubuntu 11.10, MATLAB R2011b, and Java version "1.6.0_23".
When trying to run this simple program:
public static void main(String[] args) throws MatlabConnectionException,
MatlabInvocationException {
//Create a proxy, which we will use to control MATLAB
MatlabProxyFactory factory = new MatlabProxyFactory();
MatlabProxy proxy = factory.getProxy();
//Display 'hello world' just like when using the demo
proxy.eval("disp('hello world')");
//Disconnect the proxy from MATLAB
proxy.disconnect();
}
I get, after the MATLAB launch screen appears (which is good), a timeout:
Exception in thread "main" matlabcontrol.MatlabConnectionException: MATLAB proxy could not be created in 180000 milliseconds
at matlabcontrol.RemoteMatlabProxyFactory.getProxy(RemoteMatlabProxyFactory.java:158)
at matlabcontrol.MatlabProxyFactory.getProxy(MatlabProxyFactory.java:81)
at Main.main(Main.java:15)
I've looked everywhere, including all the tips provided on Stack Overflow, but nothing seems to fit the problem I am encountering.
*UPDATE*
I forgot to mention that I already tried the scenario described by Joshua Kaplan (thanks!). In my case it is of no help, meaning that it just keeps waiting. Could someone perhaps elaborate on the communication protocol between Java and the MATLAB proxy?
It could be an incompatibility issue as well; I've posted on the website providing the resource, but have received no answer so far...
*END UPDATE*
So, if any of you has a tip on where to start looking, that would be wonderful.
Thanks!
The getProxy() method is a blocking operation with a default timeout of 3 minutes (that is, 180 seconds, or 180000 milliseconds). For most people's machines that is long enough; if the connection was not established in that amount of time, then something has gone wrong. However, this timeout can be changed by creating an instance of MatlabProxyFactoryOptions, which is done by using a MatlabProxyFactoryOptions.Builder. The MatlabProxyFactoryOptions instance you create is passed into MatlabProxyFactory's constructor. Here's an example with a 5 minute timeout:
MatlabProxyFactoryOptions options = new MatlabProxyFactoryOptions.Builder()
.setProxyTimeout(300000L)
.build();
MatlabProxyFactory factory = new MatlabProxyFactory(options);
MatlabProxy proxy = factory.getProxy();
Alternatively, you can request a proxy, which is a non-blocking operation that has no timeout. Once the proxy has been created, it will be passed to the provided callback. Example:
MatlabProxyFactory factory = new MatlabProxyFactory();
factory.requestProxy(new MatlabProxyFactory.RequestCallback()
{
public void proxyCreated(MatlabProxy proxy)
{
//TODO: Make use of the proxy
}
});
I had a similar problem. The main issue is that the imported .jar file "matlabcontrol-4.0.0.jar" ships a default configuration in the class Configuration.java. In my case the problem was that the library could not call MATLAB properly with all arguments. Try adding to your project not the .jar file, but the matlabcontrol package with all its source .java files. You can download it from the same page, http://code.google.com/p/matlabcontrol/downloads/list, where you got the .jar libs. Then in Configuration.java edit the getMatlabLocation() lines:
else if(isWindows() || isLinux())
{
matlabLoc = "matlab";
}
replace with:
else if(isLinux())
{
matlabLoc = "/usr/local/MATLAB/R2011b/bin/matlab"; // or wherever your MATLAB is installed (its bin directory); this is the path in my case
}
else if(isWindows())
{
matlabLoc = "matlab";
}
We have an MTOM-enabled web service that is published with Grails and the Metro 1.0.2 plugin:
@MTOM
@WebService(targetNamespace="http://com.domain")
class TestService {
@WebMethod
int uploadFile(@XmlMimeType("application/octet-stream") DataHandler data) {
data.dataSource.inputStream.eachLine {
println "reading: -> ${it}"
}
return 0
}
}
Following this tutorial, we set up a Java test client that looks like this:
import java.util.Map;
import javax.activation.DataHandler;
import javax.activation.FileDataSource;
import javax.xml.ws.BindingProvider;
import javax.xml.ws.soap.MTOMFeature;
import com.sun.xml.ws.developer.JAXWSProperties;

public class Client {
public static void main(String[] argv) {
MTOMFeature feat = new MTOMFeature();
TestService service = new TestServiceService().getTestServicePort(feat);
Map<String, Object> ctxt = ((BindingProvider)service).getRequestContext();
ctxt.put(JAXWSProperties.HTTP_CLIENT_STREAMING_CHUNK_SIZE, 8192);
service.uploadFile(new DataHandler(new FileDataSource("c:/file.xml")));
}
}
When I run the client, I get the following error message:
Couldn't create SOAP message due to
exception:
org.jvnet.mimepull.MIMEParsingException:
Missing start boundary
However, when I don't add the MTOMFeature and just do
TestService service = new TestServiceService().getTestServicePort();
the file gets uploaded OK. But as I understand it, if MTOM is not enabled on both the server and the client side, the entire file will be kept in memory (and not streamed). So, my questions are:
Why do we get that error?
If I don't add the MTOMFeature, will the file still be MTOM-transmitted?
I would be very grateful for any help/tips!
After some research and testing, the answers are:
The error occurs because Grails adds its own filtering, which also applies to services. So, by excluding the services from being filtered with static excludes = ["/services/*"] in UrlMappings.groovy, it works.
No. Without the MTOMFeature, the file will just be treated as any other data in the request. That means it is stored in memory, which causes problems for big files.