I'm trying to run the following code, but unfortunately I'm running into errors.
package jskypeexample;
// import the JSkype packages
import net.lamot.java.jskype.general.AbstractMessenger;
import net.lamot.java.jskype.general.MessageListenerInterface;
import net.lamot.java.jskype.windows.Messenger;
import java.lang.Thread;
import java.lang.Exception;
/**
*
* @author swhite
*/
public class JSkypeExample implements MessageListenerInterface {
// create a messenger which we'll use for sending messages
private AbstractMessenger msgr = null;
/** Creates a new instance of JSkypeExample */
public JSkypeExample() {
msgr = new Messenger();
msgr.addListener(this);
msgr.initialize();
try {
// This number may vary on your system depending on the amount
// of time required to initialize the msgr.
Thread.sleep(1000);
// send the Skype API text command
msgr.sendMessage("Message seanmwhite Hello from UI Student");
msgr.sendMessage("SEARCH FRIENDS");
} catch (Exception e) {
e.printStackTrace();
}
}
public static void main(String[] args) {
new JSkypeExample();
}
public void onMessageReceived(String str) {
// This is where you will handle all strings that are returned.
System.out.println(str);
}
}
But when I comment out the following lines, it runs fine.
msgr.initialize();
msgr.sendMessage("Message seanmwhite Hello from UI Student");
msgr.sendMessage("SEARCH FRIENDS");
But I need to send commands in order to receive a response. I'm using the JSkype API (an open-source Java API).
You have to check the boolean value that your initialize() function returns and make sure it is true, or catch the exception if it is false.
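As a rough sketch of that check (assuming initialize() really does return a boolean indicating whether the connection to the running Skype client succeeded, as suggested above; verify against the JSkype javadoc), the constructor could be rearranged like this:
public JSkypeExample() {
    msgr = new Messenger();
    msgr.addListener(this);
    try {
        // Assumption: initialize() returns true only once the Skype connection is up.
        if (msgr.initialize()) {
            Thread.sleep(1000); // give the messenger a moment to settle
            msgr.sendMessage("Message seanmwhite Hello from UI Student");
            msgr.sendMessage("SEARCH FRIENDS");
        } else {
            System.err.println("JSkype messenger failed to initialize");
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}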
Related
I am trying to develop a Camunda process, but I don't know how to implement a multi-instance subprocess that iterates over a collection.
For example:
SubProcess subProcess = modelInstance.getModelElementById("elementVersionId-" + element.getId().toString());
subProcess.builder().multiInstance().multiInstanceDone() // can't add a start event after multiInstanceDone()
After adding multiInstanceDone() to the subprocess, I can't start the subprocess with a startEvent.
Does anyone have an idea or an example that could help me?
Hope this helps:
import lombok.extern.slf4j.Slf4j;
import org.camunda.bpm.model.bpmn.Bpmn;
import org.camunda.bpm.model.bpmn.BpmnModelInstance;
import org.camunda.bpm.model.bpmn.builder.MultiInstanceLoopCharacteristicsBuilder;
import org.camunda.bpm.model.bpmn.instance.*;
import java.io.File;
@Slf4j
public class MultiInstanceSubprocess {
public static final String MULTI_INSTANCE_PROCESS = "myMultiInstanceProcess";
// @see https://docs.camunda.org/manual/latest/user-guide/model-api/bpmn-model-api/fluent-builder-api/
public static void main(String[] args) {
BpmnModelInstance modelInst;
try {
File file = new File("./src/main/resources/multiInstance.bpmn");
modelInst = Bpmn.createProcess()
.id("MyParentProcess")
.executable()
.startEvent("ProcessStarted")
.subProcess(MULTI_INSTANCE_PROCESS)
//first create sub process content
.embeddedSubProcess()
.startEvent("subProcessStartEvent")
.userTask("UserTask1")
.endEvent("subProcessEndEvent")
.subProcessDone()
.endEvent("ParentEnded").done();
// Add multi-instance loop characteristics to embedded sub process
SubProcess subProcess = modelInst.getModelElementById(MULTI_INSTANCE_PROCESS);
subProcess.builder()
.multiInstance()
.camundaCollection("myCollection")
.camundaElementVariable("myVar")
.multiInstanceDone();
log.info("Flow Elements - Name : Id : Type Name");
modelInst.getModelElementsByType(FlowNode.class).forEach(e -> log.info("{} : {} : {}", e.getName(), e.getId(), e.getElementType().getTypeName()));
Bpmn.writeModelToFile(file, modelInst);
} catch (Exception e) {
e.printStackTrace();
}
}
}
Is there a way in the JVM to guarantee that some function will run before an AWS Lambda function exits? I would like to flush an internal buffer to stdout as the last action in a Lambda function, even if an exception is thrown.
As far as I understand, you want to execute some code before your Lambda function is stopped, regardless of its execution state (running/waiting/exception handling/etc.).
This is not possible out of the box with Lambda, i.e. there is no event fired or anything similar that could be used as a shutdown hook. The JVM will be frozen as soon as you hit the timeout. However, you can observe the remaining execution time by using the method getRemainingTimeInMillis() from the Context object. From the docs:
Returns the number of milliseconds left before the execution times out.
So, when initializing your function, you can schedule a task that regularly checks how much time is left until your Lambda function reaches the timeout. Then, if fewer than X (milli)seconds are left, you do Y.
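A minimal sketch of such a watchdog (getRemainingTimeInMillis() is the real Context method; the 500 ms threshold, the 100 ms polling interval and the flushBuffer callback are illustrative placeholders):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import com.amazonaws.services.lambda.runtime.Context;
public class TimeoutWatchdog {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    // Poll the remaining time and flush when fewer than ~500 ms are left.
    public void start(final Context context, final Runnable flushBuffer) {
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                if (context.getRemainingTimeInMillis() < 500) {
                    flushBuffer.run();    // e.g. System.out.flush() plus your own buffer
                    scheduler.shutdown(); // stop polling once we have flushed
                }
            }
        }, 0, 100, TimeUnit.MILLISECONDS);
    }
    // Call this on the normal exit path so the watchdog does not outlive the invocation.
    public void stop() {
        scheduler.shutdownNow();
    }
}
You would start such a watchdog at the beginning of handleRequest(input, context) and stop it before returning; what flushBuffer actually does is up to your code.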
The aws-samples repository shows how to do it here:
package helloworld;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.IOException;
import java.net.URL;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
/**
* Handler for requests to Lambda function.
*/
public class App implements RequestHandler<Object, Object> {
static {
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
System.out.println("[runtime] ShutdownHook triggered");
System.out.println("[runtime] Cleaning up");
// perform actual clean up work here.
try {
Thread.sleep(200);
} catch (Exception e) {
System.out.println(e);
}
System.out.println("[runtime] exiting");
System.exit(0);
}
});
}
public Object handleRequest(final Object input, final Context context) {
Map<String, String> headers = new HashMap<>();
headers.put("Content-Type", "application/json");
headers.put("X-Custom-Header", "application/json");
try {
final String pageContents = this.getPageContents("https://checkip.amazonaws.com");
String output = String.format("{ \"message\": \"hello world\", \"location\": \"%s\" }", pageContents);
return new GatewayResponse(output, headers, 200);
} catch (IOException e) {
return new GatewayResponse("{}", headers, 500);
}
}
private String getPageContents(String address) throws IOException {
URL url = new URL(address);
try (BufferedReader br = new BufferedReader(new InputStreamReader(url.openStream()))) {
return br.lines().collect(Collectors.joining(System.lineSeparator()));
}
}
}
I have a Grizzly HTTP server with async processing added. It queues my requests and processes only one request at a time, despite the async support.
The path the HttpHandler is bound to is: "/"
Port number: 7777
Behavior observed when I hit http://localhost:7777 from two browsers simultaneously:
The second call waits until the first one completes. I want the second HTTP call to be processed in tandem with the first.
EDIT: GitHub link of my project
Here are the classes
GrizzlyMain.java
package com.grizzly;
import java.io.IOException;
import java.net.URI;
import javax.ws.rs.core.UriBuilder;
import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.NetworkListener;
import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
import org.glassfish.grizzly.strategies.WorkerThreadIOStrategy;
import org.glassfish.grizzly.threadpool.ThreadPoolConfig;
import com.grizzly.http.IHttpHandler;
import com.grizzly.http.IHttpServerFactory;
public class GrizzlyMain {
private static HttpServer httpServer;
private static void startHttpServer(int port) throws IOException {
URI uri = getBaseURI(port);
httpServer = IHttpServerFactory.createHttpServer(uri,
new IHttpHandler(null));
TCPNIOTransport transport = getListener(httpServer).getTransport();
ThreadPoolConfig config = ThreadPoolConfig.defaultConfig()
.setPoolName("worker-thread-").setCorePoolSize(6).setMaxPoolSize(6)
.setQueueLimit(-1)/* same as default */;
transport.configureBlocking(false);
transport.setSelectorRunnersCount(3);
transport.setWorkerThreadPoolConfig(config);
transport.setIOStrategy(WorkerThreadIOStrategy.getInstance());
transport.setTcpNoDelay(true);
System.out.println("Blocking Transport(T/F): " + transport.isBlocking());
System.out.println("Num SelectorRunners: "
+ transport.getSelectorRunnersCount());
System.out.println("Num WorkerThreads: "
+ transport.getWorkerThreadPoolConfig().getCorePoolSize());
httpServer.start();
System.out.println("Server Started #" + uri.toString());
}
public static void main(String[] args) throws InterruptedException,
IOException, InstantiationException, IllegalAccessException,
ClassNotFoundException {
startHttpServer(7777);
System.out.println("Press any key to stop the server...");
System.in.read();
}
private static NetworkListener getListener(HttpServer httpServer) {
return httpServer.getListeners().iterator().next();
}
private static URI getBaseURI(int port) {
return UriBuilder.fromUri("https://0.0.0.0/").port(port).build();
}
}
HttpHandler (with async support built in)
package com.grizzly.http;
import java.io.IOException;
import java.util.Date;
import java.util.concurrent.ExecutorService;
import javax.ws.rs.core.Application;
import org.glassfish.grizzly.http.server.HttpHandler;
import org.glassfish.grizzly.http.server.Request;
import org.glassfish.grizzly.http.server.Response;
import org.glassfish.grizzly.http.util.HttpStatus;
import org.glassfish.grizzly.threadpool.GrizzlyExecutorService;
import org.glassfish.grizzly.threadpool.ThreadPoolConfig;
import org.glassfish.jersey.server.ApplicationHandler;
import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.server.spi.Container;
import com.grizzly.Utils;
/**
* Jersey {@code Container} implementation based on Grizzly
* {@link org.glassfish.grizzly.http.server.HttpHandler}.
*
* @author Jakub Podlesak (jakub.podlesak at oracle.com)
* @author Libor Kramolis (libor.kramolis at oracle.com)
* @author Marek Potociar (marek.potociar at oracle.com)
*/
public final class IHttpHandler extends HttpHandler implements Container {
private static int reqNum = 0;
final ExecutorService executorService = GrizzlyExecutorService
.createInstance(ThreadPoolConfig.defaultConfig().copy()
.setCorePoolSize(4).setMaxPoolSize(4));
private volatile ApplicationHandler appHandler;
/**
* Create a new Grizzly HTTP container.
*
* @param application
* JAX-RS / Jersey application to be deployed on Grizzly HTTP
* container.
*/
public IHttpHandler(final Application application) {
}
@Override
public void start() {
super.start();
}
@Override
public void service(final Request request, final Response response) {
System.out.println("\nREQ_ID: " + reqNum++);
System.out.println("THREAD_ID: " + Utils.getThreadName());
// Instruct Grizzly not to flush the response once we exit the service(...) method
response.suspend();
executorService.execute(new Runnable() {
@Override
public void run() {
try {
System.out.println("Executor Service Current THREAD_ID: "
+ Utils.getThreadName());
Thread.sleep(25 * 1000);
} catch (Exception e) {
response.setStatus(HttpStatus.INTERNAL_SERVER_ERROR_500);
} finally {
String content = updateResponse(response);
System.out.println("Response resumed > " + content);
response.resume();
}
}
});
}
@Override
public ApplicationHandler getApplicationHandler() {
return appHandler;
}
@Override
public void destroy() {
super.destroy();
appHandler = null;
}
// Auto-generated stuff
@Override
public ResourceConfig getConfiguration() {
return null;
}
@Override
public void reload() {
}
@Override
public void reload(ResourceConfig configuration) {
}
private String updateResponse(final Response response) {
String data = null;
try {
data = new Date().toLocaleString();
response.getWriter().write(data);
} catch (IOException e) {
data = "Unknown error from our server";
response.setStatus(500, data);
}
return data;
}
}
IHttpServerFactory.java
package com.grizzly.http;
import java.net.URI;
import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.NetworkListener;
import org.glassfish.grizzly.http.server.ServerConfiguration;
/**
* @author smc
*/
public class IHttpServerFactory {
private static final int DEFAULT_HTTP_PORT = 80;
public static HttpServer createHttpServer(URI uri, IHttpHandler handler) {
final String host = uri.getHost() == null ? NetworkListener.DEFAULT_NETWORK_HOST
: uri.getHost();
final int port = uri.getPort() == -1 ? DEFAULT_HTTP_PORT : uri.getPort();
final NetworkListener listener = new NetworkListener("IGrizzly", host, port);
listener.setSecure(false);
final HttpServer server = new HttpServer();
server.addListener(listener);
final ServerConfiguration config = server.getServerConfiguration();
if (handler != null) {
config.addHttpHandler(handler, uri.getPath());
}
config.setPassTraceRequest(true);
return server;
}
}
It seems the problem is the browser waiting for the first request to complete, making this more a client-side than a server-side issue. It disappears if you test with two different browser processes, or even if you open two distinct paths (say localhost:7777/foo and localhost:7777/bar) in the same browser process (note: the query string participates in making up the path in the HTTP request line).
How I understood it
Connections in HTTP/1.1 are persistent by default, i.e. browsers recycle the same TCP connection over and over again to speed things up. However, this doesn't mean that all requests to the same domain are serialized: in fact, a connection pool is allocated on a per-hostname basis (source). Unfortunately, requests for the same path are effectively enqueued (at least in Firefox and Chrome). I guess it's a mechanism browsers employ to protect server resources (and thus user experience).
Real-world applications don't suffer from this, because different resources are deployed at different URLs.
DISCLAIMER: I wrote this answer based on my observations and some educated guessing. Things may actually work like this, but a tool like Wireshark should be used to follow the TCP stream and confirm definitively that this is what happens.
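To rule out the server, a quick client-side check that fires two requests from separate threads (and therefore bypasses the browser's connection handling) could look like the sketch below; the URL and the thread count are just for illustration:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
public class ConcurrencyCheck {
    public static void main(String[] args) throws Exception {
        Runnable call = () -> {
            try {
                long start = System.currentTimeMillis();
                HttpURLConnection conn = (HttpURLConnection) new URL("http://localhost:7777/").openConnection();
                try (BufferedReader br = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                    String body = br.readLine();
                    System.out.println(Thread.currentThread().getName() + " got \"" + body + "\" after "
                            + (System.currentTimeMillis() - start) + " ms");
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        };
        Thread t1 = new Thread(call, "client-1");
        Thread t2 = new Thread(call, "client-2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
If both threads report roughly the same elapsed time (around 25 s with the Thread.sleep in the handler), the server is processing the requests concurrently and the serialization is happening in the browser.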
I am trying to get an Oozie job status using the Oozie Java API. Currently it fails with the message:
Exception in thread "main" HTTP error code: 401 : Unauthorized
We are using Kerberos authentication in our cluster with a keytab file.
Please advise how to implement the authentication.
My current program is:
import org.apache.oozie.client.OozieClient;
public class oozieCheck
{
public static void main(String[] args) throws Exception
{
// get a OozieClient for local Oozie
OozieClient wc = new OozieClient(
"http://myserver:11000/oozie");
System.out.println(wc.getJobInfo(args[1]));
}
}
I figured out a way to use Kerberos with the Java API.
First, obtain a Kerberos TGT.
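The TGT is usually obtained by running kinit against the keytab before starting the JVM. As an alternative, purely illustrative sketch (my own assumption, not part of the original answer: it relies on Hadoop's UserGroupInformation being on the classpath, and the principal, keytab path and job id below are placeholders):
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.oozie.client.AuthOozieClient;
import org.apache.oozie.client.WorkflowJob;
public class KerberosOozieExample {
    public static void main(String[] args) throws Exception {
        // Placeholders: replace with your principal and keytab path.
        UserGroupInformation.loginUserFromKeytab("myuser@EXAMPLE.COM", "/etc/security/keytabs/myuser.keytab");
        // Run the Oozie call with the logged-in credentials.
        UserGroupInformation ugi = UserGroupInformation.getLoginUser();
        WorkflowJob job = ugi.doAs(new PrivilegedExceptionAction<WorkflowJob>() {
            public WorkflowJob run() throws Exception {
                AuthOozieClient wc = new AuthOozieClient("http://localhost:11000/oozie");
                return wc.getJobInfo("0000001-123456789012345-oozie-oozi-W"); // placeholder job id
            }
        });
        System.out.println(job.getStatus());
    }
}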
Then the code below works:
import java.io.BufferedReader;
import java.io.FileReader;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.Properties;
import org.apache.log4j.Logger;
import org.apache.oozie.client.AuthOozieClient;
import org.apache.oozie.client.WorkflowJob.Status;
public class Wrapper
{
public static AuthOozieClient wc = null;
public static String OOZIE_SERVER_URL = "http://localhost:11000/oozie";
private static final Logger logger = Logger.getLogger(Wrapper.class);
public Wrapper ( String oozieUrlStr ) throws MalformedURLException
{
URL oozieUrl = new URL(oozieUrlStr);
// get a OozieClient for local Oozie
wc = new AuthOozieClient(oozieUrl.toString());
}
public static void main ( String [] args )
{
String jobId = args[0]; // the first argument is the Oozie job id
try
{
Wrapper client = new Wrapper(OOZIE_SERVER_URL);
Properties conf = wc.createConfiguration();
if(wc != null)
{
// get status of jobid from CLA
try
{
while (wc.getJobInfo(jobId).getStatus() == Status.RUNNING)
{
logger.info("Workflow job running ...");
logger.info("Workflow job ID:["+jobId+"]");
Thread.sleep(10 * 1000); // poll every 10 seconds instead of busy-waiting
}
if(wc.getJobInfo(jobId).getStatus() == Status.SUCCEEDED)
{
// print the final status of the workflow job
logger.info("Workflow job completed ...");
logger.info(wc.getJobInfo(jobId));
}
else
{
// print the final status of the workflow job
logger.info("Workflow job Failed ...");
logger.info(wc.getJobInfo(jobId));
}
}
catch(Exception e)
{
e.printStackTrace();
}
}
else
{
System.exit(9999);
}
}
catch (Exception e)
{
e.printStackTrace();
}
}
}
You have to patch the Oozie client if the docs do not mention Kerberos.
I've requested a trial license for Callback File System and tried to write a simple application using Java. I wrote the following few lines, ran them, and received the exception eldos.cbfs.ECBFSError: Access is denied.
Code
import eldos.cbfs.CallbackFileSystem;
import eldos.cbfs.ECBFSError;
import eldos.cbfs.boolRef;
import java.util.logging.Level;
import java.util.logging.Logger;
/**
* @author Sergii.Zagriichuk
*/
public class Test1 {
private static Logger logger = Logger.getLogger(Test1.class.getName());
public static void main(String[] args) {
CallbackFileSystem callbackFileSystem = new CallbackFileSystem();
callbackFileSystem.setRegistrationKey("My registration key ");
try {
callbackFileSystem.install("<path to cab>\\cbfs.cab", "Test", true, 131072, new boolRef(false));
} catch (ECBFSError ecbfsError) {
logger.log(Level.SEVERE, ecbfsError.getMessage(), ecbfsError);
}
}
}
What should I do to fix this problem?
Thanks