I'm going to read files from SFTP location line by line:
@Override
public void configure() {
    from(sftpLocationUrl)
        .routeId("route-name")
        .split(body().tokenize("\n"))
        .streaming()
        .bean(service, "build")
        .to(String.format("activemq:%s", queueName));
}
But this application will be deployed on two nodes, and I think that in this case the application's behavior can become unstable and unpredictable, because the same lines of the file can be read twice.
Is there a way to avoid such duplicates in this case?
Camel has some (experimental) clustering capabilities - see here.
In your particular case, you could model a route that takes the leadership when starting the directory polling, thereby preventing other nodes from picking up the (same or other) files.
The solution is active/passive mode: “In active/passive mode, you have a single master instance polling for files, while all the other instances (slaves) are passive. For this strategy to work, some kind of locking mechanism must be in use to ensure that only the node holding the lock is the master and all other nodes are on standby.”
This can be implemented with Hazelcast, Consul, or ZooKeeper.
public class FileConsumerRoute extends RouteBuilder {

    private int delay;
    private String name;

    public FileConsumerRoute(String name, int delay) {
        this.name = name;
        this.delay = delay;
    }

    @Override
    public void configure() throws Exception {
        // read files from the shared directory
        from("file:target/inbox?delete=true")
            // setup route policy to be used
            .routePolicyRef("myPolicy")
            .log(name + " - Received file: ${file:name}")
            .delay(delay)
            .log(name + " - Done file: ${file:name}")
            .to("file:target/outbox");
    }
}
Server Bar
public class ServerBar {

    private Main main;

    public static void main(String[] args) throws Exception {
        ServerBar bar = new ServerBar();
        bar.boot();
    }

    public void boot() throws Exception {
        // setup the consul route policy
        ConsulRoutePolicy routePolicy = new ConsulRoutePolicy();
        // the service name must be the same in the foo and bar servers
        routePolicy.setServiceName("myLock");
        routePolicy.setTtl(5);

        main = new Main();
        // bind the consul route policy to the name myPolicy, which we refer to from the route
        main.bind("myPolicy", routePolicy);
        // add the route, name it Bar, and use a little delay when processing the files
        main.addRouteBuilder(new FileConsumerRoute("Bar", 100));
        main.run();
    }
}
Server Foo
public class ServerFoo {

    private Main main;

    public static void main(String[] args) throws Exception {
        ServerFoo foo = new ServerFoo();
        foo.boot();
    }

    public void boot() throws Exception {
        // setup the consul route policy
        ConsulRoutePolicy routePolicy = new ConsulRoutePolicy();
        // the service name must be the same in the foo and bar servers
        routePolicy.setServiceName("myLock");
        routePolicy.setTtl(5);

        main = new Main();
        // bind the consul route policy to the name myPolicy, which we refer to from the route
        main.bind("myPolicy", routePolicy);
        // add the route, name it Foo, and use a little delay when processing the files
        main.addRouteBuilder(new FileConsumerRoute("Foo", 100));
        main.run();
    }
}
Source: Camel in Action, 2nd Edition
I'm working on a project that uses JavaFX for the GUI (which I know is not serializable). I want to serialize objects such as my Users.
I'm not able to access the Application instance that JavaFX uses directly, but I have it associated with other classes.
For example, it is associated with my Controller class:
public class MyApp extends Application {
    public void start(Stage stage) {
        // ... assume controller loaded
        controller.setApp(this);
    }
}

public class Controller {
    MyApp app;

    public void setApp(MyApp app) {
        this.app = app;
    }
}
Now when I go to serialize an instance of MyApp, I'm having difficulty. I found a slight trick (option 1), but it feels a little messy. I'd much rather do option 2.
Option 1 [WORKS] - create an additional instance in the main method.
public class MyApp extends Application {
    public void start(Stage stage) {
        // ... assume controller loaded
        controller.setApp(this);
    }

    public static void main(String[] args) throws IOException {
        MyApp app = new MyApp(); // this is a different instance than the JavaFX instance.
        launch(args);
        app.users = Controller.getUsers();
        writeApp(app);
    }

    public static void writeApp(MyApp photoApp) throws IOException {
        ObjectOutputStream oos = new ObjectOutputStream(
                new FileOutputStream(storeDir + File.separator + storeFile));
        oos.writeObject(photoApp);
        oos.close();
    }
}
Thus, option 1 essentially creates a new instance and copies stuff back and forth from the actual instance in controller.
Option 2 [DOES NOT WORK] - serialize the instance associated with the controller (since that is the actual instance JavaFX is using)
public class Controller {
    MyApp app;

    public void setApp(MyApp app) {
        this.app = app;
    }

    public void someAction() {
        MyApp.writeApp(this.app);
    }
}
When I try Option 2, I get errors saying Controller is not serializable. I understand that it is not (which is okay), but I don't get that error in Option 1. In both options I'm calling the same method with some instance of MyApp, and I'm not sure why it works for Option 1 but not Option 2.
Is there a reason why one option works and the other doesn't? How do most people serialize some of their objects when they use JavaFX?
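As a general illustration (with hypothetical Holder and Widget classes, not the classes from this question): Java's default serialization walks the entire object graph reachable from the object handed to writeObject, so the "not serializable" error can name a class that was never passed to writeObject directly, and marking such a field transient excludes it from the graph:

import java.io.*;

// Hypothetical classes for illustration only.
class Widget { }                              // not Serializable

class Holder implements Serializable {
    Widget widget = new Widget();             // reachable, so serializing Holder fails
    // transient Widget widget = new Widget(); // a transient field would be skipped instead
}

public class GraphDemo {
    public static void main(String[] args) throws IOException {
        ObjectOutputStream oos = new ObjectOutputStream(new ByteArrayOutputStream());
        oos.writeObject(new Holder()); // throws NotSerializableException naming Widget
    }
}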
I would like to test a singleton actor using Java in the Scala IDE build of Eclipse SDK (Build id: 3.0.2-vfinal-20131028-1923-Typesafe); Akka is 2.3.1.
public class WorkerTest {

    static ActorSystem system;

    @BeforeClass
    public static void setup() {
        system = ActorSystem.create("ClusterSystem");
    }

    @AfterClass
    public static void teardown() {
        JavaTestKit.shutdownActorSystem(system);
        system = null;
    }

    @Test
    public void testWorkers() throws Exception {
        new JavaTestKit(system) {{
            system.actorOf(ClusterSingletonManager.defaultProps(
                    Props.create(ClassSingleton.class), "class",
                    PoisonPill.getInstance(), "backend"), "classsingleton");
            ActorRef selection = system.actorOf(
                    ClusterSingletonProxy.defaultProps("user/classsingleton/class", "backend"), "proxy");
            System.out.println(selection);
        }};
    }
}
Here is ClassSingleton.java:
public class ClassSingleton extends UntypedActor {

    LoggingAdapter log = Logging.getLogger(getContext().system(), this);

    public ClassSingleton() {
        System.out.println("Constructor is done");
    }

    public static Props props() {
        return Props.create(ClassOperator.class);
    }

    @Override
    public void preStart() throws Exception {
        ActorRef selection = getSelf();
        System.out.println("ClassSingleton ActorRef... " + selection);
    }

    @Override
    public void onReceive(Object message) {
    }

    @Override
    public void postStop() throws Exception {
        System.out.println("postStop ... ");
    }
}
The ClassSingleton actor is doing nothing; the only printout is:
Actor[akka://ClusterSystem/user/proxy#-893814405], which is printed from the ClusterSingletonProxy. There is no exception and JUnit finishes with a green flag. When debugging, ClassSingleton is never called (including its constructor and preStart()). I'm sure the mistake is mine, but what is it? Even more confusing, the same ClassSingleton ClusterSingletonManager code works fine outside of JavaTestKit and JUnit.
I suspect that the cluster setup might be responsible, so I tried to include and exclude the following code (no effect). However, I would like to understand why we need it, if we need it at all (it is from an example code).
Many thanks for your help.
Address clusterAddress = Cluster.get(system).selfAddress();
Cluster.get(system).join(clusterAddress);
The standard behavior of the singleton proxy pattern is to locate the oldest node and deploy the 'real' actor there, while proxy actors are started on all nodes. I suspect that the cluster configuration never completed, which is why your actor never got started.
The join method makes the node become a member of the cluster, so if no node joins the cluster, the actor behind the proxy cannot be created.
The question is: do the configuration files that are read during the JUnit test have all the information needed to form a cluster? Seed nodes? Is the port set to the same value as the seed node's?
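For instance, here is a minimal sketch of a self-contained, single-node test setup (the provider, hostname, port, and seed-node values are assumptions for illustration, not taken from the question); the node lists itself as the only seed node, so the cluster forms and the singleton manager can start:

import akka.actor.ActorSystem;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class ClusterTestSetup {

    // Creates a single-node cluster for tests; all values here are assumed defaults.
    static ActorSystem createSystem() {
        Config config = ConfigFactory.parseString(
                "akka.actor.provider = \"akka.cluster.ClusterActorRefProvider\"\n" +
                "akka.remote.netty.tcp.hostname = \"127.0.0.1\"\n" +
                "akka.remote.netty.tcp.port = 2551\n" +
                "akka.cluster.seed-nodes = [\"akka.tcp://ClusterSystem@127.0.0.1:2551\"]")
                .withFallback(ConfigFactory.load());
        // The node seeds itself and so becomes a cluster member on its own
        // (the programmatic alternative is Cluster.get(system).join(Cluster.get(system).selfAddress())).
        return ActorSystem.create("ClusterSystem", config);
    }
}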
I have a simple Camel MINA server using the Java DSL, and I am running it like the examples documented here:
Running Camel standalone and have it keep running in JAVA
MINA 2 Component
I am trying to create a sample application hosted at "mina:tcp://localhost:9991" (aka MyApp_B) that sends a very simple message to a server hosted at "mina:tcp://localhost:9990" (aka MyApp_A).
What I want is to send a simple message containing a String in the header (which is "Hello World!") and the address in the body.
public class MyApp_B extends Main {

    public static final String MINA_HOST = "mina:tcp://localhost:9991";

    public static void main(String... args) throws Exception {
        MyApp_B main = new MyApp_B();
        main.enableHangupSupport();
        main.addRouteBuilder(
            new RouteBuilder() {
                @Override
                public void configure() throws Exception {
                    from("direct:start")
                        .setHeader("order", constant("Hello World!"))
                        .setBody(constant(MINA_HOST))
                        .to("mina:tcp://localhost:9990");
                }
            }
        );
        System.out.println("Starting Camel MyApp_B. Use ctrl + c to terminate the JVM.\n");
        main.run();
    }
}
public class MainApp_A {

    public static void main(String... args) throws Exception {
        Main main = new Main();
        main.enableHangupSupport();
        main.addRouteBuilder(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("mina:tcp://localhost:9990")
                    .bean(MyRecipientListBean.class, "updateServers")
                    .to("direct:debug");

                from("direct:debug").process(new Processor() {
                    public void process(Exchange exchange) throws Exception {
                        System.out.println("Received order: " + exchange.getIn().getBody());
                    }
                });
            }
        });
        main.run(args);
    }
}
Bean used by MyApp_A:
public class MyRecipientListBean {

    public final static String REMOVE_SERVER = "remove";
    public final static String ADD_SERVER = "add";

    private Set<String> servers = new HashSet<String>();

    public void updateServers(@Body String serverURI,
                              @Header("order") String order) {
        System.out.println("===============================================\n");
        System.out.println("Received " + order + " request from server " + serverURI + "\n");
        System.out.println("===============================================\n");

        if (order.equals(ADD_SERVER))
            servers.add(serverURI);
        else if (order.equals(REMOVE_SERVER))
            servers.remove(serverURI);
    }
}
I have written this code; however, the server on the other side doesn't seem to receive anything. Therefore I have two questions:
Am I doing something wrong?
Is there a better way to send a simple message using Camel?
As written, nothing is ever sent to MyApp_B's direct:start endpoint, so MyApp_B never sends a message to MyApp_A. You need to send a message to the direct endpoint to start the route.
You can also change direct to a timer component to have it trigger every X seconds, etc. (a sketch of this follows after the quoted comment below).
Added latest comment as requested:
Yes, and the direct route is also running. It's just that to send a message to direct, you need to do that using Camel. direct is an internal Camel component for sending messages between endpoints (routes). To send a message to it, you can use the producer template. See chapter 7, section 7.7 in the Camel in Action book.
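As an illustration, here is a minimal sketch of the timer variant (the endpoint name, the 5-second period, and the "add" header value are assumptions for this example); the timer fires on its own, so nothing has to send to direct:start by hand:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

// Hypothetical self-triggering variant of MyApp_B.
public class MyApp_B_Timer {
    public static void main(String... args) throws Exception {
        Main main = new Main();
        main.addRouteBuilder(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("timer:sendOrder?period=5000")              // fires every 5 seconds
                    .setHeader("order", constant("add"))         // matches MyRecipientListBean.ADD_SERVER
                    .setBody(constant("mina:tcp://localhost:9991"))
                    .to("mina:tcp://localhost:9990");
            }
        });
        main.run();
    }
}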
I wrote a small HTTP server in Java and I have a problem passing static variables (server configuration: port, root, etc.) to the thread that handles requests. I do not want my thread to modify these variables, and if it extends the server class, it will also inherit its methods, which I don't want.
I don't want to use getters for reasons of performance. If I make the static members final, I will have a problem when loading their values from the config file.
Here's an example:
class HTTPServer {
    static int port;
    static File root;
    // etc..
    ....

    // must be public
    public void launch() throws HTTPServerException {
        loadConfig();
        while (!pool.isShutdown()) {
            ....
            // using some config here
            ...
            try {
                Socket s = ss.accept();
                Worker w = new Worker(s);
                pool.execute(w);
            } catch (IOException e) { ... }
        }
    }

    private void loadConfig() { /* reading from file */ }

    ...
    // other methods that must be public go here
}
I also don't want to have the worker as a nested class; it's in another package.
What do you propose?
You could put your config in a final AtomicReference. Then it can be referenced by your worker and also updated in a thread-safe manner.
Something like:
class HTTPServer {
    public static final AtomicReference<ServerConf> config =
        new AtomicReference<>(new ServerConf(8080, new File("/foo/bar"))); // initial defaults; loadConfig() can replace them
}
Make the new ServerConf class immutable:
class ServerConf {
final int port;
final File root;
public ServerConf(int port, File root) {
this.port = port;
this.root = root;
}
}
Then your worker can get a reference to the current config via HTTPServer.config.get(). Perhaps something like:
Worker w = new Worker(s, HTTPServer.config.get());
loadConfig() can set new config via something like:
HTTPServer.config.set(new ServerConf(8080, new File("/foo/bar")));
If it's not important for all your config to change at the same time, you could skip the ServerConf class and use AtomicInteger for the port setting, and AtomicReference<File> for the root.
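A minimal sketch of that simpler variant (the initial values are just the example values used above):

import java.io.File;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

class HTTPServer {
    // Each setting can be read and replaced independently, without locking.
    public static final AtomicInteger port = new AtomicInteger(8080);
    public static final AtomicReference<File> root = new AtomicReference<>(new File("/foo/bar"));
}

// A worker then reads the current values with HTTPServer.port.get() and HTTPServer.root.get().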
Read the static data into a static 'sharedConfig' object that also has a socket field - you can use that field for the listening socket. When accept() returns with a server<>client socket, clone() the 'sharedConfig', shove in the new socket, and pass that object to the server<>client worker thread. The thread then gets a copy of the config that it can read and even modify if it wants to without affecting any other thread or the static config.
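A rough sketch of this clone-per-connection idea (SharedConfig and Worker are placeholder names, not from the question):

import java.io.File;
import java.net.ServerSocket;
import java.net.Socket;

// Per-connection copy of the configuration plus the accepted socket.
class SharedConfig implements Cloneable {
    int port;
    File root;
    Socket clientSocket; // filled in for each accepted connection

    @Override
    public SharedConfig clone() {
        try {
            return (SharedConfig) super.clone(); // shallow copy is enough for these fields
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e);
        }
    }
}

class Worker implements Runnable {
    private final SharedConfig config;

    Worker(SharedConfig config) {
        this.config = config;
    }

    public void run() {
        // handle the request using config.clientSocket, config.port, config.root
    }
}

class Server {
    static SharedConfig sharedConfig = new SharedConfig(); // loaded once from the config file

    void serve(ServerSocket ss) throws Exception {
        while (true) {
            Socket s = ss.accept();
            SharedConfig perConnection = sharedConfig.clone(); // each worker gets its own copy
            perConnection.clientSocket = s;
            new Thread(new Worker(perConnection)).start();
        }
    }
}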
So I'm trying to get my Apache XML-RPC client/server implementation to play ball. Everything works fine except for one crucial issue:
My handler class (mapped through the properties file org.apache.xmlrpc.webserver.XmlRpcServlet.properties) reacts as it should, but its constructor is called at every method invocation. It would seem that the handler class is instantiated at each call, which is bad because I have data stored in instance variables that I need to preserve between calls.
How do I save a reference to the instantiated handler so that I can access its instance variables?
So, for anyone else who still wants to use XML-RPC, here's how I fixed this issue:
http://xmlrpc.sourceforge.net/
It is far superior to Apache XML-RPC, in my opinion.
This is standard behaviour of Apache XMLRPC 3.x. http://ws.apache.org/xmlrpc/handlerCreation.html:
By default, Apache XML-RPC creates a new object for processing each
request received at the server side.
However, you can emulate the behaviour of XMLRPC 2.x, where you registered handler objects instead of handler classes, using a RequestProcessorFactoryFactory. I have written a custom RequestProcessorFactoryFactory that you can use:
public class CustomHandler implements RequestProcessorFactoryFactory {

    Map<Class<?>, RequestProcessorFactory> handlers =
        Collections.synchronizedMap(
            new HashMap<Class<?>, RequestProcessorFactory>());

    @Override
    public RequestProcessorFactory getRequestProcessorFactory(Class pClass)
            throws XmlRpcException {
        return handlers.get(pClass);
    }

    public void addHandler(final Object handler) {
        handlers.put(handler.getClass(), new RequestProcessorFactory() {
            @Override
            public Object getRequestProcessor(XmlRpcRequest pRequest)
                    throws XmlRpcException {
                return handler;
            }
        });
    }
}
This can then be used with, e.g., an XML-RPC WebServer like this:
WebServer server = ...
PropertyHandlerMapping phm = new PropertyHandlerMapping();
server.getXmlRpcServer().setHandlerMapping(phm);

CustomHandler sh = new CustomHandler();
phm.setRequestProcessorFactoryFactory(sh);

Object handler = ... /** The object you want to expose via XMLRPC */
sh.addHandler(handler);
phm.addHandler(serverName, handler.getClass());
Maybe something to do with javax.xml.rpc.session.maintain set to true?
I know this is a really old post but I managed to solve the problem with Apache's Java XML-RPC.
First, I thought this could be solved with a singleton class in Java, but it doesn't work and throws an "illegal access exception".
This is what I have done:
public class XmlRpcServer {

    private static JFrame frame = new JFrame();
    private static JPanel pane = new JPanel();

    public static XmlRpcServer singleton_inst = new XmlRpcServer();

    public XmlRpcServer() {
        // I kept the constructor empty.
    }

    public static void main(String[] args) throws XmlRpcException, IOException {
        // In my case, I put the constructor code here.
        // Then stuff for the XML-RPC server

        // Server part
        WebServer ws = new WebServer(8741);
        PropertyHandlerMapping mapping = new PropertyHandlerMapping();
        mapping.addHandler("SERVER", singleton_inst.getClass());
        ws.getXmlRpcServer().setHandlerMapping(mapping);
        ws.start();
    }

    // I called doTheJob() from Python via XML-RPC
    public String doTheJob(String s) throws XmlRpcException {
        loop();
        return s;
    }

    // It executed loop() forever
    private static void loop() throws XmlRpcException {
        // Actual work is here
    }
}
But metaspace increases gradually. I worked a lot on this metaspace issue when looping forever in Java, but I couldn't figure out a solution.