I know that there is a supported Scala DSL for Camel. Apart from that:
Is it realistic to replace Java (the language) completely with Scala for a Camel-based project?
What kinds of problems are known to exist?
Which workarounds exist for those problems (other than falling back to Java)?
I am mainly looking for less boilerplate-heavy code.
Akka offers stable Scala-idiomatic Camel integration.
The akka-camel module allows actors, untyped actors and typed actors to receive and send messages over a great variety of protocols and APIs. This section gives a brief overview of the general ideas behind the akka-camel module; the remaining sections go into the details. In addition to the native Scala and Java actor API, actors can now exchange messages with other systems over a large number of protocols and APIs such as HTTP, SOAP, TCP, FTP, SMTP or JMS, to mention a few. At the moment, approximately 80 protocols and APIs are supported.
Apart from that, I'm sure this replacement is possible thanks to the good Java interop, and there should hardly be any Scala-specific issues that would not also occur in Java. E.g., Akka actors used for publishing to/consuming from Camel endpoints are based on java.util.concurrent, and the only problem I can think of is a fixable bug in the library.
In the meantime, a relatively simple Scala DSL has been developed for Camel that should match the functionality of the Java DSL.
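For reference, here is what a minimal route looks like in Camel's Java DSL (the endpoint URIs and the XPath predicate are illustrative); the Scala DSL expresses the same operations with less ceremony:

    import org.apache.camel.builder.RouteBuilder;

    public class OrderRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Route files from an inbox folder to a JMS queue,
            // keeping only high-priority orders. URIs are illustrative.
            from("file:inbox")
                .filter(xpath("/order[@priority = 'high']"))
                .to("jms:queue:highPriorityOrders");
        }
    }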
To decide if it is realistic for you, consider:
- The quality of the IDE support for the languages
- The Scala language complexity
- The Scala/Java language popularity
- DSL extension possibilities. In Scala, it should be possible (with some Scala magic) to extend the DSL (add additional DSL elements).
If you decide to try it out, it would be great if you shared your impressions with the Apache Camel community on:
code readability, code maintainability, code efficiency, developer satisfaction, code size, and the number of "man-days" spent.
Since then (2010-2011), there is now (Sept 2016) a recent initiative for Akka Streams integration, codenamed Alpakka.
We believe that Akka Streams can be the tool for building a modern alternative to Apache Camel. That will not happen by itself overnight and this is a call for arms for the community to join us on this mission. The biggest asset of Camel is its rich set of endpoint components. We would like to see that similar endpoints are developed for Akka Streams.
See "akka/akka-stream-contrib".
Related
It seems that with every major Java release of late, there are new ways to manage concurrent tasks.
In Java 9, we have the Flow API, which resembles the Flowable API of RxJava, but Java 9 has a much simpler set of classes and interfaces.
Java 9
Has a Flow.Publisher, Flow.Subscriber, Flow.Processor, Flow.Subscription, and SubmissionPublisher, and that's about it.
RxJava
Has whole packages of Flow API-like classes, i.e. io.reactivex.flowables, io.reactivex.subscribers, io.reactivex.processors, io.reactivex.observers, and io.reactivex.observables which seem to do something similar.
What are the main differences between these two libraries? Why would someone use the Java 9 Flow library over the much more diverse RxJava library or vice versa?
What are the main differences between these two libraries?
The Java 9 Flow API is not a standalone library but a component of the Java Standard Edition library, and it consists of 4 interfaces adopted from the Reactive Streams specification established in early 2015. In theory, its inclusion can enable in-JDK-specific usages, such as the incubating HttpClient, maybe the planned Async Database Connection in parts, and of course SubmissionPublisher.
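For a sense of how small that surface is, here is a minimal sketch using SubmissionPublisher and a hand-rolled Flow.Subscriber (the one-item-at-a-time request strategy and printed messages are just for illustration):

    import java.util.concurrent.Flow;
    import java.util.concurrent.SubmissionPublisher;

    public class FlowSketch {
        public static void main(String[] args) throws InterruptedException {
            SubmissionPublisher<String> publisher = new SubmissionPublisher<>();
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;

                @Override
                public void onSubscribe(Flow.Subscription subscription) {
                    this.subscription = subscription;
                    subscription.request(1); // back pressure: ask for one item at a time
                }

                @Override
                public void onNext(String item) {
                    System.out.println("got: " + item);
                    subscription.request(1); // request the next item
                }

                @Override
                public void onError(Throwable throwable) { throwable.printStackTrace(); }

                @Override
                public void onComplete() { System.out.println("done"); }
            });
            publisher.submit("hello");
            publisher.submit("world");
            publisher.close();
            Thread.sleep(500); // crude wait for async delivery in this sketch
        }
    }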
RxJava is a Java library that uses the ReactiveX-style API design to provide a rich set of operators over reactive (push) dataflows. Version 2, through Flowable and the various XxxProcessors, implements the Reactive Streams API, which allows instances of Flowable to be consumed by other compatible libraries, and in turn one can wrap any Publisher into a Flowable to consume those and compose the rich set of operators with them.
So the Reactive Streams API is the minimal interface specification and RxJava 2 is one implementation of it, plus RxJava declares a large set of additional methods to form a rich and fluent API of its own.
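A small sketch of what that fluent operator style looks like with RxJava 2's Flowable (the numbers and the choice of operators are arbitrary):

    import io.reactivex.Flowable;

    public class RxSketch {
        public static void main(String[] args) {
            Flowable.range(1, 5)
                    .map(i -> i * i)          // transform each item
                    .filter(i -> i % 2 == 1)  // keep odd squares only
                    .subscribe(
                        i -> System.out.println("got: " + i),
                        Throwable::printStackTrace,
                        () -> System.out.println("done"));
        }
    }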
RxJava 1 inspired, among other sources, the Reactive Streams specification, but couldn't capitalize on it itself (it had to remain compatible). RxJava 2, being a full rewrite and a separate major version, could embrace and use the Reactive Streams specification (and even expand upon it internally, thanks to the Rsc project), and was released almost a year before Java 9. In addition, it was decided that both v1 and v2 keep supporting Java 6, and thus a lot of Android runtimes. Therefore it couldn't capitalize on the Flow API now provided by Java 9 directly, but only through a bridge. Such a bridge is required by and/or provided in other Reactive Streams-based libraries too.
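One such bridge ships, for instance, with the org.reactivestreams:reactive-streams artifact (1.0.2+, on Java 9) as the FlowAdapters class. A sketch of round-tripping between the two worlds:

    import java.util.concurrent.Flow;
    import org.reactivestreams.FlowAdapters;
    import org.reactivestreams.Publisher;
    import io.reactivex.Flowable;

    public class BridgeSketch {
        public static void main(String[] args) {
            // A Reactive Streams Publisher (Flowable implements it directly)...
            Publisher<Integer> rsPublisher = Flowable.just(1, 2, 3);

            // ...bridged to a java.util.concurrent.Flow.Publisher and back.
            Flow.Publisher<Integer> flowPublisher = FlowAdapters.toFlowPublisher(rsPublisher);
            Publisher<Integer> roundTripped = FlowAdapters.toPublisher(flowPublisher);

            Flowable.fromPublisher(roundTripped).subscribe(System.out::println);
        }
    }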
RxJava 3 may target the Java 9 Flow API, but this hasn't been decided yet, and depending on what features subsequent Java versions bring (e.g., value types), we may not have v3 within a year or so.
Till then, there is a prototype library called Reactive4JavaFlow which does implement the Flow API and offers a ReactiveX style rich fluent API over it.
Why would someone use the Java 9 Flow library over the much more diverse RxJava library or vice versa?
The Flow API is an interoperation specification and not an end-user API. Normally, you wouldn't use it directly; rather, you'd pass flows around to various implementations of it. When JEP 266 was discussed, the authors didn't find any existing library's API good enough to ship something default on top of the Flow API (unlike the rich java.util.stream.Stream). Therefore, it was decided that users will have to rely on 3rd-party implementations for now.
You have to wait for existing reactive libraries to support the Flow API natively (or through their own bridge implementations), or for new libraries to be implemented.
Providing a rich set of operators over the Flow API is the only reason a library would implement it. Datasource vendors (e.g., reactive database drivers, network libraries) can start implementing their own data accessors via the Flow API and rely on the rich libraries to wrap those and provide the transformations and coordination for them, without forcing everybody to implement all sorts of these operators.
Consequently, a better question is, should you start using the Flow API-based interoperation now or stick to Reactive Streams?
If you need working and reliable solutions relatively soon, I suggest you stick with the Reactive Streams ecosystem for now. If you have plenty of time or you want to explore things, you could start using the Flow API.
At the beginning, there was Rx, version one. It was a language-agnostic specification of reactive APIs with implementations for Java, JavaScript and .NET. Then they improved it and we saw Rx 2, which has implementations for different languages as well. At the time of Rx 2, the Spring team was working on Reactor, their own set of reactive APIs.
And then they all thought: why not make a joint effort and create one API to rule them all? That was how Reactive Commons was set up: a joint research effort for building highly optimized, Reactive Streams-compliant operators. Current implementors include RxJava 2 and Reactor.
At the same time, the JDK developers realized that this reactive stuff is great and worth including in Java. As is usual in the Java world, the de facto standard becomes de jure. Remember Hibernate and JPA, or Joda-Time and the Java 8 Date/Time API? So what the JDK developers did was extract the very core of reactive APIs, the most basic part, and make it a standard. That is how j.u.c.Flow was born.
Technically, j.u.c.Flow is much simpler: it consists of only four small interfaces, while the other libraries provide dozens of classes and hundreds of operators.
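Paraphrased from the JDK source, this is essentially the entire API surface:

    // The whole of java.util.concurrent.Flow (Java 9), paraphrased:
    public final class Flow {
        public interface Publisher<T> {
            void subscribe(Subscriber<? super T> subscriber);
        }
        public interface Subscriber<T> {
            void onSubscribe(Subscription subscription);
            void onNext(T item);
            void onError(Throwable throwable);
            void onComplete();
        }
        public interface Subscription {
            void request(long n); // back pressure signal
            void cancel();
        }
        public interface Processor<T, R> extends Subscriber<T>, Publisher<R> {}
    }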
I hope this answers the question "what is the difference between them".
Why would someone choose j.u.c.Flow over Rx? Well, because now it is a standard!
Currently the JDK ships with only one implementation of j.u.c.Flow: the HTTP/2 API, which is actually an incubating API. But in the future we may expect support for it from Reactor and RxJava 2, as well as from other libraries, like reactive DB drivers or even FS IO.
"What are the main differences between these two libraries?"
As you noted yourself, the Java 9 library is much more basic and essentially serves as a general API for reactive streams rather than a full-fledged solution.
"Why would someone use the Java 9 Flow library over the much more diverse RxJava library or vice versa?"
Well, for the same reason people use core library constructs over third-party libraries: one less dependency to manage. Also, because the Flow API in Java 9 is more general, it is less constrained by any specific implementation.
What are the main differences between these two libraries?
This is mostly meant as an informative comment (but too long to fit in one): JEP 266: More Concurrency Updates, responsible for the introduction of the Flow API in Java 9, states the following in its description (emphasis mine):
Interfaces supporting the Reactive Streams publish-subscribe framework, nested within the new class Flow. Publishers produce items consumed by one or more Subscribers, each managed by a Subscription. Communication relies on a simple form of flow control (method Subscription.request, for communicating back pressure) that can be used to avoid resource management problems that may otherwise occur in "push" based systems. A utility class SubmissionPublisher is provided that developers can use to create custom components. These (very small) interfaces correspond to those defined with broad participation (from the Reactive Streams initiative) and support interoperability across a number of async systems running on JVMs.
Nesting the interfaces within a class is a conservative policy allowing their use across various short-term and long-term possibilities. There are no plans to provide network- or I/O-based java.util.concurrent components for distributed messaging, but it is possible that future JDK releases will include such APIs in other packages.
Why would someone use the Java 9 Flow library over the much more diverse RxJava library or vice versa?
Looking at the wider prospect, this is completely opinion-based, depending on factors like the type of application a client is developing and its usage of the framework.
I am trying to set up a Kafka system. Since most of the existing code in my project is already in PHP, I will most probably be writing the producers in PHP itself. But I am comparatively much less constrained when it comes to choosing a language to write the consumer. Now that there are so many clients which can be used, I am in a fix.
In order to choose the right tech here, what are the various factors that should be kept in mind?
I would especially like to apply this knowledge to choose between the Java client and the Node client (multithreaded model vs. async model).
Any help will be highly appreciated.
The Java client is the most advanced client and is officially supported by the Kafka project; most other clients are third-party projects, and many do not implement all available features.
Thus, I would recommend using the Java client.
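As a minimal sketch of the Java consumer (the broker address, group id and topic name are placeholders, and poll(Duration) assumes a reasonably recent client version):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // adjust to your brokers
            props.put("group.id", "my-consumer-group");       // illustrative group id
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                while (true) {
                    // Fetch whatever is available, waiting up to 500 ms
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                    }
                }
            }
        }
    }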
Kafka is written in Scala and Java, and Kafka's native API is Java, so this is the only language where you're not using a third-party library. You always have an edge over writing in other languages, which carry additional overhead.
Node.js isn't optimized for high-throughput applications such as Kafka. So if you need the high processing rates that come standard with Kafka, stick with Java, or perhaps C++.
Also, I believe Kafka consumer clients written in Java have good community support. So it makes sense to implement the consumer in Java, as long as you don't have any other dependency stopping you.
Also, check this out for benchmarking results using various Kafka clients. The differences are stark:
Client Type    Throughput (no. of messages)
Java           40,000 - 50,000
Go             28,000 - 30,000
Node           6,000 - 8,000
Kafka-pixy     700 - 800
Logstash       250
As far as Kafka goes, I'd use any of the languages with an official Confluent-supported client: JVM, C/C++, .NET, Python, Go.
I'm sure you can get others to work, like Node or PHP, and maybe those can use the C library, but I would prefer something with official language support and a broader user base to ask questions of.
I have 2 processes that need to communicate on the same PC and across different PCs. In the local case the communication is between different processes, e.g. Process A and Process B.
In the remote case it will be between 2 instances of Process A running on different PCs.
I will create them from scratch, and I am wondering what the best approach is. I am aware of RMI and sockets, but I was wondering whether, for my case as described, and taking into account that the messages exchanged are small and the API surface really small, there is a standard approach/library for this.
Any suggestions are highly welcome.
Update after @EJP's comments:
My interest is 1) to implement the requirement for communication in a lightweight manner, since the exposed API will be really small and the messages as well, and 2) to use and learn a new popular framework if possible (I already know RMI and sockets).
If you are just looking for messaging frameworks, there are a bunch available, such as:
RabbitMQ - http://www.rabbitmq.com/
ZeroC Ice - http://www.zeroc.com/ice.html
AMQP - http://www.amqp.org
OpenSplice DDS - http://www.prismtech.com/opensplice
But when you use a 3rd-party framework, you are adding an additional dependency to your application. If it is something very simple, like your case, perhaps writing a TCP client/server would be sufficient for a client/server paradigm; or, if you are looking for a publisher/subscriber paradigm, you can look into using UDP multicast. You just need your data class to implement Serializable if you want to be able to marshal and unmarshal your data to a buffer and send it over the network using the typical Java socket API.
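A minimal sketch of that approach, assuming a hypothetical ControlMessage payload class (port number and command string are illustrative):

    import java.io.*;
    import java.net.*;

    // Hypothetical payload class; any Serializable type works the same way.
    class ControlMessage implements Serializable {
        private static final long serialVersionUID = 1L;
        final String command;
        ControlMessage(String command) { this.command = command; }
    }

    public class SimpleServer {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(9000)) {
                while (true) {
                    // Accept one connection at a time and read a single object
                    try (Socket client = server.accept();
                         ObjectInputStream in = new ObjectInputStream(client.getInputStream())) {
                        ControlMessage msg = (ControlMessage) in.readObject();
                        System.out.println("Received: " + msg.command);
                    }
                }
            }
        }
    }

    class SimpleClient {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("localhost", 9000);
                 ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream())) {
                out.writeObject(new ControlMessage("run-maintenance-task"));
            }
        }
    }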
I strongly suggest having a look at Thrift. From all the technologies I've used (web services, RMI, XML-RPC, Corba comes to mind) it is currently my favourite. Essentially the steps involved are:
Download the Thrift compiler.
Add the Maven dependency (make sure it is the same version as the compiler!). I currently use 0.8.0.
Write your Thrift IDL (incredibly easy, google for it as there are plenty of examples).
Compile it for Java.
Write your server/client.
In general, you can whip together a server and a client in about 30 lines of code. In terms of speed and reliability it has never failed me before.
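As a hedged sketch, the client side typically looks like this, where Calculator is a hypothetical service generated from your Thrift IDL (host, port and the add method are illustrative):

    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.protocol.TProtocol;
    import org.apache.thrift.transport.TSocket;
    import org.apache.thrift.transport.TTransport;

    public class ThriftClientSketch {
        public static void main(String[] args) throws Exception {
            // Open a plain socket transport to the (assumed) Thrift server
            TTransport transport = new TSocket("localhost", 9090);
            transport.open();
            TProtocol protocol = new TBinaryProtocol(transport);

            // "Calculator" is generated from your IDL; the stub hides all wire details
            Calculator.Client client = new Calculator.Client(protocol);
            System.out.println(client.add(1, 2)); // assumes an `add` method in the IDL

            transport.close();
        }
    }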
You might have a look at Versile Java (full disclosure: I am one of the developers); it satisfies at least your criterion #1. From the API documentation, here are some examples of writing remote-enabled objects, running a service, and connecting to a service.
If you want to learn something new, then I'd look at OpenSplice. The reason is pretty simple: among the technologies suggested above, it is the only one that provides you with data-centric abstractions.
The cool thing about OpenSplice is that it gives you the abstraction of a Global Data Space, yet the implementation of this global data space is fully distributed and very high-performance.
Take a look at some of the slides available at http://www.slideshare.net/angelo.corsaro and I am sure you'll get in love with the technology.
Finally, OpenSplice is Open Source.
Happy Hacking.
JMX is a good alternative.
Example:
http://www.javalobby.org/java/forums/t49130.html
IBM JMX Example
http://alvinalexander.com/blog/post/java/source-code-java-jmx-hello-world-application
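A minimal sketch of a standard MBean registered with the platform MBean server (the interface/class names and ObjectName domain are illustrative):

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    // Standard MBean convention: interface name = implementation name + "MBean".
    interface MaintenanceMBean {
        void runTask(String name);
    }

    class Maintenance implements MaintenanceMBean {
        @Override
        public void runTask(String name) {
            System.out.println("Running task: " + name);
        }
    }

    public class JmxSketch {
        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName name = new ObjectName("com.example:type=Maintenance"); // illustrative domain
            server.registerMBean(new Maintenance(), name);
            System.out.println("MBean registered; attach with JConsole to invoke runTask");
            Thread.sleep(Long.MAX_VALUE); // keep the JVM alive for remote calls
        }
    }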
Our frontend is simple Jetty (might be replaced with Tomcat later on) server. Through servlets, we are providing a public HTTP API (more or less RESTful) to expose our product functionality.
In the backend, we have a Java process that does several kinds of maintenance tasks. While the backend process usually runs its own tasks on its own schedule, now and then the frontend needs to wake up the backend to execute a certain task in the background.
Which (N)IO library would be ideal for this task? I found Netty, Grizzly, kryonet and plain RMI. For now, I am inclined to say Netty; it seems simple to use and is probably very reliable.
Does any of you have experience with this kind of setup? What would your choice be?
thanks!
Try to translate this document, which answers your question:
http://blog.xebia.fr/2011/11/09/java-nio-et-framework-web-haute-performance/
This company, made up of famous French Java EE experts, did a lot of proofs of concept of NIO servers in the context of a French challenge sponsored by VMware (USI 2011). It was about building a simple quiz app that can handle a load of 1 million connected users.
They won that challenge with great results.
Their implementation was Netty + GemFire, and they only replaced the CachedThreadPool with a MemoryAwareThreadPool.
Netty seems to offer great performances, and is well documented.
They also considered Deft, inspired by Tornado (Python/Facebook), but it was still a bit immature for them.
Edit: here's the translated link provided in the comments
My preference is Netty. It's simple yet flexible. Very fast and the community around Netty is awesome.
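As a rough sketch of how small a Netty 4 server can be for this wake-up use case (the port and the handler logic are illustrative):

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.*;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;
    import io.netty.handler.codec.string.StringDecoder;

    public class NettySketch {
        public static void main(String[] args) throws Exception {
            EventLoopGroup boss = new NioEventLoopGroup(1);
            EventLoopGroup workers = new NioEventLoopGroup();
            try {
                ServerBootstrap bootstrap = new ServerBootstrap()
                    .group(boss, workers)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            ch.pipeline().addLast(new StringDecoder(),
                                new SimpleChannelInboundHandler<String>() {
                                    @Override
                                    protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                                        // e.g. interpret msg as a command to trigger a background task
                                        System.out.println("Command received: " + msg);
                                    }
                                });
                        }
                    });
                ChannelFuture f = bootstrap.bind(9000).sync();
                f.channel().closeFuture().sync(); // block until the server socket closes
            } finally {
                boss.shutdownGracefully();
                workers.shutdownGracefully();
            }
        }
    }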
The company I work for is currently evaluating CoralReactor. It is commercial software, but it has the easiest API I have ever seen for Java NIO. My personal opinion is that Netty makes things too complicated, especially if you want to go garbage-free and single-threaded, which are requirements for many companies in the finance, advertisement and game industries.
I would decouple them by using JMS: just have some (set of) control queues your backend sits there listening on, and you're done. No need to write a custom NIO API here.
One sample provider is HornetQ. This can be run as an in-process JMS broker as well; it uses Netty under the covers.
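A minimal sketch of the backend side with plain JMS (the JNDI names are illustrative; each broker, HornetQ included, has its own way of obtaining a ConnectionFactory):

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class BackendControlListener {
        public static void main(String[] args) throws Exception {
            // JNDI lookup is illustrative; configuration is broker-specific.
            InitialContext ctx = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
            Queue controlQueue = (Queue) ctx.lookup("queue/control");

            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(controlQueue);
            consumer.setMessageListener(message -> {
                try {
                    String task = ((TextMessage) message).getText();
                    System.out.println("Frontend requested task: " + task);
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            });
            connection.start(); // begin delivering messages to the listener
        }
    }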
I have a bunch of Java code which was written using the Hibernate framework, originally destined to have a front end written using JSPs. However, the requirements for the front end have changed, and we've decided that a desktop client (which will be written in .NET) is a better match for our users.
I don't really want to waste the code that's already been written - can anybody suggest a good set of tools for writing a document-based web services interface that we will be able to access from .NET?
Thanks,
Jim
If you truly want a document based service interface (rather than an RPC style web service architecture), your best bet is going to be creating a SOAP based web service interface.
A quick glance at the Java site shows that the Metro stack might help a bit:
Java Web Services at a Glance
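As a hedged sketch, a document-style SOAP endpoint with JAX-WS (part of the Metro stack) can be as small as this (the service name, method and URL are illustrative; document/literal is the JAX-WS default binding):

    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    // Hypothetical service; in practice the method would delegate
    // to your existing Hibernate-backed code.
    @WebService
    public class OrderService {
        public String getOrderStatus(String orderId) {
            return "PENDING";
        }

        public static void main(String[] args) {
            // Publishes the service; the WSDL appears at http://localhost:8080/orders?wsdl
            Endpoint.publish("http://localhost:8080/orders", new OrderService());
        }
    }

The generated WSDL is what your .NET client would consume to build its proxy classes.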
We're developing a finance application with the exact architecture you describe. We reviewed several different options and have finally landed on using compressed CSV over HTTP.
CSV was chosen since the vast majority of the data was going to be displayed in a grid on the front end, we regularly had very large result sets (>250k rows), and it compresses really well.
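A sketch of what the server side of that can look like (the servlet, columns and rows are illustrative):

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.util.zip.GZIPOutputStream;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Illustrative servlet: streams a CSV result set gzip-compressed.
    public class CsvExportServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            resp.setContentType("text/csv");
            resp.setHeader("Content-Encoding", "gzip"); // client must decompress
            try (GZIPOutputStream gzip = new GZIPOutputStream(resp.getOutputStream());
                 PrintWriter out = new PrintWriter(gzip)) {
                out.println("id,name,amount"); // header row
                // In practice, iterate over your Hibernate query results here.
                out.println("1,ACME,1250.00");
            }
        }
    }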
We also looked at using:
ICE, but declined on that due to licensing costs and the need to reinvent so much.
Google's protocol buffers via servlets, but declined on that due to lack of C# support (as of last fall).
Compressed XML using WOX, but declined on that due to lock-in to a small thesis project for support and XML being too verbose.
The industry supports a couple of different options as well:
SOAP, but that has its own well documented issues.
IIOP, J-Integra has a product called Espresso which will allow you to do RMI from a front end.
I'd personally use some lightweight RPC protocol, be it XML-RPC or a homegrown one. SOAP, IMO, is way too fat and is not as interoperable as it's supposed to be. The simpler the better.
We have a quite large application using a Java RMI server and IIOP.NET for interoperability. We have used IIOP.NET with Sun RMI and BEA WebLogic (now Oracle) without major issues.