Camel-Twitter difference between direct and event-based - java

I'm considering the use of camel-twitter (The Twitter Component for Apache Camel: http://camel.apache.org/twitter.html). I want to use Twitter's Streaming API.
What is the difference between the types event and direct?
Does somebody have an example code for the usage of the event-driven consumer? (I only found this one so far https://fisheye6.atlassian.com/browse/camel/trunk/components/camel-twitter/src/test/java/org/apache/camel/component/twitter/SearchEventTest.java)

direct means that you make an explicit, direct call to trigger Twitter, for example using the direct component in Camel to call a route containing a Twitter endpoint.
event means an event-driven consumer, where your route reacts to events coming from Twitter, such as new tweets found in a search, etc.
And for examples, we also have this websocket twitter example: http://camel.apache.org/twitter-websocket-example.html

Keep in mind that the Streaming API cannot be used with the direct endpoint -- only event and polling are supported.
From an API usage standpoint, both event and polling work identically. A single stream listener is opened and maintained. Rate limit considerations do not differ.
The only difference is that the event endpoint delivers one message per event, as soon as it is received, while polling queues up the received messages and releases them on each poll.
So the differences are purely in how the messages are delivered inside Camel. With respect to the Twitter API, both streaming endpoints behave the same.
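As a rough illustration, a streaming filter consumed in event mode versus polling mode might look like the routes below. This is only a sketch: the exact URI options depend on your camel-twitter version, and the OAuth values are property placeholders you would have to supply.

```java
import org.apache.camel.builder.RouteBuilder;

public class TwitterStreamingRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Shared OAuth credentials (placeholders) appended to both endpoints
        String auth = "&consumerKey={{key}}&consumerSecret={{secret}}"
                    + "&accessToken={{token}}&accessTokenSecret={{tokenSecret}}";

        // event: each matching tweet is pushed into the route as soon as it arrives
        from("twitter://streaming/filter?type=event&keywords=apachecamel" + auth)
            .to("log:tweets.event");

        // polling: received tweets are buffered and released on each scheduled poll
        from("twitter://streaming/filter?type=polling&keywords=apachecamel" + auth)
            .to("log:tweets.polling");
    }
}
```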

The Direct type tells the consumer/producer that it will connect to Twitter each time the endpoint is activated somehow. Let's say you want to use a schedule saved in your database to do searches on Twitter:
Use a JDBC/JPA endpoint that consumes scheduling data
Dynamically create and register Quartz endpoints based on the scheduling data from your DB
Configure your Quartz endpoint to send a message to your Direct Twitter endpoint to do the search at that moment (see the sketch below)
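A minimal sketch of that last step, assuming for brevity a fixed cron expression and keywords on the endpoint (in the real setup the Quartz endpoints would be created dynamically from the DB schedule, and the OAuth values are placeholders):

```java
import org.apache.camel.builder.RouteBuilder;

public class ScheduledTwitterSearch extends RouteBuilder {
    @Override
    public void configure() {
        // A Quartz trigger fires the direct endpoint, which performs the search at that moment.
        from("quartz://twitter/scheduledSearch?cron=0+0+2+*+*+?")
            .to("direct:searchTwitter");

        from("direct:searchTwitter")
            .to("twitter://search?type=direct&keywords=apachecamel"
                + "&consumerKey={{key}}&consumerSecret={{secret}}"
                + "&accessToken={{token}}&accessTokenSecret={{tokenSecret}}")
            .to("log:searchResults");
    }
}
```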
You will be rate limited, always, no matter whether you use Streaming, Direct, or Polling.
In case you are using Streaming, please read the FAQ from the Twitter Developer Center.

Async tasks in java rest webservices

I am currently working on an order management system for an e-commerce portal.
The backend consists of REST web services in Java, while the front end is AngularJS.
The REST service in Java performs many tasks when an order is placed/updated:
store/update the order and its items in the DB
notify the 3rd-party logistics provider about this order
send an email notification to the customer
send an SMS notification to the customer
etc.
We already have an async queue implemented for another feature using a blocking queue. The options are:
1. use the same queue (current size is 200 and it is in memory) and post to it
2. create a new queue inside the REST web service application
3. integrate with 3rd-party queues
Can someone give insights on #3? Or is it wiser to go for #1 or #2?
Order management for an e-commerce portal is not a simple problem. It will most likely have scalability requirements, and using a simple blocking queue for async processing is not a good idea.
A JVM-based BlockingQueue is an in-memory queue and requires producers and consumers to be running in the same JVM process.
For sending emails whenever a customer places a new order, you need to ensure that the email was really sent and that an application restart does not result in the loss of data present in the blocking queue.
Hence, you should most likely use an out-of-process queue system such as Apache Kafka, Apache ActiveMQ, or equivalent.
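As a rough sketch of what option #3 could look like with ActiveMQ, the REST service only publishes a persistent order event and returns; the email/SMS/logistics work happens in separate consumers. The broker URL and queue name below are made up for illustration.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class OrderEventPublisher {

    private final ConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed broker URL

    /** Publish an "order placed/updated" event; consumers handle email, SMS, logistics. */
    public void publish(String orderJson) throws Exception {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer =
                    session.createProducer(session.createQueue("orders.events")); // hypothetical queue
            producer.setDeliveryMode(DeliveryMode.PERSISTENT); // survives broker/app restarts
            TextMessage message = session.createTextMessage(orderJson);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```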

Integrate api service with message queue

Currently I'm doing the integration work of one project. In this project, we need to expose a RESTful API with the Java framework Wink. Since we have several other components to integrate, we put a message queue (ActiveMQ) between the API layer and the other service parts. But this means the API layer will communicate with the lower level in an asynchronous way. In my understanding, the RESTful API should run in a synchronous way: for example, if one thread in the API layer receives a request, the response has to be returned in the same thread. So there is an internal mismatch between these two communication styles. My question is how we can integrate these two parts to make the API layer work without sacrificing the features of the message queue, such as reliability and performance.
Any suggestions will be appreciated here.
Thanks
Asynchronous callbacks are possible in REST communication; see this Jersey framework example:
https://jersey.java.net/documentation/latest/async.html
But yes, the latency should be controlled, as your client would be waiting for the server to respond; it would be good if the client calls it in an AJAX way.
The simplest way would be to fork a new task through an executor service, which sends a message on a channel to the lower-level API and listens for the response on another channel (MQ communication). On completion it returns a response, which the higher-level API then pushes back to the client.
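A rough sketch of that pattern with the JAX-RS 2 async API (the JmsGateway helper, its requestReply method, and the queue name are made up here; in practice the gateway would wrap the ActiveMQ request/reply exchange):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;

@Path("/orders")
public class OrderResource {

    /** Hypothetical blocking request/reply gateway over the message queue (stubbed here). */
    static class JmsGateway {
        String requestReply(String queue, String payload) {
            return "reply-for-" + payload; // real impl would send and wait on a reply queue
        }
    }

    private final ExecutorService executor = Executors.newCachedThreadPool();
    private final JmsGateway gateway = new JmsGateway();

    @GET
    @Path("/{id}")
    public void getOrder(@Suspended AsyncResponse asyncResponse, @PathParam("id") String id) {
        // Free the HTTP request thread; the worker sends a message to the request queue,
        // waits on the reply queue, and then resumes the suspended HTTP response.
        executor.submit(() -> {
            String reply = gateway.requestReply("orders.requests", id); // hypothetical call
            asyncResponse.resume(reply);
        });
    }
}
```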

Non-blocking queue of HTTP POST requests with persistence

Before we develop our custom solution, I'm looking for some kind of library which provides:
A non-blocking queue of HTTP requests
with these attributes:
Persisting requests to avoid losing them in case of:
network connectivity interruption
application quit, forced GC on a backgrounded app
etc.
Possibility of setting all of these fields:
Address
Headers
POST data
So please, is there anything usable right now that could save us a whole day of developing this?
Right now we don't need any callbacks on completed requests, nor saving of result data, as there won't be any.
In my humble opinion, a good and straightforward solution would be to develop your own layer (which shouldn't be so complicated) using a sophisticated framework for connection handling, such as Netty (https://netty.io/), together with a sophisticated framework for asynchronous processing, such as Akka (http://akka.io/).
Let's first look at Netty's support for HTTP at http://static.netty.io/3.5/guide/#architecture.8 :
4.3. HTTP Implementation
HTTP is definitely the most popular protocol in the Internet. There are already a number of HTTP implementations such as a Servlet container. Then why does Netty have HTTP on top of its core?
Netty's HTTP support is very different from the existing HTTP libraries. It gives you complete control over how HTTP messages are exchanged at a low level. Because it is basically the combination of an HTTP codec and HTTP message classes, there is no restriction such as an enforced thread model. That is, you can write your own HTTP client or server that works exactly the way you want. You have full control over everything that's in the HTTP specification, including the thread model, connection life cycle, and chunked encoding.
And now let's dig inside Akka. Akka is a framework which provides an excellent abstraction on top of the Java concurrency API, and it comes with APIs in Java and Scala.
It provides you a clear way to structure your application as a hierarchy of actors:
Actors communicate through message passing, using immutable messages so that you don't have to care about thread safety
Actors' messages are stored in mailboxes, which can be durable
Actors are responsible for supervising their children
Actors can run on one or more JVMs and can communicate using a wide number of protocols
It provides a lightweight abstraction for asynchronous processing, Future, which is easier to use than Java Futures.
It provides other fancy stuff such as Event Bus, ZeroMQ adapter, Remoting support, Dataflow concurrency, Scheduler
Once you become familiar with the two frameworks, it turns out that what you need can easily be coded through them.
In fact, what you need is an HTTP proxy coded in Netty that, upon receiving a request, immediately sends a message to an Akka FSM actor (http://doc.akka.io/docs/akka/2.0.2/java/fsm.html) which uses a durable mailbox (http://doc.akka.io/docs/akka/2.0.2/modules/durable-mailbox.html).
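As a very small sketch of the actor side (using the newer Akka Java API; the HttpPostRequest message type and the way it is executed are made up, and making the mailbox durable/persistent would require the configuration described in the links above):

```java
import java.util.Collections;
import java.util.Map;

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class RequestQueueActor extends AbstractActor {

    /** Immutable message carrying the fields the question asks for. */
    public static final class HttpPostRequest {
        public final String address;
        public final Map<String, String> headers;
        public final byte[] postData;

        public HttpPostRequest(String address, Map<String, String> headers, byte[] postData) {
            this.address = address;
            this.headers = headers;
            this.postData = postData;
        }
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(HttpPostRequest.class, req -> {
                    // Execute the POST here (e.g. with a Netty HTTP client). With a durable
                    // mailbox configured, queued requests could survive a restart.
                    System.out.println("POST " + req.address);
                })
                .build();
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("http-queue");
        ActorRef queue = system.actorOf(Props.create(RequestQueueActor.class), "requests");
        queue.tell(new HttpPostRequest("http://example.org/api",
                Collections.<String, String>emptyMap(), new byte[0]), ActorRef.noSender());
    }
}
```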
Here is a link to an open-source library that was the Master's thesis of a student at the Czech Technical University in Prague. It is a very large and powerful library and mainly focuses on location. The good thing about it, though, is that it omits the headers and other overhead that REST has.
It is the latest fork, and hopefully it will at least give you inspiration for your own solution.
How about these concurrent collections:
http://mcg.cs.tau.ac.il/projects/concurrent-data-structures
I hope the license is OK.
You'll want to have a look at these two posts (linked at the end of this answer).
Very basically, an approach that works well for me is to separate the requests from the queue and the executor.
Requests are executed as Runnables or Callables. Inherit from them to create different kinds of requests to your API or service. Set them up there, adding headers and/or body, prior to executing them.
Enqueue those requests in a queue (choose whichever fits you best; I'd say LinkedBlockingQueue will do the job) linked to an executor from within a bound service, and call them from your activity or any other scope. If you don't need to get responses and callbacks, you can avoid using Guava for listening to futures or create your own callbacks.
I'll stay tuned. If you need more depth I can post some specific pieces of code. There's the source of a basic example in the first link though.
http://ugiagonzalez.com/2012/08/03/using-runnables-queues-and-executors-to-perform-tasks-in-background-threads-in-android/
http://ugiagonzalez.com/2012/07/02/theres-life-after-asynctasks-in-android/
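A tiny sketch of that structure (class and field names are made up; persistence across process restarts would still have to be added on top, e.g. by writing pending requests to storage):

```java
import java.util.Map;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RequestQueue {

    /** One kind of request; subclass or add fields for other endpoints. */
    static class PostRequest implements Runnable {
        final String address;
        final Map<String, String> headers;
        final byte[] body;
        int currentRetry = 0;

        PostRequest(String address, Map<String, String> headers, byte[] body) {
            this.address = address;
            this.headers = headers;
            this.body = body;
        }

        @Override
        public void run() {
            // Open the connection, write headers and body, check the response code.
            // On failure, the request could be re-enqueued (see the retry-queue idea in the update below).
        }
    }

    // Executor backed by a LinkedBlockingQueue, as suggested above.
    private final ThreadPoolExecutor executor = new ThreadPoolExecutor(
            1, 4, 30, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

    public void enqueue(PostRequest request) {
        executor.execute(request);
    }
}
```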
Update:
You can create another queue for those requests that were impossible to execute.
One approach that comes to my mind would be to add all your failed requests to the retry queue. The retry queue would keep trying to re-run these tasks while the phone still thinks that there's any kind of internet connection available. In the request object you can set a max number of retries and compare it to a currentRetry counter, incrementing it on every retry.
Mmm this might be interesting. I'll definitely think about including that in my library.

Can JMS message-queue be used in this context for my use case?

I'm new to message queue systems and have read a bit about JMS in particular. This question was also helpful in better understanding real-world use cases of JMS.
Ours is a web-based application, and I am trying to find out whether a particular flow in our application context can leverage JMS effectively. The context is explained below:
There is an Email Event in the application that will trigger an email to a set of predefined listeners whenever an event happens in the application. An event could be a consultant submitting a timesheet, a consultant submitting an expense, etc. The application allows configuring a different set of listeners for different events.
My question here is whether JMS can be used for the triggering of emails, so that it is loosely coupled/decoupled from the application logic (in this case, submitting a timesheet/expense) by not waiting for all emails to be delivered to the listeners. Does it make sense to use JMS in this context? I also want to understand whether my perception/view of the JMS architecture is correct in this regard. Comments/ideas/thoughts/suggestions/advice from experienced users are really appreciated.
NOTE: Our tools of trade are: Java, JDK1.6, JSP, Apache Tomcat v6.0.10, PostgreSQL v8.2.3
Definitely. You can create a JMS message that contains the appropriate properties. Pre-configured listeners will subscribe to a topic and receive messages filtered by a selector. Since JMS selectors use an SQL-like syntax, you can create your JMS subscribers dynamically and build the selector according to the application requirements and current configuration.
For example, type='timesheet' AND from='Consultant' will select only timesheets submitted by a consultant. Another selector, type='expenses' AND from='Bookkeeper', will get other events (and probably format the email differently).
And this one: type='systemcrash' from='monitor' will send SMS to System Administrator at 3:00AM, Sunday :).
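A small sketch of the publishing and subscribing side (the topic, property names, and values follow the examples above; how the Session is obtained is left out):

```java
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

public class EmailEvents {

    /** Publish an application event with properties that selectors can filter on. */
    public static void publish(Session session, Topic eventTopic,
                               String type, String from, String payload) throws Exception {
        MessageProducer producer = session.createProducer(eventTopic);
        TextMessage message = session.createTextMessage(payload);
        message.setStringProperty("type", type);   // e.g. "timesheet"
        message.setStringProperty("from", from);   // e.g. "Consultant"
        producer.send(message);
    }

    /** A listener built dynamically from configuration, using a selector. */
    public static MessageConsumer timesheetListener(Session session, Topic eventTopic) throws Exception {
        MessageConsumer consumer =
                session.createConsumer(eventTopic, "type = 'timesheet' AND from = 'Consultant'");
        consumer.setMessageListener(new MessageListener() {
            @Override
            public void onMessage(Message m) {
                // send the email to this listener's configured recipients here
            }
        });
        return consumer;
    }
}
```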

What is Java Message Service (JMS) for?

I am currently evaluating JMS and I don't get what I could use it for.
Currently, I believe this would be a use case: I want to create a SalesInvoice PDF and print it when a SalesOrder leaves the Warehouse, so during the Delivery transaction I could send a transactional print request which only begins when the SalesOrder transaction completes successfully.
Now I found out most JMS products are standalone servers.
Why would I need a Standalone Server for Message Processing, vs. e.g. some simple inproc processing with Quartz scheduler?
How does it interact with my application?
Isn't it much too slow?
What are Usecases you already implemented successfully?
JMS is an amazingly useful system, but not for every purpose.
It's essentially a high-level framework for sending messages between nodes, with options for discovery, robustness, etc.
One useful use case is when you want a client and a server to talk to one another, but without the client actually having the server's address (E.g., you may have more than one server). The client only needs to know the broker and the queue/topic name, and the server can connect as well.
JMS also adds robustness. For instance, you can configure it so that if the server dies while the client sends messages or the other way around, you can still send messages from the client or poll messages from the server. If you ever tried implementing this directly with sockets - it's a nightmare.
The scenario you describe sounds like a classic J2EE problem; why are you not using a J2EE framework? JMS is often used inside J2EE for communications, but you get all the other benefits as well.
What is Java Message Service (JMS) for?
JMS is a messaging standard that allows Java EE applications to create, send, receive, and consume messages in a loosely coupled, reliable, and asynchronous way. I'd suggest reading the Java Message Service API Overview for more details.
Why would I need a Standalone Server for Message Processing, vs. e.g. some simple inproc processing with Quartz scheduler?
Sure, in your case, Quartz is an option. But what if the invoice system is a remote system? What if you don't want to wait for the answer? What if the remote system is down when you want to communicate with it? What if the network is not always available? This is where JMS comes in. JMS allows you to send a message that is guaranteed to be delivered and to consume it in a transactional way (sending or consuming a message can be part of a global transaction).
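As a rough illustration of the guaranteed/transactional side (ActiveMQ is used here only as an example broker, and the queue name is made up; a real invoice system would likely use a message listener rather than a single blocking receive):

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class InvoicePrintConsumer {

    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();

        // Transacted session: the message is only removed from the queue on commit.
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer consumer = session.createConsumer(session.createQueue("invoices.print"));

        Message message = consumer.receive();
        try {
            // create the SalesInvoice PDF and print it here ...
            session.commit();   // acknowledge: the work is done
        } catch (Exception e) {
            session.rollback(); // the message will be redelivered, e.g. after a restart
        } finally {
            connection.close();
        }
    }
}
```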
How does it interact with my application?
JMS supports two communication modes: point-to-point and publish/subscribe (if this answers the question).
Isn't it much too slow?
The MOMs I've been working with were blazing fast.
What are Usecases you already implemented successfully?
I've used it in systems such as a reservation application, a banking back office (processing market data), or more simply to send emails.
See also
EJB Message-Driven Beans
Why would I need a Standalone Server for Message Processing, vs. e.g. some simple inproc processing with Quartz scheduler?
The strength of JMS lies in the fact that you can have multiple producers and multiple consumers for the same queue, and the JMS broker manages the load.
If you have multiple producers but a single consumer, you can use other approaches as well, such as a Quartz scheduler and a database table. But as soon as you have multiple consumers, the locking scheme becomes very hard to design; better to go for an already proven messaging solution. See these other answers of mine for a few more details: Why choosing JMS for asynchronous solution? and Producer/consumer system using database
The other points are just too vague to be answered.
I've used it on a number of projects. It can help with scalability, decoupling of services, high availability. Here's a description of how I used it on a project several years ago:
http://coders-log.blogspot.com/2008/12/favorite-projects-series-installment-2.html
The description explains what JMS brought to the table for this particular project, but other projects will use messaging systems for a variety of reasons.
Messaging is usually used to interconnect different systems and send requests/commands asynchronously. A common example is a bank client application requesting an approval for a transaction. The server is located in another bank's system. Both systems are connected in an Enterprise Service Bus. The request goes into the messaging bus, which instantly acknowledges the reception of the message. The client can go on with processing. Whenever the server system becomes available, the bus forwards the message to it. Of course there needs to be a second path, for the server to inform the client that the transaction executed successfully or failed. This again can be implemented with JMS.
Please note that the two systems do not both need to implement JMS. One can use JMS and the other one MSMQ. The bus will take care of the interconnection.
JMS is a message-oriented middleware.
Why would I need a Standalone Server for Message Processing, vs. e.g. some simple inproc processing with Quartz scheduler?
It depends on what other components you may have, I guess. But I don't know anything about Quartz.
How does it interact with my application?
You send messages to the broker.
Isn't it much too slow?
Compared to what?
What are Usecases you already implemented successfully?
I've used JMS to implement a SIP application server, to communicate between the various components.
From the Javadoc:
The Java Message Service (JMS) API provides a common way for Java programs to create, send, receive and read an enterprise messaging system's messages.
In other words, and contrary to every other answer here, JMS is nothing more than an API, which wraps access to third-party Message Brokers, via 'JMS Providers' implemented by the vendor. Those Message Brokers, such as IBM MQ and dozens of others, have the features of reliability, asynchronicity, etc. that have been mentioned in other answers. JMS itself provides exactly none of them. It is to Message Brokers what JDBC is to SQL databases, or JNDI is to LDAP servers (among other things).
I have found a very good explanation of JMS with an example.
It is a simple chat application where JMS queues are used to communicate messages between users, and messages stay in the queue if the receiver is offline.
In this example implementation they have used:
XSD to generate domain classes.
Eclipse EE as IDE.
JBoss as web/application server.
HTML/JavaScript/JQuery for UI.
Servlet as controller.
MySQL as DB.
The JBoss configuration steps for the queue are explained nicely.
It's available at http://coder2design.com/messaging-service/
The downloadable code is also available there.
