Need a framework for a notification/dashboard system? [closed] - java

I've been asked to implement a notification system in a Cloud Java application. The premise is that administrators or components of the application can send either specific messages to individual users, or broadcast announcements to all users.
Notifications would be categorized by severity, type (outages, new services, etc.), and corresponding component.
Users would be able to select types and components they're interested in, and how they'd like to receive those notifications (by e-mail, just shown on dashboard, SMS, etc.). Users could acknowledge or delete notifications, so they won't show up for that user anymore.
Although I'm sure this would be interesting to implement from scratch, it just feels like there should be an existing system, Apache project, commercial project, etc. that does just this and would avoid me having to reinvent the wheel.
My question is: Can anyone recommend a framework for notification tracking that could be integrated into an existing application and automatically handle all the back end stuff? Commercial or open source are fine, as long as the licensing terms are commercial friendly (no GPL or LGPL, please).

I guess what you are looking for is something similar to Amazon Simple Notification Service (SNS). But first let's set some things straight:
You're trying to send email/SMS - and both require infrastructure, not just frameworks/libraries. I guess your client (or any client for that matter) would have an email server running somewhere, so you won't have a direct cost impact, but sending SMS does incur an infrastructure overhead.
You won't have an out-of-the-box solution. Since you'll have additional infrastructure, you will end up writing at least a fair amount of integration code against it.
Keeping all that in mind, here are the options, listed in order of difficulty:
Use Amazon SNS
Use Cloud Message Bus (CMB) - an open-source clone of Amazon SNS. It has the same API format as Amazon SNS, so you'll use it the same way you use Amazon SNS.
Use Apache Camel with its various Email/SMS components. Apache Camel is an enterprise routing framework: developers push messages into a message queue, and various routers take those messages and send them elsewhere, with routers for sending email/SMS available out of the box. You would first create a topic to which you post messages. When a user registers for email notifications, you add an email endpoint for him/her, and when they opt out of email, you remove that endpoint. Basically it's very close to designing your own solution, except you don't have to write the code for actually sending SMS/email; you just write the integration code that adds and removes those endpoints as users subscribe to notifications (see the sketch just after this list).
Roll your own. Your solution would end up being very similar to the Apache Camel approach: you will have a message queue, topics and listeners, except you'll also be writing your own code to send all the emails/SMS.
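To make the Camel option a bit more concrete, here is a minimal sketch of what such a route might look like; the broker, SMTP host, addresses and queue names are hypothetical placeholders, and the SMS branch just logs until a real SMS component or gateway is plugged in:

    import org.apache.camel.builder.RouteBuilder;

    // Sketch only: fans notifications out per delivery channel based on a header.
    public class NotificationRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("activemq:topic:notifications")
                .choice()
                    .when(header("channel").isEqualTo("email"))
                        .to("smtp://mail.example.com?from=noreply@example.com&to=user@example.com")
                    .when(header("channel").isEqualTo("sms"))
                        .to("log:smsGateway")   // replace with your SMS component of choice
                    .otherwise()
                        .to("activemq:queue:dashboard.notifications");
        }
    }

In a real system the recipient address would of course come from the user's subscription data (for example via a recipient list or a dynamic endpoint) rather than being hard-coded.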
Edit: Minor clarifications

It looks like your requirement is some sort of live data in a browser based web application. If that is correct, there have been great strides in some of the HTML 5 apis, specifically web sockets.
WebSockets are an extension of the HTTP protocol that allows bidirectional communication between client and server. There is a downside, however: believe it or not, browser support is still pretty scarce, and some complications arise with HTTP proxies in the wild.
Typically, to circumvent the lack of widely accepted support, quite a few JavaScript/server-side frameworks have surfaced that seem really promising. These frameworks typically take care of fallback support when WebSockets aren't supported. Some fallback technologies include server-sent events, JSONP, long polling, short polling, etc.
Two excellent open source projects come to mind:
1) Atmosphere: https://github.com/Atmosphere/atmosphere
2) Socket.IO: http://socket.io/
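To give a feel for what the raw API looks like on the Java server side, independent of those frameworks, here is a bare-bones endpoint using the Java WebSocket API (JSR 356); the path and the echo behaviour are just placeholders:

    import java.io.IOException;
    import javax.websocket.OnMessage;
    import javax.websocket.OnOpen;
    import javax.websocket.Session;
    import javax.websocket.server.ServerEndpoint;

    // Minimal JSR 356 endpoint: the container manages the socket, and the server
    // can push text frames to the client at any time via the Session.
    @ServerEndpoint("/live")
    public class LiveDataEndpoint {

        @OnOpen
        public void onOpen(Session session) {
            System.out.println("client connected: " + session.getId());
        }

        @OnMessage
        public void onMessage(String message, Session session) {
            try {
                // Echo for demonstration; a real dashboard would push server-side events here.
                session.getBasicRemote().sendText("echo: " + message);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

On the client side, the browser's native WebSocket object (or one of the frameworks above as a fallback layer) would connect to ws://yourhost/live.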

Related

Implementing a persistent redelivery in a Java Boot integration microservice using Apache Camel and/or ActiveMQ [closed]

I want to develop a small integration microservice, which implements communication between two existing systems.
System A produces and consumes messages on Apache Kafka topics.
System B has a REST API and is capable of calling REST API callbacks.
The solution I'm trying to develop has to be able to communicate with each system, transform the messages and deliver them to the other system (while doing extensive logging, etc.). The number of messages and the size of each message are small. Performance will not be an issue.
My chosen stack is Spring Boot + Apache Camel for routing + ELK for logging (+ templating engine etc, which is not really relevant).
My main concern is the requirement for guaranteed delivery. From what I've read, Camel stores messages in memory, which means restarting/updating my microservice could lose some data, which is unacceptable.
What are the relevant industry standards for implementing guaranteed delivery?
I'm looking into ActiveMQ, but not sure if I need to bring the big guns since the solution is small and the amount of data is small. I'm not too opposed to the idea though.
I guess my questions are:
What are the elegant ways of implementing persistent guaranteed delivery when integrating a third-party Kafka system with a third-party REST system?
Is bringing in a whole message broker for the sake of a small microservice too much?
In short: no, there is not an 'elegant' way to implement that. Kafka really can't do guaranteed delivery in the original sense of the term. Kafka solutions generally rely on all endpoints supporting the ability to replay the same data multiple times (or on consumers tracking what has already been delivered and dropping repeated messages). Same with REST: REST endpoints (and HTTP in general) do not support guaranteed delivery.
This is a subjective question, but I'll try to answer objectively: ActiveMQ has a smaller footprint than Kafka. If you are using Camel, you can readily co-locate small-footprint ActiveMQ brokers with the Camel routes. This is a common architecture, and one that has been around since Camel's inception. Alternatively, if the message volume is low, a stand-alone ActiveMQ broker is as simple as running a single container or Java process.
"Kafka to Queued Messaging" is a common pattern to then provide guaranteed delivery to other systems. This centralizes the error handling and retry for network links in the queued-messaging broker. Your Camel routes would then read from the queue(s). You can safely treat a queue-to-one-REST-endpoint flow as XA-like guaranteed delivery by using JMS local transactions.

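As a rough Camel sketch of that "Kafka to Queued Messaging" pattern (endpoint URIs and host names are made up; it assumes the camel-kafka, camel-activemq and camel-http components are on the classpath):

    import org.apache.camel.builder.RouteBuilder;

    // Kafka -> local persistent queue -> REST, with broker redelivery on failure.
    public class GuaranteedDeliveryRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Stage 1: drain the third-party Kafka topic into a persistent ActiveMQ queue.
            from("kafka:systemA-events?brokers=kafka.example.com:9092&groupId=integration")
                .to("activemq:queue:systemA.inbound");

            // Stage 2: consume inside a JMS local transaction; if the HTTP call throws,
            // the message rolls back onto the queue and the broker redelivers it.
            from("activemq:queue:systemA.inbound?transacted=true")
                // message transformation (templating, logging, ...) would go here
                .to("http://system-b.example.com/api/callback");
        }
    }

The key point is the transacted consumer on the queue: if the HTTP call fails, the JMS transaction rolls back and the broker redelivers the message later, so nothing is lost across restarts of the microservice.
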
Broadcast to everyone on lan [closed]

I am attempting to contact everyone on a LAN to discover which devices are currently using an IP and running my service. Every device running the service will know which other devices are connected when they come online. I have basic networking experience (TCP/UDP), but I haven't done much with more complicated communication packages. I wanted to post what I have researched/tried so far and get some expert responses to limit my trial-and-error time on future potential solutions.
Requirements:
Currently using Java, but I require cross-language communication.
Must be done in an acceptable time frame (a couple of seconds max) and preferably reliably.
I would like to use similar techniques for both the broadcast and later communications to avoid the added complexity of multiple packages/technologies.
Currently I am planning on a heartbeat to known IPs to signal that each device is still connected, but I may want to continuously broadcast to the LAN later.
I am interested in using cross-language RPC communication for this service, but this technique doesn't necessarily have to use that.
Later communication(non-broadcast) must be reliable.
Research and things attempted:
UDP - Worried about cross-language communication, lack of reliable delivery, and that it would add another way of communicating rather than using one solution like the ones below. I would prefer to avoid it if a more complete solution can be found.
Apache Thrift - Currently I have tried to iterate through all potential IPs and connect to each one. This is far too slow since the timeout is long for each attempted connection (when I call open). I have yet to find any broadcast option.
ZeroMQ - Done very little testing with basic ZeroMQ; I have only used a wrapper of it in the past. The pub/sub features seem useful for this scenario, but I am worried about subscribing to every IP on the LAN. I'm also worried about what will happen when I attempt to subscribe to an IP that doesn't yet have a service running on it.
Do any of these recommendations seem like they will work better than the others given my requirements? Do you have any other suggestions of technologies which might work better?
Thanks.
What you specify is basically two separate problems: discovery/monitoring and a service provider. Since these two issues are somewhat orthogonal, I would use two different approaches to implement this.
Discovery/monitoring
Let each device continuously broadcast a (small) heartbeat/state message on the LAN over UDP on a predefined port. This heartbeat should contain the IP/port of the sender, along with other interesting data, for example an address (URL) of the service(s) this device provides. Choose a compact message format if you need to keep bandwidth utilization down, for example Protocol Buffers (available in many languages), or JSON for readability. These messages should be published periodically, for example every 5 seconds.
Now, let each device listen for incoming messages on the broadcast address and keep an in-memory map [sender, last-recorded-time + other data] of all known devices. Iterate over the map, say, every second and remove senders that have been silent for x heartbeat intervals (e.g. 3 x 5 seconds). This way each node will know about all other responding nodes.
You do not have to know about any IPs, you do not need any extra directory server, and you do not need to iterate over all possible IP addresses. Also, sending/receiving data over UDP is much simpler than over TCP and it does not require any connections. It also generates less overhead, meaning less bandwidth utilization.
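A bare-bones Java sketch of this scheme (port number, interval and payload are arbitrary placeholders; a real heartbeat would carry the service address in Protocol Buffers or JSON as described above):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    // Broadcasts a small heartbeat datagram every 5 seconds on an arbitrary port (9876).
    public class HeartbeatSender {
        public static void main(String[] args) throws Exception {
            InetAddress broadcast = InetAddress.getByName("255.255.255.255");
            byte[] payload = "{\"service\":\"http://192.168.1.10:8080\"}".getBytes(StandardCharsets.UTF_8);
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setBroadcast(true);
                while (true) {
                    socket.send(new DatagramPacket(payload, payload.length, broadcast, 9876));
                    Thread.sleep(5000);
                }
            }
        }
    }

    // Listens for heartbeats and records when each sender was last heard from.
    class HeartbeatListener {
        public static void main(String[] args) throws Exception {
            java.util.Map<String, Long> lastSeen = new java.util.concurrent.ConcurrentHashMap<>();
            try (DatagramSocket socket = new DatagramSocket(9876)) {
                byte[] buffer = new byte[1024];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                    socket.receive(packet);
                    lastSeen.put(packet.getAddress().getHostAddress(), System.currentTimeMillis());
                }
            }
        }
    }

The periodic sweep that expires senders silent for a few heartbeat intervals is left out for brevity; it is just a scan of the lastSeen map.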
Service Provider
I assume you would like some kind of request-response here. For this I would choose a simple REST-based API over HTTP, talking JSON. Switch out the JSON payload for Protocol Buffers if your payload is fairly large, but in most cases JSON would probably work just fine.
All-in-all this would give you a solid, performant, reliable, cross-platform and simple solution.
Take a look at the Zyre project in the ZeroMQ Guide (Chapter 8). It's a fairly complete local network discovery and messaging framework, developed step by step. You can definitely reuse the UDP broadcast and discovery, maybe the rest as well. There's a full Java implementation too, https://github.com/zeromq/zyre.
I would use JMS, as it can be cross-platform (for the client at least). You still have to decide how you want to encode data, and unless you have specific ideas I would use XML or JSON as these are easy to read and check.
You can use ZeroMQ for greater performance and lower level access. Unless you know you need this, I suspect you don't.
You may benefit from the higher level features of JMS.
BTW: these services do service discovery implicitly. There is no particular need (except for monitoring) to know about IP addresses or whether services are up or down. Their design assumes you want to be protected from having to know these details.
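For illustration, a minimal JMS publisher sending a JSON-encoded status message (ActiveMQ is used here only as one concrete JMS provider; the broker URL, topic name and payload are made up):

    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.ActiveMQConnectionFactory;

    // Publishes one JSON status message to a JMS topic and exits.
    public class JmsStatusPublisher {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://broker.example.com:61616");
            Connection connection = factory.createConnection();
            try {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(session.createTopic("device.status"));
                TextMessage message = session.createTextMessage(
                        "{\"device\":\"node-42\",\"state\":\"online\"}");
                producer.send(message);
            } finally {
                connection.close();
            }
        }
    }

Other devices would subscribe to the same topic and parse the JSON payload, which keeps the wire format readable and language-neutral.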

Understanding ESB [closed]

Though I understand what system integration is, I am a bit new to all the newest approaches. I am fairly familiar with web services and JMS, but I feel utterly confused by the concept of an ESB.
I have done some research but I still don't really get it. I work much better by example rather than theory.
So can someone please give a simple example to demonstrate why one would use an Enterprise Service Bus versus just a queue, a web service, the file system, or something else?
I would like the example to highlight the capabilities of the ESB that could not be achieved by any other conventional integration method, or at least not with the same efficiency.
All replies are greatly appreciated.
Thanks,
Bob
This is going to sound a bit harsh, but basically if you needed an ESB, you'd know you needed an ESB.
For a majority of use cases, the ESB is a solution looking for a problem. It's a stack of software over-engineered for most scenarios. Most folks simply do not do enough variety of processing to warrant it. The "E" for "Enterprise" is notable here.
In a simple case:
tail -F server.log | grep SEVERE >> severe.log
THAT is a trivial example of an instance of an ESB scenario.
"But that's just a UNIX command pipeline!"
Yes, exactly.
The "ESB" part is the "|" and the ">>"
The ESB is the run time within which you can link together modules, monitor traffic, design all sorts of whacky scenarios like fan outs and joins, etc. etc.
ESBs are notable for having a bunch of connectors to read a bunch of sources and write a bunch of destinations. They're notable for weaving more complicated graphs and workflows for processing using rather coarse logic blocks.
But what most folks typically do is:
input -> DO_STUFF -> output
With an ESB they get:
ESB[input -> DO_STUFF -> output]
In the wild, most pipelines simply are not that complicated. They tend to have one-off logic that's not reusable, and folks tend to glob it together into a single logic module.
Well, heck, you can do that with a Perl script.
Long pipelines in ESBs tend to be more inefficient than not, with lots of marshaling of data into and out of generic modules (since you rarely use a binary payload).
So, say, CSV comes in, gets converted to XML, processed, and output as XML for input to another step, which marshals it, works on it, and converts it back into XML for yet another step. Rinse and repeat until the CPU hits 400% (multi-core FTW).
Then someone comes up with "Hey, if I drag and drop these modules together into a single routine, we skip all this XML junk!", and you end up with "input -> DO_STUFF -> output".
For large systems, with lots of web services that need to do casual, ad hoc integration, they can be fine. If you're in a business that does that a lot, they can work really well. When you have dozens of pipelines, they can help with the operational aspect of managing them.
But for complicated pipelines, if you have a lot of steps, maybe it's not such a good idea beyond prototyping, especially if there's any real volume involved. Mind, you may not have any choice, depending on the systems you're integrating.
If not, if you have a single interface you need to stand up -- then just do it. Do it in Perl, in Java, in C#, in whatever. Don't run out and spool up some odd 100 MB of infrastructure and complexity that you now get to learn, master, and maintain.
So, again, if you needed an ESB, you'd know it. Really. You'd have whatever system you've built together of disparate stuff, you'd be fighting it, talking to colleagues about what a pain all this stuff is, and you'd stumble across some link to some site to some vendor and read a white paper and go "THAT'S IT!", but if you haven't done that yet, then you're not missing anything.
ESB is for the cases where you do have that web service and queue and file system all in the same system and need to integrate them.
An ESB product usually solves the following
Security
Message routing
Orchestration (which is advanced message routing)
Protocol transformation
Message transformation
Monitoring
Eventing
You can do all of these with other tools as well, and if you just need one or two of these capabilities you can probably do without an ESB (as it introduces additional complexity), but when you need several of them an integrated solution in the form of an ESB can be a better choice.
As #WillHartung concluded, ESBs tend to be properly used in large, complex situations. And that's why it's named Enterprise Service Bus.
Now, to actually answer your question, ESBs typically:
Communicate over several protocols (e.g. HTTP, Message Queue, etc.), for both input and output
Establish a common message format, and often translate from other formats into the 'canonical' format
Provide endpoint transparency (e.g. you send a message to the bus and get an answer back, but you don't explicitly know which service, also connected to the bus, handled your request).
Provide monitoring and management capabilities
Facilitate versioning of services and messages
Enforce security, when needed.
So, as you can see, it's for when doing a lot of point-to-point communication ("just do it") would be a huge, unmanageable pile of spaghetti. Indeed, most places that I've seen SOA implemented, it's replacing that huge pile of spaghetti that already exists.
An ESB is an enterprise service bus, an infrastructure backplane, if you like, for a service-oriented architecture. Imagine the chaos of hundreds of services happily reusing each other. How do you manage such an environment? How do you provide flexible, decoupled routing between your services? How do you avoid a point-to-point spaghetti architecture? How do you manage transactions and security across a hybrid technology landscape? How do you track where messages are in complex flows across multiple systems?
You use an ESB.
ESBs typically allow you to design flows across multiple systems in an XML configuration language, offering you a host of EIS adaptors, transformation and mediation plugins etc. Some will offer an IDE to help you design flows. Some ESBs are very expensive, some are open source.
If you want to get a feel for ESB, check out either Mule or WSO2, both good open source products, or even Spring Integration which is a non-clustered solution but excellent for decoupling Java from the underlying external interface points.

What is the best way to implement a website where Users can interact together

I am creating a website where users will be able to chat and send files to one another through a browser. I am using GWT for the UI and hibernate with gilead to connect to a mysql database backend.
What would be the best strategy to use so Users can interact together?
I'd say you are looking for Comet / AJAX push / server push, etc. See my previous answer on this matter for some pointers. Basically you are simulating an inversion of the communication between server and client - it's the server that initiates the connection here, since it wants to, for example, inform the user that his/her friend just went online.
The implementations of this technique change quite rapidly, so I won't make any definitive recommendations - choose the one that best suits your needs :)
COMET is the technology that allows chatting over a web page - it basically communicates through keep-alive connections, which lets servers push information to the client.
There are several implementations of this on the client side with GWT.
Most servers nowadays support this; it is also part of the Servlet 3.0 spec (which nobody has implemented yet).
While COMET is very nice, it's not the only solution! Usual polling with time intervals (as opposed to COMET long polling) is still commonly used. It's also possible to require a manual refresh by the user.
Take Stackoverflow as an example - for most things you must refresh your browser manually to see the changes. I think, it's commonly perceived as normal and expected. COMET or frequent polling are an added bonus.
The problem with COMET is that it can easily lead to lots of threads on the server, unless you additionally use asynchronous processing (also called "Advanced IO"), which is not too well supported yet (e.g. it doesn't work with HTTPS in Glassfish v3 due to a severe bug, and it can lead to problems with Apache connectors, etc.).
The problem with frequent polling is that it creates additional traffic. So it's often necessary to make the polling less frequent, which will make it less convenient for the end user.
So you will have to weigh the options for your particular situation.
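To give a rough idea of what the Servlet 3.0 asynchronous processing mentioned above looks like in code, here is a sketch of a long-poll endpoint; MessageHub is a made-up stub standing in for your own message-dispatch logic:

    import java.io.IOException;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Long-poll endpoint: the request is parked without tying up a container thread
    // until a chat message is written to it (or the 30 second timeout fires).
    @WebServlet(urlPatterns = "/chat/poll", asyncSupported = true)
    public class ChatPollServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            AsyncContext ctx = req.startAsync();
            ctx.setTimeout(30000L);
            MessageHub.register(ctx);
        }
    }

    // Hypothetical hub; some other part of the application calls publish() when a message arrives.
    class MessageHub {
        private static final java.util.Queue<AsyncContext> WAITING =
                new java.util.concurrent.ConcurrentLinkedQueue<>();

        static void register(AsyncContext ctx) {
            WAITING.add(ctx);
        }

        static void publish(String message) throws IOException {
            AsyncContext ctx;
            while ((ctx = WAITING.poll()) != null) {
                ctx.getResponse().getWriter().write(message);
                ctx.complete();
            }
        }
    }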

Best solution for Java HTTP push (messaging) [closed]

We want to push data from a server to clients but can only use HTTP (port 80). What is the best solution for messaging? One idea is Comet. Are there other ideas or frameworks which offer, let's say, JMS over HTTP? (Yes, ActiveMQ supports it too, but it's waggly IMHO. And JXTA supports it too, but the configuration is complicated. Something simple is preferred.)
The simplest solution for many, many reasons is to use a Comet based approach (like you mention). This means the clients (to whom you want to "push" messages) open long-lived HTTP connections. Those connections stay open until they time out or you send the client a message. As soon as either happens the client opens a new connection.
Directly connecting to clients could be problematic for many reasons: they could be behind firewalls that disallow that, they could be behind proxies and so on.
Unless your clients are real servers (in which case you're really the client), have them contact you and send a response to mimic push.
Atmosphere and DWR are both open source frameworks that can make Comet easy in Java.
We used COMET in conjunction with JMS using the WAS Web 2.0 Feature Pack; in effect the server did the JMS subscribe and COMET-pushed the message to the browser. As a developer it "felt" like the browser was subscribing to JMS. This "just worked", so we didn't look further for alternatives.
I could imagine a pure JavaScript JMS implementation in the browser, using HTTP as a transport, but my instinct is that this would be very heavyweight. I know of no such implementations.
The alternative approach to those already discussed (i.e. Comet etc.) is to implement polling in the client. The downside of that approach is that you inevitably have a delay from the time of the message/event until the client receives it. If your application is very sensitive to such delays, polling is out.
If a certain amount of delay (at minimum on the order of a few seconds) is acceptable, polling is less of an abuse of the HTTP protocol. It is also more robust against temporary network trouble, as the server by default queues messages and won't get upset if the client isn't available on its schedule.
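For completeness, a trivial polling client in plain Java (the URL, the since parameter and the intervals are made up; the server is assumed to return only messages the client hasn't seen yet):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Polls an HTTP endpoint every 10 seconds and prints whatever new messages it returns.
    public class MessagePoller {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://server.example.com/messages?since=last-seen-id");
            while (true) {
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setConnectTimeout(5000);
                conn.setReadTimeout(5000);
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    in.lines().forEach(System.out::println); // hand off to the application here
                }
                Thread.sleep(10000); // trade-off: shorter interval means less delay but more traffic
            }
        }
    }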
I created an example app using Comet, Raphael, Bayeux, Java and Maven, running on the CloudBees PaaS, and wrote a blog post about it; hopefully it will be helpful to someone.
http://geeks.aretotally.in/thinking-in-reverse-not-taking-orders-from-yo
