I have a situation where an application has a list of values, for example, a list of books, that changes from time to time. It also has a REST endpoint where this information is published.
There are several other applications that consume this information, and they should be aware if any of the books on my application changes.
Would the reactive style be adequate for this situation? At first I thought so, based on the Observer pattern. But would this be a good approach, considering that the applications involved only exchange information through web services?
I also looked at Retrofit, which can turn REST endpoints into Java interfaces, but all the examples I found were somehow related to Android applications.
So, would this approach be advisable in this scenario? If it is, can someone recommend a book or any other kind of resource?
EDIT:
Since I will have an endpoint that publishes books, should I turn it into an Observable that, when another book becomes available, notifies all subscribers of this event, so that they can in turn decide whether or not to do something?
If so, how would a client (for example, an AngularJS app or another Java application) subscribe to this observable?
I hope I could make myself a little bit more clear.
I think you are mixing up Rx programming with a network problem. If your server sends data over the network at some interval X, then, as @TheCoder said, you can listen for changes on a socket and trigger an event on your Rx stream with the help of a PublishSubject. But I think the real issue lies in the way your data is sent by your server.
If you have to query your server to know whether your list of books has been updated, it is not very effective to trigger such calls when your goal is a real-time update. In this type of scenario a publish-subscribe pattern is more appropriate: your client just acts as a receiver and can update itself as soon as your server pushes new values (a new list of books, in your case). You can find tools like PubNub or the MQTT protocol to achieve this.
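As a minimal sketch of the PublishSubject idea (using RxJava 3; the transport feeding it is left abstract, since in practice it would be a socket listener or MQTT callback):

import io.reactivex.rxjava3.subjects.PublishSubject;
import java.util.List;

public class BookFeed {
    // Hot stream: subscribers only see updates published after they subscribe.
    private final PublishSubject<List<String>> bookUpdates = PublishSubject.create();

    // Called by the transport layer (socket listener, MQTT callback, ...) on each change.
    public void onBooksChanged(List<String> newBooks) {
        bookUpdates.onNext(newBooks);
    }

    public static void main(String[] args) {
        BookFeed feed = new BookFeed();
        feed.bookUpdates.subscribe(books ->
                System.out.println("Book list changed: " + books));
        feed.onBooksChanged(List.of("Domain-Driven Design", "Release It!"));
    }
}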
So I'm a bit new to CQRS (not a total beginner, though). I'm trying to understand the best practices when it comes to aggregate interaction. I read a bit about using integration events (instead of domain events) in these situations, and also a bit about domain services (which would supposedly link the two aggregates), but couldn't find any good definitive answer anywhere (especially not in the AxonIQ Getting Started guide).
Another, not too related question: in a layered architecture we usually have the controller directly linked to a service, and this service can interact with other services (or repositories), while with CQRS the controller usually sends a command to the aggregate. So if my API call needs to interact with two aggregates, do I have to build a middle-man service that would send commands to (or listen to events from) the two aggregates?
The interaction between components in a CQRS system can happen on a couple of levels.
One way to think about it is as Maxime suggests, with microservices, which very clearly shows the messaging focus of it all.
Regardless, though, this can just as easily happen within one application/monolith which has several aggregate types that together need to trigger some operation.
I feel that Maxime is providing you the answer you need. The aggregate instances you send commands to act on their own and do not tie in to one another directly at all. You'd thus react to the events as the driving force to start an interaction between the two.
You can do this by having an event handling component which listens to the events from both and performs the business transaction you're dealing with.
If the business transaction is a little more complex, Sagas might be a good starting point.
Lastly, you state that the 'Getting Started' part of the Axon Reference Guide is not clear on this topic. I think that's a valid conclusion, as from Axon's perspective this is not part of getting started. Take a look at the Saga portion of the guide to get an idea of the interaction between aggregates and/or bounded contexts.
If you think of this in terms of microservices (a philosophy that fits CQRS very well), you should have one aggregate per microservice. So you can't communicate between aggregates in memory, because they're not part of the same process. A good way to do it is by using events that you publish on an event bus. The client sends a command to "aggregate A" using the API of that microservice ("microservice A"), or maybe an API gateway. Then "aggregate A" is saved, and the events generated by "aggregate A" are published to the event bus so that some process (aka an event handler) in "microservice B" can catch the event(s) and send the appropriate commands to "aggregate B". A sketch of that last step follows below.
It's just one way to do it; there are many more, and it can be much more complex than that, but I hope it helps you get the big picture.
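To make that concrete, here is a hedged sketch in Axon terms (the event and command classes are invented for illustration): a plain event handling component listens for an event from aggregate A and dispatches a command to aggregate B.

import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.eventhandling.EventHandler;

// Hypothetical event emitted by aggregate A.
record OrderPlacedEvent(String orderId, String customerId) {}

// Hypothetical command aimed at aggregate B.
record ReserveCreditCommand(String customerId, int amount) {}

public class OrderToCustomerPolicy {
    private final CommandGateway commandGateway;

    public OrderToCustomerPolicy(CommandGateway commandGateway) {
        this.commandGateway = commandGateway;
    }

    @EventHandler
    public void on(OrderPlacedEvent event) {
        // React to aggregate A's event by sending a command to aggregate B.
        commandGateway.send(new ReserveCreditCommand(event.customerId(), 100));
    }
}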
I am working on a REST API where I will soon have to introduce some breaking changes, so a v2 is needed. We still need to support v1 in parallel for a couple of months, though, to give our clients time to switch over to the new API whenever they are ready. Our API is offered via a shared cloud, and all our clients share the same system backend, in particular a single, shared database.
I have found a lot of articles about REST API versioning, but they were all written more from a client's point of view, or from a high-level design point of view. That's not really my concern; our API already has versioning in the URI, so offering services with a /v2 base path won't be a problem.
However, I am asking myself how I am actually going to implement this, and I haven't really found good articles on that. I don't really want to branch off a v2 of my project and then build and deploy v1 and v2 as separate applications, because then I would have maintenance, bugfixing, configuration changes, etc. in two applications, which is double the work and carries the usual dangers of redundancy (i.e. possible inconsistencies between the versions). Also, v2 is of course not different in every service, so most of the code would still be the same.
Are there any best practices on how to technically implement a REST API in a single application that provides multiple versions to the outside, where some code is shared (i.e. v2/someService would internally redirect to v1/someService) and just the actual differences are coded in new services? Maybe there are even frameworks that help with designing this? The app is coded in Java with Spring MVC, if that's helpful.
I'm thankful for any tips or resources on how to tackle this. Thanks!
I'm also facing this task now, and still have no definitive answers.
Though I believe having separate v1 and v2 instances in parallel can still be at least a fallback solution, I'm currently thinking about a scheme for a single application which heavily uses the benefits of dependency injection.
So, basically, the idea is to configure your IoC container according to the received request, so that each and every service receives the needed version of its dependencies. This can theoretically be a good solution, but it requires an already near-ideal architecture (which is often not the case) where all the concerns are separated, etc.; as SOLID as possible, in other words.
At least with this approach you'll be able to quickly identify all the components of your codebase that require refactoring, though that doesn't make the whole process a quick one. Besides, I believe the closer the changes get to the core of the application, the more difficult parallel versioning may become, but we will see.
I should point out once more that for me it's still just an idea which I'm going to research further specifically for my project, so I'm not sure how easy or problematic it will be in practice.
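To illustrate the idea (just a sketch under assumptions: the BookService interface, its versioned implementations, and the bean-name lookup are all invented for this example), Spring can inject all implementations of an interface into a map keyed by bean name, and a resolver can pick one based on the version in the request path:

import java.util.List;
import java.util.Map;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

interface BookService {
    List<String> listBooks();
}

@Service("bookServiceV1")
class BookServiceV1 implements BookService {
    public List<String> listBooks() { return List.of("v1 representation"); }
}

@Service("bookServiceV2")
class BookServiceV2 implements BookService {
    public List<String> listBooks() { return List.of("v2 representation"); }
}

@Component
class BookServiceResolver {
    private final Map<String, BookService> services;

    // Spring injects all BookService beans, keyed by bean name.
    BookServiceResolver(Map<String, BookService> services) {
        this.services = services;
    }

    BookService forVersion(String version) {
        return services.get("bookService" + version.toUpperCase()); // "v1" -> bookServiceV1
    }
}

@RestController
class VersionedBookController {
    private final BookServiceResolver resolver;

    VersionedBookController(BookServiceResolver resolver) { this.resolver = resolver; }

    @GetMapping("/{version}/books")
    List<String> books(@PathVariable String version) {
        return resolver.forVersion(version).listBooks();
    }
}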
Hope you have seen "API design ensuring backward compatibility":
http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/32713.pdf
Having two versions of the API in the same application is quite enough:
api.mysite.com/[version1]/api/url
api.mysite.com/[version2]/api/url
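As a hedged Spring MVC sketch of how both versions can live in one application behind those paths (the controller and payload types are invented for illustration; v2 reuses the shared logic and only reshapes the response):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class BookApiController {

    @GetMapping("/v1/api/books")
    public BookListV1 booksV1() {
        return loadBooks(); // shared logic, served as-is for v1
    }

    @GetMapping("/v2/api/books")
    public BookListV2 booksV2() {
        // v2 reuses the same internal call and only changes the representation
        return BookListV2.from(loadBooks());
    }

    private BookListV1 loadBooks() {
        // shared data access, unchanged between versions
        return new BookListV1();
    }
}

class BookListV1 { }

class BookListV2 {
    static BookListV2 from(BookListV1 v1) {
        return new BookListV2(); // map old fields onto the new shape here
    }
}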
Not sure why you would need to build and deploy v1 and v2 as separate applications, unless you are planning a zero-downtime rolling upgrade in production.
I'd like to bring the following two strategies into the discussion; both come from continuous delivery.
Branch Abstraction
The basic idea is to place an abstraction layer between the clients and your current implementation, then introduce a second implementation behind the abstraction layer. This gives you the opportunity to progress within your normal code base while instantly supporting new features for your next API version.
See BranchByAbstraction.
Feature Toggles
Add features to your code base without making them visible to your customers. This allows you to stay on your main development branch even if things are not ready for end users yet.
See Feature Toggles (aka Feature Flags).
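As a rough sketch (the flag name and storage are invented; libraries like Togglz or FF4J offer richer implementations), a toggle can be as simple as a configuration-driven check guarding the v2 behaviour:

import java.util.Set;

public class FeatureToggles {
    private final Set<String> enabled;

    // In practice the enabled set would come from configuration, a DB, or an admin UI.
    public FeatureToggles(Set<String> enabled) {
        this.enabled = enabled;
    }

    public boolean isEnabled(String feature) {
        return enabled.contains(feature);
    }
}

// Usage inside a service:
//   if (toggles.isEnabled("books-v2-payload")) { return buildV2Payload(); }
//   return buildV1Payload();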
If I were faced with the situation you speak of, I would first try to keep my new version (v2) backward compatible with my first version (v1). If that is possible, you can just add functionality and update your API documentation, keeping only one active code base. I would think you could even add things to the response payload, as long as the data coming back would not break anyone's code; sort of like adding fields to an existing database schema.
If v2 is not backward compatible with v1, you could move v1 to another server and notify your users that it is being placed there for a stated, limited period to give them time to make the code changes necessary to switch to v2, while also notifying them that this version is no longer being updated and that if they have issues, they will need to switch to the new version. Hence, v2 is the HEAD version of your code base, with no other branches under active development.
I hope this helps and offers something you didn't already think of.
The v1/v2 dilemma is a strong hint that you don't actually have a REST API to start with. Peers in a REST architecture exchange more or less standardized content, with clients requesting representations of media types they understand. This technique is called content negotiation. Of course, a badly written server may ignore proposed media types and send one the client does not understand, which will prevent the client from interacting with the server further. A well-behaved server should therefore attempt to serve a client request as best it can.
According to Fielding:
A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types). [Failure here implies that out-of-band information is driving interaction instead of hypertext.] Source
A media type describes what the syntax of a payload exchanged for that media type looks like, as well as the semantics of each element in the representation. With the help of meaningful link relation names and media types, a server can teach a client which options are available next as the client progresses through its task. E.g. think of a case where a previous response contained a link relation named create. The client does not really know what something has to look like in order to be processable by the server, though on invoking the URI returned for the create link relation the server responds with a form-like representation, along the lines of application/vnd.xyz-form+json, where this media type defines some input controls the client can use to generate a request representation expected by the server at the target endpoint. Similar to forms on the Web, the custom form also contains an HTTP action as well as a target URI, provided by the server, to send the request to, and eventually also the representation preferred by the server.
Clients in a REST architecture shouldn't care about the URI, so returning a URI containing either v1 or v2 should be more or less meaningless to them. Fielding even stated that a REST API itself shouldn't be versioned at all! What is important, though, is that the client and server are able to understand the payload received.
Instead of versioning the URI or API, the media type describing the syntax and semantics is what actually needs to be versioned. E.g. if you take a look at the browser-based Web (the big sibling of REST), and at HTML in particular, you will notice that it is designed in a way that requires new versions to stay backwards compatible. Clients and servers exchanging a text/html payload can handle anything from pure HTML (1.0) up to HTML5 content, regardless of which actual syntax (maybe even a mixture) was used. Other media types, however, might not be that lenient. Here you could either make use of profiles or register a whole new media type if you think the old and new one are completely incompatible with each other.
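As a hedged sketch of what media-type versioning can look like in Spring MVC (the vendor media type names and payload types are invented for illustration), the same URI can serve different representations depending on the client's Accept header:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class BookRepresentationController {

    // Client sends: Accept: application/vnd.example.book.v1+json
    @GetMapping(value = "/books/42", produces = "application/vnd.example.book.v1+json")
    public BookV1 bookV1() {
        return new BookV1("Old representation");
    }

    // Client sends: Accept: application/vnd.example.book.v2+json
    @GetMapping(value = "/books/42", produces = "application/vnd.example.book.v2+json")
    public BookV2 bookV2() {
        return new BookV2("New representation", 2);
    }
}

record BookV1(String title) {}
record BookV2(String title, int schemaVersion) {}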
Either way, I hope I could shed a bit more light on the REST architecture and how you might get there. I am well aware that my suggestion may not be easy to achieve, but once you have it, you have basically decoupled clients from your API and given the latter the freedom to evolve without fear of breaking clients. There will still be coupling, but both client and server will couple to the media types rather than to each other. Before creating a new media type, it is probably worth looking for already existing ones.
I'm about to embark upon writing an Android app which notifies the phone's user when an external MySQL DB is updated (add only) with a ticket, so that the user can check whether the ticket requires his attention (an attempt to reduce the buildup of tickets he has to trawl through).
From my research, most questions suggest using a PHP web service with my program (written in Java), and definitely/maybe/definitely not/it's deprecated using SQLNotification to fire the event. I've also seen something about some bloke called JSON and the brands of SOAP he uses.
What I've been unable to figure out is how all of these frameworks/toolkits/services/things work together.
My question is in two parts:
Is SQLNotification usable? If not, is there a simple way to check for changes (beyond the obvious answer of polling)?
How does everything (SOAP, JSON, web service, app) fit together, and have I missed anything on the frameworks front (I've heard mentions of Spring, Hibernate, Tomcat)?
As for my experience: I'm relatively fluent in Java, understand the basics of MySQL, am a beginner in PHP, and haven't written for Android before.
Thanks,
Ben
According to the information you have provided, it seems like you have two options:
Push
If you want to go the push way, you will need some central architecture that can take notifications from the database and immediately send them to connected clients. It's not that easy to build such a scheme; I would only recommend it if you really need immediate notification. As a starting point, look at this sample: http://www.gianlucaguarini.com/blog/push-notification-server-streaming-on-a-mysql-database/
Pull
If you go the pull way (polling), you can have a service working on the phone which polls at a configured interval. In that case you will need some stateless service on the server side; a simple JSON service would do great.
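A minimal sketch of the pull approach in plain Java (the endpoint URL and the last-seen-ID scheme are assumptions for illustration; on Android you would run this from a background service with whatever HTTP client is available there):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TicketPoller {
    private final HttpClient client = HttpClient.newHttpClient();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private volatile long lastSeenId = 0;

    public void start() {
        // Poll every 5 minutes, per the rule of thumb below.
        scheduler.scheduleAtFixedRate(this::poll, 0, 5, TimeUnit.MINUTES);
    }

    private void poll() {
        try {
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://example.com/tickets?after=" + lastSeenId)) // hypothetical endpoint
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            // Parse the JSON body, update lastSeenId, and raise a notification if new tickets arrived.
            System.out.println("Server replied: " + response.body());
        } catch (Exception e) {
            e.printStackTrace(); // a real app would back off and retry
        }
    }
}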
In both cases, be careful with security: you should secure your channel with SSL and have some decent sort of authentication.
I think a good rule of thumb (based just on personal opinion/experience; maybe your decision path has other factors to consider) is: if your polling intervals don't need to be lower than 5 minutes, polling will do fine. If you need almost real-time notifications, you can implement the push architecture, but you have to know it will cost more effort to get it working, as you have to take care of things like client disconnection, how to handle notifications when a client is not connected, getting the real-time notifications out of the database, etc.
Hope this helps as a starting point,
I am creating a website where users will be able to chat and send files to one another through a browser. I am using GWT for the UI, and Hibernate with Gilead to connect to a MySQL database backend.
What would be the best strategy to use so users can interact with each other?
I'd say you are looking for Comet/AJAX server push/etc. See my previous answer on this matter for some pointers. Basically you are simulating an inversion of the communication between server and client: it's the server that initiates the connection here, since it wants to, for example, inform the user that his/her friend just went online.
The implementations of this technique change quite rapidly, so I won't make any definitive recommendations - choose the one that best suits your needs :)
COMET is the technology that allows chatting over a web page. It basically communicates through keep-alive connections, which allows servers to push information to the client.
There are several implementations of this on the client side with GWT.
Most servers nowadays support this. It is also part of the Servlet 3.0 spec (which nobody has implemented yet).
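For a taste of what the Servlet 3.0 asynchronous API looks like for long polling (a hedged sketch; the hook that delivers chat messages is left abstract):

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.function.Consumer;

@WebServlet(urlPatterns = "/messages", asyncSupported = true)
public class ChatLongPollServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        // Detach the request from the container thread; the connection
        // stays open until we complete it when a message arrives.
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30_000); // client re-polls after 30s of silence

        onNextMessage(message -> { // hypothetical hook into the chat backend
            try {
                ctx.getResponse().getWriter().write(message);
            } catch (IOException ignored) {
            }
            ctx.complete();
        });
    }

    private void onNextMessage(Consumer<String> callback) {
        // Wire this to your message source (queue, listener list, etc.).
    }
}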
While COMET is very nice, it's not the only solution! Plain polling at time intervals (as opposed to COMET long polling) is still commonly used. It's also possible to require a manual refresh by the user.
Take Stack Overflow as an example: for most things you must refresh your browser manually to see the changes. I think this is commonly perceived as normal and expected. COMET or frequent polling are an added bonus.
The problem with COMET is that it can easily lead to lots of threads on the server, unless you additionally use asynchronous processing (also called "Advanced IO"), which is not too well supported yet (e.g. it doesn't work with HTTPS in GlassFish v3 due to a severe bug) and can lead to problems with Apache connectors, etc.
The problem with frequent polling is that it creates additional traffic. So it's often necessary to make the polling less frequent, which makes it less convenient for the end user.
So you will have to weigh the options for your particular situation.
We have a web application that does various things and sometimes emails users depending on a given action. I want to decouple the HTTP request threads from actually sending the email, in case there is some trouble with the SMTP server or a backlog. In the past I've used JMS for this and had no problem with it. However, for this web app, JMS just feels like a bit of overkill right now (in terms of setup etc.), and I was wondering what other alternatives are out there.
Ideally I'd just like something that I can run in-process (JVM/Tomcat), where any pending items in the queue would be swapped to disk/DB when the servlet context is unloaded. I could of course just code something together involving an in-memory queue, but I'm looking to gain the benefit of open-source projects, so I'm wondering what's out there, if anything.
If JMS really is the answer, does anyone know of something that could fit our simple requirements?
thanks
I'm using JMS for something similar. Our reasons for using JMS:
We already had a JMS server for something else (so it was just adding a new queue)
We wanted our application to be decoupled from the processing side, so errors on either side would stay on their side.
The app could drop the message in a queue, commit, and go on. No need to worry about how to persist the messages, how to start over after a crash, etc. JMS does all that for you.
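A minimal JMS producer sketch (the JNDI names are placeholders; details vary by provider):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class EmailQueueSender {

    public void enqueue(String recipient, String body) throws Exception {
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory"); // placeholder
        Queue queue = (Queue) jndi.lookup("jms/EmailQueue");                                  // placeholder

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            // Drop the message in the queue and return; a separate consumer
            // sends the actual email and can retry on failure.
            producer.send(session.createTextMessage(recipient + "\n" + body));
        } finally {
            connection.close();
        }
    }
}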
I would think Spring Integration would work in this case as well.
http://www.springsource.org/spring-integration
Wow, this issue comes up a lot. CommonJ WorkManager is what you are looking for, and a Tomcat implementation is available. It allows you to safely create threads in a Java EE environment, but is much lighter weight than using JMS (which will obviously work as well).
Beyond JMS, for short messages you could also use Amazon Simple Queue Service (SQS).
While you might think it overkill too, consider the fact that it requires minimal maintenance, scales nicely, has ultra-high availability, and doesn't cost all that much.
There is no cost for creating new queues or having an account; as far as I recall, pricing is purely based on the number of operations you perform (sending messages, polling/retrieving).
The main limitation really is the message size (there are others, like ordering not being guaranteed due to its distributed nature), but that might work as is. For larger messages you can use the related AWS service, S3, to store the actual body and just pass headers through SQS.
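A hedged sketch with the AWS SDK for Java v2 (the queue URL is a placeholder):

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class SqsEmailQueue {

    public static void main(String[] args) {
        try (SqsClient sqs = SqsClient.create()) { // uses the default region/credentials chain
            sqs.sendMessage(SendMessageRequest.builder()
                    .queueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/email-queue") // placeholder
                    .messageBody("{\"to\":\"user@example.com\",\"subject\":\"Hello\"}")
                    .build());
            // A separate worker polls the queue (sqs.receiveMessage) and sends the email.
        }
    }
}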
You could use a scheduler. Have a look at Quartz.
The idea is that you schedule a job to run at regular intervals. All requests need to be persisted somewhere; the scheduled job reads them and processes them. You need to define the interval between two subsequent runs to fit your needs.
This is the recommended way of doing things. Full-fledged application servers offer Java EE timers for this, but these aren't available in Tomcat. Quartz is fine though, and it lets you avoid starting your own threads, which can cause a mess in some situations (e.g. during application updates).
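A minimal Quartz sketch (how the persisted email requests are read is left as a comment, since that depends on your storage):

import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class EmailJob implements Job {

    @Override
    public void execute(JobExecutionContext context) {
        // Read persisted email requests (from a table or file) and send them.
    }

    public static void main(String[] args) throws Exception {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();

        Trigger everyMinute = TriggerBuilder.newTrigger()
                .withSchedule(SimpleScheduleBuilder.repeatMinutelyForever())
                .build();

        scheduler.scheduleJob(JobBuilder.newJob(EmailJob.class).build(), everyMinute);
    }
}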
I agree that JMS is overkill for this.
You can just send the e-mail in a separate thread (i.e. separate from the request handling thread). The only thing to be careful about is that if your app gets any kind of traffic at all, you may want to use a thread pool to avoid resource depletion issues. The java.util.concurrent package has some nice stuff for thread pools.
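A sketch of that approach with java.util.concurrent (sendEmail stands in for whatever mail library you use; note that tasks still queued at shutdown are lost unless you persist them yourself):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MailDispatcher {
    // A bounded pool: at most 4 emails are sent concurrently;
    // the rest wait in the pool's queue instead of exhausting resources.
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    public void dispatch(String recipient, String body) {
        pool.submit(() -> sendEmail(recipient, body));
    }

    private void sendEmail(String recipient, String body) {
        // Call into JavaMail (or similar) here; a failure only affects this task.
    }

    public void shutdown() {
        pool.shutdown(); // call when the servlet context is destroyed
    }
}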
Since you say the app "sometimes" emails users, it doesn't sound like you're talking about a high volume of mail. A quick and dirty solution would be to just Runtime.getRuntime().exec():
sendmail recipient@domain.com
and dump the message into the resulting Process's getOutputStream(). After that, it's sendmail's problem.
Figure a minute to see if you have sendmail available on the server, about fifteen minutes to throw together a test if you do, and nothing to install, assuming you found sendmail. A few more minutes to construct the email headers properly (easy; examples abound) and you're done.
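A rough sketch of that idea (assumes a sendmail binary on the PATH; the headers are kept minimal):

import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class QuickAndDirtyMailer {

    public static void send(String recipient, String subject, String body) throws Exception {
        Process sendmail = new ProcessBuilder("sendmail", recipient).start();
        try (OutputStream in = sendmail.getOutputStream()) {
            String message = "To: " + recipient + "\r\n"
                    + "Subject: " + subject + "\r\n"
                    + "\r\n" // blank line separates headers from body
                    + body + "\r\n";
            in.write(message.getBytes(StandardCharsets.UTF_8));
        }
        sendmail.waitFor(); // after this, delivery is sendmail's problem
    }
}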
Hope this helps...