I have a small Java (Java EE) microservice that does some calculations. This microservice runs on the same application server as another application, also written in Java EE. First question: should these apps communicate with each other via a REST API or in some different way? Second question: if so, is there a way to save some time by not serializing/deserializing transfer objects? I understand that communication between two apps on different servers (or in different languages) requires serialization/deserialization, but what about the situation described?
should these apps communicate with each other via a REST API or in some different way?
Microservices should always communicate over the network. If they have a REST API, then use that.
if so, is there a way to save some time by not serializing/deserializing transfer objects?
If they communicate over the network, serialization is a must. In any case, serialization helps decoupling: microservices should share data, but not schema/classes. The serialization should be done in a way that loses the schema, i.e. you could use JSON. If you share the schema (classes), you break the microservice's encapsulation, and you won't be able to replace a microservice implementation with another one built on a different technology stack (PHP with Nginx, for example).
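For illustration, a minimal sketch with Jackson (any JSON library would do; the `CalculationResult` DTO is made up, and each service would keep its own copy of such a class rather than sharing it):

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonExample {
    // Hypothetical transfer object; only its JSON shape travels over the wire.
    public static class CalculationResult {
        public String id;
        public double value;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        CalculationResult result = new CalculationResult();
        result.id = "calc-1";
        result.value = 42.0;

        // Serialize to JSON: the wire format carries data, not classes.
        String json = mapper.writeValueAsString(result);

        // The consumer deserializes into its *own* class, which may evolve independently.
        CalculationResult copy = mapper.readValue(json, CalculationResult.class);
        System.out.println(json + " -> " + copy.value);
    }
}
```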
If efficiency is paramount, you could use Google's Protobuf. It's a bit of a pain (compared to JSON) but very efficient. It's also language-agnostic (or, more precisely, it has implementations in most common languages).
You basically define a message according to the proto spec, and a special compiler generates the relevant get/set code. You use that in your code to send and receive super-efficient messages.
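As a rough sketch (the message, field names, and the `CalculationResult` class are illustrative; the class itself would be generated by `protoc` from the definition shown in the comment):

```java
import com.google.protobuf.InvalidProtocolBufferException;

public class ProtoExample {
    // calculation.proto (the message definition, compiled with protoc):
    //   syntax = "proto3";
    //   message CalculationResult {
    //     string id = 1;
    //     double value = 2;
    //   }

    public static void main(String[] args) throws InvalidProtocolBufferException {
        // CalculationResult is the class protoc generated from the message above.
        CalculationResult result = CalculationResult.newBuilder()
                .setId("calc-1")
                .setValue(42.0)
                .build();

        byte[] wire = result.toByteArray();                         // compact binary encoding
        CalculationResult copy = CalculationResult.parseFrom(wire); // decode on the other side
        System.out.println(copy.getId() + " = " + copy.getValue());
    }
}
```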
Related
I'm sketching an architecture for a microservices system, currently planned to run on one machine (maybe distributed in the future).
The system will be composed of services written in Node.js, Go, and possibly Java.
Both the Node.js and Java services will need to pass instructions to, and receive results from, the Go server.
Now I'm trying to decide: should I use an IPC pipe, or ramp up on gRPC and protobuf and use them?
These are on different abstraction levels and have different uses, so the 'or' in the question is misleading. You will need both types (transport and encoding), even if you reimplement one of them.
IPC such as an anonymous or named pipe is usually called a transport; it has no way to encode multiple instructions or results (though it does carry a stream of bytes).
gRPC and protobuf need a transport, support multiple transports, and add finer-grained encoding (how to represent an integer, a list, etc.) and possibly more on top. Technologies that encode something can often be nested within another transport or encoding; this is common with technologies used together with HTTP. That nesting may make sense, but it may also just add a layer without serving any purpose.
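To make the distinction concrete, here is a rough sketch (the names, the naive JSON encoding, and the port are made up, and it assumes something is listening on that port): the encoding step turns an instruction into bytes, and the transport step merely moves those bytes, so either layer can be swapped independently:

```java
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class LayersSketch {
    // Encoding layer: how to represent an instruction as bytes (here, naive JSON).
    static byte[] encode(String op, int arg) {
        String json = "{\"op\":\"" + op + "\",\"arg\":" + arg + "}";
        return json.getBytes(StandardCharsets.UTF_8);
    }

    // Transport layer: how to move bytes; a length prefix frames each message.
    // The same framing would work over a pipe's OutputStream instead of a socket.
    static void send(OutputStream transport, byte[] message) throws IOException {
        DataOutputStream out = new DataOutputStream(transport);
        out.writeInt(message.length);
        out.write(message);
        out.flush();
    }

    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 9000)) {
            send(socket.getOutputStream(), encode("add", 42));
        }
    }
}
```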
I have two processes that need to communicate, both on the same PC and across different PCs. In the local case the communication is between different processes, e.g. Process A and Process B.
In the remote case it will be between two instances of Process A running on different PCs.
I will create them from scratch, and I am wondering what the best approach is. I am aware of RMI and sockets, but for the case described, and considering that the messages exchanged are small and the number of APIs is really small, is there a standard approach/library for this?
Any suggestions are highly welcome.
Update after #EJP comments:
My interest is 1) to implement the communication requirement in a lightweight manner, since the API exposed and the messages will be really small, and 2) to use and learn a new popular framework if possible (I already know RMI and sockets).
If you are just looking for messaging frameworks, there are a bunch available, such as:
RabbitMQ - http://www.rabbitmq.com/
ZeroC Ice - http://www.zeroc.com/ice.html
AMQP - http://www.amqp.org
OpenSplice DDS - http://www.prismtech.com/opensplice
But when you use a third-party framework, you are adding an additional dependency to your application. For something very simple like your case, writing a TCP client/server would be sufficient for a client/server paradigm; if you are looking for a publisher/subscriber paradigm, you can look into UDP multicast. You just need your data class to implement Serializable if you want to marshal and unmarshal your data to a buffer and send it over the network using the typical Java socket API.
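A minimal sketch of that TCP approach (the `Message` class and port are made up; the server simply prints what it receives):

```java
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.ServerSocket;
import java.net.Socket;

public class SimpleTcp {
    // The data class just needs to implement Serializable.
    static class Message implements Serializable {
        private static final long serialVersionUID = 1L;
        String text;
        Message(String text) { this.text = text; }
    }

    public static void main(String[] args) throws Exception {
        // Server: accept one connection and read one object.
        Thread server = new Thread(() -> {
            try (ServerSocket listener = new ServerSocket(9876);
                 Socket socket = listener.accept();
                 ObjectInputStream in = new ObjectInputStream(socket.getInputStream())) {
                Message received = (Message) in.readObject();
                System.out.println("Server got: " + received.text);
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        server.start();
        Thread.sleep(200); // crude wait so the server is bound before we connect

        // Client: connect and marshal an object over the socket.
        try (Socket socket = new Socket("localhost", 9876);
             ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream())) {
            out.writeObject(new Message("hello"));
        }
        server.join();
    }
}
```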
I strongly suggest having a look at Thrift. Of all the technologies I've used (web services, RMI, XML-RPC, CORBA come to mind), it is currently my favourite. Essentially, the steps involved are:
Download the Thrift compiler.
Add the Maven dependency (make sure it is the same version as the compiler!); I currently use 0.8.0.
Write your Thrift IDL (incredibly easy, google for it as there are plenty of examples).
Compile it for Java.
Write your server/client.
In general, you can whip together a server and a client in about 30 lines of code. In terms of speed and reliability it has never failed me before.
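As a rough illustration of those steps (the `CalcService` IDL, the generated `CalcService.*` classes, and the `CalcHandler` implementation are all made up; the generated code comes from the Thrift compiler):

```java
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.server.TServer;
import org.apache.thrift.server.TSimpleServer;
import org.apache.thrift.transport.TServerSocket;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class ThriftSketch {
    // calc.thrift (the IDL): service CalcService { i32 add(1: i32 a, 2: i32 b) }
    // CalcService.* below is the code the Thrift compiler generates from it.

    static void runServer() throws Exception {
        CalcService.Processor<CalcHandler> processor =
                new CalcService.Processor<>(new CalcHandler()); // implements CalcService.Iface
        TServer server = new TSimpleServer(
                new TServer.Args(new TServerSocket(9090)).processor(processor));
        server.serve(); // blocks, handling requests
    }

    static int callServer() throws Exception {
        // Client side: open a socket, wrap it in a protocol, call the method.
        TTransport transport = new TSocket("localhost", 9090);
        transport.open();
        CalcService.Client client = new CalcService.Client(new TBinaryProtocol(transport));
        int sum = client.add(2, 3); // remote call that looks local
        transport.close();
        return sum;
    }
}
```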
You might have a look at Versile Java (full disclosure: I am one of the developers), it satisfies at least your criteria #1. From the API documentation, here are some examples of writing remote-enabled objects, running a service, and connecting to a service.
If you want to learn something new, then I'd look at OpenSplice. The reason is pretty simple: among the technologies suggested above, it is the only one that provides you with data-centric abstractions.
The cool thing about OpenSplice is that it gives you the abstraction of a Global Data Space, yet the implementation of this global data space is fully distributed and very high performance.
Take a look at some of the slides available at http://www.slideshare.net/angelo.corsaro and I am sure you'll fall in love with the technology.
Finally, OpenSplice is open source.
Happy Hacking.
A+
JMX is a good alternative.
Example:
http://www.javalobby.org/java/forums/t49130.html
IBM JMX Example
http://alvinalexander.com/blog/post/java/source-code-java-jmx-hello-world-application
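In the same spirit as those examples, a minimal hello-world MBean sketch (names are illustrative):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxHello {
    // The management interface; by JMX convention its name must be the
    // implementation class name plus the "MBean" suffix.
    public interface HelloMBean {
        String sayHello(String name);
    }

    public static class Hello implements HelloMBean {
        @Override
        public String sayHello(String name) {
            return "Hello, " + name;
        }
    }

    public static void main(String[] args) throws Exception {
        // Register the MBean with the platform MBean server; a JMX client
        // (e.g. jconsole, or a JMXConnector in another process) can then invoke it.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new Hello(), new ObjectName("demo:type=Hello"));
        System.out.println("MBean registered; connect with jconsole. Ctrl-C to exit.");
        Thread.sleep(Long.MAX_VALUE);
    }
}
```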
I am working on an existing system written using .NET 2.0 remoting to integrate a number of embedded clients to a central server. Due to a number of issues, it has become desirable to rewrite the server in Java. Updating the clients is not really viable at this point; there are many of them and they are geographically scattered, so an update would be potentially expensive. To this end, I was wondering what solutions are available to implement a Java server that would be compatible with the existing over-the-wire protocol?
I am aware of JNBridgePro, but it is unfortunately too expensive for our current budget. I also have the CD from the book Microsoft® .NET and J2EE Interoperability Toolkit (Microsoft Press), which has a copy of a piece of software called "ja.net" from Intrinsyc Software that promises to fulfill this function, but in order to use it you need to obtain a licence from Intrinsyc and their web site is not responding (perhaps they have gone out of business since the book was published?).
Are there any others I'm not aware of?
No, there is no such thing (except custom commercial solutions).
However, if you are up to an in-house solution, you can:
Write your own .NET remoting adapter, which sits between the .NET clients and the Java server.
The .NET adapter translates the requests into something the Java server understands (maybe a web service interface, via SOAP), and does the same for the responses.
So the .NET adapter would be a pass-through and mapping component with no actual logic. This way, all the logic can live in the Java server (which seems to be what you want).
It could take some time to build, but the effort depends directly on the number of clients you have and on the number of types of requests and responses.
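On the Java side, exposing such a SOAP interface can be a few lines with JAX-WS; a minimal sketch (the service name, method, and URL are all made up):

```java
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// A SOAP endpoint the .NET adapter could translate remoting calls into.
@WebService
public class LegacyBridgeService {
    public String process(String request) {
        // The actual server logic lives here, on the Java side.
        return "processed: " + request;
    }

    public static void main(String[] args) {
        // Publishes the service, with a generated WSDL available at ?wsdl.
        Endpoint.publish("http://localhost:8080/bridge", new LegacyBridgeService());
    }
}
```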
I have applications that have been written in Java, .NET and C++. They all use a common database.
Each app has its own way of accessing the database, and so things are quite inconsistent.
I was thinking of writing a Data Access Layer using an ORM and having all the applications use that.
The question is how to implement this ORM Data Access Layer:
Make a Java package using Hibernate; use the Java package from the .NET and C++ apps.
Make a .NET class library using Entity Framework; use the class library from the Java apps.
In either case, is it easy to access the package/class library from the other platform? Any suggestions on the path to take?
Is communicating via XML between the two platforms the best way?
P.S. I have already seen this question, but I think my question is a superset of that one.
P.S. Making a web service is an option, but I would prefer not to write/use a web service.
iKVM
This will allow you to share code between .NET and Java. I, like most sane people in the world, prefer to write data access bits in .NET, but if you've got existing code in Java that you want to make available to .NET services, this is available:
http://www.ikvm.net/
RESTful Web Service++
This is the sanest and obviously the quickest way to get up and running. Again, you could use Jersey, ASP.NET MVC, NancyFx, or whatever REST application server you want.
I would recommend NancyFx and ServiceStack.Text. These are two very simple, very pure implementations, and they are extremely fast, which is what you want if you're using this as a unified DAL on top of your database. A minimal Jersey sketch follows the links below.
Nancy: https://github.com/NancyFx/Nancy/
ServiceStack.Text: https://github.com/ServiceStack/ServiceStack.Text
Jersey: http://jersey.java.net/
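For the Java route, a minimal Jersey (JAX-RS) resource for such a unified data layer might look like this (the entity and paths are made up; a real implementation would delegate to the ORM):

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// One resource per entity; every app (Java, .NET, C++) talks to these
// endpoints over HTTP instead of opening its own database connection.
@Path("/customers")
public class CustomerResource {

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Customer get(@PathParam("id") long id) {
        // A real implementation would load the entity via the ORM here.
        return new Customer(id, "Jane Doe");
    }

    // Minimal DTO for the example.
    public static class Customer {
        public long id;
        public String name;
        public Customer(long id, String name) { this.id = id; this.name = name; }
    }
}
```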
ZOMG T3h H0rror - COM+ / DCOM
This is actually a viable possibility if you can't use RESTful web services, and assuming you're exclusively on Windows. It will also be the most troublesome option, depending on how insane your requirements are. That being said, I've seen this done before and seen it work quite well, especially when you have legacy C/C++ components living inside different segments of your infrastructure.
Is communicating via XML between the two platforms the best way?
This will have a performance cost, but it will provide a unified data layer in a language-independent manner. You could consider the open-source WSO2 Data Services Server if you choose to go ahead with this approach.
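For instance, marshalling a DTO to XML with JAXB gives the .NET and C++ consumers something any XML parser can read (the `Product` class is made up):

```java
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlRootElement;

public class XmlExample {
    @XmlRootElement
    public static class Product {
        public long id;
        public String name;
    }

    public static void main(String[] args) throws Exception {
        Product p = new Product();
        p.id = 7;
        p.name = "widget";

        // JAXB turns the object into platform-neutral XML.
        Marshaller marshaller = JAXBContext.newInstance(Product.class).createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        StringWriter xml = new StringWriter();
        marshaller.marshal(p, xml);
        System.out.println(xml);
    }
}
```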
Thanks...
It sounds a bit like the database can be thought of as a service that your various apps consume to different extents.
What about writing a web service layer that wraps all access operations and service-enables the database? You could use an ORM in that layer and have all of your applications interact with the web service wrapper. If you can deploy it on the same server as the database, you won't incur much extra network overhead.
I think this would be a decent cross-platform approach: you don't have to depend on libraries across platform boundaries, and you've provided a standard way of talking to the DB for all current and future apps.
I have done some searching but haven't come up with anything on this topic. I was wondering if anyone has ever compared (to some degree) the performance difference between RPC over a socket and a REST web service. If both do the same thing, which would tend to be the better performer? I've already started building some socket code and would like to know whether REST would give better performance before I progress much further. Any input would be really appreciated. Thanks indeed.
RMI (a minimal RMI sketch follows at the end of this answer):
Feels like a local API, much like XML-RPC
Can provide some fairly nice remote exception data
Java-specific, which causes lock-in and limits your options
Has horrible versioning problems between different versions of clients
Skeleton files must be compiled in, like CORBA, which is not very flexible
REST:
easy to route around firewalls
useful for uploading files, as it can be rather lightweight
very simple if you just want to shove simple things at something and get back an integer (like for uploaders)
easy to proxy security behind Apache and let it take the heat
does not define any standard format for the way the data is exchanged (could be JSON, YAML 1.0, YAML 2.0, an arbitrary XML format, etc.)
does not define any convention for sending remote faults back to the caller; integer codes are frequently used, but the method of sending back data is not defined, and ideally this would be standardized
may require a lot of work on the client-side caller of the library to make use of the data (custom serialization and so forth)
In short, from here:
Web services do allow a loosely coupled architecture. With RMI, you have to make sure that the objects stay in sync in all applications.
RMI works best for smaller applications that are not internet-related and thus not scalable.
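To ground the RMI points above, a minimal sketch (all names are illustrative); note that the shared `Calculator` interface is exactly the thing that has to stay in sync across all applications:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RmiSketch {
    // This interface must be shared by client and server, compiled into both.
    public interface Calculator extends Remote {
        int add(int a, int b) throws RemoteException;
    }

    static class CalculatorImpl implements Calculator {
        @Override
        public int add(int a, int b) {
            return a + b;
        }
    }

    public static void main(String[] args) throws Exception {
        // Server: export the object and bind it in a registry.
        Calculator stub = (Calculator) UnicastRemoteObject.exportObject(new CalculatorImpl(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("calc", stub);

        // Client (normally in another JVM): look it up and call it like a local API.
        Calculator calc = (Calculator) LocateRegistry.getRegistry("localhost", 1099).lookup("calc");
        System.out.println("2 + 3 = " + calc.add(2, 3));
    }
}
```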
It's hard to imagine that REST is faster than a simple socket connection, given that it also goes over a socket.
However, REST may be performant enough, and it is standard and easier to use. I would first test whether REST is fast enough and meets your requirements (or try one of the many other existing solutions) before attempting your own socket solution.
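As a starting point for such a test, a rough round-trip timing sketch over a plain socket (the port, message size, and iteration count are arbitrary); pointing a similar loop at a REST endpoint would give a comparable number:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class RoundTripTimer {
    public static void main(String[] args) throws Exception {
        // Trivial echo server in a background thread.
        Thread server = new Thread(() -> {
            try (ServerSocket listener = new ServerSocket(9099);
                 Socket s = listener.accept()) {
                InputStream in = s.getInputStream();
                OutputStream out = s.getOutputStream();
                int b;
                while ((b = in.read()) != -1) {
                    out.write(b);
                    out.flush();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        server.setDaemon(true);
        server.start();
        Thread.sleep(200); // crude wait so the server is bound before we connect

        // Client: time many one-byte round trips and report the average.
        try (Socket s = new Socket("localhost", 9099)) {
            InputStream in = s.getInputStream();
            OutputStream out = s.getOutputStream();
            int rounds = 10_000;
            long start = System.nanoTime();
            for (int i = 0; i < rounds; i++) {
                out.write(1);
                out.flush();
                in.read();
            }
            long elapsed = System.nanoTime() - start;
            System.out.printf("avg round trip: %.1f us%n", elapsed / 1000.0 / rounds);
        }
    }
}
```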