I have two Java web services, hosted in Tomcat on the same server.
Is there any way to share memory (objects) between them?
I could turn the sharing into some kind of web method calls, however:
- this is complicated; a lot of changes are required.
- this is not really sharing; objects are duplicated, although it should work for my case.
- this will expose methods that should not be called by the clients.
Not that I know of. Sounds like it's fraught with peril. It's hard enough to synchronize objects in one app; you have no hope with two. What good could this possibly do?
If it's common methods you need, put them into a service that both can call. If it's common data, put it in a database.
Is there any way to share memory (objects) between them?
You can create a shared memory region that is shared by two JVMs. You can do this using native code, or (in theory) by mapping a file into the address-space of two apps.
But you can't put Java objects in that region. The JVM doesn't support this, either in Java code or in native code. (And even if you could, synchronization would be a big problem.)
So could you use shared memory to share data between two JVMs?
Maybe. But you'd need to treat the shared memory segment as a kind of database and implement a scheme for copying object state between the segment and each JVM's heap. And you'd need to implement a robust synchronization scheme, probably using semaphores.
In short, it would be a significant amount of work to implement, and it wouldn't "feel" like the JVMs were sharing objects. It would be easier to use an existing database or distributed caching solution.
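For illustration, here is a minimal sketch of the file-mapping approach mentioned above. It shares a raw byte region, not Java objects, it has no synchronization, and the file path and offsets are invented for the example:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class SharedRegion {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile("/tmp/shared.dat", "rw");
             FileChannel channel = file.getChannel()) {
            // Both JVMs map the same 1 KB region of the same file.
            MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_WRITE, 0, 1024);
            if (args.length > 0 && args[0].equals("writer")) {
                buf.putInt(0, 42);  // writer JVM stores a primitive at offset 0
            } else {
                System.out.println("Read: " + buf.getInt(0));  // reader JVM sees it
            }
        }
    }
}

Note that only primitives go in and out; anything richer means building your own serialization and locking on top, which is exactly the work described above.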
Try using JCS (Apache Commons JCS, the Java Caching System):
http://commons.apache.org/proper/commons-jcs/
Its lateral TCP and remote cache auxiliaries let separate JVMs share cached data. Hope it helps!
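A rough sketch of basic JCS usage (commons-jcs 2.x API; the region name is made up, and a cache.ccf on the classpath configured with a lateral TCP or remote auxiliary is assumed, since that is what lets the two webapps see each other's entries):

import org.apache.commons.jcs.JCS;
import org.apache.commons.jcs.access.CacheAccess;

public class CacheExample {
    public static void main(String[] args) throws Exception {
        // The region name must match a region defined in cache.ccf.
        CacheAccess<String, String> cache = JCS.getInstance("sharedRegion");
        cache.put("greeting", "hello from webapp A");
        System.out.println(cache.get("greeting"));
    }
}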
On inter-process communication, the Java documentation says:
To facilitate communication between processes, most operating systems
support Inter Process Communication (IPC) resources, such as pipes and
sockets. IPC is used not just for communication between processes on
the same system, but processes on different systems.
I would rather go for pipes or sockets. This will make your life a lot easier and your web services more flexible, as they can run on two separate machines and still talk to each other as if they were sitting side by side.
That being said, back to practice. Say you have a set of objects {a,b,c} you want to share between your services. Create a data store class that holds the {a,b,c} objects, and whenever there is an update, do it through the data store, e.g. dataStore.setA(new_a). Behind the scenes, for every update, the local data store will notify the remote data store sitting in the other application and transmit all the updates that have just been made. The following DTO can be used to transmit all changes from one data store to another:
public class ObjectUpdateEvent<Source> implements Serializable {
    private String fieldName;
    private Object previousValue;
    private Object newValue;
    private Source source;

    // Constructor, getters and setters...
}
Updating the object "a" can then be done the following way:
public class DataStore {
    // .....
    public void setA(A new_a) {
        ObjectUpdateEvent<DataStore> updateDto = new ObjectUpdateEvent<DataStore>();
        updateDto.setFieldName("a");
        updateDto.setPreviousValue(a);
        updateDto.setNewValue(new_a);
        sendUpdateDto(updateDto); // transmit the change to the remote data store
        a = new_a;
    }
}
EDIT: This is exactly what @duffymo mentioned above.
How about using a shared library?
You can refactor your logic, move it to a separate library, and build it as a separate jar.
The jar should be placed in the tomcat_home/lib directory.
In your web apps, the library dependency should be set as provided (in Maven).
You create and store the objects you need to share in that library, for example in a static field; since classes in tomcat_home/lib are loaded once by Tomcat's common class loader, they can be accessed from any webapp.
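A minimal sketch of what such a holder in the shared jar could look like (the class name and map are illustrative):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Lives in the shared jar in tomcat_home/lib, so Tomcat's common class
// loader loads it once and every webapp sees the same static state.
public final class SharedObjectHolder {
    private static final Map<String, Object> SHARED = new ConcurrentHashMap<>();

    private SharedObjectHolder() {}

    public static void put(String key, Object value) {
        SHARED.put(key, value);
    }

    public static Object get(String key) {
        return SHARED.get(key);
    }
}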
I have a webapp with multiple Spring services (each has its own ear and web controllers) that talk to each other via REST calls. Some of the services use the same data POJOs; the existing code just duplicates those data objects across services.
For example, my /users service has a myApp.users.UserData object, and my /emails service calls /users/{userId} and holds the result as a myApp.emails.UserData object. Both of these objects are identical.
The problem here is that I have to keep myApp.emails.UserData and myApp.users.UserData in sync, since in reality they represent the same info and are meant to be the "same" class. If I update the name of a field in emails.UserData, I had better remember to update it in users.UserData, otherwise things will break.
I know I could make a shared dependency called something like SharedDataObjects, define myApp.sharedDataObjects.UserData there, and just have both versions refer to that. For some reason, though, my gut feeling is that this is not a good solution... (maybe it is though?)
Are there any better ways of approaching this issue? Is there something fundamentally wrong with the way the webapp is structured, and if so, how could that be addressed?
It's always tempting to spot duplicated (or nearly duplicated) code and try to eliminate the duplication. An obvious path in that direction is to make a shared library (or module) and have different services (or applications) depend on it.
Doing so, however, increases coupling between services/applications/modules, which has its own set of drawbacks in many contexts. Especially in a microservices architecture*, that kind of coupling often leads to headaches. It can even lead to loss of the value of using microservices* in the first place.
If two services* are supposed to be independent but integrated, they have an integration protocol of some kind. For REST services, for example, that's almost always HTTP and JSON**. By introducing a shared library, you have coupled them in a binary way, separate from (and more binding than) HTTP and JSON. Not a good situation; experience has taught many of us that painful lesson.
Instead, focus on the public interfaces that each service exposes, and use appropriate versioning to evolve those interfaces when needed. Don't worry about some duplicate-looking classes, especially if they're anemic POJOs; that's a minor concern compared to a tightly-coupled set of services that are supposed to be independent.
Not that shared code libraries are inherently bad; they're not, and they have real value in many ways. Rather, my point is that you need to make sure each service maintains its own definition of the things that matter to it, so that each can remain as independent as possible, and evolve independently as much as possible.
By the way, this is somewhat related to the concept of Bounded Contexts; you might want to read more about it if you're working with microservices*.
*or whatever you call your architecture of independently-managed services/modules/apps
** Could also be some kind of messaging/queueing platform, as another example
[Context]
I need to send data from one applet to another. In addition, one of the applets needs to be deleted and reinstalled. After the installation, data exchange between the applets needs to be possible.
Is the Shareable interface useful for realizing that?
[Theoretical]
In general, I would like to know in which cases the Shareable interface is a good idea, and what its principal use is.
[Practice]
I took the example from this answer, but it does not work; I think I did not understand how to implement it. I tried to create two applets in the same package, one master and one slave, but I got 6F00 when the slave is selected. I did another test with two packages, but got the same error.
Shareable allows you to exchange data between applets on the card.
There are some limitations though, the main one being that you cannot freely exchange internal objects. Only objects allowed for sharing can be passed via the Shareable interface. The example you mention uses a proprietary "SharedArray" interface to implement this.
By default, only standard global objects, such as the APDU backing array or various STK objects, can be used for this purpose.
In addition, it is possible to pass simple value types such as byte and short via Shareable interface methods.
In some cases, especially in STK environments, the Shareable interface is used to initiate operations while the data is passed via a separate EF on the card that is used as a "mailslot".
Regarding the implementation itself, remember that Shareable is just a marker interface; as such, you need to define a concrete interface that extends Shareable in order to use it in the application.
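For instance, a minimal sketch (the interface name, method, and applet are illustrative, not taken from the example you linked):

import javacard.framework.AID;
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.Shareable;

// A concrete interface extending the Shareable marker.
interface SharedCounter extends Shareable {
    short increment(short amount);
}

// The server applet implements it and hands the object out to clients.
public class ServerApplet extends Applet implements SharedCounter {
    private short counter = 0;

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new ServerApplet().register();
    }

    public void process(APDU apdu) {
        // command handling omitted in this sketch
    }

    public short increment(short amount) {
        counter += amount;
        return counter;
    }

    public Shareable getShareableInterfaceObject(AID clientAID, byte parameter) {
        return this; // a real applet should check clientAID before granting access
    }
}

The client applet then obtains the object with JCSystem.getAppletShareableInterfaceObject(serverAID, (byte) 0) and casts it to SharedCounter.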
The above interface constitutes a hard dependency for any application using or implementing this interface.
As a result, the package containing the interface definition cannot be deleted if any of the other applets/libraries use it.
One of the common options is to define the interface in a separate library and install it first. Since it is not likely to change (and if it does, you would change the AID/version anyway), all other clients can be freely installed and deleted.
Lastly, please keep in mind that the Shareable interface should be used with care due to the security issues associated with data sharing.
I highly recommend getting a copy of “Java Card Technology for Smart Cards: Architecture and Programmer's Guide” which covers these topics and much more.
Answering your questions in order:
[Context]
The Shareable interface is used when one applet (the client applet) needs to access methods of another applet (the server applet) and the two applets are located in different packages. Applets in different packages are separated by a firewall that prevents access to applet data across packages.
Applet instances can be deleted in any order, but applet packages must be deleted in order: first the client package, then the server package.
[Theoretical]
The Shareable interface is useful for object sharing, since the firewall restricts object sharing between packages.
For proper use cases, kindly go through this white paper: www.usenix.org/legacy/event/smartcard99/full_papers/montgomery/montgomery.pdf
[Practice]
Kindly check this solution for a Shareable interface implementation: https://stackoverflow.com/a/57200926/4752262
I need to make a couple of services that will talk to both Amazon S3 and Riak CS.
They will handle the same operations, e.g. retrieving images, but they return different objects (in S3's case, an S3Object). Is the proper way to design this to have a different class for each, without a common interface?
I've been thinking about how to apply a common interface to both, but the differing return types of the methods are causing me some issues. I might just be going about this wrong and probably should just separate them, but I am hoping to get some clarification here.
Thanks all!
Typically, you do this by wrapping the responses from the various external services with your own classes that have a common interface. You also wrap the services themselves, so when you call your service wrappers they all return your wrapped data classes. You've then isolated all references to the external service into one package. This also makes it easy to add or remove services.
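A rough sketch of that wrapping (all names here are made up, the S3 calls are AWS SDK for Java v1 style, each type would live in its own file, and a Riak CS implementation would be analogous):

// Your own result type; callers never see S3Object or any Riak type.
public class StoredImage {
    private final String key;
    private final byte[] content;

    public StoredImage(String key, byte[] content) {
        this.key = key;
        this.content = content;
    }

    public String getKey() { return key; }
    public byte[] getContent() { return content; }
}

// The common interface your services code against.
public interface ImageStore {
    StoredImage getImage(String key);
}

// One wrapper per backend, e.g. S3:
public class S3ImageStore implements ImageStore {
    private final com.amazonaws.services.s3.AmazonS3 s3;
    private final String bucket;

    public S3ImageStore(com.amazonaws.services.s3.AmazonS3 s3, String bucket) {
        this.s3 = s3;
        this.bucket = bucket;
    }

    @Override
    public StoredImage getImage(String key) {
        try (com.amazonaws.services.s3.model.S3Object obj = s3.getObject(bucket, key)) {
            byte[] bytes = com.amazonaws.util.IOUtils.toByteArray(obj.getObjectContent());
            return new StoredImage(key, bytes);
        } catch (java.io.IOException e) {
            throw new RuntimeException("Failed to read " + key, e);
        }
    }
}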
A precise answer to your question would require knowing the language and/or platform you are using. Eric is correct in his answer above that wrapping the data inside one of your own classes is one way to handle this. However, depending on the language, the details of the final implementation will vary, as will the amount of work required when adding a possible return value type.
In Java, for example, one way to handle this would be to return a heterogeneous container. Take a look at this thread:
Type safe heterogeneous container pattern to store lists of items
I am trying to figure out if there is a 'simple' way to persistently store a large object instance in JVM memory so it can be shared and re-used across multiple runs by other programs.
I am working in NetBeans using Java 8. The data is some ~500 MB of serialized objects. They fit easily in RAM but take a few minutes to de-serialize from disk each time.
Currently the program loads a serialized object from the local disk into memory for each run. As the data is only read during the tests, it would be optimal to hold it in memory and access it directly on each run.
We've looked into RMI, but the overhead of the marshalling process and the transmission would kill the performance.
I was wondering if there is a more direct way to access the data from another program running on the same machine, like sharing memory.
The multiple runs are to test different processing / parameters on the same input data.
I am open to suggestions on the best practice to achieve this 'pre-loading'; any hints would be very appreciated.
Thanks
Java serialization is never going to play well as a persistence mechanism - changes to the classes can easily be incompatible with the previously stored objects meaning they can no longer be de-serialized (and in general all object models evolve one way or another).
While library suggestions are really off-topic on SO, I would advise looking at a distributed cache such as Hazelcast or Coherence.
While you'll still have to load the objects, both Hazelcast and Coherence provide a scalable way to store objects that can be accessed from other JVMs, and both provide various ways to handle long-term persistence and evolving classes.
However, neither works well with big object graphs, so you should look at breaking the model apart into key/value pairs.
An example might be an order system where the key might be a composite like this:
public class OrderItemKey
{
    private OrderKey orderKey;
    private int itemIndex;
    ...
}
And the value like this:
public class OrderItem
{
    private ProductKey productKey;
    private int quantity;
    ...
}
Where OrderItems could be in one cache, while Products would be in another.
Once you've got a model that plays well with a distributed cache you need to look at co-locating related objects (so they're stored in the same JVM) and replicating reference objects.
When you're happy with the model, look at moving processing into the cache nodes where the objects reside, rather than pulling objects out to perform operations on them. This reduces network load, giving considerable performance gains.
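As a hedged sketch of that last point, here is what in-place processing could look like with Hazelcast's EntryProcessor (Hazelcast 5.x API, assuming OrderItem has the usual getters and setters):

import com.hazelcast.map.EntryProcessor;
import java.util.Map;

public class IncrementQuantity implements EntryProcessor<OrderItemKey, OrderItem, Integer> {
    @Override
    public Integer process(Map.Entry<OrderItemKey, OrderItem> entry) {
        // Runs on the cluster member that owns the key, so the OrderItem
        // never crosses the network to the caller.
        OrderItem item = entry.getValue();
        item.setQuantity(item.getQuantity() + 1);
        entry.setValue(item); // write the change back into the cache
        return item.getQuantity();
    }
}

It would be invoked with something like orderItems.executeOnKey(key, new IncrementQuantity()), and only the small Integer result travels back to the caller.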
If I understood well, you need to read a huge amount of data from disk and use this data only for test purposes, so every time you run the tests you need to reload it, which slows down your tests.
If this is the situation, you can also try creating an in-memory disk (RAM disk), so your file is saved on a disk with the performance of RAM. On Linux systems you can create one with ramfs or tmpfs, for example: mount -t tmpfs -o size=1g tmpfs /mnt/ramdisk
I want to make two programs, where program 1 will have a static collection and some getters/setters to access/update its values.
I want program 2 to be able to access/call the getters/setters of program 1, so that the static collection can be shared among many programs/processes.
* I don't want to open any port.
You can't just declare a variable static (or super-static) and expect it to be available in code outside of your program - it just doesn't work that way. What you need is some sort of inter-process communication, and the possibilities are endless. To name a few:
- serialize / deserialize to and from a file (local or on the network)
- sockets (basically, you open a network connection between two ports on localhost)
- a database
- shared memory (whether this is possible depends on the OS)
Your OS of choice may offer other means, but the principle remains the same: whenever the variable changes, one application needs to notify the other.
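As a minimal sketch of the first option, assuming both programs agree on a file path (invented here) and on the collection type:

import java.io.*;
import java.util.HashMap;
import java.util.Map;

public class FileExchange {
    private static final File SHARED_FILE = new File("/tmp/shared-map.ser");

    // Program 1 calls this whenever its collection changes.
    public static void write(Map<String, String> map) throws IOException {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream(SHARED_FILE))) {
            out.writeObject(new HashMap<>(map)); // copy, so we write a stable snapshot
        }
    }

    // Program 2 calls this to pick up the latest state.
    @SuppressWarnings("unchecked")
    public static Map<String, String> read() throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream(SHARED_FILE))) {
            return (Map<String, String>) in.readObject();
        }
    }
}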
This cannot be done with static variables alone. They are accessible everywhere inside the JVM your program runs in, but cannot be accessed from outside it that simply. Use RMI, sockets, or I/O streams to handle this inter-process communication.
There is no straightforward way to do this. RMI or CORBA would work, but would be overkill. You may use plain old sockets to communicate between the Java apps, or use java.nio channels.