Does anyone have any insight into transaction options available over web-service calls?
1.) We have two applications that need transactional communication between them.
2.) App1 calls a web service on App2 and then makes some changes to its own DB. The call to App2 and the changes to App1's own DB need to be coordinated. How can we do this? What are the possible options?
Make the web-service call and, if it succeeds, make the change in your own DB. If the change to your own DB fails, call the web service again to revert the changes made by the earlier call (a compensating action). For this to work, the web service must provide the revert functionality.
For example, if the web service has a createUser function, it should also have a deleteUser function.
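A minimal sketch of this compensation approach, assuming hypothetical `createUser`/`deleteUser` operations on the remote service (the interface and method names are illustrative stand-ins for the real web-service client):

```java
// Sketch of the compensation pattern described above. RemoteUserService and
// its methods are hypothetical stand-ins for the real web-service client.
public class CompensationDemo {

    public interface RemoteUserService {
        void createUser(String name);   // remote call
        void deleteUser(String name);   // compensating call
    }

    /** Returns "committed" on success, "compensated" if the local DB step fails. */
    public static String createWithCompensation(RemoteUserService svc,
                                                String user,
                                                boolean localDbFails) {
        svc.createUser(user);                 // 1. remote call first
        try {
            if (localDbFails) {               // 2. simulate the local DB update failing
                throw new RuntimeException("local DB update failed");
            }
            return "committed";
        } catch (RuntimeException e) {
            svc.deleteUser(user);             // 3. compensate the remote change
            return "compensated";
        }
    }
}
```

Note that this gives you eventual consistency at best: if the compensating call itself fails, you need retries or manual reconciliation.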
It depends on the technology stack you are using. In .NET, WCF offers transaction features; otherwise the only thing you can do is minimize the timespan in which an error can occur.
In previous applications, I've passed a token to the web service. When the service returns (sync or async), it returns the token. The token has an embedded timestamp. If the timestamp has expired, the transaction is aborted; if not, I assume the web-service call was successful.
After the web-service call returns successfully, the NEXT method call records the transaction within your system. This creates a very small window in which the system behind the web service and your system can be out of sync. It also lessens the chance that an unexpected error will prevent the update/insert on your side.
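A stdlib-only sketch of the token-with-embedded-timestamp check described above (the `"payload|expiryEpochSeconds"` token format is an assumption for illustration, not a prescribed wire format):

```java
import java.time.Instant;

// Sketch of the token-with-timestamp approach. The token format
// ("payload|expiryEpochSeconds") is an illustrative assumption.
public class TokenCheck {

    public static String issueToken(String payload, long ttlSeconds) {
        long expiry = Instant.now().getEpochSecond() + ttlSeconds;
        return payload + "|" + expiry;
    }

    /** Returns true if the returned token is still within its validity window. */
    public static boolean isStillValid(String token) {
        long expiry = Long.parseLong(token.substring(token.lastIndexOf('|') + 1));
        return Instant.now().getEpochSecond() <= expiry;
    }

    public static void main(String[] args) {
        String token = issueToken("call-42", 60);
        if (isStillValid(token)) {
            // record the transaction on our side
        } else {
            // abort: assume the remote call did not complete in time
        }
    }
}
```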
You could try Atomikos ExtremeTransactions. It includes support for WS/SOAP transactions whose rollback spans multiple sites.
Guy
I am facing an issue with Shiro.
We have two applications (two WARs) on the same WebLogic Server 12c.
One WAR is the UI, which is integrated with CAS.
The second WAR is Jersey REST services.
My problem: the UI is authenticated successfully, and the JSESSIONID is passed back to the REST services when communicating with them.
Before a request reaches the service, a Shiro filter class we wrote checks each time whether the Subject is valid.
Our UI also has a requirement to call one specific REST service every minute.
Issue: each time a call reaches the Shiro filter class, we get a different Subject. I tried to print the session ID from the Subject (each time it's different), even though the user was authenticated successfully in the UI, and in the backend the user name is sometimes null. Can you please help?
Subject subject = getSubject(request, response);
There are a few things that typically cause this.
If you are handling the login yourself (by calling something like subject.login() directly, instead of letting the ShiroFilter handle it)
Both application servers are managing the sessions outside of Shiro: See https://shiro.apache.org/session-management.html#session-storage
That said, I'd need more details of how your app is set up. What do your cookies look like, how are your app servers configured, etc.?
Usually in Spring Boot applications, we can use JPA auditing to do the tracking.
Spring Boot Jpa Auditing
In a microservices architecture, however, I'd try to avoid involving security in the core microservices. Instead, we can do authentication/authorization at the API gateway.
But if the core service doesn't get the current login user, we have to find a way to pass the current operator to the core services. It could be a user-identifier header on the request. Or maybe we can pass the token to the core services and let them fetch the login user from the auth server.
I am wondering if anyone has handled such a case and can give some suggestions.
If I understand the question correctly ...
You have an API gateway in which authentication/authorisation is implemented
On successful negotiation through the API gateway the call is passed on to a core service
The core services perform some auditing of 'who does what'
In order to perform this auditing the core services need the identity of the calling user
I think the possible approaches here are:
Implement auditing in the API gateway. I suspect this is not a runner because the auditing is likely to be more fine grained than can be implemented in the API gateway. I suspect the most you could audit in the API gateway is something like User A invoked Endpoint B whereas you probably want to audit something like User A inserted item {...} at time {...} and this could only be done within a core service.
Pass the original caller's credentials through to the core service and let it authenticate again. This will ensure that no unauthenticated calls can reach the core service and would also have the side effect of providing the user identity to the core service which it can then use for auditing. However, if your API gateway is the only entrypoint for the core services then authenticating again within a core service only serves to provide the user identity in which case it could be deemed overkill.
Pass the authenticated user identity from the API gateway to the core service and let the core service use this in its auditing. If your API gateway is the only entrypoint for the core services then there is no need to re-authenticate on the core service, and the provision of the authenticated user identity could be deemed part of the core service's API.
As for how this identity should be propagated from the API gateway to the core service, there are several options depending on the nature of the interop between API gateway and core service. It sounds like these are HTTP calls; if so then a request header would make sense since this state is request scoped. It's possible that you are already propagating some 'horizontal state' (i.e. state which is related to the call but is not a caller-supplied parameter) such as a correlationId (which allows you to trace the call through the API gateway down into the core service and back again); if so then the authenticated user identity could be added to that state and provided to the core service in the same way.
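A small sketch of the header-propagation idea, using plain maps to stand in for HTTP headers. The header name `X-Authenticated-User` is an assumption; any agreed-upon name works, as long as the gateway overwrites any caller-supplied value so the core service can trust it:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of propagating the authenticated identity from gateway to core
// service. "X-Authenticated-User" is an assumed, not standardized, header name.
public class IdentityPropagation {

    public static final String USER_HEADER = "X-Authenticated-User";

    /** Gateway side: set the identity on the outbound request headers. */
    public static Map<String, String> forwardHeaders(Map<String, String> inbound,
                                                     String authenticatedUser) {
        Map<String, String> outbound = new HashMap<>(inbound);
        // Overwrite unconditionally: never trust a caller-supplied value.
        outbound.put(USER_HEADER, authenticatedUser);
        return outbound;
    }

    /** Core-service side: read the identity for auditing. */
    public static String currentUser(Map<String, String> headers) {
        return headers.get(USER_HEADER);
    }
}
```

The same mechanism extends naturally to other horizontal state such as a correlationId.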
Do you have a code example? I have tried to pass the token from Zuul to another module, but I always get null in the header in the other module.
Here's the situation. I have an application that's working fine with the transaction manager I'm using. What I need to do is post some information to another application in the form of HTTP calls at certain stages. What I want is for the HTTP calls to be made only when the transaction completes successfully. If the transaction fails for some reason (some exception), then the HTTP calls should not be made.
Any suggestions on how this can be done?
Is there a way that, during the course of my code, I can register these HTTP calls so that when the transaction manager completes successfully, the calls are made?
Spring provides a clean way of handling callback events via TransactionSynchronization. This registers hooks for various transaction events (after commit, on completion, etc.).
Here is a related post which addresses your problem. Here is another link.
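The "register now, fire only after commit" idea can be sketched with the standard library alone; the tiny `Tx` class below is a stand-in for the real transaction, where in an actual Spring app you would instead call `TransactionSynchronizationManager.registerSynchronization(...)` and override `afterCommit()`:

```java
import java.util.ArrayList;
import java.util.List;

// Stdlib-only sketch of the after-commit callback idea behind Spring's
// TransactionSynchronization. Tx is an illustrative stand-in, not Spring API.
public class AfterCommitDemo {

    static class Tx {
        private final List<Runnable> afterCommit = new ArrayList<>();

        void registerAfterCommit(Runnable hook) { afterCommit.add(hook); }

        void commit()   { afterCommit.forEach(Runnable::run); }  // hooks fire now
        void rollback() { afterCommit.clear(); }                 // hooks never fire
    }

    /** Returns the number of HTTP calls made: 1 if committed, 0 if rolled back. */
    public static int run(boolean commit) {
        int[] httpCalls = {0};
        Tx tx = new Tx();
        tx.registerAfterCommit(() -> httpCalls[0]++);  // the deferred HTTP call
        if (commit) tx.commit(); else tx.rollback();
        return httpCalls[0];
    }
}
```

One caveat worth knowing: an afterCommit hook runs after the DB commit, so if the HTTP call itself then fails, the transaction is not rolled back; you still need retry handling there.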
Don't make the HTTP call directly; rather (in your JTA transaction) post a message to JMS/XA. That way, the message will be visible to consumers only when the transaction commits.
Then add an HTTP consumer that takes a message and makes the HTTP call. Per JMS/XA semantics, it will only receive messages from transactions that committed on the sender's side.
If you want a free JTA transaction manager then check out https://www.atomikos.com
Best
Guy Pardon
I have a service layer which calls a web service. The number of requests generated by the service layer could potentially be very large, and I want to build in some contingency in case the volume of requests becomes too much for the web service to handle. I know I can add some exception handling to tell whether a request failed, but I don't want to keep hitting the service if it's down or struggling to handle the requests.
How can I tell my service layer to stop making calls when the service is unavailable and then resume once it's active again? I know this can be done manually using a file containing a flag which the service would check before making a call to the web service; this flag could then be updated whenever the server goes down. However, I would prefer something automatic.
Thanks,
I think this could easily be done with interceptors. Just write your own interceptor and implement the logic there (essentially the circuit-breaker pattern).
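The logic such an interceptor would wrap is, in essence, a circuit breaker. A minimal self-contained sketch (the threshold and cool-down values are illustrative, not tuned; production code would typically use a library such as Resilience4j instead):

```java
import java.time.Instant;

// Minimal circuit-breaker sketch: stop calling a struggling service, then
// retry after a cool-down. Thresholds here are illustrative only.
public class CircuitBreaker {

    private final int failureThreshold;
    private final long openForSeconds;
    private int failures = 0;
    private Instant openedAt = null;

    public CircuitBreaker(int failureThreshold, long openForSeconds) {
        this.failureThreshold = failureThreshold;
        this.openForSeconds = openForSeconds;
    }

    /** Should the next web-service call be attempted? */
    public boolean allowCall() {
        if (openedAt == null) return true;                            // closed: go ahead
        if (Instant.now().isAfter(openedAt.plusSeconds(openForSeconds))) {
            openedAt = null;                                          // cool-down over:
            failures = 0;                                             // allow a retry
            return true;
        }
        return false;                                                 // open: skip the call
    }

    public void recordSuccess() { failures = 0; }

    public void recordFailure() {
        if (++failures >= failureThreshold) openedAt = Instant.now(); // trip the breaker
    }
}
```

The interceptor calls `allowCall()` before each request and `recordSuccess()`/`recordFailure()` afterward, which replaces the manual file-flag approach with automatic recovery.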
I have a Java SOAP data service which sits on top of a Sybase database which, for reasons out of my control, has unreliable performance. The database is part of a vendor package which has been modified by an internal team and most of the issues are caused by slow response times at certain times of the day.
The SOAP service provides data to a calculation grid and when I request data, I need the response time to be both fast and consistent. The service provides basic CRUD functionality, but the ratio of reads to writes is approximately 100:1.
What is the best strategy to isolate myself from the database's unreliable performance and ensure that the SOAP service is fast and reliable?
I have seen this issue a few times, normally with a vendor database.
If this is on Windows, you could create a Windows service as an intermediary between the SOAP service and the database. Then put a message queue (either MSMQ or a JMS implementation such as MQ Series) between the SOAP service and Windows service for asynchronous communications. In this way the database performance issues will no longer affect the SOAP service. This solution does, however, come at the cost of increased complexity.
Note that a .NET web service can be called by, and respond asynchronously to, its clients. I'm not sure if that's possible with a Java SOAP service.
If this is on some flavour of Unix, I assume it has similar functionality to a Windows service - maybe a daemon.
Why not use a thread? That way, the application could gently wait even if the database is slow.
RoadWarrior's response is right on. Requests for any operation get put in a queue: the user comes in once to make the request, and again to pick up the result. This is in fact what happens on sites like Expedia, where the site talks to an unreliable service (the backend). The user's browser pings the server until the red light turns green.
How about caching the responses from the web service (either on the client invoking the WS request, or by setting up a proxy web service in between)?
You could cache the results from the DB if the DB is not too big.
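Given the roughly 100:1 read-to-write ratio mentioned in the question, a simple time-to-live cache in front of the slow database can absorb most of the load. A minimal sketch (the TTL value and eviction policy are assumptions; a production setup might use Caffeine or Ehcache instead):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of a TTL cache to shield a read-heavy SOAP service from a slow DB.
// Expired entries are refreshed lazily on access; TTL choice is illustrative.
public class TtlCache<K, V> {

    private static class Entry<V> {
        final V value;
        final long expiresAtMillis;
        Entry(V value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    /** Return the cached value, or load (and cache) it if missing or expired. */
    public V get(K key, Function<K, V> loader) {
        Entry<V> e = map.get(key);
        if (e == null || System.currentTimeMillis() > e.expiresAtMillis) {
            V v = loader.apply(key);  // the slow DB / web-service call
            map.put(key, new Entry<>(v, System.currentTimeMillis() + ttlMillis));
            return v;
        }
        return e.value;
    }
}
```

The trade-off is staleness: reads may be up to one TTL behind the database, which is usually acceptable for a workload this read-dominated.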
Get the other internal team to tune that database, so everyone using the app benefits. I do love me some indexes!