I want to build a web application with REST and Spring Boot. A REST web service is stateless, but I want to make it stateful, so that information the server sends to the client after the first request can be used in subsequent requests, and work done in the first or second request can be reused later.
Can we generate some session ID for this, which the client can send to the server in subsequent requests? If yes, then:
if state changes (values get modified due to some manipulation) in some objects/beans, how can we save the state of those objects/beans to make the application stateful? And what should the scope be of those beans (whose values will be modified), and of the classes/beans that call them, given that multiple clients or users will be using this web application?
RESTful APIs are stateless by design; if you make them stateful by keeping state on the server side, it's not REST!
What you need is a correlation ID, which is a recognised pattern in distributed system design. A correlation ID lets you tie related requests together.
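A minimal sketch of the correlation-ID idea, not tied to any framework: reuse the ID the client sent, otherwise mint a new one. The header name `X-Correlation-Id` is a common convention rather than a standard, and this class is illustrative, not part of any library.

```java
import java.util.UUID;

public class CorrelationId {
    // Conventional header name; any agreed-upon name works.
    public static final String HEADER = "X-Correlation-Id";

    // Reuse the client's correlation ID if one was sent,
    // otherwise generate a fresh one for this request chain.
    public static String resolve(String incomingHeaderValue) {
        if (incomingHeaderValue == null || incomingHeaderValue.isEmpty()) {
            return UUID.randomUUID().toString();
        }
        return incomingHeaderValue;
    }
}
```

The server echoes the resolved ID back in the response, and the client includes it in every follow-up request, which lets you tie the requests together in logs and storage without holding session state.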
Sessions are typically an optimization to improve performance when running multiple servers. They improve performance by ensuring that a client's requests are always sent to the same server, which has cached that client's data.
If you only want to run a single server, you won't have to worry about sessions. There are two common approaches to solve this problem.
1. In Memory State
If the state you want to maintain is small enough to fit into memory, and you don't mind losing it in the event of a server crash or reboot, you can keep it in memory. Create a Spring service that holds a data structure, then inject that service into your controllers and modify the state in your HTTP handlers.
Services are singletons by default, so state stored in a service is accessible to all controllers, components, and user requests. A small pseudo-example is below.
Service Class
@Service
public class MyState
{
    private final Map<String, Integer> sums = new HashMap<>();

    public synchronized int get(String key) {
        return sums.getOrDefault(key, 0);
    }

    public synchronized void add(String key, int val) {
        int sum = sums.getOrDefault(key, 0);
        sum += val;
        sums.put(key, sum);
    }
}
Controller Class
@RestController
@RequestMapping("/sum")
public class FactoryController
{
    @Autowired
    private MyState myState;

    @PostMapping(consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
    @ResponseStatus(HttpStatus.OK)
    public SuccessResponse saveFactory(@RequestBody KeyVal keyVal)
    {
        myState.add(keyVal.getKey(), keyVal.getValue());
        return new SuccessResponse(); // the original was missing a return statement
    }
}
Note: This approach will not work if you are running multiple servers behind a load balancer, unless you use a more complex solution like a distributed cache. Sessions can be used to optimize performance in this case.
2. Database
The other option is to just use a database to store your state. This way you won't lose data if you crash or reboot. Spring supports the hibernate persistence framework and you can run a database like Postgres.
Note: If you are running multiple servers you will need a more complex solution since hibernate caches data in memory. You will have to plug hibernate into a distributed cache to synchronize in memory state across multiple servers. Sessions could be used as a performance optimization here.
Important
Whenever you are modifying state you need to make sure you are doing it in a thread safe manner, otherwise your state may be incorrect.
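One thread-safe alternative to the synchronized methods above is `ConcurrentHashMap.merge`, which performs the read-modify-write atomically per key, so concurrent requests cannot lose updates the way an unsynchronized get/put pair can. A small sketch (the class name is illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SafeCounters {
    private final Map<String, Integer> sums = new ConcurrentHashMap<>();

    public void add(String key, int val) {
        // merge() is atomic per key: inserts val if absent,
        // otherwise applies Integer::sum to the old value and val
        sums.merge(key, val, Integer::sum);
    }

    public int get(String key) {
        return sums.getOrDefault(key, 0);
    }
}
```

This avoids locking the whole map on every call, which matters once many request threads hit the shared state.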
Related
I have the following situation: imagine there is a public REST service. What we don't want is for someone to be able to access this service many times in a short period of time, because they would be able to overload our database (essentially a DDoS attack, I presume?).
Is there a way to effectively protect against this type of attack? The technology we use is Spring/Spring Security.
If you are using Spring Boot, there is a fairly new open-source project which handles this:
https://github.com/weddini/spring-boot-throttling
Declarative approach of throttling control over the Spring services.
The @Throttling annotation helps you to limit the number of service
method calls per java.util.concurrent.TimeUnit for a particular user,
IP address, HTTP header/cookie value, or using Spring Expression
Language (SpEL).
Obviously this wouldn't prevent DDOS attacks at the web server level, but it would help limit access to long running queries or implement a fair usage policy.
For those interested in the subject, spring-boot-throttling seems no longer maintained.
So I took a look at bucket4j.
Its use is quite simple: there are three main objects:
Bucket : interface that defines the total capacity of available tokens and provides the methods to consume them.
Bandwidth : class that defines the limits of the bucket.
Refill : class that defines the way the bucket will be refilled with new tokens.
Example with simple Spring Boot controller:
@RestController
public class TestLimit {

    private final Bucket bucket;

    public TestLimit() {
        Bandwidth limit = Bandwidth.classic(120, Refill.greedy(120, Duration.ofMinutes(1)));
        this.bucket = Bucket4j.builder().addLimit(limit).build();
    }

    @RequestMapping(path = "/test-limit/", method = RequestMethod.GET)
    public ResponseEntity<String> download() throws IOException {
        if (this.bucket.tryConsume(1)) {
            return ResponseEntity.status(HttpStatus.OK).build();
        } else {
            return ResponseEntity.status(HttpStatus.TOO_MANY_REQUESTS).build();
        }
    }
}
In this case, we have a limit of 120 requests per minute, with bucket capacity 120 and a refill rate of 120 tokens per minute.
If we exceed this limit, we will receive an HTTP 429 code (TOO_MANY_REQUESTS).
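To make the capacity/refill distinction concrete, here is a plain-Java sketch of the token-bucket mechanics that libraries like bucket4j implement (this is illustrative only and not the bucket4j implementation): the bucket holds up to `capacity` tokens, refills at a fixed rate, and each request either consumes a token or is rejected.

```java
public class SimpleTokenBucket {
    private final long capacity;
    private final double refillPerNano; // tokens added per nanosecond
    private double tokens;
    private long lastRefill;

    public SimpleTokenBucket(long capacity, long refillTokens, long refillPeriodNanos) {
        this.capacity = capacity;
        this.refillPerNano = (double) refillTokens / refillPeriodNanos;
        this.tokens = capacity;       // start full
        this.lastRefill = System.nanoTime();
    }

    public synchronized boolean tryConsume(int n) {
        // Top up tokens earned since the last call, capped at capacity.
        long now = System.nanoTime();
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= n) {
            tokens -= n;
            return true;  // request allowed
        }
        return false;     // rate limit hit
    }
}
```

With `new SimpleTokenBucket(120, 120, Duration.ofMinutes(1).toNanos())` you get the same 120-requests-per-minute behaviour as the controller above.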
I have a design problem as follows: I want to execute several SOAP web services, where each response depends on the former.
When all responses are obtained, I want to validate all the obtained data, then build some output based on it, and also issue a DB update.
Therefore I created a TemplateFacade method that wraps all the web services to be executed. Problem: I obviously have to persist the responses between the method calls. This is problematic, because autowired services should by definition be stateless and are singletons.
So how can I use injection with services that have to maintain some kind of state (at least until the Executor.execute() has terminated)?
Could you recommend a better design approach?
@Component
class Executor {

    @Autowired
    TemplateFacade template;

    public void execute() {
        template.run();
        template.validate();
        template.buildOutput();
        template.updateDatabase();
    }
}

@Service
class TemplateFacade {

    // service classes wrapping webservice soap execution logic
    @Autowired
    PersonSoap personSoap;
    @Autowired
    CarSoap carSoap;
    @Autowired
    ServiceDao dao;

    private WebserviceRsp personRsp, carRsp;

    void run() {
        personRsp = personSoap.invoke();
        // process response and prepare CarSoap XML accordingly, then send
        carRsp = carSoap.invoke();
    }

    // the following methods all operate on the responses captured in run()
    void validate() {
        // validate both responses
    }

    void buildOutput() {
        // create system output based on responses
    }

    void updateDatabase() {
        dao.update(..);
    }
}
To share state between multiple web service calls, you could keep track using a PersonState object in the session, which is tied to the user. I recommend encryption or hashing to secure the information.
When validate completes, you could put a PersonState in the session. When buildOutput starts, you could get the PersonState object and continue with your logic, and so on.
It is important that you keep the PersonState small, to limit the memory footprint. In case of a lot of data, you could create a state object that holds only the state necessary for the next step; e.g. at the end of validate you could create a BuildState object and put it in the session. The build step would then get the object from the session and continue.
But I am not sure it is really necessary to keep track of state across two web service calls. A better solution would be to move all the logic to another layer, and use the web services as just a window onto your business/process layer.
Edit:
One more solution that could work for you is that the response of each step could contain the state required for the next step; e.g. the validate response contains the personState, which could then be passed along to the build step.
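That last idea can be sketched as each step returning an immutable result object that the next step takes as input, so no service instance has to hold fields between calls. The class names below (ValidationResult, Output) are illustrative, not from the question:

```java
public class StatelessPipeline {
    // Each step's output carries everything the next step needs.
    public record ValidationResult(String personData, String carData, boolean valid) {}
    public record Output(String summary) {}

    // validate() takes the raw responses and returns a result object
    // instead of storing them in instance fields.
    public ValidationResult validate(String personRsp, String carRsp) {
        boolean ok = personRsp != null && carRsp != null;
        return new ValidationResult(personRsp, carRsp, ok);
    }

    // buildOutput() consumes the previous step's result explicitly.
    public Output buildOutput(ValidationResult v) {
        if (!v.valid()) {
            throw new IllegalStateException("validation failed");
        }
        return new Output(v.personData() + "/" + v.carData());
    }
}
```

Because all state flows through method parameters and return values, the service stays a stateless singleton and is safe for concurrent callers.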
I have a Java based web app hosted on AWS. It is read-mostly so it makes a lot of sense to cache objects retrieved from the database for performance.
When I do update an object, I would like to be able to broadcast to all the servers that the object was saved and it should be invalidated from all local caches.
This does not need to be real time. Stale objects are annoying and need to be flushed within about 20 seconds; users notice if they stick around for minutes. Cache invalidation does not have to happen the millisecond an object is saved.
What I've thought about
I've looked into broadcast technologies such as JGroups, but JGroups isn't supported on AWS.
I don't think that Amazon's SQS messaging service can be made into a broadcast service.
I'm considering using the database for this purpose: I'd write events to a database table and have each server poll this table every few seconds to get a new list items.
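The database-polling idea above can be sketched as follows: each server remembers the last event ID it has seen and periodically asks for anything newer, evicting those objects from its local cache. The event table is stood in for by a `List` here; in practice it would be a query like `SELECT id, object_id FROM cache_events WHERE id > ?` (all names here are hypothetical).

```java
import java.util.ArrayList;
import java.util.List;

public class InvalidationPoller {
    public record Event(long id, String objectId) {}

    // High-water mark: the highest event ID this server has processed.
    private long lastSeenId = 0;

    // Returns the object IDs to evict from the local cache and
    // advances the high-water mark past them.
    public List<String> poll(List<Event> eventTable) {
        List<String> toEvict = new ArrayList<>();
        for (Event e : eventTable) {
            if (e.id() > lastSeenId) {
                toEvict.add(e.objectId());
                lastSeenId = e.id();
            }
        }
        return toEvict;
    }
}
```

Run `poll` from a scheduled task every few seconds; with a monotonically increasing event ID, each server sees every invalidation exactly once, and the staleness window is bounded by the polling interval.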
Two options come to mind. The first is to use Amazon SNS, which can use SQS as a delivery backend. This might be overkill, though, since it's designed as a frontend to lots of delivery types, including e-mail and SMS.
The approach I'd try is something along the lines of Comet-style push notifications. Have each machine with a cache open a long-lived TCP connection to the server that is responsible for handling updates, and send a compact "invalidate" message from that server to everyone listening. As a special-purpose protocol, this could be done with minimal overhead, perhaps just by sending the object ID (and class, if necessary).
Redis is a handy solution for broadcasting a message to all subscribers on a topic. It is convenient because it can be run as a Docker container for rapid prototyping, and it is also offered by AWS as a managed service for multi-node clusters.
Setting up a ReactiveRedisOperations bean:
@Bean
public ReactiveRedisOperations<String, Notification> notificationTemplate(LettuceConnectionFactory lettuceConnectionFactory) {
    RedisSerializer<Notification> valueSerializer = new Jackson2JsonRedisSerializer<>(Notification.class);
    RedisSerializationContext<String, Notification> serializationContext = RedisSerializationContext
            .<String, Notification>newSerializationContext(RedisSerializer.string())
            .value(valueSerializer)
            .build();
    return new ReactiveRedisTemplate<>(lettuceConnectionFactory, serializationContext);
}
Subscribing on a topic:
@Autowired
private ReactiveRedisOperations<String, Notification> reactiveRedisTemplate;

@Value("${example.topic}")
private String topic;

@PostConstruct
private void init() {
    this.reactiveRedisTemplate
            .listenTo(ChannelTopic.of(topic))
            .map(ReactiveSubscription.Message::getMessage)
            .subscribe(this::processNotification);
}
Publishing a message on a topic:
@Autowired
private ReactiveRedisOperations<String, Notification> redisTemplate;

@Value("${example.topic}")
private String topic;

public void publish(Notification notification) {
    this.redisTemplate.convertAndSend(topic, notification).subscribe();
}
RedisInsight is a GUI that can be used for interacting with redis.
Here is a complete sample implementation using spring-data-redis.
I'm starting to develop for Web and I'm using Spring MVC as my Server Framework. Now I'm wondering about creating variables in Controller class. I had to do it to manage some data in server, but now I'm concerned about the following case: If I have more than one user sending information to the same page, at the same time, would one user interfere on another user variable?
Here's some code example:
@Controller
public class MyController {

    private int number;

    @RequestMapping("/userInformation")
    public String getInformation(int info) {
        number = info;
        return "userInformation"; // view name; the original omitted the return
    }

    public void doSomethingWithIt() {
        number = number + 1;
    }
}
In this case, if I have more than one user sending data to /userInformation at the same time, would Spring MVC create one controller instance for each user? That way I wouldn't have a problem, I guess. But if not, I have to rethink this implementation, don't I?
You are right. Controllers are singletons and must be stateless. Server side state belongs in session or in a data store. You can also use a request scoped object (look at bean scopes in spring).
The Spring container will create one instance of your Controller. So all users will share that instance.
If you have data that is private to a user, you have several options:
store it in the HTTP session (not recommended if it's a lot of data, as your memory usage might explode)
store it in a database and retrieve it upon each request, based on some property identifying the user
store it in a memory cache and retrieve it upon each request, based on some property identifying the user
Option 3 is the most simple one of them, you can even implement it as a Map<User, UserData> instance variable on your Controller if you like. It's not the cleanest, most beautiful or most secure option, just the most simple.
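A minimal sketch of option 3, assuming a string user ID as the key and a placeholder `UserData` type: `ConcurrentHashMap` makes lookups and inserts safe under concurrent requests, and `computeIfAbsent` loads each user's data at most once even under contention.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class UserDataCache {
    public record UserData(String value) {}

    private final Map<String, UserData> cache = new ConcurrentHashMap<>();

    public UserData getOrLoad(String userId) {
        // computeIfAbsent runs the loader only if the key is missing,
        // and at most once per key even with concurrent callers
        return cache.computeIfAbsent(userId, this::loadFromDatabase);
    }

    // Stand-in for a real database lookup.
    private UserData loadFromDatabase(String userId) {
        return new UserData("data-for-" + userId);
    }
}
```

In a Spring application this class would be a singleton service injected into the controller; without an eviction policy it grows unboundedly, which is one reason it is the simplest rather than the best option.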
You should not use any instance variables in a Spring controller that represent the state of the controller class. It should not have state, since it is a single instance. Instead, you can hold references to injected managed beans.
I have a JEE6 application that runs on a GlassFish 3.1.2 cluster.
One @Singleton bean contains some kind of (read-only) cache. A user can press a button in the GUI to update the cache with (updated) content from the database.
This works well in a non-clustered environment, but now we need to switch to a cluster.
So I am facing the problem that when a user presses that update button, only the cache singleton on his server node is updated. My question is: what would be the easiest way to make the other singletons (on the other nodes) update their data too?
I am aware of the question "Singleton in Cluster environment", but my question is specific to GlassFish (because I hope there is some built-in support), while the other one is tagged with "WebSphere". And my question is about JEE6; the other one is older than JEE6.
GlassFish High Availability Administration Guide explicitly states:
Restrictions
When configuring session persistence and failover, note the following
restrictions:
When a session fails over, any references to open files or network connections are lost. Applications must be coded with this restriction in mind.
EJB Singletons are created for each server instance in a cluster, and not once per cluster.
Another suggestion would be to use JMS and have the GUI button press post a message to a JMS topic. All the singleton beans can subscribe to that topic, and receiving the message will cause them all to update from the database almost simultaneously. The benefit of this approach is that it leverages more of the built-in features of GlassFish, without necessarily having to bring in another framework.
In any case, moving from a single instance to multiple instances is never a truly seamless change, and it is going to cause some difficulty. There will probably need to be application changes to make sure that all the relevant state (other than session state) is shared correctly with all instances in the cluster.
Unfortunately there's no built-in way of achieving what you want, but the shoal framework that Glassfish bases its clustering on could help you out here. You can solve the problem either by sending notifications to cluster members to update their caches or by replacing your current cache with a distributed one.
Below is an example using shoal to send notifications:
@Startup
@Singleton
public class Test {

    private String groupName = "mygroup";
    private String serverName = System.getProperty("HTTP_LISTENER_PORT");
    private GroupManagementService gms;

    @PostConstruct
    public void init() {
        Runnable gmsRunnable = GMSFactory.startGMSModule(serverName, groupName,
                GroupManagementService.MemberType.CORE, null);
        gms = (GroupManagementService) gmsRunnable;
        try {
            gms.join();
            gms.addActionFactory(new MessageActionFactory() {
                @Override
                public Action produceAction() {
                    return new MessageAction() {
                        @Override
                        public void consumeSignal(Signal signal)
                                throws ActionException {
                            // Update your cache here
                        }
                    };
                }
            }, groupName);
        } catch (GMSException e) {
            Logger.getAnonymousLogger().severe(e.getMessage());
        }
    }

    @PreDestroy
    public void cleanup() {
        gms.shutdown(GMSConstants.shutdownType.INSTANCE_SHUTDOWN);
    }

    /**
     * Call this from your button click.
     */
    public void updateCache() {
        try {
            byte[] message = new byte[] {};
            gms.getGroupHandle().sendMessage(groupName, message);
        } catch (GMSException e) {
            Logger.getAnonymousLogger().severe(e.getMessage());
        }
    }
}
If you wanted to use a distributed cache instead:
DistributedStateCache cache = gms.getGroupHandle().getDistributedStateCache();
Items placed in the cache will be replicated to the other cluster nodes.
Take a look at JGroups. It's a framework for reliable multicast communication. JBoss clustering mechanisms are currently based on this tool.
You can check out an example usage of JGroups here.