What we have at the start
Let's say there is a simple Spring Boot application that provides an API for some frontend. The tech stack is quite standard: Kotlin, Gradle, Spring WebMVC, PostgreSQL, Keycloak.
The frontend interacts with the app synchronously via HTTP. The client authenticates with a JWT token.
The business task
There is a list of events that can be raised somewhere in the system. Users should be notified about them.
A user is able to subscribe to one or more event notifications. A subscription is just a pair of user_id + event_type_id persisted in a dedicated Postgres table.
When event X is raised, we should find all the users subscribed to it and send them some data via WebSocket.
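For illustration, the subscription pair could be persisted with a JPA entity like this (a sketch: the table and column names match the user_subscription table used later in this article, but the entity mapping itself is an assumption and presumes the kotlin-jpa/noarg compiler plugin):

import javax.persistence.Column
import javax.persistence.Entity
import javax.persistence.GeneratedValue
import javax.persistence.Id
import javax.persistence.Table

// Hypothetical mapping of the subscription table described above
@Entity
@Table(name = "user_subscription")
class UserSubscription(
    @Id @GeneratedValue
    val id: Long? = null,
    @Column(name = "user_id")
    val userId: Long,
    @Column(name = "event_type_id")
    val eventTypeId: Long
)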
Configuration
Let's configure Spring first. Spring uses the STOMP over WebSocket protocol.
Add the dependencies to build.gradle.kts:
implementation("org.springframework.boot:spring-boot-starter-websocket")
implementation("org.springframework:spring-messaging")
Add a WebSocket config:
@Configuration
@EnableWebSocket
@EnableWebSocketMessageBroker
class WebsocketConfig(
    private val websocketAuthInterceptor: WebsocketAuthInterceptor
) : WebSocketMessageBrokerConfigurer {

    override fun configureMessageBroker(config: MessageBrokerRegistry) {
        config.enableSimpleBroker("/queue/")
        config.setUserDestinationPrefix("/user")
    }

    override fun registerStompEndpoints(registry: StompEndpointRegistry) {
        registry.addEndpoint("/ws").setAllowedOrigins("*")
        registry.addEndpoint("/ws").setAllowedOrigins("*").withSockJS()
    }

    override fun configureClientInboundChannel(registration: ChannelRegistration) {
        registration.interceptors(websocketAuthInterceptor) // we'll talk about this later
    }
}
registerStompEndpoints defines how a connection with our WebSocket is established. This config allows the frontend to interact with us via either the SockJS or the plain WebSocket client library; what they are and how they differ is not today's topic.
configureMessageBroker defines how we will interact with the frontend after the connection is established.
configureClientInboundChannel is about message interception. Let's talk about it later.
Add the /ws/** path to the ignored patterns in the Spring Security config.
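For example, with the WebSecurityConfigurerAdapter style common in that era of Spring Boot, it might look like this (a sketch; adjust to your Spring Security version and existing config):

import org.springframework.context.annotation.Configuration
import org.springframework.security.config.annotation.web.builders.WebSecurity
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter

@Configuration
class WebSecurityConfig : WebSecurityConfigurerAdapter() {
    // Skip Spring Security for the WebSocket handshake endpoint;
    // authentication happens later, on the STOMP CONNECT frame.
    override fun configure(web: WebSecurity) {
        web.ignoring().antMatchers("/ws/**")
    }
}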
Establishing a connection
On the frontend side it should look something like this (JavaScript):
const socket = new WebSocket('ws://<api.host>/ws')
// const socket = new SockJS('http://<api.host>/ws') // alternative way
const headers = {Authorization: 'JWT token here'};
const stompClient = Stomp.over(socket, {headers});
stompClient.connect(
    headers,
    function (frame) {
        stompClient.subscribe('/user/queue/notification',
            function (message) { /* ...message processing... */ });
    });
The key point is to pass the Authorization header. It is not an HTTP header, and the initial HTTP handshake itself will not be authorized. This is the reason for adding /ws/** to the ignored patterns.
We need the header and the token because we want to allow WebSocket connections only for authorized users, and we also want to know exactly which user is connected.
Authentication
Now let's add the authentication mechanism.
@Component
class WebsocketAuthInterceptor(
    private val authService: AuthService, // your implementation
    private val sessionStore: WebsocketUserSessionStore
) : ChannelInterceptor {

    override fun preSend(message: Message<*>, channel: MessageChannel): Message<*>? {
        val accessor: StompHeaderAccessor =
            MessageHeaderAccessor.getAccessor(message, StompHeaderAccessor::class.java)
                ?: return message
        val sessionId: String = accessor.messageHeaders["simpSessionId"].toString()
        if (StompCommand.CONNECT == accessor.command) {
            // Note: this Authorization header is a STOMP header, not an HTTP one
            val jwtToken: String? = accessor.getFirstNativeHeader("Authorization")
            val token: AccessToken = authService.verifyToken(jwtToken)
            val userId: Long = token.otherClaims["user_id"].toString().toLong()
            sessionStore.add(sessionId, userId)
        } else if (StompCommand.DISCONNECT == accessor.command) {
            sessionStore.remove(sessionId)
        }
        return message
    }
}
The point is to link the randomly generated WebSocket session ID with a user ID from our Spring Security store and keep that pair alive for the lifetime of the session. The JWT token should be parsed from the message headers.
Then a user ID should be obtained from the given token. The implementation of that part depends on your exact Spring Security config. In the case of Keycloak there is a useful static method, org.keycloak.adapters.rotation.AdapterTokenVerifier::verifyToken.
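For illustration, an AuthService built on that helper might look like this (a sketch; how you obtain the KeycloakDeployment depends on your adapter configuration):

import org.keycloak.adapters.KeycloakDeployment
import org.keycloak.adapters.rotation.AdapterTokenVerifier
import org.keycloak.representations.AccessToken
import org.springframework.stereotype.Service

@Service
class AuthService(private val keycloakDeployment: KeycloakDeployment) {
    // Verifies the raw JWT from the STOMP header and returns the parsed token
    fun verifyToken(jwtToken: String?): AccessToken {
        requireNotNull(jwtToken) { "Missing Authorization header" }
        return AdapterTokenVerifier.verifyToken(jwtToken, keycloakDeployment)
    }
}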
WebsocketUserSessionStore is just a map linking session_id to user_id. It may look like the following code. Remember to handle concurrent access, of course.
import java.util.concurrent.locks.ReentrantLock
import kotlin.concurrent.withLock

@Component
class WebsocketUserSessionStore {

    private val lock = ReentrantLock()
    private val store = HashMap<String, Long>()

    fun add(sessionId: String, userId: Long) = lock.withLock {
        store.compute(sessionId) { _, _ -> userId }
    }

    fun remove(sessionId: String) = lock.withLock {
        store.remove(sessionId)
    }

    fun remove(userId: Long) = lock.withLock {
        store.values.remove(userId)
    }

    // Used by the publisher below to find live sessions for the given users
    fun getSessionIds(userIds: Collection<Long>): List<String> = lock.withLock {
        store.filterValues { it in userIds }.keys.toList()
    }
}
Sending the notification
So, event A was raised somewhere inside the business logic. Let's implement a WebSocket publisher.
@Component
class WebsocketPublisher(
    private val messagingTemplate: SimpMessagingTemplate,
    private val objectMapper: ObjectMapper,
    private val sessionStore: WebsocketUserSessionStore,
    private val userNotificationRepository: UserNotificationRepository
) {

    suspend fun publish(eventType: EventType, eventData: Any) {
        val userIds = userNotificationRepository.getUserSubscribedTo(eventType)
        val sessionIds = sessionStore.getSessionIds(userIds)
        sessionIds.forEach {
            messagingTemplate.convertAndSendToUser(
                it,
                "/queue/notification",
                objectMapper.writeValueAsString(eventData),
                it.toMessageHeaders()
            )
        }
    }

    private fun String.toMessageHeaders(): MessageHeaders {
        val headerAccessor = SimpMessageHeaderAccessor.create(SimpMessageType.MESSAGE)
        headerAccessor.sessionId = this
        headerAccessor.setLeaveMutable(true)
        return headerAccessor.messageHeaders
    }
}
EventType is an enumeration of event types the system has.
UserNotificationRepository is just part of the data persistence layer (a Hibernate/jOOQ repository, MyBatis mapper, or similar). The getUserSubscribedTo function should do something like select user_id from user_subscription where event_type_id = X.
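A minimal sketch of both pieces, assuming a plain JdbcTemplate implementation (the enum values and their numeric ids are made up):

import org.springframework.jdbc.core.JdbcTemplate
import org.springframework.stereotype.Repository

// Hypothetical event types; yours will differ
enum class EventType(val id: Long) {
    ORDER_CREATED(1),
    ORDER_COMPLETED(2)
}

@Repository
class UserNotificationRepository(private val jdbcTemplate: JdbcTemplate) {
    // Returns the ids of all users subscribed to the given event type
    fun getUserSubscribedTo(eventType: EventType): List<Long> =
        jdbcTemplate.queryForList(
            "select user_id from user_subscription where event_type_id = ?",
            Long::class.javaObjectType,
            eventType.id
        )
}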
The rest of the code is pretty straightforward. For the given userIds it is possible to obtain the live WebSocket sessions, and then convertAndSendToUser is called for every session.
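And a quick example of triggering the publisher from business logic (the service and payload below are invented for illustration):

import org.springframework.stereotype.Service

@Service
class OrderService(private val websocketPublisher: WebsocketPublisher) {
    // Hypothetical business operation that ends with a push notification
    suspend fun completeOrder(orderId: Long) {
        // ... actual business logic ...
        websocketPublisher.publish(EventType.ORDER_COMPLETED, OrderCompletedPayload(orderId))
    }
}

// Invented payload type; any object serializable by ObjectMapper works
data class OrderCompletedPayload(val orderId: Long)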
Related
I have created a Java Lambda function within the same VPC and security group as the MSK cluster. But when the Lambda executes the code, CloudWatch shows:
org.apache.kafka.common.errors.TimeoutException
My Java topic-creation code looks like this:
public String handleRequest(SQSEvent input, Context context) {
    LambdaLogger logger = context.getLogger();
    if (bootStrapServer == null) {
        System.out.println("missing boot strap server env var");
        return "Error, bootStrapServer env var missing";
    }
    Properties props = new Properties();
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootStrapServer);
    props.put(AdminClientConfig.CLIENT_ID_CONFIG, "java-data-screaming-demo-lambda");
    props.put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "PLAINTEXT");
    try {
        this.createTopic("TestLambdaTopic", props, logger);
    } catch (Exception e) {
        logger.log("err in creating topic: " + gson.toJson(e));
    }
    return "Ok";
}
public void createTopic(String topicName, Properties properties, LambdaLogger logger) throws Exception {
    try (Admin admin = Admin.create(properties)) {
        int partitions = 1;
        short replicationFactor = 2;
        NewTopic newTopic = new NewTopic(topicName, partitions, replicationFactor);
        List<NewTopic> topics = new ArrayList<NewTopic>();
        topics.add(newTopic);
        CreateTopicsResult result = admin.createTopics(topics);
        // get the async result for the new topic creation
        KafkaFuture<Void> future = result.values().get(topicName);
        // call get() to block until topic creation has completed or failed
        future.get();
        if (future.isDone()) {
            logger.log("future is done");
        }
        logger.log("what is result from create topics: " + gson.toJson(result));
    }
}
Finally I figured it out: it was the VPC. I used the company's production VPC, and its subnets or security group settings had a problem.
What is weird is that an EC2 client deployed in that VPC can access the cluster just fine, following the AWS MSK tutorial; that EC2 client can create topics and send and receive messages.
But a Lambda function deployed in the same VPC somehow does not work. I will update here if I find out why later. Currently I use the default VPC, which doesn't have many security settings, and it works: Lambda -> MSK cluster can create a topic.
I'm trying to get messages from Azure Service Bus via a Java application. I created the necessary client config and, for example, there was a successful connection through ManagementClient:
@Bean
public ClientSettings getMessageReceiver() throws ServiceBusException, InterruptedException {
    AzureTokenCredentials azureTokenCredentials = new ApplicationTokenCredentials(
        "clientID",
        "domain",
        "secret",
        AzureEnvironment.AZURE
    );
    TokenProvider tokenProvider = TokenProvider.createAzureActiveDirectoryTokenProvider(
        new AzureAuthentication(azureTokenCredentials),
        AzureEnvironment.AZURE.activeDirectoryEndpoint(),
        null
    );
    ClientSettings clientSettings = new ClientSettings(tokenProvider,
        RetryPolicy.getDefault(),
        Duration.ofSeconds(30),
        TransportType.AMQP);
    return clientSettings;
}
ManagementClient managementClient =
    new ManagementClient(Util.convertNamespaceToEndPointURI("namespace"), clientSettings);
managementClient.getTopics();
But when I try to get messages from a particular topic:
SubscriptionClient subscriptionClient = new SubscriptionClient("namespace", "events/subscriptions/subscription", clientSettings, ReceiveMode.PEEKLOCK);
I get this error message:
It is not possible for an entity that requires sessions to create a non-sessionful message receiver.
What additional steps should be provided?
You have enabled sessions (disabled by default) when creating your topic subscription. If you do not need message sessions, recreate the subscription with 'requires session' disabled (NOTE: you cannot change that property once a subscription is created).
Or, if you really need message sessions, update your code as below to receive a session first and then receive messages from the received session. All the code samples can be found here, and the session sample specifically here.
// The connection string value can be obtained by:
// 1. Going to your Service Bus namespace in Azure Portal.
// 2. Go to "Shared access policies"
// 3. Copy the connection string for the "RootManageSharedAccessKey" policy.
String connectionString = "Endpoint={fully-qualified-namespace};SharedAccessKeyName={policy-name};"
    + "SharedAccessKey={key}";

// Create a receiver.
// "<<topic-name>>" will be the name of the Service Bus topic you created inside the Service Bus namespace.
// "<<subscription-name>>" will be the name of the session-enabled subscription.
ServiceBusReceiverAsyncClient receiver = new ServiceBusClientBuilder()
    .connectionString(connectionString)
    .sessionReceiver()
    .receiveMode(ReceiveMode.PEEK_LOCK)
    .topicName("<<topic-name>>")
    .subscriptionName("<<subscription-name>>")
    .buildAsyncClient();

Disposable subscription = receiver.receiveMessages()
    .flatMap(context -> {
        if (context.hasError()) {
            System.out.printf("An error occurred in session %s. Error: %s%n",
                context.getSessionId(), context.getThrowable());
            return Mono.empty();
        }
        System.out.println("Processing message from session: " + context.getSessionId());
        // Process message
        return receiver.complete(context.getMessage());
    }).subscribe(aVoid -> {
    }, error -> System.err.println("Error occurred: " + error));

// Subscribe is not a blocking call so we sleep here so the program does not end.
TimeUnit.SECONDS.sleep(60);

// Disposing of the subscription will cancel the receive() operation.
subscription.dispose();

// Close the receiver.
receiver.close();
I have the following code snippet that is supposed to run in an AWS Lambda function:
AWSSecretsManager client = AWSSecretsManagerClientBuilder.standard().withRegion(AWS_REGION).build();
GetSecretValueRequest getSecretValueRequest = new GetSecretValueRequest().withSecretId(SECRET_NAME);
GetSecretValueResult secretValue = client.getSecretValue(getSecretValueRequest);
As the Lambda function is going to run in the same VPC as the Secrets Manager, I don't have to provide credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) for it.
I use Localstack with Testcontainers for integration tests and set up the secret in the test setup like this:
AWSSecretsManager secretsManager = AWSSecretsManagerClientBuilder.standard()
    .withEndpointConfiguration(secretsmanager.getEndpointConfiguration(SECRETSMANAGER))
    .withCredentials(secretsmanager.getDefaultCredentialsProvider())
    .build();

String secretString = "{'engine':'mysql','port':" + mysql.getMappedPort(3306) + ",'host':'" + mysql.getContainerIpAddress() + "'}";

CreateSecretRequest request = new CreateSecretRequest().withName("aurora")
    .withSecretString(secretString)
    .withRequestCredentialsProvider(secretsmanager.getDefaultCredentialsProvider());

secretsManager.createSecret(request);
Now the test crashes with an error:
com.amazonaws.services.secretsmanager.model.AWSSecretsManagerException:
The security token included in the request is invalid.
(Service: AWSSecretsManager; Status Code: 400; Error Code: UnrecognizedClientException; Request ID: ...)
Here is also the definition of the LocalStack container used in the test:
@ClassRule
public static LocalStackContainer secretsmanager = new LocalStackContainer("0.10.4")
    .withServices(LocalStackContainer.Service.SECRETSMANAGER)
    .withEnv("DEFAULT_REGION", "eu-west-1")
    .withExposedPorts(4584);
How could I configure the LocalStackContainer to accept requests without any credentials validation going on?
Assuming you are a reasonable Java developer who prefers Spring Boot Test and JUnit 5 over the alternatives, @DynamicPropertySource can be quite handy here:
private static final LocalStackContainer LOCALSTACK = ...;

@DynamicPropertySource
static void setCredentials(DynamicPropertyRegistry registry) {
    var credentials = LOCALSTACK.getDefaultCredentialsProvider().getCredentials();
    registry.add("cloud.aws.region.static", () -> "eu-central-1");
    registry.add("cloud.aws.credentials.access-key", () -> credentials.getAWSAccessKeyId());
    registry.add("cloud.aws.credentials.secret-key", () -> credentials.getAWSSecretKey());
    registry.add("cloud.aws.s3.endpoint", () -> LOCALSTACK.getEndpointOverride(S3));
}
Also, please double-check that you've overridden the endpoints you rely on (S3 in my example); otherwise you may hit the real AWS API instead of the containerized one.
I have a Spring Boot RESTful API to receive and send SMS to clients. My application connects to our local SMS server and receives and pushes SMS to clients via mobile operators. My application works well, but I wanted to optimize it by implementing caching. I am using Spring Boot's simple cache. I face some challenges when creating a new SMS.
All SMSes sent/received are in the form of conversations (per ticket) and have clients attached to them. So I had difficulty saving a client into the cache. Below is the createClient() snippet:
@Transactional
@Caching(evict = {
    @CacheEvict("allClientsPage"),
    @CacheEvict("countClients")
}, put = {
    @CachePut(value = "clients", key = "#result.id", unless = "#result != null"),
    @CachePut(value = "clientsByPhone", key = "#result.phoneNumber", unless = "#result != null")
})
public Client create(Client client) {
    Client c = new Client();
    if (client.getName() != null) c.setName(client.getName().trim());
    c.setPhoneNumber(client.getPhoneNumber().trim());
    /**---***/
    c.setCreatedAt(new Date());
    return clientRepository.save(c);
}
When I tried creating a new client, a
org.springframework.expression.spel.SpelEvaluationException: EL1007E: Property or field 'id' cannot be found on null
is thrown.
Any assistance would be greatly appreciated.
Instead of using unless = "#result != null", use condition = "#result != null".
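Applied to the snippet above, the corrected annotations would look like this (shown in Kotlin for consistency with the rest of this page; the same attribute change applies in Java, and the placeholder types stand in for the question's real ones):

import org.springframework.cache.annotation.CacheEvict
import org.springframework.cache.annotation.CachePut
import org.springframework.cache.annotation.Caching
import org.springframework.stereotype.Service
import org.springframework.transaction.annotation.Transactional

// Placeholders standing in for the question's real types
class Client(var id: Long? = null, var phoneNumber: String = "")
interface ClientRepository { fun save(client: Client): Client }

@Service
class ClientService(private val clientRepository: ClientRepository) {

    @Transactional
    @Caching(
        evict = [
            CacheEvict("allClientsPage"),
            CacheEvict("countClients")
        ],
        put = [
            // For @CachePut, condition is evaluated after the method returns,
            // so it can refer to #result: cache only when the result is non-null
            CachePut(value = ["clients"], key = "#result.id", condition = "#result != null"),
            CachePut(value = ["clientsByPhone"], key = "#result.phoneNumber", condition = "#result != null")
        ]
    )
    fun create(client: Client): Client = clientRepository.save(client)
}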
I am learning Akka remoting, referring to the book Learning Akka.
Using a limited network, I can't use sbt (I can't configure the proxy well).
First, I create a project for an Akka server with this application.conf:
akka {
  actor {
    provider = remote
  }
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2552
    }
  }
}
and the console shows
Remoting now listens on addresses: [akka.tcp://akkademy@127.0.0.1:2552]
The second project is the client, which has a JClient class:
public class JClient {
    private static final int TIMEOUT = 2000;
    private final ActorSystem system = ActorSystem.create("LocalSystem");
    private final ActorSelection remoteDb;

    public JClient(String remoteAddress) {
        remoteDb = system.actorSelection("akka.tcp://LocalSystem@" + remoteAddress + "/user/akkademy-db");
    }

    public CompletionStage<Object> set(String key, Object value) {
        return toJava(ask(remoteDb, new SetRequest(key, value), TIMEOUT));
    }

    public CompletionStage<Object> get(String key) {
        return toJava(ask(remoteDb, new GetRequest(key), TIMEOUT));
    }
}
I pass the value "127.0.0.1:2552" to remoteAddress, calling the set/get methods, and encounter the error:
java.util.concurrent.ExecutionException: akka.pattern.AskTimeoutException: Ask timed out on [ActorSelection[Anchor(akka://akkademy/deadLetters), Path(/user/.*)]] after [2000 ms]. Sender[null] sent message of type "javah.GetRequest".
Your client code to obtain an ActorSelection to the remote actor is incorrect. Instead of "LocalSystem", which is the name of the client's actor system, use "akkademy", the name of the server's actor system. Change the JClient constructor to the following:
public JClient(String remoteAddress) {
    remoteDb = system.actorSelection("akka.tcp://akkademy@" + remoteAddress + "/user/akkademy-db");
}
In actorSelection, the selector should be a string of the format akka.tcp://${remoteActorSystemName}@${remoteAddress}/user/${actorPath}. In the snippet you posted, it looks like you were using LocalSystem as ${remoteActorSystemName} instead of the remote actor system's name.
Let me know if switching it to the remote system name works; if not, can you post the full code you are using, or a link to it?