Microservices communication with a Spring gateway - Java

I'm very new to Spring Boot and I'm trying to explore this world.
I created three microservices that communicate with each other. Everything seems to work except the Spring gateway that I just added.
The API call returns:
Error: Socket hang up
This is the configuration that I made, but I'm sure it is not 100% correct. Can you help me find the bad config?
This is the docker-compose:
version: '3.4'

x-common-variables: &common-variables
  DATASOURCE_USER: ${DB_USER}
  DATASOURCE_PASSWORD: ${DB_PASSWORD}
  DATASOURCE_PORT: ${DB_PORT}

services:
  apigateway:
    build:
      context: .
      dockerfile: APIgateway/Dockerfile
    ports:
      - "4444:4444"
    restart: always
  paymysqldb:
    container_name: paymysqldb
    image: mysql
    ports:
      - "3313:3306"
    environment:
      - MYSQL_DATABASE=${DB_DATABASE_PAY}
      - MYSQL_USER=${DB_USER}
      - MYSQL_PASSWORD=${DB_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DB_ROOT_PASSWORD}
    volumes:
      - paystorage:/var/lib/mysql
  usermysqldb:
    container_name: usermysqldb
    image: mysql
    ports:
      - "3311:3306"
    environment:
      - MYSQL_DATABASE=${DB_DATABASE_USER}
      - MYSQL_USER=${DB_USER}
      - MYSQL_PASSWORD=${DB_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DB_ROOT_PASSWORD}
    volumes:
      - userstorage:/var/lib/mysql
  catalogmysqldb:
    container_name: catalogmysqldb
    image: mysql
    ports:
      - "3312:3306"
    environment:
      - MYSQL_DATABASE=${DB_DATABASE_CATALOG}
      - MYSQL_USER=${DB_USER}
      - MYSQL_PASSWORD=${DB_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DB_ROOT_PASSWORD}
    volumes:
      - catalogstorage:/var/lib/mysql
  paymanager:
    container_name: paymanager
    image: arausa/payimage
    build:
      context: .
      dockerfile: MicroServices/PaymentManager/Dockerfile
    depends_on:
      - paymysqldb
    ports:
      - "3333:3333"
    restart: always
    environment:
      <<: *common-variables
      PM_DATASOURCE_HOST: ${DB_HOST_PAY}
      PM_DATASOURCE_NAME: ${DB_DATABASE_PAY}
  usermanager:
    container_name: usermanager
    image: arausa/userimage
    build:
      context: .
      dockerfile: MicroServices/UserManager/Dockerfile
    depends_on:
      - usermysqldb
    ports:
      - "1111:1111"
    restart: always
    environment:
      <<: *common-variables
      UM_DATASOURCE_HOST: ${DB_HOST_USER}
      UM_DATASOURCE_NAME: ${DB_DATABASE_USER}
    expose:
      - "1111"
  catalogmanager:
    container_name: catalogmanager
    image: arausa/catalogimage
    build:
      context: .
      dockerfile: MicroServices/CatalogManager/Dockerfile
    depends_on:
      - catalogmysqldb
    ports:
      - "2222:2222"
    restart: always
    environment:
      <<: *common-variables
      CM_DATASOURCE_HOST: ${DB_HOST_CATALOG}
      CM_DATASOURCE_NAME: ${DB_DATABASE_CATALOG}
  # Kafka uses ZooKeeper to keep track of brokers, network topology, and synchronization info
  zookeeper:
    image: wurstmeister/zookeeper
  # identifies the Kafka broker
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092" # default port for the Kafka broker
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 # tells Kafka where ZooKeeper is running

volumes:
  userstorage:
  catalogstorage:
  paystorage:
This is the API gateway class:
package com.example.apigateway;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
@EnableDiscoveryClient
public class APIgatewayApplication {

    public static void main(String[] args) {
        SpringApplication.run(APIgatewayApplication.class, args);
    }

    @Bean
    public RouteLocator myRoutes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route(p -> p
                        .path("/user/**")
                        .uri("http://usermanager:1111"))
                .route(p -> p
                        .path("/catalog/**")
                        .uri("http://catalogmanager:2222"))
                .route(p -> p
                        .path("/payment/**")
                        .uri("http://paymanager:3333"))
                .build();
    }
}
This is the application.properties:
spring.application.name=apigateway
server.port=4444
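As an aside, the same routes can also be declared in configuration instead of a Java bean. A sketch of the equivalent application.properties entries for the first route, using standard Spring Cloud Gateway property syntax (shown for illustration only; with the bean above these are not needed):
spring.cloud.gateway.routes[0].id=user-route
spring.cloud.gateway.routes[0].uri=http://usermanager:1111
spring.cloud.gateway.routes[0].predicates[0]=Path=/user/**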
And finally this is the .env, even though I don't think it is relevant to this problem:
DB_DATABASE_PAY=PayDB
DB_HOST_PAY=paymysqldb
DB_DATABASE_USER=UserDB
DB_HOST_USER=usermysqldb
DB_DATABASE_CATALOG=CatalogDB
DB_HOST_CATALOG=catalogmysqldb
DB_USER=db_user
DB_PASSWORD=ale2022
DB_ROOT_PASSWORD=user
DB_PORT=3306
My bad. I forgot to rebuild the image of the API gateway.
The error is now:
500 Server Error for HTTP POST "/user/addUser"
apigateway_1 |
apigateway_1 | io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: usermanager/172.18.0.10:1111
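For what it's worth, "Connection refused" at an address Docker has already resolved (usermanager/172.18.0.10) usually means nothing is listening on that port inside the usermanager container yet: the app may still be starting, may be crash-looping (restart: always can hide this; check docker-compose logs usermanager), or may be listening on a different port than the one the gateway targets. A minimal sanity check, assuming usermanager is a Spring Boot service (its application.properties is not shown in the post, so this snippet is hypothetical):
# usermanager's application.properties (hypothetical) - the port must match both
# the compose mapping "1111:1111" and the gateway route http://usermanager:1111
server.port=1111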

Related

MongoDb init-mongo.sh Authentication failed

I'm developing a Spring Boot application that is a group of microservices, and I run them as Docker containers. I'm using MongoDB as my database. I create a root user and a regular user when creating the Mongo container using the init-mongo.sh and stage_mongo.env files, then I try to connect to the database from the other microservices using the stage_mongo_auth.env file. When I connect as the root user everything works, but when I connect as the regular user I get an authentication error.
Error:
com.mongodb.MongoCommandException: Command failed with error 18 (AuthenticationFailed): 'Authentication failed.' on server mongodb:27017. The full response is {"ok": 0.0, "errmsg": "Authentication failed.", "code": 18, "codeName": "AuthenticationFailed"}
    at com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:198) ~[mongodb-driver-core-4.6.0.jar!/:na]
    at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:413) ~[mongodb-driver-core-4.6.0.jar!/:na]
    at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:337) ~[mongodb-driver-core-4.6.0.jar!/:na]
    at com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:101) ~[mongodb-driver-core-4.6.0.jar!/:na]
    at com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:45) ~[mongodb-driver-core-4.6.0.jar!/:na]
    at com.mongodb.internal.connection.SaslAuthenticator.sendSaslStart(SaslAuthenticator.java:230) ~[mongodb-driver-core-4.6.0.jar!/:na]
    at com.mongodb.internal.connection.SaslAuthenticator.getNextSaslResponse(SaslAuthenticator.java:137) ~[mongodb-driver-core-4.6.0.jar!/:na]
docker-compose.yaml
version: '3.3'
services:
  mongodb:
    image: mongo:6.0.2
    restart: unless-stopped
    env_file:
      - ../config/stage_mongo.env
    volumes:
      - ../mongodb/db:/data/db
      - ./init-mongo.sh:/docker-entrypoint-initdb.d/init-mongo.sh
    ports:
      - 30430:27017
    deploy:
      resources:
        limits:
          cpus: '4.0'
          memory: 2GB
    logging:
      driver: "json-file"
      options:
        tag: "mongodb"
        max-size: 256m
  api:
    image: amazoncorretto:17.0.3-alpine
    depends_on:
      - mongodb
    restart: unless-stopped
    env_file:
      - ../config/stage_mongo_auth.env
    volumes:
      - ./java/api-0.0.1-SNAPSHOT.jar:/gjava/java.jar
      - ../files:/files
    environment:
      spring_data_mongodb_host: mongodb
    command: /bin/sh -c "cd /gjava && chmod +x /gjava/*.jar && java -Xmx2g -Dspring.profiles.active=dev -jar /gjava/java.jar"
    ports:
      - 30429:30329
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 2GB
    logging:
      driver: "json-file"
      options:
        tag: "api"
        max-size: 256m
init-mongo.sh
mongo -- "$MONGO_INITDB_DATABASE" <<EOF
var rootUser = '$MONGO_INITDB_ROOT_USERNAME';
var rootPassword = '$MONGO_INITDB_ROOT_PASSWORD';
var admin = db.getSiblingDB('admin');
admin.auth(rootUser, rootPassword);
var user = '$MONGO_INITDB_USERNAME';
var passwd = '$MONGO_INITDB_PASSWORD';
db.createUser({user: user, pwd: passwd, roles: ["readWrite"]});
EOF
stage_mongo.env
MONGO_INITDB_ROOT_USERNAME=someRootName
MONGO_INITDB_ROOT_PASSWORD=someRootPassword
MONGO_INITDB_USERNAME=someName
MONGO_INITDB_PASSWORD=somePassword
MONGO_INITDB_DATABASE=someDatabaseName
stage_mongo_auth.env
spring_data_mongodb_username=someName
spring_data_mongodb_password=somePassword
I've looked through my code several times but can't find the reason for this error. I've also searched the internet for answers, but haven't found anything either.
I will be grateful for any help.
Update 1
I found the reason why some login credentials work and others don't: the commands from init-mongo.sh do not run. I removed the file entirely and authentication against MongoDB behaved exactly the same way.
I've tried different ways of entering the commands, like this:
mongo <<EOF
var rootUser = "${MONGO_INITDB_ROOT_USERNAME}";
var rootPassword = "${MONGO_INITDB_ROOT_PASSWORD}";
db.getSiblingDB('admin').auth(rootUser, rootPassword);
use ${MONGO_INITDB_DATABASE}
db.createCollection("someCollectionName")
use admin
db.createUser(
  {
    user: "${MONGO_INITDB_USERNAME}",
    pwd: "${MONGO_INITDB_PASSWORD}",
    roles: [ { role: "readWrite", db: "${MONGO_INITDB_DATABASE}" } ]
  }
)
EOF
I've tried adding the :ro suffix in docker-compose:
volumes:
  - ../mongodb/db:/data/db
  - ./init-mongo.sh:/docker-entrypoint-initdb.d/init-mongo.sh:ro
but it still doesn't work.
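Two things are worth checking here; both are assumptions based on the official mongo image's documented behaviour rather than anything confirmed in the post. First, scripts in /docker-entrypoint-initdb.d run only when /data/db is empty, and ../mongodb/db is a bind mount that already contains data after the first start, so init-mongo.sh is silently skipped on every later run. Second, the mongo:6.0.2 image ships only mongosh; the legacy mongo shell was removed in MongoDB 6.0, so a script that invokes mongo fails even on a fresh data directory. A minimal sketch of the script rewritten for mongosh (clear ../mongodb/db first so the init phase runs at all):
# init-mongo.sh - runs only on an empty /data/db; mongo:6 has mongosh, not mongo
mongosh "$MONGO_INITDB_DATABASE" <<EOF
db.getSiblingDB('admin').auth('$MONGO_INITDB_ROOT_USERNAME', '$MONGO_INITDB_ROOT_PASSWORD');
db.createUser({
  user: '$MONGO_INITDB_USERNAME',
  pwd: '$MONGO_INITDB_PASSWORD',
  roles: [ { role: 'readWrite', db: '$MONGO_INITDB_DATABASE' } ]
});
EOF
Note that a user created this way lives in $MONGO_INITDB_DATABASE, so the Spring side may also need spring.data.mongodb.authentication-database pointed at that database rather than admin.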

Unable to establish connection with elasticsearch 8.1 (java)

I have Elasticsearch 8.1 running in Docker with this docker-compose file:
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.1.0
    container_name: es-node
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    volumes:
      - ./elastic-data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    cap_add:
      - IPC_LOCK
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
  kibana:
    image: docker.elastic.co/kibana/kibana:8.1.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOST=http://localhost:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
I'm trying to make a simple GET request to the ES cluster using org.elasticsearch.client.RestClient.
Request:
Request request = new Request("GET", "_cluster/health");
try {
    return restClient.performRequest(request).toString();
} catch (IOException e) {
    throw new RuntimeException(e);
}
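Incidentally, Response.toString() only prints a summary of the response object, not the body. If the goal is the actual cluster-health JSON, the entity has to be read out; a sketch using org.apache.http.util.EntityUtils, which comes with the low-level client's Apache HTTP dependencies:
Request request = new Request("GET", "_cluster/health");
try {
    Response response = restClient.performRequest(request);
    return EntityUtils.toString(response.getEntity()); // the raw JSON body
} catch (IOException e) {
    throw new RuntimeException(e);
}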
Rest client initialisation:
var hosts = buildClusterHosts(transportAddresses);
restClient = RestClient.builder(hosts).build();
if (isElasticSniffEnabled) {
    sniffer = Sniffer.builder(restClient).build();
}
var esTransport = new RestClientTransport(restClient, new JacksonJsonpMapper());
elasticsearchClient = new ElasticsearchClient(esTransport);
Main method:
var es = ElasticEightClient.builder()
        .transportAddresses("localhost:9200")
        .isElasticSniffEnabled(true)
        .build();
System.out.println("Started elasticsearch with health: " + es.getHealth());
The buildClusterHosts() method correctly builds an array of HttpHost (in this case only one) and provides it to the REST client builder.
In theory this should be enough, but I keep getting Caused by: java.net.ConnectException: Timeout connecting to [/172.20.0.2:9200] and I'm not sure why.
Tl;dr
It seems you are confusing the transport port and the REST API port of Elasticsearch.
To fix
You will first need to expose the transport-layer port, which is 9300 by default:
services:
  elasticsearch:
    ports:
      - 9300:9300
Then update the main method so the transport addresses point at the transport port:
var es = ElasticEightClient.builder()
        .transportAddresses("localhost:9300")
        .isElasticSniffEnabled(true)
        .build();
System.out.println("Started elasticsearch with health: " + es.getHealth());
Figured out what the problem was. To use the Sniffer you need to add http.publish_host=localhost as an environment variable in the docker-compose file.
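For concreteness, a sketch of where that setting goes in the compose file above (only the environment list changes; everything else stays as posted):
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.1.0
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
      - http.publish_host=localhost # so the Sniffer returns addresses reachable from the host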

Insert Data Into MongoDB Container at the Spring Boot Application initialization

I have to insert some sample data into a collection in MongoDB when the application starts to run. I did the following steps, but it didn't work.
1. Created entrypoint.js under the init_scripts folder:
entrypoint.js
use admin;
db.createUser(
  {
    user: "patient_db",
    pwd: "14292",
    roles: [ { role: "readWrite", db: "patient_db" } ]
  }
);
db.grantRolesToUser( "patient_db", [{ role: "readWrite", db: "patient_db"}]);
2. Created a data.js file in the resources path:
src/main/resources/data.js
use patient_db;
db.createCollection("holiday");
db.holiday.insert({holiday_date:'25-12-2021',holiday_name:'Christmas',created_by:'John Wick',modified_by:'John_Wick',created_date_time:'2021-04-25 04:23:55',modified_date_time:'2021-04-25 04:23:55'});
3. Configured the docker-compose.yml:
docker-compose.yml
version: "3"
services:
patient-service:
image: patient-service:1.0
container_name: patient-service
ports:
- 9090:9090
restart: on-failure
networks:
- patient-mongo
depends_on:
- mongo-db
links:
- mysql-db
mongo-db:
image: mongo:latest
container_name: mongo-db
ports:
- 27017:27017
networks:
- patient-mongo
volumes:
- 'mongodata:/data/db'
- './init_scripts:/docker-entrypoint-initdb.d'
environment:
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=14292
restart: unless-stopped
networks:
patient-mongo:
volumes:
mongodata:
4. Finally, the connection with MongoDB:
properties-dev.yml
spring:
  data:
    mongodb:
      host: mongo-db
      port: 27017
      database: patient_db
This is how I insert the entrypoint code into the mongodb container:
1. Create a .sh file (example.sh).
2. Create the mongo users and the data you want to insert.
example.sh
#!/usr/bin/env bash
echo "Creating mongo users..."
mongo admin --host localhost -u root -p mypass --eval "
db = db.getSiblingDB('patient_db');
db.createUser(
  {
    user: 'patient_db',
    pwd: '14292',
    roles: [ { role: 'readWrite', db: 'patient_db' } ]
  }
);
db.createCollection('holiday');
db.holiday.insert({holiday_date:'25-12-2021',holiday_name:'Christmas',created_by:'John Wick',modified_by:'John_Wick',created_date_time:'2021-04-25 04:23:55',modified_date_time:'2021-04-25 04:23:55'});
"
echo "Mongo users and data created."
3. In docker-compose, mount the script:
volumes:
  - 'mongodata:/data/db'
  - './example.sh:/docker-entrypoint-initdb.d/example.sh'
Maybe it's not the cleanest option, but it works perfectly.
I did it like this because I couldn't get it to work with .js files.
Thanks, @Schwarz54, for your answer. It also works with a .js file:
init_scripts/mongo_init.js
var db = connect("mongodb://admin:14292@127.0.0.1:27017/admin");
db = db.getSiblingDB('patient_db'); /* the 'use' statement is not supported here to switch db */
db.createUser(
  {
    user: "patient_db",
    pwd: "14292",
    roles: [ { role: "readWrite", db: "patient_db" } ]
  }
);
db.createCollection("holiday");
db.holiday.insert({holiday_date:'25-12-2021',holiday_name:'Christmas',created_by:'John Wick',modified_by:'John_Wick',created_date_time:'2021-04-25 04:23:55',modified_date_time:'2021-04-25 04:23:55'});
docker-compose.yml
volumes:
  - 'mongodata:/data/db'
  - './init_scripts:/docker-entrypoint-initdb.d'

Healthcheck not working at all when using docker-compose (my service does not wait for Kafka to be started before launching)

I have three services in my docker-compose:
version: '3.4'
services:
  setup-topics:
    image: 'bitnami/kafka:2'
    hostname: setup-topics
    container_name: setup-topics
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
      ./opt/bitnami/kafka/bin/kafka-topics.sh --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic orders && \
      ./opt/bitnami/kafka/bin/kafka-topics.sh --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic redis'"
    environment:
      KAFKA_BROKER_ID: ignored
      KAFKA_ZOOKEEPER_CONNECT: ignored
    depends_on:
      - kafka
  kafka:
    container_name: kafka
    hostname: kafka
    image: 'bitnami/kafka:2'
    ports:
      - '9092:9092'
      - '29092:29092'
    volumes:
      - 'kafka_data:/opt/kafka'
      - './Ping.jar:/Ping.jar'
    environment:
      - KAFKA_HEAP_OPTS=-Xms1g -Xmx1g
      - KAFKA_JVM_PERFORMANCE_OPTS=-Xms512m -Xmx512M
      - KAFKA_BROKER_ID:1
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,PLAINTEXT_HOST://:29092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka-server:9092,PLAINTEXT_HOST://localhost:29092
    depends_on:
      - zookeeper
    healthcheck:
      test: ["CMD", "java", "-jar", "/Ping.jar", "localhost", "9092"]
      interval: 30s
      timeout: 10s
      retries: 4
  zookeeper:
    container_name: zookeeper
    hostname: zookeeper
    image: 'bitnami/zookeeper:3'
    ports:
      - '2181:2181'
    volumes:
      - 'zookeeper_data:/bitnami'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
      - ZOOKEEPER_CLIENT_PORT=32181
      - ZOOKEEPER_TICK_TIME=2000
And here is the Ping.java file (found in this Stack Overflow answer: Docker-Compose: How to healthcheck OpenJDK:8 container?):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class Ping {
    public static void main(String[] args) {
        // usage: java Ping <host> <port>
        if (args.length != 2) {
            System.exit(-1);
        }
        String host = args[0];
        int port = 0;
        try {
            port = Integer.parseInt(args[1]);
        } catch (NumberFormatException e) {
            e.printStackTrace();
            System.exit(-2);
        }
        // exit 0 (healthy) if a TCP connection can be opened within 10 seconds
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 10 * 1000);
            System.exit(0);
        } catch (IOException e) {
            System.exit(1);
        }
    }
}
Even with depends_on making the setup-topics service depend on Kafka, it does not wait until Kafka has actually started before running and creating the new topics.
I could avoid this step by using:
KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
But for my development purposes I need it to be false, and I need to create the topics one by one.
On top of this, I already tested this command in the healthcheck, which doesn't require a third-party file:
healthcheck:
  test: ["CMD", "bash", "-c", "unset", "JMX_PORT", ";", "/opt/bitnami/kafka/bin/kafka-topics.sh", "--zookeeper", "zookeeper:2181", "--list"]
  interval: 30s
  timeout: 10s
  retries: 4
And finally, here is the error message I get for both attempts:
Waiting for Kafka to be ready...
Error while executing topic command : Replication factor: 1 larger than available brokers: 0.
[2020-06-01 15:06:28,809] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 1 larger than available brokers: 0.
(kafka.admin.TopicCommand$)
I am aware that we could also use a SLEEP command, but that is not robust: if there are performance issues on the server and Kafka takes longer to start, the sleep will be too short and I will get the same error as above again.
I have also heard about kafkacat (though I have not yet found an example of integrating it with docker-compose for this purpose).
I want to keep things basic and use as few third-party tools as possible, which is why I chose a Java file, since the image already has Java installed.
Hope you understand my point of view; thank you in advance for your help.
Not clear why you need a JAR file. This should work just as well:
test: ["CMD", "nc", "-vz", "localhost", "9092"]
The problem is simply that you are not waiting long enough for Kafka before running your commands. depends_on does not wait for the healthcheck to pass, AFAIK.
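For reference, Compose can gate startup on a healthcheck through the long form of depends_on. That condition syntax exists in the 2.x file formats and in the modern Compose Specification (docker compose v2), but not in the 3.x format used here, so it may require changing the version line; a sketch:
services:
  setup-topics:
    image: 'bitnami/kafka:2'
    depends_on:
      kafka:
        condition: service_healthy # start only once the kafka healthcheck passes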

Elasticsearch client can't connect elasticsearch in docker container

I'm trying to use Elasticsearch inside a container from a Java app which is also inside a container. Without Docker containers my app connects to the local Elasticsearch correctly. My docker-compose file:
version: "3.7"
volumes:
postgis:
services:
database:
container_name: database
build:
postgis/
ports:
- 5432:5432
volumes:
- ./postgis:/var/lib/postgresql:rw
restart: on-failure
networks:
- net
application:
depends_on:
- database
- es
container_name: application
build:
application/
ports:
- $LORRYAPP_DEBUG_PORT:8080
volumes:
- ./application:/app:rw
environment:
LORRYAPP_OPTS: $LORRYAPP_OPTS
restart: on-failure
networks:
- net
es:
image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
ports:
- "9200:9200"
- "9300:9300"
environment:
- discovery.type=single-node
networks:
- net
networks:
net:
driver: bridge
Initialising the ES client in the constructor:
public ElasticSearchDao(ObjectMapper mapper) {
    this.esClient = new RestHighLevelClient(RestClient.builder(HttpHost.create("http://localhost:9200")));
    this.mapper = mapper;
}
Stacktrace:
java.net.ConnectException: Connection refused
    at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:788) ~[elasticsearch-rest-client-7.4.0.jar!/:7.4.0]
    at org.elasticsearch.client.RestClient.performRequest(RestClient.java:218) ~[elasticsearch-rest-client-7.4.0.jar!/:7.4.0]
    at org.elasticsearch.client.RestClient.performRequest(RestClient.java:205) ~[elasticsearch-rest-client-7.4.0.jar!/:7.4.0]
    at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1454) ~[elasticsearch-rest-high-level-client-7.4.0.jar!/:7.4.0]
    at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1424) ~[elasticsearch-rest-high-level-client-7.4.0.jar!/:7.4.0]
    at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1394) ~[elasticsearch-rest-high-level-client-7.4.0.jar!/:7.4.0]
    at org.elasticsearch.client.RestHighLevelClient.index(RestHighLevelClient.java:836) ~[elasticsearch-rest-high-level-client-7.4.0.jar!/:7.4.0]
ES is available through the browser. http://0.0.0.0:9200/, http://127.0.0.1:9200/ and http://localhost:9200/ all give this response:
{
  "name" : "254bdb7bcc2a",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "vm537RNGSiG3dW8ag2MDTw",
  "version" : {
    "number" : "7.6.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "aa751e09be0a5072e8570670309b1f12348f023b",
    "build_date" : "2020-02-29T00:15:25.529771Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
I think this could be related to a question I asked a long while ago:
Accessing Elasticsearch Docker instance using NEST
It's targeted at C#, but this is really an issue with containers. What happened in my case was that the client would connect to Docker, obtain the internal IP which is used for inter-container communication only, and then try to use that IP to keep the connection open - which obviously doesn't work.
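Following that reasoning, the likely fix here is to stop targeting localhost, which inside the application container is the application container itself, and address Elasticsearch by its compose service name es, which Docker's embedded DNS resolves on the shared net network. A sketch of the constructor with only the host changed:
public ElasticSearchDao(ObjectMapper mapper) {
    // "es" is the service name from docker-compose; on the compose network it
    // resolves to the Elasticsearch container, unlike "localhost"
    this.esClient = new RestHighLevelClient(RestClient.builder(HttpHost.create("http://es:9200")));
    this.mapper = mapper;
}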
