How can we run multiple Akka nodes on a single PC? Currently I have the following in my application.conf file. For each system I added a different port number, but I can't start more than one instance; the error says "Address already in use: failed to bind".
application.conf file
remotelookup {
  include "common"
  akka {
    remote.server.port = 2500
    cluster.nodename = "n1"
  }
}
Update: by "multiple Akka nodes" I mean several separate standalone server applications, each of which communicates with a remote master node using Akka.
The approach we are using is:
Create different settings in your application.conf for each of the systems:
systemOne {
  akka {
    remote {
      enabled-transports = ["akka.remote.netty.tcp"]
      netty.tcp {
        hostname = ${public-hostname}
        port = 2552
      }
    }
  }
}
systemTwo {
  akka {
    remote {
      enabled-transports = ["akka.remote.netty.tcp"]
      netty.tcp {
        hostname = ${public-hostname}
        port = 2553
      }
    }
  }
}
application.conf is the default config file, so in your settings module add configs for your systems:
object Configs {
  private val root = ConfigFactory.load()
  val one = root.getConfig("systemOne")
  val two = root.getConfig("systemTwo")
}
and then create the systems with these configs:
val systemOne = ActorSystem("SystemName", Configs.one)
val systemTwo = ActorSystem("AnotherSystemName", Configs.two)
Don't forget that the system names must differ.
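The system name matters because it becomes part of every remote actor path. A lookup from systemOne to an actor running in the other system might look like this (a sketch; the actor name someActor and the 127.0.0.1 address are assumptions, not part of the original setup):
val selection = systemOne.actorSelection("akka.tcp://AnotherSystemName@127.0.0.1:2553/user/someActor")
selection ! "ping" // delivered over the remote transport if the path resolves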
If you don't want to hardcode the info into your application.conf, you can do this:
def remoteConfig(hostname: String, port: Int, commonConfig: Config): Config = {
  val configStr = s"""
    |akka.remote.netty.hostname = $hostname
    |akka.remote.netty.port = $port
    """.stripMargin
  ConfigFactory.parseString(configStr).withFallback(commonConfig)
}
Then use it like:
val appConfig = ConfigFactory.load
val sys1 = ActorSystem("sys1", remoteConfig(args(0), args(1).toInt, appConfig))
val sys2 = ActorSystem("sys2", remoteConfig(args(0), args(2).toInt, appConfig))
If you use 0 for the port, Akka will assign a random free port to that ActorSystem.
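For example, reusing the remoteConfig helper above (the hostname here is just an assumption):
val sys3 = ActorSystem("sys3", remoteConfig("127.0.0.1", 0, appConfig)) // Akka binds to a random free port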
The problem was in the port definition. It should be:
remotelookup {
  include "common"
  akka {
    remote.netty.port = 2500
    cluster.nodename = "n1"
  }
}
Otherwise, Akka will use the default port.
I have a docker compose file:
version: '3.3'
services:
  bifrost:
    image: ivorytoast3853/bifrost
    container_name: bifrost-app
    ports:
      - "8084:8084"
  thor:
    image: ivorytoast3853/thor
    container_name: thor-app
    ports:
      - "8085:8084"
  loki:
    image: ivorytoast3853/loki
    container_name: loki-app
    ports:
      - "8086:8084"
Which is meant to test a ZeroMQ app.
Bifrost: Broker
Thor: Server
Loki: Client
I am using the exact code from ZeroMQ's getting-started guide, and when I run it locally without Docker it works (Loki sends messages to Thor through Bifrost).
For reference, the 3 files are:
Loki
try (ZContext context = new ZContext()) {
    ZMQ.Socket requester = context.createSocket(SocketType.REQ);
    boolean didConnect = requester.connect("tcp://0.0.0.0:5559");
    log.info("Loki connected to the bifrost: " + didConnect);
    for (int request_nbr = 0; request_nbr < 10; request_nbr++) {
        requester.send("One", 0);
        String reply = requester.recvStr(0);
        System.out.println("Received reply " + request_nbr + " [" + reply + "]");
    }
}
Thor
try (ZContext context = new ZContext()) {
    ZMQ.Socket responder = context.createSocket(SocketType.REP);
    boolean didConnect = responder.connect("tcp://0.0.0.0:5560");
    log.info("Thor connected to the bifrost: " + didConnect);
    while (!Thread.currentThread().isInterrupted()) {
        String string = responder.recvStr(0);
        System.out.printf("Received request: [%s]\n", string);
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        responder.send("You sent me: " + string);
    }
}
Bifrost
while (true) {
    try (ZContext context = new ZContext()) {
        ZMQ.Socket frontend = context.createSocket(SocketType.ROUTER);
        ZMQ.Socket backend = context.createSocket(SocketType.DEALER);
        frontend.bind("tcp://*:5559");
        backend.bind("tcp://*:5560");
        log.info("Started Bifrost to connect Loki and Thor");
        ZMQ.Poller items = context.createPoller(2);
        items.register(frontend, ZMQ.Poller.POLLIN);
        items.register(backend, ZMQ.Poller.POLLIN);
        boolean more = false;
        byte[] message;
        while (!Thread.currentThread().isInterrupted()) {
            items.poll();
            if (items.pollin(0)) {
                while (true) {
                    message = frontend.recv(0);
                    more = frontend.hasReceiveMore();
                    backend.send(message, more ? ZMQ.SNDMORE : 0);
                    if (!more) {
                        break;
                    }
                }
            }
            if (items.pollin(1)) {
                while (true) {
                    message = backend.recv(0);
                    more = backend.hasReceiveMore();
                    frontend.send(message, more ? ZMQ.SNDMORE : 0);
                    if (!more) {
                        break;
                    }
                }
            }
        }
    }
}
Am I doing something wrong with the Docker compose file? I know Docker compose creates a network automatically...
Thanks!
It turns out I had not internalized some fundamental ideas about Docker and containers.
The Problem
I was trying to connect to: "tcp://0.0.0.0:5560" from Loki/Thor to Bifrost.
Why is that a problem?
It is a problem because, unlike starting all three Spring Boot applications on the same computer (with the same IP), I am starting each application in its OWN Docker container, and each container has its OWN UNIQUE IP. Therefore, I cannot tell Loki/Thor "on this computer (IP), connect to Bifrost", since Bifrost lives at a completely separate IP address.
How did I fix it:
I changed the docker-compose file for Bifrost to contain a network alias:
image: ivorytoast3853/bifrost
container_name: bifrost-app
networks:
  my-net:
    aliases:
      - queue
All this does is let me say: "if I give you the hostname queue, connect to the IP address of the container that the Bifrost application runs in."
Then, all I had to do was change the host:port string in Loki and Thor to the following:
responder.connect("tcp://queue:5560");
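For completeness, a sketch of what the whole docker-compose.yml could look like under these assumptions: the my-net network has to be declared at the top level, and Thor and Loki must join the same network for the queue alias to resolve (image and service names taken from the question; published ports are omitted because the containers only talk to each other over the internal network):
version: '3.3'
services:
  bifrost:
    image: ivorytoast3853/bifrost
    container_name: bifrost-app
    networks:
      my-net:
        aliases:
          - queue
  thor:
    image: ivorytoast3853/thor
    container_name: thor-app
    networks:
      - my-net
  loki:
    image: ivorytoast3853/loki
    container_name: loki-app
    networks:
      - my-net
networks:
  my-net:
    driver: bridge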
Hope this helps anyone who comes across a similar issue (or, as in my case, a similar gap in understanding).
Context
I run spark applications on an Amazon EMR cluster.
These applications are orchestrated by YARN.
From AWS Console, I am able to get YARN application status using the Application History tab of the cluster's detail page. (cf. View Application History)
Expectation / Question
I would like to get the same information (application status), but from a Java or Scala program.
So, is it possible to get the YARN application status from the AWS EMR Java SDK?
In my application, I manage some EMR object instances like:
AmazonElasticMapReduceClient
Cluster
Thanks in advance.
I came upon this because I was looking for a way to get the job status via EMR's "steps" API, but if you're looking to get it via YARN directly, here is some sample code:
// Likely imports for this snippet (not shown in the original): com.typesafe.config.ConfigFactory,
// org.apache.spark.launcher.SparkLauncher, org.json4s._ plus org.json4s.jackson.JsonMethods.parse,
// scalaj.http.Http and an SLF4J Logger.
object DataLoad {

  // Pulls a string field out of the YARN REST response; returns None if the key is missing.
  private def getJsonField(json: JValue, key: String): Option[String] = {
    json \ key match {
      case JNothing => None
      case jval: JValue => Some(jval.values.toString)
    }
  }

  def load(logger: Logger, hiveDatabase: String, hiveTable: String, dw_table_name: String): Unit = {
    val conf = ConfigFactory.load
    val yarnResourceManager = conf.getString("app.yarnResourceManager")
    val sparkExecutors = conf.getString("app.sparkExecutors")
    val sparkHome = conf.getString("app.sparkHome")
    val sparkAppJar = conf.getString("app.sparkAppJar")
    val sparkMainClass = conf.getString("app.sparkMainClass")
    val sparkMaster = conf.getString("app.sparkMaster")
    val sparkDriverMemory = conf.getString("app.sparkDriverMemory")
    val sparkExecutorMemory = conf.getString("app.sparkExecutorMemory")

    // Decide where the job writes to, based on the target table name.
    val destination = if (dw_table_name.contains("s3a://")) "s3" else "sql"

    val spark = new SparkLauncher()
      .setSparkHome(sparkHome)
      .setAppResource(sparkAppJar)
      .setMainClass(sparkMainClass)
      .setMaster(sparkMaster)
      .addAppArgs(hiveDatabase)
      .addAppArgs(hiveTable)
      .addAppArgs(destination)
      .setVerbose(false)
      .setConf("spark.driver.memory", sparkDriverMemory)
      .setConf("spark.executor.memory", sparkExecutorMemory)
      .setConf("spark.executor.cores", sparkExecutors)
      .setConf("spark.executor.instances", sparkExecutors)
      .setConf("spark.driver.maxResultSize", "5g")
      .setConf("spark.sql.broadcastTimeout", "144000")
      .setConf("spark.network.timeout", "144000")
      .startApplication()

    // Poll the launcher handle every 10 seconds until the application reaches a final state,
    // giving up after roughly 8 hours.
    var unknownCounter = 0
    while (!spark.getState.isFinal) {
      println(spark.getState.toString)
      Thread.sleep(10000)
      if (unknownCounter > 3000) {
        throw new IllegalStateException("Spark Job Failed, timeout expired 8 hours")
      }
      unknownCounter += 1
    }
    println(spark.getState.toString)

    val appId: String = spark.getAppId
    println(s"appId: $appId")

    // Ask the YARN ResourceManager REST API for the application's final status (up to 5 attempts).
    var finalState = ""
    var i = 0
    while (i < 5) {
      val response = Http(s"http://$yarnResourceManager/ws/v1/cluster/apps/$appId/").asString
      if (response.code.toString.startsWith("2")) {
        val json = parse(response.body)
        finalState = getJsonField(json \ "app", "finalStatus").getOrElse("")
        i = 55 // break out of the retry loop
      } else {
        i = i + 1
      }
    }

    if (finalState.equalsIgnoreCase("SUCCEEDED")) {
      println("SPARK JOB SUCCEEDED")
    } else {
      throw new IllegalStateException("Spark Job Failed")
    }
  }
}
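If you do want the status through the EMR SDK's steps API instead (the route mentioned at the top of this answer), a minimal sketch with the AWS Java SDK v1 might look like this; the cluster and step IDs are placeholders:
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClientBuilder
import com.amazonaws.services.elasticmapreduce.model.DescribeStepRequest

val emr = AmazonElasticMapReduceClientBuilder.defaultClient()
val stepStatus = emr.describeStep(
    new DescribeStepRequest().withClusterId("j-XXXXXXXXXXXX").withStepId("s-XXXXXXXXXXXX"))
  .getStep.getStatus
println(s"Step state: ${stepStatus.getState}") // e.g. PENDING, RUNNING, COMPLETED, FAILED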
I want to ensure that the Kafka server is running before starting the production and consumption jobs. This is in a Windows environment, and here's my Kafka server code in Eclipse:
Properties properties = new Properties();
properties.setProperty("broker.id", "1");
properties.setProperty("port", "9092");
properties.setProperty("log.dirs", "D://workspace//");
properties.setProperty("zookeeper.connect", "localhost:2181");
Option<String> option = Option.empty();
KafkaConfig config = new KafkaConfig(properties);
KafkaServer kafka = new KafkaServer(config, new CurrentTime(), option);
kafka.startup();
In this case if (kafka != null) is not enough, because it is always true. So is there any way to know that my Kafka server is running and ready for a producer? I need to check this because otherwise it causes the loss of some of the first data packets.
All Kafka brokers must be assigned a broker.id. On startup a broker will create an ephemeral node in ZooKeeper with the path /brokers/ids/$id. As the node is ephemeral, it will be removed as soon as the broker disconnects, e.g. by shutting down.
You can view the list of the ephemeral broker nodes like so:
echo dump | nc localhost 2181 | grep brokers
The ZooKeeper client interface exposes a number of commands; dump lists all the sessions and ephemeral nodes for the cluster.
Note, the above assumes:
You're running ZooKeeper on the default port (2181) on localhost, and that localhost is the leader for the cluster
Your zookeeper.connect Kafka config doesn't specify a chroot env for your Kafka cluster i.e. it's just host:port and not host:port/path
You can install the kafkacat tool on your machine.
For example, on Ubuntu you can install it using
apt-get install kafkacat
Once kafkacat is installed, you can use the following command to connect:
kafkacat -b <your-ip-address>:<kafka-port> -t test-topic
Replace <your-ip-address> with your machine's IP.
<kafka-port> can be replaced by the port on which Kafka is running; normally it is 9092.
If kafkacat is able to make the connection when you run the above command, it means that Kafka is up and running.
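If you only want to verify that the broker answers, without producing to or consuming from a topic, kafkacat's metadata listing mode works too (same placeholders as above):
kafkacat -b <your-ip-address>:<kafka-port> -L
A successful run prints the broker and topic metadata; a connection failure is reported instead.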
I used the AdminClient API.
Properties properties = new Properties();
properties.put("bootstrap.servers", "localhost:9092");
properties.put("connections.max.idle.ms", 10000);
properties.put("request.timeout.ms", 5000);
try (AdminClient client = KafkaAdminClient.create(properties))
{
    ListTopicsResult topics = client.listTopics();
    Set<String> names = topics.names().get();
    if (names.isEmpty())
    {
        // case: if no topic found.
    }
    return true;
}
catch (InterruptedException | ExecutionException e)
{
    // Kafka is not available
}
On Linux, run ps aux | grep kafka and see whether the Kafka properties show up in the results, e.g. /path/to/kafka/server.properties.
Paul's answer is very good, and it is actually how Kafka and ZooKeeper work together from a broker's point of view.
Another easy option to check whether a Kafka server is running is to create a simple KafkaConsumer pointing to the cluster and try some action, for example listTopics(). If the Kafka server is not running, you will get a TimeoutException, which you can handle with a try-catch.
def validateKafkaConnection(kafkaParams: mutable.Map[String, Object]): Unit = {
  val props = new Properties()
  props.put("bootstrap.servers", kafkaParams.get("bootstrap.servers").get.toString)
  props.put("group.id", kafkaParams.get("group.id").get.toString)
  props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  val simpleConsumer = new KafkaConsumer[String, String](props)
  simpleConsumer.listTopics()
}
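Then, as described above, wrap the call in a try-catch and treat a TimeoutException as "broker not reachable" (a sketch; kafkaParams is assumed to carry the usual bootstrap.servers and group.id entries):
import org.apache.kafka.common.errors.TimeoutException

try {
  validateKafkaConnection(kafkaParams)
  println("Kafka broker is reachable")
} catch {
  case _: TimeoutException => println("Kafka is not available")
}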
A good option is to use AdminClient as below before starting to produce or consume messages:
private static final int ADMIN_CLIENT_TIMEOUT_MS = 5000;
try (AdminClient client = AdminClient.create(properties)) {
    client.listTopics(new ListTopicsOptions().timeoutMs(ADMIN_CLIENT_TIMEOUT_MS)).listings().get();
} catch (ExecutionException ex) {
    LOG.error("Kafka is not available, timed out after {} ms", ADMIN_CLIENT_TIMEOUT_MS);
    return;
}
First, you need to create an AdminClient bean:
@Bean
public AdminClient adminClient() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
            StringUtils.arrayToCommaDelimitedString(new Object[]{"your bootstrap server address"}));
    return AdminClient.create(configs);
}
Then, you can use this script:
while (true) {
    Map<String, ConsumerGroupDescription> groupDescriptionMap =
            adminClient.describeConsumerGroups(Collections.singletonList(groupId))
                    .all()
                    .get(10, TimeUnit.SECONDS);
    ConsumerGroupDescription consumerGroupDescription = groupDescriptionMap.get(groupId);
    log.debug("Kafka consumer group ({}) state: {}",
            groupId,
            consumerGroupDescription.state());
    if (consumerGroupDescription.state().equals(ConsumerGroupState.STABLE)) {
        boolean isReady = true;
        for (MemberDescription member : consumerGroupDescription.members()) {
            if (member.assignment() == null || member.assignment().topicPartitions().isEmpty()) {
                isReady = false;
            }
        }
        if (isReady) {
            break;
        }
    }
    log.debug("Kafka consumer group ({}) is not ready. Waiting...", groupId);
    TimeUnit.SECONDS.sleep(1);
}
This loop checks the state of the consumer group every second until the state is STABLE. Because all consumers are then assigned to topic partitions, you can conclude that the server is running and ready.
You can use the code below to check whether any brokers are available, i.e. whether the server is running.
import org.I0Itec.zkclient.ZkClient;

public static boolean isBrokerRunning() {
    boolean flag = false;
    ZkClient zkClient = new ZkClient(endpoint.getZookeeperConnect(), 10000); //, kafka.utils.ZKStringSerializer$.MODULE$);
    if (zkClient != null) {
        int brokersCount = zkClient.countChildren(ZkUtils.BrokerIdsPath());
        if (brokersCount > 0) {
            logger.info("Following Broker(s) {} is/are available on Zookeeper.", zkClient.getChildren(ZkUtils.BrokerIdsPath()));
            flag = true;
        } else {
            logger.error("ERROR: No Broker is available on Zookeeper.");
        }
        zkClient.close();
    }
    return flag;
}
I found an OnError event in the Confluent Kafka client:
consumer.OnError += Consumer_OnError;
private void Consumer_OnError(object sender, Error e)
{
    Debug.Log("connection error: " + e.Reason);
    ConsumerConnectionError(e);
}
And its documentation in code:
//
// Summary:
// Raised on critical errors, e.g. connection failures or all brokers down. Note
// that the client will try to automatically recover from errors - these errors
// should be seen as informational rather than catastrophic
//
// Remarks:
// Executes on the same thread as every other Consumer event handler (except OnLog
// which may be called from an arbitrary thread).
public event EventHandler<Error> OnError;
I am working on a Gradle script where I need to read the local.properties file and use its values in build.gradle. I am doing it in the manner below. When I run the script it is not throwing an error, but it is also not doing anything like creating, deleting, or copying files. I tried printing the value of the variable and it shows the correct value.
Can someone let me know if this is the correct way to do this? I think another way is to define everything in gradle.properties and use it in build.gradle. Can someone let me know how I can access the properties from build.properties in build.gradle?
build.gradle file:
apply plugin: 'java'
// Set the group for publishing
group = 'com.true.test'
/**
 * Initializing GAVC settings
 */
def buildProperties = new Properties()
file("version.properties").withInputStream { stream ->
    buildProperties.load(stream)
}
// If jenkins build, add the jenkins build version to the version. Else add snapshot version to the version.
def env = System.getenv()
if (env["BUILD_NUMBER"]) buildProperties.test+= ".${env["BUILD_NUMBER"]}"
version = buildProperties.test
println "${version}"
// Name is set in the settings.gradle file
group = "com.true.test"
version = buildProperties.test
println "Building ${project.group}:${project.name}:${project.version}"
Properties properties = new Properties()
properties.load(project.file('build.properties').newDataInputStream())
def folderDir = properties.getProperty('build.dir')
def configDir = properties.getProperty('config.dir')
def baseDir = properties.getProperty('base.dir')
def logDir = properties.getProperty('log.dir')
def deployDir = properties.getProperty('deploy.dir')
def testsDir = properties.getProperty('tests.dir')
def packageDir = properties.getProperty('package.dir')
def wrapperDir = properties.getProperty('wrapper.dir')
sourceCompatibility = 1.7
compileJava.options.encoding = 'UTF-8'
repositories {
    maven { url "http://arti.oven.c:9000/release" }
}
task swipe(type: Delete) {
    println "Delete $projectDir/${folderDir}"
    delete "$projectDir/$folderDir"
    delete "$projectDir/$logDir"
    delete "$projectDir/$deployDir"
    delete "$projectDir/$packageDir"
    delete "$projectDir/$testsDir"
    mkdir("$projectDir/${folderDir}")
    mkdir("projectDir/${logDir}")
    mkdir("projectDir/${deployDir}")
    mkdir("projectDir/${packageDir}")
    mkdir("projectDir/${testsDir}")
}
task prepConfigs(type: Copy, overwrite: true, dependsOn: swipe) {
    println "The name of ${projectDir}/${folderDir} and ${projectDir}/${configDir}"
    from('${projectDir}/${folderDir}')
    into('${projectDir}/$configDir}')
    include('*.xml')
}
build.properties file:
# -----------------------------------------------------------------
# General Settings
# -----------------------------------------------------------------
application.name = Admin
project.name = Hello Cool
# -----------------------------------------------------------------
# ant build directories
# -----------------------------------------------------------------
sandbox.dir = ${projectDir}/../..
reno.root.dir=${sandbox.dir}/Reno
ant.dir = ${projectDir}/ant
build.dir = ${ant.dir}/build
log.dir = ${ant.dir}/logs
config.dir = ${ant.dir}/configs
deploy.dir = ${ant.dir}/deploy
static.dir = ${ant.dir}/static
package.dir = ${ant.dir}/package
tests.dir = ${ant.dir}/tests
tests.logs.dir = ${tests.dir}/logs
external.dir = ${sandbox.dir}/FlexCommon/External
external.lib.dir = ${external.dir}/libs
If using the default gradle.properties file, you can access the properties directly from within your build.gradle file:
gradle.properties:
applicationName=Admin
projectName=Hello Cool
build.gradle:
task printProps {
    doFirst {
        println applicationName
        println projectName
    }
}
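Values from gradle.properties can also be overridden ad hoc on the command line with -P, which is handy in CI; for example:
gradle -q printProps -PapplicationName=AdminFromCli
The -P value takes precedence over the one defined in gradle.properties.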
If you need to access a custom file, or access properties which include . in them (as it appears you need to do), you can do the following in your build.gradle file:
def props = new Properties()
file("build.properties").withInputStream { props.load(it) }
task printProps {
    doFirst {
        println props.getProperty("application.name")
        println props.getProperty("project.name")
    }
}
Take a look at this section of the Gradle documentation for more information.
Edit
If you'd like to dynamically set up some of these properties (as mentioned in a comment below), you can create a properties.gradle file (the name isn't important) and require it in your build.gradle script.
properties.gradle:
ext {
    subPath = "some/sub/directory"
    fullPath = "$projectDir/$subPath"
}
build.gradle:
apply from: 'properties.gradle'
// prints the full expanded path
println fullPath
We can use a separate file (config.groovy in my case) to abstract out all the configuration.
In this example, we're using three environments viz.,
dev
test
prod
each of which has the properties serverName, serverPort and resources. The third property, resources, may be the same in multiple environments, so we've abstracted that logic out and overridden it only in the specific environments where necessary:
config.groovy
resources {
    serverName = 'localhost'
    serverPort = '8090'
}
environments {
    dev {
        serverName = 'http://localhost'
        serverPort = '8080'
    }
    test {
        serverName = 'http://www.testserver.com'
        serverPort = '5211'
        resources {
            serverName = 'resources.testserver.com'
        }
    }
    prod {
        serverName = 'http://www.productionserver.com'
        serverPort = '80'
        resources {
            serverName = 'resources.productionserver.com'
            serverPort = '80'
        }
    }
}
Once the properties file is ready, we can use the following in build.gradle to load these settings:
build.gradle
loadProperties()
def loadProperties() {
    def environment = hasProperty('env') ? env : 'dev'
    println "Current Environment: " + environment
    def configFile = file('config.groovy')
    def config = new ConfigSlurper(environment).parse(configFile.toURL())
    project.ext.config = config
}

task printProperties {
    println "serverName: $config.serverName"
    println "serverPort: $config.serverPort"
    println "resources.serverName: $config.resources.serverName"
    println "resources.serverPort: $config.resources.serverPort"
}
Let's run this with different sets of inputs:
gradle -q printProperties
Current Environment: dev
serverName: http://localhost
serverPort: 8080
resources.serverName: localhost
resources.serverPort: 8090
gradle -q -Penv=dev printProperties
Current Environment: dev
serverName: http://localhost
serverPort: 8080
resources.serverName: localhost
resources.serverPort: 8090
gradle -q -Penv=test printProperties
Current Environment: test
serverName: http://www.testserver.com
serverPort: 5211
resources.serverName: resources.testserver.com
resources.serverPort: 8090
gradle -q -Penv=prod printProperties
Current Environment: prod
serverName: http://www.productionserver.com
serverPort: 80
resources.serverName: resources.productionserver.com
resources.serverPort: 80
Another way, in build.gradle:
Add:
classpath 'org.flywaydb:flyway-gradle-plugin:3.1'
And this:
def props = new Properties()
file("src/main/resources/application.properties").withInputStream { props.load(it) }
apply plugin: 'flyway'
flyway {
    url = props.getProperty("spring.datasource.url")
    user = props.getProperty("spring.datasource.username")
    password = props.getProperty("spring.datasource.password")
    schemas = ['db_example']
}
This is for Kotlin DSL (build.gradle.kts):
import java.util.*
// ...
val properties = Properties().apply {
    load(rootProject.file("my-local.properties").reader())
}
val prop = properties["myPropName"]
In Android projects (when applying the android plugin) you can also do this:
import com.android.build.gradle.internal.cxx.configure.gradleLocalProperties
// ...
val properties = gradleLocalProperties(rootDir)
val prop = properties["propName"]
Just had this issue come up today. We found the following worked both locally and in our pipeline:
In build.gradle:
try {
    apply from: 'path/name_of_external_props_file.properties'
} catch (Exception e) {}
This way, when an external props file that shouldn't be committed to Git (as in our case) is not found in the pipeline, this 'apply from:' won't throw an error. In our use case we have a file with a user id and password that should not be committed to Git. Aside from the file-reading problem, we found that the variables we had declared in the external file, maven_user and maven_pass, in fact had to be declared in gradle.properties as well. That is, they simply needed to be mentioned, as in:
projectName=Some_project_name
version=1.x.y
maven_user=
maven_pass=
We also found that in the external file we had to put single-quotes around these values too or Gradle got confused. So the external file looked like this:
maven_user='abc123'
maven_pass='fghifh7435bvibry9y99ghhrhg9539y5398'
instead of this:
maven_user=abc123
maven_pass=fghifh7435bvibry9y99ghhrhg9539y5398
That's all we had to do and we were fine. I hope this may help others.
I'm using spray.io and akka.io on my FreeBSD 1 CPU / 2 GB server and I'm facing threading problems. I started to notice them when I got an OutOfMemory error because of "can't create native thread".
I check Thread.activeCount() regularly and I see it grow enormously.
Currently I use these settings:
Currently I use these settings:
myapp-namespace-akka {
  akka {
    loggers = ["akka.event.Logging$DefaultLogger"]
    loglevel = "DEBUG"
    stdout-loglevel = "DEBUG"
    actor {
      deployment {
        default {
          dispatcher = "nio-dispatcher"
          router = "round-robin"
          nr-of-instances = 1
        }
      }
      debug {
        receive = on
        autoreceive = on
        lifecycle = on
      }
      nio-dispatcher {
        type = "Dispatcher"
        executor = "fork-join-executor"
        fork-join-executor {
          parallelism-min = 8
          parallelism-factor = 1.0
          parallelism-max = 16
          task-peeking-mode = "FIFO"
        }
        shutdown-timeout = 4s
        throughput = 4
        throughput-deadline-time = 0ms
        attempt-teamwork = off
        mailbox-requirement = ""
      }
      aside-dispatcher {
        type = "Dispatcher"
        executor = "fork-join-executor"
        fork-join-executor {
          parallelism-min = 8
          parallelism-factor = 1.0
          parallelism-max = 32
          task-peeking-mode = "FIFO"
        }
        shutdown-timeout = 4s
        throughput = 4
        throughput-deadline-time = 0ms
        attempt-teamwork = on
        mailbox-requirement = ""
      }
    }
  }
}
I want nio-dispatcher to be my default non-blocking (let's say single-threaded) dispatcher, and I execute all my futures (db, network queries) on aside-dispatcher.
I get my contexts throughout the application as follows:
trait Contexts {
  def system: ActorSystem
  def nio: ExecutionContext
  def aside: ExecutionContext
}

object Contexts {
  val Scope = "myapp-namespace-akka"
}

class ContextsImpl(settings: Config) extends Contexts {
  val System = "myapp-namespace-akka"
  val NioDispatcher = "akka.actor.nio-dispatcher"
  val AsideDispatcher = "akka.actor.aside-dispatcher"
  val Settings = settings.getConfig(Contexts.Scope)

  override val system: ActorSystem = ActorSystem(System, Settings)
  override val nio: ExecutionContext = system.dispatchers.lookup(NioDispatcher)
  override val aside: ExecutionContext = system.dispatchers.lookup(AsideDispatcher)
}

// Spray trait mixed into service actors
trait ImplicitAsideContext {
  this: EnvActor =>

  implicit val aside = env.contexts.aside
}
I think I messed up the configs or the implementation; help me out here. Usually I now see thousands of threads in my app until it crashes (I set the FreeBSD per-process thread limit to 5000).
If your app indeed starts so many threads, this can usually be tracked back to blocking behaviour inside ForkJoinPools (a bad, bad thing to do!). I explained the issue in detail in the answer here: blocking blocks, so you may want to read up on it there and verify which threads are being created in your app and why – the ForkJoinPool does not have a static upper limit.
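If rewriting the blocking call sites is not feasible, a common mitigation (a sketch, not a drop-in fix for this exact setup) is to back the dispatcher used for blocking futures with a bounded thread-pool-executor instead of a fork-join pool, so the thread count has a hard ceiling:
myapp-namespace-akka {
  akka {
    actor {
      aside-dispatcher {
        type = "Dispatcher"
        executor = "thread-pool-executor"
        thread-pool-executor {
          core-pool-size-min = 8
          core-pool-size-max = 32   # blocking calls queue up here instead of spawning new threads
        }
        throughput = 1
      }
    }
  }
}
and route every blocking future explicitly to it, as the question already does via ContextsImpl:
import scala.concurrent.{ExecutionContext, Future}

val aside: ExecutionContext = system.dispatchers.lookup("akka.actor.aside-dispatcher")
Future { /* blocking db or network call */ }(aside)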