Avoid Cassandra 2.1 Warning trigger directory does not exist - java

Creating a fresh Cassandra instance and doing a simple insert results in this unexpected warning:
[SharedPool-Worker-1] WARN o.apache.cassandra.utils.FBUtilities - Trigger directory doesn't exist, please create it and try again.
Checking the source, it seems that Cassandra expects a trigger directory (default name 'triggers') to exist.
Since I start a fresh Cassandra every time, I would like to know how I can instruct Cassandra to create the triggers directory itself. I do not want to fiddle with it manually.
[update] Cassandra uses the default main method and is started in user space. Since the directories defined in cassandra.yaml (data, commit log, and saved caches) are created automatically, I wonder where to specify the trigger directory, or how else it is going to be created.
#close screamers
Having an annoying warning in the logs that should not exist in the first place is what I consider a bug, so please allow this question... (no offense, just plain Stack Overflow begging).

As I learned from the code of the FBUtilities.cassandraTriggerDir method, the property "cassandra.triggers_dir" is read before the default trigger directory "triggers" is tried. Setting the property to the correct directory (after creating it) solved the issue.
The main reason for the problem was, first, that the triggers directory did not exist at all, and second, that the Cassandra directory is not part of the classpath. So there was no way Cassandra could detect the trigger directory correctly.
So to summarize: a cassandra.yaml entry is missing for this setting.
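For anyone starting Cassandra embedded the same way, here is a minimal sketch of the workaround, assuming the stock CassandraDaemon entry point (the directory location is just an example):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class EmbeddedCassandraLauncher {
    public static void main(String[] args) throws Exception {
        // Create the triggers directory up front so the warning never fires.
        Path triggersDir = Paths.get("target/cassandra/triggers"); // example location
        Files.createDirectories(triggersDir);

        // FBUtilities.cassandraTriggerDir reads this property before falling back
        // to the default "triggers" directory on the classpath.
        System.setProperty("cassandra.triggers_dir", triggersDir.toAbsolutePath().toString());

        // Hand off to Cassandra's regular entry point.
        org.apache.cassandra.service.CassandraDaemon.main(args);
    }
}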
PS: Thanks Bryce for your help!

Do you have a trigger defined on the table you are inserting into, or in your schema? Or did you upgrade Cassandra from a pre-2.0 version?
In any case, the /triggers directory for 2.1 depends on your install type.
For a tarball install, it should be: {install_location}/conf/triggers
For a packaged install, it should be: /etc/cassandra/triggers

Related

Spark: How to obtain the location of configurations spark is using?

Right now, I am running into exactly the issue described in the thread above. Specifically, spark-submit is attempting to connect to the yarn.resourcemanager at 0.0.0.0/0.0.0.0.
I have checked all of the logs mentioned in the Stack Overflow thread above, and they all seem to be correct. I have also added a yarn.resourcemanager.address=... line to the default settings files at the top of the Spark configuration directories, exported YARN_CONF_DIR, and applied all of the other fixes listed in that thread.
At the bottom of the comments on the top-rated answer, a commenter pointed out that if none of the above fixes work, then Spark is not using the correct configurations.
At this point, I am pretty sure that my Spark install is not using the correct configurations (I did not install it).
How does one go about determining which configurations Spark is using, and how does one change them to the correct ones? (Or maybe I just need to reboot the machine?)
In spark-shell for example, I can do this:
scala> getClass.getClassLoader.getResource("yarn-site.xml")
res1: java.net.URL = file:/etc/spark2/conf.cloudera.spark2_on_yarn/yarn-conf/yarn-site.xml
...and the result shows the exact resolved location of a config file on my current classpath. The same could easily be translated to Java (almost verbatim) if your application is Java-based.
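For example, a minimal Java sketch of the same lookup (the class name is just for illustration):

import java.net.URL;

public class ConfigLocator {
    public static void main(String[] args) {
        // Resolve yarn-site.xml from the current classpath, mirroring the spark-shell call above.
        URL url = ConfigLocator.class.getClassLoader().getResource("yarn-site.xml");
        System.out.println(url != null ? url : "yarn-site.xml is not on the classpath");
    }
}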
You can also try to access the creationSite field on org.apache.spark.sql.SparkSession through a debugger or via reflection. That tells you the class and the place in the code where your Spark session is created, and from there you can trace how org.apache.spark.sql.SparkSession.Builder is called.
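A rough sketch of the reflection route, assuming the Scala val compiles to a private field named creationSite (worth verifying against your Spark version):

import java.lang.reflect.Field;
import org.apache.spark.sql.SparkSession;

public class CreationSiteDump {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .master("local[*]")
                .appName("creation-site-demo")
                .getOrCreate();
        // Assumption: the Scala private val is compiled to a field of this exact name.
        Field f = SparkSession.class.getDeclaredField("creationSite");
        f.setAccessible(true);
        System.out.println(f.get(spark)); // shows where the session was built
        spark.stop();
    }
}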

cassandra 3.5 fails to load trigger class

I am trying to get started with Cassandra triggers, but I cannot get Cassandra to load them. I have built jar files from here and here, and put them under C:\Program Files\DataStax-DDC\apache-cassandra\conf\triggers. I have restarted the DataStax_DDC_Server service (on Windows) and reopened the CQLSH command line, but trying to use the trigger class in a create trigger command gives me only:
ConfigurationException: <ErrorMessage code=2300 [Query invalid because of configuration issue] message="Trigger class 'org.apache.cassandra.triggers.InvertedIndex' doesn't exist">
I checked the jar files, and they include the class files.
The only thing I could find in the Cassandra log files is "Trigger directory doesn't exist, please create it and try again.", but I don't know whether that is relevant.
EDIT: Following the last line shown here, I edited the cassandra.bat file. Now, if I stop the DataStax_DDC_Server service and run the bat file directly, the create trigger command succeeds. Nevertheless, the service seems to be independent of this bat file. The question now is how to apply the same configuration to the service.
After googling creatively, I found a solution. As mentioned here, you need to explicitly set the cassandra.triggers_dir variable, but for the service to pick it up, as explained here, you must configure it in the registry. So the answer is to update the registry key
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Apache Software Foundation\Procrun 2.0\DataStax_DDC_Server\Parameters\Java\Options
and add the line
-Dcassandra.triggers_dir=C:\Program Files\DataStax-DDC\apache-cassandra\conf\triggers
Note that the path must not be enclosed in quotes, or it won't work.
Don't forget to restart the service.
The above solution works on Windows. If you have trouble finding the registry editor on Windows, go to the Start menu and type "regedit"; this opens the Registry Editor window, where you can make the settings described above.

GAE System property rdbms.driver must be set

Exception
System property rdbms.driver must be set
I am currently running my project through the console with:
appengine-java-sdk-*/bin/dev_appserver.sh
I have placed the MySQL driver in appengine-java-sdk-*/lib/impl as well as in war/WEB-INF/lib, which usually solves this issue.
What I tried:
restarted the server (Debian)
tried an older version of the SDK
but still without success. Is it possible that this is due to a caching problem?
It's not enough that the driver is in your lib folder; that only means it's there and ready to be referenced. The thing is, the dev server needs to know about it. On the command line, when you invoke dev_appserver.sh, try the flag -Drdbms.driver=com.mysql.jdbc.Driver, and let me know if that helps.
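For example, assuming the SDK's standard --jvm_flag option and a web app in the war directory:
appengine-java-sdk-*/bin/dev_appserver.sh --jvm_flag=-Drdbms.driver=com.mysql.jdbc.Driver war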

Not found exception while doing svn update

I'm having the following situation:
A configuration file (config.cfg) that gets accessed a lot by different processes.
config.cfg is under version control (SVN).
I develop and test in a staging environment; when everything is working, I go to the server and execute svn up on config.cfg.
The problem is: during svn up, the processes accessing config.cfg throw an exception: "config.cfg" not found.
It seems that svn causes a short period where the file is being replaced and is therefore not accessible to my processes.
Any input on how to solve this issue is very much appreciated.
As suggested by ThisSuitIsBlackNot, the way to go is to use a semaphore file.
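A rough sketch of that idea, assuming the updater touches a lock file (config.cfg.lock, a name chosen here for illustration) before running svn up and deletes it afterwards:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SemaphoreGuardedConfig {
    private static final Path LOCK = Paths.get("config.cfg.lock"); // illustrative name
    private static final Path CONFIG = Paths.get("config.cfg");

    // Readers wait while the lock file exists, i.e. while the update is in progress.
    public static String readConfig() throws IOException, InterruptedException {
        while (Files.exists(LOCK)) {
            Thread.sleep(100);
        }
        return new String(Files.readAllBytes(CONFIG));
    }
}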
Another solution, which just came to mind, is to cache the config file inside the process: if the file is not there, the cached version is used. As "svn update" doesn't take very long, the process will work with the cached version until it needs the config file the next time.
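A minimal sketch of that caching idea, assuming a plain-text config file:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class CachedConfig {
    private final Path path;
    private volatile List<String> cached; // last successfully read contents

    public CachedConfig(Path path) {
        this.path = path;
    }

    // Returns the current file contents, or the cached copy while svn up swaps the file.
    public List<String> read() throws IOException {
        try {
            cached = Files.readAllLines(path);
        } catch (IOException e) {
            if (cached == null) {
                throw e; // nothing cached yet, nothing to fall back to
            }
        }
        return cached;
    }
}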

Luntbuild No modules defined! Error

I am trying to set up Luntbuild 1.6.2 from scratch in our UAT environment. I have created a project, a builder, and a schedule. We use Subversion as source control, so I have specified the repository and path in Luntbuild as well.
But when I trigger the schedule, nothing happens and the system log reads as follows:
com.luntsys.luntbuild.utility.ValidationException: No modules defined!
at com.luntsys.luntbuild.vcs.Vcs.validateModules(Vcs.java:323)
at com.luntsys.luntbuild.vcs.SvnAdaptor.validateModules(SvnAdaptor.java:739)
at com.luntsys.luntbuild.vcs.Vcs.validate(Vcs.java:342)
at com.luntsys.luntbuild.db.Project.validate(Project.java:347)
at com.luntsys.luntbuild.db.Project.validateAtBuildTime(Project.java:363)
at com.luntsys.luntbuild.db.Schedule.validateAtBuildTime(Schedule.java:383)
at com.luntsys.luntbuild.BuildGenerator.execute(BuildGenerator.java:186)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:529)
I don't know what I am missing. Any clues?
I was able to resolve this problem with the help of one of my colleagues.
You have to add the module inside the VCS adapter tab, but you need not define all of its entries. I left them all blank and that resolved the issue.
