Not found exception while doing svn update - java

I'm having the following situation:
A configuration file (config.cfg) gets accessed a lot by different processes.
Config.cfg is under version control (SVN).
I develop and test in a staging environment; when everything is working, I go to the server and execute svn up on config.cfg.
The problem is: during svn up, the processes accessing config.cfg throw an exception: "config.cfg" not found.
It seems that svn causes a short period during which the file is being replaced and is therefore not accessible to my processes.
Any input on how to solve this issue is very much appreciated.

As suggested by ThisSuitIsBlackNot, the way to go is to use a semaphore file.
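A minimal sketch of the reader side of such a semaphore, assuming the updater creates a file named config.cfg.updating before running svn up and deletes it afterwards (the file name and polling interval are illustrative, not from the original post):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class UpdateSemaphore {
    // Hypothetical semaphore file created by the updater before "svn up".
    private static final Path SEMAPHORE = Paths.get("config.cfg.updating");

    // Readers call this before opening config.cfg.
    public static void awaitUpdateFinished() throws InterruptedException {
        while (Files.exists(SEMAPHORE)) {
            Thread.sleep(100); // poll until the updater removes the semaphore
        }
    }
}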
Another solution that just came to mind is to cache the config file in the process: if the file is not there, the cached version is used. As "svn update" doesn't take very long, the process will work with the cached version until the next time it needs the config file.
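A minimal sketch of this caching fallback, assuming the config is a Java properties file (class and field names are illustrative):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

public class CachedConfig {
    private final Path path = Paths.get("config.cfg");
    private Properties cached; // last successfully read version

    public synchronized Properties read() throws IOException {
        try (InputStream in = Files.newInputStream(path)) {
            Properties fresh = new Properties();
            fresh.load(in);
            cached = fresh; // refresh the cache on every successful read
            return fresh;
        } catch (IOException e) {
            if (cached != null) {
                return cached; // file briefly missing during svn update: fall back
            }
            throw e; // nothing cached yet, so propagate
        }
    }
}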


How to make TortoiseSVN create a log file with the current rev number after committing any files?

I need to use the current SVN revision in my Java code.
I use TortoiseSVN, Eclipse, Java 1.8, Windows 7.
I tried 2 things:
1) buildnumber-maven-plugin:
Our connection to the SVN server is way too complicated to connect via the IDE/Maven because of the company's security rules.
2) svn:keywords:
I created a file (revTemp.txt) and inserted an SVN keyword into it: $revision$. The keyword substitution works perfectly, and I can read the file with Java. The content of this file is updated whenever I commit it.
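For reference, a minimal sketch of reading the revision out of such a file, assuming the keyword expands to something like $Revision: 1234 $ (the regex and file handling here are illustrative):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RevisionReader {
    public static int readRevision() throws IOException {
        // revTemp.txt contains the expanded keyword, e.g. "$Revision: 1234 $"
        String content = new String(Files.readAllBytes(Paths.get("revTemp.txt")), StandardCharsets.UTF_8);
        Matcher m = Pattern.compile("\\$Revision:\\s*(\\d+)\\s*\\$").matcher(content);
        if (!m.find()) {
            throw new IOException("No expanded revision keyword found in revTemp.txt");
        }
        return Integer.parseInt(m.group(1));
    }
}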
My problem is that I have to commit revTemp.txt every time I commit other files to the repository.
Can you help me?
Thank you very much!

cassandra 3.5 fails to load trigger class

I am trying to get started with Cassandra triggers, but I cannot get Cassandra to load them. I have built jar files from here and here, and put them under C:\Program Files\DataStax-DDC\apache-cassandra\conf\triggers. I have restarted the DataStax_DDC_Server service (on Windows) and reopened the CQLSH command line, but trying to use the trigger class in a create trigger command gives me only:
ConfigurationException: <ErrorMessage code=2300 [Query invalid because of configuration issue] message="Trigger class 'org.apache.cassandra.triggers.InvertedIndex' doesn't exist">
I checked the jar files, and they include the class files.
The only thing I could find in Cassandra's log files is "Trigger directory doesn't exist, please create it and try again", but I don't know if that is relevant.
EDIT: Following the last line shown here, I edited the cassandra.bat file. Now if I stop the DataStax_DDC_Server service and run the bat file directly, the create trigger command succeeds. Nevertheless, the service seems to be independent of this bat file. The question now is how to apply the same config to the service?
After googling creatively, I found a solution. As mentioned here you need to explicitly set the cassandra.triggers_dir variable, but for the service to pick it up, as explained here, you must configure it in the registry. So the answer is to update the registry key
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Apache Software Foundation\Procrun 2.0\DataStax_DDC_Server\Parameters\Java\Options
and add the line
-Dcassandra.triggers_dir=C:\Program Files\DataStax-DDC\apache-cassandra\conf\triggers
Note that the path must not be enclosed in quotation marks, or it won't work.
Don't forget to restart the service.
The above solution works on Windows. Since the Registry Editor can be hard to find, open the Start menu and type "regedit"; this opens the registry window, where you can apply the settings above.

Avoid Cassandra 2.1 Warning trigger directory does not exist

Creating a new Cassandra instance and doing a simple insert results in an unexpected warning:
[SharedPool-Worker-1] WARN o.apache.cassandra.utils.FBUtilities - Trigger directory doesn't exist, please create it and try again.
Checking the source, it seems that Cassandra expects a trigger directory (default name 'triggers') to exist.
Since I start a fresh Cassandra every time, I would like to know how I can tell Cassandra to create the triggers directory itself. I do not want to fiddle with it manually.
[update] Cassandra uses the default main method and is started in user space. Since the directories for caches, data, and the commit log are created from the cassandra.yaml definition, I wonder where to specify the trigger directory, or how else it is going to be created.
#close screamers
Having an annoying warning in the logs that should not be there at all is what I consider a bug, so please allow this question... (no offense, just plain Stack Overflow begging)
As I learned from the code of the FBUtilities.cassandraTriggerDir method, the property "cassandra.triggers_dir" is read before the default trigger directory "triggers" is tried. Setting the property to the correct directory (after creating it) solved the issue.
The main reasons for the problem were, first, that the triggers directory did not exist at all and, second, that the Cassandra directory is not part of the classpath. So there was no way Cassandra could detect the trigger directory correctly.
So to summarize: a cassandra.yaml entry is missing for this issue.
PS: Thanks Bryce for your help!
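A minimal sketch of this fix for an embedded start, assuming Cassandra is launched through its default main method (the directory location is illustrative):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class EmbeddedCassandraStarter {
    public static void main(String[] args) throws Exception {
        // Create the triggers directory first, then point Cassandra at it via
        // the system property read by FBUtilities.cassandraTriggerDir.
        Path triggersDir = Paths.get("conf", "triggers"); // illustrative location
        Files.createDirectories(triggersDir);
        System.setProperty("cassandra.triggers_dir", triggersDir.toAbsolutePath().toString());

        // Start Cassandra with its default main method, as described above.
        org.apache.cassandra.service.CassandraDaemon.main(args);
    }
}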
Do you have a trigger defined on the table you are inserting into, or in your schema? Or did you upgrade Cassandra from a pre 2.0 version?
In any case, the /triggers directory for 2.1 depends on your install type.
For a tarball install, it should be: {install_location}/conf/triggers
For a packaged install, it should be: /etc/cassandra/triggers

Vert.x fails to "Run your module and see your changes immediately"

I am working with the Vert.x Gradle template hosted at the Vert.x Github space.
The build file suggests that there is a runModIDEA target that runs IDEA-built class files so that rebuild/redeploy is not required to pick up changes:
runModIDEA - run the module from the project resources in IDEA. This allows you to run the module without building it first!
... yet the task does not exist per ./gradlew tasks.
I am not tied to this particular build task per se.
I just want a working auto-redeploy solution that enables me to see updates without a two minute rebuild/redeploy cycle.
EDIT: I also tried running it directly, pointing to the IntelliJ IDEA output classpath. It works fine, but doesn't pick up changes.
vertx runmod com.mycompany~vert-x-reverse-proxy~1.0.0-final -c conf.json -cp out/production/vert-x-reverse-proxy
EDIT: I also tried ./gradlew runmod -m, first changing vertx_classpath.txt so that the IDEA files (out/production) are looked at first. Still no redeploy. In fact, while it was running, I deleted the out directory and it continued working.
EDIT: I also tried vertx run com.mycompany.myproject.ReverseProxyVerticle -c conf.json -cp out/production/vert-x-reverse-proxy... same results. It ran as expected but did not pick up changes. Only way to pick up changes was to gradlew clean and re-assemble.
EDIT: I have been through these instructions as well.
For anyone who stumbles upon this question: I had the same problem and managed to fix it by deleting everything under the /mods folder in the /target directory. This is in fact mentioned in the Vert.x documentation, though it could perhaps be a little more emphatic. Once everything under /mods is removed, start up the application and it redeploys whenever anything is changed.
If you are new to Vert.x and run into this or a similar problem, it might be worth having a look at this Vert.x Google group entry. It describes the changes that need to be made to the project generated by the Vert.x Gradle Template to get it running.
I know this does not directly answer the question posted here, but I hope it helps you further.

Why do I keep getting 'SVN: Working Copy XXXX locked; try performing 'cleanup'?

If you have worked with SVN tools in Eclipse (Subversion, subversive) before, then you are likely familiar with the 'working copy 'XXX' locked..." error.
I found a very useful post with a workaround for this problem at: Working copy XXX locked and cleanup failed in SVN
As great as the workaround is, it is a pain to do it over and over again. Does anyone know why I keep getting this error and what steps I could take to prevent it?
Context: I am creating an Eclipse plugin that involves listening for SVN events, so in testing this plugin, I am constantly opening and closing the workspace. I usually do 1 or 2 commits each time I open the workspace. Every so often the commit will fail and I get the 'working copy locked' error. I would love for this error to not happen anymore, so any advice is appreciated.
Thanks!
Select the project
Right click on the selected Project
Team -> Cleanup
Problem Solved.
Note: the above steps will work only in Eclipse (Indigo package).
Generally a .lock file is created, and the lock/unlock state is decided by checking for the existence of this file. I think if you delete just this .lock file, the problem will go away.
I've had a lot of issues with SVN before, and one thing that has definitely caused me problems is modifying files outside of Eclipse or manually deleting folders (which contain the .svn folders); that has probably given me the most trouble.
edit
You should also be careful not to interrupt SVN operations; sometimes a bug may occur that causes the .lock file not to be removed, and hence your error.
Make sure you clean up exactly what the console says. For example, if a subfolder (a package) is locked:
svn: E155004: Commit failed (details follow):
svn: E155004: Working copy 'C:\Users\laura\workspace\tparser\src\de\test\order' locked
svn: E155004: 'C:\Users\laura\workspace\tparser\src\de\test\order' is already locked.
cleanup C:/Users/liparulol/workspace/tparser/src/de/mc/etn/parsers/order
Then you need to clean up the specified folder, not the whole project. In Eclipse, right-click on the package, not on the project folder, and execute the cleanup.
After more exploration and testing, it appears that this issue was caused by debugging the plugin with breakpoints. SVN/Subclipse apparently didn't like having breakpoints hit midway through their execution, and as a result the lock files were being created. As soon as I started just running the plugin, this issue disappeared.
This will happen when something has gone wrong in one of the folders in your project.
You need to find the exact folder that is locked and execute svn cleanup in that specific folder.
You can solve this as follows:
Run the svn commit command to find out which folder went wrong.
Change directory to that folder and run svn cleanup. Then it's done.
The following should unlock a locked working copy (tested with SVN client version 1.6.11 and Eclipse version Mars.2 Release (4.5.2)):
step 1: (go to working copy directory) $cd working_copy_dir
step 2: (connect to svn sqlite database) $sqlite3 .svn/wc.db
step 3: (delete all records from table WC_LOCK) sqlite> delete from WC_LOCK;
step 4: (disconnect from the sqlite3 database) sqlite> Ctrl + D
step 5: (from Eclipse) right-click on your working copy, then click Team -> Refresh/Cleanup
I had the same problem with the com.xxx.service.model package.
To fix it, I first made a backup of the code changes in the model package, then deleted the model package and synchronized with the repository. It showed the entire folder/package as incoming; then I updated my code.
Finally, I pasted the old code back in and committed to the SVN repository. It works fine.
This happened to me when I copied a directory from another Subversion project and tried to commit. The solution was to delete the .svn directory inside the directory I wanted to commit.
This type of problem can happen when you delete or move files around, in essence making changes to your directory structure. Subversion only checks for changes made to files already added to Subversion, not changes made to the directory structure. Instead of using your OS's copy etc. commands, use svn copy etc. Please see http://svnbook.red-bean.com/en/1.7/svn.tour.cycle.html
Further, upon committing changes, svn first stores a "summary" of the changes in a to-do list. While performing the svn operations in this to-do list, it locks the file to prevent other changes while these actions are carried out. If the svn action is interrupted midway, say by a crash, the file will remain locked until svn can complete the actions in the to-do list. This can be "reactivated" by using the svn cleanup command. Please see http://svnbook.red-bean.com/en/1.7/svn.tour.cleanup.html
Solution:
Step 1: Remove the "lock" file present under the hidden ".svn" folder.
Step 2: If there is no "lock" file, you will see "wc.db"; open this database and delete just the contents of the following tables:
– lock
– wc_lock
Step 3: Clean your project.
Step 4: Try to commit now.
Step 5: Done.
