This is a somewhat high-level question, and I'm looking for possible solutions.
I have a Java program A and another Java program B, both of which are supposed to run on a VM. There is one constraint: B shouldn't run until a few minutes after a file, say C.xml, has been generated by A. How can I solve this problem?
One solution might be to use some scheduler that schedules A and then runs B based on a trigger (I've no idea which scheduler could be used).
Another is to write a service that runs on the VM and waits for C.xml to be generated. Once it's generated, the service kicks off B.
Any other ideas? (Or any help/comments on the two ideas I've suggested.)
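The second idea (a service that waits for the file) can be sketched with java.nio.file.WatchService from the standard library. This is a minimal sketch, not a full implementation: the output directory, the five-minute delay, and launching B as "B.jar" are all assumptions for illustration.

```java
import java.io.IOException;
import java.nio.file.*;

public class WaitForFile {
    // Blocks until a file named `name` exists in `dir`.
    static void await(Path dir, String name) throws IOException, InterruptedException {
        try (WatchService watcher = FileSystems.getDefault().newWatchService()) {
            dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
            // Check first: A may already have finished before we registered the watcher.
            while (!Files.exists(dir.resolve(name))) {
                WatchKey key = watcher.take();   // blocks until a filesystem event arrives
                key.pollEvents();                // drain the events; the exists() check above decides
                key.reset();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Path dir = Paths.get(args[0]);           // hypothetical: the directory A writes C.xml into
        await(dir, "C.xml");
        Thread.sleep(5 * 60 * 1000L);            // "a few minutes after" the file appears
        new ProcessBuilder("java", "-jar", "B.jar").inheritIO().start(); // hypothetical launch of B
    }
}
```

The initial Files.exists check matters: if A finishes before the watcher is registered, no ENTRY_CREATE event will ever arrive.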
I have a general design problem regarding Cucumber.
I'm trying to build some cucumber scenarios around a specific external process that takes some time. Currently, the tests look like this:
Given some setup
When I perform X action
And do the external process
Then validate some stuff
I have a number of these tests, and it would be massively more performant if I could do the external process just once for all these scenarios.
The problem I'm running into is that there doesn't seem to be any way to communicate between scenarios in Cucumber.
My first idea was to have each test run concurrently, hit a wait, and poll the external process to see if it's running before proceeding. But I have no way of triggering the process once all the tests are waiting, since they can't communicate.
My second idea was to persist data between tests. Each test would stop at the point where the process needs to be run, then somehow pass its CucumberContext to a follow-up scenario that validates things after the process. However, I'd have to save this data to the file system and pick it up again, which is a very ugly way to handle it.
Does anyone have advice on either synchronizing steps in cucumber, or creating "continuation" scenarios? Or is there another approach I can take?
You can't communicate data between scenarios, nor should you try to. Each scenario (by design) is its own separate thing, which sets and resets everything.
Instead what you can do is improve the way you execute your external process so instead of doing it each time, you use the results of it being done once, and then re-use that result in future executions of the scenario.
You could change your scenarios to reflect this e.g.
Given I have done x
And the external process has been run for x
Then y should have happened
You should also consider the user experience of waiting for the external process. For new behaviours you could do something like
When I do x
Then I should see I am waiting for the external process
and then later do another scenario
Given I have done x
And the external process has completed
Then I should see y
You can use something like VCR to record the results of executing your external process. (https://rubygems.org/gems/vcr/versions/6.0.0)
Note: VCR is ruby specific, but I am sure you can find a java equivalent.
Now that your external process executes almost instantly (a few milliseconds), you no longer have any need to share things between scenarios.
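In Java, one way to "do it once and re-use the result" within a single test run is a small memoizing wrapper. This is a generic sketch, not Cucumber-specific; the Supplier it wraps is a stand-in for whatever invokes your external process.

```java
import java.util.function.Supplier;

// Wraps an expensive computation so it runs at most once per JVM;
// later callers (e.g. later scenarios in the same run) reuse the cached result.
public class Memoized<T> implements Supplier<T> {
    private final Supplier<T> delegate;
    private T value;

    public Memoized(Supplier<T> delegate) { this.delegate = delegate; }

    @Override
    public synchronized T get() {
        if (value == null) {
            value = delegate.get();  // the first caller pays the full cost
        }
        return value;                // everyone else gets the cached result
    }
}
```

You would hold one of these in a static field shared by your step definitions, e.g. static final Memoized&lt;Result&gt; PROCESS = new Memoized&lt;&gt;(MyJob::runExternalProcess) (both Result and MyJob::runExternalProcess are hypothetical names here).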
I have a piece of Java code; I compiled and ran it, got an output, then made some changes before compiling and running again.
Is there any difference between the time the first compilation takes and the time the second takes? Similarly, is there a difference between the first run time and the second? Is there any way to measure that difference in processing time?
There can be some difference, depending on the changes you have made.
It depends on what your program did before and what it does now.
To check the time, you can record a timestamp just before your program's work begins and another after all your processing is done, then print the difference; a separate timer thread is not necessary.
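A minimal sketch of that timestamp approach, using System.nanoTime (a monotonic clock, better suited to measuring elapsed time than System.currentTimeMillis, which can jump if the wall clock changes). The summing loop is just a stand-in for your program's work:

```java
public class Timing {
    public static void main(String[] args) {
        long start = System.nanoTime();   // monotonic clock: safe for elapsed-time measurement
        long sum = 0;                     // the loop below is a stand-in for your program's work
        for (int i = 0; i < 1_000_000; i++) {
            sum += i;
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("work took " + elapsedMs + " ms (sum=" + sum + ")");
    }
}
```

For anything more serious than a rough check, use a proper benchmarking harness as suggested below, since JIT warm-up makes single measurements misleading.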
Firstly, I'm unsure why this is important to you. Perhaps by providing some more context you will get a more detailed answer.
Comparing compilation time can be achieved using Operating System tools. For example, on Linux try using time.
Complete execution time of your two Java programs can be achieved in the same fashion. However, if you are looking more closely at whether your code changes have improved your execution performance, I would suggest you Google "benchmarking in Java" to find a wealth of information on the correct methods to benchmark code.
If you use Eclipse you can configure Project -> Build Automatically to rebuild the project after every change.
That way, when you want to run it, compilation takes minimal time.
I have a .bat file on a Windows machine that starts our program by calling the main class of a Java executable (.jar).
Now I need to run this every 30 mins.
I've gone through several ways of doing it, but I'm unable to decide which is better.
Scheduling through Windows scheduler or Using Java Timer. Which one to choose?
I want only one instance of the process running. If the previous process doesn't complete within 30 minutes, I can wait.
Please let me know what to go for, based on my use case.
Thanks in advance.
You're better off using the Windows Scheduler. If there's a real risk of the process taking too long, you can create a file, or open a socket while the process is running and when another one tries to start up, it can detect that and simply quit. This would make it "miss" the 30m window (i.e. if the first job started at 12 and finished at 12:35, the next job would not start until 1).
But this way you don't have to worry at all about setting up long running processes, starting and stopping the java service, etc. The Windows scheduler just makes everything easier for something like this.
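The "open a socket" guard mentioned above can be sketched like this. The port number is a hypothetical choice you would reserve for this job; binding a ServerSocket fails while another process already holds the port, which is exactly the single-instance check you want.

```java
import java.io.IOException;
import java.net.ServerSocket;

public class SingleInstance {
    // Returns a held server socket if we are the only instance, or null if another run owns the port.
    static ServerSocket tryAcquire(int port) {
        try {
            return new ServerSocket(port);  // binding fails while another instance holds the port
        } catch (IOException alreadyRunning) {
            return null;
        }
    }

    public static void main(String[] args) throws Exception {
        ServerSocket guard = tryAcquire(49152);  // hypothetical port reserved for this job
        if (guard == null) {
            System.out.println("previous run still active; exiting");
            return;
        }
        try {
            // ... do the scheduled job's work here ...
        } finally {
            guard.close();  // releases the port so the next scheduled run can start
        }
    }
}
```

Unlike a lock file, the port is released automatically even if the JVM is killed, so there is no stale-lock cleanup to worry about.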
TimerTask is not a scheduling system; it is a library that provides tools for in-app scheduling. It seems that for your use case you need a real scheduling system: you need it to run whether or not your app is running, you need reporting, etc. Windows Scheduler (or cron on Unix/Linux) is more appropriate for your needs.
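For completeness, if you did choose in-app scheduling, java.util.concurrent's ScheduledExecutorService is generally preferred over Timer/TimerTask. A sketch, assuming the job is wrapped in a Runnable: a single-threaded executor with a fixed delay also satisfies the "only one instance" requirement, because the next run is not scheduled until the previous one finishes.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class InAppScheduler {
    // Schedules `job` so each run starts `delay` units after the previous run FINISHES.
    // A single-threaded executor with a fixed delay guarantees runs never overlap.
    static ScheduledExecutorService start(Runnable job, long delay, TimeUnit unit) {
        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        exec.scheduleWithFixedDelay(job, 0, delay, unit);
        return exec;
    }
}
```

For this question that would be something like start(() -> runTheJar(), 30, TimeUnit.MINUTES), where runTheJar is a hypothetical method invoking the jar's main class. Note this only works while the hosting JVM stays up, which is why the system scheduler is the better fit here.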
Basically, I am trying to profile a web application that runs on Tomcat and uses HSQLDB (an insecure web application from OWASP). I am using a Java profiler (jp2-2.1, not widely used at all) to profile the Tomcat server. The profiler records the sequence of method calls in the order they executed, in XML format. In short, it generates the calling context tree of the application run.
I noticed that the sequence in which HSQLDB methods get executed differs between two EXACTLY identical runs of the application, which I expected to be the same. To confirm this, I profiled a sample HSQLDB program, and the profiler again generated different output for the same program.
I am running the sample program from here: (http://hsqldb.sourceforge.net/doc/guide/apb.html)
So now I am sure that the sequence in which HSQLDB methods get executed differs between two identical runs of the program.
Could someone please tell me the reason behind this? I would be very curious to know.
I have never used HSQLDB, so I don't know in detail how it works.
Thanks.
The sequence in which HSQLDB methods are executed should generally be the same if the executed SQL statements are exactly the same, and each run starts with an empty database.
There will be minor differences between the first run and the runs that follow, because some static data is initialised in the first run.
We've developed a standalone Java program and configured a cron schedule on our Linux (RedHat ES 4) machine to execute it every 10 minutes. Each run may sometimes take more than an hour to complete, or may finish within 5 minutes.
My problem is that the number of Java standalones executing at any time should not exceed, for example, 5 processes. So before a new Java standalone/process starts, if there are already 5 processes running, this one should not be started; otherwise it would indirectly start causing OutOfMemoryError problems. How do I control this? I would also like to make the 5-process limit configurable.
Other Information:
I've also configured -Xms and -Xmx heap size settings.
Is there any tool/mechanism by which we can control this?
I also heard about Java Service Wrapper. What is this all about?
You can create 5 empty files (with names "1.lock", ..., "5.lock") and make the app lock one of them in order to execute (or exit if all files are already locked).
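A sketch of that lock-file idea using java.nio.channels.FileLock, which gives OS-level, cross-process locks. The lock-file names and the -Dmax.procs system property are illustrative choices; the property makes the limit configurable, as the question asks.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;

public class SlotLock {
    // Tries "1.lock" .. "<maxProcs>.lock" in turn; returns a held lock,
    // or null if all slots are taken (caller should exit without starting the job).
    static FileLock acquireSlot(int maxProcs) throws IOException {
        for (int i = 1; i <= maxProcs; i++) {
            RandomAccessFile raf = new RandomAccessFile(i + ".lock", "rw");
            try {
                FileLock lock = raf.getChannel().tryLock(); // non-blocking; null if another process holds it
                if (lock != null) {
                    return lock;            // keep raf open for the lifetime of the lock
                }
            } catch (OverlappingFileLockException heldByThisJvm) {
                // slot already locked from inside this JVM; treat it as busy
            }
            raf.close();
        }
        return null;
    }

    public static void main(String[] args) throws IOException {
        int maxProcs = Integer.getInteger("max.procs", 5);  // configurable, e.g. -Dmax.procs=5
        FileLock slot = acquireSlot(maxProcs);
        if (slot == null) {
            System.out.println("process limit reached; exiting");
            return;
        }
        // ... run the standalone's work; the OS releases the lock when the JVM exits ...
    }
}
```

Because the OS releases file locks when the process dies, a crashed run cannot leave a slot permanently occupied, which a simple "create/delete a marker file" scheme would.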
First, I am assuming you are using the words "thread" and "process" interchangably. Two ideas:
Have the cron job be a script that checks the currently running processes and counts them. If fewer than the threshold are running, spawn a new process; otherwise exit. The threshold can be defined in your script.
Have the main method of your Java program check some external resource (a file, database table, etc.) for a count of running processes; if it is below the threshold, increment it and start the work, otherwise exit (this assumes that reaching the main method alone will not be enough to cause your OutOfMemoryError problem). You may also need an appropriate locking mechanism on the external resource (though if your job runs every 10 minutes, this may be overkill). The threshold could be defined in a .properties file or some other configuration file for your program.
Java Service Wrapper helps you set up a Java program as a Windows service or a *nix daemon. It doesn't really deal with the concurrency issue you are looking at; the closest thing is a config setting that disallows concurrent instances if it's a Windows service.