I was wondering if it's possible to run two projects at the same time in Eclipse, for example by using two different instances of JVM (if that makes any sense).
A bit of background: I have a project that runs relatively long experiments (6-8 h). I have recently reached a point in development where I can branch off to develop different strategies for improving/adding code to the project. However, at the same time I need to get some experiments done, and since the experiments take a long while to finish I'd like to make use of that waiting time and work on the branch code.
In short my ideal scenario is: start an experiment on the trunk in Eclipse, switch to the branch and develop code/run shorter experiments on the branch when I need to test functionality. Is this possible, or do I need to come up with an alternative strategy?
Thanks in advance!
EDIT: I have realized that the word choice "test" was misleading, as it could be misunderstood. I mean executing the program as it's supposed to run, not testing with JUnit or anything like that. I apologize for the inconvenience.
I just check out different branches as different projects: MyProjectTrunk, MyProjectBranch1, MyProjectBranch2, etc. No problem. The projects will never run in the same JVM if you're using Run As > Java Application; each launch gets its own JVM.
Of course it is possible - you just need to have them configured as two separate projects with separate run configurations for each of them.
Unfortunately, as far as I remember, when you close a project all the associated running tasks (SVN commits, debug sessions, runs, etc.) shut down as well, and having two separate branches of the same project open at the same time can get very confusing when using keyboard shortcuts for class browsing.
Related
So I'm currently on a project where we are using the Java Play Framework 2.3.7 with Activator.
One of the things I like about the Play Framework is the hot-reloading feature: I can modify Java files, save, and the changes are compiled and refreshed at runtime.
How do I get that functionality for testing? I want to be able to run a single test with this hot-reloading feature, so that when I save, the tests for the given file (specified by test-only) are re-run automatically.
There is no such solution; however, you have two choices:
Use IntelliJ: to re-run the previous test(s) in IntelliJ, you press Shift + F10.
Write a watcher: write a file/directory watcher, such as the question/answer referenced here, and as soon as there are changes the program re-runs the test command, such as sbt clean compile test or activator compile test (see the sketch below).
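If you go the watcher route, a minimal sketch using Java's NIO WatchService might look like the following; the watched directory, the activator command, and the test name are placeholders to adjust (and note that register is not recursive, so a real watcher would also register sub-directories):

    import java.io.IOException;
    import java.nio.file.FileSystems;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardWatchEventKinds;
    import java.nio.file.WatchKey;
    import java.nio.file.WatchService;

    public class TestWatcher {
        public static void main(String[] args) throws IOException, InterruptedException {
            Path src = Paths.get("app");                         // directory to watch (placeholder)
            WatchService watcher = FileSystems.getDefault().newWatchService();
            src.register(watcher,
                    StandardWatchEventKinds.ENTRY_CREATE,
                    StandardWatchEventKinds.ENTRY_MODIFY);

            while (true) {
                WatchKey key = watcher.take();                   // blocks until something changes
                key.pollEvents();                                // drain events; we only care that something changed
                // re-run the single test; "MyCoolTest" is a placeholder name
                new ProcessBuilder("activator", "test-only MyCoolTest")
                        .inheritIO()
                        .start()
                        .waitFor();
                key.reset();
            }
        }
    }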
A little advice on auto-running tests: I don't know how complicated your application is, but as soon as you have a couple of injections here and there, plus some concurrency, you do not want to re-run the tests every time you type a character.
A little advice on Test Driven Development: your approach should be the other way around! You write a test, which fails because there is no implementation; then you leave it alone. You go and write the implementation, then re-run the test to make it pass or to get feedback. Again, you want your CPU/memory to focus on one thing; you don't want to brute-force your implementation. Hope this makes sense!
A little advice on your Play version: Play 2.6 is much better than Play 2.3; you should slowly but surely update your application, at least for the sake of security.
OK, so I found what I was looking for.
For anybody in need of this particular feature in that particular version of Play (I'm not sure about other versions), what you need to do is really simple: run activator and put the ~ prefix before test, for example:
#activator
[my-cool-project]~test
That will re-run your tests when you make a change. If you want to do this for a particular test, then you have to do the same but with test-only:
#activator
[my-cool-project]~test-only MyCoolTest
Hope it helps anyone looking for the same thing.
I'm currently reading Continuous Delivery, and in the book the author says that it is crucial to build the binaries only once and then use the same binaries for every deployment. What I'm having trouble understanding is how this can be done in practice. For example, in order to run the mocked unit tests, would there be a special build? What I'm referring to is the scope tag in Maven.
If you look at the Maven lifecycle you'll see that you have only one compile task. Your tests will be compiled and executed right after the source compilation. With mocked unit tests it is the same: two separate compilations for two objectives, and only the main sources end up in the final artifact.
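As an illustration of that separation (not from the original answer), here is a minimal sketch of a mocked unit test; the class and method names are invented, and JUnit 4 plus Mockito are assumed to be declared as test-scoped dependencies in the POM:

    // src/test/java/OrderServiceTest.java
    // Compiled during Maven's test-compile phase and run during the test phase,
    // but never packaged into the final JAR/WAR.
    import static org.junit.Assert.assertTrue;
    import static org.mockito.Mockito.*;

    import org.junit.Test;

    public class OrderServiceTest {

        // Invented production types, shown inline to keep the sketch self-contained.
        interface PaymentGateway { boolean charge(String account, long cents); }

        static class OrderService {
            private final PaymentGateway gateway;
            OrderService(PaymentGateway gateway) { this.gateway = gateway; }
            boolean placeOrder(String account, long cents) { return gateway.charge(account, cents); }
        }

        @Test
        public void placeOrderChargesTheGateway() {
            PaymentGateway gateway = mock(PaymentGateway.class);   // mock exists only in test scope
            when(gateway.charge("acct-1", 500L)).thenReturn(true);

            assertTrue(new OrderService(gateway).placeOrder("acct-1", 500L));
            verify(gateway).charge("acct-1", 500L);
        }
    }

The binary you deploy everywhere is still the one produced by the single package step; the mocked tests only gate it.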
I think that the author of your book is referring to a problem that may appear when you deploy automatically to several environments: it creates more environments to debug. It is essential to have only one final binary for all the environments. If you have several binaries split across your environments, you can be sure that you will forget what the differences are between two of them, or which arguments you gave to one and not the other. For Continuous Delivery, it has to be the same everywhere.
Let's come back to Maven. Maven offers a lot of possibilities during its lifecycle. Sometimes you'll have to run several builds to complete everything (code coverage, for example). This may be useful in your continuous integration process and can be done through different build types (every hour for unit tests, every day for code coverage, quality analysis and integration tests).
But in the end, when you move to Continuous Delivery, you'll build one final binary: one unique binary, copied across all your environments.
I am currently working on a rather large project with a team distributed across the United States. Developers regularly commit code to the source repository. We have the following application builds (all are managed by an application, no manual processes):
Continuous Integration: a monitor checks to see if the code repository has been updated; if so, it does a build and runs our unit test suite. On errors, the team receives email notifications.
Daily Build: Developers use this build to verify their bug fixes or new code on an actual application server, and if "things" succeed, the developer may resolve the task.
Weekly Build: Testers verify the resolved issue queue on this build. It is a more stable testing environment.
Current Release build: used for demoing and an open testing platform for potential new users.
Each build refreshes the database associated with it. This cleans the data and verifies that any database changes that go along with the new code are pulled in. One concern I hear from our testers is that we need to pre-populate the weekly build database with some expected testing data, as opposed to the more generic data that developers work with. This seems like a legitimate concern/need and is something we are working on.
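For what it's worth (this is not from the original post), one way to script that per-build reset and pre-seeding is a database-migration tool such as Flyway; the connection details and script locations below are made up:

    // Hypothetical build step that wipes and re-creates a build's database,
    // including migrations that insert the testers' expected data.
    import org.flywaydb.core.Flyway;

    public class RefreshBuildDatabase {
        public static void main(String[] args) {
            Flyway flyway = Flyway.configure()
                    // made-up URL and credentials for the weekly-build database
                    .dataSource("jdbc:postgresql://localhost/weekly_build", "build", "secret")
                    // schema migrations plus a separate folder of test seed data
                    .locations("classpath:db/migration", "classpath:db/testdata")
                    .cleanDisabled(false)
                    .load();

            flyway.clean();   // drop everything left over from the previous build
            flyway.migrate(); // re-apply the schema and the seed data
        }
    }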
I am tossing what we are doing out to see if the SO community sees any gap with what we are doing, or has any concerns. Things seem to be working well, but it FEELS like it could be better. Your thoughts?
An additional step that is followed is that once the release build passes tests (say smoke test) then it is qualified as a good build (say a golden build) and you use some kind of labeling mechanism to label all the artefacts (code, install scripts, makefiles, installable etc.) that went into the creation of the golden image. The golden build may become a release candidate later or not.
You are probably already doing this; since you didn't mention it, I've added what I have observed.
This is pretty much the way we do it.
The testers' own DB is only reset on demand. If we refreshed it automatically every week then:
we would lose the references to bug symptoms; if a bug is found but a developer only looks at it a few weeks later (or simply after the weekend), all evidence of that bug may have disappeared
testers might be in the middle of a big test case (taking more than 1 day for instance)
We do have tons of unit tests that run against a DB which is refreshed (automatically, of course) each time an integration build is executed.
I think you have a good, comprehensive process, as long as it fits in with when your customers want to see updates. One possible gap I can see is that it looks like you wouldn't be able to get a critical customer bug fix into production in less than a week, since your test builds are weekly and then you'd need time for the testers to verify the fix.
If you fancy thinking about things a different way, have a look at this article on continuous deployment - it can be a bit hard to accept the concept at first, but it definitely has some potential.
How does your team handle Builds?
We use Cruise Control, but (due to lack of knowledge) we are facing some problems: code freeze in SVN, and build management.
Specifically, how do you make available a particular release when code is constantly being checked in?
Generally, can you discuss what best practices you use in release management?
I'm positively astonished that this isn't a duplicate, but I can't find another one.
Okay, here's the deal. They are two separate, but related questions.
For build management, the essential point is that you should have an automatic, repeatable build that rebuilds the entire collection of software from scratch and goes all the way to your deliverable configuration. In other words, you should effectively build a release candidate every time. Many projects don't really do this, but I've seen it burn people (read "been burned by it") too many times.
Continuous integration says that this build process should be repeated every time there is a significant change event to the code (like a check-in), if at all possible. I've done several projects in which this turned into a build every night because the code was large enough that it took several hours to build, but the ideal is to set up your build process so that some automatic mechanism, like an Ant script or makefile, only rebuilds the pieces affected by a change.
You handle the issue of providing a specific release by preserving, in some fashion, the exact configuration of all affected artifacts for each build, so you can apply your repeatable build process to the exact configuration you had. (That's why it's called "configuration management.") The usual version control tools, like Git or Subversion, provide ways to identify and name configurations so they can be recovered; in SVN, for example, you might construct a tag for a particular build. You simply need to keep a little bit of metadata around so you know which configuration you used.
You might want to read one of the "Pragmatic Version Control" books, and of course the stuff on CI and Cruise Control on Martin Fowler's site is essential.
Look at continuous integration: best practices, from Martin Fowler.
Well, I have managed to find a related thread, I participated in, a year ago. You might find it useful, as well.
And here is how we do it.
[Edited]
We are using Cruise Control as our integration tool. We just deal with the trunk, which is the main Subversion repository in our case. We seldom pull out a new branch for doing new story cards, only when there is a chance of complex conflicts. Normally, we pull out a branch for a version release, create the build from that, and deliver it to our test team. Meanwhile we continue working in trunk and wait for feedback from the test team. Once everything is tested, we create a tag from the branch, which is logically immutable in our case. So we can release any version at any time to any client if needed. If there are bugs in the release we don't create the tag; we fix things in the branch first. After getting everything fixed and approved by the test team, we merge the changes back to trunk and create a new tag from the branch specific to that release.
So, the idea is that our branches and tags do not really participate in continuous integration directly. Merging branch code back to the trunk automatically makes that code part of CI (Continuous Integration). We normally do just bug fixes for a specific release in branches, so they don't really participate in the CI process. On the contrary, if for some reason we start doing new story cards in a branch, we don't keep that branch apart for too long; we try to merge it back to trunk as soon as possible.
Precisely:
We create branches manually when we plan the next release
We create a branch for the release and fix bugs in that branch if needed
Once everything is good, we make a tag from that branch, which is logically immutable
Finally, we merge the branch back to trunk if it has some fixes/modifications
Release Management goes well beyond continuous integration.
In your case, you should use Cruise Control to automatically make a tag, which allows developers to go on coding while your incremental build can take place.
If your build is incremental, that means you can trigger it every x minutes (and not for every commit, because if commits are too frequent and your build is too long, it may not have time to finish before the next build tries to take place). The 'x' should be tailored to be longer than a compilation/unit-test cycle.
A continuous integration build should also automatically launch the unit tests.
Beyond that, a full release management process will involve:
a series of deployments on homologation (staging) servers
a full cycle of homologation / UAT (User Acceptance Test)
non-regression tests
performance / stress tests
pre-production (and parallel run tests)
before finally releasing into production.
Again "release management" is much more complex than just "continuous integration" ;)
Long story short: Create a branch copied from trunk and checkout/build your release on that branch on the build server.
However, to get to that point in a completely automated fashion using cc.net is not an easy task. I could go into details about our build process if you like, but it's probably too fine grained for this discussion.
I agree with Charlie about having an automatic, repeatable build from scratch. But we don't do everything for the "Continuous" build, only for Nightly, Beta, Weekly or Omega (GA/RTM/Gold) release builds. Simply because some things, like generating documentation, can take a long time, and for the continuous build you want to provide developer with rapid feedback on a build result.
I totally agree with preserving exact configuration, which is why branching a release or tagging is a must. If you have to maintain a release, i.e. you can't just release another copy of trunk, then a branch on release approach is the way to go, but you will need to get comfortable with merging.
You can use Team Foundation Server 2008 and Microsoft Visual Studio Team System to accomplish your source control, branching, and releases.
We're trying to separate a big code base into logical modules. I would like some recommendations for tools, as well as whatever experiences you might have had with this sort of thing.
The application consists of a server WAR and several rich clients distributed in JARs. The trouble is that it's all in one big, hairy code base, one source tree of > 2k files. Each JAR has a dedicated class with a main method, but the tangle of dependencies ensnares quickly. It's not all that bad; good practices were followed consistently and there are components with specific tasks. It just needs some improvement to help our team scale as it grows.
The modules will each be a Maven project, built by a parent POM. The process of moving each JAR/WAR into its own project has already started, but it's obvious that this will only scratch the surface: a few classes in each app JAR and a mammoth "legacy" project with everything else. Also, there are already some unit and integration tests.
Anyway, I'm interested in tools, techniques, and general advice for breaking up an overly large and entangled code base into something more manageable. Free/open source is preferred.
Have a look at Structure 101. It is awesome for visualizing dependencies and showing which dependencies to break on your way to a cleaner structure.
We recently accomplished a similar task, i.e. a project that consisted of > 1k source files with two main classes that had to be split up. We ended up with four separate projects: one for the base utility classes, one for the client database stuff, one for the server (the project is an RMI server-client application), and one for the client GUI stuff. Our project had to be separated because other applications were using the client from the command line only, and if you used any of the GUI classes by accident you got headless exceptions, which only occurred when starting on the headless deployment server.
Some things to keep in mind from our experience:
Use an entire sprint for separating the projects (don't let other tasks interfere with the split-up, for you will need the whole sprint)
Use version control
Write unit tests before you move any functionality somewhere else
Use a continuous integration system (doesn't matter if home grown or out of the box)
Minimize the number of files in the current changeset (you will save yourself a lot of work when you have to undo some changes)
Use a dependency analysis tool all the way through before moving classes (we have had good experiences with DependencyFinder)
Take the time to restructure the packages into reasonable per project package sets
Don't be afraid to change interfaces, but keep all dependent projects in the workspace so that you get all the compilation errors
Two pieces of advice: the first thing you need is a test suite. The second is to work in small steps.
If you already have a strong test suite then you're in a good position. Otherwise, I would write some good high-level tests (a.k.a. system tests).
The main advantage of high-level tests is that a relatively small number of tests can get you great coverage. They will not help you pin-point a bug, but you won't really need that: if you work in small steps and you make sure to run the tests after each change, you'll be able to quickly detect (accidentally introduced) bugs, because the root of the bug is in the small portion of the code that has changed since the last time you ran the tests.
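As a rough illustration (the endpoint and port are made up, and JUnit 4 is assumed), a high-level test at this stage can be as coarse as hitting the running application and checking one end-to-end behaviour:

    // A coarse-grained system test: point it at a deployed/running instance of the
    // application and assert on one externally visible behaviour.
    import static org.junit.Assert.assertEquals;

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    import org.junit.Test;

    public class SmokeSystemTest {

        @Test
        public void homePageAnswersWith200() throws IOException {
            // made-up URL; in practice this points at the server WAR under test
            HttpURLConnection conn =
                    (HttpURLConnection) new URL("http://localhost:8080/app/").openConnection();
            conn.setRequestMethod("GET");
            assertEquals(200, conn.getResponseCode());
            conn.disconnect();
        }
    }

A handful of tests like this won't pin-point a regression, but run after every small extraction step they tell you quickly that something just broke.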
I would start with the various tasks that you need to accomplish.
I was faced with a similar task recently, given a 15 year old code base that had been made by a series of developers who didn't have any communication with one another (one worked on the project, left, then another got hired, etc, with no crosstalk). The result is a total mishmash of very different styles and quality.
To make it work, we've had to isolate the necessary functionality, distinct from the decorative fluff around it. For instance, there are a lot of different string classes in there, and one person spent what must have been a great deal of time writing a 2k-line conversion between COleDateTime and const char* and back again; that was fluff, code that solves a task ancillary to the main goal (getting things into and out of a database).
What we ended up having to do was identify a large goal that this code accomplished, and then write the base logic for that. When there was a task we needed to accomplish that we knew had been done before, we found it and wrapped it in library calls, so that it could exist on its own. One code chunk, for instance, activates a USB device driver to create an image; that code is untouched by the current project, but called when necessary via library calls. Another code chunk works the security dongle, and still another queries remote servers for data. That's all necessary code that can be encapsulated. The drawing code, though, was built over 15 years and was such an edifice to insanity that a rewrite in OpenGL over the course of a month was a better use of time than trying to figure out what someone else had done and then how to add to it.
I'm being a bit handwavy here, because our project was MFC C++ to .NET C#, but the basic principles apply:
find the major goal
identify all the little goals that make the major goal possible
isolate the already encapsulated portions of code, if any, to be used as library calls (see the sketch after this list)
figure out the logic to piece it all together.
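To make the third point concrete (this is my own sketch in Java rather than the C++/C# of the project above, and every name in it is invented): the idea is to hide an already-working chunk behind a narrow interface so the rest of the code base only depends on that contract.

    // Hypothetical facade: the messy-but-working legacy code stays untouched behind a
    // small interface; callers depend on the contract, not on the internals.
    interface ImageCapture {
        byte[] captureImage();
    }

    class LegacyUsbImageCapture implements ImageCapture {
        private final LegacyUsbDriver driver = new LegacyUsbDriver();

        @Override
        public byte[] captureImage() {
            // delegate to the existing code instead of rewriting it
            return driver.acquireFrame();
        }
    }

    // stand-in for the untouched legacy code that actually talks to the device
    class LegacyUsbDriver {
        byte[] acquireFrame() {
            return new byte[0]; // placeholder; the real code returns the captured image
        }
    }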
I hope that helps...
To continue Itay's answer, I suggest reading Michael Feathers' "Working Effectively with Legacy Code" (PDF). He also recommends that every step be backed by tests. There is also a book-length version.
Maven allows you to set up small projects as children of a larger one. If you want to extract a portion of your project as a separate library for other projects, then Maven lets you do that as well.
Having said that, you definitely need to document your tasks, what each smaller project will accomplish, and then (as has been stated here multiple times) test, test, test. You need tests which work through the whole project, then have tests that work with the individual portions of the project which will wind up as child projects.
When you start to pull out functionality, you need additional tests to make sure that your functionality is consistent, and that you can mock input into your various child projects.