I have a problem and I would like to ask for alternatives to my existing technology choices. The feature to be programmed is complex and will be handed to users, so the front end should be as simple as possible. I need a Java-based technology.
What I need to do:
I have a basic structure with a lot of data. The data is mostly well typed: Integers, Dates, Booleans etc., so things that can be compared easily.
I need to model decisions with sets of requirements that can be defined and altered by many sources, such as internal business processes and governmental laws.
So I am thinking of giving the users a scripting ability (most of them have university degrees, so some complexity is OK).
Let's see a simplified example.
Let A be a structure with the following.
A.budget - Integer
A.bankRelatedDebt - Integer
A.privateRelatedDebt - Integer
A.deadLine - Date
A.hasPermissionFromGovernment - Boolean
A.hasProblematicContracts - Boolean
I need to define rules that decide whether the rule stands or falls, so I need a boolean back.
Rule1: The budget is over 1 million EUR
Rule2: Has no problematic contracts, or has a permission from the government
Rule3: The deadline is not within the next month
Rule4: The overall debt (bank + private) doesn't exceed 100,000 EUR
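For reference, a hardcoded Java baseline of the four rules might look like the sketch below (field names are taken from the structure above; reading Rule3 as "more than 30 days away" is an assumption). This is exactly the kind of code the dynamic system needs to replace:

import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.Date;
import java.util.function.Predicate;

public class HardcodedRules {
    // Minimal stand-in for the structure A described above.
    static class A {
        int budget, bankRelatedDebt, privateRelatedDebt;
        Date deadLine;
        boolean hasPermissionFromGovernment, hasProblematicContracts;
    }

    static final Predicate<A> RULE1 = a -> a.budget > 1_000_000;
    static final Predicate<A> RULE2 = a -> !a.hasProblematicContracts || a.hasPermissionFromGovernment;
    // Rule3 read as: the deadline lies more than 30 days in the future.
    static final Predicate<A> RULE3 = a -> a.deadLine.after(Date.from(Instant.now().plus(30, ChronoUnit.DAYS)));
    static final Predicate<A> RULE4 = a -> a.bankRelatedDebt + a.privateRelatedDebt <= 100_000;
}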
These rules could be hardcoded in other cases, but here they have to be super-dynamic and based on the given data.
We are considering Drools and ANTLR; I would welcome alternatives if you can name any. Mentioning technologies to avoid is helpful and welcome as well.
For what it's worth, I would love to build such an expert system too, so bear with my ramblings. First some negative points, as you asked what to avoid.
There are many pitfalls.
The "programming" is done by the users, there probably is no version control system for restoral, there maybe is no staging system but one is working in the production system. Think of extending a common library rule test wise. No unit tests?
Then there is the user acceptance. Especially there is a competitor, Excel programming, which you have to supercede. Generating reports with human electable text blocks, diagrams.
Your nunbered rules still lack some life: the system could assist with providing categories to select from: Rule1 - restriction on monetary resource. Nice would be to propose "would you also like to restrict on limitited resources? (a) Rule1, (b) ... .
Also what is the product? What are the advantages? What are the goals?
Reports, calculation scenarios (what-ifs, tolerances calculated through).
I certainly would first write a technical document along the above lines, and then search for the tools - as you seem to be doing. Drools is too basic. ANTLR for a DSL I find risky.
Tools
Data mining seems to be the keyword you are searching for.
The JVM programming language Scala (not easily learned) is productive for DSLs and parsing.
Many functional languages are a bit easier and offer scripting too (Java scripting API).
What about a web project, maybe using Jetty as an embedded web server? Then you can use HTML and JavaScript. HTML5?
A rich client platform (Eclipse or NetBeans) requires experience for rapid development. For nice graphics, maybe JavaFX (too early).
Develop a DSL for your needs using either Groovy or Scala.
We use CodeMirror to provide syntax highlighting in a web page.
Works great for us with Groovy.
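As an illustration of the Groovy route, a minimal sketch of evaluating a user-written rule from plain Java (assumes the groovy jar on the classpath; the variable name "a" and the rule text are invented for the example):

import groovy.lang.Binding;
import groovy.lang.GroovyShell;
import java.util.Map;

public class GroovyRuleDemo {
    public static void main(String[] args) {
        // Expose the data structure to the script under the name "a".
        Binding binding = new Binding();
        binding.setVariable("a", Map.of("budget", 1_500_000, "hasPermissionFromGovernment", true));

        // Groovy resolves map keys and bean properties with dot notation,
        // so user rules can read close to plain English.
        GroovyShell shell = new GroovyShell(binding);
        Object result = shell.evaluate("a.budget > 1000000 && a.hasPermissionFromGovernment");
        System.out.println(result); // true
    }
}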
I would vote against Drools because I have had terrible experiences with it, but some people like it.
I would propose a language already integrated into Java: JavaScript. Why?
It is simple enough and has nice access to Java beans: instead of a.getDeadLine() you can write a.deadLine.
You have tons of places to check for information.
You can add simple functions to make it easier to use (a minimal sketch follows).
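As a sketch of this suggestion, evaluating such a rule through the standard javax.script API takes only a few lines (the rule text is invented; the bundled JavaScript engine is Rhino on older JDKs and Nashorn on Java 8 through 14):

import javax.script.Bindings;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class JsRuleDemo {
    // A small bean standing in for the structure A from the question.
    public static class A {
        public int getBudget() { return 1_500_000; }
        public boolean isHasPermissionFromGovernment() { return true; }
    }

    public static void main(String[] args) throws Exception {
        ScriptEngine js = new ScriptEngineManager().getEngineByName("JavaScript");
        Bindings bindings = js.createBindings();
        bindings.put("a", new A());
        // Bean-style access: the script writes a.budget instead of a.getBudget().
        Object ok = js.eval("a.budget > 1000000 && a.hasPermissionFromGovernment", bindings);
        System.out.println(ok); // true
    }
}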
But whether you choose JavaScript, Python, Drools, or ANTLR, remember:
Users do not have version control systems like SVN/Git, so it is up to you to make that happen.
Give them a tool (a webpage or whatever) that automatically saves every version of every script they write.
Give them a way to test what they wrote without damaging anything.
Give them tools to rollback whatever changes they made.
Run as many static checks as possible when they commit the code, before executing it (see the sketch after this list).
Syntax highlighting will make them happier.
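For the static-check point above, a minimal sketch: if the chosen engine implements javax.script.Compilable (the bundled JavaScript engines do), a syntax check before the rule goes live is a single call (the broken rule text is deliberate):

import javax.script.Compilable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class SyntaxCheck {
    public static void main(String[] args) {
        ScriptEngine js = new ScriptEngineManager().getEngineByName("JavaScript");
        try {
            // Compiles without executing: syntax errors surface here, not in production.
            ((Compilable) js).compile("a.budget > 1000000 &&");
            System.out.println("script accepted");
        } catch (ScriptException e) {
            System.out.println("script rejected: " + e.getMessage());
        }
    }
}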
And remember: they will use the tool in ways you don't expect, and you will end up writing (or rewriting) most of the scripts. No university degree (not even CS!) guarantees that they understand what they are doing.
So if you can make the system less dynamic, it will be to your benefit.
It's like the strategy pattern: all the different rules are different algorithms applied to the context (A), and the algorithm can be selected at runtime.
Add the filter chain design pattern to that, so that you can apply several algorithms (rules) at the same time.
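A minimal self-contained sketch of that combination (all names invented): each rule is one strategy over the context, and a filter chain applies whichever rules were selected at runtime:

import java.util.List;

public class RuleChainDemo {
    // The context from the question, reduced to two fields for brevity.
    record Context(int budget, boolean hasPermissionFromGovernment) {}

    // Strategy: each rule is an interchangeable algorithm over the context.
    interface Rule { boolean applies(Context ctx); }

    public static void main(String[] args) {
        Rule budgetRule = ctx -> ctx.budget() > 1_000_000;
        Rule permissionRule = Context::hasPermissionFromGovernment;

        // Filter chain: the rules selected at runtime are applied in sequence.
        List<Rule> chain = List.of(budgetRule, permissionRule);
        Context ctx = new Context(1_500_000, true);
        System.out.println(chain.stream().allMatch(r -> r.applies(ctx))); // true
    }
}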
Roolie is a very simple Java rule engine that might be helpful for you. As Roolie's description says:
Roolie is an extremely simple Java Rule Engine (Non-JSR 94) that uses rules you create in Java. Simply create your basic rules, implement the single "passes" method for each, then chain them together in an XML file to create more complex rules.
If you had the records in a database, you could select the matching ones with SQL syntax.
For example:
SELECT * FROM data
WHERE budget > 1000000
  AND bankRelatedDebt + privateRelatedDebt < 100000
Related
I know this question is vague, but I'll try to make myself clear.
I am starting a Java project involving a Swing GUI. I want to follow the MVC pattern, and could use some help from a framework to organize the project's architecture. I was thinking of using Griffon, though I suppose others might do the trick.
So, is it a good idea to use a framework in terms of:
Programming efficiency: Of course, it will be improved... most of the time. More precisely, what if the project is a small one? Or a large one? What if I'm already very familiar with Java and Swing? What if I'm not? What if the project has to be maintained by someone who knows nothing about the framework I used?
Learning value: Will I merely "learn how to use the framework", instead of learning more about Java and Swing in a different environment?
Professional value: Would companies prefer a developer who knows "more" frameworks (even if they might not be the ones they intend to use) to a developer who knows the "traditional" approach better?
I found little information elsewhere, which is surprising, considering how big this question is. It might seem trivial, but I'm actually wondering about it.
Of course I'm biased when it comes to Griffon; however, I'll try to be as objective as possible:
Griffon is an MVC framework/platform for the JVM. It's true that the programming language of choice is Groovy, however many others can be used too, see this example from the Guide http://griffon.codehaus.org/guide/latest/guide/tips.html#nonGroovyArtifacts where it shows how a pure Java application can be written. Other options are possible if you install a specific plugin http://artifacts.griffon-framework.org/tags/plugin/polyglot
Griffon's philosophy is one of keeping your choices open. It's true that sometimes the framework will steer you to follow a particular path; however, it provides plenty of leg room, that is, you can make it dance to your own tune. For example, writing Views is usually done following the Groovy SwingBuilder DSL (a subtle abstraction layer on top of Swing), but you can drop down to the Java layer and write plain Java/Swing if you want; or pick NetBeans Matisse, Abeille Forms Designer, or any other visual tool that supports Swing.
Plugins are key to Griffon's success. As you can see at http://artifacts.griffon-framework.org/category/all/plugins there are currently 211 plugins, and more are coming.
But in the end there's only one opinion that matters: yours. I'd recommend you spend a few hours with Griffon; if you don't see the value it adds by then ... I'm afraid we'll have to work harder to make it better.
Cheers
As we know, Griffon is based on Groovy, and Groovy has a beautiful Java-like style; you will probably avoid a lot of lines of code. But we always need to consider aspects like knowledge and schedule.
Knowledge: Your productivity is related to what you know and how you use what you know. If you feel comfortable in Java, use Java, because it seems that your goal is to use MVC, and as Juned said, we can do this with Swing too.
Schedule: If you have time to study and really want to learn a new framework, now is the time. But you must follow your schedule; don't forget that you need to finish this project on time.
So consider using what you know, and study new things for other projects.
Avoid diving in the dark without your flashlight.
I was evaluating Griffon as a framework. I got the impression that the project is slowly dying. IMHO Groovy is not mainstream anymore (I wonder if it ever was mainstream?). Now everybody fancies Scala.
Now back to your question:
Most frameworks expect you to follow the standard development path. Any changes/customizations will most probably introduce difficult-to-maintain solutions (they will call it "architecture" afterwards). Choose a framework that allows you to do 95% of the things you plan to do. And yeah, choose a mainstream framework.
Griffon is based on Groovy, so you have to master Groovy first. OK, Groovy is a JVM language, and if you're OK with Java it will greatly help, but all those DSLs will still require some time to settle in your head.
If you know any mainstream framework, that is a valuable asset. The sad fact is that frameworks tend to fade and die, and you have to constantly look for the new buzz. You never stop learning (although the key principles remain constant from framework to framework).
Implementing the MVC pattern for an environment should be easy if you understand it. First a note on it: http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller
Now, coming to your environment, which is Swing based: you can implement your code by dividing it into Model, View, and Controller. Views are your Swing classes, where you actually create the user interface. In these classes you should simply capture user actions through listeners, but not implement any business logic. The Controller should do the business logic and may use the Model whenever desired.
For example, suppose you are creating a Swing GUI for login. Create a LoginView class where you build the frame, text fields, buttons etc., and attach listeners to the controls as desired. Whenever a user submits the login, the View should call the Controller to validate the credentials. Credentials may be stored in a DB, which should be loaded and held by the Model (DAOs). The Controller gets the user input from the View and the correct credentials from the Model, and the comparison logic is implemented in the Controller.
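To make that concrete, a minimal runnable sketch (class names are invented, and the hardcoded credential check stands in for a real DAO-backed model):

import java.awt.FlowLayout;
import javax.swing.*;

public class LoginDemo {
    // Model: stands in for DAOs loading credentials from a database.
    static class UserModel {
        boolean credentialsValid(String user, String pass) {
            return "admin".equals(user) && "secret".equals(pass);
        }
    }

    // Controller: business logic only, no Swing code.
    static class LoginController {
        private final UserModel model = new UserModel();
        boolean login(String user, String pass) {
            return model.credentialsValid(user, pass);
        }
    }

    // View: builds the UI and forwards user actions to the controller.
    static class LoginView {
        LoginView(LoginController controller) {
            JFrame frame = new JFrame("Login");
            JTextField user = new JTextField(10);
            JPasswordField pass = new JPasswordField(10);
            JButton submit = new JButton("Login");
            submit.addActionListener(e -> {
                boolean ok = controller.login(user.getText(), new String(pass.getPassword()));
                JOptionPane.showMessageDialog(frame, ok ? "Welcome!" : "Invalid credentials");
            });
            frame.setLayout(new FlowLayout());
            frame.add(new JLabel("User:"));
            frame.add(user);
            frame.add(new JLabel("Pass:"));
            frame.add(pass);
            frame.add(submit);
            frame.pack();
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        }
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> new LoginView(new LoginController()));
    }
}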
Hope it helps!
We're using JBoss Drools to externalise some business logic that is particularly prone to change in some services we are building.
Where these rules are created and maintained by our developers, this works very well, and we have a good level of integration and an integrated workflow.
We are looking to expand its use to a new service that requires a very high level of customisation. Essentially, an "expert user" needs to be able to set up rules of two different kinds:
"standard" rules - these are almost implicit rules that we know are common requirements and which we could build a UI for setting, e.g. only allowing certain operations to take place between two dates, etc.
"custom" rules - completely off-the-wall requests that, whilst we could try to anticipate them, we'd rather just let people write and test their own rules for :)
My question is: is it possible (and is there any example out there) to use Drools for both 1 and 2? Basically, to have a fixed UI application author Drools rules effectively AND have a "free text" rule editor embedded in our UI?
Any suggestions appreciated!
You have a few options.
For (2), you can simply embed the rules editor from Guvnor in your web application. All editors in Guvnor are embeddable components, so you can choose what you want to use and what you don't. The problem that I see in this approach is that you might be giving too much power to the users :). In other words, the ability to write any rules based on the model requires disciplines that are typically only known to technical users. For instance, writing tests to validate the rules. Some business users have enough technical knowledge for that, but I would say it is probably the exception, not the rule.
What I prefer and recommend most of the time is to develop your own domain-specific GUI that uses/exposes concepts and terminology familiar to the business users, and a way to write rules that "makes sense" for their specific job. Sometimes they will not even know they are writing "rules", but they will be. Behind the scenes, your application takes the input from the domain-specific GUI and generates the rules dynamically, either using the Drools API or a string-based template. This solves your (1) requirement, and might be powerful enough to solve (2) as well. A sketch of the template approach follows.
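To illustrate the template idea, a sketch that builds a DRL string from GUI input and compiles it on the fly (the fact type, field, and limit are invented; KieHelper is the Drools 6+ convenience API, older versions would use a KnowledgeBuilder instead):

import org.kie.api.KieBase;
import org.kie.api.definition.type.FactType;
import org.kie.api.io.ResourceType;
import org.kie.api.runtime.KieSession;
import org.kie.internal.utils.KieHelper;

public class DynamicRuleDemo {
    public static void main(String[] args) throws Exception {
        // Values collected from the domain-specific GUI.
        String field = "amount";
        int limit = 1000;

        // Generate the rule text from a template.
        String drl =
            "package rules\n" +
            "declare Order\n" +
            "  amount : int\n" +
            "end\n" +
            "rule \"limit-" + field + "\" when\n" +
            "  Order(" + field + " > " + limit + ")\n" +
            "then\n" +
            "  System.out.println(\"rule fired\");\n" +
            "end\n";

        KieBase kieBase = new KieHelper().addContent(drl, ResourceType.DRL).build();

        // Create an instance of the type declared in the DRL and run the rule.
        FactType orderType = kieBase.getFactType("rules", "Order");
        Object order = orderType.newInstance();
        orderType.set(order, "amount", 5000);

        KieSession session = kieBase.newKieSession();
        session.insert(order);
        session.fireAllRules(); // prints: rule fired
        session.dispose();
    }
}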
I have Java source code that I need to interrogate in order to apply security policies [e.g. checking against CWE].
I have a couple of ideas; for starters, building an AST and then walking the tree. Another is using regular expressions.
Are there any options other than an AST or regexes that I could use for such a process?
An AST is a good choice, much better than regular expressions.
There are numerous Java parsers available. ANTLR's Java grammar is one example.
You can also adapt the source code of the javac compiler from OpenJDK.
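As a sketch of the javac route: the JDK exposes javac's ASTs through the Compiler Tree API (com.sun.source), so you can scan sources without modifying the compiler itself. The file name and the check below are illustrative:

import com.sun.source.tree.CompilationUnitTree;
import com.sun.source.tree.MethodInvocationTree;
import com.sun.source.util.JavacTask;
import com.sun.source.util.TreeScanner;
import javax.tools.JavaCompiler;
import javax.tools.StandardJavaFileManager;
import javax.tools.ToolProvider;

public class AstScan {
    public static void main(String[] args) throws Exception {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        try (StandardJavaFileManager fm = compiler.getStandardFileManager(null, null, null)) {
            JavacTask task = (JavacTask) compiler.getTask(
                null, fm, null, null, null, fm.getJavaFileObjects("Example.java"));
            for (CompilationUnitTree unit : task.parse()) {
                // Walk the AST and report every method invocation; a real policy
                // check would match these against a list of banned calls.
                new TreeScanner<Void, Void>() {
                    @Override
                    public Void visitMethodInvocation(MethodInvocationTree node, Void p) {
                        System.out.println("call: " + node.getMethodSelect());
                        return super.visitMethodInvocation(node, p);
                    }
                }.scan(unit, null);
            }
        }
    }
}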
Some static analysis tools like PMD support user-defined rules that would allow you to perform many checks without a lot of work.
There are a number of pre-existing tools that do some or all of what you are asking for. Some on the source code level, and some by parsing the byte code.
Have a look at
- CheckStyle
- FindBugs
- PMD
All of these are extensible in one way or another, so you can probably get them to check what you want to check, in addition to the many standard checks they already have.
Many static source code analysis (SCA) tools use a collection of regular expressions to detect code that may be vulnerable. There are many SCA tools for Java, and I don't know the best open-source one off hand. I can tell you that Coverity makes the best Java SCA tool that I have used; it's much more advanced than just regular expressions, as it can also detect race conditions.
What I can tell you is that this approach is going to produce a lot of false positives and false negatives. The CWE system indexes HUNDREDS of different vulnerabilities and covering all of them is completely and totally impossible.
You either want to get an existing static analysis tool that focuses on the vulnerabilities of interest to you, or you want to get a tool with strong foundations for building custom analyses.
Just parsing to ASTs doesn't get you a lot of support for doing analysis. You need to know what symbols mean where they are encountered (e.g., scopes, symbol tables, type resolution), and you often need to know how information flows (inheritance graphs, call graphs, control flows, data flows) across the software elements that make up the system. Tools like ANTLR don't provide this; they are parser generators.
A tool foundation having this information available for Java is our DMS Software Reengineering Toolkit and its Java Front End.
Question in nutshell
What is the best way to get Python and Java to play nice with each other?
More detailed explanation
I have a somewhat complicated situation. I'll try my best to explain both in pictures and words. Here's the current system architecture:
We have an agent-based modeling simulation written in Java. It has options of either writing locally to CSV files, or remotely via a connection to a Java server to an HDF5 file. Each simulation run spits out over a gigabyte of data, and we run the simulation dozens of times. We need to be able to aggregate over multiple runs of the same scenario (with different random seeds) in order to see some trends (e.g. min, max, median, mean). As you can imagine, trying to move around all these CSV files is a nightmare; there are multiple files produced per run, and like I said some of them are enormous. That's the reason we've been trying to move towards an HDF5 solution, where all the data for a study is stored in one place, rather than scattered across dozens of plain text files. Furthermore, since it is a binary file format, it should give us significant space savings compared to uncompressed CSVs.
As the diagram shows, the current post-processing we do of the raw output data from simulation also takes place in Java, and reads in the CSV files produced by local output. This post-processing module uses JFreeChart to create some charts and graphs related to the simulation.
The Problem
As I alluded to earlier, the CSVs are really untenable and are not scaling well as we generate more and more data from the simulation. Furthermore, the post-processing code is doing more than it should have to, essentially performing the work of a very, very poor man's relational database (making joins across 'tables' (CSV files) based on foreign keys (the unique agent IDs)). It is also difficult in this system to visualize the data in other ways, e.g. in Prefuse, Processing, or JMonkeyEngine, or to get some subset of the raw data to play with in MATLAB or SPSS.
Solution?
My group decided we really need a way of filtering and querying the data we have, as well as performing cross-table joins. Given this is a write-once, read-many situation, we really don't need the overhead of a real relational database; instead we just need some way to put a nicer front end on the HDF5 files. I found a few papers about this, such as one describing how to use [XQuery as the query language on HDF5 files][3], but that paper describes having to write a compiler to convert from XQuery/XPath into the native HDF5 calls, which is way beyond our needs.
Enter [PyTables][4]. It seems to do exactly what we need, providing two different ways of querying data: either through Python list comprehensions or through [in-kernel (C level) searches][5].
The proposed architecture I envision is this:
What I'm not really sure how to do is link together the Python code that will be written for querying, the Java code that serves up the HDF5 files, and the Java code that does the post-processing of the data. Obviously I will want to rewrite much of the post-processing code that is implicitly doing queries, and instead let the excellent PyTables do this much more elegantly.
Java/Python options
A simple Google search turns up a few options for [communicating between Java and Python][7], but I am so new to the topic that I'm looking for some actual expertise and criticism of the proposed architecture. It seems like the Python process should run on the same machine as the Datahose so that the large .h5 files do not have to be transferred over the network; rather, the much smaller, filtered views of the data would be transmitted to the clients. [Pyro][8] seems to be an interesting choice - does anyone have experience with that?
This is an epic question, and there are lots of considerations. Since you didn't mention any specific performance or architectural constraints, I'll try and offer the best well-rounded suggestions.
The initial plan of using PyTables as an intermediary layer between your other elements and the datafiles seems solid. However, one design constraint that wasn't mentioned is among the most critical in all data processing: which of these data processing tasks can be done in batch style, and which are more of a live stream?
This differentiation between "we know exactly our input and output and can just do the processing" (batch) and "we know our input and what needs to be available for something else to ask" (live) makes all the difference to an architectural question. Looking at your diagram, there are several relationships that imply the different processing styles.
Additionally, on your diagram you have components of different types all using the same symbols. It makes it a little bit difficult to analyze the expected performance and efficiency.
Another significant constraint is your IT infrastructure. Do you have high-speed network-attached storage available? If you do, intermediary files become a brilliant, simple, and fast way of sharing data between the elements of your infrastructure for all batch processing needs. You mentioned running your PyTables-using application on the same server that's running the Java simulation. However, that means the server will experience load for both writing and reading the data. (That is to say, the simulation environment could be affected by the needs of unrelated software when it queries the data.)
To answer your questions directly:
PyTables looks like a nice match.
There are many ways for Python and Java to communicate, but consider a language-agnostic communication method so these components can be changed later if necessary. This is as simple as finding libraries that support both Java and Python and trying them; the API you choose to implement with whatever library should be the same anyway. (XML-RPC would be fine for prototyping, as it's in the standard library; Google's Protocol Buffers or Facebook's Thrift make good production choices. But don't underestimate how great and simple just "writing things to intermediary files" can be if the data is predictable and batchable.)
To help with the design process more and flesh out your needs:
It's easy to look at a small piece of the puzzle, make some reasonable assumptions, and jump into solution evaluation. But it's even better to look at the problem holistically with a clear understanding of your constraints. May I suggest this process:
Create two diagrams of your current architecture, physical and logical.
On the physical diagram, create boxes for each physical server and diagram the physical connections between each.
Be certain to label the resources available to each server and the type and resources available to each connection.
Include physical hardware that isn't involved in your current setup if it might be useful. (If you have a SAN available, but aren't using it, include it in case the solution might want to.)
On the logical diagram, create boxes for every application that is running in your current architecture.
Include relevant libraries as boxes inside the application boxes. (This is important, because your future solution diagram currently has PyTables as a box, but it's just a library and can't do anything on its own.)
Draw on disk resources (like the HDF5 and CSV files) as cylinders.
Connect the applications with arrows to other applications and resources as necessary. Always draw the arrow from the "actor" to the "target". So if an app writes an HDF5 file, the arrow goes from the app to the file. If an app reads a CSV file, the arrow goes from the app to the file.
Every arrow must be labeled with the communication mechanism. Unlabeled arrows show a relationship, but they don't show what relationship and so they won't help you make decisions or communicate constraints.
Once you've got these diagrams done, make a few copies of them, and then right on top of them start to do data-flow doodles. With a copy of the diagram for each "end point" application that needs your original data, start at the simulation and end at the end point with a pretty much solid flowing arrow. Any time your data arrow flows across a communication/protocol arrow, make notes of how the data changes (if any).
At this point, if you and your team all agree on what's on paper, then you've explained your current architecture in a manner that should be easily communicable to anyone. (Not just helpers here on stackoverflow, but also to bosses and project managers and other purse holders.)
To start planning your solution, look at your dataflow diagrams and work your way backwards from endpoint to startpoint and create a nested list that contains every app and intermediary format on the way back to the start. Then, list requirements for every application. Be sure to feature:
What data formats or methods can this application use to communicate.
What data does it actually want. (Is this always the same or does it change on a whim depending on other requirements?)
How often does it need it.
Approximately what resources does the application need.
What does the application do now that it doesn't do that well.
What can this application do now that would help, but it isn't doing.
If you do a good job with this list, you can see how this will help define what protocols and solutions you choose. You look at the situations where the data crosses a communication line, and you compare the requirements list for both sides of the communication.
You've already described one particular situation where you have quite a bit of Java post-processing code that is doing "joins" on tables of data in CSV files; that's a "does now but doesn't do that well". So you look at the other side of that communication to see if the other side can do that thing well. At this point, the other side is the CSV file and, before that, the simulation, so no, there's nothing that can do that better in the current architecture.
So you've proposed a new Python application that uses the PyTables library to make that process better. Sounds good so far! But in your next diagram, you added a bunch of other things that talk to "PyTables". Now we've extended past the understanding of the group here at StackOverflow, because we don't know the requirements of those other applications. But if you make the requirements list like mentioned above, you'll know exactly what to consider. Maybe your Python application using PyTables to provide querying on the HDF5 files can support all of these applications. Maybe it will only support one or two of them. Maybe it will provide live querying to the post-processor, but periodically write intermediary files for the other applications. We can't tell, but with planning, you can.
Some final guidelines:
Keep things simple! The enemy here is complexity. The more complex your solution, the more difficult it is to implement and the more likely it is to fail. Use the smallest number of operations, and use the least complex operations. Sometimes just one application to handle the queries for all the other parts of your architecture is the simplest. Sometimes an application to handle "live" queries and a separate application to handle "batch requests" is better.
Keep things simple! It's a big deal! Don't write anything that can already be done for you. (This is why intermediary files can be so great, the OS handles all the difficult parts.) Also, you mention that a relational database is too much overhead, but consider that a relational database also comes with a very expressive and well-known query language, the network communication protocol that goes with it, and you don't have to develop anything to use it! Whatever solution you come up with has to be better than using the off-the-shelf solution that's going to work, for certain, very well, or it's not the best solution.
Refer to your physical layer documentation frequently so you understand the resource use of your considerations. A slow network link or putting too much on one server can both rule out otherwise good solutions.
Save those docs. Whatever you decide, the documentation you generated in the process is valuable. Wiki them or file them away so you can whip them out again when the topic comes up.
And the answer to the direct question, "How do I get Python and Java to play nice together?" is simply "use a language-agnostic communication method." The truth of the matter is that Python and Java are both unimportant to the problem set you describe. What's important is the data that's flowing through it. Anything that can easily and effectively share data is going to be just fine.
Do not make this more complex than it needs to be.
Your Java process can -- simply -- spawn a separate subprocess to run your PyTables queries. Let the operating system do what operating systems do best.
Your Java application can simply fork a process which has the necessary parameters as command-line options. Then your Java can move on to the next thing while Python runs in the background.
This has HUGE advantages in terms of concurrent performance. Your Python "backend" runs concurrently with your Java simulation "front end".
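A minimal sketch of that approach (the script name and its flags are invented for the example):

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class QueryLauncher {
    public static void main(String[] args) throws Exception {
        // Hypothetical PyTables query script with its parameters as command-line options.
        ProcessBuilder pb = new ProcessBuilder(
            "python", "run_query.py", "--h5", "study.h5", "--out", "filtered.csv");
        pb.redirectErrorStream(true); // merge stderr into stdout
        Process process = pb.start();

        // Stream the script's output; the OS runs Python concurrently with the JVM.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println("[pytables] " + line);
            }
        }
        System.out.println("query finished with exit code " + process.waitFor());
    }
}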
You could try Jython, a Python interpreter for the JVM which can import Java classes.
Jython project homepage
Unfortunately, that's all I know on the subject.
Not sure if this is good etiquette; I couldn't fit all my comments into a normal comment, and the post has had no activity for 8 months.
Just wanted to see how this was going for you? We have a very very very similar situation where I work - only the simulation is written in C and the storage format is binary files. Every time a boss wants a different summary we have to make/modify handwritten code to do summaries. Our binary files are about 10 GB in size and there is one of these for every year of the simulation, so as you can imagine, things get hairy when we want to run it with different seeds and such.
I've just discovered PyTables and had a similar idea to yours. I was hoping to change our storage format to HDF5 and then run our summary reports/queries using PyTables. Part of this involves joining tables from each year. Have you had much luck doing these types of "joins" using PyTables?
My organization has a lot of legacy ASP software on its hands.
Since our perception is that Microsoft has shown a distinct lack of support for its older products, we need to figure out what to migrate to next.
If we migrate from Classic ASP to ASP.NET, it feels like that would be a 'complete rewrite' rather than a migration. Since that is probably the case, we're thinking about turning to a free (as in beer and as in speech) platform. I particularly think our investment (in programming effort, I mean) would be better spent on a platform that will let us evolve, rather than one forcing us to dump every single line of code every 4-5 years. This is the main argument we've been using to advocate moving towards a 'free' platform.
Currently we're thinking about using a mixture of Java with a dynamic language like JRuby or Groovy.
My questions are: What would be a good migration choice? Are our fears unfounded? Would we (conceivably) need to rewrite a large portion of the codebase after 4 or 5 years anyway? What arguments do you have for moving to either .NET or to a free platform?
We recently migrated a substantial ASP project to ASP.NET, and it's definitely possible to do a migration that doesn't become a complete rewrite, preserving a substantial proportion of the original code (although we benefited from some fairly well-architected code with good separation between the business logic and the presentation layers from the outset).
We looked at two possible routes to the migration:
A ground-up rewrite as a 'proper' ASP.NET application making the most of the built-in controls and shifting all of the business and data access layers into classes etc - ie the way we approach a new ASP.NET project.
A more basic syntactic migration, taking the ASP pages and changing them over to ASP.NET without making too many up-front changes to the structure or logic. Then building on that as we continue to develop the site by making better use of ASP.NET controls, separate classes etc for new and revised functionality.
Due to schedule pressures, we adopted the second route, allowing us to make the switch sooner rather than later and then take a longer-term view on more fundamental changes as part of future developments.
On that basis, for the initial migration we were able to re-use the majority of the existing code, although this is gradually being replaced as we go forward. With an 'average' page having, say, 50-200 lines of HTML and 50-500 lines of VBScript code (as separate functions, not mixed into the HTML spaghetti-style), we found it took about an hour per page to migrate them, including changing all of the ADO data access to a new ADO.NET data access layer. Some simple pages took just a few minutes - some of the more complex pages took a whole day - but about an hour a page was the overall figure across hundreds of pages, with the time to change the data access representing a significant part of the effort.
If we were to redo the migration today, we'd use ASP.NET MVC rather than web forms as it maps better to the way that the original pages were written - although I couldn't comment on whether or not a migration to MVC would take more or less time.
This doesn't necessarily have any bearing on whether or not you want to move to a free platform or change languages - you might have other reasons for wanting to do this. (Our client was Windows based so moving to ASP.NET, development time aside, carried no additional costs.) However, I can say that it's possible to migrate a substantial ASP site to ASP.NET without a complete rewrite - and it's equally possible to migrate from earlier versions of ASP.NET to more recent versions with minimal work. Obviously the route that we chose, preserving some of the ASP feel to the code rather than opting for a complete redevelopment, might not suit everyone - but it does give you a low impact entry point into ASP.NET that then allows you to build on that groundwork while making the most of your existing codebase and skills.
You know Mono exists, right?
I think you'll find that the longevity of a language is not as important as you think it is, since the lifetime of the major versions of an app is quite likely to be less than that of most languages. And to be fair, .NET 2.0 -> 3.0 is a pretty small step up, not a "dump every single line of code" exercise - if you even wanted to move up, which isn't really required yet.
I think a strong case could be made for any given platform you might consider. I'd look at other factors like your programmers' comfort/skills/familiarity, community support, extensions, and fitness for purpose.
The choice lies in your company's ability to support whatever you choose.
Are you going to support the code yourselves? If so, what is your skill set base right now?
If you are currently enjoying ASP, then ASP.NET (code rewrite or not) seems like the first idea to try. Even if you don't realise it, by having used ASP thus far you have probably acquired a great deal of knowledge in maintaining IIS, as well as the plethora of Microsoft-based services and servers, which you can reuse or at least easily adapt to ASP.NET development, deployment, and support.
Java, Ruby, whatever you choose: the beauty/usefulness/power/usability/durability of any technology lies with the people behind its maintenance.
The questions you should be asking are best directed at your team:
Who will be developing, planning, and managing the resulting app/service?
Are you Ruby people? If not, do you want to be?
Are you Java people? Same as above.
Are you...?
...and so on.
Don't distract yourselves with costs here and there, or free versus non-free.
When the technology is unknown or new to the people behind a project, the man-hours spent on support can cost much more than most packaged software solutions out there, which also come with vendor support.
This is also true for development environments where you pay more for the IDE/server/etc., but the bottom line is that you have less of a learning curve to go through, and support at your level of expertise.
Besides, just like @annakata said, you have plenty of opportunities/projects for implementing "free as in beer" ASP.NET functionality, Mono or not.
Example: you can set up an ASP.NET service with Apache, if you like that sort of thing.
Choose the technology you want to adopt based on what you know and what you want to know, not on what you think is the latest and greatest buzz, especially if the whole reason you even started thinking about rebuilding your code base is the lack of support for ASP from Microsoft.
I don't think you can escape the pain of migrating every 4-5 years given that the industry is in a continuous flux and tools, platforms and languages evolve to maximize efficiency. There's no guarantee that Groovy/Ruby/C# won't change to adopt the next big thing.
Personally I wouldn't lose sleep trying to 100% future proof myself. If that is the goal then you might as well use assembly and even then there's no guarantee that'll stay constant either.
Just consider the following
1) What do your guys know? If they are VB guys, it makes sense to move to VB.NET.
2) What budget do you have for tools?
3) Does the platform you want to move to have the stuff that you need? I remember some time back discovering, with amazement, PowerBuilder's support (or lack thereof) for regular expressions.
4) Is there a community around the platform?