Proposing a project redesign - java

My question is of a generic nature; I'm not even sure if such questions are allowed on SO, but it's been weighing on me for the past couple of months and I couldn't find anything on this elsewhere. I should mention I'm still a junior dev, and this might be me simply going about this whole thing wrong, but here goes nothing...
I'm currently tasked with the maintenance and further development of an Android application. The work conducted on it before I took over is best described as "at least it works, but not really". The bulk of the functionality is there, accompanied by tons of little awkward bugs that happen intermittently, along with other multi-tasking and async behavioral issues and limitations.
No design was agreed upon, which I suppose indicates that no proper research went into how to structure the whole thing, and Javadoc is nearly non-existent. I believe it's only present in sources which were imported into this project, not ones written from scratch.
Put in technical terms, it's a crippled MVC featuring an anorexic model (getters, setters, no behavior) and starving views (just the necessary widgets thrown in there), with a buffet of morbidly obese controllers in the form of activities to compensate for it all. The activities are in charge of handling business logic, user interaction, data manipulation as well as network communication - from parsing the message all the way to firing new activities based on the server's response.
Every issue obviously has to be handled in the activities, and the workarounds are becoming more and more restricted and dodgy. I had another developer, who maintains the iOS version, weigh in; he was quite shocked at the state of things, while the senior developer pretty much said "can't be helped, just do what you can".
So, to get to the point of this post: I would like to transfer some meat and bones to the model, making it responsible for the server communication that populates its data, and giving it behavior of its own, both for providing data to the controller and for maintaining itself. This change would almost entirely decouple the activities from having to handle server communication directly, letting them focus on the business logic and user interaction for now.
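Roughly what I have in mind, as a sketch only - every class and method name below is made up, and threading and error handling are simplified:

// Sketch: ServerGateway stands in for whatever the existing network code becomes.
interface ModelListener {
    void onUpdated();
    void onError(Exception cause);
}

interface ServerGateway {
    void fetchAccount(AccountCallback callback);

    interface AccountCallback {
        void onResponse(String displayName, int balance);
        void onFailure(Exception cause);
    }
}

// The model owns its data and knows how to populate itself from the server.
class AccountModel {
    private final ServerGateway gateway;
    private String displayName;
    private int balance;

    AccountModel(ServerGateway gateway) {
        this.gateway = gateway;
    }

    // An activity only calls refresh() and repaints its widgets in onUpdated();
    // it never parses server messages itself.
    void refresh(final ModelListener listener) {
        gateway.fetchAccount(new ServerGateway.AccountCallback() {
            @Override public void onResponse(String name, int newBalance) {
                displayName = name;
                balance = newBalance;
                listener.onUpdated();
            }

            @Override public void onFailure(Exception cause) {
                listener.onError(cause);
            }
        });
    }

    String getDisplayName() { return displayName; }
    int getBalance() { return balance; }
}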
What do people with experience of something like this end up doing, and how does it turn out?

Related

Logging Features/Functions Used In an Application

Here's the scenario: I am trying to keep myself and fellow employees from wasting time on programming fixes for features that are never used by users. I work on an application that has been around for 10 years and has a lot of features that may never be used by customers, even though at some point they were. I find myself going down rabbit holes fixing problems that were never reported because QA and customers never encountered them. I don't want to leave in code that has significant problems, but I also can't just remove it, because parts of that class may be used elsewhere.
I proposed some sort of logging architecture that keeps track of every function call or object creation within the application while a user uses it. This way we can ask customers for the log file stored in a folder we create. Once we obtain all the logs from all our users after a certain amount of time (1-2 months or so), we can analyze them and see exactly which features and functions are being used. I believe this data will be super useful in discovering how our customers use our application. It can also point to features in our application that simply never get used, and which features are used the most.
What would be a good solution/architecture or design pattern to implement this logging feature in an existing large code base? Or is this simply a bad idea altogether?
My initial thought would be something like the snippet below, but with an extremely large application this would take a while. Plus, it doesn't look great to have something like this in every single method, but perhaps it's the only way?
private void createData()
{
    write.toLog("ClassName.createData");
    // the magic of the createData function
}
Aspect-Oriented Programming (AOP) is one way to implement the idea.
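For example, with AspectJ a single aspect can log every call without touching the existing methods. A rough sketch - the package name in the pointcut and the logger name are placeholders, and you still need compile-time or load-time weaving set up:

import java.util.logging.Logger;

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class UsageLoggingAspect {

    private static final Logger USAGE = Logger.getLogger("usage");

    // Matches every method execution under the (placeholder) application package.
    @Before("execution(* com.example.myapp..*.*(..))")
    public void logCall(JoinPoint joinPoint) {
        // Produces entries like "ClassName.createData(..)"
        USAGE.fine(joinPoint.getSignature().toShortString());
    }
}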
You are not the first one to have such an idea. Analytics are common on web pages and in web apps. This is probably driven mainly by business considerations.
There are also related ideas in the DevOps community. I think there is a good chance that an implementation of what you want already exists. This podcast episode is about the topic. It is not in English, but the show notes on the linked page contain some links.
There might be privacy related questions, especially if your software is not used in only a single company.
I think it is a bad idea to log every method call for a large system. Some methods will be called a lot of times and the log files will be overwhelmed with method call log statements.
Every time I have seen every method call logged in a large system, that logging was at DEBUG level or finer and could be selectively turned on if needed (and rarely ever was).
It's better to be more strategic with this kind of logging and work at a coarser level, such as logging high-level requests, user-initiated activities, job initiation, and so forth.
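For instance, with SLF4J the coarse, user-initiated actions can be logged at INFO while per-step tracing stays behind a DEBUG check that is normally off in production (the class and messages here are invented):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ReportService {

    private static final Logger LOG = LoggerFactory.getLogger(ReportService.class);

    public void exportMonthlyReport(String customerId) {
        // One strategic line per user-initiated activity.
        LOG.info("Monthly report export requested for customer {}", customerId);

        if (LOG.isDebugEnabled()) {
            // Fine-grained tracing, selectively enabled only when needed.
            LOG.debug("Building report model for customer {}", customerId);
        }
        // ... the actual export work ...
    }
}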

Real life experience with the Axon Framework [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
As part of researching CQRS for use with a project, I ran across the Axon Framework, and I was wondering if anyone has any real-life experience with it. Just to be clear, I'm asking about the framework, not CQRS as an architectural pattern.
My project already uses Spring and Spring Integration, which fits nicely with Axon's own requirements, but before I dedicate a lot of time to it, I would like to know if anyone has some first-hand experience. In particular I'm interested in possible pitfalls that are not immediately apparent from the documentation.
"The framework relies heavily on eventsourcing, which means that all state changes are written to the data store as events."
This is completely untrue; it does not rely heavily on event sourcing. One of the implementations for storing aggregates in this framework uses event sourcing, but you can also easily use the classes provided to work with a standard relational model.
It is just better with event-sourcing.
"So you have a historical reference of all your data. This is nice but makes changing your domain after you've gone in production a very daunting proposition especially if you sold the customer on the system's 'strong auditability'."
I don't think it is a lot easier with a standard relational model that only stores the current state.
"The framework encourages denormalizing your data, to the point that some have suggested having a table per view in the application. This makes your application extremely difficult to maintain, especially when the original developers are gone."
This is unrelated to the framework but to the architectural pattern in use (CQRS).
And sorry to mention it, but having one denormalizer per view is a good idea, as it stays a simple object.
So maintenance is easy, because SQL queries and insertions are also easy.
So this argument is not very strong.
How about a view which uses a 1000-table model with inner joins everywhere and complex SQL queries?
Again, CQRS helps because, basically, the view data is just a SELECT * from the table which corresponds to the view.
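To make that concrete, the read side of such a view typically boils down to one trivial query against a denormalized table (a sketch; the table and column names are invented):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch: order_summary_view is an invented read-model table that holds
// exactly the columns one screen needs, so no joins are required.
public class OrderSummaryQuery {

    private final Connection connection;

    public OrderSummaryQuery(Connection connection) {
        this.connection = connection;
    }

    public void printSummaries(String customerId) throws SQLException {
        String sql = "SELECT * FROM order_summary_view WHERE customer_id = ?";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, customerId);
            try (ResultSet rows = statement.executeQuery()) {
                while (rows.next()) {
                    System.out.println(rows.getString("order_id") + " " + rows.getString("status"));
                }
            }
        }
    }
}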
"if somehow you made a mistake in one of the eventhandlers, your only option is to 'replay' the eventlog, which depending on the size of your data can take a very long time. The tooling for this however is non-existent."
I agree that there is currently a lack of tooling to replay events and that this can take a long time. However, it is theoretically possible to replay only a portion of the events and not the entire content of the event store.
"Replaying can have side effects, so developers become scared of doing it."
"Replaying events has side effects": that's untrue. For me, a side effect means modifying the state of the system. In an event-sourced CQRS application, the state is stored in the event store. Replaying the events does not modify the event store.
You can have side effects on the query side of the model, yes. But it doesn't matter if you have made a mistake, because you are still able to correct it and replay the events once again.
"it's extremely easy to have developers mess up using this framework. if they don't store changes to domain objects in events, next time you replay your events you are in for a surprise."
Well, if you misuse and misunderstand the architecture, the concepts, etc., then OK, I agree with you. But perhaps the problem is not the framework here.
"Should you store delta's? absolute values? if you don't keep tabs on your developers you are bound to end up with both and you will be f***ed."
I would say that about every system; it's not directly related to the framework itself. It's like saying, "Java is crap because you can mess up everything if someone codes a bad implementation of the hashCode and equals methods."
And for the last part of your comment, I have already seen Hello World-style samples with the Spring framework.
Of course it is completely useless in a simple example.
Be careful in your comment to distinguish between the concepts (CQRS + event sourcing) and the framework. Please make that distinction.
Since you have stated that you want to use CQRS for your project (and I assume that the JVM is your target platform), I think Axon Framework is an excellent choice.
I have built a fairly complex trading platform on it (no, the trading sample is not complex) and I have not seen any obvious flaws of the framework.
Since I use EventSourcing, the test fixtures made it very easy to write BDD style "given, when, then" tests. This lets you treat an aggregate as a black box and concentrate on checking that the correct set of events come out when you put in a certain command.
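A rough sketch of such a test, assuming the project already defines an OrderAggregate with the corresponding commands and events (all names here are invented, and the fixture class has moved between Axon versions - recent versions use AggregateTestFixture, older ones obtained a fixture from a Fixtures factory class):

import org.axonframework.test.aggregate.AggregateTestFixture;
import org.junit.jupiter.api.Test;

class OrderAggregateTest {

    private final AggregateTestFixture<OrderAggregate> fixture =
            new AggregateTestFixture<>(OrderAggregate.class);

    @Test
    void addingAnItemEmitsAnItemAddedEvent() {
        // Treat the aggregate as a black box: given past events, when a
        // command arrives, expect exactly these new events.
        fixture.given(new OrderCreatedEvent("order-1"))
               .when(new AddItemCommand("order-1", "sku-42"))
               .expectEvents(new ItemAddedEvent("order-1", "sku-42"));
    }
}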
About pitfalls: before jumping in, make sure
That you have the concepts of CQRS figured out.
Make a list (paper, whiteboard, whatever) of all your aggregates, command handlers, event handlers, sagas, commands and events. This is the hard part of building your system, figuring out what it should do and how. After this, the reference manual should show you how to wire it all together with Axon.
Some non-Axon-specific points:
Being able to rebuild the view store from events is a concept of EventSourcing, and not something that is exclusive to Axon, but I found it pretty easy to create a service that will send me all events from an aggregate type, aggregate id or a certain event type.
Being able to build a new reporting component one year after the project is launched and instantly get reports on data from the time of the project launch and onwards is awesome.
I've been using AxonFramework for more than one year on a complex project developed for a big bank.
The requirements were demanding, customer's expectations were high, and release times narrow.
I chose AxonFramework because, at the moment of the project kick-off, it was the most complete and best documented implementation of CQRS available in Java: well designed, easy to integrate, to test and to extend.
After more than one year I think that these considerations are still valid and current.
Another consideration guided my choice: I wanted the commitment to such a difficult project to become a training opportunity for me and other members of the team.
We started to develop with AxonFramework version 1.0 and moved to version 1.4 as newer versions were released.
Our team experience with CQRS and the implementation provided by the AxonFramework was absolutely positive.
It provided us with a consistent and uniform way to develop each feature, which guided us and made us feel at ease.
Without it some features of the application would have been much more complicated to develop.
I am referring mainly to the various long-running processes that need to be handled and to the related compensation logic, but also to the many pieces of business logic that were necessary here and there, which fitted nicely and stayed decoupled in the event-driven architecture promoted by CQRS.
Our choice was to be conservative in the write model, so we preferred a JPA based persistence instead of the event sourced one.
The query model is made up of views. We have tried to make sure that each view contains all the required data from a single page using intermediate views when necessary.
Anyhow, we developed the write model as if we were applying event sourcing, so we took care to modify the state of aggregates exclusively through events. When the customer asked for a cloning function for a very complex aggregate, it was just a matter of replaying the source events (with UUIDs translated) onto a brand new instance - the downside in this case was the event upcasting (but this functionality was greatly improved in the imminent 2.0 version).
As in any project, during development we found a lot of bugs - mainly in our code, but also in components supposed to be mature and stable, like the application server, the IoC container, the cache, the workflow engine and some of the other libraries easily found in any large J2EE application.
Like any other human product, AxonFramework was not immune to bugs either, but surprisingly for such a young and niche project, they have been few, not critical, and quickly resolved in new releases.
The kind and immediate support provided by the author on the mailing list is another invaluable feature and helped me a lot when I was in trouble.
The application was released in production a year ago and is currently maintained and under active development of new features.
The customer is satisfied and asks for more.
When to use AxonFramework is more a matter of when to use CQRS. For an answer, it's worth going back to the official documentation: http://www.axonframework.org/docs/1.4/introduction.html#d4e51
In our case it was definitely worth it.
The OP specifically asks about the pitfalls relating to the Axon Framework rather than CQRS. This makes the question difficult to answer, as Axon started as a fairly faithful implementation of the concepts in the famous book by Eric Evans.
The main advantage is that it does exactly what it says on the tin: it handles the hard parts of a CQRS based design for you: aggregates, sagas, event sourcing, command handlers, event handlers, BASE consistency etc. When you follow the best practices, you end up with a highly responsive and horizontally scalable application. If you use it with event sourcing, your data is completely auditable, and at least in theory, you can determine the state your application had at any given point in time. Tooling to do this is not provided; you will have to roll your own.
The main developer of the framework is very approachable and extremely knowledgeable on the subject of high performance and scalable computing in java. He tends to answer every question on the mailing list within a few hours. This is both an advantage and the major pitfall: at this time (early 2014), the Axon Framework depends heavily on one person. The rest of the pitfalls I would like to mention are probably more the result of event sourcing than of CQRS or Axon (as of 2018 the framework is supported by the company Axoniq)
Design your data model very carefully upfront. Though it is easy to add to, making fundamental changes to your data model can be very difficult. If you make a fundamental mistake in the data model, your application may not perform well, or even fail to work at all. For example, if you choose a tree-shaped data model, with one long-lived aggregate root at the top, this aggregate may grow very large as it accrues more and more events over time, and it may take a long time to load and store. I don't know what will happen if this goes on until an instance of the aggregate no longer fits in RAM, but I imagine it could be bad. Don't do it that way.
Another pitfall (event sourcing related) is that, after a number of revisions, it can become increasingly difficult to reason about the state of an aggregate, as you sometimes have to keep in mind not only what the code does today, but also what it did in the past. This definitely makes replaying (a portion of) the event store to rebuild a view table a non trivial task.
Fixing data errors can be more difficult than with a 'traditional' design. Rather than a simple SQL statement, you will often need to issue a command to change the state of your application. If the error in your data was caused by a faulty event handler, you can usually just fix the bug, clear the snapshots and let the events for the aggregate be replayed. If your bug caused spurious events to be applied, it can be much more trouble to fix. The faulty events will stay in the event store, and you may have to apply some new ones to restore your data to the correct state, or change the code to ignore or fix their behaviour.
While the framework itself is written decently enough, using it in a real-world project has been nothing short of a nightmare, and the choice of this framework was, IMO, a major contributing factor to this project's failure.
The framework relies heavily on eventsourcing, which means that all state changes are written to the data store as events. So you have a historical reference of all your data. This is nice but makes changing your domain after you've gone in production a very daunting proposition especially if you sold the customer on the system's "strong auditability"
You cannot have ops guys make ad-hoc changes to the database
The framework encourages denormalizing your data, to the point that some have suggested having a table per view in the application. This makes your application extremely difficult to maintain, especially when the original developers are gone
if somehow you made a mistake in one of the eventhandlers, your only option is to "replay" the eventlog, which depending on the size of your data can take a very long time. The tooling for this however is non-existent. Replaying can have side effects, so developers become scared of doing it
it's extremely easy to have developers mess up using this framework. if they don't store changes to domain objects in events, next time you replay your events you are in for a surprise. Should you store delta's ? absolute values ? if you don't keep tabs on your developers you are bound to end up with both and you will be f***ed
There is practically no adoption of this framework, so googling for answers will not do you any good
Even though the framework does not yet support distribution, it's written with it in mind, and the APIs are a pain to work with because of it. Firing off an event is async by default, and if you want to check whether an exception was raised executing the command, say a duplicate username exception, you need to pass a listener into your command handler, which is a future; then you wait for the future's result to come in, handle any checked exceptions, InterruptedException etc., and then you can grab the exception that was thrown from the future. Of course, which exceptions a command can raise is not apparent from the API, defeating the purpose of checked exceptions.
Check out some of the example apps. I somehow need a unit of work listener to create an addressbook application? My goodness...
I am currently with a team working on an online casino platform, launching our brand Casumo this summer. The domain and platform are built using Axon Framework, and so far it has served us solidly.
A lot of time has been saved by not having to build all the infrastructure needed for command handling, event routing, event sourcing, snapshotting etc., and the APIs are really nice to work with. The one bug we found in the framework so far was fixed in .. release 12 hours later, and Allard is always quick to take suggestions on new features and to discuss ways to leverage the framework to fulfill your needs.

Is it wise to develop a prototype GUI before designing other parts of the system?

Is it wise to develop a prototype GUI before designing other parts of the system?
I am using Java for this small project. It will be a program with a GUI and a database connection. Say the database has tables A and B; the user can choose which table to interact with. The program then displays the contents of, say, table A in the GUI, and allows the user to change the content and submit the changes, or delete, or insert.
I think the GUI should be developed first, before any back-end development starts. There are a couple of reasons to do this:
You gain clarity on how model objects should interact.
Usability poses lots of restrictions on the way you want to pull data. You will probably want to develop and architect only after you're 100% sure what constraints there are.
From a business point of view, managers like to have a dumb, functional UI before any development starts. Many times, the feedback leads to major changes in back-end assumptions, which is a lot less painful than getting a change request after the back-end development is over.
My personal experience is that simultaneous development of the GUI and back-end is a bit messy. Plus, the GUI provides a solid expectation of behavior from the back-end. Moreover, this approach makes sure all the developers, your client and your manager are on the same page.
I agree with Joel Spolsky that it is a great idea to write a functional spec before writing code. Part of that spec should include a collection of screen mockups. #O.D. is right, Balsamiq is a great tool. It has saved me a lot of time in the past.
Once you have a functional spec in place that the business users are happy with, you will have a better idea of how to design your system to meet the requirements, e.g. whether high performance is a requirement, domain model vs. simple CRUD, etc.
Then you should start by taking a single use case and building a vertical slice of your application. Build a GUI, service layer, persistence layer, database schema in one iteration. This will hopefully point out any problems with your design and give you the chance to modify it before you start building out the horizontal functionality.
I'd say yes and no.
No, because you should design your application to be modular enough that your logic and data do not depend on the UI design.
Yes because it is always smart to design everything before you actually start implementing it.
So what I mean is that you should make a concept, but not let your UI concept tie your hands when you implement your logic. That way, if your managers or clients don't like your conceptual UI, you can always change it without actually changing your application logic.
Well, showing your GUI before starting to program is a very good idea, especially since you enable the end user (customer) to check whether the UI is up to their expectations, which can save you lots of time.
In order to do that you don't necessarily need to develop a "real" prototype; you can use programs which enable you to design the UI of your app quickly, including a minimal workflow simulation instead of full functionality.
I had a very good experience with Balsamiq and can really recommend it.
Writing a spec before your code is always a good idea, because it makes you think. But most specs I have seen are not that good. And if the spec is too technical, users will in the end sign off your spec without really understanding what they are going to get.
I have seen best results when either presenting the User Manual to the client, or by discussing mockups of the system one scenario at a time.
Note that half-baked mockups won't do the trick. You need your mockups to be fully populated with relevant data (Ever tried to discuss some screens with accounting while the numbers on the screen don't match? There's no way at all you could explain to them these are only dummy numbers...)
And the caveat of using mockups is that users will more often than not believe the app is "almost finished", whatever you do or say. It must be some subconscious thing, I'm not sure. But to avoid that, most specialized tools have either only a "black & white" look and feel or multiple skins you can switch to and from.
There is a pretty complete list of mockup tools here. Many of them are free:
http://c2.com/cgi/wiki?GuiPrototypingTools
My own tool is pretty popular: http://MockupScreens.com. I created it a long time ago exactly because of my own frustration with the above-mentioned problems.

Naked Objects. Good or Bad [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I have recently been exposed to naked objects. It looks like a pretty decent framework. However, I do not see it in widespread use like, say, Spring. So why is this framework not getting any mainstream application credit? What are its shortcomings as you see them?
From my experience using NOF 3.0.3...
The good:
Automagically generates a DnD (drag-and-drop) UI for your domain objects, like what db4o does for persistence.
This is what MVC was always meant to be, according to the MVC pattern creator.
The framework only asks that your domain objects (POJOs) be subclassed from AbstractDomainObject; that's all the minimum wiring.
The framework favors convention over configuration: lots of annotations, no freaking XML config files.
Works great for prototyping along with db4o for persistence.
Out of the box functionality for Hibernate.
In my case, I required like 30 mins from Download to Hello world app. (IntelliJ IDEA IDE)
Deployment as JNLP, standalone, Web (NOX embedded Jetty or Scimpi flavor) and Eclipse RCP.
The NOF team is ALWAYS there for you when you ask for help in the forums.
The Naked Object Pattern is an awesome idea, do yourself a favor and take your time to grok it.
There's a lot of usability flaming going on around the drag-and-drop GUI, but if your prospective end users simply can't work with the DnD UI then you are in deep trouble anyway.
The bad:
None that I can think of.
The kinda ugly:
No Swing components allowed, so say goodbye to JGoodies and all your favorite Swing component sets. The UI components are custom made; to give you an idea, they look like early-90s VB controls. But there's an SWT port in the works.
The multiline line field for long strings has some issues. (NOF 3.0.3)
DnD UI for images is kinda buggy.
The validation code for getters and setters only fires if the domain object is modified from the UI. (This is probably wrong due to my n00bness; let's hope a NOF committer corrects me.)
If an object is modified from a non-UI thread, let's say a background worker, that object will not update its view on screen. This invalidates a use case such as representing a mail queue in real time on the DnD auto-generated UI. (Again)
Veikko
I've been working with the naked objects approach for over a year now and I haven't even begun to scratch the surface of the possibilities it provides for your system's architecture. To properly utilize it, though, it requires that you make a paradigm shift and seek out full OO solutions, refraining from resorting to functional duct tape, because the paradigm seems to work only when you create a design that allows for high-level development.
Having said that, I absolutely love how Django has implemented naked objects within its Django models. Most of the things I love about the framework have been, I have come to believe, a direct result of its models, and there are some wows off the top of my head I'd like to share about the architecture:
Model fields, which map to table columns, are behaviorally complete objects: they know how they're represented in both the application and database domains, how they're converted between the two, and how the information they hold is validated and displayed visually to the user for input. All of this utilized with a single line of code in your model. Wow!
Managers are attached to models and provide CRUD and any generic operations on collections, such as reusable queries (give me the last five blog posts, the most occurring tags, etc.), mass delete/update operations, and business logic performed on instances. Wow!
Now consider you have a model that represents a user. Sometimes you'd only like a partial view of all the information a user model holds (when resetting a user's password you may only need the user's email and their secret question). They've provided a Forms API that displays and manages inputs for exactly those parts of the model data, and allows for any customization of the what/how in handling user input. Wow!
The end result is that your models are only used to describe what information you use to describe a particular domain; managers perform all the operations on models; forms are used for creating views and for handling user inputs; controllers (views) are only there for handling HTTP verbs and if they work with models it's solely through managers and forms; views (templates) are there for the presentation (the part that can't be automatically generated). This, imho, is a very clean architecture. Different managers can be used and reused across different models, different forms can be created for models, different views can use different managers. These degrees of separation allow you to quickly design your application.
You create an ecosystem of intelligent objects and get a whole application from the way they're interconnected. With the premise that they're loosely coupled (lots of possibilities for letting them communicate in different ways) and can be easily modified and extended (a few lines for that particular requirement), following the paradigm you really do get an architecture where you write a component once and then reuse it throughout your other projects. It's what MVC should have always been, yet I've often had to write something from scratch even though I did the same thing a few projects ago.
It has been successfully used here in Ireland.
I think the reasons why it hasn't been more popular are:
You need a lot of confidence in the toolkits you are using
It makes the GUI a risk factor instead of a no-brainer (both technically and in usability testing)
It's not applicable to the web (as far as I know), which is where most of the focus is at present...
I've only just seen this. A couple of minor corrections, otherwise most of the comments are very fair.
1) 'The framework only asks your domain objects (POJOs) to be subclassed from AbstractDomainObject; that's all the minimum wiring.'
Naked Objects does not require the domain objects to be subclassed from AbstractDomainObject, although that is typically the most convenient thing to do.
If you don't want to inherit, all you need to do is provide a property of type IDomainObjectContainer, and the framework will then inject a container into your objects when they are created or retrieved. The container has methods for Resolve(), ObjectChanged() and NewTransientInstance(), which are the three minimal points of contact with the framework that you must use, so that the framework stays in sync with your domain objects.
2) 'Works great for prototyping along with db4o for persistence'. We're quite keen on the idea of working with db4o, but I'm not aware of anyone who has made Naked Objects and db4o play together. If anyone has done this, I'd like to hear more about it.
3) 'The general model of citizen programmer as espoused in the smalltalk and naked object communities ...'. We have never espoused that idea, and I don't agree with it. Naked Objects is NOT about encouraging users to program. I believe firmly in the role of the professional developer - Naked Objects just helps them write better software, and more productively.
Richard
I have played with it last year or so, and concluded it is very easy to work with.
The strength of Naked Objects is that you get a GUI structured according to your data model for free. The disadvantage is that a typical user does not think about his process as a collection of records.
My conclusion was that Naked Objects is really great for an internal application which conceptually deals with records, like an inventory application or bill processing application.
If you need anything different, adapting the framework to your wishes may just be a lot more work than using a framework written to support the kind of application you want.
By the way, there is a Web rendering option; see the demo at Naked Objects Demo.
Gareth makes some excellent points.
There are other issues, such as the fact that it's hard to control the look and feel, and that it is counter-intuitive to people who have become used to the window model. There is also something of a modelling issue, in that not all application domains lend themselves well to direct object representation.
The general model of the 'citizen programmer', as espoused in the Smalltalk and naked objects communities, also comes to bear as a questionable idea. Most users don't seem to be hugely bothered with changing the functionality themselves, so thinking in objects is not that useful.
Probably the reason it hasn't gotten more attention is that the J2EE world has become so used to piling on so many layers onto an application, that naked objects comes across as naive.
Where are our services? You mean that any naked object gives me immediate access to the database? What if we needed to expose the application with RMI calls?
Plus there isn't as much to market, because it puts the burden of developing a successful application squarely on the application developers not the framework developers :)
I guess NakedObject definitely has its relevance, and it's more than time that the developer community refocuses on what is really paying them: the business.
Instead, we mostly spend our time on infrastructure, protocols and all that technical crap. I have seen such misconstructed applications, and I even built some myself, following the mainstream teaching that layering a system is always a good thing to do. The worst thing is that if you ask some developers what kind of business the company they work for does, you'll find at least some who have worked for the company for years without gaining a deeper understanding of the business.
However, I don't believe that NakedObject will attract the vast majority of developers (even those who are inspired by DomainDrivenDevelopment), simply because people love to construct UIs, and taking that job away from them, directing their work towards business needs, is simply not what they want: we are all VB jerks.
NakedObjects (NO) is good for rapid prototyping. You can concentrate on the domain model while not paying attention to the GUI, DB and other parts of your solution. For production it requires a lot of improvements (bug fixing, data mapping, GUI, etc.) in the NakedObjects framework itself.
So if you need to get some kind of "proof of concept" for your solution, you may use NO. But for production be ready to invest resources into development of NO framework.
BTW, we are currently working on creating a DnD viewer based on GWT for NO 4.0.
Widespread use of a technology has no strong correlation with technological quality.
The nakedobject system is difficult to use in combination with type objects:
if I'm selling different kinds of products and need different data for different products, it is difficult to constrain the data on the product type.
NO lost momentum when they switched licences. (to GPL+Commercial, not the recent move to Apache)
Did you take a look at jmatter?
[edit]
And another one: it makes it obvious to non-programmers if you can deliver. Spring is very much in the technological domain, NO means a developer has to talk to users. Large organisations don't do that.

How do you begin designing a large system? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
It's been mentioned to me that I'll be the sole developer behind a large new system. Among other things I'll be designing a UI and database schema.
I'm sure I'll receive some guidance, but I'd like to be able to knock their socks off. What can I do in the meantime to prepare, and what will I need to keep in mind when I sit down at my computer with the spec?
A few things to keep in mind: I'm a college student at my first real programming job. I'll be using Java. We already have SCM set up with automated testing, etc...so tools are not an issue.
Do you know much about OOP? If so, look into Spring and Hibernate to keep your implementation clean and orthogonal. If you get that, you should find TDD a good way to keep your design compact and lean, especially since you have "automated testing" up and running.
UPDATE:
Looking at the first slew of answers, I couldn't disagree more. Particularly in the Java space, you should find plenty of mentors/resources on working out your application with Objects, not a database-centric approach. Database design is typically the first step for Microsoft folks (which I do daily, but am in a recovery program, er, Alt.Net). If you keep the focus on what you need to deliver to a customer and let your ORM figure out how to persist your objects, your design should be better.
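As a small illustration of letting the ORM do the persistence work, a JPA/Hibernate entity is little more than the object plus a few annotations (the entity and its fields are invented for the example):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Invented example: the design starts from the object, and the ORM derives
// the table and columns from these annotations.
@Entity
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    private String name;
    private String email;

    protected Customer() {
        // no-arg constructor required by JPA
    }

    public Customer(String name, String email) {
        this.name = name;
        this.email = email;
    }

    public Long getId() { return id; }
    public String getName() { return name; }
    public String getEmail() { return email; }
}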
This sounds very much like my first job. Straight out of university, I was asked to design the database and business logic layer, while other people would take care of the UI. Meanwhile the boss was looking over my shoulder, unwilling to let go of what used to be his baby and was now mine, and poking his finger in it. Three years later, developers were fleeing the company and we were still X months away from actually selling anything.
The big mistake was in being too ambitious. If this is your first job, you will make mistakes and you will need to change how things work long after you've written them. We had all sorts of features that made the system more complicated than it needed to be, both on the database level and in the API that it presented to other developers. In the end, the whole thing was just far too complicated to support all at once and just died.
So my advice:
If you're not sure about taking on such a big job single-handed, don't. Tell your employers, and get them to find or hire somebody for you to work with who can help you out. If people need to be added to the project, then it should be done near the start rather than after stuff starts going wrong.
Think very carefully about what the product is for, and to boil it down to the simplest set of requirements you can think of. If the people giving you the spec aren't technical, try to see past what they've written to what will actually work and make money. Talk to customers and salespeople, and understand the market.
There's no shame in admitting you're wrong. If it turns out that the entire system needs to be rewritten, because you made some mistake in your first version, then it's better to admit this as soon as possible so you can get to it. Correspondingly, don't try to make an architecture that can anticipate every possible contingency in your first version, because you don't know what every contingency is and will just get it wrong. Write once with an eye to throwing away and starting again - you may not have to, the first version may be fine, but admit it if you do.
I also disagree about starting with the database. The DB is simply an artifact of how your business objects are persisted. I don't know of an equivalent in Java, but .Net has stellar tools such as SubSonic that allow your DB design to stay fluid as you iterate through your business objects design. I'd say first and foremost (even before deciding on what technologies to introduce) focus on the process and identify your nouns and verbs ... then build out from those abstractions. Hey, it really does work in the "real world", just like OOP 101 taught you!
Before you start coding, plan out your database schema - everything else will flow from that. Getting the database reasonably correct early on will save you time and headaches later.
The main thing is being able to abstract the complexity of the system so that you don't get bogged down by it as soon as you start off.
First read the spec like a story (skimming through it). Don't stop at every requirement to analyze it right there and then. This will allow you to get an overall picture of the system without too many details. At this point you would start identifying the major functional components of the system. Start putting these down (use a mindmap tool if you like).
Then take each component and start exploding it (and tying each detail with requirements in the spec document). Do this for all components, till you have covered all requirements.
Now, you should start looking at relationships between the components, and whether there are repetitions of features or functions across the various components (which you can then pull out to create utility components, or such). Around now, you would have a good detailed map of your requirements in your mind.
NOW, you should think of designing the database, ER diagrams, Class Design, DFDs, deployment, etc.
The problem with doing the last step first is that you can get bogged down in the complexity of your system without really gaining an overall understanding in the first place.
I do it the other way around. I find that doing it database-schema-first gets the system stuck in a data-driven-design that is difficult to abstract from persistence. We try to do domain model designs first and then base the database schema on those.
And then there's the infrastructure design: the team should settle on conventions on how to structure the program first and foremost. And then we work together to agree first on a design for the common functionality of the system (e.g., things everyone needs like persistence, logging, etc.). This becomes the framework of the system.
We all work on that together first before we split the rest of the functionalities amongst ourselves.
It has been my experience that Java applications (.NET also) that consider the database last are highly likely to perform poorly when placed into a corporate environment. You need to really think about your audience. You didn't say if it was a web app or not. Either way the infrastructure that you are implementing on is important when considering how you handle your data.
No matter what methodology you consider, how you get and save your data, and its impact on performance, should be right up there as one of your #1 priorities.
I'd suggest thinking about how this application will be used. How will future users work with it? I'm sure you know at least a few things about what this application needs to handle, but my first advice is "think of the user and what he or she needs".
Draw it up on plain paper, thinking about where to section off the code. Remember not to mix logic with GUI code (a common error). This way you will be set to extend your application's reach in the future to servlets and/or applets or whatever platform comes along. Section it into layers so that you can respond to large changes faster without rebuilding everything. Layers should not see any layers other than their closest neighbouring layers.
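A tiny sketch of what "layers only see their closest neighbours" can look like in Java - all names are invented and the bodies are stubbed out:

import java.util.List;

// The GUI layer talks only to this service interface...
interface OrderService {
    List<String> findOpenOrderIds(String customerId);
}

// ...the service layer talks only to this repository interface...
interface OrderRepository {
    List<String> findOrderIdsByStatus(String customerId, String status);
}

// ...and only the repository implementation knows about the database.
class DefaultOrderService implements OrderService {

    private final OrderRepository repository;

    DefaultOrderService(OrderRepository repository) {
        this.repository = repository;
    }

    @Override
    public List<String> findOpenOrderIds(String customerId) {
        // The business rule lives here, not in the GUI and not in the SQL.
        return repository.findOrderIdsByStatus(customerId, "OPEN");
    }
}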
Begin with the true core functionality. All that time-consuming fluff (which will make your project 4 weeks late) won't matter much to the vast majority of users. It can be added later, once you are sure you can deliver on time.
Btw, even though this has nothing to do with design, I'd just like to say that you won't deliver on time. Make a realistic estimate of time consumption and then double it :-) I assume here that you will not be alone on this project and that people will come and go as the project progresses. You may need to train people midway through the project, people go on holiday / need surgery, etc.
Split the big system to smaller pieces.
And don't think that it's so complex, because it usually isn't. Thinking too complex just ruins your thoughts and eventually the design. At some point you just realize that you could do the same thing more easily, and then you redesign it.
At least, this has been my major mistake in designing.
Keep it simple!
I found very insightful ideas about starting a new large project, based on
common good practices
Test Driven Development
and pragmatic approach
in the book Growing Object-Oriented Software, Guided by Tests.
It is still under development, but the first 3 chapters may be what you are looking for and are IMHO worth reading.
