Learning Java and logic using a debugger. Did I cheat? [closed] - java

After a break from coding in general, my way of thinking logically faded (as if it was there to begin with...). I'm no master programmer. Intermediate at best. I decided to see if I could write an algorithm to print out the Fibonacci sequence in Java. I got really frustrated because it was something so simple, and used the debugger to see what was going on with my variables. I solved it in less than a minute with the help of the debugger.
Is this cheating?
When I read code, either from a book or someone else's, I now find that it takes me a little more time to understand. If the algorithm is complex (to me), I end up writing notes about what's going on in the loop. A primitive debugger, if you will.
When you other programmers read code, do you also need to write things down to track what the code is doing? Or are you a genius and just retain it?
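For context, the kind of loop I was fumbling with looked roughly like this (a sketch from memory; the names are illustrative):

    // Minimal iterative Fibonacci printer - the three variables below are
    // exactly what I ended up watching in the debugger.
    public class Fibonacci {
        public static void main(String[] args) {
            long previous = 0;
            long current = 1;
            for (int i = 0; i < 10; i++) {      // print the first 10 numbers
                System.out.println(previous);
                long next = previous + current;
                previous = current;
                current = next;
            }
        }
    }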

No, it's not cheating.
I think that if your sense of programming "logic" has faded a bit, then the absolute, best, 100% way to refresh or even learn this is to watch code in the debugger.
This is such a cool idea that if I ever teach a beginning programming class again, I'll have a computer right there running code in the debugger so that the students can watch what happens.
In answer to your second question, if I really had to worry about what the code was doing, then I'd start writing things down. However, if I'm looking at code by navigating around in Eclipse, then I rarely have to write things down because the history of where I just was is readily available. However, if that history were not written down by the computer, I would absolutely be furiously scribbling on a pad as I navigated around the code.

This was something I needed time to realize:
Understanding the code written by someone else is not voodoo magic, it's just practice.
It's not a matter of intelligence or logic; it's truly a skill you develop while actually having to understand others' code.
I really began to understand this and build this skill when I started working, as I needed to make changes in others' code.
Don't be afraid: the more code you read, the easier it'll be.

It's not "cheating" to use the debugger to find bugs or to observe your program behavior, but you have to be careful not to let it turn into a crutch. Too much reliance on the debugger can also lead to "programming by accident", which is also not very productive. Also, you really want to be able to conceptualize how something is supposed to work before you even observe in the debugger whether it works the way you think it should.
Programming is largely an abstract, mental activity. You have to work out in your head how something is going to work (the design), then you go and write the code (the implementation). The more you can work out in your head how something is going to work, the more productive you will be in the long run.
As others mentioned, there are many times when you can't use a debugger. I think in the long run you will be best served by writing your code so it is easier to understand its behavior.

Even the most experienced programmers turn to the debugger for answers and clarifications, or write plain old printfs to understand state, or write things down when they're reading/reviewing someone else's code.
I think learning anything by looking at what happens under the hood is not cheating, and indeed you'll come away with a clearer, more concrete understanding than by just reading books and holding it as an abstract idea.
So no, it isn't cheating at all.
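For instance, a plain println trace (just a sketch, with made-up values) exposes state without ever opening the debugger:

    // A "poor man's debugger": print the loop state on every iteration.
    public class PrintfDebugging {
        public static void main(String[] args) {
            int total = 0;
            for (int i = 1; i <= 5; i++) {
                total += i;
                System.out.println("i=" + i + ", total=" + total);
            }
        }
    }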

Using a debugger and step-by-step execution is one of the best ways to understand the internals of code, libraries, APIs, ... and to learn. So it's definitely not cheating; it's learning, it's gaining knowledge.

Most of the time the exercise is to get you to think about what might happen in the code, not what does happen on a particular run. So running through a couple of times with a debugger might help, but you still have to do the work to generalise from those specific runs. For algorithms, this often means thinking about how the paths grow with increasing input size. For concurrent programs, this means thinking about how the paths of different execution threads will interact with each other in your code. A debugger won't tell you these things, however many times you run it.
Stepping through with a debugger can only tell you what did happen in one trial; it won't train you to think about your program abstractly - it's one apple dropping from a tree, not the theory of gravity.
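To make that concrete, here is a hypothetical sketch (not from the question) that counts calls in a naive recursive Fibonacci. The debugger can step you through any one run, but only reasoning about the recursion tells you the count grows roughly exponentially with n:

    // Counting calls in naive recursive Fibonacci: each run shows one data
    // point; the exponential growth is something you have to reason out.
    public class CallCounter {
        static long calls = 0;

        static long fib(int n) {
            calls++;
            return n < 2 ? n : fib(n - 1) + fib(n - 2);
        }

        public static void main(String[] args) {
            for (int n = 10; n <= 30; n += 10) {
                calls = 0;
                fib(n);
                System.out.println("fib(" + n + ") made " + calls + " calls");
            }
        }
    }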

It's OK to use the debugger to try to figure out why something happened, especially in mysterious code, but it's better to try to predict what should happen first and then see if the debugger confirms your reasoning and intuition. In the future this will help you write test cases that catch unanticipated errors, and increasingly you'll write code that works from the start.

Absolutely not! That's the exact opposite of cheating! The best way to learn what's happening is to get in there in the guts of things and watch it happen. Reading code out of a book can be eye-glazingly boring... but watching it execute in the debugger can be magic, especially if you are having problems with a particular section.
I NEVER release code I have written without having run it in a debugger and watched almost all of it.

I agree that you didn't cheat in the general sense. I can only see two cases where you could conceivably consider using a debugger as cheating:
you have set a bar for yourself where you want to complete the task at hand without any aid such as a debugger. Obviously if you use a debugger in this case you can consider it cheating.
you have been given instructions by your teacher or the proctor for an interview test not to use external tools.

Well, it's cheating if you're taking a midterm for a class, and there's a "programming" problem that requires you to analyze something on paper, without running it. But I always hated that kind of test, precisely because it didn't seem at all useful - it's nothing like actual programming. And heck, even there, you'd still probably end up "running" it by writing down interim results in your notes.
I do find that it's good not to over-rely on an actual debugger, because occasionally you have to fall back on simpler methods of debugging (if, say, you're trying to debug a problem that can only be reproduced on a machine owned by a tester who doesn't want you fiddling about on her computer). But even there, you're still running code and seeing what happens (and possibly adding "debugging" message boxes or write-text-to-disk functions). Trying to read any program much more complicated than "Hello World" without actually running it (even on paper) isn't avoiding cheating, it's masochism.
That said, it is true, you start having to do this less the more you've seen of a particular language / class of problem. But certainly, if there's ever a bug, even a bug in code you've seen similar versions of millions of times, the fastest and least painful way of finding it is always going to be running the code and seeing what it does.

I think there is extreme value in debuggers and they can be especially great as a learning tool. Sometimes you simply need them to diagnose a defect. However, I think bad programming habits can emerge if you continually rely on a debugger as you develop software.
The perfectly efficient software developer understands the abstractions that she works with -- the languages, frameworks, OSes -- well enough to be able to declare her problem to the system and know that it is correct. Of course this is an unrealistic ideal, but I think it is the right thing to strive for. If you find yourself frequently in a debugger, it means you don't understand your abstractions and this means you are going to move a lot slower when writing code. Even if you sort out that one issue with the help of the debugger, your ignorance is going to slow you down as you try to layer on capability and features.
Clearly you're in "learning" mode, so whatever gets you to that level of understanding depends on your learning style. If the debugger helps you get there, great. But if you're looking to be productive, it is a good goal to have a level of understanding where you can get code correct as you write it not when you run it.
Related to this question is Linus Torvalds' rant on debuggers, which I particularly like:
http://lwn.net/2000/0914/a/lt-debugger.php3

Related

Lock (encrypt) Java NetBeans files for customer

I made a program in NetBeans, but I want my program files locked because I don't want anyone to be able to read or copy the source. How can I lock the files?
Thanks!
This is a very, very broad question, but I'll try to help a little bit.
The best thing you can probably do for something written in Java is called 'obfuscation', which means that your code is still readable, just very jumbled and very difficult for a human to understand.
"Locking" files in a JVM language is an uphill battle that you probably won't ever win. If someone really, really wants to see your source code and edit it, they can; it will just take some time. This actually applies to any language, to be honest. No matter what you do, if someone is smart enough and really wants to edit your program, they can and will find a way; the best you can do is try to make it as hard as possible for them.
That being said, a decent obfuscator is enough to make most people look at your project and decide it probably isn't worth their time to manually deobfuscate it.
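As a rough sketch, a minimal configuration for a typical Java obfuscator such as ProGuard might look something like this (the jar names and entry class are hypothetical, and the exact options depend on your tool and JDK version):

    # Hypothetical ProGuard configuration sketch - adjust for your project.
    -injars  MyApp.jar
    -outjars MyApp-obfuscated.jar
    -libraryjars <java.home>/lib/rt.jar

    # Keep the entry point readable so the JVM can still launch the program;
    # everything else gets renamed to meaningless identifiers.
    -keep public class com.example.Main {
        public static void main(java.lang.String[]);
    }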

Code refactoring on bad system design

I am a junior software engineer who has been given the task of taking over an old system. This system has several problems, based on my preliminary assessment:
spaghetti code
repetitive code
classes with 10k lines and above
misuse and over-logging using log4j
bad database table design
Missing source control -> I have set up Subversion for this
Missing documents -> I have no idea of the business rules, except by reading the code
How should I go about enhancing the quality of the system and resolving these issues? I can think of using static code analysis software to fix bad coding practices.
However, it can't detect any bad design issues or problems. How should I go about resolving these issues step by step?
Get and read Working Effectively With Legacy Code. It deals exactly with this situation.
As others have also advised, for refactoring you need a solid set of unit tests. However, legacy code is typically very difficult to unit test as is, since it has not been written to be unit testable. So you need to refactor first to allow unit testing, which would allow you to start refactoring... a catch-22.
This is where the book will help you. It gives lots of practical advice on how to make badly designed code unit testable with the minimal, and safest possible, code changes. Automatic refactorings can also help you here, but there are tricks described in the book which can only be done by hand. Then once the first set of unit tests are in place, you can start gradually refactoring towards better, more maintainable code.
Update: For hints on how to take over legacy code, you may find this earlier answer of mine useful.
As @Alex noted, unit tests are also very useful to understand and document the actual behaviour of the code. This is especially useful when documentation about the system is nonexistent or outdated.
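For example, a characterization test (a sketch in the JUnit 4 style, with hypothetical names and values) simply pins down whatever the legacy code does today, right or wrong, so that later refactoring can't change it unnoticed:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class InvoiceCalculatorTest {

        // Characterization test: assert what the legacy code actually
        // returns now, so refactorings can't silently alter behaviour.
        @Test
        public void totalWithDiscountMatchesCurrentBehaviour() {
            InvoiceCalculator calculator = new InvoiceCalculator(); // hypothetical legacy class
            double total = calculator.totalWithDiscount(100.0, 3);
            assertEquals(285.0, total, 0.001); // value observed from the running system
        }
    }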
Focus on stability first. You can't enhance or refactor until you have some kind of stable environment in-place around the application.
Some thoughts:
Revision control. You've made a start by setting up Subversion. Now make sure that your database schemas, stored procedures, scripts, third-party components, etc. are under revision control too. Have a version labelling system, make sure you label versions, and ensure you can accurately access old versions in the future.
Build and release. Have a way to build stable releases on a machine other than your dev machine. You may want to use ant/nant, make, msbuild, or even a batch file or shell script. You may need deployment scripts / installers too if they don't exist.
Get it under test. Do not change the app until you have a way to know whether your change has broken it. For this you need tests. You should hopefully be able to write xunit unit tests for some of the simpler, stand-alone classes, but try to build some system/integration tests that exercise the application as a whole. Without high code coverage (which you won't have to begin with) integration tests are your best bet. Get into the habit of running the tests as often as possible. Take every opportunity to extend them.
Make small, focussed changes. Try to identify systems/subsystems within the application, and improve the boundaries between them. This reduces the knock-on effects of changes you may make. Beware the temptation to "pretty up" the code by reformatting it or imposing the latest fashionable design pattern. Turning around a system like this takes time.
Documentation. It's necessary, but don't worry too much about it. System documentation is rarely used, in my experience. Good tests are usually better than good documentation. Concentrate on documenting the interfaces between the application and the system context that it runs in (inputs, outputs, file structures, DB schemas, etc.).
Manage expectations. If it's in bad shape then it will probably resist your efforts to make changes, and timescales may be harder than usual to estimate. Make sure management and stakeholders understand that.
At all costs, beware the temptation to just rewrite the whole thing. It's almost never the right thing to do in this situation. If it works, concentrate on keeping it working.
As a junior developer, don't be afraid to ask for help. As others have said, Working Effectively With Legacy Code is a good book to read, as is Martin Fowler's Refactoring.
Good luck!
First, don't fix what isn't broken. As long as the system you are to take over works, leave functionality alone.
The system is obviously broken when it comes to maintainability, however, so that is what you tackle. As mentioned above, write some tests first, get the source backed up in version control, and THEN start by cleaning up the small pieces first, then the larger ones, and so on. Do NOT attack the bigger architectural issues until you have gained a good understanding of how the system works. Tools won't help you as long as you don't dive into the code yourself, but when you do, they help a lot.
Remember, nothing is "perfect". Don't over-engineer. Obey the KISS and YAGNI principles.
Your issue #7 (missing documentation) is by far the most important. As long as you have no idea how the system is supposed to behave, all technical considerations are secondary. Everyone is suggesting unit tests - but how can you write a useful test if you can't distinguish between wanted and unwanted behaviour?
So before you start touching the code, you have to understand the system from the user's point of view: talk to users, observe them using the system, write documentation on the use case level.
Yes, I am seriously suggesting that you spend days, more likely weeks, without changing a single line of code. Because right now, any change you make is likely to break things without you realizing it.
Once you understand the app, you'll at least know which functionality is important to test (manually or automated).
Write some unit tests first, and make sure they pass. Then with each refactoring change you make, just keep making sure the tests keep passing. Then you can be confident that your application behaviour to the outside world hasn't changed.
This also has the added benefit that the tests will always be there, so for any future changes the tests should still pass, guarding against any regressions in the new changes.
First and foremost, make sure you have a source control system installed, and that all source code is versioned and can be built.
Next, you can try writing unit test for core parts of your system. From there, when you have a more or less solid body of regression tests, you can actually proceed with refactoring.
When I encounter a messy codebase, I usually start by renaming poorly named types and methods to better reflect their original intent. Next, you can try splitting huge methods into smaller ones.
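As a hypothetical before-and-after sketch of that kind of split:

    class Order { /* fields elided */ }

    class OrderProcessor {
        // Before: one long method named "proc" mixed validation, pricing,
        // and persistence across hundreds of lines.
        // After: the same behaviour split into small, intention-revealing steps.
        void processOrder(Order order) {
            validate(order);
            double total = calculateTotal(order);
            save(order, total);
        }

        private void validate(Order order) { /* extracted validation code */ }
        private double calculateTotal(Order order) { /* extracted pricing code */ return 0.0; }
        private void save(Order order, double total) { /* extracted persistence code */ }
    }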
Keep in mind that this legacy system, with all its spaghetti code, currently works. Don't go changing things just because they don't look as pretty as they should. Focus on stability, new features, and familiarity before ripping old code out left, right, and centre.
Firstly, let me say that Working Effectively with Legacy Code is probably a really good book to read, judging by three answers within a minute of each other.
bad database table design
This one, you are probably stuck with. If you try to change an existing database design you are probably committing yourself to redesigning the whole system and writing migration tools for the existing data. Leave well alone.
My standard answer to this question is: Refactor the Low-hanging Fruit. In this case, I'd be inclined to take one of the 10K-line classes and seek out opportunities to Sprout Class, but that's just my own proclivity; you might be more comfortable changing other things first (setting up source control was an excellent first step!) Test what you can; refactor what can't be tested, take a step at a time, and make it better.
Keep in mind as you progress how much better you are making things; if you concentrate only on how bad things still are, you're likely to become discouraged.
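As a sketch of the Sprout Class move (all names hypothetical): grow the new behaviour in a small, testable class of its own, and have the monster class make a single call into it:

    // The "sprouted" class: small, self-contained, easy to unit test.
    class LateFeeCalculator {
        double feeFor(int daysOverdue) {
            return daysOverdue > 0 ? daysOverdue * 1.50 : 0.0;
        }
    }

    // Stands in for the legacy 10K-line class.
    class MonsterBillingClass {
        private final LateFeeCalculator lateFees = new LateFeeCalculator();

        double computeCharge(double base, int daysOverdue) {
            // ... thousands of lines of existing logic ...
            return base + lateFees.feeFor(daysOverdue); // one small call into the sprout
        }
    }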
As others have noted, don't change something that works just to make it prettier. The risk that you will introduce errors is great.
My philosophy is: As I have to make changes to satisfy new requirements or to fix reported bugs, I try to make the piece of code that I have to change a little cleaner. I'm going to have to test the changed code anyway, so now is a good time to do a little clean-up at small additional cost.
Fundamental design changes are the toughest and must be saved for occasions where you have to make a big enough change that you would be testing all the changed code anyway.
Changing bad database design is hardest of all because the poorly designed tables are likely used by many programs. Any change to the database requires changing every program that reads or writes it. The best way to accomplish this is usually to try to reduce the number of places that access any given part of the database.
To take a simple example: suppose there are 20 places that read through customer records and calculate the customer account balance. Replace this with one function that reads the database and returns the total, and twenty calls to that function. Now you can change the schema for the customer records and there is only one piece of code to change instead of 20. The principle is simple enough, but in practice it is unlikely that every function that accesses a given record is doing the same thing. Even if the original programmer was clumsy enough to write the same code 20 times (not unlikely -- I've seen plenty of that), the real situation is probably not that he wrote 1 function 20 times, period, but that he wrote function A 20 times, function B 12 times, function C 4 times, etc.
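A sketch of that consolidation in plain JDBC (table and column names are hypothetical):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // The one function that reads customer transactions and returns the
    // balance; the 20 call sites use this instead of 20 copies of the query.
    class CustomerRepository {
        private final Connection connection;

        CustomerRepository(Connection connection) {
            this.connection = connection;
        }

        double accountBalance(long customerId) throws SQLException {
            String sql = "SELECT SUM(amount) FROM transactions WHERE customer_id = ?";
            try (PreparedStatement statement = connection.prepareStatement(sql)) {
                statement.setLong(1, customerId);
                try (ResultSet resultSet = statement.executeQuery()) {
                    return resultSet.next() ? resultSet.getDouble(1) : 0.0;
                }
            }
        }
    }

Now a schema change means editing accountBalance once instead of hunting down twenty copies of the query.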
Working Effectively With Legacy Code might be helpful.
Design issues are very difficult to catch. The first place to start is understanding the design of the application. I find it useful to diagram it using either UML or a process flow diagram; anything that communicates the design and workings of the application works.
From there I go into more detail and ask myself the question "Would I have done it this way?", and what other options there are. It is easy to see code debt, i.e. the debt we get from making bad choices, as always bad, but sometimes there are other factors involved, like budget, time, availability of resources, etc. There you have to ask whether it is worth refactoring a working but badly designed application.
If there are many upcoming new features, changes, bug fixes, etc., I would say it is good to refactor, but if the application rarely changes and is stable, then leaving it as is may be the better approach.
Another side point to note: if the code is used by another application as a service or module, then refactoring might first mean creating a stub around the code that serves as the interface; once that is defined clearly and has unit tests to prove it works, you can choose any technology to fill in the details.
A good book on this subject is Working Effectively with Legacy Code By Michael Feathers (2004). It goes through the process of making small changes, while working towards a bigger clean up.
Write unit tests & find and remove duplicate code.
Write unit tests & break long methods into a series of short methods.
Write unit tests & find and remove duplicate methods.
Write unit tests & break apart classes so that they follow the single responsibility principle.
Try to create some unit tests first that can trigger some actions in your code.
Commit everything to SVN and tag it (in case something goes bad, you'll have an escape pod).
Use the inCode Eclipse plugin http://www.intooitus.com/inCode.html and look at the refactorings it proposes. Check whether the proposed refactorings seem OK for your problem. Try to understand them.
Retest with the unit tests created before.
Now you can use FindBugs and/or PMD to check for other subtle issues.
If everything is okay, you might want to check in again.
I'd also try reading the source in order to detect some cases where patterns can be applied.

Understanding a big company project in Java [closed]

What is the best way to understand a big company project in Java?
There's a nice podcast/interview with Dave Thomas ("The Pragmatic Programmer" Dave) about this topic, here.
He calls it "software archaeology".
For many different reasons organizations frequently lose control of their code bases. Knowledge gets forgotten, people leave. One could very easily mistake the code base for an archaeological concern were it not for the fact that mission critical applications built upon it keep chugging along until something needs to be modified or enhanced and then suddenly you have a vat of source code that no one understands but that has become the most important thing in the world overnight.
There are very few short cuts for coming to terms with big vats of code. Generally, one has to balance the quest for "understanding it" with the pressure to make pointed pragmatic changes to "complete the mission".
You'll never know every line of every piece of code at a company. What you CAN do is understand the classes, what they do, and how they relate to each other.
Start by getting the most basic understanding of the way the code flows from a class perspective, focusing on the large classes that do the majority of the work. Once you understand what's happening there, start focusing on smaller classes. Keep this up until you have a pretty good understanding of a handful of classes.
Additionally, you can break it down into the common programming tasks your company has. Once you have a list of these, research what's involved in implementing each of them.
The final thing is just experience. You can study the code as long as you want, but there's no substitute for playing with the actual code. Writing programs, testing things out, and looking at how existing code runs are the best way to learn a new system.
Start writing unit tests - you do something useful (which will probably earn you more time to understand the code) and you get an in-depth look into the functionality. Also, start asking questions when they come up during that process, you will be able to ask very specific questions and you might impress your colleagues :)
A few places I often start from are the build/package/deploy scripts and configuration files - depending on the size of the application we are talking about, they can tell you a lot about the internal structure and external dependencies, and highlight things to dig into further.
Next, you can use a code coverage tool and record the coverage for a simple scenario, which shows roughly which areas implement it. A variation of this is running under a tracing profiler. Heap dumps are also useful for getting an idea of the basic data structures.
Finally, look at the application logs during various scenarios, though these are usually too much information and you need to know what you are looking for.
All these should give you a good idea of the overall application. After that, you need to fire up a debugger and start poking in the code. Ask your colleagues for their favourite breakpoint locations - usually everybody has some.
In case you are doing Java or C#, make sure you know your IDE and how to use find-usages. There are further more advanced tools for static analysis and comprehension like Structure101 (my favourite), SonarJ or LattixDSM.
Some UML reverse-engineering tools can generate class diagrams, but usually they create too much noise, and pruning manually assumes you can discern the important from the unimportant (which usually is not the case with a new codebase). If you get one of these, I recommend starting with an essential class and using the "Add dependencies" functionality to incrementally explore the application.
Take a simple first task, and don't rush anywhere. Look around, use the time to learn and understand the environment. Examine the call stack, use a debugger. And when you have a solution, ask your peers for a code review, and learn from them.
Then take another task, and another one, and another...
You'll get there soon :)
Documentation, my friend. Look for manager patterns, seek out the JSPs, play with the code. Don't be afraid to make small changes and see what happens. Understand the logic and the building blocks. And stay away from companies with poor documentation.

How can one convince a team to use a new technology (LINQ, MVC, etc.)?

Obviously, it's easier to do with some developers, but I'm sure many of us are on teams that prefer the status quo.
You know the type. You see some benefit in a piece of new technology and they prefer the tried and true methods.
Try, for example, convincing a DBA/C# programmer of the advantages of using LINQ (not necessarily LINQ to SQL, just LINQ in general).
For example, when a project requirement is to be cross-platform... instead of thinking about how one can run Windows on a Mac through a VM, introduce the idea of using the relatively new Silverlight, or of creating it in Java (as an option to look into).
I know most people don't like to be out of their comfort level, so it takes a bit of convincing, and not ALL new technology makes business sense... but how have you convinced your team to look at a new technology?
What technologies have you successfully introduced to your workplace?
What technologies do you think are hardest to introduce? ( I'm thinking paradigm-shifting ones, like MVC from WebForms... or new languages )
What strategies do you employ to make these new technologies appealing?
Know the technology well before pitching it. You're going to get questions like "but how can we make it do X?", and you want to be able to give at least a general answer.
Try not to be a religious zealot. Acknowledging that the new technology is not perfect, that it's just another tool in the toolbox, goes a long way towards credibility.
Give a well-prepared live demo to show what it can do. For example, a friend of mine built a simple blog in Ruby on Rails in half an hour, in front of a live audience. I want to stress the word "well-prepared"; if things keep breaking along the way, or you don't fully understand what you're doing, or you are unable to answer basic questions, you'll hurt your cause rather than help it.
When it comes to coding practices my favorite is to quite simply use examples. I will take a few hours and edit our code base to use the new technique in place of the previous pattern. Then send a shelveset or changelist around to the rest of the developer list displaying the difference. Or just have a meeting to talk about the difference.
Showing examples in real production code really helps other developers see the advantages.
I've successfully introduced LINQ to my company and it has helped quite a bit.
What worked for me? Show and tell. Our previous technology was database programming with C, which is quite the mess. Our lead developer wrote about 3000 lines of code to fill a dataset, and I did it in a tenth of that with LINQ/C#.
Once I broke down what I did and he saw how powerful it was, he was convinced it was time to upgrade.
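For Java readers, the closest analogue to that declarative LINQ style is the Streams API. A hypothetical sketch of the where/select/orderby shape (the C# LINQ version reads much the same):

    import java.util.List;
    import java.util.stream.Collectors;

    class LinqStyleQuery {
        static class Invoice {
            final String customerName;
            final boolean overdue;
            Invoice(String customerName, boolean overdue) {
                this.customerName = customerName;
                this.overdue = overdue;
            }
        }

        // filter / map / sorted is roughly LINQ's where / select / orderby.
        static List<String> overdueCustomers(List<Invoice> invoices) {
            return invoices.stream()
                    .filter(invoice -> invoice.overdue)
                    .map(invoice -> invoice.customerName)
                    .sorted()
                    .collect(Collectors.toList());
        }
    }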
The advice from people who convinced the management to consider using F# goes something like this:
Implement the most important bits of the next key project of the company in F# in your free time and then show others what benefits it has, how quickly you were able to implement it and how easy it is to adapt the solution to changing requirements.
I think this is quite an effective way - when people actually see the productivity (of any new technology), it is much easier to convince them that it is worth learning.
It's best to lead by example. Complete a successful project using the new tool and wait for developers to ask how you did it.
I managed to convince the team I'm part of to switch from CVS to Mercurial. Can you believe we were still using CVS? Neither could I when I started.
I became almost somewhat of a preacher, a royal pain in the butt. Every time CVS screwed up or caused some sort of discomfort (being brutally slow, for example) I gave a little speech about how much better it could be.
Soon enough they accepted the possibility that there were alternatives (none of them actually knew there were alternatives to CVS!) and began to say things like "if there indeed are alternatives, anything must be better than this".
That's when I made my move: I simply ran some scripts converting the CVS repository into a Mercurial one and uploaded it to the company server. Once they saw it in action, they were sold.
Not that I planned anything during this little migration black ops, but in retrospect I would give the following advice to anyone attempting something similar:
Let people know there are (better) alternatives, it is entirely possible to work outside your comfort zone.
Lead by example, if you want something done, do it yourself. Show the alternative in action. No one is going to make the jump unless you jump first, especially not if they are already hesitating.
Show them how it solves a common problem. Pick out some problem that appears regularly for them and show them the solution. That will usually at least get them thinking about it.
Stand the two technologies next to one another; it's safe to assume that advancements have been made and that what you are bringing to the table is better suited to the job at hand.
Put the raw results in front of them and let them decide for themselves!
I work for a data bureau, and until recently the company was hooked on MS Access, which was cumbersome and unfit for the job. After some serious convincing, and showing the power of SQL in comparison to Access, it's now the weapon of choice.
And it took standing the two techs side by side and allowing the guys up top to see for themselves; the time saved did make business sense!
You'll need to show why it's a BETTER technology (or at least better at something) than the current tool/method in use, and probably significantly so. Otherwise, why go through the effort of learning something new?
Otherwise, convince the boss and then get a mandate ... (though I don't really recommend that if you can't get at least half the team on board).

How do you begin designing a large system? [closed]

It's been mentioned to me that I'll be the sole developer behind a large new system. Among other things I'll be designing a UI and database schema.
I'm sure I'll receive some guidance, but I'd like to be able to knock their socks off. What can I do in the meantime to prepare, and what will I need to keep in mind when I sit down at my computer with the spec?
A few things to keep in mind: I'm a college student at my first real programming job. I'll be using Java. We already have SCM set up with automated testing, etc...so tools are not an issue.
Do you know much about OOP? If so, look into Spring and Hibernate to keep your implementation clean and orthogonal. If you get that, you should find TDD a good way to keep your design compact and lean, especially since you have "automated testing" up and running.
UPDATE:
Looking at the first slew of answers, I couldn't disagree more. Particularly in the Java space, you should find plenty of mentors/resources on working out your application with Objects, not a database-centric approach. Database design is typically the first step for Microsoft folks (which I do daily, but am in a recovery program, er, Alt.Net). If you keep the focus on what you need to deliver to a customer and let your ORM figure out how to persist your objects, your design should be better.
This sounds very much like my first job. Straight out of university, I was asked to design the database and business logic layer, while other people would take care of the UI. Meanwhile the boss was looking over my shoulder, unwilling to let go of what used to be his baby and was now mine, and poking his finger in it. Three years later, developers were fleeing the company and we were still X months away from actually selling anything.
The big mistake was in being too ambitious. If this is your first job, you will make mistakes and you will need to change how things work long after you've written them. We had all sorts of features that made the system more complicated than it needed to be, both on the database level and in the API that it presented to other developers. In the end, the whole thing was just far too complicated to support all at once and just died.
So my advice:
If you're not sure about taking on such a big job single-handed, don't. Tell your employers, and get them to find or hire somebody for you to work with who can help you out. If people need to be added to the project, then it should be done near the start rather than after stuff starts going wrong.
Think very carefully about what the product is for, and to boil it down to the simplest set of requirements you can think of. If the people giving you the spec aren't technical, try to see past what they've written to what will actually work and make money. Talk to customers and salespeople, and understand the market.
There's no shame in admitting you're wrong. If it turns out that the entire system needs to be rewritten, because you made some mistake in your first version, then it's better to admit this as soon as possible so you can get to it. Correspondingly, don't try to make an architecture that can anticipate every possible contingency in your first version, because you don't know what every contingency is and will just get it wrong. Write once with an eye to throwing away and starting again - you may not have to, the first version may be fine, but admit it if you do.
I also disagree about starting with the database. The DB is simply an artifact of how your business objects are persisted. I don't know of an equivalent in Java, but .Net has stellar tools such as SubSonic that allow your DB design to stay fluid as you iterate through your business objects design. I'd say first and foremost (even before deciding on what technologies to introduce) focus on the process and identify your nouns and verbs ... then build out from those abstractions. Hey, it really does work in the "real world", just like OOP 101 taught you!
Before you start coding, plan out your database schema - everything else will flow from that. Getting the database reasonably correct early on will save you time and headaches later.
The main thing is being able to abstract the complexity of the system so that you don't get bogged down by it as soon as you start off.
First read the spec like a story (skimming through it). Don't stop at every requirement to analyze it right there and then. This will allow you to get an overall picture of the system without too many details. At this point you would start identifying the major functional components of the system. Start putting these down (use a mindmap tool if you like).
Then take each component and start exploding it (and tying each detail with requirements in the spec document). Do this for all components, till you have covered all requirements.
Now, you should start looking at relationships between the components, and whether there are repetitions of features or functions across the various components (which you can then pull out to create utility components, or such). Around now, you would have a good detailed map of your requirements in your mind.
NOW, you should think of designing the database, ER diagrams, Class Design, DFDs, deployment, etc.
The problem with doing the last step first is that you can get bogged down in the complexity of your system without really gaining an overall understanding in the first place.
I do it the other way around. I find that doing it database-schema-first gets the system stuck in a data-driven-design that is difficult to abstract from persistence. We try to do domain model designs first and then base the database schema on those.
And then there's the infrastructure design: the team should settle on conventions on how to structure the program first and foremost. And then we work together to agree first on a design for the common functionality of the system (e.g., things everyone needs like persistence, logging, etc.). This becomes the framework of the system.
We all work on that together first before we split the rest of the functionalities amongst ourselves.
It has been my experience that Java applications (.NET also) that consider the database last are highly likely to perform poorly when placed into a corporate environment. You need to really think about your audience. You didn't say if it was a web app or not. Either way the infrastructure that you are implementing on is important when considering how you handle your data.
No matter what methodology you consider, how you get and save your data, and its impact on performance, should be right up there as one of your #1 priorities.
I'd suggest thinking about how this application will be used. How will future users work with it? I'm sure you know at least a few things about what this application needs to handle, but my first advice is "think of the user and what he or she needs".
Draw it up on plain paper, thinking of where to section off the code. Remember not to mix logic with GUI code (a common error). This way you will be set to extend your application's reach in the future to servlets and/or applets or whatever platform comes along. Section it into layers so that you can respond to large changes faster without rebuilding everything. Layers should not see any layers other than their closest neighbouring layers.
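A tiny sketch of that layering (hypothetical names): the GUI talks only to an interface, so the logic behind it can change, or move to a server, without touching UI code:

    // The GUI depends only on this interface, never on the implementation.
    interface AccountService {
        double balanceFor(String accountId);
    }

    // GUI layer: rendering only, talking to its nearest-neighbour layer.
    class AccountPanel {
        private final AccountService service;

        AccountPanel(AccountService service) {
            this.service = service;
        }

        String balanceLabel(String accountId) {
            return "Balance: " + service.balanceFor(accountId);
        }
    }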
Begin with the true core functionality. All that time-consuming fluff (that will make your project 4 weeks late) won't matter much to the vast majority of users. It can be added later once you are sure you can deliver on time.
By the way, even though this has nothing to do with design, I'd just like to say that you won't deliver on time. Make a realistic estimate of time consumption and then double it :-) I assume here that you will not be alone in this project and that people will come and go as the project progresses. You may need to train people midway through the project, people go on holiday / need surgery, etc.
Split the big system to smaller pieces.
And don't think that it's so complex, because it usually isn't. Thinking of it as too complex just muddles your thoughts and eventually the design. At some point you realize that you could do the same thing more easily, and then you redesign it.
At least, this has been my major mistake in designing.
Keep it simple!
I found very insightful ideas about starting a new large project - based on common good practices, Test-Driven Development, and a pragmatic approach - in the book Growing Object-Oriented Software, Guided by Tests.
It is still under development, but the first 3 chapters may be what you are looking for and are, IMHO, worth reading.
