I'm experienced with Java and want to learn Objective-C to write apps for the iPhone. What are some fundamental differences (other than syntax)?
Conceptually, the biggest difference is that Objective-C is dynamically typed and you don't call methods, you send messages. This means that the Objective-C runtime does not care what type your object is, only whether it will respond to the messages you send it. This in turn means that you could (for example) create a class with an objectForIndex: method and use it in place of an NSArray as long as the code that uses it only calls objectForIndex:
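For a Java-flavoured intuition (this analogy, and all the names in it, are mine, not part of Objective-C): it is roughly as if every call were dispatched by name through reflection, so any object that happens to have a matching method works.

import java.lang.reflect.Method;

class MessageSendAnalogy {
    // dispatch purely by method name at run time; the receiver's declared
    // type never enters into it, much like an Objective-C message send
    static Object send(Object receiver, String selector, int index) throws Exception {
        Method m = receiver.getClass().getMethod(selector, int.class);
        return m.invoke(receiver, index);
    }
}

// any object with a public objectForIndex(int) method would work here:
// MessageSendAnalogy.send(myHomegrownCollection, "objectForIndex", 3);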
This allows you to do all sorts of funky things, like having one object pose as an object of a different class. You can also add methods at run time, or add collections of methods (called categories) to prebuilt classes like NSString at compile time. Most of the time you'll never bother with any of those tricks, except the categories.
On a more practical level you'll notice:
The syntax is different.
Memory management is more manual. On the iPhone, you have to use retain/release (OS X has garbage collection). This is not actually as bad as it sounds. If you follow the rules, and wrap your instance variables in getters and setters you'll find yourself rarely having to write retain or release. Update: some time after I wrote this, Apple introduced automatic reference counting (ARC). ARC grew out of the observation that the clang static analyser was capable of spotting just about every single missing (or extra) retain or release. So they extended the principle by having the compiler put in the retains and releases automatically. Apart from some simple rules about strong and weak relationships (i.e. whether an object claims to own another object or not), you can more or less forget about memory management. Also, ARC is available on iOS.
All methods are public. This is a direct consequence of the message-sending paradigm: you can't define private or protected methods.
The library is much smaller. In particular, you will notice that there are only three collection classes: NSArray, NSDictionary and NSSet (plus their mutable versions). The philosophy is that you program to the interface; the runtime worries about what the implementation should be.
ETA: I forgot one important thing you'll miss from Java: Objective-C does not support namespaces. This is why you'll see Objective-C classes with two (or more) letter prefixes, and it's the feature I really wish they would add.
First, Objective-C doesn't provide a garbage collector for iPhone. On the Mac, a garbage collector is present.
But possibly the biggest difference for me is that there are two files for each class: a header file (.h) where you declare instance variables, properties, and methods, and an implementation file (.m) where you write your methods. Properties in Objective-C have to be "synthesized" with the @synthesize keyword to create the getter and setter methods.
The transition isn't too bad. Both languages follow similar rules in terms of object models and even some of the syntax. I actually made the opposite transition. I started with Objective-C for iPhone, then picked up Java to do Android development.
On an unrelated note, building your UI is much easier using Apple's tools. Interface Builder is drop-dead simple, and hooking up UI objects in the nib files to their declarations in code is easy. Instruments provides an easy way to check CPU usage, memory leaks, allocations, and so on. Plus, just in terms of features, overall polish, and ease of use, I'll take Xcode and Apple's tools over Eclipse any day.
If you're "fluent" in Java, the move to Objective-C won't be too hard. Just get your [] keys ready and practice typing "release"!
The biggest difference that will affect you immediately, besides an entirely different set of libraries, is that Objective-C doesn't provide a garbage collector. The Apple libraries provide some garbage-collection-related routines and objects, I believe using reference counting, but you don't have the garbage collection you're used to in Java.
Other than that, many things will be similar: single inheritance, late binding, etc. Objective-C doesn't provide method overloading, but that's a somewhat trivial difference. Java and Objective-C aren't too far apart in terms of how their object models work. Objective-C has a few tricks up its sleeve, such as categories, but you don't need to worry about those at first.
See the related C# question suggested by Remus for more (and much more detailed) information (and thanks to Remus for reminding me of the library difference - I nearly forgot that important facet).
Any object declared in Objective-C is a pointer; you always work with objects through pointers rather than directly.
While studying the standard Java library and its classes, I couldn't help noticing that some of those classes have methods that, in my opinion, have next to no relevance to those classes' purpose.
The methods I'm talking about are, for example, Integer#getInteger, which retrieves the value of some "system property", or System#arraycopy, whose purpose is well defined by its name.
Still, both of these methods seem kinda out of place, especially the first one, which for some reason binds working with system resources to a primitive type wrapper class.
From my current point of view, such a method placement policy looks like a violation of a fundamental OOP design principle: each class must be dedicated to solving its particular set of problems, and not turn itself into a Swiss army knife.
But since I don't think that Java's designers are idiots, I assume there's some logic behind the decision to place those methods right where they are. So I'd be grateful if someone could explain what that logic really is.
Thanks!
Update
A few people have hinted at the fact that Java does have its illogical things that are simply remnants of a turbulent past. Let me reformulate my question, then: why is Java so unwilling to mark its architectural flaws as deprecated? It's not as if deprecated features are likely to be removed in any foreseeable future, and marking things deprecated really helps people refrain from using them in newly created code.
This is a good thing to wonder about. For more recent features (such as generics, lambdas, etc.) there are several blogs and mailing-list posts that explain the choices made by the library designers. These are very interesting to read.
In your case I expect the answer isn't too exciting. The reason they were made this way is hard to tell, but both classes have existed since JDK 1.0. In those days the quality of programming in general (and of Java and OO in particular) was perhaps lower; there were fewer common practices, and library makers had to invent many paradigms themselves. There were also other constraints in those times, such as object creation being expensive.
Many of those awkwardly designed methods and classes now have a better alternative (see Date and the java.time package).
You would expect arraycopy to be in the Arrays class, but unfortunately it is not there.
Ideally the original method would be deprecated for a while and then removed. Many libraries follow this strategy. Java, however, is very conservative about this and only deprecates things that really should not be used (such as Thread.stop()). I don't think a method has ever been removed from Java due to deprecation. This means it is fairly easy to upgrade your software to a newer version of Java, but it comes at the cost of leaving some clutter in the libraries.
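For illustration, this is what that conservative strategy looks like in practice (the class and method names here are made up): the old method is flagged, never removed.

class LegacyApi {
    /** @deprecated use {@link #newWay()} instead */
    @Deprecated
    public void oldWay() {
        newWay();   // still compiles and runs; callers merely get a warning
    }

    public void newWay() {
        // current implementation
    }
}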
The fact that Java is so conservative about keeping new JDK/JRE versions compatible with older source code and binaries is both loved and hated. For a hobby project, or a small actively developed project, upgrading to a new JVM that removes deprecated functions after a few years is not too difficult. But don't forget that many projects are not actively developed, or their developers have a hard time making changes safely, for instance because they lack a proper regression test. In those projects changes to APIs cost a lot of time to adapt to, and run the risk of introducing bugs.
Also, libraries often try to support older versions of Java as well as newer ones; they would have a problem doing so if methods had been deleted.
The Integer example is probably just a design decision. If you want to interpret a property as an Integer, you use java.lang.Integer. Otherwise you would have to provide a getter method on System for each java.lang type. Something like:
System.getPropertyAsBoolean(String)
System.getPropertyAsByte(String)
System.getPropertyAsInteger(String)
...
And for each data type, you'd require one additional method for the default:
System.getPropertyAsBoolean(String, boolean)
System.getPropertyAsByte(String, byte)
...
Since the java.lang types already have some parsing abilities (Integer.valueOf(String)), I am not too surprised to find a getProperty method there. Convenience, in trade for bending principles a tiny bit.
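For illustration, this is how those property getters behave (the property names here are invented):

public class PropertyDemo {
    public static void main(String[] args) {
        System.setProperty("app.threads", "8");                   // a made-up property

        Integer threads = Integer.getInteger("app.threads");      // parses the property -> 8
        Integer poolSize = Integer.getInteger("app.poolSize", 4); // unset, falls back to 4
        boolean verbose = Boolean.getBoolean("app.verbose");      // false unless set to "true"

        System.out.println(threads + " " + poolSize + " " + verbose);
    }
}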
As for System.arraycopy, I guess it is an operation that depends on the operating system: you copy memory from one location to another in a very efficient way. If I wanted to copy an array like that, I'd look for it in java.lang.System.
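For comparison, here is the native method next to the more discoverable java.util.Arrays alternative that was added later:

public class CopyDemo {
    public static void main(String[] args) {
        int[] src = {1, 2, 3, 4, 5};
        int[] dst = new int[5];

        // copy 3 elements, starting at src[1], into dst starting at index 0
        System.arraycopy(src, 1, dst, 0, 3);                   // dst -> {2, 3, 4, 0, 0}

        // the alternative that later appeared where you would expect it
        int[] copy = java.util.Arrays.copyOfRange(src, 1, 4);  // -> {2, 3, 4}
        System.out.println(java.util.Arrays.toString(copy));
    }
}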
"I assume that there's some logic behind a decision to place those
methods right where they are."
While that is often true, I have found that when something's off, this assumption is typically where you are misled.
A language is in constant development, from the day someone proposes it to the day it is antiquated. In between those extremes are some phases that the language goes through. Especially if someone is spending money on it and wants people to use it, a very peculiar phase often occurs just before or after the first release:
The "we need this to work yesterday" phase.
This is where stuff like this happens: you have an almost complete language, but the programmers need to do something to show what the language can do, or a specific application needs a feature that was not designed into the language.
So where do we add this feature?
- well, where it makes the most sense to the particular programmer whose task it is to "make it work yesterday".
The logic may be that this is where the function makes the most sense, since it doesn't belong anywhere else and doesn't deserve a class of its own. It could also be something like: so far, we have never done an array copy without using System... let's put arraycopy in there and save everyone an extra import.
In the next generation of the language, people will not move the feature, since some experienced programmers will complain. So the feature may be duplicated, and also found in a place where it makes more sense.
Much later, it will be marked as deprecated, and deleted, if anyone cares to clean it up.
I have read through a bunch of best practices online for JUnit and Java in general, and a big one that people like to point out is that fields and methods should be private unless you really need to let users access them. Class variables should be private with getters and setters, and the only methods you should expose should be ones that users will call directly.
My question: how strictly necessary are these rules when you have things like standalone apps that don't have any users? I'm currently working on something that will be run on a server maybe once a month. There are config files the app uses that can be modified, but otherwise there is no real user interaction once it runs. I have mostly been following best practices but have run into issues with unit testing. A lot of the time it feels like I am just jumping through hoops to get my unit testing just right, and it would be much easier if the method in question were public or even protected instead.
I understand that encapsulation will make it easier to make changes behind the scenes without needing to change code all over, but without users to impact that seems a bit more flimsy. I am just making my current job harder on the off-chance it will save me time later. I've also seen all of the answers on this site saying that if you need to unit test a private method you are doing something wrong. But that is predicated on the idea that those methods should always be private, which is what I am questioning.
If I know that no one will be using the application (calling its methods from a jar or API or whatever) is there anything wrong with making everything protected? Or even public? What about keeping private fields but making every method public? Where is the balance between "correct" accessibility on pieces of code, and ease of use?
It is not "necessary", but applying standards of good design and coding principles even in the "small" projects will help you in the long run.
Yes, it takes discipline to write good software. Languages are tools that help you accomplish a goal. Like any tool, they can be misused, and when misused can be dangerous. Power tools, like a table saw, can be very dangerous if misused, so if you care about your own safety you always follow proper procedure, even if it might feel a little inconvenient (or you end up nicknamed "stubby").
I'd argue that it's on the small projects, where you want to cut corners and "just write the code", that adhering to the best practices is most important. You are training yourself in the proper use of your tools, so when it really matters you do the right thing automatically.
Also consider that projects that start out "small" can evolve over time to become quite large as you keep adding enhancements and new functionality. This is the nature of agile software development. If you followed best practices from the start you'll find it much easier to adapt as the project grows.
Another factor is that using OOP principles is a way of taming complexity. If you have a well-defined API and, for example, use only getters and setters, you can partition off the API from the implementation in your own mind. After writing class A, when writing a client of A, say B, you can think only about the API. Later when you need to enhance A you can easily tell what parts of A affect the API vs what parts are purely internal. If you didn't use encapsulation you'd have to scan your entire codebase to see if a change to A would break something else.
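A minimal sketch of that partitioning (both classes are invented for the example): Thermostat depends only on Temperature's API, so Temperature's internals are free to change.

// A: the API is two methods; the field behind them is an implementation detail
class Temperature {
    private double celsius;   // could become kelvin later without breaking clients

    double getFahrenheit() { return celsius * 9.0 / 5.0 + 32.0; }
    void setFahrenheit(double f) { this.celsius = (f - 32.0) * 5.0 / 9.0; }
}

// B: a client of A that only ever sees the API
class Thermostat {
    boolean tooHot(Temperature t) { return t.getFahrenheit() > 78.0; }
}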
Do I apply this to EVERYTHING I write? No, of course not. I don't do this with short single-use scripts in dynamic languages (Perl, AWK, etc) but when writing Java I make it a point to always write "good" code, if only to keep my skills sharp.
There is generally no necessity to follow any rules as long as your code compiles and runs correctly.
However, code-style "best practices" have proven to enhance code quality, especially over time as a project develops and matures. Making fields private makes your code more resilient to later changes; if you omit the getters/setters and access fields directly, any change to a field impacts related code much more directly.
While there is seemingly no advantage to a getter/setter at first, the advantage lies in the future: a getter forces any code working with the attribute through a single point of control, which, in case of any changes related to that field, helps mask the concrete representation/location of the field and/or allows for polymorphism when required, without changing or checking all the existing callers.
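A small, made-up example of that single point of control: the stored representation can change while every caller keeps going through the same two methods.

class Account {
    private long balanceInCents;   // representation changed over time; callers never noticed

    public double getBalance() { return balanceInCents / 100.0; }
    public void setBalance(double euros) { this.balanceInCents = Math.round(euros * 100.0); }
}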
Finally, the less surface (accessible methods/fields) a class exposes to other classes (users), the less you have to maintain. Reducing the exposed API to the absolute minimum reduces coupling between classes, which again is an advantage when something needs to be changed. Striving to hide the inner workings of every object as well as possible is not a goal by itself; it's the advantages that result from it that are the goal.
As always, good balancing is required. But when in doubt, it is better to err on the side of "source code quality" practices instead of taking too many shortcuts, as there are many different aspects to your "simple" question one should consider:
It is hard to anticipate what will happen to some piece of software over time. Yes, you don't have any users today. But you know what: one major property of great tools is ... as soon as other people see them, they want to use them, too. And all of a sudden, you have users. And feature requests, bug reports, ... and make no mistake: first people will love you for the productivity gain from your tool; and then they will start to put pressure on you because all of a sudden your tool is essential for other people to make their goals.
Many things are fine to be addressed via convention. Example: sometimes, if I used only the public methods of my "class under test", unit tests would become more complicated than necessary. In such a case, I have absolutely no problem with putting a getter here or there that allows me to inspect the internal state of my "class under test", so that my test can trigger some activity and then call the getter. I make those methods "package protected" and put a // used for unit testing comment above them; see the sketch below. I have not seen problems come out of that informal practice. Please note: those methods should only be used in test cases. No other production class is supposed to call them.
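A sketch of that convention (the class and its methods are invented): a test in the same package can call entryCount(), while it stays invisible outside the package.

import java.util.HashMap;
import java.util.Map;

public class Cache {
    private final Map<String, String> entries = new HashMap<>();

    public void put(String key, String value) {
        entries.put(key, value);
    }

    // used for unit testing
    int entryCount() {              // package protected on purpose
        return entries.size();
    }
}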
Regarding the core of your question on private stuff: I think one should always hide implementation details from the outside. Whenever you write a piece of code that is supposed to live longer than the next hour, you should do the right thing and try to write code of very high quality. Making the internals of your objects visible to the outside comes only with drawbacks; there is nothing positive in doing so. Good OO is about using models that come with certain behavior, and internal state should stay internal; there is no benefit in exposing it. For the record: sometimes you have simple data containers, classes that only have some fields but no methods on them. In that case, yeah, make the fields public; there is not much advantage in providing getters/setters. (See "Clean Code" by Robert Martin, chapter 6, "Objects and Data Structures".)
My question is: is it possible (in ANY way) to analyze and modify the call stack (both the content of frames and the stack content) at runtime?
I'm looking for any possibility: low-level, unsafe or internal APIs, the possibility to write a C extension, etc. The only constraint: it should be usable in the standard runtime, without debugging or profiling mode. This is the point where I'm researching "is it possible at all?", not "is it a good idea?".
I'd like to gather all local data from a frame, store it somewhere, and then remove that frame from the stack, with the possibility of restoring it later. Effectively that gives us continuations on the JVM, which would allow fast async frameworks (like Python's gevent) and generator constructs (like Python's) to be built.
This may look like a repeated question, but I've only found questions that were answered with "use Thread.currentThread().getStackTrace()" or "that should be done with debugging tools". There was a question similar to mine, but it was only answered in the context of what the asker wanted to do (work on async computations), while I need a more general (Java-stack-oriented) answer. This question is similar too, but as before, it is focused on parallelization, and the answers are focused on that too.
I repeat: this is a research step in the process of coming up with a new language feature proposal. I don't want to risk corrupting anything in the JVM; I'm looking for the possibility first, then I'm going to analyse possible risks and look out for them. I know that manipulating the stack by hand is ugly, but so is creating instances while omitting the constructor, and that is the basis for Objenesis. Dirty hacks may be dirty, but they may help introduce something cool.
PS. I know that Quasar and Lightwolf exist, but, as above, those are concurrency-focused frameworks.
EDIT
A little clarification: I'm looking for something that will be compatible with future JVM and library versions. Preferably we're talking about something that is considered stable public API, but if the solution lies in something internal, yet almost standard or becoming standard after being internal (like sun.misc.Unsafe), that will do too. If it is doable by a C extension using only the C JVM API, that's OK. If it is doable with bytecode manipulation, that's OK too (I think that MAY be possible with ASM).
I think there is a way of achieving what you want using JVMTI.
Although you cannot directly do what you want (as stated in a comment above), you can instrument/redefine methods (or entire classes) at run time. So you could redefine every method to directly call the next one, thereby "restoring the execution context", and as soon as you have the stack you want, redefine them back to your original code.
For example: let's say you want to restore a stack where A called B and B called C.
When A is loaded, change the code to directly call B. As soon as B is loaded, redefine it to directly call C. Call the topmost method (A). As soon as C gets called (which should be very fast now), redefine A and B to their original code.
If there are multiple threads involved and parameter values that must be restored, it gets a little more complicated, but still doable with JVMTI. However, this would then be worth another question ;-).
Hope this helps. Feel free to contact me or comment if you need clarification on anything.
EDIT:
Although I think it IS doable, I also think this is a lot (!!!) of work, especially when you want to restore parameters, local variables, and calling contexts (like this pointers, held locks, ...).
EDIT as requested: Assume the same stack as above (A calling B calling C). Although A, B, and C have arbitrary code inside them, just redefine them like this:

void A() { B(); }
void B() { C(); }
void C() { redefine(); }

As soon as you reach the redefine method, redefine all classes with their original code. Then you have the stack you want.
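A minimal sketch of the same dance at the Java level, using java.lang.instrument (JVMTI itself is a native API; where the replacement class-file bytes come from, e.g. pre-compiled variants or a bytecode library like ASM, is deliberately left open here, and the class names are the A/B/C from the example):

import java.lang.instrument.ClassDefinition;
import java.lang.instrument.Instrumentation;

public class StackRebuildAgent {
    private static Instrumentation inst;

    // attach as a dynamic agent; the JVM hands us the Instrumentation instance
    // (the agent jar needs Can-Redefine-Classes: true in its manifest)
    public static void agentmain(String args, Instrumentation i) {
        inst = i;
    }

    // step 1: make A call straight through to B, and B straight through to C
    static void installCallThroughs(byte[] aCallsB, byte[] bCallsC) throws Exception {
        inst.redefineClasses(
                new ClassDefinition(Class.forName("A"), aCallsB),
                new ClassDefinition(Class.forName("B"), bCallsC));
    }

    // step 2: once C is on the stack, put the original bytecode back
    static void restoreOriginals(byte[] aOriginal, byte[] bOriginal) throws Exception {
        inst.redefineClasses(
                new ClassDefinition(Class.forName("A"), aOriginal),
                new ClassDefinition(Class.forName("B"), bOriginal));
    }
}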
I'm not sure this tool fits your case, but you can check GDB: http://en.wikipedia.org/wiki/GNU_Debugger.
GDB offers extensive facilities for tracing and altering the execution of computer programs. The user can monitor and modify the values of programs' internal variables, and even call functions independently of the program's normal behavior.
I am converting a PHP plugin to ColdFusion. The PHP code uses OO concepts, so classes and objects are involved.
How can I convert those classes to ColdFusion classes and create objects for them?
I have also created a Java class and instantiated an object from it using the <cfobject> tag, but I need ColdFusion classes and objects.
Please let me know if you have any ideas.
ColdFusion does have classes and objects and follows a limited set of OOP principles. You can do inheritance and interfaces; polymorphic functions are still not allowed.
Classes in ColdFusion are called Components: CFC = ColdFusion Component. Depending on the ColdFusion version, you can write them in script mode or in tag mode.
You can refer to the documentation for CF8 about creating components and their objects.
The createObject() method you have mentioned is one way of creating different types of objects. The other ways are to use <cfinvoke> or <cfobject>.
Hope this helps. Just read the docs in detail and they will help you every time.
Realistically you should be able to solve this by reading the docs a bit more thoroughly than you already have. However this question is fairly easily answered. Firstly let me disabuse you of something though:
there is no option to create classes in coldfusion without using the java, com and corba
This is just you not reading properly. Even on the page you link to (cfobject, which points to an obsolete version of ColdFusion, btw), the third link it provides, "component object", discusses instantiating native CFML "classes" ("components" in CFML parlance, for some reason). It's perhaps not clear from top-level browsing that a "component" is a "class", but if you're learning something, you should be doing more than top-level browsing.
You are approaching your learning from a very odd angle: reading up on how to instantiate an object is not the direction you ought to be taking if you want to find out how to define the class the object will be an instance of. It kinda suggests a gap in your knowledge of OO (which could make this work a challenge for you).
Anyway, of course CFML allows for the definition of classes and the usage thereof, natively in the language. And has been able to do so since v6.0 (although this was not really production-ready until 6.1, due to some poor implementation decisions), over a decade ago.
The answer to your broader question, can be found by reading the docs starting with "Building and Using ColdFusion Components". But the basic form is:
// Foo.cfc
component {
public Foo function init(/* args here */){
// code here
}
// etc
}
And that sort of thing.
I have always worked on statically typed languages (C/C++, Java). I have been playing with Clojure and I really like it.
One thing I am worried about: say I have a function that takes 3 modules as arguments, and along the way the requirements change and I need to pass another module to it. In Java I just change the function and the compiler complains everywhere I used it. But Clojure won't complain until the function is called. I can do a regex search and replace, but it seems there is a chance to miss a call, and it will go unnoticed until that function is actually called. How do you guys deal with this?
This is one of the reasons automated testing/test driven development is even more important in dynamically typed languages. I haven't used Clojure (I mostly use Ruby), so unfortunately I can't recommend a specific testing framework.
The first thing I'd like to mention is that Bruce Eckel has written a very interesting article called Strong Typing vs Strong Testing (the link is down at the moment, unfortunately, but hopefully it will be up soon).
His idea is that with compiled languages, the compiler just acts as the first, automatic level of testing. When you move to a dynamic language, you lose this first level of automatic testing. But in both cases this level is just one part of testing, and not even a very important part.
His point is that if you're developing programs properly, i.e. doing some form of tests and regression tests, the lack of a compiler will only force you to add some more, somewhat basic, tests anyway, which is why it's no big loss.
So I guess the first answer I'd give you is, focus on your testing, something you should be doing anyway, and such changes shouldn't affect you too badly.
The second thing I'd like to mention is that many dynamic languages that I've seen (for example, Python) have much better abilities to change what methods/classes do without breaking existing code.
For example, with Python, if your method used to accept two parameters but now requires a third, you can always add a default parameter without breaking any existing code, while new code can take advantage of it. This is a very basic technique, but in Python's case (and I assume in most other dynamic languages as well), these techniques can get much more interesting; since they're dynamic, you can pretty much change the implementation of functions for specific modules, change what variables mean, etc.
I'd suggest looking at which techniques Clojure has that allow similar things, and deciding whether they apply in your situation.
You do the same thing you did if the method was part of a public interface that you weren't the only user of.
You add a new method with the extra module and change the old one to call the new one with a suitable default.
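In Java terms the same trick looks like this (all names here are invented for the sketch): the old signature survives as a thin delegate, so existing callers keep working.

interface Module {
    static Module defaultModule() {
        return new Module() {};   // placeholder default, just for the sketch
    }
}

class Window {
    // old signature: kept so existing callers don't break
    void render(Module a, Module b, Module c) {
        render(a, b, c, Module.defaultModule());
    }

    // new signature with the extra module
    void render(Module a, Module b, Module c, Module d) {
        // actual implementation
    }
}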
Oh, and if your program is that big, make sure you have good tests (test-is should make that simpler than it is in Java).
Test coverage is definitely important. But a dynamically typed language will allow you to work in a different way. In a statically typed language (like Java), a change to an interface requires modifying all the callers. In Ruby you could do this, but probably won't; instead, you'll probably add flexibility to the method in one of a few ways. Namely:
You tend to have very few methods that take as many as three parameters in Ruby (as opposed to Java). Because you don't have Java's strongly typed interfaces, you break the problem down into smaller pieces and steps. It's much more common to write methods that take just one parameter, and then refactor when it becomes more complex.
It's possible, and common, to leave the old behavior in place while adding more arguments. For example, if you have to add a third argument to a two-argument method, you can set its default value to preserve the old behavior (saving you a refactor). If you are familiar with JavaScript libraries like jQuery, they take advantage of this everywhere with "optional" arguments.
Similar to optional arguments, methods can grow to take a flexible parameter list. With solid test coverage, you can quite easily add new behavior to an existing method and know you haven't broken the existing code. In Rails, methods like "render" take a wide range of options.
You're not completely without compiler support in Clojure. In the specific example you give, it's the arity of the function that changed, which would be picked up when compiling the Clojure code. I'm still making the static -> dynamic typing transition myself and find this comforting!
You lose some level of refactoring and type safety when you move to dynamic languages. The more information the compiler has, the more it can do at compile time for you.
Tim Bray discusses it here, a critique of it by Cedric is here, and there is a post on Artima discussing it at length.
If you really need static typing, you can use https://github.com/clojure/core.typed and its Leiningen module to check types statically.