In the companies I've worked at, I've seen a lot of use of prefixes to indicate the scope or origin of variables, for example m for class members, i for a method's internal (local) variables and a (or p) for method parameters:
public class User {
    private String mUserName;                          // "m" prefix: class member

    public void setUserName(final String aUserName) {  // "a" prefix: method parameter
        final String iUserName = "Mr " + aUserName;    // "i" prefix: local variable
        mUserName = iUserName;
    }
}
What do you think about it? Is it recommended (or, on the contrary, discouraged)? I found it quite ugly at first, but the more I use it, the more I find it convenient, for example when working on big methods.
Please note that I'm not talking about the Hungarian notation, where prefixes indicate the type rather than the scope.
I've also worked in shops that had rigid prefix notation requirements, but after a while this became a "smell" that the code had grown out of control and global variables were leaking from everywhere, indicating poor code and review practices.
Java's "this." notation is the prefered way to reference a field, over a local. The use of "m" prefix for variables was popularized by that "Micro.." company as a branding gimmick (they even said "don't use that because we do").
The general rule I follow is to name the variable according to what it is used to store. The variable name is simply an alias. If it stores a user name, then userName is valid. If it is a list of user names, then userNames or userNameList is valid. However, I avoid including the "type" in the variable-name now, because the type changes quite often (shouldn't a collection of user names be a set, in practice? and so on...)
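For instance (hypothetical names, just to illustrate naming by content rather than by type):

import java.util.LinkedHashSet;
import java.util.Set;

class NamingExample {
    // If this later changes from a List to a Set, the name "userNames" still fits.
    Set<String> userNames = new LinkedHashSet<>();
    String userName = "Some User";
}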
At the end of the day, if the variable name is useful for remembering what the code is doing down the road, it is probably a good idea. Maintainability and readability trump "perceived" efficiency and terse syntax, especially because modern JIT compilers rewrite your code based on observed usage patterns anyway.
I hope this helps somewhat, and am happy to supply more details of any claims herein.
P.S. I highly recommend The Elements of Java Style for these types of questions. I used to work with the authors and they are geniuses when it comes to style!
Note: yours is a very opinion-based question (generally frowned on in StackOverflow these days), but I still think it's a worthwhile topic.
So here's my perspective, for what it's worth:
Personally, I think an indicator of scope in variable names can be helpful, both when writing and reading code. Some examples:
If I am reading a class method and I don't see any "m_XXX" being used, I can conclude "this function might as well be static — it doesn't use instance data." This can be done with a quick scan of variables, if the names have that information.
Any time I see "g_XXX" (global), I can start being worried, and pay closer attention (: Especially writing to a global is a big red flag, and especially especially if there is any concurrency/threading involved.
Speaking of concurrency, there is a pretty clear ordering of "safeness" for mutable data: locals are okay, members are dangerous, globals are very dangerous. So, when thinking about such code, having variable scope in mind is important. For this reason, in C/C++ I think having a prefix for function-static variables is useful, too (they're essentially "global" across invocations of that function). More an indication of lifetime than scope, in that case.
It can help junior developers think about the above issues more actively.
The popularity of this convention varies by language. I see it in C++ and C most often. Java somewhat frequently. Not very much in Python, Perl, Bash or other "scripting" languages. I wonder if there is some correlation between "high performance" code and benefit from such a scheme. Maybe just historical happenstance, though. Also, some languages have syntax that already includes some of this info (such as Python's self.xxx).
I say disregard any arguments along the lines of "oh, Microsoft invented that for XYZ, ignore it" or "it looks clunky." I don't care who invented it or why or what it looks like, as long as it's useful (:
Side note: Some IDEs can give you scope information (by hovering your mouse, doing special highlighting, or otherwise), and I can understand that people using such systems find putting that info in the variable names redundant. If your whole team uses a standard environment like that, then great; maybe you don't need a naming scheme. Usually there is some variation across people though, or maybe your code review and diff tools don't offer similar features, so there are still often cases where putting the info inside the text itself is useful.
In an ideal world, we would have only small functions that don't use lots of variables, and the problem such naming prefixes try to solve would not exist (or be small enough to not warrant "corrupting" all your code with such a scheme just to improve some corner cases).
But we do not live in an ideal world.
Small functions are great, but sometimes that is not practical. Your algorithm may have inherent complexity that cannot be expressed succinctly in your language, or you may have other constraints (such as performance or available development time) that require you to write "ugly" code. For the above-mentioned reasons, a naming scheme can help in such cases, and others.
Having a variable naming convention works great for teams, and IMO it must be used for languages that are not strongly typed, e.g. JScript.
In JScript the only type is a 'var', which is basically evaluated at run-time. It becomes more important in JScript to decorate the variables with the type of the data that is expected to be in them. Use consistent prefixes that your team can decide on, e.g. s or str or txt for string, i/j/k/l/m/n for integers (like FORTRAN :-) ), q for double, obj or o for non-primitive data, etc., basically using common sense. Do not use variable names without any prefix unless the variable name clearly indicates the data type.
e.g.
A variable named "answer" is a bad name, since the answer could be text or a number, so it should be strAnswer or sAnswer, jAnswer, qAnswer, etc.
But "messageText" or "msgTxt" is good enough, because it is clear that the content is some text.
Naming a variable "dataResponse" or "context", on the other hand, is confusing.
Sometimes you need to fix or debug something on the server where the only editors are vi or Notepad, or in the worst cases nano/sed, with no contextual help from the editor; having a coding convention can be really helpful there.
Reading the code is also faster if the convention is followed. It is like having the prefix Mr. or Ms. to determine the gender: Mr. Pat or Ms. Pat, when Pat itself does not tell you whether it is Patrick or Patricia.
I am trying to get a compile-time-safe field reference in Java, done not with reflection and strings, but by directly referencing the field. Something like
MyClass::myField
I have tried the usual reflection way, but you need to reference the fields as strings, and this is error-prone in case of a rename and will not throw a compile-time error.
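For reference, this is the string-based reflection approach I mean (class and field names are just placeholders):

import java.lang.reflect.Field;

public class MyClass {
    public String myField;

    public static void main(String[] args) throws Exception {
        // The field is referenced by its name as a plain string; renaming myField
        // compiles fine everywhere else but fails here at runtime.
        Field f = MyClass.class.getDeclaredField("myField");
        System.out.println(f.getName());
    }
}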
EDIT: just want to clarify that my end goal is to get the field NAME for entity purposes, such as referencing the entity field in a query, and not the value.
Unfortunately, you might as well want to wish for a unicorn. The notion of 'a field reference', in the sense that you are asking for, simply isn't part of java-the-language.
That MyClass::myThing syntax works only for methods. There's simply no such thing for fields. It's unfortunate.
It's very difficult to give objective reasons for the design decisions of any language; it either requires spelunking through the designers' collective heads, which requires magic or science fiction, or asking them to spill the beans, which they're probably not going to do in a Stack Overflow question. Sometimes (and for more recent Java features, such as this one) the design is debated in public. Specifically, you can search the OpenJDK lambda-dev mailing list, where no doubt this question was covered. You'll need to go through, and I'm not exaggerating, tens of thousands of posts, but, the good news is, it's searchable.
But, I can guess / dig through my own memory as I spent some time discussing Project Lambda as it was designed:
Direct field access isn't common in the Java ecosystem. The language allows direct field access, but few Java programs are written that way, so why make a language feature that would only be immediately useful and familiar to an exotic bunch?
The infrastructure required is also rather significant - a method reference isn't allowed to be written in Java unless you use it in a context that makes it possible for the compiler to 'treat' it as a type - specifically, a @FunctionalInterface: any interface that contains exactly one abstract method (other than methods that already exist in j.l.Object itself). In other words, this is fine:
Function<String, String> f = String::toLowerCase;
But this is not:
Object o = String::toLowerCase;
So, let's imagine for a moment that field refs did exist. What does that mean? What is the 'type' of the expression MyClass::myField? Perhaps a new concept: an interface with two methods, one of which takes no arguments and returns a T, while the other takes a T and returns nothing (to match the acts of reading the field and writing it), but where it's also acceptable if it's a FunctionalInterface matching either one of those, perhaps? That sounds complicated.
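To make that concrete, here is a purely hypothetical sketch of what such a target type might look like; nothing like this exists in the JDK, and with two abstract methods it would not even qualify as a @FunctionalInterface today:

// Hypothetical only: a possible target type for a field reference MyClass::myField.
interface FieldRef<T> {
    T get();            // corresponds to reading the field
    void set(T value);  // corresponds to writing the field
}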
The general mindset of the java design team right now (and has been for a while) is not to overcomplicate matters: Do not add features unless you have a good reason. After all, if it turns out that the community really clamours for field refs, they can be added. But, if on the other hand, they were added but nobody uses them, they can't be removed (and thus you've now permanently made the language more complicated and reduced room for future language features for a thing nobody uses and which most style guides tell you to actively avoid).
That's, I'm pretty sure, why they don't exist.
Possible Duplicate:
When should one use final?
I tend to declare all variables final unless reassignment is necessary. I consider this to be a good practice because it allows the compiler to check that the identifier is used as I expect (e.g. it is not mutated). On the other hand it clutters up the code, and perhaps this is not "the Java way".
I am wondering if there is a generally accepted best practice regarding the non-required use of final variables, and if there are other tradeoffs or aspects to this discussion that I should be aware of.
The "Java way" is inherently, intrinsically cluttery.
I say it's a good practice, but not one I follow.
Tests generally ensure I'm doing what I intend, and it's too cluttery for my aesthetics.
Some projects routinely apply final to all effectively final local variables. I personally find reading such code much easier, due to the lessened cognitive load. A non-final variable could be reassigned anywhere, and it's especially problematic in code with multiple levels of nested ifs and fors. You never know what code path may have reassigned it.
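A minimal sketch of what that reads like (hypothetical method, not from the original question):

import java.util.List;

class PriceCalc {
    // With final locals, each value is assigned exactly once, so the reader
    // never has to track reassignments through nested branches.
    int totalPrice(final List<Integer> prices) {
        final int subtotal = prices.stream().mapToInt(Integer::intValue).sum();
        final int tax = subtotal / 10; // hypothetical flat 10% tax, for illustration only
        return subtotal + tax;
    }
}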
As for the concern of code clutter, when applied to local variables I don't find it damaging—in fact it makes me spot all declarations more easily due to syntax coloring.
Unfortunately, when final is used on parameters, catch blocks, enhanced-for loops and all other places except local variables in the narrow sense, the code does tend to become cluttered. This is quite unfortunate because a reassignment in these cases is even more confusing and they should really have been final by default.
There are code linting tools that will flag the reassignment of these variables, and that helps.
I consider it good practice, more for maintenance programmers (including me!) than for the compiler. It's easier to think about a method if I don't need to worry about which variables might be changing inside it.
Yes, it's a very good idea, because it clearly shows what fields must be provided at object construction.
I strongly disagree that it creates "code clutter"; it's a good and powerful aspect of the language.
As a design principle, you should make your classes immutable (all final fields) if you can, because they may be safely published (i.e. freely passed around without fear they will be corrupted). Note, though, that the fields themselves need to refer to immutable objects too.
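For instance, a minimal sketch of such an immutable class (hypothetical names):

// All fields are final and set exactly once in the constructor, so instances
// can be freely shared without fear of corruption.
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int x() { return x; }
    public int y() { return y; }
}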
It definitely gives you better code; it's easy to see which variables are going to be changed.
It also informs the compiler that the value is not going to change, which can result in better optimization.
Alongside that, it allows your IDE to give you a compile-time notification if you make a mistake.
Some good analysis tools, like PMD, advise always using final unless reassignment is necessary. So the convention in those tools says it's a good practice.
But I think that so many final tokens in the code may make it less human-friendly.
I would say yes, not so much for the compiler optimisation, but rather for readability.
But personally I don't use it. Java is quite verbose by itself, and if we followed everything considered "good practice", the code would be unreadable from all the boilerplate. It's a matter of preference, though.
You pretty much summed up the pros and cons...
I can just add another pro:
the reader of the code need not reason at all about the value of a final variable (except for rare bad-code cases).
So, yes, it's a good practice.
And the clutter isn't that bad, after you get used to it (like unix :-P). Plus, typical IDEs do it automatically for ya...
Is there a practical or historical reasoning behind languages allowing the most egregious naming convention taboos? The two most obvious examples are uppercase function names and lowercase class names, which I often see violated in stackoverflow newbie questions.
There is no style-justification that I know of where you can do these things, so why are they even allowed to compile? At the moment, my theories are
It was not such a taboo when the language was built,
It would make some important edge cases impossible, or
It's not the language's job to enforce good style.
I can find nothing on this topic (some links are below).
There are some conventions, such as underscores beginning variable names, or Hungarian notation (the latter of which I have been personally disabused of in comments) that are not overwhelmingly accepted, but are less divisive.
I'm asking this as a Java programmer, but would also be interested in answers from other languages.
Some links:
http://en.wikipedia.org/wiki/Naming_convention_(programming)#Java
http://docs.oracle.com/javase/tutorial/java/javaOO/methods.html
http://en.wikibooks.org/wiki/Java_Programming/History
How important are naming conventions for getters in Java?
Coding style is like writing style. If you write in a style that is diFFicult TO READ And does not read very well your mINd hAs GReat diFFICULTies actUally understanding what you are reading.
If, however, like in normal reading text - it is laid out in a form that matches well with what your mind expects then it is clear and easy to understand.
On the other hand, if the language actually FORCED you to write everything using EXACTLY the right syntax, then not only would it make coding slow and awkward, but it would restrict your expressiveness.
Many years ago I came across a language that allowed you to add strange symbols to variable names. Users were allowed to do things like:
var a=b = a = b;
var c<d = c > d;
if ( a=b & c<d ) ...
i.e. a=b was a perfectly acceptable variable name, so was c<d. As I am sure you would agree, this led to many mistakes of interpretation. Even if a new language allowed that I would not use it. Coding standards are for consistency and helping the human mind understand, syntax is for helping the computer mind understand.
Depending on the language designers' intent, some languages are more opinionated than others when it comes to implementation and how the designers think things should be done.
The easiest example I can think of right now is Go, which has unit testing and code formatting built in. The designers are of the opinion that things should be done a certain way and they provide you the tools to do it.
Other languages do nothing of the sort, like Scala, where the language designers were very unopinionated in their implementation and supply you with the tools to accomplish any given task in 10 different ways.
This isn't to say that some languages are built under tyrannical rule and others are extremely loose with their requirements. It's merely a choice made by the language designers which we end up having to live with.
So I have seen a lot of different coding styles, but I'm only going to talk about two big ones. I use a style where I just name everything like their class name when used in a general sense, like this:
String str = "This is some text";
But over at Java Practices, I see a style where they will put an 'I' in front of interface class names, or they put 'f' or 'a' in front of object names. Take this snippet from "Don't subclass JDialog or JFrame":
/**
 Constructor.
 <P>Called when adding a new {@link Movie}.
*/
MovieView(JFrame aParent) {
    fEdit = Edit.ADD;
    buildGui(aParent, "Add Movie");
    fStandardDialog.display();
}
Why do programmers code in this style? Do a lot of people use it? And also, do professional programmers use this style?
Thanks in advance :)
This my personal opinion.
I prefer not to use prefixes on interfaces (or anything else for that matter). I just prefer to call it what it is. Interfaces are meant to represent an object (or part of it) without making any implication towards its actual implementation.
Say you have a Car interface. And AudiA4 could be an implementation of that car. If you just bought a new Audi A4, you say, "I bought a new AudiA4" to those you think care about the kind of car you bought. To others, you can say "I bought a new Car". Certainly, you never say, I bought a new IAudiA4 or a new ICar.
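In code, that naming looks like this (hypothetical types, no 'I' prefix anywhere):

public interface Car {
    void drive();
}

// Named after what it concretely is, not CarImpl or ICar.
class AudiA4 implements Car {
    @Override
    public void drive() {
        System.out.println("Driving the A4");
    }
}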
The JFrame naming came about because it's a Swing frame, and Swing came after AWT (the original Java windowing toolkit, which already had a Frame class). Since both AWT and Swing were available at the same time, they used the 'J' prefix to demarcate the toolkits (note that JFrame extends Frame, btw). They could have called it SwingFrame, but the 'J' prefix was apparently a good choice to represent the Swing package. So basically this prefix is just a naming choice, not a convention similar to the 'I' for interfaces (or the Impl suffix for implementations you sometimes see as well).
My point is you always have to name your classes and interfaces according to exactly what they represent. No more, no less. No point having a CarImpl class. Who cares that it's an implementation. Which implementation is it? Why does it need its own class? What more do I get when I use a CarImpl? What happens when I make a second implementation - do I call it CarImpl2? All this is very constraining and doesn't bring much value.
Call it what it is. That's the only rule I'd set forth.
All this being said, the Eclipse project, amongst many others, does indeed use the I-for interface notation (WIKI). But it's their choice. I've seen professionals use it as well. I don't like it, but generally speaking, I respect the team's naming convention.
There is a book about such things - Code Complete by Steve McConnell
I might be wrong, but the only universal convention I've seen when naming Java variables is the use of camel case notation; that's regarding the format of the name.
As for the name itself, I've always found useful to name the variables according to what they actually are. In your String example, although you mention this would be in a general purpose variable, I would still give it a more meaningful name, like:
String message = "This is some text";
Or:
String msg = "This is some text";
Some of the Java libraries I've seen source code from tend to be quite verbose when naming variables, others just use single letter names when the variable is used in a reduced context:
public Rectangle setLocation(Point p) {
return setLocation(p.x(), p.y());
}
I think the main goal when naming variables (or anything else for that matter) is always to communicate in the best way possible the intent of what you were trying to do.
Code styles help make it easier for developers to read and understand each other's code. Java conventions prescribe the use of short and descriptive identifiers, but unfortunately short and descriptive cannot always be achieved together, so you may have to compromise shortness for clarity. Hence:
atmosPres - still clear but short
atmosphericPressure - this can't be mistaken
atm - because everyone just knows ATM, right?
ap - WTF?
I first encountered the practice of prefixing variable names with a three-letter type identifier while developing programs in C# - it helps the reader know what data type is contained in a variable without having to look for its declaration (due to short memory, or maybe laziness?). Interface types are also prefixed with I, e.g. IList, to distinguish them from other data types (and for what purpose, I just dunno).
For me, the worst code conventions are in C++ (if indeed there are any at all) - there's a mix of case types for data types and variables, conflicting method and function naming styles and endless cryptic abbreviation which all make it hard for non-regular C++ coders to read and understand C++ code.
What you're describing is sometimes referred to as "Hungarian notation", though it's not "Hungarian" in the truest sense of the term.
Basically, the idea is to differentiate between different classes of variables -- instance variables, local variables, parameters, et al. This serves two basic purposes:
It helps avoid name collisions, where, say, there might naturally (using "descriptive" variable naming) be an instance variable ralphsLeftFoot and a local variable ralphsLeftFoot. Using a prefix allows the two to co-exist, and, especially in languages where the local might (without warning message) "hide" the instance variable, prevents unintended changes in semantics from such collisions.
It makes the scope of variables obvious, so that, during maintenance, one does not accidentally assume that a local variable has instance scope or vice-versa.
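A minimal sketch of the collision/shadowing issue from the first point above (hypothetical names):

public class Ralph {
    private String ralphsLeftFoot = "instance value";

    void update() {
        String ralphsLeftFoot = "local value"; // silently hides the instance variable
        System.out.println(ralphsLeftFoot);    // prints "local value", not the field
    }
}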
Is this approach worthwhile? Many developers use a subset of the scheme, apparently to good effect. For instance, many Objective-C developers will name the instance variable behind a "property" with a leading "_" character, to clearly differentiate between the two and to avoid accidentally using the instance variable when the property was intended.
Likewise, many developers in a number of languages will prefix instance variables with a letter (often "m") to differentiate them from "normal" local/parameter variables.
What's probably most important is to pick a style that you (and your team) likes and stick with it. If the team likes the prefixes then use the prefixes. If the team prefers something else, stick with that. Of course, changing preferences, when a better choice is "revealed" to you, is OK, but don't switch back and forth willy-nilly.
So I have seen a lot of different coding styles, but I'm only going to talk about two big ones. I use a style where I just name everything like their class name when used in a general sense, like this:
String str = "This is some text";
That is awful. Imagine if someone were reading your code, trying to understand what it was doing, and they came across a variable named str. It doesn't convey any meaning to the person who has to read this code as to your intentions.
Conventions are used by and for people to improve readability, and thus the overall quality of software. Without a convention, any project that has more than one developer will suffer from varying styles that will only hurt the readability of the code. If you want to know what professionals do, look around on the internet for various conventions.
C# has syntax for declaring and using properties. For example, one can declare a simple property, like this:
public int Size { get; set; }
One can also put a bit of logic into the property, like this:
public string SizeHex
{
    get
    {
        return String.Format("{0:X}", Size);
    }
    set
    {
        Size = int.Parse(value, NumberStyles.HexNumber);
    }
}
Regardless of whether it has logic or not, a property is used in the same way as a field:
int fileSize = myFile.Size;
I'm no stranger to either Java or C# -- I've used both quite a lot and I've always missed having property syntax in Java. I've read in this question that "it's highly unlikely that property support will be added in Java 7 or perhaps ever", but frankly I find it too much work to dig around in discussions, forums, blogs, comments and JSRs to find out why.
So my question is: can anyone sum up why Java isn't likely to get property syntax?
Is it because it's not deemed important enough when compared to other possible improvements?
Are there technical (e.g. JVM-related) limitations?
Is it a matter of politics? (e.g. "I've been coding in Java for 50 years now and I say we don't need no steenkin' properties!")
Is it a case of bikeshedding?
I think it's just Java's general philosophy towards things. Properties are somewhat "magical", and Java's philosophy is to keep the core language as simple as possible and avoid magic like the plague. This enables Java to be a lingua franca that can be understood by just about any programmer. It also makes it very easy to reason about what an arbitrary isolated piece of code is doing, and enables better tool support. The downside is that it makes the language more verbose and less expressive. This is not necessarily the right way or the wrong way to design a language, it's just a tradeoff.
For 10 years or so, Sun resisted any significant changes to the language as hard as they could. In the same period C# has been through a riveting development, adding a host of new cool features with every release.
I think the train left on properties in Java a long time ago. They would have been nice, but we have the JavaBeans specification. Adding properties now would just make the language even more confusing. While the JavaBeans specification IMO is nowhere near as good, it'll have to do. And in the grander scheme of things I think properties are not really that relevant. The bloat in Java code is caused by other things than getters and setters.
There are far more important things to focus on, such as getting a decent closure standard.
Property syntax in C# is nothing more than syntactic sugar. You don't need it, it's only there as a convenience. The Java people don't like syntactic sugar. That seems to be reason enough for its absence.
Possible arguments, based on nothing more than my uninformed opinion:
The property syntax in C# is an ugly hack, in that it mixes an implementation pattern with the language syntax.
It's not really necessary, as it's fairly trivial.
It would adversely affect anyone paid based on lines of code.
I'd actually like there to be some sort of syntactical sugar for properties, as the whole syntax tends to clutter up code that's conceptually extremely simple. Ruby for one seems to do this without much fuss.
On a side note, I've actually tried to write some medium-sized systems (a few dozen classes) without property access, just because of the reduction in clutter and the size of the codebase. Aside from the unsafe design issues (which I was willing to fudge in that case), this is nearly impossible, as every framework, every library, every everything in Java auto-discovers properties by get and set methods. They are with us until the very end of time, sort of like little syntactical training wheels.
I would say that it reflects the slowness of change in the language. As a previous commenter mentioned, with most IDEs now, it really is not that big of a deal. But there are no JVM specific reasons for it not to be there.
Might be useful to add to Java, but it's probably not as high on the list as closures.
Personally, I find that a decent IDE makes this a moot point. IntelliJ can generate all the getters/setters for me; all I have to do is embed the behavior that you did into the methods. I don't find it to be a deal breaker.
I'll admit that I'm not knowledgeable about C#, so perhaps those who are will overrule me. This is just my opinion.
If I had to guess, I'd say it has less to do with a philosophical objection to syntactic sugar (they added autoboxing, enhanced for loops, static import, etc - all sugar) than with an issue with backwards compatibility. So far at least, the Java folks have tried very hard to design the new language features in such a way that source-level backwards compatibility is preserved (i.e. code written for 1.4 will still compile, and function, without modification in 5 or 6 or beyond).
Suppose they introduce the properties syntax. What, then does the following mean:
myObj.attr = 5;
It would depend on whether you're talking about code written before or after the addition of the properties feature, and possibly on the definition of the class itself.
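A hypothetical sketch of that ambiguity (remember, Java has no property syntax; the comments only describe the what-if):

// Today this is unambiguously a direct field write:
class MyObj {
    public int attr;
}

class Demo {
    void run() {
        MyObj myObj = new MyObj();
        myObj.attr = 5; // writes the field directly
        // If "attr" could later be redeclared as a property backed by a setter,
        // would this same line silently turn into a method call? That is the
        // source-compatibility question.
    }
}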
I'm not saying these issues couldn't be resolved, but I'm skeptical they could be resolved in a way that led to a clean, unambiguous syntax, while preserving source compatibility with previous versions.
The python folks may be able to get away with breaking old code, but that's not Java's way...
According to Volume 2 of Core Java (Forgotten the authors, but it's a very popular book), the language designers thought it was a poor idea to hide a method call behind field access syntax, and so left it out.
It's the same reason that they don't change anything else in Java - backwards-compatibility.
- Is it because it's not deemed important enough when compared to other possible improvements?
That's my guess.
- Are there technical (e.g. JVM-related) limitations?
No
- Is it a matter of politics? (e.g. "I've been coding in Java for 50 years now and I say: we don't need no steenkin' properties!")
Most likely.
- Is it a case of bikeshedding?
Uh?
One of the main goals of Java was to keep the language simple.
From the: Wikipedia
Java suppresses several features [...] for classes in order to simplify the language and to prevent possible errors and anti-pattern design.
Here are a few little bits of logic that, for me, lead up to not liking properties in a language:
Some programming structures get used because they are there, even if they support bad programming practices.
Setters imply mutable objects. Something to use sparingly.
In good OO design, you ask an object to do some business logic for you. Properties imply that you are asking it for data and manipulating the data yourself.
Although you CAN override the methods in setters and getters, few ever do; also a final public variable is EXACTLY the same as a getter. So if you don't have mutable objects, it's kind of a moot point.
If your variable has business logic associated with it, the logic should GENERALLY be in the class with the variable. If it does not, why in the world is it a variable??? It should be "data" and live in a data structure so it can be manipulated by generic code.
I believe Jon Skeet pointed out that C# has a new method for handling this kind of data, Data that should be compile-time typed but should not really be variables, but being that my world has very little interaction with the C# world, I'll just take his word that it's pretty cool.
Also, I fully accept that depending on your style and the code you interact with, you just HAVE to have a set/get situation every now and then. I still average one setter/getter every class or two, but not enough to make me feel that a new programming structure is justified.
And note that I have very different requirements for work and for home programming. For work where my code must interact with the code of 20 other people I believe the more structured and explicit, the better. At home Groovy/Ruby is fine, and properties would be great, etc.
You don't need the "get" and "set" prefixes; to make it look more like properties, you can do it like this:
public class Person {
    private String firstName = "";
    private Integer age = 0;

    public String firstName() { return firstName; }          // getter
    public void firstName(String val) { firstName = val; }   // setter

    public Integer age() { return age; }                     // getter
    public void age(Integer val) { age = val; }              // setter

    public static void main(String[] args) {
        Person p = new Person();
        // set
        p.firstName("Lemuel");
        p.age(40);
        // get
        System.out.println(String.format("I'm %s, %d years old",
                p.firstName(),
                p.age()));
    }
}