(This is probably a duplicate, but I could not find it - feel free to point it out)
Consider the following Java class:
public class A<T0, T1> {
    public A(T0 t0, T1 t1) {
        ...
    }
}
Instantiating this class is easy using something along the lines of new A<Integer, String>(1, "X").
Suppose now that most instances of this class have a String as the second type parameter T1 and that the object of this type used in the constructor call is also pretty much standard.
If A had not been using generics, a common extension would be an additional constructor without the second argument:
public class A {
    public A(int t0, String t1) {
        ...
    }

    public A(int t0) {
        this(t0, new String("X"));
    }
}
Unfortunately, this does not seem to be possible for a class that does use generics - at least not without a forced cast:
public A(T0 t0) {
    this(t0, (T1)(...));
}
The reason? While this constructor only takes a single argument, it still uses two type parameters and there is no way to know a priori that whatever type T1 the user of the class supplies will be compatible with the default value used in the constructor.
A slightly more elegant solution involves the use of a subclass:
public class B<T0> extends A<T0, String> {
    ...
}
But this approach forces yet another branch in the class hierarchy and yet another class file with what is essentially boilerplate code.
Is there a way to declare a constructor that forces one or more of the type parameters to a specific type? Something with the same effects as using a subclass, but without the hassle?
Is there something fundamentally wrong in my understanding of generics and/or my design? Or is this a valid issue?
Easiest method is just to add a static creation method.
public static <T0> A<T0, String> newThing(T0 t0) {
    return new A<T0, String>(t0, "X");
}
(Perhaps choose a name appropriate for the particular usage. Usually no need for new String("...").)
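Calling it is then a one-liner (a small sketch, assuming newThing is declared on A itself; the compiler infers T0 from the argument):
A<Integer, String> a = A.newThing(1);   // T0 inferred as Integer; T1 fixed to String by the factory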
From Java SE 7, the diamond also cuts down the verbosity of the plain two-argument constructor call:
A<Thing, String> a = new A<>(thing, "X");
As I understand it, you want to have a second constructor that (if called) would force the generic type T1 to be a String.
However, the generics are specified BEFORE you call the constructor.
That second constructor, if valid, could allow someone to do this:
A<Integer, Integer> a = new A<Integer, Integer>(5);
The problem here is that you've specified the second type parameter as Integer BEFORE calling the constructor, and then the constructor would, in theory, fix that same parameter to String. That conflict is why I believe it's not allowed.
You could qualify the generic types, i.e.
A<T0, T1 super MyDefaultType> {
    public A(T0 t0) {
        this(t0, new MyDefaultType());
    }
}
You can't use T1 extends MyDefaultType, because if the user supplies a subclass of MyDefaultType for T1, a plain MyDefaultType instance would not be compatible with T1.
"Most instances" is the root problem.
Either T1 is a parameterized type or not. The single-argument constructor presumes both. Therein lies the problem.
The subclass solution solves the problem by making all instances satisfy T1=String.
A named constructor / factory method would also solve the problem, by ensuring T1=String.
public static <T0> A<T0, String> makeA(T0 t0) {
    return new A<T0, String>(t0, "foo");
}
Is there a way to declare a constructor that forces one or more of the type parameters to a specific type? Something with the same effects as using a subclass, but without the hassle?
I believe it is impossible. Think about it: the developer defines a generic class, i.e. the type parameter is chosen by the user when the object is created. How could the developer then define a constructor that forces the user to pick one specific type for that parameter?
EDIT:
If you need this, create a factory or a factory method that produces instances of this class with the predefined type parameter.
Subclass it. As far as I've ever been taught, that's one of the great features of OOP. Enjoy it. Disk space is cheap.
If it's an issue with future maintenance of the code, consider making the original class abstract and creating two subclasses from it (one with the double-generic constructor, and one with the single-argument constructor).
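For what it's worth, a minimal sketch of the plain subclass route (the class name and the default value "X" follow the question; the constructors are what the subclass has to add):
// B pins T1 to String and supplies the convenience constructor, so its callers never see the second type parameter.
public class B<T0> extends A<T0, String> {
    public B(T0 t0, String t1) {
        super(t0, t1);
    }

    public B(T0 t0) {
        super(t0, "X");   // the standard default value
    }
}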
Related
I'm new to Java programming. While reading through the code of an open source project, I came across a line of code which I can't understand:
final Type typeOfMap = new TypeToken<Map<String, Object>>() {}.getType();
My questions are:
I usually call a constructor like this: final Type typeOfMap = new TypeToken<Map<String, Object>>(). I have never seen it followed by other pieces of code such as {}.getType(). What kind of syntax is this?
Is {} an object? Why can you call a function on it?
P.S. Type is java.lang.reflect.Type, and TypeToken is com.google.gson.reflect.TypeToken.
I'm also new at Java, but as far as I know, that is a constructor call on an abstract generic class: new TypeToken<Map<String, Object>>() {}. The class itself may look something like public abstract class TypeToken<X>. As for the .getType() method, I'm not really sure how it is coded.
You reminded me that this is on my bucket list of things to learn and understand, but I'm pretty sure this code pattern is a little over-engineered (of course, that may be my bias precisely because I don't know it or what it could be useful for).
The .getType() method may simply be a public, non-abstract method inside the abstract class.
I personally have found that in some cases (just some) it is more convenient to instantiate abstract classes instead of extending them (which is how they are usually used), especially when the abstract object needs another object created at a specific point in its lifecycle, or when the abstract object needs interoperability within the same class.
If I'm not mistaken, that specific implementation, com.google.gson.reflect.TypeToken, uses reflection to get the type of an object without actually creating one (maybe it does behind the scenes). If you have ever tried to make a newInstance of an array of nested generic classes, you know what a headache it can become, because of something called "erasure".
I usually call a constructor like this: final Type typeOfMap = new TypeToken<Map<String, Object>>(). I have never seen it followed by other pieces of code such as {}.getType(). What kind of syntax is this?
It is the syntax for anonymous inner classes.
Is {} an object? Why can you call a function on it?
Yes, you get an object from it. That's why a method can be invoked on it.
Anonymous classes are useful when you need specific behaviour from a class just once. As in the example below: if you invoke sayHello on a normal A object, it returns Hello, but the behaviour of sayHello is changed for the object of the anonymous class, so it returns Bonjour this time.
public class SomeClass {
    public static void main(String[] args) {
        A defaultObj = new A();
        A customObj = new A() {
            @Override
            public String sayHello() {
                return "Bonjour";
            }
        };
        System.out.println(defaultObj.sayHello());
        System.out.println(customObj.sayHello());
    }
}

class A {
    String sayHello() {
        return "Hello";
    }
}
Output
Hello
Bonjour
The Gson documentation for TypeToken also explains the reason for, and usage of, the anonymous class: TypeToken is used to retrieve the generic type at runtime, since that information is otherwise unavailable at runtime because of type erasure.
https://www.javadoc.io/doc/com.google.code.gson/gson/2.6.2/com/google/gson/reflect/TypeToken.html
Represents a generic type T. Java doesn't yet provide a way to represent generic types, so this class does. Forces clients to create a subclass of this class, which enables retrieval of the type information even at runtime. For example, to create a type literal for List<String>, you can create an empty anonymous inner class:
TypeToken<List<String>> list = new TypeToken<List<String>>() {};
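To make the motivation concrete, here is a small self-contained sketch (the JSON string and class name are invented for illustration; the Gson calls are the library's standard API):
import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;
import java.lang.reflect.Type;
import java.util.Map;

public class TypeTokenDemo {
    public static void main(String[] args) {
        // The anonymous subclass {} records Map<String, Object> in its superclass signature,
        // which getType() can read back reflectively despite erasure.
        Type typeOfMap = new TypeToken<Map<String, Object>>() {}.getType();

        String json = "{\"name\":\"Alice\",\"age\":30}";      // sample input, for illustration only
        Map<String, Object> map = new Gson().fromJson(json, typeOfMap);
        System.out.println(map.get("name"));                  // prints Alice
    }
}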
I have a method (prepareErrorMessage) that accepts objects of type ErrorMessagePojoSuperclass. However, I only pass subclasses of ErrorMessagePojoSuperclass as arguments:
public class ErrorMessagePojoBundle extends ErrorMessagePojoSuperclass { /* additional methods omitted */ }

public class Tester {
    public void buildMessage() {                 // wrapper method added only so the snippet compiles
        ErrorMessagePojoBundle empb = new ErrorMessagePojoBundle();
        prepareErrorMessage(empb);
    }

    public void prepareErrorMessage(ErrorMessagePojoSuperclass errorMessagePojo) {
        // messageConverter is a field defined elsewhere in the original code
        String errorStatusMsg = messageConverter.convertXMLToString(errorMessagePojo);
    }
}
The class ErrorMessagePojoBundle has more methods than its superclass.
I need to make sure that when messageConverter.convertXMLToString(errorMessagePojo) runs, messageConverter processes an instance of the subclass, in this case the object empb. Any ideas? I want to solve this without casting. Thank you.
Any ideas? I want to solve this without the use of casting.
Your options are:
1. Defining an interface with the necessary method, having the subclass implement it, and using that interface as the parameter type rather than the superclass.
2. Changing the parameter type to the subclass, not the superclass.
3. instanceof and casting (not usually what you want to do).
Options 1 and 2 are basically just variants of each other.
In your example code, there's no reason for prepareErrorMessage to accept the superclass rather than the subclass (or an interface), since the only thing it does can only be done with the subclass (or something implementing the same interface).
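A rough sketch of option 1, with an invented interface name and method (they are placeholders, not part of your code):
// Hypothetical interface capturing the extra behaviour that only the subclass provides.
interface BundleAware {
    String getBundleDetails();   // invented method name, for illustration only
}

class ErrorMessagePojoBundle extends ErrorMessagePojoSuperclass implements BundleAware {
    @Override
    public String getBundleDetails() {
        return "bundle-specific data";
    }
}

// prepareErrorMessage now declares exactly what it needs, so no cast is required.
public void prepareErrorMessage(BundleAware errorMessagePojo) {
    String details = errorMessagePojo.getBundleDetails();
    // ... hand the object on to the converter as before
}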
In Java, can we assign a superclass object to a subclass reference?
I know that it is a weird question and practically not viable,
but I want to understand the logic behind why it is not allowed in Java.
class Employee {
    public void met1() {
        System.out.println("met1");
    }
}

class SalesPerson extends Employee {
    @Override
    public void met1() {
        System.out.println("new met1");
    }

    public void met2() {
        System.out.println("met2");
    }
}

public class ReferenceTest {
    public static void main(String[] args) {
        SalesPerson sales = new Employee(); // line 1
        sales.met1();                       // line 2
        sales.met2();                       // line 3
    }
}
What would have happened if Java allowed compilation of line 1?
Where would the problem arise?
Any inputs/link are welcomes.
If your SalesPerson sales = new Employee(); statement were allowed to compile, it would break polymorphism, one of the core features of the language.
You should also get familiar with what compile-time type and runtime type mean:
The compile-time type of a variable is the type it is declared as, while the runtime type is the type of the actual object the variable points to. For example:
Employee sales = new SalesPerson();
The compile-time type of sales is Employee, and the runtime type will be SalesPerson.
The compile-time type defines which methods can be called, while the runtime type defines what happens during the actual call.
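A tiny sketch of that distinction, reusing the classes from the question (the getClass() call is only there to make the runtime type visible):
Employee sales = new SalesPerson();                      // compile-time type Employee, runtime type SalesPerson
sales.met1();                                            // prints "new met1": the runtime type selects the override
System.out.println(sales.getClass().getSimpleName());    // prints SalesPerson
// sales.met2();                                         // would not compile: Employee declares no met2()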
Let's suppose for a moment that this statement was valid:
SalesPerson sales = new Employee();
As I said, the compile-time type defines which methods can be called, so met2() would have been eligible for calling. Meanwhile, the Employee class doesn't have a met2() and so the actual call would have been impossible.
No. It makes zero sense to allow that.
The reason is because subclasses generally define additional behavior. If you could assign a superclass object to a subclass reference, you would run into problems at runtime when you try to access class members that don't actually exist.
For example, if this were allowed:
String s = new Object();
You would run into some pretty bad problems. What happens if you try to call a String method? Would the runtime crash? Or perhaps a no-op would be performed? Should this even compile?
If the runtime were to crash, you could use runtime checks to make sure the objects you receive will actually contain the methods you want. But then you're basically implementing guarantees that the Java type system already provides at compile-time. So really that "feature" cost you nothing but a bunch of type-checking code that you shouldn't have had to write in the first place.
If no-ops were executed instead of nonexistent methods, it would be extremely difficult to ensure that your programs would run as written when the members you want to access don't exist, as any reference could really be an Object at any point. This might be easy to handle when you are working on your own and control all your code, but when you have to deal with other code those guarantees essentially vanish.
If you want the compiler to do the checking, assuming compiler writers don't hunt you down and give you a stern talking-to -- well, you're back to "normal" behavior once more. So again, it's just a lot of work for zero benefit.
Long story short: No, it's not allowed, because it makes zero sense to do so, and if a language designer tried to allow that they would be locked up before they could do any more harm.
If you inherit from a class, you always specialize the common behavior of the super class.
In your example, the SalesPerson is a special Employee. It inherits all behavior from the super class and can override behavior to make it different or add new behavior.
If you, as it is allowed, initialize a variable of the super type with an instance of the sub type like Employee e = new SalesPerson(), then you can use all common behavior on that variable.
If, instead, you could do it the other way round, the object might be missing members that the subclass reference promises.
You see this very often in the Java Collections API: you program against the common List interface for operations such as iterating, but when initializing the variable you use an implementation such as ArrayList.
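For instance, a trivial sketch (java.util imports assumed):
// Declared as the common interface, initialized with a concrete implementation.
List<String> names = new ArrayList<>();
names.add("Alice");
names.add("Bob");
for (String name : names) {          // iteration relies only on the List/Iterable contract
    System.out.println(name);
}
// Switching to new LinkedList<>() later would not affect any of this code.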
I know this question has been asked a lot, but the usual answers are far from satisfying in my view.
given the following class hierarchy:
class SuperClass{}
class SubClass extends SuperClass{}
why do people use this pattern to instantiate SubClass:
SuperClass instance = new SubClass();
instead of this one:
SubClass instance = new SubClass();
Now, the usual answer I see is that this is in order to send instance as an argument to a method that requires an instance of SuperClass like here:
void aFunction(SuperClass param){}
//somewhere else in the code...
...
aFunction(instance);
...
But I can send an instance of SubClass to aFunction regardless of the type of the variable that holds it! The following code will compile and run with no errors (assuming the previously provided definition of aFunction):
SubClass instance = new SubClass();
aFunction(instance);
In fact, AFAIK variable types are meaningless at runtime. They are used only by the compiler!
Another possible reason to declare a variable as SuperClass would be if it has several different subclasses and the variable is supposed to switch its reference between them at runtime, but I, for example, have only ever seen this happen with plain classes (not super, not sub, just class). Definitely not sufficient to justify a general pattern...
The main argument for this type of coding is the Liskov Substitution Principle, which states that if X is a subtype of type T, then any instance of T should be able to be swapped out for an instance of X.
The advantage of this is simple. Let's say we've got a program that has a properties file, that looks like this:
mode="Run"
And your program looks like this:
public class Program
{
    public static Mode mode;

    public static void main(String[] args)
    {
        mode = Config.getMode();
        mode.run();
    }
}
So briefly, this program is going to use the config file to define the mode this program is going to boot up in. In the Config class, getMode() might look like this:
public static Mode getMode()
{
    String type = getProperty("mode"); // Now equals "Run" in our example.
    switch (type)
    {
        case "Run": return new RunMode();
        case "Halt": return new HaltMode();
        default: throw new IllegalArgumentException("Unknown mode: " + type); // added so every path returns a Mode
    }
}
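For the example to compile, Mode just needs to be a common type that both modes share; a minimal sketch (the printed messages are placeholders, not part of the original answer):
public interface Mode
{
    void run();
}

public class RunMode implements Mode
{
    @Override
    public void run()
    {
        System.out.println("Running normally"); // placeholder behaviour
    }
}

public class HaltMode implements Mode
{
    @Override
    public void run()
    {
        System.out.println("Halting"); // placeholder behaviour
    }
}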
Why this wouldn't work otherwise
Now, because you have a reference of type Mode, you can completely change the functionality of your program by simply changing the value of the mode property. If you had declared public RunMode mode, you would not be able to use this kind of functionality.
Why this is a good thing
This pattern has caught on so well because it opens programs up for extensibility. It means that this type of desirable functionality is possible with the smallest amount of changes, should the author desire to implement this kind of functionality. And I mean, come on. You change one word in a config file and completely alter the program flow, without editing a single line of code. That is desirable.
In many cases it doesn't really matter but is considered good style.
You limit the information provided to users of the reference to what is necessary, i.e. that it is an instance of type SuperClass. It doesn't (and shouldn't) matter whether the variable references an object of type SuperClass or SubClass.
Update:
This also is true for local variables that are never used as a parameter etc.
As I said, it often doesn't matter but is considered good style, because you might later change the variable to hold a parameter or another subtype of the supertype. In that case, if you had used the subtype first, your further code (in that single scope, e.g. a method) might accidentally rely on the API of one specific subtype, and changing the variable to hold another type might break your code.
I'll expand on Chris' example:
Consider you have the following:
RunMode mode = new RunMode();
...
You might now rely on the fact that mode is a RunMode.
However, later you might want to change that line to:
RunMode mode = Config.getMode(); //breaks
Oops, that doesn't compile. Ok, let's change that.
Mode mode = Config.getMode();
That line would compile now, but your further code might break, because you accidentally relied on mode being an instance of RunMode. Note that it might still compile but could break at runtime or corrupt your logic.
SuperClass instance = new SubClass1();
After some lines, you may do instance = new SubClass2();
But if you write SubClass1 instance = new SubClass1();, then after some lines you can't do instance = new SubClass2().
It is called polymorphism: a superclass reference to a subclass object.
In fact, AFAIK variable types are meaningless at runtime. They are used only by the compiler!
Not sure where you read this. At compile time the compiler only knows the class of the reference type (the superclass, in the polymorphic case you describe). At runtime, Java knows the actual type of the object (.getClass()). At compile time the compiler only checks that the invoked method is declared on the reference type. Which override actually runs is determined at runtime based on the actual type of the object.
Why polymorphism?
Well, Google it to find more, but here is an example. You have a common method draw(Shape s). Now a shape can be a Rectangle, a Circle, or any CustomShape. If you don't use a Shape reference in the draw() method, you will have to create a different method for each subclass of shape.
From a design point of view, you have one superclass and there can be multiple subclasses in which you want to extend the functionality.
An implementer who has to write a subclass need only focus on which methods to override.
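A small sketch of that draw(Shape) idea (the class bodies are placeholders, invented for illustration):
abstract class Shape {
    abstract double area();
}

class Rectangle extends Shape {
    double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    @Override
    double area() { return width * height; }
}

class Circle extends Shape {
    double radius;
    Circle(double radius) { this.radius = radius; }
    @Override
    double area() { return Math.PI * radius * radius; }
}

class Drawer {
    // One method handles every current and future Shape subclass.
    void draw(Shape s) {
        System.out.println("Drawing a shape with area " + s.area());
    }
}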
Let's say you have some Java code as follows:
public class Base {
    public void m(int x) {
        // code
    }
}
and then a subclass Derived, which extends Base as follows:
public class Derived extends Base {
    public void m(int x) {    // this is overriding
        // code
    }

    public void m(double x) { // this is overloading
        // code
    }
}
and then you have some declarations as follows:
Base b = new Base();
Base d = new Derived();
Derived e = new Derived();
b.m(5); //works
d.m(6); //works
d.m(7.0); //does not compile
e.m(8.0); //works
For the one that does not compile, I understand that you are passing in a double into Base's version of the m method, but what I do not understand is... what is the point of ever having a declaration like "Base b = new Derived();" ?
It seems like a good way to run into all kinds of casting problems, and if you want to use a Derived object, why not just go for a declaration like for "e"?
Also, I'm a bit confused as to the meaning of the word "type" as it is used in Java. The way I learned it earlier this summer was, every object has one class, which corresponds to the name of the class following "new" when you instantiate an object, but an object can have as many types as it wants. For example, "e" has type Base, Derived, (and Object ;) ) but its class is Derived. Is this correct?
Also, if Derived implemented an interface called CanDoMath (while still extending Base), is it correct to say that it has type "CanDoMath" as well as Base, Derived, and Object?
I often write functions in the following form:
public Collection<MyObject> foo() {}
public void bar(Collection<MyObject> stuff){}
I could just as easily have made it ArrayList in both instances; however, what happens if I later decide to make the representation a Set? The answer is that I would have a lot of refactoring to do, since I would have changed my method contract. However, if I leave it as Collection I can seamlessly switch from ArrayList to HashSet at will. Using ArrayList as the example, it has the following types:
Serializable, Cloneable, Iterable<E>, Collection<E>, List<E>, RandomAccess
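In other words, something like this sketch (MyObject is the placeholder type from above, process is an invented helper, java.util imports assumed):
// Callers only ever see Collection<MyObject>, so the internal choice stays private.
public Collection<MyObject> foo() {
    return new ArrayList<MyObject>();   // could later become new HashSet<MyObject>() without breaking callers
}

public void bar(Collection<MyObject> stuff) {
    for (MyObject o : stuff) {          // relies only on the Collection/Iterable contract
        process(o);                     // hypothetical helper
    }
}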
There are a number of cases where confining yourself to a particular (sub)class is not desirable, such as your case with e.m(8.0);. Suppose, for example, you have a method called move that moves an object in the coordinate graph of a program. However, at the time you write the method you may have both Cartesian and radial graphs, handled by different classes.
If you rely on knowing what the sub-class is, you force yourself into a position wherein higher levels of code must know about lower levels of code, when really they just want to rely on the fact that a particular method with a particular signature exists. There are lots of good examples:
Wanting to apply a query to a database while being agnostic to how the connection is made.
Wanting to authenticate a user, without having to know ahead of time the strategy being used.
Wanting to encrypt information, without needing to rip out a bunch of code when a better encryption technique comes along.
In these situations, you simply want to ensure the object has a particular type, which guarantees that particular method signatures are available. In that light your example is contrived; you're asking why not just use a class that has a method taking a double, instead of a class where that isn't available. (Simply put, you can't use a class that doesn't have the method you need.)
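A minimal sketch of that idea, with invented names for the coordinate example above:
import java.util.List;

// The caller depends only on this contract, not on a particular coordinate system.
interface Movable {
    void move(double dx, double dy);
}

class CartesianPoint implements Movable {
    double x, y;
    @Override
    public void move(double dx, double dy) { x += dx; y += dy; }
}

class RadialPoint implements Movable {
    double r, theta;
    @Override
    public void move(double dx, double dy) {
        double x = r * Math.cos(theta) + dx;   // convert, shift, convert back
        double y = r * Math.sin(theta) + dy;
        r = Math.hypot(x, y);
        theta = Math.atan2(y, x);
    }
}

class Mover {
    // Higher-level code never needs to know which concrete class it is moving.
    void shiftAll(List<Movable> points) {
        for (Movable p : points) {
            p.move(1.0, 0.0);
        }
    }
}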
There is another reason as well. Consider:
class Base {
    public void Blah() {
        //code
    }
}

class Extended extends Base {
    private int SuperSensitiveVariable;

    public void setSuperSensitiveVariable(int value) {
        this.SuperSensitiveVariable = value;
    }

    public void Blah() {
        //code
    }
}

//elsewhere
Base b = new Extended();
Extended e = new Extended();
Note that in the b case, I do not have access to the setter method and thus can't accidentally muck up the super sensitive variable. I can only do that in the e case. This helps make sure those things are only done in the right place.
Your definition of type is good, as is your understanding of what types a particular object would have.
What is the point of having Base b = new Derived();?
The point of this is using polymorphism to change your implementation. For example, someone might do:
List<String> strings = new LinkedList<String>();
If they do some profiling and find that the most common operation on this list is inefficient for the type of list, they can swap it out for an ArrayList. In this way you get flexibility.
if you want to use a Derived object
If you need the methods on the derived object, then you would use the derived object. Have a look at the BufferedInputStream class - you use this not because of its internal implementation but because it wraps an InputStream and provides convenience methods.
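A routine use of that wrapper looks like this (the file name is just a placeholder; java.io imports assumed):
// The decorated stream is used through the familiar InputStream-style API, with buffering added.
try (BufferedInputStream in = new BufferedInputStream(new FileInputStream("data.bin"))) {
    in.mark(16);                  // mark/reset works reliably here because of the buffer
    int firstByte = in.read();
    in.reset();
    System.out.println(firstByte);
} catch (IOException e) {
    e.printStackTrace();
}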
Also, I'm a bit confused as to the meaning of the word "type" as it is used in Java.
It sounds like your teacher is referring to interfaces and classes as "types". This is a reasonable abstraction, as a class that implements an interface and extends a class can be referred to in three ways, i.e.
public class Foo extends AbstractFoo implements Comparable<Foo>
// Usage
Comparable<Foo> comparable = new Foo();
AbstractFoo abstractFoo = new Foo();
Foo foo = new Foo();
An example of the types being used in different contexts:
new ArrayList<Comparable<Foo>>().add(new Foo()); // Foo can be in a collection of Comparable<Foo>
new ArrayList<AbstractFoo>().add(new Foo());     // and also in an AbstractFoo collection
This is one of the classic problems of object-oriented design. When something like this happens, it usually means the design can be improved; there is almost always a somewhat elegant solution to these problems...
For example, why don't you pull the m that takes a double up into the base class?
With respect to your second question, an object can have more than one type, because Interfaces are also types, and classes can implement more than one interface.