I'm porting JBox2D to Xojo. Java is not a language I know well, but there are enough similarities to Xojo for this to be the easiest way to port Box2D to it.
I am well into the port but I cannot fathom the meaning of this method signature:
public static <T> T[] reallocateBuffer(Class<T> klass, T[] oldBuffer, int oldCapacity,
int newCapacity) {}
Does this method return an array of any class type?
Does Class<T> klass mean that the klass parameter can be of any class?
Basically, that function signature makes it possible to handle arrays of different types in one place. If it were programmed in C, it would probably use a macro (#define) to accomplish something similar.
Syntactically, the <T> means: T is a placeholder for any class of objects that is passed to this function. If you pass an object of type T to this function, then every other place that mentions T inside this function also means that type. That way, you don't have to write separate functions to handle different types. Internally, the compiler may well generate separate code for each type, though. So, generics are a shortcut that lets you work with variable types.
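For illustration, here is a minimal Java sketch of how such a method might be implemented (this is an assumption for clarity, not necessarily the actual JBox2D code). It shows why the Class<T> parameter is there: it lets the method create a new array of the right element type via reflection, which "new T[...]" cannot do.

import java.lang.reflect.Array;

public class BufferUtils {
    // Hypothetical implementation sketch: copies the old buffer into a
    // larger array whose element type is taken from the klass parameter.
    @SuppressWarnings("unchecked")
    public static <T> T[] reallocateBuffer(Class<T> klass, T[] oldBuffer,
                                           int oldCapacity, int newCapacity) {
        // Array.newInstance is needed because "new T[newCapacity]" is illegal in Java.
        T[] newBuffer = (T[]) Array.newInstance(klass, newCapacity);
        if (oldBuffer != null) {
            System.arraycopy(oldBuffer, 0, newBuffer, 0, oldCapacity);
        }
        return newBuffer;
    }
}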
This will be difficult to translate into Xojo, as Xojo provides no equivalent mechanism.
Since Xojo does not offer support for Generics (Templates), you need to find out which different array types are actually used with this function, and write a specific function for each of these cases.
You may be able to work with Xojo's base class Object as the parameter, although passing arrays of Object will often not work due to Xojo's rather static type checking on arrays.
A trick around this would be to pack the array into a Variant, and then handle each array type specially inside. That would still not be generic, but it would at least keep everything in a single function, like the original does.
Something like this:
Sub createObjects(arrayContainer as Variant, newAmount as Integer)
  if not arrayContainer.IsArray then break ' assertion

  // Handle case when array is of MyObject1
  try
    #pragma BreakOnExceptions off ' prevents Debugger from stopping here
    dim a() as MyObject1 = arrayContainer
    #pragma BreakOnExceptions default
    for i as Integer = 1 to newAmount
      a.Append new MyObject1
    next
    return
  catch exc as TypeMismatchException
    ' fall thru
  end try

  // Handle more types here
  break
End Sub
Then call it like this:
dim d() as MyObject1
createObjects d, 3
Related
I need to write a class with a function like this:
class Converter<T> {
    private T t; // assumed field; the original snippet applies the function to t

    public <R> R convertBy(Function<T, ?>... args) {
        Function<T, ?> function = args[0];
        // doing something
        return function.apply(t);
    }
}
I need this method to be type safe, i.e. the first Function from args needs to have the same argument type as this class's type parameter. For example, if my Converter were a Converter<String>, I need to check (at compile time) that the first function has a String parameter.
Also, when I write the method like that, it says that it returns Object, and I cannot do
int a = converter.convertBy(func1, func2);
because it says that Object is not convertible to int.
-- Edit
Maybe with a bigger picture it will be easier to see what this is about. The point of the convertBy function is that it can easily combine different operations on different types.
This works when I define the function like
public <R> R convertBy(Function... args)
but then it is not type safe. What I need is to make sure that if my Converter is a Converter<String>, the user cannot pass as the first parameter a function like
Function func = (string)->{ return (String)string.length();}
Also, I cannot change the parameters from
convertBy(Function... args)
to
convertBy(Function first, Function... rest)
Edit: As it turned out later, I can do that. But it still doesn't check the type.
Thank you.
The way to make sure at compile time that your Function has return type R is to declare it as Function<T,R>.
If you also want to accept an unspecified number of other functions with different types, you could write your function like this:
public <R> R convertBy(Function<T,R> first, Function<?,?>... others)
That way the return type matches the return type of the first function.
But without knowing what you're trying to do with those other functions, it is hard to know what types they are supposed to have.
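As an illustration only (this Converter class is a hypothetical reconstruction of the asker's code, not a definitive implementation), tying the return type to the first function looks like this. The compiler then infers R from the first argument and accepts an int assignment via unboxing.

import java.util.function.Function;

class Converter<T> {
    private final T t;

    Converter(T t) {
        this.t = t;
    }

    // R is inferred from the first function's return type, and that function's
    // input type must match the converter's own type parameter T.
    public <R> R convertBy(Function<T, R> first, Function<?, ?>... others) {
        // whatever should happen with the remaining functions would go here
        return first.apply(t);
    }
}

// Usage sketch:
// Converter<String> converter = new Converter<>("hello");
// int a = converter.convertBy(String::length);   // compiles; Integer unboxes to int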
You could actually make this work without varargs quite easily: all you need to do is write several overloads taking different numbers of functions. This approach is often used for optimization anyway, as varargs aren't nearly as optimized as normal methods.
<R> R convertBy(Function<T, R> fnc);
<R,U> R convertBy(Function<T, U> fnc1, Function<U, R> fnc2);
// etc.
You can't ensure type safety in an array of Function; that is simply an impossible task. You lose the type information there, and the best you could do would be to check the types using reflection, which happens at runtime, so type safety is not checked at compile time.
So I advise sticking to the andThen or compose methods unless you have plenty of functions and the utility brings you enough syntactic sugar to be worth doing it that way.
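For reference, a small example of what that chaining looks like with the standard andThen method from java.util.function.Function (the functions here are made up for illustration):

import java.util.function.Function;

public class ChainingExample {
    public static void main(String[] args) {
        Function<String, Integer> length = String::length;
        Function<Integer, String> describe = n -> "length is " + n;

        // andThen keeps every step type safe at compile time:
        // the output of 'length' must match the input of 'describe'.
        Function<String, String> combined = length.andThen(describe);

        System.out.println(combined.apply("hello")); // prints "length is 5"
    }
}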
I'd like to know if something like this is possible in Java (mix of C++ and Java ahead)
template<typename T> bool compare(Wrapper wrapper) {
    if (wrapper.obj.getClass().equals(T.class))
        return true;
    return false;
}
To clarify, the function takes in an object which contains a java.lang.Object, but I'd like to be able to pass that wrapper into this generic comparison function to check whether the wrapped object is of a particular type, i.e.
if (compare<String>(myWrapper))
    // do x
No, it's not possible due to erasure. Basically, the compare method has no idea what T is. There's only one compare method (as opposed to C++, where there's one per T), and it isn't given any information about how it was invoked (i.e., what the caller considered its T to be).
The typical solution is to have the class (or method) accept a Class<T> cls, and then use cls.isInstance:
public <T> boolean compare(Wrapper wrapper, Class<T> cls) {
    return cls.isInstance(wrapper.obj);
}

// and then, at the call site:
if (compare(wrapper, Foo.class)) {
    ...
}
Of course, this means that the call site needs to have the Class<T> object. If that call site is itself a generic method, it needs to get that reference from its caller, and so on. At some point, somebody needs to know what the specific type is, and that somebody passes in Foo.class.
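To illustrate that last point, here is a hedged sketch (class and method names are invented) of how a Class<T> token typically gets threaded through the call chain until some caller finally supplies a concrete class literal:

class Wrapper {
    Object obj; // as described in the question: wraps a java.lang.Object
}

class TypeChecks {
    static <T> boolean wrapsInstanceOf(Wrapper wrapper, Class<T> cls) {
        return cls.isInstance(wrapper.obj);
    }

    // A generic intermediate method cannot conjure the token itself;
    // it has to accept it from its caller and pass it along.
    static <T> void processIfMatches(Wrapper wrapper, Class<T> cls) {
        if (wrapsInstanceOf(wrapper, cls)) {
            // handle the match
        }
    }
}

// Somewhere at the top of the chain, a concrete class literal is supplied:
// TypeChecks.processIfMatches(myWrapper, String.class);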
You cannot reference static members of a type parameter (such as you try to do in the form of T.class). You also cannot use them meaningfully in instanceof expressions. More generally, because Java generics are implemented via type erasure, you cannot use type parameters in any way at run time -- all type analysis is performed statically, at compile time.
Depending on exactly what you're after, there are at least two alternative approaches.
The first, and more usual, is to ensure that the necessary types can be checked statically. For example, you might parameterize your Wrapper class with the type of the object it wraps. Then, supposing that you use it in a program that is type-safe, wherever you have a Wrapper<String> you know that the wrapped object is a String.
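A minimal sketch of that first approach, assuming a simple Wrapper class (the field name obj is taken from the question):

// Parameterizing the wrapper means the compiler tracks the wrapped type statically.
class Wrapper<T> {
    final T obj;

    Wrapper(T obj) {
        this.obj = obj;
    }
}

// Usage: with a Wrapper<String>, no runtime check is needed to know obj is a String.
// Wrapper<String> w = new Wrapper<>("hello");
// String s = w.obj;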
That doesn't work so well if you want to verify the specific class of the wrapped object, however, when the class to test against is not final. In that case, you can pass a Class object, something like this:
<T> boolean compare(Wrapper<? super T> wrapper, Class<T> clazz) {
    return wrapper.obj.getClass().equals(clazz);
}
That checks the class of the wrapped object against the specified class, allowing the method to be invoked only in cases where static analysis allows that it could return true.
You can actually combine those two approaches, if you like, to create a Wrapper class whose instances can hold only members of a specific class, as opposed to any object that is assignable to a given type. I'm not sure why you would want to do that, though.
Referring to: Wildcard Capture Helper Methods
It says to create a helper method to capture the wildcard.
public void foo(List<?> i) {
    fooHelper(i);
}

private <T> void fooHelper(List<T> l) {
    l.set(0, l.get(0));
}
Just using this function below alone doesn't produce any compilation errors, and seems to work the same way. What I don't understand is: why wouldn't you just use this and avoid using a helper?
public <T> void foo(List<T> l) {
    l.set(0, l.get(0));
}
I thought that this question would really boil down to: what's the difference between wildcard and generics? So, I went to this: difference between wildcard and generics.
It says to use type parameters:
1) If you want to enforce some relationship on the different types of method arguments, you can't do that with wildcards, you have to use type parameters.
But, isn't that exactly what the wildcard with helper function is actually doing? Is it not enforcing a relationship on different types of method arguments with its setting and getting of unknown values?
My question is: If you have to define something that requires a relationship on different types of method args, then why use wildcards in the first place and then use a helper function for it?
It seems like a hacky way to incorporate wildcards.
In this particular case it's because the List.set(int, E) method requires the type to be the same as the type in the list.
If you don't have the helper method, the compiler doesn't know if ? is the same for List<?> and the return from get(int) so you get a compiler error:
The method set(int, capture#1-of ?) in the type List<capture#1-of ?> is not applicable for the arguments (int, capture#2-of ?)
With the helper method, you are telling the compiler: the type is the same, I just don't know what that type is.
So why have the non-helper method?
Generics weren't introduced until Java 5 so there is a lot of code out there that predates generics. A pre-Java 5 List is now a List<?> so if you were trying to compile old code in a generic aware compiler, you would have to add these helper methods if you couldn't change the method signatures.
I agree: Delete the helper method and type the public API. There's no reason not to, and every reason to.
Just to summarise the need for the helper with the wildcard version: although it's obvious to us as humans, the compiler doesn't know that the unknown type returned from l.get(0) is the same unknown type as the list itself. That is, it doesn't factor in that the parameter of the set() call comes from the same list object as the target, so it must be a safe operation. It only notices that the type returned from get() is unknown and the type of the target list is unknown, and two unknowns are not guaranteed to be the same type.
You are correct that we don't have to use the wildcard version.
It comes down to which API looks/feels "better", which is subjective
void foo(List<?> i)
<T> void foo(List<T> i)
I'll say the 1st version is better.
If there are bounds
void foo(List<? extends Number> i)
<T extends Number> void foo(List<T> i)
The 1st version looks even more compact; the type information is all in one place.
At this point of time, the wildcard version is the idiomatic way, and it's more familiar to programmers.
There are a lot of wildcards in JDK method definitions, particularly after Java 8's introduction of lambdas and Stream. They are admittedly very ugly, because Java doesn't have declaration-site variance. But think how much uglier it would be if we expanded all those wildcards into type variables.
The Java 14 Language Specification, Section 5.1.10 (PDF) devotes some paragraphs to why one would prefer providing the wildcard method publicly, while using the generic method privately. Specifically, they say (of the public generic method):
This is undesirable, as it exposes implementation information to the caller.
What do they mean by this? What exactly is getting exposed in one and not the other?
Did you know you can pass type parameters directly to a method? If you have a static method <T> Foo<T> create() on a Foo class -- yes, this has been most useful to me for static factory methods -- then you can invoke it as Foo.<String>create(). You normally don't need -- or want -- to do this, since Java can sometimes infer those types from any provided arguments. But the fact remains that you can provide those types explicitly.
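For example (Foo and create are the hypothetical names from the paragraph above), explicitly supplying the type argument looks like this:

import java.util.Optional;

class Foo<T> {
    // A typical static factory method with its own type parameter.
    static <T> Foo<T> create() {
        return new Foo<>();
    }
}

class Demo {
    void demo() {
        // Explicit type argument, useful when nothing lets the compiler infer T:
        Foo<String> foo = Foo.<String>create();

        // The JDK's own generic methods can be called the same way:
        Optional<String> empty = Optional.<String>empty();
    }
}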
So the generic <T> void foo(List<T> i) really takes two parameters at the language level: the element type of the list, and the list itself. We've modified the method contract just to save ourselves some time on the implementation side!
It's easy to think that <?> is just shorthand for the more explicit generic syntax, but I think Java's notation actually obscures what's really going on here. Let's translate into the language of type theory for a moment:
/* Java */                     /* Type theory */
List<?>                  ~~    ∃T. List<T>
void foo(List<?> l)      ~~    (∃T. List<T>) -> ()
<T> void foo(List<T> l)  ~~    ∀T. (List<T> -> ())
A type like List<?> is called an existential type. The ? means that there is some type that goes there, but we don't know what it is. On the type theory side, ∃T. means "there exists some T", which is essentially what I said in the previous sentence -- we've just given that type a name, even though we still don't know what it is.
In type theory, functions have type A -> B, where A is the input type and B is the return type. (We write void as () for silly reasons.) Notice that on the second line, our input type is the same existential list we've been discussing.
Something strange happens on the third line! On the Java side, it looks like we've simply named the wildcard (which isn't a bad intuition for it). On the type theory side we've said something superficially very similar to the previous line: for any type of the caller's choice, we will accept a list of that type. (∀T. is, indeed, read as "for all T".) But the scope of T is now totally different -- the brackets have moved to include the output type! That's critical: we couldn't write something like <T> List<T> reverse(List<T> l) without that wider scope.
But if we don't need that wider scope to describe the function's contract, then reducing the scope of our variables (yes, even type-level variables) makes it easier to reason about those variables. The existential form of the method makes it abundantly clear to the caller that the relevance of the list's element type extends no further than the list itself.
public class helloworld {
    public static void main(String[] args) {
        String text = "Hello World";
        l(text);
        int n = 0;
        l("--------------------------");
        l(n);
    }

    public static void l(Object obj) {
        System.out.println(obj);
    }
}
I wrote this simple program in Java and it worked. Now I am confused: if all the data types (int, char, double, etc.) come under Object, then why do we specify which data type we want to accept when we pass values?
I mean we can always use the data type Object as used in the function l. Is there a specific reason why people don't always use Object as their data type to pass values?
There is an implicit conversion defined between all primitive types and their respective object counterparts:
int -> Integer
char -> Character
etc...
This is called autoboxing.
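A quick illustration of those conversions in both directions (the variable names here are just for the example):

public class AutoboxingDemo {
    public static void main(String[] args) {
        Integer boxed = 42;        // int -> Integer (autoboxing)
        int unboxed = boxed;       // Integer -> int (unboxing)
        Character c = 'x';         // char -> Character

        // This is also what lets a primitive be passed where Object is expected:
        Object obj = 3.14;         // double -> Double, then widened to Object
        System.out.println(boxed + ", " + unboxed + ", " + c + ", " + obj);
    }
}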
Is there a specific reason why people don't always use "Object" as their data type to pass values?
Since Java is strongly typed, you cannot do a whole lot with Object.
E.g. try this:
static Object add(Object a, Object b) {
    return a + b; // won't compile
}
This is because methods, operators, etc. available to use depend on the static type of the variable.
println can accept Object because it only needs to call the toString method. If you only need the limited functionality provided by the methods in Object, then sure, you can use it as a type. This is rarely the case, however.
The primitives you mentioned are not really objects; they will simply be boxed into their object representations. An int becomes an Integer, a long becomes a Long, etc.
Read this article about autoboxing in Java.
As for your question
Is there a specific reason why people don't always use "Object" as
their data type to pass values?
If you specify Object as the parameter of your method you won't be able to call the methods the real object contains without doing a cast. For example, if you have a custom object AnyObject that contains a method anyMethod, you won't be able to call it without casting the object to AnyObject.
It will also be unsafe, as you will be able to pass any type of object to a method that may not be designed to work properly with all of those types. A method containing only System.out.println is not representative of a real use case; it works with any object simply because println calls the toString method, which is already defined on Object.
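A small sketch of that point, using the hypothetical AnyObject and anyMethod names from the paragraph above:

class AnyObject {
    void anyMethod() {
        System.out.println("called anyMethod");
    }
}

class Caller {
    static void useObject(Object o) {
        // o.anyMethod();                // does not compile: Object has no anyMethod
        if (o instanceof AnyObject) {
            ((AnyObject) o).anyMethod(); // works, but only after an explicit cast
        }
    }
}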
While it does look like a function that accepts parameters of all types, you will have to deal with these issues:
The function signature becomes less informative.
No more overloading.
You have to do a lot of type checking and casting in the function body to avoid runtime errors (see the sketch after this list).
Although the method seemingly accepts all objects, you would never know the actual subset of them until you see the method definition.
The function body might end up with more code for rejecting the wrong types than for its real goal. For example, your function only prints the value; imagine a function that predominantly does some integer operation.
It increases the probability of runtime errors, as the compiler cannot report missing casts.
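The sketch referenced above, with invented types, showing the kind of runtime type checking and casting a catch-all Object parameter forces on the method body:

// Hypothetical example: without a specific parameter type, the body has to
// sort out the types itself at runtime.
class Printer {
    static void describe(Object value) {
        if (value instanceof Integer) {
            int doubled = (Integer) value * 2;   // cast before any int math
            System.out.println("int doubled: " + doubled);
        } else if (value instanceof String) {
            System.out.println("string length: " + ((String) value).length());
        } else {
            // Anything else slips past the compiler and only fails here, at runtime.
            throw new IllegalArgumentException("unsupported type: " + value.getClass());
        }
    }
}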
I am trying to create
ArrayList<int> myList = new ArrayList<int>();
in Java but that does not work.
Can someone explain why int as type parameter does not work?
Using Integer class for int primitive works, but can someone explain why int is not accepted?
Java version 1.6
Java generics are so different from C++ templates that I am not going to try to list the differences here. (See What are the differences between “generic” types in C++ and Java? for more details.)
In this particular case, the problem is that you cannot use primitives as generic type parameters (see JLS §4.5.1: "Type arguments may be either reference types or wildcards.").
However, due to autoboxing, you can do things like:
List<Integer> ints = new ArrayList<Integer>();
ints.add(3); // 3 is autoboxed into Integer.valueOf(3)
So that removes some of the pain. It definitely hurts runtime efficiency, though.
The reason that int doesn't work, is that you cannot use primitive types as generic parameters in Java.
As to your actual question, how C++ templates are different from Java generics, the answer is that they're really, really different. The languages essentially apply completely different approaches to implementing a similar end effect.
Java tends to focus on the definition of the generic. That is, the validity of the generic definition is checked by only considering the code in the generic. If parameters are not properly constrained, certain actions cannot be performed on them. The actual type it's eventually invoked with, is not considered.
C++ is the opposite. Only minimal verification is done on the template itself. It really only needs to be parsable to be considered valid. The actual correctness of the definition is done at the place in which the template is used.
They are very different concepts, which can be used to perform some, but not all of the same tasks. As said in the other responses, it would take a quite a bit to go over all the differences, but here's what I see as the broad strokes.
Generics allow for runtime polymorphic containers through a single instantiation of a generic container. In Java, all (non-primitive) objects are handled through references, and all references are the same size (and share some of the same interface), so they can be handled by the same bytecode. However, a necessary implication of having only one instantiation in bytecode is type erasure; you can't tell which class the container was instantiated with. This wouldn't work in C++ because of a fundamentally different object model, where objects aren't always references.
Templates allow for compile-time polymorphic containers through multiple instantiations (as well as template metaprogramming, by providing a (currently weakly typed) language over the C++ type system). This allows for specializations for given types, the downside being potential "code bloat" from needing more than one compiled instantiation.
Templates are more powerful than generics; the former is effectively another language embedded within C++, while to the best of my knowledge, the latter is useful only for containers.
The main difference is in the way they are implemented, but their names accurately describe their implementations.
Templates behave like templates. So, if you write:
template<typename T>
void f(T s)
{
    std::cout << s << '\n';
}
...
int x = 0;
f(x);
...
The compiler applies the template, so in the end it treats the code like:
void f_generated_with_int(int s)
{
    std::cout << s << '\n';
}
...
int x = 0;
f_generated_with_int(x);
...
So, for each type that f is called with, new code is "generated".
On the other hand, generics are only type-checked; after that, all type information is erased. So, if you write:
class X<T> {
    private T x;
    public T getX() { return x; }
    public void setX(T x) { this.x = x; }
}
...
Foo foo = new Foo();
X<Foo> x = new X<>();
x.setX(foo);
foo = x.getX();
...
Java compiles it like:
class X {
    private Object x;
    public Object getX() { return x; }
    public void setX(Object x) { this.x = x; }
}
...
Foo foo = new Foo();
X x = new X();
x.setX(foo);
foo = (Foo)x.getX();
...
In the end:
templates require instantiating each call to a templated function (during compilation of each .cpp file), so templates are slower to compile
with generics you can't use primitives, because they are not Objects, so generics are less versatile
You can't use primitives as type parameters in Java. Java's generics work through type erasure, meaning that the compiler checks that you're using the types as you've defined them, but upon compilation, everything is treated as an Object. Since int and other primitives aren't Objects, they can't be used. Instead, use Integer.
That's because int is a primitive type; this is a known limitation.
If you really wanted to, you could write your own collection class that can do that.
You could try TIntArrayList from GNU Trove, which acts like an ArrayList of int values.
For your question, Java objects are somewhat equivalent to pointers in C++.
Java does garbage collection because those dynamically allocated objects will become "unseen" (no longer pointed to) at some point, and then the space needs to be cleaned up.
int is recognized as a primitive type, which makes it impossible to return null, and that's the reason why Java generics cannot accept primitive types. To indicate that an element isn't stored inside a Java container such as a Map, Set, or List, a method will return null. So what are you going to return if you can't return null?
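To make that concrete, here is a small illustrative example (the map contents are invented): a lookup that misses has to signal the absence somehow, and with the wrapper types that signal is null.

import java.util.HashMap;
import java.util.Map;

public class NullReturnDemo {
    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        counts.put("apples", 3);

        Integer missing = counts.get("oranges"); // null: the key is absent
        System.out.println(missing);             // prints "null"

        // If the value type were the primitive int, there would be no value
        // left to represent "not found", which is the point made above.
    }
}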
For std::array and for static and dynamic arrays, C++ forces you to define a default constructor; that's because C++ arrays are arrays of values instead of arrays of references as in Java. You have to indicate which default value (the equivalent of null in Java) the objects in such a structure will take.
Think about it: in Java any object in an array is null by default; in C++ it isn't, unless you declare an array of pointers and set all of them to 0x0 or (preferably) to nullptr.