Fixed length array vs fields [closed] - java

What is faster?
I want to write an API for processing and calculating with vectors and matrices.
A "Matrix4f" needs 4*4 float values.
Should I write this as 16 fields or as a two-dimensional array?
But if I use fields, inheritance is impossible.

This is more a question of maintainability than speed. The speed difference between your two alternatives will almost certainly not be noticeable. The array approach, however, makes more sense in terms of what you are trying to model, and it is simply easier to deal with: if, for instance, you later want to create a 5x5 matrix instead, your array code will be easily reusable, whereas the version with 16 fields would require drastic modifications. In short, don't worry about speed when making this decision; worry instead about what makes more sense and what will be easier to manage down the line. Then the choice should be clear.

There is no complexity difference between accessing an array element and accessing a field; both are O(1).
Speed is not what you should consider here; look at your actual algorithms and functions instead.
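To make the trade-off concrete, here is a minimal sketch of the array-based approach (the class and method names are illustrative, not from any particular library). A flat float[16] is a common alternative to a float[4][4], since a two-dimensional Java array is an array of row objects and costs an extra indirection per access:

```java
// Sketch of an array-backed 4x4 matrix. Backing the matrix with a flat
// float[16] keeps the data contiguous and the indexing explicit.
public class Matrix4f {
    private final float[] m = new float[16]; // row-major: m[row * 4 + col]

    public float get(int row, int col) {
        return m[row * 4 + col];
    }

    public void set(int row, int col, float value) {
        m[row * 4 + col] = value;
    }

    // Example operation: multiply this matrix by another.
    public Matrix4f multiply(Matrix4f other) {
        Matrix4f result = new Matrix4f();
        for (int row = 0; row < 4; row++) {
            for (int col = 0; col < 4; col++) {
                float sum = 0f;
                for (int k = 0; k < 4; k++) {
                    sum += this.get(row, k) * other.get(k, col);
                }
                result.set(row, col, sum);
            }
        }
        return result;
    }
}
```

Generalizing this to an NxN matrix is mostly a matter of replacing the hard-coded 4s, which is exactly the reusability argument made above.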

Related

Run a math expression from string in java [closed]

I want to know if there is any efficient method to run a math expression from a string in Java, given some example inputs and results of that function.
Starting from simple linear functions (a*x+b) up to more complex ones.
Or is there any good source I can start reading?
I take your task as: take observed input-output pairs and learn some representation which is able to perform that transformation on new inputs.
(Some) neural networks can learn an approximation function (see the universal approximation theorem), and probably other approaches can too, but there is something important to remark:
without assumptions about your function (e.g. smoothness), there can't be an algorithm achieving what you want! Without assumptions there are infinitely many approximation functions which are all equally good on your examples but behave arbitrarily differently on new data.
(I'm also ignoring special cases such as random data or cryptographic random generators, where this mapping also can't be learned; the former in theory, the latter at least in practice.)
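If the function class is restricted, which is exactly the kind of assumption described above, the problem becomes tractable. As an illustration of the simplest case, here is a sketch (class and method names made up for the example) that fits a and b in f(x) = a*x + b to observed samples by ordinary least squares:

```java
// Minimal sketch: recover a and b in f(x) = a*x + b from observed (x, y)
// pairs via ordinary least squares. This only covers the linear case;
// richer function classes need general regression or symbolic-regression tools.
public class LinearFit {
    public static double[] fit(double[] x, double[] y) {
        int n = x.length;
        double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int i = 0; i < n; i++) {
            sumX += x[i];
            sumY += y[i];
            sumXY += x[i] * y[i];
            sumXX += x[i] * x[i];
        }
        double a = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
        double b = (sumY - a * sumX) / n;
        return new double[] { a, b };
    }

    public static void main(String[] args) {
        double[] x = { 1, 2, 3, 4 };
        double[] y = { 5, 7, 9, 11 }; // generated by f(x) = 2x + 3
        double[] ab = fit(x, y);
        System.out.printf("a = %.3f, b = %.3f%n", ab[0], ab[1]); // a = 2, b = 3
    }
}
```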

Creating my own Lists and Maps data structures [closed]

For practice I want to make my own lists and maps (like ArrayList, HashMap, HashSet etc.).
My goal is to have it as small and flexible as possible while still maintaining good performance. (long road...)
I have some questions:
1)
Unlike Sun, I don't have to take backwards compatibility into account.
So the first thing I wonder: is there any good reason to keep both add and put?
Why not just one?
If I renamed put to add, would this cause problems, complexity, or unclearness down the road?
2)
Are there any languages known to have really good data structures? (For example, ones that are really smart about avoiding concurrent-modification exceptions.)
3)
Lastly, more a request than a question: if you have any tips or a vision of how things could be done differently, please post them.
There are no duplicated methods: Collections have an add method that returns a boolean, while Maps have a put method that returns the Map's value type (the previous value for the key, or null).
There are plenty of examples of data structures; the point is: what do you need your data structure to do best? Avoid concurrency problems? Sort? Be fast? Store securely?
The examples you need are directly in the Java source code: List, ArrayList, HashMap, and so on.
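To see why the two names are not duplicates, compare their return contracts in the standard API (this is documented java.util behaviour, shown here with a small made-up demo):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// add and put have different return contracts, so merging them into one
// name would blur two distinct semantics.
public class AddVsPut {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        boolean changed = list.add("alice");    // add reports whether the collection changed
        System.out.println(changed);            // true

        Map<String, Integer> map = new HashMap<>();
        Integer previous = map.put("alice", 1); // put returns the previous value for the key
        System.out.println(previous);           // null (no prior mapping)
        previous = map.put("alice", 2);
        System.out.println(previous);           // 1
    }
}
```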

Best way to store 1000000 phone numbers in memory [closed]

What would be the best way to store 1,000,000 phone numbers in memory with the smallest memory footprint?
I was thinking of just using an array, but I'm sure there has to be a better way.
The memory footprint scales not so much with how you store the collection (!) of numbers as with how you store each individual phone number (as a string or as an integer).
If you really want to reduce memory, store each phone number as a long.
For instance, if you store phone numbers in an ArrayList you will get a maximum overhead of, say, 30%, which is not that much. If, however, you store each phone number as a String, you will get an overhead of, let's say, 900% compared to storing the data as integers.
A plain array has the smallest memory footprint.
You could also devise an encoding scheme specific to phone numbers. That is pretty complex, but if you can do it, it would save a lot of memory.
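As a minimal sketch of the long-based suggestion (assuming every number fits in 18 digits and that formatting details such as leading zeros are handled separately; the class is made up for illustration):

```java
// A primitive long[] avoids per-object overhead entirely:
// 1,000,000 longs = 8 MB, versus tens of MB for String or Long objects.
public class PhoneNumberStore {
    private final long[] numbers;

    public PhoneNumberStore(int capacity) {
        this.numbers = new long[capacity];
    }

    public void set(int index, String phoneNumber) {
        // Strip non-digit characters, then parse.
        // Caveat: leading zeros would need extra handling.
        numbers[index] = Long.parseLong(phoneNumber.replaceAll("\\D", ""));
    }

    public long get(int index) {
        return numbers[index];
    }
}
```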

Algorithm for finding trends in data? [closed]

I'm looking for an algorithm that is able to find trends in large amounts of data. For instance, given time t and a variable x as pairs (t,x), with input such as {(1,1), (2,4), (3,9), (4,16)}, it should be able to figure out that the value of x for t=5 is 25. How is this normally implemented? Do most algorithms compute linear, quadratic, exponential, etc. lines of best fit and then choose the fit with the lowest standard deviation of the residuals? Are there other techniques for finding trends in data? Also, what happens when you increase the number of variables, i.e. when analyzing large vectors?
This is a really complex question; a starting point is http://en.wikipedia.org/wiki/Interpolation
There is no simple answer to a complex problem: http://en.wikipedia.org/wiki/Regression_analysis
A neural network might be a good candidate, especially if you want it to learn something nonlinear.
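As one concrete technique from the regression family, here is a sketch that least-squares-fits a quadratic a*t^2 + b*t + c to the sample data from the question by solving the normal equations with Gaussian elimination. In practice a library such as Apache Commons Math would be used instead; the class here is illustrative only:

```java
// Fit a*t^2 + b*t + c to (t, x) samples by least squares.
public class QuadraticFit {
    public static double[] fit(double[] t, double[] x) {
        // Build the 3x3 normal equations (augmented matrix) for basis {t^2, t, 1}.
        double[][] a = new double[3][4];
        for (int i = 0; i < t.length; i++) {
            double[] basis = { t[i] * t[i], t[i], 1.0 };
            for (int r = 0; r < 3; r++) {
                for (int c = 0; c < 3; c++) a[r][c] += basis[r] * basis[c];
                a[r][3] += basis[r] * x[i];
            }
        }
        // Gaussian elimination with partial pivoting.
        for (int p = 0; p < 3; p++) {
            int max = p;
            for (int r = p + 1; r < 3; r++)
                if (Math.abs(a[r][p]) > Math.abs(a[max][p])) max = r;
            double[] tmp = a[p]; a[p] = a[max]; a[max] = tmp;
            for (int r = p + 1; r < 3; r++) {
                double f = a[r][p] / a[p][p];
                for (int c = p; c < 4; c++) a[r][c] -= f * a[p][c];
            }
        }
        // Back substitution.
        double[] coeffs = new double[3];
        for (int r = 2; r >= 0; r--) {
            double sum = a[r][3];
            for (int c = r + 1; c < 3; c++) sum -= a[r][c] * coeffs[c];
            coeffs[r] = sum / a[r][r];
        }
        return coeffs; // {a, b, c}
    }

    public static void main(String[] args) {
        double[] t = { 1, 2, 3, 4 };
        double[] x = { 1, 4, 9, 16 };
        double[] abc = fit(t, x);
        // Expect a = 1, b = 0, c = 0, so the prediction for t=5 is 25.
        double prediction = abc[0] * 25 + abc[1] * 5 + abc[2];
        System.out.printf("x(5) = %.3f%n", prediction);
    }
}
```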

Which is the fastest way of converting an Object to a stream of Bytes in Java? [closed]

I have an object which I want to convert into a stream of bytes and then to operate on it. I don't want to serialise the object, but just to convert it. I have read this article, where Java Unsafe class is used and the conversion is very fast. However, is there any other fast solution for this?
Fast conversion is possible. You can use the GSON library to serialize the object to a JSON string, then use the string's bytes as per your requirement. Hope this helps.
There are a number of libraries in development that do what you suggest. I believe all of them are discussed on this forum: https://groups.google.com/forum/#!forum/mechanical-sympathy which may also have many other topics that interest you.
In short, you can do it using Unsafe, or a library which uses it. In fact I have one of my own, but again, it is in development.
For the effort involved, this will only make much of a difference if you have many GB of data. At that point, the reduced GC times and the reduced heap size are the main advantages, beyond saving a single dereference.
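For completeness, a minimal sketch of the GSON route from the first answer (note that going through JSON is itself a form of serialization, so it may not satisfy the question's "don't want to serialise" constraint; Point is a made-up demo class):

```java
import com.google.gson.Gson;
import java.nio.charset.StandardCharsets;

// Serialize an object to a JSON string with GSON, then take the string's
// bytes. Easy and portable, but not the fastest path; Unsafe-based
// approaches avoid the intermediate string entirely.
public class ObjectToBytes {
    static class Point {
        int x = 3;
        int y = 4;
    }

    public static void main(String[] args) {
        Gson gson = new Gson();
        byte[] bytes = gson.toJson(new Point()).getBytes(StandardCharsets.UTF_8);
        System.out.println(bytes.length + " bytes: "
                + new String(bytes, StandardCharsets.UTF_8));
        // Prints: 13 bytes: {"x":3,"y":4}
    }
}
```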
