Quick Sort Algorithm and Time Complexity - Java

I'm trying to learn different sorting algorithms and also understand their time complexity. Right now I am working on a quicksort implementation, and I found a good video on YouTube explaining it. However, looking at his implementation, I don't think it runs in O(n^2).
In his quicksort method he has nested loops, which makes it not run in linear time. Shouldn't that method run in linear time? Or does it already?
I'm just trying to get a good understanding of time complexity, so if my thinking is wrong, please help explain. Thanks!
I've used this resource to back up a lot of my thinking on this https://www.geeksforgeeks.org/quick-sort/
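
For context, here is a generic textbook quicksort in Java (Lomuto partition). This is not the video's code, just a typical version of the algorithm: the "nested loop" shape comes from the partition pass running inside each recursive call, which is exactly why quicksort averages O(n log n) rather than being linear, and degrades to O(n^2) in the worst case.

```java
import java.util.Arrays;

/** A typical in-place quicksort (Lomuto partition) -- a generic textbook version,
 *  not necessarily the one from the video. On average the recursion is O(log n)
 *  levels deep and each level does O(n) partition work, giving O(n log n);
 *  the worst case (already-sorted input with this pivot choice) is O(n^2). */
public class QuickSortDemo {

    static void quickSort(int[] a, int low, int high) {
        if (low < high) {
            int p = partition(a, low, high);   // place the pivot in its final position
            quickSort(a, low, p - 1);          // sort elements left of the pivot
            quickSort(a, p + 1, high);         // sort elements right of the pivot
        }
    }

    static int partition(int[] a, int low, int high) {
        int pivot = a[high];                   // last element as pivot
        int i = low - 1;
        for (int j = low; j < high; j++) {     // single pass over the subarray
            if (a[j] < pivot) {
                i++;
                int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            }
        }
        int tmp = a[i + 1]; a[i + 1] = a[high]; a[high] = tmp;
        return i + 1;
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 7};
        quickSort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data));   // [1, 2, 5, 7, 9]
    }
}
```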

Related

Cyclomatic Complexity in Intellij

I was working on an assignment today that basically asked us to write a Java program that checks whether the HTML syntax in a text file is valid. It's a pretty simple assignment and I did it very quickly, but in doing it so quickly I made it very convoluted (lots of loops and if statements). I know I can make it a lot simpler, and I will before turning it in, but amid my procrastination I started downloading plugins and seeing what information they could give me.
I downloaded two in particular that I'm curious about - CodeMetrics and MetricsReloaded. I was wondering what exactly the numbers they generate correspond to. I saw one post that was semi-similar, and I read it as well as the linked articles, but I'm still having trouble understanding a couple of things. Namely, what the first two columns (CogC and ev(G)) mean, as well as some more clarification on the other two (iv(G) and v(G)).
MetricsReloaded Method Metrics:
MetricsReloaded Class Metrics:
The previous numbers are from MetricsReloaded, but the other plugin, CodeMetrics, which also calculates cyclomatic complexity, gives slightly different numbers. I was wondering how these numbers relate and whether someone could give a brief general explanation of all this.
CodeMetrics Analysis Results:
My final question is about time complexity. My understanding of Cyclomatic complexity is that it is the number of possible paths of execution and that it is determined by the number of conditionals and how they are nested. It doesn't seem like it would, but does this correlate in any way to time complexity? And if so, is there a conversion between them that can be easily done? If not, is there a way in either of these plug-ins (or any other in IntelliJ) that can automate time complexity calculations?
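
To illustrate why the two measures are different, here are two made-up Java methods (they are not from the plugins or from the HTML assignment): one with many branches but constant running time, and one with few branches but quadratic running time.

```java
/** Two hypothetical methods illustrating why cyclomatic complexity and time
 *  complexity measure different things (invented examples, not plugin output). */
public class ComplexityDemo {

    /** Four independent branches: cyclomatic complexity around 5,
     *  but it always does a constant amount of work -- O(1) time. */
    static String classify(int c) {
        if (c == '<')  return "open bracket";
        if (c == '>')  return "close bracket";
        if (c == '/')  return "slash";
        if (c == '"')  return "quote";
        return "text";
    }

    /** Only three decision points (two loops and one if): cyclomatic complexity
     *  around 4, yet the nested loops make it O(n^2) time. */
    static int countPairs(int[] values) {
        int pairs = 0;
        for (int i = 0; i < values.length; i++) {
            for (int j = i + 1; j < values.length; j++) {
                if (values[i] == values[j]) {
                    pairs++;
                }
            }
        }
        return pairs;
    }
}
```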

Is there any algorithm or function which can be used to implement the Goal Seek functionality in Java?

The Goal Seek Excel function (often referred to as What-if-Analysis) is a method of solving for the desired output by changing an assumption that drives it. The function essentially uses a trial and error approach to back-solving the problem by plugging in guesses until it arrives at the answer.
There are multiple algorithms to solve this problem. It is called multi-dimensional optimization.
There is a group of probabilistic algorithms usually referred to as random search algorithms. The most famous is genetic optimization.
There are also more classical approaches: gradient descent, the simplex method, etc.
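
As a concrete illustration of the "plug in guesses until you hit the target" idea, here is a minimal bisection-based sketch in Java. The GoalSeek class, its method names, and the compound-interest example are hypothetical, and it assumes a continuous function whose target value is bracketed between f(low) and f(high); for multi-dimensional or non-monotonic problems you would reach for the optimization methods mentioned above.

```java
import java.util.function.DoubleUnaryOperator;

/** A minimal Goal Seek sketch using bisection: given a model f and a target output,
 *  search for an input x with f(x) ~= target. Assumes f is continuous and the target
 *  is bracketed between f(low) and f(high); real solvers (Newton's method, Brent's
 *  method, or the optimizers above) converge faster. */
public class GoalSeek {

    static double seek(DoubleUnaryOperator f, double target, double low, double high,
                       double tolerance, int maxIterations) {
        for (int i = 0; i < maxIterations; i++) {
            double mid = (low + high) / 2;
            double diff = f.applyAsDouble(mid) - target;
            if (Math.abs(diff) < tolerance) {
                return mid;                                 // close enough to the target
            }
            // Keep the half of the interval that still brackets the target.
            if ((f.applyAsDouble(low) - target) * diff < 0) {
                high = mid;
            } else {
                low = mid;
            }
        }
        return (low + high) / 2;                            // best guess after maxIterations
    }

    public static void main(String[] args) {
        // Example: what interest rate makes 1000 * (1 + r)^10 grow to 2000?
        double rate = seek(r -> 1000 * Math.pow(1 + r, 10), 2000, 0.0, 1.0, 1e-6, 100);
        System.out.printf("rate ~= %.4f%n", rate);          // roughly 0.0718
    }
}
```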

What is the time complexity of Array.sort? [duplicate]

So, .NET and Java have spoiled me into not being "required" to learn any sorting algorithms, but now I need to sort an array in a different language that doesn't have this luxury. I was able to pick up bubble sort with little issue. However, some sources discourage the use of bubble sort because of its horrible performance, with average and worst cases of n^2 comparisons. Bubble sort seems to get the job done, but I'm about to tackle an array with 100,000+ elements, and that has me worried that performance could be an issue at this scale. On the other hand, some of the other algorithms look pretty intimidating in terms of complexity. My question is: what would be a good follow-up to bubble sort in terms of better performance, without going off into complexity wasteland in implementation?
As a side note, I am an analyst who programs as needed, not a CS major. Needless to say, there are some holes I have yet to fill in my programming expertise. Thanks :)
There are many options, each with their own trade-offs. As you've discovered, Bubble Sort's trade-offs are that it's (a) simple, but (b) slow with even remotely large arrays.
Quicksort is a good one, but you may run into memory issues.
I've used Heapsort with much success, but it's not guaranteed to be stable (though I've never had problems).
Bogosort is fun to implement and talk about, but entirely impractical.
and so on...
Having a good understanding of the data to be sorted helps one decide which algorithm is best. For example:
How large will the array be?
Is there a chance it's already sorted or partially sorted?
What kind of data does the array contain?
How difficult/expensive is it to compare two elements in the array?
How difficult/expensive is it to determine if the array is sorted?
and so on...
There is no one sorting algorithm that's better than all others. Choosing what fits your needs is something that you'll pick up over time and with practice.
Take your time to learn Quicksort; it's a great algorithm and not that complicated if you go slow.
If you want some sorting algorithms just to get your feet wet(ter), I would recommend Insertion Sort and Selection Sort: they are generally better than Bubble Sort and quick to understand and implement. Merge sort is also common in algorithm courses. You will get much more use out of Quicksort in the long run, though.
You should also understand the difference between stable and non-stable sorting, if you don't already. A stable sort will not change the relative order of items with the same key, while a non-stable sort might.
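
For reference, here is a plain insertion sort in Java, a minimal sketch of the kind of step up from bubble sort recommended above. It is still O(n^2) in the worst case, but it does far fewer writes in practice, runs in O(n) on already-sorted input, and it is stable.

```java
import java.util.Arrays;

/** Plain insertion sort -- a simple follow-up to bubble sort. Worst case is still
 *  O(n^2), but it is fast on nearly-sorted data and stable (equal elements keep
 *  their relative order). */
public class InsertionSortDemo {

    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];                  // element to place into the sorted prefix a[0..i-1]
            int j = i - 1;
            while (j >= 0 && a[j] > key) {   // shift larger elements one slot to the right
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    public static void main(String[] args) {
        int[] data = {4, 1, 3, 1, 2};
        insertionSort(data);
        System.out.println(Arrays.toString(data));   // [1, 1, 2, 3, 4]
    }
}
```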

Subgraph matching (JUNG)

I have a set of subgraphs and I need to match them against the graph they were extracted from. I also need to count how many times each subgraph appears in that graph (I need to store all possible matches). There must be a perfect match on the edge labels of both subgraph and graph; the vertex labels, however, don't need to match each other. I built my system using the JUNG API, so I would like a solution (API, algorithm, etc.) that can deal with the Graph structure provided by JUNG. Any thoughts?
JUNG is very full-featured, so if there isn't a graph analysis algorithm in JUNG for what you need, there's usually a strong, theoretical reason for it. To me, your problem sounds like an instance of the Subgraph Isomorphism problem, which is NP-Complete:
http://en.wikipedia.org/wiki/Subgraph_isomorphism_problem
NP-Completeness may or may not be familiar to you (it took me 7 years of college and a Master's degree in Computer Science to understand it!), so I'll give a high-level description here. Certain problems, like sorting, can be solved in Polynomial time with respect to their input size. For example, if I have a list of N elements, I can sort it in O(N log(N)) time. More specifically, if I can solve a problem in Polynomial time, this means I can solve the problem without exhausting every possible solution. In the sorting case, I could traverse every possible permutation of the list and, if I found a permutation of the list that was sorted, return it. This is obviously not the fastest way to solve the problem though! Some very clever mathematicians were able to get it down to its theoretical minimum of O(N log(N)), thus we can sort really big lists of things quite quickly using computers today.
On the flip-side, NP-Complete problems are thought to have no Polynomial time solution (I say "thought" because no one has ever proven it, although evidence strongly suggests this is the case). Anyway, what this means is that you cannot definitively solve an NP-Complete problem without first exhausting every possible solution. The time complexity of NP-Complete problems is always O(c ^ N) or worse, where c is some constant greater than 1. This means that the time required to solve the problem grows exponentially with every incremental increase in problem size.
So what does this have to do with my problem???
What I'm getting at here is that, if the Subgraph Isomorphism problem is NP-Complete, then the only way you can determine if one graph is a subgraph of another graph is by exhausting every possible solution. So you can solve this, but probably only up to graphs of a few nodes or so (since the problem's time complexity grows exponentially with every incremental increase in graph size). This means that it is computationally infeasible to compute a solution for your problem because as soon as you reach a certain graph size, it will quite literally take forever to find a solution.
More practically, if your boss asks you to do something that is provably NP-Complete, you can simply say it's impossible and he will have to listen to you. If your professor asks you to do something that is provably NP-Complete, show him that it's NP-Complete and you'll probably get an A for the course. If YOU are trying to do something NP-Complete of your own accord, it's better to just move on to the next project... ;)
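
To make the "exhaust every possible solution" point concrete, here is a rough brute-force sketch in Java. It only checks whether one edge-labelled subgraph can be embedded in a larger graph; counting or storing all matches, as the question asks, would mean continuing the search instead of returning at the first hit. The adjacency-map representation is a stand-in, not JUNG's Graph type, and the running time blows up combinatorially with graph size, which is the whole problem.

```java
import java.util.*;

/** Brute-force subgraph matching (illustration only, not JUNG's API).
 *  Tries every injective mapping of pattern vertices onto graph vertices,
 *  which is why the running time explodes combinatorially. Edge labels must
 *  match; vertex labels are ignored, as in the question. */
public class BruteForceSubgraphMatch {

    /** edges.get(u).get(v) == label of the directed edge u -> v, absent if no edge. */
    static boolean matches(Map<Integer, Map<Integer, String>> pattern,
                           Map<Integer, Map<Integer, String>> graph,
                           List<Integer> patternVertices,
                           List<Integer> graphVertices,
                           Map<Integer, Integer> mapping) {
        if (mapping.size() == patternVertices.size()) {
            return true;                                     // every pattern vertex mapped consistently
        }
        int next = patternVertices.get(mapping.size());
        for (Integer candidate : graphVertices) {
            if (mapping.containsValue(candidate)) continue;  // keep the mapping injective
            mapping.put(next, candidate);
            if (consistent(pattern, graph, mapping)
                    && matches(pattern, graph, patternVertices, graphVertices, mapping)) {
                return true;
            }
            mapping.remove(next);
        }
        return false;
    }

    /** Every pattern edge between already-mapped vertices must exist in the graph
     *  with the same label. */
    static boolean consistent(Map<Integer, Map<Integer, String>> pattern,
                              Map<Integer, Map<Integer, String>> graph,
                              Map<Integer, Integer> mapping) {
        for (Map.Entry<Integer, Integer> a : mapping.entrySet()) {
            for (Map.Entry<Integer, Integer> b : mapping.entrySet()) {
                String want = pattern.getOrDefault(a.getKey(), Collections.emptyMap()).get(b.getKey());
                if (want == null) continue;                  // no pattern edge, nothing to check
                String have = graph.getOrDefault(a.getValue(), Collections.emptyMap()).get(b.getValue());
                if (!want.equals(have)) return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Pattern: 0 -> 1 labeled "x".  Graph: 1 -> 2 labeled "x", 2 -> 3 labeled "y".
        Map<Integer, Map<Integer, String>> pattern = Map.of(0, Map.of(1, "x"));
        Map<Integer, Map<Integer, String>> graph =
                Map.of(1, Map.of(2, "x"), 2, Map.of(3, "y"));
        boolean found = matches(pattern, graph, List.of(0, 1), List.of(1, 2, 3), new HashMap<>());
        System.out.println(found);   // true: pattern edge 0->1/"x" maps onto 1->2/"x"
    }
}
```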
Well, I had to solve the problem by implementing it from scratch. I followed the strategy suggested in the topic Any working example of VF2 algorithm?. So, if someone else is in doubt about this problem, I suggest taking a look at Rich Apodaca's answer in the aforementioned topic.

Easiest to code algorithm for Rubik's cube?

What would be a relatively easy algorithm to code in Java for solving a Rubik's cube? Efficiency is also important, but a secondary consideration.
Perform random operations until you get the right solution. The easiest algorithm and the least efficient.
The simplest non-trivial algorithm I've found is this one:
http://www.chessandpoker.com/rubiks-cube-solution.html
It doesn't look too hard to code up. The link mentioned in Yannick M.'s answer looks good too, but the solution of 'the cross' step looks like it might be a little more complex to me.
There are a number of open source solver implementations which you might like to take a look at. Here's a Python implementation. This Java applet also includes a solver, and the source code is available. There's also a Javascript solver, also with downloadable source code.
Anthony Gatlin's answer makes an excellent point about the well-suitedness of Prolog for this task. Here's a detailed article about how to write your own Prolog solver. The heuristics it uses are particularly interesting.
Might want to check out: http://peter.stillhq.com/jasmine/rubikscubesolution.html
Has a graphical representation of an algorithm to solve a 3x3x3 Rubik's cube
I understand your question is related to Java, but on a practical note, languages like Prolog are much better suited to problems like solving a Rubik's cube. I assume this is probably for a class though, and you may have no leeway as to the choice of tool.
You can do it with BFS (breadth-first search). I think the implementation is not that hard (it is one of the simplest graph algorithms). Using the queue data structure, what you will really be doing is building a BFS tree and finding the shortest path from the given state to the desired state. The drawback of this algorithm is that it is not efficient enough (without any modification, even solving a 2x2x2 cube takes around 5 minutes). But you can always find some tricks to boost the speed.
To be honest, it is one of the homework assignments of the MIT course "Introduction to Algorithms". Here is the link to the problem set: http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/assignments/MIT6_006F11_ps6.pdf. They provide a few libraries to help you visualize the cube and avoid unnecessary effort.
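
For the BFS approach described above, a rough Java sketch might look like the following. CubeState, neighbors(), and isSolved() are hypothetical placeholders (a real state type would also need proper equals() and hashCode() for the visited map); the point is just the queue-plus-visited-map structure that recovers the shortest move sequence.

```java
import java.util.*;

/** Generic breadth-first search over cube states, in the spirit of the BFS answer above.
 *  CubeState is a hypothetical placeholder; a real implementation would encode the
 *  sticker permutation, the legal face turns, and equals()/hashCode(). */
class BfsSolver {

    interface CubeState {
        boolean isSolved();
        /** All states reachable with one move, keyed by the move's name (e.g. "R", "U'"). */
        Map<String, CubeState> neighbors();
    }

    /** Returns the shortest move sequence from start to a solved state, or null if none found. */
    static List<String> solve(CubeState start) {
        Queue<CubeState> queue = new ArrayDeque<>();
        Map<CubeState, List<String>> path = new HashMap<>();  // state -> moves used to reach it
        queue.add(start);
        path.put(start, new ArrayList<>());
        while (!queue.isEmpty()) {
            CubeState current = queue.poll();
            if (current.isSolved()) {
                return path.get(current);                      // first time we pop a solved state
            }
            for (Map.Entry<String, CubeState> next : current.neighbors().entrySet()) {
                if (!path.containsKey(next.getValue())) {      // visit each state only once
                    List<String> moves = new ArrayList<>(path.get(current));
                    moves.add(next.getKey());
                    path.put(next.getValue(), moves);
                    queue.add(next.getValue());
                }
            }
        }
        return null;  // no solution found within the explored state space
    }
}
```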
For your reference, you can certainly look at this Java implementation.
It uses a two-phase algorithm to solve the Rubik's cube. I have tried this code and it works as well.
One solution is, I guess, to simultaneously run all possible routes. That sounds stupid, but here's the logic: over 99% of possible scrambles will be solvable in under 20 moves. This means that although you cycle through huge numbers of possibilities, you will still get there eventually. Essentially this works by treating the scrambled cube as your first step. Then you store, in variables, a new cube for each possible move on that first cube. For each of these new cubes you do the same thing. After each possible move you check whether the cube is complete, and if so, that is the solution. To be able to report the solution, you would need an extra bit of data on each cube recording the moves done to get to that stage.
