I am making a function that can turn a number into a simplified radical square root. I have so far made a function that can return the factors of a number. I want to turn the string into an array so I can index through the numbers in a for loop and test if they have a perfect square root. How can I do this?
This is what I have so far:
public static void factor(int num) {
for (int i = 1; i <= num; ++i) {
if (num % i == 0) {
System.out.println(i);
}
}
}
Inputting the number 20 outputs:
1
2
4
5
10
20
I want to turn this into {1, 2, 4, 5, 10, 20}
You can store them in a List when you print.
public static void factor(int num) {
List<Integer> list = new ArrayList<Integer>();
for (int i = 1; i <= num; ++i) {
if (num % i == 0) {
list.add(i);
System.out.println(i);
}
}
//iterate over the list
for(int val: list){
//do something with val
}
}
However if you only want to convert a multi-line string to an array, do
yourMultiLineString.split("\n");
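For example, a minimal sketch of parsing the split pieces into an int array (assuming the factors were first collected into one newline-separated string; the literal below is just the sample output for 20, and java.util.Arrays is imported for the printout):
String factors = "1\n2\n4\n5\n10\n20";
String[] parts = factors.split("\n");
int[] numbers = new int[parts.length];
for (int i = 0; i < parts.length; i++) {
    numbers[i] = Integer.parseInt(parts[i].trim()); // parse each line as an int
}
System.out.println(Arrays.toString(numbers)); // [1, 2, 4, 5, 10, 20]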
Read using a Scanner, store in a List, then use the .toArray() method.
List<Integer> l = new ArrayList<>();
Scanner sc = new Scanner(System.in);
for(int i = 0; i < 7; i++){
l.add(Integer.parseInt(sc.nextLine()));
}
Integer[] arr = l.toArray(new Integer[0]); // toArray cannot be cast to a primitive int[]
You need to create a list of integers and push your data into it, like the list in the code below:
public static void factor(int num) {
List<Integer> list = new ArrayList<>();
for (int i = 1; i <= num; ++i) {
if (num % i == 0) {
list.add(i);
}
}
for(Integer i: list) {
System.out.println(i);
}
// OR via indexing
for(int i=0; i<list.size(); i++) {
System.out.println(list.get(i));
}
}
This program gives output for all non-repeated elements, but I need only the first non-repeated element. I tried using if(flag==1) to break the loop after the end of the j loop, but when I tested it, it did not work for all cases.
import java.util.Scanner;
public class first
{
public static void main(String[] args)
{
int n, flag = 0;
Scanner s = new Scanner(System.in);
System.out.print("Enter no. of elements you want in array:");
n = s.nextInt();
int a[] = new int[n];
System.out.println("Enter all the elements:");
for(int i = 0; i < n; i++)
{
a[i] = s.nextInt();
}
System.out.print("Non repeated first element is :");
for(int i = 0; i < n; i++)
{
for(int j = 0; j < n; j++)
{
if(i != j)
{
if(a[i]!= a[j])
{
flag = 1;
}
else
{
flag = 0;
break;
}
if(flag == 1)
{
System.out.print(" ");
System.out.println(a[i]);
break;
}
}
}
}
}
}
You can construct two sets, singleSet and repeatedSet, for elements that appear exactly once and more than once, respectively. Both can be built in a single pass over the elements. Then you do a second pass to find the first non-repeated element:
int[] elements = { 1, 1, 2, 3, 3, 4 };
Set<Integer> singleSet = new HashSet<>();
Set<Integer> repeatedSet = new HashSet<>();
for (int e : elements) {
if (repeatedSet.contains(e)) {
continue;
}
if (singleSet.contains(e)) {
singleSet.remove(e);
repeatedSet.add(e);
} else {
singleSet.add(e);
}
}
for (int e : elements) {
if (singleSet.contains(e)) {
return e;
}
}
This is an O(n) solution, so it should be faster than the nested loop, which is O(n^2).
You can also replace singleSet with a singleList and, at the end, return the first element of that list, which avoids the second pass over the elements. That makes the solution even faster.
Following up on the idea of the two sets from @Mincong, I am adding here the solution he mentioned as faster.
int[] array = { 1, 1, 2, 3, 3, 4 };
Set<Integer> allValues = new HashSet<>(array.length);
Set<Integer> uniqueValues = new LinkedHashSet<>(array.length);
for (int value : array) {
if (allValues.add(value)) {
uniqueValues.add(value);
}
else {
uniqueValues.remove(value);
}
}
if (!uniqueValues.isEmpty()) {
return uniqueValues.iterator().next();
}
First non-repeating integer in an array in Python
def non_repeating(arr):
    seen_once = []
    repeated = set()
    for n in arr:
        if n in repeated:
            continue
        if n in seen_once:
            # second occurrence: no longer a candidate (handles values repeated any number of times)
            seen_once.remove(n)
            repeated.add(n)
        else:
            seen_once.append(n)
    return seen_once[0] if seen_once else None
print(non_repeating([1, 1, 1, 5, 2, 1, 3, 4, 2]))
JavaScript
function nonRepeatingInteger(arr) {
let val = [], count = [];
arr.forEach((item, pos) => {
if (!val.includes(item)) {
val.push(item);
count[val.indexOf(item)] = 1;
} else {
count[val.indexOf(item)]++;
}
});
return val[count.indexOf(Math.min(...count))];
}
console.log(nonRepeatingInteger([-1, 2, -1, 3, 2]));
console.log(nonRepeatingInteger([9, 4, 9, 6, 7, 4]));
private int getFirstNonRepeating(int[] arr) {
Set<Integer> set = new HashSet<>();
ArrayList<Integer> list = new ArrayList<>();
int min = 0;
for (int i = 0; i <arr.length; i++) {
//If set.add returns false, the value is already in the set, so add it to the duplicates list
if (!set.add(arr[i])) {
list.add(arr[i]);
}
}
//Go through the array again; the first element that is not in the duplicates list is the first non-repeating value
for (int i = 0; i < arr.length; i++) {
if (!list.contains(arr[i])) {
min = arr[i];
break;
}
}
Log.e(TAG, "firstNonRepeating: called===" + min);
return min;
}
Try this :
int a[] = {1,2,3,4,5,1,2};
for(int i=0; i<a.length;i++) {
int count = 0;
for(int j=0; j<a.length;j++) {
if(a[i]==a[j] && i!=j) {
count++;
break;
}
}
if(count == 0) {
System.out.println(a[i]);
break; //To display first non repeating element
}
}
def solution(self, list):
count_map = {}
for item in list:
count_map[item] = count_map.get(item, 0) + 1
for item in list:
if count_map[item] == 1:
return item
return None
Using JS Object:
function nonRepeat_Using_Object(arr) {
const data = arr.reduce((acc, val) => {
if (!acc[val]) {
acc[val] = 0;
}
acc[val]++;
return acc;
}, {});
for (let i = 0; i < arr.length; i++) {
if (data[arr[i]] === 1) {
return arr[i];
}
}
}
Another way to achieve this: you can use a hashmap to store counts of the integers in the first pass and return the first element whose count is 1 in the second pass.
Here is Python code trying to achieve the same:
from typing import List
from collections import defaultdict

def singleNumber(nums: List[int]) -> int:
    memory = defaultdict(int)
    for num in nums:
        memory[num] += 1
    for k, v in memory.items():
        if v == 1:
            return k
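For reference, a minimal Java sketch of the same two-pass counting idea (the method name firstNonRepeating and the choice of LinkedHashMap are illustrative, not from the original answer):
import java.util.LinkedHashMap;
import java.util.Map;

static Integer firstNonRepeating(int[] nums) {
    Map<Integer, Integer> counts = new LinkedHashMap<>();
    for (int n : nums) {
        counts.merge(n, 1, Integer::sum); // first pass: count occurrences
    }
    for (int n : nums) {
        if (counts.get(n) == 1) {         // second pass: first value seen exactly once
            return n;
        }
    }
    return null; // no non-repeating element
}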
So the problem is that I have two arrays and have to check them for common items. Usual stuff, very easy. But the tricky thing for me is that I have to return another array containing the elements that were found to be common. I cannot use any Collections. Thanks in advance. This is my code so far!
public class checkArrayItems {
static int[] array1 = { 4, 5, 6, 7, 8 };
static int[] array2 = { 1, 2, 3, 4, 5 };
public static void main(String[] args) {
checkArrayItems obj = new checkArrayItems();
System.out.println(obj.checkArr(array1, array2));
}
int[] checkArr(int[] arr1, int[] arr2) {
int[] arr = new int[array1.length];
for (int i = 0; i < arr1.length; i++) {
for (int j = 0; j < arr2.length; j++) {
if (arr1[i] == arr2[j]) {
arr[i] = arr1[i];
}
}
}
return arr;
}
}
In case someone was wondering what the "chasing" algorithm mentioned by @user3438137 looks like:
int[] sorted1 = Arrays.copyOf(array1, array1.length);
Arrays.sort(sorted1);
int[] sorted2 = Arrays.copyOf(array2, array2.length);
Arrays.sort(sorted2);
int[] common = new int[Math.min(sorted1.length, sorted2.length)];
int numCommonElements = 0, firstIndex = 0, secondIndex = 0;
while (firstIndex < sorted1.length && secondIndex < sorted2.length) {
if (sorted1[firstIndex] < sorted2[secondIndex]) firstIndex++;
else if (sorted1[firstIndex] == sorted2[secondIndex]) {
common[numCommonElements] = sorted1[firstIndex];
numCommonElements++;
firstIndex++;
secondIndex++;
}
else secondIndex++;
}
// optionally trim the commonElements array to numCommonElements size
You can use a MIN or MAX default dummy value for the elements in your new array arr using arr[i] = Integer.MIN_VALUE;. In that way you will be able to differentiate between the real and dummy values. Like below:
int[] checkArr(int[] arr1, int[] arr2) {
int[] arr = new int[array1.length];
for (int i = 0; i < arr1.length; i++) {
arr[i] = Integer.MIN_VALUE;
for (int j = 0; j < arr2.length; j++) {
if (arr1[i] == arr2[j]) {
arr[i] = arr1[i];
}
}
}
return arr;
}
Output
[4, 5, -2147483648, -2147483648, -2147483648]
EDIT
Conclusion
When you iterate over arr all the values other than -2147483648 are common.
EDIT 2
To print the common values as mentioned on the comment below:
public static void main(String[] args) {
checkArrayItems obj = new checkArrayItems();
int[] arr = obj.checkArr(array1, array2);
System.out.println("Common values are : ");
for (int x : arr) {
if (x != Integer.MIN_VALUE) {
System.out.print(x+"\t");
}
}
}
Suggestion: follow the Java naming convention for classes, i.e. rename checkArrayItems to CheckArrayItems.
I'm too lazy to type the code, but here is the algorithm.
1. Sort both arrays.
2. Iterate over both arrays, comparing items and advancing the indexes.
Hope this helps.
Declare an index before the two for loops
int index = 0;
that will hold the current position of the arr array. Then:
arr[index++] = arr1[i];
Also, since you initialize arr with arr1.length, the array will be padded with 0s at the end for the positions where no common value was found.
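Putting that suggestion together, a minimal sketch (my own, not from the original answer) could look like this, adding a break so each value is copied at most once and trimming the result with Arrays.copyOf so there are no padding zeros:
int[] checkArr(int[] arr1, int[] arr2) {
    int[] temp = new int[arr1.length];
    int index = 0;                       // next free slot in temp
    for (int i = 0; i < arr1.length; i++) {
        for (int j = 0; j < arr2.length; j++) {
            if (arr1[i] == arr2[j]) {
                temp[index++] = arr1[i]; // record the common value
                break;                   // stop after the first match for this element
            }
        }
    }
    return java.util.Arrays.copyOf(temp, index); // trim away the unused tail
}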
I was asked to write my own implementation to remove duplicated values in an array. Here is what I have created. But after testing with 1,000,000 elements it took a very long time to finish. Is there something I can do to improve my algorithm, or any bugs to remove?
I need to write my own implementation - not use Set, HashSet, etc., or any other tools such as iterators. Simply use an array to remove duplicates.
public static int[] removeDuplicates(int[] arr) {
int end = arr.length;
for (int i = 0; i < end; i++) {
for (int j = i + 1; j < end; j++) {
if (arr[i] == arr[j]) {
int shiftLeft = j;
for (int k = j+1; k < end; k++, shiftLeft++) {
arr[shiftLeft] = arr[k];
}
end--;
j--;
}
}
}
int[] whitelist = new int[end];
for(int i = 0; i < end; i++){
whitelist[i] = arr[i];
}
return whitelist;
}
You can take the help of the Set collection:
int end = arr.length;
Set<Integer> set = new HashSet<Integer>();
for(int i = 0; i < end; i++){
set.add(arr[i]);
}
Now if you iterate through this set, it will contain only the unique values. The iterating code looks like this:
Iterator it = set.iterator();
while(it.hasNext()) {
System.out.println(it.next());
}
If you are allowed to use Java 8 streams:
Arrays.stream(arr).distinct().toArray();
Note: I am assuming the array is sorted.
Code:
int[] input = new int[]{1, 1, 3, 7, 7, 8, 9, 9, 9, 10};
int current = input[0];
boolean found = false;
for (int i = 0; i < input.length; i++) {
if (current == input[i] && !found) {
found = true;
} else if (current != input[i]) {
System.out.print(" " + current);
current = input[i];
found = false;
}
}
System.out.print(" " + current);
output:
1 3 7 8 9 10
Slight modification to the original code itself, by removing the innermost for loop.
public static int[] removeDuplicates(int[] arr){
int end = arr.length;
for (int i = 0; i < end; i++) {
for (int j = i + 1; j < end; j++) {
if (arr[i] == arr[j]) {
/*int shiftLeft = j;
for (int k = j+1; k < end; k++, shiftLeft++) {
arr[shiftLeft] = arr[k];
}*/
arr[j] = arr[end-1];
end--;
j--;
}
}
}
int[] whitelist = new int[end];
/*for(int i = 0; i < end; i++){
whitelist[i] = arr[i];
}*/
System.arraycopy(arr, 0, whitelist, 0, end);
return whitelist;
}
There exist many solutions to this problem.
The sort approach
You sort your array and keep only the unique items.
The set approach
You declare a HashSet, put every item into it, and you are left with only the unique ones.
The boolean-array approach
You create a boolean array that represents the items already returned (this depends on the range of the data in your array).
If you deal with a large amount of data I would pick the first solution, as you do not allocate additional memory and sorting is quite fast. For a small set of data the complexity would be n^2, but for large data it will be n log n.
Since you can assume the range is between 0 and 1000, there is a very simple and efficient solution:
//Throws an exception if values are not in the range of 0-1000
public static int[] removeDuplicates(int[] arr) {
boolean[] set = new boolean[1001]; //values must default to false
int totalItems = 0;
for (int i = 0; i < arr.length; ++i) {
if (!set[arr[i]]) {
set[arr[i]] = true;
totalItems++;
}
}
int[] ret = new int[totalItems];
int c = 0;
for (int i = 0; i < set.length; ++i) {
if (set[i]) {
ret[c++] = i;
}
}
return ret;
}
This runs in linear time O(n). Caveat: the returned array is sorted so if that is illegal then this answer is invalid.
class Demo
{
public static void main(String[] args)
{
int a[]={3,2,1,4,2,1};
System.out.print("Before Sorting:");
for (int i=0;i<a.length; i++ )
{
System.out.print(a[i]+"\t");
}
System.out.print ("\nAfter Sorting:");
//sorting the elements
for(int i=0;i<a.length;i++)
{
for(int j=i;j<a.length;j++)
{
if(a[i]>a[j])
{
int temp=a[i];
a[i]=a[j];
a[j]=temp;
}
}
}
//After sorting
for(int i=0;i<a.length;i++)
{
System.out.print(a[i]+"\t");
}
System.out.print("\nAfter removing duplicates:");
int b=0;
a[b]=a[0];
for(int i=0;i<a.length;i++)
{
if (a[b]!=a[i])
{
b++;
a[b]=a[i];
}
}
for (int i=0;i<=b;i++ )
{
System.out.print(a[i]+"\t");
}
}
}
OUTPUT:
Before Sorting: 3 2 1 4 2 1
After Sorting: 1 1 2 2 3 4
After removing duplicates: 1 2 3 4
Since this question is still getting a lot of attention, I decided to answer it by copying this answer from Code Review.SE:
You're following the same philosophy as the bubble sort, which is very, very, very slow. Have you tried this?
Sort your unordered array with quicksort. Quicksort is much faster than bubble sort (I know, you are not sorting, but the algorithm you follow is almost the same as bubble sort to traverse the array).
Then start removing duplicates (repeated values will be next to each other). In a for loop you could have two indices: source and destination. On each loop you copy source to destination unless they are the same, and increment both by 1. Every time you find a duplicate you increment source (and don't perform the copy).
@morgano
import java.util.Arrays;
public class Practice {
public static void main(String[] args) {
int a[] = { 1, 3, 3, 4, 2, 1, 5, 6, 7, 7, 8, 10 };
Arrays.sort(a);
int j = 0;
for (int i = 0; i < a.length - 1; i++) {
if (a[i] != a[i + 1]) {
a[j] = a[i];
j++;
}
}
a[j] = a[a.length - 1];
for (int i = 0; i <= j; i++) {
System.out.println(a[i]);
}
}
}
This is the simplest way.
What if you create two boolean arrays, one for negative values and one for positive values, initialized to all false?
Then you cycle through the input array and look up in those arrays whether you have already encountered the value.
If not, you add it to the output array and mark it as already seen.
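A rough sketch of this idea, assuming the values fall in a bounded range (the bound of 1000 below is purely illustrative):
public static int[] removeDuplicates(int[] input) {
    boolean[] seenPositive = new boolean[1001]; // flags for 0..1000
    boolean[] seenNegative = new boolean[1001]; // flags for -1..-1000
    int[] temp = new int[input.length];
    int count = 0;
    for (int value : input) {
        boolean[] seen = value >= 0 ? seenPositive : seenNegative;
        int index = Math.abs(value);
        if (!seen[index]) {        // first time this value is encountered
            seen[index] = true;
            temp[count++] = value; // keep it, preserving input order
        }
    }
    return java.util.Arrays.copyOf(temp, count); // trim to the unique values
}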
package com.pari.practice;
import java.util.HashSet;
import java.util.Iterator;
import com.pari.sort.Sort;
public class RemoveDuplicates {
/**
* brute force - O(N^2)
*
* @param input
* @return
*/
public static int[] removeDups(int[] input){
boolean[] isSame = new boolean[input.length];
int sameNums = 0;
for( int i = 0; i < input.length; i++ ){
for( int j = i+1; j < input.length; j++){
if( input[j] == input[i] ){ //compare same
isSame[j] = true;
sameNums++;
}
}
}
//compact the array into the result.
int[] result = new int[input.length-sameNums];
int count = 0;
for( int i = 0; i < input.length; i++ ){
if( isSame[i] == true) {
continue;
}
else{
result[count] = input[i];
count++;
}
}
return result;
}
/**
* set - O(N)
* does not guarantee order of elements returned - set property
*
* @param input
* @return
*/
public static int[] removeDups1(int[] input){
HashSet myset = new HashSet();
for( int i = 0; i < input.length; i++ ){
myset.add(input[i]);
}
//compact the array into the result.
int[] result = new int[myset.size()];
Iterator setitr = myset.iterator();
int count = 0;
while( setitr.hasNext() ){
result[count] = (int) setitr.next();
count++;
}
return result;
}
/**
* quicksort - O(N log N)
*
* @param input
* @return
*/
public static int[] removeDups2(int[] input){
Sort st = new Sort();
st.quickSort(input, 0, input.length-1); //input is sorted
//compact the array into the result.
int[] intermediateResult = new int[input.length];
int count = 0;
int prev = Integer.MIN_VALUE;
for( int i = 0; i < input.length; i++ ){
if( input[i] != prev ){
intermediateResult[count] = input[i];
count++;
}
prev = input[i];
}
int[] result = new int[count];
System.arraycopy(intermediateResult, 0, result, 0, count);
return result;
}
public static void printArray(int[] input){
for( int i = 0; i < input.length; i++ ){
System.out.print(input[i] + " ");
}
}
public static void main(String[] args){
int[] input = {5,6,8,0,1,2,5,9,11,0};
RemoveDuplicates.printArray(RemoveDuplicates.removeDups(input));
System.out.println();
RemoveDuplicates.printArray(RemoveDuplicates.removeDups1(input));
System.out.println();
RemoveDuplicates.printArray(RemoveDuplicates.removeDups2(input));
}
}
Output:
5 6 8 0 1 2 9 11
0 1 2 5 6 8 9 11
0 1 2 5 6 8 9 11
I have just written the above code for trying out. thanks.
public static int[] removeDuplicates(int[] arr){
HashSet<Integer> set = new HashSet<>();
final int len = arr.length;
//changed end to len
for(int i = 0; i < len; i++){
set.add(arr[i]);
}
int[] whitelist = new int[set.size()];
int i = 0;
for (Iterator<Integer> it = set.iterator(); it.hasNext();) {
whitelist[i++] = it.next();
}
return whitelist;
}
Runs in O(N) time instead of your O(N^3) time
Not a big fan of mutating the user's input, but considering your constraints...
public int[] removeDup(int[] nums) {
Arrays.sort(nums);
int x = 0;
for (int i = 0; i < nums.length; i++) {
if (i == 0 || nums[i] != nums[i - 1]) {
nums[x++] = nums[i];
}
}
return Arrays.copyOf(nums, x);
}
Array sort can be easily replaced with any nlog(n) algorithm.
This is a simple way to sort the array and then remove the duplicate elements:
import java.io.BufferedReader;
import java.io.InputStreamReader;
public class DuplicatesRemove {
public static void main(String args[]) throws Exception {
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
System.out.println("enter size of the array");
int l = Integer.parseInt(br.readLine());
int[] a = new int[l];
// insert elements in the array logic
for (int i = 0; i < l; i++)
{
System.out.println("enter a element");
int el = Integer.parseInt(br.readLine());
a[i] = el;
}
// sorting elements in the array logic
for (int i = 0; i < l; i++)
{
for (int j = 0; j < l - 1; j++)
{
if (a[j] > a[j + 1])
{
int temp = a[j];
a[j] = a[j + 1];
a[j + 1] = temp;
}
}
}
// remove duplicate elements logic
int b = 0;
a[b] = a[0];
for (int i = 1; i < l; i++)
{
if (a[b] != a[i])
{
b++;
a[b]=a[i];
}
}
for(int i=0;i<=b;i++)
{
System.out.println(a[i]);
}
}
}
Okay, so you cannot use Set or other collections. One solution I don't see here so far is one based on the use of a Bloom filter, which essentially is an array of bits, so perhaps that passes your requirements.
The Bloom filter is a lovely and very handy technique, fast and space-efficient, that can be used to do a quick check of the existence of an element in a set without storing the set itself or the elements. It has a (typically small) false positive rate, but no false negatives. In other words, for your question, if a Bloom filter tells you that an element hasn't been seen so far, you can be sure it hasn't. But if it says that an element has been seen, you actually need to check. This still saves a lot of time if there aren't too many duplicates in your list: for those there is no inner loop to run, except in the small-probability case of a false positive. You typically choose this rate based on how much space you are willing to give to the Bloom filter (rule of thumb: less than 10 bits per unique element for a false positive rate of 1%).
There are many implementations of Bloom filters, see e.g. here or here, so I won't repeat that in this answer. Let us just assume the api described in that last reference, in particular, the description of put(E e):
true if the Bloom filter's bits changed as a result of this operation. If the bits changed, this is definitely the first time object has been added to the filter. If the bits haven't changed, this might be the first time object has been added to the filter. (...)
An implementation using such a Bloom filter would then be:
public static int[] removeDuplicates(int[] arr) {
ArrayList<Integer> out = new ArrayList<>();
int n = arr.length;
BloomFilter<Integer> bf = new BloomFilter<>(...); // decide how many bits and how many hash functions to use (compromise between space and false positive rate)
for (int e : arr) {
boolean might_contain = !bf.put(e);
boolean found = false;
if (might_contain) {
// check if false positive
for (int u : out) {
if (u == e) {
found = true;
break;
}
}
}
if (!found) {
out.add(e);
}
}
return out.stream().mapToInt(i -> i).toArray();
}
Obviously, if you can alter the incoming array in place, then there is no need for an ArrayList: at the end, when you know the actual number of unique elements, just arraycopy() those.
For a sorted Array, just check the next index:
//sorted data!
public static int[] distinct(int[] arr) {
int[] temp = new int[arr.length];
int count = 0;
for (int i = 0; i < arr.length; i++) {
int current = arr[i];
if(count > 0 )
if(temp[count - 1] == current)
continue;
temp[count] = current;
count++;
}
int[] whitelist = new int[count];
System.arraycopy(temp, 0, whitelist, 0, count);
return whitelist;
}
You need to sort your array and then loop over it to remove duplicates. As you cannot use other tools, you need to write the code yourself.
You can easily find examples of quicksort in Java on the internet (on which this example is based).
public static void main(String[] args) throws Exception {
final int[] original = new int[]{1, 1, 2, 8, 9, 8, 4, 7, 4, 9, 1};
System.out.println(Arrays.toString(original));
quicksort(original);
System.out.println(Arrays.toString(original));
final int[] unqiue = new int[original.length];
int prev = original[0];
unqiue[0] = prev;
int count = 1;
for (int i = 1; i < original.length; ++i) {
if (original[i] != prev) {
unqiue[count++] = original[i];
}
prev = original[i];
}
System.out.println(Arrays.toString(unqiue));
final int[] compressed = new int[count];
System.arraycopy(unqiue, 0, compressed, 0, count);
System.out.println(Arrays.toString(compressed));
}
private static void quicksort(final int[] values) {
if (values.length == 0) {
return;
}
quicksort(values, 0, values.length - 1);
}
private static void quicksort(final int[] values, final int low, final int high) {
int i = low, j = high;
int pivot = values[low + (high - low) / 2];
while (i <= j) {
while (values[i] < pivot) {
i++;
}
while (values[j] > pivot) {
j--;
}
if (i <= j) {
swap(values, i, j);
i++;
j--;
}
}
if (low < j) {
quicksort(values, low, j);
}
if (i < high) {
quicksort(values, i, high);
}
}
private static void swap(final int[] values, final int i, final int j) {
final int temp = values[i];
values[i] = values[j];
values[j] = temp;
}
So the process runs in 3 steps.
Sort the array - O(n log n)
Remove duplicates - O(n)
Compact the array - O(n)
So this improves significantly on your O(n^3) approach.
Output:
[1, 1, 2, 8, 9, 8, 4, 7, 4, 9, 1]
[1, 1, 1, 2, 4, 4, 7, 8, 8, 9, 9]
[1, 2, 4, 7, 8, 9, 0, 0, 0, 0, 0]
[1, 2, 4, 7, 8, 9]
EDIT
The OP states the values inside the array don't really matter, but that we can assume the range is between 0 and 1000. This is a classic case where an O(n) counting sort can be used.
We create an array of size range + 1, in this case 1001. We then loop over the data and increment the value at the index corresponding to each data point.
We can then compact the resulting array, dropping indices that have not been incremented. This makes the values unique, as we ignore the count.
public static void main(String[] args) throws Exception {
final int[] original = new int[]{1, 1, 2, 8, 9, 8, 4, 7, 4, 9, 1, 1000, 1000};
System.out.println(Arrays.toString(original));
final int[] buckets = new int[1001];
for (final int i : original) {
buckets[i]++;
}
final int[] unique = new int[original.length];
int count = 0;
for (int i = 0; i < buckets.length; ++i) {
if (buckets[i] > 0) {
unique[count++] = i;
}
}
final int[] compressed = new int[count];
System.arraycopy(unique, 0, compressed, 0, count);
System.out.println(Arrays.toString(compressed));
}
Output:
[1, 1, 2, 8, 9, 8, 4, 7, 4, 9, 1, 1000, 1000]
[1, 2, 4, 7, 8, 9, 1000]
public static void main(String args[]) {
int[] intarray = {1,2,3,4,5,1,2,3,4,5,1,2,3,4,5};
Set<Integer> set = new HashSet<Integer>();
for(int i : intarray) {
set.add(i);
}
Iterator<Integer> setitr = set.iterator();
for(int pos=0; pos < intarray.length; pos ++) {
if(pos < set.size()) {
intarray[pos] =setitr.next();
} else {
intarray[pos]= 0;
}
}
for(int i: intarray)
System.out.println(i);
}
I know this is kinda dead but I just wrote this for my own use. It's more or less the same as adding to a hashset and then pulling all the elements out of it. It should run in O(nlogn) worst case.
public static int[] removeDuplicates(int[] numbers) {
Entry[] entries = new Entry[numbers.length];
int size = 0;
for (int i = 0 ; i < numbers.length ; i++) {
int nextVal = numbers[i];
int index = nextVal % entries.length;
Entry e = entries[index];
if (e == null) {
entries[index] = new Entry(nextVal);
size++;
} else {
if(e.insert(nextVal)) {
size++;
}
}
}
int[] result = new int[size];
int index = 0;
for (int i = 0 ; i < entries.length ; i++) {
Entry current = entries[i];
while (current != null) {
result[index++] = current.value; // use the separate result index, not the bucket index
current = current.next;
}
}
return result;
}
public static class Entry {
int value;
Entry next;
Entry(int value) {
this.value = value;
}
public boolean insert(int newVal) {
    Entry current = this;
    while (true) {
        if (current.value == newVal) {
            return false;                     // value already in this bucket's chain
        } else if (current.next != null) {
            current = current.next;           // walk down the chain
        } else {
            current.next = new Entry(newVal); // append the new value at the end
            return true;
        }
    }
}
}
int tempvar=0; //Variable for the final array without any duplicates
int whilecount=0; //variable for while loop
while(whilecount<(nsprtable*2)-1) //nsprtable can be any number
{
//check whether the next value is identical, in the case of a sorted array
if(temparray[whilecount]!=temparray[whilecount+1])
{
finalarray[tempvar]=temparray[whilecount];
tempvar++;
whilecount=whilecount+1;
}
else if (temparray[whilecount]==temparray[whilecount+1])
{
finalarray[tempvar]=temparray[whilecount];
tempvar++;
whilecount=whilecount+2;
}
}
Hope this helps or solves the purpose.
package javaa;
public class UniqueElementinAnArray
{
public static void main(String[] args)
{
int[] a = {10,10,10,10,10,100};
int[] output = new int[a.length];
int count = 0;
int num = 0;
//Iterate over an array
for(int i=0; i<a.length; i++)
{
num=a[i];
boolean flag = check(output,num);
if(flag==false)
{
output[count]=num;
++count;
}
}
//print all the elements from the array except zeros (0)
for (int i : output)
{
if(i!=0 )
System.out.print(i+" ");
}
}
/***
* If the next number from the array already exists in the unique array then return true, else false
* @param arr Unique number array. Initially this array is empty.
* @param num Number to search for in the unique array, to decide whether it is a duplicate or unique.
* @return true if the number already exists in the array, else false
*/
public static boolean check(int[] arr, int num)
{
boolean flag = false;
for(int i=0;i<arr.length; i++)
{
if(arr[i]==num)
{
flag = true;
break;
}
}
return flag;
}
}
public static int[] removeDuplicates(int[] arr) {
    int end = arr.length;
    HashSet<Integer> set = new HashSet<Integer>(end);
    for (int i = 0; i < end; i++) {
        set.add(arr[i]);
    }
    // Set.toArray() cannot produce a primitive int[], so copy the values out manually
    int[] result = new int[set.size()];
    int idx = 0;
    for (int value : set) {
        result[idx++] = value;
    }
    return result;
}
You can use an auxiliary array (temp) whose indexes are the values of the main array, so the time complexity is linear, O(n). As we want to do it without using any library, we define another array (unique) to push the non-duplicate elements into:
var num = [2,4,9,4,1,2,24,12,4];
let temp = [];
let unique = [];
let j = 0;
for (let i = 0; i < num.length; i++){
if (temp[num[i]] !== 1){
temp[num[i]] = 1;
unique[j++] = num[i];
}
}
console.log(unique);
If you are looking to remove duplicates in the same array while keeping the time complexity at O(n), then this should do the trick. Note that it only works if the array is sorted.
function removeDuplicates_sorted(arr){
let j = 0;
for(let x = 0; x < arr.length - 1; x++){
if(arr[x] != arr[x + 1]){
arr[j++] = arr[x];
}
}
arr[j++] = arr[arr.length - 1];
arr.length = j;
return arr;
}
Here is one for an unsorted array; it is O(n) but uses more space than the sorted version.
function removeDuplicates_unsorted(arr){
let map = {};
let j = 0;
for(var numbers of arr){
if(!map[numbers]){
map[numbers] = 1;
arr[j++] = numbers;
}
}
arr.length = j;
return arr;
}
Note to other readers who want to use the Set method of solving this problem: if the original ordering must be preserved, do not use HashSet as in the top result. HashSet does not guarantee preservation of the original order, so LinkedHashSet should be used instead; it keeps track of the order in which the elements were inserted into the set and returns them in that order.
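A minimal sketch of that order-preserving variant (the sample values are arbitrary; uses java.util.LinkedHashSet and java.util.Arrays):
int[] arr = {4, 2, 4, 1, 2, 3};
Set<Integer> seen = new LinkedHashSet<>(); // remembers insertion order
for (int value : arr) {
    seen.add(value);                       // duplicates are simply ignored
}
int[] result = new int[seen.size()];
int i = 0;
for (int value : seen) {
    result[i++] = value;
}
System.out.println(Arrays.toString(result)); // [4, 2, 1, 3]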
This is an interview question.
public class Test4 {
public static void main(String[] args) {
int a[] = {1, 2, 2, 3, 3, 3, 6,6,6,6,6,66,7,65};
int newlength = lengthofarraywithoutduplicates(a);
for(int i = 0 ; i < newlength ;i++) {
System.out.println(a[i]);
}//for
}//main
private static int lengthofarraywithoutduplicates(int[] a) {
int count = 1 ;
for (int i = 1; i < a.length; i++) {
int ch = a[i];
if(ch != a[i-1]) {
a[count++] = ch;
}//if
}//for
return count;
}//fix
}//end1
But it's always better to use a Stream:
int[] a = {1, 2, 2, 3, 3, 3, 6,6,6,6,6,66,7,65};
int[] array = Arrays.stream(a).distinct().toArray();
System.out.println(Arrays.toString(array));//[1, 2, 3, 6, 66, 7, 65]
How about this one? It works only for a sorted array of numbers and returns the array without duplicates, without using Set or other Collections, just an array:
public static int[] removeDuplicates(int[] array) {
int[] nums = new int[array.length];
int addedNumber = 0;
int j = 0;
for(int i=0; i < array.length; i++) {
if (j == 0 || addedNumber != array[i]) { // the j == 0 guard keeps a leading 0 from being dropped
nums[j] = array[i];
j++;
addedNumber = nums[j-1];
}
}
return Arrays.copyOf(nums, j);
}
An array of 1040 duplicated numbers processed in 33020 nanoseconds(0.033020 millisec).
public static void main(String[] args) {
Integer[] intArray = { 1, 1, 1, 2, 4, 2, 3, 5, 3, 6, 7, 3, 4, 5 };
Integer[] finalArray = removeDuplicates(intArray);
System.err.println(Arrays.asList(finalArray));
}
private static Integer[] removeDuplicates(Integer[] intArray) {
int count = 0;
Integer[] interimArray = new Integer[intArray.length];
for (int i = 0; i < intArray.length; i++) {
boolean exists = false;
for (int j = 0; j < interimArray.length; j++) {
if (interimArray[j] != null && interimArray[j].equals(intArray[i])) { // use equals; == compares Integer references
exists = true;
}
}
if (!exists) {
interimArray[count] = intArray[i];
count++;
}
}
final Integer[] finalArray = new Integer[count];
System.arraycopy(interimArray, 0, finalArray, 0, count);
return finalArray;
}
I feel Android Killer's idea is great, but I just wondered if we can leverage HashMap. So I did a little experiment. And I found HashMap seems faster than HashSet.
Here is code:
int[] input = new int[1000000];
for (int i = 0; i < input.length; i++) {
Random random = new Random();
input[i] = random.nextInt(200000);
}
long startTime1 = new Date().getTime();
System.out.println("Set start time:" + startTime1);
Set<Integer> resultSet = new HashSet<Integer>();
for (int i = 0; i < input.length; i++) {
resultSet.add(input[i]);
}
long endTime1 = new Date().getTime();
System.out.println("Set end time:"+ endTime1);
System.out.println("result of set:" + (endTime1 - startTime1));
System.out.println("number of Set:" + resultSet.size() + "\n");
long startTime2 = new Date().getTime();
System.out.println("Map start time:" + startTime1);
Map<Integer, Integer> resultMap = new HashMap<Integer, Integer>();
for (int i = 0; i < input.length; i++) {
if (!resultMap.containsKey(input[i]))
resultMap.put(input[i], input[i]);
}
long endTime2 = new Date().getTime();
System.out.println("Map end Time:" + endTime2);
System.out.println("result of Map:" + (endTime2 - startTime2));
System.out.println("number of Map:" + resultMap.size());
Here is result:
Set start time:1441960583837
Set end time:1441960583917
result of set:80
number of Set:198652
Map start time:1441960583837
Map end Time:1441960583983
result of Map:66
number of Map:198652
This is not using Set, Map, List or any extra collection, only two arrays:
package arrays.duplicates;
import java.lang.reflect.Array;
import java.util.Arrays;
public class ArrayDuplicatesRemover<T> {
public static <T> T[] removeDuplicates(T[] input, Class<T> clazz) {
T[] output = (T[]) Array.newInstance(clazz, 0);
for (T t : input) {
if (!inArray(t, output)) {
output = Arrays.copyOf(output, output.length + 1);
output[output.length - 1] = t;
}
}
return output;
}
private static <T> boolean inArray(T search, T[] array) {
for (T element : array) {
if (element.equals(search)) {
return true;
}
}
return false;
}
}
And the main to test it
package arrays.duplicates;
import java.util.Arrays;
public class TestArrayDuplicates {
public static void main(String[] args) {
Integer[] array = {1, 1, 2, 2, 3, 3, 3, 3, 4};
testArrayDuplicatesRemover(array);
}
private static void testArrayDuplicatesRemover(Integer[] array) {
final Integer[] expectedResult = {1, 2, 3, 4};
Integer[] arrayWithoutDuplicates = ArrayDuplicatesRemover.removeDuplicates(array, Integer.class);
System.out.println("Array without duplicates is supposed to be: " + Arrays.toString(expectedResult));
System.out.println("Array without duplicates currently is: " + Arrays.toString(arrayWithoutDuplicates));
System.out.println("Is test passed ok?: " + (Arrays.equals(arrayWithoutDuplicates, expectedResult) ? "YES" : "NO"));
}
}
And the output:
Array without duplicates is supposed to be: [1, 2, 3, 4]
Array without duplicates currently is: [1, 2, 3, 4]
Is test passed ok?: YES