I have an int[] array that contains values with the following properties:
They are sorted
They are unique (no duplicates)
They are in a known range [0..MAX)
MAX is typically quite a lot larger than the length of the array (say 10-100x)
Sometimes the numbers are evenly distributed across the range, but at other times there are quite long sequences of consecutive numbers. I estimate it is about 50/50 between the two cases.
Given this list, I want to efficiently find the index of a specific value in the array (or if the value is not present, find the next higher value).
I've already implemented a straight binary search with interval bisection that works fairly well, but I have a suspicion that the nature/distribution of the data can be exploited to converge to a solution faster.
I'm interested in optimising the average case search time, but it is important that the worst case is never worse than O(log n) as the arrays are sometimes very large.
Question: is it possible to do much better than a plain binary search in the average case?
EDIT (to clarify additional questions / comments)
The constant in O(log n) definitely matters. In fact, assuming that better algorithmic complexity than O(log n) isn't possible, the constant is probably the only thing that matters.
It's often a one-off search, so while preprocessing is possible it's probably not going to be worth it.
This is in the comments and should be an answer. It's a joint effort, so I'm making it a CW answer:
You may want to look at an interpolation search. In the worst case, it can be worse than O(log n), so if that's a hard requirement, this wouldn't apply. But if your interpolation is decent, then depending on the data distribution an interpolation search can beat a straight binary search.
To know, you'd have to implement the interpolation search with a reasonably smart interpolation algorithm, and then run several representative data sets through both to see whether the interpolation or the binary is better suited. I'd think it'd be one of the two, but I'm not au fait with truly cutting edge searching algorithms.
Let's call the array x and the searched number z.
Since you expect the values to be evenly distributed, you can use interpolation search. This is similar to binary search, but splits the index range at start + ((z - x[start]) * (end - start)) / (x[end] - x[start]).
To guarantee a running time of O(log n), you have to combine interpolation search with binary search (alternate one step of binary search with one step of interpolation search):
public int search(int[] values, int z) {
    int start = 0;
    int end = values.length - 1;
    // values outside the stored range can never be found
    if (z < values[start] || z > values[end])
        return -1;
    if (values[start] == z)
        return start;
    if (values[end] == z)
        return end;
    boolean interpolation = true;
    while (start < end) {
        int mid;
        if (interpolation) {
            // long arithmetic avoids int overflow in the product
            mid = start + (int) (((long) (z - values[start]) * (end - start))
                    / (values[end] - values[start]));
            // clamp into [start, end - 1] so the interval always shrinks
            if (mid < start)
                mid = start;
            if (mid > end - 1)
                mid = end - 1;
        } else {
            mid = start + (end - start) / 2;
        }
        int v = values[mid];
        if (v == z)
            return mid;
        else if (v > z)
            end = mid;
        else
            start = mid + 1;
        interpolation = !interpolation;
    }
    return -1;
}
Since every second iteration of the while loop does a binary search step, it uses at most twice the number of iterations a plain binary search would use (still O(log n)). Since every other step is an interpolation step, the algorithm should reduce the interval size quickly if the input has the desired properties.
If the int[] is
sorted,
has unique values, and
you know the range in advance,
then instead of searching, why not store each value at its own index?
Say the number is 243: store it as int[243] = 243.
That way searching will be easy and fast. The only thing left is to find the next higher value.
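A minimal sketch of this direct-address idea (helper names are mine; note the forward scan for the next higher value can cost up to O(MAX) in the worst case, and the table costs O(MAX) memory and preprocessing, which may conflict with the one-off-search constraint in the question):

static int[] buildTable(int[] values, int max) {
    int[] table = new int[max];
    java.util.Arrays.fill(table, -1);       // -1 marks "value not present"
    for (int v : values)
        table[v] = v;                       // e.g. table[243] = 243
    return table;
}

static int findOrNextHigher(int[] table, int z) {
    for (int i = z; i < table.length; i++)  // scan forward for z or the next higher value
        if (table[i] != -1)
            return table[i];
    return -1;                              // no value >= z is present
}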
I have one solution.
You say the array can be either:
1) numbers evenly distributed across the range, or
2) containing quite long sequences of consecutive numbers.
So first we run a simple test to decide whether it is of type 1 or type 2.
To test for type 1:
length = array.length;
range = array[length-1] - array[0];
Now consider the values of the array at
{ length*(1/5), length*(2/5), length*(3/5), length*(4/5) }.
If the distribution is of type 1, we know approximately what the value at array[i] should be, so we check whether the values at those four positions are close to what an even distribution would predict.
If they are close, the distribution is even and we can easily locate any element in the array by interpolation. If we can't find the element with this approach, we treat the array as type 2.
If the above test fails, the array is of type 2, which means there are a few places in the array where long sequences of consecutive numbers are present.
So we solve it along the lines of binary search; the explanation is below (a code sketch follows after the cases).
* We first probe the middle of the array (say at length/2, with index i):
left = 0, right = length;
BEGIN:
i = (left + right) / 2;
case a.1: our search number is greater than array[i]
left = i;
* Now we check whether a long consecutive sequence is present at that position, i.e. whether array[i], array[i+1], array[i+2] are consecutive ints.
case a.1.1: if they are consecutive,
since they are consecutive and the sequence might be long, we probe directly at the index predicted by our search value.
For example, if our search int is 10, the sequence is 5,6,7,8,9,10,11,15,100,103,
and array[i] = 5, then we probe directly at array[i + 10 - 5].
If we find our search int there, return it; otherwise continue from case a.2 only [because our search int is obviously less than the value there], setting
right = i + 10 - 5
case a.1.2: if they are not consecutive,
continue from BEGIN;
case a.2: our search number is less than array[i]
* case a.2 is exactly symmetric to a.1:
* similarly, check whether there is a backwards sequence, i.e. whether array[i-2], array[i-1], array[i] are consecutive,
If they are consecutive, probe backwards to the predicted index as we did in case a.1.1.
If they are not consecutive, repeat as in case a.1.2.
case a.3: array[i] is our search int,
then return i.
Hope this helps.
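A rough Java sketch of the idea (names are mine; the run check and direct probe implement case a.1.1 for the greater-than side, and since each iteration still at least halves the remaining interval, the worst case stays O(log n)):

static int searchWithRuns(int[] a, int z) {
    int left = 0, right = a.length - 1;
    while (left <= right) {
        int i = left + (right - left) / 2;
        if (a[i] == z)
            return i;                                        // case a.3
        if (a[i] < z) {                                      // case a.1
            // case a.1.1: a[i], a[i+1], a[i+2] are consecutive ints
            if (i + 2 <= right && a[i + 1] == a[i] + 1 && a[i + 2] == a[i] + 2) {
                int probe = Math.min(right, i + (z - a[i])); // probe the predicted index
                if (a[probe] == z)
                    return probe;
                if (a[probe] < z)
                    left = probe + 1;                        // clamped probe; z can only be further right
                else {
                    left = i + 1;                            // continue as case a.2 with right = probe
                    right = probe - 1;
                }
                continue;
            }
            left = i + 1;                                    // case a.1.2
        } else {
            right = i - 1;                                   // case a.2 (backwards run check omitted for brevity)
        }
    }
    return -1;
}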
I am looking at the LeetCode problem 2134. Minimum Swaps to Group All 1's Together II:
A swap is defined as taking two distinct positions in an array and swapping the values in them.
A circular array is defined as an array where we consider the first element and the last element to be adjacent.
Given a binary circular array nums, return the minimum number of swaps required to group all 1's present in the array together at any location.
I am trying to study how other people came up with solutions of their own. I came across this particular one, but I don't understand the logic:
class Solution {
    public int minSwaps(int[] nums) {
        // number of ones
        int cntones = Arrays.stream(nums).sum();
        // worst case answer
        int rslt = nums.length;
        // position lft and figure better value for min/rslt
        int holes = 0;
        for (int i = 0; i < cntones; i++) {
            if (nums[i] == 0)
                holes++;
        }
        // better value for rslt from lft to rgt
        // up to index of cntones.
        rslt = Math.min(rslt, holes);
        // they have a test case with one element
        // and that trips up if you dont do modulo
        int rgt = cntones % nums.length;
        for (int lft = 0; lft < nums.length; lft++) {
            rslt = Math.min(rslt, holes);
            if (nums[lft] != nums[rgt])
                if (nums[rgt] == 1)
                    holes--;
                else
                    holes++;
            rgt = (rgt + 1) % nums.length;
        }
        return rslt;
    }
}
Why is the worst case, the length of the input array?
I'm thinking wait, wouldn't the worst case be something like [0,1,0,1,0,1...] where 0's and 1's are alternating? Can you give me an example?
I suppose the number of holes (counting the 0's in a fixed-length window whose length is the total number of 1's) can potentially be a possible answer in some cases, but because I do not understand the worst case (rslt from question #1), the line below stumps me as well.
// better value for rslt from lft to rgt
// up to index of cntones.
rslt = Math.min(rslt, holes);
About the modulo below: I don't think cntones can ever be bigger than nums.length, which in turn would result in 0 all the time? I'm thinking that for the case with one element, you'd have to check whether that one element is 0 or 1. How does the line below cover that edge case?
// they have a test case with one element
// and that trips up if you dont do modulo
int rgt=cntones % nums.length;
Due to #1~#3 the last for loop makes no sense to me...
Why is the worst case, the length of the input array?
First note that a swap is only useful when it swaps a 0 with a 1. Secondly, it makes no sense to swap the same digit a second time, as the result of such a double swap could have been achieved with a single swap. So we can say that an upper limit for the number of swaps is the number of 0-digits or the number of 1-digits (whichever is least). In fact, this is an overestimation, because at least one 1-digit should be able to stay unmoved. But let's ignore that for now. To reach that worst case, there should be as many 1-digits as 0-digits, so then we have half of the length as the worst case. Of course, by initialising with a value that is greater than that (like the length) we do no harm.
The example of alternating digits would be resolved by keeping half of those 1-digits unmoved, and moving the remaining 1-digits in the holes between them. So that means we have a number of swaps that is equal to about one fourth of the length of the array.
below line stumps me as well.
rslt = Math.min(rslt, holes);
As you said, there is a window moving over the circular array, which represents the final situation where all 1-digits should end up. So it sets the target to work towards. Obviously, the 1-digits that are already within that window don't need to be swapped. Each 0-digit inside that window has to be swapped with a 1-digit that is currently outside that window. Doing that will reach the target, and so the number of swaps for reaching that particular target window is equal to the number of holes (0-digits) inside that window.
As that exercise is done for each possible window, we are interested to find the best position of the window, i.e. the one where the number of holes (swaps) is minimised. That is what this line of code is doing. rslt is the minimum "so far" and holes is the fresh value we have for the current window. If that is less, then rslt should be updated to it. That's what happens in this statement.
About the modulo below, I don't think cntones can ever be bigger than nums.length, in turn which will result in 0 all the time? I'm thinking for the case with one element, you'd have to check whether that one element is 0 or 1. How does below line cover that edge case?
int rgt=cntones % nums.length;
That modulo only serves the case where cntones is equal to nums.length. You are right that it will never exceed it. But the case where they are equal is possible (when the input has only 1-digits), and as rgt is going to be used as an index, it must not be equal to nums.length, as that is out of bounds for the array. For example, for nums = [1] we get cntones = 1 and rgt = 1 % 1 = 0.
Due to #1~#3 the last for loop makes no sense to me...
It should be clear from the above details. That loop moves the window one step at a time, and keeps the variable holes updated incrementally. Of course, we could have counted the number of holes from scratch in each window, but that would be a waste of time. As we go from one window to the next, we only lose one digit on the left and gain one on the right, so we can just update holes with that information and know how many holes there are in the current window -- the one that starts at lft and runs (circularly) to rgt. In case the digit that we lose at the left is the same as the one we gain at the right, we obviously didn't change the number of holes. When they are different, we either win or lose one hole in comparison with the previous window.
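To make the window mechanics concrete, here is a small hypothetical trace. For nums = [0,1,1,0,1] we have cntones = 3, so every window has length 3:

window [0..2]: holes = 1  (the 0 at index 0)
window [1..3]: holes = 1  (the 0 at index 3)
window [2..4]: holes = 1  (the 0 at index 3)
window [3..0]: holes = 2  (the 0s at indices 3 and 0, wrapping around)
window [4..1]: holes = 1  (the 0 at index 0)

The minimum over all windows is 1, so a single swap (e.g. the 0 at index 3 with the 1 at index 4) groups all the 1's.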
This is the pseudo code whose time complexity I want to calculate. I think it is a binary search algorithm, but I fail at calculating the complexity because it reduces logarithmically.
USE variables half-array, found, middle element
SET half-array = initial array;
SET found = False;
Boolean SearchArray(half-array)
IF half-array is empty THEN
return found;
find middle element in half-array;
Compare search key with middle element;
IF middle element == search key THEN
SET found = True;
ELSE
IF search key < middle element THEN
SET half-array = lower half of half-array;
ELSE
SET half-array = upper half of half-array;
SearchArray(half-array)
It looks like you are running this method recursively, and with each iteration you are reducing the number of elements being searched by half. This is a logarithmic reduction, i.e. O(log n).
Since you are reducing your elements by half each time, you need to determine how many executions are needed to reduce it to a single element; this previous answer provides a proof, and if you are a more visual person, the diagram in this response may help.
Yes, it is indeed a binary search algorithm. The reason why it is called a 'binary' search is because, as you may have noticed, after each iteration your problem space is reduced by roughly half (I say roughly because of the floor function).
So now, to find the complexity, we have to devise a recurrence relation, which we can use to determine the worst-case time complexity of binary search.
Let T(n) denote the number of comparisons binary search does for n elements. In the worst case, no element is found. Also, to make our analysis easier, assume that n is a power of 2.
BINARY SEARCH:
When there is a single element, there is only one check, hence T(1) = 1.
The algorithm calculates the middle entry, then compares it with our key. If it is equal to the key, it returns the index; otherwise it halves the range by updating the upper and lower bounds such that n/2 elements are in the range.
We then check only one of the two halves, and this is done recursively until a single element is left.
Hence, we get the recurrence relation:
T(n) = T(n/2) + 1
Using the Master Theorem, we get the time complexity T(n) ∈ Θ(log n).
Also refer: Master Theorem
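One way to see this without invoking the Master Theorem is to unroll the recurrence (with n a power of 2, as assumed above):

T(n) = T(n/2) + 1
     = T(n/4) + 2
     = T(n/8) + 3
     ...
     = T(n/2^k) + k
     = T(1) + log2(n)    [when 2^k = n]
     = 1 + log2(n) ∈ Θ(log n)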
You are correct in saying that this algorithm is Binary Search (compare your pseudo code to the pseudo code on this Wikipedia page: Binary Search)
That being the case, this algorithm has a worst case time complexity of O(log n), where n is the number of elements in the given array. This is due to the fact that in every recursive call where you don't find the target element, you divide the array in half.
This reduction process is logarithmic because at the end of this algorithm you will have reduced the list to a single element by repeatedly dividing the number of elements that still need to be checked by 2; the number of times you can do that is roughly equivalent (see below) to the number of times you would have to multiply 2 by itself to obtain a number equal to the size of the given array.
*I say roughly above because the number of recursive calls made is always going to be an integral value, whereas the power you would have to raise 2 to will not be an integer if the size of the given list is not a power of two.
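For concreteness, here is one way the pseudocode above could be rendered as runnable Java (a sketch with names of my choosing; note that it adds the empty-range base case the pseudocode needs in order to terminate):

static boolean searchArray(int[] array, int lo, int hi, int searchKey) {
    if (lo > hi)
        return false;                      // base case: empty half-array, key not found
    int middle = lo + (hi - lo) / 2;       // find middle element in half-array
    if (array[middle] == searchKey)
        return true;                       // SET found = True
    if (searchKey < array[middle])
        return searchArray(array, lo, middle - 1, searchKey);  // lower half
    return searchArray(array, middle + 1, hi, searchKey);      // upper half
}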
It's a bonus school task for which we haven't received any teaching yet, and I'm not looking for complete code, but some tips to get going would be pretty cool. I'm going to post what I've done so far in Java when I get home, but here's something I've done already.
So, we have to write a sorting algorithm which, for example, sorts "AAABBB" to "ABABAB". The maximum input size is 10^6, and it all has to happen in under 1 second. If there is more than one answer, the first one in alphabetical order is the right one. I started testing different algorithms to sort them without the alphabetical-order requirement in mind, just to see how things work out.
First version:
Save the counts in an integer array where the index is the ASCII code and the value is the number of times that character occurs in the char array.
Then I picked the 2 most frequent characters and started writing them into the new character array one after the other, until some other count became higher, and then I swapped to that one. It worked well, but of course the order wasn't right.
Second version:
Followed the same idea, but stopped picking the most frequent character and just picked the indexes in the order they appeared in my array. Works well until the input is something like CBAYYY. The algorithm sorts it to ABCYYY instead of AYBYCY. Of course I could try to find free spots for those Y's, but at that point it starts to take too long.
An interesting problem, with an interesting tweak. Yes, this is a permutation or rearranging rather than a sort. No, the quoted question is not a duplicate.
Algorithm.
Count the character frequencies.
Output alternating characters from the two lowest in alphabetical order.
As each is exhausted, move to the next.
At some point the highest frequency char will be exactly half the remaining chars. At that point switch to outputting all of that char alternating in turn with the other remaining chars in alphabetical order.
Some care required to avoid off-by-one errors (odd vs even number of input characters). Otherwise, just writing the code and getting it to work right is the challenge.
Note that there is one special case, where the number of characters is odd and the frequency of one character starts at (half plus 1). In this case you need to start with step 4 in the algorithm, outputting all one character alternating with each of the others in turn.
Note also that if one character comprises more than half the input then, apart from this special case, no solution is possible. This situation may be detected in advance by inspecting the frequencies, or during execution when the tail consists of all one character. Detecting this case was not part of the spec.
Since no sort is required the complexity is O(n). Each character is examined twice: once when it is counted and once when it is added to the output. Everything else is amortised.
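A minimal Java sketch of this greedy (names are mine, assuming uppercase A-Z input; the forced-pick test count*2 - 1 >= remaining is exactly the point in step 4 where the highest-frequency character is half of what remains):

static String rearrange(String s) {
    int[] count = new int[26];
    for (char c : s.toCharArray())
        count[c - 'A']++;                           // step 1: character frequencies
    StringBuilder out = new StringBuilder();
    int remaining = s.length();
    int last = -1;                                  // last character output
    while (remaining > 0) {
        int mode = 0;                               // most frequent remaining character
        for (int i = 1; i < 26; i++)
            if (count[i] > count[mode])
                mode = i;
        int pick;
        if (count[mode] * 2 - 1 >= remaining && mode != last) {
            pick = mode;                            // step 4: the frequent character is forced now
        } else {
            pick = -1;                              // steps 2-3: lowest alphabetical, not equal to last
            for (int i = 0; i < 26; i++)
                if (count[i] > 0 && i != last) { pick = i; break; }
            if (pick == -1)
                return null;                        // only the last character remains: impossible
        }
        out.append((char) ('A' + pick));
        count[pick]--;
        remaining--;
        last = pick;
    }
    return out.toString();
}

For "CBAYYY" this yields "AYBYCY", and it returns null when no adjacent-free arrangement exists.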
My idea is the following. With the right implementation it can be almost linear.
First establish a function to check whether a solution is even possible. It should be very fast: something like checking whether the most frequent letter makes up more than half of all letters, taking into consideration whether it can go first.
Then, while there are still letters remaining, take the alphabetically first letter that is not the same as the previous one and keeps a solution possible for the rest.
The correct algorithm would be the following:
Build a histogram of the characters in the input string.
Put the CharacterOccurrences in a PriorityQueue / TreeSet where they're ordered on highest occurrence, lowest alphabetical order
Have an auxiliary variable of type CharacterOccurrence
Loop while the PQ is not empty
Take the head of the PQ and keep it
Add the character of the head to the output
If the auxiliary variable is set => Re-add it to the PQ
Store the kept head in the auxiliary variable with 1 occurrence less unless the occurrence ends up being 0 (then unset it)
if the size of the output == size of the input, it was possible and you have your answer. Else it was impossible.
Complexity is O(N * log(N))
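A sketch translating those steps directly (class and method names are mine; the comparator orders by highest occurrence, then lowest character, as described):

import java.util.PriorityQueue;

class CharOcc {
    final char c;
    int n;
    CharOcc(char c, int n) { this.c = c; this.n = n; }
}

static String rearrangePQ(String s) {
    int[] hist = new int[26];
    for (char c : s.toCharArray())
        hist[c - 'A']++;                               // histogram of the input
    PriorityQueue<CharOcc> pq = new PriorityQueue<>(
            (a, b) -> a.n != b.n ? b.n - a.n : a.c - b.c);
    for (int i = 0; i < 26; i++)
        if (hist[i] > 0)
            pq.add(new CharOcc((char) ('A' + i), hist[i]));
    StringBuilder out = new StringBuilder();
    CharOcc held = null;                               // the auxiliary variable
    while (!pq.isEmpty()) {
        CharOcc head = pq.poll();                      // take the head and keep it
        out.append(head.c);
        if (held != null)
            pq.add(held);                              // re-add the held entry
        head.n--;
        held = head.n > 0 ? head : null;               // one occurrence less, unset at 0
    }
    return out.length() == s.length() ? out.toString() : null;  // null: impossible
}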
Make a bidirectional table of character frequencies: character->count and count->character. Record an Optional<Character> which stores the last character output (or nothing if there is none). Store the total number of characters.
If (total number of characters - 1) < 2*(highest character count), use the highest-count character (otherwise there would be no solution). Fail if it is the same as the last character output.
Otherwise, use the earliest alphabetically that isn't the last character output.
Record the last character output, and decrease both the total and the used character's count.
Loop while we still have characters.
While this question is not quite a duplicate, the part of my answer giving the algorithm for enumerating all permutations with as few adjacent equal letters as possible can readily be adapted to return only the minimum, as its proof of optimality requires that every recursive call yield at least one permutation. The only changes outside of the test code are to try keys in sorted order and to break after the first hit is found. The running time of the code below is polynomial (O(n) if I bothered with better data structures), since unlike its ancestor it does not enumerate all possibilities.
david.pfx's answer hints at the logic: greedily take the least letter that doesn't eliminate all possibilities, but, as he notes, the details are subtle.
from collections import Counter
from itertools import permutations
from operator import itemgetter
from random import randrange

def get_mode(count):
    return max(count.items(), key=itemgetter(1))[0]

def enum2(prefix, x, count, total, mode):
    prefix.append(x)
    count_x = count[x]
    if count_x == 1:
        del count[x]
    else:
        count[x] = count_x - 1
    yield from enum1(prefix, count, total - 1, mode)
    count[x] = count_x
    del prefix[-1]

def enum1(prefix, count, total, mode):
    if total == 0:
        yield tuple(prefix)
        return
    if count[mode] * 2 - 1 >= total and [mode] != prefix[-1:]:
        yield from enum2(prefix, mode, count, total, mode)
    else:
        defect_okay = not prefix or count[prefix[-1]] * 2 > total
        mode = get_mode(count)
        for x in sorted(count.keys()):
            if defect_okay or [x] != prefix[-1:]:
                yield from enum2(prefix, x, count, total, mode)
                break

def enum(seq):
    count = Counter(seq)
    if count:
        yield from enum1([], count, sum(count.values()), get_mode(count))
    else:
        yield ()

def defects(lst):
    return sum(lst[i - 1] == lst[i] for i in range(1, len(lst)))

def test(lst):
    perms = set(permutations(lst))
    opt = min(map(defects, perms))
    slow = min(perm for perm in perms if defects(perm) == opt)
    fast = list(enum(lst))
    assert len(fast) == 1
    fast = min(fast)
    print(lst, fast, slow)
    assert slow == fast

for r in range(10000):
    test([randrange(3) for i in range(randrange(6))])
You start by counting how many of each letter you have in your array:
For example, you have 3 - A, 2 - B, 1 - C, 4 - Y, 1 - Z.
1) Then each time you output the lowest letter you are allowed to put.
So you start with:
A
Then you cannot put A any more, so you put B:
AB
and so on, until:
ABABACYZ
This works as long as you still have at least 2 kinds of characters left. But here you still have 3 Y's.
2) To place the last characters, go from your first Y towards the beginning and insert one Y at every second position. (I don't know if that is the right way to say it in English.)
So: ABAYBYAYCYZ.
3) Then take the subsequence between your Y's, i.e. YBYAYCY, and sort the letters between the Y's:
BAC => ABC
And you arrive at
ABAYAYBYCYZ
which should be the solution to your problem.
To do all this, I think a LinkedList is the best way.
I hope it helps :)
I need an algorithm which will find the longest sequence of equal elements in a one-dimensional array.
For example:
int[] myArr = {1,1,1,3,4,5,5,5,5,5,3,3,4,3,3}
The solution will be 5, because the sequence of 5's is the longest.
This is my solution to the problem:
static int findSequence(int[] arr, int arrLength) {
    int frequency = 1;
    int bestNumber = arr[0];
    int bestFrequency = 0;
    for (int n = 1; n < arrLength; n++) {
        if (arr[n] != arr[n - 1]) {
            if (frequency > bestFrequency) {
                bestNumber = arr[n - 1];
                bestFrequency = frequency;
            }
            frequency = 1;
        } else {
            frequency++;
        }
    }
    if (frequency > bestFrequency) {
        bestNumber = arr[arrLength - 1];
        bestFrequency = frequency;
    }
    return bestNumber;
}
but I'm not satisfied. Maybe someone knows a more effective solution?
You can skip some numbers in the array with the following pattern:
Maintain an integer jump_count holding the number of elements to skip (which will be bestFrequency/2). The divisor 2 can be changed according to the data set. Update jump_count every time you update bestFrequency.
Now, after every jump:
If the previous element is not equal to the current element and frequency <= jump_count, scan backwards from the current element to count the duplicates and update frequency.
E.g. 2 2 2 2 3 3 and frequency = 0 (bold are the previous and current elements): scan backwards to count the 3's and update frequency = 2.
If the previous element is not equal to the current element and frequency > jump_count, scan every element in the jump to update frequency, and update bestFrequency if needed.
E.g. 2 2 2 2 2 3 3 and frequency = 1 (bold are the previous and current elements): scan the 2's in this jump and update frequency = 1 + 4. Now, since frequency < bestFrequency, scan backwards to count the 3's and update frequency = 2.
If the previous element equals the current element, scan the jump to make sure it is a continuous run. If it is, update frequency to frequency + jump_count; otherwise treat it the same as step 2.
Here, we will consider two examples:
a) 2 2 2 2 2 2 (bold are the previous and current elements): check whether the jump contains all 2's. Yes in this case, so add jump_count to frequency.
b) 2 2 2 2 3 2 (bold are the previous and current elements): check whether the jump contains all 2's. No in this case, so treat it as in step 2: scan the 2's in this jump and update frequency = 1 + 4. Now, since frequency < bestFrequency, scan backwards from the current element to count the 2's and update frequency = 1.
Optimization: you can save some loops in many cases.
P.S. Since this is my first answer, I hope I was able to convey myself.
Try this:
public static void longestSequence(int[] a) {
    int count = 1, max = 1;
    for (int i = 1; i < a.length; i++) {
        if (a[i] == a[i - 1]) {
            count++;
        } else {
            if (count > max) {
                max = count;
            }
            count = 1;
        }
    }
    if (count > max)
        System.out.println(count);
    else
        System.out.println(max);
}
Your algorithm is pretty good.
It touches each array element (except the last) only once. This puts it at O(n) runtime which for this problem seems like the best worst case runtime you can get and is a pretty good worst case runtime as far as algorithms go.
One possible suggestion: when you find a new bestFrequency and n + bestFrequency > arrLength, you can break out of the loop, because you know a longer sequence cannot be found.
The only optimization that seems possible is:
for(int n=1;n<arrLength && frequency + (arrLength - n) >= bestFrequency;n++){
because you don't need to search any further once you can't possibly exceed the best frequency with the number of elements remaining (it's probably possible to simplify that even further given a little more thought).
But as others point out, you're doing a O(n) search on n elements for a sequence - there's really no more efficient algorithm available.
I was thinking this must be an O(n) problem, but now I'm wondering if it has to be - perhaps you could make it O(log n) using a binary search (I don't think what @BlackJack posted actually works quite right, but it was inspiring):
Was thinking something like keep track of first, last element (in a block, probably a recursive algorithm). Do a binary split (so middle element to start). If it matches either first or last, you possibly have a run of at least that length. Check if the total length could exceed the current known max run. If so, continue, if not break.
Then repeat the process - do a binary split of one of those halves to see if the middle item matches. Repeat this process, recursing up and down to get the maximum length of a single run within a branch. Stop searching a branch when it can't possibly exceed the maximum run length.
I think this still comes out to be an O(n) algorithm, because the worst case is still searching every single element (consider a maximum run length of 1 or 2). But if you limit checking each item to once, and you search the most-likely-longest branches first (based on start/middle/end matches), it could potentially skip some fairly long runs. A breadth-first rather than depth-first search would also help.
Given a large unordered array of long random numbers and a target long, what's the most efficient algorithm for finding the closest number?
@Test
public void findNearest() throws Exception {
    final long[] numbers = {90L, 10L, 30L, 50L, 70L};
    Assert.assertEquals("nearest", 10L, findNearest(numbers, 12L));
}
Iterate through the array of longs once. Store the current closest number and the distance to that number. Continue checking each number if it is closer, and just replace the current closest number when you encounter a closer number.
This gets you the best possible performance, O(n).
Building a binary tree as suggested by another answerer will take O(n log n). Of course, future searches will only take O(log n)... so it may be worth it if you do a lot of searches.
If you are pro, you can parallelize this with openmp or thread library, but I am guessing that is out of the scope of your question.
If you do not intend to do multiple such requests on the array, there is no better way than the brute-force linear-time check of each number.
If you will do multiple requests on the same array, first sort it and then do a binary search on it - this reduces the time per request to O(log n), but you still pay O(n*log(n)) for the sort, so this is only reasonable if the number of requests k is reasonably large, i.e. k*n >> (is a lot bigger than) n*log(n) + k*log(n).
If the array will change, then create a binary search tree and do a lower-bound request on it. This again is only reasonable if the number of nearest-number requests is relatively large in comparison with the number of array-change requests and the number of elements. As the cost of building the tree is O(n*log(n)) and the cost of updating it is O(log n), you need k*log(n) + n*log(n) + k*log(n) << (to be a lot smaller than) k*n.
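As a sketch of that BST variant using the JDK's built-in red-black tree (java.util.TreeSet), where floor/ceiling play the role of the lower-bound request (assumes a non-empty set):

import java.util.TreeSet;

static long findNearest(TreeSet<Long> set, long target) {
    Long below = set.floor(target);    // greatest element <= target, or null
    Long above = set.ceiling(target);  // least element >= target, or null
    if (below == null) return above;
    if (above == null) return below;
    return (target - below <= above - target) ? below : above;  // ties go to the lower value here
}

Building the set from the array is the O(n*log(n)) part; each findNearest call is then two O(log n) lookups, and the TreeSet supports O(log n) insertions and removals when the array changes.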
IMHO, I think you should use a Binary Heap (http://en.wikipedia.org/wiki/Binary_heap), which has an insertion time of O(log n), so O(n log n) for the entire array. For me, the coolest thing about the binary heap is that it can be built inside your own array, without overhead. Take a look at the heapify section.
"Heapifying" your array makes it possible to get the biggest/smallest element in O(1).
If you build a binary search tree from your numbers and search against it, O(log n) would be the complexity in the worst case. In your case you won't search for equality; instead, you'll look for the smallest difference obtained through subtraction.
I would check the difference between the numbers while iterating through the array and save the minimum value of that difference.
If you plan to use findNearest multiple times, I would sort the array (with a sorting algorithm of complexity O(n*log(n))) after each change of values in that array, and then use binary search.
The time complexity of this job is O(n), where n is the length of numbers.
final long[] numbers = {90L, 10L, 30L, 50L, 70L};
long tofind = 12L;
long delta = Long.MAX_VALUE;
int index = -1;
int i = 0;
while (i < numbers.length) {
    long tmp = Math.abs(tofind - numbers[i]);
    if (tmp < delta) {
        delta = tmp;
        index = i;
    }
    i++;
}
System.out.println(numbers[index]); // if index is not -1
But if you want to search many times with different values (such as 12L) against the same numbers array, you may sort the array first and then binary search against the sorted array.
If your search is a one-off, you can partition the array like in quicksort, using the input value as pivot.
If you keep track - while partitioning - of the min item in the right half, and the max item in the left half, you should have it in O(n) and 1 single pass over the array.
I'd say it's not possible to do it in less than O(n) since it's not sorted and you have to scan the input at the very least.
If you need to do many subsequent search, then a BST could help indeed.
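A sketch of that single pass (it only tracks the maximum of the left partition and the minimum of the right partition, rather than physically reordering the array, which is all the query needs):

static long findNearestOnePass(long[] numbers, long target) {
    long below = Long.MIN_VALUE;  // max of items <= target (the "left half")
    long above = Long.MAX_VALUE;  // min of items > target (the "right half")
    for (long x : numbers) {
        if (x <= target)
            below = Math.max(below, x);
        else
            above = Math.min(above, x);
    }
    if (below == Long.MIN_VALUE) return above;  // everything was above the target
    if (above == Long.MAX_VALUE) return below;  // everything was at or below it
    return (target - below <= above - target) ? below : above;
}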
You could do it in the steps below:
Step 1: Sort the array
Step 2: Find the index of the search element
Step 3: Based on the index, display the numbers on the right and left side
Let me know in case of any queries...