leetcode.com/explore/learn/card/binary-search/
from typing import List

class Solution:
    def search(self, nums: List[int], target: int) -> int:
        n = len(nums)
        l = 0
        r = n - 1
        while l <= r:
            mid = l + (r - l) // 2  # avoids overflow in fixed-width-integer languages
            if nums[mid] == target:
                return mid
            elif nums[mid] < target:
                l = mid + 1
            else:  # target < nums[mid]
                r = mid - 1
        return -1
ai.googleblog.com/2006/06/extra-extra-read-all-about-it-nearly.html
def binarySearch(nums, target):
    """
    :type nums: List[int]
    :type target: int
    :rtype: int
    """
    if len(nums) == 0:
        return -1

    left, right = 0, len(nums) - 1
    while left <= right:
        mid = (left + right) // 2
        if nums[mid] == target:
            return mid
        elif nums[mid] < target:
            left = mid + 1
        else:
            right = mid - 1

    # End Condition: left > right
    return -1
Integer Square Root
The value a we're supposed to compute is defined by a^2 \le x < (a + 1)^2. It is called the integer square root. From a geometrical point of view, it's the side of the largest integer-sided square with an area not exceeding x.
Before going to the serious stuff, let's first have some fun and implement the algorithm used by the pocket calculators.
Usually a pocket calculator computes exponentials and natural logarithms well, by having logarithm tables hardcoded or by other means. Hence the idea is to reduce the square root computation to these two functions:
\sqrt{x} = e^{\frac{1}{2} \log x}
That's sort of a cheat, because it uses non-elementary functions, but it's how things actually work in real life.
Implementation
from math import e, log

class Solution:
    def mySqrt(self, x):
        if x < 2:
            return x
        # Floating-point rounding may put the estimate one off in either
        # direction, so check the two nearest integer candidates.
        left = int(e**(0.5 * log(x)))
        right = left + 1
        return left if right * right > x else right
Complexity Analysis
Time complexity : \mathcal{O}(1).
Space complexity : \mathcal{O}(1).
Intuition
Let's go back to the interview context. For x \ge 2 the integer square root a is always at most x / 2 and larger than 0: 0 < a \le x / 2.
Since a is an integer, the problem reduces to iterating over a sorted range of integers. Here binary search enters the scene.
Algorithm
If x < 2, return x.
Set the left boundary to 2, and the right boundary to x / 2.
While left <= right:
Take pivot = left + (right - left) / 2 as a guess. Compute num = pivot * pivot and compare it with x:
If num > x, move the right boundary: right = pivot - 1.
Else, if num < x, move the left boundary: left = pivot + 1.
Otherwise, num == x: the integer square root is found, so return pivot.
Return right.
Implementation
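The implementation wasn't captured in this copy; here is a minimal sketch following the binary-search steps above:

```python
from typing import List


class Solution:
    def mySqrt(self, x: int) -> int:
        if x < 2:
            return x
        # Search in [2, x // 2]; for x >= 2 the integer root lies in this range.
        left, right = 2, x // 2
        while left <= right:
            pivot = left + (right - left) // 2
            num = pivot * pivot
            if num > x:
                right = pivot - 1
            elif num < x:
                left = pivot + 1
            else:
                return pivot
        # Loop ends with right just below the first pivot whose square exceeds x.
        return right
```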
Complexity Analysis
Time complexity : \mathcal{O}(\log N).
Let's compute the time complexity with the help of the master theorem T(N) = aT\left(\frac{N}{b}\right) + \Theta(N^d). The equation represents dividing the problem into a subproblems of size \frac{N}{b} in \Theta(N^d) time. Here at each step there is only one subproblem (a = 1), its size is half of the initial problem (b = 2), and all this happens in constant time (d = 0). That means \log_b{a} = d, and hence we're dealing with case 2, which results in \mathcal{O}(N^{\log_b{a}} \log^{d + 1} N) = \mathcal{O}(\log N) time complexity.
Space complexity : \mathcal{O}(1).
Intuition
Let's use recursion. The base case is \sqrt{x} = x for x < 2. The idea is to decrease x recursively at each step until we reach the base case.
How to go down?
For example, let's notice that \sqrt{x} = 2 \times \sqrt{\frac{x}{4}}, and hence the square root could be computed recursively as
\textrm{mySqrt}(x) = 2 \times \textrm{mySqrt}\left(\frac{x}{4}\right)
One could already stop here, but let's use left and right shifts, which are quite fast bit manipulations:
x << y means x \times 2^y
x >> y means \frac{x}{2^y}
That means one could rewrite the recursion above as
\textrm{mySqrt}(x) = \textrm{mySqrt}(x >> 2) << 1
in order to speed up the computation.
Implementation
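The implementation wasn't captured here either; a minimal sketch of the shift-based recursion above:

```python
class Solution:
    def mySqrt(self, x: int) -> int:
        # Base case: sqrt(x) = x for x < 2.
        if x < 2:
            return x
        # Recurse on x >> 2 (i.e. x / 4), then scale the result back by << 1.
        left = self.mySqrt(x >> 2) << 1
        # The true root is either left or left + 1; pick the one that fits.
        right = left + 1
        return left if right * right > x else right
```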
Complexity Analysis
Time complexity : \mathcal{O}(\log N), by the same master-theorem argument as in the previous approach.
Space complexity : \mathcal{O}(\log N) to keep the recursion stack.
Intuition
One of the best and most widely used methods to compute sqrt is Newton's method. Here we'll implement the version without seed trimming to keep things simple. However, seed trimming is a bit of math and a lot of fun, so here is a link if you'd like to dive in.
Let's keep the mathematical proofs outside of the article and just use the textbook fact that the sequence
x_{k + 1} = \frac{1}{2}\left[x_k + \frac{x}{x_k}\right]
converges to \sqrt{x} if x_0 = x. Then things are straightforward: require that the error be less than 1 and proceed iteratively.
Implementation
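This implementation is also missing from this copy; a minimal sketch of the Newton iteration above, stopping once consecutive estimates differ by less than 1:

```python
class Solution:
    def mySqrt(self, x: int) -> int:
        if x < 2:
            return x
        # Seed with x0 = x and iterate x_{k+1} = (x_k + x / x_k) / 2.
        x0 = x
        x1 = (x0 + x / x0) / 2
        while abs(x0 - x1) >= 1:
            x0 = x1
            x1 = (x0 + x / x0) / 2
        # The estimate is now within 1 of the root; truncate to an integer.
        return int(x1)
```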
Complexity Analysis
Time complexity : \mathcal{O}(\log N) since the sequence converges quadratically.
Space complexity : \mathcal{O}(1).
Compare Approaches 2, 3 and 4
Here we have three algorithms with \mathcal{O}(\log N) time performance, and it's a bit confusing which to prefer.
Which one performs the fewest iterations?
Let's run tests for a range of x in order to check that. The best one turns out to be Newton's method.
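The results chart isn't reproduced in this copy, but the comparison is easy to redo. A rough sketch that counts loop iterations for the binary-search and Newton approaches (helper names are mine, not from the article):

```python
def binary_iterations(x):
    """Count loop iterations of the binary-search isqrt (Approach 2)."""
    if x < 2:
        return 0
    left, right, iters = 2, x // 2, 0
    while left <= right:
        iters += 1
        pivot = left + (right - left) // 2
        num = pivot * pivot
        if num > x:
            right = pivot - 1
        elif num < x:
            left = pivot + 1
        else:
            break
    return iters


def newton_iterations(x):
    """Count loop iterations of Newton's method (Approach 4)."""
    if x < 2:
        return 0
    x0, iters = x, 0
    x1 = (x0 + x / x0) / 2
    while abs(x0 - x1) >= 1:
        iters += 1
        x0, x1 = x1, (x1 + x / x1) / 2
    return iters


# Compare iteration counts on non-perfect squares (no early exit).
for x in (10**4 + 7, 10**8 + 7):
    print(x, binary_iterations(x), newton_iterations(x))
```

Both counts grow logarithmically, but Newton's quadratic convergence needs noticeably fewer iterations for large x.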
leetcode.com/problems/search-in-rotated-sorted-array/
The problem asks us to implement a search in \mathcal{O}(\log{N}) time, which suggests using a binary search.
The algorithm is quite straightforward :
Find the rotation index rotate_index, i.e. the index of the smallest element in the array. Binary search works perfectly here.
rotate_index splits the array into two sorted parts. Compare nums[0] with target to identify in which part one has to look for target.
Perform a binary search in the chosen part of the array.
(Note: I haven't read the second solution, the one-pass approach, yet.)
from typing import List

class Solution:
    def search(self, nums: List[int], target: int) -> int:
        n = len(nums)

        def find_rotate_index(left, right):
            # Index of the smallest element; 0 if the array is not rotated.
            if nums[left] < nums[right]:
                return 0
            while left <= right:
                pivot = left + (right - left) // 2
                if nums[pivot] > nums[pivot + 1]:
                    return pivot + 1
                if nums[pivot] < nums[left]:
                    right = pivot - 1
                else:
                    left = pivot + 1

        def search(left, right):
            # Standard binary search on nums[left..right].
            while left <= right:
                pivot = left + (right - left) // 2
                if nums[pivot] == target:
                    return pivot
                elif target < nums[pivot]:
                    right = pivot - 1
                else:
                    left = pivot + 1
            return -1

        if n == 1:
            return 0 if nums[0] == target else -1

        rotate_index = find_rotate_index(0, n - 1)
        if nums[rotate_index] == target:
            return rotate_index
        if rotate_index == 0:
            return search(0, n - 1)
        if target < nums[0]:
            # target lies in the part with the smaller values.
            return search(rotate_index, n - 1)
        return search(0, rotate_index)
Complexity Analysis
Time complexity : \mathcal{O}(\log{N}).
Space complexity : \mathcal{O}(1).
Instead of going through the input array in two passes, we could achieve the goal in one pass with a revised binary search.
The idea is that we add some additional condition checks in the normal binary search in order to better narrow down the scope of the search.
Algorithm
As in the normal binary search, we keep two pointers (i.e. start and end) to track the search scope. At each iteration, we reduce the search scope by half, by moving either the start or the end pointer to the middle (i.e. mid) of the previous search scope.
Here are the detailed breakdowns of the algorithm:
Initiate the pointer start to 0, and the pointer end to n - 1.
Perform standard binary search. While start <= end:
Take an index in the middle mid as a pivot.
If nums[mid] == target, the job is done, return mid.
Now there could be two situations:
If nums[start] <= nums[mid], the left half from start to mid is sorted. If nums[start] <= target < nums[mid], the target lies in this sorted half, so set end = mid - 1; otherwise set start = mid + 1.
Otherwise, the right half from mid to end is sorted. If nums[mid] < target <= nums[end], set start = mid + 1; otherwise set end = mid - 1.
If we exit the loop, the target is not in the array. Return -1.
Implementation
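The one-pass implementation isn't included in this copy; a sketch of the revised binary search described above, which determines the sorted half at each step:

```python
from typing import List


class Solution:
    def search(self, nums: List[int], target: int) -> int:
        start, end = 0, len(nums) - 1
        while start <= end:
            mid = start + (end - start) // 2
            if nums[mid] == target:
                return mid
            if nums[start] <= nums[mid]:
                # Left half [start, mid] is sorted.
                if nums[start] <= target < nums[mid]:
                    end = mid - 1
                else:
                    start = mid + 1
            else:
                # Right half [mid, end] is sorted.
                if nums[mid] < target <= nums[end]:
                    start = mid + 1
                else:
                    end = mid - 1
        return -1
```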
Complexity Analysis
Time complexity : \mathcal{O}(\log{N}).
Space complexity : \mathcal{O}(1).
def binarySearch(nums, target):
    """
    :type nums: List[int]
    :type target: int
    :rtype: int
    """
    if len(nums) == 0:
        return -1

    left, right = 0, len(nums)
    while left < right:
        mid = (left + right) // 2
        if nums[mid] == target:
            return mid
        elif nums[mid] < target:
            left = mid + 1
        else:
            right = mid

    # Post-processing:
    # End Condition: left == right
    if left != len(nums) and nums[left] == target:
        return left
    return -1
def binarySearch(nums, target):
    """
    :type nums: List[int]
    :type target: int
    :rtype: int
    """
    if len(nums) == 0:
        return -1

    left, right = 0, len(nums) - 1
    while left + 1 < right:
        mid = (left + right) // 2
        if nums[mid] == target:
            return mid
        elif nums[mid] < target:
            left = mid
        else:
            right = mid

    # Post-processing:
    # End Condition: left + 1 == right
    if nums[left] == target:
        return left
    if nums[right] == target:
        return right
    return -1
Binary Search Template Analysis
Template Explanation:
99% of binary search problems that you see online will fall into 1 of these 3 templates. Some problems can be implemented using multiple templates, but as you practice more, you will notice that some templates are more suited for certain problems than others.
These 3 templates differ by their initial conditions, termination conditions, and how they update the search boundaries.
Template 1 and 3 are the most commonly used, and almost all binary search problems can be easily implemented in one of them. Template 2 is a bit more advanced and used for certain types of problems.
Each of the 3 provided templates covers a specific use case:
Template #1 (left <= right): the most basic form; used to search for an element or condition that can be determined by accessing a single index in the array.
Template #2 (left < right): used when the condition needs to compare the current index with its immediate right neighbor.
Template #3 (left + 1 < right): used when the condition needs to compare the current index with both its immediate left and right neighbors.
Time and Space Complexity:
Runtime: O(log n) -- Logarithmic Time
Because Binary Search operates by applying a condition to the value in the middle of our search space and thus cutting the search space in half, in the worst case we will have to make O(log n) comparisons, where n is the number of elements in our collection.
Why log n?
- Binary search is performed by dividing the existing array in half.
- So every time you call the subroutine (or complete one iteration), the size is reduced to half of the remaining part.
- First N becomes N/2, then N/4, and so on, until either the element is found or the size becomes 1.
- The maximum number of iterations is log N (base 2).
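To make the halving concrete, here is a small sketch that counts worst-case loop iterations (the helper name is mine) and compares them against floor(log2 N) + 1:

```python
import math


def binary_search_iterations(n):
    """Worst-case loop iterations of binary search on n elements.

    Simulates a search for a target larger than every element, so each
    iteration discards the middle element and the entire left half.
    """
    left, right, iters = 0, n - 1, 0
    while left <= right:
        iters += 1
        mid = (left + right) // 2
        left = mid + 1  # always keep only the right half
    return iters


for n in (1, 2, 1024, 10**6):
    print(n, binary_search_iterations(n), math.floor(math.log2(n)) + 1)
```

For each n the two columns agree: the loop runs exactly floor(log2 n) + 1 times in the worst case.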
Space: O(1) -- Constant Space
Although Binary Search does require keeping track of 3 indices, the iterative solution does not typically require any other additional space and can be applied directly to the collection itself, and therefore warrants O(1) or constant space.
Other Types of Binary Search:
Below, we have provided another type of Binary Search for practice.
Binary Search With 2 Arrays -- In this problem, we need to compare values from 2 arrays to determine our search space: LC #4: Median of Two Sorted Arrays
Binary Search is an immensely useful technique used to tackle different algorithmic problems. Practice identifying Binary Search Problems and applying different templates to different search conditions. Improve your approach to tackling problems, notice the patterns and repeat!
This chapter concludes our Binary Search learnings and summarizes key concepts. Below you can find some problems to help you practice Binary Search!