Problem Solving with Algorithms


leetcode.com/explore/learn/card/binary-search/

 

 

To avoid integer overflow when computing the midpoint (relevant in languages with fixed-width integers), compute it as left + (right - left) // 2 instead of (left + right) // 2.

from typing import List

class Solution:
    def search(self, nums: List[int], target: int) -> int:
        n = len(nums)
        l = 0
        r = n-1
        
        while l <= r:
            mid = l + (r-l)//2
            if nums[mid] == target:
                return mid
            elif nums[mid] < target:
                l = mid + 1
            elif target < nums[mid]:
                r = mid - 1
        
        return -1

ai.googleblog.com/2006/06/extra-extra-read-all-about-it-nearly.html


template 1

 

def binarySearch(nums, target):
    """
    :type nums: List[int]
    :type target: int
    :rtype: int
    """
    if len(nums) == 0:
        return -1

    left, right = 0, len(nums) - 1
    while left <= right:
        mid = (left + right) // 2
        if nums[mid] == target:
            return mid
        elif nums[mid] < target:
            left = mid + 1
        else:
            right = mid - 1

    # End Condition: left > right
    return -1

 


69. Sqrt(x)

leetcode.com/problems/sqrtx/

 

Solution


Integer Square Root

The value a we're supposed to compute can be defined by a^2 \le x < (a + 1)^2. It is called the integer square root. From a geometric point of view, it's the side of the largest integer-sided square whose area does not exceed x.




Approach 1: Pocket Calculator Algorithm

Before going to the serious stuff, let's first have some fun and implement the algorithm used by the pocket calculators.

Usually a pocket calculator computes exponential functions and natural logarithms well, by having logarithm tables hardcoded or by other means. Hence the idea is to reduce the square root computation to these two operations:

\sqrt{x} = e^{\frac{1}{2} \log x}

That's a bit of a cheat because of the non-elementary function usage, but it's how things actually work in real life.

Implementation

from math import e, log

class Solution:
    def mySqrt(self, x):
        if x < 2:
            return x

        # Floating-point rounding may make the estimate off by one,
        # so check the next candidate as well.
        left = int(e**(0.5 * log(x)))
        right = left + 1
        return left if right * right > x else right

 

 

Complexity Analysis

  • Time complexity : \mathcal{O}(1).

  • Space complexity : \mathcal{O}(1).


Approach 2: Binary Search

Intuition

Let's go back to the interview context. For x \ge 2 the integer square root is always at most x / 2 and larger than 0: 0 < a \le x / 2.
Since a is an integer, the problem reduces to iterating over a sorted set of integers. Here binary search enters the scene.

Algorithm

  • If x < 2, return x.

  • Set the left boundary to 2, and the right boundary to x / 2.

  • While left <= right:

    • Take pivot = left + (right - left) // 2 as a guess. Compute num = pivot * pivot and compare it with x:

      • If num > x, move the right boundary: right = pivot - 1

      • Else, if num < x, move the left boundary: left = pivot + 1

      • Otherwise num == x; the integer square root is found, so return pivot

  • Return right

Implementation
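The implementation for this approach isn't preserved in the text, so here is a sketch that follows the algorithm steps above (using pivot and num as named in those steps):

```python
class Solution:
    def mySqrt(self, x: int) -> int:
        if x < 2:
            return x

        left, right = 2, x // 2
        while left <= right:
            pivot = left + (right - left) // 2
            num = pivot * pivot
            if num > x:
                right = pivot - 1
            elif num < x:
                left = pivot + 1
            else:
                return pivot

        # Here left > right; right is the largest value whose square <= x.
        return right
```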

Complexity Analysis

  • Time complexity : \mathcal{O}(\log N).

    Let's compute the time complexity with the help of the master theorem T(N) = aT\left(\frac{N}{b}\right) + \Theta(N^d). The equation represents dividing the problem into a subproblems of size \frac{N}{b} in \Theta(N^d) time. Here at each step there is only one subproblem a = 1, its size is half of the initial problem b = 2, and all this happens in constant time d = 0. That means that \log_b{a} = d, and hence we're dealing with case 2, which results in \mathcal{O}(N^{\log_b{a}} \log^{d + 1} N) = \mathcal{O}(\log N) time complexity.

  • Space complexity : \mathcal{O}(1).


Approach 3: Recursion + Bit Shifts

Intuition

Let's use recursion. The base case is \sqrt{x} = x for x < 2. The idea is then to decrease x recursively at each step until we reach the base case.

How to go down?

For example, let's notice that \sqrt{x} = 2 \times \sqrt{\frac{x}{4}}, and hence square root could be computed recursively as

\textrm{mySqrt}(x) = 2 \times \textrm{mySqrt}\left(\frac{x}{4}\right)

One could already stop here, but let's use left and right shifts, which are fast bit manipulations

x << y \qquad \textrm{that means} \qquad x \times 2^y

x >> y \qquad \textrm{that means} \qquad \frac{x}{2^y}

That means one could rewrite the recursion above as

\textrm{mySqrt}(x) = \textrm{mySqrt}(x >> 2) << 1

in order to speed up the computation.

Implementation
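The implementation isn't preserved here; below is a sketch of the recursion just described, where x >> 2 divides by 4 and << 1 multiplies by 2. Because integer shifts lose the low bits, the doubled estimate may undershoot by one, so the next candidate is checked:

```python
class Solution:
    def mySqrt(self, x: int) -> int:
        # Base case: sqrt(x) = x for x < 2.
        if x < 2:
            return x

        # mySqrt(x) ≈ 2 * mySqrt(x / 4), computed with bit shifts.
        left = self.mySqrt(x >> 2) << 1
        # The integer square root is either left or left + 1.
        right = left + 1
        return left if right * right > x else right
```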

Complexity Analysis

  • Time complexity : \mathcal{O}(\log N).

    Let's compute the time complexity with the help of the master theorem T(N) = aT\left(\frac{N}{b}\right) + \Theta(N^d). The equation represents dividing the problem into a subproblems of size \frac{N}{b} in \Theta(N^d) time. Here at each step there is only one subproblem a = 1, its size is half of the initial problem b = 2, and all this happens in constant time d = 0. That means that \log_b{a} = d, and hence we're dealing with case 2, which results in \mathcal{O}(N^{\log_b{a}} \log^{d + 1} N) = \mathcal{O}(\log N) time complexity.

  • Space complexity : \mathcal{O}(\log N) to keep the recursion stack.


Approach 4: Newton's Method

Intuition

One of the best and most widely used methods to compute sqrt is Newton's method. Here we'll implement the version without seed trimming to keep things simple. Seed trimming involves a bit of math and a lot of fun, and is worth diving into on its own.

Let's keep the mathematical proofs out of the article and just use the textbook fact that the sequence

x_{k + 1} = \frac{1}{2}\left[x_k + \frac{x}{x_k}\right]

converges to \sqrt{x} if x_0 = x. Then things are straightforward: require the error to be less than 1 and proceed iteratively.

Implementation
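The implementation is missing from the text; here is a sketch of the iteration above, stopping once successive approximations differ by less than 1:

```python
class Solution:
    def mySqrt(self, x: int) -> int:
        if x < 2:
            return x

        x0 = x
        x1 = (x0 + x / x0) / 2
        # Iterate until successive approximations differ by less than 1.
        while abs(x0 - x1) >= 1:
            x0 = x1
            x1 = (x0 + x / x0) / 2

        return int(x1)
```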

Complexity Analysis

  • Time complexity : \mathcal{O}(\log N) since the sequence converges quadratically.

  • Space complexity : \mathcal{O}(1).


Compare Approaches 2, 3 and 4

Here we have three algorithms with \mathcal{O}(\log N) time performance, which is a bit confusing.

Which one performs the fewest iterations?

Running tests over a range of x shows that the best one is Newton's method.

 


33. Search in Rotated Sorted Array

leetcode.com/problems/search-in-rotated-sorted-array/

Solution


Approach 1: Binary search

The problem asks us to implement a search in \mathcal{O}(\log{N}) time, which suggests using binary search.

The algorithm is quite straightforward:

  • Find the rotation index rotation_index, i.e. the index of the smallest element in the array. Binary search works perfectly here.

  • rotation_index splits the array into two parts. Compare nums[0] and target to identify which part to search for target.

  • Perform a binary search in the chosen part of the array.


Note: I haven't read the second (one-pass) solution yet.

from typing import List

class Solution:
    def search(self, nums: List[int], target: int) -> int:
        n = len(nums)
        
        def find_rotate_index(left, right):
            if nums[left] < nums[right]:
                return 0
            
            while left <= right:
                pivot = left + (right - left) // 2
                if nums[pivot] > nums[pivot+1]:
                    return pivot + 1
                else:
                    if nums[pivot] < nums[left]:
                        right = pivot - 1
                    else:
                        left = pivot + 1
        
        def search(left, right):
            while left <= right:
                pivot = left + (right - left) // 2
                if nums[pivot] == target:
                    return pivot
                elif target < nums[pivot]:
                    right = pivot - 1
                else:
                    left = pivot + 1
            return -1
        
        if n == 1:
            return 0 if nums[0] == target else -1
        
        rotate_index = find_rotate_index(0, n-1)
        
        if nums[rotate_index] == target:
            return rotate_index
        if rotate_index == 0:
            return search(0, n-1)
        if target < nums[0]:
            return search(rotate_index, n-1)
        return search(0, rotate_index)

 

Complexity Analysis

  • Time complexity : \mathcal{O}(\log{N}).

  • Space complexity : \mathcal{O}(1).


Approach 2: One-pass Binary Search

Instead of going through the input array in two passes, we could achieve the goal in one pass with a revised binary search.

The idea is that we add some additional condition checks in the normal binary search in order to better narrow down the scope of the search.

Algorithm

As in the normal binary search, we keep two pointers (i.e. start and end) to track the search scope. At each iteration, we reduce the search scope by half, by moving either the start or the end pointer to the middle (i.e. mid) of the previous search scope.

Here are the detailed breakdowns of the algorithm:

  • Initiate the pointer start to 0, and the pointer end to n - 1.

  • Perform standard binary search. While start <= end:

    • Take an index in the middle mid as a pivot.

    • If nums[mid] == target, the job is done, return mid.

    • Now there could be two situations:

      • The pivot element is larger than the first element in the array, i.e. the subarray from the first element to the pivot is non-rotated.

        • If the target is located in the non-rotated subarray, go left: `end = mid - 1`.

        • Otherwise, go right: `start = mid + 1`.

      • The pivot element is smaller than the first element of the array, i.e. the rotation index is somewhere between 0 and mid. This implies that the subarray from the pivot element to the last one is non-rotated.

        • If the target is located in the non-rotated subarray, go right: `start = mid + 1`.

        • Otherwise, go left: `end = mid - 1`.
  • We're here because the target is not found. Return -1.

Implementation
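The one-pass implementation isn't included in the text; below is a sketch following the breakdown above. It tests nums[mid] >= nums[start] so that the "non-rotated left half" branch also covers the case mid == start:

```python
from typing import List

class Solution:
    def search(self, nums: List[int], target: int) -> int:
        start, end = 0, len(nums) - 1
        while start <= end:
            mid = (start + end) // 2
            if nums[mid] == target:
                return mid
            if nums[mid] >= nums[start]:
                # Subarray [start, mid] is non-rotated (sorted).
                if nums[start] <= target < nums[mid]:
                    end = mid - 1
                else:
                    start = mid + 1
            else:
                # Subarray [mid, end] is non-rotated (sorted).
                if nums[mid] < target <= nums[end]:
                    start = mid + 1
                else:
                    end = mid - 1
        return -1
```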

Complexity Analysis

  • Time complexity: \mathcal{O}(\log{N}).
  • Space complexity: \mathcal{O}(1).



template 2

def binarySearch(nums, target):
    """
    :type nums: List[int]
    :type target: int
    :rtype: int
    """
    if len(nums) == 0:
        return -1

    left, right = 0, len(nums)
    while left < right:
        mid = (left + right) // 2
        if nums[mid] == target:
            return mid
        elif nums[mid] < target:
            left = mid + 1
        else:
            right = mid

    # Post-processing:
    # End Condition: left == right
    if left != len(nums) and nums[left] == target:
        return left
    return -1

 

 

 


template 3

 

def binarySearch(nums, target):
    """
    :type nums: List[int]
    :type target: int
    :rtype: int
    """
    if len(nums) == 0:
        return -1

    left, right = 0, len(nums) - 1
    while left + 1 < right:
        mid = (left + right) // 2
        if nums[mid] == target:
            return mid
        elif nums[mid] < target:
            left = mid
        else:
            right = mid

    # Post-processing:
    # End Condition: left + 1 == right
    if nums[left] == target: return left
    if nums[right] == target: return right
    return -1

 

 


  Binary Search Template Analysis


 

Template Explanation:


99% of binary search problems that you see online will fall into 1 of these 3 templates. Some problems can be implemented using multiple templates, but as you practice more, you will notice that some templates are more suited for certain problems than others.

Note: The templates and their differences have been color-coded below.

These 3 templates differ by their:

  • left, mid, right index assignments
  • loop or recursive termination condition
  • necessity of post-processing

Templates 1 and 3 are the most commonly used, and almost all binary search problems can be easily implemented in one of them. Template 2 is a bit more advanced and is used for certain types of problems.

Each of the 3 provided templates covers a specific use case:

 

Template #1 (left <= right):


  • Most basic and elementary form of Binary Search
  • Search Condition can be determined without comparing to the element's neighbors (or use specific elements around it)
  • No post-processing required because at each step, you are checking to see if the element has been found. If you reach the end, then you know the element is not found

 

Template #2 (left < right):


  • An advanced way to implement Binary Search.
  • Search Condition needs to access element's immediate right neighbor
  • Use element's right neighbor to determine if condition is met and decide whether to go left or right
  • Guarantees Search Space is at least 2 in size at each step
  • Post-processing required. Loop/Recursion ends when you have 1 element left. Need to assess if the remaining element meets the condition.

 

Template #3 (left + 1 < right):


  • An alternative way to implement Binary Search
  • Search Condition needs to access element's immediate left and right neighbors
  • Use element's neighbors to determine if condition is met and decide whether to go left or right
  • Guarantees Search Space is at least 3 in size at each step
  • Post-processing required. Loop/Recursion ends when you have 2 elements left. Need to assess if the remaining elements meet the condition.

 

Time and Space Complexity:


Runtime: O(log n) -- Logarithmic Time

Because binary search operates by applying a condition to the value in the middle of our search space and thus cutting the search space in half, in the worst case we will have to make O(log n) comparisons, where n is the number of elements in our collection.

Why log n?

  • Binary search is performed by dividing the existing array in half.
  • So every time you call the subroutine (or complete one iteration), the size is reduced to half of the existing part.
  • First N becomes N/2, then N/4, and so on, until either the element is found or the size becomes 1.
  • The maximum number of iterations is log N (base 2).
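To make the log N bound concrete, here is a small counting experiment (the count_iterations helper is illustrative, not from the original text): with an absent target, a search over n elements never takes more than floor(log2 n) + 1 iterations.

```python
import math

def count_iterations(n):
    """Worst-case binary-search iterations over n sorted elements
    (target absent and larger than every element)."""
    left, right, steps = 0, n - 1, 0
    while left <= right:
        steps += 1
        mid = (left + right) // 2
        left = mid + 1  # always move right: the worst case here
    return steps

for n in (1, 2, 1024, 10**6):
    assert count_iterations(n) <= math.floor(math.log2(n)) + 1
```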

 

Space: O(1) -- Constant Space


Although binary search does require keeping track of 3 indices, the iterative solution does not typically require any other additional space and can be applied directly to the collection itself, and therefore warrants O(1), or constant, space.

 

Other Types of Binary Search:


Below, we have provided another type of Binary Search for practice.

Binary Search With 2 Arrays -- In this problem, we need to compare values from 2 arrays to determine our search space: LC #4: Median of Two Sorted Arrays

 


Binary Search is an immensely useful technique used to tackle different algorithmic problems. Practice identifying Binary Search Problems and applying different templates to different search conditions. Improve your approach to tackling problems, notice the patterns and repeat!


This chapter concludes our Binary Search learnings and summarizes key concepts. Below you can find some problems to help you practice Binary Search!
