leetcode.com/problems/two-sum/
from typing import List

class Solution:
    def twoSum(self, nums: List[int], target: int) -> List[int]:
        d = {}                        # value -> index of values seen so far
        for i, n in enumerate(nums):
            m = target - n            # complement needed to reach the target
            if m in d:
                return [d[m], i]
            else:
                d[n] = i
class Solution:
    def twoSum(self, nums, target):
        """
        :type nums: List[int]
        :type target: int
        :rtype: List[int]
        """
        h = {}
        for i, num in enumerate(nums):
            n = target - num
            if n not in h:
                h[num] = i
            else:
                return [h[n], i]
leetcode.com/problems/two-sum-ii-input-array-is-sorted/
from typing import List

class Solution:
    def twoSum(self, numbers: List[int], target: int) -> List[int]:
        # Two pointers over the sorted input, moving toward the target sum.
        low, high = 0, len(numbers) - 1
        while low < high:
            num = numbers[low] + numbers[high]
            if num == target:
                return [low + 1, high + 1]   # the answer is 1-indexed
            elif num < target:
                low += 1
            else:
                high -= 1
        return [-1, -1]
Three approaches are presented here; as expected, the two-pointer one seems to be the best.
This problem is a follow-up of Two Sum, and it is a good idea to first take a look at Two Sum and Two Sum II. An interviewer may ask to solve Two Sum first, and then throw 3Sum at you. Pay attention to subtle differences in problem description and try to re-use existing solutions!
Two Sum, Two Sum II and 3Sum share a similarity that the sum of elements must match the target exactly. A difference is that, instead of exactly one answer, we need to find all unique triplets that sum to zero.
Before jumping in, let's check the existing solutions and determine the best conceivable runtime (BCR) for 3Sum:
Considering that there is one more dimension in 3Sum, it sounds reasonable to shoot for \mathcal{O}(n^2) time complexity as our BCR.
We will follow the same two-pointers pattern as in Two Sum II. It requires the array to be sorted, so we'll do that first. As our BCR is \mathcal{O}(n^2), sorting the array would not change the overall time complexity.
To make sure the result contains unique triplets, we need to skip duplicate values. It is easy to do because repeating values are next to each other in a sorted array.
If you are wondering how to solve this problem without sorting the array, go over the "No-Sort" approach below. There are cases when that approach is preferable, and your interviewer may probe your knowledge there.
After sorting the array, we move our pivot element nums[i] and analyze the elements to its right. We find all pairs whose sum equals -nums[i] using the two-pointers pattern, so that the sum of the pivot element (nums[i]) and the pair (-nums[i]) equals zero.
As a quick refresher, the pointers are initially set to the first and the last element respectively. We compare the sum of these two elements to the target. If it is smaller, we increment the lower pointer lo. Otherwise, we decrement the higher pointer hi. Thus, the sum always moves toward the target, and we "prune" pairs that would move it further away. Again, this works only if the array is sorted. Head to the Two Sum II solution for the detailed explanation.
Algorithm
The implementation is straightforward: we just need to modify twoSumII to produce triplets and skip repeating values.
For the main function: sort the input array nums; iterate over pivot indices i, breaking out of the loop once nums[i] > 0 (the remaining values are positive and cannot sum to zero) and skipping any pivot equal to the previous value; call twoSumII for every other pivot.
For the twoSumII function: set lo to i + 1 and hi to the last index, and run the two-pointer scan described above, appending each zero-sum triplet and skipping duplicate values of nums[lo].
Return the result res.
from typing import List

class Solution:
    def threeSum(self, nums: List[int]) -> List[List[int]]:
        res = []
        nums.sort()
        for i in range(len(nums)):
            if nums[i] > 0:
                break                        # remaining values are all positive, no zero sum possible
            if i == 0 or nums[i - 1] != nums[i]:
                self.twoSumII(nums, i, res)  # skip duplicate pivots
        return res

    def twoSumII(self, nums: List[int], i: int, res: List[List[int]]):
        lo, hi = i + 1, len(nums) - 1
        while lo < hi:
            total = nums[i] + nums[lo] + nums[hi]
            if total < 0:
                lo += 1
            elif total > 0:
                hi -= 1
            else:
                res.append([nums[i], nums[lo], nums[hi]])
                lo += 1
                hi -= 1
                while lo < hi and nums[lo] == nums[lo - 1]:
                    lo += 1                  # skip duplicate values for the second element
Complexity Analysis
Time Complexity: \mathcal{O}(n^2). twoSumII is \mathcal{O}(n), and we call it n times.
Sorting the array takes \mathcal{O}(n\log{n}), so the overall complexity is \mathcal{O}(n\log{n} + n^2). This is asymptotically equivalent to \mathcal{O}(n^2).
Space Complexity: from \mathcal{O}(\log{n}) to \mathcal{O}(n), depending on the implementation of the sorting algorithm. For the purpose of complexity analysis, we ignore the memory required for the output.
Since triplets must sum up to the target value, we can try the hash table approach from the Two Sum solution. This approach won't work, however, if the sum is not necessarily equal to the target, like in 3Sum Smaller and 3Sum Closest.
We move our pivot element nums[i] and analyze the elements to its right. We find all pairs whose sum equals -nums[i] using the Two Sum: One-pass Hash Table approach, so that the sum of the pivot element (nums[i]) and the pair (-nums[i]) equals zero.
To do that, we process each element nums[j] to the right of the pivot, and check whether a complement -nums[i] - nums[j] is already in the hashset. If it is, we found a triplet. Then, we add nums[j] to the hashset, so it can be used as a complement from that point on.
Like in the approach above, we will also sort the array so we can skip repeated values. We provide a different way to avoid duplicates in the "No-Sort" approach below.
Algorithm
The main function is the same as in the Two Pointers approach above. Here, we use twoSum (instead of twoSumII), modified to produce triplets and skip repeating values.
For the main function: sort the array, skip positive and duplicate pivots, and call twoSum for each remaining pivot index i.
For the twoSum function: walk each index j to the right of the pivot, check whether the complement -nums[i] - nums[j] is already in the hashset seen, and if so append the triplet and skip duplicate values of nums[j]; then add nums[j] to seen.
Return the result res.
class Solution:
    def threeSum(self, nums: List[int]) -> List[List[int]]:
        res = []
        nums.sort()
        for i in range(len(nums)):
            if nums[i] > 0:
                break
            if i == 0 or nums[i - 1] != nums[i]:
                self.twoSum(nums, i, res)
        return res

    def twoSum(self, nums: List[int], i: int, res: List[List[int]]):
        seen = set()
        j = i + 1
        while j < len(nums):
            complement = -nums[i] - nums[j]
            if complement in seen:
                res.append([nums[i], nums[j], complement])
                while j + 1 < len(nums) and nums[j] == nums[j + 1]:
                    j += 1               # skip duplicate values of nums[j]
            seen.add(nums[j])
            j += 1
Complexity Analysis
Time Complexity: \mathcal{O}(n^2). twoSum is \mathcal{O}(n), and we call it n times.
Sorting the array takes \mathcal{O}(n\log{n}), so the overall complexity is \mathcal{O}(n\log{n} + n^2). This is asymptotically equivalent to \mathcal{O}(n^2).
Space Complexity: \mathcal{O}(n) for the hashset.
What if you cannot modify the input array, and you want to avoid copying it due to memory constraints?
We can adapt the hashset approach above to work for an unsorted array. We can put each combination of three values into a hashset to avoid duplicates. Values in a combination should be ordered (e.g. ascending); otherwise, we can get results with the same values in different positions.
Algorithm
The algorithm is similar to the hashset approach above. We just need to add a few optimizations so that it works efficiently for repeated values: a dups set ensures each pivot value is processed only once, and each entry in seen is tagged with the index of the current pivot so that complements recorded under earlier pivots are ignored.
class Solution:
    def threeSum(self, nums: List[int]) -> List[List[int]]:
        res, dups = set(), set()
        seen = {}                            # value -> index of the pivot it was seen under
        for i, val1 in enumerate(nums):
            if val1 not in dups:             # process each pivot value only once
                dups.add(val1)
                for j, val2 in enumerate(nums[i + 1:]):
                    complement = -val1 - val2
                    if complement in seen and seen[complement] == i:
                        res.add(tuple(sorted((val1, val2, complement))))
                    seen[val2] = i           # val2 is now available for the current pivot
        return [list(t) for t in res]        # deduplicated triplets as lists
Complexity Analysis
Time Complexity: \mathcal{O}(n^2). We have outer and inner loops, each going through n elements.
While the asymptotic complexity is the same, this algorithm is noticeably slower than the previous approach. Lookups in a hashset, though they take constant time, are expensive compared to direct memory access.
Space Complexity: \mathcal{O}(n) for the hashset/hashmap.
For the purpose of complexity analysis, we ignore the memory required for the output. However, in this approach we also store the output in the hashset for deduplication. In the worst case, there could be \mathcal{O}(n^2) triplets in the output, like for this example: [-k, -k + 1, ..., -1, 0, 1, ..., k - 1, k]. Adding a new number to this sequence will produce n / 3 new triplets.
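As a rough sanity check on that worst-case claim (not part of the original analysis; the helper name count_zero_triplets is made up for this sketch), a brute-force count over the sequence [-k, ..., k] shows the number of zero-sum triplets growing on the order of n^2:

from itertools import combinations

def count_zero_triplets(nums):
    # Brute-force count of unique zero-sum triplets over distinct values.
    return sum(1 for t in combinations(nums, 3) if sum(t) == 0)

for k in (5, 10, 20):
    seq = list(range(-k, k + 1))           # [-k, ..., -1, 0, 1, ..., k]
    print(len(seq), count_zero_triplets(seq))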
Further Thoughts
This is a well-known problem with many variations and its own Wikipedia page.
For an interview, we recommend focusing on the Two Pointers approach above. It's easier to get it right and adapt for other variations of 3Sum. Interviewers love asking follow-up problems like 3Sum Smaller and 3Sum Closest!
leetcode.com/problems/4sum-ii/
454. 4Sum II
Medium
Given four lists A, B, C, D of integer values, compute how many tuples (i, j, k, l) there are such that A[i] + B[j] + C[k] + D[l] is zero.
To make the problem a bit easier, all of A, B, C, D have the same length N where 0 ≤ N ≤ 500. All integers are in the range of -2^28 to 2^28 - 1, and the result is guaranteed to be at most 2^31 - 1.
Example:
Input:
A = [ 1, 2]
B = [-2,-1]
C = [-1, 2]
D = [ 0, 2]
Output: 2
Explanation:
The two tuples are:
1. (0, 0, 0, 1) -> A[0] + B[0] + C[0] + D[1] = 1 + (-2) + (-1) + 2 = 0
2. (1, 1, 0, 0) -> A[1] + B[1] + C[0] + D[0] = 2 + (-1) + (-1) + 0 = 0
This problem is a variation of 4Sum, and we recommend checking that problem first. The main difference is that here we pick each element from a different array, while in 4Sum all elements come from the same array. For that reason, we cannot use the Two Pointers approach, where elements must be in the same sorted array.
On the bright side, we do not need to worry about using the same element twice - we pick one element at a time from each array. As you will see later, this helps reduce the time complexity.
Finally, we do not need to return actual values and ensure they are unique; we just count each combination of four elements that sums to zero.
Approach 1: Hashmap
A brute-force solution would be to enumerate all combinations of elements using four nested loops, which results in \mathcal{O}(n^4) time complexity. A faster approach is to use three nested loops, and, for each sum a + b + c, search for a complementary value d == -(a + b + c) in the fourth array. We can do the search in \mathcal{O}(1) if we populate the fourth array into a hashmap.
Note that we need to track the frequency of each element in the fourth array. If an element is repeated multiple times, it will form multiple quadruples. Therefore, we will use hashmap values to store counts.
Building further on this idea, we can observe that a + b == -(c + d). First, we will count sums of elements a + b from the first two arrays using a hashmap. Then, we will enumerate elements from the third and fourth arrays, and search for a complementary sum a + b == -(c + d) in the hashmap.
Algorithm
For each a in A and each b in B, count the sum a + b in a hashmap m.
For each c in C and each d in D, look up the complement -(c + d) in m and add its count to cnt.
Return the count cnt, as in the sketch below.
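A minimal Python sketch of this counting approach, assuming the four-list fourSumCount(A, B, C, D) signature used by this version of the problem:

from collections import defaultdict
from typing import List

class Solution:
    def fourSumCount(self, A: List[int], B: List[int], C: List[int], D: List[int]) -> int:
        m = defaultdict(int)
        # Count every pairwise sum a + b from the first two arrays.
        for a in A:
            for b in B:
                m[a + b] += 1
        # For every pairwise sum c + d, add the count of its complement -(c + d).
        cnt = 0
        for c in C:
            for d in D:
                cnt += m.get(-(c + d), 0)
        return cnt

With A = [1, 2], B = [-2, -1], C = [-1, 2], D = [0, 2] from the example above, this returns 2.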
Complexity Analysis
Time Complexity: \mathcal{O}(n^2). We have 2 nested loops to count sums, and another 2 nested loops to find complements.
Space Complexity: \mathcal{O}(n^2) for the hashmap. There could be up to \mathcal{O}(n^2) distinct a + b keys.
Approach 2: kSum II
After you solve 4Sum II, an interviewer can follow up with 5Sum II, 6Sum II, and so on. What they are really expecting is a generalized solution for k input arrays. Fortunately, the hashmap approach can be easily extended to handle more than 4 arrays.
Above, we divided 4 arrays into two equal groups and processed each group independently. In the same way, we will divide k arrays into two groups. For the first group, we will have \frac{k}{2} nested loops to count sums. Another \frac{k}{2} nested loops will enumerate arrays in the second group and search for complements.
Algorithm
We can implement \frac{k}{2} nested loops using recursion, passing the index i of the current list as a parameter. The first group is processed by the addToHash recursive function, which accumulates the sum and, on reaching the end of the group, adds the final sum to a hashmap m.
The second function, countComplements, processes the other group, accumulating the complement value. In the end, it searches for the final complement value in the hashmap and adds its count to the result.
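A Python sketch of this generalization, using the addToHash / countComplements split described above; the kSumCount entry point and the exact signatures here are assumptions of this sketch rather than a reference implementation:

from collections import defaultdict
from typing import List

class Solution:
    def fourSumCount(self, A: List[int], B: List[int], C: List[int], D: List[int]) -> int:
        return self.kSumCount([A, B, C, D])

    def kSumCount(self, lists: List[List[int]]) -> int:
        m = defaultdict(int)
        # First half of the arrays: count every possible sum in the hashmap.
        self.addToHash(lists, m, 0, len(lists) // 2, 0)
        # Second half: look up the complement of every possible sum.
        return self.countComplements(lists, m, len(lists) // 2, 0)

    def addToHash(self, lists, m, i, end, total):
        if i == end:
            m[total] += 1                     # one full sum from the first group
        else:
            for value in lists[i]:
                self.addToHash(lists, m, i + 1, end, total + value)

    def countComplements(self, lists, m, i, complement):
        if i == len(lists):
            return m.get(complement, 0)       # complement == -(sum of the second group)
        cnt = 0
        for value in lists[i]:
            cnt += self.countComplements(lists, m, i + 1, complement - value)
        return cnt

Moving the split point below len(lists) // 2 hands more arrays to countComplements, trading a smaller hashmap for more lookup work, which is the time/memory trade-off discussed below.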
Complexity Analysis
Time Complexity: \mathcal{O}(n^{\frac{k}{2}}), or \mathcal{O}(n^2) for 4Sum II. We have \frac{k}{2} nested loops to count sums, and another \frac{k}{2} nested loops to find complements.
If the number of arrays is odd, the time complexity will be \mathcal{O}(n^{\frac{k+1}{2}}). We will pass \frac{k}{2} arrays to addToHash, and \frac{k+1}{2} arrays to countComplements, to keep the space complexity \mathcal{O}(n^{\frac{k}{2}}).
Space Complexity: \mathcal{O}(n^{\frac{k}{2}}) for the hashmap. The space needed for the recursion will not exceed \frac{k}{2}.
Further Thoughts
For an interview, keep in mind the generalized implementation. Even if your interviewer is OK with a simpler code, you'll get some extra points by describing how your solution can handle more than 4 arrays.
It's also important to discuss trade-offs with your interviewer. If we are tight on memory, we can move some arrays from the first group to the second. This, of course, will increase the time complexity.
In other words, the time complexity can range from \mathcal{O}(n^k) to \mathcal{O}(n^{\frac{k}{2}}), and the memory complexity ranges from \mathcal{O}(1) to \mathcal{O}(n^{\frac{k}{2}}) accordingly.