28. Count Subset With Target Sum II
GFG solution to Count Subset With Target Sum II: find the number of subsets whose sum equals k, using the meet-in-the-middle technique for optimal performance.
The problem can be found at the following link: Question Link
Problem Description
You are given an array arr[] and an integer k. Your task is to find the count of subsets whose sum equals k.
A subset is any combination of elements from the array (including the empty subset). The goal is to efficiently count all possible subsets that sum to the target value k.
Note: It is guaranteed that the number of valid subsets will fit within a 32-bit integer.
Examples
Example 1
Input: arr[] = [1, 3, 2], k = 3
Output: 2
Explanation: The two subsets whose sum equals k are [1, 2] and [3].
Example 2
Input: arr[] = [4, 2, 3, 1, 2], k = 4
Output: 3
Explanation: The three subsets whose sum equals k are [4], [2, 2] and [3, 1].
Constraints
$1 \le \text{arr.size()} \le 40$
$-10^7 \le \text{arr}[i], k \le 10^7$
My Approach
The optimal solution uses the Meet-in-the-Middle technique, which is well suited to subset problems with constraints up to n ≤ 40:
Meet-in-the-Middle with Hash Map
Split Array:
Divide the array into two halves of approximately equal size.
This reduces the enumeration from 2^n subsets overall to roughly 2^(n/2) subsets per half.
Generate Subset Sums:
For the left half, recursively generate all possible subset sums and store their frequencies in a hash map.
For the right half, do the same and store in another hash map.
Combine Results:
For each sum s in the left half, check whether k - s exists in the right half.
If it exists, multiply their frequencies to get the count of valid pairs.
Accumulate all such counts to get the final answer.
Recursive Generation:
Use a recursive lambda function to explore both choices for each element: include it or exclude it.
Base case: once all elements of the half have been processed, increment the frequency of the accumulated sum.
Time and Auxiliary Space Complexity
Expected Time Complexity: O(2^(n/2) Γ n), where n is the size of the array. We generate all 2^(n/2) subsets for each half, and for each subset, we perform O(n) operations in the worst case to calculate the sum. The final combination step takes O(2^(n/2)) to iterate through one map and perform constant-time lookups in the other.
Expected Auxiliary Space Complexity: O(2^(n/2)), as we store all possible subset sums for both halves in hash maps. In the worst case, each half can have up to 2^(n/2) unique subset sums, and we also use O(n) recursion stack space.
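For the maximum input size n = 40, each half has at most 20 elements, so we enumerate about $2^{20} \approx 1.05 \times 10^6$ sums per half instead of $2^{40} \approx 1.1 \times 10^{12}$ subsets overall, which is what makes the meet-in-the-middle split feasible.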
Code (C++)
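A minimal, self-contained sketch of the recursive-lambda meet-in-the-middle approach described above; the function name countSubsetsMITM and its signature are illustrative assumptions here, not the exact interface expected by the GFG driver.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count subsets of arr whose elements sum to k, via meet in the middle.
int countSubsetsMITM(const vector<int>& arr, int k) {
    int n = arr.size(), mid = n / 2;
    unordered_map<long long, long long> left, right;

    // Recursive lambda: enumerate every subset sum of arr[i..end) and
    // record its frequency in the supplied map.
    function<void(int, int, long long, unordered_map<long long, long long>&)> gen =
        [&](int i, int end, long long sum, unordered_map<long long, long long>& mp) {
            if (i == end) { mp[sum]++; return; }
            gen(i + 1, end, sum, mp);           // exclude arr[i]
            gen(i + 1, end, sum + arr[i], mp);  // include arr[i]
        };

    gen(0, mid, 0, left);    // all subset sums of the left half
    gen(mid, n, 0, right);   // all subset sums of the right half

    long long count = 0;
    for (const auto& [s, freq] : left) {
        auto it = right.find((long long)k - s);
        if (it != right.end()) count += freq * it->second;
    }
    return (int)count;       // the problem guarantees this fits in 32 bits
}
```

For Example 1 (arr = [1, 3, 2], k = 3) this returns 2, counting the subsets [1, 2] and [3].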
View Alternative Approaches with Code and Analysis
2. Iterative Bitmask Approach
Algorithm Steps:
Split array into two halves for divide and conquer optimization.
Use bitmask iteration to generate all possible subset sums for each half.
Store frequency of each subset sum in hash maps.
For each sum in the left half, find its complement in the right half and multiply their frequencies, as in the sketch below.
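A hedged sketch of this iterative variant, assuming the illustrative function name countSubsetsBitmask; only standard-library facilities are used.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count subsets summing to k; each half is enumerated with a bitmask loop.
int countSubsetsBitmask(const vector<int>& arr, int k) {
    int n = arr.size(), mid = n / 2;
    int lsz = mid, rsz = n - mid;
    unordered_map<long long, long long> left, right;

    // Every mask in [0, 2^lsz) picks one subset of the left half.
    for (int mask = 0; mask < (1 << lsz); ++mask) {
        long long sum = 0;
        for (int i = 0; i < lsz; ++i)
            if (mask & (1 << i)) sum += arr[i];
        left[sum]++;
    }

    // Same enumeration for the right half.
    for (int mask = 0; mask < (1 << rsz); ++mask) {
        long long sum = 0;
        for (int i = 0; i < rsz; ++i)
            if (mask & (1 << i)) sum += arr[mid + i];
        right[sum]++;
    }

    // Pair a left sum s with right sums equal to k - s.
    long long count = 0;
    for (const auto& [s, freq] : left) {
        auto it = right.find((long long)k - s);
        if (it != right.end()) count += freq * it->second;
    }
    return (int)count;
}
```

Since n ≤ 40, each half has at most 20 elements, so each bitmask loop runs at most 2^20 times.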
Complexity Analysis:
Time: O(2^(n/2) × n) - Generate subsets for both halves with bit operations
Auxiliary Space: O(2^(n/2)) - Store subset sums in hash maps
Why This Approach?
Avoids recursion overhead with iterative generation
Clear bit manipulation for subset enumeration
Helpful for understanding the meet-in-the-middle technique
3. Dynamic Programming Approach
Algorithm Steps:
Use DP to count subsets with specific sums in each half.
Create frequency maps for achievable sums in left and right halves.
Build DP table iteratively for each element.
Combine results by matching complementary sums (see the sketch below).
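A sketch of this incremental, DP-style construction, assuming an ordered std::map (which is where the logarithmic factor in the analysis below comes from); the names buildSums and countSubsetsDP are illustrative.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Build a frequency map of all subset sums of arr[lo..hi), one element at a time.
map<long long, long long> buildSums(const vector<int>& arr, int lo, int hi) {
    map<long long, long long> dp;
    dp[0] = 1;                                // the empty subset
    for (int i = lo; i < hi; ++i) {
        map<long long, long long> next(dp);   // states that exclude arr[i]
        for (const auto& [sum, freq] : dp)
            next[sum + arr[i]] += freq;       // states that include arr[i]
        dp = move(next);
    }
    return dp;
}

int countSubsetsDP(const vector<int>& arr, int k) {
    int n = arr.size(), mid = n / 2;
    map<long long, long long> left = buildSums(arr, 0, mid);
    map<long long, long long> right = buildSums(arr, mid, n);

    long long count = 0;
    for (const auto& [s, freq] : left) {
        auto it = right.find((long long)k - s);
        if (it != right.end()) count += freq * it->second;
    }
    return (int)count;
}
```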
Complexity Analysis:
Time: O(2^(n/2) × log(2^(n/2))) - Map operations for each subset
Auxiliary Space: O(2^(n/2)) - DP state storage
Why This Approach?
Builds solution incrementally element by element
More intuitive for DP-familiar programmers
Easier to extend for weighted variants
4. Optimized Vector-Based Storage
Algorithm Steps:
Generate all subset sums using vectors instead of hash maps.
Sort one vector for binary search optimization.
For each sum in first vector, binary search for complement.
Count matching pairs efficiently, as in the sketch below.
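A sketch of the vector-based variant; subsetSums and countSubsetsVector are illustrative names chosen here, not names from the original solution.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Collect every subset sum of arr[lo..hi) into a flat vector.
vector<long long> subsetSums(const vector<int>& arr, int lo, int hi) {
    int m = hi - lo;
    vector<long long> sums;
    sums.reserve(1u << m);
    for (int mask = 0; mask < (1 << m); ++mask) {
        long long s = 0;
        for (int i = 0; i < m; ++i)
            if (mask & (1 << i)) s += arr[lo + i];
        sums.push_back(s);
    }
    return sums;
}

int countSubsetsVector(const vector<int>& arr, int k) {
    int n = arr.size(), mid = n / 2;
    vector<long long> left = subsetSums(arr, 0, mid);
    vector<long long> right = subsetSums(arr, mid, n);
    sort(right.begin(), right.end());          // sort one side for binary search

    long long count = 0;
    for (long long s : left) {
        // equal_range counts how many right-half sums equal k - s (handles duplicates).
        auto range = equal_range(right.begin(), right.end(), (long long)k - s);
        count += range.second - range.first;
    }
    return (int)count;
}
```

Using std::equal_range rather than a single binary search correctly counts duplicate sums on the sorted side.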
Complexity Analysis:
Time: O(2^(n/2) × (n + log(2^(n/2)))) - Sorting and binary search
Auxiliary Space: O(2^(n/2)) - Vector storage
Why This Approach?
Cache-friendly vector access patterns
Binary search for efficient counting
Good for competitive programming constraints
Comparison of Approaches

| Approach | Time Complexity | Space Complexity | Pros | Cons |
|----------|-----------------|------------------|------|------|
| Recursive Lambda | O(2^(n/2) × n) | O(2^(n/2)) | Clean and concise | Recursion stack overhead |
| Iterative Bitmask | O(2^(n/2) × n) | O(2^(n/2)) | No recursion overhead | Bit manipulation complexity |
| Dynamic Programming | O(2^(n/2) × log(2^(n/2))) | O(2^(n/2)) | Incremental building | Map overhead |
| Vector Binary Search | O(2^(n/2) × (n + log(2^(n/2)))) | O(2^(n/2)) | Cache-friendly | Sorting required |
Best Choice Recommendation

| Scenario | Recommended Approach | Performance Rating |
|----------|----------------------|--------------------|
| Optimal performance needed | Recursive Lambda | ★★★★★ |
| Avoid recursion | Iterative Bitmask | ★★★★★ |
| DP understanding | Dynamic Programming | ★★★★★ |
| Cache optimization | Vector Binary Search | ★★★★★ |
Code (Java)
Code (Python)
Contribution and Support
For discussions, questions, or doubts related to this solution, feel free to connect on LinkedIn: Any Questions? Let's make this learning journey more collaborative!
If you find this helpful, please give this repository a star!