21. Top K Frequent Elements in Array
✅ GFG solution to the Top K Frequent Elements problem: find the k most frequent elements using QuickSelect partitioning with optimal time complexity.
The problem can be found at the following link: 🔗 Question Link
🧩 Problem Description
You are given a non-empty integer array arr[]. Your task is to find and return the top k elements which have the highest frequency in the array.
Note: If two numbers have the same frequency, the larger number should be given the higher priority.
📝 Examples
Example 1
Input: arr[] = [3, 1, 4, 4, 5, 2, 6, 1], k = 2
Output: [4, 1]
Explanation: Frequency of 4 is 2 and frequency of 1 is 2; these two have the maximum frequency, and 4 is larger than 1.
Example 2
Input: arr[] = [7, 10, 11, 5, 2, 5, 5, 7, 11, 8, 9], k = 4
Output: [5, 11, 7, 10]
Explanation: Frequency of 5 is 3, frequency of 11 is 2, frequency of 7 is 2, and frequency of 10 is 1.
📏 Constraints
$1 \le \text{arr.size()} \le 10^5$
$1 \le \text{arr}[i] \le 10^5$
$1 \le k \le \text{no. of distinct elements}$
✅ My Approach
The optimal approach uses QuickSelect Partitioning combined with Hash Map frequency counting to efficiently find the top k frequent elements:
QuickSelect + Partial Sort Algorithm
Build Frequency Map:
Traverse the entire array once to count the frequency of each element.
Store frequencies in an unordered hash map for O(1) average lookup time.
Time: O(n) where n is the array size.
Create Frequency-Element Pairs:
Convert the frequency map into a vector of pairs.
Each pair contains (frequency, element) for easy sorting and comparison.
This transformation takes O(m) time where m is the number of unique elements.
Partition Using nth_element:
Apply C++ STL's nth_element function with a greater comparator. This uses the QuickSelect algorithm to partition the vector.
After partitioning, the top k frequent elements are guaranteed to be in the first k positions.
Average time complexity: O(m) for partitioning.
Sort Top K Elements:
Sort only the first k elements in descending order by frequency.
When frequencies are equal, the larger element value gets priority.
This ensures the correct order as per problem requirements.
Time: O(k log k) which is typically much smaller than O(m log m).
Extract Result:
Iterate through the first k sorted pairs.
Extract only the element values (second component of pair) into result vector.
Return the final result containing top k frequent elements.
Why This Approach?
Optimal Time Complexity: Combines the efficiency of QuickSelect (O(m) average) with minimal sorting overhead (O(k log k)).
Space Efficient: Uses only O(m) extra space for frequency map and pairs vector.
Handles Tie-Breaking: Automatically prioritizes larger values when frequencies are equal through pair comparison.
Scalable: Performs exceptionally well when k << m (k is much smaller than unique elements count).
📊 Time and Auxiliary Space Complexity
Expected Time Complexity: O(n + m + k log k), where n is the size of the input array, m is the number of distinct elements, and k is the number of top elements to find. Breaking down: O(n) for frequency counting, O(m) average for nth_element partitioning, and O(k log k) for sorting the top k elements. Since typically k << m << n, the overall complexity is highly efficient.
Expected Auxiliary Space Complexity: O(m), where m is the number of distinct elements in the array. We use a hash map to store element frequencies (O(m)) and a vector of pairs to store frequency-element combinations (O(m)). The result vector uses O(k) space, which is absorbed into O(m) since k ≤ m.
🧑‍💻 Code (C++)
⚡ View Alternative Approaches with Code and Analysis
📌 2️⃣ Min Heap (Priority Queue) Approach
💡 Algorithm Steps:
Build a frequency map by counting occurrences of each element in the array.
Create a min heap (priority queue) to maintain exactly k elements with highest frequencies.
Iterate through the frequency map:
Push each (frequency, element) pair into the min heap.
If heap size exceeds k, pop the element with minimum frequency.
After processing all elements, the heap contains exactly k most frequent elements.
Extract elements from heap into result vector and reverse to get descending order.
This approach maintains only k elements in memory at any time.
📊 Complexity Analysis:
Time: ⏱️ O(n log k) - Building the frequency map takes O(n), and each of the m unique elements triggers heap operations costing O(log k)
Auxiliary Space: 💾 O(m + k) - The hash map stores m unique elements and the heap holds at most k elements
✅ Why This Approach?
Memory efficient when k is significantly smaller than the number of unique elements
Maintains optimal heap size throughout execution
Excellent choice for streaming data or when memory is constrained
Heap naturally maintains the k largest frequencies without sorting all elements
📌 3️⃣ Simple Sorting Approach
💡 Algorithm Steps:
Build a frequency map to count occurrences of each element in the input array.
Transform the frequency map into a vector of (frequency, element) pairs.
Sort the entire vector in descending order by frequency.
When frequencies are equal, pairs are automatically sorted by element value in descending order.
Extract the first k elements from the sorted vector as the result.
This approach is straightforward but sorts all unique elements even when k is small.
📊 Complexity Analysis:
Time: ⏱️ O(n + m log m) - O(n) for frequency counting and O(m log m) for sorting, where m is the number of unique elements
Auxiliary Space: 💾 O(m) - Space for the frequency map and vector of pairs
✅ Why This Approach?
Extremely simple and easy to understand implementation
No complex data structures or algorithms needed
Good baseline solution for interviews or when code clarity is priority
Works reliably for all input sizes with no tricky edge cases
Ideal when m (unique elements) is relatively small
📌 4️⃣ Bucket Sort Approach
💡 Algorithm Steps:
Count frequency of each element and track the maximum frequency encountered.
Create an array of buckets where each index represents a frequency value.
Place each element into its corresponding frequency bucket (bucket[frequency]).
Traverse buckets from highest frequency to lowest (max to 1).
For each bucket, sort elements in descending order to handle tie-breaking.
Collect elements from buckets until we have k elements in the result.
This achieves linear time by avoiding comparison-based sorting of all elements.
📊 Complexity Analysis:
Time: ⏱️ O(n + m) - Linear time for frequency counting and bucket distribution; sorting within buckets adds at most O(m log m) in the worst case but is a small overhead in practice
Auxiliary Space: 💾 O(n) - Buckets array of size maxFreq + 1, plus the frequency map
✅ Why This Approach?
Achieves optimal O(n) time complexity in the average case
Natural grouping of elements by frequency reduces comparison operations
Excellent performance when frequency distribution is dense
Efficient when maximum frequency is not extremely large
Demonstrates clever use of counting sort principles
📊 Comparison of Approaches

| 📊 Approach | ⏱️ Time Complexity | 💾 Space Complexity | ✅ Pros | ⚠️ Cons |
|---|---|---|---|---|
| 🎯 QuickSelect + Sort | 🟢 O(n + m + k log k) | 🟢 O(m) | 🚀 Best average case, optimal k sorting | 🔧 Slightly complex implementation |
| 📦 Min Heap | 🟢 O(n log k) | 🟢 O(m + k) | 💾 Memory efficient for small k | 🐌 Slower when k is large |
| 📝 Sorting | 🟡 O(n + m log m) | 🟢 O(m) | 📖 Simplest implementation | 🐢 Sorts all elements unnecessarily |
| 🪣 Bucket Sort | 🟢 O(n + m) | 🟡 O(n) | ⚡ Best time complexity | 💾 Higher space when maxFreq is large |
🏆 Best Choice Recommendation

| 🎯 Scenario | 🏗️ Recommended Approach | 🥇 Performance Rating |
|---|---|---|
| 📊 Small k, very large dataset | 🥇 Min Heap | ⭐⭐⭐⭐⭐ |
| 📈 Balanced performance & code clarity | 🥈 QuickSelect + Sort | ⭐⭐⭐⭐⭐ |
| 🎯 Maximum speed, frequency distribution known | 🥉 Bucket Sort | ⭐⭐⭐⭐⭐ |
| 🔧 Quick implementation, small datasets | 🏗️ Sorting | ⭐⭐⭐⭐⭐ |
☕ Code (Java)
🐍 Code (Python)
🔧 Contribution and Support
For discussions, questions, or doubts related to this solution, feel free to connect on LinkedIn: 💬 Any Questions?. Let's make this learning journey more collaborative!
⭐ If you find this helpful, please give this repository a star! ⭐