Hands-on implementation of theoretical concepts through practical lab exercises
Welcome to the laboratory section of my Design and Analysis of Algorithms portfolio. This section documents my hands-on journey implementing the theoretical concepts covered in the course. Through these experiments, I've transformed abstract algorithmic ideas into working code, analyzed performance characteristics, and gained practical insights into computational efficiency.
Each lab exercise includes empirical analysis of algorithmic performance across different input sizes.
Converting theoretical algorithms into efficient code requires careful handling of edge cases and deliberate optimization.
Many implementations underwent multiple refinements to enhance efficiency and readability.
Explore each algorithm implementation below. Click on an experiment to view details about the implementation approach, code structure, and performance analysis.
These fundamental sorting algorithms serve as a foundation for understanding more complex sorting techniques.
Builds the sorted array one item at a time by repeatedly taking elements from the unsorted part and inserting them into their correct position in the sorted part.
My implementation focuses on readability while maintaining optimal performance. I used a slightly different approach than the textbook algorithm by using a temporary variable to hold the current element, which simplified the inner loop logic.
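In outline, the idea looks like this (a minimal Python sketch of the approach, not the exact lab code; `key` is my name for the temporary variable):

```python
def insertion_sort(arr):
    """Sort arr in place, holding the current element in a temporary variable."""
    for i in range(1, len(arr)):
        key = arr[i]              # element to insert into the sorted prefix arr[:i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]   # shift larger elements one slot to the right
            j -= 1
        arr[j + 1] = key          # drop the held element into its final slot
    return arr
```

Holding the element in `key` means the inner loop only shifts values instead of swapping pairs, which is what simplifies the loop logic.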
Repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. The process repeats until no swaps are needed.
I enhanced the standard bubble sort with an optimization flag that detects when no swaps occur in a pass, indicating the array is sorted. This improved performance significantly on partially sorted arrays.
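A sketch of the optimized version (the `swapped` flag name is illustrative):

```python
def bubble_sort(arr):
    """Bubble sort with an early-exit flag: stop once a full pass makes no swaps."""
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):          # the last i elements are already in place
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:                     # no swaps => the array is sorted
            break
    return arr
```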
Insertion sort has a time complexity of O(n²) in the worst case, as it compares and shifts elements one by one. Bubble sort is also O(n²) in the worst case, repeatedly swapping adjacent elements. Both are inefficient for large datasets, but insertion sort approaches O(n) on nearly sorted data.
This experiment compares two fundamental search algorithms: Linear Search for unsorted arrays and Binary Search for sorted arrays.
Sequentially checks each element until the target is found or the end of array is reached.
Uses divide-and-conquer by repeatedly dividing the search interval in half.
I implemented both recursive and iterative versions of binary search. The iterative version avoided stack overhead and had slightly better performance. Special care was taken to prevent integer overflow when calculating midpoints, using the formula mid = low + (high - low) / 2.
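A minimal Python sketch of the iterative version (Python integers cannot overflow, but the safe midpoint form carries over directly to C or Java):

```python
def binary_search(arr, target):
    """Iterative binary search on a sorted list; returns an index or -1."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = low + (high - low) // 2   # overflow-safe midpoint from the write-up
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1               # discard the left half
        else:
            high = mid - 1              # discard the right half
    return -1
```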
Linear search has O(n) time complexity, checking each element, while binary search is more efficient with O(log n), but requires sorted data. For large datasets, binary search outperforms linear search.
A divide-and-conquer algorithm that divides the input array into two halves, recursively sorts them, and then merges the sorted halves.
The main challenge was optimizing the merge operation to minimize memory allocations. I used a single auxiliary array for all merge operations rather than creating new arrays for each recursive call, which significantly improved performance.
Merge sort consistently outperformed O(n²) algorithms for medium to large arrays. For very small arrays (n < 10), the overhead of recursion made it slightly slower than insertion sort.
I implemented a hybrid approach that uses insertion sort for small subarrays (n < 10), resulting in about 15% performance improvement for large arrays.
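Putting both optimizations together, a condensed Python sketch (the cutoff value and names are illustrative, not the exact lab code):

```python
def merge_sort(arr, cutoff=10):
    """Merge sort with one shared auxiliary buffer and an insertion-sort cutoff."""
    aux = arr[:]                          # single allocation reused by every merge

    def sort(lo, hi):                     # sorts arr[lo:hi+1] in place
        if hi - lo + 1 < cutoff:          # small subarray: insertion sort is cheaper
            for i in range(lo + 1, hi + 1):
                key, j = arr[i], i - 1
                while j >= lo and arr[j] > key:
                    arr[j + 1] = arr[j]
                    j -= 1
                arr[j + 1] = key
            return
        mid = (lo + hi) // 2
        sort(lo, mid)
        sort(mid + 1, hi)
        aux[lo:hi + 1] = arr[lo:hi + 1]   # copy once, then merge back into arr
        i, j = lo, mid + 1
        for k in range(lo, hi + 1):
            if i > mid:
                arr[k] = aux[j]; j += 1
            elif j > hi:
                arr[k] = aux[i]; i += 1
            elif aux[j] < aux[i]:
                arr[k] = aux[j]; j += 1
            else:
                arr[k] = aux[i]; i += 1   # take from the left on ties (stable)

    if arr:
        sort(0, len(arr) - 1)
    return arr
```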
Another efficient divide-and-conquer sorting algorithm that works by selecting a 'pivot' element and partitioning the array around it.
I implemented and tested three pivot selection strategies: the first (or last) element, a randomly chosen element, and the median of the first, middle, and last elements (median-of-three).
The median-of-three approach consistently showed the best overall performance and resistance to worst-case scenarios.
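A sketch of the median-of-three variant (Lomuto-style partition; names are illustrative):

```python
def quick_sort(arr, lo=0, hi=None):
    """Quick sort with median-of-three pivot selection and Lomuto partitioning."""
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return arr
    mid = (lo + hi) // 2
    # Order arr[lo] <= arr[mid] <= arr[hi]; the median lands at mid.
    if arr[mid] < arr[lo]:
        arr[lo], arr[mid] = arr[mid], arr[lo]
    if arr[hi] < arr[lo]:
        arr[lo], arr[hi] = arr[hi], arr[lo]
    if arr[hi] < arr[mid]:
        arr[mid], arr[hi] = arr[hi], arr[mid]
    arr[mid], arr[hi] = arr[hi], arr[mid]   # move the median pivot to the end
    pivot, i = arr[hi], lo                  # Lomuto partition around the pivot
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]
    quick_sort(arr, lo, i - 1)
    quick_sort(arr, i + 1, hi)
    return arr
```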
Quick sort generally outperformed merge sort in practice despite having a worse worst-case complexity. The in-place partitioning gives it better cache performance and lower memory overhead.
Implementation of Strassen's algorithm for matrix multiplication, which reduces the number of recursive multiplications from 8 to 7.
The implementation required careful handling of matrix dimensions and recursion base cases. For matrices smaller than a threshold size (n < 32), I reverted to the classic algorithm as the overhead of Strassen's approach wasn't justified for small matrices.
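In outline, assuming square matrices whose side is a power of two (the general case needs padding), a Python sketch:

```python
def strassen(A, B, threshold=32):
    """Strassen multiplication of n x n matrices (lists of lists), n a power of two.
    Falls back to the classic O(n^3) algorithm below the threshold."""
    n = len(A)
    if n < threshold:                       # small case: the classic algorithm wins
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    h = n // 2
    add = lambda X, Y: [[X[i][j] + Y[i][j] for j in range(h)] for i in range(h)]
    sub = lambda X, Y: [[X[i][j] - Y[i][j] for j in range(h)] for i in range(h)]
    split = lambda M: ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
                       [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])
    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    # The seven recursive products that replace the naive eight.
    M1 = strassen(add(A11, A22), add(B11, B22), threshold)
    M2 = strassen(add(A21, A22), B11, threshold)
    M3 = strassen(A11, sub(B12, B22), threshold)
    M4 = strassen(A22, sub(B21, B11), threshold)
    M5 = strassen(add(A11, A12), B22, threshold)
    M6 = strassen(sub(A21, A11), add(B11, B12), threshold)
    M7 = strassen(sub(A12, A22), add(B21, B22), threshold)
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    # Stitch the four quadrants back into one matrix.
    return [C11[i] + C12[i] for i in range(h)] + [C21[i] + C22[i] for i in range(h)]
```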
Strassen's algorithm only showed significant advantages for large matrices (n > 128). For practical applications, the crossover point needs to be carefully determined.
Finding the smallest convex polygon that encloses a set of points in a plane using the Graham Scan algorithm.
Graham Scan works in three phases: (1) select the anchor point with the lowest y-coordinate (breaking ties by x), (2) sort the remaining points by polar angle around the anchor, and (3) scan the sorted points, maintaining a stack and popping any point that would create a non-left turn.
The algorithm's performance was consistently O(n log n), dominated by the sorting step. My implementation handled degenerate cases such as collinear points correctly.
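A condensed Python sketch of the scan (the non-strict cross-product test is what discards collinear points):

```python
from math import atan2

def convex_hull(points):
    """Graham scan over (x, y) tuples; returns hull vertices counter-clockwise."""
    def cross(o, a, b):               # > 0 iff o -> a -> b is a left turn
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

    anchor = min(points, key=lambda p: (p[1], p[0]))      # lowest point, ties by x
    rest = sorted((p for p in points if p != anchor),
                  key=lambda p: (atan2(p[1]-anchor[1], p[0]-anchor[0]),
                                 (p[0]-anchor[0])**2 + (p[1]-anchor[1])**2))
    hull = [anchor]
    for p in rest:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()                # pop non-left turns, including collinear ones
        hull.append(p)
    return hull
```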
This experiment explored two important applications of the greedy approach.
Fractional Knapsack: maximizing total value by allowing fractions of items to be taken. Items are sorted by value-to-weight ratio and taken greedily.
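A minimal greedy sketch, assuming items arrive as (value, weight) pairs with positive weights:

```python
def fractional_knapsack(items, capacity):
    """Greedy fractional knapsack over (value, weight) pairs.
    Take items in decreasing value/weight order, splitting the last one."""
    total = 0.0
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)          # the whole item, or whatever still fits
        total += value * (take / weight)
        capacity -= take
    return total
```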
Huffman Coding: building optimal prefix codes for data compression based on character frequencies.
Building and traversing the Huffman tree efficiently was challenging. I used a min-heap (priority queue) to always extract the two nodes with lowest frequencies.
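A compact Python sketch using the standard-library `heapq`. Rather than building an explicit tree, this variant accumulates code bits as nodes merge, which yields the same code table:

```python
import heapq

def huffman_codes(freq):
    """Build Huffman codes from a {symbol: frequency} map using a min-heap.
    Each heap entry is (frequency, tie_breaker, {symbol: code_so_far})."""
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)     # the two lowest-frequency nodes
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1                          # tie-breaker keeps tuples comparable
    return heap[0][2] if heap else {}
```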
Finding the longest subsequence common to two sequences using dynamic programming.
The solution uses a 2D table to build up the LCS length for all prefixes of the two strings. After filling the table, the actual subsequence is constructed by backtracking.
I implemented both the standard O(nm)-space solution and an optimized O(min(n, m))-space version that keeps only two rows of the DP table in memory at any time.
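A sketch of the two-row version (it returns only the length; reconstructing the subsequence itself needs the full table, as noted above):

```python
def lcs_length(a, b):
    """Space-optimized LCS: keep only two rows of the DP table."""
    if len(b) > len(a):
        a, b = b, a                 # make b the shorter string: O(min(n, m)) space
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                curr[j] = prev[j - 1] + 1       # extend a common subsequence
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr                 # the finished row becomes the "previous" row
    return prev[-1]
```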
Solving the classic N-Queens puzzle of placing N queens on an N×N chess board so that no two queens attack each other.
The algorithm places queens one row at a time, backtracking whenever a conflict is detected. Optimization includes efficient conflict checking by maintaining sets of occupied columns, diagonals, and anti-diagonals.
The pruning strategy dramatically reduced the search space compared to a naive approach.
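A condensed sketch of the backtracking search with set-based conflict checks (this version counts solutions):

```python
def n_queens(n):
    """Count N-Queens solutions, placing one queen per row.
    cols/diags/anti track occupied columns and both diagonal directions."""
    cols, diags, anti = set(), set(), set()

    def place(row):
        if row == n:
            return 1
        count = 0
        for col in range(n):
            if col in cols or (row - col) in diags or (row + col) in anti:
                continue            # attacked square: prune this branch
            cols.add(col); diags.add(row - col); anti.add(row + col)
            count += place(row + 1)
            cols.discard(col); diags.discard(row - col); anti.discard(row + col)
        return count

    return place(0)
```

Each diagonal is identified by a constant row − col value and each anti-diagonal by row + col, so every conflict check is O(1).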
Finding the shortest possible route that visits each city exactly once and returns to the origin city.
I implemented the Held-Karp algorithm, using dynamic programming over bitmasks that represent sets of visited cities.
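In outline (a Python sketch over a distance matrix, assuming city 0 as the fixed start and n ≥ 2):

```python
def held_karp(dist):
    """Held-Karp TSP: dp[(mask, last)] = cheapest path over the cities in mask,
    starting at city 0 and ending at `last`. O(2^n * n^2) time overall."""
    n = len(dist)
    dp = {(1, 0): 0}                          # only city 0 visited, standing at 0
    for mask in range(1, 1 << n):
        if not mask & 1:
            continue                          # every tour must include the start city
        for last in range(n):
            if (mask, last) not in dp:
                continue
            for nxt in range(n):
                if mask & (1 << nxt):
                    continue                  # city already visited
                key = (mask | (1 << nxt), nxt)
                cand = dp[(mask, last)] + dist[last][nxt]
                if cand < dp.get(key, float("inf")):
                    dp[key] = cand
    full = (1 << n) - 1                       # close the tour back at city 0
    return min(dp[(full, last)] + dist[last][0] for last in range(1, n))
```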
The exact DP solution was practical only for small instances (n ≤ 20). For larger instances, I implemented approximation algorithms.
Enhancing Quick Sort with randomized pivot selection to avoid worst-case scenarios.
The algorithm randomly selects a pivot element before each partition operation, making worst-case scenarios extremely unlikely.
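The change over the deterministic version is small; a sketch of the randomized partition step:

```python
import random

def randomized_partition(arr, lo, hi):
    """Swap a uniformly random element into the pivot slot before partitioning,
    so no fixed input can force quadratic behaviour."""
    r = random.randint(lo, hi)
    arr[r], arr[hi] = arr[hi], arr[r]     # then partition exactly as before
    pivot, i = arr[hi], lo
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]
    return i
```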
I compared standard Quick Sort with the randomized variant across different input distributions.
Implementing and comparing efficient string matching algorithms: Naive, KMP, and Rabin-Karp.
Three distinct approaches were implemented: the naive O(nm) check at every position, Knuth-Morris-Pratt (KMP) with a precomputed failure table, and Rabin-Karp with a rolling hash.
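A sketch of the KMP core, since it came out ahead in the comparison below (names are illustrative):

```python
def kmp_search(text, pattern):
    """KMP string matching: precompute the failure (longest proper prefix-suffix)
    table, then scan the text without ever moving backwards. O(n + m) total."""
    if not pattern:
        return []
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):          # build the failure table
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    matches, k = [], 0
    for i, ch in enumerate(text):             # scan the text
        while k and ch != pattern[k]:
            k = fail[k - 1]                   # fall back via the table, not the text
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            matches.append(i - k + 1)         # a match ends at position i
            k = fail[k - 1]
    return matches
```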
The implementations were tested on several real-world text-search use cases. The KMP algorithm provided the best balance of performance and reliability across these applications.
Through these lab experiments, I've gained practical experience implementing a wide range of algorithms, from elementary sorting techniques to complex dynamic programming solutions. This hands-on journey has transformed abstract algorithmic concepts into tangible code, deepening my understanding of algorithm design principles and performance optimization techniques.