Algorithms

Binary search is the classic recursive algorithm. It runs in logarithmic time, but be aware that the array must be sorted first.

def bsearch(arr, element)
    # the right bound is the last valid index, since the helper treats it as inclusive
    return bsearch_helper(arr, element, 0, arr.length - 1)
end

def bsearch_helper(arr, element, left, right)
    # the search range is empty, so the element isn't in the array
    return false if right < left

    # compute the midpoint relative to left; in languages with fixed-size
    # integers this avoids overflow on very large arrays
    mid = left + ((right - left) / 2)

    if arr[mid] == element
        return mid
    elsif arr[mid] > element
        bsearch_helper(arr, element, left, mid - 1)
    else
        bsearch_helper(arr, element, mid + 1, right)
    end
end

bsearch([1,2,5,9,11,44,55,101], 5) #=> 2
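
The same search can also be written without recursion. Here is a minimal iterative sketch (the bsearch_iterative name is just for illustration, not part of the code above):

def bsearch_iterative(arr, element)
    left = 0
    right = arr.length - 1

    # shrink the search range until it is empty or the element is found
    while left <= right
        mid = left + ((right - left) / 2)

        return mid if arr[mid] == element

        if arr[mid] > element
            right = mid - 1
        else
            left = mid + 1
        end
    end

    false
end

bsearch_iterative([1,2,5,9,11,44,55,101], 44) #=> 5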

Sorting

Sorting was one of the very first applications of computers, and much has been, and still can be, said about it. The most famous implementations are quicksort and merge sort, although others can be better in particular situations: insertion sort tends to win on small or nearly sorted arrays, and bucket sort can run in linear time when the data is uniformly distributed and values can be mapped to ordered buckets (a sketch appears after the insertion sort code below). Quicksort is quadratic in the worst case but linearithmic (n log n) on average, and in practice is usually the fastest general-purpose choice. Merge sort is always linearithmic.

def merge_sort(arr)
    # stop splitting when only one item is left in the array
    return arr if arr.length <= 1

    mid = arr.length / 2

    # split the array in the middle and sort each half, recursively
    left = merge_sort(arr[0...mid])
    right = merge_sort(arr[mid..-1])

    return merge(left, right)
end

def merge(left, right)
    result = []
    l_idx = 0
    r_idx = 0

    # compare elements in array and put them in result in order
    while l_idx < left.length && r_idx < right.length
        if left[l_idx] > right[r_idx]
            result << right[r_idx]
            r_idx += 1
        else
            result << left[l_idx]
            l_idx += 1
        end
    end

    # add any left over elements from the left array to result
    while l_idx < left.length
        result << left[l_idx]
        l_idx += 1
    end

    # add any left over elements from the right array to result
    while r_idx < right.length
        result << right[r_idx]
        r_idx += 1
    end

    return result
end
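
As with bsearch above, a quick usage check:

merge_sort([11, 3, 51, 2, 9]) #=> [2, 3, 9, 11, 51]
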
def quick_sort(arr)
    # stop splitting when only one element is left
    return arr if arr.length <= 1

    # pick a value to compare against; using pop is simple but not optimal
    # (it mutates the input array and degrades on already-sorted input)
    pivot = arr.pop
    lower = []
    higher = []

    # put each value in the right bucket according to whether it is lower
    # or higher than the pivot
    arr.each do |val|
        if val < pivot
            lower << val
        else
            higher << val
        end
    end

    # recursively sort the lower and higher arrays around the pivot
    return quick_sort(lower) + [pivot] + quick_sort(higher)
end
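
And the same quick check for quick_sort (note that, because of the pop, it mutates the array you pass in):

quick_sort([11, 3, 51, 2, 9]) #=> [2, 3, 9, 11, 51]
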
def insertion_sort(arr)
    i = 1

    # iterate through the array and run the insert method on each value
    while i < arr.length
        insert(arr, i, arr[i])
        i += 1
    end

    return arr
end

def insert(arr, pos, value)
    # start at the previous position
    i = pos - 1

    # work backwards through the array, shifting values to the right as long as they are greater than the value being inserted
    while i >= 0 && arr[i] > value
        arr[i + 1] = arr[i]
        i -= 1
    end

    arr[i + 1] = value
end
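
The sorting paragraph above also mentions bucket sort. Here is a minimal sketch, assuming the values are floats uniformly distributed in [0, 1) (the bucket_sort name and the bucket count are just for illustration); it reuses insertion_sort on the individual buckets:

def bucket_sort(arr, bucket_count = 10)
    return arr if arr.length <= 1

    # create the empty buckets
    buckets = Array.new(bucket_count) { [] }

    # distribute the values into buckets; assumes floats in [0, 1)
    arr.each do |val|
        index = (val * bucket_count).floor
        buckets[index] << val
    end

    # sort each bucket (they should be small) and concatenate them back together
    buckets.flat_map { |bucket| insertion_sort(bucket) }
end

bucket_sort([0.42, 0.32, 0.73, 0.12, 0.95]) #=> [0.12, 0.32, 0.42, 0.73, 0.95]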

BFS (breadth-first search) is a powerful algorithm for traversing tree or graph data structures. It starts at the root node and visits all of its neighbors before visiting its neighbors' neighbors; for trees, that means visiting every node on one level (level order) before moving to the next. It is useful for traversing tree- and graph-like structures such as the DOM or a JSON object, and for tasks like finding the shortest path between two nodes in an unweighted graph.
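
The traversal examples below assume a simple tree node that holds a value and a list of child nodes. A minimal sketch of what that might look like (the Node class and the sample tree are assumptions for illustration, not part of the original code):

class Node
    attr_accessor :value, :children

    def initialize(value)
        @value = value
        @children = []
    end
end

# build a small sample tree: 1 has children 2, 3, 4, and 2 has children 5, 6
root = Node.new(1)
child = Node.new(2)
child.children.push(Node.new(5), Node.new(6))
root.children.push(child, Node.new(3), Node.new(4))

On this tree, tree_bfs_traversal(root) prints 1, 2, 3, 4, 5, 6 (level by level), while the tree_dfs_traversal further down prints 1, 2, 5, 6, 3, 4.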

def tree_bfs_traversal(root)
    # use a queue to maintain order of nodes
    queue = Queue.new

    # add root node
    queue.enq(root)

    # stop when you've visited every node
    while !queue.empty?
        current_node = queue.deq

        p current_node.value

        # add each child node to the queue
        current_node.children.each do |child|
            queue.enq(child)
        end
    end
end

Keep in mind that if you're working with an undirected graph, or a directed graph with cycles, you need to keep track of the nodes you've already visited. The Set data structure is useful for this.
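
A minimal sketch of what that looks like for a graph, assuming each node exposes a neighbors array (the graph_bfs name and the neighbors accessor are assumptions for illustration):

require 'set'

def graph_bfs(start)
    # a queue for the visit order and a set for nodes already seen
    queue = Queue.new
    visited = Set.new

    queue.enq(start)
    visited.add(start)

    while !queue.empty?
        current_node = queue.deq

        p current_node.value

        current_node.neighbors.each do |neighbor|
            # only enqueue nodes we haven't seen yet, so cycles don't loop forever
            next if visited.include?(neighbor)

            visited.add(neighbor)
            queue.enq(neighbor)
        end
    end
end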

DFS (depth-first search) explores down a branch as far as possible before backtracking and visiting higher-level nodes.

def tree_dfs_traversal(node)
    if node
        p node.value

        # recursively visit each child, going as deep as possible before moving on
        node.children.each do |child|
            tree_dfs_traversal(child)
        end
    end
end

Inorder vs Preorder vs Postorder Traversal

You change the order in which the current node is processed by changing where that processing happens in the function. The example above is a pre-order traversal, since it prints the node's value as soon as the node is visited. Moving the call to p after the loop makes it post-order: printing becomes the last thing the function does before returning, so values come out in the order the nodes leave the call stack. In-order traversal is specific to binary trees: there, p is called between the visit to the left subtree and the visit to the right subtree, and on a binary search tree that prints the values in sorted order.
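
For a binary tree node with value, left, and right, the three orders look like this (a minimal sketch; the node shape and method names are assumptions for illustration):

def pre_order(node)
    return if node.nil?
    p node.value               # process before visiting either subtree
    pre_order(node.left)
    pre_order(node.right)
end

def in_order(node)
    return if node.nil?
    in_order(node.left)
    p node.value               # process between the left and right subtrees
    in_order(node.right)
end

def post_order(node)
    return if node.nil?
    post_order(node.left)
    post_order(node.right)
    p node.value               # process after both subtrees
end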
