# Quick Sort | Pseudocode | Example | Complexity Analysis

In this blog we are going to learn quick sort, which is considered one of the best practical choices for sorting despite having a worst-case running time of Θ(n²). It's considered the best practical choice because it's remarkably efficient on average, with an expected running time of Θ(n·lg n).

In this blog we are going to:

- Understand what quick sort is and write pseudocode for it

- Explain quick sort with an example

- Analyze its complexity

So, let's begin with what quick sort is and write pseudocode for it.

Quick sort, like merge sort, applies the divide-and-conquer approach to sorting. We follow the same three basic steps of divide and conquer here as well.

**Divide**: Divide the array into subarrays using a partition function. We pick a pivot element and split the array into two parts around it: elements no greater than the pivot, and elements greater than it.

**Conquer**: Next we sort the numbers in each subarray by calling quick sort recursively. Divide and conquer works well here because each subarray is a smaller instance of the same problem as the original array.

**Combine**: Finally, the subarrays combine to give the fully sorted array. Since the conquer step sorts the subarrays in place, no extra work is needed here.

**Pseudocode for quick sort**

The first piece of pseudocode is the quick sort algorithm itself. Given an array A[p…r], where p is the index of the first element and r is the index of the last element, we partition the array using the partition algorithm, which places some pivot at its final index q, splitting the array as A[p…q-1], A[q], A[q+1…r]. We then recursively quick sort the two subarrays on either side of the pivot, as the partition algorithm explained below determines q.

In the partition algorithm we pass the array A, the starting index p and the ending index r. Usually we take the last element, A[r], as the pivot, though any element can be chosen; for simplicity we will use the last element. We maintain an index i, initialized to p − 1 (one position before the start of the subarray). We then scan the array from p to r − 1, and whenever the pivot is greater than or equal to the current element, we increment i and exchange the current element with A[i]. Once the scan is complete, we exchange the pivot A[r] with A[i + 1], which places the pivot at its final sorted position, and return that index.
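The steps above can be sketched in Python. This is a minimal translation of the described pseudocode (the Lomuto partition scheme, with the last element as pivot); the names `quick_sort` and `partition` follow the text's notation.

```python
def partition(A, p, r):
    """Partition A[p..r] around the pivot A[r]; return the pivot's final index."""
    x = A[r]          # pivot: the last element of the subarray
    i = p - 1         # boundary of the "<= pivot" region
    for j in range(p, r):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]   # grow the "<= pivot" region
    A[i + 1], A[r] = A[r], A[i + 1]   # place the pivot between the two regions
    return i + 1

def quick_sort(A, p, r):
    """Sort A[p..r] in place."""
    if p < r:
        q = partition(A, p, r)        # divide
        quick_sort(A, p, q - 1)       # conquer left of the pivot
        quick_sort(A, q + 1, r)       # conquer right of the pivot

nums = [2, 8, 7, 1, 3, 5, 6, 4]
quick_sort(nums, 0, len(nums) - 1)
print(nums)  # [1, 2, 3, 4, 5, 6, 7, 8]
```

Note that the combine step is empty: once both recursive calls return, the whole range A[p..r] is already sorted in place.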

We’ll further explain the above algorithm with an example to clear all the doubts. Consider the following example:
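Since the original example figure isn't reproduced here, here is an illustrative trace of the partition step on the array [2, 8, 7, 1, 3, 5, 6, 4], with the last element 4 as pivot (the array choice is ours, not from the original):

```python
def partition(A, p, r):
    """Lomuto partition around the pivot A[r]; returns the pivot's final index."""
    x = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

A = [2, 8, 7, 1, 3, 5, 6, 4]   # pivot x = 4 (last element)
q = partition(A, 0, len(A) - 1)
# Trace of the scan (state after each swap):
#   j=0: 2 <= 4 -> i=0, swap A[0],A[0]: [2, 8, 7, 1, 3, 5, 6, 4]
#   j=1, j=2: 8 and 7 are > 4, nothing happens
#   j=3: 1 <= 4 -> i=1, swap A[1],A[3]: [2, 1, 7, 8, 3, 5, 6, 4]
#   j=4: 3 <= 4 -> i=2, swap A[2],A[4]: [2, 1, 3, 8, 7, 5, 6, 4]
#   j=5, j=6: 5 and 6 are > 4, nothing happens
#   finally swap A[3],A[7] to place the pivot: [2, 1, 3, 4, 7, 5, 6, 8]
print(q, A)  # 3 [2, 1, 3, 4, 7, 5, 6, 8]
```

After this single partition call the pivot 4 sits at its final index 3, everything to its left is ≤ 4 and everything to its right is > 4; quick sort then recurses on A[0..2] and A[4..7].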

**Analyzing the complexity of quick sort algorithm**

The running time of quick sort depends on whether the partitioning is balanced, which in turn depends on which element ends up as the pivot. If the partitioning is balanced, the algorithm runs asymptotically as fast as merge sort. If it is unbalanced, it can run asymptotically as slowly as insertion sort.

**Worst Case Analysis**

Worst-case behavior is observed when the partitioning is maximally unbalanced, for example when all the other elements are less than the pivot. The partition then produces one subproblem with n − 1 elements and one with 0 elements. Since partitioning takes Θ(n) time, the recurrence for the running time is

T(n) = T(n − 1) + Θ(n)

Solving the above recurrence with the substitution method gives Θ(n²). Therefore, in the worst case the running time of quick sort is no better than that of insertion sort.
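The recurrence can also be verified by simple unrolling, writing the Θ(n) partitioning cost as cn for some constant c:

```latex
T(n) = T(n-1) + cn
     = T(n-2) + c(n-1) + cn
     = \cdots
     = \sum_{k=1}^{n} ck
     = c \cdot \frac{n(n+1)}{2}
     = \Theta(n^2)
```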

**Best Case Analysis**

In the most even partition, partitioning produces two subproblems, each of size no more than n/2: one of size ⌊n/2⌋ and the other of size ⌈n/2⌉ − 1 (the pivot itself belongs to neither). In this case quick sort runs much faster, and the recurrence for the running time is:

T(n) = 2T(n/2) + Θ(n)

Using case 2 of the master theorem, the solution to this recurrence is T(n) = Θ(n·lg n).
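To see why case 2 applies, compare the partitioning cost f(n) with the watershed function n^(log_b a) for this recurrence:

```latex
a = 2,\quad b = 2,\quad f(n) = \Theta(n),\qquad
n^{\log_b a} = n^{\log_2 2} = n
```

Since f(n) = Θ(n^(log_b a)), case 2 of the master theorem gives T(n) = Θ(n^(log_b a) · lg n) = Θ(n·lg n), matching the stated bound.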

That's all folks! We are done with quick sort. We've written pseudocode for it, understood it with an example and analyzed its complexity.