
Complexity Analysis and Introduction to Algorithms





Contents:

· Complexity Analysis

o Introduction

o Analysis of algorithms

o Orders of growth

o Big O notation

· Introduction to Algorithms

o Sorting Algorithms

§ Bubble sort

· Introduction

· Implementation

· Complexity analysis

§ Insertion sort

· Introduction

· Implementation

· Complexity analysis

o Searching Algorithms

§ Linear search

· Introduction

· Implementation

· Complexity analysis

§ Binary search

· Introduction

· Implementation

· Complexity analysis

· References

Complexity Analysis:

Introduction

Complexity theory is the study of the resources (especially computation time and memory) required by algorithms.

Analysis of algorithms

To analyze an algorithm is to determine the amount of resources (such as time and storage) necessary to execute it. Most algorithms are designed to work with inputs of arbitrary length. Usually the efficiency or complexity of an algorithm is stated as a function -f()- relating the input length (n) to the number of steps (time complexity) or storage locations (space complexity).
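As a toy sketch of this idea (the step counter is my own illustration, not a standard measure): summing an array examines each of its n elements once, so its time complexity is f(n) = n.

```cpp
// Toy illustration: summing an array examines each of the n elements
// exactly once, so the step count f(n) grows linearly with the input
// length n (time complexity O(n), constant extra space).
int sum_with_steps(const int arr[], int n, int &steps)
{
    steps = 0;
    int total = 0;
    for (int i = 0; i < n; i++)
    {
        total += arr[i];
        steps++;  // one "step" per element examined
    }
    return total;
}
```

Doubling the input length doubles the step count, which is exactly what f(n) = n predicts.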

Orders of growth

Informally, an algorithm can be said to exhibit a growth rate on the order of a mathematical function if beyond a certain input size n, the function f(n) times a positive constant provides an upper bound or limit for the run-time of that algorithm. In other words, for a given input size n greater than some n₀ and a constant c, an algorithm can run no slower than c × f(n). This concept is frequently expressed using Big O notation.

The table below illustrates these growth rates; the running times are consistent with a machine performing 10^6 operations per second.

f(n)    n=20            n=40            n=60
n       2/10^5 sec      4/10^5 sec      6/10^5 sec
n^2     4/10^4 sec      16/10^4 sec     36/10^4 sec
n^3     8/10^3 sec      64/10^3 sec     216/10^3 sec
2^n     1 sec           12.7 days       366 centuries

Big O notation

In mathematics, computer science, and related fields, big O notation (also known as Big Oh notation, Landau notation, Bachmann–Landau notation, and asymptotic notation) describes the limiting behavior of a function when the argument tends towards a particular value or infinity, usually in terms of simpler functions. Big O notation allows its users to simplify functions in order to concentrate on their growth rates: different functions with the same growth rate may be represented using the same O notation.

Although developed as a part of pure mathematics, this notation is now frequently also used in computational complexity theory to describe an algorithm's usage of computational resources: the worst case or average case running time or memory usage of an algorithm is often expressed as a function of the length of its input using big O notation. This allows algorithm designers to predict the behavior of their algorithms and to determine which of multiple algorithms to use, in a way that is independent of computer architecture or clock rate. Big O notation is also used in many other fields to provide similar estimates.

A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function.
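As a worked example (the constants c and n₀ here are my own choice, not the only valid ones): the function f(n) = 3n² + 5n + 2 is O(n²), because for every n ≥ 1 each lower-order term is bounded by n²:

```latex
f(n) = 3n^2 + 5n + 2 \;\le\; 3n^2 + 5n^2 + 2n^2 \;=\; 10n^2 \qquad (n \ge 1),
```

so the definition is satisfied with c = 10 and n₀ = 1, and we write f(n) = O(n²).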

Algorithms:

In mathematics, computer science, and related subjects, an algorithm is an effective method for solving a problem using a finite sequence of instructions. Algorithms are used for calculation, data processing, and many other fields.

There are many algorithms, for example:

1. Sorting Algorithms

a. Bubble Sort

b. Insertion Sort

c. And many other algorithms (Quick Sort, Selection Sort, …. )

2. Searching Algorithms

a. Linear search (sequential search)

b. Binary search

c. And many other algorithms (BFS, DFS, … )

3. And many more algorithms :) used in these and other fields.

Sorting Algorithms

Bubble sort algorithm

The bubble sort function sorts the array using a variation of the bubble sort. This is a simple (although notoriously slow) approach to sorting. Here's how it works, assuming we want to arrange the numbers in the array in ascending order. First, the first element of the array (arr[0]) is compared in turn with each of the other elements (starting with the second). If it's greater than any of them, the two are swapped. When this is done we know that at least the first element is in order; it's now the smallest element. Next, the second element is compared in turn with all the other elements, starting with the third, and again swapped if it's bigger. When we're done we know that the second element has the second-smallest value. This process is continued for all the elements until the next-to-last, at which time the array is assumed to be ordered. (Figure 10.8 in Lafore's book illustrates this process.)

Code of bubble sort

void bubble_sort(int arr[], int arrLen)  // pass the length explicitly; strlen() works only on C strings
{
    for (int i = 0; i < arrLen - 1; i++)
        for (int j = i + 1; j < arrLen; j++)
            if (arr[i] > arr[j])
                swap(arr[i], arr[j]);  // e.g. std::swap from <algorithm>
}

Complexity analysis of Bubble sort

Bubble sort has worst-case and average complexity both О(n²), where n is the number of items being sorted. There exist many sorting algorithms with substantially better worst-case or average complexity of O(n log n). Even other О(n²) sorting algorithms, such as Insertion sort, tend to have better performance than bubble sort. Therefore bubble sort is not a practical sorting algorithm when n is large.
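The О(n²) claim can be checked directly by counting comparisons (an instrumented sketch of the variant described above; the counting is my addition): the two nested loops always perform exactly n(n − 1)/2 comparisons, whatever the input order.

```cpp
// Bubble-sort variant instrumented to count comparisons. The nested
// loops always execute n*(n-1)/2 comparisons regardless of the input
// order, which is why the running time is O(n^2) even on sorted data.
int bubble_sort_comparisons(int arr[], int arrLen)
{
    int comparisons = 0;
    for (int i = 0; i < arrLen - 1; i++)
        for (int j = i + 1; j < arrLen; j++)
        {
            comparisons++;
            if (arr[i] > arr[j])
            {
                int tmp = arr[i];  // plain swap to stay self-contained
                arr[i] = arr[j];
                arr[j] = tmp;
            }
        }
    return comparisons;
}
```

For n = 10 this returns 45 = 10·9/2 comparisons, and the array comes out sorted.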

Insertion Sort Algorithm

Insertion sort is a simple sorting algorithm, a comparison sort in which the sorted array (or list) is built one entry at a time. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort. However, insertion sort provides several advantages:

· Simple to implement

· Efficient for small data sets and for data sets that are already substantially sorted

· Stable: does not change the relative order of elements with equal keys

· In-place: requires only a constant amount of additional memory

· Online: can sort a list as it receives it

Code of Insertion sort:

void insert(int arr[], int arrLen, int value)
{
    int insertionPosition = 0;

    // Find the first position whose element is not smaller than value.
    for (int i = 0; i < arrLen; i++)
    {
        if (value < arr[i])
            break;
        insertionPosition++;
    }

    // Shift the tail one slot to the right to open a gap.
    // (The array must have room for arrLen + 1 elements.)
    for (int i = arrLen; i > insertionPosition; i--)
        arr[i] = arr[i - 1];

    arr[insertionPosition] = value;
}
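The insert() function above places a single value into an already-sorted array; a complete insertion sort applies the same shift-and-insert idea to each element of an unsorted array in turn. A minimal self-contained sketch (the name insertion_sort and its in-place structure are my own, not from the text above):

```cpp
// Full insertion sort: treat arr[0..i-1] as a sorted prefix, shift its
// elements that are greater than arr[i] one slot to the right, then
// drop arr[i] into the gap. Sorts in place, in ascending order.
void insertion_sort(int arr[], int arrLen)
{
    for (int i = 1; i < arrLen; i++)
    {
        int value = arr[i];
        int j = i;
        while (j > 0 && arr[j - 1] > value)
        {
            arr[j] = arr[j - 1];  // shift larger element right
            j--;
        }
        arr[j] = value;  // insert into its correct position
    }
}
```

Unlike the insert() helper, this version needs no extra slot: each element is moved within the space the array already occupies.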

Complexity analysis of Insertion sort

The best case input is an array that is already sorted. In this case insertion sort has a linear running time (i.e., O(n)). During each iteration, the first remaining element of the input is only compared with the right-most element of the sorted subsection of the array.

The worst case input is an array sorted in reverse order. In this case every iteration of the inner loop will scan and shift the entire sorted subsection of the array before inserting the next element. For this case insertion sort has a quadratic running time (i.e., O(n²)).

Searching Algorithms

Linear Search (sequential search)

In computer science, linear search or sequential search is a method for finding a particular value in a list, which consists of checking every one of its elements, one at a time and in sequence, until the desired one is found.

Code of Linear search:

/* Returns the index of the value if found, and -1 if not. */

int linear_search(const int arr[], int arrLen, int value_to_be_searched)
{
    // Pass the length explicitly; strlen() works only on C strings.
    for (int i = 0; i < arrLen; i++)
        if (arr[i] == value_to_be_searched)
            return i;
    return -1;
}

Complexity analysis of Linear search

The worst-case cost and the expected cost of linear search are both O(n).
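Both costs can be seen by counting comparisons (an instrumented copy; the counter and the explicit length parameter are my additions): a successful search at index i costs i + 1 comparisons, and an unsuccessful search always costs all n.

```cpp
// Linear search instrumented to count comparisons. An absent value
// forces all n comparisons (the O(n) worst case); a value at index i
// costs i + 1 comparisons.
int linear_search_count(const int arr[], int arrLen, int value, int &comparisons)
{
    comparisons = 0;
    for (int i = 0; i < arrLen; i++)
    {
        comparisons++;
        if (arr[i] == value)
            return i;
    }
    return -1;
}
```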

Binary Search Algorithm

In computer science, a binary search is an algorithm for locating the position of an element in a sorted list. It inspects the middle element of the sorted list: if equal to the sought value, then the position has been found; otherwise, the upper half or lower half is chosen for further searching based on whether the sought value is greater than or less than the middle element. The method reduces the number of elements needed to be checked by a factor of two each time, and finds the sought value if it exists in the list or, if not, determines "not present", in logarithmic time.

Code of binary search

/* Returns the index of value_to_be_searched if found, and -1 if not.
   The array must be sorted in ascending order. */

int BinarySearch(const int container[], int arraySize, const int &value_to_be_searched)
{
    int middle;
    int start = 0;
    int end = arraySize - 1;

    while (start <= end)
    {
        middle = (start + end) / 2;

        if (value_to_be_searched < container[middle])
            end = middle - 1;        // continue in the lower half
        else if (value_to_be_searched > container[middle])
            start = middle + 1;      // continue in the upper half
        else
            return middle;
    }

    return -1;
}

Complexity analysis of Binary search

The worst-case cost of binary search is O(log n), where n is the number of elements in the list.
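A quick usage sketch (a self-contained copy of the routine, so it can run on its own; the midpoint is computed as start + (end − start) / 2, a common guard against integer overflow in start + end):

```cpp
// Self-contained copy of binary search, for demonstration.
// Precondition: container[] is sorted in ascending order.
int binary_search(const int container[], int arraySize, int value)
{
    int start = 0;
    int end = arraySize - 1;
    while (start <= end)
    {
        int middle = start + (end - start) / 2;  // overflow-safe midpoint
        if (value < container[middle])
            end = middle - 1;        // continue in the lower half
        else if (value > container[middle])
            start = middle + 1;      // continue in the upper half
        else
            return middle;
    }
    return -1;  // not present
}
```

On {1, 3, 5, 7, 9, 11}, searching for 7 returns index 3 after only three probes; searching for 4 returns −1.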

References:

http://en.wikiversity.org/

http://en.wikipedia.org/

Slides of Dr. El Sayed El Horbati

Object-Oriented Programming in C++ – Robert Lafore
