Introduction to Big O Notation
There are two parts to measuring efficiency — time complexity and space complexity. Time complexity is a measure of how long the function takes to run in terms of its computational steps. Space complexity has to do with the amount of memory used by the function. This blog will illustrate time complexity with two search algorithms.
~ Simplifying Big O Expressions
~ Constants Don’t Matter
~ Smaller Terms Don’t Matter
Big O is sometimes referred to as the algorithm’s upper bound, meaning that it deals with the worst-case scenario. The best-case scenario doesn’t really tell us anything: it might mean finding our item on the first pass. We use the worst case to remove uncertainty, because the algorithm will never perform worse than we expect.
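To make the best-case/worst-case distinction concrete, here is a small instrumented search (a hypothetical helper written for this post, not part of any library) that counts how many comparisons it makes:

```python
def linear_search_steps(arr, x):
    """Return (index, comparisons made); index is -1 if x is absent."""
    steps = 0
    for i, value in enumerate(arr):
        steps += 1
        if value == x:
            return i, steps
    return -1, steps

print(linear_search_steps([1, 2, 3, 4, 5, 6, 7, 8], 1))  # best case: (0, 1)
print(linear_search_steps([1, 2, 3, 4, 5, 6, 7, 8], 8))  # worst case: (7, 8)
```

The worst case (8 comparisons for 8 elements) is what Big O describes; the lucky first-pass hit tells us nothing about how the algorithm scales.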
Space Complexity
So far, we’ve been focusing on time complexity: how can we analyze the runtime of an algorithm as the size of the input increases?
Sometimes you’ll hear the term auxiliary space complexity, which refers to the space required by the algorithm itself, not including the space taken up by the inputs. When we talk about space complexity here, technically we’ll be talking about auxiliary space complexity.
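As a sketch of the distinction (the function names here are illustrative, not from the post): a running total needs only constant auxiliary space no matter how long the input is, while building a new list needs space proportional to the input.

```python
def total(nums):
    # O(1) auxiliary space: one accumulator, regardless of input size
    s = 0
    for n in nums:
        s += n
    return s

def doubled(nums):
    # O(n) auxiliary space: builds a new list as long as the input
    return [2 * n for n in nums]

print(total([1, 2, 3]))    # 6
print(doubled([1, 2, 3]))  # [2, 4, 6]
```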
Space Complexity in JS
- Most primitives (booleans, numbers, undefined, null) are constant space
- Strings require O(n) space (where n is the string length)
- Reference types are generally O(n), where n is the length (for arrays) or the number of keys (for objects)
Logarithmic Complexity
- Certain searching algorithms have logarithmic time complexity
- Efficient sorting algorithms involve logarithms
- Recursion sometimes involves logarithmic space complexity
Summary
- To analyze the performance of an algorithm, we use Big O Notation
- Big O Notation can give us a high level understanding of the time or space complexity of an algorithm
- Big O Notation doesn’t care about precision, only about general trends (linear? quadratic? constant?)
- The time or space complexity (as measured by Big O) depends only on the algorithm, not the hardware used to run it
Linear Search Example
Let’s look at a simple example.
We will search for the number eight out of a range from 1–8.
Our first strategy is to start with the first number of the array and move up by one until we find our target number. Our algorithm steps would look something like this.
- Start at the beginning of the list
- Compare the current value to our target
- If they match, stop; otherwise, move to the next value
- Repeat until we find the target or reach the end of the list
In round 1, we choose number one; that’s not correct, so we move to round 2 and eliminate number one as an option. Number two is also not correct, and we continue this way until we select number eight.
In our worst-case scenario, this is not very efficient. We have to check every single number in the list until we get to our answer. This method is called Linear Search. The Big O notation for Linear Search is O(N). The complexity is directly related to the size of the inputs — the algorithm takes an additional step for each additional data element.
```python
def linear_search(arr, x):  # input array and target
    for i in range(len(arr)):
        if arr[i] == x:
            return i
    return -1  # return -1 if target is not in the array
```
Binary Search Example
Let’s try a different strategy. The critical part of this strategy is that the list must be in order.
This time we start in the middle of the list. We check to see whether we have chosen the target number. If not, we check whether the number is smaller or larger than our choice. In either case, we eliminate half of the list that doesn’t include our target. Then we select a number in the middle of the remaining values.
- Find the middle of the list — the midpoint
- Compare the midpoint to the target
- If our value and the target match, we stop: we’ve found our target
- If our value is smaller than the target, we make a new list ranging from the midpoint plus one to the largest value.
- If our value is larger than the target, we make a new list ranging from the smallest value to the midpoint minus one.
- Repeat until we find the target, or until we reach the last element and it does not match the target
The midpoint for our first choice is four. The target is greater than four, so we exclude values four and below and create a new list from five to eight. We pick a midpoint value of six and choose again. We continue until only number eight remains.
This strategy is called Binary Search. It is efficient because it eliminates half of the list in each pass. If we were to double our search and look for number 16 in a range of 1–16, it would take us only one more round. Amazingly, if we were to pick a number between one and a million, the worst case would be 20 guesses.
When we get to large numbers like this, we can clearly see how much more efficient this method is than using linear search: 20 guesses vs. 1,000,000 guesses.
We can model this mathematically. The number eight is equivalent to 2 x 2 x 2, which we can also express using exponents: two to the power of three, or 2³.
The inverse of an exponent is a logarithm. Log base 2 of 8 asks how many times you have to multiply two by itself to get eight. As illustrated above, 2 x 2 x 2 = 8, so log₂ 8 = 3.
In general, the worst-case scenario for Binary Search is about log₂(n) + 1 comparisons. The Big O notation for Binary Search is O(log N). In contrast to O(N), which takes an additional step for each additional data element, O(log N) means that the algorithm takes only one additional step each time the data doubles.
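We can check these figures with Python’s standard math module:

```python
import math

# log2(n) asks: how many times must we multiply 2 by itself to reach n?
print(math.log2(8))  # 3.0, since 2 * 2 * 2 = 8

# Worst-case binary-search guesses for a sorted range of n values:
# roughly log2(n), rounded up
for n in [8, 16, 1_000_000]:
    print(n, math.ceil(math.log2(n)))  # 8 -> 3, 16 -> 4, 1,000,000 -> 20
```

This matches the claims above: doubling the range from 8 to 16 adds just one guess, and a million values need only about 20.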
```python
def binary_search3(arr, target):
    left = 0
    right = len(arr) - 1
    while left <= right:
        mid_point = (left + right) // 2
        if target < arr[mid_point]:
            right = mid_point - 1
        elif target > arr[mid_point]:
            left = mid_point + 1
        else:
            return mid_point
    return -1  # target is not in the array
```
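As a cross-check, Python’s standard library ships `bisect`, which applies the same halving strategy to a sorted list:

```python
import bisect

arr = [1, 2, 3, 4, 5, 6, 7, 8]
i = bisect.bisect_left(arr, 8)   # index where 8 would be inserted
if i < len(arr) and arr[i] == 8:
    print(i)  # 7: the target's position, found in O(log N) comparisons
```

Note the same prerequisite applies: `bisect` only gives correct answers on a list that is already sorted.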
Note: Even though Binary Search is far more efficient than Linear Search, you wouldn’t always choose it in every example. The reason for this is the very first qualification. Binary Search requires that the list is in order. Sorting a list introduces its own complexity — so in some cases, it may be more efficient to use Linear Search rather than sorting the list first.
Writing Big O Notation
When we write Big O notation, we look for the fastest-growing term as the input gets larger and larger. We can simplify the equation by dropping constants and any non-dominant terms. For example, O(2N) becomes O(N), and O(N² + N + 1000) becomes O(N²).
Binary Search is O(log N), which is less complex than Linear Search. There are many more complex algorithms. A common example of a quadratic algorithm, or O(N²), is a nested for loop. In a nested loop, we iterate through the entire data set in an outer loop; then, for each element, we iterate through the data again in an inner loop. This is N x N, or N².
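The nested-loop pattern can be sketched as follows (a minimal illustrative example, not from the original post):

```python
def count_pairs(data):
    """Count all ordered pairs in a list: a deliberately O(N^2) routine."""
    count = 0
    for a in data:          # outer loop: N iterations
        for b in data:      # inner loop: N iterations per outer pass
            count += 1
    return count

print(count_pairs([1, 2, 3]))  # 9, i.e. N * N for N = 3
```

Doubling the input from 3 to 6 elements quadruples the work (9 to 36 steps), which is the signature of quadratic growth.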
Here’s how these Big O runtimes look on a chart. The examples in this blog are two of the least complex notations.
Summary
There are often many choices of writing code to get to a solution. Understanding Big O notation is important in writing algorithms. It helps you determine when your algorithm is getting faster or slower. You can also compare different methods and choose the most efficient.
Additional Resources
Treehouse, via freeCodeCamp, has a great YouTube course to take you from beginner to understanding Big O notation. Warning: it is over five hours long. Treehouse Tutorial
Note: I’ve drawn on references from other blogs on the Internet.
Credit: Andrew Jamieson + my own research
Keep going; it will make sense.
Thank you for reading :))
Subscribe to my YouTube channel, where I share remote job-hunting tips and opportunities → https://youtube.com/@AvinashVagh
Get Globally Full Remote Job Hiring Company’s details in your inbox | Remote Profile Newsletter → https://remoteprofile.beehiiv.com/subscribe
If you like my blog, you can connect with me on LinkedIn, follow me on Twitter, and check out my Hashnode blog.
Data Structures and Algorithms with JavaScript: Bringing classic computing approaches to the Web
As an experienced JavaScript developer moving to server-side programming, you need to implement classic data structures and algorithms associated with conventional object-oriented languages like C# and Java. This practical guide shows you how to work hands-on with a variety of storage mechanisms―including linked lists, stacks, queues, and graphs―within the constraints of the JavaScript environment. Determine which data structures and algorithms are most appropriate for the problems you’re trying to solve, and understand the tradeoffs when using them in a JavaScript program. An overview of the JavaScript features used throughout the book is also included.
First published January 1, 2014
About the author
Michael McMillan
Community Reviews
Even though it’s a great book for explaining different Data Structures and Algorithms topics, it has all sorts of bugs in the code. It was a good exercise fixing the bugs and writing my own functions, but it was annoying at times when I didn’t quite understand the explanation and wanted to understand it better from the code. Sometimes the bugs were major ones.
Having just finished this book, I recommend it as an introduction to data structures and algorithms for people with no CS background, or for professional JavaScript developers filling in a gap or looking for a refresher. The author shows how to implement these data structures and algorithms using JS, even if he isn’t always clear why JS is better than other languages. That said, the book contains a few typos and misleading diagrams, and some of the examples are counter-intuitive.
Short version: using JavaScript, McMillan introduces a bunch of fundamental computer science concepts around how data are shaped, and how we work with those data (e.g., searching and sorting). The choice of JavaScript is a practical one because of its ubiquity, but it’s probably not the best choice for most of these. He does some things that are strange (and a few that are arguably «wrong») — but overall, it’s decent, especially if you’re a professional web developer with a few years’ experience of «doing this» but otherwise lacking that computer science background.
If you’re like me and like to follow along with the code, you’ll find there are a few bugs/typos in the example code. Another thing I didn’t particularly like was the fact that the book still uses the “var” keyword and other ES5 features. Other than that I think it’s a good introduction to data structures and algorithms with JavaScript for people with no CS background.
This book is really short. It doesn’t go into detail about how one will actually solve problems using the algorithms. If you are looking for a book with code that implements the data structures, along with some simple tests to prove that it’s working, then this is the book for you.
The JavaScript syntax and environment used are dated, especially when compared to today’s standards. Some examples are counter-intuitive, some plain ugly. If you’re just starting out, you should know that there are better books out there on basic data structures and algorithms.
The introduction of the JS Shell to try out the JavaScript examples helped me learn JavaScript more efficiently, without relying on a browser to look at the output.
Good book to learn the very basics; however, it has multiple errors, and it really should use a more modern JS environment such as Node.js instead of Mozilla SeaMonkey.
For a web developer versed in JS, but without a CS background, this is useful. To a degree. The actual overall, high-level content of the book is good — the explanations of the structures are clear, the supporting diagrams are fine, and the code works. But there are a few fairly big caveats. The code is strangely structured, and often seems to ignore JS best practices. It seems to completely ignore functional idioms. It reimplements some native structures in slightly baffling ways (e.g. Set). It doesn’t really go into any depth on use cases: why certain structures are extremely powerful and when they are best used.
Critically, the text and code have a cut-and-paste feel, and are badly edited. For example, in the chapter on graphs, a Vertex constructor is described in some detail. The code used to describe it is missing one of the properties described in the preceding paragraph. That fact is irrelevant, though, as the Vertex object is never used in the rest of the chapter. In addition to that, the depth-first function is written using a loop construct that was a [long-deprecated, now completely unsupported and barely-implemented-at-the-time] language addition for E4X (the extension to JS that gave it native XML support). There is a helluva lot of this throughout the book. It was a useful read when skimmed, but the actual code used as examples is of extremely poor quality.