I was following the MIT Python course 6.0001 (Fall 2016), and in one of the lectures the professor presented this function, which returns True if an element e is in a sorted list L and False otherwise:
def bisect_search1(L, e):
    if L == []:
        return False
    elif len(L) == 1:
        return L[0] == e
    else:
        half = len(L)//2
        if L[half] > e:
            return bisect_search1(L[:half], e)
        else:
            return bisect_search1(L[half:], e)
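For reference, a quick sanity check I ran against it (the test list and values are my own; the list has to be pre-sorted for the search to work):

# Sanity check with values I picked myself; bisect_search1
# assumes L is already sorted in ascending order.
L = [1, 3, 5, 8, 13, 21]
print(bisect_search1(L, 8))   # True  -> 8 is in the list
print(bisect_search1(L, 4))   # False -> 4 is not in the list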
From the next lecture slide, I understand that the bisection search calls are O(log n) because the input is halved on every recursive call.
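To convince myself of that, I wrapped the same logic in a small depth counter (the depth parameter and the counted variant are my own additions, not from the course):

import math

def bisect_search1_counted(L, e, depth=0):
    # Same logic as bisect_search1, but also reports the recursion depth,
    # i.e. how many times the input was halved before the search stopped.
    if L == []:
        return False, depth
    elif len(L) == 1:
        return L[0] == e, depth
    else:
        half = len(L) // 2
        if L[half] > e:
            return bisect_search1_counted(L[:half], e, depth + 1)
        else:
            return bisect_search1_counted(L[half:], e, depth + 1)

found, depth = bisect_search1_counted(list(range(1024)), 700)
print(found, depth, math.log2(1024))  # depth comes out as log2(n) = 10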
What I still don't understand from the slide are these two points (I try to make them concrete with the sketch after the list):
- O(n) for each bisection search call to copy list, this is the cost to set up each call, so do this for each level of recursion
- If we are really careful, note that the length of the list to be copied is also halved on each recursive call, turns out that the total cost to copy is O(n) and this dominates the log n cost due to the recursive calls
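As far as I can tell, the second bullet is about the series n/2 + n/4 + ... + 1, which sums to less than n, so all the copying together would be O(n). Here is a rough instrumented variant I used to check that (the copied bookkeeping is mine, not from the course):

def bisect_search1_instrumented(L, e, copied=None):
    # Same search as bisect_search1, but copied[0] accumulates how many
    # elements the slice copies move in total across all recursive calls.
    if copied is None:
        copied = [0]
    if L == []:
        return False, copied[0]
    elif len(L) == 1:
        return L[0] == e, copied[0]
    else:
        half = len(L) // 2
        if L[half] > e:
            copied[0] += half           # L[:half] copies half the elements
            return bisect_search1_instrumented(L[:half], e, copied)
        else:
            copied[0] += len(L) - half  # L[half:] copies the other part
            return bisect_search1_instrumented(L[half:], e, copied)

found, total_copied = bisect_search1_instrumented(list(range(1024)), 700)
print(found, total_copied)  # total_copied is about n, nowhere near n*log(n)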
Things that I still don't understand are:
- Do slicing part of a list and copying a whole list have the same big-O complexity, O(n)? (See the timing sketch after this list.)
- If yes, then I understand why the function would be O(n log n). But what does "turns out that the total cost to copy is O(n) and this dominates the log n cost due to the recursive calls" even mean? Does it mean the function is actually O(n)?
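For question 1, here is the rough timing sketch mentioned above (the sizes and repeat count are arbitrary choices of mine), comparing a half-slice against a full copy as n grows:

import timeit

for n in (10_000, 100_000, 1_000_000):
    L = list(range(n))
    t_slice = timeit.timeit(lambda: L[:n // 2], number=100)
    t_copy = timeit.timeit(lambda: list(L), number=100)
    # Both times grow roughly linearly in n, and the half-slice takes
    # about half as long as the full copy.
    print(n, t_slice, t_copy)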
I'm sorry if a similar question has been asked before and this seems beginner-ish.
https://stackoverflow.com/questions/66930172/questions-about-big-o-notation-of-recursive-bisection-search