- inheritance and the virtual keyword

  [virtual methods]
  class A inherits from class B (C++ syntax: class A : public B)
    --> A has B's member variables ("instance variables"), including private ones
        --> but A can't access the private ones
    --> A has B's member functions ("methods"), including private ones
        --> but A can't access the private ones
    --> A's constructor calls B's constructor first
    --> Java: all methods are what C++ calls "virtual"
            public class B { public void foo() {...} }
            public class A extends B { public void foo() {...} }
            B bar = new A();
            bar.foo()  --> in Java, always calls A.foo
    --> In C++: if methods are declared virtual, works as above:
            class B { public: virtual void foo(); };
            class A : public B { public: void foo(); };
    --> In C++: if methods are NOT declared virtual, only calls WHAT THE TYPE SAYS:
            class B { public: void foo(); };
            class A : public B { public: void foo(); };
            A *original = new A();
            B *bar = original;
            bar->foo();      // always calls B::foo()
            original->foo(); // calls A::foo()
    --> what matters is whether the TYPE YOU ARE USING has the virtual method

  [virtual inheritance] - including diamond inheritance
    --> only matters when inheriting the same base class multiple times
    --> two choices:

    choice 1: duplicate the base class [no virtual keyword / default]
      the idea is that the base class is helping implement its immediate child,
      e.g. "linked list node for the child's type"
          class Base { public: int a; };
          class ChildOne : public Base { public: void foo(); };
          class ChildTwo : public Base { public: void bar(); };
          class GrandChild : public ChildOne, public ChildTwo {};
          GrandChild {
              ChildOne's Base::a  // what foo() accesses when it uses 'a'
              ChildTwo's Base::a  // what bar() accesses when it uses 'a'
          }
          GrandChild *p;
          ChildOne *p_as_one = (ChildOne*) p;
          p_as_one->a  // accesses ChildOne's Base::a

    choice 2: share the base class [virtual keyword]
      the base class is a common interface used by anyone using the class, e.g.
      "comparable", "serializable"
          class Base { public: int a; };
          class ChildOne : public virtual Base { public: void foo(); };
          class ChildTwo : public virtual Base { public: void bar(); };
          class GrandChild : public ChildOne, public ChildTwo {};
          GrandChild {
              single Base::a  // what both foo() and bar() access when they use 'a'
          }

- types of pointers
  SomeType *x;   "x is a pointer to SomeType"
      in memory: it's just a memory address, but...
      sizeof(*x)  --> uses the type it points to
      x[10]       --> uses the type it points to
                      (skip ahead sizeof(SomeType)*10 bytes in memory)
      (*x).someMethod()        --> uses the type
      (*x).someMemberVariable  --> uses the type
  SomeType **x;  "x is a pointer to (a pointer to SomeType)"
      sizeof(*x)  --> sizeof(SomeType*), "size of a pointer to SomeType"
      (*x).someMethod()  --> ERROR (pointers don't have methods)
  SomeType *&x;  NOT a pointer: "x is a reference to (a pointer to SomeType)"
      closest thing to a pointer to a pointer

      int foo() {
          int x;                  // <-- stored on the stack or in a register
          int *p;                 //  "      "    "    "    "   "  "    "
          int *dynX = new int;    // dynX is stored on the stack/in a register;
                                  // *dynX ("what dynX points to") is on the heap
          int **dynP = new int*;  // same as dynX
      }

- "smart pointers"
  classes which act like pointers
  wrap a pointer and try to help you manage it better
      unique_ptr: acts like a pointer, but deletes the object from its destructor
      shared_ptr: acts like a pointer, but counts the number of shared_ptrs
                  and deletes from its destructor (once the count becomes 0)
  all work by overloading operator* and operator->
      so you can use a unique_ptr, etc. object like a pointer
  have special constructors/destructors

- memory leaks -- where they happen
  rule: must delete what is newed
  memory leaks are always with new/malloc'd/etc.
  memory --> heap
  stack memory: goes away when the function returns
  typical problems are:
      objects which are used in multiple places --- so it's unclear where to delete
      changing where a pointer points without deleting the object it used to point to
      just forgetting delete, or not writing a destructor

- decimal to floating point
  step 1: convert decimal to binary
      2.125 - 2 = 0.125;  0.125 - 0.125 = 0
      10.001  <-- base-2 conversion of 2.125
      ^  ^^^
      repeatedly try subtracting 2^k, 2^{k-1}, 2^{k-2}, ...,
      keeping a 1 bit whenever the power isn't bigger than what's left
  step 2: make scientific notation, 1.mantissa * 2^{exponent}
      need to shift over one:
      10.001 --> 1.0001 x 2^1
                          ^-- amount of shifting
  step 3: adjust for bias (e.g. single precision: 127)
      stored exponent x satisfies x - 127 = 1, so x = 128:
      1.0001 x 2^(128 - 127)
  step 4: convert the parts to floating point
      sign / exponent bits (SP: 8) / mantissa bits (SP: 23)
      [MSB] 0 10000000 0001 0000 0000 0000 0000 000 [LSB]
            ^ 0 means +, 1 means -
  alternate approach: write as a fraction of the form X/2^Y
      2.125   8   17
      ----- * - = --
        1     8    8
      then convert to 2^Z * X/2^Y' where 1 <= X/2^Y' < 2
                            ^^^^^^ what we want the "1.mantissa" part of
                                   scientific notation to use
      2   17                      10001
      - * -- --> 17/16 * 2^1 --> ----- * 2^1 --> 1.0001 * 2^1
      1   16                     10000

- adjacency lists versus matrices
  - adjacency list: Map from vertex name to list of adjacent vertices
      (the Map might be an Array, e.g. if names are 0, 1, 2, ..., n)
  - adjacency matrix: vertex names are 0, 1, 2, ..., n-1
      boolean Matrix[N][N]
      Matrix[i][j] == true iff there is an edge from i to j
  extension for both: storing weights, etc. in addition to vertex names/booleans
  - adjacency list:
      more compact if # edges << (# vertices)^2 (~ the max possible number of edges)
      iterating takes time proportional to # edges + # vertices
      checking for incoming edges requires iterating through all the lists
          (if needed for real: often create a "reverse adjacency list")
  - adjacency matrix:
      more compact if # edges ~~ on the order of (# vertices)^2
          (practical issues with lists: linked-list pointers, etc.)
      as easy to check in-edges as out-edges
      iteration takes time proportional to (# vertices)^2

- shortest path algorithms
  - Dijkstra's algorithm
      breadth-first search is a special case of this (all weights = 1)
      basic idea:
          start with a vertex; say its distance is zero
          repeatedly process the unprocessed vertex with the lowest distance
          every time we process a vertex:
              iterate through the vertex's edges
              use each edge to update the distance from the starting vertex
              to the edge's destination:
                  new distance = current vertex's distance + edge weight ("distance")
                  (only update if the new distance is less)
      time complexity:
          iterate through all vertices -- set initial distances
          iterate through all vertices, removing each from the priority queue
          iterate through (essentially) all edges -- to update distances
          updating distances -- priority queue decreaseKey
          ^^^^^^^^^^^^^^^^^^ ends up being asymptotically slowest
          O(|E| + |V| log |V|)    <-- with the best possible priority queue
                                      (e.g. a Fibonacci heap: O(1) decreaseKey)
          O((|V| + |E|) log |V|)  <-- with a binary heap priority queue
                                      (deleteMin and decreaseKey each cost log |V|)

- O/Theta/Omega --- usefulness of each type
  O -- upper bound
      usually much easier to find
      if something is O(X) and X is "good enough", it's good enough
      often confused with/used when people really mean big-Theta
          (or something like big-Theta)
  Omega -- lower bound
      doesn't practically tell you much, except that you might need to reject something
      good for knowing whether some idea can get any better
      often: trivial lower bounds
          median of an unsorted list is Omega(n) --- have to examine every element
          --> if you have a Theta(n) or O(n) median algorithm,
              no sense in trying to do better
  Theta --- "tight" bound
      tells the most about the complexity, but is the hardest to find