Complexity in Data Structures

CS 367
Introduction to Data Structures

Lecture 6

Today’s Agenda:
Complexity of Computations
Linked Lists

What’s this log business?

Log is logarithm.
Remember that an exponent tells us how many times a number is to be multiplied:
   10^3 means 10*10*10
A logarithm tells us what exponent is needed to produce a particular value.
Hence log_10(1000) = 3 since 10^3 = 1000.
 
Fractional exponents (and logs) are allowed

√x = x^0.5
This is because √x * √x = x,
so x^0.5 * x^0.5 = x^1 = x.
Thus log_10(√x) = 0.5 * log_10(x).

What base do we use?

Usually base 2, but …
   it doesn’t really matter.
Logs to different bases are related by a constant factor.
log_2(x) is always 3.32… times bigger than log_10(x),
   because 2^3.32… = 10
   (change of base: log_2(x) = log_10(x) / log_10(2) ≈ 3.32 * log_10(x)).

Many useful algorithms are n log(n) in complexity

This is way better than an n^2 algorithm
(because log(n) grows slowly as n grows).

Example: Giving a Toast

1. Fill the glasses.
       Linear complexity
2. Raise glasses & give the toast.
       Constant time
3. Clink glasses.
       Person 1 clinks with persons 2 to N
       Person 2 clinks with persons 3 to N
       ...
       Person N-1 clinks with person N

Total number of clinks is (n-1)+(n-2)+ … +1+0.
This sums to n*(n-1)/2 = n^2/2 - n/2.
So this step is quadratic.
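
A minimal Java sketch of step 3, counting the clinks directly (the method name countClinks is made up for illustration):

// Count the clinks in step 3: person i clinks with persons i+1 .. n.
static long countClinks(int n) {
    long clinks = 0;
    for (int i = 1; i < n; i++)           // person i
        for (int j = i + 1; j <= n; j++)  // clinks once with each later person
            clinks++;
    return clinks;                        // always equals n*(n-1)/2
}

For n = 10 this returns 45, which matches 10*9/2.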

Big O Notation

This is a notation that expresses the overall complexity of an algorithm.
Only the highest-order term is used; constants, coefficients, and lower-order terms are discarded.
O(1) is constant time.
O(N) is linear time.
O(N^2) is quadratic time.
O(2^N) is exponential time.

The three functions
    4N^2+3N+2,
    300N^2,
    N(N-2)/2
are all O(N^2).

Formal Definition

A function T(N) is O(F(N)) if for some constant c and threshold n_0,
it is the case that T(N) ≤ c*F(N) for all N > n_0.

Example

The function 3n^2-n+3 is O(n^2) with c = 3 and n_0 = 3:

    n    3n^2-n+3    3n^2
    1        5         3
    2       13        12
    3       27        27
    4       47        48
    5       73        75
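
As a quick sanity check (not on the original slide), this small program confirms numerically that 3N^2-N+3 never exceeds 3*N^2 once N > 3:

// Check T(N) = 3N^2 - N + 3 <= c*F(N) = 3N^2 for all tested N > n_0 = 3.
public class BigOCheck {
    public static void main(String[] args) {
        for (int n = 4; n <= 1000; n++) {
            long t = 3L * n * n - n + 3;
            long bound = 3L * n * n;
            if (t > bound)
                System.out.println("bound fails at n = " + n);  // never printed
        }
        System.out.println("3N^2 - N + 3 <= 3N^2 held for all tested N > 3");
    }
}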
 
Complexity in Java Code

Basic operations
(assignment, arithmetic, comparison, etc.):
       Constant time, O(1)

List of statements:
    statement_1;
    statement_2;
    ...
    statement_k;    // k is independent of problem size

If each statement uses only basic operations, complexity is k*O(1) = O(1).

Otherwise, complexity of the list is the maximum of the individual statements.
 
If-Then-Else

if (cond) {
    sequence_1 of statements
} else {
    sequence_2 of statements
}

Assume the conditional requires O(N_c), the then-branch requires O(N_t), and the else-branch requires O(N_e).

Overall complexity is:

   O(N_c) + O(max(N_t, N_e)) =
   O(max(N_c, max(N_t, N_e))) =
   O(max(N_c, N_t, N_e))
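
A small illustrative sketch (the method and its names are invented for this example): the condition is O(1), the then-branch is O(N), and the else-branch is O(1), so the whole statement is O(max(1, N, 1)) = O(N).

// Condition: O(1); then-branch: O(N); else-branch: O(1)  =>  overall O(N).
static int sumOrFirst(int[] a, boolean wantSum) {
    if (wantSum) {                    // O(1) condition
        int sum = 0;
        for (int x : a)               // O(N) then-branch
            sum += x;
        return sum;
    } else {
        return (a.length == 0) ? 0 : a[0];   // O(1) else-branch
    }
}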
 
Basic loops

for (i = 0; i < M; i++) {
    sequence of statements
}

We have M iterations.
If the body’s complexity is O(N_b), the whole loop requires M*O(N_b) = O(M*N_b)
(simplify the O term if possible).
 
Nested Loops

for (i = 0; i < M; i++) {
   for (j = 0; j < L; j++) {
       sequence of statements
   }
}

We have M*L iterations.
If the body’s complexity is O(N_b), the whole loop requires M*L*O(N_b) = O(M*L*N_b)
(simplify the O term if possible).
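
For example (a hypothetical matrix sum, not from the slides), the body below is O(1), so the nested loops take M*L*O(1) = O(M*L):

// O(1) body inside M*L iterations  =>  O(M*L).
static long sumGrid(int[][] grid, int M, int L) {
    long total = 0;
    for (int i = 0; i < M; i++)
        for (int j = 0; j < L; j++)
            total += grid[i][j];   // O(1) body
    return total;
}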
 
Method calls

Suppose f1(k) is O(1) (constant time):

for (i = 0; i < N; i++) {
    f1(i);
}

Overall complexity is N*O(1) = O(N).

Suppose f2(k) is O(k) (linear time):

for (i = 0; i < N; i++) {
    f2(N);
}

Overall complexity is N*O(N) = O(N^2).

Suppose f2(k) is O(k) (linear time):

for (i = 0; i < N; i++) {
    f2(i);
}

"Unroll" the loop. Complexity is:
O(0)+O(1)+ … +O(N-1) = O(0+1+ … +N-1)
 = O(N*(N-1)/2)
 = O(N^2)
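
For concreteness, here is one possible pair of methods with those costs (f1 and f2 are illustrative stand-ins; the slides only state their complexities):

// f1(k) does a fixed amount of work, independent of k: O(1).
static int f1(int k) {
    return 2 * k + 1;
}

// f2(k) loops k times: O(k).
static long f2(int k) {
    long sum = 0;
    for (int i = 0; i < k; i++)
        sum += i;
    return sum;
}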

Number Guessing Game

Person 1 picks a number between 1 and N.
Repeat until number is guessed:
    Person 2 guesses a number
    Person 1 answers "correct", "too high", or "too low"

problem size = N
count: # of guesses
 
Algorithm 1:
 
Guess number = 1
Repeat
    If guess is incorrect,
       increment guess by 1
until correct
Best case: 1 guess
Worst case: N guesses
Average case: N/2 guesses
 
Complexity = O(N)
 
Algorithm 2:
  Guess number = N/2
  Set step = N/4
  Repeat
     If guess is too large, next guess = guess - step
     If guess is too small, next guess = guess + step
     step = step/2 (alternate rounding up/down)
  until correct

Best case: 1 guess
Worst case: log_2(N) guesses

So complexity is O(log N).

Algorithm 2 is way better!
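
A minimal Java sketch of Algorithm 2's halving idea. It tracks the remaining range with low/high bounds rather than the slide's explicit step variable, which avoids the rounding bookkeeping but makes the same O(log N) number of guesses:

// Guess a secret in 1..n by halving the remaining range: at most about log2(n) + 1 guesses.
static int guessCount(int secret, int n) {
    int low = 1, high = n, guesses = 0;
    while (true) {
        int guess = (low + high) / 2;   // guess the middle of what's left
        guesses++;
        if (guess == secret)      return guesses;
        else if (guess > secret)  high = guess - 1;   // "too high"
        else                      low = guess + 1;    // "too low"
    }
}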
 
Returning N Papers to N Students

problem size (N) = # of students
count = # of "looks" at a paper

What is the complexity of each algorithm below?

Algorithm 1:
Call out each name, have student come forward & pick up paper.
best-case:  O(N)
worst-case: O(N)

But, this algorithm “cheats” a bit! Why?

Concurrency!
 
Algorithm 2:
Hand pile to first student; student searches & takes own paper, then passes pile to next student.
best-case: Each of N students “hits” at first search. This is N*O(1) = O(N).
worst-case: N compares, then N-1 compares, etc. Time is O(N)+O(N-1)+ … +O(1) = O(N^2).
 
Algorithm 3:
Sort the papers alphabetically, hand pile to first student who does a binary search, then pass pile to next student.
Sort is O(N log N).
Student 1 takes O(log N), student 2 takes O(log (N-1)), …
Bounded by N*O(log N).
Overall complexity is O(N log N).

Practice with analyzing
complexity
 
For each of the following methods,
determine the complexity.
Assume arrays A and B are each of size
N (i.e., A.length = B.length = N)

method1

void method1(int[] A, int x, int y) {
   int temp = A[x];
   A[x] = A[y];
   A[y] = temp;
}

Constant time per call, thus O(1).

method2

void method2(int[] A, int s) {
  for (int i = s; i < A.length - 1; i++)
      if (A[i] > A[i+1])
         method1(A, i, i+1);
}

Number of iterations is at most N. Each call is O(1), so overall complexity is O(N).

method3

void method3(int[] B) {
   for (int i = 0; i < B.length - 1; i++)
      method2(B, i);
}

Number of iterations is N-1.

method2 does N-1 iterations, then N-2, etc.
This totals to about N^2/2, so overall complexity is O(N^2).

method4

void method4(int N) {
   int sum = 0, M = 1000;
   for (int i = N; i > 0; i--)
      for (int j = 0; j < M; j++)
         sum += j;
}

Since M is a constant, the entire inner loop takes a bounded amount of time; its complexity is O(1).
The outer loop iterates N times, so overall complexity is O(N).

What if M is a parameter?

void method4a(int N, int M) {
   int sum = 0;
   for (int i = N; i > 0; i--)
      for (int j = 0; j < M; j++)
         sum += j;
}

Now the entire inner loop takes O(M) time.
The overall complexity is now O(N*M).

method5

public void method5(int N) {
   int tmp, arr[];
   arr = new int[N];

   for (int i = 0; i < N; i++)
      arr[i] = N - i;

   for (int i = 1; i < N; i++) {
      for (int j = 0; j < N - i; j++) {
         if (arr[j] > arr[j+1]) {
            tmp = arr[j];
            arr[j] = arr[j+1];
            arr[j+1] = tmp;
         }
      }
   }
}

What does this do?

Complexity of method5

Analyze in steps:
1. The call to new is O(1).
2. The first loop iterates N times; the loop body is O(1), so the loop is O(N).
3. The nested loop body iterates N-1 times, then N-2 times, …, 1 time. Total is O(N^2).

Overall complexity is O(1)+O(N)+O(N^2) = O(N^2).

Complexity Caveats

1. Algorithms with the same complexity can differ when constants, coefficients, and lower-order terms are considered.
   Which of these is better?
       2N^2+3N   vs.   10N^2

2. A better complexity means eventually an algorithm will be better. But for small to intermediate problem sizes, an inferior complexity may win out!
   Consider 100*N vs. N^2/100.
   The quadratic complexity is better until N = 10,000.
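
A tiny sketch (not from the slides) that locates that crossover point by brute force:

// Find the first N where 100*N (the better complexity) finally beats N^2/100.
public class Crossover {
    public static void main(String[] args) {
        for (long n = 1; n <= 20000; n++) {
            if (100.0 * n < n * n / 100.0) {
                System.out.println("100*N first beats N^2/100 at N = " + n);  // N = 10001
                break;
            }
        }
    }
}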
 
Primitive vs. Reference Types

Primitive types represent fundamental data values (integers, floating point numbers, characters).
Values are placed directly in memory.
An int value fits in one word (4 bytes):

   [diagram: a 4-byte word holding the value 1234]
 
Reference Types

Data objects (created by new) are pointed to by a reference type.

   [diagram: a reference variable pointing to an int[] object]

Assignment

Assignment of a primitive type copies the actual value.

   A = 1234;   A: [ 1234 ]
   B = A;      B: [ 1234 ]

Assignment of a reference type copies a pointer to an object. The object is not duplicated.

   B = A;
   [diagram: A and B now point to the same object]

Assignment of Reference Types Leads to Sharing

Given A and B referring to the same array,

   A[1] = 1;

causes the change to be visible through both A and B (element 1 is now 1 for both).

   [diagram: A and B share one array object]
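
A short runnable demonstration of that sharing (a sketch, not from the slides):

// After B = A, both names refer to the same array object.
public class SharingDemo {
    public static void main(String[] args) {
        int[] A = new int[] {0, 0, 0};
        int[] B = A;                 // copies the reference, not the array
        A[1] = 1;                    // a write through A ...
        System.out.println(B[1]);    // ... is visible through B: prints 1
    }
}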
 
Use clone() to create a distinct copy of an Object

B = A.clone();
creates:

   [diagram: A and B now point to two separate copies of the object]
 
But clone() makes a copy only of the object itself

Objects accessed by references aren’t copied.

If we clone A we get:

   [diagram: A’ is a copy of A, but A and A’ still point to the same B]

Why isn’t B copied too?

Consider:

   [diagram: A and B]

But you can create your own version of clone() if you wish!
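
A hedged sketch of what "your own version of clone()" might look like when you do want the referenced object copied as well. The classes Wrapper and Data are invented for illustration; they are not from the slides.

// The default shallow copy would share 'data'; these overrides copy it too (a deep copy).
class Data implements Cloneable {
    int value;
    Data(int v) { value = v; }
    @Override
    public Data clone() {
        return new Data(value);            // copy the referenced object itself
    }
}

class Wrapper implements Cloneable {
    Data data;
    Wrapper(Data d) { data = d; }
    @Override
    public Wrapper clone() {
        return new Wrapper(data.clone());  // deep copy: also clone what we point to
    }
}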

Linked Lists

Individual data items, called nodes, are linked together using references:

   [diagram: nodes chained by "next" references]

Advantages:
   Easy to reorder nodes (by changing links).
   Easy to add extra nodes as needed.

Disadvantages:
   Each node needs a "next" field.
   Moving forward is tedious (one node at a time).
   Moving backward is difficult or impossible.
 
Linked List in Java

public class Listnode<E> {

    //*** fields ***
    private E data;
    private Listnode<E> next;

    //*** constructors ***  (why two?)
    public Listnode(E d) {
        this(d, null);
    }
    public Listnode(E d, Listnode<E> n) {
        data = d;
        next = n;
    }

    // methods to access fields
    public E getData() {
        return data;
    }
    public Listnode<E> getNext() {
        return next;
    }

    // methods to modify fields
    public void setData(E d) {
        data = d;
    }
    public void setNext(Listnode<E> n) {
        next = n;
    }
}
 
Example

Create the first node:

Listnode<String> l;
l = new Listnode<String>("ant");

Then add a second node at the end of the list:

l.setNext(
   new Listnode<String>("bat"));
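
A short sketch (not on the slides) of walking the resulting list and printing each node's data:

// Walk the list starting at l and print each element: ant, then bat.
for (Listnode<String> cur = l; cur != null; cur = cur.getNext()) {
    System.out.println(cur.getData());
}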
 
 
Adding a Node into a Linked List

Assume l points to the start of the list, and n points to the node after which to insert.

Create the node to be inserted:

Listnode<String> tmp =
  new Listnode<String>("xxx");

Set the new node's next field:

tmp.setNext( n.getNext() );

Reset n's next field:

n.setNext( tmp );
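
Packaged as a helper method, the two steps above might look like this (a sketch; insertAfter is not a method defined on the slides):

// Insert a new node holding 'item' immediately after node n.
static <E> void insertAfter(Listnode<E> n, E item) {
    Listnode<E> tmp = new Listnode<E>(item, n.getNext());  // new node points at n's old successor
    n.setNext(tmp);                                        // n now points at the new node
}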
 
Removing a Node from a List

Goal:
    Remove node n from list l.
Approach:
    Find n's predecessor in the list.
    Link that predecessor to n's successor.

We search from the head of the list (l) to find n's predecessor:

Listnode<String> tmp = l;
while (tmp.getNext() != n) {
  // find the node before n
  tmp = tmp.getNext();
}

We then reset the predecessor's next field:

tmp.setNext( n.getNext() );
 // remove n from the linked list
 
Special Case Alert

What happens if n is the first element of the list?

It has no predecessor!

How does the code fail?
  while (tmp.getNext() != n) {
    tmp = tmp.getNext();
  }
(tmp walks past the end of the list, becomes null, and the next call to tmp.getNext() throws a NullPointerException.)

How do we handle this?

Add special-case code, or
make sure every node has a predecessor!
Enhanced Code
 
if (l == n) {
  // special case: n is the first node in the list
    l = n.getNext();
} else {
  // general case: find the node before n, then "unlink" n
    Listnode<String> tmp = l;
    while (tmp.getNext() != n) {  // find the node before n
        tmp = tmp.getNext();
    }
    tmp.setNext(n.getNext());
}

Null List Reference vs. Null List

If L is a variable of type Listnode, what does L == null mean?

  L is a null list (size == 0)?
     Maybe not!
  L may be uninitialized (undefined).

These two interpretations are different!
Header Nodes
 
To distinguish between an undefined list
and an empty (or null) list, we can add
an extra 
header
 node at the start of each
list.
All lists now have at least one node. So
an empty list 
isn’t
 equal to null.
Moreover, inserting a node at the start
of a list is easy – it is placed just after
the header node!
The LinkedList Class
 
Let’s implement a class that implements
the 
ListADT
 interface using a linked list
rather than an array.
We’ll use a header node to illustrate its
utility.

Interface definition for ListADT

public interface ListADT<E> {
    void add(E item);
    void add(int pos, E item);
    boolean contains(E item);
    int size();
    boolean isEmpty();
    E get(int pos);
    E remove(int pos);
}
 
public class LinkedList<E>
  implements ListADT<E> {
 
/*  private data declarations  */
 
/*  constructor(s)   */
 
/* interface methods  */
 
}
 
public class LinkedList<E>
  implements ListADT<E> {

    private Listnode<E> head;
    private Listnode<E> tail;
    private int itemCount;

    /* constructor(s) */

    /* interface methods */
}
 
public class LinkedList<E>
  implements ListADT<E> {

    private Listnode<E> head;
    private Listnode<E> tail;
    private int itemCount;

    public LinkedList() {
        itemCount = 0;
        head = new Listnode<E>(null);   // header node: every list has at least this node
        tail = head;
    }

    /* interface methods */
}
 
public class LinkedList<E>
  implements ListADT<E> {

    /* data defs & constructor */

    public int size() {
        return itemCount;
    }

    public boolean isEmpty() {
        return (size() == 0);
    }
}
 
public class LinkedList<E>
  implements ListADT<E> {

    /* data defs & constructor */

    public void add(E item) {
        if (item == null)
            throw new NullPointerException();
        Listnode<E> newNode = new Listnode<E>(item);
        tail.setNext(newNode);
        tail = newNode;
        itemCount++;
    }
}
 
public class LinkedList<E> implements ListADT<E> {

    public void add(int pos, E item) {
        if (item == null)
            throw new NullPointerException();
        if (pos < 0 || pos > itemCount) {
            throw new IndexOutOfBoundsException();
        }
        if (pos == itemCount)
            add(item);               // add(item) updates itemCount itself
        else {
            // Find where to insert
            Listnode<E> ptr = head;
            for (int i = 0; i < pos; i++)
                ptr = ptr.getNext();
            Listnode<E> newNode = new Listnode<E>(item);
            newNode.setNext(ptr.getNext());
            ptr.setNext(newNode);
            itemCount++;             // count it here so it isn't counted twice
        }
    }
}
 
public class LinkedList<E>
  implements ListADT<E> {

    /* data defs & constructor */

    public E get(int pos) {
        // check for bad pos
        if (pos < 0 || pos >= itemCount) {
            throw new IndexOutOfBoundsException();
        }
        Listnode<E> ptr = head.getNext();   // skip the header node
        for (int i = 0; i < pos; i++)
            ptr = ptr.getNext();
        return ptr.getData();
    }
}
 
public class LinkedList<E>
  implements ListADT<E> {

    /* data defs & constructor */

    public boolean contains(E item) {
        // null values are not allowed in the list
        if (item == null)
            return false;
        Listnode<E> ptr = head.getNext();
        while (ptr != null) {
            if (ptr.getData().equals(item))
                return true;
            else
                ptr = ptr.getNext();
        }
        return false;
    }
}
 
public class LinkedList<E>
  implements ListADT<E> {

    /* data defs & constructor */

    public E remove(int pos) {
        // check for bad pos
        if (pos < 0 || pos >= itemCount) {
            throw new IndexOutOfBoundsException();
        }
        // decrease the number of items
        itemCount--;
        Listnode<E> prev = head, target;
        for (int i = 0; i < pos; i++)
            prev = prev.getNext();
        target = prev.getNext();
        prev.setNext(target.getNext());
        if (target.getNext() == null)   // removed the last node, so move tail back
            tail = prev;
        return target.getData();
    }
}
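
A brief usage sketch (not from the slides) exercising the finished class:

// Build a small list and exercise the interface methods.
LinkedList<String> list = new LinkedList<String>();
list.add("ant");
list.add("cat");
list.add(1, "bat");                        // insert in the middle: ant, bat, cat
System.out.println(list.size());           // 3
System.out.println(list.get(1));           // bat
System.out.println(list.contains("cat"));  // true
System.out.println(list.remove(0));        // ant
System.out.println(list.size());           // 2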