Math Fundamentals: Matrices and Vectors in EECS 442

Lecture 6: Math Review II
Administrative
HW0 due tomorrow, 1/29 11:59pm
HW1 due 1 week from tomorrow, 2/5 11:59pm
Last Time: Floating Point Math
IEEE 754 Single Precision / Single / float32: 8-bit exponent (up to 2^127, ~10^38), 23-bit fraction (~7 decimal digits)
IEEE 754 Double Precision / Double / float64: 11-bit exponent (up to 2^1023, ~10^308), 52-bit fraction (~15 decimal digits)
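A minimal sketch (my own example, not from the slides) of these precision limits in numpy:
import numpy as np
x = 0.1234567891234567
print(np.float32(x))              # ~0.12345679 – only ~7 significant decimal digits survive
print(np.float64(x))              # 0.1234567891234567 – ~15-16 digits survive
print(np.finfo(np.float32).max)   # ~3.4e38
print(np.finfo(np.float64).max)   # ~1.8e308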
Last Time: Vectors
Scale (vector, scalar → vector)
Add (vector, vector → vector)
Magnitude (vector → scalar)
Dot product (vector, vector → scalar)
Dot products are projection / angles
Cross product (vector, vector → vector)
Vectors facing the same direction have cross product 0
You can never mix vectors of different sizes
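A minimal numpy sketch (my own example vectors) of the operations above:
import numpy as np
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(2.0 * a)             # scale: (vector, scalar) -> vector
print(a + b)               # add: (vector, vector) -> vector
print(np.linalg.norm(a))   # magnitude: vector -> scalar
print(np.dot(a, b))        # dot product: (vector, vector) -> scalar
print(np.cross(a, 2 * a))  # parallel vectors -> cross product [0. 0. 0.]
# a + np.array([1.0, 2.0]) would raise an error: you can never mix sizes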
Matrices
Horizontally concatenate n m-dim column vectors and you get an mxn matrix A (here 2x3)
Watch out: In math, it's common to treat a D-dim vector as a Dx1 matrix (column vector); in numpy these are different things
Vertically concatenate m n-dim row vectors and you get an mxn matrix A (here 2x3)
Transpose: flip rows / columns. (3x1)^T = 1x3
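A minimal sketch (my own example) of building a matrix from column vectors and of the vector-vs-matrix distinction above:
import numpy as np
c1, c2, c3 = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
A = np.stack([c1, c2, c3], axis=1)    # horizontally concatenate three 2-dim columns
print(A.shape)                        # (2, 3)
print(A.T.shape)                      # transpose flips rows/columns: (3, 2)
v = np.array([1.0, 2.0, 3.0])         # D-dim vector: shape (3,)
col = v.reshape(3, 1)                 # Dx1 matrix (column vector): shape (3, 1)
print(v.shape, col.shape)             # (3,) (3, 1) – different things in numpy
print(col.T.shape)                    # (3x1)^T = 1x3: (1, 3)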
Matrix-vector Product
y = Ax is a linear combination of the columns of A
Each entry of y is the dot product between a row of A and x
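A minimal sketch (my own example) showing both views of y = Ax:
import numpy as np
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])                        # 2x3
x = np.array([1.0, 0.5, 2.0])
y = A @ x
y_cols = x[0]*A[:, 0] + x[1]*A[:, 1] + x[2]*A[:, 2]    # mix of columns of A
y_rows = np.array([np.dot(A[0], x), np.dot(A[1], x)])  # dot products with rows of A
print(np.allclose(y, y_cols), np.allclose(y, y_rows))  # True True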
Matrix Multiplication
Generally: A (m x n) times B (n x p) yields the product AB (m x p)
Each entry (AB)ij is the dot product of row i of A with column j of B
Yes – in A, I'm referring to the rows, and in B, I'm referring to the columns
Matrix Multiplication
Dimensions must match
(Associative): ABx = A(Bx) = (AB)x
(Not Commutative): ABx ≠ (BA)x ≠ (BxA)
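A minimal sketch (my own example) checking these properties numerically:
import numpy as np
A = np.random.rand(2, 3)
B = np.random.rand(3, 4)
x = np.random.rand(4)
print(np.allclose(A @ (B @ x), (A @ B) @ x))   # associative: True
C, D = np.random.rand(3, 3), np.random.rand(3, 3)
print(np.allclose(C @ D, D @ C))               # not commutative: almost always False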
Two uses for Matrices
1. Storing things in a rectangular array (e.g. images)
   Typical operations: element-wise operations, convolution (which we'll cover later)
   Atypical operations: almost anything you learned in a math linear algebra class
2. A linear operator that maps vectors to another space (Ax)
   Typical/Atypical: reverse of above
 
Images as Matrices
Suppose someone hands you this matrix.
What's wrong with it? No contrast!
Contrast: Gamma Curve
A typical way to change the contrast is to apply a nonlinear correction: new value = (pixel value)^γ. The quantity γ controls how much contrast gets added.
[Figure: gamma curve mapping the old 10%, 50%, and 90% percentile values to new ones.]
Now the darkest regions (10th pctile) are much darker than the moderately dark regions (50th pctile).
Contrast: Gamma Correction
Phew! Much Better.
Implementation
Python+Numpy (right way): imNew = im**expFactor
Python+Numpy (slow way – why?):
imNew = np.zeros(im.shape)
for y in range(im.shape[0]):
    for x in range(im.shape[1]):
        imNew[y,x] = im[y,x]**expFactor
Elementwise Operations
Element-wise power (beware notation): raise each entry of A to the power k
Hadamard product / element-wise multiplication: multiply corresponding entries of A and B
Element-wise division: divide corresponding entries of A and B
Sums Across Axes
Suppose we have an Nx2 matrix A: summing over axis 1 gives an N-dim vector of row sums; summing over axis 0 gives a 2-dim vector of column sums.
Note – libraries distinguish between N-D column vector and Nx1 matrix.
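A minimal sketch (my own example) of axis sums and the (N,) vs (N, 1) distinction:
import numpy as np
A = np.arange(8.0).reshape(4, 2)                  # Nx2 matrix, N = 4
print(np.sum(A, axis=1).shape)                    # (4,)  – N-dim vector of row sums
print(np.sum(A, axis=1, keepdims=True).shape)     # (4, 1) – Nx1 matrix
print(np.sum(A, axis=0).shape)                    # (2,)  – column sums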
Operations they don't teach
You probably saw matrix addition: A + B adds corresponding entries. What about A + e, where e is a scalar? Each entry of A gets e added to it.
Broadcasting
If you want to be pedantic and proper, you expand e by multiplying a matrix of 1s (denoted 1).
Many smart matrix libraries do this automatically. This is the source of many bugs.
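A minimal sketch (my own example): the "pedantic and proper" expansion versus what numpy broadcasts automatically:
import numpy as np
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
e = 10.0
expanded = A + e * np.ones(A.shape)       # explicitly expand e with a matrix of 1s
broadcast = A + e                         # numpy broadcasts the scalar automatically
print(np.allclose(expanded, broadcast))   # True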
Broadcasting Example
Given: an nx2 matrix P and a 2D column vector v. Want: the nx2 difference matrix D, whose row i is row i of P minus v (as a row).
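A minimal sketch (my own data) of this broadcasted difference:
import numpy as np
n = 5
P = np.random.rand(n, 2)    # nx2 matrix
v = np.random.rand(2, 1)    # 2D column vector, shape (2, 1)
D = P - v.T                 # (n, 2) - (1, 2) broadcasts to (n, 2)
D_loop = np.zeros_like(P)   # loop version for comparison
for i in range(n):
    D_loop[i] = P[i] - v[:, 0]
print(np.allclose(D, D_loop))   # True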
Broadcasting Rules
Suppose we have numpy arrays x and y. How will they broadcast?
1. Write down the shape of each array as a tuple of integers:
   For example: x: (10,)    y: (20, 10)
2. If they have different numbers of dimensions, prepend with ones until they have the same number of dimensions:
   For example: x: (10,)   y: (20, 10)   →   x: (1, 10)   y: (20, 10)
3. Compare each dimension. There are 3 cases:
   (a) Dimensions match. Everything is good.
   (b) Dimensions don't match, but one is 1. "Duplicate" the smaller array along that axis to match.
   (c) Dimensions don't match, and neither is 1. Error!
Broadcasting Examples
  27
x = np.ones(10, 20)
y = np.ones(20)
z = x + y
print(z.shape)
 
x = np.ones(10, 20)
y = np.ones(10, 1)
z = x + y
print(z.shape)
 
x = np.ones(10, 20)
y = np.ones(10)
z = x + y
print(z.shape)
 
x = np.ones(1, 20)
y = np.ones(10, 1)
z = x + y
print(z.shape)
 
(10,20)
 
ERROR
 
(10,20)
 
(10,20)
Tensors
Scalar: Just one number
Vector: 1D list of numbers
Matrix: 2D grid of numbers
Tensor: N-dimensional grid of numbers
(Lots of other meanings in math, physics)
Broadcasting with Tensors
x = np.ones(30)
y = np.ones((20, 1))
z = np.ones((10, 1, 1))
w = x + y + z
print(w.shape)   # (10, 20, 30)
The same broadcasting rules apply to tensors with any number of dimensions!
Vectorization
Writing code without explicit loops: use broadcasting, matrix multiply, and other (optimized) numpy primitives instead
Vectorization Example
Suppose I represent each image as a 128-dimensional vector. I want to compute all the pairwise distances between {x1, ..., xN} and {y1, ..., yM} so I can find, for every xi, the nearest yj.
Identity: ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x·y, so ||x - y|| = (||x||^2 + ||y||^2 - 2 x·y)^(1/2)
Compute an Nx1 vector of norms (can also do Mx1), and an NxM matrix of dot products.
Vectorization Example
Get in the habit of thinking about shapes as tuples. Suppose X is (N, D), Y is (M, D).
Numpy code:
XNorm = np.sum(X**2, axis=1, keepdims=True)      # shape (N, 1)
YNorm = np.sum(Y**2, axis=1, keepdims=True)      # shape (M, 1)
D = (XNorm + YNorm.T - 2*np.dot(X, Y.T))**0.5    # (N, 1) + (1, M) - (N, M) broadcasts to (N, M)
*May have to make sure the quantity under the square root is at least 0 (sometimes roundoff issues happen)
Does Vectorization Matter?
Computing pairwise distances between 300 and 400 128-dimensional vectors:
1. for x in X, for y in Y, using native python: 9s
2. for x in X, for y in Y, using numpy to compute distance: 0.8s
3. vectorized: 0.0045s (~2000x faster than 1, 175x faster than 2)
Expressing things in primitives that are optimized is usually faster
Even more important with special hardware like GPUs or TPUs!
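A rough timing sketch (my own benchmark code, not the course's; absolute times depend on the machine):
import numpy as np
import time
N, M, D_dim = 300, 400, 128
X = np.random.rand(N, D_dim)
Y = np.random.rand(M, D_dim)
t0 = time.time()
D_loop = np.zeros((N, M))
for i in range(N):                     # loop over all pairs
    for j in range(M):
        D_loop[i, j] = np.sqrt(np.sum((X[i] - Y[j])**2))
t1 = time.time()
XNorm = np.sum(X**2, axis=1, keepdims=True)
YNorm = np.sum(Y**2, axis=1, keepdims=True)
D_vec = np.maximum(XNorm + YNorm.T - 2*np.dot(X, Y.T), 0)**0.5   # clamp roundoff below 0
t2 = time.time()
print(np.allclose(D_loop, D_vec))
print("loop: %.4fs   vectorized: %.4fs" % (t1 - t0, t2 - t1))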
Linear Algebra
Linear Independence
A set of vectors is linearly independent if you can't write one as a linear combination of the others.
For the example vectors a, b, c, x shown on the slide:
Is the set {a,b,c} linearly independent?
Is the set {a,b,x} linearly independent?
Max # of independent 3D vectors?
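A minimal numeric sketch (my own example vectors, not the ones on the slide) using matrix rank to answer questions like these:
import numpy as np
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
c = np.array([0.0, 0.0, 1.0])
x = 2*a + 3*b                    # a linear combination of a and b
print(np.linalg.matrix_rank(np.stack([a, b, c], axis=1)))   # 3 – independent
print(np.linalg.matrix_rank(np.stack([a, b, x], axis=1)))   # 2 – dependent
# At most 3 vectors in 3D can be linearly independent.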
Span
Span: all linear combinations of a set of vectors
For example, Span({[0,2]}) = the vertical line through the origin = {λ(0,1) : λ ∈ ℝ}
Is blue in {red}'s span? (vectors shown in figure)
Span({v1, v2}) = ? (for the two vectors shown in the figure)
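A minimal numeric sketch (my own vectors, not the ones in the figure) for testing whether one vector lies in the span of others via least squares:
import numpy as np
red = np.array([[0.0], [2.0]])     # spanning vector(s) as columns of a matrix
blue = np.array([1.0, 1.0])
coeffs = np.linalg.lstsq(red, blue, rcond=None)[0]
print(np.allclose(red @ coeffs, blue))   # False: blue is not in the span of red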
Matrix-Vector Product
Right-multiplying A by x mixes the columns of A according to the entries of x
The output space of f(x) = Ax is constrained to be the span of the columns of A.
Can't output things you can't construct out of your columns
An Intuition
x – knobs on machine (e.g., fuel, brakes)
y – state of the world (e.g., where you are)
A – machine (e.g., your car)
Linear Independence
Suppose the columns of a 3x3 matrix A are not linearly independent (c1, αc1, c2 for instance).
Then y = Ax = x1*c1 + x2*(α*c1) + x3*c2 = (x1 + α*x2)*c1 + x3*c2
 
Linear Independence Intuition
Knobs of x are redundant. Even if y has 3 outputs, you can only control it in two directions
Linear Independence
Recall: y = (x1 + α*x2)*c1 + x3*c2, so any x with x1 + α*x2 = 0 and x3 = 0 gives y = 0.
An infinite number of non-zero vectors x can map to the zero vector y.
This set is called the right null-space of A.
What else can we cancel out?
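A minimal sketch (my own example, with α = 2) of a whole line of non-zero vectors x being sent to zero:
import numpy as np
c1 = np.array([1.0, 2.0, 3.0])
c2 = np.array([0.0, 1.0, 0.0])
A = np.stack([c1, 2*c1, c2], axis=1)    # dependent columns: c1, 2*c1, c2
for t in [1.0, -3.0, 0.5]:
    x = np.array([-2*t, t, 0.0])        # cancels the first two columns
    print(A @ x)                        # [0. 0. 0.] every time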
Rank
Rank of an nxn matrix A – the number of linearly independent columns (or rows) of A / the dimension of the span of the columns
Matrices with full rank (n x n, rank n) behave nicely: they can be inverted, span the full output space, and are one-to-one.
Matrices with full rank are machines where every knob is useful and every output state can be made by the machine
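A minimal sketch (my own example) contrasting a full-rank matrix with a rank-deficient one:
import numpy as np
full = np.diag([1.0, 2.0, 3.0])
c1 = np.array([1.0, 2.0, 3.0])
deficient = np.stack([c1, 2*c1, np.array([0.0, 1.0, 0.0])], axis=1)
print(np.linalg.matrix_rank(full))        # 3 – full rank, invertible
print(np.linalg.matrix_rank(deficient))   # 2 – not full rank
print(np.linalg.inv(full))                # fine
# np.linalg.inv(deficient) would raise LinAlgError: Singular matrix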
Matrix Inverses
Symmetric Matrices
Special Matrices: Rotations
Every row is linearly independent
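A minimal sketch (my own example, with a hypothetical angle) of a 2D rotation matrix:
import numpy as np
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.linalg.matrix_rank(R))            # 2 – every row is linearly independent
print(np.allclose(R.T @ R, np.eye(2)))     # True – R is orthogonal, so its inverse is R.T
print(np.allclose(np.linalg.inv(R), R.T))  # True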
Next Time:
More Linear Algebra
+ Image Filtering