Graph Theory for Image Processing

Fitting: Voting and the Hough Transform
April 23rd, 2015
Yong Jae Lee
UC Davis
Last time: Grouping
 
Bottom-up segmentation via clustering
To find mid-level regions, tokens
General choices -- features, affinity functions, and
clustering algorithms
Example clustering algorithms
Mean shift and mode finding: K-means, Mean shift
Graph theoretic: Graph cut, normalized cuts
Grouping also useful for quantization
Texton histograms for texture within local region
 
 
2
Slide credit: Kristen Grauman
Recall: Images as graphs
Fully-connected graph:
node for every pixel
link between every pair of pixels, p,q
similarity wpq for each link
» similarity is inversely proportional to difference in color and position
Slide by Steve Seitz
3
Last time: Measuring affinity
Points x1 ... x40: 40 data points, 40 x 40 affinity matrix A
A(i,j) = exp( -||xi - xj||^2 / (2σ^2) )
1. What do the blocks signify?
2. What does the symmetry of the matrix signify?
3. How would the matrix change with a larger value of σ?
4
Slide credit: Kristen Grauman
Example: weighted graphs
Suppose we have a 4-pixel image (i.e., a 2 x 2 matrix)
Each pixel described by 2 features (feature dimension 1, feature dimension 2)
Dimension of data points: d = 2
Number of data points: N = 4
Kristen Grauman
5
 
Example: weighted graphs
Computing the distance matrix:

for i=1:N
    for j=1:N
        D(i,j) = ||xi - xj||^2
    end
end

D(1,:) = D(:,1)' = (0) 0.24 0.01 0.47
Kristen Grauman
6
 
Example: weighted graphs
Computing the distance matrix:

for i=1:N
    for j=1:N
        D(i,j) = ||xi - xj||^2
    end
end

D(1,:) = D(:,1)' = (0) 0.24 0.01 0.47; the remaining rows fill in the same way (entries 0.29, 0.15, 0.24, ...), giving a symmetric matrix with zeros on the diagonal.
Kristen Grauman
7
 
Example: weighted graphs
Computing the distance matrix:

for i=1:N
    for j=1:N
        D(i,j) = ||xi - xj||^2
    end
end

D is an N x N matrix.
Kristen Grauman
8
 
Example: weighted graphs
Distances → affinities (D → A):

for i=1:N
    for j=1:N
        D(i,j) = ||xi - xj||^2
    end
end

for i=1:N
    for j=i+1:N
        A(i,j) = exp(-1/(2*σ^2) * ||xi - xj||^2);
        A(j,i) = A(i,j);
    end
end

Kristen Grauman
9
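The two MATLAB-style loops above translate directly to a vectorized Python/NumPy sketch. The four 2-D feature vectors below are invented for illustration, not the values behind the slides:

```python
import numpy as np

# Four data points (N=4 pixels), each described by d=2 features.
# The feature values are illustrative, not the ones from the slides.
X = np.array([[0.1, 0.2],
              [0.5, 0.6],
              [0.2, 0.2],
              [0.8, 0.5]])

# Distance matrix: D[i, j] = ||x_i - x_j||^2 (the double loop, vectorized).
diff = X[:, None, :] - X[None, :, :]
D = (diff ** 2).sum(axis=-1)

# Affinity matrix: A[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2)).
sigma = 0.5
A = np.exp(-D / (2 * sigma ** 2))
```

A larger sigma flattens the affinities toward 1; a smaller sigma drives A toward the identity, which is the effect the next slide illustrates.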
Scale parameter σ affects affinity
Distance matrix D =
Affinity matrix with increasing σ:
Kristen Grauman
10
Visualizing a shuffled affinity matrix
If we permute the order of the vertices as they are
referred to in the affinity matrix, we see different
patterns:
Kristen Grauman
11
Putting these two aspects together
Data points
Affinity matrices with σ = .1, σ = .2, σ = 1
Points x1…x10, Points x31…x40
Kristen Grauman
12
Goal: Segmentation by Graph Cuts
Break graph into segments
Delete links that cross between segments
Easiest to break links that have low similarity
similar pixels should be in the same segments
dissimilar pixels should be in different segments
A
B
C
13
Slide credit: Kristen Grauman
Cuts in a graph: Min cut
Link Cut
set of links whose removal makes a graph disconnected
cost of a cut: cut(A,B) = Σ_{p∈A, q∈B} w_{p,q}
Find minimum cut
gives you a segmentation
fast algorithms exist
Weakness of Min cut: it is biased toward cutting off small, isolated sets of nodes.
Source: Steve Seitz
14
Cuts in a graph: Normalized cut
Fix bias of Min Cut by normalizing for size of segments:

Ncut(A,B) = cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V)

assoc(A,V) = sum of weights of all edges that touch A

Ncut value is small when we get two clusters with many edges
with high weights, and few edges of low weight between them.
Approximate solution: generalized eigenvalue problem.
J. Shi and J. Malik, Normalized Cuts and Image Segmentation, CVPR, 1997
Steve Seitz
15
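A minimal NumPy sketch of the spectral relaxation behind this: take the eigenvector for the second-smallest eigenvalue of the symmetric normalized Laplacian and split nodes by its sign. The 6-node affinity matrix is a toy example (two tight clusters joined by one weak edge), not data from the lecture:

```python
import numpy as np

# Toy affinity matrix: nodes {0,1,2} and {3,4,5} form two tight clusters,
# connected only by one weak edge (weight 1 vs. 9 inside clusters).
W = np.array([[0, 9, 9, 1, 0, 0],
              [9, 0, 9, 0, 0, 0],
              [9, 9, 0, 0, 0, 0],
              [1, 0, 0, 0, 9, 9],
              [0, 0, 0, 9, 0, 9],
              [0, 0, 0, 9, 9, 0]], dtype=float)

deg = W.sum(axis=1)
D_is = np.diag(1.0 / np.sqrt(deg))
L_sym = np.eye(6) - D_is @ W @ D_is   # symmetric normalized Laplacian

# Relaxed Ncut: eigenvector of the second-smallest eigenvalue
# (eigh returns eigenvalues in ascending order); split by sign.
vals, vecs = np.linalg.eigh(L_sym)
labels = (vecs[:, 1] > 0).astype(int)
```

On this toy graph the sign split recovers the two clusters, cutting only the weak edge.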
Example results: segments from Ncuts
16
Slide credit: Kristen Grauman
Normalized cuts: pros and cons
Pros:
Generic framework, flexible to choice of function that
computes weights (“affinities”) between nodes
Does not require model of the data distribution
Cons:
Time complexity can be high
Dense, highly connected graphs → many affinity computations
Solving eigenvalue problem
Preference for balanced partitions
Kristen Grauman
17
Now: Fitting
Want to associate a model with multiple observed features
 
[Fig from Marszalek & Schmid, 2007]
 
 
 
For example, the model could be a line, a circle, or an arbitrary shape.
Adapted from Kristen Grauman
18
Fitting: Main idea
 
Choose a parametric model that best
represents a set of features
Membership criterion is not local
Can’t tell whether a point belongs to a given model just by
looking at that point
Three main questions:
What model represents this set of features best?
Which of several model instances gets which feature?
How many model instances are there?
Computational complexity is important
It is infeasible to examine every possible set of parameters
and every possible combination of features
Slide credit: L. Lazebnik
19
Example: Line fitting
Why fit lines?
 
Many objects characterized by presence of straight lines
 
Wait, why aren’t we done just by running edge detection?
Kristen Grauman
20
 
Difficulty of line fitting
Extra edge points (clutter), multiple models: which points go with which line, if any?
Only some parts of each line detected, and some parts are missing: how to find a line that bridges missing evidence?
Noise in measured edge points, orientations: how to detect true underlying parameters?
Kristen Grauman
21
Voting
It's not feasible to check all combinations of features by fitting a model to each possible subset.
Voting is a general technique where we let each feature vote for all models that are compatible with it.
Cycle through features, cast votes for model parameters.
Look for model parameters that receive a lot of votes.
Noise & clutter features will cast votes too, but typically their votes should be inconsistent with the majority of "good" features.
Kristen Grauman
22
Fitting lines: Hough transform
Given points that belong to a line, what is the line?
How many lines are there?
Which points belong to which lines?
The Hough Transform is a voting technique that can be used to answer all of these questions.
Main idea:
1. Record vote for each possible line on which each edge point lies.
2. Look for lines that get many votes.
Kristen Grauman
23
Finding lines in an image: Hough space
Equation of a line? y = mx + b
Connection between image (x,y) and Hough (m,b) spaces:
A line in the image corresponds to a point in Hough space
To go from image space to Hough space: given a set of points (x,y), find all (m,b) such that y = mx + b
Slide credit: Steve Seitz
24
Finding lines in an image: Hough space
Connection between image (x,y) and Hough (m,b) spaces:
A line in the image corresponds to a point in Hough space
To go from image space to Hough space: given a set of points (x,y), find all (m,b) such that y = mx + b
What does a point (x0, y0) in the image space map to?
Answer: the solutions of b = –x0m + y0, which form a line in Hough space
image space / Hough (parameter) space
Slide credit: Steve Seitz
25
Finding lines in an image: Hough space
What are the line parameters for the line that contains both (x0, y0) and (x1, y1)?
It is the intersection of the lines b = –x0m + y0 and b = –x1m + y1
image space / Hough (parameter) space
Slide credit: Kristen Grauman
26
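Numerically, the intersection comes from equating the two Hough-space lines; a small Python check with made-up points:

```python
# Each image point (x, y) maps to the Hough-space line b = -x*m + y.
# Equating -x0*m + y0 = -x1*m + y1 gives the parameters of the unique
# image-space line through both points. The point values are illustrative.
x0, y0 = 1.0, 3.0
x1, y1 = 3.0, 7.0

m = (y1 - y0) / (x1 - x0)   # slope of the image-space line
b = y0 - m * x0             # intercept, from b = -x0*m + y0

# Sanity check: both points lie on y = m*x + b.
assert y0 == m * x0 + b and y1 == m * x1 + b
```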
Finding lines in an image: Hough algorithm
How can we use this to find the most likely parameters (m,b) for the most prominent line in the image space?
Let each edge point in image space vote for a set of possible parameters in Hough space
Accumulate votes in discrete set of bins; parameters with the most votes indicate line in image space.
image space / Hough (parameter) space
Slide credit: Kristen Grauman
27
Polar representation for lines
Issues with usual (m,b) parameter space: can take on infinite values, undefined for vertical lines.
d: perpendicular distance from line to origin
θ: angle the perpendicular makes with the x-axis
x cos θ + y sin θ = d
Point in image space → sinusoid segment in Hough space
Adapted from Kristen Grauman
28
 
Hough line demo
http://www.dis.uniroma1.it/~iocchi/slides/icra2001/jav
a/hough.html
29
Slide credit: Kristen Grauman
Hough transform algorithm
Using the polar parameterization:
x cos θ + y sin θ = d
H: accumulator array (votes)

Basic Hough transform algorithm
1. Initialize H[d, θ] = 0
2. for each edge point I[x,y] in the image
       for θ = [θmin to θmax]   // some quantization
           d = x cos θ + y sin θ
           H[d, θ] += 1
3. Find the value(s) of (d, θ) where H[d, θ] is maximum
4. The detected line in the image is given by d = x cos θ + y sin θ

Time complexity (in terms of number of votes per pt)?
Source: Steve Seitz
30
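The four steps can be sketched in Python; the bin counts, the range of d, and the toy edge points are all assumptions for illustration, not values from the slides:

```python
import numpy as np

def hough_lines(edge_points, n_theta=180, n_d=200, d_max=200.0):
    """Accumulate votes H[d_bin, theta_bin] using d = x cos(theta) + y sin(theta)."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)  # one bin per degree
    H = np.zeros((n_d, n_theta), dtype=int)
    for x, y in edge_points:
        for j, theta in enumerate(thetas):        # one vote per theta bin
            d = x * np.cos(theta) + y * np.sin(theta)
            i = int(round((d + d_max) / (2 * d_max) * (n_d - 1)))  # quantize d
            if 0 <= i < n_d:
                H[i, j] += 1
    return H, thetas

# Toy input: 20 points on the line y = x, plus one clutter point.
pts = [(i, i) for i in range(20)] + [(3, 15)]
H, thetas = hough_lines(pts)
# For y = x, the true parameters are theta = 3*pi/4 (135 degrees) and d = 0;
# that bin collects a vote from all 20 collinear points, while the clutter
# point's votes spread over other bins.
```

Note the cost: each edge point casts one vote per θ bin, so the number of votes per point grows with the quantization of θ.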
1. Image → Canny
Derek Hoiem
2. Canny → Hough votes
Derek Hoiem
3. Hough votes → Edges
Find peaks
Derek Hoiem
Hough transform example
http://ostatic.com/files/images/ss_hough.jpg
Derek Hoiem
Showing longest segments found
Kristen Grauman
35
Original image
Canny edges
Vote space and top peaks
Impact of noise on Hough
Image space edge coordinates (x, y) | Votes in (d, θ) space
What difficulty does this present for an implementation?
Impact of noise on Hough
Image space edge coordinates | Votes
Here, everything appears to be "noise", or random edge points, but we still see peaks in the vote space.
37
Slide credit: Kristen Grauman
Extensions
Recall: when we detect an edge point, we also know its gradient direction
Extension 1: Use the image gradient
1. same
2. for each edge point I[x,y] in the image
       θ = gradient at (x,y)
       d = x cos θ + y sin θ
       H[d, θ] += 1
3. same
4. same
(Reduces degrees of freedom)
Extension 2: give more votes for stronger edges
Extension 3: change the sampling of (d, θ) to give more/less resolution
Extension 4: The same procedure can be used with circles, squares, or any other shape
38
Slide credit: Kristen Grauman
Extensions
Extension 1: Use the image gradient
1. same
2. for each edge point I[x,y] in the image
       compute unique (d, θ) based on image gradient at (x,y)
       H[d, θ] += 1
3. same
4. same
(Reduces degrees of freedom)
Extension 2: give more votes for stronger edges (use magnitude of gradient)
Extension 3: change the sampling of (d, θ) to give more/less resolution
Extension 4: The same procedure can be used with circles, squares, or any other shape…
Source: Steve Seitz
39
Hough transform for circles
Circle: center (a,b) and radius r
(x_i − a)^2 + (y_i − b)^2 = r^2
For a fixed radius r:
Equation of circle?
Equation of set of circles that all pass through a point?
Image space → Hough space
Adapted by Devi Parikh from: Kristen Grauman
40
Hough transform for circles
For a fixed radius r
Circle: center (a,b) and radius r
Image space
Hough space
 
Intersection:
most votes for
center occur
here.
Kristen Grauman
41
Hough transform for circles
For an unknown radius r
Circle: center (a,b) and radius r
 
Hough space
Image space
?
Kristen Grauman
42
Hough transform for circles
For an unknown radius r
Circle: center (a,b) and radius r
Hough space
Image space
b
a
r
Kristen Grauman
43
Hough transform for circles
For an unknown radius r, known gradient direction
Circle: center (a,b) and radius r
Image space → Hough space
Kristen Grauman
44
Hough transform for circles

For every edge pixel (x,y):
    For each possible radius value r:
        For each possible gradient direction θ:   // or use estimated gradient at (x,y)
            a = x − r cos(θ)   // column
            b = y + r sin(θ)   // row
            H[a,b,r] += 1
        end
    end

Check out online demo: http://www.markschulze.net/java/hough/
Time complexity per edgel?
Kristen Grauman
45
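The triple loop above, sketched in Python on a synthetic circle; the accumulator size, the angle sampling, and the test circle are assumptions for illustration:

```python
import numpy as np

def hough_circles(edge_points, radii, acc_size=64, thetas=None):
    """3-D accumulator H[a, b, r_index], following the pseudocode above
    (a = x - r cos(theta), b = y + r sin(theta))."""
    if thetas is None:
        thetas = np.linspace(0, 2 * np.pi, 40, endpoint=False)
    H = np.zeros((acc_size, acc_size, len(radii)), dtype=int)
    for x, y in edge_points:
        for ri, r in enumerate(radii):
            for t in thetas:
                a = int(round(x - r * np.cos(t)))   # candidate center column
                b = int(round(y + r * np.sin(t)))   # candidate center row
                if 0 <= a < acc_size and 0 <= b < acc_size:
                    H[a, b, ri] += 1
    return H

# Toy input: 40 points on a circle of radius 10 centered at (30, 30)
# (generated with y = b0 - r sin(t) to match the sign convention above).
angles = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = [(30 + 10 * np.cos(t), 30 - 10 * np.sin(t)) for t in angles]
H = hough_circles(pts, radii=[8, 10, 12])
a, b, ri = np.unravel_index(H.argmax(), H.shape)   # peak: center and radius
```

The accumulator peak recovers both the center (30, 30) and the correct radius index; per edgel, the cost is one vote per (radius, θ) pair, which answers the slide's complexity question.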
Original
 
Edges
Example: detecting circles with Hough
 
Votes: Penny
Note: a different Hough transform (with separate accumulators)
was used for each circle radius (quarters vs. penny).
46
Slide credit: Kristen Grauman
 
Original
Edges
Example: detecting circles with Hough
Votes: Quarter
 
Combined detections
Coin finding sample images from: Vivek Kwatra
47
Slide credit: Kristen Grauman
Example: iris detection
Hemerson Pistori and Eduardo Rocha Costa
http://rsbweb.nih.gov/ij/plugins/hough-circles.html
 
 
Gradient+threshold
Hough space
(fixed radius)
Max detections
Kristen Grauman
48
Example: iris detection
An Iris Detection Method Using the Hough Transform and Its Evaluation for
Facial and Eye Movement, by Hideki Kashima, Hitoshi Hongo, Kunihito
Kato, Kazuhiko Yamamoto, ACCV 2002.
Kristen Grauman
49
Voting: practical tips
 
Minimize irrelevant tokens first
Choose a good grid / discretization
 
Vote for neighbors, also (smoothing in
accumulator array)
Use direction of edge to reduce parameters by 1
Kristen Grauman
50
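The "vote for neighbors" tip can be implemented by spreading each vote over adjacent accumulator bins; a small sketch (the half-weight choice is an assumption, not from the lecture):

```python
import numpy as np

def smoothed_vote(H, i, j, weight=1.0):
    """Add a vote at accumulator bin (i, j) and half-weight votes at its
    4-neighbors, so a peak split across adjacent bins is still found."""
    offsets = [(0, 0, weight), (-1, 0, weight / 2), (1, 0, weight / 2),
               (0, -1, weight / 2), (0, 1, weight / 2)]
    for di, dj, w in offsets:
        a, b = i + di, j + dj
        if 0 <= a < H.shape[0] and 0 <= b < H.shape[1]:
            H[a, b] += w

H = np.zeros((5, 5))
smoothed_vote(H, 2, 2)
```

An alternative with the same effect is to accumulate plain votes and blur H once before peak finding.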
Hough transform: pros and cons
 
Pros
All points are processed independently, so can cope with
occlusion, gaps
Some robustness to noise: noise points unlikely to
contribute 
consistently
 to any single bin
Can detect multiple instances of a model in a single pass
 
Cons
Complexity of search time increases exponentially with
the number of model parameters
Non-target shapes can produce spurious peaks in
parameter space
Quantization: can be tricky to pick a good grid size
 
 
Kristen Grauman
51
Generalized Hough Transform
What if we want to detect arbitrary shapes?
Intuition: vote space, ref. point, displacement vectors
Now suppose those colors encode gradient directions…
Define a model shape by its boundary points and a reference point.
[Dana H. Ballard, Generalizing the Hough Transform to Detect Arbitrary Shapes, 1980]
Kristen Grauman
52
 
 
Generalized Hough Transform
Offline procedure:
At each boundary point p_i, compute displacement vector: r = a − p_i.
Store these vectors in a table indexed by gradient orientation θ.
Model shape
Kristen Grauman
53
 
 
 
Generalized Hough Transform
Detection procedure:
For each edge point:
Use its gradient orientation θ to index into stored table
Use retrieved r vectors to vote for reference point
Assuming translation is the only transformation here, i.e., orientation and scale are fixed.
Novel image
Kristen Grauman
54
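Both procedures fit in a few lines of Python. The diamond-shaped model, its orientations, and the 8-bin orientation quantization below are all invented for illustration:

```python
import numpy as np
from collections import defaultdict

def ori_bin(theta, n_bins=8):
    """Quantize a gradient orientation into one of n_bins table indices."""
    return int(round(theta / (2 * np.pi) * n_bins)) % n_bins

def build_r_table(boundary, orientations, ref_point):
    """Offline: store displacement r = a - p_i, indexed by orientation."""
    table = defaultdict(list)
    for (px, py), theta in zip(boundary, orientations):
        table[ori_bin(theta)].append((ref_point[0] - px, ref_point[1] - py))
    return table

def ght_vote(edges, orientations, table, acc_size=50):
    """Detection: each edge point votes for candidate reference points."""
    H = np.zeros((acc_size, acc_size), dtype=int)
    for (px, py), theta in zip(edges, orientations):
        for dx, dy in table[ori_bin(theta)]:
            a, b = px + dx, py + dy
            if 0 <= a < acc_size and 0 <= b < acc_size:
                H[a, b] += 1
    return H

# Toy model: four boundary points of a diamond with made-up orientations,
# reference point at its center.
model_pts = [(0, 5), (5, 10), (10, 5), (5, 0)]
model_ori = [np.pi, np.pi / 2, 0.0, 3 * np.pi / 2]
table = build_r_table(model_pts, model_ori, ref_point=(5, 5))

# Same shape translated by (20, 17); all four votes land on the
# translated reference point (25, 22).
img_pts = [(x + 20, y + 17) for x, y in model_pts]
H = ght_vote(img_pts, model_ori, table)
peak = np.unravel_index(H.argmax(), H.shape)
```

Because translation is the only transformation modeled, every edge point votes for the same shifted reference point and the peak recovers the translation exactly.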
Generalized Hough for object detection
Instead of indexing displacements by gradient
orientation, index by matched local patterns.
B. Leibe, A. Leonardis, and B. Schiele, 
Combined Object Categorization and
Segmentation with an Implicit Shape Model
, ECCV Workshop on Statistical
Learning in Computer Vision 2004
Source: L. Lazebnik
55
Instead of indexing displacements by gradient
orientation, index by “visual codeword”
B. Leibe, A. Leonardis, and B. Schiele, 
Combined Object Categorization and
Segmentation with an Implicit Shape Model
, ECCV Workshop on Statistical
Learning in Computer Vision 2004
test image
Source: L. Lazebnik
Generalized Hough for object detection
56
Summary
Grouping/segmentation useful to make a compact representation and merge similar features
associate features based on defined similarity measure and clustering objective
Fitting problems require finding any supporting evidence for a model, even within clutter and missing features.
associate features with an explicit model
Voting approaches, such as the Hough transform, make it possible to find likely model parameters without searching all combinations of features.
Hough transform approach for lines, circles, …, arbitrary shapes defined by a set of boundary points, recognition from patches.
Kristen Grauman
57
Questions?
 
See you Tuesday!
58