Local Features in Computer Vision - Slides by Prof. Kristen Grauman

TP13 - Indexing local features
Computer Vision, FCUP, 2019/20
Miguel Coimbra
Slides by Prof. Kristen Grauman
Matching local features
Kristen Grauman
Matching local features
 
?
To generate candidate matches, find patches that have the most similar appearance (e.g., lowest SSD).
Simplest approach: compare them all, take the closest (or closest k, or within a thresholded distance).
Image 1
Image 2
Kristen Grauman
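A minimal sketch of this brute-force matcher, assuming descriptors are stacked one per row in NumPy arrays (function and variable names are illustrative, not from the slides):

```python
import numpy as np

def match_descriptors(desc1, desc2, max_ssd=None):
    # For each descriptor in image 1, find the descriptor in image 2
    # with the lowest sum-of-squared-differences (SSD).
    matches = []
    for i, d in enumerate(desc1):
        ssd = np.sum((desc2 - d) ** 2, axis=1)   # SSD to every image-2 descriptor
        j = int(np.argmin(ssd))                  # take the closest
        if max_ssd is None or ssd[j] < max_ssd:  # or keep only thresholded distances
            matches.append((i, j, ssd[j]))
    return matches
```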
Matching local features
In stereo case, may constrain by proximity if we make
assumptions on max disparities.
Image 1
Image 2
Kristen Grauman
Indexing local features
 
Kristen Grauman
Indexing local features
Each patch / region has a descriptor, which is a
point in some high-dimensional feature space
(e.g., SIFT)
 
Descriptor’s
feature space
Kristen Grauman
Indexing local features
 
When we see close points in feature space, we
have similar descriptors, which indicates similar
local content.
 
Descriptor’s
feature space
Database
images
 
Query
image
Kristen Grauman
Indexing local features
With potentially thousands of features per
image, and hundreds to millions of images to
search, how to efficiently find those that are
relevant to a new image?
Kristen Grauman
Indexing local features:
inverted file index
For text documents, an efficient way to find all pages on which a word occurs is to use an index…
We want to find all images in which a feature occurs.
To use this idea, we’ll need to map our features to “visual words”.
Kristen Grauman
Text retrieval vs. image search
What makes the problems similar, different?
Kristen Grauman
Visual words: main idea
 
Extract some local features from a number of images …
e.g., SIFT descriptor space: each point is 128-dimensional
Slide credit: D. Nister, CVPR 2006
 
Visual words: main idea
Each point is a local descriptor, e.g., a SIFT vector.
 
 
Visual words
 
Map high-dimensional descriptors to tokens/words by quantizing the feature space.
Quantize via clustering; let cluster centers be the prototype “words”.
Determine which word to assign to each new image region by finding the closest cluster center.
Descriptor’s feature space
Word #2
Kristen Grauman
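A minimal sketch of this quantization step with k-means (the library choice and vocabulary size are assumptions, not prescribed by the slides):

```python
from sklearn.cluster import KMeans

def build_vocabulary(descriptors, n_words=1000):
    # Cluster descriptors pooled from training images; the cluster
    # centers become the prototype "visual words".
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(descriptors)

def assign_words(vocab, descriptors):
    # Quantize new image regions: each descriptor gets the index of
    # its closest cluster center.
    return vocab.predict(descriptors)
```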
Visual words
Example: each
group of patches
belongs to the
same visual word
Figure from Sivic & Zisserman, ICCV 2003
Kristen Grauman
Visual words and textons
First explored for texture and material representations.
Texton = cluster center of filter responses over a collection of images.
Describe textures and materials based on the distribution of prototypical texture elements.
Leung & Malik, 1999; Varma & Zisserman, 2002
Kristen Grauman
Recall: Texture representation example
[Figure: windows plotted by statistics that summarize patterns in small windows. Dimension 1 = mean d/dx value; Dimension 2 = mean d/dy value. Clusters correspond to windows with primarily horizontal edges, primarily vertical edges, both, and small gradient in both directions.]
Kristen Grauman
Visual vocabulary formation
 
Issues:
Sampling strategy: where to extract features?
Clustering / quantization algorithm
Unsupervised vs. supervised
What corpus provides features (universal vocabulary?)
Vocabulary size, number of words
 
Kristen Grauman
Inverted file index
Database images are loaded into the index, mapping words to image numbers.
Kristen Grauman
Inverted file index
New query image is mapped to indices of database images that share a word.
When will this give us a significant gain in efficiency?
Kristen Grauman
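A minimal inverted-file sketch (the data structures and names are illustrative). The gain is significant when each word occurs in only a small fraction of the database, since images sharing no word with the query are never touched:

```python
from collections import defaultdict

index = defaultdict(set)  # visual word id -> set of database image ids

def add_image(image_id, word_ids):
    # Load a database image into the index, mapping words to image numbers.
    for w in set(word_ids):
        index[w].add(image_id)

def candidate_images(query_word_ids):
    # Map a query to the database images that share at least one word.
    hits = set()
    for w in set(query_word_ids):
        hits |= index[w]
    return hits
```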
 
If a local image region is a visual word, how can we summarize an image (the document)?
Analogy to documents
Of all the sensory impressions proceeding to the brain, the visual experiences are the dominant ones. Our perception of the world around us is based essentially on the messages that reach the brain from our eyes. For a long time it was thought that the retinal image was transmitted point by point to visual centers in the brain; the cerebral cortex was a movie screen, so to speak, upon which the image in the eye was projected. Through the discoveries of Hubel and Wiesel we now know that behind the origin of the visual perception in the brain there is a considerably more complicated course of events. By following the visual impulses along their path to the various cell layers of the optical cortex, Hubel and Wiesel have been able to demonstrate that the message about the image falling on the retina undergoes a step-wise analysis in a system of nerve cells stored in columns. In this system each cell has its specific function and is responsible for a specific detail in the pattern of the retinal image.
(as a bag of words: sensory, brain, visual, perception, retinal, cerebral cortex, eye, cell, optical nerve, image, Hubel, Wiesel)
ICCV 2005 short course, L. Fei-Fei
Bags of visual words
Summarize entire image based on its distribution (histogram) of word occurrences.
Analogous to the bag of words representation commonly used for documents.
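As a sketch, the histogram is one call, assuming word ids are non-negative integers produced by the quantizer above:

```python
import numpy as np

def bag_of_words(word_ids, vocab_size):
    # Count how often each visual word occurs in the image.
    return np.bincount(word_ids, minlength=vocab_size)
```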
Comparing bags of words
Rank frames by normalized scalar product between their (possibly weighted) occurrence counts --- nearest-neighbor search for similar images. For a vocabulary of V words,

$$\mathrm{sim}(d_j, q) = \frac{\langle d_j, q\rangle}{\|d_j\|\,\|q\|} = \frac{\sum_{i=1}^{V} d_j(i)\, q(i)}{\sqrt{\sum_{i=1}^{V} d_j(i)^2}\,\sqrt{\sum_{i=1}^{V} q(i)^2}}$$

e.g., d_j = [5  1  1  0], q = [1  8  1  4].
Kristen Grauman
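The score transcribed as code (the printed value is for the example vectors above):

```python
import numpy as np

def similarity(d, q):
    # Normalized scalar product (cosine similarity) of two count vectors.
    return np.dot(d, q) / (np.linalg.norm(d) * np.linalg.norm(q))

print(similarity(np.array([5, 1, 1, 0]), np.array([1, 8, 1, 4])))  # ~0.30
```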
tf-idf weighting
Term frequency – inverse document frequency
Describe a frame by the frequency of each word within it, downweighting words that appear often in the database (the standard weighting for text retrieval):

$$t_i = \frac{n_{id}}{n_d}\,\log\frac{N}{n_i}$$

where n_{id} = number of occurrences of word i in document d, n_d = number of words in document d, n_i = number of documents word i occurs in across the whole database, and N = total number of documents in the database.
Kristen Grauman
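A vectorized sketch of this weighting, assuming a documents-by-words count matrix; the guards against empty documents and unused words are my addition:

```python
import numpy as np

def tfidf(counts):
    # counts: (num_documents, vocab_size) array of word occurrences.
    N = counts.shape[0]                                     # total documents
    n_d = np.maximum(counts.sum(axis=1, keepdims=True), 1)  # words per document
    n_i = np.maximum((counts > 0).sum(axis=0), 1)           # documents containing word i
    return (counts / n_d) * np.log(N / n_i)                 # t_i = (n_id / n_d) log(N / n_i)
```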
Bags of words for content-based image retrieval
Slide from Andrew Zisserman
Sivic & Zisserman, ICCV 2003
Video Google System
1. Collect all words within query region
2. Inverted file index to find relevant frames
3. Compare word counts
4. Spatial verification
Query region → Retrieved frames
Sivic & Zisserman, ICCV 2003
Demo online at: http://www.robots.ox.ac.uk/~vgg/research/vgoogle/index.html
K. Grauman, B. Leibe
Scoring retrieval quality
[Figure: precision-recall curve; recall on the x-axis, precision on the y-axis, both ranging from 0 to 1.]
Query
Database size: 10 images
Relevant (total): 5 images 
Results (ordered):
precision = #relevant / #returned
recall = #relevant / #total relevant
Slide credit: Ondrej Chum
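The two scores as code, evaluated at a cutoff k over a hypothetical ranked result list:

```python
def precision_recall(ranked_ids, relevant_ids, k):
    returned = ranked_ids[:k]
    hits = len(set(returned) & set(relevant_ids))
    precision = hits / len(returned)       # #relevant / #returned
    recall = hits / len(relevant_ids)      # #relevant / #total relevant
    return precision, recall
```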
Vocabulary Trees: hierarchical clustering
for large vocabularies
Tree construction:
Slide credit: David Nister
[Nister & Stewenius, CVPR ’06]
K. Grauman, B. Leibe
Vocabulary Tree
Training: Filling the tree
Slide credit: David Nister
[Nister & Stewenius, CVPR ’06]
K. Grauman, B. Leibe
 
What is the computational advantage of the hierarchical bag-of-words representation vs. a flat vocabulary?
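With branching factor k and depth L, the tree reaches one of k^L leaf words using only k·L distance computations per descriptor, versus k^L comparisons against a flat vocabulary of the same size (e.g., 60 instead of 1,000,000 for k=10, L=6). A sketch of the descent; the nested-dict tree layout is a hypothetical representation, not the paper's data structure:

```python
import numpy as np

# Each node is assumed to be {'centers': (k, d) array of its children's
# cluster centers, 'children': list of k child nodes, None at the leaves}.
def lookup(node, desc):
    path = []
    while node is not None:
        d2 = ((node['centers'] - desc) ** 2).sum(axis=1)
        best = int(d2.argmin())        # only k comparisons per level
        path.append(best)
        node = node['children'][best]  # None marks a leaf
    return tuple(path)                 # the path identifies the visual word
```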
Vocabulary Tree
Recognition
Slide credit: David Nister
[Nister & Stewenius, CVPR ’06]
RANSAC verification
Bags of words: pros and cons
 
+  flexible to geometry / deformations / viewpoint
+  compact summary of image content
+  provides vector representation for sets
+  very good results in practice
 
-  basic model ignores geometry – must verify afterwards, or encode via features
-  background and foreground mixed when bag covers whole image
-  optimal vocabulary formation remains unclear
Summary
 
Matching local invariant features: useful not only to provide matches for multi-view geometry, but also to find objects and scenes.
Bag of words representation: quantize feature space to make a discrete set of visual words.
Summarize image by distribution of words.
Index individual words.
Inverted index: pre-compute index to enable faster search at query time.