Active Object Recognition Using Vocabulary Trees: Experiment Details and COIL Dataset Visualizations

 
Active Object Recognition using Vocabulary Trees
Natasha Govender, Jonathan Claassens, Philip Torr, Jonathan Warrell
Presented by: Manu Agarwal
 
Outline

Particulars of the experiment
Comparing uniqueness scores
- Intra-class variation
- Inter-class variation
Textureness vs uniqueness
- Intra-class variation
- Inter-class variation
Using entropy instead of tf-idf
- Intra-class variation
- Inter-class variation
[System diagram labels: Selector Component, Features, Object Verification and Recognition, Update belief]
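The labels above appear to come from a system diagram of the active recognition pipeline. As a rough illustration only, the sketch below wires the named components into a simple loop; the function names, the stub feature extractor, and the multiplicative belief update are assumptions, not the authors' formulation.

```python
# Hypothetical sketch of an active-recognition loop built from the components
# named above (Selector, Features, Object Verification and Recognition,
# Update belief). Every function body is a stub with toy data; this shows the
# control flow only, not the authors' method.
import numpy as np

rng = np.random.default_rng(0)
NUM_OBJECTS = 20                        # 20 objects, as in the experiment
VIEW_ANGLES = list(range(0, 360, 20))   # 20-degree spacing, as in the experiment

def extract_features(angle):
    """Stand-in for SIFT extraction on the image captured at `angle`."""
    return rng.normal(size=(50, 128))   # 50 fake 128-D descriptors

def score_hypotheses(features):
    """Stand-in for vocabulary-tree matching: one score per object model."""
    return rng.random(NUM_OBJECTS) + 1e-6

def update_belief(belief, scores):
    """Simple multiplicative (Bayes-like) update, then renormalise."""
    belief = belief * scores
    return belief / belief.sum()

def select_next_view(visited):
    """Toy selector: just take the next unvisited angle."""
    return next(a for a in VIEW_ANGLES if a not in visited)

belief = np.full(NUM_OBJECTS, 1.0 / NUM_OBJECTS)   # uniform prior over objects
visited = set()
for _ in range(5):                                  # capture five views
    angle = select_next_view(visited)
    visited.add(angle)
    belief = update_belief(belief, score_hypotheses(extract_features(angle)))
print("most likely object:", int(belief.argmax()))
```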
 
Particulars of the experiment
 
COIL dataset
 
Visualization of the COIL dataset
[A series of slides showing example COIL objects imaged from multiple viewing angles]
 
COIL dataset
 
A set of 100 objects, each imaged at every 5 degrees
The experiment uses 20 different objects, imaged at every 20 degrees
Images are captured around the y-axis (1 DoF)
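As a small illustration, the snippet below enumerates the 20-degree subset of COIL-100 described above. The directory name and the obj{n}__{angle}.png filename pattern are assumptions about the distributed dataset, and which 20 objects were actually used is not stated, so the first 20 stand in purely for illustration.

```python
# Minimal sketch: enumerate a 20-degree COIL-100 subset as described above.
# The "coil-100" directory and the "obj{n}__{angle}.png" naming are assumptions;
# the first 20 objects are placeholders for the (unspecified) 20 objects used.
from pathlib import Path

COIL_DIR = Path("coil-100")        # hypothetical dataset location
NUM_OBJECTS = 20                   # the experiment uses 20 of the 100 objects
ANGLES = range(0, 360, 20)         # every 20 degrees around the y-axis

subset = [
    COIL_DIR / f"obj{obj}__{angle}.png"
    for obj in range(1, NUM_OBJECTS + 1)
    for angle in ANGLES
]
print(f"{len(subset)} images selected ({NUM_OBJECTS} objects x {len(ANGLES)} views)")
```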
Particulars of the experiment

k = 2
20 diverse object categories
Scoring metrics: tf-idf; entropy
SIFT descriptors

Vocabulary Tree
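The experiment lists tf-idf (and later entropy) as scoring schemes alongside SIFT descriptors quantised with a vocabulary tree. As a hedged illustration, the sketch below computes the standard tf-idf weight commonly used with vocabulary trees, w_i = ln(N / N_i); the per-view aggregate at the end is purely illustrative, since the slides do not give the paper's exact definition of the uniqueness score.

```python
# Sketch of the standard tf-idf weight used with vocabulary trees:
# w_i = ln(N / N_i), where N is the number of database views and N_i the number
# of views containing visual word i. The final per-view aggregate is only one
# plausible "uniqueness-like" quantity, not the paper's stated definition.
import numpy as np

# toy data: rows = database views, columns = visual words (tree leaves)
word_counts = np.array([
    [3, 0, 1, 0],
    [0, 2, 1, 1],
    [1, 1, 0, 4],
], dtype=float)

N = word_counts.shape[0]                          # number of database views
N_i = np.count_nonzero(word_counts > 0, axis=0)   # views containing each word
idf = np.log(N / N_i)                             # tf-idf weight per visual word

uniqueness_like = (word_counts * idf).sum(axis=1) # weighted word mass per view
print(np.round(uniqueness_like, 2))
```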
Intra-class variation
[Pictured views ranked by uniqueness score: 120.21 < 125.74 < 173.41]

Intra-class variation
[Pictured views ranked by uniqueness score: 67.70 < 125.74 < 145.08]

Intra-class variation
[Pictured views ranked by uniqueness score: 149.92 < 169.27 < 183.78]

Intra-class variation
[Pictured views ranked by uniqueness score: 33.85 < 98.22 < 169.27]

Intra-class variation
[Pictured views ranked by uniqueness score: 76.21 < 101.22 < 127.84]
 
Conclusions

Close-up images are given higher uniqueness scores
Images with visible text are given higher uniqueness scores
Plain objects, such as the onion, are given low uniqueness scores
Inter-class variation
[Pictured views ranked by uniqueness score: 76.21 < 145.08 < 183.78]

Inter-class variation
[Pictured views ranked by uniqueness score: 33.85 < 76.21 < 98.21]

Inter-class variation
[Pictured views ranked by uniqueness score: 102.31 < 236.97 < 324.03]
 
Conclusions

Images depicting the front view of the object are given higher scores

Comparison across classes
 
Comparing Textureness with uniqueness
[Uniqueness scores: 33.85 < 76.21 < 98.21; textureness: 23 < 32 < 35]

Comparing Textureness with uniqueness
[Uniqueness scores: 98.21 < 288.14 < 324.03; textureness: 31 < 44 < 67]

Comparing Textureness with uniqueness
[Uniqueness scores: 102.31 < 236.96 < 324.03; textureness: 49 < 67 < 75]

Comparing Textureness with uniqueness
[Uniqueness scores: 33.85 < 76.21 < 98.21; textureness: 13 < 28 < 31]
 
Conclusions

There is a very strong correlation between textureness and uniqueness within a class
The correlation is not as strong when comparing objects from different classes
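The slides do not define textureness explicitly; if it is read as something like the number of detected SIFT keypoints, the within-class check can be as simple as the snippet below, which reuses the numbers from one of the slides above. Pairing the two triples element-wise in the displayed order is itself an assumption.

```python
# Correlation check between the "textureness" and uniqueness values shown on
# one slide above. Pairing the two triples in the displayed order is an
# assumption, and what textureness measures is not defined on the slides.
import numpy as np

uniqueness = np.array([33.85, 76.21, 98.21])   # uniqueness scores from a slide
textureness = np.array([23.0, 32.0, 35.0])     # textureness values from the same slide

r = np.corrcoef(uniqueness, textureness)[0, 1]  # Pearson correlation coefficient
print(f"Pearson r = {r:.3f}")
```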
 
Using Entropy instead of tf-idf
Intra-class variation
[tf-idf scores: 120.21 < 125.74 < 173.41; entropy scores: 45.30 < 67.18 < 71.21]

Intra-class variation
[tf-idf scores: 76.21 < 101.22 < 127.84; entropy scores: 49.17 < 71.73 < 88.91]

Inter-class variation
[tf-idf scores: 33.85 < 76.21 < 98.21; entropy scores: 3.01 < 8.28 < 32.97]

Inter-class variation
[tf-idf scores: 102.31 < 236.96 < 324.03; entropy scores: 45.83 < 67.11 < 69.08]
 
Conclusions

The two metrics behave in a broadly similar fashion
tf-idf gives more weight to visible text than entropy does
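For concreteness, here is a hedged sketch contrasting the two kinds of score on a toy set of visual-word histograms. The idf weight ln(N / N_i) and the Shannon entropy of a view's word histogram are standard definitions assumed for illustration; whether they match the paper's exact formulations is an assumption.

```python
# Toy comparison of a tf-idf-style score and a histogram-entropy score per view.
# Both definitions are standard ones assumed for illustration; the paper's exact
# formulas are not given on the slides.
import numpy as np

# toy database: rows = views, columns = visual words
word_counts = np.array([
    [5, 1, 0, 0],
    [2, 2, 2, 1],
    [0, 3, 1, 3],
], dtype=float)

N = word_counts.shape[0]
N_i = np.count_nonzero(word_counts > 0, axis=0)
idf = np.log(N / N_i)
tfidf_score = (word_counts * idf).sum(axis=1)        # per-view tf-idf mass

p = word_counts / word_counts.sum(axis=1, keepdims=True)
with np.errstate(divide="ignore", invalid="ignore"):
    plogp = np.where(p > 0, p * np.log(p), 0.0)      # define 0 * log(0) = 0
entropy_score = -plogp.sum(axis=1)                   # Shannon entropy per view

for view, (t, h) in enumerate(zip(tfidf_score, entropy_score)):
    print(f"view {view}: tf-idf-like = {t:.2f}, entropy = {h:.2f}")
```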
 
 
Thank You!
Slide Note

This presentation explores active object recognition using vocabulary trees, based on work by Natasha Govender, Jonathan Claassens, Philip Torr, and Jonathan Warrell, and presented by Manu Agarwal. It covers the particulars of the experiment, uniqueness scores, textureness versus uniqueness, and the use of entropy instead of tf-idf, and it showcases the COIL dataset through a series of visualizations capturing objects from different angles.

