3D Display Methods in Computer Graphics: Parallel vs Perspective Projection

UNIT 4: Computer Graphics and Visualization
3D Display Methods in Computer Graphics
By Miss. T. Sarah Jeba Jency, M.Sc., M.Phil., B.Ed.

What is a 3D display method in computer graphics?
3D computer graphics (in contrast to 2D computer graphics) are graphics that use a three-dimensional representation of geometric data stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be for later display or for real-time viewing. We will discuss the following:

Parallel Projection
Perspective Projection
Depth Cueing
Parallel Projection:

A parallel projection is a projection of an object in three-dimensional space onto a fixed plane, known as the projection plane or image plane, where the rays, known as lines of sight or projection lines, are parallel to each other.

In parallel projection, the z coordinate is discarded, and parallel lines from each vertex on the object are extended until they intersect the view plane. We connect the projected vertices by line segments which correspond to connections on the original object. As shown in the next slide, a parallel projection preserves the relative proportions of objects but does not produce realistic views.

Notes:
Project points on the object surface along parallel lines onto the display plane.
Parallel lines are still parallel after projection.
Used in engineering and architectural drawings.
Views maintain the relative proportions of the object.
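The notes above can be sketched in a few lines. This is a minimal illustration, not a standard API: `project_parallel` is a hypothetical helper that performs an orthographic parallel projection onto the z = 0 view plane by simply discarding the z coordinate.

```python
# Orthographic (parallel) projection onto the z = 0 view plane:
# each vertex is projected along lines parallel to the z-axis,
# so the z coordinate is simply discarded.
def project_parallel(vertices):
    return [(x, y) for x, y, z in vertices]

# A unit cube: the front and back faces project onto the same square,
# illustrating that relative proportions are preserved.
cube = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
        (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
front = project_parallel(cube[:4])
back = project_parallel(cube[4:])
print(front == back)  # both faces land on the same 2D square
```

Because every projection line has the same direction, parallel edges of the object stay parallel in the image, which is why this projection suits engineering drawings.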
Perspective Projection:

The perspective projection, on the other hand, produces realistic views but does not preserve relative proportions. In perspective projection, the lines of projection are not parallel. Instead, they all converge at a single point called the center of projection or projection reference point.

The perspective projection is perhaps the most common projection technique, familiar to us as the way images are formed by the eye or by a camera lens on photographic film.
Distances and angles are not preserved, and parallel lines do not remain parallel. Instead, they all converge at a single point called the center of projection or projection reference point. There are three types of perspective projections:

One-point perspective projection is simple to draw.
Two-point perspective projection gives a better impression of depth.
Three-point perspective projection is the most difficult to draw.
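The converging projection lines can be sketched as the classic perspective divide. This is a minimal sketch under assumed conventions (center of projection at the origin, view plane at distance d along the z-axis; `project_perspective` is a hypothetical helper name): a point (x, y, z) maps to (x·d/z, y·d/z), so more distant points project closer to the center and appear smaller.

```python
# Perspective projection: projection lines converge at the center of
# projection (here the origin); the view plane sits at z = d.
# A point (x, y, z) maps to (x * d / z, y * d / z).
def project_perspective(x, y, z, d=1.0):
    return (x * d / z, y * d / z)

# Two points with the same (x, y) but different depths: the farther one
# projects closer to the axis, which is why distant objects look smaller.
near = project_perspective(2.0, 2.0, z=2.0)
far = project_perspective(2.0, 2.0, z=4.0)
print(near, far)  # (1.0, 1.0) (0.5, 0.5)
```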
Projection reference point:
The perspective projection conveys depth information by making distant objects smaller than near ones. This is the way that our eyes and a camera lens form images, so the displays are more realistic.

The disadvantage is that if objects have only limited depth variation, the image may not provide adequate depth information and ambiguity appears.
Some points about Perspective Projection:
Depth Cueing:
Depth cueing is implemented by having objects blend into the background color with increasing distance from the viewer. The range of distances over which this blending occurs is controlled by the sliders.

To create a realistic image, depth information is important so that we can easily identify, for a particular viewing direction, which is the front and which is the back of displayed objects. The depth of an object can be represented by the intensity of the image. The parts of the objects closest to the viewing position are displayed with the highest intensities, and objects farther away are displayed with decreasing intensities. This effect is known as 'depth cueing'.
Note:
To easily identify the front and back of displayed objects, depth information can be included using various methods.
A simple method is to vary the intensity of objects according to their distance from the viewing position.
E.g., lines closest to the viewing position are displayed with higher intensities, and lines farther away are displayed with lower intensities.
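The simple intensity-variation method above can be sketched as linear attenuation with distance. This is a minimal sketch: `depth_cue` is a hypothetical helper, and the `d_min`/`d_max` parameters stand in for the sliders that control the blending range.

```python
def depth_cue(intensity, distance, d_min, d_max):
    """Scale an intensity toward 0 as distance grows from d_min to d_max."""
    if distance <= d_min:
        return intensity          # closest objects keep full intensity
    if distance >= d_max:
        return 0.0                # beyond the range: fully blended away
    t = (d_max - distance) / (d_max - d_min)  # 1 at d_min, 0 at d_max
    return intensity * t

# Lines closest to the viewer keep full intensity; farther ones dim.
print(depth_cue(1.0, 0.0, d_min=0.0, d_max=10.0))   # 1.0
print(depth_cue(1.0, 5.0, d_min=0.0, d_max=10.0))   # 0.5
print(depth_cue(1.0, 12.0, d_min=0.0, d_max=10.0))  # 0.0
```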
Visible line and surface identification

I. When we view a picture containing non-transparent objects and surfaces, we cannot see those objects that lie behind objects closer to the eye.
II. We must remove these hidden surfaces to get a realistic screen image. The identification and removal of these surfaces is called the hidden-surface problem.

Methods for removing hidden surfaces:
Object-space method
Image-space method
Depth Buffer (Z-Buffer) Method:

0 ≤ depth ≤ 1
It is an image-space approach. The basic idea is to test the Z-depth of each surface to determine the closest visible surface. So that closer polygons override the farther ones, two buffers, named the frame buffer and the depth buffer, are used.
The depth buffer is used to store depth values for each (x, y) position as surfaces are processed (0 ≤ depth ≤ 1).
The frame buffer is used to store the intensity (color) value at each (x, y) position.
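The interaction of the two buffers can be sketched as follows. This is a minimal sketch, assuming points arrive with precomputed depths in [0, 1] where smaller means closer; `render_point` and the buffer names are hypothetical, not a standard API.

```python
# Minimal z-buffer: the depth buffer starts at the maximum depth (1.0),
# the frame buffer at the background color. A point overwrites a pixel
# only if it is closer than the surface already stored there.
WIDTH, HEIGHT = 4, 4
depth_buffer = [[1.0] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [["bg"] * WIDTH for _ in range(HEIGHT)]

def render_point(x, y, depth, color):
    if depth < depth_buffer[y][x]:   # closer than the current surface?
        depth_buffer[y][x] = depth   # remember the new nearest depth
        frame_buffer[y][x] = color   # and store its color

render_point(1, 1, 0.8, "red")    # far surface drawn first
render_point(1, 1, 0.3, "blue")   # nearer surface overwrites it
render_point(1, 1, 0.9, "green")  # farther surface is rejected
print(frame_buffer[1][1])  # blue
```

Note the drawing order does not matter: whichever surface is nearest at a pixel ends up in the frame buffer, which is exactly why this image-space test solves the hidden-surface problem per pixel.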
Scan-Line Method:

The Edge Table: it contains the coordinate endpoints of each line in the scene, the inverse slope of each line, and pointers into the polygon table to connect edges to surfaces.

The Polygon Table: it contains the plane coefficients, surface material properties, other surface data, and possibly pointers into the edge table.
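The two tables can be sketched as plain records. The field names here are hypothetical choices for illustration; the structure follows the description above (edges carry endpoints, an inverse slope, and a pointer into the polygon table; polygons carry plane coefficients, material data, and back-pointers).

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    # Coordinate endpoints, inverse slope (dx/dy for scan-line stepping),
    # and an index into the polygon table linking the edge to its surface.
    x0: float; y0: float; x1: float; y1: float
    inv_slope: float
    polygon_id: int

@dataclass
class Polygon:
    # Plane coefficients (A, B, C, D) of Ax + By + Cz + D = 0,
    # surface material data, and back-pointers into the edge table.
    plane: tuple
    material: str
    edge_ids: list = field(default_factory=list)

# One edge linked to its surface:
poly = Polygon(plane=(0.0, 0.0, 1.0, -5.0), material="matte", edge_ids=[0])
edge = Edge(x0=0, y0=0, x1=4, y1=8, inv_slope=(4 - 0) / (8 - 0), polygon_id=0)
print(edge.inv_slope)  # 0.5
```

The inverse slope is stored because a scan-line algorithm steps down one scan line (Δy = 1) at a time, so each edge's x-intersection advances by exactly `inv_slope` per line.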
Area-Subdivision Method:

A. Surrounding surface: one that completely encloses the area.
B. Overlapping surface: one that is partly inside and partly outside the area.
C. Inside surface: one that is completely inside the area.
D. Outside surface: one that is completely outside the area.
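The four classifications can be sketched for axis-aligned boxes. This is a minimal sketch: `classify` is a hypothetical helper, and both the surface and the area are simplified to (xmin, ymin, xmax, ymax) rectangles.

```python
def classify(surf, area):
    """Classify a surface box against an area box; boxes are
    (xmin, ymin, xmax, ymax)."""
    sx0, sy0, sx1, sy1 = surf
    ax0, ay0, ax1, ay1 = area
    if sx0 <= ax0 and sy0 <= ay0 and sx1 >= ax1 and sy1 >= ay1:
        return "surrounding"   # A: completely encloses the area
    if sx0 >= ax0 and sy0 >= ay0 and sx1 <= ax1 and sy1 <= ay1:
        return "inside"        # C: completely inside the area
    if sx1 < ax0 or sx0 > ax1 or sy1 < ay0 or sy0 > ay1:
        return "outside"       # D: completely outside the area
    return "overlapping"       # B: partly inside, partly outside

area = (0, 0, 10, 10)
print(classify((-5, -5, 15, 15), area))  # surrounding
print(classify((2, 2, 4, 4), area))      # inside
print(classify((20, 20, 30, 30), area))  # outside
print(classify((5, 5, 15, 15), area))    # overlapping
```

In the full algorithm, an area whose surfaces are all outside (or covered by one surrounding surface) needs no further work; otherwise the area is subdivided and each quadrant is classified again.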
A-Buffer Method:

The A-buffer expands on the depth-buffer method to allow transparencies. The key data structure in the A-buffer is the accumulation buffer. Each position in the A-buffer has two fields:

Depth field: stores a positive or negative real number.
Intensity field: stores surface-intensity information or a pointer value.
If depth >= 0, the number stored at that position is the depth of a single surface overlapping the corresponding pixel area. The intensity field then stores the RGB components of the surface color at that point and the percent of pixel coverage.

If depth < 0, it indicates multiple-surface contributions to the pixel intensity. The intensity field then stores a pointer to a linked list of surface data. The surface buffer in the A-buffer includes:

RGB intensity components
Opacity parameter
Depth
Percent of area coverage
Surface identifier
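The two-field cell layout can be sketched directly. This is a hypothetical structure for illustration (`make_cell` and the dictionary keys are made-up names): one surface stores its depth plus RGB and coverage, while a negative depth flags that the intensity field instead points at a list of contributing surface records.

```python
# Each A-buffer cell has a depth field and an intensity field.
# depth >= 0: intensity holds a single surface's RGB + percent coverage.
# depth <  0: intensity holds (a pointer to) a list of surface records.
def make_cell(surfaces):
    if len(surfaces) == 1:
        s = surfaces[0]
        return {"depth": s["depth"], "intensity": (s["rgb"], s["coverage"])}
    return {"depth": -1.0, "intensity": surfaces}  # multi-surface case

opaque = make_cell([{"depth": 0.4, "rgb": (255, 0, 0), "coverage": 1.0}])
layered = make_cell([
    {"depth": 0.4, "rgb": (255, 0, 0), "coverage": 1.0, "opacity": 0.5},
    {"depth": 0.7, "rgb": (0, 0, 255), "coverage": 1.0, "opacity": 1.0},
])
print(opaque["depth"] >= 0, layered["depth"] < 0)  # True True
```

Keeping the whole list of contributing surfaces (rather than only the nearest one, as in the plain z-buffer) is what lets the A-buffer composite transparent surfaces correctly.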
Surface Rendering:

Surface rendering involves setting the surface intensity of objects according to the lighting conditions in the scene and according to assigned surface characteristics. The lighting conditions specify the intensity and positions of light sources and the general background illumination required for a scene. The surface characteristics of objects, on the other hand, specify the degree of transparency and the smoothness or roughness of the surface. Usually the surface-rendering methods are combined with perspective and visible-surface identification to generate a high degree of realism in a displayed scene.
Set the surface intensity of objects according to:
Lighting conditions in the scene
Assigned surface characteristics

Lighting specifications include the intensity and positions of light sources and the general background illumination required for a scene. Surface properties include the degree of transparency and how rough or smooth the surfaces are.
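Setting surface intensity from the lighting conditions can be sketched with the standard diffuse (Lambert) term plus an ambient term for the general background illumination. This is a minimal sketch under assumed conventions (unit-length vectors; `lambert_intensity` is a hypothetical helper, not the text's own method).

```python
def lambert_intensity(normal, light_dir, light_intensity, ambient):
    """intensity = ambient + light * max(0, N . L), with unit vectors."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return ambient + light_intensity * max(0.0, n_dot_l)

# A surface facing the light is brightest; one facing away receives
# only the ambient background illumination.
facing = lambert_intensity((0, 0, 1), (0, 0, 1), 0.8, ambient=0.2)
away = lambert_intensity((0, 0, -1), (0, 0, 1), 0.8, ambient=0.2)
print(facing, away)  # 1.0 0.2
```

Surface characteristics such as roughness or transparency would enter as additional terms (e.g. a specular component or alpha blending), which is where this simple model is usually extended.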

3D computer graphics utilize a three-dimensional representation for rendering images. This article discusses parallel and perspective projection methods, highlighting their differences in preserving proportions and creating realistic views. Parallel projection maintains relative proportions, while perspective projection provides depth perception but may introduce ambiguity. Depth cueing and different types of perspective projections are also explored.


Uploaded on Oct 06, 2024


