Overview of Mass Storage Systems in Computer Engineering

Chapter 12: Mass Storage
CS342 Operating Systems
Bilkent University, Department of Computer Engineering
Last Update: Dec 14, 2017
Objectives and Outline

Objectives
Describe the physical structure of secondary and tertiary storage devices and the resulting effects on the uses of the devices
Explain the performance characteristics of mass-storage devices
Discuss operating-system services provided for mass storage, including RAID and HSM
Outline
Overview of Mass Storage Structure
Disk Structure
Disk Attachment
Disk Scheduling
Disk Management
Swap-Space Management
RAID Structure
Stable-Storage Implementation
Tertiary Storage Devices
Operating System Issues
Performance Issues
Mass Storage

Mass storage: permanent storage; a large volume of data can be stored permanently (powering off will not cause loss of data)
Secondary storage: always online; hard disk
Tertiary storage: tapes, etc.
Overview of Mass Storage Systems: Magnetic Disks
Magnetic disks provide the bulk of secondary storage in modern computers
Drives rotate at 60 to 200 times per second
Transfer rate is the rate at which data flow between the drive and the computer
Positioning time (random-access time) is the time to move the disk arm to the desired cylinder (seek time) plus the time for the desired sector to rotate under the disk head (rotational latency)
Head crash results from the disk head making contact with the disk surface
That's bad
Disks can be removable
Moving-head Disk Mechanism

[Figure: moving-head disk mechanism (platters, tracks, cylinders, and the arm assembly).]
Overview of Mass Storage Systems: Magnetic Tapes
Magnetic tape
Was an early secondary-storage medium
Relatively permanent and holds large quantities of data
20-200 GB typical storage
Mainly used for backup, storage of infrequently-used data, and as a transfer medium between systems
Access time is slow
Random access is ~1000 times slower than disk
Once data is under the head, transfer rates are comparable to disk
Common technologies are 4mm, 8mm, 19mm, LTO-2 and SDLT
Disk Structure
A disk drive is addressed as a large 1-dimensional array of blocks, where the logical block is the smallest unit of transfer.
The 1-dimensional array of blocks is mapped into the sectors of the disk sequentially.
Sector 0 is the first sector of the first track on the outermost cylinder.
Mapping proceeds in order through that track, then through the rest of the tracks in that cylinder, and then through the rest of the cylinders from outermost to innermost.
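As a rough illustration of this sequential mapping, the sketch below (Python) converts a logical block number to a (cylinder, track, sector) triple. It assumes a fixed number of sectors per track, which real drives do not have (they use zoned recording), and the geometry constants are made-up illustrative values.

```python
# A minimal sketch of the sequential block-to-sector mapping described above,
# under the simplifying assumption that every track holds the same number of
# sectors. The geometry values are illustrative, not from a real drive.

SECTORS_PER_TRACK = 63
TRACKS_PER_CYLINDER = 16   # = number of surfaces/heads

def block_to_chs(block):
    """Map a logical block number to (cylinder, track, sector).

    Block 0 is sector 0 of track 0 on the outermost cylinder; numbering
    walks through that track, then the other tracks of the cylinder,
    then cylinder by cylinder inward."""
    sector = block % SECTORS_PER_TRACK
    track = (block // SECTORS_PER_TRACK) % TRACKS_PER_CYLINDER
    cylinder = block // (SECTORS_PER_TRACK * TRACKS_PER_CYLINDER)
    return cylinder, track, sector

print(block_to_chs(0))      # (0, 0, 0): first sector, outermost cylinder
print(block_to_chs(1500))   # (1, 7, 51) under this illustrative geometry
```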
Disk Attachment
Host-attached storage is accessed through I/O ports talking to disk I/O busses.
Attachment technologies and protocols (various disk I/O buses):
IDE, EIDE, ATA, SATA
USB
SCSI
Fiber Channel
The host controller in the computer uses the bus to talk to the disk controller built into the drive or storage array.
[Figure: CPU and RAM sit on the computer I/O bus; a host controller bridges to the disk I/O bus (SCSI, IDE, SATA, etc.), where the disk controller on the drive exchanges messages with it.]
Disk Attachment Example: SCSI and Fiber Channel
SCSI itself is a bus, with up to 16 devices on one cable; the SCSI initiator requests an operation and SCSI targets perform tasks.
Each target can have up to 8 logical units (disks attached to the device controller).
FC (fiber channel) is a high-speed serial architecture.
Can be a switched fabric with a 24-bit address space, the basis of storage area networks (SANs), in which many hosts attach to many storage units.
Can be an arbitrated loop (FC-AL) of 126 devices.
Disk Attachment Example: SCSI

[Figure: CPU and RAM on a PCI bus with a SCSI host adapter (the SCSI initiator); the SCSI bus connects up to 16 devices, each disk behind its own SCSI controller (the SCSI targets).]
Network Attached Storage
Network-attached storage (NAS) is storage made available over a network rather than over a local connection (such as a bus).
NFS and CIFS are common distributed file system protocols used for network-attached storage.
We use those protocols to access remote storage that is connected to a network.
Implemented via remote procedure calls (RPCs) between host and storage.
The newer iSCSI protocol uses an IP network to carry the SCSI protocol:
SCSI – SCSI bus/cable (local/host-attached)
iSCSI – TCP/IP network (network-attached)
Network Attached Storage

[Figure: hosts reach NAS units over a TCP/IP network using the NFS or CIFS protocol.]
Storage Area Network

Common in large storage environments (and becoming more common)
Multiple hosts attached to multiple storage arrays – flexible
Uses a different communication infrastructure (the SAN) than the common networking infrastructure
Disk Scheduling

The operating system is responsible for using hardware efficiently — for the disk drives, this means having a fast access time and large disk bandwidth.
Disk access time has two major components:
Seek time is the time for the disk to move the head to the cylinder containing the desired sector (block).
Rotational latency is the additional waiting time for the disk to rotate the desired sector under the disk head.
Minimize seek time:
Seek time ∝ seek distance (between cylinders)
Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer.
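To put numbers on these definitions, here is a small sketch with made-up drive parameters (not the specs of any particular drive): it estimates the access time of one request as seek + average rotational latency (half a rotation) + transfer, then derives the disk bandwidth for a run of small random requests.

```python
# Rough cost model for one disk request, using the two access-time
# components named above. All numbers are illustrative.

def access_time_ms(seek_ms, rpm, transfer_mb_per_s, request_kb):
    """Seek + average rotational latency (half a rotation) + transfer."""
    rotational_latency_ms = 0.5 * (60_000 / rpm)        # ms per half rotation
    transfer_ms = request_kb / 1024 / transfer_mb_per_s * 1000
    return seek_ms + rotational_latency_ms + transfer_ms

# Example: 5 ms seek, 7200 RPM (120 rotations/s), 100 MB/s, 4 KB request.
t = access_time_ms(seek_ms=5, rpm=7200, transfer_mb_per_s=100, request_kb=4)
print(f"{t:.2f} ms per request")       # ~9.21 ms, dominated by seek + rotation

# Disk bandwidth as defined above: total bytes moved / total elapsed time.
total_bytes = 100 * 4 * 1024           # 100 such requests
total_time_s = 100 * t / 1000
print(f"{total_bytes / total_time_s / 1024:.0f} KB/s")   # far below 100 MB/s
```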
Disk I/O Queue

[Figure: processes 1-3 issue file requests to the kernel; the kernel turns them into block requests ("request for block x", where x is on cylinder y) that wait in the disk request queue until the disk controller services them on the disk.]
Disk Scheduling

Several algorithms exist to schedule the servicing of disk I/O requests.
Assume the disk has 200 cylinders, numbered 0 to 199.
We illustrate the algorithms with a request queue. In the queue we have requests for blocks sitting on various cylinders; we just focus on the cylinder numbers.

98, 183, 37, 122, 14, 124, 65, 67      (these are cylinder numbers)

Head pointer: 53 (the head is currently on cylinder 53)
We have 8 requests queued; they are for blocks sitting on cylinders 98, 183, …
FCFS Algorithm

First Come First Served
total head movement = 640 cylinders
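A short sketch (Python) that reproduces this number: FCFS simply serves requests in arrival order, summing the head movement between consecutive cylinders.

```python
def fcfs(head, requests):
    """Total head movement when requests are served in arrival order."""
    total = 0
    for cyl in requests:
        total += abs(cyl - head)
        head = cyl
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs(53, queue))   # 640 cylinders, as on the slide
```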
SSTF Algorithm

Shortest Seek Time First
Selects the request with the minimum seek time from the current head position.
SSTF scheduling is a form of SJF scheduling; it may cause starvation of some requests.
SSTF

Assume the head initially moves towards the right.
total head movement = 236 cylinders
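The same example as a greedy simulation: at each step, serve whichever pending request is closest to the current head position.

```python
def sstf(head, requests):
    """Always serve the pending request closest to the current head position."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda c: abs(c - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

# Service order: 65, 67, 37, 14, 98, 122, 124, 183.
print(sstf(53, [98, 183, 37, 122, 14, 124, 65, 67]))   # 236 cylinders
```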
SCAN/ELEVATOR Algorithm

The disk arm starts at one end of the disk and moves toward the other end, servicing requests until it gets to the other end of the disk, where the head movement is reversed and servicing continues.
Sometimes called the elevator algorithm.
Several variations of the algorithm exist:
C-SCAN
LOOK
C-LOOK
SCAN

Assume the head initially moves towards the left.
total head movement = 236 cylinders
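A sketch of the SCAN head-movement total. Since the arm sweeps all the way to a disk edge before reversing, the total is just distance-to-edge plus the return sweep past the farthest request on the other side (cylinders 0..199, as assumed above).

```python
def scan(head, requests, direction, max_cyl=199):
    """SCAN: sweep to the end of the disk in `direction`, then reverse.
    Returns total head movement in cylinders."""
    left = [c for c in requests if c < head]
    right = [c for c in requests if c >= head]
    if direction == "left":
        # down to cylinder 0, then back up to the rightmost request
        return head + (max(right) if right else 0)
    else:
        # up to max_cyl, then back down to the leftmost request
        return (max_cyl - head) + (max_cyl - (min(left) if left else max_cyl))

print(scan(53, [98, 183, 37, 122, 14, 124, 65, 67], "left"))   # 236
```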
C-SCAN

C-SCAN: Circular SCAN
Provides a more uniform wait time than SCAN.
Wait time for a request: the time between the arrival of the request in the queue and the completion of handling the request.
The head moves from one end of the disk to the other, servicing requests as it goes. When it reaches the other end, however, it immediately returns to the beginning of the disk, without servicing any requests on the return trip.
Treats the cylinders as a circular list that wraps around from the last cylinder to the first one.
C-SCAN

Assume the head initially moves towards the right.
Total movement: 382
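The same total, computed directly; note that the slide's figure of 382 counts the full return trip from cylinder 199 back to 0 as head movement, and the sketch follows that convention.

```python
def c_scan(head, requests, max_cyl=199):
    """C-SCAN moving right: sweep to max_cyl, return to 0 (the return trip
    still moves the head), then sweep up to the last remaining request."""
    left = [c for c in requests if c < head]
    to_end = max_cyl - head               # service everything to the right
    wrap = max_cyl                        # return trip from max_cyl to 0
    finish = max(left) if left else 0     # sweep up to the largest low request
    return to_end + wrap + finish

print(c_scan(53, [98, 183, 37, 122, 14, 124, 65, 67]))   # 146 + 199 + 37 = 382
```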
C-LOOK

A version of C-SCAN.
The arm only goes as far as the last request in each direction, then reverses direction immediately (without first going all the way to the end of the disk) and goes to the first request at the other end of the disk.
C-LOOK

Assume the head initially moves towards the right.
Total movement: 322
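A sketch of the C-LOOK total; it assumes (like the example) that there are pending requests on both sides of the head, and counts the jump from the highest request down to the lowest as head movement.

```python
def c_look(head, requests):
    """C-LOOK moving right: go only as far as the highest request, jump to
    the lowest pending request (the jump counts), then sweep up again."""
    left = sorted(c for c in requests if c < head)
    right = sorted(c for c in requests if c >= head)
    total = right[-1] - head                 # up to the highest request (183)
    if left:
        total += right[-1] - left[0]         # jump down to the lowest (14)
        total += left[-1] - left[0]          # sweep up to the last low one (37)
    return total

print(c_look(53, [98, 183, 37, 122, 14, 124, 65, 67]))   # 130 + 169 + 23 = 322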
LOOK

From 53 to 183 (servicing requests along the way)
From 183 to 14 (servicing requests along the way)
Total = 299
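For LOOK the total is even simpler: the arm sweeps between the highest and lowest pending requests and never visits the disk edges. The sketch assumes the head starts between the two extremes and moves right first, as in the example.

```python
def look(head, requests):
    """LOOK moving right: sweep up to the highest request, reverse, sweep
    down to the lowest (no trips to the disk edges)."""
    highest, lowest = max(requests), min(requests)
    return (highest - head) + (highest - lowest)

print(look(53, [98, 183, 37, 122, 14, 124, 65, 67]))   # 130 + 169 = 299
```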
Selecting a Disk-Scheduling Algorithm
SSTF is common and has a natural appeal.
SCAN and C-SCAN perform better for systems that place a heavy load on the disk.
Performance depends on the number and types of requests.
Requests for disk service can be influenced by the file-allocation method.
The disk-scheduling algorithm should be written as a separate module of the operating system, allowing it to be replaced with a different algorithm if necessary.
Either SSTF or LOOK is a reasonable choice for the default algorithm.
Disk Management
Low-level formatting, or physical formatting — dividing a disk into sectors that the disk controller can read and write.
To use a disk to hold files, the operating system still needs to record its own data structures on the disk:
Partition the disk into one or more groups of cylinders (volumes).
Logical formatting, or making a file system.
Low-Level Formatting

[Figure: before low-level formatting, the platter is just magnetic material that can store bits; after, it is divided into sectors, each with a header (HDR, carrying the sector number), a 512-byte data area, and an ECC (error-correcting code) field.]
Boot Process

[Figure: power-on path from a tiny boot program in ROM, through the MBR (boot code plus partition table) on disk, to the boot block of the boot partition, and finally the kernel loaded into RAM.]
1. Boot code in ROM is run; it brings the MBR into memory and starts the MBR boot code.
2. The MBR boot code runs; it looks at the partition table, learns which partition is the boot partition, and brings in and starts the boot code in that partition.
3. The boot code in the boot partition loads the kernel sitting in that partition.
Bad Blocks

Disk sectors (blocks) may become defective and can no longer store data.
This is a hardware defect; the system should not put data there.
Possible strategy:
A bad block X can be remapped to a good block Y.
Whenever the OS tries to access X, the disk controller accesses Y.
Some sectors (blocks) of the disk can be reserved for this mapping.
This is called sector sparing.
Bad Blocks

[Figure: the disk controller intercepts a block (sector) request for a bad sector and, using a bad-block mapping table stored on disk, redirects it to one of the reserved spare sectors.]
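A toy model of sector sparing under the strategy above. The class name, the in-memory dict, and the spare-block numbers are all illustrative; a real controller persists the table on disk and handles it in firmware.

```python
# Toy sector-sparing model: a remapping table from bad blocks to spares,
# consulted on every request. Illustrative only, not a real controller API.

class DiskController:
    def __init__(self, spare_blocks):
        self.spares = list(spare_blocks)   # reserved replacement blocks
        self.remap = {}                    # bad block -> spare block

    def mark_bad(self, block):
        """Remap a newly detected bad block to the next free spare."""
        self.remap[block] = self.spares.pop(0)

    def physical_block(self, block):
        """Translate the block the OS asked for into the block actually used."""
        return self.remap.get(block, block)

ctrl = DiskController(spare_blocks=[10_000, 10_001, 10_002])
ctrl.mark_bad(42)
print(ctrl.physical_block(42))   # 10000: OS asks for 42, controller uses the spare
print(ctrl.physical_block(43))   # 43: healthy blocks pass through unchanged
```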
Swap-Space Management

Swap space — virtual memory uses disk space as an extension of main memory.
Swap space can be carved out of the normal file system or, more commonly, it can be in a separate disk partition.
Swap-space management:
The kernel uses swap maps to track swap-space use.
Example: the 4.3BSD OS allocates swap space when a process starts; it holds the text segment (the program) and the data segment.
Example: the Solaris 2 OS allocates swap space only when a page is forced out of physical memory, not when the virtual-memory page is first created.
Data Structures for Swapping on Linux Systems

[Figure: the Linux swap-area data structure (swap map), with a free slot marked.]
RAID Structure

RAID: Redundant Array of Independent Disks
Multiple disk drives provide reliability via redundancy.
Multiple disks can be organized in different ways for reliability and performance (RAID is arranged into different levels/schemes).
If you have many disks:
The probability of one of them failing becomes higher.
The probability of all of them failing (at the same time) becomes lower.
RAID

RAID schemes improve performance, and improve the reliability of the storage system by storing redundant data.
Redundancy improves reliability (and also performance, to some extent).
Disk striping improves performance.
Disk striping uses a group of disks as one storage unit and distributes (stripes) the data over those disks.
Redundancy is achieved by:
Mirroring or shadowing, which keeps a duplicate of each disk.
Use of parity bits or ECC (error-correcting codes), which causes much less redundancy.
RAID Striping Example: Improves Performance

[Figure: the operating system asks the RAID controller for blocks (n, n+1, …, n+k) of the disk (k contiguous disk blocks); the RAID controller splits the request into "give block n", "give block n+1", "give block n+2", "give block n+3" and issues them in parallel to four disk controllers, so consecutive blocks are read from four disks at once. This is striping.]
Different RAID Organizations/Schemes (also called Levels)
RAID Level 0: block level striping (no redundancy)
RAID Level 1: mirroring
RAID Level 2: bit level striping + error correcting codes
RAID Level 3: bit level striping + parity
RAID Level 4: block level striping + parity
RAID Level 5: block level striping + distributed parity
….
RAID 0: Block-Level Striping

Data is in blocks; adjacent blocks go onto different disks.
One block can be k sectors.
The file system considers all disks as a single large disk.

Disk 1    Disk 2    Disk 3    Disk 4
Block 0   Block 1   Block 2   Block 3
Block 4   Block 5   Block 6   Block 7
Block 8   Block 9   Block 10  Block 11

No redundancy; parallel reads for large data transfers (larger than the block size).
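The placement rule in the layout above fits in one line of code: blocks are assigned round-robin, so block i lands on disk i mod N at offset i div N.

```python
# The RAID 0 placement rule shown above: adjacent logical blocks go to
# different disks, round-robin across the array.

NUM_DISKS = 4

def place(block):
    """Logical block -> (disk index, block position within that disk)."""
    return block % NUM_DISKS, block // NUM_DISKS

for b in range(8):
    disk, offset = place(b)
    print(f"Block {b} -> Disk {disk + 1}, position {offset}")
# Block 0 -> Disk 1, Block 1 -> Disk 2, ..., Block 4 -> Disk 1 again
```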
RAID 0

Assume a file is allocated a contiguous set of blocks. With the layout above, the blocks of file X and file Y end up spread across all four disks. This is called striping.
RAID 1: Mirroring

No striping.
[Figure: Disk 1 and Disk 2 hold identical contents; Disk 2 is the mirrored copy of Disk 1.]
RAID 1

We are just mirroring the disks (copying one disk to another one).
Without striping there is no performance gain, except for reads (doubled read rate).
Reliability is provided: if one disk fails, data can be recovered from the other disk.
If there are originally N (N >= 1) disks, we need N more disks to mirror.
Quite costly in terms of disks required; this cost is for reliability. We can express the cost as:
overhead/data = 1/1
RAID 2

Bit-level striping.
Error-correcting codes (ECC) are used.
For example, every 4 data bits are protected with 3 redundant bits. If one of these 4 bits is in error, we can work out which one it is and correct it using the 3 other code bits.
Hamming codes can be used.
[Figure: groups of 4 data bits plus 3 error-correction bits; the data bits can be the bits of one byte.]
overhead/data = 3/4
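The slide does not fix a particular code; the classic choice matching the 4-data-bit/3-check-bit figure is Hamming(7,4), sketched below with check bits at positions 1, 2 and 4. Flipping any single bit (as if one disk returned a bad bit) is detected and corrected.

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1,d2,d3,d4] into 7 bits with 3 parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # covers positions 2,3,6,7
    p4 = d2 ^ d3 ^ d4          # covers positions 4,5,6,7
    return [p1, p2, d1, p4, d2, d3, d4]      # positions 1..7

def hamming74_correct(code):
    """Locate and fix a single flipped bit; returns the corrected word."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4          # 0 = no error, else 1-based position
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                                  # flip one bit (one "disk" errs)
print(hamming74_correct(word) == hamming74_encode([1, 0, 1, 1]))   # True
```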
RAID 2 Organization

Disk 1  Disk 2  Disk 3  Disk 4  Disk 5  Disk 6  Disk 7
b0      b1      b2      b3      c0      c1      c2
b4      b5      b6      b7      c0      c1      c2
…       …       …       …       …       …       …

bx: data bits; cx: control bits (each row's control bits protect that row's data bits)
RAID 2

[Figure: bit streams across seven disks; four disks carry data bits (d) and three disks carry control bits (c).]
d: data bit; c: control bit
RAID 3

Improved on RAID 2 in terms of space efficiency.
Only one control bit is used for k data bits.
That control bit is a parity bit: compute the parity of the k bits and store it in the parity bit.
k can be 4, 8, …
This is enough to detect and correct one-bit errors (the controller already knows which disk failed, so parity pinpoints the lost bits).
overhead/data = 1/4

Example (even parity):
b  b  b  b  p
1  0  1  1  1
0  1  0  1  0
RAID 3: Example

Disk 1  Disk 2  Disk 3  Disk 4  Disk 5
b0      b1      b2      b3      p
b4      b5      b6      b7      p
…       …       …       …       …
RAID 3: Example

Disk 1  Disk 2  Disk 3  Disk 4  Disk 5
1       0       1       1       1
0       1       0       1       0
…       …       …       …       …

Even parity is used here.
RAID 3: Example

Let one disk fail! How can we recover its data?

Disk 1  Disk 2  Disk 3  Disk 4  Disk 5
1       0       ?       1       1
0       1       ?       1       0
…       …       …       …       …

Look at disks 1, 2, 4, and 5, compute the parity, and according to that regenerate the content of disk 3.
RAID 3: Example

Disk 1  Disk 2  Disk 3  Disk 4  Disk 5
1       0       [1]     1       1
0       1       [0]     1       0
…       …       …       …       …

(The recovered contents of disk 3 are shown in brackets.)
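The reconstruction shown above, written out with XOR: the parity disk stores the XOR of the data bits, so a failed disk's bit is simply the XOR of everything that survives (the remaining data bits plus the parity bit).

```python
from functools import reduce

def parity(bits):
    """XOR of all bits; equals 1 iff the number of 1s is odd (even parity)."""
    return reduce(lambda a, b: a ^ b, bits)

stripe = [1, 0, 1, 1]                 # data bits on disks 1-4 (first row above)
p = parity(stripe)                    # parity disk (disk 5) stores 1

# Disk 3 fails: recover its bit from disks 1, 2, 4 and the parity disk.
surviving = [stripe[0], stripe[1], stripe[3], p]
print(parity(surviving))              # 1 == the lost bit from disk 3
```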
RAID 4

Uses block-level striping (like RAID 0).
But uses an additional disk to store the parity block.
Error recovery as in RAID 3.

Disk 1    Disk 2    Disk 3    Disk 4    Disk 5
Block 0   Block 1   Block 2   Block 3   Parity Block
Block 4   Block 5   Block 6   Block 7   Parity Block
Block 8   Block 9   Block 10  Block 11  Parity Block
…
RAID 5

Similar to RAID 4, but the parity blocks are distributed over the disks.
The load on the parity disk is distributed in this way.

Disk 1    Disk 2    Disk 3    Disk 4         Disk 5
Block 0   Block 1   Block 2   Block 3        Parity Block
Block 4   Block 5   Block 6   Parity Block   Block 7
Block 8   Block 9   Parity Block   Block 10  Block 11
…
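A sketch contrasting the two layouts: RAID 4 pins parity to one disk, while RAID 5 rotates it per stripe, spreading the parity-update load. The left-rotating placement below is one common convention; actual layouts vary by implementation.

```python
NUM_DISKS = 5

def parity_disk_raid4(stripe):
    """RAID 4: parity always lives on the last disk."""
    return NUM_DISKS - 1

def parity_disk_raid5(stripe):
    """RAID 5: parity rotates one disk to the left each stripe (one convention)."""
    return (NUM_DISKS - 1 - stripe) % NUM_DISKS

for s in range(5):
    print(s, parity_disk_raid4(s), parity_disk_raid5(s))
# RAID 4 column is constant: every parity write hits disk 5.
# RAID 5 column cycles 4,3,2,1,0: no single disk becomes the bottleneck.
```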
RAID 6

Similar to RAID level 5, but stores multiple ECC bits instead of a single parity bit, to guard against multiple disk failures.
Also called the P+Q scheme.
Reed-Solomon codes are used as the ECC code.
Example: a 2-bit ECC code can be used for every 4 bits of data.
RAID Levels (0 through 6) Summary

[Figure: summary diagram of RAID levels 0 through 6.]
RAID Levels 0+1 and 1+0

RAID 0+1: first stripe, then mirror
RAID 1+0: first mirror, then stripe (stripe of mirrors)
RAID 0+1

Mirror of stripes: first a set of disks is striped, then the set is mirrored.
Minimum 3-4 disks required.
Two groups are created. Within a group (set of disks), data is striped. Across groups, data is mirrored.
[Figure: RAID 0+1 with six disks. Group 1 (Disks 1-3) stripes Blocks 0-8: Disk 1 holds Blocks 0, 3, 6; Disk 2 holds Blocks 1, 4, 7; Disk 3 holds Blocks 2, 5, 8. Group 2 (Disks 4-6) is an identical mirrored copy of Group 1.]
RAID 1+0

Stripe of mirrored pairs: first mirror disks in pairs, then stripe across the pairs.
Minimum 4 disks required. Gives both reliability and performance.
Disks are grouped in pairs; within a pair (2 disks), data is mirrored.
Across groups, data is striped.
[Figure: RAID 1+0 with six disks in three mirrored pairs. Group 1 (Disks 1-2) holds Blocks 0, 3, 6; Group 2 (Disks 3-4) holds Blocks 1, 4, 7; Group 3 (Disks 5-6) holds Blocks 2, 5, 8; each block is stored on both disks of its pair.]
 
RAID 1+0 has better reliability than RAID 0+1.
Advantage of 1+0 over 0+1:
In 0+1, if a single disk fails, its entire stripe group becomes unavailable; only the other stripe group can be used.
In 1+0, if a single disk fails, its mirror is still available, so the whole stripe set remains usable.
Stable-Storage Implementation
The write-ahead log scheme requires stable storage.
Stable storage definition: information residing in stable storage is never lost. A write to a block either completes fully, or is not reflected at all and does not corrupt the existing data (the previous data stays intact).
To implement stable storage:
Replicate information on more than one nonvolatile storage device with independent failure modes.
Update the information in a controlled manner to ensure that we can recover the stable data after any failure during data transfer or recovery.
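A minimal sketch of the controlled-update rule, assuming two copies with independent failure modes: write them one at a time, and on recovery prefer a copy that is internally consistent. The file names and checksum format are illustrative, and real implementations work with raw disk blocks rather than files.

```python
import hashlib, json, os

def _write_copy(path, data: bytes):
    """Write data plus a checksum, forcing it to disk before we touch the next copy."""
    record = {"data": data.hex(), "sum": hashlib.sha256(data).hexdigest()}
    with open(path, "w") as f:
        json.dump(record, f)
        f.flush()
        os.fsync(f.fileno())

def stable_write(data: bytes):
    _write_copy("copy_a", data)   # only after copy A is safely down...
    _write_copy("copy_b", data)   # ...do we overwrite copy B

def _read_copy(path):
    try:
        with open(path) as f:
            record = json.load(f)
        data = bytes.fromhex(record["data"])
        if hashlib.sha256(data).hexdigest() == record["sum"]:
            return data
    except (OSError, ValueError, KeyError):
        pass
    return None                   # missing or corrupted copy

def stable_read():
    """A crash corrupts at most the copy being written; the other one survives,
    so we either see the new value or the intact old value, never garbage."""
    return _read_copy("copy_a") or _read_copy("copy_b")

stable_write(b"balance=100")
print(stable_read())              # b'balance=100'
```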
Tertiary Storage Devices

Low cost is the defining characteristic of tertiary storage.
Generally, tertiary storage is built using removable media.
Common examples of removable media are floppy disks and CD-ROMs.
Removable Disks

Floppy disk — a thin flexible disk coated with magnetic material, enclosed in a protective plastic case.
Most floppies hold about 1 MB; similar technology is used for removable disks that hold more than 1 GB.
Removable magnetic disks can be nearly as fast as hard disks, but they are at a greater risk of damage from exposure.
Removable Disks

A magneto-optic disk records data on a rigid platter coated with magnetic material.
Optical disks do not use magnetism; they employ special materials that are altered by laser light.
WORM Disks

The data on read-write disks can be modified over and over.
WORM (Write Once, Read Many Times) disks can be written only once.
A thin aluminum film is sandwiched between two glass or plastic platters.
To write a bit, the drive uses a laser to burn a small hole through the aluminum; information can be destroyed but not altered.
Very durable and reliable.
Read-only disks, such as CD-ROM and DVD, come from the factory with the data pre-recorded.
Tapes

Compared to a disk, a tape is less expensive and holds more data, but random access is much slower.
Tape is an economical medium for purposes that do not require fast random access, e.g., backup copies of disk data, holding huge volumes of data.
Large tape installations typically use robotic tape changers that move tapes between tape drives and storage slots in a tape library.
stacker – a library that holds a few tapes
silo – a library that holds thousands of tapes
A disk-resident file can be archived to tape for low-cost storage; the computer can stage it back into disk storage for active use.
Operating System Issues

Major OS jobs are to manage physical devices and to present a virtual machine abstraction to applications.
For hard disks, the OS provides two abstractions:
Raw device – an array of data blocks.
File system – the OS queues and schedules the interleaved requests from several applications.
Application Interface

Most OSs handle removable disks almost exactly like fixed disks — a new cartridge is formatted and an empty file system is generated on the disk.
Tapes are presented as a raw storage medium, i.e., an application does not open a file on the tape; it opens the whole tape drive as a raw device.
Usually the tape drive is reserved for the exclusive use of that application.
Since the OS does not provide file system services, the application must decide how to use the array of blocks.
Since every application makes up its own rules for how to organize a tape, a tape full of data can generally only be used by the program that created it.
Tape Drives

The basic operations for a tape drive differ from those of a disk drive.
locate positions the tape to a specific block (corresponds to seek).
The read position operation returns the block number where the tape head is.
Tape drives are append-only devices; updating a block in the middle of the tape also effectively erases everything beyond that block.
An EOT mark is placed after a block that is written.
File Naming

The issue of naming files on removable media is especially difficult when we want to write data on a removable cartridge on one computer and then use the cartridge in another computer.
Contemporary OSs generally leave the name-space problem unsolved for removable media and leave it to applications and users to figure out how to access and interpret the data.
Some kinds of removable media (e.g., CDs) are so well standardized that all computers use them the same way.
Hierarchical Storage Management (HSM)

A hierarchical storage system extends the storage hierarchy beyond primary memory and secondary storage to incorporate tertiary storage — usually implemented as a jukebox of tapes or removable disks.
Usually incorporates tertiary storage by extending the file system:
Small and frequently used files remain on disk.
Large, old, inactive files are archived to the jukebox.
HSM is usually found in supercomputing centers and other large installations that have enormous volumes of data.
Speed

Two aspects of speed in tertiary storage are bandwidth and latency.
Bandwidth is measured in bytes per second.
Sustained bandwidth – average data rate during a large transfer; number of bytes / transfer time. This is the data rate when the data stream is actually flowing.
Effective bandwidth – average over the entire I/O time, including seek or locate and cartridge switching. This is the drive's overall data rate.
Effective bandwidth <= sustained bandwidth
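A quick worked example of the inequality, using made-up numbers (not from any datasheet): a drive that streams fast can still deliver a low effective bandwidth when the locate time dominates.

```python
# Illustrative numbers: a tape drive that streams at 10 MB/s but needs
# 40 s to locate the data before a 100 MB transfer.

sustained = 10.0                      # MB/s while the stream is flowing
transfer_mb = 100.0
locate_s = 40.0

transfer_s = transfer_mb / sustained               # 10 s of actual streaming
effective = transfer_mb / (locate_s + transfer_s)  # averaged over all I/O time
print(f"effective = {effective:.1f} MB/s vs sustained = {sustained} MB/s")
# effective = 2.0 MB/s: the locate time dominates small transfers
```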
Speed
Access latency – the amount of time needed to locate data.
Access time for a disk – move the arm to the selected cylinder and wait for the rotational latency; < 35 milliseconds.
Access on tape requires winding the tape reels until the selected block reaches the tape head; tens or hundreds of seconds.
Generally, random access within a tape cartridge is about a thousand times slower than random access on disk.
The low cost of tertiary storage is a result of having many cheap cartridges share a few expensive drives.
A removable library is best devoted to the storage of infrequently used data, because the library can only satisfy a relatively small number of I/O requests per hour.
Reliability
A fixed disk drive is likely to be more reliable than a removable disk or tape drive.
An optical cartridge is likely to be more reliable than a magnetic disk or tape.
A head crash in a fixed hard disk generally destroys the data, whereas the failure of a tape drive or optical disk drive often leaves the data cartridge unharmed.
Cost
Main memory is much more expensive than disk storage.
The cost per megabyte of hard disk storage is competitive with magnetic tape if only one tape is used per drive.
The cheapest tape drives and the cheapest disk drives have had about the same storage capacity over the years.
Tertiary storage gives a cost savings only when the number of cartridges is considerably larger than the number of drives.
Price per Megabyte of DRAM, from 1981 to 2004

[Figure: chart of DRAM price per megabyte over time.]
Price per Megabyte of Magnetic Hard Disk, from 1981 to 2004

[Figure: chart of hard-disk price per megabyte over time.]
Price per Megabyte of a Tape Drive, from 1984 to 2000

[Figure: chart of tape-drive price per megabyte over time.]
References

The slides here are adapted/modified from the textbook and its slides:
Operating System Concepts, Silberschatz et al., 7th and 8th editions, Wiley.
Modern Operating Systems, Andrew S. Tanenbaum, 3rd edition, 2009.