Operating Systems 101: Persistence — Fast File System (iv)

E.Y.
Jun 23, 2021

This is a course note from an online course on Operating Systems that I took on educative.io.

Chapter 4. Fast File System

The old UNIX file system design led to fragmented file allocation and poor performance, because free space was not carefully managed: a logically contiguous file could end up scattered, so reading it meant going back and forth across the disk, reducing performance dramatically.

  • As files are created and deleted over time, the free list gradually becomes more fragmented, so newly allocated files get blocks scattered across the disk
  • Also, the smaller the block size, the larger the positioning (seek) overhead relative to the data transferred per block

For example, suppose a file E consists of four blocks, E1 through E4, allocated from such a fragmented free list. E gets spread across the disk, and as a result, when accessing E you don't get peak (sequential) performance from the disk. Rather, you first read E1 and E2, then seek, then read E3 and E4. This fragmentation problem happened all the time in the old UNIX file system, and it hurt performance.

Random vs. Sequential Writes

When people talk about sequential vs random writes to a file, they’re generally drawing a distinction between writing without intermediate seeks (“sequential”), vs. a pattern of seek-write-seek-write-seek-write, etc. (“random”). As each random write gets smaller, you pay more and more of a penalty for the disk seeks.
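
As a rough illustration, here is a minimal Python sketch of the two patterns (the file names, sizes, and chunk size are arbitrary): it writes the same 4 MiB either at increasing offsets or at shuffled offsets, and on a spinning disk the shuffled version forces head movement between writes.

```python
import os
import random

CHUNK = 4096          # 4 KiB per write
N = 1024              # 4 MiB total
buf = b"\x00" * CHUNK

def write_pattern(path, offsets):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        for off in offsets:
            os.lseek(fd, off, os.SEEK_SET)  # in offset order, the head stays put
            os.write(fd, buf)
        os.fsync(fd)                        # push the data to the device
    finally:
        os.close(fd)

offsets = [i * CHUNK for i in range(N)]
write_pattern("seq.bin", offsets)           # no intermediate seeks
random.shuffle(offsets)
write_pattern("rand.bin", offsets)          # seek-write-seek-write ...
```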

Cylinder Groups

When you create a UFS file system, the disk slice is divided into cylinder groups. A cylinder group comprises one or more consecutive disk cylinders. Cylinder groups are then further divided into addressable blocks to control the structure of the files within the cylinder group. Each cylinder group contains a superblock (a redundant copy), a cylinder group summary block, an inode table, and a data block area.
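
A toy model of that layout, just as a mental picture (the field names are mine, not the actual UFS on-disk structures):

```python
from dataclasses import dataclass, field

@dataclass
class CylinderGroup:
    superblock_copy: bytes   # redundant copy of the filesystem superblock
    free_blocks: int         # summary info kept in the cg summary block
    free_inodes: int
    allocated_dirs: int      # directories placed in this group
    inode_table: list = field(default_factory=list)  # this group's inodes
    data_blocks: list = field(default_factory=list)  # this group's data area
```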

File Allocation Strategy

FFS has to decide what is “related” and place it within the same block group.

  • Placement of directories: First, find the cylinder group with a low number of allocated directories (to balance directories across groups) and a high number of free inodes (to subsequently be able to allocate a bunch of files), then put the directory data and inode in that group. (This heuristic is sketched in code after the summary below.)
  • Placement of files: First, FFS makes sure to allocate the data blocks of a file in the same group as its inode. Second, it places all files that are in the same directory in the cylinder group of that directory. Thus, if a user creates four files, /a/b, /a/c, /a/d, and /b/f, FFS would try to place the first three near one another (same group) and the fourth far away (in some other group).

As a result:

  • The data blocks of each file are near each file’s inode, and
  • Files in the same directory are near one another.
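
Here is a minimal sketch of the directory-placement heuristic, assuming each group is summarized by its allocated-directory count and free-inode count; the summary fields and the tie-breaking order are my assumptions for illustration.

```python
# Prefer groups with few allocated directories, breaking ties by free inodes.
# The real allocator also weighs free data blocks; this captures only the
# two criteria named above.
groups = [
    {"id": 0, "allocated_dirs": 3, "free_inodes": 120},
    {"id": 1, "allocated_dirs": 1, "free_inodes": 500},
    {"id": 2, "allocated_dirs": 1, "free_inodes": 80},
]

def pick_group_for_new_directory(groups):
    # Few directories first (balance), many free inodes second (room to grow).
    return min(groups, key=lambda g: (g["allocated_dirs"], -g["free_inodes"]))

print(pick_group_for_new_directory(groups)["id"])  # -> 1
```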

Allocating Large Files

In FFS, a large file might fill the block group it is first placed within and prevent subsequent "related" files from being placed in that group. Thus, for large files, FFS spreads the file across multiple block groups.
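
As a toy illustration of spreading (not the actual FFS rule, which tied chunk boundaries to the inode's direct and indirect pointer structure), imagine placing each fixed-size chunk of the file in the next cylinder group:

```python
# Toy policy: each CHUNK_BLOCKS-sized chunk of a large file goes in a
# different cylinder group, wrapping around the disk. Both constants are
# made up for illustration.
CHUNK_BLOCKS = 1024   # blocks per chunk before moving on
NUM_GROUPS = 8        # cylinder groups on this toy disk

def group_for_block(first_group, block_index):
    """Group that should hold the block_index-th block of the file."""
    return (first_group + block_index // CHUNK_BLOCKS) % NUM_GROUPS

# The first chunk lands with the inode; later chunks hop across groups.
print([group_for_block(2, b) for b in (0, 1023, 1024, 5000)])  # [2, 2, 3, 6]
```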

Spreading the file this way might hurt performance for sequential access, since reading the file now requires a seek between chunks. But we can address this problem by choosing a large enough chunk size: if each chunk is big enough, the file system will spend most of its time transferring data from disk and relatively little time seeking between chunks.

This process of reducing an overhead by doing more work per overhead paid is called amortisation. Note that the trend in disk drives is that transfer rate improves fairly rapidly but the mechanical aspects of drives related to seeks improve rather slowly. So over time, mechanical costs become relatively more expensive.
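
To make the amortisation concrete, here is a back-of-the-envelope calculation with assumed numbers (10 ms average positioning time and a 50 MiB/s transfer rate; both values are illustrative):

```python
SEEK_S = 0.010                  # assumed average positioning time
XFER_BPS = 50 * 1024 * 1024     # assumed peak transfer rate (50 MiB/s)

def effective_bandwidth(chunk_bytes):
    # Each chunk costs one positioning plus its own transfer time.
    return chunk_bytes / (SEEK_S + chunk_bytes / XFER_BPS)

for kib in (4, 64, 1024, 4096):
    mib_s = effective_bandwidth(kib * 1024) / (1024 * 1024)
    print(f"{kib:>5} KiB chunks -> {mib_s:6.2f} MiB/s of the 50 MiB/s peak")
```

With 4 KiB chunks the disk delivers well under 1 MiB/s, while 4 MiB chunks recover roughly 90% of peak: the same seek cost is amortised over a thousand times more data.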

That's it for this chapter!

Happy Reading!
