Operating Systems 101: Persistence — I/O Devices (i)

E.Y.
5 min read · Jun 20, 2021
Photo by Joseph Gonzalez on Unsplash

These are my course notes from an online course on Operating Systems that I took on educative.io.

Chapter 1. I/O Devices

To connect I/O devices to the processor and memory, we use a bus. A bus is a shared communication link that uses one set of wires to connect multiple subsystems. Sometimes I/O shares a bus with memory; sometimes there is a separate I/O bus.

Figure: an example bus system

Canonical Device

Canonical, in computer science, refers to the standard state or behaviour of something. There are two building blocks of a canonical device: its interface and its internal structure.

The device interface comprises three registers:

  • A status register, which can be read to see the current status of the device.
  • A command register, to tell the device to perform a certain task.
  • A data register to pass data to the device or get data from the device.

By reading and writing these registers, the operating system can control device behaviour.
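
As a rough illustration, the interface can be modelled as three memory-mapped registers. This is a minimal sketch only; the register widths, names, and status bits below are assumptions, not a real device specification.

```c
#include <stdint.h>

/* A hypothetical canonical device: three registers exposed to the OS.
   Real devices define their own layouts; this one is purely illustrative. */
typedef struct {
    volatile uint32_t status;   /* read to check BUSY/READY/ERROR bits */
    volatile uint32_t command;  /* write to tell the device to start a task */
    volatile uint32_t data;     /* pass data to, or read data from, the device */
} canonical_device_t;

/* Example status bits (assumed for this sketch). */
enum { DEV_BUSY = 0x1, DEV_READY = 0x2, DEV_ERROR = 0x4 };
```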

The protocol has four steps (a code sketch follows the list):

  • Polling: the OS waits until the device is ready to receive a command by repeatedly reading the status register.
  • Writing to the data register: the OS sends some data down to the data register. When the main CPU is involved with the data movement, we refer to it as programmed I/O (PIO).
  • Writing to the command register: the OS writes a command to the command register; doing so implicitly lets the device know that both the data is present and that it should begin working on the command.
  • Polling again: the OS waits for the device to finish by again polling it in a loop, waiting to see if it is finished.
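
Here is a minimal sketch of those four steps, using the hypothetical canonical_device_t from the previous sketch; a real driver would also handle errors and timeouts.

```c
/* A sketch of the 4-step polled (programmed I/O) protocol. */
void canonical_write(canonical_device_t *dev, uint32_t payload, uint32_t cmd) {
    /* 1. Poll until the device is ready to accept a command. */
    while (dev->status & DEV_BUSY)
        ;   /* spinning wastes CPU cycles -- this motivates interrupts later */

    /* 2. Programmed I/O: the CPU itself moves the data to the device. */
    dev->data = payload;

    /* 3. Writing the command register implicitly tells the device that
          the data is present and that it should begin working. */
    dev->command = cmd;

    /* 4. Poll again until the device reports that it has finished. */
    while (dev->status & DEV_BUSY)
        ;
}
```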

The Device Driver

How can we keep most of the OS device-neutral, thus hiding the details of device interactions from major OS subsystems?

Device driver: the problem is solved through abstraction. A device driver encapsulates the specifics of interacting with a given device, so the rest of the OS does not need to know them.

A driver provides a software interface to hardware devices, enabling operating systems and other computer programs to access hardware functions without needing to know precise details about the hardware being used.

A driver communicates with the device through the computer bus or communications subsystem to which the hardware connects. When a calling program invokes a routine in the driver, the driver issues commands to the device. Once the device sends data back to the driver, the driver may invoke routines in the original calling program.

Drivers are hardware dependent and operating-system-specific. They usually provide the interrupt handling required for any necessary asynchronous time-dependent hardware interface.
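
A common way to achieve this device neutrality is a table of function pointers that every driver fills in, so upper layers such as the file system call the same operations regardless of the hardware underneath. The struct and function names below are invented for illustration; they are not a real kernel API.

```c
/* Hypothetical generic block-driver interface. */
typedef struct block_driver {
    const char *name;
    int  (*read_block)(unsigned long block_num, void *buf);
    int  (*write_block)(unsigned long block_num, const void *buf);
    void (*handle_interrupt)(void);   /* asynchronous completion path */
} block_driver_t;

/* A concrete driver (e.g. for an IDE disk) supplies its own implementations
   and registers this table with the OS, typically at boot time. */
```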

Device Protocol

  • Wait for the drive to be ready: read the status register until the drive is READY and not BUSY.
  • Write parameters to the command registers: write the sector count, the logical block address (LBA) of the sectors to be accessed, and the drive number to the command registers.
  • Start the I/O by issuing a read/write to the command register: write the READ/WRITE command to the command register.
  • Data transfer (for writes): wait until the drive status is READY and DRQ (drive request for data); write the data to the data port.
  • Handle interrupts: in the simplest case, handle an interrupt for each sector transferred; more complex approaches allow batching and thus one final interrupt when the entire transfer is complete.
  • Error handling: after each operation, read the status register. If the ERROR bit is on, read the error register for details. (A condensed driver sketch follows.)
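
Below is a condensed sketch of a read along the lines of this protocol for a legacy x86 IDE (ATA) disk, in the spirit of the classic xv6 ide.c driver. The inb/outb/insl port-I/O helpers are assumed to be provided elsewhere, and locking, interrupt handling, and most error handling are omitted.

```c
/* Assumed port-I/O helpers (as provided by e.g. an xv6-style kernel). */
unsigned char inb(unsigned short port);
void outb(unsigned short port, unsigned char data);
void insl(int port, void *addr, int cnt);

#define IDE_BSY  0x80   /* drive is busy */
#define IDE_DRDY 0x40   /* drive is ready */
#define IDE_DRQ  0x08   /* drive requests data transfer */
#define IDE_ERR  0x01   /* an error occurred */

static void ide_wait_ready(void) {
    /* Step 1: spin until the drive is READY and not BUSY. */
    while ((inb(0x1f7) & (IDE_BSY | IDE_DRDY)) != IDE_DRDY)
        ;
}

static void ide_read_sector(unsigned int lba, void *buf) {
    ide_wait_ready();
    outb(0x1f2, 1);                            /* step 2: sector count = 1   */
    outb(0x1f3, lba & 0xff);                   /*         LBA, low byte      */
    outb(0x1f4, (lba >> 8) & 0xff);            /*         LBA, middle byte   */
    outb(0x1f5, (lba >> 16) & 0xff);           /*         LBA, high byte     */
    outb(0x1f6, 0xe0 | ((lba >> 24) & 0x0f));  /*         drive 0, LBA mode  */
    outb(0x1f7, 0x20);                         /* step 3: issue READ SECTORS */

    while (!(inb(0x1f7) & IDE_DRQ))            /* wait for data to be ready  */
        ;
    insl(0x1f0, buf, 512 / 4);                 /* pull one sector over the data port */

    if (inb(0x1f7) & IDE_ERR)                  /* step 6: check the ERROR bit */
        (void)inb(0x1f1);                      /* the error register holds details */
}
```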

Lowering CPU Overhead with Interrupts

The invention that improves the interaction between the OS and a device is the interrupt. Instead of polling the device repeatedly, the OS can issue a request, put the calling process to sleep, and context switch to another task. When the device finally finishes the operation, it raises a hardware interrupt, causing the CPU to jump into the OS at a predetermined interrupt service routine (ISR).

Improved utilisation: interrupts thus allow for overlap of computation and I/O, which is key for improved utilisation. Without interrupts, the system simply spins, polling the status of the device repeatedly until the I/O is complete. With interrupts, both the CPU and the disk are properly utilised while the I/O is in flight.
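
A rough sketch of the interrupt-driven pattern, building on the canonical-device sketch above. The sleep_on()/wakeup() scheduler primitives are assumed, generic names, not a specific kernel's API.

```c
/* Assumed scheduler primitives: block on / wake up a wait channel. */
void sleep_on(void *chan);
void wakeup(void *chan);

/* Issuing side: start the I/O, then give up the CPU instead of spinning. */
void issue_request(canonical_device_t *dev, uint32_t payload, uint32_t cmd) {
    dev->data = payload;
    dev->command = cmd;     /* the device starts working on the request */
    sleep_on(dev);          /* block this process; the scheduler runs another */
}

/* Interrupt service routine: runs when the device raises its interrupt. */
void device_isr(canonical_device_t *dev) {
    /* acknowledge the device, check its status, copy results if needed ... */
    wakeup(dev);            /* unblock whoever was waiting on this device */
}
```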

Problems with interrupts:

  • For a device that performs its tasks very quickly, the first poll usually finds the device already done with the task, so switching to interrupts would only add context-switch overhead. If the speed of the device is not known, or is sometimes fast and sometimes slow, it may be best to use a hybrid that polls for a little while and then, if the device is not yet finished, uses interrupts (sketched below).
  • Another scenario arises in networks. When a huge stream of incoming packets each generates an interrupt, it is possible for the OS to livelock, that is, to find itself only processing interrupts and never allowing a user-level process to run.
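
A sketch of that hybrid, reusing the earlier canonical-device and sleep_on() sketches; the polling budget is an arbitrary assumption.

```c
#define POLL_BUDGET 1000     /* hypothetical number of status checks before giving up */

void wait_for_completion(canonical_device_t *dev) {
    for (int i = 0; i < POLL_BUDGET; i++) {
        if (!(dev->status & DEV_BUSY))
            return;          /* fast device: it finished while we polled */
    }
    sleep_on(dev);           /* slow device: block until the ISR calls wakeup() */
}
```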

Coalescing

Another interrupt-based optimisation is coalescing. In such a setup, a device that needs to raise an interrupt first waits for a bit before delivering it to the CPU. While it waits, other requests may complete, so multiple interrupts can be coalesced into a single interrupt delivery, lowering the overhead of interrupt processing.
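
One way to picture the policy from the device's point of view, as a sketch only; the thresholds and helper below are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

#define COALESCE_MAX_EVENTS   8    /* raise after this many completions ... */
#define COALESCE_TIMEOUT_US  50    /* ... or after waiting this long */

static unsigned pending = 0;       /* completions not yet signalled to the CPU */

/* Called on each request completion; returns true when the device should
   raise a single, batched interrupt covering everything pending. */
bool should_raise_interrupt(uint64_t us_since_first_pending) {
    pending++;
    if (pending >= COALESCE_MAX_EVENTS ||
        us_since_first_pending >= COALESCE_TIMEOUT_US) {
        pending = 0;
        return true;               /* one interrupt covers the whole batch */
    }
    return false;                  /* keep waiting; more may complete soon */
}
```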

More Efficient Process with DMA

When using programmed I/O (PIO) to transfer a large chunk of data to a device, the CPU is once again burdened with a rather trivial task (copying the data itself), wasting time that could be spent running other processes; hence DMA.

Direct memory access (DMA)

A DMA engine is a device that can orchestrate transfers between devices and main memory without much CPU intervention.

To transfer data to the device, the OS would program the DMA engine by telling it where the data lives in memory, how much data to copy, and which device to send it to. At that point, the OS is done with the transfer and can proceed with other work. When the DMA is complete, the DMA controller raises an interrupt, and the OS thus knows the transfer is complete.

Since the copying of data is now handled by the DMA controller, the CPU is free during that time, and the OS can do something else, e.g. run another process.
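
A sketch of what "programming the DMA engine" might look like. The register layout and field names below are invented for illustration; real DMA controllers each define their own interface.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical memory-mapped DMA engine. */
typedef struct {
    volatile uint64_t src_addr;     /* where the data lives in memory */
    volatile uint64_t length;       /* how much data to copy */
    volatile uint32_t dest_device;  /* which device to send it to */
    volatile uint32_t control;      /* write the GO bit to start the transfer */
} dma_engine_t;

#define DMA_GO 0x1

void dma_start_transfer(dma_engine_t *dma, const void *buf,
                        size_t len, uint32_t device_id) {
    dma->src_addr    = (uint64_t)(uintptr_t)buf;
    dma->length      = len;
    dma->dest_device = device_id;
    dma->control     = DMA_GO;      /* the OS can now go run other work; the
                                       controller raises an interrupt when
                                       the copy is complete */
}
```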

Methods of Device Interaction

  • Explicit I/O instructions: these instructions specify a way for the OS to send data to specific device registers and thus allow the construction of the protocols described above.
  • Memory-mapped I/O: with this approach, the hardware makes device registers available as if they were memory locations. To access a particular register, the OS issues a load (to read) or a store (to write) to the address; the hardware then routes the load/store to the device instead of main memory. (A sketch of both follows.)
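
A brief sketch of both methods on x86. The out instruction is real and privileged (normally only the kernel issues it); the memory-mapped register address below is made up for illustration.

```c
#include <stdint.h>

/* Explicit I/O instructions: the port number names a device register. */
static inline void outb(uint16_t port, uint8_t val) {
    __asm__ volatile("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* Memory-mapped I/O: the device register appears at a physical address,
   so an ordinary store reaches the device instead of RAM.
   (0xFEC0A000 is an arbitrary example address, not a real device.) */
#define EXAMPLE_DEV_COMMAND_REG ((volatile uint32_t *)0xFEC0A000)

static inline void mmio_write_command(uint32_t cmd) {
    *EXAMPLE_DEV_COMMAND_REG = cmd;   /* the hardware routes this store to the device */
}
```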

That’s about it for this part!

Happy Reading!
