Deep Dive into Operating Systems

Exploring the Core Components and Advanced Features of Modern Operating Systems
13 August 2024 by Spark

Introduction to Operating Systems

1. Definition and Purpose

An operating system (OS) is a collection of software built on abstractions above the computer hardware, creating an environment in which applications can run. The main goals of an OS are as follows:

  1. Resource Management: Divides CPU time, memory, storage, and I/O devices among competing applications.
  2. User Interface: Provides an interface (graphical or command-line) through which users interact with the system.
  3. File Management: Controls the storage and retrieval of data on disks.
  4. Security and Protection: Protects data privacy and maintains system integrity.

2. History and Evolution

The evolution of operating systems has been steady, keeping pace with increasingly sophisticated hardware and growing user requirements. The major eras are sketched in broad strokes below.

1940s-1950s: Early Computers

Computers of this era had no operating system to control their operation. Each program had to be loaded manually, together with the machine-control routines it needed.

1960s: Batch Processing Systems

The earliest systems were batch-processing oriented: they ran one job after another, with little or no interaction from the user. Example: IBM's OS/360.

1970s: Time-Sharing and Multitasking

UNIX, for example, was designed to let several users execute jobs concurrently (to time-share the machine) and to handle multitasking.

1980s: The Personal Computer

Personal computers shipped with operating systems like MS-DOS, early versions of Windows, and the classic Mac OS, the latter two introducing graphical user interfaces.

1990s-Present: Modern Operating Systems

Modern OSs (such as Windows, Linux, and macOS) offer far stronger support for networking, security features such as encryption, and virtualization, while managing scarce resources like RAM much more effectively than their predecessors. The result is an experience that is both versatile and robust.

Future Trends

The future of operating systems is poised for deeper integration with cloud computing, enabling more robust back-end solutions. These systems will also feature advanced security capabilities and tools designed to support emerging technologies like Artificial Intelligence (AI) and the Internet of Things (IoT). This evolution will lead to operating systems that are more versatile, adaptive, and capable of handling the increasing demands of connected devices and intelligent applications, ensuring seamless and secure operation across diverse environments.

3. Key Components of an OS

The design of an operating system comprises several essential elements, each with its unique purpose:

Kernel

The part of the OS that manages system resources: CPU scheduling, memory management, I/O operations, and so on. It serves as an intermediary between applications and hardware.

Process Management

Manages the creation, scheduling, and termination of processes. This lets multiple processes run at the same time, doing different things without getting in each other's way.

Memory Management

Allocates memory to processes (by managing pages and segments) and reclaims it, making the best use of available RAM.

File System

It is used for saving, retrieving, and organizing data on storage devices. It handles files, directory permissions, and data integrity.

Device Drivers

They are special programs that enable the OS to control devices such as printers, disk drives, and network cards, so that applications do not need to know the hardware details.

User Interface

A component through which users interact with the system, usually a command-line interface (CLI) or graphical user interface (GUI).

Together, these components provide an environment in which applications run well and resources are used effectively.

Types of Operating Systems

1. Single-User vs. Multi-User

Single-User Operating Systems

These operating systems are built for one user at a time and are typical of PCs and personal devices. They support that user's processes and tasks, possibly with concurrency among them (several instances may run at once). Many early operating systems were single-user and assumed that the user was also the machine's owner.

Multi-User Operating Systems

Such systems allow multiple users to share the computer simultaneously, through direct connection or remote access. A general-purpose multi-user operating system faces the challenge of managing data, processes, and permissions for different users efficiently. Examples include UNIX, Linux, and Windows Server.

2. Single-Tasking vs. Multi-Tasking

Single-Tasking Operating Systems

In single-tasking systems, only one process executes at a time: the system waits for the current task to finish before starting the next. Early OSs like MS-DOS followed this model.

Multi-Tasking Operating Systems

A multi-tasking system allows (or appears to allow) more than one task or process to execute at the same time, giving each a share of the CPU. This can be done via cooperative multitasking, in which processes voluntarily yield control, or via preemptive scheduling, where the operating system decides when to hand a time slice to each task.

3. Distributed Operating Systems

A distributed operating system runs on a set of independent, loosely coupled computers that appear to users as a single coherent system. These systems are network-based: resources such as processors, memory, and data are shared among multiple machines. They require sophisticated synchronization and communication protocols to perform efficiently. Examples in this spirit include Google's Kubernetes and Apache Hadoop.

Characteristics:
  • Transparency: Users are shielded from the complexity of the distributed resources.
  • Reliability: Distributed systems are typically more fault-tolerant.
  • Scalability: Resources can be added or removed without disrupting the system.

4. Embedded Operating Systems

An embedded operating system is tailored to resource-constrained systems that carry out a few specific functions. Such systems live in devices like smartphones, factory machines, home appliances, and vehicles. Examples include real-time operating systems (RTOS) such as VxWorks and embedded versions of Linux.

Characteristics:
  • Resource Constraints: Must make efficient use of limited memory, CPU, and storage.
  • Real-Time Requirements: Many embedded systems must meet strict timing deadlines.
  • Customization: Typically customized to the target hardware and application.

5. Real-Time Operating Systems (RTOS)

An RTOS accepts data from the outside world and processes it, usually under hard real-time constraints (i.e., a result that arrives too late is as bad as an incorrect one), which makes it well-suited to time-critical applications. Such systems are found in environments like automotive control, industrial automation, medical devices, and avionics.

Types of RTOS:
  • Hard Real-Time Systems: Systems where missing a deadline can lead to catastrophic consequences, such as in aerospace or medical applications.
  • Soft Real-Time Systems: Where response deadlines should be met, but failure to meet a given deadline does not lead to catastrophic consequences. 
Characteristics:
  • Deterministic Scheduling: Tasks finish within predictable time intervals.
  • Low Latency: Helps tasks start and run quickly.
  • High Reliability: Such devices often must operate around the clock; downtime is unacceptable.

Types of Operating System Architectures

An operating system's architecture describes how the system software is organized and how it interacts with the hardware. The kernel of the OS manages hardware resources and provides system services such as file access and process management. Some architectures are more complex, some faster, some more modular. The basic types of OS architectures are described below.

1. Monolithic Kernel

Overview:

The monolithic kernel is the simplest and oldest type of operating system architecture. The whole operating system runs in a single address space in kernel mode and includes all of the standard services, such as process management, memory management, file systems, and device drivers.

Characteristics:
  • Single Address Space: All parts of the OS run in the same address space, so communication is efficient, but if one part fails it can bring down the whole system.
  • Performance: Usually fast, thanks to direct communication between components.
  • Complexity: Difficult to maintain and extend; a change in one part of the kernel can affect the entire system.
  • Examples: Linux and the original UNIX systems.
Advantages:
  • Fewer context switches, so system calls execute faster.
  • Better performance from a simpler design.
Disadvantages:
  • Less secure and robust in that a bug in one part of the kernel can take down the whole system.
  • Modifying or extending the system is risky; a change in one place can break others.

2. Microkernel

Overview:

The microkernel architecture keeps only the most essential functions in the kernel. Other services, such as device drivers, file systems, and networking, run in user space as separate processes.

Characteristics:

  • Minimal Kernel: The kernel contains only the features that are strictly needed.

  • Modularity: Additional services run in user space, which makes the system more modular and easier to manage.
  • Interprocess Communication (IPC): The microkernel relies heavily on IPC mechanisms to communicate between user space services and the kernel.
  • Examples: Minix, QNX, Mach.
Advantages:
  • More secure and stable, as failures in user space services do not crash the entire system.
  • Easier to extend and modify, as new services can be added without altering the kernel.
Disadvantages:
  • Performance overhead due to frequent context switches and IPC can slow down system performance.
  • More complex design compared to monolithic kernels.

3. Hybrid Kernel

Overview:

A hybrid kernel is a variation of the monolithic design in which some modules run in user mode. Core services such as process and memory management remain in the kernel (as in a monolithic architecture), while others, such as networking or certain device drivers, run in user space.

Characteristics:

Hybrid Design: Combines the performance-oriented features of a monolithic kernel with microkernel-like protection mechanisms.

Allows for better flexibility in system design: some services can be implemented to run either in user or kernel space.

Some examples are Windows NT, macOS (XNU), and BeOS.

Advantages:
  • Balances performance and security by keeping critical functions in the kernel (ring 0) while less-critical services run in user space (ring 3).
  • Suits a wider variety of use cases and is easier to update than one large monolithic kernel.
Disadvantages:
  • More complex than both pure monolithic and microkernel designs, and may have higher system overhead.
  • May still suffer performance penalties if not carefully optimized.

4. Modular Kernel

Overview:

A modular kernel is usually a monolithic design that permits components, such as device drivers, to be loaded and unloaded dynamically without restarting the machine. The base kernel offers core functionality and enables additional services through modules.

Characteristics:
  • Dynamic Modules: This feature allows modules to be loaded or unloaded at runtime, making for a much more versatile and maintainable system.
  • Lean Base Kernel: The basic kernel stays small, and additional features are introduced through modules.
  • Examples: Linux (newer versions), Solaris.
Advantages:
  • Functionality can be updated or extended without rebooting the system.
  • A smaller base kernel is easier to maintain and can improve system stability.
Disadvantages:
  • Adds some housekeeping and module-linking overhead.
  • Faulty modules can destabilize the system if not managed properly.

All of these architectures offer different trade-offs regarding performance, security, flexibility, and complexity depending on the specific requirements that are set out by the operating system.

Process Management

Process management is critical to OS functioning. It includes process creation, scheduling, and termination, as well as inter-process communication and synchronization.

1. Process Concept and Lifecycle

Process Concept

A process is a program in active execution. It includes the program code, its data, and the CPU register state. The operating system manages many processes at once; at the user level these largely correspond to the applications we run.

Process Components
  • Text Segment: Contains the program's executable code.
  • Data Segment: Holds global variables and static data.
  • Heap: Memory allocated dynamically at run time.
  • Stack: Stores function-call information, local variables, and return addresses.
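
These segments can be observed from a running program. Below is a minimal C sketch, with variable names of our own choosing, that prints one address from each region:

```c
// Print one address from each region of the process's memory layout.
#include <stdio.h>
#include <stdlib.h>

int global_var = 42;                           // data segment

int main(void) {
    int local_var = 7;                         // stack
    int *heap_var = malloc(sizeof *heap_var);  // heap

    printf("text  (code): %p\n", (void *)main);
    printf("data        : %p\n", (void *)&global_var);
    printf("heap        : %p\n", (void *)heap_var);
    printf("stack       : %p\n", (void *)&local_var);

    free(heap_var);
    return 0;
}
```
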
Process Lifecycle

A process moves through several states that reflect what it is doing at any particular point in its execution:

  • New (Created): The process is being created.
  • Ready: The process is waiting to be assigned to the CPU.
  • Running: The processor is actively executing the process's instructions.
  • Waiting (Blocked): The process is waiting for an event to occur, e.g., completion of I/O.
  • Terminated: Process execution has ended and the process no longer exists on the system.
State Transitions

  • New to Ready: The process has been created and admitted to the ready queue.
  • Ready to Running: The process is chosen by the scheduling algorithm for execution.
  • Running to Waiting: The process needs a resource or is waiting for an event.
  • Running to Ready: The scheduler preempts the process, e.g., its time slice expired or a higher-priority job arrived.
  • Waiting to Ready: The event the process was waiting for has occurred.
  • Running to Terminated: The process completes execution or is terminated by the OS.
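
On UNIX-like systems this lifecycle can be exercised directly with the fork() and waitpid() system calls; the sketch below (our own illustrative example) creates a child, lets it terminate, and reaps it from the parent:

```c
// Create a child process, let it terminate, and reap it.
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                 // New -> Ready: child is created
    if (pid == 0) {
        printf("child  pid=%d\n", getpid());  // child Running
        _exit(0);                             // Running -> Terminated
    }
    waitpid(pid, NULL, 0);              // parent blocks until child exits
    printf("parent pid=%d reaped child %d\n", getpid(), (int)pid);
    return 0;
}
```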

2. Process Scheduling

Overview

Process scheduling is how the OS chooses which process in the ready state should run next. The goal is to maximize CPU utilization, throughput, and fairness while minimizing response time.

Types of Schedulers
  • Long-Term Scheduler (Job Scheduler): Determines which processes are admitted to the ready state for execution. It controls the degree of multiprogramming (the number of processes in memory).
  • Short-Term Scheduler (CPU Scheduler): Selects which ready process the CPU executes next.
  • Medium-Term Scheduler: Swaps processes in and out of memory to balance the load and use resources effectively.
Scheduling Algorithms
  • First-Come, First-Served (FCFS): Processes are executed in the order they arrive in the ready queue.
  • Shortest Job Next (SJN): The process with the shortest estimated run time is selected for execution next.
  • Round Robin (RR): Each process is given a fixed time slice (quantum), and the scheduler cycles through the ready queue in FCFS order.
  • Priority Scheduling: Processes are assigned priorities, and the CPU is allocated to the process with the highest priority.
  • Multilevel Queue Scheduling: Several queues are maintained, each with its own scheduling algorithm.
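
To make round robin concrete, here is a toy simulation in C; the burst times and the 3-unit quantum are invented for the example:

```c
// Toy round-robin scheduler: rotate a fixed quantum among processes.
#include <stdio.h>

#define N       3
#define QUANTUM 3

int main(void) {
    int burst[N] = {5, 8, 3};    // remaining CPU time per process (assumed)
    int remaining = N, clock = 0;

    while (remaining > 0) {
        for (int i = 0; i < N; i++) {
            if (burst[i] == 0) continue;          // already finished
            int slice = burst[i] < QUANTUM ? burst[i] : QUANTUM;
            clock += slice;                       // process runs for its slice
            burst[i] -= slice;
            if (burst[i] == 0) {
                remaining--;
                printf("P%d finishes at t=%d\n", i, clock);
            }
        }
    }
    return 0;
}
```
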
Context Switching

A context switch occurs when the CPU switches from one process to another. It involves saving the state of the running process and loading the saved state of the next one. While some context switching is necessary for multitasking, it also incurs overhead.

3. Inter-Process Communication (IPC)

Overview

Processes use IPC mechanisms to communicate with each other and synchronize their actions. IPC matters whenever multiple processes run on the same system and need to exchange data or coordinate their activities.

IPC Methods
  • Shared Memory: Multiple processes share the same memory region to communicate. Requires a synchronization mechanism such as semaphores or mutexes to avoid race conditions.
  • Message Passing: Processes communicate by sending messages to one another, either directly or indirectly via mailboxes (message queues).
  • Sockets: Provide network-based IPC so that processes can communicate across machines.
  • Signals: Notify a process asynchronously of events such as interrupts or exceptions.
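
One familiar concrete form of message passing on UNIX-like systems is the pipe. A minimal parent-to-child sketch (our own example):

```c
// Message passing through a POSIX pipe between parent and child.
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                        // child: read end
        close(fds[1]);
        char buf[64] = {0};
        read(fds[0], buf, sizeof buf - 1);
        printf("child received: %s\n", buf);
        close(fds[0]);
    } else {                               // parent: write end
        close(fds[0]);
        const char *msg = "hello from parent";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        wait(NULL);
    }
    return 0;
}
```
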
Synchronization Mechanisms
  • Semaphores: A semaphore is a counter used to control access to shared resources.
  • Mutex (Mutual Exclusion): A mechanism to lock data so that only one process can access a resource at any given time.
  • Condition Variables: Condition variables work with mutexes to help the process wait until a particular condition is satisfied.
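
As a sketch of mutual exclusion in practice, the following program uses a POSIX mutex to protect a shared counter between two threads (compile with -pthread):

```c
// Two threads increment a shared counter under a mutex.
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    // only one thread in here at a time
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   // always 200000 with the mutex
    return 0;
}
```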

4. Concurrency and multi-thread management

Multithreading

Multithreading divides a process into multiple threads that run concurrently. Each thread has its own program counter (PC), stack, and registers, but all threads share the memory address space of their parent process.

Benefits of Multithreading
  • Responsiveness: Multi-threading can be used to keep an application responsive while background tasks run simultaneously.
  • Resource Sharing: Threads share resources more efficiently than separate processes.
  • Better performance: Threads can run concurrently on multicore processors to increase the throughput.
Thread Models
  • User-Level Threads: Managed by a user-level library and invisible to the OS. They are much cheaper to create and manage than kernel-level threads, but they cannot by themselves be scheduled across multiple CPUs, since the kernel sees only one schedulable entity.
  • Kernel-Level Threads: Managed by the operating system kernel. These threads are scheduled by the kernel and, as such, can run on different CPUs.
  • Hybrid Threads: Combine the user-level and kernel-level models to get the benefits of both.
Concurrency

Concurrency is an operating system's capability to make progress on multiple tasks at once. It is achieved through multithreading or multiprocessing.

Concurrency Challenges
  • Race Conditions: Situations where multiple threads or processes access shared resources simultaneously and the outcome depends unpredictably on timing.
  • Deadlocks: Occur when two or more processes each wait for the other to release resources, bringing everything to a halt.
  • Starvation: Happens when a process is perpetually denied the resources it needs because of scheduling decisions.
Concurrency Control
  • Locks: Allow only one thread or process to acquire a resource at a time, guaranteeing serialized access.
  • Monitors: Higher-level synchronization constructs built on mutexes and condition variables.
  • Barriers: Synchronization points where threads stop and wait until all other threads reach the same barrier.

Process management is an essential part of an operating system; it lets multiple processes and threads execute efficiently, safely, and fairly on a computing system.

Memory Management

Memory management is the operating-system function that administers the computer's primary memory (RAM) and protects processes from erroneously reading or writing memory that is not theirs. It ensures that applications receive memory as they need it while also optimizing system performance.

1. Memory Hierarchy

Overview

A memory hierarchy arranges the different types of memory in a system by speed, size, and cost. The hierarchy balances very fast access times in the small, expensive levels against large capacity in the slower, cheaper ones.

Levels of Memory Hierarchy
Registers:
  • The smallest and fastest memory in a computer, located inside the CPU.
  • Hold operands and interim results during instruction execution.
  • Access time: a fraction of a nanosecond.
Cache Memory:
  • Cache memory stores frequently used data close to the CPU to reduce access time.
  • Usually implemented as L1, L2, and L3 layers, each larger and slower the further it sits from the CPU.
  • Access time: a few to tens of nanoseconds.
Main Memory (RAM):
  • The system's primary memory for active processes and data.
  • Bigger than the cache but slower to access.
  • Access time: tens of nanoseconds.
Secondary Storage:
  • Hard disk drives (HDD), SSDs, and optical storage.
  • Used for long-term data storage.
  • Slower than RAM; access times range from microseconds (SSD) to milliseconds (HDD).
Tertiary and Offline Storage:
  • Includes backup media such as magnetic tapes and external drives.
  • Used for archiving and backups; access times are far slower than secondary storage.
Trade-Offs:
  • Speed vs. capacity: faster memory comes in smaller capacities and is more expensive.
  • Price: the faster, smaller levels (registers, caches) cost more per byte than larger levels such as RAM and secondary storage.

2. Virtual Memory and Paging

Virtual Memory

Virtual memory is an abstraction that lets every process behave as if it had more RAM than physically exists. It enables processes whose memory demands exceed physical RAM to run, by extending memory onto disk space.

How Virtual Memory Works
  • Address Translation: The operating system utilizes the Memory Management Unit (MMU) to convert virtual addresses generated by programs into physical addresses in RAM.
  • Page Tables: The OS maintains page tables that map virtual addresses to physical memory locations. The MMU consults these tables during address translation.
Paging
  • Paging: A memory-management scheme that eliminates the need for contiguous physical allocation. Virtual memory is divided into fixed-size blocks called pages, and physical memory into blocks of the same size called frames.
  • Page Size: The size of each page/frame is usually a power of two, e.g., 4 KB.
  • Page Table: Records which physical frame holds each virtual page.
  • Page Faults: Occur when a program accesses a page that is not loaded in physical memory, causing the OS to read the required page from secondary storage (disk) into RAM.
  • Swapping: The OS can swap inactive pages from RAM to disk to make room for active ones, letting the system handle larger workloads.
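
The arithmetic behind translation is simple. The toy sketch below uses an invented eight-entry page table and 4 KB pages:

```c
// Toy virtual-to-physical translation with 4 KB pages.
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u            // 4 KB pages: 12-bit offset
#define NUM_PAGES 8

int main(void) {
    // Hypothetical page table: page_table[virtual page] = physical frame.
    uint32_t page_table[NUM_PAGES] = {3, 7, 0, 2, 5, 1, 6, 4};

    uint32_t vaddr = 0x2ABC;                   // example virtual address
    uint32_t page  = vaddr / PAGE_SIZE;        // virtual page number
    uint32_t off   = vaddr % PAGE_SIZE;        // offset within the page
    uint32_t paddr = page_table[page] * PAGE_SIZE + off;

    printf("virtual 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
           vaddr, page, off, paddr);
    return 0;
}
```
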
Benefits of Paging
  • Efficiency: Fixed-size pages reduce fragmentation and make memory management simpler.
  • Isolation and Security: Each process runs in its own address space, unable to access other processes' memory.
Drawbacks of Paging
  • Overhead: Managing page tables and handling page faults adds computational cost.
  • Performance: Workloads that cause many page faults can thrash, with the machine spending more time swapping pages in and out than running processes.

3. Segmentation

Segmentation Overview

Segmentation divides a process's memory into variable-sized segments that match logical divisions such as functions, objects, or data structures. Each segment holds a particular kind of code or data, which is a natural way to organize a program.

How Segmentation Works
  • Segment Table: The operating system keeps a per-process segment table that records the base address and limit of every segment. This table translates logical addresses (segment number and offset) to physical addresses.
  • Logical Addressing: The logical address in a segmented system is composed of the segment number and offset within that segment.
  • Segment Protection: Each segment can define its own protection level (e.g., read-only or read-write), giving finer-grained security.
Benefits of Segmentation
  • Logical Organization: Segmentation lines up with the logical structure of programs, simplifying the separate management of code and data.
  • Sharing: Segments can be shared between two or more processes, using common memory areas efficiently.
  • Isolation and Security: Segments can be individually isolated for increased security.
Drawbacks of Segmentation
  • External Fragmentation: Variable-sized segments can leave holes in memory, making it harder for the OS to find contiguous space.
  • Complexity: Managing segments and their protection levels adds complexity to the operating system.
Segmentation vs. Paging
  • Combined Approach: Many contemporary systems combine the two techniques, dividing segments further into pages to gain the advantages of each and reduce fragmentation.
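
As a sketch of how a segment table works, the toy program below translates a (segment, offset) pair and enforces the limit check; the table contents are invented:

```c
// Toy logical-to-physical translation via a segment table.
#include <stdio.h>

typedef struct { int base; int limit; } segment_t;

int main(void) {
    // Hypothetical per-process segment table: base and limit per segment.
    segment_t table[3] = { {1000, 400}, {6300, 120}, {4300, 1100} };

    int seg = 2, offset = 53;                 // logical address (seg, offset)
    if (offset < 0 || offset >= table[seg].limit) {
        printf("segmentation fault: offset %d out of bounds\n", offset);
        return 1;
    }
    printf("(seg %d, off %d) -> physical %d\n",
           seg, offset, table[seg].base + offset);
    return 0;
}
```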

4. Memory Allocation Techniques

Overview

Memory allocation assigns memory to processes as they execute. Good allocation makes efficient use of system resources, giving each process what it needs without wasting or fragmenting memory.

Types of Memory Allocation

Contiguous Memory Allocation
  • Allocation is made in a single contiguous block; this is easier to manage than fragmented approaches but is prone to fragmentation.
  • Fixed Partitioning: Memory is divided into fixed-size partitions, and each process occupies a whole partition.
  • Dynamic Partitioning: Partitions are created to match the size of each process, so their sizes vary.
Non-Contiguous Memory Allocation
  • Allocation may use non-contiguous blocks, reducing fragmentation and making memory use more flexible.
  • Paging: Divides memory into fixed-size pages that are allocated non-contiguously.
  • Segmentation: Allocates variable-size segments.

Memory Allocation Strategies

  • First Fit: Allocates the first block large enough; simple and fast, but fragmentation can build up over time.
  • Best Fit: Chooses the smallest block that is large enough, reducing waste but potentially creating many small unusable holes.
  • Worst Fit: Chooses the largest available block, leaving big holes for future allocations.
  • Next Fit: Like first fit, but the search resumes from where the last allocation ended rather than from the beginning.
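
As an illustration, here is a toy first-fit scan over a list of free holes; the hole sizes and the request are invented:

```c
// Toy first-fit allocation: take the first hole big enough.
#include <stdio.h>

#define NBLOCKS 5

int main(void) {
    int free_size[NBLOCKS] = {100, 500, 200, 300, 600};  // hypothetical holes
    int request = 212;

    for (int i = 0; i < NBLOCKS; i++) {
        if (free_size[i] >= request) {       // first hole that fits wins
            printf("request %d -> block %d (size %d, leftover %d)\n",
                   request, i, free_size[i], free_size[i] - request);
            free_size[i] -= request;
            return 0;
        }
    }
    printf("no block can satisfy %d\n", request);
    return 0;
}
```
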
Fragmentation:
  • Internal Fragmentation: Occurs when a process is allocated a block slightly larger than it requested; the leftover space inside the block is wasted.
  • External Fragmentation: Occurs when free memory is broken into small, non-contiguous pieces, making it difficult to satisfy requests for large contiguous blocks.
Compaction:
  • A method of removing external fragmentation by moving processes within main memory to create large contiguous free blocks. It is mostly used in dynamically partitioned systems.

Memory management is a core function of operating systems; it ensures memory resources are used properly, letting processes execute at full speed while preserving system performance and stability.

Input/Output (I/O) Management

Input/output (I/O) management is one of the many tasks an operating system performs. It ensures that data moves between the CPU and peripheral devices, such as disks, printers, and network interfaces, quickly and safely.

1. I/O Hardware

Overview

I/O hardware comprises the peripheral devices through which data enters and leaves the computer. Devices vary widely in transfer rate, speed, and the processing effort they require.

Types of I/O Devices
Input Devices

These include keyboards, mice, scanners, and other components that let users enter data into the system.

Output Devices

Monitors and printers: devices that present processed data back to the user, on screen or as hard copies.

Storage Devices

Devices such as hard drives, SSDs, and optical drives offer persistent data storage.

Communication Devices

Network interface cards and modems (for data transmission between computers).

I/O Ports and Buses
  • Ports: Physical links for device I/O (e.g., USB, HDMI, Ethernet).
  • Buses: Pathways over which data moves among the CPU, memory, and I/O devices, such as PCI, SATA, and USB.
Device Controllers
Function:

Device controllers are hardware components that operate an I/O device. They act as intermediaries between the CPU and the device, handling the low-level details of data transfer.

Types:
  • DMA (Direct Memory Access) Controllers: Let devices move data directly between themselves and memory without CPU intervention, freeing CPU cycles for other work.
  • Interrupt Controllers: Alert the CPU when an I/O operation completes or an error occurs, allowing the CPU to respond quickly.

2. I/O Software and Drivers

Overview

I/O software and drivers let the operating system communicate with I/O hardware. They provide an abstraction that decouples users and applications from low-level hardware interaction.

I/O Software Layers
User-Level I/O Software

These are the system calls and library functions that applications use to request I/O operations, such as reading and writing files or sending and receiving network packets.

Device Independent I/O Software

Provides a uniform I/O interface across hardware configurations. It implements buffering, caching, error control, and spooling.

Device Drivers:
  • Software that interfaces directly with the hardware, mapping generic I/O requests onto device-specific operations.
  • Kernel Mode vs. User Mode: Most drivers run in kernel mode for direct hardware access, but some can run in user space to improve system stability and security.
Types of I/O Operations:
  • Non-blocking I/O: The call returns immediately with whatever work can be done now; the process carries on and checks back later.
  • Asynchronous I/O: The call returns immediately while the I/O operation proceeds in the background; the OS notifies the process (e.g., via an interrupt-driven callback) when it finishes.
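
As a small concrete example, the sketch below puts stdin into non-blocking mode with fcntl() and attempts a read; with no input available, the call returns immediately instead of blocking:

```c
// Non-blocking read from stdin using fcntl() and O_NONBLOCK.
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>

int main(void) {
    int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
    fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);

    char buf[64];
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        printf("no input yet; the process could do other work\n");
    else if (n > 0)
        printf("read %zd bytes\n", n);
    return 0;
}
```
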
Error Handling:

I/O software must handle errors such as device failures, data corruption, and timeouts, and report them to the user or application so that corrective action can be taken.

3. Disk Scheduling Algorithms

Overview:

Disk scheduling algorithms decide the order in which pending requests to the disk drive are serviced. The goal is to reduce time spent waiting on disk access and improve the overall performance of the storage subsystem.

Disk Scheduling algorithms commonly used
First-Come, First-Served (FCFS)

Requests are serviced in the order received. Simple and fair, but long seeks between distant requests can lead to high wait times.

Shortest Seek Time First (SSTF)

The next request serviced is the one closest to the current disk-head position, which reduces seek time; however, requests far from the head can be starved.

SCAN (Elevator Algorithm)

The head services all requests in one direction of travel, then reverses. This reduces seek time and largely avoids starvation.

C-SCAN (Circular SCAN)

Similar to SCAN, but when the head reaches one end it returns directly to the other end without servicing requests on the way back, spreading wait times more evenly.

LOOK and C-LOOK

Variants of SCAN and C-SCAN that travel only as far as the last request in each direction, avoiding unnecessary head movement.

Factors Affecting Disk Scheduling Performance

  • Seek Time: The time required for the disk head to move from one track to another.
  • Rotational Latency: The time spent waiting for the desired disk sector to move under the disk head.
  • Data Transfer Rate: The rate at which data is read from or written to the disk.
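
To see how a policy such as SSTF plays out, here is a toy simulation; the track numbers and starting head position are invented, textbook-style values:

```c
// Toy SSTF disk scheduling: always service the nearest pending track.
#include <stdio.h>
#include <stdlib.h>

#define N 5

int main(void) {
    int req[N] = {98, 183, 37, 122, 14};   // hypothetical pending tracks
    int served[N] = {0};
    int head = 53, total = 0;

    for (int k = 0; k < N; k++) {
        int best = -1, best_dist = 1 << 30;
        for (int i = 0; i < N; i++) {
            if (!served[i] && abs(req[i] - head) < best_dist) {
                best = i;
                best_dist = abs(req[i] - head);
            }
        }
        served[best] = 1;
        total += best_dist;
        head = req[best];
        printf("service track %d (moved %d)\n", head, best_dist);
    }
    printf("total head movement: %d tracks\n", total);
    return 0;
}
```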

4. Buffering, Caching and Spooling

Buffering

Overview

Buffering is the temporary storage of data in memory while it is transferred between devices or processes. Buffers smooth out differences in speed and timing between the two sides of a transfer.

Types
  • Single Buffering: One buffer is used; data is transferred in blocks, and each block must be processed before the next can be filled.
  • Double Buffering: Two buffers are used; one is filled while the other is processed, reducing wait-time inefficiency.
  • Circular Buffering: Multiple buffers are used in rotation, maintaining a continuous flow of data with lower latency.
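
A circular buffer is easy to sketch in C. The following minimal ring buffer (the type and function names are our own) rejects writes when full and reads when empty:

```c
// Minimal fixed-capacity circular (ring) buffer.
#include <stdio.h>

#define CAP 4

typedef struct {
    int data[CAP];
    int head, tail, count;    // head = next read, tail = next write
} ring_t;

int ring_put(ring_t *r, int v) {
    if (r->count == CAP) return -1;       // buffer full
    r->data[r->tail] = v;
    r->tail = (r->tail + 1) % CAP;
    r->count++;
    return 0;
}

int ring_get(ring_t *r, int *v) {
    if (r->count == 0) return -1;         // buffer empty
    *v = r->data[r->head];
    r->head = (r->head + 1) % CAP;
    r->count--;
    return 0;
}

int main(void) {
    ring_t r = {0};
    for (int i = 1; i <= 5; i++)
        if (ring_put(&r, i) < 0) printf("full, dropped %d\n", i);
    int v;
    while (ring_get(&r, &v) == 0) printf("got %d\n", v);
    return 0;
}
```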

Caching

Overview

Caching stores the most frequently used data in a faster, smaller memory (the cache) to accelerate access times. It reduces the need to reach slow storage media such as disks or network resources.

Cache Types
  • Disk Cache: Keeps frequently used disk data in RAM to speed up read and write operations.
  • Memory Cache: Stores frequently used data and instructions in fast memory (L1 or L2 cache) for quick lookup by the CPU.
Cache Policies
  • Write-Through: Data is written to the cache and the backing store simultaneously, ensuring consistency across all levels of storage.
  • Write-Back: Data is written to the cache first and flushed to the backing store later, improving performance at the risk of loss during a crash.

Spooling

Overview

Spooling places data in a queue (a spool) so that it can be processed later. It lets the CPU perform other work while the I/O operation takes place.

Use Cases

Print Spooling: Queues print jobs on disk before sending them to the printer, so multiple jobs can be accepted and processed one by one.

Batch Processing: Jobs get queued to a batch and are taken sequentially, making the best use of resources.

Advantages of Buffering, Spooling and Caching
  • Better Utilization: Decoupling data transfer from processing keeps devices and the CPU busy instead of idle, smoothing bursts of data into steady flows.
  • Increased Throughput: The system can keep multiple I/O operations in flight at once, improving overall hardware utilization.
  • Fault Tolerance: Temporarily storing data during transfer helps prevent it from being lost mid-transfer.

I/O management is crucial to making data transfer between the system and its peripheral devices efficient, balancing speed, reliability, and resource use.

Conclusion

If modern computing devices are a body, the OS is its spine. It orchestrates resources, facilitates communication among components, and supervises their healthy operation. The ingredients of an OS, such as process management, memory management, and I/O management, command real respect for the effort and sophistication involved in building something so complex.

From the history of operating systems to kernel architecture, memory allocation, and disk scheduling algorithms, every aspect matters if an OS is to perform well and reliably. Software runs on hardware under the operating system's control, and that interplay lets us accomplish tasks we rarely dwell on in everyday use.

Technology keeps innovating, and operating systems will adapt to new needs, whether classic distributed-system requirements, real-time applications, or support for ever more complex devices. Understanding how an operating system fundamentally works helps us navigate these changes and make the most of the technology in our hands.

Whether you are a developer, a sysadmin, or simply a tech enthusiast, knowing how the operating system works matters: it defines how we interact with hardware in a computing landscape that never stops changing.
