<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Tyler Bugbee</title>
    <description>This blog serves as an ongoing portfolio for my various projects, research interests, and academics. Please feel free to contact me directly - email is best.
</description>
    <link>https://tylerbugbee.com/</link>
    <atom:link href="https://tylerbugbee.com/feed.xml" rel="self" type="application/rss+xml"/>
    <pubDate>Tue, 24 Mar 2026 18:57:27 +0000</pubDate>
    <lastBuildDate>Tue, 24 Mar 2026 18:57:27 +0000</lastBuildDate>
    <generator>Jekyll v3.10.0</generator>
    
      <item>
        <title>Operating Systems Final Review</title>
        <description>&lt;h1 id=&quot;chapters-8---onwards&quot;&gt;Chapters 8 - Onwards&lt;/h1&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;memory-management-chapter-8-and-9&quot;&gt;Memory Management (Chapter 8 and 9)&lt;/h2&gt;

&lt;h4 id=&quot;memory-allocation--&quot;&gt;Memory Allocation&lt;/h4&gt;

&lt;h4 id=&quot;internal-and-external-fragmentation&quot;&gt;Internal and External Fragmentation&lt;/h4&gt;
&lt;p&gt;Fragmentation refers to wasted or unusable portions of available memory, not to the size of files&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Internal fragmentation: allocated memory may be slightly larger than the requested memory; the difference is wasted space inside the allocated block
    &lt;ul&gt;
      &lt;li&gt;For example, consider an OS that can only allocate blocks of size 5KB. If a process requires 29KB of memory, the smallest allocation the OS can make that will satisfy this need is 30KB&lt;/li&gt;
      &lt;li&gt;This means there is 1KB of internal fragmentation after the memory is allocated. “Room for Growth”&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;External fragmentation: total memory space exists to satisfy the request, but it is not contiguous
    &lt;ul&gt;
      &lt;li&gt;Consider contiguous blocks of 50KB, 30KB, 10KB, and 100KB. The 50KB blocks and 100KB blocks are free space&lt;/li&gt;
      &lt;li&gt;A process requires 125KB of free space to run. The two free blocks total 150KB, which is enough, but they are not contiguous&lt;/li&gt;
      &lt;li&gt;External fragmentation can be reduced by using compaction:
        &lt;ul&gt;
          &lt;li&gt;Shuffle memory contents to place all free memory together in one large block&lt;/li&gt;
          &lt;li&gt;Compaction is only possible if relocation is dynamic and is done at execution time&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;
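&lt;p&gt;The rounding in the internal-fragmentation example above can be sketched as follows (a minimal illustration; the function name is mine, and the 5KB/29KB figures are just the numbers from the example):&lt;/p&gt;

```python
def internal_fragmentation(request_kb, block_kb):
    """Round a request up to a whole number of fixed-size blocks and
    return (allocated_kb, wasted_kb)."""
    blocks = -(-request_kb // block_kb)   # ceiling division
    allocated = blocks * block_kb
    return allocated, allocated - request_kb

# The example from the notes: 29KB requested, 5KB allocation blocks.
print(internal_fragmentation(29, 5))  # (30, 1) -> 1KB of internal fragmentation
```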

&lt;h4 id=&quot;segmentation-and-paging&quot;&gt;Segmentation and Paging&lt;/h4&gt;
&lt;p&gt;&lt;strong&gt;Segmentation&lt;/strong&gt;:&lt;br /&gt;
Memory-management scheme which organizes a program into a collection of segments, which are contiguous regions of virtual memory&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Each process has a segment table, supported in hardware; each entry describes one segment&lt;/li&gt;
  &lt;li&gt;A segment can be located anywhere in physical memory
    &lt;ul&gt;
      &lt;li&gt;Each segment has a(n): start address, length, and access permission (R/W/RW)&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Processes can share segments
    &lt;ul&gt;
      &lt;li&gt;Same start address, length, same/different access permissions&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;A logical address consists of a two-tuple:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&amp;lt;segment-number, offset&amp;gt;  
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;ul&gt;
  &lt;li&gt;Segment table - maps two-dimensional logical addresses (segment, offset) to one-dimensional physical addresses; each table entry has:
    &lt;ul&gt;
      &lt;li&gt;base - contains the starting physical address where the segments reside in memory&lt;/li&gt;
      &lt;li&gt;limit - specifies the length of the segment&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Because segments vary in length, memory allocation is a dynamic storage-allocation problem&lt;/li&gt;
  &lt;li&gt;For example, consider the following table:&lt;/li&gt;
&lt;/ul&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Base&lt;/th&gt;
      &lt;th&gt;Bound&lt;/th&gt;
      &lt;th&gt;Access Permissions&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;1000&lt;/td&gt;
      &lt;td&gt;200&lt;/td&gt;
      &lt;td&gt;R&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;4000&lt;/td&gt;
      &lt;td&gt;1521&lt;/td&gt;
      &lt;td&gt;RW&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;9000&lt;/td&gt;
      &lt;td&gt;400&lt;/td&gt;
      &lt;td&gt;RW&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;If we have the following requests from a program:&lt;/p&gt;
&lt;ol&gt;
  &lt;li&gt;Read (seg 0, offset 150) = legal read of segment 0 at address 1150 (Base = 1000 + offset (150))
    &lt;ul&gt;
      &lt;li&gt;Offset is less than bound (200), so this is within the segment&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Write (seg 1, offset 51) = legal write of segment 1 at address 4051 (Base = 4000 + offset (51))&lt;/li&gt;
  &lt;li&gt;Write (seg 2, offset 20) = legal write of segment 2 at address 9020 (Base = 9000 + offset (20))&lt;/li&gt;
  &lt;li&gt;Read (seg 2, offset 501) = illegal read of segment 2 at address 9501 (Base = 9000 + offset 501)
    &lt;ul&gt;
      &lt;li&gt;Results in a segmentation fault, offset is more than bound&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Write (seg 0, offset 10) = illegal write of segment 0 at address 1010 (Base = 1000 + offset 10)
    &lt;ul&gt;
      &lt;li&gt;Results in an illegal-write exception: the address is within the segment, but this segment is read-only&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Paging&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Divides physical memory into fixed-sized blocks called frames
    &lt;ul&gt;
      &lt;li&gt;Avoids external fragmentation&lt;/li&gt;
      &lt;li&gt;Avoids the problem of varying-sized memory chunks&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Divides logical memory into blocks of the same size called pages&lt;/li&gt;
  &lt;li&gt;To run a program of size N pages, we need to find N free frames and load the program&lt;/li&gt;
  &lt;li&gt;We set up a page table to translate logical to physical addresses&lt;/li&gt;
  &lt;li&gt;For additional virtual memory notes, see &lt;a href=&quot;/2017/11/11/virtual-memory.html&quot;&gt;notes on virtual memory&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
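&lt;p&gt;The segment-table lookups worked through above can be sketched as a small bounds-and-permissions check (an illustrative sketch, not hardware; the base/bound/permission values are the ones from the example table):&lt;/p&gt;

```python
# Segment table from the example: segment number -> (base, bound, permissions)
SEGMENTS = {0: (1000, 200, "R"), 1: (4000, 1521, "RW"), 2: (9000, 400, "RW")}

def translate(seg, offset, op):
    """Return the physical address, or raise on a bounds or permission violation."""
    base, bound, perms = SEGMENTS[seg]
    if offset >= bound:
        raise MemoryError("segmentation fault: offset exceeds bound")
    if op == "W" and "W" not in perms:
        raise PermissionError("illegal write to a read-only segment")
    return base + offset

print(translate(0, 150, "R"))  # 1150
print(translate(2, 20, "W"))   # 9020
```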

&lt;h4 id=&quot;kernel-memory-allocation-buddy-system-slab&quot;&gt;Kernel Memory Allocation: Buddy system, Slab&lt;/h4&gt;
&lt;p&gt;&lt;strong&gt;Buddy System&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Allocates memory from fixed-size segments consisting of physically-contiguous pages&lt;/li&gt;
  &lt;li&gt;Memory allocated using power-of-2 allocator
    &lt;ul&gt;
      &lt;li&gt;Satisfies requests in units sized as power of 2&lt;/li&gt;
      &lt;li&gt;Request rounded up to the next highest power of 2&lt;/li&gt;
      &lt;li&gt;When a smaller allocation is needed than the current chunk provides, the chunk is split into two buddies of the next-lower power of 2
        &lt;ul&gt;
          &lt;li&gt;Continue this process until appropriately sized chunk is available&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;For example, assume 256KB chunk available, kernel requests 21KB
    &lt;ul&gt;
      &lt;li&gt;Split into A1 and A2 of 128KB each
        &lt;ul&gt;
          &lt;li&gt;One further divided into B1 and B2 of 64KB
            &lt;ul&gt;
              &lt;li&gt;One further into C1 and C2 of 32 KB each, one is used to satisfy the request&lt;/li&gt;
            &lt;/ul&gt;
          &lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Advantage - quickly coalesce unused chunks into a larger chunk&lt;/li&gt;
  &lt;li&gt;Disadvantage - fragmentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Slab Allocator&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;A slab is one or more physically contiguous pages&lt;/li&gt;
  &lt;li&gt;A cache consists of one or more slabs&lt;/li&gt;
  &lt;li&gt;There is a single cache for each unique kernel data structure
    &lt;ul&gt;
      &lt;li&gt;Each cache is filled with objects - instantiations of the data structure&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;When a cache is created, it is filled with objects marked as free&lt;/li&gt;
  &lt;li&gt;When structures are stored, objects are marked as used&lt;/li&gt;
  &lt;li&gt;If a slab is full of used objects, the next object is allocated from an empty slab
    &lt;ul&gt;
      &lt;li&gt;If there are no empty slabs, a new slab is allocated&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Benefits include no fragmentation and fast satisfaction of memory requests&lt;/li&gt;
&lt;/ul&gt;
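&lt;p&gt;The buddy-system walk-through above (a 256KB chunk serving a 21KB request) can be sketched as repeated halving until the next split would be too small for the request (an illustrative sketch, not kernel code):&lt;/p&gt;

```python
def buddy_splits(chunk_kb, request_kb):
    """Return the chunk sizes produced while splitting down to the
    smallest power-of-2 chunk that still satisfies the request."""
    sizes = [chunk_kb]
    while chunk_kb // 2 >= request_kb:
        chunk_kb //= 2          # split the chunk into two buddies
        sizes.append(chunk_kb)  # recurse into one of them
    return sizes

# 256KB -> A (128KB) -> B (64KB) -> C (32KB), which serves the 21KB request.
print(buddy_splits(256, 21))  # [256, 128, 64, 32]
```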

&lt;h2 id=&quot;storage-and-file-system-ch-10-11-12&quot;&gt;Storage and File System (Ch. 10, 11, 12)&lt;/h2&gt;

&lt;h4 id=&quot;raid-system-parity-and-hamming&quot;&gt;RAID System: Parity and Hamming&lt;/h4&gt;
</description>
        <pubDate>Wed, 06 Dec 2017 22:30:00 +0000</pubDate>
        <link>https://tylerbugbee.com/2017/12/06/Final-review.html</link>
        <guid isPermaLink="true">https://tylerbugbee.com/2017/12/06/Final-review.html</guid>
        
        
      </item>
    
      <item>
        <title>Operating Systems Midterm Review</title>
        <description>&lt;h1 id=&quot;review-of-chapters-2---7&quot;&gt;Review of Chapters 2 - 7&lt;/h1&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;a href=&quot;/Review.pdf&quot;&gt;Midterm Review&lt;/a&gt;&lt;/p&gt;
</description>
        <pubDate>Wed, 06 Dec 2017 21:00:00 +0000</pubDate>
        <link>https://tylerbugbee.com/2017/12/06/os-midterm-review.html</link>
        <guid isPermaLink="true">https://tylerbugbee.com/2017/12/06/os-midterm-review.html</guid>
        
        
      </item>
    
      <item>
        <title>Small Sat Lab - Day in the Life Testing Overview</title>
        <description>&lt;h2 id=&quot;section-1-testing&quot;&gt;Section 1: Testing&lt;/h2&gt;
&lt;ul&gt;
  &lt;li&gt;prove that we’re converging towards final goal using flatsat&lt;/li&gt;
  &lt;li&gt;“maturing the design” - ground segment
    &lt;ul&gt;
      &lt;li&gt;why do we think we’re capable of doing this?&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;showing ambition&lt;/li&gt;
  &lt;li&gt;begin from a narrative - go to&lt;/li&gt;
  &lt;li&gt;states - show how current state infers next state
    &lt;ul&gt;
      &lt;li&gt;ex: rPi represents obc, tx1 represents tx2,&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;introduction: thought behind what we need to do: short, reference some documents, the fact that we want to show evidence&lt;/li&gt;
&lt;/ul&gt;
</description>
        <pubDate>Thu, 16 Nov 2017 21:30:00 +0000</pubDate>
        <link>https://tylerbugbee.com/personal/2017/11/16/Small-Sat-DITL.html</link>
        <guid isPermaLink="true">https://tylerbugbee.com/personal/2017/11/16/Small-Sat-DITL.html</guid>
        
        
        <category>personal</category>
        
      </item>
    
      <item>
        <title>Notes on Virtual Memory</title>
        <description>&lt;h1 id=&quot;virtual-memory-notes&quot;&gt;Virtual memory notes&lt;/h1&gt;
&lt;hr /&gt;
&lt;h2 id=&quot;paging&quot;&gt;Paging&lt;/h2&gt;
&lt;ul&gt;
  &lt;li&gt;Frames are fixed-sized blocks of physical memory&lt;/li&gt;
  &lt;li&gt;Pages are fixed-sized blocks of logical memory&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id=&quot;process&quot;&gt;Process&lt;/h4&gt;
&lt;p&gt;When a process is to be executed, its pages are loaded into any available memory frames from their source (a file system or the backing store).&lt;br /&gt;
   The logical address space is now totally separate from the physical address space, so a process can have a logical 64-bit address space even though the system has less than 2^64 bytes of physical memory&lt;/p&gt;

&lt;h4 id=&quot;hardware-support&quot;&gt;Hardware Support&lt;/h4&gt;
&lt;p&gt;Every address generated by the CPU is divided into two parts: a &lt;strong&gt;page number (p)&lt;/strong&gt; and a &lt;strong&gt;page offset (d)&lt;/strong&gt;. The page number is used as an index into a &lt;strong&gt;page table&lt;/strong&gt;. The page table contains the base address of each page in physical memory. This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.&lt;br /&gt;
   The page size (like the frame size) is defined by the hardware. The size of a page is a power of 2, varying between 512 bytes and 1 GB per page. If the size of the logical address space is 2^m, and a page size is 2^n bytes, then the high-order m-n bits of a logical address designate the page number and the n low-order bits designate the page offset.&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Every logical address is bound by the paging hardware to some physical address. Using paging is similar to using a table of base registers, one for each frame of memory&lt;/li&gt;
  &lt;li&gt;When a process arrives in the system to be executed, its size, expressed in pages, is examined. Each page of the process needs at least one frame
    &lt;ul&gt;
      &lt;li&gt;thus, if the process requires n pages, at least n frames must be available in memory. If n frames are available, they are allocated to this arriving process&lt;/li&gt;
      &lt;li&gt;The first page of the process is loaded into one of the allocated frames, and the frame number is put in the page table for this process. The next page is loaded into another frame, its frame number is put onto the page table, and so on&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;
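&lt;p&gt;The page-number/offset split described above can be sketched as follows, assuming an illustrative 4KB page size (n = 12); the actual size is fixed by the hardware:&lt;/p&gt;

```python
PAGE_SIZE = 4096  # 2**12, an assumed page size for illustration

def split(logical_addr):
    """Split a logical address into (page number, page offset)."""
    page_number = logical_addr // PAGE_SIZE  # high-order m-n bits
    offset = logical_addr % PAGE_SIZE        # low-order n bits
    return page_number, offset

print(split(20000))  # (4, 3616): page 4 starts at 16384, 20000 - 16384 = 3616
```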

&lt;h4 id=&quot;frame-table&quot;&gt;frame table&lt;/h4&gt;
&lt;p&gt;data structure containing one entry for each physical page frame with the following information:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;which frames of physical memory are allocated
    &lt;ul&gt;
      &lt;li&gt;and to which page of which process&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;which frames are available&lt;/li&gt;
  &lt;li&gt;how many total frames there are&lt;/li&gt;
  &lt;li&gt;etc&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id=&quot;translation-look-aside-buffer-tlb&quot;&gt;translation look-aside buffer (TLB)&lt;/h4&gt;
&lt;p&gt;associative, high-speed memory. Each entry in the TLB consists of two parts:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;a key (or tag)
    &lt;ul&gt;
      &lt;li&gt;when the associative memory is presented with an item, the item is compared with all keys simultaneously&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;a value
    &lt;ul&gt;
      &lt;li&gt;If the item is found, the corresponding value field is returned&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;The TLB only contains a few of the page-table entries. When a logical address is generated by the CPU, its page number is presented to the TLB
    &lt;ul&gt;
      &lt;li&gt;If the page number is found, its frame number is immediately available and is used to access memory&lt;/li&gt;
      &lt;li&gt;If the page number is not in the TLB (known as a &lt;strong&gt;TLB miss&lt;/strong&gt;), a memory reference to the page table must be made
        &lt;ul&gt;
          &lt;li&gt;When the frame number is obtained, we can use it to access memory&lt;/li&gt;
          &lt;li&gt;In addition, we add the page number and frame number to the TLB, so that they will be found quickly on the next reference&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;If the TLB is already full of entries, an existing entry must be selected for replacement.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;
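&lt;p&gt;The TLB hit/miss behavior above can be sketched with ordinary dictionaries standing in for the associative hardware (an illustrative sketch; the page-table contents are made up, and TLB replacement when full is omitted):&lt;/p&gt;

```python
tlb = {}                          # page number -> frame number (small, fast)
page_table = {0: 5, 1: 9, 2: 3}   # illustrative page table

def lookup(page_number):
    """Return (frame_number, hit) and refill the TLB on a miss."""
    if page_number in tlb:
        return tlb[page_number], True    # TLB hit
    frame = page_table[page_number]      # TLB miss: consult the page table
    tlb[page_number] = frame             # cache it for the next reference
    return frame, False

print(lookup(1))  # (9, False) -- first reference misses
print(lookup(1))  # (9, True)  -- now cached in the TLB
```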

&lt;h2 id=&quot;protection-validinvalid-bits&quot;&gt;Protection (valid/invalid bits)&lt;/h2&gt;
&lt;p&gt;these bits are kept in the page table - one can define a page to be read-write or read-only&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Every reference to memory goes through the page table to find the correct frame number&lt;/li&gt;
  &lt;li&gt;Protection bits can be checked to ensure that no writes are being made to a read-only page&lt;/li&gt;
  &lt;li&gt;one such bit is the &lt;strong&gt;valid-invalid&lt;/strong&gt; bit
    &lt;ul&gt;
      &lt;li&gt;when this bit is set to valid, the associated page is in the process’s logical address space and is thus a legal (or valid) page&lt;/li&gt;
      &lt;li&gt;when the bit is set to invalid, the page is not in the process’s logical address space&lt;/li&gt;
      &lt;li&gt;the OS sets this bit for each page to allow or disallow access to the page&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;replacement&quot;&gt;Replacement&lt;/h2&gt;
&lt;p&gt;If memory is overallocated - that is, processes request more memory than is physically available - the OS must use a page-replacement algorithm&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;if no frame is free, we find one that is not currently being used and free it&lt;/li&gt;
  &lt;li&gt;we do this by changing the page table (and all other tables) to indicate that the page is no longer in memory&lt;/li&gt;
  &lt;li&gt;we can now use the freed frame to hold the page for which the process faulted&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
  &lt;li&gt;Find the location of the desired page on the disk&lt;/li&gt;
  &lt;li&gt;Find a free frame:&lt;br /&gt;
  a. if there is a free frame, use it&lt;br /&gt;
  b. if there is no free frame, use a page-replacement algorithm to select a &lt;strong&gt;victim frame&lt;/strong&gt;&lt;br /&gt;
  c. write the victim frame to the disk; change the page and frame tables accordingly&lt;/li&gt;
  &lt;li&gt;read the desired page into the newly freed frame; change the page and frame tables&lt;/li&gt;
  &lt;li&gt;continue the user process from where the page fault occurred&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This process slows down when there are no free frames and two page transfers are required: so we can reduce this overhead by using a &lt;strong&gt;modify bit&lt;/strong&gt; (or &lt;strong&gt;dirty bit&lt;/strong&gt;)&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;each page or frame has a modify bit associated with it in the hardware&lt;/li&gt;
  &lt;li&gt;the modify bit for a page is set by the hardware whenever any byte in the page is written into, indicating that the page has been modified&lt;/li&gt;
  &lt;li&gt;when we select a page for replacement, we examine its modify bit&lt;/li&gt;
  &lt;li&gt;if the bit is set, we know that the page has been modified since it was read in from the disk
    &lt;ul&gt;
      &lt;li&gt;in this case, we must write the page to the disk&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;if the bit is not set, the page has not been modified since it was read into memory.
    &lt;ul&gt;
      &lt;li&gt;in this case, we need not write the memory page to the disk: it is already there&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id=&quot;fifo&quot;&gt;FIFO&lt;/h4&gt;
&lt;ul&gt;
  &lt;li&gt;associates the time when that page was brought into memory with each page&lt;/li&gt;
  &lt;li&gt;when a page must be replaced, the oldest page is chosen&lt;/li&gt;
  &lt;li&gt;implement: replace the page at the head of the queue each time. When a page is brought into memory, we insert it at the tail of the queue&lt;/li&gt;
&lt;/ul&gt;
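&lt;p&gt;The queue implementation above can be sketched as a short simulation counting page faults (the reference string is just an illustrative example):&lt;/p&gt;

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults under FIFO replacement: the head of the queue
    (the oldest page) is evicted when all frames are in use."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.remove(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)                  # newest page at the tail
    return faults

print(fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 9
```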

&lt;h4 id=&quot;lru&quot;&gt;LRU&lt;/h4&gt;
&lt;ul&gt;
  &lt;li&gt;least recently used algorithm - replace the page that has not been used for the longest period of time&lt;/li&gt;
  &lt;li&gt;associates with each page the time of that page’s last use&lt;/li&gt;
  &lt;li&gt;when a page must be replaced, LRU chooses the page that has not been used for the longest period of time&lt;/li&gt;
  &lt;li&gt;implement:
    &lt;ul&gt;
      &lt;li&gt;associate a time-of-use field with each page-table entry and update it on every reference to that page. We replace the page with the smallest time value&lt;/li&gt;
      &lt;li&gt;keep a stack of page numbers. Whenever a page is referenced, it is removed from the stack and put on the top. In this way, the most recently used page is always at the top of the stack and the least recently used page is always at the bottom. With a doubly linked list implementation, at most 6 pointers must be changed in the worst case&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;
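&lt;p&gt;The stack implementation of LRU above can be sketched with a plain Python list standing in for the doubly linked list (an illustrative sketch; the reference string is the same example as before):&lt;/p&gt;

```python
def lru_faults(refs, num_frames):
    """Count page faults under LRU: referenced pages move to the top of
    the stack, so the bottom is always the least recently used victim."""
    stack, faults = [], 0        # stack[-1] is the most recently used page
    for page in refs:
        if page in stack:
            stack.remove(page)   # move the referenced page to the top
        else:
            faults += 1
            if len(stack) == num_frames:
                stack.pop(0)     # evict the least recently used (bottom)
        stack.append(page)
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10
```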

&lt;h3 id=&quot;second-chance-clock&quot;&gt;Second-chance (clock)&lt;/h3&gt;
&lt;p&gt;Basic algorithm is a FIFO replacement algorithm. When a page has been selected, however, we inspect its reference bit&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;if the bit is 0, we proceed to replace this page&lt;/li&gt;
  &lt;li&gt;if the bit is 1, we give the page a second chance and move on to select the next FIFO page
    &lt;ul&gt;
      &lt;li&gt;when a page gets a second chance, its reference bit is cleared, and its arrival time is reset to the current time&lt;/li&gt;
      &lt;li&gt;thus, a page that is given a second chance will not be replaced until all other pages have been replaced (or given second chances)&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;implement: a pointer indicates which page is to be replaced next. When a frame is needed, the pointer advances until it finds a page with a 0 reference bit. As it advances, it clears the reference bits
    &lt;ul&gt;
      &lt;li&gt;once a victim page is found, the page is replaced, and the new page is inserted in the circular queue in that position&lt;/li&gt;
      &lt;li&gt;if all bits are set, second-chance replacement degenerates to FIFO replacement&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;
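&lt;p&gt;The advancing pointer described above can be sketched as a clock hand over a circular list of frames (an illustrative sketch; the frame names and bit values are made up):&lt;/p&gt;

```python
def select_victim(frames, ref_bits, hand):
    """Advance the clock hand, clearing reference bits, until a page with
    a 0 bit is found. Returns (victim_index, new_hand_position)."""
    while True:
        if ref_bits[hand] == 0:
            return hand, (hand + 1) % len(frames)
        ref_bits[hand] = 0                # give this page a second chance
        hand = (hand + 1) % len(frames)   # move on to the next FIFO page

frames = ["A", "B", "C", "D"]
bits = [1, 0, 1, 1]
victim, hand = select_victim(frames, bits, 0)
print(frames[victim], bits)  # B [0, 0, 1, 1] -- A was spared, its bit cleared
```

If every bit is set, the hand clears them all and comes back to its starting frame, which is exactly the FIFO degeneration noted above.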

&lt;h2 id=&quot;page-fault-handler&quot;&gt;Page fault handler&lt;/h2&gt;
&lt;p&gt;When a process tries to access a page that was not brought into memory - access to a page marked invalid causes a &lt;strong&gt;page fault&lt;/strong&gt;. The paging hardware will notice that the invalid bit is set, causing a trap in the OS. The procedure for handling this page fault is as follows:&lt;/p&gt;
&lt;ol&gt;
  &lt;li&gt;We check an internal table for this process to determine whether the reference was a valid or an invalid memory access&lt;/li&gt;
  &lt;li&gt;If the reference was invalid, we terminate the process. If it was valid but we have not yet brought in the page, we now page it in&lt;/li&gt;
  &lt;li&gt;We find a free frame (by taking one from the free-frame list, for example)&lt;/li&gt;
  &lt;li&gt;We schedule a disk operation to read the desired page into the newly allocated frame&lt;/li&gt;
  &lt;li&gt;When the disk read is complete, we modify the internal table kept with the process and the page table to indicate that the page is now in memory&lt;/li&gt;
&lt;/ol&gt;

&lt;!-- ## Groupme notes
* use disk_write on victim pages when they are replaced by new pages taking the same frame, only if they are dirty.
* only write when it&apos;s diry, and we only write when we&apos;re invalidating an old page that is being replaced
* dirty means it is a write and it is not a tlb_hit
* initially assign the pages to frames from frame 0 to whatever in rising order, and then the page replacement algorithm handles any further assignments
* return value of the pagefault_handler is the frame number  
* physicalAddr = (frameNo &gt;&gt; 8) + offset;
* int tlbid = pageNo % (TLB_ENTRY/2);
* use valid bits on tlb to determine if it is still in memory, just like page table
* use disk_read on pagefault --&gt;
</description>
        <pubDate>Sat, 11 Nov 2017 12:00:00 +0000</pubDate>
        <link>https://tylerbugbee.com/2017/11/11/virtual-memory.html</link>
        <guid isPermaLink="true">https://tylerbugbee.com/2017/11/11/virtual-memory.html</guid>
        
        
      </item>
    
      <item>
        <title>Welcome to my Blog!</title>
        <description>&lt;p&gt;Hello, and welcome to my first post. My name is Tyler Bugbee and I’m a third-year Computer Science student at the University of Georgia. I hope to use this website as an outlet for tech-talk, information on what I’m working on in school and otherwise, and other curiosities as they relate to computer science.&lt;/p&gt;

&lt;p&gt;My current interests are machine learning, predictive analytics, large-scale data problems, and parallel and distributed computing.&lt;/p&gt;

&lt;p&gt;You can find me at &lt;a href=&quot;https://github.com/tbarc&quot;&gt;GitHub&lt;/a&gt;, &lt;a href=&quot;https://www.linkedin.com/in/tylerbugbee&quot;&gt;LinkedIn&lt;/a&gt;, and &lt;a href=&quot;mailto:tbugbee1@gmail.com&quot;&gt;Gmail&lt;/a&gt;. Feel free to contact me directly!&lt;/p&gt;
</description>
        <pubDate>Wed, 27 Jul 2016 07:33:00 +0000</pubDate>
        <link>https://tylerbugbee.com/personal/2016/07/27/introduction.html</link>
        <guid isPermaLink="true">https://tylerbugbee.com/personal/2016/07/27/introduction.html</guid>
        
        
        <category>personal</category>
        
      </item>
    
  </channel>
</rss>
