
Lecture #06: Memory Management

refer to Note

refer to Slide

1. Introduction#

How to store:

  • Where on disk our pages should be stored

Fast operations:

  • In a DBMS, all data must be brought from disk into memory before it can be operated on.


2. Locks vs. Latches#

  • Locks protect the database's logical contents (e.g., tuples, tables, databases) from other transactions. They are held for the duration of a transaction and must be able to be rolled back.
  • Latches protect the DBMS's internal data structures (e.g., a frame in the buffer pool) from other threads. They are held only for the duration of the operation and do not need to support rollback.


3. Buffer Pool#

The buffer pool is a region of memory organized as an array of fixed-size pages.

Each array slot is called a frame.

When the DBMS needs a page, an exact copy of that page is read from disk into a frame.

Dirty pages are not written back to disk immediately. (Write-Back Cache)

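A minimal sketch of this layout, assuming a 4 KB page and a 64-frame pool (both are made-up values for illustration):

#include <stdio.h>

#define PAGE_SIZE        4096   /* assumed fixed page size */
#define BUFFER_POOL_SIZE 64     /* assumed number of frames */

/* The buffer pool is just an in-memory array of fixed-size slots (frames). */
static char frames[BUFFER_POOL_SIZE][PAGE_SIZE];

/* Copy page `page_no` from the database file into frame `frame_no`, byte for byte. */
void load_page(FILE *db_file, long page_no, int frame_no) {
    fseek(db_file, page_no * PAGE_SIZE, SEEK_SET);
    if (fread(frames[frame_no], 1, PAGE_SIZE, db_file) != PAGE_SIZE) {
        /* handle a short read or I/O error */
    }
}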

Buffer Pool Meta-data#


The page table is used to track which pages are in memory.

Usually, some additional information is also stored in the page table.

  • Dirty Flag
  • Pin/Reference Counter

The Dirty Flag indicates whether the page has been modified since it was read from disk.

The Pin/Reference Counter tracks how many threads are currently using the page; while it is greater than zero, the frame must not be evicted.

The page directory is a mapping from page ids to page locations in the database files. It must be stored on disk so that the DBMS can still find every page after a restart.

The page table is a mapping that maps page ids to frames in the buffer pool. This is an in-memory data structure that does not need to be stored on the disk.
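
A minimal sketch of the per-frame meta-data and a page table lookup, with hypothetical type and field names (a real DBMS would use a hash table rather than a linear scan):

#include <stdbool.h>
#include <stdint.h>

#define BUFFER_POOL_SIZE 64
#define INVALID_PAGE_ID  (-1)   /* empty frames hold this value */

typedef int32_t page_id_t;
typedef int32_t frame_id_t;

/* Per-frame meta-data tracked by the page table. */
typedef struct {
    page_id_t page_id;    /* which page currently occupies this frame */
    bool      is_dirty;   /* has the in-memory copy been modified? */
    int       pin_count;  /* > 0 means the frame must not be evicted */
} frame_meta;

static frame_meta page_table[BUFFER_POOL_SIZE];

/* Check whether a page is already in memory; returns the frame id or -1. */
frame_id_t lookup_page(page_id_t pid) {
    for (frame_id_t f = 0; f < BUFFER_POOL_SIZE; f++)
        if (page_table[f].page_id == pid)
            return f;
    return -1;
}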

Memory Allocation Policies#


Global policies

  • Make decisions for the benefit of the entire workload, considering all active queries and transactions

Local policies

  • Make decisions that benefit one specific query, without considering the impact on the rest of the workload

4. Buffer Pool Optimizations#

Multiple Buffer Pools#

Usually, a DBMS does not have only one buffer pool.


The DBMS maintains multiple buffer pools for various purposes (e.g., a per-database buffer pool, a per-page-type buffer pool). Each buffer pool can adopt its own local policies tailored to the data stored in it.

There are two main ways to decide which buffer pool a page belongs to:

  1. Object Id
    Identify the buffer pool by the object id of the object the page belongs to, and maintain a mapping from objects to buffer pools.
  2. Hashing
    Identify the buffer pool by hashing the page id (see the sketch below).
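
A minimal sketch of the hashing approach, assuming a fixed number of pools (the constant and the hash function are made up for illustration):

#include <stdint.h>

#define NUM_BUFFER_POOLS 4

typedef int32_t page_id_t;
typedef struct buffer_pool buffer_pool_t;     /* defined elsewhere */

static buffer_pool_t *pools[NUM_BUFFER_POOLS];/* filled in at startup */

/* Pick the buffer pool responsible for a page by hashing its page id. */
buffer_pool_t *pool_for_page(page_id_t pid) {
    uint32_t h = (uint32_t)pid * 2654435761u; /* simple multiplicative hash */
    return pools[h % NUM_BUFFER_POOLS];
}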

Pre-fetching#

The DBMS can prefetch pages into the buffer pool in advance based on the query plan, especially for:

  • Sequential queries
  • Index queries

While the query is still working on page1, the buffer pool already starts loading the pages that will be accessed next.

For index scans, the pages that will be needed can also be loaded in advance, even if they are not adjacent on disk.
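
A minimal sketch of sequential prefetching; fetch_page() and prefetch_page() are hypothetical helpers (prefetch_page() only schedules an asynchronous read and does not block):

#include <stdint.h>

#define PREFETCH_DEPTH 3   /* how many pages to read ahead (made-up value) */

typedef int32_t page_id_t;

void *fetch_page(page_id_t pid);    /* hypothetical: blocks until the page is in memory */
void  prefetch_page(page_id_t pid); /* hypothetical: schedules an asynchronous read */

/* Sequential scan over pages [first, last]: while page `pid` is being
 * processed, the next few pages are already on their way into the pool. */
void sequential_scan(page_id_t first, page_id_t last) {
    for (page_id_t pid = first; pid <= last; pid++) {
        for (page_id_t p = pid + 1; p <= last && p <= pid + PREFETCH_DEPTH; p++)
            prefetch_page(p);

        void *page = fetch_page(pid);
        (void)page;   /* ... process the tuples on this page ... */
    }
}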

Scan Sharing (Synchronized Scans)#

Scan sharing usually helps when the same table is scanned by multiple queries at around the same time.

If a first query is scanning a table and, shortly afterwards, a second query starts scanning the same table, the second query attaches itself to the first query's scan cursor. After the shared scan is finished, the second query goes back and reads the pages it missed.

When Q1 is halfway through its scan, Q2 starts the same query.

scan sharing_1

Since Q1 has already scanned part of the table, it would be wasteful to reload page0 into the buffer pool for Q2, so Q2 simply scans along with Q1.

scan sharing_2

After the shared scan is finished, Q2 scans the remaining pages (page0-page2) on its own.

With this strategy, a query may see the tuples in a different order, so the final result can differ for queries whose results depend on the scan order. When the order matters (i.e., the result must be sequential and deterministic), scan sharing is generally not used.

scan sharing_3
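
A minimal sketch of the page order Q2 sees when it attaches to Q1's scan; q2_process_page() is a hypothetical per-page callback:

typedef int page_id_t;

void q2_process_page(page_id_t pid);   /* hypothetical: Q2 consumes one page */

/* Q2 attaches while Q1 is about to read page `q1_pos`. Q2 therefore sees
 * pages in the order q1_pos..last_page, then 0..q1_pos-1, which is why
 * scan sharing can change results that depend on the scan order. */
void q2_scan(page_id_t q1_pos, page_id_t last_page) {
    for (page_id_t pid = q1_pos; pid <= last_page; pid++)   /* shared with Q1 */
        q2_process_page(pid);
    for (page_id_t pid = 0; pid < q1_pos; pid++)            /* catch-up pass */
        q2_process_page(pid);
}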

Buffer Pool Bypass#

Some operations (e.g., large sequential scans) can bypass the buffer pool entirely and keep the pages they fetch in memory local to the query, which reduces overhead.


5. OS Page Cache#


Most DBMSs avoid going through the OS page cache, e.g., by opening files with direct I/O (see the sketch below), in order to:

  • Avoid redundant copies of pages (one in the OS page cache, one in the buffer pool)
  • Avoid having two different eviction policies working against each other
  • Keep full control over I/O
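
A minimal Linux-specific sketch of bypassing the OS page cache with O_DIRECT (the file name and sizes are made up; O_DIRECT requires block-aligned buffers and offsets):

#define _GNU_SOURCE         /* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    int fd = open("test.db", O_RDWR | O_DIRECT);   /* hypothetical file */
    if (fd < 0)
        return 1;

    /* O_DIRECT I/O must use a block-aligned buffer. */
    void *buf;
    if (posix_memalign(&buf, 4096, 4096) != 0) {
        close(fd);
        return 1;
    }

    /* Read the first 4 KB page straight from disk into our own buffer,
     * without the kernel keeping a second copy in its page cache. */
    pread(fd, buf, 4096, 0);

    free(buf);
    close(fd);
    return 0;
}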

6. Buffer Replacement Policies#


When the DBMS needs to evict frames from the buffer pool, it needs to decide which page to evict.

A buffer replacement policy is the algorithm that makes this decision.

Its main goals are:

  • Correctness
  • Consistency
  • Speed
  • Metadata overhead

Least Recently Used (LRU)#


Maintain a timestamp of when each page was last accessed. When a frame needs to be replaced, evict the page whose last access is the oldest, i.e., the least recently used page.
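
A minimal sketch of LRU with a per-frame timestamp; a real implementation would keep a queue or linked list instead of scanning every frame:

#include <stdint.h>

#define BUFFER_POOL_SIZE 64

typedef int32_t frame_id_t;

static uint64_t now = 0;                          /* logical clock */
static uint64_t last_access[BUFFER_POOL_SIZE];    /* timestamp per frame */

/* Record an access to a frame. */
void lru_touch(frame_id_t f) { last_access[f] = ++now; }

/* Evict the frame whose most recent access is the oldest. */
frame_id_t lru_victim(void) {
    frame_id_t victim = 0;
    for (frame_id_t f = 1; f < BUFFER_POOL_SIZE; f++)
        if (last_access[f] < last_access[victim])
            victim = f;
    return victim;
}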

CLOCK#


The clock algorithm is an approximation of LRU. Its advantage is that it no longer needs per-page timestamps; it only keeps a single reference bit per frame.

CLOCK arranges all frames in a fixed circular order. Initially every frame's bit is 0; when a page is accessed, its frame's bit is set to 1. When a victim is needed, a "clock hand" sweeps the circle: if the current frame's bit is 1, the bit is cleared and the hand moves on; if the bit is 0, that frame is evicted.

#define BUFFER_POOL_SIZE 64           /* assumed number of frames */

typedef int page_id;

static int bit[BUFFER_POOL_SIZE];     /* one reference bit per frame */

/* Sweep the clock hand until a frame whose bit is 0 is found. Frames with
 * bit 1 get a "second chance": their bit is cleared and the hand moves on. */
page_id clock_page() {

	static page_id ref = 0;           /* the clock hand */

	for ( ; ; ref++) {
		if (ref >= BUFFER_POOL_SIZE) ref = 0;

		if (bit[ref])
			bit[ref] = 0;             /* second chance: clear and keep going */
		else
			return ref++;             /* evict this frame and advance the hand */
	}
}
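
In this sketch, BUFFER_POOL_SIZE and the bit[] array are assumed stand-ins for the real buffer pool state. After the victim frame is reused for a new page, that frame's bit is typically initialized to 1 so the freshly loaded page is not evicted on the very next sweep.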

Alternatives#

Both LRU and CLOCK are vulnerable to sequential flooding:

  • A query sequentially scans every page of a table
  • Each page is read only once and will never be accessed again

This leaves the buffer pool full of pages that are not the pages that will be used next.

To reduce the occurrence of this situation, we usually use LRU-K for replacement.

LRU-K

Track the timestamps of the last K accesses to each page. The DBMS uses this history to decide the eviction priority of each page.
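
A minimal sketch of LRU-K with K = 2, tracking per-frame access history (the constants are made up). Frames with fewer than K recorded accesses are the preferred victims, which is what protects the pool against sequential flooding:

#include <stdint.h>

#define BUFFER_POOL_SIZE 64
#define K 2

typedef int32_t frame_id_t;

static uint64_t now = 0;
/* history[f][0] is the most recent access of frame f, history[f][K-1] the
 * K-th most recent; 0 means "fewer than K accesses recorded so far". */
static uint64_t history[BUFFER_POOL_SIZE][K];

/* Record an access: shift the history and store the new timestamp. */
void lru_k_touch(frame_id_t f) {
    for (int i = K - 1; i > 0; i--)
        history[f][i] = history[f][i - 1];
    history[f][0] = ++now;
}

/* Evict the frame whose K-th most recent access lies furthest in the past. */
frame_id_t lru_k_victim(void) {
    frame_id_t victim = 0;
    for (frame_id_t f = 1; f < BUFFER_POOL_SIZE; f++)
        if (history[f][K - 1] < history[victim][K - 1])
            victim = f;
    return victim;
}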

Dirty Page#


Non-dirty pages can simply be dropped when they are evicted, while dirty pages must first be written back to disk to preserve their changes.

To avoid paying this cost at eviction time, one strategy is to use a background thread that periodically scans the buffer pool for dirty pages and writes them back to disk, after which their dirty flags can be safely cleared.
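
A minimal sketch of such a background writer; write_back() is a hypothetical helper that writes the page held in a frame back to its location on disk:

#include <stdbool.h>

#define BUFFER_POOL_SIZE 64

static bool is_dirty[BUFFER_POOL_SIZE];
static int  pin_count[BUFFER_POOL_SIZE];

void write_back(int frame);   /* hypothetical: flush one frame to disk */

/* Periodically walk the buffer pool and flush dirty, unpinned frames so
 * that a later eviction does not have to wait for a disk write. */
void flush_dirty_pages(void) {
    for (int f = 0; f < BUFFER_POOL_SIZE; f++) {
        if (is_dirty[f] && pin_count[f] == 0) {
            write_back(f);
            is_dirty[f] = false;   /* now safe to clear the dirty flag */
        }
    }
}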


7. Other Memory Pools#

Besides the buffer pool for tuple and index pages, the DBMS also needs memory for other purposes, e.g., sorting and join buffers, query caches, maintenance buffers, log buffers, and dictionary caches.


Tasks#

https://15445.courses.cs.cmu.edu/fall2022/project1/
