Cache in ARM Cortex M7: Cache Policies

In this second part of the cache in ARM Cortex-M7 series, we shall look at read and write operations and the cache policies of the ARM Cortex-M7.

In this guide, we shall cover the following:

  • What are a cache hit and a cache miss.
  • What is the dirty bit.
  • Cache policies.
  • STM32 cache policies and memory attributes.
  • Cache coherency.

3. What are Cache Hit and Cache Miss:

Cache hit and cache miss are terms used to describe cache efficiency in the ARM Cortex-M7 cache system.

3.1 Case: Cache Read:

In the case of reading from the cache, we have the following two terms:

  • Cache Hit.
  • Cache Miss.

Cache Hit

A cache hit occurs when the processor attempts to read data from the cache, and the data is found in the cache. This means the data is already stored in the cache from a previous read or write operation, allowing the processor to access the data much faster than if it had to retrieve it from the main memory.

Cache Miss

A cache miss occurs when the processor attempts to read data from the cache, but the data is not found in the cache. This means the data is not currently stored in the cache, and the processor must retrieve the data from the slower main memory. This results in a delay as the data is fetched from the main memory and possibly stored in the cache for future access.
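To make the difference concrete, here is a small sketch (an illustrative example, not taken from the article) assuming a buffer placed in cacheable memory and the Cortex-M7's 32-byte data cache lines:

```c
#include <stdint.h>

/* Hypothetical buffer, assumed to be placed in cacheable SRAM. */
static uint32_t buffer[64];

uint32_t sum_buffer(void)
{
    uint32_t sum = 0;

    /* The Cortex-M7 data cache uses 32-byte lines (8 words of 4 bytes).
     * Reading buffer[0] misses and loads the whole line, so the reads of
     * buffer[1]..buffer[7] are cache hits. The pattern repeats every
     * 8 words: one miss followed by seven hits.                          */
    for (uint32_t i = 0; i < 64; i++) {
        sum += buffer[i];
    }
    return sum;
}
```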

3.2 Case: Cache Writing:

In the case of writing, a cache hit occurs when the data to be written is found in the cache; depending on the write policy, the cache will behave differently.

4. What is the Dirty Bit:

The dirty bit indicates whether a cache line has been modified (dirty) or not modified (clean). Each line of the cache memory has its own dirty bit.

The dirty bit is used when writing back to the main memory: a write-back to main memory happens only if the dirty bit is set (dirty), which reduces the number of write operations to the main memory.

5. Cache Policies:

The ARM Cortex-M7 cache supports the following four policies:

  • Write through.
  • Write back.
  • Write allocate.
  • Read allocate.

5.1 Write Through Cache Policy:

Definition

The write-through policy ensures that every write operation updates both the cache and the main memory simultaneously. This means that whenever the processor writes data to the cache, it also writes the same data to the main memory.

Characteristics:

  1. Data Consistency: Since every write operation updates both the cache and main memory, the data in the cache and main memory is always consistent. There is no need to worry about synchronizing the cache with the main memory later.
  2. Simplicity: The write-through policy is simpler to implement compared to the write-back policy because it eliminates the need for dirty bits and the complex logic required to manage them.
  3. Lower Latency for Reads: Since data in the cache is always up-to-date with the main memory, read operations can always fetch the most recent data from the cache, which is faster than accessing the main memory.
  4. Higher Memory Bandwidth Usage: The downside of the write-through policy is that it can lead to higher memory bandwidth usage. Every write operation generates traffic to the main memory, which can be a bottleneck, especially in systems with high write frequencies.

Write-Through in ARM Cortex-M7

In the ARM Cortex-M7 processor, the write-through cache policy can be configured for the data cache. Here’s how it typically works:

  1. Write Operation (Cache Hit): When the processor writes data and the data is already in the cache (cache hit), the data is written to both the cache and the main memory. This ensures that the cache and the main memory are always synchronized.
  2. Write Operation (Cache Miss): When the processor writes data and the data is not in the cache (cache miss), the behaviour depends on the write allocation policy: the data may be written directly to the main memory, or the data block may be fetched into the cache first and then written. With the write-through policy, the write usually goes directly to main memory (no write allocate).
  3. No Dirty Bits: Since every write operation updates both the cache and the main memory, there are no “dirty” cache lines. Every cache line is clean because it always matches the corresponding main memory location.
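As a minimal sketch, a memory region can be given the write-through attribute through the MPU. The example below uses the STM32 HAL MPU API; the base address, region number, and size are placeholders, not values from this article:

```c
#include "stm32f7xx_hal.h"   /* or stm32h7xx_hal.h, depending on the device */

/* Configure a 32 KB SRAM region (placeholder address) as Normal memory,
 * write-through, no write allocate (TEX = 0, C = 1, B = 0).              */
void mpu_config_write_through(void)
{
    MPU_Region_InitTypeDef region = {0};

    HAL_MPU_Disable();

    region.Enable           = MPU_REGION_ENABLE;
    region.Number           = MPU_REGION_NUMBER0;
    region.BaseAddress      = 0x20020000;                /* placeholder */
    region.Size             = MPU_REGION_SIZE_32KB;
    region.AccessPermission = MPU_REGION_FULL_ACCESS;
    region.TypeExtField     = MPU_TEX_LEVEL0;            /* TEX = 0 */
    region.IsCacheable      = MPU_ACCESS_CACHEABLE;      /* C = 1  */
    region.IsBufferable     = MPU_ACCESS_NOT_BUFFERABLE; /* B = 0  */
    region.IsShareable      = MPU_ACCESS_NOT_SHAREABLE;
    region.DisableExec      = MPU_INSTRUCTION_ACCESS_ENABLE;
    region.SubRegionDisable = 0x00;

    HAL_MPU_ConfigRegion(&region);
    HAL_MPU_Enable(MPU_PRIVILEGED_DEFAULT);
}
```

With this attribute, every write to the region updates both the cache and the main memory, exactly as described above.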

5.2 Write Back Cache Policy:

Definition

The write-back policy defers writing modified data in the cache to the main memory until it is absolutely necessary, typically when the cache line is evicted to make room for new data. This means that the write operation updates only the cache initially, and the main memory is updated later.

Characteristics:

  1. Data Consistency: The cache and main memory are not always consistent. The data in the cache can be different from the data in the main memory if the cache has been modified (written to) and not yet written back to the main memory.
  2. Efficiency: Write-back caching can improve system performance because write operations complete quickly by updating only the cache. The main memory write is deferred and may occur less frequently, which reduces memory traffic and saves time.
  3. Use of Dirty Bits: Write-back caches use dirty bits to keep track of which cache lines have been modified. A dirty bit is set when a cache line is written to, indicating that this line needs to be written back to the main memory before it can be evicted.
  4. Cache Line Eviction: When a cache line is evicted (to make space for new data), if the dirty bit is set, the data in the cache line is written back to the main memory to ensure consistency.

Write-Back in ARM Cortex-M7

In the ARM Cortex-M7 processor, the write-back cache policy is implemented for the data cache. Here’s how it typically works:

  1. Write Operation (Cache Hit):
    • When the processor writes data and the data is already in the cache (cache hit), the data is written only to the cache.
    • The dirty bit for that cache line is set to indicate that the data has been modified and is different from the main memory.
  2. Write Operation (Cache Miss):
    • If a write operation causes a cache miss (the data to be written is not in the cache), the relevant block of data is brought into the cache (causing a read miss), and then the write operation updates the cache. The dirty bit is set for the updated cache line.
  3. Cache Line Eviction:
    • When a cache line with its dirty bit set (indicating modified data) needs to be evicted to make room for new data, the modified data is written back to the main memory. The dirty bit is then cleared, and the cache line can be replaced with new data.
  4. Cache Maintenance:
    • Cache maintenance operations, such as cleaning, invalidating, and cleaning and invalidating, are used to manage the cache and ensure data consistency. For example, cleaning the cache writes back all dirty cache lines to the main memory without invalidating them.
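The cache maintenance operations mentioned above are exposed by CMSIS-Core. A small sketch, assuming a buffer that lives in write-back cacheable memory and has just been modified by the CPU:

```c
#include "stm32f7xx_hal.h"   /* pulls in the CMSIS core_cm7.h header */

/* Buffer assumed to live in write-back cacheable memory (placeholder),
 * aligned to the 32-byte cache line size.                              */
static uint32_t tx_buffer[128] __attribute__((aligned(32)));

void flush_tx_buffer(void)
{
    /* With write-back, these writes only update the cache and mark
     * the corresponding lines as dirty.                              */
    for (uint32_t i = 0; i < 128; i++) {
        tx_buffer[i] = i;
    }

    /* Clean: write the dirty lines back to main memory so that other
     * bus masters see the up-to-date data.                            */
    SCB_CleanDCache_by_Addr((uint32_t *)tx_buffer, sizeof(tx_buffer));

    /* Related CMSIS operations:
     * SCB_CleanDCache()              - clean the entire data cache
     * SCB_InvalidateDCache_by_Addr() - discard cached copies of a buffer
     * SCB_CleanInvalidateDCache()    - clean and then invalidate everything */
}
```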

5.3 Write Allocate:

The write allocate policy dictates that when a write miss occurs (i.e., the data to be written is not currently in the cache), the cache line containing the target memory address is loaded into the cache, and then the write operation is performed on the cache.

How It Works:

  1. Write Miss:
    • Cache Miss Occurs: When the processor tries to write to a memory address that is not present in the cache, a cache miss occurs.
    • Load Data into Cache: The cache line that contains the target memory address is fetched from the main memory and loaded into the cache.
    • Write Data to Cache: The write operation is then performed on the newly loaded cache line.
  2. Subsequent Writes: Once the cache line is in the cache, subsequent writes to addresses within that line will be cache hits, allowing for faster write operations.

Write Allocate in ARM Cortex-M7

In the ARM Cortex-M7 processor, the write allocate policy can be configured for the data cache. Here’s how it typically works:

  1. Handling Write Misses:
    • On a write miss, the ARM Cortex-M7 fetches the required cache line from the main memory and loads it into the cache.
    • The processor then performs the write operation on the loaded cache line.
    • Depending on whether the write-back or write-through policy is in use, subsequent writes will either set the dirty bit (write-back) or write to both the cache and main memory (write-through).
  2. Integration with Write-Back:
    • Write-Back + Write Allocate: This combination is common. When a write miss occurs, the line is brought into the cache, written to, and marked as dirty. The write-back policy will later handle writing this modified data back to the main memory.
  3. Integration with Write-Through:
    • Write-Through + Write Allocate: When a write miss occurs, the line is brought into the cache, written to, and the data is simultaneously written to the main memory. This ensures that the cache and main memory are always in sync.
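On STM32 devices, the write-back, read and write allocate combination corresponds to the MPU attribute encoding TEX = 1, C = 1, B = 1. A hedged sketch with the STM32 HAL, again with a placeholder base address and size; only the attribute fields differ from the write-through sketch shown earlier:

```c
#include "stm32f7xx_hal.h"

/* Configure a 64 KB region (placeholder address) as Normal memory,
 * write-back, read and write allocate (TEX = 1, C = 1, B = 1).      */
void mpu_config_write_back_write_allocate(void)
{
    MPU_Region_InitTypeDef region = {0};

    HAL_MPU_Disable();

    region.Enable           = MPU_REGION_ENABLE;
    region.Number           = MPU_REGION_NUMBER1;
    region.BaseAddress      = 0x20000000;               /* placeholder */
    region.Size             = MPU_REGION_SIZE_64KB;
    region.AccessPermission = MPU_REGION_FULL_ACCESS;
    region.TypeExtField     = MPU_TEX_LEVEL1;           /* TEX = 1 */
    region.IsCacheable      = MPU_ACCESS_CACHEABLE;     /* C = 1  */
    region.IsBufferable     = MPU_ACCESS_BUFFERABLE;    /* B = 1  */
    region.IsShareable      = MPU_ACCESS_NOT_SHAREABLE;
    region.DisableExec      = MPU_INSTRUCTION_ACCESS_ENABLE;
    region.SubRegionDisable = 0x00;

    HAL_MPU_ConfigRegion(&region);
    HAL_MPU_Enable(MPU_PRIVILEGED_DEFAULT);
}
```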

5.4 Read Allocate:

The read allocate policy dictates that when a read miss occurs (i.e., the data to be read is not currently in the cache), the cache line containing the target memory address is fetched from the main memory and loaded into the cache. The read operation is then completed using the data in the cache.

How It Works:

  1. Read Miss:
    • Cache Miss Occurs: When the processor tries to read a memory address that is not present in the cache, a cache miss occurs.
    • Load Data into Cache: The entire cache line that contains the target memory address is fetched from the main memory and loaded into the cache.
    • Complete Read Operation: The processor completes the read operation using the data now present in the cache.
  2. Subsequent Reads: Once the cache line is loaded into the cache, subsequent reads to addresses within that line will result in cache hits, providing faster access to the data.

Read Allocate in ARM Cortex-M7

In the ARM Cortex-M7 processor, the read allocate policy is typically used for both the instruction cache and the data cache. Here’s how it typically works:

  1. Handling Read Misses:
    • On a read miss, the ARM Cortex-M7 fetches the required cache line from the main memory and loads it into the cache.
    • The read operation is then completed using the loaded cache line.
    • Subsequent reads to the same or nearby addresses within that cache line will be cache hits, improving read access times.
  2. Integration with Cache Policies:
    • Read-Allocate + Write-Back: When a read miss occurs, the line is brought into the cache. Subsequent writes to this line will mark it as dirty, and the write-back policy will later handle writing this modified data back to the main memory.
    • Read-Allocate + Write-Through: When a read miss occurs, the line is brought into the cache. Subsequent writes will update both the cache and the main memory, ensuring consistency between the two.
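Read allocation happens automatically for Normal cacheable memory once the caches are enabled. A minimal sketch using the standard CMSIS-Core functions:

```c
#include "stm32f7xx_hal.h"   /* includes core_cm7.h */

void enable_caches(void)
{
    /* Enable the instruction cache: instruction fetch misses will
     * read-allocate lines in the I-cache.                          */
    SCB_EnableICache();

    /* Enable the data cache: data read misses to cacheable memory
     * will read-allocate lines in the D-cache.                     */
    SCB_EnableDCache();
}
```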

6. STM32 Cache Policies and Memory Attributes:

STM32F7/H7 devices (Cortex-M7 based) support the following cache policies for Normal memory:

  • Non-cacheable.
  • Write-through, read allocate, no write allocate.
  • Write-back, read allocate, no write allocate.
  • Write-back, read and write allocate.

The policy of each memory region is selected through the MPU memory attributes (the TEX, C and B bits), as follows:

  • TEX = 000, C = 0, B = 0: Strongly-ordered memory (not cacheable).
  • TEX = 000, C = 0, B = 1: Shared device memory (not cacheable).
  • TEX = 000, C = 1, B = 0: Normal memory, write-through, no write allocate.
  • TEX = 000, C = 1, B = 1: Normal memory, write-back, no write allocate.
  • TEX = 001, C = 0, B = 0: Normal memory, non-cacheable.
  • TEX = 001, C = 1, B = 1: Normal memory, write-back, read and write allocate.
  • TEX = 010, C = 0, B = 0: Non-shareable device memory (not cacheable).
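Putting the pieces together, a typical initialization order (a sketch, reusing the hypothetical helper from the earlier example) is to configure the MPU attributes first and enable the caches afterwards, so that no cache line is allocated with the wrong policy:

```c
#include "stm32f7xx_hal.h"

/* mpu_config_write_back_write_allocate() is the sketch shown in
 * section 5.3; it configures the MPU region and re-enables the MPU. */
void memory_setup(void)
{
    mpu_config_write_back_write_allocate();

    /* Enable the caches only after the MPU attributes are in place. */
    SCB_EnableICache();
    SCB_EnableDCache();
}
```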

7. Cache Coherency:

Cache coherency issues arise when the cache and the main memory hold different copies of the same data, for example when a DMA controller or another bus master accesses the main memory directly while the CPU works on a cached copy. In part 3, we shall deal with these issues.

Stay tuned.

Happy coding 😉
