Cached running code is never modified, so it never needs to be written out of the cache.
 
When to refresh main memory
  When data is changed in cache,
    it needs to be mirrored in main memory.

  Two policies.
    Write-through
      Change in cached data immediately copied to main memory.

      Easy to implement.

      Requires more main memory accesses (defeats the benefit of caching
        for writes).

      Creates fewer problems for Virtual Memory.

      Line never stays 'dirty', so can be overwritten at any time 
        without delay.
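The write-through behavior above can be sketched as a small simulation. This is a hypothetical model (the class and dictionary layout are invented for illustration), not real hardware: every write updates both the cached copy and main memory, so eviction never has anything to save.

```python
# Minimal write-through sketch (hypothetical model, not real hardware).
class WriteThroughCache:
    def __init__(self, memory):
        self.memory = memory      # backing store: address -> value
        self.lines = {}           # cached copies: address -> value

    def write(self, addr, value):
        self.lines[addr] = value  # update the cached copy
        self.memory[addr] = value # immediately mirrored to main memory

    def evict(self, addr):
        # A line is never dirty, so eviction is just a discard.
        self.lines.pop(addr, None)

memory = {0x10: 0}
cache = WriteThroughCache(memory)
cache.write(0x10, 42)
cache.evict(0x10)
print(memory[0x10])   # prints 42: memory already held the data before eviction
```

Note that every write costs a memory access, which is the drawback mentioned above.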

    Write-back (also called copy-back)
      Write data to memory only when the line must be evicted to make room
        for a line from a different memory segment.

      Needs dirty flag. 

      Unpredictable delays - 
        Interrupts and virtual memory management (paging) can be external 
        triggers to write-back.

      Harder to implement on multi-core CPUs and systems using virtual memory.
        A line that is dirty in one core's cache makes any copy of it in
        another core's cache stale.
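The dirty-flag mechanism can be sketched the same way. Again a hypothetical model, invented for illustration: writes only touch the cached line and set its dirty flag; main memory is updated only when the line is evicted.

```python
# Minimal write-back sketch (hypothetical model, not real hardware).
class WriteBackCache:
    def __init__(self, memory):
        self.memory = memory         # backing store: address -> value
        self.lines = {}              # cached lines: address -> (value, dirty)

    def write(self, addr, value):
        self.lines[addr] = (value, True)   # dirty the line; no memory access

    def evict(self, addr):
        value, dirty = self.lines.pop(addr)
        if dirty:                    # the dirty flag decides whether to write back
            self.memory[addr] = value

memory = {0x20: 0}
cache = WriteBackCache(memory)
cache.write(0x20, 7)
print(memory[0x20])   # prints 0: memory is stale until the line is evicted
cache.evict(0x20)
print(memory[0x20])   # prints 7: the dirty line was written back on eviction
```

The window between the two prints is where the unpredictable delays come from: an interrupt or page-out can force the eviction at an inconvenient time.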
       
    Some designs introduce write queues (buffers) between CPU and memory;
      data is written to the queue but not actually written to memory until
      convenient. Improves performance, but may leave data at risk.
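A write queue of this kind can be sketched as follows. This is a hypothetical illustration (the class and drain policy are invented): the CPU-side write returns immediately, and the queued data only reaches memory when the buffer is drained, which is exactly the window of risk mentioned above.

```python
# Write-buffer sketch (hypothetical model): writes queue up and are
# drained to memory later, "when convenient".
from collections import deque

class WriteBuffer:
    def __init__(self, memory):
        self.memory = memory
        self.queue = deque()               # pending (addr, value) writes

    def write(self, addr, value):
        self.queue.append((addr, value))   # returns immediately; memory untouched

    def drain(self):
        while self.queue:                  # flush in FIFO order
            addr, value = self.queue.popleft()
            self.memory[addr] = value

memory = {}
buf = WriteBuffer(memory)
buf.write(0x30, 1)
buf.write(0x30, 2)    # a later write to the same address supersedes the first
buf.drain()
print(memory[0x30])   # prints 2
```

Until drain() runs, the data exists only in the buffer; a power loss at that point would lose it.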

  Policies for dealing with memory writes when line not in cache.
     Write-around or no-write-allocate, usually paired with write-through.
       If line not already cached,
         write goes directly to memory, skipping the cache.

     Write-allocate usually paired with write-back.
       Memory to be written is first read into cache and then follows 
         write-back policy.
         # assumes newly written data will be referenced shortly.
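The two write-miss policies above can be contrasted in a short sketch. Both functions are hypothetical illustrations: write-around sends a miss straight to memory, while write-allocate first fetches the line into the cache and then writes only the cached copy, leaving it dirty in write-back style.

```python
# Write-miss policy sketch (hypothetical models, not real hardware).
def write_around(cache, memory, addr, value):
    if addr in cache:
        cache[addr] = value          # hit: update the cached copy too
    memory[addr] = value             # memory is always updated (write-through)

def write_allocate(cache, memory, addr, value):
    if addr not in cache:
        cache[addr] = memory.get(addr)   # miss: read the line into cache first
    cache[addr] = value              # then write only the cached copy (dirty)

mem1, c1 = {0x40: 0}, {}
write_around(c1, mem1, 0x40, 5)
print(0x40 in c1, mem1[0x40])    # prints False 5: cache skipped, memory updated

mem2, c2 = {0x40: 0}, {}
write_allocate(c2, mem2, 0x40, 5)
print(c2[0x40], mem2[0x40])      # prints 5 0: cached copy new, memory stale
```

Write-allocate pays the cost of the initial read on the bet that the newly written data will be referenced again shortly, as the note above says.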

  Related - in multi-tiered cache (L1, L2, L3).
    Is line cached inclusively in all cache levels 
      or exclusively in only one? 

    If exclusive, total cache is cumulative.
      But multiple levels may need to be searched, like set-associativity
        extended into a third dimension.

    If inclusive, write-back and write-through have to update all copies of
      the line.

      Systems with large L2/L3 caches tend to favor inclusive.
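The capacity trade-off can be shown with a little arithmetic. The sizes below are made-up illustrative numbers, not from any real CPU: with exclusive caching a line lives at only one level, so usable capacity is the sum; with inclusive caching, L1 and L2 contents are duplicated inside L3, so usable capacity is only the largest level.

```python
# Capacity comparison (sizes in KiB are made-up illustrative numbers).
L1, L2, L3 = 64, 512, 8192

exclusive_capacity = L1 + L2 + L3   # each level holds distinct lines
inclusive_capacity = L3             # L1/L2 contents are copies inside L3

print(exclusive_capacity, inclusive_capacity)   # prints 8768 8192
```

The difference shrinks as L3 grows relative to L1/L2, which is one reason large-L3 designs can afford inclusion.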