
What are cached data and caching?

A computer, alas, does not instantly execute the commands it receives from people. To speed things up, a number of tricks are applied, and an honorable place among them belongs to caching. What is it? What is cached data? How does the process actually work? What is cached data in a Samsung smartphone, for example, and does it differ from what is in a computer? Let's get to the answers to these questions.

What is a cache?

This is the name for an intermediate buffer that provides quick access to the information most likely to be requested. Copies of that data are kept in it. An important advantage is that the necessary information can be retrieved from the cache much more quickly than from the original storage. There is a significant drawback, though: size. Cached data is used by browsers, hard drives, CPUs, web servers, and WINS and DNS services. The structure is based on sets of records. Each record is associated with a particular element or block of data, which is a copy of what sits in the main memory. Every record has an identifier (tag), which is used to determine the correspondence. Let's look at this from a slightly different angle: what is the cached data in a Samsung phone or a phone from another manufacturer? Is it different from what is created in a computer? In principle, no; the difference is only in the amount.
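To make the idea of records and tags a bit more tangible, here is a minimal sketch in Python (the class name, field names, and tiny capacity are all invented for illustration, not taken from any real cache):

    # A minimal, illustrative cache: each record pairs a tag (the identifier)
    # with a copy of the data block held in main memory.
    class Cache:
        def __init__(self, capacity=4):
            self.capacity = capacity          # the drawback: size is limited
            self.records = {}                 # tag -> copy of the data block

        def lookup(self, tag):
            # Return the cached copy if the tag matches a record, else None.
            return self.records.get(tag)

        def store(self, tag, block):
            # Keep a copy of the block under its tag (eviction ignored here).
            if len(self.records) < self.capacity:
                self.records[tag] = block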

How the cache is used

When a client (one of those listed above) requests data, the first thing the computer does is check the cache. If the necessary record is there, it is used; this is called a hit. Periodically, data from the cache is copied back to the main memory. If the desired record is not found, the content is looked up in the base repository. The information retrieved there is also placed in the cache, so that it can be accessed more quickly next time. The percentage of requests that are satisfied from the cache is called the hit rate or hit ratio.
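A rough sketch of that flow, with the slow original storage modeled as a plain dictionary (the names and data here are made up for the example):

    # Try the cache first, fall back to the slower backing store on a miss,
    # and keep track of the hit ratio.
    backing_store = {"page:1": "contents of page 1"}   # stands in for slow storage
    cache = {}
    hits = requests = 0

    def read(key):
        global hits, requests
        requests += 1
        if key in cache:                # hit: serve the fast copy
            hits += 1
            return cache[key]
        value = backing_store[key]      # miss: go to the original storage
        cache[key] = value              # keep a copy for next time
        return value

    read("page:1")
    read("page:1")
    print("hit ratio:", hits / requests)   # 0.5 after one miss and one hit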

Updating data

When using, say, a web browser, the local cache is checked for a copy of the requested page. Because this kind of memory is limited, on a miss a decision has to be made about which information to discard to free up space. Various eviction algorithms are used to decide exactly what will be replaced. Incidentally, if we talk about what cached data is on Android, it is mostly used for images and application data.
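One of the most widely used eviction algorithms is "least recently used" (LRU): when space is needed, the entry that has gone the longest without being accessed is discarded. A minimal sketch, with a deliberately tiny capacity chosen for the example:

    from collections import OrderedDict

    # Least-recently-used eviction: when the cache is full, drop the entry
    # that has gone the longest without being accessed.
    class LRUCache:
        def __init__(self, capacity=3):
            self.capacity = capacity
            self.entries = OrderedDict()

        def get(self, key):
            if key not in self.entries:
                return None                        # miss
            self.entries.move_to_end(key)          # mark as recently used
            return self.entries[key]

        def put(self, key, value):
            if key in self.entries:
                self.entries.move_to_end(key)
            self.entries[key] = value
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)   # evict the oldest entry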

Write policy

When the contents of the cache are modified, the data in the main memory is updated as well. The delay between the two writes depends on the write policy. There are two main types (both are sketched after the list):

  1. Write-through (immediate writing). Each change is written to the main memory synchronously.
  2. Write-back (delayed writing). Data is written back periodically or on request from the client. To track whether an entry has been changed, a flag with two states is used: "dirty" (changed) or clean. On a miss, two accesses to the main memory may be needed: the first writes back the changed data from the cache, the second reads the required element.
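Here is a minimal sketch of both policies side by side, with main memory again modeled as a plain dictionary (an assumption made purely for illustration):

    # Write-through: every write goes to the cache and main memory at once.
    # Write-back: writes stay in the cache, marked "dirty", until flushed.
    main_memory = {}

    class WriteThroughCache:
        def __init__(self):
            self.data = {}

        def write(self, key, value):
            self.data[key] = value
            main_memory[key] = value      # synchronous update of main memory

    class WriteBackCache:
        def __init__(self):
            self.data = {}
            self.dirty = set()            # keys changed but not yet written back

        def write(self, key, value):
            self.data[key] = value
            self.dirty.add(key)           # defer the write to main memory

        def flush(self):
            for key in self.dirty:
                main_memory[key] = self.data[key]
            self.dirty.clear()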

It can also happen that the information in the intermediate buffer becomes stale. This occurs when the data in the main memory is changed without any adjustment being made to the cache. To keep all of these updates consistent, coherence protocols are used.

Modern challenges

As processor frequencies and RAM performance have grown, a new bottleneck has appeared: the limited bandwidth of the data-transfer interface. What would a knowledgeable person notice here? Cache memory is genuinely useful when RAM runs at a lower frequency than the processor. Many processors therefore carry their own intermediate buffer to reduce access time to RAM, which is slower than the registers. CPUs that support virtual addressing often also include a small but very fast address translation buffer. In other cases the cache is not very useful and sometimes only creates problems (though usually in computers that have been modified by a non-professional). Incidentally, when talking about what cached data is in a smartphone's memory, it should be noted that the small size of the device calls for new, miniature implementations of the cache. Some phones now boast specifications like those of advanced computers of ten years ago, and what a difference in size!

Synchronizing data between different buffers

A cache is useful when there is just one, but how do you keep this technology effective when there are many of them? This problem is solved by buffer coherence. There are three options for how the caches share data:

  1. Inclusive. The lower-level cache also holds copies of everything stored in the levels above it.
  2. Exclusive. Each block of data is held in at most one of the caches, so nothing is duplicated.
  3. Non-exclusive. No guarantee is given either way; this is the most widely used scheme.

Levels of caching

There are usually three or four of them. The higher the level, the larger and the slower the memory:

  1. L1 cache. The first level is the fastest. It is effectively part of the processor: it sits on the same chip and belongs to the functional blocks. It is usually split in two, an instruction cache and a data cache. Most modern processors do not work without this level. This cache runs at the processor frequency, so it can be accessed every cycle.
  2. L2 cache. It is usually located together with the previous one. It is a shared memory: to find out how much falls to each core, divide the total volume allotted for data caching by the number of cores in the processor (a worked example follows the list).
  3. L3 cache. The slowest but largest cache; in large processors it can exceed 24 MB. It is used to synchronize data coming from the different L2 caches.
  4. L4 cache. Its use is justified only for high-performance multiprocessor mainframes and servers. It is implemented as a separate chip. If you are wondering about cached data in a Samsung smartphone and are looking for it at this level, you are getting ahead of yourself by a good five years.
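As a rough illustration of the per-core calculation mentioned for the L2 level, with figures that are purely hypothetical rather than taken from any particular processor:

    # Hypothetical example: a quad-core processor advertising 4 MB of L2
    # cache distributed across its cores.
    total_l2_kb = 4 * 1024        # 4 MB expressed in KB
    cores = 4
    print(total_l2_kb // cores, "KB of L2 per core")   # 1024 KB per core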

Cache associativity

This is a fundamental characteristic. Associativity describes how logical segments of memory are mapped onto the cache. Such a mapping is needed because sequentially searching through all the available lines would take dozens of cycles and nullify the cache's advantage. Therefore, RAM cells are rigidly bound to particular cache locations to shorten the search. If we compare intermediate buffers of the same volume but different associativity, the one with the higher associativity will work more slowly, but with noticeably better efficiency.
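A minimal sketch of such a rigid binding, assuming the simplest case of a direct-mapped cache (associativity of one) with an invented line size and line count:

    # Direct-mapped placement: every memory address is rigidly bound to exactly
    # one cache line, so a lookup checks a single tag instead of all lines.
    LINE_SIZE = 64          # bytes per cache line (assumed)
    NUM_LINES = 256         # number of lines in the cache (assumed)

    def cache_line_for(address):
        block_number = address // LINE_SIZE   # which memory block this byte is in
        index = block_number % NUM_LINES      # the single line that may hold it
        tag = block_number // NUM_LINES       # identifier stored to confirm a hit
        return index, tag

    print(cache_line_for(0x1A2B3C))   # prints (172, 104)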

Conclusion

As you can see, under the right conditions cached data allows your computer to work noticeably faster. But, alas, there are still quite a few aspects that could be worked on for a long time yet.
