Peer-Reviewed

An Overview of Cache Memory in Memory Management

Received: 14 July 2020    Accepted: 7 August 2020    Published: 30 October 2020
Abstract

Cache memory is used in small, medium, and high-speed Central Processing Units (CPUs) to hold temporarily those contents of main memory that are currently in use. Ideally, caches should have low miss rates, short access times, and low power consumption all at once, but in practice these design objectives frequently conflict. Today there is also a security concern: cache information leakage enables efficient attacks on data held in memory, and cache designs hardened for security are even more constrained and typically incur a significant cache performance cost. Fault tolerance is a further benefit that the cache architecture can guarantee. Meanwhile, the performance gap between processors and main memory continues to widen, so increasingly aggressive cache memory implementations are needed to bridge it. The objective of this paper is to make cache memory predictable as seen by the processor, so that it can be used in hard real-time systems. To achieve this, we consider some of the issues involved in implementing highly optimized cache memories and survey the techniques that can be used to help meet increasingly stringent design targets.
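To make the miss-rate trade-off above concrete, the following sketch (ours, not taken from the paper) models a set-associative cache with least-recently-used (LRU) replacement and reports the miss rate of a simple address stream. All parameters here, including the 64 sets, 4 ways, 32-byte lines, and the sequential access pattern, are illustrative assumptions rather than values from the article.

from collections import OrderedDict

class SetAssociativeCache:
    """Minimal set-associative cache model with LRU replacement (illustrative)."""

    def __init__(self, num_sets, ways, line_size):
        self.num_sets, self.ways, self.line_size = num_sets, ways, line_size
        self.sets = [OrderedDict() for _ in range(num_sets)]  # resident tags per set, LRU-ordered
        self.hits = self.misses = 0

    def access(self, address):
        block = address // self.line_size   # memory block holding this byte
        index = block % self.num_sets       # cache set the block maps to
        tag = block // self.num_sets        # tag identifying the block within its set
        ways = self.sets[index]
        if tag in ways:
            self.hits += 1
            ways.move_to_end(tag)           # mark as most recently used
        else:
            self.misses += 1
            if len(ways) >= self.ways:
                ways.popitem(last=False)    # evict the least recently used block
            ways[tag] = True

    def miss_rate(self):
        total = self.hits + self.misses
        return self.misses / total if total else 0.0

if __name__ == "__main__":
    cache = SetAssociativeCache(num_sets=64, ways=4, line_size=32)
    for addr in range(0, 16 * 1024, 4):     # sequential 4-byte reads over 16 KiB
        cache.access(addr)
    print(f"miss rate: {cache.miss_rate():.3f}")  # 1 miss per 32-byte line -> 0.125

Because the stream touches each 32-byte line with eight consecutive 4-byte reads, the model reports a miss rate near 1/8; varying the associativity, line size, or access pattern shows how quickly the design objectives named in the abstract begin to conflict.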

Published in Automation, Control and Intelligent Systems (Volume 8, Issue 3)
DOI 10.11648/j.acis.20200803.11
Page(s) 24-28
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2020. Published by Science Publishing Group

Keywords

Cache Memory, Central Processing Unit, Main Memory, Processor

Cite This Article
  • APA Style

    Ademodi Oluwatosin Abayomi, Ajayi Abayomi Olukayode, Green Oluwole Olakunle. (2020). An Overview of Cache Memory in Memory Management. Automation, Control and Intelligent Systems, 8(3), 24-28. https://doi.org/10.11648/j.acis.20200803.11


    ACS Style

    Ademodi Oluwatosin Abayomi; Ajayi Abayomi Olukayode; Green Oluwole Olakunle. An Overview of Cache Memory in Memory Management. Autom. Control Intell. Syst. 2020, 8(3), 24-28. doi: 10.11648/j.acis.20200803.11


    AMA Style

    Ademodi Oluwatosin Abayomi, Ajayi Abayomi Olukayode, Green Oluwole Olakunle. An Overview of Cache Memory in Memory Management. Autom Control Intell Syst. 2020;8(3):24-28. doi: 10.11648/j.acis.20200803.11


  • BibTeX

    @article{10.11648/j.acis.20200803.11,
      author = {Ademodi Oluwatosin Abayomi and Ajayi Abayomi Olukayode and Green Oluwole Olakunle},
      title = {An Overview of Cache Memory in Memory Management},
      journal = {Automation, Control and Intelligent Systems},
      volume = {8},
      number = {3},
      pages = {24-28},
      doi = {10.11648/j.acis.20200803.11},
      url = {https://doi.org/10.11648/j.acis.20200803.11},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.acis.20200803.11},
      year = {2020}
    }
    


  • RIS

    TY  - JOUR
    T1  - An Overview of Cache Memory in Memory Management
    AU  - Ademodi Oluwatosin Abayomi
    AU  - Ajayi Abayomi Olukayode
    AU  - Green Oluwole Olakunle
    Y1  - 2020/10/30
    PY  - 2020
    N1  - https://doi.org/10.11648/j.acis.20200803.11
    DO  - 10.11648/j.acis.20200803.11
    T2  - Automation, Control and Intelligent Systems
    JF  - Automation, Control and Intelligent Systems
    JO  - Automation, Control and Intelligent Systems
    SP  - 24
    EP  - 28
    PB  - Science Publishing Group
    SN  - 2328-5591
    UR  - https://doi.org/10.11648/j.acis.20200803.11
    VL  - 8
    IS  - 3
    ER  - 


Author Information
  • Ademodi Oluwatosin Abayomi, Computer Engineering Department, School of Engineering, Lagos State Polytechnics, Ikorodu, Lagos, Nigeria

  • Ajayi Abayomi Olukayode, Computer Engineering Department, School of Engineering, Lagos State Polytechnics, Ikorodu, Lagos, Nigeria

  • Green Oluwole Olakunle, Computer Engineering Department, School of Engineering, Lagos State Polytechnics, Ikorodu, Lagos, Nigeria
