An Overview of Cache Memory in Memory Management
Automation, Control and Intelligent Systems
Volume 8, Issue 3, June 2020, Pages: 24-28
Received: Jul. 14, 2020; Accepted: Aug. 7, 2020; Published: Oct. 30, 2020
Authors
Ademodi Oluwatosin Abayomi, Computer Engineering Department, School of Engineering, Lagos State Polytechnics, Ikorodu, Lagos, Nigeria
Ajayi Abayomi Olukayode, Computer Engineering Department, School of Engineering, Lagos State Polytechnics, Ikorodu, Lagos, Nigeria
Green Oluwole Olakunle, Computer Engineering Department, School of Engineering, Lagos State Polytechnics, Ikorodu, Lagos, Nigeria
Abstract
Cache memory is used in small, medium, and high-speed Central Processing Units (CPUs) to hold temporarily those contents of main memory that are currently in use. Ideally, caches should have low miss rates and short access times while remaining power efficient; in practice, these design objectives often conflict. Security is a further concern: information leakage from caches is exploited by attacks on data resident in memory, and designing caches for security imposes additional constraints that typically degrade cache performance. Fault tolerance is an additional benefit that the cache architecture can provide. Because the performance gap between processors and main memory continues to widen, increasingly aggressive cache memory implementations are needed to bridge it. The objective of this paper is to make cache memory predictable as seen by the processor, so that it can be used in hard real-time systems. To that end, we consider some of the issues involved in implementing highly optimized cache memories and survey techniques that can help achieve the increasingly stringent design targets.
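The cache behavior the abstract describes (holding recently used main-memory contents, and the trade-off captured by the miss rate) can be illustrated with a minimal sketch. This simulator, its class name, and its parameters (`num_lines`, `block_size`) are illustrative assumptions for this overview, not code from the article:

```python
# Minimal direct-mapped cache simulator (illustrative sketch).
# It counts hits and misses for a stream of byte addresses.

class DirectMappedCache:
    def __init__(self, num_lines=64, block_size=16):
        self.num_lines = num_lines
        self.block_size = block_size
        self.tags = [None] * num_lines   # one tag per cache line
        self.hits = 0
        self.misses = 0

    def access(self, address):
        block = address // self.block_size   # which memory block holds this byte
        index = block % self.num_lines       # the single line this block maps to
        tag = block // self.num_lines        # identifies which block occupies the line
        if self.tags[index] == tag:
            self.hits += 1                   # line already holds the block: hit
        else:
            self.misses += 1                 # miss: fetch block, evict previous tag
            self.tags[index] = tag

    def miss_rate(self):
        total = self.hits + self.misses
        return self.misses / total if total else 0.0


cache = DirectMappedCache()
# Sequential accesses have spatial locality: after one miss per 16-byte
# block, the remaining 15 bytes of the block hit.
for addr in range(1024):
    cache.access(addr)
print(f"miss rate: {cache.miss_rate():.3f}")
```

For this sequential sweep, each of the 64 distinct blocks misses exactly once (64 misses in 1024 accesses, a 6.25% miss rate), which is why block size and access pattern, not just cache size, determine how well a cache hides main-memory latency.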
Keywords
Cache Memory, Central Processing Unit, Main Memory, Processor
To cite this article
Ademodi Oluwatosin Abayomi, Ajayi Abayomi Olukayode, Green Oluwole Olakunle, An Overview of Cache Memory in Memory Management, Automation, Control and Intelligent Systems. Vol. 8, No. 3, 2020, pp. 24-28. doi: 10.11648/j.acis.20200803.11
Copyright
Copyright © 2020 Authors retain the copyright of this article.
This article is an open access article distributed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/) which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
References
[1]
Kumar, M. A., & Francis, G. A. (2017, February). Survey on various advanced techniques for cache optimization methods for RISC based system architecture. In 2017 4th International Conference on Electronics and Communication Systems (ICECS) (pp. 195-200). IEEE.
[2]
Nagasako, Y., & Yamaguchi, S. (2011, March). A server cache size aware cache replacement algorithm for block level network storage. In 2011 Tenth International Symposium on Autonomous Decentralized Systems (pp. 573-576). IEEE.
[3]
Chang, M. T., Rosenfeld, P., Lu, S. L., & Jacob, B. (2013, February). Technology comparison for large last-level caches (L3Cs): Low-leakage SRAM, low write-energy STT-RAM, and refresh-optimized eDRAM. In 2013 IEEE 19th International Symposium on High Performance Computer Architecture (HPCA) (pp. 143-154). IEEE.
[4]
Bao, W., Krishnamoorthy, S., Pouchet, L. N., & Sadayappan, P. (2017). Analytical modeling of cache behavior for affine programs. Proceedings of the ACM on Programming Languages, 2 (POPL), 1-26.
[5]
Kosmidis, L., Abella, J., Quiñones, E., & Cazorla, F. J. (2013, March). A cache design for probabilistically analysable real-time systems. In 2013 Design, Automation & Test in Europe Conference & Exhibition (DATE) (pp. 513-518). IEEE.
[6]
Wang, Y., Ferraiuolo, A., Zhang, D., Myers, A. C., & Suh, G. E. (2016, June). SecDCP: secure dynamic cache partitioning for efficient timing channel protection. In Proceedings of the 53rd Annual Design Automation Conference (pp. 1-6).
[7]
Zhang, M., Zhuo, Y., Wang, C., Gao, M., Wu, Y., Chen, K., ... & Qian, X. (2018, February). GraphP: Reducing communication for PIM-based graph processing with efficient data partition. In 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA) (pp. 544-557). IEEE.
[8]
Barroso, L. A., Gharachorloo, K., McNamara, R., Nowatzyk, A., Qadeer, S., Sano, B., ... & Verghese, B. (2000). Piranha: A scalable architecture based on single-chip multiprocessing. ACM SIGARCH Computer Architecture News, 28 (2), 282-293.
[9]
Chetlur, S., & Catanzaro, B. (2019). U.S. Patent No. 10,223,333. Washington, DC: U.S. Patent and Trademark Office.
[10]
LiKamWa, R., & Zhong, L. (2015, May). Starfish: Efficient concurrency support for computer vision applications. In Proceedings of the 13th Annual International Conference on Mobile Systems, Applications, and Services (pp. 213-226).
[11]
Dayalan, K., Ozsoy, M., & Ponomarev, D. (2014, October). Dynamic associative caches: Reducing dynamic energy of first level caches. In 2014 IEEE 32nd International Conference on Computer Design (ICCD) (pp. 118-124). IEEE.
[12]
Mittal, S. (2017). A survey of techniques for cache partitioning in multicore processors. ACM Computing Surveys (CSUR), 50 (2), 1-39.