
    • Computer Architecture: A Quantitative Approach (English reprint of the 6th edition) / Classic Original Books Series
      • Authors: John L. Hennessy and David A. Patterson (USA)
      • Publisher: China Machine Press
      • ISBN: 9787111631101
      • Publication date: 2019/07/01
      • Pages: 932
    • Price: 107.6
  • Synopsis

        For more than 20 years, this book has been essential reading for instructors, students, and architects in the field of computing. Its authors, Hennessy and Patterson, received the 2017 Turing Award in recognition of their lasting and significant technical contributions to the field. The 6th edition has been thoroughly revised to reflect the latest developments in processor and system architecture. This edition adopts the RISC-V instruction set architecture, a modern RISC instruction set designed to be a free and openly adoptable standard. It also adds a new chapter on domain-specific architectures and updates the chapter on warehouse-scale computing, which describes Google's newest WSC. Like its predecessors, this edition aims to demystify computer architecture, highlighting exciting technical innovations while emphasizing sound engineering design.
  • 目录

    Chapter 1  Fundamentals of Quantitative Design and Analysis
      1.1  Introduction
      1.2  Classes of Computers
      1.3  Defining Computer Architecture
      1.4  Trends in Technology
      1.5  Trends in Power and Energy in Integrated Circuits
      1.6  Trends in Cost
      1.7  Dependability
      1.8  Measuring, Reporting, and Summarizing Performance
      1.9  Quantitative Principles of Computer Design
      1.10  Putting It All Together: Performance, Price, and Power
      1.11  Fallacies and Pitfalls
      1.12  Concluding Remarks
      1.13  Historical Perspectives and References
      Case Studies and Exercises by Diana Franklin
    Chapter 2  Memory Hierarchy Design
      2.1  Introduction
      2.2  Memory Technology and Optimizations
      2.3  Ten Advanced Optimizations of Cache Performance
      2.4  Virtual Memory and Virtual Machines
      2.5  Cross-Cutting Issues: The Design of Memory Hierarchies
      2.6  Putting It All Together: Memory Hierarchies in the ARM Cortex-A53 and Intel Core i7 6700
      2.7  Fallacies and Pitfalls
      2.8  Concluding Remarks: Looking Ahead
      2.9  Historical Perspectives and References
      Case Studies and Exercises by Norman P. Jouppi, Rajeev Balasubramonian, Naveen Muralimanohar, and Sheng Li
    Chapter 3  Instruction-Level Parallelism and Its Exploitation
      3.1  Instruction-Level Parallelism: Concepts and Challenges
      3.2  Basic Compiler Techniques for Exposing ILP
      3.3  Reducing Branch Costs With Advanced Branch Prediction
      3.4  Overcoming Data Hazards With Dynamic Scheduling
      3.5  Dynamic Scheduling: Examples and the Algorithm
      3.6  Hardware-Based Speculation
      3.7  Exploiting ILP Using Multiple Issue and Static Scheduling
      3.8  Exploiting ILP Using Dynamic Scheduling, Multiple Issue, and Speculation
      3.9  Advanced Techniques for Instruction Delivery and Speculation
      3.10  Cross-Cutting Issues
      3.11  Multithreading: Exploiting Thread-Level Parallelism to Improve Uniprocessor Throughput
      3.12  Putting It All Together: The Intel Core i7 6700 and ARM Cortex-A53
      3.13  Fallacies and Pitfalls
      3.14  Concluding Remarks: What’s Ahead?
      3.15  Historical Perspective and References
      Case Studies and Exercises by Jason D. Bakos and Robert P. Colwell
    Chapter 4  Data-Level Parallelism in Vector, SIMD, and GPU Architectures
      4.1  Introduction
      4.2  Vector Architecture
      4.3  SIMD Instruction Set Extensions for Multimedia
      4.4  Graphics Processing Units
      4.5  Detecting and Enhancing Loop-Level Parallelism
      4.6  Cross-Cutting Issues
      4.7  Putting It All Together: Embedded Versus Server GPUs and Tesla Versus Core i7
      4.8  Fallacies and Pitfalls
      4.9  Concluding Remarks
      4.10  Historical Perspective and References
      Case Study and Exercises by Jason D. Bakos
    Chapter 5  Thread-Level Parallelism
      5.1  Introduction
      5.2  Centralized Shared-Memory Architectures
      5.3  Performance of Symmetric Shared-Memory Multiprocessors
      5.4  Distributed Shared-Memory and Directory-Based Coherence
      5.5  Synchronization: The Basics
      5.6  Models of Memory Consistency: An Introduction
      5.7  Cross-Cutting Issues
      5.8  Putting It All Together: Multicore Processors and Their Performance
      5.9  Fallacies and Pitfalls
      5.10  The Future of Multicore Scaling
      5.11  Concluding Remarks
      5.12  Historical Perspectives and References
      Case Studies and Exercises by Amr Zaky and David A. Wood
    Chapter 6  Warehouse-Scale Computers to Exploit Request-Level and Data-Level Parallelism
      6.1  Introduction
      6.2  Programming Models and Workloads for Warehouse-Scale Computers
      6.3  Computer Architecture of Warehouse-Scale Computers
      6.4  The Efficiency and Cost of Warehouse-Scale Computers
      6.5  Cloud Computing: The Return of Utility Computing
      6.6  Cross-Cutting Issues
      6.7  Putting It All Together: A Google Warehouse-Scale Computer
      6.8  Fallacies and Pitfalls
      6.9  Concluding Remarks
      6.10  Historical Perspectives and References
      Case Studies and Exercises by Parthasarathy Ranganathan
    Chapter 7  Domain-Specific Architectures
      7.1  Introduction
      7.2  Guidelines for DSAs
      7.3  Example Domain: Deep Neural Networks
      7.4  Google’s Tensor Processing Unit, an Inference Data Center Accelerator
      7.5  Microsoft Catapult, a Flexible Data Center Accelerator
      7.6  Intel Crest, a Data Center Accelerator for Training
      7.7  Pixel Visual Core, a Personal Mobile Device Image Processing Unit
      7.8  Cross-Cutting Issues
      7.9  Putting It All Together: CPUs Versus GPUs Versus DNN Accelerators
      7.10  Fallacies and Pitfalls
      7.11  Concluding Remarks
      7.12  Historical Perspectives and References
      Case Studies and Exercises by Cliff Young
    Appendix A  Instruction Set Principles
      A.1  Introduction
      A.2  Classifying Instruction Set Architectures
      A.3  Memory Addressing
      A.4  Type and Size of Operands
      A.5  Operations in the Instruction Set
      A.6  Instructions for Control Flow
      A.7  Encoding an Instruction Set
      A.8  Cross-Cutting Issues: The Role of Compilers
      A.9  Putting It All Together: The RISC-V Architecture
      A.10  Fallacies and Pitfalls
      A.11  Concluding Remarks
      A.12  Historical Perspective and References
      Exercises by Gregory D. Peterson
    Appendix B  Review of Memory Hierarchy
      B.1  Introduction
      B.2  Cache Performance
      B.3  Six Basic Cache Optimizations
      B.4  Virtual Memory
      B.5  Protection and Examples of Virtual Memory
      B.6  Fallacies and Pitfalls
      B.7  Concluding Remarks
      B.8  Historical Perspective and References
      Exercises by Amr Zaky
    Appendix C  Pipelining: Basic and Intermediate Concepts
      C.1  Introduction
      C.2  The Major Hurdle of Pipelining—Pipeline Hazards
      C.3  How Is Pipelining Implemented?
      C.4  What Makes Pipelining Hard to Implement?
      C.5  Extending the RISC-V Integer Pipeline to Handle Multicycle Operations
      C.6  Putting It All Together: The MIPS R4000 Pipeline
      C.7  Cross-Cutting Issues
      C.8  Fallacies and Pitfalls
      C.9  Concluding Remarks
      C.10  Historical Perspective and References
      Updated Exercises by Diana Franklin
    References
    Index
    Online Appendices
    Appendix D  Storage Systems
    Appendix E  Embedded Systems
      by Thomas M. Conte
    Appendix F  Interconnection Networks
      by Timothy M. Pinkston and José Duato
    Appendix G  Vector Processors in More Depth
      by Krste Asanovic
    Appendix H  Hardware and Software for VLIW and EPIC
    Appendix I  Large-Scale Multiprocessors and Scientific Applications
    Appendix J  Computer Arithmetic
      by David Goldberg
    Appendix K  Survey of Instruction Set Architectures
    Appendix L  Advanced Concepts on Address Translation
      by Abhishek Bhattacharjee
    Appendix M  Historical Perspectives and References