Understanding Advanced Computer Architecture in Parallel Computing

This presentation covers the instruction-set architecture (ISA), the 5-stage pipeline, pipelined instructions, hazards, and stalls, delving into advanced computer architecture with a focus on achieving high performance by getting data to the execution units. It also touches on why caches matter, multicore CPUs, the MESI protocol, and the differences between GPU and CPU architectures. Feedback is sought on whether the architecture module belongs in the course curriculum for future offerings.



Presentation Transcript


  1. EE 193: Parallel Computing
     Fall 2017, Tufts University
     Instructor: Joel Grodstein (joel.grodstein@tufts.edu)
     Introduction to advanced architecture

  2. Introduction to advanced architecture
     The next 2-3 weeks will cover EE126/COMP46 (Computer Engineering) and EE194 (Advanced Computer Architecture), all in two weeks. Why do we care?
     - The hard part in achieving high performance is usually getting data to the execution units, so learning about caches is important.
     - Most multicore CPUs share their last-level cache, so we have to learn a bit about what that means, and about MESI.
     - GPU vs. CPU: a GPU gives up out-of-order execution, speculation, and branch prediction; in return, it gets SMT over a very large number of threads.
     - To understand when to use a GPU, we should know what these words and acronyms mean.

  3. Disclaimer
     - This is the first time this class is being offered at Tufts.
     - Tufts also has Comp 50 Concurrency; it's not yet clear how the two courses will play together in the future.
     - Looking for feedback: is 2-3 weeks of architecture useful? Next time around, should we expand it or shrink it?

  4. Instruction-Set Architecture (ISA)
     A simple RISC ISA:
     - Arithmetic instructions read two operands from registers, do a computation, and write the result back to a register.
     - Loads read an address from a register, read "memory," and put the result into a register.
     - Stores read an address and data from registers, and write "memory."
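
To make the three instruction kinds concrete, here is a minimal sketch of an interpreter for such a RISC ISA in C++. The encoding, register count, and memory size are invented for illustration; they are not from the slides.

```cpp
#include <array>
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical encoding of the three instruction kinds described above.
enum class Op { Add, Load, Store };

struct Insn {
    Op op;
    int dst;   // destination register (Add, Load); unused for Store
    int src1;  // source register (Add) or address register (Load, Store)
    int src2;  // source register (Add) or data register (Store)
};

int main() {
    std::array<int32_t, 8> reg{};        // register file, zero-initialized
    std::vector<int32_t> mem(16, 0);     // "memory"
    mem[0] = 3; mem[1] = 4;              // the two values we will add
    reg[1] = 0; reg[2] = 1; reg[6] = 2;  // registers holding addresses

    // c = a + b compiles to two loads, an add, and a store.
    std::vector<Insn> prog = {
        {Op::Load,  3, 1, -1},  // r3 = mem[r1]
        {Op::Load,  4, 2, -1},  // r4 = mem[r2]
        {Op::Add,   5, 3,  4},  // r5 = r3 + r4
        {Op::Store, -1, 6, 5},  // mem[r6] = r5
    };

    for (const Insn& i : prog) {
        switch (i.op) {
            case Op::Add:   reg[i.dst] = reg[i.src1] + reg[i.src2]; break;
            case Op::Load:  reg[i.dst] = mem[reg[i.src1]];          break;
            case Op::Store: mem[reg[i.src1]] = reg[i.src2];         break;
        }
    }
    std::cout << "mem[2] = " << mem[2] << "\n";  // prints mem[2] = 7
}
```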

  5. 5-stage pipeline
     - Fetch the instruction from memory
     - Read operands from the regFile (issue)
     - Do the computation (execute)
     - Load/store: access memory
     - Write the result back to the regFile (writeback)
     We call this a pipeline because the instructions can overlap each other. Real pipelines can be 5-40 stages deep.
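
A quick sanity check on why the overlap matters (my arithmetic, not the slides'): if one instruction enters a 5-stage pipeline each cycle, N instructions finish in N + 4 cycles instead of the 5N an unpipelined machine would need, approaching a 5x speedup for large N.

```cpp
#include <initializer_list>
#include <iostream>

int main() {
    const int depth = 5;  // pipeline stages
    for (int n : {1, 10, 100, 1000}) {
        int pipelined   = n + depth - 1;  // fill the pipe once, then 1/cycle
        int unpipelined = n * depth;      // each insn runs start to finish alone
        std::cout << n << " insns: " << pipelined << " vs " << unpipelined
                  << " cycles (speedup " << double(unpipelined) / pipelined
                  << ")\n";
    }
}
```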

  6. Pipelined instructions

     Instruction         Cycle 1   Cycle 2   Cycle 3       Cycle 4      Cycle 5      Cycle 6
     load r2=mem[r1]     fetch     read r1   access cache  write r2
     add r5=r3+r4                  fetch     read r3,r4    add r3,r4    write r5
     add r8=r6+r7                            fetch         read r6,r7   add r6,r7    write r8
     store mem[r9]=r10                                     fetch        read r9,r10  store

     Notes:
     - Fetch, issue, execute, memory, and writeback are all real physical resources; only one instruction can use a given resource at a given time.
     - Just like any assembly line: two different cars cannot be at the same station at once.
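
The diagram follows mechanically from one assumption: with no hazards, instruction i enters the pipe in cycle i+1 and advances one stage per cycle. A small sketch of my own (with abbreviated stage names) that prints the same schedule:

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Simplified stage names; "exec" stands in for execute/memory access.
    const std::vector<std::string> stages = {"fetch", "read", "exec", "write"};
    const std::vector<std::string> insns = {
        "load r2=mem[r1]", "add r5=r3+r4", "add r8=r6+r7", "store mem[r9]=r10"};

    // With no hazards, instruction i is in stage s during cycle i + s + 1.
    for (std::size_t i = 0; i < insns.size(); ++i) {
        std::cout << insns[i] << ":";
        for (std::size_t s = 0; s < stages.size(); ++s)
            std::cout << "  cycle " << (i + s + 1) << "=" << stages[s];
        std::cout << "\n";
    }
}
```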

  7. Hazards

     Instruction         Cycle 1   Cycle 2   Cycle 3       Cycle 4      Cycle 5      Cycle 6
     load r2=mem[r1]     fetch     read r1   access cache  write r2
     add r5=r3+r2                  fetch     read r3,r2    add r3,r2    write r5
     add r8=r6+r7                            fetch         read r6,r7   add r6,r7    write r8
     store mem[r9]=r10                                     fetch        read r9,r10  store

     Hazards occur when one computation uses the results of another: here the add needs r2 in cycle 3, but the load does not write r2 until cycle 4. What to do?
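
This is the classic read-after-write (RAW) hazard. A minimal sketch of the check an issue stage would perform, using a hypothetical instruction summary (register numbers, with -1 meaning unused):

```cpp
#include <iostream>

// Hypothetical summary of an instruction: the register it writes and the
// registers it reads (-1 means the slot is unused).
struct Insn {
    int dst;
    int src1, src2;
};

// Read-after-write (RAW) hazard: a younger instruction reads a register
// that an older, still-in-flight instruction has not yet written back.
bool raw_hazard(const Insn& older, const Insn& younger) {
    return older.dst != -1 &&
           (younger.src1 == older.dst || younger.src2 == older.dst);
}

int main() {
    Insn load = {2, 1, -1};  // load r2 = mem[r1]
    Insn add  = {5, 3, 2};   // add  r5 = r3 + r2
    std::cout << std::boolalpha << raw_hazard(load, add) << "\n";  // true
}
```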

  8. Stall

     Instruction         Cycle 1   Cycle 2   Cycle 3       Cycle 4    Cycle 5      Cycle 6      Cycle 7
     load r2=mem[r1]     fetch     read r1   access cache  write r2
     add r5=r3+r2                  fetch     stall         stall      read r3,r2   add r3,r2    write r5
     add r8=r6+r7                            fetch         stall      stall        read r6,r7   add r6,r7
     store mem[r9]=r10                                     fetch      stall        stall        read r9,r10

     - The simplest solution is to stall until the data is ready.
     - Compiler solution: rearrange the instructions so that this never happens.
     - How successful will this be? Not so easy: dependencies are very frequent. But compilers can be very smart.
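
Here is a rough sketch of the reasoning a scheduler does, using an invented one-instruction-per-cycle model in which a load's result becomes ready a few cycles after issue. Moving the independent add ahead of the dependent one removes a stall cycle:

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// Hypothetical instruction summary: registers written/read (-1 = unused)
// plus extra result latency beyond the usual single cycle.
struct Insn { int dst, src1, src2, extra_latency; };

// Crude in-order model: one instruction issues per cycle unless a source
// register is not ready yet, in which case the machine stalls.
int count_stalls(const std::vector<Insn>& prog) {
    std::vector<int> ready(16, 0);  // cycle at which each register is ready
    int cycle = 0, stalls = 0;
    for (const Insn& i : prog) {
        int need = 0;
        if (i.src1 >= 0) need = std::max(need, ready[i.src1]);
        if (i.src2 >= 0) need = std::max(need, ready[i.src2]);
        if (need > cycle) { stalls += need - cycle; cycle = need; }
        if (i.dst >= 0) ready[i.dst] = cycle + 1 + i.extra_latency;
        ++cycle;
    }
    return stalls;
}

int main() {
    Insn load {2, 1, -1, 2};  // load r2=mem[r1]: 2 extra cycles of latency
    Insn add5 {5, 3, 2, 0};   // add r5=r3+r2 (depends on the load)
    Insn add8 {8, 6, 7, 0};   // add r8=r6+r7 (independent)
    std::cout << "naive order:    " << count_stalls({load, add5, add8}) << " stalls\n";
    std::cout << "compiler order: " << count_stalls({load, add8, add5}) << " stalls\n";
}
```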

  9. Stall
     (Same pipeline diagram as the previous slide.)
     - Compiler solution: rearrange the instructions so that stalls never happen.
     - How successful will this be? Not so easy: dependencies are very frequent. But compilers can be very smart, especially if they can interleave multiple independent computations.
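
For example (my code, not the slides'): interleaving two independent running sums gives the pipeline a second dependency chain to execute while the first chain's add is still in flight.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Two independent accumulators form two dependency chains, so adjacent
// adds can overlap in the pipeline instead of waiting on one serial chain.
double sum_interleaved(const std::vector<double>& a) {
    double acc0 = 0.0, acc1 = 0.0;
    std::size_t i = 0;
    for (; i + 1 < a.size(); i += 2) {
        acc0 += a[i];      // independent of the next line...
        acc1 += a[i + 1];  // ...so the two adds can be in flight together
    }
    if (i < a.size()) acc0 += a[i];  // leftover element when size is odd
    return acc0 + acc1;
}

int main() {
    std::vector<double> a(1000, 1.0);
    std::cout << sum_interleaved(a) << "\n";  // prints 1000
}
```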

  10. Life is harder because...
     (Same pipeline diagram as the previous slide.)
     - You never know how long a load will take: it depends on whether the data is in a cache, and in which cache. That makes a compiler's life much harder.
     - Hardware solution: an out-of-order machine (but the hardware isn't simple).
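
The variable latency is easy to observe from software. This sketch of mine chases a random permutation so that every load's address depends on the previous load; the small array is sized to fit a typical L1 cache and the large one to fit in no cache, so the two loops expose very different latencies (the sizes are assumptions about the machine):

```cpp
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

// Chase a random permutation: each load's address depends on the previous
// load, so the loop's speed is set by raw load latency.
double ns_per_load(std::size_t n_elems) {
    std::vector<std::size_t> next(n_elems);
    std::iota(next.begin(), next.end(), std::size_t{0});
    std::shuffle(next.begin(), next.end(), std::mt19937{42});

    std::size_t idx = 0;
    const std::size_t iters = 10'000'000;
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < iters; ++i) idx = next[idx];
    auto t1 = std::chrono::steady_clock::now();
    if (idx == n_elems) std::cout << "";  // keep idx live; never true
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / iters;
}

int main() {
    // ~32 KB fits in a typical L1 cache; ~256 MB fits in no cache.
    std::cout << "small array: " << ns_per_load(4'000) << " ns/load\n";
    std::cout << "large array: " << ns_per_load(32'000'000) << " ns/load\n";
}
```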

  11. What else causes stalls?

     Instruction         Cycle 1   Cycle 2   Cycle 3       Cycle 4    Cycle 5   Cycle 6   Cycle 7
     load r2=mem[r1]     fetch     read r1   access cache  write r2
     branch if r2=0                fetch     stall         stall      read r2   branch
     add r8=r6+r7                                                                         fetch

     - Branches (e.g., from an "if" statement): you cannot even fetch the add until you know whether the branch happened.
     - Hardware solution: a speculative machine with branch prediction. Make a guess whether the branch will be taken; unroll everything if you guessed wrong.
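
Branch prediction is also visible from software. In this sketch (mine, using the well-known sorted-vs-unsorted comparison), the same loop typically runs much faster after sorting, because the branch outcome becomes predictable:

```cpp
#include <algorithm>
#include <chrono>
#include <iostream>
#include <random>
#include <vector>

// Sum only the large elements. In the unsorted case this branch is taken
// essentially at random, which defeats the branch predictor.
long long sum_big(const std::vector<int>& v) {
    long long s = 0;
    for (int x : v)
        if (x >= 128) s += x;
    return s;
}

int main() {
    std::vector<int> v(10'000'000);
    std::mt19937 rng{42};
    for (int& x : v) x = rng() % 256;

    auto time_it = [&](const char* label) {
        auto t0 = std::chrono::steady_clock::now();
        volatile long long s = sum_big(v);
        auto t1 = std::chrono::steady_clock::now();
        std::cout << label << ": sum=" << s << ", "
                  << std::chrono::duration<double, std::milli>(t1 - t0).count()
                  << " ms\n";
    };
    time_it("unsorted (unpredictable branch)");
    std::sort(v.begin(), v.end());  // now the branch outcome is predictable
    time_it("sorted   (predictable branch)");
}
```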

  12. Simultaneous multithreading
     - Avoiding stalls is easier when successive instructions have no dependencies.
     - One way to get this: simultaneous multithreading (SMT). The processor keeps multiple threads ready at all times; by definition, different threads have no dependencies on each other (they each have their own set of registers).
     - Whenever a dependency would cause a stall, the machine instantly switches to another thread without losing any cycles; and if that thread would stall, it switches to yet another thread.
     - Intel calls this hyperthreading, and only keeps two threads around.
     Notes:
     - I've oversimplified. Real SMT works on superscalar machines, where multiple instructions can issue in one cycle, and then lets instructions from different threads all issue together.
     - SMT is perhaps a GPU's biggest trick; it's much more than 2-way.
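
From software, SMT shows up as extra logical processors. A minimal sketch; the 2x ratio in the comment is typical of Intel hyperthreading, not guaranteed:

```cpp
#include <iostream>
#include <thread>

int main() {
    // Reports logical processors; with 2-way SMT (Intel hyperthreading)
    // this is typically twice the number of physical cores.
    unsigned n = std::thread::hardware_concurrency();
    std::cout << "logical processors: " << n << "\n";
}
```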
