| Title | Using a Data Prefetching Scheme Reducing the Data Access Latency of Pipeline Processor |
|---|---|
| Author | 杜日富 |
| Journal | 新埔學報 |
| Volume/Issue | 17, 1999.10 |
| Pages | 271–288 |
| Classification | 471.61 |
| Keywords | Data prefetching; Pipeline processor; Data access latency; Data reuse ratio; Recognizable engine |
| Language | English |
Chinese Abstract (translated) | Most computers require large amounts of memory and frequent data transfers between the CPU and memory, which cause data hazards and data access latency, both of which degrade CPU performance. This paper proposes a solution to data access latency: a hardware-based data prefetching scheme that adds a history table and a data prefetcher to the pipeline processor. The data prefetcher is modeled and simulated with SES/workbench, an object-oriented graphical modeling and simulation package. The enhanced processor outperforms the traditional processor, and throughput is also measured for different cache sizes; both results help validate the study. |
English Abstract | Processor cycle times are currently much faster than memory cycle times, and the performance gap between them continues to widen. Prefetching schemes are used to reduce the performance gap between the processor and off-chip storage in RISC and CISC architectures. This paper proposes a new scheme to reduce data access latency. Two components are added to the traditional pipeline processor: one is the Data Prefetching Buffer (DPB), and the other is the Recognizable Engine (RE). Based on this idea, the Data Prefetching Engine (DPE) is constructed from the pipeline CPU, the DPB, and the RE. The design is modeled and simulated using the SES/workbench object-oriented graphical modeling and simulation software. For quantitative analysis, we compare the data reuse ratio of instructions between the enhanced pipeline processor architecture and the traditional one, and measure the throughput for different cache sizes in the enhanced architecture. |
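The general idea of the abstract (a recognizing component detects access patterns and a prefetch buffer holds anticipated data, raising the data reuse ratio) can be illustrated with a minimal sketch. The class names, the stride-detection rule, and the FIFO buffer policy below are assumptions for illustration only, not the paper's actual RE/DPB design:

```python
from collections import OrderedDict

class StridePrefetcher:
    """Toy stride-based prefetcher: a history table records the last
    address seen per load instruction (PC); once two consecutive
    accesses from the same PC show the same stride, the next address
    is prefetched into a small buffer (illustrative stand-ins for the
    paper's Recognizable Engine and Data Prefetching Buffer)."""

    def __init__(self, buffer_size=8):
        self.history = {}            # PC -> (last_addr, last_stride)
        self.buffer = OrderedDict()  # prefetch buffer, FIFO eviction
        self.buffer_size = buffer_size
        self.hits = 0
        self.accesses = 0

    def access(self, pc, addr):
        self.accesses += 1
        if addr in self.buffer:      # data arrived before it was needed
            self.hits += 1
        last = self.history.get(pc)
        if last is not None:
            stride = addr - last[0]
            if stride != 0 and stride == last[1]:
                self._prefetch(addr + stride)  # pattern confirmed
            self.history[pc] = (addr, stride)
        else:
            self.history[pc] = (addr, None)

    def _prefetch(self, addr):
        self.buffer[addr] = True
        if len(self.buffer) > self.buffer_size:
            self.buffer.popitem(last=False)    # evict oldest entry

    def hit_ratio(self):
        return self.hits / self.accesses if self.accesses else 0.0

# A loop walking an array with a constant stride quickly becomes
# predictable: after two warm-up accesses, every access hits.
pf = StridePrefetcher()
for i in range(16):
    pf.access(pc=0x400, addr=0x1000 + 4 * i)
```

After the stride is learned on the third access, the remaining 13 of the 16 accesses hit the prefetch buffer, mirroring how the paper's scheme raises the reuse ratio for regular access patterns.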
The abstract information in this system is based on the abstracts of the journal articles.