Search Result Analysis
Related Literature
- The Factor Structure of Multiple-Choice Items of the English Subtest of the General Scholastic Ability Test
- Validating the Cloze Items in the English Subtest of the General Scholastic Ability Test in Taiwan
- What Do the Cloze Items in the English Subtest of the Advanced Subjects Test in Taiwan Measure?
- The Application of Confirmatory Factor Analysis to Understanding the Construct Validity of Career Commitment
- A Reliability and Validity Analysis of the Eight Multiple Intelligences Questionnaire
- Chinese Familism: Conceptual Analysis and Empirical Assessment
- A Confirmatory Factor Analysis of Community Quality of Life among Young Adults with Intellectual Disabilities and Its Related Factors
- Applications of Structural Equation Modeling: Confirmatory Factor Analysis
- A Validity Analysis of Spatial Ability Measurement
- Transaction Risk and Success in Information Technology Outsourcing: Construct Measurement and Substantive Relationships
Title | The Factor Structure of Multiple-Choice Items of the English Subtest of the General Scholastic Ability Test |
---|---|
Author(s) | 林文鶯; 劉玉玲; 游錦雲 |
Journal | 國立虎尾科技大學學報 (Journal of National Formosa University) |
Volume/Issue | 34(2), June 2018 |
Pages | 89-105 |
Classification No. | 521.35 |
Keywords | Confirmatory factor analysis; Construct validity; Purpura's model of grammatical competence; General reading ability |
Language | English |
Chinese Abstract (translated) | This study used confirmatory factor analysis (CFA) to examine the construct validity of the multiple-choice items of the English subtest of Taiwan's General Scholastic Ability Test (GSAT). The main research question was: what is the latent factor structure measured by the 2015 and 2016 GSAT English multiple-choice items, which comprise three sections (vocabulary, cloze, and reading comprehension)? To answer this question, the study obtained from the College Entrance Examination Center (CEEC) item-level score records for 5,500 randomly sampled examinees in each of 2015 and 2016. CFAs were conducted in Mplus to test whether the measurement models derived from expert item classifications fit the empirical data, and to identify the best-fitting model. The experts classified each year's vocabulary and cloze items into three or four major language traits following Purpura (2004), and the reading comprehension items into two major reading-ability traits following Purpura (1999). However, the CFA results showed that the experts' classification models did not fit the empirical data best; the best-fitting model was instead a single general-trait model. In other words, the results indicate that the GSAT English multiple-choice items measure general English reading ability. Finally, based on these findings, the study offers suggestions for high-school English teachers and the CEEC staff responsible for test construction. |
English Abstract | Administered annually in January by the College Entrance Examination Center (CEEC) in Taiwan, the General Scholastic Ability Test (GSAT) is a high-stakes college entrance test for high school seniors. This study aimed to validate the multiple-choice (MC) items of its English subtest (GSAT-ES) using confirmatory factor analysis (CFA). The GSAT-ES contains a total of 56 MC items grouped into three sections: vocabulary (15 items), cloze (25 items), and reading comprehension (16 items). In this study, two data sets (one for 2015 and one for 2016) were provided by the CEEC, each containing 5,500 randomly selected test-takers' responses to the 56 MC items. A panel of five raters classified the vocabulary and cloze items into the four language components developed by Purpura (2004), and the reading comprehension items into two reading sub-skills developed by Purpura (1999). CFAs were then applied to the two data sets. For both years, the CFA results showed that the raters' item classifications failed to fit the test-takers' responses. Instead, based on three common fit measures, the single-factor model best captured the characteristics of the data, suggesting that the three MC sections together tap into general English reading ability rather than a number of divisible reading sub-traits. Finally, based on the results of the study, some pedagogical and practical implications are drawn for English teachers and test constructors. |
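The abstracts describe fitting a single-factor CFA model, in which every multiple-choice item loads on one latent trait (general English reading ability). The study itself used Mplus; the sketch below only illustrates the idea with simulated data, estimating a one-factor model by minimizing the maximum-likelihood discrepancy between the sample and model-implied covariance matrices. The item count, loadings, and sample size here are invented for illustration and are not the study's values.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate 5,500 examinees' scores on 6 items driven by ONE latent trait
# (illustrative loadings; items standardized to unit variance).
n, p = 5500, 6
loadings_true = np.array([0.8, 0.7, 0.6, 0.75, 0.65, 0.7])
theta = rng.normal(size=n)                         # latent "general reading ability"
errors = rng.normal(size=(n, p)) * np.sqrt(1 - loadings_true**2)
scores = theta[:, None] * loadings_true + errors

S = np.cov(scores, rowvar=False)                   # sample covariance matrix

def ml_discrepancy(params):
    """ML fit function: log|Sigma| + tr(S Sigma^-1), up to constants."""
    lam = params[:p]
    psi = np.exp(params[p:])                       # exp keeps uniquenesses positive
    sigma = np.outer(lam, lam) + np.diag(psi)      # implied covariance: LL' + Psi
    _, logdet = np.linalg.slogdet(sigma)
    return logdet + np.trace(S @ np.linalg.inv(sigma))

res = minimize(ml_discrepancy,
               x0=np.concatenate([np.full(p, 0.5), np.zeros(p)]),
               method="L-BFGS-B")
lam_hat = np.abs(res.x[:p])                        # factor sign is arbitrary
print("estimated loadings:", np.round(lam_hat, 2))
```

With a large sample, the estimated loadings recover the generating values closely; in a real analysis one would compare this single-factor model's fit indices (e.g., CFI, RMSEA) against the raters' multi-factor classification models, as the study did in Mplus.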
The Chinese and English abstracts in this system are taken from the published articles.