Posts Tagged ‘language testing’

I ran across this very helpful article recently that does for reading and listening comprehension skills what I have been trying to do with reading-writing subskills.

Song, M.-Y. (2008). Do divisible subskills exist in second language (L2) comprehension? A structural equation modeling approach. Language Testing, 25(4), 435-464.

In essence, the author asks: to what degree does student performance on a test suggest that reading, listening, and two or three identified subskills exist as separate constructs (as evidenced by structural equation modeling)? In comparison, I have been trying to identify the degree to which reading comprehension, writing ability, synthesis comprehension, and paraphrase writing ability are all subskills of reading-to-write tasks. So this article provided quite a bit of the theoretical framing and technical analysis that I need to further refine my own study.



I work with my institution’s international teaching assistant (ITA) program. The program, like those at many universities and colleges in the USA, is designed to provide language and pedagogical training to non-native English speaking graduate students who serve as teaching assistants for their departments.

Last fall, after testing all potential ITAs for oral proficiency, our program received many concerned calls from departments whose international students did not meet the minimum English language standards to be TAs. In fact, a large number of the students were well below the cut-off set in the university's policy regarding ITAs. Departments were concerned because these students needed funding but no longer qualified to work as TAs due to their oral proficiency scores. To put it mildly, it was a very frustrating experience for many stakeholders.



Given the university's need to assess the English language proficiency of incoming transfer students from the community college system, I was able to continue assessing the effectiveness of our ESL placement test. Earlier this week, a group of transfer students took the new grammar/vocabulary test, as well as the existing oral interview and a revised writing test. Now that I have additional examinee data, I have been able to conduct an initial item analysis.
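For readers unfamiliar with item analysis, the classical-test-theory version boils down to two statistics per item: difficulty (the proportion of examinees answering correctly) and discrimination (the corrected point-biserial, i.e., the correlation between the item score and the total score with that item removed). Here is a minimal sketch of that computation; the response matrix, function names, and numbers are invented for illustration, not the actual placement-test data:

```python
# Classical item analysis: difficulty (p-value) and corrected
# point-biserial discrimination for a matrix of scored 0/1 responses.
from math import sqrt

def item_analysis(responses):
    """responses: list of examinee rows, each a list of 0/1 item scores.
    Returns a list of (difficulty, discrimination) pairs, one per item."""
    n_items = len(responses[0])
    totals = [sum(row) for row in responses]
    results = []
    for i in range(n_items):
        item = [row[i] for row in responses]
        rest = [t - x for t, x in zip(totals, item)]  # total minus this item
        difficulty = sum(item) / len(item)            # proportion correct
        results.append((difficulty, _pearson(item, rest)))
    return results

def _pearson(x, y):
    # Plain Pearson correlation; with a 0/1 variable this is the
    # point-biserial coefficient.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical scored responses: 6 examinees x 4 items.
data = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 0, 0],
]

for i, (p, rpb) in enumerate(item_analysis(data), start=1):
    print(f"Item {i}: difficulty={p:.2f}, discrimination={rpb:.2f}")
```

In practice, items with very extreme difficulty values or near-zero (or negative) discrimination are the first candidates for revision or removal.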


Last week, I spent half an hour in the depths of the university library searching through the P/PE stacks (the call number section for linguistics and language teaching). I picked up a few books that I hadn't read before, including this volume, which focuses on language learning contexts in India and the Middle East.

Singh, G. (2006). Summarisation skills: An analysis in text comprehension and production. In V. Narang (Ed.), Contemporary Themes and Issues in Language Pedagogy (pp. 17-32). Delhi, India: Nagri Printers.

This chapter summarizes research (from the 1980s) on summarization skills, and then applies those concepts and methodologies to an analysis of summaries written by graduate-level English language students in India. The results suggest that even graduate students struggle to understand source texts and write effective summaries.


I just picked this one up from the latest edition of the Language Testing journal. I'm always a little cautious of ETS articles written only by ETS researchers. They invariably tend to support ETS practices and promote ETS products. Still, they do lots of research and serve as a model for lots of interesting approaches to language testing, so I don't begrudge them – I just take anything they publish with a grain of salt. After all, any researcher, regardless of institutional affiliation, has some agenda or other.

Here’s the reference, and the review is after the jump:

Sawaki, Y., Stricker, L. J., and Oranje, A. H. (2009). Factor structure of the TOEFL Internet-based test. Language Testing, 26(1), 5-30.



I finally took a moment this afternoon to read through this article, which I found at least a couple of months ago. It was one of the few cases where my Google Alert actually turned up something useful. In fact, in the year that I have had that Google Alert, this may have been the ONLY useful link that came through. Even so, I maintain that it was worth it because:

  1. This article came from a journal that I was previously unaware of and would never have thought to browse, and
  2. This article is critically tied to my research topic, and my planned study could be considered the “next step.”

Here’s the full reference citation:

Asención Delaney, Y. (2008). Investigating the reading-to-write construct. Journal of English for Academic Purposes, 7(3), 140-150.



Of all the authors from this volume whom I met at LTRC this summer, Elana Shohamy was the most personable. I sat next to her during a couple of sessions, and not only was she very friendly, but she also seemed humble and genuinely inquisitive when others shared their research, even though I knew how well-published she is. On another note, I have to say that she's the closest thing to a language testing rock star; I'm not sure what that means, but somehow it seems fitting.

Shohamy, E. (2007). Tests as power tools: Looking back, looking forward. In J. Fox, M. Wesche, D. Bayliss, L. Cheng, and C. E. Turner (Eds.), Language testing reconsidered (pp. 141-152). Ottawa, ON: University of Ottawa Press.

Shohamy starts out her essay by detailing her experience with language tests (much like Charles Alderson did during his address at LTRC this summer). Elana’s experiences connected with my own:

  • She saw herself as a "victim of tests" (p. 142) during her educational experience. I can relate. I never saw the purpose of tests, since I never felt that they were effective measures of what I had learned. I felt that they often measured test-taking ability or obscure facts related to the material rather than the important, key concepts I had been learning.
  • She also describes how tests replaced learning as she got closer to finishing high school, as teachers prepared students for exit exams and university admissions. My experiences:
  1. Although high school exams in Canada are not so ridiculous (there are no national high school exit exams, nor any universal university entrance test), I do remember having a difficult time in Physics 12. I just couldn't get into the course material, and as a result I never did most of my labs. However, in the three weeks before the final exam, our instructor let us review old provincial exams (the closest thing to systematized high school exams in Canada). Instead of studying the material for the class, I learned what kinds of questions would be asked and strategies for completing them. As a result, I did well on my Physics 12 exam, not because I understood the principles of physics, but because I knew how to use basic algebra and apply a list of equations to test items that provided variables fitting that list.
  2. I also saw this tendency toward testing instead of learning while working as an English teacher in China. When I began teaching, I taught only Grade 10 students, and we would do a variety of role-playing and communicative activities. But occasionally one of the upper-level instructors would ask me to visit their classes, and the atmosphere was very different. Students in these classes never spoke; instead, they listened as the instructor explained complex grammar rules, then took long multiple-choice grammar quizzes and listened again while the instructor explained why the wrong answers were wrong. When they asked for my explanations, I often could not provide one, given that many of the subtleties in these grammar points were either British English usage or unimportant to real communication in English. China is only now starting to recover from this learning-less method of teaching English.
  • Shohamy explains that this aversion to tests continued in university where she was determined to become a testing expert in order to change the world. My experiences:
  1. I admit that my path to language testing was similar. I was extremely critical of traditional tests, and ended up taking a language testing class during my first semester of graduate school – not so much because I wanted to learn about test theory, but because I wanted to graduate faster and the testing course was one I could sandwich into my schedule. Through the course, I learned to enjoy the mathematical aspects of language testing, but I was also encouraged to learn that new testing practices extended beyond traditional true/false or multiple-choice questions and included an array of performance and "alternative" modes of testing. The instructor of that course asked me to become her research assistant, and that's what led me to do my MA thesis research on ESL writing rating. This has continued in my PhD research, where I am investigating the use of integrated writing tasks as a means of assessment.
  2. Despite all this, I still rarely give tests in my classes. The only time I ever taught an exclusively reading class, I never gave a single test; instead, I relied on other forms of formative assessment to see what learning was happening and what learning needed to happen.
  3. I frequently criticized the program-wide speaking tests at my old institution because I could see how students could pass them by simply memorizing mini-speeches instead of communicating spontaneously. Before leaving, I pushed for alternative forms of speaking and writing assessment that would result in improved classroom washback and would encourage instructors to focus on skills that students would realistically use in academic situations.
  • Shohamy spends the majority of her article explaining the use of tests as policy tools, both to the benefit and to the detriment of examinees and communities. In connection with Spolsky's and McNamara's chapters, Shohamy helps testers see how the impact of language tests extends beyond grades and can have significant societal influence. As test developers, we need to consider these issues and work with test score users to ensure responsible test use and policy application.
  1. I am experiencing this now more than ever before. At my old job, our test scores were an internal measure; students were not impacted by good or poor test scores outside of our program. We had been working towards greater continuity with the main campus, but only now are such issues being considered. However, in my current position, scores for the tests that we administer have a major impact on students and their sponsoring departments. I have already presented on this issue at a couple of graduate admissions meetings, and I have another one tomorrow. Our goal is to help score users understand what these scores mean and to make responsible choices in admissions, assignments, and advancement that will benefit all the students involved. It's a complex issue, and solutions are not immediate.

