Evaluating Phonological Skills in Adult ESOL Learners
by Robin Lovrien Schwarz, M. Sp. Ed: LD
Part Three: Evaluation of phonological skills in adult ELLs (English Language Learners) in Texas
The overriding purpose of the study in Texas ESOL programs was to lay groundwork for further investigation into the role of phonological skills in adult ELLs’ learning and into ways these skills can be effectively measured. Thus this study is a snapshot of the phonological skills of learners with a variety of language backgrounds, education levels, experience with English and English proficiency levels.
The secondary purpose of this study was to learn more about the APS as a measurement of adult ELLs’ phonological skills.
(Note: The original intention of this study was to attempt to determine whether there was any correlation between phonological skill levels and learners’ current placements and progress in their programs. However, the timing of the testing precluded gathering information about learner progress from the learners’ teachers or programs.)
The participants in the study were adult ELLs at five different sites of ESOL instruction in Texas. Sites were in Fort Worth, Houston (two sites), Austin and San Antonio. Some learners were receiving instruction in ESOL classes in adult education programs; a few were receiving ESOL instruction in a refugee program; others were being tutored in volunteer tutoring programs through libraries. Most were in ESOL classes in a community college setting. A total of twenty-nine learners were tested. They ranged in age from 18 to well past 70 (several older learners were reluctant to reveal how old they actually were!).
Participants were classified into three groups according to the amount of formal education they had received in their country and in the US (Table 1). Twenty of the 29 participants fell in the group with the highest education levels (High Literate). They had at least completed high school, and most had some college or had completed college. Several of this group had postgraduate degrees (JD, MD, MBA). One had a professional certificate (chef’s training). The high number of highly educated participants resulted from the fact that the majority of participants tested were students in a community college ESOL program. Two participants had completed elementary school and were classified as “Intermediate Literate.” The remaining seven participants had education ranging from less than one year to four years of elementary school. These were classified as “Low Literate.” Those having less education either were from non-literate cultures (Hmong) or were economically unable to attend school (Mexicans and one Thai). None of the participants was totally illiterate. The Hmong participant was able to read and write in Thai; the Mexicans and the Thai were all literate in English by the time of testing as a result of attending adult education or having been tutored in literacy skills.
Participants hailed from 15 countries—five Spanish-speaking countries of Central and South America, one Caribbean (Haiti), one Asian (Korea), one Southeast Asian (Thailand), two Eastern European (Uzbekistan, Azerbaijan), one Middle Eastern (Iran), and two African countries (Ethiopia, Côte d’Ivoire) (Table 2).
Twelve languages were represented among all tested. Most participants spoke Spanish (14), four spoke Vietnamese, two spoke Farsi, and all other languages were represented by one speaker each (Thai, Hmong, Turkish, Uzbek, Russian, Korean, Amharic, French, Haitian Creole) (Table 2).
Many of the more educated participants had had English instruction while in their countries. Six had one to three years of English in their country, while another five had three to five years. Three had six years of English and two had studied English for eight years before coming to America (Table 3).
Participation in the project required that all learners have had at least six months of English instruction in this country. Time in US English instruction varied widely: one or two participants had more than five years (two could not remember how many years they had spent in adult ESOL instruction off and on), while most (13) had had less than one year of instruction (Table 3).
Evaluation of participants:
A basic description of the project was sent to teachers ahead of the testing visit so that teachers could explain it to their learners and ask for volunteers. Teachers first identified learners who had had six months or more of English instruction; those learners were then asked if they were willing to be in the project.
At the time of testing, participants were assigned a number. No names were used except on the approved consent form. The consent form was fully explained and the participant received an original of it after signing two copies. A consent form in Spanish was used for those who could read Spanish, and the form was translated into Vietnamese and Amharic as well. However, since there was no way to anticipate who would volunteer, consent forms were not available for all participants in their first language.
Participants were required to have English skills at Level IV on the Student Performance Levels (SPLs) or to have someone who could explain in their first language accompanying them to assure comprehension of the nature and purpose of the testing. Once testing began, however, directions were continued in English.
Participants were sometimes interviewed privately; in other cases the participant permitted someone to stay with him or her (for example, the coordinator of ESOL stayed in the room with the three learners in the refugee program in Fort Worth). After reading the consent form (if they could) and hearing an explanation of it, participants signed it and received their own copy. Then they answered questions about their first language, how much education they had had both in their country and in the US, the amount and type of English instruction they had received, and how long they had been in the US.
When the testing was finished, participants received their results in writing. The test administrator explained the implications of the results and offered suggestions for ways to strengthen specific skills. Participants were reminded that they did not have to share their results with their teachers (this was stated on the consent form as well), and teachers did not receive results of individual learners. Some teachers received a general description of the overall performance of their students, but since no names were used, the administrator did not discuss individual students with teachers.
Results of testing:
Raw data from testing indicated the following:
Overall scores: Ten participants (just under 30%) scored in the “good” range in both phonological awareness and phonological memory, while four had overall scores of “fair/poor” in both areas. None scored in the “poor” range in both skills. Of those whose scores fell in the “fair/poor” range, three had their “poor” score in phonological memory. (See discussion below for theories about this.) Just one participant of the 29 had an overall score of “poor” in phonological awareness.
Three participants had no scores in the “poor” range on any task, while one scored “poor” on five of the eight items and another participant scored “poor” on four of the eight items. No participant scored “good” on all eight tasks (largely because of the sentence repetition challenge discussed below). Three participants could not understand one of the tasks, which counted as “poor” for scoring purposes (Table 4).
Of all the tasks and scores, one set of scores stands out: twenty-one participants—slightly more than two thirds—scored “poor” on the sentence repetition task. Five more scored “fair” on this task, meaning they got three out of five sentences completely correct. Only three learners had scores in the “good” range on this task (Table 6).
Discussion: overall scores
As expected, participants with higher levels of education nearly always scored better on most of the tasks on the APS than those with less education. However, for many participants the score of “poor” on sentence repetition lowered their totals enough to keep them from getting “good” scores in both kinds of skills on the test. Of the ten participants scoring “good” on both types of skills, nine were in the “High Literate” category.
Literacy and education were not always closely related to scores on the phonological skills tasks. The tenth participant scoring “good” on both kinds of skills, and one of only three participants with just a single “fair” score, was in the “Low Literate” category, though he had been in the US for 24 years. This person, a laborer with 18 months of total formal education in Mexico, reported having had many years of ESOL instruction on and off over the years and was actively pursuing a GED in English. He was also one of only three learners who scored “good” on sentence repetition.
In contrast, one of the Vietnamese participants, who was placed in the “High Literate” category because he had finished high school in the US, scored among the lowest of all participants, with “fair” phonological awareness skills (one poor, two fair, one good)—a score raised by his skill in counting syllables—and a “poor” overall score in phonological memory. He scored “fair” or “poor” on every phonological memory task despite having attended high school for two years in the US (which presumably would have given him a lot of exposure to spoken English). One of the Spanish speakers with several years of college scored in the “fair” range in both phonological memory and awareness. This participant, despite being highly literate in Spanish and working as a nanny with English-speaking children, could barely understand the deletion task and had difficulty counting words in sentences and producing rhyme.
Two other participants had “poor” ratings in phonological memory. Both were Spanish speakers—one, who had the second-lowest item scores of the group, had had three years of formal education in Mexico and had been in a pre-GED program for 14 months prior to testing. The other had also had only three years of education in Mexico but had been in the US a long time and had been in tutoring for an indeterminate amount of time (more than five years). This latter participant could not understand the rhyming production task and had scores in the “poor” range on the word and syllable counting tasks. Surprisingly, this participant was able to do four out of five of the deletion tasks.
The one participant with a “poor” rating in phonological awareness could not understand the deletion task at all and barely understood the syllable counting task. Though this person had been in English for Specific Purposes (ESP) classes at a hotel and had attended the literacy program for four years off and on, he had never had direct syllable instruction. At the end of the testing he went back into his class, asked the teacher to explain syllables and came back to show that he had learned the idea of syllables!
Discussion: Individual items:
As mentioned in the Results section, more than two thirds of the participants (21, or 72%) obtained a score of “poor” on the sentence repetition task. This item drew more errors than any of the others and was apparently unrelated to education (12 of the 21 were in the “High Literate” category, and one of the highest scores belonged to a Spanish speaker with the least amount of formal education); to the amount of English education (some had as much as eight years of formal English instruction, others only a few months); or to language background (the Russian, one Farsi speaker and one Spanish speaker scored “good” on this task).
The possible explanations for the three scores of “good” on this item are these: the Russian speaker came to this country as a middle-school-aged student and undoubtedly had the advantage of youth in his language learning. The Farsi speaker was already a polyglot when he arrived in this country (speaking Farsi, Arabic and fluent French) and, despite having had some college and being a gifted businessman, worked exclusively with American blue-collar workers; thus he was likely challenged to speak and understand highly colloquial English. The Spanish speaker with a “good” score on the sentence repetition was what could only be described as an “aggressive language learner,” reporting that he actively sought correction and comprehension in virtually everything he said and heard. His lack of formal education was due to economic circumstances: he had been prevented from going to school in Mexico when young because he had to support his parents and siblings; likewise, in the US he had had to support a wife and several children and reported that he never had time for school. Once retired, he decided to go at learning with a vengeance. At the point of testing, he was only one or two tests away from taking his GED in English.
As for why the others found the task difficult, the most likely reason, as mentioned in the last section of Part One, is that they had difficulty hearing the sounds and words of English accurately and therefore did not hear critical differences of sound that impact meaning. For example, nearly every participant substituted “is” for “was” in the sentence “The traffic was bad today,” omitted the final s in magazines (“Magazines can be interesting.”), or substituted “could” for “can” in that sentence. Nearly all participants understood the sentences they had to repeat but could not repeat them accurately. Though inaccurate perception of sounds impacted other tasks, such as rhyme perception or word counting for some, there was no pattern to those errors.
The second most-missed item was production of rhyme, which involves giving a rhyming syllable for a stimulus (“What rhymes with ‘bug’?”). Indeed, it is questionable whether the scores of some of those scoring low on this task should have been counted at all, so poor was their understanding of the task. Of all 29 participants, only two understood that producing something that rhymed with the stimulus did not require a known vocabulary word. Both the directions for the task and the practice items were intended to make clear that this was not a vocabulary task—that is, participants did not need to come up with a real word from their vocabulary to rhyme; any rhyming syllable would do. However, this concept proved too difficult to explain or demonstrate effectively.
Surprisingly, even most of the low-literate learners were able to do a very good job with the deletion task. Of those who struggled with it, two had among the lowest English skills, having studied English for barely six months, making their scores unsurprising. One of these was the Hmong speaker; though she had studied English for nearly 18 months, she was not able to understand the task. One Spanish speaker also was unable to understand the task. This participant had only three months of formal education in Mexico and relatively little ESOL instruction in this country, so his inability to understand the task could well be the result of low literacy. The only other person scoring in the “poor” range on this task was a highly educated Nicaraguan. As noted before, she also scored poorly on the word counting task and would be the only one of the twenty-nine for whom reading in English might be predicted to pose a problem. She did not report reading problems in English, but she had had hardly any formal instruction in English.
Language background appeared to interfere only irregularly. While all the Vietnamese speakers, for example, had difficulty with sentence repetition, only one had difficulty with word repetition. Three Spanish speakers, the Korean speaker and one Farsi speaker had difficulty with syllable counting, and, as we learned, in at least one case this was simply because syllables had never been taught to that participant. Deletion was also not related to language background, except perhaps in the case of the Hmong speaker: since Hmong is not a written language, it is unlikely the Hmong speaker had gained any deep sense of phoneme awareness in her few months in a refugee ESOL conversation class.