Does one really need to worry about the reliability of the U.S. naturalization test when it is designed to be extremely easy (USCIS says that at least 92 percent of applicants pass the test)? One Michigan State University professor seems to think so.
There are 100 possible questions about American history and civics, and both the questions and the answers are provided in advance.
Test takers are given only 10 questions drawn from the list and can miss four and still pass. (There are 10 sets of these questions and each set is designed to be comparable to the others.)
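For the numerically inclined, here is a rough, back-of-the-envelope sketch of just how forgiving that format is. It is my own simplification, not the official procedure: it assumes the 10 questions are drawn at random from the full list of 100 (ignoring the fixed sets) and that an applicant has simply memorized some number of the published answers.

```python
# Back-of-the-envelope sketch (my own simplification, not USCIS procedure):
# suppose an applicant has memorized the answers to `known` of the 100
# published questions and is asked 10 of them at random, passing with at
# least 6 correct. The number of known questions drawn is hypergeometric.
from math import comb

def pass_probability(known, total=100, asked=10, needed=6):
    """P(at least `needed` of the `asked` questions are ones the applicant knows)."""
    return sum(comb(known, k) * comb(total - known, asked - k)
               for k in range(needed, asked + 1)) / comb(total, asked)

for known in (50, 60, 70, 80, 90):
    print(f"knows {known}/100 answers -> pass probability {pass_probability(known):.2f}")
```

The exact figures matter less than the shape of the curve: under these assumptions the odds of passing climb very quickly once an applicant has studied even a modest share of the published list.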
As candidates for citizenship, these test takers have lived in the United States for at least three or five years, depending on whether they are married to a U.S. citizen, and have had ample opportunity in those years to learn something about their adopted country.
It's like shooting fish in a barrel. What's to worry about?
My concern, if any, would be that the test is too easy, but that is clearly far from the thoughts of Paula Winke, an assistant professor of second language studies at Michigan State University. She has written a paper called "Investigating the Reliability of the Civics Component of the U.S. Naturalization Test" for Language Assessment Quarterly.
Had Professor Winke been writing some 27 years ago, when I did a pioneering study of the naturalization process for the Ford Foundation (The Long Grey Welcome, long out of print, though an article-length version is available for purchase here), she would have had something to worry about in the testing process.
In those days, each naturalization examiner made up the questions as he or she saw fit, and set the level of the questions and the passing score to match the educational level of the applicant. So there were enormous ranges in passing percentages from INS office to INS office and examiner to examiner, and a continuing opportunity for examiners to be quite subjective in the ratings.
For many years now, the array of questions has been the same nationwide, and the passing score has been set at a modest, undemanding 60 percent. Further, people over 65 who have been here for 20 years or more are given an even easier test.
If you, despite all these advantages, blow the test, you can come back within 90 days and try again without paying an additional fee.
Despite this, Professor Winke has studied the reliability of the test questions with great care, at considerable cost in time and skill, over 24 pages of relatively small type in the academic journal cited above.
And what is "reliability" in this academic context? It is, to quote the paper, "an empirical, statistical measure of how consistent test scores are". The hidden concern, of course, is for the one applicant in 14 or more who, despite the favorable odds, actually flunks the test.
To be more precise, 92.4 percent of the people applying for naturalization in fiscal year 2011 were granted their papers, as we noted in an earlier blog, and 7.6 percent were denied them. Many rejections were based on factors other than failing the civics test: Some applicants had criminal records, some mishandled the application process, and many others had inadequate knowledge of the English language. The actual percentage who failed the civics test, then, may have been more like 5 percent of applicants.
The percentage of people denied citizenship has dropped steadily, falling from 31.1 percent in 2000 to 8.4 percent in 2010, and to 7.6 percent in 2011.
Professor Winke did not use these basic data. Instead (in her footnote 4), she quotes a 1998 INS study which said that 34 percent of a study group of naturalization applicants "were denied due to failure on the English, the civics test, or both". Elsewhere she arrayed a batch of raw numbers without citing a percentage, though a percentage is easier for most readers to grasp than the thrust of this text:
In [FY 2008] USCIS administered 1,652,468 naturalization interviews, approving 1,050,399 applicants for citizenship, denying 121,283 and giving 480,786 pending status.
I think it was the applications, still in the process of being examined, that were given "pending status", not the aliens applying for it.
If you add the approvals to the denials and divide the denials by that sum, you get a denial rate of 10.4 percent for all reasons, a rate that has dropped further in more recent years.
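For anyone who wants to check that arithmetic, the calculation from the FY 2008 figures quoted above takes only a couple of lines (pending cases are left out of the denominator):

```python
# Denial rate computed from the FY 2008 figures quoted above;
# pending applications are excluded from the denominator.
approved, denied = 1_050_399, 121_283
print(f"denial rate: {denied / (approved + denied):.1%}")  # roughly 10.4%
```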
Undeterred by this bit of reality, which might have suggested that she write about something else, the author charged ahead with 414 tests of aliens (187) and citizens (225), in which:
Using an incomplete block design of six forms with 16 nonoverlapping items and four anchor items on each form (the anchors connected the six subsets of civics test items), I applied Rasch analysis to the data. The analysis estimated how difficult the items are, whether they are interchangeable, and how reliably they measure civics knowledge.
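For readers who have not met the term before, Rasch analysis models the probability that a given person answers a given item correctly as a function of that person's ability and the item's difficulty. The toy sketch below is my own illustration of the idea, with made-up numbers of items and test takers and a deliberately crude difficulty estimate; it is not Professor Winke's code, data, or design.

```python
# A toy Rasch (one-parameter logistic) illustration with simulated data;
# nothing here comes from the paper under discussion.
import math
import random

random.seed(0)

def rasch_prob(ability, difficulty):
    """Rasch model: probability of a correct answer."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Simulate 400 hypothetical test takers answering 20 hypothetical items.
true_difficulty = [random.gauss(0, 1) for _ in range(20)]
abilities = [random.gauss(0, 1) for _ in range(400)]
responses = [[1 if random.random() < rasch_prob(a, d) else 0
              for d in true_difficulty] for a in abilities]

# Crude difficulty estimate: the negative log-odds of each item's proportion
# correct. Real Rasch estimation iterates jointly over persons and items,
# but even this rough figure tracks the ordering of the simulated difficulties.
for i, d in enumerate(true_difficulty):
    p = sum(row[i] for row in responses) / len(responses)
    estimate = -math.log(p / (1 - p))
    print(f"item {i:2d}: true {d:+.2f}  rough estimate {estimate:+.2f}")
```

In the paper, this machinery is used to ask how difficult the items are, whether they are interchangeable, and how reliably they measure civics knowledge; the sketch above only shows the model's basic mechanics.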
The report comes complete with two appendices, three pages of references, four tables, and five nods of thanks to reviewers: the entire academic panoply. It must have taken a lot of time and a lot of PhD-level skill, but to what end? To analyze in great detail the accuracy of a test that 92 to 95 percent of applicants pass? To examine a test in which all the answers are given to all the test takers in advance?
Two comments: First, she identifies the one question out of the 100 that her subjects found the most difficult: "How many amendments are there to the Constitution of the United States?" Despite a lifetime of voting, dealing with government, and writing about it, I could not answer that one off the top of my head, and I agree with her that the question should be dropped from the list of 100.
The author is not kind enough to her readers to supply the answer, but I will be to mine: There are 27 of them.
Second, this misdirected bit of academic talent is, sadly, not unique. All too often the academic journals are full of such not particularly useful material, as are too many master's theses and PhD dissertations.
In the testing field, why not examine the extent to which various SAT-type tests can predict success in college or graduate school or income levels after graduation? In the immigration policy field, why not study the relative rates of naturalization or ultimate income levels of the various classes of family, employment-based, refugee, and asylum migrants? Or do a comparative cross-national study of the utility of differing strategies of immigration enforcement in employment situations? (Some of these subjects may well have been studied.)
I think well of scientific research, including that in the social sciences, but wish that some of the latter could be guided into more useful pathways.