There were two underlying themes at a recent House Immigration Subcommittee hearing that deserve a little more attention.
The hearing, on February 15, dealt with the explosive report by the Acting Inspector General of DHS, Charles K. Edwards, in which he explored the extent to which USCIS, during the Obama administration, had pushed rank-and-file staff members to approve marginal applications for immigration benefits.
The hearing was described in an earlier blog post, and the full text of the IG report is available.
The two largely undiscussed elements at the hearing were the often murky USCIS statistical reporting systems and the inherent difficulties (faced by the IG) in prying controversial information out of the employees of any government agency.
If you are going to charge, as the IG did, that staff members were being pressed to approve questionable cases, you would expect tables and tables of data showing how often USCIS said "yes" or "no" to different kinds of applications over different stretches of time.
As the ranking Democrat on the subcommittee, Zoe Lofgren (D-Silicon Valley), aptly pointed out, there was practically no statistical data in the report; I routinely disagree with her policy positions, but she was right on this point.
What no one said was that USCIS data sets are uneven in quality, vary in format, and are often not made public; these data sets are not, I hasten to add, published by the ever-reliable DHS Office of Immigration Statistics, a different entity. A regularly published, detailed set of USCIS data showing petitions received, approved, and denied for each of the various immigration benefits sought would go a long way toward resolving these matters. The subcommittee should demand it from the agency.
The other key question not asked at the hearing was this: how do you get data on something when providing that data is dangerous (or seems to be) to the person with the information?
Suppose the questions deal with abuse by guards in a prison. Can you conduct a random sample survey of the prisoners and tie their names to their responses about which guards are bullies and which are not? What prisoner in his or her right mind would respond, or respond honestly, to such an inquiry?
No, you have to use other approaches. You have to give the data sources (in this case, the prisoners) a system in which they can respond honestly, without fear of retaliation.
That is pretty much the situation, if not quite so dramatic, that the Inspector General faced as he sought to find out the extent to which the USCIS leadership was twisting the system in order to "get to yes". How do you convince rank-and-file adjudicators to tell the IG staff what (perhaps illicit) pressures were put upon them to "get to yes"? How do you get sound information on such a controversial subject when an honest answer might (even if only in the mind of the respondent) lead to trouble on the job?
Unfortunately, the conversation on this central methodological question was completely one-sided. Lofgren said that the damning responses the IG had elicited from staff adjudicators "came from a self-selected study group", i.e., the staff members who replied anonymously to the IG’s questions. Similarly, Alejandro Mayorkas, Director of USCIS, dismissed the whole report as "based on limited testimonial evidence and not empirical data . . ."
Both Lofgren and Mayorkas have extensive experience in congressional hearings, both seem to have a great deal of self-confidence, and neither is an "acting" anything.
Edwards, on the other hand, may have been testifying before Congress for the first time (something I recall, from my own experience, as pretty terrifying); he is only the Acting Inspector General, and he seemed a hesitant witness. He did not defend his agency's own methodology, and no one on the committee came to his rescue on this point.
As a researcher, I have had some experience with this sort of thing. At one point, when I was with the island territories office in the Department of the Interior (DoI), I ran a survey of former nonimmigrant workers who had toiled in the garment sweatshops of the Commonwealth of the Northern Mariana Islands (CNMI). My boss and I wanted to know what had happened to the workers in the factories and in the recruitment process. It was not a pretty picture.
We decided that trying to talk to people who currently worked in those factories would be impossible, since the women would be justifiably terrified of reprisal, so we worked with a slightly different population: women on nearby Guam, all Chinese, who had escaped from the CNMI sweatshops, had landed on Guam illicitly, and were seeking asylum from what was then the Immigration and Naturalization Service. The interviews were conducted in the offices of their lawyers on Guam. We secured a lot of useful information, particularly about how the women were treated by the provincial labor-export agencies in Communist China, in the factories, and by CNMI’s own immigration service.
Purists within DoI, back in Washington, however, quashed the report because it was "not a random sample". That one could not possibly secure data from a random sample of the exploited workers was dismissed as "unfortunate".