After the Cold War: A New Calculus for Science and Security
The debate over scientific openness and national security is not new. Drawing on the past, we can learn how to continue doing science in a new age of terrorism.
By Mitchel Wallerstein
I have been concerned professionally for more than two decades with the relationship between scientific openness and national security. Indeed, just more than twenty years ago, I had the privilege of directing a National Academy of Sciences panel that issued a report entitled Scientific Communication and National Security, known informally as the Corson Report, after Dale Corson, the panel's chair and president emeritus of Cornell University. Thus, for me, today's discussions about science and security have, in the immortal words of Yogi Berra, a strong sense of "déjà vu all over again."
The nature of the threat has changed, of course, since the Corson panel issued its report. The target of restrictions on open communication of scientific information is no longer the former Soviet Union and Warsaw Treaty states. But the risks to scientific and technological progress and the potential negative effects of imposing restrictions remain similar.
After working on the Corson Report and related studies at the National Academy in the 1980s, I managed major aspects of the U.S. Department of Defense's policy on technology security and export controls from 1993 to 1997. Even though my time in government preceded the terrible events of September 11, 2001, I can report that we recognized during the 1990s that certain areas of science, such as biotechnology, could be enormously helpful to the so-called proliferant states, such as Iraq and North Korea, as well as to terrorist groups seeking to gain access to mass-casualty weapons—or weapons of mass destruction. (Proliferant states are states known to possess, or strongly suspected of seeking to acquire or develop, nuclear, chemical, or biological weapons and their means of delivery.)
I had forgotten, until I went back recently to review the Corson Report, that the panel had anticipated the need to consider how restrictions on scientific communication would differ in an era in which the principal security threats did not emanate from the Soviet Union and Warsaw Treaty states. This observation was, however, simply noted toward the end of the report as a subject that the National Academy might wish to address in the future.
Of course, the fact that the threats we worry about today no longer derive from a monolithic adversary with considerable science and technology capabilities must necessarily alter the calculus of how we think about the problem. During the era of the Soviet Union, we faced an opponent that, because of the shortcomings of its science and technology infrastructure and its economic constraints, undertook a systematic and sustained effort to obtain scientific and technological information from the West. It did so by taking advantage of the openness of the western science and technology community. It sent agents to scientific meetings to search for specific information (or someone who could be co-opted to supply it); it placed supposed "students" on university campuses where they could gain access to leading-edge research; and it engaged in many other activities, both overt and covert. These efforts were well documented by the intelligence community, and some were fairly successful.
In 1981, in response to this growing threat, senior officials in the incoming Reagan administration began to call—loudly at times—for compartmentalizing sensitive research on university campuses and in the private sector and for excluding many foreign nationals from participation in such research. This development alarmed the leadership of the science and technology community, including university presidents. Shortly thereafter, the National Academy presidents set up the panel on scientific communication and national security that issued the Corson Report.
I dwell on this history to make a point: in the Soviet era, we had a technically sophisticated adversary that, if it succeeded in gaining access to sensitive research and analysis, would have been able to overcome the gap in fielded weapons systems between itself and nations of the North Atlantic Treaty Organization. This gap gave the West its technological dominance, which, in turn, maintained the strategic parity between the opposing sides in the Cold War despite the substantial numerical superiority of the Warsaw Treaty forces.
What the Corson Report pointed out, however, was that, with few exceptions, it was (and is) not individual widgets or weapons component technology that must be protected, but the knowledge base and technical know-how necessary to design and build them. This seemingly obvious but important observation applies to every major threat from weapons of mass destruction we face today, including that posed by nuclear, biological, and chemical weapons, and even more esoteric weapons such as those used in cyberwarfare.
The most immediate concern driving recent federal legislation and executive branch actions—including the enshrinement of the ambiguous term "sensitive homeland security information"—is the fear that al Qaeda and other terrorist groups may gain access to the knowledge and materials necessary to build crude, but nevertheless deadly, mass-casualty weapons for use against the United States or its interests or citizens abroad.
But that is not the only reason to worry about unrestricted communication of sensitive science and technology information. Let me name two others: the so-called proliferant states (especially Iran, Iraq, and North Korea) and China. Credible evidence exists that some proliferant states continue to seek information (and people) in the West to help them develop indigenous nuclear weapons and other weapons of mass destruction. And China, which is, of course, already a nuclear weapons state, is modernizing its military and possibly expanding its force projection capabilities.
I would argue, however, that both of these threats more closely resemble the old concerns about the Soviet science and technology acquisition effort. We know how to deal with such threats, and the research management procedures now in place are generally adequate to cope with the problem. But the number of Chinese nationals working and studying today in the U.S. science and technology sector is large indeed, and their presence could become a matter of concern if political and military relations with China deteriorate later in this decade.
The issue of terrorist acquisition of scientific information and know-how is of a different character from these other threats that have been with us, in one form or another, for the last quarter-century. As a general rule, terrorists do not need—nor, in all likelihood, can they readily make use of—massive volumes of basic scientific knowledge or advanced techniques.
In Soviet times, we worried about protecting the physics knowledge and engineering expertise needed to build smaller, faster computer chips, or the extraordinarily complex computer algorithms used to design the hot sections of high-bypass jet engines. Terrorists, however, are neither designing nor manufacturing weapons systems. They lack the economic resources, the personnel, and the physical infrastructure to accomplish this task.
What they are intent on—and apparently quite good at—is constructing (often in ingenious and unconventional ways) a small number of weapons of mass destruction, most often by acquiring details about their operational and design characteristics. But will further restrictions on the communication of scientific information or on the access by foreign students to the U.S. research system do anything significant to impede terrorist acquisition of weapons of mass destruction?
In my view, the principal area in which the acquisition of technical know-how could directly and substantially benefit terrorist organizations and proliferant states is biological science. Clearly, the communication of information that helps improve knowledge about dangerous pathogens, their effects, safe handling of them, and so on increases the chance that they can be made into weapons covertly on a small scale. It has been an informal rule of thumb since the Cold War that the narrower the gap between the acquisition of new scientific knowledge and efforts to embody that knowledge in technical applications, the greater the likelihood of the unintended transfer of potentially dangerous technology or technical expertise.
Biotechnology and biological warfare threats are of extraordinary concern for another reason as well: it does not require a huge investment in physical infrastructure or many highly trained researchers to achieve modest success. The experience of the Aum Shinrikyo cult in Japan is instructive. The Aum Shinrikyo was the first terrorist organization outside the United States (there was at least one inside as well) to attempt to acquire both chemical and biological weapons. After the arrest of the cult's leaders, which unfortunately did not occur until after a sarin gas attack on the Tokyo subway in 1995, and after some less successful (and not as well publicized) efforts to develop biological weapons, the authorities found and explored the Aum Shinrikyo research and development facility near Mount Fuji.
What they found was shocking: the cult had recruited to its ranks a small number of chemical engineers and life scientists who were at work developing and testing chemical and biological weapons. (Investigators even discovered subsequently that the Aum Shinrikyo had rented an abandoned sheep station in western Australia to test the weapons it had developed.) The entire undertaking, including the acquisition of equipment, precursor chemicals and pathogens, and so on was financed by the sect. But the key to its success was the recruitment of a small cadre of individuals with sufficient technical training and knowledge.
So what about the development of present-day principles for determining whether or not science and technology information should be kept (or made) secret for security reasons? Having observed and worked on the problem from within and outside the government, I have reached the following conclusions.
- Rational and well-conceived restrictions do remain necessary, but they can and must be applied to substantially fewer areas of scientific inquiry and technology development than in Cold War days. No rationale remains for a large, overreaching list of controlled items and subject areas.
- In fields such as biotechnology, the publication of what some would call the "recipes" for doing things at the laboratory bench level should be avoided in most cases. As the Corson panel noted, it is often difficult to transfer such know-how unless qualified scientists can gain hands-on experience at the bench level.
- Unfettered access to scientific knowledge on university campuses remains as important today as it was twenty years ago. On this point, the Corson panel surely had it right, and the dependence of the U.S. research system on foreign students, postdoctoral researchers, and faculty has only grown in the interim. But that does not mean that we cannot devise ways to be more vigilant about who is permitted to gain entry to our country and to our research facilities. The September 11 hijackers "hid in plain sight" in our communities before carrying out their deadly violence. Thus, sad to say, universities and private research enterprises must devote greater effort to reviewing the backgrounds of foreign nationals whom they admit for graduate training or hire in their laboratories. And the government will need to work even more closely with the universities and the private sector in determining who should be granted a visa for study or work in the United States.
- The areas of scientific knowledge and technological application that are immediately germane to the development of weapons of mass destruction are well known at this point. Because we are not dealing with an adversary that is capable of broadly vacuuming up knowledge or expertise, advances in many—perhaps most—disciplinary areas can be discussed and communicated with few or no restrictions. Unfortunately, however, those areas or subdisciplines of the life sciences associated with the development of biological weapons must continue to be subject to a different set of rules. As the Corson panel and other more recent studies recommended, work in such areas may best be undertaken at off-campus facilities, where the matter of excluding foreigners, when necessary, is perhaps more manageable.
- That said, the scientific enterprise depends on the rapid publication and dissemination (whether physical or virtual) of new results and ideas. As repeated studies have concluded, we will damage the very capability that has made us the world's leading techno-scientific power if we allow our new security concerns to impede this process. Nevertheless, a modest publication delay for purposes of security review may be appropriate in areas of the life sciences in which the rapid communication of research results may have direct application to the design of biological weapons, improve knowledge about handling dangerous pathogens, or help a terrorist organization or a proliferant state avoid costly dead-end lines of research or overcome other technical obstacles. For research not undertaken with federal funding, I can imagine such a review conducted voluntarily by a duly constituted body of the life sciences community.
- Fortunately, terrorist organizations continue to have difficulty purchasing so-called enabling technology, such as sophisticated laboratory measurement equipment, containment devices, and the like. Nevertheless, the U.S. government and equipment manufacturers also must remain vigilant regarding the end-user(s) of transferred technology.
Before concluding, I want to comment briefly on two other aspects of the problem. First, there is the legitimate question of what the United States can realistically expect to accomplish on its own. To state the obvious, the U.S. research system is not the only place where important life sciences research is carried out that may be of interest to terrorists and the agents of proliferant states. The European Union, Japan, and other advanced states have research infrastructures equally capable of producing and disseminating such information. Thus, despite some highly regrettable unilateral actions taken by the U.S. government in the last year, limitations on the communication of sensitive science and technology information can work only if they are adopted multilaterally. This matter requires urgent attention, and perhaps an international meeting of experts.
Finally, since Cold War days, universities have, quite frankly, tried to have it both ways. They have sought large amounts of public funding to conduct basic and applied research while resisting periodic calls for the adoption of "codes of conduct" and other means to address concerns about how foreign nationals use the advanced training and knowledge they acquire in the United States when they return to their own countries. This apparent contradiction has continued to perplex me over the years. It would seem that today, more than ever, faculty, research staff, and administrators who manage work in sensitive research areas must be vigilant about the motivations and intentions of their students and co-workers. They should strive to impart a value structure that emphasizes the positive role of science and technology in advancing the interests and needs of humanity and guards against its use to cause mass casualties and human suffering.
Mitchel Wallerstein is dean of the Maxwell School of Citizenship and Public Affairs at Syracuse University.