American academics know institutional review boards (IRBs) as local committees that make ethical judgments to protect human subjects in research. In making these ethical judgments, IRBs are also required to follow procedures mandated by U.S. federal regulations. Toward the end of the 20th century, after decades of arm's-length oversight, regulators launched a wave of enforcement designed to hold IRBs accountable.

This article uses the federal crackdown on IRBs to explore the unintended consequences of a particular style of governance, which I refer to as decentered accountability. I adapt the term “decentered” from legal scholarship (Black 2001) to refer to well-known centrifugal tendencies of American governance, such as limited federal bureaucratic capacity, fragmented authority, and a preference for delegating state functions to nongovernmental entities (Balogh 2015; Campbell and Morgan 2011; Clemens 2006; Mayrl and Quinn 2017; Quinn 2019; Skocpol and Finegold 1982; Skowronek 1982). Rather than building strong, centralized policy institutions, the American state tends to achieve its objectives obliquely and indirectly, through a “tangle of indirect incentives, cross-cutting regulations, overlapping jurisdictions, delegated responsibility, and diffuse accountability” (Clemens 2006:187). By summarizing these structures and strategies, the concept of “decentered governance” can fill the conceptual hole once occupied by the notion of the “weak state” (Novak 2008).

By enlisting the support of nongovernmental actors, American policymakers can achieve their objectives without increasing the state’s central bureaucratic capacity. Yet organizations may respond to these workarounds in unintended ways. One unintended consequence, noted by organizational sociologists studying employment law, is symbolic compliance. The Civil Rights Act prohibited employment discrimination, but failed to create a strong regulatory agency to define and oversee the mandate. Instead, the law empowered private litigants to sue their employers. By harnessing the power of courts and private lawsuits, the law unintentionally enabled symbolic compliance. In discrimination lawsuits, organizations were able to placate judges by erecting ceremonial structures, thus protecting themselves against the imposition of more disruptive measures (Edelman 1992; Edelman et al. 2011; Edelman, Uggen, and Erlanger 1999; Kalev and Dobbin 2006; Kalev, Dobbin, and Kelly 2006; Krieger, Best, and Edelman 2015).

However, although some decentered governance regimes in the United States rely heavily on private lawsuits, others are defined by formalized accountability: bureaucratic systems of assessment designed to compel adherence to predetermined rules (Bardach and Kagan 2017; Power 1997; Strathern 2003). The imposition of accountability by decentered governance regimes may trigger its own distinctive set of unintended consequences, which have received limited attention in the sociological literature.

The federal crackdown analyzed in this article presents an exemplary case of decentered accountability in action. Oversight of the IRB system was fragmented and diffuse, led by an agency possessing limited resources and authority (Greenberg 2007; National Bioethics Advisory Commission 2001). In spite of these limitations, regulators successfully used audits of documentation—a quintessential accountability mechanism—to impose discipline (Borror et al. 2003; Burris and Welsh 2007). Private lawsuits played no part in enforcement.

What happens when the unyielding demands of accountability are imposed by decentered regulatory institutions? I analyze biomedical research institutions’ response to the federal IRB crackdown to make two main arguments about the particular hazards of decentered accountability in American governance. First, I argue that its most immediate hazard is not symbolic compliance, but a disproportionate focus on technical compliance: the bureaucratic tasks—completing forms, compiling documents, generating metrics, and so on—through which organizations make themselves accountable (Heimer and Gazley 2012). One aspect of decentered governance that exacerbates accountability burdens is federal bureaucratic incapacity (Skocpol and Finegold 1982). Comparative legal scholars argue that, because of their constrained authority and discretion, U.S. regulators often embrace a formalistic style of enforcement that emphasizes detailed, standardized rules (Bardach and Kagan 2017; Kagan 2019). The result is a distinctly rigid variety of accountability, unsoftened by flexibility in interpretation: what Bardach and Kagan (2017) refer to as “going by the book.”

This article traces how formalistic enforcement in the IRB regime triggered a surge in technical compliance. It also identifies two additional mechanisms through which decentered governance inflated the burdens of accountability in the IRB system. On the one hand, there were ambiguous mandates, which regulators lacked the capacity to clarify, leading IRBs to overcomply with regulations (Gunningham, Kagan, and Thornton 2004; Wu and Wirkkala 2009). On the other hand, there was fragmented authority, which multiplied technical compliance obligations and created additional layers of complexity and disruption. Fueled by the combined forces of formalistic enforcement, ambiguity, and fragmented authority, IRBs adopted more complex rules, required more copious documentation, and suffered from an epidemic of delayed reviews. A growing number of critics charged that the system had become profoundly dysfunctional (see Bledsoe et al. 2007; Burris and Welsh 2007; Heimer and Petty 2010). “IRBs’ blizzard of paperwork,” wrote one group of researchers, “is getting in the way of their fundamental mission: to protect the dignity and well-being of human subjects” (Gunsalus et al. 2007:620).

Second, I argue that, in the longer term, decentered accountability can make the cost of technical compliance unsustainable, fueling the emergence of commercial industries dedicated to managing these costs. This is exemplified by the rise of “independent IRBs”: for-profit companies that conduct ethics reviews for a fee. In the wake of the federal crackdown, research universities increasingly outsourced biomedical studies to for-profit IRBs, attracted by their ability to minimize red tape and its associated delays. Within American regulation, there appear to be multiple similar instances of high accountability costs catalyzing the growth of specialized vendors peddling compliance with efficiency.

Decentering in American Governance

As conceived by British legal scholar Julia Black, “decentering” refers to regulatory strategies characterized by “complexity, fragmentation, interdependencies, ungovernability, and the rejection of a clear distinction between public and private” (Black 2002:254). The term evokes new governance forms that have emerged across the wealthy industrialized world (Carrigan and Coglianese 2011; Parker and Nielsen 2009), in which top-down state controls are being supplemented, or supplanted, by diffuse “webs of influence” (Black 2001:103).

Although coined to describe a contemporary worldwide trend, “decentered” also evokes an enduring “style of American statecraft … [characterized] by extensive delegation and formal complexity” (Mayrl and Quinn 2017:58). In this article, I use “decentered” to summarize two related features of this American way of governance. One is a pervasive policy strategy, characterized by the persistent delegation of functions to nongovernmental entities (Balogh 2015; Campbell and Morgan 2011; Clemens 2006; Hacker 2002; Mayrl and Quinn 2016, 2017; Quinn 2019). In Europe and other wealthy industrialized regions, decentered regulatory strategies may represent recent innovations (Black 2001; Parker and Nielsen 2009). In the United States, however, decentered strategies are not “new governance” at all, but a time-honored “antibureaucratic strategy of state-building” (Skocpol and Finegold 1982:262).

The second dimension of decentering, which extends the term’s original meaning, refers to the sprawling structure of American state institutions. The growth of the administrative state in the U.S. was constrained by constitutional separation of powers, federalist dispersal of authority, and decentralist anti-state ideologies (Hamilton and Sutton 1989; Short 2011; Skocpol and Finegold 1982; Skowronek 1982). Born and raised in these inhospitable institutional circumstances, the U.S. federal bureaucracy is today observed to be less capable, less comprehensive, and less coherent than its counterparts abroad (Campbell and Morgan 2011; Rourke 2020; Skocpol and Finegold 1982). There is a direct relationship between central state incapacity and decentered strategies: American policymakers pursue these strategies when they are unwilling or unable to expand the size and power of the federal bureaucracy (Balogh 2015; Campbell and Morgan 2011; Clemens 2006; Hacker 2002; Mayrl and Quinn 2016, 2017; Quinn 2019).

The concept of decentered governance provides a useful conceptual umbrella for related dimensions of American policy that were, at one time, often summarized under variations on the term “weak state.” This term has since fallen out of favor, as scholars have pointed to overwhelming evidence of the effectiveness of the American state (King and Lieberman 2017; Novak 2008). The demise of the “weak state” label has created a linguistic gap, into which political sociologists and others have inserted partial synonyms: the “hidden state” (Howard 1999), the “associational state” (Balogh 2015), and the “Rube Goldberg state” (Clemens 2006), among others. “Decentered governance” can fill this wider conceptual gap in a way that implies not the absence of state power, but its dispersal.

The American regulatory state exemplifies decentered governance. Its apparatus is fragmented across a patchwork of overlapping jurisdictions (Schiller 2016), and its authority is sharply constrained—by Congress, by the courts, and by arduous procedures imposed by regulatory foes (Axelrad, Kagan, and others 2000; Kagan 2007; McGarity 1991). Agencies depend for their resources on the fluctuating support of presidential administrations and Congress. This makes them perpetually vulnerable to having their capacity reduced by opponents of regulation, as occurred repeatedly during the Reagan and Trump administrations (McGarity 1986; Rein and Tran 2017).

To achieve regulatory objectives in the face of these challenging circumstances, U.S. lawmakers and regulators have long made ample use of decentered strategies. A well-known example is the use of private lawsuits to define and enforce compliance (Farhang 2010; Melnick 2005), as exemplified by the case of equal employment opportunity in civil rights law (Dobbin 2009). Yet reliance on private litigation is only one item on a menu of decentered governance strategies prominent in American regulation. Another common strategy is to rely on regulatory intermediaries (Abbott, Levi-Faur, and Snidal 2017): private organizations, such as audit firms, accreditors, and certifiers, that assume governance functions and that are funded, not by taxpayers, but by regulated organizations (Fransen and LeBaron 2019; Lytton 2017). Still another decentered strategy is to delegate governance functions to regulated organizations themselves, in systems of “enforced self-regulation” (Ayres and Braithwaite 1992; Coglianese and Lazer 2003). Enforced self-regulation offloads much of the cost of routine oversight to the regulated organizations and their specialized compliance offices (Bamberger and Mulligan 2015; Nelson 2021).

The Hazards of Decentered Accountability

This article is about what happens when decentered regulatory regimes impose the discipline of accountability. Accountability mechanisms can be quite effective at achieving regulatory objectives. Studies show that enforcement that holds organizations to predetermined standards can improve compliance outcomes (Kagan, Gunningham, and Thornton 2003; Short and Toffel 2010). Regulatory audits appear to be more effective than lawsuits at improving diversity in hiring (Dobbin, Schrage, and Kalev 2015; Kalev and Dobbin 2006). In this study, I do not address the efficacy of accountability for achieving regulatory objectives: the “outcome” in this particular case—ethical research—is notoriously difficult to measure (Abbott and Grady 2011).

Instead, the focus here is on the perverse side-effects of decentered accountability, which have been largely overlooked in the sociological literature. An extensive literature on equal employment opportunity suggests that decentered regimes are prone to the hazard of symbolic compliance. The exemplary case is U.S. law prohibiting employment discrimination. Although the agency charged with combating discrimination was “toothless, unorganized, and broke” (Pedriana and Stryker 2004:747), civil rights law enabled private lawsuits (Farhang 2010). Rather than relying on its scant resources and authority, the agency and its civil society allies used private litigation to pressure employers to comply and to expand the definition of discrimination (Dobbin 2009; Pedriana and Stryker 2004). Judges in anti-discrimination lawsuits accepted evidence of employers’ preferred practices, such as diversity training programs, as indicators of substantive compliance. Employer defendants were therefore able to placate external authorities with symbolic compliance, or “legal structures designed to signal attention to law and thus confer legitimacy” (Edelman et al. 2011; Krieger, Best, and Edelman 2015:846). Symbolic compliance, in this account, is a buffering strategy through which organizations protect themselves from external pressures (Meyer and Rowan 1977).

Accountability, however, is designed to prevent mere symbolic compliance by penetrating and disrupting organizations’ most valued activities, or “technical core” (Meyer and Rowan 1977; Spillane, Parise, and Sherer 2011). Rather than being rewarded for following their own preferred practices, organizations are held to standards that were predefined by external authorities. Organizations must continuously narrate their internal workings to these authorities, who assess performance, administer penalties, and make demands for reform. Organizations may respond to accountability by emphasizing the production of performance indicators, rather than the goals these indicators were meant to promote (Braithwaite, Makkai, and Braithwaite 2007), which might be described as a form of “means-ends decoupling” (Bromley and Powell 2012). Yet the prioritization of means over ends is not an organizational strategy as much as it is a systemic feature of accountability, which is organized around the assessment of means rather than the achievement of ultimate goals (Power 1997).

Studies of accountability argue that this imperative has unintended consequences that are distinct from the symbolic compliance observed in new institutional studies (Espeland and Sauder 2016; Power 1996, 1997; Reich 2012; Strathern 2003). In sociology, a number of studies have focused on the consequences for individuals, such as epistemic distress (Hallett 2010), self-discipline (Sauder and Espeland 2009), and the internalization of accountability standards (Reich 2012). In this study, I am more concerned with the broader impact of accountability on organizations, and specifically with the cost of accountability production. For accountability is never definitively achieved, but must be continuously produced: metrics must be generated, reports and forms completed, routines followed, audits endured, and so on. Following Heimer and Gazley (2012), I refer to such accountability production activities as “technical compliance.” Technical compliance may impose two kinds of costs on the organization: the indirect cost of disrupting valuable core activities, and the financial expense of paying for specialized accountability work.

Decentered governance regimes in the United States may be especially prone to high technical compliance costs. Comparative legal scholars suggest that, because U.S. regulators are mistrusted and constrained by Congress and the courts, they are often held to a standardizing logic, which requires them to uphold “uniform, detailed, and stringent rules” (Bardach and Kagan 2017:66). In this way, decentered policy institutions encourage a hyper-formalized version of accountability that is unsoftened by flexibility in interpretation. Standardization is passed down to regulated organizations as a meticulous “by the book” approach that punishes organizations for failing to attend closely to the minutiae of technical compliance (see Axelrad et al. 2000; Bardach and Kagan 2017; Kagan 2000, 2019). Even under systems of enforced self-regulation, in which organizations are expected to develop their own policies and procedures, organizations may in practice adhere closely to regulators’ detailed guidelines (Bardach and Kagan 2017:235).

As a result, technical compliance with U.S. regulations can be unusually costly. International corporations operating in different national environments report that U.S. regulatory paperwork is more time-consuming than that of other governments (Axelrad et al. 2000). A comparative study of nursing home regulation in the U.S., U.K., and Australia finds that American facilities devote far more resources to the production of standardized compliance documentation (Braithwaite et al. 2007). Nursing homes emphasize documentation because regulators focus on documentation in audits and enforcement actions. Regulators, in turn, emphasize documentation because they lack the authority to make discretionary judgments of more substantive outcomes (Bardach and Kagan 2017; Kagan 2019).

The underlying logic of this regulatory style broadly parallels Porter’s (1995) account of “mechanical objectivity” being used to curb professional discretion. However, whereas Porter’s account emphasizes quantification—the distilling of performance into numbers to facilitate external control—some of the most burdensome forms of standardization reported in American regulation are those that cannot be quantified. More specifically, significant burdens appear to be created by procedural compliance: rules prescribing sequences of actions to be performed and then assessed in audits of comprehensive records. The regulated organization experiences procedural rules as an obligation to continuously mass-produce auditable records demonstrating that each procedure was followed to the letter in every relevant case. For example, workers in American nursing homes are required to record every time they turn a resident suffering from pressure sores—among a host of other everyday procedures—to create a meticulous paper trail to be presented to auditing regulators (Braithwaite et al. 2007:232). Guided by the motto, “if it’s not recorded, it didn’t happen,” nursing homes must devote large amounts of staff time and money—time and money that might otherwise be devoted to patient care—to the ongoing production of compliance documentation (Braithwaite et al. 2007:54).

Background and Research Methods

Passed by the U.S. Congress in 1974, the National Research Act authorized regulations to prevent ethical abuses in research. The law was a response to reports of the horrifying biomedical research scandals of the 1970s, the best known of which was the Tuskegee syphilis study. Importantly, the law did not contain a clause enabling private litigation, and lawsuits would not play a role in its evolving interpretation. The regulations required that federally funded research involving human subjects be reviewed by local committees, which became known as Institutional Review Boards (IRBs) (Frankel 1976; Stark 2012).

The IRB system was delegated and diffuse, exemplifying decentered policy structures and strategies. Local boards were given final authority over the ethics of research studies: there was no higher body to establish precedents or to receive appeals. Liberal lawmakers had tried to authorize a more centralized system, but were thwarted by the combined opposition of the NIH, conservatives, and the biomedical research community (Frankel 1976; Halpern 2008). Federal authority over the system was fragmented (see Figure 1). The original agency overseeing IRBs was the Office for Protection from Research Risks (OPRR), charged with regulating research funded by NIH, and later—with the promulgation of revised regulations known as the “Common Rule”—the research of other federal agencies that signed on to the Rule. In 2000, this agency was reorganized, relocated, and renamed the Office for Human Research Protections (OHRP). In addition to OHRP, the U.S. Food and Drug Administration (FDA) oversaw a similar (but not identical) set of regulations for privately-sponsored studies, under the jurisdiction of three separate offices dealing, respectively, with research on drugs, devices, and biologics (United States General Accounting Office 1996).

Figure 1. U.S. Regulatory Framework for Protecting Human Research Subjects, 2001. Adapted from National Bioethics Advisory Commission (2001).

The agency charged with overseeing the Common Rule was small and underfunded (Greenberg 2007:133–34). It lacked authority to set official precedents and had no say over ethical decisions, which were entirely left to local boards. Instead, the agency’s main function was to hold research institutions accountable for compliance with prescriptive, mostly procedural, rules—especially regarding how IRB decisions were to be made, and by whom. Following a logic of enforced self-regulation, IRBs were also required to develop and follow their own local policies and procedures. To allow regulators to assess compliance with both federal and local rules, IRBs were expected to maintain comprehensive documentation of their activities (McCarthy 2001).

Until the second half of the 1990s, however, these accountability structures had little impact. Under the leadership of a director who favored an educational over a punitive strategy, the agency conducted little enforcement (McCarthy 2001). IRBs’ technical compliance obligations, although stated in the regulations, were easily overlooked by boards run by faculty volunteers, more concerned with research ethics than with following the letter of the law (Babb 2020). The impact of federal regulations would not be felt until the late 1990s, when a fresh outbreak of biomedical research scandals triggered an unprecedented wave of federal enforcement.

This article analyzes how biomedical research institutions responded to this wave of enforcement, which brought the discipline of accountability into the highly decentered IRB system. The period of analysis begins in the late 1990s and concludes in 2018, the year in which new federal IRB regulations were published. The mode of analysis is the “theory-building” variant of “process-tracing”—using the structured analysis of a single case to build a plausible argument for more general causal processes (Bennett and Checkel 2015).

The analysis presented here draws on a larger qualitative study based on both interview data and a wide variety of documentary sources (Babb 2020). Among the documentary sources are articles that appeared in the trade journal IRB Advisor (“your practical guide to institutional review board management”) between 2001 and 2018. To understand how individuals experienced and acted upon large-scale social forces, I also conducted qualitative interviews with informants working in and around the IRB world in the 1990s and early 2000s at research institutions across the country.1 I refer to my informants by pseudonym. Both documents and interviews were coded and analyzed inductively with the assistance of qualitative data analysis software.

It is important to acknowledge, at the outset, two limitations of this case study. First, the focus here is on the regulation of IRBs at research institutions: universities and academic medical centers, conducting studies mostly (although by no means exclusively) under the jurisdiction of the agency known since 2000 as the Office for Human Research Protections (OHRP). There exists a separate ecosystem of organizations providing research services to private biopharmaceutical companies, answering almost exclusively to Food and Drug Administration (FDA) regulations. These organizations have evolved along a different trajectory (see Fisher 2008; Mirowski and Van Horn 2005), and are touched on only briefly in this article.

Second, my analysis focuses on the effects of accountability on biomedical institutions and researchers. It was primarily to address biomedical misdeeds that the first IRB regulations were adopted, and biomedical developments have primarily driven the system’s subsequent evolution. Readers of this article may be most familiar with the system’s considerable collateral impact on social and humanities research, which was swept along as regulators responded to biomedical incidents (Babb, Birk, and Carfagna 2017; Bledsoe et al. 2007; Katz 2007; Schrag 2010). Because this subplot in the IRB saga is complex in its own right, and only tangential to my argument, I do not narrate it here.

Findings

In 1996, a college student named Nicole Wan died in a study on the effects of smoking and air pollution. The young volunteer had received a lethal dose of lidocaine, and it was soon discovered that the study’s IRB protocol had failed to specify the maximum dosage (IRB Advisor 2002; Rosenthal 1996). This shocking death was just one in a series of biomedical research scandals that unfolded over the course of the 1990s (Hilts 1994; Shalala 2000; Stolberg 2000).

Under pressure from lawmakers and the public, regulators launched an attention-grabbing enforcement campaign, using tools well-suited to their limited resources. Rather than making expensive site visits, regulators almost always conducted investigations remotely, through an assessment of research institutions’ documentation (Borror et al. 2003).2 The lead agency in the crackdown was OHRP, which, although small and underfunded, had the power to suspend research institutions’ federally funded research (Koski 2003).

The following two sections trace the unfolding consequences of federal discipline for research institutions. The first section examines how federal discipline triggered the infamous “blizzard of paperwork”—a surge in IRBs’ technical compliance activities—and shows how the refraction of accountability through decentered institutions amplified these burdens. The second section shows how escalating accountability costs led research institutions to reorganize the labor of technical compliance for efficiency—most notably, by outsourcing ethics reviews to for-profit IRBs.

The Inflation of Technical Compliance

In May of 1999, IRB regulators ordered the Duke University Medical Center to suspend $140 million in federally funded research. The Duke shutdown was both well-publicized and shocking; at the time, Duke was the highest profile institution ever to be penalized in such a way (Wadman 1999). Other institutions to have their research suspended during this period would include the University of Alabama at Birmingham, the University of Pennsylvania, Virginia Commonwealth University, and Johns Hopkins (Brainard 2000; Crigger 2001). Although the shutdowns garnered the most negative publicity, many other institutions had their reputations damaged by federal enforcement letters outlining their compliance failures. Between 1999 and 2002, 155 institutions were cited for such failures in 269 compliance oversight determination letters (Borror et al. 2003).

The federal crackdown immediately captured the attention of biomedical administrators and researchers around the country, who were “worried, even panicked, that the same thing could happen at their institutions” (Brainard 2000). Regulators had sent an unmistakable message, at the heart of which was that IRBs were responsible both for meticulously complying with procedural rules and for meticulously documenting their compliance. In enforcement letters, research institutions were cited for failing to devise IRB policies and procedures that were sufficiently detailed; for failing to follow the policies they had devised; and for failing to adhere to the procedural rules mandated by the regulations (Borror et al. 2003; Burris and Welsh 2007).

Above all, research institutions were penalized for neglecting to document that procedures had been followed (Borror et al. 2003; Burris and Welsh 2007). Regulators did not have the authority to second-guess local boards’ ethical decisions; their job was only to make sure that boards correctly carried out the series of actions prescribed by the rules; and the only way they could make such determinations was by scrutinizing written records. The oft-repeated motto of regulators during the crackdown was: “if it wasn’t documented, it didn’t happen” (OPRR Director Gary Ellis, cited in Greenberg 2007:148). Documentation needed to include not only records of all IRB decisions, but also accounts of the process through which each decision was reached. To take just one of many examples: in making an “exemption determination,” an IRB decision-maker was required to consider into which of six eligible categories a study fell. In a regulatory audit, an IRB would need to produce a record documenting under which category of 45 C.F.R. §46.101(b) the protocol (and all other such protocols) had been exempted, by whom, and why.

In response to federal discipline, therefore, prescribed routines were followed more diligently and local policies and procedures were lengthened. There was an explosive growth in paperwork and recordkeeping. At a time when few IRB systems were computerized, technical compliance assumed the tangible form of paper. A science journalist recalled seeing, “[a]t several universities…thousands of feet of shelf space occupied by thousands upon thousands of folders stuffed with documentation required by the IRBs” (Greenberg 2007:132).

These labors of procedure and documentation were exacerbated by the ambiguity of regulators’ expectations, which created a strong incentive for overcompliance. By the beginning of the 2000s, it was obvious that IRBs were not only following the rules but exceeding them by a considerable margin. As one informant colloquially recalled, “the pendulum swung to being conservative, to cover your butt” (Sheila, compliance office director, research university). Former OHRP head Greg Koski criticized a surge in “reactive hyperprotectionism,” or “inappropriately cautious interpretations and practices that have unnecessarily impeded research without enhancing protections for the participants” (Koski 2003:5). Two years later, Koski’s successor at OHRP, Bernard Schwetz, similarly worried that “institutions treat guidance as regulations, and institute new rules internally that are burdensome and not required” (Schwetz interview quoted in Burris and Welsh 2007).

Although federal authorities scolded research institutions for going overboard, regulators were unintentionally complicit in producing this behavior; for, by failing to clarify the rules, they created an incentive for overcompliance. The regulations contained numerous grey areas, such as what constituted a “vulnerable subject” or “adequate protections.” IRBs relied on regulators to clarify the rules; but regulators’ ability to clarify was impeded by their lack of authority to do so, and—especially after deregulation during the Reagan administration—a chronic shortage of resources and staff (IRB Advisor 2001, 2004b; McCarthy 2001). Emblematic of this problem was an obsolete “IRB guidebook” from 1993, which remained on OHRP’s website more than a decade later, with the cryptic caveat:

Developments over the intervening years have made portions of the Guidebook information obsolete, while portions of the information remain valid. There is no errata document to indicate which information has been superseded. OHRP cautions users to verify the current validity of any Guidebook information before relying on the information in a program of human subjects protection (United States Office for Human Research Protections n.d., emphasis added).

The site provided no additional information about how an IRB could verify this information.

When research institutions pled for regulatory guidance, the response could be delayed for months or even years (IRB Advisor 2001). When guidance did arrive, it frequently failed to clear up the confusion. Many uncertainties arose around whether and how to apply standard regulatory requirements to unfunded social and humanities research (Bledsoe et al. 2007; Schrag 2010). For example, regulators stated in 2003 that oral history did not need IRB review. But only months later, the agency published a list of examples of oral history projects that might need to be reviewed, causing widespread confusion (IRB Advisor 2004b). No further explanation was offered until new regulations were issued in 2018. The safest option, in the meantime, was overcompliance: to put all oral history studies through the same standard IRB review process (Schrag 2010:156).

One tangible result of overcompliance was the exploding size of application forms and their supporting paperwork. “We became very fearful of missing anything, so we wanted to capture everything,” one IRB administrator recalled (Diane, IRB administrator, research university). “I think we were all afraid we were gonna miss something,” Elizabeth similarly explained. “So you would ask all these questions” (IRB administrator, research university). There was a dramatic increase in the length of informed consent documents. These “used to be in the range of two or three pages,” observed OHRP director Bernard Schwetz in 2004. “Now the documents are up to 10 pages or more. … There are informed consent documents over 100 pages and … the reason you have an additional 90 pages … is for protecting institutions, not subjects” (IRB Advisor 2004a). The inflation of consent documents, although mostly a response to fear of reprisal from regulators, was also driven by fear of lawsuits, which ticked up temporarily during the crackdown (Mello, Studdert, and Brennan 2003). The formality of these documents made them ideal courtroom evidence and, as such, a point of vulnerability for research institutions being sued for medical malpractice in the context of research studies (Halpern 2008). This was ironic, given that consent documents had originally been conceived as a means of protecting sponsors and institutions from legal liability (Stark 2012).

Technical compliance burdens were also amplified by the IRB system’s spectacularly fragmented authority. On the one hand, there were the burdens engendered by the wholesale delegation of ethical authority to thousands of local IRBs. There was no higher body to establish precedents or receive appeals: each individual board had final say over the ethics of local research studies within its jurisdiction and relied on its own local precedents (Stark 2012). By the mid-1990s, however, biomedical studies were conducted mostly as “multi-site” studies across different institutions (Fisher 2008; Mirowski and Van Horn 2005; Rettig 2000). In this new research context, IRBs were “squandering precious resources when dozens or hundreds of them must review all aspects of a single, multi-site protocol,” as the National Bioethics Advisory Commission (NBAC) observed (NBAC 2001: 14). When multiple boards arrived at disparate decisions, their judgments had to be laboriously reconciled (Infectious Diseases Society of America 2009). The red tape and disruption created by multi-site research in the decentered IRB system would soon reach crisis proportions, as we will see below.

On the other hand, accountability burdens were refracted across multiple regulators. A single biomedical IRB could be held accountable to multiple federal agencies, each with its own similar, but not identical, procedural and documentation requirements. For example, whereas under the Common Rule boards needed to report “unanticipated problems,” FDA regulations stipulated the reporting of “adverse events,” which had a somewhat different definition and requirements (IRB Advisor 2004c). During the era of peak enforcement, additional mandates—each run out of a separate regulatory office—were added to IRBs’ workload, such as rules implementing the Health Insurance Portability and Accountability Act (HIPAA) (IRB Advisor 2003d) and rules governing conflicts of interest (IRB Advisor 2003g). Ensuring that these various rules were understood and scrupulously adhered to was a job requiring time and arcane knowledge—calling not for the attention of faculty volunteers, but for that of full-time administrators with specialized regulatory expertise.

Perhaps most consequentially, in 2001 a private regulatory authority was added, with the founding of the Association for the Accreditation of Human Research Protection Programs (AAHRPP) (pronounced “ay-harp”). Federal regulators supported the establishment of AAHRPP to achieve closer oversight without straining limited agency resources. Accreditation soon became the norm among biomedical research institutions (Halpern 2008). By securing the AAHRPP seal of approval, a research institution both improved its reputation and lowered its chances of being subjected to a federal audit. Yet the benefits of accreditation came with a high price tag—in the form not only of high membership fees, but also of a formidable array of technical compliance duties. To apply for accreditation, an IRB office was required to conduct a detailed self-assessment, producing exhaustive documentation of its activities; host a multi-day site visit; and respond to a report from the accreditor detailing required changes (Halpern 2008; IRB Advisor 2003e, 2003b, 2003f, 2008). Moreover, to maintain its accredited status, a research institution would commit to a host of ongoing “continuous quality improvement” tasks, such as: regularly assessing and updating policies and procedures; defining and redefining performance metrics; keeping comprehensive tracking logs; and implementing checklists to guide all levels of IRB decision-making (IRB Advisor 2009, 2010a, 2010d, 2014a).

Controlling the Cost of Technical Compliance

At the start of the new millennium, IRBs were making unprecedented efforts to meet and even exceed their technical compliance obligations. For research institutions, these efforts were beginning to generate high costs, of two different sorts: the financial expense of running IRB offices; and the opportunity cost of disrupting research institutions’ most valued activities. Research institutions responded with growing efforts to comply more efficiently, by internally reorganizing technical compliance production, and by outsourcing the entire process to for-profit IRBs.

Research institutions had by then made large financial investments in their IRB offices. “[D]ramatic changes have been made,” observed former chief regulator Greg Koski in a 2003 editorial. “[I]nstitutions have in many cases doubled and tripled their commitments of resources to their human subjects protection programs” (Koski 2003:5). A survey of academic medical centers in 2007 found that the median cost of running an IRB was $781,224 annually (about $1.3 million in 2024 dollars). The biggest line item, accounting for about 60 percent of the total, was IRB staff salaries. That same year, a survey of IRB administrators found that more than half of respondents worked in offices with three or more full-time staff members, and 15 percent in offices with at least ten (Public Responsibility in Medicine and Research 2007).

The duties of these staff members were focused almost exclusively on managing the technical demands of accountability. This preoccupation with technical compliance was evident in the Certified IRB Professional (CIP) examination. In one practice CIP exam, 80 percent of questions tested knowledge of regulatory minutiae (Public Responsibility in Medicine and Research [PRIM&R] 2016). Some representative questions included:

  • Under the Common Rule, how long must an IRB retain its records of studies to be in compliance with federal regulations?

  • The Protection of Pupil Rights Amendment (PPRA) requires written parental consent for …

  • Research involving greater than minimal risk, but presenting the prospect of direct benefit to the individual child subject, may be approved by an IRB only if …

Increasingly, the workers wielding such knowledge saw themselves as skilled professionals, paid for providing an essential service. “Employers are looking for IRB professionals who will assure that research conducted within the institution is ethical and regulatorily unassailable,” explained an IRB administrator in 2003 (IRB Advisor 2003a).

In addition to inflating the financial cost of running IRB offices, technical compliance became increasingly disruptive to the work of researchers. Accountability duties could not be confined to the IRB office but spilled over to occupy a growing amount of investigators’ time and attention. Just as IRBs labored to make their procedures visible to regulators, so too were investigators laboring to account for their own activities, in formal documents—such as protocols, amendments, continuing reviews, and consent forms—to be prospectively reviewed by IRBs, and (if necessary) retrospectively audited by regulators.

As IRBs scrutinized this lengthening documentation with greater care, there was an epidemic of delayed reviews. A nationwide survey found that IRB red tape was among the top regulatory burdens reported by sponsored investigators (Decker et al. 2007). Such disruptions were most evident in the review of multi-site research. The problem of reconciling the divergent decisions of multiple boards became an acute crisis. According to one report, “a [multi-site] tuberculosis study…required a median of 30 [hours] of staff time,” and “the median times to approval for multicenter protocols ranged from 1.5 to 15 months” (Infectious Diseases Society of America 2009:330–1).

In the face of high costs and increasing disruptions, a consensus emerged—among investigators, research administrators, sponsors, accreditors, and even regulators—that IRB compliance needed to be made more efficient. “I think IRBs have spent a lot of time in the last few years trying to … have a program that’s compliant with the regulations,” explained the accreditation agency’s director in 2005. “[Now] they can begin to address questions around efficiency” (IRB Advisor 2005). Efficiency rhetoric resonates with widely-held social norms, and therefore has symbolic value for American compliance programs (Dobbin and Sutton 1998; Edelman, Uggen, and Erlanger 1999). In the IRB world, however, there were powerful pressures to actually achieve greater efficiency, driven by the unsustainably high cost of accountability production.

Research institutions addressed these pressures by reorganizing their technical compliance production processes to comply more quickly, less disruptively, and more cost-effectively, borrowing strategies from industrial management. To maximize speed and guarantee uniformity of output, IRB tasks were thoroughly routinized (Leidner 1993). Forms, templates, and checklists were introduced to save time, standardize outputs, and ensure that fussy regulatory procedures were followed and duly recorded (IRB Advisor 2013, 2014d, 2014b). Larger offices introduced a systematic bureaucratic division of labor, complete with entry-level positions and job ladders. “[We] create[d] three levels of job descriptions that started from basic coordinator and moved through intermediate and then a senior position,” explained a hospital administrator (IRB Advisor 2011). “[O]nce a protocol came in, [it was] treated like a car on an assembly line in Detroit,” explained Craig, who was hired by a large academic medical center to improve its IRB functions (associate, compliance consulting firm).

In line with the assembly-line metaphor, IRB offices introduced automation software. Electronic “protocol management” systems not only improved speed and consistency, but also saved labor—for example, by automatically forwarding completed protocols to different levels of review. Such automation helped control the burgeoning financial cost of running IRB offices: a number of offices reported that electronic systems had allowed them to eliminate lower-level staff positions (IRB Advisor 2003c, 2010b, 2014e).

In addition to reorganizing compliance internally, research institutions increasingly outsourced biomedical IRB reviews to for-profit compliance service providers. Known as “independent” or “commercial” IRBs, these firms charged for reviews, and hired faculty members to serve on boards on an as-needed basis. For decades, they had specialized in commercial studies, typically conducted across multiple non-academic sites (such as clinics and community hospitals), and regulated by the FDA. Because these sites did not typically have IRBs of their own, a loophole in FDA guidance allowed them to hire external boards (Heath 2000).

Independent IRBs were masters of technical compliance production. Honed by market competition to satisfy the demands of their clients, they had infrastructure that traditional boards could not replicate: large, specialized staffs, teams of regulatory attorneys, cutting-edge software, and extensive networks of reviewers-for-hire (Kaplan 2016). They could produce IRB decisions backed by meticulous, auditable documentation with minimum hassle for investigators, and in record time—a third of the time of their traditional IRB counterparts (Rosenberg 2014). Independent IRB review was “a little bit like a factory. It’s very efficient” (Frances, executive officer, independent IRB).

Even more importantly, independent IRBs could rise above the intractable problem of multi-site studies: they were not tied to any particular institution and had decades of expertise in reviewing commercial clinical trials spanning dozens or even hundreds of locations (Fisher 2008). A single independent IRB could expeditiously make a single centralized decision in a multisite study that would take traditional boards months of hard labor to reconcile.

Until the federal crackdown in the late 1990s, independent IRBs catered almost exclusively to commercial biopharmaceutical companies and their contractors. With the increase in federal enforcement, however, independent boards began to expand rapidly into studies conducted at academic research institutions. Outsourcing to independent boards was not promoted by local IRB offices, whose claim to institutional resources was weakened when studies were sent elsewhere. For example, when Albany Medical College began to outsource most of its reviews to the independent Western IRB, it was able to cut its local IRB staff in half (IRB Advisor 2003h). Outsourcing also deprived local IRBs of the fees charged to commercial sponsors for review of their proposed research, which had become a common way to supplement IRB budgets (Prentice, Mann, and Gordon 2006:57). Instead, IRB outsourcing was promoted by upper-level research administrators seeking to reconcile rigorous compliance with core institutional objectives—objectives that included cost control and getting biomedical studies up and running.

The most obvious studies to outsource were those sponsored by pharmaceutical and other private companies. Commercially sponsored studies had become an important revenue source for traditional research institutions (Colyvas 2007). Private sponsors could easily take their studies elsewhere, and commercial contract research organizations typically favored the more efficient independent boards. As IRB red tape at research institutions became more burdensome, commercial sponsors became increasingly reluctant to invest in studies weighed down by “time-consuming layers of internal study review” (Rosenberg 2014:1).

To stem the outgoing tide of commercial research dollars, research institutions began to encourage the outsourcing of commercial studies to independent boards, even over the objections of their own local IRB offices (Rosenberg 2014). “We were pushed [by the university administration] to allow our investigators to use the [independent] IRBs, but we contained it to Phase III industry-sponsored research,” explained Sophia (IRB administrator, research university). Andrea, a research administrator at an Ivy League university, recalled advising an investigator to outsource his study:

I have one investigator that had three studies that needed to start up right away, and I mean right away. So my perspective is … let’s go to [an independent] IRB and get this done. And he was so conflicted about it. I just had to be very firm with him because… It’s my job to help them start their studies. … And sure enough … I got it started up for him in two weeks (associate compliance director, academic medical center).

Over time, some federally funded studies were outsourced as well, enabled by the changing attitudes of OHRP, the agency overseeing such studies. Whereas the FDA had long allowed outsourcing, OHRP had long discouraged it (Heath 2000). Among other things, OHRP worried that sponsors and investigators would engage in “IRB shopping”—giving their business to the board with the most lax standards—and that independent boards faced a conflict of interest between satisfying customers and upholding ethical standards (Anon 2011).

During and after the crackdown, however, OHRP’s stance began to soften. After suspending Virginia Commonwealth University’s federally funded research in 2000, the agency allowed the university to contract with the independent Western IRB for ongoing review of studies (Gore 2000). As the problem of multi-site research became increasingly urgent, the agency’s resistance eroded further. In 2010, OHRP reversed its earlier policy of holding research institutions liable for problems with outsourced reviews, making it clear that the external board, not the outsourcing university, would be held responsible. “OHRP has in a sense been trying to send the message that there are benefits from having a more centralized IRB review,” explained director Jerry Menikoff in 2010. “We recognize there can be inappropriate administrative burdens by having multiple reviews, and that can slow down research” (IRB Advisor 2010c).

In facilitating the use of independent boards, regulators were almost certainly under pressure from NIH, the massive federal funder whose research OHRP was charged with regulating. NIH was both bureaucratically powerful and highly motivated to unburden its studies from disruption. In 2014, NIH announced that “single IRB review” would be not only allowed but required for its multi-site studies (IRB Advisor 2015). Two years later, NIH began to allow IRB fees, including those of independent IRBs, to be charged as direct costs to grants (National Institutes of Health 2016). And in 2018, a new Common Rule was published that contained a “single IRB review” stipulation echoing the NIH policy (Young 2018). Responding to these new incentives, traditional boards began to develop their own single IRB mechanisms (IRB Advisor 2014c), but independent boards had a competitive edge in reviewing highly dispersed studies (Kaplan 2016). Lawmakers and others continued to worry about potential conflicts of interest, especially as profitable boards were acquired and consolidated by private equity firms (United States Government Accountability Office 2023). However, no policy action was taken to rein them in. Today, for-profit boards remain a powerful presence within the American biomedical research landscape: a practical, privatized solution to the high cost of decentered accountability.

Discussion

This article has explored the origins of the blizzard of paperwork for which IRBs became famous around the beginning of the 21st century. By engaging in widespread enforcement, regulators unwittingly triggered the accountability mechanism that had long lain dormant in the heart of the regulations, guided by the aphorism, “if it wasn’t documented, it didn’t happen.” For the first time, biomedical research institutions were required to adhere to the letter of the rules and to provide meticulous documentation that they had done so. The findings of this study resonate with legal studies that draw a connection between limited U.S. federal bureaucratic capacity, by-the-book regulation, and heavy accountability burdens (Axelrad et al. 2000; Bardach and Kagan 2017; Kagan 2019). This study has also revealed two additional features of decentered American governance that may exacerbate technical compliance burdens: ambiguous mandates, which can encourage overcompliance; and fragmented authority, which can refract accountability duties, rendering them more complex, time-consuming, and demanding of expert attention.

This is not to say that the American IRB system is entirely different from its counterparts abroad. Across the wealthy industrialized world, governments delegate decisions about human research ethics to committees, typically containing a mixture of expert and lay members (Druml et al. 2009; Fitzgerald and Phillips 2006; Hedgecoe 2020; Hedgecoe et al. 2006; Shelley-Egan 2016). It is difficult to imagine a system in which the ethics of clinical trials were adjudicated by central government bureaucrats. In addition to being impractical—given the size of the biomedical research enterprise—a top-down system of human research ethics would surely lack the necessary level of specialized biomedical knowledge. Sometimes decentered governance is the most effective policy strategy (Black 2001).

In other ways, however, the American system stands out in sharp contrast. Particularly unusual is its heavy reliance on for-profit ethics review. Nothing comparable exists within the European Union (European Network of Research Ethics Committees 2022). Although Canada allows review by U.S.-based for-profit boards (and has given rise to one of its own), the practice has been far more limited and appears to be a concession to the practical realities of collaborating with American sponsors (Goldstein 2018).

There are at least two likely reasons for the rarity of commercialized human research review around the world. One is that it is considered ethically problematic. A second is that in most other national systems, more capable state institutions impose lighter accountability burdens, obviating the need for such privatized solutions. In support of this point, it is revealing to compare the American IRB system and its counterpart in the UK, made up of Research Ethics Committees (RECs). Like their American counterparts, British RECs are composed mostly of research experts, operating under the framework of government regulations (Hedgecoe 2020; Hedgecoe et al. 2006). Yet in the UK, these boards are administered not by local institutions, but by the National Health Service (NHS), which is solely responsible for regulating clinical trials (Allen 2016).

The centralization of the British system is a relatively recent development. Until the 1990s, British RECs were, like American IRBs, entirely reliant on the decisions of local authorities. Like their American counterparts, British researchers complained about the burdens and delays resulting from disparate decisions in multi-site studies (Hedgecoe 2020; Hedgecoe et al. 2006). In contrast to the U.S., however, authorities in the UK were able to consolidate REC governance under the umbrella of the strong, pre-existing NHS bureaucracy (Hedgecoe et al. 2006; Salman et al. 2014).

There is evidence that centralization has lessened technical compliance burdens. Since the REC system was reformed, biomedical investigators in the UK have reported a significant decrease in red tape (Hedgecoe 2020:193–96). Today, rather than requiring each board to devise its own policies, the NHS publishes and annually updates a single set of detailed standard operating procedures (United Kingdom. Health Research Authority 2022b). Because they are administered by the NHS, UK committees have direct access to authoritative advice about the meaning of government regulations, eliminating the incentive to engage in overcompliance. Multi-site studies are not outsourced, but go through a single NHS online portal, where they are assigned to be reviewed by a single committee (United Kingdom. Health Research Authority 2022a, 2022b). There are no “commercial RECs” specializing in cutting through red tape and delays: none are needed in a system in which centralized ethics review is provided by a government agency.

Whereas for-profit ethics review is rare around the world, commercialized compliance services appear to be quite common in the United States, as a quick internet search for “compliance vendor” reveals. Research on this fascinating phenomenon is thin and scattered. Based on the present study, however, we might hypothesize that for-profit compliance service providers are especially likely to thrive in areas of regulation where decentered accountability has generated unsustainably high technical compliance costs.

An especially compelling example, and one paralleling the IRB story, can be found in the world of U.S. financial services regulation. Both Sarbanes-Oxley (2002) and the subsequent Dodd-Frank Act (2010) caused financial firms’ technical compliance obligations to balloon and compliance staffing to expand dramatically. By 2019, Citi reported having around 30,000 risk, regulatory, and compliance staff, and JP Morgan around 43,000 (English and Hammond 2019; The Economist 2019). By 2017, the industry-wide cost of compliance was estimated at $270 billion per year, or 10 percent of operating resources, most of it spent on maintaining large cadres of compliance administrators (Farley 2017). These high costs induced financial firms to seek ways of meeting regulators’ demands more efficiently. Today, firms across the industry depend on powerful software known as “regtech” (short for “regulatory technology”), which assumes the technical compliance functions that would otherwise be accomplished, more slowly and imperfectly, by salaried employees (Bamberger 2010; Packin 2019). Many financial firms also rely on outsourcing to “compliance management vendors”—the equivalent of independent IRBs (English and Hammond 2019; Hammond and Cowan 2020).

Conclusion

This study has proposed the concept of “decentering” as a way of summarizing a well-known “style of American statecraft” (Mayrl and Quinn 2017:58). By working around fractured and limited bureaucratic capacity, and by mobilizing private actors and resources, the apparently weak American state can have a powerful impact (Balogh 2015; Campbell and Morgan 2011; Clemens 2006; Farhang 2010; Melnick 2005; Novak 2008; Quinn 2019). In this article, we have seen how a tiny federal agency with sharply limited authority was able to induce thousands of biomedical research institutions to overcomply with federal rules and to invest millions (if not billions) of dollars in administering their IRBs.

At the same time, this article underscores the ongoing relevance of state capacity for explaining policy outcomes. Decentered regulatory strategies that work around central state incapacity may achieve regulatory mandates at the cost of extensive perverse side effects. One hazard, described in new institutional studies of employment law, is symbolic compliance (Edelman 1992; Edelman et al. 2011, 1999; Kalev and Dobbin 2006; Kalev, Dobbin, and Kelly 2006; Krieger et al. 2015). This article has addressed other hazards that arise when decentered regimes impose the discipline of accountability—relying not on private litigation for enforcement, but on rigid, prescriptive rules assessed in meticulous audits and inspections.

I have argued that decentered accountability elicits not symbolic compliance, but an obsessive focus on technical compliance, experienced as a blizzard of paperwork. I have also identified a long-run hazard of decentered accountability that merits future study: the reorganization of technical compliance to lower its costs, and the consequent nurturing of commercial industries that promise efficient compliance, exemplified by independent IRBs. Such firms help regulated organizations manage high technical compliance costs, but in ways that potentially undermine regulatory mandates and create further layers of complexity.

An obvious limitation of this article is that it is based on the case of a single regulatory regime. Within the universe of American regulation, there appear to be many parallel cases of decentered accountability, in which compliance is defined through inspections and audits conducted by fragmented regulatory authorities with limited capacity. Examples include financial services (described above), nursing homes (Braithwaite et al. 2007), hospital patient safety (van de Ruit and Bosk 2021), and food safety (Lytton 2017). There is a need for more comparative research to identify which of the adaptations to decentered accountability seen in the IRB story are universal, and which are more variable. There is also an enormous gap in our knowledge of the role and impact of for-profit compliance industries.

Moreover, very recent legal transformations will provide an unprecedented opportunity for researchers to study the consequences of diminished federal capacity in American regulatory governance. In the summer of 2024, the U.S. Supreme Court struck down the Chevron doctrine, making it easier for businesses to challenge regulations in court and even more difficult for agencies to exercise discretion in interpreting regulatory mandates. This attack on the American regulatory state, it is predicted, will embolden corporations to test the limits of regulators’ diminished powers (The Economist 2024).

This study, however, suggests a second possibility: that by further eroding agency capacity, the demise of the Chevron doctrine may inadvertently amplify the hazards of decentered accountability. It may induce agencies to rely more heavily on an inflexible, by-the-book style of enforcement that protects them from legal challenges; rely more on cryptic, informal guidance (Kalen 2008); and perhaps rely even more on private regulatory intermediaries, whose voluntary standards cannot be challenged in court. Moreover, by weakening federal regulatory authority, the decision may add more layers of complexity to already fragmented regulatory environments (Gaskin 2024). Paradoxically, an anti-regulationist Supreme Court decision may end up increasing regulatory burdens with more procedures, more record-keeping, more specialized work for compliance offices, and more business for third-party compliance vendors: in short, a new chapter in the ongoing saga of decentered accountability in American governance.

Notes

  1. To accommodate geographical diversity, most interviews were conducted by phone. Three of those I interviewed were former federal regulators who agreed to be identified by name. The remainder had a range of job titles at the time of the interview—most commonly “IRB administrator” and “research compliance office director”—and many had held several different roles in the IRB world—for example, as administrators, federal regulators, and accreditors.
  2. The FDA, which regulated commercially sponsored research, played a smaller role in the federal crackdown. Unlike OHRP (known as OPRR until 2000), the FDA did conduct site visits to assess IRB compliance. However, these visits were also mostly focused on documentation, which was sampled rather than reviewed comprehensively (United States General Accounting Office 1996).

Competing Interests

The author declares no competing interests.

References

Abbott, Kenneth W., David Levi-Faur, and Duncan Snidal. 2017. “Theorizing Regulatory Intermediaries: The RIT Model.” The Annals of the American Academy of Political and Social Science 670(1):14–35.

Abbott, Laura, and Christine Grady. 2011. “A Systematic Review of the Empirical Literature Evaluating IRBs: What We Know and What We Still Need to Learn.” Journal of Empirical Research on Human Research Ethics 6:3–20.

Allen, Charlotte. 2016. “Changes to Staff Roles and the Approvals Process in the Health Research Authority.” Retrieved May 8, 2023 (https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/804339/HRA_Update_for_MHRA_Stakeholder_Event_.pdf).

Anon. 2011. “Who Watches the Watchmen? Some Commercial Firms That Oversee the Ethics and Scrutiny of Clinical Trials Have Been Found Wanting. Human Volunteers in Research Deserve Better.” Nature 476:125.

Axelrad, Lee, Robert A. Kagan, and others. 2000. Regulatory Encounters: Multinational Corporations and American Adversarial Legalism. Berkeley, CA: University of California Press.

Ayres, Ian, and John Braithwaite. 1992. Responsive Regulation: Transcending the Deregulation Debate. New York: Oxford University Press.

Babb, Sarah. 2020. Regulating Human Research: IRBs from Peer Review to Compliance Bureaucracy. Redwood City, CA: Stanford University Press.

Babb, Sarah, Lara Birk, and Luka Carfagna. 2017. “Standard Bearers: Qualitative Sociologists’ Experiences with IRB Regulation.” The American Sociologist 48(1):86–102.

Balogh, Brian. 2015. The Associational State: American Governance in the Twentieth Century. Philadelphia, PA: University of Pennsylvania Press.

Bamberger, Kenneth A. 2010. “Technologies of Compliance: Risk and Regulation in a Digital Age.” Texas Law Review 88(4):669.

Bamberger, Kenneth A., and Deirdre K. Mulligan. 2015. Privacy on the Ground: Driving Corporate Behavior in the United States and Europe. Cambridge, MA: MIT Press.

Bardach, Eugene, and Robert Kagan. 2017. Going by the Book: The Problem of Regulatory Unreasonableness. New York: Routledge.

Bennett, Andrew, and Jeffrey T. Checkel. 2015. Process Tracing. Cambridge University Press.

Black, Julia. 2001. “Decentring Regulation: Understanding the Role of Regulation and Self-Regulation in a ‘Post-Regulatory’ World.” Current Legal Problems 54(1):103–46.

Black, Julia. 2002. “Mapping the Contours of Contemporary Financial Services Regulation.” Journal of Corporate Law Studies 2(2):253–87.

Bledsoe, Caroline H., Bruce Sherin, Adam G. Galinsky, Nathalia M. Headley, Carol A. Heimer, Erik Kjeldgaard, James Lindgren, Jon D. Miller, Michael E. Roloff, and David H. Uttal. 2007. “Regulating Creativity: Research and Survival in the IRB Iron Cage.” Northwestern University Law Review 101(2):593–641.

Borror, Kristina, Michael Carome, Patrick McNeilly, and Carol Weil. 2003. “A Review of OHRP Compliance Oversight Letters.” IRB: Ethics & Human Research 25(5):1–4.

Brainard, J. 2000. “Spate of Suspensions of Academic Research Spurs Questions about Federal Strategy: A U.S. Agency, Its Own Future Uncertain, Unsettles College Officials with Its Crackdown.” The Chronicle of Higher Education 96(22):A29–30, A32.

Braithwaite, John, Toni Makkai, and Valerie A. Braithwaite. 2007. Regulating Aged Care: Ritualism and the New Pyramid. Cheltenham, UK: Edward Elgar Publishing.

Bromley, Patricia, and Walter W. Powell. 2012. “From Smoke and Mirrors to Walking the Talk: Decoupling in the Contemporary World.” Academy of Management Annals 6(1):483–530.

Burris, Scott, and Jen Welsh. 2007. “Regulatory Paradox: A Review of Enforcement Letters Issued by the Office for Human Research Protection.” Northwestern University Law Review 101:643.

Campbell, Andrea Louise, and Kimberly J. Morgan. 2011. The Delegated Welfare State: Medicare, Markets, and the Governance of Social Policy. New York: Oxford University Press.

Carrigan, Christopher, and Cary Coglianese. 2011. “The Politics of Regulation: From New Institutionalism to New Governance.” Annual Review of Political Science 14(1):107–29.

Clemens, Elisabeth S. 2006. “Lineages of the Rube Goldberg State: Building and Blurring Public Programs, 1900–1940.” in Rethinking Political Institutions: The Art of the State, edited by I. Shapiro, S. Skowronek, and D. Galvin. New York: New York University Press.

Coglianese, Cary, and David Lazer. 2003. “Management-based Regulation: Prescribing Private Management to Achieve Public Goals.” Law & Society Review 37(4):691–730.

Colyvas, Jeannette Anastasia. 2007. “From Divergent Meanings to Common Practices: Institutionalization Processes and the Commercialization of University Research.” PhD Thesis.

Crigger, Bette-Jane. 2001. “What Does It Mean to ‘Review’ a Protocol? Johns Hopkins & OHRP.” IRB: Ethics & Human Research 23(4):13–15.

Dobbin, Frank. 2009. Inventing Equal Opportunity. Princeton, N.J.: Princeton University Press.

Dobbin, Frank, Daniel Schrage, and Alexandra Kalev. 2015. “Rage against the Iron Cage: The Varied Effects of Bureaucratic Personnel Reforms on Diversity.” American Sociological Review 80(5):1014–44.

Dobbin, Frank, and John R. Sutton. 1998. “The Rights Revolution and the Rise of Human Resources Management Divisions.” American Journal of Sociology 104(2):441–76.

Druml, Christiane, M. Wolzt, J. Pleiner, and E. A. Singer. 2009. “Research Ethics Committees in Europe: Trials and Tribulations.” Intensive Care Medicine 35(9):1636–40.

Edelman, Lauren B. 1992. “Legal Ambiguity and Symbolic Structures: Organizational Mediation of Civil Rights Law.” American Journal of Sociology 97(6):1531–76.

Edelman, Lauren B., Linda H. Krieger, Scott R. Eliason, Catherine R. Albiston, and Virginia Mellema. 2011. “When Organizations Rule: Judicial Deference to Institutionalized Employment Structures.” American Journal of Sociology 117(3):888–954.

Edelman, Lauren B., Christopher Uggen, and Howard S. Erlanger. 1999. “The Endogeneity of Legal Regulation: Grievance Procedures as Rational Myth.” American Journal of Sociology 105(2):406–54.

English, Stacey, and Susannah Hammond. 2019. Cost of Compliance Report 2019. Eagan, MN: Thomson Reuters.

Espeland, Wendy Nelson, and Michael Sauder. 2016. Engines of Anxiety: Academic Rankings, Reputation, and Accountability. New York, NY: Russell Sage Foundation.

European Network of Research Ethics Committees. 2022. “About EUREC.” Retrieved September 7, 2022 (http://www.eurecnet.org/index.html).

Farhang, Sean. 2010. The Litigation State: Public Regulation and Private Lawsuits in the United States. Princeton, NJ: Princeton University Press.

Farley, Peter. 2017. “Spotlight on Compliance Costs as Banks Get Down to Business with AI.” International Banker, July 4.

Fisher, Jill. 2008. Medical Research for Hire: The Political Economy of Pharmaceutical Clinical Trials. New Brunswick, NJ: Rutgers University Press.

Fitzgerald, Maureen H., and Paul A. Phillips. 2006. “Centralized and Non-Centralized Ethics Review: A Five Nation Study.” Accountability in Research 13(1):47–74.

Frankel, Mark Steven. 1976. “Public Policymaking for Biomedical Research: The Case of Human Experimentation.” PhD Thesis.

Fransen, Luc, and Genevieve LeBaron. 2019. “Big Audit Firms as Regulatory Intermediaries in Transnational Labor Governance.” Regulation & Governance 13(2):260–79.

Gaskin, Jennifer. 2024. “Regulatory Compliance in a Post-Chevron World: Fasten Your Seatbelts.” Corporate Compliance Insights, July 16.

Goldstein, Gabrielle. 2018. “The Market for Ethics: Human Subjects Research Oversight in the United States and Canada.” PhD Thesis, UC Berkeley.

Gore, Mollie. 2000. “Agency Approves VCU’s Revised Research Plan.” Richmond Times Dispatch, February 1.

Greenberg, Daniel S. 2007. Science for Sale: The Perils, Rewards, and Delusions of Campus Capitalism. Chicago, IL: University of Chicago Press.

Gunningham, Neil, Robert A. Kagan, and Dorothy Thornton. 2004. “Social License and Environmental Protection: Why Businesses Go beyond Compliance.” Law & Social Inquiry 29(2):307–41.

Hacker, Jacob S. 2002. The Divided Welfare State: The Battle over Public and Private Social Benefits in the United States. Cambridge University Press.

Hallett, Tim. 2010. “The Myth Incarnate Recoupling Processes, Turmoil, and Inhabited Institutions in an Urban Elementary School.” American Sociological Review 75(1):52–74.

Halpern, Sydney. 2008. “Hybrid Design, Systemic Rigidity: Institutional Dynamics in Human Research Oversight.” Regulation & Governance 2(1):85–102.

Hamilton, Gary G., and John R. Sutton. 1989. “The Problem of Control in the Weak State: Domination in the United States, 1880–1920.” Theory and Society 18(1):1–46.

Hammond, Susannah, and Mike Cowan. 2020. Cost of Compliance Report 2020. Eagan, MN: Thomson Reuters.

Heath, Erica. 2000. The History, Function, and Future of Independent Institutional Review Boards. Online Ethics Center. Retrieved November 16, 2024 (http://www.onlineethics.org/cms/8080.aspx).

Hedgecoe, Adam. 2020. Trust in the System: Research Ethics Committees and the Regulation of Biomedical Research. Manchester, UK: Manchester University Press.

Hedgecoe, Adam, Fatima Carvalho, Peter Lobmayer, and Fredrik Rakar. 2006. “Research Ethics Committees in Europe: Implementing the Directive, Respecting Diversity.” Journal of Medical Ethics 32(8):483–86.

Hilts, Philip J. 1994. “Agency Faults a U.C.L.A. Study For Suffering of Mental Patients.” New York Times, March 10.

Howard, Christopher. 1999. The Hidden Welfare State: Tax Expenditures and Social Policy in the United States. Princeton, NJ: Princeton University Press.

Infectious Diseases Society of America. 2009. “Grinding to a Halt: The Effects of the Increasing Regulatory Burden on Research and Quality Improvement Efforts.” Clinical Infectious Diseases 49(3):328–35.

IRB Advisor. 2001. “GAO: More Work Could Be Done to Fill the Gaps.” IRB Advisor, July 2001.

IRB Advisor. 2002. “Assessing Risks/Benefits-Recent Clinical Trial Deaths Suggest Imbalances.” IRB Advisor, November 2002.

IRB Advisor. 2003a. “2003 Salary Survey Results.” IRB Advisor, November 2003.

IRB Advisor. 2003b. “Accreditation Is Not for the Faint of Heart.” IRB Advisor, September 2003.

IRB Advisor. 2003c. “Baylor Uses Its BRAAN to Improve IRB Operations.” IRB Advisor, April 2003.

IRB Advisor. 2003d. “HHS Guidance on Financial Conflicts Puzzles Some.” IRB Advisor, June 2003.

IRB Advisor. 2003e. “Now the Real Work Begins: Maintaining Accreditation.” IRB Advisor, September 2003.

IRB Advisor. 2003f. “Prepping for Accreditation Survey Has Quality Improvement Benefits.” IRB Advisor, February 2003.

IRB Advisor. 2003g. “Spotlight on Compliance: HHS Suggests Analysis of Conflicts of Interest.” IRB Advisor, May 2003.

IRB Advisor. 2003h. “Supply and Demand: IRB Fees Now Are the Norm.” IRB Advisor, October 2003.

IRB Advisor. 2004a. “Is the Problem Overregulation or One of Overinterpretation by IRBs?” IRB Advisor, June 2004.

IRB Advisor. 2004b. “Protocols Involving Oral History Still Need Review.” IRB Advisor, February 2004.

IRB Advisor. 2004c. “Reporting Rules for Adverse Events, Unanticipated Problems Differ Slightly.” IRB Advisor, March 2004.

IRB Advisor. 2008. “Successful Accreditation Process Requires Close Attention to Details.” IRB Advisor, September 2008.

IRB Advisor. 2009. “Need to Sharpen up Your IRB Process?” IRB Advisor, January 2009.

IRB Advisor. 2010a. “Data Driven: Accreditation Group Releases Metrics for IRB Performance.” IRB Advisor, September 2010.

IRB Advisor. 2010b. “Making the Case for a New Electronic System.” IRB Advisor, November 2010.

IRB Advisor. 2010c. “OHRP Move Might Increase Trend of Research Sites Using Central IRBs.” IRB Advisor, August 2010.

IRB Advisor. 2010d. “QA Program Checks Consent Form Accuracy.” IRB Advisor, February 2010.

IRB Advisor. 2011. “Improve IRB Staffing Issues Following This Good Example.” IRB Advisor, April 2011.

IRB Advisor. 2013. “Streamlining Protocol Review with Checklists.” IRB Advisor, December 2013.

IRB Advisor. 2014a. “Accreditation Expert Offers Assessment Tips.” IRB Advisor, May 2014.

IRB Advisor. 2014b. “Dust off Those Checklists, Tools, Templates.” IRB Advisor, August 2014.

IRB Advisor. 2014c. “Examples of Central IRB Models.” IRB Advisor, October 2014.

IRB Advisor. 2014d. “One-Page Checklist Saves Time for IRB Staff.” IRB Advisor, September 2014.

IRB Advisor. 2014e. “Staffing, Collaborations Top IRB Issues.” IRB Advisor, January 2014.

Kagan, Robert. 2000. “The Consequences of Adversarial Legalism.” Pp. 372–414 in Regulatory Encounters: Multinational Corporations and American Adversarial Legalism, edited by L. Axelrad, R. A. Kagan, and others. Berkeley, CA: University of California Press.

Kagan, Robert A. 2007. “Globalization and Legal Change: The ‘Americanization’ of European Law?” Regulation & Governance 1(2):99–120.

Kagan, Robert A. 2019. Adversarial Legalism: The American Way of Law. Cambridge, MA: Harvard University Press.

Kagan, Robert A., Neil Gunningham, and Dorothy Thornton. 2003. “Explaining Corporate Environmental Performance: How Does Regulation Matter?” Law & Society Review 37(1):51–90.

Kalen, Sam. 2008. “The Transformation of Modern Administrative Law: Changing Administrations and Environmental Guidance Documents.” Ecology LQ 35:657.

Kalev, Alexandra, and Frank Dobbin. 2006. “Enforcement of Civil Rights Law in Private Workplaces: The Effects of Compliance Reviews and Lawsuits over Time.” Law & Social Inquiry 31(4):855–903.

Kalev, Alexandra, Frank Dobbin, and Erin Kelly. 2006. “Best Practices or Best Guesses? Assessing the Efficacy of Corporate Affirmative Action and Diversity Policies.” American Sociological Review 71(4):589–617.

Kaplan, Sheila. 2016. “In Clinical Trials, For-Profit Review Boards Are Taking over for Hospitals. Should They?” Stat, July 6.

Katz, Jack. 2007. “Toward a Natural History of Ethical Censorship.” Law & Society Review 41(4):797–810.

King, Desmond, and Robert Lieberman. 2017. “The Civil Rights State: How the American State Develops Itself.” in The Many Hands of the State: Theorizing Political Authority and Social Control, edited by K. J. Morgan and A. S. Orloff. New York, NY: Cambridge University Press.

Koski, Greg. 2003. “Beyond Compliance… Is It Too Much to Ask?” IRB: Ethics & Human Research 25(5):5–6.

Krieger, Linda Hamilton, Rachel Kahn Best, and Lauren B. Edelman. 2015. “When ‘Best Practices’ Win, Employees Lose: Symbolic Compliance and Judicial Inference in Federal Equal Employment Opportunity Cases.” Law & Social Inquiry 40(4):843–79.

Lytton, Timothy D. 2017. “The Taming of the Stew: Regulatory Intermediaries in Food Safety Governance.” The Annals of the American Academy of Political and Social Science 670(1):78–92.

Mayrl, Damon, and Sarah Quinn. 2016. “Defining the State from within: Boundaries, Schemas, and Associational Policymaking.” Sociological Theory 34(1):1–26.

Mayrl, Damon, and Sarah Quinn. 2017. “Beyond the Hidden American State: Classification Struggles and the Politics of Recognition.” Pp. 58–80 in The Many Hands of the State: Theorizing Political Authority and Social Control, edited by K. J. Morgan and A. S. Orloff. Cambridge: Cambridge University Press.

McCarthy, Charles. 2001. “Reflections on the Organizational Locus of the Office for the Protection from Research Risks.” in Vol. 2, Ethical and Policy Issues in Research Involving Human Participants. Bethesda, MD: National Bioethics Advisory Commission.

McGarity, Thomas. 1986. “Regulatory Reform in the Reagan Era.” Maryland Law Review 45:253.

McGarity, Thomas O. 1991. “Some Thoughts on Deossifying the Rulemaking Process.” Duke Law Journal 41:1385.

Mello, Michelle M., David M. Studdert, and Troyen A. Brennan. 2003. “The Rise of Litigation in Human Subjects Research.” Annals of Internal Medicine 139(1):40–45.

Melnick, R. Shep. 2005. “From Tax and Spend to Mandate and Sue: Liberalism after the Great Society.” Pp. 387–410 in The Great Society and the High Tide of Liberalism, edited by S. M. Milkis and J. M. Mileur. Amherst: University of Massachusetts Press.

Meyer, John W., and Brian Rowan. 1977. “Institutionalized Organizations: Formal Structure as Myth and Ceremony.” American Journal of Sociology 83(2):340–63.

Mirowski, Philip, and Robert Van Horn. 2005. “The Contract Research Organization and the Commercialization of Scientific Research.” Social Studies of Science 35(4):503–48.

Nelson, Josephine S. 2021. “Compliance as Management.” Pp. 104–22 in The Cambridge Handbook of Compliance. Cambridge: Cambridge University Press.

Novak, William J. 2008. “The Myth of the ‘Weak’ American State.” The American Historical Review 113(3):752–72.

Packin, Nizan Geslevich. 2019. “Is RegTech The Answer To Corporate Governance And Risk Management Issues?” Forbes, February 8, 2019.

Parker, Christine, and Vibeke Nielsen. 2009. “The Challenge of Empirical Research on Business Compliance in Regulatory Capitalism.” Annual Review of Law and Social Science 5:45–70.

Pedriana, Nicholas, and Robin Stryker. 2004. “The Strength of a Weak Agency: Enforcement of Title VII of the 1964 Civil Rights Act and the Expansion of State Capacity, 1965–1971.” American Journal of Sociology 110(3):709–60.

Porter, Theodore M. 1995. Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton, NJ: Princeton University Press.

Power, Michael. 1996. “Making Things Auditable.” Accounting, Organizations and Society 21(2–3):289–315.

Power, Michael. 1997. The Audit Society: Rituals of Verification. New York, NY and Oxford, UK: Oxford University Press.

Public Responsibility in Medicine and Research. 2007. Workload and Salary Survey, 2007.

Quinn, Sarah L. 2019. American Bonds: How Credit Markets Shaped a Nation. Princeton, NJ: Princeton University Press.

Reich, Adam. 2012. “Disciplined Doctors: The Electronic Medical Record and Physicians’ Changing Relationship to Medical Knowledge.” Social Science & Medicine 74(7):1021–28.

Rein, Lisa, and Andrew Tran. 2017. “How the Trump Era Is Changing the Federal Bureaucracy.” Washington Post, December 13, 2017.

Rettig, Richard A. 2000. “The Industrialization of Clinical Research.” Health Affairs 19(2):129–46.

Rosenberg, Ronald. 2014. “AMCs Vying to Better Compete for Industry Trials: Working to Conquer Study Start-up Delays, IRB Review Process.” CenterWatch Monthly, December 2014.

Rosenthal, Elisabeth. 1996. “New York Seeks to Tighten Rules on Medical Research.” New York Times, September 27, 1996.

Rourke, Francis E. 2020. “American Exceptionalism: Government without Bureaucracy.” Pp. 223–29 in The State of Public Bureaucracy. New York, NY: Routledge.

van de Ruit, Catherine, and Charles L. Bosk. 2021. “Surgical Patient Safety Officers in the United States: Negotiating Contradictions between Compliance and Workplace Transformation.” Work and Occupations 48(1):3–39.

Salman, Rustam Al-Shahi, Elaine Beller, Jonathan Kagan, Elina Hemminki, Robert S. Phillips, Julian Savulescu, Malcolm Macleod, Janet Wisely, and Iain Chalmers. 2014. “Increasing Value and Reducing Waste in Biomedical Research Regulation and Management.” The Lancet 383(9912):176–85.

Sauder, Michael, and Wendy Espeland. 2009. “The Discipline of Rankings: Tight Coupling and Organizational Change.” American Sociological Review 74(1):63–82.

Schiller, Reuel. 2016. “The Historical Origins of American Regulatory Exceptionalism.” in Comparative Law and Regulation. Cheltenham, UK: Edward Elgar Publishing.

Schrag, Zachary M. 2010. Ethical Imperialism: Institutional Review Boards and the Social Sciences 1965–2009. Baltimore, MD: Johns Hopkins University Press.

Shalala, Donna. 2000. “Protecting Research Subjects-What Must Be Done.” New England Journal of Medicine 343(11):808–10.

Shelley-Egan, Claire. 2016. “Ethical Assessment of Research and Innovation: A Comparative Analysis of Practices and Institutions in the EU and Selected Other Countries.” European Commission. Retrieved November 17, 2024 (https://satoriproject.eu/media/D1.1_Ethical-assessment-of-RI_acomparative-analysis.pdf).

Short, Jodi L. 2011. “The Paranoid Style in Regulatory Reform.” Hastings Law Journal 63:633.

Short, Jodi L., and Michael W. Toffel. 2010. “Making Self-Regulation More than Merely Symbolic: The Critical Role of the Legal Environment.” Administrative Science Quarterly 55(3):361–96.

Skocpol, Theda, and Kenneth Finegold. 1982. “State Capacity and Economic Intervention in the Early New Deal.” Political Science Quarterly 97(2):255–78.

Skowronek, Stephen. 1982. Building a New American State: The Expansion of National Administrative Capacities, 1877–1920. Cambridge University Press.

Spillane, James P., Leigh Mesler Parise, and Jennifer Zoltners Sherer. 2011. “Organizational Routines as Coupling Mechanisms: Policy, School Administration, and the Technical Core.” American Educational Research Journal 48(3):586–619.

Stark, Laura Jeanine Morris. 2012. Behind Closed Doors: IRBs and the Making of Ethical Research. Chicago: The University of Chicago Press.

Stolberg, Sheryl Gay. 2000. “Teenager’s Death Is Shaking Up Field of Human Gene-Therapy Experiments.” New York Times, January 27, 2000.

Strathern, Marilyn. 2003. “Introduction: New Accountabilities: Anthropological Studies in Audit, Ethics and the Academy.” Pp. 13–30 in Audit Cultures. New York, NY: Routledge.

The Economist. 2019. “Rise of the ‘No Man’; Big Compliance.” The Economist, April 30, 2019.

The Economist. 2024. “What the Chevron Ruling Means for the next US President.” The Economist, July 4, 2024.

United Kingdom. Health Research Authority. 2022a. “Central Booking Service.” Retrieved November 4, 2022 (https://www.hra.nhs.uk/about-us/committees-and-services/online-booking-service/).

United Kingdom. Health Research Authority. 2022b. “Research Ethics Committee – Standard Operating Procedures.” Retrieved April 27, 2023 (https://s3.eu-west-2.amazonaws.com/www.hra.nhs.uk/media/documents/RES_Standard_Operating_Procedures_Version_7.6_September_2022_Final.pdf).

United States. General Accounting Office. 1996. Scientific Research: Continued Vigilance Critical to Protecting Human Subjects. Washington, DC: U.S. General Accounting Office.

United States. Government Accountability Office. 2023. Institutional Review Boards: Actions Needed to Improve Federal Oversight and Examine Effectiveness. GAO-23-104721.

United States. Office for Human Research Protections. n.d. “IRB Guidebook.”

Wadman, Meredith. 1999. “NIH Ethics Office Clamps Down on Duke.” Nature 399:190.

Wu, JunJie, and Teresa M. Wirkkala. 2009. “Firms’ Motivations for Environmental Overcompliance.” Review of Law & Economics 5(1):399–433.