Monday, January 17, 2005

Risk-Based Validation: What is it, and why do we do it?

Recently, much has been made of the FDA’s recommendation that pharmaceutical companies move toward a risk-based validation strategy. The recommendation appears in the guidance General Principles for Software Validation (January 2002) as well as in the final guidance on 21 CFR Part 11 (Part 11, Electronic Records; Electronic Signatures – Scope and Application, August 2003), both of which endorse it for the validation of computerized systems. “Risk-based validation” is now a commonly heard phrase in the pharmaceutical industry, but its precise definition is unclear. The common impression is that it is a method that will reduce the overall time and effort expended on validation, and therefore will positively impact productivity and profitability. That is certainly true of a well-planned and well-executed risk-based approach, but an organization that lacks the knowledge to implement such an approach is unlikely to see the real benefits.

To understand risk-based validation, one must begin by learning how to assess risk and how to make decisions based on that assessment. A common scale for measuring risk is frequently sought, and some efforts, such as the GAMP risk assessment methodology, attempt to establish one, or at least a standard way to evaluate risk. The GAMP method advocates categorizing software and performing the extent of validation recommended for the category in which the software falls. Another method is to assess each requirement (functional, design, user, etc.) for its ultimate impact on patient health, and to concentrate validation effort on the requirements for which risk is highest. This requires the ability to approach the requirements at a macro level, and may require the initial involvement of more individuals from different disciplines within an organization, but that may be an acceptable investment for a software package that will be installed, and therefore validated, once, and then only maintained for several years thereafter.
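
As a rough illustration, a per-requirement assessment might look like the sketch below. The 1-to-5 severity and probability scales, the multiplicative score, and the cutoff threshold are assumptions chosen for illustration; nothing here is prescribed by GAMP or any FDA guidance.

```python
# Illustrative per-requirement risk assessment. The 1-5 scales, the
# multiplicative score, and the threshold are assumptions for illustration,
# not prescribed by GAMP or FDA guidance.
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    description: str
    severity: int     # impact on patient health if this function fails (1 = negligible, 5 = critical)
    probability: int  # likelihood of failure (1 = remote, 5 = frequent)

    @property
    def risk_score(self) -> int:
        return self.severity * self.probability

def prioritize(requirements: list[Requirement], threshold: int = 12) -> list[Requirement]:
    """Return the requirements deserving the most validation effort, highest risk first."""
    return sorted(
        (r for r in requirements if r.risk_score >= threshold),
        key=lambda r: r.risk_score,
        reverse=True,
    )

reqs = [
    Requirement("FR-01", "Calculate dispensed dose", severity=5, probability=3),
    Requirement("FR-02", "Print report page header", severity=1, probability=2),
]
for r in prioritize(reqs):
    print(f"{r.req_id}: concentrate validation here (risk score {r.risk_score})")
```

The particular formula matters less than the fact that the scoring and the cutoff are explicit, which is what makes the resulting decisions defensible.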

Another thing to remember about risk-based validation is that, as always, “if it’s not documented, it didn’t happen.” It is not enough to simply assess the risk and make the decisions based on that risk; the process of risk assessment must be documented, as well. The approach taken, the findings uncovered, the decisions made, and the justification for those decisions must be documented and included with the validation documentation for the system.
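
To make “documented” concrete, here is a minimal sketch of what such a record might capture. The structure, the field names, and the example system are my own illustration, not a regulatory template; the fields simply mirror the items listed above.

```python
# Illustrative record of a documented risk decision. The fields mirror the
# items above (approach, findings, decisions, justification); the structure
# itself is an assumption, not a regulatory template.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class RiskDecisionRecord:
    system: str
    approach: str       # e.g. "per-requirement patient-impact assessment"
    findings: str       # what the assessment uncovered
    decision: str       # what was decided
    justification: str  # why that decision is defensible
    assessed_on: date
    assessors: tuple[str, ...]

record = RiskDecisionRecord(
    system="LIMS v4.2",  # hypothetical system name
    approach="Per-requirement patient-impact assessment",
    findings="3 of 40 requirements rated high impact on patient health",
    decision="Full challenge testing of high-impact requirements; reduced testing elsewhere",
    justification="Remaining requirements have no direct effect on patient health",
    assessed_on=date(2005, 1, 17),
    assessors=("QA lead", "system owner", "lead developer"),
)
```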

If steps are taken to perform a risk assessment for a piece of software to be implemented, it is indeed possible in some cases to reduce the overall amount of time spent on validation: a tailored approach benefits both schedules and patient health in a way a “one-size-fits-all” approach cannot. However, the approach must be planned and documented thoroughly.

Thursday, January 06, 2005

The implications of CDISC on validation

Not sure how many of my readers, if any, are aware of the initiatives being advanced by the Clinical Data Interchange Standards Consortium (CDISC), but they're setting forth standards for the consistent collection and reporting of clinical trial data. One standard in particular, the Study Data Tabulation Model (SDTM), is going to have a great impact on the world of computer systems validation.

SDTM has been introduced by the FDA, in a press release, as a standardized format in which clinical data will be accepted in regulatory submissions. As a result, CDISC and SDTM are oft-heard buzzwords in the clinical trial industry. They're not heard as often in the computer systems validation world yet, but they soon will be.

SDTM provides a standardized format and organizational scheme for capturing, storing, reporting on, and retrieving clinical trial data. Currently, the data for each study can arrive in a different format and organizational scheme, and the applications used to verify and report on that data (for the most part, edit-check programs and tables, listings, and figures programs, respectively, generally written on the SAS platform) are only informally tested and validated. This is due both to their sheer number (a full suite must be written for each format and organizational scheme) and to the aggressive schedules pursued by pharmaceutical companies. SDTM will have an enormous impact here. With a single standard format and organizational scheme to work against, more time can be invested in each verification and reporting program, and especially in its validation. The standard will reduce the time spent programming and validating, while at the same time allowing more extensive validation of each program.
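
As a rough sketch of why a single scheme helps, here is what a reusable edit check against SDTM-style demographics data might look like. USUBJID and AGE are genuine variable names from SDTM's DM (Demographics) domain; the check logic and the sample records are my own illustrative assumptions.

```python
# Sketch of a reusable edit check against SDTM-style demographics data.
# USUBJID and AGE are genuine SDTM variable names from the DM domain;
# the check itself and the sample records are illustrative assumptions.

def check_age_present_and_plausible(dm_records: list[dict]) -> list[str]:
    """Return a query message for every DM record failing the check."""
    queries = []
    for rec in dm_records:
        age = rec.get("AGE")
        if age is None:
            queries.append(f"{rec['USUBJID']}: AGE is missing")
        elif not (0 <= age <= 120):
            queries.append(f"{rec['USUBJID']}: AGE {age} out of plausible range")
    return queries

dm = [
    {"USUBJID": "STUDY1-001", "AGE": 34},
    {"USUBJID": "STUDY1-002", "AGE": None},
]
print(check_age_present_and_plausible(dm))  # ['STUDY1-002: AGE is missing']
```

Because every study's demographics arrive under the same variable names, one validated check like this can be reused across studies instead of being rewritten for each study-specific layout.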

Consider that initially you would have, say, 100 programs and perform a cursory half-hour validation on each; you're spending 50 hours on validation at that point. With SDTM, you will have perhaps 15 programs, and you can spend 2 hours validating each one and still come in under the initial time spent. In addition, more extensive validation up front reduces the number of fixes that need to be put in (and revalidated) once the program is in production use. It's a winning proposition for everyone.
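
Spelled out, using the hypothetical program counts and hours above:

```python
# Back-of-envelope effort comparison using the illustrative figures above.
before = 100 * 0.5  # 100 study-specific programs, ~30 minutes of cursory validation each
after = 15 * 2.0    # 15 standardized programs, 2 hours of thorough validation each
print(f"pre-SDTM: {before:g} hours, with SDTM: {after:g} hours")  # 50 vs. 30
```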

Monday, January 03, 2005

Vioxx: The 21st century's punching bag

David Graham has stated that Vioxx may have harmed as many as 139,000 users, rather than the initial estimate of 28,000. On one hand, it's not entirely surprising: Vioxx was a blockbuster drug, and drugs don't become blockbusters by being sparsely prescribed. Given how widely it was prescribed, and that cardiovascular events have been associated with its long-term use, it's not entirely shocking that the number of those affected might be greater than initially estimated.

It's starting to feel like a broken record. Vioxx was overprescribed. It had only been on the market since 1999, so it was hardly a time-tested therapy. And it was designed for a specific group of people, when the actual use was much, much more widespread. Drugs do things to your body, good and bad. FDA's responsibility is to make sure that the good outweighs the bad, and as far as toxic drugs go, I don't necessarily think Vioxx was even up in the top 5. (Accutane? Yes, for its implication in massive birth defects. Vioxx? Not so much.) The cardiovascular side effects were a surprise, but that is one of the risks of taking new drugs.

Now, the remaining COX-2 inhibitors are in question and will likely be removed from the market (or, in the case of those not yet on the market, may never make it there). These are useful therapies for their target population, yet they won't be made available to it (and thanks to the negative press, even if they were, nobody would want to take them, useful or not).