Tuesday, June 28, 2005

Absence makes the heart grow fonder

She lives! She's just been very, very busy. Over the past month or so, I've gotten involved in much more project work than I should be, given that I'm really supposed to be building and executing a corporate strategy for validation, and then I've gotten a new job on top of it. I'm leaving my current CRO-ish atmosphere for the hallowed halls of a local (SE PA) Big Pharma, and I couldn't be happier.

It is an interesting shift in more ways than one. For the past nine years, I have worked on computer software within the pharmaceutical industry. With this new position, I'm shifting to an auditing role within "the business" as those of us in software often call it, and I'm not going to be working with computer systems any longer. If it sounds like a big difference, that's because it is. It's not that I'm not qualified to do what I'll be doing (and more on what I'll be doing later) but that the focus of what I do is shifting dramatically. It's an extremely exciting prospect for me.

As for what I'll be doing... The Big Pharma for which I will be working is in the beginning stages of an effort to establish, enforce, and measure against global criteria for all vendors used in the conduct of clinical trials, and that's what I've been charged with doing. I was brought on because of my experience in quality assurance and validation (I'll need both when auditing vendors) and I think I have a lot to bring to the company and to the position. Not to mention that it's an excellent opportunity for me, and definitely in the right direction.

So, short story long, that's why I've been away. I haven't wanted to discuss anything here before it was decided and certain, but now that it is, I look forward to resuming my regular every-so-often posting pattern.

Wednesday, April 13, 2005

The argument against testing

Did you ever think you'd read about a validation/quality assurance person talking about talking people out of testing? Yeah, me either. But that's just what I've been doing lately.

My current employer is a relatively small and young company; as such, in the typical young company zeal to do the right thing, they have long held the attitude that everything computer-related must be validated in a full-blown validation effort: system master plan, requirements spec, test protocol(s), test scripts, traceability, and summary and conclusions and master reports aplenty. And I mean everything; we have a Microsoft Word template that we create as part of a service to our clients. All it does is apply font sizes and formats to existing content, and yet for several releases now, it's been validated in a full-scale validation effort.

I'm one of the strongest proponents of a thorough testing effort, done right the first time, but there's just a lot of needless effort going on here. That Word template, for instance. We develop it as part of a service to our clients, so I definitely believe it needs to be tested to some extent. But does it really need to be validated, with all of the bells and whistles that validation involves? I don't think so. My perspective is this: if we qualify Microsoft Word as part of our operating environment (and we do), would we need to validate every document created in Word? No. So why would we need to validate a template that does nothing more than apply formatting to existing content (or format a segment of a document in preparation for the addition of content)? My feeling is that, frankly, we don't, and my only regret is that I didn't catch this sooner and make it easier on everyone nine months ago when I first got a look at this stuff.

21 CFR 820.70(i) states that "When computers or automated data processing systems are used as part of production or the quality system, the manufacturer shall validate computer software for its intended use according to an established protocol. All software changes shall be validated before approval and issuance. These validation activities and results shall be documented." In the particular case of the Word template I'm talking about, this template is neither part of pharmaceutical product production nor part of the quality system. This template is about as far removed from drug production as it gets; it's just a template for authoring reports. The template itself has zero impact on the content. That having been said, all of this is predicated on Word being a qualified part of the operating environment (and, again, it is here), but once Word is verified, you really don't need to do more provided you're not using mechanisms within Word itself to derive, modify, or delete information that has regulatory impact.
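The reasoning above boils down to a simple decision rule, which can be sketched like this (a hypothetical illustration of my own; the predicate names are mine, not anything from the regulation, and this is no substitute for a documented assessment):

```python
def needs_full_validation(in_production_or_quality_system: bool,
                          touches_regulated_data: bool) -> bool:
    """Rough sketch of the 820.70(i) reasoning -- not a compliance tool."""
    # 820.70(i): software used as part of production or the quality
    # system must be validated for its intended use.
    if in_production_or_quality_system:
        return True
    # Otherwise, with the host application (here, Word) already
    # qualified as part of the operating environment, full validation
    # is warranted only if the artifact derives, modifies, or deletes
    # information with regulatory impact.
    return touches_regulated_data

# The report-formatting template: outside production and the quality
# system, and it only applies formatting to existing content.
print(needs_full_validation(False, False))  # False -- test it, don't validate it
```

The point of the sketch is the asymmetry: qualification of the host environment happens once, and only artifacts that actually touch regulated information climb back up to full validation.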

So far, the person I've tried to talk out of validating his Word template has come back to me once to debate the decision. I'm open to the debate, but it does make me chuckle a little to think that, for once, I'm on the side that's arguing against fully documented testing.

Tuesday, March 22, 2005


I apologize for the extended absence. I've been putting in tons of time not only on my day job, but also on my presentation for DIA CDM, which is coming up very soon. I've also been approached just today for a very short-notice but very desirable writing opportunity, so we'll see if I can pull that together in time.

I've been doing a lot of validation remediation lately. I'm on my second project now for one client, and I've been doing a fair amount of it internally as well. Even though I suspect validation remediation makes up a significant amount of time in a given validation consultant's work year, I have had the luxury of not having to come up against it much until recently. Everywhere I've worked, we've dealt with new systems and prospective validation that was done with knowledge of 21 CFR Part 11 and all predicate rules, so I flatter myself that our efforts were relatively comprehensive. Now that I'm calling my own shots and bidding on my own projects, though, I'm finding that I'm running into validation remediation more and more -- companies don't mind spending the money to bring in someone versed in current guidance and regulations to set things right because it benefits them so much, and I find it interesting work.

It's rather like figuring out a half-done jigsaw puzzle, with the added challenge of some of the rest of the pieces being missing and others being scattered around the building. I have to figure out what is there and if it was put together appropriately, and once I do that, I need to figure out what other pieces I can find and re-do the ones I can't find. In the end, I come out with a picture that, while it might not look as perfect as the original puzzle would have had it been completed on the first go, is perfectly serviceable and representative of what it was meant to be a picture of in the first place.

Friday, February 18, 2005

What is pharma's problem, anyway?

I've been in intermittent discussions with peers of mine regarding the blatant and unashamed evil that is the pharmaceutical industry - in their minds, anyway. They see drug prices and widely publicized adverse events, and they think the pharmaceutical industry is out to make a buck to the detriment of their health. Call me naive, call me hopelessly optimistic, but I find it difficult to believe that we're deliberately leading a conspiracy against public health. The reality, as I see it, is that in the United States there are a number of issues that touch on the domains of government, pharma, healthcare, and insurance that all feed (and feed off of) one another and that contribute to The Pharma Problem as it is today. To wit:

  • Governmental: Every New Drug Application that is sent to FDA is accompanied by a "user fee" per the Prescription Drug User Fee Act (PDUFA III). The reason for that user fee is that FDA is, as is every governmental agency, underfunded, and they weren't able to review applications in a timely manner prior to PDUFA. (More on why time is so important in the next bullet.) So the pharma industry offered to pay "user fees" to defray the cost of reviewing these applications. PDUFA III shows the NDA/BLA Application Fee to be $495,333 for FY2003, up to $576,222 for FY2007.

  • Also governmental: Timing. Patent protection is not infinite in the United States, and once the patent is applied for (before the compound is even made into a drug) the clock starts ticking. Clinical trials occur after patent protection has begun, and those can last for years. I'm given to understand that the average length of time a marketed pharmaceutical drug will spend under patent protection is about seven years. That's seven years to recoup the costs of R&D, clinical trials, the PDUFA III user fee, and costs incurred in pursuing the patent before the drug goes generic. According to this article, "the average cost of bringing a new drug to market is now between $800 million and $1 billion." Quite a lot to recoup in seven years.

  • Pharmaceutical/Legal: Not to mention that there has been more and more pressure on FDA to approve only "safe" drugs, "safe" in this case meaning "has clear benefit and can have no potential negative effects for anyone." We have a litigious society; people sue at any sign of an adverse event, even if it's a known side effect of the drug (and yes, also sometimes when it's a previously unknown side effect - cf. Vioxx and Fen-Phen). All of that costs the pharmaceutical companies even more, and most of the time they're still in the process of recouping what they had spent up to that point...

  • Pharmaceutical: ...so here we ring the bell and usher in direct-to-consumer advertising. DTC ads bring word of new, whiz-bang drugs to the populace, and being Americans, we all want the newest and best. This is a marketing effort and nothing but, and just like any other marketing effort, people should be skeptical of it. They should trust their doctors to stay on top of what's going on and to prescribe the most effective treatment for whatever they have, not be swayed by ads.

  • Healthcare/Insurance: ...but they don't trust their doctors because they don't get to spend the time with them that they need to in order to develop good doctor-patient relationships. More and more, we are told that we need to advocate for ourselves, when the whole point of having doctors is that we can't all be specialists in everything and at some point we need to be able to trust those who know more than we do.

  • Pharmaceutical/Governmental: And so we're back to the DTC ads. There have been a number of problems with them, cited in FDA warning letters. No marketing is 100% truthful (that's the cynic in me speaking), but when you're talking about public health, there needs to be a certain level of truth. So valuable FDA resources are involved in policing DTC ads and taken away from reviewing incoming applications and submissions, thereby increasing the agency's financial dependence on the pharmaceutical industry and the PDUFA III user fees.

  • Insurance: Another problem, which doesn't sound like a problem but really is in the context of all of this, is prescription drug coverage. Many people have prescription drug coverage that allows them to get virtually any drug for pennies on the dollar. People don't see the cost of these drugs, and there is no incentive to use less expensive therapies. Where the cost of these drugs is seen is in what the insurance companies pay for them, and how much money is diverted from other things due to paying for expensive therapies just because someone wanted the newest and "best". Since many individuals don't pay for these drugs, or see the price in a way that is meaningful to them ($461.20 on a prescription drug label doesn't mean much when you only paid $20 for it - you might look at the number, but it doesn't spur you to any action), the demand for the high-priced drugs continues unchecked by financial common sense. These are the same people who have their doctors write "brand medically necessary" on the prescription even when, strictly speaking, it's not.

  • Healthcare: And then we get back to the subject of doctors, specifically how they're paid very little if they stay in general medicine, which is leading many of the very good doctors to pursue specialties as a way to defray their med school loans. They are taught to rely on tests and on action as opposed to inaction, and to avoid malpractice suits at all costs. (The high rate of caesarean sections among American births is at least in part due to the fact that if a doctor does something instead of just letting labor progress, they're less likely to be sued for malpractice if something goes wrong, and even if they are, they're more likely to be able to say, hey, at least I did something.) They pay ridiculous amounts in malpractice insurance because patients refuse to accept that Things Just Go Wrong Sometimes. (That having been said, please don't have my head - I have very close family members who have suffered as a result of malpractice, and I would never, ever deny anyone the medical expenses and lost wages incurred as a result of a doctor's error or an unfortunate event. At the same time, though, pain and suffering awards are going through the roof, to everyone's detriment.)

...And that's all off the top of my head right now. These items all relate to and are dependent on one another. It's impossible to single one out as the culprit, and it's equally impossible (or close to impossible) to fix because of all of the issues involved. I don't know what the solution should be. I'm barely just getting my hands around the problem at this point.

Tuesday, February 15, 2005

"So, what will the FDA enforce?"

I must get asked the question in the title three or four times a week. I don't blame people for wanting to know - clearly, it would make life much easier for just about everyone if we knew and could concentrate on what FDA perceives to be the potential critical points of failure in the industry's work. But I don't know beyond what FDA states in guidance, and I can't read minds, so, due to budget cuts and an attendant lack of a crystal ball, I turn to the next best thing for my answer: FDA Warning Letters.

After reading these letters every day, I've come up with a pat response which, though probably more flippant than it should be, contains a grain of truth:

FDA seems to care most about food products and promotional material. So as long as you don't make something people are going to eat, and you don't claim that your product cures anything, you're pretty much good to go.

Like I said, flippant. But also not without its truth. If you read all of the warning letters sent to companies in recent months, you'll find that the vast majority of them pertain to either food products that were manufactured in environments that were not in compliance with regulations or promotional materials that don't accurately reflect the benefits, risks, and side effects associated with a drug (or device or biologic).

I'm not sure whether I feel that this will change at all as the environment in the industry changes. It feels like now more than ever, the public is demanding "safer" drugs. ("Safer" in quotes because if you're reading this, chances are you know as well as I do that safety is all relative after a certain point.) I'm not sure whether that will translate over time to more stringent regulations (and therefore enforcement) for safety data for new compounds (and what that could potentially mean to the industry), or whether the hue and cry over DTC ads will continue instead. Perhaps both will happen to some extent.

Monday, January 17, 2005

Risk-Based Validation: What is it, and why do we do it?

Recently, much has been made of the FDA’s recommendation that pharmaceutical companies move toward a risk-based validation strategy. It has been referenced in the guidance General Principles for Software Validation (January 2002) as well as the final guidance regarding 21 CFR Part 11 (Part 11, Electronic Records; Electronic Signatures – Scope and Application, August 2003) as an FDA recommendation for validation of computerized systems. “Risk-based validation” is now a commonly heard phrase in the pharmaceutical industry, but its precise definition is unclear. The common impression is that it is a method that will reduce the overall time and effort expended in validation, and therefore will positively impact productivity and profitability. Though this is certainly true of a well-planned and well-executed risk-based approach, if the knowledge of how to implement such an approach is lacking, chances are the real benefits will not be seen.

To understand the concept of risk-based validation, one must begin by learning how to assess risk and make decisions based on that assessment. A common scale for measuring risk is frequently sought, and some efforts, such as the GAMP risk assessment methodology, attempt to establish a common scale, or at least a standard way to evaluate risk. The GAMP method advocates categorizing software and performing validation based on the extent of validation recommended for the category in which the software is placed. Another method for assessing the risks associated with a given software package is to assess each requirement (functional, design, user, etc.) for its ultimate impact on patient health, and then to concentrate validation on those requirements for which risk is highest. This requires the ability to approach the requirements at a macro level, and may require the initial involvement of more individuals from different disciplines within an organization, but that may be an acceptable investment for a software package that will be installed, and therefore validated, once, and only maintained thereafter for several years.
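The per-requirement approach can be sketched as a simple scoring exercise. Everything here is an illustrative assumption of mine -- the example requirements, the 1-3 scales, and the thresholds are invented for the sketch, not drawn from GAMP or any FDA guidance:

```python
def validation_depth(patient_impact: int, failure_likelihood: int) -> str:
    """Map a requirement's risk score to a level of validation rigor.

    Both inputs are on an assumed 1 (low) to 3 (high) scale; the
    thresholds below are arbitrary illustrations.
    """
    score = patient_impact * failure_likelihood
    if score >= 6:
        return "full scripted protocol with traceability"
    if score >= 3:
        return "targeted scripted testing"
    return "documented informal testing"

# Hypothetical requirements for a clinical data system
requirements = [
    ("Calculate and flag out-of-range lab values", 3, 2),
    ("Record an audit trail for every data change", 3, 1),
    ("Display the company logo on report headers", 1, 1),
]

for name, impact, likelihood in requirements:
    print(f"{name}: {validation_depth(impact, likelihood)}")
```

The value of even a toy model like this is that it forces the decisions -- and their justifications -- into a form that can be documented alongside the validation package, which is exactly the point made below.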

Another thing to remember about risk-based validation is that, as always, “if it’s not documented, it didn’t happen.” It is not enough to simply assess the risk and make the decisions based on that risk; the process of risk assessment must be documented, as well. The approach taken, the findings uncovered, the decisions made, and the justification for those decisions must be documented and included with the validation documentation for the system.

If steps are taken to perform a risk assessment for a piece of software to be implemented, it is indeed possible in some cases to reduce the overall amount of time spent on validation: taking a tailored approach rather than a “one-size-fits-all” approach can only benefit schedules as well as patient health. However, the approach must be planned and documented thoroughly.

Thursday, January 06, 2005

The implications of CDISC on validation

Not sure how many of my readers, if any, are aware of the initiatives being advanced by the Clinical Data Interchange Standards Consortium (CDISC), but they're setting forth standards for consistent collection and reporting of clinical trial data. One of their standards especially, the Study Data Tabulation Model (SDTM), is going to have great impact on the world of computer systems validation.

SDTM has been introduced by FDA in a press release as a standardized format in which clinical data will be accepted in regulatory submissions. As a result, CDISC and the SDTM are oft-heard buzzwords in the clinical trial industry. They're not as often heard yet in the computer systems validation world, but they soon will be.

SDTM provides a standardized format and organizational scheme for the purpose of capturing, storing, reporting on, and retrieving clinical trial data. Currently, the data for each study can be in a different format and organizational scheme. The applications currently used to verify and report on that data, generally written on the SAS platform, are only informally tested and validated, due both to the sheer number of them (consider that a full suite must be written for each format and organizational scheme) and to the aggressive schedules pursued by pharmaceutical companies. SDTM is going to have an enormous impact on this. With a single standard format and organizational scheme to work with, more time can be invested in the verification and reporting programs (edit checks and tables, listings, and figures programs, respectively, for the most part) and especially in the validation of these programs. The standard will reduce the amount of time spent programming and validating, while at the same time allowing for more extensive validation to be done on each program.

Consider that initially you would have, say, 100 programs and perform a cursory validation on each that takes a half hour. You're spending 50 hours on validation at that point. With SDTM, you might have perhaps 15 programs, and you can spend 2 hours validating each one and still come in under the initial time spent. In addition, more extensive validation up front reduces the number of fixes that need to be put in (and revalidated) once the program is in production use. It's a winning proposition for everyone.
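Using the post's own illustrative numbers (100 programs and 15 programs are hypotheticals, not measurements), the arithmetic works out like this:

```python
# Before SDTM: one cursory check per study-specific program
programs_before, hours_each_before = 100, 0.5
# After SDTM: fewer, standardized programs, each validated more deeply
programs_after, hours_each_after = 15, 2.0

total_before = programs_before * hours_each_before  # 50.0 hours
total_after = programs_after * hours_each_after     # 30.0 hours
print(total_before, total_after)  # 50.0 30.0
```

So each program gets four times the validation attention while the total effort still drops by 40% -- and that's before counting the rework avoided in production.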