Thursday, December 29, 2011

Pharmaceutical Quality assurance




Pharmaceutical quality assurance is an essential element of drug development in the pharmaceutical world. It is the branch responsible for ensuring that all appropriate procedures have been followed and documented so that scientific development can proceed. Modern documentation and monitoring tools for storage, packaging and labeling, research laboratory environments, and development management are all considered part of total quality assurance.
Pharmaceutical quality assurance is a wide-ranging concept that encompasses everything that individually or collectively influences the quality of a product. In the pharmaceutical industry, quality assurance can be divided into four key areas: supervision of the process, the production process, distribution, and evaluation. The development of norms and standards to strengthen quality assurance is an important part of the mandate of the World Health Organization.
Vital elements are quality assurance guidance materials in the fields of manufacturing, assessment, and distribution of medicines. These comprise guidance on: good manufacturing practices; quality assurance in regulatory control; prequalification of medicines, research laboratories, and source organizations; model certification schemes for quality assurance-related activities; quality control analysis; and new requirements for inclusion in the Basic Tests series.
All these elements are intended for use by national regulatory officials, manufacturers, and other concerned organizations. The need to increase access to low-cost, quality medicines for communicable diseases in less developed countries has raised numerous challenges within the pharmaceutical realm.
These challenges rest on the fact that national regulatory authorities vary in their capacity to justify and apply current practices and accepted guidance on quality management, labeling, and classification of pharmaceuticals. The World Health Organization works to develop and promote comprehensive standards, norms, and guidelines that establish the significance, worth, and benefits of medicines.
The development of norms and standards for quality assurance and quality control is an essential part of the World Health Organization's mandate and a remarkable contribution. It has been encouraged and endorsed through numerous World Health Assembly resolutions and the Revised Drug Strategy. The World Health Organization carries out its work on medicines and essential drugs in global markets as well as in isolated and rural areas. At WHO headquarters, these activities are developed and implemented by the Department of Essential Medicines and Pharmaceutical Policies (EMP).
Traditional medicine is the sum of the knowledge, skills, and practices based on the theories, beliefs, and experiences indigenous to different cultures, whether explicable or not, used in the maintenance of health as well as in the prevention, diagnosis, improvement, or treatment of physical and mental illness.
Traditional use of herbal medicines refers to their long history of use. Such use is widely acknowledged to be safe and effective, and may therefore be accepted by national regulatory authorities.
Active ingredients are the components of herbal medicines with therapeutic activity. In herbal medicines where the active ingredients have been identified, the preparation should, where adequate analytical methods are available, be standardized to contain a defined quantity of those ingredients. Where the active components cannot be identified, the whole herbal medicine may be considered as one active ingredient.

Wednesday, December 28, 2011

Beyond Size Exclusion: There Is No Universal Model Organism



Brevundimonas diminuta is typically used as the model for organisms that are expected to be found in pharmaceutical manufacturing environments. It was selected based on its presence in pharmaceutical operations and within native bioburden [4].
The organism has been found particularly suitable for validating sterilizing grade filters due to its size and ease of cultivation. However, it should only be used to model situations where its dimensions closely match the organisms of interest in a given application, relative to the filter pore size and shape.
Furthermore, in some cases, sieve retention may not be the mode of organism removal. For instance, it may be adsorption, as the organism forms hydrogen bonds to the filter’s polymeric surface. This could account for the observation, 26 years ago, that Pseudomonas aeruginosa organisms are more strongly retained by polyamide membranes than by cellulose triacetate filters [5]. It also explains the removal of latex particles from aqueous suspensions by polyamide membranes in the absence of surfactant, but not in its presence (Table 1) [6].
Table 1: Retention (%) of 0.198-µm spheres by various 0.2-µm-rated membranes

Filter Type                In Water (%)    In 0.05% Triton X-100 (%)
Polycarbonate                  100.0            100.0
Asymmetric polysulfone         100.0            100.0
Polyvinylidene fluoride         74.8             19.2
Nylon 66                        82.1              1.0
Cellulose esters                89.4             25.1




From Tolliver and Schroeder (1983) courtesy of Microcontamination
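The retention percentages in Table 1 derive directly from upstream challenge counts and downstream counts. A minimal sketch of the arithmetic (the counts below are illustrative, not taken from the study):

```python
def retention_percent(upstream_count, downstream_count):
    """Percent retention of particles by a membrane:
    100 * (1 - downstream/upstream)."""
    if upstream_count <= 0:
        raise ValueError("upstream count must be positive")
    return 100.0 * (1.0 - downstream_count / upstream_count)

# Example: of 1000 spheres challenged upstream, 179 detected downstream
# corresponds to 82.1% retention (cf. Nylon 66 in water, Table 1).
print(round(retention_percent(1000, 179), 1))  # 82.1
```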
B. diminuta should not be viewed as a universal model organism; native bioburden may be a better alternative as a challenge organism, being closer to the actual process conditions. Unfortunately, rare penetrations of sterilizing grade filters have created exaggerated doubt about the reliability of filtration. Appropriate process validation, though, should dispel such doubts for even the most critical reviewer of sterile filtration.

Sources of Variability: Size and Shape

B. diminuta varies in size and shape, depending on how it is cultivated. Back in 1967, Bowman and colleagues described the B. diminuta size as 0.3 × 1.0 μm [7]. However, in 1978, Leahy and Sullivan found [8] that the organism, grown at 30 °C and incubated for 24 hours in saline lactose broth, a minimally nutritional medium for that microbe, yielded cocci-like cells of approximately 0.3 × 1.0 μm (Figure 1). Similar considerations must be taken into account when challenge tests are performed with native bioburden.
B. diminuta is typically cultivated to develop as spherical a form as possible, since spheres are the least amenable to retention. Thus, Leahy and Sullivan proposed, back in 1978, that it be used as the model organism for 0.2/0.22-μm-rated membranes, partly because of its size relative to the 0.2-μm dimension [8].
Subsequently, the FDA designated it for that very purpose [9], defining a sterilizing filter as one that retains a minimum of 1 x 10^7 cfu of Brevundimonas diminuta ATCC 19146 per cm2 of effective filtration area (EFA).
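Retention against this criterion is usually reported as a log reduction value (LRV). A minimal sketch, assuming the common convention of scoring a sterile effluent as 1 cfu so that the result is a lower bound (the cartridge area is illustrative):

```python
import math

def log_reduction_value(challenge_cfu, effluent_cfu):
    """LRV = log10(total challenge / total effluent count).
    A sterile effluent (0 cfu) is conventionally scored as 1 cfu,
    making the computed LRV a lower bound on the true value."""
    return math.log10(challenge_cfu / max(effluent_cfu, 1))

# A 1 x 10^7 cfu/cm^2 challenge on a hypothetical 1000 cm^2 cartridge
# with sterile effluent corresponds to an LRV of at least 10:
print(log_reduction_value(1e7 * 1000, 0))  # 10.0
```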
Although it isn’t the smallest organism known, B. diminuta was considered diminutive enough to represent whatever smaller organisms were likely to be present in pharmaceutical preparations. The smaller the test organism, goes the logic, the more likely that its removal by a filter would assure the sieve retention of larger organisms.
However, ease and safety of cultivation and handling are also important considerations. In 2001, Sundaram and colleagues found an increasing number of cases where filtration through 0.2/0.22-µm-rated membranes failed to yield sterile effluent [10]. Experimental studies showed that penetrating organisms had shrunken because they had been cultivated in broths that were nutritionally inadequate. In such cases, the physicochemistry of the suspending fluid may serve to alter the size of the suspended organisms, as expressed by the Donnan equilibrium consequent to ionic strengths.

Organism Shrinkage During Processing

Sundaram’s team [11] also found that organisms underwent size changes after exposure to certain drugs. With 0.2-µm-rated membranes, the researchers found, sterile effluent and/or a high titer reduction could be achieved for certain organisms only for limited lengths of time before penetration occurred.
Penetration times varied from 24 to 96 hours, and the cumulative challenge at which penetration was first observed ranged from 1.2 x 10^7 to 1.1 x 10^8 cfu/cm2. Two 0.2-µm-rated Nylon-66 filters in series were unable to fully retain Ralstonia pickettii (formerly Burkholderia pickettii), with penetration observed at 72 hours, corresponding to a cumulative challenge of 2.4 x 10^7 cfu/cm2. The more extensive penetration of the Nylon-66 membranes, compared with the PVDFs, is in keeping with their greater degree of openness, as Krygier and colleagues showed in 1986 [12].
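Cumulative challenge figures like those above are just concentration times volume processed, per unit of filter area. A sketch with purely hypothetical flow, concentration, and disc-area values:

```python
def cumulative_challenge(conc_cfu_per_ml, flow_ml_per_min, hours, efa_cm2):
    """Cumulative challenge (cfu/cm^2) delivered to a filter:
    concentration x total volume processed / effective filtration area."""
    volume_ml = flow_ml_per_min * 60 * hours
    return conc_cfu_per_ml * volume_ml / efa_cm2

# Hypothetical numbers: 40 cfu/mL at 10 mL/min for 72 h through a
# 17.3 cm^2 disc accumulates on the order of 10^5 cfu/cm^2.
print(f"{cumulative_challenge(40, 10, 72, 17.3):.2e}")
```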
As a result, it has been suggested that 0.1-µm-rated membranes be substituted for their 0.2-µm-rated counterparts.
Sundaram’s team evaluated five 0.1-µm-rated membranes and found that they yielded sterile effluent over the entire duration of the test (120-196 hours), up to challenge levels of 5.7 x 10^7 to 2.0 x 10^8 cfu/cm2. Similar results were obtained with the PVDF filters tested; no B. pickettii were detected at challenge levels of 5.9 x 10^7 to 6.0 x 10^8 cfu/cm2.
In addition, all 0.1-µm-rated filters tested provided consistent and complete retention of B. pickettii for the entire duration of the test (120-192 hours), suggesting that the smaller pores would ensure sterile product at conditions where penetration could occur through conventional 0.2- and 0.22-µm-rated sterilizing grade filters. Proponents argue that using 0.1-μm-rated membranes would permit longer term formulation and filtration operations. In fact, 0.1 µm-rated filters may be the best choice for long-term filtrations.
However, penetration has also been found in 0.1 µm-rated filters. In 1999, Sundaram’s team found that B. pickettii, when its size was so affected, could be retained by certain 0.1-µm-rated filters. But, in a similar situation, they found that only four of seven commercially available 0.1-μm-rated membranes could remove a particular organism. Just because one type of membrane so classified may provide proper retention, does not mean that any other 0.1-µm-rated membrane can also be depended upon for a like result.
It is important to remember that today there are no industry standards by which 0.1-µm-rated filters can be judged. In addition, more research clearly needs to be done on the kinetics behind the organisms' size changes, evaluating different organisms in different fluids.
Based on available data, long term filtrations may best be handled by 0.1-µm-rated filters, subject to validations being performed. However, in other cases, substituting 0.1-µm-rated for 0.2-µm-rated membranes may be unnecessary, and could result in significant penalties, including:
 Slower flow and processing rates, resulting in longer operations
 Higher costs for larger EFAs
 More leachables and extractables
 Higher product losses, due to adsorptive bonding to the ultimately greater filter area used
A responsible choice requires that both the 0.1 µm-rated membranes and the 0.2 µm-rated membranes be validated.
If both types of filter prove appropriate, the higher pore size rating should be used to avoid the penalties of reduced flows. If, however, the validation data do not permit a clear resolution, the 0.1 µm-rated membranes should be used, since retention is more critical than flow rate or flux.
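The selection rule in the preceding two paragraphs can be sketched as a small decision function. This is illustrative only; the argument names and return labels are hypothetical:

```python
def select_filter_rating(validated_02um, validated_01um):
    """Illustrative encoding of the rule discussed above: prefer the
    0.2-um rating when both membranes validate (avoiding flow
    penalties); otherwise use whichever validates; if neither result
    is clear, favor 0.1 um, since retention outweighs flow rate."""
    if validated_02um and validated_01um:
        return "0.2 um"   # both pass: higher rating avoids flow penalties
    if validated_02um:
        return "0.2 um"   # only the 0.2-um membrane is validated
    if validated_01um:
        return "0.1 um"   # only the tighter membrane retains the bioburden
    return "0.1 um"       # no clear resolution: retention is more critical

print(select_filter_rating(True, True))   # 0.2 um
print(select_filter_rating(False, True))  # 0.1 um
```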
Below, we address some of the common sterile filtration concerns, requirements or practices that appear to be motivated by fear and can best be resolved by careful process validation.
“0.2-µm filters are penetrated by organisms. The industry is, therefore, required to switch to 0.1-µm-rated filters.”
In certain specific processes, 0.2-µm-rated filters can be penetrated by organisms, including organisms which would normally be retained by such filters. In such cases, the filtrative removal of the organisms may well require the use of 0.1-µm-rated filters. Such instances are not new. Their occurrences have been considered by regulators for years, at least since the PDA and FDA held a special forum on this topic in 1995 [13].
   
Certain organisms, such as Burkholderia pickettii, Burkholderia cepacia, and Pseudomonas aeruginosa, shrink as a result of their immersion in fluid media that are only minimally nutritious for them [14]. Their reduction in size invalidates validations that use B. diminuta as a model. Brevundimonas diminuta can undergo shape alterations in minimally nutritious media but is not reported to undergo size alterations occasioned by contact with process fluids.
The fact that some microbes require 0.1-µm-rated filters to arrest them does not signify that all organisms are so disposed. The necessitated switch from 0.2-µm-rated to 0.1-µm-rated filters occurs in only roughly 0.005%-0.01% of sterilizing grade filtration applications.
A mandated switch is therefore scientifically and statistically unfounded. Its promulgation should be shunned, with process validation activities and data used instead as performance verification. Sole reliance on pore size ratings has, in any case, been found obsolete.
“Increasingly there are detectable but non-culturable organisms or L-forms or nano-bacteria in our processes.”
Conclusions cannot be made regarding the sterile filtration of microorganisms unless the methods of quantifying them by culturing and counting are available. Organisms such as the L-forms, nanobacteria, and “viable but non-culturable” entities may not be amenable to such analyses. Concerns about their presence may be justified, but without the means to cultivate and count them, it is impossible to attest to their complete absence.
It follows that a sterilizing filter can be judged only by its performance in the removal of identifiable and culturable organisms known to be present in the drug preparation [15]. The complex of influences governing the outcome of an intended sterilizing filtration necessitates a careful validation of the process, including that of the filter [4]. The very drug preparation of interest, the exact membrane type, the precise filtration conditions, and the specific organism type(s) of concern must be employed in the necessary validation.
“Redundant 0.2-µm filtration is necessary and should be used.”
Not necessarily. Again, proper process validation will disclose whether a single filter will do the job or not. However, there are some specific applications which traditionally, for whatever reason, utilize a second (redundant) filter as an “insurance filter,” i.e. if the first filter fails, the second may compensate. This holds, however, only when each filter has been validated to show specified retentivity.
Even so, the wisdom of the exercise deserves careful evaluation, as it incurs added costs for membrane EFA and increased leachables and extractables. The loss of drug product may needlessly be increased by the second filter’s heightened product hold-up and unspecified adsorption.
    
“The maximum bioburden in front of a sterilizing filter should be 10 cfu per 100 mL of fluid.”
This is true if one wishes to accord with EMA regulations, and especially if one wishes to export product to Europe. The FDA makes no such stipulation, but bases its approval on process validation.
Seemingly in conflict, the two views arise from the same premise. The EMA regulation tries to establish the same sterility assurance level (SAL) for filtration as for thermal sterilization. EMA recognizes that, the greater the number of challenges, the more likely that at least one will succeed.
The FDA seems to agree, in that if the filter can sustain the removal of organism burdens far above those liable to be encountered in real life situations, it can assuredly withstand lesser insults. If, as the authors see it, the FDA’s massive challenge fails to breach the filter’s pores, it is needless to compel bioburden assessment in front of the filter. Filter validation would gainfully serve the intended purpose. Process validation, effectively conducted, would reliably demonstrate the filter action.
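The underlying reasoning, that more challenges make at least one success more likely, is simple binomial arithmetic. A sketch with a purely hypothetical per-organism penetration probability:

```python
def prob_any_penetration(p_single, n_organisms):
    """Probability that at least one of n independent challenge
    organisms penetrates, given per-organism probability p_single:
    P = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_single) ** n_organisms

# Hypothetical per-organism probability of 1e-9: the risk compounds
# with the number of organisms presented to the filter.
low_bioburden  = prob_any_penetration(1e-9, 10)      # ~10 cfu/100 mL regime
mass_challenge = prob_any_penetration(1e-9, 10**7)   # validation-scale insult
print(f"{low_bioburden:.1e} vs {mass_challenge:.4f}")
```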
“I need an absolute 0.1- or 0.2-µm-rated filter.”

As a former FDA authority, since retired, once observed, “The word ‘absolute’ should be used only in conjunction with vodka.” Absoluteness implies a complete independence from conditions, an inherent ability to retain particles larger than the filter’s pore size rating, regardless of any other considerations. Without a complete knowledge of the properties of the particles and filter pores at our disposal, the statement is devoid of technical significance or guidance. It may, perhaps, be used in ignorance (although cynics may suspect that its utility derives from marketing efforts, a practice not unknown in the competitive world of sales).

Control vs. Fear

As Sandman elucidated, human beings like to be in control, and, if this status cannot be achieved, may move rapidly to fear. Unfortunately, when sterile filtration is concerned, fear can result in the installation of wasteful, unnecessary safety nets that can create more problems than they solve.
Being in control is the desired state, and such control can only come from process validation studies. Their authority is at least as old as Lord Kelvin’s basic scientific principle, “When you can measure what you are speaking about, and can express it in numbers, you know something about it.”
It speaks to validation. In sterile filtration, as in most areas of pharmaceutical manufacturing, science-based validation is the best cure for fear.

References
1. Hessler, A., Sandman, P.M. Squeaky Clean? Not Even Close. http://www.nytimes.com/2004/01/28/dining/squeaky-clean-not-even-close.html?sec=health?pagewanted=1
2. FDA. Guideline on General Principles of Process Validation, FDA CDER, 1987.
3. Agalloco, J.P. “Compliance Risk Management Using a Top-Down Validation Approach,” Pharmaceutical Technology, July 2008.
4. PDA Technical Report 26 (2008), Sterilizing Filtration of Liquids, Parenteral Drug Association, Bethesda, MD.
5. Ridgway, H.F., Rigby, M.G., and Argo, D.G. “Adhesion of a Mycobacterium to Cellulose Diacetate Membranes Used in Reverse Osmosis.” Applied and Environmental Microbiology 47, 1984, pp. 61-67.
6. Tolliver, D.L. and Schroeder, H.G. “Particle Control in Semiconductor Process Streams.” Microcontamination (l), 1983, pp. 34-43 and 78.
7. Bowman, F.W, Calhoun, M.P. and White, M. “Microbiological Methods for Quality Control of Membrane Filters.” J. Pharm. Sci., 56/2, 1967, pp. 453-459.
8. Leahy, T.J., Sullivan, M.J. “Validation of Bacterial Retention Capabilities of Membrane Filters.” Pharmaceutical Technology 2(11), 1978, pp. 64-75.
9. FDA. Guideline on Sterile Drug Products Produced by Aseptic Processing, FDA CDER, 1987.
10. Sundaram, S., Eisenhuth, J., Howard Jr., G.H., and Brandwein, H. “Part 1: Bacterial Challenge Tests on 0.2 and 0.22 Micron Rated Filters.” PDA Journal of Pharmaceutical Science and Technology, 55 (2), 2001, pp. 65-86.
11. Sundaram, S., Auriemma, M., Howard Jr., G.H., Brandwein, H., and Leo, F. “An Application of Membrane Filtration for Removal of Diminutive Bioburden Organisms in Pharmaceutical Products and Processes,” PDA Jour. Pharm. Sci. and Technol. 53 (4), 1999, pp. 186-201.
12. Krygier, V. Rating of Fine Membrane Filters Used in the Semiconductor Industry, Transcripts of Fifth Annual Semiconductor Pure Water Conference, (1986), pp. 232-251, San Francisco, CA
13. PDA/FDA Special Scientific Forum, Bethesda, MD; Validation of Microbial Retention of Sterilizing Filters, July 12-13, 1995.
14. Mittleman, M.W., Jornitz, M.W., Meltzer, T.H., “Bacterial Cell Size and Surface Charge Characteristics Relevant to Filter Validation Studies,” PDA Jour. of Pharm. Sci. and Technol. 52 (1), 1998, pp. 37-42.
15. Agalloco, J. Letter to the Editor, re: “It just doesn’t matter, It just doesn’t matter, It just doesn’t matter.” PDA Journal of Pharmaceutical Science and Technology, 52 (3), 1998, pp. 149-150.

How to Set Up Your Own Aseptic Laboratory

The Aseptic Room
All openings except those needed for entry must be perfectly sealed, and horizontal surfaces, including window sills, should be eliminated. Such surfaces retain dust and can create problems.
The structure should be tight enough to prevent infiltration of uncontrolled air. All exposed surfaces should be smooth and impervious, easily cleanable and in no way prone to settling of dust upon them.
The surfacing material should not be susceptible to holding dust, or to flaking or chalking under normal operating conditions. Uncoated smooth surfaces, e.g. stainless steel, aluminium and chromium plating, plastic laminates, and plastic films, make satisfactory surfaces. Cement concrete is a very undesirable surface.
Air-conditioned Atmosphere
In view of the sealed structure, which prevents any entry of air, air conditioning is essential. Since no special requirements for temperature and humidity are prescribed, conventional air conditioning may be suitable. It is, however, preferable to keep the humidity on the lower side to avoid contamination from the perspiration of the workers.
The air cleaning part of the air conditioning system is a critical factor, as the nature of the atmospheric air may differ considerably from time to time. Cooling and heating coils, humidity control apparatus, reheat coils, blowers, etc. should be of standard specifications.
However, the fans selected should be capable of providing high pressure. The room has to be maintained constantly under positive pressure to prevent ingress of air through the entry point whenever it is opened. Dust, temperature, and humidity control are interdependent functions.
Dust control is impossible without confining the area. Confined space is unlivable without air-conditioning. If temperature and humidity are controlled by air conditioning, dust control measures are automatically taken care of.
Cleaning of Air
Cleaning of air is the key factor in aseptic processing. The usual approach combines conventional cleaners, e.g. regular filters or electronic air cleaners located within the system, with some kind of super-interceptor or ultra cleaner located downstream from all coils, blowers, etc.
Bactericidal equipment is incorporated in the assembly, in addition to the devices stated above, to provide sterile air. The air in the aseptic area should be free from fibers, dust, and microbes. This can be conveniently achieved by the use of High Efficiency Particulate Air (HEPA) filters, which can remove particles down to 0.3 µm with an efficiency of 99.97% or more.
HEPA filters are used in laminar air flow, in which air moves with uniform velocity along parallel lines with a minimum of eddies. The air flow can be either horizontal or vertical, and 100 ± 10 ft/min is considered the minimum effective air velocity. Such laminar flow stations and work benches are commercially available and find immense use in compounding and dispensing practice.
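Penetration through filter stages in series multiplies, which is why a coarse prefilter ahead of the HEPA stage extends its life and cuts carry-through. A minimal sketch (the particle counts are illustrative):

```python
def downstream_concentration(upstream, efficiencies):
    """Particle concentration surviving a series of filter stages;
    each stage with fractional efficiency e passes a (1 - e) fraction."""
    for e in efficiencies:
        upstream *= (1.0 - e)
    return upstream

# One HEPA stage at 99.97% efficiency for 0.3-um particles:
print(downstream_concentration(1_000_000, [0.9997]))        # ~300 particles
# A 90% prefilter ahead of the HEPA cuts carry-through tenfold:
print(downstream_concentration(1_000_000, [0.90, 0.9997]))  # ~30 particles
```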
Air Distribution
Materials of construction for the air distribution system (e.g. ducts, air outlets, etc.) should be non-rusting and non-flaking. Duct insulation, if needed, should be applied on the outer side and only on ducts located outside the clean area. Joints and other fittings, if any, should be sealed to prevent leakage and contamination. Air distribution is also employed for 'washing' dust off workers who enter the sterile area. These 'air showers' are simply blasts of air directed at the person to remove dust.
Controls
Conventional controls are used in the system for regulating humidity and temperature. Sometimes greater pressure may have to be maintained in critical areas and for this purpose additional controls may have to be installed. Interlocks may be required between the sterile area and the entrance or passing doors. In a highly elaborate system, alarm circuits are introduced to warn against the malfunctioning or inadvertent misuse of the air locks.
Instruments
Highly sophisticated instrumentation has to be installed which constantly or intermittently monitors and samples the clean atmosphere for analyzing its cleanliness. These devices immediately indicate the contamination, if any.
Furniture
Seating, work tables, racks, etc. are essential items of furniture inside the clean area and have to be specially designed and built to meet rigid specifications. Adjusting mechanisms on chairs should either not exist or ought to be sealed, as they are potential dust-settling surfaces.
Conventional upholstering of the chairs is completely out of the question. Dust-catching surfaces should be kept to a minimum, and all parts should be readily cleanable and resistant to the action of cleaning agents. Worker comfort is a very important factor in a confined area. The requirements of the job in a sterile room keep workers at their tasks continuously over long periods, with minimal exits from the room, and thus warrant a high degree of comfort for carrying out the critical operations.
Special jigs, fixtures, and tools are developed for specific purposes, but many operations need dust-free hoods: miniature aseptic rooms located on the working bench. Even when aseptic hoods or chambers are located in dust-free rooms, each may have to be provided with a supply of pressurized air, either super-cleaned air or an inert gas. These hoods also have to be independently illuminated.
There are several important factors to consider when compounding a sterile ophthalmic preparation. Eyes are very sensitive to heat, light, drugs, and chemicals. In many cases, the drugs involved have a narrow therapeutic range, and even small errors have the potential to cause irreversible damage to the eye or loss of eyesight. The following considerations are recommended whenever preparing such a product.
1. Ensure the concentration is within the acceptable range before dispensing the product.
2. Sterility of the final product is a must; it must be handled strictly in an aseptic area.
3. The pH of the final product must be within an acceptable range.
4. Stability of the final product must be known, as well as the recommended storage requirements.
5. Suitable knowledge of potential diluents or vehicles is required in order to ensure proper tonicity, viscosity, or dissolution of the final product.
6. Proper documentation of each step is an important consideration to reduce error.
7. If the preparation of a product requires the breaking of an ampoule or the reconstitution of a powder, it is recommended that the final product be made with sterile water for injection and be free from particulate matter.
8. The preparation of intraocular products requires the use of preservative-free ingredients. Many preservatives have been found to be toxic to the inner ocular tissues.
9. Finally, before dispensing the finished product, always indicate the storage requirements, concentrations of ingredients, and the expected expiration date.
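For the tonicity consideration in item 5, pharmacists commonly use the sodium chloride equivalent (E-value) method: each ingredient contributes toward the 0.9% w/v NaCl target, and the deficit is made up with NaCl or an equivalent tonicity agent. A sketch, with a hypothetical drug and E-value:

```python
def nacl_needed_for_isotonicity(volume_ml, drugs):
    """Sodium chloride equivalent (E-value) method: each drug
    contributes grams x E toward the 0.9 g/100 mL NaCl target;
    the remaining deficit is supplied as NaCl.
    `drugs` is a list of (grams, e_value) pairs."""
    target_g = 0.009 * volume_ml              # 0.9% w/v NaCl
    contributed = sum(g * e for g, e in drugs)
    return max(target_g - contributed, 0.0)

# Hypothetical example: 30 mL containing 0.3 g of a drug with E = 0.18
# needs 0.216 g of NaCl to reach isotonicity.
print(round(nacl_needed_for_isotonicity(30, [(0.3, 0.18)]), 3))  # 0.216
```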

While sterility testing can be cumbersome and time consuming, there are FDA-approved options regarding container and closure system integrity testing. In fact, the Food and Drug Administration (FDA) recently released a guidance document on container and closure system integrity testing in lieu of sterility testing as a component of the Stability Protocol for Sterile Products. The document offers an alternative approach, if the approach satisfies the requirements of the applicable statutes and regulations.

Aimed at manufacturers, the guidance offers alternative testing methods other than sterility testing to confirm container and closure system integrity as a part of the stability protocol for sterile biological products, human and animal drugs, and medical devices.

The purpose of stability testing is to provide evidence on how the quality of a substance or product varies with time under the influence of a variety of environmental factors such as temperature, humidity and light. Products labeled as sterile are expected to be free from viable microbial contamination throughout the product's entire shelf life or dating period. This enables manufacturers to establish or modify recommended storage conditions, retest periods and shelf life or dating period, as the case may be.

Currently, manufacturers of drugs and biologics purporting to be sterile are required to test each batch or lot, to ensure that the product in question conforms to sterility requirements. They must also maintain a written testing program designed to assess stability characteristics and meet stability testing requirements.

Manufacturers of medical devices are required to validate processes, including sterilization, for a device purporting to be sterile, although stability testing should be part of the design validation of such devices. Also, in vitro diagnostic products for human use are required to be labeled with stability information.

The minimum sterility testing generally performed as a component of the stability protocol for sterile products is at the initial time point (release) and final testing interval (i.e., expiration). Additional testing is often performed at appropriate intervals within this time period (e.g., annually).

Alternatives to sterility testing as part of the stability protocol, such as replacing the sterility test with container and closure system integrity testing, might include any properly validated physical or chemical container and closure system integrity test (e.g., bubble tests, pressure/vacuum decay, trace gas permeation/leak tests, dye penetration tests, seal force or electrical conductivity and capacitance tests, etc.), or microbiological container and closure system integrity tests (e.g., microbial challenge or immersion tests).

Such tests may be more useful than sterility testing in demonstrating the potential for product contamination over the product's shelf life or dating period. The advantages of using such container and closure system integrity tests in lieu of sterility tests in the stability protocol for sterile products include: detecting a breach of the container and/or closure system prior to product contamination; conserving samples that may be used for other stability tests; requiring less time than sterility test methods, which require at least seven days' incubation; and reducing false positive results with some alternative test methods when compared to sterility tests.

To implement container and closure system integrity testing as an alternative to sterility testing, the FDA recommends manufacturers consider the following: a container and closure system integrity test may replace sterility testing in a stability program at time points other than the product sterility test prior to release; container and closure system integrity tests do not replace sterility testing methods for product sterility testing prior to release; any validated container and closure system integrity test method should be acceptable provided the method uses analytical detection techniques appropriate to the method and is compatible with the specific product being tested.

Friday, December 16, 2011

TOOLS OF THE TRADE - Analytical Instrumentation | The Power of FCS





Samara Kuehne
ISS makes the Alba-FCS dual-channel spectrometer, which combines a confocal scanning microscope with FCS.

Newer fluorescence correlation spectroscopy instruments focus on portability, affordability, and ease of use

Fluorescence correlation spectroscopy (FCS), originally developed in the early 1970s for use in physics and physical chemistry, has recently been applied to the fields of drug discovery and development, with powerful results.
Early FCS equipment used in physics labs was large, with bulky lasers that were not necessarily user friendly. Since 2005, instrumentation with specific application to the drug discovery field has entered the marketplace, and products released since have increasingly focused on reducing the size of a unit’s footprint to make it easier to use in the lab.

FCS Technology

FCS is an extremely effective tool for ultrasensitive measurements. The technique measures and correlates fluctuations in fluorescence intensity within a very small volume element, allowing for single-molecule inspection. A sharply focused laser illuminates this small element, and single molecules diffusing through the illuminated confocal volume produce bursts of fluorescent light. Each burst is recorded by a single-photon detector and analyzed via autocorrelation.
This autocorrelation provides data on concentration, the diffusion time of the individual molecules, and each molecule's brightness, which allows slower- and faster-diffusing particles to be differentiated. Ultimately, binding and catalytic activity are calculated from the diffusion times and the ratio of faster to slower molecules.1,2
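The correlation step described above can be sketched in a few lines of code. The trace, seed, and parameters below are invented purely for illustration; real FCS instruments compute this from photon-count records in hardware or vendor software.

```python
import math
import random

def autocorrelation(intensity, max_lag):
    """Normalized autocorrelation G(tau) = <dI(t) dI(t+tau)> / <I>^2 of an intensity trace."""
    n = len(intensity)
    mean = sum(intensity) / n
    d = [x - mean for x in intensity]          # intensity fluctuations
    g = []
    for lag in range(1, max_lag + 1):
        cov = sum(d[t] * d[t + lag] for t in range(n - lag)) / (n - lag)
        g.append(cov / mean**2)
    return g

# Simulated photon-count trace: shot-like noise around a slowly fluctuating signal,
# mimicking molecules drifting through the confocal volume
random.seed(0)
trace = []
for t in range(5000):
    signal = 100 + 20 * math.sin(t / 200)                      # slow fluctuation
    trace.append(signal + random.gauss(0, math.sqrt(signal)))  # Poisson-like noise

G = autocorrelation(trace, 50)   # positive at short lags for correlated fluctuations
```

Because the shot noise is uncorrelated between samples while the underlying signal is not, G(tau) at short lags reflects only the slow fluctuations, which is why the technique can pick single-molecule diffusion out of a noisy trace.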

Pharmaceutical Uses

In drug delivery and discovery, FCS is used to measure the quantity and distribution of a drug in the nanoparticles used to deliver it to its target and, ultimately, to determine how fast binding occurs and how low a drug concentration is needed to achieve effective binding. Study results published in 2010 concluded that FCS, used in combination with a confocal microscope, could in fact determine diffusion constants and concentrations of fluorescent molecules.3
FCS can be used to study molecular and cellular interactions in homogeneous assays. In drug discovery and production, FCS can be used to develop assay targets for protein-protein interaction, protein-nucleic acid interaction, kinase activation by complexing, and host-cell contamination.
The technology can be used to detect both direct binding and competitive inhibition of binding, a newer method of measurement that makes FCS useful in drug research and drug delivery analysis.
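To connect the binding measurements above to the autocorrelation curve, here is a sketch of the standard two-component 3D diffusion model commonly fitted to FCS data to extract a bound (slow) fraction. The molecule number, diffusion times, and structure parameter below are hypothetical values chosen for illustration.

```python
import math

def g_two_component(tau, n, f_slow, tau_fast, tau_slow, s=5.0):
    """Two-component 3D diffusion FCS model (standard textbook form).

    n        -- mean number of molecules in the confocal volume
    f_slow   -- fraction of slowly diffusing (e.g., bound) molecules
    tau_fast -- diffusion time of the free species
    tau_slow -- diffusion time of the bound species
    s        -- structure parameter (axial/lateral radius of the focal volume)
    """
    def g1(tau_d):
        return 1.0 / ((1 + tau / tau_d) * math.sqrt(1 + tau / (s * s * tau_d)))
    return ((1 - f_slow) * g1(tau_fast) + f_slow * g1(tau_slow)) / n

# Amplitude at short lag is ~1/N; the curve decays as molecules diffuse out
short = g_two_component(1e-6, n=10, f_slow=0.3, tau_fast=1e-4, tau_slow=1e-2)
long_ = g_two_component(1.0, n=10, f_slow=0.3, tau_fast=1e-4, tau_slow=1e-2)
```

Fitting this model to a measured curve yields f_slow directly, which is the "ratio of faster and slower molecules" from which binding activity is computed.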

Instrumentation

There are several devices currently available on the FCS market. Carl Zeiss (Jena, Germany) offers the ConfoCor 3, which is designed to be paired with one of the company’s laser scanning microscopes. The unit’s software controls the detection module and can analyze single or multiple measurements and allow auto- and cross-correlation to be calculated at the same time as the current measurement. Software upgrades to the unit include options for photon-counting histograms and for defining start values and boundaries.
The ConfoCor 3 by Carl Zeiss is designed to be paired with one of the company’s laser scanning microscopes.
ISS (Champaign, Ill.) manufactures the Alba FCS dual-channel spectrometer, which combines a confocal scanning microscope with FCS. Its light sources are either single-photon or multi-photon lasers, and the device can be interfaced with Leica, Nikon, Olympus, or Zeiss epi-fluorescence microscopes. The Alba can acquire data either in time mode, which counts photons acquired in time intervals, or in photon mode, which measures the time delay between photons and builds a histogram.
Sensor Technologies’ (Shrewsbury, Mass.) QuantumXpert FCS Spectrometer includes both optical and electronic components. The unit’s confocal microscope optics are built in to the system, so it doesn’t require them as external add-ons. The result is a much smaller and more portable system that does not require daily realignment exercises, one that can cost less than other traditional FCS systems. The device also features a data analysis software system designed to simplify the task of analyzing FCS measurement data, offering single and batch correction algorithms, along with curve-fitting methods for both fluorescence correlation data and photon-counting histograms. It is available with either a manual or an automated sample changer.
Corrvus (Spokane, Wash.) is also developing a portable FCS system with a lower price tag than more traditional models. This unit, available in the next 12 months and designed for ease of use in the lab, will not need a dark room to develop the imaging.

The Future of FCS

Research over the past 10 years has shown FCS to be a powerful tool in drug discovery and delivery. It allows for single-molecule observation and can specifically measure how one molecule interacts with another. Parijat Sengupta, PhD, research assistant professor at Washington State University in Spokane and consultant to Corrvus, thinks “FCS might play a very important role in personalized medicine,” a model that emphasizes therapeutic care and treatment plans tailored to specific individuals. With the technology’s specific application to pharmaceuticals still in its relative infancy, its versatility and allowance for extremely sensitive measurements could pave the way for some exciting developments in drug formulation.

References

  1. Seethala R. Homogeneous assays for high-throughput and ultrahigh-throughput screening. In: Seethala R, Fernandes PB, eds. Handbook of Drug Screening. New York: CRC Press; 2001:94-95.
  2. Schwille P, Haustein E. Fluorescence correlation spectroscopy: an introduction to its concepts and applications. Biophysics Textbook Online. 2003. Available at: www.dpi.physik.uni-goettingen.de/Praktika/Biophysik/Versuche/2006w/Fluoreszenzkorrelationsspektroskopie-Literatur-Schwille_Haustein.pdf. Accessed Oct. 16, 2011.
  3. Jung CC, Polier S, Schoeffel M, Drechsler M, Jerome V, Freitag R. Fluorescence correlation spectroscopy as a quantitative tool applied to drug delivery model systems. Nature Precedings. 2010. Available at: http://hdl.handle.net/10101/npre.2010.4140.1. Accessed Oct. 16, 2011.

QUALITY CONTROL - Method Validation | Means to a Method



Cliff Nilsen

A step-by-step guide for proper validation

Part 4 of 4: This final installment completes our series on method validation, continuing the discussion of the mechanics of the process.

Read the Complete Series

  • Part 1, PFQ February-March 2010: Described high-performance liquid chromatography (HPLC) procedures for demonstrating that a method is stability indicating through the use of forced degradation studies and evaluation of peak purity using a photodiode array UV detector.
  • Part 2, PFQ June-July 2010: Defined each of the other method validation components—selectivity; linearity and range; accuracy and recovery; assay precision; intermediate precision; limit of detection; limit of quantitation; ruggedness, robustness, and comparative studies—with the primary focus on assay methods.
  • Part 3, PFQ October-November 2010: Began the description of how to perform method validation for each validation component.
A placebo is spiked with different levels of active ingredient standard, bracketing the normal working concentration of the method. The amount of standard recovered at each level is compared with that added.
As with linearity, prepare a series of five standards spanning 50% to 150% of the analyte working concentration, except that each standard contains placebo in an amount proportional to the placebo/active ratio of the drug product for which the method is being validated. For example, if a tablet weighing 500 mg contains 50 mg of active pharmaceutical ingredient (API), the remaining 450 mg is placebo.
Table 1. API Linearity Standards
To perform the accuracy portion of the validation, assuming a working concentration of 0.1 mg/mL, make 200 mL of a solution of active in a suitable solvent, having a concentration of 1.0 mg/mL (10 times the working concentration). This is the active stock solution. Next, obtain a small quantity of placebo (drug product minus active). Prepare the five working standards according to Table 1.
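The five standard preparations can be tabulated programmatically. The per-standard final volume below is an assumption, and the computed volumes and placebo amounts are a hypothetical reconstruction consistent with the stated 0.1 mg/mL working concentration, 1.0 mg/mL stock, and 9:1 placebo/active tablet ratio; the actual Table 1 values may differ.

```python
# Hypothetical reconstruction of the spiked-standard preparations described above
working_conc = 0.1           # mg/mL working concentration
stock_conc = 1.0             # mg/mL active stock solution
final_volume = 100.0         # mL per standard (assumed)
placebo_ratio = 450 / 50     # mg placebo per mg active (500 mg tablet, 50 mg API)

standards = []
for level in (50, 75, 100, 125, 150):            # percent of working concentration
    conc = working_conc * level / 100            # target mg/mL at this level
    stock_ml = conc * final_volume / stock_conc  # mL of stock to pipette
    placebo_mg = conc * final_volume * placebo_ratio  # placebo to add, proportionally
    standards.append((level, round(stock_ml, 1), round(placebo_mg, 1)))

for level, stock_ml, placebo_mg in standards:
    print(f"{level}%: {stock_ml} mL stock + {placebo_mg} mg placebo, dilute to {final_volume} mL")
```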
Using the method being validated (assume it is an HPLC method), inject each spiked standard three times. For each spiked standard (50%, 75%, 100%, 125%, and 150% of the working concentration, each containing a proportional amount of placebo), calculate the percent relative standard deviation (%RSD) of the peak areas to determine injection precision at each level. Then use the mean active concentration found at each of the five levels to compute the percent recovery from the placebo at each level.
In most cases, it is desirable to have an injection precision, in terms of area unit RSD, of less than 2% for each standard level, i.e., 50% to 150% of the working concentration. Recovery (accuracy) limits vary with method requirements, but are usually considered acceptable if the recovery of spiked active from the placebo at each level is between 98% and 102%.
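The two acceptance calculations described above, injection precision and recovery, reduce to a few lines; the peak areas and concentrations below are invented examples, not data from any real validation.

```python
import statistics

def percent_rsd(values):
    """Percent relative standard deviation: sample SD / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100

def percent_recovery(found, added):
    """Recovery of spiked active from the placebo, as a percentage."""
    return found / added * 100

# Hypothetical triplicate peak areas at the 100% spike level
areas = [10250, 10310, 10280]
assert percent_rsd(areas) < 2.0        # injection precision criterion (<2 %RSD)

# Hypothetical mean found concentration vs. amount spiked (mg/mL)
rec = percent_recovery(found=0.0992, added=0.1000)
assert 98.0 <= rec <= 102.0            # typical recovery acceptance window
```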

Assay Precision

Multiple (six) sample preparations are made from a single homogeneous sample, and the six separate preparations are assayed, using the method under validation, versus a freshly prepared standard. Precision between individual assay results is calculated and expressed as %RSD. An assay precision of not more than 2% is generally considered acceptable for assays. Some applications, such as residual solvent determinations or trace analysis, will have different acceptance criteria.

Limit of Detection

The limit of detection (LOD) of the analytical method is determined by comparing the test results obtained from samples with known concentrations of analyte against those of blank samples and establishing the minimum level of analyte that can be detected. There are a number of ways to express LOD, including a multiple of noise level, a minimum area count %RSD, or a fixed percentage of the lower limit of the linearity curve (50% lowest level for example). LOD is of little importance for assay determinations but is of great importance to applications such as impurity analyses or determination of trace solvent levels.
Experimentally, LOD can be determined by serial dilution of a working standard until the sample peak is indistinguishable from baseline noise.

Limit of Quantitation

The limit of quantitation (LOQ) of the analytical method is the lowest level at which analyte can be reliably measured. Some common definitions of LOQ: three times the LOD and a level at which the %RSD of injection precision is less than 5%. As with LOD, LOQ can be determined by serial dilutions of a standard. LOQ is important in determination of trace components such as impurities and residual solvents.
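As a rough numerical sketch of the definitions above, the following estimates LOD and LOQ from blank-injection noise and a calibration slope. This is an ICH-style signal-to-noise approach rather than the serial-dilution procedure the article describes, and all figures are hypothetical; the article's 3x rule of thumb is shown alongside for comparison.

```python
import statistics

# Hypothetical baseline noise from six blank injections (peak-area units)
blank_areas = [4.1, 3.8, 4.5, 3.9, 4.2, 4.4]
slope = 102000.0        # calibration slope, area units per mg/mL (assumed)

noise_sd = statistics.stdev(blank_areas)
lod = 3.3 * noise_sd / slope    # ICH-style LOD estimate, mg/mL
loq = 10 * noise_sd / slope     # ICH-style LOQ estimate, mg/mL

# The article's rule of thumb, LOQ ~ 3 x LOD, gives a similar magnitude
loq_rule = 3 * lod
```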

Ruggedness and Intermediate Precision

The ruggedness of an analytical method is determined by analyzing multiple samples from homogeneous lots. Samples from the same lot are assayed in hextuplicate (six times) using six sample preparations (six assays). Separate sets of six assays from the same homogeneous sample are performed by different chemists on different days, using different columns, different instruments (if possible), and different standard preparations. The %RSD of each set of assay results for each chemist should be no greater than 2.0, and the pooled %RSD for all 12 assays should be no greater than 2.0.
As a point of information, ruggedness (inter-lab precision) refers to performing each of the two assay sets in different labs, whereas intermediate precision (intra-lab precision) refers to performing each of the two assay sets in the same lab.
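The two-chemist design above reduces to three %RSD calculations: one per six-assay set and one pooled over all 12 results. The assay values below are invented solely to illustrate the acceptance check.

```python
import statistics

def percent_rsd(values):
    """Percent relative standard deviation: sample SD / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# Hypothetical assay results (% of label claim): six preparations per chemist,
# run on different days with different columns and standard preparations
chemist_a = [99.8, 100.4, 99.5, 100.1, 99.9, 100.3]
chemist_b = [100.6, 99.7, 100.2, 99.4, 100.0, 100.5]

rsd_a = percent_rsd(chemist_a)
rsd_b = percent_rsd(chemist_b)
rsd_pooled = percent_rsd(chemist_a + chemist_b)

# Acceptance: each set, and the pooled set of 12, no greater than 2.0 %RSD
assert rsd_a <= 2.0 and rsd_b <= 2.0 and rsd_pooled <= 2.0
```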

Robustness

Robustness is determined by observing how a method responds to slight variations in normal operating parameters. In HPLC methods, for instance, this could be a change in flow rate, column length, column temperature, or mobile phase concentration. A simple way of doing this is by performing assay precision under various conditions that vary slightly from the method parameters.
For example, if a method calls for a 30 cm column, a 1.0 mL/minute flow rate, a column temperature of 30 degrees C and a mobile phase consisting of 80 parts water and 20 parts methanol, one could perform an assay precision (six replicate assays) at 0.8 mL/minute, 1.0 mL/minute and 1.2 mL/min, at column temperatures of 28 degrees C, 30 degrees C, and 32 degrees C, at method conditions on a 25 cm column and at mobile phase ratios of 78/22, 80/20 and 82/18 water/methanol. One set of sample preparations is used for all these experiments (six weighings and a ton of injections). The %RSD of the assay results at slightly varied conditions should be no greater than the maximum allowed under normal method operating conditions.
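One way to organize the one-factor-at-a-time robustness runs just described is to enumerate them from the nominal conditions. The parameter names and data structure here are illustrative; the values are the ones given in the example above.

```python
# Nominal HPLC method conditions from the example above
nominal = {"flow_ml_min": 1.0, "column_temp_c": 30, "column_cm": 30,
           "water_methanol": (80, 20)}

# Levels to test for each parameter, nominal included
variations = {
    "flow_ml_min": [0.8, 1.0, 1.2],
    "column_temp_c": [28, 30, 32],
    "column_cm": [25, 30],
    "water_methanol": [(78, 22), (80, 20), (82, 18)],
}

# Enumerate the varied-condition experiments (one parameter changed at a time)
experiments = []
for param, levels in variations.items():
    for level in levels:
        if level != nominal[param]:
            experiments.append(dict(nominal, **{param: level}))

print(len(experiments), "varied-condition precision runs, six assays each")
```

Each entry then gets a six-replicate assay precision run from the single set of sample preparations, and its %RSD is compared against the normal-condition limit.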

Acceptance Criteria

The validation protocol must include acceptance criteria for each validation parameter. The criteria are the performance requirements of the method—a yardstick against which the method’s validity is measured and which will vary depending on the intended application. The typical acceptance criteria for a drug product assay method are cited in each of the validation parts described above. Refer to part three in this series for information on stability indication, selectivity, and linearity and range.

Read More

Generic Drugs: A Consumer’s Self Defense Guide
Cliff Nilsen’s latest book, “Generic Drugs: A Consumer’s Self Defense Guide,” has been published by iUniverse (http://177060.myauthorsite.com). Here is an excerpt: Imagine the horror as you rush your infant child to the hospital emergency room. “Doctor, what’s wrong?” “We can’t be sure miss, but it looks like some kind of poisoning.” After a short time, the doctor comes back out to tell the hysterical mother, “I’m sorry, but your baby died.” What happened? Upon investigation, it was determined that the mother had given the baby a generic brand of infant ear drops for an earache. It turns out that the ear drops contained glycerin as a main ingredient — glycerin from China that was contaminated with anti-freeze. How could that possibly happen?
Well, this story is fictional, but glycerin from China contaminated with anti-freeze is real. How can we as consumers avoid buying contaminated or substandard drug products? Do you know if the products sitting in your medicine chest or kitchen cabinets are safe to use? How can you be sure? This book uncovers and documents questionable manufacturing practices used by drug companies, mostly generic companies, that could have serious health consequences for American consumers who purchase the drugs produced by those companies.
No one wants to use drug products that are contaminated with dangerous chemicals, foreign matter such as metal and glass, or harmful bacteria, or wants to purchase and use drugs that are mislabeled or have the wrong strength, or that have been made using shoddy manufacturing procedures in dirty equipment by untrained workers. Yet, these things are commonplace, particularly with over-the-counter drugs.

The Validation Report

Once the validation has been executed, a validation report is prepared and submitted for approval. The elements described below should be included in the validation report.
  • Summary: The summary is a simple statement about the results of the validation study. For example, “The method for the assay of product XYZ by HPLC was found to be accurate, precise, selective, linear, and stability indicating, and thus suitable for its intended use.”
  • Analytical Validation Data: Analytical data should be presented in tabular and graphical form for ease of evaluation. The data presentation should show analytical results for each validation parameter, plus residuals and all calculations used to derive results from laboratory data. Be sure to include all graphs, curves, and copies of raw data (chromatograms and notebook pages).
  • Discussion: Describe the outcome of the validation in detail. The discussion/ conclusions should deal with any problems that were encountered and should include a rationale for accepting or rejecting the validation. Any experiments or failing results that were repeated and then accepted need to be explained and justified.
Any deviations from acceptance criteria must be explained, and the conditions under which the method may be used (method limitations) should be clearly defined, such as “only linear from 75% to 125% of the working concentration” or “meets all acceptance criteria and can be used throughout the ranges tested in the validation.” The validation protocol must be approved prior to beginning a validation study, and the validation report must be approved prior to using the method under validation.

INGREDIENTS - Excipients | Embrace Excipients

James Netterwald, PhD

Critical drug components are necessary to enhance solubility and delivery

Other than the active pharmaceutical ingredient (API), one or more excipients make up the rest of the weight of a final drug product, whether it is a tablet, an injectable, a dermal formulation, or an inhaler. Even water is an excipient in an injectable drug, because it dissolves the API and facilitates drug delivery.
“Excipients are needed to bind the tablet together so that it will not break into powder,” said Dale Carter, chairman of International Pharmaceutical Excipients Council of the Americas (IPEC-Americas). Excipients can also be used to protect the active ingredient in a tablet so it does not get destroyed by stomach acid and can reach the small intestine, where the drug can be absorbed. Excipients are also used to sugar coat an antibiotic to mask its normally bitter taste.
The current thinking on excipients is that not all of them are effective for drug delivery. For a variety of reasons, many are difficult to use in formulations.

Types and Uses

Anthony Hickey, PhD, president and chief executive officer of Cirrus Pharmaceuticals Inc., in Durham, N.C., is a particle scientist, specializing in aerosol and inhalation formulations. The major excipient Dr. Hickey uses in drug formulation is dry powder lactose. He uses lactose, he said, because there are limited options for additives to use in the formulation of asthma inhalants. Dr. Hickey explained that only a handful of excipients appear in approved inhaled products.
“If you use a propellant-based, metered dose inhaler, you will find phosphatidylcholine and sorbitan trioleate, for example,” Dr. Hickey said. “These excipients were approved in the old chlorofluorocarbon (CFC) products, but they don’t work in hydrofluoroalkane (HFA) products.”
Oleic acid, also found in old chlorofluorocarbon products, has a slightly different role, but it is included in some newer products. HFA inhalants are environmentally safer than the older inhalers, which released dangerous CFCs into the environment. Other additives and excipients in nebulizer formulations include solvents, buffers, and salts, but although all these products are approved for use, the range of excipients found in other dosage forms such as tablets does not appear in inhaled products.
Lactose is the excipient of choice for Dr. Hickey’s work because of its inherent ability to assist in the aerosolization of a drug. Micronized drug particles used in aerosol products exist as small particles that tend to stick together due to their physicochemical properties. Lactose acts as a carrier, separating the drug particles in the powder and assisting in the formation of an aerosol as the powder is drawn into the inspiratory airflow.
“In essence, the small drug particles are stuck on the surface of lactose,” Hickey said. “When you inhale, those small particles are more easily stripped from the surface of the lactose.” Lactose is then separated from the API. Lactose does not enter the lungs. Instead, it is deposited in the back of the throat and swallowed.
Lactose, present in almost all inhaled dry-powder products, especially those indicated for asthma, is also a common cause of food intolerance. So what happens for asthmatics who are also lactose intolerant? The solution is inhaled products without lactose, such as metered-dose inhalers and nebulizer solutions.
 “There is a litany of excipients that have been added to oral products and injectables,” Dr. Hickey explained. “The dominant dosage form is in tablet form, where you have the most excipients, some of which are added to help or allow the very small dosage of drug to be measured to make the dosage form. Like lactose, some of those excipients are voided in various ways once the drug is released, not affecting the drug biology at all.”

CASE STUDY: Apricus Bio Validates an Excipient

Apricus is not only an excipient manufacturer. The firm also develops drugs that use its excipient in new drug formulations. “For our erectile dysfunction drug Vitaros, we performed a dose response of our excipient DDAIP,” said Bassam Damaj, PhD, president and CEO of the San Diego, Calif., firm. “Our goal there was to find a minimum percentage of excipient necessary to adequately deliver the API to the penile arteries to induce the clinical effectiveness in patients without inducing irritation.” He said the challenge took several years to complete, but the company finally found that a formulation of 2.5% DDAIP with API produced the best systemic levels of the API and significant clinical effectiveness. The drug’s approval was attributed in part to these results.—JN

Basic Regulatory Rules

Excipients are generally inert components in a drug formulation and, therefore, do not usually pose a safety risk. However, “excipients that are still being consumed need to meet all of the same regulations as an active pharmaceutical ingredient,” said William Kopesky, vice president of analytical services at Particle Technology Labs. “(The) FDA [U.S. Food and Drug Administration] does not set specifications for excipients, but they expect a client, a submitter, or a manufacturer to have specifications and a controlled process or control over their materials,” he said.
Clearly, there is not a huge safety risk with the presence of excipients in formulations. And although drug formulators do not always wish to add excipients, in some instances they are necessary in order to make an effective formulation.
Excipients serve a purpose in the dosage form and do not pose a safety risk for individuals taking the pharmaceutical. Many are sugars, lipids, or polymers that are of biological origin or are biologically compatible. “The reasons excipients are added often have more to do with making the dosage form and making drug delivery easier,” said Dr. Hickey. “I think the tendency these days is that if you don’t have to use excipients then you don’t. And then, of course, to make it clear, the final dosage form is the product that has to be evaluated as part of the approval process, which includes excipients. Excipients are part of the drug safety evaluation.”

Role in Drug Delivery

The current thinking on excipients is that not all of them are effective for drug delivery. In the past, many companies have tried to develop excipients, as well as permeation enhancers such as azone, polyethylene glycol, and dimethyl sulfoxide (DMSO). The problem these companies have faced is that, for a variety of reasons, many excipients are difficult to use in formulations.
 Bassam Damaj, PhD, president and CEO of Apricus Biosciences in San Diego, Calif., described the characteristics of his company’s main excipient and permeation enhancer—dodecyl-2-(N,N-dimethylamino) propionate (DDAIP)—to illustrate the features necessary for any excipient.
Dr. Damaj said DDAIP’s mode of action is to loosen the tight junctions between cells to allow a drug to enter the cell. Like DDAIP, excipients should not be toxic and should not induce irritation. Also, the excipient should have a very short half-life once it enters the circulation. An excipient should also be water soluble. Many excipients and enhancers are not water soluble, limiting their use in the pharmaceutical formulation.
To handle the issue of excipient solubility, Apricus has developed two forms of its excipient: a base form that works well with hydrophobic formulations and an acid form that works best with hydrophilic formulations. The choice of excipient depends on the drug and the pharmaceutical formulation to be delivered.

Analysis in Formulation

There are various technologies available for excipient analysis. For particle-size determination, methods such as laser diffraction, sieving, and light obscuration can be used. For particle surface area determination, nitrogen or krypton gas adsorption is used.
Particle Technology Labs in Downers Grove, Ill., is a service laboratory that performs analysis of particles—which include excipients—for its mainly pharmaceutical clients, who submit samples for physical characterization such as analysis of particle size and surface area. Those measured properties can affect the materials’ behavior in formulations or in processing conditions. “In other words, the physicochemical properties of excipients in a drug formulation can affect the drug’s behavior (in terms of) its dissolution, solubility, and processability,” Kopesky said. He encounters many excipients in his work, including microcrystalline cellulose, magnesium stearate, crospovidone, and lactose.
Excipients are added and analyzed prior to addition to the formulation. “If the excipient is within a certain particle size range, it may behave differently than a larger particle-sized excipient,” Kopesky said. He added that it is important to know those physical properties in order to either predict or elicit a certain processing characteristic in the final formulation.
Excipients must also be tested for their relative chemical inertness to ensure that they do not chemically interact with the API. Pharmaceutical formulation scientists desire excipients that exhibit chemical inertness. However, their inertness must be tested prior to their use as an additive in drug formulation. “So I think that anybody that could formulate with limited excipients would do so,” Dr. Hickey said. “That’s not to say that they don’t see the value in excipients, because so many drugs need to be formulated with excipients.”

Hot or Not?

As with any component in the formulation process, some excipients are in favor, while others have fallen by the wayside. Among those still in favor are polyethylene glycol and lactose as well as DDAIP-HCl. The latter is preferred because of its enhanced activity in formulations, its purity, and the ease with which it is manufactured.
Excipients that have fallen out of favor include DMSO, oleic acid, and paraffin. These have lost ground primarily because of significant skin irritation, reduced stability in formulation, or difficulty of manufacturing. Excipients that have lost their luster have been associated with a wide range of adverse events.

FORMULATION - Parenteral Advances | Get Local with Targeted Delivery



Maybelle Cowan-Lincoln

A new range of polymer implants and programmable microchips is paving the way to more personalized therapies

There are three major parenteral drug delivery routes (intravenous, intramuscular, and subcutaneous), as well as several more rarely used routes, such as intra-arterial.1 Recently, however, interest has grown in new parenteral technologies that facilitate targeted local drug delivery.
Parenteral delivery can be the route of choice under several circumstances:
  • When the plasma levels of a drug must be carefully controlled;
  • When it is necessary to avoid the “first pass” metabolism through the liver or minimize the risk of harmful side effects resulting from systemic delivery;
  • When the patient is not conscious or not capable of taking the drug orally; and
  • When administering drugs with a short half-life.2
The parenteral route provides the best solution for overcoming the challenges of delivering proteins and peptides. These agents are easily degraded by enzymes found in the gastrointestinal tract, and the large size of the molecules makes transdermal delivery difficult. In addition, proteins and peptides have very short half-lives in vivo, so they must be injected multiple times over the course of a day.
Novel controlled release parenteral technologies can reduce injection frequency and the accompanying pain, thereby potentially improving patient compliance.3 But this solution comes with its own set of problems. In addition to the pain of multiple injections, drugs taken by injection follow the pattern of first-order kinetics—high levels in the blood after administration followed by a sharp fall in concentration. At peak levels, toxicity can be an issue, and efficacy can decrease as levels fall. An ideal implantable system would include an electronic feedback device to control drug release.

Zero-Order Release

Controlled drug release from implantable parenteral devices may also be able to achieve the elusive goal of sustained zero-order release. This means that the rate of drug release remains constant, minimizing the risk of toxicity and the inconvenience of frequent dosing. This is particularly important when the drug concentration must fall into a narrow window between the minimum effective concentration and the maximum safe concentration.4
Strategies to come close to zero-order release have included multiple injections and implantable pumps. But these methods fall short of ideal. Frequent injections are inconvenient and painful, and implantable pumps require surgery to implant, refill, or remove. In addition, these pumps can only be used with drugs that are stable at physiological temperature. Research continues to look for a more efficient technology to achieve zero-order release.
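A simple one-compartment sketch shows why zero-order release is attractive compared with a bolus injection: the bolus peaks immediately and decays exponentially, while a constant-rate implant climbs smoothly to a plateau at k0/(ke*Vd). All rate constants, doses, and volumes below are hypothetical illustration values, not drawn from any real product.

```python
import math

ke = 0.3      # elimination rate constant, 1/h (assumed)
vd = 10.0     # volume of distribution, L (assumed)
dose = 100.0  # mg bolus dose (assumed)
k0 = 3.0      # mg/h constant release rate from the implant (assumed)

def bolus_conc(t):
    """Plasma concentration after an IV bolus: first-order decay from the peak."""
    return dose / vd * math.exp(-ke * t)

def zero_order_conc(t):
    """Plasma concentration under constant (zero-order) release from zero initial level."""
    return k0 / (ke * vd) * (1 - math.exp(-ke * t))

peak = bolus_conc(0)          # high initial peak (toxicity risk)
trough = bolus_conc(24)       # near zero a day later (loss of efficacy)
steady = zero_order_conc(24)  # approaches the plateau k0 / (ke * vd)
```

The plateau value is what lets a zero-order system hold the concentration inside the window between the minimum effective and maximum safe levels.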

CASE STUDY: The Electronic Implant Revolution

MEMS and NEMS (micro- and nanoelectromechanical systems) devices are the basis for technologies that will effect a sea change in the treatment of many diseases. Implantable devices equipped with micro- or nanochip technology that allows them to respond to physiological changes can virtually automate the management of certain chronic diseases and provide lightning-fast interventions in emergency situations. Two systems on the leading edge of this research are the “artificial pancreas” and the “personal paramedic.”1
www.artificialpancreasproject.com
According to the American Diabetes Association, approximately 25.8 million Americans suffer from diabetes, with 7 million of these undiagnosed. And, each year, 1.9 million new cases are discovered in people aged 20 and older.2 The most important thing diabetes patients can do to maintain their health is to strictly control their blood glucose levels. But for a variety of reasons—pain and inconvenience among them—many do not. This often results in debilitating pathologies later in life, including blindness, kidney failure, and amputations.3
To promote better disease management, the Juvenile Diabetes Research Foundation (JDRF) launched the Artificial Pancreas Project six years ago. The JDRF has formed a consortium of government and academic researchers, along with private corporations from the United States and Europe, to work collaboratively toward the development of the first fully functional unit.4
The artificial pancreas would be a miniature, closed-loop device composed of a glucose monitor, a miniature pump powered by a MEMS chip to deliver insulin, and a power source. The artificial pancreas would continuously monitor blood sugar levels and automatically release the precise amount of insulin needed, practically automating diabetes management.
Another breakthrough MEMS-powered device is the “personal paramedic” or Implantable Rapid Drug Delivery Device (IRD3). An implantable delivery system designed for ambulatory emergency care, the IRD3 allows for rapid delivery of cardiac resuscitation drugs such as vasopressin.
The IRD3 is made up of three layers: the reservoir where the drug is stored, the membrane that seals the reservoir to prevent the drug from leaking out or foreign substances from penetrating, and the actuation layer. In the actuation layer, micro-resistors heat fluid to form bubbles once certain cardiac symptoms are detected by the MEMS microchip. The increased pressure caused by the bubbles ruptures the membrane, allowing the medicine to be released from the reservoir at a rate of approximately 20 µl in 45 seconds.5
The treatment of diabetes and certain types of heart disease can be transformed by these drug delivery systems. Their success can also stimulate a creative and vibrant commercial environment, fueling the discovery of more ways MEMS devices can improve the management of many conditions.

References

  1. Staples M. Microchips and controlled-release drug reservoirs. Wiley Interdiscip Rev Nanomed Nanobiotechnol. 2010;2(4):400–417.
  2. American Diabetes Association. Diabetes statistics. Available at: www.diabetes.org/diabetes-basics/diabetes-statistics. Accessed Oct. 3, 2011.
  3. Schetky LM, Jardine P, Moussy F. A closed-loop implantable artificial pancreas using thin film nitinol MEMS pumps. Paper presented at: International Conference on Shape Memory and Superelastic Technologies; 2003; Pacific Grove, Calif.
  4. MacRae M. The artificial pancreas project. American Society of Mechanical Engineers. August 2011. Available at: www.asme.org/kb/news-articles/articles/bioengineering/the-artificial-pancreas-project. Accessed Oct. 3, 2011.
  5. Elman NM, Ho Duc HL, Cima MJ. An implantable MEMS drug delivery device for rapid delivery in ambulatory emergency care. Biomed Microdevices. 2009;11(3):625-631.

Polymer Implants

Polymer scaffolds made from chitosan can serve as temporary biodegradable drug depots
A popular controlled delivery device is the polymeric implant. These systems fall into one of two categories: nondegradable and biodegradable. An example of the former technology is the Norplant five-year contraceptive device. Hollow polymer tubes are filled with a drug suspension that dissolves into the polymer, then diffuses through the tubing walls.
Biodegradable polymer implants are usually made of microspheres containing a drug. Once injected, the polymer dissolves, releasing the drug into the system. A new technology under development uses biodegradable polymer rods implanted in the marrow of infected bones to deliver fluconazole to treat fungal osteomyelitis.5
Some exciting developments are being pursued in creating polymer scaffolds to serve as temporary biodegradable drug depots. These structures can be made from natural polymers such as collagen or chitosan, or from synthetic polymers that do not promote inflammation and are bio-degradable, biocompatible, and nontoxic.6
One of the most successful drug scaffold models is the injectable polymer depot. A liquid liposomal solution or suspension is injected subcutaneously or intratumorally, where it forms a semi-solid scaffold that releases the drug right at the target site. This method offers several benefits, including local drug retention and sustained release.
Prefabricated polymeric scaffolds have also gained attention as delivery systems for small-molecule drugs and various bioactive molecules. These systems are manufactured outside the body and must be implanted surgically. Biodegradable and nondegradable materials are being tested as possible scaffold materials; the drawback of nondegradable materials is, of course, the necessity of surgical removal at the end of therapy.
In addition to delivering pharmaceuticals, polymeric scaffolds have been designed to continuously release growth factor cells to promote tissue regeneration. For example, vascular endothelial growth factor (a signal protein manufactured in cells that promotes the growth of new blood vessels, a process known as angiogenesis) has been incorporated into a scaffold. In a recent study, increased blood vessel density was noted at the implant site. These results demonstrate an increase in angiogenic potential. Tissue regeneration typically uses prefabricated scaffolds requiring surgical implantation and removal, but research on injectable hydrogel scaffolds is ongoing.7

Getting a Charge

Exciting advances are being made in the development of implantable drug delivery devices using micro- and nanoelectromechanical systems. Called MEMS and NEMS, respectively, this technology employs microchips that contain micro- and nano-scale programmable electronic circuits.8
MEMS and NEMS devices offer complex functionalities that could potentially overcome many of the shortcomings and inconveniences of conventional drug therapy, including complicated dosing regimens and fragile or easily degradable active ingredients. Drugs can be released from a reservoir by electrical signals programmed into the micro- or nanochip. With MEMS technology, timing and dose amount can be precisely controlled, and drugs can be delivered to precise locations. Myriad dosing options can then be available, including delivery on demand, programmable dosing cycles, and automated dosing of multiple drugs.
Another delivery system that utilizes MEMS chips is the micropump. Mechanical micropumps are actuated by one of several mechanisms, including electrostatic, electromagnetic, and piezoelectric energy (exploiting the electric charge that naturally accumulates in crystals, bone tissue, DNA, and other materials in response to mechanical stress); MEMS-based micropumps are instead driven by a microchip. These pumps must minimize chip and device size and must be made of biocompatible materials. They must be able to operate for weeks to years without presenting much risk to patients, must deliver a relatively steady flow rate, and must either use minimal power or be remotely rechargeable.
Medtronic’s ACT Insulin pump.
In even more exciting research, MEMS and NEMS technology is contributing to the emergence of personalized medicine, the science of optimizing treatment based on individual genetics and physiology. Delivery devices containing standard drugs could include micro- or nanochips and possibly a biosensor to provide physiological feedback. This complex device could maximize therapeutic flexibility and efficiency by tailoring drug delivery to the patient’s needs as revealed by the biosensor.
This type of system could be used to create an “artificial pancreas” for diabetes patients. It would comprise an insulin reservoir and a biosensor to monitor blood glucose levels. The sensor would communicate with a MEMS/NEMS-based drug delivery device to regulate insulin release and also transmit data to an external device for use by the patient or physician. Data for dosing flexibility and control could also potentially be received by the device. This technology has not yet been perfected, but once successfully implemented, it could revolutionize the treatment of a large patient population.
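The closed-loop logic described above can be sketched in a few lines: a biosensor reading drives a dosing decision each monitoring cycle. The target level, gain, and dose cap below are hypothetical values chosen purely for illustration, not clinical parameters or any device's actual algorithm.

```python
# Simplified closed-loop dosing sketch: sensor reading in, dose decision out.
# All numbers are illustrative placeholders, not clinical values.

def insulin_dose(glucose_mg_dl, target=120.0, gain=0.02, max_dose_units=5.0):
    """Return a dose proportional to the excess over the target level."""
    excess = glucose_mg_dl - target
    if excess <= 0:
        return 0.0                              # at or below target: hold
    return min(gain * excess, max_dose_units)   # cap any single-cycle dose

# One decision per sensor reading, as new values arrive.
readings = [110, 150, 240, 400]
doses = [insulin_dose(g) for g in readings]
```

A real device would add safeguards such as insulin-on-board tracking and sensor-fault detection, and would report each decision to the external device described above.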
In addition to the benefits for long-term drugs, MEMS- and NEMS-based systems can also provide emergency care. Currently, a “personal paramedic” is being developed to work in tandem with current cardiac devices such as a pacemaker. This device, called the IRD3 (Implantable Rapid Drug Delivery Device), could release drugs used in cardiac resuscitation, such as vasopressin, when needed. The IRD3 could also be used to treat angina patients by releasing vasodilators on demand.
Microreservoirs that employ MEMS devices are a combination of drug reservoir systems and polymer matrices. Microreservoirs are implanted drug delivery systems used for proteins, hormones, pain medications, and other drugs. Each tiny reservoir, covered with a gold membrane, contains a single dose. The dose is released when one microreservoir is exposed to anodic voltage from the MEMS chip, causing the membrane to rupture.

Smooth as Silk

Silk fibroin is another material being studied as a polymer vehicle for sustained local drug delivery. A recent study conducted at Tufts University evaluated the efficacy of silk fibroin to encapsulate the anticonvulsant adenosine in a biocompatible and biodegradable drug reservoir.
Adenosine is a promising treatment for drug-resistant epilepsy. When given systemically, however, it causes severe side effects, including suppressed cardiac function. Local delivery using a reservoir placed in the brain may provide the answer. However, polymers commonly used to coat reservoir devices have significant drawbacks. Some are nondegrading or require organic solvents that can damage encapsulated drugs. Others release drugs too quickly.
Silk-based implantable systems, on the other hand, offer numerous advantages. Silk fibroin is biodegradable, biocompatible, and strong; it is already used as suture material for the brain and nervous tissue. The study demonstrated that silk fibroin can meet the objectives of sustained parenteral delivery: adenosine reservoirs coated with eight layers of 8% silk fibroin material exhibited zero-order release, and reservoirs with four layers of 8% silk fibroin exhibited near zero-order release.
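Zero-order release, the profile reported for the eight-layer coating, means drug leaves the reservoir at a constant rate regardless of how much remains; a first-order profile, by contrast, slows as the reservoir empties. The sketch below contrasts the two using arbitrary, illustrative rate constants, not figures from the Tufts study.

```python
import math

def zero_order(total, k, t):
    """Cumulative amount released by time t at constant rate k, capped at total."""
    return min(k * t, total)

def first_order(total, k, t):
    """Cumulative release when the rate is proportional to the amount remaining."""
    return total * (1.0 - math.exp(-k * t))

total = 100.0          # arbitrary drug load
# Zero-order: equal increments each time step; first-order: shrinking increments.
linear = [zero_order(total, 2.0, t) for t in (0, 10, 20)]      # 0, 20, 40
tapering = [first_order(total, 0.05, t) for t in (0, 10, 20)]  # 0, ~39, ~63
```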

References

  1. Gad SC, Cavagnaro JA, Nassar AF, et al. Formulations, routes, and dosage design. In: Gad SC, ed. Pharmaceutical Sciences Encyclopedia: Drug Discovery, Development, and Manufacturing. New York: John Wiley and Sons; 2010:10-15.
  2. Paolino D, Sinha P, Fresta M, Ferrari M. Drug delivery systems. In: Webster JG, ed. Encyclopedia of Medical Devices and Instrumentation. 2nd Ed. New York: John Wiley and Sons; 2006:437-486.
  3. Gad SC, Tamilvanan S. Progress in the design of biodegradable polymer-based microspheres for parenteral controlled delivery of therapeutic peptide/protein. In: Gad SC, ed. Pharmaceutical Manufacturing Handbook: Production and Processes. New York: John Wiley and Sons; 2008:393-427.
  4. Pritchard EM, Szybala C, Boison D, Kaplan DL. Silk fibroin encapsulated powder reservoirs for sustained release of adenosine. J Control Release. 2010;144(2):159-167.
  5. Soriano I, Martín AY, Évora C, Sánchez E. Biodegradable implantable fluconazole delivery rods designed for the treatment of fungal osteomyelitis: Influence of gamma sterilization. J Biomed Mater Res A. 2006;77(3):632–638.
  6. Mufamadi MS, Pillay V, Choonara YE, et al. A review on composite liposomal technologies for specialized drug delivery. J Drug Deliv. 2011;2011:1-19.
  7. Chung HJ, Park TG. Surface engineered and drug releasing pre-fabricated scaffolds for tissue engineering. Adv Drug Deliv Rev. 2007;59(4-5):249-262.
  8. Staples M. Microchips and controlled-release drug reservoirs. Wiley Interdiscip Rev Nanomed Nanobiotechnol. 2010;2(4):400–417.

DELIVERY - siRNA/RNAi | RNAi No Longer Blue Sky



Neil Canavan

Despite significant delivery hurdles, researchers make progress toward genetic therapies

The use of double-stranded RNA to deliberately “interfere” with gene expression was first described in the journal Nature in 1998.1 Though the work was performed in Caenorhabditis elegans, the potential was immediately clear: RNAi made for an excellent research tool and likely represented a new class of therapeutic agents.
In either application, a recent advance may revolutionize the optimal selection of RNA inhibitors.
“The potential for RNAi demands that you have to have the right trigger,” said Christof Fellmann, a graduate student at Cold Spring Harbor Laboratory, N.Y. Algorithms to successfully predict an optimal RNAi sequence, in this case for short hairpin RNAs (shRNAs), remain elusive, and current methods involve extensive time-consuming screens. To overcome this barrier, a “sensor assay” was developed that rapidly identifies optimized shRNAs at large scale.
For this method, a library of 20,000 expression vectors was created, each consisting of (a) a unique shRNA sequence, (b) a known target sequence (the sensor), and (c) a fluorescent reporter gene. Once expressed in transfected cells, the components were free to interact (or not) with resulting levels of fluorescence inversely relating to the degree of target-gene suppression. This innovative screen was able to identify shRNAs not only of exquisite potency, but also of unprecedented specificity, thereby greatly reducing off-target effects.2
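Because fluorescence falls as the shRNA suppresses its linked sensor target, knockdown can be read out as the fractional drop in reporter signal relative to a control. The normalization below is a simplified assumption for illustration, not the published assay's exact analysis.

```python
def knockdown_fraction(sample_fluorescence, control_fluorescence):
    """Fractional target suppression inferred from reporter signal:
    lower fluorescence relative to control means stronger knockdown."""
    return 1.0 - sample_fluorescence / control_fluorescence

# A well at 30% of control fluorescence implies ~70% knockdown;
# a well matching the control implies none.
strong = knockdown_fraction(30.0, 100.0)
inert = knockdown_fraction(100.0, 100.0)
```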
The sensor assay is already bearing fruit at the bench, as demonstrated by Cold Spring Lab’s recent identification of a potential therapeutic target for acute myeloid leukemia, and the technology will soon be commercialized by Mirimus Inc., a Cold Spring Harbor spin-off.3

Enhanced In Vivo Survival

Working to optimize their proprietary siRNA platform for in vivo use, investigators at Ambion, a Life Technologies company in Austin, Texas, have been working on chemical modifications to synthetic oligos. “About three years back, we came out with a technology for high-throughput screening for siRNAs called Silencer Select,” said Nitin Puri, PhD, senior manager of R&D, Ambion. “With chemical and/or sugar modifications to the oligo we have now managed to both enhance specificity, while decreasing the immunogenicity of the siRNA.”
Along with other proprietary chemistries, the use of RNA analogues (so-called locked nucleic acids) in the Ambion process lends rigidity to the RNA structure that confers enhanced hybridization-to-target behavior, while at the same time blocking destructive ribonuclease activity. This last point is critical, because this dynamic has a marked effect on serum stability—a major issue for in vivo applications. “We have increased serum stability more than 150-fold in rat and mice serum,” said Dr. Puri. Unprotected, siRNA half-life is roughly five minutes; with the Ambion modifications, the half-life is about 24 hours.
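The half-lives quoted above translate into very different survival fractions over a dosing interval; this is ordinary exponential-decay arithmetic, shown here as a sketch rather than anything drawn from Ambion's own data.

```python
def fraction_remaining(elapsed_min, half_life_min):
    """Fraction of intact siRNA left after elapsed_min, given its serum half-life."""
    return 0.5 ** (elapsed_min / half_life_min)

one_hour = 60.0
unprotected = fraction_remaining(one_hour, 5.0)        # five-minute half-life
modified = fraction_remaining(one_hour, 24.0 * 60.0)   # 24-hour half-life
# After one hour, roughly 0.02% of unprotected siRNA survives,
# versus about 97% of the stabilized form.
```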
Life Technologies has made other recent strides toward enhancing RNAi selection, quantification, and delivery: a method described by Cheng and colleagues, based on stem-loop real-time polymerase chain reaction technology, allows for easier quantification of siRNAs in vivo.4 A new vehicle reagent for RNAi duplex delivery, Invivofectamine 2.0, which enables high transfection rates in the liver, is also now available.

CASE STUDY: RNAi May Replace Standard Stent Coatings

Angioplasty revolutionized the treatment of coronary artery disease; however, there are still improvements to be made. First-generation bare metal stents, scaffolds set in situ to maintain the balloon-opened artery, were vulnerable to rapid restenosis via the buildup of scar tissue; second-generation appliances, the so-called drug-eluting stents, prevent scar tissue proliferation but at the same time delay endothelial healing, which can lead to eventual thrombosis.1
Extended recovery times require the prolonged use of expensive antithrombotic agents and are clearly suboptimal. “We had the idea to use siRNA because we think that we can have a faster re-endothelialization while stopping the inflammation,” explained Andrea Nolte, PhD, research scientist at the Children’s University Hospital in Tuebingen, Germany.
In her study, Dr. Nolte and colleagues conducted an in vitro assay using endothelial cells (ECs) exposed to a complex of polyethylenimine (PEI) and siRNA targeted to E-selectin, a molecule that plays an important role in the inflammatory response. The PEI/siRNA complex was mixed with a gelatin solution and then fixed to the bottom wells on culture plates. ECs and culture media were added, and after cells reached confluence, an inflammatory response was induced with the addition of TNFα. Results showed a 70% knockdown of expression of expected inflammatory factors.2,3
Along with validation of the approach, there was a happy surprise, and a challenge to be considered. “Normally, when you put a PEI/RNAi complex on ECs, not a whole lot happens,” said Dr. Nolte, but the construct fixed in gelatin sustained therapeutic efficacy even in the presence of serum. “We were very happy to see that.”
On the downside, the release of siRNA from the gelatin coating was too rapid; a different matrix will have to be found for the approach to be viable for a coated stent. “We’re looking for a release profile of about four weeks,” said Dr. Nolte, and added that they are exploring the use of poly(lactic acid-co-glycol acid) films. A sustained release over time should prevent initial stent-induced scarring while posing little hindrance to vascular healing.
Dr. Nolte’s use of siRNA-coated surfaces has caught the attention of numerous players in the industry. “They’re interested because there are other applications that would be useful with a coating,” she said. Beads coated with siRNA could be inserted into tumors, and topical formulations could be applied to chronic inflammatory conditions of the skin. “There are numerous settings that require the immobilization of the siRNA.”
Dr. Nolte’s work has, in fact, been done in partnership with industry. The stent application is being developed with Qualimed, of Hamburg, Germany.

References

  1. Zhao FH, Chen YD, Jin ZN, Lu SZ. Are impaired endothelial progenitor cells involved in the processes of late in-stent thrombosis and re-endothelialization of drug-eluting stents? Med Hypotheses. 2008;70(3):512-514.
  2. Nolte A, Walker T, Schneider M, Deniz O, Avci-Adali M, Ziemer G, et al. siRNA eluting surfaces as a novel concept for intravascular local gene silencing [published online ahead of print July 22, 2011].
  3. Walker T, Saup E, Nolte A, Simon P, Kornberger A, Steger V, et al. Transfection of short-interfering RNA silences adhesion molecule expression on cardiac microvascular cells [published online ahead of print June 20, 2011]. Thorac Cardiovasc Surg.

RNAi Delivery

Delivering enough therapeutically active molecules to the intended target remains the most vexing suite of obstacles for RNAi researchers. In brief, the challenges are achieving persistence in systemic circulation (no rapid clearance), localization of active moiety to target-tissue cell surface, efficient uptake by target cells, and rapid release of the internalized RNAi cargo from the transporting endosome into the cytosol.
It would take an entire textbook to delineate these issues. However, one author of a recent review of the field, Xudong Yuan, PhD, assistant professor in the division of pharmaceutical sciences at Long Island University in Brooklyn, N.Y., touches on some of the vehicular approaches that have recently caught his eye.5
“The good thing about PLGA [poly(D,L-lactide-co-glycolide)] is that it’s biodegradable, biocompatible, and, most importantly, it’s already approved by FDA,” Dr. Yuan said. Release of cargo from PLGA alone is too rapid, however. Dr. Yuan and colleagues are experimenting with PLGA combined with polyethylenimines (PEIs). “This gives you better RNAi loading and also facilitates endosomal release through the proton sponge effect,” whereby cationic PLGA/PEI nanoparticles induce osmotic swelling, rupturing the endosome.6 With the same goal and mechanism in mind, Dr. Yuan is also working with chitosan, a biodegradable, linear polysaccharide, an approach that should have the added advantage of minimized toxicity.7
Currently not on his bench top but certainly on his radar are a few more applications for which Dr. Yuan also sees great promise:
  • RNAi-loaded nanoparticles composed of cyclodextrin, specifically, CALAA-01, the first targeted siRNA nanoparticle administered to humans, which is currently in clinical trials;8 and
  • “Lipidoids,” a combinatorial approach to constructing RNAi-bearing particles being developed by the Massachusetts Institute of Technology in collaboration with the RNAi therapeutics company Alnylam.9

References

  1. Fire A, Xu S, Montgomery MK, Kostas SA, Driver SE, Mello CC. Potent and specific genetic interference by double-stranded RNA in Caenorhabditis elegans. Nature. 1998;391(6669):806-811.
  2. Fellmann C, Zuber J, McJunkin K, Chang K, Malone CD, Dickins RA, et al. Functional identification of optimized RNAi triggers using a massively parallel sensor assay. Mol Cell. 2011;41(6):733-746.
  3. Zuber J, Shi J, Wang E, Rappaport AR, Herrmann H, Sison EA, et al. RNAi screen identifies Brd4 as a therapeutic target in acute myeloid leukaemia [published online ahead of print August 3, 2011]. Nature.
  4. Cheng A, Vlassov AV, Magdaleno S. Quantification of siRNAs in vitro and in vivo. Methods Mol Biol. 2011;764:183-197.
  5. Yuan X, Naguib S, Wu Z. Recent advances of siRNA delivery by nanoparticles. Expert Opin Drug Deliv. 2011;8(4):521-536.
  6. Nel AE, Mädler L, Velegol D, Xia T, Hoek EM, Somasundaran P, et al. Understanding biophysicochemical interactions at the nano-bio interface. Nat Mater. 2009;8(7):543-557.
  7. Yuan X, Shah BA, Kotadia NK, Li J, Gu H, Wu Z. The development and mechanism studies of cationic chitosan-modified biodegradable PLGA nanoparticles for efficient siRNA drug delivery. Pharm Res. 2010;27(7):1285-1295.
  8. Eifler AC, Thaxton CS. Nanoparticle therapeutics: FDA approval, clinical trials, regulatory pathways, and case study. Methods Mol Biol. 2011;726:325-338.
  9. Siegwart DJ, Whitehead KA, Nuhn L, Sahay G, Cheng H, Jiang S, et al. Combinatorial synthesis of chemically diverse core-shell nanoparticles for intracellular delivery. Proc Natl Acad Sci U S A. 2011;108(32): 12996-13001.

CONTAMINATION - Risk Management | Manage Contamination Risk with a Lean Approach



Judy Madden

How existing products can benefit from examining the true cost of quality

In an ideal manufacturing world, we would always purchase pure ingredients, consistently formulate products within the parameters of aseptic technique, and reliably ship sterile goods to market. Adherents of Quality by Design (QbD) and the process analytical technology (PAT) initiative alike support an approach that ensures product quality is built into the production process from the start.
In the real world, even with the best intentions and plans, our products and processes are at risk of contamination. As a result, and with the exception of parametric release, products and processes are tested to ensure their quality prior to releasing product to market. This is a necessary step, but—for companies using traditional microbial testing methods—it is extremely time consuming and costly.
A lean quality approach using rapid testing methods will reduce the cost and impact of contamination events and accelerate your production cycle. Best of all, it can be applied readily to an existing manufacturing process.

The True and Total Cost of Quality

To find the right balance between risk and safety, it can be helpful to compare the costs of not having a quality control process with the costs of a good quality system (see Figure 1).
Figure 1. The True Cost of Quality
Without quality control processes, products are manufactured and shipped into distribution quickly. While this is not an option for pharmaceutical products, the risk is, of course, that pallets of finished products will be contaminated. In the plant, that means expensive rework, scrap, and overtime. For goods that have left the company bound for pharmacies or retail stores or, worse yet, have already been sold to patients or consumers, the liability and brand impact are significant and potentially catastrophic.
To minimize this risk, companies will often test their products at several stages: when raw materials arrive; potentially at the bulk or prepackaging stage; and always as packaged products ready to leave the facility (see Figure 2).
At each of these steps, when traditional test methods are used, materials and products sit idle for multiple days awaiting results. In addition to the cost of the testing itself, warehouse space and invested capital are tied up. Despite these disadvantages, traditional testing is widely accepted as a cost of having good quality. Yet there is an alternative.

Lean Quality Advantage

In lean manufacturing terms, even minutes during which value is not added to the product are considered waste. Imagine the field day your lean team would have if they learned you could free up several days of waiting time by using a rapid method.
Figure 2. Traditional Quality Testing
A lean quality approach does just that by allowing you to remove many days of “waste” or waiting time from the manufacturing process and still release safe products to market (see Figure 3).
In terms of risk management, the lean quality approach has a significant advantage. A faster production cycle means faster problem detection. Corrective action can be initiated sooner and, therefore, more effectively.
After all, it is far easier to isolate and identify events that may have led to a contamination event yesterday than to try to troubleshoot those same events four, five, or more days later. I can easily remember what I had for breakfast yesterday, but recalling the precise details of a meal I ate last week is a lot trickier.
Figure 3. Lean Quality Testing
The same is true of contamination events. Additionally, by the time a problem is detected with traditional methods, the company has five or more days of additional, potentially contaminated inventory to deal with. Beyond the cost of the goods themselves, this can have a significant impact on your ability to meet customer demands.

Rapid Methods Manage Risk

The benefits of a rapid detection method are made clear in this simplified contamination event timeline (see Figure 4).
Assume it takes this company one to two days to formulate and package a product and then, using traditional microbiological methods, an additional three to seven days to test finished product for microbiological quality. In this example, contamination is identified after five days of micro-hold, on the seventh day of overall production. An investigation and corrective action are initiated.
Figure 4. Contamination Recovery Timeline
Several days later, a replacement batch must be produced to replace the original contaminated product. That product is also subject to microbiological testing. The 17-day point in our production timeline arrives before the product is available for distribution, a full 10 days beyond the planned seven-day production schedule.
On the second timeline, we see that using a rapid detection assay reduces the microbiological testing time to 18 to 24 hours. Detection of the problem and initiation of corrective action now happen within 24 hours of production.
The benefits of rapid detection also extend to the release of replacement product. In the example above, replacement product is released at the nine-day point. This is a full eight days faster than in the scenario in which the manufacturer uses traditional methods and is still in crisis mode at day nine.
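The day counts in this example reduce to simple bookkeeping: one produce-and-test cycle for the original batch, an investigation, then a second cycle for the replacement. The three-day investigation window below is an assumption chosen to reproduce the 17-day and nine-day figures in the example.

```python
def replacement_release_day(production_days, test_days, investigation_days=3):
    """Day on which replacement product clears testing after a contaminated batch.
    Contamination surfaces only when the first batch finishes testing."""
    detection_day = production_days + test_days
    return detection_day + investigation_days + production_days + test_days

traditional = replacement_release_day(production_days=2, test_days=5)  # day 17
rapid = replacement_release_day(production_days=2, test_days=1)        # day 9
```

The eight-day gap comes from saving the micro-hold twice: once on the original batch and once on the replacement.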

Costs Are One Thing, Savings Are Another

Many discussions of rapid methods begin and end with the cost per test. “It’s too high,” say the lab managers who appreciate the lab efficiencies generated by rapid methods but feel constrained by their budgets and pressure to keep expenses low.
The biggest obstacle to the adoption of rapid methods is the fact that 100% of the cost of the method is charged to the labs, while 90% of the financial benefits are in manufacturing.
For successful adoption, Operations and Finance need to get involved to help everyone see that the overall benefit to the company is well worth a modest increase in the lab’s testing budget.
Rapid detection may be the best-kept secret outside the microbiological laboratory. Yet some of the largest and most successful companies in the pharmaceutical and consumer product industries benefit from the efficiencies offered by rapid testing systems. The company discussed below implemented its first Celsis rapid detection system in 1996 and now uses the technology at its facilities worldwide.
The company, a manufacturer of pharmaceutical products, was experiencing issues with inventory and periodic in-house contamination events. It was following traditional microbial testing methods for screening raw materials and finished goods with a five-day hold at each step. The value of the company’s daily finished goods production at the time was roughly $75,000.
Celsis worked with the company to complete a financial impact assessment to determine the value of implementing a rapid system.
Readily available data, including the value of daily finished goods, the reduction in micro-hold days, the frequency of contamination events, and instrument and reagent costs, were incorporated into the impact assessment. The model calculated a five-year net present value (NPV) of over $677,000, payback of less than nine months, and savings from faster contamination containment annualized at $64,000 per year.
Projecting these savings over the company’s five facilities with an 18-month rollout program increased the five-year NPV to almost $2.5 million, with a payback of just 15 months. The later rollouts were financed through the working capital efficiencies generated from the earlier placements. Annualized contamination savings alone rose to almost half a million dollars.
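Five-year NPV and payback figures like those above come from a discounted cash flow model. The sketch below shows the generic arithmetic with hypothetical cash flows and discount rate; these are not the actual Celsis model or the company's numbers.

```python
def npv(rate, cashflows):
    """Net present value of periodic cash flows; cashflows[0] is the initial outlay."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# Year 0: instrument purchase; years 1-5: working-capital release plus
# contamination savings, net of reagent costs (all hypothetical figures).
flows = [-150_000.0, 180_000.0, 120_000.0, 120_000.0, 120_000.0, 120_000.0]
project_npv = npv(0.10, flows)   # positive NPV: the project creates value
```

Payback falls where the cumulative (discounted) cash flow first turns positive, which is how a chart like Figure 5 is read.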
Further, with results available in 24 hours, the company was able to introduce some in-process testing at a critical point in production to detect contamination even earlier and reduce the potential impact of a contamination event even further. Total micro-hold was reduced from 10 days to only three days and was redistributed to better manage risk as it occurs in the operation.
Figure 5. Impact Report
The impact report (see Figure 5) shows a typical output graph with the projected savings for adopting rapid methods at a single plant. The report identifies the economic impact by six-month periods, along with the cumulative discounted cash flow.
The average company’s investment is shown in red. The initial outflow represents the initial system investment followed by an implementation period and the ongoing cost of reagents.
The blue bars represent the positive impact of the reduction in working capital requirements driven by a reduction in inventory held in quarantine and safety stock. This includes the initial release of inventory upon validation and the ongoing value of redeploying that capital into productive investments.
The green bars are the estimated savings from the reduced impact of contamination events. In many cases, these savings alone pay for the program: The green bars are larger than the red bars in each period.
Celsis also offers an environmental impact report documenting sustainability improvements resulting from implementation, from reducing water and energy consumption to minimizing the amounts of liquid, solid, and hazardous waste requiring disposal.