Friday, July 30, 2010

Photostability Chambers


The use of light for forced degradation studies in the formulation development track of biopharmaceuticals and pharmaceuticals is a topic of interest for many pharmaceutical companies. Caron Products and Services Inc. manufactures two different 10 cubic foot photostability chambers. The 6540 Photostability series controls light and temperature conditions. The 6545 series precisely controls light, temperature, and humidity. Both chambers feature an easy-to-use color touch-screen interface.

www.caronproducts.com | (800) 648-3042

Gas Sorption Analyzer

Quantachrome Instruments, a leading manufacturer and supplier of materials characterization instruments, announces the launch of its latest and most-advanced gas sorption analyzer. The Autosorb-iQ offers a multitude of capabilities in a single instrument and is modular in design. The Autosorb-iQ will have particular appeal to researchers in academia and industry who are at the cutting edge of porous materials development.

www.quantachrome.com | (561) 731-4999

Potentiometric Titrator


JM Science’s new Potentiometric Titrator (COM-1700) has reliable high-speed communications with no response-time lag, so results appear in real time. The compact design reduces bench space by 25%. The new buret design allows quick, efficient automatic reagent exchange. A longer-life syringe, with a top-dead-center rest position that minimizes dead space, and a buret head position sensor eliminate improper assembly. The new sample changer accommodates a wide selection of test tubes, beakers, and conical flasks. It is also equipped with an auto-shutdown function.

The Pharma Ecosystem



By Patrick McGee

It’s old news that pharma and biotech have been struggling over the last several years, and things won’t be getting better any time soon. As our cover story reports, over the next three years, revenue at big pharma is projected to fall off at a rate of 20% a year.

These kinds of sobering numbers are what made us decide to feature an article looking at how the industry is responding to these challenges, how its ecosystem is adapting and evolving.

“What you see now in big pharma is that they’re disintegrating most of the things that historically gave them strength, and they’re looking for new business models,” said G. Steven Burrill, founder and CEO of Burrill & Company.

It’s clear that it is time to move away from the blockbuster model that has been the mainstay of the industry for too long. Many companies are doing this by focusing on their core strengths. And they are also either buying or partnering with smaller startups, companies that are at the cutting edge of innovation.

I saw evidence of some of this innovation in March when I went to the greater Philadelphia area for a tour of several pharma and biotech startups. I met many industry veterans with 15, 20, or 25 years of experience who had been let go by, or decided to bail on, their big pharma employers. But instead of deciding to go into another field or opt for early retirement, many chose to start, or be employed by, small companies working in niche areas. Their innovation filters up to and through many of their big pharma and biotech counterparts, regenerating the ecosystem.

Take the case of Protez Pharmaceuticals, based in Malvern, Pa. The company has a management team of six industry veterans with a combined 155 years of experience working for some of the giants of the industry like Pfizer, GlaxoSmithKline, and Aventis.

They are focused on the development of new antibiotics for resistant bacteria, an area abandoned by big pharma because it believed there was not enough profit margin. But it is estimated that the market for hospital-based antibiotics in the United States is over $5 billion per year and growing at more than 10% annually. That potential clearly got the attention of Novartis, which purchased Protez in June 2008.

Or take ERYtech Pharma, a company based in Lyon, France, with a second office at the University City Science Center in Philadelphia. The company has developed a technique that uses a modified dialysis machine to encapsulate therapeutic compounds in red blood cells. Last November, ERYtech launched a Phase 3 clinical trial in France in patients with relapsed acute lymphoblastic leukemia.

The trial is for the company’s lead compound Graspa, a new enzyme formulation of L-asparaginase that is encapsulated in red blood cells. L-asparaginase has been a critical component of combination chemotherapy for over 30 years, but the company said this new formulation should yield longer-lasting efficacy, better patient compliance, reduced dosage, and an improved safety profile.

These are just two examples of the continuing innovation that should help revitalize the pharma and biotech industries. Although the process can be painful, ecosystems usually find a way to return to a state of equilibrium. Let’s hope the signs of innovation that I saw on this recent tour are indicators of just that.

A supplier management program can increase quality


Figure 1. A Decision Flow Chart can Help When Managing Suppliers
IMAGE COURTESY OF PILGRIM SOFTWARE INC.

Editor’s Note: This article is the second in a two-part series. The first installment appeared in our February/March issue.

In the first part of this article, we discussed why supplier quality has been a prime topic at recent industry conferences, in the media, and in legislation. The supply chain has evolved into a complex process of global sourcing, manufacturing, and distribution. With increasing media exposure for recalls, companies must respond to growing pressure from regulatory agencies and customers to develop, implement, and maintain a supplier management program that integrates compliance, oversight, and strong supplier relationships into business practices and quality systems. In this article, we’ll look at the key areas that companies should focus on to improve overall quality in the supply chain.

A supplier management program can help improve profitability by decreasing the cost of quality. As outsourcing activities increase, additional emphasis should be placed on the control and oversight of suppliers. Remember, poor quality results in rework, waste, delays in product approval, resource inefficiencies, corrections and removals, enforcement actions, and other problems that impact profitability. A good supplier management process should include:

  • Supplier selection, evaluation, and approval to determine the appropriate supplier level (risk) for products, materials, and services/consultants; the appropriate contracts and agreements to meet the needs of the business; good rationalization of the supplier base, with segmentation based on strategic importance, people, information sharing, competitive factors, quality, ethics, culture, stability, longevity, and history; and sound quality and business/financial systems. Such processes will secure access to more/better sources of supply and reduce the cost of procurement.
  • Monitoring/maintenance that will include the initial review, ongoing reviews, and annual evaluations (audits, internal and external nonconformance data, on-time delivery data, and so on) and keeping suppliers on track by communicating company strategy, performing quarterly business reviews, having annual supplier conferences, and keeping supplier score cards. Such processes will increase compliance and decrease risk.
  • Appropriate removal/inactivation controls for noncompliance and inactivity. Such processes will reduce the cost of quality.
  • Real-time data that demonstrate control over suppliers—metrics, data analysis, and action items. Such systems will provide better control over the supplier base with increased collaboration, resulting in standardized practices and processes and improved transparency and auditability.

Risk Management

To ensure that quality is maintained, there must be a risk management process that covers the entire product life cycle. This requires strong supporting audit processes; internal and third-party audit resources; and information management systems that allow in-company resources, external supply chain partners, and auditors to share information. The risk assessment process must include pre-qualification of new suppliers and accompanying follow-up audits. Given the limited resources available for numerous audits, a risk-based approach is essential. In addition, supply chain visibility has become a key issue. The heparin contamination problem revealed the complexity of a multi-tiered, global supply chain. Companies may not have been too concerned with their suppliers’ suppliers in the past, but when it comes to business risk, visibility is essential.

Key issues for supplier risk management include:

  • aligning multiple stakeholders;
  • applying risk management through the entire supply chain;
  • prequalifying suppliers;
  • having audit standards;
  • ensuring visibility across the entire supply chain (primary and sub-tiers); and
  • implementing the right systems for data collection.

The first requirement is to unify the goals and objectives with regard to suppliers and contractors across all functional groups. Cross-functional collaboration is necessary to reduce the risk to the corporation. A failure of quality or significant disruption of supply has a major impact on a company. Hidden costs of manufacturing failures, which can be attributable to suppliers or contractors, can become significant.

A comprehensive risk management system, extending over the complete product life cycle, needs to start in the early phases of product development with a rough risk assessment based on the immediate data available, including the evaluation of raw materials and services, along with appropriate functions involved such as quality, environmental health and safety, and procurement. With appropriate follow-through on actions identified, the risk assessment will help ensure cross-functional alignment.

Given the scarce resources typically available for audits, there must be agreement across functions on how to best deploy the audit resource, including the use of third-party and shared audits. Because such resources can be a severe constraint, a rigorous risk assessment is needed before deployment.

Information and data management is another essential component involved in managing the risk process over a lengthy product life cycle. Without access to a cross-functional and collaborative supplier system, it is almost impossible to achieve a unified approach to managing risks across multiple stakeholders.

Table 1. Example of Applying Risk Management Principles to Evaluation by Taking Into Account Cost, Volume, Type, and Extent of Supplier
IMAGE COURTESY OF PILGRIM SOFTWARE INC.

Supplier Sourcing and Qualification

Each manufacturer establishes and maintains requirements, including quality requirements, that must be met by suppliers, contractors, and consultants. It is the responsibility of the original equipment manufacturer to ensure that the materials used in products are of acceptable quality, regardless of the geographical and logistical challenges involved in evaluating manufacturers and suppliers in remote countries. Supplier qualification must be performed for each material and supplier combination. These expectations are explicitly stated for pharmaceutical components at risk for melamine contamination in an August 2009 Food and Drug Administration current good manufacturing practice guidance that goes beyond 21 CFR Part 211, stating that manufacturers need to know and monitor their supply chain for any at-risk components.

The selection process should occur as early as possible in the product life cycle process and should include cross-functional team members from purchasing, quality assurance, engineering, and manufacturing, as applicable. Evaluations should be conducted based upon the type and extent of control needed over the product, material, or service to be provided and should take into account risk management principles. Use risk management principles to determine supplier risk during the selection process. Some things to consider include:

  • supplier level (type and extent of control);
  • sole source; and
  • single source.

Develop different audit types: on-site audits, quality assessment questionnaires, and business/financial assessments examining the stability of the supplier, including ability to deliver, potential for growth, safety stock for products/materials, and cost and volume. Determine the extent of supplier evaluation: frequency, facility locations, training, volume, cost, and ability to meet acceptance criteria. Finally, document the results of the evaluation and the approval status of each supplier: accept or reject.
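The risk-based classification described above can be sketched in a few lines of code. This is an illustrative example only, not a method from the article or from any regulatory standard: the factor names, weights, and thresholds are all hypothetical, and a real program would draw on the full set of criteria discussed here (quality history, financial stability, geography, and so on).

```python
# Hypothetical sketch of risk-based supplier classification.
# Criticality and volume are rated 1 (low) to 3 (high); weights and
# cutoffs below are illustrative assumptions, not standard values.

def classify_supplier(criticality, volume, sole_source):
    """Return a risk level and a suggested audit interval in months."""
    score = criticality * 2 + volume   # weight criticality more heavily
    if sole_source:
        score += 3                     # no alternate source raises business risk
    if score >= 8:
        return ("high", 12)            # annual on-site audit
    elif score >= 5:
        return ("medium", 24)          # biennial audit or questionnaire
    return ("low", 36)                 # periodic questionnaire only

# A sole-source supplier of a highly critical material lands in the
# high-risk tier and would be audited on site every year.
level, months = classify_supplier(criticality=3, volume=2, sole_source=True)
print(level, months)
```

The point of such a scheme is simply to make the selection criteria explicit and repeatable, so that scarce audit resources flow to the suppliers that pose the greatest risk.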

Choose a potential supplier based upon the company’s ability to provide consistent product, material, and/or services. Determine the stability of the potential supplier by:

  • Establishing and/or reviewing quality systems and data, including compliance, contracts, and quality agreements;
  • Developing contracts/quality agreements with clear expectations and well-defined acceptance criteria, which should include quality system requirements such as control of material; audits by the manufacturer and/or by an external agency; notification of changes to location, process, or materials; nonconformance control; complaint handling; and corrections and removals; and
  • Finalizing an approved supplier list, a formal controlled list of suppliers that includes pertinent information such as name, location, type, level, and current status.

As companies look for ways to improve procurement cost and lead times, the focus in supplier partnerships is shifting from one of price reduction to relationship value and total cost of management.

Supplier Audits and Testing

Auditing your suppliers is a key part of quality and compliance improvement. A pharmaceutical company needs to insist on a confidential in-depth audit of every part of the facility, operations, and quality systems relating to the material it wants to purchase, including manufacturing and testing details. In addition to regular, thorough manufacturing facility audits, the company should actively ensure that it or its brokers have reliable, up-to-date knowledge and verification of the drug substance upstream to ensure that materials really do come from approved sources. If manufacturing and supply chain integrity cannot be verified regularly, the initial cost savings from a cheaper source cannot compensate for increased risk.

Companies must determine how often and to what extent suppliers will be reevaluated. Risk-based measurements should be defined for data analysis when reviewing suppliers, including ensuring that meaningful data are captured; configuring data so that problems can be identified; and identifying data-driven status changes.
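A data-driven status change of the kind just described can be as simple as flagging a supplier whose recent rejection rate crosses a threshold. The sketch below is a hypothetical illustration, not a prescribed rule: the field names, thresholds, and status labels are assumptions chosen for the example.

```python
# Hypothetical data-driven supplier status check. The 5% threshold and
# the status labels are illustrative assumptions.

def review_status(lots_received, lots_rejected, threshold=0.05):
    """Suggest a supplier status from recent receiving-inspection data."""
    if lots_received == 0:
        return "inactive-review"   # no activity: candidate for inactivation
    rate = lots_rejected / lots_received
    if rate > threshold * 2:
        return "suspend"           # escalate: stop new purchase orders
    if rate > threshold:
        return "conditional"       # tighten inspection, request corrective action
    return "approved"

# One rejected lot out of 40 is below the 5% threshold, so the
# supplier keeps its approved status.
print(review_status(lots_received=40, lots_rejected=1))
```

The value of encoding the rule is that a status change fires from the data itself rather than waiting for an annual review to notice the trend.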

Audits must be conducted by skilled, experienced auditors who are technical experts and are, preferably, fluent in the local language, with enough time to look carefully at suppliers’ operations and records. Given the complexity of the supply chain—numerous partners and geographic locations—industry is responding by developing a consortium approach to audit execution and information collaboration.

Rx-360, a nonprofit international pharmaceutical supply chain consortium seeking efficient and effective approaches to supplier risk management, has combined the efforts of leading pharmaceutical companies to implement audit standards, auditor training, and auditor certification, providing the capability to support over 1,000 audits. With the goal of a consortium supplier certification program, suppliers can start sharing critical data with transparency to the manufacturer and the distribution channel. In addition, the consortium expects to implement additional standards for good distribution practices; good importer practices; good storage practices; quality agreements; certificate of analysis; tamper-evident packaging; and pre-audit questionnaires.

Traditionally, manufacturers get inspection records only once the material physically arrives at their premises. A good inspection system allows for the creation of different types of sampling plans. One example is the American National Standards Institute/American Society for Quality Control (ANSI/ASQC) standards and guidelines for sampling, used in testing incoming inventory against acceptable quality levels. Based on inspection results, the receiving inspection system allows companies to prevent out-of-specification materials from entering the production environment.
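The arithmetic behind a single-sampling plan of the kind those standards tabulate is straightforward: inspect n units from the lot and accept it if at most c are defective. The probability of acceptance at a given true defect rate follows the binomial distribution. The plan parameters below (n = 50, c = 1) are an illustrative assumption, not values taken from the ANSI/ASQC tables.

```python
# Hedged sketch: the operating-characteristic calculation behind a
# single acceptance-sampling plan (inspect n units, accept the lot if
# at most c are defective). Plan parameters here are illustrative.
from math import comb

def prob_accept(n, c, p):
    """Probability of accepting a lot when the true defect rate is p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Example plan: inspect 50 units, accept if 1 or fewer defects are found.
pa_good = prob_accept(50, 1, 0.01)  # lot running at 1% defective
pa_bad = prob_accept(50, 1, 0.10)   # lot running at 10% defective
print(round(pa_good, 3), round(pa_bad, 3))
```

A plan like this accepts a 1%-defective lot about 91% of the time while accepting a 10%-defective lot only about 3% of the time, which is exactly the discrimination a receiving-inspection system relies on to keep out-of-specification material off the production floor.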

Even if the tests are made more specific, however, the risk of contamination of materials can be significant. Even third-party testing is not enough to ensure the quality of pharmaceuticals. It is not possible to test quality into a pharmaceutical product; good quality must be built into the product with appropriate materials and validated processes that ensure the product’s safety and efficacy. There is a risk that modifying even a single processing step can change the by-products or impurities of the materials used. As with a final drug substance or API, testing confirms material quality but may not be enough on its own, without manufacturing records showing that each batch was made according to a validated process and complied with the final product specifications.

The best suppliers will accept the new reality of increased transparency, and brokers and distributors who can establish and guarantee the level of quality assurance associated with more rigorous auditing and supply chain verification will be invaluable.

Figure 2. An ICH Q9 Risk Management Plan can Help Ensure That all Possibilities are Considered
IMAGE COURTESY OF PILGRIM SOFTWARE INC.

Operation and IT Challenges

Technology is critical to securing the global supply chain because it is the only way to provide both visibility into products and assets across extended supply networks and the ability to take action immediately. Senior management should be updated regularly on the security and integrity of critical supply chains. Overall reports usually include:

  • metric and data analysis and review;
  • inspection results (incoming, production/manufacturing, source inspections);
  • nonconformance control/corrective and preventive action (CAPA) oversight;
  • complaint handling/corrections/removals;
  • business metrics (delivery, lead time, technical support, cost, and strategic initiatives);
  • supplier scorecard;
  • adherence to contracts/quality/purchasing agreements;
  • audit results and responsiveness; and
  • cost of quality.

However, current manual processes can limit the integration of applications and data, preventing the analysis needed to effectively understand the state of supply chain quality, compliance, and potential risks. Supply chain data may be maintained within a single quality system, within individual departments, or across other automated manufacturing and procurement solutions; in many cases, the different elements of an organization’s product supply, quality, regulatory, and commercial functions are disconnected.

An integrated quality approach must be embedded into the core business. Ask yourself these questions: How many systems contain some supplier quality data throughout the product life cycle? How much time is spent navigating the maze of data versus using and focusing on problem solving? How much time is spent on the duplication of data entry or searching for data entry errors across different supplier quality systems? Can people at all points in the chain create data that are recognized from end to end? Do any supplier corrective action requests signal not only discrete performance issues (e.g., a failed batch), but also overall degradation to the whole company? With regard to any CAPAs, are all sites aware of the issues that affect them and are they actively looking to make connections? Are holds instantaneous throughout the entire chain? Does information about non-conformances get passed along forward and backward? Have you truly leveraged technology to enable effectiveness, efficiency, and closed-loop processes?

Although these systems provide automation that helps organizations reap the benefits of repeatability and sustainability, you must ask yourself if they are truly complementing your overall supplier quality investments. Integration allows real-time access to data stored within each system, enabling your organization to make informed decisions based on current information. But it’s not just about technology and automation. Automation of inefficient processes will, by default, simply make your inefficiencies happen faster. A comprehensive supplier quality management system helps companies understand the significant impact that poor quality, service, and delivery have on sales and profits. Take what you’ve learned from your process inefficiencies, add that to your understanding of best practices, and architect them into an integrated supplier quality platform across your organization’s value chain.

With such technology enablers in place, reporting and analysis makes it easier for management and all stakeholders to understand the supply chain state of risk, quality, and compliance.

Flexible, Simple, Integrated

Make the supplier management system flexible, simple, risk-based, and easily integrated throughout the organization. Assure that sufficient mechanisms are in place so that all suppliers are managed according to the type of product, materials, or service provided—this includes “consultant” services. Have a selection, maintenance, and reporting process that includes materials, products, and services throughout the product life cycle. Make sure the appropriate functional partnerships are established—engineering, manufacturing, purchasing, quality, and the supplier’s representative functions. Invest in resources to support the internal partnerships among quality, purchasing, engineering, and manufacturing to drive for successful oversight and management of suppliers. Build supplier relationships.

Integrate risk management throughout the supplier selection, maintenance, and data reporting processes. Implement measurement, data analysis tools, and processes for different levels of the organization. Configure data so that problems related to product, process, or quality systems can be identified. Identify and communicate to the supplier results of the analysis and/or any further decision to take action. Drive for the development and ownership of supplier relationships, in other words, accountability.

A good supply chain management program with effective quality processes will help pharmaceutical companies reduce the cost of goods purchased due to improved market access and more effective price negotiations made possible by company leverage. It will also reduce risk due to increased compliance with more thorough specs, better communication with your suppliers, and more rigorous testing and reporting. Such a program will also improve supplier performance with service that is punctual and high quality, meaning there will be no surprises for you. It will also deliver standardized contracts that are easier to monitor and a sustainable competitive advantage with consolidation and visibility throughout the entire supply chain.

Stem Cells Could Fight HIV


An alternative for patients who can’t use antivirals

A novel stem cell therapy that arms the immune system with an intrinsic defense against HIV could be a powerful strategy to tackle the disease. This new approach could dramatically improve the quality of life and life expectancy for HIV sufferers in whom antiviral drugs are no longer effective, said Ben Berkhout, PhD, head of the Laboratory of Experimental Virology at the University of Amsterdam. He discussed his research at the recent spring meeting of the Society for General Microbiology in Edinburgh, Scotland.

In the absence of an effective vaccine, daily administration of antiretroviral drugs is the most effective treatment for HIV. However, low patient compliance rates, combined with the virus’s ability to easily mutate, have led to the emergence of drug-resistant strains that are difficult to treat.

This therapy would offer an alternative for HIV-infected patients that can no longer be treated with regular antivirals.
—Ben Berkhout, PhD, University of Amsterdam

Dr. Berkhout is investigating a novel gene therapy with long-lasting effects even after a single treatment. This therapy arms patients against viral infection by delivering antiviral DNA to their own immune cells. “This therapy would offer an alternative for HIV infected patients that can no longer be treated with regular antivirals,” he said in a statement.

The therapy involves extracting and purifying blood stem cells from a patient’s bone marrow. Antiviral DNA is transferred to the cells in the laboratory, after which the cells are re-injected into the body. The DNA encodes small RNAs that are the mirror image of key viral genes used by HIV to cause disease. The small RNAs float around inside the immune cell until they encounter viral genes to which they can firmly attach. This RNA interference approach can block the production of key viral components from these genes.

Transferring the antiviral DNA to stem cells would help to restore a large part of the patient’s immune system. The group hopes to start clinical trials of the therapy within three years. “So far, very promising results have been obtained in the laboratory, and we are now testing the safety and efficacy in a pre-clinical mouse model,” Dr. Berkhout said.

Slowing down immune system may improve HIV vaccines


Schematic representation of the key structural features of SIV and HIV-1 entry into T cells.
WIKIMEDIA.ORG

Like a skittish driver slamming on the brakes, a special class of T cells may be limiting the effectiveness of therapeutic vaccines for HIV by slowing the immune system response too soon, University of Pittsburgh health science researchers reported in a recent issue of PLoS ONE. Their study, the first to look at the role of regulatory T cells in therapeutic HIV vaccines, may help researchers improve the efficacy of such vaccines by devising methods to circumvent the braking mechanism of these cells.

Regulatory T cells (Treg) are critical because they suppress the immune response, preventing the immune system from turning against itself. Without the braking action of Treg, autoimmune disease could flourish. But what if these cells are shutting down the immune response before a therapeutic vaccine has had a chance to bolster immunity against HIV?

When we removed Treg from blood cells, we found a much stronger immune response to the vaccine, giving us insight into how we can develop more effective HIV vaccines.
—Charles R. Rinaldo Jr., PhD, University of Pittsburgh

Pitt researchers sought to answer this question as follow-up to a clinical trial of a therapeutic dendritic cell-based HIV vaccine they developed to activate the CD8, or killer T cell, response. First reported in 2008, their findings indicated only limited success of the vaccine in the 17 patients enrolled in the trial. For the current study, the researchers went back to the freezer, removed Treg from the patients’ blood cell samples, and found it was masking a two-fold increase in immune response to HIV induced by the vaccine.

“When we removed Treg from blood cells, we found a much stronger immune response to the vaccine, giving us insight into how we can develop more effective HIV vaccines,” said Charles R. Rinaldo Jr., PhD, professor and chairman of the department of infectious diseases and microbiology at Pitt’s Graduate School of Public Health.

“Treg normally shuts down CD8 responses once the infection has been controlled, but in this case it appears to be putting on the brakes early and possibly limiting the vaccine’s ability to do its job effectively,” said Dr. Rinaldo, the study’s lead author, in a statement. One theory is that HIV infection drives up Treg, which in turn shuts down the HIV-1-specific CD8 T cell response, he said.

Pill Signals That It Has Been Swallowed


Can help ensure patient compliance


Seeking a way to confirm that patients have taken their medication, University of Florida engineering researchers have added a tiny microchip and digestible antenna to a standard pill capsule. The prototype is intended to pave the way for mass-produced pills that, when ingested, automatically alert doctors, loved ones, or scientists working with patients in clinical drug trials.

“It is a way to monitor whether your patient is taking their medication in a timely manner,” Rizwan Bashirullah, PhD, an assistant professor in electrical and computer engineering at the university, said in a statement.

Such a pill is needed because many patients either mismanage, forget, or refuse to take their medication. This causes or exacerbates medical problems, spurs hospitalizations or expensive medical procedures, and undercuts clinical trials of new drugs.

The American Heart Association (AHA) calls patients’ failure to follow prescription regimens the biggest problem in treating illness. Studies have found, for example, that patients with chronic diseases normally take only about half their prescribed medications. According to the AHA, 10% of hospital admissions result from patients not following their prescription guidelines. Other studies have found that taking medication improperly results in 218,000 deaths annually.

Compliance a Problem

Medication compliance is a big problem for clinical trials, Dr. Bashirullah said, because failure to take the experimental drugs skews study results or makes them useless. As a result, researchers often require visual confirmation of participants taking pills, an extremely expensive proposition for trials in which hundreds or thousands of people are participating.

“The idea is to use technology to do this in a more seamless, much less expensive way,” Dr. Bashirullah said. Doctoral student Hong Yu, Chris Batich, PhD, of the University of Florida’s materials science and engineering department, and Neil Euliano of Gainesville-based Convergent Engineering designed and tested the pill with Dr. Bashirullah.

The system has two parts. One is the pill, a standard white capsule coated with a label embossed with silvery lines. The lines comprise the antenna, which is printed using ink made of non-toxic, conductive silver nanoparticles. The pill also contains a tiny microchip about the size of a period.

When a patient takes the pill, it communicates with the second main element of the system, a small electronic device carried or worn by the patient. For now, it is a stand-alone device, but in the future it could be built into a watch or cell phone. The device then signals a cell phone or laptop that the pill has been ingested, and this informs doctors or family members.

No Batteries Needed

Dr. Bashirullah said the pill needs no battery because the device sends the pill power via imperceptible bursts of extremely low-voltage electricity. The bursts energize the microchip to send signals relayed by the antenna. Eventually the patient’s stomach acid breaks down the antenna, and the microchip is passed through the gastrointestinal tract, but not before the pill confirms its own ingestion. “The vision of this project has always been that you have an antenna that is biocompatible and that essentially dissolves a little while after entering the body,” Dr. Bashirullah said.

The team has successfully tested the pill system in artificial human models as well as cadavers. Researchers have also simulated stomach acids breaking down the antenna to learn what traces it leaves behind. Dr. Bashirullah said those tests had determined that the amount of silver retained in the body is minimal, less than the amount people often receive from common tap water.

IN THE LAB - Testing Chambers | Portable Stability Testing Chambers



By Ron Breuer

Cost-effective chambers offer flexibility and adaptability

The interior of a portable stability testing chamber. When temperature and humidity are distributed evenly throughout, all samples are exposed equally to consistent conditions over extended periods of time.

Stability testing is an essential part of pharmaceutical product development, important to regulatory review, product efficacy, and consumer safety. While necessary, these studies impose a cost burden on drug development and production, and how an organization addresses stability testing requirements can significantly affect both efficiency and expense. This piece will examine the relative cost effectiveness of portable stability test chambers compared to permanently installed testing rooms. Factors influencing this calculation include not only the purchase price per cubic foot for the storage space, but also the costs for maintenance, operations, redundancy, efficiency, flexibility, and what may be referred to as “leave behind.”

Portable chambers offer anywhere from 4 to 30 ft3 of useable sample space, while built-in rooms typically provide 20 to 400 ft3 of sample storage. A state-of-the-art portable chamber costs between $10,000 and $25,000, depending on its size. A full-sized installed room starts at approximately $40,000 and can cost considerably more. Thus, on a straight price-per-ft3 basis, portable chambers are more cost effective.
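The arithmetic behind that comparison can be sketched briefly. The price and capacity ranges are those quoted above; the mid-range portable figures chosen here are illustrative assumptions, not vendor quotes:

```python
def cost_per_cubic_foot(price_usd: float, capacity_ft3: float) -> float:
    """Purchase price per cubic foot of usable sample space."""
    return price_usd / capacity_ft3

# Mid-range portable chamber: assume ~$17,500 for ~17 ft3 of sample space
portable = cost_per_cubic_foot(17_500, 17)

# Entry-level installed room: ~$40,000 for 20 ft3 (the low end quoted above)
room = cost_per_cubic_foot(40_000, 20)

print(f"Portable chamber: ~${portable:,.0f}/ft3")
print(f"Installed room:   ~${room:,.0f}/ft3")
```

Even before maintenance, redundancy, and leave-behind costs are added, the installed room's purchase price per cubic foot is roughly double that of the assumed mid-range portable chamber.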

Considerations

The first consideration is physical space. In a new facility, what are the tradeoffs involved in dedicating floor space to a built-in room? In an existing building, are the space and layout appropriate to accommodate a room? If not, and unless new construction is an option, the decision is made. In some localities it is necessary to add the cost of building permits—and the time it takes to have them approved—to the list of expenses for built-in rooms.

The issue of redundancy comes into play as well. If a room goes down, a comparable amount of space will be needed in which to place samples while the room is brought back online. The chances that all of an organization’s portable stability testing chambers will be offline at the same time are remote at best. Thus, it is typically sufficient to hold approximately 50% of the total portable-chamber capacity in reserve as backup. In the event of a local power outage, portable units can be relocated to a portion of the facility, or to adjoining facilities, that still has power. Reduced requirements for backup testing chamber availability are a significant factor when considering the overall budget for acquiring testing capacity. (It should be noted that backup generators are needed at all facilities, whether stability testing is done in portable chambers or installed/built-in rooms.)
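The redundancy argument can be expressed in the same back-of-the-envelope terms. The 50% coverage figure is the rule of thumb described above, not an industry standard, and the total capacity used here is a hypothetical example:

```python
def backup_capacity_needed(total_ft3: float, coverage: float) -> float:
    """Standby space to keep available, as a fraction of total capacity."""
    return total_ft3 * coverage

TOTAL = 120.0  # hypothetical fleet: e.g., eight 15-ft3 portable chambers

# A single built-in room can fail all at once, so it needs full coverage.
room_backup = backup_capacity_needed(TOTAL, 1.0)

# Portable chambers fail independently, so ~50% coverage typically suffices.
portable_backup = backup_capacity_needed(TOTAL, 0.5)

print(room_backup, portable_backup)  # prints: 120.0 60.0
```

The practical consequence is that an all-portable installation needs to budget for roughly half as much standby space as an equivalent built-in room.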

The portability of freestanding chambers provides further benefits. The ability to relocate the chambers creates a more flexible facility—they can be moved should it become necessary to change the footprint of storage space. Looking at long-term business trends, an additional benefit can be found when companies change their physical location: the leave-behind factor. This happens more often than plant and business managers may realize. Downsizing, upsizing, relocation, mergers, acquisitions, and other factors often cause companies to move portions of their production and/or testing facilities. Built-in rooms are left behind. Installed rooms can be disassembled and moved, but with far greater time and expense. This potential future cost should be considered in the overall calculations.

With regard to available space, there are pluses and minuses to both types of stability testing platforms. When really large batches need to be accommodated, it is convenient to have a room large enough that the new samples can be readily added with minimal disturbance or movement of existing samples. Portable chambers provide more flexibility. They can be set for varying parameters, including temperature, relative humidity, and light exposure, which can be important depending on the storage requirements of individual samples and the climate in which the product is to be distributed.

On this temperature and humidity chart, the light area indicates the control range of temperature and relative humidity. The hatched area indicates the control range of temperature and relative humidity without condensation. The chamber being measured is a Binder KB 240.
IMAGE COURTESY OF BINDER INC.

While batches that require different time periods for study are easily accommodated in either chamber or room, there are times when portable units can reduce the amount of record keeping that is necessary if they host an entire batch for the duration of the test. Built-in and installed rooms may more readily accept more kinds of packaging, but larger-capacity portable chambers also accommodate a full variety of containers and packages.

Maintenance is another factor to consider. The equipment that is needed to maintain high and low temperature and humidity conditions in the chambers must run continuously for long periods of time. While most units made today are fairly reliable, they do break down on occasion. The flexibility portable chambers provide in terms of backup space matters from a maintenance standpoint as well.

It is also easier to schedule maintenance on portable chambers by doing so when they come offline or even scheduling them to be offline at specific intervals. Taking a room offline is a much more complicated and time-consuming endeavor. In addition, it can sometimes be more difficult to find qualified service personnel for installed rooms, particularly if the installations are heavily customized.

Operational Considerations

Operational factors, while not as significant in the overall picture, still need to be considered. Record keeping for multiple units can be more time consuming, and physically keeping track of the units, while not normally a problem, can be a concern. Power costs are comparable. Security is easier to accomplish with portable chambers, mainly because their size limits the number of people with access and the number of different sample runs they contain. If photostability testing is to be conducted on some of the samples, isolating them within a room can prove more troublesome, especially compared to placing the samples in a dedicated unit.

Over the long term, replacement costs must be part of the calculation. This tends to be a gradual process with portable chambers—they don’t usually suffer fatal failures all at the same time. When a room has outlived its usefulness, it must be replaced in its entirety. It is easier to replace testing space a fraction at a time rather than all at once. Replacing units individually also allows organizations to purchase the latest technological advances as the need arises.

In theory, all stability testing chambers are designed to meet all applicable national and international testing specifications and requirements. It is important to buy good quality equipment, whichever type is selected. The equipment should also be backed by comprehensive warranties from a company with a proven reputation for customer service. In addition to the long-term savings in replacement and maintenance costs, purchasing well-made products up front will save time, headaches, and money once the costs of downtime, compliance, and operational disruption are factored in.


In terms of performance, both types of testing space typically do well. Most manufacturers—some better than others—do at least a reasonable job of putting out units that hold temperature and humidity fairly steadily over long periods of time. While neither type of unit holds an inherent advantage in terms of performance, the flexibility afforded by portable chambers does provide advantages.

Maintaining Temperature and Humidity

Manufacturers of built-in and portable stability testing chambers use different methods to maintain temperature and humidity conditions in their units, and they are not all equal. It is important to look carefully at uniformity throughout the chamber, recovery after door openings, and other measurable and non-quantifiable factors. Shelf design, ease of access, ease of cleaning, resistance to contamination, resistance to ice buildup, data logging and environmental monitoring technologies, and many other factors need to be weighed.

These details can be important in the validation process as well. While validating multiple chambers can add to record keeping, it is simpler to document overall stability of temperature and humidity in a smaller space than it is in a larger space. Validation of chambers dedicated to a specific lot or a small number of lots can often be more readily accomplished.

Stability testing, a critical part of pharmaceutical development, evaluation, and production, is necessary both to meet regulatory review requirements and ensure product safety. Numerous options are available in terms of type and quality of unit, as well as the various technologies used within. In the long term, the flexibility and adaptability provided by portable stability chambers lead to clear economic benefits for the consumer. In addition, they provide many functional benefits over the time of their use, and, while not the appropriate choice in all situations, are often easier to use because of their adaptability.

FORMULATION - Excipients | Dry Granulation Simplifies Tableting Process

Fast and cost effective compared to wet granulation

Recent advances in formulation technologies have led to a shift from traditional wet granulation to dry granulation manufacturing processes in the development of solid oral dosage forms. This change has come about largely because dry granulation expedites processing, simplifies handling, and saves time and money; the wet granulation process requires multiple steps that involve agglomeration (granulation), drying, sieving, particle size reduction, and blending.1 Dry granulation is suitable for medium- and high-dose drugs and is particularly applicable for active pharmaceutical ingredients (APIs) that are heat and moisture sensitive.2

Figure 1. Plasticity of Dry Binders at Different Compression Forces (Plasticity = Plastic Energy/Total Energy)
IMAGE COURTESY OF BASF CORP.

Direct compression and roller compaction are common processing methods used in dry granulation. These processes enhance solid dosage stability by increasing the final tablet hardness and reducing tablet friability. Excipients play a major role in the development of robust formulations because they can influence the degree of granule compression and binding.3 Excipients can also absorb mechanical stress derived from the granulation and tableting processes without affecting tablet hardness and tensile strength.

This article will review the types of dry binders, both pure and co-processed, that are applicable for dry granulation and will focus on those that are ideally suited for manufacturing tablets by direct compression and roller compaction.4

Requirements for Dry Binders

One important requirement for a dry binder is good powder flowability. The dry binding characteristics are influenced by the powder’s morphology; shape and size; porosity; plasticity; hygroscopicity; compressibility; stability to air, moisture, and heat; and compatibility with APIs. Binding strength is governed by intermolecular forces and mechanical interlocking, depending on the excipient type. For instance, intermolecular interactions are more prevalent in excipients with a tendency to fracture and/or deform on compression, whereas mechanical interlocking is prevalent in those with needle-like fibers that interlock by hooking or twisting.5

There are two categories of dry binders, depending on their usage level. The first category requires only a small quantity of dry binder to produce tablets with the desired hardness. For example, copovidone (Kollidon VA64/Fine from BASF) can be used at levels as low as 2% to produce hard tablets by direct compression. The second category requires larger quantities of dry binders. For example, microcrystalline cellulose, lactose, calcium phosphate, and maltodextrin need to be used at levels as high as 40% before appreciable tablet hardness is achieved.

Several manufacturers market dry binders. Some of them require only low to modest compression forces to yield harder tablets with good tensile strengths, while others require high compression forces to yield harder tablets with modest tensile strengths. The application of higher compression forces on certain dry binders can have a negative effect on tablet hardness and may result in a significant loss of tensile strength due to double mechanical stress.

Excipient Selection

Table 1. Evaluation of Dry Binders in Aspirin Tablets at 7.6 wt%.
IMAGE COURTESY OF BASF CORP.

Although the dry granulation process is cost effective and improves dosage stability, the excipients’ chemistry, physicochemical properties, and, most importantly, compatibility with APIs can present formulation challenges. For example, cellulosic binders have poor flowability and are incompatible with some APIs, while hydroxypropyl cellulose (HPC) can interact with phenols and anionic polymers. Microcrystalline cellulose (MCC) and hydroxypropyl methylcellulose (HPMC) are incompatible with oxidizing agents, and lactose interacts with APIs containing amino groups. Finally, pre-gelatinized starch has low elasticity and slow plastic deformation, while HPC and HPMC have poor disintegration and dissolution properties.

Kollidon VA64/Fine, on the other hand, has strong binding and compressibility characteristics; has good flowability, porosity, and plasticity; has low glass-transition temperature and low moisture absorption; and is compatible with many APIs. The average particle sizes of Kollidon VA64 and Kollidon VA64/Fine are 55 and 17 microns, respectively. The smaller spherical, hollow particles of Kollidon VA64/Fine provide excellent flowability, compressibility at low compression forces, and plasticity.6 Figure 1 illustrates the plasticity of different dry binders in comparison with Kollidon VA64. The data suggest that Kollidon VA64 outperforms other dry binders like polyvinylpyrrolidone (PVP), MCC, and HPMC.

Performance of Dry Binders

Figure 2. Hardness of Acetylsalicylic Acid Tablets (500 mg) Obtained by Direct Compression With Different Dry Binders
IMAGE COURTESY OF BASF CORP.

The performance of dry binders has been evaluated with many APIs, including acetylsalicylic acid (aspirin). The formulation composition of aspirin with different dry binders is shown in Table 1. The powder blends were compressed into tablets on a high-speed tablet press (Korsch PH 100/6) using 12-mm beveled-edge punches at a rotary speed of 30 rpm. The hardness achieved with individual dry binders at different compression forces is shown in Figure 2.

Crospovidone grades were also evaluated as the dry binders.7 Crospovidone grades ranging in mean particle sizes from five microns (Kollidon CL-M) to 25 microns (Kollidon CL-SF) to 35 microns (Kollidon CL-F) to 110 microns (Kollidon CL) were selected for direct compression on a single punch Carver press. The compression data of crospovidone grades are shown in Figure 3.

It is interesting to note that the finer crospovidone grades, with the exception of Kollidon CL-M, compressed into significantly harder tablets at both low and high compression forces. The compressibility of the crospovidone grades decreased in the following order: Kollidon CL-SF > Kollidon CL-F > Kollidon CL > Kollidon CL-M. Such differences in compressibility profiles might be related to greater porosity in the matrix, which could lead to increased hardness of the tablets upon compression. The poor compressibility of Kollidon CL-M was presumably due to micronized pores with fewer void spaces within the polymer matrix.

Although dry binders have good powder characteristics and flowability, dust generation can be potentially hazardous during solid dosage manufacturing. Roller compaction helps avoid this concern, since compression yields granules in the dry state in a solvent-free process.

Roller compaction has many process advantages and a few disadvantages. On the negative side are costs associated with purchase, installation, and maintenance of the equipment; generation of dust that may lead to cross-contamination; quality issues due to raw material fines; and the requirement for excipients with good binding/cohesive properties capable of enduring double mechanical stress. All of these factors need to be carefully considered.

Table 2. Formulation Composition of Allopurinol Granules
IMAGE COURTESY OF BASF CORP.

In a typical process, the blend of excipients and API is fed between the two counter-rotating drums of the roller compactor. As the powder blends proceed through the drums, the resulting ribbons are further granulated and passed through sieves before a lubricant is added and the tablets are produced.

Table 2 shows the formulation composition of allopurinol granules prepared by roller compaction using Kollidon VA64/Fine as a dry binder with the appropriate processing conditions. The resulting granules were compressed on a Korsch PH 100/6 at 16 kN to yield 100 mg allopurinol tablets with the following desired characteristics: diameter, 8 mm; hardness, 246 N; friability, <>

Wanted: Appropriate Binding

This article reviews several excipients that can be used as dry binders in direct compression and roller compaction. An excipient with poor binding character poses a challenge in drug development. Thus, identification of a dry binder with appropriate binding characteristics is important to achieve the desired tablet hardness and tensile strength.

Cellulose-based excipients like MCC are the most commonly used binders in direct compression and roller compaction. In roller compaction, MCC generates a large quantity of fines on compression, which, in turn, leads to a significant reduction in tablet tensile strength.8 HPC and HPMC also generate fines upon roller compaction, leading to poor tablet friability and decreased tablet tensile strength. Dicalcium phosphate and lactose are highly brittle and fracture upon compression, producing a large quantity of fines, which leads to poor surface contact with APIs and poor binding.

Conversely, PVP and copovidone possess a relatively higher plasticity than lactose, dicalcium phosphate, or cellulose-based excipients. The higher plasticity produces a greater binding effect due to increased polymer/API surface contact as the polymer deforms during compaction. The net result is harder tablets with improved tensile strength.

Figure 3. Dry Binding Properties of Crospovidone Grades
IMAGE COURTESY OF BASF CORP.

Kollidon VA64/Fine could play an important role in dry granulation. Recently, Herting and colleagues evaluated insoluble PVP grades (Kollidon CL-M, CL-F, and CL-SF) and Kollidon VA64/Fine in roller compaction and compared the results with cellulose-based MCC, HPC, and HPMC.9 Interestingly, in compression studies of non-compacted powders, Kollidon CL-M showed tensile strength comparable to Kollidon VA64/Fine but significantly higher than MCC, HPMC, or HPC.

The tensile strength of tablets with Kollidon VA64 and Kollidon CL-F was slightly lower than that of Kollidon CL-M or Kollidon CL-SF. The significant loss in tensile strength of MCC, HPMC, and HPC is predominantly due to generation of a large amount of fines in the granules as compared to PVP-based excipients (~40% vs. ~15%).9

The tensile strength of the excipients decreased in the following order: Kollidon VA64/Fine > Kollidon CL-M > Kollidon VA64 > Kollidon CL-SF > MCC > HPMC > HPC.

When all is said and done, there is much to recommend dry granulation. It offers a significant advantage over wet granulation because it simplifies the entire tableting process. Direct compression and roller compaction offer process solutions for many drugs sensitive to moisture and temperature. Dry granulation is a fast and cost-effective process. A number of excipients have been evaluated in dry granulation, with some performing better than others on compression.

The data suggest that Kollidon VA64 and Kollidon VA64/Fine are good dry binders for direct compression and roller compaction. A recent study showed that insoluble PVP grades (Kollidon CL-M and Kollidon CL-SF) performed better than cellulosic dry binders in roller compaction. Interestingly, Kollidon CL-M and Kollidon CL-SF outperformed all of the dry binders investigated.


Dr. Ali is manager of technical sales and Dr. Langley is head of technical sales in Pharma Ingredients and Services at BASF Corporation. Reach them at shaukat.ali@basf.com, nigel.langley@basf.com, or by calling (973) 245-6000.

REFERENCES

  1. Nyström C, Glazer M. Studies on direct compression of tablets. XIII. The effect of some dry binders on the tablet strength of compounds with different fragmentation propensity. Int J Pharm. 1985;23:255-263.
  2. Maschke A, Meyer-Böhm K, Kolter K. Dry binders used in direct compression. ExAct. 2008;20:2-5. Available at: www.pharma-solutions.basf.com/PDF/Documents/EMP/ExAct/ExAct_20_May2008.pdf. Accessed March 31, 2010.
  3. Van Gessel S, van Duinen H, Bogaerts I. Roller compaction of anhydrous lactose and blends of anhydrous lactose with MCC. Pharm Technol Web site. April 1, 2009. Available at: http://pharmtech.findpharma.com/pharmtech/Ingredients/Roller-Compaction-of-Anhydrous-Lactose-and-Blends-/ArticleStandard/Article/detail/590451. Accessed March 31, 2010.
  4. Gohel MC, Jogani PD. A review of co-processed directly compressible excipients. J Pharm Pharm Sci. 2005;8(1):76-93.
  5. Kolter K, Flick D. Structure and dry binding activity of different polymers, including Kollidon VA64. Drug Dev Ind Pharm. 2000;26(11):1159-1165.
  6. Bühler V. Kollidon VA 64 grades (copovidone). In Kollidon: Polyvinylpyrrolidone Excipients for the Pharmaceutical Industry. 9th ed. Ludwigshafen, Germany: BASF SE; 2008:207-252.
  7. Ali S, Santos C. Crospovidone in development of directly compressible tablets. Poster presented at: 2009 AAPS Annual Meeting and Exposition; November 2009; Los Angeles, Calif. Available at: www.aapsj.org/abstracts/AM_2009/AAPS2009-001798.PDF. Accessed March 31, 2010.
  8. Herting MG, Klose K, Kleinebudde P. Benchmark of different dry binders for roll compaction/dry granulation. ExAct. 2008;20:6-7. Available at: www.pharma-solutions.basf.com/PDF/Documents/EMP/ExAct/ExAct_20_May2008.pdf. Accessed March 31, 2010.
  9. Herting MG, Klose K, Kleinebudde P. Comparison of different dry binders for roll compaction/dry granulation. Pharm Dev Technol. 2007;12(5):525-532.

Nanotechnology and Drug Development


To learn more about the intersection of nanotech and drug discovery and development, read the abstracts below from Wiley Interdisciplinary Reviews: Nanomedicine and Nanobiotechnology, which is published by John Wiley & Sons, the publisher of Pharmaceutical Formulation & Quality.

The core-shell architecture of a poly(amidoamine) (PAMAM) dendrimer with an ethylene diamine core with a typical generation numbering scheme. Half-generation PAMAM dendrimers may have carboxyl or methyl ester terminal groups. Unmodified full-generation PAMAM dendrimers have amine surface groups.

Understanding Specific and Nonspecific Toxicities: A Requirement for the Development of Dendrimer-Based Pharmaceuticals

Dendrimer conjugates for pharmaceutical development are capable of enhancing the local delivery of cytotoxic drugs. The ability to conjugate different targeting ligands to the dendrimer allows for the cytotoxic drug to be focused at the intended target cell while minimizing collateral damage in normal cells.

Dendrimers offer several advantages over other polymer conjugates by creating a better defined, more monodisperse therapeutic scaffold. Toxicity from the dendrimer, targeted and nonspecific, is not only dependent upon the number of targeting and therapeutic ligands conjugated, but can be influenced by the repeating building blocks that grow the dendrimer, the dendrimer generation, as well as the surface termination.


Overview: Risk Management of Nanomaterials

Nanotechnology has become the focus of a large amount of scientific, political, and financial interest. Limited information on the exposure to nanomaterials is available, with only a few occupational exposure studies having been performed. While laboratory animal studies on the biological effects of some nanomaterials have been published, no epidemiological studies have been reported to date.

This lack of data on exposure and human health effects hinders risk assessment of these materials. As the use of nanomaterials increases rapidly, it is of vital importance that the risk assessment community understands the complexities of the issues surrounding the manufacture, use and disposal of nanomaterials, the potential of environmental and occupational exposure to human populations, as well as adverse health outcomes. For this to happen, it is in many ways necessary for the scientific community to also understand what questions risk assessors need to ask, and what research will best answer them.

Risk management of nanomaterials requires more information as to the human and ecological effects of exposure to various nanomaterials. At this time, there are no specific regulations for nanomaterials, but a few efforts to include nanomaterials under existing environmental regulations have begun. The purpose of this article is to describe the potential regulations for nanomaterials, and the current issues related to the risk assessment of nanomaterials.


A generalized schematic of the ways in which a nanoparticle may be targeted, made biocompatible, and carry payloads such as drugs or contrast inducing materials.

Advanced Review: Multifunctional Imaging Nanoprobes

Multifunctional imaging nanoprobes have proven to be of great value in the research of pathological processes, as well as the assessment of the delivery, fate, and therapeutic potential of encapsulated drugs. Moreover, such probes may potentially support therapy schemes by the exploitation of their own physical properties, e.g., through thermal ablation.

This review will present four classes of nanoparticulate imaging probes used in this area: multifunctional probes (1) that can be tracked with at least three different and complementary imaging techniques, (2) that carry a drug and have bimodal imaging properties, (3) that are employed for nucleic acid delivery and imaging, and (4) imaging probes with capabilities that can be used for thermal ablation.

Real-Time Formulation and Development



By Peter Scholes, PhD

Translational pharmaceutics enables rapid drug product optimization

In first-in-human (FIH) studies, the oral drug product used is generally relatively simple, such as active pharmaceutical ingredient (API) in solution, suspension, or capsule. The choice is typically driven by physicochemical properties of the API, or, in many cases, the formulation type favored by the sponsor company. In most early development projects, therefore, a new chemical entity (NCE) must be transitioned into a more suitable form such as formulated API in a capsule or tablet that can deliver reliable and reproducible exposure to achieve the target product or pharmacokinetic (PK) profile.

NCEs emerging from industry’s research and development pipeline, however, are often characterized by poor solubility or permeability, presenting significant challenges to oral formulation development and optimization. The resulting suboptimal PK behavior often compromises the outcomes of early clinical studies. Furthermore, for oral medicines, the competitive nature of the pharmaceutical industry inevitably means that a once- or twice-daily dosing regimen is crucial to success.

A key target of the early development team, therefore, is to demonstrate proof of safety and tolerability in addition to providing evidence of efficacy. The identified drug product must also be suitable for downstream development, scale-up, and commercialization. All of this needs to be achieved in a timely and cost-effective manner.

Table 1. Formulation Selection Can Result in Flawed Outcomes: Consideration of formulation-limited outcomes, along with proactive development and approval of backup delivery systems and flexible clinical protocols, can mitigate risks.
IMAGE COURTESY OF QUOTIENT BIORESEARCH

Typical issues the development team must address include adverse events associated with peak plasma concentrations (Cmax) and too short a half-life, as well as apparent nonlinear, dose-limiting, and highly variable oral bioavailability. Due to the constraints of conventional formulation development processes, it is not possible to respond to these emerging data sets within a clinical protocol. The development program must pause while the formulation is refined to deliver a PK profile consistent with the target product profile. Inevitably, this pause delays the pivotal proof-of-concept (POC) studies and increases development expenditure.

Translational pharmaceutics is a new approach specifically designed to anticipate such problems and increase flexibility within a protocol to address formulation and drug delivery challenges as they arise. Translational pharmaceutics offers a new development paradigm in which the screening and selection of candidate drug products is driven on the basis of human data. The clinical data obtained from one candidate drug product can inform the real-time manufacture and dosing of the next candidate within a 10- to 14-day cycle time.

Conventional Early Development

Rapid entry into man is typically supported by rudimentary FIH formulations such as “drug in bottle” or “drug in capsule,” which provide dosing flexibility and limit upfront pharmaceutical development investment. If study objectives are not met because of biopharmaceutical problems, however, overall project progression will be delayed while an alternative formulation strategy is developed to increase exposure and enable definition of the maximum tolerated dose (MTD). Alternatively, if the molecule is found to have a suboptimal half-life, a modified-release (MR) solid dosage form appropriate for demonstrating POC and initiating Phase 2 will be required.

The alternative to using a simple FIH formulation is upfront investment in a tablet dosage form for the FIH trial to expedite downstream development, although this carries the risk of restricting flexibility and can stall the program if a safety, PK, or pharmacodynamic (PD) event is encountered. A new approach is required to address the critical questions in early development:

  • What drug product is most appropriate for the FIH study?
  • How can the FIH to POC timeline be accelerated by proactively addressing pharmaceutics issues?
  • How can the transition to a drug product amenable to full development be expedited?
  • How can pharmaceutics “flexibility” be built into clinical protocols to enable real-time responses to emerging scientific data?

Translational Pharmaceutics

IMAGE COURTESY OF QUOTIENT BIORESEARCH
Figure 1. An Early Product Development Approach: The use of translational pharmaceutics can eliminate several non-value-added steps in the traditional chemistry and manufacturing controls early development cycle.

Translational pharmaceutics is an early product development approach based upon the integration of formulation development, pharmaceutical analysis, good manufacturing practices (GMP), and good clinical practices (GCP) testing facilities and workflows. This approach can eliminate several non-value-added steps in the traditional chemistry and manufacturing controls (CMC) early development cycle, arising from the need to transfer knowledge, products, and processes between functional and geographical silos in the development, manufacture, and dosing of the drug product (see Figure 1).

A translational pharmaceutics platform offers a rapid and seamless manufacturing-to-clinic transfer of drug products, often within 24 hours of dosing. Besides delivering specific time and cost savings, the platform’s ability to enhance flexibility and precision within early clinical studies can have profound benefits on the overall development program.

Translational pharmaceutics also enables limitations of the current early development process to be addressed through a number of activities:

  • an upfront what-if assessment of potential study outcomes;
  • development of alternative formulations capable of addressing major adverse events or PK risks;
  • regulatory and ethics approvals for flexible clinical protocols and associated CMC algorithms;
  • operational capability to respond rapidly in real time to interim safety, PK, or PD data; and
  • selection, manufacture, and dosing of alternative formulation compositions.

Benefits are realized through the ability to respond in real time during the GMP manufacture of investigational medicinal products for human dosing, allowing clinical data from one dosing period to drive the formulation selection for the next.

These concepts can be applied at all key stages within the FIH-to-POC development cycle to increase both speed and precision in the generation of critical decision-making data.

IMAGE COURTESY OF QUOTIENT BIORESEARCH
Table 2. Challenges That Could Arise From a First-in-Human Study: Even with acceptable systemic exposure from the first-in-human study, several challenges may arise, all requiring a change in formulation strategy.

Flexible FIH Formulation Strategy

The automatic selection of drug-in-capsule or drug-in-bottle formulation systems for FIH oral studies can bring about several unsuccessful study outcomes (see Table 1). Some of these occurrences can be predicted from available physicochemical or preclinical data, while others only become evident upon human administration. Upfront consideration of such formulation-limited outcomes, along with proactive development and approval of backup delivery systems and flexible clinical protocols, can mitigate these risks.

For example, a drug that is a weak base risks the potential for re-precipitation upon gastric emptying if it is administered as a particulate or solution formulation. This scenario can give rise to solubility-limited absorption and, hence, reduced exposure, preventing the definition of the MTD. Anticipation of this outcome and the development and approval of a backup (e.g., lipid-based) solubilized system could allow for a rapid switch within the clinical study based on interim analyses if this risk became evident during human dosing.

Alternatively, completion of the single ascending dose study may be restricted by Cmax-related side effects. Having the flexibility to transfer subjects immediately to drug products that enable the administration of divided doses—a sipping protocol—could also avoid delays in the clinical program.

Other contingencies to the outcomes indicated in Table 1 would, for example, be a back-up encapsulated formulation with the risk of nausea or vomiting, or an option to vary the fill weight or size of capsules to avoid the need for administration of multiple units. All of these strategies depend not only on upfront contingency planning and approval of flexible CMC options and clinical protocols, but also on the utilization of the translational pharmaceutics platform to enable implementation and delivery of formulation changes within the clinical study between cohorts in response to the interim safety, PK, or PD data.

Identification of POC Formulations

The benefits of a translational pharmaceutics approach can be realized further in facilitating the transition from the FIH to an optimal, scalable oral formulation suitable for demonstrating POC. Even with acceptable systemic exposure from the FIH study, several challenges may arise, all requiring a change in formulation strategy to include some element of MR technology (see Table 2).

Successful identification of an optimal MR formulation composition can be enhanced through a flexible CMC strategy to allow selection of formulation compositions in response to interim PK data. The complementary and innovative use of formulation design space concepts provides further benefits, whereby regulatory approval of ranges in the levels of critical functional excipients can be exploited by real-time manufacture and dosing within crossover study designs. This provides the opportunity for iterative formulation development based on interim human PK data.1

A translational pharmaceutics approach enables flexibility and precision to be built into early clinical programs while reducing development time and cost. In contrast to conventional approaches, this process enables real-time responses to interim safety, PK, or PD clinical data to drive selection of the formulation type or composition for the next dosing period, ensuring provision of fit-for-purpose formulations. Confirmed benefits include the ability to enhance speed, success, and value from pivotal Phase 1 clinical studies, whether by ensuring exposure that is satisfactory to demonstrate safety and tolerability in a FIH study or by expediting a formulation transition to a drug product capable of demonstrating POC and being suitable for full development.

Enhanced Skin Permeation of Naltrexone by Pulsed Electromagnetic Fields in Human Skin in Vitro


The aim of the present study was to evaluate the skin permeation of naltrexone (NTX) under the influence of a pulsed electromagnetic field (PEMF). The permeation of NTX across human epidermis and a silicone membrane in vitro was monitored during and after application of the PEMF and compared to passive application. Enhancement ratios of NTX human epidermis permeation by PEMF over passive diffusion, calculated based on the AUC of cumulative NTX permeation to the receptor compartment versus time for 0-4 h, 4-8 h, and over the entire experiment (0-8 h), were 6.52, 5.25, and 5.66, respectively.
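The enhancement ratios reported above are ratios of areas under the cumulative-permeation-versus-time curves. A minimal sketch of that arithmetic is shown below, using the trapezoidal rule; note that the sampling times and permeation values here are hypothetical placeholders for illustration, not the study’s data.

```python
# Sketch of an AUC-based enhancement-ratio calculation.
# All numbers below are illustrative placeholders, not study data.

def trapezoid_auc(times, values):
    """Trapezoidal-rule area under a curve sampled at (times, values)."""
    return sum(
        0.5 * (values[i] + values[i + 1]) * (times[i + 1] - times[i])
        for i in range(len(times) - 1)
    )

def enhancement_ratio(times, pemf, passive):
    """Ratio of AUCs (PEMF over passive) for the same sampling times."""
    return trapezoid_auc(times, pemf) / trapezoid_auc(times, passive)

# Hypothetical cumulative permeation (e.g., ug/cm^2) at hourly samples, 0-4 h
t = [0, 1, 2, 3, 4]
pemf_perm = [0.0, 1.2, 2.8, 4.9, 7.5]
passive_perm = [0.0, 0.2, 0.5, 0.9, 1.4]

print(round(enhancement_ratio(t, pemf_perm, passive_perm), 2))  # → 5.5
```

The same computation restricted to the 0-4 h, 4-8 h, and 0-8 h windows would yield the three windowed ratios quoted in the abstract.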

Observation of the curve indicated an initial enhancement of NTX permeation compared to passive delivery whilst the PEMF was active (0-4 h). This was followed by a secondary phase after termination of PEMF energy (4-8 h) in which there was a steady increase in NTX permeation. No significant enhancement of NTX penetration across silicone membrane occurred with PEMF application in comparison to passively applied NTX.

In a preliminary experiment, PEMF enhanced the penetration of 10-nm gold nanoparticles through the stratum corneum, as visualized by multiphoton microscopy. This suggests that the channels through which the nanoparticles move must be larger than the 10-nm diameter of these rigid particles.

Particle Shape: A New Design Parameter for Passive Targeting in Splenotropic Drug Delivery


The role of particle size and surface modification on the biodistribution of nanocarriers is widely reported. We report for the first time the role of nanoparticle shape on biodistribution. Our study demonstrates that irregular-shaped polymer lipid nanoparticles (LIPOMER) evade Kupffer cells and localize in the spleen. We also demonstrate the macrophage-evading characteristic of the irregular-shaped LIPOMER. Our results suggest particle shape as an important tool for passive targeting of nanocarriers in splenotropic drug delivery.

Devarajan PV, Jindal AB, Patil RR, Mulla F, Gaikwad RV, Samad A. Particle shape: a new design parameter for passive targeting in splenotropic drug delivery. J Pharm Sci. 2010; 99: 2576-2581. Correspondence to Padma V. Devarajan, Institute of Chemical Technology, Mumbai, India, at pvdevarajan@gmail.com or 912224145616

Delivery of Therapeutic Proteins


The safety and efficacy of protein therapeutics are limited by three interrelated pharmaceutical issues: in vitro and in vivo instability, immunogenicity, and short half-lives. Novel drug modifications for overcoming these issues are under investigation and include covalent attachment of poly(ethylene glycol) (PEG), polysialic acid, or glycolic acid, as well as new formulations containing nanoparticulate or colloidal systems (e.g., liposomes, polymeric microspheres, polymeric nanoparticles). Such strategies have the potential to yield next-generation protein therapeutics. This review includes a general discussion of these delivery approaches.

Patent cliff, other factors force industry to rethink itself


Pharma Reformulates

It’s not a secret. Numerous blockbuster drugs are about to lose patent exclusivity, and revenue at big pharma is projected to fall off at a rate of 20% a year for the next three years. What is less known, at least by the increasingly pharma-skeptical public at large, is that innovation is being restructured and is alive and well—it just may not look like your father’s Oldsmobile.

“The largest selling drug in the world today is Lipitor,” said G. Steven Burrill, founder and CEO of Burrill & Company, a merchant bank in San Francisco specializing in the life sciences. He posits that the enormous revenue generated by the sale of the drug—some $14 billion annually—illustrates why the industry approach of building a company around potential blockbusters was all quite reasonable at the time. “The model was discover on one end, develop, manufacture, market, and distribute on the other—so you can think of it vertically.” And for years it worked. “Well, that model is totally flawed today.”

Burrill takes the industry apart and looks at the pieces, first at the broken bits. He then suggests a future direction for what remains. In the beginning, pharma was the land of discovery. “But discovery is ubiquitous now—we’ve got hundreds of small companies and people doing discovery all over the world.” It’s not realistic, given the ready availability of technology and life science datasets, for discovery to have a favored venue.

Moving on to the craft of development, “We have an entire industry in place to do this now,” said Burrill, referring to contract research organizations. Their use by big pharma is already commonplace and growing. And manufacturing? Obviously, there’s China, India, and soon, Eastern Europe to do the work at lower cost. The one piece left that Burrill acknowledges as the strong suit of big pharma is distribution, which, it could be argued, is the business of business: getting a product to the consumer at the lowest possible internal cost. “What you see now in big pharma is that they’re disintegrating most of the things that historically gave them strength, and they’re looking for new business models.”

Burrill sees the strongest evidence of this shift in the executive suites. “Years ago, big pharma was led by guys that moved up through the R&D organization. … Just 10 or 15 years ago these guys became much more sales and marketing oriented, strategic kind of guys.” For instance, Jeff Kindler, the present CEO of Pfizer, is a Harvard Law graduate and came to his current position by way of McDonald’s. Pfizer first hired him as general counsel, where one can easily assume he often focused on patent protection.

Back to the Drawing Board

Figure 1. Spending Vs. New Drug Approvals
IMAGE COURTESY OF KINETA INC.
*Inflation adjusted per U.S. Bureau of Labor Statistics to 2008 dollars.

The drive for new models has produced a purchasing war among giants: Roche captured Genentech, Merck merged with Schering-Plough, Pfizer grabbed Wyeth, Novartis set its sights on eye-care giant Alcon, and the snatching up of morsels continues. Sanofi-Aventis acquired BiPar Sciences, an innovator in poly ADP-ribose polymerase technology; Abbott supplemented its pipeline with Facet Biotech … and many more.

With the dust far from settled, Pam Narang, PhD, senior analyst for the global business information provider Datamonitor, took a look at what has actually been accomplished. Dr. Narang considered the merger and acquisition (M&A) trends of big pharma over the last eight years, first by noting the spike. “We see a jump from six deals in 2000 to 14 in 2008.” Diversification was a big part of this move.

So, the question is this: Is diversification away from pharmaceuticals a good way to deal with the patent cliff? “We see about 47% of all these M&As were actually involving companies that were not pure pharma companies,” said Dr. Narang. Alcon is a good example. The idea was to bring in products not subject to patent expiration.

The surprise for Dr. Narang was the variable profitability of a mega-merger based on brand. “I found a distinct, positive correlation between branded pharma focus and operating margins. This suggests that companies which are highly pharma focused are just more profitable.” It further implies that big pharma should seek to increase branded focus either by acquiring companies that are more branded than the buyer or by divesting assets that are not branded pharma. Dr. Narang said Bristol-Myers Squibb is a good example of a company that’s done this. “It’s been divesting its non-core assets and also going out there and buying into branded pharma companies, bringing those products in house and essentially making themselves more branded.”

M&A done in the right way, then, can boost or retain profitability. “It gets there in solving the pipeline issue, because you get marketed products; they don’t have to worry about the FDA [U.S. Food and Drug Administration] risk or the development risk, and they can take it further into the current indication,” said Mark Monane, MD, senior analyst for biotechnology and life sciences, Needham & Company, New York. “That’s becoming a favored level of maturity for big pharma.”

For bringing up new, novel products, Dr. Monane sees pharma’s strengths in its recognition of outside talent and its ability to do large clinical trials. To the first point, pharma is working with biotech, mindful that the two corporate cultures should remain separate: Witness how Genentech has retained its identity, though it is now part of Roche. “Another good example is Regeneron and Sanofi,” said Dr. Monane. “They have a monster deal with Sanofi to do antibody drug development. Regeneron is doing the work and Sanofi’s paying for it, but the bottom line is they want to retain that culture.”

There are also the good fits of capability—pharma does the big trials. “It’s very unlikely that a biotech company can develop, say, a diabetes drug on their own. That’s why Amylin has a deal with Eli Lilly for exenatide.” Cardiome has partnered with Astellas in a huge trial for its drug to treat atrial fibrillation. Portola Pharmaceuticals, in partnership with Merck, has a large trial for its factor Xa inhibitor, betrixaban, a drug with enormous—and profitable—potential as a replacement for warfarin.

Dr. Monane also notes the shift in targets; the low hanging fruit of easily understood and manipulated pathologies like high blood pressure has been picked. Much of what remains—high stakes, high dollar puzzles—is in oncology. And the chase is on. “There’s a lot more focus in pharma to look at cancer agents, hoping to develop high-priced, premium drugs in areas of unmet need.” Pfizer, in particular, has recently established and heavily invested in an oncology franchise, and others are following suit.

Dr. Monane thinks the field is large enough to accommodate any number of players, but he’s concerned about the finite nature of the game. “The undertold story in clinical development is the lack of patients. The number of adults in the United States volunteering for trials is only 5% of the patient population.” That 5% may represent 50,000 individuals, but with hundreds of trials underway at any given time, it’s just not enough. “That’s why the trials are going to Europe, to Russia, Israel, China. … This is translating into a huge problem in drug development.”

We’re looking to go upstream and work with biotech and academia on the basic research. I have a worldwide network of scouts who are completely non-transactional. This is a highly pedigreed group. Almost all are former bench scientists or MDs.
—Greg Wiederrecht, PhD, Merck

Bridging Toward Innovation

Charles Magness, PhD, president and CEO of Kineta Inc., a Seattle biotech firm formed in 2007, identifies another ongoing issue in drug development. “We think the biggest hurdle is the translational gap between the earlier academic-type discoveries and advancing a drug into clinical trials. So, that’s exactly where we’re focusing our business.” Dr. Magness has used this approach effectively before, having founded Illumigen Biosciences with his current Kineta partner, Shawn Iadonato, PhD. After advancing a lead compound for hepatitis C, the company was acquired by Cubist Pharmaceuticals in 2007.

“If you look at the structure and number of deals over the last decade, you can see this strategy evolving,” said Dr. Magness. It’s been apparent for some time that big pharma is no longer efficient at early stage research. “They realize it, and they’re moving away from the old model and outsourcing innovation. Almost everybody is doing it.”

Dr. Magness’ latest entry at Kineta is a suite of ShK analogs: highly specific peptide potassium-channel inhibitors derived from the venom of the Caribbean sea anemone. ShK, which is nontoxic to mammals, has shown immune response modulating activity in animal models of multiple sclerosis and rheumatoid arthritis. This makes the molecules attractive not just for their potential applications in lucrative markets, but also because Dr. Magness believes that, as a biologic, ShK has a better chance of making it through to final regulatory approval.

“Historically, there’s a greater likelihood that a biologic can get approved because you know more about the molecule. You generally have a pretty good idea what it’s doing in the parent organism.” The possibility of having off-target toxicity in the clinic is far less likely if you already know the mechanism of action. This perception goes a long way to explain why big pharma has recently been pushing hard to acquire biologics. “If you file an IND [Investigational New Drug application] on a biologic today you have a 33% chance of getting ultimate approval from the FDA, whereas if you file on a small molecule, looking back 10 years, the average rate of regulatory success is about seven percent.” As for the ShK program, Dr. Magness is looking to partner after navigating the hurdles of Phase 1 dose safety trials—positioning it to move into some big pharma pipeline.

What you see now in big pharma is that they’re disintegrating most of the things that historically gave them strength, and they’re looking for new business models.
—G. Steven Burrill, Burrill & Company

The Body in the Room

Discussing the patent cliff directly with big pharma is problematic; several of the largest players contacted by PFQ declined to comment, perhaps uneasy about how poorly the public perceives their current efforts. However, Greg Wiederrecht, PhD, vice president and head of external scientific affairs at Merck & Co. Inc. in Whitehouse Station, N.J., was eager to talk about Merck’s prospects, and, in fact, about prospecting.

“We have a new group called external development and preclinical sciences—it’s not a licensing group. These guys are charged with bringing in just one particular type of collaboration, and that’s basic research.” Dr. Wiederrecht, who comes from the bench himself, sees the offerings of early development companies like Kineta as all well and good, and will pursue those opportunities of interest. But, as he puts it, “These things are well picked over. We’re looking to go upstream and work with biotech and academia on the basic research. I have a worldwide network of scouts who are completely non-transactional. This is a highly pedigreed group. Almost all are former bench scientists or MDs. Most come from Merck Research Laboratories.” Dr. Wiederrecht’s team spends their days meeting with technology companies; meeting with biotech, small and mid-sized; and attending basic science meetings to identify promise at the earliest opportunity.

One recent find resulted in the signing of a collaborative deal with researcher Laurie Glimcher, MD, of Harvard Medical School, creating an opportunity to exploit advances toward new therapies for osteoporosis.

Though unrelated to Dr. Wiederrecht’s initiative, collaboration has also recently been established among the giants. This February, Merck, Eli Lilly, and Pfizer launched the Asian Cancer Research Group Inc., an independent, not-for-profit company established to accelerate research in cancers that genetically differ from, or are more prevalent than, those seen in the West. The initial focus will be gastric and lung cancers. It is hoped that combined forces and shared datasets will result, sooner rather than later, in new drug candidates for this vast emerging market.

Thursday, July 22, 2010

The Role of Glasses in Aseptic Production: A Detail Often Ignored

Hood, suit, faceplate, cover shoes, gloves: these are the necessary items of clothing when operating in A- and B-grade areas. The principal purpose of protective clothing is to minimize the risk of microbiological contamination caused by personnel. Thus, protective garments should not release fibers and must be able to contain particles produced and released by the body.

But how can we ensure that protective garments are not themselves vehicles of contamination? And how can we ensure that cleaning and sterilization processes are effective and do not alter the characteristics of the garments? We attempted to answer these questions, concentrating our attention mainly on glasses (in general, on individual protection devices usually referred to as masks).

Because glasses are not disposable, we must consider that stress conditions such as repeated sterilizations may compromise their use. The glasses may lose functionality and the components might be damaged, resulting in the release of contaminating material.


Figure 1: The type of glasses evaluated
We prepared a study protocol to help verify the following aspects:
  • the glasses' ability to endure repeated sterilization processes without suffering alterations;
  • the ability of the sterilization process to achieve a 12-log reduction of the starting microbiological load.

We chose to verify only the steam-sterilization cycle because it is the process most commonly used in the pharmaceutical industry, although glasses also are sterilized using other methods (γ-rays, ethylene oxide, etc.).


Table I: Characteristics of the glasses used.
For our tests, we used glasses (see Figure 1) with the characteristics outlined in Table I. Tests were conducted to verify that it was possible to subject glasses to repeated sterilization cycles without any alterations that could compromise their usefulness. Glasses in the trial were subjected to repeated steam sterilization cycles (temperature = 121 ± 1 °C, time = 30 min) according to the outline in Table II. At the end of the fixed sterilization cycles, the glasses were evaluated for adherence to the facial conformation, lens transmission, and particle release.

The effectiveness of the sterilization process is a probabilistic function of the number of microorganisms present, the thermal resistance of those microorganisms, and the quantity of heat supplied. Therefore, the quantity of heat necessary to attain the 12-log reduction in the microorganism population that ensures sterility depends entirely on the thermal resistance of the microorganisms present.


Table II: Glasses subjected to steam sterilization cycles.
The thermal resistance of the microorganisms was evaluated by determining the D value, defined as the time necessary to reduce the population of microorganisms present by 90% (1 log) under specific sterilization conditions. Even if the sterilization cycle recommended by the manufacturer is a typical overkill cycle, it is necessary to evaluate the D value of the microorganism in a trial, because this value depends strongly on possible interactions between the microorganisms and the material on which they are found.
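The D-value arithmetic described above can be sketched numerically. Since each D minutes of exposure at the reference temperature reduces the population by one log, a 12-log reduction requires 12 × D minutes of equivalent exposure. The D value used below (1 minute at 121 °C) is a hypothetical illustration only; as noted above, the real value depends on the microorganism/material interaction and must be measured experimentally.

```python
# Sketch of the D-value / log-reduction arithmetic.
# d_value_min = 1.0 is a hypothetical illustration, not a measured value.

def exposure_time_for_reduction(d_value_min, log_reduction):
    """Minutes at the reference temperature to achieve the given log reduction."""
    return d_value_min * log_reduction

def surviving_population(n0, d_value_min, exposure_min):
    """Survivors after exposure: N = N0 * 10^(-t/D)."""
    return n0 * 10 ** (-exposure_min / d_value_min)

d = 1.0  # hypothetical D121 value, minutes (1 log reduction per minute)
print(exposure_time_for_reduction(d, 12))  # minutes needed for a 12-log kill
print(surviving_population(1e6, d, 30.0))  # survivors after the 30-min cycle
```

With this illustrative D value, the 30-minute cycle used in the trial delivers far more than the 12 minutes required, which is what makes it an overkill cycle.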