Wednesday, October 19, 2016

Mass Measurement Precision of Small Objects in Pharmaceutical Production


Unlocking the regulatory advantages of the new generation of contactless measurement systems

By Arek Druzdzel and Arne Wieneke, Aiger Group AG, Zug, Switzerland
Traditional methods of weight measurement are based on comparison with standards accepted in designated areas. Over the past 200 years, the kilogram became such a standard and a metric base unit [1]. In the International System of Units (SI), it is defined as the standard of mass and serves as the basis for weight measurements worldwide.
Nowadays, the standard kilogram is expected to yield the same reading on high-precision weighing devices all over the world. As long as only single measurements under laboratory conditions are at stake, calibrating state-of-the-art load cells against a standard mass is sufficiently precise, as it allows highly repeatable and accurate measurements.
However, load-cell maintenance and calibration become a disadvantage when fast, precise and accurate measurements of single milligrams and micrograms are required in-process during production. It is a known fact that, under such conditions, scales have their limitations, and correct adherence to regulations and production targets cannot always be ensured at the same time.
First, scales with load cells require adjustment to their geographical location; otherwise the measured weight carries an error that depends on the actual location. In fact, scales do not measure mass but weight, which is then translated into mass by taking the local gravitational force into account.
[Diagram 1: Variation in gravitational acceleration around the world at a constant altitude of 100 meters]
The mass-reading error is caused by variation in the gravitational acceleration, and hence in the resulting gravity force (weight), which is not constant around the world. For an object of a given constant mass, its actual weight depends on both the latitude and the altitude of the location of the balance used for the measurement. Diagram 1 shows the variation in the gravitational acceleration around the world at a constant altitude of 100 meters.
The gravitational acceleration at the Equator amounts to approximately 9.78 m/s², while at the poles it is approximately 9.832 m/s², resulting in a discrepancy of 0.052 m/s², i.e., 0.53%.
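To make the scale of the effect concrete, here is a minimal Python sketch (illustrative values only) of what an uncompensated load cell would display when calibrated at one latitude and operated at another:

# Illustrative sketch: how latitude affects an uncompensated load-cell reading.
# A scale calibrated against a reference mass effectively stores the local
# gravitational acceleration g_cal; operated elsewhere, it displays
# m * g_local / g_cal instead of the true mass m.

G_EQUATOR = 9.780  # m/s^2, approximate value at the Equator
G_POLES = 9.832    # m/s^2, approximate value at the poles

def displayed_mass(true_mass_mg, g_local, g_cal):
    """Reading of a scale calibrated at g_cal and operated at g_local."""
    return true_mass_mg * g_local / g_cal

# A 100 mg dose weighed on a scale calibrated at the Equator, used near a pole:
reading = displayed_mass(100.0, g_local=G_POLES, g_cal=G_EQUATOR)
print(f"displayed: {reading:.3f} mg, error: {reading - 100.0:+.3f} mg")
# -> displayed: 100.532 mg, error: +0.532 mg (the 0.53% discrepancy above)

In the milligram range targeted by in-process control, an error of half a percent is far from negligible, which is why location-specific calibration is mandatory for load cells.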
Additionally, gravitational acceleration is affected by local altitude, the tilt of Earth’s rotational axis, precession, the equatorial bulge, etc. [2]. Gravity-related effects apply when, e.g., calibrating weight measurement devices to a mass standard; hence the more accurate and precise the required measurement, the more time and effort must go into both calibration and the actual measurement. Furthermore, the precision and accuracy of scales and load cells drift with time since the last calibration, as they depend on the elastic properties of materials in the load cells, environmental conditions, and other components of the weighing system [4, 5].
This discrepancy implies variation in weight readings when an object of a given mass is measured at various latitudes and altitudes. Scales compensate for this significant error through pre-calibration against reference masses. Evidently, this calibration becomes critical when small masses, e.g., in the milligram range, must be measured quickly, leading to more frequent calibration to ensure reliable measurement in line with regulations.
Moving away from the perfect laboratory environment to the real production environment, more factors influence the weight measurement of small masses. Machines vibrate, which causes slow measurements and/or potentially incorrect readings; potent products require contained handling inside RABS or isolators, which require constant ventilation; products may vary in water content during processing even though the dry weight is what matters; and so on. The accumulation of these influencing factors limits the usable accuracy range of load cells, or may not permit determining small masses accurately, precisely and quickly at all.
Furthermore, closed processes add and/or mix substances in a way that does not permit monitoring the correct execution of the adding or mixing, because the process is closed or continuous. Such applications may not allow the use of load cells at all, but only offline sampling or indirect estimation of weight. In particular, for continuous manufacturing, offline monitoring of small weights is not an option, because it introduces a time delay. In the case of frequent sampling, “offline” is often described as “inline,” though it is not “online.”
Removing gravity and ambience from the equation
A solution to the above-described discrepancies is a highly time-stable, gravity-independent measurement system capable of measuring the mass of objects online, i.e., the amount of substance rather than the weight, of fast-moving objects such as capsules, tablets or powder. Such measurements would be identical around the world and independent of the influencing factors, allowing not only tight, online monitoring of substances but also direct data comparison.
Over the past years, we have developed such a novel system [3] and successfully installed it in a number of factories around the world. The system uses sensors emitting a local energy field that interacts directly with the substance passing through the field, thereby altering the sensor’s output signal. As a result of this field-substance interaction, the initial (empty-sensor) signal changes correspondingly, and the modified signal is equivalent to the amount of substance passing through the measurement system. Once measured, the signal can be instantly converted to mass or to the local weight. Knowing exactly the quantity of substance dosed, i.e., the mass of an object, the whole dosing-measuring system is automatically calibrated to the weight measurable in any region (location) where the product is destined, without a need for overdosing or a risk of underdosing.
As only the substance changes the output signal, the signal is ambience-independent, allowing for online, positive process control. And because sensor-signal data processing is fast enough, it allows closed-loop control.
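The sensing technique itself is proprietary, but the calibration logic described above can be illustrated with a deliberately simplified sketch. The linear signal-to-mass model and every name below are assumptions made for illustration, not the actual algorithm:

# Hypothetical sketch of a one-point, push-button calibration for a
# signal-based mass sensor. Assumes the output-signal change is linear in
# the amount of substance; the actual (patented) system may differ.

def calibrate(reference_mass_mg, signal_with_ref, signal_empty):
    """Signal-to-mass conversion factor from a single known reference dose."""
    return reference_mass_mg / (signal_with_ref - signal_empty)

def signal_to_mass(signal, signal_empty, factor):
    """Convert a raw sensor signal to mass, independent of local gravity."""
    return (signal - signal_empty) * factor

def mass_to_local_weight(mass_kg, g_local=9.80665):
    """If a local weight is ever needed, it follows from mass and local g (N)."""
    return mass_kg * g_local

# Calibrate once against a known 100 mg reference, then convert readings:
k = calibrate(reference_mass_mg=100.0, signal_with_ref=1.85, signal_empty=0.35)
print(f"{signal_to_mass(1.10, signal_empty=0.35, factor=k):.1f} mg")  # -> 50.0 mg

Because the conversion never involves g, the same calibration is valid at any latitude or altitude.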
A new industrial application
An industrial version of such a system has been built and verified with a variety of small objects, including capsules in the size range 4 to 00 and tablets, as well as with micro-dosing of powder from 1 mg to 500 mg into vials and syringes. The system requires just a single push-button calibration that, once done at a location, does not need any further recalibration services. The system has demonstrated stable precision and accuracy within one sigma ranging from 0.25% to 3%, the dispersion depending mainly on material structure and morphology.
Most important for production environments, the proposed system ensures quick, precise and accurate mass measurement of a very wide range of objects, with no need for major system adjustment, special environmental conditions, leveling, isolation from vibration and ventilation, or prolonged measuring time. The system has no moving or flexing elements and is therefore free from the disadvantages associated with common weight measurement systems.
With the pharmaceutical industry’s transition from batch production to continuous manufacturing already under way, such a system is an important and highly anticipated tool for reliably monitoring quantities of the smallest ingredients and of finished products.
There are well-known shortcomings of customary online weight measurement systems, especially for industrial applications in the pharmaceutical industry. However, alternative systems for online, precise and accurate measurement of small masses are available.
The gravity-independent and highly effective mass measurement system facilitates compliance with existing drug quality regulations as well as with the industry safety directives and guidelines for cGMP, QbD and PAT. It provides the capability to reduce cost, but most importantly it enables extending product serialization down to the formulation and component level and streamlines industrialization processes.
References
  1. Kilogram – the base unit of mass, Wikipedia
  2. Earth’s rotation, Wikipedia
  3. Patent application GB2512026A
  4. United States Pharmacopeia (USP), General Chapters <41> and <1251>
  5. European directive 2009/23/EC

Transparency: A Vital Ingredient for Safe, Effective Medicines


By making the cloud the heart of their networks, companies are adapting quality systems to address supply chain risks

By Michael Jovanis, Veeva Systems
There’s no question that pharmaceutical and medical device production has become more complex, especially with the growing number of biologics and combination products. These products come with their own set of complicated production challenges and supply chain risks. As life sciences companies outsource to better scale and contain costs, complexity only increases. With more manufacturing functions externalized — and outside of direct oversight — ensuring continuity of processes, regulatory compliance, and accurate, high-quality data from start to finish is difficult.
“It used to be that if a company did any outsourcing, it was primarily limited to sourcing raw materials,” said Daniel Matlis, founder and president of Axendia, a life sciences industry analyst firm. “But as a compounding number of companies began to go outside their four walls, they discovered opportunities to reduce cost and shed non-core competencies. Now, many aspects of manufacturing are handled by contract organizations.”

As the supply chain expands, it’s absolutely essential for companies to maintain complete visibility with contract manufacturing organizations (CMOs) on good manufacturing practices (GMP), quality processes and data. Further, life sciences companies should make it standard operating procedure to carefully review GMP data generated by CMOs so they have the insights they need before approving batch lots for release. All too often, companies don’t have easy (or any) direct access to CMOs’ systems and so often skip this important step.
Demand for manufacturing outsourcing has grown steadily, from $800 million in 1998 to $2.5 billion in 2014, and is expected to reach $4.1 billion by 2019, according to a recent survey on contract manufacturing from High Tech Business Decisions. Today, the U.S. is the largest market for contract manufacturing worldwide. In total, 80 percent of the active pharmaceutical ingredients consumed in the U.S. originate in India and China, and about 40 percent of the finished products come from outside the U.S. Europe lags slightly behind, but growth is expected from developing regions such as Asia Pacific, with the Japanese market projected to register a compound annual growth rate of 13 percent.1
That’s the good news, but increased outsourcing also adds several degrees of complexity to an already complicated process. As life sciences companies relinquish a level of control over the manufacturing process in return for lower costs, they open the door to diminished transparency. Without worldwide supply chain visibility, quality can suffer. In fact, a number of popular prescription drug products manufactured in India were recently banned from U.S. importation due to quality concerns — these include acne treatment Accutane and the antibiotic Cipro. These issues continue to grow, driving the U.S. Food and Drug Administration to increase its local presence throughout the world.

“We now have offices and inspectors in New Delhi and Mumbai, India, as well as China, Latin America, South Africa and a growing cadre of international inspectors elsewhere,” said Dr. Margaret Hamburg, former commissioner of the FDA, during a 2014 interview. “We’re trying to achieve the same levels of inspection, enforcement and compliance that we would expect of any company manufacturing drugs for consumption by the American public,” Hamburg said.2
“Unfortunately for the life sciences industry, there have been a number of challenges associated with contract organizations, which has put a finer focus on the need for control and visibility into what happens at suppliers,” said Matlis. “Regulators are well aware of this situation.” In fact, of the 20 warning letters issued by the FDA Center for Drug Evaluation and Research (CDER) Office of Manufacturing Quality in 2015, more than half were related to cGMP.
Despite the government effort, manufacturers are still dealing with tragic manufacturing errors that are not always discovered until it’s too late. For example, Heparin, a popular blood thinner manufactured in China, was contaminated by someone upstream in the supply chain. A cheaper, incorrect active ingredient that tested just like the right active ingredient was substituted in the manufacturing process. The drug made it into the U.S. supply chain, causing a large number of patients to become ill.4
Vigilant supply chain visibility is the key. It provides the crucial system of checks and balances that often go missing when a company moves critical stages of its manufacturing process to outside vendors. As the supply chain becomes more complex and thus more risky, companies are finding that they need to adapt their quality systems to address these risks.

TROUBLE WITHOUT TRANSPARENCY
For life sciences companies, transparency means visibility into the manufacturing process, timely access to data and content, and an accurate audit trail of all activities. Often, companies do not learn about issues that can hinder the delivery of a finished product until it is too late, either because the supplier is trying to resolve the problem on its own or because the manufacturer is never made aware. Life sciences companies try to prevent problems by maintaining frequent communications with contracted organizations, often via phone or email, but this is not practical, sustainable or secure.3 When exchanges of vital information between companies and CMOs rely on paper-based and manual processes that involve many people reading and transcribing emails and faxes, errors are bound to occur. Just as problematic, there is no consolidated audit trail of these exchanges, making them nearly impossible to confirm and manage.
The troubling reality is that process gaps and inaccurate or incomplete data can lead to products that do not meet health authority standards for safety, purity or efficacy. In addition to threatening the safety of medications, poor data quality can create costly, time-consuming bottlenecks in the drug development process…and, ultimately, dangerous drug shortages.
The negative consequences of fragmented quality processes, poor data quality and lack of transparency are even more acute when a healthcare authority sends a warning letter to a company stating that its manufacturing processes, or those of its CMO, do not meet regulatory requirements. From 2007 to 2013, the number of warning letters issued by the FDA increased by 78 percent, with this number expected to increase in the coming years.4 After receiving an FDA Form 483 at the conclusion of an inspection, a company can continue manufacturing, but it is given specific instructions on steps it must take to correct problems. The FDA’s more severe Consent Decree letter can order a company to stop manufacturing altogether or order it to shut down a particular facility. Either notification usually requires that a manufacturer evaluate and revamp processes, resulting in devastating cost and time losses. In the end, it delays time to market — which can cost companies up to $8 million a day, according to various sources.
Not only can these warnings cause serious financial losses for life sciences companies, but also can dramatically erode public trust, impacting the brand and corporate reputation in confounding ways.
OUTSOURCE MANUFACTURING, NOT RESPONSIBILITY
Life sciences companies can outsource their processes, but not responsibility for quality. According to the International Council for Harmonization (ICH), “The pharmaceutical company is ultimately responsible to ensure processes are in place to assure the control of outsourced activities and quality of purchased materials.”5
The EU legislation for the pharmaceutical sector, guidance contained within EudraLex, reaffirms this standpoint: “The contract giver is ultimately responsible to ensure processes are in place to assure the control of outsourced activities.”6 And, in the U.S., the FDA regards contract facilities “as an extension of the manufacturer’s own facility” and therefore expects a company’s supplier network to perform as an extension of its own quality system. The FDA also states that “the owner’s quality unit is ultimately responsible for approving and rejecting drug product manufactured by the contract manufacturer.”7
The FDA is proposing to standardize quality metrics — defined as “objective measures of the quality of a product or process” — as a tool to evaluate drug manufacturers and compare quality results between similar companies and products. The FDA plans to capture data from manufacturers over a one-year period for lots attempted, lots rejected, lots with out-of-specification results, stability tests conducted, and the number of quality complaints. After the data is compiled, the FDA says that it will use this aggregated data to generate metrics to arrive at an objective measure of quality.⁸
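As an illustration only, since the agency’s methodology was still being finalized, such metrics could be derived from lot-level records roughly as follows (the field names and formulas here are hypothetical):

# Hypothetical aggregation of FDA-style quality metrics from lot records.
lots = [
    {"id": "A1", "rejected": False, "oos": False},
    {"id": "A2", "rejected": True,  "oos": True},
    {"id": "A3", "rejected": False, "oos": False},
    {"id": "A4", "rejected": False, "oos": True},
]
complaints = 3  # quality complaints received over the same one-year period

attempted = len(lots)
rejected = sum(lot["rejected"] for lot in lots)
oos = sum(lot["oos"] for lot in lots)

print(f"lot acceptance rate: {100 * (attempted - rejected) / attempted:.1f}%")  # 75.0%
print(f"OOS rate: {100 * oos / attempted:.1f}%")                                # 50.0%
print(f"complaints per lot: {complaints / attempted:.2f}")                      # 0.75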
“Now that companies are dealing with a global and outsourced environment, they need to truly characterize their products uniformly and to scientifically understand what are the critical-to-quality (CTQ) attributes essential to producing the desired outcomes. Just as important is the master data, which needs to be properly defined, curated and managed. Finally, everything — the documents, the data, the SOPs — needs to be totally transparent. This is the direction of regulators as well,” said Matlis.
Visibility is possible with the right technology. It starts with a hyper-connective network that links multiple partners in a fluid, cloud environment. Having continuous processes and timely access to data all along the supply chain fosters collaboration and transparency among the company, outsourced manufacturers and suppliers globally. This opens up the ability to measure processes, ensure compliance standards are met and quality standards are exceeded. Companies can then reduce risk, find anomalies or opportunities for improvement, and act upon them quickly.
Yet, while 90 percent of companies have implemented at least one quality management system, these systems often reside within, and are only accessible from, a company’s internal network.9 Such on-premise solutions do not easily allow appropriate levels of access to those who need it most, not least contract partners. As companies outsource more, everyone — brand owners, suppliers and CMOs — needs complete access and, therefore, visibility into the most current, accurate information available. This is data transparency, and it is the key to quality.
THE CLOUD MAKES QUALITY CLEAR
Using cloud-based technology to orchestrate drug development and manufacturing enables seamless processes and provides all stakeholders with access to accurate data and content in a single, authoritative system. It also enables partners to be a part of the process — offering input, contributing data and improving the entire operation. Most crucially, the cloud solution should extend enterprise-wide, from end to end, to ensure that all relevant data is captured reliably across the organization because there is just one source of truth.
The cloud gives life sciences companies the ability to extend visibility across all parts of the value chain, while at the same time enabling partners to access information they need to provide valuable services. Raw materials suppliers, CROs, CMOs, brokers and distributors can interact simultaneously under very controlled conditions that ensure up-to-date information is always available to those that need it — whenever, wherever, however they need it.

“Providing our partners with direct access to our validated, cloud-based quality system ensures we maintain a clear chain-of-custody on manufacturing documents such as batch records and audit reports,” explained Craig Gassman, associate director of regulatory operations at Karyopharm Therapeutics — a clinical-stage pharmaceutical company focused on discovery and development and subsequent commercialization of novel first-in-class drugs directed against nuclear transport and related targets for the treatment of cancer and other major diseases.
Modern solutions also enable life sciences companies to gain real-time visibility into the status of quality processes. Workflows can be set up to allow both internal staff and outsourced partners to view important data or content uploaded by either party all in one place — again, for a single source of truth. Seamless processes and comprehensive views prevent breaks in the audit trail that cause mistakes, quality control issues and even FDA observations. For example, the operations team can set alerts for people in the workflow and the entire team can see who performed tasks and when. This close collaboration is essential for efficiency and quality. In clinical trials, for instance, cloud applications speed the collection of all trial-related content, providing status updates throughout a study’s start-up, execution and close.
MANAGING PROCESSES, DATA
In manufacturing, with greater outsourcing of critical functions, companies are starting to rely on cloud solutions to manage quality processes and keep track of data received from contract test labs, manufacturers and other specialty partners. Key people can easily review executed batch records from anywhere before acceptance and release of product. Deviations and subsequent communications are also centrally tracked and visible to all parties. Increasing efficiency while improving compliance and quality results in fewer and faster inspections and a significant reduction in the risk of receiving warning letters or facing dreaded plant shutdowns.
“In this outsourced environment, it becomes very difficult to manage operations using ‘tribal knowledge,’ that is, bringing in a person or a group of people who really understand a step or process or molecule and getting their opinion to address an issue,” said Matlis. “Once a life sciences company establishes quality attributes critical for each product and the requisite data that needs to be managed, it must implement solutions that not only manage quality, but even more important, improve quality.”8
The key to solving these challenges is a shift in the underpinnings of the life sciences supply chain. It’s time to operate the business as a broader network that allows suppliers and partners to become directly tied into the central nervous system of the life sciences company. To do so effectively, organizations are making the cloud the heart of that network — and, with greater visibility, successfully retaking full responsibility for the quality of their products.

SOURCES
1. “Achieving Global Supply Chain Visibility, Control & Collaboration in Life Sciences: Regulatory Necessity, Business Imperative,” Axendia, 2010. p.7
2. “The Safety of Prescription Drugs Made Outside the U.S.,” The Diane Rehm Show, February 20, 2014.
3. “Measuring the Business Value of Data Quality,” Gartner, 2011. 
4. “FDA Warning Letters Rise 78% Over Six Years with Increase Expected in 2015,” The Pharma Letter, 2014. 
5. “ICH Harmonised Tripartite Guideline: Pharmaceutical Quality System Q10,” International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use, 2008. 
6. Eudralex Volume 4 Chapter 7, Outsourced Activities
7. 21 CFR 200.10
8. “Quality Metrics: What Does It Really Mean?” Contract Pharma, 2016. 
9. FY17-Mfg-Managing External Quality

Holding the Diagnostic Mirror

Only those pharma companies with enough courage and imagination to learn and apply lessons from other industry leaders can hope to break from the pack

By Thibaut Dedeurwaerder and Jonathan Tilley, McKinsey & Company
Pharma managers should learn to watch their operations differently, see opportunities for improvement on their shop floor, and learn from top-performing peers and from leaders in other industries how to capture these opportunities.

The pharmaceutical industry is lagging other industries in operational efficiency, as indicated by OEE ranges of 10-60 percent, up to six-month lead times, and other measures. Few pharma managers understand shop-floor operations and their potential for improvement, and few learn from their industry peers — even those whose performance is far superior. But even the best-performing pharmaceutical plant is miles away from the efficiency achieved at the average Toyota plant. Only those pharmaceutical companies and plants with enough courage and imagination to learn and apply lessons from other industry leaders can hope to break from the pack.
DIAGNOSING BY "GENCHI GENBUTSU"
When Taiichi Ohno, Toyota’s head of production engineering after the Second World War, visited Ford’s plants in Michigan, he may have been impressed by what he saw in the industry leader’s facilities. But beyond understanding the manufacturing system, he found many areas for improvement on the production line, such as leveling the pace of production and carrying a smaller work-in-progress inventory. His visit, and the knowledge he took back to Japan, show how looking at systems or processes with fresh eyes can reveal new insights at the most basic level. This is exactly what pharmaceutical companies need to do now to uncover opportunities for improvement in operations.
This type of visit offers several lessons. Years before Toyota codified its core manufacturing principles, Mr. Ohno had already found new approaches to looking at production processes. The principle was “Genchi Genbutsu” — go and see for yourself. Managers watch processes, in person, to understand the fundamentals of what adds value and what does not. This approach would later become one of Toyota’s most famous slogans, and it is fundamental to production process diagnostics.
No matter how good their information, pharma managers rarely find inefficiencies in operations behind a desk. Yet all too often, managers accept the data in front of them without challenge. Anybody who wants to know what really goes on in manufacturing, including people’s difficulties and daily worries, must go to the shop floor and look, listen, question and understand.
We find that many pharma managers get wrapped up in other areas and fail to understand how much value can be added. They spend too little time watching operations or meeting and talking with employees. People on the shop floor often tell us that managers seem uncomfortable during visits. Not sure how they should behave, what they should be looking for or asking, many managers are actually relieved when they can return to the safety of their offices.
WHAT SHOULD THEY BE LEARNING?
Pharmaceutical managers need to “learn to see” waste and variability in familiar processes, rather than just the barriers to change. Managers must ask “what would it take” rather than simply report “why we can’t do it.” Looking for waste and variability, especially in your own operations, requires courage. It also requires observation. In more than 50 plant walks and diagnostics we have conducted, we have seen pharma executives gain tremendous insights and benefits.
When we brought pharma managers to the shop floor, we sometimes found that less than half the equipment was running, and that multiple weeks of work in progress (WIP) was stored in many different places. On other occasions, we found different operators working on the same equipment with no clear work descriptions, and saw for ourselves the heavy burdens of batch documentation.
Results from these diagnostics speak for themselves: Many plants can improve their productivity and throughput times by 30 percent, and some by 50 percent.
LEADING PHARMA PLANTS' BEST PRACTICES
Some pharma plants perform up to 20 times better than their lowest-performing peers, according to POBOS benchmarks for over 200 facilities. What distinguishes the top performers?
What we notice first when visiting top performing plants is the activity on the shop floor. Paradoxically, it is relatively low! Corridors are lonely, lines are populated sparingly and people aren’t running around. Multi-machine handling helps make this possible. But implementing this seemingly simple principle requires three enablers that distinguish top performers:
• Machine effectiveness at top-performing plants is significantly higher, with OEEs as high as 60 percent. Lines have fewer minor stops and breakdowns; product changeovers are executed efficiently; lines are run at a speed that meets the targeted output and quality. Underlying high machine effectiveness is a problem-solving approach where production operators, maintenance technicians and engineers collaborate in the pursuit of continuous improvement. Line performance is monitored systematically and continuously. Where a gap appears between planned and actual performance, the team responds immediately by looking for the root cause and implementing countermeasures.
• Efficient workplace design also plays a key role. Good plant layout enables multi-tasking. This includes a minimum of physical barriers on the shop floor to allow operators to flow between areas (e.g., automatic doors, same clean room classification throughout production). Line operators at top plants have all they need at arm’s length, and work stations are close to each other to reduce walking and searching time.
• Elimination of non-value-adding activities. Obviously, high machine effectiveness and efficient workplace design help reduce non-value added activities like repairing, waiting and walking. But top performers push it further. For example, by adopting the “critical to quality” principle, they reduce documentation by limiting the number of inputs to the items that really matter for product quality. Beyond removing non-value adding tasks, they also try to simplify activities. Fewer and simpler tasks mean fewer mistakes. The number and frequency of in-process controls are reduced to what is appropriate based on the process need and knowledge, rather than habit or tradition.
Many top performers have also reduced non-value-adding activities by investing in automation, such as automatic weighing stations and guided vehicles, container-washing machines and electronic batch records. The distinctiveness here, however, lies not so much in the investments but in the way the automation and systems are selected and implemented. Top performers understand that automation in itself does not guarantee performance or productivity. For any automation investment, they require a clear business case that is reviewed in a thorough capex approval process. Also, any process to be automated is optimized in-depth before being automated to avoid “locking in” inefficiencies.
Another characteristic that distinguishes top-performing plants is the mindset of their leaders. Most recognize that pharma is no different from other industries that have applied lean concepts. A recurring challenge is the perception that lean principles are difficult to marry with pharmaceutical operations, because of the highly regulated nature of the industry.
But managers at high-performing plants find the lean journey does not expose them to undue risk — and can even offer the same kinds of benefits seen in other industries. On one hand, managers understand that productivity and efficiency do not have to compromise quality, and that simplified work processes can reduce complexity, non-value-added activity and opportunities for error. What’s more, with quality being one of the main functions in pharma plants, managers also understand how lean principles can have tremendous benefits when applied in the quality department itself.
HOW TOYOTA WOULD MAKE PILLS
Given the huge spread in performance, pharma plants can learn a lot from each other about how to run operations efficiently. But even the best-performing plants can raise their aspirations by looking critically for waste and variability in their operations. A key purpose of a diagnostic is to find the technical limits of “what would we have to do” to improve. Each company should aspire to be the “Toyota of the pharma industry.” Admittedly, cars are not pills or vaccines. But it is worth thinking about how Toyota might go about making pills; this insight guides how we look at a diagnostic.
When comparing a top-performing pharma plant with the production system at Toyota, questions immediately arise:
1. Why are line-OEEs below 90 percent?
2. Why is throughput time 10 to 30 times longer than actual process time?
3. Why don’t all operators follow a standardized work pattern?
Pharma executives reply with rational-sounding explanations: changeover times and campaign sizes limit the flow, they face quality constraints, and unforeseeable events prevent further standardization of work.
LESSONS FROM AUTOMOTIVE
These explanations for inefficiency sound like those of automotive executives of 30 to 40 years ago. Yet today the car industry produces an exceptional variety of products in an ever more efficient way, with ever shorter delivery lead times and extremely high quality. Toyota assembles all the variants of more than one model on a single production line with a 97-99 percent run ratio (a kind of OEE measurement), with virtually no buffer stocks, while achieving the best quality performance in the industry, enabling it to offer five-year warranties in many markets.
It is reasonable to believe that Toyota would immediately address the poor OEE-levels of 10-60 percent in pharma. Just as automotive companies started to improve equipment effectiveness decades ago, pharmaceutical companies should now follow suit.
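For reference, OEE is conventionally computed as the product of availability, performance and quality, which is why losses in each factor compound so quickly. A minimal sketch with illustrative shift numbers:

# OEE = availability x performance x quality (standard definition).
planned_time_min = 480   # planned production time for the shift
downtime_min = 120       # breakdowns, changeovers, minor stops
ideal_rate = 200         # units per minute at rated speed
total_units = 54_000     # units actually produced
good_units = 51_300      # units meeting quality requirements

run_time = planned_time_min - downtime_min
availability = run_time / planned_time_min           # 0.75
performance = total_units / (run_time * ideal_rate)  # 0.75
quality = good_units / total_units                   # 0.95

print(f"OEE = {availability * performance * quality:.1%}")  # -> OEE = 53.4%

Three individually unremarkable losses land the line squarely inside the 10-60 percent range cited above.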
Toyota would not accept a product spending more than 30 days in a manufacturing facility; it aspires to get production down to pure processing time. Production would be controlled by pull to minimize work-in-process levels. Quality checks would be built into the process via error-proofing and confirmed in real time, leading to immediate release upon receipt of the results of any test needing incubation time.
Finally, by no means would Toyota allow non-standardized work execution. Standardized work is the basis of performance and improvement. Every process — be it a quality test, a changeover or a line replenishment — would be torn down to its basic elements. These would be sequenced and put onto a timeline, and managers would meticulously verify adherence to standards.
When Mr. Toyoda, former president of Toyota Europe, grandson of the founder and now the head of Toyota, visited the assembly plant in France a couple of years ago, he complained that operators did not follow standardized work processes. Indeed, during the plant tour, he stopped on the line, opened the standardized work manual for the position, and verified second-by-second that the operator performed the job according to its specified elements.
In what other industry does the head of the corporation take the time to investigate how a process is executed on the factory floor? Or pause long enough to ask why an operator handled a bolt with his right hand instead of his left?
What nobody thought possible in the automotive industry 50 years ago came true as one leader emerged. Pharmaceutical executives should aspire to excel on cost, service and quality and move away from the current “either-or” mindset.
Those with the courage to look firsthand at their operations — and the imagination to find the solutions to remove waste and variability from the processes and production system as a whole — will become leaders in their industry.

Next-Generation Innovation


Streamlined strategies secure pharma’s future as the industry thinks outside the pill box

By Karen Langhauser, Chief Content Director
As drug pricing reaches what some might argue is the height of public and political scrutiny (thanks Martin Shkreli), consumer expectations rise, and the blockbuster model continues its well-documented demise, the market has made innovation mandatory for the pharmaceutical industry. Bottom-lining it, PwC stated in its “Managing Innovation in Pharma” report, “the rewards for success are high and the risks of failure can threaten a company’s very survival.”
To say that the pharmaceutical industry lacks innovative ideas would be doing a tremendous disservice to an industry that has sustained decades of respectable growth along with a healthy list of historic medical achievements. More recently, the last five years of Thomson Reuters Top 100 Global Innovators lists consistently report pharma as one of the largest industry sectors represented.
Rather than isolated examples of innovation in the form of single new molecules, today’s market calls for next-generation innovation in the form of innovation strategy.

For many drug manufacturers, innovation strategy involves streamlining an increasingly complex manufacturing system. This type of next-generation innovation wades through the growing sea of new ideas and emerges with the strategies that deliver a clear, focused value. How does this play out in pharma in 2016? In the form of targeted acquisitions and partnerships, personalized treatments, efficient outsourcing partners and properly integrated technologies.
Delivering authentic innovation in today’s pharmaceutical environment is a momentously complex task with one very succinct emphasis. That emphasis, according to Dr. Clive Meanwell, CEO at The Medicines Company and recent recipient of the 2016 Dr. Sol J. Barer Award for Vision, Innovation and Leadership, is a “sharp focus on what customers really need.” 
MERGERS, ACQUISITIONS AND PARTNERSHIPS
PwC’s Health Research Institute’s annual report predicts that 2016 will be the “year of merger mania” in healthcare, specifically mentioning the pharmaceutical and life sciences sector. According to the report, “drug companies are looking beyond traditional M&A by acquiring ‘beyond-the-pill’ products and services to bolster their portfolios and pipelines of drugs.”
Consolidation has typically been a dirty word in pharma and rarely appears in the same sentence as innovation. But a handful of forward-thinking companies are recognizing the need to innovate beyond merely acquiring new molecule formulations. Dwindling (though definitely not gone) are the days when pharmaceutical companies would hunt for deals to bolster specific therapeutic areas, aiming to completely dominate that space. Under immense pressure to optimize performance, today’s companies are taking a hard look at the systems and services behind these new drugs and making strategic acquisitions with an eye toward innovative services and digital technology.
Teva Pharmaceutical made a strong move in the digital space in September with its purchase of smart inhaler company Gecko Health. Prior to that acquisition, Teva, in collaboration with Philips Healthcare, launched Sanara Ventures in Israel. The collaboration will invest approximately $26 million to support 40-50 early-stage digital healthcare and medical device companies over the next eight years.
The last few years have seen numerous innovative crossovers as pharma looks toward unconventional partnerships, specifically in the tech field. In 2014, Google’s R&D business, Calico, partnered with AbbVie to focus on age-related diseases. Around the same time, Google X Labs teamed with Novartis to develop glucose-monitoring smart contact lenses and, early last year, partnered with Biogen to explore wearables technology in multiple sclerosis.
But M&A is not always about innovation. Sometimes it’s about money — often in the form of tax inversion deals. The $160 billion 2015 Pfizer-Allergan merger created the world’s biggest drug company — and will move Pfizer’s domicile from the U.S. to Ireland, dropping its corporate tax rate by about 7 to 8 percentage points. Pfizer is not alone in capitalizing on this tactic. In 2014, Mylan acquired Abbott Laboratories’ non-U.S. developed-markets generics business and moved its headquarters to the Netherlands. Also in 2014, AbbVie reconsidered its $54 billion acquisition of Shire — a deal that would have allowed AbbVie to reincorporate in Britain — after the Treasury Department announced new rules taking aim at inversion deals. According to a Bloomberg report, about 51 U.S. companies have reincorporated in low-tax countries since 1982, including 20 since 2012.
TARGETED OUTSOURCING PARTNERS
Because their goal is to serve the unmet needs of the pharmaceutical and biotech industries, trends in contract manufacturing often reflect drug industry demands. Like the industries it serves, the contract services market has not been immune to consolidation. In fact, according to Visiongain’s “Pharmaceutical Contract Manufacturing World Industry and Market Outlook 2015-2025,” about 30 CMOs account for more than half of the industry’s revenues and, in the last three years, there have been 18 acquisitions in the CMO space.
Despite a reduction in supplier options, pharmaceutical manufacturers are becoming smarter and more specific when it comes to choosing contract manufacturing partners, expecting a higher degree of flexibility.
According to Peter Soelkner, managing director at Vetter Pharma, “drug companies are making every effort to reduce and simplify their network of different service providers. What they want to achieve, whenever possible, is a solution that equates to ‘one-stop-shopping.’ They expect that any partner they choose to work with must be strategic in their efforts, not simply tactical.”
Vetter is in the process of multiple facility expansions and technology upgrades, including the implementation of an internally engineered restricted access barrier system (RABS) concept for increased operational excellence in aseptic manufacturing. The RABS technology allows for faster start-up time, ease of changeover and reduced capital costs.
As pharmaceutical companies ramp up investment in flexible in-house technologies and continue acquiring their own contract services providers, contract manufacturers are understanding the need to specialize — especially surrounding the growth of biologic drugs and biosimilars, including the growing demand for novel therapies. CMOs are increasing their investments in single-use technologies for biopharmaceutical manufacturing and continuous manufacturing processes.
Aware of their critical role in an increasingly sophisticated global supply chain, today’s contract manufacturers are innovating to provide high quality, flexible production.
INTEGRATING TECHNOLOGY
Trends and advancements in the pharmaceutical industry tend to trigger cascading responses from linked industries, such as equipment, packaging and drug delivery devices. Take, for example, the continued focus on patient compliance and biologics, which has evolved into a growing market for combination products — the marriage of biological products, drug containers and drug delivery devices.
Drug manufacturers who have typically only dealt with making drugs have needed to broaden their in-house expertise or contract manufacturing reach to be able to address the technical, commercial and regulatory issues that have emerged with combination devices. 
According to Jessica Buday, senior manager, Process & Operational Excellence, Ferring Pharmaceuticals, the pharmaceutical landscape today involves, “being prepared for not just the new products, but the new technology that is required (preferably in-house) to manufacture them. For instance, drug delivery now involves more than just tablets and vials — there is the entire world of combination devices. The companies that master development and validation of these devices will put themselves at the forefront. In manufacturing, that includes making sure we have the equipment for commercializing these devices.”
Ferring Pharmaceuticals, known for its reproductive health treatments, also focuses on offering more effective drug-delivery devices, including needle-free devices and transdermal delivery technologies. In March of last year, Ferring entered the U.S. pediatric endocrinology market with the acquisition of Zomacton growth hormone deficiency treatment and with it, the Zoma-Jet needle-free delivery device from Teva Pharmaceutical.
For West Pharmaceutical Services, a company at the forefront of combination devices, successful drug therapy is a comprehensive strategy. “Our customers work hard to come up with innovative new molecules, but a drug molecule is completely useless unless delivered to patients in the best way,” says Graham Reynolds, vice president and general manager, Biologics, West Pharmaceutical Services. For West, there are four elements that need to be considered in successful drug therapy: the molecule itself, the container that holds it, the delivery system that administers it and the fourth — often forgotten — element, patient adherence. “The interfaces between these elements are as critical as the phases themselves,” adds Reynolds.
These “interfaces” are also driving equipment trends. Consider, for example, the increasingly important role that single-use systems for aseptic processing play in the fill/finish process. Single-use components are helping manufacturers decrease time spent on cleaning and validation, thus saving them money. The newer, disposable technology enables fully integrated, continuous production.
It’s not so much stand-alone technologies that drugmakers are reaching for, but rather targeted innovation that enhances the overall effectiveness of the process.
PRECISION MEDICINE
Precision medicine, as defined generally by the National Institutes of Health, is an “emerging approach for disease treatment and prevention that takes into account individual variability in genes, environment and lifestyle for each person.” While still in somewhat of a nascent stage, public interest has grown since last January, when President Obama announced the Precision Medicine Initiative (PMI) in his State of the Union address.
As the pharmaceutical industry moves forward with its quest to innovate and streamline, this patient-centered, data-driven approach makes sense. Brad Campbell, president and COO of Amicus Therapeutics, points to the rise of precision medicines as a real example of innovation. Precision medicine, according to Campbell, enables us to “drive science toward not just a specific disease but a specific genetic substrate. It helps improve the risk-benefit ratio, removing ‘waste’ from the system.” 
Amicus is in the process of seeking global approvals for its lead product candidate, migalastat, a personalized medicine in late-stage development to treat individuals with Fabry disease. Fabry disease is a rare, inherited disorder caused by deficiency of an enzyme called α-galactosidase A. In Amicus’ work on migalastat, the precision medicine approach is designed for patients with “amenable mutations,” that is, specific mutations that are capable of responding to oral migalastat as a monotherapy. Amicus’ extensive preclinical and clinical work has characterized the properties of nearly 800 known Fabry disease-associated mutations in an effort to determine which patients are most eligible for treatment.
The result? “The use of elegant science to identify, with high precision, which patients will benefit,” states Campbell.
In June 2015, the National Cancer Institute (NCI) announced a precision medicine trial touted as “the first study in oncology that incorporates all of the tenets of precision medicine.” The trial, called NCI-MATCH, seeks to determine whether targeted therapies for people whose tumors have specific gene mutations will be effective, regardless of their cancer type. Obama’s PMI budget request included $70 million for NCI to scale up efforts to identify genomic drivers in cancer. The robust list of pharmaceutical partners involved in NCI-MATCH includes Novartis, Pfizer, Boehringer Ingelheim, AstraZeneca, as well as device manufacturer, Thermo Fisher Scientific.
NEXT-GENERATION DEMANDS
Speaking specifically about precision medicine, Campbell stressed the industry’s need to “move away from the shotgun approach” to treatment, but perhaps this statement has wider implications for today’s pharmaceutical industry. The practice of cranking out rapid-fire innovation in the form of new molecules in the hope of finding the next blockbuster is being replaced by targeted innovation strategies that demonstrate actual patient value and can be duplicated across an enterprise.
Today’s patients are more informed and connected with their health decisions than ever before. With that in mind, pharmaceutical manufacturers are making smarter choices when it comes to acquisitions, new technology and contract partners. Next-generation patients demand next-generation innovation, and the pharmaceutical industry is rising to the challenge. 

Editor’s note: Special thanks to our friends at Choose New Jersey and BioNJ for their assistance in the form of sharing their vast network of innovative contacts with us. Please visit them at

Tuesday, October 18, 2016

Strategies for HVAC Systems

By James Piper, January 2006
Heating and air-conditioning systems account for 50-80 percent of the total energy use in most commercial and institutional buildings. With such a high percentage of energy use concentrated in these two areas, it is no surprise that the recent rise in energy prices and concerns over energy supplier reliability have prompted maintenance and engineering managers to focus attention on these systems, particularly boilers and chillers.
Even a slight improvement in the operating efficiency of these components translates into big cuts in energy use and costs. Suppose the average load on a large chiller plant is 3,000 tons. If a manager can improve its average operating efficiency from 0.91 kW/ton to 0.70 kW/ton, the monthly savings will top $27,000, assuming an average electricity cost of $0.06 per kWh.
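The arithmetic behind that estimate is worth making explicit. A quick sketch, assuming the plant runs at the average load around the clock (roughly 730 hours per month):

# Reproducing the savings estimate above (illustrative, continuous operation).
avg_load_tons = 3_000
eff_before = 0.91      # kW per ton
eff_after = 0.70       # kW per ton
cost_per_kwh = 0.06    # dollars
hours_per_month = 730  # average hours in a month

demand_reduction_kw = avg_load_tons * (eff_before - eff_after)  # 630 kW
monthly_savings = demand_reduction_kw * hours_per_month * cost_per_kwh
print(f"${monthly_savings:,.0f} per month")  # -> $27,594 per month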
While managers can take many steps to improve the operating efficiency of chillers and boilers, all energy-improvement programs should start by making existing equipment operate as efficiently as possible.
How can technicians achieve this goal? Two steps can enhance operation - monitoring and maintenance. Basic maintenance, such as cleaning and adjustments, will keep boilers and chillers operating so they minimize energy use.
But the only way to achieve peak operating efficiencies is to regularly monitor operating parameters. Data from a monitoring program will tell operators how to adjust the equipment and will identify the need for maintenance tasks.

Chiller Monitoring

Chillers present a challenge to managers when it comes to operating efficiency. While manufacturers rate chiller efficiency at full load, most building chillers rarely operate at full load. Part-load efficiencies are lower and vary with a number of parameters, including supply and return chilled-water temperatures, entering condenser-water temperature, and condenser and chilled-water flow rates.
Compounding the challenge is the level of precision needed in measuring each parameter to calculate efficiency accurately. An error of as little as 1 degree in water-temperature measurements can cause an error in the efficiency calculation of 1-2 percent. A similar error in flow measurement can result in an even larger error in calculating efficiency. Therefore, it is critical to install high-quality flow meters and temperature sensors and to maintain them properly.
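A sketch of the underlying calculation shows why sensor quality matters so much. It uses the standard water-side load formula, tons = gpm x deltaT(F) / 24, with illustrative numbers:

# Chiller efficiency (kW/ton) from field measurements.
# tons = gpm * deltaT / 24 follows from BTU/hr = 500 * gpm * deltaT
# and 1 ton = 12,000 BTU/hr.

def kw_per_ton(power_kw, flow_gpm, delta_t_f):
    tons = flow_gpm * delta_t_f / 24
    return power_kw / tons

flow, power = 4_800, 1_400        # gpm, kW input (illustrative)
true_dt, measured_dt = 10.0, 9.9  # a 0.1 F error in the measured delta-T

print(f"true:     {kw_per_ton(power, flow, true_dt):.3f} kW/ton")      # 0.700
print(f"measured: {kw_per_ton(power, flow, measured_dt):.3f} kW/ton")  # 0.707

Even a 0.1-degree error in the chilled-water temperature difference shifts the computed efficiency by about 1 percent, and flow-meter errors propagate just as directly.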
To be effective, a chiller efficiency-monitoring program must be ongoing. As data is collected under different operating conditions, managers will develop an efficiency performance baseline for the chiller. Technicians should compare this baseline to the manufacturer's published performance curve to determine if the chiller is performing as efficiently as possible.
As data continues to be compiled, technicians will be able to monitor trends in performance. While managers can expect a slight deterioration in performance due to normal wear and tear, they should look for trends that might indicate a need for maintenance, such as cleaning the chiller's tubes, replacing refrigerant, or overhauling the chiller.
Fortunately, a new generation of monitoring equipment is available to assist managers in monitoring chiller performance. This equipment collects the necessary data and automatically calculates the chiller's efficiency.
The equipment then compares the operating parameters and the calculated efficiency to past performance values, triggering an alarm if a value falls outside the expected range. By constantly monitoring chiller operation, the systems assist in keeping chillers at peak operating efficiency, and they help technicians detect and troubleshoot problems.
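The alarm logic in such systems can be as simple as comparing each new efficiency reading against a statistical band built from past performance. A minimal sketch (baseline data and threshold are illustrative):

from statistics import mean, stdev

baseline = [0.702, 0.698, 0.705, 0.699, 0.701, 0.703]  # kW/ton history
mu, sigma = mean(baseline), stdev(baseline)

def out_of_range(reading, n_sigmas=3.0):
    """True (alarm) if the reading falls outside the expected band."""
    return abs(reading - mu) > n_sigmas * sigma

print(out_of_range(0.704))  # False: within the expected range
print(out_of_range(0.760))  # True: flag for root-cause investigation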

Boilers

Efficiency calculations for chillers determine overall efficiency expressed as the ratio of energy input to energy output. Boilers require a different efficiency calculation - combustion efficiency.
A boiler's combustion efficiency measures how completely fuel burns and how effectively the generated heat transfers to water or steam. The measurement does not take into account heat loss from the boiler's surface, blowdown loss, or energy used by auxiliary equipment. With good test equipment, technicians can measure it with 98 percent accuracy or greater.
Managers have two reasons for using the combustion test instead of an overall efficiency test. First, it is practically impossible to measure all necessary parameters accurately, including the energy content of the fuel. More importantly, boiler losses excluded from the combustion efficiency calculation remain relatively constant. While technicians need to check them periodically, they do not require ongoing monitoring.
Combustion-efficiency testing is one of the most accurate means of adjusting a boiler and its auxiliary equipment for both safe and efficient operation. To do so, technicians can install a portable unit temporarily on a boiler for testing or permanently for ongoing monitoring.
Technicians most often use oxygen sensors to test a boiler's combustion efficiency. The equipment uses an electronic sensor in the boiler's flue that measures oxygen in the flue gas. Changes in combustion efficiency, such as those caused by varying levels of excess air, show up as varying levels of oxygen in the flue gas.
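A widely used rule of thumb relates the measured flue-gas oxygen to excess air, which is what the burner adjustment ultimately controls. A short sketch (the relation is approximate and fuel-dependent):

# Rule-of-thumb relation between flue-gas O2 and excess air (dry basis):
#   excess air (%) ~= O2 / (20.9 - O2) * 100
def excess_air_pct(o2_pct):
    return o2_pct / (20.9 - o2_pct) * 100

for o2 in (2.0, 4.0, 6.0):
    print(f"{o2:.0f}% O2 in flue gas -> ~{excess_air_pct(o2):.0f}% excess air")
# -> ~11%, ~24%, ~40% excess air: rising O2 signals efficiency loss,
#    because excess air carries usable heat up the stack.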
For small boilers, technicians most often use a portable combustion-efficiency tester at the beginning of every heating season. By adjusting the boiler each year, managers can achieve higher levels of operating efficiency in multiple small boilers without making a significant investment in equipment.
Medium-sized boilers can use the same setup, but technicians can achieve greater efficiency by testing the boiler at least monthly during the heating season. For larger boilers, it is most effective to permanently install the equipment and connect it to the boiler's control system. In this configuration, technicians can use the equipment to adjust the boiler under all heating loads to achieve the most efficient operation.

Pumps

Managers often overlook pumps when looking for ways to improve HVAC system operating efficiency. As a result, inefficient pump operation can go uncorrected for the life of the system. And with most HVAC system pumps operating when the system operates, inefficiency can result in large amounts of lost energy.
Perhaps the most important factor when installing pumps is that specifiers must match them to the system's requirements. As the flow rate increases or decreases from the design point of the pump, the efficiency of the pump decreases. So it is important that managers select a pump based on the system's required flow rate and the pressure exerted on the pumping system. If the system must operate over a range of flow rates, a variable-speed pump system provides the best operating efficiency.
Once in operation, technicians should periodically test pumps for proper flow rate and pressure differential. And they should plot readings against the manufacturer's pump curve to determine if the pump is operating within its specified range.
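A minimal sketch of that comparison, linearly interpolating the published pump curve at the measured flow (curve points and field readings are illustrative):

# (flow gpm, head ft) points read off a manufacturer's pump curve
curve = [(0, 120), (200, 115), (400, 105), (600, 88), (800, 62)]

def expected_head(flow):
    """Linearly interpolate the published curve at the measured flow."""
    for (f1, h1), (f2, h2) in zip(curve, curve[1:]):
        if f1 <= flow <= f2:
            return h1 + (h2 - h1) * (flow - f1) / (f2 - f1)
    raise ValueError("flow outside published curve")

measured_flow, measured_head = 500, 82  # field readings
dev = measured_head - expected_head(measured_flow)
print(f"expected {expected_head(measured_flow):.1f} ft, measured {measured_head} ft, "
      f"deviation {dev:+.1f} ft")
# A sustained negative deviation can point to impeller wear, throttling
# or changes in the system itself.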
James Piper, P.E., is a national consultant based in Bowie, Md., with more than 25 years of experience in facilities maintenance and engineering issues.

5 Steps to Chiller Efficiency

Chillers represent a substantial capital investment and are a major contributor to operating costs in institutional and commercial facilities. For many organizations, chillers are the largest single energy users, and comprehensive maintenance is critical to ensure their reliability and efficient operation.
While some organizations use predictive maintenance — including vibration analysis, infrared thermography, and rotor bar testing — to diagnose problems in advance, a comprehensive preventive maintenance (PM) plan remains the key to ensuring the best performance and efficiency of a chiller.
Chiller efficiencies have improved steadily over the past decade due to advances in controls, refrigerants and equipment design. As a result, chillers now have tighter operational tolerances, and regular service and maintenance are more crucial than ever. When developing a PM plan for chilling equipment, maintenance and engineering managers should consider five essential areas.

Step 1: Maintain a Daily Operating Log

Chiller operators should document chiller performance daily with an accurate and detailed log, comparing this performance with design and start-up data to detect problems or inefficient control setpoints. This process allows the operator to assemble a history of operating conditions, which can be reviewed and analyzed to determine trends and provide advanced warning of potential problems.
For example, if machine operators notice a gradual increase in condensing pressure over a month, they can consult the daily operating log and systematically check and correct the possible causes of this condition, such as fouled condenser tubes or non-condensables.
Chiller manufacturers can provide a list of recommended data points specific to the equipment upon request. Operators should take readings at least once per shift, at about the same time each day. Today's chillers have microprocessor controls, so managers can automate this data collection through a building automation system.
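As a minimal sketch of what an automated log might look like, the snippet below appends one timestamped row of readings to a CSV file. The point names, values and file name are illustrative stand-ins for the manufacturer's recommended data list, not an actual controls interface.

    # Minimal sketch of an automated daily operating log. Point names
    # and values are illustrative; use the manufacturer's recommended
    # data list for the actual machine.
    import csv
    from datetime import datetime

    LOG_FIELDS = ["timestamp", "chilled_water_supply_F",
                  "condensing_pressure_psi", "compressor_amps"]

    def append_log_entry(path: str, readings: dict) -> None:
        """Append one timestamped row of chiller readings to a CSV log."""
        row = {"timestamp": datetime.now().isoformat(), **readings}
        with open(path, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
            if f.tell() == 0:          # new file: write the header first
                writer.writeheader()
            writer.writerow(row)

    append_log_entry("chiller_log.csv", {
        "chilled_water_supply_F": 44.1,
        "condensing_pressure_psi": 128.5,
        "compressor_amps": 312,
    })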

Step 2: Keep Tubes Clean

One of the largest obstacles to desired chiller performance is degraded heat transfer. Chiller performance and efficiency relate directly to the machine's ability to transfer heat, which begins with clean evaporator and condenser tubes. Large chillers contain several miles of tubing in their heat exchangers, so keeping these large surfaces clean is essential to maintaining high-efficiency performance.
Chiller efficiency deteriorates as tubes become fouled, when mud, algae, sludge, scale or contaminants accumulate on the waterside of heat-transfer surfaces. The rate of fouling depends on the system type — open or closed — as well as on water quality, cleanliness and temperature.
Most chiller manufacturers recommend cleaning condenser tubes annually, since they typically are part of an open system, and they recommend cleaning evaporator tubes once every three years for closed systems. But if the evaporator is part of an open system, they recommend periodic inspection and cleaning.
Managers can consider two primary methods for cleaning tubes:
  • Mechanical cleaning removes mud, algae, sludge and loose materials from smooth-bore tubes and consists of removing the water-box covers, brushing the tubes and flushing with clean water. For internally enhanced tubes, managers should consult the chiller manufacturer for mechanical-cleaning recommendations.
  • Chemical cleaning removes scale. Most chiller manufacturers recommend consulting a local water-treatment supplier to determine the proper chemical solution. A thorough mechanical cleaning should always follow a chemical cleaning.
Newer chillers can be equipped with automatic tube-brushing systems, which also can be retrofitted onto existing chillers. These systems use small, nylon-bristled brushes that flow through the tubes for cleaning. A custom-manufactured, four-way reversing valve is installed in the condenser water piping, and every six hours, the system automatically reverses the flow through the condenser tubes for about 30 seconds.
Coupled with proper water treatment, these systems virtually eliminate fouling within the chiller and maintain design-approach temperatures. These systems typically show payback periods of less than two years.
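One common way to quantify fouling between cleanings is to track the condenser approach temperature, the difference between the saturated condensing temperature and the leaving condenser-water temperature; a rising approach relative to start-up data suggests fouled tubes. The sketch below flags such a rise, with the design approach and alarm margin as placeholder values.

    # Illustrative fouling check using condenser approach temperature
    # (saturated condensing temperature minus leaving condenser-water
    # temperature). The design value and alarm margin are placeholders;
    # use the machine's design and start-up data.

    DESIGN_APPROACH_F = 2.0    # placeholder design approach
    ALARM_MARGIN_F = 2.0       # placeholder allowable rise

    def condenser_approach(sat_condensing_f: float,
                           leaving_water_f: float) -> float:
        return sat_condensing_f - leaving_water_f

    approach = condenser_approach(97.5, 92.0)
    if approach > DESIGN_APPROACH_F + ALARM_MARGIN_F:
        print(f"Approach {approach:.1f} F exceeds design; "
              "schedule tube cleaning.")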

Step 3: Ensure a Leak-free Unit

Manufacturers recommend quarterly leak testing of compressors. Low-pressure chillers using either CFC-11, which has been phased out, or HCFC-123 have sections of their refrigeration systems that operate at subatmospheric pressure. These chillers remain the most common in today's facilities, but it is difficult to create a perfectly sealed machine, and leaks allow air and moisture, commonly referred to as non-condensables, to enter the unit.
Once in the chiller, non-condensables become trapped in the condenser, increasing condensing pressure and compressor-power requirements and reducing efficiency and overall cooling capacity. Low-pressure chillers have high-efficiency purge units that remove non-condensables to maintain design-condensing pressure and promote efficient operation. One chiller manufacturer estimates that 1 psi of air in a condenser equates to a 3 percent loss in chiller efficiency.
Moisture in a chiller also can create acids that corrode motor windings and bearings and create rust inside the shell. Small rust particles called fines float in the vessels and get trapped inside heat-exchanger tubes. Fines on tubes decrease the unit’s heat-transfer effectiveness and overall efficiency. Left unchecked, they can lead to costly tube repairs.
The best way to monitor leaks in a low-pressure chiller is to track purge-unit runtime and the amount of moisture accumulation at the purge unit. If either of these figures is too high, the unit has leaks. Other indications of air in the system include increased head pressure and condensing temperature.
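The sketch below turns that advice, together with the 3-percent-per-psi estimate above, into a simple check. The purge-runtime limit is a placeholder, since acceptable runtime varies by machine and manufacturer.

    # Simple purge-runtime check plus the manufacturer's estimate cited
    # above (~3% efficiency loss per 1 psi of air in the condenser).
    # The runtime limit is a placeholder value.

    PURGE_RUNTIME_LIMIT_HR = 1.0   # placeholder weekly limit

    def efficiency_loss_from_air(air_psi: float) -> float:
        """Estimated efficiency loss (%) using the 3%-per-psi estimate."""
        return 3.0 * air_psi

    weekly_purge_runtime_hr = 2.5
    if weekly_purge_runtime_hr > PURGE_RUNTIME_LIMIT_HR:
        print("Excessive purge runtime; inspect the chiller for leaks.")
    print(f"0.5 psi of air: about {efficiency_loss_from_air(0.5):.1f}% "
          "efficiency loss")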
High-pressure chillers using CFC-12, HFC-134a, or HCFC-22 operate at pressures well above atmospheric pressure, and leaks in these types of chillers release potentially hazardous refrigerants into the environment. Environmental regulations limit the amount of annual refrigerant leaks.
Leaks also result in a lower refrigerant charge and other operational problems, such as lower evaporator pressure, which can cause the compressor to work harder while producing less cooling. For positive-pressure chillers, technicians should monitor the refrigerant charge level and evaporator pressure to detect leaks.

Step 4: Sustain Proper Water Treatment

Most chillers use water for heat transfer, so the water must be properly treated to prevent scale, corrosion and biological growth. A one-time chemical treatment is required for closed-water systems, which are typical of chilled-water systems connected to the chiller evaporator.
Open systems typically are used for condenser-water systems connected to the chiller condenser. Condenser systems that use sources such as cooling towers require continuous chemical water treatment. Managers should work with a chemical-treatment vendor that is familiar with local water supplies and can provide full-service maintenance for all facility water systems.
Scale should not be a problem if the vendor maintains proper chemical treatment of the evaporator and condenser-water systems. The presence of scale in the condenser or evaporator tubes indicates improperly treated water. The vendor should test water quality every three months and correct the treatment program as needed, which also helps keep the chiller tubes clean.
Also, all system strainers should be cleaned every three months. Sand filters and side-stream filters for condenser-water systems are very effective at maintaining clean water, if properly maintained. To determine when cleaning is required, technicians should monitor the pressure drop across the filters and refer to manufacturer recommendations. At a minimum, filters should be cleaned quarterly, regardless of pressure drop.
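A simple way to act on those pressure-drop readings is sketched below. The cleaning threshold is an illustrative placeholder; the filter manufacturer's figures should govern.

    # Illustrative filter pressure-drop check. The threshold is a
    # placeholder; use the filter manufacturer's recommendation.

    CLEANING_THRESHOLD_PSI = 5.0   # placeholder cleaning limit

    def needs_cleaning(measured_dp_psi: float) -> bool:
        """Flag a filter for cleaning once pressure drop hits the limit."""
        return measured_dp_psi >= CLEANING_THRESHOLD_PSI

    for dp in (2.2, 4.1, 5.6):
        print(f"filter dP = {dp:.1f} psi: "
              f"{'clean now' if needs_cleaning(dp) else 'OK'}")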
Maintenance of strainers and filters limits chiller-tube erosion caused by sand or other small particles moving at high velocities. Erosion and tube pitting decrease overall heat-transfer effectiveness and reduce efficiency. If uncorrected, these conditions can lead to plugged tubes or catastrophic tube failure.
Technicians should inspect chilled-water and condenser-water piping systems annually for evidence of corrosion and erosion. Most manufacturers recommend eddy-current inspection of heat-exchanger tubes, an electromagnetic procedure for evaluating tube-wall thickness, every three to five years.

Step 5: Analyze Oil and Refrigerant

Annual chemical analysis of oil and refrigerant can aid in detecting chiller-contamination problems before they become serious. Testing consists of spectrometric chemical analysis to identify contaminants, including moisture, acids and metals, that hamper performance and efficiency. A qualified chemical laboratory specializing in HVAC equipment must perform the analysis. Most manufacturers offer annual oil- and refrigerant-analysis services.
Technicians should take an oil sample while the chiller is operating. The oil should be changed only if indicated by oil analysis. Technicians also should monitor oil filters for pressure drop and change them during a recommended oil change or if pressure drop is outside of tolerance.
Oil analysis can help detect other chiller problems. For example, high moisture content in the oil can signal problems with the purge unit, and changes in oil characteristics can signal the development of unacceptable compressor wear.
Managers use refrigerant testing to detect contaminants that might lead to reliability and efficiency problems. One main contaminant is oil that migrates into the refrigerant. One chiller manufacturer estimates a 2 percent loss in chiller efficiency for every 1 percent of oil found in the refrigerant, and it is not uncommon to find 10 percent oil in an older chiller's refrigerant. Based on this estimate, such contamination can cause a 20 percent decrease in efficiency. The bottom line: testing can pay large dividends.
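The oil-contamination estimate works out to a simple linear rule; the sketch below applies it to a few oil levels. Treat it as the manufacturer's rough rule of thumb, not a measured performance curve.

    # The 2%-per-1%-oil estimate cited above, applied to a few
    # oil-in-refrigerant levels. A rough linear rule of thumb, not a
    # measured performance curve.

    def efficiency_loss_from_oil(oil_percent: float) -> float:
        return 2.0 * oil_percent

    for oil in (1.0, 5.0, 10.0):
        print(f"{oil:>4.0f}% oil in refrigerant: about "
              f"{efficiency_loss_from_oil(oil):.0f}% efficiency loss")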
Kevin M. Graham, P.E., is a project manager with 10 years of experience at Smith Seckman Reid Inc., an engineering and facility consulting firm.