All About Chemistry: Feb 4, 2009

Flash point testing - The definitive test method

Mike Sherratt - Director of Research at Stanhope-Seta, and Chair of the Joint ISO/CEN Working Group on Flash Point

Classifying the flammability of fuels and other materials by their flash point value has been an established practice for more than 100 years. Today mandatory international and national regulations are set by bodies such as the UN, IATA, EPA, EU and health and safety executives.

The fundamental reason for measuring flash point is to assess the safety hazard of a liquid with regard to its flammability and then classify the liquid into a recognized hazard group. This classification is used to warn of a risk and to enable the correct precautions to be taken when manufacturing, storing, transporting or using the liquid. Flash point requirements are listed in regulations and product specifications.

What is flash point?
The flash point of a fuel is essentially the lowest temperature at which vapours from a test portion combine with air to give a flammable mixture and 'flash' when an ignition source is applied.

Specifications quote flash point values for quality control purposes as well as for controlling the flammability risk. The lower the flash point temperature the greater the risk. A change in flash point may indicate the presence of potentially dangerous volatile contaminants or the adulteration of one product by another.

The measurement of flash point is defined in test methods that are maintained by standardization bodies such as the Energy Institute in the UK, ASTM in the USA, CEN in Europe and ISO internationally. Over the last few years the focal point for flash point test methods has become the CEN/ISO Joint Working Group on Flash Point.

Which flash point test?
In general, flash point is measured with either "open cup" or "closed cup" apparatus. "Open cup" tests are required in some specifications and regulations and are intended to mimic conditions in open spaces, whereas "closed cup" tests are closer to most real situations, where space is restricted. "Closed cup" tests are more usually specified because the results are less affected by laboratory conditions and give a more precise and lower (safer) result. There are four major "closed cup" flash point tests which are specified nationally and internationally for testing fuels and other materials: Pensky-Martens, Small Scale (Setaflash), Abel and Tag.

The table below gives some examples where the Small Scale test is specified or mandated.

Material to be tested | Test method | Who says so
Aviation turbine fuel | Abel, Tag, Small Scale | ASTM D1655 and Def Stan 91-91
Gas turbine fuel | Pensky-Martens, Small Scale | ASTM D2880
Diesel fuel | Pensky-Martens, Small Scale | ASTM D975
Kerosines | Tag, Small Scale | ASTM D3699
Biodiesel (100% FAME) | Small Scale | EN 14213 and EN 14214
Transport regulations | Small Scale, other closed cups | UN, IATA, regulatory bodies
General ignitability | Small Scale | EPA 1020 A and B
Fuel oil | Pensky-Martens A and B, Small Scale | ASTM D396
Naphthas | Tag, Small Scale | ASTM D3734 and D3735
Raw tung oil | Small Scale | ASTM D12
Water-borne paints | Small Scale | ISO 3679, ISO 3680
Waste products | Small Scale | European Waste Directive

The Small Scale (Setaflash) Closed Cup test is specifically identified by the following test methods: ASTM D3278, ASTM D3828, IP303, IP523, IP524, EPA 1020 A and B, ISO 3679 and ISO 3680.

Why is the Small Scale accepted universally?
The Small Scale test method and the uniquely approved Setaflash Tester have been in use for over 30 years, primarily for a one minute test with 2 ml of sample to carry out a flash no-flash test, and more recently for automatic flash point determinations.

During these 30 years, comparative tests and collaboration with bodies such as the Institute of Petroleum, ASTM D01 and D02, British Railways, the Commission of the European Communities, the National Research Council Canada, BSI, the UK Ministry of Defence, the Transport and Road Research Laboratory, the Paint Research Association and the major international oil refiners may be summarized by the following well-recorded statements:

"ASTM evaluation studies of the Setaflash Tester demonstrate the excellent correlation between the Setaflash and the Tag Closed and Pensky-Martens Testers. In addition "the repeatability and reproducibility of the Setaflash are definitely better than values found using Pensky-Martens" and "the precision of the Setaflash is equivalent or slightly better than the Tag Closed Tester".

This performance and proven equivalence for specific materials has resulted in the adoption of the Small Scale test method in a wide range of product specifications and regulations. Today Setaflash Testers are in daily worldwide use by thousands of laboratories to test hundreds of different liquids.

The Setaflash family
The Setaflash Tester has evolved into a modern family of manual and automated instruments incorporating automatic temperature control and flash detection. A version with an electric ignitor has also been announced. In addition an open cup tester is available for mandated combustibility testing.

Is the Small Scale test the referee?
International transport regulations allow the use of a number of so-called "non-equilibrium" closed cup tests such as the Pensky-Martens, Tag or Abel to assess "flammability" criteria. However, if a result is within 2 °C of a defined limit, the use of an equilibrium test is mandated. In this instance the Small Scale test or another equilibrium test is the referee. Under these circumstances the Setaflash is usually selected because its one minute test is preferable to the 2 hours taken by other equilibrium tests.
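For laboratories that automate the screening of results, the referee rule described above can be captured in a few lines of code. The following is a minimal Python sketch; the 60 °C classification limit used here is only an assumed example value, not one quoted from the regulations in this article.

```python
# Minimal sketch of the referee rule described above: a non-equilibrium
# closed-cup result within 2 C of a classification limit triggers a
# mandatory equilibrium (e.g. Small Scale) retest.
# The 60 C limit is assumed for illustration only.
def needs_equilibrium_referee(flash_point_c: float, limit_c: float = 60.0) -> bool:
    """Return True when the measured result lies within 2 C of the defined limit."""
    return abs(flash_point_c - limit_c) < 2.0

for result in (55.0, 59.2, 61.5, 65.0):
    verdict = ("equilibrium referee test required"
               if needs_equilibrium_referee(result)
               else "non-equilibrium result accepted")
    print(f"{result} C -> {verdict}")
```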

In product specifications it will be made clear which test method is the referee.

Choosing a flash point method can be difficult; however, a new CEN/ISO document, Petroleum products and other liquids - Guide to flash point testing, gives advice and will be available in 2005.

The concept of the Small Scale test eliminates the possibility of heating the test sample above the test temperature and avoids the loss of volatile constituents; it is the fastest test available, has excellent precision and an in-depth history of comparative tests and equivalent results. From these facts it is clear that the Small Scale test is The Definitive Test.


Do you see the complete picture?

"Triple Detection" in Gel Permeation Chromatography

Bernd Tartsch, Viscotek GmbH, Carl-Zeiss-Str. 11, 68753 Waghäusel

Gel permeation chromatography (GPC), also known as size exclusion chromatography (SEC), is the most widely used method for determining the molecular weight of natural and synthetic macromolecules. A specific strength of the method is that the separation process yields the entire molecular weight distribution, which contains far more information for the characterization of polymer samples than a single average value.

Introduction

When GPC was first introduced, a concentration detector was the only detector used for determining the molecular weight distribution. Calculating the molecular weight from the concentration chromatogram requires an elaborate calibration of the columns with polymer standards, and if sample and standard differ only relative molecular weights are obtained. The combination of concentration, viscosity and light scattering detection has established itself as a standard because of its superior information content.

Conventional Calibration

The conventional method for calibrating the GPC columns uses commercially available polymer standards with low polydispersity. The molecular weight is plotted versus the elution volume (see figure 1). The polymer is usually detected by a refractive index (RI) detector [1] or, if the polymer contains UV-absorbing groups, by a UV detector. The molecular weight of an unknown sample is then calculated by dividing the area below the curve into small fractions (slices) and projecting the retention volume of each slice onto the calibration curve.

Usually only relative molecular weights are obtained because of the different chemical composition of sample and standard. This is a consequence of the molecules being separated on the columns by size (more precisely, by their hydrodynamic volume) and not by molecular weight. The hydrodynamic volume of a polymer molecule depends on its chemical constitution, its structure (linear or branched) and also on the concentration. Two polymer molecules with the same molecular weight can therefore elute at different retention volumes, which alters the calculated molecular weight.

Fig. 1: Principle of conventional calibration. Polymer standards with a narrow distribution are used to build up a calibration curve. This is then used to calculate the average molecular weight and polydispersity of the sample.
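To make the slice-by-slice calculation described above concrete, here is a minimal Python sketch that computes Mn, Mw and the polydispersity from a concentration chromatogram and a conventional log-linear calibration curve. All numbers (retention volumes, signal shape, calibration constants) are hypothetical.

```python
import numpy as np

# Hypothetical concentration chromatogram: slice retention volumes [mL]
# and RI signal heights (proportional to concentration in each slice).
ret_vol = np.linspace(14.0, 20.0, 61)
ri_signal = np.exp(-0.5 * ((ret_vol - 17.0) / 1.0) ** 2)

# Conventional calibration curve from narrow standards: log10(M) = a - b * V
a, b = 12.0, 0.35
M = 10.0 ** (a - b * ret_vol)        # molecular weight assigned to each slice

w = ri_signal / ri_signal.sum()      # weight fraction of each slice
Mn = 1.0 / np.sum(w / M)             # number-average molecular weight
Mw = np.sum(w * M)                   # weight-average molecular weight
print(f"Mn = {Mn:.0f} g/mol, Mw = {Mw:.0f} g/mol, PDI = {Mw / Mn:.2f}")
```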

Universal Calibration

Universal calibration was first described in 1967 by Benoit et al. [2] and allows the determination of the exact molecular weight even for samples that differ from the standards in chemical composition or structure. This calibration is based on the fact that the product of intrinsic viscosity [η] and molecular weight is directly proportional to the hydrodynamic volume. With the development of the four-capillary viscometer by Haney [3], this method was introduced to GPC. This patented differential viscometer detector allowed, for the first time, the intrinsic viscosity to be measured on-line without the drawbacks of earlier viscometers (low sensitivity, influence of pump pulsation, Lesec effect).

Fig. 2: Principle of the four-capillary viscometer. Capillaries 1 and 2 divide the flow symmetrically. The sample in the lower branch builds up a back pressure, whereas the sample in the upper branch flows without pressure into a reservoir. The intrinsic viscosity is calculated from the differential pressure (DP) together with the inlet pressure (IP).

If one takes into account that GPC separates by size, the advantage of an on-line viscometer becomes directly visible. Universal calibration of the chromatographic columns yields a calibration curve that gives the size of the molecules as a function of retention volume and is therefore independent of the chemical composition or structure of the polymer standards used. By also measuring the intrinsic viscosity of the sample, its real molecular weight is easily calculated. Thus universal calibration does not rely on a particular type of polymer standard.
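A minimal sketch of the universal calibration step, using hypothetical values: at a given retention volume the universal calibration curve supplies the product [η]·M, and the on-line viscometer supplies [η] of the sample, from which the true molecular weight follows.

```python
# Minimal sketch of universal calibration (all numbers are hypothetical).
# At a given retention volume the universal calibration curve gives the
# hydrodynamic-volume product J = [eta] * M, established with any standard.
J = 5.0e7            # [eta]*M from the universal calibration curve [mL/mol]
iv_sample = 0.45     # intrinsic viscosity of the sample slice [dL/g]

# 1 dL/g = 100 mL/g, so convert before applying
# [eta]_std * M_std = [eta]_sample * M_sample  =>  M_sample = J / [eta]_sample
M_sample = J / (iv_sample * 100.0)
print(f"True molecular weight of the slice: {M_sample:,.0f} g/mol")
```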

By using a viscometer detector and universal calibration, a comprehensive characterisation of an unknown sample is possible. Besides the determination of the absolute molecular weight, further important parameters are measured, such as the intrinsic viscosity and the radius of gyration, and for both parameters the distribution over the whole sample is obtained.
By using the Mark-Houwink equation [η] = K·M^a and calculating the parameters a and K, further information is collected. For ideal coil conformations the a-value lies in the range of 0.6-0.8; for more compact structures (e.g. branched polymers, proteins) values below 0.5 are typical; stiff polymer chains give values in the range of 1-2. The theory of Zimm and Stockmayer [4] allows the degree of branching to be calculated by comparing the intrinsic viscosity of branched and linear polymers.
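To show how a and K can be obtained in practice, the sketch below fits the Mark-Houwink parameters on a log-log scale from per-slice intrinsic viscosity and molecular weight data. The numerical values are invented for illustration only.

```python
import numpy as np

# Hypothetical per-slice data: molecular weight [g/mol] and intrinsic
# viscosity [dL/g] as they would come from triple detection.
M = np.array([2e4, 5e4, 1e5, 3e5, 8e5])
iv = np.array([0.12, 0.23, 0.38, 0.82, 1.64])

# log[eta] = a*log(M) + log(K): a linear fit on the log-log scale
slope, intercept = np.polyfit(np.log10(M), np.log10(iv), 1)
a, K = slope, 10.0 ** intercept
print(f"Mark-Houwink exponent a = {a:.2f}, K = {K:.2e} dL/g")
# a ~ 0.6-0.8 indicates a flexible random coil; values below ~0.5 point to
# compact/branched structures, values in the range 1-2 to stiff chains.
```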

Light Scattering

Very often light scattering is used to determine the molecular weight in GPC. One big advantage is that the molecular weight is measured without the need for a calibration curve, because the light scattering signal is directly proportional to the molecular weight. The Rayleigh equation (1) relates the light scattered by the dissolved polymer molecules, expressed as the Rayleigh ratio Rθ, to the polymer concentration c and the weight-average molecular weight Mw [5].

K·c/Rθ = 1/(Mw·P(θ)) + 2·A2·c          (1)

K is an optical constant, A2 the second virial coefficient and P(θ) the particle scattering function.
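A minimal numerical sketch of equation (1), assuming dilute conditions and a detection angle low enough that P(θ) ≈ 1; all input values are hypothetical.

```python
# Sketch: solving K*c/R_theta = 1/(Mw*P(theta)) + 2*A2*c for Mw,
# assuming low-angle detection so that P(theta) ~ 1.
# All numbers are hypothetical example values.
K = 1.2e-7        # optical constant [mol*cm^2/g^2], depends on dn/dc and wavelength
c = 1.5e-3        # polymer concentration [g/mL]
R_theta = 4.0e-5  # excess Rayleigh ratio [1/cm]
A2 = 4.5e-4       # second virial coefficient [mol*mL/g^2]

Mw = 1.0 / (K * c / R_theta - 2.0 * A2 * c)
print(f"Weight-average molecular weight: {Mw:.0f} g/mol")
```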

For "small" macromolecules, isotropic scattering in all directions is observed, so P(θ) = 1 for all angles θ (figure 3). "Small" here refers to the size of the macromolecules compared with the wavelength of the laser light used: molecules with a diameter of less than 1/20 of the laser wavelength are "small". For a typical wavelength of 670 nm, all molecules with a diameter of less than 15 nm show no angular dependence. Linear synthetic polymers reach this limit when their molecular weight exceeds 150,000 g/mol; for branched molecules the molecular weight must be even higher.

Fig. 3: Angular dependence of Rayleigh light scattering. "Small" molecules (radius < 1/20 of the laser wavelength) scatter light without angular dependence, whereas larger molecules scatter less intensely at higher angles.

For larger macromolecules, the scattered light intensity towards higher angles is reduced, so P(θ) depends on the measuring angle. To obtain the correct molecular weight, the scattered light at θ = 0° would have to be measured, which is experimentally impossible because of the primary beam of the laser.

Several approaches exist to solve this dilemma; the most consistent is to measure the scattered light at an angle as close as possible to 0°, the only method that avoids extrapolation or correction of the measured values. This approach, called low angle light scattering (LALS), is complicated by the proximity of the primary beam and by reflections at the glass-air interface and was therefore rarely used in the past. Nowadays these problems are solved by a clever design of the sample cell and by blocking the primary beam with a tilted mirror. Even for the largest polymers accessible by gel permeation chromatography, the light scattered at 7° is only about 1 % less intense than the light scattered at 0°.

In general, a second parameter can be calculated from light scattering on polymers: the radius of gyration. This radius is calculated from the angular dependence of the scattered light. For molecules with a radius below 15 nm no angular dependence is observed, so for many polymer samples it is impossible to determine the radius of gyration by static light scattering.

Triple Detection

Triple detection means the combination of three detectors providing different information: a concentration detector (RI or UV), a light scattering detector (which measures molecular weight), and a viscosity detector (which is sensitive to the molecular density in solution). With these three detectors, the distributions of molecular weight, intrinsic viscosity and size (radius) of the polymers can be determined over the whole molecular weight range accessible by GPC.

Plotting the intrinsic viscosity against the molecular weight on a logarithmic scale gives the Mark-Houwink plot, log [η] = a·log M + log K. This is the central structure plot in polymer analysis: it reveals structural variations caused by branching as well as the coiling behaviour of the polymer chain and its stiffness. With this method the physical properties of the polymer sample are measured directly and independently of the elution volume, which makes the method robust against chromatographic conditions such as flow rate irregularities, peak broadening and column degradation.

There are two further advantages of triple detection. The first is the flexibility of the system: if one of the detectors cannot be used for a certain analysis, the others can still be used to measure the molecular weight precisely. Furthermore, comparing the different methods very often leads to a better understanding of the polymer structure.

Copolymer Analytics

The development and synthesis of ever more specialized polymers and their use in industrial and pharmaceutical applications increase the complexity of polymer analysis. Using two or more different monomers during the synthesis leads to the formation of copolymers. Homogeneous statistical copolymers can be treated in GPC as homopolymers. This is not possible for inhomogeneous copolymers in which the contents of monomers A and B vary over the molecular weight distribution.

With two concentration detectors (RI and UV) that respond differently to the comonomers A and B, the true concentration profile and the content of each monomer can be calculated. Equation 2 displays this dependency; KRI and KUV are instrument constants, dn/dc is the refractive index increment and dA/dc the UV absorption of monomers A and B at the wavelength λ:

RI signal = KRI · [cA·(dn/dc)A + cB·(dn/dc)B]
UV signal = KUV · [cA·(dA/dc)A + cB·(dA/dc)B]          (2)

Combining the two concentration detectors with a viscometer and a light scattering detector gives the distributions of all polymer-specific parameters as described for homopolymers. The same approach can be used for other two-component systems, e.g. protein/polymer complexes.
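A minimal sketch of how the two detector responses in equation (2) can be inverted slice by slice to recover the comonomer concentrations; all constants and signal values below are invented for illustration.

```python
import numpy as np

# Recover the comonomer concentrations c_A and c_B for one slice from the
# RI and UV signals by solving the two detector equations (2).
# All constants and signal values are hypothetical.
K_RI, K_UV = 1.0, 1.0              # instrument constants (assumed calibrated to 1)
dndc_A, dndc_B = 0.185, 0.070      # refractive index increments [mL/g]
dAdc_A, dAdc_B = 1.20, 0.05        # UV responses at wavelength lambda [mL/(g*cm)]
S_RI, S_UV = 0.260, 0.650          # measured detector signals for this slice

A = np.array([[K_RI * dndc_A, K_RI * dndc_B],
              [K_UV * dAdc_A, K_UV * dAdc_B]])
c_A, c_B = np.linalg.solve(A, np.array([S_RI, S_UV]))
print(f"c_A = {c_A:.3f} g/L, c_B = {c_B:.3f} g/L")
```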

Polystyrene

Linear polystyrene (PS) is commercially available with narrow and broad molecular weight distributions. Polystyrene samples in tetrahydrofuran are a very good example to show the responses of the three detectors.

Fig. 4: Triple chromatogram of two narrowly distributed polystyrene standards.

Figure 4 shows the triple chromatogram of a mixture of two narrow polystyrene standards (MW = 850,000 and 30,000 g/mol). The columns used give good separation and the peaks are baseline separated. The RI chromatogram shows that the PS with the lower molecular weight is about four times as concentrated as the high molecular weight PS. Despite this difference in concentration, the signals of the viscosity and light scattering detectors are much larger for the high molecular weight PS. The reason lies in the molecular weight dependence of the viscometer and light scattering signals.

Fig. 5: Triple chromatogram of a broad polystyrene: Mw = 254,000 g/mol, PDI = 2.5, IV = 0.843 dL/g.

Figure 5 shows the triple chromatogram of a broad (polydisperse) PS sample. The signals of the light scattering detector and the viscometer are shifted towards higher molecular weight (lower elution volume) compared to the RI signal. This shift is the result of the different responses of the detectors; the shift caused by the volume offset between the detectors has already been corrected by the software. The RI signal is independent of molecular weight, whereas the viscometer and the light scattering detector respond much more sensitively to high molecular weight material, leading to a much faster rise of their signals on the left side of the chromatogram. This apparent shift of the viscosity and light scattering signals relative to the RI detector is also a measure of the polydispersity of the sample.

Looking closely at the three curves, one observes that the shift of the light scattering signal is larger than that of the viscometer signal. The reason lies in the behaviour of the intrinsic viscosity. The light scattering signal is proportional to the molecular weight, but the viscosity grows according to the Mark-Houwink equation [η] = K·M^a with the power a. For linear polymers that form an ideal statistical coil in solution, the exponent a is smaller than one (e.g. 0.7 for PS in THF), so the viscosity signal grows more slowly than the light scattering signal. This effect is also observed for the narrow PS samples in figure 4. The curve of log M versus retention volume (figure 5) is linear and therefore indicates an ideal separation by the size exclusion mechanism.

Brominated Polystyrene

The performance of triple detection is demonstrated by comparing PS with brominated polystyrene (BrPS, used in flame retardants) of the same chain length. Bromination of PS substitutes hydrogen (1 amu) with bromine (80 amu). This increases the molecular weight whereas the size of the polymer coil is almost unaffected, as shown schematically in figure 6.

Fig. 6: Schematic picture of the bromination of polystyrene.

Comparing the chromatograms (figure 7) shows almost no change in the RI chromatogram; conventional calibration would therefore give almost the same molecular weight for both samples. The signal of the light scattering detector, however, is much larger, and a 2.5 times larger molecular weight is calculated (table 1). The reduced viscosity signal, as an independent detector, confirms the increased density of the brominated polymer.

Fig. 7: Comparison of the detector signals before and after the bromination.

Table 1: Molecular weight of polystyrene before and after bromination

           | PS      | BrPS
Mn [g/mol] | 122,000 | 295,000
Mw [g/mol] | 259,000 | 665,000
IV [dL/g]  | 0.865   | 0.320

Natural Macromolecules – Maltodextrin

Maltodextrins are produced by the enzymatic degradation of starch. They are used as food additives to improve rheological properties or as flavour enhancers. Depending on the starch used, the enzymes and the processing parameters, the resulting maltodextrins differ in molecular weight and degree of branching.

Fig. 8: Triple chromatogram of a maltodextrin sample. Mw = 468,000 g/mol, PDI = 3.48, IV = 0.117 dL/g.

Figure 8 shows the triple chromatogram of a maltodextrin sample. A high degree of branching in the high molecular weight range is already visible from the peak shapes: at low retention volume an intense LALS signal and a small viscosity signal are observed. This indicates a high molecular weight combined with a high density (low viscosity) and therefore a highly branched structure.

Fig. 9: Mark-Houwink plot of the maltodextrin sample. The dotted line shows the expected curve for a linear maltodextrin sample.

The increasing degree of branching can nicely be followed in the Mark-Houwink plot (figure 9). Linear polymer samples give a linear Mark-Houwink curve, from which the slope is calculated and information about the coiling properties is derived. For the maltodextrin sample a significant downward curvature is observed. This curvature reveals the inhomogeneous structure of the sample and shows that the number of side chains increases towards the high molecular weight region. Quantitative calculation of the number of branches and the branching frequency is possible. Other examples that have been successfully analysed by GPC include starch, cellulose, nitrocellulose, pectin, xanthan, heparin, hyaluronic acid, chitosan, pullulan, dextran, carrageenan, proteins, antibodies, RNA and DNA.
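As a hedged illustration of such a branching calculation, the sketch below estimates the number of branch points per molecule from the viscosity contraction factor g' = [η]branched/[η]linear using the Zimm-Stockmayer expression for trifunctional random branching [4]. The structure exponent ε and all numerical inputs are assumptions, not values taken from the sample discussed above.

```python
import numpy as np
from scipy.optimize import brentq

# Estimate branch points per molecule from the contraction factor
# g' = [eta]_branched / [eta]_linear (Zimm-Stockmayer, trifunctional
# random branching). All input values are hypothetical.
iv_branched = 0.12   # measured [eta] of the branched sample at a given M [dL/g]
iv_linear = 0.31     # [eta] of a linear chain of the same M (from its Mark-Houwink line)
epsilon = 0.75       # assumed structure factor relating g' and g (commonly 0.5-1.5)

g_prime = iv_branched / iv_linear
g = g_prime ** (1.0 / epsilon)

# Zimm-Stockmayer: g = [(1 + B/7)**0.5 + 4*B/(9*pi)]**(-0.5); solve for B
f = lambda B: ((1 + B / 7.0) ** 0.5 + 4.0 * B / (9.0 * np.pi)) ** -0.5 - g
branch_points = brentq(f, 1e-6, 1e4)
print(f"g' = {g_prime:.2f}, g = {g:.2f}, branch points per molecule ~ {branch_points:.1f}")
```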

Summary

The use of multiple detectors significantly enhances the information available from a GPC analysis. Triple detection is especially well suited for determining structural properties of polymer samples. Besides measuring the absolute molecular weight, it allows the determination of coil dimensions and other physical parameters that would otherwise have to be measured with additional instruments.

One of the biggest advantages of triple detection is that no elaborate calibration is necessary and that non-ideal conditions such as flow rate fluctuations, separation not purely by size exclusion or column degradation have no impact on the results. This is because the retention volume is not used for the calculation of molecular weight, intrinsic viscosity, coil dimensions and degree of branching.

Analysis of the results can be done on different levels. The raw triple chromatogram already allows qualitative information about the sample to be extracted, and average values of the physical parameters can be calculated in a simple way. The calculation of the distributions of molecular weight and intrinsic viscosity gives the most comprehensive information about the sample and its branching. Do you see the complete picture of your macromolecules?

Literature

[1] M. A. Haney, Principles of Triple Detection GPC/SEC: The Deflection Refractometer (RI), Laboratory Equipment, March 2003

[2] Z. Grubisic, P. Rempp, H. Benoit, A Universal Calibration for Gel Permeation Chromatography, J. Polym. Sci. B: Polym. Lett. 5, 753 (1967)

[3] M. A. Haney, The Differential Viscometer. II. On-line Viscosity Detector for Size-Exclusion Chromatography, J. Appl. Polym. Sci. 30, 3037 (1985)

[4] B. H. Zimm, W. H. Stockmayer, The Dimensions of Chain Molecules Containing Branches and Rings, J. Chem. Phys 17, 1301 (1949)

[5] S. Mori, H. G. Barth, Size Exclusion Chromatography, Chapter 8.1, Springer Verlag, Berlin (1999)


Wyeth 'learns and confirms' in India. Frost & Sullivan interview with Dr. Michael Kolb, Wyeth

Aparna Singh (AS), Program Manager with Frost & Sullivan’s Chemical Materials and Foods team, caught up with Dr. Michael Kolb (Dr. MK) at Frost & Sullivan’s Opportunities in Lifescience Molecules: Global Partnership Summit 2006. This annual summit was held at Goa (India) from 21st-23rd May, 2006.

 Dr. Kolb is currently the Vice President, Chemical Development at Wyeth Research. He joined Wyeth in 1996 to head their Chemical Development Business function. At Wyeth, he has streamlined the Chemical Development business with the objective to meet company pipeline goals. Dr. Kolb was born in Germany and obtained his PhD at the Justus Liebig University. He then joined the California Institute of Technology for a post-doctoral stay. He started his career in 1976 with Merrell-Toraude at Strasbourg, which later became a part of Sanofi-Aventis. He worked there for 13 years, engaged originally in basic research and then overseeing scale-up activities at the Center. In 1989, Dr. Kolb joined MMD’s Research Center in Cincinnati as head of Chemical Development. Then in 1995, he was asked to act as the Site Director for MMD Research Institute in Tucson.

AS: To begin with, since this would be of interest to the Indian market, can you tell us a little bit about Wyeth’s approach to offshoring? What are the company’s strategic plans here, in terms of key regions and the benefits that you are looking at?

Dr. MK: Wyeth has no API production facilities; everything that goes into end-of-phase-2 studies, phase 3 or commercialization is outsourced to companies which typically are in Europe and now, more and more, in the Asian region – India, Japan, South Korea and China. On the discovery side, we’ve just started a collaboration this year with GVK, and I believe that they will have about 70 chemists on board this year, with the goal of increasing the workforce to 150 chemists who will all work on discovery projects.
On the biostatistics side, with Accenture, we transfer data to India, where they are analyzed and the results sent back to the US.

AS: We would like to know more about Wyeth’s ‘Clinical Development Model’ - fundamentally, how is this ‘learn and confirm’ approach different from the conventional new drug creation process? What concrete steps is Wyeth taking to put this into action, and what are the likely benefits that will accrue?

Dr. MK: The concept – it’s not new at all, it’s over 10 years old – I remember it was published around 1980. The idea is that instead of going through a rigid process of phase I to phase II and then to phase III clinical trials, a more flexible and adaptable process of ‘learn and confirm’ is implemented. There is an early phase where we go into the clinic and learn about the compound, and then there is a later phase where we confirm the learning about the compound. There are no strict boundaries anymore between the individual clinical phases. Also, in the past, typically each compound had a project team, which was championing the specific compound. Competition for resources and decisions on the priority of their compound were issues. Now we have ‘learn and confirm’ teams, which don’t work on one single compound, but on all compounds falling into the therapeutic area of this team. So the team might have five compounds in CNS and can learn from all of them in order to move the best one forward.

AS: What about regulatory issues as far as this system is concerned?

Dr. MK: We presented this to the FDA and they liked it. They are only concerned that sufficient supporting data are available for a compound when we go to the NDA.

AS: Beyond contract manufacturing which are the main areas in the value chain that Wyeth has looked at for any contract services? How do you see this, going forward in the next 4-5 years?

Dr. MK: In the clinical domain, we’re now developing the concept of Early Clinical Development Center (ECDC). In the past, what we used to do is, once a compound went in for clinical trials, it used to be sent to one hospital, and, as an example, you might need 200 patients to do the study, who might be difficult to recruit in one hospital. Now that we have ECDCs all around the world, we may recruit 80 patients in India, 100 in China, and 20 in Europe, which eventually also gives us access to the needed 200 patients. This allows us to do the trials much faster because we don’t have to wait for recruiting all 200 patients at one hospital. An additional benefit is that with this system we obtain a broad genetic spread of patients and can ensure the desired mix of age groups and so on. Of course, one other benefit is the lower costs of doing trials in India, China or South America.

AS: What has been the company’s experience as far as IPR compliance in the Indian market is concerned? Do you see any significant changes in the business environment with India acceding to the new patent laws?

Dr. MK: I don’t think we ever had a problem, but then, I don’t think we ever put any sensitive IP in this area. I guess the attitude will change, but right now, a lot of people have the attitude of let’s wait and see!

AS: With outsourcing becoming a critical component for most pharmaceutical majors, has your company invested in any measurement systems designed to evaluate their cost-benefit?

Dr. MK: One area is cost. We routinely compare our own cost estimates for making an API with offers we obtain from our out-sourcing partners in Europe, Asia, etc. We maintain a database not just on cost, but also on items of on-time delivery, reliability, and quality for all our outsourced API activities. Again we measure all our suppliers on these parameters and see what their performance is.

AS: Which countries are currently the leaders in each segment – such as drug discovery services, clinical research and custom synthesis?

Dr. MK: Discovery until recently, we never outsourced, and now the first country we have come to is India. The way I look at it is that the talent is available and the cost is attractive. API manufacturing – India is strong, but IP issues still need to be resolved in some areas. However it really depends on which are the critical issues about your API synthesis. If cost is a big issue, one would go to India, or China, or South Korea to find a supplier. If cost is not that critical, at the end of the day, a chemist in Europe is as good as a chemist in India, but might have more experience in the specific technology you are looking for.

AS: Going forward, what do you see as the outlook in different contract services for the Indian market?

Dr. MK: For me, the biggest areas are API manufacturing and Drug discovery. Outsourcing of API manufacturing is already very active in India, but outsourcing of drug discovery will increase. The big driver, as I see it right now, is still cost, though this may change with time.


Outsourcing to India and China: Cultural difficulties with the perception of costs. Frost & Sullivan interview with Mr. Steve Fishwick, AstraZeneca

Aparna Singh (AS), Program Manager for the Chemicals Materials and Foods team at Frost & Sullivan India, recently caught up with Mr. Steve Fishwick (SF), Projects Group Director, AstraZeneca at the Frost & Sullivan’s Global Life Sciences Summit 2006 at Goa, India. He was one of the key speakers at this premium summit.

 Mr. Fishwick has held various positions in process chemistry, chemical production, and central operations in the 28 years that he has spent with ICI, Zeneca and AstraZeneca. For a major part of the last 15 years of his career he has been closely involved with outsourcing, initially as a Project Manager and now as Project Group Director. Since 2003 he also has the specific responsibility for developing AstraZeneca’s sourcing from India and China. In the interview below he gives insights into this sector.

AS: Holding responsibility for AstraZeneca’s outsourcing from India and China, can you tell us broadly about AstraZeneca’s plans for this region? How do you see India and China in your overall strategic plans? What are the areas you are looking at, and as a corollary to that, perhaps not looking at, at this point in time?

SF: My area of responsibility is entirely around the Chemicals, and the API outsourcing area. Within that scope, we are open-minded to talk about any of the types of outsourcing related to that, both here and in China. So we’re not excluding anything. At the moment, the reality has been that we’ve focused more on the older products, but that has been really a conduit into the Indian market rather than a strategy. And we’re quite prepared to look for intermediates for New Chemical Entities and for products under development. So the full range of types of chemical outsourcing is possible.

AS: How large a contributor are these areas to your overall outsourcing?

SF: I would say relatively small, but growing. India is ahead of China in our current position. We see potential in both to support our long-term outsourcing strategies. I don’t have exact figures, but I would be able to say that they contribute not more than 10 percent currently. It’s definitely going up, though I don’t have any numerical targets.

AS: Oncology and Cardiovascular drugs are both important areas of operation for AstraZeneca. Can you tell us a little more about the new developments in these areas that the company is working on?

SF: Yes, I can’t go into any specific developments in these areas as far as R&D is concerned, but what I can do is contrast these two therapy areas – if you look at Oncology products, I am generalizing here, but typically these are high potency, small volume – they have major challenges around handling the products very often. This is in contrast to cardiovascular drugs, which very often are in much higher volumes. So, the drive for AstraZeneca to come to India or China, from a cost perspective, is much higher for cardiovascular products than it would be for oncology products, because, the potential total spend on APIs and intermediates is much higher. At the moment, we are outsourcing more in the cardiovascular area than for any other therapeutic area, and I would imagine that this trend will continue. We’re not dismissing any of the others, but efficiencies from India and China are much higher when you have high volume products such as cardiovascular drugs.

AS: Does the company have in place any measurement systems to evaluate the cost benefit of outsourcing activities? What have been the company’s experiences in different areas?

SF: There are two aspects to this. Firstly we have a clear strategy, that we don’t manufacture raw materials, or starting materials. So any such raw material will be outsourced, that’s a given. Obviously we will look for the best value outsourcing, not just based on cost, but also on other criteria such as security, technology etc. For intermediates, we will look at the options that we have, either in-house or externally, and then we will build a business proposal, on a case-by-case basis. And that will depend upon the fit – If there is a good fit in any of our existing facilities, perhaps we may manufacture in-house, but also, we’ll do the financials on whether it makes more sense to outsource. So it’s very much a case by case basis.
If it’s an older established product, it would be very much financially driven; also, whether we need the capacity for something else, what’s the cost of outsourcing, long-term revenue-cost estimations etc. If it’s an early stage raw material for intermediates, we know we will outsource, we just need to decide where. Post supply, we have a number of performance indicators, which we use to evaluate all our suppliers, and not just cost, but delivery performance and many other areas. It’s a detailed measurement process.

AS: Could you tell us a little more about China: in the area of intermediates, how are you looking at Chinese companies as potential suppliers? Can you tell us more about the company’s plans for China, and any strategic alliances you are looking at in that market?

SF: We go to China with the same open mind that we come to India. Our experience has been that, at the moment, they are significantly less well developed, in terms of any GMP manufacturing for intermediates. Also, many of the companies in China tend to be very ‘product-type’ specific. So they may be antibiotic producers or steroid producers, in the same way as Indian companies started out. But here, I think companies have a broader technology base. So, our intentions in China are the same as in India – to find the best value proposition that we can; the reality at the moment is that the Chinese industry is somewhat behind the Indian industry in terms of sophistication. But, it’s fair to say as well, that it’s moving at an extremely fast pace.

AS: Within the intermediates space, are there a lot of custom services or customized intermediates happening, or does the market in India still largely comprise catalogue products?

SF: My group is entirely custom. Everything our group manages is made specifically for us, or at least perhaps capacity would have to be increased for us. Our projects usually involve some technology transfer of our own process, or at the least, some analytical methodology. So, our group doesn’t buy any commodities. To give you the dimensions, my group has roughly 100 projects, and 60-70 of these would be development projects.

AS: Which sphere within the contract services area do you think Indian players are excelling at? Which are the areas where you would say further work and improvement is required to compete in the global market?

SF: I am not sure technology is an issue – by and large, I think you have a pretty wide base of technologies available here. I think there is still perhaps a cultural difficulty, with open book costing. The way that we work with our preferred European suppliers is that, when we go out to ask for a proposal, we expect very high visibility in terms of costs, including margin. Cost is not always a deciding factor, but visibility of true cost is an important factor to us. And then we can balance the cost with other criteria. With Indian companies, we find that there is still an element of negotiation early on and perhaps pitching the initial offer higher than could actually be accepted. That gives us big problems, because effectively we eliminate the supplier before we get into any detailed discussions. For the very high turnover rate that we have for many of our projects, we haven’t got time to get into detailed negotiations early on. It’s different ways of doing business, I would think. We need to break down some of those ways of working, really. With the suppliers in India whom we have identified as partners, we run workshops to help them understand how we work, and vice-versa.

AS: In terms of scalability for moving from development to commercial scale, what has been your experience in India?

SF: Our experience is limited, but from the cases we’ve seen and the experiences we’ve had, there is no reason to believe that it should be easier or more difficult than doing it in Europe or anywhere else in the world. Indian companies have a lot of good scientists, and the ones that we choose to work with have the range of capacities we need, lab-scale, kilo-lab, pilot and commercial scale, so we would expect them to scale up as required, so we’ve no reason to be particularly worried in that area. We always work with a long-term perspective, for any project that we get into, that this would be a long-term supplier for commercial scale. Of course, many projects may never make it to that stage, but the intention is always to develop a long-term supplier.

AS: Since East European countries also offer a cost-competitive manufacturing environment, with perhaps some advantages of geographical proximity, do you see this region as a possible threat to Asian companies?

SF: Within AstraZeneca, I think it’s fair to say that we’ve done very little in Eastern Europe, for chemical intermediates or API sourcing, virtually nothing. I don’t know whether we are unique in that – we took a decision some years ago, that we would focus on India and China. To be blunt, there are plenty of suppliers available in these two territories, which means that we don’t need to go to Eastern Europe. In my opinion, there is no great advantage in going to, for example, Poland, as opposed to India. We’re just talking about an extra few hours on a plane! It doesn’t make any difference. What we’re looking for is an entire package – to my knowledge, costs in Eastern Europe are rising. We have no reason to believe that standards are much higher than India, so, it would just dilute our focus. We’ve only got a certain amount of business, and we’ve only got a certain number of people managing it, so we can’t do the entire world! So it’s not an area of the world where we have any experience, or any real plans to explore.


Gröger & Obst Analyzers for Simultaneous Measurement of TOC and VOC

Options for the Modification of TOC Standard Equipment

Dr.-Ing. Rolf Semsch

Significance of TOC and VOC

To determine the degree of organic pollution of waste water, labs prefer analytical test methods that yield useful results without being costly or demanding large-scale equipment. To achieve this end, labs employ methods that are sensitive to the oxidizability of all organic matter and record the so-called sum parameters (e.g. TC, TOC, DOC, TIC and VOC). These parameters are instrumental in water analysis, alongside the determination of solutes and physicochemical variables.

In recent years, continuous measuring systems (e.g. thermal catalytic oxidation systems) have been generally accepted and they have proven their efficiency compared with discontinuous systems. It is now possible to monitor the concentration gradient over an extended period of time and, by doing so, it is no longer necessary to rely on isolated results for a meaningful assessment of the quality of water.

In sewage engineering, “on-line” measuring systems for the determination of the TOC sum parameter, which reflects the level of organic pollution, have skyrocketed and are firmly established by now. For the assessment of drinking water, groundwater, surface water, leakage water from landfills and waste water containing organic matter, the total content of organic carbon (TOC) is one of the variables which is increasingly used to supplement or replace the chemical oxygen demand (COD). As a result of this situation, the supply of TOC-Analyzers on the market has increased (Fig. 1).

Fig. 1: TOC-Analysers GO-TOC 1000 (left), GO-TOC 100 P (centre) and GO-TOC P (right)
(Manufacturer: Gröger & Obst Vertriebs und Service GmbH).

The TC (Total Carbon: the sum of organic and inorganic bound carbon in dissolved and undissolved compounds) represents the total load of organic matter and is mainly composed of the following sum parameters:

• TIC (Total Inorganic Carbon: the sum of inorganic carbon in dissolved and undissolved compounds)
• TOC (Total Organic Carbon: the sum of organic carbon in dissolved and undissolved compounds)
• DOC (Dissolved Organic Carbon: the sum of organic carbon in dissolved compounds)
• VOC (Volatile Organic Carbon: the sum of volatile (blow off) compounds)

VOC is the generic term for substances containing organically bound carbon that is easily volatilized. Originally, all volatile organic compounds with a boiling point of up to 250 °C were classified as VOCs. Now a distinction is made between VOCs and SVOCs (Semi-Volatile Organic Compounds: less easily volatilized organic compounds, boiling point around 240-260 °C). SVOCs include, among other things, phthalates, higher fatty acids and the like.

According to the WHO definition, a VOC is an organic substance with a boiling point in the range of 60-250 °C. Compounds classified as VOCs include, for example, alkanes, alkenes, aromatic compounds (benzene etc.), terpenes, halogenated hydrocarbons, aldehydes and ketones. These easily volatilized organic compounds may escape into the air, where they are likely to become a health hazard. A number of compounds falling within the VOC definition have been classified as extremely toxic or even carcinogenic (above all benzene in gasoline).

VOCs contribute indirectly to the formation of ground-level ozone and other photo-oxidants, so reduction limits must be defined for such substances. The release of volatile organic compounds is also a problem in the treatment of boiler feed water, where it constitutes a potential health hazard (workplace, environment). Due to more stringent government requirements, recording of VOC levels has recently become more important. Continuous and fast determination of this critical sum parameter is of paramount importance because it allows the necessary precautions to be taken and prompt reaction in case of emergencies. Thus, methods for continuous VOC determination have gained recognition for applications where specified limit values have to be met. Additional VOC readings provide better insight into, and understanding of, the chemical and biological processes occurring in the monitored sampling lines.

Until now, fast and simple continuous determination of VOC concentrations was not possible: conventional test methods were costly and time consuming and, as a consequence, measurements were often neglected even though the organic pollution caused by volatile organic compounds constitutes a considerable health hazard.

The measuring principle developed by Gröger & Obst, based on a modification of the proven design for continuous TOC determination with thermal catalytic oxidation, meets the requirements of continuous VOC determination. Here, the volatile organic compounds are extracted with air that has been freed of gaseous carbon dioxide (soda lime) and fed into the reactor. There, oxidation of the easily volatilized carbon compounds is initiated at a temperature of about 850 °C and enhanced by catalytic conversion; the carbon content of these compounds is quantitatively converted into carbon dioxide. Subsequently, the carbon dioxide is separated from the water vapour (condenser) and measured using an infrared detector.

Unlike TOC analysis, no acid is used, so only the easily volatilized organic compounds are blown off. The inorganic carbon compounds (carbonates, hydrogen carbonates) remain in the aqueous phase and are removed from the system.

With this method, the components of the device are much less adversely affected by VOC measurements than by TOC measurements: no salt is added, lower concentrations are involved and no acid is added, so the service life of the individual components is extended considerably. This is reflected in extremely economical operating costs. The measuring technique is also extremely flexible and suitable for a wide range of applications.

The following continuous measuring methods are available:

• Consecutive measurement of TOC and VOC
• Simultaneous measurement of TOC and VOC (2nd thermal reactor and 2-channel detector, see Fig. 2)
• Two sampling flows TOC and VOC (2nd thermal reactor and 2-channel detector)

Fig. 2: Example showing two spectra obtained during simultaneous recording of TOC and VOC.

Comparative measurements conducted with conventional standardized test methods have confirmed the VOC results from thermal catalytic oxidation.

Modification of suitable test equipment

The scope of applications can be further extended e.g. to include recording of TIC and ultimately also of TC. As an extra benefit the TOC device on hand needs only a minor modification and is ready for VOC analysis. This makes the system extremely cost-effective.

When simultaneous measurements of multiple parameters are needed, only a second reactor and a 2-channel detector are required. For consecutive measurements, switching to the consecutive mode is just a matter of minutes.

The figure below (Fig. 3) shows a possible configuration of the equipment with a 2-channel detector and an additional reactor for simultaneous, continuous determination of VOC and TOC. Owing to the small footprint, it is possible to run 4 TOC analyzers on-line on a surface area of 1 square metre.


Fig. 3: TOC-Analyzer GO-TOC P with continuous VOC measurement option (Manufacturer: Gröger & Obst Vertriebs und Service GmbH).


It is above all the basic configuration of the GO-TOC P analyzer (manufacturer: Gröger & Obst) that makes it suitable for upgrading to simultaneous recording of TOC/VOC.

The configuration of the device is very flexible and it can be upgraded by adding a second oxidation reactor and a 2-channel detector. No major modifications of the basic equipment are needed. Moreover, with an alternative modification of the GO-TOC-P (manufacturer: Gröger & Obst) it is also possible to measure the VOC content of the air. Owing to the flexibility of the method, it is also possible to determine the TC contents in solids.

To sum it up!

All it takes to measure the sum parameter VOC is an appropriate test method and a suitable TOC analyzer. Simultaneous measurements of VOC and TOC are also possible; of course, the equipment has to be modified correspondingly for this purpose.

The GO-TOC P developed by Gröger & Obst is an ideal tool for these measurements and, as a result of the continuous and fast data output of crucial parameters, it is possible to act or rather react optimally in critical situations. Merits such as continuous recording of multiple parameters, enhanced quality (optimal process management and process control), high-quality safety features primarily in the area of automatic measuring and control engineering (e.g.: monitoring of valves and emergency cut-offs, etc.) are highly appreciated.

Analyzers working in continuous mode have clear advantages over gas chromatography (FID) because of their small footprint and because there are no hazardous waste problems. Moreover, there is no need for gas bottles or explosion-proofing of rooms.



Measurement of Nano particles and Proteins

Ulf Nobbmann, Biophysical Characterization, Malvern Instruments Ltd., UK
Renate Hessemann, Marketing Manager Europe, Malvern Instruments, Germany

Colloidal emulsions in their native environment require particle size determination at high concentration, since there are concerns about changes in sample morphology upon dilution, e.g. break-up of aggregates. For proteins and other samples at very low concentration, and for weakly scattering samples, high sensitivity is required.
New technologies offer both.

Dynamic Light Scattering (DLS) is a powerful technique for determining the size of sub-micron particles. Conventional instrumentation is limited in terms of the maximum concentration of samples that can be analysed because of multiple scattering effects. Non-invasive back scatter (NIBS) technology not only increases the concentration limits at which DLS can be successfully applied, but also increases the sensitivity of the technique.

Dynamic light scattering
Dynamic light scattering (DLS) is a non-invasive technique for measuring particle size, typically in the sub-micron size range. Particles in suspension undergo random Brownian motion. If these particles are illuminated with a laser beam, the laser light is scattered. The intensity of the scattered light detected at a particular angle fluctuates at a rate that is dependent upon the particle diffusion speed, which in turn is governed by particle size. Particle size data can therefore be generated from an analysis of the fluctuations in scattered light intensity.
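The conversion from diffusion speed to size rests on the Stokes-Einstein relation, which underlies DLS sizing in general. The short sketch below shows the calculation for one hypothetical measurement, assuming water at 25 °C as the dispersant.

```python
import math

# Convert a measured translational diffusion coefficient into a
# hydrodynamic diameter via the Stokes-Einstein relation.
# The numerical inputs are hypothetical example values.
k_B = 1.380649e-23   # Boltzmann constant [J/K]
T = 298.15           # temperature [K]
eta = 0.8872e-3      # viscosity of water at 25 C [Pa*s]
D = 4.0e-11          # measured diffusion coefficient [m^2/s]

d_H = k_B * T / (3.0 * math.pi * eta * D)   # hydrodynamic diameter [m]
print(f"Hydrodynamic diameter: {d_H * 1e9:.1f} nm")
```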

Concentration limits of DLS
In the past 20 years, DLS has become a routine tool for the measurement of particles less than a micron. However, conventional DLS has its limitations. The sample concentration must be high enough to ensure an adequate signal, but the risk of erroneous results due to multiple scattering (light scattered by one particle undergoing scattering by another) means there are restrictions on the concentration range for which measurements are valid.

NIBS extends the concentration limits
The use of patented non-invasive backscatter technology has overcome these limitations. NIBS is a dynamic light scattering technique incorporating an optical configuration that maximizes the detection of scattered light while maintaining signal quality. This provides the high sensitivity needed for measurement of the size and molecular weight of molecules smaller than 1000 Da. It also enables measurement at extremely high concentrations. The use of backscattering, rather than the more typical detection of scattered light at a 90° angle, improves the sensitivity and at the same time ensures the smallest possible interference from multiple scattering. Previous backscattering techniques have suffered from drawbacks that include the need for close contact between sample and detector optics, necessitating frequent cleaning of both the measurement cell and the detector. Because NIBS is a non-contact technique, cleaning is not necessary.

The range of sample concentrations that can be analyzed successfully is extended by changing the measurement position within the cuvette. This is achieved by moving the focussing lens. For small particles, or samples at low concentrations, it is beneficial to maximize the amount of scattering from the sample and hence a measurement position towards the centre of the cuvette is most effective. Large particles, or samples at high concentrations, scatter much more light and therefore measuring closer to the cuvette wall is preferable as this reduces the chance of multiple scattering.

DLS for protein characterization
One of the most time-consuming steps in protein structure determination is still the actual crystallisation: the search for the conditions under which the protein under investigation will crystallize. As the screening involves a plethora of buffer conditions and protein amounts are often limited, it has become widespread practice to check the suitability of the starting material. A simple light scattering experiment reveals the size and the polydispersity (the non-homogeneity) of the starting sample. The measurement is quick and requires only small volumes, with the intact sample remaining available for further analysis.

What information does one get out of the technique?
The size itself can be linked to the molecular weight of the protein. While there may be unusual shape effects, many proteins behave like relatively globular molecules, and these may be expected to follow a Mark-Houwink-type relation: the measured particle size is related to the molecular weight through a power law. When encountering an ‘unknown’ protein, it is a simple matter of comparing its measured size with the size expected from the estimated molecular weight. Thus, the size can predict the oligomeric state of the protein in solution. As the measured size reflects the molecule as it is present in the sample under the current conditions, it provides insight into the actual oligomeric configuration. This, however, requires reasonable data quality; very polydisperse samples are not suitable for such advanced data interpretation.
The polydispersity is the width of the size distribution. When many different particle species are present in the measuring volume, the width of the size distribution will be larger than when all particles are of the same size.

In real life, there seems to be a “natural polydispersity” due to constant interchange with solvent layer molecules and some molecular flexibility. However, when different particle species such as dimers, trimers and higher oligomers are present, the polydispersity is markedly higher than for monodisperse, monomeric solutions.

The relative polydispersity, expressed in percent as the half-width of the peak divided by the peak mean of the particle size distribution, can vary from a few percent to a hundred percent. Many proteins show polydispersities below 20% for single species, 20-30% for oligo-species (monomer-dimer, or monomer-tetramer), and above 30% when forming a wide range of different oligomeric states in the buffer in question.
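A minimal sketch of the %Pd calculation and the rule-of-thumb classification quoted above; the peak values are hypothetical.

```python
# Relative polydispersity (%Pd) from a DLS size peak, using the
# half-width-over-mean definition quoted above (hypothetical numbers).
peak_mean_nm = 6.2        # peak mean hydrodynamic diameter [nm]
peak_half_width_nm = 1.1  # half-width of the size peak [nm]

pd_percent = 100.0 * peak_half_width_nm / peak_mean_nm
if pd_percent < 20:
    verdict = "likely a single species"
elif pd_percent <= 30:
    verdict = "likely an oligomeric mixture (e.g. monomer-dimer)"
else:
    verdict = "broad mixture of oligomeric states"
print(f"%Pd = {pd_percent:.0f}% -> {verdict}")
```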

If the question is solubility at different buffer compositions, then light scattering provides the fastest answer. It delivers the particle size (which gives an estimate of the oligomeric state of the protein) and the polydispersity (which shows the homogeneity of the distribution in solution). The ease and speed of the technique has shown advantages in protein crystallization, stability analysis, thermal properties, degradation and self-assembly – in short, characterization under solution conditions.

Zetasizer Nano - optimized for Protein characterization
Malvern Instruments’ Zetasizer Nano S combined static and dynamic light scattering instrument is optimized for the characterization of proteins in solution prior to crystallization.
This compact, easy-to-use system is designed for the rapid delivery of accurate and extensive information that can assist both in the screening of appropriate conditions for protein crystallization and in determining the likelihood of crystals being suitable for structure determination. Not only does the Zetasizer Nano automatically optimize all instrument settings for each sample, but custom data reports and graphical data presentations make interpretation easier than ever before.


The Zetasizer Nano enables researchers to detect and quantify aggregation, determine the second virial coefficient to find the “crystallisation sweet spot”, and quantify sample polydispersity to increase the likelihood of successful crystallization.

In addition it enables users to study the effect of temperature on monodispersity, and offers the ability to automate temperature studies, including melting point and thermal denaturation determinations.

It also allows estimation of prolate and oblate axial ratios and Perrin factor, as well as measurement of hydrodynamic diameter and absolute molecular weight.

Size measurements of proteins as small as 0.6 nm and 400 Da can be made in their native environments. As little as 12 microliters of sample is required and the sample is recoverable.

Making the connection - particle size, size distribution and rheology

Jamie Fletcher, Applications Specialist
Adrian Hill, Rheometry Technical Specialist
Malvern Instruments Ltd, Enigma Business Park, Grovewood Road, Malvern, Worcestershire, UK, WR14 1XZ

A number of factors influence the rheology of a suspension, including particle size, particle size distribution, and the volume fraction of solids present. Here we examine the relationship between rheology and particle size parameters. Commonly used rheological terms are described and we present data from example systems to illustrate key points.

Terminology

Viscosity (‘thickness’) is the term that describes resistance to flow. High viscosity liquids are relatively immobile when subjected to shear (a force applied to make them move), whereas low viscosity fluids flow relatively easily. Measurement of viscosity, and other rheological properties, can be made using either capillary or rotational rheometers, the choice of system depending on the properties of the material being tested and the data required.

‘Shear rate’ defines the speed with which a material is deformed. In some processes (spraying, for example), materials are subjected to high shear rates (>10^5 s^-1); in others (such as pumping or levelling), the associated shear rate is low (10^-1 – 10^1 s^-1). High shear rates tend to occur when a material is being forced rapidly through a narrow gap.

If viscosity remains constant as shear rate increases, a fluid is described as being Newtonian. Non-Newtonian fluids, which fail to exhibit this behaviour, fall into one of two categories – shear thinning or shear thickening. With shear thinning materials viscosity decreases as shear rate increases: application of shear leads to a breakdown of the material’s structure so that it flows more readily. Most fluids and semi-solids fall into this group. Conversely, the viscosity of shear thickening materials increases at rising shear rates.
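One common way to express these three regimes numerically is the power-law (Ostwald-de Waele) model, in which apparent viscosity varies as a power of shear rate. This model is not discussed in the article itself; the sketch below is a minimal illustration with assumed parameter values, where a flow index n of 1 gives Newtonian behaviour, n < 1 shear thinning and n > 1 shear thickening.

def apparent_viscosity(shear_rate: float, consistency_k: float, flow_index_n: float) -> float:
    """Power-law (Ostwald-de Waele) model: eta = K * shear_rate**(n - 1)."""
    return consistency_k * shear_rate ** (flow_index_n - 1.0)

# Illustrative comparison at low and high shear rates (parameters are assumed):
for n, label in [(1.0, "Newtonian"), (0.6, "shear thinning"), (1.3, "shear thickening")]:
    low = apparent_viscosity(1.0, 2.0, n)
    high = apparent_viscosity(1000.0, 2.0, n)
    print(f"{label:16s} eta(1 s^-1) = {low:.2f} Pa.s   eta(1000 s^-1) = {high:.2f} Pa.s")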

With regard to suspensions, the volume fraction of solids and the maximum volume fraction can also be influential. The maximum volume fraction (the highest loading of particles that can be added to a fluid) can be thought of as setting the amount of free space the particles have in which to move around; its implications for viscosity are discussed below.

Effect of particle size

Maintaining a constant mass of particles in a suspension while reducing the particle size of the solid phase leads to an increase in the number of particles in the system. The effect of this change on the viscosity of the system across a range of shear rates is shown in figure 1. These data are for latex particles in a pressure-sensitive adhesive and the shape of the graph indicates:

  • the fluid is shear thinning (viscosity decreases at higher shear rates)
  • viscosity tends to be greater with smaller particles

Fig. 1: The impact of particle size on viscosity.

A higher number of smaller particles results in more particle-particle interactions and an increased resistance to flow. Clearly as shear rate increases, this effect becomes less marked, suggesting that any particle-particle interactions are relatively weak and broken down at high shear rates.

Figure 2 shows data for a talc/epoxy system. In the absence of talc, the epoxy system is Newtonian; adding coarse talc leads to an increase in viscosity, but the system remains Newtonian. The addition of finer talc results in a further, more significant, increase in viscosity, particularly at low shear rates. Colloidal repulsion between a relatively large number of particles gives structure to the fluid, increasing resistance to flow. As in the previous example, this relatively weak structure is broken down at high shear rates: the fluid has become shear thinning.


Fig. 2: The impact of particle size on flow behavior.

Volume fraction

The effects of volume fraction and maximum volume fraction on viscosity are described using the Krieger-Dougherty equation:

η = η_medium · (1 - φ/φ_m)^(-[η]·φ_m)

where η is the viscosity of the suspension, η_medium is the viscosity of the base medium, φ is the volume fraction of solids in the suspension, φ_m is the maximum volume fraction of solids in the suspension and [η] is the intrinsic viscosity, which is 2.5 for rigid spheres.

This correlation indicates an increase in viscosity with increasing volume fraction. As the volume fraction of solids in the system goes up: the particles become more closely packed together; it becomes more difficult for them to move freely; particle-particle interactions increase; and resistance to flow (viscosity) rises. As the volume fraction nears maximum for the sample, viscosity rises very steeply.
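The sketch below implements the Krieger-Dougherty equation exactly as written above, with the intrinsic viscosity fixed at 2.5 for rigid spheres; the medium viscosity, volume fractions and maximum volume fraction in the example are illustrative values only.

def krieger_dougherty(eta_medium: float, phi: float, phi_max: float,
                      intrinsic_viscosity: float = 2.5) -> float:
    """Suspension viscosity: eta = eta_medium * (1 - phi/phi_max)**(-[eta] * phi_max)."""
    if not 0.0 <= phi < phi_max:
        raise ValueError("phi must lie between 0 and phi_max")
    return eta_medium * (1.0 - phi / phi_max) ** (-intrinsic_viscosity * phi_max)

# Example: a 1 mPa.s medium with phi_max = 0.62; note the steep rise in
# viscosity as the volume fraction approaches the maximum.
for phi in (0.1, 0.3, 0.5, 0.6):
    print(f"phi = {phi:.1f}  eta = {krieger_dougherty(1.0, phi, 0.62):.1f} mPa.s")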

As well as influencing the absolute value of viscosity, volume fraction also affects the nature of the relationship between shear rate and viscosity for the system - flow behaviour. Suspensions with relatively low volume fraction tend to behave as Newtonian fluids, with viscosity independent of shear rate. Increasing volume fraction leads to shear-thinning behaviour. The transition is illustrated in Figure 3 for a latex/pressure-sensitive adhesive system.


Fig. 3: Viscosity as a function of shear rate for different volume fractions.

At the lowest volume fraction the system is almost Newtonian. As volume fraction increases, shear-thinning behaviour becomes evident. Increased volume fraction results in more particle-particle interaction, and resistance to flow increases. The forces between particles are, however, broken down at high shear rates.

A further transition in flow behaviour occurs as the volume fraction rises above roughly 50% of the maximum volume fraction. At these solids loadings the free movement of particles is significantly hindered: collisions between particles increase and the system becomes more congested. As the shear rate increases, the particles are forced to move past one another more rapidly and the effect becomes more pronounced. Viscosity therefore increases with shear rate; the system is shear thickening at very high shear rates.

Distribution

Particle size distribution (PSD) influences particle packing: a polydisperse population with a broad size distribution packs more closely than a monodisperse sample. The effects on viscosity can be explained with reference to the Krieger-Dougherty equation (see above). For a monodisperse sample the maximum volume fraction is around 62%. With a polydisperse sample, smaller particles can fill the gaps between larger ones and the maximum volume fraction is greater - around 74%. Broadening the particle size distribution at any given volume fraction of solids will therefore reduce the viscosity of the system, making PSD a valuable tool for manipulating the viscosity of a system with a fixed volume fraction.
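The packing argument can be made concrete with the Krieger-Dougherty relation above: holding the volume fraction fixed while raising the maximum volume fraction from about 62% (monodisperse) to about 74% (broad distribution) lowers the predicted viscosity. The short sketch below repeats the calculation in relative terms; the 50% solids loading is an illustrative value.

def relative_viscosity(phi: float, phi_max: float, intrinsic_viscosity: float = 2.5) -> float:
    """Krieger-Dougherty viscosity relative to the medium."""
    return (1.0 - phi / phi_max) ** (-intrinsic_viscosity * phi_max)

phi = 0.50  # illustrative fixed solids loading
print(f"narrow PSD (phi_max = 0.62): {relative_viscosity(phi, 0.62):.1f}x medium viscosity")
print(f"broad PSD  (phi_max = 0.74): {relative_viscosity(phi, 0.74):.1f}x medium viscosity")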

Viscosity as a function of the fraction of large or small talc particles in an epoxy is shown in figure 4. In this example a synergistic effect is seen when particles of both sizes are present at a certain ratio: the resulting viscosity is lower than that achieved using either size of talc on its own.


Fig. 4: Viscosity as a function of polydispersity.

These results show how particle size distribution can be used to manipulate viscosity. If the requirement is for a higher solids loading but the same viscosity, then this can be achieved by broadening the particle size distribution. Conversely, viscosity can be increased by using particles with a narrower size distribution.

In conclusion

It is clear that particle size and size distribution data can be valuable when developing products with specific rheological properties. Clear relationships between viscosity and particle size, particle size distribution and volume fraction allow key physical parameters of the suspension to be tuned to meet product specifications.


Particle shape - an important parameter in pharmaceutical manufacturing

Dr Deborah Huck, Application Specialist Vision Systems
Malvern Instruments Ltd, Enigma Business Park, Grovewood Road, Malvern, Worcestershire, UK, WR14 1XZ

The advent of rapid and reliable measurement technologies, together with the FDA’s PAT (Process Analytical Technologies) initiative, has increased the use of particle shape analysis within the pharmaceutical industry. Particle shape, like particle size which is routinely measured and controlled, can directly influence product performance and its measurement can lead to improved process and product understanding. Here we consider the importance of particle shape measurement for the pharmaceutical industry, with reference to the aims of the PAT initiative, and highlight the modern image analysis techniques available for sensitive size and shape characterization.

Why measure particle shape?

Often, manufacturers producing a particulate product need to identify and understand the differences between batches, either for product development reasons or for quality control purposes. For some applications particle size analysis generates enough data for sample differences to be fully rationalized, but for applications where samples are very close in size, measurement of subtle variations in shape may be necessary.

Figure 1 shows two different samples. Their particle size distributions could be reported as identical, yet the particles are clearly not the same. It is likely that these two materials would behave differently during processing, or in their final product form; for example, their flow and abrasion characteristics would be dramatically different. Particle size data alone would not allow differentiation between them.

Fig. 1: Two different samples could be reported as identical using a size-only distribution.

PAT

The FDA’s PAT initiative, an effort to improve cGMP by providing a regulatory framework for the introduction of new manufacturing technologies for the pharmaceutical industry, is ultimately designed to improve process control in the sector. Improved process control delivers greater efficiency, less waste and lower production costs. It will therefore allow the industry to respond more effectively to environmental and economic challenges.

Currently, many manufacturing operations are based on time-defined endpoints; for example ‘blend for 10 minutes’ or ‘mill for 1 hour’. The spirit of the PAT initiative is to move away from this approach, to one where endpoint is defined in relation to a property that is closely linked to product quality - granule size, morphic form or blend uniformity for example. Material with the desired properties is then produced more consistently and waste is minimized. This approach requires identification of an appropriate variable, with effective monitoring and control of the selected parameter.

Particle characterization using image analysis

Particle shape and size data can be generated using automated image analysis techniques, complementing both microscopy and laser diffraction for particle characterization. In contrast to manual microscopy, image analysis generates statistically relevant data with no subjective bias, allowing shape, and its effects, to be studied systematically. Image analysis generates number-based distributions and is therefore extremely sensitive to the presence of fines or small numbers of foreign particles. In addition, individual particle images are recorded, allowing visual detection and verification of agglomerates or contaminants.

Image analysis procedures involve the capture of images using transmitted or reflected light, a lens system and a CCD. Movement between the sample and the magnification lens allows scanning of a large number of particles for the production of statistically relevant data; typically several thousand particles are measured per minute. Multiple shape parameters are calculated for each individual particle and collated into distributions with all the associated distribution parameters.

Particle orientation

Particle orientation is critically important for effective characterization of particle shape by image analysis. Figure 2, which shows an analysis of a sample of monodisperse needle-shaped particles, clearly illustrates the problem associated with random orientation. The shape and particle size data produced show a polydisperse sample, and the bank of images illustrates why: the camera and software are seeing a selection of different 2D views of similar particles, so the random orientation is hiding the genuine primary morphology of the sample.

Fig. 2: Shape analysis of monodisperse needle shaped particles.

Consistent orientation is critical for the identification of real morphological differences. Particles may be presented showing their largest surface area, their smallest surface area or something in between. Which area is analyzed is less important than the consistency of presentation. However, as the largest area orientation is more closely correlated with surface area and volume-based data - and easier to achieve - this approach tends to be adopted.

Defining particle shape

Various different aspects of particle shape are of interest and a range of descriptors has been devised to allow particle shape to be quantifiably described. No single shape descriptor is suitable for all applications. The following three parameters, which are all normalised (defined to have values lying in the range 0 – 1) are frequently used to quantify different aspects of particle shape.

Elongation

Elongation provides an indication of the length/width ratio of the particle and is defined as (1-[width/length]). Shapes symmetrical in all axes, such as circles or squares, will tend to have an elongation close to 0 whereas needle-shaped particles will have values closer to 1. Elongation is more an indication of overall form than surface roughness (see figure 3) - a smooth ellipse has a similar elongation to a ‘spiky’ ellipse of similar aspect ratio.

Fig. 3: Elongation.

Convexity

Convexity is a measurement of the surface roughness of a particle and is calculated by dividing the particle area by a ‘total area’, best visualized as the area enclosed by an imaginary elastic band placed around the particle. A smooth shape, regardless of form, has a convexity of 1 while a very ‘spiky’ or irregular object has a convexity closer to 0 (see figure 4).

Fig. 4: Convexity.

Circularity

Circularity is a measurement of the ratio of the perimeter of a circle with the same area as the particle to the actual perimeter of the particle. A perfect circle has a circularity of 1, while a very ‘spiky’ or irregular object has a circularity closer to 0. Intuitively, circularity is a measure of irregularity, or the difference from a perfect circle. Figure 5 shows how circularity is sensitive to both overall form (like elongation) and surface roughness (like convexity). This shape factor is particularly useful for applications where perfectly spherical particles are the desired end product.

Fig. 5: Circularity.

A further parameter frequently used in particle characterization is:

Circle equivalent diameter

Circle equivalent diameter is calculated by measuring the area of a 2D image of a particle and back-calculating the diameter of a circle with the same area. It is one of many equivalent values used to define particle size and is calculated easily from image analysis data. The calculated value depends upon which 2D view is captured and hence may not be directly comparable with results from other particle sizing techniques, particularly if the particles are not spherical.
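A minimal sketch of how these descriptors could be computed from a particle outline is shown below. It assumes the outline is available as ordered 2D polygon coordinates, follows the definitions given in this article (using the bounding box as a simple stand-in for particle length and width), and is an illustration rather than Malvern's own implementation.

import numpy as np
from scipy.spatial import ConvexHull

def polygon_area(pts: np.ndarray) -> float:
    """Shoelace formula for a closed 2D polygon with ordered vertices."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def perimeter(pts: np.ndarray) -> float:
    return float(np.sum(np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1)))

def shape_descriptors(pts: np.ndarray) -> dict:
    area, perim = polygon_area(pts), perimeter(pts)
    hull = ConvexHull(pts)                      # hull.volume is the hull area in 2D
    extents = np.ptp(pts, axis=0)               # bounding-box stand-in for length/width
    length, width = float(max(extents)), float(min(extents))
    return {
        "ce_diameter": 2.0 * np.sqrt(area / np.pi),          # diameter of a circle of equal area
        "elongation": 1.0 - width / length,                   # 1 - (width / length)
        "convexity": area / hull.volume,                      # particle area / 'elastic band' area
        "circularity": 2.0 * np.sqrt(np.pi * area) / perim,   # equal-area circle perimeter / actual perimeter
    }

# Example: a 2 x 1 rectangle gives elongation 0.5, convexity 1.0 and circularity below 1.
rectangle = np.array([[0, 0], [2, 0], [2, 1], [0, 1]], dtype=float)
print(shape_descriptors(rectangle))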

Practical example of sensitivity to shape

The following example illustrates the sensitivity of one pharmaceutical process to particle shape. One of four batches of a pharmaceutical excipient was consistently failing at the tabletting stage of a manufacturing process. This was proving highly expensive, since tabletting comes at the very end of the manufacturing process, by which point all of the value has been locked into the product.

The tablet producer wanted some way of identifying the failed batch much earlier – ideally as a raw material. Traditional microscopy or ensemble sizing methods could not distinguish between the four batches being used.

Automated image analysis was used to evaluate the average convexity of each of the four batches. Convexity is a measure of the surface roughness or ‘spikiness’ of the particle surface, and the failed batch was found to consistently exhibit a lower average convexity than the three good batches (Figure 6).

Fig. 6.

In conclusion

The need for higher quality, higher sensitivity analytical techniques to increase process understanding within the pharmaceutical industry has been highlighted through the PAT initiative. Image analysis, an increasingly accessible option thanks to advances in PC processing power and digital camera technology, is particularly suited to analyzing size and shape and is a valuable tool for the sector. With particle shape and size data readily available it becomes possible to more effectively define process end-point and rationalize differences in the behaviour of different batches. In this way image analysis technology is delivering significant improvements in both process efficiency and product quality.


 