Reply to Dr. Dominique Francois-Bongarcon

Year: 2005

Source: Explore - Association of Exploration Geochemists Newsletter, Volume 127, p.19-23 (2005)

Authors: Barry W. Smee and Clifford R. Stanley

Dr. Francois-Bongarcon takes issue with a number of points in our EXPLORE contribution entitled "Sample Preparation of 'Nuggety' Samples: Dispelling Some Myths about Sample Size and Sampling Errors". We disagree with much of what he says and discuss his principal points below.

1. Dr. Francois-Bongarcon first suggests that Poisson statistics cannot be used to model sampling error in gold ores because the gold is typically not liberated. This comment ignores the work of Clifton et al. (1969), which has long formed the basis of sampling protocols for rare grains in applied geochemistry. This U.S.G.S. Professional Paper details how the effective nugget size and the effective number of nuggets can be calculated from the relative error of replicate samples of a given size, and how these can be used to estimate the relative error of samples of different sizes. Clifton et al.'s (1969) approach makes no attempt to mimic exactly the non-ideal characteristics of real samples (such as incomplete liberation, or variable grain size and shape); rather, it employs an ideal 'equant grain model' that exhibits exactly the same variance structure as the material under examination. Using this Poisson-based model, predictions regarding the magnitude of sampling error can be made for samples of different size. A follow-up publication, Stanley (1998), describes freeware, available for download from http://ace.acadiau.ca/~cstanley/software.html, that undertakes these calculations. These papers present compelling evidence, both theoretical and empirical, that the sampling error in material containing nuggets is essentially controlled by the largest nuggets (with examples from a range of deposit types: gold and platinum deposits, diamondiferous kimberlites, and rare earth element-bearing pegmatites). They illustrate that the size distribution of nuggets in the sample is generally of subordinate concern when considering sampling error; rather, it is the largest nuggets that exert the most influence on the magnitude of the sampling error.

Instead of a Poisson model, Dr. Francois-Bongarcon suggests that the well-known sampling formula of Pierre Gy (1982) should be used, because he claims that it, unlike Poisson statistics, accounts for the grain size distribution. If one examines Gy's formula in detail, one finds that it employs a parameter called the 'granulometric factor' (g), which is supposed to accommodate the effect of a range of grain sizes on sampling error. This factor is merely the ratio of the diameter of the largest (nominal) spherical grain in a particulate material that fits through a given mesh to the diameter of the average grain in the material, raised to the power of three. As a result, it merely converts a grain volume described in terms of a mesh that passes a certain percentage of particles into the average particle volume of the material. In fact, this 'average grain size' is equivalent to the 'effective grain size' originally described and used by Clifton et al. (1969) in their model. Clifton et al. (1969) calculate this effective grain size with the formula:

d = \left( \frac{1}{m_t} \sum_{h=1}^{p} m_h d_h^3 \right)^{1/3}

This formula determines the volume-weighted mean grain diameter of a sample, where the m_h are the masses of each of the p grain size classes into which a particulate sample has been sieved, the d_h are the average grain diameters of these classes, m_t is the total mass of the sample, and d is the 'effective grain size' or 'average grain size' of the nuggets.
This formula is used in the Poisson-based approach of Clifton et al. (1969) and Stanley (1998) to model sampling error in materials with rare grains. Clearly, both Gy’s (1982) formula and the Poisson-based approach of Clifton et al. (1969) accommodate, in exactly the same way, the grain size distribution of the material.
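For readers who wish to see the arithmetic behind the equant grain model, a minimal sketch follows. It is not the NUGGET freeware of Stanley (1998); the spherical grain shape, gold density, sieve data, grade and sample masses are all hypothetical illustration values.

```python
import math

RHO_AU = 19.3  # density of native gold in g/cm^3 (an assumed, approximate value)

def effective_grain_diameter(class_masses_g, class_diameters_cm):
    """Effective (volume-weighted) grain diameter from sieve data:
    d = (sum(m_h * d_h^3) / m_t) ** (1/3)."""
    m_t = sum(class_masses_g)
    return (sum(m * d ** 3 for m, d in zip(class_masses_g, class_diameters_cm)) / m_t) ** (1.0 / 3.0)

def poisson_relative_error(sample_mass_g, grade_ppm, d_eff_cm, rho=RHO_AU):
    """Relative standard deviation, 1/sqrt(n), where n is the expected number of
    effective (equant, spherical) gold grains contained in the sample."""
    grain_mass_g = rho * (math.pi / 6.0) * d_eff_cm ** 3   # mass of one effective grain
    gold_mass_g = sample_mass_g * grade_ppm * 1e-6         # expected mass of gold in the sample
    n = gold_mass_g / grain_mass_g                         # expected number of grains
    return 1.0 / math.sqrt(n)

# Hypothetical sieve data for the gold grains: masses (g) and mean diameters (cm) per class.
d_eff = effective_grain_diameter([0.010, 0.004, 0.001], [0.005, 0.010, 0.020])
for mass_g in (30.0, 500.0, 3000.0):                       # 30 g, 500 g and 3 kg splits
    print(mass_g, poisson_relative_error(mass_g, grade_ppm=2.0, d_eff_cm=d_eff))
```

The essential behaviour is that the relative error scales as one over the square root of the expected number of effective nuggets in the sample, so it is the largest grains, through the effective grain size, that dominate the result.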

However, Gy's formula is based on, and derivable from, the binomial theorem. Consequently, Gy's formula does not apply to samples containing very low concentrations of elements hosted in rare grains (e.g., Au, PGE, diamonds, etc.), where a Poisson relation is applicable. Our avoidance of referencing Gy stems directly from the fact that we consider samples containing nuggets to be a scenario that is inconsistent with Gy's approach.

2. Dr. Francois-Bongarcon also claims that core splitting is not a sampling operation. Whereas some might consider this a semantic argument, it is not. If a whole material (the core) is divided into halves and one part is collected, it is sampled. The core halves could be obtained by dividing a vertically oriented core along N-S or E-W (or any other cut or broken) planes, or by dividing the core into 1 cm thick, vertically stacked cylinders and collecting every other piece in the sample. Some of these formats may produce better samples than others, but in all cases the core is sampled. One could just as easily collect the remaining half of the drill core, and this would represent a second 'duplicate' sample. The variation observed between the grades of the two duplicate samples thus estimates the 'sampling error'. Dr. Francois-Bongarcon then carries his argument further, referring to drilling in a slightly different location (10 m to the left or right) to obtain a different piece of drill core and a different grade. This spatial variation is not the problem we are addressing, and is irrelevant to the discussion.

3. In his comments, Dr. Francois-Bongarcon indicates that "primary sampling variance can indeed be controlled by adequate sample mass". Although this is true in theory (a larger sample will most often have a smaller sampling error), it is often not achievable in practice, because geologists do not have access to unlimited sample masses. Economic, logistical and physical constraints limit the size of sample that can be obtained and processed, and oftentimes the available sample mass remains inadequate (mostly because of the 'nugget effect'). As a result, geologists sometimes have difficulty controlling sampling variances. Dr. Francois-Bongarcon goes on to indicate that "all stages of the (sampling, preparation and analysis) protocol are important, no one less than the others." This is true, but only in the most general sense, because some stages of a sample treatment protocol introduce errors that are much larger than those introduced by other stages. The largest sources of error are the most important to any sampling effort from a practical point of view, because their reduction most expeditiously reduces the total error; reducing a lesser error can never be as efficient. This is illustrated by Figure 1, which demonstrates that a reduction in the error component with the smaller standard deviation reduces the total error less than a similar reduction in the error component with the larger standard deviation.


Figure 1 – Illustration of how independent errors combine: reducing the error component with the smaller standard deviation reduces the total error less than an equal reduction of the component with the larger standard deviation.
Hence the emphasis, in our original EXPLORE communication, on measuring and reducing the field component of measurement error, the largest error in our two examples and at most other mineral deposits we have evaluated. In order to measure the error related to selection of the initial sample (effectively caused by the geology at the sample site), the error introduced at each step of the subsequent sample size reduction process in the laboratory must also be measured. The difference between the laboratory errors and the total error measured by the field duplicates is the error related to 'geology'. It is this error that we hope to quantify and, ultimately, reduce. This is, after all, the point of the whole exercise: to obtain an estimate of (and to limit) the risk on the analyses being generated.
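As a minimal numerical sketch of this bookkeeping (all of the standard deviations below are hypothetical), independent error components add in quadrature, so reducing the largest component pays off most, and the 'geology' component can be recovered by subtraction:

```python
import math

def combine(components):
    """Independent error components add in quadrature."""
    return math.sqrt(sum(s * s for s in components))

# Hypothetical standard deviations (ppb Au) for each stage of the protocol.
field, prep, analysis = 40.0, 15.0, 5.0
total = combine([field, prep, analysis])                 # ~43.0
print(total)
print(combine([field, prep, analysis / 2.0]))            # ~42.8: halving the smallest component barely helps
print(combine([field / 2.0, prep, analysis]))            # ~25.5: halving the largest component helps greatly

# Conversely, if field duplicates give the total error and the laboratory stages
# are measured separately, the 'geology' component follows by subtraction:
s_geology = math.sqrt(total ** 2 - prep ** 2 - analysis ** 2)
print(s_geology)                                         # recovers ~40.0 in this toy example
```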

4. Dr. Francois-Bongarcon claims that "the graphs presented are unrealistic", and that "sampling standard deviations have no reasons whatsoever to increase (with grade)". The Thompson and Howarth approach (Thompson & Howarth 1973, 1976a, 1976b, 1978; Thompson 1973, 1982; Fletcher 1981; Stanley & Sinclair 1986; Stanley 2003) that we used to make our graphs and estimate measurement error is an empirical, valid and convenient method for our purposes. Thompson and Howarth's approach employs a linear model to characterize measurement error (whether from sampling, preparation or analysis) as a function of grade. A linear model was used by Thompson and Howarth for two reasons: (1) because the poor estimates of error obtained from duplicates do not allow one to legitimately employ a more complicated non-linear model, and (2) because, on empirical grounds, a linear model seemed to fit the analytical data that they were generating and whose errors they were describing. We have used the identical linear model precisely because it also fits our data (Figure 2). We would be the first to use an appropriate non-linear model (e.g., a Poisson curve) if the data warranted it, but because the relationships are demonstrably linear, it would be inappropriate to try to fit a curve to these data.


Figure 2 – Modified Thompson-Howarth replicate error plot illustrating 3488 duplicate Au determinations from rotary reverse circulation samples (diamonds), and the means and standard deviations of groups of 11 duplicates (squares), for samples from an anonymous intrusion-related gold deposit. The grouped data define a linear relationship between concentration and error.
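As an illustration of the kind of grouped calculation behind Figure 2, the sketch below sorts duplicate pairs by mean concentration, estimates a standard deviation for each group from the pair differences, and fits the linear error model s(c) = s0 + k·c by ordinary least squares. The group size of 11 follows the figure caption; the function name, the use of group standard deviations rather than medians, and the least-squares fit are illustrative assumptions rather than the exact procedures of Thompson & Howarth (1973) or Stanley (2003).

```python
import math

def grouped_error_model(dups, group_size=11):
    """Group duplicate pairs by increasing mean concentration, estimate a standard
    deviation for each group from the pair differences, and fit s(c) = s0 + k*c."""
    pairs = sorted(dups, key=lambda p: (p[0] + p[1]) / 2.0)
    means, sds = [], []
    for i in range(0, len(pairs) - group_size + 1, group_size):
        grp = pairs[i:i + group_size]
        means.append(sum((a + b) / 2.0 for a, b in grp) / group_size)
        # each pair difference has variance 2*sigma^2, hence the division by 2
        sds.append(math.sqrt(sum((a - b) ** 2 for a, b in grp) / (2.0 * group_size)))
    # ordinary least-squares fit of group standard deviation against group mean concentration
    n = len(means)
    mx, my = sum(means) / n, sum(sds) / n
    k = sum((x - mx) * (y - my) for x, y in zip(means, sds)) / sum((x - mx) ** 2 for x in means)
    s0 = my - k * mx
    return s0, k   # error model: s(c) = s0 + k * c

# Usage with hypothetical duplicate Au determinations (ppb), e.g.:
# s0, k = grouped_error_model([(102, 95), (8, 11), (250, 301), ...])
```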

Dr. Francois-Bongarcon’s contention that the standard deviations shouldn’t increase with concentration is also inconsistent with his preference to use Gy’s sampling formula (1982) to model sampling error. This inconsistency derives from the fact that the binomial theorem, on which Gy’s sampling model is based, defines a sampling variance that increases with concentration according to the formula:

s_c = \sqrt{\frac{c (1 - c)}{n}}

where c is the element concentration (expressed as a proportion), n is the total number of grains in the sample, and sc is the standard deviation of the concentration (expressed in proportion units). This formula describes a curve in concentration-error space that starts at the origin, rises steeply at low concentrations, and then shallows at higher concentrations (Figure 3). As a result, if Gy’s (1982) sampling formula were to be applied to these case histories, the sampling error modeled would increase with concentration.


Figure 3 – Examples of binomial sampling error models with different sampling parameters (n). Values are expressed as proportions (ptn). Maximum sampling error occurs at a concentration of 50%. Larger values of n define binomial sampling error models with smaller errors. At low concentrations, sampling error increases in a curvilinear fashion with increasing concentration.
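The shape of these curves is easy to verify numerically. In the minimal sketch below, the grain count n is an arbitrary illustration value; the last column prints the Poisson approximation sqrt(c/n), which is nearly indistinguishable from the binomial value at the low concentrations relevant to nuggety ores.

```python
import math

def binomial_sd(c, n):
    """Standard deviation of a concentration c (a proportion) estimated from a
    sample of n grains drawn binomially: s_c = sqrt(c * (1 - c) / n)."""
    return math.sqrt(c * (1.0 - c) / n)

n = 10000  # hypothetical number of grains in the sample
for c in (0.0001, 0.001, 0.01, 0.05, 0.5):
    # at low concentrations the error grows roughly as sqrt(c/n), i.e. with grade
    print(c, binomial_sd(c, n), math.sqrt(c / n))
```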


5. That Dr. Francois-Bongarcon is a major proponent of Gy’s sampling theory is clear. Unfortunately, although Gy’s sampling formula is applicable in ores that do not suffer from a nugget effect, the high regard that Dr. Francois-Bongarcon holds for this theory is not ample justification for its use to model sampling error in ‘nuggety’ materials.

by: Barry W. Smee
Smee and Associates Consulting Ltd.
4658 Capilano Road, North Vancouver B.C. V7R 4K3
Phone 1 (604) 929-0667 Fax 1(604) 929-0662
bwsmee@geochemist.com

and: Clifford R. Stanley
Department of Geology, Acadia University
Wolfville, Nova Scotia B4P 2R6, Canada
VOX (902) 585-1344, FAX (902) 585-1816
cliff.stanley@acadiau.ca

References
CLIFTON, H.E., HUNTER, R.E., SWANSON, F.J. & PHILLIPS, R.L. 1969. Sample size and meaningful gold analysis. U.S. Geological Survey, Professional Paper 625-C. U.S. Government Printing Office, Washington, D.C., pp. C1-C17.

FLETCHER, W.K. 1981. Analytical Methods in Geochemical Prospecting. Handbook of Exploration Geochemistry, Vol. 1. Elsevier Scientific Publishing Co., Amsterdam, 255 p.

GY, P. 1982. Sampling of particulate materials: Theory and Practice. Elsevier Scientific Publishing Co., New York, 431 p.

STANLEY, C.R. 1998. NUGGET: PC-software to calculate parameters for samples and elements affected by the nugget effect. Exploration and Mining Journal, Canadian Institute of Mining and Metallurgy, 7:1-2, 139-147.

STANLEY, C.R. 2003. THPLOT.M: A MATLAB function to implement generalized Thompson-Howarth error analysis using replicate data. Computers and Geosciences, 29:2, 225-237.

STANLEY, C.R. & SINCLAIR, A.J. 1986. Relative error analysis of replicate geochemical data: advantages and applications. in: Programs and Abstracts, GeoExpo – 1986: Exploration in the North American Cordillera, Association of Exploration Geochemists Regional Symposium, Vancouver, British Columbia, pp. 77-78.

THOMPSON, M. 1982. Regression methods and the comparison of accuracy. The Analyst, 107, 1169-1180.

THOMPSON, M. 1973. DUPAN 3, A subroutine for the interpretation of duplicated data in geochemical analysis. Computers and Geosciences, 4, 333-340.

THOMPSON, M. & HOWARTH, R.J. 1978. A new approach to the estimation of analytical precision. Journal of Geochemical Exploration, 9, 23-30.

THOMPSON, M. & HOWARTH, R.J. 1976a. Duplicate analysis in practice – Part 1. Theoretical approach and estimation of analytical reproducibility. The Analyst, 101, 690-698.

THOMPSON, M. & HOWARTH, R.J. 1976b. Duplicate analysis in practice – Part 2. Examination of proposed methods and examples of its use. The Analyst, 101, 699-709.

THOMPSON, M. & HOWARTH, R.J. 1973. The rapid estimation and control of precision by duplicate determinations. The Analyst, 98, 153-160.

Readers’ Forum
From Dominique Francois-Bongarcon, PhD
AGORATEK International – dfbgn@attglobal.net

It is with keen interest that I started to read the technical note "Sample Preparation of 'nuggety' samples: Dispelling some myths about sample size and sampling errors". However, shortly into the text, I realized the paper was misguided for a number of technical reasons. While one can easily agree with the overall conclusion that 3 kg pulverizers are not the best tool to use [1], it is obviously not acceptable to think one should not be concerned with the size of the coarse split! Nor does this article make a valid point of it, as the following critical errors can easily be spotted:

* Poisson statistics can only apply assuming the gold grains are freed from their gangue (liberated ores), which is rarely the case just after crushing. This makes all the precision calculations and what-if scenarios in the note invalid. Gy’s formula should be used down to liberation size, and even below as Poisson formulas fail to account for the size distribution of the gold grains whereas Gy’s formula does.

* While primary sampling at the RC rig is a true sampling operation (which therefore contributes a variance component), core splitting is not. In the first case, a smaller mass (the sample) is intended to represent the larger mass of whole-interval cuttings. In the second case, the 1/2 core represents only itself. The variance attached to core splitting is not entirely a sampling [2] variance, which invalidates the two examples and their conclusions [3].

* As a result, only the case of RC chips can be considered for the reasoning described in the note, but then, the primary sampling variance can indeed be controlled by adequate sample mass, as is the one for the coarse split. All the stages of the protocol are important, no one less than the others, and they must all be balanced with each other, which implies adjusting the coarse split mass as well on a case by case basis.

* The graphs are unrealistic: it is well known that the sampling variance depends not only on sample mass and crush size, but also on the size of the gold grains, which has been shown to vary with grade in most gold deposits. The sampling standard deviations have no reasons whatsoever to ever increase, nor to do so linearly with grade, and such general conclusions as the one pursued in this note are over-simplistic and do not take advantage of the many advances of Pierre Gy, founder of modern sampling theory.

A more fruitful approach, based on the work of Pierre Gy [4] and my experience developing sampling protocols over the past 15 years, consists of:

1. characterizing the heterogeneity of the ore (i.e. Gy’s formula parameters),

2. studying the gold grain size variations as a function of gold grade,

3. finding the grade at which the sampling characteristics of the ore are worst,

4. using this worst-case scenario to assess and correct the protocol for primary RC cuttings, crushed ore and pulp sampling stages.

5. optimizing the protocol to make sure all these steps have precisions as similar as is practically possible.

The overall preparation precision is shown to be driven by the worst step(s). Either these poor steps can be improved or, if that is not practical, the other, better steps can be relaxed, as they cost money without fixing what makes the overall result poor (interestingly, such relaxing may include coarser pulverization).
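By way of illustration only, a stage-by-stage calculation of the kind this procedure implies might look like the following sketch. Every parameter value in it is hypothetical: the shape factor (0.5) and granulometric factor (0.25) are common default choices, the liberation factor uses one widely quoted calibration, l = (d_lib/d)^1.5 with an assumed 100 µm liberation size, and the stage masses, top sizes and 2 g/t grade are invented for the example.

```python
import math

RHO_AU = 19.3  # g/cm^3, assumed density of gold

def liberation_factor(top_size_cm, lib_size_cm=0.01, b=1.5):
    """One common calibration of the liberation factor, l = (d_lib/d)^b, capped at 1."""
    return min(1.0, (lib_size_cm / top_size_cm) ** b)

def gy_relative_variance(sample_g, lot_g, top_size_cm, grade, f=0.5, g=0.25):
    """Relative variance of Gy's fundamental sampling error:
    sigma^2 = f * g * c * l * d^3 * (1/Ms - 1/Ml),
    with the mineralogical factor c ~ rho_Au / grade for low-grade gold ores."""
    c = RHO_AU / grade
    l = liberation_factor(top_size_cm)
    return f * g * c * l * top_size_cm ** 3 * (1.0 / sample_g - 1.0 / lot_g)

# Hypothetical three-stage protocol for a 2 g/t ore (grade expressed as a mass fraction):
grade = 2e-6
stages = {                      # (lot mass g, split mass g, nominal top size cm)
    "primary RC split":  (30000.0, 5000.0, 1.0),
    "coarse split @10#": (5000.0,   500.0, 0.2),
    "pulp assay split":  (500.0,     30.0, 0.0075),
}
total = 0.0
for name, (lot, split, d) in stages.items():
    var = gy_relative_variance(split, lot, d, grade)
    total += var
    print("%-18s relative std dev = %5.1f %%" % (name, 100 * math.sqrt(var)))
print("%-18s relative std dev = %5.1f %%" % ("whole protocol", 100 * math.sqrt(total)))
```

With these particular (invented) numbers the pulp split is far more precise than the two coarser stages, which is the kind of imbalance the case-by-case optimization described above would flag: either improve the coarser stages or relax the pulp split.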

The important point to remember is that this methodology can only be applied on a case-by-case basis, and that there is no such thing as a universal, optimal coarse split mass or crushing size.

As far as myths are concerned, that one, which goes back to the gold rushes, would be the one well worth eradicating.

Footnotes
1. The large pulverizers described in the note were designed with the aim of allowing for a larger coarse split after crushing, when it became clear the usual protocols (splitting 500 g at 10 mesh) were inadequate for medium to coarse gold. In reality, in many instances, the usual effect was to reduce the primary sample to 3 kg and skip coarse crushing, with often disastrous consequences.

2. Sampling is defined as taking a smaller mass that is expected to represent the characteristics of interest of the whole lot. It is unfortunate that the word is also used for the extraction of specimens or measurement supports.

3. One could argue that the other core half could have been selected, and that their grade difference is therefore important. This, however, is a classical misconception: in fact, the drill hole could have been drilled 10 cm further to the right, so what about that other difference? And what about 10 m to the left? Where do we stop seeing this difference as a measurement error? The truth is that we made a decision to measure the grade at a given location in space along a half cylinder. We could have chosen the full cylinder (usually with little geostatistical gain, in the final analysis), but we did not. The difference between the two half cores is why we interpolate the grade between drill holes, thus proving that we should never consider or expect the 1/2 core to be representative of a larger volume of material. Spatially interpolating grades between measurement points, however close to each other, belongs to the domain of geostatistics, a discipline that is separate from sampling theory. In other words, the assumption that the two 1/2 core samples are sample duplicates, and can be analyzed as such, is simply invalid. Spatial components require different kinds of analysis (e.g. variography) not addressed by the authors. The only, very rare case where it may make sense to process such data is when one of the two components (natural nugget effect and measurement error) has a variance known to be negligible compared with the other. In all other cases, their separation is not possible and useful conclusions cannot be derived.

4. The reader wanting to know more about it will be interested in a cycle of international conferences on sampling and bed-blending which started in 2003 and will hold its next instance in Brisbane in May 2005 under the auspices of AusIMM and CSIRO.
