In the technical note of the same title in EXPLORE (No. 63, July 1988, pp. 12-14), the groundtruth of a computer-simulated Au geochemical anomaly (Figure 2a), and two realizations of that groundtruth were presented.

These two realizations were collected with sample sizes such that, on average, 0.25 and 1 grain per sample were collected over the anomaly (Figures 2b and 2c, respectively).

A third realization of an unknown groundtruth (with sample sizes corresponding to 0.25 grains per sample) was presented in Figure 2d.

As promised in the first technical note, the groundtruth for the realization of Figure 2d is presented in Figure 3a, and a similar realization from this groundtruth, using a sample size corresponding to, on average, 1 grain per sample, is presented in Figure 3b.

For both of the example groundtruths and their realizations presented in these two articles, it appears possible to correctly define the location of the anomaly with samples containing, on average, 1 grain per sample (Figures 2c and 3b), but virtually impossible to correctly define the location of the anomaly when sample sizes producing, on average, 0.25 grains per sample are used (Figures 2b and 2d).

Clearly, larger sample sizes have improved our ability to recognize the anomaly because they have decreased the variance produced by the nugget effect and created a more stable geochemical response.

Sampling theory (Gy, 1982), grain size studies (Clifton et al. 1969) and Poisson statistics (Ingamells 1981; Figure 1) have demonstrated that extremely large sample sizes are generally required to obtain Au analyses with precisions of 50% or less.

Unfortunately, the precision associated with the realizations using sample sizes corresponding to, on average, 1 grain per sample is 200%, calculated using the following formula:

p = 200 / √g

where p is the precision expressed in % (equal to twice the coefficient of variation, i.e. the standard deviation divided by the mean) and g is the average number of grains in the sample.

This is much higher than the 50% precision level which, based on theoretical considerations, has been recommended by several authors (e.g. Clifton et al. 1969); yet in the example shown, even this poor precision allows us to define the correct location of the simulated anomaly.
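Under the Poisson model assumed here, the relationship between average grain count and precision can be sketched in Python (the function names are illustrative, not from the original note):

```python
import math

def precision_pct(g: float) -> float:
    """Precision p (in %) for an average of g gold grains per sample.

    For a Poisson grain count with mean g, the standard deviation is
    sqrt(g), so the coefficient of variation is 1/sqrt(g) and
    p = 2 * 100 * CV = 200 / sqrt(g).
    """
    return 200.0 / math.sqrt(g)

def grains_for_precision(p: float) -> float:
    """Average grain count needed to achieve a target precision p (in %)."""
    return (200.0 / p) ** 2

print(precision_pct(1.0))          # 1 grain per sample -> 200.0 (%)
print(precision_pct(0.25))         # 0.25 grains per sample -> 400.0 (%)
print(grains_for_precision(50.0))  # 50% precision -> 16.0 grains per sample
```

Note how the recommended 50% precision level implies, on average, 16 grains per sample, a sample 16 times larger than the one that already sufficed to locate the anomaly in Figure 2c.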

This apparent discrepancy is not a result of these specific randomly sampled realizations of the groundtruths, but rather can be shown to be caused by several other important factors.

First, the objective of the geochemical survey must be considered.

One may wish only to locate an anomaly, or may wish to go further by locating the anomaly and defining the sample site containing its highest relative concentration.

Obviously, in the second case, a lower level of analytical precision (high level of sample reproducibility), and thus a larger sample size, is required.

The second factor which must be considered is the number of samples which were collected from anomalous sites. Clearly, if the sample density of the survey is so low that only one sample will likely be collected from anomalous material, a very high level of sample reproducibility (low level of precision) is required to ensure that the sample records an anomalous concentration.

The chance of detecting the anomaly (c) will be equal to the chance of detecting a gold grain in the anomalous sample (d), and thus is only a function of sample size (at least for this case).
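For a Poisson grain count with mean g, the single-sample detection chance d can be sketched as follows (a minimal sketch under that Poisson assumption; the function name is illustrative):

```python
import math

def detect_one_sample(g: float) -> float:
    """Chance d that a single sample contains at least one gold grain,
    assuming the grain count is Poisson with mean g: d = 1 - exp(-g)."""
    return 1.0 - math.exp(-g)

# 0.25 grains per sample on average -> roughly a 22% chance
print(detect_one_sample(0.25))
# 1 grain per sample on average -> roughly a 63% chance
print(detect_one_sample(1.0))
```

This is where the figure of a roughly 20% chance per sample at 0.25 grains per sample, used further below, comes from.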

However, if several samples will likely be collected from anomalous material, either because of a larger sample density or a larger anomaly, the chance of detecting the anomaly in at least one of these samples can be calculated by the following formula:

c = 1 − (1 − d)^n

where n is the number of samples collected from anomalous material.

The chance of anomaly detection is no longer solely a function of d, the chance of detecting at least one gold grain in an anomalous sample; it is a function of both d and n.

Thus, if 5 samples were collected from anomalous material and each of these samples had a 20% chance of containing a gold grain (corresponding to an average of approximately 0.25 grains per sample), the chance that at least one of these 5 samples is anomalous is 67%.

With 10 samples collected from anomalous material, the probability of obtaining at least one anomalous sample becomes 89%. Clearly there is safety in numbers.
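The safety-in-numbers calculation can be sketched directly (illustrative function name):

```python
def detect_anomaly(d: float, n: int) -> float:
    """Chance c of at least one anomalous sample among n samples
    collected from anomalous material, each with per-sample detection
    chance d: c = 1 - (1 - d)**n."""
    return 1.0 - (1.0 - d) ** n

print(round(detect_anomaly(0.20, 5) * 100))   # -> 67 (%)
print(round(detect_anomaly(0.20, 10) * 100))  # -> 89 (%)
```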

The last factor considered here involves the criteria by which we define a set of anomalous concentrations as an anomaly.

These will vary depending on the circumstance of the survey, but for a single element which is subject to a nugget effect, these criteria will largely be a function of the number and spatial distribution of the anomalous samples (i.e. the pattern the anomalous samples make on the map).

The absolute magnitudes of the sample concentrations will generally be less important due to the large variance imposed by the nugget effect.

Thus, we may require that several nearly adjacent samples record anomalous concentrations before we confidently consider the region they define to be an ‘anomaly’.

The chance of obtaining anomalous concentrations in at least m of the n samples collected from anomalous sites can be calculated by the following formula:

P = Σ (i = m to n) C(n, i) d^i (1 − d)^(n − i)

where C(n, i) = n!/(i!(n − i)!) is the binomial coefficient.

Thus, if we require that 2 of 5 samples collected from anomalous material record anomalous concentrations before we have enough confidence to say we have correctly located the anomaly, the chance of anomaly detection becomes 34%. If 3 of the 5 anomalous sites must record anomalous concentrations, the probability of detection drops to 8%. Raising the requirements for anomaly detection reduces the probability of detection. Therefore, the probability of anomaly detection is dependent on a variety of factors.
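The m-of-n criterion is a cumulative binomial probability and can be sketched as follows. Here d is taken as 1 − exp(−0.25) ≈ 0.22, the Poisson chance of at least one grain at 0.25 grains per sample; the exact percentages quoted above depend on the precise value of d assumed, so the 2-of-5 result here comes out near 31% rather than 34%:

```python
import math

def detect_m_of_n(d: float, n: int, m: int) -> float:
    """Chance that at least m of n samples collected from anomalous
    material record anomalous concentrations (cumulative binomial)."""
    return sum(math.comb(n, i) * d**i * (1.0 - d) ** (n - i)
               for i in range(m, n + 1))

d = 1.0 - math.exp(-0.25)      # per-sample chance of >= 1 grain, ~0.22
print(detect_m_of_n(d, 5, 2))  # requiring 2 of 5 -> about 0.31
print(detect_m_of_n(d, 5, 3))  # requiring 3 of 5 -> about 0.08
```

Raising m always lowers the detection probability, matching the trend described above.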

The factors considered here include the size of the sample (and thus the chance of collecting a gold grain from an anomalous site), the number of samples collected from anomalous sites, and the number of those samples which must record anomalous concentrations before the 'anomaly' can be located with confidence.

It is important for geochemists to consider all of these factors in light of their survey objectives in order to define the sample size required to produce a survey with only a small chance of missing an anomaly when one is present.

In this way, exploration expenses can be directly related to the probability of anomaly detection, and geochemical surveys can be designed and implemented in the most cost efficient manner.
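One way to turn these relations around for survey design, as a sketch under the same Poisson assumptions (names illustrative): given the number n of samples the survey density will place on anomalous material and a target detection probability c, solve c = 1 − (1 − d)^n for the required per-sample chance d, then d = 1 − exp(−g) for the average grain count g the sample size must deliver.

```python
import math

def required_grains(c_target: float, n: int) -> float:
    """Average grains per sample needed so that at least one of n
    anomalous-site samples records an anomaly with probability c_target.

    Inverts c = 1 - (1 - d)**n for d, then d = 1 - exp(-g) for g.
    """
    d = 1.0 - (1.0 - c_target) ** (1.0 / n)
    return -math.log(1.0 - d)

# A 95% chance of detection with 5 anomalous-site samples requires
# roughly 0.6 grains per sample on average
print(required_grains(0.95, 5))
```

The required grain count then translates, via the grain-size relations of Clifton et al. (1969), into a physical sample weight and hence a survey cost.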

This brief example forms part of a larger project investigating the relationships between sampling theory, sample size, sample density, pattern recognition, cost effectiveness and practicality (logistical feasibility) in geochemical exploration surveys.

Clifford R. Stanley

Dept. of Geology and Geophysics, University of Calgary

and

CyberQuest Exploration Systems

Vancouver, B.C. V6C US

Barry W. Smee

Abermin Corporation

**Explore, 65, 12-14.**

**References Cited**

Clifton, H.E., Hunter, R.E., Swanson, F.J. and Phillips, R.L. (1969) Sample Size and Meaningful Gold Analyses. U.S.G.S. Professional Paper 625-C, pp. C1-C17.

Gy, P.M. (1982) Sampling of Particulate Materials. Elsevier, New York, pp. 1-431.

Ingamells, C.O. (1981) Evaluation of Skewed Exploration Data — The Nugget Effect. Geochimica et Cosmochimica Acta, Vol. 45, pp. 1209-1216.

Stanley, C.R. and Smee, B.W. (1988) A Test in Pattern Recognition: Defining Anomalous Patterns in Surficial Samples Which Exhibit Severe Nugget Effects. Explore, No. 63, pp. 12-14.