The how and why of proficiency testing schemes

Disputes and product rejections in the grain and oilseed value chains can be prevented through improved benchmarking and alignment of testing facilities. Successful participation in proficiency schemes can assist value chains to function more efficiently. It is against this background that the Oilseeds Advisory Committee (OAC) requested the Southern African Grain Laboratory NPC (SAGL) to co-ordinate a proficiency testing scheme for soya beans and soya bean meal on their behalf. A questionnaire was developed to ascertain the number of participants as well as the specific parameters to be included in the proficiency scheme. This questionnaire was distributed to members of the Animal Feed Manufacturers’ Association (AFMA) and the South African Oil Processors’ Association (SAOPA) in a bid to reach the maximum number of testing facilities in the soya bean industry. Accredited testing laboratories involved in the testing of soya beans and soya bean meal were also included in the distribution list.

Measuring individual performance

Proficiency testing aims to provide an independent assessment of the competence of participating testing facilities and, together with the use of properly validated methods and well-trained analysts, is an essential element of quality assurance. Successful participation in proficiency schemes forms part of complying with the accreditation requirements under the international standard for testing laboratories (ISO/IEC 17025). Regular participation in proficiency testing (also called interlaboratory comparisons or ring tests) provides information that can be used to evaluate and carefully monitor individual performance using pre-established criteria.

Benefits include equipment verification, method optimisation, benchmarking against other laboratories/analysts, and information on the comparability of test or measurement methods. Proficiency testing also serves as a method validation tool, assists with the training of laboratory staff, allows a laboratory to identify and correct analytical drifts and equipment malfunctions, improves measurement uncertainty estimates, and ultimately provides a means of confirming the accuracy of results. Proficiency samples are prepared, tested and their homogeneity confirmed before they are distributed to participants. The results submitted by the participants are statistically analysed to provide an assigned value for each analyte; the assigned value is derived from the consensus of the results submitted by participants. The assigned values are then used in combination with the standard deviation for proficiency assessment to calculate a z-value for each result.

Identifying outliers in data sets

Outliers in a set of data will influence the average and standard deviation of that data set and thus also the z-value. All outliers are identified and omitted from further statistical analysis. A significance level of 5%, or a confidence level of 95%, is used when determining outliers; the significance level is a specified small probability considered to be the risk of erroneously rejecting a good observation. Different statistical tests are used to determine outliers, the most commonly used being the Grubbs test, Dixon’s Q test, and the David, Hartley and Pearson test.
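As an illustration of the Grubbs test mentioned above, the sketch below flags the single most extreme value in a data set at the 5% significance level. The critical values are standard tabulated two-sided Grubbs values for small n; a production implementation would compute them from the t-distribution, and the moisture results shown are hypothetical.

```python
# Illustrative two-sided Grubbs test at the 5% significance level.
from statistics import mean, stdev

# Tabulated two-sided Grubbs critical values, alpha = 0.05, for n = 3..10.
G_CRIT_5PCT = {3: 1.155, 4: 1.481, 5: 1.715, 6: 1.887,
               7: 2.020, 8: 2.126, 9: 2.215, 10: 2.290}

def grubbs_outlier(data):
    """Return the value flagged as an outlier at the 5% level, or None."""
    n = len(data)
    m, s = mean(data), stdev(data)
    suspect = max(data, key=lambda x: abs(x - m))   # most extreme value
    g = abs(suspect - m) / s                        # Grubbs statistic
    return suspect if g > G_CRIT_5PCT[n] else None

# Hypothetical moisture results (%) from seven laboratories:
print(grubbs_outlier([12.1, 12.0, 12.2, 12.1, 12.3, 12.0, 13.4]))
```

In practice the test is applied iteratively: once a value is flagged and removed, the remaining data can be retested until no further outliers are found.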

Significance of z-values

Z-values are used internationally as an indication of a laboratory’s proficiency. A z-value is calculated for each reported value. Each individual z-value represents the number of standard deviations by which an analytical result differs from the ‘true value’, as represented by the average/consensus/assigned value. A ‘perfect’ z-value is 0,00. Absolute z-values of less than 1,00 represent outstanding accuracy and precision; absolute z-values of less than 2,00 are considered to represent satisfactory accuracy and precision; z-values between 2,00 and 3,00 are considered questionable, suggesting that some attention to equipment and procedures may be required; and z-values greater than 3,00 are considered unsatisfactory and require examination of the equipment and procedures used.
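The interpretation bands above can be captured in a small helper. This is simply a restatement of the thresholds in the text (applied to the absolute value of the z-value), with the category labels chosen here for illustration.

```python
# Interpretation bands for z-values, applied to the absolute value.
def interpret_z(z):
    a = abs(z)
    if a < 1.0:
        return "outstanding"
    if a < 2.0:
        return "satisfactory"
    if a <= 3.0:
        return "questionable"
    return "unsatisfactory"

print(interpret_z(-0.4), interpret_z(1.7), interpret_z(2.5), interpret_z(3.2))
# → outstanding satisfactory questionable unsatisfactory
```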

Evaluating overall performance

For proficiency schemes covering grading parameters, an individual sample is prepared for each participant. Grain and oilseed samples are cleaned of all defects. Known quantities of defects and foreign matter are then added to each cleaned sample. Prior to the samples being sent to participants, each sample is graded to ensure that the composition of all the samples is similar and to enable comparison of the participants’ results. Results are presented in table and graph format and each participating laboratory can use its own uniquely coded results to evaluate its performance. Information on the specific methods used by the participants is also included in the report, giving participants insight into the different methods used by other testing facilities. It remains the responsibility of each participant to investigate the cause and to take action if an out-of-spec result is reported. These investigative steps include confirming that the correct results were reported, that the correct method was used, that equipment calibration and verification are up to date, and that the staff conducting the testing are properly trained.


Wiana Louw

General Manager, Southern African Grain Laboratory

Quality overview of the 2020 maize crop

A total of 890 composite samples, representing white and yellow maize of each production region, were received and analysed to determine their quality. The samples consisted of 516 white and 374 yellow maize samples.