(1) Accuracy of discrimination of letters at various preselected distances was determined each session while Ortho-rater examinations were given periodically throughout training.
(2) A rater-specific variable was found for each of the four raters.
(3) Study 1 assessed the effects of roentgenogram quality, raters, and seven measurement methods on the consistency and accuracy of evaluating translations in the sagittal plane.
(4) Videotaped interviews were used to assess the level of inter-rater reliability and the communicability of the CPRS to inexperienced raters.
(5) In order to evaluate how many patients presenting at accident and emergency (A&E) departments show signs of psychiatric disturbance, 140 consecutive medical presentations to an A&E department were evaluated using a range of simple self-report and rater measures, then followed up a month later.
(6) This increase was greater with the inexperienced raters than with the experienced group.
(7) Interrater reliabilities, ranging from .62 to .83 across rater pairs, were superior to reliabilities reported in medical education studies.
(8) The DRS and LCFS were compared in terms of how consistently ratings could be made by different raters, how stable those ratings were from day to day, their relative correlation with Stover Zeiger (S-Z) ratings collected concurrently at admission, and with S-Z, Glasgow Outcome Scale (GOS), and Expanded GOS (EGOS) ratings collected concurrently at discharge, and finally in the ability of admission DRS and LCFS scores to predict discharge ratings on the S-Z, GOS, and EGOS.
(9) Scale items that differed from the raters' intuition tended to be omitted more than others.
(10) Two raters examined 45 children (90 hips), including patients with spastic diplegia and with meningomyelocele, who are prone to developing hip flexion contractures, and healthy subjects.
(11) Additional evaluations included interrater reliability and an evaluation that included longitudinal measurement, in which one subject was imaged sequentially 24 times, with reliability computed from data collected by three raters over 1 year.
(12) Furthermore, raters watched the synchronously recorded video versions of the subject's face and rated them as to expressivity.
(13) Each rater evaluated the transcript of 15 prenatal interviews.
(14) These differences diminish when more highly educated raters are used.
(15) Prealcohol and postalcohol responses were assessed by self-rating scales of affect and mood, independent rater observations, and perceptual-motor and cognitive performance tasks.
(16) Intrarater reliability for each of the four nurse-raters on a random sample was at a significant level.
(17) Several investigators have used the Brier index to measure the predictive accuracy of a set of medical judgments; the Brier scores of different raters who have evaluated the same patients provide a measure of relative accuracy.
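As a minimal sketch of the comparison described above: the Brier score is the mean squared difference between a rater's probability judgments and the observed 0/1 outcomes, so computing it for each rater over the same patients yields a relative-accuracy comparison. The function name and the data below are illustrative, not taken from any of the cited studies.

```python
def brier_score(probs, outcomes):
    """Mean squared difference between predicted probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical data: two raters' probability judgments for the same five patients.
outcomes = [1, 0, 1, 1, 0]            # observed events (1 = occurred)
rater_a = [0.9, 0.2, 0.8, 0.7, 0.1]
rater_b = [0.6, 0.5, 0.6, 0.5, 0.4]

# A lower Brier score indicates more accurate judgments, so comparing the
# two scores ranks the raters' relative predictive accuracy.
print(brier_score(rater_a, outcomes))
print(brier_score(rater_b, outcomes))
```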
(18) Comparison of reliability scores across rating conditions indicated that the videotape medium had little effect on the ability of raters to rate affective flattening similarly.
(19) Calibrated raters were unaware of group affiliation of products.
(20) The Brief Psychiatric Rating Scale (BPRS) and the Clinical Global Impressions (CGI) scale were administered at study entry and once a week by a blind rater.