What's the difference between frater and rater?

Frater


Definition:

  • (n.) A monk; also, a frater house.

Example Sentences:

  • (1) "To his public, Kenneth MacMillan was an enigmatic figure" (quoted by Sarah Frater, Evening Standard, 2002).
  • (2) In five of them the technique of Plauth, Frater, Spencer and Trusler was used.
  • (3) Stewart Frater (@stewart_frater) tweeted "Ed Miliband has the charisma of a shoelace #Miliband" on January 17, 2014; @SymonHill seemed to take issue with the reactionary and sensationalist commentary found on social media, particularly in the context of bipartisan politics.
  • (4) To separate the effect of active relaxation and filling, a method was introduced [E. L. Yellin, M. Hori, C. Yoran, E. H. Sonnenblick, S. Gabbay, R. W. M. Frater, Am.
  • (5) Although none of the other members of Jamaica’s 4x100m squad in 2008, which included Bolt, Asafa Powell and Michael Frater, are accused of doping, if the news is confirmed the IOC could strip them of their title.
  • (6) Bolt, along with Yohan Blake, Nesta Carter and Michael Frater ensured that Jamaican independence celebrations extended into another night, by making good a boast of saving their best for the final against their great rival the USA.
  • (7) Bolt, Yohan Blake, Michael Frater and Nesta Carter ran the 4x100m in 36.84, the first team in history to run under 37 seconds.
  • (8) As a result of the complex comparative neurochemical study of the translation machinery functioning in the brain cells of three conventionally "phylogenetically related" species of wild timber voles (Clethrionomys glareolus, Clethrionomys frater and Clethrionomys gapperi), it has been found that the cytoplasm of brain cells of the latter contains an oligonucleotide (oligoribonucleotide) factor(s) with mol.
  • (9) And they did, because Gatlin was matched up against Jamaica's slowest runner, Frater.

Rater


Definition:

  • (n.) One who rates or estimates.
  • (n.) One who rates or scolds.

Example Sentences:

  • (1) Accuracy of discrimination of letters at various preselected distances was determined each session while Ortho-rater examinations were given periodically throughout training.
  • (2) A rater-specific variable was found for each of the four raters.
  • (3) Study 1 assessed the effects of roentgenogram quality, raters, and seven measurement methods on the consistency and accuracy of evaluating translations in the sagittal plane.
  • (4) Videotaped interviews were used for assessing the level of inter-rater reliability and the communicability of the CPRS to inexperienced raters.
  • (5) In order to evaluate how many patients presenting at accident and emergency (A&E) departments show signs of psychiatric disturbance, 140 consecutive medical presentations to an A&E department were evaluated using a range of simple self-report and rater measures, then followed up a month later.
  • (6) This increase was greater with the inexperienced raters than with the experienced group.
  • (7) Interrater reliabilities, ranging from .62 to .83 across rater pairs, were superior to reliabilities reported in medical education studies.
  • (8) The DRS and LCFS were compared in terms of how consistently ratings could be made by different raters, how stable those ratings were from day to day, their relative correlation with Stover Zeiger (S-Z) ratings collected concurrently at admission, and with S-Z, Glasgow Outcome Scale (GOS), and Expanded GOS (EGOS) ratings collected concurrently at discharge, and finally in the ability of admission DRS and LCFS scores to predict discharge ratings on the S-Z, GOS, and EGOS.
  • (9) Scale items that differed from the raters' intuition tended to be omitted more than others.
  • (10) Two raters examined 45 children (90 hips), including patients with spastic diplegia and with meningomyelocele, who are prone to developing hip flexion contractures, and healthy subjects.
  • (11) Additional evaluations included interrater reliability and an evaluation that included longitudinal measurement, in which one subject was imaged sequentially 24 times, with reliability computed from data collected by three raters over 1 year.
  • (12) Furthermore, raters watched the synchronously recorded video versions of the subject's face and rated them as to expressivity.
  • (13) Each rater evaluated the transcript of 15 prenatal interviews.
  • (14) These differences diminish when more highly educated raters are used.
  • (15) Prealcohol and postalcohol responses were assessed by self-rating scales of affect and mood, independent rater observation, perceptual-motor, and cognitive performance tasks.
  • (16) Intrarater reliability for each of the four nurse-raters on a random sample was at a significant level.
  • (17) Several investigators have used the Brier index to measure the predictive accuracy of a set of medical judgments; the Brier scores of different raters who have evaluated the same patients provide a measure of relative accuracy.
  • (18) Comparison of reliability scores across rating conditions indicated that the videotape medium had little effect on the ability of raters to rate affective flattening similarly.
  • (19) Calibrated raters were unaware of group affiliation of products.
  • (20) The Brief Psychiatric Rating Scale (BPRS) and the Clinical Global Impressions (CGI) scale were administered at study entry and once a week by a blind rater.