Data Jamboree 1/Annotation Experiment

From phenoscape
Revision as of 19:59, 24 April 2008 by Wasila

Background and Participant Preparation

An annotation experiment was conducted on day 2 of the Phenoscape Data Jamboree to assess curation consistency among the four trained participants. Training consisted of a hands-on group annotation exercise on day 1, and individual work on each participant's own publications with assistance from project personnel on days 1 and 2. An Annotation Guide with examples of character types commonly encountered in the fish systematic literature was also given to participants. For the experiment, participants were given 2 hours to annotate 10 characters (plus one extra-credit character) taken from three publications.

Results and Conclusions

Completeness of annotations

Three of the four participants attempted annotations for all 11 characters, while one participant finished only 7 characters. All participants recorded the character number and textual description, and selected the appropriate voucher specimen for each annotation. Only two of the four participants recorded evidence codes for each annotation.

Variability of EQ statements

A summary of annotation consistency among participants is presented in the table below.

Character #   # Participants with      % Consistency   Variable component of annotation
              Completed Annotations*   with Key
1             4                        100
2             3                        0               post-composition of Q term for relative length
3             3                        0               incorrect recording of count values
4             4                        0               TAO term definition confusion (bone vs. cartilage)
5             3                        33              E post-composition; choice of appropriate Q
6             4                        0               E post-composition
7             4                        50              E post-composition
8             3                        33              choice of appropriate Q term
9             3                        0               E post-composition; choice of appropriate Q term
10            2                        50              choice of appropriate Q term
EC            2                        25              E post-composition; choice of appropriate Q term

  • *Incomplete annotations due to software issues were excluded.

Participants annotated only one character identically. Variation in the other annotations was due to several reasons:

  • Granularity of annotations. Some participants integrated very detailed information into post-compositions of entities or qualities, whereas others used single anatomy terms or broad quality categories. Some participants also incorporated spatial information into post-compositions.
  • Creation of post-composed entities. Participants had difficulty deciding which term to use as the genus in a post-composition, and the relation used in post-composition (for example, part_of vs. has_part) differed among participants' annotations.
  • Choice of the appropriate quality term. Participants had difficulty choosing among many similar quality terms, and the annotations also differed in their use of monadic versus relational qualities.
  • Confusion regarding the definition of an anatomy term, pointing to the importance of consistently naming bone terms in the TAO.
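The sources of variation above can be sketched in code. The following is an illustrative Python sketch (the character description, term labels, and data structures are hypothetical, not drawn from the experiment's actual annotations) showing how two curators annotating the same character can produce structurally different EQ statements, one using a fine-grained part_of post-composition and one using a single pre-composed entity term with a broader quality:

```python
# Hypothetical example: two curators annotate the same character,
# e.g. "anterior margin of dorsal fin elongate".

# Curator A post-composes a fine-grained entity using part_of
# and picks a specific quality term.
annotation_a = {
    "entity": ("anterior margin", "part_of", "dorsal fin"),  # post-composed E
    "quality": "elongated",                                  # specific Q term
}

# Curator B uses the whole pre-composed anatomy term
# and a broader parent quality category.
annotation_b = {
    "entity": "dorsal fin",   # single anatomy term, no post-composition
    "quality": "shape",       # broad Q category
}

def same_annotation(a, b):
    """Exact-match comparison of two EQ statements."""
    return a == b

# The two annotations describe the same character but do not match.
print(same_annotation(annotation_a, annotation_b))
```

Under exact-match scoring, differences in granularity, post-composition relation, or quality-term choice all count as inconsistencies, which is why the consistency percentages in the table above are low even where curators captured similar biology.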

The results of the annotation experiment highlight the need for annotation standards, and for streamlining of the software interface so that curators are not faced with so many similar choices of terms and relations.