Data Jamboree 1/Annotation Experiment

Background and Participant Preparation

An annotation experiment was conducted on day 2 of the Phenoscape Data Jamboree in order to assess curation consistency among the four trained participants. Training consisted of a hands-on group annotation exercise on day 1, and individual work on each participant's own publications with assistance from project personnel on days 1 and 2. An Annotation Guide with examples of character types commonly encountered in the fish systematic literature was also given to participants. For the experiment, participants were given 2 hours to annotate 10 characters (plus one extra credit) taken from three publications.

Results and Conclusions

Completeness of annotations

Three of the four participants attempted annotations for all 11 characters, while one participant finished only 7 characters. All participants recorded the character number and textual description, and selected the appropriate voucher specimen for each annotation. Only two of the four participants recorded evidence codes for each annotation.

Variability of EQ statements

A summary of annotation consistency among participants is presented in the table below (incomplete annotations due to software issues are excluded).

| Character # | # Participants with Completed Annotations* | % Consistency with Key | Variable component of annotation |
|-------------|--------------------------------------------|------------------------|----------------------------------|
| 1           | 4                                          | 100                    |                                  |
| 2           | 3                                          | 0                      | post-composition of Q term for relative length |
| 3           | 3                                          | 0                      | incorrect recording of count values |
| 4           | 4                                          | 0                      | TAO term definition confusion (bone vs. cartilage) |
| 5           | 3                                          | 33                     | E post-composition; choice of appropriate Q |
| 6           | 4                                          | 0                      | E post-composition |
| 7           | 4                                          | 50                     | E post-composition |
| 8           | 3                                          | 33                     | choice of appropriate Q term |
| 9           | 3                                          | 0                      | E post-composition; choice of appropriate Q term |
| 10          | 2                                          | 50                     | choice of appropriate Q term |
| EC          | 2                                          | 25                     | E post-composition; choice of appropriate Q term |

*Incomplete annotations due to software issues were excluded.

Participants annotated only one character identically. Variation in the other annotations arose for several reasons. Most commonly, annotations differed in how post-composed entity terms were created, particularly in the choice of relation used for post-composition (for example, part_of vs. has_part). Annotations also varied in the choice of quality term among many similar candidates. Another point of difference stemmed from confusion over the definition of an anatomy term, underscoring the importance of consistently naming bone terms in the TAO. The results of the annotation experiment highlight the need for annotation standards and for streamlining the software interface so that curators are not faced with so many similar choices for quality terms and relations.
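
To make the source of this variability concrete, the sketch below illustrates how two curators annotating the same character state can end up with non-identical EQ statements simply by choosing a different relation for entity post-composition and a near-synonymous quality term. This is a minimal illustration, not Phenoscape's actual data model; the term labels are hypothetical placeholders rather than real TAO/PATO identifiers.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PostComposedEntity:
    """An entity term post-composed from a genus term, a relation, and a target term."""
    genus: str      # anatomy term label (hypothetical placeholder)
    relation: str   # relation used for post-composition, e.g. "part_of" or "has_part"
    target: str     # related anatomy term (hypothetical placeholder)

@dataclass(frozen=True)
class EQStatement:
    """An Entity-Quality annotation for one character state."""
    entity: PostComposedEntity
    quality: str    # quality term label (hypothetical placeholder)

# Curator A post-composes the entity with part_of and picks one quality term...
annotation_a = EQStatement(
    entity=PostComposedEntity("process", "part_of", "maxilla"),
    quality="elongated",
)

# ...while Curator B post-composes from the other direction with has_part
# and picks a similar but distinct quality term.
annotation_b = EQStatement(
    entity=PostComposedEntity("maxilla", "has_part", "process"),
    quality="increased length",
)

# Both statements describe the same character state, yet they are not identical,
# which is the kind of inconsistency the experiment measured.
print(annotation_a == annotation_b)  # False
```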