NLTK inter-annotator agreement using Krippendorff's alpha outputs zero with only one disagreement


I have a sequence-labeling task in NLP, where annotators are asked to assign one or more labels to each word in a sentence. For example, for the sentence [a, b, c, d]:

Annotator 1 provided [[0, 1, 2], [0, 1], [0], [0]]

Annotator 2 provided [[0, 2], [0], [0], [0, 1]]
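
For context, the per-word label lists can be flattened into the (coder, item, label-set) triples that NLTK's AnnotationTask expects, roughly like this (a sketch; the variable names are just illustrative, and it produces the same task_data I hard-code below):

# Per-word label lists, one inner list of labels per word
annotations = {
    'coder0': [[0, 1, 2], [0, 1], [0], [0]],   # Annotator 1
    'coder1': [[0, 2], [0], [0], [0, 1]],      # Annotator 2
}

# Flatten into (coder, item, frozenset-of-labels) triples
task_data = [
    (coder, f'word{i}', frozenset(labels))
    for coder, word_labels in annotations.items()
    for i, labels in enumerate(word_labels)
]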

I used NLTK's agreement module to calculate Krippendorff's alpha (since this is a multi-label task). To calculate alpha, I did the following:

from nltk.metrics.agreement import AnnotationTask
from nltk.metrics.distance import masi_distance

task_data = [
    ('coder0', 'word0', frozenset([0, 1, 2])),
    ('coder0', 'word1', frozenset([0, 1])),
    ('coder0', 'word2', frozenset([0])),
    ('coder0', 'word3', frozenset([0])),

    ('coder1', 'word0', frozenset([0, 2])),
    ('coder1', 'word1', frozenset([0])),
    ('coder1', 'word2', frozenset([0])),
    ('coder1', 'word3', frozenset([0, 1])),
]

task = AnnotationTask(distance=masi_distance)
task.load_array(task_data)
print(task.alpha())

Using the above code, I received very low alpha values. To investigate, I tried a minimal example:

task_data = [
    ('coder1', 'Item0', frozenset(['l1'])),
    ('coder2', 'Item0', frozenset(['l1', 'l2'])),
]

task = AnnotationTask(distance=masi_distance)
task.load_array(task_data)

and task.alpha() comes out as ZERO.
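
For comparison, the MASI distance between those two label sets, computed directly, is non-zero (with NLTK's implementation it comes out to roughly 0.665):

from nltk.metrics.distance import masi_distance

# Direct pairwise MASI distance between the two annotations for Item0
print(masi_distance(frozenset(['l1']), frozenset(['l1', 'l2'])))  # roughly 0.665, clearly non-zero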

My questions are:

  1. Is this the correct way of calculating agreement for a sequence-labeling, multi-label task?
  2. Why is alpha zero for the toy example? As shown above, the MASI distance between the two label sets is non-zero.
  3. Is there a better or more reliable metric for such a task?