String similarity where order and difference in ASCII code matter


Anybody aware of a string similarity method that would give the correct results for the below? I'm dealing with alphanumeric IDs where:

  1. a change in the early part of the string matters more than in the latter part. I guess I could do ngrams? Although that might break down in the scenario where one string has a prefix?
  2. The distance between the original and substituted characters matters: changing an "a" to "b" is less of a change than changing it to "c".

Levenshtein and Jaro-Winkler don't seem to be doing the right thing.

See example below.

import jellyfish
t1="100"
t21=["100a","a100"] # case 1. expecting: similar, not similar
t22=["101","105","200"] # case 2. expecting: similar, less similar, least similar

fun = jellyfish.levenshtein_distance
print([fun(t1, t) for t in t21]) # all the same
print([fun(t1, t) for t in t22]) # all the same

fun = jellyfish.jaro_winkler
print([fun(t1, t) for t in t21]) # all the same
print([fun(t1, t) for t in t22]) # all the same
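To make concrete the kind of metric I'm after, here is a hand-rolled sketch (not a jellyfish function) that encodes both requirements: positions are weighted with geometric decay so early characters dominate, and a substitution is penalized by the distance between the character codes. The decay rate and the 25-point normalization span are arbitrary illustrative choices, not tuned values.

```python
def weighted_similarity(s1, s2, decay=0.2, span=25.0):
    # Compare position by position; weight position i by decay**i so the
    # front of the string matters most. A substitution costs the code-point
    # distance normalized by `span` (clipped at 1); a missing character
    # contributes zero similarity at that position.
    n = max(len(s1), len(s2))
    if n == 0:
        return 1.0
    total = score = 0.0
    for i in range(n):
        w = decay ** i
        total += w
        if i < len(s1) and i < len(s2):
            penalty = min(1.0, abs(ord(s1[i]) - ord(s2[i])) / span)
            score += w * (1.0 - penalty)
    return score / total

t1 = "100"
print([round(weighted_similarity(t1, t), 3) for t in ["100a", "a100"]])  # first clearly higher
print([round(weighted_similarity(t1, t), 3) for t in ["101", "105", "200"]])  # strictly decreasing
```

With these (arbitrary) constants the orderings come out as hoped: "100a" scores far above "a100", and "101" > "105" > "200". But I'd rather use an established method than invent my own weighting.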

For added fun, here's a scenario where the first string has a prefix that is essentially irrelevant to the string as an ID but throws off string similarity.

t1="pre-100"
t21=["100a","a100"] # expecting: similar, not similar
t22=["101","105","200"] # expecting: similar, less similar, least similar

fun = jellyfish.levenshtein_distance
print([fun(t1, t) for t in t21]) # picks the wrong one
print([fun(t1, t) for t in t22]) # all the same

fun = jellyfish.jaro_winkler
print([fun(t1, t) for t in t21]) # picks the wrong one
print([fun(t1, t) for t in t22]) # picks the right one
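One workaround I've considered for the prefix case: if the prefix always ends in a fixed separator (an assumption about my ID format, as in "pre-100"), it can simply be stripped before comparing, which reduces this scenario to the first one:

```python
import re

def strip_prefix(s):
    # Assumption: prefixes are alphabetic and separated from the ID by a
    # hyphen, as in "pre-100". Strings without that pattern are untouched.
    return re.sub(r"^[A-Za-z]+-", "", s)

print(strip_prefix("pre-100"))  # 100
print(strip_prefix("a100"))     # a100 (no hyphen, left as-is)
```

This only works if the separator convention actually holds across all IDs, which is why I'd still prefer a similarity method that is robust to prefixes on its own.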