I am currently trying (and struggling) to implement a WeightedCorePredicate for a weighted DBSCAN in ELKI. The modified core predicate is shown here:
public boolean isCorePoint(DBIDRef point, DBIDs neighbors) {
  WeightSum = 0.0; // Make sure to initialize the weights as 0
  for (DBIDIter it = neighbors.iter(); it.valid(); it.advance()) {
    /*
     * Within here, I need to extract the original indices of the neighbours
     * detected in the original file, link them back to the original data, and
     * accumulate the weight column into WeightSum to get the weighted core
     * points for DBSCAN.
     */
  }
  return WeightSum >= minpts;
}
The idea is that one of the inputs to the core predicate is the weight column of the dataset, and I would 'pluck' out the weights using the extracted indices. My trouble is getting the original indices of the data that the neighbours' DBIDs point to. I have tried internalGetIndex() (which has no link to the indices of the original data) and offset indices, but these too have had no luck, and I am at my wits' end as to how to get around this issue.
If anyone can assist, I would be most grateful
Thanks
Do it the other way around: store the weight of each object in a DoubleDataStore that you can access by DBID. Don't go back to "original" indexes.

Also, use a local variable for the sum, or the code will have a race condition when run in parallel.