There's a lot of information on the internet regarding ordinary line simplification,
https://www.jasondavies.com/simplify/
https://bost.ocks.org/mike/simplify/
http://geomalgorithms.com/a16-_decimate-1.html
http://mourner.github.io/simplify-js/
i.e. when the simplified points are known upfront (Visvalingam's algorithm, the Douglas-Peucker algorithm). But what if the tolerance parameter is fixed and the points are not known upfront? I have a lot of points, and I wouldn't want to run an N log N algorithm M thousand times; instead I'd like to process my set incrementally. Intersections don't matter; the goal is just to reduce the size of the dataset with minimal visual impact. Is there some smart way to deal with this problem?
If intersections don't matter and all you need is visual similarity (presumably after rasterization, and with ε close to the pixel size), simply discarding points that are close enough to the already-reduced chain may do the job.
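A minimal streaming sketch of that idea in Python, using the simplest "close enough" test, the radial distance to the last kept point (the function name, the `eps` parameter, and the choice of radial distance rather than distance-to-segment are my assumptions, not a prescribed implementation):

```python
import math

def simplify_stream(points, eps):
    """Incrementally drop points that lie within eps of the last kept point.

    points: iterable of (x, y) tuples, consumed once (O(n), no lookahead).
    eps: fixed tolerance, e.g. roughly one pixel after rasterization.
    Yields the reduced chain as points arrive.
    """
    it = iter(points)
    try:
        last = next(it)
    except StopIteration:
        return  # empty input
    yield last  # always keep the first point
    prev = last
    for p in it:
        # keep p only if it moved far enough from the previously kept point
        if math.dist(p, last) >= eps:
            yield p
            last = p
        prev = p
    # always keep the final point so the chain ends where the input ends
    if prev != last:
        yield prev

pts = [(0, 0), (0.1, 0.05), (0.2, 0.1), (1, 0), (1.05, 0.02), (2, 1)]
print(list(simplify_stream(pts, eps=0.5)))  # → [(0, 0), (1, 0), (2, 1)]
```

A distance-to-segment test (point against the segment between the last two kept points) discards more collinear points at slightly higher cost per point; the radial version above is the cheapest variant and is often used as a pre-pass (e.g. in simplify-js) before a heavier algorithm.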
What this approach guarantees: every discarded point lies within ε of the retained chain, so the visual deviation from the original stays below ε.
What it doesn't guarantee: