The following code should create a support vector classifier (SVM with linear kernel) using the ksvm function from the kernlab package:
library(kernlab)
set.seed(1)
x <- rbind(matrix(rnorm(10 * 2, mean = 0), ncol = 2),
           matrix(rnorm(10 * 2, mean = 2), ncol = 2))
y <- c(rep(-1, 10), rep(1, 10))
svc <- ksvm(x, y, type = "C-svc", kernel = "vanilladot")
plot(svc, data = x)
The resulting graph:
If my understanding is correct, the black shapes are the support vectors, which are the data points that lie inside or on the boundary of the margin.
So what's up with the topmost black dot? There are three open dots (so not support vectors) that are closer to the decision boundary. (Two are nearby and easy to see. The third is harder to see unless you zoom in on the picture, but it's the one furthest to the right.)
Either there is a bug in the implementation here or I'm missing something conceptual about the way this is supposed to work. Any insights?

There's nothing wrong with your results. The 6 support vectors are indeed closest to your decision surface (i.e. a line, in your case). I admit that the shading in the plot you're showing looks a bit odd; could it be an optical artefact?
Let's reproduce your results using svm from the e1071 library (since I'm more familiar with e1071 than with kernlab). Here is your sample data.
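A minimal sketch of that setup, mirroring the data generation from the question (the data frame name dat and the use of a factor response are my choices; e1071 expects a factor for classification):

library(e1071)

set.seed(1)
x <- rbind(matrix(rnorm(10 * 2, mean = 0), ncol = 2),
           matrix(rnorm(10 * 2, mean = 2), ncol = 2))
y <- c(rep(-1, 10), rep(1, 10))
# put predictors and class labels in a data frame for the formula interface
dat <- data.frame(x = x, y = as.factor(y))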
Let's use svm as a classification machine with a linear kernel. scale = FALSE ensures that the data are not scaled. We then plot the decision surface and the support vectors (SVs).
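Something along these lines should reproduce that step (svmfit is just an assumed object name, continuing from the dat created above):

# linear-kernel C-classification; scale = FALSE keeps the data on its original scale
svmfit <- svm(y ~ ., data = dat, kernel = "linear", scale = FALSE)
# plot.svm shades the two decision regions and marks support vectors with an x
plot(svmfit, dat)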
The SVs are marked by the x symbols. You can clearly see how the SVs lie nearest to the separating decision line. We can also extract the parameters of the decision line (i.e. its normal vector) and manually plot the decision line and the data:
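For a linear kernel the fitted decision function is f(x) = w . x - rho, so the normal vector w can be recovered from the support vectors and their coefficients. A rough sketch, again assuming the svmfit object from above:

w <- t(svmfit$coefs) %*% svmfit$SV   # normal vector of the decision line
b <- svmfit$rho                      # offset: the boundary is w . x - b = 0

plot(x, col = ifelse(y > 0, "blue", "red"), pch = 19, xlab = "x1", ylab = "x2")
points(svmfit$SV, pch = 4, cex = 2)                    # mark the support vectors
abline(a = b / w[2], b = -w[1] / w[2])                 # decision line
abline(a = (b + 1) / w[2], b = -w[1] / w[2], lty = 2)  # upper margin
abline(a = (b - 1) / w[2], b = -w[1] / w[2], lty = 2)  # lower margin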