I've got a simple app that displays an image inside an ImageView, which is inside a scroll view, which in turn is in a stack view along with some buttons. I've set up the image view/scroll view to support pinch/zoom.
Now I've added a UITapGestureRecognizer to detect touches, and I then grab the x,y coordinates in the image. Based on previous StackOverflow questions/answers, I translate the coordinates from the gesture recognizer back to the original image. Here is my gesture callback:
@IBAction func didTapImage(tapGestureRecognizer: UITapGestureRecognizer)
{
    guard let image = imageView.image else {
        return
    }
    let touchPoint: CGPoint = tapGestureRecognizer.location(in: imageView)
    print("image clicked: x: \(touchPoint.x) y: \(touchPoint.y)")
    print("image size is \(image.size)")
    print("frame size is \(imageView.frame.size)")
    // touch point relative to imageView, then translate to the image coordinates
    let x_prop = touchPoint.x / imageView.frame.size.width
    let y_prop = touchPoint.y / imageView.frame.size.height
    let new_x = x_prop * image.size.width
    let new_y = y_prop * image.size.height
    print("x_prop: \(x_prop), y_prop: \(y_prop)")
    print("new_x: \(new_x) new_y: \(new_y)")
}
What I'm seeing is that close to the center of the image, the coordinates seem pretty accurate. When I tap at roughly 0,0, I'm finding the x value is really distorted while y seems accurate.
I've set the image, image view, and scroll view to scaleAspectFit.
Any ideas why the x,y coordinates in the gesture callback are distorted? (Below is the code I use to assemble the main view.)
Bobby
//scrollView.frame = view.bounds
scrollView.zoomScale = 1.0
scrollView.maximumZoomScale = 10.0
scrollView.minimumZoomScale = 0.5
scrollView.delegate = self
scrollView.isUserInteractionEnabled = true
scrollView.translatesAutoresizingMaskIntoConstraints = false
scrollView.contentMode = .scaleAspectFit
// set up image view for scale aspect fit, allow user interaction (for clicking)
// and add the gesture for detecting touch
imageView.contentMode = .scaleAspectFit
imageView.isUserInteractionEnabled = true
let singleTap = UITapGestureRecognizer(target: self, action: #selector(didTapImage))
imageView.addGestureRecognizer(singleTap)
imageView.image = theImage
// stackview setup
stackView.frame = view.bounds
stackView.axis = .vertical
stackView.distribution = .fillProportionally
//stackView.distribution = .fillEqually
//stackView.distribution = .equalSpacing
stackView.spacing = 5
stackView.translatesAutoresizingMaskIntoConstraints = false
stackView.contentMode = .scaleAspectFit
stackView.addArrangedSubview(selectButton)
stackView.addArrangedSubview(resetButton)
stackView.addArrangedSubview(imageView)
stackView.addArrangedSubview(textView)
contentView.contentMode = .scaleAspectFit
contentView.addSubview(stackView)
scrollView.addSubview(contentView)
view.addSubview(scrollView)
// set up layout constraints
selectButton.heightAnchor.constraint(equalToConstant: 0.1*view.frame.size.height).isActive = true
resetButton.heightAnchor.constraint(equalToConstant: 0.1*view.frame.size.height).isActive = true
textView.heightAnchor.constraint(equalToConstant: 0.40*view.frame.size.height).isActive = true
imageView.heightAnchor.constraint(equalToConstant: 0.40*view.frame.size.height).isActive = true
scrollView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor).isActive = true
scrollView.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor).isActive = true
scrollView.leadingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leadingAnchor).isActive = true
scrollView.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor).isActive = true
contentView.centerXAnchor.constraint(equalTo: scrollView.centerXAnchor).isActive = true
contentView.topAnchor.constraint(equalTo: scrollView.topAnchor).isActive = true
contentView.bottomAnchor.constraint(equalTo: scrollView.bottomAnchor).isActive = true
contentView.leadingAnchor.constraint(equalTo: scrollView.leadingAnchor).isActive = true
contentView.trailingAnchor.constraint(equalTo: scrollView.trailingAnchor).isActive = true
stackView.leadingAnchor.constraint(equalTo: contentView.leadingAnchor).isActive = true
stackView.trailingAnchor.constraint(equalTo: contentView.trailingAnchor).isActive = true
stackView.topAnchor.constraint(equalTo: contentView.topAnchor).isActive = true
stackView.bottomAnchor.constraint(equalTo: contentView.bottomAnchor).isActive = true
'''
Code to draw crosshair on image:
'''
func markImage(x_prop: CGFloat, y_prop: CGFloat)
{
    guard let image = imageView.image else {
        return
    }
    let imageSize = image.size
    let scale: CGFloat = 0  // 0 means "use the device's screen scale"
    let length: CGFloat = max(imageSize.width/48, imageSize.height/48)
    let gap: CGFloat = length / 1.5
    var actualX = imageSize.width * x_prop
    var actualY = imageSize.height * y_prop
    //actualX = actualX.rounded()
    //actualY = actualY.rounded()
    print("markImage at \(actualX) \(actualY)")
    UIGraphicsBeginImageContextWithOptions(imageSize, false, scale)
    image.draw(at: CGPoint.zero)
    var uiColor = hexToUIColor(rgbVal: 0xFE00DD)
    uiColor.setFill()
    // horizontal lines in target
    var rectangle = CGRect(x: actualX - length - gap, y: actualY - gap/2,
                           width: length, height: gap)
    UIRectFill(rectangle)
    rectangle = CGRect(x: actualX + gap, y: actualY - gap/2,
                       width: length, height: gap)
    UIRectFill(rectangle)
    // vertical lines in target
    rectangle = CGRect(x: actualX - gap/2, y: actualY - length - gap,
                       width: gap, height: length)
    UIRectFill(rectangle)
    rectangle = CGRect(x: actualX - gap/2, y: actualY + gap,
                       width: gap, height: length)
    UIRectFill(rectangle)
    // for testing, draw a white rectangle 20x20 centered at actual x,y
    uiColor = hexToUIColor(rgbVal: 0xFFFFFF)
    uiColor.setFill()
    rectangle = CGRect(x: actualX - 10, y: actualY - 10, width: 20, height: 20)
    UIRectFill(rectangle)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    imageView.image = newImage
}
'''
The reason your calculated coordinates are off is that you're not accounting for the image view's .scaleAspectFit content mode. Take a look at this example...
Suppose I have a self-portrait I drew, whose pixel dimensions are 300 x 600, and I want cross-hairs at the center of the right eye, located at 100, 100 (in pixels).
When working with tap locations and view sizes, we have to work in points. So, if we have a 300 x 300 image view with .contentMode = .scaleAspectFit, the image is scaled down to 150 x 300 and centered horizontally. If we tap the right eye, the tap in the image view lands at 125, 50 points, and if we draw at 125, 50 in the original image, the mark ends up well away from the eye.
Which is, obviously, not where we want it.
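Spelled out as code, the arithmetic behind those numbers (a standalone sketch using the example's sizes):
'''
import UIKit

let imageSize = CGSize(width: 300, height: 600)   // pixel dimensions of the portrait
let viewSize = CGSize(width: 300, height: 300)    // image view size in points

// aspect-fit scale factor
let fitScale = min(viewSize.width / imageSize.width,
                   viewSize.height / imageSize.height)          // 0.5
let shownSize = CGSize(width: imageSize.width * fitScale,
                       height: imageSize.height * fitScale)     // 150 x 300

// the fitted image is centered in the view
let xOffset = (viewSize.width - shownSize.width) / 2            // 75
let yOffset = (viewSize.height - shownSize.height) / 2          // 0

// the right eye at pixel 100,100 ends up at this point in the image view
let tapPoint = CGPoint(x: xOffset + 100 * fitScale,             // 125
                       y: yOffset + 100 * fitScale)             // 50
'''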
So, first we need to calculate the rectangle the image actually occupies, relative to the imageView's bounds. We can do that with a small helper func.
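A minimal sketch of such a helper, assuming a .scaleAspectFit content mode (the name aspectFitRect is illustrative):
'''
import UIKit

// Returns the rect the image occupies inside `bounds` under .scaleAspectFit
func aspectFitRect(for imageSize: CGSize, in bounds: CGRect) -> CGRect {
    // scale that fits the whole image inside the bounds, preserving aspect ratio
    let scale = min(bounds.width / imageSize.width, bounds.height / imageSize.height)
    let size = CGSize(width: imageSize.width * scale, height: imageSize.height * scale)
    // the fitted image is centered in the bounds
    let origin = CGPoint(x: bounds.midX - size.width / 2,
                         y: bounds.midY - size.height / 2)
    return CGRect(origin: origin, size: size)
}
'''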
Calling it with the image's size and the image view's bounds gives a resulting rect of:
x: 75, y: 0, w: 150, h: 300
Or, much easier, import AVFoundation and use AVMakeRect(aspectRatio:insideRect:), which does the same calculation. Now we can subtract that rect's origin from the tapped point, then scale the result to match the scaled size of the image:
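Putting those steps into the tap handler could look like this, assuming the touchPoint and image constants from the question's callback (AVMakeRect is AVFoundation's helper; the intermediate variable names are illustrative):
'''
import AVFoundation

// rect the image actually occupies inside the image view (aspect-fit)
let imageRect = AVMakeRect(aspectRatio: image.size, insideRect: imageView.bounds)

// ignore taps that land in the empty letterbox/pillarbox area
guard imageRect.contains(touchPoint) else { return }

// shift the tap so 0,0 is the top-left of the visible image...
let pointInImageRect = CGPoint(x: touchPoint.x - imageRect.origin.x,
                               y: touchPoint.y - imageRect.origin.y)

// ...then scale from the displayed size up to the image's own size
let pointInImage = CGPoint(x: pointInImageRect.x / imageRect.width * image.size.width,
                           y: pointInImageRect.y / imageRect.height * image.size.height)
'''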
Note that, depending on how we're drawing on (modifying) the actual bitmap image, we may also need to take into account the image's .scale ... for example, I might have @2x and @3x versions, so the actual pixel dimensions might be 300 x 600 (@2x) or 450 x 900 (@3x).
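For instance, if the true pixel dimensions are needed (a one-line sketch; img stands for whichever UIImage is being modified):
'''
// UIImage.size is in points; multiply by .scale to get pixel dimensions
let pixelSize = CGSize(width: img.size.width * img.scale,
                       height: img.size.height * img.scale)
'''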
Since you say you will NOT be saving the image with the cross-hairs drawn on it, let me offer a much simpler approach that will avoid (almost) all of that.
Let's write a UIImageView subclass, with a CAShapeLayer for the cross-hairs.
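A sketch of what that subclass could look like (the class name CrossHairsImageView and the drawing details are assumptions, not the answer's exact code):
'''
import UIKit

class CrossHairsImageView: UIImageView {

    private let crossHairsLayer = CAShapeLayer()

    override init(frame: CGRect) {
        super.init(frame: frame)
        commonInit()
    }
    required init?(coder: NSCoder) {
        super.init(coder: coder)
        commonInit()
    }

    private func commonInit() {
        isUserInteractionEnabled = true
        // the cross-hairs live in a shape layer on top of the image
        crossHairsLayer.strokeColor = UIColor.magenta.cgColor
        crossHairsLayer.fillColor = UIColor.clear.cgColor
        crossHairsLayer.lineWidth = 2
        layer.addSublayer(crossHairsLayer)

        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        addGestureRecognizer(tap)
    }

    @objc private func handleTap(_ gesture: UITapGestureRecognizer) {
        showCrossHairs(at: gesture.location(in: self))
    }

    // draw cross-hairs centered on a point in the view's own coordinate space
    func showCrossHairs(at point: CGPoint, armLength: CGFloat = 20, gap: CGFloat = 6) {
        let path = UIBezierPath()
        // left and right arms
        path.move(to: CGPoint(x: point.x - gap - armLength, y: point.y))
        path.addLine(to: CGPoint(x: point.x - gap, y: point.y))
        path.move(to: CGPoint(x: point.x + gap, y: point.y))
        path.addLine(to: CGPoint(x: point.x + gap + armLength, y: point.y))
        // top and bottom arms
        path.move(to: CGPoint(x: point.x, y: point.y - gap - armLength))
        path.addLine(to: CGPoint(x: point.x, y: point.y - gap))
        path.move(to: CGPoint(x: point.x, y: point.y + gap))
        path.addLine(to: CGPoint(x: point.x, y: point.y + gap + armLength))
        crossHairsLayer.path = path.cgPath
    }
}
'''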
Now we can tap away on the image view and the cross-hairs appear right where we tap, and we don't have to do any other calculations when zooming in a scroll view, since the shape layer scales along with the view.
Here's an example controller, with two buttons, the custom image view, and a text view ... in a vertical stack view in a "content" view in a scroll view:
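A sketch of such a controller, reusing the CrossHairsImageView above and mirroring the question's layout (the specific sizes and button titles are assumptions):
'''
import UIKit

class CrossHairsViewController: UIViewController, UIScrollViewDelegate {

    let scrollView = UIScrollView()
    let contentView = UIView()
    let stackView = UIStackView()
    let selectButton = UIButton(type: .system)
    let resetButton = UIButton(type: .system)
    let imageView = CrossHairsImageView()
    let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBackground

        selectButton.setTitle("Select Image", for: .normal)
        resetButton.setTitle("Reset", for: .normal)

        imageView.contentMode = .scaleAspectFit
        imageView.image = UIImage(named: "theImage")   // assumed asset name

        scrollView.delegate = self
        scrollView.minimumZoomScale = 0.5
        scrollView.maximumZoomScale = 10.0

        stackView.axis = .vertical
        stackView.spacing = 5
        [selectButton, resetButton, imageView, textView].forEach {
            stackView.addArrangedSubview($0)
        }

        contentView.addSubview(stackView)
        scrollView.addSubview(contentView)
        view.addSubview(scrollView)

        [scrollView, contentView, stackView].forEach {
            $0.translatesAutoresizingMaskIntoConstraints = false
        }

        let g = view.safeAreaLayoutGuide
        NSLayoutConstraint.activate([
            scrollView.topAnchor.constraint(equalTo: g.topAnchor),
            scrollView.bottomAnchor.constraint(equalTo: g.bottomAnchor),
            scrollView.leadingAnchor.constraint(equalTo: g.leadingAnchor),
            scrollView.trailingAnchor.constraint(equalTo: g.trailingAnchor),

            contentView.topAnchor.constraint(equalTo: scrollView.contentLayoutGuide.topAnchor),
            contentView.bottomAnchor.constraint(equalTo: scrollView.contentLayoutGuide.bottomAnchor),
            contentView.leadingAnchor.constraint(equalTo: scrollView.contentLayoutGuide.leadingAnchor),
            contentView.trailingAnchor.constraint(equalTo: scrollView.contentLayoutGuide.trailingAnchor),
            contentView.widthAnchor.constraint(equalTo: scrollView.frameLayoutGuide.widthAnchor),

            stackView.topAnchor.constraint(equalTo: contentView.topAnchor),
            stackView.bottomAnchor.constraint(equalTo: contentView.bottomAnchor),
            stackView.leadingAnchor.constraint(equalTo: contentView.leadingAnchor),
            stackView.trailingAnchor.constraint(equalTo: contentView.trailingAnchor),

            selectButton.heightAnchor.constraint(equalTo: g.heightAnchor, multiplier: 0.1),
            resetButton.heightAnchor.constraint(equalTo: g.heightAnchor, multiplier: 0.1),
            imageView.heightAnchor.constraint(equalTo: g.heightAnchor, multiplier: 0.4),
            textView.heightAnchor.constraint(equalTo: g.heightAnchor, multiplier: 0.4),
        ])
    }

    // zoom the whole content view, cross-hairs included
    func viewForZooming(in scrollView: UIScrollView) -> UIView? {
        return contentView
    }
}
'''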