Improved Drag Gesture
Let’s fix that “snapping” bug that we saw in the Drag Gesture example.
From the Drag Gesture post:
You may notice the entity “snap” from its current position when starting this gesture.
In the first example for Drag Gesture, we set the entity's position to a value calculated from the gesture, but the gesture's location3D does not necessarily match the entity's position.
We can smooth out this gesture by caching the initial position of the entity before the gesture starts, then adding it to the movement vector.
struct Example010: View {
    @State var isDragging: Bool = false
    @State var initialPosition: SIMD3<Float> = .zero

    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "GestureLabs", in: realityKitContentBundle) {
                content.add(scene)
                ...
            }
        }
        .gesture(dragGesture)
    }

    var dragGesture: some Gesture {
        DragGesture()
            .targetedToAnyEntity()
            .onChanged { value in
                // When we start the gesture, cache the entity position
                if !isDragging {
                    isDragging = true
                    initialPosition = value.entity.position
                }
                // Calculate the vector by which to move the entity
                let movement = value.convert(value.gestureValue.translation3D, from: .local, to: .scene)
                // Add the initial position and the movement to get the new position
                value.entity.position = initialPosition + movement
            }
            .onEnded { value in
                // Clean up when the gesture has ended
                isDragging = false
                initialPosition = .zero
            }
    }
}

What is happening here?
- We capture the entity position when the gesture starts
- We calculate the movement (translation in 3D space) by converting the translation3D value into the correct coordinate space
- We set the entity position by adding our captured position to the movement
- When the gesture ends, we reset the state vars
This works well, but I don’t like storing the state in the same view as my RealityView. Chances are that I’m going to need several gestures for my scene, so this could get cluttered. Here is an alternative that moves the gesture into a custom view modifier.
struct Example010: View {
    var body: some View {
        RealityView { content in
            // Load the scene from the RealityKit content bundle
            if let scene = try? await Entity(named: "GestureLabs", in: realityKitContentBundle) {
                content.add(scene)
                ...
            }
        }
        .modifier(DragGestureImproved010())
    }
}

struct DragGestureImproved010: ViewModifier {
    @State var isDragging: Bool = false
    @State var initialPosition: SIMD3<Float> = .zero

    func body(content: Content) -> some View {
        content
            .gesture(
                DragGesture()
                    .targetedToAnyEntity()
                    .onChanged { value in
                        // When we start the gesture, cache the entity position
                        if !isDragging {
                            isDragging = true
                            initialPosition = value.entity.position
                        }
                        // Calculate the vector by which to move the entity
                        let movement = value.convert(value.gestureValue.translation3D, from: .local, to: .scene)
                        // Add the initial position and the movement to get the new position
                        value.entity.position = initialPosition + movement
                    }
                    .onEnded { value in
                        // Clean up when the gesture has ended
                        isDragging = false
                        initialPosition = .zero
                    }
            )
    }
}

We could also store our state in a view model or some other data structure that could be exposed to multiple views and modifiers.
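As a minimal sketch of that idea, the drag state could live in an observable model that the modifier reads and writes. The names here (GestureViewModel, DragGestureWithModel) are hypothetical, not from the project above, and this assumes the Observation framework (visionOS 1 and later):

struct GestureLabsSketch {
    // Hypothetical model that owns the drag state. Because it is a class,
    // the same instance could be shared by several views and modifiers,
    // for example by passing it through the environment.
    @Observable
    class GestureViewModel {
        var isDragging: Bool = false
        var initialPosition: SIMD3<Float> = .zero
    }
}

struct DragGestureWithModel: ViewModifier {
    // Owned here for simplicity; inject it instead if other
    // modifiers need to observe the same state.
    @State var model = GestureLabsSketch.GestureViewModel()

    func body(content: Content) -> some View {
        content
            .gesture(
                DragGesture()
                    .targetedToAnyEntity()
                    .onChanged { value in
                        // Same logic as before, but the state lives in the model
                        if !model.isDragging {
                            model.isDragging = true
                            model.initialPosition = value.entity.position
                        }
                        let movement = value.convert(value.gestureValue.translation3D, from: .local, to: .scene)
                        value.entity.position = model.initialPosition + movement
                    }
                    .onEnded { _ in
                        model.isDragging = false
                        model.initialPosition = .zero
                    }
            )
    }
}

The trade-off is a little more indirection in exchange for state that is no longer tied to a single view's lifetime.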
Video Demo
Support our work so we can continue to bring you new examples and articles.
Download the Xcode project with this and many more examples from Step Into Vision.
Some examples are provided as standalone Xcode projects. You can find those here.

Thank you for your explanation and example.
Very easy to understand and implement.
Your blog/site is a good place to learn how to build for Vision Pro.
Thank you very much.
Thank you for the kind words! Let us know if there is a topic you would like to see. If we haven’t covered it yet I can make sure we get to it soon.