Project Graveyard – Devlog 002

Starting work on the next version of Project Graveyard, using visionOS 26 features.

It has been quite a while since I last worked on this silly little side project. It doesn’t get a lot of attention from me. It is a free app that I mostly made for myself and my friends. But it can be a useful sandbox to explore new visionOS features. The updates we got at WWDC 2025 opened up some new possibilities for Project Graveyard. Some new features can replace a lot of the hacks I had developed in the past.

Low Hanging Fruit

ViewAttachmentComponent allows me to define attachments and bind them to gravestones all in the same place. Before visionOS 26, I had to create these up front and keep them in sync with the parent entities. With ViewAttachmentComponent, I can do something like this.

let newStone = stoneEntity.clone(recursive: true) // imported from an RCP scene
let attachmentEntity = Entity() // we'll offset this to place the attachment in front of the stone mesh
let attachment = ViewAttachmentComponent(rootView: GraveStoneFace(item: item))
attachmentEntity.components.set(attachment)
newStone.addChild(attachmentEntity, preservingWorldTransform: true)
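The comment above mentions offsetting the attachment entity so the card sits in front of the stone mesh. A minimal sketch of that step — the offset values here are hypothetical placeholders, not the app's actual numbers:

```swift
// Nudge the attachment slightly up and forward so the SwiftUI card floats
// just in front of the stone mesh instead of clipping into it.
// Illustrative values only; tune them to your model's dimensions.
attachmentEntity.position = SIMD3<Float>(0, 0.05, 0.03)
```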

PresentationComponent lets me show a popover relative to a transform in the volume. Each stone entity has a hidden transform just above it. I used this to place a selection indicator in the last version. I’m reusing it here as the anchor for the popover. I can create a PresentationComponent and pass in a SwiftUI view. I set isPresented to false. Instead of using SwiftUI state to toggle this popover, I’ll show and hide it based on user input.

if let selection = newStone.findEntity(named: "selection") {
    selection.isEnabled = false
    var presentation = PresentationComponent(
        configuration: .popover(arrowEdge: .bottom),
        content: FormItemDataNew(item: item, entity: newStone)
    )
    presentation.isPresented = false
    selection.components.set(presentation)
}

I think these small popover cards are great for customizing objects: changing materials or colors, adjusting scene settings, and so on. I’m less sure about text editing. Each gravestone has three text values that the user can edit, and visionOS seems to have a hard time focusing these fields based on user taps. Sometimes it can take several taps to get the keyboard to display. I might have to keep text editing in a secondary/utility window if things don’t improve.

GestureComponent lets me simplify the tap gesture. Instead of a gesture that targets entities based on a component type, I can do this.

let tapGesture = TapGesture()
    .onEnded { [weak entity] _ in
        if let selection = entity?.findEntity(named: "selection") {
            selection.components[PresentationComponent.self]?.isPresented.toggle()
        }
    }
let gestureComponent = GestureComponent(tapGesture)
entity.components.set(gestureComponent)

The magic part here is [weak entity] _ in. At first I tried using the entity directly in the onEnded closure, but that captures a strong reference to the very entity that owns the gesture component. I looked around for examples of how Apple uses GestureComponent and found this pattern in Canyon Crosser. See it for yourself in AppModel+Setup.swift line 46. The weak capture lets the gesture reference the entity as an optional without retaining it. When a tap occurs, we check whether we can reach the PresentationComponent on the selected child entity. If so, we toggle the isPresented value.

New App Modes

I love the new Manipulation Component in visionOS 26. I’m using it in two different ways in this project. These uses make up two of the three new app modes.

  1. Look: this is the default mode. Users can look at the graveyard. In this mode, Manipulation Component lets them pick it up and bring it close for inspection. They can scale it up to take a closer look. A billboard component is applied while an entity is being handled in this mode.
  2. Move: we’ll use this mode to allow users to arrange the graveyard. We use Manipulation Component again, but with some constraints. They can only move the stone around on the ground, and only within the bounds of the graveyard. They can only rotate it around the Y axis and can only scale within a fixed range. These constraints are applied by overwriting the transforms in ManipulationEvents.DidUpdateTransform.
  3. Edit: this mode removes manipulation and enables the Tap Gesture we looked at above. This shows a SwiftUI view using PresentationComponent.
// Manipulation for .display mode removes rotation
// there is no reason to try to apply rotation when we have a billboard component anyway
var mc = ManipulationComponent()
mc.dynamics.primaryRotationBehavior = .none
mc.dynamics.secondaryRotationBehavior = .none
entity.components.set(mc)

// Manipulation for .arrange mode removes primary rotation, but allows secondary
// this mode also sets the release behavior to .stay
var mc = ManipulationComponent()
mc.releaseBehavior = .stay
mc.dynamics.primaryRotationBehavior = .none
entity.components.set(mc)
// Constraining transform changes while in .arrange mode
_ = content.subscribe(to: ManipulationEvents.DidUpdateTransform.self) { event in
    if (viewModel.interactionMode == .arrange) {
        let newPos = Grave3DHelpers.constrainPosition(event.entity.position, limit: 4.0)
        let newRot = Grave3DHelpers.constrainRotationToYAxis(event.entity.transform.rotation)
        let newScale = Grave3DHelpers.constrainScale(event.entity.scale.x, minScale: 0.25, maxScale: 1.25)
        let newTransform = Transform(scale: newScale, rotation: newRot, translation: newPos)
        event.entity.transform = newTransform
    } 
}
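The Grave3DHelpers functions referenced above aren't shown in this post. Here is a rough sketch of what they might look like — the names come from the snippet, but these implementations are my assumptions (clamping X/Z to a square ground area with Y pinned to an assumed ground plane of 0, extracting only the yaw from the rotation, and clamping a uniform scale):

```swift
import simd

// Hypothetical implementations of the helpers used in the
// DidUpdateTransform subscription above. The real versions may differ.
enum Grave3DHelpers {
    // Clamp X and Z to ±limit so stones stay inside the graveyard,
    // and pin Y to the ground plane (assumed to be y = 0 here).
    static func constrainPosition(_ position: SIMD3<Float>, limit: Float) -> SIMD3<Float> {
        SIMD3<Float>(
            min(max(position.x, -limit), limit),
            0,
            min(max(position.z, -limit), limit)
        )
    }

    // Keep only the rotation around the Y axis: rotate a forward vector,
    // project it onto the XZ plane, and rebuild a yaw-only quaternion.
    static func constrainRotationToYAxis(_ rotation: simd_quatf) -> simd_quatf {
        let forward = rotation.act(SIMD3<Float>(0, 0, 1))
        let yaw = atan2(forward.x, forward.z)
        return simd_quatf(angle: yaw, axis: SIMD3<Float>(0, 1, 0))
    }

    // Clamp a uniform scale into [minScale, maxScale] and return it as a vector.
    static func constrainScale(_ scale: Float, minScale: Float, maxScale: Float) -> SIMD3<Float> {
        SIMD3<Float>(repeating: min(max(scale, minScale), maxScale))
    }
}
```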

For now, I hacked together a few buttons to switch between these modes and stuck them in a toolbar. I’d like to do something a little better though. I’m thinking of an expandable/collapsible control that integrates better with the volume. I’d also like to re-enable look/display mode after a short period of inactivity.
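The mode switching itself boils down to swapping components on each stone. This is a reconstruction of the idea described above (the enum name and cases are assumptions, and the billboard is applied for the whole mode here as a simplification, rather than only while an entity is handled):

```swift
import RealityKit

enum InteractionMode {
    case display, arrange, edit
}

// Hypothetical sketch: reconfigure a stone entity when the app mode changes.
func apply(_ mode: InteractionMode, to entity: Entity) {
    // Clear out whatever the previous mode may have set.
    entity.components.remove(ManipulationComponent.self)
    entity.components.remove(GestureComponent.self)
    entity.components.remove(BillboardComponent.self)

    switch mode {
    case .display:
        // Free manipulation for inspection; the billboard handles facing,
        // so rotation is disabled entirely.
        var mc = ManipulationComponent()
        mc.dynamics.primaryRotationBehavior = .none
        mc.dynamics.secondaryRotationBehavior = .none
        entity.components.set(mc)
        entity.components.set(BillboardComponent())
    case .arrange:
        // Constrained manipulation; stones stay where they are released.
        var mc = ManipulationComponent()
        mc.releaseBehavior = .stay
        mc.dynamics.primaryRotationBehavior = .none
        entity.components.set(mc)
    case .edit:
        // No manipulation; taps toggle the PresentationComponent popover instead.
        let tap = TapGesture().onEnded { [weak entity] _ in
            entity?.findEntity(named: "selection")?
                .components[PresentationComponent.self]?.isPresented.toggle()
        }
        entity.components.set(GestureComponent(tap))
    }
}
```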

Aside from the text input issue I mentioned, I’m happy with these changes. There are still a lot of things I want to work on before release. If you want to try it this summer, please let me know. I’ll get a TestFlight version out at some point.

Demo Video

DEVLOG 002 Video

Questions or feedback?

2 Comments

  1. I am trying to get the new PresentationComponent working as you describe in your project above, but my simple Text view (added as a PresentationComponent) does not appear in my RealityView even though the entity is found. Here is a simple example built from an Xcode immersive view default project:

    struct ImmersiveView: View {
        @Environment(AppModel.self) var appModel

        var body: some View {
            RealityView { content in
                // Add the initial RealityKit content
                if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                    content.add(immersiveContentEntity)

                    if let materializedImmersiveContentEntity = try? await Entity(named: "Test", in: realityKitContentBundle) {
                        content.add(materializedImmersiveContentEntity)

                        var presentation = PresentationComponent(
                            configuration: .popover(arrowEdge: .bottom),
                            content: Text("Hello, World!")
                                .foregroundColor(.red)
                        )
                        presentation.isPresented = true
                        materializedImmersiveContentEntity.components.set(presentation)
                    }
                }
            }
        }
    }

    In the visionOS 26 WWDC video they do the same thing, but it doesn’t work for me either:

    https://developer.apple.com/videos/play/wwdc2025/274/?time=962 (18:29 minutes into video)

    Here is the Apple reference: https://developer.apple.com/documentation/realitykit/presentationcomponent