WWDC 2025 Vision Pro Developers' Wishlist
What we hope to see for visionOS, RealityKit, ARKit, and SwiftUI at WWDC 2025.
After working with visionOS for nearly two years, I have a few things on my list. I hope I didn’t go overboard. These generally fall into three categories:
- Users should be able to truly customize their use of Apple Vision Pro.
- Building on the APIs that visionOS has today.
- Creating visionOS apps should get easier every year.
My Wishlist
- Improved scene management APIs (Windows, Volumes, Spaces).
- Position windows relative to the user, expanding on the .utilityPanel placement (see the sketch after this list).
- Move windows after they have been opened. For example, summon a window when it is needed to complete a task.
- Group windows together into a shared construct so they can be moved and sized together.
- Snap windows to walls and surfaces in the shared space.
- Pin a window to a location and persist that location across device reboots.
- Disable the feature that hides part of a window or volume when it is near or overlapping another.
- Move from one space to another without exiting the first space. Space B replaces Space A when Space B loads.
- A window/volume flag to treat our windows the same way other apps’ windows are treated when entering an immersive space: hide windows with this flag set, then show them again when exiting the space.
- New windows and volumes should take ornaments into account when calculating their placement. We can hack around this now with sizes, but it isn’t a great solution.
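For context on the placement item above, visionOS 2’s defaultWindowPlacement with the utilityPanel position is the closest thing we have today. A minimal sketch, with a made-up window id and placeholder content:

```swift
import SwiftUI

@main
struct WishlistApp: App {
    var body: some Scene {
        WindowGroup(id: "tools") { // hypothetical window id
            Text("Tool palette")   // stand-in for real content
        }
        // Today's only user-relative position: low and close to the
        // user, like the system keyboard. We want more options here.
        .defaultWindowPlacement { _, _ in
            WindowPlacement(.utilityPanel)
        }
    }
}
```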
- RealityKit
- Attachments and ornaments should support the presentation APIs for pickers, menus, popovers, and so on.
- More than eight dynamic lights per scene.
- Support for area lights (surfaces and volumes that emit light).
- Generate dynamic concave collision shapes at runtime (today’s convex-only path is sketched after this list).
- Make it easier to generate physics joints from an array of entities.
- The hover effect component should support entity transformation. Bonus if hover effects can trigger entity actions.
- Create Timelines and Behaviors in code. We already have entity actions to call from them (the notification bridge we use today is sketched after this list).
- Update all gestures to support access to inputDevicePose3D. This is currently not available in the Tap and Spatial Tap gestures, but it is available from the Drag gesture.
- New components to solve common problems: object pooling, entity spawners, gesture managers, etc.
- Make it easier to change component values. Replace or obscure the whole “get component, edit value, set component” thing (the current pattern is sketched after this list).
- Support for using SwiftData with RealityView. Currently, RealityViews are not updated/notified when SwiftData imports new data from CloudKit.
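On the collision shapes item: at runtime today we can only generate convex hulls for dynamic bodies; concave geometry is limited to static meshes. A sketch of the current convex path, assuming the entity already carries a mesh:

```swift
import RealityKit

// Runtime collision today: a convex hull approximation of the mesh.
// Concave shapes require ShapeResource.generateStaticMesh, which only
// works for static (non-moving) bodies.
func addConvexCollision(to model: ModelEntity) async throws {
    guard let mesh = model.model?.mesh else { return }
    let shape = try await ShapeResource.generateConvex(from: mesh)
    model.components.set(CollisionComponent(shapes: [shape]))
}
```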
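On the Timelines item: the bridge from code today is posting the notification that a Reality Composer Pro Notification trigger listens for. A hedged sketch; “StartIntro” is a made-up identifier that must match the trigger set up in RCP:

```swift
import Foundation
import RealityKit

// Fire an RCP behavior from code, for example one that plays a Timeline.
func startIntro(in scene: RealityKit.Scene) {
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": "StartIntro"
        ]
    )
}
```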
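And the component dance itself; this is the read-mutate-write pattern we’d love to see replaced:

```swift
import RealityKit

// Today: read the component, mutate a local copy, write the copy back.
func fade(_ entity: Entity, to value: Float) {
    guard var opacity = entity.components[OpacityComponent.self] else { return }
    opacity.opacity = value
    entity.components.set(opacity)
}
```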
- Reality Composer Pro
- Add real documentation for this app to the Help menu. The menu currently points to a not-all-that-helpful web page.
- Merge the lists of components we can add in RCP and in code. Currently, there are many components we can add in one but not the other. For example, we can’t add the hover effect component in RCP.
- Timelines and actions should be able to read values from entity components and adjust behavior accordingly.
- Expand the list of behavior triggers and add conditional logic to these.
- Expand the list of entity actions in Timelines, including actions to animate the value of a property on any component.
- Create primitive plane entities. We can do this in code (see the sketch after this list), but not in RCP. Bonus if they can be two-sided.
- Preview on device shows content as a volume. Include an option to show it as an immersive space at real-world scale. This would turbocharge iterating on scenes.
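For reference, the in-code half of the plane item above. A minimal sketch:

```swift
import RealityKit
import UIKit

// A half-meter primitive plane: one call in code, not possible in RCP.
func makePlane() -> ModelEntity {
    ModelEntity(
        mesh: .generatePlane(width: 0.5, depth: 0.5),
        materials: [SimpleMaterial(color: .white, isMetallic: false)]
    )
}
```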
- ARKit
- Improve tracking speed across all features. Hand tracking is still very slow compared to any other XR device, even with predictive mode enabled. Object tracking is so slow I don’t even want to use it.
- Plane detection could do much better at reporting the useful planes we actually want in apps. It has classifications for windows and doors, but these rarely seem to work.
- Name the feature set: using the Anchoring component with a Spatial Tracking Session adds ARKit-like features to RealityKit. These disconnected features lack a name and a single source of documentation. Something like “Tracked RealityKit”.
- Hand anchors (Anchoring component or Anchor Entity) should use the scene physics space by default. I’ve lost count of the number of people who got stuck on this (the workaround is sketched after this list). Apple chose the wrong default value for this one.
- Head anchors should provide transform data when used with a Spatial Tracking Session, just like hand anchors do.
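The workaround for the hand anchor physics default mentioned above, for anyone currently stuck on it. A hedged sketch:

```swift
import RealityKit

// By default an AnchoringComponent runs physics in its own isolated
// simulation, so anchored hands never collide with scene content.
// Opting out puts the hand entity back in the scene's physics space.
func makeHandCollider() -> Entity {
    let hand = Entity()
    var anchoring = AnchoringComponent(.hand(.left, location: .palm))
    anchoring.physicsSimulation = .none // the default is .isolated
    hand.components.set(anchoring)
    hand.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.05)]))
    return hand
}
```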
- SwiftUI
- Navigation Split View should use separate glass panes for the list and detail areas.
- Navigation Stack and Navigation Split View should not add a glass background when using the plain window style.
- Inspector view! Include an option to render this as a detached glass pane to the side or bottom of a window.
- Improve hover effect to allow view transformations, rotations, etc.
- Make it easier to adapt SwiftUI views to very small windows. Spatial computing shines when it is close and personal; the huge windows and volumes we have today take up too much space and feel too distant.
- Gradient versions of glass materials for windows.
- Using the .help modifier should always show a tooltip when hovering over an item (see the sketch below). SwiftUI omits these in many cases today.
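For reference, the modifier from the last item; the tooltip it should produce appears inconsistently today:

```swift
import SwiftUI

struct ExportButton: View {
    var body: some View {
        Button("Export", systemImage: "square.and.arrow.up") {
            // hypothetical export action
        }
        // We'd like this tooltip to show on every hover.
        .help("Export the current scene as a USDZ file")
    }
}
```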
- End-user features
- App Store section to browse all visionOS apps.
- A visionOS native version of Reminders that supports multiple windows for each list/query.
- A “Stage Manager” system to group windows and volumes from multiple apps into sets. Hide or dismiss these. Quickly swap between them. Pin these to the user or to a room/area.
- Please let me turn off those “helpful” system tips. I’m not “too close” to a wall. I can see the wall.
- When using a system environment with progressive immersion, allow me to turn off the color/light tinting for the area that is rendering passthrough. Bonus if I can have a sharp edge between passthrough and the virtual content.
- Environment keyboard cutout should stay visible when I’m not typing.
- Let me hide that keyboard helper when using a physical keyboard.
- If I move the virtual keyboard, that is where it lives now. Stop moving it.
- Enter an immersive space to reorganize the app grid. Render all pages at once all around me in space. Make it easy to move apps without shifting other apps from page to page.
- App Library: Let me open a window that contains a list of all apps installed on the device. Let me sort and filter this list.
- Hide apps from the app grid (only show them in the App Library).
- Allow empty spaces in the app grid.
- Allow widgets in the app grid.
- Open Control Center and “pin” it open as a window that I can keep in my space.
- Improve iPad and iPhone apps running in visionOS. Add some padding that these apps can interpret as a safe area. This would go a long way toward making them more useful. Let me decide on a per-app basis whether an app should prefer a light or dark theme.
- Other items
- Swift Playgrounds on visionOS would be amazing! One of the main drawbacks of the iPad version is the limited space to draw a complex UI. visionOS does not share that limitation.
- Reality Composer Pro on Apple Vision Pro. Create and compose scenes on device, then export or sync them back to an Xcode project. Link these projects to Swift Playgrounds on device.
- Apps should be able to contribute system environments that users can use in the Shared Space.
- WebXR really needs some way to add hover effects like the rest of visionOS. This limitation makes WebXR scenes feel off and holds back the potential WebXR has to offer.
- Documentation: provide small code snippets on all API pages. Links to complex example projects are not helpful in this context.
From the Community
I asked a few members of our community what they wanted to see at WWDC 2025.
- Improvements to Collaboration
- I’d like to see visionOS Collaborative Session support, especially bridged with iOS and macOS support.
- More than five personas on a spatial FaceTime call.
- Additional spatial participants in a Spatial Template beyond five, so they can explore the space. visionOS would represent each person’s location in the space with “puck discs”.
- Placing iOS, macOS, and tvOS participants into seats in a Spatial Template.
- Expanded RealityKit support
- RealityView for tvOS
- RealityView SwiftUI attachments for iOS, macOS, tvOS
- TableTopKit for iOS, macOS, tvOS
- Pinning windows to real world surfaces/objects in the shared space
John wrote a longer version of his wishlist on his website.
“Update GameplayKit for RealityKit!” – Arman Dzhrahatspanian
“I would love an API to write to USD. Kind of being able to create Reality Composer Pro content with code.
Improved ARKit Room APIs with furniture and doors etc…” – Gil Nakache
“A flag to determine if Guest Mode is Enabled/Disabled so I can adapt my game to new users. Get the value of UI Zoom level from Settings -> Appearance” – Michael Bundy
“There’s currently a showstopper for our use cases. We create interactive USD scenes with both Reality Composer Pro and in-house tools. They load into Apple AR Quick Look, and no app is needed.
Four years ago there was an Object Anchor, based on points and stored inside .reality files only (so a secret format). Then came a successor based on machine learning, which only works on visionOS. The old .reality file format was recently deprecated, and the new alternative doesn’t yet support iOS/iPadOS. It’s not a nice situation. We hope for a change here.
The best solution would be xrOS on iOS, “iPhoneXR”. But then iPads are still left out. Allow me to wish.” – Thomas Kumlehn
What do you hope to see this year at WWDC? Reply to this email or leave a comment below.

How about Sony PSVR 2 Controller support?
Expanded spatial audio support to enable more than 10 instances of the spatial audio component. I’d love to see rendering groups to enable collections of spatial audio components. Most object-based audio systems support between 32 and 256 spatial audio instances; Apple’s API should remove the instance limitation.