WWDC 2025 – Wishlist Results
A follow-up to the Wishlist post, to see what we got and what we didn’t.
We got a ton of new features and APIs in visionOS 26. You can read a short recap of some of my favorites. In this post, we’ll look back over my wishlist from before WWDC to see how I fared.
- Improved scene management APIs (Windows, Volumes, Spaces).
- ❌ Position windows relative to the user (expanding on the .utility placement).
- ❌ Move windows after they have been opened. For example, summon a window when it is needed to complete a task.
- ❌ Group windows together into a shared construct so they can be moved and sized together.
- ✅ Snap windows to walls and surfaces in the shared space.
- ✅ Pin a window to a location and permanently save that location even after a device reboot.
- ❌ Disable the feature that hides part of a window or volume when it is close to or overlaps another.
- ❌ Move from one space to another without exiting the first space. Space B would simply replace Space A when it loads.
- ❌ Window / volume flag to treat our windows the same way as other apps when entering an immersive space. Hide windows with this flag set, show them again when exiting the space.
- ❌ New windows and volumes should take ornaments into account when calculating their placement. We can hack around this now with sizes, but it isn’t a great solution.
- Notable additions that were not on my list
- Window launch and restoration behavior APIs
- Progressive view portrait aspect ratio
- Mixed immersive spaces can blend into system environments
- Unique windows by ID (use Window instead of Window Group)
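The unique-window addition above maps onto SwiftUI’s `Window` scene type. A minimal sketch (the scene ids and views are illustrative, not from the original post): `WindowGroup` can spawn many instances, while `Window` guarantees a single instance that `openWindow(id:)` reuses.

```swift
import SwiftUI

@main
struct ExampleApp: App {
    var body: some Scene {
        // A WindowGroup can open many instances of this scene.
        WindowGroup(id: "main") {
            ContentView()
        }

        // A Window scene is unique: only one "stats" window can exist.
        Window("Stats", id: "stats") {
            Text("Stats")
        }
    }
}

struct ContentView: View {
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        // Repeated presses bring the existing unique window forward
        // instead of creating duplicates.
        Button("Show Stats") { openWindow(id: "stats") }
    }
}
```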
- RealityKit
- ✅ Attachments and ornaments should support the presentation API for pickers, menus, popovers, etc…
- ❌ More than eight dynamic lights per scene.
- ❌ Support for area lights (surfaces and volumes that emit light).
- ❌ Generate dynamic concave collision shapes.
- ✅ Make it easier to generate physics joints from an array of entities. (There is a new instance method to do this.)
- ❌ Hover effect component should support entity transformation. Bonus if hover effects can trigger entity actions.
- ❌ Timelines and Behaviors in code. We already have entity actions to call with them.
- ❌ Update all gestures to support access to inputDevicePose3D. This is currently not available in Tap and Spatial Tap gestures, but it can be used from a Drag gesture.
- 🤷🏻‍♂️ New components to solve common problems: object pooling, entity spawners, gesture managers, etc. (We didn’t get any of these, but we did get a few useful new components.)
- ❌ Make it easier to change component values. Replace or obscure the whole “get component, edit value, set component” thing.
- ❓ Support for using SwiftData with RealityView. Currently, RealityViews are not updated/notified when SwiftData imports new data from CloudKit. (I haven’t had a chance to work with this yet, but I think some of the Entity observation stuff may make life easier in my SwiftData apps)
- Notable additions that were not on my list
- Manipulation gesture – this one is truly great!
- Create system gestures in entity components
- Create attachments in entity components
- Presentation component and presentation API support in more places
- Using SwiftUI animations in RealityKit
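For context on the component-editing wish above, this is the “get component, edit value, set component” dance as it stands today. A minimal sketch of the current pattern (the function name is illustrative): components are value types, so you read a copy, mutate it, and write it back.

```swift
import RealityKit

/// Swap an entity's materials to a flat gray.
/// Illustrates the read-modify-write pattern RealityKit requires today.
func dim(_ entity: Entity) {
    // 1. Get a copy of the component (it may not exist).
    guard var model = entity.components[ModelComponent.self] else { return }

    // 2. Edit the value on the local copy.
    model.materials = [SimpleMaterial(color: .gray, isMetallic: false)]

    // 3. Set the component back onto the entity.
    entity.components.set(model)
}
```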
- Reality Composer Pro (as of Beta 1, I can find no changes to Reality Composer Pro)
- ❌ Add real documentation for this app to the Help menu. The menu currently points to a not-all-that-helpful web page.
- ❌ Merge the lists of components we can add in RCP and in code. Currently, there are many components we can add in one, but not the other. Example, we can’t add hover effect component in RCP.
- ❌ Timelines and action should be able to read values from entity components and adjust behavior accordingly.
- ❌ Expand the list of behavior triggers and add conditional logic to these.
- ❌ Expand the list of entity actions in Timelines. Actions to animate the value of a property on any component.
- ❌ Create primitive plane entities. We can do this in code, but not in RCP. Bonus if they can be two-sided.
- ❌ Preview on device shows content as a volume. Include an option to show as an immersive space at real-world scale. This would turbo charge iterating on scenes.
- ARKit
- ✅ Improve tracking speed across all features. Hand tracking is still very slow compared to any other XR device, even with predictive mode enabled. Object tracking is so slow I don’t even want to use it.
- ❓ Plane tracking could do much better at reporting useful planes we would want to use in apps. It has classification for windows and doors, but these rarely seem to work. (I saw someone say this is better now, but I haven’t tested it yet.)
- ❌ Name the feature set: using the Anchoring Component with a Spatial Tracking Session adds ARKit-like features to RealityKit. These disconnected features lack a name and a single source of documentation. Something like “Tracked RealityKit”.
- ❌ Hand Anchors (Anchoring Component or Anchor Entity) should use the scene’s physics space by default. I’ve lost count of the number of people who got stuck on this. Apple chose the wrong default value for this one.
- ❌ Head Anchors should be provided transform data when used with Spatial Tracking Session, just like Hands.
- Notable additions that were not on my list
- Support for accessories (controllers, etc.)
- Spatial Tracked Anchors now provide access to more data
- Scene Understanding can add physics and collision to room mesh
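The hand-anchor physics gotcha called out above comes from the anchoring component’s default `physicsSimulation` behavior, which isolates anchored content from the scene’s physics space. A sketch of the current workaround (assuming hand-tracking authorization is granted at runtime):

```swift
import RealityKit

@MainActor
func makeTrackedPalmAnchor() async -> AnchorEntity {
    // Run a spatial tracking session so hand anchors receive transforms.
    let session = SpatialTrackingSession()
    let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
    _ = await session.run(configuration)

    let palm = AnchorEntity(.hand(.right, location: .palm))

    // Default is an isolated physics simulation; opting out places the
    // anchor's children in the scene's physics space so they can collide
    // with other entities. This is the step people tend to miss.
    palm.anchoring.physicsSimulation = .none

    return palm
}
```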
- SwiftUI
- ❌ Navigation Split View should use a separate glass pane for the list vs detail area.
- ❌ Navigation Stack and Navigation Split View should not add glass background when using plain window style.
- ❌ Inspector view! Include an option to render this as a detached glass pane to the side or bottom of a window.
- ❌ Improve hover effect to allow view transformations, rotations, etc. (but we did get some new hover effect ID and grouping features)
- ❌ Make it easier to adapt SwiftUI view to very small windows. Spatial Computing shines when it is close and personal. The huge windows and volumes we have today take up too much space and feel too distant.
- ❌ Gradient versions of glass materials for windows.
- ❌ Using the .help modifier should always show a tooltip when hovering over an item. SwiftUI omits these in many cases now.
- Notable additions that were not on my list
- Spatial Layout features!
- Swift Charts
- WebView
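On the `.help` wish above: the modifier itself is trivial to apply, which is what makes the inconsistent behavior frustrating. A minimal sketch showing both cases (the view content is illustrative):

```swift
import SwiftUI

struct CanvasScreen: View {
    var body: some View {
        Text("Canvas")
            .toolbar {
                // On toolbar buttons, .help reliably shows a hover tooltip…
                Button("Undo", systemImage: "arrow.uturn.backward") {}
                    .help("Undo the last change")
            }
            // …but on many plain views the tooltip never appears,
            // even though the modifier compiles and attaches fine.
            .help("The drawing canvas")
    }
}
```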
- End-user features
- ❌ App Store section to browse all visionOS apps.
- ❌ A visionOS native version of Reminders that supports multiple windows for each list/query.
- ❌ A “Stage Manager” system to group windows and volumes from multiple apps into sets. Hide or dismiss these. Quickly swap between them. Pin these to the user or to a room/area.
- ❌ Please let me turn off those “helpful” system tips. I’m not “too close” to a wall. I can see the wall.
- ❌ When using a system environment with progressive immersion, allow me to turn off the color/light tinting for the area that is rendering passthrough. Bonus if I can have a sharp edge between passthrough and the virtual content.
- ❌ Environment keyboard cutout should stay visible when I’m not typing.
- ❌ Let me hide that keyboard helper when using a physical keyboard.
- ❌ If I move the virtual keyboard, that is where it lives now. Stop moving it.
- ❌ Enter an immersive space to reorganize the app grid. Render all pages at once all around me in space. Make it easy to move apps without shifting other apps from page to page.
- ❌ App Library: Let me open a window that contains a list of all apps installed on the device. Let me sort and filter this list.
- ❌ Hide apps from the app grid (only show in the App Library)
- ❌ Allow empty spaces in the app grid.
- ❌ Allow widgets in the app grid.
- ❌ Open Control Center and “pin” it open as a window that I can keep in my space.
- ❌ Improve iPad and iPhone apps running in visionOS. Add some padding that these apps can interpret as a safe area. This would go a long way toward making them more useful. Let me decide on a per-app basis whether the app should prefer a light or dark theme.
- Notable additions that were not on my list
- Folders on the App Grid
- Revised Control Center Layout
- Widgets!
- Other items
- ❌ Swift Playgrounds on visionOS would be amazing! One of the main drawbacks of the iPad version is the limited space to draw a complex UI. visionOS does not share that limitation.
- ❌ Reality Composer Pro on Apple Vision Pro. Create and compose scenes on device, then export or sync them back to an Xcode project. Link these projects to Swift Playground on device.
- ❌ Apps should be able to contribute system environments that the user can use in the Shared Space.
- ❌ WebXR really needs some sort of way to add hover effects like the rest of visionOS. This limitation makes WebXR scenes feel off, holding back the potential that WebXR has to offer.
- 🤷🏻‍♂️ Documentation: provide small code snippets on all API pages. Links to complex example projects are not helpful in this context. (There has been some progress here: a lot of the new APIs have short snippets in the docs.)
- Notable additions that were not on my list
- Personas got a huge upgrade
As you can see, most of my wishlist items went unfulfilled this year. I’ll be filing tons of feature requests and feedback with Apple. In the meantime, I’m excited to work with the new features we did get.

Follow Step Into Vision