WWDC 2025 – What’s new for visionOS Developers
SwiftUI and RealityKit work together better than ever. We also awarded the first-ever “WWDC extension of the year”!
Yesterday was pretty exciting for visionOS developers. Apple delivered a ton of new features that many of us had been waiting on, and surprised us with a few we didn’t see coming. visionOS had a few key announcements during the keynote, then a rapid-fire list of items during the Platforms State of the Union.
I spent the day today catching up on what’s new across visionOS. I decided to start with the features I care most about: SwiftUI, App Scenes (Windows, Volumes, Spaces), and RealityKit.
I started the day with coffee and the release notes for visionOS and Xcode. These were useful for setting expectations and preparing for any known issues. It’s no fun to work around an issue that will be fixed in a few weeks or months, so it’s good to know what to avoid.
What’s new in visionOS offers the best grand tour of the platform. The new Spatial layout features are absolutely delightful and I can’t wait to start using them in my apps. We can use familiar concepts from SwiftUI to create interactive 3D layouts.
UI presentation features removed a ton of limitations that we have had since Apple Vision Pro launched. We can now use menus, tooltips, popovers, sheets, alerts, and more. And we can use them in places like volumes, attachments, and, with the new PresentationComponent, directly on entities.
We get powerful new manipulation APIs that we can use as a SwiftUI modifier or a RealityKit component. This unlocks natural interactions without having to create and combine several gestures. When we do want to use regular system or custom gestures, we can now define them on components for Entities instead of as an afterthought for a RealityView.
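As a sketch of how little code this takes, here is one way to opt an entity into the standard manipulation interactions, based on the ManipulationComponent shown in the sessions (the asset name is illustrative, and the exact configuration options may differ from this sketch):

```swift
import SwiftUI
import RealityKit

struct ManipulableModel: View {
    var body: some View {
        RealityView { content in
            // "Toy" is a hypothetical asset name for illustration.
            if let model = try? await Entity(named: "Toy") {
                // One call wires up the system move/rotate/scale
                // interactions; no custom gesture plumbing required.
                ManipulationComponent.configureEntity(model)
                content.add(model)
            }
        }
    }
}
```

Compare that with visionOS 1 and 2, where the same behavior meant building and combining drag, rotate, and magnify gestures by hand.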
Something else we can define with components is Attachments. We can use the new ViewAttachmentComponent to create attachments related to our entities. No more trying to sync an attachment entity with another entity or dealing with odd parent/child transform issues.
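A minimal sketch of what this looks like, assuming the ViewAttachmentComponent API as presented at WWDC (the initializer shape may differ slightly from this sketch):

```swift
import SwiftUI
import RealityKit

struct LabeledEntityView: View {
    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
            // Attach SwiftUI content directly to the entity instead of
            // juggling a separate attachments closure and syncing
            // transforms by hand.
            let label = ViewAttachmentComponent(
                rootView: Text("Hello, sphere")
            )
            sphere.components.set(label)
            content.add(sphere)
        }
    }
}
```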
By far the most impactful change coming to visionOS 26 is the ability to persist windows and volumes in rooms. We can also snap these windows and volumes to walls, tables, and other surfaces. Apple provided APIs to let us adapt our apps when their scenes are snapped or anchored. We can even declare which scenes can opt into these features.
Speaking of snapping, widgets on visionOS 26 are awesome. I created about a dozen widgets in my office, using many from my existing compatible iOS apps. The widget I created in 2019 for my first SwiftUI app worked just as well as the latest ones from the system apps like Clock and Reminders.
The eye and hand tracking features we use today got improvements. In addition to the manipulation APIs mentioned above, we get new 90 Hz hand tracking. Accessories like the Logitech Muse and PlayStation VR2 Sense controllers unlock new creative and gaming potential.
Safari on visionOS uses Reader mode to power up websites into immersive reading rooms. They even called out the ability for sites to provide custom environments, something I’ll have to look into for Step Into Vision.
What’s new in RealityKit was a tour of all the new features for (you guessed it) RealityKit! SpatialTrackingSession unlocks new data on AnchorEntity and AnchoringComponent. Previously we could access the transform and opt into physics; now we can also get the extents of the anchor geometry and the offset from the anchor origin. This brings SpatialTrackingSession much closer to the ARKit APIs. Scene understanding now lets us opt the scene mesh into collisions and physics, which is going to be huge for any app that uses interactive or physics objects. We get new visual features like environment blending and optimizations like instancing. Immersive media has never been better, with new formats like Spatial scenes, an updated version of 2D-to-3D photo conversion.
Meet SwiftUI spatial layout was the deep dive I needed for my favorite new features. I’m a total sucker for 3D layouts. It is a problem space I’ve worked in since I started XR development in 2017. I love being able to organize things in interesting new ways. This session covered some powerful spatial features that don’t involve complicated vector math. Instead, we can use simple concepts like stacks, spaces, geometry readers, and modifiers to create interactive spatial layouts.
Aside: I would like to give the award for WWDC extension of the year to the creators of this session. They used new features like spatialOverlay and rotation3DLayout to create a powerful 3D debugging bounding box. I’m going to add this to every project I work on. It’s just so helpful for understanding how visionOS is laying out our views.
```swift
// Winner of the WWDC extension of the year
// Credit: https://developer.apple.com/videos/play/wwdc2025/273
extension View {
    func debugBorder3D(_ color: Color) -> some View {
        spatialOverlay {
            ZStack {
                Color.clear.border(color, width: 4)
                ZStack {
                    Color.clear.border(color, width: 4)
                    Spacer()
                    Color.clear.border(color, width: 4)
                }
                .rotation3DLayout(.degrees(90), axis: .y)
                Color.clear.border(color, width: 4)
            }
        }
    }
}
```

Better together: SwiftUI and RealityKit blew my mind. In addition to going deeper into some of the features mentioned above, this session introduced the new information flow. RealityKit entities are now Observable. Since day one, we have been able to update our RealityView scene graph when SwiftUI state changed, using the update closure on RealityView. Starting in visionOS 26, we can now go the other direction: our SwiftUI code can observe changes to entities and components. We could use this to provide contextually relevant interface elements or to persist data for future use.
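A minimal sketch of this new direction of data flow, assuming entity observability works as described in the session (the entity and its values here are illustrative):

```swift
import SwiftUI
import RealityKit

struct AltitudeReadout: View {
    // Entities are Observable in visionOS 26, so reading one of an
    // entity's properties in `body` makes SwiftUI re-render this
    // view whenever that property changes.
    let drone: Entity

    var body: some View {
        // Updates automatically as the drone moves; no update
        // closure or manual syncing required.
        Text("Altitude: \(drone.position.y, format: .number.precision(.fractionLength(2))) m")
    }
}
```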
This session also provides more information on three powerful new components:
- ViewAttachmentComponent lets us create attachments directly on the entities they belong to.
- PresentationComponent lets us show contextual SwiftUI content only when we need it, while unlocking several features that were not supported in visionOS 1 and 2.
- GestureComponent lets us write system gestures for our entities instead of using features like targetedToEntity.
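For the last of those three, here is a sketch of defining a tap gesture on an entity itself, based on the GestureComponent shown at WWDC25 (the exact initializer may differ from this sketch):

```swift
import SwiftUI
import RealityKit

struct TappableModel: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(mesh: .generateBox(size: 0.2))
            // Gestures still need input targeting and collision shapes.
            box.components.set(InputTargetComponent())
            box.generateCollisionShapes(recursive: false)
            // The gesture lives on the entity, not as a
            // targetedToEntity modifier on the RealityView.
            box.components.set(GestureComponent(TapGesture().onEnded {
                print("Box tapped")
            }))
            content.add(box)
        }
    }
}
```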
We get a new unified coordinate conversion system. Moving between SwiftUI and RealityKit coordinate spaces has never been easier.
We can use SwiftUI to animate our RealityKit entities and their components. This is huge! No need to write and manually trigger custom animations in RealityKit. Instead, we can create SwiftUI animations that target a value on a component and run these based on state changes in our view. This is going to feel so much easier for all the developers coming from the SwiftUI world.
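As a sketch of how this could look, assuming entity changes made inside a SwiftUI animation pick up that animation as described in the session (the view and entity here are illustrative):

```swift
import SwiftUI
import RealityKit

struct RaiseButton: View {
    let model: Entity

    var body: some View {
        Button("Raise") {
            // The SwiftUI animation drives the transform change, so
            // there is no custom RealityKit animation to write or
            // trigger by hand.
            withAnimation(.spring) {
                model.position.y += 0.2
            }
        }
    }
}
```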
My last session of the day was Set the scene with SwiftUI in visionOS. You know I love the scene APIs in visionOS. I’ve written more than twenty-five examples and labs about them. This topic even helped me kickstart Step Into Vision last summer.
The new scene restoration features are tied to rooms. This has an awesome new side effect of letting users open and pin more than one instance of an app window in multiple rooms. For example, I can pin the Photos app to a wall in my office and to a surface in my living room. When I leave the office, the pinned one is hidden, and when I enter the living room I see the version I expect. These two windows seem to share the same state and context. So if I browse to an album in my office, the window in my living room will show the same album.
We get some APIs for working with restoration and snapping. This session had a delightful example of removing the floor of a volume when snapping it to a table.
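A sketch of how an app might react to snapping, assuming the surfaceSnappingInfo environment value covered in the session (the asset names are hypothetical, and the exact shape of the API and its authorization requirements may differ from this sketch):

```swift
import SwiftUI
import RealityKit

struct VolumeContent: View {
    // Reports whether the scene is currently snapped to a surface.
    @Environment(\.surfaceSnappingInfo) private var snappingInfo

    var body: some View {
        ZStack {
            // "Terrain" and "Floor" are hypothetical asset names.
            Model3D(named: "Terrain")
            if !snappingInfo.isSnapped {
                // Show the floor only when free-floating; drop it
                // when the volume is snapped to a table.
                Model3D(named: "Floor")
            }
        }
    }
}
```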
We also get the answer to the most-asked question I get on Step Into Vision: how do I make visionOS always reopen a specific window? Scene modifiers like restorationBehavior and defaultLaunchBehavior let us throw out a ton of scene phase code. We can tell visionOS which scene it should open when the app comes back to the foreground.
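Here is a sketch of those modifiers in an app declaration, based on how they were described at WWDC25 (the scene IDs and views are illustrative, and the available behavior values may differ from this sketch):

```swift
import SwiftUI

@main
struct MyApp: App {
    var body: some Scene {
        // The window we always want back on launch.
        WindowGroup(id: "main") {
            ContentView()
        }

        // A secondary window that should never be the one visionOS
        // restores or launches into.
        WindowGroup(id: "player") {
            PlayerView()
        }
        .restorationBehavior(.disabled)
        .defaultLaunchBehavior(.suppressed)
    }
}
```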
Volumes can now let some background content break through the bounding box. This content isn’t interactive, but it can allow us to decorate our volumes with additional 3D assets.
Immersive spaces bring changes to progressive immersion with a new portrait-mode aspect ratio. Mixed immersion spaces can now visually blend in with system environments. Not least of all, macOS can render immersive spaces that stream to Apple Vision Pro in real time. We can use this to build previews for the content we’re working on, or build new tools to create directly in the headset while working from the Mac.
I’m really excited with everything that I learned today. You can bet I’ll be bringing you new example code and labs to explore all these features and more.

Follow Step Into Vision