Twenty-six of my Favorite Features and APIs in visionOS 26
A rapid-fire tour of some of the awesome things developers can use when building with visionOS 26.
1. Widgets
Obviously this had to be the first item on the list. Widgets are the flagship feature of this release as far as I’m concerned. Even with the limitations we have today, they have the potential to shape how we use devices like Apple Vision Pro to accomplish everyday tasks. Developing widgets on visionOS is similar to other Apple platforms, but there are a handful of things that make them special.
Mounting Styles: We can use .supportedMountingStyles to support .elevated (default) or .recessed (new) mounting styles. If you support both styles, then users can pick the one they want when configuring the widget. Some widgets may only make sense in one or the other style. For example, Apple uses .recessed on the Weather widget to make it feel more like a window or portal.
Texture: We can use widgetTexture to select a texture that suits our design. The options are .glass and .paper.
struct SimpleWidgets: Widget {
var body: some WidgetConfiguration {
AppIntentConfiguration(...) {...}
// Mounting Style
.supportedMountingStyles([.elevated, .recessed])
// Texture
.widgetTexture(.paper) // or .glass
}
}
Level of Detail: Widgets on visionOS can have two versions: a detailed (default) version to display when a user is nearby and a simplified version for when they are farther away.
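The system picks which version to show based on the user’s proximity, and the widget view can branch on it. Here’s a minimal sketch; the view and its content are hypothetical, and the levelOfDetail environment value and its .simplified case should be verified against Apple’s widget documentation.

```swift
import SwiftUI

struct GardenWidgetView: View {
    // Assumed environment value; verify against the LevelOfDetail docs
    @Environment(\.levelOfDetail) private var levelOfDetail

    var body: some View {
        switch levelOfDetail {
        case .simplified:
            // Shown when the user is farther away
            Image(systemName: "leaf")
        default:
            // Detailed (default) version shown up close
            VStack {
                Image(systemName: "leaf")
                Text("Next watering in 2 hours")
            }
        }
    }
}
```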
Get Started Building Widgets for visionOS

2. Manipulation Component
This new component provides a set of combined gestures for moving, rotating, and scaling entities. It has reasonable defaults, and we can customize it extensively. Using it can be as simple as adding it to an entity. If you have an entity that already has collision and input components, just add `ManipulationComponent`:
let subject = Entity()
let mc = ManipulationComponent()
subject.components.set(mc)
content.add(subject)
We can also use a helper function to set up entities to work with manipulation.
let subject = Entity()
ManipulationComponent.configureEntity(subject, collisionShapes: [.generateBox(width: 0.25, height: 0.25, depth: 0.25)])
content.add(subject)
There are a handful (haha) of behaviors we can customize, such as releaseBehavior and dynamics. There are also several events we can subscribe to.
_ = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
// Remove the hover effect when manipulating an entity
event.entity.components.remove(HoverEffectComponent.self)
}
We can constrain the position, rotation, or scale of an entity during manipulation.
_ = content.subscribe(to: ManipulationEvents.DidUpdateTransform.self) { event in
var newPosition = event.entity.position
// Calculate a constrained position, for example keeping the entity above the floor
newPosition.y = max(newPosition.y, 0)
event.entity.position = newPosition
}
Check out our full series on using this component and get subscribed for some future updates.
- Getting started with Manipulation Component
- Using events with Manipulation Component
- Using custom sounds with Manipulation Component
- Redirect input with Manipulation Component
- Constrain position with Manipulation Component
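The customizable behaviors mentioned earlier can be set directly on the component. Here’s a hedged sketch of changing what happens when the user lets go of an entity; the .stay case name is an assumption, so verify it against the ManipulationComponent documentation before relying on it.

```swift
import RealityKit

let subject = Entity()
var manipulation = ManipulationComponent()
// Keep the entity where the user releases it instead of animating
// back to its original position. (.stay is an assumed case name;
// by default the transform resets on release.)
manipulation.releaseBehavior = .stay
subject.components.set(manipulation)
```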
3. Manipulable Modifier
SwiftUI has a version of manipulation in the form of a modifier. It is mostly intended to be used on Model3D views.
Model3D(named: "ToyCar", bundle: realityKitContentBundle)
.manipulable() // default manipulation
Using it on a regular view is possible. As of visionOS 26 RC, these views are clipped by the bounds of the parent window or volume.
SomeCardView()
.manipulable()
4. Gesture Component
We can attach unique gestures to any entity, instead of adding targeted gesture modifiers to the RealityView.
let tapGesture = TapGesture()
.onEnded({
// Perform an action here
})
let gestureComponent = GestureComponent(tapGesture)
subject.components.set(gestureComponent)
5. View Attachment Component
We can use a new method to create attachments that feels more at home in RealityKit. Instead of using the attachments closure of RealityView, we can create attachments when and where we need them.
let entity = Entity()
let attachment = ViewAttachmentComponent(rootView: SomeView())
entity.components.set(attachment)
content.add(entity)
6. Presentation Component
The long and dark days of “Presentations are not currently supported in Volumetric contexts” are behind us at last. Not only are SwiftUI presentations supported in Volumes, we even get a new component. We can use this to display popovers relative to an entity. Often you will want to create an anchor or presentation transform inside an entity.

guard let presentationPoint = scene.findEntity(named: "PresentationPoint") else { return }
let presentation = PresentationComponent(
isPresented: $showingPopover,
configuration: .popover(arrowEdge: .bottom),
content: RocketCard()
)
presentationPoint.components.set(presentation)
7. Breakthrough Effects
Speaking of attachments and presentations… It may be hard to see our SwiftUI content when it is mixed with RealityKit entities. We can ensure these views are always visible with breakthroughEffect and presentationBreakthroughEffect.
We can use breakthroughEffect on views in attachments, stacks, toolbars, and ornaments.
VStack {
Text("Earth")
.font(.title)
Text("The only planet known to serve ice cream.")
.font(.caption)
}
.breakthroughEffect(.prominent)
There is a special version of this that works with presented content.
VStack {...}
.presentationBreakthroughEffect(.prominent)
Over the summer, I completely rebuilt Project Graveyard using components like Manipulation, View Attachment, Presentation, and Gesture. Read all about it: Project Graveyard – Devlog 009
8. Unique Windows
We can create unique windows by using Window instead of WindowGroup.
struct Garden01App: App {
var body: some Scene {
WindowGroup {
ContentView()
}
WindowGroup(id: "YellowFlower") {
YellowFlowerView()
}
Window("Unique Rose Window", id: "RoseWindow") {
RoseFlowerView()
}
}
}
9. Window and Volume Snapping
Users can snap Windows to vertical surfaces and Volumes to horizontal ones. We can tell when a scene is snapped by reading from the surfaceSnappingInfo environment variable.
struct Example083: View {
@Environment(\.surfaceSnappingInfo) private var surfaceSnappingInfo
var body: some View {
Text("\(surfaceSnappingInfo.isSnapped ? "Yes" : "No")")
}
}
If we ask permission, we can even get the classification of the surface.

.onChange(of: surfaceSnappingInfo) {
if surfaceSnappingInfo.isSnapped && SurfaceSnappingInfo.authorizationStatus == .authorized {
switch surfaceSnappingInfo.classification {
case .wall:
print("Snapped to a wall")
case .door:
print("Snapped to the door")
default:
print("Snapped to something else: \(surfaceSnappingInfo.classification?.description ?? "" )")
}
}
}
How to read window snapping state and classification
How to read volume snapping state and classification
10. Restoring Windows and Volumes
A user may snap a window to a surface, but sometimes it may not make sense to restore it later. For example, a utility or tools window doesn’t make sense without the content it was linked with. We can control restoration with a new scene modifier.
WindowGroup(id: "UtilityWindow", makeContent: {
UtilityRoot()
})
.restorationBehavior(.disabled)
How to use scene restoration with windows and volumes
11. Default Launch Behavior
We can clean up leftover windows and volumes by suppressing the default launch behavior.
Consider an example where the main window was closed before the utility window. Before visionOS 26, the utility window would open if the user tapped the app icon. If that window didn’t provide a means to reopen the main window, the user could be stuck force quitting the app.
struct Garden028App: App {
var body: some Scene {
WindowGroup {
ContentView()
}
.defaultSize(width: 500, height: 500)
WindowGroup(id: "UtilityWindow", makeContent: {
UtilityRoot()
})
.defaultLaunchBehavior(.suppressed)
}
}
Using defaultLaunchBehavior here will force visionOS to reopen the main window.
How to use default launch behavior
12. Clipping Margins
Need a little extra space around your Volume? We can use preferredWindowClippingMargins to ask visionOS to let us render content outside of the regular bounds.
struct ContentView: View {
@State private var length: CGFloat = 300
@State private var edges: Edge3D.Set = [.top, .leading, .bottom, .trailing, .back]
var body: some View {
VStack {
RealityView { content in
...
}
}
.preferredWindowClippingMargins(edges, length)
}
}
We can check to see if this request has been granted by reading data from windowClippingMargins.
@Environment(\.windowClippingMargins) private var windowClippingMargins
...
windowClippingMargins.top
How to render content outside of a volume's bounds
How to read and use volume clipping margins
13. Immersive Environment Behavior
We can present our mixed spaces alongside system environments using immersiveEnvironmentBehavior.
ImmersiveSpace(id: appModel.immersiveSpaceID) {
ImmersiveView()
}
.immersionStyle(selection: .constant(.mixed), in: .mixed)
.immersiveEnvironmentBehavior(.coexist)
Mandatory video, volume on please.
How to let immersive spaces coexist with system environments
14. Aspect Ratio for Progressive Immersion
When using progressive immersion prior to visionOS 26, the content portal was always presented as a landscape rectangle. This was easiest to notice when the immersion level was low. Now we can specify an aspect ratio.
var progressiveGardenAlt: ImmersionStyle = .progressive(
0.2...0.8,
initialAmount: 0.3,
aspectRatio: .portrait
)
ImmersiveSpace(id: "GardenSceneProgressiveAlt") {
ImmersiveViewProgressiveAlt()
.environment(appModel)
}
.immersionStyle(selection: $appModel.progressiveGardenAlt, in: .progressive)
Using aspect ratio with progressive immersive style
15. On World Re-center
We can use this modifier to execute code after the user has re-centered their view.
var body: some View {
VStack {
...
}
.onWorldRecenter {
bounceTheWorld()
}
}
Spatial SwiftUI: onWorldRecenter
16. Look to Scroll
We can opt our views into the new Look to Scroll feature using scrollInputBehavior.
ScrollView(.vertical, showsIndicators: false) {
...
}
.scrollInputBehavior(.enabled, for: .look)
17. Using SwiftUI Animations in RealityKit
SwiftUI has some awesome animations built in and now we can use them with our RealityKit entities.
var animatedIsOffset: Binding<Bool> {
$subjectToggle
.animation(.easeInOut(duration: 2))
}
content.animate {
let scaler: Float = subjectToggle ? 2.0 : 1.0
subject.scale = .init(repeating: scaler)
}
We’re using content.animate here, but there is also an Entity.animate.
18. Entity Observation
Since visionOS 1, we’ve been able to update RealityKit entities based on changes to data in SwiftUI. We commonly do this with the update closure on RealityView. This year, Apple made it possible for SwiftUI to observe changes to RealityKit entities. We now have two-way communication between these frameworks via Observation.
.onChange(of: subject?.observable.transform) {
print("Subject Transform Changed \(String(describing: subject?.observable.transform))")
}
19. Mesh Instances Component
We can create many instances of a mesh and transform them relative to the coordinate system of the original.
let instanceCount = 10
var meshInstancesComponent = MeshInstancesComponent()
do {
// specify a number of instances
let instances = try LowLevelInstanceData(instanceCount: instanceCount)
// Assign the instance data to the first mesh part
meshInstancesComponent[partIndex: 0] = .init(data: instances)
// Loop over each instance and update the transform
instances.withMutableTransforms { transforms in
for i in 0..<instanceCount {
let offset: Float = 0.05 * Float(i)
var transform = Transform()
transform.translation = [offset, offset, offset]
transforms[i] = transform.matrix
}
}
entity.components.set(meshInstancesComponent)
} catch {
print("error creating instances = \(error)")
}
20. Unified Coordinate Conversion
Views and entities can participate in a new Unified Coordinate Conversion system. We can tell where a view is in relation to the world origin, or in relation to other windows, views, and entities.
.onGeometryChange3D(for: Point3D.self) { proxy in try! proxy
.coordinateSpace3D()
.convert(value: Point3D.zero, to: .worldReference)
} action: { old, new in
// Do something with old and new values
}
Unfortunately, this new set of features isn’t well documented yet. We’re on it.
21. Layout Depth Alignment
We can use depth alignment on layouts to align views in a 3D space.
VStackLayout().depthAlignment(alignment) {
// Row 1
HStackLayout().depthAlignment(alignment) {
...
}
// Row 2
HStackLayout().depthAlignment(alignment) {
...
}
}
Spatial SwiftUI: Layout Depth Alignment
22. Rotation 3D Layout
When we use rotation3DEffect, it is important to think of it as a visual effect. It will rotate the view, but it won’t have any other impact. We can use the new rotation3DLayout to impact frame and layout.
.rotation3DLayout(.degrees(10), axis: .y)
This seemingly small change is doing a lot of heavy lifting in many Spatial Layouts.

Spatial SwiftUI: rotation3DLayout
23. Spatial Overlay
Each view in visionOS has a bounds. Most of the time, we think of SwiftUI content as having width and height, but these views can also have depth. spatialOverlay allows us to place secondary views within the bounds of a parent view. For example, imagine a simple BoxView with a 2D card placed to the front.
BoxView()
.spatialOverlay(alignment: .front) {
CardView()
}
Spatial SwiftUI: spatialOverlay
24. Spatial Container
A new type of layout: SpatialContainer. This lets multiple views exist in the same space. The container will be shaped like a bounding box, sized to fit the largest child on each axis.
SpatialContainer(alignment: .top) {
ModelViewSimple(name: "Earth", bundle: realityKitContentBundle)
.opacity(0.22)
.frame(width: 200, height: 200)
ModelViewSimple(name: "Moon", bundle: realityKitContentBundle)
.frame(width: 100, height: 100)
}
Spatial SwiftUI: SpatialContainer
25. Reality View Sizing Behavior
When working with RealityKit, we use RealityView to show 3D content. RealityView is just that–a view. We can add modifiers to it just like any other view. realityViewLayoutBehavior is a new modifier in visionOS 26 that lets SwiftUI size the frame and align content inside the RealityView. There are three options.
- .flexible: This is the default behavior when we don’t use this modifier. The RealityView will take up all available space and will not try to align the content inside it.
- .centered: This option still has a flexible frame that fills the available space, but the visual content inside the RealityView will be centered.
- .fixedSize: This option will size the frame to fit the content of the RealityView.
RealityView { content in
...
}
.realityViewLayoutBehavior(.flexible) // or .centered, or .fixedSize
Watch a demo of all three.
Spatial SwiftUI: realityViewSizingBehavior
26. Scaling Model3D
When working with Model3D in SwiftUI, we can use scaledToFit3D or scaledToFill3D to scale the resizable model.
// Fill
model
.resizable() // required to make the model resizable
.scaledToFill3D()
// Fit
model
.resizable() // required to make the model resizable
.scaledToFit3D()
Spatial SwiftUI: scaling views
Bonus: Visualizing SwiftUI Frames
This isn’t built into visionOS, but Apple shared an incredibly useful view extension. We can use this to debug and visualize spatial layouts. This makes use of spatialOverlay and rotation3DLayout to create a frame around any view. See Meet SwiftUI spatial layout for the details.
/// See WWDC 2025 Session: Meet SwiftUI spatial layout
/// https://developer.apple.com/videos/play/wwdc2025/273
extension View {
func debugBorder3D(_ color: Color) -> some View {
spatialOverlay {
ZStack {
Color.clear.border(color, width: 4)
ZStack {
Color.clear.border(color, width: 4)
Spacer()
Color.clear.border(color, width: 4)
}
.rotation3DLayout(.degrees(90), axis: .y)
Color.clear.border(color, width: 4)
}
}
}
}
More Coming Soon
This was not an exhaustive list. There is so much more to explore in visionOS. We’re going to dive into Swift Charts, tracked controllers and accessories, updates to ARKit, Model3D enhancements, image and video components, foundation models, and so much more. Subscribe to our email list or RSS feeds, or follow us on social media so you don’t miss anything.
Get Step Into Vision delivered to your inbox
