Preamble
Copyright (c) 2017-2025 The Khronos Group Inc.
This Specification is protected by copyright laws and contains material proprietary to Khronos. Except as described by these terms, it or any components may not be reproduced, republished, distributed, transmitted, displayed, broadcast or otherwise exploited in any manner without the express prior written permission of Khronos.
Khronos grants a conditional copyright license to use and reproduce the unmodified Specification for any purpose, without fee or royalty, EXCEPT no licenses to any patent, trademark or other intellectual property rights are granted under these terms.
Khronos makes no, and expressly disclaims any, representations or warranties, express or implied, regarding this Specification, including, without limitation: merchantability, fitness for a particular purpose, non-infringement of any intellectual property, correctness, accuracy, completeness, timeliness, and reliability. Under no circumstances will Khronos, or any of its Promoters, Contributors or Members, or their respective partners, officers, directors, employees, agents or representatives be liable for any damages, whether direct, indirect, special or consequential damages for lost revenues, lost profits, or otherwise, arising from or in connection with these materials.
This document contains extensions which are not ratified by Khronos, and as such is not a ratified Specification, though it contains text from (and is a superset of) the ratified OpenXR Specification that can be found at https://registry.khronos.org/OpenXR/specs/1.1-khr/html/xrspec.html (core with KHR and ratified non-KHR extensions).
The Khronos Intellectual Property Rights Policy defines the terms 'Scope', 'Compliant Portion', and 'Necessary Patent Claims'.
Some parts of this Specification are purely informative and so are EXCLUDED from the Scope of this Specification. The Document Conventions section of the Introduction defines how these parts of the Specification are identified.
Where this Specification uses technical terminology, defined in the Glossary or otherwise, that refer to enabling technologies that are not expressly set forth in this Specification, those enabling technologies are EXCLUDED from the Scope of this Specification. For clarity, enabling technologies not disclosed with particularity in this Specification (e.g. semiconductor manufacturing technology, hardware architecture, processor architecture or microarchitecture, memory architecture, compiler technology, object oriented technology, basic operating system technology, compression technology, algorithms, and so on) are NOT to be considered expressly set forth; only those application program interfaces and data structures disclosed with particularity are included in the Scope of this Specification.
For purposes of the Khronos Intellectual Property Rights Policy as it relates to the definition of Necessary Patent Claims, all recommended or optional features, behaviors and functionality set forth in this Specification, if implemented, are considered to be included as Compliant Portions.
Khronos® and Vulkan® are registered trademarks, and glTF™ is a trademark of The Khronos Group Inc. OpenXR™ is a trademark owned by The Khronos Group Inc. and is registered as a trademark in China, the European Union, Japan and the United Kingdom. OpenGL® is a registered trademark and the OpenGL ES™ and OpenGL SC™ logos are trademarks of Hewlett Packard Enterprise used under license by Khronos. All other product names, trademarks, and/or company names are used solely for identification and belong to their respective owners.
1. Introduction
This chapter is informative except for the section on Normative Terminology.
This document, referred to as the "OpenXR Specification" or just the "Specification" hereafter, describes OpenXR: what it is, how it acts, and what is required to implement it. We assume that the reader has a basic understanding of computer graphics and the technologies involved in virtual and augmented reality. This means familiarity with the essentials of computer graphics algorithms and terminology, modern GPUs (Graphics Processing Units), tracking technologies, head-mounted devices, and input modalities.
The canonical version of the Specification is available in the official OpenXR Registry, located at URL
1.1. What is OpenXR?
OpenXR is an API (Application Programming Interface) for XR applications. XR refers to a continuum of real-and-virtual combined environments generated by computers through human-machine interaction and is inclusive of the technologies associated with virtual reality (VR), augmented reality (AR) and mixed reality (MR). OpenXR is the interface between an application and an in-process or out-of-process "XR runtime system", or just "runtime" hereafter. The runtime may handle such functionality as frame composition, peripheral management, and raw tracking information.
Optionally, a runtime may support device layer plugins which allow access to a variety of hardware across a commonly defined interface.
1.2. The Programmer’s View of OpenXR
To the application programmer, OpenXR is a set of functions that interface with a runtime to perform commonly required operations such as accessing controller/peripheral state, getting current and/or predicted tracking positions, and submitting rendered frames.
A typical OpenXR program begins with a call to create an instance which establishes a connection to a runtime. Then a call is made to create a system which selects for use a physical display and a subset of input, tracking, and graphics devices. Subsequently a call is made to create buffers into which the application will render one or more views using the appropriate graphics APIs for the platform. Finally calls are made to create a session and begin the application’s XR rendering loop.
1.3. The Implementor’s View of OpenXR
To the runtime implementor, OpenXR is a set of functions that control the operation of the XR system and establish the lifecycle of an XR application.
The implementor’s task is to provide a software library on the host which implements the OpenXR API, while mapping the work for each OpenXR function to the graphics hardware as appropriate for the capabilities of the device.
1.4. Our View of OpenXR
We view OpenXR as a mechanism for interacting with VR/AR/MR systems in a platform-agnostic way.
We expect this model to result in a specification that satisfies the needs of both programmers and runtime implementors. It does not, however, necessarily provide a model for implementation. A runtime implementation must produce results conforming to those produced by the specified methods, but may carry out particular procedures in ways that are more efficient than the one specified.
1.5. Filing Bug Reports
Issues with and bug reports on the OpenXR Specification and the API Registry can be filed in the Khronos OpenXR GitHub repository, located at URL
Please tag issues with appropriate labels, such as “Specification”, “Ref Pages” or “Registry”, to help us triage and assign them appropriately. Unfortunately, GitHub does not currently let users without write access to the repository set labels on issues. In the meantime, a label can be added to the title line of the issue in brackets, e.g. “[Specification]”.
1.6. Document Conventions
The OpenXR specification is intended for use by both implementors of the API and application developers seeking to make use of the API, forming a contract between these parties. Specification text may address either party; typically the intended audience can be inferred from context, though some sections are defined to address only one of these parties. (For example, Valid Usage sections only address application developers). Any requirements, prohibitions, recommendations or options defined by normative terminology are imposed only on the audience of that text.
1.6.1. Normative Terminology
The key words must, required, should, may, and optional in this document, when denoted as above, are to be interpreted as described in RFC 2119:
- must: When used alone, this word, or the term required, means that the definition is an absolute requirement of the specification. When followed by not (“must not”), the phrase means that the definition is an absolute prohibition of the specification.
- should: When used alone, this word means that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course. When followed by not (“should not”), the phrase means that there may exist valid reasons in particular circumstances when the particular behavior is acceptable or even useful, but the full implications should be understood and the case carefully weighed before implementing any behavior described with this label.
- may: This word, or the adjective optional, means that an item is truly optional. One vendor may choose to include the item because a particular marketplace requires it or because the vendor feels that it enhances the product, while another vendor may omit the same item.
The additional terms can and cannot are to be interpreted as follows:
- can: This word means that the particular behavior described is a valid choice for an application, and is never used to refer to runtime behavior.
- cannot: This word means that the particular behavior described is not achievable by an application, for example, an entry point does not exist.
Note
There is an important distinction between cannot and must not, as used in this Specification. Cannot means something the application literally is unable to express or accomplish through the API, while must not means something that the application is capable of expressing through the API, but that the consequences of doing so are undefined and potentially unrecoverable for the runtime.
2. Fundamentals
2.1. API Version Numbers and Semantics
Multi-part version numbers are used in several places in the OpenXR API.
// Provided by XR_VERSION_1_0
typedef uint64_t XrVersion;
In each such use, the API major version number, minor version number, and patch version number are packed into a 64-bit integer, referred to as XrVersion.
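The packing convention mirrors the XR_MAKE_VERSION and XR_VERSION_MAJOR/MINOR/PATCH macros from openxr.h: the major version occupies the top 16 bits, the minor version the next 16, and the patch version the low 32. A self-contained sketch:

```c
#include <stdint.h>

typedef uint64_t XrVersion;

/* Sketch of the packing macros as they appear in openxr.h:
 * major in bits 63..48, minor in bits 47..32, patch in bits 31..0. */
#define XR_MAKE_VERSION(major, minor, patch) \
    ((((uint64_t)(major) & 0xffffULL) << 48) | \
     (((uint64_t)(minor) & 0xffffULL) << 32) | \
      ((uint64_t)(patch) & 0xffffffffULL))

#define XR_VERSION_MAJOR(version) ((uint16_t)(((uint64_t)(version) >> 48) & 0xffffULL))
#define XR_VERSION_MINOR(version) ((uint16_t)(((uint64_t)(version) >> 32) & 0xffffULL))
#define XR_VERSION_PATCH(version) ((uint32_t)((uint64_t)(version) & 0xffffffffULL))
```

For example, version 1.0.34 packs major = 1, minor = 0, and patch = 34 into one XrVersion value, and the three extraction macros recover the parts.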
Differences in any of the version numbers indicate a change to the API, with each part of the version number indicating a different scope of change, as follows.
Note
The rules below apply to OpenXR versions 1.0 or later. Prerelease versions of OpenXR may use different rules for versioning.
A difference in patch version numbers indicates that some usually small part of the specification or header has been modified, typically to fix a bug, and may have an impact on the behavior of existing functionality. Differences in the patch version number must affect neither full compatibility nor backwards compatibility between two versions, nor may they add additional interfaces to the API. Runtimes may use the patch version number to determine whether to enable implementation changes, such as bug fixes, that impact functionality. Runtimes should document any changes that are tied to the patch version. Application developers should retest their application on all runtimes they support after compiling with a new version.
A difference in minor version numbers indicates that some amount of new functionality has been added. This will usually include new interfaces in the header, and may also include behavior changes and bug fixes. Functionality may be deprecated in a minor revision, but must not be removed. When a new minor version is introduced, the patch version continues where the last minor version left off, making patch versions unique inside major versions. Differences in the minor version number should not affect backwards compatibility, but will affect full compatibility.
A difference in major version numbers indicates a large set of changes to the API, potentially including new functionality and header interfaces, behavioral changes, removal of deprecated features, modification or outright replacement of any feature, and is thus very likely to break compatibility. Differences in the major version number will typically require significant modification to application code in order for it to function properly.
The following table details which changes may occur, and which must not occur, when the major, minor, or patch version number is updated:
| Reason                           | Major Version | Minor Version | Patch Version |
|----------------------------------|---------------|---------------|---------------|
| Extensions Added/Removed*        | may           | may           | may           |
| Spec-Optional Behavior Changed*  | may           | may           | may           |
| Spec-Required Behavior Changed*  | may           | may           | must not      |
| Core Interfaces Added*           | may           | may           | must not      |
| Weak Deprecation*                | may           | may           | must not      |
| Strong Deprecation*              | may           | must not      | must not      |
| Core Interfaces Changed/Removed* | may           | must not      | must not      |
In the above table, the following identify the various cases in detail:

Extensions Added/Removed
An extension may be added or removed with a change at this level.

Specification-Optional Behavior Changed
Some optional behavior laid out in this specification has changed. Usually this will involve a change in behavior that is marked with the normative language should or may. For example, a runtime that previously did not validate a particular use case may now begin validating that use case.

Specification-Required Behavior Changed
A behavior of runtimes that is required by this specification may have changed. For example, a previously optional validation may now have become mandatory for runtimes.

Core Interfaces Added
New interfaces may have been added to this specification (and to the OpenXR header file) in revisions at this level.

Weak Deprecation
An interface may have been weakly deprecated at this level. This may happen if there is now a better way to accomplish the same thing. Applications making this call should behave the same as before the deprecation, but following the new path may be more performant, lower latency, or otherwise yield better results. It is possible that some runtimes may choose to give run-time warnings that the feature has been weakly deprecated and will likely be strongly deprecated or removed in the future.

Strong Deprecation
An interface may have been strongly deprecated at this level. This means that the interface must still exist (so applications that are compiled against it will still run) but it may now be a no-op, or its behavior may have been significantly changed. It may be that this functionality is no longer necessary, or that its functionality has been subsumed by another call. This should not break an application, but some behavior may be different or unanticipated.

Core Interfaces Changed/Removed
An interface may have been changed, with different parameters or return types, at this level. An interface or feature may also have been removed entirely. It is almost certain that rebuilding applications will be required.
2.2. String Encoding
This API uses strings as input and output for some functions. Unless otherwise specified, all such strings are NULL-terminated, UTF-8-encoded, case-sensitive character arrays.
2.3. Threading Behavior
The OpenXR API is intended to provide scalable performance when used on multiple host threads. All functions must support being called concurrently from multiple threads, but certain parameters, or components of parameters are defined to be externally synchronized. This means that the caller must guarantee that no more than one thread is using such a parameter at a given time.
More precisely, functions use simple stores to update software structures representing objects. A parameter declared as externally synchronized may have its software structures updated at any time during the host execution of the function. If two functions operate on the same object and at least one of the functions declares the object to be externally synchronized, then the caller must guarantee not only that the functions do not execute simultaneously, but also that the two functions are separated by an appropriate memory barrier if needed.
For all functions which destroy an object handle, the application must externally synchronize the object handle parameter and any child handles.
2.4. Multiprocessing Behavior
The OpenXR API does not explicitly recognize nor require support for multiple processes using the runtime simultaneously, nor does it prevent a runtime from providing such support.
2.5. Runtime
An OpenXR runtime is software which implements the OpenXR API. There may be more than one OpenXR runtime installed on a system, but only one runtime can be active at any given time.
2.6. Extensions
OpenXR is an extensible API that grows through the addition of new features. Similar to other Khronos APIs, extensions may expose new OpenXR functions or modify the behavior of existing OpenXR functions. Extensions are optional, and therefore must be enabled by the application before the extended functionality is made available. Because extensions are optional, they may be implemented only on a subset of runtimes, graphics platforms, or operating systems. Therefore, an application should first query which extensions are available before enabling them.
The application queries the available list of extensions using the xrEnumerateInstanceExtensionProperties function. Once an application determines which extensions are supported, it can enable some subset of them during the call to xrCreateInstance.
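The enumerate functions in OpenXR follow a two-call idiom: the first call, with a capacity of zero, reports the required element count; the second call, with an allocated buffer, fills it. The sketch below illustrates that calling convention using a hypothetical stand-in function and struct (the real API uses xrEnumerateInstanceExtensionProperties and XrExtensionProperties, which cannot run without an OpenXR loader and runtime):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for xrEnumerateInstanceExtensionProperties.
 * Real OpenXR "enumerate" functions follow this two-call convention:
 * pass capacityInput = 0 to query the required count, then call again
 * with a buffer of at least that size. */
typedef struct {
    char extensionName[128];
    uint32_t extensionVersion;
} FakeExtensionProperties;

static int fakeEnumerateExtensions(uint32_t capacityInput,
                                   uint32_t *countOutput,
                                   FakeExtensionProperties *properties) {
    static const char *known[] = { "XR_KHR_composition_layer_cube",
                                   "XR_EXT_debug_utils" };
    const uint32_t knownCount = 2;
    *countOutput = knownCount;
    if (capacityInput == 0) return 0;           /* first call: size query */
    if (capacityInput < knownCount) return -11; /* cf. XR_ERROR_SIZE_INSUFFICIENT */
    for (uint32_t i = 0; i < knownCount; ++i) {
        strncpy(properties[i].extensionName, known[i], 127);
        properties[i].extensionName[127] = '\0';
        properties[i].extensionVersion = 1;
    }
    return 0;
}

/* Two-call idiom: query the count, allocate, then fetch. */
static FakeExtensionProperties *enumerateAll(uint32_t *count) {
    fakeEnumerateExtensions(0, count, NULL);
    FakeExtensionProperties *props = calloc(*count, sizeof *props);
    fakeEnumerateExtensions(*count, count, props);
    return props;
}
```

Against the real API the shape is the same, with the additional requirement that each XrExtensionProperties element have its type member initialized (to XR_TYPE_EXTENSION_PROPERTIES) before the filling call.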
OpenXR extensions have unique names that convey information about what functionality is provided. The names have the following format:

XR_<AUTHOR-TAG>_<name_of_extension>
For example: XR_KHR_composition_layer_cube is an OpenXR extension
created by the Khronos (KHR) OpenXR Working Group to support cube
composition layers.
The public list of available extensions known and configured for inclusion in this document at the time of this specification being generated appears in the List of Extensions appendix at the end of this document.
2.7. API Layers
OpenXR is designed to be a layered API, which means that a user or application may insert API layers between the application and the runtime implementation. These API layers provide additional functionality by intercepting OpenXR functions from the layer above and performing different operations than would otherwise be performed without the layer. In the simplest cases, the layer simply calls the next layer down with the same arguments, but a more complex layer may implement API functionality that is not present in the layers or runtime below it. This mechanism is essentially an architected "function shimming" or "intercept" feature that is designed into OpenXR and meant to replace more informal methods of "hooking" API calls.
2.7.1. Examples of API Layers
Validation Layer
The layered API approach employed by OpenXR allows for potentially expensive validation of correct API usage to be implemented in a "validation" layer. Such a layer allows the application developer to develop their application with a validation layer active to ensure that the application is using the API correctly. A validation layer confirms that the application has set up object state correctly, has provided the required data for each function, ensures that required resources are available, etc. If a validation layer detects a problem, it issues an error message that can be logged or captured by the application via a callback. After the developer has determined that the application is correct, they can turn off the validation layer to allow the application to run in a production environment without repeatedly incurring the validation expense. (Note that some validation of correct API usage is required to be implemented by the runtime.)
2.7.2. Naming API Layers
To organize API layer names and prevent collisions in the API layer name namespace, API layers must be named using the following convention:
XR_APILAYER_<VENDOR-TAG>_short_name
Vendors are responsible for registering a vendor tag with the OpenXR working group, and just like for implementors, they must maintain their vendor namespace.
Example of an API layer name produced by the Acme company for the "check best practices" API layer:
XR_APILAYER_ACME_check_best_practices
2.7.3. Activating API Layers
Application Activation
Applications can determine the API layers that are available to them by calling the xrEnumerateApiLayerProperties function to obtain a list of available API layers. Applications then can select the desired API layers from this list and provide them to the xrCreateInstance function when creating an instance.
System Activation
Application users or users performing roles such as system integrator or system administrator may configure a system to activate API layers without involvement from the applications. These platform-dependent steps may include the installation of API layer-related files, setting environment variables, or other platform-specific operations. The options that are available for configuring the API layers in this manner are also dependent on the platform and/or runtime.
2.7.4. API Layer Extensions
API layers may implement OpenXR functions that are not supported by the underlying runtime. In order to expose these new features, the API layer must expose this functionality in the form of an OpenXR extension. It must not expose new OpenXR functions without an associated extension.
For example, an OpenXR API-logging API layer might expose an API function to
allow the application to turn logging on for only a portion of its
execution.
Since new functions must be exposed through an extension, the vendor has
created an extension called XR_ACME_logging_on_off to contain these new
functions.
The application should query if the API layer supports the extension and
then, only if it exists, enable both the extension and the API layer by name
during xrCreateInstance.
To find out what extensions an API layer supports, an application must first verify that the API layer exists on the current system by calling xrEnumerateApiLayerProperties. After verifying an API layer of interest exists, the application then should call xrEnumerateInstanceExtensionProperties and provide the API layer name as the first parameter. This will return the list of extensions implemented by that API layer.
2.8. Type Aliasing
Type aliasing refers to the situation in which the actual type of an element does not match the declared type.
Some C and C++ compilers assume that the actual type matches the declared
type in some configurations, and may be so configured by default at common
optimization levels.
In such a compiler configured with that assumption, violating the assumption
may produce undefined behavior.
This compiler feature is typically referred to as "strict aliasing," and it
can usually be enabled or disabled via compiler options.
The OpenXR specification does not support strict aliasing, as there are
some cases in which an application intentionally provides a struct with a
type that differs from the declared type.
For example, XrFrameEndInfo::layers is an array of type
const XrCompositionLayerBaseHeader* const.
However, each element of the array must be of one of the specific layer
types, such as XrCompositionLayerQuad.
Similarly, xrEnumerateSwapchainImages accepts an array of
XrSwapchainImageBaseHeader, whereas the actual type passed must be an
array of a type such as
XrSwapchainImageVulkanKHR.
For OpenXR to work correctly, the compiler must support the type aliasing described here.
// Provided by XR_VERSION_1_0
#if !defined(XR_MAY_ALIAS)
#if defined(__clang__) || (defined(__GNUC__) && (__GNUC__ > 4))
#define XR_MAY_ALIAS __attribute__((__may_alias__))
#else
#define XR_MAY_ALIAS
#endif
#endif
As a convenience, some types and pointers that are known at specification time to alias values of different types have been annotated with the XR_MAY_ALIAS definition. If this macro is not defined before including OpenXR headers, and a new enough Clang or GCC compiler is used, it is defined to a compiler-specific attribute annotation to inform these compilers that those pointers may alias. However, there is no guarantee that all aliasing types or pointers have been correctly marked with this macro, so thorough testing is still recommended if you choose (at your own risk) to permit your compiler to perform type-based aliasing analysis.
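The base-header pattern above can be shown with self-contained stand-ins. The structs below are hypothetical substitutes for the real XrCompositionLayerBaseHeader and XrCompositionLayerQuad (which require openxr.h); they share a common initial layout, so a consumer can read the type member through the base pointer to discover the actual struct, which is exactly the aliasing the specification relies on:

```c
#include <stdint.h>

/* Local stand-ins for the spec's pattern: a base header struct whose
 * 'type' member identifies the actual derived type. */
typedef int32_t FakeStructureType;
enum { FAKE_TYPE_LAYER_QUAD = 36 };

/* Same shape as the XR_MAY_ALIAS convenience macro described above. */
#if defined(__clang__) || (defined(__GNUC__) && (__GNUC__ > 4))
#define MY_MAY_ALIAS __attribute__((__may_alias__))
#else
#define MY_MAY_ALIAS
#endif

typedef struct MY_MAY_ALIAS FakeLayerBaseHeader {
    FakeStructureType type;
    const void *next;
} FakeLayerBaseHeader;

typedef struct FakeLayerQuad {
    FakeStructureType type;   /* first members match the base layout */
    const void *next;
    float width, height;
} FakeLayerQuad;

/* A consumer (as the runtime does behind xrEndFrame) inspects 'type'
 * through the base pointer to learn the actual derived type. */
static FakeStructureType identifyLayer(const FakeLayerBaseHeader *layer) {
    return layer->type;
}
```

An application fills in the derived struct, then passes its address as a base-header pointer; the may_alias annotation tells Clang/GCC not to optimize under the assumption that the two pointer types never refer to the same object.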
2.9. Valid Usage
Valid usage defines a set of conditions which must be met in order to achieve well-defined run-time behavior in an application. These conditions depend only on API state, and the parameters or objects whose usage is constrained by the condition.
Some valid usage conditions have dependencies on runtime limits or feature availability. It is possible to validate these conditions against the API’s minimum or maximum supported values for these limits and features, or some subset of other known values.
Valid usage conditions should apply to a function or structure where complete information about the condition would be known during execution of an application. This is such that a validation API layer or linter can be written directly against these statements at the point they are specified.
2.9.1. Implicit Valid Usage
Some valid usage conditions apply to all functions and structures in the API, unless explicitly denoted otherwise for a specific function or structure. These conditions are considered implicit. Implicit valid usage conditions are described in detail below.
2.9.2. Valid Usage for Object Handles
Any input parameter to a function that is an object handle must be a valid object handle, unless otherwise specified. An object handle is valid if and only if all of the following conditions hold:
There are contexts in which an object handle is optional or otherwise
unspecified.
In those cases, the API uses XR_NULL_HANDLE, which has the integer
value 0.
2.9.3. Valid Usage for Pointers
Any parameter that is a pointer must be a valid pointer when the specification indicates that the runtime uses the pointer. A pointer is valid if and only if it points at memory containing values of the number and type(s) expected by the function, and all fundamental types accessed through the pointer (e.g. as elements of an array or as members of a structure) satisfy the alignment requirements of the host processor.
2.9.4. Valid Usage for Enumerated Types
Any parameter of an enumerated type must be a valid enumerant for that type. An enumerant is valid if and only if the enumerant is defined as part of the enumerated type in question.
2.9.5. Valid Usage for Flags
A collection of flags is represented by a bitmask using the type
XrFlags64:
typedef uint64_t XrFlags64;
Bitmasks are passed to many functions and structures to compactly represent
options and are stored in memory defined by the XrFlags64 type.
However, the API does not use the XrFlags64 type directly.
Instead, an Xr*Flags type is used, which is an alias of the
XrFlags64 type.
The API also defines a set of constant bit definitions used to set the
bitmasks.
Any Xr*Flags member or parameter used in the API must be a valid
combination of bit flags.
A valid combination is either zero or the bitwise OR of valid bit
flags.
A bit flag is valid if and only if:
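The "zero or the bitwise OR of valid bit flags" rule amounts to checking that no bits outside the defined set are present. A minimal sketch, using hypothetical bit definitions (the real constants come from openxr.h):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XrFlags64;

/* Hypothetical bit definitions in the style of the API's Xr*Flags
 * constants; the real names and values are defined in openxr.h. */
static const XrFlags64 EXAMPLE_BIT_A   = 0x00000001;
static const XrFlags64 EXAMPLE_BIT_B   = 0x00000002;
static const XrFlags64 EXAMPLE_ALL_BITS = 0x00000003;

/* A valid combination is zero or the bitwise OR of valid bit flags,
 * i.e. no bits set outside the defined set. */
static bool isValidFlagCombination(XrFlags64 flags) {
    return (flags & ~EXAMPLE_ALL_BITS) == 0;
}
```

A validation layer can apply this same mask test to every Xr*Flags parameter it intercepts.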
2.9.6. Valid Usage for Structure Types
Any parameter that is a structure containing a type member must have
a value of type which is a valid XrStructureType value matching
the type of the structure.
As a general rule, the name of this value is obtained by taking the
structure name, stripping the leading Xr, prefixing each capital letter
with an underscore, converting the entire resulting string to upper case,
and prefixing it with XR_TYPE_.
The only exceptions to this rule are API and Operating System names which are converted in a way that produces a more readable value:
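Setting those exceptions aside, the general naming rule reads directly as a string transformation. A sketch (the helper function is illustrative only; the real values are members of the XrStructureType enumeration):

```c
#include <ctype.h>
#include <stddef.h>
#include <stdio.h>

/* Sketch of the naming rule: strip the leading "Xr", prefix each
 * capital letter with '_', upper-case everything, and prepend
 * "XR_TYPE".  API/OS-name exceptions are not handled here. */
static void structureTypeName(const char *structName, char *out, size_t outSize) {
    size_t pos = (size_t)snprintf(out, outSize, "XR_TYPE");
    for (const char *p = structName + 2; *p && pos + 2 < outSize; ++p) {
        if (isupper((unsigned char)*p)) out[pos++] = '_';
        out[pos++] = (char)toupper((unsigned char)*p);
    }
    out[pos] = '\0';
}
```

For example, XrInstanceCreateInfo maps to XR_TYPE_INSTANCE_CREATE_INFO under this rule.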
2.9.7. Valid Usage for Structure Pointer Chains
Any structure containing a void* next member must have a value
of next that is either NULL, or points to a valid structure that
also contains type and next member values.
The set of structures connected by next pointers is referred to as a
next chain.
In order to use a structure type defined by an extension in a next
chain, the proper extension must have been previously enabled during
xrCreateInstance.
A runtime must ignore all unrecognized structures in a next chain,
including those associated with an extension that has not been enabled.
Some structures for use in a chain are described in the core OpenXR specification and are mentioned in the Member Descriptions. Any structure described in this document intended for use in a chain is mentioned in a "See also" list in the implicit valid usage of the structure they chain to. Most chained structures are associated with extensions, and are described in the base OpenXR Specification under the List of Extensions. Vendor-specific extensions may be found there as well, or may only be available from the vendor’s website or internal document repositories.
Unless otherwise specified, chained structures which are output structures may be modified by the runtime, with the exception of the type and next fields. Upon return from any function, all type and next fields in the chain must be unmodified.
Useful Base Structures
As a convenience to runtimes and layers needing to iterate through a structure pointer chain, the OpenXR API provides the following base structures:
The XrBaseInStructure structure is defined as:
// Provided by XR_VERSION_1_0
typedef struct XrBaseInStructure {
XrStructureType type;
const struct XrBaseInStructure* next;
} XrBaseInStructure;
XrBaseInStructure can be used to facilitate iterating through a read-only structure pointer chain.
The XrBaseOutStructure structure is defined as:
// Provided by XR_VERSION_1_0
typedef struct XrBaseOutStructure {
XrStructureType type;
struct XrBaseOutStructure* next;
} XrBaseOutStructure;
XrBaseOutStructure can be used to facilitate iterating through a structure pointer chain that returns data back to the application.
These structures allow for some type safety and can be used by OpenXR API functions that operate on generic inputs and outputs.
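Walking a next chain with these base structures is a simple linked-list traversal. The sketch below reproduces the XrBaseOutStructure definition from above (using a plain integer as a stand-in for the real XrStructureType enum) and counts the structures in a chain, the same walk a runtime or API layer uses to locate or skip extension structures:

```c
#include <stddef.h>
#include <stdint.h>

typedef int32_t XrStructureType; /* stand-in for the real enum */

/* The spec's base structure for writable chains. */
typedef struct XrBaseOutStructure {
    XrStructureType type;
    struct XrBaseOutStructure* next;
} XrBaseOutStructure;

/* Count the structures in a next chain by walking the base headers. */
static int chainLength(void *head) {
    int n = 0;
    for (XrBaseOutStructure *p = (XrBaseOutStructure *)head; p; p = p->next) {
        ++n;
    }
    return n;
}
```

The same loop, with a comparison against a known type value, is how a component finds the first structure of a given type in the chain.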
Next Chain Structure Uniqueness
Applications should ensure that they create and insert no more than one
occurrence of each type of extension structure in a given next chain.
Other components of OpenXR (such as the OpenXR loader or an API Layer) may
insert duplicate structures into this chain.
This provides those components the ability to update a structure that
appears in the next chain by making a modified copy of that same
structure and placing the new version at the beginning of the chain.
The benefit of allowing this duplication is each component is no longer
required to create a copy of the entire next chain just to update one
structure.
When duplication is present, all other OpenXR components must process only the first instance of a structure of a given type, and ignore any subsequent instances of structures of that same type.
If a component makes such a structure copy, and the original structure is also used to return content, then that component must copy the necessary content from the copied structure and into the original version of the structure upon completion of the function prior to proceeding back up the call stack. This is to ensure that OpenXR behavior is consistent whether or not that particular OpenXR component is present and/or enabled on the system.
2.9.8. Valid Usage for Nested Structures
The above conditions also apply recursively to members of structures provided as input to a function, either as a direct argument to the function, or themselves a member of another structure.
Specifics on valid usage of each function are covered in their individual sections.
2.10. Return Codes
The core API is designed to capture most, but not all, instances of incorrect usage. As such, most functions provide return codes. Functions in the API return their status via return codes that are in one of the two categories below.
typedef enum XrResult {
XR_SUCCESS = 0,
XR_TIMEOUT_EXPIRED = 1,
XR_SESSION_LOSS_PENDING = 3,
XR_EVENT_UNAVAILABLE = 4,
XR_SPACE_BOUNDS_UNAVAILABLE = 7,
XR_SESSION_NOT_FOCUSED = 8,
XR_FRAME_DISCARDED = 9,
XR_ERROR_VALIDATION_FAILURE = -1,
XR_ERROR_RUNTIME_FAILURE = -2,
XR_ERROR_OUT_OF_MEMORY = -3,
XR_ERROR_API_VERSION_UNSUPPORTED = -4,
XR_ERROR_INITIALIZATION_FAILED = -6,
XR_ERROR_FUNCTION_UNSUPPORTED = -7,
XR_ERROR_FEATURE_UNSUPPORTED = -8,
XR_ERROR_EXTENSION_NOT_PRESENT = -9,
XR_ERROR_LIMIT_REACHED = -10,
XR_ERROR_SIZE_INSUFFICIENT = -11,
XR_ERROR_HANDLE_INVALID = -12,
XR_ERROR_INSTANCE_LOST = -13,
XR_ERROR_SESSION_RUNNING = -14,
XR_ERROR_SESSION_NOT_RUNNING = -16,
XR_ERROR_SESSION_LOST = -17,
XR_ERROR_SYSTEM_INVALID = -18,
XR_ERROR_PATH_INVALID = -19,
XR_ERROR_PATH_COUNT_EXCEEDED = -20,
XR_ERROR_PATH_FORMAT_INVALID = -21,
XR_ERROR_PATH_UNSUPPORTED = -22,
XR_ERROR_LAYER_INVALID = -23,
XR_ERROR_LAYER_LIMIT_EXCEEDED = -24,
XR_ERROR_SWAPCHAIN_RECT_INVALID = -25,
XR_ERROR_SWAPCHAIN_FORMAT_UNSUPPORTED = -26,
XR_ERROR_ACTION_TYPE_MISMATCH = -27,
XR_ERROR_SESSION_NOT_READY = -28,
XR_ERROR_SESSION_NOT_STOPPING = -29,
XR_ERROR_TIME_INVALID = -30,
XR_ERROR_REFERENCE_SPACE_UNSUPPORTED = -31,
XR_ERROR_FILE_ACCESS_ERROR = -32,
XR_ERROR_FILE_CONTENTS_INVALID = -33,
XR_ERROR_FORM_FACTOR_UNSUPPORTED = -34,
XR_ERROR_FORM_FACTOR_UNAVAILABLE = -35,
XR_ERROR_API_LAYER_NOT_PRESENT = -36,
XR_ERROR_CALL_ORDER_INVALID = -37,
XR_ERROR_GRAPHICS_DEVICE_INVALID = -38,
XR_ERROR_POSE_INVALID = -39,
XR_ERROR_INDEX_OUT_OF_RANGE = -40,
XR_ERROR_VIEW_CONFIGURATION_TYPE_UNSUPPORTED = -41,
XR_ERROR_ENVIRONMENT_BLEND_MODE_UNSUPPORTED = -42,
XR_ERROR_NAME_DUPLICATED = -44,
XR_ERROR_NAME_INVALID = -45,
XR_ERROR_ACTIONSET_NOT_ATTACHED = -46,
XR_ERROR_ACTIONSETS_ALREADY_ATTACHED = -47,
XR_ERROR_LOCALIZED_NAME_DUPLICATED = -48,
XR_ERROR_LOCALIZED_NAME_INVALID = -49,
XR_ERROR_GRAPHICS_REQUIREMENTS_CALL_MISSING = -50,
XR_ERROR_RUNTIME_UNAVAILABLE = -51,
// Provided by XR_VERSION_1_1
XR_ERROR_EXTENSION_DEPENDENCY_NOT_ENABLED = -1000710001,
// Provided by XR_VERSION_1_1
XR_ERROR_PERMISSION_INSUFFICIENT = -1000710000,
// Provided by XR_KHR_android_thread_settings
XR_ERROR_ANDROID_THREAD_SETTINGS_ID_INVALID_KHR = -1000003000,
// Provided by XR_KHR_android_thread_settings
XR_ERROR_ANDROID_THREAD_SETTINGS_FAILURE_KHR = -1000003001,
// Provided by XR_MSFT_spatial_anchor
XR_ERROR_CREATE_SPATIAL_ANCHOR_FAILED_MSFT = -1000039001,
// Provided by XR_MSFT_secondary_view_configuration
XR_ERROR_SECONDARY_VIEW_CONFIGURATION_TYPE_NOT_ENABLED_MSFT = -1000053000,
// Provided by XR_MSFT_controller_model
XR_ERROR_CONTROLLER_MODEL_KEY_INVALID_MSFT = -1000055000,
// Provided by XR_MSFT_composition_layer_reprojection
XR_ERROR_REPROJECTION_MODE_UNSUPPORTED_MSFT = -1000066000,
// Provided by XR_MSFT_scene_understanding
XR_ERROR_COMPUTE_NEW_SCENE_NOT_COMPLETED_MSFT = -1000097000,
// Provided by XR_MSFT_scene_understanding
XR_ERROR_SCENE_COMPONENT_ID_INVALID_MSFT = -1000097001,
// Provided by XR_MSFT_scene_understanding
XR_ERROR_SCENE_COMPONENT_TYPE_MISMATCH_MSFT = -1000097002,
// Provided by XR_MSFT_scene_understanding
XR_ERROR_SCENE_MESH_BUFFER_ID_INVALID_MSFT = -1000097003,
// Provided by XR_MSFT_scene_understanding
XR_ERROR_SCENE_COMPUTE_FEATURE_INCOMPATIBLE_MSFT = -1000097004,
// Provided by XR_MSFT_scene_understanding
XR_ERROR_SCENE_COMPUTE_CONSISTENCY_MISMATCH_MSFT = -1000097005,
// Provided by XR_FB_display_refresh_rate
XR_ERROR_DISPLAY_REFRESH_RATE_UNSUPPORTED_FB = -1000101000,
// Provided by XR_FB_color_space
XR_ERROR_COLOR_SPACE_UNSUPPORTED_FB = -1000108000,
// Provided by XR_FB_spatial_entity
XR_ERROR_SPACE_COMPONENT_NOT_SUPPORTED_FB = -1000113000,
// Provided by XR_FB_spatial_entity
XR_ERROR_SPACE_COMPONENT_NOT_ENABLED_FB = -1000113001,
// Provided by XR_FB_spatial_entity
XR_ERROR_SPACE_COMPONENT_STATUS_PENDING_FB = -1000113002,
// Provided by XR_FB_spatial_entity
XR_ERROR_SPACE_COMPONENT_STATUS_ALREADY_SET_FB = -1000113003,
// Provided by XR_FB_passthrough
XR_ERROR_UNEXPECTED_STATE_PASSTHROUGH_FB = -1000118000,
// Provided by XR_FB_passthrough
XR_ERROR_FEATURE_ALREADY_CREATED_PASSTHROUGH_FB = -1000118001,
// Provided by XR_FB_passthrough
XR_ERROR_FEATURE_REQUIRED_PASSTHROUGH_FB = -1000118002,
// Provided by XR_FB_passthrough
XR_ERROR_NOT_PERMITTED_PASSTHROUGH_FB = -1000118003,
// Provided by XR_FB_passthrough
XR_ERROR_INSUFFICIENT_RESOURCES_PASSTHROUGH_FB = -1000118004,
// Provided by XR_FB_passthrough
XR_ERROR_UNKNOWN_PASSTHROUGH_FB = -1000118050,
// Provided by XR_FB_render_model
XR_ERROR_RENDER_MODEL_KEY_INVALID_FB = -1000119000,
// Provided by XR_FB_render_model
XR_RENDER_MODEL_UNAVAILABLE_FB = 1000119020,
// Provided by XR_VARJO_marker_tracking
XR_ERROR_MARKER_NOT_TRACKED_VARJO = -1000124000,
// Provided by XR_VARJO_marker_tracking
XR_ERROR_MARKER_ID_INVALID_VARJO = -1000124001,
// Provided by XR_ML_marker_understanding
XR_ERROR_MARKER_DETECTOR_PERMISSION_DENIED_ML = -1000138000,
// Provided by XR_ML_marker_understanding
XR_ERROR_MARKER_DETECTOR_LOCATE_FAILED_ML = -1000138001,
// Provided by XR_ML_marker_understanding
XR_ERROR_MARKER_DETECTOR_INVALID_DATA_QUERY_ML = -1000138002,
// Provided by XR_ML_marker_understanding
XR_ERROR_MARKER_DETECTOR_INVALID_CREATE_INFO_ML = -1000138003,
// Provided by XR_ML_marker_understanding
XR_ERROR_MARKER_INVALID_ML = -1000138004,
// Provided by XR_ML_localization_map
XR_ERROR_LOCALIZATION_MAP_INCOMPATIBLE_ML = -1000139000,
// Provided by XR_ML_localization_map
XR_ERROR_LOCALIZATION_MAP_UNAVAILABLE_ML = -1000139001,
// Provided by XR_ML_localization_map
XR_ERROR_LOCALIZATION_MAP_FAIL_ML = -1000139002,
// Provided by XR_ML_localization_map
XR_ERROR_LOCALIZATION_MAP_IMPORT_EXPORT_PERMISSION_DENIED_ML = -1000139003,
// Provided by XR_ML_localization_map
XR_ERROR_LOCALIZATION_MAP_PERMISSION_DENIED_ML = -1000139004,
// Provided by XR_ML_localization_map
XR_ERROR_LOCALIZATION_MAP_ALREADY_EXISTS_ML = -1000139005,
// Provided by XR_ML_localization_map
XR_ERROR_LOCALIZATION_MAP_CANNOT_EXPORT_CLOUD_MAP_ML = -1000139006,
// Provided by XR_ML_spatial_anchors
XR_ERROR_SPATIAL_ANCHORS_PERMISSION_DENIED_ML = -1000140000,
// Provided by XR_ML_spatial_anchors
XR_ERROR_SPATIAL_ANCHORS_NOT_LOCALIZED_ML = -1000140001,
// Provided by XR_ML_spatial_anchors
XR_ERROR_SPATIAL_ANCHORS_OUT_OF_MAP_BOUNDS_ML = -1000140002,
// Provided by XR_ML_spatial_anchors
XR_ERROR_SPATIAL_ANCHORS_SPACE_NOT_LOCATABLE_ML = -1000140003,
// Provided by XR_ML_spatial_anchors_storage
XR_ERROR_SPATIAL_ANCHORS_ANCHOR_NOT_FOUND_ML = -1000141000,
// Provided by XR_MSFT_spatial_anchor_persistence
XR_ERROR_SPATIAL_ANCHOR_NAME_NOT_FOUND_MSFT = -1000142001,
// Provided by XR_MSFT_spatial_anchor_persistence
XR_ERROR_SPATIAL_ANCHOR_NAME_INVALID_MSFT = -1000142002,
// Provided by XR_MSFT_scene_marker
XR_SCENE_MARKER_DATA_NOT_STRING_MSFT = 1000147000,
// Provided by XR_FB_spatial_entity_sharing
XR_ERROR_SPACE_MAPPING_INSUFFICIENT_FB = -1000169000,
// Provided by XR_FB_spatial_entity_sharing
XR_ERROR_SPACE_LOCALIZATION_FAILED_FB = -1000169001,
// Provided by XR_FB_spatial_entity_sharing
XR_ERROR_SPACE_NETWORK_TIMEOUT_FB = -1000169002,
// Provided by XR_FB_spatial_entity_sharing
XR_ERROR_SPACE_NETWORK_REQUEST_FAILED_FB = -1000169003,
// Provided by XR_FB_spatial_entity_sharing
XR_ERROR_SPACE_CLOUD_STORAGE_DISABLED_FB = -1000169004,
// Provided by XR_META_spatial_entity_persistence
XR_ERROR_SPACE_INSUFFICIENT_RESOURCES_META = -1000259000,
// Provided by XR_META_spatial_entity_persistence
XR_ERROR_SPACE_STORAGE_AT_CAPACITY_META = -1000259001,
// Provided by XR_META_spatial_entity_persistence
XR_ERROR_SPACE_INSUFFICIENT_VIEW_META = -1000259002,
// Provided by XR_META_spatial_entity_persistence
XR_ERROR_SPACE_PERMISSION_INSUFFICIENT_META = -1000259003,
// Provided by XR_META_spatial_entity_persistence
XR_ERROR_SPACE_RATE_LIMITED_META = -1000259004,
// Provided by XR_META_spatial_entity_persistence
XR_ERROR_SPACE_TOO_DARK_META = -1000259005,
// Provided by XR_META_spatial_entity_persistence
XR_ERROR_SPACE_TOO_BRIGHT_META = -1000259006,
// Provided by XR_META_passthrough_color_lut
XR_ERROR_PASSTHROUGH_COLOR_LUT_BUFFER_SIZE_MISMATCH_META = -1000266000,
// Provided by XR_META_environment_depth
XR_ENVIRONMENT_DEPTH_NOT_AVAILABLE_META = 1000291000,
// Provided by XR_EXT_render_model
XR_ERROR_RENDER_MODEL_ID_INVALID_EXT = -1000300000,
// Provided by XR_EXT_render_model
XR_ERROR_RENDER_MODEL_ASSET_UNAVAILABLE_EXT = -1000300001,
// Provided by XR_EXT_render_model
XR_ERROR_RENDER_MODEL_GLTF_EXTENSION_REQUIRED_EXT = -1000300002,
// Provided by XR_EXT_interaction_render_model
XR_ERROR_NOT_INTERACTION_RENDER_MODEL_EXT = -1000301000,
// Provided by XR_QCOM_tracking_optimization_settings
XR_ERROR_HINT_ALREADY_SET_QCOM = -1000306000,
// Provided by XR_HTC_anchor
XR_ERROR_NOT_AN_ANCHOR_HTC = -1000319000,
// Provided by XR_BD_spatial_sensing
XR_ERROR_SPATIAL_ENTITY_ID_INVALID_BD = -1000389000,
// Provided by XR_BD_spatial_sensing
XR_ERROR_SPATIAL_SENSING_SERVICE_UNAVAILABLE_BD = -1000389001,
// Provided by XR_BD_spatial_sensing
XR_ERROR_ANCHOR_NOT_SUPPORTED_FOR_ENTITY_BD = -1000389002,
// Provided by XR_BD_spatial_anchor
XR_ERROR_SPATIAL_ANCHOR_NOT_FOUND_BD = -1000390000,
// Provided by XR_BD_spatial_anchor_sharing
XR_ERROR_SPATIAL_ANCHOR_SHARING_NETWORK_TIMEOUT_BD = -1000391000,
// Provided by XR_BD_spatial_anchor_sharing
XR_ERROR_SPATIAL_ANCHOR_SHARING_AUTHENTICATION_FAILURE_BD = -1000391001,
// Provided by XR_BD_spatial_anchor_sharing
XR_ERROR_SPATIAL_ANCHOR_SHARING_NETWORK_FAILURE_BD = -1000391002,
// Provided by XR_BD_spatial_anchor_sharing
XR_ERROR_SPATIAL_ANCHOR_SHARING_LOCALIZATION_FAIL_BD = -1000391003,
// Provided by XR_BD_spatial_anchor_sharing
XR_ERROR_SPATIAL_ANCHOR_SHARING_MAP_INSUFFICIENT_BD = -1000391004,
// Provided by XR_BD_spatial_scene
XR_ERROR_SCENE_CAPTURE_FAILURE_BD = -1000392000,
// Provided by XR_EXT_plane_detection
XR_ERROR_SPACE_NOT_LOCATABLE_EXT = -1000429000,
// Provided by XR_EXT_plane_detection
XR_ERROR_PLANE_DETECTION_PERMISSION_DENIED_EXT = -1000429001,
// Provided by XR_ANDROID_trackables
XR_ERROR_MISMATCHING_TRACKABLE_TYPE_ANDROID = -1000455000,
// Provided by XR_ANDROID_trackables
XR_ERROR_TRACKABLE_TYPE_NOT_SUPPORTED_ANDROID = -1000455001,
// Provided by XR_ANDROID_device_anchor_persistence
XR_ERROR_ANCHOR_ID_NOT_FOUND_ANDROID = -1000457000,
// Provided by XR_ANDROID_device_anchor_persistence
XR_ERROR_ANCHOR_ALREADY_PERSISTED_ANDROID = -1000457001,
// Provided by XR_ANDROID_device_anchor_persistence
XR_ERROR_ANCHOR_NOT_TRACKING_ANDROID = -1000457002,
// Provided by XR_ANDROID_device_anchor_persistence
XR_ERROR_PERSISTED_DATA_NOT_READY_ANDROID = -1000457003,
// Provided by XR_ANDROID_face_tracking
XR_ERROR_SERVICE_NOT_READY_ANDROID = -1000458000,
// Provided by XR_EXT_future
XR_ERROR_FUTURE_PENDING_EXT = -1000469001,
// Provided by XR_EXT_future
XR_ERROR_FUTURE_INVALID_EXT = -1000469002,
// Provided by XR_ML_system_notifications
XR_ERROR_SYSTEM_NOTIFICATION_PERMISSION_DENIED_ML = -1000473000,
// Provided by XR_ML_system_notifications
XR_ERROR_SYSTEM_NOTIFICATION_INCOMPATIBLE_SKU_ML = -1000473001,
// Provided by XR_ML_world_mesh_detection
XR_ERROR_WORLD_MESH_DETECTOR_PERMISSION_DENIED_ML = -1000474000,
// Provided by XR_ML_world_mesh_detection
XR_ERROR_WORLD_MESH_DETECTOR_SPACE_NOT_LOCATABLE_ML = -1000474001,
// Provided by XR_ML_facial_expression
XR_ERROR_FACIAL_EXPRESSION_PERMISSION_DENIED_ML = 1000482000,
// Provided by XR_META_colocation_discovery
XR_ERROR_COLOCATION_DISCOVERY_NETWORK_FAILED_META = -1000571001,
// Provided by XR_META_colocation_discovery
XR_ERROR_COLOCATION_DISCOVERY_NO_DISCOVERY_METHOD_META = -1000571002,
// Provided by XR_META_colocation_discovery
XR_COLOCATION_DISCOVERY_ALREADY_ADVERTISING_META = 1000571003,
// Provided by XR_META_colocation_discovery
XR_COLOCATION_DISCOVERY_ALREADY_DISCOVERING_META = 1000571004,
// Provided by XR_META_spatial_entity_group_sharing
XR_ERROR_SPACE_GROUP_NOT_FOUND_META = -1000572002,
// Provided by XR_ANDROID_anchor_sharing_export
XR_ERROR_ANCHOR_NOT_OWNED_BY_CALLER_ANDROID = -1000701000,
// Provided by XR_EXT_spatial_entity
XR_ERROR_SPATIAL_CAPABILITY_UNSUPPORTED_EXT = -1000740001,
// Provided by XR_EXT_spatial_entity
XR_ERROR_SPATIAL_ENTITY_ID_INVALID_EXT = -1000740002,
// Provided by XR_EXT_spatial_entity
XR_ERROR_SPATIAL_BUFFER_ID_INVALID_EXT = -1000740003,
// Provided by XR_EXT_spatial_entity
XR_ERROR_SPATIAL_COMPONENT_UNSUPPORTED_FOR_CAPABILITY_EXT = -1000740004,
// Provided by XR_EXT_spatial_entity
XR_ERROR_SPATIAL_CAPABILITY_CONFIGURATION_INVALID_EXT = -1000740005,
// Provided by XR_EXT_spatial_entity
XR_ERROR_SPATIAL_COMPONENT_NOT_ENABLED_EXT = -1000740006,
// Provided by XR_EXT_spatial_persistence
XR_ERROR_SPATIAL_PERSISTENCE_SCOPE_UNSUPPORTED_EXT = -1000763001,
// Provided by XR_EXT_spatial_persistence_operations
XR_ERROR_SPATIAL_PERSISTENCE_SCOPE_INCOMPATIBLE_EXT = -1000781001,
// Provided by XR_KHR_maintenance1
XR_ERROR_EXTENSION_DEPENDENCY_NOT_ENABLED_KHR = XR_ERROR_EXTENSION_DEPENDENCY_NOT_ENABLED,
// Provided by XR_KHR_maintenance1
XR_ERROR_PERMISSION_INSUFFICIENT_KHR = XR_ERROR_PERMISSION_INSUFFICIENT,
XR_RESULT_MAX_ENUM = 0x7FFFFFFF
} XrResult;
All return codes in the API are reported via XrResult return values.
The following are common suffixes shared across many of the return codes:
- _INVALID: The specified handle, atom, or value is formatted incorrectly, or the specified handle was never created or has been destroyed.
- _UNSUPPORTED: The specified handle, atom, enumerant, or value is formatted correctly but cannot be used for the lifetime of this function’s parent handle.
- _UNAVAILABLE: The specified handle, atom, enumerant, or value is supported by the handle taken by this function, but is not usable at this moment.
Success Codes
| Enum | Description |
|---|---|
| XR_SUCCESS | Function successfully completed. |
| XR_TIMEOUT_EXPIRED | The specified timeout time occurred before the operation could complete. |
| XR_SESSION_LOSS_PENDING | The session will be lost soon. |
| XR_EVENT_UNAVAILABLE | No event was available. |
| XR_SPACE_BOUNDS_UNAVAILABLE | The space’s bounds are not known at the moment. |
| XR_SESSION_NOT_FOCUSED | The session is not in the focused state. |
| XR_FRAME_DISCARDED | A frame has been discarded from composition. |
| XR_RENDER_MODEL_UNAVAILABLE_FB | The model is unavailable. (Added by the XR_FB_render_model extension) |
| XR_SCENE_MARKER_DATA_NOT_STRING_MSFT | Marker does not encode a string. (Added by the XR_MSFT_scene_marker extension) |
| XR_ENVIRONMENT_DEPTH_NOT_AVAILABLE_META | Warning: The requested depth image is not yet available. (Added by the XR_META_environment_depth extension) |
| XR_ERROR_FACIAL_EXPRESSION_PERMISSION_DENIED_ML | Permission to track facial expressions was not granted. (Added by the XR_ML_facial_expression extension) |
| XR_COLOCATION_DISCOVERY_ALREADY_ADVERTISING_META | Colocation advertisement has already been enabled. (Added by the XR_META_colocation_discovery extension) |
| XR_COLOCATION_DISCOVERY_ALREADY_DISCOVERING_META | Colocation discovery has already been enabled. (Added by the XR_META_colocation_discovery extension) |
Error Codes
| Enum | Description |
|---|---|
| XR_ERROR_VALIDATION_FAILURE | The function usage was invalid in some way. |
| XR_ERROR_RUNTIME_FAILURE | The runtime failed to handle the function in an unexpected way that is not covered by another error result. |
| XR_ERROR_OUT_OF_MEMORY | A memory allocation has failed. |
| XR_ERROR_API_VERSION_UNSUPPORTED | The runtime does not support the requested API version. |
| XR_ERROR_INITIALIZATION_FAILED | Initialization of object could not be completed. |
| XR_ERROR_FUNCTION_UNSUPPORTED | The requested function was not found or is otherwise unsupported. |
| XR_ERROR_FEATURE_UNSUPPORTED | The requested feature is not supported. |
| XR_ERROR_EXTENSION_NOT_PRESENT | A requested extension is not supported. |
| XR_ERROR_LIMIT_REACHED | The runtime supports no more of the requested resource. |
| XR_ERROR_SIZE_INSUFFICIENT | The supplied size was smaller than required. |
| XR_ERROR_HANDLE_INVALID | A supplied object handle was invalid. |
| XR_ERROR_INSTANCE_LOST | The XrInstance was lost or could not be found. It will need to be destroyed and optionally recreated. |
| XR_ERROR_SESSION_RUNNING | The session is already running. |
| XR_ERROR_SESSION_NOT_RUNNING | The session is not yet running. |
| XR_ERROR_SESSION_LOST | The XrSession was lost. It will need to be destroyed and optionally recreated. |
| XR_ERROR_SYSTEM_INVALID | The provided XrSystemId was invalid. |
| XR_ERROR_PATH_INVALID | The provided XrPath was not valid. |
| XR_ERROR_PATH_COUNT_EXCEEDED | The maximum number of supported semantic paths has been reached. |
| XR_ERROR_PATH_FORMAT_INVALID | The semantic path character format is invalid. |
| XR_ERROR_PATH_UNSUPPORTED | The semantic path is unsupported. |
| XR_ERROR_LAYER_INVALID | The layer was NULL or otherwise invalid. |
| XR_ERROR_LAYER_LIMIT_EXCEEDED | The number of specified layers is greater than the supported number. |
| XR_ERROR_SWAPCHAIN_RECT_INVALID | The image rect was negatively sized or otherwise invalid. |
| XR_ERROR_SWAPCHAIN_FORMAT_UNSUPPORTED | The image format is not supported by the runtime or platform. |
| XR_ERROR_ACTION_TYPE_MISMATCH | The API used to retrieve an action’s state does not match the action’s type. |
| XR_ERROR_SESSION_NOT_READY | The session is not in the ready state. |
| XR_ERROR_SESSION_NOT_STOPPING | The session is not in the stopping state. |
| XR_ERROR_TIME_INVALID | The provided XrTime was zero, negative, or out of range. |
| XR_ERROR_REFERENCE_SPACE_UNSUPPORTED | The specified reference space is not supported by the runtime or system. |
| XR_ERROR_FILE_ACCESS_ERROR | The file could not be accessed. |
| XR_ERROR_FILE_CONTENTS_INVALID | The file’s contents were invalid. |
| XR_ERROR_FORM_FACTOR_UNSUPPORTED | The specified form factor is not supported by the current runtime or platform. |
| XR_ERROR_FORM_FACTOR_UNAVAILABLE | The specified form factor is supported, but the device is currently not available, e.g. not plugged in or powered off. |
| XR_ERROR_API_LAYER_NOT_PRESENT | A requested API layer is not present or could not be loaded. |
| XR_ERROR_CALL_ORDER_INVALID | The call was made without having made a previously required call. |
| XR_ERROR_GRAPHICS_DEVICE_INVALID | The given graphics device is not in a valid state. The graphics device could be lost or initialized without meeting graphics requirements. |
| XR_ERROR_POSE_INVALID | The supplied pose was invalid with respect to the requirements. |
| XR_ERROR_INDEX_OUT_OF_RANGE | The supplied index was outside the range of valid indices. |
| XR_ERROR_VIEW_CONFIGURATION_TYPE_UNSUPPORTED | The specified view configuration type is not supported by the runtime or platform. |
| XR_ERROR_ENVIRONMENT_BLEND_MODE_UNSUPPORTED | The specified environment blend mode is not supported by the runtime or platform. |
| XR_ERROR_NAME_DUPLICATED | The name provided was a duplicate of an already-existing resource. |
| XR_ERROR_NAME_INVALID | The name provided was invalid. |
| XR_ERROR_ACTIONSET_NOT_ATTACHED | A referenced action set is not attached to the session. |
| XR_ERROR_ACTIONSETS_ALREADY_ATTACHED | The session already has attached action sets. |
| XR_ERROR_LOCALIZED_NAME_DUPLICATED | The localized name provided was a duplicate of an already-existing resource. |
| XR_ERROR_LOCALIZED_NAME_INVALID | The localized name provided was invalid. |
| XR_ERROR_GRAPHICS_REQUIREMENTS_CALL_MISSING | The xrGetGraphicsRequirements* call was not made before calling xrCreateSession. |
| XR_ERROR_RUNTIME_UNAVAILABLE | The loader was unable to find or load a runtime. |
| XR_ERROR_EXTENSION_DEPENDENCY_NOT_ENABLED | One or more of the extensions being enabled has a dependency on extensions that are not enabled. |
| XR_ERROR_PERMISSION_INSUFFICIENT | Insufficient permissions. This error is included for use by vendor extensions. The precise definition of XR_ERROR_PERMISSION_INSUFFICIENT and the actions available to the application or user to resolve it can vary per extension. |
| XR_ERROR_ANDROID_THREAD_SETTINGS_ID_INVALID_KHR | xrSetAndroidApplicationThreadKHR failed as thread id is invalid. (Added by the XR_KHR_android_thread_settings extension) |
| XR_ERROR_ANDROID_THREAD_SETTINGS_FAILURE_KHR | xrSetAndroidApplicationThreadKHR failed setting the thread attributes/priority. (Added by the XR_KHR_android_thread_settings extension) |
| XR_ERROR_CREATE_SPATIAL_ANCHOR_FAILED_MSFT | Spatial anchor could not be created at that location. (Added by the XR_MSFT_spatial_anchor extension) |
| XR_ERROR_SECONDARY_VIEW_CONFIGURATION_TYPE_NOT_ENABLED_MSFT | The secondary view configuration was not enabled when creating the session. (Added by the XR_MSFT_secondary_view_configuration extension) |
| XR_ERROR_CONTROLLER_MODEL_KEY_INVALID_MSFT | The controller model key is invalid. (Added by the XR_MSFT_controller_model extension) |
| XR_ERROR_REPROJECTION_MODE_UNSUPPORTED_MSFT | The reprojection mode is not supported. (Added by the XR_MSFT_composition_layer_reprojection extension) |
| XR_ERROR_COMPUTE_NEW_SCENE_NOT_COMPLETED_MSFT | Compute new scene not completed. (Added by the XR_MSFT_scene_understanding extension) |
| XR_ERROR_SCENE_COMPONENT_ID_INVALID_MSFT | Scene component id invalid. (Added by the XR_MSFT_scene_understanding extension) |
| XR_ERROR_SCENE_COMPONENT_TYPE_MISMATCH_MSFT | Scene component type mismatch. (Added by the XR_MSFT_scene_understanding extension) |
| XR_ERROR_SCENE_MESH_BUFFER_ID_INVALID_MSFT | Scene mesh buffer id invalid. (Added by the XR_MSFT_scene_understanding extension) |
| XR_ERROR_SCENE_COMPUTE_FEATURE_INCOMPATIBLE_MSFT | Scene compute feature incompatible. (Added by the XR_MSFT_scene_understanding extension) |
| XR_ERROR_SCENE_COMPUTE_CONSISTENCY_MISMATCH_MSFT | Scene compute consistency mismatch. (Added by the XR_MSFT_scene_understanding extension) |
| XR_ERROR_DISPLAY_REFRESH_RATE_UNSUPPORTED_FB | The display refresh rate is not supported by the platform. (Added by the XR_FB_display_refresh_rate extension) |
| XR_ERROR_COLOR_SPACE_UNSUPPORTED_FB | The color space is not supported by the runtime. (Added by the XR_FB_color_space extension) |
| XR_ERROR_SPACE_COMPONENT_NOT_SUPPORTED_FB | The component type is not supported for this space. (Added by the XR_FB_spatial_entity extension) |
| XR_ERROR_SPACE_COMPONENT_NOT_ENABLED_FB | The required component is not enabled for this space. (Added by the XR_FB_spatial_entity extension) |
| XR_ERROR_SPACE_COMPONENT_STATUS_PENDING_FB | A request to set the component’s status is currently pending. (Added by the XR_FB_spatial_entity extension) |
| XR_ERROR_SPACE_COMPONENT_STATUS_ALREADY_SET_FB | The component is already set to the requested value. (Added by the XR_FB_spatial_entity extension) |
| XR_ERROR_UNEXPECTED_STATE_PASSTHROUGH_FB | The object state is unexpected for the issued command. (Added by the XR_FB_passthrough extension) |
| XR_ERROR_FEATURE_ALREADY_CREATED_PASSTHROUGH_FB | Trying to create an MR feature when one was already created and only one instance is allowed. (Added by the XR_FB_passthrough extension) |
| XR_ERROR_FEATURE_REQUIRED_PASSTHROUGH_FB | Requested functionality requires a feature to be created first. (Added by the XR_FB_passthrough extension) |
| XR_ERROR_NOT_PERMITTED_PASSTHROUGH_FB | Requested functionality is not permitted: the application is not allowed to perform the requested operation. (Added by the XR_FB_passthrough extension) |
| XR_ERROR_INSUFFICIENT_RESOURCES_PASSTHROUGH_FB | There were insufficient resources available to perform an operation. (Added by the XR_FB_passthrough extension) |
| XR_ERROR_UNKNOWN_PASSTHROUGH_FB | Unknown Passthrough error (no further details provided). (Added by the XR_FB_passthrough extension) |
| XR_ERROR_RENDER_MODEL_KEY_INVALID_FB | The model key is invalid. (Added by the XR_FB_render_model extension) |
| XR_ERROR_MARKER_NOT_TRACKED_VARJO | Marker tracking is disabled or the specified marker is not currently tracked. (Added by the XR_VARJO_marker_tracking extension) |
| XR_ERROR_MARKER_ID_INVALID_VARJO | The specified marker ID is not valid. (Added by the XR_VARJO_marker_tracking extension) |
| XR_ERROR_MARKER_DETECTOR_PERMISSION_DENIED_ML | The com.magicleap.permission.MARKER_TRACKING permission was denied. (Added by the XR_ML_marker_understanding extension) |
| XR_ERROR_MARKER_DETECTOR_LOCATE_FAILED_ML | The specified marker could not be located spatially. (Added by the XR_ML_marker_understanding extension) |
| XR_ERROR_MARKER_DETECTOR_INVALID_DATA_QUERY_ML | The marker queried does not contain data of the requested type. (Added by the XR_ML_marker_understanding extension) |
| XR_ERROR_MARKER_DETECTOR_INVALID_CREATE_INFO_ML | (Added by the XR_ML_marker_understanding extension) |
| XR_ERROR_MARKER_INVALID_ML | The marker id passed to the function was invalid. (Added by the XR_ML_marker_understanding extension) |
| XR_ERROR_LOCALIZATION_MAP_INCOMPATIBLE_ML | The localization map being imported is not compatible with current OS or mode. (Added by the XR_ML_localization_map extension) |
| XR_ERROR_LOCALIZATION_MAP_UNAVAILABLE_ML | The localization map requested is not available. (Added by the XR_ML_localization_map extension) |
| XR_ERROR_LOCALIZATION_MAP_FAIL_ML | The map localization service failed to fulfill the request, retry later. (Added by the XR_ML_localization_map extension) |
| XR_ERROR_LOCALIZATION_MAP_IMPORT_EXPORT_PERMISSION_DENIED_ML | The com.magicleap.permission.SPACE_IMPORT_EXPORT permission was denied. (Added by the XR_ML_localization_map extension) |
| XR_ERROR_LOCALIZATION_MAP_PERMISSION_DENIED_ML | The com.magicleap.permission.SPACE_MANAGER permission was denied. (Added by the XR_ML_localization_map extension) |
| XR_ERROR_LOCALIZATION_MAP_ALREADY_EXISTS_ML | The map being imported already exists in the system. (Added by the XR_ML_localization_map extension) |
| XR_ERROR_LOCALIZATION_MAP_CANNOT_EXPORT_CLOUD_MAP_ML | The map localization service cannot export cloud based maps. (Added by the XR_ML_localization_map extension) |
| XR_ERROR_SPATIAL_ANCHORS_PERMISSION_DENIED_ML | The com.magicleap.permission.SPATIAL_ANCHOR permission was not granted. (Added by the XR_ML_spatial_anchors extension) |
| XR_ERROR_SPATIAL_ANCHORS_NOT_LOCALIZED_ML | Operation failed because the system is not localized into a localization map. (Added by the XR_ML_spatial_anchors extension) |
| XR_ERROR_SPATIAL_ANCHORS_OUT_OF_MAP_BOUNDS_ML | Operation failed because it is performed outside of the localization map. (Added by the XR_ML_spatial_anchors extension) |
| XR_ERROR_SPATIAL_ANCHORS_SPACE_NOT_LOCATABLE_ML | Operation failed because the space referenced cannot be located. (Added by the XR_ML_spatial_anchors extension) |
| XR_ERROR_SPATIAL_ANCHORS_ANCHOR_NOT_FOUND_ML | The referenced anchor was not found. (Added by the XR_ML_spatial_anchors_storage extension) |
| XR_ERROR_SPATIAL_ANCHOR_NAME_NOT_FOUND_MSFT | A spatial anchor was not found associated with the spatial anchor name provided. (Added by the XR_MSFT_spatial_anchor_persistence extension) |
| XR_ERROR_SPATIAL_ANCHOR_NAME_INVALID_MSFT | The spatial anchor name provided was not valid. (Added by the XR_MSFT_spatial_anchor_persistence extension) |
| XR_ERROR_SPACE_MAPPING_INSUFFICIENT_FB | Anchor import from cloud or export from device failed. (Added by the XR_FB_spatial_entity_sharing extension) |
| XR_ERROR_SPACE_LOCALIZATION_FAILED_FB | Anchors were downloaded from the cloud but failed to be imported/aligned on the device. (Added by the XR_FB_spatial_entity_sharing extension) |
| XR_ERROR_SPACE_NETWORK_TIMEOUT_FB | Timeout occurred while waiting for network request to complete. (Added by the XR_FB_spatial_entity_sharing extension) |
| XR_ERROR_SPACE_NETWORK_REQUEST_FAILED_FB | The network request failed. (Added by the XR_FB_spatial_entity_sharing extension) |
| XR_ERROR_SPACE_CLOUD_STORAGE_DISABLED_FB | Cloud storage is required for this operation but is currently disabled. (Added by the XR_FB_spatial_entity_sharing extension) |
| XR_ERROR_SPACE_INSUFFICIENT_RESOURCES_META | Resource limitation prevented this operation from executing. Recommend retrying, perhaps after a short delay and/or reducing memory consumption. (Added by the XR_META_spatial_entity_persistence extension) |
| XR_ERROR_SPACE_STORAGE_AT_CAPACITY_META | Operation could not be completed until resources used are reduced or storage expanded. (Added by the XR_META_spatial_entity_persistence extension) |
| XR_ERROR_SPACE_INSUFFICIENT_VIEW_META | Look around the environment more for space tracking to function. (Added by the XR_META_spatial_entity_persistence extension) |
| XR_ERROR_SPACE_PERMISSION_INSUFFICIENT_META | Space operation permission insufficient. Recommend confirming the status of the required permissions needed for using Space APIs. (Added by the XR_META_spatial_entity_persistence extension) |
| XR_ERROR_SPACE_RATE_LIMITED_META | Operation cancelled due to rate limiting. Recommend retrying after a short delay. (Added by the XR_META_spatial_entity_persistence extension) |
| XR_ERROR_SPACE_TOO_DARK_META | Environment too dark for tracking to complete operation. (Added by the XR_META_spatial_entity_persistence extension) |
| XR_ERROR_SPACE_TOO_BRIGHT_META | Environment too bright for tracking to complete operation. (Added by the XR_META_spatial_entity_persistence extension) |
| XR_ERROR_PASSTHROUGH_COLOR_LUT_BUFFER_SIZE_MISMATCH_META | The provided data buffer did not match the required size. (Added by the XR_META_passthrough_color_lut extension) |
| XR_ERROR_RENDER_MODEL_ID_INVALID_EXT | The render model ID is invalid. (Added by the XR_EXT_render_model extension) |
| XR_ERROR_RENDER_MODEL_ASSET_UNAVAILABLE_EXT | The render model asset is unavailable. (Added by the XR_EXT_render_model extension) |
| XR_ERROR_RENDER_MODEL_GLTF_EXTENSION_REQUIRED_EXT | A glTF extension is required. (Added by the XR_EXT_render_model extension) |
| XR_ERROR_NOT_INTERACTION_RENDER_MODEL_EXT | The provided XrRenderModelEXT was not created from an interaction render model. (Added by the XR_EXT_interaction_render_model extension) |
| XR_ERROR_HINT_ALREADY_SET_QCOM | Tracking optimization hint is already set for the domain. (Added by the XR_QCOM_tracking_optimization_settings extension) |
| XR_ERROR_NOT_AN_ANCHOR_HTC | The provided space is valid but not an anchor. (Added by the XR_HTC_anchor extension) |
| XR_ERROR_SPATIAL_ENTITY_ID_INVALID_BD | The spatial entity id is invalid. (Added by the XR_BD_spatial_sensing extension) |
| XR_ERROR_SPATIAL_SENSING_SERVICE_UNAVAILABLE_BD | The spatial sensing service is unavailable. (Added by the XR_BD_spatial_sensing extension) |
| XR_ERROR_ANCHOR_NOT_SUPPORTED_FOR_ENTITY_BD | The spatial entity does not support anchors. (Added by the XR_BD_spatial_sensing extension) |
| XR_ERROR_SPATIAL_ANCHOR_NOT_FOUND_BD | The spatial anchor was not found. (Added by the XR_BD_spatial_anchor extension) |
| XR_ERROR_SPATIAL_ANCHOR_SHARING_NETWORK_TIMEOUT_BD | The network transmission timed out. (Added by the XR_BD_spatial_anchor_sharing extension) |
| XR_ERROR_SPATIAL_ANCHOR_SHARING_AUTHENTICATION_FAILURE_BD | The authentication for the user account failed. (Added by the XR_BD_spatial_anchor_sharing extension) |
| XR_ERROR_SPATIAL_ANCHOR_SHARING_NETWORK_FAILURE_BD | The network connection failed, e.g. the connection is unstable or disconnected. (Added by the XR_BD_spatial_anchor_sharing extension) |
| XR_ERROR_SPATIAL_ANCHOR_SHARING_LOCALIZATION_FAIL_BD | The spatial anchor localization failed. (Added by the XR_BD_spatial_anchor_sharing extension) |
| XR_ERROR_SPATIAL_ANCHOR_SHARING_MAP_INSUFFICIENT_BD | The feature points of the spatial anchor map are insufficient. (Added by the XR_BD_spatial_anchor_sharing extension) |
| XR_ERROR_SCENE_CAPTURE_FAILURE_BD | The scene capture failed, for example by exiting abnormally. (Added by the XR_BD_spatial_scene extension) |
| XR_ERROR_SPACE_NOT_LOCATABLE_EXT | The space passed to the function was not locatable. (Added by the XR_EXT_plane_detection extension) |
| XR_ERROR_PLANE_DETECTION_PERMISSION_DENIED_EXT | The permission for this resource was not granted. (Added by the XR_EXT_plane_detection extension) |
| XR_ERROR_MISMATCHING_TRACKABLE_TYPE_ANDROID | Indicates that the parameters contain multiple trackable types. (Added by the XR_ANDROID_trackables extension) |
| XR_ERROR_TRACKABLE_TYPE_NOT_SUPPORTED_ANDROID | Indicates that the function is not supported by the given trackable type. (Added by the XR_ANDROID_trackables extension) |
| XR_ERROR_ANCHOR_ID_NOT_FOUND_ANDROID | The XrUuidEXT passed to the function was not found to be a persisted anchor. (Added by the XR_ANDROID_device_anchor_persistence extension) |
| XR_ERROR_ANCHOR_ALREADY_PERSISTED_ANDROID | The XrUuidEXT passed to the function was already marked to be persisted. (Added by the XR_ANDROID_device_anchor_persistence extension) |
| XR_ERROR_ANCHOR_NOT_TRACKING_ANDROID | Anchor cannot be persisted because it is not tracking. (Added by the XR_ANDROID_device_anchor_persistence extension) |
| XR_ERROR_PERSISTED_DATA_NOT_READY_ANDROID | Persisted data stored by this app has not been loaded yet. (Added by the XR_ANDROID_device_anchor_persistence extension) |
| XR_ERROR_SERVICE_NOT_READY_ANDROID | The underlying tracking service is not yet ready. (Added by the XR_ANDROID_face_tracking extension) |
| XR_ERROR_FUTURE_PENDING_EXT | Returned by completion function to indicate future is not ready. (Added by the XR_EXT_future extension) |
| XR_ERROR_FUTURE_INVALID_EXT | Returned by completion function to indicate future is not valid. (Added by the XR_EXT_future extension) |
| XR_ERROR_SYSTEM_NOTIFICATION_PERMISSION_DENIED_ML | The com.magicleap.permission.SYSTEM_NOTIFICATION permission was not granted. (Added by the XR_ML_system_notifications extension) |
| XR_ERROR_SYSTEM_NOTIFICATION_INCOMPATIBLE_SKU_ML | Incompatible SKU detected. (Added by the XR_ML_system_notifications extension) |
| XR_ERROR_WORLD_MESH_DETECTOR_PERMISSION_DENIED_ML | The world mesh detector permission was not granted. (Added by the XR_ML_world_mesh_detection extension) |
| XR_ERROR_WORLD_MESH_DETECTOR_SPACE_NOT_LOCATABLE_ML | At the time of the call the runtime was unable to locate the space and cannot fulfill your request. (Added by the XR_ML_world_mesh_detection extension) |
| XR_ERROR_COLOCATION_DISCOVERY_NETWORK_FAILED_META | The network request failed. (Added by the XR_META_colocation_discovery extension) |
| XR_ERROR_COLOCATION_DISCOVERY_NO_DISCOVERY_METHOD_META | The runtime does not have any methods available to perform discovery. (Added by the XR_META_colocation_discovery extension) |
| XR_ERROR_SPACE_GROUP_NOT_FOUND_META | The group UUID was not found within the runtime. (Added by the XR_META_spatial_entity_group_sharing extension) |
| XR_ERROR_ANCHOR_NOT_OWNED_BY_CALLER_ANDROID | Operation not allowed because the anchor is not owned by the XrSession in which the function is being called. (Added by the XR_ANDROID_anchor_sharing_export extension) |
| XR_ERROR_SPATIAL_CAPABILITY_UNSUPPORTED_EXT | The specified spatial capability is not supported by the runtime or the system. (Added by the XR_EXT_spatial_entity extension) |
| XR_ERROR_SPATIAL_ENTITY_ID_INVALID_EXT | The specified spatial entity id is invalid or an entity with that id does not exist in the environment. (Added by the XR_EXT_spatial_entity extension) |
| XR_ERROR_SPATIAL_BUFFER_ID_INVALID_EXT | The specified spatial buffer id is invalid or does not exist in the spatial snapshot being used to query for the buffer data. (Added by the XR_EXT_spatial_entity extension) |
| XR_ERROR_SPATIAL_COMPONENT_UNSUPPORTED_FOR_CAPABILITY_EXT | The specified spatial component is not supported by the runtime or the system for the given capability. (Added by the XR_EXT_spatial_entity extension) |
| XR_ERROR_SPATIAL_CAPABILITY_CONFIGURATION_INVALID_EXT | The specified spatial capability configuration is invalid. (Added by the XR_EXT_spatial_entity extension) |
| XR_ERROR_SPATIAL_COMPONENT_NOT_ENABLED_EXT | The specified spatial component is not enabled for the spatial context. (Added by the XR_EXT_spatial_entity extension) |
| XR_ERROR_SPATIAL_PERSISTENCE_SCOPE_UNSUPPORTED_EXT | The specified spatial persistence scope is not supported by the runtime or the system. (Added by the XR_EXT_spatial_persistence extension) |
| XR_ERROR_SPATIAL_PERSISTENCE_SCOPE_INCOMPATIBLE_EXT | The scope configured for the persistence context is incompatible for the current spatial entity. (Added by the XR_EXT_spatial_persistence_operations extension) |
| XR_ERROR_EXTENSION_DEPENDENCY_NOT_ENABLED_KHR | Alias for XR_ERROR_EXTENSION_DEPENDENCY_NOT_ENABLED. (Added by the XR_KHR_maintenance1 extension) |
| XR_ERROR_PERMISSION_INSUFFICIENT_KHR | Alias for XR_ERROR_PERMISSION_INSUFFICIENT. (Added by the XR_KHR_maintenance1 extension) |
2.10.1. Convenience Macros
// Provided by XR_VERSION_1_0
#define XR_SUCCEEDED(result) ((result) >= 0)
A convenience macro that can be used to test if a function succeeded.
Note that this evaluates to true for all success codes, including a
qualified success such as XR_FRAME_DISCARDED.
// Provided by XR_VERSION_1_0
#define XR_FAILED(result) ((result) < 0)
A convenience macro that can be used to test if a function has failed in some way. It evaluates to true for all failure codes.
// Provided by XR_VERSION_1_0
#define XR_UNQUALIFIED_SUCCESS(result) ((result) == 0)
A convenience macro that can be used to test if a function succeeded without any qualification.
The XR_UNQUALIFIED_SUCCESS macro evaluates to true exclusively when
the provided XrResult is equal to XR_SUCCESS (0).
2.10.2. Validation
Except as noted below or in individual API specifications, valid API usage may be required by the runtime. Runtimes may choose to validate some API usage and return an appropriate error code.
Application developers should use validation layers to catch and eliminate errors during development. Once validated, applications should not enable validation layers by default.
If a function returns a runtime error, then unless otherwise specified any
output parameters will have undefined contents, except that if the output
parameter is a structure with type and next fields, those fields will be
unmodified.
Any output structures chained from next will also have undefined contents,
except that the type and next will be unmodified.
Unless otherwise specified, errors do not affect existing OpenXR objects. Objects that have already been successfully created may still be used by the application.
XrResult code returns may be added to a given function in future versions of the specification. Runtimes must return only XrResult codes from the set documented for the given application API version.
Runtimes must ensure that incorrect usage by an application does not affect the integrity of the operating system, the API implementation, or other API client applications in the system, and does not allow one application to access data belonging to another application.
2.11. Handles
Objects which are allocated by the runtime on behalf of applications are
represented by handles.
Handles are opaque identifiers for objects whose lifetime is controlled by
applications via the create and destroy functions.
Example handle types include XrInstance, XrSession, and
XrSwapchain.
Handles which have not been destroyed are unique for a given application
process, but may be reused after being destroyed.
Unless otherwise specified, a successful handle creation function call
returns a new unique handle.
Unless otherwise specified, handles are implicitly destroyed when their
parent handle is destroyed.
Applications may destroy handles explicitly before the parent handle is
destroyed, and should do so if no longer needed, in order to conserve
resources.
Runtimes may detect XR_NULL_HANDLE and other invalid handles passed
where a valid handle is required and return XR_ERROR_HANDLE_INVALID.
However, runtimes are not required to do so unless otherwise specified, and
so use of any invalid handle may result in undefined behavior.
When a function has an optional handle parameter, XR_NULL_HANDLE
must be passed by the application if it does not pass a valid handle.
All functions that take a handle parameter may return
XR_ERROR_HANDLE_INVALID.
Handles form a hierarchy in which child handles fall under the validity and lifetime of parent handles. For example, to create an XrSwapchain handle, applications must call xrCreateSwapchain and pass an XrSession handle. Thus XrSwapchain is a child handle of XrSession.
2.12. Object Handle Types
The type of an object handle used in a function is usually determined by the specification of that function, as discussed in Valid Usage for Object Handles. However, some functions accept or return object handle parameters where the type of the object handle is unknown at execution time and is not specified in the description of the function itself. For these functions, the XrObjectType may be used to explicitly specify the type of a handle.
For example, an information-gathering or debugging mechanism implemented in a runtime extension or API layer extension may return a list of object handles that are generated by the mechanism’s operation. The same mechanism may also return a parallel list of object handle types that allow the recipient of this information to easily determine the types of the handles.
In general, anywhere an object handle of more than one type can occur, the object handle type may be provided to indicate its type.
// Provided by XR_VERSION_1_0
typedef enum XrObjectType {
    XR_OBJECT_TYPE_UNKNOWN = 0,
    XR_OBJECT_TYPE_INSTANCE = 1,
    XR_OBJECT_TYPE_SESSION = 2,
    XR_OBJECT_TYPE_SWAPCHAIN = 3,
    XR_OBJECT_TYPE_SPACE = 4,
    XR_OBJECT_TYPE_ACTION_SET = 5,
    XR_OBJECT_TYPE_ACTION = 6,
    // Provided by XR_EXT_debug_utils
    XR_OBJECT_TYPE_DEBUG_UTILS_MESSENGER_EXT = 1000019000,
    // Provided by XR_MSFT_spatial_anchor
    XR_OBJECT_TYPE_SPATIAL_ANCHOR_MSFT = 1000039000,
    // Provided by XR_MSFT_spatial_graph_bridge
    XR_OBJECT_TYPE_SPATIAL_GRAPH_NODE_BINDING_MSFT = 1000049000,
    // Provided by XR_EXT_hand_tracking
    XR_OBJECT_TYPE_HAND_TRACKER_EXT = 1000051000,
    // Provided by XR_FB_body_tracking
    XR_OBJECT_TYPE_BODY_TRACKER_FB = 1000076000,
    // Provided by XR_MSFT_scene_understanding
    XR_OBJECT_TYPE_SCENE_OBSERVER_MSFT = 1000097000,
    // Provided by XR_MSFT_scene_understanding
    XR_OBJECT_TYPE_SCENE_MSFT = 1000097001,
    // Provided by XR_HTC_facial_tracking
    XR_OBJECT_TYPE_FACIAL_TRACKER_HTC = 1000104000,
    // Provided by XR_FB_foveation
    XR_OBJECT_TYPE_FOVEATION_PROFILE_FB = 1000114000,
    // Provided by XR_FB_triangle_mesh
    XR_OBJECT_TYPE_TRIANGLE_MESH_FB = 1000117000,
    // Provided by XR_FB_passthrough
    XR_OBJECT_TYPE_PASSTHROUGH_FB = 1000118000,
    // Provided by XR_FB_passthrough
    XR_OBJECT_TYPE_PASSTHROUGH_LAYER_FB = 1000118002,
    // Provided by XR_FB_passthrough
    XR_OBJECT_TYPE_GEOMETRY_INSTANCE_FB = 1000118004,
    // Provided by XR_ML_marker_understanding
    XR_OBJECT_TYPE_MARKER_DETECTOR_ML = 1000138000,
    // Provided by XR_ML_localization_map
    XR_OBJECT_TYPE_EXPORTED_LOCALIZATION_MAP_ML = 1000139000,
    // Provided by XR_ML_spatial_anchors_storage
    XR_OBJECT_TYPE_SPATIAL_ANCHORS_STORAGE_ML = 1000141000,
    // Provided by XR_MSFT_spatial_anchor_persistence
    XR_OBJECT_TYPE_SPATIAL_ANCHOR_STORE_CONNECTION_MSFT = 1000142000,
    // Provided by XR_FB_face_tracking
    XR_OBJECT_TYPE_FACE_TRACKER_FB = 1000201000,
    // Provided by XR_FB_eye_tracking_social
    XR_OBJECT_TYPE_EYE_TRACKER_FB = 1000202000,
    // Provided by XR_META_virtual_keyboard
    XR_OBJECT_TYPE_VIRTUAL_KEYBOARD_META = 1000219000,
    // Provided by XR_FB_spatial_entity_user
    XR_OBJECT_TYPE_SPACE_USER_FB = 1000241000,
    // Provided by XR_META_passthrough_color_lut
    XR_OBJECT_TYPE_PASSTHROUGH_COLOR_LUT_META = 1000266000,
    // Provided by XR_FB_face_tracking2
    XR_OBJECT_TYPE_FACE_TRACKER2_FB = 1000287012,
    // Provided by XR_META_environment_depth
    XR_OBJECT_TYPE_ENVIRONMENT_DEPTH_PROVIDER_META = 1000291000,
    // Provided by XR_META_environment_depth
    XR_OBJECT_TYPE_ENVIRONMENT_DEPTH_SWAPCHAIN_META = 1000291001,
    // Provided by XR_EXT_render_model
    XR_OBJECT_TYPE_RENDER_MODEL_EXT = 1000300000,
    // Provided by XR_EXT_render_model
    XR_OBJECT_TYPE_RENDER_MODEL_ASSET_EXT = 1000300001,
    // Provided by XR_HTC_passthrough
    XR_OBJECT_TYPE_PASSTHROUGH_HTC = 1000317000,
    // Provided by XR_HTC_body_tracking
    XR_OBJECT_TYPE_BODY_TRACKER_HTC = 1000320000,
    // Provided by XR_BD_body_tracking
    XR_OBJECT_TYPE_BODY_TRACKER_BD = 1000385000,
    // Provided by XR_BD_facial_simulation
    XR_OBJECT_TYPE_FACE_TRACKER_BD = 1000386000,
    // Provided by XR_BD_spatial_sensing
    XR_OBJECT_TYPE_SENSE_DATA_PROVIDER_BD = 1000389000,
    // Provided by XR_BD_spatial_sensing
    XR_OBJECT_TYPE_SENSE_DATA_SNAPSHOT_BD = 1000389001,
    // Provided by XR_BD_spatial_sensing
    XR_OBJECT_TYPE_ANCHOR_BD = 1000389002,
    // Provided by XR_EXT_plane_detection
    XR_OBJECT_TYPE_PLANE_DETECTOR_EXT = 1000429000,
    // Provided by XR_ANDROID_trackables
    XR_OBJECT_TYPE_TRACKABLE_TRACKER_ANDROID = 1000455001,
    // Provided by XR_ANDROID_device_anchor_persistence
    XR_OBJECT_TYPE_DEVICE_ANCHOR_PERSISTENCE_ANDROID = 1000457000,
    // Provided by XR_ANDROID_face_tracking
    XR_OBJECT_TYPE_FACE_TRACKER_ANDROID = 1000458000,
    // Provided by XR_ML_world_mesh_detection
    XR_OBJECT_TYPE_WORLD_MESH_DETECTOR_ML = 1000474000,
    // Provided by XR_ML_facial_expression
    XR_OBJECT_TYPE_FACIAL_EXPRESSION_CLIENT_ML = 1000482000,
    // Provided by XR_EXT_spatial_entity
    XR_OBJECT_TYPE_SPATIAL_ENTITY_EXT = 1000740000,
    // Provided by XR_EXT_spatial_entity
    XR_OBJECT_TYPE_SPATIAL_CONTEXT_EXT = 1000740001,
    // Provided by XR_EXT_spatial_entity
    XR_OBJECT_TYPE_SPATIAL_SNAPSHOT_EXT = 1000740002,
    // Provided by XR_EXT_spatial_persistence
    XR_OBJECT_TYPE_SPATIAL_PERSISTENCE_CONTEXT_EXT = 1000763000,
    XR_OBJECT_TYPE_MAX_ENUM = 0x7FFFFFFF
} XrObjectType;
The XrObjectType enumeration defines values, each of which corresponds to a specific OpenXR handle type. These values can be used to associate debug information with a particular type of object through one or more extensions.
The following table defines XrObjectType and OpenXR Handle relationships in the core specification:
| XrObjectType | OpenXR Handle Type |
|---|---|
| XR_OBJECT_TYPE_UNKNOWN | Unknown/Undefined Handle |
| XR_OBJECT_TYPE_INSTANCE | XrInstance |
| XR_OBJECT_TYPE_SESSION | XrSession |
| XR_OBJECT_TYPE_SWAPCHAIN | XrSwapchain |
| XR_OBJECT_TYPE_SPACE | XrSpace |
| XR_OBJECT_TYPE_ACTION_SET | XrActionSet |
| XR_OBJECT_TYPE_ACTION | XrAction |
2.13. Buffer Size Parameters
Functions with buffer or array parameters passed as pointers, rather than declared with a static array size, follow different conventions depending on whether the buffer size is known to the application or variable per call.
2.13.1. Variable size buffer parameters
Functions with variable size output buffer parameters take on either
parameter form or structure form, as in one of the following examples, with
the element type being float in this case:
Parameter form:
XrResult xrFunction(uint32_t elementCapacityInput, uint32_t* elementCountOutput, float* elements);
Structure form:
XrResult xrFunction(XrBuffer* buffer);
struct XrBuffer {
    uint32_t elementCapacityInput;
    uint32_t elementCountOutput;
    float* elements;
};
A "two-call idiom" should be employed by the application, first calling
xrFunction (with a valid elementCountOutput pointer if in
parameter form), but passing NULL as elements and 0 as
elementCapacityInput, to retrieve the required buffer size as number
of elements (number of floats in this example).
After allocating a buffer at least as large as elementCountOutput (in
a structure) or the value pointed to by elementCountOutput (as
parameters), a pointer to the allocated buffer should be passed as
elements, along with the buffer’s length in
elementCapacityInput, to a second call to xrFunction to perform
the retrieval of the data.
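The two-call idiom described above can be sketched as follows. This is a self-contained, illustrative example: the hypothetical xrFunction and the stand-in XrResult values are defined locally for demonstration rather than taken from the real OpenXR headers.

```cpp
#include <cstdint>
#include <vector>

// Stand-in result codes for the sketch; real code uses <openxr/openxr.h>.
using XrResult = int;
constexpr XrResult XR_SUCCESS = 0;
constexpr XrResult XR_ERROR_SIZE_INSUFFICIENT = -11;

// Hypothetical function following the variable-size buffer convention in
// parameter form: it always reports the required element count, and only
// writes data when given a sufficiently large, non-zero capacity.
XrResult xrFunction(uint32_t elementCapacityInput,
                    uint32_t* elementCountOutput,
                    float* elements) {
    static const float data[] = {1.0f, 2.0f, 3.0f};
    *elementCountOutput = 3;  // required size, reported on every call
    if (elementCapacityInput == 0) return XR_SUCCESS;  // sizing call
    if (elementCapacityInput < 3) return XR_ERROR_SIZE_INSUFFICIENT;
    for (uint32_t i = 0; i < 3; ++i) elements[i] = data[i];
    return XR_SUCCESS;
}

std::vector<float> retrieveAll() {
    // First call: capacity 0 and a NULL buffer, to learn the required count.
    uint32_t count = 0;
    if (xrFunction(0, &count, nullptr) != XR_SUCCESS) return {};
    // Second call: pass a buffer at least `count` elements long.
    std::vector<float> buffer(count);
    if (xrFunction(count, &count, buffer.data()) != XR_SUCCESS) return {};
    buffer.resize(count);  // the runtime may fill fewer than capacity
    return buffer;
}
```

Note that the sizing call still passes a valid elementCountOutput pointer; only the buffer pointer may be NULL when the capacity is zero.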
If the element type of elements is a structure with type and
next fields, the application must set the type to the correct
value, and must set next to a valid value.
A valid value for next is generally either NULL or another
structure with related data, in which type and next are also
valid, recursively.
(See Valid Usage for Structure Pointer Chains for details.)
In the following discussion, "set elementCountOutput" should be
interpreted as "set the value pointed to by elementCountOutput" in
parameter form and "set the value of elementCountOutput" in struct
form.
These functions have the following behavior with respect to the array/buffer
and its size parameters:
Some functions have a given elementCapacityInput and
elementCountOutput associated with more than one element array (i.e.
parallel arrays).
In this case, the capacity/count and all its associated arrays will share a
common name prefix.
All of the preceding general requirements continue to apply.
Some functions fill multiple element arrays of varying sizes in one call.
For these functions, the elementCapacityInput,
elementCountOutput, and elements array parameters or fields are
repeated with different prefixes.
In this case, all of the preceding general requirements still apply, with
these additional requirements:
- If the application sets any elementCapacityInput parameter or field to 0, the runtime must treat all elementCapacityInput values as if they were set to 0.
- If all elementCapacityInput values are non-zero but any is insufficient to fit all elements of its corresponding array, the runtime must return XR_ERROR_SIZE_INSUFFICIENT. As in the case of the single array, the data in all arrays is undefined when the function returns any XR_ERROR_* result.
2.13.2. Known size buffer parameters
Functions with known size input and/or output buffer parameters, or buffer parameters of an application-chosen size, take a slightly different approach than variable size buffer parameters. Such functions also take on either parameter form or structure form, as in the following examples:
Parameter form:
XrResult xrFunction(uint32_t elementCount, float* elements);
Structure form:
XrResult xrFunction(XrBuffer* buffer);
struct XrBuffer {
    uint32_t elementCount;
    float* elements;
};
Unlike for variable size buffer parameters, only a single "count" is specified per buffer/array. Functions following this convention have the following behavior with respect to the array/buffer and its count parameters:
Some functions have a given elementCount associated with more than one
element array (i.e. parallel arrays).
In this case, the count and all its associated arrays will share a common
name prefix.
All of the preceding general requirements continue to apply.
Some functions operate on multiple element arrays of known sizes in one
call.
For these functions, the elementCount, and elements array
parameters or fields are repeated with different prefixes.
As in the case of the single array, the data in all arrays is undefined when
the function returns any XR_ERROR_* result.
All of the preceding general requirements continue to apply.
2.14. Time
Time is represented by a 64-bit signed integer representing nanoseconds
(XrTime).
The passage of time must be monotonic and not real-time (i.e. wall clock
time).
Thus the time is always increasing at a constant rate and is unaffected by
clock changes, time zones, daylight savings, etc.
2.14.1. XrTime
typedef int64_t XrTime;
XrTime is a base value type that represents time as a signed 64-bit
integer, representing the monotonically-increasing count of nanoseconds that
have elapsed since a runtime-chosen epoch.
XrTime always represents the time elapsed since that constant
epoch, rather than a duration or a time point relative to some moving epoch
such as vsync time, etc.
Durations are instead represented by XrDuration.
A single runtime must use the same epoch for all simultaneous applications. Time must be represented the same regardless of multiple processors or threads present in the system.
The period precision of time reported by the runtime is runtime-dependent, and may change. One nanosecond is the finest possible period precision. A runtime may, for example, report time progression with only microsecond-level granularity.
Time must not be assumed to correspond to a system clock time.
Unless specified otherwise, zero or a negative value is not a valid
XrTime, and related functions must return error
XR_ERROR_TIME_INVALID.
Applications must not initialize such XrTime fields to a zero
value.
Instead, applications should always assign XrTime fields to the
meaningful point in time they are choosing to reason about, such as a
frame’s predicted display time, or an action’s last change time.
The behavior of a runtime is undefined when time overflows beyond the
maximum positive value that can be represented by an XrTime.
Runtimes should choose an epoch that minimizes the chance of overflow.
Runtimes should also choose an epoch that minimizes the chance of underflow
below 0 for applications performing a reasonable amount of historical pose
lookback.
For example, if the runtime chooses an epoch relative to its startup time,
it should push the epoch into the past by enough time to avoid applications
performing reasonable pose lookback from reaching a negative XrTime
value.
An application cannot assume that the system’s clock and the runtime’s clock
will maintain a constant relationship across frames and should avoid
storing such an offset, as this may cause time drift.
Applications should instead always use time interop functions to convert a
relevant time point across the system’s clock and the runtime’s clock using
extensions, for example,
XR_KHR_win32_convert_performance_counter_time or
XR_KHR_convert_timespec_time.
2.15. Duration
Duration refers to an elapsed period of time, as opposed to an absolute timepoint.
2.15.1. XrDuration
typedef int64_t XrDuration;
The difference between two timepoints is a duration, and thus the difference
between two XrTime values is an XrDuration value.
XrDuration is a base value type that represents duration as a
signed 64-bit integer, representing the signed number of nanoseconds between
two timepoints.
Functions that refer to durations use XrDuration as opposed to
XrTime.
When an XrDuration is used as a timeout parameter, the constants
XR_NO_DURATION and XR_INFINITE_DURATION have special meaning.
A timeout with a duration that refers to the past (that is, a negative
duration) must be interpreted as a timeout of XR_NO_DURATION.
The interpretation of zero and negative durations in non-timeout uses is specified along with each such use.
// Provided by XR_VERSION_1_0
#define XR_NO_DURATION 0
For the case of timeout durations, XR_NO_DURATION can be used to indicate that the timeout is immediate.
// Provided by XR_VERSION_1_0
#define XR_INFINITE_DURATION 0x7fffffffffffffffLL
XR_INFINITE_DURATION is a special value that can be used to indicate that the timeout never occurs.
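The timeout rules above can be sketched as a tiny helper; the typedef and constants here are local stand-ins mirroring the specification values, not includes of the real header.

```cpp
#include <cstdint>

typedef int64_t XrDuration;
#define XR_NO_DURATION 0
#define XR_INFINITE_DURATION 0x7fffffffffffffffLL

// Apply the timeout interpretation rule: a duration referring to the past
// (a negative value) must be treated as XR_NO_DURATION, i.e. an immediate
// timeout. Zero and positive values pass through unchanged.
XrDuration normalizeTimeout(XrDuration timeout) {
    return timeout < 0 ? XR_NO_DURATION : timeout;
}
```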
2.16. Prediction Time Limits
Some functions involve prediction or history retrieval for a supplied
XrTime timepoint.
For example, xrLocateViews accepts a display time for which to return
the resulting data.
Timepoints provided by applications may refer to time in the past or the
future.
Times in the past may be interpolated historical data.
Runtimes have different practical limits with respect to the accuracy
possible at varying past (historical or backwards prediction) and future
timepoints.
The runtime must treat as valid any future time requested by an
application, except when disqualified by size limitations of the underlying
types, though predictions may become less accurate as they get farther into
the future.
With respect to backward prediction, the application can pass a prediction
time equivalent to the timestamp of the most recently received pose plus as
much as 50 milliseconds in the past to retrieve accurate historical
data.
The runtime must retain and return at least 50 milliseconds of historical
data, interpolating as required, preceding the most recently received pose.
Requested times predating this time window, or requested times predating the
earliest received pose, may result in best-effort data whose accuracy is
reduced or unspecified.
2.17. Colors
The XrColor3f structure is defined as:
// Provided by XR_VERSION_1_1
typedef struct XrColor3f {
    float r;
    float g;
    float b;
} XrColor3f;
Unless otherwise specified, colors are encoded as linear (not with sRGB nor other gamma compression) values with individual components being in the range of 0.0 through 1.0.
The XrColor4f structure is defined as:
// Provided by XR_VERSION_1_0
typedef struct XrColor4f {
    float r;
    float g;
    float b;
    float a;
} XrColor4f;
Unless otherwise specified, colors are encoded as linear (not with sRGB nor other gamma compression) values with individual components being in the range of 0.0 through 1.0, and without the RGB components being premultiplied by the alpha component.
If color encoding is specified as being premultiplied by the alpha component, the RGB components are set to zero if the alpha component is zero.
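The relationship between straight-alpha and premultiplied-alpha encodings can be illustrated with a small conversion helper; the XrColor4f stand-in below is defined locally for the sketch.

```cpp
// Local stand-in for the spec's XrColor4f; illustrative only.
struct XrColor4f { float r, g, b, a; };

// Convert a straight-alpha color to premultiplied-alpha encoding by
// scaling the RGB components by alpha. Note that when a is zero the RGB
// components become zero, which is exactly the property the specification
// requires of premultiplied colors.
XrColor4f premultiply(const XrColor4f& c) {
    return {c.r * c.a, c.g * c.a, c.b * c.a, c.a};
}
```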
2.18. Coordinate System
This API uses a Cartesian right-handed coordinate system.
The conventions for mapping coordinate axes of any particular space to meaningful directions depend on and are documented with the description of the space.
The API uses 2D, 3D, and 4D floating-point vectors to describe points and directions in a space.
A two-dimensional vector is defined by the XrVector2f structure:
typedef struct XrVector2f {
    float x;
    float y;
} XrVector2f;
If used to represent physical distances (rather than e.g. normalized direction) and not otherwise specified, values must be in meters.
A three-dimensional vector is defined by the XrVector3f structure:
typedef struct XrVector3f {
    float x;
    float y;
    float z;
} XrVector3f;
If used to represent physical distances (rather than e.g. velocity or angular velocity) and not otherwise specified, values must be in meters.
A four-dimensional or homogeneous vector is defined by the XrVector4f structure:
// Provided by XR_VERSION_1_0
typedef struct XrVector4f {
    float x;
    float y;
    float z;
    float w;
} XrVector4f;
If used to represent physical distances, x, y, and z
values must be in meters.
Rotation is represented by a unit quaternion defined by the XrQuaternionf structure:
typedef struct XrQuaternionf {
    float x;
    float y;
    float z;
    float w;
} XrQuaternionf;
A pose is defined by the XrPosef structure:
typedef struct XrPosef {
    XrQuaternionf orientation;
    XrVector3f position;
} XrPosef;
A construct representing a position and orientation within a space, with
position expressed in meters, and orientation represented as a unit
quaternion.
When using XrPosef the rotation described by orientation is
always applied before the translation described by position.
A runtime must return XR_ERROR_POSE_INVALID if the orientation
norm deviates by more than 1% from unit length.
2.19. Common Data Types
Some OpenXR data types are used in multiple structures.
Those include the XrVector*f family of types, the spatial types
specified above, and the following categories of structures:
-
offset
-
extents
-
rectangle
-
field of view
Offsets are used to describe the direction and distance of an offset in two dimensions.
A floating-point offset is defined by the structure:
// Provided by XR_VERSION_1_0
typedef struct XrOffset2Df {
    float x;
    float y;
} XrOffset2Df;
This structure is used for component values that may be real numbers, represented with single-precision floating point. For representing offsets in discrete values, such as texels, the integer variant XrOffset2Di is used instead.
If used to represent physical distances, values must be in meters.
An integer offset is defined by the structure:
typedef struct XrOffset2Di {
    int32_t x;
    int32_t y;
} XrOffset2Di;
This variant is for representing discrete values such as texels. For representing physical distances, the floating-point variant XrOffset2Df is used instead.
Extents are used to describe the size of a rectangular region in two or three dimensions.
A two-dimensional floating-point extent is defined by the structure:
// Provided by XR_VERSION_1_0
typedef struct XrExtent2Df {
    float width;
    float height;
} XrExtent2Df;
This structure is used for component values that may be real numbers, represented with single-precision floating point. For representing extents in discrete values, such as texels, the integer variant XrExtent2Di is used instead.
If used to represent physical distances, values must be in meters.
The width and height values must be non-negative.
The XrExtent3Df structure is defined as:
// Provided by XR_VERSION_1_1
typedef struct XrExtent3Df {
    float width;
    float height;
    float depth;
} XrExtent3Df;
This structure is used for component values that may be real numbers, represented with single-precision floating point.
If used to represent physical distances, values must be in meters. The width, height, and depth values must be non-negative.
A two-dimensional integer extent is defined by the structure:
typedef struct XrExtent2Di {
    int32_t width;
    int32_t height;
} XrExtent2Di;
This variant is for representing discrete values such as texels. For representing physical distances, the floating-point variant XrExtent2Df is used instead.
The width and height values must be non-negative.
Rectangles are used to describe a specific rectangular region in two dimensions. Rectangles must include both an offset and an extent defined in the same units. For instance, if a rectangle is in meters, both offset and extent must be in meters.
A rectangle with floating-point values is defined by the structure:
// Provided by XR_VERSION_1_0
typedef struct XrRect2Df {
    XrOffset2Df offset;
    XrExtent2Df extent;
} XrRect2Df;
This structure is used for component values that may be real numbers, represented with single-precision floating point.
The offset is the position of the rectangle corner with minimum value
coordinates.
The other three corners are computed by adding the
XrExtent2Df::width to the x offset,
XrExtent2Df::height to the y offset, or both.
A rectangle with integer values is defined by the structure:
typedef struct XrRect2Di {
    XrOffset2Di offset;
    XrExtent2Di extent;
} XrRect2Di;
This variant is for representing discrete values such as texels. For representing physical distances, the floating-point variant XrRect2Df is used instead.
The offset is the position of the rectangle corner with minimum value
coordinates.
The other three corners are computed by adding the
XrExtent2Di::width to the x offset,
XrExtent2Di::height to the y offset, or both.
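The corner computation described for both rectangle variants can be shown directly; the structures below are local stand-ins for the spec types.

```cpp
#include <cstdint>

// Local stand-ins for the spec structures; illustrative only.
struct XrOffset2Di { int32_t x, y; };
struct XrExtent2Di { int32_t width, height; };
struct XrRect2Di { XrOffset2Di offset; XrExtent2Di extent; };

// offset names the minimum-value corner; the diagonally opposite corner is
// found by adding the extent to the offset in each dimension.
XrOffset2Di maxCorner(const XrRect2Di& r) {
    return {r.offset.x + r.extent.width, r.offset.y + r.extent.height};
}
```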
An XrSpheref structure describes the center and radius of a bounding sphere.
// Provided by XR_VERSION_1_1
typedef struct XrSpheref {
    XrPosef center;
    float radius;
} XrSpheref;
The runtime must return XR_ERROR_VALIDATION_FAILURE if radius
is not a finite positive value.
An XrBoxf structure describes the pose and extents of an oriented box.
// Provided by XR_VERSION_1_1
typedef struct XrBoxf {
    XrPosef center;
    XrExtent3Df extents;
} XrBoxf;
The runtime must return XR_ERROR_VALIDATION_FAILURE if width, height
or depth values are negative.
An XrFrustumf structure describes the pose, field of view, and far distance of a frustum.
// Provided by XR_VERSION_1_1
typedef struct XrFrustumf {
    XrPosef pose;
    XrFovf fov;
    float nearZ;
    float farZ;
} XrFrustumf;
The runtime must return XR_ERROR_VALIDATION_FAILURE if farZ is
less than or equal to zero.
The runtime must return XR_ERROR_VALIDATION_FAILURE if nearZ is
less than zero.
See XrFovf for validity requirements on fov.
The XrUuid structure is a 128-bit Universally Unique Identifier and is defined as:
// Provided by XR_VERSION_1_1
typedef struct XrUuid {
    uint8_t data[XR_UUID_SIZE];
} XrUuid;
The structure is composed of 16 octets, with the size and order of the fields defined in RFC 4122 section 4.1.2.
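Because the octets are stored in the field order RFC 4122 defines, printing the bytes in array order yields the canonical 8-4-4-4-12 string form. The types below are local stand-ins for the sketch.

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Local stand-ins for the spec definitions; illustrative only.
#define XR_UUID_SIZE 16
struct XrUuid { uint8_t data[XR_UUID_SIZE]; };

// Format the 16 octets in the canonical 8-4-4-4-12 hexadecimal layout.
std::string uuidToString(const XrUuid& u) {
    char buf[37];  // 36 characters plus the terminating NUL
    std::snprintf(buf, sizeof(buf),
        "%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-%02x%02x%02x%02x%02x%02x",
        u.data[0], u.data[1], u.data[2], u.data[3],
        u.data[4], u.data[5], u.data[6], u.data[7],
        u.data[8], u.data[9], u.data[10], u.data[11],
        u.data[12], u.data[13], u.data[14], u.data[15]);
    return buf;
}
```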
2.20. Angles
Where a value is provided as a function parameter or as a structure member and will be interpreted as an angle, the value is defined to be in radians.
Field of view (FoV) is defined by the structure:
typedef struct XrFovf {
    float angleLeft;
    float angleRight;
    float angleUp;
    float angleDown;
} XrFovf;
Angles to the right of the center and upwards from the center are positive,
and angles to the left of the center and down from the center are negative.
The total horizontal field of view is angleRight minus
angleLeft, and the total vertical field of view is angleUp minus
angleDown.
For a symmetric FoV, angleRight and angleUp will have positive
values, angleLeft will be -angleRight, and angleDown will
be -angleUp.
The angles must be specified in radians, and must be between -π/2 and π/2 exclusive.
When angleLeft > angleRight, the content of the view must be
flipped horizontally.
When angleDown > angleUp, the content of the view must be
flipped vertically.
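The total field-of-view relations above translate directly into the size of the frustum's cross-section at a given distance; a sketch follows, with XrFovf defined locally as a stand-in.

```cpp
#include <cmath>

// Local stand-in for the spec's XrFovf; illustrative only.
struct XrFovf { float angleLeft, angleRight, angleUp, angleDown; };

// Width of the view frustum cross-section at distance d. Per the sign
// conventions above, angleLeft is negative for a symmetric FoV, so the
// subtraction yields the full horizontal extent.
float fovWidthAt(const XrFovf& fov, float d) {
    return d * (std::tan(fov.angleRight) - std::tan(fov.angleLeft));
}

// Height of the cross-section at distance d; angleDown is negative for a
// symmetric FoV.
float fovHeightAt(const XrFovf& fov, float d) {
    return d * (std::tan(fov.angleUp) - std::tan(fov.angleDown));
}
```

For a symmetric 90° horizontal FoV (angleRight = π/4, angleLeft = -π/4), the width at one meter is two meters.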
2.21. Boolean Values
typedef uint32_t XrBool32;
Boolean values used by OpenXR are of type XrBool32 and are 32-bits
wide as suggested by the name.
The only valid values are the following:
#define XR_TRUE 1
#define XR_FALSE 0
2.22. Events
Events are messages sent from the runtime to the application.
2.22.1. Event Polling
Events are placed in a queue within the runtime. The application must read from the queue with regularity. Events are read from the queue one at a time via xrPollEvent. Every type of event is identified by an individual structure type, with each such structure beginning with an XrEventDataBaseHeader.
XrInstance instance; // previously initialized
// Initialize an event buffer to hold the output.
XrEventDataBuffer event = {XR_TYPE_EVENT_DATA_BUFFER};
XrResult result = xrPollEvent(instance, &event);
if (result == XR_SUCCESS) {
    switch (event.type) {
        case XR_TYPE_EVENT_DATA_SESSION_STATE_CHANGED: {
            const XrEventDataSessionStateChanged& session_state_changed_event =
                *reinterpret_cast<XrEventDataSessionStateChanged*>(&event);
            // ...
            break;
        }
        case XR_TYPE_EVENT_DATA_INSTANCE_LOSS_PENDING: {
            const XrEventDataInstanceLossPending& instance_loss_pending_event =
                *reinterpret_cast<XrEventDataInstanceLossPending*>(&event);
            // ...
            break;
        }
        default: {
            break;
        }
    }
}
xrPollEvent
The xrPollEvent function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrPollEvent(
    XrInstance instance,
    XrEventDataBuffer* eventData);
xrPollEvent polls for the next event and returns an event if one is
available.
xrPollEvent returns immediately regardless of whether an event was
available.
The event (if present) is unilaterally removed from the queue if a valid
XrInstance is provided.
On return, the eventData parameter is filled with the event’s data and
the type field is changed to the event’s type.
Runtimes may create valid next chains depending on enabled extensions,
but they must guarantee that any such chains point only to objects which
fit completely within the original XrEventDataBuffer pointed to by
eventData.
The runtime must discard queued events which contain destroyed or otherwise invalid handles. The runtime must not return events containing handles that have been destroyed or are otherwise invalid at the time of the call to xrPollEvent.
| Event | Description |
|---|---|
| XrEventDataEventsLost | event queue has overflowed and some events were lost |
| XrEventDataInstanceLossPending | application is about to lose the instance |
| XrEventDataInteractionProfileChanged | current interaction profile for one or more top level user paths has changed |
| XrEventDataReferenceSpaceChangePending | runtime will begin operating with updated definitions or bounds for a reference space |
| XrEventDataSessionStateChanged | the application’s session has changed lifecycle state |
The XrEventDataBaseHeader structure is defined as:
// Provided by XR_VERSION_1_0
typedef struct XrEventDataBaseHeader {
    XrStructureType type;
    const void* next;
} XrEventDataBaseHeader;
The XrEventDataBaseHeader is a generic structure used to identify the common event data elements.
Upon receipt, the XrEventDataBaseHeader pointer should be type-cast
to a pointer of the appropriate event data type based on the type
parameter.
typedef struct XrEventDataBuffer {
    XrStructureType type;
    const void* next;
    uint8_t varying[4000];
} XrEventDataBuffer;
The XrEventDataBuffer is a structure passed to xrPollEvent large enough to contain any returned event data element. The maximum size is specified by XR_MAX_EVENT_DATA_SIZE.
An application can set (or reset) only the type member and clear the
next member of an XrEventDataBuffer before passing it as an
input to xrPollEvent.
The runtime must ignore the contents of the varying field and
overwrite it without reading it.
A pointer to an XrEventDataBuffer may be type-cast to an
XrEventDataBaseHeader pointer, or a pointer to any other appropriate
event data based on the type parameter.
// Provided by XR_VERSION_1_0
#define XR_MAX_EVENT_DATA_SIZE sizeof(XrEventDataBuffer)
XR_MAX_EVENT_DATA_SIZE is the size of XrEventDataBuffer,
including the size of the XrEventDataBuffer::type and
XrEventDataBuffer::next members.
XrEventDataEventsLost
The XrEventDataEventsLost structure is defined as:
// Provided by XR_VERSION_1_0
typedef struct XrEventDataEventsLost {
    XrStructureType type;
    const void* next;
    uint32_t lostEventCount;
} XrEventDataEventsLost;
Receiving the XrEventDataEventsLost event structure indicates that the event queue overflowed and some events were removed at the position within the queue at which this event was found.
Other event structures are defined in later chapters in the context where their definition is most relevant.
2.23. System resource lifetime
The creator of an underlying system resource is responsible for ensuring the resource’s lifetime matches the lifetime of the associated OpenXR handle.
Resources passed as inputs from the application to the runtime when creating
an OpenXR handle should not be freed while that handle is valid.
A runtime must not free resources passed as inputs or decrease their
reference counts (if applicable) from the initial value.
For example, the graphics device handle (or pointer) passed in to
xrCreateSession in XrGraphicsBinding* structure should be kept
alive when the corresponding XrSession handle is valid, and should be
freed by the application after the XrSession handle is destroyed.
Resources created by the runtime should not be freed by the application, and
the application should maintain the same reference count (if applicable) at
the destruction of the OpenXR handle as it had at its creation.
For example, the ID3D*Texture2D objects in the XrSwapchainImageD3D* are
created by the runtime and associated with the lifetime of the
XrSwapchain handle.
The application should not keep additional reference counts on any
ID3D*Texture2D objects past the lifetime of the XrSwapchain handle,
nor apply extra reference count decrements after destroying the
XrSwapchain handle.
3. API Initialization
Before using an OpenXR runtime, an application must initialize it by creating an XrInstance object. The following functions are useful for gathering information about the API layers and extensions installed on the system and creating the instance.
xrEnumerateApiLayerProperties and xrEnumerateInstanceExtensionProperties can be called before calling xrCreateInstance.
3.1. Exported Functions
A dynamically linked library (.dll or .so) that implements the API
loader must export all core OpenXR API functions.
The application can gain access to extension functions by obtaining
pointers to these functions through the use of xrGetInstanceProcAddr.
3.2. Function Pointers
Function pointers for all OpenXR functions can be obtained with the function xrGetInstanceProcAddr.
// Provided by XR_VERSION_1_0
XrResult xrGetInstanceProcAddr(
    XrInstance instance,
    const char* name,
    PFN_xrVoidFunction* function);
xrGetInstanceProcAddr itself is obtained in a platform- and loader- specific manner. Typically, the loader library will export this function as a function symbol, so applications can link against the loader library, or load it dynamically and look up the symbol using platform-specific APIs. Loaders must export function symbols for all core OpenXR functions. Because of this, applications that use only the core OpenXR functions have no need to use xrGetInstanceProcAddr.
Because an application can call xrGetInstanceProcAddr before creating
an instance, xrGetInstanceProcAddr must return a valid function
pointer when the instance parameter is XR_NULL_HANDLE and the
name parameter is one of the following strings:

- "xrEnumerateInstanceExtensionProperties"
- "xrEnumerateApiLayerProperties"
- "xrCreateInstance"
xrGetInstanceProcAddr must return XR_ERROR_HANDLE_INVALID if
name is not one of the above strings and instance is
XR_NULL_HANDLE.
xrGetInstanceProcAddr may return XR_ERROR_HANDLE_INVALID if
name is not one of the above strings and instance is invalid but
not XR_NULL_HANDLE.
xrGetInstanceProcAddr must return XR_ERROR_FUNCTION_UNSUPPORTED
if instance is a valid instance and the string specified in name
is not the name of an OpenXR core or enabled extension function.
If name is the name of an extension function, then the result returned
by xrGetInstanceProcAddr will depend upon how the instance was
created.
If instance was created with the related extension’s name appearing in
the XrInstanceCreateInfo::enabledExtensionNames array, then
xrGetInstanceProcAddr returns a valid function pointer.
If the related extension’s name did not appear in the
XrInstanceCreateInfo::enabledExtensionNames array during the
creation of instance, then xrGetInstanceProcAddr returns
XR_ERROR_FUNCTION_UNSUPPORTED.
Because of this, function pointers returned by xrGetInstanceProcAddr
using one XrInstance may not be valid when used with objects related
to a different XrInstance.
The returned function pointer is of type PFN_xrVoidFunction, and must be cast by the application to the type of the function being queried.
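The lookup-and-cast pattern can be sketched with a mock of the lookup function, since exercising the real loader requires an installed runtime. Everything below (the mock, xrExampleFunction, and the stand-in result codes) is hypothetical and defined locally; real code would include <openxr/openxr.h> and call xrGetInstanceProcAddr with a real instance and function name.

```cpp
#include <cstring>

// Minimal local stand-ins for the loader types; illustrative only.
typedef void (*PFN_xrVoidFunction)(void);
typedef int XrResult;
typedef struct XrInstance_T* XrInstance;
constexpr XrResult XR_SUCCESS = 0;
constexpr XrResult XR_ERROR_FUNCTION_UNSUPPORTED = -7;

// The queried function's own pointer type, mirroring how the real headers
// declare a PFN_* typedef per function.
typedef int (*PFN_xrExampleFunction)(int);

int exampleFunction(int v) { return v + 1; }

// Mock lookup: writes the generic pointer, which the caller must cast back
// to the queried function's type before calling through it.
XrResult mockGetInstanceProcAddr(XrInstance /*instance*/, const char* name,
                                 PFN_xrVoidFunction* function) {
    if (std::strcmp(name, "xrExampleFunction") == 0) {
        *function = reinterpret_cast<PFN_xrVoidFunction>(exampleFunction);
        return XR_SUCCESS;
    }
    *function = nullptr;
    return XR_ERROR_FUNCTION_UNSUPPORTED;
}
```

Usage mirrors real code: query into a PFN_xrVoidFunction, check the result, then reinterpret_cast the generic pointer to the concrete PFN_* type before invoking it.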
The table below defines the various use cases for xrGetInstanceProcAddr and return value (“fp” is “function pointer”) for each case.
| instance parameter | name parameter | return value |
|---|---|---|
| * | NULL | undefined |
| invalid instance | * | undefined |
| NULL | xrEnumerateInstanceExtensionProperties | fp |
| NULL | xrEnumerateApiLayerProperties | fp |
| NULL | xrCreateInstance | fp |
| NULL | * (any name not covered above) | NULL |
| instance | core OpenXR function | fp1 |
| instance | enabled extension function for instance | fp1 |
| instance | * (any name not covered above) | NULL |

1. The returned function pointer must only be called with a handle (the first parameter) that is instance or a child of instance.
typedef void (XRAPI_PTR *PFN_xrVoidFunction)(void);
PFN_xrVoidFunction is a generic function pointer type returned by queries, specifically those to xrGetInstanceProcAddr.
typedef XrResult (XRAPI_PTR *PFN_xrGetInstanceProcAddr)(XrInstance instance, const char* name, PFN_xrVoidFunction* function);
PFN_xrGetInstanceProcAddr is a function pointer type for xrGetInstanceProcAddr.
typedef struct XrApiLayerCreateInfo XrApiLayerCreateInfo;
typedef XrResult (XRAPI_PTR *PFN_xrCreateApiLayerInstance)(
    const XrInstanceCreateInfo* info,
    const XrApiLayerCreateInfo* apiLayerInfo,
    XrInstance* instance);
PFN_xrCreateApiLayerInstance is a function pointer type for xrCreateApiLayerInstance.
Note: This function pointer type is only used by an OpenXR loader library, and never by an application.
3.3. Runtime Interface Negotiation
In order to negotiate the runtime interface version with the loader, the runtime must implement the xrNegotiateLoaderRuntimeInterface function.
Note
The API described in this section is solely intended for use between an OpenXR loader and a runtime (and/or an API layer, where noted). Applications use the appropriate loader library for their platform to load the active runtime and configured API layers, rather than making these calls directly. This section is included in the specification to ensure consistency between runtimes in their interactions with the loader. Be advised that as this is not application-facing API, some of the typical OpenXR API conventions are not followed in this section.
The xrNegotiateLoaderRuntimeInterface function is defined as:
// Provided by XR_LOADER_VERSION_1_0
XrResult xrNegotiateLoaderRuntimeInterface(
const XrNegotiateLoaderInfo* loaderInfo,
XrNegotiateRuntimeRequest* runtimeRequest);
xrNegotiateLoaderRuntimeInterface should be directly exported by a
runtime so that using e.g. GetProcAddress on Windows or dlsym on POSIX
platforms returns a valid function pointer to it.
The runtime must return XR_ERROR_INITIALIZATION_FAILED if any of the
following conditions on loaderInfo are true:
- XrNegotiateLoaderInfo::structType is not XR_LOADER_INTERFACE_STRUCT_LOADER_INFO
- XrNegotiateLoaderInfo::structVersion is not XR_LOADER_INFO_STRUCT_VERSION
- XrNegotiateLoaderInfo::structSize is not sizeof(XrNegotiateLoaderInfo)
The runtime must also return XR_ERROR_INITIALIZATION_FAILED if any of
the following conditions on runtimeRequest are true:
- XrNegotiateRuntimeRequest::structType is not XR_LOADER_INTERFACE_STRUCT_RUNTIME_REQUEST
- XrNegotiateRuntimeRequest::structVersion is not XR_RUNTIME_INFO_STRUCT_VERSION
- XrNegotiateRuntimeRequest::structSize is not sizeof(XrNegotiateRuntimeRequest)
The runtime must determine if it supports the loader’s request. The runtime does not support the loader’s request if either of the following is true:
- The runtime does not support any of the interface versions supported by the loader, as specified by the range XrNegotiateLoaderInfo::minInterfaceVersion through XrNegotiateLoaderInfo::maxInterfaceVersion inclusive.
- The runtime does not support any of the API versions supported by the loader, ignoring "patch" version components, as specified by the range XrNegotiateLoaderInfo::minApiVersion through XrNegotiateLoaderInfo::maxApiVersion inclusive.
The runtime must return XR_ERROR_INITIALIZATION_FAILED if it does not
support the loader’s request.
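The "ignoring patch version components" rule above can be expressed with the XrVersion packing helpers. The macro definitions below are local copies of the ones in openxr.h; api_version_in_range is a hypothetical helper name invented for this sketch.

```c
#include <stdint.h>
#include <stdbool.h>

/* Local copies of the XrVersion helpers from openxr.h. */
typedef uint64_t XrVersion;
#define XR_MAKE_VERSION(major, minor, patch) \
    ((((major) & 0xffffULL) << 48) | (((minor) & 0xffffULL) << 32) | ((patch) & 0xffffffffULL))
#define XR_VERSION_MAJOR(version) (uint16_t)(((uint64_t)(version) >> 48) & 0xffffULL)
#define XR_VERSION_MINOR(version) (uint16_t)(((uint64_t)(version) >> 32) & 0xffffULL)

/* Truncate a version to major.minor only, dropping the patch component. */
static XrVersion major_minor(XrVersion v)
{
    return XR_MAKE_VERSION(XR_VERSION_MAJOR(v), XR_VERSION_MINOR(v), 0);
}

/* True if `candidate` falls within [minApiVersion, maxApiVersion]
 * once patch components are ignored on all three versions. */
bool api_version_in_range(XrVersion candidate, XrVersion minApi, XrVersion maxApi)
{
    XrVersion c = major_minor(candidate);
    return c >= major_minor(minApi) && c <= major_minor(maxApi);
}
```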
If the function succeeds, the runtime must set the
XrNegotiateRuntimeRequest::runtimeInterfaceVersion with the
runtime interface version it desires to support.
The XrNegotiateRuntimeRequest::runtimeInterfaceVersion set must
be in the range XrNegotiateLoaderInfo::minInterfaceVersion
through XrNegotiateLoaderInfo::maxInterfaceVersion inclusive.
If the function succeeds, the runtime must set the
XrNegotiateRuntimeRequest::runtimeApiVersion with the API
version of OpenXR it will execute under.
The XrNegotiateRuntimeRequest::runtimeApiVersion set must be in
the range XrNegotiateLoaderInfo::minApiVersion through
XrNegotiateLoaderInfo::maxApiVersion inclusive.
If the function succeeds, the runtime must set the
XrNegotiateRuntimeRequest::getInstanceProcAddr with a valid
function pointer for the loader to use to query function pointers to the
remaining OpenXR functions supported by the runtime.
If the function succeeds, the runtime must return XR_SUCCESS.
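Putting the rules of this section together, a runtime's negotiation entry point might validate its inputs roughly as follows. This sketch uses simplified local stand-ins (the Demo* types and values) rather than the real loader-interface headers, and it omits the API-version range check and the getInstanceProcAddr assignment for brevity.

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for the loader-interface types described above. */
typedef int XrResult;
#define XR_SUCCESS 0
#define XR_ERROR_INITIALIZATION_FAILED -6

typedef enum DemoInterfaceStructs {
    DEMO_STRUCT_LOADER_INFO = 1,
    DEMO_STRUCT_RUNTIME_REQUEST = 3
} DemoInterfaceStructs;

#define DEMO_LOADER_INFO_STRUCT_VERSION 1
#define DEMO_RUNTIME_INFO_STRUCT_VERSION 1
#define DEMO_RUNTIME_SUPPORTED_INTERFACE_VERSION 1

typedef struct DemoNegotiateLoaderInfo {
    DemoInterfaceStructs structType;
    uint32_t structVersion;
    size_t structSize;
    uint32_t minInterfaceVersion;
    uint32_t maxInterfaceVersion;
} DemoNegotiateLoaderInfo;

typedef struct DemoNegotiateRuntimeRequest {
    DemoInterfaceStructs structType;
    uint32_t structVersion;
    size_t structSize;
    uint32_t runtimeInterfaceVersion;
} DemoNegotiateRuntimeRequest;

/* Mirrors the rules above: reject malformed structs, then reject
 * requests whose interface range the runtime cannot satisfy. */
XrResult demoNegotiate(const DemoNegotiateLoaderInfo *loaderInfo,
                       DemoNegotiateRuntimeRequest *runtimeRequest)
{
    if (loaderInfo == NULL || runtimeRequest == NULL ||
        loaderInfo->structType != DEMO_STRUCT_LOADER_INFO ||
        loaderInfo->structVersion != DEMO_LOADER_INFO_STRUCT_VERSION ||
        loaderInfo->structSize != sizeof(DemoNegotiateLoaderInfo))
        return XR_ERROR_INITIALIZATION_FAILED;

    if (runtimeRequest->structType != DEMO_STRUCT_RUNTIME_REQUEST ||
        runtimeRequest->structVersion != DEMO_RUNTIME_INFO_STRUCT_VERSION ||
        runtimeRequest->structSize != sizeof(DemoNegotiateRuntimeRequest))
        return XR_ERROR_INITIALIZATION_FAILED;

    if (DEMO_RUNTIME_SUPPORTED_INTERFACE_VERSION < loaderInfo->minInterfaceVersion ||
        DEMO_RUNTIME_SUPPORTED_INTERFACE_VERSION > loaderInfo->maxInterfaceVersion)
        return XR_ERROR_INITIALIZATION_FAILED;

    runtimeRequest->runtimeInterfaceVersion = DEMO_RUNTIME_SUPPORTED_INTERFACE_VERSION;
    return XR_SUCCESS;
}

/* Self-check: a well-formed request succeeds, a malformed one fails. */
int demo_negotiation_check(void)
{
    DemoNegotiateLoaderInfo li = { DEMO_STRUCT_LOADER_INFO,
        DEMO_LOADER_INFO_STRUCT_VERSION, sizeof(DemoNegotiateLoaderInfo), 1, 1 };
    DemoNegotiateRuntimeRequest rr = { DEMO_STRUCT_RUNTIME_REQUEST,
        DEMO_RUNTIME_INFO_STRUCT_VERSION, sizeof(DemoNegotiateRuntimeRequest), 0 };
    if (demoNegotiate(&li, &rr) != XR_SUCCESS) return 1;
    li.structVersion = 999;
    if (demoNegotiate(&li, &rr) != XR_ERROR_INITIALIZATION_FAILED) return 2;
    return 0;
}
```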
The XrNegotiateLoaderInfo structure is used to pass information about the loader to a runtime or an API layer.
The XrNegotiateLoaderInfo structure is defined as:
typedef struct XrNegotiateLoaderInfo {
XrLoaderInterfaceStructs structType;
uint32_t structVersion;
size_t structSize;
uint32_t minInterfaceVersion;
uint32_t maxInterfaceVersion;
XrVersion minApiVersion;
XrVersion maxApiVersion;
} XrNegotiateLoaderInfo;
This structure is an input from the loader to the runtime in an xrNegotiateLoaderRuntimeInterface call, as well as from the loader to an API layer in an xrNegotiateLoaderApiLayerInterface call.
The XrLoaderInterfaceStructs enumeration is defined as:
typedef enum XrLoaderInterfaceStructs {
XR_LOADER_INTERFACE_STRUCT_UNINTIALIZED = 0,
XR_LOADER_INTERFACE_STRUCT_LOADER_INFO = 1,
XR_LOADER_INTERFACE_STRUCT_API_LAYER_REQUEST = 2,
XR_LOADER_INTERFACE_STRUCT_RUNTIME_REQUEST = 3,
XR_LOADER_INTERFACE_STRUCT_API_LAYER_CREATE_INFO = 4,
XR_LOADER_INTERFACE_STRUCT_API_LAYER_NEXT_INFO = 5,
XR_LOADER_INTERFACE_STRUCTS_MAX_ENUM = 0x7FFFFFFF
} XrLoaderInterfaceStructs;
This enumeration serves a similar purpose in the runtime and API layer interface negotiation (loader) API as XrStructureType serves in the application-facing API.
// Provided by XR_LOADER_VERSION_1_0
#define XR_LOADER_INFO_STRUCT_VERSION 1
XR_LOADER_INFO_STRUCT_VERSION is the current version of the
XrNegotiateLoaderInfo structure.
It is used to populate the XrNegotiateLoaderInfo::structVersion
field.
// Provided by XR_LOADER_VERSION_1_0
#define XR_CURRENT_LOADER_RUNTIME_VERSION 1
XR_CURRENT_LOADER_RUNTIME_VERSION is the current version of the overall OpenXR Loader Runtime interface. It is used to populate maximum and minimum interface version fields in XrNegotiateLoaderInfo when loading a runtime.
// Provided by XR_LOADER_VERSION_1_0
#define XR_CURRENT_LOADER_API_LAYER_VERSION 1
XR_CURRENT_LOADER_API_LAYER_VERSION is the current version of the overall OpenXR Loader API Layer interface. It is used to populate maximum and minimum interface version fields in XrNegotiateLoaderInfo when loading an API layer.
The XrNegotiateRuntimeRequest structure is used to pass information about the runtime back to the loader.
The XrNegotiateRuntimeRequest structure is defined as:
typedef struct XrNegotiateRuntimeRequest {
XrLoaderInterfaceStructs structType;
uint32_t structVersion;
size_t structSize;
uint32_t runtimeInterfaceVersion;
XrVersion runtimeApiVersion;
PFN_xrGetInstanceProcAddr getInstanceProcAddr;
} XrNegotiateRuntimeRequest;
This is an output structure from runtime negotiation.
The loader must populate structType, structVersion, and
structSize to ensure correct interpretation by the runtime, while the
runtime populates the rest of the fields in a successful call to
xrNegotiateLoaderRuntimeInterface.
// Provided by XR_LOADER_VERSION_1_0
#define XR_RUNTIME_INFO_STRUCT_VERSION 1
XR_RUNTIME_INFO_STRUCT_VERSION is the current version of the
XrNegotiateRuntimeRequest structure.
It is used to populate the
XrNegotiateRuntimeRequest::structVersion field.
3.4. API Layer Interface Negotiation
In order to negotiate the API layer interface version with the loader, an OpenXR API layer must implement the xrNegotiateLoaderApiLayerInterface function.
Note: The API described in this section is solely intended for use between an OpenXR loader and an API layer. Applications use the appropriate loader library for their platform to load the active runtime and configured API layers, rather than making these calls directly. This section is included in the specification to ensure consistency between API layers in their interactions with the loader. Be advised that as this is not application-facing API, some of the typical OpenXR API conventions are not followed in this section.
The xrNegotiateLoaderApiLayerInterface function is defined as:
// Provided by XR_LOADER_VERSION_1_0
XrResult xrNegotiateLoaderApiLayerInterface(
const XrNegotiateLoaderInfo* loaderInfo,
const char* layerName,
XrNegotiateApiLayerRequest* apiLayerRequest);
xrNegotiateLoaderApiLayerInterface should be directly exported by an
API layer so that using e.g. GetProcAddress on Windows or dlsym on POSIX
platforms returns a valid function pointer to it.
The API layer must return XR_ERROR_INITIALIZATION_FAILED if any of
the following conditions on loaderInfo are true:
- XrNegotiateLoaderInfo::structType is not XR_LOADER_INTERFACE_STRUCT_LOADER_INFO
- XrNegotiateLoaderInfo::structVersion is not XR_LOADER_INFO_STRUCT_VERSION
- XrNegotiateLoaderInfo::structSize is not sizeof(XrNegotiateLoaderInfo)
The API layer must also return XR_ERROR_INITIALIZATION_FAILED if any
of the following conditions on apiLayerRequest are true:
- XrNegotiateApiLayerRequest::structType is not XR_LOADER_INTERFACE_STRUCT_API_LAYER_REQUEST
- XrNegotiateApiLayerRequest::structVersion is not XR_API_LAYER_INFO_STRUCT_VERSION
- XrNegotiateApiLayerRequest::structSize is not sizeof(XrNegotiateApiLayerRequest)
The API layer must determine if it supports the loader’s request. The API layer does not support the loader’s request if either of the following is true:
- The API layer does not support the interface versions supported by the loader, as specified by the range XrNegotiateLoaderInfo::minInterfaceVersion through XrNegotiateLoaderInfo::maxInterfaceVersion inclusive.
- The API layer does not support the API versions supported by the loader, ignoring "patch" version components, as specified by the range XrNegotiateLoaderInfo::minApiVersion through XrNegotiateLoaderInfo::maxApiVersion inclusive.
The API layer must return XR_ERROR_INITIALIZATION_FAILED if it does
not support the loader’s request.
If the function succeeds, the API layer must set the
XrNegotiateApiLayerRequest::layerInterfaceVersion with the API
layer interface version it desires to support.
The XrNegotiateApiLayerRequest::layerInterfaceVersion set must
be in the range XrNegotiateLoaderInfo::minInterfaceVersion
through XrNegotiateLoaderInfo::maxInterfaceVersion inclusive.
If the function succeeds, the API layer must set the
XrNegotiateApiLayerRequest::layerApiVersion with the API version
of OpenXR it will execute under.
The XrNegotiateApiLayerRequest::layerApiVersion set must be in
the range XrNegotiateLoaderInfo::minApiVersion through
XrNegotiateLoaderInfo::maxApiVersion inclusive.
If the function succeeds, the API layer must set the
XrNegotiateApiLayerRequest::getInstanceProcAddr with a valid
function pointer for the loader to use to query function pointers to the
remaining OpenXR functions supported by the API layer.
If the function succeeds, the API layer must set the
XrNegotiateApiLayerRequest::createApiLayerInstance with a valid
function pointer to an implementation of xrCreateApiLayerInstance for
the loader to use to create the instance through the API layer call chain.
If the function succeeds, the API layer must return XR_SUCCESS.
The API layer must not call into another API layer from its implementation of the xrNegotiateLoaderApiLayerInterface function. The loader must handle all API layer negotiations with each API layer individually.
The XrNegotiateApiLayerRequest structure is used to pass information about the API layer back to the loader.
The XrNegotiateApiLayerRequest structure is defined as:
typedef struct XrNegotiateApiLayerRequest {
XrLoaderInterfaceStructs structType;
uint32_t structVersion;
size_t structSize;
uint32_t layerInterfaceVersion;
XrVersion layerApiVersion;
PFN_xrGetInstanceProcAddr getInstanceProcAddr;
PFN_xrCreateApiLayerInstance createApiLayerInstance;
} XrNegotiateApiLayerRequest;
This is an output structure from API layer negotiation.
The loader must populate structType, structVersion, and
structSize before calling to ensure correct interpretation by the API
layer, while the API layer populates the rest of the fields in a successful
call to xrNegotiateLoaderApiLayerInterface.
// Provided by XR_LOADER_VERSION_1_0
#define XR_API_LAYER_INFO_STRUCT_VERSION 1
XR_API_LAYER_INFO_STRUCT_VERSION is the current version of the
XrNegotiateApiLayerRequest structure.
It is used to populate the
XrNegotiateApiLayerRequest::structVersion field.
The xrCreateApiLayerInstance function is defined as:
// Provided by XR_LOADER_VERSION_1_0
XrResult xrCreateApiLayerInstance(
const XrInstanceCreateInfo* info,
const XrApiLayerCreateInfo* layerInfo,
XrInstance* instance);
An API layer’s implementation of the xrCreateApiLayerInstance function is invoked during the loader’s implementation of xrCreateInstance, if the layer in question is enabled.
An API layer needs additional information during xrCreateInstance calls, so each API layer must implement the xrCreateApiLayerInstance function, which is a special API layer function.
An API layer must not implement xrCreateInstance.
xrCreateApiLayerInstance must be called by the loader during its implementation of the xrCreateInstance function.
The loader must call the first API layer’s xrCreateApiLayerInstance function passing in the pointer to the created XrApiLayerCreateInfo.
The XrApiLayerCreateInfo::nextInfo must be a linked-list of
XrApiLayerNextInfo structures with information about each of the API
layers that are to be enabled.
Note that this does not operate like a next chain in the OpenXR
application API, but instead describes the enabled API layers from outermost
to innermost.
The API layer may validate that it is getting the correct next information
by checking that the XrApiLayerNextInfo::layerName matches the
expected value.
The API layer must use the information in its XrApiLayerNextInfo to call down the call chain to the next xrCreateApiLayerInstance:
- The API layer must copy the XrApiLayerCreateInfo structure into its own structure.
- The API layer must then update its copy of the XrApiLayerCreateInfo structure, setting XrApiLayerCreateInfo::nextInfo to point to the XrApiLayerNextInfo for the next API layer (e.g. layerInfoCopy->nextInfo = layerInfo->nextInfo->next;).
- The API layer must then use the pointer to its XrApiLayerCreateInfo structure (instead of the one that was passed in) when it makes a call to the xrCreateApiLayerInstance function.
- If the nested xrCreateApiLayerInstance call succeeds, the API layer may choose to set up its own dispatch table to the next API layer's functions using the returned XrInstance and the next API layer's xrGetInstanceProcAddr.
- The API layer must return the XrResult returned from the next API layer.
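The copy-and-advance step in the list above can be sketched with simplified stand-in structures. Only the fields needed for the chain walk are kept, and the Demo* names are invented for illustration; the real structures are XrApiLayerCreateInfo and XrApiLayerNextInfo.

```c
#include <stddef.h>

/* Simplified stand-ins keeping only the chain-walk fields. */
typedef struct DemoNextInfo {
    char layerName[32];
    struct DemoNextInfo *next;
} DemoNextInfo;

typedef struct DemoLayerCreateInfo {
    DemoNextInfo *nextInfo;
} DemoLayerCreateInfo;

/* Step an API layer performs before calling down the chain: copy the
 * create info, then point the copy's nextInfo at the next layer's entry.
 * The original structure passed in is left untouched. */
DemoLayerCreateInfo make_downstream_copy(const DemoLayerCreateInfo *layerInfo)
{
    DemoLayerCreateInfo copy = *layerInfo;           /* copy the struct  */
    copy.nextInfo = layerInfo->nextInfo->next;       /* advance the list */
    return copy;
}

/* Self-check: with a two-layer chain, the copy sees the second entry. */
int demo_chain_check(void)
{
    DemoNextInfo inner = { "inner_layer", NULL };
    DemoNextInfo outer = { "outer_layer", &inner };
    DemoLayerCreateInfo info = { &outer };
    DemoLayerCreateInfo down = make_downstream_copy(&info);
    if (down.nextInfo != &inner) return 1;
    if (info.nextInfo != &outer) return 2;  /* original unchanged */
    return 0;
}
```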
The XrApiLayerCreateInfo structure contains special information required by an API layer during its create instance process.
The XrApiLayerCreateInfo structure is defined as:
typedef struct XrApiLayerCreateInfo {
XrLoaderInterfaceStructs structType;
uint32_t structVersion;
size_t structSize;
void* loaderInstance;
char settings_file_location[XR_API_LAYER_MAX_SETTINGS_PATH_SIZE];
XrApiLayerNextInfo* nextInfo;
} XrApiLayerCreateInfo;
// Provided by XR_LOADER_VERSION_1_0
#define XR_API_LAYER_CREATE_INFO_STRUCT_VERSION 1
XR_API_LAYER_CREATE_INFO_STRUCT_VERSION is the current version of the
XrApiLayerCreateInfo structure.
It is used to populate the XrApiLayerCreateInfo::structVersion
field.
// Provided by XR_LOADER_VERSION_1_0
#define XR_API_LAYER_MAX_SETTINGS_PATH_SIZE 512
XR_API_LAYER_MAX_SETTINGS_PATH_SIZE is the size of the
XrApiLayerCreateInfo::settings_file_location field.
The XrApiLayerNextInfo structure:
The XrApiLayerNextInfo structure is defined as:
typedef struct XrApiLayerNextInfo {
XrLoaderInterfaceStructs structType;
uint32_t structVersion;
size_t structSize;
char layerName[XR_MAX_API_LAYER_NAME_SIZE];
PFN_xrGetInstanceProcAddr nextGetInstanceProcAddr;
PFN_xrCreateApiLayerInstance nextCreateApiLayerInstance;
struct XrApiLayerNextInfo* next;
} XrApiLayerNextInfo;
// Provided by XR_LOADER_VERSION_1_0
#define XR_API_LAYER_NEXT_INFO_STRUCT_VERSION 1
XR_API_LAYER_NEXT_INFO_STRUCT_VERSION is the current version of the
XrApiLayerNextInfo structure.
It is used to populate the XrApiLayerNextInfo::structVersion
field.
4. Instance
XR_DEFINE_HANDLE(XrInstance)
An OpenXR instance is an object that allows an OpenXR application to communicate with an OpenXR runtime. The application accomplishes this communication by calling xrCreateInstance and receiving a handle to the resulting XrInstance object.
The XrInstance object stores and tracks OpenXR-related application state, without storing any such state in the application’s global address space. This allows the application to create multiple instances as well as safely encapsulate the application’s OpenXR state since this object is opaque to the application. OpenXR runtimes may limit the number of simultaneous XrInstance objects that may be created and used, but they must support the creation and usage of at least one XrInstance object per process.
Physically, this state may be stored in any of the OpenXR loader, OpenXR API layers or the OpenXR runtime components. The exact storage and distribution of this saved state is implementation-dependent, except where indicated by this specification.
The tracking of OpenXR state in the instance allows the streamlining of the API, where the intended instance is inferred from the highest ascendant of an OpenXR function’s target object. For example, in:
myResult = xrEndFrame(mySession, &myEndFrameDescription);
the XrSession object was created from an XrInstance object. The OpenXR loader typically keeps track of the XrInstance that is the parent of the XrSession object in this example and directs the function to the runtime associated with that instance. This tracking of OpenXR objects eliminates the need to specify an XrInstance in every OpenXR function.
4.1. API Layers and Extensions
Additional functionality may be provided by API layers or extensions. An API layer must not add or modify the definition of OpenXR functions, while an extension may do so.
The set of API layers to enable is specified when creating an instance, and those API layers are able to intercept any functions dispatched to that instance or any of its child objects.
Example API layers may include (but are not limited to):
- an API layer to dump out OpenXR API calls
- an API layer to perform OpenXR validation
To determine what set of API layers are available, OpenXR provides the xrEnumerateApiLayerProperties function:
// Provided by XR_VERSION_1_0
XrResult xrEnumerateApiLayerProperties(
uint32_t propertyCapacityInput,
uint32_t* propertyCountOutput,
XrApiLayerProperties* properties);
The list of available layers may change at any time due to actions outside
of the OpenXR runtime, so two calls to xrEnumerateApiLayerProperties
with the same parameters may return different results, or retrieve
different propertyCountOutput values or properties contents.
Once an instance has been created, the layers enabled for that instance will continue to be enabled and valid for the lifetime of that instance, even if some of them become unavailable for future instances.
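The capacity/count contract of xrEnumerateApiLayerProperties is typically driven with the two-call idiom: query the required count first, then retrieve the array. A self-contained sketch follows, using a stub enumerator and simplified types in place of the real function, since no runtime is available here.

```c
#include <stddef.h>
#include <stdint.h>

/* Stub standing in for xrEnumerateApiLayerProperties, exposing three
 * fake layers; the real function has the same capacity/count contract. */
#define DEMO_AVAILABLE_LAYERS 3
typedef int XrResult;
#define XR_SUCCESS 0
#define XR_ERROR_SIZE_INSUFFICIENT -11

typedef struct DemoLayerProperties { uint32_t layerVersion; } DemoLayerProperties;

static XrResult demoEnumerateApiLayerProperties(uint32_t capacityInput,
                                                uint32_t *countOutput,
                                                DemoLayerProperties *properties)
{
    *countOutput = DEMO_AVAILABLE_LAYERS;
    if (capacityInput == 0) return XR_SUCCESS;      /* size query only */
    if (capacityInput < DEMO_AVAILABLE_LAYERS) return XR_ERROR_SIZE_INSUFFICIENT;
    for (uint32_t i = 0; i < DEMO_AVAILABLE_LAYERS; ++i)
        properties[i].layerVersion = 1;
    return XR_SUCCESS;
}

/* The two-call idiom: first ask for the count, then fetch the array. */
uint32_t demo_enumerate_layers(void)
{
    uint32_t count = 0;
    if (demoEnumerateApiLayerProperties(0, &count, NULL) != XR_SUCCESS)
        return 0;
    DemoLayerProperties props[16];                 /* count <= 16 here */
    if (count > 16 ||
        demoEnumerateApiLayerProperties(count, &count, props) != XR_SUCCESS)
        return 0;
    return count;
}
```

A real application would allocate the array dynamically from the first call's count rather than using a fixed-size buffer.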
The XrApiLayerProperties structure is defined as:
typedef struct XrApiLayerProperties {
XrStructureType type;
void* next;
char layerName[XR_MAX_API_LAYER_NAME_SIZE];
XrVersion specVersion;
uint32_t layerVersion;
char description[XR_MAX_API_LAYER_DESCRIPTION_SIZE];
} XrApiLayerProperties;
To enable a layer, the name of the layer should be added to
XrInstanceCreateInfo::enabledApiLayerNames when creating an
XrInstance.
Loader implementations may provide mechanisms outside this API for enabling
specific API layers.
API layers enabled through such a mechanism are implicitly enabled, while
API layers enabled by including the API layer name in
XrInstanceCreateInfo::enabledApiLayerNames are explicitly
enabled.
Except where otherwise specified, implicitly enabled and explicitly enabled
API layers differ only in the way they are enabled.
Explicitly enabling an API layer that is implicitly enabled has no
additional effect.
Instance extensions are able to affect the operation of the instance and any of its child objects. As stated earlier, extensions can expand the OpenXR API and provide new functions or augment behavior.
Examples of extensions may be (but are not limited to):
The application can determine the available instance extensions by calling xrEnumerateInstanceExtensionProperties:
// Provided by XR_VERSION_1_0
XrResult xrEnumerateInstanceExtensionProperties(
const char* layerName,
uint32_t propertyCapacityInput,
uint32_t* propertyCountOutput,
XrExtensionProperties* properties);
Because the list of available layers may change externally between calls to
xrEnumerateInstanceExtensionProperties, two calls may retrieve
different results if a layerName is available in one call but not in
another.
The extensions supported by a layer may also change between two calls, e.g.
if the layer implementation is replaced by a different version between those
calls.
The XrExtensionProperties structure is defined as:
typedef struct XrExtensionProperties {
XrStructureType type;
void* next;
char extensionName[XR_MAX_EXTENSION_NAME_SIZE];
uint32_t extensionVersion;
} XrExtensionProperties;
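A common follow-up is to scan the enumerated extensionName strings for a required extension before creating the instance. A minimal sketch, where the helper name and the extension list in the self-check are illustrative only:

```c
#include <string.h>
#include <stdbool.h>

/* Given a list of extension name strings (as would be read from
 * XrExtensionProperties::extensionName), check for one by name. */
bool demo_extension_supported(const char *const *names, unsigned count,
                              const char *wanted)
{
    for (unsigned i = 0; i < count; ++i)
        if (strcmp(names[i], wanted) == 0)
            return true;
    return false;
}

/* Self-check against a fixed name list. */
int demo_extension_check(void)
{
    const char *names[] = { "XR_KHR_composition_layer_depth",
                            "XR_EXT_debug_utils" };
    if (!demo_extension_supported(names, 2, "XR_EXT_debug_utils")) return 1;
    if (demo_extension_supported(names, 2, "XR_KHR_vulkan_enable")) return 2;
    return 0;
}
```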
4.2. Instance Lifecycle
The xrCreateInstance function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrCreateInstance(
const XrInstanceCreateInfo* createInfo,
XrInstance* instance);
xrCreateInstance creates the XrInstance, then enables and
initializes global API layers and extensions requested by the application.
If an extension is provided by an API layer, both the API layer and
extension must be specified at xrCreateInstance time.
If a specified API layer cannot be found, no XrInstance will be
created and the function will return XR_ERROR_API_LAYER_NOT_PRESENT.
Likewise, if a specified extension cannot be found, the call must return
XR_ERROR_EXTENSION_NOT_PRESENT and no XrInstance will be
created.
Additionally, some runtimes may limit the number of concurrent instances
that may be in use.
If the application attempts to create more instances than a runtime can
simultaneously support, xrCreateInstance may return
XR_ERROR_LIMIT_REACHED.
If the XrApplicationInfo::applicationName is the empty string, the runtime must return XR_ERROR_NAME_INVALID.
If the XrInstanceCreateInfo structure contains a platform-specific
extension for a platform other than the target platform,
XR_ERROR_INITIALIZATION_FAILED may be returned.
If a mandatory platform-specific extension is defined for the target
platform but no matching extension struct is provided in
XrInstanceCreateInfo the runtime must return
XR_ERROR_INITIALIZATION_FAILED.
The XrInstanceCreateInfo structure is defined as:
typedef struct XrInstanceCreateInfo {
XrStructureType type;
const void* next;
XrInstanceCreateFlags createFlags;
XrApplicationInfo applicationInfo;
uint32_t enabledApiLayerCount;
const char* const* enabledApiLayerNames;
uint32_t enabledExtensionCount;
const char* const* enabledExtensionNames;
} XrInstanceCreateInfo;
The XrInstanceCreateInfo::createFlags member is of the following
type, and contains a bitwise-OR of zero or more of the bits defined in
XrInstanceCreateFlagBits.
typedef XrFlags64 XrInstanceCreateFlags;
Valid bits for XrInstanceCreateFlags are defined by XrInstanceCreateFlagBits.
// Flag bits for XrInstanceCreateFlags
There are currently no instance creation flag bits defined. This is reserved for future use.
The XrApplicationInfo structure is defined as:
typedef struct XrApplicationInfo {
char applicationName[XR_MAX_APPLICATION_NAME_SIZE];
uint32_t applicationVersion;
char engineName[XR_MAX_ENGINE_NAME_SIZE];
uint32_t engineVersion;
XrVersion apiVersion;
} XrApplicationInfo;
Useful values for apiVersion include XR_API_VERSION_1_0 and
XR_API_VERSION_1_1.
Note: When using the OpenXR API to implement a reusable engine that will be used by many applications, engineName should be set to a unique string that identifies the engine, and engineVersion should encode a representation of the engine's version. The individual applications built on the engine then supply their own applicationName and applicationVersion, allowing the runtime to identify each application. When using the OpenXR API to implement an individual application without a shared engine, the input engineName should be left empty and engineVersion should be set to 0. The applicationName should then be a unique string that identifies the application, and applicationVersion should encode a representation of the application's version.
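A standalone application (no shared engine) might fill in its application info as below. The structure and size constants are reproduced locally for illustration, assuming the 128-byte name sizes from openxr.h, and the application name is hypothetical.

```c
#include <stdint.h>
#include <string.h>

/* Local copies of the structure and size constants (assumption: both
 * name sizes are 128, as in openxr.h). */
#define XR_MAX_APPLICATION_NAME_SIZE 128
#define XR_MAX_ENGINE_NAME_SIZE 128
typedef uint64_t XrVersion;
#define XR_MAKE_VERSION(major, minor, patch) \
    ((((major) & 0xffffULL) << 48) | (((minor) & 0xffffULL) << 32) | ((patch) & 0xffffffffULL))

typedef struct DemoApplicationInfo {
    char applicationName[XR_MAX_APPLICATION_NAME_SIZE];
    uint32_t applicationVersion;
    char engineName[XR_MAX_ENGINE_NAME_SIZE];
    uint32_t engineVersion;
    XrVersion apiVersion;
} DemoApplicationInfo;

/* Fill the struct for a standalone application (no shared engine):
 * engineName stays empty and engineVersion stays 0. */
DemoApplicationInfo demo_fill_app_info(void)
{
    DemoApplicationInfo info;
    memset(&info, 0, sizeof(info));
    /* The fixed-size char arrays are NOT pointers: copy into them,
     * leaving room for the terminating NUL. */
    strncpy(info.applicationName, "Hypothetical App",
            XR_MAX_APPLICATION_NAME_SIZE - 1);
    info.applicationVersion = 1;
    info.apiVersion = XR_MAKE_VERSION(1, 0, 0);  /* like XR_API_VERSION_1_0 */
    return info;
}

/* Self-check: engine fields empty, application name copied. */
int demo_app_info_check(void)
{
    DemoApplicationInfo info = demo_fill_app_info();
    if (info.engineName[0] != '\0' || info.engineVersion != 0) return 1;
    if (strcmp(info.applicationName, "Hypothetical App") != 0) return 2;
    return 0;
}
```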
The xrDestroyInstance function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrDestroyInstance(
XrInstance instance);
The xrDestroyInstance function is used to destroy an XrInstance.
XrInstance handles are destroyed using xrDestroyInstance. When an XrInstance is destroyed, all handles that are children of that XrInstance are also destroyed.
4.3. Instance Information
The xrGetInstanceProperties function provides information about the instance and the associated runtime.
// Provided by XR_VERSION_1_0
XrResult xrGetInstanceProperties(
XrInstance instance,
XrInstanceProperties* instanceProperties);
The instanceProperties parameter must be filled out by the runtime in
response to this call, with information as defined in
XrInstanceProperties.
The XrInstanceProperties structure is defined as:
typedef struct XrInstanceProperties {
XrStructureType type;
void* next;
XrVersion runtimeVersion;
char runtimeName[XR_MAX_RUNTIME_NAME_SIZE];
} XrInstanceProperties;
4.4. Platform-Specific Instance Creation
Some amount of data required for instance creation is exposed through chained structures defined in extensions. These structures may be optional or even required for instance creation on specific platforms, but not on other platforms. Separating off platform-specific functionality into extension structures prevents the primary XrInstanceCreateInfo structure from becoming too bloated with unnecessary information.
See the
List of Extensions
appendix for the list of available extensions and their related structures.
These structures expand the XrInstanceCreateInfo parent struct using
the XrInstanceCreateInfo::next member.
The specific list of structures that may be used for extending
XrInstanceCreateInfo::next can be found in the "Valid Usage
(Implicit)" block immediately following the definition of the structure.
4.4.1. The Instance Lost Error
The XR_ERROR_INSTANCE_LOST error indicates that the XrInstance
has become unusable.
This can happen if a critical runtime process aborts, if the connection to
the runtime is otherwise no longer available, or if the runtime encounters
an error during any function execution which prevents it from being able to
support further function execution.
Once XR_ERROR_INSTANCE_LOST is first returned, it must henceforth be
returned by all non-destroy functions that involve an XrInstance or
child handle type until the instance is destroyed.
Applications must destroy the XrInstance.
Applications may then attempt to continue by recreating all relevant OpenXR
objects, starting with a new XrInstance.
A runtime may generate an XrEventDataInstanceLossPending event when
instance loss is detected.
4.4.2. XrEventDataInstanceLossPending
The XrEventDataInstanceLossPending structure is defined as:
// Provided by XR_VERSION_1_0
typedef struct XrEventDataInstanceLossPending {
XrStructureType type;
const void* next;
XrTime lossTime;
} XrEventDataInstanceLossPending;
Receiving the XrEventDataInstanceLossPending event structure indicates
that the application is about to lose the indicated XrInstance at the
indicated lossTime in the future.
The application should call xrDestroyInstance and relinquish any
instance-specific resources.
This typically occurs to make way for a replacement of the underlying
runtime, such as via a software update.
After the application has destroyed all of its instances and their children
and waited past the specified time, it may then re-try
xrCreateInstance in a loop waiting for whatever maintenance the
runtime is performing to complete.
The runtime will return XR_ERROR_RUNTIME_UNAVAILABLE from
xrCreateInstance as long as it is unable to create the instance.
Once the runtime has returned and is able to continue, it must resume
returning XR_SUCCESS from xrCreateInstance if valid data is
passed in.
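The recreate-and-retry loop described above can be sketched as follows, with a stub in place of xrCreateInstance that reports the runtime as unavailable for a few attempts before succeeding. The stub behavior and attempt bound are invented for illustration; a real application would also sleep between tries.

```c
/* Stub runtime: unavailable for the first two attempts, then ready,
 * standing in for xrCreateInstance during runtime maintenance. */
typedef int XrResult;
#define XR_SUCCESS 0
#define XR_ERROR_RUNTIME_UNAVAILABLE -51

static int demo_attempts_until_ready = 3;

static XrResult demoCreateInstance(void)
{
    if (--demo_attempts_until_ready > 0)
        return XR_ERROR_RUNTIME_UNAVAILABLE;
    return XR_SUCCESS;
}

/* Bounded retry loop: keep trying while the runtime is unavailable,
 * give up on any other error. Returns attempts used before success. */
int demo_recreate_instance(void)
{
    for (int attempt = 0; attempt < 10; ++attempt) {
        XrResult res = demoCreateInstance();
        if (res == XR_SUCCESS)
            return attempt;
        if (res != XR_ERROR_RUNTIME_UNAVAILABLE)
            return -1;                 /* unexpected error: give up */
    }
    return -1;
}
```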
4.5. Instance Enumerated Type String Functions
Applications often want to turn certain enum values from the runtime into strings for use in log messages, to be localized in UI, or for various other reasons. OpenXR provides functions that turn common enum types into UTF-8 strings for use in applications.
// Provided by XR_VERSION_1_0
XrResult xrResultToString(
XrInstance instance,
XrResult value,
char buffer[XR_MAX_RESULT_STRING_SIZE]);
Returns the text version of the provided XrResult value as a UTF-8 string.
In all cases the returned string must be one of:
- The literal string defined for the provided numeric value in the core specification or an extension (e.g. the value 0 results in the string XR_SUCCESS).
- XR_UNKNOWN_SUCCESS_ concatenated with the positive result number expressed as a decimal number.
- XR_UNKNOWN_FAILURE_ concatenated with the negative result number expressed as a decimal number.
The XR_MAX_RESULT_STRING_SIZE enumerant defines the size of the buffer
passed to xrResultToString.
#define XR_MAX_RESULT_STRING_SIZE 64
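For result codes with no known literal name, the fallback convention of "XR_UNKNOWN_SUCCESS_" or "XR_UNKNOWN_FAILURE_" followed by the decimal value can be produced as below. This is a sketch of the convention, not the runtime's actual implementation, and the helper names are invented here.

```c
#include <stdio.h>
#include <string.h>

#define XR_MAX_RESULT_STRING_SIZE 64

/* Build "XR_UNKNOWN_SUCCESS_<n>" or "XR_UNKNOWN_FAILURE_<n>", where <n>
 * is the decimal value (including the minus sign for failures). */
void demo_unknown_result_string(int value, char buffer[XR_MAX_RESULT_STRING_SIZE])
{
    snprintf(buffer, XR_MAX_RESULT_STRING_SIZE, "XR_UNKNOWN_%s_%d",
             value < 0 ? "FAILURE" : "SUCCESS", value);
}

/* Self-check on one failure code and one success code. */
int demo_result_string_check(void)
{
    char buf[XR_MAX_RESULT_STRING_SIZE];
    demo_unknown_result_string(-42, buf);
    if (strcmp(buf, "XR_UNKNOWN_FAILURE_-42") != 0) return 1;
    demo_unknown_result_string(7, buf);
    if (strcmp(buf, "XR_UNKNOWN_SUCCESS_7") != 0) return 2;
    return 0;
}
```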
The xrStructureTypeToString function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrStructureTypeToString(
XrInstance instance,
XrStructureType value,
char buffer[XR_MAX_STRUCTURE_NAME_SIZE]);
Returns the text version of the provided XrStructureType value as a UTF-8 string.
In all cases the returned string must be one of:
- The literal string defined for the provided numeric value in the core specification or an extension.
- XR_UNKNOWN_STRUCTURE_TYPE_ concatenated with the structure type number expressed as a decimal number.
The XR_MAX_STRUCTURE_NAME_SIZE enumerant defines the size of the
buffer passed to xrStructureTypeToString.
#define XR_MAX_STRUCTURE_NAME_SIZE 64
5. System
This API separates the concept of physical systems of XR devices from the
logical objects that applications interact with directly.
A system represents a collection of related devices in the runtime, often
made up of several individual hardware components working together to enable
XR experiences.
An XrSystemId is returned by xrGetSystem representing the
system of devices the runtime will use to support a given
form factor.
Each system may include: a VR/AR display, various forms of input (gamepad,
touchpad, motion controller), and other trackable objects.
The application uses the system to create a session, which can then be used to accept input from the user and output rendered frames. The application also provides suggested bindings from its actions to any number of input sources. The runtime may use this action information to activate only a subset of devices and avoid wasting resources on devices that are not in use. Exactly which devices are active once an XR system is selected will depend on the features provided by the runtime, and may vary from runtime to runtime. For example, a runtime that is capable of mapping from one tracking system’s space to another’s may support devices from multiple tracking systems simultaneously.
5.1. Form Factors
The first step in selecting a system is for the application to request its desired form factor. The form factor defines how the display(s) moves in the environment relative to the user’s head and how the user will interact with the XR experience. A runtime may support multiple form factors, such as on a mobile phone that supports both slide-in VR headset experiences and handheld AR experiences.
While an application’s core XR rendering may span across form factors, its user interface will often be written to target a particular form factor, requiring explicit tailoring to function well on other form factors. For example, screen-space UI designed for a handheld phone will produce an uncomfortable experience for users if presented in screen-space on an AR headset.
typedef enum XrFormFactor {
XR_FORM_FACTOR_HEAD_MOUNTED_DISPLAY = 1,
XR_FORM_FACTOR_HANDHELD_DISPLAY = 2,
XR_FORM_FACTOR_MAX_ENUM = 0x7FFFFFFF
} XrFormFactor;
The predefined form factors which may be supported by OpenXR runtimes are:
- XR_FORM_FACTOR_HEAD_MOUNTED_DISPLAY — the tracked display is attached to the user's head, e.g. a VR headset or AR glasses.
- XR_FORM_FACTOR_HANDHELD_DISPLAY — the tracked display is held in the user's hand, independent of the user's head, e.g. a mobile phone running an AR experience using pass-through video.
5.2. Getting the XrSystemId
XR_DEFINE_ATOM(XrSystemId)
An XrSystemId is an opaque atom used by the runtime to identify a
system.
The value XR_NULL_SYSTEM_ID is considered an invalid system.
// Provided by XR_VERSION_1_0
#define XR_NULL_SYSTEM_ID 0
The only XrSystemId value defined to be constant across all
instances is the invalid system XR_NULL_SYSTEM_ID.
No supported system is associated with XR_NULL_SYSTEM_ID.
Unless explicitly permitted, it should not be passed to API calls or used
as a structure attribute when a valid XrSystemId is required.
The xrGetSystem function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrGetSystem(
XrInstance instance,
const XrSystemGetInfo* getInfo,
XrSystemId* systemId);
To get an XrSystemId, an application specifies its desired
form factor to xrGetSystem and gets
the runtime’s XrSystemId associated with that configuration.
If the form factor is supported but temporarily unavailable,
xrGetSystem must return XR_ERROR_FORM_FACTOR_UNAVAILABLE.
A runtime may return XR_SUCCESS on a subsequent call for a form
factor it previously returned XR_ERROR_FORM_FACTOR_UNAVAILABLE.
For example, connecting or warming up hardware might cause an unavailable
form factor to become available.
The XrSystemGetInfo structure is defined as:
typedef struct XrSystemGetInfo {
XrStructureType type;
const void* next;
XrFormFactor formFactor;
} XrSystemGetInfo;
The XrSystemGetInfo structure specifies attributes about a system as desired by an application.
XrInstance instance; // previously initialized
XrSystemGetInfo system_get_info = {XR_TYPE_SYSTEM_GET_INFO};
system_get_info.formFactor = XR_FORM_FACTOR_HEAD_MOUNTED_DISPLAY;
XrSystemId systemId;
CHK_XR(xrGetSystem(instance, &system_get_info, &systemId));
// create session
// create swapchains
// begin session
// main loop
// end session
// destroy session
// no access to hardware after this point
5.3. System Properties
The xrGetSystemProperties function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrGetSystemProperties(
XrInstance instance,
XrSystemId systemId,
XrSystemProperties* properties);
An application can call xrGetSystemProperties to retrieve information about the system such as vendor ID, system name, and graphics and tracking properties.
The XrSystemProperties structure is defined as:
typedef struct XrSystemProperties {
    XrStructureType               type;
    void*                         next;
    XrSystemId                    systemId;
    uint32_t                      vendorId;
    char                          systemName[XR_MAX_SYSTEM_NAME_SIZE];
    XrSystemGraphicsProperties    graphicsProperties;
    XrSystemTrackingProperties    trackingProperties;
} XrSystemProperties;
The runtime must report a valid vendor ID for the system. The vendor ID must be either the USB vendor ID defined for the physical device or a Khronos vendor ID.
The XrSystemGraphicsProperties structure is defined as:
typedef struct XrSystemGraphicsProperties {
    uint32_t    maxSwapchainImageHeight;
    uint32_t    maxSwapchainImageWidth;
    uint32_t    maxLayerCount;
} XrSystemGraphicsProperties;
// Provided by XR_VERSION_1_0
#define XR_MIN_COMPOSITION_LAYERS_SUPPORTED 16
XR_MIN_COMPOSITION_LAYERS_SUPPORTED defines the minimum number of
composition layers that a conformant runtime must support.
A runtime must report an
XrSystemGraphicsProperties::maxLayerCount of at least the value of
XR_MIN_COMPOSITION_LAYERS_SUPPORTED.
The XrSystemTrackingProperties structure is defined as:
typedef struct XrSystemTrackingProperties {
    XrBool32    orientationTracking;
    XrBool32    positionTracking;
} XrSystemTrackingProperties;
6. Path Tree and Semantic Paths
OpenXR incorporates an internal semantic path tree model, also known as the path tree, with entities associated with nodes organized in a logical tree and referenced by path name strings structured like a filesystem path or URL. The path tree unifies a number of concepts used in this specification and a runtime may add additional nodes as implementation details. As a general design principle, the most application-facing paths should have semantic and hierarchical meaning in their name. Thus, these paths are often referred to as semantic paths. However, path names in the path tree model may not all have the same level or kind of semantic meaning.
In regular use in an application, path name strings are converted to
instance-specific XrPath values which are used in place of path
strings.
The mapping between XrPath values and their corresponding path name
strings may be considered to be tracked by the runtime in a one-to-one
mapping in addition to the natural tree structure of the referenced
entities.
Runtimes may use any internal implementation that satisfies the
requirements.
Formally, the runtime maintains an instance-specific bijective mapping
between well-formed path name strings and valid XrPath
(uint64_t) values.
These XrPath values are only valid within a single
XrInstance, and applications must not share these values between
instances.
Applications must instead use the string representation of a path in their
code and configuration, and obtain the correct corresponding XrPath
at runtime in each XrInstance.
The term path or semantic path may refer interchangeably to either the
path name string or its associated XrPath value within an instance
when context makes it clear which type is being discussed.
Given that path trees are a unifying model in this specification, the
entities referenced by paths can be of diverse types.
For example, they may be used to represent physical device or sensor
components, which may be of various component types.
They may also be used to represent frames of reference that are understood
by the application and the runtime, as defined by an XrSpace.
Additionally, to permit runtime re-configuration and support
hardware-independent development, any syntactically-valid path string may
be used to retrieve a corresponding XrPath without error given
sufficient resources, even if no logical or hardware entity currently
corresponds to that path at the time of the call.
Later retrieval of the associated path string of such an XrPath
using xrPathToString should succeed if the other requirements of that
call are met.
However, using such an XrPath in a later call to any other API
function may result in an error if no entity of the type required by the
call is available at the path at that later time.
A runtime should permit the entity referenced by a path to vary over time
to naturally reflect varying system configuration and hardware availability.
6.1. Path Atom Type
XR_DEFINE_ATOM(XrPath)
The XrPath is an atom that connects an application with a single
path, within the context of a single instance.
There is a bijective mapping between well-formed path strings and atoms in
use.
This atom is used — in place of the path name string it corresponds to — to retrieve state and perform other operations.
As an XrPath is only shorthand for a well-formed path string, it has
no explicit life cycle.
Lifetime is implicitly managed by the XrInstance.
An XrPath must not be used unless it is received at execution time
from the runtime in the context of a particular XrInstance.
Therefore, with the exception of XR_NULL_PATH, XrPath values
must not be specified as constant values in applications: the corresponding
path string should be used instead.
During the lifetime of a given XrInstance, the XrPath
associated with that instance with any given well-formed path must not
vary, and similarly the well-formed path string that corresponds to a given
XrPath in that instance must not vary.
An XrPath that is received from one XrInstance may not be
used with another.
Such an invalid use may be detected and result in an error being returned,
or it may result in undefined behavior.
Well-written applications should typically use a small, bounded set of
paths in practice.
However, the runtime should support looking up the XrPath for a
large number of path strings for maximum compatibility.
Runtime implementers should keep in mind that applications supporting
diverse systems may look up path strings in a quantity exceeding the number
of non-empty entities predicted or provided by any one runtime’s own path
tree model, and this is not inherently an error.
However, system resources are finite and thus runtimes may signal
exhaustion of resources dedicated to these associations under certain
conditions.
When discussing the behavior of runtimes at these limits, a new
XrPath refers to an XrPath value that, as of some point in
time, has neither been received by the application nor tracked internally by
the runtime.
In this case, since an application has not yet received the value of such an
XrPath, the runtime has not yet made any assertions about its
association with any path string.
In this context, new only refers to the fact that the mapping has not
necessarily been made constant for a given value/path string pair for the
remaining life of the associated instance by being revealed to the
application.
It does not necessarily imply creation of the entity, if any, referred to by
such a path.
Similarly, it does not imply the absence of such an entity prior to that
point.
Entities in the path tree have varied lifetime that is independent from the
duration of the mapping from path string to XrPath.
For flexibility, the runtime may internally track or otherwise make
constant, in instance or larger scope, any mapping of a path string to an
XrPath value even before an application would otherwise receive
that value, thus making it no longer new by the above definition.
When the runtime’s resources to track the path string-XrPath
mapping are exhausted, and the application makes an API call that would have
otherwise retrieved a new XrPath as defined above, the runtime
must return XR_ERROR_PATH_COUNT_EXCEEDED.
This includes both explicit calls to xrStringToPath as well as other
calls that retrieve an XrPath in any other way.
The runtime should support creating as many paths as memory will allow and
must return XR_ERROR_PATH_COUNT_EXCEEDED from relevant functions when
no more can be created.
// Provided by XR_VERSION_1_0
#define XR_NULL_PATH 0
The only XrPath value defined to be constant across all instances
is the invalid path XR_NULL_PATH.
No well-formed path string is associated with XR_NULL_PATH.
Unless explicitly permitted, it should not be passed to API calls or used
as a structure attribute when a valid XrPath is required.
6.2. Well-Formed Path Strings
Even though they look similar, semantic paths are not file paths. To avoid confusion with file path directory traversal conventions, many file path conventions are explicitly disallowed from well-formed path name strings.
A well-formed path name string must conform to the following rules:
- Path name strings must be constructed entirely from characters on the following list.
  - Lower case ASCII letters: a-z
  - Numeric digits: 0-9
  - Dash: -
  - Underscore: _
  - Period: .
  - Forward Slash: /
- Path name strings must start with a single forward slash character.
- Path name strings must not end with a forward slash character.
- Path name strings must not contain two or more adjacent forward slash characters.
- Path name strings must not contain two forward slash characters that are separated by only period characters.
- Path name strings must not contain only period characters following the final forward slash character in the string.
- The maximum string length for a path name string, including the terminating \0 character, is defined by XR_MAX_PATH_LENGTH.
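The rules above are mechanical enough to capture in a small validator. The following C sketch is illustrative only; the function names are hypothetical and not part of the OpenXR API (XR_MAX_PATH_LENGTH is redefined locally so the sketch is self-contained).

```c
#include <stdbool.h>
#include <string.h>

#define XR_MAX_PATH_LENGTH 256  /* redefined here; 256 in the registry */

/* Characters permitted in a well-formed path name string. */
static bool is_path_char(char c) {
    return (c >= 'a' && c <= 'z') || (c >= '0' && c <= '9') ||
           c == '-' || c == '_' || c == '.' || c == '/';
}

/* Hypothetical helper: returns true if 'path' satisfies the well-formedness
 * rules (character set, slash placement, no empty or dot-only segments). */
bool is_well_formed_path(const char *path) {
    size_t len = strlen(path);
    if (len + 1 > XR_MAX_PATH_LENGTH) return false;  /* includes '\0' */
    if (len < 2 || path[0] != '/') return false;     /* must start with '/' */
    if (path[len - 1] == '/') return false;          /* must not end with '/' */
    bool segment_all_dots = true;  /* state of the segment after the last '/' */
    bool segment_empty = true;
    for (size_t i = 1; i < len; ++i) {
        char c = path[i];
        if (!is_path_char(c)) return false;
        if (c == '/') {
            /* Rejects "//" (empty segment) and "/./", "/../" (dot-only). */
            if (segment_empty || segment_all_dots) return false;
            segment_all_dots = true;
            segment_empty = true;
        } else {
            segment_empty = false;
            if (c != '.') segment_all_dots = false;
        }
    }
    /* The final segment must not consist only of periods. */
    return !segment_all_dots;
}
```

Note that a string passing this check is merely syntactically valid; as described above, that says nothing about whether any entity currently exists at that path.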
6.2.1. xrStringToPath
The xrStringToPath function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrStringToPath(
    XrInstance                                  instance,
    const char*                                 pathString,
    XrPath*                                     path);
xrStringToPath retrieves the XrPath value for a well-formed
path string.
If such a value had not yet been assigned by the runtime to the provided
path string in this XrInstance, one must be assigned at this point.
All calls to this function with the same XrInstance and path string
must retrieve the same XrPath value.
Upon failure, xrStringToPath must return an appropriate
XrResult, and may set the output parameter to XR_NULL_PATH.
See Path Atom Type for the conditions under which an
error may be returned when this function is given a valid XrInstance
and a well-formed path string.
If the runtime’s resources are exhausted and it cannot create the path, a
return value of XR_ERROR_PATH_COUNT_EXCEEDED must be returned.
If the application specifies a string that is not a well-formed path string,
XR_ERROR_PATH_FORMAT_INVALID must be returned.
A return value of XR_SUCCESS from xrStringToPath may not
necessarily imply that the runtime has a component or other source of data
that will be accessible through that semantic path.
It only means that the path string supplied was well-formed and that the
retrieved XrPath maps to the given path string within and during
the lifetime of the XrInstance given.
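The bijective-mapping requirement means xrStringToPath behaves like string interning: within one instance, the same string always yields the same atom, and distinct strings yield distinct atoms. The following is a minimal self-contained model of that behavior; the PathTable type, intern_path function, and fixed capacity are illustrative assumptions, not the real API or a real runtime's data structure.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef uint64_t XrPath;
#define XR_NULL_PATH 0

/* A toy per-instance intern table: path strings are assigned ascending
 * atoms on first lookup, and the same atom on every later lookup. */
#define MAX_PATHS 64
typedef struct {
    const char *strings[MAX_PATHS];
    uint64_t count;
} PathTable;

XrPath intern_path(PathTable *t, const char *pathString) {
    /* Same string in the same "instance" -> same atom. */
    for (uint64_t i = 0; i < t->count; ++i)
        if (strcmp(t->strings[i], pathString) == 0)
            return i + 1;  /* atoms are 1-based; 0 is XR_NULL_PATH */
    /* Table full: models XR_ERROR_PATH_COUNT_EXCEEDED resource exhaustion. */
    if (t->count == MAX_PATHS) return XR_NULL_PATH;
    char *copy = malloc(strlen(pathString) + 1);
    if (!copy) return XR_NULL_PATH;
    strcpy(copy, pathString);
    t->strings[t->count] = copy;
    return ++t->count;
}
```

The model also shows why atoms must not be shared across instances: two independent tables are free to hand out different numbers for the same string.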
6.2.2. xrPathToString
// Provided by XR_VERSION_1_0
XrResult xrPathToString(
    XrInstance                                  instance,
    XrPath                                      path,
    uint32_t                                    bufferCapacityInput,
    uint32_t*                                   bufferCountOutput,
    char*                                       buffer);
xrPathToString retrieves the path name string associated with an
XrPath, in the context of a given XrInstance, in the form of
a NULL terminated string placed into a caller-allocated buffer.
Since the mapping between a well-formed path name string and an
XrPath is bijective, there will always be exactly one string for
each valid XrPath value.
This can be useful if the calling application receives an XrPath
value that they had not previously retrieved via xrStringToPath.
During the lifetime of the given XrInstance, the path name string
retrieved by this function for a given valid XrPath will not
change.
For invalid paths, including XR_NULL_PATH, XR_ERROR_PATH_INVALID
must be returned.
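xrPathToString uses the specification's two-call idiom for buffer sizing: call once with bufferCapacityInput of 0 to learn the required size from bufferCountOutput, then call again with an allocated buffer. The sketch below demonstrates that calling pattern against a stand-in function; mock_xrPathToString, its fixed lookup result, and the numeric error-code values are hypothetical stand-ins for this sketch.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef int XrResult;
#define XR_SUCCESS 0
#define XR_ERROR_SIZE_INSUFFICIENT -11  /* mock value for this sketch */
#define XR_ERROR_PATH_INVALID -19       /* mock value for this sketch */
typedef uint64_t XrPath;

/* Stand-in for xrPathToString: atom 1 maps to a fixed string. */
static XrResult mock_xrPathToString(XrPath path, uint32_t bufferCapacityInput,
                                    uint32_t *bufferCountOutput, char *buffer) {
    if (path != 1) return XR_ERROR_PATH_INVALID;
    const char *s = "/user/hand/left";
    uint32_t needed = (uint32_t)strlen(s) + 1;  /* includes the '\0' */
    *bufferCountOutput = needed;
    if (bufferCapacityInput == 0) return XR_SUCCESS;  /* size query */
    if (bufferCapacityInput < needed) return XR_ERROR_SIZE_INSUFFICIENT;
    memcpy(buffer, s, needed);
    return XR_SUCCESS;
}

/* Two-call idiom: first query the required size, then retrieve the string.
 * Returns a heap-allocated string the caller must free, or NULL on error. */
char *path_to_string_alloc(XrPath path) {
    uint32_t count = 0;
    if (mock_xrPathToString(path, 0, &count, NULL) != XR_SUCCESS) return NULL;
    char *buffer = malloc(count);
    if (!buffer) return NULL;
    if (mock_xrPathToString(path, count, &count, buffer) != XR_SUCCESS) {
        free(buffer);
        return NULL;
    }
    return buffer;
}
```

The same pattern applies to every OpenXR function with a bufferCapacityInput/bufferCountOutput parameter pair.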
6.3. Reserved Paths
In order for some uses of semantic paths to work consistently across runtimes, it is necessary to standardize several paths and require each runtime to use the same paths or patterns of paths for certain classes of usage. Those paths are as follows.
6.3.1. Top level /user paths
Some paths are used to refer to entities that are filling semantic roles in the system. These paths are all under the /user subtree.
The reserved user paths are:
Runtimes are not required to provide interaction at all of these paths. For instance, in a system with no hand tracking, only /user/head would be active for interaction. In a system with only one controller, the runtime may provide access to that controller via either /user/hand/left or /user/hand/right as it deems appropriate.
The runtime may change the devices referred to by /user/hand/left and /user/hand/right at any time.
If more than two hand-held controllers or devices are active, the runtime must determine which two are accessible as /user/hand/left and /user/hand/right.
6.3.2. Input subpaths
Interaction profiles define paths for each component that can be bound to an
action.
This section describes the naming conventions for those input components.
Runtimes must ignore input subpaths that use identifiers and component
names that do not appear in this specification or otherwise do not follow
the pattern specified below.
Input subpaths further qualify top-level /user paths to form
binding paths.
For this reason, they are often shown starting with "…", with the path
components before /input or /output omitted entirely.
The input subpaths considered valid when combined with any given top-level
/user path vary by interaction profile.
Each input subpath must match the following pattern:
- …/input/<identifier>[_<location>][/<component>]
Identifiers are often the label on the component or related to the type and location of the component.
When specifying a suggested binding there are several cases where the component part of the path can be determined automatically. See Suggested Bindings for more details.
See Interaction Profiles for examples of input subpaths.
Standard identifiers
- trackpad - A 2D input source that usually includes click and touch components.
- thumbstick - A small 2D joystick that is meant to be used with the user’s thumb. These sometimes include click and/or touch components.
- joystick - A 2D joystick that is meant to be used with the user’s entire hand, such as a flight stick. These generally do not have a click component, but might have touch components.
- trigger - A 1D analog input component that returns to a rest state when the user stops interacting with it. These sometimes include touch and/or click components.
- throttle - A 1D analog input component that remains in position when the user stops interacting with it.
- trackball - A 2D relative input source. These sometimes include click components.
- pedal - A 1D analog input component that is similar to a trigger but meant to be operated by a foot.
- system - A button with the specialised meaning that it enables the user to access system-level functions and UI. Input data from system buttons is generally used internally by runtimes and may not be available to applications.
- dpad_up, dpad_down, dpad_left, and dpad_right - A set of buttons arranged in a plus shape.
- diamond_up, diamond_down, diamond_left, and diamond_right - Gamepads often have a set of four buttons arranged in a diamond shape. The labels on those buttons vary from gamepad to gamepad, but their arrangement is consistent. These names are used for the A/B/X/Y buttons on an Xbox controller, and the square/cross/circle/triangle buttons on a PlayStation controller.
- a, b, x, y, start, home, end, select - Standalone buttons are named for their physical labels. These are the standard identifiers for such buttons. Extensions may add new identifiers as detailed in the next section. Groups of four buttons in a diamond shape should use the diamond-prefix names above instead of using the labels on the buttons themselves.
- volume_up, volume_down, mute_mic, play_pause, menu, view, back - Some other standard controls are often identified by icons. These are their standard names.
- thumbrest - Some controllers have a place for the user to rest their thumb.
- shoulder - A button that is usually pressed with the index finger and is often positioned above a trigger.
- squeeze - An input source that indicates that the user is squeezing their fist closed. This could be a simple button or act more like a trigger. Sources with this identifier should either follow button or trigger conventions for their components.
- wheel - A steering wheel.
- thumb_resting_surfaces - Any surfaces that a thumb may naturally rest on. This may include, but is not limited to, face buttons, thumbstick, and thumbrest. (Provided by XR_VERSION_1_1)
- stylus - Tip that can be used for writing or drawing. May be able to detect various pressure levels. (Provided by XR_VERSION_1_1)
- trigger_curl - This sensor detects how pointed or curled the user’s finger is on the trigger: 0 = fully pointed, 1 = finger flat on surface. (Provided by XR_VERSION_1_1)
- trigger_slide - This sensor represents how far the user is sliding their index finger along the surface of the trigger: 0 = finger flat on the surface, 1 = finger fully drawn back. (Provided by XR_VERSION_1_1)
Standard pose identifiers
Input sources whose orientation and/or position are tracked also expose pose identifiers.
Standard pose identifiers for tracked hands or motion controllers as represented by /user/hand/left and /user/hand/right are:
- grip - A pose that allows applications to reliably render a virtual object held in the user’s hand, whether it is tracked directly or by a motion controller. The grip pose is defined as follows:
  - The grip position:
    - For tracked hands: The user’s palm centroid when closing the fist, at the surface of the palm.
    - For handheld motion controllers: A fixed position within the controller that generally lines up with the palm centroid when held by a hand in a neutral position. This position should be adjusted left or right to center the position within the controller’s grip.
  - The grip orientation’s +X axis: When you completely open your hand to form a flat 5-finger pose, the ray that is normal to the user’s palm (away from the palm in the left hand, into the palm in the right hand).
  - The grip orientation’s -Z axis: When you close your hand partially (as if holding the controller), the ray that goes through the center of the tube formed by your non-thumb fingers, in the direction of little finger to thumb.
  - The grip orientation’s +Y axis: orthogonal to +Z and +X using the right-hand rule.
- aim - A pose that allows applications to point in the world using the input source, according to the platform’s conventions for aiming with that kind of source. The aim pose is defined as follows:
  - For tracked hands: The ray that follows platform conventions for how the user aims at objects in the world with their entire hand, with +Y up, +X to the right, and -Z forward. The ray chosen will be runtime-dependent, often a ray emerging from the hand at a target pointed by moving the forearm.
  - For handheld motion controllers: The ray that follows platform conventions for how the user targets objects in the world with the motion controller, with +Y up, +X to the right, and -Z forward. This is usually for applications that are rendering a model matching the physical controller, as an application rendering a virtual object in the user’s hand likely prefers to point based on the geometry of that virtual object. The ray chosen will be runtime-dependent, although this will often emerge from the frontmost tip of a motion controller.
- grip_surface - (Provided by XR_VERSION_1_1) A pose that allows applications to reliably anchor visual content relative to the user’s physical hand, whether the user’s hand is tracked directly or its position and orientation is inferred by a physical controller. The grip_surface pose is defined as follows:
  - The grip_surface position: The user’s physical palm centroid, at the surface of the palm. For the avoidance of doubt, the palm does not include fingers.
  - The grip_surface orientation’s +X axis: When a user is holding the controller and straightens their index fingers pointing forward, the ray that is normal (perpendicular) to the user’s palm (away from the palm in the left hand, into the palm in the right hand).
  - The grip_surface orientation’s -Z axis: When a user is holding the controller and straightens their index finger, the ray that is parallel to their finger’s pointing direction.
  - The grip_surface orientation’s +Y axis: orthogonal to +Z and +X using the right-hand rule.
Standard locations
When a single device contains multiple input sources that use the same identifier, a location suffix is added to create a unique identifier for that input source.
Standard locations are:
- left
- right
- left_upper
- left_lower
- right_upper
- right_lower
- upper
- lower
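Because identifiers may themselves contain underscores (e.g. volume_up), a location suffix is most reliably recognized by matching against the closed set of standard locations, longest suffix first. The helper below is a hypothetical illustration of that approach and is not part of the OpenXR API.

```c
#include <stddef.h>
#include <string.h>

/* Standard location suffixes, longest first so that e.g. "left_upper"
 * is matched before "upper". */
static const char *const kLocations[] = {
    "left_upper", "left_lower", "right_upper", "right_lower",
    "left", "right", "upper", "lower",
};

/* If 'name' ends in "_<standard location>", return a pointer to the
 * location suffix within 'name'; otherwise return NULL. */
const char *find_location_suffix(const char *name) {
    size_t len = strlen(name);
    for (size_t i = 0; i < sizeof(kLocations) / sizeof(kLocations[0]); ++i) {
        size_t loc_len = strlen(kLocations[i]);
        if (len > loc_len + 1 &&
            name[len - loc_len - 1] == '_' &&
            strcmp(name + len - loc_len, kLocations[i]) == 0) {
            return name + len - loc_len;
        }
    }
    return NULL;
}
```

For example, "thumbstick_left" yields the suffix "left", while "trackpad" and "trigger_curl" yield no suffix at all.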
Standard components
Components are named for the specific boolean, scalar, or other value of the input source. Standard components are:
- click - A physical switch has been pressed by the user. This is valid for all buttons, and is common for trackpads, thumbsticks, triggers, and dpads. "click" components are always boolean.
- touch - The user has touched the input source. This is valid for all trackpads, and may be present for any other kind of input source if the device includes the necessary sensor. "touch" components are always boolean.
- force - A 1D scalar value that represents the user applying force to the input. It varies from 0 to 1, with 0 being the rest state. This is present for any input source with a force sensor.
- value - A 1D scalar value that varies from 0 to 1, with 0 being the rest state. This is present for triggers, throttles, and pedals. It may also be present for squeeze or other components.
- x, y - scalar components of 2D values. These vary in value from -1 to 1. These represent the 2D position of the input source with 0 being the rest state on each axis. -1 means all the way left for the x axis or all the way down for the y axis. +1 means all the way right for the x axis or all the way up for the y axis. x and y components are present for trackpads, thumbsticks, and joysticks.
- twist - Some sources, such as flight sticks, have a sensor that allows the user to twist the input left or right. For this component -1 means all the way left and 1 means all the way right.
- pose - The orientation and/or position of this input source. This component may exist for dedicated pose identifiers like grip and aim, or may be defined on other identifiers such as trackpad to let applications reason about the surface of that part.
- proximity - The user is in physical proximity of the input source. This may be present for any kind of input source representing a physical component, such as a button, if the device includes the necessary sensor. The state of a "proximity" component must be XR_TRUE if the same input source is returning XR_TRUE for either a "touch" or any other component that implies physical contact. The runtime may return XR_TRUE for "proximity" when "touch" returns XR_FALSE, which would indicate that the user is hovering just above, but not touching, the input source in question. "proximity" components are always boolean. (Provided by XR_VERSION_1_1)
Output paths
Many devices also have subpaths for output features such as haptics. The runtime must ignore output component paths that do not follow the pattern:
- …/output/<output_identifier>[_<location>]
Standard output identifiers are:
- haptic - A haptic element like an LRA (Linear Resonant Actuator) or vibration motor.
- haptic_trigger - A haptic element located in the trigger. (Provided by XR_VERSION_1_1)
- haptic_thumb - A haptic element located in the resting place of the thumb, like under the touchpad. (Provided by XR_VERSION_1_1)
Devices which contain multiple haptic elements with the same output identifier must use a location suffix as specified above.
6.3.3. Adding input sources via extensions
Extensions may enable input source path identifiers, output source path identifiers, and component names that are not included in the core specification, subject to the following conditions:
- EXT extensions must include the _ext suffix on any identifier or component name. E.g. …/input/newidentifier_ext/newcomponent_ext
- Vendor extensions must include the vendor’s tag as a suffix on any identifier or component name. E.g. …/input/newidentifier_vendor/newcomponent_vendor (where "vendor" is replaced with the vendor’s actual extension tag.)
- Khronos (KHR) extensions may add undecorated identifier or component names.
These rules are in place to prevent extensions from adding first-class undecorated names that become de facto standards. Runtimes must ignore input source paths that do not follow the restrictions above.
Extensions may also add new location suffixes, and may do so by adding a new identifier and location combination using the appropriate suffix. E.g. …/input/newidentifier_newlocation_ext
6.4. Interaction Profile Paths
An interaction profile path identifies a collection of buttons and other input sources in a physical arrangement to allow applications and runtimes to coordinate action bindings.
Interaction profile paths are of the form:
-
/interaction_profiles/<vendor_name>/<type_name>
6.4.1. Khronos Simple Controller Profile
Path: /interaction_profiles/khr/simple_controller
Valid for user paths:
-
/user/hand/left
-
/user/hand/right
This interaction profile provides basic pose, button, and haptic support for applications with simple input needs. There is no hardware associated with the profile, and runtimes which support this profile should map the input paths provided to whatever the appropriate paths are on the actual hardware.
Supported component paths:
-
…/input/select/click
-
…/input/menu/click
-
…/input/grip/pose
-
…/input/aim/pose
-
…/output/haptic
6.4.2. ByteDance PICO Neo 3 controller Profile
(Provided by XR_VERSION_1_1)
Path: /interaction_profiles/bytedance/pico_neo3_controller
Valid for user paths:
-
/user/hand/left
-
/user/hand/right
This interaction profile represents the input sources and haptics on the ByteDance PICO Neo3 Controller.
-
On /user/hand/left only:
-
…/input/x/click
-
…/input/x/touch
-
…/input/y/click
-
…/input/y/touch
-
-
On /user/hand/right only:
-
…/input/a/click
-
…/input/a/touch
-
…/input/b/click
-
…/input/b/touch
-
-
…/input/menu/click
-
…/input/system/click (may not be available for application use)
-
…/input/trigger/click
-
…/input/trigger/value
-
…/input/trigger/touch
-
…/input/thumbstick/y
-
…/input/thumbstick/x
-
…/input/thumbstick/click
-
…/input/thumbstick/touch
-
…/input/squeeze/click
-
…/input/squeeze/value
-
…/input/grip/pose
-
…/input/aim/pose
-
…/output/haptic
6.4.3. ByteDance PICO 4 controller Profile
(Provided by XR_VERSION_1_1)
Path: /interaction_profiles/bytedance/pico4_controller
Valid for user paths:
-
/user/hand/left
-
/user/hand/right
This interaction profile represents the input sources and haptics on the ByteDance PICO 4 Controller.
-
On /user/hand/left only:
-
…/input/x/click
-
…/input/x/touch
-
…/input/y/click
-
…/input/y/touch
-
…/input/menu/click
-
-
On /user/hand/right only:
-
…/input/a/click
-
…/input/a/touch
-
…/input/b/click
-
…/input/b/touch
-
-
…/input/system/click (may not be available for application use)
-
…/input/trigger/click
-
…/input/trigger/value
-
…/input/trigger/touch
-
…/input/thumbstick/y
-
…/input/thumbstick/x
-
…/input/thumbstick/click
-
…/input/thumbstick/touch
-
…/input/squeeze/click
-
…/input/squeeze/value
-
…/input/grip/pose
-
…/input/aim/pose
-
…/output/haptic
6.4.4. ByteDance PICO G3 controller Profile
(Provided by XR_VERSION_1_1)
Path: /interaction_profiles/bytedance/pico_g3_controller
Valid for user paths:
-
/user/hand/left
-
/user/hand/right
This interaction profile represents the input sources and haptics on the ByteDance PICO G3 Controller.
-
…/input/trigger/click
-
…/input/trigger/value
-
…/input/menu/click
-
…/input/grip/pose
-
…/input/aim/pose
-
…/input/thumbstick
-
…/input/thumbstick/click
Note
When designing suggested bindings for this interaction profile, you may suggest bindings for both /user/hand/left and /user/hand/right. However, only one of them will be active at a given time, so do not design interactions that require simultaneous use of both hands.
6.4.5. Google Daydream Controller Profile
Path: /interaction_profiles/google/daydream_controller
Valid for user paths:
-
/user/hand/left
-
/user/hand/right
This interaction profile represents the input sources on the Google Daydream Controller.
Supported component paths:
-
…/input/select/click
-
…/input/trackpad/x
-
…/input/trackpad/y
-
…/input/trackpad/click
-
…/input/trackpad/touch
-
…/input/grip/pose
-
…/input/aim/pose
6.4.6. HP Mixed Reality Motion Controller Profile
(Provided by XR_VERSION_1_1)
Path: /interaction_profiles/hp/mixed_reality_controller
Valid for user paths:
-
/user/hand/left
-
/user/hand/right
This interaction profile represents the input sources and haptics on the HP Mixed Reality Motion Controller.
-
On /user/hand/left only:
-
…/input/x/click
-
…/input/y/click
-
-
On /user/hand/right only:
-
…/input/a/click
-
…/input/b/click
-
-
…/input/menu/click
-
…/input/squeeze/value
-
…/input/trigger/value
-
…/input/thumbstick/x
-
…/input/thumbstick/y
-
…/input/thumbstick/click
-
…/input/grip/pose
-
…/input/aim/pose
-
…/output/haptic
6.4.7. HTC Vive Controller Profile
Path: /interaction_profiles/htc/vive_controller
Valid for user paths:
-
/user/hand/left
-
/user/hand/right
This interaction profile represents the input sources and haptics on the Vive Controller.
Supported component paths:
-
…/input/system/click (may not be available for application use)
-
…/input/squeeze/click
-
…/input/menu/click
-
…/input/trigger/click
-
…/input/trigger/value
-
…/input/trackpad/x
-
…/input/trackpad/y
-
…/input/trackpad/click
-
…/input/trackpad/touch
-
…/input/grip/pose
-
…/input/aim/pose
-
…/output/haptic
6.4.8. HTC Vive Cosmos Controller Profile
(Provided by XR_VERSION_1_1)
Path: /interaction_profiles/htc/vive_cosmos_controller
Valid for user paths:
-
/user/hand/left
-
/user/hand/right
This interaction profile represents the input sources and haptics on the Vive Cosmos Controller.
Supported component paths:
-
On /user/hand/left only:
-
…/input/x/click
-
…/input/y/click
-
…/input/menu/click
-
-
On /user/hand/right only:
-
…/input/a/click
-
…/input/b/click
-
…/input/system/click (may not be available for application use)
-
-
…/input/shoulder/click
-
…/input/squeeze/click
-
…/input/trigger/click
-
…/input/trigger/value
-
…/input/thumbstick/x
-
…/input/thumbstick/y
-
…/input/thumbstick/click
-
…/input/thumbstick/touch
-
…/input/grip/pose
-
…/input/aim/pose
-
…/output/haptic
6.4.9. HTC Vive Focus 3 Controller Profile
(Provided by XR_VERSION_1_1)
Path: /interaction_profiles/htc/vive_focus3_controller
Valid for user paths:
-
/user/hand/left
-
/user/hand/right
This interaction profile represents the input sources and haptics on the Vive Focus 3 Controller.
-
On /user/hand/left only:
-
…/input/x/click
-
…/input/y/click
-
…/input/menu/click
-
-
On /user/hand/right only:
-
…/input/a/click
-
…/input/b/click
-
…/input/system/click (may not be available for application use)
-
-
…/input/squeeze/click
-
…/input/squeeze/touch
-
…/input/squeeze/value
-
…/input/trigger/click
-
…/input/trigger/touch
-
…/input/trigger/value
-
…/input/thumbstick/x
-
…/input/thumbstick/y
-
…/input/thumbstick/click
-
…/input/thumbstick/touch
-
…/input/thumbrest/touch
-
…/input/grip/pose
-
…/input/aim/pose
-
…/output/haptic
6.4.10. HTC Vive Pro Profile
Path: /interaction_profiles/htc/vive_pro
Valid for user paths:
- /user/head
This interaction profile represents the input sources on the Vive Pro headset.
Supported component paths:
- …/input/system/click (may not be available for application use)
- …/input/volume_up/click
- …/input/volume_down/click
- …/input/mute_mic/click
6.4.11. Magic Leap 2 Controller Profile
(Provided by XR_VERSION_1_1)
Path: /interaction_profiles/ml/ml2_controller
Valid for user paths:
- /user/hand/left
- /user/hand/right
This interaction profile represents the input sources and haptics on the Magic Leap 2 controller.
Supported component paths:
- …/input/menu/click
- …/input/home/click (may not be available for application use)
- …/input/trigger/click
- …/input/trigger/value
- …/input/trackpad/y
- …/input/trackpad/x
- …/input/trackpad/click
- …/input/trackpad/force
- …/input/trackpad/touch
- …/input/aim/pose
- …/input/grip/pose
- …/input/shoulder/click
- …/output/haptic
6.4.12. Microsoft Mixed Reality Motion Controller Profile
Path: /interaction_profiles/microsoft/motion_controller
Valid for user paths:
- /user/hand/left
- /user/hand/right
This interaction profile represents the input sources and haptics on the Microsoft Mixed Reality Controller.
Supported component paths:
- …/input/menu/click
- …/input/squeeze/click
- …/input/trigger/value
- …/input/thumbstick/x
- …/input/thumbstick/y
- …/input/thumbstick/click
- …/input/trackpad/x
- …/input/trackpad/y
- …/input/trackpad/click
- …/input/trackpad/touch
- …/input/grip/pose
- …/input/aim/pose
- …/output/haptic
6.4.13. Microsoft Xbox Controller Profile
Path: /interaction_profiles/microsoft/xbox_controller
Valid for user paths:
- /user/gamepad
This interaction profile represents the input sources and haptics on the Microsoft Xbox Controller.
Supported component paths:
- …/input/menu/click
- …/input/view/click
- …/input/a/click
- …/input/b/click
- …/input/x/click
- …/input/y/click
- …/input/dpad_down/click
- …/input/dpad_right/click
- …/input/dpad_up/click
- …/input/dpad_left/click
- …/input/shoulder_left/click
- …/input/shoulder_right/click
- …/input/thumbstick_left/click
- …/input/thumbstick_right/click
- …/input/trigger_left/value
- …/input/trigger_right/value
- …/input/thumbstick_left/x
- …/input/thumbstick_left/y
- …/input/thumbstick_right/x
- …/input/thumbstick_right/y
- …/output/haptic_left
- …/output/haptic_right
- …/output/haptic_left_trigger
- …/output/haptic_right_trigger
6.4.14. Oculus Go Controller Profile
Path: /interaction_profiles/oculus/go_controller
Valid for user paths:
- /user/hand/left
- /user/hand/right
This interaction profile represents the input sources on the Oculus Go controller.
Supported component paths:
- …/input/system/click (may not be available for application use)
- …/input/trigger/click
- …/input/back/click
- …/input/trackpad/x
- …/input/trackpad/y
- …/input/trackpad/click
- …/input/trackpad/touch
- …/input/grip/pose
- …/input/aim/pose
6.4.15. Oculus Touch Controller Profile
Path: /interaction_profiles/oculus/touch_controller
Valid for user paths:
- /user/hand/left
- /user/hand/right
This interaction profile represents the input sources and haptics on the Oculus Touch controller.
Supported component paths:
- On /user/hand/left only:
  - …/input/x/click
  - …/input/x/touch
  - …/input/y/click
  - …/input/y/touch
  - …/input/menu/click
- On /user/hand/right only:
  - …/input/a/click
  - …/input/a/touch
  - …/input/b/click
  - …/input/b/touch
  - …/input/system/click (may not be available for application use)
- …/input/squeeze/value
- …/input/trigger/value
- …/input/trigger/touch
- …/input/trigger/proximity (Provided by XR_VERSION_1_1)
- …/input/thumb_resting_surfaces/proximity (Provided by XR_VERSION_1_1)
- …/input/thumbstick/x
- …/input/thumbstick/y
- …/input/thumbstick/click
- …/input/thumbstick/touch
- …/input/thumbrest/touch
- …/input/grip/pose
- …/input/aim/pose
- …/output/haptic
6.4.16. Meta Touch Pro Controller Profile
(Provided by XR_VERSION_1_1)
Path: /interaction_profiles/meta/touch_pro_controller
Valid for user paths:
- /user/hand/left
- /user/hand/right
This interaction profile represents the input sources and haptics on the Meta Touch Pro controller.
Supported component paths:
- On /user/hand/left only:
  - …/input/x/click
  - …/input/x/touch
  - …/input/y/click
  - …/input/y/touch
  - …/input/menu/click
- On /user/hand/right only:
  - …/input/a/click
  - …/input/a/touch
  - …/input/b/click
  - …/input/b/touch
  - …/input/system/click (may not be available for application use)
- …/input/squeeze/value
- …/input/trigger/value
- …/input/trigger/touch
- …/input/trigger/proximity
- …/input/trigger_curl/value
- …/input/trigger_slide/value
- …/input/thumb_resting_surfaces/proximity
- …/input/thumbstick/x
- …/input/thumbstick/y
- …/input/thumbstick/click
- …/input/thumbstick/touch
- …/input/thumbrest/touch
- …/input/thumbrest/force
- …/input/stylus/force
- …/input/grip/pose
- …/input/aim/pose
- …/output/haptic
- …/output/haptic_trigger
- …/output/haptic_thumb
6.4.17. Meta Touch Plus Controller Profile
(Provided by XR_VERSION_1_1)
Path: /interaction_profiles/meta/touch_plus_controller
Valid for user paths:
- /user/hand/left
- /user/hand/right
This interaction profile represents the input sources and haptics on the Meta Touch Plus controller.
Supported component paths:
- On /user/hand/left only:
  - …/input/x/click
  - …/input/x/touch
  - …/input/y/click
  - …/input/y/touch
  - …/input/menu/click
- On /user/hand/right only:
  - …/input/a/click
  - …/input/a/touch
  - …/input/b/click
  - …/input/b/touch
  - …/input/system/click (may not be available for application use)
- …/input/squeeze/value
- …/input/trigger/value
- …/input/trigger/touch
- …/input/trigger/force
- …/input/trigger/proximity
- …/input/trigger_curl/value
- …/input/trigger_slide/value
- …/input/thumb_resting_surfaces/proximity
- …/input/thumbstick/x
- …/input/thumbstick/y
- …/input/thumbstick/click
- …/input/thumbstick/touch
- …/input/thumbrest/touch
- …/input/grip/pose
- …/input/aim/pose
- …/output/haptic
6.4.18. Meta Touch Controller (Rift CV1) Profile
(Provided by XR_VERSION_1_1)
Path: /interaction_profiles/meta/touch_controller_rift_cv1
Valid for user paths:
- /user/hand/left
- /user/hand/right
This interaction profile represents the input sources and haptics on the Oculus Touch controller and is a legacy profile added to specifically represent the controller shipped with the Rift CV1.
Supported component paths:
- On /user/hand/left only:
  - …/input/x/click
  - …/input/x/touch
  - …/input/y/click
  - …/input/y/touch
  - …/input/menu/click
- On /user/hand/right only:
  - …/input/a/click
  - …/input/a/touch
  - …/input/b/click
  - …/input/b/touch
  - …/input/system/click (may not be available for application use)
- …/input/squeeze/value
- …/input/trigger/value
- …/input/trigger/touch
- …/input/trigger/proximity
- …/input/thumb_resting_surfaces/proximity
- …/input/thumbstick/x
- …/input/thumbstick/y
- …/input/thumbstick/click
- …/input/thumbstick/touch
- …/input/thumbrest/touch
- …/input/grip/pose
- …/input/aim/pose
- …/output/haptic
6.4.19. Meta Touch Controller (Rift S / Quest 1) Profile
(Provided by XR_VERSION_1_1)
Path: /interaction_profiles/meta/touch_controller_quest_1_rift_s
Valid for user paths:
- /user/hand/left
- /user/hand/right
This interaction profile represents the input sources and haptics on the Oculus Touch controller and is a legacy profile added to specifically represent the controller shipped with the Rift S and Quest 1.
Supported component paths:
- On /user/hand/left only:
  - …/input/x/click
  - …/input/x/touch
  - …/input/y/click
  - …/input/y/touch
  - …/input/menu/click
- On /user/hand/right only:
  - …/input/a/click
  - …/input/a/touch
  - …/input/b/click
  - …/input/b/touch
  - …/input/system/click (may not be available for application use)
- …/input/squeeze/value
- …/input/trigger/value
- …/input/trigger/touch
- …/input/trigger/proximity
- …/input/thumb_resting_surfaces/proximity
- …/input/thumbstick/x
- …/input/thumbstick/y
- …/input/thumbstick/click
- …/input/thumbstick/touch
- …/input/grip/pose
- …/input/aim/pose
- …/output/haptic
6.4.20. Meta Touch Controller (Quest 2) Profile
(Provided by XR_VERSION_1_1)
Path: /interaction_profiles/meta/touch_controller_quest_2
Valid for user paths:
- /user/hand/left
- /user/hand/right
This interaction profile represents the input sources and haptics on the Oculus Touch controller and is a legacy profile added to specifically represent the controller shipped with the Quest 2.
Supported component paths:
- On /user/hand/left only:
  - …/input/x/click
  - …/input/x/touch
  - …/input/y/click
  - …/input/y/touch
  - …/input/menu/click
- On /user/hand/right only:
  - …/input/a/click
  - …/input/a/touch
  - …/input/b/click
  - …/input/b/touch
  - …/input/system/click (may not be available for application use)
- …/input/squeeze/value
- …/input/trigger/value
- …/input/trigger/touch
- …/input/trigger/proximity
- …/input/thumb_resting_surfaces/proximity
- …/input/thumbstick/x
- …/input/thumbstick/y
- …/input/thumbstick/click
- …/input/thumbstick/touch
- …/input/thumbrest/touch
- …/input/grip/pose
- …/input/aim/pose
- …/output/haptic
6.4.21. Samsung Odyssey Controller Profile
(Provided by XR_VERSION_1_1)
Path: /interaction_profiles/samsung/odyssey_controller
Valid for user paths:
- /user/hand/left
- /user/hand/right
This interaction profile represents the input sources and haptics on the Samsung Odyssey Controller. It is exactly the same, with the exception of the name of the interaction profile, as the Microsoft Mixed Reality Controller interaction profile. It enables the application to differentiate the newer form factor of motion controller released with the Samsung Odyssey headset. It enables the application to customize the appearance and experience of the controller differently from the original mixed reality motion controller.
Supported component paths:
- …/input/menu/click
- …/input/squeeze/click
- …/input/trigger/value
- …/input/thumbstick/x
- …/input/thumbstick/y
- …/input/thumbstick/click
- …/input/trackpad/x
- …/input/trackpad/y
- …/input/trackpad/click
- …/input/trackpad/touch
- …/input/grip/pose
- …/input/aim/pose
- …/output/haptic
6.4.22. Valve Index Controller Profile
Path: /interaction_profiles/valve/index_controller
Valid for user paths:
- /user/hand/left
- /user/hand/right
This interaction profile represents the input sources and haptics on the Valve Index controller.
Supported component paths:
- …/input/system/click (may not be available for application use)
- …/input/system/touch (may not be available for application use)
- …/input/a/click
- …/input/a/touch
- …/input/b/click
- …/input/b/touch
- …/input/squeeze/value
- …/input/squeeze/force
- …/input/trigger/click
- …/input/trigger/value
- …/input/trigger/touch
- …/input/thumbstick/x
- …/input/thumbstick/y
- …/input/thumbstick/click
- …/input/thumbstick/touch
- …/input/trackpad/x
- …/input/trackpad/y
- …/input/trackpad/force
- …/input/trackpad/touch
- …/input/grip/pose
- …/input/aim/pose
- …/output/haptic
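A profile's component paths become usable only after the application suggests bindings for them, where each full binding path is a user path concatenated with a component subpath (e.g. /user/hand/left plus /input/trigger/value). The sketch below shows just that concatenation step; the make_binding_path helper is hypothetical, and a real application would pass the resulting string to xrStringToPath before calling xrSuggestInteractionProfileBindings.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: join a user path (e.g. "/user/hand/left") with a
 * component subpath (e.g. "/input/trigger/value") into the full binding
 * path string that would be handed to xrStringToPath.
 * Returns 0 on success, -1 if the buffer is too small. */
static int make_binding_path(char *out, size_t cap,
                             const char *user_path, const char *subpath) {
    int n = snprintf(out, cap, "%s%s", user_path, subpath);
    return (n > 0 && (size_t)n < cap) ? 0 : -1;
}
```

For example, joining "/user/hand/left" with "/input/trigger/value" yields "/user/hand/left/input/trigger/value", the form used throughout the suggested-bindings API.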
7. Spaces
Across both virtual reality and augmented reality, XR applications have a core need to map the location of virtual objects to the corresponding real-world locations where they will be rendered. Spaces allow applications to explicitly create and specify the frames of reference in which they choose to track the real world, and then determine how those frames of reference move relative to one another over time.
XR_DEFINE_HANDLE(XrSpace)
Spaces are represented by XrSpace handles, which the application creates and then uses in API calls. Whenever an application calls a function that returns coordinates, it provides an XrSpace to specify the frame of reference in which those coordinates will be expressed. Similarly, when providing coordinates to a function, the application specifies which XrSpace the runtime should use to interpret those coordinates.
OpenXR defines a set of well-known reference spaces that applications
use to bootstrap their spatial reasoning.
These reference spaces are: VIEW, LOCAL, LOCAL_FLOOR, and STAGE.
Each reference space has a well-defined meaning, which establishes where its
origin is positioned and how its axes are oriented.
Runtimes whose tracking systems improve their understanding of the world
over time may track spaces independently.
For example, even though a LOCAL space and a STAGE space each map their
origin to a static position in the world, a runtime with an inside-out
tracking system may introduce slight adjustments to the origin of each
space on a continuous basis to keep each origin in place.
Beyond well-known reference spaces, runtimes expose other independently-tracked spaces, such as a pose action space that tracks the pose of a motion controller over time.
When one or both spaces are tracking a dynamic object, passing in an updated
time to xrLocateSpace each frame will result in an updated relative
pose.
For example, the location of the left hand’s pose action space in the
STAGE reference space will change each frame as the user’s hand moves
relative to the stage’s predefined origin on the floor.
In other XR APIs, it is common to report the "pose" of an object relative to
some presumed underlying global space.
This API is careful to not explicitly define such an underlying global
space, because it does not apply to all systems.
Some systems will support no STAGE space, while others may support a
STAGE space that switches between various physical stages with dynamic
availability.
To satisfy this wide variability, "poses" are always described as the
relationship between two spaces.
Some devices improve their understanding of the world as the device is used. The location returned by xrLocateSpace in later frames may change over time, even for spaces that track static objects, as either the target space or base space adjusts its origin.
Composition layers submitted by the application include an XrSpace for
the runtime to use to position that layer over time.
Composition layers whose XrSpace is relative to the VIEW reference
space are implicitly "head-locked", even if they may not be "display-locked"
for non-head-mounted form factors.
7.1. Reference Spaces
The XrReferenceSpaceType enumeration is defined as:
typedef enum XrReferenceSpaceType {
XR_REFERENCE_SPACE_TYPE_VIEW = 1,
XR_REFERENCE_SPACE_TYPE_LOCAL = 2,
XR_REFERENCE_SPACE_TYPE_STAGE = 3,
// Provided by XR_VERSION_1_1
XR_REFERENCE_SPACE_TYPE_LOCAL_FLOOR = 1000426000,
// Provided by XR_MSFT_unbounded_reference_space
XR_REFERENCE_SPACE_TYPE_UNBOUNDED_MSFT = 1000038000,
// Provided by XR_VARJO_foveated_rendering
XR_REFERENCE_SPACE_TYPE_COMBINED_EYE_VARJO = 1000121000,
// Provided by XR_ML_localization_map
XR_REFERENCE_SPACE_TYPE_LOCALIZATION_MAP_ML = 1000139000,
// Provided by XR_EXT_local_floor
XR_REFERENCE_SPACE_TYPE_LOCAL_FLOOR_EXT = XR_REFERENCE_SPACE_TYPE_LOCAL_FLOOR,
XR_REFERENCE_SPACE_TYPE_MAX_ENUM = 0x7FFFFFFF
} XrReferenceSpaceType;
Brief introductions to core reference space types follow. Each has full requirements in a subsequent section, linked from these descriptions.
An XrSpace handle for a reference space is created using xrCreateReferenceSpace, by specifying the chosen reference space type and a pose within the natural reference frame defined for that reference space type.
Runtimes implement well-known reference spaces from XrReferenceSpaceType if they support tracking of that kind. Available reference space types are indicated by xrEnumerateReferenceSpaces. Note that other spaces can be created as well, such as pose action spaces created by xrCreateActionSpace, which are not enumerated by that API.
7.1.1. View Reference Space
The XR_REFERENCE_SPACE_TYPE_VIEW or VIEW reference space tracks the
view origin used to generate view transforms for the primary viewer (or
centroid of view origins if stereo), with +Y up, +X to the right, and -Z
forward.
This space points in the forward direction for the viewer without
incorporating the user’s eye orientation, and is not gravity-aligned.
The VIEW space is primarily useful when projecting from the user’s
perspective into another space to obtain a targeting ray, or when rendering
small head-locked content such as a reticle.
Content rendered in the VIEW space will stay at a fixed point on
head-mounted displays and may be uncomfortable to view if too large.
To obtain the ideal view and projection transforms to use each frame for
rendering world content, applications should call xrLocateViews
instead of using this space.
7.1.2. Local Reference Space
The XR_REFERENCE_SPACE_TYPE_LOCAL or LOCAL reference space
establishes a world-locked origin, gravity-aligned to exclude pitch and
roll, with +Y up, +X to the right, and -Z forward.
This space locks in both its initial position and orientation, which the
runtime may define to be either the initial position at application launch
or some other calibrated zero position.
When a user needs to recenter the LOCAL space, a runtime may offer some
system-level recentering interaction that is transparent to the application,
but which causes the current leveled head space to become the new LOCAL
space.
When such a recentering occurs, the runtime must queue the
XrEventDataReferenceSpaceChangePending event, with the recentered
LOCAL space origin only taking effect for xrLocateSpace or
xrLocateViews calls whose XrTime parameter is greater than or
equal to the XrEventDataReferenceSpaceChangePending::changeTime
in that event.
When views, controllers or other spaces experience tracking loss relative to
the LOCAL space, runtimes should continue to provide inferred or
last-known position and orientation values.
These inferred poses can, for example, be based on neck model updates,
inertial dead reckoning, or a last-known position, so long as it is still
reasonable for the application to use that pose.
While a runtime is providing position data, it must continue to set
XR_SPACE_LOCATION_POSITION_VALID_BIT and
XR_VIEW_STATE_POSITION_VALID_BIT but it can clear
XR_SPACE_LOCATION_POSITION_TRACKED_BIT and
XR_VIEW_STATE_POSITION_TRACKED_BIT to indicate that the position is
inferred or last-known in this way.
When tracking is recovered, runtimes should snap the pose of other spaces
back into position relative to the original origin of LOCAL space.
7.1.3. Stage Reference Space
The STAGE reference space is a runtime-defined flat, rectangular space
that is empty and can be walked around on.
The origin is on the floor at the center of the rectangle, with +Y up, and
the X and Z axes aligned with the rectangle edges.
The runtime may not be able to locate spaces relative to the STAGE
reference space if the user has not yet defined one within the
runtime-specific UI.
Applications can use xrGetReferenceSpaceBoundsRect to determine the
extents of the STAGE reference space’s XZ bounds rectangle, if defined.
The STAGE space is useful when an application needs to render
standing-scale content (no bounds) or room-scale content (with bounds)
that is relative to the physical floor.
When the user redefines the origin or bounds of the current STAGE space,
or the runtime otherwise switches to a new STAGE space definition, the
runtime must queue the XrEventDataReferenceSpaceChangePending event,
with the new STAGE space origin only taking effect for xrLocateSpace
or xrLocateViews calls whose XrTime parameter is greater than
or equal to the
XrEventDataReferenceSpaceChangePending::changeTime in that
event.
When views, controllers, or other spaces experience tracking loss relative
to the STAGE space, runtimes should continue to provide inferred or
last-known position and orientation values.
These inferred poses can, for example, be based on neck model updates,
inertial dead reckoning, or a last-known position, so long as it is still
reasonable for the application to use that pose.
While a runtime is providing position data, it must continue to set
XR_SPACE_LOCATION_POSITION_VALID_BIT and
XR_VIEW_STATE_POSITION_VALID_BIT but it can clear
XR_SPACE_LOCATION_POSITION_TRACKED_BIT and
XR_VIEW_STATE_POSITION_TRACKED_BIT to indicate that the position is
inferred or last-known in this way.
When tracking is recovered, runtimes should snap the pose of other spaces
back into position relative to the original origin of the STAGE space.
7.1.4. Local Floor Reference Space
Local floor reference space, indicated by
XR_REFERENCE_SPACE_TYPE_LOCAL_FLOOR, is closely related to the LOCAL
reference space.
It always aligns with the LOCAL space, and matches it in X and Z position.
However, unlike the LOCAL space, the LOCAL_FLOOR space has its Y axis
origin on the runtime’s best estimate of the floor level under the origin of
the LOCAL space.
The location of the origin of the LOCAL_FLOOR space must match the
LOCAL space in the X and Z coordinates but not in the Y coordinate.
The orientation of the LOCAL_FLOOR space must match the LOCAL space.
The runtime must establish the Y axis origin at its best estimate of the
floor level under the origin of the LOCAL space, subject to requirements
under the following conditions to match the floor level of the STAGE
space.
If all of the following conditions are true, the Y axis origin of the
LOCAL_FLOOR space must match the Y axis origin of the STAGE space:
- the STAGE space is supported
- the location of the LOCAL space relative to the STAGE space has valid position (XR_SPACE_LOCATION_POSITION_VALID_BIT is set)
- bounds are available from xrGetReferenceSpaceBoundsRect for the STAGE space
- the position of the LOCAL space relative to the STAGE space is within the STAGE space XZ bounds
That is, if there is a stage with bounds, and if the local space and thus the local floor is logically within the stage, the local floor and the stage share the same floor level.
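The containment test in the last condition can be sketched directly from the definition of the extents returned by xrGetReferenceSpaceBoundsRect: a rectangle centered on the STAGE origin, with width along X and height along Z. The helper name below is hypothetical, and the extent type mirrors XrExtent2Df locally so the sketch stands alone; in a real application the point comes from locating the LOCAL space in the STAGE space.

```c
#include <stdbool.h>

/* Mirrors XrExtent2Df: width spans the X axis, height spans the Z axis,
 * with the rectangle centered on the space origin. */
typedef struct { float width; float height; } Extent2Df;

/* Hypothetical helper: true when the point (x, z) lies within the
 * axis-aligned XZ bounds rectangle centered at the origin. */
static bool within_stage_bounds_xz(float x, float z, Extent2Df bounds) {
    float half_w = bounds.width * 0.5f;
    float half_h = bounds.height * 0.5f;
    return x >= -half_w && x <= half_w && z >= -half_h && z <= half_h;
}
```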
When the origin of the LOCAL space is changed in orientation or XZ
position, the origin of the LOCAL_FLOOR space must also change
accordingly.
When a change in origin of the LOCAL_FLOOR space occurs, the runtime must
queue the XrEventDataReferenceSpaceChangePending event, with the
changed LOCAL_FLOOR space origin only taking effect for
xrLocateSpace or xrLocateViews calls whose XrTime
parameter is greater than or equal to the
XrEventDataReferenceSpaceChangePending::changeTime in that
event.
The xrGetReferenceSpaceBoundsRect function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrGetReferenceSpaceBoundsRect(
XrSession session,
XrReferenceSpaceType referenceSpaceType,
XrExtent2Df* bounds);
XR systems may have limited real world spatial ranges in which users can freely move around while remaining tracked. Applications sometimes wish to query these boundaries and alter application behavior or content placement to ensure the user can complete the experience while remaining within the boundary. Applications can query this information using xrGetReferenceSpaceBoundsRect.
When called, xrGetReferenceSpaceBoundsRect should return the extents
of a rectangle that is clear of obstacles down to the floor, within which
the user can freely move while remaining tracked, if available for that
reference space.
The returned extent represents the dimensions of an axis-aligned bounding
box where the XrExtent2Df::width and
XrExtent2Df::height fields correspond to the X and Z axes of the
provided space, with the extents centered at the origin of the space.
Not all systems or spaces support boundaries.
If a runtime is unable to provide bounds for a given space,
XR_SPACE_BOUNDS_UNAVAILABLE must be returned and all fields of
bounds must be set to 0.
The returned extents are expressed relative to the natural origin of the provided XrReferenceSpaceType and must not incorporate any origin offsets specified by the application during calls to xrCreateReferenceSpace.
The runtime must return XR_ERROR_REFERENCE_SPACE_UNSUPPORTED if the
XrReferenceSpaceType passed in referenceSpaceType is not
supported by this session.
When a runtime will begin operating with updated space bounds, the runtime must queue a corresponding XrEventDataReferenceSpaceChangePending event.
The XrEventDataReferenceSpaceChangePending structure is defined as:
// Provided by XR_VERSION_1_0
typedef struct XrEventDataReferenceSpaceChangePending {
XrStructureType type;
const void* next;
XrSession session;
XrReferenceSpaceType referenceSpaceType;
XrTime changeTime;
XrBool32 poseValid;
XrPosef poseInPreviousSpace;
} XrEventDataReferenceSpaceChangePending;
The XrEventDataReferenceSpaceChangePending event is sent to the application to notify it that the origin (and perhaps the bounds) of a reference space is changing. This may occur due to the user recentering the space explicitly, or the runtime otherwise switching to a different space definition.
The reference space change must only take effect for xrLocateSpace or
xrLocateViews calls whose XrTime parameter is greater than or
equal to the changeTime provided in that event.
Runtimes should provide a changeTime to applications that allows for
a deep render pipeline to present frames that are already in flight using
the previous definition of the space.
Runtimes should choose a changeTime that is midway between the
XrFrameState::predictedDisplayTime of future frames to avoid
threshold issues with applications that calculate future frame times using
XrFrameState::predictedDisplayPeriod.
The poseInPreviousSpace provided here must only describe the change
in the natural origin of the reference space and must not incorporate any
origin offsets specified by the application during calls to
xrCreateReferenceSpace.
If the runtime does not know the location of the space’s new origin relative
to its previous origin, poseValid must be false, and the position and
orientation of poseInPreviousSpace are undefined.
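The changeTime contract above can be sketched as two small helpers: one implementing the gating comparison applied to locate calls, and one showing the recommended "midway" choice of changeTime between predicted display times. Both function names are hypothetical; XrTime is OpenXR's signed 64-bit nanosecond timestamp.

```c
#include <stdbool.h>
#include <stdint.h>

typedef int64_t XrTime;  /* as in OpenXR: signed 64-bit nanoseconds */

/* Per the event's contract, the changed origin applies only to
 * xrLocateSpace/xrLocateViews calls whose time is >= changeTime. */
static bool uses_new_origin(XrTime locateTime, XrTime changeTime) {
    return locateTime >= changeTime;
}

/* A runtime-side choice of changeTime midway between two consecutive
 * predicted display times, avoiding threshold issues for applications
 * that extrapolate frame times from predictedDisplayPeriod. */
static XrTime midway_change_time(XrTime predictedDisplayTime,
                                 XrTime predictedDisplayPeriod) {
    return predictedDisplayTime + predictedDisplayPeriod / 2;
}
```

With a midway changeTime, a frame displayed at the earlier predicted time still uses the old origin, while the next frame uses the new one.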
7.2. Action Spaces
An XrSpace handle for a pose action is created using xrCreateActionSpace, by specifying the chosen pose action and a pose within the action’s natural reference frame.
Runtimes support suggested pose action bindings to well-known user paths with …/pose subpaths if they support tracking for that particular identifier.
For definitions of these well-known pose device paths, see the discussion of device input subpaths in the Semantic Paths chapter.
7.2.1. Action Spaces Lifetime
XrSpace handles created for a pose action must be unlocatable unless the action set that contains the corresponding pose action was set as active via the most recent xrSyncActions call. If the underlying device that is active for the action changes, the device this space is tracking must only change to track the new device when xrSyncActions is called.
If xrLocateSpace is called with an unlocatable action space, the
implementation must return no position or orientation and both
XR_SPACE_LOCATION_POSITION_VALID_BIT and
XR_SPACE_LOCATION_ORIENTATION_VALID_BIT must be unset.
If XrSpaceVelocity is also supplied,
XR_SPACE_VELOCITY_LINEAR_VALID_BIT and
XR_SPACE_VELOCITY_ANGULAR_VALID_BIT must be unset.
If xrLocateViews is called with an unlocatable action space, the
implementation must return no position or orientation and both
XR_VIEW_STATE_POSITION_VALID_BIT and
XR_VIEW_STATE_ORIENTATION_VALID_BIT must be unset.
7.3. Space Lifecycle
There are a small set of core APIs that allow applications to reason about reference spaces, action spaces, and their relative locations.
7.3.1. xrEnumerateReferenceSpaces
The xrEnumerateReferenceSpaces function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrEnumerateReferenceSpaces(
XrSession session,
uint32_t spaceCapacityInput,
uint32_t* spaceCountOutput,
XrReferenceSpaceType* spaces);
Enumerates the set of reference space types that this runtime supports for a given session. Runtimes must always return identical buffer contents from this enumeration for the lifetime of the session.
If a session enumerates support for a given reference space type, calls to xrCreateReferenceSpace must succeed for that session, with any transient unavailability of poses expressed later during calls to xrLocateSpace.
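Like other OpenXR enumerations, this function follows the two-call idiom: query the required count with zero capacity, allocate a buffer, then call again to fill it. The sketch below demonstrates the pattern with a stand-in enumerate function so it is self-contained and runnable; a real application would call xrEnumerateReferenceSpaces on its XrSession and check XrResult values instead.

```c
#include <stdint.h>
#include <stdlib.h>

typedef enum { REF_SPACE_VIEW = 1, REF_SPACE_LOCAL = 2, REF_SPACE_STAGE = 3 } RefSpaceType;

/* Stand-in for xrEnumerateReferenceSpaces: always writes the required
 * count; fills the buffer only when the capacity is sufficient. */
static int enumerate_ref_spaces(uint32_t capacityInput, uint32_t *countOutput,
                                RefSpaceType *spaces) {
    static const RefSpaceType supported[] = { REF_SPACE_VIEW, REF_SPACE_LOCAL, REF_SPACE_STAGE };
    const uint32_t count = 3;
    *countOutput = count;
    if (capacityInput == 0) return 0;      /* size query only */
    if (capacityInput < count) return -1;  /* analogue of XR_ERROR_SIZE_INSUFFICIENT */
    for (uint32_t i = 0; i < count; ++i) spaces[i] = supported[i];
    return 0;
}

/* Two-call idiom: query the count, allocate, then fill.
 * Returns a buffer the caller frees, or NULL on failure. */
static RefSpaceType *get_ref_spaces(uint32_t *count) {
    if (enumerate_ref_spaces(0, count, NULL) != 0 || *count == 0) return NULL;
    RefSpaceType *buf = malloc(*count * sizeof *buf);
    if (!buf || enumerate_ref_spaces(*count, count, buf) != 0) {
        free(buf);
        return NULL;
    }
    return buf;
}
```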
7.3.2. xrCreateReferenceSpace
The xrCreateReferenceSpace function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrCreateReferenceSpace(
XrSession session,
const XrReferenceSpaceCreateInfo* createInfo,
XrSpace* space);
Creates an XrSpace handle based on a chosen reference space. The application can provide an XrPosef to define the position and orientation of the new space’s origin within the natural reference frame of the reference space.
Multiple XrSpace handles may exist simultaneously, up to some limit imposed by the runtime. The XrSpace handle must be eventually freed via the xrDestroySpace function.
The runtime must return XR_ERROR_REFERENCE_SPACE_UNSUPPORTED if the
given reference space type is not supported by this session.
The XrReferenceSpaceCreateInfo structure is defined as:
typedef struct XrReferenceSpaceCreateInfo {
XrStructureType type;
const void* next;
XrReferenceSpaceType referenceSpaceType;
XrPosef poseInReferenceSpace;
} XrReferenceSpaceCreateInfo;
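Filling this structure usually means choosing the space type and an identity pose, which places the new space's origin at the reference space's natural origin. The sketch below mirrors the relevant struct layouts locally so it stands alone; a real application includes <openxr/openxr.h>, sets type to XR_TYPE_REFERENCE_SPACE_CREATE_INFO, and passes the struct to xrCreateReferenceSpace.

```c
/* Local mirrors of the OpenXR math types (see XrPosef in the spec). */
typedef struct { float x, y, z, w; } Quaternionf;  /* mirrors XrQuaternionf */
typedef struct { float x, y, z; } Vector3f;        /* mirrors XrVector3f */
typedef struct { Quaternionf orientation; Vector3f position; } Posef; /* mirrors XrPosef */

/* Mirrors XrReferenceSpaceCreateInfo, minus the type/next header. */
typedef struct {
    int referenceSpaceType;        /* e.g. 2 == XR_REFERENCE_SPACE_TYPE_LOCAL */
    Posef poseInReferenceSpace;
} RefSpaceCreateInfo;

/* Build a create-info for a LOCAL space whose origin coincides with the
 * natural LOCAL origin: identity orientation (w = 1) and zero offset. */
static RefSpaceCreateInfo local_space_at_origin(void) {
    RefSpaceCreateInfo info;
    info.referenceSpaceType = 2; /* XR_REFERENCE_SPACE_TYPE_LOCAL */
    info.poseInReferenceSpace.orientation = (Quaternionf){ 0.0f, 0.0f, 0.0f, 1.0f };
    info.poseInReferenceSpace.position = (Vector3f){ 0.0f, 0.0f, 0.0f };
    return info;
}
```

A non-identity poseInReferenceSpace is useful when an application wants a fixed offset from the natural origin, such as an anchor one meter forward of LOCAL.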
7.3.3. xrCreateActionSpace
The xrCreateActionSpace function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrCreateActionSpace(
XrSession session,
const XrActionSpaceCreateInfo* createInfo,
XrSpace* space);
Creates an XrSpace handle based on a chosen pose action. The application can provide an XrPosef to define the position and orientation of the new space’s origin within the natural reference frame of the action space.
Multiple XrSpace handles may exist simultaneously, up to some limit imposed by the runtime. The XrSpace handle must be eventually freed via the xrDestroySpace function or by destroying the parent XrSession handle. See Action Spaces Lifetime for details.
The runtime must return XR_ERROR_ACTION_TYPE_MISMATCH if the action
provided in XrActionSpaceCreateInfo::action is not of type
XR_ACTION_TYPE_POSE_INPUT.
The XrActionSpaceCreateInfo structure is defined as:
typedef struct XrActionSpaceCreateInfo {
XrStructureType type;
const void* next;
XrAction action;
XrPath subactionPath;
XrPosef poseInActionSpace;
} XrActionSpaceCreateInfo;
7.3.4. xrDestroySpace
The xrDestroySpace function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrDestroySpace(
XrSpace space);
XrSpace handles are destroyed using xrDestroySpace. The runtime may still use this space if there are active dependencies (e.g., compositions in progress).
7.4. Locating Spaces
Applications use the xrLocateSpace function to find the pose of an
XrSpace’s origin within a base XrSpace at a given historical or
predicted time.
If an application wants to know the velocity of the space’s origin, it can
chain an XrSpaceVelocity structure to the next pointer of the
XrSpaceLocation structure when calling the xrLocateSpace
function.
Applications should inspect the output XrSpaceLocationFlagBits and
XrSpaceVelocityFlagBits to determine the validity and tracking status
of the components of the location.
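The chaining step mentioned above can be sketched with simplified local mirrors of the two output structures: the application links the velocity struct into the location's next pointer before the call, and the runtime fills both on return. The struct contents here are reduced for illustration; real code uses the full XrSpaceLocation and XrSpaceVelocity definitions with the XR_TYPE_SPACE_LOCATION and XR_TYPE_SPACE_VELOCITY structure types.

```c
#include <stddef.h>

typedef struct { float x, y, z; } Vector3f;  /* mirrors XrVector3f */

/* Simplified mirror of XrSpaceVelocity. */
typedef struct {
    int   type;                  /* stands in for XR_TYPE_SPACE_VELOCITY */
    void *next;
    Vector3f linearVelocity;
    Vector3f angularVelocity;
} SpaceVelocity;

/* Simplified mirror of XrSpaceLocation. */
typedef struct {
    int   type;                  /* stands in for XR_TYPE_SPACE_LOCATION */
    void *next;                  /* chain point for SpaceVelocity */
} SpaceLocation;

/* Link the velocity struct into the location's next chain, as done
 * before calling xrLocateSpace when velocity output is wanted. */
static void chain_velocity(SpaceLocation *loc, SpaceVelocity *vel) {
    vel->next = NULL;            /* velocity terminates the chain here */
    loc->next = vel;
}
```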
7.4.1. xrLocateSpace
xrLocateSpace provides the physical location of a space in a base space at a specified time, if currently known by the runtime.
// Provided by XR_VERSION_1_0
XrResult xrLocateSpace(
XrSpace space,
XrSpace baseSpace,
XrTime time,
XrSpaceLocation* location);
For a time in the past, the runtime should locate the spaces based on
the runtime’s most accurate current understanding of how the world was at
that historical time.
For a time in the future, the runtime should locate the spaces based
on the runtime’s most up-to-date prediction of how the world will be at that
future time.
The minimum valid range of values for time is described in
Prediction Time Limits.
For values of time outside this range, xrLocateSpace may return
a location with no position and XR_SPACE_LOCATION_POSITION_VALID_BIT
unset.
Some devices improve their understanding of the world as the device is used.
The location returned by xrLocateSpace for a given space,
baseSpace and time may change over time, even for spaces that
track static objects, as one or both spaces adjust their origins.
During tracking loss of space relative to baseSpace, runtimes
should continue to provide inferred or last-known
XrPosef::position and XrPosef::orientation values.
These inferred poses can, for example, be based on neck model updates,
inertial dead reckoning, or a last-known position, so long as it is still
reasonable for the application to use that pose.
While a runtime is providing position data, it must continue to set
XR_SPACE_LOCATION_POSITION_VALID_BIT but it can clear
XR_SPACE_LOCATION_POSITION_TRACKED_BIT to indicate that the position
is inferred or last-known in this way.
If the runtime has not yet observed even a last-known pose for how to locate
space in baseSpace (e.g. one space is an action space bound to a
motion controller that has not yet been detected, or the two spaces are in
disconnected fragments of the runtime’s tracked volume), the runtime should
return a location with no position and
XR_SPACE_LOCATION_POSITION_VALID_BIT unset.
The runtime must return a location with both
XR_SPACE_LOCATION_POSITION_VALID_BIT and
XR_SPACE_LOCATION_POSITION_TRACKED_BIT set when locating space
and baseSpace if both spaces were created relative to the same entity
(e.g. two action spaces for the same action), even if the entity is
currently untracked.
The location in this case is the difference in the two spaces'
application-specified transforms relative to that common entity.
During tracking loss, the runtime should return a location with
XR_SPACE_LOCATION_POSITION_VALID_BIT and
XR_SPACE_LOCATION_ORIENTATION_VALID_BIT set and
XR_SPACE_LOCATION_POSITION_TRACKED_BIT and
XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT unset for spaces tracking
two static entities in the world when their relative pose is known to the
runtime.
This enables applications to continue to make use of the runtime’s latest
knowledge of the world.
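The distinction between the VALID and TRACKED bits described above can be captured in a small helper. The following is a minimal self-contained sketch (the flag values match those specified for XrSpaceLocationFlagBits; the classification strings are illustrative and not part of the API, and no OpenXR headers are required):

```cpp
#include <cstdint>
#include <string>

// Stand-in flag definitions matching the XrSpaceLocationFlagBits values.
using XrSpaceLocationFlags = uint64_t;
static const XrSpaceLocationFlags XR_SPACE_LOCATION_ORIENTATION_VALID_BIT   = 0x00000001;
static const XrSpaceLocationFlags XR_SPACE_LOCATION_POSITION_VALID_BIT      = 0x00000002;
static const XrSpaceLocationFlags XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT = 0x00000004;
static const XrSpaceLocationFlags XR_SPACE_LOCATION_POSITION_TRACKED_BIT    = 0x00000008;

// Classify a pose by its location flags: TRACKED bits may be cleared during
// tracking loss while the VALID bits remain set for inferred/last-known poses.
std::string classifyLocation(XrSpaceLocationFlags flags) {
    const XrSpaceLocationFlags validBits =
        XR_SPACE_LOCATION_ORIENTATION_VALID_BIT | XR_SPACE_LOCATION_POSITION_VALID_BIT;
    const XrSpaceLocationFlags trackedBits =
        XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT | XR_SPACE_LOCATION_POSITION_TRACKED_BIT;
    if ((flags & validBits) != validBits)
        return "no usable pose";       // do not read pose members whose VALID bit is unset
    if ((flags & trackedBits) == trackedBits)
        return "actively tracked";     // pose reflects live 6DOF tracking
    return "inferred or last-known";   // e.g. neck model or dead reckoning during tracking loss
}
```

An application typically renders normally in the "actively tracked" case, continues rendering with the inferred pose during tracking loss, and hides pose-dependent content when no usable pose is available.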
If an XrSpaceVelocity structure is chained to the
XrSpaceLocation::next pointer, and the velocity is observed or
can be calculated by the runtime, the runtime must fill in the linear
velocity of the origin of space within the reference frame of
baseSpace and set the XR_SPACE_VELOCITY_LINEAR_VALID_BIT.
Similarly, if an XrSpaceVelocity structure is chained to the
XrSpaceLocation::next pointer, and the angular velocity is
observed or can be calculated by the runtime, the runtime must fill in the
angular velocity of the origin of space within the reference frame of
baseSpace and set the XR_SPACE_VELOCITY_ANGULAR_VALID_BIT.
The following example code shows how an application can get both the
location and velocity of a space within a base space using the
xrLocateSpace function by chaining an XrSpaceVelocity to the
next pointer of XrSpaceLocation and calling xrLocateSpace.
XrSpace space; // previously initialized
XrSpace baseSpace; // previously initialized
XrTime time; // previously initialized
XrSpaceVelocity velocity {XR_TYPE_SPACE_VELOCITY};
XrSpaceLocation location {XR_TYPE_SPACE_LOCATION, &velocity};
xrLocateSpace(space, baseSpace, time, &location);
The XrSpaceLocation structure is defined as:
typedef struct XrSpaceLocation {
XrStructureType type;
void* next;
XrSpaceLocationFlags locationFlags;
XrPosef pose;
} XrSpaceLocation;
The XrSpaceLocation::locationFlags member is of the following
type, and contains a bitwise-OR of zero or more of the bits defined in
XrSpaceLocationFlagBits.
typedef XrFlags64 XrSpaceLocationFlags;
Valid bits for XrSpaceLocationFlags are defined by XrSpaceLocationFlagBits, which is specified as:
// Flag bits for XrSpaceLocationFlags
static const XrSpaceLocationFlags XR_SPACE_LOCATION_ORIENTATION_VALID_BIT = 0x00000001;
static const XrSpaceLocationFlags XR_SPACE_LOCATION_POSITION_VALID_BIT = 0x00000002;
static const XrSpaceLocationFlags XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT = 0x00000004;
static const XrSpaceLocationFlags XR_SPACE_LOCATION_POSITION_TRACKED_BIT = 0x00000008;
The flag bits have the following meanings:
XR_SPACE_LOCATION_ORIENTATION_VALID_BIT indicates that the pose member’s orientation member contains valid data.
XR_SPACE_LOCATION_POSITION_VALID_BIT indicates that the pose member’s position member contains valid data.
XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT indicates that the pose member’s orientation member represents an actively tracked orientation.
XR_SPACE_LOCATION_POSITION_TRACKED_BIT indicates that the pose member’s position member represents an actively tracked position.
Applications must not read a pose member whose corresponding valid bit is unset.
The XrSpaceVelocity structure is defined as:
// Provided by XR_VERSION_1_0
typedef struct XrSpaceVelocity {
XrStructureType type;
void* next;
XrSpaceVelocityFlags velocityFlags;
XrVector3f linearVelocity;
XrVector3f angularVelocity;
} XrSpaceVelocity;
The XrSpaceVelocity::velocityFlags member is of the following
type, and contains a bitwise-OR of zero or more of the bits defined in
XrSpaceVelocityFlagBits.
typedef XrFlags64 XrSpaceVelocityFlags;
Valid bits for XrSpaceVelocityFlags are defined by XrSpaceVelocityFlagBits, which is specified as:
// Flag bits for XrSpaceVelocityFlags
static const XrSpaceVelocityFlags XR_SPACE_VELOCITY_LINEAR_VALID_BIT = 0x00000001;
static const XrSpaceVelocityFlags XR_SPACE_VELOCITY_ANGULAR_VALID_BIT = 0x00000002;
The flag bits have the following meanings:
XR_SPACE_VELOCITY_LINEAR_VALID_BIT indicates that the linearVelocity member contains valid data.
XR_SPACE_VELOCITY_ANGULAR_VALID_BIT indicates that the angularVelocity member contains valid data.
Applications must not read a velocity member whose corresponding valid bit is unset.
7.4.2. Locate spaces
Applications can use the xrLocateSpaces function to locate an array of spaces.
The xrLocateSpaces function is defined as:
// Provided by XR_VERSION_1_1
XrResult xrLocateSpaces(
XrSession session,
const XrSpacesLocateInfo* locateInfo,
XrSpaceLocations* spaceLocations);
xrLocateSpaces provides the physical location of one or more spaces in a base space at a specified time, if currently known by the runtime.
The XrSpacesLocateInfo::time, the
XrSpacesLocateInfo::baseSpace, and each space in
XrSpacesLocateInfo::spaces, in the locateInfo parameter,
all follow the same specifics as the corresponding inputs to the
xrLocateSpace function.
The XrSpacesLocateInfo structure is defined as:
// Provided by XR_VERSION_1_1
typedef struct XrSpacesLocateInfo {
XrStructureType type;
const void* next;
XrSpace baseSpace;
XrTime time;
uint32_t spaceCount;
const XrSpace* spaces;
} XrSpacesLocateInfo;
The time, the baseSpace, and each space in spaces all
follow the same specifics as the corresponding inputs to the
xrLocateSpace function.
The baseSpace and all of the XrSpace handles in the spaces
array must be valid and share the same parent XrSession.
If the time is invalid, xrLocateSpaces must return
XR_ERROR_TIME_INVALID.
The spaceCount must be a positive number; that is, the spaces array
must not be empty.
Otherwise, the runtime must return XR_ERROR_VALIDATION_FAILURE.
The XrSpaceLocations structure is defined as:
// Provided by XR_VERSION_1_1
typedef struct XrSpaceLocations {
XrStructureType type;
void* next;
uint32_t locationCount;
XrSpaceLocationData* locations;
} XrSpaceLocations;
The XrSpaceLocations structure contains an array of space locations in
the member locations, to be used as output for xrLocateSpaces.
The application must allocate this array to be populated with the function
output.
The locationCount value must be the same as
XrSpacesLocateInfo::spaceCount, otherwise, the
xrLocateSpaces function must return
XR_ERROR_VALIDATION_FAILURE.
The XrSpaceLocationData structure is defined as:
// Provided by XR_VERSION_1_1
typedef struct XrSpaceLocationData {
XrSpaceLocationFlags locationFlags;
XrPosef pose;
} XrSpaceLocationData;
This is a single element of the array in
XrSpaceLocations::locations, and is used to return the pose and
location flags for a single space with respect to the specified base space
from a call to xrLocateSpaces.
It does not accept chained structures to allow for easier use in dynamically
allocated container datatypes.
Chained structures are possible with the XrSpaceLocations that
describes an array of these elements.
7.4.3. Locate space velocities
Applications can request the velocities of spaces by chaining the XrSpaceVelocities structure to the next pointer of XrSpaceLocations when calling xrLocateSpaces.
The XrSpaceVelocities structure is defined as:
// Provided by XR_VERSION_1_1
typedef struct XrSpaceVelocities {
XrStructureType type;
void* next;
uint32_t velocityCount;
XrSpaceVelocityData* velocities;
} XrSpaceVelocities;
The XrSpaceVelocities structure contains an array of space velocities in
the member velocities, to be used as output for xrLocateSpaces.
The application must allocate this array to be populated with the function
output.
The velocityCount value must be the same as
XrSpacesLocateInfo::spaceCount, otherwise, the
xrLocateSpaces function must return
XR_ERROR_VALIDATION_FAILURE.
The XrSpaceVelocityData structure is defined as:
// Provided by XR_VERSION_1_1
typedef struct XrSpaceVelocityData {
XrSpaceVelocityFlags velocityFlags;
XrVector3f linearVelocity;
XrVector3f angularVelocity;
} XrSpaceVelocityData;
This is a single element of the array in
XrSpaceVelocities::velocities, and is used to return the linear
and angular velocity and velocity flags for a single space with respect to
the specified base space from a call to xrLocateSpaces.
It does not accept chained structures to allow for easier use in dynamically
allocated container datatypes.
7.4.4. Example code for xrLocateSpaces
The following example code shows how an application retrieves both the location and velocity of one or more spaces in a base space at a given time using the xrLocateSpaces function.
XrInstance instance; // previously initialized
XrSession session; // previously initialized
XrSpace baseSpace; // previously initialized
std::vector<XrSpace> spacesToLocate; // previously initialized
// Prepare output buffers to receive data and get reused in frame loop.
std::vector<XrSpaceLocationData> locationBuffer(spacesToLocate.size());
std::vector<XrSpaceVelocityData> velocityBuffer(spacesToLocate.size());
// Get function pointer for xrLocateSpaces.
PFN_xrLocateSpaces xrLocateSpaces;
CHK_XR(xrGetInstanceProcAddr(instance, "xrLocateSpaces",
reinterpret_cast<PFN_xrVoidFunction*>(
&xrLocateSpaces)));
// application frame loop
while (1) {
// Typically the time is the predicted display time returned from xrWaitFrame.
XrTime displayTime; // previously initialized.
XrSpacesLocateInfo locateInfo{XR_TYPE_SPACES_LOCATE_INFO};
locateInfo.baseSpace = baseSpace;
locateInfo.time = displayTime;
locateInfo.spaceCount = (uint32_t)spacesToLocate.size();
locateInfo.spaces = spacesToLocate.data();
XrSpaceLocations locations{XR_TYPE_SPACE_LOCATIONS};
locations.locationCount = (uint32_t)locationBuffer.size();
locations.locations = locationBuffer.data();
XrSpaceVelocities velocities{XR_TYPE_SPACE_VELOCITIES};
velocities.velocityCount = (uint32_t)velocityBuffer.size();
velocities.velocities = velocityBuffer.data();
locations.next = &velocities;
CHK_XR(xrLocateSpaces(session, &locateInfo, &locations));
for (uint32_t i = 0; i < spacesToLocate.size(); i++) {
const auto positionAndOrientationTracked =
XR_SPACE_LOCATION_POSITION_TRACKED_BIT | XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT;
const auto orientationOnlyTracked = XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT;
if ((locationBuffer[i].locationFlags & positionAndOrientationTracked) == positionAndOrientationTracked) {
// if the location is 6dof tracked
do_something(locationBuffer[i].pose.position);
do_something(locationBuffer[i].pose.orientation);
const auto velocityValidBits =
XR_SPACE_VELOCITY_LINEAR_VALID_BIT | XR_SPACE_VELOCITY_ANGULAR_VALID_BIT;
if ((velocityBuffer[i].velocityFlags & velocityValidBits) == velocityValidBits) {
do_something(velocityBuffer[i].linearVelocity);
do_something(velocityBuffer[i].angularVelocity);
}
}
else if ((locationBuffer[i].locationFlags & orientationOnlyTracked) == orientationOnlyTracked) {
// if the location is 3dof tracked
do_something(locationBuffer[i].pose.orientation);
if ((velocityBuffer[i].velocityFlags & XR_SPACE_VELOCITY_ANGULAR_VALID_BIT) == XR_SPACE_VELOCITY_ANGULAR_VALID_BIT) {
do_something(velocityBuffer[i].angularVelocity);
}
}
}
}
8. View Configurations
A view configuration is a semantically meaningful set of one or more views for which an application can render images. A primary view configuration is a view configuration intended to be presented to the viewer interacting with the XR application. This distinction allows the later addition of other kinds of views, for example views intended for spectators.
A typical head-mounted VR system has a view configuration with two views, while a typical phone-based AR system has a view configuration with a single view. A simple multi-wall projection-based (CAVE-like) VR system may have a view configuration with at least one view for each display surface (wall, floor, ceiling) in the room.
For any supported form factor, a system will support one or more primary view configurations. Supporting more than one primary view configuration can be useful if a system supports a special view configuration optimized for the hardware but also supports a more broadly used view configuration as a compatibility fallback.
View configurations are identified with an XrViewConfigurationType.
8.1. Primary View Configurations
typedef enum XrViewConfigurationType {
XR_VIEW_CONFIGURATION_TYPE_PRIMARY_MONO = 1,
XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO = 2,
// Provided by XR_VERSION_1_1
XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO_WITH_FOVEATED_INSET = 1000037000,
// Provided by XR_MSFT_first_person_observer
XR_VIEW_CONFIGURATION_TYPE_SECONDARY_MONO_FIRST_PERSON_OBSERVER_MSFT = 1000054000,
// Provided by XR_VARJO_quad_views
XR_VIEW_CONFIGURATION_TYPE_PRIMARY_QUAD_VARJO = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO_WITH_FOVEATED_INSET,
XR_VIEW_CONFIGURATION_TYPE_MAX_ENUM = 0x7FFFFFFF
} XrViewConfigurationType;
The application selects its primary view configuration type when calling xrBeginSession, and that configuration remains constant for the lifetime of the session, until xrEndSession is called.
The number of views and the semantic meaning of each view index within a given view configuration is well-defined, specified below for all core view configurations. The predefined primary view configuration types are:
XR_VIEW_CONFIGURATION_TYPE_PRIMARY_MONO: one view, suitable for monoscopic displays such as a phone screen.
XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO: two views, where view index 0 is for the left eye and view index 1 is for the right eye, suitable for stereoscopic head-mounted displays.
XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO_WITH_FOVEATED_INSET: four views, adding a foveated inset view for each eye to the stereo pair.
8.2. View Configuration API
First an application needs to select which primary view configuration it wants to use. If the system supports multiple configurations, an application can call xrEnumerateViewConfigurations before creating an XrSession to get a list of the view configuration types supported for a given system.
The application can then call xrGetViewConfigurationProperties and xrEnumerateViewConfigurationViews to get detailed information about each view configuration type and its individual views.
8.2.1. xrEnumerateViewConfigurations
The xrEnumerateViewConfigurations function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrEnumerateViewConfigurations(
XrInstance instance,
XrSystemId systemId,
uint32_t viewConfigurationTypeCapacityInput,
uint32_t* viewConfigurationTypeCountOutput,
XrViewConfigurationType* viewConfigurationTypes);
xrEnumerateViewConfigurations enumerates the view configuration types
supported by the XrSystemId.
The supported set for that system must not change during the lifetime of
its XrInstance.
The returned list of primary view configurations should be in order from
what the runtime considers highest to lowest user preference.
Thus the first enumerated view configuration type should be the one the
runtime prefers the application to use if possible.
Runtimes must always return identical buffer contents from this enumeration
for the given systemId and for the lifetime of the instance.
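Like other OpenXR enumeration functions, xrEnumerateViewConfigurations uses the two-call idiom: a first call with a zero capacity queries the required count, and a second call fills an appropriately sized buffer. The following self-contained sketch demonstrates the idiom with a mock enumerator standing in for the real entry point (the function names, values, and return codes here are illustrative only):

```cpp
#include <cstdint>
#include <vector>

// Mock enumerator following the OpenXR two-call idiom: with capacityInput == 0
// it only reports the required count; otherwise it fills the caller's buffer.
int enumerateValues(uint32_t capacityInput, uint32_t* countOutput, int* values) {
    static const int available[] = {2, 1};  // e.g. PRIMARY_STEREO preferred over PRIMARY_MONO
    *countOutput = 2;
    if (capacityInput == 0) return 0;       // capacity query: success, nothing written
    if (capacityInput < 2) return -1;       // analogous to XR_ERROR_SIZE_INSUFFICIENT
    for (uint32_t i = 0; i < 2; ++i) values[i] = available[i];
    return 0;
}

// Standard consumption pattern: query, allocate, then fill.
std::vector<int> enumerateAll() {
    uint32_t count = 0;
    enumerateValues(0, &count, nullptr);            // first call: get required capacity
    std::vector<int> buffer(count);
    enumerateValues(count, &count, buffer.data());  // second call: fill the buffer
    return buffer;
}
```

The example code in section 8.3 applies this same pattern to the real xrEnumerateViewConfigurations and xrEnumerateViewConfigurationViews functions.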
8.2.2. xrGetViewConfigurationProperties
The xrGetViewConfigurationProperties function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrGetViewConfigurationProperties(
XrInstance instance,
XrSystemId systemId,
XrViewConfigurationType viewConfigurationType,
XrViewConfigurationProperties* configurationProperties);
xrGetViewConfigurationProperties queries properties of an individual
view configuration.
Applications must use one of the supported view configuration types
returned by xrEnumerateViewConfigurations.
If viewConfigurationType is not supported by this XrInstance the
runtime must return XR_ERROR_VIEW_CONFIGURATION_TYPE_UNSUPPORTED.
8.2.3. XrViewConfigurationProperties
The XrViewConfigurationProperties structure is defined as:
typedef struct XrViewConfigurationProperties {
XrStructureType type;
void* next;
XrViewConfigurationType viewConfigurationType;
XrBool32 fovMutable;
} XrViewConfigurationProperties;
8.2.4. xrEnumerateViewConfigurationViews
The xrEnumerateViewConfigurationViews function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrEnumerateViewConfigurationViews(
XrInstance instance,
XrSystemId systemId,
XrViewConfigurationType viewConfigurationType,
uint32_t viewCapacityInput,
uint32_t* viewCountOutput,
XrViewConfigurationView* views);
Each XrViewConfigurationType defines the number of views associated
with it.
Applications can query more details of each view element using
xrEnumerateViewConfigurationViews.
If the supplied viewConfigurationType is not supported by this
XrInstance and XrSystemId, the runtime must return
XR_ERROR_VIEW_CONFIGURATION_TYPE_UNSUPPORTED.
Runtimes must always return identical buffer contents from this enumeration
for the given systemId and viewConfigurationType for the
lifetime of the instance.
8.2.5. XrViewConfigurationView
Each XrViewConfigurationView specifies properties related to rendering of an individual view within a view configuration.
The XrViewConfigurationView structure is defined as:
typedef struct XrViewConfigurationView {
XrStructureType type;
void* next;
uint32_t recommendedImageRectWidth;
uint32_t maxImageRectWidth;
uint32_t recommendedImageRectHeight;
uint32_t maxImageRectHeight;
uint32_t recommendedSwapchainSampleCount;
uint32_t maxSwapchainSampleCount;
} XrViewConfigurationView;
See XrSwapchainSubImage for more information about
XrSwapchainSubImage::imageRect values, and
XrSwapchainCreateInfo for more information about creating swapchains
appropriately sized to support those
XrSwapchainSubImage::imageRect values.
The array of XrViewConfigurationView returned by the runtime must adhere to the rules defined in XrViewConfigurationType, such as the count and association to the left and right eyes.
8.3. Example View Configuration Code
XrInstance instance; // previously initialized
XrSystemId system; // previously initialized
XrSession session; // previously initialized
XrSpace sceneSpace; // previously initialized
// Enumerate the view configurations paths.
uint32_t configurationCount;
CHK_XR(xrEnumerateViewConfigurations(instance, system, 0, &configurationCount, nullptr));
std::vector<XrViewConfigurationType> configurationTypes(configurationCount);
CHK_XR(xrEnumerateViewConfigurations(instance, system, configurationCount, &configurationCount, configurationTypes.data()));
bool configFound = false;
XrViewConfigurationType viewConfig = XR_VIEW_CONFIGURATION_TYPE_MAX_ENUM;
for(uint32_t i = 0; i < configurationCount; ++i)
{
if (configurationTypes[i] == XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO)
{
configFound = true;
viewConfig = configurationTypes[i];
break; // Pick the first supported, i.e. preferred, view configuration.
}
}
if (!configFound)
return; // Cannot support any view configuration of this system.
// Get detailed information of each view element.
uint32_t viewCount;
CHK_XR(xrEnumerateViewConfigurationViews(instance, system,
viewConfig,
0,
&viewCount,
nullptr));
std::vector<XrViewConfigurationView> configViews(viewCount, {XR_TYPE_VIEW_CONFIGURATION_VIEW});
CHK_XR(xrEnumerateViewConfigurationViews(instance, system,
viewConfig,
viewCount,
&viewCount,
configViews.data()));
// Set the primary view configuration for the session.
XrSessionBeginInfo beginInfo = {XR_TYPE_SESSION_BEGIN_INFO};
beginInfo.primaryViewConfigurationType = viewConfig;
CHK_XR(xrBeginSession(session, &beginInfo));
// Allocate a buffer according to viewCount.
std::vector<XrView> views(viewCount, {XR_TYPE_VIEW});
// Run a per-frame loop.
while (!quit)
{
// Wait for a new frame.
XrFrameWaitInfo frameWaitInfo{XR_TYPE_FRAME_WAIT_INFO};
XrFrameState frameState{XR_TYPE_FRAME_STATE};
CHK_XR(xrWaitFrame(session, &frameWaitInfo, &frameState));
// Begin frame immediately before GPU work
XrFrameBeginInfo frameBeginInfo { XR_TYPE_FRAME_BEGIN_INFO };
CHK_XR(xrBeginFrame(session, &frameBeginInfo));
std::vector<XrCompositionLayerBaseHeader*> layers;
XrCompositionLayerProjectionView projViews[2] = { /*...*/ };
XrCompositionLayerProjection layerProj{ XR_TYPE_COMPOSITION_LAYER_PROJECTION};
if (frameState.shouldRender) {
XrViewLocateInfo viewLocateInfo{XR_TYPE_VIEW_LOCATE_INFO};
viewLocateInfo.viewConfigurationType = viewConfig;
viewLocateInfo.displayTime = frameState.predictedDisplayTime;
viewLocateInfo.space = sceneSpace;
XrViewState viewState{XR_TYPE_VIEW_STATE};
uint32_t viewCountOutput;
CHK_XR(xrLocateViews(session, &viewLocateInfo, &viewState, (uint32_t)views.size(), &viewCountOutput, views.data()));
// ...
// Use viewState and frameState for scene render, and fill in projViews[2]
// ...
// Assemble composition layers structure
layerProj.layerFlags = XR_COMPOSITION_LAYER_BLEND_TEXTURE_SOURCE_ALPHA_BIT;
layerProj.space = sceneSpace;
layerProj.viewCount = 2;
layerProj.views = projViews;
layers.push_back(reinterpret_cast<XrCompositionLayerBaseHeader*>(&layerProj));
}
// End frame and submit layers, even if layers is empty due to shouldRender = false
XrFrameEndInfo frameEndInfo{ XR_TYPE_FRAME_END_INFO};
frameEndInfo.displayTime = frameState.predictedDisplayTime;
frameEndInfo.environmentBlendMode = XR_ENVIRONMENT_BLEND_MODE_OPAQUE;
frameEndInfo.layerCount = (uint32_t)layers.size();
frameEndInfo.layers = layers.data();
CHK_XR(xrEndFrame(session, &frameEndInfo));
}
9. Session
XR_DEFINE_HANDLE(XrSession)
A session represents an application’s intention to display XR content to the user.
9.1. Session Lifecycle
A typical XR session coordinates the application and the runtime through session control functions and session state events.
A session is considered running after a successful call to
xrBeginSession and remains running until any call is made to
xrEndSession.
Certain functions are only valid to call when a session is running, such as
xrWaitFrame, or else the XR_ERROR_SESSION_NOT_RUNNING error
must be returned by the runtime.
A session is considered not running before a successful call to
xrBeginSession and becomes not running again after any call is made to
xrEndSession.
Certain functions are only valid to call when a session is not running, such
as xrBeginSession, or else the XR_ERROR_SESSION_RUNNING error
must be returned by the runtime.
If an error is returned from xrBeginSession, the session remains in its current running or not running state. Calling xrEndSession always transitions a session to the not running state, regardless of any errors returned.
Only running sessions may become focused sessions that receive XR input. When a session is not running, the application must not submit frames. This is important because without a running session, the runtime no longer has to spend resources on sub-systems (tracking etc.) that are no longer needed by the application.
An application must call xrBeginSession when the session is in the
XR_SESSION_STATE_READY state, or
XR_ERROR_SESSION_NOT_READY will be returned; it must call
xrEndSession when the session is in the XR_SESSION_STATE_STOPPING state, otherwise
XR_ERROR_SESSION_NOT_STOPPING will be returned.
This is to allow the runtimes to seamlessly transition from one
application’s session to another.
The application can call xrDestroySession at any time during the
session life cycle, however, it must stop using the XrSession handle
immediately in all threads and stop using any related resources.
Therefore, it is typically undesirable to destroy a running session;
instead, it is recommended to wait for XR_SESSION_STATE_EXITING before
destroying a session.
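The lifecycle rules above pair each session state with one prescribed application action. The following self-contained sketch captures that pairing with stand-in state values matching the XrSessionState enumeration defined in Session States (the action strings are illustrative; only the state-to-call pairing comes from this specification):

```cpp
#include <string>

// Stand-in states matching the XrSessionState values (no OpenXR headers needed).
enum class SessionState {
    Idle = 1, Ready = 2, Synchronized = 3, Visible = 4,
    Focused = 5, Stopping = 6, LossPending = 7, Exiting = 8
};

// Map each state to the action the lifecycle rules prescribe.
std::string actionFor(SessionState state) {
    switch (state) {
    case SessionState::Ready:       return "xrBeginSession";    // begin only in READY
    case SessionState::Stopping:    return "xrEndSession";      // end only in STOPPING
    case SessionState::Exiting:     return "xrDestroySession";  // recommended point to destroy
    case SessionState::LossPending: return "xrDestroySession";  // session is about to be lost
    case SessionState::Idle:        return "poll events";       // minimize resource use
    default:                        return "run frame loop";    // SYNCHRONIZED/VISIBLE/FOCUSED
    }
}
```

Calling xrBeginSession or xrEndSession outside the READY or STOPPING states, respectively, yields the XR_ERROR_SESSION_NOT_READY and XR_ERROR_SESSION_NOT_STOPPING errors described below.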
9.2. Session Creation
To present graphical content on an output device, OpenXR applications need to pick a graphics API which is supported by the runtime. Unextended OpenXR does not support any graphics APIs natively but provides a number of extensions, of which each runtime can support any subset. These extensions can be activated at XrInstance creation time.
During XrSession creation the application must provide information
about which graphics API it intends to use by adding an
XrGraphicsBinding* struct of one (and only one) of the enabled
graphics API extensions to the next chain of XrSessionCreateInfo.
The application must call the xrGet*GraphicsRequirements method
(where * is a placeholder) provided by the chosen graphics API extension
before attempting to create the session (for example,
xrGetD3D11GraphicsRequirementsKHR
xrGetD3D12GraphicsRequirementsKHR
xrGetOpenGLGraphicsRequirementsKHR
xrGetVulkanGraphicsRequirementsKHR
xrGetVulkanGraphicsRequirements2KHR
).
Unless specified differently in the graphics API extension, the application
is responsible for creating a valid graphics device binding based on the
requirements returned by xrGet*GraphicsRequirements methods (for
details refer to the extension specification of the graphics API).
The xrCreateSession function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrCreateSession(
XrInstance instance,
const XrSessionCreateInfo* createInfo,
XrSession* session);
Creates a session using the provided createInfo and returns a handle
to that session.
This session is created in the XR_SESSION_STATE_IDLE state, and a
corresponding XrEventDataSessionStateChanged event to the
XR_SESSION_STATE_IDLE state must be generated as the first such event
for the new session.
The runtime must return XR_ERROR_GRAPHICS_REQUIREMENTS_CALL_MISSING
(XR_ERROR_VALIDATION_FAILURE may be returned due to legacy behavior)
on calls to xrCreateSession if a function named like
xrGet*GraphicsRequirements has not been called for the same
instance and XrSessionCreateInfo::systemId.
(See graphics binding extensions for details.)
The XrSessionCreateInfo structure is defined as:
typedef struct XrSessionCreateInfo {
XrStructureType type;
const void* next;
XrSessionCreateFlags createFlags;
XrSystemId systemId;
} XrSessionCreateInfo;
The XrSessionCreateInfo::createFlags member is of the following
type, and contains a bitwise-OR of zero or more of the bits defined in
XrSessionCreateFlagBits.
typedef XrFlags64 XrSessionCreateFlags;
Valid bits for XrSessionCreateFlags are defined by XrSessionCreateFlagBits.
// Flag bits for XrSessionCreateFlags
There are currently no session creation flags. This is reserved for future use.
The xrDestroySession function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrDestroySession(
XrSession session);
XrSession handles are destroyed using xrDestroySession. When an XrSession is destroyed, all handles that are children of that XrSession are also destroyed.
The application is responsible for ensuring that it has no calls using
session in progress when the session is destroyed.
xrDestroySession can be called when the session is in any session state.
9.3. Session Control
The xrBeginSession function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrBeginSession(
XrSession session,
const XrSessionBeginInfo* beginInfo);
When the application receives XrEventDataSessionStateChanged event
with the XR_SESSION_STATE_READY state, the application should then
call xrBeginSession to start rendering frames for display to the user.
After this function successfully returns, the session is considered to be running. The application should then start its frame loop consisting of some sequence of xrWaitFrame/xrBeginFrame/xrEndFrame calls.
If the session is already running when the application
calls xrBeginSession, the runtime must return error
XR_ERROR_SESSION_RUNNING.
If the session is not running when the application
calls xrBeginSession, but the session is not yet in the
XR_SESSION_STATE_READY state, the runtime must return error
XR_ERROR_SESSION_NOT_READY.
Note that a runtime may decide not to show the user any given frame from a
session at any time, for example if the user has switched to a different
application’s running session.
The application should check whether xrWaitFrame returns
XrFrameState::shouldRender set to true before rendering a given
frame to determine whether that frame will be visible to the user.
Runtime session frame state must start in a reset state when a session transitions to running so that no state is carried over from when the same session was previously running. Frame state in this context includes xrWaitFrame, xrBeginFrame, and xrEndFrame call order enforcement.
If XrSessionBeginInfo::primaryViewConfigurationType in
beginInfo is not supported by the XrSystemId used to create
the session, the runtime must return
XR_ERROR_VIEW_CONFIGURATION_TYPE_UNSUPPORTED.
The XrSessionBeginInfo structure is defined as:
typedef struct XrSessionBeginInfo {
XrStructureType type;
const void* next;
XrViewConfigurationType primaryViewConfigurationType;
} XrSessionBeginInfo;
The xrEndSession function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrEndSession(
XrSession session);
When the application receives XrEventDataSessionStateChanged event
with the XR_SESSION_STATE_STOPPING state, the application should stop
its frame loop and then call xrEndSession to end the
running session.
This function signals to the runtime that the application will no longer
call xrWaitFrame, xrBeginFrame or xrEndFrame from any
thread allowing the runtime to safely transition the session to
XR_SESSION_STATE_IDLE.
The application must also avoid reading input state or sending haptic
output after calling xrEndSession.
If the session is not running when the application
calls xrEndSession, the runtime must return error
XR_ERROR_SESSION_NOT_RUNNING.
If the session is still running when the application
calls xrEndSession, but the session is not yet in the
XR_SESSION_STATE_STOPPING state, the runtime must return error
XR_ERROR_SESSION_NOT_STOPPING.
When an application wishes to exit a running session,
it can call xrRequestExitSession, requesting that the runtime
transition through the various intermediate session states including
XR_SESSION_STATE_STOPPING to XR_SESSION_STATE_EXITING.
On platforms where an application’s lifecycle is managed by the system, session state changes may be implicitly triggered by application lifecycle state changes. On such platforms, using platform-specific methods to alter application lifecycle state may be the preferred method of provoking session state changes. The behavior of xrRequestExitSession is not altered, however explicit session exit may not interact with the platform-specific application lifecycle.
The xrRequestExitSession function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrRequestExitSession(
XrSession session);
If session is not running when
xrRequestExitSession is called, XR_ERROR_SESSION_NOT_RUNNING
must be returned.
9.4. Session States
While events can be expanded upon, there is a minimum set of lifecycle events which all OpenXR applications must be aware of. These events are detailed below.
9.4.1. XrEventDataSessionStateChanged
The XrEventDataSessionStateChanged structure is defined as:
// Provided by XR_VERSION_1_0
typedef struct XrEventDataSessionStateChanged {
XrStructureType type;
const void* next;
XrSession session;
XrSessionState state;
XrTime time;
} XrEventDataSessionStateChanged;
Receiving the XrEventDataSessionStateChanged event structure indicates that the application has changed lifecycle state.
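As a sketch of how an application might consume this event, the following hedged example polls the instance event queue and reacts to the lifecycle states described below. The function and variable names other than the OpenXR calls are hypothetical, error handling is elided, and a stereo primary view configuration is assumed.

```c
#include <openxr/openxr.h>
#include <stdbool.h>

// Hypothetical sketch: drain the instance event queue once per frame and
// react to session lifecycle transitions. A real application also handles
// other event types and checks every XrResult.
static void poll_session_events(XrInstance instance, XrSession session,
                                bool *running, bool *exitRequested)
{
    XrEventDataBuffer event = {XR_TYPE_EVENT_DATA_BUFFER};
    while (xrPollEvent(instance, &event) == XR_SUCCESS) {
        if (event.type == XR_TYPE_EVENT_DATA_SESSION_STATE_CHANGED) {
            const XrEventDataSessionStateChanged *changed =
                (const XrEventDataSessionStateChanged *)&event;
            switch (changed->state) {
            case XR_SESSION_STATE_READY: {
                // Begin the session and start the frame loop.
                XrSessionBeginInfo beginInfo = {XR_TYPE_SESSION_BEGIN_INFO};
                beginInfo.primaryViewConfigurationType =
                    XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
                xrBeginSession(session, &beginInfo);
                *running = true;
                break;
            }
            case XR_SESSION_STATE_STOPPING:
                // Exit the frame loop and end the session.
                xrEndSession(session);
                *running = false;
                break;
            case XR_SESSION_STATE_EXITING:
            case XR_SESSION_STATE_LOSS_PENDING:
                // App-specific teardown (xrDestroySession etc.) follows.
                *exitRequested = true;
                break;
            default:
                break;
            }
        }
        // Reset the buffer header before the next poll.
        event = (XrEventDataBuffer){XR_TYPE_EVENT_DATA_BUFFER};
    }
}
```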
The XrSessionState enumerates the possible session lifecycle states:
typedef enum XrSessionState {
XR_SESSION_STATE_UNKNOWN = 0,
XR_SESSION_STATE_IDLE = 1,
XR_SESSION_STATE_READY = 2,
XR_SESSION_STATE_SYNCHRONIZED = 3,
XR_SESSION_STATE_VISIBLE = 4,
XR_SESSION_STATE_FOCUSED = 5,
XR_SESSION_STATE_STOPPING = 6,
XR_SESSION_STATE_LOSS_PENDING = 7,
XR_SESSION_STATE_EXITING = 8,
XR_SESSION_STATE_MAX_ENUM = 0x7FFFFFFF
} XrSessionState;
The XR_SESSION_STATE_UNKNOWN state must not be returned by the
runtime, and is only defined to avoid 0 being a valid state.
Receiving the XR_SESSION_STATE_IDLE state indicates that the runtime
considers the session is idle.
Applications in this state should minimize resource consumption but
continue to call xrPollEvent at some reasonable cadence.
Receiving the XR_SESSION_STATE_READY state indicates that the runtime
desires the application to prepare rendering resources, begin its session
and synchronize its frame loop with the runtime.
The application does this by successfully calling xrBeginSession and
then running its frame loop by calling xrWaitFrame, xrBeginFrame
and xrEndFrame in a loop.
If the runtime wishes to return the session to the
XR_SESSION_STATE_IDLE state, it must wait until the application calls
xrBeginSession.
After returning from the xrBeginSession call, the runtime may then
immediately transition forward through the
XR_SESSION_STATE_SYNCHRONIZED state to the
XR_SESSION_STATE_STOPPING state, to request that the application end
this session.
If the system supports a user engagement sensor and the session is in the
XR_SESSION_STATE_IDLE state, the runtime may wait until the user
starts engaging with the device before transitioning to the
XR_SESSION_STATE_READY state.
Receiving the XR_SESSION_STATE_SYNCHRONIZED state indicates that the
application has synchronized its frame loop with
the runtime, but its frames are not visible to the user.
The application should continue running its frame loop by calling
xrWaitFrame, xrBeginFrame and xrEndFrame, although it
should avoid heavy GPU work so that other visible applications can take CPU
and GPU precedence.
The application can save resources here by skipping rendering and not
submitting any composition layers until xrWaitFrame returns an
XrFrameState with shouldRender set to true.
A runtime may use this frame synchronization to facilitate seamless
switching from a previous XR application to this application on a frame
boundary.
Receiving the XR_SESSION_STATE_VISIBLE state indicates that the
application has synchronized its frame loop with
the runtime, and the session’s frames will be visible to the user, but the
session is not eligible to receive XR input.
An application may be visible but not have focus, for example when the
runtime is composing a modal pop-up on top of the application’s rendered
frames.
The application should continue running its frame loop, rendering and
submitting its composition layers, although it may wish to pause its
experience, as users cannot interact with the application at this time.
It is important for applications to continue rendering when visible, even
when they do not have focus, so the user continues to see something
reasonable underneath modal pop-ups.
Runtimes should make input actions inactive while the application is
unfocused, and applications should react to an inactive input action by
skipping rendering of that action’s input avatar (depictions of hands or
other tracked objects controlled by the user).
Receiving the XR_SESSION_STATE_FOCUSED state indicates that the
application has synchronized its frame loop with
the runtime, the session’s frames will be visible to the user, and the
session is eligible to receive XR input.
The runtime should only give one session XR input focus at any given time.
The application should be running its frame loop, rendering and submitting
composition layers, including input avatars (depictions of hands or other
tracked objects controlled by the user) for any input actions that are
active.
The runtime should avoid rendering its own input avatars when an
application is focused, unless input from a given source is being captured
by the runtime at the moment.
Receiving the XR_SESSION_STATE_STOPPING state indicates that the
runtime has determined that the application should halt its rendering loop.
Applications should exit their rendering loop and call xrEndSession
when in this state.
A possible reason for this would be to minimize contention between multiple
applications.
If the system supports a user engagement sensor and the session is running,
the runtime may transition to the XR_SESSION_STATE_STOPPING state
when the user stops engaging with the device.
Receiving the XR_SESSION_STATE_EXITING state indicates the runtime
wishes the application to terminate its XR experience, typically due to a
user request via a runtime user interface.
Applications should gracefully end their process when in this state if they
do not have a non-XR user experience.
Receiving the XR_SESSION_STATE_LOSS_PENDING state indicates the
runtime is no longer able to operate with the current session, for example
due to the loss of a display hardware connection.
An application should call xrDestroySession and may end its process
or decide to poll xrGetSystem at some reasonable cadence to get a new
XrSystemId, and re-initialize all graphics resources related to the
new system, and then create a new session using xrCreateSession.
After the event is queued, subsequent calls to functions that accept
XrSession parameters must no longer return any success code other
than XR_SESSION_LOSS_PENDING for the given XrSession handle.
The XR_SESSION_LOSS_PENDING success result is returned for an
unspecified grace period of time, and the functions that return it simulate
success in their behavior.
If the runtime has no reasonable way to successfully complete a given
function (e.g. xrCreateSwapchain) when a lost session is pending, or
if the runtime is not able to provide the application a grace period, the
runtime may return XR_ERROR_SESSION_LOST.
Thereafter, functions which accept XrSession parameters for the lost
session may return XR_ERROR_SESSION_LOST to indicate that the
function failed and the given session was lost.
The XrSession handle and child handles are henceforth unusable and
should be destroyed by the application in order to immediately free up
resources associated with those handles.
10. Rendering
10.1. Swapchain Image Management
XR_DEFINE_HANDLE(XrSwapchain)
Most XR applications present rendered images to the user. To allow this, the runtime provides collections of images organized in "swapchains" for the application to render into and submit. Note that these do not necessarily correspond to objects defined by any given graphics API named "swapchains". The runtime must allow applications to create multiple swapchains.
Swapchain image format support by the runtime is reported through use of the xrEnumerateSwapchainFormats function.
Swapchain images can be 2D or 2D Array.
Rendering operations involving composition of submitted layers are assumed
to be internally performed by the runtime in linear color space.
Images intended to be interpreted as being non-linear-encoded ("sRGB") must
be created using an API-specific "sRGB" format (e.g.
DXGI_FORMAT_R8G8B8A8_UNORM_SRGB, GL_SRGB8_ALPHA8,
VK_FORMAT_R8G8B8A8_SRGB) to signal the need for sRGB-to-linear
conversion (whether automatic or manual) when sampled by the runtime.
All other formats will be treated as linear values.
OpenXR applications should avoid submitting linear encoded 8 bit color data
(e.g. DXGI_FORMAT_R8G8B8A8_UNORM) whenever possible as it may result
in color banding.
Note
For additional information, see: Gritz, L. and d’Eon, E. 2007. The Importance of Being Linear. In: H. Nguyen, ed., GPU Gems 3. Addison-Wesley Professional. https://developer.nvidia.com/gpugems/gpugems3/part-iv-image-effects/chapter-24-importance-being-linear
Note
DXGI resources will be created with their associated TYPELESS format, but the runtime will use the application-specified format for reading the data.
The xrEnumerateSwapchainFormats function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrEnumerateSwapchainFormats(
XrSession session,
uint32_t formatCapacityInput,
uint32_t* formatCountOutput,
int64_t* formats);
xrEnumerateSwapchainFormats enumerates the texture formats supported
by the current session.
The types of formats returned depend on the graphics API specified by the
graphics binding structure passed to xrCreateSession.
For example, if a DirectX graphics API was specified, then the enumerated
formats correspond to the DXGI formats, such as
DXGI_FORMAT_R8G8B8A8_UNORM_SRGB.
Texture formats should be in order from highest to lowest runtime
preference.
The application should use the highest preference format that it supports
for optimal performance and quality.
Runtimes should support R8G8B8A8 and B8G8R8A8 formats with
non-linear ("sRGB") encoding if possible.
With an OpenGL-based graphics API, the texture formats correspond to OpenGL internal formats.
With a Direct3D-based graphics API, xrEnumerateSwapchainFormats never
returns typeless formats (e.g. DXGI_FORMAT_R8G8B8A8_TYPELESS).
Only concrete formats are returned, and only concrete formats may be
specified by applications for swapchain creation.
Runtimes must always return identical buffer contents from this enumeration for the lifetime of the session.
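The two-call enumeration idiom for this function, followed by format selection, can be sketched as follows. The selection helper below is a hypothetical, self-contained illustration: it scans the runtime's preference-ordered list for the first format the application also supports; the OpenXR calls that produce the list are shown only in comments so the helper stays standalone.

```c
#include <stdint.h>

/* Hypothetical helper: given the runtime's preference-ordered format list
 * (as returned by xrEnumerateSwapchainFormats) and the formats the
 * application can render to, return the first runtime-preferred format the
 * application supports, or 0 (not a valid format in any graphics API here)
 * if there is no match.
 *
 * Obtaining the runtime list follows the usual two-call idiom:
 *   uint32_t count = 0;
 *   xrEnumerateSwapchainFormats(session, 0, &count, NULL);
 *   int64_t *formats = malloc(count * sizeof(int64_t));
 *   xrEnumerateSwapchainFormats(session, count, &count, formats);
 */
int64_t choose_swapchain_format(const int64_t *runtimePreferred,
                                uint32_t runtimeCount,
                                const int64_t *appSupported,
                                uint32_t appCount)
{
    for (uint32_t i = 0; i < runtimeCount; ++i) {
        for (uint32_t j = 0; j < appCount; ++j) {
            if (runtimePreferred[i] == appSupported[j]) {
                return runtimePreferred[i];
            }
        }
    }
    return 0; /* no supported format found */
}
```

Because the runtime orders the list from highest to lowest preference, taking the first match honors the recommendation above to use the highest-preference supported format.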
The xrCreateSwapchain function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrCreateSwapchain(
XrSession session,
const XrSwapchainCreateInfo* createInfo,
XrSwapchain* swapchain);
Creates an XrSwapchain handle.
The returned swapchain handle may be subsequently used in API calls.
Multiple XrSwapchain handles may exist simultaneously, up to some
limit imposed by the runtime.
The XrSwapchain handle must be eventually freed via the
xrDestroySwapchain function.
The runtime must return XR_ERROR_SWAPCHAIN_FORMAT_UNSUPPORTED if the
image format specified in the XrSwapchainCreateInfo is unsupported.
The runtime must return XR_ERROR_FEATURE_UNSUPPORTED if any bit of
the create or usage flags specified in the XrSwapchainCreateInfo is
unsupported.
The XrSwapchainCreateInfo structure is defined as:
typedef struct XrSwapchainCreateInfo {
XrStructureType type;
const void* next;
XrSwapchainCreateFlags createFlags;
XrSwapchainUsageFlags usageFlags;
int64_t format;
uint32_t sampleCount;
uint32_t width;
uint32_t height;
uint32_t faceCount;
uint32_t arraySize;
uint32_t mipCount;
} XrSwapchainCreateInfo;
The XrSwapchainCreateInfo::createFlags member is of the
following type, and contains a bitwise-OR of zero or more of the bits
defined in XrSwapchainCreateFlagBits.
typedef XrFlags64 XrSwapchainCreateFlags;
Valid bits for XrSwapchainCreateFlags are defined by XrSwapchainCreateFlagBits, which is specified as:
// Flag bits for XrSwapchainCreateFlags
static const XrSwapchainCreateFlags XR_SWAPCHAIN_CREATE_PROTECTED_CONTENT_BIT = 0x00000001;
static const XrSwapchainCreateFlags XR_SWAPCHAIN_CREATE_STATIC_IMAGE_BIT = 0x00000002;
The flag bits have the following meanings:
A runtime may implement any of these, but is not required to.
A runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateSwapchain if an XrSwapchainCreateFlags bit is requested
but not implemented.
XrSwapchainUsageFlags specify the intended usage of the swapchain
images.
The XrSwapchainCreateInfo::usageFlags member is of this type,
and contains a bitwise-OR of one or more of the bits defined in
XrSwapchainUsageFlagBits.
typedef XrFlags64 XrSwapchainUsageFlags;
When images are created, the runtime needs to know how the images are used in a way that requires more information than simply the image format. The XrSwapchainCreateInfo passed to xrCreateSwapchain must match the intended usage.
Flags include:
// Flag bits for XrSwapchainUsageFlags
static const XrSwapchainUsageFlags XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT = 0x00000001;
static const XrSwapchainUsageFlags XR_SWAPCHAIN_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT = 0x00000002;
static const XrSwapchainUsageFlags XR_SWAPCHAIN_USAGE_UNORDERED_ACCESS_BIT = 0x00000004;
static const XrSwapchainUsageFlags XR_SWAPCHAIN_USAGE_TRANSFER_SRC_BIT = 0x00000008;
static const XrSwapchainUsageFlags XR_SWAPCHAIN_USAGE_TRANSFER_DST_BIT = 0x00000010;
static const XrSwapchainUsageFlags XR_SWAPCHAIN_USAGE_SAMPLED_BIT = 0x00000020;
static const XrSwapchainUsageFlags XR_SWAPCHAIN_USAGE_MUTABLE_FORMAT_BIT = 0x00000040;
static const XrSwapchainUsageFlags XR_SWAPCHAIN_USAGE_INPUT_ATTACHMENT_BIT_MND = 0x00000080;
static const XrSwapchainUsageFlags XR_SWAPCHAIN_USAGE_INPUT_ATTACHMENT_BIT_KHR = 0x00000080; // alias of XR_SWAPCHAIN_USAGE_INPUT_ATTACHMENT_BIT_MND
The flag bits have the following meanings:
The number of images in each swapchain is implementation-defined except in the case of a static swapchain. To obtain the number of images actually allocated, call xrEnumerateSwapchainImages.
With a Direct3D-based graphics API, the swapchain returned by xrCreateSwapchain will be a typeless format if the requested format has a typeless analogue. Applications are required to reinterpret the swapchain as a compatible non-typeless type. Upon submitting such swapchains to the runtime, they are interpreted as the format specified by the application in the XrSwapchainCreateInfo.
Swapchains will be created with graphics API-specific flags appropriate to the type of underlying image and its usage.
Runtimes must honor underlying graphics API limits when creating resources.
xrEnumerateSwapchainFormats never returns typeless formats (e.g.
DXGI_FORMAT_R8G8B8A8_TYPELESS).
Only concrete formats are returned, and only concrete formats may be
specified by applications for swapchain creation.
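A typical color swapchain creation for a stereo view configuration might look like the following sketch. The format value and dimensions are placeholders that would normally come from xrEnumerateSwapchainFormats and xrEnumerateViewConfigurationViews, and the helper name is hypothetical.

```c
#include <openxr/openxr.h>

// Sketch: create a 2D-array color swapchain with one array layer per eye.
// `format`, `width`, and `height` are placeholders; in practice they come
// from xrEnumerateSwapchainFormats and xrEnumerateViewConfigurationViews.
static XrResult create_color_swapchain(XrSession session, int64_t format,
                                       uint32_t width, uint32_t height,
                                       XrSwapchain *swapchain)
{
    XrSwapchainCreateInfo createInfo = {XR_TYPE_SWAPCHAIN_CREATE_INFO};
    createInfo.usageFlags = XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT |
                            XR_SWAPCHAIN_USAGE_SAMPLED_BIT;
    createInfo.format = format;
    createInfo.sampleCount = 1;
    createInfo.width = width;
    createInfo.height = height;
    createInfo.faceCount = 1;   // 1 for non-cubemap images
    createInfo.arraySize = 2;   // one array layer per eye
    createInfo.mipCount = 1;
    return xrCreateSwapchain(session, &createInfo, swapchain);
}
```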
The xrDestroySwapchain function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrDestroySwapchain(
XrSwapchain swapchain);
All submitted graphics API commands that refer to swapchain must have
completed execution before calling this function.
Runtimes may continue to utilize swapchain images after xrDestroySwapchain is called.
Swapchain images are acquired, waited on, and released by index, but the number of images in a swapchain is implementation-defined. Additionally, rendering to images requires access to the underlying image primitive of the graphics API being used. Applications may query and cache the images at any time after swapchain creation.
The xrEnumerateSwapchainImages function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrEnumerateSwapchainImages(
XrSwapchain swapchain,
uint32_t imageCapacityInput,
uint32_t* imageCountOutput,
XrSwapchainImageBaseHeader* images);
Fills an array of graphics API-specific XrSwapchainImage structures.
The resources must be constant and valid for the lifetime of the
XrSwapchain.
Runtimes must always return identical buffer contents from this enumeration for the lifetime of the swapchain.
Note: images is a pointer to an array of structures of graphics
API-specific type, not an array of structure pointers.
The pointer submitted as images will be treated as an array of the
expected graphics API-specific type based on the graphics API used at
session creation time.
If the type member of any array element accessed in this way does not
match the expected value, the runtime must return
XR_ERROR_VALIDATION_FAILURE.
Note
Under a typical memory model, a runtime must treat the supplied pointer as an opaque blob beginning with XrSwapchainImageBaseHeader, until after it has verified the XrSwapchainImageBaseHeader::type.
The XrSwapchainImageBaseHeader structure is defined as:
typedef struct XrSwapchainImageBaseHeader {
XrStructureType type;
void* next;
} XrSwapchainImageBaseHeader;
The XrSwapchainImageBaseHeader is a base structure that is extended by
graphics API-specific XrSwapchainImage* child structures.
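As an illustration of the array-of-structures contract described above, the following sketch assumes a session created with the Vulkan graphics binding (XR_KHR_vulkan_enable), so the concrete element type is XrSwapchainImageVulkanKHR. The fixed-capacity array and cap value are application-side choices made for brevity.

```c
#define XR_USE_GRAPHICS_API_VULKAN
#include <vulkan/vulkan.h>
#include <openxr/openxr.h>
#include <openxr/openxr_platform.h>

// Arbitrary application-side cap; the runtime reports the real count.
#define MAX_SWAPCHAIN_IMAGES 8

// Sketch: query the swapchain's Vulkan images via the two-call idiom.
static uint32_t get_vulkan_swapchain_images(XrSwapchain swapchain,
                                            VkImage outImages[MAX_SWAPCHAIN_IMAGES])
{
    uint32_t count = 0;
    xrEnumerateSwapchainImages(swapchain, 0, &count, NULL);
    if (count > MAX_SWAPCHAIN_IMAGES)
        return 0; /* capacity too small; a real app would allocate dynamically */

    XrSwapchainImageVulkanKHR images[MAX_SWAPCHAIN_IMAGES];
    for (uint32_t i = 0; i < count; ++i) {
        images[i].type = XR_TYPE_SWAPCHAIN_IMAGE_VULKAN_KHR;
        images[i].next = NULL;
    }
    // The pointer is an array of the graphics-API-specific structure type,
    // passed via the base-header pointer type.
    xrEnumerateSwapchainImages(swapchain, count, &count,
                               (XrSwapchainImageBaseHeader *)images);
    for (uint32_t i = 0; i < count; ++i)
        outImages[i] = images[i].image;
    return count;
}
```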
Before an application builds graphics API command buffers that refer to an image in a swapchain, it must acquire the image from the swapchain. The acquire operation determines the index of the next image to be used in the swapchain. The order in which images are acquired is undefined. The runtime must allow the application to acquire more than one image from a single (non-static) swapchain at a time, for example if the application implements a multiple frame deep rendering pipeline.
The xrAcquireSwapchainImage function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrAcquireSwapchainImage(
XrSwapchain swapchain,
const XrSwapchainImageAcquireInfo* acquireInfo,
uint32_t* index);
Acquires the image corresponding to the index position in the array
returned by xrEnumerateSwapchainImages.
The runtime must return XR_ERROR_CALL_ORDER_INVALID if the next
available index has already been acquired and not yet released with
xrReleaseSwapchainImage.
If the swapchain was created with the
XR_SWAPCHAIN_CREATE_STATIC_IMAGE_BIT set in
XrSwapchainCreateInfo::createFlags, this function must not have
been previously called for this swapchain.
The runtime must return XR_ERROR_CALL_ORDER_INVALID if the
swapchain was created with the XR_SWAPCHAIN_CREATE_STATIC_IMAGE_BIT
set in XrSwapchainCreateInfo::createFlags and this function has
been successfully called previously for this swapchain.
This function only provides the index of the swapchain image, for example for use in recording command buffers. It does not wait for the image to be usable by the application. The application must call xrWaitSwapchainImage for each "acquire" call before submitting graphics commands that write to the image.
The XrSwapchainImageAcquireInfo structure is defined as:
typedef struct XrSwapchainImageAcquireInfo {
XrStructureType type;
const void* next;
} XrSwapchainImageAcquireInfo;
Because this structure only exists to support extension-specific structures,
xrAcquireSwapchainImage will accept a NULL argument for
xrAcquireSwapchainImage::acquireInfo for applications that are
not using any relevant extensions.
The xrWaitSwapchainImage function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrWaitSwapchainImage(
XrSwapchain swapchain,
const XrSwapchainImageWaitInfo* waitInfo);
Before an application begins writing to a swapchain image, it must first wait on the image, to avoid writing to it before the compositor has finished reading from it. xrWaitSwapchainImage will implicitly wait on the oldest acquired swapchain image which has not yet been successfully waited on. Once a swapchain image has been successfully waited on without timeout, the application must release the image before waiting on the next acquired swapchain image.
This function may block for longer than the timeout specified in XrSwapchainImageWaitInfo due to scheduling or contention.
If the timeout expires without the image becoming available for writing,
XR_TIMEOUT_EXPIRED must be returned.
If xrWaitSwapchainImage returns XR_TIMEOUT_EXPIRED, the next
call to xrWaitSwapchainImage will wait on the same image index again
until the function succeeds with XR_SUCCESS.
Note that this is not an error code;
XR_SUCCEEDED(XR_TIMEOUT_EXPIRED) is true.
The runtime must eventually relinquish ownership of a swapchain image to the application and must not block indefinitely.
The runtime must return XR_ERROR_CALL_ORDER_INVALID if no image has
been acquired by calling xrAcquireSwapchainImage.
The XrSwapchainImageWaitInfo structure describes a swapchain image wait operation. It is defined as:
typedef struct XrSwapchainImageWaitInfo {
XrStructureType type;
const void* next;
XrDuration timeout;
} XrSwapchainImageWaitInfo;
Once an application is done submitting commands that reference the swapchain image, the application must release the swapchain image. xrReleaseSwapchainImage will implicitly release the oldest swapchain image which has been acquired. The swapchain image must have been successfully waited on without timeout before it is released. xrEndFrame will use the most recently released swapchain image. In each frame submitted to the compositor, only one image index from each swapchain will be used. Note that in case the swapchain contains 2D image arrays, one array is referenced per swapchain index and thus the whole image array may be used in one frame.
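Putting the acquire/wait/release contract together, per-frame use of one swapchain might be sketched as follows. render_to_image is a hypothetical application function, the graphics-API specifics are elided, and error handling is minimal.

```c
#include <openxr/openxr.h>

// Hypothetical per-frame rendering into one swapchain: acquire an index,
// wait until the compositor has finished reading the image, record and
// submit rendering work, then release the image so that a subsequent
// xrEndFrame can reference it.
extern void render_to_image(uint32_t imageIndex); // app-specific, elided

static XrResult render_one_swapchain(XrSwapchain swapchain)
{
    uint32_t imageIndex = 0;
    XrResult result = xrAcquireSwapchainImage(swapchain, NULL, &imageIndex);
    if (XR_FAILED(result))
        return result;

    XrSwapchainImageWaitInfo waitInfo = {XR_TYPE_SWAPCHAIN_IMAGE_WAIT_INFO};
    waitInfo.timeout = XR_INFINITE_DURATION; // or a finite XrDuration in ns
    result = xrWaitSwapchainImage(swapchain, &waitInfo);
    if (result == XR_TIMEOUT_EXPIRED)
        return result; // success code: wait on the same image index again
    if (XR_FAILED(result))
        return result;

    render_to_image(imageIndex); // write graphics commands for this image

    return xrReleaseSwapchainImage(swapchain, NULL);
}
```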
The xrReleaseSwapchainImage function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrReleaseSwapchainImage(
XrSwapchain swapchain,
const XrSwapchainImageReleaseInfo* releaseInfo);
If the swapchain was created with the
XR_SWAPCHAIN_CREATE_STATIC_IMAGE_BIT set in
XrSwapchainCreateInfo::createFlags, this function
must not have been previously called for this swapchain.
The runtime must return XR_ERROR_CALL_ORDER_INVALID if no image has
been waited on by calling xrWaitSwapchainImage.
The XrSwapchainImageReleaseInfo structure is defined as:
typedef struct XrSwapchainImageReleaseInfo {
XrStructureType type;
const void* next;
} XrSwapchainImageReleaseInfo;
Because this structure only exists to support extension-specific structures,
xrReleaseSwapchainImage will accept a NULL argument for
xrReleaseSwapchainImage::releaseInfo for applications that are
not using any relevant extensions.
10.2. View and Projection State
An application uses xrLocateViews to retrieve the viewer pose and projection parameters needed to render each view for use in a composition projection layer.
The xrLocateViews function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrLocateViews(
XrSession session,
const XrViewLocateInfo* viewLocateInfo,
XrViewState* viewState,
uint32_t viewCapacityInput,
uint32_t* viewCountOutput,
XrView* views);
The xrLocateViews function returns the view and projection info for a particular display time. This time is typically the target display time for a given frame. Repeatedly calling xrLocateViews with the same time may not necessarily return the same result. Instead the prediction gets increasingly accurate as the function is called closer to the given time for which a prediction is made. This allows an application to get the predicted views as late as possible in its pipeline to get the least amount of latency and prediction error.
xrLocateViews returns an array of XrView elements, one for each view of the specified view configuration type, along with an XrViewState containing additional state data shared across all views. The eye each view corresponds to is statically defined in XrViewConfigurationType in case the application wants to apply eye-specific rendering traits. The XrViewState and XrView member data may change on subsequent calls to xrLocateViews, and so applications must not assume it to be constant.
If an application passes a viewLocateInfo with a
XrViewLocateInfo::viewConfigurationType that was not passed in
the session’s call to xrBeginSession via the
XrSessionBeginInfo::primaryViewConfigurationType, or enabled
through an extension, then the runtime must return
XR_ERROR_VALIDATION_FAILURE.
The XrViewLocateInfo structure is defined as:
typedef struct XrViewLocateInfo {
XrStructureType type;
const void* next;
XrViewConfigurationType viewConfigurationType;
XrTime displayTime;
XrSpace space;
} XrViewLocateInfo;
The XrViewLocateInfo structure contains the display time and space used to locate the view XrView structures.
The runtime must return error
XR_ERROR_VIEW_CONFIGURATION_TYPE_UNSUPPORTED if the given
viewConfigurationType is not one of the supported types reported by
xrEnumerateViewConfigurations.
The XrViewState structure is defined as:
typedef struct XrViewState {
XrStructureType type;
void* next;
XrViewStateFlags viewStateFlags;
} XrViewState;
The XrViewState contains additional view state from xrLocateViews common to all views of the active view configuration.
The XrViewStateFlags specifies the validity and quality of the
corresponding XrView array returned by xrLocateViews.
The XrViewState::viewStateFlags member is of this type, and
contains a bitwise-OR of zero or more of the bits defined in
XrViewStateFlagBits.
typedef XrFlags64 XrViewStateFlags;
Valid bits for XrViewStateFlags are defined by XrViewStateFlagBits, which is specified as:
// Flag bits for XrViewStateFlags
static const XrViewStateFlags XR_VIEW_STATE_ORIENTATION_VALID_BIT = 0x00000001;
static const XrViewStateFlags XR_VIEW_STATE_POSITION_VALID_BIT = 0x00000002;
static const XrViewStateFlags XR_VIEW_STATE_ORIENTATION_TRACKED_BIT = 0x00000004;
static const XrViewStateFlags XR_VIEW_STATE_POSITION_TRACKED_BIT = 0x00000008;
The flag bits have the following meanings:
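A hedged sketch of locating the stereo views for a frame and checking the validity bits before using the poses follows; the helper name is hypothetical, buffer sizes are fixed for brevity, and the display time and space are assumed to come from the frame loop.

```c
#include <openxr/openxr.h>

// Sketch: locate the two primary-stereo views for a given display time in a
// given space, and report how many views are safe to use.
static uint32_t locate_stereo_views(XrSession session, XrSpace space,
                                    XrTime displayTime, XrView views[2])
{
    XrViewLocateInfo locateInfo = {XR_TYPE_VIEW_LOCATE_INFO};
    locateInfo.viewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
    locateInfo.displayTime = displayTime;
    locateInfo.space = space;

    views[0].type = XR_TYPE_VIEW; views[0].next = NULL;
    views[1].type = XR_TYPE_VIEW; views[1].next = NULL;

    XrViewState viewState = {XR_TYPE_VIEW_STATE};
    uint32_t viewCount = 0;
    if (XR_FAILED(xrLocateViews(session, &locateInfo, &viewState,
                                2, &viewCount, views)))
        return 0;

    // Only use the pose data when both orientation and position are valid;
    // otherwise the application should fall back (e.g. skip rendering).
    XrViewStateFlags needed = XR_VIEW_STATE_ORIENTATION_VALID_BIT |
                              XR_VIEW_STATE_POSITION_VALID_BIT;
    if ((viewState.viewStateFlags & needed) != needed)
        return 0;
    return viewCount;
}
```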
10.3. Frame Synchronization
An application synchronizes its rendering loop to the runtime by calling xrWaitFrame.
The xrWaitFrame function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrWaitFrame(
XrSession session,
const XrFrameWaitInfo* frameWaitInfo,
XrFrameState* frameState);
xrWaitFrame throttles the application frame loop in order to synchronize application frame submissions with the display. xrWaitFrame returns a predicted display time for the next time that the runtime predicts a composited frame will be displayed. The runtime may affect this computation by changing the return values and throttling of xrWaitFrame in response to feedback from frame submission and completion times in xrEndFrame. A subsequent xrWaitFrame call must block until the previous frame has been begun with xrBeginFrame and must unblock independently of the corresponding call to xrEndFrame. Refer to xrBeginSession for details on how a transition to session running resets the frame function call order.
When less than one frame interval has passed since the previous return from xrWaitFrame, the runtime should block until the beginning of the next frame interval. If more than one frame interval has passed since the last return from xrWaitFrame, the runtime may return immediately or block until the beginning of the next frame interval.
In the case that an application has pipelined frame submissions, the application should compute the appropriate target display time using both the predicted display time and predicted display interval. The application should use the computed target display time when requesting space and view locations for rendering.
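The target-time computation for a pipelined renderer can be sketched as simple arithmetic; this helper, its name, and the typedefs are hypothetical illustrations of the recommendation above, assuming the rendering work begun now reaches the display a whole number of display periods after the predicted time.

```c
#include <stdint.h>

typedef int64_t XrTime_ns;      /* stand-ins for XrTime / XrDuration (ns) */
typedef int64_t XrDuration_ns;

/* Hypothetical helper: if the frame being started now will actually reach
 * the display `extraPipelineFrames` display periods after the frame that
 * xrWaitFrame just predicted, the target display time is the predicted
 * display time plus that many whole display periods. */
XrTime_ns compute_target_display_time(XrTime_ns predictedDisplayTime,
                                      XrDuration_ns predictedDisplayPeriod,
                                      uint32_t extraPipelineFrames)
{
    return predictedDisplayTime +
           (XrDuration_ns)extraPipelineFrames * predictedDisplayPeriod;
}
```

The resulting time is what the application would pass when requesting space and view locations for that frame.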
The XrFrameState::predictedDisplayTime returned by
xrWaitFrame must be monotonically increasing.
The runtime may dynamically adjust the start time of the frame interval relative to the display hardware’s refresh cycle to minimize graphics processor contention between the application and the compositor.
xrWaitFrame must be callable from any thread, including a different thread than xrBeginFrame/xrEndFrame are being called from.
Calling xrWaitFrame must be externally synchronized by the application; concurrent calls may result in undefined behavior.
The runtime must return XR_ERROR_SESSION_NOT_RUNNING if the
session is not running.
Note
The engine simulation should advance based on the display time. Every stage in the engine pipeline should use the exact same display time for one particular application-generated frame. An accurate and consistent display time across all stages and threads in the engine pipeline is important to avoid object motion judder. If the application has multiple pipeline stages, the application should pass its computed display time through its pipeline, as xrWaitFrame must be called only once per frame.
The XrFrameWaitInfo structure is defined as:
typedef struct XrFrameWaitInfo {
XrStructureType type;
const void* next;
} XrFrameWaitInfo;
Because this structure only exists to support extension-specific structures,
xrWaitFrame must accept a NULL argument for
xrWaitFrame::frameWaitInfo for applications that are not using
any relevant extensions.
The XrFrameState structure is defined as:
typedef struct XrFrameState {
XrStructureType type;
void* next;
XrTime predictedDisplayTime;
XrDuration predictedDisplayPeriod;
XrBool32 shouldRender;
} XrFrameState;
XrFrameState describes the time at which the next frame will be
displayed to the user.
predictedDisplayTime must refer to the midpoint of the interval
during which the frame is displayed.
The runtime may report a different predictedDisplayPeriod from the
hardware’s refresh cycle.
For any frame where shouldRender is XR_FALSE, the application
should avoid heavy GPU work for that frame, for example by not rendering
its layers.
This typically happens when the application is transitioning into or out of
a running session, or when some system UI is fully covering the application
at the moment.
As long as the session is running, the application
should keep running the frame loop to maintain the frame synchronization to
the runtime, even if this requires calling xrEndFrame with all layers
omitted.
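The behavior described above can be sketched as a minimal frame iteration; render_layers and the layer array size are hypothetical application details, an opaque (VR) blend mode is assumed, and real code would check every XrResult.

```c
#include <openxr/openxr.h>

// Minimal sketch of one frame iteration: wait, begin, optionally render,
// end. When shouldRender is false the frame is still submitted, but with
// zero layers, to keep the frame loop synchronized with the runtime.
extern uint32_t render_layers(XrTime displayTime,
                              const XrCompositionLayerBaseHeader **layers);

static void run_one_frame(XrSession session)
{
    XrFrameState frameState = {XR_TYPE_FRAME_STATE};
    xrWaitFrame(session, NULL, &frameState); // throttles the frame loop

    xrBeginFrame(session, NULL);

    const XrCompositionLayerBaseHeader *layers[16]; // arbitrary app cap
    uint32_t layerCount = 0;
    if (frameState.shouldRender == XR_TRUE) {
        // App-specific: render the views and fill composition layers.
        layerCount = render_layers(frameState.predictedDisplayTime, layers);
    }

    XrFrameEndInfo endInfo = {XR_TYPE_FRAME_END_INFO};
    endInfo.displayTime = frameState.predictedDisplayTime;
    endInfo.environmentBlendMode = XR_ENVIRONMENT_BLEND_MODE_OPAQUE;
    endInfo.layerCount = layerCount;
    endInfo.layers = layers;
    xrEndFrame(session, &endInfo);
}
```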
10.4. Frame Submission
Every application must call xrBeginFrame before calling
xrEndFrame, and should call xrEndFrame before calling
xrBeginFrame again.
Calling xrEndFrame again without a prior call to xrBeginFrame
must result in XR_ERROR_CALL_ORDER_INVALID being returned by
xrEndFrame.
An application may call xrBeginFrame again if the prior
xrEndFrame fails or if the application wishes to discard an
in-progress frame.
A successful call to xrBeginFrame again with no intervening
xrEndFrame call must result in the success code
XR_FRAME_DISCARDED being returned from xrBeginFrame.
In this case it is assumed that the xrBeginFrame refers to the next
frame and the previously begun frame is forfeited by the application.
An application may call xrEndFrame without having called
xrReleaseSwapchainImage since the previous call to xrEndFrame
for any swapchain passed to xrEndFrame.
Applications should call xrBeginFrame right before executing any
graphics device work for a given frame, as opposed to calling it afterwards.
The runtime must only compose frames whose xrBeginFrame and
xrEndFrame both return success codes.
While xrBeginFrame and xrEndFrame do not need to be called on
the same thread, the application must handle synchronization if they are
called on separate threads.
The xrBeginFrame function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrBeginFrame(
XrSession session,
const XrFrameBeginInfo* frameBeginInfo);
xrBeginFrame is called prior to the start of frame rendering.
The application should still call xrBeginFrame but omit rendering
work for the frame if XrFrameState::shouldRender is
XR_FALSE.
Runtimes must not perform frame synchronization or throttling through the xrBeginFrame function and should instead do so through xrWaitFrame.
The runtime must return the error code XR_ERROR_CALL_ORDER_INVALID if
there was no corresponding successful call to xrWaitFrame.
The runtime must return the success code XR_FRAME_DISCARDED if a
prior xrBeginFrame has been called without an intervening call to
xrEndFrame.
Refer to xrBeginSession for details on how a transition to
session running resets the frame function call order.
The runtime must return XR_ERROR_SESSION_NOT_RUNNING if the
session is not running.
The XrFrameBeginInfo structure is defined as:
typedef struct XrFrameBeginInfo {
XrStructureType type;
const void* next;
} XrFrameBeginInfo;
Because this structure only exists to support extension-specific structures,
xrBeginFrame will accept a NULL argument for
xrBeginFrame::frameBeginInfo for applications that are not using
any relevant extensions.
The xrEndFrame function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrEndFrame(
XrSession session,
const XrFrameEndInfo* frameEndInfo);
xrEndFrame may return immediately to the application.
XrFrameEndInfo::displayTime should be computed using values
returned by xrWaitFrame.
The runtime should be robust against variations in the timing of calls to
xrWaitFrame, since a pipelined system may call xrWaitFrame on a
separate thread from xrBeginFrame and xrEndFrame without any
synchronization guarantees.
Note
An accurate predicted display time is very important to avoid black pull-in by reprojection and to reduce motion judder in case the runtime does not implement a translational reprojection. Reprojection should never display images before the display refresh period they were predicted for, even if they are completed early, because this will cause motion judder just the same. In other words, the better the predicted display time, the less latency experienced by the user.
Every call to xrEndFrame must be preceded by a successful call to
xrBeginFrame.
Failure to do so must result in XR_ERROR_CALL_ORDER_INVALID being
returned by xrEndFrame.
Refer to xrBeginSession for details on how a transition to
session running resets the frame function call order.
XrFrameEndInfo may reference swapchains into which the application
has rendered for this frame.
From each XrSwapchain only one image index is implicitly referenced
per frame, the one corresponding to the last call to
xrReleaseSwapchainImage.
However, a specific swapchain (and by extension a specific swapchain image
index) may be referenced in XrFrameEndInfo multiple times.
For example, an application can render a side-by-side image into a single swapchain image and reference it twice, with differing image rectangles, in different layers.
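The side-by-side case described above can be sketched as follows, in the style of the specification's examples. The names swapchain, viewWidth, and viewHeight are hypothetical, and only the sub-image fields are shown:

```cpp
// Hypothetical sketch: one side-by-side swapchain image referenced twice.
// `swapchain` holds a (2 * viewWidth) x viewHeight image; the left view
// samples the left half and the right view samples the right half.
XrCompositionLayerProjectionView views[2];
for (uint32_t eye = 0; eye < 2; eye++) {
    views[eye] = {XR_TYPE_COMPOSITION_LAYER_PROJECTION_VIEW};
    views[eye].subImage.swapchain = swapchain;  // same swapchain in both views
    views[eye].subImage.imageArrayIndex = 0;
    views[eye].subImage.imageRect.offset = {(int32_t)(eye * viewWidth), 0};
    views[eye].subImage.imageRect.extent = {(int32_t)viewWidth, (int32_t)viewHeight};
    // views[eye].pose and views[eye].fov come from xrLocateViews
}
```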
If no layers are provided then the display must be cleared.
XR_ERROR_LAYER_INVALID must be returned if an unknown or unsupported layer type, or a NULL pointer, is passed as one of the XrFrameEndInfo::layers.
XR_ERROR_LAYER_INVALID must be returned if a layer references a
swapchain that has no released swapchain image.
XR_ERROR_LAYER_LIMIT_EXCEEDED must be returned if
XrFrameEndInfo::layerCount exceeds
XrSystemGraphicsProperties::maxLayerCount or if the runtime is unable
to composite the specified layers due to resource constraints.
XR_ERROR_SWAPCHAIN_RECT_INVALID must be returned if XrFrameEndInfo::layers contains a composition layer which references pixels outside of the associated swapchain image or whose image rect is negatively sized.
XR_ERROR_ENVIRONMENT_BLEND_MODE_UNSUPPORTED must be returned if and
only if the XrFrameEndInfo::environmentBlendMode was not
enumerated by xrEnumerateEnvironmentBlendModes for the
XrInstance and XrSystemId used to create session.
XR_ERROR_SESSION_NOT_RUNNING must be returned if the session
is not running.
Note
Applications should discard frames for which xrEndFrame returns a recoverable error, rather than attempt to resubmit the frame with different frame parameters, to provide a more consistent experience across different runtime implementations.
The XrFrameEndInfo structure is defined as:
typedef struct XrFrameEndInfo {
XrStructureType type;
const void* next;
XrTime displayTime;
XrEnvironmentBlendMode environmentBlendMode;
uint32_t layerCount;
const XrCompositionLayerBaseHeader* const* layers;
} XrFrameEndInfo;
All layers submitted to xrEndFrame will be presented to the primary view configuration of the running session.
10.5. Frame Rate
For every application-generated frame, the application may call xrEndFrame to submit the application-generated composition layers. In addition, the application must call xrWaitFrame when the application is ready to begin preparing the next set of frame layers. xrEndFrame may return immediately to the application, but xrWaitFrame must block for an amount of time that depends on throttling of the application by the runtime. The earliest the runtime will return from xrWaitFrame is when it determines that the application should start drawing the next frame.
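The cadence described above can be sketched as a minimal single-threaded loop, assuming a running session and eliding all rendering details:

```cpp
XrSession session; // previously initialized and running

XrFrameWaitInfo waitInfo{XR_TYPE_FRAME_WAIT_INFO};
XrFrameState frameState{XR_TYPE_FRAME_STATE};
CHK_XR(xrWaitFrame(session, &waitInfo, &frameState)); // runtime throttles here

CHK_XR(xrBeginFrame(session, NULL)); // NULL is valid when no extensions are used

if (frameState.shouldRender == XR_TRUE) {
    // acquire/wait/release swapchain images and render the frame here
}

XrFrameEndInfo endInfo{XR_TYPE_FRAME_END_INFO};
endInfo.displayTime = frameState.predictedDisplayTime; // from xrWaitFrame
endInfo.environmentBlendMode = XR_ENVIRONMENT_BLEND_MODE_OPAQUE;
endInfo.layerCount = 0;   // submitting no layers clears the display
endInfo.layers = NULL;
CHK_XR(xrEndFrame(session, &endInfo));
```

In a pipelined application the xrWaitFrame call for frame N+1 typically happens on a different thread than xrBeginFrame/xrEndFrame for frame N, but the one-thread form above shows the required call order.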
10.6. Compositing
Composition layers are submitted by the application via the xrEndFrame
call.
All composition layers to be drawn must be submitted with every
xrEndFrame call.
A layer that is omitted in this call will not be drawn by the runtime layer
compositor.
All views associated with projection layers must be supplied, or
XR_ERROR_VALIDATION_FAILURE must be returned by xrEndFrame.
Composition layers must be drawn in the same order as they are specified in via XrFrameEndInfo, with the 0th layer drawn first. Layers must be drawn with a "painter’s algorithm," with each successive layer potentially overwriting the destination layers whether or not the new layers are virtually closer to the viewer.
10.6.1. Composition Layer Flags
XrCompositionLayerFlags specifies options for individual composition layers, and contains a bitwise-OR of zero or more of the bits defined in XrCompositionLayerFlagBits.
typedef XrFlags64 XrCompositionLayerFlags;
Valid bits for XrCompositionLayerFlags are defined by XrCompositionLayerFlagBits, which is specified as:
// Flag bits for XrCompositionLayerFlags
// XR_COMPOSITION_LAYER_CORRECT_CHROMATIC_ABERRATION_BIT is deprecated and should not be used
static const XrCompositionLayerFlags XR_COMPOSITION_LAYER_CORRECT_CHROMATIC_ABERRATION_BIT = 0x00000001;
static const XrCompositionLayerFlags XR_COMPOSITION_LAYER_BLEND_TEXTURE_SOURCE_ALPHA_BIT = 0x00000002;
static const XrCompositionLayerFlags XR_COMPOSITION_LAYER_UNPREMULTIPLIED_ALPHA_BIT = 0x00000004;
static const XrCompositionLayerFlags XR_COMPOSITION_LAYER_INVERTED_ALPHA_BIT_EXT = 0x00000008;
The flag bits have the following meanings:
-
XR_COMPOSITION_LAYER_CORRECT_CHROMATIC_ABERRATION_BIT enables chromatic aberration correction when not done by default. This flag is deprecated.
-
XR_COMPOSITION_LAYER_BLEND_TEXTURE_SOURCE_ALPHA_BIT enables the layer texture alpha channel for blending.
-
XR_COMPOSITION_LAYER_UNPREMULTIPLIED_ALPHA_BIT indicates the texture color channels have not been premultiplied by the texture alpha channel.
-
XR_COMPOSITION_LAYER_INVERTED_ALPHA_BIT_EXT indicates the texture alpha channel is inverted, so an alpha value of 1.0 represents fully transparent and 0.0 fully opaque.
10.6.2. Composition Layer Blending
All types of composition layers are subject to blending with other layers.
Blending of layers can be controlled by layer per-texel source alpha.
Layer swapchain textures may contain an alpha channel, depending on the
image format.
If a submitted swapchain’s texture format does not include an alpha channel
or if the XR_COMPOSITION_LAYER_BLEND_TEXTURE_SOURCE_ALPHA_BIT is
unset, then the layer alpha is initialized to one.
If the swapchain texture format color encoding is other than RGBA, it is converted to RGBA.
If the texture color channels are encoded without premultiplying by alpha,
the XR_COMPOSITION_LAYER_UNPREMULTIPLIED_ALPHA_BIT should be set.
The effect of this bit alters the layer color as follows:
LayerColor.RGB *= LayerColor.A
LayerColor is then clamped to a range of [0.0, 1.0].
The layer blending operation is defined as:
CompositeColor = LayerColor + CompositeColor * (1 - LayerColor.A)
Before the first layer is composited, all components of CompositeColor are initialized to zero.
10.6.3. Composition Layer Types
Composition layers allow an application to offload the composition of the final image to a runtime-supplied compositor. This reduces the application’s rendering complexity since details such as frame-rate interpolation and distortion correction can be performed by the runtime. The core specification defines XrCompositionLayerProjection and XrCompositionLayerQuad layer types.
The projection layer type represents planar projected images rendered from the eye point of each eye using a perspective projection. This layer type is typically used to render the virtual world from the user’s perspective.
The quad layer type describes a posable planar rectangle in the virtual world for displaying two-dimensional content. Quad layers can subtend a smaller portion of the display’s field of view, allowing a better match between the resolutions of the XrSwapchain image and footprint of that image in the final composition. This improves legibility for user interface elements or heads-up displays and allows optimal sampling during any composition distortion corrections the runtime might employ.
The structures below describe the layer types in the layer composition system.
The XrCompositionLayerBaseHeader structure is defined as:
typedef struct XrCompositionLayerBaseHeader {
XrStructureType type;
const void* next;
XrCompositionLayerFlags layerFlags;
XrSpace space;
} XrCompositionLayerBaseHeader;
All composition layer structures begin with the elements described in the XrCompositionLayerBaseHeader. The XrCompositionLayerBaseHeader structure is not intended to be directly used, but forms a basis for defining current and future structures containing composition layer information. The XrFrameEndInfo structure contains an array of pointers to these polymorphic header structures. All composition layer type pointers must be type-castable as an XrCompositionLayerBaseHeader pointer.
Many composition layer structures also contain one or more references to generic layer data stored in an XrSwapchainSubImage structure.
The XrSwapchainSubImage structure is defined as:
typedef struct XrSwapchainSubImage {
XrSwapchain swapchain;
XrRect2Di imageRect;
uint32_t imageArrayIndex;
} XrSwapchainSubImage;
Runtimes must return XR_ERROR_VALIDATION_FAILURE if the
XrSwapchainSubImage::imageArrayIndex is equal to or greater than
the XrSwapchainCreateInfo::arraySize that the
XrSwapchainSubImage::swapchain was created with.
Projection Composition
The XrCompositionLayerProjection layer represents planar projected images rendered from the eye point of each eye using a standard perspective projection.
The XrCompositionLayerProjection structure is defined as:
// Provided by XR_VERSION_1_0
typedef struct XrCompositionLayerProjection {
XrStructureType type;
const void* next;
XrCompositionLayerFlags layerFlags;
XrSpace space;
uint32_t viewCount;
const XrCompositionLayerProjectionView* views;
} XrCompositionLayerProjection;
Note
Because a runtime may reproject the layer over time, a projection layer should specify an XrSpace in which to maximize stability of the layer content.
For example, a projection layer containing world-locked content should use an XrSpace which is also world-locked, such as the LOCAL or STAGE reference spaces.
The XrCompositionLayerProjectionView structure is defined as:
typedef struct XrCompositionLayerProjectionView {
XrStructureType type;
const void* next;
XrPosef pose;
XrFovf fov;
XrSwapchainSubImage subImage;
} XrCompositionLayerProjectionView;
The count and order of view poses submitted with XrCompositionLayerProjection must match the count and order of the views returned by xrLocateViews.
The XrCompositionLayerProjectionView::pose and
XrCompositionLayerProjectionView::fov should almost always
derive from XrView::pose and XrView::fov as found in
the xrLocateViews::views array.
However, applications may submit an XrCompositionLayerProjectionView
which has a different view or FOV than that from xrLocateViews.
In this case, the runtime will map the view and FOV to the system display
appropriately.
In the case that two submitted views within a single layer overlap, they
must be composited in view array order.
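A minimal sketch of submitting a projection layer whose poses and fields of view mirror the xrLocateViews output follows; views, viewCount, projViews, and appSpace are assumed to exist from earlier setup and are not names defined by this specification:

```cpp
// Hypothetical sketch; `views`/`viewCount` come from xrLocateViews this frame,
// `projViews` had its subImage members filled at swapchain setup, and
// `appSpace` is a previously created reference space.
for (uint32_t i = 0; i < viewCount; i++) {
    projViews[i].pose = views[i].pose; // same count and order as xrLocateViews
    projViews[i].fov  = views[i].fov;
}
XrCompositionLayerProjection layer{XR_TYPE_COMPOSITION_LAYER_PROJECTION};
layer.space = appSpace;                // world-locked space for world-locked content
layer.viewCount = viewCount;
layer.views = projViews;
const XrCompositionLayerBaseHeader* layers[] = {
    (const XrCompositionLayerBaseHeader*)&layer};
// `layers` and a layerCount of 1 are then passed in XrFrameEndInfo to xrEndFrame
```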
Quad Layer Composition
The XrCompositionLayerQuad structure is defined as:
// Provided by XR_VERSION_1_0
typedef struct XrCompositionLayerQuad {
XrStructureType type;
const void* next;
XrCompositionLayerFlags layerFlags;
XrSpace space;
XrEyeVisibility eyeVisibility;
XrSwapchainSubImage subImage;
XrPosef pose;
XrExtent2Df size;
} XrCompositionLayerQuad;
The XrCompositionLayerQuad layer is useful for user interface elements or 2D content rendered into the virtual world. The layer’s XrSwapchainSubImage::swapchain image is applied to a quad in the virtual world space. Only the front face of the quad surface is visible; the back face is not visible and must not be drawn by the runtime. A quad layer has no thickness; it is a two-dimensional object positioned and oriented in 3D space. The position of a quad refers to the center of the quad within the given XrSpace. The orientation of the quad refers to the orientation of the normal vector from the front face. The size of a quad refers to the quad’s size in the x-y plane of the given XrSpace’s coordinate system. A quad with a position of {0,0,0}, rotation of {0,0,0,1} (no rotation), and a size of {1,1} refers to a 1 meter x 1 meter quad centered at {0,0,0} with its front face normal vector coinciding with the +z axis.
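As an illustration of these conventions, the following hypothetical sketch places a 0.5 m x 0.3 m UI quad two meters in front of the origin of an assumed appSpace; uiSwapchain and the image extent are likewise assumptions for this sketch:

```cpp
// Hypothetical sketch; `appSpace` and `uiSwapchain` were created earlier.
XrCompositionLayerQuad quad{XR_TYPE_COMPOSITION_LAYER_QUAD};
quad.layerFlags = XR_COMPOSITION_LAYER_BLEND_TEXTURE_SOURCE_ALPHA_BIT;
quad.space = appSpace;
quad.eyeVisibility = XR_EYE_VISIBILITY_BOTH;
quad.subImage.swapchain = uiSwapchain;
quad.subImage.imageRect.offset = {0, 0};
quad.subImage.imageRect.extent = {512, 307};   // assumed swapchain image size
quad.subImage.imageArrayIndex = 0;
quad.pose = {{0, 0, 0, 1}, {0, 0, -2.0f}};     // no rotation, 2 m in front (-z)
quad.size = {0.5f, 0.3f};                      // meters in the x-y plane of `space`
// a pointer to `quad`, cast to XrCompositionLayerBaseHeader*, goes in
// XrFrameEndInfo::layers
```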
The XrEyeVisibility enum selects which of the viewer’s eyes to display a layer to:
typedef enum XrEyeVisibility {
XR_EYE_VISIBILITY_BOTH = 0,
XR_EYE_VISIBILITY_LEFT = 1,
XR_EYE_VISIBILITY_RIGHT = 2,
XR_EYE_VISIBILITY_MAX_ENUM = 0x7FFFFFFF
} XrEyeVisibility;
10.6.4. Environment Blend Mode
After the compositor has blended and flattened all layers (including any
layers added by the runtime itself), it will then present this image to the
system’s display.
The composited image will then blend with the environment in one of three
modes, based on the application’s chosen environment blend mode.
VR applications will generally choose the
XR_ENVIRONMENT_BLEND_MODE_OPAQUE blend mode, while AR applications
will generally choose either the XR_ENVIRONMENT_BLEND_MODE_ADDITIVE or
XR_ENVIRONMENT_BLEND_MODE_ALPHA_BLEND mode.
The environment may be perceived in several ways: it may be the user’s view of the physical world that exists beyond the displays, a synthetic environment including virtual components generated externally from the application, or a combination of both.
Applications select their environment blend mode each frame as part of their
call to xrEndFrame.
The application can inspect the set of supported environment blend modes for
a given system using xrEnumerateEnvironmentBlendModes, and prepare
their assets and rendering techniques differently based on the blend mode
they choose.
For example, a black shadow rendered using the
XR_ENVIRONMENT_BLEND_MODE_ADDITIVE blend mode will appear transparent,
and so an application in that mode may render a glow as a grounding effect
around the black shadow to ensure the shadow can be seen.
Similarly, an application designed for
XR_ENVIRONMENT_BLEND_MODE_OPAQUE or
XR_ENVIRONMENT_BLEND_MODE_ADDITIVE rendering may choose to leave
garbage in their alpha channel as a side effect of a rendering optimization,
but this garbage would appear as visible display artifacts if the
environment blend mode was instead
XR_ENVIRONMENT_BLEND_MODE_ALPHA_BLEND.
Not all systems will support all environment blend modes.
For example, a VR headset may not support the
XR_ENVIRONMENT_BLEND_MODE_ADDITIVE or
XR_ENVIRONMENT_BLEND_MODE_ALPHA_BLEND modes unless it has video
passthrough, while an AR headset with an additive display may not support
the XR_ENVIRONMENT_BLEND_MODE_OPAQUE or
XR_ENVIRONMENT_BLEND_MODE_ALPHA_BLEND modes.
Devices that support video/optical passthrough or synthetic environments may support the XR_ENVIRONMENT_BLEND_MODE_ADDITIVE or XR_ENVIRONMENT_BLEND_MODE_ALPHA_BLEND modes.
Selecting one of these modes displays the environment in the background, depending on the capability and status of the headset.
For devices that can support multiple environment blend modes, such as AR
phones with video passthrough, the runtime may optimize power consumption
on the device in response to the environment blend mode that the application
chooses each frame.
For example, if an application on a video passthrough phone knows that it is
currently rendering a 360-degree background covering all screen pixels, it
can submit frames with an environment blend mode of
XR_ENVIRONMENT_BLEND_MODE_OPAQUE, saving the runtime the cost of
compositing a camera-based underlay of the physical world behind the
application’s layers.
The xrEnumerateEnvironmentBlendModes function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrEnumerateEnvironmentBlendModes(
XrInstance instance,
XrSystemId systemId,
XrViewConfigurationType viewConfigurationType,
uint32_t environmentBlendModeCapacityInput,
uint32_t* environmentBlendModeCountOutput,
XrEnvironmentBlendMode* environmentBlendModes);
Enumerates the set of environment blend modes that this runtime supports for a given view configuration of the system. Environment blend modes should be in order from highest to lowest runtime preference.
Runtimes must always return identical buffer contents from this enumeration
for the given systemId and viewConfigurationType for the
lifetime of the instance.
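This function follows the specification's usual two-call idiom, first querying the required count and then filling a buffer, which might be used as follows; instance, systemId, and the stereo view configuration are assumed context:

```cpp
#include <stdlib.h> // malloc, free

XrInstance instance; // previously initialized
XrSystemId systemId; // previously initialized

uint32_t count = 0;
CHK_XR(xrEnumerateEnvironmentBlendModes(instance, systemId,
    XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO, 0, &count, NULL));
XrEnvironmentBlendMode* modes =
    (XrEnvironmentBlendMode*)malloc(count * sizeof(XrEnvironmentBlendMode));
CHK_XR(xrEnumerateEnvironmentBlendModes(instance, systemId,
    XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO, count, &count, modes));
// modes[0] is the runtime's most preferred supported blend mode
free(modes);
```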
The possible blend modes are specified by the XrEnvironmentBlendMode enumeration:
typedef enum XrEnvironmentBlendMode {
XR_ENVIRONMENT_BLEND_MODE_OPAQUE = 1,
XR_ENVIRONMENT_BLEND_MODE_ADDITIVE = 2,
XR_ENVIRONMENT_BLEND_MODE_ALPHA_BLEND = 3,
XR_ENVIRONMENT_BLEND_MODE_MAX_ENUM = 0x7FFFFFFF
} XrEnvironmentBlendMode;
11. Input and Haptics
11.1. Action Overview
OpenXR applications communicate with input devices using XrActions.
Actions are created at initialization time and later used to request input
device state, create action spaces, or control haptic events.
Input action handles represent 'actions' that the application is interested
in obtaining the state of, not direct input device hardware.
For example, instead of the application directly querying the state of the A
button when interacting with a menu, an OpenXR application instead creates a
menu_select action at startup then asks OpenXR for the state of
the action.
The application suggests that the action be assigned to a specific input source on the input device for a known interaction profile, but runtimes may choose a different control depending on user preference, input device availability, or any other reason. This abstraction ensures that applications can run on a wide variety of input hardware and maximize user accessibility.
Example usage:
XrInstance instance; // previously initialized
XrSession session; // previously initialized
// Create an action set
XrActionSetCreateInfo actionSetInfo{XR_TYPE_ACTION_SET_CREATE_INFO};
strcpy(actionSetInfo.actionSetName, "gameplay");
strcpy(actionSetInfo.localizedActionSetName, "Gameplay");
actionSetInfo.priority = 0;
XrActionSet inGameActionSet;
CHK_XR(xrCreateActionSet(instance, &actionSetInfo, &inGameActionSet));
// create a "teleport" input action
XrActionCreateInfo actioninfo{XR_TYPE_ACTION_CREATE_INFO};
strcpy(actioninfo.actionName, "teleport");
actioninfo.actionType = XR_ACTION_TYPE_BOOLEAN_INPUT;
strcpy(actioninfo.localizedActionName, "Teleport");
XrAction teleportAction;
CHK_XR(xrCreateAction(inGameActionSet, &actioninfo, &teleportAction));
// create a "player_hit" output action
XrActionCreateInfo hapticsactioninfo{XR_TYPE_ACTION_CREATE_INFO};
strcpy(hapticsactioninfo.actionName, "player_hit");
hapticsactioninfo.actionType = XR_ACTION_TYPE_VIBRATION_OUTPUT;
strcpy(hapticsactioninfo.localizedActionName, "Player hit");
XrAction hapticsAction;
CHK_XR(xrCreateAction(inGameActionSet, &hapticsactioninfo, &hapticsAction));
XrPath triggerClickPath, hapticPath;
CHK_XR(xrStringToPath(instance, "/user/hand/right/input/trigger/click", &triggerClickPath));
CHK_XR(xrStringToPath(instance, "/user/hand/right/output/haptic", &hapticPath));
XrPath interactionProfilePath;
CHK_XR(xrStringToPath(instance, "/interaction_profiles/vendor_x/profile_x", &interactionProfilePath));
XrActionSuggestedBinding bindings[2];
bindings[0].action = teleportAction;
bindings[0].binding = triggerClickPath;
bindings[1].action = hapticsAction;
bindings[1].binding = hapticPath;
XrInteractionProfileSuggestedBinding suggestedBindings{XR_TYPE_INTERACTION_PROFILE_SUGGESTED_BINDING};
suggestedBindings.interactionProfile = interactionProfilePath;
suggestedBindings.suggestedBindings = bindings;
suggestedBindings.countSuggestedBindings = 2;
CHK_XR(xrSuggestInteractionProfileBindings(instance, &suggestedBindings));
XrSessionActionSetsAttachInfo attachInfo{XR_TYPE_SESSION_ACTION_SETS_ATTACH_INFO};
attachInfo.countActionSets = 1;
attachInfo.actionSets = &inGameActionSet;
CHK_XR(xrAttachSessionActionSets(session, &attachInfo));
// application main loop
while (1)
{
// sync action data
XrActiveActionSet activeActionSet{inGameActionSet, XR_NULL_PATH};
XrActionsSyncInfo syncInfo{XR_TYPE_ACTIONS_SYNC_INFO};
syncInfo.countActiveActionSets = 1;
syncInfo.activeActionSets = &activeActionSet;
CHK_XR(xrSyncActions(session, &syncInfo));
// query input action state
XrActionStateBoolean teleportState{XR_TYPE_ACTION_STATE_BOOLEAN};
XrActionStateGetInfo getInfo{XR_TYPE_ACTION_STATE_GET_INFO};
getInfo.action = teleportAction;
CHK_XR(xrGetActionStateBoolean(session, &getInfo, &teleportState));
if (teleportState.changedSinceLastSync && teleportState.currentState)
{
// fire haptics using output action
XrHapticVibration vibration{XR_TYPE_HAPTIC_VIBRATION};
vibration.amplitude = 0.5;
vibration.duration = 300;
vibration.frequency = 3000;
XrHapticActionInfo hapticActionInfo{XR_TYPE_HAPTIC_ACTION_INFO};
hapticActionInfo.action = hapticsAction;
CHK_XR(xrApplyHapticFeedback(session, &hapticActionInfo, (const XrHapticBaseHeader*)&vibration));
}
}
11.2. Action Sets
XR_DEFINE_HANDLE(XrActionSet)
Action sets are application-defined collections of actions. They are attached to a given XrSession with a xrAttachSessionActionSets call. Enabled action sets are indicated by the application via xrSyncActions depending on the current application context.
For example, consider using one collection of actions that apply to controlling a character and another collection for navigating a menu system. When these actions are structured as two XrActionSet handles, the applicable action set is easy to specify according to application logic using a single function call.
Further, suppose some actions only apply when operating a vehicle as a character. These can be modeled as another, separate action set. While the user is operating a vehicle, the application enables both the character-control and vehicle action sets simultaneously in each xrSyncActions call.
Actions are passed a handle to their XrActionSet when they are created.
Action sets are created by calling xrCreateActionSet.
The xrCreateActionSet function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrCreateActionSet(
XrInstance instance,
const XrActionSetCreateInfo* createInfo,
XrActionSet* actionSet);
The xrCreateActionSet function creates an action set and returns a handle to the created action set.
The XrActionSetCreateInfo structure is defined as:
typedef struct XrActionSetCreateInfo {
XrStructureType type;
const void* next;
char actionSetName[XR_MAX_ACTION_SET_NAME_SIZE];
char localizedActionSetName[XR_MAX_LOCALIZED_ACTION_SET_NAME_SIZE];
uint32_t priority;
} XrActionSetCreateInfo;
When multiple actions are bound to the same input source, the priority
of each action set determines which bindings are suppressed.
Runtimes must ignore input sources from action sets with a lower priority
number if those specific input sources are also present in active actions
within a higher priority action set.
If multiple action sets with the same priority are bound to the same input
source and that is the highest priority number, runtimes must process all
those bindings at the same time.
Two actions are considered to be bound to the same input source if they use the same identifier and optional location path segments, even if they have different component segments.
When runtimes are ignoring bindings because of priority, they must treat
the binding to that input source as though it does not exist.
That means the isActive field must be XR_FALSE when retrieving
action data, and that the runtime must not provide any visual, haptic, or
other feedback related to the binding of that action to that input source.
Other actions in the same action set which are bound to input sources that
do not collide are not affected and are processed as normal.
If actionSetName or localizedActionSetName are empty strings,
the runtime must return XR_ERROR_NAME_INVALID or
XR_ERROR_LOCALIZED_NAME_INVALID respectively.
If actionSetName or localizedActionSetName are duplicates of the
corresponding field for any existing action set in the specified instance,
the runtime must return XR_ERROR_NAME_DUPLICATED or
XR_ERROR_LOCALIZED_NAME_DUPLICATED respectively.
If the conflicting action set is destroyed, the conflicting field is no
longer considered duplicated.
If actionSetName contains characters which are not allowed in a single
level of a well-formed path string, the
runtime must return XR_ERROR_PATH_FORMAT_INVALID.
The xrDestroyActionSet function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrDestroyActionSet(
XrActionSet actionSet);
Action set handles can be destroyed by calling xrDestroyActionSet. When an action set handle is destroyed, all handles of actions in that action set are also destroyed.
The implementation must not free underlying resources for the action set while there are other valid handles that refer to those resources. The implementation may release resources for an action set when all of the action spaces for actions in that action set have been destroyed. See Action Spaces Lifetime for details.
Resources for all action sets in an instance must be freed when the instance containing those action sets is destroyed.
11.3. Creating Actions
XR_DEFINE_HANDLE(XrAction)
Action handles are used to refer to individual actions when retrieving action data, creating action spaces, or sending haptic events.
The xrCreateAction function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrCreateAction(
XrActionSet actionSet,
const XrActionCreateInfo* createInfo,
XrAction* action);
xrCreateAction creates an action and returns its handle.
If actionSet has been included in a call to
xrAttachSessionActionSets, the implementation must return
XR_ERROR_ACTIONSETS_ALREADY_ATTACHED.
The XrActionCreateInfo structure is defined as:
typedef struct XrActionCreateInfo {
XrStructureType type;
const void* next;
char actionName[XR_MAX_ACTION_NAME_SIZE];
XrActionType actionType;
uint32_t countSubactionPaths;
const XrPath* subactionPaths;
char localizedActionName[XR_MAX_LOCALIZED_ACTION_NAME_SIZE];
} XrActionCreateInfo;
Subaction paths are a mechanism that enables applications to use the same
action name and handle on multiple devices.
Applications can query action state using subaction paths that differentiate
data coming from each device.
This allows the runtime to group logically equivalent actions together in
system UI.
For instance, an application could create a single pick_up action
with the /user/hand/left and /user/hand/right subaction
paths and use the subaction paths to independently query the state of
pick_up_with_left_hand and pick_up_with_right_hand.
Applications can create actions with or without the subactionPaths
set to a list of paths.
If this list of paths is omitted (i.e. subactionPaths is set to
NULL, and countSubactionPaths is set to 0), the application is
opting out of filtering action results by subaction paths and any call to
get action data must also omit subaction paths.
If subactionPaths is specified and any of the following conditions are
not satisfied, the runtime must return XR_ERROR_PATH_UNSUPPORTED:
-
Each path provided is one of:
-
/user/head
-
/user/hand/left
-
/user/hand/right
-
/user/gamepad
-
No path appears in the list more than once
Extensions may append additional top level user paths to the above list.
Note
Earlier revisions of the spec mentioned /user but it could not be implemented as specified and was removed as errata.
The runtime must return XR_ERROR_PATH_UNSUPPORTED in the following
circumstances:
-
The application specified subaction paths at action creation and the application called xrGetActionState* or a haptic function with an empty subaction path array.
-
The application called xrGetActionState* or a haptic function with a subaction path that was not specified when the action was created.
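The rules above can be illustrated with a sketch in the style of the earlier example: one hypothetical "pick_up" action is created with both hand subaction paths, then queried for one hand at a time:

```cpp
// Hypothetical sketch; `instance`, `session`, and `actionSet` previously
// initialized, and xrSyncActions called each frame as usual.
XrPath handPaths[2];
CHK_XR(xrStringToPath(instance, "/user/hand/left", &handPaths[0]));
CHK_XR(xrStringToPath(instance, "/user/hand/right", &handPaths[1]));

XrActionCreateInfo info{XR_TYPE_ACTION_CREATE_INFO};
strcpy(info.actionName, "pick_up");
strcpy(info.localizedActionName, "Pick Up");
info.actionType = XR_ACTION_TYPE_BOOLEAN_INPUT;
info.countSubactionPaths = 2;
info.subactionPaths = handPaths;
XrAction pickUpAction;
CHK_XR(xrCreateAction(actionSet, &info, &pickUpAction));

// After xrSyncActions, query each hand independently:
XrActionStateGetInfo getInfo{XR_TYPE_ACTION_STATE_GET_INFO};
getInfo.action = pickUpAction;
getInfo.subactionPath = handPaths[0]; // left hand only
XrActionStateBoolean state{XR_TYPE_ACTION_STATE_BOOLEAN};
CHK_XR(xrGetActionStateBoolean(session, &getInfo, &state));
```

Because subaction paths were specified at creation, querying this action with an XR_NULL_PATH subaction path would return XR_ERROR_PATH_UNSUPPORTED per the rules above.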
If actionName or localizedActionName are empty strings, the
runtime must return XR_ERROR_NAME_INVALID or
XR_ERROR_LOCALIZED_NAME_INVALID respectively.
If actionName or localizedActionName are duplicates of the
corresponding field for any existing action in the specified action set, the
runtime must return XR_ERROR_NAME_DUPLICATED or
XR_ERROR_LOCALIZED_NAME_DUPLICATED respectively.
If the conflicting action is destroyed, the conflicting field is no longer
considered duplicated.
If actionName contains characters which are not allowed in a single
level of a well-formed path string, the
runtime must return XR_ERROR_PATH_FORMAT_INVALID.
The XrActionType parameter takes one of the following values:
typedef enum XrActionType {
XR_ACTION_TYPE_BOOLEAN_INPUT = 1,
XR_ACTION_TYPE_FLOAT_INPUT = 2,
XR_ACTION_TYPE_VECTOR2F_INPUT = 3,
XR_ACTION_TYPE_POSE_INPUT = 4,
XR_ACTION_TYPE_VIBRATION_OUTPUT = 100,
XR_ACTION_TYPE_MAX_ENUM = 0x7FFFFFFF
} XrActionType;
The xrDestroyAction function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrDestroyAction(
XrAction action);
Action handles can be destroyed by calling xrDestroyAction. Handles for actions that are part of an action set are automatically destroyed when the action set’s handle is destroyed.
The implementation must not destroy the underlying resources for an action when xrDestroyAction is called. Those resources are still used to make action spaces locatable and when processing action priority in xrSyncActions. Destroying the action handle removes the application’s access to these resources, but has no other effect on actions.
Resources for all actions in an instance must be freed when the instance containing those action sets is destroyed.
11.3.1. Input Actions & Output Actions
Input actions are used to read sensors like buttons or joysticks while output actions are used for triggering haptics or motion platforms. The type of action created by xrCreateAction depends on the value of the XrActionType argument.
A given action can be used for either input or output, but not both.
Input actions are queried using one of the xrGetActionState* function
calls, while output actions are set using the haptics calls.
If either call is used with an action of the wrong type,
XR_ERROR_ACTION_TYPE_MISMATCH must be returned.
11.4. Suggested Bindings
Applications suggest bindings for their actions to runtimes so that raw
input data is mapped appropriately to the application’s actions.
Suggested bindings also serve as a signal indicating the hardware that has
been tested by the application developer.
Applications can suggest bindings by calling
xrSuggestInteractionProfileBindings for each
interaction profile that the
application is developed and tested with.
If bindings are provided for an appropriate interaction profile, the runtime
may select one and input will begin to flow.
Interaction profile selection changes must only happen when
xrSyncActions is called.
Applications can call xrGetCurrentInteractionProfile on a
running session to learn what the active interaction profile is for a top
level user path.
If this value ever changes, the runtime must send an
XR_TYPE_EVENT_DATA_INTERACTION_PROFILE_CHANGED event to the
application to indicate that the value should be queried again.
The bindings suggested by this system are only a hint to the runtime. Some runtimes may choose to use a different device binding depending on user preference, accessibility settings, or for any other reason. If the runtime is using the values provided by suggested bindings, it must make a best effort to convert the input value to the created action and apply certain rules to that use so that suggested bindings function in the same way across runtimes. If an input value cannot be converted to the type of the action, the value must be ignored and not contribute to the state of the action.
For actions created with XR_ACTION_TYPE_BOOLEAN_INPUT when the runtime
is obeying suggested bindings: Boolean input sources must be bound directly
to the action.
If the path is to a scalar value, a threshold must be applied to the value
and values over that threshold will be XR_TRUE.
The runtime should use hysteresis when applying this threshold.
The threshold and hysteresis range may vary from device to device or
component to component and are left as an implementation detail.
If the path refers to the parent of input values instead of to an input
value itself, the runtime must use …/example/path/click instead
of …/example/path if it is available.
If a parent path does not have a …/click subpath, the runtime
must use …/value and apply the same thresholding that would be
applied to any scalar input.
In any other situation the runtime may provide an alternate binding for the
action or it will be unbound.
For actions created with XR_ACTION_TYPE_FLOAT_INPUT when the runtime
is obeying suggested bindings: If the input value specified by the path is
scalar, the input value must be bound directly to the float.
If the path refers to the parent of input values instead of to an input
value itself, the runtime must use …/example/path/value instead
of …/example/path as the source of the value.
If a parent path does not have a …/value subpath, the runtime
must use …/click.
If the input value is boolean, the runtime must supply 0.0 or 1.0 as a
conversion of the boolean value.
In any other situation, the runtime may provide an alternate binding for
the action or it will be unbound.
For actions created with XR_ACTION_TYPE_VECTOR2F_INPUT when the
runtime is obeying suggested bindings: The suggested binding path must
refer to the parent of input values instead of to the input values
themselves, and that parent path must contain subpaths …/x and
…/y.
…/x and …/y must be bound to 'x' and 'y' of the
vector, respectively.
In any other situation, the runtime may provide an alternate binding for
the action or it will be unbound.
For actions created with XR_ACTION_TYPE_POSE_INPUT when the runtime is
obeying suggested bindings: Pose input sources must be bound directly to
the action.
If the path refers to the parent of input values instead of to an input
value itself, the runtime must use …/example/path/pose instead
of …/example/path if it is available.
In any other situation the runtime may provide an alternate binding for the
action or it will be unbound.
The xrSuggestInteractionProfileBindings function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrSuggestInteractionProfileBindings(
XrInstance instance,
const XrInteractionProfileSuggestedBinding* suggestedBindings);
The xrSuggestInteractionProfileBindings function provides action bindings for a single interaction profile. The application can call xrSuggestInteractionProfileBindings once per interaction profile that it supports.
The application can provide any number of bindings for each action.
If the application successfully calls xrSuggestInteractionProfileBindings more than once for an interaction profile, the runtime must discard the previous suggested bindings and replace them with the new suggested bindings for that profile.
If the interaction profile path does not follow the structure defined in
Interaction Profiles or suggested
bindings contain paths that do not follow the format defined in
Input subpaths (further described in
XrActionSuggestedBinding), the runtime must return
XR_ERROR_PATH_UNSUPPORTED.
If the interaction profile path or binding path (top level /user
path plus input subpath) for any of the suggested bindings does not exist in
the allowlist defined in Interaction
Profile Paths, the runtime must return XR_ERROR_PATH_UNSUPPORTED.
A runtime must accept every valid binding in the allowlist though it is
free to ignore any of them.
If the action set for any action referenced in the suggestedBindings
parameter has been included in a call to xrAttachSessionActionSets,
the implementation must return XR_ERROR_ACTIONSETS_ALREADY_ATTACHED.
The XrInteractionProfileSuggestedBinding structure is defined as:
typedef struct XrInteractionProfileSuggestedBinding {
XrStructureType type;
const void* next;
XrPath interactionProfile;
uint32_t countSuggestedBindings;
const XrActionSuggestedBinding* suggestedBindings;
} XrInteractionProfileSuggestedBinding;
The XrActionSuggestedBinding structure is defined as:
typedef struct XrActionSuggestedBinding {
XrAction action;
XrPath binding;
} XrActionSuggestedBinding;
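As an illustrative sketch (not part of the normative text), the two structures above might be filled in as follows; the instance handle and the hypothetical grabAction are assumed to have been created earlier with xrCreateInstance and xrCreateAction, and error handling is omitted:

```c
// Suggest bindings for a hypothetical "grab" action on the
// Khronos simple controller interaction profile.
XrPath profilePath, leftSelect, rightSelect;
xrStringToPath(instance, "/interaction_profiles/khr/simple_controller",
               &profilePath);
xrStringToPath(instance, "/user/hand/left/input/select/click", &leftSelect);
xrStringToPath(instance, "/user/hand/right/input/select/click", &rightSelect);

XrActionSuggestedBinding bindings[2] = {
    {grabAction, leftSelect},    // action created with xrCreateAction
    {grabAction, rightSelect},
};

XrInteractionProfileSuggestedBinding suggested = {
    .type = XR_TYPE_INTERACTION_PROFILE_SUGGESTED_BINDING,
    .interactionProfile = profilePath,
    .countSuggestedBindings = 2,
    .suggestedBindings = bindings,
};
xrSuggestInteractionProfileBindings(instance, &suggested);
```

Calling this again for the same interaction profile would replace the earlier suggestions, per the rule above.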
The xrAttachSessionActionSets function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrAttachSessionActionSets(
XrSession session,
const XrSessionActionSetsAttachInfo* attachInfo);
xrAttachSessionActionSets attaches the XrActionSet handles in
XrSessionActionSetsAttachInfo::actionSets to the session.
Action sets must be attached in order to be synchronized with
xrSyncActions.
When an action set is attached to a session, that action set becomes immutable. See xrCreateAction and xrSuggestInteractionProfileBindings for details.
After action sets are attached to a session, if any unattached actions are
passed to functions for the same session, then for those functions the
runtime must return XR_ERROR_ACTIONSET_NOT_ATTACHED.
The runtime must return XR_ERROR_ACTIONSETS_ALREADY_ATTACHED if
xrAttachSessionActionSets is called more than once for a given
session.
The XrSessionActionSetsAttachInfo structure is defined as:
typedef struct XrSessionActionSetsAttachInfo {
XrStructureType type;
const void* next;
uint32_t countActionSets;
const XrActionSet* actionSets;
} XrSessionActionSetsAttachInfo;
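A minimal sketch of attaching one action set, using a hypothetical gameplayActionSet created earlier with xrCreateActionSet; once this call succeeds the set can no longer be modified:

```c
XrSessionActionSetsAttachInfo attachInfo = {
    .type = XR_TYPE_SESSION_ACTION_SETS_ATTACH_INFO,
    .countActionSets = 1,
    .actionSets = &gameplayActionSet,   // attached sets become immutable
};
xrAttachSessionActionSets(session, &attachInfo);
// A second call for this session would return
// XR_ERROR_ACTIONSETS_ALREADY_ATTACHED.
```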
11.5. Current Interaction Profile
The xrGetCurrentInteractionProfile function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrGetCurrentInteractionProfile(
XrSession session,
XrPath topLevelUserPath,
XrInteractionProfileState* interactionProfile);
xrGetCurrentInteractionProfile retrieves the current interaction profile for a top level user path.
The runtime must return only interaction profiles for which the application has provided suggested bindings with xrSuggestInteractionProfileBindings or XR_NULL_PATH. The runtime may return interaction profiles that do not represent physically present hardware, for example if the runtime is using a known interaction profile to bind to hardware that the application is not aware of. The runtime may return an anticipated interaction profile, from the list of interaction profiles with suggested bindings (as supplied by the application through xrSuggestInteractionProfileBindings) for this top level /user path, in the event that no controllers are active. Whether the runtime reports an interaction profile path or XR_NULL_PATH does not provide any signal to the application regarding presence or absence of a controller or other interaction method.
If xrAttachSessionActionSets has not yet been called for the
session, the runtime must return
XR_ERROR_ACTIONSET_NOT_ATTACHED.
If topLevelUserPath is not one of the top level user paths described
in Top level /user paths, the runtime must return
XR_ERROR_PATH_UNSUPPORTED.
The XrInteractionProfileState structure is defined as:
typedef struct XrInteractionProfileState {
XrStructureType type;
void* next;
XrPath interactionProfile;
} XrInteractionProfileState;
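For illustration, a sketch of querying the profile currently in use for the left hand (error handling omitted):

```c
XrPath leftHand;
xrStringToPath(instance, "/user/hand/left", &leftHand);

XrInteractionProfileState state = {.type = XR_TYPE_INTERACTION_PROFILE_STATE};
xrGetCurrentInteractionProfile(session, leftHand, &state);
if (state.interactionProfile != XR_NULL_PATH) {
    // state.interactionProfile can be converted back to a string with
    // xrPathToString, e.g. "/interaction_profiles/khr/simple_controller".
}
```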
The runtime must only include interaction profiles that the application has provided bindings for via xrSuggestInteractionProfileBindings or XR_NULL_PATH. If the runtime is rebinding an interaction profile provided by the application to a device that the application did not provide bindings for, it must return the interaction profile path that it is emulating. If the runtime is unable to provide input because it cannot emulate any of the application-provided interaction profiles, it must return XR_NULL_PATH.
The XrEventDataInteractionProfileChanged structure is defined as:
// Provided by XR_VERSION_1_0
typedef struct XrEventDataInteractionProfileChanged {
XrStructureType type;
const void* next;
XrSession session;
} XrEventDataInteractionProfileChanged;
The XrEventDataInteractionProfileChanged event is queued to notify the application that the current interaction profile for one or more top level user paths has changed. This event must only be sent for interaction profiles that the application indicated its support for via xrSuggestInteractionProfileBindings. This event must only be queued for running sessions.
Upon receiving this event, an application can call xrGetCurrentInteractionProfile for each top level user path in use, if its behavior depends on the current interaction profile.
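A sketch of handling this event in the usual xrPollEvent loop; leftHand is assumed to be a previously created top level user path:

```c
XrEventDataBuffer event = {.type = XR_TYPE_EVENT_DATA_BUFFER};
while (xrPollEvent(instance, &event) == XR_SUCCESS) {
    if (event.type == XR_TYPE_EVENT_DATA_INTERACTION_PROFILE_CHANGED) {
        // Re-query each top level user path the application uses.
        XrInteractionProfileState state = {
            .type = XR_TYPE_INTERACTION_PROFILE_STATE};
        xrGetCurrentInteractionProfile(session, leftHand, &state);
    }
    event.type = XR_TYPE_EVENT_DATA_BUFFER;   // reset before next poll
}
```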
11.6. Reading Input Action State
The current state of an input action can be obtained by calling the
xrGetActionState* function call that matches the XrActionType
provided when the action was created.
If a mismatched call is used to retrieve the state,
XR_ERROR_ACTION_TYPE_MISMATCH must be returned.
xrGetActionState* calls for an action in an action set never attached to
the session with xrAttachSessionActionSets must return
XR_ERROR_ACTIONSET_NOT_ATTACHED.
The result of calls to xrGetActionState* for an XrAction and
subaction path must not change between calls to xrSyncActions.
When the combination of the parent XrActionSet and subaction path for
an action is passed to xrSyncActions, the runtime must update the
results from xrGetActionState* after this call with any changes to the
state of the underlying hardware.
When the parent action set and subaction path for an action is removed from
or added to the list of active action sets passed to xrSyncActions,
the runtime must update isActive to reflect the new active state
after this call.
In all cases the runtime must not change the results of
xrGetActionState* calls between calls to xrSyncActions.
When xrGetActionState* or haptic output functions are called while the
session is not focused, the runtime must set
the isActive value to XR_FALSE and suppress all haptic output.
Furthermore, the runtime should stop all in-progress haptic events when a
session loses focus.
When retrieving action state, lastChangeTime must be set to the
runtime’s best estimate of when the physical state of the part of the device
bound to that action last changed.
The currentState value is computed based on the current sync,
combining the underlying input sources bound to the provided
subactionPaths within this action.
The changedSinceLastSync value must be XR_TRUE if the computed
currentState value differs from the currentState value that
would have been computed as of the previous sync for the same
subactionPaths.
If there is no previous sync, or the action was not active for the previous
sync, the changedSinceLastSync value must be set to XR_FALSE.
The isActive value must be XR_TRUE whenever an action is bound
and a source is providing state data for the current sync.
If the action is unbound or no source is present, the isActive value
must be XR_FALSE.
For any action which is inactive, the runtime must return zero (or
XR_FALSE) for state, XR_FALSE for changedSinceLastSync,
and 0 for lastChangeTime.
11.6.1. Resolving a single action bound to multiple inputs or outputs
It is often the case that a single action will be bound to multiple physical inputs simultaneously. In these circumstances, the runtime must resolve the ambiguity in that multiple binding as follows:
The current state value is selected based on the type of the action:
-
Boolean actions - The current state must be the result of a boolean OR of all bound inputs
-
Float actions - The current state must be the state of the input with the largest absolute value
-
Vector2 actions - The current state must be the state of the input with the longest length
-
Pose actions - The current state must be the state of a single pose source. The source of the pose must only be changed during a call to xrSyncActions. The runtime should only change the source in response to user actions, such as picking up a new controller, or external events, such as a controller running out of battery.
-
Haptic actions - The runtime must send output events to all bound haptic devices
11.6.2. Structs to describe action and subaction paths
The XrActionStateGetInfo structure is used to provide the action and
subaction path when calling an xrGetActionState* function.
It is defined as:
typedef struct XrActionStateGetInfo {
XrStructureType type;
const void* next;
XrAction action;
XrPath subactionPath;
} XrActionStateGetInfo;
See XrActionCreateInfo for a description of subaction paths, and the restrictions on their use.
The XrHapticActionInfo structure is used to provide the action and
subaction path when calling an xr*HapticFeedback function.
It is defined as:
typedef struct XrHapticActionInfo {
XrStructureType type;
const void* next;
XrAction action;
XrPath subactionPath;
} XrHapticActionInfo;
See XrActionCreateInfo for a description of subaction paths, and the restrictions on their use.
11.6.3. Boolean Actions
xrGetActionStateBoolean retrieves the current state of a boolean action. It is defined as:
// Provided by XR_VERSION_1_0
XrResult xrGetActionStateBoolean(
XrSession session,
const XrActionStateGetInfo* getInfo,
XrActionStateBoolean* state);
The XrActionStateBoolean structure is defined as:
typedef struct XrActionStateBoolean {
XrStructureType type;
void* next;
XrBool32 currentState;
XrBool32 changedSinceLastSync;
XrTime lastChangeTime;
XrBool32 isActive;
} XrActionStateBoolean;
When multiple input sources are bound to this action, the currentState
follows the previously defined rule to resolve ambiguity.
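A sketch of reading the state after a sync, using a hypothetical boolean grabAction; the rising-edge test below is a common pattern for detecting a fresh press:

```c
XrActionStateGetInfo getInfo = {
    .type = XR_TYPE_ACTION_STATE_GET_INFO,
    .action = grabAction,
    .subactionPath = XR_NULL_PATH,   // combine all bound sources
};
XrActionStateBoolean state = {.type = XR_TYPE_ACTION_STATE_BOOLEAN};
xrGetActionStateBoolean(session, &getInfo, &state);
if (state.isActive && state.changedSinceLastSync && state.currentState) {
    // The button transitioned to pressed during the last sync.
}
```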
11.6.4. Scalar and Vector Actions
xrGetActionStateFloat retrieves the current state of a floating-point action. It is defined as:
// Provided by XR_VERSION_1_0
XrResult xrGetActionStateFloat(
XrSession session,
const XrActionStateGetInfo* getInfo,
XrActionStateFloat* state);
The XrActionStateFloat structure is defined as:
typedef struct XrActionStateFloat {
XrStructureType type;
void* next;
float currentState;
XrBool32 changedSinceLastSync;
XrTime lastChangeTime;
XrBool32 isActive;
} XrActionStateFloat;
When multiple input sources are bound to this action, the currentState
follows the previously defined rule to resolve ambiguity.
xrGetActionStateVector2f retrieves the current state of a two-dimensional vector action. It is defined as:
// Provided by XR_VERSION_1_0
XrResult xrGetActionStateVector2f(
XrSession session,
const XrActionStateGetInfo* getInfo,
XrActionStateVector2f* state);
The XrActionStateVector2f structure is defined as:
typedef struct XrActionStateVector2f {
XrStructureType type;
void* next;
XrVector2f currentState;
XrBool32 changedSinceLastSync;
XrTime lastChangeTime;
XrBool32 isActive;
} XrActionStateVector2f;
When multiple input sources are bound to this action, the currentState
follows the previously defined rule to resolve ambiguity.
11.6.5. Pose Actions
The xrGetActionStatePose function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrGetActionStatePose(
XrSession session,
const XrActionStateGetInfo* getInfo,
XrActionStatePose* state);
xrGetActionStatePose returns information about the binding and active state for the specified action. To determine the pose of this action at a historical or predicted time, the application can create an action space using xrCreateActionSpace. Then, after each sync, the application can locate the pose of this action space within a base space using xrLocateSpace.
The XrActionStatePose structure is defined as:
typedef struct XrActionStatePose {
XrStructureType type;
void* next;
XrBool32 isActive;
} XrActionStatePose;
A pose action must not be bound to multiple input sources, according to the previously defined rule.
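The flow described above might look like the following sketch, with a hypothetical aimAction of type XR_ACTION_TYPE_POSE_INPUT and previously created stageSpace and predictedDisplayTime values:

```c
// Create the action space once, with an identity pose offset.
XrActionSpaceCreateInfo spaceInfo = {
    .type = XR_TYPE_ACTION_SPACE_CREATE_INFO,
    .action = aimAction,
    .subactionPath = XR_NULL_PATH,
    .poseInActionSpace = {.orientation = {0.0f, 0.0f, 0.0f, 1.0f}},
};
XrSpace aimSpace;
xrCreateActionSpace(session, &spaceInfo, &aimSpace);

// Each frame, after xrSyncActions, locate it in a base space.
XrSpaceLocation location = {.type = XR_TYPE_SPACE_LOCATION};
xrLocateSpace(aimSpace, stageSpace, predictedDisplayTime, &location);
```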
11.7. Output Actions and Haptics
Haptic feedback is sent to a device using the xrApplyHapticFeedback
function.
The hapticEvent points to a supported event structure.
All event structures have in common that the first element is an
XrHapticBaseHeader which can be used to determine the type of the
haptic event.
Haptic feedback may be immediately halted for a haptic action using the xrStopHapticFeedback function.
Output action requests activate immediately and must not wait for the next call to xrSyncActions.
If a haptic event is sent to an action before a previous haptic event completes, the latest event will take precedence and the runtime must cancel all preceding incomplete haptic events on that action.
Output action requests must be discarded and have no effect on hardware if the application’s session is not focused.
Output action requests for an action in an action set never attached to the
session with xrAttachSessionActionSets must return
XR_ERROR_ACTIONSET_NOT_ATTACHED.
The only haptics type supported by unextended OpenXR is XrHapticVibration.
The xrApplyHapticFeedback function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrApplyHapticFeedback(
XrSession session,
const XrHapticActionInfo* hapticActionInfo,
const XrHapticBaseHeader* hapticFeedback);
Triggers a haptic event through the specified action of type
XR_ACTION_TYPE_VIBRATION_OUTPUT.
The runtime should deliver this request to the appropriate device, but
exactly which device, if any, this event is sent to is up to the runtime to
decide.
If an appropriate device is unavailable the runtime may ignore this request
for haptic feedback.
If session is not focused, the runtime must return
XR_SESSION_NOT_FOCUSED, and not trigger a haptic event.
If another haptic event from this session is currently happening on the device bound to this action, the runtime must interrupt that other event and replace it with the new one.
The XrHapticBaseHeader structure is defined as:
typedef struct XrHapticBaseHeader {
XrStructureType type;
const void* next;
} XrHapticBaseHeader;
The XrHapticVibration structure is defined as:
// Provided by XR_VERSION_1_0
typedef struct XrHapticVibration {
XrStructureType type;
const void* next;
XrDuration duration;
float frequency;
float amplitude;
} XrHapticVibration;
The XrHapticVibration is used in calls to xrApplyHapticFeedback
that trigger vibration output actions.
The duration and frequency parameters may be clamped to
implementation-dependent ranges.
XR_MIN_HAPTIC_DURATION is used to indicate to the runtime that the application wants a short haptic pulse of the minimal supported duration for the haptic device.
// Provided by XR_VERSION_1_0
#define XR_MIN_HAPTIC_DURATION -1
XR_FREQUENCY_UNSPECIFIED is used to indicate that the application wants the runtime to decide what the optimal frequency is for the haptic pulse.
// Provided by XR_VERSION_1_0
#define XR_FREQUENCY_UNSPECIFIED 0
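Putting the pieces together, a sketch of triggering a 100 ms pulse on a hypothetical rumbleAction, letting the runtime choose the frequency:

```c
XrHapticActionInfo hapticInfo = {
    .type = XR_TYPE_HAPTIC_ACTION_INFO,
    .action = rumbleAction,
    .subactionPath = XR_NULL_PATH,
};
XrHapticVibration vibration = {
    .type = XR_TYPE_HAPTIC_VIBRATION,
    .duration = 100000000,                  // XrDuration is in nanoseconds
    .frequency = XR_FREQUENCY_UNSPECIFIED,  // runtime picks the frequency
    .amplitude = 0.5f,
};
xrApplyHapticFeedback(session, &hapticInfo,
                      (const XrHapticBaseHeader*)&vibration);
```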
The xrStopHapticFeedback function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrStopHapticFeedback(
XrSession session,
const XrHapticActionInfo* hapticActionInfo);
If a haptic event from this XrAction is in progress, when this function is called the runtime must stop that event.
If session is not focused, the runtime must return
XR_SESSION_NOT_FOCUSED.
11.8. Input Action State Synchronization
The xrSyncActions function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrSyncActions(
XrSession session,
const XrActionsSyncInfo* syncInfo);
xrSyncActions updates the current state of input actions.
Repeated input action state queries between subsequent synchronization calls
must return the same values.
The XrActionSet structures referenced in the
XrActionsSyncInfo::activeActionSets must have been previously
attached to the session via xrAttachSessionActionSets.
If any action sets not attached to this session are passed to
xrSyncActions, it must return XR_ERROR_ACTIONSET_NOT_ATTACHED.
Subsets of the bound action sets can be synchronized in order to control
which actions are seen as active.
If session is not focused, the runtime must return
XR_SESSION_NOT_FOCUSED, and all action states in the session must be
inactive.
The XrActionsSyncInfo structure is defined as:
typedef struct XrActionsSyncInfo {
XrStructureType type;
const void* next;
uint32_t countActiveActionSets;
const XrActiveActionSet* activeActionSets;
} XrActionsSyncInfo;
The XrActiveActionSet structure is defined as:
typedef struct XrActiveActionSet {
XrActionSet actionSet;
XrPath subactionPath;
} XrActiveActionSet;
This structure defines a single active action set and subaction path combination. Applications can provide a list of these structures to the xrSyncActions function.
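For example, synchronizing one attached action set once per frame might look like this sketch (gameplayActionSet is hypothetical):

```c
XrActiveActionSet activeSet = {
    .actionSet = gameplayActionSet,
    .subactionPath = XR_NULL_PATH,   // active for all subaction paths
};
XrActionsSyncInfo syncInfo = {
    .type = XR_TYPE_ACTIONS_SYNC_INFO,
    .countActiveActionSets = 1,
    .activeActionSets = &activeSet,
};
xrSyncActions(session, &syncInfo);
```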
11.9. Bound Sources
An application can use the xrEnumerateBoundSourcesForAction and
xrGetInputSourceLocalizedName calls to prompt the user which physical
inputs to use in order to perform an action.
The bound sources are opaque XrPath values representing the
physical controls that an action is bound to.
An action may be bound to multiple sources at one time, for example an
action named hold could be bound to both the X and A buttons.
Once the bound sources for an action are obtained, the application can gather additional information about them. xrGetInputSourceLocalizedName returns a localized human-readable string describing the bound physical control, e.g. 'A Button'.
The xrEnumerateBoundSourcesForAction function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrEnumerateBoundSourcesForAction(
XrSession session,
const XrBoundSourcesForActionEnumerateInfo* enumerateInfo,
uint32_t sourceCapacityInput,
uint32_t* sourceCountOutput,
XrPath* sources);
If an action is unbound, xrEnumerateBoundSourcesForAction must assign
0 to the value pointed-to by sourceCountOutput and not modify the
array.
xrEnumerateBoundSourcesForAction must return
XR_ERROR_ACTIONSET_NOT_ATTACHED if passed an action in an action set
never attached to the session with xrAttachSessionActionSets.
As bindings for actions do not change between calls to xrSyncActions,
xrEnumerateBoundSourcesForAction must enumerate the same set of bound
sources, or absence of bound sources, for a given query (defined by the
enumerateInfo parameter) between any two calls to xrSyncActions.
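The function follows the standard two-call idiom: first query the count, then fill the array. A sketch, assuming a hypothetical grabAction and <stdlib.h> for malloc/free:

```c
XrBoundSourcesForActionEnumerateInfo enumInfo = {
    .type = XR_TYPE_BOUND_SOURCES_FOR_ACTION_ENUMERATE_INFO,
    .action = grabAction,
};
uint32_t count = 0;
xrEnumerateBoundSourcesForAction(session, &enumInfo, 0, &count, NULL);

XrPath* sources = malloc(count * sizeof(XrPath));
xrEnumerateBoundSourcesForAction(session, &enumInfo, count, &count, sources);
// Each entry can be passed to xrGetInputSourceLocalizedName.
free(sources);
```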
The XrBoundSourcesForActionEnumerateInfo structure is defined as:
typedef struct XrBoundSourcesForActionEnumerateInfo {
XrStructureType type;
const void* next;
XrAction action;
} XrBoundSourcesForActionEnumerateInfo;
The xrGetInputSourceLocalizedName function is defined as:
// Provided by XR_VERSION_1_0
XrResult xrGetInputSourceLocalizedName(
XrSession session,
const XrInputSourceLocalizedNameGetInfo* getInfo,
uint32_t bufferCapacityInput,
uint32_t* bufferCountOutput,
char* buffer);
xrGetInputSourceLocalizedName returns a string for the bound source in the current system locale.
If xrAttachSessionActionSets has not yet been called for the session,
the runtime must return XR_ERROR_ACTIONSET_NOT_ATTACHED.
The XrInputSourceLocalizedNameGetInfo structure is defined as:
typedef struct XrInputSourceLocalizedNameGetInfo {
XrStructureType type;
const void* next;
XrPath sourcePath;
XrInputSourceLocalizedNameFlags whichComponents;
} XrInputSourceLocalizedNameGetInfo;
The result of passing an XrPath sourcePath not retrieved
from xrEnumerateBoundSourcesForAction is not specified.
The XrInputSourceLocalizedNameGetInfo::whichComponents parameter
is of the following type, and contains a bitwise-OR of one or more of the
bits defined in XrInputSourceLocalizedNameFlagBits.
typedef XrFlags64 XrInputSourceLocalizedNameFlags;
// Flag bits for XrInputSourceLocalizedNameFlags
static const XrInputSourceLocalizedNameFlags XR_INPUT_SOURCE_LOCALIZED_NAME_USER_PATH_BIT = 0x00000001;
static const XrInputSourceLocalizedNameFlags XR_INPUT_SOURCE_LOCALIZED_NAME_INTERACTION_PROFILE_BIT = 0x00000002;
static const XrInputSourceLocalizedNameFlags XR_INPUT_SOURCE_LOCALIZED_NAME_COMPONENT_BIT = 0x00000004;
The flag bits have the following meanings:
-
XR_INPUT_SOURCE_LOCALIZED_NAME_USER_PATH_BIT indicates that the runtime must include the localized name of the top level user path in the result, e.g. 'Left Hand'.
-
XR_INPUT_SOURCE_LOCALIZED_NAME_INTERACTION_PROFILE_BIT indicates that the runtime must include the localized name of the interaction profile in the result, e.g. 'Vive Controller'.
-
XR_INPUT_SOURCE_LOCALIZED_NAME_COMPONENT_BIT indicates that the runtime must include the localized name of the input component on the interaction profile in the result, e.g. 'Trigger'.
12. List of Current Extensions
12.1. XR_KHR_android_create_instance
- Name String
-
XR_KHR_android_create_instance - Extension Type
-
Instance extension
- Registered Extension Number
-
9
- Revision
-
3
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Last Modified Date
-
2019-07-17
- IP Status
-
No known IP claims.
- Contributors
-
Robert Menzel, NVIDIA
Martin Renschler, Qualcomm
Krzysztof Kosiński, Google
Overview
When the application creates an XrInstance object on Android systems, additional information from the application has to be provided to the XR runtime.
The Android XR runtime must return error XR_ERROR_VALIDATION_FAILURE
if the additional information is not provided by the application or if the
additional parameters are invalid.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_INSTANCE_CREATE_INFO_ANDROID_KHR
New Enums
New Structures
The XrInstanceCreateInfoAndroidKHR structure is defined as:
// Provided by XR_KHR_android_create_instance
typedef struct XrInstanceCreateInfoAndroidKHR {
XrStructureType type;
const void* next;
void* applicationVM;
void* applicationActivity;
} XrInstanceCreateInfoAndroidKHR;
XrInstanceCreateInfoAndroidKHR contains additional Android-specific
information needed when calling xrCreateInstance.
The applicationVM field should be populated with the JavaVM
structure received by the JNI_OnLoad function, while the
applicationActivity field will typically contain a reference to a Java
activity object received through an application-specific native method.
The XrInstanceCreateInfoAndroidKHR structure must be provided in the
next chain of the XrInstanceCreateInfo structure when calling
xrCreateInstance.
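A sketch of chaining the structure during instance creation; vm and activity are assumed to have been captured in JNI_OnLoad and an application-specific native method, respectively, and the XR_KHR_android_create_instance extension must be enabled:

```c
XrInstanceCreateInfoAndroidKHR androidInfo = {
    .type = XR_TYPE_INSTANCE_CREATE_INFO_ANDROID_KHR,
    .applicationVM = vm,               // JavaVM* saved in JNI_OnLoad
    .applicationActivity = activity,   // jobject activity reference
};
XrInstanceCreateInfo createInfo = {
    .type = XR_TYPE_INSTANCE_CREATE_INFO,
    .next = &androidInfo,
    // applicationInfo and enabled extensions filled in as usual
};
XrInstance instance;
xrCreateInstance(&createInfo, &instance);
```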
New Functions
Issues
Version History
-
Revision 1, 2017-05-26 (Robert Menzel)
-
Initial draft
-
-
Revision 2, 2019-01-24 (Martin Renschler)
-
Added error code, reformatted
-
-
Revision 3, 2019-07-17 (Krzysztof Kosiński)
-
Non-substantive clarifications.
-
12.2. XR_KHR_android_surface_swapchain
- Name String
-
XR_KHR_android_surface_swapchain - Extension Type
-
Instance extension
- Registered Extension Number
-
5
- Revision
-
4
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Last Modified Date
-
2019-05-30
- IP Status
-
No known IP claims.
- Contributors
-
Krzysztof Kosiński, Google
Johannes van Waveren, Oculus
Martin Renschler, Qualcomm
Overview
A common activity in XR is to view an image stream.
Image streams are often the result of camera previews or decoded video
streams.
On Android, the basic primitive representing the producer end of an image
queue is the class android.view.Surface.
This extension provides a special swapchain that uses an
android.view.Surface as its producer end.
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Functions
To create an XrSwapchain object and an Android Surface object call:
// Provided by XR_KHR_android_surface_swapchain
XrResult xrCreateSwapchainAndroidSurfaceKHR(
XrSession session,
const XrSwapchainCreateInfo* info,
XrSwapchain* swapchain,
jobject* surface);
xrCreateSwapchainAndroidSurfaceKHR creates an XrSwapchain object
returned in swapchain and an Android Surface jobject returned in
surface.
The jobject must be valid to be passed back to Java code using JNI and
must be valid to be used with ordinary Android APIs for submitting images
to Surfaces.
The returned XrSwapchain must be valid to be referenced in
XrSwapchainSubImage structures to show content on the screen.
The width and height passed in XrSwapchainCreateInfo may not be
persistent throughout the life cycle of the created swapchain, since on
Android, the size of the images is controlled by the producer and possibly
changes at any time.
The only function that is allowed to be called on the XrSwapchain returned from this function is xrDestroySwapchain. For example, calling any of the functions xrEnumerateSwapchainImages, xrAcquireSwapchainImage, xrWaitSwapchainImage or xrReleaseSwapchainImage is invalid.
When the application receives the XrEventDataSessionStateChanged event
with the XR_SESSION_STATE_STOPPING state, it must ensure that no
threads are writing to any of the Android surfaces created with this
extension before calling xrEndSession.
The effect of writing frames to the Surface when the session is in states
other than XR_SESSION_STATE_VISIBLE or XR_SESSION_STATE_FOCUSED
is undefined.
xrCreateSwapchainAndroidSurfaceKHR must return the same set of error
codes as xrCreateSwapchain under the same circumstances, plus
XR_ERROR_FUNCTION_UNSUPPORTED in case the function is not supported.
Issues
Version History
-
Revision 1, 2017-01-17 (Johannes van Waveren)
-
Initial draft
-
-
Revision 2, 2017-10-30 (Kaye Mason)
-
Changed images to swapchains, used snippet includes. Added issue for Surfaces.
-
-
Revision 3, 2018-05-16 (Krzysztof Kosiński)
-
Refactored to use Surface instead of SurfaceTexture.
-
-
Revision 4, 2019-01-24 (Martin Renschler)
-
Refined the specification of the extension
-
12.3. XR_KHR_android_thread_settings
- Name String
-
XR_KHR_android_thread_settings - Extension Type
-
Instance extension
- Registered Extension Number
-
4
- Revision
-
6
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Last Modified Date
-
2023-12-04
- IP Status
-
No known IP claims.
- Contributors
-
Cass Everitt, Oculus
Johannes van Waveren, Oculus
Martin Renschler, Qualcomm
Krzysztof Kosiński, Google
Xiang Wei, Meta
Overview
For XR to be comfortable, it is important for applications to deliver frames quickly and consistently. In order to make sure the important application threads get their full share of time, these threads must be identified to the system, which will adjust their scheduling priority accordingly.
New Object Types
New Flag Types
New Enum Constants
XrResult enumeration is extended with:
-
XR_ERROR_ANDROID_THREAD_SETTINGS_ID_INVALID_KHR -
XR_ERROR_ANDROID_THREAD_SETTINGS_FAILURE_KHR
New Enums
The possible thread types are specified by the XrAndroidThreadTypeKHR enumeration:
// Provided by XR_KHR_android_thread_settings
typedef enum XrAndroidThreadTypeKHR {
XR_ANDROID_THREAD_TYPE_APPLICATION_MAIN_KHR = 1,
XR_ANDROID_THREAD_TYPE_APPLICATION_WORKER_KHR = 2,
XR_ANDROID_THREAD_TYPE_RENDERER_MAIN_KHR = 3,
XR_ANDROID_THREAD_TYPE_RENDERER_WORKER_KHR = 4,
XR_ANDROID_THREAD_TYPE_MAX_ENUM_KHR = 0x7FFFFFFF
} XrAndroidThreadTypeKHR;
New Structures
New Functions
To declare a thread to be of a certain XrAndroidThreadTypeKHR type call:
// Provided by XR_KHR_android_thread_settings
XrResult xrSetAndroidApplicationThreadKHR(
XrSession session,
XrAndroidThreadTypeKHR threadType,
uint32_t threadId);
xrSetAndroidApplicationThreadKHR allows an application to declare an XR-critical thread and to classify it.
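As an illustrative sketch: like all extension functions, the entry point is obtained via xrGetInstanceProcAddr, and the thread id is the Linux thread id, e.g. gettid() on Android:

```c
PFN_xrSetAndroidApplicationThreadKHR pfnSetThread = NULL;
xrGetInstanceProcAddr(instance, "xrSetAndroidApplicationThreadKHR",
                      (PFN_xrVoidFunction*)&pfnSetThread);

// Declare the calling thread as the main render thread.
pfnSetThread(session, XR_ANDROID_THREAD_TYPE_RENDERER_MAIN_KHR,
             (uint32_t)gettid());
```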
Version History
-
Revision 1, 2017-01-17 (Johannes van Waveren)
-
Initial draft.
-
-
Revision 2, 2017-10-31 (Armelle Laine)
-
Move the performance settings to EXT extension.
-
-
Revision 3, 2018-12-20 (Paul Pedriana)
-
Revised the error code naming to use KHR and renamed xrSetApplicationThreadKHR → xrSetAndroidApplicationThreadKHR.
-
-
Revision 4, 2019-01-24 (Martin Renschler)
-
Added enum specification, reformatting
-
-
Revision 5, 2019-07-17 (Krzysztof Kosiński)
-
Clarify the type of thread identifier used by the extension.
-
-
Revision 6, 2023-12-04 (Xiang Wei)
-
Revise/fix the hints of enum specification
-
12.4. XR_KHR_binding_modification
- Name String
-
XR_KHR_binding_modification - Extension Type
-
Instance extension
- Registered Extension Number
-
121
- Revision
-
1
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Last Modified Date
-
2020-07-29
- IP Status
-
No known IP claims.
- Contributors
-
Joe Ludwig, Valve
- Contacts
-
Joe Ludwig, Valve
Overview
This extension adds an optional structure that can be included on the
XrInteractionProfileSuggestedBinding::next chain passed to
xrSuggestInteractionProfileBindings to specify additional information
to modify default binding behavior.
This extension does not define any actual modification structs, but includes the list of modifications and the XrBindingModificationBaseHeaderKHR structure to allow other extensions to provide specific modifications.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_BINDING_MODIFICATIONS_KHR
New Enums
New Structures
The XrBindingModificationsKHR structure is defined as:
// Provided by XR_KHR_binding_modification
typedef struct XrBindingModificationsKHR {
XrStructureType type;
const void* next;
uint32_t bindingModificationCount;
const XrBindingModificationBaseHeaderKHR* const* bindingModifications;
} XrBindingModificationsKHR;
The XrBindingModificationBaseHeaderKHR structure is defined as:
// Provided by XR_KHR_binding_modification
typedef struct XrBindingModificationBaseHeaderKHR {
XrStructureType type;
const void* next;
} XrBindingModificationBaseHeaderKHR;
The XrBindingModificationBaseHeaderKHR is a base structure that is
overridden by XrBindingModification* child structures.
New Functions
Issues
Version History
-
Revision 1, 2020-08-06 (Joe Ludwig)
-
Initial draft.
-
12.5. XR_KHR_composition_layer_color_scale_bias
- Name String
-
XR_KHR_composition_layer_color_scale_bias - Extension Type
-
Instance extension
- Registered Extension Number
-
35
- Revision
-
5
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Last Modified Date
-
2019-01-28
- IP Status
-
No known IP claims.
- Contributors
-
Paul Pedriana, Oculus
Cass Everitt, Oculus
Martin Renschler, Qualcomm
Overview
Color scale and bias are applied to a layer color during composition, after its conversion to premultiplied alpha representation.
If specified, colorScale and colorBias must be used to alter
the LayerColor as follows:
-
colorScale = max( vec4( 0, 0, 0, 0 ), colorScale )
-
LayerColor.RGB = LayerColor.A > 0 ? LayerColor.RGB / LayerColor.A : vec3( 0, 0, 0 )
-
LayerColor = LayerColor * colorScale + colorBias
-
LayerColor.RGB *= LayerColor.A
This extension specifies the XrCompositionLayerColorScaleBiasKHR
structure, which, if present in the
XrCompositionLayerBaseHeader::next chain, must be applied to
the composition layer.
This extension does not define a new composition layer type, but rather it defines a transform that may be applied to the color derived from existing composition layer types.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_COMPOSITION_LAYER_COLOR_SCALE_BIAS_KHR
New Enums
New Structures
The XrCompositionLayerColorScaleBiasKHR structure is defined as:
// Provided by XR_KHR_composition_layer_color_scale_bias
typedef struct XrCompositionLayerColorScaleBiasKHR {
XrStructureType type;
const void* next;
XrColor4f colorScale;
XrColor4f colorBias;
} XrCompositionLayerColorScaleBiasKHR;
XrCompositionLayerColorScaleBiasKHR contains the information needed to scale and bias the color of layer textures.
The XrCompositionLayerColorScaleBiasKHR structure can be applied by
applications to composition layers by adding an instance of the struct to
the XrCompositionLayerBaseHeader::next list.
New Functions
Issues
Version History
-
Revision 1, 2017-09-13 (Paul Pedriana)
-
Initial implementation.
-
-
Revision 2, 2019-01-24 (Martin Renschler)
-
Formatting, spec language changes
-
-
Revision 3, 2019-01-28 (Paul Pedriana)
-
Revised math to remove premultiplied alpha before applying color scale and offset, then restoring.
-
-
Revision 4, 2019-07-17 (Cass Everitt)
-
Non-substantive updates to the spec language and equations.
-
-
Revision 5, 2020-05-20 (Cass Everitt)
-
Changed extension name, simplified language.
-
12.6. XR_KHR_composition_layer_cube
- Name String
-
XR_KHR_composition_layer_cube - Extension Type
-
Instance extension
- Registered Extension Number
-
7
- Revision
-
8
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Last Modified Date
-
2019-01-24
- IP Status
-
No known IP claims.
- Contributors
-
Johannes van Waveren, Oculus
Cass Everitt, Oculus
Paul Pedriana, Oculus
Gloria Kennickell, Oculus
Sam Martin, ARM
Kaye Mason, Google, Inc.
Martin Renschler, Qualcomm - Contacts
-
Cass Everitt, Oculus
Paul Pedriana, Oculus
Overview
This extension adds an additional layer type that enables direct sampling from cubemaps.
The cube layer is the natural layer type for hardware accelerated environment maps. Without updating the image source, the user can look all around, and the compositor can display what they are looking at without intervention from the application.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_COMPOSITION_LAYER_CUBE_KHR
New Enums
New Structures
The XrCompositionLayerCubeKHR structure is defined as:
// Provided by XR_KHR_composition_layer_cube
typedef struct XrCompositionLayerCubeKHR {
XrStructureType type;
const void* next;
XrCompositionLayerFlags layerFlags;
XrSpace space;
XrEyeVisibility eyeVisibility;
XrSwapchain swapchain;
uint32_t imageArrayIndex;
XrQuaternionf orientation;
} XrCompositionLayerCubeKHR;
XrCompositionLayerCubeKHR contains the information needed to render a cube map when calling xrEndFrame. XrCompositionLayerCubeKHR is an alias type for the base struct XrCompositionLayerBaseHeader used in XrFrameEndInfo.
New Functions
Issues
Version History
-
Revision 0, 2017-02-01 (Johannes van Waveren)
-
Initial draft.
-
-
Revision 1, 2017-05-19 (Sam Martin)
-
Initial draft, moving the 3 layer types to an extension.
-
-
Revision 2, 2017-08-30 (Paul Pedriana)
-
Updated the specification.
-
-
Revision 3, 2017-10-12 (Cass Everitt)
-
Updated to reflect per-eye structs and the change to swapchains
-
-
Revision 4, 2017-10-18 (Kaye Mason)
-
Update to flatten structs to remove per-eye arrays.
-
-
Revision 5, 2017-12-05 (Paul Pedriana)
-
Updated to break out the cylinder and equirect features into separate extensions.
-
-
Revision 6, 2017-12-07 (Paul Pedriana)
-
Updated to use transform components instead of transform matrices.
-
-
Revision 7, 2017-12-07 (Paul Pedriana)
-
Updated to convert XrPosef to XrQuaternionf (there’s no position component).
-
-
Revision 8, 2019-01-24 (Martin Renschler)
-
Updated struct to use XrSwapchainSubImage, reformat and spec language changes, eye parameter description update
-
12.7. XR_KHR_composition_layer_cylinder
- Name String
-
XR_KHR_composition_layer_cylinder - Extension Type
-
Instance extension
- Registered Extension Number
-
18
- Revision
-
4
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Last Modified Date
-
2019-01-24
- IP Status
-
No known IP claims.
- Contributors
-
James Hughes, Oculus
Paul Pedriana, Oculus
Martin Renschler, Qualcomm - Contacts
-
Paul Pedriana, Oculus
Cass Everitt, Oculus
Overview
This extension adds an additional layer type where the XR runtime must map a texture stemming from a swapchain onto the inside of a cylinder section. It can be imagined much the same way a curved television display looks to a viewer. This is not a projection type of layer but rather an object-in-world type of layer, similar to XrCompositionLayerQuad. Only the interior of the cylinder surface must be visible; the exterior of the cylinder is not visible and must not be drawn by the runtime.
The cylinder characteristics are specified by the following parameters:
XrPosef pose;
float radius;
float centralAngle;
float aspectRatio;
These can be understood via the following diagram, which is a top-down view of a horizontally oriented cylinder. The aspect ratio drives how tall the cylinder will appear based on the other parameters. Typically the aspectRatio would be set to be the aspect ratio of the texture being used, so that it looks the same within the cylinder as it does in 2D.
-
r — Radius
-
a — Central angle in (0, 2π)
-
p — Origin of pose transform
-
U/V — UV coordinates
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_COMPOSITION_LAYER_CYLINDER_KHR
New Enums
New Structures
The XrCompositionLayerCylinderKHR structure is defined as:
// Provided by XR_KHR_composition_layer_cylinder
typedef struct XrCompositionLayerCylinderKHR {
XrStructureType type;
const void* next;
XrCompositionLayerFlags layerFlags;
XrSpace space;
XrEyeVisibility eyeVisibility;
XrSwapchainSubImage subImage;
XrPosef pose;
float radius;
float centralAngle;
float aspectRatio;
} XrCompositionLayerCylinderKHR;
XrCompositionLayerCylinderKHR contains the information needed to render a texture onto a cylinder when calling xrEndFrame. XrCompositionLayerCylinderKHR is an alias type for the base struct XrCompositionLayerBaseHeader used in XrFrameEndInfo.
New Functions
Issues
Version History
-
Revision 1, 2017-05-19 (Paul Pedriana)
-
Initial version. This was originally part of a single extension which supported multiple such extension layer types.
-
-
Revision 2, 2017-12-07 (Paul Pedriana)
-
Updated to use transform components instead of transform matrices.
-
-
Revision 3, 2018-03-05 (Paul Pedriana)
-
Added improved documentation and brought the documentation in line with the existing core spec.
-
-
Revision 4, 2019-01-24 (Martin Renschler)
-
Reformatted, spec language changes, eye parameter description update
-
12.8. XR_KHR_composition_layer_depth
- Name String
-
XR_KHR_composition_layer_depth - Extension Type
-
Instance extension
- Registered Extension Number
-
11
- Revision
-
6
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Last Modified Date
-
2019-01-24
- IP Status
-
No known IP claims.
- Contributors
-
Paul Pedriana, Oculus
Bryce Hutchings, Microsoft
Andreas Loeve Selvik, Arm
Martin Renschler, Qualcomm
Overview
This extension defines an extra layer type which allows applications to submit depth images along with color images in projection layers, i.e. XrCompositionLayerProjection.
The XR runtime may use this information to perform more accurate reprojections taking depth into account. Use of this extension does not affect the order of layer composition as described in Compositing.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_COMPOSITION_LAYER_DEPTH_INFO_KHR
New Enums
New Structures
When submitting depth images along with projection layers, add the
XrCompositionLayerDepthInfoKHR to the next chain for all
XrCompositionLayerProjectionView structures in the given layer.
The XrCompositionLayerDepthInfoKHR structure is defined as:
// Provided by XR_KHR_composition_layer_depth
typedef struct XrCompositionLayerDepthInfoKHR {
XrStructureType type;
const void* next;
XrSwapchainSubImage subImage;
float minDepth;
float maxDepth;
float nearZ;
float farZ;
} XrCompositionLayerDepthInfoKHR;
|
Note
A reversed mapping of depth, such that points closer to the view have a window space depth that is greater than points further away, can be achieved by making nearZ > farZ. |
XrCompositionLayerDepthInfoKHR contains the information needed to
associate depth with the color information in a projection layer.
The homogeneous transform from view space z to window space depth is given by the following matrix, where a = minDepth, b = maxDepth, n = nearZ, and f = farZ:
[ (f*b - n*a)/(f - n)    n*f*(a - b)/(f - n) ]
[ 1                      0                   ]
Homogeneous values are constructed from real values by appending a w component with value 1.0.
General homogeneous values are projected back to real space by dividing by the w component.
New Functions
Issues
-
Should the range of minDepth and maxDepth be constrained to [0, 1]?
RESOLVED: Yes.
There is no compelling mathematical reason for this constraint, however, it does not impose any hardship currently, and the constraint could be relaxed in a future version of the extension if needed.
-
Should we require minDepth be less than maxDepth?
RESOLVED: Yes.
There is no compelling mathematical reason for this constraint, however, it does not impose any hardship currently, and the constraint could be relaxed in a future version of the extension if needed. Reverse z mappings can be achieved by making nearZ > farZ.
-
Does this extension support view space depth images?
RESOLVED: No.
The formulation of the transform between view and window depths implies projected depth. A different extension would be needed to support a different interpretation of depth.
-
Is there any constraint on the resolution of the depth subimage?
RESOLVED: No.
The resolution of the depth image need not match that of the corresponding color image.
Version History
-
Revision 1, 2017-08-18 (Paul Pedriana)
-
Initial proposal.
-
-
Revision 2, 2017-10-30 (Kaye Mason)
-
Migration from Images to Swapchains.
-
-
Revision 3, 2018-07-20 (Bryce Hutchings)
-
Support for swapchain texture arrays
-
-
Revision 4, 2018-12-17 (Andreas Loeve Selvik)
-
depthImageRect in pixels instead of UVs
-
-
Revision 5, 2019-01-24 (Martin Renschler)
-
changed depthSwapchain/depthImageRect/depthImageArrayIndex to XrSwapchainSubImage
-
reformat and spec language changes
-
removed vendor specific terminology
-
-
Revision 6, 2022-02-16 (Cass Everitt)
-
Provide homogeneous transform as function of provided parameters
-
12.9. XR_KHR_composition_layer_equirect
- Name String
-
XR_KHR_composition_layer_equirect - Extension Type
-
Instance extension
- Registered Extension Number
-
19
- Revision
-
3
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Last Modified Date
-
2019-01-24
- IP Status
-
No known IP claims.
- Contributors
-
Johannes van Waveren, Oculus
Cass Everitt, Oculus
Paul Pedriana, Oculus
Gloria Kennickell, Oculus
Martin Renschler, Qualcomm - Contacts
-
Cass Everitt, Oculus
Paul Pedriana, Oculus
Overview
This extension adds an additional layer type where the XR runtime must map an equirectangular coded image stemming from a swapchain onto the inside of a sphere.
The equirect layer type provides most of the same benefits as a cubemap, but from an equirect 2D image source. This image source is appealing mostly because equirect environment maps are very common, and the highest quality you can get from them is by sampling them directly in the compositor.
This is not a projection type of layer but rather an object-in-world type of layer, similar to XrCompositionLayerQuad. Only the interior of the sphere surface must be visible; the exterior of the sphere is not visible and must not be drawn by the runtime.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_COMPOSITION_LAYER_EQUIRECT_KHR
New Enums
New Structures
The XrCompositionLayerEquirectKHR structure is defined as:
// Provided by XR_KHR_composition_layer_equirect
typedef struct XrCompositionLayerEquirectKHR {
XrStructureType type;
const void* next;
XrCompositionLayerFlags layerFlags;
XrSpace space;
XrEyeVisibility eyeVisibility;
XrSwapchainSubImage subImage;
XrPosef pose;
float radius;
XrVector2f scale;
XrVector2f bias;
} XrCompositionLayerEquirectKHR;
XrCompositionLayerEquirectKHR contains the information needed to render an equirectangular image onto a sphere when calling xrEndFrame. XrCompositionLayerEquirectKHR is an alias type for the base struct XrCompositionLayerBaseHeader used in XrFrameEndInfo.
New Functions
Issues
Version History
-
Revision 1, 2017-05-19 (Paul Pedriana)
-
Initial version. This was originally part of a single extension which supported multiple such extension layer types.
-
-
Revision 2, 2017-12-07 (Paul Pedriana)
-
Updated to use transform components instead of transform matrices.
-
-
Revision 3, 2019-01-24 (Martin Renschler)
-
Reformatted, spec language changes, eye parameter description update
-
12.10. XR_KHR_composition_layer_equirect2
- Name String
-
XR_KHR_composition_layer_equirect2 - Extension Type
-
Instance extension
- Registered Extension Number
-
92
- Revision
-
1
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Last Modified Date
-
2019-01-24
- IP Status
-
No known IP claims.
- Contributors
-
Johannes van Waveren, Oculus
Cass Everitt, Oculus
Paul Pedriana, Oculus
Gloria Kennickell, Oculus
Martin Renschler, Qualcomm - Contacts
-
Cass Everitt, Oculus
Overview
This extension adds an additional layer type where the XR runtime must map an equirectangular coded image stemming from a swapchain onto the inside of a sphere.
The equirect layer type provides most of the same benefits as a cubemap, but from an equirect 2D image source. This image source is appealing mostly because equirect environment maps are very common, and the highest quality you can get from them is by sampling them directly in the compositor.
This is not a projection type of layer but rather an object-in-world type of layer, similar to XrCompositionLayerQuad. Only the interior of the sphere surface must be visible; the exterior of the sphere is not visible and must not be drawn by the runtime.
This extension uses a different parameterization more in keeping with the formulation of KHR_composition_layer_cylinder but is functionally equivalent to KHR_composition_layer_equirect.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_COMPOSITION_LAYER_EQUIRECT2_KHR
New Enums
New Structures
The XrCompositionLayerEquirect2KHR structure is defined as:
// Provided by XR_KHR_composition_layer_equirect2
typedef struct XrCompositionLayerEquirect2KHR {
XrStructureType type;
const void* next;
XrCompositionLayerFlags layerFlags;
XrSpace space;
XrEyeVisibility eyeVisibility;
XrSwapchainSubImage subImage;
XrPosef pose;
float radius;
float centralHorizontalAngle;
float upperVerticalAngle;
float lowerVerticalAngle;
} XrCompositionLayerEquirect2KHR;
XrCompositionLayerEquirect2KHR contains the information needed to render an equirectangular image onto a sphere when calling xrEndFrame. XrCompositionLayerEquirect2KHR is an alias type for the base struct XrCompositionLayerBaseHeader used in XrFrameEndInfo.
New Functions
Issues
Version History
-
Revision 1, 2020-05-08 (Cass Everitt)
-
Initial version.
-
Kept contributors from the original equirect extension.
-
12.11. XR_KHR_convert_timespec_time
- Name String
-
XR_KHR_convert_timespec_time - Extension Type
-
Instance extension
- Registered Extension Number
-
37
- Revision
-
1
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Last Modified Date
-
2019-01-24
- IP Status
-
No known IP claims.
- Contributors
-
Paul Pedriana, Oculus
Overview
This extension provides two functions for converting between timespec
monotonic time and XrTime.
The xrConvertTimespecTimeToTimeKHR function converts from timespec
time to XrTime, while the xrConvertTimeToTimespecTimeKHR
function converts XrTime to timespec monotonic time.
The primary use case for this functionality is to be able to synchronize
events between the local system and the OpenXR system.
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Functions
To convert from timespec monotonic time to XrTime, call:
// Provided by XR_KHR_convert_timespec_time
XrResult xrConvertTimespecTimeToTimeKHR(
XrInstance instance,
const struct timespec* timespecTime,
XrTime* time);
The xrConvertTimespecTimeToTimeKHR function converts a time obtained
by the clock_gettime function to the equivalent XrTime.
If the output time cannot represent the input timespecTime, the
runtime must return XR_ERROR_TIME_INVALID.
To convert from XrTime to timespec monotonic time, call:
// Provided by XR_KHR_convert_timespec_time
XrResult xrConvertTimeToTimespecTimeKHR(
XrInstance instance,
XrTime time,
struct timespec* timespecTime);
The xrConvertTimeToTimespecTimeKHR function converts an
XrTime to time as if generated by clock_gettime.
If the output timespecTime cannot represent the input time, the
runtime must return XR_ERROR_TIME_INVALID.
Issues
Version History
-
Revision 1, 2019-01-24 (Paul Pedriana)
-
Initial draft
-
12.12. XR_KHR_D3D11_enable
- Name String
-
XR_KHR_D3D11_enable - Extension Type
-
Instance extension
- Registered Extension Number
-
28
- Revision
-
11
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Last Modified Date
-
2025-03-21
- IP Status
-
No known IP claims.
- Contributors
-
Bryce Hutchings, Microsoft
Paul Pedriana, Oculus
Mark Young, LunarG
Minmin Gong, Microsoft
Aaron Leiby, Valve
12.12.1. Overview
This extension enables the use of the Direct3D 11 (D3D11) graphics API in an OpenXR application. Without this extension, an OpenXR application may not be able to use any D3D11 swapchain images.
This extension provides the mechanisms necessary for an application to generate a valid XrGraphicsBindingD3D11KHR structure in order to create a D3D11-based XrSession. Note that during this process the application is responsible for creating all the required D3D11 objects, including a graphics device to be used for rendering. However, the runtime provides the D3D11 textures to render into. This extension provides mechanisms for the application to interact with those textures by calling xrEnumerateSwapchainImages and providing XrSwapchainImageD3D11KHR structures to populate.
In order to expose the structures, types, and functions of this extension,
the application source code must define XR_USE_GRAPHICS_API_D3D11
before including the OpenXR platform header openxr_platform.h, in all
portions of the library or application that interact with the types, values,
and functions it defines.
12.12.2. Get Graphics Requirements
Some computer systems have multiple graphics devices, each of which may have independent external display outputs. XR systems that connect to such computer systems are typically connected to a single graphics device. Applications need to know the graphics device associated with the XR system, so that rendering takes place on the correct graphics device.
The xrGetD3D11GraphicsRequirementsKHR function is defined as:
// Provided by XR_KHR_D3D11_enable
XrResult xrGetD3D11GraphicsRequirementsKHR(
XrInstance instance,
XrSystemId systemId,
XrGraphicsRequirementsD3D11KHR* graphicsRequirements);
This call retrieves the D3D11 feature level and graphics device for an
instance and system.
The xrGetD3D11GraphicsRequirementsKHR function identifies to the
application the graphics device (Windows LUID) to be used and the minimum
feature level to use.
The runtime must return XR_ERROR_GRAPHICS_REQUIREMENTS_CALL_MISSING
(XR_ERROR_VALIDATION_FAILURE may be returned due to legacy behavior)
on calls to xrCreateSession if xrGetD3D11GraphicsRequirementsKHR
has not been called for the same instance and systemId.
The LUID and feature level that xrGetD3D11GraphicsRequirementsKHR
returns must be used to create the ID3D11Device that the application
passes to xrCreateSession in the XrGraphicsBindingD3D11KHR.
The XrGraphicsRequirementsD3D11KHR structure is defined as:
// Provided by XR_KHR_D3D11_enable
typedef struct XrGraphicsRequirementsD3D11KHR {
XrStructureType type;
void* next;
LUID adapterLuid;
D3D_FEATURE_LEVEL minFeatureLevel;
} XrGraphicsRequirementsD3D11KHR;
XrGraphicsRequirementsD3D11KHR is populated by xrGetD3D11GraphicsRequirementsKHR with the runtime’s D3D11 API feature level and adapter requirements.
12.12.3. Graphics Binding Structure
The XrGraphicsBindingD3D11KHR structure is defined as:
// Provided by XR_KHR_D3D11_enable
typedef struct XrGraphicsBindingD3D11KHR {
XrStructureType type;
const void* next;
ID3D11Device* device;
} XrGraphicsBindingD3D11KHR;
To create a D3D11-backed XrSession, the application provides a pointer
to an XrGraphicsBindingD3D11KHR structure in the
XrSessionCreateInfo::next chain when calling
xrCreateSession.
The D3D11 device specified in XrGraphicsBindingD3D11KHR::device
must be created in accordance with the requirements retrieved through
xrGetD3D11GraphicsRequirementsKHR, otherwise xrCreateSession
must return XR_ERROR_GRAPHICS_DEVICE_INVALID.
Creating a session using this structure triggers several requirements on the runtime regarding swapchain images. See the specification of XrSwapchainImageD3D11KHR for details.
12.12.4. Swapchain Images
The XrSwapchainImageD3D11KHR structure is defined as:
// Provided by XR_KHR_D3D11_enable
typedef struct XrSwapchainImageD3D11KHR {
XrStructureType type;
void* next;
ID3D11Texture2D* texture;
} XrSwapchainImageD3D11KHR;
If a given session was created with XrGraphicsBindingD3D11KHR, the following conditions apply.
-
Calls to xrEnumerateSwapchainImages on an XrSwapchain in that session must return an array of XrSwapchainImageD3D11KHR structures.
-
Whenever an OpenXR function accepts an XrSwapchainImageBaseHeader pointer as a parameter in that session, the runtime must also accept a pointer to an XrSwapchainImageD3D11KHR.
The OpenXR runtime must interpret the top-left corner of the swapchain image as the coordinate origin unless specified otherwise by extension functionality.
The OpenXR runtime must return a texture created in accordance with D3D11 Swapchain Flag Bits.
12.12.5. D3D11 Swapchain Flag Bits
All valid XrSwapchainUsageFlags values passed in a session created
using XrGraphicsBindingD3D11KHR must be interpreted as follows by the
runtime, so that the returned swapchain images used by the application may
be used as if they were created with the corresponding D3D11_BIND_FLAG
flags.
The runtime may set additional bind flags but must not restrict usage.
| XrSwapchainUsageFlagBits | Corresponding D3D11 bind flag bits |
|---|---|
| XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT | D3D11_BIND_RENDER_TARGET |
| XR_SWAPCHAIN_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT | D3D11_BIND_DEPTH_STENCIL |
| XR_SWAPCHAIN_USAGE_UNORDERED_ACCESS_BIT | D3D11_BIND_UNORDERED_ACCESS |
| XR_SWAPCHAIN_USAGE_TRANSFER_SRC_BIT | ignored |
| XR_SWAPCHAIN_USAGE_TRANSFER_DST_BIT | ignored |
| XR_SWAPCHAIN_USAGE_SAMPLED_BIT | D3D11_BIND_SHADER_RESOURCE |
| XR_SWAPCHAIN_USAGE_MUTABLE_FORMAT_BIT | ignored |
| XR_SWAPCHAIN_USAGE_INPUT_ATTACHMENT_BIT_KHR | ignored |
All D3D11 swapchain textures are created with D3D11_USAGE_DEFAULT usage.
12.12.8. New Enum Constants
-
XR_KHR_D3D11_ENABLE_EXTENSION_NAME -
XR_KHR_D3D11_enable_SPEC_VERSION -
Extending XrStructureType:
-
XR_TYPE_GRAPHICS_BINDING_D3D11_KHR -
XR_TYPE_GRAPHICS_REQUIREMENTS_D3D11_KHR -
XR_TYPE_SWAPCHAIN_IMAGE_D3D11_KHR
-
12.12.9. Version History
-
Revision 1, 2018-05-07 (Mark Young)
-
Initial draft
-
-
Revision 2, 2018-06-21 (Bryce Hutchings)
-
Split XR_KHR_D3D_enable into XR_KHR_D3D11_enable
-
Rename and expand xrGetD3DGraphicsDeviceKHR functionality to xrGetD3D11GraphicsRequirementsKHR
-
-
Revision 3, 2018-11-15 (Paul Pedriana)
-
Specified the swapchain texture coordinate origin.
-
-
Revision 4, 2018-11-16 (Minmin Gong)
-
Specified Y direction and Z range in clip space
-
-
Revision 5, 2020-08-06 (Bryce Hutchings)
-
Added new XR_ERROR_GRAPHICS_REQUIREMENTS_CALL_MISSING error code
-
-
Revision 8, 2021-09-09 (Bryce Hutchings)
-
Document mapping for
XrSwapchainUsageFlags
-
-
Revision 9, 2021-12-28 (Microsoft)
-
Added missing XR_ERROR_GRAPHICS_DEVICE_INVALID error condition
-
-
Revision 10, 2025-03-07 (Rylie Pavlik, Collabora, Ltd.)
-
Re-organize, clarify, and make more uniform with other graphics binding extensions.
-
-
Revision 11, 2025-03-21 (Aaron Leiby)
-
Removed clip space specification
-
12.13. XR_KHR_D3D12_enable
- Name String
-
XR_KHR_D3D12_enable - Extension Type
-
Instance extension
- Registered Extension Number
-
29
- Revision
-
11
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Last Modified Date
-
2025-03-21
- IP Status
-
No known IP claims.
- Contributors
-
Bryce Hutchings, Microsoft
Paul Pedriana, Oculus
Mark Young, LunarG
Minmin Gong, Microsoft
Dan Ginsburg, Valve
Aaron Leiby, Valve
12.13.1. Overview
This extension enables the use of the Direct3D 12 (D3D12) graphics API in an OpenXR application. Without this extension, an OpenXR application may not be able to use any D3D12 swapchain images.
This extension provides the mechanisms necessary for an application to generate a valid XrGraphicsBindingD3D12KHR structure in order to create a D3D12-based XrSession. Note that during this process the application is responsible for creating all the required D3D12 objects, including a graphics device and queue to be used for rendering. However, the runtime provides the D3D12 images to render into. This extension provides mechanisms for the application to interact with those images by calling xrEnumerateSwapchainImages and providing XrSwapchainImageD3D12KHR structures to populate.
In order to expose the structures, types, and functions of this extension,
the application source code must define XR_USE_GRAPHICS_API_D3D12
before including the OpenXR platform header openxr_platform.h, in all
portions of the library or application that interact with the types, values,
and functions it defines.
12.13.2. Get Graphics Requirements
Some computer systems have multiple graphics devices, each of which may have independent external display outputs. XR systems that connect to such computer systems are typically connected to a single graphics device. Applications need to know the graphics device associated with the XR system, so that rendering takes place on the correct graphics device.
The xrGetD3D12GraphicsRequirementsKHR function is defined as:
// Provided by XR_KHR_D3D12_enable
XrResult xrGetD3D12GraphicsRequirementsKHR(
XrInstance instance,
XrSystemId systemId,
XrGraphicsRequirementsD3D12KHR* graphicsRequirements);
This call retrieves the D3D12 feature level and graphics device for an
instance and system.
The xrGetD3D12GraphicsRequirementsKHR function identifies to the
application the graphics device (Windows LUID) to be used and the minimum
feature level to use.
The runtime must return XR_ERROR_GRAPHICS_REQUIREMENTS_CALL_MISSING
(XR_ERROR_VALIDATION_FAILURE may be returned due to legacy behavior)
on calls to xrCreateSession if xrGetD3D12GraphicsRequirementsKHR
has not been called for the same instance and systemId.
The LUID and feature level that xrGetD3D12GraphicsRequirementsKHR
returns must be used to create the ID3D12Device that the application
passes to xrCreateSession in the XrGraphicsBindingD3D12KHR.
The XrGraphicsRequirementsD3D12KHR structure is defined as:
// Provided by XR_KHR_D3D12_enable
typedef struct XrGraphicsRequirementsD3D12KHR {
XrStructureType type;
void* next;
LUID adapterLuid;
D3D_FEATURE_LEVEL minFeatureLevel;
} XrGraphicsRequirementsD3D12KHR;
XrGraphicsRequirementsD3D12KHR is populated by xrGetD3D12GraphicsRequirementsKHR with the runtime’s D3D12 API feature level and adapter requirements.
12.13.3. Graphics Binding Structure
The XrGraphicsBindingD3D12KHR structure is defined as:
// Provided by XR_KHR_D3D12_enable
typedef struct XrGraphicsBindingD3D12KHR {
XrStructureType type;
const void* next;
ID3D12Device* device;
ID3D12CommandQueue* queue;
} XrGraphicsBindingD3D12KHR;
To create a D3D12-backed XrSession, the application provides a pointer
to an XrGraphicsBindingD3D12KHR structure in the
XrSessionCreateInfo::next chain when calling
xrCreateSession.
The D3D12 device specified in XrGraphicsBindingD3D12KHR::device
must be created in accordance with the requirements retrieved through
xrGetD3D12GraphicsRequirementsKHR, otherwise xrCreateSession
must return XR_ERROR_GRAPHICS_DEVICE_INVALID.
Creating a session using this structure triggers several requirements on the runtime regarding swapchain images. See the specification of XrSwapchainImageD3D12KHR for details.
12.13.4. Swapchain Images
The XrSwapchainImageD3D12KHR structure is defined as:
// Provided by XR_KHR_D3D12_enable
typedef struct XrSwapchainImageD3D12KHR {
XrStructureType type;
void* next;
ID3D12Resource* texture;
} XrSwapchainImageD3D12KHR;
If a given session was created with XrGraphicsBindingD3D12KHR, the following conditions apply.
-
Calls to xrEnumerateSwapchainImages on an XrSwapchain in that session must return an array of XrSwapchainImageD3D12KHR structures.
-
Whenever an OpenXR function accepts an XrSwapchainImageBaseHeader pointer as a parameter in that session, the runtime must also accept a pointer to an XrSwapchainImageD3D12KHR.
The OpenXR runtime must interpret the top-left corner of the swapchain image as the coordinate origin unless specified otherwise by extension functionality.
The OpenXR runtime must return a texture created in accordance with D3D12 Swapchain Flag Bits.
The OpenXR runtime must manage image resource state in accordance with D3D12 Swapchain Image Resource State.
12.13.5. D3D12 Swapchain Flag Bits
All valid XrSwapchainUsageFlags values passed in a session created
using XrGraphicsBindingD3D12KHR must be interpreted as follows by the
runtime, so that the returned swapchain images used by the application may
be used as if they were created with the corresponding
D3D12_RESOURCE_FLAGS flags and heap type.
The runtime may set additional resource flags but must not restrict usage.
| XrSwapchainUsageFlagBits | Corresponding D3D12 resource flag bits |
|---|---|
| XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT | D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET |
| XR_SWAPCHAIN_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT | D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL |
| XR_SWAPCHAIN_USAGE_UNORDERED_ACCESS_BIT | D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS |
| XR_SWAPCHAIN_USAGE_TRANSFER_SRC_BIT | ignored |
| XR_SWAPCHAIN_USAGE_TRANSFER_DST_BIT | ignored |
| XR_SWAPCHAIN_USAGE_SAMPLED_BIT | D3D12_RESOURCE_FLAG_NONE |
| XR_SWAPCHAIN_USAGE_MUTABLE_FORMAT_BIT | ignored |
| XR_SWAPCHAIN_USAGE_INPUT_ATTACHMENT_BIT_KHR | ignored |
All D3D12 swapchain textures are created with D3D12_HEAP_TYPE_DEFAULT heap
type.
12.13.6. D3D12 Swapchain Image Resource State
If an application waits on a swapchain image by calling
xrWaitSwapchainImage in a session created using
XrGraphicsBindingD3D12KHR, and that call returns XR_SUCCESS or
XR_SESSION_LOSS_PENDING, then the OpenXR runtime must guarantee that
the following conditions are true:
-
The color rendering target image has a resource state match with D3D12_RESOURCE_STATE_RENDER_TARGET
-
The depth rendering target image has a resource state match with D3D12_RESOURCE_STATE_DEPTH_WRITE
-
The ID3D12CommandQueue specified in XrGraphicsBindingD3D12KHR is able to write to the image.
When an application releases a swapchain image by calling xrReleaseSwapchainImage in a session created using XrGraphicsBindingD3D12KHR, the OpenXR runtime must interpret the image as:
-
Having a resource state match with D3D12_RESOURCE_STATE_RENDER_TARGET if the image is a color rendering target
-
Having a resource state match with D3D12_RESOURCE_STATE_DEPTH_WRITE if the image is a depth rendering target
-
Being available for read/write on the ID3D12CommandQueue specified in XrGraphicsBindingD3D12KHR.
The application is responsible for transitioning the swapchain image back to the resource state and queue availability that the OpenXR runtime requires. If the image is not in a resource state match with the above specifications the runtime may exhibit undefined behavior.
12.13.9. New Enum Constants
- XR_KHR_D3D12_ENABLE_EXTENSION_NAME
- XR_KHR_D3D12_enable_SPEC_VERSION
- Extending XrStructureType:
  - XR_TYPE_GRAPHICS_BINDING_D3D12_KHR
  - XR_TYPE_GRAPHICS_REQUIREMENTS_D3D12_KHR
  - XR_TYPE_SWAPCHAIN_IMAGE_D3D12_KHR
12.13.10. Version History
- Revision 1, 2018-05-07 (Mark Young)
  - Initial draft
- Revision 2, 2018-06-21 (Bryce Hutchings)
  - Split XR_KHR_D3D_enable into XR_KHR_D3D12_enable
  - Rename and expand xrGetD3DGraphicsDeviceKHR functionality to xrGetD3D12GraphicsRequirementsKHR
- Revision 3, 2018-11-15 (Paul Pedriana)
  - Specified the swapchain texture coordinate origin.
- Revision 4, 2018-11-16 (Minmin Gong)
  - Specified Y direction and Z range in clip space
- Revision 5, 2019-01-29 (Dan Ginsburg)
  - Added swapchain image resource state details.
- Revision 6, 2020-03-18 (Minmin Gong)
  - Specified depth swapchain image resource state.
- Revision 7, 2020-08-06 (Bryce Hutchings)
  - Added new XR_ERROR_GRAPHICS_REQUIREMENTS_CALL_MISSING error code
- Revision 8, 2021-09-09 (Bryce Hutchings)
  - Document mapping for XrSwapchainUsageFlags
- Revision 9, 2021-12-28 (Microsoft)
  - Added missing XR_ERROR_GRAPHICS_DEVICE_INVALID error condition
- Revision 10, 2025-03-07 (Rylie Pavlik, Collabora, Ltd.)
  - Re-organize, clarify, and make more uniform with other graphics binding extensions.
- Revision 11, 2025-03-21 (Aaron Leiby)
  - Removed clip space specification
12.14. XR_KHR_extended_struct_name_lengths
- Name String: XR_KHR_extended_struct_name_lengths
- Extension Type: Instance extension
- Registered Extension Number: 149
- Revision: 1
- Ratification Status: Ratified
- Extension and Version Dependencies
- Last Modified Date: 2024-08-12
- IP Status: No known IP claims.
- Contributors:
  - Matthew Langille, Meta Platforms
  - Andreas Selvik, Meta Platforms
  - Rylie Pavlik, Collabora, Ltd.
12.14.1. Overview
This extension increases the maximum structure type name size and provides a new function to retrieve these extended names.
xrStructureTypeToString2KHR is provided to allow retrieving the string names of structure type enumerants with lengths that exceed the original limit of 63 bytes (64 bytes including the null terminator). xrStructureTypeToString2KHR returns name strings for structure type enumerants up to 127 bytes in length (128 bytes including the null terminator). An application can use xrStructureTypeToString2KHR as a drop-in replacement for xrStructureTypeToString, as it works with all structure type enumerants, regardless of string name length.
12.14.2. Retrieving Structure Type Enumerant Strings
If the original xrStructureTypeToString is used to retrieve string names for structure type enumerants with name lengths in excess of 63 bytes, its behavior is clarified as follows. xrStructureTypeToString must populate the buffer with the correct name, except that the string must be truncated at a codepoint boundary to fit within the available buffer. That is, the returned string must always be valid UTF-8.
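The required truncation behavior can be sketched as follows. This hypothetical helper is not part of the API; it trims a UTF-8 string to fit a buffer without ever splitting a multi-byte codepoint, which is the guarantee described above.

```c
#include <stddef.h>
#include <string.h>

/* Copy at most (cap - 1) bytes of a UTF-8 string into buf, truncating at a
 * codepoint boundary so the result is always valid UTF-8, then terminate.
 * Sketches the truncation behavior required of xrStructureTypeToString. */
static void copy_utf8_truncated(char *buf, size_t cap, const char *src) {
    size_t len = strlen(src);
    if (len >= cap)
        len = cap - 1;
    /* Back up past any UTF-8 continuation bytes (10xxxxxx) so we never
     * split a multi-byte codepoint across the truncation point. */
    while (len > 0 && ((unsigned char)src[len] & 0xC0) == 0x80)
        --len;
    memcpy(buf, src, len);
    buf[len] = '\0';
}
```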
The xrStructureTypeToString2KHR function is defined as:
// Provided by XR_KHR_extended_struct_name_lengths
XrResult xrStructureTypeToString2KHR(
XrInstance instance,
XrStructureType value,
char buffer[XR_MAX_STRUCTURE_NAME_SIZE_EXTENDED_KHR]);
Returns the name of the provided XrStructureType value by copying a
valid null-terminated UTF-8 string into buffer.
In all cases the returned string must be one of:
- the name of the structure type enumerant, as declared in the OpenXR headers, or
- the string XR_UNKNOWN_STRUCTURE_TYPE_ concatenated with the decimal value of the given enumerant, if the enumerant is unknown to the runtime.
For structure type enumerants whose names fit within the original size limit of 63 bytes, xrStructureTypeToString2KHR must return the same resultant string as xrStructureTypeToString, up to the null terminator.
The XR_MAX_STRUCTURE_NAME_SIZE_EXTENDED_KHR enumerant defines the size
of the buffer passed to xrStructureTypeToString2KHR.
#define XR_MAX_STRUCTURE_NAME_SIZE_EXTENDED_KHR 256
12.15. XR_KHR_generic_controller
- Name String: XR_KHR_generic_controller
- Extension Type: Instance extension
- Registered Extension Number: 712
- Revision: 1
- Ratification Status: Ratified
- Extension and Version Dependencies
- API Interactions:
  - Interacts with XR_EXT_dpad_binding
  - Interacts with XR_EXT_hand_interaction
  - Interacts with XR_EXT_palm_pose
- Contributors:
  - Andreas Loeve Selvik, Meta Platforms
  - Bastiaan Olij, Godot Engine
  - Bryce Hutchings, Microsoft
  - John Kearney, Meta Platforms
  - Jules Blok, Epic Games
  - Nathan Nuber, Valve
  - Rylie Pavlik, Collabora
  - Lachlan Ford, Microsoft
  - Yin Li, Microsoft
12.15.1. Overview
This extension enables a new interaction profile for generic motion controllers. This new interaction profile provides button, trigger, squeeze, thumbstick, and haptic support for applications. Similarly to the Khronos Simple Controller Profile, there is no hardware associated with this profile, and runtimes which support it should map the input paths provided to whatever the appropriate inputs are on the actual hardware.
12.15.2. New Interaction Profile Identifiers
- primary - A standalone button that is easier for the user to interact with than the secondary button (secondary).
- secondary - A standalone button that is more difficult for the user to interact with than the primary button (primary).
12.15.3. New Interaction Profile
Khronos Generic Controller Profile
Path: /interaction_profiles/khr/generic_controller
Valid for user paths:
- /user/hand/left
- /user/hand/right
This interaction profile provides basic pose, button, thumbstick, trigger, and haptic support for applications which are able to use generic controller style input.
Unlike many interaction profiles, there is no specific hardware associated with the interaction profile, and runtimes which support this profile should map the input/output binding paths to whatever the appropriate inputs/outputs are on the actual hardware.
If there is a specific interaction profile associated with the motion controller in use, and the application suggests bindings for that specific interaction profile and this Generic Controller Profile, the runtime should select the bindings suggested for the hardware specific interaction profile in preference to bindings suggested for this Generic Controller Profile.
Specifically, the Generic Controller Profile is designed to offer broad compatibility across motion controllers which offer generic controller style data but not to comprehensively cover any specific hardware.
Note: The intent of this interaction profile is to provide a fallback. It is still expected that the application will suggest bindings for all hardware based interaction profiles that the application has been tested with.
Some runtimes must select bindings suggested for this interaction profile in some conditions.
Specifically, if in some condition, a runtime obeying suggested bindings selects bindings suggested for one of the following interaction profiles:
- /interaction_profiles/oculus/touch_controller
- /interaction_profiles/valve/index_controller
Then, such a runtime must select suggested bindings for /interaction_profiles/khr/generic_controller if bindings are suggested for neither of the above, nor for an interaction profile that maps more directly to the devices in use.
That is, if a runtime selects "touch_controller" or "index_controller" in some case, then it must select "generic_controller" in a similar situation.
Note: The intent of this language is to guarantee support for this interaction profile for runtimes implementing certain interaction profiles that are known to map well, but runtimes that do not typically remap any of these specific interaction profiles are encouraged to map this interaction profile onto their devices.
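The selection rule above can be sketched as follows, under a deliberately simplified model where a runtime matches one device-specific profile against the set of profiles the application suggested bindings for. The function name and the two-profile model are illustrative only, not any runtime's real algorithm.

```c
#include <stddef.h>
#include <string.h>

/* Prefer a device-specific profile the application suggested bindings for;
 * otherwise fall back to the generic controller profile if bindings were
 * suggested for it, else select nothing (NULL). */
static const char *select_profile(const char *deviceProfile,
                                  const char **suggested, size_t count) {
    const char *generic = "/interaction_profiles/khr/generic_controller";
    const char *fallback = NULL;
    for (size_t i = 0; i < count; ++i) {
        if (strcmp(suggested[i], deviceProfile) == 0)
            return deviceProfile;   /* hardware-specific profile wins */
        if (strcmp(suggested[i], generic) == 0)
            fallback = generic;
    }
    return fallback;                /* generic profile, or none at all */
}
```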
Supported component paths:
- …/input/primary/click
- …/input/secondary/click
- …/input/thumbstick
- …/input/thumbstick/x
- …/input/thumbstick/y
- …/input/thumbstick/click
- …/input/squeeze/value
- …/input/trigger/value
- …/input/grip/pose
- …/input/grip_surface/pose
- …/input/aim/pose
- …/output/haptic
Expected binding mappings
The runtime may use any appropriate hardware binding for this profile; however, there are natural equivalences between the profiles:
| Binding path for /interaction_profiles/khr/generic_controller | Equivalent binding path for /interaction_profiles/oculus/touch_controller |
|---|---|
| /user/hand/left/input/primary/click | /user/hand/left/input/x/click |
| /user/hand/left/input/secondary/click | /user/hand/left/input/y/click |
| /user/hand/left/input/thumbstick | /user/hand/left/input/thumbstick |
| /user/hand/left/input/thumbstick/x | /user/hand/left/input/thumbstick/x |
| /user/hand/left/input/thumbstick/y | /user/hand/left/input/thumbstick/y |
| /user/hand/left/input/thumbstick/click | /user/hand/left/input/thumbstick/click |
| /user/hand/left/input/squeeze/value | /user/hand/left/input/squeeze/value |
| /user/hand/left/input/trigger/value | /user/hand/left/input/trigger/value |
| /user/hand/left/input/grip/pose | /user/hand/left/input/grip/pose |
| /user/hand/left/input/grip_surface/pose | /user/hand/left/input/grip_surface/pose |
| /user/hand/left/input/aim/pose | /user/hand/left/input/aim/pose |
| /user/hand/left/output/haptic | /user/hand/left/output/haptic |
| /user/hand/right/input/primary/click | /user/hand/right/input/a/click |
| /user/hand/right/input/secondary/click | /user/hand/right/input/b/click |
| /user/hand/right/input/thumbstick | /user/hand/right/input/thumbstick |
| /user/hand/right/input/thumbstick/x | /user/hand/right/input/thumbstick/x |
| /user/hand/right/input/thumbstick/y | /user/hand/right/input/thumbstick/y |
| /user/hand/right/input/thumbstick/click | /user/hand/right/input/thumbstick/click |
| /user/hand/right/input/squeeze/value | /user/hand/right/input/squeeze/value |
| /user/hand/right/input/trigger/value | /user/hand/right/input/trigger/value |
| /user/hand/right/input/grip/pose | /user/hand/right/input/grip/pose |
| /user/hand/right/input/grip_surface/pose | /user/hand/right/input/grip_surface/pose |
| /user/hand/right/input/aim/pose | /user/hand/right/input/aim/pose |
| /user/hand/right/output/haptic | /user/hand/right/output/haptic |
The following binding paths for /interaction_profiles/oculus/touch_controller lack a generic controller equivalent and therefore are omitted from the preceding table:
- /user/hand/left/input/trigger/proximity
- /user/hand/left/input/thumb_resting_surfaces/proximity
- /user/hand/left/input/menu/click
- /user/hand/right/input/system/click
- /user/hand/right/input/trigger/proximity
- /user/hand/right/input/thumb_resting_surfaces/proximity
| Binding path for /interaction_profiles/khr/generic_controller | Equivalent binding path for /interaction_profiles/valve/index_controller |
|---|---|
| /user/hand/left/input/primary/click | /user/hand/left/input/a/click |
| /user/hand/left/input/secondary/click | /user/hand/left/input/b/click |
| /user/hand/left/input/thumbstick | /user/hand/left/input/thumbstick |
| /user/hand/left/input/thumbstick/x | /user/hand/left/input/thumbstick/x |
| /user/hand/left/input/thumbstick/y | /user/hand/left/input/thumbstick/y |
| /user/hand/left/input/thumbstick/click | /user/hand/left/input/thumbstick/click |
| /user/hand/left/input/squeeze/value | /user/hand/left/input/squeeze/value |
| /user/hand/left/input/trigger/value | /user/hand/left/input/trigger/value |
| /user/hand/left/input/grip/pose | /user/hand/left/input/grip/pose |
| /user/hand/left/input/grip_surface/pose | /user/hand/left/input/grip_surface/pose |
| /user/hand/left/input/aim/pose | /user/hand/left/input/aim/pose |
| /user/hand/left/output/haptic | /user/hand/left/output/haptic |
| /user/hand/right/input/primary/click | /user/hand/right/input/a/click |
| /user/hand/right/input/secondary/click | /user/hand/right/input/b/click |
| /user/hand/right/input/thumbstick | /user/hand/right/input/thumbstick |
| /user/hand/right/input/thumbstick/x | /user/hand/right/input/thumbstick/x |
| /user/hand/right/input/thumbstick/y | /user/hand/right/input/thumbstick/y |
| /user/hand/right/input/thumbstick/click | /user/hand/right/input/thumbstick/click |
| /user/hand/right/input/squeeze/value | /user/hand/right/input/squeeze/value |
| /user/hand/right/input/trigger/value | /user/hand/right/input/trigger/value |
| /user/hand/right/input/grip/pose | /user/hand/right/input/grip/pose |
| /user/hand/right/input/grip_surface/pose | /user/hand/right/input/grip_surface/pose |
| /user/hand/right/input/aim/pose | /user/hand/right/input/aim/pose |
| /user/hand/right/output/haptic | /user/hand/right/output/haptic |
The following binding paths for /interaction_profiles/valve/index_controller lack a generic controller equivalent and therefore are omitted from the preceding table:
- /user/hand/left/input/system/click
- /user/hand/left/input/system/touch
- /user/hand/left/input/trackpad/x
- /user/hand/left/input/trackpad/y
- /user/hand/left/input/trackpad/force
- /user/hand/left/input/trackpad/touch
- /user/hand/right/input/system/click
- /user/hand/right/input/system/touch
- /user/hand/right/input/trackpad/x
- /user/hand/right/input/trackpad/y
- /user/hand/right/input/trackpad/force
- /user/hand/right/input/trackpad/touch
12.15.4. New Enum Constants
- XR_KHR_GENERIC_CONTROLLER_EXTENSION_NAME
- XR_KHR_generic_controller_SPEC_VERSION
12.15.5. Issues
- Should the specification mandate specific bindings for all hardware with existing interaction profiles?
  - No. This is an area where we expect the runtime to have better information about the hardware configuration, user preferences, etc. than the application or the specification authors. Requiring specific binding behavior from runtimes would be counterproductive given that assumption.
- Why is the profile described as a Generic Controller Profile rather than some other name?
  - The data made available by the Generic Controller Profile can represent the data commonly made available by VR motion controllers but is not specific to any particular hardware.
- Should controller system buttons be added to this profile?
  - No. While interaction profiles make these system buttons available to applications, they are generally not likely to be bound for regular applications, instead being reserved for internal system usage.
12.16. XR_KHR_loader_init
- Name String: XR_KHR_loader_init
- Extension Type: Instance extension
- Registered Extension Number: 89
- Revision: 2
- Ratification Status: Ratified
- Extension and Version Dependencies
- Last Modified Date: 2023-05-08
- IP Status: No known IP claims.
- Contributors:
  - Cass Everitt, Facebook
  - Robert Blenkinsopp, Ultraleap
Overview
On some platforms, the loader must be initialized with platform-specific parameters before loading can occur.
Unlike other extensions, the presence of this extension is signaled by a
successful call to xrGetInstanceProcAddr to retrieve the function
pointer for xrInitializeLoaderKHR using XR_NULL_HANDLE as the
instance parameter.
If this extension is supported, its use may be required on some platforms, and calls to xrInitializeLoaderKHR must precede all other OpenXR calls except xrGetInstanceProcAddr.
This function is implemented in the loader library that the application uses. The loader must pass calls to xrInitializeLoaderKHR to the active runtime, and to all enabled API layers that expose an xrInitializeLoaderKHR function, whether exposed through their manifest or through their implementation of xrGetInstanceProcAddr.
If the xrInitializeLoaderKHR function is discovered through the
manifest, xrInitializeLoaderKHR will be called before
xrNegotiateLoaderRuntimeInterface or xrNegotiateLoaderApiLayerInterface
has been called on the runtime or layer respectively.
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
The XrLoaderInitInfoBaseHeaderKHR structure is defined as:
// Provided by XR_KHR_loader_init
typedef struct XrLoaderInitInfoBaseHeaderKHR {
XrStructureType type;
const void* next;
} XrLoaderInitInfoBaseHeaderKHR;
New Functions
To initialize an OpenXR loader with platform or implementation-specific parameters, call:
// Provided by XR_KHR_loader_init
XrResult xrInitializeLoaderKHR(
const XrLoaderInitInfoBaseHeaderKHR* loaderInitInfo);
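The bootstrap sequence, looking up xrInitializeLoaderKHR through xrGetInstanceProcAddr with a null instance handle and then calling it before any other OpenXR call, can be sketched as below. The lookup function is injected as a parameter (and exercised with stand-in stubs) purely so the sketch is self-contained and testable without a real loader; the type declarations are simplified mirrors of openxr.h, and a real application would include openxr.h and call the loader directly.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Simplified mirrors of openxr.h declarations. */
typedef int32_t XrResult;
#define XR_SUCCESS 0
#define XR_ERROR_FUNCTION_UNSUPPORTED -7
#define XR_NULL_HANDLE NULL
typedef struct XrLoaderInitInfoBaseHeaderKHR XrLoaderInitInfoBaseHeaderKHR;
typedef void (*PFN_xrVoidFunction)(void);
typedef XrResult (*PFN_xrInitializeLoaderKHR)(const XrLoaderInitInfoBaseHeaderKHR *);
typedef XrResult (*PFN_xrGetInstanceProcAddr)(void *instance, const char *name,
                                              PFN_xrVoidFunction *function);

/* Look up xrInitializeLoaderKHR with XR_NULL_HANDLE, then call it. */
static XrResult init_loader(PFN_xrGetInstanceProcAddr getProcAddr,
                            const XrLoaderInitInfoBaseHeaderKHR *info) {
    PFN_xrVoidFunction fn = NULL;
    XrResult res = getProcAddr(XR_NULL_HANDLE, "xrInitializeLoaderKHR", &fn);
    if (res != XR_SUCCESS)
        return res;
    if (fn == NULL)
        return XR_ERROR_FUNCTION_UNSUPPORTED; /* extension not present */
    return ((PFN_xrInitializeLoaderKHR)fn)(info);
}

/* Stand-in loader entry points, used only to exercise the sketch. */
static XrResult stub_initialize_loader(const XrLoaderInitInfoBaseHeaderKHR *info) {
    (void)info;
    return XR_SUCCESS;
}
static XrResult stub_gipa(void *instance, const char *name,
                          PFN_xrVoidFunction *function) {
    (void)instance;
    *function = strcmp(name, "xrInitializeLoaderKHR") == 0
                    ? (PFN_xrVoidFunction)stub_initialize_loader
                    : NULL;
    return XR_SUCCESS;
}
static XrResult stub_gipa_missing(void *instance, const char *name,
                                  PFN_xrVoidFunction *function) {
    (void)instance; (void)name;
    *function = NULL;
    return XR_SUCCESS;
}
```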
Issues
Version History
- Revision 2, 2023-05-08 (Robert Blenkinsopp)
  - Explicitly state that the call to xrInitializeLoaderKHR should be passed to the runtime and enabled API layers.
- Revision 1, 2020-05-07 (Cass Everitt)
  - Initial draft
12.17. XR_KHR_loader_init_android
- Name String: XR_KHR_loader_init_android
- Extension Type: Instance extension
- Registered Extension Number: 90
- Revision: 1
- Ratification Status: Ratified
- Extension and Version Dependencies
- Last Modified Date: 2020-05-07
- IP Status: No known IP claims.
- Contributors:
  - Cass Everitt, Facebook
Overview
On Android, some loader implementations need the application to provide additional information on initialization. This extension defines the parameters needed by such implementations. If this is available on a given implementation, an application must make use of it.
On implementations where use of this is required, the following condition must apply:
- Whenever an OpenXR function accepts an XrLoaderInitInfoBaseHeaderKHR pointer, the runtime (and loader) must also accept a pointer to an XrLoaderInitInfoAndroidKHR.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_LOADER_INIT_INFO_ANDROID_KHR
New Enums
New Structures
The XrLoaderInitInfoAndroidKHR structure is defined as:
// Provided by XR_KHR_loader_init_android
typedef struct XrLoaderInitInfoAndroidKHR {
XrStructureType type;
const void* next;
void* applicationVM;
void* applicationContext;
} XrLoaderInitInfoAndroidKHR;
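A sketch of populating this structure from values an Android application obtains via JNI (the JavaVM pointer and an android.content.Context object). The struct and enumerant value are mirrored from openxr.h so the fragment is self-contained, and the helper name is made up.

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified mirrors of openxr.h declarations. */
typedef int32_t XrStructureType;
#define XR_TYPE_LOADER_INIT_INFO_ANDROID_KHR 1000089000
typedef struct XrLoaderInitInfoAndroidKHR {
    XrStructureType type;
    const void     *next;
    void           *applicationVM;      /* the JavaVM* from JNI */
    void           *applicationContext; /* a jobject android.content.Context */
} XrLoaderInitInfoAndroidKHR;

/* Fill out the structure an Android application passes, cast to the base
 * header type, to xrInitializeLoaderKHR. */
static XrLoaderInitInfoAndroidKHR make_loader_init_info(void *vm, void *context) {
    XrLoaderInitInfoAndroidKHR info;
    info.type = XR_TYPE_LOADER_INIT_INFO_ANDROID_KHR;
    info.next = NULL;
    info.applicationVM = vm;
    info.applicationContext = context;
    return info;
}
```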
New Functions
Issues
Version History
- Revision 1, 2020-05-07 (Cass Everitt)
  - Initial draft
12.18. XR_KHR_metal_enable
- Name String: XR_KHR_metal_enable
- Extension Type: Instance extension
- Registered Extension Number: 30
- Revision: 3
- Ratification Status: Ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-03-21
- IP Status: No known IP claims.
- Contributors:
  - Xiang Wei, Meta
  - Peter Kuhn, Unity
  - John Kearney, Meta
  - Andreas Selvik, Meta
  - Jakob Bornecrantz, Collabora
  - Rylie Pavlik, Collabora
  - Aaron Leiby, Valve
12.18.1. Overview
This extension enables the use of the Metal® graphics API in an OpenXR application. Without this extension, an OpenXR application may not be able to use any Metal swapchain images.
This extension provides the mechanisms necessary for an application to generate a valid XrGraphicsBindingMetalKHR structure in order to create a Metal-based XrSession. Note that during this process, the runtime is responsible for creating the Metal device for the application’s drawing operations, and the application is responsible for creating all the required Metal objects from that, including a Metal command queue to be used for rendering. The runtime however will provide the Metal textures to render into in the form of a swapchain.
This extension also provides mechanisms for the application to interact with images acquired by calling xrEnumerateSwapchainImages.
In order to expose the structures, types, and functions of this extension,
the application source code must define XR_USE_GRAPHICS_API_METAL
before including the OpenXR platform header openxr_platform.h, in all
portions of the library or application that interact with the types, values,
and functions it defines.
12.18.2. Get Graphics Requirements
Some computer systems may have multiple graphics devices, each of which may have independent external display outputs. XR systems that connect to such computer systems are typically connected to a single graphics device. Applications need to know the graphics device associated with the XR system, so that rendering takes place on the correct graphics device.
To retrieve the Metal device that can be used in drawing operations, call:
// Provided by XR_KHR_metal_enable
XrResult xrGetMetalGraphicsRequirementsKHR(
XrInstance instance,
XrSystemId systemId,
XrGraphicsRequirementsMetalKHR* graphicsRequirements);
The xrGetMetalGraphicsRequirementsKHR function identifies to the
application the Metal device to be used in drawing operations.
The runtime must return XR_ERROR_GRAPHICS_REQUIREMENTS_CALL_MISSING
on calls to xrCreateSession if xrGetMetalGraphicsRequirementsKHR
has not been called for the same instance and systemId.
The Metal device that xrGetMetalGraphicsRequirementsKHR returns must be used to create the Metal command queue that the application passes to xrCreateSession in the XrGraphicsBindingMetalKHR.
The XrGraphicsRequirementsMetalKHR structure is defined as:
// Provided by XR_KHR_metal_enable
typedef struct XrGraphicsRequirementsMetalKHR {
XrStructureType type;
void* next;
void* metalDevice;
} XrGraphicsRequirementsMetalKHR;
XrGraphicsRequirementsMetalKHR is populated by xrGetMetalGraphicsRequirementsKHR.
12.18.3. Graphics Binding Structure
The XrGraphicsBindingMetalKHR structure is defined as:
// Provided by XR_KHR_metal_enable
typedef struct XrGraphicsBindingMetalKHR {
XrStructureType type;
const void* next;
void* commandQueue;
} XrGraphicsBindingMetalKHR;
To create a Metal-backed XrSession, the application provides a pointer
to an XrGraphicsBindingMetalKHR in the
XrSessionCreateInfo::next field of structure passed to
xrCreateSession.
The Metal command queue specified in
XrGraphicsBindingMetalKHR::commandQueue must be created on the
Metal device retrieved through
XrGraphicsRequirementsMetalKHR::metalDevice, otherwise
xrCreateSession must return XR_ERROR_GRAPHICS_DEVICE_INVALID.
Creating a session using this structure triggers several requirements on the runtime regarding swapchain images. See the specification of XrSwapchainImageMetalKHR for details.
12.18.4. Swapchain Images
The XrSwapchainImageMetalKHR structure is defined as:
// Provided by XR_KHR_metal_enable
typedef struct XrSwapchainImageMetalKHR {
XrStructureType type;
void* next;
void* texture;
} XrSwapchainImageMetalKHR;
If a given session was created with XrGraphicsBindingMetalKHR, the following conditions apply.
- Calls to xrEnumerateSwapchainImages on an XrSwapchain in that session must return an array of XrSwapchainImageMetalKHR structures.
- Whenever an OpenXR function accepts an XrSwapchainImageBaseHeader pointer as a parameter in that session, the runtime must also accept a pointer to an XrSwapchainImageMetalKHR.
The OpenXR runtime must interpret the top-left corner of the swapchain image as the coordinate origin unless specified otherwise by extension functionality.
12.18.5. Metal Swapchain Flag Bits
All valid XrSwapchainUsageFlags values passed in a session created
using XrGraphicsBindingMetalKHR must be interpreted as follows by the
runtime, so that the returned swapchain images used by the application may
be used as if they were created with the corresponding MTLTextureUsage
flags.
The runtime may set additional usage flags but must not restrict usage.
| XrSwapchainUsageFlagBits | Corresponding MTLTextureUsage bits |
|---|---|
| XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT | MTLTextureUsageRenderTarget |
| XR_SWAPCHAIN_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT | MTLTextureUsageRenderTarget |
| XR_SWAPCHAIN_USAGE_UNORDERED_ACCESS_BIT | MTLTextureUsageShaderRead and MTLTextureUsageShaderWrite |
| XR_SWAPCHAIN_USAGE_TRANSFER_SRC_BIT | ignored |
| XR_SWAPCHAIN_USAGE_TRANSFER_DST_BIT | ignored |
| XR_SWAPCHAIN_USAGE_SAMPLED_BIT | MTLTextureUsageShaderRead |
| XR_SWAPCHAIN_USAGE_MUTABLE_FORMAT_BIT | MTLTextureUsagePixelFormatView |
| XR_SWAPCHAIN_USAGE_INPUT_ATTACHMENT_BIT_KHR | ignored |
All Metal swapchain textures are created with
MTLResourceStorageModePrivate resource option, and are accessible only by
the GPU.
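As with the other graphics bindings, the usage translation can be sketched as a pure function, assuming the natural mapping (render target for color/depth attachments, shader read/write for unordered access, shader read for sampled, pixel format view for mutable format). The bit values are mirrored from openxr.h and Metal's MTLTextureUsage, and the helper name is made up.

```c
#include <stdint.h>

/* Bit values mirrored from openxr.h (XrSwapchainUsageFlagBits). */
#define XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT         0x00000001
#define XR_SWAPCHAIN_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT 0x00000002
#define XR_SWAPCHAIN_USAGE_UNORDERED_ACCESS_BIT         0x00000004
#define XR_SWAPCHAIN_USAGE_SAMPLED_BIT                  0x00000020
#define XR_SWAPCHAIN_USAGE_MUTABLE_FORMAT_BIT           0x00000040

/* Values mirrored from Metal's MTLTextureUsage. */
#define MTLTextureUsageShaderRead      0x01
#define MTLTextureUsageShaderWrite     0x02
#define MTLTextureUsageRenderTarget    0x04
#define MTLTextureUsagePixelFormatView 0x10

/* Translate XrSwapchainUsageFlags into MTLTextureUsage bits; the transfer
 * and input-attachment bits have no Metal equivalent and are ignored. */
static uint32_t metal_usage_for_xr_usage(uint64_t xrUsage) {
    uint32_t usage = 0;
    if (xrUsage & (XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT |
                   XR_SWAPCHAIN_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT))
        usage |= MTLTextureUsageRenderTarget;
    if (xrUsage & XR_SWAPCHAIN_USAGE_UNORDERED_ACCESS_BIT)
        usage |= MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite;
    if (xrUsage & XR_SWAPCHAIN_USAGE_SAMPLED_BIT)
        usage |= MTLTextureUsageShaderRead;
    if (xrUsage & XR_SWAPCHAIN_USAGE_MUTABLE_FORMAT_BIT)
        usage |= MTLTextureUsagePixelFormatView;
    return usage;
}
```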
12.18.6. Issues
- How should the resource state of the swapchain textures be managed?
  - The application uses the Metal device that is created by the runtime for its drawing operations. The runtime uses the same Metal device to create the swapchain images, and also creates synchronization events when necessary. In addition, Metal tracks write hazards and synchronizes resources that are created from the same Metal device and directly bound to a pipeline. See the Apple documentation for more details: https://developer.apple.com/documentation/metal/resource_synchronization?language=objc
12.18.9. New Enum Constants
- XR_KHR_METAL_ENABLE_EXTENSION_NAME
- XR_KHR_metal_enable_SPEC_VERSION
- Extending XrStructureType:
  - XR_TYPE_GRAPHICS_BINDING_METAL_KHR
  - XR_TYPE_GRAPHICS_REQUIREMENTS_METAL_KHR
  - XR_TYPE_SWAPCHAIN_IMAGE_METAL_KHR
12.19. XR_KHR_opengl_enable
- Name String: XR_KHR_opengl_enable
- Extension Type: Instance extension
- Registered Extension Number: 24
- Revision: 12
- Ratification Status: Ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-10-16
- IP Status: No known IP claims.
- Contributors:
  - Mark Young, LunarG
  - Bryce Hutchings, Microsoft
  - Paul Pedriana, Oculus
  - Minmin Gong, Microsoft
  - Robert Menzel, NVIDIA
  - Jakob Bornecrantz, Collabora
  - Paulo Gomes, Samsung Electronics
  - Aaron Leiby, Valve
12.19.1. Overview
This extension enables the use of the OpenGL graphics API in an OpenXR application. Without this extension, an OpenXR application may not be able to use any OpenGL swapchain images.
This extension provides the mechanisms necessary for an application to
generate a valid XrGraphicsBindingOpenGL*KHR structure in order to
create an OpenGL-based XrSession.
Note that the application is responsible for creating an OpenGL context to
be used for rendering.
However, the runtime provides the OpenGL textures to render into.
This extension provides mechanisms for the application to interact with
those textures by calling xrEnumerateSwapchainImages and providing
XrSwapchainImageOpenGLKHR structures to populate.
In order to expose the structures, types, and functions of this extension,
the application source code must define XR_USE_GRAPHICS_API_OPENGL,
as well as an appropriate window system
define supported by this extension, before including the OpenXR platform
header openxr_platform.h, in all portions of the library or application
that interact with the types, values, and functions it defines.
The window system defines currently supported by this extension are XR_USE_PLATFORM_WIN32, XR_USE_PLATFORM_XLIB, XR_USE_PLATFORM_XCB, and XR_USE_PLATFORM_WAYLAND.
Note that a runtime implementation of this extension is only required to support the structures introduced by this extension which correspond to the platform it is running on. For Wayland deprecation see Wayland Notes.
12.19.2. OpenGL Context and Threading
Note that the OpenGL context given to the call to xrCreateSession must not be bound in another thread by the application when calling the functions:
However, it may be bound in the thread calling one of those functions. The runtime must not access the context from any other function. In particular the application must be able to call xrWaitFrame from a different thread than the rendering thread.
12.19.3. Get Graphics Requirements
The xrGetOpenGLGraphicsRequirementsKHR function is defined as:
// Provided by XR_KHR_opengl_enable
XrResult xrGetOpenGLGraphicsRequirementsKHR(
XrInstance instance,
XrSystemId systemId,
XrGraphicsRequirementsOpenGLKHR* graphicsRequirements);
This call queries OpenGL API version requirements for an instance and
system.
The xrGetOpenGLGraphicsRequirementsKHR function identifies to the
application the minimum OpenGL version requirement and the highest known
tested OpenGL version.
The runtime must return XR_ERROR_GRAPHICS_REQUIREMENTS_CALL_MISSING
(XR_ERROR_VALIDATION_FAILURE may be returned due to legacy behavior)
on calls to xrCreateSession if
xrGetOpenGLGraphicsRequirementsKHR has not been called for the same
instance and systemId.
The XrGraphicsRequirementsOpenGLKHR structure is defined as:
// Provided by XR_KHR_opengl_enable
typedef struct XrGraphicsRequirementsOpenGLKHR {
XrStructureType type;
void* next;
XrVersion minApiVersionSupported;
XrVersion maxApiVersionSupported;
} XrGraphicsRequirementsOpenGLKHR;
XrGraphicsRequirementsOpenGLKHR is populated by xrGetOpenGLGraphicsRequirementsKHR with the runtime’s OpenGL API version requirements.
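After retrieving these requirements, an application can check the context version it plans to create against them. The version packing below mirrors the XR_MAKE_VERSION macro from openxr.h; the check helper itself is illustrative, not part of the API.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XrVersion;

/* Mirror of the XR_MAKE_VERSION packing from openxr.h:
 * 16-bit major, 16-bit minor, 32-bit patch. */
static XrVersion xr_make_version(uint16_t major, uint16_t minor, uint32_t patch) {
    return ((XrVersion)major << 48) | ((XrVersion)minor << 32) | patch;
}

/* Versions above maxApiVersionSupported are merely untested, not forbidden,
 * so only the minimum is treated as a hard requirement here. */
static bool gl_version_acceptable(XrVersion candidate,
                                  XrVersion minApiVersionSupported) {
    return candidate >= minApiVersionSupported;
}
```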
12.19.4. Graphics Binding Structure
These structures are only available when the corresponding
XR_USE_PLATFORM_ window system/platform
macro is defined before including openxr_platform.h.
The XrGraphicsBindingOpenGLWin32KHR structure is defined as:
// Provided by XR_KHR_opengl_enable
typedef struct XrGraphicsBindingOpenGLWin32KHR {
XrStructureType type;
const void* next;
HDC hDC;
HGLRC hGLRC;
} XrGraphicsBindingOpenGLWin32KHR;
To create an OpenGL-backed XrSession on Microsoft Windows, the
application provides a pointer to an XrGraphicsBindingOpenGLWin32KHR
structure in the XrSessionCreateInfo::next chain when calling
xrCreateSession.
As no standardized way exists for OpenGL to create the graphics context on a
specific GPU, the runtime must assume that the application uses the
operating system’s default GPU when this structure is supplied.
If the GPU used by the runtime does not match the GPU on which the OpenGL
context of the application was created, xrCreateSession must return
XR_ERROR_GRAPHICS_DEVICE_INVALID.
Creating a session using this structure triggers several requirements on the runtime regarding swapchain images. See the specification of XrSwapchainImageOpenGLKHR for details.
By providing a context as hGLRC, the application becomes subject to
restrictions on use of that context which effectively introduces additional
external synchronization requirements on some OpenXR calls.
See OpenGL Context and Threading for details.
The required window system configuration define to expose this structure type is XR_USE_PLATFORM_WIN32.
The XrGraphicsBindingOpenGLXlibKHR structure is defined as:
// Provided by XR_KHR_opengl_enable
typedef struct XrGraphicsBindingOpenGLXlibKHR {
XrStructureType type;
const void* next;
Display* xDisplay;
uint32_t visualid;
GLXFBConfig glxFBConfig;
GLXDrawable glxDrawable;
GLXContext glxContext;
} XrGraphicsBindingOpenGLXlibKHR;
To create an OpenGL-backed XrSession on any Linux/Unix platform that
utilizes X11 and GLX, via the Xlib library, the application provides a
pointer to an XrGraphicsBindingOpenGLXlibKHR in the
XrSessionCreateInfo::next chain when calling
xrCreateSession.
Creating a session using this structure triggers several requirements on the runtime regarding swapchain images. See the specification of XrSwapchainImageOpenGLKHR for details.
By providing a GLX context as glxContext, the application becomes
subject to restrictions on use of that context which effectively introduces
additional external synchronization requirements on some OpenXR calls.
See OpenGL Context and Threading for details.
The required window system configuration define to expose this structure type is XR_USE_PLATFORM_XLIB.
The XrGraphicsBindingOpenGLXcbKHR structure is defined as:
// Provided by XR_KHR_opengl_enable
typedef struct XrGraphicsBindingOpenGLXcbKHR {
XrStructureType type;
const void* next;
xcb_connection_t* connection;
uint32_t screenNumber;
xcb_glx_fbconfig_t fbconfigid;
xcb_visualid_t visualid;
xcb_glx_drawable_t glxDrawable;
xcb_glx_context_t glxContext;
} XrGraphicsBindingOpenGLXcbKHR;
To create an OpenGL-backed XrSession on any Linux/Unix platform that
utilizes X11 and GLX, via the XCB library, the application provides a
pointer to an XrGraphicsBindingOpenGLXcbKHR in the
XrSessionCreateInfo::next chain when calling
xrCreateSession.
Creating a session using this structure triggers several requirements on the runtime regarding swapchain images. See the specification of XrSwapchainImageOpenGLKHR for details.
By providing a GLX context as glxContext, the application becomes
subject to restrictions on use of that context which effectively introduces
additional external synchronization requirements on some OpenXR calls.
See OpenGL Context and Threading for details.
The required window system configuration define to expose this structure type is XR_USE_PLATFORM_XCB.
The XrGraphicsBindingOpenGLWaylandKHR structure is defined as:
// Provided by XR_KHR_opengl_enable
typedef struct XrGraphicsBindingOpenGLWaylandKHR {
XrStructureType type;
const void* next;
struct wl_display* display;
} XrGraphicsBindingOpenGLWaylandKHR;
Note that use of this structure is deprecated - see Wayland Notes.
To create an OpenGL-backed XrSession on any Linux/Unix platform that
utilizes the Wayland protocol with its compositor, the application provides
a pointer to an XrGraphicsBindingOpenGLWaylandKHR in the
XrSessionCreateInfo::next chain when calling
xrCreateSession.
Creating a session using this structure triggers several requirements on the runtime regarding swapchain images. See the specification of XrSwapchainImageOpenGLKHR for details.
The required window system configuration define to expose this structure type is XR_USE_PLATFORM_WAYLAND.
12.19.5. Swapchain Images
The XrSwapchainImageOpenGLKHR structure is defined as:
// Provided by XR_KHR_opengl_enable
typedef struct XrSwapchainImageOpenGLKHR {
    XrStructureType type;
    void* next;
    uint32_t image;
} XrSwapchainImageOpenGLKHR;
If a given session was created with some XrGraphicsBindingOpenGL*KHR
graphics binding structure, the following conditions apply.
- Calls to xrEnumerateSwapchainImages on an XrSwapchain in that session must return an array of XrSwapchainImageOpenGLKHR structures.
- Whenever an OpenXR function accepts an XrSwapchainImageBaseHeader pointer as a parameter in that session, the runtime must also accept a pointer to an XrSwapchainImageOpenGLKHR.
The OpenXR runtime must interpret the bottom-left corner of the swapchain image as the coordinate origin unless specified otherwise by extension functionality.
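The enumeration contract above can be exercised with the usual two-call pattern. The following sketch is illustrative only, not normative; it assumes a valid XrSwapchain handle and the XR_USE_PLATFORM_XLIB window system define:

```c
#define XR_USE_GRAPHICS_API_OPENGL
#define XR_USE_PLATFORM_XLIB
#include <stdlib.h>
#include <GL/glx.h>
#include <openxr/openxr.h>
#include <openxr/openxr_platform.h>

// Illustrative only: enumerate the OpenGL texture names backing a swapchain.
XrResult enumerateOpenGLSwapchainImages(XrSwapchain swapchain) {
    uint32_t count = 0;
    XrResult res = xrEnumerateSwapchainImages(swapchain, 0, &count, NULL);
    if (XR_FAILED(res)) return res;

    XrSwapchainImageOpenGLKHR* images =
        calloc(count, sizeof(XrSwapchainImageOpenGLKHR));
    for (uint32_t i = 0; i < count; ++i)
        images[i].type = XR_TYPE_SWAPCHAIN_IMAGE_OPENGL_KHR;

    // The runtime writes a GLuint texture name into each images[i].image.
    res = xrEnumerateSwapchainImages(swapchain, count, &count,
                                     (XrSwapchainImageBaseHeader*)images);
    free(images);
    return res;
}
```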
12.19.6. OpenGL Swapchain Flag Bits
All valid XrSwapchainUsageFlags values passed in a session created using XrGraphicsBindingOpenGLWin32KHR, XrGraphicsBindingOpenGLXlibKHR, XrGraphicsBindingOpenGLXcbKHR, or XrGraphicsBindingOpenGLWaylandKHR should be ignored as there is no mapping to OpenGL texture settings.
Note
In such a session, a runtime may use a supporting graphics API, such as Vulkan, to allocate images that are intended to alias with OpenGL textures and be part of an XrSwapchain. A runtime which allocates the texture with a different graphics API may need to enable several usage flags on the underlying native texture resource to ensure compatibility with OpenGL.
12.19.9. New Enum Constants
- XR_KHR_OPENGL_ENABLE_EXTENSION_NAME
- XR_KHR_opengl_enable_SPEC_VERSION

Extending XrStructureType:

- XR_TYPE_GRAPHICS_BINDING_OPENGL_WAYLAND_KHR
- XR_TYPE_GRAPHICS_BINDING_OPENGL_WIN32_KHR
- XR_TYPE_GRAPHICS_BINDING_OPENGL_XCB_KHR
- XR_TYPE_GRAPHICS_BINDING_OPENGL_XLIB_KHR
- XR_TYPE_GRAPHICS_REQUIREMENTS_OPENGL_KHR
- XR_TYPE_SWAPCHAIN_IMAGE_OPENGL_KHR
12.19.10. Issues
- Why was Wayland deprecated?
  See Wayland Notes.
12.19.11. Version History
- Revision 1, 2018-05-07 (Mark Young)
  - Initial draft
- Revision 2, 2018-06-21 (Bryce Hutchings)
  - Add new xrGetOpenGLGraphicsRequirementsKHR
- Revision 3, 2018-11-15 (Paul Pedriana)
  - Specified the swapchain texture coordinate origin.
- Revision 4, 2018-11-16 (Minmin Gong)
  - Specified Y direction and Z range in clip space
- Revision 5, 2019-01-25 (Robert Menzel)
  - Description updated
- Revision 6, 2019-07-02 (Robert Menzel)
  - Minor fixes
- Revision 7, 2019-07-08 (Rylie Pavlik)
  - Adjusted member name in XCB struct
- Revision 8, 2019-11-28 (Jakob Bornecrantz)
  - Added note about context not allowed to be current in a different thread.
- Revision 9, 2020-08-06 (Bryce Hutchings)
  - Added new XR_ERROR_GRAPHICS_REQUIREMENTS_CALL_MISSING error code
- Revision 10, 2021-08-31 (Paulo F. Gomes)
  - Document handling of XrSwapchainUsageFlags
- Revision 11, 2025-03-07 (Rylie Pavlik, Collabora, Ltd.)
  - Re-organize, clarify, and make more uniform with other graphics binding extensions.
- Revision 12, 2025-10-16 (Jakob Bornecrantz, NVIDIA and Aaron Leiby)
  - Removed clip space specification
  - Deprecate Wayland part of the extension.
12.20. XR_KHR_opengl_es_enable
- Name String: XR_KHR_opengl_es_enable
- Extension Type: Instance extension
- Registered Extension Number: 25
- Revision: 10
- Ratification Status: Ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-03-21
- IP Status: No known IP claims.
- Contributors:
  Mark Young, LunarG
  Bryce Hutchings, Microsoft
  Paul Pedriana, Oculus
  Minmin Gong, Microsoft
  Robert Menzel, NVIDIA
  Martin Renschler, Qualcomm
  Paulo Gomes, Samsung Electronics
  Aaron Leiby, Valve
12.20.1. Overview
This extension enables the use of the OpenGL ES graphics API in an OpenXR application. Without this extension, an OpenXR application may not be able to use any OpenGL ES swapchain images.
This extension provides the mechanisms necessary for an application to generate a valid XrGraphicsBindingOpenGLESAndroidKHR structure in order to create an OpenGL ES-based XrSession. The runtime needs the following OpenGL ES objects from the application in order to interact properly with the OpenGL ES driver: EGLDisplay, EGLConfig, and EGLContext. Although not theoretically Android-specific, the OpenGL ES extension is currently tailored for Android. Note that the application is responsible for creating an OpenGL ES context to be used for rendering. However, the runtime provides the OpenGL ES textures to render into. This extension provides mechanisms for the application to interact with those textures by calling xrEnumerateSwapchainImages and providing XrSwapchainImageOpenGLESKHR structures to populate.
In order to expose the structures, types, and functions of this extension,
the application source code must define
XR_USE_GRAPHICS_API_OPENGL_ES, as well as an appropriate
window system define supported by
this extension, before including the OpenXR platform header
openxr_platform.h, in all portions of the library or application that
interact with the types, values, and functions it defines.
The only window system define currently supported by this extension is XR_USE_PLATFORM_ANDROID.
Note
This extension does not specify requirements for when the supplied
context is current in any thread, unlike XR_KHR_opengl_enable.
12.20.2. Get Graphics Requirements
The xrGetOpenGLESGraphicsRequirementsKHR function is defined as:
// Provided by XR_KHR_opengl_es_enable
XrResult xrGetOpenGLESGraphicsRequirementsKHR(
    XrInstance instance,
    XrSystemId systemId,
    XrGraphicsRequirementsOpenGLESKHR* graphicsRequirements);
This call queries OpenGL ES API version requirements for an instance and
system.
The xrGetOpenGLESGraphicsRequirementsKHR function identifies to the
application the minimum OpenGL ES version requirement and the highest known
tested OpenGL ES version.
The runtime must return XR_ERROR_GRAPHICS_REQUIREMENTS_CALL_MISSING
(XR_ERROR_VALIDATION_FAILURE may be returned due to legacy behavior)
on calls to xrCreateSession if
xrGetOpenGLESGraphicsRequirementsKHR has not been called for the same
instance and systemId.
The XrGraphicsRequirementsOpenGLESKHR structure is defined as:
// Provided by XR_KHR_opengl_es_enable
typedef struct XrGraphicsRequirementsOpenGLESKHR {
    XrStructureType type;
    void* next;
    XrVersion minApiVersionSupported;
    XrVersion maxApiVersionSupported;
} XrGraphicsRequirementsOpenGLESKHR;
XrGraphicsRequirementsOpenGLESKHR is populated by xrGetOpenGLESGraphicsRequirementsKHR with the runtime’s OpenGL ES API version requirements.
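Because xrGetOpenGLESGraphicsRequirementsKHR is an extension function, it is not exported directly by the loader and must be obtained through xrGetInstanceProcAddr. A minimal sketch of the query, illustrative only:

```c
#define XR_USE_GRAPHICS_API_OPENGL_ES
#include <stddef.h>
#include <openxr/openxr.h>
#include <openxr/openxr_platform.h>

// Illustrative only: query the runtime's OpenGL ES version requirements.
XrResult queryGLESRequirements(XrInstance instance, XrSystemId systemId,
                               XrGraphicsRequirementsOpenGLESKHR* reqs) {
    PFN_xrGetOpenGLESGraphicsRequirementsKHR pfn = NULL;
    XrResult res = xrGetInstanceProcAddr(
        instance, "xrGetOpenGLESGraphicsRequirementsKHR",
        (PFN_xrVoidFunction*)&pfn);
    if (XR_FAILED(res)) return res;

    reqs->type = XR_TYPE_GRAPHICS_REQUIREMENTS_OPENGL_ES_KHR;
    reqs->next = NULL;
    // On success, minApiVersionSupported and maxApiVersionSupported are filled in.
    return pfn(instance, systemId, reqs);
}
```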
12.20.3. Graphics Binding Structure
These structures are only available when the corresponding
XR_USE_PLATFORM_ window system/platform
macro is defined before including openxr_platform.h.
The XrGraphicsBindingOpenGLESAndroidKHR structure is defined as:
// Provided by XR_KHR_opengl_es_enable
typedef struct XrGraphicsBindingOpenGLESAndroidKHR {
    XrStructureType type;
    const void* next;
    EGLDisplay display;
    EGLConfig config;
    EGLContext context;
} XrGraphicsBindingOpenGLESAndroidKHR;
To create an OpenGL ES-backed XrSession on Android, the application
can provide a pointer to an XrGraphicsBindingOpenGLESAndroidKHR
structure in the XrSessionCreateInfo::next chain when calling
xrCreateSession.
Creating a session using this structure triggers several requirements on the runtime regarding swapchain images. See the specification of XrSwapchainImageOpenGLESKHR for details.
The required window system configuration define to expose this structure type is XR_USE_PLATFORM_ANDROID.
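The chaining described above can be sketched as follows; this is illustrative only and assumes the application has already initialized its EGL display, config, and context:

```c
#define XR_USE_GRAPHICS_API_OPENGL_ES
#define XR_USE_PLATFORM_ANDROID
#include <EGL/egl.h>
#include <openxr/openxr.h>
#include <openxr/openxr_platform.h>

// Illustrative only: create an OpenGL ES-backed session on Android from an
// already-initialized EGL display/config/context.
XrResult createGLESSession(XrInstance instance, XrSystemId systemId,
                           EGLDisplay dpy, EGLConfig cfg, EGLContext ctx,
                           XrSession* session) {
    XrGraphicsBindingOpenGLESAndroidKHR binding = {
        .type = XR_TYPE_GRAPHICS_BINDING_OPENGL_ES_ANDROID_KHR,
        .next = NULL,
        .display = dpy,
        .config = cfg,
        .context = ctx,
    };
    XrSessionCreateInfo info = {
        .type = XR_TYPE_SESSION_CREATE_INFO,
        .next = &binding,  // graphics binding chained here
        .systemId = systemId,
    };
    return xrCreateSession(instance, &info, session);
}
```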
12.20.4. Swapchain Images
The XrSwapchainImageOpenGLESKHR structure is defined as:
// Provided by XR_KHR_opengl_es_enable
typedef struct XrSwapchainImageOpenGLESKHR {
    XrStructureType type;
    void* next;
    uint32_t image;
} XrSwapchainImageOpenGLESKHR;
If a given session was created with some XrGraphicsBindingOpenGLES*KHR
graphics binding structure, the following conditions apply.
- Calls to xrEnumerateSwapchainImages on an XrSwapchain in that session must return an array of XrSwapchainImageOpenGLESKHR structures.
- Whenever an OpenXR function accepts an XrSwapchainImageBaseHeader pointer as a parameter in that session, the runtime must also accept a pointer to an XrSwapchainImageOpenGLESKHR structure.
The OpenXR runtime must interpret the bottom-left corner of the swapchain image as the coordinate origin unless specified otherwise by extension functionality.
12.20.5. OpenGL ES Swapchain Flag Bits
All valid XrSwapchainUsageFlags values passed in a session created using XrGraphicsBindingOpenGLESAndroidKHR should be ignored as there is no mapping to OpenGL ES texture settings.
Note
In such a session, a runtime may use a supporting graphics API, such as Vulkan, to allocate images that are intended to alias with OpenGL ES textures and be part of an XrSwapchain. A runtime which allocates the texture with a different graphics API may need to enable several usage flags on the underlying native texture resource to ensure compatibility with OpenGL ES.
12.20.8. New Enum Constants
- XR_KHR_OPENGL_ES_ENABLE_EXTENSION_NAME
- XR_KHR_opengl_es_enable_SPEC_VERSION

Extending XrStructureType:

- XR_TYPE_GRAPHICS_BINDING_OPENGL_ES_ANDROID_KHR
- XR_TYPE_GRAPHICS_REQUIREMENTS_OPENGL_ES_KHR
- XR_TYPE_SWAPCHAIN_IMAGE_OPENGL_ES_KHR
12.20.9. Version History
- Revision 1, 2018-05-07 (Mark Young)
  - Initial draft
- Revision 2, 2018-06-21 (Bryce Hutchings)
  - Add new xrGetOpenGLESGraphicsRequirementsKHR
- Revision 3, 2018-11-15 (Paul Pedriana)
  - Specified the swapchain texture coordinate origin.
- Revision 4, 2018-11-16 (Minmin Gong)
  - Specified Y direction and Z range in clip space
- Revision 5, 2019-01-25 (Robert Menzel)
  - Description updated
- Revision 6, 2019-07-12 (Martin Renschler)
  - Description updated
- Revision 7, 2020-08-06 (Bryce Hutchings)
  - Added new XR_ERROR_GRAPHICS_REQUIREMENTS_CALL_MISSING error code
- Revision 8, 2021-08-27 (Paulo F. Gomes)
  - Document handling of XrSwapchainUsageFlags
- Revision 9, 2025-03-07 (Rylie Pavlik, Collabora, Ltd.)
  - Re-organize, clarify, and make more uniform with other graphics binding extensions, and describe known design quirk.
- Revision 10, 2025-03-21 (Aaron Leiby)
  - Removed clip space specification
12.21. XR_KHR_swapchain_usage_input_attachment_bit
- Name String: XR_KHR_swapchain_usage_input_attachment_bit
- Extension Type: Instance extension
- Registered Extension Number: 166
- Revision: 3
- Ratification Status: Ratified
- Extension and Version Dependencies
- Last Modified Date: 2021-05-11
- IP Status: No known IP claims.
- Contributors:
  Jakob Bornecrantz, Collabora
  Rylie Pavlik, Collabora
Overview
This extension enables an application to specify that swapchain images should be created so that they can be used as input attachments. At the time of writing, this bit only affects Vulkan swapchains.
New Object Types
New Flag Types
New Enum Constants
The XrSwapchainUsageFlagBits enumeration is extended with:
- XR_SWAPCHAIN_USAGE_INPUT_ATTACHMENT_BIT_KHR - indicates that the image format may be used as an input attachment.
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2020-07-23 (Jakob Bornecrantz)
  - Initial draft
- Revision 2, 2020-07-24 (Jakob Bornecrantz)
  - Added note about only affecting Vulkan
  - Changed from MNDX to MND
- Revision 3, 2021-05-11 (Rylie Pavlik, Collabora, Ltd.)
  - Updated for promotion from MND to KHR
12.22. XR_KHR_visibility_mask
- Name String: XR_KHR_visibility_mask
- Extension Type: Instance extension
- Registered Extension Number: 32
- Revision: 2
- Ratification Status: Ratified
- Extension and Version Dependencies
- Last Modified Date: 2018-07-05
- IP Status: No known IP claims.
- Contributors:
  Paul Pedriana, Oculus
  Alex Turner, Microsoft
- Contacts:
  Paul Pedriana, Oculus
Overview
This extension supports providing a per-view drawing mask for applications. The primary purpose of this is to enable performance improvements that result from avoiding drawing on areas that are not visible to the user. A common occurrence in head-mounted VR hardware is that the optical system’s frustum does not intersect precisely with the rectangular display it is viewing. As a result, it may be that there are parts of the display that are not visible to the user, such as the corners of the display. In such cases it would be unnecessary for the application to draw into those parts.
New Object Types
New Flag Types
New Enum Constants
New Enums
XrVisibilityMaskTypeKHR identifies the different types of mask specification that are supported. The application can request a view mask in any of the formats identified by these types.
// Provided by XR_KHR_visibility_mask
typedef enum XrVisibilityMaskTypeKHR {
    XR_VISIBILITY_MASK_TYPE_HIDDEN_TRIANGLE_MESH_KHR = 1,
    XR_VISIBILITY_MASK_TYPE_VISIBLE_TRIANGLE_MESH_KHR = 2,
    XR_VISIBILITY_MASK_TYPE_LINE_LOOP_KHR = 3,
    XR_VISIBILITY_MASK_TYPE_MAX_ENUM_KHR = 0x7FFFFFFF
} XrVisibilityMaskTypeKHR;
New Structures
The XrVisibilityMaskKHR structure is an input/output struct which specifies the view mask.
// Provided by XR_KHR_visibility_mask
typedef struct XrVisibilityMaskKHR {
    XrStructureType type;
    void* next;
    uint32_t vertexCapacityInput;
    uint32_t vertexCountOutput;
    XrVector2f* vertices;
    uint32_t indexCapacityInput;
    uint32_t indexCountOutput;
    uint32_t* indices;
} XrVisibilityMaskKHR;
The XrEventDataVisibilityMaskChangedKHR structure is defined as:
// Provided by XR_KHR_visibility_mask
typedef struct XrEventDataVisibilityMaskChangedKHR {
    XrStructureType type;
    const void* next;
    XrSession session;
    XrViewConfigurationType viewConfigurationType;
    uint32_t viewIndex;
} XrEventDataVisibilityMaskChangedKHR;
The XrEventDataVisibilityMaskChangedKHR structure is queued to indicate that a given visibility mask has changed. The application should respond to the event by calling xrGetVisibilityMaskKHR to retrieve the updated mask. This event is per-view, so if the masks for multiple views in a configuration change then multiple instances of this event will be queued for the application, one per view.
New Functions
The xrGetVisibilityMaskKHR function is defined as:
// Provided by XR_KHR_visibility_mask
XrResult xrGetVisibilityMaskKHR(
    XrSession session,
    XrViewConfigurationType viewConfigurationType,
    uint32_t viewIndex,
    XrVisibilityMaskTypeKHR visibilityMaskType,
    XrVisibilityMaskKHR* visibilityMask);
xrGetVisibilityMaskKHR retrieves the view mask for a given view.
This function follows the two-call
idiom for filling multiple buffers in a struct.
Specifically, if either XrVisibilityMaskKHR::vertexCapacityInput
or XrVisibilityMaskKHR::indexCapacityInput is 0, the runtime
must respond as if both fields were set to 0, returning the vertex count
and index count through XrVisibilityMaskKHR::vertexCountOutput
or XrVisibilityMaskKHR::indexCountOutput respectively.
If a view mask for the specified view isn’t available, the returned vertex
and index counts must be 0.
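The two-call idiom described above can be sketched as follows. This is illustrative only; it assumes the function pointer has already been loaded with xrGetInstanceProcAddr, and omits allocation-failure handling:

```c
#include <stdlib.h>
#include <openxr/openxr.h>

// Illustrative only: fetch the hidden-area mesh for one view using the
// two-call idiom. The caller owns mask->vertices and mask->indices.
XrResult getHiddenAreaMesh(PFN_xrGetVisibilityMaskKHR pfn, XrSession session,
                           XrViewConfigurationType viewConfig,
                           uint32_t viewIndex, XrVisibilityMaskKHR* mask) {
    mask->type = XR_TYPE_VISIBILITY_MASK_KHR;
    mask->next = NULL;
    mask->vertexCapacityInput = 0;  // first call: sizes only
    mask->indexCapacityInput = 0;
    XrResult res = pfn(session, viewConfig, viewIndex,
                       XR_VISIBILITY_MASK_TYPE_HIDDEN_TRIANGLE_MESH_KHR, mask);
    if (XR_FAILED(res) || mask->vertexCountOutput == 0) return res;

    mask->vertices = malloc(mask->vertexCountOutput * sizeof(XrVector2f));
    mask->indices = malloc(mask->indexCountOutput * sizeof(uint32_t));
    mask->vertexCapacityInput = mask->vertexCountOutput;
    mask->indexCapacityInput = mask->indexCountOutput;
    // Second call: the runtime fills the buffers.
    return pfn(session, viewConfig, viewIndex,
               XR_VISIBILITY_MASK_TYPE_HIDDEN_TRIANGLE_MESH_KHR, mask);
}
```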
Issues
Version History
- Revision 1, 2018-07-05 (Paul Pedriana)
  - Initial version.
- Revision 2, 2019-07-15 (Alex Turner)
  - Adjust two-call idiom usage.
12.23. XR_KHR_vulkan_enable
- Name String: XR_KHR_vulkan_enable
- Extension Type: Instance extension
- Registered Extension Number: 26
- Revision: 10
- Ratification Status: Ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-03-21
- IP Status: No known IP claims.
- Contributors:
  Mark Young, LunarG
  Paul Pedriana, Oculus
  Ed Hutchins, Oculus
  Andres Rodriguez, Valve
  Dan Ginsburg, Valve
  Bryce Hutchings, Microsoft
  Minmin Gong, Microsoft
  Robert Menzel, NVIDIA
  Paulo Gomes, Samsung Electronics
  Aaron Leiby, Valve
12.23.1. Overview
This extension enables the use of the Vulkan graphics API in an OpenXR application. Without this extension, an OpenXR application may not be able to use any Vulkan swapchain images.
This extension provides the mechanisms necessary for an application to generate a valid XrGraphicsBindingVulkanKHR structure in order to create a Vulkan-based XrSession. Note that during this process the application is responsible for creating all the required Vulkan objects. However, the runtime provides the Vulkan images to render into. This extension provides mechanisms for the application to interact with those images by calling xrEnumerateSwapchainImages.
In order to expose the structures, types, and functions of this extension,
the application source code must define XR_USE_GRAPHICS_API_VULKAN
before including the OpenXR platform header openxr_platform.h, in all
portions of the library or application that interact with the types, values,
and functions it defines.
12.23.2. Concurrency
Vulkan requires that concurrent access to a VkQueue from multiple
threads be externally synchronized.
Therefore, OpenXR functions that may access the VkQueue specified in
the XrGraphicsBindingVulkanKHR must also be externally synchronized.
The OpenXR functions in which the OpenXR runtime may access the
VkQueue are:
The runtime must not access the VkQueue in any OpenXR function that
is not listed above or in an extension definition.
12.23.3. Initialization
Some of the requirements for creating a valid
XrGraphicsBindingVulkanKHR include correct initialization of a
VkInstance, VkPhysicalDevice, and VkDevice.
A runtime may require that the VkInstance be initialized to a
specific Vulkan API version.
Additionally, the runtime may require a set of instance extensions to be
enabled in the VkInstance.
These requirements can be queried by the application using
xrGetVulkanGraphicsRequirementsKHR and
xrGetVulkanInstanceExtensionsKHR, respectively.
Similarly, the runtime may require the VkDevice to have a set of
device extensions enabled, which can be queried using
xrGetVulkanDeviceExtensionsKHR.
In order to satisfy the VkPhysicalDevice requirements, the application
can call xrGetVulkanGraphicsDeviceKHR to identify the correct
VkPhysicalDevice.
Populating an XrGraphicsBindingVulkanKHR with a VkInstance,
VkDevice, or VkPhysicalDevice that does not meet the
requirements outlined by this extension may result in undefined behavior by
the OpenXR runtime.
The API version, instance extension, device extension and physical device
requirements only apply to the VkInstance, VkDevice, and
VkPhysicalDevice objects which the application wishes to associate
with an XrGraphicsBindingVulkanKHR.
The xrGetVulkanGraphicsRequirementsKHR function is defined as:
// Provided by XR_KHR_vulkan_enable
XrResult xrGetVulkanGraphicsRequirementsKHR(
    XrInstance instance,
    XrSystemId systemId,
    XrGraphicsRequirementsVulkanKHR* graphicsRequirements);
The xrGetVulkanGraphicsRequirementsKHR function identifies to the
application the minimum Vulkan version requirement and the highest known
tested Vulkan version.
The runtime must return XR_ERROR_GRAPHICS_REQUIREMENTS_CALL_MISSING
(XR_ERROR_VALIDATION_FAILURE may be returned due to legacy behavior)
on calls to xrCreateSession if
xrGetVulkanGraphicsRequirementsKHR has not been called for the same
instance and systemId.
The XrGraphicsRequirementsVulkanKHR structure is defined as:
// Provided by XR_KHR_vulkan_enable
typedef struct XrGraphicsRequirementsVulkanKHR {
    XrStructureType type;
    void* next;
    XrVersion minApiVersionSupported;
    XrVersion maxApiVersionSupported;
} XrGraphicsRequirementsVulkanKHR;
XrGraphicsRequirementsVulkanKHR is populated by xrGetVulkanGraphicsRequirementsKHR with the runtime’s Vulkan API version requirements.
Some computer systems have multiple graphics devices, each of which may have independent external display outputs. XR systems that connect to such graphics devices are typically connected to a single device. Applications need to know what graphics device the XR system is connected to so that they can use that graphics device to generate XR images.
To identify what graphics device needs to be used for an instance and system, call:
// Provided by XR_KHR_vulkan_enable
XrResult xrGetVulkanGraphicsDeviceKHR(
    XrInstance instance,
    XrSystemId systemId,
    VkInstance vkInstance,
    VkPhysicalDevice* vkPhysicalDevice);
The xrGetVulkanGraphicsDeviceKHR function identifies to the application
what graphics device (Vulkan VkPhysicalDevice) needs to be used.
xrGetVulkanGraphicsDeviceKHR must be called prior to calling
xrCreateSession, and the VkPhysicalDevice that
xrGetVulkanGraphicsDeviceKHR returns should be passed to
xrCreateSession in the XrGraphicsBindingVulkanKHR.
// Provided by XR_KHR_vulkan_enable
XrResult xrGetVulkanInstanceExtensionsKHR(
    XrInstance instance,
    XrSystemId systemId,
    uint32_t bufferCapacityInput,
    uint32_t* bufferCountOutput,
    char* buffer);
// Provided by XR_KHR_vulkan_enable
XrResult xrGetVulkanDeviceExtensionsKHR(
    XrInstance instance,
    XrSystemId systemId,
    uint32_t bufferCapacityInput,
    uint32_t* bufferCountOutput,
    char* buffer);
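Both functions follow the two-call idiom on a character buffer; the result is a single space-delimited string of extension names. A sketch for the instance-extension case, illustrative only (it assumes the function pointer was loaded with xrGetInstanceProcAddr):

```c
#include <stdlib.h>
#include <openxr/openxr.h>

// Illustrative only: retrieve the space-delimited list of Vulkan instance
// extensions the runtime requires. The caller frees the returned string.
char* getRequiredInstanceExtensions(PFN_xrGetVulkanInstanceExtensionsKHR pfn,
                                    XrInstance instance, XrSystemId systemId) {
    uint32_t size = 0;
    // First call: query the required buffer size (including the NUL).
    if (XR_FAILED(pfn(instance, systemId, 0, &size, NULL)) || size == 0)
        return NULL;
    char* buffer = malloc(size);
    // Second call: fill the buffer with space-separated extension names.
    if (XR_FAILED(pfn(instance, systemId, size, &size, buffer))) {
        free(buffer);
        return NULL;
    }
    return buffer;
}
```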
12.23.4. Graphics Binding Structure
The XrGraphicsBindingVulkanKHR structure is defined as:
// Provided by XR_KHR_vulkan_enable
typedef struct XrGraphicsBindingVulkanKHR {
    XrStructureType type;
    const void* next;
    VkInstance instance;
    VkPhysicalDevice physicalDevice;
    VkDevice device;
    uint32_t queueFamilyIndex;
    uint32_t queueIndex;
} XrGraphicsBindingVulkanKHR;
To create a Vulkan-backed XrSession, the application provides a
pointer to an XrGraphicsBindingVulkanKHR structure in the
XrSessionCreateInfo::next chain when calling
xrCreateSession.
Creating a session using this structure triggers several requirements on the runtime regarding swapchain images. See the specification of XrSwapchainImageVulkanKHR for details. The application must externally synchronize the queue referred to by this structure according to Concurrency.
12.23.5. Swapchain Images
The XrSwapchainImageVulkanKHR structure is defined as:
// Provided by XR_KHR_vulkan_enable
typedef struct XrSwapchainImageVulkanKHR {
    XrStructureType type;
    void* next;
    VkImage image;
} XrSwapchainImageVulkanKHR;
If a given session was created with XrGraphicsBindingVulkanKHR, the following conditions apply.
- Calls to xrEnumerateSwapchainImages on an XrSwapchain in that session must return an array of XrSwapchainImageVulkanKHR structures.
- Whenever an OpenXR function accepts an XrSwapchainImageBaseHeader pointer as a parameter in that session, the runtime must also accept a pointer to an XrSwapchainImageVulkanKHR.
The OpenXR runtime must interpret the top-left corner of the swapchain image as the coordinate origin unless specified otherwise by extension functionality.
The OpenXR runtime must return a texture created in accordance with Vulkan Swapchain Flag Bits.
The OpenXR runtime must manage image resource state in accordance with Vulkan Swapchain Image Layout.
12.23.6. Vulkan Swapchain Flag Bits
All XrSwapchainUsageFlags values passed in a session created using
XrGraphicsBindingVulkanKHR must be interpreted as follows by the
runtime, so that the returned swapchain images used by the application may
be used as if they were created with at least the specified
VkImageUsageFlagBits or VkImageCreateFlagBits set.
| XrSwapchainUsageFlagBits | Corresponding Vulkan flag bit |
|---|---|
| XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT |
| XR_SWAPCHAIN_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT | VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT |
| XR_SWAPCHAIN_USAGE_UNORDERED_ACCESS_BIT | VK_IMAGE_USAGE_STORAGE_BIT |
| XR_SWAPCHAIN_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_TRANSFER_SRC_BIT |
| XR_SWAPCHAIN_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT |
| XR_SWAPCHAIN_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_SAMPLED_BIT |
| XR_SWAPCHAIN_USAGE_MUTABLE_FORMAT_BIT | VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT |
| XR_SWAPCHAIN_USAGE_INPUT_ATTACHMENT_BIT_KHR | VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT |
12.23.7. Vulkan Swapchain Image Layout
If an application waits on a swapchain image by calling
xrWaitSwapchainImage in a session created using
XrGraphicsBindingVulkanKHR, and that call returns XR_SUCCESS or
XR_SESSION_LOSS_PENDING, then the OpenXR runtime must guarantee that
the following conditions are true, keeping in mind that the runtime must
not access the VkQueue in xrWaitSwapchainImage:
- The image has a memory layout compatible with VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL for color images, or VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL for depth images.
- The VkQueue specified in XrGraphicsBindingVulkanKHR has ownership of the image.
When an application releases a swapchain image by calling xrReleaseSwapchainImage, in a session created using XrGraphicsBindingVulkanKHR, the OpenXR runtime must interpret the image as:
- Having a memory layout compatible with VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL for color images, or VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL for depth images.
- Being owned by the VkQueue specified in XrGraphicsBindingVulkanKHR.
The application is responsible for transitioning the swapchain image back to the image layout and queue ownership that the OpenXR runtime requires. If the image is not in a layout compatible with the above specifications the runtime may exhibit undefined behavior.
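One way to satisfy the layout requirement is to record an image memory barrier before releasing a color swapchain image. The following sketch is illustrative only; the correct source access mask and stage depend on how the application last used the image:

```c
#include <stddef.h>
#include <vulkan/vulkan.h>

// Illustrative only: before xrReleaseSwapchainImage, return a color
// swapchain image to the layout the runtime expects.
void transitionForRelease(VkCommandBuffer cmd, VkImage image,
                          VkImageLayout currentLayout) {
    VkImageMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        // Assumed prior usage; adjust to match the application's last access.
        .srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
        .dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
        .oldLayout = currentLayout,
        .newLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .image = image,
        .subresourceRange = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1},
    };
    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
                         VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
                         0, 0, NULL, 0, NULL, 1, &barrier);
}
```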
12.23.10. New Enum Constants
- XR_KHR_VULKAN_ENABLE_EXTENSION_NAME
- XR_KHR_vulkan_enable_SPEC_VERSION

Extending XrStructureType:

- XR_TYPE_GRAPHICS_BINDING_VULKAN_KHR
- XR_TYPE_GRAPHICS_REQUIREMENTS_VULKAN_KHR
- XR_TYPE_SWAPCHAIN_IMAGE_VULKAN_KHR
12.23.11. Version History
- Revision 1, 2018-05-07 (Mark Young)
  - Initial draft
- Revision 2, 2018-06-21 (Bryce Hutchings)
  - Replace session parameter with instance and systemId parameters.
  - Move xrGetVulkanDeviceExtensionsKHR, xrGetVulkanInstanceExtensionsKHR and xrGetVulkanGraphicsDeviceKHR functions into this extension
  - Add new XrGraphicsRequirementsVulkanKHR function.
- Revision 3, 2018-11-15 (Paul Pedriana)
  - Specified the swapchain texture coordinate origin.
- Revision 4, 2018-11-16 (Minmin Gong)
  - Specified Y direction and Z range in clip space
- Revision 5, 2019-01-24 (Robert Menzel)
  - Description updated
- Revision 6, 2019-01-25 (Andres Rodriguez)
  - Reword sections of the spec to shift requirements on to the runtime instead of the app
- Revision 7, 2020-08-06 (Bryce Hutchings)
  - Added new XR_ERROR_GRAPHICS_REQUIREMENTS_CALL_MISSING error code
- Revision 8, 2021-01-21 (Rylie Pavlik, Collabora, Ltd.)
  - Document mapping for XrSwapchainUsageFlags
- Revision 9, 2025-03-07 (Rylie Pavlik, Collabora, Ltd.)
  - Re-organize, clarify, and make more uniform with other graphics binding extensions.
- Revision 10, 2025-03-21 (Aaron Leiby)
  - Removed clip space specification
12.24. XR_KHR_vulkan_enable2
- Name String: XR_KHR_vulkan_enable2
- Extension Type: Instance extension
- Registered Extension Number: 91
- Revision: 4
- Ratification Status: Ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-03-21
- IP Status: No known IP claims.
- Contributors:
  Mark Young, LunarG
  Paul Pedriana, Oculus
  Ed Hutchins, Oculus
  Andres Rodriguez, Valve
  Dan Ginsburg, Valve
  Bryce Hutchings, Microsoft
  Minmin Gong, Microsoft
  Robert Menzel, NVIDIA
  Paulo Gomes, Samsung Electronics
  Aaron Leiby, Valve
12.24.1. Overview
This extension enables the use of the Vulkan graphics API in an OpenXR application. Without this extension, an OpenXR application may not be able to use any Vulkan swapchain images.
This extension provides the mechanisms necessary for an application to generate a valid XrGraphicsBindingVulkan2KHR structure in order to create a Vulkan-based XrSession.
This extension also provides mechanisms for the application to interact with images acquired by calling xrEnumerateSwapchainImages.
In order to expose the structures, types, and functions of this extension,
the application source code must define XR_USE_GRAPHICS_API_VULKAN
before including the OpenXR platform header openxr_platform.h, in all
portions of the library or application that interact with the types, values,
and functions it defines.
Note
This extension is intended as an alternative to XR_KHR_vulkan_enable.
12.24.2. Initialization
When operating in Vulkan mode, the OpenXR runtime and the application will share the Vulkan queue described in the XrGraphicsBindingVulkan2KHR structure. This section of the document describes the mechanisms this extension exposes to ensure the shared Vulkan queue is compatible with the runtime and the application’s requirements.
Vulkan Version Requirements
First, a compatible Vulkan version must be agreed upon. To query the runtime’s Vulkan API version requirements an application will call:
// Provided by XR_KHR_vulkan_enable2
XrResult xrGetVulkanGraphicsRequirements2KHR(
    XrInstance instance,
    XrSystemId systemId,
    XrGraphicsRequirementsVulkanKHR* graphicsRequirements);
The xrGetVulkanGraphicsRequirements2KHR function identifies to the
application the runtime’s minimum Vulkan version requirement and the highest
known tested Vulkan version.
xrGetVulkanGraphicsRequirements2KHR must be called prior to calling
xrCreateSession.
The runtime must return XR_ERROR_GRAPHICS_REQUIREMENTS_CALL_MISSING
on calls to xrCreateSession if
xrGetVulkanGraphicsRequirements2KHR has not been called for the same
instance and systemId.
The XrGraphicsRequirementsVulkan2KHR structure populated by xrGetVulkanGraphicsRequirements2KHR is defined as:
// Provided by XR_KHR_vulkan_enable2
// XrGraphicsRequirementsVulkan2KHR is an alias for XrGraphicsRequirementsVulkanKHR
typedef struct XrGraphicsRequirementsVulkanKHR {
    XrStructureType type;
    void* next;
    XrVersion minApiVersionSupported;
    XrVersion maxApiVersionSupported;
} XrGraphicsRequirementsVulkanKHR;
typedef XrGraphicsRequirementsVulkanKHR XrGraphicsRequirementsVulkan2KHR;
Vulkan Instance Creation
Second, a compatible VkInstance must be created.
The xrCreateVulkanInstanceKHR entry point is a wrapper around
vkCreateInstance intended for this purpose.
When called, the runtime must aggregate the requirements specified by the
application with its own requirements and forward the VkInstance
creation request to the vkCreateInstance function pointer returned by
pfnGetInstanceProcAddr.
// Provided by XR_KHR_vulkan_enable2
XrResult xrCreateVulkanInstanceKHR(
    XrInstance instance,
    const XrVulkanInstanceCreateInfoKHR* createInfo,
    VkInstance* vulkanInstance,
    VkResult* vulkanResult);
The XrVulkanInstanceCreateInfoKHR structure contains the input parameters to xrCreateVulkanInstanceKHR.
// Provided by XR_KHR_vulkan_enable2
typedef struct XrVulkanInstanceCreateInfoKHR {
    XrStructureType type;
    const void* next;
    XrSystemId systemId;
    XrVulkanInstanceCreateFlagsKHR createFlags;
    PFN_vkGetInstanceProcAddr pfnGetInstanceProcAddr;
    const VkInstanceCreateInfo* vulkanCreateInfo;
    const VkAllocationCallbacks* vulkanAllocator;
} XrVulkanInstanceCreateInfoKHR;
The XrVulkanInstanceCreateInfoKHR::createFlags member is of the
following type, and contains a bitwise-OR of zero or more of the bits
defined in XrVulkanInstanceCreateFlagBitsKHR.
typedef XrFlags64 XrVulkanInstanceCreateFlagsKHR;
Valid bits for XrVulkanInstanceCreateFlagsKHR are defined by XrVulkanInstanceCreateFlagBitsKHR.
// Flag bits for XrVulkanInstanceCreateFlagsKHR
There are currently no Vulkan instance creation flag bits defined. This is reserved for future use.
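The wrapping described above can be sketched as follows. This is illustrative only; it assumes the xrCreateVulkanInstanceKHR function pointer was loaded with xrGetInstanceProcAddr, and the requested Vulkan API version must satisfy the requirements queried earlier:

```c
#define XR_USE_GRAPHICS_API_VULKAN
#include <vulkan/vulkan.h>
#include <openxr/openxr.h>
#include <openxr/openxr_platform.h>

// Illustrative only: create a VkInstance through the runtime so that the
// runtime can add its own required instance extensions.
XrResult makeVulkanInstance(PFN_xrCreateVulkanInstanceKHR pfnCreate,
                            XrInstance instance, XrSystemId systemId,
                            VkInstance* vkInstance) {
    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .apiVersion = VK_API_VERSION_1_1,  // assumed; match queried requirements
    };
    VkInstanceCreateInfo vkInfo = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
    };
    XrVulkanInstanceCreateInfoKHR info = {
        .type = XR_TYPE_VULKAN_INSTANCE_CREATE_INFO_KHR,
        .systemId = systemId,
        .pfnGetInstanceProcAddr = vkGetInstanceProcAddr,
        .vulkanCreateInfo = &vkInfo,
        .vulkanAllocator = NULL,
    };
    VkResult vkResult = VK_SUCCESS;
    XrResult res = pfnCreate(instance, &info, vkInstance, &vkResult);
    // Both result codes must be checked: the XrResult covers the wrapper,
    // the VkResult covers the forwarded vkCreateInstance call.
    if (XR_SUCCEEDED(res) && vkResult != VK_SUCCESS)
        res = XR_ERROR_RUNTIME_FAILURE;
    return res;
}
```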
Physical Device Selection
Third, a VkPhysicalDevice must be chosen.
Some computer systems may have multiple graphics devices, each of which may
have independent external display outputs.
The runtime must report a VkPhysicalDevice that is compatible with
the OpenXR implementation when xrGetVulkanGraphicsDevice2KHR is
invoked.
The application will use this VkPhysicalDevice to interact with the
OpenXR runtime.
// Provided by XR_KHR_vulkan_enable2
XrResult xrGetVulkanGraphicsDevice2KHR(
    XrInstance instance,
    const XrVulkanGraphicsDeviceGetInfoKHR* getInfo,
    VkPhysicalDevice* vulkanPhysicalDevice);
The XrVulkanGraphicsDeviceGetInfoKHR structure contains the input parameters to xrGetVulkanGraphicsDevice2KHR.
// Provided by XR_KHR_vulkan_enable2
typedef struct XrVulkanGraphicsDeviceGetInfoKHR {
    XrStructureType type;
    const void* next;
    XrSystemId systemId;
    VkInstance vulkanInstance;
} XrVulkanGraphicsDeviceGetInfoKHR;
Vulkan Device Creation
Fourth, a compatible VkDevice must be created.
The xrCreateVulkanDeviceKHR entry point is a wrapper around
vkCreateDevice intended for this purpose.
When called, the runtime must aggregate the requirements specified by the
application with its own requirements and forward the VkDevice
creation request to the vkCreateDevice function pointer returned by
XrVulkanInstanceCreateInfoKHR::pfnGetInstanceProcAddr.
// Provided by XR_KHR_vulkan_enable2
XrResult xrCreateVulkanDeviceKHR(
    XrInstance instance,
    const XrVulkanDeviceCreateInfoKHR* createInfo,
    VkDevice* vulkanDevice,
    VkResult* vulkanResult);
The XrVulkanDeviceCreateInfoKHR structure contains the input parameters to xrCreateVulkanDeviceKHR.
// Provided by XR_KHR_vulkan_enable2
typedef struct XrVulkanDeviceCreateInfoKHR {
XrStructureType type;
const void* next;
XrSystemId systemId;
XrVulkanDeviceCreateFlagsKHR createFlags;
PFN_vkGetInstanceProcAddr pfnGetInstanceProcAddr;
VkPhysicalDevice vulkanPhysicalDevice;
const VkDeviceCreateInfo* vulkanCreateInfo;
const VkAllocationCallbacks* vulkanAllocator;
} XrVulkanDeviceCreateInfoKHR;
If the vulkanPhysicalDevice parameter does not match the output of
xrGetVulkanGraphicsDevice2KHR, then the runtime must return
XR_ERROR_HANDLE_INVALID.
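A non-normative sketch of this step follows. Here physicalDevice is assumed to be the device returned by xrGetVulkanGraphicsDevice2KHR, queueFamilyIndex a graphics-capable queue family on that device, and pfnCreateVulkanDeviceKHR a function pointer previously loaded via xrGetInstanceProcAddr; error handling is omitted:

```c
// Sketch: create the VkDevice through the runtime wrapper so the runtime
// can aggregate its own required extensions and features with ours.
float queuePriority = 1.0f;
VkDeviceQueueCreateInfo queueInfo = {
    .sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
    .queueFamilyIndex = queueFamilyIndex,  // must support VK_QUEUE_GRAPHICS_BIT
    .queueCount = 1,
    .pQueuePriorities = &queuePriority,
};
VkDeviceCreateInfo vkDeviceInfo = {
    .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
    .queueCreateInfoCount = 1,
    .pQueueCreateInfos = &queueInfo,
};
XrVulkanDeviceCreateInfoKHR xrDeviceInfo = {
    .type = XR_TYPE_VULKAN_DEVICE_CREATE_INFO_KHR,
    .systemId = systemId,
    .pfnGetInstanceProcAddr = &vkGetInstanceProcAddr,
    .vulkanPhysicalDevice = physicalDevice,
    .vulkanCreateInfo = &vkDeviceInfo,
};
VkDevice vkDevice = VK_NULL_HANDLE;
VkResult vkResult = VK_SUCCESS;
pfnCreateVulkanDeviceKHR(xrInstance, &xrDeviceInfo, &vkDevice, &vkResult);
```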
XrVulkanDeviceCreateFlagsKHR specify details of device creation.
The XrVulkanDeviceCreateInfoKHR::createFlags member is of the
following type, and contains a bitwise-OR of zero or more of the bits
defined in XrVulkanDeviceCreateFlagBitsKHR.
typedef XrFlags64 XrVulkanDeviceCreateFlagsKHR;
Valid bits for XrVulkanDeviceCreateFlagsKHR are defined by XrVulkanDeviceCreateFlagBitsKHR.
// Flag bits for XrVulkanDeviceCreateFlagsKHR
There are currently no Vulkan device creation flag bits defined. This is reserved for future use.
Queue Selection
Last, the application selects a VkQueue from the VkDevice that
has the VK_QUEUE_GRAPHICS_BIT set.
|
Note
The runtime may schedule work on the VkQueue specified in XrGraphicsBindingVulkan2KHR; the Concurrency section below describes the external synchronization the application must provide. |
Vulkan Graphics Binding
When creating a Vulkan-backed XrSession, the application will chain a pointer to an XrGraphicsBindingVulkan2KHR to the XrSessionCreateInfo parameter of xrCreateSession. With the data collected in the previous sections, the application now has all the necessary information to populate an XrGraphicsBindingVulkan2KHR structure for session creation.
// Provided by XR_KHR_vulkan_enable2
// XrGraphicsBindingVulkan2KHR is an alias for XrGraphicsBindingVulkanKHR
typedef struct XrGraphicsBindingVulkanKHR {
XrStructureType type;
const void* next;
VkInstance instance;
VkPhysicalDevice physicalDevice;
VkDevice device;
uint32_t queueFamilyIndex;
uint32_t queueIndex;
} XrGraphicsBindingVulkanKHR;
typedef XrGraphicsBindingVulkanKHR XrGraphicsBindingVulkan2KHR;
Populating an XrGraphicsBindingVulkan2KHR structure with a member that does not meet the requirements outlined by this extension may result in undefined behavior by the OpenXR runtime.
The requirements outlined in this extension only apply to the
VkInstance, VkDevice, VkPhysicalDevice and VkQueue
objects which the application wishes to associate with an
XrGraphicsBindingVulkan2KHR.
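Putting the collected objects together, session creation might look like the following non-normative sketch; all Vulkan handles and queueFamilyIndex are assumed to come from the preceding steps, and error handling is omitted:

```c
// Sketch: bind the Vulkan objects created earlier to a new session.
XrGraphicsBindingVulkan2KHR graphicsBinding = {
    .type = XR_TYPE_GRAPHICS_BINDING_VULKAN2_KHR,
    .instance = vkInstance,
    .physicalDevice = physicalDevice,
    .device = vkDevice,
    .queueFamilyIndex = queueFamilyIndex,
    .queueIndex = 0,
};
XrSessionCreateInfo sessionCreateInfo = {
    .type = XR_TYPE_SESSION_CREATE_INFO,
    .next = &graphicsBinding,  // graphics binding chained into next
    .systemId = systemId,
};
XrSession session = XR_NULL_HANDLE;
xrCreateSession(xrInstance, &sessionCreateInfo, &session);
```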
12.24.3. Concurrency
Vulkan requires that concurrent access to a VkQueue from multiple
threads be externally synchronized.
Therefore, OpenXR functions that may access the VkQueue specified in
the XrGraphicsBindingVulkan2KHR must also be externally synchronized
by the OpenXR application.
The list of OpenXR functions in which the OpenXR runtime may access the
VkQueue is:
The runtime must not access the VkQueue in any OpenXR function that
is not listed above or in an extension definition.
Failure by the application to synchronize access to VkQueue may
result in undefined behavior in the OpenXR runtime.
12.24.4. Swapchain Interactions
Swapchain Images
When an application interacts with XrSwapchainImageBaseHeader structures in a Vulkan-backed XrSession, the application can interpret these to be XrSwapchainImageVulkan2KHR structures. These are defined as:
// Provided by XR_KHR_vulkan_enable2
// XrSwapchainImageVulkan2KHR is an alias for XrSwapchainImageVulkanKHR
typedef struct XrSwapchainImageVulkanKHR {
XrStructureType type;
void* next;
VkImage image;
} XrSwapchainImageVulkanKHR;
typedef XrSwapchainImageVulkanKHR XrSwapchainImageVulkan2KHR;
If a given session was created with XrGraphicsBindingVulkan2KHR, the following conditions apply.
-
Calls to xrEnumerateSwapchainImages on an XrSwapchain in that session must return an array of XrSwapchainImageVulkan2KHR structures.
-
Whenever an OpenXR function accepts an XrSwapchainImageBaseHeader pointer as a parameter in that session, the runtime must also accept a pointer to an XrSwapchainImageVulkan2KHR.
The OpenXR runtime must interpret the top-left corner of the swapchain image as the coordinate origin unless specified otherwise by extension functionality.
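The enumeration described above follows the usual OpenXR two-call idiom; a non-normative sketch, with swapchain assumed valid, a fixed-size array used for brevity, and error handling omitted:

```c
// Sketch: query the image count, then fill an array of
// XrSwapchainImageVulkan2KHR cast to the base header type.
uint32_t imageCount = 0;
xrEnumerateSwapchainImages(swapchain, 0, &imageCount, NULL);

XrSwapchainImageVulkan2KHR images[8];  // assumption: imageCount <= 8
for (uint32_t i = 0; i < imageCount; i++) {
    images[i].type = XR_TYPE_SWAPCHAIN_IMAGE_VULKAN2_KHR;
    images[i].next = NULL;
}
xrEnumerateSwapchainImages(swapchain, imageCount, &imageCount,
                           (XrSwapchainImageBaseHeader*)images);
// images[i].image now holds a VkImage owned by the runtime.
```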
Swapchain Image Layout
If an application waits on a swapchain image by calling
xrWaitSwapchainImage in a session created using
XrGraphicsBindingVulkan2KHR, and that call returns XR_SUCCESS or
XR_SESSION_LOSS_PENDING, then the OpenXR runtime must guarantee that
the following conditions are true, keeping in mind that the runtime must
not access the VkQueue in xrWaitSwapchainImage:
-
The image has a memory layout compatible with VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL for color images, or VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL for depth images.
-
The VkQueue specified in XrGraphicsBindingVulkan2KHR has ownership of the image.
When an application releases a swapchain image by calling xrReleaseSwapchainImage, in a session created using XrGraphicsBindingVulkan2KHR, the OpenXR runtime must interpret the image as:
-
Having a memory layout compatible with VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL for color images, or VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL for depth images.
-
Being owned by the VkQueue specified in XrGraphicsBindingVulkan2KHR.
-
Being referenced by command buffers submitted to the VkQueue specified in XrGraphicsBindingVulkan2KHR which have not yet completed execution.
The application is responsible for transitioning the swapchain image back to the image layout and queue ownership that the OpenXR runtime requires. If the image is not in a layout compatible with the above specifications, the runtime may exhibit undefined behavior.
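As an informal sketch of that responsibility, a color image last read in a fragment shader could be transitioned back before xrReleaseSwapchainImage with a standard Vulkan barrier. Here cmd is a command buffer being recorded and swapchainImage the acquired VkImage; the source stage, access mask, and old layout are assumptions that depend on how the application actually used the image:

```c
VkImageMemoryBarrier barrier = {
    .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
    .srcAccessMask = VK_ACCESS_SHADER_READ_BIT,             // prior use (assumed)
    .dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
    .oldLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,  // prior layout (assumed)
    .newLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,  // required at release
    .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .image = swapchainImage,
    .subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
};
vkCmdPipelineBarrier(cmd,
                     VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                     VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
                     0, 0, NULL, 0, NULL, 1, &barrier);
```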
Swapchain Flag Bits
All XrSwapchainUsageFlags values passed in a session created using
XrGraphicsBindingVulkan2KHR must be interpreted as follows by the
runtime, so that the returned swapchain images used by the application may
be used as if they were created with at least the specified
VkImageUsageFlagBits or VkImageCreateFlagBits set.
| XrSwapchainUsageFlagBits | Corresponding Vulkan flag bit |
|---|---|
| XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT |
| XR_SWAPCHAIN_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT | VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT |
| XR_SWAPCHAIN_USAGE_UNORDERED_ACCESS_BIT | VK_IMAGE_USAGE_STORAGE_BIT |
| XR_SWAPCHAIN_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_TRANSFER_SRC_BIT |
| XR_SWAPCHAIN_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT |
| XR_SWAPCHAIN_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_SAMPLED_BIT |
| XR_SWAPCHAIN_USAGE_MUTABLE_FORMAT_BIT | VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT |
| XR_SWAPCHAIN_USAGE_INPUT_ATTACHMENT_BIT_KHR | VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT |
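The usage-flag interpretation described above can be sketched as a small translation helper. This is a non-normative illustration: the flag values are copied locally so the snippet stands alone (they match the released openxr.h and vulkan_core.h headers), and the mutable-format bit is left out because it maps to a VkImageCreateFlagBits value rather than a usage bit.

```c
#include <stdint.h>

/* Local copies of the relevant flag values (assumption: values as in the
 * released OpenXR and Vulkan headers). Real applications include the
 * official headers instead. */
typedef uint64_t XrSwapchainUsageFlags;
typedef uint32_t VkImageUsageFlags;

#define XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT         0x00000001
#define XR_SWAPCHAIN_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT 0x00000002
#define XR_SWAPCHAIN_USAGE_UNORDERED_ACCESS_BIT         0x00000004
#define XR_SWAPCHAIN_USAGE_TRANSFER_SRC_BIT             0x00000008
#define XR_SWAPCHAIN_USAGE_TRANSFER_DST_BIT             0x00000010
#define XR_SWAPCHAIN_USAGE_SAMPLED_BIT                  0x00000020

#define VK_IMAGE_USAGE_TRANSFER_SRC_BIT             0x00000001
#define VK_IMAGE_USAGE_TRANSFER_DST_BIT             0x00000002
#define VK_IMAGE_USAGE_SAMPLED_BIT                  0x00000004
#define VK_IMAGE_USAGE_STORAGE_BIT                  0x00000008
#define VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT         0x00000010
#define VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT 0x00000020

/* Map OpenXR swapchain usage bits to the minimum Vulkan image usage bits
 * the runtime provides for the returned swapchain images. */
VkImageUsageFlags xr_to_vk_image_usage(XrSwapchainUsageFlags xr)
{
    VkImageUsageFlags vk = 0;
    if (xr & XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT)
        vk |= VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
    if (xr & XR_SWAPCHAIN_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT)
        vk |= VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT;
    if (xr & XR_SWAPCHAIN_USAGE_UNORDERED_ACCESS_BIT)
        vk |= VK_IMAGE_USAGE_STORAGE_BIT;      /* "unordered access" = storage */
    if (xr & XR_SWAPCHAIN_USAGE_TRANSFER_SRC_BIT)
        vk |= VK_IMAGE_USAGE_TRANSFER_SRC_BIT;
    if (xr & XR_SWAPCHAIN_USAGE_TRANSFER_DST_BIT)
        vk |= VK_IMAGE_USAGE_TRANSFER_DST_BIT;
    if (xr & XR_SWAPCHAIN_USAGE_SAMPLED_BIT)
        vk |= VK_IMAGE_USAGE_SAMPLED_BIT;
    return vk;
}
```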
12.24.5. Appendix
Questions
-
Should the xrCreateVulkanDeviceKHR and xrCreateVulkanInstanceKHR functions have an output parameter that returns the combined list of parameters used to create the Vulkan device/instance?
-
No. If the application is interested in capturing this data it can set the pfnGetInstanceProcAddr parameter to a local callback that captures the relevant information.
-
Quick Reference
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_GRAPHICS_REQUIREMENTS_VULKAN2_KHR (alias of XR_TYPE_GRAPHICS_REQUIREMENTS_VULKAN_KHR)
-
XR_TYPE_GRAPHICS_BINDING_VULKAN2_KHR (alias of XR_TYPE_GRAPHICS_BINDING_VULKAN_KHR)
-
XR_TYPE_SWAPCHAIN_IMAGE_VULKAN2_KHR (alias of XR_TYPE_SWAPCHAIN_IMAGE_VULKAN_KHR)
Version History
-
Revision 1, 2020-05-04 (Andres Rodriguez)
-
Initial draft
-
-
Revision 2, 2021-01-21 (Rylie Pavlik, Collabora, Ltd.)
-
Document mapping for
XrSwapchainUsageFlags
-
-
Revision 3, 2025-03-07 (Rylie Pavlik, Collabora, Ltd.)
-
Clarify and make more uniform with other graphics binding extensions.
-
-
Revision 4, 2025-03-21 (Aaron Leiby)
-
Removed clip space specification
-
12.25. XR_KHR_vulkan_swapchain_format_list
- Name String
-
XR_KHR_vulkan_swapchain_format_list
- Extension Type
-
Instance extension
- Registered Extension Number
-
15
- Revision
-
5
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Last Modified Date
-
2024-11-13
- IP Status
-
No known IP claims.
- Contributors
-
Paul Pedriana, Oculus
Dan Ginsburg, Valve
Jakob Bornecrantz, NVIDIA
Overview
Vulkan has the VK_KHR_image_format_list extension which allows
applications to tell the vkCreateImage function which formats the
application intends to use when VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT is
specified.
This OpenXR extension exposes that Vulkan extension to OpenXR applications.
In the same way that a Vulkan-based application can pass a
VkImageFormatListCreateInfo struct to the vkCreateImage
function, an OpenXR application can pass an identically configured
XrVulkanSwapchainFormatListCreateInfoKHR structure to
xrCreateSwapchain.
Applications using this extension to specify more than one swapchain format
must create OpenXR swapchains with the
XR_SWAPCHAIN_USAGE_MUTABLE_FORMAT_BIT bit set.
Runtimes implementing this extension must support the
XR_KHR_vulkan_enable or the XR_KHR_vulkan_enable2 extension.
When an application enables and uses XR_KHR_vulkan_enable2 as the
graphics binding extension, the runtime must add
VK_KHR_image_format_list to the list of extensions enabled in
xrCreateVulkanDeviceKHR.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
XR_TYPE_VULKAN_SWAPCHAIN_FORMAT_LIST_CREATE_INFO_KHR
New Enums
New Structures
// Provided by XR_KHR_vulkan_swapchain_format_list
typedef struct XrVulkanSwapchainFormatListCreateInfoKHR {
XrStructureType type;
const void* next;
uint32_t viewFormatCount;
const VkFormat* viewFormats;
} XrVulkanSwapchainFormatListCreateInfoKHR;
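A non-normative sketch of chaining this structure into swapchain creation follows; the session is assumed to exist, the two view formats are arbitrary example choices, and error handling is omitted. Note that with more than one format the mutable-format usage bit is required:

```c
// Sketch: create a swapchain whose images can be viewed as both UNORM
// and SRGB variants of the same format.
VkFormat viewFormats[] = { VK_FORMAT_R8G8B8A8_UNORM, VK_FORMAT_R8G8B8A8_SRGB };
XrVulkanSwapchainFormatListCreateInfoKHR formatList = {
    .type = XR_TYPE_VULKAN_SWAPCHAIN_FORMAT_LIST_CREATE_INFO_KHR,
    .viewFormatCount = 2,
    .viewFormats = viewFormats,
};
XrSwapchainCreateInfo swapchainInfo = {
    .type = XR_TYPE_SWAPCHAIN_CREATE_INFO,
    .next = &formatList,  // chained format list
    .usageFlags = XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT |
                  XR_SWAPCHAIN_USAGE_MUTABLE_FORMAT_BIT,  // required for >1 format
    .format = VK_FORMAT_R8G8B8A8_SRGB,
    .sampleCount = 1,
    .width = 1024, .height = 1024,  // example dimensions
    .faceCount = 1, .arraySize = 1, .mipCount = 1,
};
XrSwapchain swapchain = XR_NULL_HANDLE;
xrCreateSwapchain(session, &swapchainInfo, &swapchain);
```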
New Functions
Issues
Version History
-
Revision 1, 2017-09-13 (Paul Pedriana)
-
Initial proposal.
-
-
Revision 2, 2018-06-21 (Bryce Hutchings)
-
Update reference of XR_KHR_vulkan_extension_requirements to XR_KHR_vulkan_enable
-
-
Revision 3, 2020-01-01 (Andres Rodriguez)
-
Update for
XR_KHR_vulkan_enable2
-
-
Revision 4, 2021-01-21 (Rylie Pavlik, Collabora, Ltd.)
-
Fix reference to the mutable-format bit in Vulkan.
-
-
Revision 5, 2024-11-13 (Jakob Bornecrantz, NVIDIA)
-
Reference the correct Vulkan enable extension.
-
Clarify XR_KHR_vulkan_enable2 being used by the app.
-
12.26. XR_KHR_win32_convert_performance_counter_time
- Name String
-
XR_KHR_win32_convert_performance_counter_time
- Extension Type
-
Instance extension
- Registered Extension Number
-
36
- Revision
-
1
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Last Modified Date
-
2019-01-24
- IP Status
-
No known IP claims.
- Contributors
-
Paul Pedriana, Oculus
Bryce Hutchings, Microsoft
Overview
This extension provides two functions for converting between the Windows
performance counter (QPC) time stamps and XrTime.
The xrConvertWin32PerformanceCounterToTimeKHR function converts from
Windows performance counter time stamps to XrTime, while the
xrConvertTimeToWin32PerformanceCounterKHR function converts
XrTime to Windows performance counter time stamps.
The primary use case for this functionality is to be able to synchronize
events between the local system and the OpenXR system.
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Functions
To convert from a Windows performance counter time stamp to XrTime,
call:
// Provided by XR_KHR_win32_convert_performance_counter_time
XrResult xrConvertWin32PerformanceCounterToTimeKHR(
XrInstance instance,
const LARGE_INTEGER* performanceCounter,
XrTime* time);
The xrConvertWin32PerformanceCounterToTimeKHR function converts a time
stamp obtained by the QueryPerformanceCounter Windows function to the
equivalent XrTime.
If the output time cannot represent the input
performanceCounter, the runtime must return
XR_ERROR_TIME_INVALID.
To convert from XrTime to a Windows performance counter time stamp,
call:
// Provided by XR_KHR_win32_convert_performance_counter_time
XrResult xrConvertTimeToWin32PerformanceCounterKHR(
XrInstance instance,
XrTime time,
LARGE_INTEGER* performanceCounter);
The xrConvertTimeToWin32PerformanceCounterKHR function converts an
XrTime to time as if generated by the QueryPerformanceCounter
Windows function.
If the output performanceCounter cannot represent the input
time, the runtime must return XR_ERROR_TIME_INVALID.
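A non-normative sketch of both directions follows; the function pointers are assumed to have been loaded via xrGetInstanceProcAddr, predictedDisplayTime is an XrTime from the frame loop, and error handling is omitted:

```c
// Sketch: convert "now" from QPC to XrTime, e.g. to timestamp a local
// event on the OpenXR timeline.
LARGE_INTEGER qpcNow;
QueryPerformanceCounter(&qpcNow);

XrTime xrNow = 0;
pfnConvertWin32PerformanceCounterToTimeKHR(instance, &qpcNow, &xrNow);

// And back: turn a predicted display time into a QPC value that can be
// compared against other QPC-based timestamps on the system.
LARGE_INTEGER qpcDisplay;
pfnConvertTimeToWin32PerformanceCounterKHR(instance, predictedDisplayTime,
                                           &qpcDisplay);
```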
Issues
Version History
-
Revision 1, 2019-01-24 (Paul Pedriana)
-
Initial draft
-
12.27. XR_EXT_active_action_set_priority
- Name String
-
XR_EXT_active_action_set_priority
- Extension Type
-
Instance extension
- Registered Extension Number
-
374
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2022-08-19
- IP Status
-
No known IP claims.
- Contributors
-
Jules Blok, Epic Games
Lachlan Ford, Microsoft
Overview
The properties of an XrActionSet become immutable after it has been attached to a session. This currently includes the priority of the action set, preventing the application from changing the priority number for the duration of the session.
Given that most runtimes do not actually require this number to be immutable, this extension adds the ability to provide a different priority number for every XrActiveActionSet provided to xrSyncActions.
When updating the action state with xrSyncActions, the application
can provide a pointer to an XrActiveActionSetPrioritiesEXT structure
in the next chain of XrActionsSyncInfo.
This structure contains an array of XrActiveActionSetPriorityEXT
structures mapping active action sets to their priority numbers.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_ACTIVE_ACTION_SET_PRIORITIES_EXT
New Enums
New Structures
The XrActiveActionSetPrioritiesEXT structure is defined as:
// Provided by XR_EXT_active_action_set_priority
typedef struct XrActiveActionSetPrioritiesEXT {
XrStructureType type;
const void* next;
uint32_t actionSetPriorityCount;
const XrActiveActionSetPriorityEXT* actionSetPriorities;
} XrActiveActionSetPrioritiesEXT;
The runtime must ignore any priority numbers for action sets that were not specified as an active action set in the XrActionsSyncInfo structure, as this would have no effect.
The priority numbers provided in XrActiveActionSetPriorityEXT must override the priority number of the active action set starting with the xrSyncActions call to which they are provided, until the first subsequent call to xrSyncActions.
When a subsequent call is made to xrSyncActions in which an active action set does not have a corresponding priority number specified in the XrActiveActionSetPriorityEXT structure, the priority number for that action set must revert to the priority number provided in XrActionSetCreateInfo when that action set was created.
The XrActiveActionSetPriorityEXT structure is defined as:
// Provided by XR_EXT_active_action_set_priority
typedef struct XrActiveActionSetPriorityEXT {
XrActionSet actionSet;
uint32_t priorityOverride;
} XrActiveActionSetPriorityEXT;
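The override mechanism described above can be sketched as follows; gameplaySet and menuSet are assumed to be attached XrActionSet handles, the priority value 100 is an arbitrary example, and error handling is omitted:

```c
// Sketch: temporarily let the menu action set win over gameplay for the
// duration of this xrSyncActions call only.
XrActiveActionSet activeSets[2] = {
    { .actionSet = gameplaySet, .subactionPath = XR_NULL_PATH },
    { .actionSet = menuSet,     .subactionPath = XR_NULL_PATH },
};
XrActiveActionSetPriorityEXT priorities[1] = {
    { .actionSet = menuSet, .priorityOverride = 100 },
};
XrActiveActionSetPrioritiesEXT priorityInfo = {
    .type = XR_TYPE_ACTIVE_ACTION_SET_PRIORITIES_EXT,
    .actionSetPriorityCount = 1,
    .actionSetPriorities = priorities,
};
XrActionsSyncInfo syncInfo = {
    .type = XR_TYPE_ACTIONS_SYNC_INFO,
    .next = &priorityInfo,  // chained priority overrides
    .countActiveActionSets = 2,
    .activeActionSets = activeSets,
};
xrSyncActions(session, &syncInfo);
// gameplaySet keeps its created priority; menuSet uses 100 for this sync.
```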
New Functions
Issues
-
Can the same action set have a different priority on each subaction path?
-
No. To avoid additional complexity each action set can only be specified once in the array of priorities which does not include the subaction path.
-
Version History
-
Revision 1, 2022-08-19 (Jules Blok)
-
Initial proposal.
-
12.28. XR_EXT_composition_layer_inverted_alpha
- Name String
-
XR_EXT_composition_layer_inverted_alpha
- Extension Type
-
Instance extension
- Registered Extension Number
-
555
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2023-12-06
- IP Status
-
No known IP claims.
- Contributors
-
Xiang Wei, Meta
Jian Zhang, ByteDance
Erica Stella, Epic Games
12.28.1. Overview
This extension provides a flag which, when applied to a composition layer, inverts the interpretation of the alpha value in the layer’s swapchain images. With this bit set, an alpha value of 1.0 represents fully transparent, and an alpha value of 0.0 represents fully opaque. This extension is primarily intended to allow applications using inverted alpha internally to submit composition layers with inverted alpha. Doing so using this extension over more general alternatives may result in less runtime overhead.
12.28.2. Modifications to Composition Layer Blending
Specifically, this extension supersedes some operations in the
Composition Layer Blending
section when XR_COMPOSITION_LAYER_INVERTED_ALPHA_BIT_EXT is set.
-
If a submitted swapchain’s texture format does not include an alpha channel or if the XR_COMPOSITION_LAYER_BLEND_TEXTURE_SOURCE_ALPHA_BIT is unset, then the layer alpha is initialized to zero, instead of one.
-
After the LayerColor is fully initialized, the alpha component of LayerColor will then be inverted by the following formula, just before it might be further altered by the XR_COMPOSITION_LAYER_UNPREMULTIPLIED_ALPHA_BIT.
LayerColor.A = 1.0 - LayerColor.A
When this extension is enabled and
XR_COMPOSITION_LAYER_INVERTED_ALPHA_BIT_EXT is set, the inversion of
LayerColor.A must happen before any other modification to the LayerColor,
including the operations that the
XR_KHR_composition_layer_color_scale_bias extension may introduce.
12.28.3. New Enum Constants
-
XR_EXT_COMPOSITION_LAYER_INVERTED_ALPHA_EXTENSION_NAME
-
XR_EXT_composition_layer_inverted_alpha_SPEC_VERSION
-
Extending XrCompositionLayerFlagBits:
-
XR_COMPOSITION_LAYER_INVERTED_ALPHA_BIT_EXT
-
Version History
-
Revision 1, 2023-12-06 (Xiang Wei)
-
Initial extension description
-
12.29. XR_EXT_conformance_automation
- Name String
-
XR_EXT_conformance_automation
- Extension Type
-
Instance extension
- Registered Extension Number
-
48
- Revision
-
3
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Last Modified Date
-
2021-04-14
- IP Status
-
No known IP claims.
- Contributors
-
Lachlan Ford, Microsoft
Rylie Pavlik, Collabora
Overview
The XR_EXT_conformance_automation extension allows conformance test and runtime developers to provide hints to the underlying runtime as to what input the test is expecting. This enables runtime authors to automate the testing of their runtime conformance. This is useful for achieving rapid, iterative runtime development whilst maintaining conformance for runtime releases.
This extension provides the following capabilities:
-
The ability to toggle the active state of an input device.
-
The ability to set the state of an input device button or other input component.
-
The ability to set the location of the input device.
Applications may call these functions at any time. The runtime must do its best to honor the request of applications calling these functions; however, it does not guarantee that any state change will be reflected immediately, at all, or with the exact value that was requested. Applications are thus advised to wait for the state change to be observable and to not assume that the value they requested will be the value observed. If any of the functions of this extension are called, control over input must be removed from the physical hardware of the system.
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Functions
// Provided by XR_EXT_conformance_automation
XrResult xrSetInputDeviceActiveEXT(
XrSession session,
XrPath interactionProfile,
XrPath topLevelPath,
XrBool32 isActive);
// Provided by XR_EXT_conformance_automation
XrResult xrSetInputDeviceStateBoolEXT(
XrSession session,
XrPath topLevelPath,
XrPath inputSourcePath,
XrBool32 state);
// Provided by XR_EXT_conformance_automation
XrResult xrSetInputDeviceStateFloatEXT(
XrSession session,
XrPath topLevelPath,
XrPath inputSourcePath,
float state);
// Provided by XR_EXT_conformance_automation
XrResult xrSetInputDeviceStateVector2fEXT(
XrSession session,
XrPath topLevelPath,
XrPath inputSourcePath,
XrVector2f state);
// Provided by XR_EXT_conformance_automation
XrResult xrSetInputDeviceLocationEXT(
XrSession session,
XrPath topLevelPath,
XrPath inputSourcePath,
XrSpace space,
XrPosef pose);
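As a non-normative sketch, a conformance test might simulate a right-hand controller becoming active and pressing its select button as follows. The XrPath variables are assumed to have been created earlier with xrStringToPath, the pfn pointers loaded via xrGetInstanceProcAddr, and error handling is omitted:

```c
// Sketch: activate a simulated right-hand simple controller...
pfnSetInputDeviceActiveEXT(session,
    simpleControllerProfilePath,  // "/interaction_profiles/khr/simple_controller"
    rightHandPath,                // "/user/hand/right"
    XR_TRUE);

// ...then press its select button. Remember the state change may not be
// observable immediately; tests should poll the action state.
pfnSetInputDeviceStateBoolEXT(session,
    rightHandPath,
    selectClickPath,              // "/user/hand/right/input/select/click"
    XR_TRUE);
```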
New Function Pointers
Issues
None
Version History
-
Revision 1, 2019-10-01 (Lachlan Ford)
-
Initial draft
-
-
Revision 2, 2021-03-04 (Rylie Pavlik)
-
Correct errors in function parameter documentation.
-
-
Revision 3, 2021-04-14 (Rylie Pavlik)
-
Fix missing error code
-
12.30. XR_EXT_debug_utils
- Name String
-
XR_EXT_debug_utils
- Extension Type
-
Instance extension
- Registered Extension Number
-
20
- Revision
-
5
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Last Modified Date
-
2021-04-14
- IP Status
-
No known IP claims.
- Contributors
-
Mark Young, LunarG
Karl Schultz, LunarG
Rylie Pavlik, Collabora
Overview
Due to the nature of the OpenXR interface, there is very little error
information available to the developer and application.
By using the XR_EXT_debug_utils extension, developers can obtain
more information.
When combined with validation layers, even more detailed feedback on the
application’s use of OpenXR will be provided.
This extension provides the following capabilities:
-
The ability to create a debug messenger which will pass along debug messages to an application supplied callback.
-
The ability to identify specific OpenXR handles using a name to improve tracking.
12.30.1. Object Debug Annotation
It can be useful for an application to provide its own content relative to a specific OpenXR handle.
Object Naming
xrSetDebugUtilsObjectNameEXT allows application developers to associate user-defined information with OpenXR handles.
This is useful when paired with the callback that you register when creating an XrDebugUtilsMessengerEXT object. When properly used, debug messages will contain not only the corresponding object handle, but the associated object name as well.
An application can change the name associated with an object simply by calling xrSetDebugUtilsObjectNameEXT again with a new string. If the objectName member of the XrDebugUtilsObjectNameInfoEXT structure is an empty string, then any previously set name is removed.
12.30.2. Debug Messengers
OpenXR allows an application to register an arbitrary number of callbacks with all the OpenXR components wishing to report debug information. Some callbacks can log the information to a file, others can cause a debug break point, or any other behavior defined by the application. A primary producer of callback messages is the validation layers. If the extension is enabled, an application can register callbacks even when no validation layers are enabled. The OpenXR loader, other layers, and runtimes may also produce callback messages.
The debug messenger will provide detailed feedback on the application’s use of OpenXR when events of interest occur. When an event of interest does occur, the debug messenger will submit a debug message to the debug callback that was provided during its creation. Additionally, the debug messenger is responsible for filtering out debug messages that the callback is not interested in, and will only provide desired debug messages.
12.30.3. Debug Message Categorization
Messages that are triggered by the debug messenger are categorized by their
message type and severity.
Additionally, each message has a string value identifying its
messageId.
These three pieces of information can be used to filter out messages so you only
receive reports on the messages you desire.
In fact, during debug messenger creation, the severity and type flag values
are provided to indicate what messages should be allowed to trigger the
user’s callback.
Message Type
The message type indicates the general category the message falls under. Currently we have the following message types:
| Enum | Description |
|---|---|
| XR_DEBUG_UTILS_MESSAGE_TYPE_GENERAL_BIT_EXT | Specifies a general purpose event type. This is typically a non-validation, non-performance event. |
| XR_DEBUG_UTILS_MESSAGE_TYPE_VALIDATION_BIT_EXT | Specifies an event caused during a validation against the OpenXR specification that may indicate invalid OpenXR usage. |
| XR_DEBUG_UTILS_MESSAGE_TYPE_PERFORMANCE_BIT_EXT | Specifies a potentially non-optimal use of OpenXR. |
| XR_DEBUG_UTILS_MESSAGE_TYPE_CONFORMANCE_BIT_EXT | Specifies a non-conformant OpenXR result. This is typically caused by a layer or runtime returning non-conformant data. |
A message may correspond to more than one type.
For example, if a validation warning also could impact performance, then the
message might be identified with both the
XR_DEBUG_UTILS_MESSAGE_TYPE_VALIDATION_BIT_EXT and
XR_DEBUG_UTILS_MESSAGE_TYPE_PERFORMANCE_BIT_EXT flag bits.
Message Severity
The severity of a message is a flag that indicates how important the message is using standard logging naming. The severity flag bit values are shown in the following table.
| Enum | Description |
|---|---|
| XR_DEBUG_UTILS_MESSAGE_SEVERITY_VERBOSE_BIT_EXT | Specifies the most verbose output indicating all diagnostic messages from the OpenXR loader, layers, and drivers should be captured. |
| XR_DEBUG_UTILS_MESSAGE_SEVERITY_INFO_BIT_EXT | Specifies an informational message such as resource details that might be handy when debugging an application. |
| XR_DEBUG_UTILS_MESSAGE_SEVERITY_WARNING_BIT_EXT | Specifies use of OpenXR that could be an application bug. Such cases may not be immediately harmful, such as providing too many swapchain images. Other cases may point to behavior that is almost certainly bad when unintended, such as using a swapchain image whose memory has not been filled. In general, if you see a warning but you know that the behavior is intended/desired, then simply ignore the warning. |
| XR_DEBUG_UTILS_MESSAGE_SEVERITY_ERROR_BIT_EXT | Specifies an error that may cause undefined behavior, including an application crash. |
|
Note
The values of XrDebugUtilsMessageSeverityFlagBitsEXT are sorted based on severity. The higher the flag value, the more severe the message. This allows for simple boolean operation comparisons when looking at XrDebugUtilsMessageSeverityFlagBitsEXT values. |
Message IDs
The XrDebugUtilsMessengerCallbackDataEXT structure contains a
messageId that may be a string identifying the message ID for the
triggering debug message.
This may be blank, or it may simply contain the name of an OpenXR component
(like "OpenXR Loader").
However, when certain API layers or runtimes are used, especially the OpenXR
core_validation API layer, then this value is intended to uniquely identify
the message generated.
If a certain warning/error message constantly fires, a user can simply look
at the unique ID in their callback handler and manually filter it out.
For validation layers, this messageId value actually can be used to
find the section of the OpenXR specification that the layer believes to have
been violated.
See the core_validation API Layer documentation for more information on how
this can be done.
12.30.4. Session Labels
All OpenXR work is performed inside of an XrSession. There are times that it helps to label areas in your OpenXR session to allow easier debugging. This can be especially true if your application creates more than one session. There are two kinds of labels provided in this extension:
-
Region labels
-
Individual labels
To begin identifying a region using a debug label inside a session, you may use the xrSessionBeginDebugUtilsLabelRegionEXT function. Calls to xrSessionBeginDebugUtilsLabelRegionEXT may be nested allowing you to identify smaller and smaller labeled regions within your code. Using this, you can build a "call-stack" of sorts with labels since any logging callback will contain the list of all active session label regions.
To end the last session label region that was begun, you must call xrSessionEndDebugUtilsLabelRegionEXT. Each xrSessionBeginDebugUtilsLabelRegionEXT must have a matching xrSessionEndDebugUtilsLabelRegionEXT. All of a session’s label regions must be closed before the xrDestroySession function is called for the given XrSession.
An individual debug label may be inserted at any time using xrSessionInsertDebugUtilsLabelEXT. The xrSessionInsertDebugUtilsLabelEXT is used to indicate a particular location within the execution of the application’s session functions. The next call to xrSessionInsertDebugUtilsLabelEXT, xrSessionBeginDebugUtilsLabelRegionEXT, or xrSessionEndDebugUtilsLabelRegionEXT overrides this value.
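The label functions described above might be used as in the following non-normative sketch; the pfn pointers are assumed to have been loaded via xrGetInstanceProcAddr, the label names are arbitrary examples, and error handling is omitted:

```c
// Sketch: open a label region around the frame work, drop an individual
// label inside it, then close the region.
XrDebugUtilsLabelEXT label = {
    .type = XR_TYPE_DEBUG_UTILS_LABEL_EXT,
    .labelName = "frame loop",
};
pfnSessionBeginDebugUtilsLabelRegionEXT(session, &label);

label.labelName = "submit layers";
pfnSessionInsertDebugUtilsLabelEXT(session, &label);
// ... frame rendering and xrEndFrame happen here ...

pfnSessionEndDebugUtilsLabelRegionEXT(session);  // closes "frame loop"
```

Any debug callback fired while the region is open will list "frame loop" (and, until superseded, "submit layers") in its sessionLabels array.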
New Object Types
XR_DEFINE_HANDLE(XrDebugUtilsMessengerEXT)
XrDebugUtilsMessengerEXT represents a callback function and associated filters registered with the runtime.
New Flag Types
typedef XrFlags64 XrDebugUtilsMessageSeverityFlagsEXT;
// Flag bits for XrDebugUtilsMessageSeverityFlagsEXT
static const XrDebugUtilsMessageSeverityFlagsEXT XR_DEBUG_UTILS_MESSAGE_SEVERITY_VERBOSE_BIT_EXT = 0x00000001;
static const XrDebugUtilsMessageSeverityFlagsEXT XR_DEBUG_UTILS_MESSAGE_SEVERITY_INFO_BIT_EXT = 0x00000010;
static const XrDebugUtilsMessageSeverityFlagsEXT XR_DEBUG_UTILS_MESSAGE_SEVERITY_WARNING_BIT_EXT = 0x00000100;
static const XrDebugUtilsMessageSeverityFlagsEXT XR_DEBUG_UTILS_MESSAGE_SEVERITY_ERROR_BIT_EXT = 0x00001000;
typedef XrFlags64 XrDebugUtilsMessageTypeFlagsEXT;
// Flag bits for XrDebugUtilsMessageTypeFlagsEXT
static const XrDebugUtilsMessageTypeFlagsEXT XR_DEBUG_UTILS_MESSAGE_TYPE_GENERAL_BIT_EXT = 0x00000001;
static const XrDebugUtilsMessageTypeFlagsEXT XR_DEBUG_UTILS_MESSAGE_TYPE_VALIDATION_BIT_EXT = 0x00000002;
static const XrDebugUtilsMessageTypeFlagsEXT XR_DEBUG_UTILS_MESSAGE_TYPE_PERFORMANCE_BIT_EXT = 0x00000004;
static const XrDebugUtilsMessageTypeFlagsEXT XR_DEBUG_UTILS_MESSAGE_TYPE_CONFORMANCE_BIT_EXT = 0x00000008;
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_DEBUG_UTILS_OBJECT_NAME_INFO_EXT -
XR_TYPE_DEBUG_UTILS_MESSENGER_CALLBACK_DATA_EXT -
XR_TYPE_DEBUG_UTILS_MESSENGER_CREATE_INFO_EXT -
XR_TYPE_DEBUG_UTILS_LABEL_EXT
New Enums
New Structures
// Provided by XR_EXT_debug_utils
typedef struct XrDebugUtilsObjectNameInfoEXT {
XrStructureType type;
const void* next;
XrObjectType objectType;
uint64_t objectHandle;
const char* objectName;
} XrDebugUtilsObjectNameInfoEXT;
// Provided by XR_EXT_debug_utils
typedef struct XrDebugUtilsLabelEXT {
XrStructureType type;
const void* next;
const char* labelName;
} XrDebugUtilsLabelEXT;
// Provided by XR_EXT_debug_utils
typedef struct XrDebugUtilsMessengerCallbackDataEXT {
XrStructureType type;
const void* next;
const char* messageId;
const char* functionName;
const char* message;
uint32_t objectCount;
XrDebugUtilsObjectNameInfoEXT* objects;
uint32_t sessionLabelCount;
XrDebugUtilsLabelEXT* sessionLabels;
} XrDebugUtilsMessengerCallbackDataEXT;
An XrDebugUtilsMessengerCallbackDataEXT structure contains the debug message and associated data that a messenger passes to its debug callback.
|
Note
This structure should only be considered valid during the lifetime of the triggered callback. |
The labels listed inside sessionLabels are organized in time order,
with the most recently generated label appearing first, and the oldest label
appearing last.
// Provided by XR_EXT_debug_utils
typedef struct XrDebugUtilsMessengerCreateInfoEXT {
XrStructureType type;
const void* next;
XrDebugUtilsMessageSeverityFlagsEXT messageSeverities;
XrDebugUtilsMessageTypeFlagsEXT messageTypes;
PFN_xrDebugUtilsMessengerCallbackEXT userCallback;
void* userData;
} XrDebugUtilsMessengerCreateInfoEXT;
For each XrDebugUtilsMessengerEXT that is created the
XrDebugUtilsMessengerCreateInfoEXT::messageSeverities and
XrDebugUtilsMessengerCreateInfoEXT::messageTypes determine when
that XrDebugUtilsMessengerCreateInfoEXT::userCallback is called.
The process to determine if the user’s userCallback is triggered when an
event occurs is as follows:
-
The runtime will perform a bitwise AND of the event’s XrDebugUtilsMessageSeverityFlagBitsEXT with the XrDebugUtilsMessengerCreateInfoEXT::messageSeverities provided during creation of the XrDebugUtilsMessengerEXT object.
-
If this results in 0, the message is skipped.
-
The runtime will perform a bitwise AND of the event’s XrDebugUtilsMessageTypeFlagBitsEXT with the XrDebugUtilsMessengerCreateInfoEXT::messageTypes provided during the creation of the XrDebugUtilsMessengerEXT object.
-
If this results in 0, the message is skipped.
-
If the message of the current event is not skipped, the callback will be called with the message.
The callback will come directly from the component that detected the event, unless some other layer intercepts the calls for its own purposes (filter them in a different way, log to a system error log, etc.).
An application can receive multiple callbacks if multiple XrDebugUtilsMessengerEXT objects are created. A callback will always be executed in the same thread as the originating OpenXR call.
|
Note
A callback can be called from multiple threads simultaneously if the application is making OpenXR calls from multiple threads. |
New Functions
// Provided by XR_EXT_debug_utils
XrResult xrSetDebugUtilsObjectNameEXT(
XrInstance instance,
const XrDebugUtilsObjectNameInfoEXT* nameInfo);
Applications may change the name associated with an object simply by
calling xrSetDebugUtilsObjectNameEXT again with a new string.
If XrDebugUtilsObjectNameInfoEXT::objectName is an empty string,
then any previously set name is removed.
// Provided by XR_EXT_debug_utils
XrResult xrCreateDebugUtilsMessengerEXT(
XrInstance instance,
const XrDebugUtilsMessengerCreateInfoEXT* createInfo,
XrDebugUtilsMessengerEXT* messenger);
The application must ensure that xrCreateDebugUtilsMessengerEXT is
not executed in parallel with any OpenXR function that is also called with
instance or child of instance.
When an event of interest occurs, a debug messenger calls its
XrDebugUtilsMessengerCreateInfoEXT::userCallback with a debug
message from the producer of the event.
Additionally, the debug messenger must filter out any debug messages that
the application’s callback is not interested in based on
XrDebugUtilsMessengerCreateInfoEXT flags, as described below.
// Provided by XR_EXT_debug_utils
XrResult xrDestroyDebugUtilsMessengerEXT(
XrDebugUtilsMessengerEXT messenger);
The application must ensure that xrDestroyDebugUtilsMessengerEXT is
not executed in parallel with any OpenXR function that is also called with
the instance or child of instance that it was created with.
// Provided by XR_EXT_debug_utils
XrResult xrSubmitDebugUtilsMessageEXT(
XrInstance instance,
XrDebugUtilsMessageSeverityFlagsEXT messageSeverity,
XrDebugUtilsMessageTypeFlagsEXT messageTypes,
const XrDebugUtilsMessengerCallbackDataEXT* callbackData);
The application can also produce a debug message, and submit it into the OpenXR messaging system.
The call will propagate through the layers and generate callback(s) as indicated by the message’s flags. The parameters are passed on to the callback in addition to the userData value that was defined at the time the messenger was created.
// Provided by XR_EXT_debug_utils
XrResult xrSessionBeginDebugUtilsLabelRegionEXT(
XrSession session,
const XrDebugUtilsLabelEXT* labelInfo);
The xrSessionBeginDebugUtilsLabelRegionEXT function begins a label
region within session.
// Provided by XR_EXT_debug_utils
XrResult xrSessionEndDebugUtilsLabelRegionEXT(
XrSession session);
This function ends the last label region begun with the
xrSessionBeginDebugUtilsLabelRegionEXT function within the same
session.
// Provided by XR_EXT_debug_utils
XrResult xrSessionInsertDebugUtilsLabelEXT(
XrSession session,
const XrDebugUtilsLabelEXT* labelInfo);
The xrSessionInsertDebugUtilsLabelEXT function inserts an individual
label within session.
The individual labels are useful for different reasons based on the type of
debugging scenario.
When used with something active like a profiler or debugger, it identifies a
single point of time.
When used with logging, the individual label identifies that a particular
location has been passed at the point the log message is triggered.
Because of this usage, individual labels only exist in a log until the next
call to any of the label functions: xrSessionBeginDebugUtilsLabelRegionEXT,
xrSessionEndDebugUtilsLabelRegionEXT, or xrSessionInsertDebugUtilsLabelEXT.
New Function Pointers
// Provided by XR_EXT_debug_utils
typedef XrBool32 (XRAPI_PTR *PFN_xrDebugUtilsMessengerCallbackEXT)(
XrDebugUtilsMessageSeverityFlagsEXT messageSeverity,
XrDebugUtilsMessageTypeFlagsEXT messageTypes,
const XrDebugUtilsMessengerCallbackDataEXT* callbackData,
void* userData);
The callback must not call xrDestroyDebugUtilsMessengerEXT.
The callback returns an XrBool32 that indicates to the calling
layer the application’s desire to abort the call.
A value of XR_TRUE indicates that the application wants to abort this
call.
If the application returns XR_FALSE, the function must not be
aborted.
Applications should always return XR_FALSE so that they see the same
behavior with and without validation layers enabled.
If the application returns XR_TRUE from its callback and the OpenXR
call being aborted returns an XrResult, the layer will return
XR_ERROR_VALIDATION_FAILURE.
The object pointed to by callbackData (and any pointers in it
recursively) must be valid during the lifetime of the triggered callback.
It may become invalid afterwards.
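A minimal conforming callback might look like the following sketch. The typedefs here are simplified stand-ins so the fragment is self-contained; real code uses the declarations from openxr.h, where the callback data struct carries additional fields:

```cpp
#include <cstdint>
#include <cstdio>

// Simplified stand-ins for the openxr.h declarations, so this
// sketch is self-contained. Real code includes openxr.h instead.
typedef uint32_t XrBool32;
typedef uint64_t XrDebugUtilsMessageSeverityFlagsEXT;
typedef uint64_t XrDebugUtilsMessageTypeFlagsEXT;
struct XrDebugUtilsMessengerCallbackDataEXT {
    const char* message;  // abbreviated: the real struct has more fields
};
constexpr XrBool32 XR_FALSE = 0;

// A conforming callback: it logs the message, never calls
// xrDestroyDebugUtilsMessengerEXT, and returns XR_FALSE so the
// triggering OpenXR call is never aborted.
XrBool32 myDebugCallback(XrDebugUtilsMessageSeverityFlagsEXT severity,
                         XrDebugUtilsMessageTypeFlagsEXT types,
                         const XrDebugUtilsMessengerCallbackDataEXT* callbackData,
                         void* userData) {
    (void)severity; (void)types; (void)userData;
    std::fprintf(stderr, "OpenXR debug: %s\n", callbackData->message);
    // Do not retain callbackData (or any pointers inside it) past this
    // call: it is only guaranteed valid for the duration of the callback.
    return XR_FALSE;
}
```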
Examples
Example 1
XR_EXT_debug_utils allows an application to register multiple callbacks with any OpenXR component wishing to report debug information. Some callbacks may log the information to a file, others may cause a debug break point or other application defined behavior. An application can register callbacks even when no validation layers are enabled, but they will only be called for loader and, if implemented, driver events.
To capture events that occur while creating or destroying an instance an application can link an XrDebugUtilsMessengerCreateInfoEXT structure to the next element of the XrInstanceCreateInfo structure given to xrCreateInstance. This callback is only valid for the duration of the xrCreateInstance and the xrDestroyInstance call. Use xrCreateDebugUtilsMessengerEXT to create persistent callback objects.
Example uses: Create three callback objects.
One will log errors and warnings to the debug console using Windows
OutputDebugString.
The second will cause the debugger to break at that callback when an error
happens, and the third will log warnings to stdout.
extern XrInstance instance; // previously initialized
// Must call extension functions through a function pointer:
PFN_xrCreateDebugUtilsMessengerEXT pfnCreateDebugUtilsMessengerEXT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrCreateDebugUtilsMessengerEXT",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnCreateDebugUtilsMessengerEXT)));
PFN_xrDestroyDebugUtilsMessengerEXT pfnDestroyDebugUtilsMessengerEXT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrDestroyDebugUtilsMessengerEXT",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnDestroyDebugUtilsMessengerEXT)));
XrDebugUtilsMessengerCreateInfoEXT callback1 = {
XR_TYPE_DEBUG_UTILS_MESSENGER_CREATE_INFO_EXT, // type
NULL, // next
XR_DEBUG_UTILS_MESSAGE_SEVERITY_ERROR_BIT_EXT | // messageSeverities
XR_DEBUG_UTILS_MESSAGE_SEVERITY_WARNING_BIT_EXT,
XR_DEBUG_UTILS_MESSAGE_TYPE_GENERAL_BIT_EXT | // messageTypes
XR_DEBUG_UTILS_MESSAGE_TYPE_VALIDATION_BIT_EXT,
myOutputDebugString, // userCallback
NULL // userData
};
XrDebugUtilsMessengerEXT messenger1 = XR_NULL_HANDLE;
CHK_XR(pfnCreateDebugUtilsMessengerEXT(instance, &callback1, &messenger1));
callback1.messageSeverities = XR_DEBUG_UTILS_MESSAGE_SEVERITY_ERROR_BIT_EXT;
callback1.userCallback = myDebugBreak;
callback1.userData = NULL;
XrDebugUtilsMessengerEXT messenger2 = XR_NULL_HANDLE;
CHK_XR(pfnCreateDebugUtilsMessengerEXT(instance, &callback1, &messenger2));
XrDebugUtilsMessengerCreateInfoEXT callback3 = {
XR_TYPE_DEBUG_UTILS_MESSENGER_CREATE_INFO_EXT, // type
NULL, // next
XR_DEBUG_UTILS_MESSAGE_SEVERITY_WARNING_BIT_EXT, // messageSeverities
XR_DEBUG_UTILS_MESSAGE_TYPE_GENERAL_BIT_EXT | // messageTypes
XR_DEBUG_UTILS_MESSAGE_TYPE_VALIDATION_BIT_EXT,
myStdOutLogger, // userCallback
NULL // userData
};
XrDebugUtilsMessengerEXT messenger3 = XR_NULL_HANDLE;
CHK_XR(pfnCreateDebugUtilsMessengerEXT(instance, &callback3, &messenger3));
// ...
// Remove callbacks when cleaning up
pfnDestroyDebugUtilsMessengerEXT(messenger1);
pfnDestroyDebugUtilsMessengerEXT(messenger2);
pfnDestroyDebugUtilsMessengerEXT(messenger3);
Example 2
Associate a name with an XrSpace, for easier debugging in external tools or with validation layers that can print a friendly name when referring to objects in error messages.
extern XrInstance instance; // previously initialized
extern XrSpace space; // previously initialized
// Must call extension functions through a function pointer:
PFN_xrSetDebugUtilsObjectNameEXT pfnSetDebugUtilsObjectNameEXT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrSetDebugUtilsObjectNameEXT",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnSetDebugUtilsObjectNameEXT)));
// Set a name on the space
const XrDebugUtilsObjectNameInfoEXT spaceNameInfo = {
XR_TYPE_DEBUG_UTILS_OBJECT_NAME_INFO_EXT, // type
NULL, // next
XR_OBJECT_TYPE_SPACE, // objectType
(uint64_t)space, // objectHandle
"My Object-Specific Space", // objectName
};
pfnSetDebugUtilsObjectNameEXT(instance, &spaceNameInfo);
// A subsequent error might print:
// Space "My Object-Specific Space" (0xc0dec0dedeadbeef) is used
// with an XrSession that is not its parent.
Example 3
Labeling the workload with naming information so that any form of analysis can display a more usable visualization of where actions occur in the lifetime of a session.
extern XrInstance instance; // previously initialized
extern XrSession session; // previously initialized
// Must call extension functions through a function pointer:
PFN_xrSessionBeginDebugUtilsLabelRegionEXT pfnSessionBeginDebugUtilsLabelRegionEXT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrSessionBeginDebugUtilsLabelRegionEXT",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnSessionBeginDebugUtilsLabelRegionEXT)));
PFN_xrSessionEndDebugUtilsLabelRegionEXT pfnSessionEndDebugUtilsLabelRegionEXT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrSessionEndDebugUtilsLabelRegionEXT",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnSessionEndDebugUtilsLabelRegionEXT)));
PFN_xrSessionInsertDebugUtilsLabelEXT pfnSessionInsertDebugUtilsLabelEXT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrSessionInsertDebugUtilsLabelEXT",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnSessionInsertDebugUtilsLabelEXT)));
XrSessionBeginInfo session_begin_info = {
XR_TYPE_SESSION_BEGIN_INFO,
nullptr,
XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO
};
xrBeginSession(session, &session_begin_info);
const XrDebugUtilsLabelEXT session_active_region_label = {
XR_TYPE_DEBUG_UTILS_LABEL_EXT, // type
NULL, // next
"Session active", // labelName
};
// Start an annotated region of calls under the 'Session Active' name
pfnSessionBeginDebugUtilsLabelRegionEXT(session, &session_active_region_label);
// Brackets added for clarity
{
XrDebugUtilsLabelEXT individual_label = {
XR_TYPE_DEBUG_UTILS_LABEL_EXT, // type
NULL, // next
"WaitFrame", // labelName
};
const char wait_frame_label[] = "WaitFrame";
individual_label.labelName = wait_frame_label;
pfnSessionInsertDebugUtilsLabelEXT(session, &individual_label);
XrFrameWaitInfo wait_frame_info; // initialization omitted for readability
XrFrameState frame_state = {XR_TYPE_FRAME_STATE, nullptr};
xrWaitFrame(session, &wait_frame_info, &frame_state);
// Do stuff 1
const XrDebugUtilsLabelEXT session_frame_region_label = {
XR_TYPE_DEBUG_UTILS_LABEL_EXT, // type
NULL, // next
"Session Frame 123", // labelName
};
// Start an annotated region of calls under the 'Session Frame 123' name
pfnSessionBeginDebugUtilsLabelRegionEXT(session, &session_frame_region_label);
// Brackets added for clarity
{
const char begin_frame_label[] = "BeginFrame";
individual_label.labelName = begin_frame_label;
pfnSessionInsertDebugUtilsLabelEXT(session, &individual_label);
XrFrameBeginInfo begin_frame_info; // initialization omitted for readability
xrBeginFrame(session, &begin_frame_info);
// Do stuff 2
const char end_frame_label[] = "EndFrame";
individual_label.labelName = end_frame_label;
pfnSessionInsertDebugUtilsLabelEXT(session, &individual_label);
XrFrameEndInfo end_frame_info; // initialization omitted for readability
xrEndFrame(session, &end_frame_info);
}
// End the session/begun region started above
// (in this case it's the "Session Frame 123" label)
pfnSessionEndDebugUtilsLabelRegionEXT(session);
}
// End the session/begun region started above
// (in this case it's the "Session Active" label)
pfnSessionEndDebugUtilsLabelRegionEXT(session);
In the above example, if an error occurred in the // Do stuff 1 section,
then your debug utils callback would contain the following data in its
sessionLabels array:
- [0] = individual_label with labelName = "WaitFrame"
- [1] = session_active_region_label with labelName = "Session active"
However, if an error occurred in the // Do stuff 2 section, then your
debug utils callback would contain the following data in its
sessionLabels array:
- [0] = individual_label with labelName = "BeginFrame"
- [1] = session_frame_region_label with labelName = "Session Frame 123"
- [2] = session_active_region_label with labelName = "Session active"
You’ll notice that "WaitFrame" is no longer present after the next call to another label function, such as xrSessionBeginDebugUtilsLabelRegionEXT.
Issues
None
Version History
- Revision 1, 2018-02-19 (Mark Young / Karl Schultz)
  - Initial draft, based on VK_EXT_debug_utils.
- Revision 2, 2018-11-16 (Mark Young)
  - Clean up some language based on changes going into the Vulkan VK_EXT_debug_utils extension by Peter Kraus (aka @krOoze).
  - Added session labels.
- Revision 3, 2019-07-19 (Rylie Pavlik)
  - Update examples.
  - Improve formatting.
- Revision 4, 2021-04-04 (Rylie Pavlik)
  - Fix missing error code.
  - Improve formatting.
- Revision 5, 2023-07-25 (John Kearney, Meta)
  - XrDebugUtilsMessengerCallbackDataEXT parameters messageId and functionName to be optional.
12.31. XR_EXT_dpad_binding
- Name String: XR_EXT_dpad_binding
- Extension Type: Instance extension
- Registered Extension Number: 79
- Revision: 1
- Ratification Status: Ratified
- Extension and Version Dependencies
- Last Modified Date: 2022-04-20
- IP Status: No known IP claims.
- Contributors:
  Joe Ludwig, Valve
  Keith Bradner, Valve
  Rune Berg, Valve
  Nathan Nuber, Valve
  Jakob Bornecrantz, Collabora
  Rylie Pavlik, Collabora
  Jules Blok, Epic Games
Overview
This extension allows the application to bind one or more digital actions to a trackpad or thumbstick as though it were a dpad by defining additional component paths to suggest bindings for. The behavior of this dpad-like mapping may be customized using XrInteractionProfileDpadBindingEXT.
Applications must also enable the XR_KHR_binding_modification
extension, which this extension builds on.
New Component Paths
When this extension is enabled, a runtime must accept otherwise-valid suggested bindings that refer to the following component paths added to certain existing input subpaths.
- For a given interaction profile:
  - For each input subpath valid in that interaction profile that has identifier trackpad but no component specified (i.e. …/input/trackpad or …/input/trackpad_<location>), a runtime must accept the following components appended to that path in a suggested binding:
    - …/dpad_up
    - …/dpad_down
    - …/dpad_left
    - …/dpad_right
    - …/dpad_center
  - For each input subpath valid in that interaction profile that has identifier thumbstick but no component specified (i.e. …/input/thumbstick or …/input/thumbstick_<location>), a runtime must accept the following components appended to that path in a suggested binding:
    - …/dpad_up
    - …/dpad_down
    - …/dpad_left
    - …/dpad_right
While a runtime may ignore accepted suggested bindings, and may use their contents as suggestions for automatic remapping when not obeying them, this extension defines interpretations the runtime must make in the case that a suggested binding using one of these paths is being obeyed.
An application can pass XrInteractionProfileDpadBindingEXT in the
XrBindingModificationsKHR::bindingModifications array associated
with a suggested binding to customize the behavior of this mapping in the
case that suggested bindings are being obeyed, and to provide remapping
hints in other cases.
If no XrInteractionProfileDpadBindingEXT structure is present in
XrBindingModificationsKHR::bindingModifications for a given
action set and component-less input subpath, the runtime must behave as if
one were passed with the following values:
- forceThreshold = 0.5
- forceThresholdReleased = 0.4
- centerRegion = 0.5
- wedgeAngle = ½ π
- isSticky = XR_FALSE
- onHaptic = NULL
- offHaptic = NULL
For the purposes of description, the (-1, 1) ranges of the x and y components of trackpad and thumbstick inputs are depicted in this extension as if their scale were equal between axes. However, this is not required by this extension: while their numeric scale is treated as equal, their physical scale may not be.
Each of the component paths defined by this extension behave as boolean
inputs.
The center component …/dpad_center (only present when the path
identifier is trackpad) must not be active at the same time as any other
dpad component.
For the other components, zero, one, or (depending on the wedgeAngle)
two of them may be active at any time, though only adjacent components on a
single logical dpad may be active simultaneously.
For example, …/dpad_down and …/dpad_left are adjacent,
and thus may be active simultaneously, while …/dpad_up and
…/dpad_down are not adjacent and must not be active
simultaneously.
The following components are defined by possibly-overlapping truncated
wedges pointing away from 0, 0 in x, y input space, with their
angular size of XrInteractionProfileDpadBindingEXT::wedgeAngle
centered around the indicated direction.
- …/dpad_up: direction (0, 1)
- …/dpad_down: direction (0, -1)
- …/dpad_left: direction (-1, 0)
- …/dpad_right: direction (1, 0)
Typical values for wedgeAngle are ½ π (or 90°) for
regions that do not overlap, or ¾ π (or 135°) for
regions that are evenly divided between the exclusive region for one cardinal
direction and the overlap with neighboring regions.
Each of these regions are truncated by an arc to exclude the area within a
radius of XrInteractionProfileDpadBindingEXT::centerRegion away
from 0, 0.
When used with an input path with an identifier of trackpad, the area
within this radius corresponds to the …/dpad_center component.
When used with an input path with an identifier of thumbstick, the area
within this radius is a region where all dpad components must be inactive.
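The wedge geometry above can be sketched numerically. This illustrative helper (the names, bitmask, and constants are not part of the API) computes which directional components contain a given (x, y), assuming equal axis scale:

```cpp
#include <cmath>
#include <cstdint>

// Illustrative bitmask for the four directional components.
enum DpadBits : uint32_t {
    DPAD_UP = 1, DPAD_DOWN = 2, DPAD_LEFT = 4, DPAD_RIGHT = 8
};

// Returns the set of directional dpad components whose (possibly
// overlapping) wedge contains (x, y), given the wedge angular size
// and the central radius. A point within the center radius activates
// no directional component (…/dpad_center on a trackpad).
uint32_t activeDpadDirections(float x, float y,
                              float wedgeAngle, float centerRegion) {
    if (std::hypot(x, y) < centerRegion) {
        return 0;  // inside the truncating arc around 0, 0
    }
    const float pi = 3.14159265358979f;
    const float theta = std::atan2(y, x);  // angle of the input direction
    const struct { float center; uint32_t bit; } wedges[] = {
        {pi / 2, DPAD_UP}, {-pi / 2, DPAD_DOWN}, {pi, DPAD_LEFT}, {0, DPAD_RIGHT},
    };
    uint32_t active = 0;
    for (const auto& w : wedges) {
        // Smallest absolute angular difference to the wedge's center direction.
        float d = std::fabs(std::remainder(theta - w.center, 2 * pi));
        if (d <= wedgeAngle / 2) {
            active |= w.bit;
        }
    }
    return active;
}
```

With wedgeAngle = ½ π the wedges do not overlap, so at most one direction is returned; with ¾ π a diagonal input activates two adjacent directions, matching the adjacency rule above.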
Behavior
For both the trackpad and thumbstick input identifiers, there are conditions that must be true for any dpad component to report active. If these conditions are true, the selection of which component or components are active, if any, takes place.

- Activation of a dpad component when appended to an input path with identifier trackpad depends on the values of the …/x and …/y components, as well as on an overall activation state. If the overall state is inactive, the runtime must treat all corresponding dpad components as inactive.
  - If the component …/click is also valid for the trackpad, the overall activation state is equal to the value of …/click.
  - If the component …/click is not valid for the trackpad, but the component …/force is valid, the overall activation state depends on the value of that …/force component, as well as on the previous overall activation state for hysteresis. The …/force component value hysteresis thresholds for overall activation are XrInteractionProfileDpadBindingEXT::forceThreshold and forceThresholdReleased. More explicitly:
    - If the previous overall state was inactive, the current overall state must be active if and only if the value of the …/force component is greater than or equal to forceThreshold.
    - If the previous overall state was active, the current state must be inactive if and only if the value of the …/force component is strictly less than forceThresholdReleased.
- Activation of a dpad component when appended to an input path with identifier thumbstick depends only on the values of the …/x and …/y components of that input.
  - If the thumbstick x and y values correspond to a deflection from center of less than centerRegion, all dpad components must be reported as inactive.
Hysteresis is desirable to avoid an unintentional, rapid toggling between
the active and inactive state that can occur when the amount of force
applied by the user is very close to the threshold at which the input is
considered active.
Hysteresis is optional, and is achieved through a difference between
forceThreshold and forceThresholdReleased.
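The force-threshold hysteresis described above amounts to a one-line state update. The function name below is illustrative:

```cpp
// One step of the overall-activation hysteresis for a force-based
// trackpad dpad: 'wasActive' is the previous overall state, 'force'
// the current …/force value in [0, 1].
bool updateOverallActivation(bool wasActive, float force,
                             float forceThreshold,
                             float forceThresholdReleased) {
    if (!wasActive) {
        // Inactive -> active only once force reaches forceThreshold.
        return force >= forceThreshold;
    }
    // Active -> inactive only once force drops below forceThresholdReleased.
    return force >= forceThresholdReleased;
}
```

With the default values (forceThreshold = 0.5, forceThresholdReleased = 0.4), a force hovering around 0.45 cannot toggle the state in either direction, which is exactly the rapid-toggling the hysteresis gap prevents.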
When XrInteractionProfileDpadBindingEXT::isSticky is
XR_FALSE, and the above logic indicates that some dpad component is
active, a runtime obeying suggested bindings must select which dpad
components to report as active based solely on the current x, y values.
If XrInteractionProfileDpadBindingEXT::isSticky is
XR_TRUE, the region(s) to be made active must be latched when the
above logic begins to indicate that some dpad component is active, and the
x and y values are within at least one region.
The latched region(s) must continue to be reported as active until the
activation logic indicates that all dpad components must be inactive.
The latched region(s) remain active even if the input leaves that region or
enters another region.
The runtime must latch the x and y values, and thus the region or regions (in the case of overlapping dpad component wedges), when the sticky activation toggle becomes true. The latched regions must continue to be true until the input returns to the center region (for a thumbstick) or is released (for a trackpad). In this way, sticky dpads maintain their selected region across touch/click transitions.
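The latching rules above can be sketched as a small state machine; the names and bitmask representation are illustrative, not part of the API:

```cpp
#include <cstdint>

// Minimal sketch of sticky-dpad latching. 'overallActive' is the
// activation state from the click/force/center-region logic, and
// 'currentDirections' is the set of wedge components containing
// the current x, y input.
struct StickyDpadState {
    uint32_t latched = 0;  // region(s) latched when activation began
};

uint32_t updateStickyDpad(StickyDpadState& state, bool overallActive,
                          uint32_t currentDirections) {
    if (!overallActive) {
        state.latched = 0;   // released: all components inactive
        return 0;
    }
    if (state.latched == 0 && currentDirections != 0) {
        // Latch the region(s) active at the moment activation begins.
        state.latched = currentDirections;
    }
    // While active, keep reporting the latched region(s) even if the
    // input has since left that region or entered another one.
    return state.latched;
}
```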
New Structures
The XrInteractionProfileDpadBindingEXT structure is defined as:
// Provided by XR_EXT_dpad_binding
typedef struct XrInteractionProfileDpadBindingEXT {
XrStructureType type;
const void* next;
XrPath binding;
XrActionSet actionSet;
float forceThreshold;
float forceThresholdReleased;
float centerRegion;
float wedgeAngle;
XrBool32 isSticky;
const XrHapticBaseHeader* onHaptic;
const XrHapticBaseHeader* offHaptic;
} XrInteractionProfileDpadBindingEXT;
The XrInteractionProfileDpadBindingEXT structure is an input struct
that defines how to use any two-axis input to provide dpad-like
functionality to the application.
The struct must be added, for each input that should be treated as a dpad,
to the XrBindingModificationsKHR::bindingModifications array in the
XrBindingModificationsKHR structure (see the
XR_KHR_binding_modification extension).
Runtimes are free to ignore any of the fields when not obeying the bindings, but may use them for automatic rebinding of actions.
The implementation must return XR_ERROR_VALIDATION_FAILURE from
xrSuggestInteractionProfileBindings if any of the following are true:
- forceThreshold or forceThresholdReleased are outside the half-open range (0, 1]
- forceThreshold < forceThresholdReleased
- centerRegion is outside the exclusive range (0, 1)
- wedgeAngle is outside the half-open range [0, π)
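These four checks can be expressed directly. This illustrative helper (not part of the API) returns false exactly when the runtime would have to return XR_ERROR_VALIDATION_FAILURE:

```cpp
// Mirrors the four conditions above: false means the runtime must
// return XR_ERROR_VALIDATION_FAILURE for this dpad modification.
bool dpadModificationValid(float forceThreshold, float forceThresholdReleased,
                           float centerRegion, float wedgeAngle) {
    const float pi = 3.14159265358979f;
    // forceThreshold and forceThresholdReleased must be in (0, 1].
    if (forceThreshold <= 0.0f || forceThreshold > 1.0f) return false;
    if (forceThresholdReleased <= 0.0f || forceThresholdReleased > 1.0f) return false;
    // The hysteresis gap must not be inverted.
    if (forceThreshold < forceThresholdReleased) return false;
    // centerRegion must be strictly between 0 and 1.
    if (centerRegion <= 0.0f || centerRegion >= 1.0f) return false;
    // wedgeAngle must be in [0, pi).
    if (wedgeAngle < 0.0f || wedgeAngle >= pi) return false;
    return true;
}
```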
If more than one XrInteractionProfileDpadBindingEXT is provided for
the same input identifier, including top level path (e.g.
/user/hand/left/input/thumbstick), and two or more of them specify
the same action set, the runtime must return
XR_ERROR_VALIDATION_FAILURE.
If the same input identifier, including top level path, is used for more
than one action set, in addition to inputs being suppressed by higher priority action sets, haptic events from dpads are
also suppressed.
For example, a Valve Index controller binding with a "Walking" action set can have a dpad on each of:
- left thumbstick
- right thumbstick
- left trackpad
- right trackpad
Another action set can also have a dpad active on each of those inputs, and they can have different settings. If both action sets are active, the higher priority one trumps the lower priority one, and the lower priority one is suppressed.
New Functions
Issues
- What if an interaction profile is added that contains a trackpad identifier, for which there is neither a …/click nor a …/force component?
  - Equivalent logic would apply to whatever component is available to distinguish action from inaction.
- Is zero a valid wedge angle? Is π?
  - Zero is valid, though it is mostly useless, as it makes the directional regions empty in size and thus impossible to activate; the user could only activate …/dpad_center on a trackpad identifier. π is not a valid wedge angle because that would imply being able to activate three adjacent directions, of which two must be opposite. In practice, the sensors underlying these inputs make it effectively impossible to input an exact floating point value.
Example
The following sample code shows how to create dpad bindings using this extension.
// Create dpad paths
XrPath pathThumbstick, pathDpadUp, pathDpadDown;
xrStringToPath( pInstance, "/user/hand/left/input/thumbstick", &pathThumbstick);
xrStringToPath( pInstance, "/user/hand/left/input/thumbstick/dpad_up", &pathDpadUp );
xrStringToPath( pInstance, "/user/hand/left/input/thumbstick/dpad_down", &pathDpadDown );
// Set dpad binding modifiers
XrInteractionProfileDpadBindingEXT xrDpadModification { XR_TYPE_INTERACTION_PROFILE_DPAD_BINDING_EXT };
xrDpadModification.actionSet = xrActionSet_Main;
xrDpadModification.binding = pathThumbstick;
xrDpadModification.centerRegion = 0.25f;
xrDpadModification.wedgeAngle = 2.0f;
// A gap between these next two members creates hysteresis, to avoid rapid toggling
xrDpadModification.forceThreshold = 0.8f;
xrDpadModification.forceThresholdReleased = 0.2f;
// Add dpad binding modifiers to binding modifications vector
std::vector< XrInteractionProfileDpadBindingEXT > vBindingModifs;
vBindingModifs.push_back( xrDpadModification );
std::vector< XrBindingModificationBaseHeaderKHR* > vBindingModifsBase;
for ( XrInteractionProfileDpadBindingEXT &modif : vBindingModifs )
{
vBindingModifsBase.push_back( reinterpret_cast< XrBindingModificationBaseHeaderKHR* >( &modif) );
}
XrBindingModificationsKHR xrBindingModifications { XR_TYPE_BINDING_MODIFICATIONS_KHR };
xrBindingModifications.bindingModifications = vBindingModifsBase.data();
xrBindingModifications.bindingModificationCount = ( uint32_t )vBindingModifsBase.size();
// Set dpad input path as suggested binding for an action
XrActionSuggestedBinding xrActionBindingTeleport, xrActionBindingMenu;
xrActionBindingTeleport.action = xrAction_Teleport;
xrActionBindingTeleport.binding = pathDpadUp;
xrActionBindingMenu.action = xrAction_Menu;
xrActionBindingMenu.binding = pathDpadDown;
std::vector< XrActionSuggestedBinding > vActionBindings;
vActionBindings.push_back( xrActionBindingTeleport );
vActionBindings.push_back( xrActionBindingMenu );
// Create interaction profile/controller path
XrPath xrInteractionProfilePath;
xrStringToPath( pInstance, "/interaction_profiles/valve/index_controller", &xrInteractionProfilePath );
// Set suggested binding to interaction profile
XrInteractionProfileSuggestedBinding xrInteractionProfileSuggestedBinding { XR_TYPE_INTERACTION_PROFILE_SUGGESTED_BINDING };
xrInteractionProfileSuggestedBinding.interactionProfile = xrInteractionProfilePath;
xrInteractionProfileSuggestedBinding.suggestedBindings = vActionBindings.data();
xrInteractionProfileSuggestedBinding.countSuggestedBindings = ( uint32_t )vActionBindings.size();
// Set binding modifications to interaction profile's suggested binding
xrInteractionProfileSuggestedBinding.next = &xrBindingModifications;
// Finally, suggest interaction profile bindings to runtime
xrSuggestInteractionProfileBindings( pInstance, &xrInteractionProfileSuggestedBinding );
Version History
- Revision 1, 2022-02-18 (Rune Berg)
  - Initial extension description
12.32. XR_EXT_eye_gaze_interaction
- Name String: XR_EXT_eye_gaze_interaction
- Extension Type: Instance extension
- Registered Extension Number: 31
- Revision: 2
- Ratification Status: Ratified
- Extension and Version Dependencies
- Last Modified Date: 2020-02-20
- IP Status: No known IP claims.
- Contributors:
  Denny Rönngren, Tobii
  Yin Li, Microsoft
  Alex Turner, Microsoft
  Paul Pedriana, Oculus
  Rémi Arnaud, Varjo
  Blake Taylor, Magic Leap
  Lachlan Ford, Microsoft
  Cass Everitt, Oculus
Overview
This extension provides an XrPath for getting eye gaze input from
an eye tracker to enable eye gaze interactions.
The intended use for this extension is to provide:
- system properties to inform if eye gaze interaction is supported by the current device.
- an XrPath for real time eye tracking that exposes an accurate and precise eye gaze pose to be used to enable eye gaze interactions.
- a structure XrEyeGazeSampleTimeEXT that allows an application to retrieve more information regarding the eye tracking samples.
With these building blocks, an application can discover if the XR runtime has access to an eye tracker, bind the eye gaze pose to the action system, determine if the eye tracker is actively tracking the user’s eye gaze, and use the eye gaze pose as an input signal to build eye gaze interactions.
12.32.1. Eye tracker
An eye tracker is a sensory device that tracks eyes and accurately maps what the user is looking at. The main purpose of this extension is to provide accurate and precise eye gaze for the application.
Eye tracking data can be sensitive personal information and is closely linked to personal privacy and integrity. It is strongly recommended that applications that store or transfer eye tracking data always ask the user for active and specific acceptance to do so.
If a runtime supports a permission system to control application access to
the eye tracker, then the runtime must set the isActive field to
XR_FALSE on the supplied XrActionStatePose structure, and must
clear XR_SPACE_LOCATION_POSITION_TRACKED_BIT,
XR_SPACE_LOCATION_POSITION_VALID_BIT,
XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT and
XR_SPACE_LOCATION_ORIENTATION_VALID_BIT when locating using the
tracked space until the application has been allowed access to the eye
tracker.
When the application access has been allowed, the runtime may set
isActive on the supplied XrActionStatePose structure to
XR_TRUE and may set XR_SPACE_LOCATION_POSITION_TRACKED_BIT,
XR_SPACE_LOCATION_POSITION_VALID_BIT,
XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT and
XR_SPACE_LOCATION_ORIENTATION_VALID_BIT when locating using the
tracked space.
12.32.2. Device enumeration
When the eye gaze input extension is enabled, an application may pass an XrSystemEyeGazeInteractionPropertiesEXT structure in the next chain when calling xrGetSystemProperties to acquire information about the connected eye tracker.
The runtime must populate the XrSystemEyeGazeInteractionPropertiesEXT structure with the relevant information when returning the XrSystemProperties from the xrGetSystemProperties call.
// Provided by XR_EXT_eye_gaze_interaction
typedef struct XrSystemEyeGazeInteractionPropertiesEXT {
XrStructureType type;
void* next;
XrBool32 supportsEyeGazeInteraction;
} XrSystemEyeGazeInteractionPropertiesEXT;
12.32.3. Eye gaze input
This extension exposes a new interaction profile path /interaction_profiles/ext/eye_gaze_interaction that is valid for the user path
- /user/eyes_ext

with supported input subpath

- …/input/gaze_ext/pose
The eye gaze pose is natively oriented with +Y up, +X to the right, and -Z
forward and not gravity-aligned, similar to the
XR_REFERENCE_SPACE_TYPE_VIEW.
The eye gaze pose may originate from a point positioned between the user’s
eyes.
At any point in time, the position and orientation of the eye pose are both
tracked or both untracked.
This means that the runtime must set both
XR_SPACE_LOCATION_POSITION_TRACKED_BIT and
XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT or clear both
XR_SPACE_LOCATION_POSITION_TRACKED_BIT and
XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT.
One particularity for eye trackers compared to most other spatial input is
that the runtime may not have the capability to predict or interpolate eye
gaze poses.
Runtimes that cannot predict or interpolate eye gaze poses must clamp the
gaze pose requested in the xrLocateSpace call to the value nearest to the
time requested in the call.
To allow for an application to reason about high accuracy eye tracking, the
application can chain in an XrEyeGazeSampleTimeEXT to the next
pointer of the XrSpaceLocation structure passed into the
xrLocateSpace call.
The runtime must set time in the XrEyeGazeSampleTimeEXT
structure to the clamped, predicted or interpolated time.
The application should inspect the time field to understand when in
time the pose is expressed.
The time field may be in the future if a runtime can predict gaze
poses.
The runtime must set the time field to 0 if the sample time is not
available.
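The chaining pattern described above can be illustrated with simplified stand-in types. The real declarations come from openxr.h, and a real application calls xrLocateSpace rather than the stub below, which only demonstrates how a runtime finds the chained XrEyeGazeSampleTimeEXT and writes the sample time into it:

```cpp
#include <cstdint>

// Simplified stand-ins for openxr.h types so this sketch is
// self-contained; the structure type values are illustrative.
typedef int64_t XrTime;
enum XrStructureType : uint32_t {
    TYPE_SPACE_LOCATION = 1,
    TYPE_EYE_GAZE_SAMPLE_TIME = 2,  // stands in for XR_TYPE_EYE_GAZE_SAMPLE_TIME_EXT
};
struct XrBaseStructure { XrStructureType type; void* next; };
struct XrEyeGazeSampleTimeEXT { XrStructureType type; void* next; XrTime time; };
struct XrSpaceLocation { XrStructureType type; void* next; /* flags, pose omitted */ };

// Stub illustrating the runtime side: walk the output structure's
// next chain and, if an XrEyeGazeSampleTimeEXT is chained in, fill
// in the time of the sample the returned pose is based on.
void locateGazeStub(XrSpaceLocation* location, XrTime sampleTime) {
    for (auto* p = static_cast<XrBaseStructure*>(location->next);
         p != nullptr; p = static_cast<XrBaseStructure*>(p->next)) {
        if (p->type == TYPE_EYE_GAZE_SAMPLE_TIME) {
            reinterpret_cast<XrEyeGazeSampleTimeEXT*>(p)->time = sampleTime;
        }
    }
}
```

The application side is symmetric: it sets XrSpaceLocation::next to point at an XrEyeGazeSampleTimeEXT before the call, then inspects the time field afterwards.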
When the runtime provides a nominal eye gaze pose, the
XR_SPACE_LOCATION_POSITION_TRACKED_BIT must be set if the eye
otherwise has a fully-tracked pose relative to the other space.
A runtime can provide a sub-nominal eye-gaze pose but must then clear the
XR_SPACE_LOCATION_POSITION_TRACKED_BIT.
An application can expect that a nominal eye gaze pose can be used for use
cases such as aiming or targeting, while a sub-nominal eye gaze pose has
degraded performance and should not be relied on for all input scenarios.
Applications should be very careful when using sub-nominal eye gaze pose,
since the behavior can vary considerably for different users and
manufacturers, and some manufacturers may not provide sub-nominal eye gaze
pose at all.
With current technology, some eye trackers may need to undergo an explicit calibration routine to provide a nominally accurate and precise eye gaze pose. If the eye tracker is in an uncalibrated state when the first call to xrSyncActions is made with an eye gaze action enabled, then the runtime should request eye tracker calibration from the user if it has not yet been requested.
// Provided by XR_EXT_eye_gaze_interaction
typedef struct XrEyeGazeSampleTimeEXT {
XrStructureType type;
void* next;
XrTime time;
} XrEyeGazeSampleTimeEXT;
12.32.4. Sample code
The following example code shows how to bind the eye pose to the action system.
extern XrInstance instance;
extern XrSession session;
extern XrPosef pose_identity;
// Create action set
XrActionSetCreateInfo actionSetInfo{XR_TYPE_ACTION_SET_CREATE_INFO};
strcpy(actionSetInfo.actionSetName, "gameplay");
strcpy(actionSetInfo.localizedActionSetName, "Gameplay");
actionSetInfo.priority = 0;
XrActionSet gameplayActionSet;
CHK_XR(xrCreateActionSet(instance, &actionSetInfo, &gameplayActionSet));
// Create user intent action
XrActionCreateInfo actionInfo{XR_TYPE_ACTION_CREATE_INFO};
strcpy(actionInfo.actionName, "user_intent");
actionInfo.actionType = XR_ACTION_TYPE_POSE_INPUT;
strcpy(actionInfo.localizedActionName, "User Intent");
XrAction userIntentAction;
CHK_XR(xrCreateAction(gameplayActionSet, &actionInfo, &userIntentAction));
// Create suggested bindings
XrPath eyeGazeInteractionProfilePath;
CHK_XR(xrStringToPath(instance, "/interaction_profiles/ext/eye_gaze_interaction", &eyeGazeInteractionProfilePath));
XrPath gazePosePath;
CHK_XR(xrStringToPath(instance, "/user/eyes_ext/input/gaze_ext/pose", &gazePosePath));
XrActionSuggestedBinding bindings;
bindings.action = userIntentAction;
bindings.binding = gazePosePath;
XrInteractionProfileSuggestedBinding suggestedBindings{XR_TYPE_INTERACTION_PROFILE_SUGGESTED_BINDING};
suggestedBindings.interactionProfile = eyeGazeInteractionProfilePath;
suggestedBindings.suggestedBindings = &bindings;
suggestedBindings.countSuggestedBindings = 1;
CHK_XR(xrSuggestInteractionProfileBindings(instance, &suggestedBindings));
XrSessionActionSetsAttachInfo attachInfo{XR_TYPE_SESSION_ACTION_SETS_ATTACH_INFO};
attachInfo.countActionSets = 1;
attachInfo.actionSets = &gameplayActionSet;
CHK_XR(xrAttachSessionActionSets(session, &attachInfo));
XrActionSpaceCreateInfo createActionSpaceInfo{XR_TYPE_ACTION_SPACE_CREATE_INFO};
createActionSpaceInfo.action = userIntentAction;
createActionSpaceInfo.poseInActionSpace = pose_identity;
XrSpace gazeActionSpace;
CHK_XR(xrCreateActionSpace(session, &createActionSpaceInfo, &gazeActionSpace));
XrReferenceSpaceCreateInfo createReferenceSpaceInfo{XR_TYPE_REFERENCE_SPACE_CREATE_INFO};
createReferenceSpaceInfo.referenceSpaceType = XR_REFERENCE_SPACE_TYPE_LOCAL;
createReferenceSpaceInfo.poseInReferenceSpace = pose_identity;
XrSpace localReferenceSpace;
CHK_XR(xrCreateReferenceSpace(session, &createReferenceSpaceInfo, &localReferenceSpace));
while(true)
{
XrActiveActionSet activeActionSet{gameplayActionSet, XR_NULL_PATH};
XrTime time; // initialize, e.g. with XrFrameState::predictedDisplayTime from xrWaitFrame
XrActionsSyncInfo syncInfo{XR_TYPE_ACTIONS_SYNC_INFO};
syncInfo.countActiveActionSets = 1;
syncInfo.activeActionSets = &activeActionSet;
CHK_XR(xrSyncActions(session, &syncInfo));
XrActionStatePose actionStatePose{XR_TYPE_ACTION_STATE_POSE};
XrActionStateGetInfo getActionStateInfo{XR_TYPE_ACTION_STATE_GET_INFO};
getActionStateInfo.action = userIntentAction;
CHK_XR(xrGetActionStatePose(session, &getActionStateInfo, &actionStatePose));
if(actionStatePose.isActive){
XrEyeGazeSampleTimeEXT eyeGazeSampleTime{XR_TYPE_EYE_GAZE_SAMPLE_TIME_EXT};
XrSpaceLocation gazeLocation{XR_TYPE_SPACE_LOCATION, &eyeGazeSampleTime};
CHK_XR(xrLocateSpace(gazeActionSpace, localReferenceSpace, time, &gazeLocation));
// Do things
}
}
Version History
- Revision 1, 2020-02-20 (Denny Rönngren)
  - Initial version
- Revision 2, 2022-05-27 (Bryce Hutchings)
  - Remove error-prone XrEyeGazeSampleTimeEXT validation requirement
12.33. XR_EXT_frame_synthesis
- Name String
  XR_EXT_frame_synthesis
- Extension Type
  Instance extension
- Registered Extension Number
  212
- Revision
  1
- Ratification Status
  Not ratified
- Extension and Version Dependencies
- Contributors
  Jian Zhang, Meta Platforms
  Neel Bedekar, Meta Platforms
  Xiang Wei, Meta Platforms
  Guodong Rong, Meta Platforms
  Trevor Dasch, Meta Platforms
  Rémi Arnaud, Varjo
  Paulo Gomes, Samsung Electronics
  Bryce Hutchings, Microsoft
  Rylie Pavlik, Collabora
  Shuai Liu, ByteDance
12.33.1. Overview
This extension enables runtime frame synthesis based on additional data provided by the application. Application-generated motion vector images and depth images may be used by the runtime to perform high-quality frame extrapolation and reprojection to synthesize a new frame, providing a smooth experience even when the application is running below the FPS target.
This extension is designed to be independent of
XR_KHR_composition_layer_depth, and both may be enabled and used at
the same time, for different purposes.
The XrFrameSynthesisInfoEXT::depthSubImage may use depth data
dedicated for frame synthesis, and its resolution may be lower than
XrCompositionLayerDepthInfoKHR::subImage.
See XrFrameSynthesisConfigViewEXT for the suggested resolution of
depthSubImage.
12.33.2. Submit motion vector images and depth images
The XrFrameSynthesisInfoEXT structure is defined as:
// Provided by XR_EXT_frame_synthesis
typedef struct XrFrameSynthesisInfoEXT {
XrStructureType type;
const void* next;
XrFrameSynthesisInfoFlagsEXT layerFlags;
XrSwapchainSubImage motionVectorSubImage;
XrVector4f motionVectorScale;
XrVector4f motionVectorOffset;
XrPosef appSpaceDeltaPose;
XrSwapchainSubImage depthSubImage;
float minDepth;
float maxDepth;
float nearZ;
float farZ;
} XrFrameSynthesisInfoEXT;
When submitting motion vector images and depth images along with projection
layers, add an XrFrameSynthesisInfoEXT structure to the
XrCompositionLayerProjectionView::next chain, for each
XrCompositionLayerProjectionView structure in the given layer.
The runtime must interpret the motion vector data in the
motionVectorSubImage’s RGB channels, modified by
motionVectorScale and motionVectorOffset as follows:
motionVector = motionVectorSubImage.rgb * motionVectorScale.xyz + motionVectorOffset.xyz.
The components motionVectorSubImage.a, motionVectorScale.w and
motionVectorOffset.w are ignored.
The motion vector represents the movement of a pixel since the XrFrameEndInfo::displayTime of the previous frame until the XrFrameEndInfo::displayTime of the current frame. The runtime may use this information to extrapolate the rendered frame into a future frame.
The motion vector must be derived from normalized device coordinate (NDC) space, which in this context uses Vulkan-style conventions: the NDC range is defined as [-1, -1, 0] to [1, 1, 1], different from OpenGL’s NDC range. However, the motion vector itself is not constrained to this range; its values depend on the pixel’s movement and may extend beyond the boundaries of the NDC space. For example, given that a pixel’s NDC in the previous frame is PrevNDC, and CurrNDC in the current frame, and that there is no scale or offset, then the motion vector value is (CurrNDC - PrevNDC).xyz.
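The decoding rule can be written out directly. The vector structs below mirror the layout of the OpenXR math types; DecodeMotionVector3D is an illustrative helper, not a spec API, and the sample values are arbitrary.

```cpp
struct XrVector3f { float x, y, z; };
struct XrVector4f { float x, y, z, w; };

// Decode a 3D motion vector texel per the rule above:
// motionVector = texel.rgb * scale.xyz + offset.xyz
// (texel.a, scale.w and offset.w are ignored).
XrVector3f DecodeMotionVector3D(const XrVector4f& texel,
                                const XrVector4f& scale,
                                const XrVector4f& offset) {
    return {texel.x * scale.x + offset.x,
            texel.y * scale.y + offset.y,
            texel.z * scale.z + offset.z};
}

// With unit scale and zero offset, the decoded value is simply
// CurrNDC - PrevNDC for the pixel, in Vulkan-style NDC.
```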
Note
There are many different ways to generate motion vector images.
12.33.3. Frame synthesis flags
typedef XrFlags64 XrFrameSynthesisInfoFlagsEXT;
// Flag bits for XrFrameSynthesisInfoFlagsEXT
static const XrFrameSynthesisInfoFlagsEXT XR_FRAME_SYNTHESIS_INFO_USE_2D_MOTION_VECTOR_BIT_EXT = 0x00000001;
static const XrFrameSynthesisInfoFlagsEXT XR_FRAME_SYNTHESIS_INFO_REQUEST_RELAXED_FRAME_INTERVAL_BIT_EXT = 0x00000002;
By default, 3D motion vector data is expected by the runtime, so motionVectorSubImage.rgb, motionVectorScale.xyz and motionVectorOffset.xyz are used, as described in XrFrameSynthesisInfoEXT.
When XR_FRAME_SYNTHESIS_INFO_USE_2D_MOTION_VECTOR_BIT_EXT is enabled
on XrFrameSynthesisInfoEXT layerFlags, the runtime instead
interprets the submitted motion vector image as 2D motion vector data,
representing 2D pixel movement from the previous frame to the current frame.
Pixel values are interpreted as follows for 2D motion vector data:
motionVector = motionVectorSubImage.rg * motionVectorScale.xy + motionVectorOffset.xy.
The components motionVectorSubImage.ba, motionVectorScale.zw
and motionVectorOffset.zw are ignored.
Using 2D instead of 3D motion vector data may decrease the quality of the
synthesized frames.
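The 2D variant of the decoding rule can be sketched the same way; DecodeMotionVector2D is an illustrative helper, not a spec API.

```cpp
struct XrVector2f { float x, y; };
struct XrVector4f { float x, y, z, w; };

// Decode a 2D motion vector texel when
// XR_FRAME_SYNTHESIS_INFO_USE_2D_MOTION_VECTOR_BIT_EXT is set:
// motionVector = texel.rg * scale.xy + offset.xy
// (texel.b, texel.a, scale.zw and offset.zw are ignored).
XrVector2f DecodeMotionVector2D(const XrVector4f& texel,
                                const XrVector4f& scale,
                                const XrVector4f& offset) {
    return {texel.x * scale.x + offset.x,
            texel.y * scale.y + offset.y};
}
```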
12.33.4. Get recommended resolution
The XrFrameSynthesisConfigViewEXT structure is defined as:
// Provided by XR_EXT_frame_synthesis
typedef struct XrFrameSynthesisConfigViewEXT {
XrStructureType type;
void* next;
uint32_t recommendedMotionVectorImageRectWidth;
uint32_t recommendedMotionVectorImageRectHeight;
} XrFrameSynthesisConfigViewEXT;
When this extension is enabled, an application can pass in an
XrFrameSynthesisConfigViewEXT structure in the
XrViewConfigurationView::next chain when calling
xrEnumerateViewConfigurationViews to acquire information about the
recommended motion vector image resolution.
12.33.6. New Enum Constants
- XR_EXT_FRAME_SYNTHESIS_EXTENSION_NAME
- XR_EXT_frame_synthesis_SPEC_VERSION
Extending XrStructureType:
- XR_TYPE_FRAME_SYNTHESIS_CONFIG_VIEW_EXT
- XR_TYPE_FRAME_SYNTHESIS_INFO_EXT
Issues
Version History
- Revision 1, 2022-01-31 (Jian Zhang)
  - Initial extension description, converted from fb_space_warp
  - Collaborating with contributors to refine the extension interfaces.
12.34. XR_EXT_future
- Name String
  XR_EXT_future
- Extension Type
  Instance extension
- Registered Extension Number
  470
- Revision
  2
- Ratification Status
  Ratified
- Extension and Version Dependencies
- Contributors
  Bryce Hutchings, Microsoft
  Andreas Selvik, Meta
  Ron Bessems, Magic Leap
  Yin Li, Microsoft Corporation
  Baolin Fu, ByteDance
  Cass Everitt, Meta Platforms
  Charlton Rodda, Collabora
  Jakob Bornecrantz, NVIDIA
  John Kearney, Meta Platforms
  Jonathan Wright, Meta Platforms
  Jun Yan, ByteDance
  Junyi Wang, ByteDance
  Karthik Kadappan, Magic Leap
  Natalie Fleury, Meta Platforms
  Nathan Nuber, Valve
  Nikita Lutsenko, Meta Platforms
  Robert Blenkinsopp, Ultraleap
  Rylie Pavlik, Collabora
  Tim Mowrer, Meta Platforms
  Wenlin Mao, Meta Platforms
  Will Fu, ByteDance
  Zhipeng Liu, ByteDance
12.34.1. Overview
In XR systems there are certain operations that are long running and do not reasonably complete within a normal frame loop. This extension introduces the concept of a future which supports creation of asynchronous (async) functions for such long running operations. This extension does not include any asynchronous operations: it is expected that other extensions will use these futures and their associated conventions in this extension to define their asynchronous operations.
An XrFutureEXT represents the future result of an asynchronous
operation, comprising an XrResult and possibly additional outputs.
Long running operations immediately return an XrFutureEXT when
started, letting the application poll the state of the future, and get the
result once ready by calling a "complete"-function.
12.34.2. Getting a future
The XrFutureEXT basetype is defined as:
// Provided by XR_EXT_future
XR_DEFINE_OPAQUE_64(XrFutureEXT)
Asynchronous functions return an XrFutureEXT token as a placeholder
for a value that will be returned later.
An XrFutureEXT returned by a successful call to a function starting
an asynchronous operation should normally start in the
XR_FUTURE_STATE_PENDING_EXT state, but may skip directly to
XR_FUTURE_STATE_READY_EXT if the result is immediately available.
The value XR_NULL_FUTURE_EXT, numerically equal to 0, is never a
valid XrFutureEXT value.
Note that an XrFutureEXT token is neither a
handle nor an atom type (such as
XrPath).
It belongs to a new category and is defined as an opaque 64-bit value.
See Future Scope for details on the scope and lifecycle of a future.
Style note: Functions that return an XrFutureEXT should be
named with the suffix "Async", e.g. xrPerformLongTaskAsync.
A function returning an XrFutureEXT must not set the output future to
XR_NULL_FUTURE_EXT when the function returns XR_SUCCESS.
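Because XR_DEFINE_OPAQUE_64 produces a plain 64-bit integer typedef in openxr.h, futures can be stored and compared by value. The sentinel check below sketches the "no request outstanding" pattern used by the samples later in this chapter; HasOutstandingRequest is a hypothetical helper.

```cpp
#include <cstdint>

// XR_DEFINE_OPAQUE_64(XrFutureEXT) expands to a plain 64-bit typedef,
// so futures are copied and compared by value, unlike handles.
typedef uint64_t XrFutureEXT;
#define XR_NULL_FUTURE_EXT 0

// An application can use XR_NULL_FUTURE_EXT as a "no request
// outstanding" sentinel, since 0 is never a valid XrFutureEXT value.
bool HasOutstandingRequest(XrFutureEXT future) {
    return future != XR_NULL_FUTURE_EXT;
}
```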
12.34.3. Waiting for a future to become ready
The xrPollFutureEXT function is defined as:
// Provided by XR_EXT_future
XrResult xrPollFutureEXT(
XrInstance instance,
const XrFuturePollInfoEXT* pollInfo,
XrFuturePollResultEXT* pollResult);
Applications can use this function to check the current state of a future, typically while waiting for the async operation to complete and the future to become "ready" to complete.
The XrFuturePollInfoEXT structure is defined as:
// Provided by XR_EXT_future
typedef struct XrFuturePollInfoEXT {
XrStructureType type;
const void* next;
XrFutureEXT future;
} XrFuturePollInfoEXT;
An XrFuturePollInfoEXT structure is used to pass future to
xrPollFutureEXT.
The XrFuturePollResultEXT structure is defined as:
// Provided by XR_EXT_future
typedef struct XrFuturePollResultEXT {
XrStructureType type;
void* next;
XrFutureStateEXT state;
} XrFuturePollResultEXT;
An XrFuturePollResultEXT structure is used to return the result of xrPollFutureEXT.
12.34.4. Completing a Future
Extensions that provide async functions returning a future should also
provide a matching completion function to "complete" the future in order to
return the result of the asynchronous operation.
This function should be named with the suffix "Complete" replacing the
"Async" suffix, e.g. xrPerformLongTaskComplete is a suitable completion
function name corresponding to xrPerformLongTaskAsync.
A completion function must populate a structure that must be based on
XrFutureCompletionBaseHeaderEXT to return the result of the
asynchronous operation.
Such a structure may be static_cast to and from
XrFutureCompletionBaseHeaderEXT, allowing generic handling of the
asynchronous operation results as well as polymorphic output from such an
operation.
The XrResult returned from a completion function must not be used to
return the result of the asynchronous operation.
Instead, the XrResult returned from a completion function must
indicate both whether the completion function was called correctly, and if
the completion of the future succeeded.
For instance, a completion function returning XR_ERROR_HANDLE_INVALID
means that a handle passed to the completion function was invalid, not that
a handle associated with the asynchronous operation is invalid.
Note that XR_SUCCESS should be returned from the completion function
even if the asynchronous operation itself was a failure; that failure is
indicated in XrFutureCompletionBaseHeaderEXT::futureResult
rather than the return value of the completion function.
When a completion function is called with a future that is in the
XR_FUTURE_STATE_PENDING_EXT state, the runtime must return
XR_ERROR_FUTURE_PENDING_EXT.
The XrResult of the asynchronous operation must be returned in the
futureResult of the return structure extending
XrFutureCompletionBaseHeaderEXT.
Completion functions which only need to return an XrResult may
populate the XrFutureCompletionEXT structure provided by this
extension as their output structure.
If the asynchronous operation creates a handle, the runtime must not queue
any events for that handle until the corresponding completion function has
been called.
For asynchronous operations that do not create a handle, the result of the
asynchronous operation may be observable through other functions even
before the completion function has been called, e.g. the runtime may mark
an object as "started" when the XrFutureEXT is in ready state,
without waiting for the completion function to be called.
Alternatively, the runtime may use the invocation of the completion
function to set those observable states, e.g. the runtime may mark an
object as "started" only when the completion function is called.
Extensions exposing asynchronous functions should clarify when the
observable states of the asynchronous operation change.
Once a completion function is called on a future with a valid output
structure and returns XR_SUCCESS, the future is considered
completed, and therefore invalidated.
Any usage of this future thereafter must return
XR_ERROR_FUTURE_INVALID_EXT.
Passing a completed future to any function accepting futures must return
XR_ERROR_FUTURE_INVALID_EXT.
The runtime may release any resources associated with an
XrFutureEXT once the future has been completed or invalidated.
The XrFutureCompletionBaseHeaderEXT structure is defined as:
// Provided by XR_EXT_future
typedef struct XrFutureCompletionBaseHeaderEXT {
XrStructureType type;
void* next;
XrResult futureResult;
} XrFutureCompletionBaseHeaderEXT;
XrFutureCompletionBaseHeaderEXT is a base header for the result of a future completion function.
The XrFutureCompletionEXT structure is defined as:
// Provided by XR_EXT_future
typedef struct XrFutureCompletionEXT {
XrStructureType type;
void* next;
XrResult futureResult;
} XrFutureCompletionEXT;
This is a minimal implementation of XrFutureCompletionBaseHeaderEXT, containing only the fields present in the base header structure. It is intended for use by asynchronous operations that do not have other outputs or return values beyond an XrResult value, as the output parameter of their completion function.
12.34.5. Two-Call Idiom in Asynchronous Operations
OpenXR uses a two-call idiom for interfaces that return arrays or buffers of variable size. Asynchronous operations returning such an array or buffer similarly use the structure style of that two-call idiom, with small modifications to the typical completion function conventions to account for this pattern.
For completion functions returning an array or buffer using the two-call idiom, the future must be marked as completed if the output array size is sufficient for all elements of the data and was thus populated by the completion function. If the output array size is not sufficient, the runtime must not mark the future as completed nor invalidated.
For an array of zero data elements, this means the first call to the
two-call idiom completion function must mark the future as completed
and invalidated, even if the array is a NULL pointer.
If XrFutureCompletionBaseHeaderEXT::futureResult is a
failure the runtime must invalidate the
future after the first call, and any further usage of this future must
return XR_ERROR_FUTURE_INVALID_EXT.
For non-zero output arrays where
XrFutureCompletionBaseHeaderEXT::futureResult is not a failure,
XrFutureCompletionBaseHeaderEXT::futureResult must be identical
for both calls to the completion function.
This definition allows asynchronous operations to return dynamically sized outputs by using the two-call idiom in a familiar way.
12.34.6. Cancelling a future
The xrCancelFutureEXT function is defined as:
// Provided by XR_EXT_future
XrResult xrCancelFutureEXT(
XrInstance instance,
const XrFutureCancelInfoEXT* cancelInfo);
This function cancels the future and signals that the async operation is not
required.
After a future has been cancelled any functions using this future must
return XR_ERROR_FUTURE_INVALID_EXT.
A runtime may stop the asynchronous operation associated with a future after an app has cancelled it.
The XrFutureCancelInfoEXT structure is defined as:
// Provided by XR_EXT_future
typedef struct XrFutureCancelInfoEXT {
XrStructureType type;
const void* next;
XrFutureEXT future;
} XrFutureCancelInfoEXT;
An XrFutureCancelInfoEXT describes which future to cancel.
12.34.7. XrFutureEXT Lifecycle
The XrFutureStateEXT enumerates the possible future lifecycle states:
// Provided by XR_EXT_future
typedef enum XrFutureStateEXT {
XR_FUTURE_STATE_PENDING_EXT = 1,
XR_FUTURE_STATE_READY_EXT = 2,
XR_FUTURE_STATE_MAX_ENUM_EXT = 0x7FFFFFFF
} XrFutureStateEXT;
XrFutureEXT Lifecycle
A future that is not invalidated (or completed) may be in one of two
states, Pending and Ready, represented by
XR_FUTURE_STATE_PENDING_EXT and XR_FUTURE_STATE_READY_EXT
respectively.
- When successfully returned from an async function, the future starts out as Pending. In this state the future may be polled, but must not be passed to a completion function. Applications should keep polling the state of the future and wait for it to become ready. If a pending future is passed to the associated completion function, it must return XR_ERROR_FUTURE_PENDING_EXT.
- Once the asynchronous operation succeeds or fails, the state of the future moves to Ready. In the ready state the future may be "Completed" with the Complete function. See Completing a Future.
- After being successfully completed, the future becomes invalidated if the completion function returns a success code and, in the case of two-call idioms, the array was not NULL.
- After a call to xrCancelFutureEXT, the future becomes invalidated immediately and any resources associated with it may be freed (including handles).
- When the associated handle is destroyed, the future becomes invalidated. See Future Scope.
A future returned from an async function must be in either the state
XR_FUTURE_STATE_PENDING_EXT or XR_FUTURE_STATE_READY_EXT.
A runtime may skip the Pending state and go directly to Ready if the
result is immediately available.
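The lifecycle rules above can be modeled with a small mock. SimFuture and its result codes are purely illustrative stand-ins for runtime behavior, not real OpenXR API; real error values come from XrResult.

```cpp
// Illustrative model of the future lifecycle described above.
enum class SimState { Pending, Ready, Invalid };
enum SimResult { SIM_SUCCESS = 0, SIM_ERROR_PENDING = -1, SIM_ERROR_INVALID = -2 };

struct SimFuture {
    SimState state = SimState::Pending;

    // The runtime moves the future to Ready when the operation finishes.
    void MarkReady() {
        if (state == SimState::Pending) state = SimState::Ready;
    }

    // Polling reports the state without changing it.
    SimResult Poll(SimState* out) const {
        if (state == SimState::Invalid) return SIM_ERROR_INVALID;
        *out = state;
        return SIM_SUCCESS;
    }

    // Completing a pending future fails; completing a ready future
    // succeeds and invalidates it.
    SimResult Complete() {
        if (state == SimState::Invalid) return SIM_ERROR_INVALID;
        if (state == SimState::Pending) return SIM_ERROR_PENDING;
        state = SimState::Invalid;
        return SIM_SUCCESS;
    }

    // Cancelling invalidates the future immediately from any live state.
    SimResult Cancel() {
        if (state == SimState::Invalid) return SIM_ERROR_INVALID;
        state = SimState::Invalid;
        return SIM_SUCCESS;
    }
};
```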
12.34.8. Future Scope
An XrFutureEXT is scoped to the "associated handle" of the future.
The associated handle is the handle passed to the asynchronous operation
that returns the XrFutureEXT.
When the associated handle is destroyed, the runtime must invalidate the
future and may free any associated resources.
12.34.9. Extension Guidelines for Asynchronous Functions
Extensions exposing asynchronous functions using XR_EXT_future
should follow these patterns:
- Functions returning a future should use the suffix "Async", prior to an author/vendor tag if applicable. For example:
  - xrGetFooAsync(…)
  - xrRequestBarAsyncKHR(…)
  - xrCreateObjectAsyncVENDOR(…)
- The name of the future out parameter should be future. For example:
  - xrGetFooAsync(…, XrFutureEXT* future)
  - xrRequestBarAsyncKHR(…, XrFutureEXT* future)
  - xrCreateObjectAsyncVENDOR(…, XrFutureEXT* future)
- Functions completing a future should match the name of the function returning the future, but with "Complete" rather than "Async" as the suffix. This is a deviation from the normal pattern in OpenXR, if "complete" is considered to be the verb; however, this provides a useful sorting order keeping the "Async" and "Complete" functions adjacent, and fits the pattern of using suffixes for asynchronous functions. The completion function must use the same handle type as the corresponding async function, and the runtime must return XR_ERROR_HANDLE_INVALID if the handle value passed to the completion function is different from the value passed to the async function that returned the future. For example:
  - xrGetFooComplete(…)
  - xrRequestBarCompleteKHR(…)
  - xrCreateObjectCompleteVENDOR(…)
- The output structure used in the "Complete" function should extend XrFutureCompletionBaseHeaderEXT (starting with type, next, and futureResult fields).
- If an operation requires more than the basic XrFutureCompletionEXT output, the output structure populated by the "Complete" function should be named based on the function that returned the future, with the suffix "Completion". For example:
  - xrGetFooComplete populates XrGetFooCompletion
  - xrRequestBarCompleteKHR populates XrRequestBarCompletionKHR
  - xrCreateObjectCompleteVENDOR populates XrCreateObjectCompletionVENDOR
- The XrFutureEXT parameter in the "Complete" function should be named future. For example:
  - xrGetFooComplete(…, XrFutureEXT future)
  - xrRequestBarCompleteKHR(…, XrFutureEXT future)
  - xrCreateObjectCompleteVENDOR(…, XrFutureEXT future)
- The parameter with the completion structure should be named completion. For example:
  - xrGetFooComplete(…, XrFutureEXT future, XrGetFooCompletion* completion)
  - xrRequestBarCompleteKHR(…, XrFutureEXT future, XrRequestBarCompletionKHR* completion)
  - xrCreateObjectCompleteVENDOR(…, XrFutureEXT future, XrCreateObjectCompletionVENDOR* completion)
12.34.10. Asynchronous function patterns
xrCreate functions
/****************************/
/* Foo extension definition */
/****************************/
typedef void *XrFoo; // Handle definition
typedef struct XrFooObjectCreateInfo {
XrStructureType type;
const void *next;
} XrFooObjectCreateInfo;
#define XR_TYPE_FOO_OBJECT_CREATE_INFO ((XrStructureType)1100092000U)
// extends struct XrFutureCompletionBaseHeader using "parentstruct"
typedef struct XrFooObjectCreateCompletionEXT {
XrStructureType type;
void *XR_MAY_ALIAS next;
XrResult futureResult;
XrFoo foo;
} XrFooObjectCreateCompletionEXT;
#define XR_TYPE_FOO_OBJECT_CREATE_COMPLETION ((XrStructureType)1100092001U)
typedef XrResult(XRAPI_PTR *PFN_xrCreateFooObjectAsync)(
XrSession session, const XrFooObjectCreateInfo *createInfo,
XrFutureEXT *future);
typedef XrResult(XRAPI_PTR *PFN_xrCreateFooObjectComplete)(
XrSession session, XrFutureEXT future,
XrFooObjectCreateCompletionEXT *completion);
/*************************/
/* End Foo definition */
/*************************/
PFN_xrCreateFooObjectAsync xrCreateFooObjectAsync; // previously initialized
PFN_xrCreateFooObjectComplete
xrCreateFooObjectComplete; // previously initialized
PFN_xrPollFutureEXT xrPollFutureEXT; // previously initialized
XrInstance instance; // previously initialized
XrSession session; // previously initialized
XrFutureEXT futureFooObject;
XrResult result;
XrFooObjectCreateInfo createInfo{XR_TYPE_FOO_OBJECT_CREATE_INFO};
result = xrCreateFooObjectAsync(session, &createInfo, &futureFooObject);
CHK_XR(result);
bool keepLooping = true;
bool futureReady = false;
while (keepLooping) {
XrFuturePollInfoEXT pollInfo{XR_TYPE_FUTURE_POLL_INFO_EXT};
XrFuturePollResultEXT pollResult{XR_TYPE_FUTURE_POLL_RESULT_EXT};
pollInfo.future = futureFooObject;
CHK_XR(xrPollFutureEXT(instance, &pollInfo, &pollResult));
if (pollResult.state == XR_FUTURE_STATE_READY_EXT) {
futureReady = true;
keepLooping = false;
} else {
// sleep(10);
}
}
if (futureReady) {
XrFooObjectCreateCompletionEXT completion{
XR_TYPE_FOO_OBJECT_CREATE_COMPLETION};
result = xrCreateFooObjectComplete(session, futureFooObject, &completion);
CHK_XR(result); // Result of the complete function
CHK_XR(completion.futureResult); // Return code of the create function
// completion.foo is now valid and may be used!
}
Two-call idiom
/****************************/
/* Foo extension definition */
/****************************/
typedef struct XrFooObjectCreateInfo {
XrStructureType type;
const void *next;
} XrFooObjectCreateInfo;
#define XR_TYPE_FOO_OBJECTS_CREATE_INFO ((XrStructureType)1100092002U)
// extends struct XrFutureCompletionBaseHeader using "parentstruct"
typedef struct XrFooObjectsCreateCompletionEXT {
XrStructureType type;
void *next;
XrResult futureResult;
uint32_t elementCapacityInput;
uint32_t elementCapacityOutput;
float *elements;
} XrFooObjectsCreateCompletionEXT;
#define XR_TYPE_FOO_OBJECTS_CREATE_COMPLETION ((XrStructureType)1100092003U)
typedef XrResult(XRAPI_PTR *PFN_xrCreateFooObjectsAsync)(
XrSession session, const XrFooObjectCreateInfo *createInfo,
XrFutureEXT *future);
typedef XrResult(XRAPI_PTR *PFN_xrCreateFooObjectsComplete)(
XrSession session, XrFutureEXT future,
XrFooObjectsCreateCompletionEXT *completion);
/*************************/
/* End Foo definition */
/*************************/
PFN_xrCreateFooObjectsAsync xrCreateFooObjectsAsync; // previously initialized
PFN_xrCreateFooObjectsComplete
xrCreateFooObjectsComplete; // previously initialized
PFN_xrPollFutureEXT xrPollFutureEXT; // previously initialized
XrInstance instance; // previously initialized
XrSession session; // previously initialized
XrFutureEXT futureFooObjects;
XrResult result;
XrFooObjectCreateInfo createInfo{XR_TYPE_FOO_OBJECTS_CREATE_INFO};
result = xrCreateFooObjectsAsync(session, &createInfo, &futureFooObjects);
CHK_XR(result);
bool keepLooping = true;
bool futureReady = false;
while (keepLooping) {
XrFuturePollInfoEXT pollInfo{XR_TYPE_FUTURE_POLL_INFO_EXT};
XrFuturePollResultEXT pollResult{XR_TYPE_FUTURE_POLL_RESULT_EXT};
pollInfo.future = futureFooObjects;
CHK_XR(xrPollFutureEXT(instance, &pollInfo, &pollResult));
if (pollResult.state == XR_FUTURE_STATE_READY_EXT) {
futureReady = true;
keepLooping = false;
} else {
// sleep(10);
}
}
if (futureReady) {
XrFooObjectsCreateCompletionEXT completion{
XR_TYPE_FOO_OBJECTS_CREATE_COMPLETION};
result = xrCreateFooObjectsComplete(session, futureFooObjects, &completion);
CHK_XR(result); // Result of the complete function
CHK_XR(completion.futureResult);
// If elementCapacityOutput is 0, then the future is now complete / invalid
if (completion.elementCapacityOutput != 0) {
std::vector<float> floatValues(completion.elementCapacityOutput);
completion.elementCapacityInput = (uint32_t)floatValues.size();
completion.elements = floatValues.data();
result = xrCreateFooObjectsComplete(session, futureFooObjects, &completion);
CHK_XR(result); // Result of the complete function
}
}
// completion.elements has now been filled with values by the runtime.
Sample code
/*****************************************/
/* Slow Foo extension definition */
/*****************************************/
// extends struct XrFutureCompletionBaseHeader using "parentstruct"
typedef struct XrSlowFooCompletionEXT {
XrStructureType type;
void *XR_MAY_ALIAS next;
XrResult futureResult;
float foo;
} XrSlowFooCompletionEXT;
#define XR_TYPE_SLOW_FOO_COMPLETION_EXT ((XrStructureType)1100092005U)
typedef struct XrSlowFooInfoEXT {
XrStructureType type;
void *XR_MAY_ALIAS next;
} XrSlowFooInfoEXT;
#define XR_TYPE_SLOW_FOO_INFO_EXT ((XrStructureType)1100092006U)
typedef XrResult(XRAPI_PTR *PFN_xrSlowFooAsyncEXT)(XrSession session,
XrSlowFooInfoEXT slowFooInfo,
XrFutureEXT *future);
typedef XrResult(XRAPI_PTR *PFN_xrSlowFooCompleteEXT)(
XrSession session, XrFutureEXT future, XrSlowFooCompletionEXT *completion);
/*********************************************/
/* End Slow Foo extension definition */
/*********************************************/
class MyGame {
void OnSlowFooRequest() {
if (m_slowFooFuture == XR_NULL_FUTURE_EXT) {
// Make initial request.
XrSlowFooInfoEXT fooInfo{XR_TYPE_SLOW_FOO_INFO_EXT};
XrResult result = xrSlowFooAsyncEXT(session, fooInfo, &m_slowFooFuture);
CHK_XR(result);
}
}
void OnGameTickOrSomeOtherReoccurringFunction() {
// Check if a future is outstanding
if (m_slowFooFuture == XR_NULL_FUTURE_EXT) {
return;
}
// Poll for state of future
XrFuturePollInfoEXT pollInfo{XR_TYPE_FUTURE_POLL_INFO_EXT};
XrFuturePollResultEXT pollResult{XR_TYPE_FUTURE_POLL_RESULT_EXT};
pollInfo.future = m_slowFooFuture;
CHK_XR(xrPollFutureEXT(instance, &pollInfo, &pollResult));
if (pollResult.state == XR_FUTURE_STATE_READY_EXT) {
// Complete the future, consuming the result
XrSlowFooCompletionEXT completion{XR_TYPE_SLOW_FOO_COMPLETION_EXT};
XrResult result =
xrSlowFooCompleteEXT(session, m_slowFooFuture, &completion);
// Check XrResult from the completion function
CHK_XR(result);
// Check XrResult from the async operation
CHK_XR(completion.futureResult);
m_fooValue = completion.foo;
m_slowFooFuture = XR_NULL_FUTURE_EXT;
}
}
XrFutureEXT m_slowFooFuture{XR_NULL_FUTURE_EXT};
float m_fooValue{0.0f};
PFN_xrSlowFooAsyncEXT xrSlowFooAsyncEXT; // previously initialized
PFN_xrSlowFooCompleteEXT xrSlowFooCompleteEXT; // previously initialized
PFN_xrPollFutureEXT xrPollFutureEXT; // previously initialized
XrInstance instance; // previously initialized
XrSession session; // previously initialized
};
Multi-threaded code
class MyThreadedGame {
MyThreadedGame() {
// Start the thread
m_processThread = std::thread(&MyThreadedGame::ThreadFunction, this);
StartSlowFooRequest();
}
~MyThreadedGame() {
// All functions using futures must be synchronized.
CancelSlowFooRequestFuture();
m_abort = true;
m_processThread.join();
}
void StartSlowFooRequest() {
std::unique_lock<std::mutex> lock(m_mutex);
if (m_slowFooFuture == XR_NULL_FUTURE_EXT) {
// Make initial request.
XrSlowFooInfoEXT fooInfo{XR_TYPE_SLOW_FOO_INFO_EXT};
XrResult result = xrSlowFooAsyncEXT(session, fooInfo, &m_slowFooFuture);
CHK_XR(result);
}
}
void CancelSlowFooRequestFuture() {
std::unique_lock<std::mutex> lock(m_mutex);
if (m_slowFooFuture != XR_NULL_FUTURE_EXT) {
XrFutureCancelInfoEXT cancelInfo{XR_TYPE_FUTURE_CANCEL_INFO_EXT};
cancelInfo.future = m_slowFooFuture;
CHK_XR(xrCancelFutureEXT(instance, &cancelInfo));
m_slowFooFuture = XR_NULL_FUTURE_EXT;
}
}
void CheckFooRequestCompletion() {
std::unique_lock<std::mutex> lock(m_mutex);
// Check if a future is outstanding
if (m_slowFooFuture == XR_NULL_FUTURE_EXT) {
return;
}
// Poll for state of future
XrFuturePollInfoEXT pollInfo{XR_TYPE_FUTURE_POLL_INFO_EXT};
XrFuturePollResultEXT pollResult{XR_TYPE_FUTURE_POLL_RESULT_EXT};
pollInfo.future = m_slowFooFuture;
CHK_XR(xrPollFutureEXT(instance, &pollInfo, &pollResult));
if (pollResult.state == XR_FUTURE_STATE_READY_EXT) {
// Complete the future, consuming the result
XrSlowFooCompletionEXT completion{XR_TYPE_SLOW_FOO_COMPLETION_EXT};
XrResult result =
xrSlowFooCompleteEXT(session, m_slowFooFuture, &completion);
// Check XrResult from the completion function
CHK_XR(result);
// Check XrResult from the async operation
CHK_XR(completion.futureResult);
m_fooValue = completion.foo;
m_slowFooFuture = XR_NULL_FUTURE_EXT;
// Do something with the foo value.
}
}
void ThreadFunction() {
while (!m_abort) {
// other logic here
CheckFooRequestCompletion();
// sleep if needed.
}
}
XrFutureEXT m_slowFooFuture{XR_NULL_FUTURE_EXT};
float m_fooValue{0.0f};
bool m_abort{false};
std::mutex m_mutex;
std::thread m_processThread;
};
New Base Types
New Functions
New Structures
New Enum Constants
- XR_NULL_FUTURE_EXT
XrStructureType enumeration is extended with:
- XR_TYPE_FUTURE_CANCEL_INFO_EXT
- XR_TYPE_FUTURE_POLL_INFO_EXT
- XR_TYPE_FUTURE_POLL_RESULT_EXT
- XR_TYPE_FUTURE_COMPLETION_EXT
XrResult enumeration is extended with:
- XR_ERROR_FUTURE_PENDING_EXT
- XR_ERROR_FUTURE_INVALID_EXT
Issues
- Should there be a state for completed functions that is separate from "invalid"?
  Resolved. Answer: No. This would force an implementing runtime to remember old futures forever. In order to allow implementations that delete all associated data about a future after completion, we cannot differentiate between a future that never existed and one that was completed. Similarly, invalidated/completed is not formally a "state" for futures in the final API.
Version History
- Revision 1, 2023-02-14 (Andreas Løve Selvik, Meta Platforms and Ron Bessems, Magic Leap)
  - Initial extension description
- Revision 2, 2025-04-17 (Nihav Jain, Google)
  - Clarify what the runtime may do in a completion function.
12.35. XR_EXT_hand_interaction
- Name String: XR_EXT_hand_interaction
- Extension Type: Instance extension
- Registered Extension Number: 303
- Revision: 2
- Ratification Status: Ratified
- Extension and Version Dependencies
- API Interactions: Interacts with XR_EXT_palm_pose
- Contributors:
Yin Li, Microsoft
Alex Turner, Microsoft
Casey Meekhof, Microsoft
Lachlan Ford, Microsoft
Eric Provencher, Unity Technologies
Bryan Dube, Unity Technologies
Peter Kuhn, Unity Technologies
Tanya Li, Unity Technologies
Jakob Bornecrantz, Collabora
Jonathan Wright, Meta Platforms
Federico Schliemann, Meta Platforms
Andreas Loeve Selvik, Meta Platforms
Nathan Nuber, Valve
Joe Ludwig, Valve
Rune Berg, Valve
Adam Harwood, Ultraleap
Robert Blenkinsopp, Ultraleap
Paulo Gomes, Samsung Electronics
Ron Bessems, Magic Leap
Bastiaan Olij, Godot Engine
John Kearney, Meta Platforms
12.35.1. Overview
This extension defines four commonly used action poses for all user hand interaction profiles including both hand tracking devices and motion controller devices.
This extension also introduces a new interaction profile specifically designed for hand tracking devices to input through the OpenXR action system. However, for runtimes with controller inputs, the runtime should also provide this interaction profile through action mappings from the controller inputs, so that an application whose suggested action bindings depend solely on this hand interaction profile is usable on such runtimes as well.
12.35.2. Action poses for hand interactions
The following four action poses (i.e. "pinch," "poke," "aim," and "grip") enable a hand and finger interaction model, whether the tracking inputs are provided by a hand tracking device or a motion controller device.
The runtime must support all of the following action subpaths on all interaction profiles that are valid for the user paths of /user/hand/left and /user/hand/right, including those interaction profiles enabled through extensions.
- …/input/aim/pose
- …/input/grip/pose
- …/input/pinch_ext/pose
- …/input/poke_ext/pose
Aim pose
The …/input/aim/pose is designed for interacting with objects out of arm’s reach. For example, using a virtual laser pointer to aim at a virtual button on the wall is an interaction suited to the "aim" pose.
This is the same "aim" pose defined in Standard pose identifiers. Every tracked controller profile already supports this pose.
Position
The position of an "aim" pose is typically in front of the user’s hand and moves together with the corresponding hand, so that the user is able to easily see the aiming ray cast to the target in the world and adjust for aim.
Orientation
The orientation of an "aim" pose is typically stabilized so that it is suitable to render an aiming ray emerging from the user’s hand pointing into the world.
The -Z direction is the forward direction of the aiming gesture, that is, where the aiming ray is pointing at.
The +Y direction is a runtime defined direction based on the hand tracking device or ergonomics of the controller in the user’s hand. It is typically pointing up in the world when the user is performing the aiming gesture naturally forward with a hand or controller in front of the user body.
The +X direction is orthogonal to +Y and +Z using the right-hand rule.
When targeting an object out of arm’s reach, the runtime may optimize the "aim" pose stability for pointing at a target, therefore the rotation of the "aim" pose may account for forearm or shoulder motion as well as hand rotation. Hence, the "aim" pose may not always rigidly attach to the user’s hand rotation. If the application desires to rotate the targeted remote object in place, it should use the rotation of the "grip" pose instead of "aim" pose, as if the user is remotely holding the object and rotating it.
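These axis conventions mean an aiming ray can be recovered by rotating the pose's local -Z basis vector by the pose orientation. The following sketch illustrates this with stand-in `Vec3`/`Quat` types that mirror XrVector3f and XrQuaternionf (x, y, z, w); the helper names are illustrative, not part of the API:

```cpp
#include <cassert>
#include <cmath>

// Stand-ins mirroring XrVector3f / XrQuaternionf (x, y, z, w).
struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; };

// Rotate a vector by a unit quaternion: v' = q * v * q^-1.
static Vec3 Rotate(const Quat &q, const Vec3 &v) {
    // t = 2 * cross(q.xyz, v)
    const Vec3 t{2.0f * (q.y * v.z - q.z * v.y),
                 2.0f * (q.z * v.x - q.x * v.z),
                 2.0f * (q.x * v.y - q.y * v.x)};
    // v' = v + w * t + cross(q.xyz, t)
    return Vec3{v.x + q.w * t.x + (q.y * t.z - q.z * t.y),
                v.y + q.w * t.y + (q.z * t.x - q.x * t.z),
                v.z + q.w * t.z + (q.x * t.y - q.y * t.x)};
}

// The aim ray points along the pose's local -Z axis.
static Vec3 AimRayDirection(const Quat &orientation) {
    return Rotate(orientation, Vec3{0.0f, 0.0f, -1.0f});
}
```

With the identity orientation the ray is (0, 0, -1); a 90-degree rotation about +Y turns it to (-1, 0, 0), consistent with the right-handed convention above.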
Grip pose
The …/input/grip/pose is designed for holding an object with a full hand grip gesture, for example, grasping and pushing a door’s handle or holding and swinging a sword.
This is the same "grip" pose defined in Standard pose identifiers. Every tracked controller profile already supports this pose.
The runtime should optimize the "grip" pose orientation so that it stabilizes large virtual objects held in the user’s hand.
Position
The position of the "grip" pose is at the centroid of the user’s palm when the user makes a fist or holds a tube-like object in the hand.
Orientation
The orientation of the "grip" pose may be used to render a virtual object held in the hand, for example, holding the grip of a virtual sword.
The Z axis of the grip pose goes through the center of the user’s curled fingers when the user makes a fist or holds a controller, and the -Z direction (forward) goes from the little finger to the index finger.
When the user completely opens their hand to form a flat 5-finger pose and the palms face each other, the ray that is normal to the user’s palms defines the X axis. The +X direction points away from the palm of the left hand and into the palm of the right hand. That is to say, in the described pose, the +X direction points to the user’s right for both hands. To further illustrate: if the user is holding a stick by making a fist with each hand in front of the body and pointing the stick up, the +X direction points to the user’s right for both hands.
The +Y direction is orthogonal to +Z and +X using the right-hand rule.
Pinch pose
The …/input/pinch_ext/pose is designed for interacting with a small object within arm’s reach using a finger and thumb with a "pinch" gesture. For example, turning a key to open a lock or moving the knob on a slider control are interactions suited to the "pinch" pose.
The runtime should stabilize the "pinch" pose while the user is performing the "pinch" gesture.
Position
When the input is provided by a hand tracking device, the position of the "pinch" pose is typically where the index and thumb fingertips will touch each other for a "pinch" gesture.
The runtime may provide the "pinch" pose using any finger based on the current user’s preference for accessibility support. An application typically designs the "pinch" pose interaction assuming the "pinch" is performed using the index finger and thumb.
When the input is provided by a motion controller device, the position of the "pinch" pose is typically based on a fixed offset from the grip pose in front of the controller, where the user can naturally interact with a small object. The runtime should avoid obstructing the "pinch" pose with the physical profile of the motion controller.
Orientation
The "pinch" pose orientation must rotate together with the hand rotation.
The "pinch" pose’s orientation may be used to render a virtual object being held by a "pinch" gesture, for example, holding a key as illustrated in the picture above.
If this virtual key lies within a plane as illustrated in the picture above, the Y and Z axes of the "pinch" pose are within this plane.
The +Z axis is the backward direction of the "pinch" pose, typically the direction from the "pinch" position pointing to the mid point of thumb and finger proximal joints.
When the user puts both hands in front of the body at the same height, palms facing each other and fingers pointing forward, then performs a "pinch" gesture with both hands, the +Y direction for both hands should be roughly pointing up.
The X direction follows the right-hand rule using the Z and Y axes.
If the input is provided by a motion controller device, the orientation of the "pinch" pose is typically based on a fixed-rotation offset from the "grip" pose orientation that roughly follows the above definition when the user is holding the controller naturally.
Poke pose
The …/input/poke_ext/pose is designed for interactions using a fingertip to touch and push a small object. For example, pressing a push button with a fingertip, swiping to scroll a browser view, or typing on a virtual keyboard are interactions suited to the "poke" pose.
The application may use the "poke" pose as a point to interact with virtual objects, and this pose is typically enough for simple interactions.
The application may also use a volumetric representation of a "poke" gesture using a sphere combined with the "poke" pose. The center of such a sphere is located the distance of one radius in the +Z direction of the "poke" pose, such that the "poke" pose falls on the surface of the sphere and the sphere models the shape of the fingertip.
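Under these conventions, the sphere center can be computed by offsetting the "poke" pose position one radius along the pose's rotated +Z (backward) axis. A minimal sketch with stand-in types mirroring XrPosef; the helper names are illustrative, not part of the API:

```cpp
#include <cassert>
#include <cmath>

// Stand-ins mirroring XrVector3f / XrQuaternionf / XrPosef.
struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; };
struct Pose { Quat orientation; Vec3 position; };

// Rotate a vector by a unit quaternion: v' = q * v * q^-1.
static Vec3 Rotate(const Quat &q, const Vec3 &v) {
    const Vec3 t{2.0f * (q.y * v.z - q.z * v.y),
                 2.0f * (q.z * v.x - q.x * v.z),
                 2.0f * (q.x * v.y - q.y * v.x)};
    return Vec3{v.x + q.w * t.x + (q.y * t.z - q.z * t.y),
                v.y + q.w * t.y + (q.z * t.x - q.x * t.z),
                v.z + q.w * t.z + (q.x * t.y - q.y * t.x)};
}

// Center of the fingertip sphere: one radius behind the "poke" pose along
// its local +Z axis, so the pose itself lies on the sphere surface.
static Vec3 PokeSphereCenter(const Pose &pokePose, float radius) {
    const Vec3 backward = Rotate(pokePose.orientation, Vec3{0.0f, 0.0f, 1.0f});
    return Vec3{pokePose.position.x + radius * backward.x,
                pokePose.position.y + radius * backward.y,
                pokePose.position.z + radius * backward.z};
}
```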
Position
When input is provided by a hand tracking device, the position of the "poke" pose is at the surface of the extended index fingertip. The runtime may provide the "poke" pose using other fingers for accessibility support.
When input is provided by a motion controller, the position of the "poke" pose is typically based on a fixed offset from the "grip" pose in front of the controller, where touching and pushing a small object feels natural using the controller. The runtime should avoid obstructing the "poke" pose with the physical profile of the motion controller.
Orientation
The +Y direction of the "poke" pose is the up direction in the world when the user is extending the index finger forward with palm facing down. When using a motion controller, +Y matches the up direction in the world when the user extends the index finger forward while holding the controller with palm facing down.
The +Z direction points from the fingertip towards the knuckle and parallel to the index finger distal bone, i.e. backwards when the user is holding a controller naturally in front of the body and pointing index finger forward.
The +X direction is orthogonal to +Y and +Z using the right-hand rule.
The "poke" pose must rotate together with the tip of the finger or the controller’s "grip" pose.
12.35.3. The interaction profile for hand tracking devices
The hand interaction profile is designed for runtimes which provide hand inputs using hand tracking devices instead of controllers with triggers or buttons. This allows hand tracking devices to provide commonly used gestures and action poses to the OpenXR action system.
In addition to hand tracking devices, runtimes with controller inputs should also implement this interaction profile through action bindings, so that an application whose suggested action bindings depend solely on this hand interaction profile is usable on such runtimes as well.
Interaction profile path:
- /interaction_profiles/ext/hand_interaction_ext
Valid for top level user path:
- /user/hand/left
- /user/hand/right
Supported component paths:
- …/input/aim/pose
- …/input/grip/pose
- …/input/pinch_ext/pose
- …/input/poke_ext/pose
- …/input/pinch_ext/value
- …/input/pinch_ext/ready_ext
- …/input/aim_activate_ext/value
- …/input/aim_activate_ext/ready_ext
- …/input/grasp_ext/value
- …/input/grasp_ext/ready_ext
This interaction profile supports the above four action poses, as well as the following three groups of action inputs.
Pinch action
This interaction profile supports …/input/pinch_ext/value and …/input/pinch_ext/ready_ext actions.
The …/input/pinch_ext/value is a 1D analog input component indicating the extent which the user is bringing their finger and thumb together to perform a "pinch" gesture.
The …/input/pinch_ext/value can be used as either a boolean or
float action type, where the value XR_TRUE or 1.0f represents that the
finger and thumb are touching each other.
The …/input/pinch_ext/value must be at value 0.0f or
XR_FALSE when the hand is in a natural and relaxed open state without the
user making any extra effort.
The …/input/pinch_ext/value should vary linearly with the distance between the finger and thumb tips while that distance is within the range that moves the "pinch" value between 0 and 1.
The …/input/pinch_ext/ready_ext is a boolean input, where the
value XR_TRUE indicates that the fingers used to perform the "pinch"
gesture are properly tracked by the hand tracking device and the hand shape
is observed to be ready to perform or is performing a "pinch" gesture.
The …/input/pinch_ext/value must be 0.0f or XR_FALSE when
the …/input/pinch_ext/ready_ext is XR_FALSE.
The runtime may drive the input of the "pinch" gesture using any finger with the thumb to support accessibility.
Aim activate action
This interaction profile supports …/input/aim_activate_ext/value and …/input/aim_activate_ext/ready_ext actions.
The …/input/aim_activate_ext/value is a 1D analog input component indicating that the user activated the action on the target that the user is pointing at with the aim pose.
The "aim_activate" gesture is runtime defined, and it should be chosen so that the "aim" pose tracking is stable and usable for pointing at a distant target while the gesture is being performed.
The …/input/aim_activate_ext/value can be used as either a
boolean or float action type, where the value XR_TRUE or 1.0f represents
that the aimed-at target is being fully interacted with.
The …/input/aim_activate_ext/ready_ext is a boolean input, where
the value XR_TRUE indicates that the fingers to perform the "aim_activate"
gesture are properly tracked by the hand tracking device and the hand shape
is observed to be ready to perform or is performing an "aim_activate"
gesture.
The …/input/aim_activate_ext/value must be 0.0f or XR_FALSE
when the …/input/aim_activate_ext/ready_ext is XR_FALSE.
Grasp action
This interaction profile supports …/input/grasp_ext/value action.
The …/input/grasp_ext/value is a 1D analog input component indicating that the user is making a fist.
The …/input/grasp_ext/value can be used as either a boolean or
float action type, where the value XR_TRUE or 1.0f represents that the
fist is tightly closed.
The …/input/grasp_ext/value must be at value 0.0f or
XR_FALSE when the hand is in a natural and relaxed open state without the
user making any extra effort.
The …/input/grasp_ext/ready_ext is a boolean input, where the
value XR_TRUE indicates that the hand performing the grasp action is
properly tracked by the hand tracking device and it is observed to be ready
to perform or is performing the grasp action.
The …/input/grasp_ext/value must be 0.0f or XR_FALSE when
the …/input/grasp_ext/ready_ext is XR_FALSE.
Hand interaction gestures overlap
The values of the above "pinch", "grasp", and "aim_activate" input actions may not be mutually exclusive when the input is provided by a hand tracking device. The application should not assume these actions are distinctively activated as action inputs provided by buttons or triggers on a controller. The application should suggest action bindings considering the intent of the action and their paired action pose.
Using hand interaction profile with controllers
The runtimes with controller inputs should support the /interaction_profiles/ext/hand_interaction_ext profile using input mapping, so that applications can solely rely on the /interaction_profiles/ext/hand_interaction_ext profile to build XR experiences.
If the application desires to further customize the action poses with more flexible use of controller interaction profiles, the application can also provide action binding suggestions of controller profile using specific buttons or triggers to work together with the commonly used four action poses.
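For example, an application can suggest bindings against this profile alone, in the style of the earlier samples. The action set and actions below are assumed to have been created previously, and only a pair of pinch bindings for the right hand is shown:

```cpp
XrInstance instance;       // previously initialized
XrAction pinchPoseAction;  // previously created XR_ACTION_TYPE_POSE_INPUT action
XrAction pinchValueAction; // previously created XR_ACTION_TYPE_FLOAT_INPUT action

XrPath profilePath, pinchPosePath, pinchValuePath;
CHK_XR(xrStringToPath(instance, "/interaction_profiles/ext/hand_interaction_ext",
                      &profilePath));
CHK_XR(xrStringToPath(instance, "/user/hand/right/input/pinch_ext/pose",
                      &pinchPosePath));
CHK_XR(xrStringToPath(instance, "/user/hand/right/input/pinch_ext/value",
                      &pinchValuePath));

XrActionSuggestedBinding bindings[2];
bindings[0] = {pinchPoseAction, pinchPosePath};
bindings[1] = {pinchValueAction, pinchValuePath};

XrInteractionProfileSuggestedBinding suggested{
    XR_TYPE_INTERACTION_PROFILE_SUGGESTED_BINDING};
suggested.interactionProfile = profilePath;
suggested.suggestedBindings = bindings;
suggested.countSuggestedBindings = 2;
CHK_XR(xrSuggestInteractionProfileBindings(instance, &suggested));
```

A runtime that maps controller inputs to this profile can then satisfy these suggestions whether the user is holding a controller or using bare hands.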
Typical usages of action poses with hand or controller profiles
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2021-08-06 (Yin Li)
  - Initial extension description
- Revision 2, 2025-08-20 (John Kearney, Meta)
  - Explicitly list support for grip_surface in extension definition.
12.36. XR_EXT_hand_joints_motion_range
- Name String: XR_EXT_hand_joints_motion_range
- Extension Type: Instance extension
- Registered Extension Number: 81
- Revision: 1
- Ratification Status: Ratified
- Extension and Version Dependencies
- Last Modified Date: 2021-04-15
- IP Status: No known IP claims.
- Contributors:
Joe van den Heuvel, Valve
Rune Berg, Valve
Joe Ludwig, Valve
Jakob Bornecrantz, Collabora
Overview
This extension augments the XR_EXT_hand_tracking extension to enable
applications to request that the XrHandJointLocationsEXT returned by
xrLocateHandJointsEXT should return hand joint locations conforming to
a range of motion specified by the application.
The application must enable the XR_EXT_hand_tracking extension in
order to use this extension.
New Object Types
New Flag Types
New Enum Constants
New Enums
The XrHandJointsMotionRangeEXT describes the hand joints' range of motion returned by xrLocateHandJointsEXT.
Runtimes must support both
XR_HAND_JOINTS_MOTION_RANGE_CONFORMING_TO_CONTROLLER_EXT and
XR_HAND_JOINTS_MOTION_RANGE_UNOBSTRUCTED_EXT for each controller
interaction profile that supports hand joint data.
// Provided by XR_EXT_hand_joints_motion_range
typedef enum XrHandJointsMotionRangeEXT {
XR_HAND_JOINTS_MOTION_RANGE_UNOBSTRUCTED_EXT = 1,
XR_HAND_JOINTS_MOTION_RANGE_CONFORMING_TO_CONTROLLER_EXT = 2,
XR_HAND_JOINTS_MOTION_RANGE_MAX_ENUM_EXT = 0x7FFFFFFF
} XrHandJointsMotionRangeEXT;
New Structures
The XrHandJointsMotionRangeInfoEXT is a structure that an application
can chain in XrHandJointsLocateInfoEXT to request the joint motion
range specified by the handJointsMotionRange field.
Runtimes must return the appropriate joint locations depending on the
handJointsMotionRange field and the currently active interaction
profile.
// Provided by XR_EXT_hand_joints_motion_range
typedef struct XrHandJointsMotionRangeInfoEXT {
XrStructureType type;
const void* next;
XrHandJointsMotionRangeEXT handJointsMotionRange;
} XrHandJointsMotionRangeInfoEXT;
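For instance, to request joint locations conforming to a held controller, the structure can be chained ahead of a normal xrLocateHandJointsEXT call; variables are assumed previously initialized, as in the other samples:

```cpp
XrSpace worldSpace; // previously initialized
XrTime time;        // e.g. predictedDisplayTime from xrWaitFrame

XrHandJointsMotionRangeInfoEXT motionRangeInfo{
    XR_TYPE_HAND_JOINTS_MOTION_RANGE_INFO_EXT};
motionRangeInfo.handJointsMotionRange =
    XR_HAND_JOINTS_MOTION_RANGE_CONFORMING_TO_CONTROLLER_EXT;

XrHandJointsLocateInfoEXT locateInfo{XR_TYPE_HAND_JOINTS_LOCATE_INFO_EXT};
locateInfo.next = &motionRangeInfo; // request the conforming motion range
locateInfo.baseSpace = worldSpace;
locateInfo.time = time;
// Pass locateInfo to xrLocateHandJointsEXT as usual.
```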
New Functions
Issues
Version History
- Revision 1, 2021-04-15 (Rune Berg)
  - Initial extension description
12.37. XR_EXT_hand_tracking
- Name String: XR_EXT_hand_tracking
- Extension Type: Instance extension
- Registered Extension Number: 52
- Revision: 4
- Ratification Status: Ratified
- Extension and Version Dependencies
- Last Modified Date: 2021-04-15
- IP Status: No known IP claims.
- Contributors:
Yin Li, Microsoft
Lachlan Ford, Microsoft
Alex Turner, Microsoft
Bryce Hutchings, Microsoft
Cass Everitt, Oculus
Blake Taylor, Magic Leap
Joe van den Heuvel, Valve
Rune Berg, Valve
Valerie Benson, Ultraleap
Rylie Pavlik, Collabora
12.37.1. Overview
This extension enables applications to locate the individual joints of hand tracking inputs. It enables applications to render hands in XR experiences and interact with virtual objects using hand joints.
12.37.2. Inspect system capability
An application can inspect whether the system is capable of hand tracking input by extending the XrSystemProperties with XrSystemHandTrackingPropertiesEXT structure when calling xrGetSystemProperties.
// Provided by XR_EXT_hand_tracking
typedef struct XrSystemHandTrackingPropertiesEXT {
XrStructureType type;
void* next;
XrBool32 supportsHandTracking;
} XrSystemHandTrackingPropertiesEXT;
If a runtime returns XR_FALSE for supportsHandTracking, the
runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateHandTrackerEXT.
12.37.3. Create a hand tracker handle
The XrHandTrackerEXT handle represents the resources for hand tracking of the specific hand.
XR_DEFINE_HANDLE(XrHandTrackerEXT)
An application creates separate XrHandTrackerEXT handles for left and right hands. This handle can be used to locate hand joints using xrLocateHandJointsEXT function.
A hand tracker provides joint locations with an unobstructed range of motion of an empty human hand.
Note
This behavior can be modified by the XR_EXT_hand_joints_motion_range extension.
An application can create an XrHandTrackerEXT handle using xrCreateHandTrackerEXT function.
// Provided by XR_EXT_hand_tracking
XrResult xrCreateHandTrackerEXT(
XrSession session,
const XrHandTrackerCreateInfoEXT* createInfo,
XrHandTrackerEXT* handTracker);
If the system does not support hand tracking, runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from xrCreateHandTrackerEXT.
In this case, the runtime must return XR_FALSE for
XrSystemHandTrackingPropertiesEXT::supportsHandTracking when the
function xrGetSystemProperties is called, so that the application can
avoid creating a hand tracker.
The XrHandTrackerCreateInfoEXT structure describes the information to create an XrHandTrackerEXT handle.
// Provided by XR_EXT_hand_tracking
typedef struct XrHandTrackerCreateInfoEXT {
XrStructureType type;
const void* next;
XrHandEXT hand;
XrHandJointSetEXT handJointSet;
} XrHandTrackerCreateInfoEXT;
The XrHandEXT describes which hand the XrHandTrackerEXT is tracking.
// Provided by XR_EXT_hand_tracking
typedef enum XrHandEXT {
XR_HAND_LEFT_EXT = 1,
XR_HAND_RIGHT_EXT = 2,
XR_HAND_MAX_ENUM_EXT = 0x7FFFFFFF
} XrHandEXT;
The XrHandJointSetEXT enum describes the set of hand joints to track when creating an XrHandTrackerEXT.
// Provided by XR_EXT_hand_tracking
typedef enum XrHandJointSetEXT {
XR_HAND_JOINT_SET_DEFAULT_EXT = 0,
// Provided by XR_ULTRALEAP_hand_tracking_forearm
XR_HAND_JOINT_SET_HAND_WITH_FOREARM_ULTRALEAP = 1000149000,
XR_HAND_JOINT_SET_MAX_ENUM_EXT = 0x7FFFFFFF
} XrHandJointSetEXT;
xrDestroyHandTrackerEXT function releases the handTracker and
the underlying resources when finished with hand tracking experiences.
// Provided by XR_EXT_hand_tracking
XrResult xrDestroyHandTrackerEXT(
XrHandTrackerEXT handTracker);
12.37.4. Locate hand joints
The xrLocateHandJointsEXT function locates an array of hand joints to a base space at given time.
// Provided by XR_EXT_hand_tracking
XrResult xrLocateHandJointsEXT(
XrHandTrackerEXT handTracker,
const XrHandJointsLocateInfoEXT* locateInfo,
XrHandJointLocationsEXT* locations);
The XrHandJointsLocateInfoEXT structure describes the information to locate hand joints.
// Provided by XR_EXT_hand_tracking
typedef struct XrHandJointsLocateInfoEXT {
XrStructureType type;
const void* next;
XrSpace baseSpace;
XrTime time;
} XrHandJointsLocateInfoEXT;
XrHandJointLocationsEXT structure returns the state of the hand joint locations.
// Provided by XR_EXT_hand_tracking
typedef struct XrHandJointLocationsEXT {
XrStructureType type;
void* next;
XrBool32 isActive;
uint32_t jointCount;
XrHandJointLocationEXT* jointLocations;
} XrHandJointLocationsEXT;
The application must allocate the memory for the output array
jointLocations that can contain at least jointCount of
XrHandJointLocationEXT.
The application must set jointCount as described by the
XrHandJointSetEXT when creating the XrHandTrackerEXT otherwise
the runtime must return XR_ERROR_VALIDATION_FAILURE.
The runtime must return jointLocations representing the range of
motion of a human hand, without any obstructions.
Input systems that obstruct the movement of the user’s hand (e.g.: a held
controller preventing the user from making a fist) or that have only limited
ability to track finger positions must use the information available to
them to emulate an unobstructed range of motion.
The runtime must update the jointLocations array ordered so that the
application can index elements using the corresponding hand joint enum (e.g.
XrHandJointEXT) as described by XrHandJointSetEXT when creating
the XrHandTrackerEXT.
For example, when the XrHandTrackerEXT is created with
XR_HAND_JOINT_SET_DEFAULT_EXT, the application must set the
jointCount to XR_HAND_JOINT_COUNT_EXT, and the runtime must
fill the jointLocations array ordered so that it may be indexed by the
XrHandJointEXT enum.
If the returned isActive is true, the runtime must return all joint
locations with both XR_SPACE_LOCATION_POSITION_VALID_BIT and
XR_SPACE_LOCATION_ORIENTATION_VALID_BIT set.
Although, in this case, some joint space locations may be untracked (i.e.
XR_SPACE_LOCATION_POSITION_TRACKED_BIT or
XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT is unset).
If the returned isActive is false, it indicates the hand tracker did
not detect the hand input or the application lost input focus.
In this case, the runtime must return all jointLocations with neither
XR_SPACE_LOCATION_POSITION_VALID_BIT nor
XR_SPACE_LOCATION_ORIENTATION_VALID_BIT set.
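The valid/tracked distinction above can be checked with simple bit tests. The following sketch uses the core XrSpaceLocationFlags bit values; the `JointState` enum and helper are illustrative names, not part of the API:

```cpp
#include <cassert>
#include <cstdint>

// Core OpenXR space location flag bit values.
constexpr uint64_t ORIENTATION_VALID   = 0x00000001; // XR_SPACE_LOCATION_ORIENTATION_VALID_BIT
constexpr uint64_t POSITION_VALID      = 0x00000002; // XR_SPACE_LOCATION_POSITION_VALID_BIT
constexpr uint64_t ORIENTATION_TRACKED = 0x00000004; // XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT
constexpr uint64_t POSITION_TRACKED    = 0x00000008; // XR_SPACE_LOCATION_POSITION_TRACKED_BIT

enum class JointState { Invalid, Inferred, Tracked };

// Classify a joint: when isActive is true every joint is valid, but a valid
// joint may still be inferred (emulated) rather than directly tracked.
static JointState ClassifyJoint(uint64_t locationFlags) {
    const uint64_t valid = ORIENTATION_VALID | POSITION_VALID;
    const uint64_t tracked = ORIENTATION_TRACKED | POSITION_TRACKED;
    if ((locationFlags & valid) != valid) return JointState::Invalid;
    if ((locationFlags & tracked) != tracked) return JointState::Inferred;
    return JointState::Tracked;
}
```

An application might, for example, render inferred joints with reduced opacity while treating them as usable for interaction.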
XrHandJointLocationEXT structure describes the position, orientation, and radius of a hand joint.
// Provided by XR_EXT_hand_tracking
typedef struct XrHandJointLocationEXT {
XrSpaceLocationFlags locationFlags;
XrPosef pose;
float radius;
} XrHandJointLocationEXT;
If the returned locationFlags has
XR_SPACE_LOCATION_POSITION_VALID_BIT set, the returned radius must be
a positive value.
If the returned locationFlags has
XR_SPACE_LOCATION_POSITION_VALID_BIT unset, the returned radius value
is undefined and should be avoided.
The application can chain an XrHandJointVelocitiesEXT structure to the
next pointer of XrHandJointLocationsEXT when calling
xrLocateHandJointsEXT to retrieve the hand joint velocities.
// Provided by XR_EXT_hand_tracking
typedef struct XrHandJointVelocitiesEXT {
XrStructureType type;
void* next;
uint32_t jointCount;
XrHandJointVelocityEXT* jointVelocities;
} XrHandJointVelocitiesEXT;
The application must allocate the memory for the output array
jointVelocities that can contain at least jointCount of
XrHandJointVelocityEXT.
The application must input jointCount as described by the
XrHandJointSetEXT when creating the XrHandTrackerEXT.
Otherwise, the runtime must return XR_ERROR_VALIDATION_FAILURE.
The runtime must update the jointVelocities array in the order so
that the application can index elements using the corresponding hand joint
enum (e.g. XrHandJointEXT) as described by the XrHandJointSetEXT
when creating the XrHandTrackerEXT.
For example, when the XrHandTrackerEXT is created with
XR_HAND_JOINT_SET_DEFAULT_EXT, the application must set the
jointCount to XR_HAND_JOINT_COUNT_EXT, and the returned
jointVelocities array must be ordered so that it may be indexed by the
XrHandJointEXT enum.
If the returned XrHandJointLocationsEXT::isActive is false, it
indicates the hand tracker did not detect a hand input or the application
lost input focus.
In this case, the runtime must return all jointVelocities with
neither XR_SPACE_VELOCITY_LINEAR_VALID_BIT nor
XR_SPACE_VELOCITY_ANGULAR_VALID_BIT set.
If an XrHandJointVelocitiesEXT structure is chained to
XrHandJointLocationsEXT::next, the returned
XrHandJointLocationsEXT::isActive is true, and the velocity is
observed or can be calculated by the runtime, the runtime must fill in the
linear velocity of each hand joint within the reference frame of
XrHandJointsLocateInfoEXT::baseSpace and set the
XR_SPACE_VELOCITY_LINEAR_VALID_BIT.
Similarly, if an XrHandJointVelocitiesEXT structure is chained to
XrHandJointLocationsEXT::next, the returned
XrHandJointLocationsEXT::isActive is true, and the angular
velocity is observed or can be calculated by the runtime, the runtime
must fill in the angular velocity of each joint within the reference frame
of XrHandJointsLocateInfoEXT::baseSpace and set the
XR_SPACE_VELOCITY_ANGULAR_VALID_BIT.
XrHandJointVelocityEXT structure describes the linear and angular velocity of a hand joint.
// Provided by XR_EXT_hand_tracking
typedef struct XrHandJointVelocityEXT {
XrSpaceVelocityFlags velocityFlags;
XrVector3f linearVelocity;
XrVector3f angularVelocity;
} XrHandJointVelocityEXT;
12.37.5. Example code for locating hand joints
The following example code demonstrates how to locate all hand joints relative to a world space.
XrInstance instance;  // previously initialized
XrSystemId systemId;  // previously initialized
XrSession session;    // previously initialized
XrSpace worldSpace;   // previously initialized, e.g. from
                      // XR_REFERENCE_SPACE_TYPE_LOCAL

// Inspect hand tracking system properties
XrSystemHandTrackingPropertiesEXT handTrackingSystemProperties{
    XR_TYPE_SYSTEM_HAND_TRACKING_PROPERTIES_EXT};
XrSystemProperties systemProperties{XR_TYPE_SYSTEM_PROPERTIES,
                                    &handTrackingSystemProperties};
CHK_XR(xrGetSystemProperties(instance, systemId, &systemProperties));
if (!handTrackingSystemProperties.supportsHandTracking) {
    // The system does not support hand tracking
    return;
}

// Get function pointer for xrCreateHandTrackerEXT
PFN_xrCreateHandTrackerEXT pfnCreateHandTrackerEXT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrCreateHandTrackerEXT",
                             reinterpret_cast<PFN_xrVoidFunction*>(
                                 &pfnCreateHandTrackerEXT)));

// Create a hand tracker for left hand that tracks default set of hand joints.
XrHandTrackerEXT leftHandTracker{};
{
    XrHandTrackerCreateInfoEXT createInfo{XR_TYPE_HAND_TRACKER_CREATE_INFO_EXT};
    createInfo.hand = XR_HAND_LEFT_EXT;
    createInfo.handJointSet = XR_HAND_JOINT_SET_DEFAULT_EXT;
    CHK_XR(pfnCreateHandTrackerEXT(session, &createInfo, &leftHandTracker));
}

// Allocate buffers to receive joint location and velocity data before frame
// loop starts
XrHandJointLocationEXT jointLocations[XR_HAND_JOINT_COUNT_EXT];
XrHandJointVelocityEXT jointVelocities[XR_HAND_JOINT_COUNT_EXT];

XrHandJointVelocitiesEXT velocities{XR_TYPE_HAND_JOINT_VELOCITIES_EXT};
velocities.jointCount = XR_HAND_JOINT_COUNT_EXT;
velocities.jointVelocities = jointVelocities;

XrHandJointLocationsEXT locations{XR_TYPE_HAND_JOINT_LOCATIONS_EXT};
locations.next = &velocities;
locations.jointCount = XR_HAND_JOINT_COUNT_EXT;
locations.jointLocations = jointLocations;

// Get function pointer for xrLocateHandJointsEXT
PFN_xrLocateHandJointsEXT pfnLocateHandJointsEXT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrLocateHandJointsEXT",
                             reinterpret_cast<PFN_xrVoidFunction*>(
                                 &pfnLocateHandJointsEXT)));

while (1) {
    // ...
    // For every frame in frame loop
    // ...
    XrFrameState frameState;  // previously returned from xrWaitFrame
    const XrTime time = frameState.predictedDisplayTime;

    XrHandJointsLocateInfoEXT locateInfo{XR_TYPE_HAND_JOINTS_LOCATE_INFO_EXT};
    locateInfo.baseSpace = worldSpace;
    locateInfo.time = time;
    CHK_XR(pfnLocateHandJointsEXT(leftHandTracker, &locateInfo, &locations));

    if (locations.isActive) {
        // The returned joint location array can be directly indexed with
        // XrHandJointEXT enum.
        const XrPosef& indexTipInWorld =
            jointLocations[XR_HAND_JOINT_INDEX_TIP_EXT].pose;
        const XrPosef& thumbTipInWorld =
            jointLocations[XR_HAND_JOINT_THUMB_TIP_EXT].pose;

        // using the returned radius and velocity of index finger tip.
        const float indexTipRadius =
            jointLocations[XR_HAND_JOINT_INDEX_TIP_EXT].radius;
        const XrHandJointVelocityEXT& indexTipVelocity =
            jointVelocities[XR_HAND_JOINT_INDEX_TIP_EXT];
    }
}
12.37.6. Conventions of hand joints
This extension defines 26 joints for hand tracking: 4 joints for the thumb, 5 joints for each of the other four fingers, plus the wrist and the palm of the hand.
// Provided by XR_EXT_hand_tracking
typedef enum XrHandJointEXT {
    XR_HAND_JOINT_PALM_EXT = 0,
    XR_HAND_JOINT_WRIST_EXT = 1,
    XR_HAND_JOINT_THUMB_METACARPAL_EXT = 2,
    XR_HAND_JOINT_THUMB_PROXIMAL_EXT = 3,
    XR_HAND_JOINT_THUMB_DISTAL_EXT = 4,
    XR_HAND_JOINT_THUMB_TIP_EXT = 5,
    XR_HAND_JOINT_INDEX_METACARPAL_EXT = 6,
    XR_HAND_JOINT_INDEX_PROXIMAL_EXT = 7,
    XR_HAND_JOINT_INDEX_INTERMEDIATE_EXT = 8,
    XR_HAND_JOINT_INDEX_DISTAL_EXT = 9,
    XR_HAND_JOINT_INDEX_TIP_EXT = 10,
    XR_HAND_JOINT_MIDDLE_METACARPAL_EXT = 11,
    XR_HAND_JOINT_MIDDLE_PROXIMAL_EXT = 12,
    XR_HAND_JOINT_MIDDLE_INTERMEDIATE_EXT = 13,
    XR_HAND_JOINT_MIDDLE_DISTAL_EXT = 14,
    XR_HAND_JOINT_MIDDLE_TIP_EXT = 15,
    XR_HAND_JOINT_RING_METACARPAL_EXT = 16,
    XR_HAND_JOINT_RING_PROXIMAL_EXT = 17,
    XR_HAND_JOINT_RING_INTERMEDIATE_EXT = 18,
    XR_HAND_JOINT_RING_DISTAL_EXT = 19,
    XR_HAND_JOINT_RING_TIP_EXT = 20,
    XR_HAND_JOINT_LITTLE_METACARPAL_EXT = 21,
    XR_HAND_JOINT_LITTLE_PROXIMAL_EXT = 22,
    XR_HAND_JOINT_LITTLE_INTERMEDIATE_EXT = 23,
    XR_HAND_JOINT_LITTLE_DISTAL_EXT = 24,
    XR_HAND_JOINT_LITTLE_TIP_EXT = 25,
    XR_HAND_JOINT_MAX_ENUM_EXT = 0x7FFFFFFF
} XrHandJointEXT;
The finger joints, except the tips, are named after the corresponding bone at the further end of the bone from the finger tips. The joint’s orientation is defined at a fully opened hand pose facing down as in the above picture.
Note

Many applications and game engines use names to identify joints rather than using indices. If possible, applications should use the joint name part of the XrHandJointEXT enum plus a hand identifier to help prevent joint name clashes (e.g. Index_Metacarpal_L, Thumb_Tip_R). Using consistent names increases the portability of assets between applications and engines. Including the hand in the identifier prevents ambiguity when both hands are used in the same skeleton, such as when they are combined with additional joints to form a full body skeleton.
The backward (+Z) direction is parallel to the corresponding bone and points away from the finger tip. The up (+Y) direction is pointing out of the back of and perpendicular to the corresponding finger nail at the fully opened hand pose. The X direction is perpendicular to Y and Z and follows the right hand rule.
The wrist joint is located at the pivot point of the wrist which is location invariant when twisting hand without moving the forearm. The backward (+Z) direction is parallel to the line from wrist joint to middle finger metacarpal joint, and points away from the finger tips. The up (+Y) direction points out towards back of hand and perpendicular to the skin at wrist. The X direction is perpendicular to the Y and Z directions and follows the right hand rule.
The palm joint is located at the center of the middle finger’s metacarpal bone. The backward (+Z) direction is parallel to the middle finger’s metacarpal bone, and points away from the finger tips. The up (+Y) direction is perpendicular to palm surface and pointing towards the back of the hand. The X direction is perpendicular to the Y and Z directions and follows the right hand rule.
The radius of each joint is the distance from the joint to the skin in meters. The application can use a sphere at the joint location with joint radius for collision detection for interactions, such as pushing a virtual button using the index finger tip.
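A minimal sketch of the sphere-based contact test described above, using a stand-in `XrVector3f` in place of the `openxr.h` definition; the joint positions and radii would come from the XrHandJointLocationEXT array.

```cpp
#include <cassert>
#include <cmath>

// Stand-in for the OpenXR vector type (normally from <openxr/openxr.h>).
struct XrVector3f { float x, y, z; };

// Returns true when two joint spheres (center position plus joint radius)
// touch or overlap, e.g. an index finger tip pressing a virtual button.
bool JointSpheresTouch(const XrVector3f& a, float radiusA,
                       const XrVector3f& b, float radiusB) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    const float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    return dist <= radiusA + radiusB;
}
```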
For example, if the radius of the palm joint is r, the application can offset the palm joint location by {0, -r, 0} to reach the skin surface at the center of the palm, or by {0, r, 0} to reach the back surface of the hand.
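These offsets are expressed in the palm joint's own frame, so they are rotated by the palm pose's orientation before being added to its position in the base space. The following is a self-contained sketch with stand-in math types; a real application would use the `openxr.h` types and its own math library.

```cpp
#include <cassert>
#include <cmath>

// Stand-ins for OpenXR math types (normally from <openxr/openxr.h>).
struct XrVector3f { float x, y, z; };
struct XrQuaternionf { float x, y, z, w; };
struct XrPosef { XrQuaternionf orientation; XrVector3f position; };

// Rotate vector v by unit quaternion q (v' = q v q*).
XrVector3f Rotate(const XrQuaternionf& q, const XrVector3f& v) {
    // t = 2 * cross(q.xyz, v)
    const XrVector3f t{2.f * (q.y * v.z - q.z * v.y),
                       2.f * (q.z * v.x - q.x * v.z),
                       2.f * (q.x * v.y - q.y * v.x)};
    // v' = v + q.w * t + cross(q.xyz, t)
    return {v.x + q.w * t.x + (q.y * t.z - q.z * t.y),
            v.y + q.w * t.y + (q.z * t.x - q.x * t.z),
            v.z + q.w * t.z + (q.x * t.y - q.y * t.x)};
}

// Offset the palm joint pose by {0, -r, 0} in the joint's own frame to reach
// the palm-side skin surface ({0, +r, 0} would reach the back of the hand).
XrVector3f PalmSurfacePoint(const XrPosef& palmPose, float radius) {
    const XrVector3f local{0.f, -radius, 0.f};
    const XrVector3f rotated = Rotate(palmPose.orientation, local);
    return {palmPose.position.x + rotated.x,
            palmPose.position.y + rotated.y,
            palmPose.position.z + rotated.z};
}
```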
Note that the palm joint for the hand tracking is not the same as …/input/grip/pose when hand tracking is provided by controller tracking. A "grip" pose is located at the center of the controller handle when user is holding a controller, outside of the user’s hand. A "palm" pose is located at the center of middle finger metacarpal bone which is inside the user’s hand.
// Provided by XR_EXT_hand_tracking
#define XR_HAND_JOINT_COUNT_EXT 26
XR_HAND_JOINT_COUNT_EXT defines the number of hand joint enumerants defined in XrHandJointEXT.
New Object Types
New Flag Types
New Enum Constants
XrObjectType enumeration is extended with:

- XR_OBJECT_TYPE_HAND_TRACKER_EXT

XrStructureType enumeration is extended with:

- XR_TYPE_SYSTEM_HAND_TRACKING_PROPERTIES_EXT
- XR_TYPE_HAND_TRACKER_CREATE_INFO_EXT
- XR_TYPE_HAND_JOINTS_LOCATE_INFO_EXT
- XR_TYPE_HAND_JOINT_LOCATIONS_EXT
- XR_TYPE_HAND_JOINT_VELOCITIES_EXT
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2019-09-16 (Yin LI)
  - Initial extension description
- Revision 2, 2020-04-20 (Yin LI)
  - Replace hand joint spaces with a function to locate hand joints.
- Revision 3, 2021-04-13 (Rylie Pavlik, Rune Berg)
  - Fix example code to properly use xrGetInstanceProcAddr.
  - Add recommended bone names.
- Revision 4, 2021-04-15 (Rune Berg)
  - Clarify that use of this extension produces an unobstructed hand range of motion.
12.38. XR_EXT_hand_tracking_data_source
- Name String: XR_EXT_hand_tracking_data_source
- Extension Type: Instance extension
- Registered Extension Number: 429
- Revision: 1
- Ratification Status: Ratified
- Extension and Version Dependencies
- Last Modified Date: 2023-01-23
- IP Status: No known IP claims.
- Contributors:
  Jakob Bornecrantz, Collabora
  John Kearney, Meta
  Robert Memmott, Meta
  Andreas Selvik, Meta
  Yin Li, Microsoft
  Robert Blenkinsopp, Ultraleap
  Nathan Nuber, Valve
- Contacts: John Kearney, Meta
Overview
This extension augments the XR_EXT_hand_tracking extension.
Runtimes may support a variety of data sources for hand joint data for
XR_EXT_hand_tracking, and some runtimes and devices may use joint
data from multiple sources.
This extension allows an application and the runtime to communicate about
and make use of those data sources in a cooperative manner.
This extension allows the application to specify the data sources that it wants data from when creating a hand tracking handle, and allows the runtime to specify the currently active data source.
The application must enable the XR_EXT_hand_tracking extension in
order to use this extension.
The XrHandTrackingDataSourceEXT enum describes a hand tracking data source when creating an XrHandTrackerEXT handle.
// Provided by XR_EXT_hand_tracking_data_source
typedef enum XrHandTrackingDataSourceEXT {
    XR_HAND_TRACKING_DATA_SOURCE_UNOBSTRUCTED_EXT = 1,
    XR_HAND_TRACKING_DATA_SOURCE_CONTROLLER_EXT = 2,
    XR_HAND_TRACKING_DATA_SOURCE_MAX_ENUM_EXT = 0x7FFFFFFF
} XrHandTrackingDataSourceEXT;
The application can use XrHandTrackingDataSourceEXT with XrHandTrackingDataSourceInfoEXT when calling xrCreateHandTrackerEXT to tell the runtime which data sources the application supports for hand tracking inputs.
The application can use it with XrHandTrackingDataSourceStateEXT when calling xrLocateHandJointsEXT to inspect what data source the runtime used for the returned hand joint locations.
If the XR_EXT_hand_joints_motion_range extension is supported by the runtime and the data source is XR_HAND_TRACKING_DATA_SOURCE_CONTROLLER_EXT, then it is expected that the application will use that extension when retrieving hand joint poses.
The XrHandTrackingDataSourceInfoEXT structure is defined as:
// Provided by XR_EXT_hand_tracking_data_source
typedef struct XrHandTrackingDataSourceInfoEXT {
    XrStructureType                 type;
    const void*                     next;
    uint32_t                        requestedDataSourceCount;
    XrHandTrackingDataSourceEXT*    requestedDataSources;
} XrHandTrackingDataSourceInfoEXT;
The XrHandTrackingDataSourceInfoEXT is a structure that an application
can chain to XrHandTrackerCreateInfoEXT::next to specify the
hand tracking data sources that the application accepts.
Because the hand tracking device may change during a running session, the
runtime may return a valid XrHandTrackerEXT handle even if there is
no currently active hand tracking device, or the active device does not
satisfy any or all data sources requested by the application's call to
xrCreateHandTrackerEXT.
The runtime may instead return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateHandTrackerEXT, if for example the runtime believes it will
never be able to satisfy the request.
If any value in requestedDataSources is duplicated, the runtime must
return XR_ERROR_VALIDATION_FAILURE from the call to
xrCreateHandTrackerEXT.
If requestedDataSourceCount is 0, the runtime must return
XR_ERROR_VALIDATION_FAILURE from the call to
xrCreateHandTrackerEXT.
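The chaining described above can be sketched as follows. The struct and enum definitions here are abbreviated stand-ins with placeholder numeric values so the snippet is self-contained; a real application uses `<openxr/openxr.h>` (where XrHandTrackerCreateInfoEXT also has hand and handJointSet members) and passes the create info to xrCreateHandTrackerEXT.

```cpp
#include <cassert>
#include <cstdint>

// Stand-ins for illustration; real definitions and enum values come from
// <openxr/openxr.h>. The numeric values here are placeholders.
enum XrStructureType : int32_t {
    XR_TYPE_HAND_TRACKER_CREATE_INFO_EXT = 1,
    XR_TYPE_HAND_TRACKING_DATA_SOURCE_INFO_EXT = 2,
};
enum XrHandTrackingDataSourceEXT : int32_t {
    XR_HAND_TRACKING_DATA_SOURCE_UNOBSTRUCTED_EXT = 1,
    XR_HAND_TRACKING_DATA_SOURCE_CONTROLLER_EXT = 2,
};
struct XrHandTrackingDataSourceInfoEXT {
    XrStructureType type;
    const void* next;
    uint32_t requestedDataSourceCount;
    XrHandTrackingDataSourceEXT* requestedDataSources;
};
// Abbreviated: the real XrHandTrackerCreateInfoEXT also has hand and
// handJointSet members.
struct XrHandTrackerCreateInfoEXT {
    XrStructureType type;
    const void* next;
};

// Chain the data source info onto the create info; the caller supplies a
// non-empty, duplicate-free source list as the validation rules require.
XrHandTrackerCreateInfoEXT MakeCreateInfo(
        XrHandTrackingDataSourceInfoEXT& dataSourceInfo,
        XrHandTrackingDataSourceEXT* sources, uint32_t count) {
    dataSourceInfo = {XR_TYPE_HAND_TRACKING_DATA_SOURCE_INFO_EXT, nullptr,
                      count, sources};
    return {XR_TYPE_HAND_TRACKER_CREATE_INFO_EXT, &dataSourceInfo};
}
```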
The XrHandTrackingDataSourceStateEXT structure is defined as:
// Provided by XR_EXT_hand_tracking_data_source
typedef struct XrHandTrackingDataSourceStateEXT {
    XrStructureType                type;
    void*                          next;
    XrBool32                       isActive;
    XrHandTrackingDataSourceEXT    dataSource;
} XrHandTrackingDataSourceStateEXT;
XrHandTrackingDataSourceStateEXT is a structure that an application
can chain to XrHandJointLocationsEXT::next when calling
xrLocateHandJointsEXT to retrieve the data source of the currently
active hand tracking device.
When the returned isActive is XR_FALSE, it indicates the currently
active hand tracking device does not support any of the requested data
sources.
In these cases, the runtime must also return no valid tracking locations
for hand joints from this xrLocateHandJointsEXT function.
If the tracker was not created with XrHandTrackingDataSourceInfoEXT
chained to XrHandTrackerCreateInfoEXT::next, then the runtime
must return XR_ERROR_VALIDATION_FAILURE, if
XrHandTrackingDataSourceStateEXT is passed in the call to
xrLocateHandJointsEXT.
If there is an active hand tracking device that is one of the specified
XrHandTrackingDataSourceInfoEXT::requestedDataSources, the
runtime must set isActive to XR_TRUE.
When the runtime sets isActive to XR_TRUE, the runtime must set
dataSource to indicate the active data source.
The runtime must return a dataSource that is one of the
XrHandTrackingDataSourceInfoEXT::requestedDataSources specified when
creating the corresponding hand tracker.
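One way an application might act on the returned state is sketched below, with stand-in types (placeholder values); the HandVisual presentation policy is purely illustrative and not part of this extension.

```cpp
#include <cassert>
#include <cstdint>

// Stand-ins for illustration; real definitions come from <openxr/openxr.h>
// and the numeric values here are placeholders.
using XrBool32 = uint32_t;
enum XrHandTrackingDataSourceEXT : int32_t {
    XR_HAND_TRACKING_DATA_SOURCE_UNOBSTRUCTED_EXT = 1,
    XR_HAND_TRACKING_DATA_SOURCE_CONTROLLER_EXT = 2,
};

// Hypothetical application-side presentation choices.
enum class HandVisual { None, TrackedHand, HandOnController };

// After xrLocateHandJointsEXT returns, inspect the chained
// XrHandTrackingDataSourceStateEXT fields to pick a presentation.
HandVisual ChooseHandVisual(XrBool32 isActive,
                            XrHandTrackingDataSourceEXT dataSource) {
    if (!isActive) {
        return HandVisual::None;  // no requested data source is active
    }
    return dataSource == XR_HAND_TRACKING_DATA_SOURCE_CONTROLLER_EXT
               ? HandVisual::HandOnController
               : HandVisual::TrackedHand;
}
```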
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
* XR_TYPE_HAND_TRACKING_DATA_SOURCE_INFO_EXT
* XR_TYPE_HAND_TRACKING_DATA_SOURCE_STATE_EXT
New Enums
New Structures
New Functions
Issues
- Should this extension require XR_HAND_JOINTS_MOTION_RANGE_CONFORMING_TO_CONTROLLER_EXT if the data source is XR_HAND_TRACKING_DATA_SOURCE_CONTROLLER_EXT and XR_EXT_hand_joints_motion_range is not enabled?

  RESOLVED: No. It should not be required. We expect that a key use of the data from this extension will be replicating hand tracking joint data for social purposes. For that use case, the data returned in the style of XR_HAND_JOINTS_MOTION_RANGE_UNOBSTRUCTED_EXT is more appropriate. This is consistent with the XR_EXT_hand_tracking extension, which requires that the jointLocations represent the range of motion of a human hand, without any obstructions.

- Should XrHandTrackingDataSourceStateEXT include an isActive member, or can it use isActive from XrHandJointLocationsEXT?

  RESOLVED: Yes. XrHandTrackingDataSourceStateEXT needs to include the isActive member and cannot use the isActive from XrHandJointLocationsEXT, as the meaning of these members differs. The isActive member of XrHandTrackingDataSourceStateEXT describes whether the tracking device (the data source) is active, while XrHandJointLocationsEXT::isActive describes whether the hand is actively being tracked. It is possible for a data source to be active but not actively tracking, and this extension needs to represent whether the device is active.
Version History
- Revision 1, 2023-01-23 (John Kearney)
  - Initial extension description
12.39. XR_EXT_interaction_render_model
- Name String: XR_EXT_interaction_render_model
- Extension Type: Instance extension
- Registered Extension Number: 302
- Revision: 1
- Ratification Status: Ratified
- Extension and Version Dependencies
- Contributors:
  Darryl Gough, Microsoft
  Yin Li, Microsoft
  Bryce Hutchings, Microsoft
  Rylie Pavlik, Collabora
  Joe Ludwig, Valve
  Nathan Nuber, Valve
  Dan Willmott, Valve
  Jakob Bornecrantz, Collabora
  Leonard Tsai, Meta Platforms
  Paulo Gomes, Samsung Electronics
  Lachlan Ford, Google
  Wenlin Mao, Meta Platforms
  Bastiaan Olij, Godot Engine
12.39.1. Overview
This extension allows an application to render realistic models representing the device or devices used by the user to interact. It is a generalized version of functionality that has been known elsewhere as "controller models", made generic by enumerating interaction-related render models without filtering them, and allowing association with a subaction path as a second lookup step.
12.39.2. Getting Models
The design intent of this extension is to allow enumerating models early and keep enumerating them as long as their future use is possible. This is so that applications have time to load models, transcode textures, and otherwise prepare for rendering early in the session, and so that applications do not discard the results of that processing if it will be needed again. This large scope is only for enumerating the models in the first place, however: when those models are intended to be shown is more narrowly scoped and tightly specified, since it is less likely to be associated with a high computational startup cost.
The base XR_EXT_render_model extension delegates several design
choices to dependent extensions, as described in
Choices Delegated to Related Extensions.
For models associated with this XR_EXT_interaction_render_model
extension, the XR_EXT_render_model extension is specialized in the
following ways, addressing those delegated choices and other important
distinctions:
- glTF extension behavior

  For any render model ID retrieved from this extension, the runtime must support a glTF model without any required glTF extensions. Thus, the runtime must not return XR_ERROR_RENDER_MODEL_GLTF_EXTENSION_REQUIRED_EXT from xrCreateRenderModelEXT for any render model ID retrieved from this extension.

- Alpha blending

  Due to the difficulty and potential performance impact of implementing alpha blending correctly for multiple overlapping objects, applications are unlikely to be able to correctly render a model using alpha blending everywhere an interaction render model may appear. As such, the runtime should not set alphaMode to BLEND for any material in a render model associated with this extension. Materials with alphaMode set to MASK do not pose the same challenges of implementation and so are suitable for use if needed.

- Animation

  For any asset associated with this extension, the simple node-pose-visibility mechanism defined by XR_EXT_render_model in Animate Parts of a Render Model is used for animation.

- External references

  For any render model associated with this extension, the runtime must provide a glTF asset without any references to external buffers and textures outside of the GLB container. That is, all binary data must be embedded in the GLB binary chunk or as a Base64 data: URI.

- Scenes

  For any render model associated with this extension, the runtime must provide a glTF asset that contains one or more scenes and defines the scene property to identify which scene to render.

- Complexity and Optimization

  The runtime should provide a glTF model optimized for real-time rendering use, with the expectation that the application may render all interaction render models every frame. Describing such optimization is beyond the scope of this specification.

- Space location

  Render models are located by a render model space, which does not correspond directly to any named pose.
The xrEnumerateInteractionRenderModelIdsEXT function is defined as:
// Provided by XR_EXT_interaction_render_model
XrResult xrEnumerateInteractionRenderModelIdsEXT(
    XrSession session,
    const XrInteractionRenderModelIdsEnumerateInfoEXT* getInfo,
    uint32_t renderModelIdCapacityInput,
    uint32_t* renderModelIdCountOutput,
    XrRenderModelIdEXT* renderModelIds);
This function returns render model IDs for any device associated with
actions in any action set attached to session by
xrAttachSessionActionSets.
There is no specific meaning for array position.
A runtime may return values in any order, although the enumerated array
must remain constant between calls to xrSyncActions.
An application should not assume any meaning based on array order.
Note that a runtime may shuffle the order of IDs returned each time that
the list changes, to aid application developers in avoiding accidental
dependence on enumeration order.
An application must not assume any given size of this array based on suggested bindings: compatibility and user preference may result in more models being associated with actions than described in the suggested bindings. The runtime may return more models than the number of top level user paths in the suggested bindings due to user configuration and compatibility rebinding. The runtime should continue to return model IDs corresponding to any devices that have recently become inactive or disconnected, if they are reasonably expected to be used again soon, to minimize the need for applications to re-enumerate models and load assets. Similarly, the runtime may return model IDs for devices expected to be used, even if they are not yet connected or active.
The runtime must return render model IDs reflecting the actual hardware used, which must be independent of the currently active interaction profile. Accordingly, as long as the same actions within an XrInstance have suggested bindings, changing suggested bindings by adding or removing suggested bindings for an interaction profile must not change the underlying assets. Furthermore, provided that identical actions within an XrInstance are associated with suggested bindings for a specified list of glTF extensions, the runtime must return an identical collection of render model asset UUIDs.
The application can monitor for the XrEventDataInteractionRenderModelsChangedEXT event to get notified when interaction render models need to be re-enumerated.
Changes to the collection of models enumerated (for example, due to device change) must only occur during a call to xrSyncActions. If the collection of models changes, the XrEventDataInteractionRenderModelsChangedEXT event must be queued during that call to xrSyncActions to signal the need for re-enumeration. This implies that a runtime must enumerate no models prior to the first call to xrSyncActions in a session.
Note that the UUIDs associated with the enumerated render model IDs for a
given system and list of glTF extensions may change between instances due
to runtime changes.
Additionally, as with all atom types like XrRenderModelIdEXT, the
enumerated render model ID values associated with a logical device may
change between sessions as render model ID atoms inherently only have
meaning within the single XrSession they are enumerated from.
If an XrRenderModelIdEXT was enumerated during a call to
xrEnumerateInteractionRenderModelIdsEXT during the current session,
but the set of interaction render models has now changed and that
XrRenderModelIdEXT would not be enumerated by a call to
xrEnumerateInteractionRenderModelIdsEXT after that change, a call to
xrCreateRenderModelEXT with that XrRenderModelIdEXT must
return XR_ERROR_RENDER_MODEL_ID_INVALID_EXT.
(Note that a change in the set of interaction render models only occurs
during calls to xrSyncActions, and queues an
XrEventDataInteractionRenderModelsChangedEXT event if it occurs.) That
is, if an ID was previously enumerated with this function during the current
session, but is no longer enumerated due to a change in interaction render
models during an xrSyncActions call, it is no longer valid to create a
XrRenderModelEXT from that XrRenderModelIdEXT.
Existing XrRenderModelEXT handles already created from an ID that is no longer enumerated remain valid, but "inactive" and effectively useless.
- Locating an associated render model space must report untracked/unlocatable, and therefore the model is not to be rendered.
- Calls to xrGetRenderModelStateEXT may stop providing updated data, as they are assumed to not be rendered and thus the model state is irrelevant.
- The runtime may return XR_ERROR_RENDER_MODEL_ASSET_UNAVAILABLE_EXT from xrCreateRenderModelAssetEXT if called with the cache UUID of that render model, if no other active render model uses the same asset UUID.
Runtimes must not enumerate a render model ID that they previously enumerated, then no longer enumerated. That is, if a render model ID is made inactive, it will never again become active. If the associated device returns, it will use a new render model ID.
A render model XrRenderModelEXT created from an
XrRenderModelIdEXT enumerated by this function must not be
visible/locatable when located by xrCreateRenderModelSpaceEXT if the
session state is not XR_SESSION_STATE_FOCUSED, to ensure render models
are only being rendered once per frame.
If the session is not running, the runtime must return
XR_ERROR_SESSION_NOT_RUNNING.
A render model XrRenderModelEXT created from an
XrRenderModelIdEXT enumerated by this function must be locatable
and visible if the corresponding device is locatable and there exists some
action in any action set with which the render model is associated.
This avoids having interaction render models disappear during corner cases
of application interaction, e.g. when a "menu" button present on only one
controller is the only active input.
If an application wishes to only show models for which there are active
actions, use the output of xrEnumerateRenderModelSubactionPathsEXT
which enumerates subaction paths per model for the active action sets only.
The XrInteractionRenderModelIdsEnumerateInfoEXT structure is defined as:
// Provided by XR_EXT_interaction_render_model
typedef struct XrInteractionRenderModelIdsEnumerateInfoEXT {
    XrStructureType    type;
    const void*        next;
} XrInteractionRenderModelIdsEnumerateInfoEXT;
XrInteractionRenderModelIdsEnumerateInfoEXT is an input structure for the xrEnumerateInteractionRenderModelIdsEXT function. XrInteractionRenderModelIdsEnumerateInfoEXT exists for future extensibility.
The XrEventDataInteractionRenderModelsChangedEXT structure is an event defined as:
// Provided by XR_EXT_interaction_render_model
typedef struct XrEventDataInteractionRenderModelsChangedEXT {
    XrStructureType    type;
    const void*        next;
} XrEventDataInteractionRenderModelsChangedEXT;
Receiving this event from xrPollEvent indicates that the app should enumerate interaction render models (or re-enumerate them) using xrEnumerateInteractionRenderModelIdsEXT and the two-call idiom, because the list of IDs enumerated by it has changed. This event must only be queued by a call to xrSyncActions. For clarity, if an application has enabled this extension, this event must be emitted during the first xrSyncActions call if xrEnumerateInteractionRenderModelIdsEXT will enumerate any models, because it enumerates no models prior to the first xrSyncActions call.
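The two-call idiom referred to above can be sketched as follows. The enumerator here is a stand-in so the pattern is runnable in isolation; a real application instead calls the function pointer obtained via xrGetInstanceProcAddr, passing the session and an XrInteractionRenderModelIdsEnumerateInfoEXT.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Stand-in for the atom type from <openxr/openxr.h>.
using XrRenderModelIdEXT = uint64_t;

// Stand-in for the runtime side: writes the count, and the IDs when the
// caller-provided capacity is sufficient. The ID values are placeholders.
static int EnumerateIds(uint32_t capacityInput, uint32_t* countOutput,
                        XrRenderModelIdEXT* ids) {
    static const XrRenderModelIdEXT available[] = {101, 102, 103};
    *countOutput = 3;
    if (capacityInput == 0) return 0;   // size query only
    if (capacityInput < 3) return -1;   // would be XR_ERROR_SIZE_INSUFFICIENT
    for (uint32_t i = 0; i < 3; ++i) ids[i] = available[i];
    return 0;
}

// Two-call idiom: first query the required count, then fill the array.
std::vector<XrRenderModelIdEXT> GetRenderModelIds() {
    uint32_t count = 0;
    EnumerateIds(0, &count, nullptr);          // first call: query count
    std::vector<XrRenderModelIdEXT> ids(count);
    EnumerateIds(count, &count, ids.data());   // second call: fill array
    ids.resize(count);
    return ids;
}
```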
12.39.3. Associating Models with Active Action Set Subaction Paths
An application might wish to know which models are associated with a subaction path as used in suggested bindings, for example to adjust the shading to highlight a controller to use in user instructions. This operation is structured as enumerating the subaction paths for each render model to encourage application logic that treats this data fully generally and handles common and less common configurations uniformly.
The xrEnumerateRenderModelSubactionPathsEXT function is defined as:
// Provided by XR_EXT_interaction_render_model
XrResult xrEnumerateRenderModelSubactionPathsEXT(
    XrRenderModelEXT renderModel,
    const XrInteractionRenderModelSubactionPathInfoEXT* info,
    uint32_t pathCapacityInput,
    uint32_t* pathCountOutput,
    XrPath* paths);
xrEnumerateRenderModelSubactionPathsEXT allows the application to associate an interaction-related render model with the associated subaction paths according to the exposed current interaction profile and active action sets.
If renderModel is valid but was not created from a render model ID
from a call to xrEnumerateInteractionRenderModelIdsEXT earlier in the
current session, the runtime must return
XR_ERROR_NOT_INTERACTION_RENDER_MODEL_EXT.
The array enumerated by this function for a given render model must not change except during calls to xrSyncActions.
A given subaction path must be reported for a model if and only if both of the following are true:
- That path appears in the corresponding XrActionCreateInfo::subactionPaths for some action or actions associated with it in the active action sets.
- That path is used as a top-level user path for some suggested binding of at least one such action in the current interaction profile.
This paragraph describes implications and clarifications of the preceding
requirement.
If a given path is used as a top-level user path for a suggested binding to
an action with no subaction paths specified, or without that specific
subaction path specified, it is not sufficient to require enumerating that
path.
The runtime must only enumerate subaction paths that are included in the
reported current interaction profile and mentioned in the corresponding
suggested bindings, even if one of the models is logically better described
by a path not used by the application.
For example, a treadmill-like interaction device with its input mapped to
actions suggested for left and right hands enumerates the paths
/user/hand/left and /user/hand/right even though
/user/treadmill is defined in the specification.
This also implies that a runtime must return no subaction paths prior to
the first call to xrSyncActions in a session, or when the most recent
call to xrSyncActions did not specify any active action sets.
Additionally, the runtime must return no subaction paths when a given
render model provides input only for actions that do not have a list of
subaction paths specified in XrActionCreateInfo::subactionPaths.
This function is intended for identifying models currently associated with any actions in an active action set, as well as identifying the subaction paths associated with the bound input. To identify which top-level /user path is most closely associated with the overall pose of any given interaction render model, see xrGetRenderModelPoseTopLevelUserPathEXT. The description of that function contains a further discussion of the differences with this function.
Important: The order of values returned from this function is not meaningful, and the entire array should be iterated and treated uniformly by the application. An application should always be prepared for this function to return a list of any length, up to the total number of subaction paths used in suggested bindings. Most functionality in OpenXR is defined to operate as if the hardware corresponding to the current interaction profile were in use according to the suggested bindings. However, this function, and this extension in general, allows the application to access aspects of the user’s actual input configuration, to provide accurate and realistic feedback to the user. Special care is required to ensure that application code using this function is maximally general.
The XrInteractionRenderModelSubactionPathInfoEXT structure is defined as:
// Provided by XR_EXT_interaction_render_model
typedef struct XrInteractionRenderModelSubactionPathInfoEXT {
XrStructureType type;
const void* next;
} XrInteractionRenderModelSubactionPathInfoEXT;
XrInteractionRenderModelSubactionPathInfoEXT exists for future extensibility.
12.39.4. Query Pose-Related Top Level /user Path for Model
Some applications need to know the top-level /user path most closely associated with the overall pose of an interaction render model. This allows an application to adjust positioning of the render model where a render model retains its relative position to related poses and/or hand models.
An example use case is when rendering the controller, hand model, and elements related to poses for the player’s right or left hand, while the player has moved their hand through a virtual wall. An application may choose to not render these elements at their tracked location but instead prevent movement through this obstruction. The application will want to adjust the position of these elements in equal measure.
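The adjustment described in this use case can be sketched as below, with stand-in types; the offset-selection policy is purely illustrative and not part of this extension.

```cpp
#include <cassert>
#include <cstdint>

// Stand-ins for illustration; real definitions come from <openxr/openxr.h>.
using XrPath = uint64_t;
constexpr XrPath XR_NULL_PATH = 0;
struct XrVector3f { float x, y, z; };

// Pick the obstruction-adjustment offset to apply to a render model: models
// whose pose is associated with a hand path reuse that hand's offset, while
// models with no associated path (XR_NULL_PATH) stay at their tracked pose.
XrVector3f AdjustmentFor(XrPath modelPath, XrPath leftHand, XrPath rightHand,
                         const XrVector3f& leftOffset,
                         const XrVector3f& rightOffset) {
    if (modelPath == leftHand) return leftOffset;
    if (modelPath == rightHand) return rightOffset;
    return {0.f, 0.f, 0.f};  // XR_NULL_PATH or an unrecognized path
}
```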
The xrGetRenderModelPoseTopLevelUserPathEXT function is defined as:
// Provided by XR_EXT_interaction_render_model
XrResult xrGetRenderModelPoseTopLevelUserPathEXT(
    XrRenderModelEXT renderModel,
    const XrInteractionRenderModelTopLevelUserPathGetInfoEXT* info,
    XrPath* topLevelUserPath);
This function returns the top level /user path most closely
associated with the pose of a given render model, if any, and if that path
is present in the list passed in info.
A runtime must return:

- the top level /user path from the list in info that is most closely associated with the model pose as a physical reality (e.g. a device currently held in the user’s left hand returns /user/hand/left), if one exists. Note that this requirement does provide fallback behavior. That is, if a model pose is related to more than one top level /user path, the runtime returns the path from info with the closest association, even if it is less closely related than some other path not included in info.
- XR_NULL_PATH if no such path can be determined (e.g. the corresponding device is currently not held by or attached to the user, or no path associated with the model pose was provided in info).
Note that unlike xrGetCurrentInteractionProfile, more than one model may report being most closely associated with a given top level /user path. For example, a runtime may represent a single controller as two render models, or a user may have both a handheld device and a wrist-mounted tracker.
Changes to the top level /user path state of each render model must only occur during a call to xrSyncActions.
If renderModel is valid but was not retrieved from a call to
xrEnumerateInteractionRenderModelIdsEXT earlier in the current
session, the runtime must return
XR_ERROR_NOT_INTERACTION_RENDER_MODEL_EXT.
This function differs from xrEnumerateRenderModelSubactionPathsEXT by emphasizing poses and being broadly distinct from actions. xrGetRenderModelPoseTopLevelUserPathEXT focuses solely on poses related to a top level /user path and returns only the most applicable result. Contrast with xrEnumerateRenderModelSubactionPathsEXT, which reports all top level /user paths being used as subaction paths that are associated with actions in an active action set. That function is meant more for e.g. highlighting models providing input, especially non-pose input, associated with a subaction path. For example, the right hand might have a pie menu related action set active, and an application could show the devices that can interact with that menu in a highlighted way, while dimming the other models.
Important: An application should always be prepared for this function
to return any top-level /user path in its list or
XR_NULL_PATH for any of the interaction render models.
Many systems will not report XR_NULL_PATH for any models, provided
that both /user/hand/left and /user/hand/right are
included in the list in info, but application code must still be
prepared to handle XR_NULL_PATH, and that code path should be tested
manually.
Most functionality in OpenXR is defined to operate as if the hardware
corresponding to the current interaction profile were in use according to
the suggested bindings.
However, this function, and this extension in general, allows the
application to access aspects of the user’s actual input configuration, to
provide accurate and realistic feedback to the user.
Special care is required to ensure that application code using this function
is maximally general.
The XrInteractionRenderModelTopLevelUserPathGetInfoEXT structure is defined as:
// Provided by XR_EXT_interaction_render_model
typedef struct XrInteractionRenderModelTopLevelUserPathGetInfoEXT {
XrStructureType type;
const void* next;
uint32_t topLevelUserPathCount;
const XrPath* topLevelUserPaths;
} XrInteractionRenderModelTopLevelUserPathGetInfoEXT;
If any elements in topLevelUserPaths are duplicated, the runtime must
return XR_ERROR_VALIDATION_FAILURE from
xrGetRenderModelPoseTopLevelUserPathEXT.
If any elements in topLevelUserPaths are not valid
top level /user paths, the runtime must
return XR_ERROR_PATH_INVALID from
xrGetRenderModelPoseTopLevelUserPathEXT.
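The usage pattern can be sketched as follows. This is a hedged illustration, not normative sample code: it assumes a valid XrRenderModelEXT handle and the CHK_XR error-checking macro used by this chapter's examples, and loads the function pointer by name.

```cpp
// previously initialized
extern XrInstance instance;
extern XrRenderModelEXT renderModel;
PFN_xrGetRenderModelPoseTopLevelUserPathEXT
pfnGetRenderModelPoseTopLevelUserPathEXT;
CHK_XR(xrGetInstanceProcAddr(instance,
"xrGetRenderModelPoseTopLevelUserPathEXT",
reinterpret_cast<PFN_xrVoidFunction *>(
&pfnGetRenderModelPoseTopLevelUserPathEXT)));
// List every top level /user path the application distinguishes.
XrPath handPaths[2];
CHK_XR(xrStringToPath(instance, "/user/hand/left", &handPaths[0]));
CHK_XR(xrStringToPath(instance, "/user/hand/right", &handPaths[1]));
XrInteractionRenderModelTopLevelUserPathGetInfoEXT getInfo{
XR_TYPE_INTERACTION_RENDER_MODEL_TOP_LEVEL_USER_PATH_GET_INFO_EXT};
getInfo.topLevelUserPathCount = 2;
getInfo.topLevelUserPaths = handPaths;
XrPath associatedPath{XR_NULL_PATH};
CHK_XR(pfnGetRenderModelPoseTopLevelUserPathEXT(renderModel, &getInfo,
&associatedPath));
if (associatedPath == XR_NULL_PATH) {
// The device is currently not associated with either hand; render it
// without hand-specific treatment.
}
```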
12.39.5. Example
// previously initialized
extern XrInstance instance;
extern XrSession session;
extern XrSpace baseSpace;
// Get the function pointers for the extension's functions.
PFN_xrEnumerateInteractionRenderModelIdsEXT
pfnEnumerateInteractionRenderModelIdsEXT;
CHK_XR(xrGetInstanceProcAddr(instance,
"xrEnumerateInteractionRenderModelIdsEXT",
reinterpret_cast<PFN_xrVoidFunction *>(
&pfnEnumerateInteractionRenderModelIdsEXT)));
// And the XR_EXT_render_model functions
PFN_xrCreateRenderModelEXT pfnCreateRenderModelEXT;
CHK_XR(xrGetInstanceProcAddr(
instance, "xrCreateRenderModelEXT",
reinterpret_cast<PFN_xrVoidFunction *>(&pfnCreateRenderModelEXT)));
PFN_xrDestroyRenderModelEXT pfnDestroyRenderModelEXT;
CHK_XR(xrGetInstanceProcAddr(
instance, "xrDestroyRenderModelEXT",
reinterpret_cast<PFN_xrVoidFunction *>(&pfnDestroyRenderModelEXT)));
PFN_xrGetRenderModelPropertiesEXT pfnGetRenderModelPropertiesEXT;
CHK_XR(xrGetInstanceProcAddr(
instance, "xrGetRenderModelPropertiesEXT",
reinterpret_cast<PFN_xrVoidFunction *>(&pfnGetRenderModelPropertiesEXT)));
PFN_xrCreateRenderModelSpaceEXT pfnCreateRenderModelSpaceEXT;
CHK_XR(xrGetInstanceProcAddr(
instance, "xrCreateRenderModelSpaceEXT",
reinterpret_cast<PFN_xrVoidFunction *>(&pfnCreateRenderModelSpaceEXT)));
PFN_xrCreateRenderModelAssetEXT pfnCreateRenderModelAssetEXT;
CHK_XR(xrGetInstanceProcAddr(
instance, "xrCreateRenderModelAssetEXT",
reinterpret_cast<PFN_xrVoidFunction *>(&pfnCreateRenderModelAssetEXT)));
PFN_xrDestroyRenderModelAssetEXT pfnDestroyRenderModelAssetEXT;
CHK_XR(xrGetInstanceProcAddr(
instance, "xrDestroyRenderModelAssetEXT",
reinterpret_cast<PFN_xrVoidFunction *>(&pfnDestroyRenderModelAssetEXT)));
PFN_xrGetRenderModelAssetDataEXT pfnGetRenderModelAssetDataEXT;
CHK_XR(xrGetInstanceProcAddr(
instance, "xrGetRenderModelAssetDataEXT",
reinterpret_cast<PFN_xrVoidFunction *>(&pfnGetRenderModelAssetDataEXT)));
PFN_xrGetRenderModelAssetPropertiesEXT pfnGetRenderModelAssetPropertiesEXT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrGetRenderModelAssetPropertiesEXT",
reinterpret_cast<PFN_xrVoidFunction *>(
&pfnGetRenderModelAssetPropertiesEXT)));
PFN_xrGetRenderModelStateEXT pfnGetRenderModelStateEXT;
CHK_XR(xrGetInstanceProcAddr(
instance, "xrGetRenderModelStateEXT",
reinterpret_cast<PFN_xrVoidFunction *>(&pfnGetRenderModelStateEXT)));
XrPath rightHandPath;
CHK_XR(xrStringToPath(instance, "/user/hand/right", &rightHandPath));
// Enumerate the render model IDs
XrInteractionRenderModelIdsEnumerateInfoEXT renderModelGetInfo{
XR_TYPE_INTERACTION_RENDER_MODEL_IDS_ENUMERATE_INFO_EXT};
uint32_t numModels{0};
CHK_XR(pfnEnumerateInteractionRenderModelIdsEXT(session, &renderModelGetInfo, 0,
&numModels, NULL));
std::vector<XrRenderModelIdEXT> interactionModelIds(numModels, XR_NULL_PATH);
CHK_XR(pfnEnumerateInteractionRenderModelIdsEXT(session, &renderModelGetInfo,
numModels, &numModels,
interactionModelIds.data()));
// Create render model handles
// The names of glTF extensions that the application is capable of supporting.
// The returned glTF model may have any or all of these extensions listed in
// the "extensionsRequired" array.
// Pass only the extensions that your app/engine are capable of supporting.
std::vector<const char *> appSupportedGltfExtensions{"KHR_texture_basisu",
"KHR_materials_specular"};
std::vector<XrRenderModelEXT> interactionModels;
for (XrRenderModelIdEXT id : interactionModelIds) {
XrRenderModelEXT renderModel;
XrRenderModelCreateInfoEXT renderModelCreateInfo{
XR_TYPE_RENDER_MODEL_CREATE_INFO_EXT};
renderModelCreateInfo.renderModelId = id;
renderModelCreateInfo.gltfExtensionCount =
(uint32_t)appSupportedGltfExtensions.size();
renderModelCreateInfo.gltfExtensions = appSupportedGltfExtensions.data();
CHK_XR(
pfnCreateRenderModelEXT(session, &renderModelCreateInfo, &renderModel));
interactionModels.push_back(renderModel);
}
std::vector<XrSpace> modelSpaces;
std::vector<XrRenderModelPropertiesEXT> modelProperties;
for (XrRenderModelEXT renderModel : interactionModels) {
// Create a space for locating the render model.
XrRenderModelSpaceCreateInfoEXT spaceCreateInfo{
XR_TYPE_RENDER_MODEL_SPACE_CREATE_INFO_EXT};
spaceCreateInfo.renderModel = renderModel;
XrSpace modelSpace;
CHK_XR(pfnCreateRenderModelSpaceEXT(session, &spaceCreateInfo, &modelSpace));
modelSpaces.push_back(modelSpace);
// Get the model properties: UUID and number of animatable nodes
XrRenderModelPropertiesGetInfoEXT propertiesGetInfo{
XR_TYPE_RENDER_MODEL_PROPERTIES_GET_INFO_EXT};
XrRenderModelPropertiesEXT properties{XR_TYPE_RENDER_MODEL_PROPERTIES_EXT};
CHK_XR(pfnGetRenderModelPropertiesEXT(renderModel, &propertiesGetInfo,
&properties));
modelProperties.push_back(properties);
{
// Create the asset handle to request the data.
XrRenderModelAssetCreateInfoEXT assetCreateInfo{
XR_TYPE_RENDER_MODEL_ASSET_CREATE_INFO_EXT};
assetCreateInfo.cacheId = properties.cacheId;
XrRenderModelAssetEXT asset;
CHK_XR(pfnCreateRenderModelAssetEXT(session, &assetCreateInfo, &asset));
// Copy the binary glTF (GLB) asset data using two-call idiom.
XrRenderModelAssetDataGetInfoEXT assetGetInfo{
XR_TYPE_RENDER_MODEL_ASSET_DATA_GET_INFO_EXT};
XrRenderModelAssetDataEXT assetData{
XR_TYPE_RENDER_MODEL_ASSET_DATA_EXT};
CHK_XR(pfnGetRenderModelAssetDataEXT(asset, &assetGetInfo, &assetData));
std::vector<uint8_t> glbData(assetData.bufferCountOutput);
assetData.bufferCapacityInput = (uint32_t)glbData.size();
assetData.buffer = glbData.data();
CHK_XR(pfnGetRenderModelAssetDataEXT(asset, &assetGetInfo, &assetData));
// Parsing the binary glTF data is outside the scope of this extension,
// but do it here.
// Get the unique names of the animatable nodes
XrRenderModelAssetPropertiesGetInfoEXT assetPropertiesGetInfo{
XR_TYPE_RENDER_MODEL_ASSET_PROPERTIES_GET_INFO_EXT};
XrRenderModelAssetPropertiesEXT assetProperties{
XR_TYPE_RENDER_MODEL_ASSET_PROPERTIES_EXT};
std::vector<XrRenderModelAssetNodePropertiesEXT> nodeProperties(
properties.animatableNodeCount);
assetProperties.nodePropertyCount = (uint32_t)nodeProperties.size();
assetProperties.nodeProperties = nodeProperties.data();
CHK_XR(pfnGetRenderModelAssetPropertiesEXT(asset, &assetPropertiesGetInfo,
&assetProperties));
// Once the glTF data has been handled, we no longer need the
// XrRenderModelAssetEXT handle.
CHK_XR(pfnDestroyRenderModelAssetEXT(asset));
// Save the list of nodes for rendering. The order of the array matters.
// The application will store some sort of "reference" to a node for
// each element, using the node name (in nodeProperties) to find it here.
// This code is not shown because it will depend on how your
// application represents glTF assets, so add your own here.
}
}
// Each frame the application's work for each model includes
// reading the state of the animatable nodes
// and then adjusting the pose or visibility of the node.
// Initialized from xrWaitFrame output
extern XrTime predictedDisplayTime;
for (size_t modelIndex = 0; modelIndex < interactionModels.size();
++modelIndex) {
XrRenderModelEXT renderModel = interactionModels[modelIndex];
const XrRenderModelPropertiesEXT& properties = modelProperties[modelIndex];
XrSpace modelSpace = modelSpaces[modelIndex];
// Use xrLocateSpace to locate the model's space
XrSpaceLocation modelLocation{XR_TYPE_SPACE_LOCATION};
CHK_XR(xrLocateSpace(modelSpace, baseSpace, predictedDisplayTime, &modelLocation));
bool orientationTracked = (modelLocation.locationFlags &
XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT) != 0;
bool positionTracked = (modelLocation.locationFlags &
XR_SPACE_LOCATION_POSITION_TRACKED_BIT) != 0;
if (!orientationTracked || !positionTracked) {
// Only render if the model space is tracked,
// and if the session state is appropriate, if applicable.
// (e.g. interaction models are only to be rendered when FOCUSED)
// Flag this model as not-rendered-this-frame in your app-specific way here.
continue;
}
XrRenderModelStateGetInfoEXT stateGetInfo{
XR_TYPE_RENDER_MODEL_STATE_GET_INFO_EXT};
stateGetInfo.displayTime = predictedDisplayTime;
// In practice, you do not want to re-allocate this array of
// node state every frame, but it is clearer for illustration.
// We know the number of elements from the model properties,
// and we used the names from the asset handle to find and retain
// our app-specific references to those nodes in the model.
std::vector<XrRenderModelNodeStateEXT> nodeStates(
properties.animatableNodeCount);
XrRenderModelStateEXT state{XR_TYPE_RENDER_MODEL_STATE_EXT};
state.nodeStateCount = (uint32_t)nodeStates.size();
state.nodeStates = nodeStates.data();
// xrGetRenderModelStateEXT does not use the two-call idiom. The size is
// determined by xrGetRenderModelAssetPropertiesEXT.
CHK_XR(pfnGetRenderModelStateEXT(renderModel, &stateGetInfo, &state));
for (size_t i = 0; i < nodeStates.size(); ++i) {
// Use nodeStates[i].isVisible and nodeStates[i].nodePose to update the
// node's visibility or pose.
// nodeStates[i] refers to the node identified by name in nodeProperties[i]
}
// Your app now has the overall transform and all node transforms/status here.
}
As a demonstration of xrEnumerateRenderModelSubactionPathsEXT, the following additional code assumes that the application would like to modify rendering (e.g. highlight) for devices that provide input to a given subaction path, such as to emphasize which device is controlling a currently-active teleport targeting.
// previously initialized
extern XrInstance instance;
// as populated in the preceding sample
std::vector<XrRenderModelEXT> interactionModels;
// Get the function pointers for the extension's functions.
PFN_xrEnumerateRenderModelSubactionPathsEXT
pfnEnumerateRenderModelSubactionPathsEXT;
CHK_XR(xrGetInstanceProcAddr(instance,
"xrEnumerateRenderModelSubactionPathsEXT",
reinterpret_cast<PFN_xrVoidFunction *>(
&pfnEnumerateRenderModelSubactionPathsEXT)));
// During each frame when an application wishes to treat render models
// associated with some subaction path differently, it performs the following.
// Previously initialized
XrPath subactionPathToHighlight;
// Reused for each model because results are temporary
std::vector<XrPath> paths;
for (size_t modelIndex = 0; modelIndex < interactionModels.size();
++modelIndex) {
XrRenderModelEXT renderModel = interactionModels[modelIndex];
// Two-call idiom for subaction paths
uint32_t count;
CHK_XR(pfnEnumerateRenderModelSubactionPathsEXT(renderModel, nullptr,
0, &count, nullptr));
paths.resize(count, XR_NULL_PATH);
CHK_XR(pfnEnumerateRenderModelSubactionPathsEXT(renderModel, nullptr,
(uint32_t)paths.size(),
&count, paths.data()));
// Determine if our desired subaction path is in the collection.
bool foundHighlightPath = (paths.end() !=
std::find(paths.begin(),
paths.end(),
subactionPathToHighlight));
if (foundHighlightPath) {
// Highlight this model: it is providing input for
// actions on subactionPathToHighlight
} else {
// Render normally: no input from this model is
// associated with subactionPathToHighlight
}
}
12.39.8. New Enum Constants
- XR_EXT_INTERACTION_RENDER_MODEL_EXTENSION_NAME
- XR_EXT_interaction_render_model_SPEC_VERSION

Extending XrResult:
- XR_ERROR_NOT_INTERACTION_RENDER_MODEL_EXT

Extending XrStructureType:
- XR_TYPE_EVENT_DATA_INTERACTION_RENDER_MODELS_CHANGED_EXT
- XR_TYPE_INTERACTION_RENDER_MODEL_IDS_ENUMERATE_INFO_EXT
- XR_TYPE_INTERACTION_RENDER_MODEL_SUBACTION_PATH_INFO_EXT
- XR_TYPE_INTERACTION_RENDER_MODEL_TOP_LEVEL_USER_PATH_GET_INFO_EXT
12.39.9. Issues
- Should we enumerate models per subaction path? per action? or overall?
  - We enumerate all models to normalize looping over an array of models of arbitrary length, to avoid fragility when more than one device is providing input for a single subaction path due to rebinding. (Application authors are likely to assume one model per subaction path unless the API is structured to avoid that assumption.)
- Given enumeration of models first, what action-related data is safe to expose to the application without introducing untested code paths used only in case of rebinding?
  - Enumerating subaction paths for a model is not a problem: the runtime only returns subaction paths submitted by the app (so no untested code paths), and the mistaken assumption that only one subaction path is returned is less dangerous than assuming a number of models: the association with subaction paths is likely primarily for highlighting, etc. Incorrect processing of this data by the application produces a less-optimal experience, but does not result in any crash or incompatibility.
- Can the application associate individual actions with models or nodes in them?
  - This is out of scope for this extension and will be provided in a follow-up. It requires more design work to achieve the working group goals.
- Should the main function only enumerate models associated with currently-bound and active actions?
  - No, this will change with each active action set change, requiring frequent re-enumeration of models. If an application wants to display only models associated with a bound and active action, it can use the results of xrEnumerateRenderModelSubactionPathsEXT to identify them, and no event is needed as the application controls calling xrSyncActions. The current design instead enumerates models associated with the union of all actions attached to the session.
- Does the asset corresponding to a render model ID change when the user switches devices, or should it trigger an event prompting the runtime to enumerate a new render model ID for the new device?
  - An event triggers fresh enumeration retrieving a new render model ID, to keep one render model ID closely associated with a physical device rather than with a role or the inputs driven by it. A different type of controller is a new model ID and not just an updated asset for an existing one. Additionally, the UUID and asset for a given render model ID and list of extensions in a session is now defined to be immutable.
- Does xrEnumerateRenderModelSubactionPathsEXT enumerate subaction paths in any specific order?
  - No, the order is explicitly defined to have no meaning. An application that uses xrEnumerateRenderModelSubactionPathsEXT should assume there may be multiple values in this list, even though there may be only one in some cases, and treat the list returned from xrEnumerateRenderModelSubactionPathsEXT as a set. An application should process all values in that list equally: e.g. if looking to highlight "right hand" devices, apply a highlight shader to all render models that contain /user/hand/right in their list from xrEnumerateRenderModelSubactionPathsEXT no matter where it appears in the output.
- Can the active assets for hardware change between sessions or only instances?
  - Assets for devices must remain fixed within a given instance. This is primarily unneeded implementation freedom that is restricted so that the conformance test suite can enforce the requirement that suggested bindings for additional interaction profiles, as long as they do not change the collection of bound actions, do not change the assets. It is very important for the purpose and usability of this extension that it returns assets related to the real hardware in use, which means it must be unaffected by the interaction profile system. We cannot test automatically whether the hardware looks like the model, but if we require that the underlying assets are fixed across sessions within an instance, we can check that the UUID does not change based on the suggested bindings for a given session.
- What device render models are enumerated? Options include A: only devices for actions in the active action sets, B: devices associated with any action in any action set, C: any devices the user may interact with even if they do not have an associated action
  - Option B is selected. Option A (only devices for the currently active action sets) may mean that the set of enumerated devices changes frequently, if not all action sets contain actions being supplied by every device. This could lead applications to have a more robust lifecycle for interaction render models, and well tested code paths for setup and teardown, but it also could result in a lot of extra overhead from this setup and teardown. Option B would enumerate a larger group of models, though not all of them would necessarily be applicable at all times. (The runtime could report not-applicable ones as not locatable when no actions are active.) Option C would show devices that are not necessarily intended for interaction (things like cameras and base stations), which was determined to be out of scope for this extension, though may be added by additional extensions with chained structure modifying this functionality.
- Should interaction render models remain locatable even when they do not have any active actions associated with them?
  - Yes. If applications want to further filter which models to display, this is possible by enumerating subaction paths in the active action set for each model, and omitting those that enumerate no subaction paths.
12.40. XR_EXT_loader_init_properties
- Name String: XR_EXT_loader_init_properties
- Extension Type: Instance extension
- Registered Extension Number: 839
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2024-11-20
- IP Status: No known IP claims.
- Contributors: Bryce Hutchings, Microsoft Corporation; Jakob Bornecrantz, NVIDIA; Rafal Karp, NVIDIA
12.40.1. Overview
This extension allows the application to safely pass custom name-value pair properties, which act as environment variable overrides, directly to the OpenXR loader in XrLoaderInitInfoPropertiesEXT by calling xrInitializeLoaderKHR. This extension introduces an internal database of name-value pairs that the loader stores. The loader queries this database first when looking up environment variables as outlined in the OpenXR Loader - Design and Operation document, and if it finds a matching name-value pair, uses it instead of querying the platform. The application uses the XrLoaderInitInfoPropertiesEXT structure passed into the xrInitializeLoaderKHR function to populate these pairs.
12.40.2. Loader Initialization Structure
The XrLoaderInitInfoPropertiesEXT structure is defined as:
// Provided by XR_EXT_loader_init_properties
typedef struct XrLoaderInitInfoPropertiesEXT {
XrStructureType type;
const void* next;
uint32_t propertyValueCount;
const XrLoaderInitPropertyValueEXT* propertyValues;
} XrLoaderInitInfoPropertiesEXT;
This structure is either provided directly as the parameter to
xrInitializeLoaderKHR or on that parameter’s next chain,
depending on whether your platform requires another structure to be passed
to xrInitializeLoaderKHR.
It contains an array of string property name-value pairs, intended primarily
to be used by the loader.
The ordering of properties does not matter because the loader rejects any duplicate entries.
The loader must return XR_ERROR_VALIDATION_FAILURE if there are
duplicate entries of property names in the given array.
The loader must obey the properties in the internal database and use them
to override environment variables.
The loader must start the internal database of property pairs as empty.
The loader must clear the internal database of property pairs when
xrInitializeLoaderKHR is called with
XrLoaderInitInfoPropertiesEXT passed in an allowed way.
(That is, the loader obeys only the properties passed in the most recent
successful xrInitializeLoaderKHR call that included an
XrLoaderInitInfoPropertiesEXT structure.)
It is unspecified what happens to the properties if the loader is unloaded then reloaded through dynamic library loading, due to differences in platforms beyond the control of this extension. Unlike platform functions to query environment variables, whose case sensitivity varies from platform to platform, the loader must always treat lookups of properties as case-sensitive in the internal database.
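To make the lookup and override semantics concrete, the following self-contained sketch models the database's behavior with standard containers. This is purely illustrative (PropertyDatabase, reset, and getEnv are invented names, not loader source code):

```cpp
#include <map>
#include <optional>
#include <string>

// Illustrative model of the loader's internal property database.
class PropertyDatabase {
public:
    // A successful xrInitializeLoaderKHR call carrying properties clears
    // the database and repopulates it with the new pairs.
    void reset(const std::map<std::string, std::string> &properties) {
        db_ = properties;
    }

    // Lookup order required by this extension: database first (always
    // case-sensitive), then the platform environment.
    std::optional<std::string> getEnv(
        const std::string &name,
        const std::map<std::string, std::string> &platformEnv) const {
        auto it = db_.find(name);
        if (it != db_.end()) {
            // A zero length value behaves as if the variable were unset.
            if (it->second.empty()) {
                return std::nullopt;
            }
            return it->second;
        }
        auto pit = platformEnv.find(name);
        if (pit != platformEnv.end()) {
            return pit->second;
        }
        return std::nullopt;
    }

private:
    std::map<std::string, std::string> db_;  // starts empty
};
```

Note that a lowercase lookup such as "xr_runtime_json" misses an uppercase entry: the database is case-sensitive even on platforms whose environment variables are not.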
This extension describes a behavior of the OpenXR loader and not any behavior of a loaded runtime. As such, any properties supplied by the application must not be exposed as environment variables and must not affect any other component of the system. Future loader implementations and loader specifications may add mechanisms for the runtime or OpenXR layers to query the properties, but as of the writing of this specification no such functionality exists.
The XrLoaderInitPropertyValueEXT structure is defined as:
typedef struct XrLoaderInitPropertyValueEXT {
const char* name;
const char* value;
} XrLoaderInitPropertyValueEXT;
This structure contains a single property name-value pair passed to loader initialization. The XrLoaderInitInfoPropertiesEXT structure is used to pass an array of these structures to the OpenXR loader, and the runtime upon loading, via xrInitializeLoaderKHR.
Rules for the value pair strings:
- The loader must accept any valid UTF-8 string, including those that only contain whitespace, for both name and value, except where otherwise covered by a rule in this list. Whitespace-only strings are accepted by this extension because whitespace-only paths are valid on some platforms.
- The loader must return XR_ERROR_VALIDATION_FAILURE if name or value is NULL.
- The loader must return XR_ERROR_VALIDATION_FAILURE if name is a zero length string.
- The loader must treat a zero length value as if the environment variable was unset in a platform specific way.
- The loader may impose implementation or platform specific limitations on string lengths and/or total string memory usage and must return XR_ERROR_LIMIT_REACHED when a limit is hit.
- The loader may impose length limits on name and/or value strings and must return XR_ERROR_LIMIT_REACHED when a limit is hit. Because they might be imposed by the platform, the limits are unspecified.
- The loader must not reference the memory pointed to by the given strings beyond returning from this call; this implies that the loader will copy the strings into an internal database.
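Populating these pairs can be sketched as follows. This is a hedged sketch: XR_RUNTIME_JSON is one of the environment variables documented for the loader, the value path is a placeholder, and on platforms that require a platform-specific loader init structure (e.g. Android), XrLoaderInitInfoPropertiesEXT is instead chained on that structure's next pointer.

```cpp
// Get xrInitializeLoaderKHR from the loader; no instance is needed.
PFN_xrInitializeLoaderKHR pfnInitializeLoaderKHR;
CHK_XR(xrGetInstanceProcAddr(XR_NULL_HANDLE, "xrInitializeLoaderKHR",
reinterpret_cast<PFN_xrVoidFunction *>(&pfnInitializeLoaderKHR)));
// Placeholder override: make the loader behave as if XR_RUNTIME_JSON
// were set to this path, without touching the process environment.
XrLoaderInitPropertyValueEXT property;
property.name = "XR_RUNTIME_JSON";
property.value = "/path/to/runtime.json";
XrLoaderInitInfoPropertiesEXT propertiesInfo{
XR_TYPE_LOADER_INIT_INFO_PROPERTIES_EXT};
propertiesInfo.propertyValueCount = 1;
propertiesInfo.propertyValues = &property;
CHK_XR(pfnInitializeLoaderKHR(
reinterpret_cast<const XrLoaderInitInfoBaseHeaderKHR *>(&propertiesInfo)));
```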
12.40.4. New Enum Constants
- XR_EXT_LOADER_INIT_PROPERTIES_EXTENSION_NAME
- XR_EXT_loader_init_properties_SPEC_VERSION

Extending XrStructureType:
- XR_TYPE_LOADER_INIT_INFO_PROPERTIES_EXT
12.40.5. Issues
- Does this extension mandate when the loader reads the environment variables?
  - Resolved. Answer: No. To the extent that it is able, this extension does not mandate when a specific variable is read by the loader. To be usable it needs to impose some rules on the OpenXR loader, but it otherwise defers to the base specification and the OpenXR Loader specification.
12.41. XR_EXT_performance_settings
- Name String: XR_EXT_performance_settings
- Extension Type: Instance extension
- Registered Extension Number: 16
- Revision: 4
- Ratification Status: Ratified
- Extension and Version Dependencies
- Last Modified Date: 2021-04-14
- IP Status: No known IP claims.
- Contributors: Armelle Laine, Qualcomm Technologies Inc, on behalf of Qualcomm Innovation Center, Inc; Rylie Pavlik, Collabora
12.41.1. Overview
This extension defines an API for the application to give performance hints to the runtime and for the runtime to send performance related notifications back to the application. This allows both sides to dial in a suitable compromise between needed CPU and GPU performance, thermal sustainability and a consistent good user experience throughout the session.
The goal is to render frames consistently, in time, under varying system load without consuming more energy than necessary.
In summary, the APIs allow:
- setting performance level hints
- receiving performance related notifications
12.41.2. Setting Performance Level Hints
Performance level hint definition
The XR performance level hints for a given hardware system are expressed as a level XrPerfSettingsLevelEXT for each of the XR-critical processing domains XrPerfSettingsDomainEXT (currently a CPU and a GPU domain are defined):
// Provided by XR_EXT_performance_settings, XR_EXT_thermal_query
typedef enum XrPerfSettingsDomainEXT {
XR_PERF_SETTINGS_DOMAIN_CPU_EXT = 1,
XR_PERF_SETTINGS_DOMAIN_GPU_EXT = 2,
XR_PERF_SETTINGS_DOMAIN_MAX_ENUM_EXT = 0x7FFFFFFF
} XrPerfSettingsDomainEXT;
// Provided by XR_EXT_performance_settings
typedef enum XrPerfSettingsLevelEXT {
XR_PERF_SETTINGS_LEVEL_POWER_SAVINGS_EXT = 0,
XR_PERF_SETTINGS_LEVEL_SUSTAINED_LOW_EXT = 25,
XR_PERF_SETTINGS_LEVEL_SUSTAINED_HIGH_EXT = 50,
XR_PERF_SETTINGS_LEVEL_BOOST_EXT = 75,
XR_PERF_SETTINGS_LEVEL_MAX_ENUM_EXT = 0x7FFFFFFF
} XrPerfSettingsLevelEXT;
This extension defines platform-independent level hints:
- XR_PERF_SETTINGS_LEVEL_POWER_SAVINGS_EXT is used by the application to indicate that it enters a non-XR section (head-locked / static screen), during which power savings are to be prioritized. Consistent XR compositing, consistent frame rendering, and low latency are not needed.
- XR_PERF_SETTINGS_LEVEL_SUSTAINED_LOW_EXT is used by the application to indicate that it enters a low and stable complexity section, during which reducing power is more important than occasional late rendering frames. With such a hint, the XR Runtime still strives for consistent XR compositing (no tearing) within a thermally sustainable range(*), but is allowed to take measures to reduce power, such as increasing latencies or reducing headroom.
- XR_PERF_SETTINGS_LEVEL_SUSTAINED_HIGH_EXT is used by the application to indicate that it enters a high or dynamic complexity section, during which the XR Runtime strives for consistent XR compositing and frame rendering within a thermally sustainable range(*).
- XR_PERF_SETTINGS_LEVEL_BOOST_EXT is used to indicate that the application enters a section with very high complexity, during which the XR Runtime is allowed to step up beyond the thermally sustainable range. As not thermally sustainable, this level is meant to be used for short-term durations (< 30 seconds).
(*) If the application chooses one of the two sustainable levels
(XR_PERF_SETTINGS_LEVEL_SUSTAINED_LOW_EXT or
XR_PERF_SETTINGS_LEVEL_SUSTAINED_HIGH_EXT), the device may still run
into thermal limits under non-nominal circumstances (high room temperature,
additional background loads, extended device operation) and therefore the
application should also in the sustainable modes be prepared to react to
performance notifications (in particular
XR_PERF_SETTINGS_NOTIF_LEVEL_WARNING_EXT and
XR_PERF_SETTINGS_NOTIF_LEVEL_IMPAIRED_EXT in the thermal sub-domain,
see Notification level definition).
The XR Runtime shall select XR_PERF_SETTINGS_LEVEL_SUSTAINED_HIGH_EXT
as the default hint if the application does not provide any.
The function to call for setting performance level hints is
xrPerfSettingsSetPerformanceLevelEXT.
// Provided by XR_EXT_performance_settings
XrResult xrPerfSettingsSetPerformanceLevelEXT(
XrSession session,
XrPerfSettingsDomainEXT domain,
XrPerfSettingsLevelEXT level);
Example of using the short-term boost level hint
For a limited amount of time, both the Mobile and PC systems can provide a higher level of performance than is thermally sustainable. It is desirable to make this extra computational power available for short complex scenes, then go back to a sustainable lower level. This section describes means for the application developer to apply settings directing the runtime to boost performance for a short-term duration.
The application developer must pay attention to keep these boost periods very short and carefully monitor the side effects, which may vary a lot between different hardware systems.
extern XrInstance instance; (1)
extern XrSession session;
// Get function pointer for xrPerfSettingsSetPerformanceLevelEXT
PFN_xrPerfSettingsSetPerformanceLevelEXT pfnPerfSettingsSetPerformanceLevelEXT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrPerfSettingsSetPerformanceLevelEXT",
(PFN_xrVoidFunction*)(
&pfnPerfSettingsSetPerformanceLevelEXT)));
// before entering the high complexity section
pfnPerfSettingsSetPerformanceLevelEXT(session, XR_PERF_SETTINGS_DOMAIN_CPU_EXT, XR_PERF_SETTINGS_LEVEL_BOOST_EXT); (2)
pfnPerfSettingsSetPerformanceLevelEXT(session, XR_PERF_SETTINGS_DOMAIN_GPU_EXT, XR_PERF_SETTINGS_LEVEL_BOOST_EXT);
// entering the high complexity section
// ... running
// end of the high complexity section
pfnPerfSettingsSetPerformanceLevelEXT(session, XR_PERF_SETTINGS_DOMAIN_CPU_EXT, XR_PERF_SETTINGS_LEVEL_SUSTAINED_HIGH_EXT); (3)
pfnPerfSettingsSetPerformanceLevelEXT(session, XR_PERF_SETTINGS_DOMAIN_GPU_EXT, XR_PERF_SETTINGS_LEVEL_SUSTAINED_HIGH_EXT);
| 1 | we assume that instance and session are initialized and their
handles are available |
| 2 | setting performance level to XR_PERF_SETTINGS_LEVEL_BOOST_EXT on
both CPU and GPU domains |
| 3 | going back to the sustainable
XR_PERF_SETTINGS_LEVEL_SUSTAINED_HIGH_EXT |
Example of using the sustained low level hint for the CPU domain
extern XrInstance instance; (1)
extern XrSession session;
// Get function pointer for xrPerfSettingsSetPerformanceLevelEXT
PFN_xrPerfSettingsSetPerformanceLevelEXT pfnPerfSettingsSetPerformanceLevelEXT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrPerfSettingsSetPerformanceLevelEXT",
(PFN_xrVoidFunction*)(
&pfnPerfSettingsSetPerformanceLevelEXT)));
// before entering a low CPU complexity section
pfnPerfSettingsSetPerformanceLevelEXT(session, XR_PERF_SETTINGS_DOMAIN_CPU_EXT, XR_PERF_SETTINGS_LEVEL_SUSTAINED_LOW_EXT);
pfnPerfSettingsSetPerformanceLevelEXT(session, XR_PERF_SETTINGS_DOMAIN_GPU_EXT, XR_PERF_SETTINGS_LEVEL_SUSTAINED_HIGH_EXT); (2)
// entering the low complexity section
// ... running
// end of the low complexity section
pfnPerfSettingsSetPerformanceLevelEXT(session, XR_PERF_SETTINGS_DOMAIN_CPU_EXT, XR_PERF_SETTINGS_LEVEL_SUSTAINED_HIGH_EXT); (3)
| 1 | we assume that instance and session are initialized and their
handles are available |
| 2 | the developer may choose to only reduce CPU domain and keep the GPU
domain at XR_PERF_SETTINGS_LEVEL_SUSTAINED_HIGH_EXT |
| 3 | going back to the sustainable
XR_PERF_SETTINGS_LEVEL_SUSTAINED_HIGH_EXT for CPU |
12.41.3. Receiving Performance Related Notifications
The XR runtime shall provide performance related notifications to the application in the following situations:
- the compositing performance within the runtime has reached a new level, either improved or degraded from the previous one (subDomain is set to XR_PERF_SETTINGS_SUB_DOMAIN_COMPOSITING_EXT)
- the application rendering performance has reached a new level, either improved or degraded from the previous one (subDomain is set to XR_PERF_SETTINGS_SUB_DOMAIN_RENDERING_EXT)
- the temperature of the device has reached a new level, either improved or degraded from the previous one (subDomain is set to XR_PERF_SETTINGS_SUB_DOMAIN_THERMAL_EXT).
When degradation is observed, the application should take measures to reduce
its workload, helping the compositing or rendering subDomain to meet
its deadlines, or the thermal subDomain to avoid or stop throttling.
When improvement is observed, the application can potentially roll back some
of its mitigations.
// Provided by XR_EXT_performance_settings
typedef struct XrEventDataPerfSettingsEXT {
XrStructureType type;
const void* next;
XrPerfSettingsDomainEXT domain;
XrPerfSettingsSubDomainEXT subDomain;
XrPerfSettingsNotificationLevelEXT fromLevel;
XrPerfSettingsNotificationLevelEXT toLevel;
} XrEventDataPerfSettingsEXT;
// Provided by XR_EXT_performance_settings
typedef enum XrPerfSettingsSubDomainEXT {
XR_PERF_SETTINGS_SUB_DOMAIN_COMPOSITING_EXT = 1,
XR_PERF_SETTINGS_SUB_DOMAIN_RENDERING_EXT = 2,
XR_PERF_SETTINGS_SUB_DOMAIN_THERMAL_EXT = 3,
XR_PERF_SETTINGS_SUB_DOMAIN_MAX_ENUM_EXT = 0x7FFFFFFF
} XrPerfSettingsSubDomainEXT;
Compositing Sub-Domain
One of the major functions the runtime shall provide is the timely
compositing of the submitted layers in the background.
The runtime has to share the CPU and GPU system resources for this operation
with the application.
Since this is extremely time sensitive - the headroom is only a few
milliseconds - the runtime may have to ask the application via notifications
to cooperate and relinquish some usage of the indicated resource (CPU or GPU
domain).
Performance issues in this area that the runtime notices are reported to the
application with the subDomain set to
XR_PERF_SETTINGS_SUB_DOMAIN_COMPOSITING_EXT.
Rendering Sub-Domain
The application submits rendered layers to the runtime for compositing.
Performance issues in this area that the runtime notices (e.g. missed
submission deadlines) are reported to the application with the
subDomain set to XR_PERF_SETTINGS_SUB_DOMAIN_RENDERING_EXT.
Thermal Sub-Domain
XR applications run at a high-performance level during long periods of time, across a game or an entire movie session. As form factors shrink, especially on mobile solutions, the risk of reaching die thermal runaway or reaching the limits on skin and battery temperatures increases. When thermal limits are reached, the device mitigates the heat generation leading to severe performance reductions, which greatly affects user experience (dropped frames, high latency).
Rather than reacting once frames are already being dropped, the application should be encouraged to take proactive measures.
The performance notification with the subDomain set to
XR_PERF_SETTINGS_SUB_DOMAIN_THERMAL_EXT provides an early warning
allowing the application to take mitigation actions.
Notification level definition
The levels are defined as follows:
// Provided by XR_EXT_performance_settings, XR_EXT_thermal_query
typedef enum XrPerfSettingsNotificationLevelEXT {
XR_PERF_SETTINGS_NOTIF_LEVEL_NORMAL_EXT = 0,
XR_PERF_SETTINGS_NOTIF_LEVEL_WARNING_EXT = 25,
XR_PERF_SETTINGS_NOTIF_LEVEL_IMPAIRED_EXT = 75,
XR_PERF_SETTINGS_NOTIFICATION_LEVEL_MAX_ENUM_EXT = 0x7FFFFFFF
} XrPerfSettingsNotificationLevelEXT;
- XR_PERF_SETTINGS_NOTIF_LEVEL_NORMAL_EXT notifies that the sub-domain has reached a level where no further actions other than those currently applied are necessary.
- XR_PERF_SETTINGS_NOTIF_LEVEL_WARNING_EXT notifies that the sub-domain has reached an early warning level where the application should start proactive mitigation actions with the goal of returning to the XR_PERF_SETTINGS_NOTIF_LEVEL_NORMAL_EXT level.
- XR_PERF_SETTINGS_NOTIF_LEVEL_IMPAIRED_EXT notifies that the sub-domain has reached a critical level with significant performance degradation. The application should take drastic mitigation action.
The above definitions summarize the broad interpretation of the notification levels; however, sub-domain specific definitions of each level and their transitions are specified below:
- XR_PERF_SETTINGS_NOTIF_LEVEL_NORMAL_EXT
  - For the compositing sub-domain, XR_PERF_SETTINGS_NOTIF_LEVEL_NORMAL_EXT indicates that the composition headroom is consistently being met with sufficient margin. Getting into XR_PERF_SETTINGS_NOTIF_LEVEL_NORMAL_EXT from XR_PERF_SETTINGS_NOTIF_LEVEL_WARNING_EXT indicates that the composition headroom was consistently met with sufficient margin during a sufficient time period.
  - For the rendering sub-domain, XR_PERF_SETTINGS_NOTIF_LEVEL_NORMAL_EXT indicates that frames are being submitted in time to be used by the compositor. Getting into XR_PERF_SETTINGS_NOTIF_LEVEL_NORMAL_EXT from XR_PERF_SETTINGS_NOTIF_LEVEL_WARNING_EXT indicates that during a sufficient time period, none of the due layers was too late to be picked up by the compositor.
  - For the thermal sub-domain, XR_PERF_SETTINGS_NOTIF_LEVEL_NORMAL_EXT indicates that the current load should be sustainable in the near future. Getting into XR_PERF_SETTINGS_NOTIF_LEVEL_NORMAL_EXT from XR_PERF_SETTINGS_NOTIF_LEVEL_WARNING_EXT indicates that the runtime does not expect any further temperature mitigation action on the application side, other than the current ones.
- XR_PERF_SETTINGS_NOTIF_LEVEL_WARNING_EXT
  - For the compositing sub-domain, XR_PERF_SETTINGS_NOTIF_LEVEL_WARNING_EXT indicates that the compositing headroom of the current frame was met but the margin is considered insufficient by the runtime, and the application should reduce its workload in the notified domain to solve this problem. Getting into XR_PERF_SETTINGS_NOTIF_LEVEL_WARNING_EXT from XR_PERF_SETTINGS_NOTIF_LEVEL_IMPAIRED_EXT indicates that the compositing deadline was not missed during a sufficient time period.
  - For the rendering sub-domain, XR_PERF_SETTINGS_NOTIF_LEVEL_WARNING_EXT indicates that at least one layer is regularly late to be picked up by the compositor, resulting in a degraded user experience, and that the application should take action to consistently provide frames in a more timely manner. Getting into XR_PERF_SETTINGS_NOTIF_LEVEL_WARNING_EXT from XR_PERF_SETTINGS_NOTIF_LEVEL_IMPAIRED_EXT indicates that the runtime has stopped any of its own independent actions which are tied to the XR_PERF_SETTINGS_NOTIF_LEVEL_IMPAIRED_EXT level.
  - For the thermal sub-domain, XR_PERF_SETTINGS_NOTIF_LEVEL_WARNING_EXT indicates that the runtime expects the device to overheat under the current load, and that the application should take mitigating action in order to prevent thermal throttling. Getting into XR_PERF_SETTINGS_NOTIF_LEVEL_WARNING_EXT from XR_PERF_SETTINGS_NOTIF_LEVEL_IMPAIRED_EXT indicates that the underlying system thermal throttling has stopped.
- XR_PERF_SETTINGS_NOTIF_LEVEL_IMPAIRED_EXT
  - For the compositing sub-domain, XR_PERF_SETTINGS_NOTIF_LEVEL_IMPAIRED_EXT indicates that composition can no longer be maintained under the current workload. The runtime may take independent action that will interfere with the application (e.g. limiting the framerate, ignoring submitted layers, or shutting down the application) in order to correct this problem.
  - For the rendering sub-domain, XR_PERF_SETTINGS_NOTIF_LEVEL_IMPAIRED_EXT indicates that at least one layer is too often late to be picked up by the compositor, and consequently the runtime may take independent action that will interfere with the application (e.g. informing the user that the application is not responding, displaying a tracking environment in order to maintain user orientation).
  - For the thermal sub-domain, XR_PERF_SETTINGS_NOTIF_LEVEL_IMPAIRED_EXT indicates that the underlying system is taking measures, such as thermal throttling, to reduce the temperature, impacting the XR experience.
Leaving XR_PERF_SETTINGS_NOTIF_LEVEL_IMPAIRED_EXT indicates that any
mitigating actions by the runtime (e.g. down-clocking the device to stay
within thermal limits) have ended.
Performance Settings API Reference
xrPerfSettingsSetPerformanceLevelEXT
// Provided by XR_EXT_performance_settings
XrResult xrPerfSettingsSetPerformanceLevelEXT(
XrSession session,
XrPerfSettingsDomainEXT domain,
XrPerfSettingsLevelEXT level);
Refer to Performance level hint definition for the definition of the level enumerations.
XrEventDataPerfSettingsEXT
The XrEventDataPerfSettingsEXT structure is defined as:
// Provided by XR_EXT_performance_settings
typedef struct XrEventDataPerfSettingsEXT {
XrStructureType type;
const void* next;
XrPerfSettingsDomainEXT domain;
XrPerfSettingsSubDomainEXT subDomain;
XrPerfSettingsNotificationLevelEXT fromLevel;
XrPerfSettingsNotificationLevelEXT toLevel;
} XrEventDataPerfSettingsEXT;
// Provided by XR_EXT_performance_settings, XR_EXT_thermal_query
typedef enum XrPerfSettingsDomainEXT {
XR_PERF_SETTINGS_DOMAIN_CPU_EXT = 1,
XR_PERF_SETTINGS_DOMAIN_GPU_EXT = 2,
XR_PERF_SETTINGS_DOMAIN_MAX_ENUM_EXT = 0x7FFFFFFF
} XrPerfSettingsDomainEXT;
// Provided by XR_EXT_performance_settings
typedef enum XrPerfSettingsSubDomainEXT {
XR_PERF_SETTINGS_SUB_DOMAIN_COMPOSITING_EXT = 1,
XR_PERF_SETTINGS_SUB_DOMAIN_RENDERING_EXT = 2,
XR_PERF_SETTINGS_SUB_DOMAIN_THERMAL_EXT = 3,
XR_PERF_SETTINGS_SUB_DOMAIN_MAX_ENUM_EXT = 0x7FFFFFFF
} XrPerfSettingsSubDomainEXT;
// Provided by XR_EXT_performance_settings, XR_EXT_thermal_query
typedef enum XrPerfSettingsNotificationLevelEXT {
XR_PERF_SETTINGS_NOTIF_LEVEL_NORMAL_EXT = 0,
XR_PERF_SETTINGS_NOTIF_LEVEL_WARNING_EXT = 25,
XR_PERF_SETTINGS_NOTIF_LEVEL_IMPAIRED_EXT = 75,
XR_PERF_SETTINGS_NOTIFICATION_LEVEL_MAX_ENUM_EXT = 0x7FFFFFFF
} XrPerfSettingsNotificationLevelEXT;
Version History
-
Revision 1, 2017-11-30 (Armelle Laine)
-
Revision 2, 2021-04-13 (Rylie Pavlik)
-
Correctly show function pointer retrieval in sample code
-
Fix sample code callouts
-
-
Revision 3, 2021-04-14 (Rylie Pavlik)
-
Fix missing error code
-
-
Revision 4, 2022-10-26 (Rylie Pavlik)
-
Update XML markup to correct the generated valid usage
-
12.42. XR_EXT_plane_detection
- Name String
-
XR_EXT_plane_detection - Extension Type
-
Instance extension
- Registered Extension Number
-
430
- Revision
-
2
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2024-05-23
- Contributors
-
Aitor Font, Qualcomm
Daniel Guttenberg, Qualcomm
Maximilian Mayer, Qualcomm
Martin Renschler, Qualcomm
Karthik Nagarajan, Qualcomm
Ron Bessems, Magic Leap
Karthik Kadappan, Magic Leap
12.42.2. Runtime support
To determine whether the runtime supports plane detection, xrGetSystemProperties can be used.
XrSystemPlaneDetectionPropertiesEXT provides information on the features supported by the runtime.
// Provided by XR_EXT_plane_detection
typedef struct XrSystemPlaneDetectionPropertiesEXT {
XrStructureType type;
void* next;
XrPlaneDetectionCapabilityFlagsEXT supportedFeatures;
} XrSystemPlaneDetectionPropertiesEXT;
The XrSystemPlaneDetectionPropertiesEXT::supportedFeatures
member is of the following type, and contains a bitwise-OR of zero or more
of the bits defined in XrPlaneDetectionCapabilityFlagBitsEXT.
// Provided by XR_EXT_plane_detection
typedef XrFlags64 XrPlaneDetectionCapabilityFlagsEXT;
Valid bits for XrPlaneDetectionCapabilityFlagsEXT are defined by XrPlaneDetectionCapabilityFlagBitsEXT, which is specified as:
// Flag bits for XrPlaneDetectionCapabilityFlagsEXT
static const XrPlaneDetectionCapabilityFlagsEXT XR_PLANE_DETECTION_CAPABILITY_PLANE_DETECTION_BIT_EXT = 0x00000001;
static const XrPlaneDetectionCapabilityFlagsEXT XR_PLANE_DETECTION_CAPABILITY_PLANE_HOLES_BIT_EXT = 0x00000002;
static const XrPlaneDetectionCapabilityFlagsEXT XR_PLANE_DETECTION_CAPABILITY_SEMANTIC_CEILING_BIT_EXT = 0x00000004;
static const XrPlaneDetectionCapabilityFlagsEXT XR_PLANE_DETECTION_CAPABILITY_SEMANTIC_FLOOR_BIT_EXT = 0x00000008;
static const XrPlaneDetectionCapabilityFlagsEXT XR_PLANE_DETECTION_CAPABILITY_SEMANTIC_WALL_BIT_EXT = 0x00000010;
static const XrPlaneDetectionCapabilityFlagsEXT XR_PLANE_DETECTION_CAPABILITY_SEMANTIC_PLATFORM_BIT_EXT = 0x00000020;
static const XrPlaneDetectionCapabilityFlagsEXT XR_PLANE_DETECTION_CAPABILITY_ORIENTATION_BIT_EXT = 0x00000040;
The flag bits have the following meanings:
| Flag | Description |
|---|---|
| XR_PLANE_DETECTION_CAPABILITY_PLANE_DETECTION_BIT_EXT | The runtime supports plane detection. |
| XR_PLANE_DETECTION_CAPABILITY_PLANE_HOLES_BIT_EXT | The runtime supports detecting holes in planes. |
| XR_PLANE_DETECTION_CAPABILITY_SEMANTIC_CEILING_BIT_EXT | The runtime supports the XR_PLANE_DETECTOR_SEMANTIC_TYPE_CEILING_EXT classification. |
| XR_PLANE_DETECTION_CAPABILITY_SEMANTIC_FLOOR_BIT_EXT | The runtime supports the XR_PLANE_DETECTOR_SEMANTIC_TYPE_FLOOR_EXT classification. |
| XR_PLANE_DETECTION_CAPABILITY_SEMANTIC_WALL_BIT_EXT | The runtime supports the XR_PLANE_DETECTOR_SEMANTIC_TYPE_WALL_EXT classification. |
| XR_PLANE_DETECTION_CAPABILITY_SEMANTIC_PLATFORM_BIT_EXT | The runtime supports the XR_PLANE_DETECTOR_SEMANTIC_TYPE_PLATFORM_EXT classification. |
| XR_PLANE_DETECTION_CAPABILITY_ORIENTATION_BIT_EXT | The runtime supports filtering planes by orientation. |
12.42.3. Create a plane detection handle
// Provided by XR_EXT_plane_detection
XR_DEFINE_HANDLE(XrPlaneDetectorEXT)
The XrPlaneDetectorEXT handle represents the resources for detecting one or more planes.
An application may create separate XrPlaneDetectorEXT handles for different sets of planes. This handle can be used to detect planes using other functions in this extension.
Plane detection provides locations of planes in the scene.
The xrCreatePlaneDetectorEXT function is defined as:
// Provided by XR_EXT_plane_detection
XrResult xrCreatePlaneDetectorEXT(
XrSession session,
const XrPlaneDetectorCreateInfoEXT* createInfo,
XrPlaneDetectorEXT* planeDetector);
An application creates an XrPlaneDetectorEXT handle using the xrCreatePlaneDetectorEXT function.
If the system does not support plane detection, the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from xrCreatePlaneDetectorEXT.
The XrPlaneDetectorCreateInfoEXT structure is defined as:
// Provided by XR_EXT_plane_detection
typedef struct XrPlaneDetectorCreateInfoEXT {
XrStructureType type;
const void* next;
XrPlaneDetectorFlagsEXT flags;
} XrPlaneDetectorCreateInfoEXT;
The XrPlaneDetectorCreateInfoEXT structure describes the information to create an XrPlaneDetectorEXT handle.
The XrPlaneDetectorCreateInfoEXT::flags member is of the
following type, and contains a bitwise-OR of zero or more of the bits
defined in XrPlaneDetectorFlagBitsEXT.
// Provided by XR_EXT_plane_detection
typedef XrFlags64 XrPlaneDetectorFlagsEXT;
Valid bits for XrPlaneDetectorFlagsEXT are defined by XrPlaneDetectorFlagBitsEXT, which is specified as:
// Flag bits for XrPlaneDetectorFlagsEXT
static const XrPlaneDetectorFlagsEXT XR_PLANE_DETECTOR_ENABLE_CONTOUR_BIT_EXT = 0x00000001;
The flag bits have the following meanings:
| Flag | Description |
|---|---|
| XR_PLANE_DETECTOR_ENABLE_CONTOUR_BIT_EXT | The detector will generate contour (polygon) data for detected planes. |
The xrDestroyPlaneDetectorEXT function is defined as:
// Provided by XR_EXT_plane_detection
XrResult xrDestroyPlaneDetectorEXT(
XrPlaneDetectorEXT planeDetector);
The xrDestroyPlaneDetectorEXT function releases the planeDetector
and the underlying resources when the application is finished with plane
detection.
12.42.4. Detecting planes
The xrBeginPlaneDetectionEXT function is defined as:
// Provided by XR_EXT_plane_detection
XrResult xrBeginPlaneDetectionEXT(
XrPlaneDetectorEXT planeDetector,
const XrPlaneDetectorBeginInfoEXT* beginInfo);
The xrBeginPlaneDetectionEXT function begins the detection of planes in the scene. Detecting planes in a scene is an asynchronous operation. xrGetPlaneDetectionStateEXT can be used to determine if the query has finished. Once it has finished the results may be retrieved via xrGetPlaneDetectionsEXT. If a detection has already been started on a plane detector handle, calling xrBeginPlaneDetectionEXT again on the same handle will cancel the operation in progress and start a new detection with the new filter parameters.
The bounding volume is resolved and fixed relative to LOCAL space at the
time of the call to xrBeginPlaneDetectionEXT using
XrPlaneDetectorBeginInfoEXT::baseSpace,
XrPlaneDetectorBeginInfoEXT::time,
XrPlaneDetectorBeginInfoEXT::boundingBoxPose and
XrPlaneDetectorBeginInfoEXT::boundingBoxExtent.
The runtime must resolve the location defined by
XrPlaneDetectorBeginInfoEXT::baseSpace at the time of the call.
The XrPlaneDetectorBeginInfoEXT::boundingBoxPose is the pose of
the center of the box defined by
XrPlaneDetectorBeginInfoEXT::boundingBoxExtent.
The runtime must return XR_ERROR_SPACE_NOT_LOCATABLE_EXT if the
XrPlaneDetectorBeginInfoEXT::baseSpace is not locatable at the
time of the call.
The XrPlaneDetectorBeginInfoEXT structure describes the information to detect planes.
// Provided by XR_EXT_plane_detection
typedef struct XrPlaneDetectorBeginInfoEXT {
XrStructureType type;
const void* next;
XrSpace baseSpace;
XrTime time;
uint32_t orientationCount;
const XrPlaneDetectorOrientationEXT* orientations;
uint32_t semanticTypeCount;
const XrPlaneDetectorSemanticTypeEXT* semanticTypes;
uint32_t maxPlanes;
float minArea;
XrPosef boundingBoxPose;
XrExtent3DfEXT boundingBoxExtent;
} XrPlaneDetectorBeginInfoEXT;
The xrGetPlaneDetectionStateEXT function is defined as:
// Provided by XR_EXT_plane_detection
XrResult xrGetPlaneDetectionStateEXT(
XrPlaneDetectorEXT planeDetector,
XrPlaneDetectionStateEXT* state);
The xrGetPlaneDetectionStateEXT function retrieves the state of the plane query and must be called before calling xrGetPlaneDetectionsEXT.
If the plane detection has not yet finished state must be
XR_PLANE_DETECTION_STATE_PENDING_EXT.
If the plane detection has finished state must be
XR_PLANE_DETECTION_STATE_DONE_EXT.
If no plane detection was previously started
XR_PLANE_DETECTION_STATE_NONE_EXT must be returned.
For all three states the function must return XR_SUCCESS.
When a query error occurs, the function must still return XR_SUCCESS and
the appropriate error state value must be set.
The xrGetPlaneDetectionsEXT function is defined as:
// Provided by XR_EXT_plane_detection
XrResult xrGetPlaneDetectionsEXT(
XrPlaneDetectorEXT planeDetector,
const XrPlaneDetectorGetInfoEXT* info,
XrPlaneDetectorLocationsEXT* locations);
xrGetPlaneDetectionsEXT must return XR_ERROR_CALL_ORDER_INVALID
if the detector state reported by xrGetPlaneDetectionStateEXT is not
XR_PLANE_DETECTION_STATE_DONE_EXT for the current query started by
xrBeginPlaneDetectionEXT.
If the XrPlaneDetectorGetInfoEXT::baseSpace is not locatable
XR_ERROR_SPACE_NOT_LOCATABLE_EXT must be returned.
Once xrBeginPlaneDetectionEXT is called again, the previous results for that handle are no longer available. The application should cache them before calling xrBeginPlaneDetectionEXT again if it needs access to that data while waiting for updated detection results.
Upon the completion of a detection cycle (xrBeginPlaneDetectionEXT, xrGetPlaneDetectionStateEXT to xrGetPlaneDetectionsEXT) the runtime must keep a snapshot of the plane data and no data may be modified. Calling xrGetPlaneDetectionsEXT multiple times with the same baseSpace and time must return the same plane pose data.
The current snapshot, if any, must be discarded upon calling xrBeginPlaneDetectionEXT.
If the XrEventDataReferenceSpaceChangePending is queued and the changeTime elapsed while the application is holding cached data the application may use the event data to adjust the poses accordingly.
XrPlaneDetectorGetInfoEXT structure contains the information required to retrieve the detected planes.
// Provided by XR_EXT_plane_detection
typedef struct XrPlaneDetectorGetInfoEXT {
XrStructureType type;
const void* next;
XrSpace baseSpace;
XrTime time;
} XrPlaneDetectorGetInfoEXT;
XrPlaneDetectorLocationsEXT structure contains information on the detected planes.
// Provided by XR_EXT_plane_detection
typedef struct XrPlaneDetectorLocationsEXT {
XrStructureType type;
void* next;
uint32_t planeLocationCapacityInput;
uint32_t planeLocationCountOutput;
XrPlaneDetectorLocationEXT* planeLocations;
} XrPlaneDetectorLocationsEXT;
XrPlaneDetectorLocationEXT structure describes the position and orientation of a plane.
// Provided by XR_EXT_plane_detection
typedef struct XrPlaneDetectorLocationEXT {
XrStructureType type;
void* next;
uint64_t planeId;
XrSpaceLocationFlags locationFlags;
XrPosef pose;
XrExtent2Df extents;
XrPlaneDetectorOrientationEXT orientation;
XrPlaneDetectorSemanticTypeEXT semanticType;
uint32_t polygonBufferCount;
} XrPlaneDetectorLocationEXT;
The XrPlaneDetectorOrientationEXT enumeration identifies the different general categories of orientations of detected planes.
// Provided by XR_EXT_plane_detection
typedef enum XrPlaneDetectorOrientationEXT {
XR_PLANE_DETECTOR_ORIENTATION_HORIZONTAL_UPWARD_EXT = 0,
XR_PLANE_DETECTOR_ORIENTATION_HORIZONTAL_DOWNWARD_EXT = 1,
XR_PLANE_DETECTOR_ORIENTATION_VERTICAL_EXT = 2,
XR_PLANE_DETECTOR_ORIENTATION_ARBITRARY_EXT = 3,
XR_PLANE_DETECTOR_ORIENTATION_MAX_ENUM_EXT = 0x7FFFFFFF
} XrPlaneDetectorOrientationEXT;
The enums have the following meanings:
| Enum | Description |
|---|---|
| XR_PLANE_DETECTOR_ORIENTATION_HORIZONTAL_UPWARD_EXT | The detected plane is horizontal and faces upward (e.g. floor). |
| XR_PLANE_DETECTOR_ORIENTATION_HORIZONTAL_DOWNWARD_EXT | The detected plane is horizontal and faces downward (e.g. ceiling). |
| XR_PLANE_DETECTOR_ORIENTATION_VERTICAL_EXT | The detected plane is vertical (e.g. wall). |
| XR_PLANE_DETECTOR_ORIENTATION_ARBITRARY_EXT | The detected plane has an arbitrary, non-vertical and non-horizontal orientation. |
The XrPlaneDetectorSemanticTypeEXT enumeration identifies the different semantic types of detected planes.
// Provided by XR_EXT_plane_detection
typedef enum XrPlaneDetectorSemanticTypeEXT {
XR_PLANE_DETECTOR_SEMANTIC_TYPE_UNDEFINED_EXT = 0,
XR_PLANE_DETECTOR_SEMANTIC_TYPE_CEILING_EXT = 1,
XR_PLANE_DETECTOR_SEMANTIC_TYPE_FLOOR_EXT = 2,
XR_PLANE_DETECTOR_SEMANTIC_TYPE_WALL_EXT = 3,
XR_PLANE_DETECTOR_SEMANTIC_TYPE_PLATFORM_EXT = 4,
XR_PLANE_DETECTOR_SEMANTIC_TYPE_MAX_ENUM_EXT = 0x7FFFFFFF
} XrPlaneDetectorSemanticTypeEXT;
The enums have the following meanings:
| Enum | Description |
|---|---|
| XR_PLANE_DETECTOR_SEMANTIC_TYPE_UNDEFINED_EXT | The runtime was unable to classify this plane. |
| XR_PLANE_DETECTOR_SEMANTIC_TYPE_CEILING_EXT | The detected plane is a ceiling. |
| XR_PLANE_DETECTOR_SEMANTIC_TYPE_FLOOR_EXT | The detected plane is a floor. |
| XR_PLANE_DETECTOR_SEMANTIC_TYPE_WALL_EXT | The detected plane is a wall. |
| XR_PLANE_DETECTOR_SEMANTIC_TYPE_PLATFORM_EXT | The detected plane is a platform, like a table. |
The XrPlaneDetectionStateEXT enumeration identifies the possible states of the plane detector.
// Provided by XR_EXT_plane_detection
typedef enum XrPlaneDetectionStateEXT {
XR_PLANE_DETECTION_STATE_NONE_EXT = 0,
XR_PLANE_DETECTION_STATE_PENDING_EXT = 1,
XR_PLANE_DETECTION_STATE_DONE_EXT = 2,
XR_PLANE_DETECTION_STATE_ERROR_EXT = 3,
XR_PLANE_DETECTION_STATE_FATAL_EXT = 4,
XR_PLANE_DETECTION_STATE_MAX_ENUM_EXT = 0x7FFFFFFF
} XrPlaneDetectionStateEXT;
12.42.5. Read plane polygon vertices
The xrGetPlanePolygonBufferEXT function is defined as:
// Provided by XR_EXT_plane_detection
XrResult xrGetPlanePolygonBufferEXT(
XrPlaneDetectorEXT planeDetector,
uint64_t planeId,
uint32_t polygonBufferIndex,
XrPlaneDetectorPolygonBufferEXT* polygonBuffer);
The xrGetPlanePolygonBufferEXT function retrieves the plane’s polygon
buffer for the given planeId and polygonBufferIndex.
Calling xrGetPlanePolygonBufferEXT with polygonBufferIndex equal
to 0 must return the outside contour, if available.
Calls with non-zero indices less than
XrPlaneDetectorLocationEXT::polygonBufferCount must return
polygons corresponding to holes in the plane.
This feature may not be supported by all runtimes; check
XrSystemPlaneDetectionPropertiesEXT::supportedFeatures for
support.
Outside contour polygon vertices must be ordered in counter clockwise order. Vertices of holes must be ordered in clockwise order. The right-hand rule is used to determine the direction of the normal of this plane. The polygon contour data is relative to the pose of the plane and coplanar with it.
This function retrieves only polygon contours; the application needs to convert them into a regular mesh before rendering.
XrPlaneDetectorPolygonBufferEXT is an input/output structure for reading plane contour polygon vertices.
// Provided by XR_EXT_plane_detection
typedef struct XrPlaneDetectorPolygonBufferEXT {
XrStructureType type;
void* next;
uint32_t vertexCapacityInput;
uint32_t vertexCountOutput;
XrVector2f* vertices;
} XrPlaneDetectorPolygonBufferEXT;
The XrExtent3DfEXT structure is defined as:
// Provided by XR_EXT_plane_detection
// XrExtent3DfEXT is an alias for XrExtent3Df
typedef struct XrExtent3Df {
float width;
float height;
float depth;
} XrExtent3Df;
typedef XrExtent3Df XrExtent3DfEXT;
The XrExtent3DfEXT structure describes an axis-aligned three-dimensional floating-point extent. This structure is used for component values that may be fractional (floating-point). If used to represent physical distances, values must be in meters.
The width (X), height (Y) and depth (Z) values must be
non-negative.
12.42.6. Example code for locating planes
The following example code demonstrates how to detect planes relative to a local space.
XrInstance instance; // previously initialized
XrSystemId systemId; // previously initialized
XrSession session; // previously initialized
XrSpace localSpace; // previously initialized, e.g. from
// XR_REFERENCE_SPACE_TYPE_LOCAL
XrSpace viewSpace; // previously initialized, e.g. from
// XR_REFERENCE_SPACE_TYPE_VIEW
// The function pointers are previously initialized using
// xrGetInstanceProcAddr.
PFN_xrCreatePlaneDetectorEXT xrCreatePlaneDetectorEXT; // previously initialized
PFN_xrBeginPlaneDetectionEXT xrBeginPlaneDetectionEXT; // previously initialized
PFN_xrGetPlaneDetectionStateEXT xrGetPlaneDetectionStateEXT; // previously initialized
PFN_xrGetPlaneDetectionsEXT xrGetPlaneDetectionsEXT; // previously initialized
PFN_xrGetPlanePolygonBufferEXT xrGetPlanePolygonBufferEXT; // previously initialized
XrSystemPlaneDetectionPropertiesEXT planeDetectionProperties{XR_TYPE_SYSTEM_PLANE_DETECTION_PROPERTIES_EXT};
XrSystemProperties systemProperties{XR_TYPE_SYSTEM_PROPERTIES,
&planeDetectionProperties};
CHK_XR(xrGetSystemProperties(instance, systemId, &systemProperties));
if (!(planeDetectionProperties.supportedFeatures & XR_PLANE_DETECTION_CAPABILITY_PLANE_DETECTION_BIT_EXT )) {
// plane detection is not supported.
return;
}
// Create a plane detection
XrPlaneDetectorEXT planeDetector{};
{
XrPlaneDetectorCreateInfoEXT createInfo{ XR_TYPE_PLANE_DETECTOR_CREATE_INFO_EXT };
createInfo.flags = XR_PLANE_DETECTOR_ENABLE_CONTOUR_BIT_EXT;
CHK_XR(xrCreatePlaneDetectorEXT(session, &createInfo, &planeDetector));
}
bool queryRunning = false;
std::vector<XrPlaneDetectorOrientationEXT> orientations;
orientations.push_back(XR_PLANE_DETECTOR_ORIENTATION_HORIZONTAL_UPWARD_EXT);
orientations.push_back(XR_PLANE_DETECTOR_ORIENTATION_HORIZONTAL_DOWNWARD_EXT);
std::vector<XrPlaneDetectorLocationEXT> cachedPlaneLocations;
auto processPlanes = [&](const XrTime time) {
if (!queryRunning) {
XrPlaneDetectorBeginInfoEXT beginInfo{ XR_TYPE_PLANE_DETECTOR_BEGIN_INFO_EXT };
XrPosef pose{};
XrExtent3DfEXT extents = {10.0f, 10.0f, 10.0f};
pose.orientation.w = 1.0f;
beginInfo.baseSpace = viewSpace;
beginInfo.time = time;
beginInfo.boundingBoxPose = pose;
beginInfo.boundingBoxExtent = extents;
beginInfo.orientationCount = (uint32_t)orientations.size();
beginInfo.orientations = orientations.data();
CHK_XR(xrBeginPlaneDetectionEXT(planeDetector, &beginInfo));
queryRunning = true;
return;
} else {
XrPlaneDetectionStateEXT planeDetectionState;
if (xrGetPlaneDetectionStateEXT(planeDetector, &planeDetectionState)!=XR_SUCCESS) {
queryRunning = false;
return;
}
switch(planeDetectionState) {
case XR_PLANE_DETECTION_STATE_DONE_EXT:
// query has finished, process the results.
break;
case XR_PLANE_DETECTION_STATE_ERROR_EXT:
// something temporary went wrong, just
// retry
queryRunning = false;
return;
case XR_PLANE_DETECTION_STATE_FATAL_EXT:
// there was something wrong with the query
// do not retry.
// exit();
return;
case XR_PLANE_DETECTION_STATE_PENDING_EXT:
// query is still processing, come back on the next loop.
return;
default:
// restart the query.
queryRunning = false;
return;
}
XrPlaneDetectorGetInfoEXT planeGetInfo{};
planeGetInfo.type = XR_TYPE_PLANE_DETECTOR_GET_INFO_EXT;
planeGetInfo.time = time;
planeGetInfo.baseSpace = localSpace;
XrPlaneDetectorLocationsEXT planeLocations{};
planeLocations.type = XR_TYPE_PLANE_DETECTOR_LOCATIONS_EXT;
planeLocations.planeLocationCapacityInput = 0;
planeLocations.planeLocations = nullptr;
if (xrGetPlaneDetectionsEXT(planeDetector, &planeGetInfo, &planeLocations) != XR_SUCCESS ) {
queryRunning = false;
return;
}
if (planeLocations.planeLocationCountOutput > 0) {
queryRunning = false;
std::vector<XrPlaneDetectorLocationEXT>
locationsBuffer(planeLocations.planeLocationCountOutput,
{ XR_TYPE_PLANE_DETECTOR_LOCATION_EXT });
planeLocations.planeLocationCapacityInput =
planeLocations.planeLocationCountOutput;
planeLocations.planeLocations = locationsBuffer.data();
CHK_XR(xrGetPlaneDetectionsEXT(planeDetector, &planeGetInfo, &planeLocations));
cachedPlaneLocations = locationsBuffer;
for (uint32_t i = 0; i < planeLocations.planeLocationCountOutput; ++i) {
const XrPosef& planeInLocalSpace = planeLocations.planeLocations[i].pose;
auto planeId =
planeLocations.planeLocations[i].planeId;
auto polygonBufferCount =
planeLocations.planeLocations[i].polygonBufferCount;
for (uint32_t polygonBufferIndex=0; polygonBufferIndex < polygonBufferCount; polygonBufferIndex++) {
// polygonBufferIndex = 0 -> outside contour CCW
// polygonBufferIndex > 0 -> holes CW
XrPlaneDetectorPolygonBufferEXT polygonBuffer{};
polygonBuffer.vertexCapacityInput = 0;
CHK_XR(xrGetPlanePolygonBufferEXT(planeDetector,
planeId, polygonBufferIndex, &polygonBuffer));
// allocate space and use buffer
}
// plane planeInLocalSpace, planeType
}
}
}
};
while (1) {
// ...
// For every frame in frame loop
// ...
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
processPlanes(time);
// Draw the planes as needed from cachedPlaneLocations.
// drawPlanes(cachedPlaneLocations);
// ...
// Finish frame loop
// ...
}
New Object Types
New Enum Constants
XrObjectType enumeration is extended with:
-
XR_OBJECT_TYPE_PLANE_DETECTOR_EXT
XrStructureType enumeration is extended with:
-
XR_TYPE_PLANE_DETECTOR_CREATE_INFO_EXT -
XR_TYPE_PLANE_DETECTOR_BEGIN_INFO_EXT -
XR_TYPE_PLANE_DETECTOR_GET_INFO_EXT -
XR_TYPE_PLANE_DETECTOR_LOCATION_EXT -
XR_TYPE_PLANE_DETECTOR_POLYGON_BUFFER_EXT -
XR_TYPE_SYSTEM_PLANE_DETECTION_PROPERTIES_EXT
The XrResult enumeration is extended with:
-
XR_ERROR_SPACE_NOT_LOCATABLE_EXT -
XR_ERROR_PLANE_DETECTION_PERMISSION_DENIED_EXT
New Enums
New Structures
New Functions
Version History
-
Revision 1, 2023-06-26 (Ron Bessems)
-
Revision 2, 2024-05-23 (Ron Bessems)
-
Fix extents description and plane axis to match CTS and implementations.
-
12.43. XR_EXT_render_model
- Name String
-
XR_EXT_render_model - Extension Type
-
Instance extension
- Registered Extension Number
-
301
- Revision
-
1
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Contributors
-
Darryl Gough, Microsoft
Yin Li, Microsoft
Bryce Hutchings, Microsoft
Joe Ludwig, Valve
Nathan Nuber, Valve
Rylie Pavlik, Collabora
Wenlin Mao, Meta Platforms
Dan Willmott, Valve
Jakob Bornecrantz, Collabora
Leonard Tsai, Meta Platforms
Paulo Gomes, Samsung Electronics
Lachlan Ford, Google
12.43.1. Overview
This extension enables the application to retrieve a glTF 2.0 render model asset from a runtime and animate parts of the model. Other extensions depending on this one specify how to obtain a render model ID, and may specify further restrictions on glTF assets associated with IDs they produce.
Note
An OpenXR application typically uses a render model as follows:
12.43.2. Choices Delegated to Related Extensions
This extension is permissive in its design to accommodate a variety of use
cases for runtime-provided, application-rendered glTF assets.
Extensions that build on this one are encouraged to further specify render
model properties for render models associated with them.
Be aware that the required behavior of functions in this extension depends on
the extension from which a given XrRenderModelIdEXT was retrieved.
Some aspects for other extensions to specify include:
- glTF Extension Behavior
-
Whether a runtime must support providing an asset with no required glTF extensions (and thus not return
XR_ERROR_RENDER_MODEL_GLTF_EXTENSION_REQUIRED_EXT from xrCreateRenderModelAssetEXT for its models), or whether the runtime may return XR_ERROR_RENDER_MODEL_GLTF_EXTENSION_REQUIRED_EXT if specific glTF extensions are not supported. (If possible, indicate which extensions may be considered mandatory.)
- Alpha Blending
-
What values for
alphaMode are permissible in materials used by a render model asset. Some use cases are highly interactive and thus must not use an alpha mode of BLEND, to avoid mandating order-independent transparency processing between application content and render models.
- Animation
-
How any animation is performed: whether the simple node-pose-visibility mechanism described in this extension is used for animation, and/or whether and how standard glTF animations are used.
- External References
-
Whether external references for buffers and textures are permitted.
- Scenes
-
Whether the asset may contain more than one scene without specifying a default
scene, and if so, how to select the scene to render. Alternately, the number of scenes the asset may contain, and that the property value for defaultscenemust be defined. (An extension is encouraged to require the presence of thesceneproperty except in cases where the extension provides a way to explicitly compute which scene to use.)
- Complexity and Optimization
-
What hard limits exist for models associated with an extension, if any; any guidelines for asset size, complexity, and feature usage; and what type of usage to optimize assets for.
12.43.3. Render Model ID Atom and Handle
// Provided by XR_EXT_render_model
XR_DEFINE_ATOM(XrRenderModelIdEXT)
The render model ID is used to create an XrRenderModelEXT handle. Like other atom types in OpenXR, the ID itself should not cause the runtime to consume noticeable resources; it has no explicit lifetime of its own, and it has no persistence or identity beyond the lifetime of the XrSession handle it is retrieved from. Once the XrRenderModelEXT handle is created from the ID, the runtime may start to consume resources to load and track the state of the render model.
The application can use a valid XrRenderModelIdEXT to create an
XrRenderModelEXT handle.
The value XR_NULL_RENDER_MODEL_ID_EXT, equal to 0, is defined to be
an invalid XrRenderModelIdEXT value.
This XR_EXT_render_model extension does not specify how to obtain a
valid XrRenderModelIdEXT.
The application can obtain a valid ID through other extensions that depend
on this one.
Be aware that there is a potential pitfall when creating a dependent
extension, if the set of render models it enumerates has any in common with
the set of render models enumerated by another (existing) dependent
extension.
To avoid unexpected application behavior when the same
XrRenderModelIdEXT is enumerated from two separate functions, it is
recommended to do one of the following:
-
Extend the existing enumeration function through extending an input structure chain, rather than creating a new enumeration function.
-
Forbid simultaneous use of those two extensions in your new extension.
#define XR_NULL_RENDER_MODEL_ID_EXT 0
The ID XR_NULL_RENDER_MODEL_ID_EXT cannot be used to create an
XrRenderModelEXT handle, and is considered by definition to be an
invalid render model ID.
// Provided by XR_EXT_render_model
XR_DEFINE_HANDLE(XrRenderModelEXT)
The XrRenderModelEXT handle represents the resources to load and track the state of a render model, states of animatable parts, and a set of glTF extensions that the application is prepared to handle in a corresponding asset.
It does not directly represent the model’s data, however. See XrRenderModelAssetEXT for the handle representing the data for a render model asset, including names of animatable nodes.
An application can create an XrRenderModelEXT handle using the xrCreateRenderModelEXT function.
// Provided by XR_EXT_render_model
XrResult xrCreateRenderModelEXT(
XrSession session,
const XrRenderModelCreateInfoEXT* createInfo,
XrRenderModelEXT* renderModel);
If, when attempting to create the handle, the session does not support any
render model of the given render model ID requiring only glTF extensions
from the supplied glTF extension list (in
XrRenderModelCreateInfoEXT::gltfExtensions), the runtime must
return XR_ERROR_RENDER_MODEL_GLTF_EXTENSION_REQUIRED_EXT.
The XrRenderModelCreateInfoEXT structure is defined as:
// Provided by XR_EXT_render_model
typedef struct XrRenderModelCreateInfoEXT {
XrStructureType type;
const void* next;
XrRenderModelIdEXT renderModelId;
uint32_t gltfExtensionCount;
const char* const* gltfExtensions;
} XrRenderModelCreateInfoEXT;
The XrRenderModelCreateInfoEXT structure describes the information necessary to create an XrRenderModelEXT handle.
The input renderModelId value must be obtained from the same
XrSession used in xrCreateRenderModelEXT.
If the renderModelId value does not match one retrieved from the
relevant XrSession, the runtime must return error
XR_ERROR_RENDER_MODEL_ID_INVALID_EXT.
Note: There is a chance that a renderModelId value incorrectly
retained from another session may have the same numerical value as one
retrieved from the current XrSession.
In such instances, the runtime is unable to distinguish between the two IDs.
As a result, the runtime may mistakenly accept the ID and return a success
code, even though it represents an invalid usage.
Applications should be prepared to handle unexpected behaviors or outcomes
stemming from this scenario.
The application can create multiple XrRenderModelEXT handles using the same ID. The runtime must return the same render model states and asset UUID to these handles if they also share the same list of extensions, since they are sharing the same underlying render model ID. If the list of extensions differs, the runtime may expose a different number of animatable nodes, different asset data and UUID, etc.
The runtime must return
XR_ERROR_RENDER_MODEL_GLTF_EXTENSION_REQUIRED_EXT if the runtime is
unable to return a glTF asset that only requires extensions found in the
application’s list of supported glTF extensions.
Related extensions may require the application to support certain glTF extensions, in which case this error code indicates a failure to satisfy the requirement.
Alternately, related extensions may require the runtime to support providing base glTF assets without any required glTF extensions, in which case this error must not be returned by xrCreateRenderModelEXT in association with render model IDs retrieved from such extensions. See Delegated Choice: glTF Extension Behavior.
The order of the gltfExtensions array represents the application's
preference when multiple extensions are specified.
The runtime may select or modify the retrieved glTF assets based on this
array of extensions to optimize the glTF asset for this application.
Successful creation of this handle implies that the runtime is ready to report a fixed number and sequence of animatable node states for an asset satisfying the application’s criteria, and that asset data, with node names, meeting the criteria may be available during this session. The asset data and node names may still be unavailable at the time the XrRenderModelEXT handle is returned.
The xrDestroyRenderModelEXT function is defined as:
// Provided by XR_EXT_render_model
XrResult xrDestroyRenderModelEXT(
XrRenderModelEXT renderModel);
The xrDestroyRenderModelEXT function releases the XrRenderModelEXT handle and the underlying resources when the application is finished with render model tracking and animation.
Although any associated XrSpace handles created by
xrCreateRenderModelSpaceEXT are not destroyed upon calling
xrDestroyRenderModelEXT because the space is a child of the session
handle, any render model spaces created from a now-destroyed render model
handle must no longer return any XrSpaceLocationFlagBits or
XrSpaceVelocityFlagBits set in
XrSpaceLocation::locationFlags or
XrSpaceVelocity::velocityFlags, respectively.
That is, a space created from a render model handle that is now destroyed
becomes no longer locatable.
12.43.4. Get Render Model Properties
The xrGetRenderModelPropertiesEXT function is defined as:
// Provided by XR_EXT_render_model
XrResult xrGetRenderModelPropertiesEXT(
XrRenderModelEXT renderModel,
const XrRenderModelPropertiesGetInfoEXT* getInfo,
XrRenderModelPropertiesEXT* properties);
The properties of an XrRenderModelEXT handle are immutable and must not change for the lifetime of the handle.
XrRenderModelPropertiesGetInfoEXT is an input structure for xrGetRenderModelPropertiesEXT.
// Provided by XR_EXT_render_model
typedef struct XrRenderModelPropertiesGetInfoEXT {
XrStructureType type;
const void* next;
} XrRenderModelPropertiesGetInfoEXT;
The XrRenderModelPropertiesEXT structure is defined as:
// Provided by XR_EXT_render_model
typedef struct XrRenderModelPropertiesEXT {
XrStructureType type;
void* next;
XrUuidEXT cacheId;
uint32_t animatableNodeCount;
} XrRenderModelPropertiesEXT;
The XrRenderModelPropertiesEXT structure is an output structure for xrGetRenderModelPropertiesEXT.
Applications may use cacheId to avoid loading the exact same render
model asset twice when two or more XrRenderModelEXT handles use the
same glTF asset.
Applications may also use cacheId to cache preprocessed render model
asset data (and the associated animatableNodeCount node names) between
sessions: it is a persistent UUID, unlike the associated
XrRenderModelEXT handle or XrRenderModelIdEXT atom.
Note that runtimes may return a different UUID for a given logical entity
(e.g. hardware) in another session.
Within the corresponding XrSession, the association between an
XrRenderModelIdEXT value, the glTF extensions required by the
underlying model based on the contents of the
XrRenderModelCreateInfoEXT::gltfExtensions array, and the
cacheId, is constant.
A UUID cacheId corresponds to a unique binary asset, with a constant
animatableNodeCount, and is a function of the render model ID and the
required glTF extensions selected based on the supported glTF extension
contents reported by the application.
The runtime must set cacheId to a valid UUID value and subsequent
valid calls to xrGetRenderModelPropertiesEXT with the same
XrRenderModelEXT and XrRenderModelPropertiesGetInfoEXT values
must return the same values for cacheId while that ID remains valid
to use.
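As a sketch of the cross-session caching approach described above, the 16-byte UUID can be serialized to a stable hex string and used as a cache key (for example, as an on-disk filename for preprocessed asset data). The Uuid16 struct below is a hypothetical stand-in for XrUuidEXT so the sketch is self-contained; a real application would read the bytes of the XrUuidEXT value returned in XrRenderModelPropertiesEXT directly.

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <string>

// Minimal stand-in for XrUuidEXT (16 raw bytes), so this sketch compiles
// without the OpenXR headers.
struct Uuid16 {
    std::array<uint8_t, 16> data;
};

// Serialize the UUID to a stable lowercase hex string, suitable for use as a
// cache key or filename for cached preprocessed asset data.
std::string UuidToCacheKey(const Uuid16 &uuid) {
    std::string key;
    key.reserve(32);
    char buf[3];
    for (uint8_t byte : uuid.data) {
        std::snprintf(buf, sizeof(buf), "%02x", byte);
        key += buf;
    }
    return key;
}
```

Remember that a cached key is only usable in a later session after xrGetRenderModelPropertiesEXT in that session returns the same cacheId.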
12.43.5. Locate a Render Model in Space
The application can locate a render model by first creating an XrSpace handle from an XrRenderModelEXT handle.
The xrCreateRenderModelSpaceEXT function is defined as:
// Provided by XR_EXT_render_model
XrResult xrCreateRenderModelSpaceEXT(
XrSession session,
const XrRenderModelSpaceCreateInfoEXT* createInfo,
XrSpace* space);
The application can create an XrSpace handle that tracks a render model using xrCreateRenderModelSpaceEXT.
The origin of the underlying render model space is defined to be the origin of the glTF model.
Applications can use xrLocateSpace to locate the space created this
way in a desired base space, as with all other varieties of XrSpace
handles.
Unless otherwise specified by a related extension, the pose and locatability
of a render model space have no fixed relationship with any other object or
space, and should be used only to transform the associated model for
rendering.
If a render model space is not both position and orientation TRACKED when
location is queried for a time equal to the intended display time, this
indicates that the application is intended to not render that model in
that frame, unless otherwise specified by a related extension.
This is used in lieu of an explicit visibility state flag.
The XrRenderModelSpaceCreateInfoEXT structure is defined as:
// Provided by XR_EXT_render_model
typedef struct XrRenderModelSpaceCreateInfoEXT {
XrStructureType type;
const void* next;
XrRenderModelEXT renderModel;
} XrRenderModelSpaceCreateInfoEXT;
XrRenderModelSpaceCreateInfoEXT is an input structure for xrCreateRenderModelSpaceEXT.
12.43.6. Create Render Model Asset Handle
// Provided by XR_EXT_render_model
XR_DEFINE_HANDLE(XrRenderModelAssetEXT)
The XrRenderModelAssetEXT handle represents the in-runtime memory buffer for a glTF 2.0 render model asset, and the node names in that asset that correspond to the state array elements tracked by XrRenderModelEXT. The application may destroy the asset handle when it has finished retrieving the binary data and name array into its own memory, that is, after successful application of the two-call idiom with two calls to xrGetRenderModelAssetDataEXT.
The xrCreateRenderModelAssetEXT function is defined as:
// Provided by XR_EXT_render_model
XrResult xrCreateRenderModelAssetEXT(
XrSession session,
const XrRenderModelAssetCreateInfoEXT* createInfo,
XrRenderModelAssetEXT* asset);
An application can create an XrRenderModelAssetEXT handle using the
xrCreateRenderModelAssetEXT function.
The application must only call xrCreateRenderModelAssetEXT with a
UUID specified by parameter createInfo member
XrRenderModelAssetCreateInfoEXT::cacheId that has been retrieved
by calling xrGetRenderModelPropertiesEXT on a render model associated
with the current session.
If the application passes a UUID not retrieved in this way (for example,
passing a UUID received from a previous session), the runtime must return
XR_ERROR_RENDER_MODEL_ASSET_UNAVAILABLE_EXT.
This implies that the runtime must track which UUIDs it has returned to the
application in a given session to validate the input to this function.
If this function returns successfully, the runtime must have the asset data
and node names in memory for immediate return to the application in a
subsequent use of xrGetRenderModelAssetDataEXT.
The runtime may return XR_ERROR_RENDER_MODEL_ASSET_UNAVAILABLE_EXT if
the asset data has become unavailable for external reasons after the
creation of the relevant XrRenderModelEXT.
A valid asset handle enables the application to retrieve the data for the
glTF asset of the render model and the names of animatable nodes.
For a valid XrRenderModelPropertiesEXT::cacheId, the runtime
must return the same glTF asset data, even between different sessions, if
the cache ID is returned from both sessions.
Therefore, the application may rely on the
XrRenderModelPropertiesEXT::cacheId to cache the glTF asset data
and the processed derived data from the asset, as well as the names of
animatable nodes, for reuse across sessions.
An application may choose to use the UUID as a key to cache data that is
associated with the asset but is not the asset data itself; however, it is
invalid to call xrCreateRenderModelAssetEXT using a cached UUID before it is
available from the current session.
An application must not use a cached UUID to retrieve asset data from the
runtime without ensuring it is retrievable from the current session (and
identifying the semantic use of the model) by calling
xrGetRenderModelPropertiesEXT.
The XrRenderModelAssetCreateInfoEXT structure is defined as:
// Provided by XR_EXT_render_model
typedef struct XrRenderModelAssetCreateInfoEXT {
XrStructureType type;
const void* next;
XrUuidEXT cacheId;
} XrRenderModelAssetCreateInfoEXT;
The XrRenderModelAssetCreateInfoEXT structure contains the information to create an XrRenderModelAssetEXT handle.
The UUID cacheId must match the
XrRenderModelPropertiesEXT::cacheId from some previous call to
xrGetRenderModelPropertiesEXT in the current session.
The xrDestroyRenderModelAssetEXT function is defined as:
// Provided by XR_EXT_render_model
XrResult xrDestroyRenderModelAssetEXT(
XrRenderModelAssetEXT asset);
The xrDestroyRenderModelAssetEXT function releases the XrRenderModelAssetEXT handle and the underlying resources for the glTF asset data and names of animatable nodes.
For clarity, a call to xrDestroyRenderModelAssetEXT does not stop the ability to locate a render model space, nor the ability to retrieve animatable node states. The asset handle refers only to the asset data and list of animatable node names in memory for transfer to the application.
12.43.7. Retrieve Render Model Asset Data
The xrGetRenderModelAssetDataEXT function is defined as:
// Provided by XR_EXT_render_model
XrResult xrGetRenderModelAssetDataEXT(
XrRenderModelAssetEXT asset,
const XrRenderModelAssetDataGetInfoEXT* getInfo,
XrRenderModelAssetDataEXT* buffer);
The application can use the xrGetRenderModelAssetDataEXT function to populate application-allocated memory with the glTF 2.0 binary data and animatable node names of a render model asset. The application uses a two-call idiom with xrGetRenderModelAssetDataEXT to allocate the memory required for the binary asset data.
The binary data copied by the xrGetRenderModelAssetDataEXT function must conform to the glTF 2.0 binary format (GLB) and must contain a valid glTF 2.0 asset that passes validation.
The glTF asset data returned from this function must not change during the
lifetime of the corresponding XrRenderModelAssetEXT handle.
Further, the runtime must return the same glTF binary data for any
XrRenderModelAssetEXT handles created using the same XrUuidEXT
XrRenderModelPropertiesEXT::cacheId.
The application may call xrDestroyRenderModelAssetEXT after successfully populating the buffer with this call, and similar successful use of xrGetRenderModelAssetPropertiesEXT, as the only purpose of this handle is to manage the lifetime of the loaded glTF asset (copied into application-allocated memory by this call) and animatable node names (copied into application-allocated memory by xrGetRenderModelAssetPropertiesEXT) within the runtime.
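Since the returned buffer must conform to the GLB container format, an application can sanity-check the fixed 12-byte header before handing the data to a glTF parser. This is an optional defensive check, not something the extension requires; the sketch assumes a little-endian host, matching the GLB on-disk byte order.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Check the 12-byte GLB header: magic "glTF" (0x46546C67), version 2, and a
// declared total length matching the buffer actually received.
bool HasValidGlbHeader(const std::vector<uint8_t> &glb) {
    if (glb.size() < 12) {
        return false;
    }
    uint32_t magic = 0;
    uint32_t version = 0;
    uint32_t length = 0;
    std::memcpy(&magic, glb.data(), 4);
    std::memcpy(&version, glb.data() + 4, 4);
    std::memcpy(&length, glb.data() + 8, 4);
    return magic == 0x46546C67u && version == 2u &&
           length == static_cast<uint32_t>(glb.size());
}
```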
The XrRenderModelAssetDataGetInfoEXT structure is defined as:
// Provided by XR_EXT_render_model
typedef struct XrRenderModelAssetDataGetInfoEXT {
XrStructureType type;
const void* next;
} XrRenderModelAssetDataGetInfoEXT;
XrRenderModelAssetDataGetInfoEXT is an input structure for xrGetRenderModelAssetDataEXT, defined for the purpose of future extension.
The XrRenderModelAssetDataEXT structure is defined as:
// Provided by XR_EXT_render_model
typedef struct XrRenderModelAssetDataEXT {
XrStructureType type;
void* next;
uint32_t bufferCapacityInput;
uint32_t bufferCountOutput;
uint8_t* buffer;
} XrRenderModelAssetDataEXT;
XrRenderModelAssetDataEXT is an input/output structure for xrGetRenderModelAssetDataEXT.
12.43.8. Animate Parts of a Render Model
The application can animate parts of the glTF model using data from the runtime by retrieving and updating the XrPosef offset and visibility state of certain glTF nodes identified by unique names. The requirements for interpretation of the pose and visibility state in an application renderer are described in XrRenderModelNodeStateEXT.
The number of animatable nodes is a property of the XrRenderModelEXT, and is retrieved with xrGetRenderModelPropertiesEXT as previously described. The identities of those animatable nodes are properties of the render model asset, and are retrieved with xrGetRenderModelAssetPropertiesEXT.
The xrGetRenderModelAssetPropertiesEXT function is defined as:
// Provided by XR_EXT_render_model
XrResult xrGetRenderModelAssetPropertiesEXT(
XrRenderModelAssetEXT asset,
const XrRenderModelAssetPropertiesGetInfoEXT* getInfo,
XrRenderModelAssetPropertiesEXT* properties);
The application can use the xrGetRenderModelAssetPropertiesEXT function to get the array of animatable node names in the glTF asset.
The runtime must return node names in properties member
XrRenderModelAssetPropertiesEXT::nodeProperties that are unique
within the corresponding glTF asset.
The application must allocate an array of
XrRenderModelAssetNodePropertiesEXT within properties, of size
XrRenderModelAssetPropertiesEXT::nodePropertyCount, which must
be equal to XrRenderModelPropertiesEXT::animatableNodeCount.
If XrRenderModelAssetPropertiesEXT::nodePropertyCount is not
equal to XrRenderModelPropertiesEXT::animatableNodeCount as
populated by xrGetRenderModelPropertiesEXT, the runtime must return
XR_ERROR_VALIDATION_FAILURE from
xrGetRenderModelAssetPropertiesEXT.
Because the number of animatable nodes is fixed per render model handle and
retrievable with xrGetRenderModelPropertiesEXT, the two-call idiom for
buffer sizing and allocation is not needed in this case.
The application may call xrDestroyRenderModelAssetEXT after successfully populating the buffer with this call, and similar successful use of xrGetRenderModelAssetDataEXT, as the only purpose of this handle is to manage the lifetime of the animatable node names (copied into application-allocated memory by this call) and the loaded glTF asset (copied into application-allocated memory by xrGetRenderModelAssetDataEXT) within the runtime.
The xrGetRenderModelAssetPropertiesEXT call takes an optional
getInfo parameter for extensibility.
The XrRenderModelAssetPropertiesGetInfoEXT structure is defined as:
// Provided by XR_EXT_render_model
typedef struct XrRenderModelAssetPropertiesGetInfoEXT {
XrStructureType type;
const void* next;
} XrRenderModelAssetPropertiesGetInfoEXT;
This structure exists for extensibility purposes.
The xrGetRenderModelAssetPropertiesEXT call populates an XrRenderModelAssetPropertiesEXT structure supplied by the application, including an application-allocated array for the animatable node properties.
The XrRenderModelAssetPropertiesEXT structure is defined as:
// Provided by XR_EXT_render_model
typedef struct XrRenderModelAssetPropertiesEXT {
XrStructureType type;
void* next;
uint32_t nodePropertyCount;
XrRenderModelAssetNodePropertiesEXT* nodeProperties;
} XrRenderModelAssetPropertiesEXT;
The count XrRenderModelAssetPropertiesEXT::nodePropertyCount
must be equal to
XrRenderModelPropertiesEXT::animatableNodeCount.
If XrRenderModelAssetPropertiesEXT::nodePropertyCount is not
equal to XrRenderModelPropertiesEXT::animatableNodeCount as
populated by xrGetRenderModelPropertiesEXT, the runtime must return
XR_ERROR_VALIDATION_FAILURE from
xrGetRenderModelAssetPropertiesEXT.
The node names in the nodeProperties array define the identities of
the animatable nodes.
Order is significant, in that node states retrieved repeatedly during
rendering form a parallel associated array.
Because the number of animatable nodes is fixed per render model handle and retrievable with xrGetRenderModelPropertiesEXT, the two-call idiom for buffer sizing and allocation is not needed in this case.
The XrRenderModelAssetNodePropertiesEXT structure is defined as:
// Provided by XR_EXT_render_model
typedef struct XrRenderModelAssetNodePropertiesEXT {
char uniqueName[XR_MAX_RENDER_MODEL_ASSET_NODE_NAME_SIZE_EXT];
} XrRenderModelAssetNodePropertiesEXT;
The string returned in uniqueName must be the name of exactly one
node in the glTF asset.
Any given name must appear no more than once in the
XrRenderModelAssetPropertiesEXT::nodeProperties for a given
XrRenderModelAssetEXT.
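Because the node names are unique within the asset and their order defines the parallel node-state array, an application will typically build a name-to-index lookup once, after copying the uniqueName strings out of the nodeProperties array. A minimal sketch (the function name BuildNodeIndexMap is illustrative, not part of the extension):

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Build a lookup from unique node name to its index in the parallel arrays.
// The same index is later used into XrRenderModelStateEXT::nodeStates, since
// node properties and node states form parallel arrays in this extension.
std::unordered_map<std::string, uint32_t>
BuildNodeIndexMap(const std::vector<std::string> &uniqueNames) {
    std::unordered_map<std::string, uint32_t> indexByName;
    indexByName.reserve(uniqueNames.size());
    for (uint32_t i = 0; i < static_cast<uint32_t>(uniqueNames.size()); ++i) {
        // Names are guaranteed unique within the asset, so insertion
        // cannot collide.
        indexByName.emplace(uniqueNames[i], i);
    }
    return indexByName;
}
```

With this map, an application can resolve each name to a node reference in its own scene representation once, then index node states directly every frame.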
The xrGetRenderModelStateEXT function reads the current state of the animatable nodes in the render model.
// Provided by XR_EXT_render_model
XrResult xrGetRenderModelStateEXT(
XrRenderModelEXT renderModel,
const XrRenderModelStateGetInfoEXT* getInfo,
XrRenderModelStateEXT* state);
The order of the elements in XrRenderModelStateEXT::nodeStates
in state is the same as the order of node names returned by the
xrGetRenderModelAssetPropertiesEXT function.
The corresponding index in XrRenderModelStateEXT::nodeStates is
the same as the index in
XrRenderModelAssetPropertiesEXT::nodeProperties.
The number of states is
XrRenderModelPropertiesEXT::animatableNodeCount.
The runtime must return XR_ERROR_VALIDATION_FAILURE if
XrRenderModelStateEXT::nodeStateCount is not equal to
XrRenderModelPropertiesEXT::animatableNodeCount.
The XrRenderModelStateGetInfoEXT structure is defined as:
// Provided by XR_EXT_render_model
typedef struct XrRenderModelStateGetInfoEXT {
XrStructureType type;
const void* next;
XrTime displayTime;
} XrRenderModelStateGetInfoEXT;
When retrieving model state for a given frame, displayTime should be
set to the time value intended to be passed as
XrFrameEndInfo::displayTime.
See xrEndFrame for information on how to compute this value.
The XrRenderModelStateEXT structure is defined as:
// Provided by XR_EXT_render_model
typedef struct XrRenderModelStateEXT {
XrStructureType type;
void* next;
uint32_t nodeStateCount;
XrRenderModelNodeStateEXT* nodeStates;
} XrRenderModelStateEXT;
The XrRenderModelNodeStateEXT structure is defined as:
// Provided by XR_EXT_render_model
typedef struct XrRenderModelNodeStateEXT {
XrPosef nodePose;
XrBool32 isVisible;
} XrRenderModelNodeStateEXT;
This structure is populated with state for a single animatable node in an XrRenderModelEXT.
For any animatable node N, if an ancestor node M is also animatable, and
isVisible is XR_FALSE for node M, then isVisible must
be XR_FALSE for node N as well.
That is, being not-visible is recursive.
An application should interpret all descendant nodes of an animatable node
with isVisible = XR_FALSE to also not be visible (to similarly
interpret being not-visible as recursive).
The pose nodePose locates the associated animatable node, and all
descendants, relative to that animatable node’s parent, replacing the
animatable node’s transform, if any was supplied as matrix or
translation/rotation/scale properties in the glTF asset.
The application should apply this nodePose to the associated node, as
well as to all descendant nodes per the glTF specification.
That is, the nodePose replaces, instead of composes with, the
asset-specified transform.
Where one animatable node M is a descendant of another animatable node
N, the application should transform the descendant node M and its
descendants by the composition of the nodePose for both M and N.
That is, nodePose should be interpreted by the application to respect
the hierarchy in the glTF asset, and compose with other animatable node
poses, as well as transformations supplied in the glTF asset on
non-animatable nodes.
For clarity, given a model for which the runtime returns a nodePose
equal to the original transform in the asset for all animatable nodes, the
resulting rendered model should be rendered the same as the unmodified glTF
asset.
This implies that, for ease of use, runtimes may consider structuring their
assets such that animatable nodes have no (or identity) transformation
specified in the glTF asset, so that a nodePose of identity for all
animatable nodes produces a rendered model in its neutral, original state.
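To illustrate the composition rule above, the following sketch composes the nodePose of a descendant animatable node M with that of its animatable ancestor N, using minimal stand-ins for XrQuaternionf, XrVector3f, and XrPosef so the code is self-contained. It assumes, for simplicity, that no transforms on non-animatable nodes lie between N and M; a real renderer would also fold those in per the glTF node hierarchy.

```cpp
// Minimal stand-ins for XrQuaternionf, XrVector3f, and XrPosef; a real
// renderer would use its own math library.
struct Quat { float x, y, z, w; };
struct Vec3 { float x, y, z; };
struct Pose { Quat orientation; Vec3 position; };

// Hamilton product of two quaternions.
Quat Mul(const Quat &a, const Quat &b) {
    return {a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
            a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
            a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
            a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z};
}

// Rotate a vector by a unit quaternion: v' = q * (v, 0) * conj(q).
Vec3 Rotate(const Quat &q, const Vec3 &v) {
    Quat p{v.x, v.y, v.z, 0.0f};
    Quat c{-q.x, -q.y, -q.z, q.w};
    Quat r = Mul(Mul(q, p), c);
    return {r.x, r.y, r.z};
}

// Compose a child pose (expressed relative to its parent) with the parent
// pose. Calling this with N's nodePose as `parent` and M's nodePose as
// `child` yields the effective transform to apply to M and its descendants.
Pose Compose(const Pose &parent, const Pose &child) {
    Vec3 rotated = Rotate(parent.orientation, child.position);
    return {Mul(parent.orientation, child.orientation),
            {parent.position.x + rotated.x,
             parent.position.y + rotated.y,
             parent.position.z + rotated.z}};
}
```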
12.43.9. Example
// previously initialized
extern XrInstance instance;
extern XrSession session;
extern XrSpace baseSpace;
// retrieved from another extension that builds on this one
extern XrRenderModelIdEXT renderModelId;
// Get the function pointers for the extension's functions.
PFN_xrCreateRenderModelEXT pfnCreateRenderModelEXT;
CHK_XR(xrGetInstanceProcAddr(
instance, "xrCreateRenderModelEXT",
reinterpret_cast<PFN_xrVoidFunction *>(&pfnCreateRenderModelEXT)));
PFN_xrDestroyRenderModelEXT pfnDestroyRenderModelEXT;
CHK_XR(xrGetInstanceProcAddr(
instance, "xrDestroyRenderModelEXT",
reinterpret_cast<PFN_xrVoidFunction *>(&pfnDestroyRenderModelEXT)));
PFN_xrGetRenderModelPropertiesEXT pfnGetRenderModelPropertiesEXT;
CHK_XR(xrGetInstanceProcAddr(
instance, "xrGetRenderModelPropertiesEXT",
reinterpret_cast<PFN_xrVoidFunction *>(&pfnGetRenderModelPropertiesEXT)));
PFN_xrCreateRenderModelSpaceEXT pfnCreateRenderModelSpaceEXT;
CHK_XR(xrGetInstanceProcAddr(
instance, "xrCreateRenderModelSpaceEXT",
reinterpret_cast<PFN_xrVoidFunction *>(&pfnCreateRenderModelSpaceEXT)));
PFN_xrCreateRenderModelAssetEXT pfnCreateRenderModelAssetEXT;
CHK_XR(xrGetInstanceProcAddr(
instance, "xrCreateRenderModelAssetEXT",
reinterpret_cast<PFN_xrVoidFunction *>(&pfnCreateRenderModelAssetEXT)));
PFN_xrDestroyRenderModelAssetEXT pfnDestroyRenderModelAssetEXT;
CHK_XR(xrGetInstanceProcAddr(
instance, "xrDestroyRenderModelAssetEXT",
reinterpret_cast<PFN_xrVoidFunction *>(&pfnDestroyRenderModelAssetEXT)));
PFN_xrGetRenderModelAssetDataEXT pfnGetRenderModelAssetDataEXT;
CHK_XR(xrGetInstanceProcAddr(
instance, "xrGetRenderModelAssetDataEXT",
reinterpret_cast<PFN_xrVoidFunction *>(&pfnGetRenderModelAssetDataEXT)));
PFN_xrGetRenderModelAssetPropertiesEXT pfnGetRenderModelAssetPropertiesEXT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrGetRenderModelAssetPropertiesEXT",
reinterpret_cast<PFN_xrVoidFunction *>(
&pfnGetRenderModelAssetPropertiesEXT)));
PFN_xrGetRenderModelStateEXT pfnGetRenderModelStateEXT;
CHK_XR(xrGetInstanceProcAddr(
instance, "xrGetRenderModelStateEXT",
reinterpret_cast<PFN_xrVoidFunction *>(&pfnGetRenderModelStateEXT)));
// Create render model handles
// The names of glTF extensions that the application is capable of supporting.
// The returned glTF model is allowed to have any or all of these extensions
// listed in the "extensionsRequired" array.
// Pass only the extensions that your app/engine are capable of supporting.
std::vector<const char *> appSupportedGltfExtensions{"KHR_texture_basisu",
"KHR_materials_specular"};
XrRenderModelEXT renderModel = XR_NULL_HANDLE;
XrRenderModelCreateInfoEXT renderModelCreateInfo{
XR_TYPE_RENDER_MODEL_CREATE_INFO_EXT};
renderModelCreateInfo.renderModelId = renderModelId;
renderModelCreateInfo.gltfExtensionCount =
(uint32_t)appSupportedGltfExtensions.size();
renderModelCreateInfo.gltfExtensions = appSupportedGltfExtensions.data();
CHK_XR(
pfnCreateRenderModelEXT(session, &renderModelCreateInfo, &renderModel));
// Create a space for locating the render model.
XrRenderModelSpaceCreateInfoEXT spaceCreateInfo{
XR_TYPE_RENDER_MODEL_SPACE_CREATE_INFO_EXT};
spaceCreateInfo.renderModel = renderModel;
XrSpace modelSpace;
CHK_XR(pfnCreateRenderModelSpaceEXT(session, &spaceCreateInfo, &modelSpace));
// Get the model properties: UUID and number of animatable nodes
XrRenderModelPropertiesGetInfoEXT propertiesGetInfo{
XR_TYPE_RENDER_MODEL_PROPERTIES_GET_INFO_EXT};
XrRenderModelPropertiesEXT properties{XR_TYPE_RENDER_MODEL_PROPERTIES_EXT};
CHK_XR(pfnGetRenderModelPropertiesEXT(renderModel, &propertiesGetInfo,
&properties));
{
// Create the asset handle to request the data.
XrRenderModelAssetCreateInfoEXT assetCreateInfo{
XR_TYPE_RENDER_MODEL_ASSET_CREATE_INFO_EXT};
assetCreateInfo.cacheId = properties.cacheId;
XrRenderModelAssetEXT asset;
CHK_XR(pfnCreateRenderModelAssetEXT(session, &assetCreateInfo, &asset));
// Copy the binary glTF (GLB) asset data using two-call idiom.
XrRenderModelAssetDataGetInfoEXT assetGetInfo{
XR_TYPE_RENDER_MODEL_ASSET_DATA_GET_INFO_EXT};
XrRenderModelAssetDataEXT assetData{
XR_TYPE_RENDER_MODEL_ASSET_DATA_EXT};
CHK_XR(pfnGetRenderModelAssetDataEXT(asset, &assetGetInfo, &assetData));
std::vector<uint8_t> glbData(assetData.bufferCountOutput);
assetData.bufferCapacityInput = (uint32_t)glbData.size();
assetData.buffer = glbData.data();
CHK_XR(pfnGetRenderModelAssetDataEXT(asset, &assetGetInfo, &assetData));
    // Parsing the binary glTF data is outside the scope of this extension,
    // but an application would do it here.
// Get the unique names of the animatable nodes
XrRenderModelAssetPropertiesGetInfoEXT assetPropertiesGetInfo{
XR_TYPE_RENDER_MODEL_ASSET_PROPERTIES_GET_INFO_EXT};
XrRenderModelAssetPropertiesEXT assetProperties{
XR_TYPE_RENDER_MODEL_ASSET_PROPERTIES_EXT};
std::vector<XrRenderModelAssetNodePropertiesEXT> nodeProperties(
properties.animatableNodeCount);
assetProperties.nodePropertyCount = (uint32_t)nodeProperties.size();
assetProperties.nodeProperties = nodeProperties.data();
CHK_XR(pfnGetRenderModelAssetPropertiesEXT(asset, &assetPropertiesGetInfo,
&assetProperties));
// Once the glTF data has been handled, we no longer need the
// XrRenderModelAssetEXT handle.
CHK_XR(pfnDestroyRenderModelAssetEXT(asset));
// Save the list of nodes for rendering. The order of the array matters.
// The application will store some sort of "reference" to a node for
// each element, using the node name (in nodeProperties) to find it here.
// This code is not shown because it will depend on how your
// application represents glTF assets, so add your own here.
}
// Each frame the application's work for each model includes
// reading the state of the animatable nodes
// and then adjusting the pose or visibility of the node.
// Initialized from xrWaitFrame output
XrTime predictedDisplayTime;
// Use xrLocateSpace to locate the model's space
XrSpaceLocation modelLocation{XR_TYPE_SPACE_LOCATION};
CHK_XR(xrLocateSpace(modelSpace, baseSpace, predictedDisplayTime, &modelLocation));
bool orientationTracked = (modelLocation.locationFlags &
XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT) != 0;
bool positionTracked = (modelLocation.locationFlags &
XR_SPACE_LOCATION_POSITION_TRACKED_BIT) != 0;
if (orientationTracked && positionTracked) {
// Only render if the model space is tracked,
// and if the session state is appropriate, if applicable.
// (e.g. interaction models are only to be rendered when FOCUSED)
XrRenderModelStateGetInfoEXT stateGetInfo{
XR_TYPE_RENDER_MODEL_STATE_GET_INFO_EXT};
stateGetInfo.displayTime = predictedDisplayTime;
// In practice, you do not want to re-allocate this array of
// node states every frame, but it is clearer for illustration.
// We know the number of elements from the model properties,
// and we used the names from the asset handle to find and retain
// our app-specific references to those nodes in the model.
std::vector<XrRenderModelNodeStateEXT> nodeStates(
properties.animatableNodeCount);
XrRenderModelStateEXT state{XR_TYPE_RENDER_MODEL_STATE_EXT};
state.nodeStateCount = (uint32_t)nodeStates.size();
state.nodeStates = nodeStates.data();
// xrGetRenderModelStateEXT does not use the two-call idiom. The size is
// determined by xrGetRenderModelAssetPropertiesEXT.
CHK_XR(pfnGetRenderModelStateEXT(renderModel, &stateGetInfo, &state));
for (size_t i = 0; i < nodeStates.size(); ++i) {
// Use nodeStates[i].isVisible and nodeStates[i].nodePose to update the
// node's visibility or pose.
// nodeStates[i] refers to the node identified by name in nodeProperties[i]
}
} else {
// Do not render any of the model if the space is not locatable.
}
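The per-frame loop above leaves the actual node update to the application. The sketch below shows what that update amounts to, using hypothetical stand-in types (`Pose`, `AppNode`, and `NodeState` are illustrative, not OpenXR structures): per this extension's issue resolutions, the reported node pose replaces the glTF-supplied transform, and visibility is applied as reported.

```cpp
#include <cassert>

// Hypothetical stand-ins for an application's scene representation.
// Field layouts mirror XrQuaternionf/XrVector3f/XrPosef for clarity.
struct Quat { float x, y, z, w; };
struct Vec3 { float x, y, z; };
struct Pose { Quat orientation; Vec3 position; };

struct AppNode {
    bool visible;
    Pose localTransform;  // replaces any matrix or TRS from the glTF file
};

// Mirror of the data an XrRenderModelNodeStateEXT element carries.
struct NodeState {
    Pose nodePose;
    bool isVisible;
};

// Per the extension's issue resolution, the node state pose REPLACES the
// glTF-supplied transform rather than composing with it.
void applyNodeState(AppNode& node, const NodeState& state) {
    node.visible = state.isVisible;
    node.localTransform = state.nodePose;
}
```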
12.43.14. New Enum Constants
- XR_EXT_RENDER_MODEL_EXTENSION_NAME
- XR_EXT_render_model_SPEC_VERSION
- XR_MAX_RENDER_MODEL_ASSET_NODE_NAME_SIZE_EXT
- XR_NULL_RENDER_MODEL_ID_EXT
Extending XrObjectType:
- XR_OBJECT_TYPE_RENDER_MODEL_ASSET_EXT
- XR_OBJECT_TYPE_RENDER_MODEL_EXT
Extending XrResult:
- XR_ERROR_RENDER_MODEL_ASSET_UNAVAILABLE_EXT
- XR_ERROR_RENDER_MODEL_GLTF_EXTENSION_REQUIRED_EXT
- XR_ERROR_RENDER_MODEL_ID_INVALID_EXT
Extending XrStructureType:
- XR_TYPE_RENDER_MODEL_ASSET_CREATE_INFO_EXT
- XR_TYPE_RENDER_MODEL_ASSET_DATA_EXT
- XR_TYPE_RENDER_MODEL_ASSET_DATA_GET_INFO_EXT
- XR_TYPE_RENDER_MODEL_ASSET_PROPERTIES_EXT
- XR_TYPE_RENDER_MODEL_ASSET_PROPERTIES_GET_INFO_EXT
- XR_TYPE_RENDER_MODEL_CREATE_INFO_EXT
- XR_TYPE_RENDER_MODEL_PROPERTIES_EXT
- XR_TYPE_RENDER_MODEL_PROPERTIES_GET_INFO_EXT
- XR_TYPE_RENDER_MODEL_SPACE_CREATE_INFO_EXT
- XR_TYPE_RENDER_MODEL_STATE_EXT
- XR_TYPE_RENDER_MODEL_STATE_GET_INFO_EXT
12.43.15. Issues
- Is there any restriction on unique node names in a retrieved asset?
  Resolved. Yes: any node name intended by the runtime to be used by the application, such as through the transform/visibility animation capability in this extension, must be unique in that glTF file: see XrRenderModelAssetNodePropertiesEXT. The working group has verified that this is the intended way for glTF nodes to be referred to, not by index or any other method. Node names not for use by the application do not need to be unique. Node names used for animation must also fit in the fixed size buffer in XrRenderModelAssetNodePropertiesEXT.
- Is visibility of nodes in the provided animation system recursive?
  Resolved. Partially recursive: if an animatable node is not visible, and it is the ancestor of another animatable node, its descendant node is also reported as not visible. The base glTF specification does not have a concept of visibility in this way, so the semantics of it are left for this specification to define. See XrRenderModelNodeStateEXT for detail.
- What values are valid for XrRenderModelStateGetInfoEXT::displayTime?
  Not fully resolved. It must be valid to use the XrFrameState::predictedDisplayTime returned from the most recent xrWaitFrame call. For the sake of pipelined rendering engines, XrFrameState::predictedDisplayTime + XrFrameState::predictedDisplayPeriod must also be considered valid. Because the purpose of these calls is solely for rendering, it is unclear if any time earlier than the most recent predicted display time makes sense to support. It is also unclear how far in the future runtimes can support. Additionally, depending on the purpose of a given render model, the runtime may not have any useful method to predict future states beyond using the most recently measured physical state.
- Do animation transforms replace transforms provided in the glTF file, or compose with them? If they compose, in what order do they compose?
  Resolved. They replace the transforms. For simplicity and performance, the node state transforms are specified to replace any transformation supplied as matrix or translation/rotation/scale properties in the glTF asset. Composing automatically gives the useful property that having all node states contain identity is equivalent to rendering without any animation ability at all, providing a way to check rendering. However, if "compose" were selected as the specified behavior, and some runtimes "baked" transforms into their node vertices (producing an asset with no transforms) while others did not, this would have presented a trap for application developers who might not realize they are supposed to honor both the glTF-provided transform as well as the node state transform. Additionally, consensus among the Working Group appeared to be strongly in favor of the "replace" option.
12.44. XR_EXT_spatial_anchor
- Name String: XR_EXT_spatial_anchor
- Extension Type: Instance extension
- Registered Extension Number: 763
- Revision: 1
- Ratification Status: Ratified
- Extension and Version Dependencies
- Contributors:
  Nihav Jain, Google
  Natalie Fleury, Meta
  Yuichi Taguchi, Meta
  Ron Bessems, Meta
  Yin Li, Microsoft
  Jimmy Alamparambil, ByteDance
  Zhipeng Liu, ByteDance
  Jun Yan, ByteDance
12.44.1. Overview
This extension builds on XR_EXT_spatial_entity and allows
applications to create spatial anchors, which are arbitrary points in the
user’s physical environment that will then be tracked by the runtime.
The runtime should then adjust the position and orientation of the anchor’s
origin over time as needed, independent of all other spaces and anchors, to
ensure that it maintains its original mapping to the real world.
An anchor that tracks a given position and orientation within an
XrSpatialContextEXT is represented as a spatial entity with (or "that
has") the XR_SPATIAL_COMPONENT_TYPE_ANCHOR_EXT component.
12.44.2. Benefit of using anchors
As the runtime’s understanding of the user’s physical environment updates throughout the lifetime of an XrSpatialContextEXT, virtual objects may appear to drift away from where they were placed by the application, which impacts the application’s realism and the quality of the user’s experience. By creating an anchor close to where a virtual object is placed, and then always rendering that virtual object relative to its anchor, an application can ensure that each virtual object appears to stay at the same position and orientation in the physical environment. Also, unlike certain reference spaces, anchors are unaffected by system-level recentering.
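Rendering a virtual object relative to its anchor each frame amounts to composing the anchor's located pose with the object's fixed anchor-relative offset. The sketch below shows that composition using minimal local stand-ins for the OpenXR math types; the helper names (`mul`, `rotate`, `compose`) are illustrative, since the core specification defines no pose-math utilities.

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-ins (layout mirrors XrQuaternionf/XrVector3f/XrPosef).
struct Quat { float x, y, z, w; };
struct Vec3 { float x, y, z; };
struct Pose { Quat orientation; Vec3 position; };

// Hamilton product a * b.
Quat mul(const Quat& a, const Quat& b) {
    return { a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w,
             a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z };
}

// Rotate v by unit quaternion q: v' = v + 2s(u x v) + 2(u x (u x v)).
Vec3 rotate(const Quat& q, const Vec3& v) {
    Vec3 u{q.x, q.y, q.z};
    float s = q.w;
    Vec3 c1{u.y*v.z - u.z*v.y, u.z*v.x - u.x*v.z, u.x*v.y - u.y*v.x};
    Vec3 t{2*c1.x, 2*c1.y, 2*c1.z};
    Vec3 c2{u.y*t.z - u.z*t.y, u.z*t.x - u.x*t.z, u.x*t.y - u.y*t.x};
    return { v.x + s*t.x + c2.x, v.y + s*t.y + c2.y, v.z + s*t.z + c2.z };
}

// baseFromObject = baseFromAnchor * anchorFromObject
Pose compose(const Pose& baseFromAnchor, const Pose& anchorFromObject) {
    Vec3 p = rotate(baseFromAnchor.orientation, anchorFromObject.position);
    return { mul(baseFromAnchor.orientation, anchorFromObject.orientation),
             { baseFromAnchor.position.x + p.x,
               baseFromAnchor.position.y + p.y,
               baseFromAnchor.position.z + p.z } };
}
```

Re-running this composition each frame with the freshly queried anchor pose is what keeps the object visually pinned even as the runtime adjusts the anchor.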
12.44.3. Runtime support
If the runtime supports spatial anchors, it must indicate this by
enumerating XR_SPATIAL_CAPABILITY_ANCHOR_EXT in
xrEnumerateSpatialCapabilitiesEXT.
12.44.4. Configuration
The XrSpatialCapabilityConfigurationAnchorEXT structure is defined as:
// Provided by XR_EXT_spatial_anchor
typedef struct XrSpatialCapabilityConfigurationAnchorEXT {
XrStructureType type;
const void* next;
XrSpatialCapabilityEXT capability;
uint32_t enabledComponentCount;
const XrSpatialComponentTypeEXT* enabledComponents;
} XrSpatialCapabilityConfigurationAnchorEXT;
Applications can enable the XR_SPATIAL_CAPABILITY_ANCHOR_EXT spatial
capability by including a pointer to an
XrSpatialCapabilityConfigurationAnchorEXT structure in
XrSpatialContextCreateInfoEXT::capabilityConfigs.
The runtime must return XR_ERROR_VALIDATION_FAILURE if
capability is not XR_SPATIAL_CAPABILITY_ANCHOR_EXT.
12.44.5. Guaranteed Components
A runtime that supports XR_SPATIAL_CAPABILITY_ANCHOR_EXT must provide
the following spatial components as guaranteed components of all entities
created or discovered by this capability and must enumerate them in
xrEnumerateSpatialCapabilityComponentTypesEXT:
- XR_SPATIAL_COMPONENT_TYPE_ANCHOR_EXT
Anchor Component
Component Data
The XR_SPATIAL_COMPONENT_TYPE_ANCHOR_EXT component uses XrPosef for
its data, which provides the position and orientation of the anchor.
Component List Structure to Query Data
The XrSpatialComponentAnchorListEXT structure is defined as:
// Provided by XR_EXT_spatial_anchor
typedef struct XrSpatialComponentAnchorListEXT {
XrStructureType type;
void* next;
uint32_t locationCount;
XrPosef* locations;
} XrSpatialComponentAnchorListEXT;
The runtime must return XR_ERROR_VALIDATION_FAILURE from
xrQuerySpatialComponentDataEXT if
XrSpatialComponentAnchorListEXT is in the
XrSpatialComponentDataQueryResultEXT::next chain but
XR_SPATIAL_COMPONENT_TYPE_ANCHOR_EXT is not included in
XrSpatialComponentDataQueryConditionEXT::componentTypes.
The runtime must return XR_ERROR_SIZE_INSUFFICIENT from
xrQuerySpatialComponentDataEXT if locationCount is less than
XrSpatialComponentDataQueryResultEXT::entityIdCountOutput.
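This sizing rule can be illustrated in isolation. The mock below is not the real runtime, just the stated contract: the chained anchor list must have capacity for one pose per entity in the query result (`Result`, `AnchorList`, and `queryAnchorData` are hypothetical stand-ins).

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative stand-ins for the result codes and structures involved.
enum Result { SUCCESS = 0, ERROR_SIZE_INSUFFICIENT = -11 };

struct Pose { float qx, qy, qz, qw, px, py, pz; };

struct AnchorList {    // mirrors XrSpatialComponentAnchorListEXT
    uint32_t locationCount;
    Pose* locations;
};

// Mock of the rule: the query fails if the chained anchor list cannot
// hold one pose per entity reported in the query result.
Result queryAnchorData(uint32_t entityIdCountOutput, AnchorList& list) {
    if (list.locationCount < entityIdCountOutput)
        return ERROR_SIZE_INSUFFICIENT;
    for (uint32_t i = 0; i < entityIdCountOutput; ++i)
        list.locations[i] = Pose{0, 0, 0, 1, 0, 0, 0};  // placeholder data
    return SUCCESS;
}
```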
Configuration
If XR_SPATIAL_COMPONENT_TYPE_ANCHOR_EXT is enumerated in
XrSpatialCapabilityComponentTypesEXT::componentTypes for some
capability, an application can enable it by including the enumerant in the
XrSpatialCapabilityConfigurationBaseHeaderEXT::enabledComponents
list of the XrSpatialCapabilityConfigurationBaseHeaderEXT derived
structure of the capability that supports this component.
This component does not require any special configuration to be included in
the XrSpatialCapabilityConfigurationBaseHeaderEXT::next chain.
12.44.6. Creating a Spatial Anchor
The xrCreateSpatialAnchorEXT function is defined as:
// Provided by XR_EXT_spatial_anchor
XrResult xrCreateSpatialAnchorEXT(
XrSpatialContextEXT spatialContext,
const XrSpatialAnchorCreateInfoEXT* createInfo,
XrSpatialEntityIdEXT* anchorEntityId,
XrSpatialEntityEXT* anchorEntity);
The application can create a spatial anchor by using xrCreateSpatialAnchorEXT.
To get updated component data for an anchor, pass the value populated in
anchorEntity into the
XrSpatialUpdateSnapshotCreateInfoEXT::entities when creating a
snapshot.
The application can use anchorEntityId to uniquely identify this
anchor in the XrSpatialComponentDataQueryResultEXT::entityIds
array when using xrQuerySpatialComponentDataEXT.
The runtime must return XR_ERROR_VALIDATION_FAILURE from
xrCreateSpatialAnchorEXT if XR_SPATIAL_CAPABILITY_ANCHOR_EXT was
not configured for spatialContext.
See Configuration for how to configure an
XrSpatialContextEXT for the XR_SPATIAL_CAPABILITY_ANCHOR_EXT
capability.
The anchor represented by anchorEntity is only valid for the lifetime
of spatialContext, or until the application calls
xrDestroySpatialEntityEXT on it, whichever comes first.
Other extensions may offer functions to persist this newly created anchor
across multiple XrSession or to share it across process boundaries
with other applications.
A newly created anchor, until destroyed, must be discoverable in its parent
spatial context.
This means that the runtime must include anchorEntityId in the
snapshot created using xrCreateSpatialDiscoverySnapshotAsyncEXT for
spatialContext if the anchor matches the discovery criteria set in
XrSpatialDiscoverySnapshotCreateInfoEXT.
The newly created anchor may also be discoverable in other spatial contexts
configured with XR_SPATIAL_CAPABILITY_ANCHOR_EXT, although with a
different XrSpatialEntityIdEXT since a particular
XrSpatialEntityIdEXT is unique to its XrSpatialContextEXT.
The XrSpatialAnchorCreateInfoEXT structure is defined as:
// Provided by XR_EXT_spatial_anchor
typedef struct XrSpatialAnchorCreateInfoEXT {
XrStructureType type;
const void* next;
XrSpace baseSpace;
XrTime time;
XrPosef pose;
} XrSpatialAnchorCreateInfoEXT;
12.44.7. Query Anchor Pose
After the anchor is created, the runtime should then adjust its position
and orientation over time relative to other spaces in order to maintain the
best possible alignment to its original real-world location, even if that
changes the anchor’s relationship to the original
XrSpatialAnchorCreateInfoEXT::baseSpace used to initialize it.
The application can use xrCreateSpatialUpdateSnapshotEXT with the
anchor’s XrSpatialEntityEXT to create a new XrSpatialSnapshotEXT
and then query the XR_SPATIAL_COMPONENT_TYPE_ANCHOR_EXT component from
that snapshot using xrQuerySpatialComponentDataEXT.
The application can add XrSpatialComponentAnchorListEXT to
XrSpatialComponentDataQueryResultEXT::next to retrieve the
latest location data for the anchors.
The runtime may set the tracking state of a newly created anchor to
XR_SPATIAL_ENTITY_TRACKING_STATE_PAUSED_EXT.
The application must only read the anchor entity’s state provided in
XrSpatialComponentDataQueryResultEXT::entityStates and the
entity’s anchor component data if the tracking state is
XR_SPATIAL_ENTITY_TRACKING_STATE_TRACKING_EXT.
12.44.8. Guidelines For Using Anchors
- Each anchor’s pose adjusts independently of any other anchor or space. Separately anchored virtual objects may shift or rotate relative to each other, breaking the spatial hierarchy in cases where these virtual objects are expected to stay in place relative to each other. For such cases, the application should reuse the same anchor for all virtual objects that do not move relative to each other.
- Applications should destroy any XrSpatialEntityEXT handles for anchors that are no longer being used in order to free up the resources the runtime may be using to track those anchors.
12.44.9. Example Code
Configure Anchor Capability
The following example demonstrates how to configure the anchor capability when creating a spatial context.
// Create a spatial context
XrSpatialContextEXT spatialContext{};
{
std::vector<XrSpatialComponentTypeEXT> enabledComponents = {
XR_SPATIAL_COMPONENT_TYPE_ANCHOR_EXT,
};
XrSpatialCapabilityConfigurationAnchorEXT anchorConfig{XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_ANCHOR_EXT};
anchorConfig.capability = XR_SPATIAL_CAPABILITY_ANCHOR_EXT;
anchorConfig.enabledComponentCount = (uint32_t)enabledComponents.size();
anchorConfig.enabledComponents = enabledComponents.data();
std::array<XrSpatialCapabilityConfigurationBaseHeaderEXT*, 1> capabilityConfigs = {
reinterpret_cast<XrSpatialCapabilityConfigurationBaseHeaderEXT*>(&anchorConfig),
};
XrSpatialContextCreateInfoEXT spatialContextCreateInfo{XR_TYPE_SPATIAL_CONTEXT_CREATE_INFO_EXT};
spatialContextCreateInfo.capabilityConfigCount = (uint32_t)capabilityConfigs.size();
spatialContextCreateInfo.capabilityConfigs = capabilityConfigs.data();
XrFutureEXT createContextFuture;
CHK_XR(xrCreateSpatialContextAsyncEXT(session, &spatialContextCreateInfo, &createContextFuture));
waitUntilReady(createContextFuture);
XrCreateSpatialContextCompletionEXT completion{XR_TYPE_CREATE_SPATIAL_CONTEXT_COMPLETION_EXT};
CHK_XR(xrCreateSpatialContextCompleteEXT(session, createContextFuture, &completion));
if (completion.futureResult != XR_SUCCESS) {
return;
}
spatialContext = completion.spatialContext;
}
// ...
// Create spatial anchors and get their latest pose in the frame loop.
// ...
CHK_XR(xrDestroySpatialContextEXT(spatialContext));
Create Spatial Anchor & Get Its Location
The following example demonstrates how to create a spatial anchor and get its pose every frame.
XrSpatialAnchorCreateInfoEXT createInfo{XR_TYPE_SPATIAL_ANCHOR_CREATE_INFO_EXT};
createInfo.baseSpace = localSpace;
createInfo.time = predictedDisplayTime;
createInfo.pose = {{0, 0, 0, 1}, {1, 1, 1}};
XrSpatialEntityIdEXT spatialAnchorEntityId;
XrSpatialEntityEXT spatialAnchorEntity;
CHK_XR(xrCreateSpatialAnchorEXT(spatialContext, &createInfo, &spatialAnchorEntityId, &spatialAnchorEntity));
auto updateAnchorLocation = [&](XrTime time) {
// We want to get updated data for all components of the entities, so skip specifying componentTypes.
XrSpatialUpdateSnapshotCreateInfoEXT snapshotCreateInfo{XR_TYPE_SPATIAL_UPDATE_SNAPSHOT_CREATE_INFO_EXT};
snapshotCreateInfo.entityCount = 1;
snapshotCreateInfo.entities = &spatialAnchorEntity;
snapshotCreateInfo.baseSpace = localSpace;
snapshotCreateInfo.time = time;
XrSpatialSnapshotEXT snapshot;
CHK_XR(xrCreateSpatialUpdateSnapshotEXT(spatialContext, &snapshotCreateInfo, &snapshot));
// Query for the entities that have the anchor component on them.
std::array<XrSpatialComponentTypeEXT, 1> componentsToQuery {XR_SPATIAL_COMPONENT_TYPE_ANCHOR_EXT};
XrSpatialComponentDataQueryConditionEXT queryCond{XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_CONDITION_EXT};
queryCond.componentTypeCount = (uint32_t)componentsToQuery.size();
queryCond.componentTypes = componentsToQuery.data();
XrSpatialComponentDataQueryResultEXT queryResult{XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_RESULT_EXT};
CHK_XR(xrQuerySpatialComponentDataEXT(snapshot, &queryCond, &queryResult));
std::vector<XrSpatialEntityIdEXT> entityIds(queryResult.entityIdCountOutput);
std::vector<XrSpatialEntityTrackingStateEXT> entityStates(queryResult.entityIdCountOutput);
queryResult.entityIdCapacityInput = (uint32_t)entityIds.size();
queryResult.entityIds = entityIds.data();
queryResult.entityStateCapacityInput = (uint32_t)entityStates.size();
queryResult.entityStates = entityStates.data();
// query for the pose data
std::vector<XrPosef> locations(queryResult.entityIdCountOutput);
XrSpatialComponentAnchorListEXT locationList{XR_TYPE_SPATIAL_COMPONENT_ANCHOR_LIST_EXT};
locationList.locationCount = (uint32_t)locations.size();
locationList.locations = locations.data();
queryResult.next = &locationList;
CHK_XR(xrQuerySpatialComponentDataEXT(snapshot, &queryCond, &queryResult));
for (uint32_t i = 0; i < queryResult.entityIdCountOutput; ++i) {
if (entityStates[i] == XR_SPATIAL_ENTITY_TRACKING_STATE_TRACKING_EXT) {
// Pose for entity entityIds[i] is locations[i].
}
}
CHK_XR(xrDestroySpatialSnapshotEXT(snapshot));
};
while (1) {
// ...
// For every frame in frame loop
// ...
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
updateAnchorLocation(time);
// ...
// Finish frame loop
// ...
}
CHK_XR(xrDestroySpatialEntityEXT(spatialAnchorEntity));
12.44.12. New Enum Constants
- XR_EXT_SPATIAL_ANCHOR_EXTENSION_NAME
- XR_EXT_spatial_anchor_SPEC_VERSION
Extending XrSpatialCapabilityEXT:
- XR_SPATIAL_CAPABILITY_ANCHOR_EXT
Extending XrSpatialComponentTypeEXT:
- XR_SPATIAL_COMPONENT_TYPE_ANCHOR_EXT
Extending XrStructureType:
- XR_TYPE_SPATIAL_ANCHOR_CREATE_INFO_EXT
- XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_ANCHOR_EXT
- XR_TYPE_SPATIAL_COMPONENT_ANCHOR_LIST_EXT
12.44.13. Issues
- Why does xrCreateSpatialAnchorEXT output an entity ID as well as an entity handle?
  Resolved. The xrCreateSpatialAnchorEXT function could very well have provided just the entity ID as the output, and applications could create an entity handle for that ID using xrCreateSpatialEntityFromIdEXT. However, given the typical usage of an anchor, where applications query the anchor pose every frame, it becomes a good candidate to be used in an "update snapshot", which requires entity handles as input. Anticipating this typical use case, xrCreateSpatialAnchorEXT performs xrCreateSpatialEntityFromIdEXT on behalf of the application and provides it with the entity handle to use with xrCreateSpatialUpdateSnapshotEXT.
12.45. XR_EXT_spatial_entity
- Name String: XR_EXT_spatial_entity
- Extension Type: Instance extension
- Registered Extension Number: 741
- Revision: 1
- Ratification Status: Ratified
- Extension and Version Dependencies
- Contributors:
  Nihav Jain, Google
  Jared Finder, Google
  Natalie Fleury, Meta
  Yuichi Taguchi, Meta
  Ron Bessems, Meta
  Yin Li, Microsoft
  Karthik Kadappan, Magic Leap
  Jimmy Alamparambil, ByteDance
  Zhipeng Liu, ByteDance
  Jun Yan, ByteDance
12.45.1. Overview
This extension introduces the concepts and foundations for scene understanding and spatial reasoning in OpenXR. It unifies several related but distinct areas of functionality, which are enumerated, configured, and interacted with in a broadly uniform way as defined by this extension. Because this extension lacks concrete definitions of any one of these functional areas, the formal specification text tends to be somewhat abstract. Examples included in this extension's specification text refer at times to functionality defined in a forthcoming or hypothetical related extension for the purpose of illustration, without inherently limiting or specifying such additional functionality.
The broad pieces of this extension are the following:
- Spatial entities: The functionality is centered around entities, which provide very little functionality on their own.
- Spatial components: These entities have components associated with them that provide data and behaviors.
- Spatial component types: Each spatial component is of a specific component type, and any given entity has at most a single component of any given component type.
- Spatial context: All spatial entity interaction occurs in a context after an initialization and configuration phase.
- Spatial capabilities: Spatial entity manipulation is broadly provided by capabilities. A capability is some unit of functionality, for example (without limitation) application-defined anchors, plane detection, or image tracking. Each capability is typically defined in a separate extension (enabled at instance creation as usual) and is enabled for a specific context at the time of creation.
- Each capability is associated with a set of component types for which components are present on every entity exposed by that capability. The extension defining a capability specifies which component types are mandatory for the capability ("guaranteed"), while that same extension or others may specify optional component types provided by some potential implementations. Any number of capabilities might provide entities with components of a given component type, which are uniformly usable no matter the capability that produced it.
- Spatial capability features: Further, some capabilities require configuration, and thus are parameterized by capability features.
This extension provides a mechanism for enumerating the components provided by each capability supported on the current system, both the mandatory and any optional components.
As some implementations may require different degrees of parameterization for capabilities, this extension provides a mechanism for enumerating the supported capability features associated with a given capability in the current system.
This extension also defines several common components expected to be used across a wide range of capabilities.
12.45.2. Spatial Entity
Spatial entities are entities that exist in some space and have various associated data organized into components. They may be any of the following:
- Physical (e.g. planar surfaces like walls and floors, or objects like chairs and bookcases)
- Virtual (e.g. content placed and shared by another application or user)
- App-defined (e.g. an application marking an area as the "living room" or "kitchen", or marking a point as the location to place the TV)
Things which are exposed via the action system, like controllers or eye gaze, are not intended to be modeled as spatial entities.
Spatial entities in OpenXR are modeled as an Entity-Component system. Each spatial entity has a set of components, and each component provides a unique set of data and behaviors for that entity.
Spatial entities are represented by either an XrSpatialEntityIdEXT
atom or an XrSpatialEntityEXT handle, details of which are provided in
the Spatial Entity Representations section.
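A minimal application-side picture of this entity-component model might look like the following. All types here are illustrative stand-ins, not OpenXR structures; the one property carried over from the specification is that an entity holds at most one component of any given component type.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <utility>

// Stand-ins: an entity is an opaque id; each component type maps to at
// most one piece of component data per entity.
using EntityId = uint64_t;
enum class ComponentType { Bounded2D, Parent, Anchor };

struct Entity {
    // At most one component of any given type, as the extension specifies.
    std::map<ComponentType, std::string> components;  // data simplified to strings
};

bool hasComponent(const Entity& e, ComponentType t) {
    return e.components.count(t) != 0;
}

void setComponent(Entity& e, ComponentType t, std::string data) {
    e.components[t] = std::move(data);  // replaces any existing component of type t
}
```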
12.45.3. Spatial Component Types
A spatial entity has one or more components which provide data or behaviors for that entity. See Common Components for some common components defined by this extension.
// Provided by XR_EXT_spatial_entity
typedef enum XrSpatialComponentTypeEXT {
XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT = 1,
XR_SPATIAL_COMPONENT_TYPE_BOUNDED_3D_EXT = 2,
XR_SPATIAL_COMPONENT_TYPE_PARENT_EXT = 3,
XR_SPATIAL_COMPONENT_TYPE_MESH_3D_EXT = 4,
// Provided by XR_EXT_spatial_plane_tracking
XR_SPATIAL_COMPONENT_TYPE_PLANE_ALIGNMENT_EXT = 1000741000,
// Provided by XR_EXT_spatial_plane_tracking
XR_SPATIAL_COMPONENT_TYPE_MESH_2D_EXT = 1000741001,
// Provided by XR_EXT_spatial_plane_tracking
XR_SPATIAL_COMPONENT_TYPE_POLYGON_2D_EXT = 1000741002,
// Provided by XR_EXT_spatial_plane_tracking
XR_SPATIAL_COMPONENT_TYPE_PLANE_SEMANTIC_LABEL_EXT = 1000741003,
// Provided by XR_EXT_spatial_marker_tracking
XR_SPATIAL_COMPONENT_TYPE_MARKER_EXT = 1000743000,
// Provided by XR_EXT_spatial_anchor
XR_SPATIAL_COMPONENT_TYPE_ANCHOR_EXT = 1000762000,
// Provided by XR_EXT_spatial_persistence
XR_SPATIAL_COMPONENT_TYPE_PERSISTENCE_EXT = 1000763000,
XR_SPATIAL_COMPONENT_TYPE_MAX_ENUM_EXT = 0x7FFFFFFF
} XrSpatialComponentTypeEXT;
The XrSpatialComponentTypeEXT enumeration identifies the different types of components that the runtime may support.
Not all component types listed are provided by this extension on its own: some require additional extensions to be enabled at instance creation time, as documented.
The enumerants have the following values:
| Enum | Description |
|---|---|
| XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT | Component that provides the 2D bounds for a spatial entity. Corresponding list structure is XrSpatialComponentBounded2DListEXT; Corresponding data structure is XrSpatialBounded2DDataEXT |
| XR_SPATIAL_COMPONENT_TYPE_BOUNDED_3D_EXT | Component that provides the 3D bounds for a spatial entity. Corresponding list structure is XrSpatialComponentBounded3DListEXT; Corresponding data structure is XrBoxf |
| XR_SPATIAL_COMPONENT_TYPE_PARENT_EXT | Component that provides the parent entity for a spatial entity. Corresponding list structure is XrSpatialComponentParentListEXT; Corresponding data structure is XrSpatialEntityIdEXT |
| XR_SPATIAL_COMPONENT_TYPE_MESH_3D_EXT | Component that provides a 3D mesh for a spatial entity. Corresponding list structure is XrSpatialComponentMesh3DListEXT; Corresponding data structure is XrSpatialMeshDataEXT |
| XR_SPATIAL_COMPONENT_TYPE_PLANE_ALIGNMENT_EXT | Component that provides the plane alignment enum for a spatial entity. Corresponding list structure is XrSpatialComponentPlaneAlignmentListEXT; Corresponding data structure is XrSpatialPlaneAlignmentEXT (Added by the XR_EXT_spatial_plane_tracking extension) |
| XR_SPATIAL_COMPONENT_TYPE_MESH_2D_EXT | Component that provides a 2D mesh for a spatial entity. Corresponding list structure is XrSpatialComponentMesh2DListEXT; Corresponding data structure is XrSpatialMeshDataEXT (Added by the XR_EXT_spatial_plane_tracking extension) |
| XR_SPATIAL_COMPONENT_TYPE_POLYGON_2D_EXT | Component that provides a 2D boundary polygon for a spatial entity. Corresponding list structure is XrSpatialComponentPolygon2DListEXT; Corresponding data structure is XrSpatialPolygon2DDataEXT (Added by the XR_EXT_spatial_plane_tracking extension) |
| XR_SPATIAL_COMPONENT_TYPE_PLANE_SEMANTIC_LABEL_EXT | Component that provides a semantic label for a plane. Corresponding list structure is XrSpatialComponentPlaneSemanticLabelListEXT; Corresponding data structure is XrSpatialPlaneSemanticLabelEXT (Added by the XR_EXT_spatial_plane_tracking extension) |
| XR_SPATIAL_COMPONENT_TYPE_MARKER_EXT | A component describing the marker type, id and location. Corresponding list structure is XrSpatialComponentMarkerListEXT; Corresponding data structure is XrSpatialMarkerDataEXT (Added by the XR_EXT_spatial_marker_tracking extension) |
| XR_SPATIAL_COMPONENT_TYPE_ANCHOR_EXT | Component that provides the location for an anchor. Corresponding list structure is XrSpatialComponentAnchorListEXT; Corresponding data structure is XrPosef (Added by the XR_EXT_spatial_anchor extension) |
| XR_SPATIAL_COMPONENT_TYPE_PERSISTENCE_EXT | Component that provides the persisted UUID for a spatial entity. Corresponding list structure is XrSpatialComponentPersistenceListEXT; Corresponding data structure is XrSpatialPersistenceDataEXT (Added by the XR_EXT_spatial_persistence extension) |
12.45.4. Spatial Capabilities and Setup
Spatial capabilities define a runtime’s abilities to discover entities that have a guaranteed set of components on them. Applications enable the components of a spatial capability when creating the XrSpatialContextEXT, and the runtime in turn must provide only the enabled components on discovered entities. For example, if a runtime reports that one of the components for a given capability is "semantic labels", the application can enable semantic labels via the configuration for that capability, and the runtime must provide the semantic label component only if it is so configured.
// Provided by XR_EXT_spatial_entity
typedef enum XrSpatialCapabilityEXT {
// Provided by XR_EXT_spatial_plane_tracking
XR_SPATIAL_CAPABILITY_PLANE_TRACKING_EXT = 1000741000,
// Provided by XR_EXT_spatial_marker_tracking
XR_SPATIAL_CAPABILITY_MARKER_TRACKING_QR_CODE_EXT = 1000743000,
// Provided by XR_EXT_spatial_marker_tracking
XR_SPATIAL_CAPABILITY_MARKER_TRACKING_MICRO_QR_CODE_EXT = 1000743001,
// Provided by XR_EXT_spatial_marker_tracking
XR_SPATIAL_CAPABILITY_MARKER_TRACKING_ARUCO_MARKER_EXT = 1000743002,
// Provided by XR_EXT_spatial_marker_tracking
XR_SPATIAL_CAPABILITY_MARKER_TRACKING_APRIL_TAG_EXT = 1000743003,
// Provided by XR_EXT_spatial_anchor
XR_SPATIAL_CAPABILITY_ANCHOR_EXT = 1000762000,
XR_SPATIAL_CAPABILITY_MAX_ENUM_EXT = 0x7FFFFFFF
} XrSpatialCapabilityEXT;
The XrSpatialCapabilityEXT enumeration identifies the different types of capabilities that the runtime may support.
The enumerants have the following values:
| Enum | Description |
|---|---|
| XR_SPATIAL_CAPABILITY_PLANE_TRACKING_EXT | Plane tracking (Added by the XR_EXT_spatial_plane_tracking extension) |
| XR_SPATIAL_CAPABILITY_MARKER_TRACKING_QR_CODE_EXT | Capability to be able to detect and track QR codes. (Added by the XR_EXT_spatial_marker_tracking extension) |
| XR_SPATIAL_CAPABILITY_MARKER_TRACKING_MICRO_QR_CODE_EXT | Capability to be able to detect and track Micro QR codes. (Added by the XR_EXT_spatial_marker_tracking extension) |
| XR_SPATIAL_CAPABILITY_MARKER_TRACKING_ARUCO_MARKER_EXT | Capability to be able to detect and track Aruco Markers. (Added by the XR_EXT_spatial_marker_tracking extension) |
| XR_SPATIAL_CAPABILITY_MARKER_TRACKING_APRIL_TAG_EXT | Capability to be able to detect and track AprilTags. (Added by the XR_EXT_spatial_marker_tracking extension) |
| XR_SPATIAL_CAPABILITY_ANCHOR_EXT | Capability to be able to create spatial anchors (Added by the XR_EXT_spatial_anchor extension) |
The xrEnumerateSpatialCapabilitiesEXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrEnumerateSpatialCapabilitiesEXT(
XrInstance instance,
XrSystemId systemId,
uint32_t capabilityCapacityInput,
uint32_t* capabilityCountOutput,
XrSpatialCapabilityEXT* capabilities);
The application can enumerate the list of spatial capabilities supported by
a given XrSystemId using xrEnumerateSpatialCapabilitiesEXT.
The runtime must not enumerate the spatial capabilities whose extension is
not enabled for instance.
The xrEnumerateSpatialCapabilityComponentTypesEXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrEnumerateSpatialCapabilityComponentTypesEXT(
XrInstance instance,
XrSystemId systemId,
XrSpatialCapabilityEXT capability,
XrSpatialCapabilityComponentTypesEXT* capabilityComponents);
This function enumerates the component types that the given capability provides on its entities in the system as configured.
The application can use the component types enumerated in
XrSpatialCapabilityComponentTypesEXT::componentTypes to
understand the full set of components that the systemId supports for
capability and can use this list to determine what a valid
configuration for capability is when creating an
XrSpatialContextEXT for it.
The runtime must return XR_ERROR_SPATIAL_CAPABILITY_UNSUPPORTED_EXT
if capability is not enumerated by
xrEnumerateSpatialCapabilitiesEXT.
The runtime must not enumerate the spatial component types whose extension
is not enabled for instance.
The XrSpatialCapabilityComponentTypesEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrSpatialCapabilityComponentTypesEXT {
XrStructureType type;
void* next;
uint32_t componentTypeCapacityInput;
uint32_t componentTypeCountOutput;
XrSpatialComponentTypeEXT* componentTypes;
} XrSpatialCapabilityComponentTypesEXT;
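XrSpatialCapabilityComponentTypesEXT carries the usual OpenXR two-call sizing fields inside the structure itself. The self-contained mock below sketches that calling pattern with local stand-in types (`enumerateComponentTypes` is a hypothetical placeholder for the real function, which also takes the instance, system ID, and capability):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

enum Result { SUCCESS = 0 };
enum ComponentType { ANCHOR = 1, BOUNDED_2D = 2 };

struct ComponentTypesOut {   // mirrors XrSpatialCapabilityComponentTypesEXT
    uint32_t componentTypeCapacityInput;
    uint32_t componentTypeCountOutput;
    ComponentType* componentTypes;
};

// Mock enumeration: with capacityInput == 0 only the count is returned;
// otherwise up to capacityInput elements are written.
Result enumerateComponentTypes(ComponentTypesOut& out) {
    static const ComponentType supported[] = { ANCHOR, BOUNDED_2D };
    out.componentTypeCountOutput = 2;
    if (out.componentTypeCapacityInput == 0) return SUCCESS;
    uint32_t n = out.componentTypeCapacityInput < 2 ? out.componentTypeCapacityInput : 2;
    for (uint32_t i = 0; i < n; ++i) out.componentTypes[i] = supported[i];
    return SUCCESS;
}

// Typical usage: the first call sizes the buffer, the second fills it.
std::vector<ComponentType> getAll() {
    ComponentTypesOut out{0, 0, nullptr};
    enumerateComponentTypes(out);
    std::vector<ComponentType> result(out.componentTypeCountOutput);
    out.componentTypeCapacityInput = (uint32_t)result.size();
    out.componentTypes = result.data();
    enumerateComponentTypes(out);
    return result;
}
```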
12.45.5. Spatial capability features
// Provided by XR_EXT_spatial_entity
typedef enum XrSpatialCapabilityFeatureEXT {
// Provided by XR_EXT_spatial_marker_tracking
XR_SPATIAL_CAPABILITY_FEATURE_MARKER_TRACKING_FIXED_SIZE_MARKERS_EXT = 1000743000,
// Provided by XR_EXT_spatial_marker_tracking
XR_SPATIAL_CAPABILITY_FEATURE_MARKER_TRACKING_STATIC_MARKERS_EXT = 1000743001,
XR_SPATIAL_CAPABILITY_FEATURE_MAX_ENUM_EXT = 0x7FFFFFFF
} XrSpatialCapabilityFeatureEXT;
Some capabilities have parameters exposed to the application to configure how the component data is computed by the runtime. These dimensions of parameterization/configurability are known as capability features. For example, for an image tracking capability, a runtime may support a feature for the application to specify whether the tracked images are stationary.
Providing this information to the runtime via a configuration structure must not change the set of component types present on the associated entities (e.g. on the tracked image). However, it may allow the runtime to optimize, for example, the tracking abilities of the image tracking capability and provide a better experience to the application.
Such features are represented by XrSpatialCapabilityFeatureEXT and the application enumerates them by using xrEnumerateSpatialCapabilityFeaturesEXT.
Each capability feature has a corresponding configuration structure to
enable it.
Such configuration structures must be chained to
XrSpatialCapabilityConfigurationBaseHeaderEXT::next of the
corresponding capability.
The enumerants have the following values:
| Enum | Description |
|---|---|
| XR_SPATIAL_CAPABILITY_FEATURE_MARKER_TRACKING_FIXED_SIZE_MARKERS_EXT | Capability feature to allow applications to specify the size for the markers. Corresponding config structure is XrSpatialMarkerSizeEXT. (Added by the XR_EXT_spatial_marker_tracking extension) |
| XR_SPATIAL_CAPABILITY_FEATURE_MARKER_TRACKING_STATIC_MARKERS_EXT | Capability feature to allow applications to specify if markers are static. Corresponding config structure is XrSpatialMarkerStaticOptimizationEXT. (Added by the XR_EXT_spatial_marker_tracking extension) |
The xrEnumerateSpatialCapabilityFeaturesEXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrEnumerateSpatialCapabilityFeaturesEXT(
XrInstance instance,
XrSystemId systemId,
XrSpatialCapabilityEXT capability,
uint32_t capabilityFeatureCapacityInput,
uint32_t* capabilityFeatureCountOutput,
XrSpatialCapabilityFeatureEXT* capabilityFeatures);
The application discovers the features supported by a given system for a XrSpatialCapabilityEXT by using xrEnumerateSpatialCapabilityFeaturesEXT.
For capabilities that have features exposed, the application selects the
feature or features to enable and provides the corresponding configuration
structure in the next chain of the capability configuration structures in
XrSpatialContextCreateInfoEXT::capabilityConfigs.
If capability is not a capability enumerated by
xrEnumerateSpatialCapabilitiesEXT, the runtime must return
XR_ERROR_SPATIAL_CAPABILITY_UNSUPPORTED_EXT.
The runtime must not enumerate the spatial capability features whose
extension is not enabled for instance.
12.45.6. Spatial Context
Create a spatial context
// Provided by XR_EXT_spatial_entity
XR_DEFINE_HANDLE(XrSpatialContextEXT)
The XrSpatialContextEXT handle represents the resources for discovering and updating some number of spatial entities in the environment of the user. The application can use this handle to discover and update spatial entities using other functions in this extension.
The xrCreateSpatialContextAsyncEXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrCreateSpatialContextAsyncEXT(
XrSession session,
const XrSpatialContextCreateInfoEXT* createInfo,
XrFutureEXT* future);
The application can create an XrSpatialContextEXT handle by:
- Providing XrSpatialCapabilityConfigurationBaseHeaderEXT derived structures in XrSpatialContextCreateInfoEXT::capabilityConfigs to enable capabilities and enable components for those capabilities.
- Configuring the capabilities themselves with the corresponding configuration structures of their XrSpatialCapabilityFeatureEXT features.
The runtime must return
XR_ERROR_SPATIAL_CAPABILITY_CONFIGURATION_INVALID_EXT if
XrSpatialContextCreateInfoEXT::capabilityConfigCount is 0.
A spatial context handle needs at least one capability.
The runtime must return XR_ERROR_SPATIAL_CAPABILITY_UNSUPPORTED_EXT
if any capability in the
XrSpatialContextCreateInfoEXT::capabilityConfigs array is not
enumerated by xrEnumerateSpatialCapabilitiesEXT.
The runtime must return
XR_ERROR_SPATIAL_CAPABILITY_CONFIGURATION_INVALID_EXT if any
XrSpatialCapabilityConfigurationBaseHeaderEXT::enabledComponentCount
in XrSpatialContextCreateInfoEXT::capabilityConfigs is 0.
A capability configuration is incomplete without a list of component types
to enable for that capability.
The runtime must return
XR_ERROR_SPATIAL_COMPONENT_UNSUPPORTED_FOR_CAPABILITY_EXT if any
component type listed in
XrSpatialCapabilityConfigurationBaseHeaderEXT::enabledComponents
is not enumerated for
XrSpatialCapabilityConfigurationBaseHeaderEXT::capability in
xrEnumerateSpatialCapabilityComponentTypesEXT.
If any of the structures in the next chain of
XrSpatialContextCreateInfoEXT::capabilityConfigs corresponds to
an XrSpatialCapabilityFeatureEXT that is not enumerated for that
capability in xrEnumerateSpatialCapabilityFeaturesEXT, the runtime
must ignore that XrSpatialCapabilityFeatureEXT structure.
The runtime must return
XR_ERROR_SPATIAL_CAPABILITY_CONFIGURATION_INVALID_EXT if
XrSpatialContextCreateInfoEXT::capabilityConfigs contains
multiple structures with the same
XrSpatialCapabilityConfigurationBaseHeaderEXT::capability.
To ensure optimal use of system resources, the runtime may use the configurations provided in the XrSpatialContextCreateInfoEXT::capabilityConfigs array to prepare itself for spatial requests to come in. For example, a runtime that supports a plane tracking capability may only begin its plane tracking pipeline if a spatial context handle containing the plane tracking capability is created by the application. If the configured capabilities have a long warm-up time, calls to xrCreateSpatialDiscoverySnapshotAsyncEXT may result in an empty snapshot. The application can wait for XrEventDataSpatialDiscoveryRecommendedEXT before using xrCreateSpatialDiscoverySnapshotAsyncEXT to be sure that the underlying tracking services have warmed up.
If a runtime enforces a permission system to control application access to
the spatial capabilities being configured for the XrSpatialContextEXT,
then the runtime must return XR_ERROR_PERMISSION_INSUFFICIENT if
those permissions have not been granted to this application.
This function starts an asynchronous operation and creates a corresponding
XrFutureEXT, usable with xrPollFutureEXT and related
functions.
The return value of this function only indicates whether the parameters were
acceptable to schedule the asynchronous operation.
The corresponding completion function is
xrCreateSpatialContextCompleteEXT, usable when a future from this
function is in the READY state, with outputs populated by that function in
the completion structure XrCreateSpatialContextCompletionEXT.
The XrSpatialContextCreateInfoEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrSpatialContextCreateInfoEXT {
XrStructureType type;
const void* next;
uint32_t capabilityConfigCount;
const XrSpatialCapabilityConfigurationBaseHeaderEXT* const* capabilityConfigs;
} XrSpatialContextCreateInfoEXT;
The XrSpatialContextCreateInfoEXT structure describes the information to create an XrSpatialContextEXT handle.
The XrSpatialCapabilityConfigurationBaseHeaderEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrSpatialCapabilityConfigurationBaseHeaderEXT {
XrStructureType type;
const void* next;
XrSpatialCapabilityEXT capability;
uint32_t enabledComponentCount;
const XrSpatialComponentTypeEXT* enabledComponents;
} XrSpatialCapabilityConfigurationBaseHeaderEXT;
This structure is not directly used in the API but instead its child structures can be used with XrSpatialContextCreateInfoEXT to configure spatial capabilities.
The runtime must return XR_ERROR_SPATIAL_CAPABILITY_UNSUPPORTED_EXT
if capability is not enumerated by
xrEnumerateSpatialCapabilitiesEXT.
The runtime must return
XR_ERROR_SPATIAL_COMPONENT_UNSUPPORTED_FOR_CAPABILITY_EXT if any
component type listed in enabledComponents is not enumerated for
capability in xrEnumerateSpatialCapabilityComponentTypesEXT.
The xrCreateSpatialContextCompleteEXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrCreateSpatialContextCompleteEXT(
XrSession session,
XrFutureEXT future,
XrCreateSpatialContextCompletionEXT* completion);
xrCreateSpatialContextCompleteEXT completes the asynchronous operation
started by xrCreateSpatialContextAsyncEXT.
The runtime must return XR_ERROR_FUTURE_PENDING_EXT if future is not in the ready state.
The runtime must return XR_ERROR_FUTURE_INVALID_EXT if future
has already been completed or cancelled.
The XrCreateSpatialContextCompletionEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrCreateSpatialContextCompletionEXT {
XrStructureType type;
void* next;
XrResult futureResult;
XrSpatialContextEXT spatialContext;
} XrCreateSpatialContextCompletionEXT;
If futureResult is a success code, spatialContext must be
valid.
If spatialContext is valid, it remains so only within the lifecycle of
xrCreateSpatialContextAsyncEXT::session or until the application
destroys the spatialContext with xrDestroySpatialContextEXT,
whichever comes first.
Destroy the spatial context
The xrDestroySpatialContextEXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrDestroySpatialContextEXT(
XrSpatialContextEXT spatialContext);
The application can call the xrDestroySpatialContextEXT function to release the spatialContext handle and the underlying resources when finished with spatial entity discovery and update tasks.
If there is no other valid XrSpatialContextEXT that was created with
the same spatial capabilities as spatialContext, this call serves as a
suggestion to the runtime to disable the tracking services required for
those capabilities to save system resources.
12.45.7. Spatial Entity Representations
Spatial Entity ID
// Provided by XR_EXT_spatial_entity
XR_DEFINE_ATOM(XrSpatialEntityIdEXT)
XrSpatialEntityIdEXT is used to represent any kind of entity
discovered by the runtime in the spatial environment of the user.
An XrSpatialEntityIdEXT is valid for the XrSpatialContextEXT
in which it is discovered, and the runtime must not reuse the same
XrSpatialEntityIdEXT for different entities within the same
XrSpatialContextEXT.
Also, the runtime must not reuse the same XrSpatialEntityIdEXT
across multiple XrSpatialContextEXT within the same XrSession
regardless of whether it represents the same entity or different ones.
// Provided by XR_EXT_spatial_entity
#define XR_NULL_SPATIAL_ENTITY_ID_EXT 0
XR_NULL_SPATIAL_ENTITY_ID_EXT is a reserved value representing an
invalid XrSpatialEntityIdEXT.
It may be passed to and returned from API functions only when specifically
allowed.
Spatial Entity Handle
// Provided by XR_EXT_spatial_entity
XR_DEFINE_HANDLE(XrSpatialEntityEXT)
The XrSpatialEntityEXT handle represents a spatial entity. An application can create such a handle to express its interest in a specific entity to the runtime.
Create Spatial Entity Handle from ID
The xrCreateSpatialEntityFromIdEXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrCreateSpatialEntityFromIdEXT(
XrSpatialContextEXT spatialContext,
const XrSpatialEntityFromIdCreateInfoEXT* createInfo,
XrSpatialEntityEXT* spatialEntity);
The application can use xrCreateSpatialEntityFromIdEXT to create an XrSpatialEntityEXT handle which is a reference to an entity that exists in the user’s environment.
The runtime must return XR_ERROR_SPATIAL_ENTITY_ID_INVALID_EXT if
XrSpatialEntityFromIdCreateInfoEXT::entityId is not a valid ID
for spatialContext.
The XrSpatialEntityFromIdCreateInfoEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrSpatialEntityFromIdCreateInfoEXT {
XrStructureType type;
const void* next;
XrSpatialEntityIdEXT entityId;
} XrSpatialEntityFromIdCreateInfoEXT;
Destroy Spatial Entity Handle
The xrDestroySpatialEntityEXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrDestroySpatialEntityEXT(
XrSpatialEntityEXT spatialEntity);
The application can use xrDestroySpatialEntityEXT to release the
spatialEntity handle when it is no longer interested in the entity
referenced by this handle.
12.45.8. Spatial Snapshot
// Provided by XR_EXT_spatial_entity
XR_DEFINE_HANDLE(XrSpatialSnapshotEXT)
The application can create spatial snapshots for the purpose of discovering spatial entities or for updating its information about known spatial entities. The XrSpatialSnapshotEXT handle represents the immutable data for the discovered or updated spatial entities and a subset of their components as selected by the application. The spatial snapshot represents a coherent view of the entities and their component data. Once a snapshot is created, the snapshot’s data must remain constant while the snapshot is valid.
The application can create any number of snapshots it wants but must be mindful of the memory being allocated for each new snapshot and must destroy the snapshots once it no longer needs them.
Create discovery snapshot
The xrCreateSpatialDiscoverySnapshotAsyncEXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrCreateSpatialDiscoverySnapshotAsyncEXT(
XrSpatialContextEXT spatialContext,
const XrSpatialDiscoverySnapshotCreateInfoEXT* createInfo,
XrFutureEXT* future);
The application can discover spatial entities by creating a discovery snapshot by using xrCreateSpatialDiscoverySnapshotAsyncEXT.
This function starts an asynchronous operation and creates a corresponding
XrFutureEXT, usable with xrPollFutureEXT and related
functions.
The return value of this function only indicates whether the parameters were
acceptable to schedule the asynchronous operation.
The corresponding completion function is
xrCreateSpatialDiscoverySnapshotCompleteEXT, usable when a future from
this function is in the READY state, with outputs populated by that function
in the completion structure
XrCreateSpatialDiscoverySnapshotCompletionEXT.
The application can submit multiple discovery snapshot creation requests without needing to wait for the previous one to be completed. The runtime may process and complete the snapshot creation in any order. The runtime may delay the completion of the discovery snapshot creation to throttle the application if it needs to reduce the use of system resources due to power, thermal or other policies of the device.
The application can use
XrSpatialDiscoverySnapshotCreateInfoEXT::componentTypes to
filter the list of entities and the components whose data the runtime must
include in the snapshot.
If the application provides a valid list of spatial component types in
XrSpatialDiscoverySnapshotCreateInfoEXT::componentTypes, then
the runtime must only include spatial entities in the snapshot that have at
least one of the components provided in
XrSpatialDiscoverySnapshotCreateInfoEXT::componentTypes.
Also, the runtime must include data for only those components in the snapshot.
The runtime must return XR_ERROR_SPATIAL_COMPONENT_NOT_ENABLED_EXT if
any of the XrSpatialComponentTypeEXT in
XrSpatialDiscoverySnapshotCreateInfoEXT::componentTypes are not
enabled for the spatial capabilities passed to
XrSpatialContextCreateInfoEXT::capabilityConfigs when creating
spatialContext.
If the application does not provide a list of spatial component types in
XrSpatialDiscoverySnapshotCreateInfoEXT::componentTypes, the
runtime must include all the spatial entities in the snapshot that have the
set of components which are enumerated in
XrSpatialCapabilityConfigurationBaseHeaderEXT::enabledComponents for
the capabilities configured for spatialContext.
The runtime must include the data for all the enabled components of the
capabilities configured for spatialContext.
If XrEventDataReferenceSpaceChangePending is queued before the
completion of future, and
XrEventDataReferenceSpaceChangePending::poseValid is false, then
the runtime may either create an XrSpatialSnapshotEXT that has no
entities in it or set the XrSpatialEntityTrackingStateEXT of the
entities that are no longer locatable in
XrCreateSpatialDiscoverySnapshotCompletionInfoEXT::baseSpace at
XrCreateSpatialDiscoverySnapshotCompletionInfoEXT::time to
XR_SPATIAL_ENTITY_TRACKING_STATE_PAUSED_EXT or
XR_SPATIAL_ENTITY_TRACKING_STATE_STOPPED_EXT.
The runtime must not set
XrCreateSpatialDiscoverySnapshotCompletionEXT::futureResult to an error
code because of XrEventDataReferenceSpaceChangePending.
The XrSpatialDiscoverySnapshotCreateInfoEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrSpatialDiscoverySnapshotCreateInfoEXT {
XrStructureType type;
const void* next;
uint32_t componentTypeCount;
const XrSpatialComponentTypeEXT* componentTypes;
} XrSpatialDiscoverySnapshotCreateInfoEXT;
The XrSpatialDiscoverySnapshotCreateInfoEXT structure describes the information to create an XrSpatialSnapshotEXT handle when discovering spatial entities.
The xrCreateSpatialDiscoverySnapshotCompleteEXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrCreateSpatialDiscoverySnapshotCompleteEXT(
XrSpatialContextEXT spatialContext,
const XrCreateSpatialDiscoverySnapshotCompletionInfoEXT* createSnapshotCompletionInfo,
XrCreateSpatialDiscoverySnapshotCompletionEXT* completion);
xrCreateSpatialDiscoverySnapshotCompleteEXT completes the asynchronous
operation started by xrCreateSpatialDiscoverySnapshotAsyncEXT.
The runtime must return XR_ERROR_FUTURE_PENDING_EXT if
XrCreateSpatialDiscoverySnapshotCompletionInfoEXT::future is not
in the ready state.
The runtime must return XR_ERROR_FUTURE_INVALID_EXT if
XrCreateSpatialDiscoverySnapshotCompletionInfoEXT::future has
already been completed or cancelled.
The XrCreateSpatialDiscoverySnapshotCompletionInfoEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrCreateSpatialDiscoverySnapshotCompletionInfoEXT {
XrStructureType type;
const void* next;
XrSpace baseSpace;
XrTime time;
XrFutureEXT future;
} XrCreateSpatialDiscoverySnapshotCompletionInfoEXT;
The locations in the various component data included in the created snapshot
will be represented in baseSpace, located at time.
The XrCreateSpatialDiscoverySnapshotCompletionEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrCreateSpatialDiscoverySnapshotCompletionEXT {
XrStructureType type;
void* next;
XrResult futureResult;
XrSpatialSnapshotEXT snapshot;
} XrCreateSpatialDiscoverySnapshotCompletionEXT;
Discovery Recommendation Event
The XrEventDataSpatialDiscoveryRecommendedEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrEventDataSpatialDiscoveryRecommendedEXT {
XrStructureType type;
const void* next;
XrSpatialContextEXT spatialContext;
} XrEventDataSpatialDiscoveryRecommendedEXT;
The application can retrieve this event by using xrPollEvent. The application can avoid excessive calls to xrCreateSpatialDiscoverySnapshotAsyncEXT to discover spatial entities by waiting for this event. If the application creates multiple discovery snapshots with the same XrSpatialDiscoverySnapshotCreateInfoEXT between two XrEventDataSpatialDiscoveryRecommendedEXT events, the resultant snapshots may contain the same entities and therefore the snapshot creation and data queries would be wasteful.
Waiting for this event to create a new discovery snapshot ensures that the application is not overloading the system with discovery requests for which the runtime may not return any new data and helps avoid the risk of overusing the system resources, and getting throttled due to power or thermal policies of the device. This also helps create parity between runtimes that are discovering spatial entities on the fly with live tracking and runtimes which are providing spatial entities off of a previously recorded state (where the runtime may queue the discovery recommendation event only once for each XrSpatialContextEXT).
The runtime must not queue this event for notifying the application about changes or adjustments made to the component data of existing spatial entities. The application can use xrCreateSpatialUpdateSnapshotEXT to keep track of component data updates for the spatial entities it is interested in.
A runtime may queue a discovery recommendation event without waiting for the application to first call xrCreateSpatialDiscoverySnapshotAsyncEXT. For example, a runtime may base the decision of queueing the discovery recommendation event on the configuration of the XrSpatialContextEXT, its own understanding of the environment around the user (discovery of new entities or loss of existing ones), or for hinting an appropriate discovery request cadence to the application so as not to overload the system resources. The runtime may choose to never queue this event for an XrSpatialContextEXT if no entities are found in the user’s environment throughout the lifetime of that XrSpatialContextEXT.
The runtime must not queue this event for a given spatialContext
until the application completes its creation by using
xrCreateSpatialContextCompleteEXT.
After the application calls xrDestroySpatialContextEXT, the runtime must not queue any more discovery recommendation events for that spatial context nor return any such events for that context from xrPollEvent.
Query Component Data
The xrQuerySpatialComponentDataEXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrQuerySpatialComponentDataEXT(
XrSpatialSnapshotEXT snapshot,
const XrSpatialComponentDataQueryConditionEXT* queryCondition,
XrSpatialComponentDataQueryResultEXT* queryResult);
The application can use xrQuerySpatialComponentDataEXT to query the
component data of the entities in the snapshot by attaching a list structure
to XrSpatialComponentDataQueryResultEXT::next corresponding to
each XrSpatialComponentTypeEXT in
XrSpatialComponentDataQueryConditionEXT::componentTypes.
If the application attaches a list structure to
XrSpatialComponentDataQueryResultEXT::next that does not
correspond to any of the components listed in
XrSpatialComponentDataQueryConditionEXT::componentTypes, the
runtime must return XR_ERROR_VALIDATION_FAILURE.
The application can choose to attach the list structures corresponding to
only a subset of components listed in
XrSpatialComponentDataQueryConditionEXT::componentTypes.
The application can choose to omit the list structures altogether if it
only wishes to know the ids and tracking state of the spatial entities that
satisfy the queryCondition.
The runtime must not treat the absence of list structures from the
XrSpatialComponentDataQueryResultEXT::next chain as a failure.
If XrEventDataReferenceSpaceChangePending is queued and
XrEventDataReferenceSpaceChangePending::changeTime elapsed while
the application is querying component data from an
XrSpatialSnapshotEXT, the application may use the event data to
adjust the poses accordingly.
The runtime must populate
XrSpatialComponentDataQueryResultEXT::entityIds only with
entities that have all the components specified in
XrSpatialComponentDataQueryConditionEXT::componentTypes.
If XrSpatialComponentDataQueryConditionEXT::componentTypeCount
is 0, the runtime must populate queryResult with all the entities
(and their tracking states) that are in the snapshot.
If additional query conditions are added to
XrSpatialComponentDataQueryConditionEXT::next, the runtime must
treat those as an "AND" with the component type availability, i.e. the
runtime must populate XrSpatialComponentDataQueryResultEXT::entityIds
only with entities that satisfy all of the provided conditions.
The runtime must populate the component data in the list structures in the
same order as the entities in
XrSpatialComponentDataQueryResultEXT::entityIds, i.e. the
component data at a given index in the list structure array must correspond
to the entity at the same index.
If the tracking state for an entity is not
XR_SPATIAL_ENTITY_TRACKING_STATE_TRACKING_EXT, the runtime must not
change the data at the index corresponding to that entity in the array
contained in the list structures attached to
XrSpatialComponentDataQueryResultEXT.
As an example, the application creates an XrSpatialSnapshotEXT which contains 5 entities, where:
- Entity 1 and 2 have components XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT and XR_SPATIAL_COMPONENT_TYPE_PARENT_EXT
- Entity 3 and 4 have components XR_SPATIAL_COMPONENT_TYPE_BOUNDED_3D_EXT and XR_SPATIAL_COMPONENT_TYPE_MESH_3D_EXT
- Entity 5 has components XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT and XR_SPATIAL_COMPONENT_TYPE_MESH_3D_EXT.
xrQuerySpatialComponentDataEXT on the above snapshot with
XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT listed in the query condition
will result in entity #1, #2, and #5 being returned to the application and
the application can attach an array of XrSpatialBounded2DDataEXT as
part of the XrSpatialComponentBounded2DListEXT structure to the next
chain of XrSpatialComponentDataQueryResultEXT to get the bounded2D
data.
xrQuerySpatialComponentDataEXT on the above snapshot with
XR_SPATIAL_COMPONENT_TYPE_BOUNDED_3D_EXT and
XR_SPATIAL_COMPONENT_TYPE_MESH_3D_EXT components listed in the query
condition will result in entity #3 and #4 being returned to the application
and the application can attach arrays of XrBoxf and
XrSpatialMeshDataEXT as part of the
XrSpatialComponentBounded3DListEXT and
XrSpatialComponentMesh3DListEXT structures respectively to the next
chain of XrSpatialComponentDataQueryResultEXT to get the component
data.
The XrSpatialComponentDataQueryConditionEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrSpatialComponentDataQueryConditionEXT {
XrStructureType type;
const void* next;
uint32_t componentTypeCount;
const XrSpatialComponentTypeEXT* componentTypes;
} XrSpatialComponentDataQueryConditionEXT;
The XrSpatialComponentDataQueryResultEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrSpatialComponentDataQueryResultEXT {
XrStructureType type;
void* next;
uint32_t entityIdCapacityInput;
uint32_t entityIdCountOutput;
XrSpatialEntityIdEXT* entityIds;
uint32_t entityStateCapacityInput;
uint32_t entityStateCountOutput;
XrSpatialEntityTrackingStateEXT* entityStates;
} XrSpatialComponentDataQueryResultEXT;
An application can use the entityIds with
xrCreateSpatialEntityFromIdEXT to create XrSpatialEntityEXT
handles for the entities it is interested in getting regular updates for.
The application can then use these XrSpatialEntityEXT handles with
xrCreateSpatialUpdateSnapshotEXT to create an update snapshot that has
the runtime’s latest known data of the components for the provided entities.
// Provided by XR_EXT_spatial_entity
typedef enum XrSpatialEntityTrackingStateEXT {
XR_SPATIAL_ENTITY_TRACKING_STATE_STOPPED_EXT = 1,
XR_SPATIAL_ENTITY_TRACKING_STATE_PAUSED_EXT = 2,
XR_SPATIAL_ENTITY_TRACKING_STATE_TRACKING_EXT = 3,
XR_SPATIAL_ENTITY_TRACKING_STATE_MAX_ENUM_EXT = 0x7FFFFFFF
} XrSpatialEntityTrackingStateEXT;
The XrSpatialEntityTrackingStateEXT enumerates the possible spatial entity tracking states:
The enums have the following meanings:
| Enum | Description |
|---|---|
| XR_SPATIAL_ENTITY_TRACKING_STATE_STOPPED_EXT | The runtime has stopped tracking this entity and will never resume tracking it. |
| XR_SPATIAL_ENTITY_TRACKING_STATE_PAUSED_EXT | The runtime has paused tracking this entity but may resume tracking it in the future. |
| XR_SPATIAL_ENTITY_TRACKING_STATE_TRACKING_EXT | The runtime is currently tracking this entity and its component data is valid. |
- The runtime may change the state of the spatial entity from XR_SPATIAL_ENTITY_TRACKING_STATE_TRACKING_EXT to XR_SPATIAL_ENTITY_TRACKING_STATE_PAUSED_EXT if it suspends the tracking of that spatial entity but has the possibility of resuming its tracking in the future. Some examples of when the runtime may do this include (but are not limited to): the application loses input focus; the given spatial entity is too far from the user to be accurately tracked; or there are too many entities being tracked and the runtime wants to reduce the cost of tracking. XrSpatialEntityTrackingStateEXT helps the application insulate itself from the different tracking policies of each runtime.
- The runtime may change the state of an entity from XR_SPATIAL_ENTITY_TRACKING_STATE_PAUSED_EXT to XR_SPATIAL_ENTITY_TRACKING_STATE_TRACKING_EXT or XR_SPATIAL_ENTITY_TRACKING_STATE_STOPPED_EXT.
- The runtime must change the state of the spatial entity from XR_SPATIAL_ENTITY_TRACKING_STATE_TRACKING_EXT or XR_SPATIAL_ENTITY_TRACKING_STATE_PAUSED_EXT to XR_SPATIAL_ENTITY_TRACKING_STATE_STOPPED_EXT if the spatial entity is lost and its tracking will never be recovered or resumed. An example of such a case would be if the device loses tracking, restarts its tracking session but is unable to relocalize in its environment, and therefore treats entities discovered in this tracking session as new entities.
- Once the tracking state of an entity is set to XR_SPATIAL_ENTITY_TRACKING_STATE_STOPPED_EXT, the runtime must never change it to any other state.
- When querying the component data of a spatial entity using xrQuerySpatialComponentDataEXT, the runtime must set valid data in the contents of the buffers provided by the application in the next chain of XrSpatialComponentDataQueryResultEXT if the entity state is XR_SPATIAL_ENTITY_TRACKING_STATE_TRACKING_EXT. If the entity state is XR_SPATIAL_ENTITY_TRACKING_STATE_STOPPED_EXT or XR_SPATIAL_ENTITY_TRACKING_STATE_PAUSED_EXT, the runtime must not change the content of the buffers.
Two-call idiom for component data
The XrSpatialBufferEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrSpatialBufferEXT {
XrSpatialBufferIdEXT bufferId;
XrSpatialBufferTypeEXT bufferType;
} XrSpatialBufferEXT;
Some spatial components have variable-sized data and therefore require using the two-call idiom to retrieve their data. In such cases, the spatial component data structure provides an XrSpatialBufferEXT for each variable-sized buffer needed in that component’s data.
For the same bufferId, the runtime must provide the same data from
one component data query to another, even across one snapshot to another.
A different bufferId between component data query calls indicates to
the application that the data for that component may have changed.
// Provided by XR_EXT_spatial_entity
XR_DEFINE_ATOM(XrSpatialBufferIdEXT)
XrSpatialBufferIdEXT is used to represent any kind of variable-sized data for a spatial component.
The runtime must keep the XrSpatialBufferIdEXT and its data in
memory for at least the lifecycle of the XrSpatialSnapshotEXT that
contains it.
The runtime may keep the XrSpatialBufferIdEXT and its data in
memory for longer than the lifecycle of the XrSpatialSnapshotEXT in
order to return the same ID as part of snapshots created later on by the
application.
For the same XrSpatialBufferIdEXT, the runtime must always return
the same data via the appropriate xrGetSpatialBuffer* function.
// Provided by XR_EXT_spatial_entity
typedef enum XrSpatialBufferTypeEXT {
XR_SPATIAL_BUFFER_TYPE_UNKNOWN_EXT = 0,
XR_SPATIAL_BUFFER_TYPE_STRING_EXT = 1,
XR_SPATIAL_BUFFER_TYPE_UINT8_EXT = 2,
XR_SPATIAL_BUFFER_TYPE_UINT16_EXT = 3,
XR_SPATIAL_BUFFER_TYPE_UINT32_EXT = 4,
XR_SPATIAL_BUFFER_TYPE_FLOAT_EXT = 5,
XR_SPATIAL_BUFFER_TYPE_VECTOR2F_EXT = 6,
XR_SPATIAL_BUFFER_TYPE_VECTOR3F_EXT = 7,
XR_SPATIAL_BUFFER_TYPE_MAX_ENUM_EXT = 0x7FFFFFFF
} XrSpatialBufferTypeEXT;
The XrSpatialBufferTypeEXT enumeration identifies the different data
types of the buffer represented by an XrSpatialBufferIdEXT.
The xrGetSpatialBufferStringEXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrGetSpatialBufferStringEXT(
XrSpatialSnapshotEXT snapshot,
const XrSpatialBufferGetInfoEXT* info,
uint32_t bufferCapacityInput,
uint32_t* bufferCountOutput,
char* buffer);
The application can get the data for an XrSpatialBufferEXT provided
by a component, where XrSpatialBufferEXT::bufferType is
XR_SPATIAL_BUFFER_TYPE_STRING_EXT by using
xrGetSpatialBufferStringEXT.
The runtime must return XR_ERROR_VALIDATION_FAILURE if the
XrSpatialBufferTypeEXT for
XrSpatialBufferGetInfoEXT::bufferId is not
XR_SPATIAL_BUFFER_TYPE_STRING_EXT.
The runtime must return XR_ERROR_SPATIAL_BUFFER_ID_INVALID_EXT if
XrSpatialBufferGetInfoEXT::bufferId does not belong to
snapshot.
The buffer filled by the runtime must be a null-terminated UTF-8 string.
The xrGetSpatialBufferUint8EXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrGetSpatialBufferUint8EXT(
XrSpatialSnapshotEXT snapshot,
const XrSpatialBufferGetInfoEXT* info,
uint32_t bufferCapacityInput,
uint32_t* bufferCountOutput,
uint8_t* buffer);
The application can get the data for an XrSpatialBufferEXT provided
by a component, where XrSpatialBufferEXT::bufferType is
XR_SPATIAL_BUFFER_TYPE_UINT8_EXT by using
xrGetSpatialBufferUint8EXT.
The runtime must return XR_ERROR_VALIDATION_FAILURE if the
XrSpatialBufferTypeEXT for
XrSpatialBufferGetInfoEXT::bufferId is not
XR_SPATIAL_BUFFER_TYPE_UINT8_EXT.
The runtime must return XR_ERROR_SPATIAL_BUFFER_ID_INVALID_EXT if
XrSpatialBufferGetInfoEXT::bufferId does not belong to
snapshot.
The xrGetSpatialBufferUint16EXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrGetSpatialBufferUint16EXT(
XrSpatialSnapshotEXT snapshot,
const XrSpatialBufferGetInfoEXT* info,
uint32_t bufferCapacityInput,
uint32_t* bufferCountOutput,
uint16_t* buffer);
The application can get the data for an XrSpatialBufferEXT provided
by a component, where XrSpatialBufferEXT::bufferType is
XR_SPATIAL_BUFFER_TYPE_UINT16_EXT by using
xrGetSpatialBufferUint16EXT.
The runtime must return XR_ERROR_VALIDATION_FAILURE if the
XrSpatialBufferTypeEXT for
XrSpatialBufferGetInfoEXT::bufferId is not
XR_SPATIAL_BUFFER_TYPE_UINT16_EXT.
The runtime must return XR_ERROR_SPATIAL_BUFFER_ID_INVALID_EXT if
XrSpatialBufferGetInfoEXT::bufferId does not belong to
snapshot.
The xrGetSpatialBufferUint32EXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrGetSpatialBufferUint32EXT(
XrSpatialSnapshotEXT snapshot,
const XrSpatialBufferGetInfoEXT* info,
uint32_t bufferCapacityInput,
uint32_t* bufferCountOutput,
uint32_t* buffer);
The application can get the data for an XrSpatialBufferEXT provided
by a component, where XrSpatialBufferEXT::bufferType is
XR_SPATIAL_BUFFER_TYPE_UINT32_EXT by using
xrGetSpatialBufferUint32EXT.
The runtime must return XR_ERROR_VALIDATION_FAILURE if the
XrSpatialBufferTypeEXT for
XrSpatialBufferGetInfoEXT::bufferId is not
XR_SPATIAL_BUFFER_TYPE_UINT32_EXT.
The runtime must return XR_ERROR_SPATIAL_BUFFER_ID_INVALID_EXT if
XrSpatialBufferGetInfoEXT::bufferId does not belong to
snapshot.
The xrGetSpatialBufferFloatEXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrGetSpatialBufferFloatEXT(
XrSpatialSnapshotEXT snapshot,
const XrSpatialBufferGetInfoEXT* info,
uint32_t bufferCapacityInput,
uint32_t* bufferCountOutput,
float* buffer);
The application can get the data for an XrSpatialBufferEXT provided
by a component, where XrSpatialBufferEXT::bufferType is
XR_SPATIAL_BUFFER_TYPE_FLOAT_EXT by using
xrGetSpatialBufferFloatEXT.
The runtime must return XR_ERROR_VALIDATION_FAILURE if the
XrSpatialBufferTypeEXT for
XrSpatialBufferGetInfoEXT::bufferId is not
XR_SPATIAL_BUFFER_TYPE_FLOAT_EXT.
The runtime must return XR_ERROR_SPATIAL_BUFFER_ID_INVALID_EXT if
XrSpatialBufferGetInfoEXT::bufferId does not belong to
snapshot.
The xrGetSpatialBufferVector2fEXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrGetSpatialBufferVector2fEXT(
XrSpatialSnapshotEXT snapshot,
const XrSpatialBufferGetInfoEXT* info,
uint32_t bufferCapacityInput,
uint32_t* bufferCountOutput,
XrVector2f* buffer);
The application can get the data for an XrSpatialBufferEXT provided
by a component, where XrSpatialBufferEXT::bufferType is
XR_SPATIAL_BUFFER_TYPE_VECTOR2F_EXT by using
xrGetSpatialBufferVector2fEXT.
The runtime must return XR_ERROR_VALIDATION_FAILURE if the
XrSpatialBufferTypeEXT for
XrSpatialBufferGetInfoEXT::bufferId is not
XR_SPATIAL_BUFFER_TYPE_VECTOR2F_EXT.
The runtime must return XR_ERROR_SPATIAL_BUFFER_ID_INVALID_EXT if
XrSpatialBufferGetInfoEXT::bufferId does not belong to
snapshot.
The xrGetSpatialBufferVector3fEXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrGetSpatialBufferVector3fEXT(
XrSpatialSnapshotEXT snapshot,
const XrSpatialBufferGetInfoEXT* info,
uint32_t bufferCapacityInput,
uint32_t* bufferCountOutput,
XrVector3f* buffer);
The application can get the data for an XrSpatialBufferEXT provided
by a component, where XrSpatialBufferEXT::bufferType is
XR_SPATIAL_BUFFER_TYPE_VECTOR3F_EXT by using
xrGetSpatialBufferVector3fEXT.
The runtime must return XR_ERROR_VALIDATION_FAILURE if the
XrSpatialBufferTypeEXT for
XrSpatialBufferGetInfoEXT::bufferId is not
XR_SPATIAL_BUFFER_TYPE_VECTOR3F_EXT.
The runtime must return XR_ERROR_SPATIAL_BUFFER_ID_INVALID_EXT if
XrSpatialBufferGetInfoEXT::bufferId does not belong to
snapshot.
The XrSpatialBufferGetInfoEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrSpatialBufferGetInfoEXT {
XrStructureType type;
const void* next;
XrSpatialBufferIdEXT bufferId;
} XrSpatialBufferGetInfoEXT;
// Provided by XR_EXT_spatial_entity
#define XR_NULL_SPATIAL_BUFFER_ID_EXT 0
XR_NULL_SPATIAL_BUFFER_ID_EXT is a reserved value representing an
invalid XrSpatialBufferIdEXT.
It may be passed to and returned from API functions only when specifically
allowed.
Create Update Snapshot
The xrCreateSpatialUpdateSnapshotEXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrCreateSpatialUpdateSnapshotEXT(
XrSpatialContextEXT spatialContext,
const XrSpatialUpdateSnapshotCreateInfoEXT* createInfo,
XrSpatialSnapshotEXT* snapshot);
The application can use xrCreateSpatialUpdateSnapshotEXT to create a snapshot and get the latest component data for specific entities as known by the runtime. Applications can provide the XrSpatialEntityEXT handles and the component types they are interested in when creating the snapshot.
The application can use
XrSpatialUpdateSnapshotCreateInfoEXT::componentTypes to filter
the list of components whose data must be included in the snapshot.
If the application provides a valid list of spatial component types in
XrSpatialUpdateSnapshotCreateInfoEXT::componentTypes, then the
runtime must only include spatial entities in the snapshot that have at
least one of the components provided in
XrSpatialUpdateSnapshotCreateInfoEXT::componentTypes.
Also, the runtime must only include data for those components in the
snapshot.
The runtime must return XR_ERROR_SPATIAL_COMPONENT_NOT_ENABLED_EXT if
any of the XrSpatialComponentTypeEXT in
XrSpatialUpdateSnapshotCreateInfoEXT::componentTypes are not
enabled for the spatial capabilities passed to
XrSpatialContextCreateInfoEXT::capabilityConfigs when creating
spatialContext.
If the application does not provide a list of spatial component types in
XrSpatialUpdateSnapshotCreateInfoEXT::componentTypes, the
runtime must include all the spatial entities listed in
XrSpatialUpdateSnapshotCreateInfoEXT::entities in the snapshot
and it must include the data for all the enabled components of the
capabilities configured for spatialContext.
The application can create any number of snapshots it wants but must be mindful of the memory being allocated for each new snapshot and must destroy the snapshots once it no longer needs them.
The XrSpatialUpdateSnapshotCreateInfoEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrSpatialUpdateSnapshotCreateInfoEXT {
XrStructureType type;
const void* next;
uint32_t entityCount;
const XrSpatialEntityEXT* entities;
uint32_t componentTypeCount;
const XrSpatialComponentTypeEXT* componentTypes;
XrSpace baseSpace;
XrTime time;
} XrSpatialUpdateSnapshotCreateInfoEXT;
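As a sketch of how an application might populate this structure, the self-contained C++ below uses minimal local mirrors of the handle types (placeholder typedefs, not the real OpenXR definitions), including the case described above where no component types are given so that all enabled components are included.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Minimal local stand-ins for the OpenXR handle/scalar types (placeholders).
using SpatialEntity = uint64_t;
using ComponentType = uint32_t;
using Space = uint64_t;
using Time = int64_t;

// Mirrors the member layout of XrSpatialUpdateSnapshotCreateInfoEXT
// (type/next omitted for brevity in this sketch).
struct UpdateSnapshotCreateInfo {
    uint32_t entityCount;
    const SpatialEntity* entities;
    uint32_t componentTypeCount;
    const ComponentType* componentTypes;  // null: include all enabled components
    Space baseSpace;
    Time time;
};

// Fill the create info for an update snapshot over a set of entities, as an
// application would before calling xrCreateSpatialUpdateSnapshotEXT.
UpdateSnapshotCreateInfo makeUpdateInfo(const std::vector<SpatialEntity>& entities,
                                        const std::vector<ComponentType>& components,
                                        Space baseSpace, Time time) {
    UpdateSnapshotCreateInfo info{};
    info.entityCount = static_cast<uint32_t>(entities.size());
    info.entities = entities.data();
    info.componentTypeCount = static_cast<uint32_t>(components.size());
    info.componentTypes = components.empty() ? nullptr : components.data();
    info.baseSpace = baseSpace;
    info.time = time;
    return info;
}
```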
Destroy snapshot
The xrDestroySpatialSnapshotEXT function is defined as:
// Provided by XR_EXT_spatial_entity
XrResult xrDestroySpatialSnapshotEXT(
XrSpatialSnapshotEXT snapshot);
The application can call xrDestroySpatialSnapshotEXT to destroy the XrSpatialSnapshotEXT handle and the resources associated with it.
12.45.9. Common Components
Bounded 2D
Component data
The XrSpatialBounded2DDataEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrSpatialBounded2DDataEXT {
XrPosef center;
XrExtent2Df extents;
} XrSpatialBounded2DDataEXT;
The extents of the XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT
refer to the entity’s size in the x-y plane of the plane’s coordinate
system.
A plane with a position of {0, 0, 0}, rotation of {0, 0, 0, 1} (no
rotation), and an extent of {1, 1} refers to a 1 meter x 1 meter plane
centered at {0, 0, 0} with its front face normal vector pointing towards the
+Z direction in the component’s space.
Note
OpenXR uses an X-Y plane with +Z as the plane normal, but other APIs may use an X-Z plane with +Y as the plane normal. The X-Y plane can be converted to an X-Z plane by rotating -π/2 radians around the +X axis.
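The conversion in the note can be checked numerically. The self-contained C++ sketch below (with minimal stand-ins for XrVector3f and XrQuaternionf, not the real OpenXR types) rotates the +Z plane normal by -π/2 radians around the +X axis and obtains the +Y normal used by X-Z-plane APIs.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };       // stand-in for XrVector3f
struct Quat { float x, y, z, w; };    // stand-in for XrQuaternionf

// Rotate v by unit quaternion q: with t = 2*(q_v x v), v' = v + w*t + q_v x t.
Vec3 rotate(const Quat& q, const Vec3& v) {
    Vec3 t{2.0f * (q.y * v.z - q.z * v.y),
           2.0f * (q.z * v.x - q.x * v.z),
           2.0f * (q.x * v.y - q.y * v.x)};
    return Vec3{v.x + q.w * t.x + (q.y * t.z - q.z * t.y),
                v.y + q.w * t.y + (q.z * t.x - q.x * t.z),
                v.z + q.w * t.z + (q.x * t.y - q.y * t.x)};
}

// Quaternion for a rotation of -pi/2 radians around the +X axis.
Quat minusHalfPiAboutX() {
    const float kPi = 3.14159265358979f;
    const float half = -kPi / 4.0f;  // half of -pi/2
    return Quat{std::sin(half), 0.0f, 0.0f, std::cos(half)};
}
```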
Component list structure to query data
The XrSpatialComponentBounded2DListEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrSpatialComponentBounded2DListEXT {
XrStructureType type;
void* next;
uint32_t boundCount;
XrSpatialBounded2DDataEXT* bounds;
} XrSpatialComponentBounded2DListEXT;
The runtime must return XR_ERROR_VALIDATION_FAILURE from
xrQuerySpatialComponentDataEXT if
XrSpatialComponentBounded2DListEXT is in the next chain of
XrSpatialComponentDataQueryResultEXT::next but
XrSpatialComponentDataQueryConditionEXT::componentTypeCount is not
zero and XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT is not included in
XrSpatialComponentDataQueryConditionEXT::componentTypes.
The runtime must return XR_ERROR_SIZE_INSUFFICIENT from
xrQuerySpatialComponentDataEXT if boundCount is less than
XrSpatialComponentDataQueryResultEXT::entityIdCountOutput.
If xrQuerySpatialComponentDataEXT::snapshot was created from
xrCreateSpatialDiscoverySnapshotCompleteEXT, then the runtime must
provide XrSpatialBounded2DDataEXT::center in
XrCreateSpatialDiscoverySnapshotCompletionInfoEXT::baseSpace and
XrCreateSpatialDiscoverySnapshotCompletionInfoEXT::time.
If xrQuerySpatialComponentDataEXT::snapshot was created from
xrCreateSpatialUpdateSnapshotEXT, then the runtime must provide
XrSpatialBounded2DDataEXT::center in
XrSpatialUpdateSnapshotCreateInfoEXT::baseSpace and
XrSpatialUpdateSnapshotCreateInfoEXT::time.
Configuration
If XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT is enumerated in
XrSpatialCapabilityComponentTypesEXT::componentTypes for some
capability, the application can enable it by including the enum value in
the
XrSpatialCapabilityConfigurationBaseHeaderEXT::enabledComponents
list.
This component does not require any special configuration to be included in
the next chain of XrSpatialCapabilityConfigurationBaseHeaderEXT.
Bounded 3D
Component data
XR_SPATIAL_COMPONENT_TYPE_BOUNDED_3D_EXT uses XrBoxf for its
data.
Component list structure to query data
The XrSpatialComponentBounded3DListEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrSpatialComponentBounded3DListEXT {
XrStructureType type;
void* next;
uint32_t boundCount;
XrBoxf* bounds;
} XrSpatialComponentBounded3DListEXT;
The runtime must return XR_ERROR_VALIDATION_FAILURE from
xrQuerySpatialComponentDataEXT if
XrSpatialComponentBounded3DListEXT is in the next chain of
XrSpatialComponentDataQueryResultEXT::next but
XrSpatialComponentDataQueryConditionEXT::componentTypeCount is not
zero and XR_SPATIAL_COMPONENT_TYPE_BOUNDED_3D_EXT is not included in
XrSpatialComponentDataQueryConditionEXT::componentTypes.
The runtime must return XR_ERROR_SIZE_INSUFFICIENT from
xrQuerySpatialComponentDataEXT if boundCount is less than
XrSpatialComponentDataQueryResultEXT::entityIdCountOutput.
If xrQuerySpatialComponentDataEXT::snapshot was created from
xrCreateSpatialDiscoverySnapshotCompleteEXT, then the runtime must
provide XrBoxf::center in
XrCreateSpatialDiscoverySnapshotCompletionInfoEXT::baseSpace at
XrCreateSpatialDiscoverySnapshotCompletionInfoEXT::time.
If xrQuerySpatialComponentDataEXT::snapshot was created from
xrCreateSpatialUpdateSnapshotEXT, then the runtime must provide
XrBoxf::center in
XrSpatialUpdateSnapshotCreateInfoEXT::baseSpace at
XrSpatialUpdateSnapshotCreateInfoEXT::time.
Configuration
If XR_SPATIAL_COMPONENT_TYPE_BOUNDED_3D_EXT is enumerated in
XrSpatialCapabilityComponentTypesEXT::componentTypes for some
capability, the application can enable it by including the enum in the
XrSpatialCapabilityConfigurationBaseHeaderEXT::enabledComponents
list.
This component does not require any special configuration to be included in
the next chain of XrSpatialCapabilityConfigurationBaseHeaderEXT.
Parent
Component data
XR_SPATIAL_COMPONENT_TYPE_PARENT_EXT uses
XrSpatialEntityIdEXT for its data.
Component list structure to query data
The XrSpatialComponentParentListEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrSpatialComponentParentListEXT {
XrStructureType type;
void* next;
uint32_t parentCount;
XrSpatialEntityIdEXT* parents;
} XrSpatialComponentParentListEXT;
The runtime must return XR_ERROR_VALIDATION_FAILURE from
xrQuerySpatialComponentDataEXT if
XrSpatialComponentParentListEXT is in the next chain of
XrSpatialComponentDataQueryResultEXT::next but
XrSpatialComponentDataQueryConditionEXT::componentTypeCount is not
zero and XR_SPATIAL_COMPONENT_TYPE_PARENT_EXT is not included in
XrSpatialComponentDataQueryConditionEXT::componentTypes.
The runtime must return XR_ERROR_SIZE_INSUFFICIENT from
xrQuerySpatialComponentDataEXT if parentCount is less than
XrSpatialComponentDataQueryResultEXT::entityIdCountOutput.
Configuration
If XR_SPATIAL_COMPONENT_TYPE_PARENT_EXT is enumerated in
XrSpatialCapabilityComponentTypesEXT::componentTypes for some
capability, the application can enable it by including the enum in the
XrSpatialCapabilityConfigurationBaseHeaderEXT::enabledComponents
list.
This component does not require any special configuration to be included in
the next chain of XrSpatialCapabilityConfigurationBaseHeaderEXT.
Mesh 3D
Component data
The XrSpatialMeshDataEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrSpatialMeshDataEXT {
XrPosef origin;
XrSpatialBufferEXT vertexBuffer;
XrSpatialBufferEXT indexBuffer;
} XrSpatialMeshDataEXT;
The component type using XrSpatialMeshDataEXT must specify the
XrSpatialBufferTypeEXT of the vertexBuffer and
indexBuffer.
Component list structure to query data
The XrSpatialComponentMesh3DListEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrSpatialComponentMesh3DListEXT {
XrStructureType type;
void* next;
uint32_t meshCount;
XrSpatialMeshDataEXT* meshes;
} XrSpatialComponentMesh3DListEXT;
The application can query the mesh 3D component of the spatial entities in
an XrSpatialSnapshotEXT by adding
XR_SPATIAL_COMPONENT_TYPE_MESH_3D_EXT in
XrSpatialComponentDataQueryConditionEXT::componentTypes and
adding XrSpatialComponentMesh3DListEXT to the next pointer chain of
XrSpatialComponentDataQueryResultEXT.
The runtime must return XR_ERROR_VALIDATION_FAILURE from
xrQuerySpatialComponentDataEXT if
XrSpatialComponentMesh3DListEXT is in the next chain of
XrSpatialComponentDataQueryResultEXT::next but
XrSpatialComponentDataQueryConditionEXT::componentTypeCount is not
zero and XR_SPATIAL_COMPONENT_TYPE_MESH_3D_EXT is not included in
XrSpatialComponentDataQueryConditionEXT::componentTypes.
The runtime must return XR_ERROR_SIZE_INSUFFICIENT from
xrQuerySpatialComponentDataEXT if meshCount is less than
XrSpatialComponentDataQueryResultEXT::entityIdCountOutput.
For the XrSpatialMeshDataEXT filled out by the runtime in the
meshes array, the XrSpatialBufferEXT::bufferType for
XrSpatialMeshDataEXT::vertexBuffer must be
XR_SPATIAL_BUFFER_TYPE_VECTOR3F_EXT and
XrSpatialBufferEXT::bufferType for
XrSpatialMeshDataEXT::indexBuffer must be
XR_SPATIAL_BUFFER_TYPE_UINT32_EXT.
Configuration
If XR_SPATIAL_COMPONENT_TYPE_MESH_3D_EXT is enumerated in
XrSpatialCapabilityComponentTypesEXT::componentTypes for some
capability, the application can enable it by including the enum in the
XrSpatialCapabilityConfigurationBaseHeaderEXT::enabledComponents
list of the XrSpatialCapabilityConfigurationBaseHeaderEXT derived
structure of the capability that supports this component.
This component does not require any special configuration to be included in the next chain of XrSpatialCapabilityConfigurationBaseHeaderEXT.
12.45.10. Tracking state filters
The XrSpatialFilterTrackingStateEXT structure is defined as:
// Provided by XR_EXT_spatial_entity
typedef struct XrSpatialFilterTrackingStateEXT {
XrStructureType type;
const void* next;
XrSpatialEntityTrackingStateEXT trackingState;
} XrSpatialFilterTrackingStateEXT;
The application can use XrSpatialFilterTrackingStateEXT in the next
chain of XrSpatialDiscoverySnapshotCreateInfoEXT to scope the
discovery to only those entities whose tracking state is
trackingState.
The application can use XrSpatialFilterTrackingStateEXT in the next
chain of XrSpatialComponentDataQueryConditionEXT to scope the
component data query from a snapshot only to entities whose tracking state
is trackingState.
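Attaching the filter follows the usual OpenXR next-chain pattern. The sketch below uses minimal mock structures (placeholder type values, not the real XR_TYPE_* constants) to show a tracking-state filter being chained into a query condition's next pointer before the query call.

```cpp
#include <cassert>
#include <cstdint>

// Placeholder structure-type and tracking-state values (not real XR enums).
enum StructureType : uint32_t {
    TYPE_QUERY_CONDITION = 1,
    TYPE_FILTER_TRACKING_STATE = 2,
};
enum TrackingState : uint32_t { STATE_STOPPED = 0, STATE_TRACKING = 2 };

// Mirrors the member order of XrSpatialFilterTrackingStateEXT.
struct FilterTrackingState {
    StructureType type;
    const void* next;
    TrackingState trackingState;
};

// Mirrors XrSpatialComponentDataQueryConditionEXT (component members omitted).
struct QueryCondition {
    StructureType type;
    const void* next;
};

// Chain the filter into the query condition's next pointer, as an
// application would before calling xrQuerySpatialComponentDataEXT.
QueryCondition makeFilteredCondition(const FilterTrackingState* filter) {
    QueryCondition cond{TYPE_QUERY_CONDITION, filter};
    return cond;
}
```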
12.45.11. Example code
Application Usage
Applications typically use the spatial entity extension in the following pattern:
-
An application first enumerates the spatial capabilities of the system using xrEnumerateSpatialCapabilitiesEXT. It then inspects the returned array of XrSpatialCapabilityEXT and enumerates the components and features supported for each of those capabilities by using xrEnumerateSpatialCapabilityComponentTypesEXT and xrEnumerateSpatialCapabilityFeaturesEXT respectively. This gives the application a full picture of the components that it can enable and the configurations the capability accepts.
-
The application then creates one or many XrSpatialContextEXT handles with specific spatial capability configurations, wherein the configurations enable & configure a specific capability in the spatial context, and enable & configure components for those capabilities.
-
For each XrSpatialContextEXT, the application waits to receive XrEventDataSpatialDiscoveryRecommendedEXT events for that XrSpatialContextEXT before using xrCreateSpatialDiscoverySnapshotAsyncEXT to discover spatial entities. Once this async operation is complete, the application receives an XrSpatialSnapshotEXT handle.
-
The application queries for the entities and the component data included in this XrSpatialSnapshotEXT by using xrQuerySpatialComponentDataEXT. The application reads the latest component data of the queried entities from structures attached to the next chain of XrSpatialComponentDataQueryResultEXT if the entity state is XR_SPATIAL_ENTITY_TRACKING_STATE_TRACKING_EXT.
-
If there are specific entities that the application identifies as interesting and wants to get updates for over time, it creates XrSpatialEntityEXT handles for those entities by using xrCreateSpatialEntityFromIdEXT. The application gets updates for such interesting entities by using xrCreateSpatialUpdateSnapshotEXT and uses the same xrQuerySpatialComponentDataEXT function on the newly created XrSpatialSnapshotEXT to get the latest component data for those entities.
Discover spatial entities & query component data
The following example code demonstrates how to discover spatial entities for capability "Foo" and query their component data.
/****************************/
/* Capability definition */
/****************************/
// Foo capability has the following components -
// - XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT
#define XR_SPATIAL_CAPABILITY_FOO ((XrSpatialCapabilityEXT)1000740000U)
#define XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_FOO_EXT ((XrStructureType)1000740000U)
// Derives from XrSpatialCapabilityConfigurationBaseHeaderEXT
typedef struct XrSpatialCapabilityConfigurationFooEXT {
XrStructureType type;
const void* XR_MAY_ALIAS next;
XrSpatialCapabilityEXT capability;
uint32_t enabledComponentCount;
const XrSpatialComponentTypeEXT* enabledComponents;
} XrSpatialCapabilityConfigurationFooEXT;
/******************************/
/* End capability definition */
/******************************/
auto waitUntilReady = [](XrFutureEXT future) {
XrFuturePollInfoEXT pollInfo{XR_TYPE_FUTURE_POLL_INFO_EXT};
XrFuturePollResultEXT pollResult{XR_TYPE_FUTURE_POLL_RESULT_EXT};
pollInfo.future = future;
do {
// sleep(1);
CHK_XR(xrPollFutureEXT(instance, &pollInfo, &pollResult));
} while (pollResult.state != XR_FUTURE_STATE_READY_EXT);
};
// Create a spatial context
XrSpatialContextEXT spatialContext{};
{
const std::array<XrSpatialComponentTypeEXT, 1> enabledComponents = {
XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT,
};
// Configure Foo capability for the spatial context
XrSpatialCapabilityConfigurationFooEXT fooConfig{XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_FOO_EXT};
fooConfig.capability = XR_SPATIAL_CAPABILITY_FOO;
fooConfig.enabledComponentCount = enabledComponents.size();
fooConfig.enabledComponents = enabledComponents.data();
std::vector<XrSpatialCapabilityConfigurationBaseHeaderEXT*> capabilityConfigs;
capabilityConfigs.push_back(reinterpret_cast<XrSpatialCapabilityConfigurationBaseHeaderEXT*>(&fooConfig));
XrSpatialContextCreateInfoEXT spatialContextCreateInfo{XR_TYPE_SPATIAL_CONTEXT_CREATE_INFO_EXT};
spatialContextCreateInfo.capabilityConfigCount = capabilityConfigs.size();
spatialContextCreateInfo.capabilityConfigs = capabilityConfigs.data();
XrFutureEXT createContextFuture;
CHK_XR(xrCreateSpatialContextAsyncEXT(session, &spatialContextCreateInfo, &createContextFuture));
waitUntilReady(createContextFuture);
XrCreateSpatialContextCompletionEXT completion{XR_TYPE_CREATE_SPATIAL_CONTEXT_COMPLETION_EXT};
CHK_XR(xrCreateSpatialContextCompleteEXT(session, createContextFuture, &completion));
if (completion.futureResult != XR_SUCCESS) {
return;
}
spatialContext = completion.spatialContext;
}
auto discoverSpatialEntities = [&](XrSpatialContextEXT spatialContext, XrTime time) {
// We want to look for entities that have the following components.
std::array<XrSpatialComponentTypeEXT, 1> snapshotComponents {XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT};
XrSpatialDiscoverySnapshotCreateInfoEXT snapshotCreateInfo{XR_TYPE_SPATIAL_DISCOVERY_SNAPSHOT_CREATE_INFO_EXT};
snapshotCreateInfo.componentTypeCount = snapshotComponents.size();
snapshotCreateInfo.componentTypes = snapshotComponents.data();
XrFutureEXT future = XR_NULL_FUTURE_EXT;
CHK_XR(xrCreateSpatialDiscoverySnapshotAsyncEXT(spatialContext, &snapshotCreateInfo, &future));
waitUntilReady(future);
XrCreateSpatialDiscoverySnapshotCompletionInfoEXT completionInfo{XR_TYPE_CREATE_SPATIAL_DISCOVERY_SNAPSHOT_COMPLETION_INFO_EXT};
completionInfo.baseSpace = localSpace;
completionInfo.time = time;
completionInfo.future = future;
XrCreateSpatialDiscoverySnapshotCompletionEXT completion{XR_TYPE_CREATE_SPATIAL_DISCOVERY_SNAPSHOT_COMPLETION_EXT};
CHK_XR(xrCreateSpatialDiscoverySnapshotCompleteEXT(spatialContext, &completionInfo, &completion));
if (completion.futureResult == XR_SUCCESS) {
// Query for the bounded2d component data
XrSpatialComponentTypeEXT componentToQuery = XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT;
XrSpatialComponentDataQueryConditionEXT queryCond{XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_CONDITION_EXT};
queryCond.componentTypeCount = 1;
queryCond.componentTypes = &componentToQuery;
XrSpatialComponentDataQueryResultEXT queryResult{XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_RESULT_EXT};
queryResult.entityIdCapacityInput = 0;
queryResult.entityIds = nullptr;
queryResult.entityStateCapacityInput = 0;
queryResult.entityStates = nullptr;
CHK_XR(xrQuerySpatialComponentDataEXT(completion.snapshot, &queryCond, &queryResult));
std::vector<XrSpatialEntityIdEXT> entityIds(queryResult.entityIdCountOutput);
std::vector<XrSpatialEntityTrackingStateEXT> entityStates(queryResult.entityStateCountOutput);
queryResult.entityIdCapacityInput = entityIds.size();
queryResult.entityIds = entityIds.data();
queryResult.entityStateCapacityInput = entityStates.size();
queryResult.entityStates = entityStates.data();
std::vector<XrSpatialBounded2DDataEXT> bounded2d(queryResult.entityIdCountOutput);
XrSpatialComponentBounded2DListEXT boundsList{XR_TYPE_SPATIAL_COMPONENT_BOUNDED_2D_LIST_EXT};
boundsList.boundCount = bounded2d.size();
boundsList.bounds = bounded2d.data();
queryResult.next = &boundsList;
CHK_XR(xrQuerySpatialComponentDataEXT(completion.snapshot, &queryCond, &queryResult));
for (uint32_t i = 0; i < queryResult.entityIdCountOutput; ++i) {
if (entityStates[i] == XR_SPATIAL_ENTITY_TRACKING_STATE_TRACKING_EXT) {
// 2D extents for entity entityIds[i] is bounded2d[i].extents.
}
}
CHK_XR(xrDestroySpatialSnapshotEXT(completion.snapshot));
}
};
while (1) {
// ...
// For every frame in frame loop
// ...
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
// Poll for the XR_TYPE_EVENT_DATA_SPATIAL_DISCOVERY_RECOMMENDED_EXT event
XrEventDataBuffer event = {XR_TYPE_EVENT_DATA_BUFFER};
XrResult result = xrPollEvent(instance, &event);
if (result == XR_SUCCESS) {
switch (event.type) {
case XR_TYPE_EVENT_DATA_SPATIAL_DISCOVERY_RECOMMENDED_EXT: {
const XrEventDataSpatialDiscoveryRecommendedEXT& eventdata =
*reinterpret_cast<XrEventDataSpatialDiscoveryRecommendedEXT*>(&event);
// Discover spatial entities for the context that we received the "discovery
// recommended" event for.
discoverSpatialEntities(eventdata.spatialContext, time);
break;
}
}
}
// ...
// Finish frame loop
// ...
}
CHK_XR(xrDestroySpatialContextEXT(spatialContext));
Query buffer data
The following example code demonstrates how to get the data of a component that provides an XrSpatialBufferEXT.
/****************************/
/* Component definition */
/****************************/
// Foo component that provides an XrVector3f buffer
#define XR_SPATIAL_COMPONENT_TYPE_FOO_EXT ((XrSpatialComponentTypeEXT)1000740000U)
#define XR_TYPE_SPATIAL_COMPONENT_FOO_LIST_EXT ((XrStructureType)1000740000U)
// XrSpatialComponentFooListEXT extends XrSpatialComponentDataQueryResultEXT
typedef struct XrSpatialComponentFooListEXT {
XrStructureType type;
void* XR_MAY_ALIAS next;
uint32_t fooCount;
XrSpatialBufferEXT* foo;
} XrSpatialComponentFooListEXT;
/******************************/
/* End Component definition */
/******************************/
// Query for the foo component data
XrSpatialComponentTypeEXT componentToQuery = XR_SPATIAL_COMPONENT_TYPE_FOO_EXT;
XrSpatialComponentDataQueryConditionEXT queryCond{XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_CONDITION_EXT};
queryCond.componentTypeCount = 1;
queryCond.componentTypes = &componentToQuery;
XrSpatialComponentDataQueryResultEXT queryResult{XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_RESULT_EXT};
CHK_XR(xrQuerySpatialComponentDataEXT(snapshot, &queryCond, &queryResult));
std::vector<XrSpatialEntityIdEXT> entityIds(queryResult.entityIdCountOutput);
queryResult.entityIdCapacityInput = entityIds.size();
queryResult.entityIds = entityIds.data();
std::vector<XrSpatialBufferEXT> fooBuffers(queryResult.entityIdCountOutput);
XrSpatialComponentFooListEXT fooList{XR_TYPE_SPATIAL_COMPONENT_FOO_LIST_EXT};
fooList.fooCount = fooBuffers.size();
fooList.foo = fooBuffers.data();
queryResult.next = &fooList;
CHK_XR(xrQuerySpatialComponentDataEXT(snapshot, &queryCond, &queryResult));
for (uint32_t i = 0; i < queryResult.entityIdCountOutput; ++i) {
// foo component data for entity entityIds[i]
if (fooBuffers[i].bufferType == XR_SPATIAL_BUFFER_TYPE_VECTOR3F_EXT) {
XrSpatialBufferGetInfoEXT getInfo{XR_TYPE_SPATIAL_BUFFER_GET_INFO_EXT};
getInfo.bufferId = fooBuffers[i].bufferId;
uint32_t bufferCountOutput;
CHK_XR(xrGetSpatialBufferVector3fEXT(snapshot, &getInfo, 0, &bufferCountOutput, nullptr));
std::vector<XrVector3f> vertexBuffer(bufferCountOutput);
CHK_XR(xrGetSpatialBufferVector3fEXT(snapshot, &getInfo, bufferCountOutput, &bufferCountOutput, vertexBuffer.data()));
// The XrVector3f buffer for entity entityIds[i] is now available in the vertexBuffer vector.
}
}
12.45.12. Extension guidelines
-
If an extension defines a new XrSpatialComponentTypeEXT which provides additional data about a spatial entity,
-
the extension must also define a list structure for that component which allows the application to pass an array to the runtime to fill out with the data for each of the spatial entities that satisfy xrQuerySpatialComponentDataEXT::queryCondition. Some examples of such list structures are XrSpatialComponentParentListEXT and XrSpatialComponentBounded2DListEXT. If the component data size is variable and requires the application to use the two-call idiom to query it, the component should provide an XrSpatialBufferEXT for each variable-sized datum in the list structure, and it must specify the XrSpatialBufferTypeEXT for each buffer. Applications can then query the buffer data using the functions defined in Two-call idiom for component data. An example of such a structure is XrSpatialMeshDataEXT, which is included in XrSpatialComponentMesh3DListEXT.
-
The extension can also provide structures that the application can chain to XrSpatialComponentDataQueryConditionEXT::next to provide additional filters for the query pertaining to the data of this component.
-
-
Extensions can define structures that extend XrSpatialDiscoverySnapshotCreateInfoEXT to provide additional filters for discovery. The filters for creating the snapshot must not affect the configuration of the spatial context, but instead are to be used to provide hints to the runtime on what entities and data are to be included in the snapshot as tracked by the current configuration of the spatial context (and therefore the current configuration of the underlying services).
-
If an extension defines a new XrSpatialCapabilityEXT,
-
it should also specify the list of XrSpatialComponentTypeEXT that runtimes must provide on entities for that capability.
-
it must also provide structures derived from XrSpatialCapabilityConfigurationBaseHeaderEXT that will allow the configuration of that capability.
-
-
If an extension defines a new XrSpatialCapabilityFeatureEXT, it must also define a corresponding configuration structure that can be chained to the next pointer of XrSpatialCapabilityConfigurationBaseHeaderEXT, which the application can use to enable the feature when creating an XrSpatialContextEXT.
-
An extension defining a new XrSpatialCapabilityEXT should follow this template for the specification -
-
Overview
-
Runtime support
-
Configuration
-
Guaranteed components
-
Example Code
-
-
An extension defining a new XrSpatialComponentTypeEXT should follow this template for the specification -
-
Component data
-
Component list structure to query data
-
Configuration
-
-
When writing an API that provides the application with an XrSpatialBufferIdEXT, it must be accompanied by an XrSpatialBufferTypeEXT to inform the application of the buffer's data type, so that the application can use the appropriate xrGetSpatialBuffer* function to retrieve the actual contents of the buffer.
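One way an application can act on this guideline is a simple dispatch on the buffer type. The self-contained sketch below mirrors the XrSpatialBufferTypeEXT values from this extension in a local enum (the mirror is an assumption of the example, not the real header) and maps each value to the name of the corresponding getter function.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Local mirror of the XrSpatialBufferTypeEXT values defined by this extension.
enum SpatialBufferType : uint32_t {
    BUFFER_TYPE_UNKNOWN = 0,
    BUFFER_TYPE_STRING = 1,
    BUFFER_TYPE_UINT8 = 2,
    BUFFER_TYPE_UINT16 = 3,
    BUFFER_TYPE_UINT32 = 4,
    BUFFER_TYPE_FLOAT = 5,
    BUFFER_TYPE_VECTOR2F = 6,
    BUFFER_TYPE_VECTOR3F = 7,
};

// Map a buffer type to the xrGetSpatialBuffer* entry point an application
// would call for it; the dispatch itself is the application's responsibility.
std::string getterFor(SpatialBufferType type) {
    switch (type) {
        case BUFFER_TYPE_STRING:   return "xrGetSpatialBufferStringEXT";
        case BUFFER_TYPE_UINT8:    return "xrGetSpatialBufferUint8EXT";
        case BUFFER_TYPE_UINT16:   return "xrGetSpatialBufferUint16EXT";
        case BUFFER_TYPE_UINT32:   return "xrGetSpatialBufferUint32EXT";
        case BUFFER_TYPE_FLOAT:    return "xrGetSpatialBufferFloatEXT";
        case BUFFER_TYPE_VECTOR2F: return "xrGetSpatialBufferVector2fEXT";
        case BUFFER_TYPE_VECTOR3F: return "xrGetSpatialBufferVector3fEXT";
        default:                   return "";  // unknown: no getter applies
    }
}
```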
12.45.17. New Structures
12.45.19. New Enum Constants
-
XR_EXT_SPATIAL_ENTITY_EXTENSION_NAME
-
XR_EXT_spatial_entity_SPEC_VERSION
-
Extending XrObjectType:
-
XR_OBJECT_TYPE_SPATIAL_CONTEXT_EXT
-
XR_OBJECT_TYPE_SPATIAL_ENTITY_EXT
-
XR_OBJECT_TYPE_SPATIAL_SNAPSHOT_EXT
-
-
Extending XrResult:
-
XR_ERROR_SPATIAL_BUFFER_ID_INVALID_EXT
-
XR_ERROR_SPATIAL_CAPABILITY_CONFIGURATION_INVALID_EXT
-
XR_ERROR_SPATIAL_CAPABILITY_UNSUPPORTED_EXT
-
XR_ERROR_SPATIAL_COMPONENT_NOT_ENABLED_EXT
-
XR_ERROR_SPATIAL_COMPONENT_UNSUPPORTED_FOR_CAPABILITY_EXT
-
XR_ERROR_SPATIAL_ENTITY_ID_INVALID_EXT
-
-
Extending XrStructureType:
-
XR_TYPE_CREATE_SPATIAL_CONTEXT_COMPLETION_EXT
-
XR_TYPE_CREATE_SPATIAL_DISCOVERY_SNAPSHOT_COMPLETION_EXT
-
XR_TYPE_CREATE_SPATIAL_DISCOVERY_SNAPSHOT_COMPLETION_INFO_EXT
-
XR_TYPE_EVENT_DATA_SPATIAL_DISCOVERY_RECOMMENDED_EXT
-
XR_TYPE_SPATIAL_BUFFER_GET_INFO_EXT
-
XR_TYPE_SPATIAL_CAPABILITY_COMPONENT_TYPES_EXT
-
XR_TYPE_SPATIAL_COMPONENT_BOUNDED_2D_LIST_EXT
-
XR_TYPE_SPATIAL_COMPONENT_BOUNDED_3D_LIST_EXT
-
XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_CONDITION_EXT
-
XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_RESULT_EXT
-
XR_TYPE_SPATIAL_COMPONENT_MESH_3D_LIST_EXT
-
XR_TYPE_SPATIAL_COMPONENT_PARENT_LIST_EXT
-
XR_TYPE_SPATIAL_CONTEXT_CREATE_INFO_EXT
-
XR_TYPE_SPATIAL_DISCOVERY_SNAPSHOT_CREATE_INFO_EXT
-
XR_TYPE_SPATIAL_ENTITY_FROM_ID_CREATE_INFO_EXT
-
XR_TYPE_SPATIAL_FILTER_TRACKING_STATE_EXT
-
XR_TYPE_SPATIAL_UPDATE_SNAPSHOT_CREATE_INFO_EXT
-
12.45.20. Issues
-
Does a single entity always derive from solely a single capability?
-
Resolved
-
Answer: No. It is completely up to the runtime, based on its own tracking capabilities, how it wants to represent a detected entity. The spec does not prescribe any particular representation of a spatial entity except for the guaranteed components of a given capability, which set a minimum expectation. A runtime may be able to merge entities detected by separate capabilities and represent them as a single entity with the guaranteed components of all the capabilities that helped identify it. For example, tables can be detected by both a plane tracking capability and an object tracking capability, with plane tracking providing the XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT component on the entity and object tracking providing XR_SPATIAL_COMPONENT_TYPE_BOUNDED_3D_EXT. One runtime may provide the table as two separate entities, each with its own set of guaranteed components, while another may provide just one entity to represent the table, with both XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT and XR_SPATIAL_COMPONENT_TYPE_BOUNDED_3D_EXT on the same entity. What is important to note here is that a given spatial entity can have at most a single component of any given component type. Therefore, if the component data produced by the different capabilities conflicts for a certain entity, the runtime must represent them as two separate entities.
12.46. XR_EXT_spatial_marker_tracking

- Name String: XR_EXT_spatial_marker_tracking
- Extension Type: Instance extension
- Registered Extension Number: 744
- Revision: 1
- Ratification Status: Ratified
- Extension and Version Dependencies
- Contributors:
  Ron Bessems, Meta
  Nihav Jain, Google
  Natalie Fleury, Meta
  Yuichi Taguchi, Meta
  Yin Li, Microsoft
  Jimmy Alamparambil, ByteDance
  Zhipeng Liu, ByteDance
  Jun Yan, ByteDance
12.46.1. Overview
This extension builds on XR_EXT_spatial_entity and allows
applications to detect and track markers in their environment.
Markers are 2D codes which may include QR Codes, Micro QR Codes, ArUco
markers, or AprilTags.
A tracked marker is represented as a spatial entity that has the following components:

- XR_SPATIAL_COMPONENT_TYPE_MARKER_EXT
- XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT
12.46.2. Runtime support
A runtime must advertise its support for the various marker tracking capabilities using xrEnumerateSpatialCapabilitiesEXT by listing any of the following capabilities:
- XR_SPATIAL_CAPABILITY_MARKER_TRACKING_QR_CODE_EXT
- XR_SPATIAL_CAPABILITY_MARKER_TRACKING_MICRO_QR_CODE_EXT
- XR_SPATIAL_CAPABILITY_MARKER_TRACKING_ARUCO_MARKER_EXT
- XR_SPATIAL_CAPABILITY_MARKER_TRACKING_APRIL_TAG_EXT
12.46.3. Configuration
To enable detection of a marker type the application must pass the corresponding configuration structure to xrCreateSpatialContextAsyncEXT.
Marker Type Configurations
QR codes
The XrSpatialCapabilityConfigurationQrCodeEXT structure is defined as:
// Provided by XR_EXT_spatial_marker_tracking
typedef struct XrSpatialCapabilityConfigurationQrCodeEXT {
XrStructureType type;
const void* next;
XrSpatialCapabilityEXT capability;
uint32_t enabledComponentCount;
const XrSpatialComponentTypeEXT* enabledComponents;
} XrSpatialCapabilityConfigurationQrCodeEXT;
If QR codes are supported, the runtime must enable QR Code tracking when an
XrSpatialCapabilityConfigurationQrCodeEXT structure is passed in
XrSpatialContextCreateInfoEXT::capabilityConfigs when calling
xrCreateSpatialContextAsyncEXT.
The runtime must return XR_ERROR_SPATIAL_CAPABILITY_UNSUPPORTED_EXT
if XR_SPATIAL_CAPABILITY_MARKER_TRACKING_QR_CODE_EXT is not enumerated
by xrEnumerateSpatialCapabilitiesEXT.
Micro QR codes
The XrSpatialCapabilityConfigurationMicroQrCodeEXT structure is defined as:
// Provided by XR_EXT_spatial_marker_tracking
typedef struct XrSpatialCapabilityConfigurationMicroQrCodeEXT {
XrStructureType type;
const void* next;
XrSpatialCapabilityEXT capability;
uint32_t enabledComponentCount;
const XrSpatialComponentTypeEXT* enabledComponents;
} XrSpatialCapabilityConfigurationMicroQrCodeEXT;
If Micro QR codes are supported, the runtime must enable Micro QR Code
tracking when an XrSpatialCapabilityConfigurationMicroQrCodeEXT
structure is passed in
XrSpatialContextCreateInfoEXT::capabilityConfigs when calling
xrCreateSpatialContextAsyncEXT.
The runtime must return XR_ERROR_SPATIAL_CAPABILITY_UNSUPPORTED_EXT
if XR_SPATIAL_CAPABILITY_MARKER_TRACKING_MICRO_QR_CODE_EXT is not
enumerated by xrEnumerateSpatialCapabilitiesEXT.
ArUco Markers
The XrSpatialCapabilityConfigurationArucoMarkerEXT structure is defined as:
// Provided by XR_EXT_spatial_marker_tracking
typedef struct XrSpatialCapabilityConfigurationArucoMarkerEXT {
XrStructureType type;
const void* next;
XrSpatialCapabilityEXT capability;
uint32_t enabledComponentCount;
const XrSpatialComponentTypeEXT* enabledComponents;
XrSpatialMarkerArucoDictEXT arUcoDict;
} XrSpatialCapabilityConfigurationArucoMarkerEXT;
If ArUco markers are supported, the runtime must enable ArUco marker
tracking when an XrSpatialCapabilityConfigurationArucoMarkerEXT
structure is passed in
XrSpatialContextCreateInfoEXT::capabilityConfigs when calling
xrCreateSpatialContextAsyncEXT.
The runtime must return XR_ERROR_SPATIAL_CAPABILITY_UNSUPPORTED_EXT
from xrCreateSpatialContextAsyncEXT if an
XrSpatialCapabilityConfigurationArucoMarkerEXT structure is in
XrSpatialContextCreateInfoEXT::capabilityConfigs but
XR_SPATIAL_CAPABILITY_MARKER_TRACKING_ARUCO_MARKER_EXT is not
enumerated by xrEnumerateSpatialCapabilitiesEXT.
The XrSpatialMarkerArucoDictEXT enumeration is defined as:
// Provided by XR_EXT_spatial_marker_tracking
typedef enum XrSpatialMarkerArucoDictEXT {
XR_SPATIAL_MARKER_ARUCO_DICT_4X4_50_EXT = 1,
XR_SPATIAL_MARKER_ARUCO_DICT_4X4_100_EXT = 2,
XR_SPATIAL_MARKER_ARUCO_DICT_4X4_250_EXT = 3,
XR_SPATIAL_MARKER_ARUCO_DICT_4X4_1000_EXT = 4,
XR_SPATIAL_MARKER_ARUCO_DICT_5X5_50_EXT = 5,
XR_SPATIAL_MARKER_ARUCO_DICT_5X5_100_EXT = 6,
XR_SPATIAL_MARKER_ARUCO_DICT_5X5_250_EXT = 7,
XR_SPATIAL_MARKER_ARUCO_DICT_5X5_1000_EXT = 8,
XR_SPATIAL_MARKER_ARUCO_DICT_6X6_50_EXT = 9,
XR_SPATIAL_MARKER_ARUCO_DICT_6X6_100_EXT = 10,
XR_SPATIAL_MARKER_ARUCO_DICT_6X6_250_EXT = 11,
XR_SPATIAL_MARKER_ARUCO_DICT_6X6_1000_EXT = 12,
XR_SPATIAL_MARKER_ARUCO_DICT_7X7_50_EXT = 13,
XR_SPATIAL_MARKER_ARUCO_DICT_7X7_100_EXT = 14,
XR_SPATIAL_MARKER_ARUCO_DICT_7X7_250_EXT = 15,
XR_SPATIAL_MARKER_ARUCO_DICT_7X7_1000_EXT = 16,
XR_SPATIAL_MARKER_ARUCO_DICT_MAX_ENUM_EXT = 0x7FFFFFFF
} XrSpatialMarkerArucoDictEXT;
The supported predefined ArUco dictionaries are those enumerated in XrSpatialMarkerArucoDictEXT above.
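As a non-normative sketch, configuring ArUco marker tracking follows the same pattern as the other marker capabilities, with the addition of selecting a dictionary via arUcoDict. This is a configuration fragment only; error handling and the surrounding context creation are elided, and it assumes the capability was enumerated by xrEnumerateSpatialCapabilitiesEXT.

```cpp
// Sketch: configure ArUco marker detection with the 4x4 dictionary of 50 IDs.
XrSpatialComponentTypeEXT enabledComponents[] = {
    XR_SPATIAL_COMPONENT_TYPE_MARKER_EXT,
    XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT,
};

XrSpatialCapabilityConfigurationArucoMarkerEXT arucoConfig{
    XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_ARUCO_MARKER_EXT};
arucoConfig.capability = XR_SPATIAL_CAPABILITY_MARKER_TRACKING_ARUCO_MARKER_EXT;
arucoConfig.enabledComponentCount = 2;
arucoConfig.enabledComponents = enabledComponents;
arucoConfig.arUcoDict = XR_SPATIAL_MARKER_ARUCO_DICT_4X4_50_EXT;
// Pass &arucoConfig (cast to XrSpatialCapabilityConfigurationBaseHeaderEXT*)
// in XrSpatialContextCreateInfoEXT::capabilityConfigs, as in the QR code
// example later in this section.
```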
AprilTags
The XrSpatialCapabilityConfigurationAprilTagEXT structure is defined as:
// Provided by XR_EXT_spatial_marker_tracking
typedef struct XrSpatialCapabilityConfigurationAprilTagEXT {
XrStructureType type;
const void* next;
XrSpatialCapabilityEXT capability;
uint32_t enabledComponentCount;
const XrSpatialComponentTypeEXT* enabledComponents;
XrSpatialMarkerAprilTagDictEXT aprilDict;
} XrSpatialCapabilityConfigurationAprilTagEXT;
If AprilTags are supported, the runtime must enable AprilTag tracking when
an XrSpatialCapabilityConfigurationAprilTagEXT structure is passed in
XrSpatialContextCreateInfoEXT::capabilityConfigs when calling
xrCreateSpatialContextAsyncEXT.
The runtime must return XR_ERROR_SPATIAL_CAPABILITY_UNSUPPORTED_EXT
from xrCreateSpatialContextAsyncEXT if an
XrSpatialCapabilityConfigurationAprilTagEXT structure is in
XrSpatialContextCreateInfoEXT::capabilityConfigs but
XR_SPATIAL_CAPABILITY_MARKER_TRACKING_APRIL_TAG_EXT is not enumerated
by xrEnumerateSpatialCapabilitiesEXT.
The XrSpatialMarkerAprilTagDictEXT enumeration is defined as:
// Provided by XR_EXT_spatial_marker_tracking
typedef enum XrSpatialMarkerAprilTagDictEXT {
XR_SPATIAL_MARKER_APRIL_TAG_DICT_16H5_EXT = 1,
XR_SPATIAL_MARKER_APRIL_TAG_DICT_25H9_EXT = 2,
XR_SPATIAL_MARKER_APRIL_TAG_DICT_36H10_EXT = 3,
XR_SPATIAL_MARKER_APRIL_TAG_DICT_36H11_EXT = 4,
XR_SPATIAL_MARKER_APRIL_TAG_DICT_MAX_ENUM_EXT = 0x7FFFFFFF
} XrSpatialMarkerAprilTagDictEXT;
The supported predefined AprilTag dictionaries are those enumerated in XrSpatialMarkerAprilTagDictEXT above.
Optional Marker Configurations
Applications should call xrEnumerateSpatialCapabilityFeaturesEXT to get the list of supported optional features.
See XrSpatialCapabilityFeatureEXT for a complete list of all spatial capability features supported by any extension.
Marker Size
The XrSpatialMarkerSizeEXT structure is defined as:
// Provided by XR_EXT_spatial_marker_tracking
typedef struct XrSpatialMarkerSizeEXT {
XrStructureType type;
const void* next;
float markerSideLength;
} XrSpatialMarkerSizeEXT;
If
XR_SPATIAL_CAPABILITY_FEATURE_MARKER_TRACKING_FIXED_SIZE_MARKERS_EXT
is enumerated by xrEnumerateSpatialCapabilityFeaturesEXT for a certain
capability, and if the application chains XrSpatialMarkerSizeEXT to
the corresponding configuration structure of that capability, the runtime
must assume that all markers detected have width and height of
markerSideLength.
Providing this information to the runtime allows the runtime to return a
more accurate pose and size.
This structure must be linked into the next chain of
XrSpatialCapabilityConfigurationQrCodeEXT,
XrSpatialCapabilityConfigurationMicroQrCodeEXT,
XrSpatialCapabilityConfigurationArucoMarkerEXT, or
XrSpatialCapabilityConfigurationAprilTagEXT.
Static Marker Optimization
The XrSpatialMarkerStaticOptimizationEXT structure is defined as:
// Provided by XR_EXT_spatial_marker_tracking
typedef struct XrSpatialMarkerStaticOptimizationEXT {
XrStructureType type;
const void* next;
XrBool32 optimizeForStaticMarker;
} XrSpatialMarkerStaticOptimizationEXT;
If XR_SPATIAL_CAPABILITY_FEATURE_MARKER_TRACKING_STATIC_MARKERS_EXT is
enumerated by xrEnumerateSpatialCapabilityFeaturesEXT for a certain
capability, and if the application chains
XrSpatialMarkerStaticOptimizationEXT to the corresponding
configuration structure of that capability, the runtime must assume that
all markers detected are static if optimizeForStaticMarker is set to
XR_TRUE.
This allows the runtime to generate a more accurate pose and size.
This structure must be linked into the next chain of
XrSpatialCapabilityConfigurationQrCodeEXT,
XrSpatialCapabilityConfigurationMicroQrCodeEXT,
XrSpatialCapabilityConfigurationArucoMarkerEXT, or
XrSpatialCapabilityConfigurationAprilTagEXT.
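A non-normative sketch of chaining the static-marker optimization is shown below; `supportsStaticMarkers` is an assumed flag the application would derive from xrEnumerateSpatialCapabilityFeaturesEXT, in the same way the QR code example later in this section checks for the fixed-size-markers feature.

```cpp
// Sketch: opt in to static-marker optimization for QR code tracking,
// only if the runtime enumerated the corresponding capability feature.
XrSpatialMarkerStaticOptimizationEXT staticOpt{
    XR_TYPE_SPATIAL_MARKER_STATIC_OPTIMIZATION_EXT};
staticOpt.optimizeForStaticMarker = XR_TRUE;

XrSpatialCapabilityConfigurationQrCodeEXT qrConfig{
    XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_QR_CODE_EXT};
// ... fill in capability and enabledComponents as usual ...
if (supportsStaticMarkers) {  // assumed: from xrEnumerateSpatialCapabilityFeaturesEXT
    qrConfig.next = &staticOpt;
}
```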
12.46.4. Guaranteed Components
A runtime that supports
XR_SPATIAL_CAPABILITY_MARKER_TRACKING_QR_CODE_EXT,
XR_SPATIAL_CAPABILITY_MARKER_TRACKING_MICRO_QR_CODE_EXT,
XR_SPATIAL_CAPABILITY_MARKER_TRACKING_ARUCO_MARKER_EXT, or
XR_SPATIAL_CAPABILITY_MARKER_TRACKING_APRIL_TAG_EXT must provide the
following spatial components as guaranteed components of all entities
discovered by those capabilities, and must enumerate them in
xrEnumerateSpatialCapabilityComponentTypesEXT:
- XR_SPATIAL_COMPONENT_TYPE_MARKER_EXT
- XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT
Marker Component
Component data
The XrSpatialMarkerDataEXT structure is defined as:
// Provided by XR_EXT_spatial_marker_tracking
typedef struct XrSpatialMarkerDataEXT {
XrSpatialCapabilityEXT capability;
uint32_t markerId;
XrSpatialBufferEXT data;
} XrSpatialMarkerDataEXT;
XR_SPATIAL_CAPABILITY_MARKER_TRACKING_QR_CODE_EXT and
XR_SPATIAL_CAPABILITY_MARKER_TRACKING_MICRO_QR_CODE_EXT support extra
data.
If capability is one of these:

- If the runtime has successfully decoded the data for the marker, it must set the data buffer type to either XR_SPATIAL_BUFFER_TYPE_UINT8_EXT or XR_SPATIAL_BUFFER_TYPE_STRING_EXT, depending on the data in the marker. The runtime must also set a valid buffer ID in data, which the application can use with the appropriate xrGetSpatialBuffer* function to get the data.
- If the runtime has not yet decoded the data of the marker, it must set the data buffer ID to XR_NULL_SPATIAL_BUFFER_ID_EXT and the buffer type to XR_SPATIAL_BUFFER_TYPE_UNKNOWN_EXT.
XR_SPATIAL_CAPABILITY_MARKER_TRACKING_ARUCO_MARKER_EXT and
XR_SPATIAL_CAPABILITY_MARKER_TRACKING_APRIL_TAG_EXT do not support
extra data and the runtime must set the buffer ID of data to
XR_NULL_SPATIAL_BUFFER_ID_EXT.
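The decoded payload can be retrieved with the usual OpenXR two-call idiom. The following non-normative fragment assumes `snapshot` (an XrSpatialSnapshotEXT), a `marker` of type XrSpatialMarkerDataEXT with a string-typed buffer, and the CHK_XR error-checking macro used by the examples in this section.

```cpp
// Sketch: read a decoded QR code payload as a string.
XrSpatialBufferGetInfoEXT getInfo{XR_TYPE_SPATIAL_BUFFER_GET_INFO_EXT};
getInfo.bufferId = marker.data.bufferId;

// First call sizes the buffer; second call fills it.
uint32_t count = 0;
CHK_XR(xrGetSpatialBufferStringEXT(snapshot, &getInfo, 0, &count, nullptr));
std::vector<char> payload(count);
CHK_XR(xrGetSpatialBufferStringEXT(snapshot, &getInfo, count, &count,
                                   payload.data()));
// payload now holds the string encoded in the marker.
```

For XR_SPATIAL_BUFFER_TYPE_UINT8_EXT buffers the same pattern applies with xrGetSpatialBufferUint8EXT and a std::vector<uint8_t>.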
Component list structure to query data
The XrSpatialComponentMarkerListEXT structure is defined as:
// Provided by XR_EXT_spatial_marker_tracking
typedef struct XrSpatialComponentMarkerListEXT {
XrStructureType type;
void* next;
uint32_t markerCount;
XrSpatialMarkerDataEXT* markers;
} XrSpatialComponentMarkerListEXT;
The application can query the marker component of the spatial entities in
an XrSpatialSnapshotEXT by adding
XR_SPATIAL_COMPONENT_TYPE_MARKER_EXT in
XrSpatialComponentDataQueryConditionEXT::componentTypes and
adding XrSpatialComponentMarkerListEXT to the next pointer chain of
XrSpatialComponentDataQueryResultEXT.
The runtime must return XR_ERROR_VALIDATION_FAILURE from
xrQuerySpatialComponentDataEXT if
XrSpatialComponentMarkerListEXT is in the next chain of
XrSpatialComponentDataQueryResultEXT::next but
XR_SPATIAL_COMPONENT_TYPE_MARKER_EXT is not included in
XrSpatialComponentDataQueryConditionEXT::componentTypes.
The runtime must return XR_ERROR_SIZE_INSUFFICIENT from
xrQuerySpatialComponentDataEXT if markerCount is less than
XrSpatialComponentDataQueryResultEXT::entityIdCountOutput.
Configuration
If XR_SPATIAL_COMPONENT_TYPE_MARKER_EXT is enumerated in
XrSpatialCapabilityComponentTypesEXT::componentTypes for some
capability, an application can enable it by including the enumerant in the
XrSpatialCapabilityConfigurationBaseHeaderEXT::enabledComponents
list of the XrSpatialCapabilityConfigurationBaseHeaderEXT derived
structure of the capability that supports this component.
This component does not require any special configuration to be included in
the XrSpatialCapabilityConfigurationBaseHeaderEXT::next chain.
Bounded 2D Component
The bounded 2D component provides the center and extents of the marker represented by the entity it is on. See Bounded 2D for more details about the bounded 2D component.
The XrSpatialBounded2DDataEXT::center must point to the center
of the marker.
When looking at the front face of the marker, the X-axis must point to the
right, and the Y-axis must point to the top of the marker.
The runtime must follow the right-handed coordinate system convention; thus, the Z-axis points out of the front face of the marker.
This means that a marker with a position of {0, 0, 0}, rotation of {0, 0, 0,
1} (no rotation), and an extent of {1, 1} refers to a 1 meter x 1 meter
marker centered at {0, 0, 0} with its front face normal vector pointing
towards the +Z direction in the component’s space.
A representation of the orientation of the marker is shown below.
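To illustrate this convention, the following non-normative sketch computes the four corner positions of a marker from its center and extents, assuming an identity orientation. The Vec3 and Extent2D stand-in types replace the real XrVector3f and XrExtent2Df so the sketch is self-contained.

```cpp
#include <array>

// Stand-ins for XrVector3f / XrExtent2Df (sketch only).
struct Vec3 { float x, y, z; };
struct Extent2D { float width, height; };

// Corners of a marker in its component space, assuming identity orientation:
// the marker lies in the X-Y plane, X to the right, Y to the top, and Z
// pointing out of the front face toward the viewer.
std::array<Vec3, 4> markerCorners(const Vec3& center, const Extent2D& extent) {
    const float hw = extent.width * 0.5f;
    const float hh = extent.height * 0.5f;
    return {{
        {center.x - hw, center.y - hh, center.z},  // bottom-left
        {center.x + hw, center.y - hh, center.z},  // bottom-right
        {center.x + hw, center.y + hh, center.z},  // top-right
        {center.x - hw, center.y + hh, center.z},  // top-left
    }};
}
```

For the 1 meter x 1 meter marker described above (center {0, 0, 0}, extent {1, 1}), this yields corners at (±0.5, ±0.5, 0) in the component's space.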
12.46.5. Test Codes
The following codes must have their X-Y plane inside the document and the Z-axis pointing at the viewer. The axis origin must appear at the center of each marker. The X-axis must point to the right, the Y-axis must point to the top of the document.
- XR_SPATIAL_MARKER_APRIL_TAG_DICT_36H11_EXT with ID 42
- XR_SPATIAL_MARKER_ARUCO_DICT_5X5_50_EXT with ID 43

12.46.6. Example Code
Configure QR Code Tracking Capability
The following example code demonstrates how to configure the QR code tracking capability when creating a spatial context.
// Check if marker tracking capability is supported
uint32_t capabilityCount;
CHK_XR(xrEnumerateSpatialCapabilitiesEXT(instance, systemId, 0, &capabilityCount, nullptr));
std::vector<XrSpatialCapabilityEXT> capabilities(capabilityCount);
CHK_XR(xrEnumerateSpatialCapabilitiesEXT(instance, systemId, capabilityCount, &capabilityCount, capabilities.data()));
if (std::find(capabilities.begin(), capabilities.end(), XR_SPATIAL_CAPABILITY_MARKER_TRACKING_QR_CODE_EXT) == capabilities.end()) {
return;
}
uint32_t featureCount = 0;
CHK_XR(xrEnumerateSpatialCapabilityFeaturesEXT(instance, systemId, XR_SPATIAL_CAPABILITY_MARKER_TRACKING_QR_CODE_EXT, 0, &featureCount, nullptr));
std::vector<XrSpatialCapabilityFeatureEXT> capabilityFeatures(featureCount);
CHK_XR(xrEnumerateSpatialCapabilityFeaturesEXT(instance, systemId, XR_SPATIAL_CAPABILITY_MARKER_TRACKING_QR_CODE_EXT, featureCount, &featureCount, capabilityFeatures.data()));
bool supportsFixedMarkerSize = std::find(capabilityFeatures.begin(), capabilityFeatures.end(), XR_SPATIAL_CAPABILITY_FEATURE_MARKER_TRACKING_FIXED_SIZE_MARKERS_EXT) != capabilityFeatures.end();
// Create a spatial context
XrSpatialContextEXT spatialContext{};
// Enable the 2 guaranteed components of the qr code tracking capability
std::vector<XrSpatialComponentTypeEXT> enabledComponents = {
XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT,
XR_SPATIAL_COMPONENT_TYPE_MARKER_EXT,
};
XrSpatialCapabilityConfigurationQrCodeEXT markerConfiguration{XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_QR_CODE_EXT};
markerConfiguration.capability = XR_SPATIAL_CAPABILITY_MARKER_TRACKING_QR_CODE_EXT;
markerConfiguration.enabledComponentCount = static_cast<uint32_t>(enabledComponents.size());
markerConfiguration.enabledComponents = enabledComponents.data();
// Only chained below if supportsFixedMarkerSize is true.
XrSpatialMarkerSizeEXT markerSize{XR_TYPE_SPATIAL_MARKER_SIZE_EXT};
markerSize.markerSideLength = 0.10f;
if (supportsFixedMarkerSize) {
markerConfiguration.next = &markerSize;
}
std::array<XrSpatialCapabilityConfigurationBaseHeaderEXT*, 1> capabilityConfigs = {
reinterpret_cast<XrSpatialCapabilityConfigurationBaseHeaderEXT*>(&markerConfiguration),
};
XrSpatialContextCreateInfoEXT spatialContextCreateInfo{XR_TYPE_SPATIAL_CONTEXT_CREATE_INFO_EXT};
spatialContextCreateInfo.capabilityConfigCount = capabilityConfigs.size();
spatialContextCreateInfo.capabilityConfigs = capabilityConfigs.data();
XrFutureEXT createContextFuture;
CHK_XR(xrCreateSpatialContextAsyncEXT(session, &spatialContextCreateInfo, &createContextFuture));
waitUntilReady(createContextFuture);
XrCreateSpatialContextCompletionEXT completion{XR_TYPE_CREATE_SPATIAL_CONTEXT_COMPLETION_EXT};
CHK_XR(xrCreateSpatialContextCompleteEXT(session, createContextFuture, &completion));
if (completion.futureResult != XR_SUCCESS) {
return;
}
spatialContext = completion.spatialContext;
// ...
// Discovery entities with the spatial context
// ...
CHK_XR(xrDestroySpatialContextEXT(spatialContext));
Discover Spatial Entities & Query Component Data
The following example code demonstrates how to discover spatial entities for
a context configured with
XR_SPATIAL_CAPABILITY_MARKER_TRACKING_QR_CODE_EXT and query its
component data.
XrFutureEXT future = XR_NULL_FUTURE_EXT;
// We want to look for entities that have the following components.
std::vector<XrSpatialComponentTypeEXT> snapshotComponents = {
XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT,
XR_SPATIAL_COMPONENT_TYPE_MARKER_EXT,
};
auto discoverSpatialEntities = [&](XrSpatialContextEXT spatialContext, XrTime time) {
XrSpatialDiscoverySnapshotCreateInfoEXT snapshotCreateInfo{XR_TYPE_SPATIAL_DISCOVERY_SNAPSHOT_CREATE_INFO_EXT};
snapshotCreateInfo.componentTypeCount = snapshotComponents.size();
snapshotCreateInfo.componentTypes = snapshotComponents.data();
CHK_XR(xrCreateSpatialDiscoverySnapshotAsyncEXT(spatialContext, &snapshotCreateInfo, &future));
waitUntilReady(future);
XrCreateSpatialDiscoverySnapshotCompletionInfoEXT completionInfo{XR_TYPE_CREATE_SPATIAL_DISCOVERY_SNAPSHOT_COMPLETION_INFO_EXT};
completionInfo.baseSpace = localSpace;
completionInfo.time = time;
completionInfo.future = future;
XrCreateSpatialDiscoverySnapshotCompletionEXT completion{XR_TYPE_CREATE_SPATIAL_DISCOVERY_SNAPSHOT_COMPLETION_EXT};
CHK_XR(xrCreateSpatialDiscoverySnapshotCompleteEXT(spatialContext, &completionInfo, &completion));
if (completion.futureResult == XR_SUCCESS) {
// Query for the bounded2D and marker component data
XrSpatialComponentDataQueryConditionEXT queryCond{XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_CONDITION_EXT};
queryCond.componentTypeCount = snapshotComponents.size();
queryCond.componentTypes = snapshotComponents.data();
XrSpatialComponentDataQueryResultEXT queryResult{XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_RESULT_EXT};
CHK_XR(xrQuerySpatialComponentDataEXT(completion.snapshot, &queryCond, &queryResult));
std::vector<XrSpatialEntityIdEXT> entityIds(queryResult.entityIdCountOutput);
std::vector<XrSpatialEntityTrackingStateEXT> entityStates(queryResult.entityIdCountOutput);
queryResult.entityIdCapacityInput = entityIds.size();
queryResult.entityIds = entityIds.data();
queryResult.entityStateCapacityInput = entityStates.size();
queryResult.entityStates = entityStates.data();
std::vector<XrSpatialBounded2DDataEXT> bounded2D(queryResult.entityIdCountOutput);
XrSpatialComponentBounded2DListEXT bounded2DList{XR_TYPE_SPATIAL_COMPONENT_BOUNDED_2D_LIST_EXT};
bounded2DList.boundCount = bounded2D.size();
bounded2DList.bounds = bounded2D.data();
queryResult.next = &bounded2DList;
std::vector<XrSpatialMarkerDataEXT> markers;
XrSpatialComponentMarkerListEXT markerList{XR_TYPE_SPATIAL_COMPONENT_MARKER_LIST_EXT};
markers.resize(queryResult.entityIdCountOutput);
markerList.markerCount = markers.size();
markerList.markers = markers.data();
bounded2DList.next = &markerList;
CHK_XR(xrQuerySpatialComponentDataEXT(completion.snapshot, &queryCond, &queryResult));
for (uint32_t i = 0; i < queryResult.entityIdCountOutput; ++i) {
if (entityStates[i] != XR_SPATIAL_ENTITY_TRACKING_STATE_TRACKING_EXT) {
continue;
}
// 2D bounds for entity entityIds[i] is bounded2D[i].extents centered on bounded2D[i].center.
if (markers[i].capability == XR_SPATIAL_CAPABILITY_MARKER_TRACKING_QR_CODE_EXT) {
// Check if marker data has been decoded.
if (markers[i].data.bufferId != XR_NULL_SPATIAL_BUFFER_ID_EXT) {
if (markers[i].data.bufferType == XR_SPATIAL_BUFFER_TYPE_STRING_EXT) {
// QR code data can be queried using:
// XrSpatialBufferGetInfoEXT getInfo{XR_TYPE_SPATIAL_BUFFER_GET_INFO_EXT};
// getInfo.bufferId = markers[i].data.bufferId;
// xrGetSpatialBufferStringEXT(completion.snapshot, &getInfo, ...)
} else if (markers[i].data.bufferType == XR_SPATIAL_BUFFER_TYPE_UINT8_EXT) {
// QR code data can be queried using:
// XrSpatialBufferGetInfoEXT getInfo{XR_TYPE_SPATIAL_BUFFER_GET_INFO_EXT};
// getInfo.bufferId = markers[i].data.bufferId;
// xrGetSpatialBufferUint8EXT(completion.snapshot, &getInfo, ...)
}
}
}
}
CHK_XR(xrDestroySpatialSnapshotEXT(completion.snapshot));
}
};
while (1) {
// ...
// For every frame in frame loop
// ...
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
// Poll for the XR_TYPE_EVENT_DATA_SPATIAL_DISCOVERY_RECOMMENDED_EXT event
XrEventDataBuffer event = {XR_TYPE_EVENT_DATA_BUFFER};
XrResult result = xrPollEvent(instance, &event);
if (result == XR_SUCCESS) {
switch (event.type) {
case XR_TYPE_EVENT_DATA_SPATIAL_DISCOVERY_RECOMMENDED_EXT: {
const XrEventDataSpatialDiscoveryRecommendedEXT& eventdata =
*reinterpret_cast<XrEventDataSpatialDiscoveryRecommendedEXT*>(&event);
// Discover spatial entities for the context that we received the "discovery
// recommended" event for.
discoverSpatialEntities(eventdata.spatialContext, time);
break;
}
}
}
// ...
// Finish frame loop
// ...
}
12.46.7. New Structures
12.46.9. New Enum Constants
- XR_EXT_SPATIAL_MARKER_TRACKING_EXTENSION_NAME
- XR_EXT_spatial_marker_tracking_SPEC_VERSION

Extending XrSpatialCapabilityEXT:

- XR_SPATIAL_CAPABILITY_MARKER_TRACKING_APRIL_TAG_EXT
- XR_SPATIAL_CAPABILITY_MARKER_TRACKING_ARUCO_MARKER_EXT
- XR_SPATIAL_CAPABILITY_MARKER_TRACKING_MICRO_QR_CODE_EXT
- XR_SPATIAL_CAPABILITY_MARKER_TRACKING_QR_CODE_EXT

Extending XrSpatialCapabilityFeatureEXT:

- XR_SPATIAL_CAPABILITY_FEATURE_MARKER_TRACKING_FIXED_SIZE_MARKERS_EXT
- XR_SPATIAL_CAPABILITY_FEATURE_MARKER_TRACKING_STATIC_MARKERS_EXT

Extending XrSpatialComponentTypeEXT:

- XR_SPATIAL_COMPONENT_TYPE_MARKER_EXT

Extending XrStructureType:

- XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_APRIL_TAG_EXT
- XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_ARUCO_MARKER_EXT
- XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_MICRO_QR_CODE_EXT
- XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_QR_CODE_EXT
- XR_TYPE_SPATIAL_COMPONENT_MARKER_LIST_EXT
- XR_TYPE_SPATIAL_MARKER_SIZE_EXT
- XR_TYPE_SPATIAL_MARKER_STATIC_OPTIMIZATION_EXT
12.47. XR_EXT_spatial_persistence

- Name String: XR_EXT_spatial_persistence
- Extension Type: Instance extension
- Registered Extension Number: 764
- Revision: 1
- Ratification Status: Ratified
- Extension and Version Dependencies
- Contributors:
  Nihav Jain, Google
  Jared Finder, Google
  Natalie Fleury, Meta
  Yuichi Taguchi, Meta
  Ron Bessems, Meta
  Yin Li, Microsoft
  Jimmy Alamparambil, ByteDance
  Zhipeng Liu, ByteDance
  Jun Yan, ByteDance
12.47.1. Overview
This extension allows applications to discover and correlate spatial
entities across application sessions, OpenXR sessions and multiple OpenXR
spatial contexts within a session.
The XR_EXT_spatial_entity extension established that an entity within
an XrSpatialContextEXT is represented by an
XrSpatialEntityIdEXT.
This extension builds on that concept by establishing that a persisted entity is represented by an XrUuid across application and OpenXR sessions; i.e., an application can use the XrUuid provided by this extension to identify an entity across sessions.
This extension also works together with the XR_EXT_spatial_entity
extension, allowing applications to discover persisted entities in the
user's environment and to query their component data.
12.47.2. Spatial Persistence Context
Create a spatial persistence context
// Provided by XR_EXT_spatial_persistence
XR_DEFINE_HANDLE(XrSpatialPersistenceContextEXT)
The XrSpatialPersistenceContextEXT handle represents the connection to a persistent spatial entity storage.
The xrCreateSpatialPersistenceContextAsyncEXT function is defined as:
// Provided by XR_EXT_spatial_persistence
XrResult xrCreateSpatialPersistenceContextAsyncEXT(
XrSession session,
const XrSpatialPersistenceContextCreateInfoEXT* createInfo,
XrFutureEXT* future);
An application can create an XrSpatialPersistenceContextEXT handle
using the xrCreateSpatialPersistenceContextAsyncEXT function and
configure the scope of the persistence context in createInfo.
The runtime must return
XR_ERROR_SPATIAL_PERSISTENCE_SCOPE_UNSUPPORTED_EXT if
XrSpatialPersistenceContextCreateInfoEXT::scope is not
enumerated by xrEnumerateSpatialPersistenceScopesEXT.
If a runtime enforces a permission system to control application access to
the persistence storage represented by XrSpatialPersistenceContextEXT,
then the runtime must return XR_ERROR_PERMISSION_INSUFFICIENT if
those permissions have not been granted to this application.
The XrSpatialPersistenceContextCreateInfoEXT structure is defined as:
// Provided by XR_EXT_spatial_persistence
typedef struct XrSpatialPersistenceContextCreateInfoEXT {
XrStructureType type;
const void* next;
XrSpatialPersistenceScopeEXT scope;
} XrSpatialPersistenceContextCreateInfoEXT;
The XrSpatialPersistenceContextCreateInfoEXT structure describes the information to create an XrSpatialPersistenceContextEXT handle.
// Provided by XR_EXT_spatial_persistence
typedef enum XrSpatialPersistenceScopeEXT {
XR_SPATIAL_PERSISTENCE_SCOPE_SYSTEM_MANAGED_EXT = 1,
// Provided by XR_EXT_spatial_persistence_operations
XR_SPATIAL_PERSISTENCE_SCOPE_LOCAL_ANCHORS_EXT = 1000781000,
XR_SPATIAL_PERSISTENCE_SCOPE_MAX_ENUM_EXT = 0x7FFFFFFF
} XrSpatialPersistenceScopeEXT;
The XrSpatialPersistenceScopeEXT enumeration identifies the different types of persistence context scopes.
The enums have the following meanings:
| Enum | Description |
|---|---|
| XR_SPATIAL_PERSISTENCE_SCOPE_SYSTEM_MANAGED_EXT | Provides the application with read-only access (i.e. the application cannot modify the store associated with this scope) to spatial entities persisted and managed by the system. The application can use the UUID in the persistence component for this scope to correlate entities across spatial contexts and device reboots. |
| XR_SPATIAL_PERSISTENCE_SCOPE_LOCAL_ANCHORS_EXT | Persistence operations and data access are limited to spatial anchors, on the same device, for the same user and same app. (Added by the XR_EXT_spatial_persistence_operations extension.) |
The xrEnumerateSpatialPersistenceScopesEXT function is defined as:
// Provided by XR_EXT_spatial_persistence
XrResult xrEnumerateSpatialPersistenceScopesEXT(
XrInstance instance,
XrSystemId systemId,
uint32_t persistenceScopeCapacityInput,
uint32_t* persistenceScopeCountOutput,
XrSpatialPersistenceScopeEXT* persistenceScopes);
The application can enumerate the list of spatial persistence scopes
supported by a given XrSystemId using
xrEnumerateSpatialPersistenceScopesEXT.
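A non-normative fragment using the two-call idiom to enumerate scopes is shown below; it assumes valid `instance` and `systemId` values and the CHK_XR macro from the earlier examples.

```cpp
// Sketch: enumerate supported persistence scopes with the two-call idiom.
uint32_t scopeCount = 0;
CHK_XR(xrEnumerateSpatialPersistenceScopesEXT(instance, systemId, 0,
                                              &scopeCount, nullptr));
std::vector<XrSpatialPersistenceScopeEXT> scopes(scopeCount);
CHK_XR(xrEnumerateSpatialPersistenceScopesEXT(instance, systemId, scopeCount,
                                              &scopeCount, scopes.data()));
bool systemManaged =
    std::find(scopes.begin(), scopes.end(),
              XR_SPATIAL_PERSISTENCE_SCOPE_SYSTEM_MANAGED_EXT) != scopes.end();
```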
The xrCreateSpatialPersistenceContextCompleteEXT function is defined as:
// Provided by XR_EXT_spatial_persistence
XrResult xrCreateSpatialPersistenceContextCompleteEXT(
XrSession session,
XrFutureEXT future,
XrCreateSpatialPersistenceContextCompletionEXT* completion);
xrCreateSpatialPersistenceContextCompleteEXT completes the
asynchronous operation started by
xrCreateSpatialPersistenceContextAsyncEXT.
The runtime must return XR_ERROR_FUTURE_PENDING_EXT if future
is not in the ready state.
The runtime must return XR_ERROR_FUTURE_INVALID_EXT if future
has already been completed or cancelled.
The XrCreateSpatialPersistenceContextCompletionEXT structure is defined as:
// Provided by XR_EXT_spatial_persistence
typedef struct XrCreateSpatialPersistenceContextCompletionEXT {
XrStructureType type;
void* next;
XrResult futureResult;
XrSpatialPersistenceContextResultEXT createResult;
XrSpatialPersistenceContextEXT persistenceContext;
} XrCreateSpatialPersistenceContextCompletionEXT;
If futureResult and createResult are both success codes,
persistenceContext must be valid.
If persistenceContext is valid, it must remain so within the
lifecycle of xrCreateSpatialPersistenceContextAsyncEXT::session
or until the application uses xrDestroySpatialPersistenceContextEXT
with persistenceContext, whichever comes first.
The runtime must set createResult only if futureResult is a
success code.
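Putting the pieces together, a non-normative sketch of the async create/complete pair follows. It assumes the system-managed scope was enumerated by xrEnumerateSpatialPersistenceScopesEXT, and reuses the waitUntilReady and CHK_XR placeholders from the earlier examples; the XR_TYPE_* enumerants follow the extension's naming pattern.

```cpp
// Sketch: create a persistence context for the system-managed scope.
XrSpatialPersistenceContextCreateInfoEXT createInfo{
    XR_TYPE_SPATIAL_PERSISTENCE_CONTEXT_CREATE_INFO_EXT};
createInfo.scope = XR_SPATIAL_PERSISTENCE_SCOPE_SYSTEM_MANAGED_EXT;

XrFutureEXT future = XR_NULL_FUTURE_EXT;
CHK_XR(xrCreateSpatialPersistenceContextAsyncEXT(session, &createInfo, &future));
waitUntilReady(future);

XrCreateSpatialPersistenceContextCompletionEXT completion{
    XR_TYPE_CREATE_SPATIAL_PERSISTENCE_CONTEXT_COMPLETION_EXT};
CHK_XR(xrCreateSpatialPersistenceContextCompleteEXT(session, future, &completion));
if (completion.futureResult == XR_SUCCESS &&
    completion.createResult == XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_SUCCESS_EXT) {
    XrSpatialPersistenceContextEXT persistenceContext = completion.persistenceContext;
    // ... use persistenceContext; release it later with
    // xrDestroySpatialPersistenceContextEXT(persistenceContext).
}
```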
// Provided by XR_EXT_spatial_persistence
typedef enum XrSpatialPersistenceContextResultEXT {
XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_SUCCESS_EXT = 0,
// Provided by XR_EXT_spatial_persistence_operations
XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_ENTITY_NOT_TRACKING_EXT = -1000781001,
// Provided by XR_EXT_spatial_persistence_operations
XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_PERSIST_UUID_NOT_FOUND_EXT = -1000781002,
XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_MAX_ENUM_EXT = 0x7FFFFFFF
} XrSpatialPersistenceContextResultEXT;
The XrSpatialPersistenceContextResultEXT enumeration identifies the different types of result codes for a persistence operation. Failures during persistence operations are not always within the application's control, and this enumeration is used to convey such cases. As with XrResult, success codes in the XrSpatialPersistenceContextResultEXT enumeration are non-negative values, and failure codes are negative values.
The enums have the following meanings:
| Enum | Description |
|---|---|
| XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_SUCCESS_EXT | The persistence context operation was a success. |
| XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_ENTITY_NOT_TRACKING_EXT | The persistence operation failed because the entity could not be tracked by the runtime. (Added by the XR_EXT_spatial_persistence_operations extension.) |
| XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_PERSIST_UUID_NOT_FOUND_EXT | The provided persist UUID was not found in the storage. (Added by the XR_EXT_spatial_persistence_operations extension.) |
Destroy the spatial persistence context
The xrDestroySpatialPersistenceContextEXT function is defined as:
// Provided by XR_EXT_spatial_persistence
XrResult xrDestroySpatialPersistenceContextEXT(
XrSpatialPersistenceContextEXT persistenceContext);
The application can use xrDestroySpatialPersistenceContextEXT to
release the persistenceContext handle when it is finished with spatial
persistence tasks.
The runtime must not destroy the underlying resources for
persistenceContext when xrDestroySpatialPersistenceContextEXT is
called if there are any valid XrSpatialContextEXT handles that
persistenceContext was linked to via
XrSpatialContextPersistenceConfigEXT.
This is because the persistence context’s resources are still used by the
spatial context for discovering persisted entities.
Destroying the persistence context handle in such a situation only removes
the application’s access to these resources.
The resources for a destroyed XrSpatialPersistenceContextEXT must be freed when all the XrSpatialContextEXT handles the persistence context was linked to are destroyed.
12.47.3. Discover persisted entities
Persistence component
Persisted spatial entities have the persistence component on them which the
runtime must include in the discovery and update snapshots if
XR_SPATIAL_COMPONENT_TYPE_PERSISTENCE_EXT is enabled during the
creation of XrSpatialContextEXT and included in
XrSpatialDiscoverySnapshotCreateInfoEXT::componentTypes or
XrSpatialUpdateSnapshotCreateInfoEXT::componentTypes.
Component Data
The XrSpatialPersistenceDataEXT structure is defined as:
// Provided by XR_EXT_spatial_persistence
typedef struct XrSpatialPersistenceDataEXT {
XrUuid persistUuid;
XrSpatialPersistenceStateEXT persistState;
} XrSpatialPersistenceDataEXT;
// Provided by XR_EXT_spatial_persistence
typedef enum XrSpatialPersistenceStateEXT {
XR_SPATIAL_PERSISTENCE_STATE_LOADED_EXT = 1,
XR_SPATIAL_PERSISTENCE_STATE_NOT_FOUND_EXT = 2,
XR_SPATIAL_PERSISTENCE_STATE_MAX_ENUM_EXT = 0x7FFFFFFF
} XrSpatialPersistenceStateEXT;
The XrSpatialPersistenceStateEXT enumeration identifies the different states of a persisted UUID.
The enums have the following meanings:
| Enum | Description |
|---|---|
| XR_SPATIAL_PERSISTENCE_STATE_LOADED_EXT | The persisted UUID has been successfully loaded from the storage. |
| XR_SPATIAL_PERSISTENCE_STATE_NOT_FOUND_EXT | The persisted UUID was not found in the storage; it was either removed from it or never was in it. |
Component list structure to query data
The XrSpatialComponentPersistenceListEXT structure is defined as:
// Provided by XR_EXT_spatial_persistence
typedef struct XrSpatialComponentPersistenceListEXT {
XrStructureType type;
void* next;
uint32_t persistDataCount;
XrSpatialPersistenceDataEXT* persistData;
} XrSpatialComponentPersistenceListEXT;
The runtime must return XR_ERROR_VALIDATION_FAILURE from
xrQuerySpatialComponentDataEXT if
XrSpatialComponentPersistenceListEXT is in the next chain of
XrSpatialComponentDataQueryResultEXT::next but
XR_SPATIAL_COMPONENT_TYPE_PERSISTENCE_EXT is not included in
XrSpatialComponentDataQueryConditionEXT::componentTypes.
The runtime must return XR_ERROR_SIZE_INSUFFICIENT from
xrQuerySpatialComponentDataEXT if persistDataCount is less than
XrSpatialComponentDataQueryResultEXT::entityIdCountOutput.
Unlike the other components, the runtime must set the data for
XR_SPATIAL_COMPONENT_TYPE_PERSISTENCE_EXT regardless of the
XrSpatialEntityTrackingStateEXT.
Configuration
If XR_SPATIAL_COMPONENT_TYPE_PERSISTENCE_EXT is enumerated in
XrSpatialCapabilityComponentTypesEXT::componentTypes for some
capability, the application can enable it by including the enum in the
XrSpatialCapabilityConfigurationBaseHeaderEXT::enabledComponents
list.
This component does not require any special configuration structure in the
next chain of XrSpatialCapabilityConfigurationBaseHeaderEXT.
If the application is including
XR_SPATIAL_COMPONENT_TYPE_PERSISTENCE_EXT in the enabled component
list, it must also include XrSpatialContextPersistenceConfigEXT in
the next chain of XrSpatialContextCreateInfoEXT otherwise the runtime
must return XR_ERROR_SPATIAL_CAPABILITY_CONFIGURATION_INVALID_EXT
from xrCreateSpatialContextAsyncEXT.
Configure spatial context with persistence contexts
The XrSpatialContextPersistenceConfigEXT structure is defined as:
// Provided by XR_EXT_spatial_persistence
typedef struct XrSpatialContextPersistenceConfigEXT {
XrStructureType type;
const void* next;
uint32_t persistenceContextCount;
const XrSpatialPersistenceContextEXT* persistenceContexts;
} XrSpatialContextPersistenceConfigEXT;
An application can add XrSpatialContextPersistenceConfigEXT to the
next chain of XrSpatialContextCreateInfoEXT.
This will configure the created XrSpatialContextEXT with
persistenceContexts and allow the application to discover the spatial
entities persisted in the storage represented by the
XrSpatialPersistenceContextEXT handles in persistenceContexts.
Create discovery snapshot
Discover entities with specific UUIDs
The XrSpatialDiscoveryPersistenceUuidFilterEXT structure is defined as:
// Provided by XR_EXT_spatial_persistence
typedef struct XrSpatialDiscoveryPersistenceUuidFilterEXT {
XrStructureType type;
const void* next;
uint32_t persistedUuidCount;
const XrUuid* persistedUuids;
} XrSpatialDiscoveryPersistenceUuidFilterEXT;
The application can use XrSpatialDiscoveryPersistenceUuidFilterEXT in
the next chain of XrSpatialDiscoverySnapshotCreateInfoEXT to scope the
discovery operation to just the entities whose persisted UUIDs are in the
set of the UUIDs provided in persistedUuids.
If the application adds XrSpatialDiscoveryPersistenceUuidFilterEXT in
the next chain of XrSpatialDiscoverySnapshotCreateInfoEXT but the
xrCreateSpatialDiscoverySnapshotAsyncEXT::spatialContext was not
configured with any XrSpatialPersistenceContextEXT using
XrSpatialContextPersistenceConfigEXT, the runtime must return
XR_ERROR_VALIDATION_FAILURE from
xrCreateSpatialDiscoverySnapshotAsyncEXT.
The runtime must treat the XrSpatialDiscoveryPersistenceUuidFilterEXT
filter as an 'AND' condition with any other filters provided in
XrSpatialDiscoverySnapshotCreateInfoEXT or its next chain.
The runtime must treat the persistedUuids array itself as an 'OR'
condition, i.e. filter for entities that have any of the UUIDs provided in
that array.
The runtime must include one entry in the created snapshot for each of the
UUIDs in persistedUuids for which it was able to determine the
XrSpatialPersistenceStateEXT state at this time.
- If the runtime has successfully found the UUID in its storage, then:
  - The runtime must set the XrSpatialPersistenceStateEXT in the XrSpatialPersistenceDataEXT of this entity to XR_SPATIAL_PERSISTENCE_STATE_LOADED_EXT.
  - The runtime must include a valid XrSpatialEntityIdEXT for this entity in the created snapshot.
  - The runtime must set the XrSpatialEntityTrackingStateEXT of that entity to XR_SPATIAL_ENTITY_TRACKING_STATE_TRACKING_EXT if it is actively tracking the entity and has valid data for its components. Otherwise, the runtime must set the XrSpatialEntityTrackingStateEXT to XR_SPATIAL_ENTITY_TRACKING_STATE_PAUSED_EXT.
- If the runtime has determined that the UUID is not present in its storage (regardless of whether that UUID was never in the storage or was present once but has since been unpersisted), then:
  - The runtime must set the XrSpatialPersistenceStateEXT in the XrSpatialPersistenceDataEXT of this entity to XR_SPATIAL_PERSISTENCE_STATE_NOT_FOUND_EXT to indicate to the application that this UUID is no longer present in the storage.
  - The runtime must set the XrSpatialEntityIdEXT for this entity in the created snapshot to XR_NULL_SPATIAL_ENTITY_ID_EXT.
  - The runtime must set the XrSpatialEntityTrackingStateEXT of that entity to XR_SPATIAL_ENTITY_TRACKING_STATE_STOPPED_EXT to indicate to the application that this entity will never be tracked.
- If the runtime was not able to determine whether the UUID is present in its storage, it must not include it in the snapshot.
The application can also use
XrSpatialDiscoveryPersistenceUuidFilterEXT in the next chain of
XrSpatialComponentDataQueryConditionEXT to query for entities of
specific UUIDs in existing snapshots.
When used with XrSpatialComponentDataQueryConditionEXT, if
XrSpatialDiscoveryPersistenceUuidFilterEXT::persistedUuids
contains any XrUuid that is not in the XrSpatialSnapshotEXT, the
runtime must not include an entry for that XrUuid in the query
result.
Also, the order of entities in the query result may not match the order of
UUIDs provided in
XrSpatialDiscoveryPersistenceUuidFilterEXT::persistedUuids.
The application should include XR_SPATIAL_COMPONENT_TYPE_PERSISTENCE_EXT
in XrSpatialComponentDataQueryConditionEXT::componentTypes and
XrSpatialComponentPersistenceListEXT in the next chain of
XrSpatialComponentDataQueryResultEXT, and then check
XrSpatialPersistenceDataEXT::persistUuid at each query result index
to determine which UUID that result corresponds to.
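This query-time use of the filter can be sketched as follows (a non-normative fragment in the style of the example code in this chapter; the `snapshot` handle, the `uuidsOfInterest` vector, and `CHK_XR` are assumed placeholders):

```cpp
// Scope a component data query on an existing snapshot to specific UUIDs by
// chaining the persistence UUID filter onto the query condition.
XrSpatialDiscoveryPersistenceUuidFilterEXT uuidFilter{XR_TYPE_SPATIAL_DISCOVERY_PERSISTENCE_UUID_FILTER_EXT};
uuidFilter.persistedUuidCount = static_cast<uint32_t>(uuidsOfInterest.size());
uuidFilter.persistedUuids = uuidsOfInterest.data();
XrSpatialComponentTypeEXT persistenceComponent = XR_SPATIAL_COMPONENT_TYPE_PERSISTENCE_EXT;
XrSpatialComponentDataQueryConditionEXT queryCond{XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_CONDITION_EXT};
queryCond.next = &uuidFilter;
queryCond.componentTypeCount = 1;
queryCond.componentTypes = &persistenceComponent;
// First call sizes the output; UUIDs absent from the snapshot get no entry.
XrSpatialComponentDataQueryResultEXT queryResult{XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_RESULT_EXT};
CHK_XR(xrQuerySpatialComponentDataEXT(snapshot, &queryCond, &queryResult));
// Then size the entity ID/state arrays, chain XrSpatialComponentPersistenceListEXT
// to queryResult.next, and query again, as in the discovery examples in this chapter;
// match results to UUIDs via XrSpatialPersistenceDataEXT::persistUuid, since the
// result order may differ from the order of uuidsOfInterest.
```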
Discover all persisted entities
If the application uses xrCreateSpatialDiscoverySnapshotAsyncEXT without XrSpatialDiscoveryPersistenceUuidFilterEXT and with an XrSpatialContextEXT that has been configured with an XrSpatialPersistenceContextEXT, the runtime must include in the created snapshot those entities that are persisted in the storage represented by the XrSpatialPersistenceContextEXT and satisfy the filters provided in XrSpatialDiscoverySnapshotCreateInfoEXT. For those entities:
- The runtime must set the XrSpatialPersistenceStateEXT in the XrSpatialPersistenceDataEXT of the entity to XR_SPATIAL_PERSISTENCE_STATE_LOADED_EXT.
- The runtime must include a valid XrSpatialEntityIdEXT for the entity in the created snapshot.
- The runtime must set the XrSpatialEntityTrackingStateEXT of the entity to XR_SPATIAL_ENTITY_TRACKING_STATE_TRACKING_EXT if it is actively tracking the entity and has valid data for its components. Otherwise, the runtime must set the XrSpatialEntityTrackingStateEXT to XR_SPATIAL_ENTITY_TRACKING_STATE_PAUSED_EXT.
12.47.8. New Enum Constants
- XR_EXT_SPATIAL_PERSISTENCE_EXTENSION_NAME
- XR_EXT_spatial_persistence_SPEC_VERSION
- Extending XrObjectType:
  - XR_OBJECT_TYPE_SPATIAL_PERSISTENCE_CONTEXT_EXT
- Extending XrResult:
  - XR_ERROR_SPATIAL_PERSISTENCE_SCOPE_UNSUPPORTED_EXT
- Extending XrSpatialComponentTypeEXT:
  - XR_SPATIAL_COMPONENT_TYPE_PERSISTENCE_EXT
- Extending XrStructureType:
  - XR_TYPE_CREATE_SPATIAL_PERSISTENCE_CONTEXT_COMPLETION_EXT
  - XR_TYPE_SPATIAL_COMPONENT_PERSISTENCE_LIST_EXT
  - XR_TYPE_SPATIAL_CONTEXT_PERSISTENCE_CONFIG_EXT
  - XR_TYPE_SPATIAL_DISCOVERY_PERSISTENCE_UUID_FILTER_EXT
  - XR_TYPE_SPATIAL_PERSISTENCE_CONTEXT_CREATE_INFO_EXT
12.47.9. Example Code
Create Persistence Context
// Check if the required persistence scope is supported
uint32_t scopeCount;
CHK_XR(xrEnumerateSpatialPersistenceScopesEXT(instance, systemId, 0, &scopeCount, nullptr));
std::vector<XrSpatialPersistenceScopeEXT> persistenceScopes(scopeCount);
CHK_XR(xrEnumerateSpatialPersistenceScopesEXT(instance, systemId, scopeCount, &scopeCount, persistenceScopes.data()));
if (std::find(persistenceScopes.begin(), persistenceScopes.end(), XR_SPATIAL_PERSISTENCE_SCOPE_SYSTEM_MANAGED_EXT) == persistenceScopes.end()) {
return;
}
XrSpatialPersistenceContextEXT persistenceContext{};
XrSpatialPersistenceContextCreateInfoEXT persistenceContextCreateInfo{XR_TYPE_SPATIAL_PERSISTENCE_CONTEXT_CREATE_INFO_EXT};
persistenceContextCreateInfo.scope = XR_SPATIAL_PERSISTENCE_SCOPE_SYSTEM_MANAGED_EXT;
XrFutureEXT createContextFuture;
CHK_XR(xrCreateSpatialPersistenceContextAsyncEXT(session, &persistenceContextCreateInfo, &createContextFuture));
waitUntilReady(createContextFuture);
XrCreateSpatialPersistenceContextCompletionEXT completion{XR_TYPE_CREATE_SPATIAL_PERSISTENCE_CONTEXT_COMPLETION_EXT};
CHK_XR(xrCreateSpatialPersistenceContextCompleteEXT(session, createContextFuture, &completion));
if (completion.futureResult != XR_SUCCESS || completion.createResult != XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_SUCCESS_EXT) {
return;
}
persistenceContext = completion.persistenceContext;
// ...
// Connect persistence context to a spatial context and discover persisted entities.
// ...
CHK_XR(xrDestroySpatialPersistenceContextEXT(persistenceContext));
Connect Persistence Context to a Spatial Context
// Note: Anchor capability is just used as an example here. Persistence can be
// supported by other capabilities too. xrEnumerateSpatialCapabilityComponentTypesEXT() can
// be used to check if a certain capability supports persistence.
if (!isSpatialCapabilitySupported(instance, systemId, XR_SPATIAL_CAPABILITY_ANCHOR_EXT)) {
return;
}
const bool supportsPersistenceComponent = isSpatialComponentSupported(instance, systemId, XR_SPATIAL_CAPABILITY_ANCHOR_EXT, XR_SPATIAL_COMPONENT_TYPE_PERSISTENCE_EXT);
// Create a spatial context
XrSpatialContextEXT spatialContext{};
{
std::vector<XrSpatialComponentTypeEXT> enabledComponents = {
XR_SPATIAL_COMPONENT_TYPE_ANCHOR_EXT,
};
if (supportsPersistenceComponent) {
enabledComponents.push_back(XR_SPATIAL_COMPONENT_TYPE_PERSISTENCE_EXT);
}
XrSpatialCapabilityConfigurationAnchorEXT anchorConfig{XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_ANCHOR_EXT};
anchorConfig.capability = XR_SPATIAL_CAPABILITY_ANCHOR_EXT;
anchorConfig.enabledComponentCount = enabledComponents.size();
anchorConfig.enabledComponents = enabledComponents.data();
std::array<XrSpatialCapabilityConfigurationBaseHeaderEXT*, 1> capabilityConfigs = {
reinterpret_cast<XrSpatialCapabilityConfigurationBaseHeaderEXT*>(&anchorConfig),
};
XrSpatialContextCreateInfoEXT spatialContextCreateInfo{XR_TYPE_SPATIAL_CONTEXT_CREATE_INFO_EXT};
spatialContextCreateInfo.capabilityConfigCount = capabilityConfigs.size();
spatialContextCreateInfo.capabilityConfigs = capabilityConfigs.data();
XrSpatialContextPersistenceConfigEXT persistenceConfig{XR_TYPE_SPATIAL_CONTEXT_PERSISTENCE_CONFIG_EXT};
persistenceConfig.persistenceContextCount = 1;
persistenceConfig.persistenceContexts = &persistenceContext;
if (supportsPersistenceComponent) {
spatialContextCreateInfo.next = &persistenceConfig;
}
XrFutureEXT createContextFuture;
CHK_XR(xrCreateSpatialContextAsyncEXT(session, &spatialContextCreateInfo, &createContextFuture));
waitUntilReady(createContextFuture);
XrCreateSpatialContextCompletionEXT completion{XR_TYPE_CREATE_SPATIAL_CONTEXT_COMPLETION_EXT};
CHK_XR(xrCreateSpatialContextCompleteEXT(session, createContextFuture, &completion));
if (completion.futureResult != XR_SUCCESS) {
return;
}
spatialContext = completion.spatialContext;
}
// ...
// Discover persisted anchors.
// ...
CHK_XR(xrDestroySpatialContextEXT(spatialContext));
Discover all persisted entities
XrFutureEXT future = XR_NULL_FUTURE_EXT;
// We want to look for entities that have the following components.
std::vector<XrSpatialComponentTypeEXT> snapshotComponents = {
XR_SPATIAL_COMPONENT_TYPE_PERSISTENCE_EXT,
};
auto discoverSpatialEntities = [&](XrSpatialContextEXT spatialContext, XrTime time) {
XrSpatialDiscoverySnapshotCreateInfoEXT snapshotCreateInfo{XR_TYPE_SPATIAL_DISCOVERY_SNAPSHOT_CREATE_INFO_EXT};
snapshotCreateInfo.componentTypeCount = snapshotComponents.size();
snapshotCreateInfo.componentTypes = snapshotComponents.data();
CHK_XR(xrCreateSpatialDiscoverySnapshotAsyncEXT(spatialContext, &snapshotCreateInfo, &future));
waitUntilReady(future);
XrCreateSpatialDiscoverySnapshotCompletionInfoEXT completionInfo{XR_TYPE_CREATE_SPATIAL_DISCOVERY_SNAPSHOT_COMPLETION_INFO_EXT};
completionInfo.baseSpace = localSpace;
completionInfo.time = time;
completionInfo.future = future;
XrCreateSpatialDiscoverySnapshotCompletionEXT completion{XR_TYPE_CREATE_SPATIAL_DISCOVERY_SNAPSHOT_COMPLETION_EXT};
CHK_XR(xrCreateSpatialDiscoverySnapshotCompleteEXT(spatialContext, &completionInfo, &completion));
if (completion.futureResult == XR_SUCCESS) {
// Query for the persistence component data
XrSpatialComponentDataQueryConditionEXT queryCond{XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_CONDITION_EXT};
queryCond.componentTypeCount = snapshotComponents.size();
queryCond.componentTypes = snapshotComponents.data();
XrSpatialComponentDataQueryResultEXT queryResult{XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_RESULT_EXT};
CHK_XR(xrQuerySpatialComponentDataEXT(completion.snapshot, &queryCond, &queryResult));
std::vector<XrSpatialEntityIdEXT> entityIds(queryResult.entityIdCountOutput);
std::vector<XrSpatialEntityTrackingStateEXT> entityStates(queryResult.entityIdCountOutput);
queryResult.entityIdCapacityInput = entityIds.size();
queryResult.entityIds = entityIds.data();
queryResult.entityStateCapacityInput = entityStates.size();
queryResult.entityStates = entityStates.data();
std::vector<XrSpatialPersistenceDataEXT> persistenceData(queryResult.entityIdCountOutput);
XrSpatialComponentPersistenceListEXT persistenceDataList{XR_TYPE_SPATIAL_COMPONENT_PERSISTENCE_LIST_EXT};
persistenceDataList.persistDataCount = persistenceData.size();
persistenceDataList.persistData = persistenceData.data();
queryResult.next = &persistenceDataList;
CHK_XR(xrQuerySpatialComponentDataEXT(completion.snapshot, &queryCond, &queryResult));
for (uint32_t i = 0; i < queryResult.entityIdCountOutput; ++i) {
// persistenceData[i].persistUuid is the UUID of the persisted entity whose entity ID is entityIds[i].
// The persistenceData array is essentially the uuids persisted in the scope that the current
// XrSpatialPersistenceContextEXT is configured with.
}
CHK_XR(xrDestroySpatialSnapshotEXT(completion.snapshot));
}
};
while (1) {
// ...
// For every frame in frame loop
// ...
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
// Poll for the XR_TYPE_EVENT_DATA_SPATIAL_DISCOVERY_RECOMMENDED_EXT event
XrEventDataBuffer event = {XR_TYPE_EVENT_DATA_BUFFER};
XrResult result = xrPollEvent(instance, &event);
if (result == XR_SUCCESS) {
switch (event.type) {
case XR_TYPE_EVENT_DATA_SPATIAL_DISCOVERY_RECOMMENDED_EXT: {
const XrEventDataSpatialDiscoveryRecommendedEXT& eventdata =
*reinterpret_cast<XrEventDataSpatialDiscoveryRecommendedEXT*>(&event);
// Discover spatial entities for the context that we received the "discovery
// recommended" event for.
discoverSpatialEntities(eventdata.spatialContext, time);
break;
}
}
}
// ...
// Finish frame loop
// ...
}
Discover entities with specific UUIDs
XrFutureEXT future = XR_NULL_FUTURE_EXT;
// Load the UUIDs that the app has stored on its own, i.e. the UUIDs it is interested in.
std::vector<XrUuid> uuidsStoredByApp = loadPersistedUuids();
// We want to look for entities that have the following components.
std::vector<XrSpatialComponentTypeEXT> snapshotComponents = {
XR_SPATIAL_COMPONENT_TYPE_PERSISTENCE_EXT,
};
auto discoverSpatialEntities = [&](XrSpatialContextEXT spatialContext, XrTime time) {
XrSpatialDiscoveryPersistenceUuidFilterEXT persistenceFilter{XR_TYPE_SPATIAL_DISCOVERY_PERSISTENCE_UUID_FILTER_EXT};
persistenceFilter.persistedUuidCount = uuidsStoredByApp.size();
persistenceFilter.persistedUuids = uuidsStoredByApp.data();
XrSpatialDiscoverySnapshotCreateInfoEXT snapshotCreateInfo{XR_TYPE_SPATIAL_DISCOVERY_SNAPSHOT_CREATE_INFO_EXT};
snapshotCreateInfo.componentTypeCount = snapshotComponents.size();
snapshotCreateInfo.componentTypes = snapshotComponents.data();
snapshotCreateInfo.next = &persistenceFilter;
CHK_XR(xrCreateSpatialDiscoverySnapshotAsyncEXT(spatialContext, &snapshotCreateInfo, &future));
waitUntilReady(future);
XrCreateSpatialDiscoverySnapshotCompletionInfoEXT completionInfo{XR_TYPE_CREATE_SPATIAL_DISCOVERY_SNAPSHOT_COMPLETION_INFO_EXT};
completionInfo.baseSpace = localSpace;
completionInfo.time = time;
completionInfo.future = future;
XrCreateSpatialDiscoverySnapshotCompletionEXT completion{XR_TYPE_CREATE_SPATIAL_DISCOVERY_SNAPSHOT_COMPLETION_EXT};
CHK_XR(xrCreateSpatialDiscoverySnapshotCompleteEXT(spatialContext, &completionInfo, &completion));
if (completion.futureResult == XR_SUCCESS) {
// Query for the persistence component data
XrSpatialComponentDataQueryConditionEXT queryCond{XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_CONDITION_EXT};
queryCond.componentTypeCount = snapshotComponents.size();
queryCond.componentTypes = snapshotComponents.data();
XrSpatialComponentDataQueryResultEXT queryResult{XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_RESULT_EXT};
CHK_XR(xrQuerySpatialComponentDataEXT(completion.snapshot, &queryCond, &queryResult));
std::vector<XrSpatialEntityIdEXT> entityIds(queryResult.entityIdCountOutput);
std::vector<XrSpatialEntityTrackingStateEXT> entityStates(queryResult.entityIdCountOutput);
queryResult.entityIdCapacityInput = entityIds.size();
queryResult.entityIds = entityIds.data();
queryResult.entityStateCapacityInput = entityStates.size();
queryResult.entityStates = entityStates.data();
std::vector<XrSpatialPersistenceDataEXT> persistenceData(queryResult.entityIdCountOutput);
XrSpatialComponentPersistenceListEXT persistenceDataList{XR_TYPE_SPATIAL_COMPONENT_PERSISTENCE_LIST_EXT};
persistenceDataList.persistDataCount = persistenceData.size();
persistenceDataList.persistData = persistenceData.data();
queryResult.next = &persistenceDataList;
CHK_XR(xrQuerySpatialComponentDataEXT(completion.snapshot, &queryCond, &queryResult));
for (uint32_t i = 0; i < queryResult.entityIdCountOutput; ++i) {
if (persistenceData[i].persistState == XR_SPATIAL_PERSISTENCE_STATE_LOADED_EXT) {
// persistenceData[i].persistUuid, requested by the app, is present in the persistence scope
// and its corresponding entity ID and state are entityIds[i] & entityStates[i] respectively.
} else if (persistenceData[i].persistState == XR_SPATIAL_PERSISTENCE_STATE_NOT_FOUND_EXT) {
// persistenceData[i].persistUuid, requested by the app, is NOT present in the persistence scope
// and its corresponding entity ID (entityIds[i]) would be XR_NULL_SPATIAL_ENTITY_ID_EXT
// and tracking state (entityStates[i]) would be XR_SPATIAL_ENTITY_TRACKING_STATE_STOPPED_EXT.
}
}
CHK_XR(xrDestroySpatialSnapshotEXT(completion.snapshot));
}
};
while (1) {
// ...
// For every frame in frame loop
// ...
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
// Poll for the XR_TYPE_EVENT_DATA_SPATIAL_DISCOVERY_RECOMMENDED_EXT event
XrEventDataBuffer event = {XR_TYPE_EVENT_DATA_BUFFER};
XrResult result = xrPollEvent(instance, &event);
if (result == XR_SUCCESS) {
switch (event.type) {
case XR_TYPE_EVENT_DATA_SPATIAL_DISCOVERY_RECOMMENDED_EXT: {
const XrEventDataSpatialDiscoveryRecommendedEXT& eventdata =
*reinterpret_cast<XrEventDataSpatialDiscoveryRecommendedEXT*>(&event);
// Discover spatial entities for the context that we received the "discovery
// recommended" event for.
discoverSpatialEntities(eventdata.spatialContext, time);
break;
}
}
}
// ...
// Finish frame loop
// ...
}
12.48. XR_EXT_spatial_persistence_operations
- Name String: XR_EXT_spatial_persistence_operations
- Extension Type: Instance extension
- Registered Extension Number: 782
- Revision: 1
- Ratification Status: Ratified
- Extension and Version Dependencies
- Contributors:
  - Nihav Jain, Google
  - Jared Finder, Google
  - Natalie Fleury, Meta
  - Yuichi Taguchi, Meta
  - Ron Bessems, Meta
  - Yin Li, Microsoft
  - Jimmy Alamparambil, ByteDance
  - Zhipeng Liu, ByteDance
  - Jun Yan, ByteDance
12.48.1. Overview
While XR_EXT_spatial_persistence allows applications to discover
persisted entities, this extension allows applications to persist and
unpersist spatial entities.
12.48.2. Persist spatial entities
The xrPersistSpatialEntityAsyncEXT function is defined as:
// Provided by XR_EXT_spatial_persistence_operations
XrResult xrPersistSpatialEntityAsyncEXT(
XrSpatialPersistenceContextEXT persistenceContext,
const XrSpatialEntityPersistInfoEXT* persistInfo,
XrFutureEXT* future);
An application can persist a spatial entity using the xrPersistSpatialEntityAsyncEXT function.
The runtime must return XR_ERROR_SPATIAL_ENTITY_ID_INVALID_EXT if
XrSpatialEntityPersistInfoEXT::spatialEntityId does not belong
to XrSpatialEntityPersistInfoEXT::spatialContext.
The runtime must return XR_ERROR_PERMISSION_INSUFFICIENT if the
XrSpatialPersistenceScopeEXT that persistenceContext was
configured with is a read-only scope and does not allow applications to
modify the storage represented by it.
An example of this would be if persistenceContext was created with
XR_SPATIAL_PERSISTENCE_SCOPE_SYSTEM_MANAGED_EXT and the application
uses xrPersistSpatialEntityAsyncEXT with that
persistenceContext.
The runtime must return
XR_ERROR_SPATIAL_PERSISTENCE_SCOPE_INCOMPATIBLE_EXT if the
XrSpatialPersistenceScopeEXT that persistenceContext was
configured with does allow the application to persist entities of its choice
in the storage, but XrSpatialEntityPersistInfoEXT::spatialEntityId
is not covered by the configured scope.
An example of this would be if the persistence context scope is set to
XR_SPATIAL_PERSISTENCE_SCOPE_LOCAL_ANCHORS_EXT, and
XrSpatialEntityPersistInfoEXT::spatialEntityId does not
represent an anchor.
The runtime must not return an error if
XrSpatialEntityPersistInfoEXT::spatialContext was not configured
with persistenceContext using
XrSpatialContextPersistenceConfigEXT.
Using xrPersistSpatialEntityAsyncEXT does not require that
persistenceContext be connected with the spatial context.
This function starts an asynchronous operation and creates a corresponding
XrFutureEXT, usable with xrPollFutureEXT and related
functions.
The return value of this function only indicates whether the parameters were
acceptable to schedule the asynchronous operation.
The corresponding completion function is
xrPersistSpatialEntityCompleteEXT, usable when a future from this
function is in the READY state, with outputs populated
by that function in the completion structure
XrPersistSpatialEntityCompletionEXT.
If the XrSpatialEntityTrackingStateEXT of
XrSpatialEntityPersistInfoEXT::spatialEntityId is not
XR_SPATIAL_ENTITY_TRACKING_STATE_TRACKING_EXT when
xrPersistSpatialEntityAsyncEXT is called, the runtime must not return
an error from this function or set
XrPersistSpatialEntityCompletionEXT::futureResult to an error
code to indicate this.
The runtime may either set future to the READY
state immediately and set
XrPersistSpatialEntityCompletionEXT::persistResult to
XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_ENTITY_NOT_TRACKING_EXT to
indicate the lack of tracking, or wait for the entity to start tracking as
part of the async operation and set
XrPersistSpatialEntityCompletionEXT::persistResult to
XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_ENTITY_NOT_TRACKING_EXT only if
the entity does not start tracking within a runtime-determined timeout.
A common usage pattern of applications is to create a spatial anchor using
xrCreateSpatialAnchorEXT and then immediately request to persist the
newly created spatial anchor using xrPersistSpatialEntityAsyncEXT.
XR_EXT_spatial_anchor states that the tracking state of an anchor
may not be XR_SPATIAL_ENTITY_TRACKING_STATE_TRACKING_EXT immediately
upon its creation.
For such cases, the runtime should wait for the anchor to start tracking as
part of the persist async operation instead of immediately setting
future to the READY state, and fail the operation
with XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_ENTITY_NOT_TRACKING_EXT
only if the anchor does not start tracking within a runtime-determined
timeout.
If the spatial entity represented by
XrSpatialEntityPersistInfoEXT::spatialEntityId has already been
persisted in the scope associated with persistenceContext, the runtime
must not treat that as an error but instead complete the async operation
successfully and provide the appropriate persist UUID to the application.
The XrSpatialEntityPersistInfoEXT structure is defined as:
// Provided by XR_EXT_spatial_persistence_operations
typedef struct XrSpatialEntityPersistInfoEXT {
XrStructureType type;
const void* next;
XrSpatialContextEXT spatialContext;
XrSpatialEntityIdEXT spatialEntityId;
} XrSpatialEntityPersistInfoEXT;
The XrSpatialEntityPersistInfoEXT structure describes the information
to persist a spatial entity represented by spatialEntityId in an
XrSpatialPersistenceContextEXT.
The xrPersistSpatialEntityCompleteEXT function is defined as:
// Provided by XR_EXT_spatial_persistence_operations
XrResult xrPersistSpatialEntityCompleteEXT(
XrSpatialPersistenceContextEXT persistenceContext,
XrFutureEXT future,
XrPersistSpatialEntityCompletionEXT* completion);
xrPersistSpatialEntityCompleteEXT completes the asynchronous operation
started by xrPersistSpatialEntityAsyncEXT.
The runtime must return XR_ERROR_FUTURE_PENDING_EXT if future
is not in READY state.
The runtime must return XR_ERROR_FUTURE_INVALID_EXT if future
has already been completed or cancelled.
This is the completion function corresponding to the operation started by
xrPersistSpatialEntityAsyncEXT.
Do not call until the future is READY.
If XrPersistSpatialEntityCompletionEXT::persistUuid is a UUID
that has already been provided to the application either via a previous
successful completion of xrPersistSpatialEntityAsyncEXT or by
discovering existing persisted entities, then the
XrSpatialEntityPersistInfoEXT::spatialEntityId must represent
the same entity as the one the UUID was originally provided for.
The XrPersistSpatialEntityCompletionEXT structure is defined as:
// Provided by XR_EXT_spatial_persistence_operations
typedef struct XrPersistSpatialEntityCompletionEXT {
XrStructureType type;
void* next;
XrResult futureResult;
XrSpatialPersistenceContextResultEXT persistResult;
XrUuid persistUuid;
} XrPersistSpatialEntityCompletionEXT;
If futureResult and persistResult are both success codes,
persistUuid must be valid and the application can use it to identify
the persisted spatial entity across sessions.
The runtime must set persistResult to
XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_ENTITY_NOT_TRACKING_EXT if it
lost tracking of the entity represented by
XrSpatialEntityPersistInfoEXT::spatialEntityId before it could
be successfully persisted.
The runtime must set persistResult only if futureResult is a
success code.
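The persist flow above can be sketched as follows (a non-normative fragment in the style of the example code in the previous chapter; `persistenceContext`, `spatialContext`, `spatialEntityId`, `CHK_XR`, and `waitUntilReady` are assumed from those examples):

```cpp
// Persist an existing spatial entity and retrieve its persist UUID.
XrSpatialEntityPersistInfoEXT persistInfo{XR_TYPE_SPATIAL_ENTITY_PERSIST_INFO_EXT};
persistInfo.spatialContext = spatialContext;
persistInfo.spatialEntityId = spatialEntityId;
XrFutureEXT persistFuture;
CHK_XR(xrPersistSpatialEntityAsyncEXT(persistenceContext, &persistInfo, &persistFuture));
waitUntilReady(persistFuture);
XrPersistSpatialEntityCompletionEXT completion{XR_TYPE_PERSIST_SPATIAL_ENTITY_COMPLETION_EXT};
CHK_XR(xrPersistSpatialEntityCompleteEXT(persistenceContext, persistFuture, &completion));
if (completion.futureResult == XR_SUCCESS &&
    completion.persistResult == XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_SUCCESS_EXT) {
    // completion.persistUuid identifies the persisted entity across sessions;
    // the application can store it for later discovery via
    // XrSpatialDiscoveryPersistenceUuidFilterEXT.
} else if (completion.futureResult == XR_SUCCESS &&
           completion.persistResult == XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_ENTITY_NOT_TRACKING_EXT) {
    // The entity did not reach XR_SPATIAL_ENTITY_TRACKING_STATE_TRACKING_EXT
    // within the runtime-determined timeout; the application can retry later.
}
```

Note that `spatialContext` does not need to have been configured with `persistenceContext` via XrSpatialContextPersistenceConfigEXT for this operation.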
12.48.3. Unpersist spatial entities
The xrUnpersistSpatialEntityAsyncEXT function is defined as:
// Provided by XR_EXT_spatial_persistence_operations
XrResult xrUnpersistSpatialEntityAsyncEXT(
XrSpatialPersistenceContextEXT persistenceContext,
const XrSpatialEntityUnpersistInfoEXT* unpersistInfo,
XrFutureEXT* future);
An application can unpersist a spatial entity using the xrUnpersistSpatialEntityAsyncEXT function.
The runtime must return XR_ERROR_PERMISSION_INSUFFICIENT if the
XrSpatialPersistenceScopeEXT that persistenceContext was
configured with is a read-only scope and does not allow applications to
modify the storage represented by it.
An example of this would be if persistenceContext was created with
XR_SPATIAL_PERSISTENCE_SCOPE_SYSTEM_MANAGED_EXT and the application uses
xrUnpersistSpatialEntityAsyncEXT with that persistenceContext.
This function starts an asynchronous operation and creates a corresponding
XrFutureEXT, usable with xrPollFutureEXT and related
functions.
The return value of this function only indicates whether the parameters were
acceptable to schedule the asynchronous operation.
The corresponding completion function is
xrUnpersistSpatialEntityCompleteEXT, usable when a future from this
function is in the READY state, with outputs populated
by that function in the completion structure
XrUnpersistSpatialEntityCompletionEXT.
The XrSpatialEntityUnpersistInfoEXT structure is defined as:
// Provided by XR_EXT_spatial_persistence_operations
typedef struct XrSpatialEntityUnpersistInfoEXT {
XrStructureType type;
const void* next;
XrUuid persistUuid;
} XrSpatialEntityUnpersistInfoEXT;
The XrSpatialEntityUnpersistInfoEXT structure describes the information to unpersist a spatial entity previously persisted using xrPersistSpatialEntityAsyncEXT.
The xrUnpersistSpatialEntityCompleteEXT function is defined as:
// Provided by XR_EXT_spatial_persistence_operations
XrResult xrUnpersistSpatialEntityCompleteEXT(
XrSpatialPersistenceContextEXT persistenceContext,
XrFutureEXT future,
XrUnpersistSpatialEntityCompletionEXT* completion);
xrUnpersistSpatialEntityCompleteEXT completes the asynchronous
operation started by xrUnpersistSpatialEntityAsyncEXT.
The runtime must return XR_ERROR_FUTURE_PENDING_EXT if future
is not in READY state.
The runtime must return XR_ERROR_FUTURE_INVALID_EXT if future
has already been completed or cancelled.
This is the completion function corresponding to
xrUnpersistSpatialEntityAsyncEXT.
It completes the asynchronous operation and returns the results.
Do not call until the future is READY.
The XrUnpersistSpatialEntityCompletionEXT structure is defined as:
// Provided by XR_EXT_spatial_persistence_operations
typedef struct XrUnpersistSpatialEntityCompletionEXT {
XrStructureType type;
void* next;
XrResult futureResult;
XrSpatialPersistenceContextResultEXT unpersistResult;
} XrUnpersistSpatialEntityCompletionEXT;
The runtime must set unpersistResult only if futureResult is a
success code.
If XrSpatialEntityUnpersistInfoEXT::persistUuid is not found in
the storage represented by
xrUnpersistSpatialEntityCompleteEXT::persistenceContext, then
the runtime must set unpersistResult to
XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_PERSIST_UUID_NOT_FOUND_EXT.
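The unpersist flow can be sketched similarly (a non-normative fragment; `persistenceContext`, `CHK_XR`, and `waitUntilReady` are assumed from earlier examples, and `uuidToRemove` is a hypothetical XrUuid previously obtained from a persist operation or discovery):

```cpp
// Remove a previously persisted entity from the storage by its persist UUID.
XrSpatialEntityUnpersistInfoEXT unpersistInfo{XR_TYPE_SPATIAL_ENTITY_UNPERSIST_INFO_EXT};
unpersistInfo.persistUuid = uuidToRemove;
XrFutureEXT unpersistFuture;
CHK_XR(xrUnpersistSpatialEntityAsyncEXT(persistenceContext, &unpersistInfo, &unpersistFuture));
waitUntilReady(unpersistFuture);
XrUnpersistSpatialEntityCompletionEXT completion{XR_TYPE_UNPERSIST_SPATIAL_ENTITY_COMPLETION_EXT};
CHK_XR(xrUnpersistSpatialEntityCompleteEXT(persistenceContext, unpersistFuture, &completion));
if (completion.futureResult == XR_SUCCESS &&
    completion.unpersistResult == XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_PERSIST_UUID_NOT_FOUND_EXT) {
    // The UUID was already absent from the storage; the application can treat
    // the entity as removed.
}
```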
12.48.4. Anchor Persistence Local Scope
If the runtime supports persistence for spatial anchors and stores them on
the same device, for the same user and application that originally created
them, it must indicate this by enumerating
XR_SPATIAL_PERSISTENCE_SCOPE_LOCAL_ANCHORS_EXT in
xrEnumerateSpatialPersistenceScopesEXT.
If a runtime enumerates XR_SPATIAL_PERSISTENCE_SCOPE_LOCAL_ANCHORS_EXT
in xrEnumerateSpatialPersistenceScopesEXT, the runtime must also
enumerate XR_SPATIAL_CAPABILITY_ANCHOR_EXT in
xrEnumerateSpatialCapabilitiesEXT and
XR_SPATIAL_COMPONENT_TYPE_PERSISTENCE_EXT in
xrEnumerateSpatialCapabilityComponentTypesEXT for
XR_SPATIAL_CAPABILITY_ANCHOR_EXT.
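An application can check for this scope with the usual two-call enumeration idiom. This is a sketch under the assumption that xrEnumerateSpatialPersistenceScopesEXT follows the standard capacity-input/count-output signature used elsewhere in this chapter; `instance` and `systemId` are previously initialized.

```cpp
// Enumerate the persistence scopes supported by the system and check for
// the local-anchors scope.
uint32_t scopeCount = 0;
CHK_XR(xrEnumerateSpatialPersistenceScopesEXT(instance, systemId, 0, &scopeCount, nullptr));
std::vector<XrSpatialPersistenceScopeEXT> scopes(scopeCount);
CHK_XR(xrEnumerateSpatialPersistenceScopesEXT(instance, systemId, scopeCount, &scopeCount, scopes.data()));

const bool supportsLocalAnchors =
    std::find(scopes.begin(), scopes.end(),
              XR_SPATIAL_PERSISTENCE_SCOPE_LOCAL_ANCHORS_EXT) != scopes.end();
```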
12.48.7. New Enum Constants
- XR_EXT_SPATIAL_PERSISTENCE_OPERATIONS_EXTENSION_NAME
- XR_EXT_spatial_persistence_operations_SPEC_VERSION

Extending XrResult:
- XR_ERROR_SPATIAL_PERSISTENCE_SCOPE_INCOMPATIBLE_EXT

Extending XrSpatialPersistenceContextResultEXT:
- XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_ENTITY_NOT_TRACKING_EXT
- XR_SPATIAL_PERSISTENCE_CONTEXT_RESULT_PERSIST_UUID_NOT_FOUND_EXT

Extending XrSpatialPersistenceScopeEXT:
- XR_SPATIAL_PERSISTENCE_SCOPE_LOCAL_ANCHORS_EXT

Extending XrStructureType:
- XR_TYPE_PERSIST_SPATIAL_ENTITY_COMPLETION_EXT
- XR_TYPE_SPATIAL_ENTITY_PERSIST_INFO_EXT
- XR_TYPE_SPATIAL_ENTITY_UNPERSIST_INFO_EXT
- XR_TYPE_UNPERSIST_SPATIAL_ENTITY_COMPLETION_EXT
12.49. XR_EXT_spatial_plane_tracking
- Name String: XR_EXT_spatial_plane_tracking
- Extension Type: Instance extension
- Registered Extension Number: 742
- Revision: 1
- Ratification Status: Ratified
- Extension and Version Dependencies
- Contributors:
Nihav Jain, Google
Natalie Fleury, Meta
Yuichi Taguchi, Meta
Ron Bessems, Meta
Yin Li, Microsoft
Jimmy Alamparambil, ByteDance
Zhipeng Liu, ByteDance
Jun Yan, ByteDance
12.49.1. Overview
This extension builds on XR_EXT_spatial_entity and defines the plane
tracking spatial capability for the spatial entity framework.
12.49.2. Runtime Support
If the runtime supports plane tracking, it must indicate this by
enumerating XR_SPATIAL_CAPABILITY_PLANE_TRACKING_EXT in
xrEnumerateSpatialCapabilitiesEXT.
12.49.3. Configuration
The XrSpatialCapabilityConfigurationPlaneTrackingEXT structure is defined as:
// Provided by XR_EXT_spatial_plane_tracking
typedef struct XrSpatialCapabilityConfigurationPlaneTrackingEXT {
XrStructureType type;
const void* next;
XrSpatialCapabilityEXT capability;
uint32_t enabledComponentCount;
const XrSpatialComponentTypeEXT* enabledComponents;
} XrSpatialCapabilityConfigurationPlaneTrackingEXT;
Applications can enable the XR_SPATIAL_CAPABILITY_PLANE_TRACKING_EXT
spatial capability by including a pointer to an
XrSpatialCapabilityConfigurationPlaneTrackingEXT structure in
XrSpatialContextCreateInfoEXT::capabilityConfigs.
The runtime must return XR_ERROR_VALIDATION_FAILURE if
capability is not XR_SPATIAL_CAPABILITY_PLANE_TRACKING_EXT.
12.49.4. Guaranteed Components
A runtime that supports XR_SPATIAL_CAPABILITY_PLANE_TRACKING_EXT must
provide the following spatial components as guaranteed components of all
entities discovered by this capability and must enumerate them in
xrEnumerateSpatialCapabilityComponentTypesEXT:
- XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT
- XR_SPATIAL_COMPONENT_TYPE_PLANE_ALIGNMENT_EXT
Bounded 2D Component
The bounded 2D component provides the center and extents of the plane represented by the entity it is on. See Bounded 2D for more details.
Plane Alignment Component
Component data
// Provided by XR_EXT_spatial_plane_tracking
typedef enum XrSpatialPlaneAlignmentEXT {
XR_SPATIAL_PLANE_ALIGNMENT_HORIZONTAL_UPWARD_EXT = 0,
XR_SPATIAL_PLANE_ALIGNMENT_HORIZONTAL_DOWNWARD_EXT = 1,
XR_SPATIAL_PLANE_ALIGNMENT_VERTICAL_EXT = 2,
XR_SPATIAL_PLANE_ALIGNMENT_ARBITRARY_EXT = 3,
XR_SPATIAL_PLANE_ALIGNMENT_MAX_ENUM_EXT = 0x7FFFFFFF
} XrSpatialPlaneAlignmentEXT;
The XrSpatialPlaneAlignmentEXT enumeration describes the alignment of
the plane associated with the spatial entity with an
XR_SPATIAL_COMPONENT_TYPE_PLANE_ALIGNMENT_EXT component.
The enumeration values have the following meanings:
| Enum | Description |
|---|---|
| XR_SPATIAL_PLANE_ALIGNMENT_HORIZONTAL_UPWARD_EXT | The entity is horizontal and faces upward (e.g. floor). |
| XR_SPATIAL_PLANE_ALIGNMENT_HORIZONTAL_DOWNWARD_EXT | The entity is horizontal and faces downward (e.g. ceiling). |
| XR_SPATIAL_PLANE_ALIGNMENT_VERTICAL_EXT | The entity is vertical (e.g. wall). |
| XR_SPATIAL_PLANE_ALIGNMENT_ARBITRARY_EXT | The entity has an arbitrary, non-vertical and non-horizontal orientation. |
Component list structure to query data
The XrSpatialComponentPlaneAlignmentListEXT structure is defined as:
// Provided by XR_EXT_spatial_plane_tracking
typedef struct XrSpatialComponentPlaneAlignmentListEXT {
XrStructureType type;
void* next;
uint32_t planeAlignmentCount;
XrSpatialPlaneAlignmentEXT* planeAlignments;
} XrSpatialComponentPlaneAlignmentListEXT;
To query the plane alignment component of the spatial entities in an
XrSpatialSnapshotEXT, include
XR_SPATIAL_COMPONENT_TYPE_PLANE_ALIGNMENT_EXT in
XrSpatialComponentDataQueryConditionEXT::componentTypes and add
XrSpatialComponentPlaneAlignmentListEXT to the
XrSpatialComponentDataQueryResultEXT::next chain.
The runtime must return XR_ERROR_VALIDATION_FAILURE from
xrQuerySpatialComponentDataEXT if
XrSpatialComponentPlaneAlignmentListEXT is in the
XrSpatialComponentDataQueryResultEXT::next chain but
XR_SPATIAL_COMPONENT_TYPE_PLANE_ALIGNMENT_EXT is not included in
XrSpatialComponentDataQueryConditionEXT::componentTypes.
The runtime must return XR_ERROR_SIZE_INSUFFICIENT from
xrQuerySpatialComponentDataEXT if planeAlignmentCount is less
than XrSpatialComponentDataQueryResultEXT::entityIdCountOutput.
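The two-call query described above can be sketched as follows. This non-normative sketch assumes `snapshot` is an XrSpatialSnapshotEXT previously obtained from a discovery operation on a context configured with the plane alignment component enabled.

```cpp
// First call: query only the entity count, then size the output arrays and
// chain the plane alignment list for the second call.
XrSpatialComponentDataQueryConditionEXT queryCond{XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_CONDITION_EXT};
XrSpatialComponentTypeEXT componentType = XR_SPATIAL_COMPONENT_TYPE_PLANE_ALIGNMENT_EXT;
queryCond.componentTypeCount = 1;
queryCond.componentTypes = &componentType;

XrSpatialComponentDataQueryResultEXT queryResult{XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_RESULT_EXT};
CHK_XR(xrQuerySpatialComponentDataEXT(snapshot, &queryCond, &queryResult));

std::vector<XrSpatialEntityIdEXT> entityIds(queryResult.entityIdCountOutput);
std::vector<XrSpatialEntityTrackingStateEXT> entityStates(queryResult.entityIdCountOutput);
queryResult.entityIdCapacityInput = entityIds.size();
queryResult.entityIds = entityIds.data();
queryResult.entityStateCapacityInput = entityStates.size();
queryResult.entityStates = entityStates.data();

std::vector<XrSpatialPlaneAlignmentEXT> alignments(queryResult.entityIdCountOutput);
XrSpatialComponentPlaneAlignmentListEXT alignmentList{XR_TYPE_SPATIAL_COMPONENT_PLANE_ALIGNMENT_LIST_EXT};
alignmentList.planeAlignmentCount = alignments.size();
alignmentList.planeAlignments = alignments.data();
queryResult.next = &alignmentList;

CHK_XR(xrQuerySpatialComponentDataEXT(snapshot, &queryCond, &queryResult));
// alignments[i] now holds the plane alignment of entityIds[i].
```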
Configuration
If XR_SPATIAL_COMPONENT_TYPE_PLANE_ALIGNMENT_EXT is enumerated in
XrSpatialCapabilityComponentTypesEXT::componentTypes for some
capability, an application can enable it by including the enumerant in the
XrSpatialCapabilityConfigurationBaseHeaderEXT::enabledComponents
list of the XrSpatialCapabilityConfigurationBaseHeaderEXT derived
structure of the capability that supports this component.
This component does not require any special configuration to be included in
the XrSpatialCapabilityConfigurationBaseHeaderEXT::next chain.
12.49.5. Optional Components
A runtime that supports XR_SPATIAL_CAPABILITY_PLANE_TRACKING_EXT may
support other spatial components in addition to the ones listed in the
Guaranteed Components section.
An application uses xrEnumerateSpatialCapabilityComponentTypesEXT to
get the full list of components that a runtime supports, then configures the
ones it is interested in when creating the spatial context.
Mesh 2D Component
Component data
XR_SPATIAL_COMPONENT_TYPE_MESH_2D_EXT uses the
XrSpatialMeshDataEXT structure for its data.
Component list structure to query data
The XrSpatialComponentMesh2DListEXT structure is defined as:
// Provided by XR_EXT_spatial_plane_tracking
typedef struct XrSpatialComponentMesh2DListEXT {
XrStructureType type;
void* next;
uint32_t meshCount;
XrSpatialMeshDataEXT* meshes;
} XrSpatialComponentMesh2DListEXT;
To query the mesh 2D component of the spatial entities in an
XrSpatialSnapshotEXT, include
XR_SPATIAL_COMPONENT_TYPE_MESH_2D_EXT in
XrSpatialComponentDataQueryConditionEXT::componentTypes and add
XrSpatialComponentMesh2DListEXT to the
XrSpatialComponentDataQueryResultEXT::next chain.
The runtime must return XR_ERROR_VALIDATION_FAILURE from
xrQuerySpatialComponentDataEXT if
XrSpatialComponentMesh2DListEXT is in the
XrSpatialComponentDataQueryResultEXT::next chain but
XR_SPATIAL_COMPONENT_TYPE_MESH_2D_EXT is not included in
XrSpatialComponentDataQueryConditionEXT::componentTypes.
The runtime must return XR_ERROR_SIZE_INSUFFICIENT from
xrQuerySpatialComponentDataEXT if meshCount is less than
XrSpatialComponentDataQueryResultEXT::entityIdCountOutput.
For the XrSpatialMeshDataEXT populated by the runtime in the
meshes array, the XrSpatialBufferEXT::bufferType for
XrSpatialMeshDataEXT::vertexBuffer must be
XR_SPATIAL_BUFFER_TYPE_VECTOR2F_EXT and
XrSpatialBufferEXT::bufferType for
XrSpatialMeshDataEXT::indexBuffer must be
XR_SPATIAL_BUFFER_TYPE_UINT16_EXT.
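Reading the mesh data back out can be sketched as follows, using the spatial buffer getter functions defined by XR_EXT_spatial_entity (xrGetSpatialBufferVector2fEXT for the vertex buffer, xrGetSpatialBufferUint16EXT for the index buffer, matching the buffer types required above). `snapshot` and `mesh` (one XrSpatialMeshDataEXT from the queried XrSpatialComponentMesh2DListEXT) are assumed from a prior query.

```cpp
// Read the 2D vertex positions of one plane mesh.
XrSpatialBufferGetInfoEXT vertexInfo{XR_TYPE_SPATIAL_BUFFER_GET_INFO_EXT};
vertexInfo.bufferId = mesh.vertexBuffer.bufferId;
uint32_t vertexCount = 0;
CHK_XR(xrGetSpatialBufferVector2fEXT(snapshot, &vertexInfo, 0, &vertexCount, nullptr));
std::vector<XrVector2f> vertices(vertexCount);
CHK_XR(xrGetSpatialBufferVector2fEXT(snapshot, &vertexInfo, vertexCount, &vertexCount, vertices.data()));

// Read the triangle indices (XR_SPATIAL_BUFFER_TYPE_UINT16_EXT).
XrSpatialBufferGetInfoEXT indexInfo{XR_TYPE_SPATIAL_BUFFER_GET_INFO_EXT};
indexInfo.bufferId = mesh.indexBuffer.bufferId;
uint32_t indexCount = 0;
CHK_XR(xrGetSpatialBufferUint16EXT(snapshot, &indexInfo, 0, &indexCount, nullptr));
std::vector<uint16_t> indices(indexCount);
CHK_XR(xrGetSpatialBufferUint16EXT(snapshot, &indexInfo, indexCount, &indexCount, indices.data()));
```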
Configuration
If XR_SPATIAL_COMPONENT_TYPE_MESH_2D_EXT is enumerated in
XrSpatialCapabilityComponentTypesEXT::componentTypes for some
capability, an application can enable it by including the enumerant in the
XrSpatialCapabilityConfigurationBaseHeaderEXT::enabledComponents
list of the XrSpatialCapabilityConfigurationBaseHeaderEXT derived
structure of the capability that supports this component.
This component does not require any special configuration to be included in
the XrSpatialCapabilityConfigurationBaseHeaderEXT::next chain.
Polygon 2D Component
Component Data
The XrSpatialPolygon2DDataEXT structure is defined as:
// Provided by XR_EXT_spatial_plane_tracking
typedef struct XrSpatialPolygon2DDataEXT {
XrPosef origin;
XrSpatialBufferEXT vertexBuffer;
} XrSpatialPolygon2DDataEXT;
XrSpatialBufferEXT::bufferType for vertexBuffer must be
XR_SPATIAL_BUFFER_TYPE_VECTOR2F_EXT.
Component list structure to query data
The XrSpatialComponentPolygon2DListEXT structure is defined as:
// Provided by XR_EXT_spatial_plane_tracking
typedef struct XrSpatialComponentPolygon2DListEXT {
XrStructureType type;
void* next;
uint32_t polygonCount;
XrSpatialPolygon2DDataEXT* polygons;
} XrSpatialComponentPolygon2DListEXT;
To query the polygon 2D component of the spatial entities in an
XrSpatialSnapshotEXT, include
XR_SPATIAL_COMPONENT_TYPE_POLYGON_2D_EXT in
XrSpatialComponentDataQueryConditionEXT::componentTypes and add
XrSpatialComponentPolygon2DListEXT to the
XrSpatialComponentDataQueryResultEXT::next chain.
The runtime must return XR_ERROR_VALIDATION_FAILURE from
xrQuerySpatialComponentDataEXT if
XrSpatialComponentPolygon2DListEXT is in the
XrSpatialComponentDataQueryResultEXT::next chain but
XR_SPATIAL_COMPONENT_TYPE_POLYGON_2D_EXT is not included in
XrSpatialComponentDataQueryConditionEXT::componentTypes.
The runtime must return XR_ERROR_SIZE_INSUFFICIENT from
xrQuerySpatialComponentDataEXT if polygonCount is less than
XrSpatialComponentDataQueryResultEXT::entityIdCountOutput.
Configuration
If XR_SPATIAL_COMPONENT_TYPE_POLYGON_2D_EXT is enumerated in
XrSpatialCapabilityComponentTypesEXT::componentTypes for some
capability, an application can enable it by including the enumerant in the
XrSpatialCapabilityConfigurationBaseHeaderEXT::enabledComponents
list of the XrSpatialCapabilityConfigurationBaseHeaderEXT derived
structure of the capability that supports this component.
This component does not require any special configuration to be included in
the XrSpatialCapabilityConfigurationBaseHeaderEXT::next chain.
Plane Semantic Label
Component Data
// Provided by XR_EXT_spatial_plane_tracking
typedef enum XrSpatialPlaneSemanticLabelEXT {
XR_SPATIAL_PLANE_SEMANTIC_LABEL_UNCATEGORIZED_EXT = 1,
XR_SPATIAL_PLANE_SEMANTIC_LABEL_FLOOR_EXT = 2,
XR_SPATIAL_PLANE_SEMANTIC_LABEL_WALL_EXT = 3,
XR_SPATIAL_PLANE_SEMANTIC_LABEL_CEILING_EXT = 4,
XR_SPATIAL_PLANE_SEMANTIC_LABEL_TABLE_EXT = 5,
XR_SPATIAL_PLANE_SEMANTIC_LABEL_MAX_ENUM_EXT = 0x7FFFFFFF
} XrSpatialPlaneSemanticLabelEXT;
The XrSpatialPlaneSemanticLabelEXT enumeration describes a set of semantic labels for planes.
| Enum | Description |
|---|---|
| XR_SPATIAL_PLANE_SEMANTIC_LABEL_UNCATEGORIZED_EXT | The runtime was unable to classify this entity. |
| XR_SPATIAL_PLANE_SEMANTIC_LABEL_FLOOR_EXT | The entity is a floor. |
| XR_SPATIAL_PLANE_SEMANTIC_LABEL_WALL_EXT | The entity is a wall. |
| XR_SPATIAL_PLANE_SEMANTIC_LABEL_CEILING_EXT | The entity is a ceiling. |
| XR_SPATIAL_PLANE_SEMANTIC_LABEL_TABLE_EXT | The entity is a table. |
Component List Structure to Query Data
The XrSpatialComponentPlaneSemanticLabelListEXT structure is defined as:
// Provided by XR_EXT_spatial_plane_tracking
typedef struct XrSpatialComponentPlaneSemanticLabelListEXT {
XrStructureType type;
void* next;
uint32_t semanticLabelCount;
XrSpatialPlaneSemanticLabelEXT* semanticLabels;
} XrSpatialComponentPlaneSemanticLabelListEXT;
To query the plane semantic label component of the spatial entities in an
XrSpatialSnapshotEXT, include
XR_SPATIAL_COMPONENT_TYPE_PLANE_SEMANTIC_LABEL_EXT in
XrSpatialComponentDataQueryConditionEXT::componentTypes and add
XrSpatialComponentPlaneSemanticLabelListEXT to the
XrSpatialComponentDataQueryResultEXT::next chain.
The runtime must return XR_ERROR_VALIDATION_FAILURE from
xrQuerySpatialComponentDataEXT if
XrSpatialComponentPlaneSemanticLabelListEXT is in the
XrSpatialComponentDataQueryResultEXT::next chain but
XR_SPATIAL_COMPONENT_TYPE_PLANE_SEMANTIC_LABEL_EXT is not included in
XrSpatialComponentDataQueryConditionEXT::componentTypes.
The runtime must return XR_ERROR_SIZE_INSUFFICIENT from
xrQuerySpatialComponentDataEXT if semanticLabelCount is less
than XrSpatialComponentDataQueryResultEXT::entityIdCountOutput.
Configuration
If XR_SPATIAL_COMPONENT_TYPE_PLANE_SEMANTIC_LABEL_EXT is enumerated in
XrSpatialCapabilityComponentTypesEXT::componentTypes for some
capability, an application can enable it by including the enumerant in the
XrSpatialCapabilityConfigurationBaseHeaderEXT::enabledComponents
list of the XrSpatialCapabilityConfigurationBaseHeaderEXT derived
structure of the capability that supports this component.
This component does not require any special configuration to be included in
the XrSpatialCapabilityConfigurationBaseHeaderEXT::next chain.
12.49.6. Example Code
Configure Plane Tracking Capability
The following example code demonstrates how to configure plane tracking capability when creating a spatial context.
// Check if plane tracking capability is supported
uint32_t capabilityCount;
CHK_XR(xrEnumerateSpatialCapabilitiesEXT(instance, systemId, 0, &capabilityCount, nullptr));
std::vector<XrSpatialCapabilityEXT> capabilities(capabilityCount);
CHK_XR(xrEnumerateSpatialCapabilitiesEXT(instance, systemId, capabilityCount, &capabilityCount, capabilities.data()));
if (std::find(capabilities.begin(), capabilities.end(), XR_SPATIAL_CAPABILITY_PLANE_TRACKING_EXT) == capabilities.end()) {
return;
}
// Enumerate supported components for plane tracking capability
XrSpatialCapabilityComponentTypesEXT planeComponents{XR_TYPE_SPATIAL_CAPABILITY_COMPONENT_TYPES_EXT};
CHK_XR(xrEnumerateSpatialCapabilityComponentTypesEXT(instance, systemId, XR_SPATIAL_CAPABILITY_PLANE_TRACKING_EXT, &planeComponents));
std::vector<XrSpatialComponentTypeEXT> planeCapabilityComponents(planeComponents.componentTypeCountOutput);
planeComponents.componentTypeCapacityInput = planeCapabilityComponents.size();
planeComponents.componentTypes = planeCapabilityComponents.data();
CHK_XR(xrEnumerateSpatialCapabilityComponentTypesEXT(instance, systemId, XR_SPATIAL_CAPABILITY_PLANE_TRACKING_EXT, &planeComponents));
// Check if polygon 2D and plane semantic labels optional components are supported
const auto supportsComponent = [&planeCapabilityComponents](XrSpatialComponentTypeEXT component) {
return std::find(planeCapabilityComponents.begin(), planeCapabilityComponents.end(), component) != planeCapabilityComponents.end();
};
const bool supportsPolygon2DComponent = supportsComponent(XR_SPATIAL_COMPONENT_TYPE_POLYGON_2D_EXT);
const bool supportsSemanticLabelComponent = supportsComponent(XR_SPATIAL_COMPONENT_TYPE_PLANE_SEMANTIC_LABEL_EXT);
// Create a spatial context
XrSpatialContextEXT spatialContext{};
// Enable the 2 guaranteed components of the plane tracking capability
std::vector<XrSpatialComponentTypeEXT> enabledComponents = {
XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT,
XR_SPATIAL_COMPONENT_TYPE_PLANE_ALIGNMENT_EXT,
};
// Optionally enable polygon2D if it is supported
if (supportsPolygon2DComponent) {
enabledComponents.push_back(XR_SPATIAL_COMPONENT_TYPE_POLYGON_2D_EXT);
}
// Optionally enable semantic labels if it is supported
if (supportsSemanticLabelComponent) {
enabledComponents.push_back(XR_SPATIAL_COMPONENT_TYPE_PLANE_SEMANTIC_LABEL_EXT);
}
XrSpatialCapabilityConfigurationPlaneTrackingEXT planeConfig{XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_PLANE_TRACKING_EXT};
planeConfig.capability = XR_SPATIAL_CAPABILITY_PLANE_TRACKING_EXT;
planeConfig.enabledComponentCount = enabledComponents.size();
planeConfig.enabledComponents = enabledComponents.data();
std::array<XrSpatialCapabilityConfigurationBaseHeaderEXT*, 1> capabilityConfigs = {
reinterpret_cast<XrSpatialCapabilityConfigurationBaseHeaderEXT*>(&planeConfig),
};
XrSpatialContextCreateInfoEXT spatialContextCreateInfo{XR_TYPE_SPATIAL_CONTEXT_CREATE_INFO_EXT};
spatialContextCreateInfo.capabilityConfigCount = capabilityConfigs.size();
spatialContextCreateInfo.capabilityConfigs = capabilityConfigs.data();
XrFutureEXT createContextFuture;
CHK_XR(xrCreateSpatialContextAsyncEXT(session, &spatialContextCreateInfo, &createContextFuture));
waitUntilReady(createContextFuture);
XrCreateSpatialContextCompletionEXT completion{XR_TYPE_CREATE_SPATIAL_CONTEXT_COMPLETION_EXT};
CHK_XR(xrCreateSpatialContextCompleteEXT(session, createContextFuture, &completion));
if (completion.futureResult != XR_SUCCESS) {
return;
}
spatialContext = completion.spatialContext;
// ...
// Discover entities with the spatial context
// ...
CHK_XR(xrDestroySpatialContextEXT(spatialContext));
Discover Spatial Entities & Query Component Data
The following example code demonstrates how to discover spatial entities for
a context configured with XR_SPATIAL_CAPABILITY_PLANE_TRACKING_EXT and
query its component data.
XrFutureEXT future = XR_NULL_FUTURE_EXT;
// We want to look for entities that have the following components.
std::vector<XrSpatialComponentTypeEXT> snapshotComponents = {
XR_SPATIAL_COMPONENT_TYPE_BOUNDED_2D_EXT,
XR_SPATIAL_COMPONENT_TYPE_PLANE_ALIGNMENT_EXT,
};
if (supportsPolygon2DComponent) {
snapshotComponents.push_back(XR_SPATIAL_COMPONENT_TYPE_POLYGON_2D_EXT);
}
if (supportsSemanticLabelComponent) {
snapshotComponents.push_back(XR_SPATIAL_COMPONENT_TYPE_PLANE_SEMANTIC_LABEL_EXT);
}
auto discoverSpatialEntities = [&](XrSpatialContextEXT spatialContext, XrTime time) {
XrSpatialDiscoverySnapshotCreateInfoEXT snapshotCreateInfo{XR_TYPE_SPATIAL_DISCOVERY_SNAPSHOT_CREATE_INFO_EXT};
snapshotCreateInfo.componentTypeCount = snapshotComponents.size();
snapshotCreateInfo.componentTypes = snapshotComponents.data();
CHK_XR(xrCreateSpatialDiscoverySnapshotAsyncEXT(spatialContext, &snapshotCreateInfo, &future));
waitUntilReady(future);
XrCreateSpatialDiscoverySnapshotCompletionInfoEXT completionInfo{XR_TYPE_CREATE_SPATIAL_DISCOVERY_SNAPSHOT_COMPLETION_INFO_EXT};
completionInfo.baseSpace = localSpace;
completionInfo.time = time;
completionInfo.future = future;
XrCreateSpatialDiscoverySnapshotCompletionEXT completion{XR_TYPE_CREATE_SPATIAL_DISCOVERY_SNAPSHOT_COMPLETION_EXT};
CHK_XR(xrCreateSpatialDiscoverySnapshotCompleteEXT(spatialContext, &completionInfo, &completion));
if (completion.futureResult == XR_SUCCESS) {
// Query for the semantic label component data
XrSpatialComponentDataQueryConditionEXT queryCond{XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_CONDITION_EXT};
queryCond.componentTypeCount = snapshotComponents.size();
queryCond.componentTypes = snapshotComponents.data();
XrSpatialComponentDataQueryResultEXT queryResult{XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_RESULT_EXT};
CHK_XR(xrQuerySpatialComponentDataEXT(completion.snapshot, &queryCond, &queryResult));
std::vector<XrSpatialEntityIdEXT> entityIds(queryResult.entityIdCountOutput);
std::vector<XrSpatialEntityTrackingStateEXT> entityStates(queryResult.entityIdCountOutput);
queryResult.entityIdCapacityInput = entityIds.size();
queryResult.entityIds = entityIds.data();
queryResult.entityStateCapacityInput = entityStates.size();
queryResult.entityStates = entityStates.data();
std::vector<XrSpatialBounded2DDataEXT> bounded2D(queryResult.entityIdCountOutput);
XrSpatialComponentBounded2DListEXT bounded2DList{XR_TYPE_SPATIAL_COMPONENT_BOUNDED_2D_LIST_EXT};
bounded2DList.boundCount = bounded2D.size();
bounded2DList.bounds = bounded2D.data();
queryResult.next = &bounded2DList;
std::vector<XrSpatialPolygon2DDataEXT> polygons;
XrSpatialComponentPolygon2DListEXT polygonList{XR_TYPE_SPATIAL_COMPONENT_POLYGON_2D_LIST_EXT};
if (supportsPolygon2DComponent) {
polygons.resize(queryResult.entityIdCountOutput);
polygonList.polygonCount = polygons.size();
polygonList.polygons = polygons.data();
polygonList.next = queryResult.next;
queryResult.next = &polygonList;
}
std::vector<XrSpatialPlaneSemanticLabelEXT> semanticLabels;
XrSpatialComponentPlaneSemanticLabelListEXT semanticLabelsList{XR_TYPE_SPATIAL_COMPONENT_PLANE_SEMANTIC_LABEL_LIST_EXT};
if (supportsSemanticLabelComponent) {
semanticLabels.resize(queryResult.entityIdCountOutput);
semanticLabelsList.semanticLabelCount = semanticLabels.size();
semanticLabelsList.semanticLabels = semanticLabels.data();
semanticLabelsList.next = queryResult.next;
queryResult.next = &semanticLabelsList;
}
CHK_XR(xrQuerySpatialComponentDataEXT(completion.snapshot, &queryCond, &queryResult));
for (uint32_t i = 0; i < queryResult.entityIdCountOutput; ++i) {
if (entityStates[i] != XR_SPATIAL_ENTITY_TRACKING_STATE_TRACKING_EXT) {
continue;
}
// 2D bounds for entity entityIds[i] is bounded2D[i].extents centered on bounded2D[i].center.
if (supportsPolygon2DComponent) {
// 2D polygon for entity entityIds[i] is the buffer represented by polygons[i].bufferId.
// Application uses flink:xrGetSpatialBufferVector2fEXT to get the buffer data.
}
if (supportsSemanticLabelComponent) {
// semantic label for entity entityIds[i] is semanticLabels[i].
}
}
CHK_XR(xrDestroySpatialSnapshotEXT(completion.snapshot));
}
};
while (1) {
// ...
// For every frame in frame loop
// ...
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
// Poll for the XR_TYPE_EVENT_DATA_SPATIAL_DISCOVERY_RECOMMENDED_EXT event
XrEventDataBuffer event = {XR_TYPE_EVENT_DATA_BUFFER};
XrResult result = xrPollEvent(instance, &event);
if (result == XR_SUCCESS) {
switch (event.type) {
case XR_TYPE_EVENT_DATA_SPATIAL_DISCOVERY_RECOMMENDED_EXT: {
const XrEventDataSpatialDiscoveryRecommendedEXT& eventdata =
*reinterpret_cast<XrEventDataSpatialDiscoveryRecommendedEXT*>(&event);
// Discover spatial entities for the context that we received the "discovery
// recommended" event for.
discoverSpatialEntities(eventdata.spatialContext, time);
break;
}
}
}
// ...
// Finish frame loop
// ...
}
12.49.9. New Enum Constants
- XR_EXT_SPATIAL_PLANE_TRACKING_EXTENSION_NAME
- XR_EXT_spatial_plane_tracking_SPEC_VERSION

Extending XrSpatialCapabilityEXT:
- XR_SPATIAL_CAPABILITY_PLANE_TRACKING_EXT

Extending XrSpatialComponentTypeEXT:
- XR_SPATIAL_COMPONENT_TYPE_MESH_2D_EXT
- XR_SPATIAL_COMPONENT_TYPE_PLANE_ALIGNMENT_EXT
- XR_SPATIAL_COMPONENT_TYPE_PLANE_SEMANTIC_LABEL_EXT
- XR_SPATIAL_COMPONENT_TYPE_POLYGON_2D_EXT

Extending XrStructureType:
- XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_PLANE_TRACKING_EXT
- XR_TYPE_SPATIAL_COMPONENT_MESH_2D_LIST_EXT
- XR_TYPE_SPATIAL_COMPONENT_PLANE_ALIGNMENT_LIST_EXT
- XR_TYPE_SPATIAL_COMPONENT_PLANE_SEMANTIC_LABEL_LIST_EXT
- XR_TYPE_SPATIAL_COMPONENT_POLYGON_2D_LIST_EXT
12.50. XR_EXT_thermal_query
- Name String: XR_EXT_thermal_query
- Extension Type: Instance extension
- Registered Extension Number: 17
- Revision: 2
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2021-04-14
- IP Status: No known IP claims.
- Contributors:
Armelle Laine, Qualcomm Technologies Inc, on behalf of Qualcomm Innovation Center, Inc
12.50.1. Overview
This extension provides an API to query a domain’s current thermal warning level and current thermal trend.
12.50.2. Querying the current thermal level and trend
This query allows the application to determine the extent and urgency of the
needed workload reduction and to verify that the mitigation measures are
efficiently reducing the temperature.
The query retrieves the current notificationLevel, allowing the
application to quickly verify whether the underlying system’s thermal
throttling is still in effect.
It also provides the application with the remaining temperature headroom
(tempHeadroom) until thermal throttling occurs, and the current rate
of change (tempSlope).
The most critical temperature of the domain is the one which is currently
most likely to be relevant for thermal throttling.
To query the status of a given domain:
// Provided by XR_EXT_thermal_query
XrResult xrThermalGetTemperatureTrendEXT(
XrSession session,
XrPerfSettingsDomainEXT domain,
XrPerfSettingsNotificationLevelEXT* notificationLevel,
float* tempHeadroom,
float* tempSlope);
// Provided by XR_EXT_performance_settings, XR_EXT_thermal_query
typedef enum XrPerfSettingsDomainEXT {
XR_PERF_SETTINGS_DOMAIN_CPU_EXT = 1,
XR_PERF_SETTINGS_DOMAIN_GPU_EXT = 2,
XR_PERF_SETTINGS_DOMAIN_MAX_ENUM_EXT = 0x7FFFFFFF
} XrPerfSettingsDomainEXT;
// Provided by XR_EXT_performance_settings, XR_EXT_thermal_query
typedef enum XrPerfSettingsNotificationLevelEXT {
XR_PERF_SETTINGS_NOTIF_LEVEL_NORMAL_EXT = 0,
XR_PERF_SETTINGS_NOTIF_LEVEL_WARNING_EXT = 25,
XR_PERF_SETTINGS_NOTIF_LEVEL_IMPAIRED_EXT = 75,
XR_PERF_SETTINGS_NOTIFICATION_LEVEL_MAX_ENUM_EXT = 0x7FFFFFFF
} XrPerfSettingsNotificationLevelEXT;
For the definition of the notification levels, see Notification level definition.
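A minimal usage sketch, assuming `session` was previously initialized; how the application reduces its workload in response is application-specific.

```cpp
// Query the thermal status of the GPU domain and react if throttling is
// imminent or in effect while the temperature is still rising.
XrPerfSettingsNotificationLevelEXT level;
float headroom;  // remaining temperature headroom until throttling
float slope;     // current rate of temperature change
CHK_XR(xrThermalGetTemperatureTrendEXT(session, XR_PERF_SETTINGS_DOMAIN_GPU_EXT,
                                       &level, &headroom, &slope));
if (level >= XR_PERF_SETTINGS_NOTIF_LEVEL_WARNING_EXT && slope > 0.0f) {
    // Reduce GPU workload, e.g. by lowering render resolution or
    // disabling expensive effects, then re-query to verify the trend.
}
```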
12.50.3. Thermal Query API Reference
xrThermalGetTemperatureTrendEXT
// Provided by XR_EXT_thermal_query
XrResult xrThermalGetTemperatureTrendEXT(
XrSession session,
XrPerfSettingsDomainEXT domain,
XrPerfSettingsNotificationLevelEXT* notificationLevel,
float* tempHeadroom,
float* tempSlope);
Allows the application to query the current temperature warning level of a domain, the remaining temperature headroom, and the trend.
Version History
- Revision 1, 2017-11-30 (Armelle Laine)
- Revision 2, 2021-04-14 (Rylie Pavlik, Collabora, Ltd.)
  - Fix missing error code
12.51. XR_EXT_user_presence
- Name String: XR_EXT_user_presence
- Extension Type: Instance extension
- Registered Extension Number: 471
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2023-04-22
- IP Status: No known IP claims.
- Contributors:
Yin Li, Microsoft
Bryce Hutchings, Microsoft
John Kearney, Meta Platforms
Andreas Loeve Selvik, Meta Platforms
Peter Kuhn, Unity Technologies
Jakob Bornecrantz, Collabora
12.51.1. Overview
This extension introduces a new event to notify the application when the system detects a change in user presence, such as when the user has taken off or put on an XR headset.
This event is typically used by XR applications that also offer non-XR experiences outside of the XR headset. For instance, some applications pause game logic or video playback until the user puts on the headset, displaying an instructional message in the mirror window on the desktop PC monitor. As another example, an application might use this event to disable a head-tracking driven avatar in an online meeting when the user has taken off the headset.
User presence is fundamentally decoupled from the session lifecycle. Although the core specification of XrSessionState hints at a potential correlation between session state and user presence, in practice such a connection may not hold consistently across runtimes. Applications should avoid relying on assumed relationships between session state and user presence; instead, they should use this extension to reliably obtain user presence information.
12.51.2. System Supports User Presence
The XrSystemUserPresencePropertiesEXT structure is defined as:
// Provided by XR_EXT_user_presence
typedef struct XrSystemUserPresencePropertiesEXT {
XrStructureType type;
void* next;
XrBool32 supportsUserPresence;
} XrSystemUserPresencePropertiesEXT;
The application can chain an XrSystemUserPresencePropertiesEXT structure to XrSystemProperties when calling xrGetSystemProperties to detect whether the given system supports sensing of user presence.
If the system does not support user presence sensing, the runtime must
return XR_FALSE for supportsUserPresence and must not queue the
XrEventDataUserPresenceChangedEXT event for any session on this
system.
In this case, an application typically assumes that the user is always present, as the runtime is unable to detect changes in user presence.
12.51.3. User Presence Changed Event
The XrEventDataUserPresenceChangedEXT structure is defined as:
// Provided by XR_EXT_user_presence
typedef struct XrEventDataUserPresenceChangedEXT {
XrStructureType type;
const void* next;
XrSession session;
XrBool32 isUserPresent;
} XrEventDataUserPresenceChangedEXT;
The XrEventDataUserPresenceChangedEXT event is queued for retrieval using xrPollEvent when user presence changes, as well as when a session starts running.
Receiving XrEventDataUserPresenceChangedEXT with
isUserPresent equal to XR_TRUE indicates that the system has
detected the presence of a user in the XR experience.
For example, this may indicate that the user has put on the headset, or has
entered the tracking area of a non-head-worn XR system.
Receiving XrEventDataUserPresenceChangedEXT with
isUserPresent equal to XR_FALSE indicates that the system has
detected the absence of a user in the XR experience.
For example, this may indicate that the user has removed the headset or has
stepped away from the tracking area of a non-head-worn XR system.
The runtime must queue this event upon a successful call to the
xrBeginSession function, regardless of the value of
isUserPresent, so that the application can be in sync on the state
when a session begins running.
The session field must be a valid XrSession handle of a running
session.
After the application calls xrEndSession, the running session is ended
and the runtime must not enqueue any more user presence events.
The application therefore will not observe any further changes to
isUserPresent until another session begins running.
Note
This extension does not require any specific correlation between user presence state and session state, except that the XrEventDataUserPresenceChangedEXT event cannot be observed without a running session. A runtime may choose to correlate the two states or keep them independent.
XrInstance instance; // previously initialized
XrSystemId systemId; // previously initialized
XrSession session; // previously initialized
XrSystemUserPresencePropertiesEXT userPresenceProperties{XR_TYPE_SYSTEM_USER_PRESENCE_PROPERTIES_EXT};
XrSystemProperties systemProperties{XR_TYPE_SYSTEM_PROPERTIES,
&userPresenceProperties};
CHK_XR(xrGetSystemProperties(instance, systemId, &systemProperties));
bool supportsUserPresence = userPresenceProperties.supportsUserPresence;
// When either the extension is not supported or the system does not support the sensor,
// the application typically assumes the user is always present, initializing isUserPresent
// to true before xrBeginSession and resetting it to false after xrEndSession.
bool isUserPresent = true;
// Initialize an event buffer to hold the output.
XrEventDataBuffer event = {XR_TYPE_EVENT_DATA_BUFFER};
XrResult result = xrPollEvent(instance, &event);
if (result == XR_SUCCESS) {
switch (event.type) {
case XR_TYPE_EVENT_DATA_SESSION_STATE_CHANGED: {
const XrEventDataSessionStateChanged& eventdata =
*reinterpret_cast<XrEventDataSessionStateChanged*>(&event);
XrSessionState sessionState = eventdata.state;
switch(sessionState)
{
case XR_SESSION_STATE_READY: {
isUserPresent = true;
XrSessionBeginInfo beginInfo{XR_TYPE_SESSION_BEGIN_INFO};
CHK_XR(xrBeginSession(session, &beginInfo));
break;
}
case XR_SESSION_STATE_STOPPING:{
CHK_XR(xrEndSession(session));
isUserPresent = false;
break;
}
}
break;
}
case XR_TYPE_EVENT_DATA_USER_PRESENCE_CHANGED_EXT: {
const XrEventDataUserPresenceChangedEXT& eventdata =
*reinterpret_cast<XrEventDataUserPresenceChangedEXT*>(&event);
isUserPresent = eventdata.isUserPresent;
// do_something(isUserPresent);
break;
}
}
}
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_EVENT_DATA_USER_PRESENCE_CHANGED_EXT
- XR_TYPE_SYSTEM_USER_PRESENCE_PROPERTIES_EXT
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2023-04-22 (Yin Li)
  - Initial extension description
12.52. XR_EXT_view_configuration_depth_range
- Name String: XR_EXT_view_configuration_depth_range
- Extension Type: Instance extension
- Registered Extension Number: 47
- Revision: 1
- Ratification Status: Ratified
- Extension and Version Dependencies
- Last Modified Date: 2019-08-16
- IP Status: No known IP claims.
- Contributors:
Blake Taylor, Magic Leap
Gilles Cadet, Magic Leap
Michael Liebenow, Magic Leap
Supreet Suresh, Magic Leap
Alex Turner, Microsoft
Bryce Hutchings, Microsoft
Yin Li, Microsoft
Overview
For XR systems, there may exist a per-view recommended min/max depth range at which content should be rendered into the virtual world. The depth range may be driven by several factors, including user comfort and fundamental capabilities of the system.
Displaying rendered content outside the recommended min/max depth range
would violate the system requirements for a properly integrated application,
and can result in a poor user experience due to observed visual artifacts,
visual discomfort, or fatigue.
The near/far depth values will fall in the range of (0..+infinity] where
max(recommendedNearZ, minNearZ) < min(recommendedFarZ,
maxFarZ).
Infinity is defined matching the standard library definition such that
std::isinf will return true for a returned infinite value.
In order to provide the application with the appropriate depth range at which to render content for each XrViewConfigurationView, this extension provides additional view configuration information, as defined by XrViewConfigurationDepthRangeEXT, to inform the application of the min/max recommended and absolute distances at which content should be rendered for that view.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:

- XR_TYPE_VIEW_CONFIGURATION_DEPTH_RANGE_EXT
New Enums
New Structures
The XrViewConfigurationDepthRangeEXT structure is defined as:
// Provided by XR_EXT_view_configuration_depth_range
typedef struct XrViewConfigurationDepthRangeEXT {
    XrStructureType    type;
    void*              next;
    float              recommendedNearZ;
    float              minNearZ;
    float              recommendedFarZ;
    float              maxFarZ;
} XrViewConfigurationDepthRangeEXT;
When enumerating the view configurations with
xrEnumerateViewConfigurationViews, the application can provide a
pointer to an XrViewConfigurationDepthRangeEXT in the next chain
of XrViewConfigurationView.
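As an illustration of how an application might respect these values, the following sketch clamps the application's preferred near/far planes to the absolute limits reported in XrViewConfigurationDepthRangeEXT, falling back to the recommended planes when the preference is unusable. The helper names and the fallback policy are this document's own illustration, not part of the extension.

```cpp
#include <algorithm>

// Stand-in for the relevant fields of XrViewConfigurationDepthRangeEXT;
// a real application uses the structure filled in by
// xrEnumerateViewConfigurationViews.
struct DepthRange {
    float recommendedNearZ;
    float minNearZ;
    float recommendedFarZ;
    float maxFarZ;
};

struct DepthPlanes {
    float nearZ;
    float farZ;
};

// Hypothetical policy: start from the app's preferred planes, but never
// violate the absolute limits [minNearZ, maxFarZ]; fall back to the
// recommended values when the clamped preference collapses.
DepthPlanes chooseDepthPlanes(const DepthRange& r,
                              float preferredNearZ,
                              float preferredFarZ) {
    DepthPlanes p;
    p.nearZ = std::max(preferredNearZ, r.minNearZ);
    p.farZ = std::min(preferredFarZ, r.maxFarZ);
    // Keep the planes ordered; the extension guarantees
    // max(recommendedNearZ, minNearZ) < min(recommendedFarZ, maxFarZ).
    if (p.nearZ >= p.farZ) {
        p.nearZ = std::max(r.recommendedNearZ, r.minNearZ);
        p.farZ = std::min(r.recommendedFarZ, r.maxFarZ);
    }
    return p;
}
```

A preferred near plane below `minNearZ` or far plane beyond `maxFarZ` is pulled back inside the absolute range before use.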
New Functions
Issues
Version History
- Revision 1, 2019-10-01 (Blake Taylor)
  - Initial proposal.
12.53. XR_EXT_win32_appcontainer_compatible
- Name String: XR_EXT_win32_appcontainer_compatible
- Extension Type: Instance extension
- Registered Extension Number: 58
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2019-12-16
- IP Status: No known IP claims.
- Contributors:
  - Yin Li, Microsoft
  - Alex Turner, Microsoft
  - Lachlan Ford, Microsoft
Overview
To minimize opportunities for malicious manipulation, a common practice on the Windows OS is to isolate the application process in an AppContainer execution environment. For a runtime to work properly in such an application process, the runtime must set appropriate ACLs on device resources and cross-process resources.
An application running in an AppContainer process can request that a runtime enable such AppContainer compatibility by adding
XR_EXT_WIN32_APPCONTAINER_COMPATIBLE_EXTENSION_NAME to
enabledExtensionNames of XrInstanceCreateInfo when calling
xrCreateInstance.
If the runtime is not capable of running properly within the AppContainer
execution environment, it must return XR_ERROR_EXTENSION_NOT_PRESENT.
If the runtime supports this extension, it can further inspect the
capability based on the connected device.
If the XR system cannot support an AppContainer execution environment, the
runtime must return XR_ERROR_FORM_FACTOR_UNAVAILABLE when the
application calls xrGetSystem.
If the call to xrGetSystem successfully returned with a valid
XrSystemId, the application can rely on the runtime working
properly in the AppContainer execution environment.
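The request-and-fallback flow above can be sketched as follows. This is a self-contained illustration, not real OpenXR code: the enum values, the `XrInstance` typedef, and the mock `fakeCreateInstance` stand in for the definitions in `<openxr/openxr.h>` and the runtime's actual xrCreateInstance.

```cpp
#include <cstring>
#include <vector>

// Stand-in definitions so this sketch is self-contained.
enum XrResult { XR_SUCCESS = 0, XR_ERROR_EXTENSION_NOT_PRESENT = -9 };
typedef unsigned long long XrInstance;

// Pretend runtime: honors the extension request only when the runtime
// itself can operate inside an AppContainer.
static bool g_runtimeSupportsAppContainer = false;

XrResult fakeCreateInstance(const std::vector<const char*>& enabledExtensionNames,
                            XrInstance* instance) {
    for (const char* ext : enabledExtensionNames) {
        if (std::strcmp(ext, "XR_EXT_win32_appcontainer_compatible") == 0 &&
            !g_runtimeSupportsAppContainer) {
            return XR_ERROR_EXTENSION_NOT_PRESENT;  // runtime cannot comply
        }
    }
    *instance = 1;  // stands in for a valid instance handle
    return XR_SUCCESS;
}

// Request AppContainer compatibility at instance creation time, as the
// extension prescribes; returns whether the request succeeded.
bool tryCreateAppContainerInstance(XrInstance* instance) {
    std::vector<const char*> extensions = {
        // XR_EXT_WIN32_APPCONTAINER_COMPATIBLE_EXTENSION_NAME
        "XR_EXT_win32_appcontainer_compatible"
    };
    return fakeCreateInstance(extensions, instance) == XR_SUCCESS;
}
```

An AppContainer-hosted application that receives XR_ERROR_EXTENSION_NOT_PRESENT here cannot assume the runtime works in its sandbox and should fail gracefully rather than retry without the extension.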
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2019-12-16 (Yin Li)
  - Initial proposal.
12.54. XR_ALMALENCE_digital_lens_control
- Name String: XR_ALMALENCE_digital_lens_control
- Extension Type: Instance extension
- Registered Extension Number: 197
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2021-11-08
- IP Status: No known IP claims.
- Contributors:
  - Ivan Chupakhin, Almalence Inc.
  - Dmitry Shmunk, Almalence Inc.
Overview
Digital Lens for VR (DLVR) is a computational lens aberration correction technology enabling high resolution, visual clarity and fidelity in VR head-mounted displays. The Digital Lens makes it possible to overcome two fundamental factors limiting VR picture quality: size constraints and the presence of a moving optical element, the eye pupil.
Features:

- Complete removal of lateral chromatic aberrations, across the entire FoV, at all gaze directions.
- Correction of longitudinal chromatic aberrations, lens blur and higher order aberrations.
- Increase of visible resolution.
- Enhancement of edge contrast (otherwise degraded due to lens smear).
- Enables high quality at wide FoV.
For OpenXR runtimes, DLVR is implemented as an implicit API layer distributed by Almalence Inc. as an installable package. DLVR utilizes eye tracking data (eye pupil coordinates and gaze direction) to produce corrections of rendered frames. Because the current core OpenXR API does not expose eye tracking data, the DLVR API layer relies on third-party eye tracking runtimes.
List of supported eye tracking devices:

- Tobii_VR4_CARBON_P1 (HP Reverb G2 Omnicept Edition)
- Tobii_VR4_U2_P2 (HTC Vive Pro Eye)
This extension enables applications to control the Digital Lens for VR API layer by calling xrSetDigitalLensControlALMALENCE.
New Object Types
New Flag Types
typedef XrFlags64 XrDigitalLensControlFlagsALMALENCE;
// Flag bits for XrDigitalLensControlFlagsALMALENCE
static const XrDigitalLensControlFlagsALMALENCE XR_DIGITAL_LENS_CONTROL_PROCESSING_DISABLE_BIT_ALMALENCE = 0x00000001;
New Enum Constants
XrStructureType enumeration is extended with:

- XR_TYPE_DIGITAL_LENS_CONTROL_ALMALENCE
New Enums
New Structures
The XrDigitalLensControlALMALENCE structure is defined as:
// Provided by XR_ALMALENCE_digital_lens_control
typedef struct XrDigitalLensControlALMALENCE {
    XrStructureType                       type;
    const void*                           next;
    XrDigitalLensControlFlagsALMALENCE    flags;
} XrDigitalLensControlALMALENCE;
New Functions
The xrSetDigitalLensControlALMALENCE function is defined as:
// Provided by XR_ALMALENCE_digital_lens_control
XrResult xrSetDigitalLensControlALMALENCE(
    XrSession session,
    const XrDigitalLensControlALMALENCE* digitalLensControl);
The xrSetDigitalLensControlALMALENCE function controls the state of the Digital Lens API layer.
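A minimal sketch of building the control structure an application would pass to xrSetDigitalLensControlALMALENCE to suspend or resume DLVR processing. The typedefs and the helper function are stand-ins invented here so the fragment is self-contained; real code uses the definitions shipped with the DLVR API layer headers.

```cpp
#include <cstdint>

// Stand-in typedefs mirroring the extension's flag type and bit value.
typedef uint64_t XrFlags64;
typedef XrFlags64 XrDigitalLensControlFlagsALMALENCE;
static const XrDigitalLensControlFlagsALMALENCE
    XR_DIGITAL_LENS_CONTROL_PROCESSING_DISABLE_BIT_ALMALENCE = 0x00000001;

// Mirrors the flags field of XrDigitalLensControlALMALENCE.
struct DigitalLensControl {
    XrDigitalLensControlFlagsALMALENCE flags;
};

// Build the structure to pass to xrSetDigitalLensControlALMALENCE:
// setting the disable bit suspends DLVR processing, clearing it resumes.
DigitalLensControl makeDigitalLensControl(bool disableProcessing) {
    DigitalLensControl ctrl{};
    if (disableProcessing) {
        ctrl.flags |= XR_DIGITAL_LENS_CONTROL_PROCESSING_DISABLE_BIT_ALMALENCE;
    }
    return ctrl;
}
```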
Issues
Version History
- Revision 1, 2021-11-08 (Ivan Chupakhin)
  - Initial draft
12.55. XR_ANDROID_anchor_sharing_export
- Name String: XR_ANDROID_anchor_sharing_export
- Extension Type: Instance extension
- Registered Extension Number: 702
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-07-23
- IP Status: No known IP claims.
- Contributors:
  - Nihav Jain, Google
  - Levana Chen, Google
  - Spencer Quin, Google
  - Kenny Vercaemer, Google
12.55.1. Overview
Anchors created by an application via XR_ANDROID_trackables are tied
to a specific session and are thus not visible to other applications.
This extension allows applications to share anchors with other running
applications.
This extension does not define any functions that make use of shared anchors. Other extensions may define such functions.
Note: Android applications must have the android.permission.SCENE_UNDERSTANDING_COARSE permission (protection level: dangerous) listed in their manifest, as this extension depends on it.
12.55.2. Inspect system capability
The XrSystemAnchorSharingExportPropertiesANDROID structure is defined as:
// Provided by XR_ANDROID_anchor_sharing_export
typedef struct XrSystemAnchorSharingExportPropertiesANDROID {
    XrStructureType    type;
    void*              next;
    XrBool32           supportsAnchorSharingExport;
} XrSystemAnchorSharingExportPropertiesANDROID;
An application can inspect whether the system is capable of sharing anchors
by chaining an XrSystemAnchorSharingExportPropertiesANDROID structure
to the XrSystemProperties::next chain when calling
xrGetSystemProperties.
If and only if a runtime returns XR_FALSE for
supportsAnchorSharingExport, the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from xrShareAnchorANDROID and
xrUnshareAnchorANDROID.
12.55.3. Share an anchor
The xrShareAnchorANDROID function is defined as:
// Provided by XR_ANDROID_anchor_sharing_export
XrResult xrShareAnchorANDROID(
    XrSession session,
    const XrAnchorSharingInfoANDROID* sharingInfo,
    XrAnchorSharingTokenANDROID* anchorToken);
The xrShareAnchorANDROID function returns a share token to the
application.
The application can share this token with other applications to give them
access to the pose data of XrAnchorSharingInfoANDROID::anchor.
The runtime must return XR_ERROR_ANCHOR_NOT_OWNED_BY_CALLER_ANDROID
if XrAnchorSharingInfoANDROID::anchor was not originally created
by the application that calls xrShareAnchorANDROID.
If the application calls xrShareAnchorANDROID for the same
XrAnchorSharingInfoANDROID::anchor multiple times without
unsharing it first, the runtime must return the same share token each time.
If the application calls xrShareAnchorANDROID for an anchor that was
previously shared and then unshared, the runtime may return a share token
that is different from the one issued in the previous share-unshare scope.
The lifetime of the anchorToken share token is limited to the
application-controlled share-unshare scope.
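The token-identity rules above can be summarized as a small state model. The sketch below is purely illustrative: the map-based bookkeeping is this document's invention, not how a runtime must implement sharing. It captures the observable contract: repeated shares return the same token while the anchor stays shared, and a share after an unshare may mint a new token (this model always does).

```cpp
#include <cstdint>
#include <unordered_map>

using AnchorHandle = std::uint64_t;  // stands in for XrSpace
using ShareToken = std::uint64_t;    // stands in for the AIBinder token

class ShareRegistry {
public:
    // Repeated shares of the same anchor return the same token
    // for the duration of one share-unshare scope.
    ShareToken share(AnchorHandle anchor) {
        auto it = tokens_.find(anchor);
        if (it != tokens_.end()) return it->second;
        ShareToken t = nextToken_++;
        tokens_[anchor] = t;
        return t;
    }

    // Unsharing ends the scope; a later share may mint a new token.
    void unshare(AnchorHandle anchor) { tokens_.erase(anchor); }

private:
    std::unordered_map<AnchorHandle, ShareToken> tokens_;
    ShareToken nextToken_ = 1;
};
```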
The XrAnchorSharingInfoANDROID structure is defined as:
// Provided by XR_ANDROID_anchor_sharing_export
typedef struct XrAnchorSharingInfoANDROID {
    XrStructureType    type;
    const void*        next;
    XrSpace            anchor;
} XrAnchorSharingInfoANDROID;
XrAnchorSharingInfoANDROID::anchor must be a valid
XrSpace created previously by calling xrCreateAnchorSpaceANDROID
or xrCreatePersistedAnchorSpaceANDROID.
The XrAnchorSharingTokenANDROID structure is defined as:
// Provided by XR_ANDROID_anchor_sharing_export
typedef struct XrAnchorSharingTokenANDROID {
    XrStructureType    type;
    void*              next;
    struct AIBinder*   token;
} XrAnchorSharingTokenANDROID;
Other extensions may define functions that use the
XrAnchorSharingTokenANDROID::token to access the anchor from
other sessions.
12.55.4. Unshare an anchor
The xrUnshareAnchorANDROID function is defined as:
// Provided by XR_ANDROID_anchor_sharing_export
XrResult xrUnshareAnchorANDROID(
    XrSession session,
    XrSpace anchor);
The xrUnshareAnchorANDROID function invalidates a previously shared
anchor.
This means that future uses of the anchor sharing token obtained when
xrShareAnchorANDROID was called must fail until the anchor is shared
again via xrShareAnchorANDROID.
The runtime must also invalidate previous imports of the anchor sharing
token and must not set XR_SPACE_LOCATION_POSITION_TRACKED_BIT or
XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT for the imported anchor’s
XrSpace.
This means that a previously imported anchor must stop tracking if the
original anchor is unshared by its parent session.
The runtime must return XR_ERROR_ANCHOR_NOT_OWNED_BY_CALLER_ANDROID
if the calling XrSession does not own anchor.
An anchor that is not unshared explicitly by calling xrUnshareAnchorANDROID must be implicitly unshared by the runtime when the anchor is destroyed via xrDestroySpace, or when the parent XrSession is destroyed, including when the application that shared the anchor quits.
12.55.5. Example code for anchor sharing export
The following example code demonstrates how to share and unshare an anchor with other applications.
#if defined(XR_USE_PLATFORM_ANDROID)
XrSession session;  // previously initialized
XrSpace anchor;     // created via xrCreateAnchorSpaceANDROID() or xrCreatePersistedAnchorSpaceANDROID()

// The function pointers are previously initialized using xrGetInstanceProcAddr.
PFN_xrShareAnchorANDROID xrShareAnchorANDROID;      // previously initialized
PFN_xrUnshareAnchorANDROID xrUnshareAnchorANDROID;  // previously initialized

XrAnchorSharingInfoANDROID sharingInfo = {
    .type = XR_TYPE_ANCHOR_SHARING_INFO_ANDROID,
    .next = nullptr,
    .anchor = anchor,
};
XrAnchorSharingTokenANDROID anchorToken = {.type = XR_TYPE_ANCHOR_SHARING_TOKEN_ANDROID};
CHK_XR(xrShareAnchorANDROID(session, &sharingInfo, &anchorToken));

// ... share anchorToken.token (an AIBinder*) with other processes ...

// Once the app no longer wants to share the anchor
CHK_XR(xrUnshareAnchorANDROID(session, anchor));

// Once the app is done using the anchor. This will also unshare
// the anchor if it has not been unshared explicitly before.
CHK_XR(xrDestroySpace(anchor));
#endif
12.55.8. New Enum Constants

- XR_ANDROID_ANCHOR_SHARING_EXPORT_EXTENSION_NAME
- XR_ANDROID_anchor_sharing_export_SPEC_VERSION
- Extending XrResult:
  - XR_ERROR_ANCHOR_NOT_OWNED_BY_CALLER_ANDROID
- Extending XrStructureType:
  - XR_TYPE_ANCHOR_SHARING_INFO_ANDROID
  - XR_TYPE_ANCHOR_SHARING_TOKEN_ANDROID
  - XR_TYPE_SYSTEM_ANCHOR_SHARING_EXPORT_PROPERTIES_ANDROID
12.56. XR_ANDROID_device_anchor_persistence
- Name String: XR_ANDROID_device_anchor_persistence
- Extension Type: Instance extension
- Registered Extension Number: 458
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-07-23
- IP Status: No known IP claims.
- Contributors:
  - Nihav Jain, Google
  - Levana Chen, Google
  - Spencer Quin, Google
  - Kenny Vercaemer, Google
12.56.1. Overview
This extension allows the application to persist, retrieve, and unpersist anchors on the current device for the current user, across application and device sessions. The anchors are persisted per app, as identified by their Android package name.
Note: Android applications must have the android.permission.SCENE_UNDERSTANDING_COARSE permission (protection level: dangerous) listed in their manifest, as this extension depends on it.
12.56.2. Inspect system capability
The XrSystemDeviceAnchorPersistencePropertiesANDROID structure is defined as:
// Provided by XR_ANDROID_device_anchor_persistence
typedef struct XrSystemDeviceAnchorPersistencePropertiesANDROID {
    XrStructureType    type;
    void*              next;
    XrBool32           supportsAnchorPersistence;
} XrSystemDeviceAnchorPersistencePropertiesANDROID;
An application inspects whether the system is capable of persisting spatial anchors (see xrCreateAnchorSpaceANDROID) by chaining an XrSystemDeviceAnchorPersistencePropertiesANDROID structure to XrSystemProperties when calling xrGetSystemProperties.
To query which trackable types support anchor persistence, an application calls xrEnumerateSupportedPersistenceAnchorTypesANDROID.
If and only if a runtime returns XR_FALSE for
supportsAnchorPersistence, the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from device anchor persistence functions
that operate on a spatial anchor.
The xrEnumerateSupportedPersistenceAnchorTypesANDROID function is defined as:
// Provided by XR_ANDROID_device_anchor_persistence
XrResult xrEnumerateSupportedPersistenceAnchorTypesANDROID(
    XrInstance instance,
    XrSystemId systemId,
    uint32_t trackableTypeCapacityInput,
    uint32_t* trackableTypeCountOutput,
    XrTrackableTypeANDROID* trackableTypes);
To check for support of anchor persistence on other XrTrackableTypeANDROID trackables, the application calls xrEnumerateSupportedPersistenceAnchorTypesANDROID.
If and only if a runtime does not return a given
XrTrackableTypeANDROID in the trackableTypes array, the runtime
must return XR_ERROR_FEATURE_UNSUPPORTED from device anchor
persistence functions that operate on an anchor of that type.
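Like other OpenXR enumeration functions, this uses the standard two-call idiom: a first call with zero capacity returns the required count, and a second call retrieves the values. The sketch below is self-contained; `mockEnumerateTypes` and the type values stand in for the real runtime function and the XrTrackableTypeANDROID constants.

```cpp
#include <cstdint>
#include <vector>

// Stand-ins so the two-call idiom can be shown self-contained.
enum XrResult { XR_SUCCESS = 0, XR_ERROR_SIZE_INSUFFICIENT = -11 };
typedef uint32_t XrTrackableTypeANDROID;

// Mock runtime that reports two supported trackable types.
static const std::vector<XrTrackableTypeANDROID> kSupported = {1, 2};

XrResult mockEnumerateTypes(uint32_t capacityInput, uint32_t* countOutput,
                            XrTrackableTypeANDROID* types) {
    *countOutput = static_cast<uint32_t>(kSupported.size());
    if (capacityInput == 0) return XR_SUCCESS;  // size query only
    if (capacityInput < kSupported.size()) return XR_ERROR_SIZE_INSUFFICIENT;
    for (uint32_t i = 0; i < *countOutput; ++i) types[i] = kSupported[i];
    return XR_SUCCESS;
}

// Two-call idiom: query the count, size the buffer, fetch the values.
std::vector<XrTrackableTypeANDROID> enumerateSupportedTypes() {
    uint32_t count = 0;
    mockEnumerateTypes(0, &count, nullptr);
    std::vector<XrTrackableTypeANDROID> types(count);
    mockEnumerateTypes(count, &count, types.data());
    types.resize(count);
    return types;
}
```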
12.56.3. Create a device anchor persistence handle
An XrDeviceAnchorPersistenceANDROID is a handle that represents the resources required to persist anchors.
// Provided by XR_ANDROID_device_anchor_persistence
XR_DEFINE_HANDLE(XrDeviceAnchorPersistenceANDROID)
The xrCreateDeviceAnchorPersistenceANDROID function is defined as:
// Provided by XR_ANDROID_device_anchor_persistence
XrResult xrCreateDeviceAnchorPersistenceANDROID(
    XrSession session,
    const XrDeviceAnchorPersistenceCreateInfoANDROID* createInfo,
    XrDeviceAnchorPersistenceANDROID* outHandle);
An application creates an XrDeviceAnchorPersistenceANDROID handle by calling xrCreateDeviceAnchorPersistenceANDROID.
This function starts an asynchronous load of the application's persisted
data.
All functions of this extension, except
xrDestroyDeviceAnchorPersistenceANDROID, must return
XR_ERROR_PERSISTED_DATA_NOT_READY_ANDROID if the data load is not
complete.
The application can retry those functions at a later time.
The XrDeviceAnchorPersistenceANDROID handle must be eventually freed via the xrDestroyDeviceAnchorPersistenceANDROID function or by destroying the parent XrSession handle.
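The retry behavior described above can be sketched as a bounded retry loop. This is an illustration only: the mock function, the illustrative error value, and the attempt-count bound are this document's assumptions, not extension requirements. A real application would call the loaded function pointer and typically yield between frames rather than spin.

```cpp
// Stand-ins; a real application uses the OpenXR definitions and the
// function pointers loaded via xrGetInstanceProcAddr.
enum XrResult {
    XR_SUCCESS = 0,
    XR_ERROR_PERSISTED_DATA_NOT_READY_ANDROID = -1000458001,  // illustrative value
};

// Mock persistence call that succeeds once the async load has finished.
static int g_loadStepsRemaining = 3;
XrResult mockPersistenceCall() {
    if (g_loadStepsRemaining > 0) {
        --g_loadStepsRemaining;
        return XR_ERROR_PERSISTED_DATA_NOT_READY_ANDROID;
    }
    return XR_SUCCESS;
}

// Bounded retry for calls made while the persisted data is still loading.
XrResult callWhenReady(int maxAttempts) {
    XrResult result = XR_ERROR_PERSISTED_DATA_NOT_READY_ANDROID;
    for (int i = 0; i < maxAttempts; ++i) {
        result = mockPersistenceCall();
        if (result != XR_ERROR_PERSISTED_DATA_NOT_READY_ANDROID) break;
    }
    return result;
}
```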
The XrDeviceAnchorPersistenceCreateInfoANDROID structure is defined as:
// Provided by XR_ANDROID_device_anchor_persistence
typedef struct XrDeviceAnchorPersistenceCreateInfoANDROID {
    XrStructureType    type;
    const void*        next;
} XrDeviceAnchorPersistenceCreateInfoANDROID;
The XrDeviceAnchorPersistenceCreateInfoANDROID structure provides creation options for the XrDeviceAnchorPersistenceANDROID when passed to xrCreateDeviceAnchorPersistenceANDROID.
The xrDestroyDeviceAnchorPersistenceANDROID function is defined as:
// Provided by XR_ANDROID_device_anchor_persistence
XrResult xrDestroyDeviceAnchorPersistenceANDROID(
    XrDeviceAnchorPersistenceANDROID handle);
The xrDestroyDeviceAnchorPersistenceANDROID function destroys the device anchor persistence handle. After this call the runtime may free all related memory and resources.
12.56.4. Persist an anchor
The xrPersistAnchorANDROID function is defined as:
// Provided by XR_ANDROID_device_anchor_persistence
XrResult xrPersistAnchorANDROID(
    XrDeviceAnchorPersistenceANDROID handle,
    const XrPersistedAnchorSpaceInfoANDROID* persistedInfo,
    XrUuidEXT* anchorIdOutput);
The application requests anchors to be persisted by calling xrPersistAnchorANDROID. The application must not assume a success return value means the anchor is immediately persisted. The application should use xrGetAnchorPersistStateANDROID to check the persist state of the anchor using the returned anchor XrUuidEXT.
- The runtime must return XR_ERROR_PERSISTED_DATA_NOT_READY_ANDROID if the persisted data loading for handle is not yet complete.
- The runtime must return XR_SUCCESS once the anchor has been queued for persistence.
To unpersist the anchor, the application calls xrUnpersistAnchorANDROID.
The XrPersistedAnchorSpaceInfoANDROID structure is defined as:
// Provided by XR_ANDROID_device_anchor_persistence
typedef struct XrPersistedAnchorSpaceInfoANDROID {
    XrStructureType    type;
    const void*        next;
    XrSpace            anchor;
} XrPersistedAnchorSpaceInfoANDROID;
The xrGetAnchorPersistStateANDROID function is defined as:
// Provided by XR_ANDROID_device_anchor_persistence
XrResult xrGetAnchorPersistStateANDROID(
    XrDeviceAnchorPersistenceANDROID handle,
    const XrUuidEXT* anchorId,
    XrAnchorPersistStateANDROID* persistState);
- The runtime must set persistState to XR_ANCHOR_PERSIST_STATE_PERSIST_NOT_REQUESTED_ANDROID if the anchor XrUuidEXT has not been requested for persistence.
- The runtime must return XR_ERROR_PERSISTED_DATA_NOT_READY_ANDROID if the persisted data of anchorId is not ready.
The XrAnchorPersistStateANDROID enum is defined as:
// Provided by XR_ANDROID_device_anchor_persistence
typedef enum XrAnchorPersistStateANDROID {
    XR_ANCHOR_PERSIST_STATE_PERSIST_NOT_REQUESTED_ANDROID = 0,
    XR_ANCHOR_PERSIST_STATE_PERSIST_PENDING_ANDROID = 1,
    XR_ANCHOR_PERSIST_STATE_PERSISTED_ANDROID = 2,
    XR_ANCHOR_PERSIST_STATE_MAX_ENUM_ANDROID = 0x7FFFFFFF
} XrAnchorPersistStateANDROID;
| Enum | Description |
|---|---|
| XR_ANCHOR_PERSIST_STATE_PERSIST_NOT_REQUESTED_ANDROID | Anchor has not been requested to be persisted by the app. |
| XR_ANCHOR_PERSIST_STATE_PERSIST_PENDING_ANDROID | Anchor has been requested to be persisted but not persisted yet. |
| XR_ANCHOR_PERSIST_STATE_PERSISTED_ANDROID | Anchor has been successfully persisted by the runtime. |
12.56.5. Enumerate persisted anchors
The xrEnumeratePersistedAnchorsANDROID function is defined as:
// Provided by XR_ANDROID_device_anchor_persistence
XrResult xrEnumeratePersistedAnchorsANDROID(
    XrDeviceAnchorPersistenceANDROID handle,
    uint32_t anchorIdCapacityInput,
    uint32_t* anchorIdCountOutput,
    XrUuidEXT* anchorIds);
To enumerate all current persisted anchors, the application calls
xrEnumeratePersistedAnchorsANDROID.
anchorIds will hold the UUIDs of the persisted anchors up to the
capacity of the array.
Since anchors are persisted asynchronously, the count of persisted anchors
may change in between calls, causing anchorIdCountOutput to differ as
well.
If the capacity is insufficient, the runtime must truncate the values to
fit the output array, and it makes no guarantees about which anchors are
returned.
The runtime must return XR_ERROR_PERSISTED_DATA_NOT_READY_ANDROID if
the persisted data loading for handle is not yet complete.
12.56.6. Create an anchor from persisted data
The xrCreatePersistedAnchorSpaceANDROID function is defined as:
// Provided by XR_ANDROID_device_anchor_persistence
XrResult xrCreatePersistedAnchorSpaceANDROID(
    XrDeviceAnchorPersistenceANDROID handle,
    const XrPersistedAnchorSpaceCreateInfoANDROID* createInfo,
    XrSpace* anchorOutput);
The application creates an XrSpace anchor from a previously persisted
anchor by calling xrCreatePersistedAnchorSpaceANDROID with the same
XrUuidEXT.
This is another way of creating anchors as defined in
XR_ANDROID_trackables.
- The runtime must return XR_ERROR_PERSISTED_DATA_NOT_READY_ANDROID if the persisted data loading for handle is not yet complete.
- The runtime must return XR_ERROR_ANCHOR_ID_NOT_FOUND_ANDROID if the anchor XrUuidEXT is not found.
- If the XrUuidEXT refers to an anchor that has not reached the XR_ANCHOR_PERSIST_STATE_PERSISTED_ANDROID state yet, the runtime must return XR_ERROR_ANCHOR_ID_NOT_FOUND_ANDROID.
- Despite the first parameter of this function being an XrDeviceAnchorPersistenceANDROID, the parent of the created XrSpace is the XrSession that is the parent of handle. The runtime must track the anchor until anchorOutput is destroyed.
The XrPersistedAnchorSpaceCreateInfoANDROID structure is defined as:
// Provided by XR_ANDROID_device_anchor_persistence
typedef struct XrPersistedAnchorSpaceCreateInfoANDROID {
    XrStructureType    type;
    const void*        next;
    XrUuidEXT          anchorId;
} XrPersistedAnchorSpaceCreateInfoANDROID;
The XrPersistedAnchorSpaceCreateInfoANDROID structure provides creation options for the anchor when passed to xrCreatePersistedAnchorSpaceANDROID.
12.56.7. Unpersist a persisted anchor
The xrUnpersistAnchorANDROID function is defined as:
// Provided by XR_ANDROID_device_anchor_persistence
XrResult xrUnpersistAnchorANDROID(
    XrDeviceAnchorPersistenceANDROID handle,
    const XrUuidEXT* anchorId);
The application unpersists a persisted anchor by calling xrUnpersistAnchorANDROID and passing the anchor XrUuidEXT of the anchor to unpersist. The runtime may take some time to unpersist the anchor. Applications should use xrEnumeratePersistedAnchorsANDROID to verify that an anchor has been unpersisted before exiting. Anchor spaces that have been created from persisted anchors are unaffected by unpersisting and keep tracking as normal anchors.
- The runtime must return XR_ERROR_PERSISTED_DATA_NOT_READY_ANDROID if the persisted data is not ready.
- The runtime must return XR_ERROR_ANCHOR_ID_NOT_FOUND_ANDROID if the anchor XrUuidEXT is not found.
- The runtime must guarantee that the anchor will be unpersisted, regardless of whether the anchor is in the pending or the persisted state at the time of the call. That is, a call to xrPersistAnchorANDROID followed by a call to xrUnpersistAnchorANDROID must always result in the anchor being unpersisted or never persisted.
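The "unpersist always wins" guarantee above can be modeled as a toy state machine. The bookkeeping is this document's illustration (not a runtime implementation); the states mirror XrAnchorPersistStateANDROID, and a late persist completion must not resurrect an anchor that was unpersisted while still pending.

```cpp
// Toy state machine for the unpersist guarantee; states mirror
// XrAnchorPersistStateANDROID.
enum class PersistState { NotRequested, Pending, Persisted };

struct AnchorRecord {
    PersistState state = PersistState::NotRequested;

    void persistRequested() { state = PersistState::Pending; }

    // The runtime finishing the async persist only matters if the
    // request is still pending.
    void runtimeCompletedPersist() {
        if (state == PersistState::Pending) state = PersistState::Persisted;
    }

    // Unpersist applies in both the pending and the persisted state.
    void unpersistRequested() { state = PersistState::NotRequested; }
};
```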
12.56.8. Example code for anchor persistence
The following example code demonstrates how to inspect system capability; persist, enumerate, and unpersist anchors; and create an anchor from a persisted anchor XrUuidEXT.
XrSession session; // previously initialized
XrSpace anchor; // previously initialized
XrSpace appSpace; // previously initialized
// The function pointers are previously initialized using xrGetInstanceProcAddr.
PFN_xrEnumerateSupportedPersistenceAnchorTypesANDROID xrEnumerateSupportedPersistenceAnchorTypesANDROID; // previously initialized
PFN_xrCreateDeviceAnchorPersistenceANDROID xrCreateDeviceAnchorPersistenceANDROID; // previously initialized
PFN_xrDestroyDeviceAnchorPersistenceANDROID xrDestroyDeviceAnchorPersistenceANDROID; // previously initialized
PFN_xrPersistAnchorANDROID xrPersistAnchorANDROID; // previously initialized
PFN_xrGetAnchorPersistStateANDROID xrGetAnchorPersistStateANDROID; // previously initialized
PFN_xrCreatePersistedAnchorSpaceANDROID xrCreatePersistedAnchorSpaceANDROID; // previously initialized
PFN_xrEnumeratePersistedAnchorsANDROID xrEnumeratePersistedAnchorsANDROID; // previously initialized
PFN_xrUnpersistAnchorANDROID xrUnpersistAnchorANDROID; // previously initialized
// Create a device anchor persistence handle
XrDeviceAnchorPersistenceCreateInfoANDROID persistenceHandleCreateInfo;
persistenceHandleCreateInfo.type = XR_TYPE_DEVICE_ANCHOR_PERSISTENCE_CREATE_INFO_ANDROID;
persistenceHandleCreateInfo.next = nullptr;
XrDeviceAnchorPersistenceANDROID persistenceHandle;
CHK_XR(xrCreateDeviceAnchorPersistenceANDROID(session, &persistenceHandleCreateInfo, &persistenceHandle));
/// Persist an anchor
XrPersistedAnchorSpaceInfoANDROID anchorSpaceInfo;
anchorSpaceInfo.type = XR_TYPE_PERSISTED_ANCHOR_SPACE_INFO_ANDROID;
anchorSpaceInfo.next = nullptr;
anchorSpaceInfo.anchor = anchor;
XrUuidEXT anchorId;
XrResult result;
do {
    result = xrPersistAnchorANDROID(
        persistenceHandle, &anchorSpaceInfo, &anchorId);
} while (result == XR_ERROR_PERSISTED_DATA_NOT_READY_ANDROID ||
         result == XR_ERROR_ANCHOR_NOT_TRACKING_ANDROID);
if (result != XR_SUCCESS) {
    // Handle errors
}
// ... Update loop ...
// Poll for anchor persist state to confirm if it was successfully persisted
XrAnchorPersistStateANDROID persistState;
CHK_XR(xrGetAnchorPersistStateANDROID(persistenceHandle, &anchorId, &persistState));
if (persistState == XR_ANCHOR_PERSIST_STATE_PERSISTED_ANDROID) {
    // The anchor was persisted successfully
}
// Enumerate all persisted anchors
uint32_t anchorCountOutput = 0;
std::vector<XrUuidEXT> allAnchors;
CHK_XR(xrEnumeratePersistedAnchorsANDROID(
    persistenceHandle,
    anchorCountOutput,
    &anchorCountOutput,
    nullptr));
allAnchors.resize(anchorCountOutput, XrUuidEXT{.data = {0}});
// Fetch the actual anchors in an appropriately resized array.
CHK_XR(xrEnumeratePersistedAnchorsANDROID(
    persistenceHandle,
    anchorCountOutput,
    &anchorCountOutput,
    allAnchors.data()));
// Creating an anchor from a previously persisted anchor using its UUID
XrTime updateTime; // Time used for the current frame's simulation update.
anchorId = allAnchors[0]; // Assumes at least one persisted anchor was enumerated.
XrPersistedAnchorSpaceCreateInfoANDROID createInfo;
createInfo.type = XR_TYPE_PERSISTED_ANCHOR_SPACE_CREATE_INFO_ANDROID;
createInfo.next = nullptr;
createInfo.anchorId = anchorId;
XrSpace anchorSpace = XR_NULL_HANDLE;
CHK_XR(xrCreatePersistedAnchorSpaceANDROID(
    persistenceHandle,
    &createInfo,
    &anchorSpace));
// The anchor was found and retrieved from the local device successfully.
XrSpaceLocation anchorLocation = { XR_TYPE_SPACE_LOCATION };
CHK_XR(xrLocateSpace(anchorSpace, appSpace, updateTime, &anchorLocation));
XrPosef pose = anchorLocation.pose;
// Unpersist the anchor
do {
    result = xrUnpersistAnchorANDROID(persistenceHandle, &anchorId);
} while (result == XR_ERROR_PERSISTED_DATA_NOT_READY_ANDROID);
if (result != XR_SUCCESS) {
    // Handle errors
}
// Once app is done with all persistence related tasks
CHK_XR(xrDestroySpace(anchorSpace));
CHK_XR(xrDestroyDeviceAnchorPersistenceANDROID(persistenceHandle));
12.56.13. New Enum Constants

- XR_ANDROID_DEVICE_ANCHOR_PERSISTENCE_EXTENSION_NAME
- XR_ANDROID_device_anchor_persistence_SPEC_VERSION
- Extending XrObjectType:
  - XR_OBJECT_TYPE_DEVICE_ANCHOR_PERSISTENCE_ANDROID
- Extending XrResult:
  - XR_ERROR_ANCHOR_ALREADY_PERSISTED_ANDROID
  - XR_ERROR_ANCHOR_ID_NOT_FOUND_ANDROID
  - XR_ERROR_ANCHOR_NOT_TRACKING_ANDROID
  - XR_ERROR_PERSISTED_DATA_NOT_READY_ANDROID
- Extending XrStructureType:
  - XR_TYPE_DEVICE_ANCHOR_PERSISTENCE_CREATE_INFO_ANDROID
  - XR_TYPE_PERSISTED_ANCHOR_SPACE_CREATE_INFO_ANDROID
  - XR_TYPE_PERSISTED_ANCHOR_SPACE_INFO_ANDROID
  - XR_TYPE_SYSTEM_DEVICE_ANCHOR_PERSISTENCE_PROPERTIES_ANDROID
12.57. XR_ANDROID_face_tracking
- Name String: XR_ANDROID_face_tracking
- Extension Type: Instance extension
- Registered Extension Number: 459
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-06-04
- IP Status: No known IP claims.
- Contributors:
  - Spencer Quin, Google
  - Jared Finder, Google
  - Levana Chen, Google
12.57.1. Overview
This extension enables applications to get weights of blend shapes and render facial expressions in XR experiences.
This extension is intended to provide the information needed to create realistic avatars and expressive representations of users in virtual space. The runtime may provide additional face calibration to improve face tracking experiences. The application can check whether calibration is active before getting blend shape weights.
12.57.2. Face Tracker
Face tracking data is sensitive personal information and is closely linked to personal privacy and integrity. It is strongly recommended that applications storing or transferring face tracking data always ask the user for active and specific acceptance to do so.
Note: Android applications must have the android.permission.FACE_TRACKING permission (protection level: dangerous) listed in their manifest. Because it is a dangerous permission, the application must also request it at runtime before using these functions.
The runtime must support a permission system to control application access to face tracking.
- The runtime must return XR_ERROR_PERMISSION_INSUFFICIENT when creating an active face tracker until the application has been granted permission for the face tracker.
- Once the application has been granted permission, the runtime may set XrFaceStateANDROID::isValid to XR_TRUE when getting face states via xrGetFaceStateANDROID.
12.57.3. Inspect system capability
The XrSystemFaceTrackingPropertiesANDROID structure is defined as:
// Provided by XR_ANDROID_face_tracking
typedef struct XrSystemFaceTrackingPropertiesANDROID {
    XrStructureType    type;
    void*              next;
    XrBool32           supportsFaceTracking;
} XrSystemFaceTrackingPropertiesANDROID;
Note: Android applications must have the android.permission.FACE_TRACKING permission (protection level: dangerous) listed in their manifest. Because it is a dangerous permission, the application must also request it at runtime before using these functions.
An application inspects whether the system is capable of face tracking by extending the XrSystemProperties with an XrSystemFaceTrackingPropertiesANDROID structure when calling xrGetSystemProperties.
If and only if a runtime returns XR_FALSE for
supportsFaceTracking, the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from xrCreateFaceTrackerANDROID.
12.57.4. Create a face tracker handle
// Provided by XR_ANDROID_face_tracking
XR_DEFINE_HANDLE(XrFaceTrackerANDROID)
The XrFaceTrackerANDROID handle represents a face tracker.
The application uses this handle to access face tracking data using the other functions in this extension.
The xrCreateFaceTrackerANDROID function is defined as:
// Provided by XR_ANDROID_face_tracking
XrResult xrCreateFaceTrackerANDROID(
    XrSession session,
    const XrFaceTrackerCreateInfoANDROID* createInfo,
    XrFaceTrackerANDROID* faceTracker);
Note: Android applications must have the android.permission.FACE_TRACKING permission (protection level: dangerous) listed in their manifest. Because it is a dangerous permission, the application must also request it at runtime before using these functions.
An application creates an XrFaceTrackerANDROID handle by calling the xrCreateFaceTrackerANDROID function.
If the system does not support face tracking, the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from xrCreateFaceTrackerANDROID.
The XrFaceTrackerCreateInfoANDROID structure is described as follows:
// Provided by XR_ANDROID_face_tracking
typedef struct XrFaceTrackerCreateInfoANDROID {
XrStructureType type;
const void* next;
} XrFaceTrackerCreateInfoANDROID;
The XrFaceTrackerCreateInfoANDROID structure describes the information to create an XrFaceTrackerANDROID handle.
The xrDestroyFaceTrackerANDROID function is defined as:
// Provided by XR_ANDROID_face_tracking
XrResult xrDestroyFaceTrackerANDROID(
XrFaceTrackerANDROID faceTracker);
When the application’s face tracking experience is over, it calls the
xrDestroyFaceTrackerANDROID function to release the faceTracker
handle and the underlying resources.
12.57.5. Check face calibration
The xrGetFaceCalibrationStateANDROID function is defined as:
// Provided by XR_ANDROID_face_tracking
XrResult xrGetFaceCalibrationStateANDROID(
XrFaceTrackerANDROID faceTracker,
XrBool32* faceIsCalibratedOutput);
An application checks the face calibration state by calling the xrGetFaceCalibrationStateANDROID function.
When the tracking service is still under initialization, the runtime may
return XR_ERROR_SERVICE_NOT_READY_ANDROID from
xrGetFaceCalibrationStateANDROID to indicate that the application can
retry later.
If the system does not support face calibration, the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from
xrGetFaceCalibrationStateANDROID.
Otherwise, the runtime may set faceIsCalibratedOutput to
XR_TRUE to reflect the face calibration state.
12.57.6. Get facial expressions
The xrGetFaceStateANDROID function returns blend shapes of facial expressions at a given time.
// Provided by XR_ANDROID_face_tracking
XrResult xrGetFaceStateANDROID(
XrFaceTrackerANDROID faceTracker,
const XrFaceStateGetInfoANDROID* getInfo,
XrFaceStateANDROID* faceStateOutput);
The XrFaceStateGetInfoANDROID structure describes the information to obtain facial expressions.
// Provided by XR_ANDROID_face_tracking
typedef struct XrFaceStateGetInfoANDROID {
XrStructureType type;
const void* next;
XrTime time;
} XrFaceStateGetInfoANDROID;
Applications should request a time equal to the predicted display time for the rendered frame. The runtime will employ appropriate modeling to provide expressions for this time.
The XrFaceStateANDROID structure returns the face tracking state and facial expressions.
// Provided by XR_ANDROID_face_tracking
typedef struct XrFaceStateANDROID {
XrStructureType type;
void* next;
uint32_t parametersCapacityInput;
uint32_t parametersCountOutput;
float* parameters;
XrFaceTrackingStateANDROID faceTrackingState;
XrTime sampleTime;
XrBool32 isValid;
uint32_t regionConfidencesCapacityInput;
uint32_t regionConfidencesCountOutput;
float* regionConfidences;
} XrFaceStateANDROID;
The application can set parametersCapacityInput to
XR_FACE_PARAMETER_COUNT_ANDROID to get facial expressions which are
indexed by XrFaceParameterIndicesANDROID.
The runtime must return parameters representing the weights of blend
shapes of current facial expressions.
The runtime must update the parameters array ordered so that the
application can index elements using the corresponding facial expression
enum (e.g. XrFaceParameterIndicesANDROID).
The XrFaceTrackingStateANDROID enumeration identifies the different states of the face tracker.
// Provided by XR_ANDROID_face_tracking
typedef enum XrFaceTrackingStateANDROID {
XR_FACE_TRACKING_STATE_PAUSED_ANDROID = 0,
XR_FACE_TRACKING_STATE_STOPPED_ANDROID = 1,
XR_FACE_TRACKING_STATE_TRACKING_ANDROID = 2,
XR_FACE_TRACKING_STATE_MAX_ENUM_ANDROID = 0x7FFFFFFF
} XrFaceTrackingStateANDROID;
The enumerants have the following meanings:
| Enum | Description |
|---|---|
| XR_FACE_TRACKING_STATE_PAUSED_ANDROID | Indicates that face tracking is paused but may be resumed in the future. |
| XR_FACE_TRACKING_STATE_STOPPED_ANDROID | Tracking has stopped but the client still has an active face tracker. |
| XR_FACE_TRACKING_STATE_TRACKING_ANDROID | The face is currently tracked and its pose is current. |
The XrFaceConfidenceRegionsANDROID enumeration identifies the different confidence regions of the face.
// Provided by XR_ANDROID_face_tracking
typedef enum XrFaceConfidenceRegionsANDROID {
XR_FACE_CONFIDENCE_REGIONS_LOWER_ANDROID = 0,
XR_FACE_CONFIDENCE_REGIONS_LEFT_UPPER_ANDROID = 1,
XR_FACE_CONFIDENCE_REGIONS_RIGHT_UPPER_ANDROID = 2,
XR_FACE_CONFIDENCE_REGIONS_MAX_ENUM_ANDROID = 0x7FFFFFFF
} XrFaceConfidenceRegionsANDROID;
The enumerants have the following meanings:
| Enum | Description |
|---|---|
| XR_FACE_CONFIDENCE_REGIONS_LOWER_ANDROID | Confidence corresponding to the lower region. |
| XR_FACE_CONFIDENCE_REGIONS_LEFT_UPPER_ANDROID | Confidence corresponding to the left upper region. |
| XR_FACE_CONFIDENCE_REGIONS_RIGHT_UPPER_ANDROID | Confidence corresponding to the right upper region. |
12.57.7. Conventions of blend shapes
This extension defines 68 blend shapes, i.e.
XR_FACE_PARAMETER_COUNT_ANDROID, for the reduced G-Nome format.
Each parameter in this enum is an index into a blend shape array whose
values are of type float and which the runtime must normalize to the range 0.0 to 1.0.
// Provided by XR_ANDROID_face_tracking
typedef enum XrFaceParameterIndicesANDROID {
XR_FACE_PARAMETER_INDICES_BROW_LOWERER_L_ANDROID = 0,
XR_FACE_PARAMETER_INDICES_BROW_LOWERER_R_ANDROID = 1,
XR_FACE_PARAMETER_INDICES_CHEEK_PUFF_L_ANDROID = 2,
XR_FACE_PARAMETER_INDICES_CHEEK_PUFF_R_ANDROID = 3,
XR_FACE_PARAMETER_INDICES_CHEEK_RAISER_L_ANDROID = 4,
XR_FACE_PARAMETER_INDICES_CHEEK_RAISER_R_ANDROID = 5,
XR_FACE_PARAMETER_INDICES_CHEEK_SUCK_L_ANDROID = 6,
XR_FACE_PARAMETER_INDICES_CHEEK_SUCK_R_ANDROID = 7,
XR_FACE_PARAMETER_INDICES_CHIN_RAISER_B_ANDROID = 8,
XR_FACE_PARAMETER_INDICES_CHIN_RAISER_T_ANDROID = 9,
XR_FACE_PARAMETER_INDICES_DIMPLER_L_ANDROID = 10,
XR_FACE_PARAMETER_INDICES_DIMPLER_R_ANDROID = 11,
XR_FACE_PARAMETER_INDICES_EYES_CLOSED_L_ANDROID = 12,
XR_FACE_PARAMETER_INDICES_EYES_CLOSED_R_ANDROID = 13,
XR_FACE_PARAMETER_INDICES_EYES_LOOK_DOWN_L_ANDROID = 14,
XR_FACE_PARAMETER_INDICES_EYES_LOOK_DOWN_R_ANDROID = 15,
XR_FACE_PARAMETER_INDICES_EYES_LOOK_LEFT_L_ANDROID = 16,
XR_FACE_PARAMETER_INDICES_EYES_LOOK_LEFT_R_ANDROID = 17,
XR_FACE_PARAMETER_INDICES_EYES_LOOK_RIGHT_L_ANDROID = 18,
XR_FACE_PARAMETER_INDICES_EYES_LOOK_RIGHT_R_ANDROID = 19,
XR_FACE_PARAMETER_INDICES_EYES_LOOK_UP_L_ANDROID = 20,
XR_FACE_PARAMETER_INDICES_EYES_LOOK_UP_R_ANDROID = 21,
XR_FACE_PARAMETER_INDICES_INNER_BROW_RAISER_L_ANDROID = 22,
XR_FACE_PARAMETER_INDICES_INNER_BROW_RAISER_R_ANDROID = 23,
XR_FACE_PARAMETER_INDICES_JAW_DROP_ANDROID = 24,
XR_FACE_PARAMETER_INDICES_JAW_SIDEWAYS_LEFT_ANDROID = 25,
XR_FACE_PARAMETER_INDICES_JAW_SIDEWAYS_RIGHT_ANDROID = 26,
XR_FACE_PARAMETER_INDICES_JAW_THRUST_ANDROID = 27,
XR_FACE_PARAMETER_INDICES_LID_TIGHTENER_L_ANDROID = 28,
XR_FACE_PARAMETER_INDICES_LID_TIGHTENER_R_ANDROID = 29,
XR_FACE_PARAMETER_INDICES_LIP_CORNER_DEPRESSOR_L_ANDROID = 30,
XR_FACE_PARAMETER_INDICES_LIP_CORNER_DEPRESSOR_R_ANDROID = 31,
XR_FACE_PARAMETER_INDICES_LIP_CORNER_PULLER_L_ANDROID = 32,
XR_FACE_PARAMETER_INDICES_LIP_CORNER_PULLER_R_ANDROID = 33,
XR_FACE_PARAMETER_INDICES_LIP_FUNNELER_LB_ANDROID = 34,
XR_FACE_PARAMETER_INDICES_LIP_FUNNELER_LT_ANDROID = 35,
XR_FACE_PARAMETER_INDICES_LIP_FUNNELER_RB_ANDROID = 36,
XR_FACE_PARAMETER_INDICES_LIP_FUNNELER_RT_ANDROID = 37,
XR_FACE_PARAMETER_INDICES_LIP_PRESSOR_L_ANDROID = 38,
XR_FACE_PARAMETER_INDICES_LIP_PRESSOR_R_ANDROID = 39,
XR_FACE_PARAMETER_INDICES_LIP_PUCKER_L_ANDROID = 40,
XR_FACE_PARAMETER_INDICES_LIP_PUCKER_R_ANDROID = 41,
XR_FACE_PARAMETER_INDICES_LIP_STRETCHER_L_ANDROID = 42,
XR_FACE_PARAMETER_INDICES_LIP_STRETCHER_R_ANDROID = 43,
XR_FACE_PARAMETER_INDICES_LIP_SUCK_LB_ANDROID = 44,
XR_FACE_PARAMETER_INDICES_LIP_SUCK_LT_ANDROID = 45,
XR_FACE_PARAMETER_INDICES_LIP_SUCK_RB_ANDROID = 46,
XR_FACE_PARAMETER_INDICES_LIP_SUCK_RT_ANDROID = 47,
XR_FACE_PARAMETER_INDICES_LIP_TIGHTENER_L_ANDROID = 48,
XR_FACE_PARAMETER_INDICES_LIP_TIGHTENER_R_ANDROID = 49,
XR_FACE_PARAMETER_INDICES_LIPS_TOWARD_ANDROID = 50,
XR_FACE_PARAMETER_INDICES_LOWER_LIP_DEPRESSOR_L_ANDROID = 51,
XR_FACE_PARAMETER_INDICES_LOWER_LIP_DEPRESSOR_R_ANDROID = 52,
XR_FACE_PARAMETER_INDICES_MOUTH_LEFT_ANDROID = 53,
XR_FACE_PARAMETER_INDICES_MOUTH_RIGHT_ANDROID = 54,
XR_FACE_PARAMETER_INDICES_NOSE_WRINKLER_L_ANDROID = 55,
XR_FACE_PARAMETER_INDICES_NOSE_WRINKLER_R_ANDROID = 56,
XR_FACE_PARAMETER_INDICES_OUTER_BROW_RAISER_L_ANDROID = 57,
XR_FACE_PARAMETER_INDICES_OUTER_BROW_RAISER_R_ANDROID = 58,
XR_FACE_PARAMETER_INDICES_UPPER_LID_RAISER_L_ANDROID = 59,
XR_FACE_PARAMETER_INDICES_UPPER_LID_RAISER_R_ANDROID = 60,
XR_FACE_PARAMETER_INDICES_UPPER_LIP_RAISER_L_ANDROID = 61,
XR_FACE_PARAMETER_INDICES_UPPER_LIP_RAISER_R_ANDROID = 62,
XR_FACE_PARAMETER_INDICES_TONGUE_OUT_ANDROID = 63,
XR_FACE_PARAMETER_INDICES_TONGUE_LEFT_ANDROID = 64,
XR_FACE_PARAMETER_INDICES_TONGUE_RIGHT_ANDROID = 65,
XR_FACE_PARAMETER_INDICES_TONGUE_UP_ANDROID = 66,
XR_FACE_PARAMETER_INDICES_TONGUE_DOWN_ANDROID = 67,
XR_FACE_PARAMETER_INDICES_MAX_ENUM_ANDROID = 0x7FFFFFFF
} XrFaceParameterIndicesANDROID;
| Enum | Description |
|---|---|
| XR_FACE_PARAMETER_INDICES_BROW_LOWERER_L_ANDROID | The left brow lowerer blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_BROW_LOWERER_R_ANDROID | The right brow lowerer blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_CHEEK_PUFF_L_ANDROID | The left cheek puff blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_CHEEK_PUFF_R_ANDROID | The right cheek puff blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_CHEEK_RAISER_L_ANDROID | The left cheek raiser blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_CHEEK_RAISER_R_ANDROID | The right cheek raiser blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_CHEEK_SUCK_L_ANDROID | The left cheek suck blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_CHEEK_SUCK_R_ANDROID | The right cheek suck blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_CHIN_RAISER_B_ANDROID | The bottom chin raiser blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_CHIN_RAISER_T_ANDROID | The top chin raiser blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_DIMPLER_L_ANDROID | The left dimpler blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_DIMPLER_R_ANDROID | The right dimpler blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_EYES_CLOSED_L_ANDROID | The left eyes closed blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_EYES_CLOSED_R_ANDROID | The right eyes closed blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_EYES_LOOK_DOWN_L_ANDROID | The left eyes look down blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_EYES_LOOK_DOWN_R_ANDROID | The right eyes look down blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_EYES_LOOK_LEFT_L_ANDROID | The left eyes look left blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_EYES_LOOK_LEFT_R_ANDROID | The right eyes look left blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_EYES_LOOK_RIGHT_L_ANDROID | The left eyes look right blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_EYES_LOOK_RIGHT_R_ANDROID | The right eyes look right blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_EYES_LOOK_UP_L_ANDROID | The left eyes look up blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_EYES_LOOK_UP_R_ANDROID | The right eyes look up blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_INNER_BROW_RAISER_L_ANDROID | The left inner brow raiser blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_INNER_BROW_RAISER_R_ANDROID | The right inner brow raiser blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_JAW_DROP_ANDROID | The jaw drop blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_JAW_SIDEWAYS_LEFT_ANDROID | The left jaw sideways blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_JAW_SIDEWAYS_RIGHT_ANDROID | The right jaw sideways blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_JAW_THRUST_ANDROID | The jaw thrust blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LID_TIGHTENER_L_ANDROID | The left lid tightener blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LID_TIGHTENER_R_ANDROID | The right lid tightener blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_CORNER_DEPRESSOR_L_ANDROID | The left lip corner depressor blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_CORNER_DEPRESSOR_R_ANDROID | The right lip corner depressor blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_CORNER_PULLER_L_ANDROID | The left lip corner puller blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_CORNER_PULLER_R_ANDROID | The right lip corner puller blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_FUNNELER_LB_ANDROID | The left bottom lip funneler blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_FUNNELER_LT_ANDROID | The left top lip funneler blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_FUNNELER_RB_ANDROID | The right bottom lip funneler blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_FUNNELER_RT_ANDROID | The right top lip funneler blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_PRESSOR_L_ANDROID | The left lip pressor blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_PRESSOR_R_ANDROID | The right lip pressor blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_PUCKER_L_ANDROID | The left lip pucker blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_PUCKER_R_ANDROID | The right lip pucker blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_STRETCHER_L_ANDROID | The left lip stretcher blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_STRETCHER_R_ANDROID | The right lip stretcher blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_SUCK_LB_ANDROID | The left bottom lip suck blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_SUCK_LT_ANDROID | The left top lip suck blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_SUCK_RB_ANDROID | The right bottom lip suck blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_SUCK_RT_ANDROID | The right top lip suck blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_TIGHTENER_L_ANDROID | The left lip tightener blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIP_TIGHTENER_R_ANDROID | The right lip tightener blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LIPS_TOWARD_ANDROID | The lips toward blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LOWER_LIP_DEPRESSOR_L_ANDROID | The left lower lip depressor blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_LOWER_LIP_DEPRESSOR_R_ANDROID | The right lower lip depressor blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_MOUTH_LEFT_ANDROID | The mouth move left blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_MOUTH_RIGHT_ANDROID | The mouth move right blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_NOSE_WRINKLER_L_ANDROID | The left nose wrinkler blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_NOSE_WRINKLER_R_ANDROID | The right nose wrinkler blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_OUTER_BROW_RAISER_L_ANDROID | The left outer brow raiser blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_OUTER_BROW_RAISER_R_ANDROID | The right outer brow raiser blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_UPPER_LID_RAISER_L_ANDROID | The left upper lid raiser blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_UPPER_LID_RAISER_R_ANDROID | The right upper lid raiser blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_UPPER_LIP_RAISER_L_ANDROID | The left upper lip raiser blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_UPPER_LIP_RAISER_R_ANDROID | The right upper lip raiser blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_TONGUE_OUT_ANDROID | The tongue out blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_TONGUE_LEFT_ANDROID | The tongue left blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_TONGUE_RIGHT_ANDROID | The tongue right blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_TONGUE_UP_ANDROID | The tongue up blendshape parameter. |
| XR_FACE_PARAMETER_INDICES_TONGUE_DOWN_ANDROID | The tongue down blendshape parameter. |
12.57.8. Example code for face tracking
The following example code demonstrates how to get all weights for facial expression blend shapes.
XrInstance instance;  // previously initialized
XrSystemId systemId;  // previously initialized
XrSession session;    // previously initialized, e.g. created at app startup.

// The function pointers are previously initialized using xrGetInstanceProcAddr.
PFN_xrCreateFaceTrackerANDROID xrCreateFaceTrackerANDROID;  // previously initialized
PFN_xrDestroyFaceTrackerANDROID xrDestroyFaceTrackerANDROID;  // previously initialized
PFN_xrGetFaceStateANDROID xrGetFaceStateANDROID;  // previously initialized
PFN_xrGetFaceCalibrationStateANDROID xrGetFaceCalibrationStateANDROID;  // previously initialized

// Inspect system capability
XrSystemProperties properties{XR_TYPE_SYSTEM_PROPERTIES};
XrSystemFaceTrackingPropertiesANDROID faceTrackingProperties{
    XR_TYPE_SYSTEM_FACE_TRACKING_PROPERTIES_ANDROID};
properties.next = &faceTrackingProperties;
CHK_XR(xrGetSystemProperties(instance, systemId, &properties));
if (!faceTrackingProperties.supportsFaceTracking) {
  // Face tracking is not supported.
  return;
}

XrFaceTrackerANDROID faceTracker;
XrFaceTrackerCreateInfoANDROID createInfo{
    .type = XR_TYPE_FACE_TRACKER_CREATE_INFO_ANDROID,
    .next = nullptr};
CHK_XR(xrCreateFaceTrackerANDROID(session, &createInfo, &faceTracker));

// If the system supports face calibration:
XrBool32 isCalibrated;
CHK_XR(xrGetFaceCalibrationStateANDROID(faceTracker, &isCalibrated));
if (!isCalibrated) {
  // Redirect the user to the system calibration setting.
}

XrFaceStateANDROID faceState;
float faceExpressionParameters[XR_FACE_PARAMETER_COUNT_ANDROID];
faceState.type = XR_TYPE_FACE_STATE_ANDROID;
faceState.next = nullptr;
faceState.parametersCapacityInput = XR_FACE_PARAMETER_COUNT_ANDROID;
faceState.parameters = faceExpressionParameters;
faceState.regionConfidencesCapacityInput = 0;  // not requesting region confidences here

while (1) {
  // ...
  // For every frame in the frame loop
  // ...
  XrFrameState frameState;  // previously returned from xrWaitFrame
  XrFaceStateGetInfoANDROID faceGetInfo{
      .type = XR_TYPE_FACE_STATE_GET_INFO_ANDROID,
      .next = nullptr,
      .time = frameState.predictedDisplayTime,
  };
  CHK_XR(xrGetFaceStateANDROID(faceTracker, &faceGetInfo, &faceState));
  if (faceState.isValid) {
    for (uint32_t i = 0; i < XR_FACE_PARAMETER_COUNT_ANDROID; ++i) {
      // faceState.parameters[i] contains the weight of a specific blend shape.
    }
  }
}

// After usage
CHK_XR(xrDestroyFaceTrackerANDROID(faceTracker));
12.57.13. New Enum Constants
- XR_ANDROID_FACE_TRACKING_EXTENSION_NAME
- XR_ANDROID_face_tracking_SPEC_VERSION
- XR_FACE_PARAMETER_COUNT_ANDROID
- XR_FACE_REGION_CONFIDENCE_COUNT_ANDROID

Extending XrObjectType:

- XR_OBJECT_TYPE_FACE_TRACKER_ANDROID

Extending XrResult:

- XR_ERROR_SERVICE_NOT_READY_ANDROID

Extending XrStructureType:

- XR_TYPE_FACE_STATE_ANDROID
- XR_TYPE_FACE_STATE_GET_INFO_ANDROID
- XR_TYPE_FACE_TRACKER_CREATE_INFO_ANDROID
- XR_TYPE_SYSTEM_FACE_TRACKING_PROPERTIES_ANDROID
12.58. XR_ANDROID_passthrough_camera_state
- Name String: XR_ANDROID_passthrough_camera_state
- Extension Type: Instance extension
- Registered Extension Number: 461
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-07-23
- Contributors: Spencer Quin, Google; Jared Finder, Google; Kevin Moule, Google; Nihav Jain, Google
12.58.1. Overview
Passthrough cameras may take time to start up and are not immediately available. This extension lets applications know the current state of the passthrough camera.
12.58.2. Inspect system capability
The XrSystemPassthroughCameraStatePropertiesANDROID structure is defined as:
// Provided by XR_ANDROID_passthrough_camera_state
typedef struct XrSystemPassthroughCameraStatePropertiesANDROID {
XrStructureType type;
void* next;
XrBool32 supportsPassthroughCameraState;
} XrSystemPassthroughCameraStatePropertiesANDROID;
Applications inspect whether the system is capable of querying the passthrough camera state by extending the XrSystemProperties with an XrSystemPassthroughCameraStatePropertiesANDROID structure when calling xrGetSystemProperties.
If and only if a runtime returns XR_FALSE for
supportsPassthroughCameraState, the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from
xrGetPassthroughCameraStateANDROID.
12.58.3. Get the current passthrough camera state
// Provided by XR_ANDROID_passthrough_camera_state
XrResult xrGetPassthroughCameraStateANDROID(
XrSession session,
const XrPassthroughCameraStateGetInfoANDROID* getInfo,
XrPassthroughCameraStateANDROID* cameraStateOutput);
xrGetPassthroughCameraStateANDROID retrieves the current state of the passthrough camera.
The XrPassthroughCameraStateGetInfoANDROID structure is an input struct which specifies the camera state request parameters.
// Provided by XR_ANDROID_passthrough_camera_state
typedef struct XrPassthroughCameraStateGetInfoANDROID {
XrStructureType type;
const void* next;
} XrPassthroughCameraStateGetInfoANDROID;
The XrPassthroughCameraStateANDROID enumeration identifies the different possible states of the passthrough camera.
// Provided by XR_ANDROID_passthrough_camera_state
typedef enum XrPassthroughCameraStateANDROID {
XR_PASSTHROUGH_CAMERA_STATE_DISABLED_ANDROID = 0,
XR_PASSTHROUGH_CAMERA_STATE_INITIALIZING_ANDROID = 1,
XR_PASSTHROUGH_CAMERA_STATE_READY_ANDROID = 2,
XR_PASSTHROUGH_CAMERA_STATE_ERROR_ANDROID = 3,
XR_PASSTHROUGH_CAMERA_STATE_MAX_ENUM_ANDROID = 0x7FFFFFFF
} XrPassthroughCameraStateANDROID;
| Enum | Description |
|---|---|
| XR_PASSTHROUGH_CAMERA_STATE_DISABLED_ANDROID | The camera has been disabled by an app, the system or the user. |
| XR_PASSTHROUGH_CAMERA_STATE_INITIALIZING_ANDROID | The camera is still coming online and not yet ready to use. The runtime may render a black background where the passthrough video is supposed to appear. |
| XR_PASSTHROUGH_CAMERA_STATE_READY_ANDROID | The camera is ready to use. |
| XR_PASSTHROUGH_CAMERA_STATE_ERROR_ANDROID | The camera is in an unrecoverable error state. |
12.58.7. New Enum Constants
- XR_ANDROID_PASSTHROUGH_CAMERA_STATE_EXTENSION_NAME
- XR_ANDROID_passthrough_camera_state_SPEC_VERSION

Extending XrStructureType:

- XR_TYPE_PASSTHROUGH_CAMERA_STATE_GET_INFO_ANDROID
- XR_TYPE_SYSTEM_PASSTHROUGH_CAMERA_STATE_PROPERTIES_ANDROID
12.59. XR_ANDROID_raycast
- Name String: XR_ANDROID_raycast
- Extension Type: Instance extension
- Registered Extension Number: 464
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-07-23
- IP Status: No known IP claims.
- Contributors: Spencer Quin, Google; Nihav Jain, Google; John Pursey, Google; Jared Finder, Google; Levana Chen, Google; Kenny Vercaemer, Google
12.59.1. Overview
This extension allows the application to perform raycasts against trackables in the environment.
Raycasts are useful for detecting objects in the environment that are in the trajectory of a ray from a given origin. For example:
-
To determine where a floating object will fall when dropped, by using a vertical raycast.
-
To determine where a user is looking, by using a forward-facing raycast.
|
Permissions
Android applications must have the
android.permission.SCENE_UNDERSTANDING_COARSE permission listed in their
manifest. The permission is considered a dangerous permission (protection level: dangerous), so the application must request it at runtime in order to use these functions. |
12.59.2. Query supported raycast capabilities
The xrEnumerateRaycastSupportedTrackableTypesANDROID function is defined as:
// Provided by XR_ANDROID_raycast
XrResult xrEnumerateRaycastSupportedTrackableTypesANDROID(
XrInstance instance,
XrSystemId systemId,
uint32_t trackableTypeCapacityInput,
uint32_t* trackableTypeCountOutput,
XrTrackableTypeANDROID* trackableTypes);
xrEnumerateRaycastSupportedTrackableTypesANDROID enumerates which trackable types the runtime supports raycasting for.
12.59.3. Performing a raycast
The xrRaycastANDROID function is defined as:
// Provided by XR_ANDROID_raycast
XrResult xrRaycastANDROID(
XrSession session,
const XrRaycastInfoANDROID* rayInfo,
XrRaycastHitResultsANDROID* results);
To perform raycasts, the application calls xrRaycastANDROID.
-
If a raycast intersects more trackables than XrRaycastInfoANDROID::maxResults, the runtime must return the hit results that are closest to the XrRaycastInfoANDROID::origin of the ray.
-
If a raycast intersects a trackable of type XR_TRACKABLE_TYPE_PLANE_ANDROID that is subsumed by another plane, the runtime must return the hit result for the subsuming plane only.
-
The runtime must return the hit results in closest-to-farthest order from XrRaycastInfoANDROID::origin along the XrRaycastInfoANDROID::trajectory vector.
-
The runtime must return XR_ERROR_TRACKABLE_TYPE_NOT_SUPPORTED_ANDROID if the trackable types corresponding to the XrTrackableTrackerANDROID handles in XrRaycastInfoANDROID::trackers are not enumerated by xrEnumerateRaycastSupportedTrackableTypesANDROID.
The XrRaycastInfoANDROID structure is defined as:
// Provided by XR_ANDROID_raycast
typedef struct XrRaycastInfoANDROID {
XrStructureType type;
const void* next;
uint32_t maxResults;
uint32_t trackerCount;
const XrTrackableTrackerANDROID* trackers;
XrVector3f origin;
XrVector3f trajectory;
XrSpace space;
XrTime time;
} XrRaycastInfoANDROID;
The XrRaycastInfoANDROID structure describes the ray to cast.
-
The XrRaycastInfoANDROID::trackers array may contain trackers of different types.
-
The XrRaycastInfoANDROID::trackers array must not contain multiple trackers of the same type; otherwise the runtime must return XR_ERROR_VALIDATION_FAILURE.
The XrRaycastHitResultsANDROID structure is defined as:
// Provided by XR_ANDROID_raycast
typedef struct XrRaycastHitResultsANDROID {
XrStructureType type;
void* next;
uint32_t resultsCapacityInput;
uint32_t resultsCountOutput;
XrRaycastHitResultANDROID* results;
} XrRaycastHitResultsANDROID;
The XrRaycastHitResultsANDROID structure contains the array of hits of a raycast.
The runtime must set resultsCountOutput to be less than or equal to
XrRaycastInfoANDROID::maxResults.
The XrRaycastHitResultANDROID structure is defined as:
// Provided by XR_ANDROID_raycast
typedef struct XrRaycastHitResultANDROID {
XrTrackableTypeANDROID type;
XrTrackableANDROID trackable;
XrPosef pose;
} XrRaycastHitResultANDROID;
The XrRaycastHitResultANDROID structure contains the details of a raycast hit.
The XrRaycastHitResultANDROID::pose for a plane hit must be
such that X+ is perpendicular to the cast ray and parallel to the physical
surface centered around the hit, Y+ points along the estimated surface
normal, and Z+ points roughly toward the ray origin.
The XrRaycastHitResultANDROID::pose for a depth hit is analogous
to a plane hit, using an estimated surface normal: X+ is perpendicular to
the cast ray and parallel to the physical surface centered around the hit,
Y+ points along the estimated surface normal, and Z+ points roughly toward
the ray origin.
| Type of trackable hit | Description |
|---|---|
| XR_TRACKABLE_TYPE_PLANE_ANDROID | Hits horizontal and/or vertical surfaces to determine a point’s correct depth and orientation. |
| XR_TRACKABLE_TYPE_DEPTH_ANDROID | Uses depth information from the entire scene to determine a point’s correct depth and orientation. |
Other extensions may implement raycasting for other types of trackables.
12.59.4. Example code for raycasting
The following example code demonstrates how to perform raycasts.
XrSession session; // previously initialized
XrTime updateTime; // previously initialized
XrSpace appSpace; // space created for XR_REFERENCE_SPACE_TYPE_LOCAL.
XrPosef headPose; // latest pose of the HMD.
XrTrackableTrackerANDROID planeTracker; // tracker for plane trackables.
XrTrackableTrackerANDROID depthTracker; // tracker for depth trackables.
PFN_xrRaycastANDROID xrRaycastANDROID; // previously initialized
XrVector3f CalculateForwardDirectionFromHeadPose(XrPosef); // defined elsewhere
// Perform a raycast against multiple trackers.
constexpr uint32_t NUM_DESIRED_RESULTS = 2;
XrTrackableTrackerANDROID trackers[] = {
planeTracker,
depthTracker,
};
XrRaycastInfoANDROID rayInfo = {XR_TYPE_RAYCAST_INFO_ANDROID};
rayInfo.trackerCount = sizeof(trackers) / sizeof(XrTrackableTrackerANDROID);
rayInfo.trackers = trackers;
rayInfo.origin = headPose.position;
rayInfo.trajectory = CalculateForwardDirectionFromHeadPose(headPose);
rayInfo.space = appSpace;
rayInfo.time = updateTime;
rayInfo.maxResults = NUM_DESIRED_RESULTS;
uint32_t totalHitResults = 0;
XrRaycastHitResultANDROID hitResult[NUM_DESIRED_RESULTS];
XrRaycastHitResultsANDROID hitResults = {XR_TYPE_RAYCAST_HIT_RESULTS_ANDROID};
hitResults.resultsCapacityInput = NUM_DESIRED_RESULTS;
hitResults.results = hitResult;
XrResult result = xrRaycastANDROID(session, &rayInfo, &hitResults);
if (result == XR_SUCCESS && hitResults.resultsCountOutput >= 1) {
  // Hit results are returned in closest-to-farthest order in
  // hitResults.results[0] .. hitResults.results[hitResults.resultsCountOutput - 1]
}
12.59.7. New Enum Constants
- XR_ANDROID_RAYCAST_EXTENSION_NAME
- XR_ANDROID_raycast_SPEC_VERSION

Extending XrStructureType:

- XR_TYPE_RAYCAST_HIT_RESULTS_ANDROID
- XR_TYPE_RAYCAST_INFO_ANDROID

Extending XrTrackableTypeANDROID:

- XR_TRACKABLE_TYPE_DEPTH_ANDROID
12.60. XR_ANDROID_trackables
- Name String: XR_ANDROID_trackables
- Extension Type: Instance extension
- Registered Extension Number: 456
- Revision: 2
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-07-21
- IP Status: No known IP claims.
- Contributors: Spencer Quin, Google; Nihav Jain, Google; John Pursey, Google; Jared Finder, Google; Levana Chen, Google; Kenny Vercaemer, Google
12.60.1. Overview
This extension allows the application to access trackables from the physical environment, and create anchors attached to a trackable. It also allows applications to create world-locked spatial anchors.
A trackable is something that is tracked in the physical environment:
-
a plane (e.g. wall, floor, ceiling, table)
-
an object (e.g. keyboard, mouse, laptop)
See also XrTrackableTypeANDROID.
This extension defines plane trackables.
Additional trackable types are added by other extensions.
For example, XR_ANDROID_trackables_object adds object trackables, and
XR_ANDROID_raycast adds depth trackables that allow raycasting to
arbitrary points in the environment.
|
Permissions
Android applications must have the android.permission.SCENE_UNDERSTANDING_COARSE permission listed in their manifest, as this extension exposes the geometry of the environment. The android.permission.SCENE_UNDERSTANDING_COARSE permission is considered a dangerous permission (protection level: dangerous), so the application must request it at runtime in order to use these functions. |
12.60.2. Inspect system capability
The XrSystemTrackablesPropertiesANDROID structure is defined as:
// Provided by XR_ANDROID_trackables
typedef struct XrSystemTrackablesPropertiesANDROID {
XrStructureType type;
const void* next;
XrBool32 supportsAnchor;
uint32_t maxAnchors;
} XrSystemTrackablesPropertiesANDROID;
To inspect whether the system is capable of creating spatial anchors via
xrCreateAnchorSpaceANDROID, an application extends the
XrSystemProperties with an XrSystemTrackablesPropertiesANDROID
structure when calling xrGetSystemProperties.
The runtime must return XR_ERROR_FEATURE_UNSUPPORTED for spatial
anchor creation if and only if supportsAnchor is XR_FALSE.
If a runtime supports anchors, it must support maxAnchors active
anchors at any given time.
To determine which trackable types are supported by the runtime for tracking and for anchor creation respectively an application calls xrEnumerateSupportedTrackableTypesANDROID and xrEnumerateSupportedAnchorTrackableTypesANDROID.
The xrEnumerateSupportedTrackableTypesANDROID function is defined as:
// Provided by XR_ANDROID_trackables
XrResult xrEnumerateSupportedTrackableTypesANDROID(
XrInstance instance,
XrSystemId systemId,
uint32_t trackableTypeCapacityInput,
uint32_t* trackableTypeCountOutput,
XrTrackableTypeANDROID* trackableTypes);
xrEnumerateSupportedTrackableTypesANDROID enumerates the trackable types supported by the runtime for xrCreateTrackableTrackerANDROID.
If and only if a trackable type X is not returned by
xrEnumerateSupportedTrackableTypesANDROID, then the runtime must
return XR_ERROR_FEATURE_UNSUPPORTED when calling
xrCreateTrackableTrackerANDROID with X as the trackable type.
The xrEnumerateSupportedAnchorTrackableTypesANDROID function is defined as:
// Provided by XR_ANDROID_trackables
XrResult xrEnumerateSupportedAnchorTrackableTypesANDROID(
XrInstance instance,
XrSystemId systemId,
uint32_t trackableTypeCapacityInput,
uint32_t* trackableTypeCountOutput,
XrTrackableTypeANDROID* trackableTypes);
xrEnumerateSupportedAnchorTrackableTypesANDROID enumerates the trackable types supported by the runtime for anchor creation.
If and only if a trackable type X is not returned by
xrEnumerateSupportedAnchorTrackableTypesANDROID, then the runtime
must return XR_ERROR_FEATURE_UNSUPPORTED when calling
xrCreateAnchorSpaceANDROID with X as the trackable type.
12.60.3. Creating a trackable tracker
An XrTrackableTrackerANDROID is a handle that represents the resources required to discover and update trackables of a given XrTrackableTypeANDROID in the environment.
// Provided by XR_ANDROID_trackables
XR_DEFINE_HANDLE(XrTrackableTrackerANDROID)
The xrCreateTrackableTrackerANDROID function is defined as:
// Provided by XR_ANDROID_trackables
XrResult xrCreateTrackableTrackerANDROID(
XrSession session,
const XrTrackableTrackerCreateInfoANDROID* createInfo,
XrTrackableTrackerANDROID* trackableTracker);
The application creates trackable trackers with xrCreateTrackableTrackerANDROID.
-
The runtime must return XR_ERROR_FEATURE_UNSUPPORTED if the system does not support trackables of the specified XrTrackableTrackerCreateInfoANDROID::trackableType, as reported by xrEnumerateSupportedTrackableTypesANDROID.
-
The runtime must return XR_ERROR_PERMISSION_INSUFFICIENT if the required permissions have not been granted to the calling app.
The XrTrackableTrackerANDROID handle must be eventually freed via the xrDestroyTrackableTrackerANDROID function or by destroying the parent XrSession handle.
The runtime may use the creation of an XrTrackableTrackerANDROID to
prepare itself for discovering trackables of the selected
XrTrackableTrackerCreateInfoANDROID::trackableType.
For example, the runtime may only begin its plane tracking system when a
trackable tracker handle for XR_TRACKABLE_TYPE_PLANE_ANDROID is
created by the application.
The XrTrackableTrackerCreateInfoANDROID structure is defined as:
// Provided by XR_ANDROID_trackables
typedef struct XrTrackableTrackerCreateInfoANDROID {
XrStructureType type;
const void* next;
XrTrackableTypeANDROID trackableType;
} XrTrackableTrackerCreateInfoANDROID;
The XrTrackableTrackerCreateInfoANDROID structure is passed to xrCreateTrackableTrackerANDROID and provides creation options for the XrTrackableTrackerANDROID.
Extensions may define structures to be provided in the
XrTrackableTrackerCreateInfoANDROID::next chain to allow
additional configuration of the trackable trackers.
The XrTrackableTypeANDROID enum is defined as:
// Provided by XR_ANDROID_trackables
typedef enum XrTrackableTypeANDROID {
XR_TRACKABLE_TYPE_NOT_VALID_ANDROID = 0,
XR_TRACKABLE_TYPE_PLANE_ANDROID = 1,
// Provided by XR_ANDROID_raycast
XR_TRACKABLE_TYPE_DEPTH_ANDROID = 1000463000,
// Provided by XR_ANDROID_trackables_object
XR_TRACKABLE_TYPE_OBJECT_ANDROID = 1000466000,
// Provided by XR_ANDROID_trackables_marker
XR_TRACKABLE_TYPE_MARKER_ANDROID = 1000707000,
XR_TRACKABLE_TYPE_MAX_ENUM_ANDROID = 0x7FFFFFFF
} XrTrackableTypeANDROID;
| Enum | Description |
|---|---|
| XR_TRACKABLE_TYPE_NOT_VALID_ANDROID | Indicates that the trackable is not valid. |
| XR_TRACKABLE_TYPE_PLANE_ANDROID | Indicates that the trackable is a plane. |
| XR_TRACKABLE_TYPE_DEPTH_ANDROID | Indicates that the trackable is the perception depth buffer. (Added by the XR_ANDROID_raycast extension.) |
| XR_TRACKABLE_TYPE_OBJECT_ANDROID | Indicates that the trackable is an object. (Added by the XR_ANDROID_trackables_object extension.) |
| XR_TRACKABLE_TYPE_MARKER_ANDROID | Indicates that the trackable is a marker. (Added by the XR_ANDROID_trackables_marker extension.) |
The xrDestroyTrackableTrackerANDROID function is defined as:
// Provided by XR_ANDROID_trackables
XrResult xrDestroyTrackableTrackerANDROID(
XrTrackableTrackerANDROID trackableTracker);
The xrDestroyTrackableTrackerANDROID function destroys the trackable tracker. After this call the runtime may free all related memory and resources.
If there is no other valid XrTrackableTrackerANDROID that was created with the same XrTrackableTypeANDROID, the runtime may disable the tracking services required for that trackable type to save system resources.
12.60.4. Get all trackables
The XrTrackableANDROID atom is defined as:
// Provided by XR_ANDROID_trackables
XR_DEFINE_ATOM(XrTrackableANDROID)
XrTrackableANDROID is used to represent a single trackable and is
valid only within the lifecycle of its associated
XrTrackableTrackerANDROID.
The runtime must not reuse the same XrTrackableANDROID for
different trackables within the same XrTrackableTrackerANDROID.
The xrGetAllTrackablesANDROID function is defined as:
// Provided by XR_ANDROID_trackables
XrResult xrGetAllTrackablesANDROID(
XrTrackableTrackerANDROID trackableTracker,
uint32_t trackableCapacityInput,
uint32_t* trackableCountOutput,
XrTrackableANDROID* trackables);
xrGetAllTrackablesANDROID fills an array of
XrTrackableANDROID representing the trackables found in the
environment.
The XrTrackableTypeANDROID of the returned trackables must
match the XrTrackableTypeANDROID of the trackableTracker.
12.60.5. Get trackable plane
The xrGetTrackablePlaneANDROID function is defined as:
// Provided by XR_ANDROID_trackables
XrResult xrGetTrackablePlaneANDROID(
XrTrackableTrackerANDROID trackableTracker,
const XrTrackableGetInfoANDROID* getInfo,
XrTrackablePlaneANDROID* planeOutput);
The xrGetTrackablePlaneANDROID function returns details about the trackable plane, such as its geometry, orientation, and tracking state, at a given point in time.
The plane information returned is a best effort approximation of its state
at the given XrTrackableGetInfoANDROID::time, relative to the
XrTrackableGetInfoANDROID::baseSpace.
The runtime must return XR_ERROR_MISMATCHING_TRACKABLE_TYPE_ANDROID
if the trackable type of the
XrTrackableGetInfoANDROID::trackable is not
XR_TRACKABLE_TYPE_PLANE_ANDROID.
The XrTrackableGetInfoANDROID structure is defined as:
// Provided by XR_ANDROID_trackables
typedef struct XrTrackableGetInfoANDROID {
XrStructureType type;
const void* next;
XrTrackableANDROID trackable;
XrSpace baseSpace;
XrTime time;
} XrTrackableGetInfoANDROID;
The XrTrackableGetInfoANDROID structure provides query options when
passed to xrGetTrackablePlaneANDROID.
The trackable must correspond to the trackable used in
xrGetTrackablePlaneANDROID.
The XrTrackablePlaneANDROID structure is defined as:
// Provided by XR_ANDROID_trackables
typedef struct XrTrackablePlaneANDROID {
XrStructureType type;
void* next;
XrTrackingStateANDROID trackingState;
XrPosef centerPose;
XrExtent2Df extents;
XrPlaneTypeANDROID planeType;
XrPlaneLabelANDROID planeLabel;
XrTrackableANDROID subsumedByPlane;
XrTime lastUpdatedTime;
uint32_t vertexCapacityInput;
uint32_t* vertexCountOutput;
XrVector2f* vertices;
} XrTrackablePlaneANDROID;
When the runtime has acquired enough environment information to detect that
two tracked planes are actually the same plane, it must set the
XrTrackablePlaneANDROID::subsumedByPlane of one of the planes to
the handle of the other.
When this happens, the plane information returned in both of the associated
planes must be identical.
The application should stop querying for information about planes that have
been reported as subsumed.
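The subsumption check above can be sketched as follows. Note that PlaneRecord and ActivePlanes are illustrative stand-ins (not part of the extension), and the XrTrackableANDROID alias below is a local substitute for the real atom from <openxr/openxr.h>, included only to keep the sketch self-contained:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Local stand-in for the OpenXR atom; real code includes <openxr/openxr.h>.
using XrTrackableANDROID = uint64_t;
constexpr XrTrackableANDROID XR_NULL_TRACKABLE_ANDROID = 0;

// Hypothetical per-plane record pairing a trackable with the
// subsumedByPlane value reported in XrTrackablePlaneANDROID.
struct PlaneRecord {
    XrTrackableANDROID trackable;
    XrTrackableANDROID subsumedByPlane;
};

// Keep only planes that have not been subsumed; a subsumed plane reports
// data identical to its subsuming plane, so querying it is redundant.
std::vector<XrTrackableANDROID> ActivePlanes(const std::vector<PlaneRecord>& planes) {
    std::vector<XrTrackableANDROID> active;
    for (const PlaneRecord& p : planes) {
        if (p.subsumedByPlane == XR_NULL_TRACKABLE_ANDROID) {
            active.push_back(p.trackable);
        }
    }
    return active;
}
```

An application would typically run such a filter once per update, after refreshing plane data, so that subsumed planes drop out of further queries.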
The XrTrackingStateANDROID enum describes the tracking state of an
XrTrackableANDROID.
// Provided by XR_ANDROID_trackables
typedef enum XrTrackingStateANDROID {
XR_TRACKING_STATE_PAUSED_ANDROID = 0,
XR_TRACKING_STATE_STOPPED_ANDROID = 1,
XR_TRACKING_STATE_TRACKING_ANDROID = 2,
XR_TRACKING_STATE_MAX_ENUM_ANDROID = 0x7FFFFFFF
} XrTrackingStateANDROID;
| Enum | Description |
|---|---|
| XR_TRACKING_STATE_PAUSED_ANDROID | Indicates that the trackable or anchor tracking is paused but may be resumed in the future. |
| XR_TRACKING_STATE_STOPPED_ANDROID | Tracking has stopped on this trackable and will never be resumed. |
| XR_TRACKING_STATE_TRACKING_ANDROID | The object is currently tracked and its pose is current. |
The XrPlaneTypeANDROID enum is the type of an
XrTrackableANDROID plane.
// Provided by XR_ANDROID_trackables
typedef enum XrPlaneTypeANDROID {
XR_PLANE_TYPE_HORIZONTAL_DOWNWARD_FACING_ANDROID = 0,
XR_PLANE_TYPE_HORIZONTAL_UPWARD_FACING_ANDROID = 1,
XR_PLANE_TYPE_VERTICAL_ANDROID = 2,
XR_PLANE_TYPE_ARBITRARY_ANDROID = 3,
XR_PLANE_TYPE_MAX_ENUM_ANDROID = 0x7FFFFFFF
} XrPlaneTypeANDROID;
| Enum | Description |
|---|---|
| XR_PLANE_TYPE_HORIZONTAL_DOWNWARD_FACING_ANDROID | A horizontal plane facing downward (for example a ceiling). |
| XR_PLANE_TYPE_HORIZONTAL_UPWARD_FACING_ANDROID | A horizontal plane facing upward (for example a floor or tabletop). |
| XR_PLANE_TYPE_VERTICAL_ANDROID | A vertical plane (for example a wall). |
| XR_PLANE_TYPE_ARBITRARY_ANDROID | A plane with an arbitrary orientation. |
The XrPlaneLabelANDROID enum is a label for an
XrTrackableANDROID plane.
// Provided by XR_ANDROID_trackables
typedef enum XrPlaneLabelANDROID {
XR_PLANE_LABEL_UNKNOWN_ANDROID = 0,
XR_PLANE_LABEL_WALL_ANDROID = 1,
XR_PLANE_LABEL_FLOOR_ANDROID = 2,
XR_PLANE_LABEL_CEILING_ANDROID = 3,
XR_PLANE_LABEL_TABLE_ANDROID = 4,
XR_PLANE_LABEL_MAX_ENUM_ANDROID = 0x7FFFFFFF
} XrPlaneLabelANDROID;
| Enum | Description |
|---|---|
| XR_PLANE_LABEL_UNKNOWN_ANDROID | It was not possible to label the plane. |
| XR_PLANE_LABEL_WALL_ANDROID | The plane is a wall. |
| XR_PLANE_LABEL_FLOOR_ANDROID | The plane is a floor. |
| XR_PLANE_LABEL_CEILING_ANDROID | The plane is a ceiling. |
| XR_PLANE_LABEL_TABLE_ANDROID | The plane is a table. |
12.60.6. Create anchor space
The xrCreateAnchorSpaceANDROID function is defined as:
// Provided by XR_ANDROID_trackables
XrResult xrCreateAnchorSpaceANDROID(
XrSession session,
const XrAnchorSpaceCreateInfoANDROID* createInfo,
XrSpace* anchorOutput);
At any point in time, the position and orientation of the anchor are
tracked or untracked together.
This means that the runtime must either set both
XR_SPACE_LOCATION_POSITION_TRACKED_BIT and
XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT or clear both
XR_SPACE_LOCATION_POSITION_TRACKED_BIT and
XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT when the application calls
xrLocateSpace or xrLocateSpaces for anchorOutput.
The application must eventually free the returned XrSpace via xrDestroySpace.
- The runtime must return XR_ERROR_FEATURE_UNSUPPORTED if the runtime does not support anchors.
- The runtime must return XR_ERROR_TRACKABLE_TYPE_NOT_SUPPORTED_ANDROID if the specific anchor attachment is not supported for the type of the passed XrAnchorSpaceCreateInfoANDROID::trackable, as returned by xrEnumerateSupportedAnchorTrackableTypesANDROID.
The XrAnchorSpaceCreateInfoANDROID structure is defined as:
// Provided by XR_ANDROID_trackables
typedef struct XrAnchorSpaceCreateInfoANDROID {
XrStructureType type;
const void* next;
XrSpace space;
XrTime time;
XrPosef pose;
XrTrackableANDROID trackable;
} XrAnchorSpaceCreateInfoANDROID;
Example code for getting all trackables
The following example code demonstrates how to get all trackables of a given type.
XrSession session; // previously initialized
// The function pointers are previously initialized using xrGetInstanceProcAddr.
PFN_xrCreateTrackableTrackerANDROID xrCreateTrackableTrackerANDROID; // previously initialized
PFN_xrGetAllTrackablesANDROID xrGetAllTrackablesANDROID; // previously initialized
PFN_xrDestroyTrackableTrackerANDROID xrDestroyTrackableTrackerANDROID; // previously initialized
XrTrackableTrackerCreateInfoANDROID createInfo{XR_TYPE_TRACKABLE_TRACKER_CREATE_INFO_ANDROID};
createInfo.trackableType = XR_TRACKABLE_TYPE_PLANE_ANDROID;
XrTrackableTrackerANDROID planeTrackableTracker;
XrResult result = xrCreateTrackableTrackerANDROID(
session,
&createInfo,
&planeTrackableTracker);
if (result != XR_SUCCESS) { /* Handle failures. */ }
uint32_t trackableCountOutput = 0;
std::vector<XrTrackableANDROID> allPlaneTrackables;
// Query the number of trackables available.
result = xrGetAllTrackablesANDROID(
planeTrackableTracker,
0,
&trackableCountOutput,
nullptr
);
if (result == XR_SUCCESS) {
allPlaneTrackables.resize(trackableCountOutput, XR_NULL_TRACKABLE_ANDROID);
// Fetch the actual trackable handles in the appropriately resized array.
result = xrGetAllTrackablesANDROID(
planeTrackableTracker,
trackableCountOutput,
&trackableCountOutput,
allPlaneTrackables.data());
if (result == XR_SUCCESS) {
for (XrTrackableANDROID trackable : allPlaneTrackables) {
// You now have all trackables of the specified type.
}
}
}
// Release trackable tracker.
result = xrDestroyTrackableTrackerANDROID(planeTrackableTracker);
Example code for getting trackable plane
The following example code demonstrates how to get a trackable plane from an
existing XrTrackableANDROID, obtained from a hit result
(XR_ANDROID_raycast) or xrGetAllTrackablesANDROID.
XrTrackableTrackerANDROID planeTracker; // previously created
// The function pointers are previously initialized using xrGetInstanceProcAddr.
PFN_xrGetTrackablePlaneANDROID xrGetTrackablePlaneANDROID; // previously initialized
XrTime updateTime; // Time used for the current frame's simulation update.
XrSpace appSpace; // Space created for XR_REFERENCE_SPACE_TYPE_LOCAL.
XrTrackableANDROID planeTrackable; // Acquired from a hit result or xrGetAllTrackablesANDROID().
XrTrackableGetInfoANDROID planeGetInfo;
planeGetInfo.type = XR_TYPE_TRACKABLE_GET_INFO_ANDROID;
planeGetInfo.next = nullptr;
planeGetInfo.trackable = planeTrackable;
planeGetInfo.baseSpace = appSpace;
planeGetInfo.time = updateTime;
XrTrackablePlaneANDROID plane = { XR_TYPE_TRACKABLE_PLANE_ANDROID };
XrResult result = xrGetTrackablePlaneANDROID(
planeTracker,
&planeGetInfo,
&plane
);
if (result == XR_SUCCESS) {
// Plane tracking state, center pose, extents, type now available in plane.
}
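The vertexCapacityInput, vertexCountOutput, and vertices members of XrTrackablePlaneANDROID follow the usual OpenXR two-call idiom for retrieving the plane's boundary polygon. The sketch below demonstrates that idiom against GetPlaneVertices, a hypothetical stand-in (not a real OpenXR entry point) that always reports a four-vertex boundary:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Local stand-in for XrVector2f, so the sketch compiles on its own.
struct Vec2 { float x, y; };

// Hypothetical stand-in for the vertex output of xrGetTrackablePlaneANDROID:
// it writes up to `capacity` vertices and always reports the full count,
// mirroring vertexCapacityInput / vertexCountOutput / vertices.
void GetPlaneVertices(uint32_t capacity, uint32_t* countOutput, Vec2* vertices) {
    static const Vec2 kBoundary[] = {{0, 0}, {1, 0}, {1, 1}, {0, 1}};
    *countOutput = 4;
    for (uint32_t i = 0; i < capacity && i < 4; ++i) vertices[i] = kBoundary[i];
}

// Two-call idiom: query the required count, resize, then fill the array.
std::vector<Vec2> FetchBoundary() {
    uint32_t count = 0;
    GetPlaneVertices(0, &count, nullptr);            // first call: query the count
    std::vector<Vec2> verts(count);
    GetPlaneVertices(count, &count, verts.data());   // second call: fill the array
    return verts;
}
```

With the real API, the application would set vertexCapacityInput before each xrGetTrackablePlaneANDROID call and point vertices at the resized buffer.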
Example code for creating anchor space
The following example code demonstrates how to create an anchor space attached to a trackable.
XrSession session; // Created at app startup.
XrTime updateTime; // Time used for the current frame's simulation update.
XrSpace appSpace; // Space created for XR_REFERENCE_SPACE_TYPE_LOCAL.
XrTrackableANDROID planeTrackable; // Acquired from a hit result or xrGetAllTrackablesANDROID().
PFN_xrCreateAnchorSpaceANDROID xrCreateAnchorSpaceANDROID; // Previously initialized.
// Create an anchor at (2, 2, 2) world-coordinates.
XrAnchorSpaceCreateInfoANDROID spatialAnchorCreateInfo;
spatialAnchorCreateInfo.type = XR_TYPE_ANCHOR_SPACE_CREATE_INFO_ANDROID;
spatialAnchorCreateInfo.next = nullptr;
spatialAnchorCreateInfo.space = appSpace;
spatialAnchorCreateInfo.time = updateTime;
spatialAnchorCreateInfo.pose = { { 0, 0, 0, 1 }, { 2, 2, 2 } };
XrSpace spatialAnchor = XR_NULL_HANDLE;
XrResult result = xrCreateAnchorSpaceANDROID(
session,
&spatialAnchorCreateInfo,
&spatialAnchor
);
// Create an anchor attached to a trackable.
XrTrackablePlaneANDROID plane; // Previously populated via xrGetTrackablePlaneANDROID.
XrAnchorSpaceCreateInfoANDROID trackableAnchorCreateInfo;
trackableAnchorCreateInfo.type = XR_TYPE_ANCHOR_SPACE_CREATE_INFO_ANDROID;
trackableAnchorCreateInfo.next = nullptr;
trackableAnchorCreateInfo.space = appSpace;
trackableAnchorCreateInfo.time = updateTime;
trackableAnchorCreateInfo.pose = plane.centerPose;
trackableAnchorCreateInfo.trackable = planeTrackable;
XrSpace trackableAnchor = XR_NULL_HANDLE;
result = xrCreateAnchorSpaceANDROID(
session,
&trackableAnchorCreateInfo,
&trackableAnchor
);
while (true) {
// app update loop
// ...
// Get the current location of the anchor's space w.r.t the world.
XrSpaceLocation anchorLocation = { XR_TYPE_SPACE_LOCATION };
result = xrLocateSpace(trackableAnchor, appSpace, updateTime, &anchorLocation);
if (plane.trackingState == XR_TRACKING_STATE_TRACKING_ANDROID) {
// Update anchor pose.
/* doDrawingForAnchor(anchorLocation.pose); */
} else {
// ...
}
}
// Cleanup - destroy the spaces, detaching the anchors so they are no longer
// tracked by the runtime, and release all resources held by them.
xrDestroySpace(spatialAnchor);
xrDestroySpace(trackableAnchor);
New Defines
// Provided by XR_ANDROID_trackables
#define XR_NULL_TRACKABLE_ANDROID 0
XR_NULL_TRACKABLE_ANDROID defines an invalid trackable atom.
12.60.13. New Enum Constants
- XR_ANDROID_TRACKABLES_EXTENSION_NAME
- XR_ANDROID_trackables_SPEC_VERSION
- Extending XrObjectType:
  - XR_OBJECT_TYPE_TRACKABLE_TRACKER_ANDROID
- Extending XrResult:
  - XR_ERROR_MISMATCHING_TRACKABLE_TYPE_ANDROID
  - XR_ERROR_TRACKABLE_TYPE_NOT_SUPPORTED_ANDROID
- Extending XrStructureType:
  - XR_TYPE_ANCHOR_SPACE_CREATE_INFO_ANDROID
  - XR_TYPE_SYSTEM_TRACKABLES_PROPERTIES_ANDROID
  - XR_TYPE_TRACKABLE_GET_INFO_ANDROID
  - XR_TYPE_TRACKABLE_PLANE_ANDROID
  - XR_TYPE_TRACKABLE_TRACKER_CREATE_INFO_ANDROID
12.60.15. Version History
- Revision 1, 2025-07-21 (Kenny Vercaemer)
  - Initial extension description.
- Revision 2, 2025-09-10 (Nihav Jain)
  - Update XrTrackablePlaneANDROID::vertexCountOutput to a pointer.
12.61. XR_ANDROID_trackables_marker
- Name String: XR_ANDROID_trackables_marker
- Extension Type: Instance extension
- Registered Extension Number: 708
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-07-23
- IP Status: No known IP claims.
- Contributors:
  - Christopher Doer, Google
  - Diego Tipaldi, Google
  - Levana Chen, Google
  - Jared Finder, Google
  - Spencer Quin, Google
  - Nihav Jain, Google
  - Ken Mackay, Google
  - Daniel Guttenberg, Qualcomm
12.61.1. Overview
This extension enables physical marker tracking, and enables applications to attach XR content to physical markers in an efficient way.
The extension supports well-known marker types, specifically ArUco and AprilTag. It also enables runtimes to optionally support marker size estimation.
Permissions

Android applications must have the
android.permission.SCENE_UNDERSTANDING_COARSE permission
(protection level: dangerous) listed in their manifest, as this
extension depends on it.
12.61.2. Inspect system capability
The XrSystemMarkerTrackingPropertiesANDROID structure is defined as:
// Provided by XR_ANDROID_trackables_marker
typedef struct XrSystemMarkerTrackingPropertiesANDROID {
XrStructureType type;
void* next;
XrBool32 supportsMarkerTracking;
XrBool32 supportsMarkerSizeEstimation;
uint16_t maxMarkerCount;
} XrSystemMarkerTrackingPropertiesANDROID;
An application can inspect whether the system is capable of marker tracking
by extending the XrSystemProperties with
XrSystemMarkerTrackingPropertiesANDROID structure when calling
xrGetSystemProperties.
The runtime must return XR_ERROR_FEATURE_UNSUPPORTED for marker
tracker creation if and only if supportsMarkerTracking is
XR_FALSE.
If a runtime supports marker tracking, maxMarkerCount must be at
least 1.
12.61.3. Tracking markers
This extension adds XR_TRACKABLE_TYPE_MARKER_ANDROID to
XrTrackableTypeANDROID.
The application creates an XrTrackableTrackerANDROID by calling
xrCreateTrackableTrackerANDROID, specifying
XR_TRACKABLE_TYPE_MARKER_ANDROID as the trackable type in
XrTrackableTrackerCreateInfoANDROID::trackableType, and setting a
valid configuration by adding an
XrTrackableMarkerConfigurationANDROID to the next chain of
XrTrackableTrackerCreateInfoANDROID.
The runtime must return XR_ERROR_FEATURE_UNSUPPORTED if
XrTrackableTrackerCreateInfoANDROID::trackableType is
XR_TRACKABLE_TYPE_MARKER_ANDROID and
XrSystemMarkerTrackingPropertiesANDROID::supportsMarkerTracking
is XR_FALSE, as reported by xrGetSystemProperties.
The XrTrackableMarkerConfigurationANDROID structure is defined as:
// Provided by XR_ANDROID_trackables_marker
typedef struct XrTrackableMarkerConfigurationANDROID {
XrStructureType type;
void* next;
XrTrackableMarkerTrackingModeANDROID trackingMode;
uint32_t databaseCount;
const XrTrackableMarkerDatabaseANDROID* databases;
} XrTrackableMarkerConfigurationANDROID;
The application must set a valid configuration by adding an
XrTrackableMarkerConfigurationANDROID to the
XrTrackableTrackerCreateInfoANDROID::next chain when calling
xrCreateTrackableTrackerANDROID with
XrTrackableTrackerCreateInfoANDROID::trackableType set to
XR_TRACKABLE_TYPE_MARKER_ANDROID.
Otherwise, if the tracker type is set as above but the configuration
structure is not present or not valid, the runtime must return
XR_ERROR_VALIDATION_FAILURE.
If a runtime supports marker size estimation, the application can set
XrTrackableMarkerDatabaseEntryANDROID::edgeSize to 0 in
XrTrackableMarkerDatabaseANDROID::entries to indicate the usage
of size estimation.
Otherwise, the application must set
XrTrackableMarkerDatabaseEntryANDROID::edgeSize to a positive
value or the runtime must return XR_ERROR_VALIDATION_FAILURE.
The runtime must filter the output from xrGetAllTrackablesANDROID to
match the trackingMode and
XrTrackableMarkerDatabaseEntryANDROID::edgeSize.
The XrTrackableMarkerTrackingModeANDROID enum describes the supported tracking modes of markers.
// Provided by XR_ANDROID_trackables_marker
typedef enum XrTrackableMarkerTrackingModeANDROID {
XR_TRACKABLE_MARKER_TRACKING_MODE_DYNAMIC_ANDROID = 0,
XR_TRACKABLE_MARKER_TRACKING_MODE_STATIC_ANDROID = 1,
XR_TRACKABLE_MARKER_TRACKING_MODE_MAX_ENUM_ANDROID = 0x7FFFFFFF
} XrTrackableMarkerTrackingModeANDROID;
The XrTrackableMarkerDatabaseANDROID structure defines a dictionary and corresponding marker ids to be tracked.
// Provided by XR_ANDROID_trackables_marker
typedef struct XrTrackableMarkerDatabaseANDROID {
XrTrackableMarkerDictionaryANDROID dictionary;
uint32_t entryCount;
const XrTrackableMarkerDatabaseEntryANDROID* entries;
} XrTrackableMarkerDatabaseANDROID;
The XrTrackableMarkerDictionaryANDROID enum describes the supported marker dictionaries.
// Provided by XR_ANDROID_trackables_marker
typedef enum XrTrackableMarkerDictionaryANDROID {
XR_TRACKABLE_MARKER_DICTIONARY_ARUCO_4X4_50_ANDROID = 0,
XR_TRACKABLE_MARKER_DICTIONARY_ARUCO_4X4_100_ANDROID = 1,
XR_TRACKABLE_MARKER_DICTIONARY_ARUCO_4X4_250_ANDROID = 2,
XR_TRACKABLE_MARKER_DICTIONARY_ARUCO_4X4_1000_ANDROID = 3,
XR_TRACKABLE_MARKER_DICTIONARY_ARUCO_5X5_50_ANDROID = 4,
XR_TRACKABLE_MARKER_DICTIONARY_ARUCO_5X5_100_ANDROID = 5,
XR_TRACKABLE_MARKER_DICTIONARY_ARUCO_5X5_250_ANDROID = 6,
XR_TRACKABLE_MARKER_DICTIONARY_ARUCO_5X5_1000_ANDROID = 7,
XR_TRACKABLE_MARKER_DICTIONARY_ARUCO_6X6_50_ANDROID = 8,
XR_TRACKABLE_MARKER_DICTIONARY_ARUCO_6X6_100_ANDROID = 9,
XR_TRACKABLE_MARKER_DICTIONARY_ARUCO_6X6_250_ANDROID = 10,
XR_TRACKABLE_MARKER_DICTIONARY_ARUCO_6X6_1000_ANDROID = 11,
XR_TRACKABLE_MARKER_DICTIONARY_ARUCO_7X7_50_ANDROID = 12,
XR_TRACKABLE_MARKER_DICTIONARY_ARUCO_7X7_100_ANDROID = 13,
XR_TRACKABLE_MARKER_DICTIONARY_ARUCO_7X7_250_ANDROID = 14,
XR_TRACKABLE_MARKER_DICTIONARY_ARUCO_7X7_1000_ANDROID = 15,
XR_TRACKABLE_MARKER_DICTIONARY_APRILTAG_16H5_ANDROID = 16,
XR_TRACKABLE_MARKER_DICTIONARY_APRILTAG_25H9_ANDROID = 17,
XR_TRACKABLE_MARKER_DICTIONARY_APRILTAG_36H10_ANDROID = 18,
XR_TRACKABLE_MARKER_DICTIONARY_APRILTAG_36H11_ANDROID = 19,
XR_TRACKABLE_MARKER_DICTIONARY_MAX_ENUM_ANDROID = 0x7FFFFFFF
} XrTrackableMarkerDictionaryANDROID;
The XrTrackableMarkerDatabaseEntryANDROID structure configures a single marker id of a dictionary.
// Provided by XR_ANDROID_trackables_marker
typedef struct XrTrackableMarkerDatabaseEntryANDROID {
int32_t id;
float edgeSize;
} XrTrackableMarkerDatabaseEntryANDROID;
12.61.4. Get markers
The xrGetTrackableMarkerANDROID function is defined as:
// Provided by XR_ANDROID_trackables_marker
XrResult xrGetTrackableMarkerANDROID(
XrTrackableTrackerANDROID tracker,
const XrTrackableGetInfoANDROID* getInfo,
XrTrackableMarkerANDROID* markerOutput);
The runtime must return XR_ERROR_MISMATCHING_TRACKABLE_TYPE_ANDROID
if the trackable type of the XrTrackableANDROID is not
XR_TRACKABLE_TYPE_MARKER_ANDROID, or if the trackable type of the
XrTrackableTrackerANDROID is not
XR_TRACKABLE_TYPE_MARKER_ANDROID.
The XrTrackableMarkerANDROID structure is defined as:
// Provided by XR_ANDROID_trackables_marker
typedef struct XrTrackableMarkerANDROID {
XrStructureType type;
void* next;
XrTrackingStateANDROID trackingState;
XrTime lastUpdatedTime;
XrTrackableMarkerDictionaryANDROID dictionary;
int32_t markerId;
XrPosef centerPose;
XrExtent2Df extents;
} XrTrackableMarkerANDROID;
12.61.5. Example code for getting trackable markers
The following example code demonstrates how to get trackable markers.
XrInstance instance; // previously initialized
XrSystemId systemId; // previously initialized
XrSession session; // previously initialized
// The function pointers are previously initialized using xrGetInstanceProcAddr.
PFN_xrGetSystemProperties xrGetSystemProperties; // previously initialized
PFN_xrCreateTrackableTrackerANDROID xrCreateTrackableTrackerANDROID; // previously initialized
PFN_xrGetAllTrackablesANDROID xrGetAllTrackablesANDROID; // previously initialized
PFN_xrGetTrackableMarkerANDROID xrGetTrackableMarkerANDROID; // previously initialized
PFN_xrDestroyTrackableTrackerANDROID xrDestroyTrackableTrackerANDROID; // previously initialized
XrTime updateTime; // Time used for the current frame's simulation update.
XrSpace appSpace; // Space created for XR_REFERENCE_SPACE_TYPE_LOCAL.
// Inspect system capability
XrSystemMarkerTrackingPropertiesANDROID markerProperty =
{.type = XR_TYPE_SYSTEM_MARKER_TRACKING_PROPERTIES_ANDROID, .next = nullptr};
XrSystemProperties systemProperties = {.type = XR_TYPE_SYSTEM_PROPERTIES,
.next = &markerProperty};
CHK_XR(xrGetSystemProperties(instance, systemId, &systemProperties));
if (!markerProperty.supportsMarkerTracking) {
// Marker tracking is not supported.
return;
}
// Create a trackable tracker for marker tracking.
// If the runtime does not support size estimation, configure a marker edge size of 0.1m.
XrTrackableMarkerDatabaseEntryANDROID markerEntries =
{.id = 0, .edgeSize = markerProperty.supportsMarkerSizeEstimation ? 0.0f : 0.1f};
XrTrackableMarkerDatabaseANDROID markerDatabases =
{.dictionary = XR_TRACKABLE_MARKER_DICTIONARY_ARUCO_4X4_50_ANDROID,
.entryCount = 1,
.entries = &markerEntries};
XrTrackableMarkerConfigurationANDROID configuration =
{.type = XR_TYPE_TRACKABLE_MARKER_CONFIGURATION_ANDROID,
.next = nullptr,
.trackingMode = XR_TRACKABLE_MARKER_TRACKING_MODE_DYNAMIC_ANDROID,
.databaseCount = 1,
.databases = &markerDatabases};
XrTrackableTrackerCreateInfoANDROID createInfo =
{.type = XR_TYPE_TRACKABLE_TRACKER_CREATE_INFO_ANDROID,
.next = &configuration,
.trackableType = XR_TRACKABLE_TYPE_MARKER_ANDROID};
XrTrackableTrackerANDROID markerTracker;
auto res = xrCreateTrackableTrackerANDROID(session, &createInfo, &markerTracker);
if (res == XR_ERROR_PERMISSION_INSUFFICIENT) {
// Handle permission requests.
}
CHK_XR(res);
// Get markers.
std::vector<XrTrackableANDROID> trackables(markerProperty.maxMarkerCount);
std::vector<XrTrackableMarkerANDROID> markers(markerProperty.maxMarkerCount);
uint32_t markerSize = 0;
CHK_XR(xrGetAllTrackablesANDROID(markerTracker, markerProperty.maxMarkerCount, &markerSize,
trackables.data()));
for (uint32_t i = 0; i < markerSize; i++) {
markers[i].type = XR_TYPE_TRACKABLE_MARKER_ANDROID;
markers[i].next = nullptr;
XrTrackableGetInfoANDROID getInfo = {.type = XR_TYPE_TRACKABLE_GET_INFO_ANDROID,
.next = nullptr,
.trackable = trackables[i],
.baseSpace = appSpace,
.time = updateTime};
CHK_XR(xrGetTrackableMarkerANDROID(markerTracker, &getInfo, &markers[i]));
// Handle markers.
}
// Release trackable tracker.
CHK_XR(xrDestroyTrackableTrackerANDROID(markerTracker));
12.61.9. New Enum Constants
- XR_ANDROID_TRACKABLES_MARKER_EXTENSION_NAME
- XR_ANDROID_trackables_marker_SPEC_VERSION
- Extending XrStructureType:
  - XR_TYPE_SYSTEM_MARKER_TRACKING_PROPERTIES_ANDROID
  - XR_TYPE_TRACKABLE_MARKER_ANDROID
  - XR_TYPE_TRACKABLE_MARKER_CONFIGURATION_ANDROID
- Extending XrTrackableTypeANDROID:
  - XR_TRACKABLE_TYPE_MARKER_ANDROID
12.62. XR_ANDROID_trackables_object
- Name String: XR_ANDROID_trackables_object
- Extension Type: Instance extension
- Registered Extension Number: 467
- Revision: 2
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-09-23
- IP Status: No known IP claims.
- Contributors:
  - Diego Tipaldi, Google
  - David Joseph Tan, Google
  - Christopher Doer, Google
  - Levana Chen, Google
  - Spencer Quin, Google
  - Jared Finder, Google
  - Kenny Vercaemer, Google
12.62.1. Overview
This extension enables physical object tracking such as keyboards, mice, and other objects in the environment.
Permissions

Android applications must have the
android.permission.SCENE_UNDERSTANDING_COARSE permission
(protection level: dangerous) listed in their manifest, as this
extension depends on it.
12.62.2. Tracking objects
To track objects, the application creates an XrTrackableTrackerANDROID
by calling xrCreateTrackableTrackerANDROID and specifying the
XR_TRACKABLE_TYPE_OBJECT_ANDROID enumerant added to
XrTrackableTypeANDROID by this extension as the trackable type in
XrTrackableTrackerCreateInfoANDROID::trackableType.
The XrTrackableObjectConfigurationANDROID structure is defined as:
// Provided by XR_ANDROID_trackables_object
typedef struct XrTrackableObjectConfigurationANDROID {
XrStructureType type;
void* next;
uint32_t labelCount;
const XrObjectLabelANDROID* activeLabels;
} XrTrackableObjectConfigurationANDROID;
- The application can configure the XrTrackableTrackerANDROID to track only a subset of supported objects by adding an XrTrackableObjectConfigurationANDROID structure to the XrTrackableTrackerCreateInfoANDROID::next chain.
- The runtime must filter the output from xrGetAllTrackablesANDROID to match any of the activeLabels.
If the application does not add XrTrackableObjectConfigurationANDROID
to the XrTrackableTrackerCreateInfoANDROID::next chain, the
runtime must track all objects that the system has identified.
The XrObjectLabelANDROID enum describes the type of object that the system has identified.
// Provided by XR_ANDROID_trackables_object
typedef enum XrObjectLabelANDROID {
XR_OBJECT_LABEL_UNKNOWN_ANDROID = 0,
XR_OBJECT_LABEL_KEYBOARD_ANDROID = 1,
XR_OBJECT_LABEL_MOUSE_ANDROID = 2,
XR_OBJECT_LABEL_LAPTOP_ANDROID = 3,
XR_OBJECT_LABEL_MAX_ENUM_ANDROID = 0x7FFFFFFF
} XrObjectLabelANDROID;
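Chaining the configuration into the tracker create info can be sketched as below. The struct and enum definitions are local stand-ins with placeholder enum values so the snippet is self-contained; real code includes <openxr/openxr.h> and uses the real constants:

```cpp
#include <cassert>
#include <cstdint>

// Local stand-ins (placeholder values) so this sketch compiles without
// the OpenXR headers; real code includes <openxr/openxr.h>.
using XrStructureType = int32_t;
constexpr XrStructureType XR_TYPE_TRACKABLE_OBJECT_CONFIGURATION_ANDROID = 1;
constexpr XrStructureType XR_TYPE_TRACKABLE_TRACKER_CREATE_INFO_ANDROID = 2;

enum XrObjectLabelANDROID : int32_t {
    XR_OBJECT_LABEL_KEYBOARD_ANDROID = 1,
    XR_OBJECT_LABEL_MOUSE_ANDROID = 2,
};

enum XrTrackableTypeANDROID : int32_t {
    XR_TRACKABLE_TYPE_OBJECT_ANDROID = 1000466000,
};

struct XrTrackableObjectConfigurationANDROID {
    XrStructureType type;
    void* next;
    uint32_t labelCount;
    const XrObjectLabelANDROID* activeLabels;
};

struct XrTrackableTrackerCreateInfoANDROID {
    XrStructureType type;
    const void* next;
    XrTrackableTypeANDROID trackableType;
};

// Restrict the tracker to keyboards and mice by chaining the object
// configuration into the create-info next chain.
const XrObjectLabelANDROID kLabels[] = {XR_OBJECT_LABEL_KEYBOARD_ANDROID,
                                        XR_OBJECT_LABEL_MOUSE_ANDROID};
XrTrackableObjectConfigurationANDROID kConfig = {
    XR_TYPE_TRACKABLE_OBJECT_CONFIGURATION_ANDROID, nullptr, 2, kLabels};
XrTrackableTrackerCreateInfoANDROID kCreateInfo = {
    XR_TYPE_TRACKABLE_TRACKER_CREATE_INFO_ANDROID, &kConfig,
    XR_TRACKABLE_TYPE_OBJECT_ANDROID};
```

kCreateInfo would then be passed to xrCreateTrackableTrackerANDROID; with no chained configuration, the runtime tracks all objects it identifies.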
12.62.3. Get trackable object
The xrGetTrackableObjectANDROID function is defined as:
// Provided by XR_ANDROID_trackables_object
XrResult xrGetTrackableObjectANDROID(
XrTrackableTrackerANDROID tracker,
const XrTrackableGetInfoANDROID* getInfo,
XrTrackableObjectANDROID* objectOutput);
The runtime must return XR_ERROR_MISMATCHING_TRACKABLE_TYPE_ANDROID
if the trackable type of the XrTrackableANDROID is not
XR_TRACKABLE_TYPE_OBJECT_ANDROID, or if the trackable type of the
XrTrackableTrackerANDROID is not
XR_TRACKABLE_TYPE_OBJECT_ANDROID.
The XrTrackableObjectANDROID structure is defined as:
// Provided by XR_ANDROID_trackables_object
typedef struct XrTrackableObjectANDROID {
XrStructureType type;
void* next;
XrTrackingStateANDROID trackingState;
XrPosef centerPose;
XrExtent3DfEXT extents;
XrObjectLabelANDROID objectLabel;
XrTime lastUpdatedTime;
} XrTrackableObjectANDROID;
The runtime must only return objects that it successfully identified;
that is, XrTrackableObjectANDROID::objectLabel must not be
XR_OBJECT_LABEL_UNKNOWN_ANDROID.
12.62.4. Trackable object orientation
The orientation of the objects is shown in the following image (with xyz = rgb) originating at the centroid of the objects.
The 3D orientation of the mouse is aligned to the 3D orientation of the supporting plane. The extents enclose the keyboard and mouse completely and the base of the laptop.
12.62.5. Example code for getting trackable objects
The following example code demonstrates how to get trackable objects.
XrSession session; // previously initialized
// The function pointers are previously initialized using xrGetInstanceProcAddr.
PFN_xrCreateTrackableTrackerANDROID xrCreateTrackableTrackerANDROID; // previously initialized
PFN_xrGetAllTrackablesANDROID xrGetAllTrackablesANDROID; // previously initialized
PFN_xrGetTrackableObjectANDROID xrGetTrackableObjectANDROID; // previously initialized
PFN_xrDestroyTrackableTrackerANDROID xrDestroyTrackableTrackerANDROID; // previously initialized
XrTime updateTime; // Time used for the current frame's simulation update.
XrSpace appSpace; // Space created for XR_REFERENCE_SPACE_TYPE_LOCAL.
XrTrackableTrackerCreateInfoANDROID
createInfo{XR_TYPE_TRACKABLE_TRACKER_CREATE_INFO_ANDROID};
createInfo.trackableType = XR_TRACKABLE_TYPE_OBJECT_ANDROID;
XrTrackableTrackerANDROID objectTrackableTracker;
XrResult result = xrCreateTrackableTrackerANDROID(
session,
&createInfo,
&objectTrackableTracker);
if (result != XR_SUCCESS) { /* Handle failures. */ }
uint32_t trackableCountOutput = 0;
std::vector<XrTrackableANDROID> allObjectTrackables;
// Query the number of trackables available.
result = xrGetAllTrackablesANDROID(
objectTrackableTracker,
0,
&trackableCountOutput,
nullptr
);
if (result == XR_SUCCESS) {
allObjectTrackables.resize(trackableCountOutput, XR_NULL_TRACKABLE_ANDROID);
// Fetch the actual trackable handles in the appropriately resized array.
result = xrGetAllTrackablesANDROID(
objectTrackableTracker,
trackableCountOutput,
&trackableCountOutput,
allObjectTrackables.data());
if (result == XR_SUCCESS) {
for (XrTrackableANDROID trackable : allObjectTrackables) {
// Object trackable query information
XrTrackableGetInfoANDROID objectGetInfo;
objectGetInfo.type = XR_TYPE_TRACKABLE_GET_INFO_ANDROID;
objectGetInfo.next = nullptr;
objectGetInfo.trackable = trackable;
objectGetInfo.baseSpace = appSpace;
objectGetInfo.time = updateTime;
// Get the object trackable. Note that the tracker only returns object types.
XrTrackableObjectANDROID object = { XR_TYPE_TRACKABLE_OBJECT_ANDROID };
result = xrGetTrackableObjectANDROID(
objectTrackableTracker,
&objectGetInfo,
&object
);
if (result == XR_SUCCESS) {
/** Do Stuff with the object */
}
}
}
}
// Release trackable tracker.
result = xrDestroyTrackableTrackerANDROID(objectTrackableTracker);
12.62.9. New Enum Constants
- XR_ANDROID_TRACKABLES_OBJECT_EXTENSION_NAME
- XR_ANDROID_trackables_object_SPEC_VERSION
- Extending XrStructureType:
  - XR_TYPE_TRACKABLE_OBJECT_ANDROID
  - XR_TYPE_TRACKABLE_OBJECT_CONFIGURATION_ANDROID
- Extending XrTrackableTypeANDROID:
  - XR_TRACKABLE_TYPE_OBJECT_ANDROID
12.63. XR_BD_body_tracking
- Name String: XR_BD_body_tracking
- Extension Type: Instance extension
- Registered Extension Number: 386
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2023-06-05
- IP Status: No known IP claims.
- Contributors:
  - Shanliang Xu, ByteDance
  - Chenxi Bao, ByteDance
  - Shuai Liu, ByteDance
  - Jun Yan, ByteDance
12.63.1. Overview
This extension enables applications to locate the individual body joints that represent the position of the user. It enables applications to render the whole body in XR experiences.
12.63.2. Inspect system capability
The XrSystemBodyTrackingPropertiesBD structure is defined as:
// Provided by XR_BD_body_tracking
typedef struct XrSystemBodyTrackingPropertiesBD {
XrStructureType type;
void* next;
XrBool32 supportsBodyTracking;
} XrSystemBodyTrackingPropertiesBD;
An application can inspect whether the system is capable of body tracking by extending the XrSystemProperties with XrSystemBodyTrackingPropertiesBD structure when calling xrGetSystemProperties.
If a runtime returns XR_FALSE for supportsBodyTracking, the
runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateBodyTrackerBD.
12.63.3. Create a body tracker handle
The XrBodyTrackerBD handle represents the resources for body tracking.
// Provided by XR_BD_body_tracking
XR_DEFINE_HANDLE(XrBodyTrackerBD)
This handle can be used to locate body joints using xrLocateBodyJointsBD function.
A body tracker provides joint locations that track human body motion.
The xrCreateBodyTrackerBD function is defined as:
// Provided by XR_BD_body_tracking
XrResult xrCreateBodyTrackerBD(
XrSession session,
const XrBodyTrackerCreateInfoBD* createInfo,
XrBodyTrackerBD* bodyTracker);
An application can create an XrBodyTrackerBD handle using xrCreateBodyTrackerBD function.
If the system does not support body tracking, the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from xrCreateBodyTrackerBD.
In this case, the runtime must return XR_FALSE for
XrSystemBodyTrackingPropertiesBD::supportsBodyTracking when the
function xrGetSystemProperties is called, so that the application
avoids creating a body tracker.
If an invalid value of XrBodyTrackerCreateInfoBD::jointSet is
passed in createInfo, the runtime must return
XR_ERROR_VALIDATION_FAILURE.
The XrBodyTrackerCreateInfoBD structure is defined as:
// Provided by XR_BD_body_tracking
typedef struct XrBodyTrackerCreateInfoBD {
XrStructureType type;
const void* next;
XrBodyJointSetBD jointSet;
} XrBodyTrackerCreateInfoBD;
The XrBodyTrackerCreateInfoBD structure describes the information to create an XrBodyTrackerBD handle.
The XrBodyJointSetBD enum describes the set of body joints to track when creating an XrBodyTrackerBD.
// Provided by XR_BD_body_tracking
typedef enum XrBodyJointSetBD {
XR_BODY_JOINT_SET_BODY_WITHOUT_ARM_BD = 1,
XR_BODY_JOINT_SET_FULL_BODY_JOINTS_BD = 2,
XR_BODY_JOINT_SET_MAX_ENUM_BD = 0x7FFFFFFF
} XrBodyJointSetBD;
The joint sets have the following meaning.
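Concretely, each joint set corresponds to a fixed joint count, defined later in this section. The sketch below is illustrative only: the enum values mirror XrBodyJointSetBD, and the counts mirror the XR_BODY_JOINT_*_COUNT_BD defines from this extension.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative mirror of XrBodyJointSetBD values.
enum JointSet {
    kBodyWithoutArm = 1,  // XR_BODY_JOINT_SET_BODY_WITHOUT_ARM_BD
    kFullBodyJoints = 2,  // XR_BODY_JOINT_SET_FULL_BODY_JOINTS_BD
};

// Returns the jointLocationCount an application must use for the given
// joint set, or 0 for an invalid value (a value for which
// xrCreateBodyTrackerBD would return XR_ERROR_VALIDATION_FAILURE).
uint32_t requiredJointCount(JointSet set) {
    switch (set) {
    case kBodyWithoutArm: return 16;  // XR_BODY_JOINT_WITHOUT_ARM_COUNT_BD
    case kFullBodyJoints: return 24;  // XR_BODY_JOINT_COUNT_BD
    default:              return 0;
    }
}
```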
The xrDestroyBodyTrackerBD function is defined as:
// Provided by XR_BD_body_tracking
XrResult xrDestroyBodyTrackerBD(
XrBodyTrackerBD bodyTracker);
The xrDestroyBodyTrackerBD function releases the bodyTracker handle and the underlying resources when the body tracking experience is over.
12.63.4. Locate body joints
The xrLocateBodyJointsBD function is defined as:
// Provided by XR_BD_body_tracking
XrResult xrLocateBodyJointsBD(
XrBodyTrackerBD bodyTracker,
const XrBodyJointsLocateInfoBD* locateInfo,
XrBodyJointLocationsBD* locations);
The xrLocateBodyJointsBD function locates an array of body joints relative to a base space at a given time.
The XrBodyJointsLocateInfoBD structure is defined as:
// Provided by XR_BD_body_tracking
typedef struct XrBodyJointsLocateInfoBD {
XrStructureType type;
const void* next;
XrSpace baseSpace;
XrTime time;
} XrBodyJointsLocateInfoBD;
The XrBodyJointsLocateInfoBD structure describes the information to locate body joints. Callers should request a time equal to the predicted display time for the rendered frame. The system will employ appropriate modeling to support body tracking at this time.
The XrBodyJointLocationsBD structure is defined as:
// Provided by XR_BD_body_tracking
typedef struct XrBodyJointLocationsBD {
XrStructureType type;
void* next;
XrBool32 allJointPosesTracked;
uint32_t jointLocationCount;
XrBodyJointLocationBD* jointLocations;
} XrBodyJointLocationsBD;
The XrBodyJointLocationsBD structure returns the state of the body joint locations.
The application must allocate jointLocations with enough elements for
all joints of the chosen joint set, and set jointLocationCount
accordingly.
The runtime must populate elements of the application-allocated
jointLocations array representing human body motion.
The runtime must populate the jointLocations array with elements
indexed using the corresponding body joint enumeration (e.g.
XrBodyJointBD).
The index enumeration is determined by the XrBodyJointSetBD value used
when creating the XrBodyTrackerBD.
For example, when the XrBodyTrackerBD is created with
XR_BODY_JOINT_SET_FULL_BODY_JOINTS_BD, the application must set the
jointLocationCount to XR_BODY_JOINT_COUNT_BD, allocating at
least that many elements in jointLocations, and the runtime must
populate the jointLocations array indexed by the XrBodyJointBD
enumeration.
If the value of jointLocationCount does not equal the joint count
value associated with the XrBodyJointSetBD value used when creating
the XrBodyTrackerBD, or if jointLocations is NULL, the
runtime must return XR_ERROR_VALIDATION_FAILURE.
If the returned allJointPosesTracked is true, the runtime must return
all joint locations with XR_SPACE_LOCATION_POSITION_VALID_BIT,
XR_SPACE_LOCATION_ORIENTATION_VALID_BIT,
XR_SPACE_LOCATION_POSITION_TRACKED_BIT, and
XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT set.
If the returned allJointPosesTracked is false, the body input is not
detected or tracked for one or more joints.
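When allJointPosesTracked is false, an application can test each joint's locationFlags individually before using its pose. A minimal sketch; the bit values below are assumed to match the core XrSpaceLocationFlags bits in openxr.h:

```cpp
#include <cassert>
#include <cstdint>

// Assumed to match the core XrSpaceLocationFlags bit values.
constexpr uint64_t kOrientationValid   = 0x1;
constexpr uint64_t kPositionValid      = 0x2;
constexpr uint64_t kOrientationTracked = 0x4;
constexpr uint64_t kPositionTracked    = 0x8;

// A joint pose is fully tracked when all four bits are set, which is
// what allJointPosesTracked == XR_TRUE guarantees for every joint.
bool jointFullyTracked(uint64_t locationFlags) {
    constexpr uint64_t kAll = kOrientationValid | kPositionValid |
                              kOrientationTracked | kPositionTracked;
    return (locationFlags & kAll) == kAll;
}
```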
The XrBodyJointLocationBD structure is defined as:
// Provided by XR_BD_body_tracking
typedef struct XrBodyJointLocationBD {
XrSpaceLocationFlags locationFlags;
XrPosef pose;
} XrBodyJointLocationBD;
The XrBodyJointLocationBD structure describes the position and orientation of a body joint.
12.63.5. Example code for locating body joints
The following example code demonstrates how to locate all body joints relative to a base space.
XrInstance instance; // previously initialized
XrSystemId systemId; // previously initialized
XrSession session; // previously initialized
XrSpace baseSpace; // previously initialized, e.g. from
// XR_REFERENCE_SPACE_TYPE_LOCAL
// Inspect body tracking system properties
XrSystemBodyTrackingPropertiesBD bodyTrackingSystemProperties{
XR_TYPE_SYSTEM_BODY_TRACKING_PROPERTIES_BD};
XrSystemProperties systemProperties{XR_TYPE_SYSTEM_PROPERTIES,
&bodyTrackingSystemProperties};
CHK_XR(xrGetSystemProperties(instance, systemId, &systemProperties));
if (!bodyTrackingSystemProperties.supportsBodyTracking) {
// The system does not support body tracking
return;
}
// Get function pointer for xrCreateBodyTrackerBD
PFN_xrCreateBodyTrackerBD pfnCreateBodyTrackerBD;
CHK_XR(xrGetInstanceProcAddr(instance, "xrCreateBodyTrackerBD",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnCreateBodyTrackerBD)));
// Create a body tracker that tracks default set of body joints.
XrBodyTrackerBD bodyTracker = {};
{
XrBodyTrackerCreateInfoBD createInfo{XR_TYPE_BODY_TRACKER_CREATE_INFO_BD};
createInfo.jointSet = XR_BODY_JOINT_SET_FULL_BODY_JOINTS_BD;
CHK_XR(pfnCreateBodyTrackerBD(session, &createInfo, &bodyTracker));
}
// Allocate buffers to receive joint location data before frame
// loop starts.
XrBodyJointLocationBD jointLocations[XR_BODY_JOINT_COUNT_BD];
XrBodyJointLocationsBD locations{XR_TYPE_BODY_JOINT_LOCATIONS_BD};
locations.jointLocationCount = XR_BODY_JOINT_COUNT_BD;
locations.jointLocations = jointLocations;
// Get function pointer for xrLocateBodyJointsBD.
PFN_xrLocateBodyJointsBD pfnLocateBodyJointsBD;
CHK_XR(xrGetInstanceProcAddr(instance, "xrLocateBodyJointsBD",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnLocateBodyJointsBD)));
while (1) {
// ...
// For every frame in the frame loop
// ...
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
XrBodyJointsLocateInfoBD locateInfo{XR_TYPE_BODY_JOINTS_LOCATE_INFO_BD};
locateInfo.baseSpace = baseSpace;
locateInfo.time = time;
CHK_XR(pfnLocateBodyJointsBD(bodyTracker, &locateInfo, &locations));
if (locations.allJointPosesTracked) {
// The returned joint location array is directly indexed with
// XrBodyJointBD enum.
const XrPosef pose = jointLocations[XR_BODY_JOINT_LEFT_HAND_BD].pose;
}
}
12.63.6. Conventions of body joints
This extension defines 24 body joints.
The body joints are ordered starting from XR_BODY_JOINT_PELVIS_BD and
then incrementing in a spiral manner.
// Provided by XR_BD_body_tracking
typedef enum XrBodyJointBD {
XR_BODY_JOINT_PELVIS_BD = 0,
XR_BODY_JOINT_LEFT_HIP_BD = 1,
XR_BODY_JOINT_RIGHT_HIP_BD = 2,
XR_BODY_JOINT_SPINE1_BD = 3,
XR_BODY_JOINT_LEFT_KNEE_BD = 4,
XR_BODY_JOINT_RIGHT_KNEE_BD = 5,
XR_BODY_JOINT_SPINE2_BD = 6,
XR_BODY_JOINT_LEFT_ANKLE_BD = 7,
XR_BODY_JOINT_RIGHT_ANKLE_BD = 8,
XR_BODY_JOINT_SPINE3_BD = 9,
XR_BODY_JOINT_LEFT_FOOT_BD = 10,
XR_BODY_JOINT_RIGHT_FOOT_BD = 11,
XR_BODY_JOINT_NECK_BD = 12,
XR_BODY_JOINT_LEFT_COLLAR_BD = 13,
XR_BODY_JOINT_RIGHT_COLLAR_BD = 14,
XR_BODY_JOINT_HEAD_BD = 15,
XR_BODY_JOINT_LEFT_SHOULDER_BD = 16,
XR_BODY_JOINT_RIGHT_SHOULDER_BD = 17,
XR_BODY_JOINT_LEFT_ELBOW_BD = 18,
XR_BODY_JOINT_RIGHT_ELBOW_BD = 19,
XR_BODY_JOINT_LEFT_WRIST_BD = 20,
XR_BODY_JOINT_RIGHT_WRIST_BD = 21,
XR_BODY_JOINT_LEFT_HAND_BD = 22,
XR_BODY_JOINT_RIGHT_HAND_BD = 23,
XR_BODY_JOINT_MAX_ENUM_BD = 0x7FFFFFFF
} XrBodyJointBD;
Put the body in a T-shape pose. The backward (+Z) direction is perpendicular to the body surface and points towards the back of the body. The right (+X) direction points to the right side of the body. The +Y direction is perpendicular to the X and Z directions and follows the right-hand rule.
// Provided by XR_BD_body_tracking
#define XR_BODY_JOINT_COUNT_BD 24
XR_BODY_JOINT_COUNT_BD defines the number of body joint enumerants
defined in the full enumeration XrBodyJointBD.
This corresponds to the joint set
XR_BODY_JOINT_SET_FULL_BODY_JOINTS_BD (in XrBodyJointSetBD).
// Provided by XR_BD_body_tracking
#define XR_BODY_JOINT_WITHOUT_ARM_COUNT_BD 16
XR_BODY_JOINT_WITHOUT_ARM_COUNT_BD defines the number of body joints
in the joint set XR_BODY_JOINT_SET_BODY_WITHOUT_ARM_BD (in
XrBodyJointSetBD), which excludes the arms.
This count includes joints indexed by XrBodyJointBD in the range
XR_BODY_JOINT_PELVIS_BD through XR_BODY_JOINT_HEAD_BD inclusive.
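The inclusive range above can be checked numerically; a small sketch using the XrBodyJointBD index values defined earlier:

```cpp
#include <cassert>

// Indices follow the XrBodyJointBD enumeration above.
constexpr int kPelvis = 0;   // XR_BODY_JOINT_PELVIS_BD
constexpr int kHead   = 15;  // XR_BODY_JOINT_HEAD_BD

// True when the joint index belongs to the without-arm joint set.
constexpr bool inWithoutArmSet(int joint) {
    return joint >= kPelvis && joint <= kHead;
}

// Inclusive range pelvis..head yields the without-arm count of 16.
constexpr int kWithoutArmCount = kHead - kPelvis + 1;
```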
12.63.12. New Enum Constants
-
XR_BD_BODY_TRACKING_EXTENSION_NAME -
XR_BD_body_tracking_SPEC_VERSION -
Extending XrObjectType:
-
XR_OBJECT_TYPE_BODY_TRACKER_BD
-
-
Extending XrStructureType:
-
XR_TYPE_BODY_JOINTS_LOCATE_INFO_BD -
XR_TYPE_BODY_JOINT_LOCATIONS_BD -
XR_TYPE_BODY_TRACKER_CREATE_INFO_BD -
XR_TYPE_SYSTEM_BODY_TRACKING_PROPERTIES_BD
-
12.64. XR_BD_facial_simulation
- Name String
-
XR_BD_facial_simulation - Extension Type
-
Instance extension
- Registered Extension Number
-
387
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2024-10-17
- IP Status
-
No known IP claims.
- Contributors
-
Shanliang Xu, ByteDance
Chenxi Bao, ByteDance
Shuai Liu, ByteDance
Junhuan Peng, ByteDance
Zhicheng Chen, ByteDance
12.64.1. Overview
This extension allows applications to obtain the weights of blend shapes that express the user's facial movements.
This extension defines several modes in XrFacialSimulationModeBD. These modes correspond to different output types of blend shapes, such as the XrFaceExpressionBD or XrLipExpressionBD, or even a combination of them.
Because not all modes involve direct use of private data such as images, and some modes are closer to an audio-driven estimation of facial expression, this extension is referred to as Facial Expression Simulation rather than Facial Expression Tracking.
Permissions
Android applications must have both the com.picovr.permission.FACE_TRACKING
permission and the android.permission.RECORD_AUDIO permission listed in their
manifest and granted in order to use this extension; otherwise the
xrCreateFaceTrackerBD function will return an error.
12.64.2. Inspect system capability
The XrSystemFacialSimulationPropertiesBD structure is defined as:
// Provided by XR_BD_facial_simulation
typedef struct XrSystemFacialSimulationPropertiesBD {
XrStructureType type;
void* next;
XrBool32 supportsFaceTracking;
} XrSystemFacialSimulationPropertiesBD;
An application can inspect whether the system is capable of providing face
tracking input by extending the XrSystemProperties::next chain
with an XrSystemFacialSimulationPropertiesBD structure when calling
xrGetSystemProperties.
If a runtime returns XR_FALSE for supportsFaceTracking, the
runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateFaceTrackerBD.
12.64.3. Enumerating tracking modes
The application can enumerate the supported modes by calling the xrEnumerateFacialSimulationModesBD function. The list of available modes depends on the internal facial simulation algorithm.
The xrEnumerateFacialSimulationModesBD function is defined as:
// Provided by XR_BD_facial_simulation
XrResult xrEnumerateFacialSimulationModesBD(
XrSession session,
uint32_t modeCapacityInput,
uint32_t* modeCountOutput,
XrFacialSimulationModeBD* modes);
This extension provides four modes to express facial blend shapes. Some modes need only voice input for audio-driven facial expression simulation; other modes require image input, making use of face tracking.
The XrFacialSimulationModeBD enumeration describes the modes of facial simulation and tracking.
// Provided by XR_BD_facial_simulation
typedef enum XrFacialSimulationModeBD {
XR_FACIAL_SIMULATION_MODE_DEFAULT_BD = 0,
XR_FACIAL_SIMULATION_MODE_COMBINED_AUDIO_BD = 1,
XR_FACIAL_SIMULATION_MODE_COMBINED_AUDIO_WITH_LIP_BD = 2,
XR_FACIAL_SIMULATION_MODE_ONLY_AUDIO_WITH_LIP_BD = 3,
XR_FACIAL_SIMULATION_MODE_MAX_ENUM_BD = 0x7FFFFFFF
} XrFacialSimulationModeBD;
When using XR_FACIAL_SIMULATION_MODE_DEFAULT_BD and
XR_FACIAL_SIMULATION_MODE_COMBINED_AUDIO_BD, only
XrFacialSimulationDataBD is populated in the output.
When using XR_FACIAL_SIMULATION_MODE_COMBINED_AUDIO_WITH_LIP_BD the
application must chain an XrLipExpressionDataBD to
XrFacialSimulationDataBD::next when calling
xrGetFacialSimulationDataBD.
When using XR_FACIAL_SIMULATION_MODE_ONLY_AUDIO_WITH_LIP_BD the
runtime may generate predicted animations in face expression in
XrFacialSimulationDataBD and XrLipExpressionDataBD structures
when calling xrGetFacialSimulationDataBD, to make the avatar look more
natural.
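The mode-to-output mapping described above can be condensed into a small sketch; the enum values below mirror XrFacialSimulationModeBD:

```cpp
#include <cassert>

// Illustrative mirror of XrFacialSimulationModeBD values.
enum SimulationMode {
    kDefault          = 0,  // XR_FACIAL_SIMULATION_MODE_DEFAULT_BD
    kCombinedAudio    = 1,  // XR_FACIAL_SIMULATION_MODE_COMBINED_AUDIO_BD
    kCombinedAudioLip = 2,  // XR_FACIAL_SIMULATION_MODE_COMBINED_AUDIO_WITH_LIP_BD
    kOnlyAudioLip     = 3,  // XR_FACIAL_SIMULATION_MODE_ONLY_AUDIO_WITH_LIP_BD
};

// True for modes where the application must chain an
// XrLipExpressionDataBD to XrFacialSimulationDataBD::next when
// calling xrGetFacialSimulationDataBD.
bool modeRequiresLipChain(SimulationMode mode) {
    return mode == kCombinedAudioLip || mode == kOnlyAudioLip;
}
```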
12.64.4. Create a face tracker handle
The XrFaceTrackerBD handle represents the resources for face tracking.
// Provided by XR_BD_facial_simulation
XR_DEFINE_HANDLE(XrFaceTrackerBD)
This handle is used to obtain blend shapes using the xrGetFacialSimulationDataBD function.
The xrCreateFaceTrackerBD function is defined as:
// Provided by XR_BD_facial_simulation
XrResult xrCreateFaceTrackerBD(
XrSession session,
const XrFaceTrackerCreateInfoBD* createInfo,
XrFaceTrackerBD* tracker);
An application can create an XrFaceTrackerBD handle by calling the xrCreateFaceTrackerBD function, specifying a supported mode from those defined in XrFacialSimulationModeBD.
If the runtime does not support the specified mode provided in
createInfo in XrFaceTrackerCreateInfoBD, the runtime must
return XR_ERROR_FEATURE_UNSUPPORTED from xrCreateFaceTrackerBD.
An application can get supported modes by calling the
xrEnumerateFacialSimulationModesBD function.
The XrFaceTrackerCreateInfoBD structure is defined as:
// Provided by XR_BD_facial_simulation
typedef struct XrFaceTrackerCreateInfoBD {
XrStructureType type;
const void* next;
XrFacialSimulationModeBD mode;
} XrFaceTrackerCreateInfoBD;
12.64.5. Destroy a face tracker handle
The xrDestroyFaceTrackerBD function is defined as:
// Provided by XR_BD_facial_simulation
XrResult xrDestroyFaceTrackerBD(
XrFaceTrackerBD tracker);
When facial expression tracking is no longer needed, release the
tracker and the resources associated with the tracker through
the xrDestroyFaceTrackerBD function.
12.64.6. Get facial expressions
The xrGetFacialSimulationDataBD function is defined as:
// Provided by XR_BD_facial_simulation
XrResult xrGetFacialSimulationDataBD(
XrFaceTrackerBD tracker,
const XrFacialSimulationDataGetInfoBD* info,
XrFacialSimulationDataBD* facialData);
The xrGetFacialSimulationDataBD function returns the blend shapes of facial expression.
The XrFacialSimulationDataGetInfoBD structure is defined as:
// Provided by XR_BD_facial_simulation
typedef struct XrFacialSimulationDataGetInfoBD {
XrStructureType type;
const void* next;
XrTime time;
} XrFacialSimulationDataGetInfoBD;
The XrFacialSimulationDataGetInfoBD structure describes the information to get facial expression. Callers should request a time equal to the predicted display time for the rendered frame. The system should employ appropriate modeling to provide expressions for this time.
The XrFacialSimulationDataBD structure is defined as:
// Provided by XR_BD_facial_simulation
typedef struct XrFacialSimulationDataBD {
XrStructureType type;
void* next;
uint32_t faceExpressionWeightCount;
float* faceExpressionWeights;
XrBool32 isUpperFaceDataValid;
XrBool32 isLowerFaceDataValid;
XrTime time;
} XrFacialSimulationDataBD;
The XrFacialSimulationDataBD structure contains the facial expression weights. See XrFacialSimulationModeBD for details and for how this interacts with lip expression tracking.
The runtime must fill the faceExpressionWeights array ordered so that
it is indexed using the corresponding facial expression enumeration
XrFaceExpressionBD.
An application must preallocate the output faceExpressionWeights
array with at least faceExpressionWeightCount float elements.
The value of faceExpressionWeightCount must be
XR_FACE_EXPRESSION_COUNT_BD.
The XrLipExpressionDataBD structure is defined as:
// Provided by XR_BD_facial_simulation
typedef struct XrLipExpressionDataBD {
XrStructureType type;
void* next;
uint32_t lipsyncExpressionWeightCount;
float* lipsyncExpressionWeights;
} XrLipExpressionDataBD;
The XrLipExpressionDataBD structure contains the lip expression blend weights. See XrFacialSimulationModeBD for details and for how this interacts with facial expression tracking.
If an application creates XrFaceTrackerBD by calling
xrCreateFaceTrackerBD using the mode
XR_FACIAL_SIMULATION_MODE_COMBINED_AUDIO_WITH_LIP_BD or the mode
XR_FACIAL_SIMULATION_MODE_ONLY_AUDIO_WITH_LIP_BD, meaning the
application wants to get the lip expression in XrLipExpressionBD, the
application must chain an XrLipExpressionDataBD to
XrFacialSimulationDataBD::next when calling
xrGetFacialSimulationDataBD.
If the mode in use does not produce lip data, any XrLipExpressionDataBD
structure on this chain will be ignored.
The runtime must return lipsyncExpressionWeights representing the
weights of the blend shapes of the current lip expression.
The runtime must update the lipsyncExpressionWeights array ordered so
that it is indexed using the corresponding lip expression enumeration
XrLipExpressionBD.
An application must preallocate the output lipsyncExpressionWeights
array with at least lipsyncExpressionWeightCount float elements.
The value of lipsyncExpressionWeightCount must be
XR_LIP_EXPRESSION_COUNT_BD.
// Provided by XR_BD_facial_simulation
#define XR_FACE_EXPRESSION_COUNT_BD 52
XR_FACE_EXPRESSION_COUNT_BD is the number of blend shapes of XrFaceExpressionBD.
// Provided by XR_BD_facial_simulation
#define XR_LIP_EXPRESSION_COUNT_BD 20
XR_LIP_EXPRESSION_COUNT_BD is the number of blend shapes of XrLipExpressionBD.
12.64.7. Set/Get facial simulation mode
An application can query the mode of a previously created tracker by calling xrGetFacialSimulationModeBD, and switch the tracker to another mode by calling xrSetFacialSimulationModeBD.
The xrGetFacialSimulationModeBD function is defined as:
// Provided by XR_BD_facial_simulation
XrResult xrGetFacialSimulationModeBD(
XrFaceTrackerBD tracker,
XrFacialSimulationModeBD* mode);
The xrSetFacialSimulationModeBD function is defined as:
// Provided by XR_BD_facial_simulation
XrResult xrSetFacialSimulationModeBD(
XrFaceTrackerBD tracker,
XrFacialSimulationModeBD mode);
The runtime must return XR_ERROR_FEATURE_UNSUPPORTED if the runtime
does not support the new mode.
12.64.8. Example code for getting facial expression and lip expression
The code below shows how to use this extension.
XrInstance instance; // previously initialized
XrSystemId systemId; // previously initialized
XrSession session; // previously initialized
XrSystemFacialSimulationPropertiesBD facialSimulationProperties{
XR_TYPE_SYSTEM_FACIAL_SIMULATION_PROPERTIES_BD};
XrSystemProperties sysProperties{XR_TYPE_SYSTEM_PROPERTIES,
&facialSimulationProperties};
CHK_XR(xrGetSystemProperties(instance, systemId, &sysProperties));
if (!facialSimulationProperties.supportsFaceTracking) {
  // The system does not support face tracking
return;
}
uint32_t modeCount;
CHK_XR(xrEnumerateFacialSimulationModesBD(session, 0, &modeCount, nullptr));
std::vector<XrFacialSimulationModeBD> facialSimulationModes(modeCount);
CHK_XR(xrEnumerateFacialSimulationModesBD(session, modeCount, &modeCount,
facialSimulationModes.data()));
XrFaceTrackerBD faceTracker{XR_NULL_HANDLE};
XrFaceTrackerCreateInfoBD info{XR_TYPE_FACE_TRACKER_CREATE_INFO_BD};
info.mode = XR_FACIAL_SIMULATION_MODE_DEFAULT_BD;
CHK_XR(xrCreateFaceTrackerBD(session, &info, &faceTracker));
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
XrFacialSimulationDataGetInfoBD getInfo{
XR_TYPE_FACIAL_SIMULATION_DATA_GET_INFO_BD};
getInfo.time = time;
std::vector<float> faceExpressionWeights(XR_FACE_EXPRESSION_COUNT_BD);
XrFacialSimulationDataBD facedata{XR_TYPE_FACIAL_SIMULATION_DATA_BD};
facedata.faceExpressionWeights = faceExpressionWeights.data();
CHK_XR(xrGetFacialSimulationDataBD(faceTracker, &getInfo, &facedata));
assert(facedata.isUpperFaceDataValid == true);
assert(facedata.isLowerFaceDataValid == true);
for (auto blendWeight : faceExpressionWeights) {
std::cout << blendWeight << '\n'; // print each data of face expression
}
// change mode from default to COMBINED_AUDIO_WITH_LIP
CHK_XR(xrSetFacialSimulationModeBD(
faceTracker, XR_FACIAL_SIMULATION_MODE_COMBINED_AUDIO_WITH_LIP_BD));
std::vector<float> lipExpressionWeights(XR_LIP_EXPRESSION_COUNT_BD);
XrLipExpressionDataBD lipdata{XR_TYPE_LIP_EXPRESSION_DATA_BD};
lipdata.lipsyncExpressionWeights = lipExpressionWeights.data();
facedata.next = &lipdata;
CHK_XR(xrGetFacialSimulationDataBD(faceTracker, &getInfo, &facedata));
for (auto blendWeight : lipExpressionWeights) {
std::cout << blendWeight << '\n'; // print each data of lip expression
}
XrFacialSimulationModeBD currentMode;
CHK_XR(xrGetFacialSimulationModeBD(faceTracker, &currentMode));
assert(currentMode == XR_FACIAL_SIMULATION_MODE_COMBINED_AUDIO_WITH_LIP_BD);
CHK_XR(xrDestroyFaceTrackerBD(faceTracker));
12.64.9. Conventions of blend shapes
This extension defines 52 blend shapes for facial expression and 20 blend shapes for lip expression.
// Provided by XR_BD_facial_simulation
typedef enum XrFaceExpressionBD {
XR_FACE_EXPRESSION_BROW_DROP_L_BD = 0,
XR_FACE_EXPRESSION_BROW_DROP_R_BD = 1,
XR_FACE_EXPRESSION_BROW_INNER_UPWARDS_BD = 2,
XR_FACE_EXPRESSION_BROW_OUTER_UPWARDS_L_BD = 3,
XR_FACE_EXPRESSION_BROW_OUTER_UPWARDS_R_BD = 4,
XR_FACE_EXPRESSION_EYE_BLINK_L_BD = 5,
XR_FACE_EXPRESSION_EYE_LOOK_DROP_L_BD = 6,
XR_FACE_EXPRESSION_EYE_LOOK_IN_L_BD = 7,
XR_FACE_EXPRESSION_EYE_LOOK_OUT_L_BD = 8,
XR_FACE_EXPRESSION_EYE_LOOK_UPWARDS_L_BD = 9,
XR_FACE_EXPRESSION_EYE_LOOK_SQUINT_L_BD = 10,
XR_FACE_EXPRESSION_EYE_LOOK_WIDE_L_BD = 11,
XR_FACE_EXPRESSION_EYE_BLINK_R_BD = 12,
XR_FACE_EXPRESSION_EYE_LOOK_DROP_R_BD = 13,
XR_FACE_EXPRESSION_EYE_LOOK_IN_R_BD = 14,
XR_FACE_EXPRESSION_EYE_LOOK_OUT_R_BD = 15,
XR_FACE_EXPRESSION_EYE_LOOK_UPWARDS_R_BD = 16,
XR_FACE_EXPRESSION_EYE_LOOK_SQUINT_R_BD = 17,
XR_FACE_EXPRESSION_EYE_LOOK_WIDE_R_BD = 18,
XR_FACE_EXPRESSION_NOSE_SNEER_L_BD = 19,
XR_FACE_EXPRESSION_NOSE_SNEER_R_BD = 20,
XR_FACE_EXPRESSION_CHEEK_PUFF_BD = 21,
XR_FACE_EXPRESSION_CHEEK_SQUINT_L_BD = 22,
XR_FACE_EXPRESSION_CHEEK_SQUINT_R_BD = 23,
XR_FACE_EXPRESSION_MOUTH_CLOSE_BD = 24,
XR_FACE_EXPRESSION_MOUTH_FUNNEL_BD = 25,
XR_FACE_EXPRESSION_MOUTH_PUCKER_BD = 26,
XR_FACE_EXPRESSION_MOUTH_L_BD = 27,
XR_FACE_EXPRESSION_MOUTH_R_BD = 28,
XR_FACE_EXPRESSION_MOUTH_SMILE_L_BD = 29,
XR_FACE_EXPRESSION_MOUTH_SMILE_R_BD = 30,
XR_FACE_EXPRESSION_MOUTH_FROWN_L_BD = 31,
XR_FACE_EXPRESSION_MOUTH_FROWN_R_BD = 32,
XR_FACE_EXPRESSION_MOUTH_DIMPLE_L_BD = 33,
XR_FACE_EXPRESSION_MOUTH_DIMPLE_R_BD = 34,
XR_FACE_EXPRESSION_MOUTH_STRETCH_L_BD = 35,
XR_FACE_EXPRESSION_MOUTH_STRETCH_R_BD = 36,
XR_FACE_EXPRESSION_MOUTH_ROLL_LOWER_BD = 37,
XR_FACE_EXPRESSION_MOUTH_ROLL_UPPER_BD = 38,
XR_FACE_EXPRESSION_MOUTH_SHRUG_LOWER_BD = 39,
XR_FACE_EXPRESSION_MOUTH_SHRUG_UPPER_BD = 40,
XR_FACE_EXPRESSION_MOUTH_PRESS_L_BD = 41,
XR_FACE_EXPRESSION_MOUTH_PRESS_R_BD = 42,
XR_FACE_EXPRESSION_MOUTH_LOWER_DROP_L_BD = 43,
XR_FACE_EXPRESSION_MOUTH_LOWER_DROP_R_BD = 44,
XR_FACE_EXPRESSION_MOUTH_UPPER_UPWARDS_L_BD = 45,
XR_FACE_EXPRESSION_MOUTH_UPPER_UPWARDS_R_BD = 46,
XR_FACE_EXPRESSION_JAW_FORWARD_BD = 47,
XR_FACE_EXPRESSION_JAW_L_BD = 48,
XR_FACE_EXPRESSION_JAW_R_BD = 49,
XR_FACE_EXPRESSION_JAW_OPEN_BD = 50,
XR_FACE_EXPRESSION_TONGUE_OUT_BD = 51,
XR_FACE_EXPRESSION_MAX_ENUM_BD = 0x7FFFFFFF
} XrFaceExpressionBD;
// Provided by XR_BD_facial_simulation
typedef enum XrLipExpressionBD {
XR_LIP_EXPRESSION_PP_BD = 0,
XR_LIP_EXPRESSION_CH_BD = 1,
XR_LIP_EXPRESSION_LO_BD = 2,
XR_LIP_EXPRESSION_O_BD = 3,
XR_LIP_EXPRESSION_I_BD = 4,
XR_LIP_EXPRESSION_LU_BD = 5,
XR_LIP_EXPRESSION_RR_BD = 6,
XR_LIP_EXPRESSION_XX_BD = 7,
XR_LIP_EXPRESSION_LAA_BD = 8,
XR_LIP_EXPRESSION_LI_BD = 9,
XR_LIP_EXPRESSION_FF_BD = 10,
XR_LIP_EXPRESSION_U_BD = 11,
XR_LIP_EXPRESSION_TH_BD = 12,
XR_LIP_EXPRESSION_LKK_BD = 13,
XR_LIP_EXPRESSION_SS_BD = 14,
XR_LIP_EXPRESSION_LE_BD = 15,
XR_LIP_EXPRESSION_DD_BD = 16,
XR_LIP_EXPRESSION_E_BD = 17,
XR_LIP_EXPRESSION_LNN_BD = 18,
XR_LIP_EXPRESSION_SIL_BD = 19,
XR_LIP_EXPRESSION_MAX_ENUM_BD = 0x7FFFFFFF
} XrLipExpressionBD;
12.64.15. New Enum Constants
-
XR_BD_FACIAL_SIMULATION_EXTENSION_NAME -
XR_BD_facial_simulation_SPEC_VERSION -
Extending XrObjectType:
-
XR_OBJECT_TYPE_FACE_TRACKER_BD
-
-
Extending XrStructureType:
-
XR_TYPE_FACE_TRACKER_CREATE_INFO_BD -
XR_TYPE_FACIAL_SIMULATION_DATA_BD -
XR_TYPE_FACIAL_SIMULATION_DATA_GET_INFO_BD -
XR_TYPE_LIP_EXPRESSION_DATA_BD -
XR_TYPE_SYSTEM_FACIAL_SIMULATION_PROPERTIES_BD
-
12.65. XR_BD_future_progress
- Name String
-
XR_BD_future_progress - Extension Type
-
Instance extension
- Registered Extension Number
-
395
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2025-02-25
- IP Status
-
No known IP claims.
- Contributors
-
Zhipeng Liu, Bytedance
Ya Huang, Bytedance
12.65.1. Overview
Asynchronous operations may take a long time to complete.
Providing progress hints before an operation completes can significantly
improve the user experience.
This extension allows an application to get a rough progress percentage
for an asynchronous operation before it completes.
This extension requires XR_EXT_future to be enabled.
12.65.2. Get Future Progress
The XrFuturePollResultProgressBD structure is defined as:
// Provided by XR_BD_future_progress
typedef struct XrFuturePollResultProgressBD {
XrStructureType type;
void* next;
XrBool32 isSupported;
uint32_t progressPercentage;
} XrFuturePollResultProgressBD;
As defined in XR_EXT_future, an XrFutureEXT is returned by a
successful call to an asynchronous function.
Applications can use xrPollFutureEXT to check the current state of a future, typically while waiting for the async operation to complete and the future to become "ready" to complete. An XrFuturePollResultEXT structure is used to return the result of xrPollFutureEXT.
With this extension, the application can chain an
XrFuturePollResultProgressBD to
XrFuturePollResultEXT::next to get a rough progress percentage
of the asynchronous operation.
If the runtime does not support reporting progress for a specific future,
it must set isSupported to XR_FALSE; in this case, the application
should ignore the value of progressPercentage.
If the runtime supports reporting progress for the specific future, it
must set isSupported to XR_TRUE.
The progressPercentage is only valid when the future is in either the
XR_FUTURE_STATE_PENDING_EXT or XR_FUTURE_STATE_READY_EXT state.
The runtime must not set progressPercentage to a value less than 0 or
greater than 100.
The runtime must set progressPercentage to 100 if the future is in
the state XR_FUTURE_STATE_READY_EXT.
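The rules above can be condensed into a small consumer-side sketch; the future state values below are assumed to mirror XrFutureStateEXT from XR_EXT_future:

```cpp
#include <cassert>
#include <cstdint>

// Assumed to mirror XrFutureStateEXT (XR_EXT_future).
enum FutureState { kPending = 1, kReady = 2 };

// Returns the usable progress percentage, or -1 when the value should
// be ignored (progress unsupported, or outside the valid 0..100 range).
int usableProgress(bool isSupported, FutureState state, uint32_t pct) {
    if (!isSupported) return -1;      // runtime cannot report progress
    if (state == kReady) return 100;  // runtime must report 100 when ready
    if (pct > 100) return -1;         // would violate the spec; ignore
    return static_cast<int>(pct);
}
```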
12.65.4. New Enum Constants
-
XR_BD_FUTURE_PROGRESS_EXTENSION_NAME -
XR_BD_future_progress_SPEC_VERSION -
Extending XrStructureType:
-
XR_TYPE_FUTURE_POLL_RESULT_PROGRESS_BD
-
12.66. XR_BD_spatial_anchor
- Name String
-
XR_BD_spatial_anchor - Extension Type
-
Instance extension
- Registered Extension Number
-
391
- Revision
-
2
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2025-06-10
- IP Status
-
No known IP claims.
- Contributors
-
Zhipeng Liu, ByteDance
Zijian Wang, ByteDance
Zhao Li, ByteDance
Jun Yan, ByteDance
Jimmy Alamparambil, ByteDance
12.66.1. Overview
This extension allows an application to create and destroy spatial anchors, each representing an arbitrary point of interest in the user’s physical environment. The runtime tracks the position and orientation of each spatial anchor over time and adjusts them as needed to ensure that the anchor maintains its original mapping to the real world.
This extension also allows an application to persist or unpersist spatial
anchors.
Persisting a spatial anchor means saving it to persistent storage, and the
runtime can reload and locate it in a subsequent session.
Unpersisting an anchor means erasing it from persistent storage, so that it no longer appears in snapshots or queried data in subsequent sessions; it does, however, still appear in snapshots and queried data within the same session.
Both the persisting and unpersisting operations must not affect the
existing spatial anchor handles in the same session.
This allows spatial anchors to be shared and localized across application
sessions on the device for the same application.
This extension requires XR_BD_spatial_sensing to be enabled.
12.66.2. Permission
A runtime on an Android-based platform must verify that applications have
the com.picovr.permission.SPATIAL_DATA permission both listed in their
manifest and granted to use XR_BD_spatial_anchor functionality.
Without it, the runtime must set
XrFutureCompletionEXT::futureResult to
XR_ERROR_PERMISSION_INSUFFICIENT when
xrStartSenseDataProviderCompleteBD is called.
This is an auto-requested permission: if it is listed in the manifest but not yet granted or denied, the runtime must prompt the user to grant or deny the permission when xrCreateSenseDataProviderBD is called with a provider type that requires it.
This permission is also used by XR_BD_spatial_mesh, XR_BD_spatial_scene, and XR_BD_spatial_plane.
12.66.3. Inspect System Capability
The XrSystemSpatialAnchorPropertiesBD structure is defined as:
// Provided by XR_BD_spatial_anchor
typedef struct XrSystemSpatialAnchorPropertiesBD {
XrStructureType type;
void* next;
XrBool32 supportsSpatialAnchor;
} XrSystemSpatialAnchorPropertiesBD;
An application can inspect whether the system is capable of
application-created spatial anchors by chaining an
XrSystemSpatialAnchorPropertiesBD structure to the
XrSystemProperties::next chain when calling
xrGetSystemProperties.
If a runtime returns XR_FALSE for supportsSpatialAnchor, the
system does not support creating arbitrary spatial anchors, and must return
XR_ERROR_FEATURE_UNSUPPORTED from xrCreateSpatialAnchorAsyncBD,
as well as from xrCreateSenseDataProviderBD when passing the
XrSenseDataProviderTypeBD value
XR_SENSE_DATA_PROVIDER_TYPE_ANCHOR_BD.
The application should avoid using spatial anchor functionality when
supportsSpatialAnchor is XR_FALSE.
If XrSystemSpatialSensingPropertiesBD::supportsSpatialSensing is
XR_FALSE, then supportsSpatialAnchor must also be
XR_FALSE.
If a runtime returns XR_TRUE for supportsSpatialAnchor, the
system supports creating and storing arbitrary spatial anchors.
This implies that
XrSystemSpatialSensingPropertiesBD::supportsSpatialSensing must
also be XR_TRUE.
Note that supportsSpatialAnchor may be XR_TRUE even when running
on an Android-based platform where the application does not have the
required com.picovr.permission.SPATIAL_DATA permission both declared
in the manifest and granted at runtime.
Evaluation of permissions takes place later, in the asynchronous operation
started by xrStartSenseDataProviderAsyncBD.
12.66.4. Create Spatial Anchor Provider
An application creates an XrSenseDataProviderBD handle representing a
spatial anchor provider by calling xrCreateSenseDataProviderBD and
setting XrSenseDataProviderCreateInfoBD::providerType equal to
the XrSenseDataProviderTypeBD value
XR_SENSE_DATA_PROVIDER_TYPE_ANCHOR_BD.
An application uses such a provider to create or query spatial anchors.
This provider type does not define any configuration and does not require a chained structure.
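The creation call described above might look as follows. This is an informative sketch: xrCreateSenseDataProviderBD and its create-info structure are defined in XR_BD_spatial_sensing, and the XR_TYPE_SENSE_DATA_PROVIDER_CREATE_INFO_BD constant shown here is assumed from that extension; `session` is a valid XrSession.

```c
// Create a sense data provider for spatial anchors.
XrSenseDataProviderCreateInfoBD createInfo = {XR_TYPE_SENSE_DATA_PROVIDER_CREATE_INFO_BD};
createInfo.providerType = XR_SENSE_DATA_PROVIDER_TYPE_ANCHOR_BD;

XrSenseDataProviderBD anchorProvider = XR_NULL_HANDLE;
XrResult result = xrCreateSenseDataProviderBD(session, &createInfo, &anchorProvider);
// On success, anchorProvider is used for all anchor create/query calls below.
```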
12.66.5. Start Spatial Anchor Provider
Applications start the spatial anchor data provider by calling xrStartSenseDataProviderAsyncBD after it is successfully created. To check the data provider state, call xrGetSenseDataProviderStateBD.
Subsequent application operations using this provider handle must not be
performed unless the data provider state is
XR_SENSE_DATA_PROVIDER_STATE_RUNNING_BD.
If the data provider state is not
XR_SENSE_DATA_PROVIDER_STATE_RUNNING_BD and the application needs to
use the provider, the application must take appropriate action and try to
call xrStartSenseDataProviderAsyncBD again before using the handle.
Detailed definitions and usage details are described in
XR_BD_spatial_sensing.
Upon start, the provider immediately begins trying to load and locate any previously persisted spatial anchors. See Query Spatial Anchor and xrPersistSpatialAnchorAsyncBD for information on how to work with previously-persisted anchors.
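The start-and-check sequence above can be sketched as follows. Signatures and structure names outside this extension (the start-info structure, the completion call, and the state query) come from XR_BD_spatial_sensing and are assumptions here; polling is elided.

```c
// Start the anchor provider asynchronously, then gate later use on its state.
XrSenseDataProviderStartInfoBD startInfo = {XR_TYPE_SENSE_DATA_PROVIDER_START_INFO_BD};
XrFutureEXT startFuture = XR_NULL_FUTURE_EXT;
xrStartSenseDataProviderAsyncBD(provider, &startInfo, &startFuture);

// ... poll startFuture with xrPollFutureEXT until READY, then call
// xrStartSenseDataProviderCompleteBD and check futureResult ...

XrSenseDataProviderStateBD state;
xrGetSenseDataProviderStateBD(provider, &state);
if (state != XR_SENSE_DATA_PROVIDER_STATE_RUNNING_BD) {
    // Take appropriate action, e.g. call xrStartSenseDataProviderAsyncBD
    // again, before using the provider handle.
}
```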
12.66.6. Create Spatial Anchor
The xrCreateSpatialAnchorAsyncBD function is defined as:
// Provided by XR_BD_spatial_anchor
XrResult xrCreateSpatialAnchorAsyncBD(
XrSenseDataProviderBD provider,
const XrSpatialAnchorCreateInfoBD* info,
XrFutureEXT* future);
An application creates a spatial anchor by calling this function.
This function starts an asynchronous operation and creates a corresponding
XrFutureEXT, usable with xrPollFutureEXT and related
functions.
The return value of this function only indicates whether the parameters were
acceptable to schedule the asynchronous operation.
The corresponding completion function is
xrCreateSpatialAnchorCompleteBD, usable when a future from this
function is in the READY state, with outputs populated
by that function in the completion structure
XrSpatialAnchorCreateCompletionBD.
If the provider was not created with the
XrSenseDataProviderTypeBD value
XR_SENSE_DATA_PROVIDER_TYPE_ANCHOR_BD, the function
xrCreateSpatialAnchorAsyncBD must return
XR_ERROR_VALIDATION_FAILURE.
The XrSpatialAnchorCreateInfoBD structure is defined as:
// Provided by XR_BD_spatial_anchor
typedef struct XrSpatialAnchorCreateInfoBD {
XrStructureType type;
const void* next;
XrSpace space;
XrPosef pose;
XrTime time;
} XrSpatialAnchorCreateInfoBD;
The xrCreateSpatialAnchorCompleteBD function is defined as:
// Provided by XR_BD_spatial_anchor
XrResult xrCreateSpatialAnchorCompleteBD(
XrSenseDataProviderBD provider,
XrFutureEXT future,
XrSpatialAnchorCreateCompletionBD* completion);
The application obtains the spatial anchor create result using xrCreateSpatialAnchorCompleteBD, after the future is ready.
This is the completion function corresponding to
xrCreateSpatialAnchorAsyncBD.
It completes the asynchronous operation and returns the results.
Do not call until the future is READY.
The XrSpatialAnchorCreateCompletionBD structure is defined as:
// Provided by XR_BD_spatial_anchor
typedef struct XrSpatialAnchorCreateCompletionBD {
XrStructureType type;
void* next;
XrResult futureResult;
XrUuidEXT uuid;
XrAnchorBD anchor;
} XrSpatialAnchorCreateCompletionBD;
This is the completion data structure associated with the asynchronous operation started by xrCreateSpatialAnchorAsyncBD and completed by xrCreateSpatialAnchorCompleteBD.
It is populated by a valid call to xrCreateSpatialAnchorCompleteBD on a corresponding, READY future.
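Putting the asynchronous pair together, anchor creation might look like the following informative sketch. The variables `provider`, `localSpace`, `poseInSpace`, and `predictedDisplayTime` are placeholders for values the application already holds; polling is elided.

```c
// Request creation of an anchor at a pose in a known space and time.
XrSpatialAnchorCreateInfoBD info = {XR_TYPE_SPATIAL_ANCHOR_CREATE_INFO_BD};
info.space = localSpace;              // e.g. a LOCAL reference space
info.pose  = poseInSpace;             // desired anchor pose in that space
info.time  = predictedDisplayTime;    // time at which the pose is valid

XrFutureEXT future = XR_NULL_FUTURE_EXT;
XrResult res = xrCreateSpatialAnchorAsyncBD(provider, &info, &future);

// ... poll with xrPollFutureEXT until the future is READY ...

XrSpatialAnchorCreateCompletionBD completion = {XR_TYPE_SPATIAL_ANCHOR_CREATE_COMPLETION_BD};
res = xrCreateSpatialAnchorCompleteBD(provider, future, &completion);
if (XR_SUCCEEDED(res) && XR_SUCCEEDED(completion.futureResult)) {
    // completion.anchor is the new anchor handle, and completion.uuid
    // identifies it for persistence and sharing.
}
```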
12.66.7. Locate Spatial Anchor
Locating an anchor relative to a base space is performed similarly to
locating other spatial objects: through use of an XrSpace handle and
functions like xrLocateSpace and xrLocateSpaces.
To locate an anchor in a base space, first create an XrSpace handle
for that anchor using xrCreateAnchorSpaceBD.
The function and related behaviors are defined in
XR_BD_spatial_sensing.
12.66.8. Persist Spatial Anchor
The xrPersistSpatialAnchorAsyncBD function is defined as:
// Provided by XR_BD_spatial_anchor
XrResult xrPersistSpatialAnchorAsyncBD(
XrSenseDataProviderBD provider,
const XrSpatialAnchorPersistInfoBD* info,
XrFutureEXT* future);
To persist a spatial anchor, call xrPersistSpatialAnchorAsyncBD.
Persisting a spatial anchor means storing it to persistent storage, allowing it to be reloaded and located in subsequent sessions by the same application.
This function starts an asynchronous operation and creates a corresponding
XrFutureEXT, usable with xrPollFutureEXT and related
functions.
The return value of this function only indicates whether the parameters were
acceptable to schedule the asynchronous operation.
The corresponding completion function is
xrPersistSpatialAnchorCompleteBD, usable when a future from this
function is in the READY state, with outputs populated
by that function in the completion structure XrFutureCompletionEXT.
If a previously persisted spatial anchor is successfully loaded and located by the runtime after starting a spatial anchor provider, the runtime must queue an XrEventDataSenseDataUpdatedBD event.
In subsequent sessions, to retrieve the previously persisted spatial anchor
or anchors, after receiving the XrEventDataSenseDataUpdatedBD event,
use the normal practice of creating a snapshot and getting the queried sense
data (call xrQuerySenseDataAsyncBD, xrQuerySenseDataCompleteBD,
xrGetQueriedSenseDataBD per the specification).
Tracking data is accessible by calling xrCreateSpatialEntityAnchorBD
and xrCreateAnchorSpaceBD as specified.
All of these functions are defined in XR_BD_spatial_sensing.
If the XrAnchorBD is already persisted, calling this function does not
change the persistence status.
In this case, the runtime must still set the return value of
xrPersistSpatialAnchorAsyncBD to XR_SUCCESS.
The runtime must set the corresponding future result returned by
xrPersistSpatialAnchorCompleteBD to XR_SUCCESS.
The XrSpatialAnchorPersistInfoBD structure is defined as:
// Provided by XR_BD_spatial_anchor
typedef struct XrSpatialAnchorPersistInfoBD {
XrStructureType type;
const void* next;
XrPersistenceLocationBD location;
XrAnchorBD anchor;
} XrSpatialAnchorPersistInfoBD;
The XrPersistenceLocationBD enumeration identifies the different persistence locations.
// Provided by XR_BD_spatial_anchor
typedef enum XrPersistenceLocationBD {
XR_PERSISTENCE_LOCATION_LOCAL_BD = 0,
XR_PERSISTENCE_LOCATION_MAX_ENUM_BD = 0x7FFFFFFF
} XrPersistenceLocationBD;
| Enum | Description |
|---|---|
| XR_PERSISTENCE_LOCATION_LOCAL_BD | The persistence storage location is local to the device. |
The xrPersistSpatialAnchorCompleteBD function is defined as:
// Provided by XR_BD_spatial_anchor
XrResult xrPersistSpatialAnchorCompleteBD(
XrSenseDataProviderBD provider,
XrFutureEXT future,
XrFutureCompletionEXT* completion);
To complete persisting an anchor and retrieve the result, call xrPersistSpatialAnchorCompleteBD.
This is the completion function corresponding to
xrPersistSpatialAnchorAsyncBD.
It completes the asynchronous operation and returns the results.
Do not call until the future is READY.
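A minimal persist sequence, as an informative sketch (`provider` and `anchor` are assumed valid; polling is elided):

```c
// Save an existing anchor to local persistent storage.
XrSpatialAnchorPersistInfoBD persistInfo = {XR_TYPE_SPATIAL_ANCHOR_PERSIST_INFO_BD};
persistInfo.location = XR_PERSISTENCE_LOCATION_LOCAL_BD;
persistInfo.anchor = anchor;

XrFutureEXT future = XR_NULL_FUTURE_EXT;
xrPersistSpatialAnchorAsyncBD(provider, &persistInfo, &future);

// ... poll with xrPollFutureEXT until READY ...

XrFutureCompletionEXT completion = {XR_TYPE_FUTURE_COMPLETION_EXT};
xrPersistSpatialAnchorCompleteBD(provider, future, &completion);
// completion.futureResult is XR_SUCCESS even if the anchor was
// already persisted.
```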
12.66.9. Unpersist Spatial Anchor
The xrUnpersistSpatialAnchorAsyncBD function is defined as:
// Provided by XR_BD_spatial_anchor
XrResult xrUnpersistSpatialAnchorAsyncBD(
XrSenseDataProviderBD provider,
const XrSpatialAnchorUnpersistInfoBD* info,
XrFutureEXT* future);
To unpersist a spatial anchor, call xrUnpersistSpatialAnchorAsyncBD.
Unpersisting an anchor means erasing it from persistent storage, which causes it to not appear in snapshots or queried data in subsequent sessions. However, it must still appear in snapshots and queried data in the same session.
This function starts an asynchronous operation and creates a corresponding
XrFutureEXT, usable with xrPollFutureEXT and related
functions.
The return value of this function only indicates whether the parameters were
acceptable to schedule the asynchronous operation.
The corresponding completion function is
xrUnpersistSpatialAnchorCompleteBD, usable when a future from this
function is in the READY state, with outputs populated
by that function in the completion structure XrFutureCompletionEXT.
The XrSpatialAnchorUnpersistInfoBD structure is defined as:
// Provided by XR_BD_spatial_anchor
typedef struct XrSpatialAnchorUnpersistInfoBD {
XrStructureType type;
const void* next;
XrPersistenceLocationBD location;
XrAnchorBD anchor;
} XrSpatialAnchorUnpersistInfoBD;
The xrUnpersistSpatialAnchorCompleteBD function is defined as:
// Provided by XR_BD_spatial_anchor
XrResult xrUnpersistSpatialAnchorCompleteBD(
XrSenseDataProviderBD provider,
XrFutureEXT future,
XrFutureCompletionEXT* completion);
The application obtains the spatial anchor unpersist result using xrUnpersistSpatialAnchorCompleteBD.
This is the completion function corresponding to
xrUnpersistSpatialAnchorAsyncBD.
It completes the asynchronous operation and returns the results.
Do not call until the future is READY.
The runtime must set XrFutureCompletionEXT::futureResult to
XR_ERROR_SPATIAL_ANCHOR_NOT_FOUND_BD if the spatial anchor is not
found in XrSpatialAnchorUnpersistInfoBD::location.
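The unpersist sequence mirrors the persist one; the sketch below also shows the documented not-found case (`provider` and `anchor` assumed valid, polling elided):

```c
// Erase a previously persisted anchor from local storage.
XrSpatialAnchorUnpersistInfoBD unpersistInfo = {XR_TYPE_SPATIAL_ANCHOR_UNPERSIST_INFO_BD};
unpersistInfo.location = XR_PERSISTENCE_LOCATION_LOCAL_BD;
unpersistInfo.anchor = anchor;

XrFutureEXT future = XR_NULL_FUTURE_EXT;
xrUnpersistSpatialAnchorAsyncBD(provider, &unpersistInfo, &future);

// ... poll with xrPollFutureEXT until READY ...

XrFutureCompletionEXT completion = {XR_TYPE_FUTURE_COMPLETION_EXT};
xrUnpersistSpatialAnchorCompleteBD(provider, future, &completion);
if (completion.futureResult == XR_ERROR_SPATIAL_ANCHOR_NOT_FOUND_BD) {
    // The anchor was not persisted at this location.
}
```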
12.66.10. Query Spatial Anchor
If spatial anchors were previously persisted, they may be detected and located again by the runtime when the user is in the same physical space where they are persisted. For details on operations made available by persisting an anchor, see xrPersistSpatialAnchorAsyncBD.
12.66.14. New Enum Constants
- XR_BD_SPATIAL_ANCHOR_EXTENSION_NAME
- XR_BD_spatial_anchor_SPEC_VERSION

Extending XrResult:
- XR_ERROR_SPATIAL_ANCHOR_NOT_FOUND_BD

Extending XrSenseDataProviderTypeBD:
- XR_SENSE_DATA_PROVIDER_TYPE_ANCHOR_BD

Extending XrStructureType:
- XR_TYPE_SPATIAL_ANCHOR_CREATE_COMPLETION_BD
- XR_TYPE_SPATIAL_ANCHOR_CREATE_INFO_BD
- XR_TYPE_SPATIAL_ANCHOR_PERSIST_INFO_BD
- XR_TYPE_SPATIAL_ANCHOR_UNPERSIST_INFO_BD
- XR_TYPE_SYSTEM_SPATIAL_ANCHOR_PROPERTIES_BD
12.66.16. Version History
- Revision 1, 2024-05-06 (Zhipeng Liu)
  - Initial extension description
- Revision 2, 2025-06-10 (Zhipeng Liu)
  - Move anchor to the last parameter of XrSpatialAnchorCreateCompletionBD.
12.67. XR_BD_spatial_anchor_sharing
- Name String: XR_BD_spatial_anchor_sharing
- Extension Type: Instance extension
- Registered Extension Number: 392
- Revision: 2
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-06-02
- IP Status: No known IP claims.
- Contributors:
  Zhipeng Liu, ByteDance
  Ya Huang, ByteDance
  Xiangxin Liu, ByteDance
  Jun Yan, ByteDance
  Jimmy Alamparambil, ByteDance
12.67.1. Overview
This extension extends XR_BD_spatial_anchor and allows applications
to share spatial anchors between different sessions on different devices.
This extension requires XR_BD_spatial_anchor to be enabled.
12.67.2. Inspect System Capability
// Provided by XR_BD_spatial_anchor_sharing
typedef struct XrSystemSpatialAnchorSharingPropertiesBD {
XrStructureType type;
void* next;
XrBool32 supportsSpatialAnchorSharing;
} XrSystemSpatialAnchorSharingPropertiesBD;
An application can inspect whether the system is capable of spatial anchor
sharing by chaining an XrSystemSpatialAnchorSharingPropertiesBD
structure to the XrSystemProperties::next chain when calling
xrGetSystemProperties.
If a runtime returns XR_FALSE for supportsSpatialAnchorSharing,
the system does not support spatial anchor sharing.
The application should avoid using spatial anchor sharing functionality
when supportsSpatialAnchorSharing is XR_FALSE.
If XrSystemSpatialSensingPropertiesBD::supportsSpatialSensing is
XR_FALSE, then supportsSpatialAnchorSharing must also be
XR_FALSE.
Similarly, if
XrSystemSpatialAnchorPropertiesBD::supportsSpatialAnchor is
XR_FALSE, then supportsSpatialAnchorSharing must also be
XR_FALSE.
If a runtime returns XR_FALSE for supportsSpatialAnchorSharing,
the runtime must return XR_ERROR_FEATURE_UNSUPPORTED for all the
functions defined in this extension.
If a runtime returns XR_TRUE for supportsSpatialAnchorSharing,
the system supports spatial anchor sharing.
This implies that
XrSystemSpatialSensingPropertiesBD::supportsSpatialSensing and
XrSystemSpatialAnchorPropertiesBD::supportsSpatialAnchor must
also be XR_TRUE.
12.67.3. Create Spatial Anchor Provider
See Create Spatial Anchor Provider in XR_BD_spatial_anchor for
information on creating a provider.
12.67.4. Share Spatial Anchor
The xrShareSpatialAnchorAsyncBD function is defined as:
// Provided by XR_BD_spatial_anchor_sharing
XrResult xrShareSpatialAnchorAsyncBD(
XrSenseDataProviderBD provider,
const XrSpatialAnchorShareInfoBD* info,
XrFutureEXT* future);
To share a spatial anchor, call xrShareSpatialAnchorAsyncBD.
This function starts an asynchronous operation and creates a corresponding
XrFutureEXT, usable with xrPollFutureEXT and related
functions.
The return value of this function only indicates whether the parameters were
acceptable to schedule the asynchronous operation.
The corresponding completion function is
xrShareSpatialAnchorCompleteBD, usable when a future from this
function is in the READY state, with outputs populated by that function in
the completion structure XrFutureCompletionEXT.
Shared spatial anchors are shared with other sessions and devices using a runtime-defined method such as cloud storage.
The XrSpatialAnchorShareInfoBD structure is defined as:
// Provided by XR_BD_spatial_anchor_sharing
typedef struct XrSpatialAnchorShareInfoBD {
XrStructureType type;
const void* next;
XrAnchorBD anchor;
} XrSpatialAnchorShareInfoBD;
The xrShareSpatialAnchorCompleteBD function is defined as:
// Provided by XR_BD_spatial_anchor_sharing
XrResult xrShareSpatialAnchorCompleteBD(
XrSenseDataProviderBD provider,
XrFutureEXT future,
XrFutureCompletionEXT* completion);
The application obtains the spatial anchor sharing result using xrShareSpatialAnchorCompleteBD.
This is the completion function corresponding to the operation started by
xrShareSpatialAnchorAsyncBD.
Do not call until the future is READY.
The XrFutureCompletionEXT structure is defined in
XR_EXT_future.
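A minimal share sequence, as an informative sketch (`provider` and `anchor` assumed valid, polling elided):

```c
// Share an anchor so other devices can download it by UUID.
XrSpatialAnchorShareInfoBD shareInfo = {XR_TYPE_SPATIAL_ANCHOR_SHARE_INFO_BD};
shareInfo.anchor = anchor;

XrFutureEXT future = XR_NULL_FUTURE_EXT;
xrShareSpatialAnchorAsyncBD(provider, &shareInfo, &future);

// ... poll with xrPollFutureEXT until READY ...

XrFutureCompletionEXT completion = {XR_TYPE_FUTURE_COMPLETION_EXT};
xrShareSpatialAnchorCompleteBD(provider, future, &completion);
// On XR_SUCCESS, the anchor's UUID can be communicated to other
// devices (e.g. over the application's own network channel).
```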
12.67.5. Download Shared Spatial Anchor
The xrDownloadSharedSpatialAnchorAsyncBD function is defined as:
// Provided by XR_BD_spatial_anchor_sharing
XrResult xrDownloadSharedSpatialAnchorAsyncBD(
XrSenseDataProviderBD provider,
const XrSharedSpatialAnchorDownloadInfoBD* info,
XrFutureEXT* future);
In order to access the shared spatial anchor, the application first downloads the anchor. To download a shared spatial anchor, call xrDownloadSharedSpatialAnchorAsyncBD.
The spatial anchor will be downloaded to the local device, and the runtime will then locate it.
This function starts an asynchronous operation and creates a corresponding
XrFutureEXT, usable with xrPollFutureEXT and related
functions.
The return value of this function only indicates whether the parameters were
acceptable to schedule the asynchronous operation.
The corresponding completion function is
xrDownloadSharedSpatialAnchorCompleteBD, usable when a future from
this function is in the READY state, with outputs
populated by that function in the completion structure
XrFutureCompletionEXT.
The XrSharedSpatialAnchorDownloadInfoBD structure is defined as:
// Provided by XR_BD_spatial_anchor_sharing
typedef struct XrSharedSpatialAnchorDownloadInfoBD {
XrStructureType type;
const void* next;
XrUuidEXT uuid;
} XrSharedSpatialAnchorDownloadInfoBD;
The xrDownloadSharedSpatialAnchorCompleteBD function is defined as:
// Provided by XR_BD_spatial_anchor_sharing
XrResult xrDownloadSharedSpatialAnchorCompleteBD(
XrSenseDataProviderBD provider,
XrFutureEXT future,
XrFutureCompletionEXT* completion);
The application obtains the spatial anchor download result using xrDownloadSharedSpatialAnchorCompleteBD.
This is the completion function corresponding to the operation started by
xrDownloadSharedSpatialAnchorAsyncBD.
Do not call until the future is READY.
The XrFutureCompletionEXT structure is defined in
XR_EXT_future.
If the spatial anchor is downloaded to the local device, the runtime must
set the XrFutureCompletionEXT::futureResult value to
XR_SUCCESS.
This indicates that the spatial anchor has been successfully downloaded to
the local device and located.
To obtain the spatial anchors from the spatial anchor data provider, call
xrQuerySenseDataAsyncBD, xrQuerySenseDataCompleteBD,
xrGetQueriedSenseDataBD, and xrCreateSpatialEntityAnchorBD.
All of these functions are defined in XR_BD_spatial_sensing.
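On the receiving device, the download sequence might look as follows. This is an informative sketch: `sharedUuid` is assumed to have been communicated out-of-band from the sharing device, and polling is elided.

```c
// Download a shared anchor by its UUID, then complete the operation.
XrSharedSpatialAnchorDownloadInfoBD downloadInfo = {XR_TYPE_SHARED_SPATIAL_ANCHOR_DOWNLOAD_INFO_BD};
downloadInfo.uuid = sharedUuid;   // UUID received from the sharing device

XrFutureEXT future = XR_NULL_FUTURE_EXT;
xrDownloadSharedSpatialAnchorAsyncBD(provider, &downloadInfo, &future);

// ... poll with xrPollFutureEXT until READY ...

XrFutureCompletionEXT completion = {XR_TYPE_FUTURE_COMPLETION_EXT};
xrDownloadSharedSpatialAnchorCompleteBD(provider, future, &completion);
if (XR_SUCCEEDED(completion.futureResult)) {
    // The anchor is downloaded and located; obtain it through the
    // snapshot/query functions of XR_BD_spatial_sensing.
}
```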
Note
It is implementation defined whether the runtime implements permission controls, in which case it may set XrFutureCompletionEXT::futureResult accordingly.
12.67.8. New Enum Constants
- XR_BD_SPATIAL_ANCHOR_SHARING_EXTENSION_NAME
- XR_BD_spatial_anchor_sharing_SPEC_VERSION

Extending XrResult:
- XR_ERROR_SPATIAL_ANCHOR_SHARING_AUTHENTICATION_FAILURE_BD
- XR_ERROR_SPATIAL_ANCHOR_SHARING_LOCALIZATION_FAIL_BD
- XR_ERROR_SPATIAL_ANCHOR_SHARING_MAP_INSUFFICIENT_BD
- XR_ERROR_SPATIAL_ANCHOR_SHARING_NETWORK_FAILURE_BD
- XR_ERROR_SPATIAL_ANCHOR_SHARING_NETWORK_TIMEOUT_BD

Extending XrStructureType:
- XR_TYPE_SHARED_SPATIAL_ANCHOR_DOWNLOAD_INFO_BD
- XR_TYPE_SPATIAL_ANCHOR_SHARE_INFO_BD
- XR_TYPE_SYSTEM_SPATIAL_ANCHOR_SHARING_PROPERTIES_BD
12.68. XR_BD_spatial_mesh
- Name String: XR_BD_spatial_mesh
- Extension Type: Instance extension
- Registered Extension Number: 394
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-03-26
- IP Status: No known IP claims.
- Contributors:
  Zhipeng Liu, ByteDance
  Zhanrui Jia, ByteDance
  Xu Yang, ByteDance
  Jun Yan, ByteDance
  Jimmy Alamparambil, ByteDance
12.68.1. Overview
This extension allows applications to request the runtime to detect and track spatial scene meshes of the physical environment.
This extension requires XR_BD_spatial_sensing to be enabled.
12.68.2. Permission
A runtime on an Android-based platform must verify that applications have
the com.picovr.permission.SPATIAL_DATA permission both listed in their
manifest and granted to use spatial mesh functionality.
Without it, the runtime must set
XrFutureCompletionEXT::futureResult to
XR_ERROR_PERMISSION_INSUFFICIENT when
xrStartSenseDataProviderCompleteBD is called.
This is an auto-requested permission: if it is listed in the manifest but not yet granted or denied, the runtime must prompt the user to grant or deny the permission when xrCreateSenseDataProviderBD is called with a provider type that requires it.
This permission is also used by XR_BD_spatial_anchor, XR_BD_spatial_scene, and XR_BD_spatial_plane.
12.68.3. Inspect System Capability
The XrSystemSpatialMeshPropertiesBD structure is defined as:
// Provided by XR_BD_spatial_mesh
typedef struct XrSystemSpatialMeshPropertiesBD {
XrStructureType type;
void* next;
XrBool32 supportsSpatialMesh;
} XrSystemSpatialMeshPropertiesBD;
An application can inspect whether the system is capable of spatial mesh
functionality by chaining an XrSystemSpatialMeshPropertiesBD structure
to the XrSystemProperties::next chain when calling
xrGetSystemProperties.
If a runtime returns XR_FALSE for supportsSpatialMesh, the
system does not support spatial mesh functionality, and the runtime must
return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateSenseDataProviderBD when called with
XR_SENSE_DATA_PROVIDER_TYPE_MESH_BD.
The application should avoid using spatial mesh functionality when
supportsSpatialMesh is XR_FALSE.
If XrSystemSpatialSensingPropertiesBD::supportsSpatialSensing is
XR_FALSE, then supportsSpatialMesh must also be XR_FALSE.
If a runtime returns XR_TRUE for supportsSpatialMesh, the system
supports spatial mesh functionality.
This implies that
XrSystemSpatialSensingPropertiesBD::supportsSpatialSensing must
also be XR_TRUE.
Note that supportsSpatialMesh may be XR_TRUE even when running on
an Android-based platform where the application does not have the
required com.picovr.permission.SPATIAL_DATA permission both declared
in the manifest and granted at runtime.
Evaluation of permissions takes place later, in the asynchronous operation
started by xrStartSenseDataProviderAsyncBD.
12.68.4. Create Spatial Mesh Provider
The XrSenseDataProviderCreateInfoSpatialMeshBD structure is defined as:
// Provided by XR_BD_spatial_mesh
typedef struct XrSenseDataProviderCreateInfoSpatialMeshBD {
XrStructureType type;
const void* next;
XrSpatialMeshConfigFlagsBD configFlags;
XrSpatialMeshLodBD lod;
} XrSenseDataProviderCreateInfoSpatialMeshBD;
An application creates an XrSenseDataProviderBD handle representing a
spatial mesh provider by calling xrCreateSenseDataProviderBD,
setting XrSenseDataProviderCreateInfoBD::providerType equal to
the XrSenseDataProviderTypeBD value
XR_SENSE_DATA_PROVIDER_TYPE_MESH_BD, and chaining
XrSenseDataProviderCreateInfoSpatialMeshBD to
XrSenseDataProviderCreateInfoBD::next.
Both the XrSenseDataProviderTypeBD value and the chained
XrSenseDataProviderCreateInfoSpatialMeshBD structure are required.
If XrSenseDataProviderCreateInfoBD::providerType is equal to
XR_SENSE_DATA_PROVIDER_TYPE_MESH_BD but
XrSenseDataProviderCreateInfoSpatialMeshBD is not in the next
chain, the runtime must return XR_ERROR_VALIDATION_FAILURE.
An application uses such a provider to obtain the spatial mesh info that is detected and tracked by the runtime. This data generally consists of spatial entities with at least a XrSpatialEntityComponentDataTriangleMeshBD component.
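The required chaining might look as follows. This is an informative sketch: xrCreateSenseDataProviderBD and the XR_TYPE_SENSE_DATA_PROVIDER_CREATE_INFO_BD constant come from XR_BD_spatial_sensing and are assumptions here; `session` is a valid XrSession.

```c
// Configure the mesh provider: request semantic labels at medium LOD.
XrSenseDataProviderCreateInfoSpatialMeshBD meshInfo = {XR_TYPE_SENSE_DATA_PROVIDER_CREATE_INFO_SPATIAL_MESH_BD};
meshInfo.configFlags = XR_SPATIAL_MESH_CONFIG_SEMANTIC_BIT_BD;
meshInfo.lod = XR_SPATIAL_MESH_LOD_MEDIUM_BD;

// Chain the mesh-specific create info; omitting it with the MESH
// provider type yields XR_ERROR_VALIDATION_FAILURE.
XrSenseDataProviderCreateInfoBD createInfo = {XR_TYPE_SENSE_DATA_PROVIDER_CREATE_INFO_BD, &meshInfo};
createInfo.providerType = XR_SENSE_DATA_PROVIDER_TYPE_MESH_BD;

XrSenseDataProviderBD meshProvider = XR_NULL_HANDLE;
xrCreateSenseDataProviderBD(session, &createInfo, &meshProvider);
```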
The XrSpatialMeshConfigFlagsBD flag type is defined as:
// Provided by XR_BD_spatial_mesh
typedef XrFlags64 XrSpatialMeshConfigFlagsBD;
The XrSenseDataProviderCreateInfoSpatialMeshBD::configFlags
member is of the type XrSpatialMeshConfigFlagsBD, and contains a
bitwise-OR of zero or more of the bits defined in
XrSpatialMeshConfigFlagBitsBD.
Valid bits for XrSpatialMeshConfigFlagsBD are defined by XrSpatialMeshConfigFlagBitsBD, which is specified as:
// Provided by XR_BD_spatial_mesh
// Flag bits for XrSpatialMeshConfigFlagsBD
static const XrSpatialMeshConfigFlagsBD XR_SPATIAL_MESH_CONFIG_SEMANTIC_BIT_BD = 0x00000001;
static const XrSpatialMeshConfigFlagsBD XR_SPATIAL_MESH_CONFIG_ALIGN_SEMANTIC_WITH_VERTEX_BIT_BD = 0x00000002;
If XR_SPATIAL_MESH_CONFIG_SEMANTIC_BIT_BD is not set,
XR_SPATIAL_MESH_CONFIG_ALIGN_SEMANTIC_WITH_VERTEX_BIT_BD has no
effect.
If XR_SPATIAL_MESH_CONFIG_ALIGN_SEMANTIC_WITH_VERTEX_BIT_BD is not
set, each semantic label corresponds to a single vertex.
If XR_SPATIAL_MESH_CONFIG_ALIGN_SEMANTIC_WITH_VERTEX_BIT_BD is set,
each semantic label corresponds to three indices.
The XrSpatialMeshLodBD enumeration identifies the different LOD levels.
// Provided by XR_BD_spatial_mesh
typedef enum XrSpatialMeshLodBD {
XR_SPATIAL_MESH_LOD_COARSE_BD = 0,
XR_SPATIAL_MESH_LOD_MEDIUM_BD = 1,
XR_SPATIAL_MESH_LOD_FINE_BD = 2,
XR_SPATIAL_MESH_LOD_MAX_ENUM_BD = 0x7FFFFFFF
} XrSpatialMeshLodBD;
| Enum | Description |
|---|---|
| XR_SPATIAL_MESH_LOD_COARSE_BD | Coarse level for the mesh, with less detail. |
| XR_SPATIAL_MESH_LOD_MEDIUM_BD | Medium level for the mesh. |
| XR_SPATIAL_MESH_LOD_FINE_BD | Fine level for the mesh, with more detail. |
12.68.5. Start Spatial Mesh Provider
Applications start the spatial mesh data provider by calling xrStartSenseDataProviderAsyncBD after it is successfully created. To check the data provider state, call xrGetSenseDataProviderStateBD.
Subsequent application operations using this handle must not be performed
unless the futureResult is XR_SUCCESS.
If the futureResult returns an error code and the application needs to
use the provider, the application must take appropriate action and try to
call xrStartSenseDataProviderAsyncBD again before using the handle.
Detailed definitions and usage details are described in
XR_BD_spatial_sensing.
12.68.6. Get Spatial Mesh Data
Applications query the latest detected spatial meshes from the spatial mesh data provider by calling xrQuerySenseDataAsyncBD. The runtime generates a snapshot of the spatial mesh information, from which the application can obtain detailed spatial mesh information. The mesh information is presented in the form of spatial entities which the application queries by calling xrGetQueriedSenseDataBD. Use xrEnumerateSpatialEntityComponentTypesBD to get the types of components contained within these entities, such as location, semantics, and mesh vertices and indices. To further retrieve component data information, call xrGetSpatialEntityComponentDataBD.
When the spatial mesh information changes, the runtime must queue an XrEventDataSenseDataUpdatedBD event with the handle of the spatial mesh provider.
When the application receives this event, it means the sense data provider has updated mesh data. A new query request for this sense data provider will get the latest, updated mesh data. This is the recommended way to get updated data.
Alternatively, the application may query the latest spatial mesh information at a time that suits its needs, independent of the update events.
All the functions to get spatial mesh info are defined in
XR_BD_spatial_sensing.
12.68.10. New Enum Constants
- XR_BD_SPATIAL_MESH_EXTENSION_NAME
- XR_BD_spatial_mesh_SPEC_VERSION

Extending XrSenseDataProviderTypeBD:
- XR_SENSE_DATA_PROVIDER_TYPE_MESH_BD

Extending XrStructureType:
- XR_TYPE_SENSE_DATA_PROVIDER_CREATE_INFO_SPATIAL_MESH_BD
- XR_TYPE_SYSTEM_SPATIAL_MESH_PROPERTIES_BD
12.69. XR_BD_spatial_plane
- Name String: XR_BD_spatial_plane
- Extension Type: Instance extension
- Registered Extension Number: 397
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-03-31
- IP Status: No known IP claims.
- Contributors:
  Xu Yang, ByteDance
  Zhipeng Liu, ByteDance
12.69.1. Overview
This extension allows applications to request the runtime to detect and
track planes in the physical environment.
This extension requires XR_BD_spatial_sensing to be enabled.
12.69.2. Permission
A runtime on an Android-based platform must verify that applications have
the com.picovr.permission.SPATIAL_DATA permission both listed in
their manifest and granted to use spatial plane functionality.
Without it, the runtime must set
XrFutureCompletionEXT::futureResult to
XR_ERROR_PERMISSION_INSUFFICIENT when
xrStartSenseDataProviderCompleteBD is called.
This is an auto-requested permission: if it is listed in the manifest but not yet granted or denied, the runtime must prompt the user to grant or deny the permission when xrCreateSenseDataProviderBD is called with a provider type that requires it.
This permission is also used by XR_BD_spatial_anchor, XR_BD_spatial_mesh, and XR_BD_spatial_scene.
12.69.3. Inspect System Capability
The XrSystemSpatialPlanePropertiesBD structure is defined as:
// Provided by XR_BD_spatial_plane
typedef struct XrSystemSpatialPlanePropertiesBD {
XrStructureType type;
void* next;
XrBool32 supportsSpatialPlane;
} XrSystemSpatialPlanePropertiesBD;
An application can inspect whether the system is capable of spatial plane
functionality by chaining an XrSystemSpatialPlanePropertiesBD
structure to the XrSystemProperties::next chain when calling
xrGetSystemProperties.
If a runtime returns XR_FALSE for supportsSpatialPlane, the
system does not support spatial plane functionality, and the runtime must
return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateSenseDataProviderBD when called with
XR_SENSE_DATA_PROVIDER_TYPE_PLANE_BD.
The application should avoid using spatial plane functionality when
supportsSpatialPlane is XR_FALSE.
If XrSystemSpatialSensingPropertiesBD::supportsSpatialSensing is
XR_FALSE, then supportsSpatialPlane must also be
XR_FALSE.
If the runtime returns XR_TRUE for supportsSpatialPlane, the
system supports spatial plane functionality.
This implies that
XrSystemSpatialSensingPropertiesBD::supportsSpatialSensing must
also be XR_TRUE.
Note that supportsSpatialPlane may be XR_TRUE even when running
on an Android-based platform where the application does not have the
required com.picovr.permission.SPATIAL_DATA permission both declared
in the manifest and granted at runtime.
Evaluation of permissions takes place later, in the asynchronous operation
started by xrStartSenseDataProviderAsyncBD.
12.69.4. Create Spatial Plane Provider
Applications create an XrSenseDataProviderBD handle representing a
spatial plane provider by calling xrCreateSenseDataProviderBD, and
setting XrSenseDataProviderCreateInfoBD::providerType to the
XrSenseDataProviderTypeBD value
XR_SENSE_DATA_PROVIDER_TYPE_PLANE_BD.
Applications use this provider handle to obtain information about spatial
planes that are detected and tracked by the runtime.
This data generally consists of spatial entities with at least a plane
orientation component which corresponds to an
XrSpatialEntityComponentTypeBD value of
XR_SPATIAL_ENTITY_COMPONENT_TYPE_PLANE_ORIENTATION_BD and a triangle
mesh component which corresponds to an XrSpatialEntityComponentTypeBD
value of XR_SPATIAL_ENTITY_COMPONENT_TYPE_TRIANGLE_MESH_BD.
This provider type does not define any configuration and does not require a chained structure.
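The creation call above can be sketched with stubbed types; the handle definition, structure-type value, and stub function here are illustrative stand-ins rather than the real headers, so the snippet shows only how the create-info is filled for a plane provider.

```c
#include <stddef.h>

/* Local stand-ins; handle and structure-type definitions are illustrative. */
typedef int XrResult;
typedef int XrStructureType;
typedef struct XrSession_T* XrSession;
typedef struct XrSenseDataProviderBD_T { int unused; } *XrSenseDataProviderBD;
typedef enum XrSenseDataProviderTypeBD {
    XR_SENSE_DATA_PROVIDER_TYPE_PLANE_BD = 1000396000
} XrSenseDataProviderTypeBD;
#define XR_SUCCESS 0
#define XR_ERROR_VALIDATION_FAILURE (-1)
#define XR_TYPE_SENSE_DATA_PROVIDER_CREATE_INFO_BD 1

typedef struct XrSenseDataProviderCreateInfoBD {
    XrStructureType type;
    const void* next;
    XrSenseDataProviderTypeBD providerType;
} XrSenseDataProviderCreateInfoBD;

/* Stub runtime: hands back a dummy handle for the plane provider type. */
static XrResult createSenseDataProviderStub(
    XrSession session,
    const XrSenseDataProviderCreateInfoBD* createInfo,
    XrSenseDataProviderBD* provider) {
    static struct XrSenseDataProviderBD_T dummy;
    (void)session;
    if (createInfo->providerType != XR_SENSE_DATA_PROVIDER_TYPE_PLANE_BD)
        return XR_ERROR_VALIDATION_FAILURE;
    *provider = &dummy;
    return XR_SUCCESS;
}

/* The plane provider needs no chained configuration, so next stays NULL. */
int demoCreatePlaneProvider(void) {
    XrSenseDataProviderCreateInfoBD createInfo = {
        XR_TYPE_SENSE_DATA_PROVIDER_CREATE_INFO_BD, NULL,
        XR_SENSE_DATA_PROVIDER_TYPE_PLANE_BD };
    XrSenseDataProviderBD provider = NULL;
    XrResult r = createSenseDataProviderStub(NULL, &createInfo, &provider);
    return r == XR_SUCCESS && provider != NULL;
}
```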
12.69.5. Start Spatial Plane Provider
Applications start the spatial plane data provider by calling xrStartSenseDataProviderAsyncBD after it has been successfully created. To check the data provider state, call xrGetSenseDataProviderStateBD.
Subsequent application operations using this provider handle must not be
performed unless the data provider state is
XR_SENSE_DATA_PROVIDER_STATE_RUNNING_BD.
If the data provider state is not
XR_SENSE_DATA_PROVIDER_STATE_RUNNING_BD and the application intends to
use the provider, the application must take appropriate action and call
xrStartSenseDataProviderAsyncBD again before using the handle.
Detailed definitions and usage details are described in
XR_BD_spatial_sensing.
12.69.6. Get Spatial Plane Data
Applications query the latest detected spatial planes from the spatial plane data provider by calling xrQuerySenseDataAsyncBD. The runtime generates a snapshot of the spatial plane information, from which the application can obtain detailed spatial plane information. The plane information is presented in the form of spatial entities which the application queries by calling xrGetQueriedSenseDataBD. Use xrEnumerateSpatialEntityComponentTypesBD to get the types of components contained within these entities, such as location, semantics, vertices and indices. To further retrieve component data information, call xrGetSpatialEntityComponentDataBD.
When the runtime’s understanding of the detected spatial planes changes, the runtime must queue an XrEventDataSenseDataUpdatedBD event with the handle of the spatial plane provider.
Receiving this event signals to the application that updated plane data is available to query from the sense data provider. A new query request for this sense data provider will get the latest, updated plane data. This is the recommended way to get updated data.
The application may also initiate a new query request at any time without receiving this event, but the queried data may be unmodified.
All the functions to get spatial plane info are defined in
XR_BD_spatial_sensing.
12.69.7. Acquire Plane Orientation
The XrPlaneOrientationBD enumeration identifies the different orientation of a spatial plane.
typedef enum XrPlaneOrientationBD {
XR_PLANE_ORIENTATION_HORIZONTAL_UPWARD_BD = 0,
XR_PLANE_ORIENTATION_HORIZONTAL_DOWNWARD_BD = 1,
XR_PLANE_ORIENTATION_VERTICAL_BD = 2,
XR_PLANE_ORIENTATION_ARBITRARY_BD = 3,
XR_PLANE_ORIENTATION_MAX_ENUM_BD = 0x7FFFFFFF
} XrPlaneOrientationBD;
| Enum | Description |
|---|---|
| XR_PLANE_ORIENTATION_HORIZONTAL_UPWARD_BD | The detected plane is horizontal and faces upward (e.g. floor). |
| XR_PLANE_ORIENTATION_HORIZONTAL_DOWNWARD_BD | The detected plane is horizontal and faces downward (e.g. ceiling). |
| XR_PLANE_ORIENTATION_VERTICAL_BD | The detected plane is vertical (e.g. wall). |
| XR_PLANE_ORIENTATION_ARBITRARY_BD | The detected plane has an arbitrary, non-vertical and non-horizontal orientation. |
The XrSpatialEntityComponentDataPlaneOrientationBD structure is defined as:
// Provided by XR_BD_spatial_plane
typedef struct XrSpatialEntityComponentDataPlaneOrientationBD {
XrStructureType type;
void* next;
XrPlaneOrientationBD orientation;
} XrSpatialEntityComponentDataPlaneOrientationBD;
XrSpatialEntityComponentDataPlaneOrientationBD is an output structure
for getting the plane orientation component data from a snapshot.
This corresponds to an XrSpatialEntityComponentTypeBD value of
XR_SPATIAL_ENTITY_COMPONENT_TYPE_PLANE_ORIENTATION_BD.
12.69.8. Spatial Plane Component Orientation Filter
The XrSenseDataFilterPlaneOrientationBD structure is defined as:
// Provided by XR_BD_spatial_plane
typedef struct XrSenseDataFilterPlaneOrientationBD {
XrStructureType type;
const void* next;
uint32_t orientationCount;
XrPlaneOrientationBD* orientations;
} XrSenseDataFilterPlaneOrientationBD;
The XrSenseDataFilterPlaneOrientationBD structure contains a list of plane orientations. When the application passes this filter, all sense data that matches the criteria in the filter is included in the result returned. The runtime must not include sense data that does not match the provided filter.
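The filter semantics described above can be sketched with local stand-ins. The structure-type values and the stub "runtime" below are assumptions for illustration; the stub simply counts which of three detected planes pass a chained orientation filter.

```c
#include <stddef.h>
#include <stdint.h>

/* Local stand-ins for the OpenXR types; structure-type values are illustrative. */
typedef int XrStructureType;
typedef enum XrPlaneOrientationBD {
    XR_PLANE_ORIENTATION_HORIZONTAL_UPWARD_BD = 0,
    XR_PLANE_ORIENTATION_HORIZONTAL_DOWNWARD_BD = 1,
    XR_PLANE_ORIENTATION_VERTICAL_BD = 2
} XrPlaneOrientationBD;
#define XR_TYPE_SENSE_DATA_QUERY_INFO_BD 1
#define XR_TYPE_SENSE_DATA_FILTER_PLANE_ORIENTATION_BD 2

typedef struct XrSenseDataFilterPlaneOrientationBD {
    XrStructureType type;
    const void* next;
    uint32_t orientationCount;
    XrPlaneOrientationBD* orientations;
} XrSenseDataFilterPlaneOrientationBD;

typedef struct XrSenseDataQueryInfoBD {
    XrStructureType type;
    const void* next;
} XrSenseDataQueryInfoBD;

/* Stub of the runtime's filtering: count detected planes that pass the filter. */
static int countMatchingPlanes(const XrSenseDataQueryInfoBD* queryInfo) {
    static const XrPlaneOrientationBD detected[] = {
        XR_PLANE_ORIENTATION_HORIZONTAL_UPWARD_BD,   /* floor   */
        XR_PLANE_ORIENTATION_VERTICAL_BD,            /* wall    */
        XR_PLANE_ORIENTATION_HORIZONTAL_DOWNWARD_BD  /* ceiling */
    };
    const XrSenseDataFilterPlaneOrientationBD* filter = queryInfo->next;
    int matches = 0;
    for (size_t i = 0; i < sizeof detected / sizeof detected[0]; ++i) {
        if (!filter) { ++matches; continue; }        /* no filter: include all */
        for (uint32_t j = 0; j < filter->orientationCount; ++j) {
            if (filter->orientations[j] == detected[i]) { ++matches; break; }
        }
    }
    return matches;
}

/* Chain a filter selecting only floors and walls; the ceiling is excluded. */
int demoPlaneOrientationFilter(void) {
    XrPlaneOrientationBD wanted[] = {
        XR_PLANE_ORIENTATION_HORIZONTAL_UPWARD_BD,
        XR_PLANE_ORIENTATION_VERTICAL_BD };
    XrSenseDataFilterPlaneOrientationBD filter = {
        XR_TYPE_SENSE_DATA_FILTER_PLANE_ORIENTATION_BD, NULL, 2, wanted };
    XrSenseDataQueryInfoBD queryInfo = { XR_TYPE_SENSE_DATA_QUERY_INFO_BD, &filter };
    return countMatchingPlanes(&queryInfo);
}
```

In a real application the filter is chained to XrSenseDataQueryInfoBD::next before calling xrQuerySenseDataAsyncBD.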
12.69.10. New Enum Constants
- XR_BD_SPATIAL_PLANE_EXTENSION_NAME
- XR_BD_spatial_plane_SPEC_VERSION
- Extending XrSenseDataProviderTypeBD:
  - XR_SENSE_DATA_PROVIDER_TYPE_PLANE_BD
- Extending XrSpatialEntityComponentTypeBD:
  - XR_SPATIAL_ENTITY_COMPONENT_TYPE_PLANE_ORIENTATION_BD
- Extending XrStructureType:
  - XR_TYPE_SENSE_DATA_FILTER_PLANE_ORIENTATION_BD
  - XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_PLANE_ORIENTATION_BD
  - XR_TYPE_SYSTEM_SPATIAL_PLANE_PROPERTIES_BD
12.70. XR_BD_spatial_scene
- Name String: XR_BD_spatial_scene
- Extension Type: Instance extension
- Registered Extension Number: 393
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-03-26
- IP Status: No known IP claims.
- Contributors:
  Zhipeng Liu, ByteDance
  Zhao Li, ByteDance
  Zijian Wang, ByteDance
  Jun Yan, ByteDance
  Jimmy Alamparambil, ByteDance
12.70.1. Overview
This extension allows applications to request the runtime to start capturing spatial scene information from the physical environment and query the information that the system has captured and stored.
Different objects in the scene are represented by different component data. For example, 2D bounding boxes or polygons might be associated with walls, floors, ceilings, doors, and windows; 3D bounding boxes might be associated with tables, chairs and beds.
The general workflow for applications to use the scene capture process includes the following typical steps:
-
Initiate the scene capture process by invoking xrCaptureSceneAsyncBD. Once the call to xrCaptureSceneAsyncBD is successful, poll the returned future by using xrPollFutureEXT.
-
Typically, upon successful initiation of the scene capture process, the current application will be sent to the background. Therefore, the application should call xrEndSession to stop the current session upon receiving the XrEventDataSessionStateChanged event with XrEventDataSessionStateChanged::state set to XR_SESSION_STATE_STOPPING.
-
After the scene capture process is completed, the application will be brought back to the foreground. At this point, the application should call xrBeginSession to resume the session when it receives the XrEventDataSessionStateChanged event with XrEventDataSessionStateChanged::state set to XR_SESSION_STATE_READY.
-
Generally, the future will be ready once the session has been restarted. If an error occurs during the scene capture process, the future result is an error code. For instance, if the scene capture process exits abnormally, an XR_ERROR_SCENE_CAPTURE_FAILURE_BD future result will be returned.
-
After receiving an XrEventDataSenseDataUpdatedBD event, query the updated spatial scene data by calling xrQuerySenseDataAsyncBD. Subsequently, the application can obtain the updated data by following the steps outlined in XR_BD_spatial_sensing.
This extension requires XR_BD_spatial_sensing to also be enabled.
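The session handling in the steps above can be sketched as a small decision function. The XrSessionState values below match core OpenXR; the action enum is a local stand-in naming what the application would call next.

```c
/* Core OpenXR XrSessionState values for the two states used here. */
typedef enum XrSessionState {
    XR_SESSION_STATE_READY = 2,
    XR_SESSION_STATE_STOPPING = 6
} XrSessionState;

/* Hypothetical action enum naming what the application does next. */
typedef enum SessionAction {
    ACTION_NONE = 0,
    ACTION_END_SESSION = 1,   /* call xrEndSession  */
    ACTION_BEGIN_SESSION = 2  /* call xrBeginSession */
} SessionAction;

/* While a scene capture is in flight, map the XrEventDataSessionStateChanged
 * state to the session call the workflow above prescribes. */
SessionAction onSessionStateChanged(XrSessionState newState) {
    switch (newState) {
    case XR_SESSION_STATE_STOPPING:
        return ACTION_END_SESSION;   /* capture UI takes over; app backgrounds */
    case XR_SESSION_STATE_READY:
        return ACTION_BEGIN_SESSION; /* capture finished; app foregrounds */
    default:
        return ACTION_NONE;
    }
}
```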
12.70.2. Permission
A runtime on an Android-based platform must verify that applications have
the com.picovr.permission.SPATIAL_DATA permission both listed in their
manifest and granted to use spatial scene capture.
Without it, the runtime must set
XrFutureCompletionEXT::futureResult to
XR_ERROR_PERMISSION_INSUFFICIENT when
xrStartSenseDataProviderCompleteBD is called.
This is an auto-requested permission: if it is listed in the manifest but not yet granted or denied, the runtime must prompt the user to grant or deny the permission when xrCreateSenseDataProviderBD is called with a provider type that requires it.
This permission is also used by XR_BD_spatial_anchor.
This permission is also used by XR_BD_spatial_mesh.
This permission is also used by XR_BD_spatial_plane.
12.70.3. Inspect System Capability
The XrSystemSpatialScenePropertiesBD structure is defined as:
// Provided by XR_BD_spatial_scene
typedef struct XrSystemSpatialScenePropertiesBD {
XrStructureType type;
void* next;
XrBool32 supportsSpatialScene;
} XrSystemSpatialScenePropertiesBD;
An application can inspect whether the system is capable of spatial scene
functionality by chaining an XrSystemSpatialScenePropertiesBD
structure to the XrSystemProperties::next chain when calling
xrGetSystemProperties.
If a runtime returns XR_FALSE for supportsSpatialScene, the
system does not support spatial scene functionality, and the runtime must
return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateSenseDataProviderBD when passing the
XrSenseDataProviderTypeBD value
XR_SENSE_DATA_PROVIDER_TYPE_SCENE_BD.
The application should avoid using spatial scene functionality when
supportsSpatialScene is XR_FALSE.
If XrSystemSpatialSensingPropertiesBD::supportsSpatialSensing is
XR_FALSE, then supportsSpatialScene must also be
XR_FALSE.
If a runtime returns XR_TRUE for supportsSpatialScene, the
system supports spatial scene functionality.
This implies that
XrSystemSpatialSensingPropertiesBD::supportsSpatialSensing must
also be XR_TRUE.
Note that supportsSpatialScene may be XR_TRUE even if running
on an Android-based platform and the application does not have the required
com.picovr.permission.SPATIAL_DATA permission both declared in the
manifest and granted at runtime.
Evaluation of permissions takes place later, in the asynchronous operation
started by xrStartSenseDataProviderAsyncBD.
12.70.4. Create Spatial Scene Provider
An application creates an XrSenseDataProviderBD handle representing a
spatial scene data provider by calling xrCreateSenseDataProviderBD
after setting XrSenseDataProviderCreateInfoBD::providerType
equal to the XrSenseDataProviderTypeBD value
XR_SENSE_DATA_PROVIDER_TYPE_SCENE_BD.
An application uses such a provider to obtain the spatial scene info that is captured and stored by the system scene capture process.
This provider type does not define any configuration and does not require a chained structure.
12.70.5. Start Spatial Scene Provider
Applications start the spatial scene data provider by calling xrStartSenseDataProviderAsyncBD after it is successfully created. To check the data provider state, call xrGetSenseDataProviderStateBD.
Subsequent application operations using this provider handle must not be
performed unless the data provider state is
XR_SENSE_DATA_PROVIDER_STATE_RUNNING_BD.
If the data provider state is not
XR_SENSE_DATA_PROVIDER_STATE_RUNNING_BD and the application needs to
use the provider, the application must take appropriate action and try to
call xrStartSenseDataProviderAsyncBD again before using the handle.
Detailed definitions and usage details are described in
XR_BD_spatial_sensing.
12.70.6. Get Captured Scene Info
Applications query the latest captured scene information from the spatial scene data provider by calling xrQuerySenseDataAsyncBD. The runtime generates a snapshot of the spatial scene information, from which the application can obtain detailed scene information. The scene information is presented in the form of spatial entities which the application queries by calling xrGetQueriedSenseDataBD. Use xrEnumerateSpatialEntityComponentTypesBD to get the types of components contained within these entities, such as location, semantics, and bounding boxes. To further retrieve component information, call xrGetSpatialEntityComponentDataBD.
When the scene information changes, the runtime must queue an XrEventDataSenseDataUpdatedBD event with the handle of the spatial scene provider.
When the application receives this event, it means the sense data provider has updated scene data. A new query request for this sense data provider will get the latest, updated scene data. This is the recommended way to get updated data. The application may also initiate a new query request at any time without receiving this event, but the queried data may be unmodified.
All the functions to get captured scene info are defined in
XR_BD_spatial_sensing.
12.70.7. Start Scene Capture
The xrCaptureSceneAsyncBD function is defined as:
// Provided by XR_BD_spatial_scene
XrResult xrCaptureSceneAsyncBD(
XrSenseDataProviderBD provider,
const XrSceneCaptureInfoBD* info,
XrFutureEXT* future);
The application starts the scene capture process, which guides the user to capture the scene of the physical environment, by calling xrCaptureSceneAsyncBD.
This function starts an asynchronous operation and creates a corresponding
XrFutureEXT, usable with xrPollFutureEXT and related
functions.
The return value of this function only indicates whether the parameters were
acceptable to schedule the asynchronous operation.
The corresponding completion function is
xrStartSenseDataProviderCompleteBD, usable when a future from this
function is in the READY state, with outputs populated
by that function in the completion structure XrFutureCompletionEXT.
Note that scene capture may involve interaction by the user with system
user interfaces.
After calling this function, the system may switch the current application
to the scene capture process.
Accordingly, the runtime may emit an XrEventDataSessionStateChanged
event with XrEventDataSessionStateChanged::state set to
XR_SESSION_STATE_STOPPING, for the application to handle as usual,
before scene capture results arrive.
The XrSceneCaptureInfoBD structure is defined as:
// Provided by XR_BD_spatial_scene
typedef struct XrSceneCaptureInfoBD {
XrStructureType type;
const void* next;
} XrSceneCaptureInfoBD;
This structure is defined for future extension.
The xrCaptureSceneCompleteBD function is defined as:
// Provided by XR_BD_spatial_scene
XrResult xrCaptureSceneCompleteBD(
XrSenseDataProviderBD provider,
XrFutureEXT future,
XrFutureCompletionEXT* completion);
The application obtains the scene capture starting result using xrCaptureSceneCompleteBD.
This is the completion function corresponding to the operation started by
xrCaptureSceneAsyncBD.
Do not call until the future is READY.
The XrFutureCompletionEXT structure is defined in
XR_EXT_future.
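The start-poll-complete pattern for scene capture can be sketched as follows, with the future and both API functions stubbed locally. The StubFuture type and the stub/demo function names are assumptions for illustration: the stub future becomes READY after a few polls, after which the completion result is read, mirroring the asynchronous contract described above.

```c
/* Local stand-ins; StubFuture plays the role of XrFutureEXT state. */
typedef int XrResult;
#define XR_SUCCESS 0

typedef enum XrFutureStateEXT {
    XR_FUTURE_STATE_PENDING_EXT = 1,
    XR_FUTURE_STATE_READY_EXT = 2
} XrFutureStateEXT;

typedef struct StubFuture {
    int pollsUntilReady;  /* capture finishes after this many polls */
    XrResult result;      /* what xrCaptureSceneCompleteBD would report */
} StubFuture;

/* Stand-in for xrPollFutureEXT. */
static XrFutureStateEXT pollFuture(StubFuture* f) {
    if (f->pollsUntilReady > 0) {
        --f->pollsUntilReady;
        return XR_FUTURE_STATE_PENDING_EXT;
    }
    return XR_FUTURE_STATE_READY_EXT;
}

/* Stand-in for xrCaptureSceneCompleteBD: only called once READY. */
static XrResult completeCapture(const StubFuture* f) { return f->result; }

/* Start, poll until READY, then complete, mirroring the async contract. */
XrResult demoCaptureScene(void) {
    StubFuture future = { 3, XR_SUCCESS }; /* as if xrCaptureSceneAsyncBD succeeded */
    while (pollFuture(&future) != XR_FUTURE_STATE_READY_EXT) {
        /* keep pumping xrPollEvent here; handle session STOPPING/READY events */
    }
    return completeCapture(&future);
}
```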
12.70.10. New Enum Constants
- XR_BD_SPATIAL_SCENE_EXTENSION_NAME
- XR_BD_spatial_scene_SPEC_VERSION
- Extending XrResult:
  - XR_ERROR_SCENE_CAPTURE_FAILURE_BD
- Extending XrSenseDataProviderTypeBD:
  - XR_SENSE_DATA_PROVIDER_TYPE_SCENE_BD
- Extending XrStructureType:
  - XR_TYPE_SCENE_CAPTURE_INFO_BD
  - XR_TYPE_SYSTEM_SPATIAL_SCENE_PROPERTIES_BD
12.71. XR_BD_spatial_sensing
- Name String: XR_BD_spatial_sensing
- Extension Type: Instance extension
- Registered Extension Number: 390
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-03-26
- IP Status: No known IP claims.
- Contributors:
  Zhipeng Liu, ByteDance
  Ya Huang, ByteDance
  Xiangxin Liu, ByteDance
  Zijian Wang, ByteDance
  Zhao Li, ByteDance
  Zhanrui Jia, ByteDance
  Xu Yang, ByteDance
  Jun Yan, ByteDance
  Jimmy Alamparambil, ByteDance
12.71.1. Overview
This extension introduces a spatial sensing API framework which contains a set of common functions and data structures for various spatial sensing capabilities. In this extension, the EC (Entity-Component) pattern is used to denote the sensed object in the physical environment via spatial entity.
A spatial entity may be points of interest, physical plane surfaces, physical objects, mesh blocks, images, etc., and it may have different components. Having different components indicates that different data are attached to the spatial entity and different operations can be performed on it. Applications can get the component types that a given spatial entity has as well as the information of a specific component.
Different kinds of spatial sensing data are provided via different sense data providers. Applications can get the sensed spatial entity details from sense data providers.
The general workflow for applications includes the following typical steps:
-
Create an XrSenseDataProviderBD with relevant configuration information via xrCreateSenseDataProviderBD.
-
Start the XrSenseDataProviderBD to let the runtime provide sensed data for that provider via xrStartSenseDataProviderAsyncBD.
-
Get the state of XrSenseDataProviderBD via xrGetSenseDataProviderStateBD, and make sure it is in the running state.
-
Initiate a query request to the XrSenseDataProviderBD via xrQuerySenseDataAsyncBD after receiving an XrEventDataSenseDataUpdatedBD, and the runtime will produce an XrSenseDataSnapshotBD, which contains the most recent data associated with this provider.
-
Retrieve the spatial entity info as well as the detailed component data from the XrSenseDataSnapshotBD via xrGetQueriedSenseDataBD and xrGetSpatialEntityComponentDataBD.
-
Destroy the XrSenseDataSnapshotBD via xrDestroySenseDataSnapshotBD after all the data is retrieved, to release related resources. Applications can repeat steps 4-6 to get updated sense data if necessary.
-
Stop the XrSenseDataProviderBD via xrStopSenseDataProviderBD.
-
Destroy the XrSenseDataProviderBD via xrDestroySenseDataProviderBD to release the system resources associated with the provider.
Note
If any function returns an
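The numbered workflow above can be sketched as the following call ordering. Everything here is a stand-in: each stub merely records that the corresponding runtime call would run, so the snippet shows only the required sequence, not real API usage.

```c
#include <string.h>

/* Trace buffer recording the order in which the stubbed calls run. */
static char trace[128];
static void record(const char* name) { strcat(trace, name); strcat(trace, ";"); }

static void createProvider(void)  { record("create"); }          /* xrCreateSenseDataProviderBD */
static void startProvider(void)   { record("start"); }           /* async + complete */
static void queryData(void)       { record("query"); }           /* async + complete */
static void getData(void)         { record("get"); }             /* entities + components */
static void destroySnapshot(void) { record("destroySnapshot"); } /* xrDestroySenseDataSnapshotBD */
static void stopProvider(void)    { record("stop"); }            /* xrStopSenseDataProviderBD */
static void destroyProvider(void) { record("destroyProvider"); } /* xrDestroySenseDataProviderBD */

const char* demoWorkflow(void) {
    trace[0] = '\0';
    createProvider();
    startProvider();
    queryData();        /* repeat query/get/destroySnapshot as updates arrive */
    getData();
    destroySnapshot();
    stopProvider();
    destroyProvider();
    return trace;
}
```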
12.71.2. Inspect System Capability
The XrSystemSpatialSensingPropertiesBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrSystemSpatialSensingPropertiesBD {
XrStructureType type;
void* next;
XrBool32 supportsSpatialSensing;
} XrSystemSpatialSensingPropertiesBD;
An application can inspect whether the system is capable of spatial sensing
by chaining an XrSystemSpatialSensingPropertiesBD structure to the
XrSystemProperties::next chain when calling
xrGetSystemProperties.
If a runtime returns XR_FALSE for supportsSpatialSensing, the
system does not support spatial sensing, and the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from xrCreateSenseDataProviderBD.
If a runtime returns XR_TRUE for supportsSpatialSensing, the
system supports spatial sensing.
12.71.3. Create Sense Data Provider
The XrSenseDataProviderBD handle is defined as:
// Provided by XR_BD_spatial_sensing
XR_DEFINE_HANDLE(XrSenseDataProviderBD)
XrSenseDataProviderBD is a handle which the application creates to do spatial sensing for the user’s physical environment.
The xrCreateSenseDataProviderBD function is defined as:
// Provided by XR_BD_spatial_sensing
XrResult xrCreateSenseDataProviderBD(
XrSession session,
const XrSenseDataProviderCreateInfoBD* createInfo,
XrSenseDataProviderBD* provider);
The application can create a sense data provider using
xrCreateSenseDataProviderBD.
Different providers are associated with different
XrSenseDataProviderTypeBD provider type values in
XrSenseDataProviderCreateInfoBD::providerType.
Some require additional parameters in structure types chained on to
createInfo.
The XrSenseDataProviderCreateInfoBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrSenseDataProviderCreateInfoBD {
XrStructureType type;
const void* next;
XrSenseDataProviderTypeBD providerType;
} XrSenseDataProviderCreateInfoBD;
When the application creates an XrSenseDataProviderBD, it must
specify the type of provider via providerType.
If the value of providerType is not valid given the current enabled
extensions, the runtime must return XR_ERROR_VALIDATION_FAILURE.
Some provider types have additional parameters passed via the
XrSenseDataProviderCreateInfoBD::next chain.
If a valid provider type is specified in providerType but that type
requires an additional configuration structure that is not provided in the
next chain, the runtime must return
XR_ERROR_VALIDATION_FAILURE.
Note that in accordance with
Valid Usage for Structure Pointer Chains, configuration
structures in the next chain for provider types other than the one
indicated by providerType are ignored.
The XrSenseDataProviderTypeBD enumeration is defined as:
// Provided by XR_BD_spatial_sensing
typedef enum XrSenseDataProviderTypeBD {
// Provided by XR_BD_spatial_anchor
XR_SENSE_DATA_PROVIDER_TYPE_ANCHOR_BD = 1000390000,
// Provided by XR_BD_spatial_scene
XR_SENSE_DATA_PROVIDER_TYPE_SCENE_BD = 1000392000,
// Provided by XR_BD_spatial_mesh
XR_SENSE_DATA_PROVIDER_TYPE_MESH_BD = 1000393000,
// Provided by XR_BD_spatial_plane
XR_SENSE_DATA_PROVIDER_TYPE_PLANE_BD = 1000396000,
XR_SENSE_DATA_PROVIDER_TYPE_MAX_ENUM_BD = 0x7FFFFFFF
} XrSenseDataProviderTypeBD;
| Enum | Description |
|---|---|
| XR_SENSE_DATA_PROVIDER_TYPE_ANCHOR_BD | Create arbitrary spatial anchors. (Added by the XR_BD_spatial_anchor extension) |
| XR_SENSE_DATA_PROVIDER_TYPE_SCENE_BD | Access spatial scene capture data. (Added by the XR_BD_spatial_scene extension) |
| XR_SENSE_DATA_PROVIDER_TYPE_MESH_BD | Capture spatial mesh data. (Added by the XR_BD_spatial_mesh extension) |
| XR_SENSE_DATA_PROVIDER_TYPE_PLANE_BD | Capture spatial plane data. (Added by the XR_BD_spatial_plane extension) |
12.71.4. Start Sense Data Provider
The xrStartSenseDataProviderAsyncBD function is defined as:
// Provided by XR_BD_spatial_sensing
XrResult xrStartSenseDataProviderAsyncBD(
XrSenseDataProviderBD provider,
const XrSenseDataProviderStartInfoBD* startInfo,
XrFutureEXT* future);
The application can start a previously created sense data provider using
xrStartSenseDataProviderAsyncBD when it is in either the
XR_SENSE_DATA_PROVIDER_STATE_INITIALIZED_BD or the
XR_SENSE_DATA_PROVIDER_STATE_STOPPED_BD state.
This function starts an asynchronous operation and creates a corresponding
XrFutureEXT, usable with xrPollFutureEXT and related
functions.
The return value of this function only indicates whether the parameters were
acceptable to schedule the asynchronous operation.
The corresponding completion function is
xrStartSenseDataProviderCompleteBD, usable when a future from this
function is in the READY state, with outputs populated
by that function in the completion structure XrFutureCompletionEXT.
If the sense data provider is already in the
XR_SENSE_DATA_PROVIDER_STATE_RUNNING_BD state, calling this function
does not change the provider state.
In this case, the runtime must still set the return value of
xrStartSenseDataProviderAsyncBD to XR_SUCCESS, and the future
state will be XR_FUTURE_STATE_READY_EXT immediately.
The runtime must set the corresponding future result returned by
xrStartSenseDataProviderCompleteBD to XR_SUCCESS.
Individual sense data provider types may specify a required permission or
permissions to use their capabilities on Android-based platforms.
If the specified provider type requires a permission and that permission is
not listed in the application’s manifest, or the permission is listed but
denied, the future state will be XR_FUTURE_STATE_READY_EXT immediately
and the runtime must set the corresponding future result returned by
xrStartSenseDataProviderCompleteBD to
XR_ERROR_PERMISSION_INSUFFICIENT.
The XrSenseDataProviderStartInfoBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrSenseDataProviderStartInfoBD {
XrStructureType type;
const void* next;
} XrSenseDataProviderStartInfoBD;
This structure is defined for future extension.
The xrStartSenseDataProviderCompleteBD function is defined as:
// Provided by XR_BD_spatial_sensing
XrResult xrStartSenseDataProviderCompleteBD(
XrSession session,
XrFutureEXT future,
XrFutureCompletionEXT* completion);
The application can obtain the sense data provider start result using xrStartSenseDataProviderCompleteBD.
This is the completion function corresponding to the operation started by
xrStartSenseDataProviderAsyncBD.
Do not call until the future is READY.
The XrFutureCompletionEXT structure is defined in
XR_EXT_future.
Subsequent application operations using this handle must not be performed
unless the XrFutureCompletionEXT::futureResult is
XR_SUCCESS.
For example, if the XrFutureCompletionEXT::futureResult is
XR_ERROR_PERMISSION_INSUFFICIENT, it indicates that the application
does not have sufficient permissions and needs to apply for the
corresponding permissions.
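Acting on the completion result can be sketched as follows. XR_SUCCESS uses its real value; the permission error value and the outcome enum are local stand-ins, not the registry's XR_ERROR_PERMISSION_INSUFFICIENT definition.

```c
/* XR_SUCCESS is the real value; the permission error value is a stand-in. */
typedef int XrResult;
#define XR_SUCCESS 0
#define STUB_ERROR_PERMISSION_INSUFFICIENT (-1000)

typedef enum StartOutcome {
    PROVIDER_USABLE,          /* safe to call xrQuerySenseDataAsyncBD etc. */
    NEED_PERMISSION_REQUEST,  /* request the platform permission, then restart */
    PROVIDER_FAILED
} StartOutcome;

/* Interpret XrFutureCompletionEXT::futureResult from
 * xrStartSenseDataProviderCompleteBD. */
StartOutcome onStartComplete(XrResult futureResult) {
    if (futureResult == XR_SUCCESS)
        return PROVIDER_USABLE;
    if (futureResult == STUB_ERROR_PERMISSION_INSUFFICIENT)
        return NEED_PERMISSION_REQUEST;
    return PROVIDER_FAILED; /* e.g. retry xrStartSenseDataProviderAsyncBD later */
}
```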
12.71.5. Get Sense Data Provider State
The xrGetSenseDataProviderStateBD function is defined as:
// Provided by XR_BD_spatial_sensing
XrResult xrGetSenseDataProviderStateBD(
XrSenseDataProviderBD provider,
XrSenseDataProviderStateBD* state);
The application can get a sense data provider’s state by calling xrGetSenseDataProviderStateBD.
The XrSenseDataProviderStateBD enumeration identifies the various states of the sense data provider.
// Provided by XR_BD_spatial_sensing
typedef enum XrSenseDataProviderStateBD {
XR_SENSE_DATA_PROVIDER_STATE_INITIALIZED_BD = 0,
XR_SENSE_DATA_PROVIDER_STATE_RUNNING_BD = 1,
XR_SENSE_DATA_PROVIDER_STATE_STOPPED_BD = 2,
XR_SENSE_DATA_PROVIDER_STATE_MAX_ENUM_BD = 0x7FFFFFFF
} XrSenseDataProviderStateBD;
| Enum | Description |
|---|---|
| XR_SENSE_DATA_PROVIDER_STATE_INITIALIZED_BD | The state after the provider is successfully created, which means the provider is ready to start. |
| XR_SENSE_DATA_PROVIDER_STATE_RUNNING_BD | The state when the provider is running normally. |
| XR_SENSE_DATA_PROVIDER_STATE_STOPPED_BD | The state after the provider is successfully stopped, or when an unexpected error occurs. |
The XrEventDataSenseDataProviderStateChangedBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrEventDataSenseDataProviderStateChangedBD {
XrStructureType type;
const void* next;
XrSenseDataProviderBD provider;
XrSenseDataProviderStateBD newState;
} XrEventDataSenseDataProviderStateChangedBD;
This event indicates the state change of the sense data provider.
The runtime must queue an event of type
XR_TYPE_EVENT_DATA_SENSE_DATA_PROVIDER_STATE_CHANGED_BD when a
specific sense data provider changes from one state to another state.
When the application calls xrCreateSenseDataProviderBD and the return
code is XR_SUCCESS, the runtime must set the provider’s state to
XR_SENSE_DATA_PROVIDER_STATE_INITIALIZED_BD without queuing an event
of type XR_TYPE_EVENT_DATA_SENSE_DATA_PROVIDER_STATE_CHANGED_BD.
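The event rule above (no event queued on creation, one event per state transition) can be modeled with a small stub. The provider struct and its event counter are assumptions standing in for runtime behavior; the state values match this extension's enum.

```c
/* XrSenseDataProviderStateBD values as defined in this extension; the stub
 * provider and its event counter are local stand-ins for runtime behavior. */
typedef enum XrSenseDataProviderStateBD {
    XR_SENSE_DATA_PROVIDER_STATE_INITIALIZED_BD = 0,
    XR_SENSE_DATA_PROVIDER_STATE_RUNNING_BD = 1,
    XR_SENSE_DATA_PROVIDER_STATE_STOPPED_BD = 2
} XrSenseDataProviderStateBD;

typedef struct StubProvider {
    XrSenseDataProviderStateBD state;
    int queuedEvents; /* XR_TYPE_EVENT_DATA_SENSE_DATA_PROVIDER_STATE_CHANGED_BD */
} StubProvider;

/* xrCreateSenseDataProviderBD: INITIALIZED, and no event is queued. */
StubProvider createStubProvider(void) {
    StubProvider p = { XR_SENSE_DATA_PROVIDER_STATE_INITIALIZED_BD, 0 };
    return p;
}

/* Each state-to-state transition queues exactly one event. */
void transitionStubProvider(StubProvider* p, XrSenseDataProviderStateBD newState) {
    if (p->state != newState) {
        p->state = newState;
        p->queuedEvents += 1;
    }
}

int demoStateEvents(void) {
    StubProvider p = createStubProvider();
    transitionStubProvider(&p, XR_SENSE_DATA_PROVIDER_STATE_RUNNING_BD); /* start */
    transitionStubProvider(&p, XR_SENSE_DATA_PROVIDER_STATE_STOPPED_BD); /* stop  */
    return p.queuedEvents; /* one event per transition, none for creation */
}
```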
12.71.6. Query Sense Data
When a sense data provider is running, the application can initiate a query that creates a snapshot of the sense data from the current sense data provider.
The XrSenseDataSnapshotBD handle is defined as:
// Provided by XR_BD_spatial_sensing
XR_DEFINE_HANDLE(XrSenseDataSnapshotBD)
The xrQuerySenseDataAsyncBD function is defined as:
// Provided by XR_BD_spatial_sensing
XrResult xrQuerySenseDataAsyncBD(
XrSenseDataProviderBD provider,
const XrSenseDataQueryInfoBD* queryInfo,
XrFutureEXT* future);
The application can query a sense data provider by calling xrQuerySenseDataAsyncBD.
The application must not call this function to initiate a query request for
the spatial sense data unless xrStartSenseDataProviderCompleteBD has
been called successfully, the future has completed, and it returned a future
result of XR_SUCCESS.
Otherwise, the runtime must return XR_ERROR_CALL_ORDER_INVALID.
The query result is a snapshot of the sense data from the current sense data
provider.
This function starts an asynchronous operation and creates a corresponding
XrFutureEXT, usable with xrPollFutureEXT and related
functions.
The return value of this function only indicates whether the parameters were
acceptable to schedule the asynchronous operation.
The corresponding completion function is xrQuerySenseDataCompleteBD,
usable when a future from this function is in the READY
state, with outputs populated by that function in the completion structure
XrSenseDataQueryCompletionBD.
The XrSenseDataQueryInfoBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrSenseDataQueryInfoBD {
XrStructureType type;
const void* next;
} XrSenseDataQueryInfoBD;
When the application initiates a query request, it can chain an
XrSenseDataFilter* structure to
XrSenseDataQueryInfoBD::next.
All sense data that matches the criteria in the filter is then included in
the result returned.
The runtime must not include sense data that does not match the provided
filter.
If no XrSenseDataFilter* structure is chained to
XrSenseDataQueryInfoBD::next, no filter is applied, and the
runtime must include all currently detected sense data of this provider in
the result.
Note
Currently, only the first filter in the
XrSenseDataQueryInfoBD::next chain is applied.
Some commonly used filters are defined in this extension.
The XrSenseDataFilterUuidBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrSenseDataFilterUuidBD {
XrStructureType type;
const void* next;
uint32_t uuidCount;
const XrUuidEXT* uuids;
} XrSenseDataFilterUuidBD;
The XrSenseDataFilterUuidBD contains a list of UUIDs. When the application passes this filter, the query result will only contain entities with UUIDs that appear in this list.
The XrSenseDataFilterSemanticBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrSenseDataFilterSemanticBD {
XrStructureType type;
const void* next;
uint32_t labelCount;
const XrSemanticLabelBD* labels;
} XrSenseDataFilterSemanticBD;
The XrSenseDataFilterSemanticBD contains a list of semantic labels. When the application passes this filter, the query result will only contain entities with semantic labels that appear in this list.
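The UUID filter's matching rule can be sketched as follows. The trimmed filter struct (type/next omitted) and the function names are stand-ins for XrSenseDataFilterUuidBD semantics; XrUuidEXT is the 16-byte UUID from XR_EXT_uuid.

```c
#include <stdint.h>
#include <string.h>

/* XrUuidEXT as in XR_EXT_uuid: 16 bytes. The filter struct here is trimmed
 * (type/next omitted) and stands in for XrSenseDataFilterUuidBD. */
typedef struct XrUuidEXT { uint8_t data[16]; } XrUuidEXT;

typedef struct UuidFilter {
    uint32_t uuidCount;
    const XrUuidEXT* uuids;
} UuidFilter;

/* An entity passes the filter when its UUID appears in the list. */
int uuidPassesFilter(const XrUuidEXT* entityUuid, const UuidFilter* filter) {
    for (uint32_t i = 0; i < filter->uuidCount; ++i) {
        if (memcmp(entityUuid->data, filter->uuids[i].data, 16) == 0)
            return 1;
    }
    return 0;
}

/* Of three entities, only the two listed would appear in the query result. */
int demoUuidFilter(void) {
    XrUuidEXT a = {{1}}, b = {{2}}, c = {{3}};
    XrUuidEXT wanted[2];
    wanted[0] = a;
    wanted[1] = b;
    UuidFilter filter = { 2, wanted };
    return uuidPassesFilter(&a, &filter) + uuidPassesFilter(&c, &filter);
}
```

The semantic-label filter follows the same inclusion rule, with XrSemanticLabelBD values in place of UUIDs.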
The xrQuerySenseDataCompleteBD function is defined as:
// Provided by XR_BD_spatial_sensing
XrResult xrQuerySenseDataCompleteBD(
XrSenseDataProviderBD provider,
XrFutureEXT future,
XrSenseDataQueryCompletionBD* completion);
This is the completion function corresponding to
xrQuerySenseDataAsyncBD.
It completes the asynchronous operation and returns the results.
Do not call until the future is READY.
The XrSenseDataQueryCompletionBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrSenseDataQueryCompletionBD {
XrStructureType type;
void* next;
XrResult futureResult;
XrSenseDataSnapshotBD snapshot;
} XrSenseDataQueryCompletionBD;
If the futureResult is not XR_SUCCESS, the runtime must set
snapshot to XR_NULL_HANDLE.
The xrDestroySenseDataSnapshotBD function is defined as:
// Provided by XR_BD_spatial_sensing
XrResult xrDestroySenseDataSnapshotBD(
XrSenseDataSnapshotBD snapshot);
The application can destroy a sense data snapshot using xrDestroySenseDataSnapshotBD.
After all the spatial entity info is retrieved from the snapshot, the
application should destroy the XrSenseDataSnapshotBD handle to
release related resources.
The runtime may have an upper limit on the number of snapshots it supports.
When the application exceeds the runtime’s limit, the runtime must set
XrSenseDataQueryCompletionBD::futureResult to
XR_ERROR_LIMIT_REACHED when xrQuerySenseDataCompleteBD is
called.
To obtain new snapshots once the limit is reached, destroy at least one of
the previously obtained snapshots.
12.71.7. Spatial Entity
The XrSpatialEntityIdBD atom is defined as:
// Provided by XR_BD_spatial_sensing
XR_DEFINE_ATOM(XrSpatialEntityIdBD)
The XrSpatialEntityIdBD is used to represent the spatial entities
that are sensed by the runtime in the user’s physical environment.
The application can obtain spatial entity information from the
XrSenseDataSnapshotBD.
This atom is retrieved from XrSenseDataSnapshotBD and shares its
lifetime.
The application must not use an XrSpatialEntityIdBD with a
snapshot it was not retrieved from.
The same numerical value of ID may be reused between different snapshots
for different entities.
See xrGetSpatialEntityUuidBD for a more stable ID.
12.71.8. Spatial Entity UUID
An XrUuidEXT is generated by the runtime when it creates a spatial entity, and the runtime must guarantee the UUID refers to the same spatial entity for the whole life of the spatial entity. For spatial entities that are persisted, the runtime must guarantee the UUID remain unchanged across application sessions.
The xrGetSpatialEntityUuidBD function is defined as:
// Provided by XR_BD_spatial_sensing
XrResult xrGetSpatialEntityUuidBD(
XrSenseDataSnapshotBD snapshot,
XrSpatialEntityIdBD entityId,
XrUuidEXT* uuid);
Applications can get the UUID for a given spatial entity by calling xrGetSpatialEntityUuidBD.
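Because entity IDs are only stable within a single snapshot, applications that track entities across snapshots typically key their bookkeeping on the UUID instead. The comparison step can be sketched as follows, using a local stand-in for the XrUuidEXT definition from openxr.h (in real code the UUID comes from xrGetSpatialEntityUuidBD):

```cpp
#include <cstdint>
#include <cstring>

// Local stand-in mirroring XrUuidEXT from XR_EXT_uuid: 16 opaque bytes.
struct XrUuidEXT {
    uint8_t data[16];
};

// Byte-for-byte UUID comparison, e.g. to recognize the same spatial entity
// in two different snapshots after calling xrGetSpatialEntityUuidBD on each.
bool uuidEquals(const XrUuidEXT& a, const XrUuidEXT& b) {
    return std::memcmp(a.data, b.data, sizeof(a.data)) == 0;
}
```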
12.71.9. Get Sense Data Info
The xrGetQueriedSenseDataBD function is defined as:
// Provided by XR_BD_spatial_sensing
XrResult xrGetQueriedSenseDataBD(
XrSenseDataSnapshotBD snapshot,
XrQueriedSenseDataGetInfoBD* getInfo,
XrQueriedSenseDataBD* queriedSenseData);
The application can get queried sense data from the
XrSenseDataSnapshotBD using xrGetQueriedSenseDataBD.
queriedSenseData contains an application-allocated array that is
populated by the runtime, after capacity negotiation using the
two-call idiom, with entity ID, UUID, and last
update times for each entity in the snapshot.
The XrQueriedSenseDataGetInfoBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrQueriedSenseDataGetInfoBD {
XrStructureType type;
const void* next;
} XrQueriedSenseDataGetInfoBD;
The XrQueriedSenseDataBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrQueriedSenseDataBD {
XrStructureType type;
void* next;
uint32_t stateCapacityInput;
uint32_t stateCountOutput;
XrSpatialEntityStateBD* states;
} XrQueriedSenseDataBD;
The XrSpatialEntityStateBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrSpatialEntityStateBD {
XrStructureType type;
void* next;
XrSpatialEntityIdBD entityId;
XrTime lastUpdateTime;
XrUuidEXT uuid;
} XrSpatialEntityStateBD;
The XrSpatialEntityStateBD structure contains the general information about a spatial entity.
12.71.10. Spatial Entity Component Type
A spatial entity may have several components which provide different data for the entity.
The xrEnumerateSpatialEntityComponentTypesBD function is defined as:
// Provided by XR_BD_spatial_sensing
XrResult xrEnumerateSpatialEntityComponentTypesBD(
XrSenseDataSnapshotBD snapshot,
XrSpatialEntityIdBD entityId,
uint32_t componentTypeCapacityInput,
uint32_t* componentTypeCountOutput,
XrSpatialEntityComponentTypeBD* componentTypes);
The application inspects the component types for a given
XrSpatialEntityIdBD using
xrEnumerateSpatialEntityComponentTypesBD.
The application may skip enumerating component types and proceed directly
to attempting to access component data.
The XrSpatialEntityComponentTypeBD enumeration identifies the different types of components that spatial entities may support.
// Provided by XR_BD_spatial_sensing
typedef enum XrSpatialEntityComponentTypeBD {
XR_SPATIAL_ENTITY_COMPONENT_TYPE_LOCATION_BD = 0,
XR_SPATIAL_ENTITY_COMPONENT_TYPE_SEMANTIC_BD = 1,
XR_SPATIAL_ENTITY_COMPONENT_TYPE_BOUNDING_BOX_2D_BD = 2,
XR_SPATIAL_ENTITY_COMPONENT_TYPE_POLYGON_BD = 3,
XR_SPATIAL_ENTITY_COMPONENT_TYPE_BOUNDING_BOX_3D_BD = 4,
XR_SPATIAL_ENTITY_COMPONENT_TYPE_TRIANGLE_MESH_BD = 5,
// Provided by XR_BD_spatial_plane
XR_SPATIAL_ENTITY_COMPONENT_TYPE_PLANE_ORIENTATION_BD = 1000396000,
XR_SPATIAL_ENTITY_COMPONENT_TYPE_MAX_ENUM_BD = 0x7FFFFFFF
} XrSpatialEntityComponentTypeBD;
| Enum | Description |
|---|---|
| XR_SPATIAL_ENTITY_COMPONENT_TYPE_LOCATION_BD | The location, including position and rotation. Corresponds to component data structure XrSpatialEntityComponentDataLocationBD. |
| XR_SPATIAL_ENTITY_COMPONENT_TYPE_SEMANTIC_BD | The semantic label. Corresponds to component data structure XrSpatialEntityComponentDataSemanticBD. |
| XR_SPATIAL_ENTITY_COMPONENT_TYPE_BOUNDING_BOX_2D_BD | The two-dimensional bounding box. Corresponds to component data structure XrSpatialEntityComponentDataBoundingBox2DBD. |
| XR_SPATIAL_ENTITY_COMPONENT_TYPE_POLYGON_BD | The two-dimensional polygon. Corresponds to component data structure XrSpatialEntityComponentDataPolygonBD. |
| XR_SPATIAL_ENTITY_COMPONENT_TYPE_BOUNDING_BOX_3D_BD | The three-dimensional bounding box. Corresponds to component data structure XrSpatialEntityComponentDataBoundingBox3DBD. |
| XR_SPATIAL_ENTITY_COMPONENT_TYPE_TRIANGLE_MESH_BD | The triangle mesh. Corresponds to component data structure XrSpatialEntityComponentDataTriangleMeshBD. |
| XR_SPATIAL_ENTITY_COMPONENT_TYPE_PLANE_ORIENTATION_BD | The plane orientation. Corresponds to component data structure XrSpatialEntityComponentDataPlaneOrientationBD. (Added by the XR_BD_spatial_plane extension.) |
12.71.11. Get Spatial Entity Component Data
Applications can get the spatial entity component data of a given spatial entity.
The xrGetSpatialEntityComponentDataBD function is defined as:
// Provided by XR_BD_spatial_sensing
XrResult xrGetSpatialEntityComponentDataBD(
XrSenseDataSnapshotBD snapshot,
const XrSpatialEntityComponentGetInfoBD* getInfo,
XrSpatialEntityComponentDataBaseHeaderBD* componentData);
The application can use xrGetSpatialEntityComponentDataBD to get the component data of a spatial entity.
The runtime must return XR_ERROR_VALIDATION_FAILURE if the component
type is not present on a given entity.
The XrSpatialEntityComponentGetInfoBD::componentType must match
the type of the output XrSpatialEntityComponentData* structure passed
as componentData when querying certain component data with
xrGetSpatialEntityComponentDataBD.
Otherwise, the runtime must return XR_ERROR_VALIDATION_FAILURE from
xrGetSpatialEntityComponentDataBD.
If a given component type requires passing additional input through the
XrSpatialEntityComponentGetInfoBD::next chain, but that input is
not present, the runtime must return XR_ERROR_VALIDATION_FAILURE.
Some component data types (types passed to componentData) use the
structure form of the two call idiom
to populate buffers of variable sizes.
The runtime must not change the component data in the snapshot until the snapshot is destroyed.
The XrSpatialEntityComponentGetInfoBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrSpatialEntityComponentGetInfoBD {
XrStructureType type;
const void* next;
XrSpatialEntityIdBD entityId;
XrSpatialEntityComponentTypeBD componentType;
} XrSpatialEntityComponentGetInfoBD;
The XrSpatialEntityComponentGetInfoBD is a common structure that is
used to get component data for a given component type.
For some component types, applications may need to chain additional
getInfo structures on the
XrSpatialEntityComponentGetInfoBD::next chain, when passing it
to the xrGetSpatialEntityComponentDataBD function.
The XrSpatialEntityComponentDataBaseHeaderBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrSpatialEntityComponentDataBaseHeaderBD {
XrStructureType type;
void* next;
} XrSpatialEntityComponentDataBaseHeaderBD;
The XrSpatialEntityComponentDataBaseHeaderBD is a base structure that is not intended to be directly used, but forms a basis for specific component info types. All component info structures begin with the elements described in the XrSpatialEntityComponentDataBaseHeaderBD, and a component info pointer must be cast to a pointer to XrSpatialEntityComponentDataBaseHeaderBD when passing it to the xrGetSpatialEntityComponentDataBD function.
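The base-header cast pattern can be illustrated with local stand-in types (the names and type values below are illustrative, not the real OpenXR definitions): every concrete component data struct begins with the same type and next members, so a pointer to it can be passed as a base-header pointer, and the consumer dispatches on type to recover the concrete struct.

```cpp
#include <cstdint>

// Illustrative stand-ins for the base-header pattern used by
// XrSpatialEntityComponentDataBaseHeaderBD.
using StructureType = uint32_t;
constexpr StructureType kTypeLocationData = 1;

struct ComponentDataBaseHeader {
    StructureType type;
    void* next;
};

// A concrete component data struct begins with the same two members,
// so a pointer to it may be passed as a base-header pointer.
struct ComponentDataLocation {
    StructureType type;
    void* next;
    float x, y, z;
};

// Stands in for xrGetSpatialEntityComponentDataBD: inspects the type
// member to discover which concrete struct it was really handed, then
// casts back and fills it in.
void fillComponentData(ComponentDataBaseHeader* header) {
    if (header->type == kTypeLocationData) {
        auto* loc = reinterpret_cast<ComponentDataLocation*>(header);
        loc->x = 1.0f;
        loc->y = 2.0f;
        loc->z = 3.0f;
    }
}
```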
There are some commonly used component data structures defined in this extension which provide access to different spatial sensing features.
The XrSpatialEntityLocationGetInfoBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrSpatialEntityLocationGetInfoBD {
XrStructureType type;
const void* next;
XrSpace baseSpace;
} XrSpatialEntityLocationGetInfoBD;
Chaining this struct to XrSpatialEntityComponentGetInfoBD::next
is required when querying certain component data with
xrGetSpatialEntityComponentDataBD.
Otherwise, the runtime must return XR_ERROR_VALIDATION_FAILURE from
xrGetSpatialEntityComponentDataBD.
See the XrSpatialEntityComponentData* struct details for more
information which components require this struct to be chained.
The XrSpatialEntityComponentDataLocationBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrSpatialEntityComponentDataLocationBD {
XrStructureType type;
void* next;
XrSpaceLocation location;
} XrSpatialEntityComponentDataLocationBD;
The XrSpatialEntityComponentDataLocationBD is an output struct for
getting the location component data from the snapshot.
This corresponds to an XrSpatialEntityComponentTypeBD value of
XR_SPATIAL_ENTITY_COMPONENT_TYPE_LOCATION_BD.
The application must chain XrSpatialEntityLocationGetInfoBD to
XrSpatialEntityComponentGetInfoBD::next when querying
XrSpatialEntityComponentDataLocationBD.
Otherwise, the runtime must return XR_ERROR_VALIDATION_FAILURE from
xrGetSpatialEntityComponentDataBD.
The XrSpatialEntityComponentDataSemanticBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrSpatialEntityComponentDataSemanticBD {
XrStructureType type;
void* next;
uint32_t labelCapacityInput;
uint32_t labelCountOutput;
XrSemanticLabelBD* labels;
} XrSpatialEntityComponentDataSemanticBD;
The XrSpatialEntityComponentDataSemanticBD is an output struct for
getting the semantic labels component data from an entity in the snapshot.
This corresponds to an XrSpatialEntityComponentTypeBD value of
XR_SPATIAL_ENTITY_COMPONENT_TYPE_SEMANTIC_BD.
The XrSemanticLabelBD enumeration specifies the semantic labels of a spatial entity.
// Provided by XR_BD_spatial_sensing
typedef enum XrSemanticLabelBD {
XR_SEMANTIC_LABEL_UNKNOWN_BD = 0,
XR_SEMANTIC_LABEL_FLOOR_BD = 1,
XR_SEMANTIC_LABEL_CEILING_BD = 2,
XR_SEMANTIC_LABEL_WALL_BD = 3,
XR_SEMANTIC_LABEL_DOOR_BD = 4,
XR_SEMANTIC_LABEL_WINDOW_BD = 5,
XR_SEMANTIC_LABEL_OPENING_BD = 6,
XR_SEMANTIC_LABEL_TABLE_BD = 7,
XR_SEMANTIC_LABEL_SOFA_BD = 8,
XR_SEMANTIC_LABEL_CHAIR_BD = 9,
XR_SEMANTIC_LABEL_HUMAN_BD = 10,
XR_SEMANTIC_LABEL_BEAM_BD = 11,
XR_SEMANTIC_LABEL_COLUMN_BD = 12,
XR_SEMANTIC_LABEL_CURTAIN_BD = 13,
XR_SEMANTIC_LABEL_CABINET_BD = 14,
XR_SEMANTIC_LABEL_BED_BD = 15,
XR_SEMANTIC_LABEL_PLANT_BD = 16,
XR_SEMANTIC_LABEL_SCREEN_BD = 17,
XR_SEMANTIC_LABEL_VIRTUAL_WALL_BD = 18,
XR_SEMANTIC_LABEL_REFRIGERATOR_BD = 19,
XR_SEMANTIC_LABEL_WASHING_MACHINE_BD = 20,
XR_SEMANTIC_LABEL_AIR_CONDITIONER_BD = 21,
XR_SEMANTIC_LABEL_LAMP_BD = 22,
XR_SEMANTIC_LABEL_WALL_ART_BD = 23,
XR_SEMANTIC_LABEL_STAIRWAY_BD = 24,
XR_SEMANTIC_LABEL_MAX_ENUM_BD = 0x7FFFFFFF
} XrSemanticLabelBD;
| Enum | Description |
|---|---|
| XR_SEMANTIC_LABEL_UNKNOWN_BD | Semantic label that the runtime does not know. |
| XR_SEMANTIC_LABEL_FLOOR_BD | Semantic label of floor. |
| XR_SEMANTIC_LABEL_CEILING_BD | Semantic label of ceiling. |
| XR_SEMANTIC_LABEL_WALL_BD | Semantic label of wall. |
| XR_SEMANTIC_LABEL_DOOR_BD | Semantic label of door. |
| XR_SEMANTIC_LABEL_WINDOW_BD | Semantic label of window. |
| XR_SEMANTIC_LABEL_OPENING_BD | Semantic label of opening, usually referring to a space that something or someone can pass through. |
| XR_SEMANTIC_LABEL_TABLE_BD | Semantic label of table. |
| XR_SEMANTIC_LABEL_SOFA_BD | Semantic label of sofa, usually referring to a seat that multiple people can sit on. |
| XR_SEMANTIC_LABEL_CHAIR_BD | Semantic label of chair, usually referring to a seat for one person. |
| XR_SEMANTIC_LABEL_HUMAN_BD | Semantic label of human. |
| XR_SEMANTIC_LABEL_BEAM_BD | Semantic label of beam, which usually supports weight in a building or other structure. |
| XR_SEMANTIC_LABEL_COLUMN_BD | Semantic label of column, which is vertical and used as a support for the roof of a building. |
| XR_SEMANTIC_LABEL_CURTAIN_BD | Semantic label of curtain. |
| XR_SEMANTIC_LABEL_CABINET_BD | Semantic label of cabinet. |
| XR_SEMANTIC_LABEL_BED_BD | Semantic label of bed. |
| XR_SEMANTIC_LABEL_PLANT_BD | Semantic label of plant. |
| XR_SEMANTIC_LABEL_SCREEN_BD | Semantic label of screen. |
| XR_SEMANTIC_LABEL_VIRTUAL_WALL_BD | Semantic label of virtual wall, which is generated by the system scene capture app in order to create a closed space for users. |
| XR_SEMANTIC_LABEL_REFRIGERATOR_BD | Semantic label of refrigerator. |
| XR_SEMANTIC_LABEL_WASHING_MACHINE_BD | Semantic label of washing machine. |
| XR_SEMANTIC_LABEL_AIR_CONDITIONER_BD | Semantic label of air conditioner. |
| XR_SEMANTIC_LABEL_LAMP_BD | Semantic label of lamp. |
| XR_SEMANTIC_LABEL_WALL_ART_BD | Semantic label of wall art, such as a painting or a photo frame. |
| XR_SEMANTIC_LABEL_STAIRWAY_BD | Semantic label of stairway. |
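Because labels is an application-allocated array, retrieving it follows the two-call idiom: one call with labelCapacityInput set to zero to learn the count, then a second call with an allocated buffer. The shape of that exchange can be sketched with a mock standing in for xrGetSpatialEntityComponentDataBD (the mock's stored labels are illustrative):

```cpp
#include <cstdint>
#include <vector>

using XrSemanticLabelBD = int32_t; // stand-in for the real enum

// Mock of the runtime side of the two-call idiom: always reports the
// count, and fills the buffer only when enough capacity is offered.
void mockGetSemanticLabels(uint32_t capacityInput, uint32_t* countOutput,
                           XrSemanticLabelBD* labels) {
    static const XrSemanticLabelBD stored[] = {3 /*WALL*/, 23 /*WALL_ART*/};
    *countOutput = 2;
    if (labels != nullptr && capacityInput >= 2) {
        for (uint32_t i = 0; i < 2; ++i) labels[i] = stored[i];
    }
}

// Application side: first call sizes the buffer, second call fills it.
std::vector<XrSemanticLabelBD> getAllSemanticLabels() {
    uint32_t count = 0;
    mockGetSemanticLabels(0, &count, nullptr);            // size only
    std::vector<XrSemanticLabelBD> labels(count);
    mockGetSemanticLabels(count, &count, labels.data());  // fill buffer
    return labels;
}
```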
The XrSpatialEntityComponentDataBoundingBox2DBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrSpatialEntityComponentDataBoundingBox2DBD {
XrStructureType type;
void* next;
XrRect2Df boundingBox2D;
} XrSpatialEntityComponentDataBoundingBox2DBD;
The XrSpatialEntityComponentDataBoundingBox2DBD is an output struct
for getting the bounding box 2D component data from the snapshot.
This corresponds to an XrSpatialEntityComponentTypeBD value of
XR_SPATIAL_ENTITY_COMPONENT_TYPE_BOUNDING_BOX_2D_BD.
The x-axis and y-axis that describe boundingBox2D are defined by the
spatial entity’s coordinate space.
The XrSpatialEntityComponentDataPolygonBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrSpatialEntityComponentDataPolygonBD {
XrStructureType type;
void* next;
uint32_t vertexCapacityInput;
uint32_t vertexCountOutput;
XrVector2f* vertices;
} XrSpatialEntityComponentDataPolygonBD;
The XrSpatialEntityComponentDataPolygonBD is an output struct for
getting the polygon component data from an entity in the snapshot.
This corresponds to an XrSpatialEntityComponentTypeBD value of
XR_SPATIAL_ENTITY_COMPONENT_TYPE_POLYGON_BD.
The vertices array is a list of vertices on the XY plane, which is
defined by the spatial entity’s coordinate space.
The XrSpatialEntityComponentDataBoundingBox3DBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrSpatialEntityComponentDataBoundingBox3DBD {
XrStructureType type;
void* next;
XrBoxf boundingBox3D;
} XrSpatialEntityComponentDataBoundingBox3DBD;
The XrSpatialEntityComponentDataBoundingBox3DBD is an output struct
for getting the bounding box 3D component data from an entity in the
snapshot.
This corresponds to an XrSpatialEntityComponentTypeBD value of
XR_SPATIAL_ENTITY_COMPONENT_TYPE_BOUNDING_BOX_3D_BD.
The x-axis, y-axis, and z-axis that describe boundingBox3D are defined
by the spatial entity’s coordinate space.
The XrSpatialEntityComponentDataTriangleMeshBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrSpatialEntityComponentDataTriangleMeshBD {
XrStructureType type;
void* next;
uint32_t vertexCapacityInput;
uint32_t vertexCountOutput;
XrVector3f* vertices;
uint32_t indexCapacityInput;
uint32_t indexCountOutput;
uint16_t* indices;
} XrSpatialEntityComponentDataTriangleMeshBD;
The XrSpatialEntityComponentDataTriangleMeshBD is an output struct for
getting the triangle mesh component data from the snapshot.
This corresponds to an XrSpatialEntityComponentTypeBD value of
XR_SPATIAL_ENTITY_COMPONENT_TYPE_TRIANGLE_MESH_BD.
The vertices array is a list of vertices described in the spatial
entity’s coordinate space.
The triangle vertices are in counter-clockwise order as viewed from the user
perspective.
The indices array defines the topology of the triangle mesh.
Each triplet of consecutive elements refers to three vertices in the
vertices array and thus forms a triangle.
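Once the vertices and indices buffers have been filled (each via the two-call idiom), grouping the index buffer into triangles is a walk over the indices in steps of three. A minimal sketch:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Groups a triangle mesh index buffer, as returned in
// XrSpatialEntityComponentDataTriangleMeshBD::indices, into triangles.
// Each triplet of consecutive indices selects three vertices that form
// one counter-clockwise triangle (as viewed from the user's perspective).
std::vector<std::array<uint16_t, 3>>
toTriangles(const std::vector<uint16_t>& indices) {
    std::vector<std::array<uint16_t, 3>> triangles;
    triangles.reserve(indices.size() / 3);
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        triangles.push_back({indices[i], indices[i + 1], indices[i + 2]});
    }
    return triangles;
}
```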
12.71.12. Get Sense Data Update Event
The XrEventDataSenseDataUpdatedBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrEventDataSenseDataUpdatedBD {
XrStructureType type;
const void* next;
XrSenseDataProviderBD provider;
} XrEventDataSenseDataUpdatedBD;
When the application receives this event, it means one of the sense data providers has updated sense data, and the application can initiate a new query request to this sense data provider to get the latest sense data, as a new XrSenseDataSnapshotBD. This is the recommended way to get updated data. The application may initiate a new query request even without receiving this event, but the sense data provider may not have acquired new data and therefore the contents of the snapshot may be functionally identical.
12.71.13. Stop Sense Data Provider
The xrStopSenseDataProviderBD function is defined as:
// Provided by XR_BD_spatial_sensing
XrResult xrStopSenseDataProviderBD(
XrSenseDataProviderBD provider);
The application can stop a sense data provider using xrStopSenseDataProviderBD if it no longer needs to obtain new sense data from the sense data provider. Stopping the provider must not affect the usage of existing valid XrSenseDataSnapshotBD handles and their corresponding entity IDs.
12.71.14. Destroy Sense Data Provider
The xrDestroySenseDataProviderBD function is defined as:
// Provided by XR_BD_spatial_sensing
XrResult xrDestroySenseDataProviderBD(
XrSenseDataProviderBD provider);
The application should destroy a sense data provider using xrDestroySenseDataProviderBD to release the resources related to it.
12.71.15. Anchor
The XrAnchorBD handle is defined as:
// Provided by XR_BD_spatial_sensing
XR_DEFINE_HANDLE(XrAnchorBD)
An anchor is bound to a point in the real physical world, or to a spatial entity.
The XrAnchorBD handle is alive until it is explicitly destroyed or the XrSenseDataProviderBD is destroyed.
Create an Anchor from a Spatial Entity
The xrCreateSpatialEntityAnchorBD function is defined as:
// Provided by XR_BD_spatial_sensing
XrResult xrCreateSpatialEntityAnchorBD(
XrSenseDataProviderBD provider,
const XrSpatialEntityAnchorCreateInfoBD* createInfo,
XrAnchorBD* anchor);
Some spatial entities may have frequently changing poses, requiring real-time acquisition of their poses. The application can create the XrAnchorBD handle bound to the spatial entity through xrCreateSpatialEntityAnchorBD, and then create the XrSpace from it using xrCreateAnchorSpaceBD, so that the application can obtain its latest pose in real time through xrLocateSpace or xrLocateSpaces.
This function is not supported by all types of sense data providers.
Whether a given type of sense data provider supports it is specified in the
extension that defines that provider type.
If the sense data provider does not support anchor creation, the runtime
must return XR_ERROR_ANCHOR_NOT_SUPPORTED_FOR_ENTITY_BD.
This handle and associated handles continue to be valid and usable even if the XrSenseDataSnapshotBD used at its creation has since been destroyed.
The XrSpatialEntityAnchorCreateInfoBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrSpatialEntityAnchorCreateInfoBD {
XrStructureType type;
const void* next;
XrSenseDataSnapshotBD snapshot;
XrSpatialEntityIdBD entityId;
} XrSpatialEntityAnchorCreateInfoBD;
12.71.16. Get Anchor UUID
The xrGetAnchorUuidBD function is defined as:
// Provided by XR_BD_spatial_sensing
XrResult xrGetAnchorUuidBD(
XrAnchorBD anchor,
XrUuidEXT* uuid);
The application can get the anchor UUID using xrGetAnchorUuidBD. If the XrAnchorBD is created from a spatial entity, the anchor’s UUID is the same as the spatial entity’s UUID.
12.71.17. Locate an Anchor with Anchor Spaces
Locating an anchor relative to a base space is performed similarly to locating other spatial objects: through use of an XrSpace handle and functions like xrLocateSpace and xrLocateSpaces. To locate an anchor in a base space, first create an XrSpace handle for that anchor using xrCreateAnchorSpaceBD.
The xrCreateAnchorSpaceBD function is defined as:
// Provided by XR_BD_spatial_sensing
XrResult xrCreateAnchorSpaceBD(
XrSession session,
const XrAnchorSpaceCreateInfoBD* createInfo,
XrSpace* space);
The application can create an XrSpace for an anchor using
xrCreateAnchorSpaceBD.
Using this handle, the application can use calls like xrLocateSpace
and xrLocateSpaces to locate the anchor in a given base space at a
given time.
The createInfo parameter contains the XrAnchorBD as well as a
pose offset to apply.
Multiple XrSpace handles for a given XrAnchorBD may exist simultaneously, up to some limit imposed by the runtime. The XrSpace handle must be eventually freed via the xrDestroySpace function or by destroying the parent XrSession handle.
As the parent of all XrSpace handles, including those created with this function, is an XrSession handle, an anchor space may outlive the XrAnchorBD handle used to create it. Additionally, the ability to locate anchor spaces depends on spatial sensing being active. A valid XrSpace handle created for an XrAnchorBD must be unlocatable unless the associated XrSenseDataProviderBD has been started without since being stopped.
Such an unlocatable anchor space behaves the same as an unlocatable action space as discussed in Action Spaces Lifetime.
Note that destroying the XrAnchorBD used in creating an anchor space does not itself make the anchor space unlocatable; it only prevents creation of additional anchor spaces from that anchor. This mirrors the behavior of action spaces and destruction of their corresponding pose actions.
The XrAnchorSpaceCreateInfoBD structure is defined as:
// Provided by XR_BD_spatial_sensing
typedef struct XrAnchorSpaceCreateInfoBD {
XrStructureType type;
const void* next;
XrAnchorBD anchor;
XrPosef poseInAnchorSpace;
} XrAnchorSpaceCreateInfoBD;
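poseInAnchorSpace expresses a fixed offset in the anchor's own space, so a point at that offset is located at the anchor's position plus the offset rotated by the anchor's orientation. That math can be sketched with local stand-in types for XrVector3f, XrQuaternionf, and XrPosef (the runtime performs the real computation internally during xrLocateSpace):

```cpp
#include <cmath>

// Local stand-ins for XrVector3f / XrQuaternionf / XrPosef.
struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; }; // unit quaternion, w is the scalar part
struct Pose { Quat orientation; Vec3 position; };

// Rotate v by unit quaternion q: v' = v + 2w(u x v) + 2(u x (u x v)),
// where u is the quaternion's vector part.
Vec3 rotate(const Quat& q, const Vec3& v) {
    Vec3 u{q.x, q.y, q.z};
    Vec3 t{2.0f * (u.y * v.z - u.z * v.y),   // t = 2 (u x v)
           2.0f * (u.z * v.x - u.x * v.z),
           2.0f * (u.x * v.y - u.y * v.x)};
    Vec3 c{u.y * t.z - u.z * t.y,            // c = u x t
           u.z * t.x - u.x * t.z,
           u.x * t.y - u.y * t.x};
    return {v.x + q.w * t.x + c.x,
            v.y + q.w * t.y + c.y,
            v.z + q.w * t.z + c.z};
}

// Where a point offset in the anchor's space lands, given the anchor's
// pose located in some base space.
Vec3 applyPose(const Pose& anchorPose, const Vec3& offsetInAnchorSpace) {
    Vec3 r = rotate(anchorPose.orientation, offsetInAnchorSpace);
    return {anchorPose.position.x + r.x,
            anchorPose.position.y + r.y,
            anchorPose.position.z + r.z};
}
```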
12.71.18. Destroy an Anchor
The xrDestroyAnchorBD function is defined as:
// Provided by XR_BD_spatial_sensing
XrResult xrDestroyAnchorBD(
XrAnchorBD anchor);
The application can destroy an anchor using xrDestroyAnchorBD.
12.71.22. New Structures
- Extending XrSenseDataQueryInfoBD:
  - XrSenseDataFilterSemanticBD
  - XrSenseDataFilterUuidBD
- Extending XrSpatialEntityComponentDataBaseHeaderBD:
  - XrSpatialEntityComponentDataBoundingBox2DBD
  - XrSpatialEntityComponentDataBoundingBox3DBD
  - XrSpatialEntityComponentDataLocationBD
  - XrSpatialEntityComponentDataPolygonBD
  - XrSpatialEntityComponentDataSemanticBD
  - XrSpatialEntityComponentDataTriangleMeshBD
- Extending XrSpatialEntityComponentGetInfoBD:
  - XrSpatialEntityLocationGetInfoBD
- Extending XrSystemProperties:
  - XrSystemSpatialSensingPropertiesBD
12.71.24. New Enum Constants
- XR_BD_SPATIAL_SENSING_EXTENSION_NAME
- XR_BD_spatial_sensing_SPEC_VERSION
- Extending XrObjectType:
  - XR_OBJECT_TYPE_ANCHOR_BD
  - XR_OBJECT_TYPE_SENSE_DATA_PROVIDER_BD
  - XR_OBJECT_TYPE_SENSE_DATA_SNAPSHOT_BD
- Extending XrResult:
  - XR_ERROR_ANCHOR_NOT_SUPPORTED_FOR_ENTITY_BD
  - XR_ERROR_SPATIAL_ENTITY_ID_INVALID_BD
  - XR_ERROR_SPATIAL_SENSING_SERVICE_UNAVAILABLE_BD
- Extending XrStructureType:
  - XR_TYPE_ANCHOR_SPACE_CREATE_INFO_BD
  - XR_TYPE_EVENT_DATA_SENSE_DATA_PROVIDER_STATE_CHANGED_BD
  - XR_TYPE_EVENT_DATA_SENSE_DATA_UPDATED_BD
  - XR_TYPE_QUERIED_SENSE_DATA_BD
  - XR_TYPE_QUERIED_SENSE_DATA_GET_INFO_BD
  - XR_TYPE_SENSE_DATA_FILTER_SEMANTIC_BD
  - XR_TYPE_SENSE_DATA_FILTER_UUID_BD
  - XR_TYPE_SENSE_DATA_PROVIDER_CREATE_INFO_BD
  - XR_TYPE_SENSE_DATA_PROVIDER_START_INFO_BD
  - XR_TYPE_SENSE_DATA_QUERY_COMPLETION_BD
  - XR_TYPE_SENSE_DATA_QUERY_INFO_BD
  - XR_TYPE_SPATIAL_ENTITY_ANCHOR_CREATE_INFO_BD
  - XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_BOUNDING_BOX_2D_BD
  - XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_BOUNDING_BOX_3D_BD
  - XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_LOCATION_BD
  - XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_POLYGON_BD
  - XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_SEMANTIC_BD
  - XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_TRIANGLE_MESH_BD
  - XR_TYPE_SPATIAL_ENTITY_COMPONENT_GET_INFO_BD
  - XR_TYPE_SPATIAL_ENTITY_LOCATION_GET_INFO_BD
  - XR_TYPE_SPATIAL_ENTITY_STATE_BD
  - XR_TYPE_SYSTEM_SPATIAL_SENSING_PROPERTIES_BD
12.71.25. Sample Code
Create sense data provider & query component data
The following example code demonstrates how to create and start a spatial sense data provider for capability "Foo", as well as how to query its component "Bar" data.
XrInstance instance; // previously initialized
XrSystemId systemId; // previously initialized
XrSession session; // previously initialized
// The function pointers are previously initialized using xrGetInstanceProcAddr.
PFN_xrCreateSenseDataProviderBD xrCreateSenseDataProviderBD; // previously initialized
PFN_xrStartSenseDataProviderAsyncBD xrStartSenseDataProviderAsyncBD; // previously initialized
PFN_xrStartSenseDataProviderCompleteBD xrStartSenseDataProviderCompleteBD; // previously initialized
PFN_xrStopSenseDataProviderBD xrStopSenseDataProviderBD; // previously initialized
PFN_xrDestroySenseDataProviderBD xrDestroySenseDataProviderBD; // previously initialized
PFN_xrQuerySenseDataAsyncBD xrQuerySenseDataAsyncBD; // previously initialized
PFN_xrQuerySenseDataCompleteBD xrQuerySenseDataCompleteBD; // previously initialized
PFN_xrGetSenseDataProviderStateBD xrGetSenseDataProviderStateBD; // previously initialized
PFN_xrGetQueriedSenseDataBD xrGetQueriedSenseDataBD; // previously initialized
PFN_xrEnumerateSpatialEntityComponentTypesBD xrEnumerateSpatialEntityComponentTypesBD; // previously initialized
PFN_xrGetSpatialEntityComponentDataBD xrGetSpatialEntityComponentDataBD; // previously initialized
PFN_xrDestroySenseDataSnapshotBD xrDestroySenseDataSnapshotBD; // previously initialized
PFN_xrPollFutureEXT xrPollFutureEXT; // previously initialized
// Define structure to create data provider for spatial sensing capability foo
#define XR_TYPE_SENSE_DATA_PROVIDER_CREATE_INFO_FOO_BD ((XrStructureType)1000389000U)
// Derives from XrSenseDataProviderCreateInfoBD
typedef struct XrSenseDataProviderCreateInfoFooBD
{
XrStructureType type;
const void *XR_MAY_ALIAS next;
} XrSenseDataProviderCreateInfoFooBD;
// Define type for component bar and data structure to get component bar data
#define XR_SPATIAL_ENTITY_COMPONENT_TYPE_BAR_BD ((XrSpatialEntityComponentTypeBD)1000389001U)
#define XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_BAR_BD ((XrStructureType)1000389002U)
// Derives from XrSpatialEntityComponentDataBaseHeaderBD
typedef struct XrSpatialEntityComponentDataBarBD
{
XrStructureType type;
const void *XR_MAY_ALIAS next;
uint32_t barData;
} XrSpatialEntityComponentDataBarBD;
// Create data provider for spatial sensing capability foo
XrSenseDataProviderBD fooProvider;
XrFutureEXT providerCreateFuture;
XrSenseDataProviderCreateInfoFooBD providerCreateInfoFoo{XR_TYPE_SENSE_DATA_PROVIDER_CREATE_INFO_FOO_BD};
XrSenseDataProviderCreateInfoBD
providerCreateInfo{XR_TYPE_SENSE_DATA_PROVIDER_CREATE_INFO_BD, &providerCreateInfoFoo};
CHK_XR(xrCreateSenseDataProviderBD(session, &providerCreateInfo, &fooProvider));
auto waitUntilFutureReady = [&](XrFutureEXT future)
{
XrFuturePollInfoEXT pollInfo{XR_TYPE_FUTURE_POLL_INFO_EXT};
XrFuturePollResultEXT pollResult{XR_TYPE_FUTURE_POLL_RESULT_EXT};
pollInfo.future = future;
do
{
// sleep(1);
CHK_XR(xrPollFutureEXT(instance, &pollInfo, &pollResult));
} while (pollResult.state != XR_FUTURE_STATE_READY_EXT);
};
// Start foo provider
XrFutureEXT providerStartFuture;
XrSenseDataProviderStartInfoBD startInfo{XR_TYPE_SENSE_DATA_PROVIDER_START_INFO_BD};
CHK_XR(xrStartSenseDataProviderAsyncBD(fooProvider, &startInfo, &providerStartFuture));
waitUntilFutureReady(providerStartFuture);
XrFutureCompletionEXT completion{XR_TYPE_FUTURE_COMPLETION_EXT};
XrResult result = xrStartSenseDataProviderCompleteBD(session, providerStartFuture, &completion);
CHK_XR(result); // Result of the complete function
CHK_XR(completion.futureResult);
// Check the provider state.
XrSenseDataProviderStateBD providerState;
CHK_XR(xrGetSenseDataProviderStateBD(fooProvider, &providerState));
if (providerState != XR_SENSE_DATA_PROVIDER_STATE_RUNNING_BD)
{
// Provider not running, handle the case: start the provider again, or quit
// ...
return;
}
// Query foo spatial entities
auto queryFooSpatialEntities = [&]()
{
XrSenseDataSnapshotBD querySnapshotHandle{XR_NULL_HANDLE};
XrFutureEXT queryFuture;
XrSenseDataQueryInfoBD queryInfo{XR_TYPE_SENSE_DATA_QUERY_INFO_BD};
CHK_XR(xrQuerySenseDataAsyncBD(fooProvider, &queryInfo, &queryFuture));
waitUntilFutureReady(queryFuture);
XrSenseDataQueryCompletionBD completion{XR_TYPE_SENSE_DATA_QUERY_COMPLETION_BD};
result = xrQuerySenseDataCompleteBD(fooProvider, queryFuture, &completion);
CHK_XR(result); // Result of the complete function
CHK_XR(completion.futureResult);
querySnapshotHandle = completion.snapshot;
// Retrieve queried sense data
XrQueriedSenseDataGetInfoBD queriedDataGetInfo{XR_TYPE_QUERIED_SENSE_DATA_GET_INFO_BD};
XrQueriedSenseDataBD queriedSenseData{XR_TYPE_QUERIED_SENSE_DATA_BD,
nullptr,
0,
0,
nullptr};
// First call to get spatial entity count
CHK_XR(xrGetQueriedSenseDataBD(querySnapshotHandle, &queriedDataGetInfo, &queriedSenseData));
if (queriedSenseData.stateCountOutput > 0)
{
std::vector<XrSpatialEntityStateBD> queriedFooStates(queriedSenseData.stateCountOutput);
queriedSenseData.stateCapacityInput = queriedSenseData.stateCountOutput;
queriedSenseData.states = queriedFooStates.data();
// Second call to get the spatial entities in the provider's query result
CHK_XR(xrGetQueriedSenseDataBD(querySnapshotHandle, &queriedDataGetInfo, &queriedSenseData));
// Process the data
for (int i = 0; i < queriedSenseData.stateCountOutput; ++i)
{
// Process the spatial entities, and get spatial entity component data
XrSpatialEntityIdBD entityId = queriedFooStates[i].entityId;
XrUuidEXT uuid = queriedFooStates[i].uuid;
// Enumerate the component types of the entity
uint32_t componentTypeCountOutput = 0;
CHK_XR(xrEnumerateSpatialEntityComponentTypesBD(querySnapshotHandle, entityId, 0, &componentTypeCountOutput, nullptr));
if (componentTypeCountOutput > 0)
{
std::vector<XrSpatialEntityComponentTypeBD> entityComponentTypes(componentTypeCountOutput);
uint32_t componentTypeCapacityInput = componentTypeCountOutput;
CHK_XR(xrEnumerateSpatialEntityComponentTypesBD(querySnapshotHandle, entityId, componentTypeCapacityInput,
&componentTypeCountOutput, entityComponentTypes.data()));
// Check the entity component types and get the bar component data
for (int j = 0; j < componentTypeCountOutput; ++j)
{
if (entityComponentTypes[j] == XR_SPATIAL_ENTITY_COMPONENT_TYPE_BAR_BD)
{
XrSpatialEntityComponentGetInfoBD componentBarGetInfo{XR_TYPE_SPATIAL_ENTITY_COMPONENT_GET_INFO_BD,
nullptr,
entityId,
XR_SPATIAL_ENTITY_COMPONENT_TYPE_BAR_BD};
XrSpatialEntityComponentDataBarBD componentBarData{XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_BAR_BD};
CHK_XR(xrGetSpatialEntityComponentDataBD(querySnapshotHandle, &componentBarGetInfo,
reinterpret_cast<XrSpatialEntityComponentDataBaseHeaderBD *>(&componentBarData)));
// Component bar data now in componentBarData
// ...
}
}
}
}
}
// Destroy query result
CHK_XR(xrDestroySenseDataSnapshotBD(querySnapshotHandle));
};
auto processSenseDataEvent = [&]()
{
while (true)
{
XrEventDataBuffer event = {XR_TYPE_EVENT_DATA_BUFFER};
XrResult result = xrPollEvent(instance, &event);
if (result == XR_EVENT_UNAVAILABLE)
{
// No event available, quit the loop.
break;
}
if (result == XR_SUCCESS)
{
// Process the event
switch (event.type)
{
case XR_TYPE_EVENT_DATA_SENSE_DATA_UPDATED_BD:
{
const XrEventDataSenseDataUpdatedBD &senseDataUpdatedEvent =
*reinterpret_cast<XrEventDataSenseDataUpdatedBD *>(&event);
if (senseDataUpdatedEvent.provider == fooProvider)
{
// Sense data update in foo provider
queryFooSpatialEntities();
}
break;
}
default:
break;
}
}
}
};
//////////////////////////////////////
// Per-frame work //
//////////////////////////////////////
while (1)
{
// ...
processSenseDataEvent();
// ...
// Finish loop
// ...
}
// Stop and destroy the provider
CHK_XR(xrStopSenseDataProviderBD(fooProvider));
CHK_XR(xrDestroySenseDataProviderBD(fooProvider));
Get location component data and other component data coordinate with location
The following example code demonstrates how to get location component data from a snapshot, and how to get the bounding box 2D component data and the triangle mesh component data, both of which are defined in the coordinate system given by the spatial entity’s location component.
XrSenseDataProviderBD provider; // previously returned from xrCreateSenseDataProviderBD
XrSenseDataSnapshotBD querySnapshotHandle; // previously returned from xrQuerySenseDataCompleteBD
XrSpace localSpace; // previously initialized, e.g. from XR_REFERENCE_SPACE_TYPE_LOCAL
XrFrameState frameState; // previously returned from xrWaitFrame
PFN_xrGetQueriedSenseDataBD xrGetQueriedSenseDataBD; // previously initialized
PFN_xrEnumerateSpatialEntityComponentTypesBD xrEnumerateSpatialEntityComponentTypesBD; // previously initialized
PFN_xrGetSpatialEntityComponentDataBD xrGetSpatialEntityComponentDataBD; // previously initialized
auto hasComponentType = [&](std::vector<XrSpatialEntityComponentTypeBD>& typeList, XrSpatialEntityComponentTypeBD type)->bool
{
if(typeList.empty())
{
return false;
}
for(int i = 0; i < typeList.size(); i++)
{
if(typeList[i] == type)
{
return true;
}
}
return false;
};
XrQueriedSenseDataGetInfoBD queriedDataGetInfo{XR_TYPE_QUERIED_SENSE_DATA_GET_INFO_BD};
XrQueriedSenseDataBD queriedSenseData{XR_TYPE_QUERIED_SENSE_DATA_BD};
// First call to get spatial entity count
CHK_XR(xrGetQueriedSenseDataBD(querySnapshotHandle, &queriedDataGetInfo, &queriedSenseData));
if (queriedSenseData.stateCountOutput > 0)
{
std::vector<XrSpatialEntityStateBD> queriedEntityStates(queriedSenseData.stateCountOutput);
queriedSenseData.stateCapacityInput = queriedSenseData.stateCountOutput;
queriedSenseData.states = queriedEntityStates.data();
// Second call to get the spatial entities in the provider's query result
CHK_XR(xrGetQueriedSenseDataBD(querySnapshotHandle, &queriedDataGetInfo, &queriedSenseData));
// Process the data
for (int i = 0; i < queriedSenseData.stateCountOutput; ++i)
{
// Process the spatial entities, and get spatial entity component data
XrSpatialEntityIdBD entityId = queriedEntityStates[i].entityId;
XrUuidEXT uuid = queriedEntityStates[i].uuid;
// Enumerate the component types of the entity
uint32_t componentTypeCountOutput = 0;
CHK_XR(xrEnumerateSpatialEntityComponentTypesBD(querySnapshotHandle, entityId, 0, &componentTypeCountOutput, nullptr));
if (componentTypeCountOutput > 0)
{
std::vector<XrSpatialEntityComponentTypeBD> entityComponentTypes(componentTypeCountOutput);
uint32_t componentTypeCapacityInput = componentTypeCountOutput;
CHK_XR(xrEnumerateSpatialEntityComponentTypesBD(querySnapshotHandle, entityId, componentTypeCapacityInput,
&componentTypeCountOutput, entityComponentTypes.data()));
// Check the entity component types and get the location component data
if(hasComponentType(entityComponentTypes, XR_SPATIAL_ENTITY_COMPONENT_TYPE_LOCATION_BD))
{
XrSpatialEntityLocationGetInfoBD locationGetInfo{XR_TYPE_SPATIAL_ENTITY_LOCATION_GET_INFO_BD};
locationGetInfo.baseSpace = localSpace;
// Chain the XrSpatialEntityLocationGetInfoBD struct to XrSpatialEntityComponentGetInfoBD when getting location component data.
XrSpatialEntityComponentGetInfoBD componentGetInfo{XR_TYPE_SPATIAL_ENTITY_COMPONENT_GET_INFO_BD};
componentGetInfo.next = &locationGetInfo;
componentGetInfo.entityId = entityId;
componentGetInfo.componentType = XR_SPATIAL_ENTITY_COMPONENT_TYPE_LOCATION_BD;
XrSpatialEntityComponentDataLocationBD componentLocationData{XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_LOCATION_BD};
CHK_XR(xrGetSpatialEntityComponentDataBD(querySnapshotHandle, &componentGetInfo,
reinterpret_cast<XrSpatialEntityComponentDataBaseHeaderBD *>(&componentLocationData)));
// The location data is now in componentLocationData
// ...
}
// Check the entity component types and get the bounding box 2D component data
if(hasComponentType(entityComponentTypes, XR_SPATIAL_ENTITY_COMPONENT_TYPE_BOUNDING_BOX_2D_BD))
{
XrSpatialEntityComponentGetInfoBD componentGetInfo{XR_TYPE_SPATIAL_ENTITY_COMPONENT_GET_INFO_BD};
componentGetInfo.next = nullptr;
componentGetInfo.entityId = entityId;
componentGetInfo.componentType = XR_SPATIAL_ENTITY_COMPONENT_TYPE_BOUNDING_BOX_2D_BD;
XrSpatialEntityComponentDataBoundingBox2DBD componentBoundingBox2DData{XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_BOUNDING_BOX_2D_BD};
CHK_XR(xrGetSpatialEntityComponentDataBD(querySnapshotHandle, &componentGetInfo,
reinterpret_cast<XrSpatialEntityComponentDataBaseHeaderBD *>(&componentBoundingBox2DData)));
// The bounding box 2D data is now in componentBoundingBox2DData
// The bounding box 2D data is an XrRect2Df defining the offset and extent along the x-axis (width) and y-axis (height).
// The x-axis and y-axis are defined by the spatial entity's location component data.
}
// Check the entity component types and get the triangle mesh component data
if(hasComponentType(entityComponentTypes, XR_SPATIAL_ENTITY_COMPONENT_TYPE_TRIANGLE_MESH_BD))
{
XrSpatialEntityComponentGetInfoBD componentGetInfo{XR_TYPE_SPATIAL_ENTITY_COMPONENT_GET_INFO_BD};
componentGetInfo.next = nullptr;
componentGetInfo.entityId = entityId;
componentGetInfo.componentType = XR_SPATIAL_ENTITY_COMPONENT_TYPE_TRIANGLE_MESH_BD;
XrSpatialEntityComponentDataTriangleMeshBD componentTriangleMeshData{XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_TRIANGLE_MESH_BD};
componentTriangleMeshData.next = nullptr;
componentTriangleMeshData.vertexCapacityInput = 0;
componentTriangleMeshData.indexCapacityInput = 0;
// First call to get vertex and index counts.
CHK_XR(xrGetSpatialEntityComponentDataBD(querySnapshotHandle, &componentGetInfo,
reinterpret_cast<XrSpatialEntityComponentDataBaseHeaderBD *>(&componentTriangleMeshData)));
if(componentTriangleMeshData.vertexCountOutput > 0 && componentTriangleMeshData.indexCountOutput > 0)
{
std::vector<XrVector3f> vertices(componentTriangleMeshData.vertexCountOutput);
std::vector<uint16_t> indices(componentTriangleMeshData.indexCountOutput);
componentTriangleMeshData.vertexCapacityInput = componentTriangleMeshData.vertexCountOutput;
componentTriangleMeshData.vertices = vertices.data();
componentTriangleMeshData.indexCapacityInput = componentTriangleMeshData.indexCountOutput;
componentTriangleMeshData.indices = indices.data();
// Second call to get vertices and indices.
CHK_XR(xrGetSpatialEntityComponentDataBD(querySnapshotHandle, &componentGetInfo,
reinterpret_cast<XrSpatialEntityComponentDataBaseHeaderBD *>(&componentTriangleMeshData)));
// The triangle mesh data is now in componentTriangleMeshData
// The vertices are XrVector3f structs defining the positions along the x-axis, y-axis and z-axis.
// The x-axis, y-axis and z-axis are defined by the spatial entity's location component data.
// ...
}
}
}
}
}
12.72. XR_BD_ultra_controller_interaction
- Name String: XR_BD_ultra_controller_interaction
- Extension Type: Instance extension
- Registered Extension Number: 404
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- API Interactions:
  - Interacts with XR_EXT_dpad_binding
  - Interacts with XR_EXT_hand_interaction
  - Interacts with XR_EXT_palm_pose
- Last Modified Date: 2025-04-15
- IP Status: No known IP claims.
- Contributors:
  Shanliang Xu, ByteDance
  Shuai Liu, ByteDance
  Chenxi Bao, ByteDance
  Chen Han, ByteDance
  Zhao Jiabin, ByteDance
12.72.1. Overview
This extension defines the interaction profile for PICO Ultra Controllers.
Interaction profile path for PICO Ultra Controllers:
- /interaction_profiles/bytedance/pico_ultra_controller_bd
Valid for user paths for pico_ultra_controller_bd:
- /user/hand/left
- /user/hand/right
Supported component paths for pico_ultra_controller_bd:
- On /user/hand/left only:
  - …/input/x/click
  - …/input/x/touch
  - …/input/y/click
  - …/input/y/touch
  - …/input/menu/click
- On /user/hand/right only:
  - …/input/a/click
  - …/input/a/touch
  - …/input/b/click
  - …/input/b/touch
- …/input/system/click (may not be available for application use)
- …/input/trigger/click
- …/input/trigger/value
- …/input/trigger/touch
- …/input/thumbstick/y
- …/input/thumbstick/x
- …/input/thumbstick/click
- …/input/thumbstick/touch
- …/input/squeeze/click
- …/input/squeeze/value
- …/input/grip/pose
- …/input/grip_surface/pose
- …/input/aim/pose
- …/output/haptic
Be careful with the following component:
- The Ultra Controller supports …/input/menu/click only on /user/hand/left.
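As an illustration, an application can suggest bindings for this interaction profile with the core xrSuggestInteractionProfileBindings function. This is a sketch only: the triggerAction and hapticAction handles are hypothetical actions assumed to have been created beforehand.

```cpp
XrInstance instance;    // previously initialized
XrAction triggerAction; // previously created float action (hypothetical)
XrAction hapticAction;  // previously created vibration output action (hypothetical)

// Convert the profile and component path strings to XrPath values.
XrPath ultraProfilePath, triggerValuePath, hapticPath;
CHK_XR(xrStringToPath(instance, "/interaction_profiles/bytedance/pico_ultra_controller_bd", &ultraProfilePath));
CHK_XR(xrStringToPath(instance, "/user/hand/right/input/trigger/value", &triggerValuePath));
CHK_XR(xrStringToPath(instance, "/user/hand/right/output/haptic", &hapticPath));

XrActionSuggestedBinding bindings[2];
bindings[0].action = triggerAction;
bindings[0].binding = triggerValuePath;
bindings[1].action = hapticAction;
bindings[1].binding = hapticPath;

XrInteractionProfileSuggestedBinding suggestedBindings{XR_TYPE_INTERACTION_PROFILE_SUGGESTED_BINDING};
suggestedBindings.interactionProfile = ultraProfilePath;
suggestedBindings.countSuggestedBindings = 2;
suggestedBindings.suggestedBindings = bindings;
CHK_XR(xrSuggestInteractionProfileBindings(instance, &suggestedBindings));
```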
12.73. XR_EPIC_view_configuration_fov
- Name String: XR_EPIC_view_configuration_fov
- Extension Type: Instance extension
- Registered Extension Number: 60
- Revision: 2
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2020-03-05
- IP Status: No known IP claims.
- Contributors:
  Jules Blok, Epic Games
Overview
This extension allows the application to retrieve the recommended and maximum field-of-view using xrEnumerateViewConfigurationViews. These field-of-view parameters can be used during initialization of the application before creating a session.
The field-of-view given here should not be used for rendering; see xrLocateViews to retrieve the field-of-view for rendering.
For views with fovMutable set to XR_TRUE, the maximum field-of-view
should specify the upper limit that the runtime can support.
If the view has fovMutable set to XR_FALSE, the runtime must set
maxMutableFov to be the same as recommendedFov.
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
The XrViewConfigurationViewFovEPIC structure is an output struct which can be added to the next chain of XrViewConfigurationView to retrieve the field-of-view for that view.
// Provided by XR_EPIC_view_configuration_fov
typedef struct XrViewConfigurationViewFovEPIC {
XrStructureType type;
const void* next;
XrFovf recommendedFov;
XrFovf maxMutableFov;
} XrViewConfigurationViewFovEPIC;
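For example, an application might chain this output structure to each XrViewConfigurationView when enumerating views. The following is a sketch assuming a stereo view configuration; error handling beyond CHK_XR is omitted.

```cpp
XrInstance instance; // previously initialized
XrSystemId systemId; // previously initialized

// First call to get the view count.
uint32_t viewCount = 0;
CHK_XR(xrEnumerateViewConfigurationViews(instance, systemId,
    XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO, 0, &viewCount, nullptr));

std::vector<XrViewConfigurationViewFovEPIC> fovProperties(
    viewCount, {XR_TYPE_VIEW_CONFIGURATION_VIEW_FOV_EPIC});
std::vector<XrViewConfigurationView> views(viewCount, {XR_TYPE_VIEW_CONFIGURATION_VIEW});
for (uint32_t i = 0; i < viewCount; i++) {
    views[i].next = &fovProperties[i]; // chain the output struct to each view
}
// Second call fills views and the chained FOV structures.
CHK_XR(xrEnumerateViewConfigurationViews(instance, systemId,
    XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO, viewCount, &viewCount, views.data()));
// fovProperties[i].recommendedFov and fovProperties[i].maxMutableFov are now populated.
```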
New Functions
Issues
Version History
- Revision 2, 2020-06-04 (Jules Blok)
  - Fixed incorrect member name.
- Revision 1, 2020-03-05 (Jules Blok)
  - Initial version.
12.74. XR_FB_android_surface_swapchain_create
- Name String: XR_FB_android_surface_swapchain_create
- Extension Type: Instance extension
- Registered Extension Number: 71
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Contributors:
  Cass Everitt, Facebook
  Gloria Kennickell, Facebook
  Tomislav Novak, Facebook
Overview
This extension provides support for the specification of Android Surface specific swapchain create flags.
In order to enable the functionality of this extension, the application
must pass the name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
These additional create flags are specified by attaching a
XrAndroidSurfaceSwapchainCreateInfoFB structure to the next
chain of an XrSwapchainCreateInfo structure.
New Object Types
New Flag Types
typedef XrFlags64 XrAndroidSurfaceSwapchainFlagsFB;
// Flag bits for XrAndroidSurfaceSwapchainFlagsFB
static const XrAndroidSurfaceSwapchainFlagsFB XR_ANDROID_SURFACE_SWAPCHAIN_SYNCHRONOUS_BIT_FB = 0x00000001;
static const XrAndroidSurfaceSwapchainFlagsFB XR_ANDROID_SURFACE_SWAPCHAIN_USE_TIMESTAMPS_BIT_FB = 0x00000002;
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_ANDROID_SURFACE_SWAPCHAIN_CREATE_INFO_FB
New Enums
- XR_ANDROID_SURFACE_SWAPCHAIN_SYNCHRONOUS_BIT_FB
- XR_ANDROID_SURFACE_SWAPCHAIN_USE_TIMESTAMPS_BIT_FB
New Structures
The XrAndroidSurfaceSwapchainCreateInfoFB structure is defined as:
// Provided by XR_FB_android_surface_swapchain_create
typedef struct XrAndroidSurfaceSwapchainCreateInfoFB {
XrStructureType type;
const void* next;
XrAndroidSurfaceSwapchainFlagsFB createFlags;
} XrAndroidSurfaceSwapchainCreateInfoFB;
XrAndroidSurfaceSwapchainCreateInfoFB contains additional Android
Surface specific create flags when calling
xrCreateSwapchainAndroidSurfaceKHR.
The XrAndroidSurfaceSwapchainCreateInfoFB structure must be provided
in the next chain of the XrSwapchainCreateInfo structure when
calling xrCreateSwapchainAndroidSurfaceKHR.
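A sketch of chaining the structure follows. It assumes XR_KHR_android_surface_swapchain is also enabled and that xrCreateSwapchainAndroidSurfaceKHR has been resolved via xrGetInstanceProcAddr; the remaining XrSwapchainCreateInfo members are filled in as for any Android Surface swapchain.

```cpp
XrSession session; // previously initialized
jobject surface;   // receives the Android Surface object

XrAndroidSurfaceSwapchainCreateInfoFB androidSurfaceCreateInfo{
    XR_TYPE_ANDROID_SURFACE_SWAPCHAIN_CREATE_INFO_FB};
androidSurfaceCreateInfo.createFlags = XR_ANDROID_SURFACE_SWAPCHAIN_SYNCHRONOUS_BIT_FB;

XrSwapchainCreateInfo createInfo{XR_TYPE_SWAPCHAIN_CREATE_INFO};
createInfo.next = &androidSurfaceCreateInfo; // chain the FB create flags
// ... fill in the remaining XrSwapchainCreateInfo members as usual ...

XrSwapchain swapchain;
CHK_XR(xrCreateSwapchainAndroidSurfaceKHR(session, &createInfo, &swapchain, &surface));
```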
New Functions
Issues
Version History
- Revision 1, 2020-12-10 (Gloria Kennickell)
  - Initial draft
12.75. XR_FB_body_tracking
- Name String: XR_FB_body_tracking
- Extension Type: Instance extension
- Registered Extension Number: 77
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2022-07-18
- IP Status: No known IP claims.
- Contributors:
  Giancarlo Di Biase, Meta
  Dikpal Reddy, Meta
  Igor Tceglevskii, Meta
12.75.1. Overview
This extension enables applications to locate the individual body joints that represent the estimated position of the user of the device. It enables applications to render the upper body in XR experiences.
12.75.2. Inspect system capability
An application can inspect whether the system is capable of body tracking by extending the XrSystemProperties with XrSystemBodyTrackingPropertiesFB structure when calling xrGetSystemProperties.
// Provided by XR_FB_body_tracking
typedef struct XrSystemBodyTrackingPropertiesFB {
XrStructureType type;
void* next;
XrBool32 supportsBodyTracking;
} XrSystemBodyTrackingPropertiesFB;
If a runtime returns XR_FALSE for supportsBodyTracking, the
runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateBodyTrackerFB.
12.75.3. Create a body tracker handle
The XrBodyTrackerFB handle represents the resources for body tracking.
// Provided by XR_FB_body_tracking
XR_DEFINE_HANDLE(XrBodyTrackerFB)
This handle can be used to locate body joints using the xrLocateBodyJointsFB function.
A body tracker provides joint locations with an unobstructed range of human body motion.
It also provides the estimated scale of this body.
An application can create an XrBodyTrackerFB handle using the xrCreateBodyTrackerFB function.
// Provided by XR_FB_body_tracking
XrResult xrCreateBodyTrackerFB(
XrSession session,
const XrBodyTrackerCreateInfoFB* createInfo,
XrBodyTrackerFB* bodyTracker);
If the system does not support body tracking, the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from xrCreateBodyTrackerFB.
In this case, the runtime must return XR_FALSE for
XrSystemBodyTrackingPropertiesFB::supportsBodyTracking when the
function xrGetSystemProperties is called, so that the application can
avoid creating a body tracker.
The XrBodyTrackerCreateInfoFB structure describes the information to create an XrBodyTrackerFB handle.
// Provided by XR_FB_body_tracking
typedef struct XrBodyTrackerCreateInfoFB {
XrStructureType type;
const void* next;
XrBodyJointSetFB bodyJointSet;
} XrBodyTrackerCreateInfoFB;
The XrBodyJointSetFB enum describes the set of body joints to track when creating an XrBodyTrackerFB.
// Provided by XR_FB_body_tracking
typedef enum XrBodyJointSetFB {
XR_BODY_JOINT_SET_DEFAULT_FB = 0,
// Provided by XR_META_body_tracking_full_body
XR_BODY_JOINT_SET_FULL_BODY_META = 1000274000,
XR_BODY_JOINT_SET_MAX_ENUM_FB = 0x7FFFFFFF
} XrBodyJointSetFB;
The xrDestroyBodyTrackerFB function releases the bodyTracker and the
underlying resources when the body tracking experience is over.
// Provided by XR_FB_body_tracking
XrResult xrDestroyBodyTrackerFB(
XrBodyTrackerFB bodyTracker);
12.75.4. Locate body joints
The xrLocateBodyJointsFB function locates an array of body joints to a base space at a given time.
// Provided by XR_FB_body_tracking
XrResult xrLocateBodyJointsFB(
XrBodyTrackerFB bodyTracker,
const XrBodyJointsLocateInfoFB* locateInfo,
XrBodyJointLocationsFB* locations);
The XrBodyJointsLocateInfoFB structure describes the information to locate body joints.
// Provided by XR_FB_body_tracking
typedef struct XrBodyJointsLocateInfoFB {
XrStructureType type;
const void* next;
XrSpace baseSpace;
XrTime time;
} XrBodyJointsLocateInfoFB;
Callers should request a time equal to the predicted display time for the rendered frame. The system will employ appropriate modeling to support body tracking at this time.
XrBodyJointLocationsFB structure returns the state of the body joint locations.
// Provided by XR_FB_body_tracking
typedef struct XrBodyJointLocationsFB {
XrStructureType type;
void* next;
XrBool32 isActive;
float confidence;
uint32_t jointCount;
XrBodyJointLocationFB* jointLocations;
uint32_t skeletonChangedCount;
XrTime time;
} XrBodyJointLocationsFB;
The runtime must return XR_ERROR_VALIDATION_FAILURE if
jointCount does not equal the number of joints defined by the
XrBodyJointSetFB used to create the XrBodyTrackerFB.
The runtime must return jointLocations representing the range of
human body motion, without any obstructions.
Input systems that either obstruct the movement of the user’s body (for
example, a held controller preventing the user from making a fist) or input
systems that have only limited ability to track finger positions must use
the information available to them to emulate an unobstructed range of
motion.
The runtime must update the jointLocations array ordered so that it
is indexed using the corresponding body joint enum (e.g.
XrBodyJointFB) as described by XrBodyJointSetFB when creating
the XrBodyTrackerFB.
For example, when the XrBodyTrackerFB is created with
XR_BODY_JOINT_SET_DEFAULT_FB, the application must set the
jointCount to XR_BODY_JOINT_COUNT_FB, and the runtime must fill
the jointLocations array ordered so that it is indexed by the
XrBodyJointFB enum.
If the returned isActive is true, the runtime must return all joint
locations with both XR_SPACE_LOCATION_POSITION_VALID_BIT and
XR_SPACE_LOCATION_ORIENTATION_VALID_BIT set.
However, in this case, some joint space locations may be untracked (i.e.
XR_SPACE_LOCATION_POSITION_TRACKED_BIT or
XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT is unset).
If the returned isActive is false, it indicates that the body tracker
did not detect the body input, the application lost input focus, or the
consent for body tracking was denied by the user.
In this case, the runtime must return all jointLocations with neither
XR_SPACE_LOCATION_POSITION_VALID_BIT nor
XR_SPACE_LOCATION_ORIENTATION_VALID_BIT set.
XrBodyJointLocationFB structure describes the position, orientation, and radius of a body joint.
// Provided by XR_FB_body_tracking
typedef struct XrBodyJointLocationFB {
XrSpaceLocationFlags locationFlags;
XrPosef pose;
} XrBodyJointLocationFB;
12.75.5. Retrieve body skeleton
The xrGetBodySkeletonFB function returns the body skeleton in T-pose.
// Provided by XR_FB_body_tracking
XrResult xrGetBodySkeletonFB(
XrBodyTrackerFB bodyTracker,
XrBodySkeletonFB* skeleton);
This function can be used to query the skeleton scale and proportions in
conjunction with XrBodyJointLocationsFB::skeletonChangedCount.
XrBodyJointLocationsFB::skeletonChangedCount is incremented
whenever the tracking auto-calibrates the user skeleton scale and
proportions.
The XrBodySkeletonFB structure is a container to represent the body skeleton in T-pose including the joint hierarchy.
// Provided by XR_FB_body_tracking
typedef struct XrBodySkeletonFB {
XrStructureType type;
void* next;
uint32_t jointCount;
XrBodySkeletonJointFB* joints;
} XrBodySkeletonFB;
The runtime must return XR_ERROR_VALIDATION_FAILURE if
jointCount does not equal the number of joints defined by the
XrBodyJointSetFB used to create the XrBodyTrackerFB.
The runtime must return joints representing the default pose of the
current estimation regarding the user’s skeleton.
XrBodySkeletonJointFB structure describes the position, orientation of the joint in space, and position of the joint in the skeleton hierarchy.
// Provided by XR_FB_body_tracking
typedef struct XrBodySkeletonJointFB {
int32_t joint;
int32_t parentJoint;
XrPosef pose;
} XrBodySkeletonJointFB;
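The structures above can be combined with XrBodyJointLocationsFB::skeletonChangedCount to re-fetch the skeleton only when the runtime recalibrates it. The following sketch assumes a tracker created with XR_BODY_JOINT_SET_DEFAULT_FB and that the function pointer was obtained via xrGetInstanceProcAddr.

```cpp
XrBodyTrackerFB bodyTracker;                  // previously created
XrBodyJointLocationsFB locations;             // previously filled by xrLocateBodyJointsFB
PFN_xrGetBodySkeletonFB pfnGetBodySkeletonFB; // previously initialized

static uint32_t lastSkeletonChangedCount = 0;
if (locations.skeletonChangedCount != lastSkeletonChangedCount) {
    lastSkeletonChangedCount = locations.skeletonChangedCount;
    // The joint count must match the joint set used at creation time.
    XrBodySkeletonJointFB skeletonJoints[XR_BODY_JOINT_COUNT_FB];
    XrBodySkeletonFB skeleton{XR_TYPE_BODY_SKELETON_FB};
    skeleton.jointCount = XR_BODY_JOINT_COUNT_FB;
    skeleton.joints = skeletonJoints;
    CHK_XR(pfnGetBodySkeletonFB(bodyTracker, &skeleton));
    // Rescale the application's avatar using the refreshed T-pose hierarchy.
}
```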
12.75.6. Example code for locating body joints
The following example code demonstrates how to locate all body joints relatively to a base space.
XrInstance instance; // previously initialized
XrSystemId systemId; // previously initialized
XrSession session; // previously initialized
XrSpace baseSpace; // previously initialized, e.g. from
// XR_REFERENCE_SPACE_TYPE_LOCAL
// Inspect body tracking system properties
XrSystemBodyTrackingPropertiesFB bodyTrackingSystemProperties{
XR_TYPE_SYSTEM_BODY_TRACKING_PROPERTIES_FB};
XrSystemProperties systemProperties{XR_TYPE_SYSTEM_PROPERTIES,
&bodyTrackingSystemProperties};
CHK_XR(xrGetSystemProperties(instance, systemId, &systemProperties));
if (!bodyTrackingSystemProperties.supportsBodyTracking) {
// The system does not support body tracking
return;
}
// Get function pointer for xrCreateBodyTrackerFB
PFN_xrCreateBodyTrackerFB pfnCreateBodyTrackerFB;
CHK_XR(xrGetInstanceProcAddr(instance, "xrCreateBodyTrackerFB",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnCreateBodyTrackerFB)));
// Create a body tracker that tracks default set of body joints.
XrBodyTrackerFB bodyTracker = {};
{
XrBodyTrackerCreateInfoFB createInfo{XR_TYPE_BODY_TRACKER_CREATE_INFO_FB};
createInfo.bodyJointSet = XR_BODY_JOINT_SET_DEFAULT_FB;
CHK_XR(pfnCreateBodyTrackerFB(session, &createInfo, &bodyTracker));
}
// Allocate buffers to receive joint location data before frame
// loop starts.
XrBodyJointLocationFB jointLocations[XR_BODY_JOINT_COUNT_FB];
XrBodyJointLocationsFB locations{XR_TYPE_BODY_JOINT_LOCATIONS_FB};
locations.jointCount = XR_BODY_JOINT_COUNT_FB;
locations.jointLocations = jointLocations;
// Get function pointer for xrLocateBodyJointsFB.
PFN_xrLocateBodyJointsFB pfnLocateBodyJointsFB;
CHK_XR(xrGetInstanceProcAddr(instance, "xrLocateBodyJointsFB",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnLocateBodyJointsFB)));
while (1) {
// ...
// For every frame in the frame loop
// ...
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
XrBodyJointsLocateInfoFB locateInfo{XR_TYPE_BODY_JOINTS_LOCATE_INFO_FB};
locateInfo.baseSpace = baseSpace;
locateInfo.time = time;
CHK_XR(pfnLocateBodyJointsFB(bodyTracker, &locateInfo, &locations));
if (locations.isActive) {
// The returned joint location array is directly indexed with
// XrBodyJointFB enum.
const XrPosef &indexTip =
jointLocations[XR_BODY_JOINT_LEFT_HAND_INDEX_TIP_FB].pose;
}
}
12.75.7. Conventions of body joints
This extension defines 70 joints for body tracking: 18 core body joints + 52 hand joints.
// Provided by XR_FB_body_tracking
typedef enum XrBodyJointFB {
XR_BODY_JOINT_ROOT_FB = 0,
XR_BODY_JOINT_HIPS_FB = 1,
XR_BODY_JOINT_SPINE_LOWER_FB = 2,
XR_BODY_JOINT_SPINE_MIDDLE_FB = 3,
XR_BODY_JOINT_SPINE_UPPER_FB = 4,
XR_BODY_JOINT_CHEST_FB = 5,
XR_BODY_JOINT_NECK_FB = 6,
XR_BODY_JOINT_HEAD_FB = 7,
XR_BODY_JOINT_LEFT_SHOULDER_FB = 8,
XR_BODY_JOINT_LEFT_SCAPULA_FB = 9,
XR_BODY_JOINT_LEFT_ARM_UPPER_FB = 10,
XR_BODY_JOINT_LEFT_ARM_LOWER_FB = 11,
XR_BODY_JOINT_LEFT_HAND_WRIST_TWIST_FB = 12,
XR_BODY_JOINT_RIGHT_SHOULDER_FB = 13,
XR_BODY_JOINT_RIGHT_SCAPULA_FB = 14,
XR_BODY_JOINT_RIGHT_ARM_UPPER_FB = 15,
XR_BODY_JOINT_RIGHT_ARM_LOWER_FB = 16,
XR_BODY_JOINT_RIGHT_HAND_WRIST_TWIST_FB = 17,
XR_BODY_JOINT_LEFT_HAND_PALM_FB = 18,
XR_BODY_JOINT_LEFT_HAND_WRIST_FB = 19,
XR_BODY_JOINT_LEFT_HAND_THUMB_METACARPAL_FB = 20,
XR_BODY_JOINT_LEFT_HAND_THUMB_PROXIMAL_FB = 21,
XR_BODY_JOINT_LEFT_HAND_THUMB_DISTAL_FB = 22,
XR_BODY_JOINT_LEFT_HAND_THUMB_TIP_FB = 23,
XR_BODY_JOINT_LEFT_HAND_INDEX_METACARPAL_FB = 24,
XR_BODY_JOINT_LEFT_HAND_INDEX_PROXIMAL_FB = 25,
XR_BODY_JOINT_LEFT_HAND_INDEX_INTERMEDIATE_FB = 26,
XR_BODY_JOINT_LEFT_HAND_INDEX_DISTAL_FB = 27,
XR_BODY_JOINT_LEFT_HAND_INDEX_TIP_FB = 28,
XR_BODY_JOINT_LEFT_HAND_MIDDLE_METACARPAL_FB = 29,
XR_BODY_JOINT_LEFT_HAND_MIDDLE_PROXIMAL_FB = 30,
XR_BODY_JOINT_LEFT_HAND_MIDDLE_INTERMEDIATE_FB = 31,
XR_BODY_JOINT_LEFT_HAND_MIDDLE_DISTAL_FB = 32,
XR_BODY_JOINT_LEFT_HAND_MIDDLE_TIP_FB = 33,
XR_BODY_JOINT_LEFT_HAND_RING_METACARPAL_FB = 34,
XR_BODY_JOINT_LEFT_HAND_RING_PROXIMAL_FB = 35,
XR_BODY_JOINT_LEFT_HAND_RING_INTERMEDIATE_FB = 36,
XR_BODY_JOINT_LEFT_HAND_RING_DISTAL_FB = 37,
XR_BODY_JOINT_LEFT_HAND_RING_TIP_FB = 38,
XR_BODY_JOINT_LEFT_HAND_LITTLE_METACARPAL_FB = 39,
XR_BODY_JOINT_LEFT_HAND_LITTLE_PROXIMAL_FB = 40,
XR_BODY_JOINT_LEFT_HAND_LITTLE_INTERMEDIATE_FB = 41,
XR_BODY_JOINT_LEFT_HAND_LITTLE_DISTAL_FB = 42,
XR_BODY_JOINT_LEFT_HAND_LITTLE_TIP_FB = 43,
XR_BODY_JOINT_RIGHT_HAND_PALM_FB = 44,
XR_BODY_JOINT_RIGHT_HAND_WRIST_FB = 45,
XR_BODY_JOINT_RIGHT_HAND_THUMB_METACARPAL_FB = 46,
XR_BODY_JOINT_RIGHT_HAND_THUMB_PROXIMAL_FB = 47,
XR_BODY_JOINT_RIGHT_HAND_THUMB_DISTAL_FB = 48,
XR_BODY_JOINT_RIGHT_HAND_THUMB_TIP_FB = 49,
XR_BODY_JOINT_RIGHT_HAND_INDEX_METACARPAL_FB = 50,
XR_BODY_JOINT_RIGHT_HAND_INDEX_PROXIMAL_FB = 51,
XR_BODY_JOINT_RIGHT_HAND_INDEX_INTERMEDIATE_FB = 52,
XR_BODY_JOINT_RIGHT_HAND_INDEX_DISTAL_FB = 53,
XR_BODY_JOINT_RIGHT_HAND_INDEX_TIP_FB = 54,
XR_BODY_JOINT_RIGHT_HAND_MIDDLE_METACARPAL_FB = 55,
XR_BODY_JOINT_RIGHT_HAND_MIDDLE_PROXIMAL_FB = 56,
XR_BODY_JOINT_RIGHT_HAND_MIDDLE_INTERMEDIATE_FB = 57,
XR_BODY_JOINT_RIGHT_HAND_MIDDLE_DISTAL_FB = 58,
XR_BODY_JOINT_RIGHT_HAND_MIDDLE_TIP_FB = 59,
XR_BODY_JOINT_RIGHT_HAND_RING_METACARPAL_FB = 60,
XR_BODY_JOINT_RIGHT_HAND_RING_PROXIMAL_FB = 61,
XR_BODY_JOINT_RIGHT_HAND_RING_INTERMEDIATE_FB = 62,
XR_BODY_JOINT_RIGHT_HAND_RING_DISTAL_FB = 63,
XR_BODY_JOINT_RIGHT_HAND_RING_TIP_FB = 64,
XR_BODY_JOINT_RIGHT_HAND_LITTLE_METACARPAL_FB = 65,
XR_BODY_JOINT_RIGHT_HAND_LITTLE_PROXIMAL_FB = 66,
XR_BODY_JOINT_RIGHT_HAND_LITTLE_INTERMEDIATE_FB = 67,
XR_BODY_JOINT_RIGHT_HAND_LITTLE_DISTAL_FB = 68,
XR_BODY_JOINT_RIGHT_HAND_LITTLE_TIP_FB = 69,
XR_BODY_JOINT_COUNT_FB = 70,
XR_BODY_JOINT_NONE_FB = -1,
XR_BODY_JOINT_MAX_ENUM_FB = 0x7FFFFFFF
} XrBodyJointFB;
The backward (+Z) direction is parallel to the corresponding bone and points away from the finger tip. The up (+Y) direction is pointing out of the back of and perpendicular to the corresponding finger nail at the fully opened hand pose. The X direction is perpendicular to Y and Z and follows the right hand rule.
The wrist joint is located at the pivot point of the wrist, which is location invariant when twisting the hand without moving the forearm. The backward (+Z) direction is parallel to the line from wrist joint to middle finger metacarpal joint, and points away from the finger tips. The up (+Y) direction points out towards back of the hand and perpendicular to the skin at wrist. The X direction is perpendicular to the Y and Z directions and follows the right hand rule.
The palm joint is located at the center of the middle finger’s metacarpal bone. The backward (+Z) direction is parallel to the middle finger’s metacarpal bone, and points away from the finger tips. The up (+Y) direction is perpendicular to palm surface and pointing towards the back of the hand. The X direction is perpendicular to the Y and Z directions and follows the right hand rule.
The body skeleton has the full set of body joints (e.g. defined by XrBodyJointFB), organized in a hierarchy with a default T-shape body pose.
The purpose of the skeleton is to provide data about the body size. Joint coordinates are relative to each other, so they have no relation to any space.
The calculation of the body size may be updated during a session.
Each time the calculation of the size is changed, skeletonChangedCount
of XrBodyJointLocationsFB is changed to indicate that a new skeleton
may be retrieved.
New Object Types
New Flag Types
New Enum Constants
- XR_BODY_JOINT_COUNT_FB
XrObjectType enumeration is extended with:
- XR_OBJECT_TYPE_BODY_TRACKER_FB
XrStructureType enumeration is extended with:
- XR_TYPE_SYSTEM_BODY_TRACKING_PROPERTIES_FB
- XR_TYPE_BODY_TRACKER_CREATE_INFO_FB
- XR_TYPE_BODY_JOINTS_LOCATE_INFO_FB
- XR_TYPE_BODY_JOINT_LOCATIONS_FB
- XR_TYPE_BODY_SKELETON_FB
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2022-07-18 (Igor Tceglevskii)
  - Initial extension description
12.76. XR_FB_color_space
- Name String: XR_FB_color_space
- Extension Type: Instance extension
- Registered Extension Number: 109
- Revision: 3
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Contributors:
  Volga Aksoy, Facebook
  Cass Everitt, Facebook
  Gloria Kennickell, Facebook
Overview
XR devices may use a color space that is different from many monitors used in development. Application developers may desire to specify the color space in which they have authored their application so appropriate colors are shown when the application is running on the XR device.
This extension allows:
- An application to get the native color space of the XR device.
- An application to enumerate the supported color spaces for the session.
- An application to set the color space for the session.
In order to enable the functionality of this extension, the application
must pass the name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_SYSTEM_COLOR_SPACE_PROPERTIES_FB
XrResult enumeration is extended with:
- XR_ERROR_COLOR_SPACE_UNSUPPORTED_FB
New Enums
The possible color spaces are specified by the XrColorSpaceFB enumeration.
// Provided by XR_FB_color_space
typedef enum XrColorSpaceFB {
XR_COLOR_SPACE_UNMANAGED_FB = 0,
XR_COLOR_SPACE_REC2020_FB = 1,
XR_COLOR_SPACE_REC709_FB = 2,
XR_COLOR_SPACE_RIFT_CV1_FB = 3,
XR_COLOR_SPACE_RIFT_S_FB = 4,
XR_COLOR_SPACE_QUEST_FB = 5,
XR_COLOR_SPACE_P3_FB = 6,
XR_COLOR_SPACE_ADOBE_RGB_FB = 7,
XR_COLOR_SPACE_MAX_ENUM_FB = 0x7FFFFFFF
} XrColorSpaceFB;
New Structures
An application may inspect the native color space of the system by chaining an XrSystemColorSpacePropertiesFB structure to the XrSystemProperties when calling xrGetSystemProperties.
The XrSystemColorSpacePropertiesFB structure is defined as:
// Provided by XR_FB_color_space
typedef struct XrSystemColorSpacePropertiesFB {
XrStructureType type;
void* next;
XrColorSpaceFB colorSpace;
} XrSystemColorSpacePropertiesFB;
New Functions
The xrEnumerateColorSpacesFB function is defined as:
// Provided by XR_FB_color_space
XrResult xrEnumerateColorSpacesFB(
XrSession session,
uint32_t colorSpaceCapacityInput,
uint32_t* colorSpaceCountOutput,
XrColorSpaceFB* colorSpaces);
xrEnumerateColorSpacesFB enumerates the color spaces supported by the current session. Runtimes must always return identical buffer contents from this enumeration for the lifetime of the session.
The xrSetColorSpaceFB function is defined as:
// Provided by XR_FB_color_space
XrResult xrSetColorSpaceFB(
XrSession session,
const XrColorSpaceFB colorSpace);
xrSetColorSpaceFB provides a mechanism for an application to specify
the color space used in the final rendered frame.
If this function is not called, the session will use the color space deemed
appropriate by the runtime.
Oculus HMDs for both PC and Mobile product lines default to
XR_COLOR_SPACE_RIFT_CV1_FB.
The runtime must return XR_ERROR_COLOR_SPACE_UNSUPPORTED_FB if
colorSpace is not one of the values enumerated by
xrEnumerateColorSpacesFB.
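The two functions are typically used together, with the standard OpenXR two-call idiom for the enumeration. A sketch, assuming the function pointers were previously resolved via xrGetInstanceProcAddr:

```cpp
XrSession session;                                      // previously initialized
PFN_xrEnumerateColorSpacesFB pfnEnumerateColorSpacesFB; // previously initialized
PFN_xrSetColorSpaceFB pfnSetColorSpaceFB;               // previously initialized

// First call to get the color space count.
uint32_t colorSpaceCount = 0;
CHK_XR(pfnEnumerateColorSpacesFB(session, 0, &colorSpaceCount, nullptr));

// Second call to fill the buffer.
std::vector<XrColorSpaceFB> colorSpaces(colorSpaceCount);
CHK_XR(pfnEnumerateColorSpacesFB(session, colorSpaceCount, &colorSpaceCount,
                                 colorSpaces.data()));

// Select Rec. 709 only if the session supports it, avoiding
// XR_ERROR_COLOR_SPACE_UNSUPPORTED_FB.
for (XrColorSpaceFB cs : colorSpaces) {
    if (cs == XR_COLOR_SPACE_REC709_FB) {
        CHK_XR(pfnSetColorSpaceFB(session, XR_COLOR_SPACE_REC709_FB));
        break;
    }
}
```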
Formal definitions of color spaces contain a number of aspects such as gamma
correction, max luminance and more.
However, xrSetColorSpaceFB will only affect the color gamut of the
output by transforming the color gamut from the source (defined by the
colorSpace parameter) to the HMD display’s color gamut (defined by the
hardware internally).
This call will not affect gamma correction, leaving that to follow the GPU
texture format standards.
Luminance, tonemapping, and other aspects of the color space will also
remain unaffected.
For more info on color management in Oculus HMDs, please refer to this guide: Color Management in Oculus Headsets
Issues
Version History
- Revision 1, 2020-11-09 (Gloria Kennickell)
  - Initial extension description
- Revision 2, 2021-09-28 (Rylie Pavlik, Collabora, Ltd.)
  - Fix XML markup to indicate that XrSystemColorSpacePropertiesFB is chained to XrSystemProperties.
- Revision 3, 2022-09-01 (Rylie Pavlik, Collabora, Ltd.)
  - Fix XML markup to indicate that XrSystemColorSpacePropertiesFB is returned-only.
12.77. XR_FB_composition_layer_alpha_blend
- Name String: XR_FB_composition_layer_alpha_blend
- Extension Type: Instance extension
- Registered Extension Number: 42
- Revision: 3
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Contributors:
  Cass Everitt, Facebook
  Gloria Kennickell, Facebook
  Johannes Schmid, Facebook
Overview
This extension provides explicit control over source and destination blend
factors, with separate controls for color and alpha.
When specified, these blend controls supersede the behavior of
XR_COMPOSITION_LAYER_BLEND_TEXTURE_SOURCE_ALPHA_BIT, as well as the
layer blending operation defined in the
Composition Layer Blending section.
When XR_COMPOSITION_LAYER_UNPREMULTIPLIED_ALPHA_BIT is specified, the
source color is unpremultiplied alpha.
Like color, destination alpha is initialized to 0 before composition begins.
In order to enable the functionality of this extension, the application
must pass the name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
These blend factors are specified by attaching a
XrCompositionLayerAlphaBlendFB structure to the next chain of a
layer structure derived from XrCompositionLayerBaseHeader.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_COMPOSITION_LAYER_ALPHA_BLEND_FB
New Enums
The possible blend factors are specified by the XrBlendFactorFB enumeration.
// Provided by XR_FB_composition_layer_alpha_blend
typedef enum XrBlendFactorFB {
XR_BLEND_FACTOR_ZERO_FB = 0,
XR_BLEND_FACTOR_ONE_FB = 1,
XR_BLEND_FACTOR_SRC_ALPHA_FB = 2,
XR_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA_FB = 3,
XR_BLEND_FACTOR_DST_ALPHA_FB = 4,
XR_BLEND_FACTOR_ONE_MINUS_DST_ALPHA_FB = 5,
XR_BLEND_FACTOR_MAX_ENUM_FB = 0x7FFFFFFF
} XrBlendFactorFB;
New Structures
The XrCompositionLayerAlphaBlendFB structure is defined as:
// Provided by XR_FB_composition_layer_alpha_blend
typedef struct XrCompositionLayerAlphaBlendFB {
XrStructureType type;
void* next;
XrBlendFactorFB srcFactorColor;
XrBlendFactorFB dstFactorColor;
XrBlendFactorFB srcFactorAlpha;
XrBlendFactorFB dstFactorAlpha;
} XrCompositionLayerAlphaBlendFB;
XrCompositionLayerAlphaBlendFB provides applications with explicit control over source and destination blend factors.
The XrCompositionLayerAlphaBlendFB structure must be provided in the
next chain of the XrCompositionLayerBaseHeader structure.
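For intuition, the effect of these factors can be sketched with the conventional per-channel blend equation out = src × srcFactor + dst × dstFactor. This is an illustrative model, not normative text; the local enum mirrors the XrBlendFactorFB values defined above:

```cpp
#include <cassert>

// Blend factors mirroring the XrBlendFactorFB values defined above.
enum BlendFactor {
    Zero = 0, One = 1, SrcAlpha = 2, OneMinusSrcAlpha = 3,
    DstAlpha = 4, OneMinusDstAlpha = 5
};

// Resolve a blend factor to a scalar multiplier for one channel.
static float factorValue(BlendFactor f, float srcAlpha, float dstAlpha) {
    switch (f) {
        case Zero:             return 0.0f;
        case One:              return 1.0f;
        case SrcAlpha:         return srcAlpha;
        case OneMinusSrcAlpha: return 1.0f - srcAlpha;
        case DstAlpha:         return dstAlpha;
        case OneMinusDstAlpha: return 1.0f - dstAlpha;
    }
    return 0.0f;
}

// Conventional blend equation: out = src * srcFactor + dst * dstFactor.
static float blendChannel(float src, float dst,
                          BlendFactor srcF, BlendFactor dstF,
                          float srcAlpha, float dstAlpha) {
    return src * factorValue(srcF, srcAlpha, dstAlpha) +
           dst * factorValue(dstF, srcAlpha, dstAlpha);
}
```

With srcFactorColor = SrcAlpha and dstFactorColor = OneMinusSrcAlpha, for example, this reduces to classic unpremultiplied alpha blending.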
New Functions
Issues
- Should we add separate blend controls for color and alpha?
  - Yes. New use cases necessitated adding separate blend controls for color and alpha.
Version History
- Revision 1, 2020-06-22 (Gloria Kennickell)
  - Initial draft
- Revision 2, 2020-06-22 (Gloria Kennickell)
  - Provide separate controls for color and alpha blend factors.
- Revision 3, 2024-03-04 (Xiang Wei)
  - Clarify the superseding of the layer blending operation.
  - Add warning about the exclusive usage with non-opaque environment blend modes.
12.78. XR_FB_composition_layer_depth_test
- Name String: XR_FB_composition_layer_depth_test
- Extension Type: Instance extension
- Registered Extension Number: 213
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Contributors:
  Guodong Rong, Meta
  Cass Everitt, Meta
  Jian Zhang, Meta
Overview
This extension enables depth-tested layer composition. The compositor will maintain a depth buffer in addition to a color buffer. The depth buffer is cleared to a depth corresponding to the infinitely far distance at the beginning of composition.
When composing each layer, if depth testing is requested, the incoming layer depths are transformed into the compositor window space depth and compared to the depth stored in the frame buffer. After the transformation, incoming depths that are outside of the range of the compositor window space depth must be clamped. If the depth test fails, the fragment is discarded. If the depth test passes, the depth buffer is updated if depth writes are enabled, and color processing continues.
Depth testing requires depth values for the layer.
For projection layers, this can be supplied via the
XR_KHR_composition_layer_depth extension.
For geometric primitive layers, the runtime computes the depth of the sample
directly from the layer parameters.
An XrCompositionLayerDepthTestFB chained to layers without depth must
be ignored.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_COMPOSITION_LAYER_DEPTH_TEST_FB
New Enums
The possible comparison operations are specified by the XrCompareOpFB enumeration.
// Provided by XR_FB_composition_layer_depth_test
typedef enum XrCompareOpFB {
XR_COMPARE_OP_NEVER_FB = 0,
XR_COMPARE_OP_LESS_FB = 1,
XR_COMPARE_OP_EQUAL_FB = 2,
XR_COMPARE_OP_LESS_OR_EQUAL_FB = 3,
XR_COMPARE_OP_GREATER_FB = 4,
XR_COMPARE_OP_NOT_EQUAL_FB = 5,
XR_COMPARE_OP_GREATER_OR_EQUAL_FB = 6,
XR_COMPARE_OP_ALWAYS_FB = 7,
XR_COMPARE_OP_MAX_ENUM_FB = 0x7FFFFFFF
} XrCompareOpFB;
New Structures
The XrCompositionLayerDepthTestFB structure is defined as:
// Provided by XR_FB_composition_layer_depth_test
typedef struct XrCompositionLayerDepthTestFB {
XrStructureType type;
const void* next;
XrBool32 depthMask;
XrCompareOpFB compareOp;
} XrCompositionLayerDepthTestFB;
To specify that a layer should be depth tested, an
XrCompositionLayerDepthTestFB structure must be passed via the
polymorphic XrCompositionLayerBaseHeader structure’s next
parameter chain.
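The per-fragment test that compareOp and depthMask select can be sketched as follows. This is illustrative only, assuming (as the overview implies) a clamped compositor window-space depth range of [0, 1]; the local enum mirrors XrCompareOpFB:

```cpp
#include <algorithm>
#include <cassert>

// Comparison operations mirroring the XrCompareOpFB values defined above.
enum CompareOp {
    Never = 0, Less = 1, Equal = 2, LessOrEqual = 3,
    Greater = 4, NotEqual = 5, GreaterOrEqual = 6, Always = 7
};

// Per-fragment depth test as described in the overview: the incoming depth
// is clamped to the window-space range (taken here as [0, 1], an assumption
// for illustration) and compared against the stored framebuffer depth.
static bool depthTestPasses(CompareOp op, float incoming, float stored) {
    const float d = std::min(std::max(incoming, 0.0f), 1.0f);
    switch (op) {
        case Never:          return false;
        case Less:           return d < stored;
        case Equal:          return d == stored;
        case LessOrEqual:    return d <= stored;
        case Greater:        return d > stored;
        case NotEqual:       return d != stored;
        case GreaterOrEqual: return d >= stored;
        case Always:         return true;
    }
    return false;
}
```

When the test passes, a compositor would write the clamped depth back only if depthMask is XR_TRUE, then continue with color processing.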
New Functions
Issues
Version History
- Revision 1, 2022-02-17 (Cass Everitt)
  - Initial draft
12.79. XR_FB_composition_layer_image_layout
- Name String: XR_FB_composition_layer_image_layout
- Extension Type: Instance extension
- Registered Extension Number: 41
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Contributors:
  Cass Everitt, Facebook
  Gloria Kennickell, Facebook
Overview
This extension does not define a new composition layer type, but rather it defines parameters that change the interpretation of the image layout, where the default image layout is dictated by the Graphics API.
In order to enable the functionality of this extension, the application
must pass the name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
New Object Types
New Flag Types
typedef XrFlags64 XrCompositionLayerImageLayoutFlagsFB;
// Flag bits for XrCompositionLayerImageLayoutFlagsFB
static const XrCompositionLayerImageLayoutFlagsFB XR_COMPOSITION_LAYER_IMAGE_LAYOUT_VERTICAL_FLIP_BIT_FB = 0x00000001;
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_COMPOSITION_LAYER_IMAGE_LAYOUT_FB
New Enums
- XR_COMPOSITION_LAYER_IMAGE_LAYOUT_VERTICAL_FLIP_BIT_FB
New Structures
The XrCompositionLayerImageLayoutFB structure is defined as:
// Provided by XR_FB_composition_layer_image_layout
typedef struct XrCompositionLayerImageLayoutFB {
XrStructureType type;
void* next;
XrCompositionLayerImageLayoutFlagsFB flags;
} XrCompositionLayerImageLayoutFB;
XrCompositionLayerImageLayoutFB contains additional flags used to change the interpretation of the image layout for a composition layer.
To specify the additional flags, the application must create an
XrCompositionLayerImageLayoutFB structure and pass it via the
XrCompositionLayerBaseHeader structure’s next parameter.
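What the vertical-flip bit implies for row addressing can be sketched as below. This is purely illustrative; a real compositor may implement the flip differently, for example in texture coordinates:

```cpp
#include <cassert>
#include <cstdint>

// With XR_COMPOSITION_LAYER_IMAGE_LAYOUT_VERTICAL_FLIP_BIT_FB set, the image
// rows are interpreted bottom-up relative to the default layout. A sketch of
// the row remapping a compositor could apply when sampling row y of an image
// of the given height:
static uint32_t sourceRow(uint32_t y, uint32_t imageHeight, bool verticalFlip) {
    return verticalFlip ? (imageHeight - 1u - y) : y;
}
```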
New Functions
Issues
Version History
- Revision 1, 2020-07-06 (Gloria Kennickell)
  - Initial draft
12.80. XR_FB_composition_layer_secure_content
- Name String: XR_FB_composition_layer_secure_content
- Extension Type: Instance extension
- Registered Extension Number: 73
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Contributors:
  Cass Everitt, Facebook
  Gloria Kennickell, Facebook
Overview
This extension does not define a new composition layer type; rather, it allows the application to specify that an existing composition layer contains secure content, and whether that layer must be completely excluded from external outputs, such as video or screen capture, or whether proxy content must be rendered in its place.
In order to enable the functionality of this extension, the application
must pass the name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
New Object Types
New Flag Types
typedef XrFlags64 XrCompositionLayerSecureContentFlagsFB;
// Flag bits for XrCompositionLayerSecureContentFlagsFB
static const XrCompositionLayerSecureContentFlagsFB XR_COMPOSITION_LAYER_SECURE_CONTENT_EXCLUDE_LAYER_BIT_FB = 0x00000001;
static const XrCompositionLayerSecureContentFlagsFB XR_COMPOSITION_LAYER_SECURE_CONTENT_REPLACE_LAYER_BIT_FB = 0x00000002;
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_COMPOSITION_LAYER_SECURE_CONTENT_FB
New Enums
- XR_COMPOSITION_LAYER_SECURE_CONTENT_EXCLUDE_LAYER_BIT_FB
- XR_COMPOSITION_LAYER_SECURE_CONTENT_REPLACE_LAYER_BIT_FB
New Structures
The XrCompositionLayerSecureContentFB structure is defined as:
// Provided by XR_FB_composition_layer_secure_content
typedef struct XrCompositionLayerSecureContentFB {
XrStructureType type;
const void* next;
XrCompositionLayerSecureContentFlagsFB flags;
} XrCompositionLayerSecureContentFB;
XrCompositionLayerSecureContentFB contains additional flags to indicate a composition layer contains secure content and must not be written to external outputs.
If both XR_COMPOSITION_LAYER_SECURE_CONTENT_EXCLUDE_LAYER_BIT_FB and
XR_COMPOSITION_LAYER_SECURE_CONTENT_REPLACE_LAYER_BIT_FB are set,
XR_COMPOSITION_LAYER_SECURE_CONTENT_EXCLUDE_LAYER_BIT_FB will take
precedence.
To specify the additional flags, the application must create an
XrCompositionLayerSecureContentFB structure and pass it via the
XrCompositionLayerBaseHeader structure’s next parameter.
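The precedence rule above can be sketched as a small resolver. The enum and helper name are hypothetical, used only to make the rule concrete; the bit values match the flag definitions above:

```cpp
#include <cassert>
#include <cstdint>

// Flag bits as defined above.
constexpr uint64_t kExcludeLayerBit = 0x00000001; // ..._EXCLUDE_LAYER_BIT_FB
constexpr uint64_t kReplaceLayerBit = 0x00000002; // ..._REPLACE_LAYER_BIT_FB

// Hypothetical classification of what happens to the layer on external
// outputs (video or screen capture).
enum class ExternalOutputAction { Show, Exclude, Replace };

// Precedence rule from the text: when both bits are set, exclusion wins.
static ExternalOutputAction resolveSecureContent(uint64_t flags) {
    if (flags & kExcludeLayerBit) return ExternalOutputAction::Exclude;
    if (flags & kReplaceLayerBit) return ExternalOutputAction::Replace;
    return ExternalOutputAction::Show;
}
```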
New Functions
Issues
Version History
- Revision 1, 2020-06-16 (Gloria Kennickell)
  - Initial draft
12.81. XR_FB_composition_layer_settings
- Name String: XR_FB_composition_layer_settings
- Extension Type: Instance extension
- Registered Extension Number: 205
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Contributors:
  Grant Yang, Meta Platforms
Overview
This extension allows applications to request the use of processing options such as sharpening or super-sampling on a composition layer.
In order to enable the functionality of this extension, the application
must pass the name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
New Object Types
New Flag Types
typedef XrFlags64 XrCompositionLayerSettingsFlagsFB;
// Flag bits for XrCompositionLayerSettingsFlagsFB
static const XrCompositionLayerSettingsFlagsFB XR_COMPOSITION_LAYER_SETTINGS_NORMAL_SUPER_SAMPLING_BIT_FB = 0x00000001;
static const XrCompositionLayerSettingsFlagsFB XR_COMPOSITION_LAYER_SETTINGS_QUALITY_SUPER_SAMPLING_BIT_FB = 0x00000002;
static const XrCompositionLayerSettingsFlagsFB XR_COMPOSITION_LAYER_SETTINGS_NORMAL_SHARPENING_BIT_FB = 0x00000004;
static const XrCompositionLayerSettingsFlagsFB XR_COMPOSITION_LAYER_SETTINGS_QUALITY_SHARPENING_BIT_FB = 0x00000008;
static const XrCompositionLayerSettingsFlagsFB XR_COMPOSITION_LAYER_SETTINGS_AUTO_LAYER_FILTER_BIT_META = 0x00000020;
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_COMPOSITION_LAYER_SETTINGS_FB
New Enums
- XR_COMPOSITION_LAYER_SETTINGS_NORMAL_SUPER_SAMPLING_BIT_FB
- XR_COMPOSITION_LAYER_SETTINGS_QUALITY_SUPER_SAMPLING_BIT_FB
- XR_COMPOSITION_LAYER_SETTINGS_NORMAL_SHARPENING_BIT_FB
- XR_COMPOSITION_LAYER_SETTINGS_QUALITY_SHARPENING_BIT_FB
New Structures
The XrCompositionLayerSettingsFB structure is defined as:
// Provided by XR_FB_composition_layer_settings
typedef struct XrCompositionLayerSettingsFB {
XrStructureType type;
const void* next;
XrCompositionLayerSettingsFlagsFB layerFlags;
} XrCompositionLayerSettingsFB;
XrCompositionLayerSettingsFB contains additional flags to indicate which processing steps to perform on a composition layer.
If both XR_COMPOSITION_LAYER_SETTINGS_NORMAL_SUPER_SAMPLING_BIT_FB and
XR_COMPOSITION_LAYER_SETTINGS_QUALITY_SUPER_SAMPLING_BIT_FB are set,
XR_COMPOSITION_LAYER_SETTINGS_NORMAL_SUPER_SAMPLING_BIT_FB will take
precedence.
If both XR_COMPOSITION_LAYER_SETTINGS_NORMAL_SHARPENING_BIT_FB and
XR_COMPOSITION_LAYER_SETTINGS_QUALITY_SHARPENING_BIT_FB are set,
XR_COMPOSITION_LAYER_SETTINGS_NORMAL_SHARPENING_BIT_FB will take
precedence.
To specify the additional flags, create an
XrCompositionLayerSettingsFB structure and pass it via the
XrCompositionLayerBaseHeader structure’s next parameter.
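The two precedence rules above can be sketched as a flag resolver. The helper name is hypothetical; the bit values match the flag definitions above:

```cpp
#include <cassert>
#include <cstdint>

// Flag bits as defined above.
constexpr uint64_t kNormalSuperSampling  = 0x00000001;
constexpr uint64_t kQualitySuperSampling = 0x00000002;
constexpr uint64_t kNormalSharpening     = 0x00000004;
constexpr uint64_t kQualitySharpening    = 0x00000008;

// Precedence rules from the text: when both the normal and quality bits of a
// pair are set, the normal bit takes precedence, so the quality bit is dropped.
static uint64_t resolveEffectiveFlags(uint64_t layerFlags) {
    if ((layerFlags & kNormalSuperSampling) && (layerFlags & kQualitySuperSampling))
        layerFlags &= ~kQualitySuperSampling;
    if ((layerFlags & kNormalSharpening) && (layerFlags & kQualitySharpening))
        layerFlags &= ~kQualitySharpening;
    return layerFlags;
}
```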
New Functions
Issues
Version History
- Revision 1, 2022-03-08 (Grant Yang)
  - Initial draft
12.82. XR_FB_display_refresh_rate
- Name String: XR_FB_display_refresh_rate
- Extension Type: Instance extension
- Registered Extension Number: 102
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- IP Status: No known IP claims.
- Contributors:
  Cass Everitt, Facebook
  Gloria Kennickell, Facebook
Overview
On platforms which support dynamically adjusting the display refresh rate, application developers may request a specific display refresh rate in order to improve the overall user experience. Examples include:
- A video application may choose a display refresh rate which better matches the video content playback rate in order to achieve smoother video frames.
- An application which can support a higher frame rate may choose to render at the higher rate to improve the overall perceptual quality, for example, lower latency and less flicker.
This extension allows:
- An application to identify what display refresh rates the session supports and the current display refresh rate.
- An application to request a display refresh rate to indicate its preference to the runtime.
- An application to receive notification of changes to the display refresh rate which are delivered via events.
In order to enable the functionality of this extension, the application
must pass the name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_EVENT_DATA_DISPLAY_REFRESH_RATE_CHANGED_FB
XrResult enumeration is extended with:
- XR_ERROR_DISPLAY_REFRESH_RATE_UNSUPPORTED_FB
New Enums
New Structures
Receiving the XrEventDataDisplayRefreshRateChangedFB event structure indicates that the display refresh rate has changed.
The XrEventDataDisplayRefreshRateChangedFB structure is defined as:
// Provided by XR_FB_display_refresh_rate
typedef struct XrEventDataDisplayRefreshRateChangedFB {
XrStructureType type;
const void* next;
float fromDisplayRefreshRate;
float toDisplayRefreshRate;
} XrEventDataDisplayRefreshRateChangedFB;
New Functions
The xrEnumerateDisplayRefreshRatesFB function is defined as:
// Provided by XR_FB_display_refresh_rate
XrResult xrEnumerateDisplayRefreshRatesFB(
XrSession session,
uint32_t displayRefreshRateCapacityInput,
uint32_t* displayRefreshRateCountOutput,
float* displayRefreshRates);
xrEnumerateDisplayRefreshRatesFB enumerates the display refresh rates supported by the current session. The returned display refresh rates must be ordered from lowest to highest. Runtimes must always return identical buffer contents from this enumeration for the lifetime of the session.
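The capacityInput/countOutput parameter pair follows the standard OpenXR two-call idiom: first query the required count, then fill a right-sized buffer. A sketch of that idiom, with a mock standing in for the runtime (the mock and its rate list are invented for illustration; real code calls the function pointer obtained via xrGetInstanceProcAddr):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Mock standing in for xrEnumerateDisplayRefreshRatesFB, implementing the
// two-call contract: capacityInput of 0 is a size query, otherwise the
// buffer is filled. Rates are ascending, as the spec requires.
static const float kSupportedRates[] = {72.0f, 90.0f, 120.0f}; // invented

static int enumerateRates(uint32_t capacityInput, uint32_t* countOutput,
                          float* rates) {
    *countOutput = 3;
    if (capacityInput == 0) return 0;  // first call: size query only
    if (capacityInput < 3) return -1;  // analogue of XR_ERROR_SIZE_INSUFFICIENT
    for (uint32_t i = 0; i < 3; ++i) rates[i] = kSupportedRates[i];
    return 0;
}

// Two-call idiom: query the count, then fill a right-sized buffer.
static std::vector<float> getSupportedRefreshRates() {
    uint32_t count = 0;
    enumerateRates(0, &count, nullptr);
    std::vector<float> rates(count);
    enumerateRates(count, &count, rates.data());
    return rates;
}
```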
The xrGetDisplayRefreshRateFB function is defined as:
// Provided by XR_FB_display_refresh_rate
XrResult xrGetDisplayRefreshRateFB(
XrSession session,
float* displayRefreshRate);
xrGetDisplayRefreshRateFB retrieves the current display refresh rate.
The xrRequestDisplayRefreshRateFB function is defined as:
// Provided by XR_FB_display_refresh_rate
XrResult xrRequestDisplayRefreshRateFB(
XrSession session,
float displayRefreshRate);
xrRequestDisplayRefreshRateFB provides a mechanism for an application
to request the system to dynamically change the display refresh rate to the
application preferred value.
The runtime must return XR_ERROR_DISPLAY_REFRESH_RATE_UNSUPPORTED_FB
if displayRefreshRate is not either 0.0f or one of the values
enumerated by xrEnumerateDisplayRefreshRatesFB.
A display refresh rate of 0.0f indicates the application has no
preference.
Note that this is only a request and does not guarantee the system will switch to the requested display refresh rate.
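The validation rule for xrRequestDisplayRefreshRateFB can be sketched as below. The helper is hypothetical; a real application would pass the rates previously obtained from xrEnumerateDisplayRefreshRatesFB:

```cpp
#include <cassert>
#include <vector>

// A requested rate is acceptable only if it is 0.0f (no preference) or one
// of the rates the session supports; otherwise the runtime returns
// XR_ERROR_DISPLAY_REFRESH_RATE_UNSUPPORTED_FB.
static bool isValidRefreshRateRequest(float requested,
                                      const std::vector<float>& supported) {
    if (requested == 0.0f) return true;
    for (float r : supported)
        if (r == requested) return true;
    return false;
}
```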
Issues
Changing the display refresh rate from its system default does not come without trade-offs. Increasing the display refresh rate puts more load on the entire system and can lead to thermal degradation. Conversely, lowering the display refresh rate can provide better thermal sustainability but at the cost of more perceptual issues, like higher latency and flickering.
Version History
- Revision 1, 2020-10-05 (Gloria Kennickell)
  - Initial extension description
12.83. XR_FB_eye_tracking_social
- Name String: XR_FB_eye_tracking_social
- Extension Type: Instance extension
- Registered Extension Number: 203
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2022-07-17
- IP Status: No known IP claims.
- Contributors:
  Scott Ramsby, Meta
  Dikpal Reddy, Meta
  Igor Tceglevskii, Meta
12.83.1. Overview
This extension enables applications to obtain the position and orientation of the user’s eyes, so that they can render eyes in XR experiences.
This extension is intended to drive animation of avatar eyes. For that
purpose, runtimes may filter the poses in ways that are suitable for
avatar eye interaction but detrimental to other use cases.
This extension should not be used for other eye tracking purposes; for
interaction, XR_EXT_eye_gaze_interaction should be used instead.
Eye tracking data is sensitive personal information and is closely linked to personal privacy and integrity. It is strongly recommended that applications that store or transfer eye tracking data always ask the user for active and specific acceptance to do so.
If a runtime supports a permission system to control application access to
the eye tracker, then the runtime must set the isValid field to
XR_FALSE on the supplied XrEyeGazeFB structure until the
application has been allowed access to the eye tracker.
When the application access has been allowed, the runtime may set
isValid on the supplied XrEyeGazeFB structure to XR_TRUE.
12.83.2. Inspect system capability
The XrSystemEyeTrackingPropertiesFB structure is defined as:
// Provided by XR_FB_eye_tracking_social
typedef struct XrSystemEyeTrackingPropertiesFB {
XrStructureType type;
void* next;
XrBool32 supportsEyeTracking;
} XrSystemEyeTrackingPropertiesFB;
An application can inspect whether the system is capable of eye tracking input by extending the XrSystemProperties with XrSystemEyeTrackingPropertiesFB structure when calling xrGetSystemProperties.
If a runtime returns XR_FALSE for supportsEyeTracking, the
runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateEyeTrackerFB.
12.83.3. Create an eye tracker handle
The XrEyeTrackerFB handle represents the resources for eye tracking.
// Provided by XR_FB_eye_tracking_social
XR_DEFINE_HANDLE(XrEyeTrackerFB)
This handle is used to get eye gaze data using the xrGetEyeGazesFB function.
An eye tracker provides eye gaze directions.
An application creates an XrEyeTrackerFB handle using the xrCreateEyeTrackerFB function.
// Provided by XR_FB_eye_tracking_social
XrResult xrCreateEyeTrackerFB(
XrSession session,
const XrEyeTrackerCreateInfoFB* createInfo,
XrEyeTrackerFB* eyeTracker);
If the system does not support eye tracking, the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from xrCreateEyeTrackerFB.
In this case, the runtime must return XR_FALSE for
XrSystemEyeTrackingPropertiesFB::supportsEyeTracking when the
function xrGetSystemProperties is called, so that the application can
avoid creating an eye tracker.
The XrEyeTrackerCreateInfoFB structure is defined as:
// Provided by XR_FB_eye_tracking_social
typedef struct XrEyeTrackerCreateInfoFB {
XrStructureType type;
const void* next;
} XrEyeTrackerCreateInfoFB;
The XrEyeTrackerCreateInfoFB structure describes the information to create an XrEyeTrackerFB handle.
12.83.4. Destroy an eye tracker handle
The xrDestroyEyeTrackerFB function releases the eyeTracker and
the underlying resources when the eye tracking experience is over.
// Provided by XR_FB_eye_tracking_social
XrResult xrDestroyEyeTrackerFB(
XrEyeTrackerFB eyeTracker);
12.83.5. Get eye gaze
The xrGetEyeGazesFB function is defined as:
// Provided by XR_FB_eye_tracking_social
XrResult xrGetEyeGazesFB(
XrEyeTrackerFB eyeTracker,
const XrEyeGazesInfoFB* gazeInfo,
XrEyeGazesFB* eyeGazes);
The xrGetEyeGazesFB function obtains the poses of the user’s eyes at a specific time and within a specific coordinate system.
The XrEyeGazesInfoFB structure describes the information to get eye gaze directions.
// Provided by XR_FB_eye_tracking_social
typedef struct XrEyeGazesInfoFB {
XrStructureType type;
const void* next;
XrSpace baseSpace;
XrTime time;
} XrEyeGazesInfoFB;
The application should request a time equal to the predicted display time for the rendered frame. The system will employ appropriate modeling to provide eye gaze at this time.
The XrEyeGazesFB structure returns the state of the eye gaze directions.
// Provided by XR_FB_eye_tracking_social
typedef struct XrEyeGazesFB {
XrStructureType type;
void* next;
XrEyeGazeFB gaze[XR_EYE_POSITION_COUNT_FB];
XrTime time;
} XrEyeGazesFB;
The XrEyeGazeFB structure describes the validity, direction, and confidence of a social eye gaze observation.
// Provided by XR_FB_eye_tracking_social
typedef struct XrEyeGazeFB {
XrBool32 isValid;
XrPosef gazePose;
float gazeConfidence;
} XrEyeGazeFB;
If the returned isValid is true, the runtime must return
gazePose and gazeConfidence.
If the returned isValid is false, it indicates either the eye tracker
did not detect the eye gaze or the application lost input focus.
The eye gaze pose is natively oriented with +Y up, +X to the right, and -Z
forward and not gravity-aligned, similar to the
XR_REFERENCE_SPACE_TYPE_VIEW.
The XrEyePositionFB enumeration identifies which eye each element of
the gaze array in XrEyeGazesFB corresponds to.
// Provided by XR_FB_eye_tracking_social
typedef enum XrEyePositionFB {
XR_EYE_POSITION_LEFT_FB = 0,
XR_EYE_POSITION_RIGHT_FB = 1,
XR_EYE_POSITION_COUNT_FB = 2,
XR_EYE_POSITION_MAX_ENUM_FB = 0x7FFFFFFF
} XrEyePositionFB;
12.83.6. Example code for locating eye gaze
The following example code demonstrates how to locate eye gaze relative to a world space.
XrInstance instance; // previously initialized
XrSystemId systemId; // previously initialized
XrSession session; // previously initialized
XrSpace worldSpace; // previously initialized, e.g. from
// XR_REFERENCE_SPACE_TYPE_LOCAL
XrSystemEyeTrackingPropertiesFB eyeTrackingSystemProperties{
XR_TYPE_SYSTEM_EYE_TRACKING_PROPERTIES_FB};
XrSystemProperties systemProperties{
XR_TYPE_SYSTEM_PROPERTIES, &eyeTrackingSystemProperties};
CHK_XR(xrGetSystemProperties(instance, systemId, &systemProperties));
if (!eyeTrackingSystemProperties.supportsEyeTracking) {
// The system does not support eye tracking.
return;
}
// Get function pointer for xrCreateEyeTrackerFB.
PFN_xrCreateEyeTrackerFB pfnCreateEyeTrackerFB;
CHK_XR(xrGetInstanceProcAddr(instance, "xrCreateEyeTrackerFB",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnCreateEyeTrackerFB)));
// Create an eye tracker.
XrEyeTrackerFB eyeTracker{};
{
XrEyeTrackerCreateInfoFB createInfo{XR_TYPE_EYE_TRACKER_CREATE_INFO_FB};
CHK_XR(pfnCreateEyeTrackerFB(session, &createInfo, &eyeTracker));
}
// Allocate buffers to receive eye pose and confidence data before the
// frame loop starts.
XrEyeGazesFB eyeGazes{XR_TYPE_EYE_GAZES_FB};
eyeGazes.next = nullptr;
// Get function pointer for xrGetEyeGazesFB.
PFN_xrGetEyeGazesFB pfnGetEyeGazesFB;
CHK_XR(xrGetInstanceProcAddr(instance, "xrGetEyeGazesFB",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnGetEyeGazesFB)));
while (1) {
// ...
// For every frame in frame loop
// ...
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
XrEyeGazesInfoFB gazesInfo{XR_TYPE_EYE_GAZES_INFO_FB};
gazesInfo.baseSpace = worldSpace;
gazesInfo.time = time;
CHK_XR(pfnGetEyeGazesFB(eyeTracker, &gazesInfo, &eyeGazes));
if (eyeGazes.gaze[XR_EYE_POSITION_LEFT_FB].isValid) {
// ....
}
}
New Object Types
New Flag Types
New Enum Constants
XrObjectType enumeration is extended with:
- XR_OBJECT_TYPE_EYE_TRACKER_FB
XrStructureType enumeration is extended with:
- XR_TYPE_SYSTEM_EYE_TRACKING_PROPERTIES_FB
- XR_TYPE_EYE_TRACKER_CREATE_INFO_FB
- XR_TYPE_EYE_GAZES_INFO_FB
- XR_TYPE_EYE_GAZES_FB
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2022-07-17 (Igor Tceglevskii)
  - Initial extension description
12.84. XR_FB_face_tracking
- Name String: XR_FB_face_tracking
- Extension Type: Instance extension
- Registered Extension Number: 202
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2022-07-15
- IP Status: No known IP claims.
- Contributors:
  Jaebong Lee, Meta
  Dikpal Reddy, Meta
  Igor Tceglevskii, Meta
12.84.1. Overview
This extension enables applications to get weights of blend shapes. It also enables applications to render facial expressions in XR experiences.
Face tracking data is sensitive personal information and is closely linked to personal privacy and integrity. It is strongly recommended that applications storing or transferring face tracking data always ask the user for active and specific acceptance to do so.
If a runtime supports a permission system to control application access to
the face tracker, then the runtime must set the isValid field to
XR_FALSE on the supplied XrFaceExpressionStatusFB structure
until the user allows the application to access the face tracker.
When the application access has been allowed, the runtime may set
isValid on the supplied XrFaceExpressionStatusFB structure to
XR_TRUE.
Some permission systems may control access to the eye tracking separately
from access to the face tracking, even though the eyes are part of the face.
If the user denied tracking of the eyes but allowed tracking of the face,
then the runtime must set the isEyeFollowingBlendshapesValid
field to XR_FALSE on the supplied XrFaceExpressionStatusFB to
indicate that eye tracking data is not available, but may at the same
time set the isValid field to XR_TRUE on the supplied
XrFaceExpressionStatusFB to indicate that the rest of the face is
tracked properly.
12.84.2. Inspect system capability
// Provided by XR_FB_face_tracking
typedef struct XrSystemFaceTrackingPropertiesFB {
XrStructureType type;
void* next;
XrBool32 supportsFaceTracking;
} XrSystemFaceTrackingPropertiesFB;
An application can inspect whether the system is capable of receiving face tracking input by extending the XrSystemProperties with XrSystemFaceTrackingPropertiesFB structure when calling xrGetSystemProperties.
If a runtime returns XR_FALSE for supportsFaceTracking, the
runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateFaceTrackerFB.
12.84.3. Create a face tracker handle
The XrFaceTrackerFB handle represents the resources for face tracking.
// Provided by XR_FB_face_tracking
XR_DEFINE_HANDLE(XrFaceTrackerFB)
This handle is used to obtain blend shapes using the xrGetFaceExpressionWeightsFB function.
The xrCreateFaceTrackerFB function is defined as:
// Provided by XR_FB_face_tracking
XrResult xrCreateFaceTrackerFB(
XrSession session,
const XrFaceTrackerCreateInfoFB* createInfo,
XrFaceTrackerFB* faceTracker);
An application can create an XrFaceTrackerFB handle using the xrCreateFaceTrackerFB function.
If the system does not support face tracking, the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from xrCreateFaceTrackerFB.
In this case, the runtime must return XR_FALSE for
XrSystemFaceTrackingPropertiesFB::supportsFaceTracking when the
function xrGetSystemProperties is called, so that the application can
avoid creating a face tracker.
The XrFaceTrackerCreateInfoFB structure is described as follows:
// Provided by XR_FB_face_tracking
typedef struct XrFaceTrackerCreateInfoFB {
XrStructureType type;
const void* next;
XrFaceExpressionSetFB faceExpressionSet;
} XrFaceTrackerCreateInfoFB;
The XrFaceTrackerCreateInfoFB structure describes the information to create an XrFaceTrackerFB handle.
The XrFaceExpressionSetFB enum describes the set of blend shapes of a facial expression to track when creating an XrFaceTrackerFB.
// Provided by XR_FB_face_tracking
typedef enum XrFaceExpressionSetFB {
XR_FACE_EXPRESSION_SET_DEFAULT_FB = 0,
XR_FACE_EXPRESSION_SET_MAX_ENUM_FB = 0x7FFFFFFF
} XrFaceExpressionSetFB;
// Provided by XR_FB_face_tracking
#define XR_FACE_EXPRESSSION_SET_DEFAULT_FB XR_FACE_EXPRESSION_SET_DEFAULT_FB
XR_FACE_EXPRESSSION_SET_DEFAULT_FB is a deprecated alias for
XR_FACE_EXPRESSION_SET_DEFAULT_FB, kept for backward compatibility;
it should not be used.
12.84.4. Delete a face tracker handle
The xrDestroyFaceTrackerFB function releases the faceTracker and
the underlying resources when face tracking experience is over.
// Provided by XR_FB_face_tracking
XrResult xrDestroyFaceTrackerFB(
XrFaceTrackerFB faceTracker);
12.84.5. Obtain facial expressions
The xrGetFaceExpressionWeightsFB function returns blend shape weights of a facial expression at a given time.
// Provided by XR_FB_face_tracking
XrResult xrGetFaceExpressionWeightsFB(
XrFaceTrackerFB faceTracker,
const XrFaceExpressionInfoFB* expressionInfo,
XrFaceExpressionWeightsFB* expressionWeights);
The XrFaceExpressionInfoFB structure describes the information to obtain facial expression.
// Provided by XR_FB_face_tracking
typedef struct XrFaceExpressionInfoFB {
XrStructureType type;
const void* next;
XrTime time;
} XrFaceExpressionInfoFB;
Callers should request a time equal to the predicted display time for the rendered frame. The system will employ appropriate modeling to provide expressions for this time.
The XrFaceExpressionWeightsFB structure returns the facial expression weights.
// Provided by XR_FB_face_tracking
typedef struct XrFaceExpressionWeightsFB {
XrStructureType type;
void* next;
uint32_t weightCount;
float* weights;
uint32_t confidenceCount;
float* confidences;
XrFaceExpressionStatusFB status;
XrTime time;
} XrFaceExpressionWeightsFB;
The runtime must return XR_ERROR_VALIDATION_FAILURE if
weightCount is not equal to the number of blend shapes defined by the
XrFaceExpressionSetFB used to create the XrFaceTrackerFB.
The runtime must return XR_ERROR_VALIDATION_FAILURE if
confidenceCount is not equal to the number of confidence areas defined
by the XrFaceExpressionSetFB used to create the XrFaceTrackerFB.
The runtime must return weights representing the weights of blend
shapes of current facial expression.
The runtime must update the weights array ordered so that the
application can index elements using the corresponding facial expression
enum (e.g. XrFaceExpressionFB) as described by
XrFaceExpressionSetFB when creating the XrFaceTrackerFB.
For example, when the XrFaceTrackerFB is created with
XR_FACE_EXPRESSION_SET_DEFAULT_FB, the application sets the
weightCount to XR_FACE_EXPRESSION_COUNT_FB, and the runtime
must fill the weights array ordered so that it can be indexed by the
XrFaceExpressionFB enum.
The runtime must update the confidences array ordered so that the
application can index elements using the corresponding confidence area enum
(e.g. XrFaceConfidenceFB) as described by XrFaceExpressionSetFB
when creating the XrFaceTrackerFB.
For example, when the XrFaceTrackerFB is created with
XR_FACE_EXPRESSION_SET_DEFAULT_FB, the application sets the
confidenceCount to XR_FACE_CONFIDENCE_COUNT_FB, and the runtime
must fill the confidences array ordered so that it can be indexed by
the XrFaceConfidenceFB enum.
The XrFaceExpressionStatusFB structure describes the validity of the facial expression weights.
// Provided by XR_FB_face_tracking
typedef struct XrFaceExpressionStatusFB {
XrBool32 isValid;
XrBool32 isEyeFollowingBlendshapesValid;
} XrFaceExpressionStatusFB;
If the returned isValid is XR_FALSE, then it indicates that the
face tracker failed to track or lost track of the face, or the application
lost focus, or the consent for face tracking was denied.
If the returned isValid is XR_TRUE, the runtime must return all
weights (or all weights except eyes related weights, see
isEyeFollowingBlendshapesValid).
If the returned isEyeFollowingBlendshapesValid is XR_FALSE, it
indicates that the eye tracking driving the blend shapes prefixed
XR_FACE_EXPRESSION_EYES_LOOK_* lost track, or that the consent for eye
tracking was denied.
12.84.6. Example code for obtaining facial expression
The following example code demonstrates how to obtain all weights for facial expression blend shapes.
XrInstance instance; // previously initialized
XrSystemId systemId; // previously initialized
XrSession session; // previously initialized
// Confirm face tracking system support.
XrSystemFaceTrackingPropertiesFB faceTrackingSystemProperties{
XR_TYPE_SYSTEM_FACE_TRACKING_PROPERTIES_FB};
XrSystemProperties systemProperties{XR_TYPE_SYSTEM_PROPERTIES,
&faceTrackingSystemProperties};
CHK_XR(xrGetSystemProperties(instance, systemId, &systemProperties));
if (!faceTrackingSystemProperties.supportsFaceTracking) {
// The system does not support face tracking
return;
}
// Get function pointer for xrCreateFaceTrackerFB.
PFN_xrCreateFaceTrackerFB pfnCreateFaceTrackerFB;
CHK_XR(xrGetInstanceProcAddr(instance, "xrCreateFaceTrackerFB",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnCreateFaceTrackerFB)));
// Create a face tracker for default set of facial expressions.
XrFaceTrackerFB faceTracker = {};
{
XrFaceTrackerCreateInfoFB createInfo{XR_TYPE_FACE_TRACKER_CREATE_INFO_FB};
createInfo.faceExpressionSet = XR_FACE_EXPRESSION_SET_DEFAULT_FB;
CHK_XR(pfnCreateFaceTrackerFB(session, &createInfo, &faceTracker));
}
// Allocate buffers to receive facial expression data before frame
// loop starts.
float weights[XR_FACE_EXPRESSION_COUNT_FB];
float confidences[XR_FACE_CONFIDENCE_COUNT_FB];
XrFaceExpressionWeightsFB expressionWeights{XR_TYPE_FACE_EXPRESSION_WEIGHTS_FB};
expressionWeights.weightCount = XR_FACE_EXPRESSION_COUNT_FB;
expressionWeights.weights = weights;
expressionWeights.confidenceCount = XR_FACE_CONFIDENCE_COUNT_FB;
expressionWeights.confidences = confidences;
// Get function pointer for xrGetFaceExpressionWeightsFB.
PFN_xrGetFaceExpressionWeightsFB pfnGetFaceExpressionWeights;
CHK_XR(xrGetInstanceProcAddr(instance, "xrGetFaceExpressionWeightsFB",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnGetFaceExpressionWeights)));
while (1) {
// ...
// For every frame in the frame loop
// ...
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
XrFaceExpressionInfoFB expressionInfo{XR_TYPE_FACE_EXPRESSION_INFO_FB};
expressionInfo.time = time;
CHK_XR(pfnGetFaceExpressionWeights(faceTracker, &expressionInfo, &expressionWeights));
if (expressionWeights.status.isValid) {
for (uint32_t i = 0; i < XR_FACE_EXPRESSION_COUNT_FB; ++i) {
// weights[i] contains a weight of specific blend shape
}
}
}
12.84.7. Conventions of blend shapes
This extension defines 63 blend shapes for tracking facial expressions.
// Provided by XR_FB_face_tracking
typedef enum XrFaceExpressionFB {
XR_FACE_EXPRESSION_BROW_LOWERER_L_FB = 0,
XR_FACE_EXPRESSION_BROW_LOWERER_R_FB = 1,
XR_FACE_EXPRESSION_CHEEK_PUFF_L_FB = 2,
XR_FACE_EXPRESSION_CHEEK_PUFF_R_FB = 3,
XR_FACE_EXPRESSION_CHEEK_RAISER_L_FB = 4,
XR_FACE_EXPRESSION_CHEEK_RAISER_R_FB = 5,
XR_FACE_EXPRESSION_CHEEK_SUCK_L_FB = 6,
XR_FACE_EXPRESSION_CHEEK_SUCK_R_FB = 7,
XR_FACE_EXPRESSION_CHIN_RAISER_B_FB = 8,
XR_FACE_EXPRESSION_CHIN_RAISER_T_FB = 9,
XR_FACE_EXPRESSION_DIMPLER_L_FB = 10,
XR_FACE_EXPRESSION_DIMPLER_R_FB = 11,
XR_FACE_EXPRESSION_EYES_CLOSED_L_FB = 12,
XR_FACE_EXPRESSION_EYES_CLOSED_R_FB = 13,
XR_FACE_EXPRESSION_EYES_LOOK_DOWN_L_FB = 14,
XR_FACE_EXPRESSION_EYES_LOOK_DOWN_R_FB = 15,
XR_FACE_EXPRESSION_EYES_LOOK_LEFT_L_FB = 16,
XR_FACE_EXPRESSION_EYES_LOOK_LEFT_R_FB = 17,
XR_FACE_EXPRESSION_EYES_LOOK_RIGHT_L_FB = 18,
XR_FACE_EXPRESSION_EYES_LOOK_RIGHT_R_FB = 19,
XR_FACE_EXPRESSION_EYES_LOOK_UP_L_FB = 20,
XR_FACE_EXPRESSION_EYES_LOOK_UP_R_FB = 21,
XR_FACE_EXPRESSION_INNER_BROW_RAISER_L_FB = 22,
XR_FACE_EXPRESSION_INNER_BROW_RAISER_R_FB = 23,
XR_FACE_EXPRESSION_JAW_DROP_FB = 24,
XR_FACE_EXPRESSION_JAW_SIDEWAYS_LEFT_FB = 25,
XR_FACE_EXPRESSION_JAW_SIDEWAYS_RIGHT_FB = 26,
XR_FACE_EXPRESSION_JAW_THRUST_FB = 27,
XR_FACE_EXPRESSION_LID_TIGHTENER_L_FB = 28,
XR_FACE_EXPRESSION_LID_TIGHTENER_R_FB = 29,
XR_FACE_EXPRESSION_LIP_CORNER_DEPRESSOR_L_FB = 30,
XR_FACE_EXPRESSION_LIP_CORNER_DEPRESSOR_R_FB = 31,
XR_FACE_EXPRESSION_LIP_CORNER_PULLER_L_FB = 32,
XR_FACE_EXPRESSION_LIP_CORNER_PULLER_R_FB = 33,
XR_FACE_EXPRESSION_LIP_FUNNELER_LB_FB = 34,
XR_FACE_EXPRESSION_LIP_FUNNELER_LT_FB = 35,
XR_FACE_EXPRESSION_LIP_FUNNELER_RB_FB = 36,
XR_FACE_EXPRESSION_LIP_FUNNELER_RT_FB = 37,
XR_FACE_EXPRESSION_LIP_PRESSOR_L_FB = 38,
XR_FACE_EXPRESSION_LIP_PRESSOR_R_FB = 39,
XR_FACE_EXPRESSION_LIP_PUCKER_L_FB = 40,
XR_FACE_EXPRESSION_LIP_PUCKER_R_FB = 41,
XR_FACE_EXPRESSION_LIP_STRETCHER_L_FB = 42,
XR_FACE_EXPRESSION_LIP_STRETCHER_R_FB = 43,
XR_FACE_EXPRESSION_LIP_SUCK_LB_FB = 44,
XR_FACE_EXPRESSION_LIP_SUCK_LT_FB = 45,
XR_FACE_EXPRESSION_LIP_SUCK_RB_FB = 46,
XR_FACE_EXPRESSION_LIP_SUCK_RT_FB = 47,
XR_FACE_EXPRESSION_LIP_TIGHTENER_L_FB = 48,
XR_FACE_EXPRESSION_LIP_TIGHTENER_R_FB = 49,
XR_FACE_EXPRESSION_LIPS_TOWARD_FB = 50,
XR_FACE_EXPRESSION_LOWER_LIP_DEPRESSOR_L_FB = 51,
XR_FACE_EXPRESSION_LOWER_LIP_DEPRESSOR_R_FB = 52,
XR_FACE_EXPRESSION_MOUTH_LEFT_FB = 53,
XR_FACE_EXPRESSION_MOUTH_RIGHT_FB = 54,
XR_FACE_EXPRESSION_NOSE_WRINKLER_L_FB = 55,
XR_FACE_EXPRESSION_NOSE_WRINKLER_R_FB = 56,
XR_FACE_EXPRESSION_OUTER_BROW_RAISER_L_FB = 57,
XR_FACE_EXPRESSION_OUTER_BROW_RAISER_R_FB = 58,
XR_FACE_EXPRESSION_UPPER_LID_RAISER_L_FB = 59,
XR_FACE_EXPRESSION_UPPER_LID_RAISER_R_FB = 60,
XR_FACE_EXPRESSION_UPPER_LIP_RAISER_L_FB = 61,
XR_FACE_EXPRESSION_UPPER_LIP_RAISER_R_FB = 62,
XR_FACE_EXPRESSION_COUNT_FB = 63,
XR_FACE_EXPRESSION_MAX_ENUM_FB = 0x7FFFFFFF
} XrFaceExpressionFB;
12.84.8. Conventions of confidence areas
This extension defines two separate areas of confidence.
// Provided by XR_FB_face_tracking
typedef enum XrFaceConfidenceFB {
XR_FACE_CONFIDENCE_LOWER_FACE_FB = 0,
XR_FACE_CONFIDENCE_UPPER_FACE_FB = 1,
XR_FACE_CONFIDENCE_COUNT_FB = 2,
XR_FACE_CONFIDENCE_MAX_ENUM_FB = 0x7FFFFFFF
} XrFaceConfidenceFB;
The "upper face" area represents everything above the upper lip, including eye, eyebrows + cheek, and nose. The "lower face" area represents everything under eyes, including mouth, chin + cheek, and nose. Cheek and nose areas contribute to both "upper face" and "lower face" areas.
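To illustrate how these confidence areas might be consumed, the sketch below attenuates blend-shape weights by the confidence of the face area each one is assumed to belong to. This is not behavior defined by the extension: the areaForExpression helper and its index threshold are simplified assumptions made for this example, not the normative area mapping.

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Illustrative only: the two confidence areas, mirroring
// XR_FACE_CONFIDENCE_LOWER_FACE_FB = 0 and XR_FACE_CONFIDENCE_UPPER_FACE_FB = 1.
enum FaceArea { kLowerFace = 0, kUpperFace = 1 };

// Hypothetical helper: assign expression indices to an area. The threshold
// below is a simplification for this sketch, NOT the normative mapping.
inline FaceArea areaForExpression(std::size_t expressionIndex) {
    return expressionIndex < 24 ? kUpperFace : kLowerFace;
}

// Scale each weight by the confidence of its (assumed) area before using it
// to drive an avatar, so low-confidence areas move less.
template <std::size_t N>
void attenuateByConfidence(std::array<float, N>& weights,
                           const std::array<float, 2>& confidences) {
    for (std::size_t i = 0; i < N; ++i) {
        weights[i] *= confidences[areaForExpression(i)];
    }
}
```

An application might call this each frame after a successful xrGetFaceExpressionWeightsFB, before applying the weights to an avatar rig.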
New Object Types
New Flag Types
New Enum Constants
XrObjectType enumeration is extended with:
- XR_OBJECT_TYPE_FACE_TRACKER_FB
XrStructureType enumeration is extended with:
- XR_TYPE_SYSTEM_FACE_TRACKING_PROPERTIES_FB
- XR_TYPE_FACE_TRACKER_CREATE_INFO_FB
- XR_TYPE_FACE_EXPRESSION_INFO_FB
- XR_TYPE_FACE_EXPRESSION_WEIGHTS_FB
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2022-07-15 (Igor Tceglevskii)
  - Initial extension description
12.85. XR_FB_face_tracking2
- Name String: XR_FB_face_tracking2
- Extension Type: Instance extension
- Registered Extension Number: 288
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2023-10-06
- IP Status: No known IP claims.
- Contributors:
  Jaebong Lee, Meta
  Dikpal Reddy, Meta
  Igor Tceglevskii, Meta
  Bill Orr, Meta
  Scott Ramsby, Meta
12.85.1. Overview
This extension enables applications to get weights of blend shapes. It also enables applications to render facial expressions in XR experiences.
It is recommended to choose this extension over the
XR_FB_face_tracking extension, if it is supported by the runtime,
because this extension provides the following two additional capabilities to
the application:
- This extension provides seven additional blend shapes that estimate tongue movement.
- This extension allows an application and the runtime to communicate about the data sources that are used to estimate facial expression in a cooperative manner.
Face tracking data is sensitive personal information and is closely linked to personal privacy and integrity. Applications storing or transferring face tracking data should always ask the user for active and specific acceptance to do so.
If the runtime supports a permission system to control application access to
the face tracker, then the runtime must set the isValid field to
XR_FALSE on the supplied XrFaceExpressionWeights2FB structure
until the user allows the application to access the face tracker.
When the application access has been allowed, the runtime should set
isValid on the supplied XrFaceExpressionWeights2FB structure to
XR_TRUE.
Some permission systems may control access to eye tracking separately
from access to face tracking, even though the eyes are part of the face.
If the user denied tracking of the eyes but allowed tracking of the face,
then the runtime must set the isEyeFollowingBlendshapesValid
field to XR_FALSE on the supplied XrFaceExpressionWeights2FB to
indicate that eye tracking data is not available, but at the same time
may set the isValid field to XR_TRUE on the supplied
XrFaceExpressionWeights2FB to indicate that the rest of the
face is tracked properly.
12.85.2. Inspect system capability
// Provided by XR_FB_face_tracking2
typedef struct XrSystemFaceTrackingProperties2FB {
XrStructureType type;
void* next;
XrBool32 supportsVisualFaceTracking;
XrBool32 supportsAudioFaceTracking;
} XrSystemFaceTrackingProperties2FB;
An application can inspect whether the system is capable of receiving face tracking input by extending the XrSystemProperties with the XrSystemFaceTrackingProperties2FB structure when calling xrGetSystemProperties.
If an application calls xrCreateFaceTracker2FB only with unsupported
XrFaceTrackerCreateInfo2FB::requestedDataSources, the runtime
must return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateFaceTracker2FB.
For example, if an application calls xrCreateFaceTracker2FB only with
XR_FACE_TRACKING_DATA_SOURCE2_AUDIO_FB in
XrFaceTrackerCreateInfo2FB::requestedDataSources when the
runtime returns XR_FALSE for supportsAudioFaceTracking, the
runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateFaceTracker2FB.
12.85.3. Create a face tracker handle
The XrFaceTracker2FB handle represents the resources for face tracking.
// Provided by XR_FB_face_tracking2
XR_DEFINE_HANDLE(XrFaceTracker2FB)
This handle is used to obtain blend shapes using the xrGetFaceExpressionWeights2FB function.
The xrCreateFaceTracker2FB function is defined as:
// Provided by XR_FB_face_tracking2
XrResult xrCreateFaceTracker2FB(
XrSession session,
const XrFaceTrackerCreateInfo2FB* createInfo,
XrFaceTracker2FB* faceTracker);
An application can create an XrFaceTracker2FB handle using xrCreateFaceTracker2FB function.
If the system does not support face tracking, the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from xrCreateFaceTracker2FB.
In this case, the runtime must return XR_FALSE for both
XrSystemFaceTrackingProperties2FB::supportsVisualFaceTracking
and XrSystemFaceTrackingProperties2FB::supportsAudioFaceTracking
when the function xrGetSystemProperties is called, so that the
application can avoid creating a face tracker.
The XrFaceTrackerCreateInfo2FB structure is described as follows:
// Provided by XR_FB_face_tracking2
typedef struct XrFaceTrackerCreateInfo2FB {
XrStructureType type;
const void* next;
XrFaceExpressionSet2FB faceExpressionSet;
uint32_t requestedDataSourceCount;
XrFaceTrackingDataSource2FB* requestedDataSources;
} XrFaceTrackerCreateInfo2FB;
The XrFaceTrackerCreateInfo2FB structure describes the information to create an XrFaceTracker2FB handle.
Runtimes may support a variety of data sources for estimations of facial expression, and some runtimes and devices may use data from multiple data sources. The application tells the runtime all data sources that the runtime may use to provide facial expressions for the application.
Because the device setting may change during a running session, the runtime
may return a valid XrFaceTracker2FB handle even if the device is
unable to estimate facial expression using the data sources requested by the
application’s call to xrCreateFaceTracker2FB.
The runtime must instead return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateFaceTracker2FB, if for example the runtime believes it will
never be able to satisfy the request.
If requestedDataSourceCount is 0, the runtime may choose any
supported data source, preferably one that is more expressive than the
others.
If any value in requestedDataSources is duplicated the runtime must
return XR_ERROR_VALIDATION_FAILURE from the call to
xrCreateFaceTracker2FB.
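Since duplicated entries make the call fail with XR_ERROR_VALIDATION_FAILURE, an application that assembles its requested data sources from configuration may want to deduplicate the list before calling xrCreateFaceTracker2FB. The sketch below shows one way to do that; the FaceTrackingDataSource2 enum is a local stand-in mirroring XrFaceTrackingDataSource2FB, and dedupeDataSources is a hypothetical helper, not part of the extension.

```cpp
#include <cassert>
#include <set>
#include <vector>

// Local stand-in mirroring XrFaceTrackingDataSource2FB; a real application
// would use the enum from <openxr/openxr.h>.
enum FaceTrackingDataSource2 {
    kDataSourceVisual = 0,  // XR_FACE_TRACKING_DATA_SOURCE2_VISUAL_FB
    kDataSourceAudio = 1,   // XR_FACE_TRACKING_DATA_SOURCE2_AUDIO_FB
};

// Hypothetical helper: drop duplicates while preserving first-seen order,
// so the result is safe to pass as requestedDataSources.
std::vector<FaceTrackingDataSource2>
dedupeDataSources(const std::vector<FaceTrackingDataSource2>& requested) {
    std::set<FaceTrackingDataSource2> seen;
    std::vector<FaceTrackingDataSource2> unique;
    for (FaceTrackingDataSource2 source : requested) {
        if (seen.insert(source).second) {
            unique.push_back(source);
        }
    }
    return unique;
}
```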
The XrFaceExpressionSet2FB enum describes the set of blend shapes of a facial expression to track when creating an XrFaceTracker2FB.
// Provided by XR_FB_face_tracking2
typedef enum XrFaceExpressionSet2FB {
XR_FACE_EXPRESSION_SET2_DEFAULT_FB = 0,
XR_FACE_EXPRESSION_SET_2FB_MAX_ENUM_FB = 0x7FFFFFFF
} XrFaceExpressionSet2FB;
The XrFaceTrackingDataSource2FB enumeration is defined as:
// Provided by XR_FB_face_tracking2
typedef enum XrFaceTrackingDataSource2FB {
XR_FACE_TRACKING_DATA_SOURCE2_VISUAL_FB = 0,
XR_FACE_TRACKING_DATA_SOURCE2_AUDIO_FB = 1,
XR_FACE_TRACKING_DATA_SOURCE_2FB_MAX_ENUM_FB = 0x7FFFFFFF
} XrFaceTrackingDataSource2FB;
12.85.4. Delete a face tracker handle
The xrDestroyFaceTracker2FB function is defined as:
// Provided by XR_FB_face_tracking2
XrResult xrDestroyFaceTracker2FB(
XrFaceTracker2FB faceTracker);
The xrDestroyFaceTracker2FB function releases the faceTracker
and the underlying resources when face tracking experience is over.
12.85.5. Obtain facial expressions
The xrGetFaceExpressionWeights2FB function is defined as:
// Provided by XR_FB_face_tracking2
XrResult xrGetFaceExpressionWeights2FB(
XrFaceTracker2FB faceTracker,
const XrFaceExpressionInfo2FB* expressionInfo,
XrFaceExpressionWeights2FB* expressionWeights);
The xrGetFaceExpressionWeights2FB function returns the blend shapes of facial expression at a given time.
The XrFaceExpressionInfo2FB structure is defined as:
// Provided by XR_FB_face_tracking2
typedef struct XrFaceExpressionInfo2FB {
XrStructureType type;
const void* next;
XrTime time;
} XrFaceExpressionInfo2FB;
The XrFaceExpressionInfo2FB structure describes the information to obtain facial expression. The application should pass a time equal to the predicted display time for the rendered frame. The system must employ appropriate modeling to provide expressions for this time.
The XrFaceExpressionWeights2FB structure is defined as:
// Provided by XR_FB_face_tracking2
typedef struct XrFaceExpressionWeights2FB {
XrStructureType type;
void* next;
uint32_t weightCount;
float* weights;
uint32_t confidenceCount;
float* confidences;
XrBool32 isValid;
XrBool32 isEyeFollowingBlendshapesValid;
XrFaceTrackingDataSource2FB dataSource;
XrTime time;
} XrFaceExpressionWeights2FB;
The XrFaceExpressionWeights2FB structure returns the facial expression.
The runtime must return XR_ERROR_VALIDATION_FAILURE if
weightCount is not equal to the number of blend shapes defined by the
XrFaceExpressionSet2FB used to create the XrFaceTracker2FB.
The runtime must return XR_ERROR_VALIDATION_FAILURE if
confidenceCount is not equal to the number of confidence areas defined
by the XrFaceExpressionSet2FB used to create the
XrFaceTracker2FB.
The runtime must return, in weights, the weights of the blend
shapes of the current facial expression.
The runtime must update the weights array ordered so that the
application can index elements using the corresponding facial expression
enum (e.g. XrFaceExpression2FB) as described by
XrFaceExpressionSet2FB when creating the XrFaceTracker2FB.
For example, when the XrFaceTracker2FB is created with
XR_FACE_EXPRESSION_SET2_DEFAULT_FB, the application sets the
weightCount to XR_FACE_EXPRESSION2_COUNT_FB, and the runtime
must fill the weights array ordered so that it can be indexed by the
XrFaceExpression2FB enum.
The runtime must update the confidences array ordered so that the
application can index elements using the corresponding confidence area enum
(e.g. XrFaceConfidence2FB) as described by
XrFaceExpressionSet2FB when creating the XrFaceTracker2FB.
For example, when the XrFaceTracker2FB is created with
XR_FACE_EXPRESSION_SET2_DEFAULT_FB, the application sets the
confidenceCount to XR_FACE_CONFIDENCE2_COUNT_FB, and the runtime
must fill the confidences array ordered so that it can be indexed by
the XrFaceConfidence2FB enum.
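Because both arrays are ordered by their corresponding enumerations, an application can index them directly with enum values. The sketch below demonstrates this with a plain std::vector standing in for the weights buffer; the index constants are copied from the XrFaceExpression2FB enumeration defined later in this extension, while expressionWeight is a hypothetical helper.

```cpp
#include <cassert>
#include <vector>

// Index values copied from the XrFaceExpression2FB enumeration; the weights
// array filled by xrGetFaceExpressionWeights2FB is ordered so that these
// values index it directly.
constexpr int kJawDrop = 24;          // XR_FACE_EXPRESSION2_JAW_DROP_FB
constexpr int kTongueOut = 68;        // XR_FACE_EXPRESSION2_TONGUE_OUT_FB
constexpr int kExpressionCount = 70;  // XR_FACE_EXPRESSION2_COUNT_FB

// Hypothetical helper: read one blend shape out of the enum-ordered array.
float expressionWeight(const std::vector<float>& weights, int expression) {
    assert(static_cast<int>(weights.size()) == kExpressionCount);
    return weights[expression];
}
```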
The runtime must set isValid to XR_FALSE and it must also set
all elements of weights to zero, if one of the following is true:
- the face tracker failed to track or lost track of the face
- the application lost focus
- the consent for face tracking was denied
- the runtime is unable to estimate facial expression from the data sources specified when the xrCreateFaceTracker2FB function was called
If the returned isValid is XR_TRUE, the runtime must return all
weights (or all weights except the eye-related weights; see
isEyeFollowingBlendshapesValid).
The runtime must set isEyeFollowingBlendshapesValid to XR_FALSE
and it must also set the 8 expression weights with prefix
XR_FACE_EXPRESSION2_EYES_LOOK_* to zero, if one of the following is true:
- the eye tracking driving the blendshapes with prefix XR_FACE_EXPRESSION2_EYES_LOOK_* lost track
- the consent for eye tracking was denied
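When isEyeFollowingBlendshapesValid is XR_FALSE, the eight XR_FACE_EXPRESSION2_EYES_LOOK_* weights arrive zeroed. Rather than rendering the zeroed weights directly, an avatar renderer might substitute an application-chosen neutral gaze. The sketch below shows that idea; applyEyeFallback is a hypothetical helper, and only the enum index range (14 through 21) comes from the specification.

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Indices of the eight eye-following blend shapes, copied from
// XrFaceExpression2FB (EYES_LOOK_DOWN_L = 14 through EYES_LOOK_UP_R = 21).
constexpr int kEyesLookFirst = 14;
constexpr int kEyesLookLast = 21;

// Hypothetical fallback: when eye tracking data is invalid, overwrite the
// zeroed eye-following weights with an application-chosen neutral gaze
// instead of leaving the avatar's eyes frozen at zero.
template <std::size_t N>
void applyEyeFallback(std::array<float, N>& weights,
                      bool isEyeFollowingBlendshapesValid,
                      const std::array<float, 8>& neutralGaze) {
    if (isEyeFollowingBlendshapesValid) {
        return;  // runtime-provided eye weights are usable as-is
    }
    for (int i = kEyesLookFirst; i <= kEyesLookLast; ++i) {
        weights[i] = neutralGaze[i - kEyesLookFirst];
    }
}
```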
12.85.6. Example code for obtaining facial expression
The following example code demonstrates how to obtain all weights for facial expression blend shapes.
XrInstance instance; // previously initialized
XrSystemId systemId; // previously initialized
XrSession session; // previously initialized
// Confirm face tracking system support.
XrSystemFaceTrackingProperties2FB faceTrackingSystemProperties{
XR_TYPE_SYSTEM_FACE_TRACKING_PROPERTIES2_FB};
XrSystemProperties systemProperties{XR_TYPE_SYSTEM_PROPERTIES,
&faceTrackingSystemProperties};
CHK_XR(xrGetSystemProperties(instance, systemId, &systemProperties));
if (!faceTrackingSystemProperties.supportsVisualFaceTracking &&
!faceTrackingSystemProperties.supportsAudioFaceTracking) {
// The system does not support face tracking
return;
}
// Get function pointer for xrCreateFaceTracker2FB.
PFN_xrCreateFaceTracker2FB pfnCreateFaceTracker2FB;
CHK_XR(xrGetInstanceProcAddr(instance, "xrCreateFaceTracker2FB",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnCreateFaceTracker2FB)));
// Create a face tracker for default set of facial expressions.
XrFaceTracker2FB faceTracker = {};
{
XrFaceTrackerCreateInfo2FB createInfo{XR_TYPE_FACE_TRACKER_CREATE_INFO2_FB};
createInfo.faceExpressionSet = XR_FACE_EXPRESSION_SET2_DEFAULT_FB;
// This tells the runtime that the application can take
// facial expression from any of two data sources.
createInfo.requestedDataSourceCount = 2;
XrFaceTrackingDataSource2FB dataSources[2] = {
XR_FACE_TRACKING_DATA_SOURCE2_VISUAL_FB,
XR_FACE_TRACKING_DATA_SOURCE2_AUDIO_FB};
createInfo.requestedDataSources = dataSources;
CHK_XR(pfnCreateFaceTracker2FB(session, &createInfo, &faceTracker));
}
// Allocate buffers to receive facial expression data before frame
// loop starts.
float weights[XR_FACE_EXPRESSION2_COUNT_FB];
float confidences[XR_FACE_CONFIDENCE2_COUNT_FB];
XrFaceExpressionWeights2FB expressionWeights{XR_TYPE_FACE_EXPRESSION_WEIGHTS2_FB};
expressionWeights.weightCount = XR_FACE_EXPRESSION2_COUNT_FB;
expressionWeights.weights = weights;
expressionWeights.confidenceCount = XR_FACE_CONFIDENCE2_COUNT_FB;
expressionWeights.confidences = confidences;
// Get function pointer for xrGetFaceExpressionWeights2FB.
PFN_xrGetFaceExpressionWeights2FB pfnGetFaceExpressionWeights;
CHK_XR(xrGetInstanceProcAddr(instance, "xrGetFaceExpressionWeights2FB",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnGetFaceExpressionWeights)));
while (1) {
// ...
// For every frame in the frame loop
// ...
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
XrFaceExpressionInfo2FB expressionInfo{XR_TYPE_FACE_EXPRESSION_INFO2_FB};
expressionInfo.time = time;
CHK_XR(pfnGetFaceExpressionWeights(faceTracker, &expressionInfo, &expressionWeights));
if (expressionWeights.isValid) {
// If you want to do something depending on the data source.
if (expressionWeights.dataSource == XR_FACE_TRACKING_DATA_SOURCE2_VISUAL_FB) {
// do something when visual or audiovisual data source was used.
} else if (expressionWeights.dataSource == XR_FACE_TRACKING_DATA_SOURCE2_AUDIO_FB) {
// do something when audio data source was used.
}
for (uint32_t i = 0; i < XR_FACE_EXPRESSION2_COUNT_FB; ++i) {
// weights[i] contains a weight of specific blend shape
}
}
}
12.85.7. Conventions of blend shapes
This extension defines 70 blend shapes for tracking facial expressions.
// Provided by XR_FB_face_tracking2
typedef enum XrFaceExpression2FB {
XR_FACE_EXPRESSION2_BROW_LOWERER_L_FB = 0,
XR_FACE_EXPRESSION2_BROW_LOWERER_R_FB = 1,
XR_FACE_EXPRESSION2_CHEEK_PUFF_L_FB = 2,
XR_FACE_EXPRESSION2_CHEEK_PUFF_R_FB = 3,
XR_FACE_EXPRESSION2_CHEEK_RAISER_L_FB = 4,
XR_FACE_EXPRESSION2_CHEEK_RAISER_R_FB = 5,
XR_FACE_EXPRESSION2_CHEEK_SUCK_L_FB = 6,
XR_FACE_EXPRESSION2_CHEEK_SUCK_R_FB = 7,
XR_FACE_EXPRESSION2_CHIN_RAISER_B_FB = 8,
XR_FACE_EXPRESSION2_CHIN_RAISER_T_FB = 9,
XR_FACE_EXPRESSION2_DIMPLER_L_FB = 10,
XR_FACE_EXPRESSION2_DIMPLER_R_FB = 11,
XR_FACE_EXPRESSION2_EYES_CLOSED_L_FB = 12,
XR_FACE_EXPRESSION2_EYES_CLOSED_R_FB = 13,
XR_FACE_EXPRESSION2_EYES_LOOK_DOWN_L_FB = 14,
XR_FACE_EXPRESSION2_EYES_LOOK_DOWN_R_FB = 15,
XR_FACE_EXPRESSION2_EYES_LOOK_LEFT_L_FB = 16,
XR_FACE_EXPRESSION2_EYES_LOOK_LEFT_R_FB = 17,
XR_FACE_EXPRESSION2_EYES_LOOK_RIGHT_L_FB = 18,
XR_FACE_EXPRESSION2_EYES_LOOK_RIGHT_R_FB = 19,
XR_FACE_EXPRESSION2_EYES_LOOK_UP_L_FB = 20,
XR_FACE_EXPRESSION2_EYES_LOOK_UP_R_FB = 21,
XR_FACE_EXPRESSION2_INNER_BROW_RAISER_L_FB = 22,
XR_FACE_EXPRESSION2_INNER_BROW_RAISER_R_FB = 23,
XR_FACE_EXPRESSION2_JAW_DROP_FB = 24,
XR_FACE_EXPRESSION2_JAW_SIDEWAYS_LEFT_FB = 25,
XR_FACE_EXPRESSION2_JAW_SIDEWAYS_RIGHT_FB = 26,
XR_FACE_EXPRESSION2_JAW_THRUST_FB = 27,
XR_FACE_EXPRESSION2_LID_TIGHTENER_L_FB = 28,
XR_FACE_EXPRESSION2_LID_TIGHTENER_R_FB = 29,
XR_FACE_EXPRESSION2_LIP_CORNER_DEPRESSOR_L_FB = 30,
XR_FACE_EXPRESSION2_LIP_CORNER_DEPRESSOR_R_FB = 31,
XR_FACE_EXPRESSION2_LIP_CORNER_PULLER_L_FB = 32,
XR_FACE_EXPRESSION2_LIP_CORNER_PULLER_R_FB = 33,
XR_FACE_EXPRESSION2_LIP_FUNNELER_LB_FB = 34,
XR_FACE_EXPRESSION2_LIP_FUNNELER_LT_FB = 35,
XR_FACE_EXPRESSION2_LIP_FUNNELER_RB_FB = 36,
XR_FACE_EXPRESSION2_LIP_FUNNELER_RT_FB = 37,
XR_FACE_EXPRESSION2_LIP_PRESSOR_L_FB = 38,
XR_FACE_EXPRESSION2_LIP_PRESSOR_R_FB = 39,
XR_FACE_EXPRESSION2_LIP_PUCKER_L_FB = 40,
XR_FACE_EXPRESSION2_LIP_PUCKER_R_FB = 41,
XR_FACE_EXPRESSION2_LIP_STRETCHER_L_FB = 42,
XR_FACE_EXPRESSION2_LIP_STRETCHER_R_FB = 43,
XR_FACE_EXPRESSION2_LIP_SUCK_LB_FB = 44,
XR_FACE_EXPRESSION2_LIP_SUCK_LT_FB = 45,
XR_FACE_EXPRESSION2_LIP_SUCK_RB_FB = 46,
XR_FACE_EXPRESSION2_LIP_SUCK_RT_FB = 47,
XR_FACE_EXPRESSION2_LIP_TIGHTENER_L_FB = 48,
XR_FACE_EXPRESSION2_LIP_TIGHTENER_R_FB = 49,
XR_FACE_EXPRESSION2_LIPS_TOWARD_FB = 50,
XR_FACE_EXPRESSION2_LOWER_LIP_DEPRESSOR_L_FB = 51,
XR_FACE_EXPRESSION2_LOWER_LIP_DEPRESSOR_R_FB = 52,
XR_FACE_EXPRESSION2_MOUTH_LEFT_FB = 53,
XR_FACE_EXPRESSION2_MOUTH_RIGHT_FB = 54,
XR_FACE_EXPRESSION2_NOSE_WRINKLER_L_FB = 55,
XR_FACE_EXPRESSION2_NOSE_WRINKLER_R_FB = 56,
XR_FACE_EXPRESSION2_OUTER_BROW_RAISER_L_FB = 57,
XR_FACE_EXPRESSION2_OUTER_BROW_RAISER_R_FB = 58,
XR_FACE_EXPRESSION2_UPPER_LID_RAISER_L_FB = 59,
XR_FACE_EXPRESSION2_UPPER_LID_RAISER_R_FB = 60,
XR_FACE_EXPRESSION2_UPPER_LIP_RAISER_L_FB = 61,
XR_FACE_EXPRESSION2_UPPER_LIP_RAISER_R_FB = 62,
XR_FACE_EXPRESSION2_TONGUE_TIP_INTERDENTAL_FB = 63,
XR_FACE_EXPRESSION2_TONGUE_TIP_ALVEOLAR_FB = 64,
XR_FACE_EXPRESSION2_TONGUE_FRONT_DORSAL_PALATE_FB = 65,
XR_FACE_EXPRESSION2_TONGUE_MID_DORSAL_PALATE_FB = 66,
XR_FACE_EXPRESSION2_TONGUE_BACK_DORSAL_VELAR_FB = 67,
XR_FACE_EXPRESSION2_TONGUE_OUT_FB = 68,
XR_FACE_EXPRESSION2_TONGUE_RETREAT_FB = 69,
XR_FACE_EXPRESSION2_COUNT_FB = 70,
XR_FACE_EXPRESSION_2FB_MAX_ENUM_FB = 0x7FFFFFFF
} XrFaceExpression2FB;
12.85.8. Conventions of confidence areas
This extension defines two separate areas of confidence.
// Provided by XR_FB_face_tracking2
typedef enum XrFaceConfidence2FB {
XR_FACE_CONFIDENCE2_LOWER_FACE_FB = 0,
XR_FACE_CONFIDENCE2_UPPER_FACE_FB = 1,
XR_FACE_CONFIDENCE2_COUNT_FB = 2,
XR_FACE_CONFIDENCE_2FB_MAX_ENUM_FB = 0x7FFFFFFF
} XrFaceConfidence2FB;
The "upper face" area represents everything above the upper lip, including the eyes and eyebrows. The "lower face" area represents everything under the eyes, including the mouth and chin. Cheek and nose areas contribute to both "upper face" and "lower face" areas.
New Object Types
New Flag Types
New Enum Constants
XrObjectType enumeration is extended with:
- XR_OBJECT_TYPE_FACE_TRACKER2_FB
XrStructureType enumeration is extended with:
- XR_TYPE_SYSTEM_FACE_TRACKING_PROPERTIES2_FB
- XR_TYPE_FACE_TRACKER_CREATE_INFO2_FB
- XR_TYPE_FACE_EXPRESSION_INFO2_FB
- XR_TYPE_FACE_EXPRESSION_WEIGHTS2_FB
New Enums
New Structures
New Functions
Issues
- Should we add the tongue shapes to XR_FB_face_tracking as a new enum value in XrFaceExpressionSetFB?
  - Resolved. We expect that all applications should use XR_FB_face_tracking2 in the future and that XR_FB_face_tracking will ultimately be replaced by this extension.
Version History
- Revision 1, 2023-10-06 (Jaebong Lee)
  - Initial extension description
12.86. XR_FB_foveation
- Name String: XR_FB_foveation
- Extension Type: Instance extension
- Registered Extension Number: 115
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Contributors:
  Kevin Xiao, Facebook
  Ross Ning, Facebook
  Remi Palandri, Facebook
  Cass Everitt, Facebook
  Gloria Kennickell, Facebook
Overview
Foveation in the context of XR is a rendering technique that allows the area of an image near the focal point or fovea of the eye to be displayed at higher resolution than areas in the periphery. This trades some visual fidelity in the periphery, where it is less noticeable for the user, for improved rendering performance, most notably regarding the fragment shader, as fewer pixels or subpixels in the periphery need to be shaded and processed. On platforms which support foveation patterns and features tailored towards the optical properties, performance profiles, and hardware support of specific HMDs, application developers may request and use available foveation profiles from the runtime. Foveation profiles refer to a set of properties describing how, when, and where foveation will be applied.
This extension allows:
- An application to create swapchains that can support foveation for its graphics API.
- An application to request foveation profiles supported by the runtime and apply them to foveation-supported swapchains.
In order to enable the functionality of this extension, you must pass the
name of the extension into xrCreateInstance via the
XrInstanceCreateInfo enabledExtensionNames parameter as
indicated in the Extensions section.
New Object Types
XR_DEFINE_HANDLE(XrFoveationProfileFB)
XrFoveationProfileFB represents a set of properties and resources that define a foveation pattern for the runtime, which can be applied to individual swapchains.
New Flag Types
typedef XrFlags64 XrSwapchainCreateFoveationFlagsFB;
// Flag bits for XrSwapchainCreateFoveationFlagsFB
static const XrSwapchainCreateFoveationFlagsFB XR_SWAPCHAIN_CREATE_FOVEATION_SCALED_BIN_BIT_FB = 0x00000001;
static const XrSwapchainCreateFoveationFlagsFB XR_SWAPCHAIN_CREATE_FOVEATION_FRAGMENT_DENSITY_MAP_BIT_FB = 0x00000002;
typedef XrFlags64 XrSwapchainStateFoveationFlagsFB;
// Flag bits for XrSwapchainStateFoveationFlagsFB
There are currently no foveation swapchain state flags. This is reserved for future use.
New Enum Constants
XrObjectType enumeration is extended with:
- XR_OBJECT_TYPE_FOVEATION_PROFILE_FB
XrStructureType enumeration is extended with:
- XR_TYPE_FOVEATION_PROFILE_CREATE_INFO_FB
- XR_TYPE_SWAPCHAIN_CREATE_INFO_FOVEATION_FB
- XR_TYPE_SWAPCHAIN_STATE_FOVEATION_FB
New Enums
New Structures
XrFoveationProfileCreateInfoFB must be provided when calling
xrCreateFoveationProfileFB.
The runtime must interpret XrFoveationProfileCreateInfoFB without any
additional structs in its next chain as a request to create a
foveation profile that will apply no foveation to any area of the swapchain.
The XrFoveationProfileCreateInfoFB structure is defined as:
// Provided by XR_FB_foveation
typedef struct XrFoveationProfileCreateInfoFB {
XrStructureType type;
void* next;
} XrFoveationProfileCreateInfoFB;
XrSwapchainCreateInfoFoveationFB can be provided in the next
chain of XrSwapchainCreateInfo when calling xrCreateSwapchain to
indicate to the runtime that the swapchain must be created with foveation
support in the corresponding graphics API.
XrSwapchainCreateInfoFoveationFB contains additional
foveation-specific flags for swapchain creation.
The XrSwapchainCreateInfoFoveationFB structure is defined as:
// Provided by XR_FB_foveation
typedef struct XrSwapchainCreateInfoFoveationFB {
XrStructureType type;
void* next;
XrSwapchainCreateFoveationFlagsFB flags;
} XrSwapchainCreateInfoFoveationFB;
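The flags member is a bitmask built from the XrSwapchainCreateFoveationFlagsFB bits defined above. The sketch below shows how such bits combine and how an application might test for one; the constant values mirror the definitions in this extension, but the type and names are local stand-ins, and requestsFragmentDensityMap is a hypothetical helper.

```cpp
#include <cassert>
#include <cstdint>

// Stand-ins mirroring the flag definitions above; a real application uses
// XrSwapchainCreateFoveationFlagsFB and the XR_SWAPCHAIN_CREATE_* constants
// from <openxr/openxr.h>.
typedef uint64_t SwapchainCreateFoveationFlags;  // stands in for XrFlags64
constexpr SwapchainCreateFoveationFlags kScaledBinBit = 0x00000001;
constexpr SwapchainCreateFoveationFlags kFragmentDensityMapBit = 0x00000002;

// Hypothetical helper: check whether the creation flags request
// fragment-density-map foveation (the mode XR_FB_foveation_vulkan builds on).
bool requestsFragmentDensityMap(SwapchainCreateFoveationFlags flags) {
    return (flags & kFragmentDensityMapBit) != 0;
}
```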
XrSwapchainStateFoveationFB can be provided in place of XrSwapchainStateBaseHeaderFB when calling xrUpdateSwapchainFB to update the foveation properties of the swapchain. XrSwapchainStateFoveationFB contains the desired foveation profile and additional foveation-specific flags for updating the swapchain.
The XrSwapchainStateFoveationFB structure is defined as:
// Provided by XR_FB_foveation
typedef struct XrSwapchainStateFoveationFB {
XrStructureType type;
void* next;
XrSwapchainStateFoveationFlagsFB flags;
XrFoveationProfileFB profile;
} XrSwapchainStateFoveationFB;
New Functions
The xrCreateFoveationProfileFB function is defined as:
// Provided by XR_FB_foveation
XrResult xrCreateFoveationProfileFB(
XrSession session,
const XrFoveationProfileCreateInfoFB* createInfo,
XrFoveationProfileFB* profile);
Creates an XrFoveationProfileFB handle. The returned foveation profile handle may be subsequently used in API calls.
The xrDestroyFoveationProfileFB function is defined as:
// Provided by XR_FB_foveation
XrResult xrDestroyFoveationProfileFB(
XrFoveationProfileFB profile);
XrFoveationProfileFB handles are destroyed using xrDestroyFoveationProfileFB. An XrFoveationProfileFB may be safely destroyed after being applied to a swapchain state using xrUpdateSwapchainFB without affecting the foveation parameters of the swapchain. The application is responsible for ensuring that it has no calls using profile in progress when the foveation profile is destroyed.
Issues
Version History
- Revision 1, 2021-05-13 (Kevin Xiao)
  - Initial extension description
12.87. XR_FB_foveation_configuration
- Name String: XR_FB_foveation_configuration
- Extension Type: Instance extension
- Registered Extension Number: 116
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Contributors:
  Kevin Xiao, Facebook
  Ross Ning, Facebook
  Remi Palandri, Facebook
  Cass Everitt, Facebook
  Gloria Kennickell, Facebook
Overview
On Facebook HMDs, developers may create foveation profiles generated by the runtime for the optical properties and performance profile of the specific HMD.
This extension allows:
- An application to request foveation profiles generated by the runtime for the current HMD.
In order to enable the functionality of this extension, you must pass the
name of the extension into xrCreateInstance via the
XrInstanceCreateInfo enabledExtensionNames parameter as
indicated in the Extensions section.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_FOVEATION_LEVEL_PROFILE_CREATE_INFO_FB
New Enums
The possible foveation levels are specified by the XrFoveationLevelFB enumeration:
// Provided by XR_FB_foveation_configuration
typedef enum XrFoveationLevelFB {
XR_FOVEATION_LEVEL_NONE_FB = 0,
XR_FOVEATION_LEVEL_LOW_FB = 1,
XR_FOVEATION_LEVEL_MEDIUM_FB = 2,
XR_FOVEATION_LEVEL_HIGH_FB = 3,
XR_FOVEATION_LEVEL_MAX_ENUM_FB = 0x7FFFFFFF
} XrFoveationLevelFB;
The possible dynamic foveation settings are specified by the XrFoveationDynamicFB enumeration:
// Provided by XR_FB_foveation_configuration
typedef enum XrFoveationDynamicFB {
XR_FOVEATION_DYNAMIC_DISABLED_FB = 0,
XR_FOVEATION_DYNAMIC_LEVEL_ENABLED_FB = 1,
XR_FOVEATION_DYNAMIC_MAX_ENUM_FB = 0x7FFFFFFF
} XrFoveationDynamicFB;
New Structures
XrFoveationLevelProfileCreateInfoFB can be provided in the next
chain of XrFoveationProfileCreateInfoFB when calling
xrCreateFoveationProfileFB.
The runtime must interpret XrFoveationProfileCreateInfoFB with
XrFoveationLevelProfileCreateInfoFB in its next chain as a
request to create a foveation profile that will apply a fixed foveation
pattern according to the parameters defined in the
XrFoveationLevelProfileCreateInfoFB.
The XrFoveationLevelProfileCreateInfoFB structure is defined as:
// Provided by XR_FB_foveation_configuration
typedef struct XrFoveationLevelProfileCreateInfoFB {
XrStructureType type;
void* next;
XrFoveationLevelFB level;
float verticalOffset;
XrFoveationDynamicFB dynamic;
} XrFoveationLevelProfileCreateInfoFB;
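Chaining the level profile into a profile creation request follows the usual OpenXR next-chain pattern. The sketch below models that pattern with local stand-in structs (the numeric structure-type values are placeholders, not the real XR_TYPE_* values) and shows how a runtime could discover the extension struct by walking the chain.

```cpp
#include <cassert>

// Local stand-ins mirroring the structures in this extension. The numeric
// structure-type values are placeholders, not the real XR_TYPE_* values.
enum StructureType {
    kTypeFoveationProfileCreateInfo = 1,
    kTypeFoveationLevelProfileCreateInfo = 2,
    kTypeMaxEnum = 0x7FFFFFFF,
};

// Every chainable OpenXR struct begins with (type, next).
struct BaseInStructure {
    StructureType type;
    const void* next;
};

struct FoveationLevelProfileCreateInfo {
    StructureType type;
    void* next;
    int level;            // stands in for XrFoveationLevelFB
    float verticalOffset;
    int dynamic;          // stands in for XrFoveationDynamicFB
};

struct FoveationProfileCreateInfo {
    StructureType type;
    void* next;
};

// Walk a next chain looking for a structure of the requested type, the way
// a runtime would discover the level-profile struct in the create-info chain.
const void* findInChain(const void* chain, StructureType wanted) {
    for (const BaseInStructure* p = static_cast<const BaseInStructure*>(chain);
         p != nullptr;
         p = static_cast<const BaseInStructure*>(p->next)) {
        if (p->type == wanted) {
            return p;
        }
    }
    return nullptr;
}
```

Reading unrelated structs through a common (type, next) header is the same convention real OpenXR runtimes rely on when parsing next chains.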
New Functions
Issues
Version History
- Revision 1, 2021-05-13 (Kevin Xiao)
  - Initial extension description
12.88. XR_FB_foveation_vulkan
- Name String: XR_FB_foveation_vulkan
- Extension Type: Instance extension
- Registered Extension Number: 161
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Contributors:
  Kevin Xiao, Facebook
  Ross Ning, Facebook
  Remi Palandri, Facebook
  Cass Everitt, Facebook
  Gloria Kennickell, Facebook
Overview
The Vulkan graphics API requires an image to be attached to the swapchain in order to apply a foveation pattern.
This extension allows:
-
An application to obtain foveation textures or constructs needed for foveated rendering in Vulkan.
In order to enable the functionality of this extension, you must pass the
name of the extension into xrCreateInstance via the
XrInstanceCreateInfo enabledExtensionNames parameter as
indicated in the Extensions section.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_SWAPCHAIN_IMAGE_FOVEATION_VULKAN_FB
New Enums
New Structures
XrSwapchainImageFoveationVulkanFB can be provided in the next
chain of XrSwapchainImageVulkanKHR when calling
xrEnumerateSwapchainImages on a swapchain created with
xrCreateSwapchain, if XrSwapchainCreateInfoFoveationFB was in
the next chain of XrSwapchainCreateInfo and
XrSwapchainCreateInfoFoveationFB had the
XR_SWAPCHAIN_CREATE_FOVEATION_FRAGMENT_DENSITY_MAP_BIT_FB flag set.
The image, width, and height will be populated by
xrEnumerateSwapchainImages to be compatible with the corresponding
XrSwapchainImageVulkanKHR.
The XrSwapchainImageFoveationVulkanFB structure is defined as:
// Provided by XR_FB_foveation_vulkan
typedef struct XrSwapchainImageFoveationVulkanFB {
XrStructureType type;
void* next;
VkImage image;
uint32_t width;
uint32_t height;
} XrSwapchainImageFoveationVulkanFB;
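The following informative sketch shows the chaining described above: one XrSwapchainImageFoveationVulkanFB per swapchain image, attached to the next chain of each XrSwapchainImageVulkanKHR before calling xrEnumerateSwapchainImages. It assumes the swapchain was created with the fragment density map bit set, and omits error handling.

```c
#define XR_USE_GRAPHICS_API_VULKAN
#include <vulkan/vulkan.h>
#include <openxr/openxr.h>
#include <openxr/openxr_platform.h>
#include <stdlib.h>

// Sketch: enumerate swapchain images with the foveation image chained in.
// Assumes `swapchain` was created with XrSwapchainCreateInfoFoveationFB and
// XR_SWAPCHAIN_CREATE_FOVEATION_FRAGMENT_DENSITY_MAP_BIT_FB set.
void enumerateFoveatedImages(XrSwapchain swapchain)
{
    uint32_t count = 0;
    xrEnumerateSwapchainImages(swapchain, 0, &count, NULL);

    XrSwapchainImageVulkanKHR* images =
        calloc(count, sizeof(XrSwapchainImageVulkanKHR));
    XrSwapchainImageFoveationVulkanFB* foveation =
        calloc(count, sizeof(XrSwapchainImageFoveationVulkanFB));
    for (uint32_t i = 0; i < count; ++i) {
        images[i].type = XR_TYPE_SWAPCHAIN_IMAGE_VULKAN_KHR;
        foveation[i].type = XR_TYPE_SWAPCHAIN_IMAGE_FOVEATION_VULKAN_FB;
        images[i].next = &foveation[i];  // request the foveation VkImage
    }
    xrEnumerateSwapchainImages(swapchain, count, &count,
                               (XrSwapchainImageBaseHeader*)images);
    // foveation[i].image / width / height now describe the fragment
    // density map compatible with images[i].image.
    free(images);
    free(foveation);
}
```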
New Functions
Issues
Version History
-
Revision 1, 2021-05-26 (Kevin Xiao)
-
Initial extension description
-
12.89. XR_FB_hand_tracking_aim
- Name String
-
XR_FB_hand_tracking_aim - Extension Type
-
Instance extension
- Registered Extension Number
-
112
- Revision
-
2
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Federico Schliemann, Facebook
James Hillery, Facebook
Gloria Kennickell, Facebook
Overview
The XR_EXT_hand_tracking extension provides a list of hand joint
poses which represent the current configuration of the tracked hands.
This extension adds a layer of gesture recognition that is used by the
system.
This extension allows:
-
An application to get a set of basic gesture states for the hand when using the XR_EXT_hand_tracking extension.
New Object Types
New Flag Types
typedef XrFlags64 XrHandTrackingAimFlagsFB;
// Flag bits for XrHandTrackingAimFlagsFB
static const XrHandTrackingAimFlagsFB XR_HAND_TRACKING_AIM_COMPUTED_BIT_FB = 0x00000001;
static const XrHandTrackingAimFlagsFB XR_HAND_TRACKING_AIM_VALID_BIT_FB = 0x00000002;
static const XrHandTrackingAimFlagsFB XR_HAND_TRACKING_AIM_INDEX_PINCHING_BIT_FB = 0x00000004;
static const XrHandTrackingAimFlagsFB XR_HAND_TRACKING_AIM_MIDDLE_PINCHING_BIT_FB = 0x00000008;
static const XrHandTrackingAimFlagsFB XR_HAND_TRACKING_AIM_RING_PINCHING_BIT_FB = 0x00000010;
static const XrHandTrackingAimFlagsFB XR_HAND_TRACKING_AIM_LITTLE_PINCHING_BIT_FB = 0x00000020;
static const XrHandTrackingAimFlagsFB XR_HAND_TRACKING_AIM_SYSTEM_GESTURE_BIT_FB = 0x00000040;
static const XrHandTrackingAimFlagsFB XR_HAND_TRACKING_AIM_DOMINANT_HAND_BIT_FB = 0x00000080;
static const XrHandTrackingAimFlagsFB XR_HAND_TRACKING_AIM_MENU_PRESSED_BIT_FB = 0x00000100;
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_HAND_TRACKING_AIM_STATE_FB
New Enums
New Structures
XrHandTrackingAimStateFB can be provided in the next chain of
XrHandJointLocationsEXT when calling xrLocateHandJointsEXT to
request aiming gesture information associated with this hand.
The XrHandTrackingAimStateFB structure is defined as:
// Provided by XR_FB_hand_tracking_aim
typedef struct XrHandTrackingAimStateFB {
XrStructureType type;
void* next;
XrHandTrackingAimFlagsFB status;
XrPosef aimPose;
float pinchStrengthIndex;
float pinchStrengthMiddle;
float pinchStrengthRing;
float pinchStrengthLittle;
} XrHandTrackingAimStateFB;
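The chaining described above can be sketched as follows. This is informative only; it assumes valid handles from XR_EXT_hand_tracking, a base space and time of the application's choosing, and extension functions loaded via xrGetInstanceProcAddr.

```c
#include <openxr/openxr.h>

// Sketch: chain XrHandTrackingAimStateFB into xrLocateHandJointsEXT and
// test the index-pinch bit.
XrBool32 isIndexPinching(XrHandTrackerEXT handTracker, XrSpace baseSpace,
                         XrTime time)
{
    XrHandJointLocationEXT joints[XR_HAND_JOINT_COUNT_EXT];
    XrHandTrackingAimStateFB aimState = {
        .type = XR_TYPE_HAND_TRACKING_AIM_STATE_FB};
    XrHandJointLocationsEXT locations = {
        .type = XR_TYPE_HAND_JOINT_LOCATIONS_EXT,
        .next = &aimState,  // request aiming gesture information
        .jointCount = XR_HAND_JOINT_COUNT_EXT,
        .jointLocations = joints};
    XrHandJointsLocateInfoEXT locateInfo = {
        .type = XR_TYPE_HAND_JOINTS_LOCATE_INFO_EXT,
        .baseSpace = baseSpace,
        .time = time};

    xrLocateHandJointsEXT(handTracker, &locateInfo, &locations);
    return (aimState.status & XR_HAND_TRACKING_AIM_INDEX_PINCHING_BIT_FB) != 0;
}
```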
New Functions
Issues
Version History
-
Revision 1, 2021-07-07 (Federico Schliemann)
-
Initial extension description
-
-
Revision 2, 2022-04-20 (John Kearney)
-
Correct next chain parent for XrHandTrackingAimStateFB to XrHandJointLocationsEXT
-
12.90. XR_FB_hand_tracking_capsules
- Name String
-
XR_FB_hand_tracking_capsules - Extension Type
-
Instance extension
- Registered Extension Number
-
113
- Revision
-
3
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Federico Schliemann, Facebook
James Hillery, Facebook
Gloria Kennickell, Facebook
Overview
The XR_EXT_hand_tracking extension provides a list of hand joint
poses which include a collision sphere for each joint.
However, some physics systems prefer to use capsules as a collision stand-in
for the hands.
This extension allows:
-
An application to get a list of capsules that represent the volume of the hand when using the XR_EXT_hand_tracking extension.
New Object Types
New Flag Types
New Enum Constants
-
XR_HAND_TRACKING_CAPSULE_POINT_COUNT_FB
-
XR_FB_HAND_TRACKING_CAPSULE_POINT_COUNT was the original name, and is still provided as an alias for backward compatibility.
-
XR_HAND_TRACKING_CAPSULE_COUNT_FB
-
XR_FB_HAND_TRACKING_CAPSULE_COUNT was the original name, and is still provided as an alias for backward compatibility.
XrStructureType enumeration is extended with:
-
XR_TYPE_HAND_TRACKING_CAPSULES_STATE_FB
New Enums
New Structures
The XrHandCapsuleFB structure is defined as:
// Provided by XR_FB_hand_tracking_capsules
typedef struct XrHandCapsuleFB {
XrVector3f points[XR_HAND_TRACKING_CAPSULE_POINT_COUNT_FB];
float radius;
XrHandJointEXT joint;
} XrHandCapsuleFB;
It describes a collision capsule associated with a hand joint.
XrHandTrackingCapsulesStateFB can be provided in the next chain
of XrHandJointLocationsEXT when calling xrLocateHandJointsEXT to
request collision capsule information associated with this hand.
The XrHandTrackingCapsulesStateFB structure is defined as:
// Provided by XR_FB_hand_tracking_capsules
typedef struct XrHandTrackingCapsulesStateFB {
XrStructureType type;
void* next;
XrHandCapsuleFB capsules[XR_HAND_TRACKING_CAPSULE_COUNT_FB];
} XrHandTrackingCapsulesStateFB;
New Functions
Issues
Version History
-
Revision 1, 2021-07-07 (Federico Schliemann)
-
Initial extension description
-
-
Revision 2, 2021-11-18 (Rylie Pavlik, Collabora, Ltd.)
-
Fix typos/naming convention errors: rename XR_FB_HAND_TRACKING_CAPSULE_POINT_COUNT to XR_HAND_TRACKING_CAPSULE_POINT_COUNT_FB and XR_FB_HAND_TRACKING_CAPSULE_COUNT to XR_HAND_TRACKING_CAPSULE_COUNT_FB, providing the old names as compatibility aliases.
-
-
Revision 3, 2022-04-20 (John Kearney)
-
Correct next chain parent for XrHandTrackingCapsulesStateFB to XrHandJointLocationsEXT
-
12.91. XR_FB_hand_tracking_mesh
- Name String
-
XR_FB_hand_tracking_mesh - Extension Type
-
Instance extension
- Registered Extension Number
-
111
- Revision
-
3
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Federico Schliemann, Facebook
James Hillery, Facebook
Gloria Kennickell, Facebook
Overview
The XR_EXT_hand_tracking extension provides a list of hand joint
poses but no mechanism to render a skinned hand mesh.
This extension allows:
-
An application to get a skinned hand mesh and a bind pose skeleton that can be used to render a hand object driven by the joints from the XR_EXT_hand_tracking extension.
-
An application to control the scale of the hand joints returned by XR_EXT_hand_tracking.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_HAND_TRACKING_MESH_FB
-
XR_TYPE_HAND_TRACKING_SCALE_FB
New Enums
New Structures
The XrVector4sFB structure is defined as:
// Provided by XR_FB_hand_tracking_mesh
typedef struct XrVector4sFB {
int16_t x;
int16_t y;
int16_t z;
int16_t w;
} XrVector4sFB;
This is a four-component vector of short integers, used for per-vertex joint indexing for mesh skinning.
The XrHandTrackingMeshFB structure contains three sets of parallel, application-allocated arrays: one with per-joint data, one with vertex data, and one with index data.
The XrHandTrackingMeshFB structure is defined as:
// Provided by XR_FB_hand_tracking_mesh
typedef struct XrHandTrackingMeshFB {
XrStructureType type;
void* next;
uint32_t jointCapacityInput;
uint32_t jointCountOutput;
XrPosef* jointBindPoses;
float* jointRadii;
XrHandJointEXT* jointParents;
uint32_t vertexCapacityInput;
uint32_t vertexCountOutput;
XrVector3f* vertexPositions;
XrVector3f* vertexNormals;
XrVector2f* vertexUVs;
XrVector4sFB* vertexBlendIndices;
XrVector4f* vertexBlendWeights;
uint32_t indexCapacityInput;
uint32_t indexCountOutput;
int16_t* indices;
} XrHandTrackingMeshFB;
All arrays are application-allocated, and all may be NULL if any of
jointCapacityInput, vertexCapacityInput, or
indexCapacityInput is 0.
The data in a fully-populated XrHandTrackingMeshFB is immutable during the lifetime of the corresponding XrInstance, and is intended to be retrieved once then used in combination with data changing per-frame retrieved from xrLocateHandJointsEXT.
XrHandTrackingScaleFB can be provided in the next chain of
XrHandJointLocationsEXT when calling xrLocateHandJointsEXT to
indicate to the runtime that the requested joints need to be scaled to a
different size and to query the existing scale value.
This is useful for factoring the overall scale out of the skinning
transforms.
The XrHandTrackingScaleFB structure is defined as:
// Provided by XR_FB_hand_tracking_mesh
typedef struct XrHandTrackingScaleFB {
XrStructureType type;
void* next;
float sensorOutput;
float currentOutput;
XrBool32 overrideHandScale;
float overrideValueInput;
} XrHandTrackingScaleFB;
New Functions
The xrGetHandMeshFB function is defined as:
// Provided by XR_FB_hand_tracking_mesh
XrResult xrGetHandMeshFB(
XrHandTrackerEXT handTracker,
XrHandTrackingMeshFB* mesh);
The xrGetHandMeshFB function populates an XrHandTrackingMeshFB structure with enough information to render a skinned mesh driven by the hand joints. As discussed in the specification for that structure, the data enumerated by this call is constant during the lifetime of an XrInstance.
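This function follows the two-call idiom used throughout the specification: a first call with all capacities zero retrieves the counts, then a second call fills application-allocated arrays. An informative sketch (error handling and cleanup omitted; assumes extension functions were loaded via xrGetInstanceProcAddr):

```c
#include <openxr/openxr.h>
#include <stdlib.h>

// Sketch of the two-call idiom for xrGetHandMeshFB.
void loadHandMesh(XrHandTrackerEXT handTracker)
{
    XrHandTrackingMeshFB mesh = {.type = XR_TYPE_HAND_TRACKING_MESH_FB};
    xrGetHandMeshFB(handTracker, &mesh);  // capacities are 0: counts only

    mesh.jointCapacityInput = mesh.jointCountOutput;
    mesh.jointBindPoses = malloc(mesh.jointCountOutput * sizeof(XrPosef));
    mesh.jointRadii = malloc(mesh.jointCountOutput * sizeof(float));
    mesh.jointParents = malloc(mesh.jointCountOutput * sizeof(XrHandJointEXT));

    mesh.vertexCapacityInput = mesh.vertexCountOutput;
    mesh.vertexPositions = malloc(mesh.vertexCountOutput * sizeof(XrVector3f));
    mesh.vertexNormals = malloc(mesh.vertexCountOutput * sizeof(XrVector3f));
    mesh.vertexUVs = malloc(mesh.vertexCountOutput * sizeof(XrVector2f));
    mesh.vertexBlendIndices =
        malloc(mesh.vertexCountOutput * sizeof(XrVector4sFB));
    mesh.vertexBlendWeights =
        malloc(mesh.vertexCountOutput * sizeof(XrVector4f));

    mesh.indexCapacityInput = mesh.indexCountOutput;
    mesh.indices = malloc(mesh.indexCountOutput * sizeof(int16_t));

    xrGetHandMeshFB(handTracker, &mesh);  // second call fills the arrays
    // ... upload the mesh to the renderer, then free the arrays ...
}
```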
Issues
Version History
-
Revision 1, 2021-07-07 (Federico Schliemann)
-
Initial extension description
-
-
Revision 2, 2022-04-20 (John Kearney)
-
Correct next chain parent for XrHandTrackingScaleFB to XrHandJointLocationsEXT
-
-
Revision 3, 2022-07-07 (Rylie Pavlik, Collabora, Ltd.)
-
Correct markup and thus generated valid usage for two-call idiom.
-
12.92. XR_FB_haptic_amplitude_envelope
- Name String
-
XR_FB_haptic_amplitude_envelope - Extension Type
-
Instance extension
- Registered Extension Number
-
174
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2022-06-27
- IP Status
-
No known IP claims.
- Contributors
-
Aanchal Dalmia, Meta
Federico Schliemann, Meta
12.92.1. Overview
This extension enables applications to trigger a haptic effect using an amplitude envelope buffer.
Trigger haptics
An application can trigger an amplitude envelope haptic effect by creating an XrHapticAmplitudeEnvelopeVibrationFB structure and calling xrApplyHapticFeedback.
The XrHapticAmplitudeEnvelopeVibrationFB structure is defined as:
// Provided by XR_FB_haptic_amplitude_envelope
typedef struct XrHapticAmplitudeEnvelopeVibrationFB {
XrStructureType type;
const void* next;
XrDuration duration;
uint32_t amplitudeCount;
const float* amplitudes;
} XrHapticAmplitudeEnvelopeVibrationFB;
This structure describes an amplitude envelope haptic effect.
The runtime should resample the provided samples in the amplitudes,
and maintain an internal buffer which should be of
XR_MAX_HAPTIC_AMPLITUDE_ENVELOPE_SAMPLES_FB length.
The resampling should happen based on the duration,
amplitudeCount, and the device’s sample rate.
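The resampling algorithm itself is left to the runtime. As an informative illustration only, one plausible scheme is plain linear interpolation of the envelope onto the device-rate sample grid (the function below is hypothetical, not normative behavior):

```c
#include <stddef.h>

// Illustrative only: one plausible way a runtime could resample an
// amplitude envelope of `srcCount` samples, spread evenly over the effect
// duration, into `dstCount` device-rate samples, using linear
// interpolation. The actual algorithm is runtime-defined.
void resampleEnvelope(const float* src, size_t srcCount,
                      float* dst, size_t dstCount)
{
    if (srcCount == 0 || dstCount == 0) return;
    for (size_t i = 0; i < dstCount; ++i) {
        // Map destination index i onto source positions [0, srcCount-1].
        float pos = (dstCount == 1) ? 0.0f
            : (float)i * (float)(srcCount - 1) / (float)(dstCount - 1);
        size_t lo = (size_t)pos;
        size_t hi = (lo + 1 < srcCount) ? lo + 1 : lo;
        float frac = pos - (float)lo;
        dst[i] = src[lo] * (1.0f - frac) + src[hi] * frac;
    }
}
```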
New Object Types
New Flag Types
New Enum Constants
-
XR_TYPE_HAPTIC_AMPLITUDE_ENVELOPE_VIBRATION_FB
New Defines
// Provided by XR_FB_haptic_amplitude_envelope
#define XR_MAX_HAPTIC_AMPLITUDE_ENVELOPE_SAMPLES_FB 4000u
XR_MAX_HAPTIC_AMPLITUDE_ENVELOPE_SAMPLES_FB defines the maximum number of samples the runtime should store in memory.
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2022-06-27 (Aanchal Dalmia)
-
Initial extension description
-
12.93. XR_FB_haptic_pcm
- Name String
-
XR_FB_haptic_pcm - Extension Type
-
Instance extension
- Registered Extension Number
-
210
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2022-06-27
- IP Status
-
No known IP claims.
- Contributors
-
Aanchal Dalmia, Meta
Adam Bengis, Meta
12.93.1. Overview
This extension enables applications to trigger haptic effects using Pulse Code Modulation (PCM) buffers.
Trigger haptics
An application can trigger a PCM haptic effect by creating an XrHapticPcmVibrationFB structure and calling xrApplyHapticFeedback.
The XrHapticPcmVibrationFB structure is defined as:
// Provided by XR_FB_haptic_pcm
typedef struct XrHapticPcmVibrationFB {
XrStructureType type;
const void* next;
uint32_t bufferSize;
const float* buffer;
float sampleRate;
XrBool32 append;
uint32_t* samplesConsumed;
} XrHapticPcmVibrationFB;
This structure describes a PCM haptic effect.
The runtime may resample the provided samples in the buffer, and
maintain an internal buffer which should be of
XR_MAX_HAPTIC_PCM_BUFFER_SIZE_FB length.
The resampling should happen based on the sampleRate and the device’s
sample rate.
If append is XR_TRUE and a preceding
XrHapticPcmVibrationFB haptic effect on this action has not yet
completed, then the runtime must finish playing the preceding samples and
then play the new haptic effect.
If a preceding haptic event on this action has not yet completed, and either
the preceding effect is not an XrHapticPcmVibrationFB haptic effect or
append is XR_FALSE, the runtime must cancel the preceding
incomplete effects on that action and start playing the new haptic effect,
as usual for the core specification.
When append is XR_TRUE and a preceding XrHapticPcmVibrationFB
haptic effect on this action has not yet completed, the application can
provide a different sampleRate in the new haptic effect.
The runtime must populate samplesConsumed with the count of samples
from buffer that were consumed.
samplesConsumed is populated before xrApplyHapticFeedback returns.
Get the device sample rate
An application can use the xrGetDeviceSampleRateFB function to get
the sample rate of the currently bound device on which the haptic action is
triggered.
If the application does not want any resampling to occur, then it can use
this function to know the currently bound device sample rate, and pass that
value in sampleRate of XrHapticPcmVibrationFB.
// Provided by XR_FB_haptic_pcm
XrResult xrGetDeviceSampleRateFB(
XrSession session,
const XrHapticActionInfo* hapticActionInfo,
XrDevicePcmSampleRateGetInfoFB* deviceSampleRate);
The runtime must use the hapticActionInfo to get the sample rate of
the currently bound device on which haptics is triggered and populate the
deviceSampleRate structure.
The device is determined by the XrHapticActionInfo::action and
XrHapticActionInfo::subactionPath.
If the hapticActionInfo is bound to more than one device, the runtime
should assume that all of the bound devices have the same sample rate, and
the runtime should return the sampleRate of any of those bound devices.
If the device is invalid, the runtime must populate the
deviceSampleRate of XrDevicePcmSampleRateStateFB as 0.
A device can be invalid if the runtime does not find any device (which can
play haptics) connected to the headset, or if the device does not support
PCM haptic effect.
The XrDevicePcmSampleRateStateFB structure is defined as:
// Provided by XR_FB_haptic_pcm
typedef struct XrDevicePcmSampleRateStateFB {
XrStructureType type;
void* next;
float sampleRate;
} XrDevicePcmSampleRateStateFB;
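Putting the pieces above together, an informative sketch of querying the device rate and submitting a PCM buffer at that rate, so that no resampling is needed (assumes valid handles and extension functions loaded via xrGetInstanceProcAddr; error handling omitted):

```c
#include <openxr/openxr.h>

// Sketch: query the bound device's sample rate, then submit a PCM buffer
// at that rate.
void playPcm(XrSession session, XrAction action, XrPath subactionPath,
             const float* samples, uint32_t sampleCount)
{
    XrHapticActionInfo actionInfo = {
        .type = XR_TYPE_HAPTIC_ACTION_INFO,
        .action = action,
        .subactionPath = subactionPath};

    XrDevicePcmSampleRateStateFB rateState = {
        .type = XR_TYPE_DEVICE_PCM_SAMPLE_RATE_STATE_FB};
    xrGetDeviceSampleRateFB(session, &actionInfo, &rateState);

    uint32_t consumed = 0;
    XrHapticPcmVibrationFB vibration = {
        .type = XR_TYPE_HAPTIC_PCM_VIBRATION_FB,
        .bufferSize = sampleCount,
        .buffer = samples,
        .sampleRate = rateState.sampleRate,
        .append = XR_FALSE,
        .samplesConsumed = &consumed};
    xrApplyHapticFeedback(session, &actionInfo,
                          (const XrHapticBaseHeader*)&vibration);
    // `consumed` now holds how many samples the runtime accepted; remaining
    // samples can be resubmitted later with append = XR_TRUE.
}
```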
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_HAPTIC_PCM_VIBRATION_FB
-
XR_TYPE_DEVICE_PCM_SAMPLE_RATE_STATE_FB
New Defines
// Provided by XR_FB_haptic_pcm
#define XR_MAX_HAPTIC_PCM_BUFFER_SIZE_FB 4000
XR_MAX_HAPTIC_PCM_BUFFER_SIZE_FB defines the maximum number of samples the runtime can store.
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2022-06-27 (Aanchal Dalmia)
-
Initial extension description
-
12.94. XR_FB_keyboard_tracking
- Name String
-
XR_FB_keyboard_tracking - Extension Type
-
Instance extension
- Registered Extension Number
-
117
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Federico Schliemann, Facebook
Robert Memmott, Facebook
Cass Everitt, Facebook
Overview
This extension allows the application to query the system for a supported trackable keyboard type and obtain an XrSpace handle to track it. It also provides relevant metadata about the keyboard itself, including bounds and a human readable identifier.
New Object Types
New Flag Types
typedef XrFlags64 XrKeyboardTrackingFlagsFB;
// Flag bits for XrKeyboardTrackingFlagsFB
static const XrKeyboardTrackingFlagsFB XR_KEYBOARD_TRACKING_EXISTS_BIT_FB = 0x00000001;
static const XrKeyboardTrackingFlagsFB XR_KEYBOARD_TRACKING_LOCAL_BIT_FB = 0x00000002;
static const XrKeyboardTrackingFlagsFB XR_KEYBOARD_TRACKING_REMOTE_BIT_FB = 0x00000004;
static const XrKeyboardTrackingFlagsFB XR_KEYBOARD_TRACKING_CONNECTED_BIT_FB = 0x00000008;
typedef XrFlags64 XrKeyboardTrackingQueryFlagsFB;
// Flag bits for XrKeyboardTrackingQueryFlagsFB
static const XrKeyboardTrackingQueryFlagsFB XR_KEYBOARD_TRACKING_QUERY_LOCAL_BIT_FB = 0x00000002;
static const XrKeyboardTrackingQueryFlagsFB XR_KEYBOARD_TRACKING_QUERY_REMOTE_BIT_FB = 0x00000004;
New Enum Constants
-
XR_MAX_KEYBOARD_TRACKING_NAME_SIZE_FB
XrStructureType enumeration is extended with:
-
XR_TYPE_KEYBOARD_SPACE_CREATE_INFO_FB
-
XR_TYPE_KEYBOARD_TRACKING_QUERY_FB
-
XR_TYPE_SYSTEM_KEYBOARD_TRACKING_PROPERTIES_FB
New Enums
New Structures
The XrSystemKeyboardTrackingPropertiesFB structure is defined as:
// Provided by XR_FB_keyboard_tracking
typedef struct XrSystemKeyboardTrackingPropertiesFB {
XrStructureType type;
void* next;
XrBool32 supportsKeyboardTracking;
} XrSystemKeyboardTrackingPropertiesFB;
XrSystemKeyboardTrackingPropertiesFB is populated with information from the system about tracked keyboard support.
The XrKeyboardTrackingQueryFB structure is defined as:
// Provided by XR_FB_keyboard_tracking
typedef struct XrKeyboardTrackingQueryFB {
XrStructureType type;
void* next;
XrKeyboardTrackingQueryFlagsFB flags;
} XrKeyboardTrackingQueryFB;
XrKeyboardTrackingQueryFB specifies input data needed to determine which type of tracked keyboard to query for.
The XrKeyboardTrackingDescriptionFB structure is defined as:
// Provided by XR_FB_keyboard_tracking
typedef struct XrKeyboardTrackingDescriptionFB {
uint64_t trackedKeyboardId;
XrVector3f size;
XrKeyboardTrackingFlagsFB flags;
char name[XR_MAX_KEYBOARD_TRACKING_NAME_SIZE_FB];
} XrKeyboardTrackingDescriptionFB;
XrKeyboardTrackingDescriptionFB describes a trackable keyboard and its associated metadata.
The XrKeyboardSpaceCreateInfoFB structure is defined as:
// Provided by XR_FB_keyboard_tracking
typedef struct XrKeyboardSpaceCreateInfoFB {
XrStructureType type;
void* next;
uint64_t trackedKeyboardId;
} XrKeyboardSpaceCreateInfoFB;
XrKeyboardSpaceCreateInfoFB describes a request for the system needed to create a trackable XrSpace associated with the keyboard.
New Functions
The xrQuerySystemTrackedKeyboardFB function is defined as:
// Provided by XR_FB_keyboard_tracking
XrResult xrQuerySystemTrackedKeyboardFB(
XrSession session,
const XrKeyboardTrackingQueryFB* queryInfo,
XrKeyboardTrackingDescriptionFB* keyboard);
The xrQuerySystemTrackedKeyboardFB function populates an XrKeyboardTrackingDescriptionFB structure with enough information to describe a keyboard that the system can locate.
The xrCreateKeyboardSpaceFB function is defined as:
// Provided by XR_FB_keyboard_tracking
XrResult xrCreateKeyboardSpaceFB(
XrSession session,
const XrKeyboardSpaceCreateInfoFB* createInfo,
XrSpace* keyboardSpace);
The xrCreateKeyboardSpaceFB function returns an XrSpace that can be used to locate a physical keyboard in space. The origin of the created XrSpace is located in the center of the bounding box in the x and z axes, and at the top of the y axis (meaning the keyboard is located entirely in negative y).
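An informative sketch of the query-then-create flow described above (assumes a valid session and extension functions loaded via xrGetInstanceProcAddr):

```c
#include <openxr/openxr.h>

// Sketch: query for a locally tracked keyboard and, if one is known to the
// system, create an XrSpace to track it.
XrResult trackLocalKeyboard(XrSession session, XrSpace* keyboardSpace)
{
    XrKeyboardTrackingQueryFB queryInfo = {
        .type = XR_TYPE_KEYBOARD_TRACKING_QUERY_FB,
        .flags = XR_KEYBOARD_TRACKING_QUERY_LOCAL_BIT_FB};
    XrKeyboardTrackingDescriptionFB description;
    XrResult result =
        xrQuerySystemTrackedKeyboardFB(session, &queryInfo, &description);
    if (XR_FAILED(result) ||
        !(description.flags & XR_KEYBOARD_TRACKING_EXISTS_BIT_FB)) {
        return XR_ERROR_FEATURE_UNSUPPORTED;  // no trackable keyboard known
    }

    XrKeyboardSpaceCreateInfoFB createInfo = {
        .type = XR_TYPE_KEYBOARD_SPACE_CREATE_INFO_FB,
        .trackedKeyboardId = description.trackedKeyboardId};
    return xrCreateKeyboardSpaceFB(session, &createInfo, keyboardSpace);
}
```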
Issues
Version History
-
Revision 1, 2021-08-27 (Federico Schliemann)
-
Initial extension description
-
12.95. XR_FB_passthrough
- Name String
-
XR_FB_passthrough - Extension Type
-
Instance extension
- Registered Extension Number
-
119
- Revision
-
4
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Anton Vaneev, Facebook
Cass Everitt, Facebook
Federico Schliemann, Facebook
Johannes Schmid, Facebook
Overview
Passthrough is a way to show a user their physical environment in a light-blocking VR headset. Applications may use passthrough in a multitude of ways, including:
-
Creating AR-like experiences, where virtual objects augment the user’s environment.
-
Bringing real objects into a VR experience.
-
Mapping the playspace such that a VR experience is customized to it.
This extension allows:
-
An application to request passthrough to be composited with the application content.
-
An application to specify the compositing and blending rules between passthrough and VR content.
-
An application to apply styles, such as color mapping and edge rendering, to passthrough.
-
An application to provide a geometry to be used in place of the user’s physical environment. Camera images will be projected onto the surface provided by the application. In some cases where a part of the environment, such as a desk, can be approximated well, this provides better visual experience.
New Object Types
XR_DEFINE_HANDLE(XrPassthroughLayerFB)
XrPassthroughLayerFB represents a layer of passthrough content.
XR_DEFINE_HANDLE(XrGeometryInstanceFB)
XrGeometryInstanceFB represents a geometry instance used in a passthrough layer.
New Flag Types
typedef XrFlags64 XrPassthroughFlagsFB;
Specify additional creation behavior.
// Flag bits for XrPassthroughFlagsFB
static const XrPassthroughFlagsFB XR_PASSTHROUGH_IS_RUNNING_AT_CREATION_BIT_FB = 0x00000001;
static const XrPassthroughFlagsFB XR_PASSTHROUGH_LAYER_DEPTH_BIT_FB = 0x00000002;
typedef XrFlags64 XrPassthroughStateChangedFlagsFB;
Specify additional state change behavior.
// Flag bits for XrPassthroughStateChangedFlagsFB
static const XrPassthroughStateChangedFlagsFB XR_PASSTHROUGH_STATE_CHANGED_REINIT_REQUIRED_BIT_FB = 0x00000001;
static const XrPassthroughStateChangedFlagsFB XR_PASSTHROUGH_STATE_CHANGED_NON_RECOVERABLE_ERROR_BIT_FB = 0x00000002;
static const XrPassthroughStateChangedFlagsFB XR_PASSTHROUGH_STATE_CHANGED_RECOVERABLE_ERROR_BIT_FB = 0x00000004;
static const XrPassthroughStateChangedFlagsFB XR_PASSTHROUGH_STATE_CHANGED_RESTORED_ERROR_BIT_FB = 0x00000008;
typedef XrFlags64 XrPassthroughCapabilityFlagsFB;
Specify passthrough system capabilities.
// Flag bits for XrPassthroughCapabilityFlagsFB
static const XrPassthroughCapabilityFlagsFB XR_PASSTHROUGH_CAPABILITY_BIT_FB = 0x00000001;
static const XrPassthroughCapabilityFlagsFB XR_PASSTHROUGH_CAPABILITY_COLOR_BIT_FB = 0x00000002;
static const XrPassthroughCapabilityFlagsFB XR_PASSTHROUGH_CAPABILITY_LAYER_DEPTH_BIT_FB = 0x00000004;
New Enum Constants
-
XR_PASSTHROUGH_COLOR_MAP_MONO_SIZE_FB
XrStructureType enumeration is extended with:
-
XR_TYPE_SYSTEM_PASSTHROUGH_PROPERTIES_FB
-
XR_TYPE_PASSTHROUGH_CREATE_INFO_FB
-
XR_TYPE_PASSTHROUGH_LAYER_CREATE_INFO_FB
-
XR_TYPE_COMPOSITION_LAYER_PASSTHROUGH_FB
-
XR_TYPE_GEOMETRY_INSTANCE_CREATE_INFO_FB
-
XR_TYPE_GEOMETRY_INSTANCE_TRANSFORM_FB
-
XR_TYPE_PASSTHROUGH_STYLE_FB
-
XR_TYPE_PASSTHROUGH_COLOR_MAP_MONO_TO_RGBA_FB
-
XR_TYPE_PASSTHROUGH_COLOR_MAP_MONO_TO_MONO_FB
-
XR_TYPE_PASSTHROUGH_BRIGHTNESS_CONTRAST_SATURATION_FB
-
XR_TYPE_EVENT_DATA_PASSTHROUGH_STATE_CHANGED_FB
XrResult enumeration is extended with:
-
XR_ERROR_UNEXPECTED_STATE_PASSTHROUGH_FB: The state of an object for which a function is called is not one of the expected states for that function.
-
XR_ERROR_FEATURE_ALREADY_CREATED_PASSTHROUGH_FB: An application attempted to create a feature when one has already been created and only one can exist.
-
XR_ERROR_FEATURE_REQUIRED_PASSTHROUGH_FB: A feature is required before the function can be called.
-
XR_ERROR_NOT_PERMITTED_PASSTHROUGH_FB: Operation is not permitted.
-
XR_ERROR_INSUFFICIENT_RESOURCES_PASSTHROUGH_FB: The runtime does not have sufficient resources to perform the operation. Either the object being created is too large, or too many objects of a specific kind have been created.
New Enums
Specify the kind of passthrough behavior the layer provides.
typedef enum XrPassthroughLayerPurposeFB {
XR_PASSTHROUGH_LAYER_PURPOSE_RECONSTRUCTION_FB = 0,
XR_PASSTHROUGH_LAYER_PURPOSE_PROJECTED_FB = 1,
// Provided by XR_FB_passthrough_keyboard_hands
XR_PASSTHROUGH_LAYER_PURPOSE_TRACKED_KEYBOARD_HANDS_FB = 1000203001,
// Provided by XR_FB_passthrough_keyboard_hands
XR_PASSTHROUGH_LAYER_PURPOSE_TRACKED_KEYBOARD_MASKED_HANDS_FB = 1000203002,
XR_PASSTHROUGH_LAYER_PURPOSE_MAX_ENUM_FB = 0x7FFFFFFF
} XrPassthroughLayerPurposeFB;
New Structures
The XrSystemPassthroughPropertiesFB structure is defined as:
// Provided by XR_FB_passthrough
typedef struct XrSystemPassthroughPropertiesFB {
XrStructureType type;
const void* next;
XrBool32 supportsPassthrough;
} XrSystemPassthroughPropertiesFB;
It describes a passthrough system property.
The XrSystemPassthroughProperties2FB structure is defined as:
// Provided by XR_FB_passthrough
typedef struct XrSystemPassthroughProperties2FB {
XrStructureType type;
const void* next;
XrPassthroughCapabilityFlagsFB capabilities;
} XrSystemPassthroughProperties2FB;
Applications can pass this structure in a call to
xrGetSystemProperties to query passthrough system properties.
Applications should verify that the runtime implements
XR_FB_passthrough spec version 3 or newer before doing so.
In older versions, this structure is not supported and will be left
unpopulated.
Applications should use XrSystemPassthroughPropertiesFB in that case.
The XrPassthroughCreateInfoFB structure is defined as:
// Provided by XR_FB_passthrough
typedef struct XrPassthroughCreateInfoFB {
XrStructureType type;
const void* next;
XrPassthroughFlagsFB flags;
} XrPassthroughCreateInfoFB;
It contains parameters used to specify a new passthrough feature.
The XrPassthroughLayerCreateInfoFB structure is defined as:
// Provided by XR_FB_passthrough
typedef struct XrPassthroughLayerCreateInfoFB {
XrStructureType type;
const void* next;
XrPassthroughFB passthrough;
XrPassthroughFlagsFB flags;
XrPassthroughLayerPurposeFB purpose;
} XrPassthroughLayerCreateInfoFB;
It contains parameters used to specify a new passthrough layer.
The XrCompositionLayerPassthroughFB structure is defined as:
// Provided by XR_FB_passthrough
typedef struct XrCompositionLayerPassthroughFB {
XrStructureType type;
const void* next;
XrCompositionLayerFlags flags;
XrSpace space;
XrPassthroughLayerFB layerHandle;
} XrCompositionLayerPassthroughFB;
It is a composition layer type that may be submitted in xrEndFrame where an XrCompositionLayerBaseHeader is specified, as a stand-in for the actual passthrough contents.
Errata: the third field of this structure is named flags rather than
layerFlags as expected and as documented for the parent type
XrCompositionLayerBaseHeader.
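An informative sketch of the creation-to-composition flow: create the passthrough feature (running at creation), create a full reconstruction layer on it, and fill the composition layer to be submitted in xrEndFrame. Assumes a valid session and extension functions loaded via xrGetInstanceProcAddr; error handling omitted.

```c
#include <openxr/openxr.h>

// Sketch: set up a passthrough feature, a reconstruction layer, and the
// composition layer that stands in for the passthrough contents.
void setupPassthrough(XrSession session,
                      XrPassthroughFB* passthrough,
                      XrPassthroughLayerFB* layer,
                      XrCompositionLayerPassthroughFB* compositionLayer)
{
    XrPassthroughCreateInfoFB passthroughInfo = {
        .type = XR_TYPE_PASSTHROUGH_CREATE_INFO_FB,
        .flags = XR_PASSTHROUGH_IS_RUNNING_AT_CREATION_BIT_FB};
    xrCreatePassthroughFB(session, &passthroughInfo, passthrough);

    XrPassthroughLayerCreateInfoFB layerInfo = {
        .type = XR_TYPE_PASSTHROUGH_LAYER_CREATE_INFO_FB,
        .passthrough = *passthrough,
        .flags = XR_PASSTHROUGH_IS_RUNNING_AT_CREATION_BIT_FB,
        .purpose = XR_PASSTHROUGH_LAYER_PURPOSE_RECONSTRUCTION_FB};
    xrCreatePassthroughLayerFB(session, &layerInfo, layer);

    compositionLayer->type = XR_TYPE_COMPOSITION_LAYER_PASSTHROUGH_FB;
    compositionLayer->next = NULL;
    compositionLayer->flags =
        XR_COMPOSITION_LAYER_BLEND_TEXTURE_SOURCE_ALPHA_BIT;
    compositionLayer->space = XR_NULL_HANDLE;
    compositionLayer->layerHandle = *layer;
    // Submit (XrCompositionLayerBaseHeader*)compositionLayer in the layer
    // list passed to xrEndFrame.
}
```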
The XrGeometryInstanceCreateInfoFB structure is defined as:
// Provided by XR_FB_passthrough
typedef struct XrGeometryInstanceCreateInfoFB {
XrStructureType type;
const void* next;
XrPassthroughLayerFB layer;
XrTriangleMeshFB mesh;
XrSpace baseSpace;
XrPosef pose;
XrVector3f scale;
} XrGeometryInstanceCreateInfoFB;
It contains parameters to specify a new geometry instance.
The XrGeometryInstanceTransformFB structure is defined as:
// Provided by XR_FB_passthrough
typedef struct XrGeometryInstanceTransformFB {
XrStructureType type;
const void* next;
XrSpace baseSpace;
XrTime time;
XrPosef pose;
XrVector3f scale;
} XrGeometryInstanceTransformFB;
It describes a transformation for a geometry instance.
The XrPassthroughStyleFB structure is defined as:
// Provided by XR_FB_passthrough
typedef struct XrPassthroughStyleFB {
XrStructureType type;
const void* next;
float textureOpacityFactor;
XrColor4f edgeColor;
} XrPassthroughStyleFB;
XrPassthroughStyleFB lets applications customize the appearance of
passthrough layers.
In addition to the parameters specified here, applications may add one of
the following structures to the structure chain:
XrPassthroughColorMapMonoToRgbaFB,
XrPassthroughColorMapMonoToMonoFB,
XrPassthroughBrightnessContrastSaturationFB.
These structures are mutually exclusive.
The runtime must return XR_ERROR_VALIDATION_FAILURE if more than one
of them are present in the structure chain.
The XrPassthroughColorMapMonoToRgbaFB structure is defined as:
// Provided by XR_FB_passthrough
typedef struct XrPassthroughColorMapMonoToRgbaFB {
XrStructureType type;
const void* next;
XrColor4f textureColorMap[XR_PASSTHROUGH_COLOR_MAP_MONO_SIZE_FB];
} XrPassthroughColorMapMonoToRgbaFB;
XrPassthroughColorMapMonoToRgbaFB lets applications define a map which replaces each input luminance value in the passthrough imagery with an RGBA color value. The map is applied before any additional effects (such as edges) are rendered on top.
XrPassthroughColorMapMonoToRgbaFB is provided in the next chain
of XrPassthroughStyleFB.
The XrPassthroughColorMapMonoToMonoFB structure is defined as:
// Provided by XR_FB_passthrough
typedef struct XrPassthroughColorMapMonoToMonoFB {
XrStructureType type;
const void* next;
uint8_t textureColorMap[XR_PASSTHROUGH_COLOR_MAP_MONO_SIZE_FB];
} XrPassthroughColorMapMonoToMonoFB;
XrPassthroughColorMapMonoToMonoFB lets applications define a map which
replaces each input luminance value in the passthrough imagery with a
grayscale color value defined in textureColorMap.
The map is applied before any additional effects (such as edges) are
rendered on top.
XrPassthroughColorMapMonoToMonoFB is provided in the next chain
of XrPassthroughStyleFB.
The XrPassthroughBrightnessContrastSaturationFB structure is defined as:
// Provided by XR_FB_passthrough
typedef struct XrPassthroughBrightnessContrastSaturationFB {
XrStructureType type;
const void* next;
float brightness;
float contrast;
float saturation;
} XrPassthroughBrightnessContrastSaturationFB;
XrPassthroughBrightnessContrastSaturationFB lets applications adjust the brightness, contrast, and saturation of passthrough layers. The adjustments are applied before any additional effects (such as edges) are rendered on top.
The adjustments are applied in CIELAB color space (white point D65) using the following formulas:
-
L*' = clamp((L* - 50) × contrast + 50, 0, 100)
-
L*'' = clamp(L*' + brightness, 0, 100)
-
(a*', b*') = (a*, b*) × saturation
-
Resulting color: (L*'', a*', b*')
XrPassthroughBrightnessContrastSaturationFB is provided in the
next chain of XrPassthroughStyleFB.
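The formulas above can be written out as stand-alone arithmetic on a CIELAB triple (L*, a*, b*). This is informative only and mirrors just the listed equations; the conversion between the camera image and CIELAB is the runtime's concern. The helper names are hypothetical.

```c
// Hypothetical helper: clamp v to [lo, hi].
static float clampf(float v, float lo, float hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

// Hypothetical helper: apply the brightness/contrast/saturation formulas
// to a CIELAB triple {L*, a*, b*}, in place.
void applyBcs(float lab[3], float brightness, float contrast,
              float saturation)
{
    float L = clampf((lab[0] - 50.0f) * contrast + 50.0f, 0.0f, 100.0f);
    L = clampf(L + brightness, 0.0f, 100.0f);
    lab[0] = L;
    lab[1] *= saturation;  // a*
    lab[2] *= saturation;  // b*
}
```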
The XrEventDataPassthroughStateChangedFB structure is defined as:
// Provided by XR_FB_passthrough
typedef struct XrEventDataPassthroughStateChangedFB {
XrStructureType type;
const void* next;
XrPassthroughStateChangedFlagsFB flags;
} XrEventDataPassthroughStateChangedFB;
It describes event data for state changes returned by xrPollEvent.
New Functions
The xrCreatePassthroughFB function is defined as:
// Provided by XR_FB_passthrough
XrResult xrCreatePassthroughFB(
XrSession session,
const XrPassthroughCreateInfoFB* createInfo,
XrPassthroughFB* outPassthrough);
Creates an XrPassthroughFB handle. The returned passthrough handle may be subsequently used in API calls.
The xrDestroyPassthroughFB function is defined as:
// Provided by XR_FB_passthrough
XrResult xrDestroyPassthroughFB(
XrPassthroughFB passthrough);
Destroys an XrPassthroughFB handle.
The xrPassthroughStartFB function is defined as:
// Provided by XR_FB_passthrough
XrResult xrPassthroughStartFB(
XrPassthroughFB passthrough);
Starts an XrPassthroughFB feature. If the feature is not started, either explicitly with a call to xrPassthroughStartFB, or implicitly at creation using the behavior flags, it is considered paused. When the feature is paused, the runtime will stop rendering and compositing all passthrough layers produced on behalf of the application, and may free up some or all of the resources used to produce passthrough until xrPassthroughStartFB is called.
The xrPassthroughPauseFB function is defined as:
// Provided by XR_FB_passthrough
XrResult xrPassthroughPauseFB(
XrPassthroughFB passthrough);
Pauses an XrPassthroughFB feature. While the feature is paused, the runtime will stop rendering and compositing all passthrough layers produced on behalf of the application, and may free up some or all of the resources used to produce passthrough until xrPassthroughStartFB is called.
The xrCreatePassthroughLayerFB function is defined as:
// Provided by XR_FB_passthrough
XrResult xrCreatePassthroughLayerFB(
XrSession session,
const XrPassthroughLayerCreateInfoFB* createInfo,
XrPassthroughLayerFB* outLayer);
Creates an XrPassthroughLayerFB handle. The returned layer handle may be subsequently used in API calls. Layer objects may be used to specify rendering properties of the layer, such as styles, and compositing rules.
The xrDestroyPassthroughLayerFB function is defined as:
// Provided by XR_FB_passthrough
XrResult xrDestroyPassthroughLayerFB(
XrPassthroughLayerFB layer);
Destroys an XrPassthroughLayerFB handle.
The xrPassthroughLayerPauseFB function is defined as:
// Provided by XR_FB_passthrough
XrResult xrPassthroughLayerPauseFB(
XrPassthroughLayerFB layer);
Pauses an XrPassthroughLayerFB layer. The runtime will not render or composite paused layers.
The xrPassthroughLayerResumeFB function is defined as:
// Provided by XR_FB_passthrough
XrResult xrPassthroughLayerResumeFB(
XrPassthroughLayerFB layer);
Resumes an XrPassthroughLayerFB layer.
The xrPassthroughLayerSetStyleFB function is defined as:
// Provided by XR_FB_passthrough
XrResult xrPassthroughLayerSetStyleFB(
XrPassthroughLayerFB layer,
const XrPassthroughStyleFB* style);
Sets an XrPassthroughStyleFB style on an XrPassthroughLayerFB layer.
The xrCreateGeometryInstanceFB function is defined as:
// Provided by XR_FB_passthrough
XrResult xrCreateGeometryInstanceFB(
XrSession session,
const XrGeometryInstanceCreateInfoFB* createInfo,
XrGeometryInstanceFB* outGeometryInstance);
Creates an XrGeometryInstanceFB handle.
Geometry instance functionality requires the XR_FB_triangle_mesh
extension to be enabled.
An XrGeometryInstanceFB connects a layer, a mesh, and a
transformation, with the semantics that a specific mesh will be instantiated
in a specific layer with a specific transformation.
A mesh can be instantiated multiple times, in the same or in different
layers.
The xrDestroyGeometryInstanceFB function is defined as:
// Provided by XR_FB_passthrough
XrResult xrDestroyGeometryInstanceFB(
XrGeometryInstanceFB instance);
Destroys an XrGeometryInstanceFB handle. Destroying an XrGeometryInstanceFB does not destroy a mesh and does not free mesh resources. Destroying a layer invalidates all geometry instances attached to it. Destroying a mesh invalidates all its instances.
The xrGeometryInstanceSetTransformFB function is defined as:
// Provided by XR_FB_passthrough
XrResult xrGeometryInstanceSetTransformFB(
XrGeometryInstanceFB instance,
const XrGeometryInstanceTransformFB* transformation);
Sets an XrGeometryInstanceTransformFB transform on an XrGeometryInstanceFB geometry instance.
Issues
Version History
-
Revision 1, 2021-09-01 (Anton Vaneev)
-
Initial extension description
-
-
Revision 2, 2022-03-16 (Johannes Schmid)
-
Introduce XrPassthroughBrightnessContrastSaturationFB.
-
Revise the documentation of XrPassthroughStyleFB and its descendants.
-
-
Revision 3, 2022-07-14 (Johannes Schmid)
-
Introduce a new struct for querying passthrough system capabilities: XrSystemPassthroughProperties2FB.
-
Introduce a new flag bit that enables submission of depth maps for compositing:
XR_PASSTHROUGH_LAYER_DEPTH_BIT_FB.
-
-
Revision 4, 2024-06-03 (Rylie Pavlik, Collabora)
-
Correct registry for
XrCompositionLayerPassthroughFB, note errata regarding field name.
-
12.96. XR_FB_passthrough_keyboard_hands
- Name String
-
XR_FB_passthrough_keyboard_hands - Extension Type
-
Instance extension
- Registered Extension Number
-
204
- Revision
-
2
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Ante Trbojevic, Facebook
Cass Everitt, Facebook
Federico Schliemann, Facebook
Anton Vaneev, Facebook
Johannes Schmid, Facebook
Overview
This extension enables applications to show passthrough hands when hands are
placed over the tracked keyboard.
It enables users to see their hands over the keyboard in a mixed reality
application.
This extension depends on the XR_FB_passthrough extension, which can
be used to create a passthrough layer for the hand presence use case.
The extension supports a single pair of hands (one left and one right hand); multiple pairs of hands are not supported.
This extension allows:
-
Creation of keyboard hands passthrough layer using xrCreatePassthroughLayerFB
-
Setting the level of intensity for the hand mask in a passthrough layer whose XrPassthroughLayerPurposeFB purpose is
XR_PASSTHROUGH_LAYER_PURPOSE_TRACKED_KEYBOARD_HANDS_FB or XR_PASSTHROUGH_LAYER_PURPOSE_TRACKED_KEYBOARD_MASKED_HANDS_FB
New Enum Constants
XrPassthroughLayerPurposeFB enumeration is extended with a new constant:
-
XR_PASSTHROUGH_LAYER_PURPOSE_TRACKED_KEYBOARD_HANDS_FB - It defines a keyboard hands presence purpose of a passthrough layer (i.e. basic mode, without hand transitions).
-
XR_PASSTHROUGH_LAYER_PURPOSE_TRACKED_KEYBOARD_MASKED_HANDS_FB - It defines a keyboard hands presence purpose of a passthrough layer with keyboard masked hand transitions. A hand mask is visible only when the hands are inside the region of the VR keyboard (i.e. hands over the keyboard).
XrStructureType enumeration is extended with:
-
XR_TYPE_PASSTHROUGH_KEYBOARD_HANDS_INTENSITY_FB
New Structures
The XrPassthroughKeyboardHandsIntensityFB structure is defined as:
// Provided by XR_FB_passthrough_keyboard_hands
typedef struct XrPassthroughKeyboardHandsIntensityFB {
XrStructureType type;
const void* next;
float leftHandIntensity;
float rightHandIntensity;
} XrPassthroughKeyboardHandsIntensityFB;
XrPassthroughKeyboardHandsIntensityFB describes intensities of passthrough hands, and is used as a parameter to xrPassthroughLayerSetKeyboardHandsIntensityFB.
Each of the intensity values leftHandIntensity and
rightHandIntensity must be in the range [0.0, 1.0].
The hand intensity value represents the level of visibility of the rendered
hand: the minimum intensity of 0.0 represents a fully transparent hand (not
visible), and the maximum value of 1.0 represents a fully opaque hand
(maximum visibility).
If either leftHandIntensity or rightHandIntensity is outside the
range [0.0, 1.0], the runtime must return XR_ERROR_VALIDATION_FAILURE.
New Functions
The xrPassthroughLayerSetKeyboardHandsIntensityFB function is defined as:
// Provided by XR_FB_passthrough_keyboard_hands
XrResult xrPassthroughLayerSetKeyboardHandsIntensityFB(
XrPassthroughLayerFB layer,
const XrPassthroughKeyboardHandsIntensityFB* intensity);
Sets an XrPassthroughKeyboardHandsIntensityFB intensity on an XrPassthroughLayerFB layer.
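The [0.0, 1.0] range requirement above can be sketched as a plain-C validation helper; the mock types and return codes below are illustrative stand-ins, not runtime code:

```c
#include <assert.h>

/* Illustrative stand-ins for XrResult values. */
typedef enum {
    MOCK_XR_SUCCESS = 0,
    MOCK_XR_ERROR_VALIDATION_FAILURE = -1
} MockResult;

/* Rejects intensities outside [0.0, 1.0], as the spec text requires. */
static MockResult validate_intensity(float leftHandIntensity,
                                     float rightHandIntensity) {
    if (leftHandIntensity < 0.0f || leftHandIntensity > 1.0f ||
        rightHandIntensity < 0.0f || rightHandIntensity > 1.0f) {
        return MOCK_XR_ERROR_VALIDATION_FAILURE;
    }
    return MOCK_XR_SUCCESS;
}
```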
Issues
Version History
-
Revision 1, 2021-11-23 (Ante Trbojevic)
-
Initial extension description
-
-
Revision 2, 2022-03-16 (Ante Trbojevic)
-
Introduce
XR_PASSTHROUGH_LAYER_PURPOSE_TRACKED_KEYBOARD_MASKED_HANDS_FB
-
12.97. XR_FB_render_model
- Name String
-
XR_FB_render_model - Extension Type
-
Instance extension
- Registered Extension Number
-
120
- Revision
-
4
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Leonard Tsai, Meta
Xiang Wei, Meta
Robert Memmott, Meta
Overview
This extension allows applications to request GLTF models for certain connected devices supported by the runtime. Paths that correspond to these devices are provided through the extension and can be used to get information about the models as well as to load them.
New Flag Types
typedef XrFlags64 XrRenderModelFlagsFB;
// Flag bits for XrRenderModelFlagsFB
static const XrRenderModelFlagsFB XR_RENDER_MODEL_SUPPORTS_GLTF_2_0_SUBSET_1_BIT_FB = 0x00000001;
static const XrRenderModelFlagsFB XR_RENDER_MODEL_SUPPORTS_GLTF_2_0_SUBSET_2_BIT_FB = 0x00000002;
Render Model Support Levels: An application should request a model of a certain complexity via the XrRenderModelCapabilitiesRequestFB on the structure chain of XrRenderModelPropertiesFB passed into xrGetRenderModelPropertiesFB. The flags on the XrRenderModelCapabilitiesRequestFB are an acknowledgement of the application’s ability to render such a model. Multiple values of XrRenderModelFlagBitsFB can be set on this variable to indicate acceptance of different support levels. The flags parameter on the XrRenderModelPropertiesFB will indicate what capabilities the model in the runtime actually requires. It will be set to a single value of XrRenderModelFlagBitsFB.
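The negotiation above can be sketched with plain bit flags: the application requests every subset it can render (multiple bits), while the runtime's model requires exactly one. The mock names below stand in for XrRenderModelFlagsFB and its bits:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t MockRenderModelFlags; /* stand-in for XrRenderModelFlagsFB */
#define MOCK_GLTF_SUBSET_1_BIT 0x00000001u
#define MOCK_GLTF_SUBSET_2_BIT 0x00000002u

/* The model is usable only if its single required bit is among the
 * capability bits the application acknowledged it can handle. */
static int model_usable(MockRenderModelFlags requested,
                        MockRenderModelFlags modelRequires) {
    return (requested & modelRequires) == modelRequires;
}
```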
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_SYSTEM_RENDER_MODEL_PROPERTIES_FB
-
XR_TYPE_RENDER_MODEL_PATH_INFO_FB
-
XR_TYPE_RENDER_MODEL_PROPERTIES_FB
-
XR_TYPE_RENDER_MODEL_BUFFER_FB
-
XR_TYPE_RENDER_MODEL_LOAD_INFO_FB
-
XR_MAX_RENDER_MODEL_NAME_SIZE_FB
New Defines
// Provided by XR_FB_render_model
#define XR_NULL_RENDER_MODEL_KEY_FB 0
XR_NULL_RENDER_MODEL_KEY_FB defines an invalid model key atom.
New Base Types
// Provided by XR_FB_render_model
XR_DEFINE_ATOM(XrRenderModelKeyFB)
The unique model key used to retrieve the data for the render model that is valid across multiple instances and installs. The application can use this key along with the model version to update its cached or saved version of the model.
New Structures
The XrSystemRenderModelPropertiesFB structure is defined as:
// Provided by XR_FB_render_model
typedef struct XrSystemRenderModelPropertiesFB {
XrStructureType type;
void* next;
XrBool32 supportsRenderModelLoading;
} XrSystemRenderModelPropertiesFB;
It describes a render model system property.
The XrRenderModelPathInfoFB structure is defined as:
// Provided by XR_FB_render_model
typedef struct XrRenderModelPathInfoFB {
XrStructureType type;
void* next;
XrPath path;
} XrRenderModelPathInfoFB;
XrRenderModelPathInfoFB contains a model path supported by the device when returned from xrEnumerateRenderModelPathsFB. This path can be used to request information about the render model for the connected device that the path represents using xrGetRenderModelPropertiesFB.
The XrRenderModelPropertiesFB structure is defined as:
// Provided by XR_FB_render_model
typedef struct XrRenderModelPropertiesFB {
XrStructureType type;
void* next;
uint32_t vendorId;
char modelName[XR_MAX_RENDER_MODEL_NAME_SIZE_FB];
XrRenderModelKeyFB modelKey;
uint32_t modelVersion;
XrRenderModelFlagsFB flags;
} XrRenderModelPropertiesFB;
XrRenderModelPropertiesFB contains information about the render model
for a device.
XrRenderModelPropertiesFB must be provided when calling
xrGetRenderModelPropertiesFB.
The XrRenderModelKeyFB included in the properties is a unique key
for each render model that is valid across multiple instances and installs.
If the application decides to cache or save the render model in any way,
modelVersion can be used to determine if the render model has changed.
The application should then update its cached or saved version.
The XrRenderModelCapabilitiesRequestFB structure is defined as:
// Provided by XR_FB_render_model
typedef struct XrRenderModelCapabilitiesRequestFB {
XrStructureType type;
void* next;
XrRenderModelFlagsFB flags;
} XrRenderModelCapabilitiesRequestFB;
XrRenderModelCapabilitiesRequestFB contains information about the
render capabilities requested for a model.
XrRenderModelCapabilitiesRequestFB must be set in the structure chain
of the next pointer on the XrRenderModelPropertiesFB passed into
the xrGetRenderModelPropertiesFB call.
The flags on XrRenderModelCapabilitiesRequestFB represent an
acknowledgement of being able to handle the individual model capability
levels.
If no XrRenderModelCapabilitiesRequestFB is on the structure chain
then the runtime should treat it as if a value of
XR_RENDER_MODEL_SUPPORTS_GLTF_2_0_SUBSET_1_BIT_FB was set.
If the runtime does not have a model available that matches any of the
supports flags set, then it must return a
XR_RENDER_MODEL_UNAVAILABLE_FB result.
The XrRenderModelLoadInfoFB structure is defined as:
// Provided by XR_FB_render_model
typedef struct XrRenderModelLoadInfoFB {
XrStructureType type;
void* next;
XrRenderModelKeyFB modelKey;
} XrRenderModelLoadInfoFB;
XrRenderModelLoadInfoFB is used to provide information about which render model to load. XrRenderModelLoadInfoFB must be provided when calling xrLoadRenderModelFB.
The XrRenderModelBufferFB structure is defined as:
// Provided by XR_FB_render_model
typedef struct XrRenderModelBufferFB {
XrStructureType type;
void* next;
uint32_t bufferCapacityInput;
uint32_t bufferCountOutput;
uint8_t* buffer;
} XrRenderModelBufferFB;
XrRenderModelBufferFB is used when loading the binary data for a render model. XrRenderModelBufferFB must be provided when calling xrLoadRenderModelFB.
New Functions
The xrEnumerateRenderModelPathsFB function is defined as:
// Provided by XR_FB_render_model
XrResult xrEnumerateRenderModelPathsFB(
XrSession session,
uint32_t pathCapacityInput,
uint32_t* pathCountOutput,
XrRenderModelPathInfoFB* paths);
The application must call xrEnumerateRenderModelPathsFB to enumerate the valid render model paths that are supported by the runtime before calling xrGetRenderModelPropertiesFB. The paths returned may be used later in xrGetRenderModelPropertiesFB.
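The function follows the usual OpenXR two-call idiom: call once with a zero capacity to learn the required count, then call again with a sized array. A self-contained mock of that idiom, with illustrative names and values:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for XrRenderModelPathInfoFB. */
typedef struct { uint64_t path; } MockPathInfo;

/* Mock enumerator: always reports two available model paths. */
static int mock_enumerate_paths(uint32_t capacityInput,
                                uint32_t* countOutput,
                                MockPathInfo* paths) {
    const uint32_t available = 2;  /* pretend two devices expose models */
    *countOutput = available;
    if (capacityInput == 0) return 0;          /* size query: success */
    if (capacityInput < available) return -1;  /* ~XR_ERROR_SIZE_INSUFFICIENT */
    for (uint32_t i = 0; i < available; ++i) paths[i].path = 100 + i;
    return 0;
}
```

A real application would replace mock_enumerate_paths with xrEnumerateRenderModelPathsFB and pass the session as the first argument.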
The xrGetRenderModelPropertiesFB function is defined as:
// Provided by XR_FB_render_model
XrResult xrGetRenderModelPropertiesFB(
XrSession session,
XrPath path,
XrRenderModelPropertiesFB* properties);
xrGetRenderModelPropertiesFB is used for getting information for a render model using a path retrieved from xrEnumerateRenderModelPathsFB. The information returned will be for the connected device that corresponds to the path given. For example, using /model_fb/controller/left will return information for the left controller that is currently connected and will change if a different device that also represents a left controller is connected.
The runtime must return XR_ERROR_CALL_ORDER_INVALID if
xrGetRenderModelPropertiesFB is called with render model paths before
calling xrEnumerateRenderModelPathsFB.
The runtime must return XR_ERROR_PATH_INVALID if a path not given by
xrEnumerateRenderModelPathsFB is used.
If xrGetRenderModelPropertiesFB returns a success code of
XR_RENDER_MODEL_UNAVAILABLE_FB and has a
XrRenderModelPropertiesFB::modelKey of
XR_NULL_RENDER_MODEL_KEY_FB, this indicates that the model for the
device is unavailable.
The application may keep calling xrGetRenderModelPropertiesFB because
the model may become available later when a device is connected.
The xrLoadRenderModelFB function is defined as:
// Provided by XR_FB_render_model
XrResult xrLoadRenderModelFB(
XrSession session,
const XrRenderModelLoadInfoFB* info,
XrRenderModelBufferFB* buffer);
xrLoadRenderModelFB is used to load the GLTF model data using a valid
XrRenderModelLoadInfoFB::modelKey.
xrLoadRenderModelFB loads the model as a byte buffer containing the
GLTF in the binary format (GLB).
The GLB data must conform to the glTF 2.0 format defined at https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html.
The GLB may contain texture data in a format that requires the use of the
KHR_texture_basisu GLTF extension defined at https://github.com/KhronosGroup/glTF/tree/main/extensions/2.0/Khronos/KHR_texture_basisu.
Therefore, the application should ensure it can handle this extension.
If the device for the requested model is disconnected or does not match the
XrRenderModelLoadInfoFB::modelKey provided,
xrLoadRenderModelFB must return XR_RENDER_MODEL_UNAVAILABLE_FB
as well as an XrRenderModelBufferFB::bufferCountOutput value of
0 indicating that the model was not available.
The xrLoadRenderModelFB function may be slow, therefore applications should call it from a non-time sensitive thread.
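Loading follows the same two-call sizing pattern on the byte buffer. The mock below is illustrative; its 8-byte payload merely mimics the ASCII "glTF" magic that begins a real GLB file:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Tiny fake payload; a real GLB begins with the bytes 'g' 'l' 'T' 'F'. */
static const uint8_t kMockGlb[8] = { 'g', 'l', 'T', 'F', 2, 0, 0, 0 };

/* Mock loader: query the byte count with capacityInput = 0, then load. */
static int mock_load_model(uint32_t capacityInput, uint32_t* countOutput,
                           uint8_t* buffer) {
    *countOutput = (uint32_t)sizeof(kMockGlb);
    if (capacityInput == 0) return 0;           /* size query: success */
    if (capacityInput < sizeof(kMockGlb)) return -1;
    memcpy(buffer, kMockGlb, sizeof(kMockGlb));
    return 0;
}
```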
Issues
Version History
-
Revision 1, 2021-08-17 (Leonard Tsai)
-
Initial extension description
-
-
Revision 2, 2022-05-03 (Robert Memmott)
-
Render Model Support Subsets
-
-
Revision 3, 2022-07-07 (Rylie Pavlik, Collabora, Ltd.)
-
Fix implicit valid usage for
XrRenderModelCapabilitiesRequestFB
-
-
Revision 4, 2023-04-14 (Peter Chan)
-
Add possible render model path for
XR_META_virtual_keyboard
-
12.98. XR_FB_scene
- Name String
-
XR_FB_scene - Extension Type
-
Instance extension
- Registered Extension Number
-
176
- Revision
-
4
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
John Schofield, Facebook
Andrew Kim, Facebook
Yuichi Taguchi, Facebook
Cass Everitt, Facebook
Overview
This extension expands on the concept of spatial entities to include a way for a spatial entity to represent rooms, objects, or other boundaries in a scene.
In order to enable the functionality of this extension, the application must pass the
name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
New Object Types
New Flag Types
// Provided by XR_FB_scene
typedef XrFlags64 XrSemanticLabelsSupportFlagsFB;
// Provided by XR_FB_scene
// Flag bits for XrSemanticLabelsSupportFlagsFB
static const XrSemanticLabelsSupportFlagsFB XR_SEMANTIC_LABELS_SUPPORT_MULTIPLE_SEMANTIC_LABELS_BIT_FB = 0x00000001;
static const XrSemanticLabelsSupportFlagsFB XR_SEMANTIC_LABELS_SUPPORT_ACCEPT_DESK_TO_TABLE_MIGRATION_BIT_FB = 0x00000002;
static const XrSemanticLabelsSupportFlagsFB XR_SEMANTIC_LABELS_SUPPORT_ACCEPT_INVISIBLE_WALL_FACE_BIT_FB = 0x00000004;
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_SEMANTIC_LABELS_FB
-
XR_TYPE_ROOM_LAYOUT_FB
-
XR_TYPE_BOUNDARY_2D_FB
-
XR_TYPE_SEMANTIC_LABELS_SUPPORT_INFO_FB
New Enums
New Structures
The XrExtent3DfFB structure is defined as:
// Provided by XR_FB_scene
// XrExtent3DfFB is an alias for XrExtent3Df
typedef struct XrExtent3Df {
float width;
float height;
float depth;
} XrExtent3Df;
typedef XrExtent3Df XrExtent3DfFB;
This structure is used for component values that may be fractional (floating-point). If used to represent physical distances, values must be in meters. The width, height, and depth values must be non-negative.
The XrOffset3DfFB structure is defined as:
// Provided by XR_FB_scene
typedef struct XrOffset3DfFB {
float x;
float y;
float z;
} XrOffset3DfFB;
This structure is used for component values that may be fractional (floating-point). If used to represent physical distances, values must be in meters.
The XrRect3DfFB structure is defined as:
// Provided by XR_FB_scene
typedef struct XrRect3DfFB {
XrOffset3DfFB offset;
XrExtent3DfFB extent;
} XrRect3DfFB;
This structure is used for component values that may be fractional (floating-point).
The bounding box is defined by an offset and extent.
The offset refers to the coordinate of the minimum corner of the box
in the local space of the XrSpace; that is, the corner whose
coordinate has the minimum value on each axis.
The extent refers to the dimensions of the box along each axis.
The maximum corner can therefore be computed as offset + extent.
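The offset-plus-extent rule can be sketched with local stand-ins for the spec structs:

```c
#include <assert.h>

/* Illustrative stand-ins for XrOffset3DfFB, XrExtent3DfFB, XrRect3DfFB. */
typedef struct { float width, height, depth; } MockExtent3Df;
typedef struct { float x, y, z; } MockOffset3Df;
typedef struct { MockOffset3Df offset; MockExtent3Df extent; } MockRect3Df;

/* Maximum corner of the box: offset + extent on each axis. */
static MockOffset3Df max_corner(MockRect3Df box) {
    MockOffset3Df c = {
        box.offset.x + box.extent.width,
        box.offset.y + box.extent.height,
        box.offset.z + box.extent.depth
    };
    return c;
}
```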
The XrSemanticLabelsFB structure is defined as:
// Provided by XR_FB_scene
typedef struct XrSemanticLabelsFB {
XrStructureType type;
const void* next;
uint32_t bufferCapacityInput;
uint32_t bufferCountOutput;
char* buffer;
} XrSemanticLabelsFB;
This structure is used by the xrGetSpaceSemanticLabelsFB function to provide the application with the intended usage of the spatial entity.
The XrRoomLayoutFB structure is defined as:
// Provided by XR_FB_scene
typedef struct XrRoomLayoutFB {
XrStructureType type;
const void* next;
XrUuidEXT floorUuid;
XrUuidEXT ceilingUuid;
uint32_t wallUuidCapacityInput;
uint32_t wallUuidCountOutput;
XrUuidEXT* wallUuids;
} XrRoomLayoutFB;
This structure is used by the xrGetSpaceRoomLayoutFB function to provide the application with the XrUuidEXT handles representing the various surfaces of a room.
The XrBoundary2DFB structure is defined as:
// Provided by XR_FB_scene
typedef struct XrBoundary2DFB {
XrStructureType type;
const void* next;
uint32_t vertexCapacityInput;
uint32_t vertexCountOutput;
XrVector2f* vertices;
} XrBoundary2DFB;
This structure is used by the xrGetSpaceBoundary2DFB function to provide the application with the XrVector2f vertices representing a spatial entity with a boundary.
The XrSemanticLabelsSupportInfoFB structure is defined as:
// Provided by XR_FB_scene
typedef struct XrSemanticLabelsSupportInfoFB {
XrStructureType type;
const void* next;
XrSemanticLabelsSupportFlagsFB flags;
const char* recognizedLabels;
} XrSemanticLabelsSupportInfoFB;
The XrSemanticLabelsSupportInfoFB structure may be specified in the
next chain of XrSemanticLabelsFB to specify additional behaviors
of the xrGetSpaceSemanticLabelsFB function.
The runtime must follow the behaviors specified in flags according to
the descriptions of XrSemanticLabelsSupportFlagBitsFB.
The runtime must return any semantic label that is not included in
recognizedLabels as "OTHER" to the application.
The runtime must follow this direction only if the runtime reports the
XrExtensionProperties::extensionVersion as 2 or greater,
otherwise the runtime must ignore this as an unknown chained structure.
If the XrSemanticLabelsSupportInfoFB structure is not present in the
next chain of XrSemanticLabelsFB, the runtime may return any
semantic labels to the application.
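The chained structure is located by walking the next pointers, as a runtime might. The mock types below are simplified stand-ins for XrStructureType and the spec structs:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for XrStructureType values and spec structs. */
typedef enum { MOCK_TYPE_SEMANTIC_LABELS = 1,
               MOCK_TYPE_SEMANTIC_LABELS_SUPPORT_INFO = 2 } MockType;

typedef struct { MockType type; const void* next; } MockBaseHeader;

typedef struct {
    MockType type; const void* next;
    unsigned long long flags; const char* recognizedLabels;
} MockSupportInfo;

typedef struct {
    MockType type; const void* next;
    unsigned int bufferCapacityInput, bufferCountOutput; char* buffer;
} MockSemanticLabels;

/* Walk the next chain looking for the support-info structure. */
static const MockSupportInfo* find_support_info(const MockSemanticLabels* labels) {
    const MockBaseHeader* p = (const MockBaseHeader*)labels->next;
    while (p != NULL) {
        if (p->type == MOCK_TYPE_SEMANTIC_LABELS_SUPPORT_INFO)
            return (const MockSupportInfo*)p;
        p = (const MockBaseHeader*)p->next;
    }
    return NULL; /* not chained: runtime may return any labels */
}
```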
New Functions
The xrGetSpaceBoundingBox2DFB function is defined as:
// Provided by XR_FB_scene
XrResult xrGetSpaceBoundingBox2DFB(
XrSession session,
XrSpace space,
XrRect2Df* boundingBox2DOutput);
Gets the 2D bounding box for a spatial entity with the
XR_SPACE_COMPONENT_TYPE_BOUNDED_2D_FB component type enabled.
The bounding box is defined by an XrRect2Df::offset and
XrRect2Df::extent.
The XrRect2Df::offset refers to the coordinate of the minimum
corner of the box in the x-y plane of the given XrSpace’s coordinate
system; that is, the corner whose coordinate has the minimum value on each
axis.
The XrRect2Df::extent refers to the dimensions of the box along
each axis.
The maximum corner can therefore be computed as
XrRect2Df::offset + XrRect2Df::extent.
The xrGetSpaceBoundingBox3DFB function is defined as:
// Provided by XR_FB_scene
XrResult xrGetSpaceBoundingBox3DFB(
XrSession session,
XrSpace space,
XrRect3DfFB* boundingBox3DOutput);
Gets the 3D bounding box for a spatial entity with the
XR_SPACE_COMPONENT_TYPE_BOUNDED_3D_FB component type enabled.
The xrGetSpaceSemanticLabelsFB function is defined as:
// Provided by XR_FB_scene
XrResult xrGetSpaceSemanticLabelsFB(
XrSession session,
XrSpace space,
XrSemanticLabelsFB* semanticLabelsOutput);
Gets the semantic labels for a spatial entity with the
XR_SPACE_COMPONENT_TYPE_SEMANTIC_LABELS_FB component type enabled.
The xrGetSpaceBoundary2DFB function is defined as:
// Provided by XR_FB_scene
XrResult xrGetSpaceBoundary2DFB(
XrSession session,
XrSpace space,
XrBoundary2DFB* boundary2DOutput);
Gets the 2D boundary, specified by vertices, for a spatial entity with the
XR_SPACE_COMPONENT_TYPE_BOUNDED_2D_FB component type enabled.
The xrGetSpaceRoomLayoutFB function is defined as:
// Provided by XR_FB_scene
XrResult xrGetSpaceRoomLayoutFB(
XrSession session,
XrSpace space,
XrRoomLayoutFB* roomLayoutOutput);
Gets the room layout, specified by UUIDs for each surface, for a spatial
entity with the XR_SPACE_COMPONENT_TYPE_ROOM_LAYOUT_FB component type
enabled.
If the XrRoomLayoutFB::wallUuidCapacityInput field is zero
(indicating a request to retrieve the required capacity for the
XrRoomLayoutFB::wallUuids array), or if
xrGetSpaceRoomLayoutFB returns failure, then the values of floorUuid
and ceilingUuid are unspecified and should not be used.
Issues
Version History
-
Revision 1, 2022-03-09 (John Schofield)
-
Initial draft
-
-
Revision 2, 2023-04-03 (Yuichi Taguchi)
-
Introduce XrSemanticLabelsSupportInfoFB.
-
-
Revision 3, 2023-04-03 (Yuichi Taguchi)
-
Introduce
XR_SEMANTIC_LABELS_SUPPORT_ACCEPT_DESK_TO_TABLE_MIGRATION_BIT_FB.
-
-
Revision 4, 2023-06-12 (Yuichi Taguchi)
-
Introduce
XR_SEMANTIC_LABELS_SUPPORT_ACCEPT_INVISIBLE_WALL_FACE_BIT_FB.
-
12.99. XR_FB_scene_capture
- Name String
-
XR_FB_scene_capture - Extension Type
-
Instance extension
- Registered Extension Number
-
199
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
John Schofield, Facebook
Andrew Kim, Facebook
Yuichi Taguchi, Facebook
Cass Everitt, Facebook
Overview
This extension allows an application to request that the system begin capturing information about what is in the environment around the user.
In order to enable the functionality of this extension, the application must pass the
name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_SCENE_CAPTURE_REQUEST_INFO_FB
-
XR_TYPE_EVENT_DATA_SCENE_CAPTURE_COMPLETE_FB
New Enums
New Structures
The XrSceneCaptureRequestInfoFB structure is defined as:
// Provided by XR_FB_scene_capture
typedef struct XrSceneCaptureRequestInfoFB {
XrStructureType type;
const void* next;
uint32_t requestByteCount;
const char* request;
} XrSceneCaptureRequestInfoFB;
The XrSceneCaptureRequestInfoFB structure is used by an application to
instruct the system what to look for during a scene capture.
If the request parameter is NULL, then the runtime must conduct
a default scene capture.
The XrEventDataSceneCaptureCompleteFB structure is defined as:
// Provided by XR_FB_scene_capture
typedef struct XrEventDataSceneCaptureCompleteFB {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB requestId;
XrResult result;
} XrEventDataSceneCaptureCompleteFB;
The XrEventDataSceneCaptureCompleteFB structure is delivered via xrPollEvent when a scene capture request completes; result indicates the outcome of the request identified by requestId.
New Functions
The xrRequestSceneCaptureFB function is defined as:
// Provided by XR_FB_scene_capture
XrResult xrRequestSceneCaptureFB(
XrSession session,
const XrSceneCaptureRequestInfoFB* info,
XrAsyncRequestIdFB* requestId);
The xrRequestSceneCaptureFB function is used by an application to begin capturing the scene around the user. This is an asynchronous operation.
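A typical application stores the returned request id and later matches it against completion events. A minimal sketch of that bookkeeping, with illustrative names:

```c
#include <assert.h>
#include <stdint.h>

/* Mark a pending scene-capture request as complete when its id arrives
 * in a completion event. Returns 1 if the id was pending, 0 otherwise.
 * Real code would compare the event's requestId field against ids
 * returned by xrRequestSceneCaptureFB. */
static int complete_request(uint64_t* pending, int count, uint64_t id) {
    for (int i = 0; i < count; ++i) {
        if (pending[i] == id) {
            pending[i] = 0; /* 0 used here as an "empty slot" marker */
            return 1;
        }
    }
    return 0; /* unknown request id */
}
```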
Issues
Version History
-
Revision 1, 2022-03-09 (John Schofield)
-
Initial draft
-
12.100. XR_FB_space_warp
- Name String
-
XR_FB_space_warp - Extension Type
-
Instance extension
- Registered Extension Number
-
172
- Revision
-
2
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Jian Zhang, Facebook
Neel Bedekar, Facebook
Xiang Wei, Facebook
Overview
This extension provides support for enabling space warp technology in an application. By feeding application-generated motion vector and depth buffer images, the runtime can perform high-quality frame extrapolation and reprojection, allowing applications to run at half frame rate while still providing a smooth experience to users.
In order to enable the functionality of this extension, the application
must pass the name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
Note
This extension is independent of
New Flag Types
typedef XrFlags64 XrCompositionLayerSpaceWarpInfoFlagsFB;
// Flag bits for XrCompositionLayerSpaceWarpInfoFlagsFB
static const XrCompositionLayerSpaceWarpInfoFlagsFB XR_COMPOSITION_LAYER_SPACE_WARP_INFO_FRAME_SKIP_BIT_FB = 0x00000001;
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_COMPOSITION_LAYER_SPACE_WARP_INFO_FB
-
XR_TYPE_SYSTEM_SPACE_WARP_PROPERTIES_FB
New Enums
-
XR_COMPOSITION_LAYER_SPACE_WARP_INFO_FRAME_SKIP_BIT_FB
New Structures
When submitting motion vector buffer and depth buffers along with projection
layers, add an XrCompositionLayerSpaceWarpInfoFB structure to the
XrCompositionLayerProjectionView::next chain, for each
XrCompositionLayerProjectionView structure in the given layer.
The XrCompositionLayerSpaceWarpInfoFB structure is defined as:
// Provided by XR_FB_space_warp
typedef struct XrCompositionLayerSpaceWarpInfoFB {
XrStructureType type;
const void* next;
XrCompositionLayerSpaceWarpInfoFlagsFB layerFlags;
XrSwapchainSubImage motionVectorSubImage;
XrPosef appSpaceDeltaPose;
XrSwapchainSubImage depthSubImage;
float minDepth;
float maxDepth;
float nearZ;
float farZ;
} XrCompositionLayerSpaceWarpInfoFB;
The motion vector data is stored in the motionVectorSubImage’s RGB
channels, defined in NDC (normalized device coordinates) space.
For example, if the same surface point’s NDC position is PrevNDC in the
previous frame and CurrNDC in the current frame, then the motion vector
value is "highp vec3 motionVector = ( CurrNDC - PrevNDC ).xyz;".
A signed 16-bit float pixel format is recommended for this image.
The runtime must return error XR_ERROR_VALIDATION_FAILURE if
nearZ == farZ.
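The motion vector definition and the depth-range check above can be sketched in C; MockVec3 and the helper names are illustrative:

```c
#include <assert.h>

/* Stand-in for a shader vec3 holding an NDC position. */
typedef struct { float x, y, z; } MockVec3;

/* Motion vector: the surface point's current NDC minus its previous NDC,
 * matching the GLSL expression quoted above. */
static MockVec3 motion_vector(MockVec3 prevNDC, MockVec3 currNDC) {
    MockVec3 mv = { currNDC.x - prevNDC.x,
                    currNDC.y - prevNDC.y,
                    currNDC.z - prevNDC.z };
    return mv;
}

/* The nearZ == farZ validation described above. */
static int valid_depth_range(float nearZ, float farZ) {
    return nearZ != farZ;
}
```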
When this extension is enabled, an application can pass in an
XrSystemSpaceWarpPropertiesFB structure in the
XrSystemProperties::next chain when calling
xrGetSystemProperties to acquire information about recommended motion
vector buffer resolution.
The XrSystemSpaceWarpPropertiesFB structure is defined as:
// Provided by XR_FB_space_warp
typedef struct XrSystemSpaceWarpPropertiesFB {
XrStructureType type;
void* next;
uint32_t recommendedMotionVectorImageRectWidth;
uint32_t recommendedMotionVectorImageRectHeight;
} XrSystemSpaceWarpPropertiesFB;
Issues
Version History
-
Revision 1, 2021-08-04 (Jian Zhang)
-
Initial extension description
-
-
Revision 2, 2022-02-07 (Jian Zhang)
-
Add
XR_COMPOSITION_LAYER_SPACE_WARP_INFO_FRAME_SKIP_BIT_FB
-
12.101. XR_FB_spatial_entity
- Name String
-
XR_FB_spatial_entity - Extension Type
-
Instance extension
- Registered Extension Number
-
114
- Revision
-
3
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
John Schofield, Facebook
Andrew Kim, Facebook
Yuichi Taguchi, Facebook
Cass Everitt, Facebook
Curtis Arink, Facebook
Overview
This extension enables applications to use spatial entities to specify world-locked frames of reference. It enables applications to persist the real world location of content over time and contains definitions for the Entity-Component System. All Facebook spatial entity and scene extensions are dependent on this one.
We use OpenXR XrSpace handles to give applications access to spatial entities such as Spatial Anchors. In other words, any operation which involves spatial entities uses XrSpace handles to identify the affected spatial entities.
In order to enable the functionality of this extension, the application must pass the
name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
This extension allows:
-
An application to create a Spatial Anchor (a type of spatial entity).
-
An application to enumerate supported components for a given spatial entity.
-
An application to enable or disable a component for a given spatial entity.
-
An application to get the status of a component for a given spatial entity.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_SYSTEM_SPATIAL_ENTITY_PROPERTIES_FB
-
XR_TYPE_SPATIAL_ANCHOR_CREATE_INFO_FB
-
XR_TYPE_SPACE_COMPONENT_STATUS_SET_INFO_FB
-
XR_TYPE_SPACE_COMPONENT_STATUS_FB
-
XR_TYPE_EVENT_DATA_SPATIAL_ANCHOR_CREATE_COMPLETE_FB
-
XR_TYPE_EVENT_DATA_SPACE_SET_STATUS_COMPLETE_FB
XrResult enumeration is extended with:
-
XR_ERROR_SPACE_COMPONENT_NOT_SUPPORTED_FB
-
XR_ERROR_SPACE_COMPONENT_NOT_ENABLED_FB
-
XR_ERROR_SPACE_COMPONENT_STATUS_PENDING_FB
-
XR_ERROR_SPACE_COMPONENT_STATUS_ALREADY_SET_FB
New Enums
// Provided by XR_FB_spatial_entity
typedef enum XrSpaceComponentTypeFB {
XR_SPACE_COMPONENT_TYPE_LOCATABLE_FB = 0,
XR_SPACE_COMPONENT_TYPE_STORABLE_FB = 1,
XR_SPACE_COMPONENT_TYPE_SHARABLE_FB = 2,
XR_SPACE_COMPONENT_TYPE_BOUNDED_2D_FB = 3,
XR_SPACE_COMPONENT_TYPE_BOUNDED_3D_FB = 4,
XR_SPACE_COMPONENT_TYPE_SEMANTIC_LABELS_FB = 5,
XR_SPACE_COMPONENT_TYPE_ROOM_LAYOUT_FB = 6,
XR_SPACE_COMPONENT_TYPE_SPACE_CONTAINER_FB = 7,
// Provided by XR_META_spatial_entity_mesh
XR_SPACE_COMPONENT_TYPE_TRIANGLE_MESH_META = 1000269000,
XR_SPACE_COMPONENT_TYPE_MAX_ENUM_FB = 0x7FFFFFFF
} XrSpaceComponentTypeFB;
The XrSpaceComponentTypeFB enumeration specifies the component interfaces attached to a spatial entity.
New Base Types
The XrAsyncRequestIdFB base type is defined as:
// Provided by XR_FB_spatial_entity
XR_DEFINE_ATOM(XrAsyncRequestIdFB)
Represents a request to the spatial entity system. Several functions in this and other extensions will populate an output variable of this type so that an application can use it when referring to a specific request.
New Structures
The XrSystemSpatialEntityPropertiesFB structure is defined as:
// Provided by XR_FB_spatial_entity
typedef struct XrSystemSpatialEntityPropertiesFB {
XrStructureType type;
const void* next;
XrBool32 supportsSpatialEntity;
} XrSystemSpatialEntityPropertiesFB;
An application can inspect whether the system is capable of spatial entity operations by chaining an XrSystemSpatialEntityPropertiesFB structure to XrSystemProperties when calling xrGetSystemProperties.
If a runtime returns XR_FALSE for supportsSpatialEntity, the
runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrGetSpaceUuidFB.
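As a non-normative illustration, the capability check might look like the following sketch, which assumes `instance` and `systemId` are valid handles obtained earlier via xrCreateInstance and xrGetSystem:

```c
// Sketch: query whether the system supports spatial entities.
XrSystemSpatialEntityPropertiesFB spatialEntityProps = {
    .type = XR_TYPE_SYSTEM_SPATIAL_ENTITY_PROPERTIES_FB};
XrSystemProperties systemProps = {
    .type = XR_TYPE_SYSTEM_PROPERTIES, .next = &spatialEntityProps};
XrResult res = xrGetSystemProperties(instance, systemId, &systemProps);
if (XR_SUCCEEDED(res) && spatialEntityProps.supportsSpatialEntity) {
    // Spatial entity functions may be used with this system.
}
```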
The XrSpatialAnchorCreateInfoFB structure is defined as:
// Provided by XR_FB_spatial_entity
typedef struct XrSpatialAnchorCreateInfoFB {
XrStructureType type;
const void* next;
XrSpace space;
XrPosef poseInSpace;
XrTime time;
} XrSpatialAnchorCreateInfoFB;
Parameters to create a new spatial anchor.
The XrSpaceComponentStatusSetInfoFB structure is defined as:
// Provided by XR_FB_spatial_entity
typedef struct XrSpaceComponentStatusSetInfoFB {
XrStructureType type;
const void* next;
XrSpaceComponentTypeFB componentType;
XrBool32 enabled;
XrDuration timeout;
} XrSpaceComponentStatusSetInfoFB;
Enables or disables the specified component for the specified spatial entity.
The XrSpaceComponentStatusFB structure is defined as:
// Provided by XR_FB_spatial_entity
typedef struct XrSpaceComponentStatusFB {
XrStructureType type;
void* next;
XrBool32 enabled;
XrBool32 changePending;
} XrSpaceComponentStatusFB;
It holds information on the current state of a component.
The XrEventDataSpatialAnchorCreateCompleteFB structure is defined as:
// Provided by XR_FB_spatial_entity
typedef struct XrEventDataSpatialAnchorCreateCompleteFB {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB requestId;
XrResult result;
XrSpace space;
XrUuidEXT uuid;
} XrEventDataSpatialAnchorCreateCompleteFB;
It describes the result of a request to create a new spatial anchor. Once this event is posted, it is the application's responsibility to take ownership of the XrSpace. The XrSession passed into xrCreateSpatialAnchorFB is the parent handle of the newly created XrSpace.
The XrEventDataSpaceSetStatusCompleteFB structure is defined as:
// Provided by XR_FB_spatial_entity
typedef struct XrEventDataSpaceSetStatusCompleteFB {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB requestId;
XrResult result;
XrSpace space;
XrUuidEXT uuid;
XrSpaceComponentTypeFB componentType;
XrBool32 enabled;
} XrEventDataSpaceSetStatusCompleteFB;
It describes the result of a request to enable or disable a component of a spatial entity.
New Functions
The xrCreateSpatialAnchorFB function is defined as:
// Provided by XR_FB_spatial_entity
XrResult xrCreateSpatialAnchorFB(
XrSession session,
const XrSpatialAnchorCreateInfoFB* info,
XrAsyncRequestIdFB* requestId);
Creates a Spatial Anchor using the specified tracking origin and pose
relative to the specified tracking origin.
The anchor will be locatable at the time of creation, and the 6 DOF pose
relative to the tracking origin can be queried using the
xrLocateSpace function.
This operation is asynchronous and the runtime must post an
XrEventDataSpatialAnchorCreateCompleteFB event when the operation
completes successfully or encounters an error.
If this function returns a failure code, no event is posted.
The requestId can be used to later refer to the request, such as
identifying which request has completed when an
XrEventDataSpatialAnchorCreateCompleteFB is posted to the event queue.
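A non-normative sketch of the creation flow follows; it assumes `session`, `localSpace` (an XrSpace used as the tracking origin), and `predictedDisplayTime` are valid values obtained elsewhere:

```c
// Sketch: request creation of a spatial anchor at an identity pose.
XrSpatialAnchorCreateInfoFB createInfo = {
    .type = XR_TYPE_SPATIAL_ANCHOR_CREATE_INFO_FB,
    .space = localSpace,
    .poseInSpace = {.orientation = {0, 0, 0, 1}, .position = {0, 0, 0}},
    .time = predictedDisplayTime};
XrAsyncRequestIdFB requestId;
XrResult res = xrCreateSpatialAnchorFB(session, &createInfo, &requestId);

// Later, in the event loop, match the request to its completion event:
// if (event.type == XR_TYPE_EVENT_DATA_SPATIAL_ANCHOR_CREATE_COMPLETE_FB) {
//     const XrEventDataSpatialAnchorCreateCompleteFB* complete =
//         (const XrEventDataSpatialAnchorCreateCompleteFB*)&event;
//     if (complete->requestId == requestId && XR_SUCCEEDED(complete->result)) {
//         // Take ownership of complete->space; remember complete->uuid.
//     }
// }
```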
The xrGetSpaceUuidFB function is defined as:
// Provided by XR_FB_spatial_entity
XrResult xrGetSpaceUuidFB(
XrSpace space,
XrUuidEXT* uuid);
Gets the UUID for a spatial entity.
If this space was previously created as a spatial anchor, uuid must
be equal to the XrEventDataSpatialAnchorCreateCompleteFB::uuid
in the event corresponding to the creation of that space.
Subsequent calls to xrGetSpaceUuidFB using the same XrSpace
must return the same XrUuidEXT.
The xrEnumerateSpaceSupportedComponentsFB function is defined as:
// Provided by XR_FB_spatial_entity
XrResult xrEnumerateSpaceSupportedComponentsFB(
XrSpace space,
uint32_t componentTypeCapacityInput,
uint32_t* componentTypeCountOutput,
XrSpaceComponentTypeFB* componentTypes);
Lists any component types that an entity supports. The list of component types available for an entity depends on which extensions are enabled. Component types must not be enumerated unless the corresponding extension that defines them is also enabled.
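Enumeration follows the usual OpenXR two-call idiom, sketched non-normatively below (assuming `anchorSpace` is a valid spatial entity XrSpace and `<stdlib.h>` is included):

```c
// Sketch of the two-call idiom for enumerating supported components.
uint32_t componentCount = 0;
xrEnumerateSpaceSupportedComponentsFB(anchorSpace, 0, &componentCount, NULL);
XrSpaceComponentTypeFB* components =
    malloc(componentCount * sizeof(XrSpaceComponentTypeFB));
xrEnumerateSpaceSupportedComponentsFB(
    anchorSpace, componentCount, &componentCount, components);
for (uint32_t i = 0; i < componentCount; ++i) {
    // Inspect components[i], e.g. XR_SPACE_COMPONENT_TYPE_STORABLE_FB.
}
free(components);
```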
The xrSetSpaceComponentStatusFB function is defined as:
// Provided by XR_FB_spatial_entity
XrResult xrSetSpaceComponentStatusFB(
XrSpace space,
const XrSpaceComponentStatusSetInfoFB* info,
XrAsyncRequestIdFB* requestId);
Enables or disables the specified component for the specified entity.
This operation is asynchronous and always returns immediately, regardless of
the value of XrSpaceComponentStatusSetInfoFB::timeout.
The requestId can be used to later refer to the request, such as
identifying which request has completed when an
XrEventDataSpaceSetStatusCompleteFB is posted to the event queue.
If this function returns a failure code, no event is posted.
This function must return XR_ERROR_SPACE_COMPONENT_NOT_SUPPORTED_FB
if the XrSpace does not support the specified component type.
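For illustration only, enabling the storable component might be sketched as follows (assuming `anchorSpace` is a valid spatial entity XrSpace that supports the component):

```c
// Sketch: request that the storable component be enabled on an entity.
XrSpaceComponentStatusSetInfoFB setInfo = {
    .type = XR_TYPE_SPACE_COMPONENT_STATUS_SET_INFO_FB,
    .componentType = XR_SPACE_COMPONENT_TYPE_STORABLE_FB,
    .enabled = XR_TRUE,
    .timeout = XR_INFINITE_DURATION};  // do not time the request out
XrAsyncRequestIdFB requestId;
XrResult res = xrSetSpaceComponentStatusFB(anchorSpace, &setInfo, &requestId);
// Completion is reported via XrEventDataSpaceSetStatusCompleteFB.
```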
The xrGetSpaceComponentStatusFB function is defined as:
// Provided by XR_FB_spatial_entity
XrResult xrGetSpaceComponentStatusFB(
XrSpace space,
XrSpaceComponentTypeFB componentType,
XrSpaceComponentStatusFB* status);
Gets the current status of the specified component for the specified entity.
This function must return XR_ERROR_SPACE_COMPONENT_NOT_SUPPORTED_FB
if the XrSpace does not support the specified component type.
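A non-normative sketch of a status check (again assuming `anchorSpace` is a valid spatial entity XrSpace):

```c
// Sketch: check whether the locatable component is currently enabled.
XrSpaceComponentStatusFB status = {.type = XR_TYPE_SPACE_COMPONENT_STATUS_FB};
XrResult res = xrGetSpaceComponentStatusFB(
    anchorSpace, XR_SPACE_COMPONENT_TYPE_LOCATABLE_FB, &status);
if (XR_SUCCEEDED(res) && status.enabled && !status.changePending) {
    // The entity can be located with xrLocateSpace.
}
```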
Issues
Version History
- Revision 1, 2022-01-22 (John Schofield)
  - Initial draft
- Revision 2, 2023-01-18 (Andrew Kim)
  - Added a new component enum value
- Revision 3, 2023-01-30 (Wenlin Mao)
  - Dropped the requirement that XR_EXT_uuid must be enabled
12.102. XR_FB_spatial_entity_container
- Name String: XR_FB_spatial_entity_container
- Extension Type: Instance extension
- Registered Extension Number: 200
- Revision: 2
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Contributors:
  - John Schofield, Facebook
  - Andrew Kim, Facebook
  - Yuichi Taguchi, Facebook
Overview
This extension expands on the concept of spatial entities to include a way for one spatial entity to contain multiple child spatial entities, forming a hierarchy.
In order to enable the functionality of this extension, the application must pass the name of the extension into xrCreateInstance via the XrInstanceCreateInfo::enabledExtensionNames parameter as indicated in the Extensions section.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_SPACE_CONTAINER_FB
New Enums
New Structures
The XrSpaceContainerFB structure is defined as:
// Provided by XR_FB_spatial_entity_container
typedef struct XrSpaceContainerFB {
XrStructureType type;
const void* next;
uint32_t uuidCapacityInput;
uint32_t uuidCountOutput;
XrUuidEXT* uuids;
} XrSpaceContainerFB;
The XrSpaceContainerFB structure can be used by an application to perform the two calls required to obtain information about which spatial entities are contained by a specified spatial entity.
New Functions
The xrGetSpaceContainerFB function is defined as:
// Provided by XR_FB_spatial_entity_container
XrResult xrGetSpaceContainerFB(
XrSession session,
XrSpace space,
XrSpaceContainerFB* spaceContainerOutput);
The xrGetSpaceContainerFB function is used by an application to perform the two calls required to obtain information about which spatial entities are contained by a specified spatial entity.
The XR_SPACE_COMPONENT_TYPE_SPACE_CONTAINER_FB component type must be
enabled, otherwise this function will return
XR_ERROR_SPACE_COMPONENT_NOT_ENABLED_FB.
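The two calls can be sketched non-normatively as follows (assuming `session` and `containerSpace` are valid, the space-container component is enabled on `containerSpace`, and `<stdlib.h>` is included):

```c
// Sketch of the two-call idiom for retrieving contained entity UUIDs.
XrSpaceContainerFB container = {.type = XR_TYPE_SPACE_CONTAINER_FB};
xrGetSpaceContainerFB(session, containerSpace, &container);  // sizing call
container.uuidCapacityInput = container.uuidCountOutput;
container.uuids = malloc(container.uuidCountOutput * sizeof(XrUuidEXT));
xrGetSpaceContainerFB(session, containerSpace, &container);  // fill call
// container.uuids now holds the UUIDs of the contained spatial entities.
free(container.uuids);
```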
Issues
Version History
- Revision 1, 2022-03-09 (John Schofield)
  - Initial draft
- Revision 2, 2022-05-31 (John Schofield)
  - Fix types of XrSpaceContainerFB fields
12.103. XR_FB_spatial_entity_query
- Name String: XR_FB_spatial_entity_query
- Extension Type: Instance extension
- Registered Extension Number: 157
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Contributors:
  - John Schofield, Facebook
  - Andrew Kim, Facebook
  - Yuichi Taguchi, Facebook
  - Cass Everitt, Facebook
  - Curtis Arink, Facebook
Overview
This extension enables an application to discover persistent spatial entities in the area and restore them. Using the query system, the application can load persistent spatial entities from storage. The query system consists of a set of filters to define the spatial entity search query and an operation that needs to be performed on the search results.
In order to enable the functionality of this extension, the application must pass the name of the extension into xrCreateInstance via the XrInstanceCreateInfo::enabledExtensionNames parameter as indicated in the Extensions section.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_SPACE_QUERY_INFO_FB
- XR_TYPE_SPACE_QUERY_RESULTS_FB
- XR_TYPE_SPACE_STORAGE_LOCATION_FILTER_INFO_FB
- XR_TYPE_SPACE_UUID_FILTER_INFO_FB
- XR_TYPE_SPACE_COMPONENT_FILTER_INFO_FB
- XR_TYPE_EVENT_DATA_SPACE_QUERY_RESULTS_AVAILABLE_FB
- XR_TYPE_EVENT_DATA_SPACE_QUERY_COMPLETE_FB
New Enums
// Provided by XR_FB_spatial_entity_query
typedef enum XrSpaceQueryActionFB {
XR_SPACE_QUERY_ACTION_LOAD_FB = 0,
XR_SPACE_QUERY_ACTION_MAX_ENUM_FB = 0x7FFFFFFF
} XrSpaceQueryActionFB;
The XrSpaceQueryActionFB enumeration specifies the type of query being performed.
New Structures
The XrSpaceQueryInfoBaseHeaderFB structure is defined as:
// Provided by XR_FB_spatial_entity_query
typedef struct XrSpaceQueryInfoBaseHeaderFB {
XrStructureType type;
const void* next;
} XrSpaceQueryInfoBaseHeaderFB;
The XrSpaceQueryInfoBaseHeaderFB is a base structure that is not intended to be directly used, but forms a basis for specific query info types. All query info structures begin with the elements described in the XrSpaceQueryInfoBaseHeaderFB, and a query info pointer must be cast to a pointer to XrSpaceQueryInfoBaseHeaderFB when passing it to the xrQuerySpacesFB function.
The XrSpaceFilterInfoBaseHeaderFB structure is defined as:
// Provided by XR_FB_spatial_entity_query
typedef struct XrSpaceFilterInfoBaseHeaderFB {
XrStructureType type;
const void* next;
} XrSpaceFilterInfoBaseHeaderFB;
The XrSpaceFilterInfoBaseHeaderFB is a base structure that is not
intended to be directly used, but forms a basis for specific filter info
types.
All filter info structures begin with the elements described in the
XrSpaceFilterInfoBaseHeaderFB, and a filter info pointer must be cast
to a pointer to XrSpaceFilterInfoBaseHeaderFB when populating
XrSpaceQueryInfoFB::filter and
XrSpaceQueryInfoFB::excludeFilter to pass to the
xrQuerySpacesFB function.
The XrSpaceQueryInfoFB structure is defined as:
// Provided by XR_FB_spatial_entity_query
typedef struct XrSpaceQueryInfoFB {
XrStructureType type;
const void* next;
XrSpaceQueryActionFB queryAction;
uint32_t maxResultCount;
XrDuration timeout;
const XrSpaceFilterInfoBaseHeaderFB* filter;
const XrSpaceFilterInfoBaseHeaderFB* excludeFilter;
} XrSpaceQueryInfoFB;
May be used to query for spaces and perform a specific action on the spaces
returned.
The available actions are enumerated in XrSpaceQueryActionFB.
The filter info provided to the filter member of the struct is used as
an inclusive filter.
The filter info provided to the excludeFilter member of the structure
is used to exclude spaces from the results returned from the filter.
All spaces that match the criteria in filter, and that do not match
the criteria in excludeFilter, must be included in the results
returned.
This allows for more selective queries.
The XrSpaceStorageLocationFilterInfoFB structure is defined as:
// Provided by XR_FB_spatial_entity_query
typedef struct XrSpaceStorageLocationFilterInfoFB {
XrStructureType type;
const void* next;
XrSpaceStorageLocationFB location;
} XrSpaceStorageLocationFilterInfoFB;
Extends a query filter to limit a query to a specific storage location.
Set the next pointer of an XrSpaceFilterInfoBaseHeaderFB to
chain this extra filtering functionality.
The XrSpaceUuidFilterInfoFB structure is defined as:
// Provided by XR_FB_spatial_entity_query
typedef struct XrSpaceUuidFilterInfoFB {
XrStructureType type;
const void* next;
uint32_t uuidCount;
XrUuidEXT* uuids;
} XrSpaceUuidFilterInfoFB;
The XrSpaceUuidFilterInfoFB structure is a filter an application can use to find XrSpace entities that match specified UUIDs, to include or exclude them from a query.
The XrSpaceComponentFilterInfoFB structure is defined as:
// Provided by XR_FB_spatial_entity_query
typedef struct XrSpaceComponentFilterInfoFB {
XrStructureType type;
const void* next;
XrSpaceComponentTypeFB componentType;
} XrSpaceComponentFilterInfoFB;
The XrSpaceComponentFilterInfoFB structure is a filter an application
can use to find XrSpace entities which have the componentType
enabled, to include or exclude them from a query.
The XrSpaceQueryResultFB structure is defined as:
// Provided by XR_FB_spatial_entity_query
typedef struct XrSpaceQueryResultFB {
XrSpace space;
XrUuidEXT uuid;
} XrSpaceQueryResultFB;
The XrSpaceQueryResultFB structure is a query result returned in the
xrRetrieveSpaceQueryResultsFB::results output parameter of the
xrRetrieveSpaceQueryResultsFB function.
The XrSpaceQueryResultsFB structure is defined as:
// Provided by XR_FB_spatial_entity_query
typedef struct XrSpaceQueryResultsFB {
XrStructureType type;
void* next;
uint32_t resultCapacityInput;
uint32_t resultCountOutput;
XrSpaceQueryResultFB* results;
} XrSpaceQueryResultsFB;
The XrSpaceQueryResultsFB structure is used by the xrRetrieveSpaceQueryResultsFB function to retrieve query results.
The XrEventDataSpaceQueryResultsAvailableFB structure is defined as:
// Provided by XR_FB_spatial_entity_query
typedef struct XrEventDataSpaceQueryResultsAvailableFB {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB requestId;
} XrEventDataSpaceQueryResultsAvailableFB;
It indicates a query request has produced some number of results. If a query yields results this event must be delivered before the XrEventDataSpaceQueryCompleteFB event is delivered. Call xrRetrieveSpaceQueryResultsFB to retrieve those results.
The XrEventDataSpaceQueryCompleteFB structure is defined as:
// Provided by XR_FB_spatial_entity_query
typedef struct XrEventDataSpaceQueryCompleteFB {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB requestId;
XrResult result;
} XrEventDataSpaceQueryCompleteFB;
It indicates a query request has completed and specifies the request result. This event must be delivered when a query has completed, regardless of the number of results found. If any results have been found, then this event must be delivered after any XrEventDataSpaceQueryResultsAvailableFB events have been delivered.
New Functions
The xrQuerySpacesFB function is defined as:
// Provided by XR_FB_spatial_entity_query
XrResult xrQuerySpacesFB(
XrSession session,
const XrSpaceQueryInfoBaseHeaderFB* info,
XrAsyncRequestIdFB* requestId);
The xrQuerySpacesFB function enables an application to find and
retrieve spatial entities from storage.
Cast an XrSpaceQueryInfoFB pointer to a
XrSpaceQueryInfoBaseHeaderFB pointer to pass as info.
The application should keep the returned requestId for the duration
of the request as it is used to refer to the request when calling
xrRetrieveSpaceQueryResultsFB and is used to map completion events to
the request.
This operation is asynchronous and the runtime must post an
XrEventDataSpaceQueryCompleteFB event when the operation completes
successfully or encounters an error.
If this function returns a failure code, no event is posted.
The runtime must post an XrEventDataSpaceQueryResultsAvailableFB
before XrEventDataSpaceQueryCompleteFB if any results are found.
Once an XrEventDataSpaceQueryResultsAvailableFB event has been posted,
the application may call xrRetrieveSpaceQueryResultsFB to retrieve
the available results.
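As a minimal non-normative sketch, a query with no filters (matching all stored entities, up to a cap) might look like this, assuming `session` is a valid XrSession:

```c
// Sketch: load up to 32 stored spatial entities, with no filters.
// Inclusive/exclusive filters may instead be supplied via .filter and
// .excludeFilter, cast from the specific filter-info structures.
XrSpaceQueryInfoFB queryInfo = {
    .type = XR_TYPE_SPACE_QUERY_INFO_FB,
    .queryAction = XR_SPACE_QUERY_ACTION_LOAD_FB,
    .maxResultCount = 32,
    .timeout = 0,
    .filter = NULL,
    .excludeFilter = NULL};
XrAsyncRequestIdFB requestId;
XrResult res = xrQuerySpacesFB(
    session, (const XrSpaceQueryInfoBaseHeaderFB*)&queryInfo, &requestId);
// XrEventDataSpaceQueryResultsAvailableFB and then
// XrEventDataSpaceQueryCompleteFB events follow.
```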
The xrRetrieveSpaceQueryResultsFB function is defined as:
// Provided by XR_FB_spatial_entity_query
XrResult xrRetrieveSpaceQueryResultsFB(
XrSession session,
XrAsyncRequestIdFB requestId,
XrSpaceQueryResultsFB* results);
Allows an application to retrieve all available results for a specified
query.
Call this function once to get the number of results found and then once
more to copy the results into a buffer provided by the application.
The number of results will not change between the two calls used to retrieve
results.
This function must only retrieve each query result once.
After the application has used this function to retrieve a query result, the
runtime frees its copy.
The runtime must return XR_ERROR_VALIDATION_FAILURE if
requestId refers to a request that is not yet complete, a request for
which results have already been retrieved, or if requestId does not
refer to a known request.
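Retrieval again follows the two-call idiom, sketched non-normatively below (assuming `requestId` came from an XrEventDataSpaceQueryResultsAvailableFB event and `<stdlib.h>` is included):

```c
// Sketch of the two-call idiom for draining query results.
XrSpaceQueryResultsFB queryResults = {.type = XR_TYPE_SPACE_QUERY_RESULTS_FB};
xrRetrieveSpaceQueryResultsFB(session, requestId, &queryResults);  // sizing
queryResults.resultCapacityInput = queryResults.resultCountOutput;
queryResults.results =
    malloc(queryResults.resultCountOutput * sizeof(XrSpaceQueryResultFB));
xrRetrieveSpaceQueryResultsFB(session, requestId, &queryResults);  // fill
for (uint32_t i = 0; i < queryResults.resultCountOutput; ++i) {
    // queryResults.results[i].space / .uuid identify each entity found.
}
free(queryResults.results);
```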
Issues
Version History
- Revision 1, 2022-01-22 (John Schofield)
  - Initial draft
12.104. XR_FB_spatial_entity_sharing
- Name String: XR_FB_spatial_entity_sharing
- Extension Type: Instance extension
- Registered Extension Number: 170
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Contributors:
  - John Schofield, Facebook
  - Andrew Kim, Facebook
Overview
This extension enables spatial entities to be shared between users.
If the XR_SPACE_COMPONENT_TYPE_SHARABLE_FB component has been enabled
on the spatial entity, application developers may share XrSpace
entities between users.
In order to enable the functionality of this extension, the application must pass the name of the extension into xrCreateInstance via the XrInstanceCreateInfo::enabledExtensionNames parameter as indicated in the Extensions section.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_SPACE_SHARE_INFO_FB
- XR_TYPE_EVENT_DATA_SPACE_SHARE_COMPLETE_FB
XrResult enumeration is extended with:
- XR_ERROR_SPACE_MAPPING_INSUFFICIENT_FB
- XR_ERROR_SPACE_LOCALIZATION_FAILED_FB
- XR_ERROR_SPACE_NETWORK_TIMEOUT_FB
- XR_ERROR_SPACE_NETWORK_REQUEST_FAILED_FB
- XR_ERROR_SPACE_CLOUD_STORAGE_DISABLED_FB
New Enums
New Base Types
New Structures
The XrSpaceShareInfoFB structure is defined as:
// Provided by XR_FB_spatial_entity_sharing
typedef struct XrSpaceShareInfoFB {
XrStructureType type;
const void* next;
uint32_t spaceCount;
XrSpace* spaces;
uint32_t userCount;
XrSpaceUserFB* users;
} XrSpaceShareInfoFB;
The XrSpaceShareInfoFB structure describes a request to share one or more spatial entities with one or more users.
The XrEventDataSpaceShareCompleteFB structure is defined as:
// Provided by XR_FB_spatial_entity_sharing
typedef struct XrEventDataSpaceShareCompleteFB {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB requestId;
XrResult result;
} XrEventDataSpaceShareCompleteFB;
It indicates that the request to share one or more spatial entities has
completed.
The application can use result to check if the request was successful
or if an error occurred.
New Functions
The xrShareSpacesFB function is defined as:
// Provided by XR_FB_spatial_entity_sharing
XrResult xrShareSpacesFB(
XrSession session,
const XrSpaceShareInfoFB* info,
XrAsyncRequestIdFB* requestId);
This operation is asynchronous and the runtime must post an
XrEventDataSpaceShareCompleteFB event when the operation completes
successfully or encounters an error.
If this function returns a failure code, no event is posted.
The requestId can be used to later refer to the request, such as
identifying which request has completed when an
XrEventDataSpaceShareCompleteFB is posted to the event queue.
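A non-normative sketch of sharing one anchor with one user follows; it assumes `anchorSpace` has the sharable component enabled and `user` is an XrSpaceUserFB created via the XR_FB_spatial_entity_user extension:

```c
// Sketch: share one anchor with one user.
XrSpace spaces[] = {anchorSpace};
XrSpaceUserFB users[] = {user};
XrSpaceShareInfoFB shareInfo = {
    .type = XR_TYPE_SPACE_SHARE_INFO_FB,
    .spaceCount = 1, .spaces = spaces,
    .userCount = 1, .users = users};
XrAsyncRequestIdFB requestId;
XrResult res = xrShareSpacesFB(session, &shareInfo, &requestId);
// Completion is reported via XrEventDataSpaceShareCompleteFB.
```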
Issues
Version History
- Revision 1, 2022-06-08 (John Schofield)
  - Initial draft
12.105. XR_FB_spatial_entity_storage
- Name String: XR_FB_spatial_entity_storage
- Extension Type: Instance extension
- Registered Extension Number: 159
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Contributors:
  - John Schofield, Facebook
  - Andrew Kim, Facebook
  - Yuichi Taguchi, Facebook
  - Cass Everitt, Facebook
  - Curtis Arink, Facebook
Overview
This extension enables spatial entities to be stored and persisted across
sessions.
If the XR_SPACE_COMPONENT_TYPE_STORABLE_FB component has been enabled
on the spatial entity, application developers may save, load, and erase
persisted XrSpace entities.
In order to enable the functionality of this extension, the application must pass the name of the extension into xrCreateInstance via the XrInstanceCreateInfo::enabledExtensionNames parameter as indicated in the Extensions section.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_SPACE_SAVE_INFO_FB
- XR_TYPE_SPACE_ERASE_INFO_FB
- XR_TYPE_EVENT_DATA_SPACE_SAVE_COMPLETE_FB
- XR_TYPE_EVENT_DATA_SPACE_ERASE_COMPLETE_FB
New Enums
// Provided by XR_FB_spatial_entity_storage
typedef enum XrSpaceStorageLocationFB {
XR_SPACE_STORAGE_LOCATION_INVALID_FB = 0,
XR_SPACE_STORAGE_LOCATION_LOCAL_FB = 1,
XR_SPACE_STORAGE_LOCATION_CLOUD_FB = 2,
XR_SPACE_STORAGE_LOCATION_MAX_ENUM_FB = 0x7FFFFFFF
} XrSpaceStorageLocationFB;
The XrSpaceStorageLocationFB enumeration contains the storage locations used to store, erase, and query spatial entities.
// Provided by XR_FB_spatial_entity_storage
typedef enum XrSpacePersistenceModeFB {
XR_SPACE_PERSISTENCE_MODE_INVALID_FB = 0,
XR_SPACE_PERSISTENCE_MODE_INDEFINITE_FB = 1,
XR_SPACE_PERSISTENCE_MODE_MAX_ENUM_FB = 0x7FFFFFFF
} XrSpacePersistenceModeFB;
The XrSpacePersistenceModeFB enumeration specifies the persistence mode for the save operation.
New Structures
The XrSpaceSaveInfoFB structure is defined as:
// Provided by XR_FB_spatial_entity_storage
typedef struct XrSpaceSaveInfoFB {
XrStructureType type;
const void* next;
XrSpace space;
XrSpaceStorageLocationFB location;
XrSpacePersistenceModeFB persistenceMode;
} XrSpaceSaveInfoFB;
The XrSpaceSaveInfoFB structure contains information used to save the spatial entity.
The XrSpaceEraseInfoFB structure is defined as:
// Provided by XR_FB_spatial_entity_storage
typedef struct XrSpaceEraseInfoFB {
XrStructureType type;
const void* next;
XrSpace space;
XrSpaceStorageLocationFB location;
} XrSpaceEraseInfoFB;
The XrSpaceEraseInfoFB structure contains information used to erase the spatial entity.
The XrEventDataSpaceSaveCompleteFB structure is defined as:
// Provided by XR_FB_spatial_entity_storage
typedef struct XrEventDataSpaceSaveCompleteFB {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB requestId;
XrResult result;
XrSpace space;
XrUuidEXT uuid;
XrSpaceStorageLocationFB location;
} XrEventDataSpaceSaveCompleteFB;
The save result event contains the success of the save/write operation to the specified location, as well as the XrSpace handle on which the save operation was attempted, the unique UUID, and the async request ID from the initial calling function.
The XrEventDataSpaceEraseCompleteFB structure is defined as:
// Provided by XR_FB_spatial_entity_storage
typedef struct XrEventDataSpaceEraseCompleteFB {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB requestId;
XrResult result;
XrSpace space;
XrUuidEXT uuid;
XrSpaceStorageLocationFB location;
} XrEventDataSpaceEraseCompleteFB;
The erase result event contains the success of the erase operation from the specified storage location. It also provides the UUID of the entity and the async request ID from the initial calling function.
New Functions
The xrSaveSpaceFB function is defined as:
// Provided by XR_FB_spatial_entity_storage
XrResult xrSaveSpaceFB(
XrSession session,
const XrSpaceSaveInfoFB* info,
XrAsyncRequestIdFB* requestId);
The xrSaveSpaceFB function persists the spatial entity at the
specified location with the specified mode.
The runtime must return XR_ERROR_VALIDATION_FAILURE if
XrSpaceSaveInfoFB::space is XR_NULL_HANDLE or otherwise
invalid.
The runtime must return XR_ERROR_VALIDATION_FAILURE if
XrSpaceSaveInfoFB::location or
XrSpaceSaveInfoFB::persistenceMode is invalid.
This operation is asynchronous and the runtime must post an
XrEventDataSpaceSaveCompleteFB event when the operation completes
successfully or encounters an error.
If this function returns a failure code, no event is posted.
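For illustration only, saving an anchor to local storage might be sketched as follows (assuming the storable component has already been enabled on `anchorSpace`):

```c
// Sketch: save an anchor to local storage, persisted indefinitely.
XrSpaceSaveInfoFB saveInfo = {
    .type = XR_TYPE_SPACE_SAVE_INFO_FB,
    .space = anchorSpace,
    .location = XR_SPACE_STORAGE_LOCATION_LOCAL_FB,
    .persistenceMode = XR_SPACE_PERSISTENCE_MODE_INDEFINITE_FB};
XrAsyncRequestIdFB requestId;
XrResult res = xrSaveSpaceFB(session, &saveInfo, &requestId);
// Completion is reported via XrEventDataSpaceSaveCompleteFB.
```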
The xrEraseSpaceFB function is defined as:
// Provided by XR_FB_spatial_entity_storage
XrResult xrEraseSpaceFB(
XrSession session,
const XrSpaceEraseInfoFB* info,
XrAsyncRequestIdFB* requestId);
The xrEraseSpaceFB function erases a spatial entity from storage at
the specified location.
The XrSpace remains valid in the current session until the application
destroys it or the session ends.
The runtime must return XR_ERROR_VALIDATION_FAILURE if
XrSpaceEraseInfoFB::space is XR_NULL_HANDLE or otherwise
invalid.
The runtime must return XR_ERROR_VALIDATION_FAILURE if
XrSpaceEraseInfoFB::location is invalid.
This operation is asynchronous and the runtime must post an
XrEventDataSpaceEraseCompleteFB event when the operation completes
successfully or encounters an error.
If this function returns a failure code, no event is posted.
Issues
Version History
- Revision 1, 2022-01-22 (John Schofield)
  - Initial draft
12.106. XR_FB_spatial_entity_storage_batch
- Name String: XR_FB_spatial_entity_storage_batch
- Extension Type: Instance extension
- Registered Extension Number: 239
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Contributors:
  - John Schofield, Facebook
  - Andrew Kim, Facebook
Overview
This extension enables multiple spatial entities at a time to be persisted
across sessions.
If the XR_SPACE_COMPONENT_TYPE_STORABLE_FB component has been enabled
on the spatial entity, application developers may save and erase
XrSpace entities.
In order to enable the functionality of this extension, the application must pass the name of the extension into xrCreateInstance via the XrInstanceCreateInfo::enabledExtensionNames parameter as indicated in the Extensions section.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_SPACE_LIST_SAVE_INFO_FB
- XR_TYPE_EVENT_DATA_SPACE_LIST_SAVE_COMPLETE_FB
New Enums
New Structures
The XrSpaceListSaveInfoFB structure is defined as:
// Provided by XR_FB_spatial_entity_storage_batch
typedef struct XrSpaceListSaveInfoFB {
XrStructureType type;
const void* next;
uint32_t spaceCount;
XrSpace* spaces;
XrSpaceStorageLocationFB location;
} XrSpaceListSaveInfoFB;
The XrSpaceListSaveInfoFB structure contains information used to save multiple spatial entities.
The XrEventDataSpaceListSaveCompleteFB structure is defined as:
// Provided by XR_FB_spatial_entity_storage_batch
typedef struct XrEventDataSpaceListSaveCompleteFB {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB requestId;
XrResult result;
} XrEventDataSpaceListSaveCompleteFB;
This completion event indicates that a request to save a list of
XrSpace objects has completed.
The application can use result to check if the request was successful
or if an error occurred.
New Functions
The xrSaveSpaceListFB function is defined as:
// Provided by XR_FB_spatial_entity_storage_batch
XrResult xrSaveSpaceListFB(
XrSession session,
const XrSpaceListSaveInfoFB* info,
XrAsyncRequestIdFB* requestId);
The xrSaveSpaceListFB function persists the specified spatial entities
at the specified storage location.
The runtime must return XR_ERROR_VALIDATION_FAILURE if
XrSpaceListSaveInfoFB::location is invalid.
This operation is asynchronous and the runtime must post an
XrEventDataSpaceListSaveCompleteFB event when the operation completes
successfully or encounters an error.
If this function returns a failure code, no event is posted.
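A non-normative sketch of a batch save; `anchorA` and `anchorB` are hypothetical XrSpace handles with the storable component enabled:

```c
// Sketch: save several anchors to local storage in one request.
XrSpace spaces[] = {anchorA, anchorB};
XrSpaceListSaveInfoFB listSaveInfo = {
    .type = XR_TYPE_SPACE_LIST_SAVE_INFO_FB,
    .spaceCount = 2,
    .spaces = spaces,
    .location = XR_SPACE_STORAGE_LOCATION_LOCAL_FB};
XrAsyncRequestIdFB requestId;
XrResult res = xrSaveSpaceListFB(session, &listSaveInfo, &requestId);
// Completion is reported via XrEventDataSpaceListSaveCompleteFB.
```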
Issues
Version History
- Revision 1, 2022-06-08 (John Schofield)
  - Initial draft
12.107. XR_FB_spatial_entity_user
- Name String: XR_FB_spatial_entity_user
- Extension Type: Instance extension
- Registered Extension Number: 242
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Contributors:
  - John Schofield, Facebook
  - Andrew Kim, Facebook
  - Andreas Selvik, Facebook
Overview
This extension enables creation and management of user objects which can be used by the application to reference a user other than the current user.
In order to enable the functionality of this extension, the application must pass the name of the extension into xrCreateInstance via the XrInstanceCreateInfo::enabledExtensionNames parameter as indicated in the Extensions section.
New Object Types
XR_DEFINE_HANDLE(XrSpaceUserFB)
Represents a user with which the application can interact using various
extensions including XR_FB_spatial_entity_sharing.
See xrCreateSpaceUserFB for how to declare a user.
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_SPACE_USER_CREATE_INFO_FB
New Enums
New Base Types
The XrSpaceUserIdFB type is defined as:
typedef uint64_t XrSpaceUserIdFB;
An implementation-defined ID of the underlying user.
New Structures
The XrSpaceUserCreateInfoFB structure is defined as:
// Provided by XR_FB_spatial_entity_user
typedef struct XrSpaceUserCreateInfoFB {
XrStructureType type;
const void* next;
XrSpaceUserIdFB userId;
} XrSpaceUserCreateInfoFB;
The XrSpaceUserCreateInfoFB structure describes a user with which the application can interact.
New Functions
The xrCreateSpaceUserFB function is defined as:
// Provided by XR_FB_spatial_entity_user
XrResult xrCreateSpaceUserFB(
XrSession session,
const XrSpaceUserCreateInfoFB* info,
XrSpaceUserFB* user);
The application can use this function to create a user handle with which it can then interact, such as sharing XrSpace objects.
The xrGetSpaceUserIdFB function is defined as:
// Provided by XR_FB_spatial_entity_user
XrResult xrGetSpaceUserIdFB(
XrSpaceUserFB user,
XrSpaceUserIdFB* userId);
The application can use this function to retrieve the user ID of a given user handle.
The xrDestroySpaceUserFB function is defined as:
// Provided by XR_FB_spatial_entity_user
XrResult xrDestroySpaceUserFB(
XrSpaceUserFB user);
The application should use this function to release resources tied to a given XrSpaceUserFB once the application no longer needs to reference the user.
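The user-handle lifecycle can be sketched non-normatively as follows; `platformUserId` is a hypothetical ID assumed to come from a platform-specific source:

```c
// Sketch of the XrSpaceUserFB lifecycle: create, query, use, destroy.
XrSpaceUserCreateInfoFB userCreateInfo = {
    .type = XR_TYPE_SPACE_USER_CREATE_INFO_FB, .userId = platformUserId};
XrSpaceUserFB user;
XrResult res = xrCreateSpaceUserFB(session, &userCreateInfo, &user);

XrSpaceUserIdFB retrievedId;
xrGetSpaceUserIdFB(user, &retrievedId);  // retrieve the underlying user ID

// ... use `user`, e.g. with xrShareSpacesFB from
// XR_FB_spatial_entity_sharing ...

xrDestroySpaceUserFB(user);  // release once no longer referenced
```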
Issues
Version History
- Revision 1, 2022-07-28 (John Schofield)
  - Initial draft
12.108. XR_FB_swapchain_update_state
- Name String: XR_FB_swapchain_update_state
- Extension Type: Instance extension
- Registered Extension Number: 72
- Revision: 3
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Contributors:
  - Cass Everitt, Facebook
  - Gloria Kennickell, Facebook
Overview
This extension enables the application to modify and query specific mutable state associated with a swapchain.
In order to enable the functionality of this extension, the application
must pass the name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
The XrSwapchainStateBaseHeaderFB structure is defined as:
// Provided by XR_FB_swapchain_update_state
typedef struct XrSwapchainStateBaseHeaderFB {
    XrStructureType type;
    void* next;
} XrSwapchainStateBaseHeaderFB;
The XrSwapchainStateBaseHeaderFB is a base structure that can be
overridden by a specific XrSwapchainState* child structure.
New Functions
The xrUpdateSwapchainFB function is defined as:
// Provided by XR_FB_swapchain_update_state
XrResult xrUpdateSwapchainFB(
    XrSwapchain swapchain,
    const XrSwapchainStateBaseHeaderFB* state);
xrUpdateSwapchainFB provides support for an application to update specific mutable state associated with an XrSwapchain.
The xrGetSwapchainStateFB function is defined as:
// Provided by XR_FB_swapchain_update_state
XrResult xrGetSwapchainStateFB(
    XrSwapchain swapchain,
    XrSwapchainStateBaseHeaderFB* state);
xrGetSwapchainStateFB provides support for an application to query specific mutable state associated with an XrSwapchain.
Issues
-
Should we add a method to query the current state?
-
Yes. Given that we allow mutable state to be updated by the application, it is useful to have a query mechanism to get the current state for all state structures.
-
Version History
-
Revision 1, 2021-04-16 (Gloria Kennickell)
-
Initial extension description
-
-
Revision 2, 2021-05-13 (Gloria Kennickell)
-
Add mechanism to query current state for all state structures.
-
-
Revision 3, 2021-05-27 (Gloria Kennickell)
-
Move platform and graphics API specific structs into separate extensions.
-
12.109. XR_FB_swapchain_update_state_android_surface
- Name String
-
XR_FB_swapchain_update_state_android_surface - Extension Type
-
Instance extension
- Registered Extension Number
-
162
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Cass Everitt, Facebook
Gloria Kennickell, Facebook
Overview
This extension enables the application to modify and query specific mutable state associated with an Android surface swapchain, examples include:
-
A video application may need to update the default size of the image buffers associated with an Android Surface Swapchain.
-
A video application may need to communicate a new width and height for an Android Surface Swapchain, as the surface dimensions may be implicitly updated by the producer during the life of the Swapchain. This is important for correct application of the non-normalized imageRect specified via XrSwapchainSubImage.
In order to enable the functionality of this extension, the application
must pass the name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_SWAPCHAIN_STATE_ANDROID_SURFACE_DIMENSIONS_FB
New Enums
New Structures
The XrSwapchainStateAndroidSurfaceDimensionsFB structure is defined as:
// Provided by XR_FB_swapchain_update_state_android_surface
typedef struct XrSwapchainStateAndroidSurfaceDimensionsFB {
    XrStructureType type;
    void* next;
    uint32_t width;
    uint32_t height;
} XrSwapchainStateAndroidSurfaceDimensionsFB;
When XrSwapchainStateAndroidSurfaceDimensionsFB is specified in the call to xrUpdateSwapchainFB, the dimensions provided will be used to update the default size of the image buffers associated with the Android Surface swapchain.
Additionally, the dimensions provided will become the new source of truth for the swapchain width and height, affecting operations such as computing the normalized imageRect for the swapchain.
When XrSwapchainStateAndroidSurfaceDimensionsFB is specified in the call to xrGetSwapchainStateFB, the dimensions will be populated with the current swapchain width and height.
To use XrSwapchainStateAndroidSurfaceDimensionsFB,
XR_USE_PLATFORM_ANDROID must be defined before including
openxr_platform.h.
New Functions
Issues
Version History
-
Revision 1, 2021-05-27 (Gloria Kennickell)
-
Initial draft
-
12.110. XR_FB_swapchain_update_state_opengl_es
- Name String
-
XR_FB_swapchain_update_state_opengl_es - Extension Type
-
Instance extension
- Registered Extension Number
-
163
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Cass Everitt, Facebook
Gloria Kennickell, Facebook
Overview
This extension enables the application to modify and query OpenGL ES-specific mutable state associated with a swapchain, examples include:
-
On platforms where composition runs in a separate process from the application, swapchains must be created in a cross-process friendly way. In such cases, the texture image memory may be shared between processes, but the texture state may not be, so an explicit mechanism to synchronize this texture state between the application and the compositor is required.
In order to enable the functionality of this extension, the application
must pass the name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_SWAPCHAIN_STATE_SAMPLER_OPENGL_ES_FB
New Enums
New Structures
The XrSwapchainStateSamplerOpenGLESFB structure is defined as:
// Provided by XR_FB_swapchain_update_state_opengl_es
typedef struct XrSwapchainStateSamplerOpenGLESFB {
    XrStructureType type;
    void* next;
    EGLenum minFilter;
    EGLenum magFilter;
    EGLenum wrapModeS;
    EGLenum wrapModeT;
    EGLenum swizzleRed;
    EGLenum swizzleGreen;
    EGLenum swizzleBlue;
    EGLenum swizzleAlpha;
    float maxAnisotropy;
    XrColor4f borderColor;
} XrSwapchainStateSamplerOpenGLESFB;
When XrSwapchainStateSamplerOpenGLESFB is specified in the call to xrUpdateSwapchainFB, texture sampler state for all images in the XrSwapchain will be updated for both the application and compositor processes.
For most cases, the sampler state update is only required compositor-side, as that is where the swapchain images are sampled. For completeness, the application-side sampler state is additionally updated to support cases where the application may choose to directly sample the swapchain images.
Applications are expected to handle synchronization of the sampler state update with application-side rendering. Similarly, the compositor will synchronize the sampler state update with rendering of the next compositor frame.
An EGLContext, either the EGLContext bound during
XrSwapchain creation or an EGLContext in the same share group,
is required to be bound on the application calling thread.
Current texture bindings may be altered by the call, including the active
texture.
When XrSwapchainStateSamplerOpenGLESFB is specified in the call to xrGetSwapchainStateFB, the sampler state will be populated with the current swapchain sampler state.
To use XrSwapchainStateSamplerOpenGLESFB,
XR_USE_GRAPHICS_API_OPENGL_ES must be defined before including
openxr_platform.h.
New Functions
Issues
Version History
-
Revision 1, 2021-05-27 (Gloria Kennickell)
-
Initial draft
-
12.111. XR_FB_swapchain_update_state_vulkan
- Name String
-
XR_FB_swapchain_update_state_vulkan - Extension Type
-
Instance extension
- Registered Extension Number
-
164
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Cass Everitt, Facebook
Gloria Kennickell, Facebook
Overview
This extension enables the application to modify and query Vulkan-specific mutable state associated with a swapchain, examples include:
-
On platforms where composition runs in a separate process from the application, swapchains must be created in a cross-process friendly way. In such cases, the texture image memory may be shared between processes, but the texture state may not be, so an explicit mechanism to synchronize this texture state between the application and the compositor is required.
In order to enable the functionality of this extension, the application
must pass the name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_SWAPCHAIN_STATE_SAMPLER_VULKAN_FB
New Enums
New Structures
The XrSwapchainStateSamplerVulkanFB structure is defined as:
// Provided by XR_FB_swapchain_update_state_vulkan
typedef struct XrSwapchainStateSamplerVulkanFB {
    XrStructureType type;
    void* next;
    VkFilter minFilter;
    VkFilter magFilter;
    VkSamplerMipmapMode mipmapMode;
    VkSamplerAddressMode wrapModeS;
    VkSamplerAddressMode wrapModeT;
    VkComponentSwizzle swizzleRed;
    VkComponentSwizzle swizzleGreen;
    VkComponentSwizzle swizzleBlue;
    VkComponentSwizzle swizzleAlpha;
    float maxAnisotropy;
    XrColor4f borderColor;
} XrSwapchainStateSamplerVulkanFB;
When XrSwapchainStateSamplerVulkanFB is specified in the call to xrUpdateSwapchainFB, texture sampler state for all images in the XrSwapchain will be updated for the compositor process. For most cases, the sampler state update is only required compositor-side, as that is where the swapchain images are sampled. If the application requires sampling of the swapchain images, the application will be responsible for updating the texture state using normal Vulkan mechanisms and synchronizing appropriately with application-side rendering.
When XrSwapchainStateSamplerVulkanFB is specified in the call to xrGetSwapchainStateFB, the sampler state will be populated with the current swapchain sampler state.
To use XrSwapchainStateSamplerVulkanFB,
XR_USE_GRAPHICS_API_VULKAN must be defined before including
openxr_platform.h.
New Functions
Issues
Version History
-
Revision 1, 2021-05-27 (Gloria Kennickell)
-
Initial draft
-
12.112. XR_FB_touch_controller_proximity
- Name String
-
XR_FB_touch_controller_proximity - Extension Type
-
Instance extension
- Registered Extension Number
-
207
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2022-09-12
- IP Status
-
No known IP claims.
- Contributors
-
Tony Targonski, Meta Platforms
Aanchal Dalmia, Meta Platforms
Andreas Loeve Selvik, Meta Platforms
John Kearney, Meta Platforms
James Hillery, Meta Platforms
12.112.1. Overview
This extension introduces a new component path, proximity_fb, and adds support for it for the /interaction_profiles/oculus/touch_controller interaction profile.
12.112.2. New Interaction Profile Component Paths
-
proximity_fb - The user is in physical proximity of the input source. This may be present for any kind of input source representing a physical component, such as a button, if the device includes the necessary sensor. The state of a "proximity_fb" component must be XR_TRUE if the same input source is returning XR_TRUE for either a "touch" or any other component that implies physical contact. The runtime may return XR_TRUE for "proximity_fb" when "touch" returns XR_FALSE. This indicates that the user is hovering just above, but not touching, the input source in question. "proximity_fb" components are always boolean.
12.112.3. Interaction Profile Changes
Interaction profile: /interaction_profiles/oculus/touch_controller
Additional supported component paths for the above profile enabled by this extension:
Valid for user paths:
-
/user/hand/left
-
/user/hand/right
On both:
-
…/input/trigger/proximity_fb This represents whether the user is in proximity of the trigger button, usually with their index finger.
-
…/input/thumb_fb/proximity_fb This represents whether the user is in proximity of the input sources at the top of the controller, usually with their thumb.
12.112.4. Example code
The following example code demonstrates detecting when a user lifts their finger off the trigger button.
XrInstance instance; // previously initialized
XrSession session; // previously initialized
XrActionSet inGameActionSet; // previously initialized
XrAction indexProximityAction; // previously initialized
XrAction indexTouchAction; // previously initialized

// ----------
// Bind actions to trigger/proximity_fb and trigger/touch
// ----------
XrPath indexProximityPath, indexTouchPath;
// New component exposed by this extension:
CHK_XR(xrStringToPath(instance, "/user/hand/right/input/trigger/proximity_fb", &indexProximityPath));
// Existing component that is useful together with proximity_fb
CHK_XR(xrStringToPath(instance, "/user/hand/right/input/trigger/touch", &indexTouchPath));

XrPath interactionProfilePath;
CHK_XR(xrStringToPath(instance, "/interaction_profiles/oculus/touch_controller", &interactionProfilePath));

XrActionSuggestedBinding bindings[2];
bindings[0].action = indexProximityAction;
bindings[0].binding = indexProximityPath;
bindings[1].action = indexTouchAction;
bindings[1].binding = indexTouchPath;

XrInteractionProfileSuggestedBinding suggestedBindings{XR_TYPE_INTERACTION_PROFILE_SUGGESTED_BINDING};
suggestedBindings.interactionProfile = interactionProfilePath;
suggestedBindings.suggestedBindings = bindings;
suggestedBindings.countSuggestedBindings = 2;
CHK_XR(xrSuggestInteractionProfileBindings(instance, &suggestedBindings));

// ----------
// Application main loop
// ----------
while (1)
{
    // ...

    // ----------
    // Query input state
    // ----------
    XrActionStateBoolean indexTouchState{XR_TYPE_ACTION_STATE_BOOLEAN};
    XrActionStateBoolean indexProximityState{XR_TYPE_ACTION_STATE_BOOLEAN};

    XrActionStateGetInfo getInfo{XR_TYPE_ACTION_STATE_GET_INFO};
    getInfo.action = indexTouchAction;
    CHK_XR(xrGetActionStateBoolean(session, &getInfo, &indexTouchState));
    getInfo.action = indexProximityAction;
    CHK_XR(xrGetActionStateBoolean(session, &getInfo, &indexProximityState));

    // ----------
    // Proximity and touch logic
    // ----------
    // There are only three valid combinations of the proximity and touch values
    if (!indexProximityState.currentState)
    {
        // Index is not in proximity of the trigger button (they might be pointing!)
        // Implies that indexTouchState.currentState == XR_FALSE
    }
    if (indexProximityState.currentState && !indexTouchState.currentState)
    {
        // Index finger of user is in proximity of, but not touching, the trigger button
        // i.e. they are hovering above the button
    }
    if (indexTouchState.currentState)
    {
        // Index finger of user is touching the trigger button
        // Implies that indexProximityState.currentState == XR_TRUE
    }
}
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
Version History
-
Revision 1, 2022-09-12 (Andreas Loeve Selvik)
-
Initial extension proposal
-
12.113. XR_FB_triangle_mesh
- Name String
-
XR_FB_triangle_mesh - Extension Type
-
Instance extension
- Registered Extension Number
-
118
- Revision
-
2
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Anton Vaneev, Facebook
Cass Everitt, Facebook
Federico Schliemann, Facebook
Johannes Schmid, Facebook
Overview
Meshes may be useful in XR applications when representing parts of the environment. In particular, an application may provide the runtime with the surfaces of manually tagged real-world objects, or obtain automatically detected environment contents.
This extension allows:
-
An application to create a triangle mesh and specify the mesh data.
-
An application to update mesh contents if a mesh is mutable.
In order to enable the functionality of this extension, the application
must pass the name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
New Object Types
XR_DEFINE_HANDLE(XrTriangleMeshFB)
XrTriangleMeshFB represents a triangle mesh with its corresponding mesh data: a vertex buffer and an index buffer.
New Flag Types
// Provided by XR_FB_triangle_mesh
typedef XrFlags64 XrTriangleMeshFlagsFB;
// Flag bits for XrTriangleMeshFlagsFB
static const XrTriangleMeshFlagsFB XR_TRIANGLE_MESH_MUTABLE_BIT_FB = 0x00000001;
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_TRIANGLE_MESH_CREATE_INFO_FB
New Enums
Applications may specify the triangle winding order of a mesh - whether the vertices of an outward-facing side of a triangle appear in clockwise or counter-clockwise order - using the XrWindingOrderFB enumeration.
// Provided by XR_FB_triangle_mesh
typedef enum XrWindingOrderFB {
    XR_WINDING_ORDER_UNKNOWN_FB = 0,
    XR_WINDING_ORDER_CW_FB = 1,
    XR_WINDING_ORDER_CCW_FB = 2,
    XR_WINDING_ORDER_MAX_ENUM_FB = 0x7FFFFFFF
} XrWindingOrderFB;
New Structures
XrTriangleMeshCreateInfoFB must be provided when calling xrCreateTriangleMeshFB.
The XrTriangleMeshCreateInfoFB structure is defined as:
// Provided by XR_FB_triangle_mesh
typedef struct XrTriangleMeshCreateInfoFB {
    XrStructureType type;
    const void* next;
    XrTriangleMeshFlagsFB flags;
    XrWindingOrderFB windingOrder;
    uint32_t vertexCount;
    const XrVector3f* vertexBuffer;
    uint32_t triangleCount;
    const uint32_t* indexBuffer;
} XrTriangleMeshCreateInfoFB;
Mesh buffers can be updated between xrTriangleMeshBeginUpdateFB and xrTriangleMeshEndUpdateFB calls.
If the mesh is non-mutable, vertexBuffer must be a pointer to an
array of vertexCount XrVector3f structures.
If the mesh is non-mutable, indexBuffer must be a pointer to an array
of 3 * triangleCount uint32_t vertex indices.
Mutable Mesh Update States
Mutable meshes have a state machine controlling how they may be updated.
The states are as follows:
- Undefined Topology
-
The default state immediately after creation of a mutable mesh. Move to Defining Topology by calling xrTriangleMeshBeginUpdateFB.
- Defining Topology
-
The application must set the initial vertex buffer and index buffer before moving to Ready by calling xrTriangleMeshEndUpdateFB.
- Ready
-
In this state, the buffer contents/size must not be modified. To move to Updating Mesh call xrTriangleMeshBeginUpdateFB. To move to Updating Vertices call xrTriangleMeshBeginVertexBufferUpdateFB.
- Updating Mesh
-
The application may modify the vertex buffer contents and/or the vertex count. The application may modify the index buffer contents and/or the index buffer element count. Move to Ready and commit changes by calling xrTriangleMeshEndUpdateFB.
- Updating Vertices
-
The application may modify the vertex buffer contents, but not the vertex count. Move to Ready and commit changes by calling xrTriangleMeshEndVertexBufferUpdateFB.
New Functions
The xrCreateTriangleMeshFB function is defined as:
// Provided by XR_FB_triangle_mesh
XrResult xrCreateTriangleMeshFB(
    XrSession session,
    const XrTriangleMeshCreateInfoFB* createInfo,
    XrTriangleMeshFB* outTriangleMesh);
This creates an XrTriangleMeshFB handle. The returned triangle mesh handle may be subsequently used in API calls.
When the mesh is mutable (the XR_TRIANGLE_MESH_MUTABLE_BIT_FB bit is
set in XrTriangleMeshCreateInfoFB::flags), the created triangle
mesh starts in the Undefined Topology state.
Immutable meshes have no state machine; they may be considered to be in state Ready with no valid edges leaving that state.
The xrDestroyTriangleMeshFB function is defined as:
// Provided by XR_FB_triangle_mesh
XrResult xrDestroyTriangleMeshFB(
    XrTriangleMeshFB mesh);
XrTriangleMeshFB handles and their associated data are destroyed by xrDestroyTriangleMeshFB. The mesh buffers retrieved by xrTriangleMeshGetVertexBufferFB and xrTriangleMeshGetIndexBufferFB must not be accessed anymore after their parent mesh object has been destroyed.
The xrTriangleMeshGetVertexBufferFB function is defined as:
// Provided by XR_FB_triangle_mesh
XrResult xrTriangleMeshGetVertexBufferFB(
    XrTriangleMeshFB mesh,
    XrVector3f** outVertexBuffer);
Retrieves a pointer to the vertex buffer.
The vertex buffer is structured as an array of XrVector3f.
The size of the buffer is
XrTriangleMeshCreateInfoFB::vertexCount elements.
The buffer location is guaranteed to remain constant over the lifecycle of
the mesh object.
A mesh must be mutable and in a specific state for the application to modify it through the retrieved vertex buffer.
-
A mutable triangle mesh must be in state Defining Topology, Updating Mesh, or Updating Vertices to modify the contents of the vertex buffer retrieved by this function.
-
A mutable triangle mesh must be in state Defining Topology or Updating Mesh to modify the count of elements in the vertex buffer retrieved by this function. The new count is passed as a parameter to xrTriangleMeshEndUpdateFB.
The xrTriangleMeshGetIndexBufferFB function is defined as:
// Provided by XR_FB_triangle_mesh
XrResult xrTriangleMeshGetIndexBufferFB(
    XrTriangleMeshFB mesh,
    uint32_t** outIndexBuffer);
Retrieves a pointer to the index buffer that defines the topology of the
triangle mesh.
Each triplet of consecutive elements points to three vertices in the vertex
buffer and thus forms a triangle.
The size of the index buffer is 3 *
XrTriangleMeshCreateInfoFB::triangleCount elements.
The buffer location is guaranteed to remain constant over the lifecycle of
the mesh object.
A triangle mesh must be mutable and in state Defining Topology or Updating Mesh for the application to modify the contents and/or triangle count in the index buffer retrieved by this function.
The xrTriangleMeshBeginUpdateFB function is defined as:
// Provided by XR_FB_triangle_mesh
XrResult xrTriangleMeshBeginUpdateFB(
    XrTriangleMeshFB mesh);
Begins updating the mesh buffer data. The application must call this function before it makes any modifications to the buffers retrieved by xrTriangleMeshGetVertexBufferFB and xrTriangleMeshGetIndexBufferFB. If only the vertex buffer contents need to be updated, and the mesh is in state Ready, xrTriangleMeshBeginVertexBufferUpdateFB may be used instead. To commit the modifications, the application must call xrTriangleMeshEndUpdateFB.
The triangle mesh mesh must be mutable.
The runtime must return XR_ERROR_VALIDATION_FAILURE if the mesh is
immutable.
The triangle mesh mesh must be in state
Undefined Topology or
Ready.
-
If the triangle mesh is in state Undefined Topology before this call, a successful call moves it to state Defining Topology.
-
If the triangle mesh is in state Ready before this call, a successful call moves it to state Updating Mesh.
The xrTriangleMeshEndUpdateFB function is defined as:
// Provided by XR_FB_triangle_mesh
XrResult xrTriangleMeshEndUpdateFB(
    XrTriangleMeshFB mesh,
    uint32_t vertexCount,
    uint32_t triangleCount);
Signals to the runtime that the application has finished initially
populating or updating the mesh buffers.
vertexCount and triangleCount specify the actual number of
primitives that make up the mesh after the update.
They must be larger than zero but smaller or equal to the maximum counts
defined at create time.
The runtime must return XR_ERROR_VALIDATION_FAILURE if an invalid
count is passed.
The triangle mesh mesh must be mutable.
The runtime must return XR_ERROR_VALIDATION_FAILURE if the mesh is
immutable.
The triangle mesh mesh must be in state
Defining Topology or
Updating Mesh.
A successful call moves mesh to state
Ready.
The xrTriangleMeshBeginVertexBufferUpdateFB function is defined as:
// Provided by XR_FB_triangle_mesh
XrResult xrTriangleMeshBeginVertexBufferUpdateFB(
    XrTriangleMeshFB mesh,
    uint32_t* outVertexCount);
Begins an update of the vertex positions of a mutable triangle mesh.
The vertex count returned through outVertexCount is defined by the
last call to xrTriangleMeshEndUpdateFB.
Once the modification is done, call
xrTriangleMeshEndVertexBufferUpdateFB to commit the changes and move
to state Ready.
The triangle mesh mesh must be mutable.
The runtime must return XR_ERROR_VALIDATION_FAILURE if the mesh is
immutable.
The triangle mesh mesh must be in state
Ready.
A successful call moves mesh to state
Updating Vertices.
The xrTriangleMeshEndVertexBufferUpdateFB function is defined as:
// Provided by XR_FB_triangle_mesh
XrResult xrTriangleMeshEndVertexBufferUpdateFB(
    XrTriangleMeshFB mesh);
Signals to the runtime that the application has finished updating the vertex buffer data following a call to xrTriangleMeshBeginVertexBufferUpdateFB.
The triangle mesh mesh must be mutable.
The runtime must return XR_ERROR_VALIDATION_FAILURE if the mesh is
immutable.
The triangle mesh mesh must be in state
Updating Vertices.
A successful call moves mesh to state
Ready.
Issues
Version History
-
Revision 1, 2021-09-01 (Anton Vaneev)
-
Initial extension description
-
-
Revision 2, 2022-01-07 (Rylie Pavlik, Collabora, Ltd.)
-
Add a state diagram to clarify valid usage, and allow
XR_ERROR_CALL_ORDER_INVALID.
-
12.114. XR_HTC_anchor
- Name String
-
XR_HTC_anchor - Extension Type
-
Instance extension
- Registered Extension Number
-
320
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2023-09-14
- IP Status
-
No known IP claims.
- Contributors
-
CheHsuan Shu, HTC
Bill Chang, HTC
Overview
This extension allows an application to create a spatial anchor to track a point in the physical environment. The runtime adjusts the pose of the anchor over time to align it with the real world.
Inspect system capability
The XrSystemAnchorPropertiesHTC structure is defined as:
// Provided by XR_HTC_anchor
typedef struct XrSystemAnchorPropertiesHTC {
    XrStructureType type;
    void* next;
    XrBool32 supportsAnchor;
} XrSystemAnchorPropertiesHTC;
An application can inspect whether the system is capable of anchor
functionality by chaining an XrSystemAnchorPropertiesHTC structure to
the XrSystemProperties when calling xrGetSystemProperties.
The runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateSpatialAnchorHTC if
XrSystemAnchorPropertiesHTC::supportsAnchor was XR_FALSE.
The xrCreateSpatialAnchorHTC function is defined as:
// Provided by XR_HTC_anchor
XrResult xrCreateSpatialAnchorHTC(
    XrSession session,
    const XrSpatialAnchorCreateInfoHTC* createInfo,
    XrSpace* anchor);
The xrCreateSpatialAnchorHTC function creates a spatial anchor with specified base space and pose in the space. The anchor is represented by an XrSpace and its pose can be tracked via xrLocateSpace. Once the anchor is no longer needed, call xrDestroySpace to erase the anchor.
The XrSpatialAnchorCreateInfoHTC structure is defined as:
// Provided by XR_HTC_anchor
typedef struct XrSpatialAnchorCreateInfoHTC {
    XrStructureType type;
    const void* next;
    XrSpace space;
    XrPosef poseInSpace;
    XrSpatialAnchorNameHTC name;
} XrSpatialAnchorCreateInfoHTC;
The poseInSpace is transformed into world space to specify the point
in the real world.
The anchor tracks changes in the real world and may not be affected by
changes to space.
The XrSpatialAnchorNameHTC structure is defined as:
// Provided by XR_HTC_anchor
typedef struct XrSpatialAnchorNameHTC {
    char name[XR_MAX_SPATIAL_ANCHOR_NAME_SIZE_HTC];
} XrSpatialAnchorNameHTC;
The xrGetSpatialAnchorNameHTC function is defined as:
// Provided by XR_HTC_anchor
XrResult xrGetSpatialAnchorNameHTC(
    XrSpace anchor,
    XrSpatialAnchorNameHTC* name);
The xrGetSpatialAnchorNameHTC function gets the name of an anchor.
If the provided anchor is a valid space handle but was not created
with xrCreateSpatialAnchorHTC, the runtime must return
XR_ERROR_NOT_AN_ANCHOR_HTC.
New Object Types
New Flag Types
New Enum Constants
-
XR_MAX_SPATIAL_ANCHOR_NAME_SIZE_HTC
XrStructureType enumeration is extended with:
-
XR_TYPE_SYSTEM_ANCHOR_PROPERTIES_HTC -
XR_TYPE_SPATIAL_ANCHOR_CREATE_INFO_HTC
XrResult enumeration is extended with:
-
XR_ERROR_NOT_AN_ANCHOR_HTC
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2023-09-14 (CheHsuan Shu)
-
Initial extension description
-
12.115. XR_HTC_body_tracking
- Name String
-
XR_HTC_body_tracking - Extension Type
-
Instance extension
- Registered Extension Number
-
321
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2023-01-17
- IP Status
-
No known IP claims.
- Contributors
-
Kyle Chen, HTC
Chris Kuo, HTC
12.115.1. Overview
This extension allows an application to locate the user’s individual body joints. It enables applications to render the full body in an XR experience.
12.115.2. Inspect system capability
The XrSystemBodyTrackingPropertiesHTC structure is defined as:
// Provided by XR_HTC_body_tracking
typedef struct XrSystemBodyTrackingPropertiesHTC {
    XrStructureType type;
    void* next;
    XrBool32 supportsBodyTracking;
} XrSystemBodyTrackingPropertiesHTC;
An application can inspect whether the system is capable of body tracking by chaining an XrSystemBodyTrackingPropertiesHTC structure to XrSystemProperties when calling xrGetSystemProperties.
If a runtime returns XR_FALSE for supportsBodyTracking, the
runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateBodyTrackerHTC.
12.115.3. Create a body tracker handle
The XrBodyTrackerHTC handle represents the resources for a body tracker.
XR_DEFINE_HANDLE(XrBodyTrackerHTC)
An application can create an XrBodyTrackerHTC handle, which is used to locate individual body joints with an unobstructed range of motion using the xrLocateBodyJointsHTC function.
The xrCreateBodyTrackerHTC function is defined as:
// Provided by XR_HTC_body_tracking
XrResult xrCreateBodyTrackerHTC(
    XrSession session,
    const XrBodyTrackerCreateInfoHTC* createInfo,
    XrBodyTrackerHTC* bodyTracker);
An application can create an XrBodyTrackerHTC handle using xrCreateBodyTrackerHTC.
If the system does not support body tracking, the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from xrCreateBodyTrackerHTC.
In this case, the runtime must return XR_FALSE for
XrSystemBodyTrackingPropertiesHTC::supportsBodyTracking in
XrSystemBodyTrackingPropertiesHTC when the function
xrGetSystemProperties is called, so that the application avoids
creating a body tracker.
The XrBodyTrackerCreateInfoHTC structure is defined as:
// Provided by XR_HTC_body_tracking
typedef struct XrBodyTrackerCreateInfoHTC {
    XrStructureType type;
    const void* next;
    XrBodyJointSetHTC bodyJointSet;
} XrBodyTrackerCreateInfoHTC;
The XrBodyTrackerCreateInfoHTC structure describes the information to
create an XrBodyTrackerHTC handle.
If the supplied bodyJointSet is not valid, the runtime must return
XR_ERROR_VALIDATION_FAILURE.
The xrDestroyBodyTrackerHTC function is defined as:
// Provided by XR_HTC_body_tracking
XrResult xrDestroyBodyTrackerHTC(
    XrBodyTrackerHTC bodyTracker);
xrDestroyBodyTrackerHTC releases the bodyTracker and the
underlying resources when finished with body tracking experiences.
12.115.4. Locate body joints
The xrLocateBodyJointsHTC function is defined as:
// Provided by XR_HTC_body_tracking
XrResult xrLocateBodyJointsHTC(
    XrBodyTrackerHTC bodyTracker,
    const XrBodyJointsLocateInfoHTC* locateInfo,
    XrBodyJointLocationsHTC* locations);
The xrLocateBodyJointsHTC function locates an array of body joints relative to a base space at a given time.
If XrBodyJointLocationsHTC::jointLocationCount does not match
the value associated with the supplied XrBodyJointSetHTC value, the
runtime must return XR_ERROR_VALIDATION_FAILURE.
The XrBodyJointsLocateInfoHTC structure is defined as:
// Provided by XR_HTC_body_tracking
typedef struct XrBodyJointsLocateInfoHTC {
    XrStructureType type;
    const void* next;
    XrSpace baseSpace;
    XrTime time;
} XrBodyJointsLocateInfoHTC;
The XrBodyJointsLocateInfoHTC structure describes the information to locate individual body joints.
The XrBodyJointLocationsHTC structure is defined as:
// Provided by XR_HTC_body_tracking
typedef struct XrBodyJointLocationsHTC {
    XrStructureType type;
    void* next;
    XrSpaceLocationFlags combinedLocationFlags;
    XrBodyJointConfidenceHTC confidenceLevel;
    uint32_t jointLocationCount;
    XrBodyJointLocationHTC* jointLocations;
    uint32_t skeletonGenerationId;
} XrBodyJointLocationsHTC;
The application must set jointLocationCount as appropriate for the
chosen XrBodyJointSetHTC value when creating the
XrBodyTrackerHTC.
If jointLocationCount does not match the value associated with the
supplied XrBodyJointSetHTC value, the runtime must return
XR_ERROR_VALIDATION_FAILURE from xrLocateBodyJointsHTC.
An application must allocate the output jointLocations array with a
minimum capacity of jointLocationCount of XrBodyJointLocationHTC
elements.
If the application supplies a NULL value for jointLocations, the
runtime must return XR_ERROR_VALIDATION_FAILURE.
The runtime must update the jointLocations array elements indexed
using the corresponding body joint enumeration (e.g. XrBodyJointHTC
for the joint set XR_BODY_JOINT_SET_FULL_HTC) as described by
XrBodyJointSetHTC when creating the XrBodyTrackerHTC.
For example, when the XrBodyTrackerHTC is created with
XR_BODY_JOINT_SET_FULL_HTC, the runtime must fill the
jointLocations array with body joint data indexed by the
XrBodyJointHTC enumeration.
If the runtime returns combinedLocationFlags with
XR_SPACE_LOCATION_POSITION_VALID_BIT and
XR_SPACE_LOCATION_ORIENTATION_VALID_BIT set, it indicates that the
body tracker detects the joint space locations.
If the runtime returns combinedLocationFlags with neither
XR_SPACE_LOCATION_POSITION_VALID_BIT nor
XR_SPACE_LOCATION_ORIENTATION_VALID_BIT set, it indicates that the
body tracker did not detect the joint space locations.
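Both validity bits can be tested with a single mask. A standalone sketch in C, with the flag bits mirrored locally from openxr.h so the snippet compiles on its own (the helper name is illustrative, not part of the extension):

```c
#include <stdint.h>

/* Local mirrors of the OpenXR flag bits so the sketch is self-contained. */
typedef uint64_t XrSpaceLocationFlags;
#define XR_SPACE_LOCATION_ORIENTATION_VALID_BIT 0x00000001
#define XR_SPACE_LOCATION_POSITION_VALID_BIT    0x00000002

/* Returns nonzero only when both orientation and position are valid. */
static int jointLocationValid(XrSpaceLocationFlags flags) {
    const XrSpaceLocationFlags required =
        XR_SPACE_LOCATION_ORIENTATION_VALID_BIT |
        XR_SPACE_LOCATION_POSITION_VALID_BIT;
    return (flags & required) == required;
}
```

The same check applies per joint to XrBodyJointLocationHTC::locationFlags before using the joint's pose.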
The purpose of the skeleton is to provide data about the body size.
The calculation of the body size may be updated during a session.
Each time the calculation of the size is changed,
XrBodyJointLocationsHTC::skeletonGenerationId is changed to
indicate that a new skeleton may be retrieved.
xrGetBodySkeletonHTC can be called with the specified
skeletonGenerationId to get the corresponding skeleton.
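A common pattern is to cache the fetched skeleton and call xrGetBodySkeletonHTC again only when the generation changes. A minimal sketch of the bookkeeping (the helper name is illustrative, not part of the extension):

```c
#include <stdint.h>

/* Compares the generation id returned in XrBodyJointLocationsHTC against
 * the last generation the application fetched. Returns nonzero when the
 * caller should re-fetch the skeleton via xrGetBodySkeletonHTC, and
 * records the generation it is about to fetch. */
static int skeletonNeedsRefresh(uint32_t *lastId, uint32_t currentId) {
    if (*lastId == currentId)
        return 0;          /* calibration unchanged, cached skeleton is valid */
    *lastId = currentId;   /* remember the generation we are about to fetch */
    return 1;
}
```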
The XrBodyJointConfidenceHTC enumeration is defined as:
// Provided by XR_HTC_body_tracking
typedef enum XrBodyJointConfidenceHTC {
XR_BODY_JOINT_CONFIDENCE_NONE_HTC = 0,
XR_BODY_JOINT_CONFIDENCE_LOW_HTC = 1,
XR_BODY_JOINT_CONFIDENCE_HIGH_HTC = 2,
XR_BODY_JOINT_CONFIDENCE_MAX_ENUM_HTC = 0x7FFFFFFF
} XrBodyJointConfidenceHTC;
The XrBodyJointConfidenceHTC enumeration describes the confidence level for the returned body joint pose.
The XrBodyJointLocationHTC structure is defined as:
// Provided by XR_HTC_body_tracking
typedef struct XrBodyJointLocationHTC {
XrSpaceLocationFlags locationFlags;
XrPosef pose;
} XrBodyJointLocationHTC;
The XrBodyJointLocationHTC structure describes the position, orientation, and location flags of a body joint. It is populated by the runtime during a call to xrLocateBodyJointsHTC.
12.115.5. Get body skeleton
The xrGetBodySkeletonHTC function is defined as:
// Provided by XR_HTC_body_tracking
XrResult xrGetBodySkeletonHTC(
XrBodyTrackerHTC bodyTracker,
XrSpace baseSpace,
uint32_t skeletonGenerationId,
XrBodySkeletonHTC* skeleton);
The xrGetBodySkeletonHTC function returns the body skeleton in T-pose.
This function can be used to get the body skeleton and infer the skeleton scale and proportions in conjunction with XrBodyJointLocationsHTC::skeletonGenerationId.
XrBodyJointLocationsHTC::skeletonGenerationId is generated when
the tracking auto-calibrates the user skeleton scale and proportions.
If the application supplies a skeletonGenerationId that does not match
any value returned in
XrBodyJointLocationsHTC::skeletonGenerationId during the current
session, the runtime must return XR_ERROR_VALIDATION_FAILURE.
The XrBodySkeletonHTC structure is defined as:
// Provided by XR_HTC_body_tracking
typedef struct XrBodySkeletonHTC {
XrStructureType type;
void* next;
uint32_t jointCount;
XrBodySkeletonJointHTC* joints;
} XrBodySkeletonHTC;
The XrBodySkeletonHTC structure is a container to represent the body
skeleton in T-pose including each joint pose.
The runtime must return XR_ERROR_VALIDATION_FAILURE if
jointCount does not equal the number of joints associated with the
XrBodyJointSetHTC value used to create the XrBodyTrackerHTC.
The application must allocate an array of at least jointCount
elements for joints, to be populated by the runtime.
If joints is NULL, the runtime must return
XR_ERROR_VALIDATION_FAILURE.
The runtime must return joints representing the default pose of the
current estimation regarding the user’s skeleton.
The runtime must update the joints array ordered so that it is
indexed using the corresponding body joint enumeration (e.g.
XrBodyJointHTC for the joint set XR_BODY_JOINT_SET_FULL_HTC) as
associated with the XrBodyJointSetHTC value used when creating the
XrBodyTrackerHTC.
For example, when the XrBodyTrackerHTC is created with
XR_BODY_JOINT_SET_FULL_HTC, the runtime must fill the joints
array indexed by the XrBodyJointHTC enumeration.
The XrBodySkeletonJointHTC structure is defined as:
typedef struct XrBodySkeletonJointHTC {
XrPosef pose;
} XrBodySkeletonJointHTC;
The XrBodySkeletonJointHTC structure describes the position and orientation of a joint in the skeleton's T-pose.
12.115.6. Conventions of body joints
The XrBodyJointSetHTC enumeration is defined as:
// Provided by XR_HTC_body_tracking
typedef enum XrBodyJointSetHTC {
XR_BODY_JOINT_SET_FULL_HTC = 0,
XR_BODY_JOINT_SET_MAX_ENUM_HTC = 0x7FFFFFFF
} XrBodyJointSetHTC;
The XrBodyJointSetHTC enumeration describes the set of body joints to track when creating an XrBodyTrackerHTC.
The XrBodyJointHTC enumeration is defined as:
// Provided by XR_HTC_body_tracking
typedef enum XrBodyJointHTC {
XR_BODY_JOINT_PELVIS_HTC = 0,
XR_BODY_JOINT_LEFT_HIP_HTC = 1,
XR_BODY_JOINT_LEFT_KNEE_HTC = 2,
XR_BODY_JOINT_LEFT_ANKLE_HTC = 3,
XR_BODY_JOINT_LEFT_FEET_HTC = 4,
XR_BODY_JOINT_RIGHT_HIP_HTC = 5,
XR_BODY_JOINT_RIGHT_KNEE_HTC = 6,
XR_BODY_JOINT_RIGHT_ANKLE_HTC = 7,
XR_BODY_JOINT_RIGHT_FEET_HTC = 8,
XR_BODY_JOINT_WAIST_HTC = 9,
XR_BODY_JOINT_SPINE_LOWER_HTC = 10,
XR_BODY_JOINT_SPINE_MIDDLE_HTC = 11,
XR_BODY_JOINT_SPINE_HIGH_HTC = 12,
XR_BODY_JOINT_CHEST_HTC = 13,
XR_BODY_JOINT_NECK_HTC = 14,
XR_BODY_JOINT_HEAD_HTC = 15,
XR_BODY_JOINT_LEFT_CLAVICLE_HTC = 16,
XR_BODY_JOINT_LEFT_SCAPULA_HTC = 17,
XR_BODY_JOINT_LEFT_ARM_HTC = 18,
XR_BODY_JOINT_LEFT_ELBOW_HTC = 19,
XR_BODY_JOINT_LEFT_WRIST_HTC = 20,
XR_BODY_JOINT_RIGHT_CLAVICLE_HTC = 21,
XR_BODY_JOINT_RIGHT_SCAPULA_HTC = 22,
XR_BODY_JOINT_RIGHT_ARM_HTC = 23,
XR_BODY_JOINT_RIGHT_ELBOW_HTC = 24,
XR_BODY_JOINT_RIGHT_WRIST_HTC = 25,
XR_BODY_JOINT_MAX_ENUM_HTC = 0x7FFFFFFF
} XrBodyJointHTC;
It is used to index into a joint location array when the joint set in use
(XrBodyJointSetHTC) is XR_BODY_JOINT_SET_FULL_HTC.
This extension defines 26 joints for body tracking: 6 joints for the torso, 5 joints for each arm, 4 joints for each leg, and the other 2 joints for the head and neck. The definitions of these joints are based on human skeletal joints.
As shown in the figure below, the following conventions are stated with a T-shape body pose in which the palms are facing down to the ground.
The right direction (+X) is pointing from left hand to right hand in T-pose.
The up direction (+Y) is pointing from foot to head in T-pose.
The Z direction is perpendicular to X and Y and follows the right hand rule in T-pose.
// Provided by XR_HTC_body_tracking
#define XR_BODY_JOINT_COUNT_HTC 26
XR_BODY_JOINT_COUNT_HTC defines the number of body joint enumerants defined in XrBodyJointHTC.
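The stated axis conventions form a right-handed frame, so the Z axis is the cross product of X and Y. A quick standalone check of the rule; the vector type and helper are local to the sketch, not part of the extension:

```c
/* Minimal vector type for the sketch. */
typedef struct { float x, y, z; } Vec3;

/* Right-hand-rule cross product: with +X (right) and +Y (up) as inputs,
 * the result is the +Z axis of the right-handed frame. */
static Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 r = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return r;
}
```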
12.115.7. Example code for locating body joints
The following example code demonstrates how to locate all individual body joints relative to a world space.
XrInstance instance; // previously initialized
XrSystemId systemId; // previously initialized
XrSession session; // previously initialized
XrSpace worldSpace; // previously initialized, e.g. from
// XR_REFERENCE_SPACE_TYPE_LOCAL
// Inspect body tracking system properties
XrSystemBodyTrackingPropertiesHTC bodyTrackingSystemProperties{
XR_TYPE_SYSTEM_BODY_TRACKING_PROPERTIES_HTC};
XrSystemProperties systemProperties{XR_TYPE_SYSTEM_PROPERTIES,
&bodyTrackingSystemProperties};
CHK_XR(xrGetSystemProperties(instance, systemId, &systemProperties));
if (!bodyTrackingSystemProperties.supportsBodyTracking) {
// The system does not support body tracking
return;
}
// Get function pointer for xrCreateBodyTrackerHTC
PFN_xrCreateBodyTrackerHTC pfnCreateBodyTrackerHTC;
CHK_XR(xrGetInstanceProcAddr(instance, "xrCreateBodyTrackerHTC",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnCreateBodyTrackerHTC)));
// Create a body tracker that tracks default set of individual body joints.
XrBodyTrackerHTC bodyTracker{XR_NULL_HANDLE};
{
XrBodyTrackerCreateInfoHTC createInfo{XR_TYPE_BODY_TRACKER_CREATE_INFO_HTC};
createInfo.bodyJointSet = XR_BODY_JOINT_SET_FULL_HTC;
CHK_XR(pfnCreateBodyTrackerHTC(session, &createInfo, &bodyTracker));
}
// Allocate buffers to receive joint location before frame loop starts
XrBodyJointLocationHTC jointLocations[XR_BODY_JOINT_COUNT_HTC];
XrBodyJointLocationsHTC locations{XR_TYPE_BODY_JOINT_LOCATIONS_HTC};
locations.jointLocationCount = XR_BODY_JOINT_COUNT_HTC;
locations.jointLocations = jointLocations;
// Get function pointer for xrLocateBodyJointsHTC
PFN_xrLocateBodyJointsHTC pfnLocateBodyJointsHTC;
CHK_XR(xrGetInstanceProcAddr(instance, "xrLocateBodyJointsHTC",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnLocateBodyJointsHTC)));
while (1) {
// ...
// For every frame in frame loop
// ...
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
XrBodyJointsLocateInfoHTC locateInfo{XR_TYPE_BODY_JOINTS_LOCATE_INFO_HTC};
locateInfo.baseSpace = worldSpace;
locateInfo.time = time;
CHK_XR(pfnLocateBodyJointsHTC(bodyTracker, &locateInfo, &locations));
// The returned joint location array is directly indexed with
// XrBodyJointHTC enum.
const XrPosef &pelvisInWorld =
jointLocations[XR_BODY_JOINT_PELVIS_HTC].pose;
const XrPosef &headInWorld =
jointLocations[XR_BODY_JOINT_HEAD_HTC].pose;
}
12.115.12. New Enum Constants
- XR_HTC_BODY_TRACKING_EXTENSION_NAME
- XR_HTC_body_tracking_SPEC_VERSION
Extending XrObjectType:
- XR_OBJECT_TYPE_BODY_TRACKER_HTC
Extending XrStructureType:
- XR_TYPE_BODY_JOINTS_LOCATE_INFO_HTC
- XR_TYPE_BODY_JOINT_LOCATIONS_HTC
- XR_TYPE_BODY_SKELETON_HTC
- XR_TYPE_BODY_TRACKER_CREATE_INFO_HTC
- XR_TYPE_SYSTEM_BODY_TRACKING_PROPERTIES_HTC
12.116. XR_HTC_facial_tracking
- Name String: XR_HTC_facial_tracking
- Extension Type: Instance extension
- Registered Extension Number: 105
- Revision: 3
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2024-07-26
- IP Status: No known IP claims.
- Contributors: Kyle Chen, HTC; Chris Kuo, HTC; Andy Chen, HTC
Overview
This extension allows an application to track and integrate users' eye and lip movements, empowering developers to read intention and model facial expressions.
Inspect system capability
XrSystemFacialTrackingPropertiesHTC is defined as:
// Provided by XR_HTC_facial_tracking
typedef struct XrSystemFacialTrackingPropertiesHTC {
XrStructureType type;
void* next;
XrBool32 supportEyeFacialTracking;
XrBool32 supportLipFacialTracking;
} XrSystemFacialTrackingPropertiesHTC;
An application can inspect whether the system supports either form of facial tracking by chaining an XrSystemFacialTrackingPropertiesHTC structure to XrSystemProperties when calling xrGetSystemProperties.
If a runtime returns XR_FALSE for supportEyeFacialTracking, the
runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateFacialTrackerHTC with
XR_FACIAL_TRACKING_TYPE_EYE_DEFAULT_HTC set for
XrFacialTrackingTypeHTC in XrFacialTrackerCreateInfoHTC.
Similarly, if a runtime returns XR_FALSE for supportLipFacialTracking, the runtime must return XR_ERROR_FEATURE_UNSUPPORTED from xrCreateFacialTrackerHTC with XR_FACIAL_TRACKING_TYPE_LIP_DEFAULT_HTC set for XrFacialTrackingTypeHTC in XrFacialTrackerCreateInfoHTC.
Create a facial tracker handle
The XrFacialTrackerHTC handle represents the resources for a facial tracker of the specific facial tracking type.
XR_DEFINE_HANDLE(XrFacialTrackerHTC)
An application creates separate XrFacialTrackerHTC handles for the eye tracker and the lip tracker. A handle can be used to retrieve the corresponding facial expressions using the xrGetFacialExpressionsHTC function.
The xrCreateFacialTrackerHTC function is defined as:
// Provided by XR_HTC_facial_tracking
XrResult xrCreateFacialTrackerHTC(
XrSession session,
const XrFacialTrackerCreateInfoHTC* createInfo,
XrFacialTrackerHTC* facialTracker);
An application can create an XrFacialTrackerHTC handle using xrCreateFacialTrackerHTC.
If the system does not support eye tracking or lip tracking, the runtime must return XR_ERROR_FEATURE_UNSUPPORTED from xrCreateFacialTrackerHTC for the corresponding case.
In this case, the runtime must return XR_FALSE for
XrSystemFacialTrackingPropertiesHTC::supportEyeFacialTracking or
XrSystemFacialTrackingPropertiesHTC::supportLipFacialTracking
when the function xrGetSystemProperties is called, so that the
application may avoid creating a facial tracker.
The XrFacialTrackerCreateInfoHTC structure is defined as:
// Provided by XR_HTC_facial_tracking
typedef struct XrFacialTrackerCreateInfoHTC {
XrStructureType type;
const void* next;
XrFacialTrackingTypeHTC facialTrackingType;
} XrFacialTrackerCreateInfoHTC;
The XrFacialTrackerCreateInfoHTC structure describes the information to create an XrFacialTrackerHTC handle.
The XrFacialTrackingTypeHTC describes which type of tracking the XrFacialTrackerHTC is using.
// Provided by XR_HTC_facial_tracking
typedef enum XrFacialTrackingTypeHTC {
XR_FACIAL_TRACKING_TYPE_EYE_DEFAULT_HTC = 1,
XR_FACIAL_TRACKING_TYPE_LIP_DEFAULT_HTC = 2,
XR_FACIAL_TRACKING_TYPE_MAX_ENUM_HTC = 0x7FFFFFFF
} XrFacialTrackingTypeHTC;
The xrDestroyFacialTrackerHTC function is defined as:
// Provided by XR_HTC_facial_tracking
XrResult xrDestroyFacialTrackerHTC(
XrFacialTrackerHTC facialTracker);
xrDestroyFacialTrackerHTC releases the facialTracker and the
underlying resources when finished with facial tracking experiences.
Retrieve facial expressions
The xrGetFacialExpressionsHTC function is defined as:
// Provided by XR_HTC_facial_tracking
XrResult xrGetFacialExpressionsHTC(
XrFacialTrackerHTC facialTracker,
XrFacialExpressionsHTC* facialExpressions);
xrGetFacialExpressionsHTC retrieves an array of blend shape values for a facial expression at a given time.
The XrFacialExpressionsHTC structure is defined as:
// Provided by XR_HTC_facial_tracking
typedef struct XrFacialExpressionsHTC {
XrStructureType type;
const void* next;
XrBool32 isActive;
XrTime sampleTime;
uint32_t expressionCount;
float* expressionWeightings;
} XrFacialExpressionsHTC;
The XrFacialExpressionsHTC structure returns data for a lip facial expression or an eye facial expression.
An application must preallocate the output expressionWeightings array with capacity for at least expressionCount float values.
expressionCount must be at least
XR_FACIAL_EXPRESSION_LIP_COUNT_HTC for
XR_FACIAL_TRACKING_TYPE_LIP_DEFAULT_HTC, and at least
XR_FACIAL_EXPRESSION_EYE_COUNT_HTC for
XR_FACIAL_TRACKING_TYPE_EYE_DEFAULT_HTC.
The application must set expressionCount as described by the XrFacialTrackingTypeHTC used when creating the XrFacialTrackerHTC; otherwise, the runtime must return XR_ERROR_VALIDATION_FAILURE.
The runtime must update the expressionWeightings array ordered so
that the application can index elements using the corresponding facial
tracker enum (e.g. XrEyeExpressionHTC or XrLipExpressionHTC) as
described by XrFacialTrackingTypeHTC when creating the
XrFacialTrackerHTC.
For example, when the XrFacialTrackerHTC is created with
XrFacialTrackerHTC::facialTrackingType set to
XR_FACIAL_TRACKING_TYPE_EYE_DEFAULT_HTC, the application must set the
expressionCount to XR_FACIAL_EXPRESSION_EYE_COUNT_HTC, and the
runtime must fill the expressionWeightings array ordered with eye
expression data so that it can be indexed by the XrEyeExpressionHTC
enum.
If the returned isActive is true, the runtime must fill the expressionWeightings array in the order described above.
If the returned isActive is false, it indicates the facial tracker did
not detect the corresponding facial input or the application lost input
focus.
If the input expressionCount is not sufficient to contain all output
indices, the runtime must return XR_ERROR_SIZE_INSUFFICIENT on calls
to xrGetFacialExpressionsHTC and not change the content in
expressionWeightings.
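Choosing the allocation size from the tracking type can be sketched as follows, with the extension's counts and enum mirrored locally so the snippet compiles on its own (the helper name is illustrative):

```c
#include <stdint.h>

/* Local mirrors of the counts and enum defined by the extension. */
#define XR_FACIAL_EXPRESSION_EYE_COUNT_HTC 14
#define XR_FACIAL_EXPRESSION_LIP_COUNT_HTC 37

typedef enum {
    XR_FACIAL_TRACKING_TYPE_EYE_DEFAULT_HTC = 1,
    XR_FACIAL_TRACKING_TYPE_LIP_DEFAULT_HTC = 2
} XrFacialTrackingTypeHTC;

/* Returns the expressionCount required for a tracker of the given type,
 * or 0 for an unrecognized type. The caller sizes the
 * expressionWeightings float array with this value. */
static uint32_t requiredExpressionCount(XrFacialTrackingTypeHTC t) {
    switch (t) {
    case XR_FACIAL_TRACKING_TYPE_EYE_DEFAULT_HTC:
        return XR_FACIAL_EXPRESSION_EYE_COUNT_HTC;
    case XR_FACIAL_TRACKING_TYPE_LIP_DEFAULT_HTC:
        return XR_FACIAL_EXPRESSION_LIP_COUNT_HTC;
    default:
        return 0;
    }
}
```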
// Provided by XR_HTC_facial_tracking
#define XR_FACIAL_EXPRESSION_EYE_COUNT_HTC 14
The number of blend shapes in an expression of type
XR_FACIAL_TRACKING_TYPE_EYE_DEFAULT_HTC.
// Provided by XR_HTC_facial_tracking
#define XR_FACIAL_EXPRESSION_LIP_COUNT_HTC 37
The number of blend shapes in an expression of type
XR_FACIAL_TRACKING_TYPE_LIP_DEFAULT_HTC.
Facial Expression List
Eye Blend Shapes
Through feeding the blend shape values of eye expression to an avatar, its facial expression can be animated with the player’s eye movement. The following pictures show how the facial expression acts on the avatar according to each set of eye blend shape values.
// Provided by XR_HTC_facial_tracking
typedef enum XrEyeExpressionHTC {
XR_EYE_EXPRESSION_LEFT_BLINK_HTC = 0,
XR_EYE_EXPRESSION_LEFT_WIDE_HTC = 1,
XR_EYE_EXPRESSION_RIGHT_BLINK_HTC = 2,
XR_EYE_EXPRESSION_RIGHT_WIDE_HTC = 3,
XR_EYE_EXPRESSION_LEFT_SQUEEZE_HTC = 4,
XR_EYE_EXPRESSION_RIGHT_SQUEEZE_HTC = 5,
XR_EYE_EXPRESSION_LEFT_DOWN_HTC = 6,
XR_EYE_EXPRESSION_RIGHT_DOWN_HTC = 7,
XR_EYE_EXPRESSION_LEFT_OUT_HTC = 8,
XR_EYE_EXPRESSION_RIGHT_IN_HTC = 9,
XR_EYE_EXPRESSION_LEFT_IN_HTC = 10,
XR_EYE_EXPRESSION_RIGHT_OUT_HTC = 11,
XR_EYE_EXPRESSION_LEFT_UP_HTC = 12,
XR_EYE_EXPRESSION_RIGHT_UP_HTC = 13,
XR_EYE_EXPRESSION_MAX_ENUM_HTC = 0x7FFFFFFF
} XrEyeExpressionHTC;
[Avatar illustrations omitted: one image per XrEyeExpressionHTC blend shape, from XR_EYE_EXPRESSION_LEFT_BLINK_HTC through XR_EYE_EXPRESSION_RIGHT_UP_HTC.]
Lip Blend Shapes
Through feeding the blend shape values of lip expression to an avatar, its facial expression can be animated with the player’s lip movement. The following pictures show how the facial expression acts on the avatar according to each set of lip blend shape values.
// Provided by XR_HTC_facial_tracking
typedef enum XrLipExpressionHTC {
XR_LIP_EXPRESSION_JAW_RIGHT_HTC = 0,
XR_LIP_EXPRESSION_JAW_LEFT_HTC = 1,
XR_LIP_EXPRESSION_JAW_FORWARD_HTC = 2,
XR_LIP_EXPRESSION_JAW_OPEN_HTC = 3,
XR_LIP_EXPRESSION_MOUTH_APE_SHAPE_HTC = 4,
XR_LIP_EXPRESSION_MOUTH_UPPER_RIGHT_HTC = 5,
XR_LIP_EXPRESSION_MOUTH_UPPER_LEFT_HTC = 6,
XR_LIP_EXPRESSION_MOUTH_LOWER_RIGHT_HTC = 7,
XR_LIP_EXPRESSION_MOUTH_LOWER_LEFT_HTC = 8,
XR_LIP_EXPRESSION_MOUTH_UPPER_OVERTURN_HTC = 9,
XR_LIP_EXPRESSION_MOUTH_LOWER_OVERTURN_HTC = 10,
XR_LIP_EXPRESSION_MOUTH_POUT_HTC = 11,
XR_LIP_EXPRESSION_MOUTH_RAISER_RIGHT_HTC = 12,
XR_LIP_EXPRESSION_MOUTH_RAISER_LEFT_HTC = 13,
XR_LIP_EXPRESSION_MOUTH_STRETCHER_RIGHT_HTC = 14,
XR_LIP_EXPRESSION_MOUTH_STRETCHER_LEFT_HTC = 15,
XR_LIP_EXPRESSION_CHEEK_PUFF_RIGHT_HTC = 16,
XR_LIP_EXPRESSION_CHEEK_PUFF_LEFT_HTC = 17,
XR_LIP_EXPRESSION_CHEEK_SUCK_HTC = 18,
XR_LIP_EXPRESSION_MOUTH_UPPER_UPRIGHT_HTC = 19,
XR_LIP_EXPRESSION_MOUTH_UPPER_UPLEFT_HTC = 20,
XR_LIP_EXPRESSION_MOUTH_LOWER_DOWNRIGHT_HTC = 21,
XR_LIP_EXPRESSION_MOUTH_LOWER_DOWNLEFT_HTC = 22,
XR_LIP_EXPRESSION_MOUTH_UPPER_INSIDE_HTC = 23,
XR_LIP_EXPRESSION_MOUTH_LOWER_INSIDE_HTC = 24,
XR_LIP_EXPRESSION_MOUTH_LOWER_OVERLAY_HTC = 25,
XR_LIP_EXPRESSION_TONGUE_LONGSTEP1_HTC = 26,
XR_LIP_EXPRESSION_TONGUE_LEFT_HTC = 27,
XR_LIP_EXPRESSION_TONGUE_RIGHT_HTC = 28,
XR_LIP_EXPRESSION_TONGUE_UP_HTC = 29,
XR_LIP_EXPRESSION_TONGUE_DOWN_HTC = 30,
XR_LIP_EXPRESSION_TONGUE_ROLL_HTC = 31,
XR_LIP_EXPRESSION_TONGUE_LONGSTEP2_HTC = 32,
XR_LIP_EXPRESSION_TONGUE_UPRIGHT_MORPH_HTC = 33,
XR_LIP_EXPRESSION_TONGUE_UPLEFT_MORPH_HTC = 34,
XR_LIP_EXPRESSION_TONGUE_DOWNRIGHT_MORPH_HTC = 35,
XR_LIP_EXPRESSION_TONGUE_DOWNLEFT_MORPH_HTC = 36,
// Provided by XR_HTC_facial_tracking
XR_LIP_EXPRESSION_MOUTH_SMILE_RIGHT_HTC = XR_LIP_EXPRESSION_MOUTH_RAISER_RIGHT_HTC,
// Provided by XR_HTC_facial_tracking
XR_LIP_EXPRESSION_MOUTH_SMILE_LEFT_HTC = XR_LIP_EXPRESSION_MOUTH_RAISER_LEFT_HTC,
// Provided by XR_HTC_facial_tracking
XR_LIP_EXPRESSION_MOUTH_SAD_RIGHT_HTC = XR_LIP_EXPRESSION_MOUTH_STRETCHER_RIGHT_HTC,
// Provided by XR_HTC_facial_tracking
XR_LIP_EXPRESSION_MOUTH_SAD_LEFT_HTC = XR_LIP_EXPRESSION_MOUTH_STRETCHER_LEFT_HTC,
XR_LIP_EXPRESSION_MAX_ENUM_HTC = 0x7FFFFFFF
} XrLipExpressionHTC;
[Avatar illustrations omitted: one image per XrLipExpressionHTC blend shape, from XR_LIP_EXPRESSION_JAW_RIGHT_HTC through XR_LIP_EXPRESSION_TONGUE_DOWNLEFT_MORPH_HTC, plus a combined "O shape" example.]
New Object Types
New Flag Types
New Enum Constants
XrObjectType enumeration is extended with:
- XR_OBJECT_TYPE_FACIAL_TRACKER_HTC
XrStructureType enumeration is extended with:
- XR_TYPE_SYSTEM_FACIAL_TRACKING_PROPERTIES_HTC
- XR_TYPE_FACIAL_TRACKER_CREATE_INFO_HTC
- XR_TYPE_FACIAL_EXPRESSIONS_HTC
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2021-12-16 (Kyle Chen)
  - Initial extension description
- Revision 2, 2022-09-22 (Andy Chen)
  - Correct the range of the blink blend shapes.
- Revision 3, 2024-07-26 (Andy Chen)
  - Change expression naming convention: rename XR_LIP_EXPRESSION_MOUTH_SMILE_RIGHT_HTC to XR_LIP_EXPRESSION_MOUTH_RAISER_RIGHT_HTC, XR_LIP_EXPRESSION_MOUTH_SMILE_LEFT_HTC to XR_LIP_EXPRESSION_MOUTH_RAISER_LEFT_HTC, XR_LIP_EXPRESSION_MOUTH_SAD_RIGHT_HTC to XR_LIP_EXPRESSION_MOUTH_STRETCHER_RIGHT_HTC, and XR_LIP_EXPRESSION_MOUTH_SAD_LEFT_HTC to XR_LIP_EXPRESSION_MOUTH_STRETCHER_LEFT_HTC, providing the old names as compatibility aliases.
12.117. XR_HTC_foveation
- Name String: XR_HTC_foveation
- Extension Type: Instance extension
- Registered Extension Number: 319
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2022-09-14
- IP Status: No known IP claims.
- Contributors: Billy Chang, HTC; Bill Chang, HTC
Overview
This extension enables an application to gain rendering performance improvement by reducing the pixel density of areas in the peripheral vision, while the areas near the focal point retain the original pixel density.
The application can use this extension in the following steps:
1. Create an XrFoveationApplyInfoHTC structure with the desired foveation configurations.
2. Apply the foveation configuration by calling xrApplyFoveationHTC with the desired XrFoveationApplyInfoHTC.
Note
This extension is recommended for XrSession whose
XrViewConfigurationType is
Operate foveated rendering
The application can operate foveated rendering by calling xrApplyFoveationHTC with the corresponding foveation configuration and the specified XrSwapchainSubImage.
The xrApplyFoveationHTC function is defined as:
// Provided by XR_HTC_foveation
XrResult xrApplyFoveationHTC(
XrSession session,
const XrFoveationApplyInfoHTC* applyInfo);
The foveation configuration is applied after this call, and the state persists until the next call to xrApplyFoveationHTC or the end of this XrSession, whichever comes first. The application should not call xrApplyFoveationHTC while rendering to the target image layer XrSwapchainSubImage in the render loop.
The XrFoveationApplyInfoHTC structure is defined as:
// Provided by XR_HTC_foveation
typedef struct XrFoveationApplyInfoHTC {
XrStructureType type;
const void* next;
XrFoveationModeHTC mode;
uint32_t subImageCount;
XrSwapchainSubImage* subImages;
} XrFoveationApplyInfoHTC;
The application should set the following configurations in XrFoveationApplyInfoHTC:
- The foveation mode to be applied.
- The specified XrSwapchainSubImage for each corresponding view.
The XrSwapchain::faceCount of the swapchain in
XrSwapchainSubImage must be 1 since this extension does not support
cubemaps.
If mode is XR_FOVEATION_MODE_DYNAMIC_HTC, the next chain
for this structure must include XrFoveationDynamicModeInfoHTC
structure.
If mode is XR_FOVEATION_MODE_CUSTOM_HTC, the next chain
for this structure must include XrFoveationCustomModeInfoHTC
structure.
The order of subImages must be the same order as in
XrCompositionLayerProjectionView when submitted in xrEndFrame.
XrFoveationModeHTC identifies the different foveation modes.
// Provided by XR_HTC_foveation
typedef enum XrFoveationModeHTC {
XR_FOVEATION_MODE_DISABLE_HTC = 0,
XR_FOVEATION_MODE_FIXED_HTC = 1,
XR_FOVEATION_MODE_DYNAMIC_HTC = 2,
XR_FOVEATION_MODE_CUSTOM_HTC = 3,
XR_FOVEATION_MODE_MAX_ENUM_HTC = 0x7FFFFFFF
} XrFoveationModeHTC;
Dynamic foveation mode
In dynamic mode, the application allows the runtime to configure the foveation settings dynamically according to the system metrics or other extensions.
The XrFoveationDynamicModeInfoHTC structure is defined as:
// Provided by XR_HTC_foveation
typedef struct XrFoveationDynamicModeInfoHTC {
XrStructureType type;
const void* next;
XrFoveationDynamicFlagsHTC dynamicFlags;
} XrFoveationDynamicModeInfoHTC;
The application must chain an XrFoveationDynamicModeInfoHTC structure to XrFoveationApplyInfoHTC if dynamic mode is set.
typedef XrFlags64 XrFoveationDynamicFlagsHTC;
// Flag bits for XrFoveationDynamicFlagsHTC
static const XrFoveationDynamicFlagsHTC XR_FOVEATION_DYNAMIC_LEVEL_ENABLED_BIT_HTC = 0x00000001;
static const XrFoveationDynamicFlagsHTC XR_FOVEATION_DYNAMIC_CLEAR_FOV_ENABLED_BIT_HTC = 0x00000002;
static const XrFoveationDynamicFlagsHTC XR_FOVEATION_DYNAMIC_FOCAL_CENTER_OFFSET_ENABLED_BIT_HTC = 0x00000004;
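For instance, an application that wants the runtime to vary the foveation level and clear FOV while keeping the focal center fixed could combine the flags as below. The helper is illustrative; the flag bits are mirrored locally so the sketch compiles standalone:

```c
#include <stdint.h>

/* Local mirrors of the extension's flag type and bits. */
typedef uint64_t XrFoveationDynamicFlagsHTC;
#define XR_FOVEATION_DYNAMIC_LEVEL_ENABLED_BIT_HTC               0x00000001
#define XR_FOVEATION_DYNAMIC_CLEAR_FOV_ENABLED_BIT_HTC           0x00000002
#define XR_FOVEATION_DYNAMIC_FOCAL_CENTER_OFFSET_ENABLED_BIT_HTC 0x00000004

/* Lets the runtime adjust both the foveation level and the clear FOV,
 * but keeps the focal center offset fixed. */
static XrFoveationDynamicFlagsHTC levelAndClearFovFlags(void) {
    return XR_FOVEATION_DYNAMIC_LEVEL_ENABLED_BIT_HTC |
           XR_FOVEATION_DYNAMIC_CLEAR_FOV_ENABLED_BIT_HTC;
}
```

The result would be assigned to XrFoveationDynamicModeInfoHTC::dynamicFlags before chaining the structure to XrFoveationApplyInfoHTC.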
Custom foveation mode
The application can configure the foveation settings according to the preference of content.
The XrFoveationCustomModeInfoHTC structure is defined as:
// Provided by XR_HTC_foveation
typedef struct XrFoveationCustomModeInfoHTC {
XrStructureType type;
const void* next;
uint32_t configCount;
const XrFoveationConfigurationHTC* configs;
} XrFoveationCustomModeInfoHTC;
The application must chain an XrFoveationCustomModeInfoHTC structure to XrFoveationApplyInfoHTC to customize foveation if custom mode is set.
The XrFoveationConfigurationHTC structure is defined as:
// Provided by XR_HTC_foveation
typedef struct XrFoveationConfigurationHTC {
XrFoveationLevelHTC level;
float clearFovDegree;
XrVector2f focalCenterOffset;
} XrFoveationConfigurationHTC;
// Provided by XR_HTC_foveation
typedef enum XrFoveationLevelHTC {
XR_FOVEATION_LEVEL_NONE_HTC = 0,
XR_FOVEATION_LEVEL_LOW_HTC = 1,
XR_FOVEATION_LEVEL_MEDIUM_HTC = 2,
XR_FOVEATION_LEVEL_HIGH_HTC = 3,
XR_FOVEATION_LEVEL_MAX_ENUM_HTC = 0x7FFFFFFF
} XrFoveationLevelHTC;
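A sketch of filling one XrFoveationConfigurationHTC per view for custom mode. The structures are mirrored locally so the snippet stands alone, and the level and FOV values are illustrative choices, not recommendations from this extension:

```c
#include <stdint.h>

/* Local mirrors of the extension's enum and structures. */
typedef enum { XR_FOVEATION_LEVEL_NONE_HTC = 0, XR_FOVEATION_LEVEL_LOW_HTC = 1,
               XR_FOVEATION_LEVEL_MEDIUM_HTC = 2, XR_FOVEATION_LEVEL_HIGH_HTC = 3 } XrFoveationLevelHTC;
typedef struct { float x, y; } XrVector2f;
typedef struct {
    XrFoveationLevelHTC level;
    float clearFovDegree;
    XrVector2f focalCenterOffset;
} XrFoveationConfigurationHTC;

/* Fills one config per view: strong periphery reduction with a 38-degree
 * full-quality central region centered on each view. */
static void fillCustomConfigs(XrFoveationConfigurationHTC *configs, uint32_t viewCount) {
    for (uint32_t i = 0; i < viewCount; ++i) {
        configs[i].level = XR_FOVEATION_LEVEL_HIGH_HTC;
        configs[i].clearFovDegree = 38.0f;
        configs[i].focalCenterOffset.x = 0.0f;
        configs[i].focalCenterOffset.y = 0.0f;
    }
}
```

The filled array would be referenced from XrFoveationCustomModeInfoHTC::configs with configCount set to the view count.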
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_FOVEATION_APPLY_INFO_HTC
- XR_TYPE_FOVEATION_DYNAMIC_MODE_INFO_HTC
- XR_TYPE_FOVEATION_CUSTOM_MODE_INFO_HTC
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2022-09-14 (Billy Chang)
  - Initial extension description
12.118. XR_HTC_hand_interaction
- Name String: XR_HTC_hand_interaction
- Extension Type: Instance extension
- Registered Extension Number: 107
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- API Interactions: Interacts with XR_EXT_hand_interaction; Interacts with XR_EXT_palm_pose
- Last Modified Date: 2022-05-27
- IP Status: No known IP claims.
- Contributors: Ria Hsu, HTC; Bill Chang, HTC
Overview
This extension defines a new interaction profile for tracked hands.
Hand interaction profile
Interaction profile path:
- /interaction_profiles/htc/hand_interaction
Valid for user paths:
- /user/hand_htc/left
- /user/hand_htc/right
This interaction profile represents basic pose and actions for interaction of tracked hands.
Supported component paths for far interaction:
- …/input/select/value
- …/input/aim/pose
The application should use the …/input/aim/pose path to aim at objects in the world, and use the …/input/select/value path to determine user selection from pinch strength, whose value ranges from 0.0f to 1.0f, with 1.0f meaning the pinching fingers are touching.
Supported component paths for near interaction:
- …/input/squeeze/value
- …/input/grip/pose
The application should use the …/input/grip/pose path to interact with nearby objects and locate the position of handheld objects, and use the …/input/squeeze/value path to determine whether the hand is picking up or holding nearby objects from grip strength, whose value ranges from 0.0f to 1.0f, with 1.0f meaning the hand grip is fully closed.
Note
Far and near interaction depend on the capabilities of the hand tracking engine. The application can check isActive in XrActionStatePose for the aim and grip poses to determine whether far and near interaction are supported, and then decide the interaction behavior in content.
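Applications often debounce the analog select or squeeze value into a stable pressed state. A small sketch using hysteresis; the 0.9/0.7 thresholds are illustrative choices, not values mandated by this profile:

```c
/* Debounces a 0.0-1.0 pinch/grip strength into a stable boolean state
 * using hysteresis: engage above 0.9, release only below 0.7, so noise
 * near a single threshold cannot cause rapid toggling. */
static int updatePinchState(int wasPinching, float value) {
    if (wasPinching)
        return value > 0.7f;  /* stay engaged until well below the press point */
    return value > 0.9f;      /* require a firm pinch to engage */
}
```

The previous frame's state is fed back in each frame along with the latest …/input/select/value or …/input/squeeze/value reading.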
Version History
- Revision 1, 2022-05-27 (Ria Hsu)
  - Initial extension description
12.119. XR_HTC_passthrough
- Name String: XR_HTC_passthrough
- Extension Type: Instance extension
- Registered Extension Number: 318
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2022-09-14
- IP Status: No known IP claims.
- Contributors: Livi Lin, HTC; Sacdar Hsu, HTC; Bill Chang, HTC
Overview
This extension enables an application to show the passthrough image to see the surrounding environment from the VR headset. The application can configure the passthrough image with different appearances according to its needs.
The passthrough configurations that the runtime provides to applications include:
-
Deciding whether the passthrough layer is shown over or under the frame submitted by the application.
-
Specifying whether the passthrough fills the entire screen or is projected onto a mesh specified by the application.
-
Setting the alpha blending level for the composition of the passthrough layer.
Create a passthrough handle
An application can create an XrPassthroughHTC handle by calling xrCreatePassthroughHTC. The returned passthrough handle can be subsequently used in API calls.
// Provided by XR_HTC_passthrough
XR_DEFINE_HANDLE(XrPassthroughHTC)
The xrCreatePassthroughHTC function is defined as:
// Provided by XR_HTC_passthrough
XrResult xrCreatePassthroughHTC(
XrSession session,
const XrPassthroughCreateInfoHTC* createInfo,
XrPassthroughHTC* passthrough);
Creates an XrPassthroughHTC handle.
If the function returns successfully, the output passthrough must be
a valid handle.
The XrPassthroughCreateInfoHTC structure is defined as:
// Provided by XR_HTC_passthrough
typedef struct XrPassthroughCreateInfoHTC {
XrStructureType type;
const void* next;
XrPassthroughFormHTC form;
} XrPassthroughCreateInfoHTC;
The XrPassthroughFormHTC enumeration identifies the form of the passthrough, indicating whether the passthrough fills the full screen or is projected onto a specified mesh.
// Provided by XR_HTC_passthrough
typedef enum XrPassthroughFormHTC {
XR_PASSTHROUGH_FORM_PLANAR_HTC = 0,
XR_PASSTHROUGH_FORM_PROJECTED_HTC = 1,
XR_PASSTHROUGH_FORM_MAX_ENUM_HTC = 0x7FFFFFFF
} XrPassthroughFormHTC;
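As an illustration of choosing between the two forms, an application might request the projected form only when it actually has a mesh to project onto, since the projected form requires mesh transform information at submit time. The helper below is a hypothetical application-side sketch (with a locally declared stand-in enum), not part of the API:

```cpp
#include <cassert>
#include <cstdint>

// Stand-in enum matching the XrPassthroughFormHTC values above, declared
// locally so the sketch is self-contained without <openxr/openxr.h>.
enum PassthroughForm : uint32_t { Planar = 0, Projected = 1 };

// Hypothetical application logic: request a projected passthrough only
// when a projection mesh is available; otherwise fall back to planar.
PassthroughForm ChooseForm(bool hasProjectionMesh) {
    return hasProjectionMesh ? Projected : Planar;
}
```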
The xrDestroyPassthroughHTC function is defined as:
// Provided by XR_HTC_passthrough
XrResult xrDestroyPassthroughHTC(
XrPassthroughHTC passthrough);
The xrDestroyPassthroughHTC function releases the passthrough and the underlying resources.
Composite the passthrough layer
The XrCompositionLayerPassthroughHTC structure is defined as:
// Provided by XR_HTC_passthrough
typedef struct XrCompositionLayerPassthroughHTC {
XrStructureType type;
const void* next;
XrCompositionLayerFlags layerFlags;
XrSpace space;
XrPassthroughHTC passthrough;
XrPassthroughColorHTC color;
} XrCompositionLayerPassthroughHTC;
The application can create an XrCompositionLayerPassthroughHTC
structure with the created passthrough and the corresponding
information.
A pointer to XrCompositionLayerPassthroughHTC may be submitted in
xrEndFrame as a pointer to the base structure
XrCompositionLayerBaseHeader, in the desired layer order, to request
the runtime to composite a passthrough layer into the final frame output.
If the passthrough form specified to xrCreatePassthroughHTC is
XR_PASSTHROUGH_FORM_PROJECTED_HTC,
XrPassthroughMeshTransformInfoHTC must appear in the next
chain.
If it is absent, the runtime must return the error
XR_ERROR_VALIDATION_FAILURE.
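This next-chain requirement amounts to a simple chain walk. The sketch below uses local stand-in declarations (the real types and XrStructureType values come from openxr.h) to show the kind of check a runtime might perform:

```cpp
#include <cassert>
#include <cstdint>

// Minimal stand-in declarations for illustration only; real applications
// and runtimes use the definitions from <openxr/openxr.h>.
using XrStructureType = uint32_t;
constexpr XrStructureType kTypeMeshTransformInfo = 1; // stand-in value
struct BaseInStructure { XrStructureType type; const BaseInStructure* next; };

// Walks a next chain looking for a structure of the wanted type, the way
// a runtime might validate that XrPassthroughMeshTransformInfoHTC is
// present before accepting a projected passthrough layer.
bool ChainContains(const BaseInStructure* next, XrStructureType wanted) {
    for (; next != nullptr; next = next->next)
        if (next->type == wanted) return true;
    return false;
}
```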
The XrPassthroughColorHTC structure is defined as:
// Provided by XR_HTC_passthrough
typedef struct XrPassthroughColorHTC {
XrStructureType type;
const void* next;
float alpha;
} XrPassthroughColorHTC;
The application can specify XrPassthroughColorHTC to adjust the alpha value of the passthrough. The range is 0.0f to 1.0f, where 1.0f means fully opaque.
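An application driving a fade effect should keep the value inside this range before writing it to the structure. A minimal sketch (the helper name is illustrative, not part of the API):

```cpp
#include <cassert>

// Hypothetical helper (not part of the extension): keeps an
// application-driven fade value inside the range accepted by
// XrPassthroughColorHTC::alpha, where 0.0f is fully transparent
// and 1.0f is fully opaque.
float ClampPassthroughAlpha(float alpha) {
    if (alpha < 0.0f) return 0.0f;
    if (alpha > 1.0f) return 1.0f;
    return alpha;
}
```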
The XrPassthroughMeshTransformInfoHTC structure is defined as:
// Provided by XR_HTC_passthrough
typedef struct XrPassthroughMeshTransformInfoHTC {
XrStructureType type;
const void* next;
uint32_t vertexCount;
const XrVector3f* vertices;
uint32_t indexCount;
const uint32_t* indices;
XrSpace baseSpace;
XrTime time;
XrPosef pose;
XrVector3f scale;
} XrPassthroughMeshTransformInfoHTC;
The XrPassthroughMeshTransformInfoHTC structure describes the mesh and transformation.
The application must specify the XrPassthroughMeshTransformInfoHTC in
the next chain of XrCompositionLayerPassthroughHTC if the
specified form of passthrough layer previously created by
xrCreatePassthroughHTC is XR_PASSTHROUGH_FORM_PROJECTED_HTC.
Passing XrPassthroughMeshTransformInfoHTC updates the projected mesh information in the runtime for passthrough layer composition.
If XrPassthroughMeshTransformInfoHTC is not set correctly, the runtime
must return the error XR_ERROR_VALIDATION_FAILURE when xrEndFrame
is called with the composition layer XrCompositionLayerPassthroughHTC.
New Object Types
New Flag Types
New Enum Constants
XrObjectType enumeration is extended with:
-
XR_OBJECT_TYPE_PASSTHROUGH_HTC
XrStructureType enumeration is extended with:
-
XR_TYPE_PASSTHROUGH_CREATE_INFO_HTC
-
XR_TYPE_PASSTHROUGH_COLOR_HTC
-
XR_TYPE_PASSTHROUGH_MESH_TRANSFORM_INFO_HTC
-
XR_TYPE_COMPOSITION_LAYER_PASSTHROUGH_HTC
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2022-09-14 (Sacdar Hsu)
-
Initial extension description
-
12.120. XR_HTC_vive_wrist_tracker_interaction
- Name String
-
XR_HTC_vive_wrist_tracker_interaction
- Extension Type
-
Instance extension
- Registered Extension Number
-
108
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2022-05-27
- IP Status
-
No known IP claims.
- Contributors
-
Ria Hsu, HTC
Bill Chang, HTC
Overview
This extension provides an XrPath for getting device input from a
VIVE wrist tracker to enable its interactions.
The VIVE wrist tracker is a tracked device mainly worn on the user’s
wrist for pose tracking.
Besides this use case, the user can also attach it to a physical object
to track that object’s pose, e.g. mounted on a gun.
VIVE Wrist Tracker input
This extension exposes a new interaction profile path /interaction_profiles/htc/vive_wrist_tracker that is valid for the user paths
-
/user/wrist_htc/left
-
/user/wrist_htc/right
with supported input subpaths
-
On /user/wrist_htc/left only:
-
…/input/menu/click
-
…/input/x/click
-
On /user/wrist_htc/right only:
-
…/input/system/click (may not be available for application use)
-
…/input/a/click
-
…/input/entity_htc/pose
The entity_htc pose allows applications to recognize the origin of a tracked input device, especially for wearable devices that are not held in the user’s hand. The entity_htc pose is defined as follows:
-
The entity position: The center position of the tracked device.
-
The entity orientation: Oriented with +Y up, +X to the right, and -Z forward.
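For illustration, the forward direction of the entity pose in base-space coordinates can be derived by rotating the local -Z axis by the pose orientation. The sketch below uses locally declared stand-in math types rather than XrQuaternionf/XrVector3f so it is self-contained:

```cpp
#include <cassert>
#include <cmath>

// Stand-in math types for illustration; real applications use
// XrQuaternionf and XrVector3f from <openxr/openxr.h>.
struct Quatf { float x, y, z, w; };
struct Vec3f { float x, y, z; };

// Rotates v by unit quaternion q: v' = v + 2*cross(q.xyz, cross(q.xyz, v) + w*v).
Vec3f RotateByQuat(const Quatf& q, const Vec3f& v) {
    Vec3f u{q.x, q.y, q.z};
    Vec3f c1{u.y * v.z - u.z * v.y + q.w * v.x,
             u.z * v.x - u.x * v.z + q.w * v.y,
             u.x * v.y - u.y * v.x + q.w * v.z};
    return Vec3f{v.x + 2.0f * (u.y * c1.z - u.z * c1.y),
                 v.y + 2.0f * (u.z * c1.x - u.x * c1.z),
                 v.z + 2.0f * (u.x * c1.y - u.y * c1.x)};
}

// Forward axis of the entity_htc pose convention: -Z.
Vec3f EntityForward(const Quatf& orientation) {
    return RotateByQuat(orientation, Vec3f{0.0f, 0.0f, -1.0f});
}
```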
Version History
-
Revision 1, 2022-05-27 (Ria Hsu)
-
Initial extension description
-
12.121. XR_HUAWEI_controller_interaction
- Name String
-
XR_HUAWEI_controller_interaction
- Extension Type
-
Instance extension
- Registered Extension Number
-
70
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- API Interactions
-
Interacts with XR_EXT_dpad_binding
-
Interacts with XR_EXT_hand_interaction
-
Interacts with XR_EXT_palm_pose
- Last Modified Date
-
2020-05-26
- IP Status
-
No known IP claims.
- Contributors
-
Guodong Chen, Huawei
Kai Shao, Huawei
Yang Tao, Huawei
Gang Shen, Huawei
Yihong Huang, Huawei
Overview
This extension defines a new interaction profile for the Huawei Controller, including but not limited to Huawei VR Glasses Controller.
Huawei Controller interaction profile
Interaction profile path:
-
/interaction_profiles/huawei/controller
Valid for user paths:
-
/user/hand/left
-
/user/hand/right
This interaction profile represents the input sources and haptics on the Huawei Controller.
Supported component paths:
-
…/input/home/click
-
…/input/back/click
-
…/input/volume_up/click
-
…/input/volume_down/click
-
…/input/trigger/value
-
…/input/trigger/click
-
…/input/trackpad/x
-
…/input/trackpad/y
-
…/input/trackpad/click
-
…/input/trackpad/touch
-
…/input/aim/pose
-
…/input/grip/pose
-
…/output/haptic
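When XR_EXT_dpad_binding is unavailable, an application can derive discrete directions from the analog …/input/trackpad/x and …/input/trackpad/y axes itself. The following is a hypothetical application-side sketch (the helper and its deadzone value are assumptions, not part of the profile):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical application-side helper: maps the trackpad x/y axes,
// each nominally in [-1, 1], to a discrete direction, the kind of
// mapping XR_EXT_dpad_binding would otherwise perform in the runtime.
enum class PadDir { None, Up, Down, Left, Right };

PadDir TrackpadToDpad(float x, float y, float deadzone = 0.5f) {
    if (std::sqrt(x * x + y * y) < deadzone) return PadDir::None;
    if (std::fabs(x) > std::fabs(y))
        return x > 0 ? PadDir::Right : PadDir::Left;
    return y > 0 ? PadDir::Up : PadDir::Down;
}
```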
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2020-04-28 (Yihong Huang)
-
Initial extension description
-
12.122. XR_LOGITECH_mx_ink_stylus_interaction
- Name String
-
XR_LOGITECH_mx_ink_stylus_interaction
- Extension Type
-
Instance extension
- Registered Extension Number
-
746
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- API Interactions
-
Interacts with XR_EXT_hand_interaction
-
Interacts with XR_EXT_palm_pose
- Last Modified Date
-
2024-06-02
- IP Status
-
No known IP claims.
- Contributors
-
Mario Gutierrez, Logitech
Aidan Kehoe, Logitech
Fabien Zellweger, Logitech
Overview
This extension defines a new interaction profile for the Logitech MX Ink, a 6-DOF tracked stylus.
12.122.1. Logitech MX Ink Stylus Interaction Profile
Interaction profile path:
-
/interaction_profiles/logitech/mx_ink_stylus_logitech
Valid for user paths:
-
/user/hand/left
-
/user/hand/right
This interaction profile provides inputs and outputs for a 6-DOF tracked controller shaped as a stylus. This controller allows for writing/sketching on 2D surfaces using an analog force sensor at the tip. In addition to the tip, the controller has an input cluster composed of three buttons. The middle button features a force sensor. The front and back buttons provide two different inputs: click and double tap detection. Double tap events are treated as virtual buttons. The stylus also includes a system button, situated at the rear end of the device.
Supported component paths:
-
…/input/system/click (may not be available for application use)
-
…/input/grip/pose
-
…/input/aim/pose
-
…/input/cluster_front_logitech/click
-
…/input/cluster_front_logitech/double_tap_logitech
-
…/input/cluster_middle_logitech/force
-
…/input/cluster_back_logitech/click
-
…/input/cluster_back_logitech/double_tap_logitech
-
…/input/dock_logitech/docked_logitech
-
…/input/tip_logitech/force
-
…/input/tip_logitech/pose
-
…/output/haptic
12.122.2. New Identifiers and Components
Types of Holds
The device associated with this interaction profile is a stylus, and thus affords two basic ways to hold and use it. The device does not sense which hold is in use, but some pose input subpaths only make sense in the context of one of the two holds. Application developers should consider which hold is expected in a given setting when designing suggested bindings.
A "precise grip" is a pencil-like hold allowing precision control using the arm, wrist, hand, and fingers. A precise grip is the primary hold encouraged by the affordances of the device design. It provides fine control over close interaction. While the stylus shape allows several orientations of this hold about the major axis of the device, the design intent is for the control cluster buttons to be accessible to the index finger and/or thumb while maintaining a precise grip.
A "power grip", by contrast, is a closed-fist hold, providing more coarse control using the arm and wrist. The control cluster buttons may be inaccessible when a user is employing a power grip. This is a secondary method of using the device. Most other XR controllers with interaction profiles in this specification are designed primarily for a power grip, and the "grip" standard pose identifier is intended to be a power grip in all of those profiles.
-
…/input/tip_logitech/pose is intended for close selection, interaction, and drawing. This pose input subpath is referenced to the physical hardware rather than an assumed or designed hold. Therefore, its pose is meaningful in both precise grip and power grip. However, it also serves as the primary interaction pose for close-range use in the precise grip.
-
…/input/aim/pose is intended for selection and interaction at a distance according to platform conventions. When used with this interaction profile, the position and orientation of this subpath relative to the device assumes a precise grip. This pose input subpath is not necessarily suitable for use with a power grip.
-
…/input/grip/pose in this interaction profile is for close, coarse interaction while using a power grip, or virtual tool/object holding in a power grip. The relative pose of this subpath is based on the design intent for holding in a power grip. Be aware that due to the shape of the device, a user’s actual power grip may easily vary from the design intent by a rotation about the Z axis, as well as translation along it. As such, inferring hand position and orientation from this pose is not recommended. Discouraged from use when a precise grip is anticipated, as this pose has very little to no meaningful relationship to the device or the user’s hand in a precise grip.
-
…/input/grip_surface/pose (when available — see XR_KHR_maintenance1 or XR_VERSION_1_1) in this interaction profile is established assuming a power grip, holding the stylus against the palm in a closed fist. The predecessor of this input subpath, …/input/palm_ext/pose, is identically established, when available (see XR_EXT_palm_pose). The descriptions of positions and axes in the "grip_surface" standard pose identifier definition are not valid when the device is held in a precise grip rather than a power grip.
-
…/input/poke_ext/pose and …/input/pinch_ext/pose (when available — see XR_EXT_hand_interaction) in this interaction profile are established assuming a precise grip interacting using the stylus tip. Discouraged from use when a power grip is anticipated.
-
NB: The device associated with this interaction profile also supports use through the /interaction_profiles/ext/hand_interaction interaction profile provided by XR_EXT_hand_interaction.
-
Designing Suggested Bindings
The table below provides recommended translations for suggested interaction bindings from other controllers to this interaction profile, serving as a starting point for development.
As this controller is ambidextrous, the runtime should provide a platform settings method to select the stylus handedness and map the input paths for the A/B and X/Y buttons according to the current handedness.
The runtime may provide a platform settings method to define on which button the double tap events will be detected, and which virtual button it is mapped to. It is recommended to suggest binding a single action to both double tap subpaths for consistent behavior regardless of user settings.
| Input subpath on common controller designs | Recommended input subpath to suggest when using with /interaction_profiles/logitech/mx_ink_stylus_logitech |
|---|---|
| …/input/squeeze/click | …/input/cluster_front_logitech/click |
| …/input/trigger/value | …/input/cluster_middle_logitech/force |
| …/input/a/click | …/input/cluster_back_logitech/click |
| …/input/b/click | …/input/cluster_back_logitech/double_tap_logitech or …/input/cluster_front_logitech/double_tap_logitech |
| …/input/x/click | …/input/cluster_back_logitech/click |
| …/input/y/click | …/input/cluster_back_logitech/double_tap_logitech or …/input/cluster_front_logitech/double_tap_logitech |
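For illustration, the translation table can be expressed as a simple lookup. The helper below is a hypothetical sketch; in particular, the choice of which double tap subpath to map the b/y buttons to is an application decision, and both alternatives in the table are equally valid:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical translation helper built from the table above; returns an
// empty string for paths with no suggested translation.
const std::string& TranslateToStylus(const std::string& commonPath) {
    static const std::map<std::string, std::string> kMap = {
        {"/input/squeeze/click", "/input/cluster_front_logitech/click"},
        {"/input/trigger/value", "/input/cluster_middle_logitech/force"},
        {"/input/a/click",       "/input/cluster_back_logitech/click"},
        {"/input/b/click",       "/input/cluster_back_logitech/double_tap_logitech"},
        {"/input/x/click",       "/input/cluster_back_logitech/click"},
        {"/input/y/click",       "/input/cluster_back_logitech/double_tap_logitech"},
    };
    static const std::string kEmpty;
    auto it = kMap.find(commonPath);
    return it != kMap.end() ? it->second : kEmpty;
}
```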
12.123. XR_META_automatic_layer_filter
- Name String
-
XR_META_automatic_layer_filter
- Extension Type
-
Instance extension
- Registered Extension Number
-
272
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Rohit Rao Padebettu, Meta
Grant Yang, Meta
Overview
This extension defines a new flag in XrCompositionLayerSettingsFlagBitsFB that allows applications to provide a hint to the runtime to automatically toggle a layer filtering mechanism. The layer filtering helps alleviate visual quality artifacts such as blur and flicker.
Note: The runtime may use any factors it wishes to apply a filter to the layer. These may include not only fixed factors such as screen resolution, HMD type, and swapchain resolution, but also dynamic ones such as layer pose and system-wide GPU utilization.
Automatic Layer Filtering
XrCompositionLayerSettingsFlagBitsFB is extended with
XR_COMPOSITION_LAYER_SETTINGS_AUTO_LAYER_FILTER_BIT_META
To enable automatic selection of layer filtering method,
XR_COMPOSITION_LAYER_SETTINGS_AUTO_LAYER_FILTER_BIT_META is passed to
the runtime in XrCompositionLayerSettingsFB::layerFlags.
A candidate pool of preferred layer filtering methods from
XrCompositionLayerSettingsFlagBitsFB must be passed along with
XR_COMPOSITION_LAYER_SETTINGS_AUTO_LAYER_FILTER_BIT_META.
The runtime may apply the appropriate filter when rendering the layer.
The runtime must return XR_ERROR_VALIDATION_FAILURE from
xrEndFrame when an XrCompositionLayerSettingsFB structure is
submitted with one or more of the layers if no other flag bits are supplied
with XR_COMPOSITION_LAYER_SETTINGS_AUTO_LAYER_FILTER_BIT_META.
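This flag requirement amounts to a simple bitmask check: the automatic filter bit must be accompanied by at least one candidate filtering method bit. A sketch with a stand-in flag value (the real bit value comes from openxr.h):

```cpp
#include <cassert>
#include <cstdint>

// Stand-in flag value for illustration only; the real value is defined
// with XrCompositionLayerSettingsFlagsFB in <openxr/openxr.h>.
using LayerSettingsFlags = uint64_t;
constexpr LayerSettingsFlags kAutoLayerFilterBit = 0x20; // stand-in value

// Validates, the way a runtime might, that the automatic filter bit is
// accompanied by at least one candidate filtering method bit.
bool AutoFilterFlagsValid(LayerSettingsFlags flags) {
    if ((flags & kAutoLayerFilterBit) == 0) return true; // auto mode not requested
    return (flags & ~kAutoLayerFilterBit) != 0;          // needs a candidate pool
}
```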
Version History
-
Revision 1, 2023-04-21 (Rohit Rao Padebettu)
-
Initial extension description
-
12.124. XR_META_body_tracking_calibration
- Name String
-
XR_META_body_tracking_calibration
- Extension Type
-
Instance extension
- Registered Extension Number
-
284
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2025-07-23
- IP Status
-
No known IP claims.
- Contributors
-
Giancarlo Di Biase, Meta
Dikpal Reddy, Meta
Igor Tceglevskii, Meta
Bill Orr, Meta
Alexey Sidnev, Meta
Alexandra Movsesyan, Meta
12.124.1. Overview
This extension enables applications utilizing XR_FB_body_tracking to
override automatic calibration, and to determine the current calibration
status for automatic calibration.
Applications can call the xrSuggestBodyTrackingCalibrationOverrideMETA function to override calibration on request, for example, when the user indicates it is desired through an in-app UI.
Typical times to calibrate are during app startup (when body tracking is first initialized), on user request, when calibration is detected to be low quality, and on user switch.
Manual calibration is validated, with applications providing accurate values to the runtime.
12.124.2. Inspect system capability
The XrSystemPropertiesBodyTrackingCalibrationMETA structure is defined as:
// Provided by XR_META_body_tracking_calibration
typedef struct XrSystemPropertiesBodyTrackingCalibrationMETA {
XrStructureType type;
void* next;
XrBool32 supportsHeightOverride;
} XrSystemPropertiesBodyTrackingCalibrationMETA;
An application can inspect whether the system supports body tracking and body calibration by extending the XrSystemProperties with XrSystemPropertiesBodyTrackingCalibrationMETA structure when calling xrGetSystemProperties.
If a runtime returns XR_FALSE for supportsHeightOverride, the
runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrSuggestBodyTrackingCalibrationOverrideMETA and from
xrResetBodyTrackingCalibrationMETA.
12.124.3. Overriding the user height
The xrSuggestBodyTrackingCalibrationOverrideMETA function is defined as:
// Provided by XR_META_body_tracking_calibration
XrResult xrSuggestBodyTrackingCalibrationOverrideMETA(
XrBodyTrackerFB bodyTracker,
const XrBodyTrackingCalibrationInfoMETA* calibrationInfo);
Applications can choose to call xrSuggestBodyTrackingCalibrationOverrideMETA on app startup, or implement a UI to allow users to specify when it should be called.
Runtimes must return XR_ERROR_VALIDATION_FAILURE if the specified
height falls outside the range (0.5 meters, 3 meters).
Runtimes should use the provided calibration data as an input to calibrate any body tracking algorithms.
The XrBodyTrackingCalibrationInfoMETA structure is defined as:
// Provided by XR_META_body_tracking_calibration
typedef struct XrBodyTrackingCalibrationInfoMETA {
XrStructureType type;
const void* next;
float bodyHeight;
} XrBodyTrackingCalibrationInfoMETA;
The XrBodyTrackingCalibrationInfoMETA structure contains the user’s height that the runtime should use to calibrate body tracking.
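The height validation described above can be mirrored application-side before calling xrSuggestBodyTrackingCalibrationOverrideMETA, avoiding a call that is guaranteed to fail with XR_ERROR_VALIDATION_FAILURE. A minimal sketch (the helper name is illustrative):

```cpp
#include <cassert>

// Mirrors, for illustration, the validation the runtime performs on
// XrBodyTrackingCalibrationInfoMETA::bodyHeight: heights outside the
// open range (0.5 m, 3 m) cause XR_ERROR_VALIDATION_FAILURE.
bool BodyHeightInRange(float bodyHeightMeters) {
    return bodyHeightMeters > 0.5f && bodyHeightMeters < 3.0f;
}
```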
12.124.4. Reset body tracking calibration
The xrResetBodyTrackingCalibrationMETA function is defined as:
// Provided by XR_META_body_tracking_calibration
XrResult xrResetBodyTrackingCalibrationMETA(
XrBodyTrackerFB bodyTracker);
This function removes any height override previously set by xrSuggestBodyTrackingCalibrationOverrideMETA and allows the runtime to return to automatic calibration.
12.124.5. Body Tracking Calibration Status
The XrBodyTrackingCalibrationStateMETA enum describes the current calibration state.
// Provided by XR_META_body_tracking_calibration
typedef enum XrBodyTrackingCalibrationStateMETA {
XR_BODY_TRACKING_CALIBRATION_STATE_VALID_META = 1,
XR_BODY_TRACKING_CALIBRATION_STATE_CALIBRATING_META = 2,
XR_BODY_TRACKING_CALIBRATION_STATE_INVALID_META = 3,
XR_BODY_TRACKING_CALIBRATION_STATE_MAX_ENUM_META = 0x7FFFFFFF
} XrBodyTrackingCalibrationStateMETA;
| Enum | Description |
|---|---|
| XR_BODY_TRACKING_CALIBRATION_STATE_VALID_META | Valid calibration, body tracking expected to be stable. |
| XR_BODY_TRACKING_CALIBRATION_STATE_CALIBRATING_META | Calibration is in progress, body joint poses from |
| XR_BODY_TRACKING_CALIBRATION_STATE_INVALID_META | Calibration is invalid, accessing the body joint poses from |
The XrBodyTrackingCalibrationStatusMETA structure is defined as:
// Provided by XR_META_body_tracking_calibration
typedef struct XrBodyTrackingCalibrationStatusMETA {
XrStructureType type;
void* next;
XrBodyTrackingCalibrationStateMETA status;
} XrBodyTrackingCalibrationStatusMETA;
The XrBodyTrackingCalibrationStatusMETA structure contains an XrBodyTrackingCalibrationStateMETA that describes the current calibration status.
The application can obtain the calibration status by adding
XrBodyTrackingCalibrationStatusMETA to the
XrBodyJointLocationsFB::next chain when calling
xrLocateBodyJointsFB.
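Chaining the status structure follows the usual OpenXR output next-chain pattern. The sketch below uses local stand-in declarations (the real types and XrStructureType values come from openxr.h) to show prepending the status structure onto the locations chain:

```cpp
#include <cassert>
#include <cstdint>

// Stand-in declarations for illustration only; real applications use
// XrBodyJointLocationsFB, XrBodyTrackingCalibrationStatusMETA, and the
// XrStructureType values from <openxr/openxr.h>.
using XrStructureType = uint32_t;
constexpr XrStructureType kTypeCalibrationStatus = 1; // stand-in value
struct BaseOutStructure { XrStructureType type; BaseOutStructure* next; };
struct CalibrationStatus { XrStructureType type; BaseOutStructure* next; int32_t status; };

// Mirrors how an application chains the calibration status structure onto
// the locations structure's next pointer before calling xrLocateBodyJointsFB.
void ChainStatus(BaseOutStructure& locations, CalibrationStatus& status) {
    status.type = kTypeCalibrationStatus;
    status.next = locations.next;                                  // preserve any existing chain
    locations.next = reinterpret_cast<BaseOutStructure*>(&status); // prepend
}
```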
12.124.9. New Enum Constants
-
XR_META_BODY_TRACKING_CALIBRATION_EXTENSION_NAME
-
XR_META_body_tracking_calibration_SPEC_VERSION
-
Extending XrStructureType:
-
XR_TYPE_BODY_TRACKING_CALIBRATION_INFO_META
-
XR_TYPE_BODY_TRACKING_CALIBRATION_STATUS_META
-
XR_TYPE_SYSTEM_PROPERTIES_BODY_TRACKING_CALIBRATION_META
-
12.125. XR_META_body_tracking_full_body
- Name String
-
XR_META_body_tracking_full_body
- Extension Type
-
Instance extension
- Registered Extension Number
-
275
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2024-08-26
- IP Status
-
No known IP claims.
- Contributors
-
Giancarlo Di Biase, Meta
Dikpal Reddy, Meta
Igor Tceglevskii, Meta
Bill Orr, Meta
12.125.1. Overview
This extension extends body tracking to support the full body, including the lower body.
This extension builds on top of the XR_FB_body_tracking extension,
which only exposed the upper body, torso, and hands.
12.125.2. Inspect system capability
The XrSystemPropertiesBodyTrackingFullBodyMETA structure is defined as:
// Provided by XR_META_body_tracking_full_body
typedef struct XrSystemPropertiesBodyTrackingFullBodyMETA {
XrStructureType type;
void* next;
XrBool32 supportsFullBodyTracking;
} XrSystemPropertiesBodyTrackingFullBodyMETA;
An application can inspect whether the system is capable of full body tracking by extending the XrSystemProperties with XrSystemPropertiesBodyTrackingFullBodyMETA structure when calling xrGetSystemProperties.
If a runtime returns XR_FALSE for supportsFullBodyTracking, the
runtime must return XR_ERROR_VALIDATION_FAILURE from
xrCreateBodyTrackerFB when trying to create a body tracker for full
body tracking using the joint set XR_BODY_JOINT_SET_FULL_BODY_META.
12.125.3. Creating body tracker for full body tracking
The XrBodyJointSetFB enum is extended with a new joint set
XR_BODY_JOINT_SET_FULL_BODY_META, which can be set as the joint set in
the XrBodyTrackerCreateInfoFB info parameter to
xrCreateBodyTrackerFB.
// Provided by XR_META_body_tracking_full_body
typedef enum XrFullBodyJointMETA {
XR_FULL_BODY_JOINT_ROOT_META = 0,
XR_FULL_BODY_JOINT_HIPS_META = 1,
XR_FULL_BODY_JOINT_SPINE_LOWER_META = 2,
XR_FULL_BODY_JOINT_SPINE_MIDDLE_META = 3,
XR_FULL_BODY_JOINT_SPINE_UPPER_META = 4,
XR_FULL_BODY_JOINT_CHEST_META = 5,
XR_FULL_BODY_JOINT_NECK_META = 6,
XR_FULL_BODY_JOINT_HEAD_META = 7,
XR_FULL_BODY_JOINT_LEFT_SHOULDER_META = 8,
XR_FULL_BODY_JOINT_LEFT_SCAPULA_META = 9,
XR_FULL_BODY_JOINT_LEFT_ARM_UPPER_META = 10,
XR_FULL_BODY_JOINT_LEFT_ARM_LOWER_META = 11,
XR_FULL_BODY_JOINT_LEFT_HAND_WRIST_TWIST_META = 12,
XR_FULL_BODY_JOINT_RIGHT_SHOULDER_META = 13,
XR_FULL_BODY_JOINT_RIGHT_SCAPULA_META = 14,
XR_FULL_BODY_JOINT_RIGHT_ARM_UPPER_META = 15,
XR_FULL_BODY_JOINT_RIGHT_ARM_LOWER_META = 16,
XR_FULL_BODY_JOINT_RIGHT_HAND_WRIST_TWIST_META = 17,
XR_FULL_BODY_JOINT_LEFT_HAND_PALM_META = 18,
XR_FULL_BODY_JOINT_LEFT_HAND_WRIST_META = 19,
XR_FULL_BODY_JOINT_LEFT_HAND_THUMB_METACARPAL_META = 20,
XR_FULL_BODY_JOINT_LEFT_HAND_THUMB_PROXIMAL_META = 21,
XR_FULL_BODY_JOINT_LEFT_HAND_THUMB_DISTAL_META = 22,
XR_FULL_BODY_JOINT_LEFT_HAND_THUMB_TIP_META = 23,
XR_FULL_BODY_JOINT_LEFT_HAND_INDEX_METACARPAL_META = 24,
XR_FULL_BODY_JOINT_LEFT_HAND_INDEX_PROXIMAL_META = 25,
XR_FULL_BODY_JOINT_LEFT_HAND_INDEX_INTERMEDIATE_META = 26,
XR_FULL_BODY_JOINT_LEFT_HAND_INDEX_DISTAL_META = 27,
XR_FULL_BODY_JOINT_LEFT_HAND_INDEX_TIP_META = 28,
XR_FULL_BODY_JOINT_LEFT_HAND_MIDDLE_METACARPAL_META = 29,
XR_FULL_BODY_JOINT_LEFT_HAND_MIDDLE_PROXIMAL_META = 30,
XR_FULL_BODY_JOINT_LEFT_HAND_MIDDLE_INTERMEDIATE_META = 31,
XR_FULL_BODY_JOINT_LEFT_HAND_MIDDLE_DISTAL_META = 32,
XR_FULL_BODY_JOINT_LEFT_HAND_MIDDLE_TIP_META = 33,
XR_FULL_BODY_JOINT_LEFT_HAND_RING_METACARPAL_META = 34,
XR_FULL_BODY_JOINT_LEFT_HAND_RING_PROXIMAL_META = 35,
XR_FULL_BODY_JOINT_LEFT_HAND_RING_INTERMEDIATE_META = 36,
XR_FULL_BODY_JOINT_LEFT_HAND_RING_DISTAL_META = 37,
XR_FULL_BODY_JOINT_LEFT_HAND_RING_TIP_META = 38,
XR_FULL_BODY_JOINT_LEFT_HAND_LITTLE_METACARPAL_META = 39,
XR_FULL_BODY_JOINT_LEFT_HAND_LITTLE_PROXIMAL_META = 40,
XR_FULL_BODY_JOINT_LEFT_HAND_LITTLE_INTERMEDIATE_META = 41,
XR_FULL_BODY_JOINT_LEFT_HAND_LITTLE_DISTAL_META = 42,
XR_FULL_BODY_JOINT_LEFT_HAND_LITTLE_TIP_META = 43,
XR_FULL_BODY_JOINT_RIGHT_HAND_PALM_META = 44,
XR_FULL_BODY_JOINT_RIGHT_HAND_WRIST_META = 45,
XR_FULL_BODY_JOINT_RIGHT_HAND_THUMB_METACARPAL_META = 46,
XR_FULL_BODY_JOINT_RIGHT_HAND_THUMB_PROXIMAL_META = 47,
XR_FULL_BODY_JOINT_RIGHT_HAND_THUMB_DISTAL_META = 48,
XR_FULL_BODY_JOINT_RIGHT_HAND_THUMB_TIP_META = 49,
XR_FULL_BODY_JOINT_RIGHT_HAND_INDEX_METACARPAL_META = 50,
XR_FULL_BODY_JOINT_RIGHT_HAND_INDEX_PROXIMAL_META = 51,
XR_FULL_BODY_JOINT_RIGHT_HAND_INDEX_INTERMEDIATE_META = 52,
XR_FULL_BODY_JOINT_RIGHT_HAND_INDEX_DISTAL_META = 53,
XR_FULL_BODY_JOINT_RIGHT_HAND_INDEX_TIP_META = 54,
XR_FULL_BODY_JOINT_RIGHT_HAND_MIDDLE_METACARPAL_META = 55,
XR_FULL_BODY_JOINT_RIGHT_HAND_MIDDLE_PROXIMAL_META = 56,
XR_FULL_BODY_JOINT_RIGHT_HAND_MIDDLE_INTERMEDIATE_META = 57,
XR_FULL_BODY_JOINT_RIGHT_HAND_MIDDLE_DISTAL_META = 58,
XR_FULL_BODY_JOINT_RIGHT_HAND_MIDDLE_TIP_META = 59,
XR_FULL_BODY_JOINT_RIGHT_HAND_RING_METACARPAL_META = 60,
XR_FULL_BODY_JOINT_RIGHT_HAND_RING_PROXIMAL_META = 61,
XR_FULL_BODY_JOINT_RIGHT_HAND_RING_INTERMEDIATE_META = 62,
XR_FULL_BODY_JOINT_RIGHT_HAND_RING_DISTAL_META = 63,
XR_FULL_BODY_JOINT_RIGHT_HAND_RING_TIP_META = 64,
XR_FULL_BODY_JOINT_RIGHT_HAND_LITTLE_METACARPAL_META = 65,
XR_FULL_BODY_JOINT_RIGHT_HAND_LITTLE_PROXIMAL_META = 66,
XR_FULL_BODY_JOINT_RIGHT_HAND_LITTLE_INTERMEDIATE_META = 67,
XR_FULL_BODY_JOINT_RIGHT_HAND_LITTLE_DISTAL_META = 68,
XR_FULL_BODY_JOINT_RIGHT_HAND_LITTLE_TIP_META = 69,
XR_FULL_BODY_JOINT_LEFT_UPPER_LEG_META = 70,
XR_FULL_BODY_JOINT_LEFT_LOWER_LEG_META = 71,
XR_FULL_BODY_JOINT_LEFT_FOOT_ANKLE_TWIST_META = 72,
XR_FULL_BODY_JOINT_LEFT_FOOT_ANKLE_META = 73,
XR_FULL_BODY_JOINT_LEFT_FOOT_SUBTALAR_META = 74,
XR_FULL_BODY_JOINT_LEFT_FOOT_TRANSVERSE_META = 75,
XR_FULL_BODY_JOINT_LEFT_FOOT_BALL_META = 76,
XR_FULL_BODY_JOINT_RIGHT_UPPER_LEG_META = 77,
XR_FULL_BODY_JOINT_RIGHT_LOWER_LEG_META = 78,
XR_FULL_BODY_JOINT_RIGHT_FOOT_ANKLE_TWIST_META = 79,
XR_FULL_BODY_JOINT_RIGHT_FOOT_ANKLE_META = 80,
XR_FULL_BODY_JOINT_RIGHT_FOOT_SUBTALAR_META = 81,
XR_FULL_BODY_JOINT_RIGHT_FOOT_TRANSVERSE_META = 82,
XR_FULL_BODY_JOINT_RIGHT_FOOT_BALL_META = 83,
XR_FULL_BODY_JOINT_COUNT_META = 84,
XR_FULL_BODY_JOINT_NONE_META = 85,
XR_FULL_BODY_JOINT_MAX_ENUM_META = 0x7FFFFFFF
} XrFullBodyJointMETA;
These joint enumeration values index into the array returned by body
tracking when the joint set XR_BODY_JOINT_SET_FULL_BODY_META is used.
There are a total of XR_FULL_BODY_JOINT_COUNT_META joints in this set.
The joint indices shared with XrBodyJointFB have the same semantic
meaning.
The meaning of joint index 0 through XR_BODY_JOINT_COUNT_FB - 1 (69)
matches the corresponding index in XrBodyJointFB.
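This index compatibility can be captured in a small helper. The constants below are restated for illustration (they match the values of XR_BODY_JOINT_COUNT_FB and XR_FULL_BODY_JOINT_COUNT_META):

```cpp
#include <cassert>
#include <cstdint>

// Restated constants for illustration; the authoritative values are
// XR_BODY_JOINT_COUNT_FB (70) from XR_FB_body_tracking and
// XR_FULL_BODY_JOINT_COUNT_META (84) from this extension.
constexpr uint32_t kBodyJointCountFB = 70;
constexpr uint32_t kFullBodyJointCountMETA = 84;

// True if a full-body joint index has the same semantic meaning as the
// corresponding XrBodyJointFB index, i.e. it is one of the shared
// upper-body/hand joints rather than a lower-body joint added by this
// extension.
bool IsSharedWithBodyJointFB(uint32_t fullBodyJointIndex) {
    return fullBodyJointIndex < kBodyJointCountFB;
}
```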
12.125.4. Example code for setting full body tracking
The following example code demonstrates how to create a body tracker with full body joint set.
XrInstance instance; // previously initialized
XrSystemId systemId; // previously initialized
XrSession session; // previously initialized
PFN_xrCreateBodyTrackerFB pfnCreateBodyTrackerFB; // previously initialized
PFN_xrLocateBodyJointsFB pfnLocateBodyJointsFB; // previously initialized
XrSpace stageSpace; // previously initialized
// Confirm full body tracking system support.
XrSystemPropertiesBodyTrackingFullBodyMETA bodyTrackingFullBodySystemProperties{
XR_TYPE_SYSTEM_PROPERTIES_BODY_TRACKING_FULL_BODY_META};
XrSystemProperties systemProperties{XR_TYPE_SYSTEM_PROPERTIES,
&bodyTrackingFullBodySystemProperties};
CHK_XR(xrGetSystemProperties(instance, systemId, &systemProperties));
if (!bodyTrackingFullBodySystemProperties.supportsFullBodyTracking) {
// The system does not support full body tracking.
return;
}
XrBodyTrackerCreateInfoFB createInfo{XR_TYPE_BODY_TRACKER_CREATE_INFO_FB};
createInfo.bodyJointSet = XR_BODY_JOINT_SET_FULL_BODY_META;
XrBodyTrackerFB bodyTracker = XR_NULL_HANDLE;
CHK_XR(pfnCreateBodyTrackerFB(session, &createInfo, &bodyTracker));
while (1) {
// ...
// For every frame in the frame loop
// ...
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
XrBodyJointsLocateInfoFB locateInfo{XR_TYPE_BODY_JOINTS_LOCATE_INFO_FB};
locateInfo.baseSpace = stageSpace;
locateInfo.time = time;
XrBodyJointLocationsFB bodyLocations{XR_TYPE_BODY_JOINT_LOCATIONS_FB};
bodyLocations.jointCount = XR_FULL_BODY_JOINT_COUNT_META;
XrBodyJointLocationFB jointLocations[XR_FULL_BODY_JOINT_COUNT_META];
bodyLocations.jointLocations = jointLocations;
CHK_XR(pfnLocateBodyJointsFB(bodyTracker, &locateInfo, &bodyLocations));
}
12.125.7. New Enum Constants
-
XR_META_BODY_TRACKING_FULL_BODY_EXTENSION_NAME
-
XR_META_body_tracking_full_body_SPEC_VERSION
-
Extending XrBodyJointSetFB:
-
XR_BODY_JOINT_SET_FULL_BODY_META
-
Extending XrStructureType:
-
XR_TYPE_SYSTEM_PROPERTIES_BODY_TRACKING_FULL_BODY_META
-
Issues
Version History
-
Revision 1, 2024-08-26 (Bill Orr)
-
Initial extension description
-
12.126. XR_META_colocation_discovery
- Name String
-
XR_META_colocation_discovery
- Extension Type
-
Instance extension
- Registered Extension Number
-
572
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2024-06-15
- IP Status
-
No known IP claims.
- Contributors
-
TJ Gilbrough, Meta Platforms
Lionel Reyero, Meta Platforms
Scott Dewald, Meta Platforms
12.126.1. Overview
Colocation discovery is a capability available through the
XR_META_colocation_discovery extension that allows apps to discover
physically colocated devices running the same app.
In the context of this extension, "the same application" means "bytewise identical Android package name" when running on an Android-based platform.
12.126.2. Check compatibility
The XrSystemColocationDiscoveryPropertiesMETA structure is defined as:
// Provided by XR_META_colocation_discovery
typedef struct XrSystemColocationDiscoveryPropertiesMETA {
XrStructureType type;
void* next;
XrBool32 supportsColocationDiscovery;
} XrSystemColocationDiscoveryPropertiesMETA;
An application can inspect whether the system is capable of colocation advertisement and discovery by extending the XrSystemProperties with XrSystemColocationDiscoveryPropertiesMETA structure when calling xrGetSystemProperties.
If a runtime returns XR_FALSE for supportsColocationDiscovery,
the runtime must return XR_ERROR_FEATURE_UNSUPPORTED for all
functions in the XR_META_colocation_discovery extension.
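An application can mirror this contract by gating its calls on the reported system property. The sketch below is illustrative, using stand-in aliases and result codes rather than the real XrResult values:

```cpp
#include <cassert>
#include <cstdint>

// Stand-in aliases and result codes for illustration only; real
// definitions come from <openxr/openxr.h>.
using XrBool32 = uint32_t;
using XrResult = int32_t;
constexpr XrResult kSuccess = 0;                  // stand-in for XR_SUCCESS
constexpr XrResult kErrorFeatureUnsupported = -1; // stand-in, not the real value

// Application-side guard: only attempt colocation advertisement when
// XrSystemColocationDiscoveryPropertiesMETA reported support, since every
// function in the extension is specified to fail with
// XR_ERROR_FEATURE_UNSUPPORTED otherwise.
XrResult StartAdvertisementIfSupported(XrBool32 supportsColocationDiscovery,
                                       XrResult (*start)()) {
    if (!supportsColocationDiscovery)
        return kErrorFeatureUnsupported;
    return start();
}
```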
Colocation advertisement controls whether a device is discoverable using colocation discovery, so the term "colocation discovery" on its own is used here to refer to the combined capability of colocation advertisement and colocation discovery.
12.126.3. Controlling Colocation Advertisement
The ability for other physically colocated devices running the same
application to discover the current device is known as "colocation
advertisement".
The xrStartColocationAdvertisementMETA function requests starting
colocation advertisement, while the xrStopColocationAdvertisementMETA
requests that colocation advertisement stop.
Both of these functions initiate an asynchronous operation similar to that
found in extensions built on XR_FB_spatial_entity, with their
asynchronous completion results returned in an event structure
(XrEventDataStartColocationAdvertisementCompleteMETA and
XrEventDataStopColocationAdvertisementCompleteMETA, respectively).
Colocation advertisement may stop without being explicitly requested for a variety of reasons. If it stops, whether subsequent to an xrStopColocationAdvertisementMETA call or not, an XrEventDataColocationAdvertisementCompleteMETA event is queued.
The following figures show examples of the advertisement process in two general circumstances: normal use and the case where the runtime stops advertisement before the application requests it.
The xrStartColocationAdvertisementMETA function is defined as:
// Provided by XR_META_colocation_discovery
XrResult xrStartColocationAdvertisementMETA(
XrSession session,
const XrColocationAdvertisementStartInfoMETA* info,
XrAsyncRequestIdFB* advertisementRequestId);
The xrStartColocationAdvertisementMETA function requests that the current device become discoverable by other physically colocated devices running the same application.
If the system does not support colocation advertisement and discovery, the
runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrStartColocationAdvertisementMETA.
In this case, the runtime must return XR_FALSE for
XrSystemColocationDiscoveryPropertiesMETA::supportsColocationDiscovery
when the function xrGetSystemProperties is called, so that the
application knows to not use this functionality.
This is an asynchronous operation. Completion results are conveyed in the event XrEventDataStartColocationAdvertisementCompleteMETA.
If the asynchronous operation is scheduled successfully, the runtime must
return XR_SUCCESS.
If and only if the runtime returns XR_SUCCESS, the runtime must queue
a single XrEventDataStartColocationAdvertisementCompleteMETA event
identified with an advertisementRequestId matching the
advertisementRequestId value output by this function, referred to as
the "corresponding completion event."
(This implies that if the runtime returns anything other than
XR_SUCCESS, the runtime must not queue any
XrEventDataStartColocationAdvertisementCompleteMETA events with
advertisementRequestId field matching the advertisementRequestId
populated by this function.)
If the asynchronous operation is successful, in the corresponding completion
event, the runtime must set the
XrEventDataStartColocationAdvertisementCompleteMETA::result
field to XR_SUCCESS.
If the asynchronous operation is scheduled but not successful, in the
corresponding completion event, the runtime must set the
XrEventDataStartColocationAdvertisementCompleteMETA::result
field to an appropriate error code instead of XR_SUCCESS.
See Figure 28 and Figure 29 for sample flows incorporating use of xrStartColocationAdvertisementMETA.
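The "corresponding completion event" rule above suggests tracking pending requests by id on the application side. Below is a minimal sketch of that bookkeeping, using stand-in typedefs rather than the real OpenXR headers; AsyncRequestTracker and its method names are our own illustrative inventions, not part of the extension.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <map>

typedef uint64_t XrAsyncRequestIdFB; // stand-in for the real typedef
typedef int32_t XrResult;            // stand-in for the real typedef

// Illustrative application-side tracker: record a callback per pending
// request, and dispatch at most one completion per requestId, mirroring the
// "corresponding completion event" rule above.
class AsyncRequestTracker {
public:
    void expect(XrAsyncRequestIdFB id, std::function<void(XrResult)> onComplete) {
        pending_[id] = std::move(onComplete);
    }
    // Returns true if the event matched a pending request.
    bool dispatch(XrAsyncRequestIdFB id, XrResult result) {
        auto it = pending_.find(id);
        if (it == pending_.end()) return false; // stale or unknown event
        auto cb = std::move(it->second);
        pending_.erase(it); // exactly one completion per request
        cb(result);
        return true;
    }
private:
    std::map<XrAsyncRequestIdFB, std::function<void(XrResult)>> pending_;
};
```

In real code the application would call expect() with the advertisementRequestId output by xrStartColocationAdvertisementMETA, and dispatch() from its xrPollEvent loop when an XrEventDataStartColocationAdvertisementCompleteMETA event arrives.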
The XrColocationAdvertisementStartInfoMETA structure is defined as:
// Provided by XR_META_colocation_discovery
typedef struct XrColocationAdvertisementStartInfoMETA {
XrStructureType type;
const void* next;
uint32_t bufferSize;
uint8_t* buffer;
} XrColocationAdvertisementStartInfoMETA;
XrColocationAdvertisementStartInfoMETA is the input data for xrStartColocationAdvertisementMETA. Implicitly, while the application has an active advertisement, the runtime will retain a copy of the XrColocationAdvertisementStartInfoMETA submitted with xrStartColocationAdvertisementMETA.
#define XR_MAX_COLOCATION_DISCOVERY_BUFFER_SIZE_META 1024
XR_MAX_COLOCATION_DISCOVERY_BUFFER_SIZE_META is the maximum size of
data supported in a colocation advertisement.
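An application can enforce this size limit before calling xrStartColocationAdvertisementMETA. The following sketch shows one way to do that; makeAdvertisementStartInfo is a hypothetical helper, and the struct here is an illustrative stand-in for the real header (real code would also set the type field to XR_TYPE_COLOCATION_ADVERTISEMENT_START_INFO_META).

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

#define XR_MAX_COLOCATION_DISCOVERY_BUFFER_SIZE_META 1024

// Illustrative stand-in for the extension struct defined above.
struct XrColocationAdvertisementStartInfoMETA {
    uint32_t type;      // real code: XR_TYPE_COLOCATION_ADVERTISEMENT_START_INFO_META
    const void* next;
    uint32_t bufferSize;
    uint8_t* buffer;
};

// Hypothetical helper: populate the start info from an application-defined
// payload, rejecting oversized payloads before the runtime ever sees them.
// 'storage' must outlive the call to xrStartColocationAdvertisementMETA.
bool makeAdvertisementStartInfo(const std::string& payload,
                                std::vector<uint8_t>& storage,
                                XrColocationAdvertisementStartInfoMETA& info) {
    if (payload.size() > XR_MAX_COLOCATION_DISCOVERY_BUFFER_SIZE_META) {
        return false; // too large to advertise
    }
    storage.assign(payload.begin(), payload.end());
    info = {/*type=*/0, /*next=*/nullptr,
            static_cast<uint32_t>(storage.size()), storage.data()};
    return true;
}
```

Because the runtime retains a copy of the submitted start info for the lifetime of the advertisement, the payload should be small and stable, for example a session identifier the discovering peers can use to connect.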
The XrEventDataStartColocationAdvertisementCompleteMETA event structure is defined as:
// Provided by XR_META_colocation_discovery
typedef struct XrEventDataStartColocationAdvertisementCompleteMETA {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB advertisementRequestId;
XrResult result;
XrUuid advertisementUuid;
} XrEventDataStartColocationAdvertisementCompleteMETA;
This event conveys the results of the asynchronous operation started by xrStartColocationAdvertisementMETA.
The XrEventDataColocationAdvertisementCompleteMETA structure is defined as:
// Provided by XR_META_colocation_discovery
typedef struct XrEventDataColocationAdvertisementCompleteMETA {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB advertisementRequestId;
XrResult result;
} XrEventDataColocationAdvertisementCompleteMETA;
The runtime must queue exactly one XrEventDataColocationAdvertisementCompleteMETA event whenever an active colocation advertisement is stopped. This includes if the colocation advertisement is stopped due to an application calling xrStopColocationAdvertisementMETA, or the runtime needs to stop the colocation advertisement for any reason. If the colocation advertisement is stopped due to an application calling xrStopColocationAdvertisementMETA, the runtime must queue the XrEventDataColocationAdvertisementCompleteMETA event before queuing the corresponding XrEventDataStopColocationAdvertisementCompleteMETA event. When the XrSession is destroyed, the runtime must stop all active advertisements started from the same XrSession.
See Figure 28 and Figure 29 for sample flows that show how XrEventDataColocationAdvertisementCompleteMETA is used.
The xrStopColocationAdvertisementMETA function is defined as:
// Provided by XR_META_colocation_discovery
XrResult xrStopColocationAdvertisementMETA(
XrSession session,
const XrColocationAdvertisementStopInfoMETA* info,
XrAsyncRequestIdFB* requestId);
The application can use the xrStopColocationAdvertisementMETA function to disable the ability for other physically colocated devices running the same application to discover the current device.
If the system does not support colocation advertisement and discovery, the
runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrStopColocationAdvertisementMETA.
In this case, the runtime must return XR_FALSE for
XrSystemColocationDiscoveryPropertiesMETA::supportsColocationDiscovery
when the function xrGetSystemProperties is called, so that the
application knows to not use this functionality.
This is an asynchronous operation. Completion results are conveyed in the event XrEventDataStopColocationAdvertisementCompleteMETA.
If the asynchronous operation is scheduled successfully, the runtime must
return XR_SUCCESS.
If and only if the runtime returns XR_SUCCESS, the runtime must queue
a single XrEventDataStopColocationAdvertisementCompleteMETA event
identified with a requestId matching the requestId value output by
this function, referred to as the "corresponding completion event."
(This implies that if the runtime returns anything other than
XR_SUCCESS, the runtime must not queue any
XrEventDataStopColocationAdvertisementCompleteMETA events with
requestId field matching the requestId populated by this function.)
If the asynchronous operation is successful, in the corresponding completion
event, the runtime must set the
XrEventDataStopColocationAdvertisementCompleteMETA::result field
to XR_SUCCESS.
If the asynchronous operation is scheduled but not successful, in the
corresponding completion event, the runtime must set the
XrEventDataStopColocationAdvertisementCompleteMETA::result field
to an appropriate error code instead of XR_SUCCESS.
See Figure 28 for a sample flow incorporating use of xrStopColocationAdvertisementMETA.
The XrColocationAdvertisementStopInfoMETA structure is defined as:
// Provided by XR_META_colocation_discovery
typedef struct XrColocationAdvertisementStopInfoMETA {
XrStructureType type;
const void* next;
} XrColocationAdvertisementStopInfoMETA;
The XrEventDataStopColocationAdvertisementCompleteMETA event structure is defined as:
// Provided by XR_META_colocation_discovery
typedef struct XrEventDataStopColocationAdvertisementCompleteMETA {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB requestId;
XrResult result;
} XrEventDataStopColocationAdvertisementCompleteMETA;
This event conveys the results of the asynchronous operation started by xrStopColocationAdvertisementMETA.
12.126.4. Colocation Discovery
Discovering other physically colocated devices, running the same application
and currently advertising, is known as "colocation discovery".
It is a background process that is controlled by
xrStartColocationDiscoveryMETA and
xrStopColocationDiscoveryMETA.
Both of these functions initiate an asynchronous operation similar to that
found in extensions built on XR_FB_spatial_entity, with their
asynchronous completion results returned in an event structure
(XrEventDataStartColocationDiscoveryCompleteMETA and
XrEventDataStopColocationDiscoveryCompleteMETA, respectively).
Results from colocation discovery, if it is successfully started, are returned through a sequence of XrEventDataColocationDiscoveryResultMETA events. When colocation discovery stops for any reason (application request or otherwise), an XrEventDataColocationDiscoveryCompleteMETA event is enqueued.
The following figures show examples of the discovery process in two general circumstances: normal use and the case where the runtime stops discovery before the application requests it.
The xrStartColocationDiscoveryMETA function is defined as:
// Provided by XR_META_colocation_discovery
XrResult xrStartColocationDiscoveryMETA(
XrSession session,
const XrColocationDiscoveryStartInfoMETA* info,
XrAsyncRequestIdFB* discoveryRequestId);
The application can call xrStartColocationDiscoveryMETA to start discovering physically colocated devices.
If the system does not support colocation advertisement and discovery, the
runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrStartColocationDiscoveryMETA.
In this case, the runtime must return XR_FALSE for
XrSystemColocationDiscoveryPropertiesMETA::supportsColocationDiscovery
when the function xrGetSystemProperties is called, so that the
application knows to not use this functionality.
This is an asynchronous operation. Completion results are conveyed in the event XrEventDataStartColocationDiscoveryCompleteMETA.
If the asynchronous operation is scheduled successfully, the runtime must
return XR_SUCCESS.
If and only if the runtime returns XR_SUCCESS, the runtime must queue
a single XrEventDataStartColocationDiscoveryCompleteMETA event
identified with a discoveryRequestId matching the discoveryRequestId
value output by this function, referred to as the "corresponding completion
event."
(This implies that if the runtime returns anything other than
XR_SUCCESS, the runtime must not queue any
XrEventDataStartColocationDiscoveryCompleteMETA events with
discoveryRequestId field matching the discoveryRequestId populated
by this function.)
If the asynchronous operation is successful, in the corresponding completion
event, the runtime must set the
XrEventDataStartColocationDiscoveryCompleteMETA::result field to
XR_SUCCESS.
The runtime may subsequently queue zero or more
XrEventDataColocationDiscoveryResultMETA events asynchronously as the
runtime discovers nearby advertisements.
Once the application or runtime stops the colocation discovery, the runtime
must queue a single XrEventDataColocationDiscoveryCompleteMETA event.
All XrEventDataColocationDiscoveryResultMETA and
XrEventDataColocationDiscoveryCompleteMETA events will be identified with
a discoveryRequestId matching the value populated in
discoveryRequestId by xrStartColocationDiscoveryMETA.
If the asynchronous operation is scheduled but not successful, in the
corresponding completion event, the runtime must set the
XrEventDataStartColocationDiscoveryCompleteMETA::result field to
an appropriate error code instead of XR_SUCCESS.
If the application already has an active colocation discovery, in the
corresponding completion event, the runtime must set the
XrEventDataStartColocationDiscoveryCompleteMETA::result field to
XR_COLOCATION_DISCOVERY_ALREADY_DISCOVERING_META.
See the colocation discovery sample flows (normal operation and runtime-initiated stop) for examples incorporating use of xrStartColocationDiscoveryMETA.
The XrColocationDiscoveryStartInfoMETA structure is defined as:
// Provided by XR_META_colocation_discovery
typedef struct XrColocationDiscoveryStartInfoMETA {
XrStructureType type;
const void* next;
} XrColocationDiscoveryStartInfoMETA;
The XrEventDataStartColocationDiscoveryCompleteMETA structure is defined as:
// Provided by XR_META_colocation_discovery
typedef struct XrEventDataStartColocationDiscoveryCompleteMETA {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB discoveryRequestId;
XrResult result;
} XrEventDataStartColocationDiscoveryCompleteMETA;
This event conveys the results of the asynchronous operation started by xrStartColocationDiscoveryMETA.
The XrEventDataColocationDiscoveryResultMETA structure is defined as:
// Provided by XR_META_colocation_discovery
typedef struct XrEventDataColocationDiscoveryResultMETA {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB discoveryRequestId;
XrUuid advertisementUuid;
uint32_t bufferSize;
uint8_t buffer[XR_MAX_COLOCATION_DISCOVERY_BUFFER_SIZE_META];
} XrEventDataColocationDiscoveryResultMETA;
advertisementUuid and buffer are both considered the payload of
colocated advertisements.
The value of advertisementUuid matches the value returned in
XrEventDataStartColocationAdvertisementCompleteMETA::advertisementUuid
on the advertising device.
See the colocation discovery sample flows (normal operation and runtime-initiated stop) for examples that show how XrEventDataColocationDiscoveryResultMETA is used.
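Because the same advertisement may be reported more than once while discovery runs, applications typically de-duplicate results by advertisementUuid. A minimal sketch of that bookkeeping follows, modeling XrUuid as a string and using our own illustrative types (DiscoveryResult, DiscoveredPeers) rather than the real headers.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Illustrative stand-ins: XrUuid is modeled as a string, and only the fields
// this sketch needs from XrEventDataColocationDiscoveryResultMETA are kept.
struct DiscoveryResult {
    std::string advertisementUuid;
    std::vector<uint8_t> payload; // copy of buffer[0..bufferSize)
};

// The same advertisement may be discovered more than once while discovery
// runs, so results are keyed by UUID and a later payload replaces an earlier one.
class DiscoveredPeers {
public:
    void onResult(const DiscoveryResult& r) {
        peers_[r.advertisementUuid] = r.payload;
    }
    std::size_t count() const { return peers_.size(); }
    const std::vector<uint8_t>* payloadFor(const std::string& uuid) const {
        auto it = peers_.find(uuid);
        return it == peers_.end() ? nullptr : &it->second;
    }
private:
    std::map<std::string, std::vector<uint8_t>> peers_;
};
```

Real code would copy bufferSize bytes out of the event's buffer into the payload, since the event structure is only valid until the next xrPollEvent call.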
The XrEventDataColocationDiscoveryCompleteMETA structure is defined as:
// Provided by XR_META_colocation_discovery
typedef struct XrEventDataColocationDiscoveryCompleteMETA {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB discoveryRequestId;
XrResult result;
} XrEventDataColocationDiscoveryCompleteMETA;
The runtime must queue exactly one XrEventDataColocationDiscoveryCompleteMETA event whenever an active colocation discovery is stopped. This includes if the colocation discovery is stopped due to an application calling xrStopColocationDiscoveryMETA, or the runtime needs to stop the colocation discovery for any reason. When the XrSession is destroyed, the runtime must stop all active discovery operations started from the same XrSession.
See the colocation discovery sample flows (normal operation and runtime-initiated stop) for examples that show how XrEventDataColocationDiscoveryCompleteMETA is used.
The xrStopColocationDiscoveryMETA function is defined as:
// Provided by XR_META_colocation_discovery
XrResult xrStopColocationDiscoveryMETA(
XrSession session,
const XrColocationDiscoveryStopInfoMETA* info,
XrAsyncRequestIdFB* requestId);
The application can call xrStopColocationDiscoveryMETA to stop an ongoing discovery process that was started by xrStartColocationDiscoveryMETA.
If the system does not support colocation advertisement and discovery, the
runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrStopColocationDiscoveryMETA.
In this case, the runtime must return XR_FALSE for
XrSystemColocationDiscoveryPropertiesMETA::supportsColocationDiscovery
when the function xrGetSystemProperties is called, so that the
application knows to not use this functionality.
This is an asynchronous operation. Completion results are conveyed in the event XrEventDataStopColocationDiscoveryCompleteMETA.
If the asynchronous operation is scheduled successfully, the runtime must
return XR_SUCCESS.
If and only if the runtime returns XR_SUCCESS, the runtime must queue
a single XrEventDataStopColocationDiscoveryCompleteMETA event
identified with a requestId matching the requestId value output by
this function, referred to as the "corresponding completion event."
(This implies that if the runtime returns anything other than
XR_SUCCESS, the runtime must not queue any
XrEventDataStopColocationDiscoveryCompleteMETA events with requestId
field matching the requestId populated by this function.)
If the asynchronous operation is successful, in the corresponding completion
event, the runtime must set the
XrEventDataStopColocationDiscoveryCompleteMETA::result field to
XR_SUCCESS.
If the asynchronous operation is scheduled but not successful, in the
corresponding completion event, the runtime must set the
XrEventDataStopColocationDiscoveryCompleteMETA::result field to
an appropriate error code instead of XR_SUCCESS.
See Figure 30 for a sample flow incorporating use of xrStopColocationDiscoveryMETA.
The XrColocationDiscoveryStopInfoMETA structure is defined as:
// Provided by XR_META_colocation_discovery
typedef struct XrColocationDiscoveryStopInfoMETA {
XrStructureType type;
const void* next;
} XrColocationDiscoveryStopInfoMETA;
The XrEventDataStopColocationDiscoveryCompleteMETA structure is defined as:
// Provided by XR_META_colocation_discovery
typedef struct XrEventDataStopColocationDiscoveryCompleteMETA {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB requestId;
XrResult result;
} XrEventDataStopColocationDiscoveryCompleteMETA;
This event conveys the results of the asynchronous operation started by xrStopColocationDiscoveryMETA.
12.126.6. New Structures
Extending XrSystemProperties:
- XrSystemColocationDiscoveryPropertiesMETA
12.126.7. New Enum Constants
- XR_MAX_COLOCATION_DISCOVERY_BUFFER_SIZE_META
- XR_META_COLOCATION_DISCOVERY_EXTENSION_NAME
- XR_META_colocation_discovery_SPEC_VERSION
Extending XrResult:
- XR_COLOCATION_DISCOVERY_ALREADY_ADVERTISING_META
- XR_COLOCATION_DISCOVERY_ALREADY_DISCOVERING_META
- XR_ERROR_COLOCATION_DISCOVERY_NETWORK_FAILED_META
- XR_ERROR_COLOCATION_DISCOVERY_NO_DISCOVERY_METHOD_META
Extending XrStructureType:
- XR_TYPE_COLOCATION_ADVERTISEMENT_START_INFO_META
- XR_TYPE_COLOCATION_ADVERTISEMENT_STOP_INFO_META
- XR_TYPE_COLOCATION_DISCOVERY_START_INFO_META
- XR_TYPE_COLOCATION_DISCOVERY_STOP_INFO_META
- XR_TYPE_EVENT_DATA_COLOCATION_ADVERTISEMENT_COMPLETE_META
- XR_TYPE_EVENT_DATA_COLOCATION_DISCOVERY_COMPLETE_META
- XR_TYPE_EVENT_DATA_COLOCATION_DISCOVERY_RESULT_META
- XR_TYPE_EVENT_DATA_START_COLOCATION_ADVERTISEMENT_COMPLETE_META
- XR_TYPE_EVENT_DATA_START_COLOCATION_DISCOVERY_COMPLETE_META
- XR_TYPE_EVENT_DATA_STOP_COLOCATION_ADVERTISEMENT_COMPLETE_META
- XR_TYPE_EVENT_DATA_STOP_COLOCATION_DISCOVERY_COMPLETE_META
- XR_TYPE_SYSTEM_COLOCATION_DISCOVERY_PROPERTIES_META
Version History
- Revision 1, 2024-06-15 (TJ Gilbrough)
  - Initial extension description
12.127. XR_META_detached_controllers
- Name String
-
XR_META_detached_controllers - Extension Type
-
Instance extension
- Registered Extension Number
-
241
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Matthew Langille, Meta Platforms
Andreas Selvik, Meta Platforms
John Kearney, Meta Platforms
12.127.1. Overview
On platforms which support both hand tracking and controller input concurrently, application developers have various use cases that require the ability to track controllers when they are not being held.
- Controllers that are "not held" can have some meaning in-application, for example the ability to act as some sort of marker or anchor.
- In cases when neither controller is currently held, access to pose data for the not-held controllers allows the application to render them for the user to locate, without removing the headset or remembering their physical location in the real world.
- Asymmetric multi-user applications could be created where the user with the immersive display is not holding the controller, but a second person manipulates it.
- In applications that support a tracked hand in combination with a single held controller, a not-held controller might remain relevant to interaction for the application.
A runtime may require the app to enable
XR_META_simultaneous_hands_and_controllers in order to fully enable
the functionality exposed by this extension.
If that extension is not enabled, the runtime must still accept the
bindings, but may choose to not report any current interaction profile for
/user/detached_controller_meta paths and not provide action data
accordingly.
12.127.2. New Paths
This extension defines two new top level /user paths:
- /user/detached_controller_meta/left
- /user/detached_controller_meta/right
Interaction profiles are extended as follows:
- For any interaction profile valid for /user/hand/left for which …/input/grip/pose is a valid binding sub-path, the runtime must accept the same input/output source subpaths on /user/detached_controller_meta/left in suggested bindings.
- For any interaction profile valid for /user/hand/right for which …/input/grip/pose is a valid binding sub-path, the runtime must accept the same input/output source subpaths on /user/detached_controller_meta/right in suggested bindings.
12.127.3. Beginning Detached Controller Interaction
In-hand detection is left to the implementing runtime. The runtime may use proximity data, orientation data, or any other data for both hands and both controllers for in-hand detection. Controllers with handedness should be considered "in hand" only when held by their corresponding hand (a left controller held in the right hand should be considered detached, as should a right controller held in the left hand).
The same interaction profile may be used to suggest bindings for top level /user paths beginning with both /user/hand and /user/detached_controller_meta paths, but the runtime must only report any given interaction profile as current for at most one of these paired top level /user paths at any given time. That is, if a given interaction profile is reported as current for /user/detached_controller_meta/left, that same profile must not be current for /user/hand/left (and vice versa), and similarly for /user/detached_controller_meta/right and /user/hand/right.
This is done individually such that any combination of held or not-held controllers is representable (both or neither controller held, or one controller held in one hand for left or right controllers).
12.127.4. Ending Detached Controller Interaction
If an interaction profile previously current on a detached path becomes current on the corresponding hand, the current interaction profile of the detached path must become XR_NULL_PATH.
However, if a (controller) interaction profile becomes current on a "detached" top level /user path, a hand-related interaction profile should become current on the corresponding /user/hand/ path if the application has submitted a suggested binding for the corresponding /user/hand/ top level path.
An application may use the
XR_TYPE_EVENT_DATA_INTERACTION_PROFILE_CHANGED and change in output
value from xrGetCurrentInteractionProfile to determine when the user
put down or picked up a controller.
12.127.5. Changes to Action Space Behavior
In some cases, this extension modifies the behavior specified in Resolving a single action bound to multiple inputs or outputs. Specifically, given the following:
- A pose action with both a path under /user/detached_controller_meta as a sub-action path, as well as the corresponding path under /user/hand,
- Suggested bindings for that pose action for a single interaction profile on both a binding path under /user/detached_controller_meta/ and the corresponding binding path under /user/hand/ (e.g. /user/detached_controller_meta/right/input/grip/pose and /user/hand/right/input/grip/pose),
- And an action space created for the pose action without specifying a sub-action path,
the runtime should associate the action space with the appropriate pose on the controller, whether it is in-hand or detached. That is, such an action space "follows" the controller. The core specification does not specify any particular behavior for action spaces created without a sub-action path on actions created with sub-action paths, other than not changing the association between calls to xrSyncActions.
A more common practice of defining a pose action with both /user/hand/left and /user/hand/right as sub-action paths does not change behavior with this extension. That is, if the application creates an action space from such a pose action, without passing a sub-action path to xrCreateActionSpace, the runtime must choose a single pose source. However, it usually does not make sense to create an action space without specifying a sub-action path for such an action.
12.127.6. Example Usage
Detecting hand/controller modality change per hand:
const int NUM_HANDS = 2; // previously defined
XrInstance instance; // previously initialized
XrSession session; // previously initialized
XrPath handPaths[NUM_HANDS]; // previously initialized, one top level /user path per hand
XrInteractionProfileState currentState[NUM_HANDS]; // previously initialized for NUM_HANDS
// In event polling code
// Initialize an event buffer to hold the output.
XrEventDataBuffer event = {XR_TYPE_EVENT_DATA_BUFFER};
XrResult result = xrPollEvent(instance, &event);
if (result == XR_SUCCESS) {
    // ...
    switch (event.type) {
        case XR_TYPE_EVENT_DATA_INTERACTION_PROFILE_CHANGED: {
            for (int handIndex = 0; handIndex < NUM_HANDS; ++handIndex) {
                XrInteractionProfileState profileState = {XR_TYPE_INTERACTION_PROFILE_STATE};
                CHK_XR(xrGetCurrentInteractionProfile(session, handPaths[handIndex], &profileState));
                // Inspect profileState to understand if this hand is now holding a
                // controller, or if the controller has been placed somewhere
                if (currentState[handIndex].interactionProfile != profileState.interactionProfile) {
                    // Log state change, etc.
                    // LOG_OUT("Detected state change in {} from {} to {}", handIndex,
                    //         currentState[handIndex].interactionProfile,
                    //         profileState.interactionProfile);
                    // Do whatever processing is required when this state has changed,
                    // then remember the new profile for the next comparison.
                    currentState[handIndex] = profileState;
                }
            }
            break;
        }
    }
}
Accessing data is effectively the same as existing hands and controllers input code, except that we no longer make the (invalid) assumption that both /user/hand/left and /user/hand/right are the same modality.
Detecting controller locations when not in hands
Suggest interaction profile bindings for the use cases your app cares about. For an app that cares about detached controllers, you can set up bindings for the in-hand case, the detached case, or both together. For example, for many controllers you might suggest the following bindings with XrInteractionProfileSuggestedBinding:
Using the same controller pose when in hand as when detached with hands
The two extensions can be brought together to have a controller pose that follows the controller when detached, and a hand pose that flips from tracking the controller to tracking the hand when the controller is put down. For example, suggest bindings for /interaction_profiles/oculus/touch_controller as follows with xrSuggestInteractionProfileBindings:
Further, suggest bindings for /interaction_profiles/ext/hand_interaction as follows with xrSuggestInteractionProfileBindings:
Proceed with usage similar to above.
When controllers are in hand, the binding source for the /user/hand/left/grip/pose path provides controller-based data to both actions. When controllers are placed on a surface, the application will receive hand-based data for that path instead. Note that the same can be done for actions.
12.128. XR_META_environment_depth
- Name String
-
XR_META_environment_depth - Extension Type
-
Instance extension
- Registered Extension Number
-
292
- Revision
-
2
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2025-09-19
- IP Status
-
No known IP claims.
- Contributors
-
Andreas Selvik, Meta Platforms
Cass Everitt, Meta Platforms
Daniel Henell, Meta Platforms
John Kearney, Meta Platforms
Urs Niesen, Meta Platforms
Martin Sherburn, Meta Platforms
12.128.1. Overview
This extension allows the application to request depth maps of the real-world environment around the headset. The depth maps are generated by the runtime and shared with the application using an XrEnvironmentDepthSwapchainMETA.
12.128.2. Inspect System Capability
The XrSystemEnvironmentDepthPropertiesMETA structure is defined as:
// Provided by XR_META_environment_depth
typedef struct XrSystemEnvironmentDepthPropertiesMETA {
XrStructureType type;
void* next;
XrBool32 supportsEnvironmentDepth;
XrBool32 supportsHandRemoval;
} XrSystemEnvironmentDepthPropertiesMETA;
An application can inspect whether the system is capable of supporting environment depth by extending the XrSystemProperties with XrSystemEnvironmentDepthPropertiesMETA structure when calling xrGetSystemProperties.
If and only if a runtime returns XR_FALSE for
supportsEnvironmentDepth, the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateEnvironmentDepthProviderMETA.
If and only if a runtime returns XR_FALSE for
supportsHandRemoval, the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from
xrSetEnvironmentDepthHandRemovalMETA.
12.128.3. Creating and Destroying a Depth Provider
// Provided by XR_META_environment_depth
XR_DEFINE_HANDLE(XrEnvironmentDepthProviderMETA)
An XrEnvironmentDepthProviderMETA is a handle to a depth provider.
The xrCreateEnvironmentDepthProviderMETA function is defined as:
// Provided by XR_META_environment_depth
XrResult xrCreateEnvironmentDepthProviderMETA(
XrSession session,
const XrEnvironmentDepthProviderCreateInfoMETA* createInfo,
XrEnvironmentDepthProviderMETA* environmentDepthProvider);
The xrCreateEnvironmentDepthProviderMETA function creates a depth provider instance.
Creating the depth provider may allocate resources, but should not incur any per-frame compute costs until the provider has been started.
- Runtimes must create the provider in a stopped state.
- Runtimes may limit the number of depth providers per XrInstance. If xrCreateEnvironmentDepthProviderMETA fails due to reaching this limit, the runtime must return XR_ERROR_LIMIT_REACHED.
- Runtimes must support at least 1 provider per XrInstance.
- Runtimes may return XR_ERROR_NOT_PERMITTED_PASSTHROUGH_FB if the app permissions have not been granted to the calling app.
- Applications can call xrStartEnvironmentDepthProviderMETA to start the generation of depth maps.
The XrEnvironmentDepthProviderCreateInfoMETA structure is defined as:
// Provided by XR_META_environment_depth
typedef struct XrEnvironmentDepthProviderCreateInfoMETA {
XrStructureType type;
const void* next;
XrEnvironmentDepthProviderCreateFlagsMETA createFlags;
} XrEnvironmentDepthProviderCreateInfoMETA;
The XrEnvironmentDepthProviderCreateInfoMETA structure provides creation options for the XrEnvironmentDepthProviderMETA when passed to xrCreateEnvironmentDepthProviderMETA.
The XrEnvironmentDepthProviderCreateFlagsMETA specifies creation options for XrEnvironmentDepthProviderMETA.
// Provided by XR_META_environment_depth
typedef XrFlags64 XrEnvironmentDepthProviderCreateFlagsMETA;
Valid bits for XrEnvironmentDepthProviderCreateFlagsMETA are defined by XrEnvironmentDepthProviderCreateFlagBitsMETA, which is specified as:
// Provided by XR_META_environment_depth
// Flag bits for XrEnvironmentDepthProviderCreateFlagsMETA
There are currently no flag bits defined. This is reserved for future use.
The xrDestroyEnvironmentDepthProviderMETA function is defined as:
// Provided by XR_META_environment_depth
XrResult xrDestroyEnvironmentDepthProviderMETA(
XrEnvironmentDepthProviderMETA environmentDepthProvider);
The xrDestroyEnvironmentDepthProviderMETA function destroys the depth provider. After this call the runtime may free all related memory and resources.
12.128.4. Starting and Stopping a Depth Provider
The xrStartEnvironmentDepthProviderMETA function is defined as:
// Provided by XR_META_environment_depth
XrResult xrStartEnvironmentDepthProviderMETA(
XrEnvironmentDepthProviderMETA environmentDepthProvider);
The xrStartEnvironmentDepthProviderMETA function starts the asynchronous generation of depth maps.
Starting the depth provider may use CPU and GPU resources.
Runtimes must return XR_ERROR_UNEXPECTED_STATE_PASSTHROUGH_FB if
xrStartEnvironmentDepthProviderMETA is called on an already started
XrEnvironmentDepthProviderMETA.
The xrStopEnvironmentDepthProviderMETA function is defined as:
// Provided by XR_META_environment_depth
XrResult xrStopEnvironmentDepthProviderMETA(
XrEnvironmentDepthProviderMETA environmentDepthProvider);
The xrStopEnvironmentDepthProviderMETA function stops the generation of depth maps. This stops all per frame computation of environment depth for the application.
Runtimes must return XR_ERROR_UNEXPECTED_STATE_PASSTHROUGH_FB if
xrStopEnvironmentDepthProviderMETA is called on an already stopped
XrEnvironmentDepthProviderMETA.
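For illustration, the start/stop state rules above can be modeled as a two-state machine. The following sketch uses hypothetical names (the *Sketch types and constants are not part of the API) and stands in for the runtime's bookkeeping, which must reject a redundant start or stop.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the spec's result codes, for illustration only. */
typedef enum {
    XR_SUCCESS_SKETCH = 0,
    XR_ERROR_UNEXPECTED_STATE_SKETCH = -1
} ResultSketch;

typedef struct {
    bool started;
} DepthProviderSketch;

/* Mirrors xrStartEnvironmentDepthProviderMETA: fails on an already started provider. */
ResultSketch start_provider(DepthProviderSketch *p) {
    if (p->started) return XR_ERROR_UNEXPECTED_STATE_SKETCH;
    p->started = true;
    return XR_SUCCESS_SKETCH;
}

/* Mirrors xrStopEnvironmentDepthProviderMETA: fails on an already stopped provider. */
ResultSketch stop_provider(DepthProviderSketch *p) {
    if (!p->started) return XR_ERROR_UNEXPECTED_STATE_SKETCH;
    p->started = false;
    return XR_SUCCESS_SKETCH;
}
```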
12.128.5. Hand Removal
Runtimes may provide functionality to remove hands from the depth map and fill in estimated background depth values. This is useful to allow other occlusion methods specialized for hands to coexist with the Environment Depth extension.
The xrSetEnvironmentDepthHandRemovalMETA function is defined as:
// Provided by XR_META_environment_depth
XrResult xrSetEnvironmentDepthHandRemovalMETA(
XrEnvironmentDepthProviderMETA environmentDepthProvider,
const XrEnvironmentDepthHandRemovalSetInfoMETA* setInfo);
The xrSetEnvironmentDepthHandRemovalMETA function sets hand removal options.
Runtimes should enable or disable the removal of the hand depths from the
depth map.
If enabled, the corresponding depth pixels should be replaced with the
estimated background depth behind the hands.
Runtimes must return XR_ERROR_FEATURE_UNSUPPORTED if and only if
XrSystemEnvironmentDepthPropertiesMETA::supportsHandRemoval is
XR_FALSE.
The XrEnvironmentDepthHandRemovalSetInfoMETA structure is defined as:
// Provided by XR_META_environment_depth
typedef struct XrEnvironmentDepthHandRemovalSetInfoMETA {
XrStructureType type;
const void* next;
XrBool32 enabled;
} XrEnvironmentDepthHandRemovalSetInfoMETA;
This structure contains options passed to xrSetEnvironmentDepthHandRemovalMETA.
12.128.6. Creating a Readable Depth Swapchain
The depth data is generated in the runtime and shared with the application through an XrEnvironmentDepthSwapchainMETA. This swapchain differs from regular swapchains in that it provides a data channel from the runtime to the application instead of the other way around.
// Provided by XR_META_environment_depth
XR_DEFINE_HANDLE(XrEnvironmentDepthSwapchainMETA)
XrEnvironmentDepthSwapchainMETA is a handle to a readable depth swapchain.
The xrCreateEnvironmentDepthSwapchainMETA function is defined as:
// Provided by XR_META_environment_depth
XrResult xrCreateEnvironmentDepthSwapchainMETA(
XrEnvironmentDepthProviderMETA environmentDepthProvider,
const XrEnvironmentDepthSwapchainCreateInfoMETA* createInfo,
XrEnvironmentDepthSwapchainMETA* swapchain);
The xrCreateEnvironmentDepthSwapchainMETA function creates a readable swapchain, which is used for accessing the depth data.
The runtime decides on the resolution and length of the swapchain. Additional information about the swapchain can be accessed by calling xrGetEnvironmentDepthSwapchainStateMETA.
Runtimes must create a swapchain with array textures of length 2, which map
to a left-eye and right-eye view.
View index 0 must represent the left eye and view index 1 must represent
the right eye.
This is the same convention as for
XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO in
XrViewConfigurationType.
Runtimes must create the swapchain with the following image formats
depending on the graphics API associated with the session:
-
OpenGL: GL_DEPTH_COMPONENT16
-
Vulkan: VK_FORMAT_D16_UNORM
-
Direct3D: DXGI_FORMAT_D16_UNORM
Runtimes must allow at most one swapchain to exist per depth provider at any given time, and must return XR_ERROR_LIMIT_REACHED if xrCreateEnvironmentDepthSwapchainMETA is called to create more.
Applications should destroy the swapchain when no longer needed.
Applications must be able to handle different swapchain lengths and
resolutions.
The XrEnvironmentDepthSwapchainCreateInfoMETA structure is defined as:
// Provided by XR_META_environment_depth
typedef struct XrEnvironmentDepthSwapchainCreateInfoMETA {
XrStructureType type;
const void* next;
XrEnvironmentDepthSwapchainCreateFlagsMETA createFlags;
} XrEnvironmentDepthSwapchainCreateInfoMETA;
XrEnvironmentDepthSwapchainCreateInfoMETA contains creation options for the readable depth swapchain, and is passed to xrCreateEnvironmentDepthSwapchainMETA.
The XrEnvironmentDepthSwapchainCreateFlagsMETA specifies creation options for XrEnvironmentDepthSwapchainCreateInfoMETA.
// Provided by XR_META_environment_depth
typedef XrFlags64 XrEnvironmentDepthSwapchainCreateFlagsMETA;
Valid bits for XrEnvironmentDepthSwapchainCreateFlagsMETA are defined by XrEnvironmentDepthSwapchainCreateFlagBitsMETA, which is specified as:
// Provided by XR_META_environment_depth
// Flag bits for XrEnvironmentDepthSwapchainCreateFlagsMETA
There are currently no flag bits defined. This is reserved for future use.
The xrGetEnvironmentDepthSwapchainStateMETA function is defined as:
// Provided by XR_META_environment_depth
XrResult xrGetEnvironmentDepthSwapchainStateMETA(
XrEnvironmentDepthSwapchainMETA swapchain,
XrEnvironmentDepthSwapchainStateMETA* state);
xrGetEnvironmentDepthSwapchainStateMETA retrieves information about the XrEnvironmentDepthSwapchainMETA. This information is constant throughout the lifetime of the XrEnvironmentDepthSwapchainMETA.
The XrEnvironmentDepthSwapchainStateMETA structure is defined as:
// Provided by XR_META_environment_depth
typedef struct XrEnvironmentDepthSwapchainStateMETA {
XrStructureType type;
void* next;
uint32_t width;
uint32_t height;
} XrEnvironmentDepthSwapchainStateMETA;
The xrDestroyEnvironmentDepthSwapchainMETA function is defined as:
// Provided by XR_META_environment_depth
XrResult xrDestroyEnvironmentDepthSwapchainMETA(
XrEnvironmentDepthSwapchainMETA swapchain);
The xrDestroyEnvironmentDepthSwapchainMETA function destroys a readable environment depth swapchain.
All submitted graphics API commands that refer to swapchain must have
completed execution.
Runtimes may continue to utilize swapchain images after
xrDestroyEnvironmentDepthSwapchainMETA is called.
12.128.7. Accessing the Readable Depth Swapchain During Rendering
The xrEnumerateEnvironmentDepthSwapchainImagesMETA function is defined as:
// Provided by XR_META_environment_depth
XrResult xrEnumerateEnvironmentDepthSwapchainImagesMETA(
XrEnvironmentDepthSwapchainMETA swapchain,
uint32_t imageCapacityInput,
uint32_t* imageCountOutput,
XrSwapchainImageBaseHeader* images);
xrEnumerateEnvironmentDepthSwapchainImagesMETA fills an array of
graphics API-specific XrSwapchainImage* structures derived from
XrSwapchainImageBaseHeader.
The resources must be constant and valid for the lifetime of the
XrEnvironmentDepthSwapchainMETA.
This function behaves analogously to xrEnumerateSwapchainImages.
Runtimes must always return identical buffer contents from this enumeration for the lifetime of the swapchain.
Note: images is a pointer to an array of structures of graphics
API-specific type, not an array of structure pointers.
The pointer submitted as images will be treated as an array of the
expected graphics API-specific type based on the graphics API used at
session creation time.
If the type member of any array element accessed in this way does not match
the expected value, the runtime must return
XR_ERROR_VALIDATION_FAILURE.
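As with xrEnumerateSwapchainImages, applications typically use the standard OpenXR two-call idiom: first query the count with a zero capacity, then allocate and fill. The following sketch substitutes a mock enumerator for the runtime and a plain integer for the graphics-API-specific image structure; the *_mock names are hypothetical and not part of the API.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Mock stand-in for xrEnumerateEnvironmentDepthSwapchainImagesMETA: reports
   3 available images. A real call fills graphics-API-specific structures
   derived from XrSwapchainImageBaseHeader; here uint64_t stands in for one. */
static int enumerate_images_mock(uint32_t capacityInput,
                                 uint32_t *countOutput,
                                 uint64_t *images) {
    const uint32_t available = 3;
    *countOutput = available;
    if (capacityInput == 0) return 0;         /* first call: size query only */
    if (capacityInput < available) return -1; /* XR_ERROR_SIZE_INSUFFICIENT  */
    for (uint32_t i = 0; i < available; ++i) images[i] = 0x1000 + i;
    return 0;
}

/* The two-call idiom: query count, allocate, then enumerate for real. */
static uint32_t enumerate_all(uint64_t **outImages) {
    uint32_t count = 0;
    enumerate_images_mock(0, &count, NULL);
    *outImages = malloc(count * sizeof **outImages);
    enumerate_images_mock(count, &count, *outImages);
    return count;
}
```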
The xrAcquireEnvironmentDepthImageMETA function is defined as:
// Provided by XR_META_environment_depth
XrResult xrAcquireEnvironmentDepthImageMETA(
XrEnvironmentDepthProviderMETA environmentDepthProvider,
const XrEnvironmentDepthImageAcquireInfoMETA* acquireInfo,
XrEnvironmentDepthImageMETA* environmentDepthImage);
Acquires the latest available swapchain image that has been generated by the depth provider and ensures it is ready to be accessed by the application. The application may access and queue GPU operations using the acquired image until the next xrEndFrame call, at which point the image is released and the depth provider may write new depth data into it after completion of all work queued before the xrEndFrame call.
The returned XrEnvironmentDepthImageMETA contains the swapchain index into the array enumerated by xrEnumerateEnvironmentDepthSwapchainImagesMETA. It also contains other information such as the field of view and pose that are necessary to interpret the depth data.
There must be no more than one call to xrAcquireEnvironmentDepthImageMETA between any pair of corresponding xrBeginFrame and xrEndFrame calls in a session.
-
The runtime may block if previously acquired swapchain images are still being used by the graphics API.
-
The runtime must return XR_ERROR_CALL_ORDER_INVALID if xrAcquireEnvironmentDepthImageMETA is called before xrBeginFrame or after xrEndFrame.
-
The runtime must return XR_ERROR_CALL_ORDER_INVALID if xrAcquireEnvironmentDepthImageMETA is called on a stopped XrEnvironmentDepthProviderMETA.
-
The runtime must return XR_ERROR_LIMIT_REACHED if xrAcquireEnvironmentDepthImageMETA is called more than once per frame, i.e. in a running session, after a call to xrBeginFrame that has not had an associated xrEndFrame.
-
Runtimes must return XR_ENVIRONMENT_DEPTH_NOT_AVAILABLE_META if no depth frame is available yet (i.e. the provider was recently started and did not yet have time to compute depth). Note that this is a success code. In this case the output parameters must be unchanged.
-
The application must not utilize the swapchain image in calls to the graphics API after xrEndFrame has been called.
-
A runtime may use the graphics API specific contexts provided to OpenXR. In particular:
-
For OpenGL, a runtime may use the OpenGL context specified in the call to xrCreateSession, which needs external synchronization.
-
For Vulkan, a runtime may use the VkQueue specified in the XrGraphicsBindingVulkan2KHR, which needs external synchronization.
-
For Direct3D12, a runtime may use the ID3D12CommandQueue specified in the XrGraphicsBindingD3D12KHR, which needs external synchronization.
-
The XrEnvironmentDepthImageAcquireInfoMETA structure is defined as:
// Provided by XR_META_environment_depth
typedef struct XrEnvironmentDepthImageAcquireInfoMETA {
XrStructureType type;
const void* next;
XrSpace space;
XrTime displayTime;
} XrEnvironmentDepthImageAcquireInfoMETA;
The XrEnvironmentDepthImageViewMETA structure is defined as:
// Provided by XR_META_environment_depth
typedef struct XrEnvironmentDepthImageViewMETA {
XrStructureType type;
const void* next;
XrFovf fov;
XrPosef pose;
} XrEnvironmentDepthImageViewMETA;
The XrEnvironmentDepthImageMETA structure is defined as:
// Provided by XR_META_environment_depth
typedef struct XrEnvironmentDepthImageMETA {
XrStructureType type;
const void* next;
uint32_t swapchainIndex;
float nearZ;
float farZ;
XrEnvironmentDepthImageViewMETA views[2];
} XrEnvironmentDepthImageMETA;
Depth is provided as textures in the same format as described in the
XR_KHR_composition_layer_depth extension.
The frustum’s Z-planes are placed at nearZ and farZ meters.
When farZ is less than nearZ, an infinite projection matrix is
used.
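For illustration, a view-space distance can be recovered from a normalized [0, 1] depth sample under the assumption of a conventional projection that maps nearZ to 0 and farZ to 1, with the far term dropping out in the infinite-projection case (farZ less than nearZ). This is a sketch of common practice, not the normative reprojection math of the XR_KHR_composition_layer_depth extension.

```c
#include <assert.h>
#include <math.h>

/* Sketch: convert a normalized depth sample d in [0, 1] to view-space
   distance in meters, assuming a conventional projection mapping nearZ
   to 0 and farZ to 1. When farZ < nearZ, an infinite projection is
   assumed and the far plane drops out of the formula. */
static float linearize_depth(float d, float nearZ, float farZ) {
    if (farZ < nearZ) {
        /* infinite far plane: distance grows without bound as d -> 1 */
        return nearZ / (1.0f - d);
    }
    return (nearZ * farZ) / (farZ - d * (farZ - nearZ));
}
```

At d = 0 this yields nearZ, and at d = 1 it yields farZ, as expected for this depth convention.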
The XrEnvironmentDepthImageTimestampMETA structure is defined as:
// Provided by XR_META_environment_depth
typedef struct XrEnvironmentDepthImageTimestampMETA {
XrStructureType type;
const void* next;
XrTime captureTime;
} XrEnvironmentDepthImageTimestampMETA;
The XrEnvironmentDepthImageTimestampMETA structure provides timestamp information for environment depth data. This structure can be chained to XrEnvironmentDepthImageMETA to provide temporal context for the depth information.
The captureTime field indicates the capture time of the images used to
reconstruct the environment depth map.
Applications can use this timestamp information for latency measurements and synchronization with other time-based data such as passthrough camera.
If present in the structure chain, this structure must be populated by the runtime if and only if the runtime reports version 2 or greater of this extension. This is an "unknown structure" to runtimes reporting version 1 of this extension, and is therefore ignored/unmodified according to Valid Usage for Structure Pointer Chains.
Applications should initialize captureTime to 0 before chaining this
structure and calling xrAcquireEnvironmentDepthImageMETA.
After the call, a non-zero value in captureTime indicates that the
runtime has populated the timestamp field.
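The zero-initialization convention above can be modeled as follows. The sketch uses hypothetical *Sketch names and a mock acquire call in place of the runtime: a version-1 runtime treats the chained structure as unknown and leaves it unmodified, while a version-2 runtime populates captureTime.

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t XrTimeSketch;

/* Minimal model of the chained XrEnvironmentDepthImageTimestampMETA. */
typedef struct {
    XrTimeSketch captureTime;
} TimestampSketch;

/* Mock acquire: version 1 ignores the unknown chained struct; version 2
   fills in the capture time (arbitrary sample value here). */
static void acquire_mock(int runtimeVersion, TimestampSketch *ts) {
    if (runtimeVersion >= 2) ts->captureTime = 123456789;
}

/* Application-side check: initialize to 0, then treat any non-zero value
   as evidence that the runtime populated the field. */
static int has_capture_time(int runtimeVersion) {
    TimestampSketch ts = { 0 };
    acquire_mock(runtimeVersion, &ts);
    return ts.captureTime != 0;
}
```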
12.128.8. Vulkan Swapchain Image Layout
For an application using Vulkan, after a successful call to
xrAcquireEnvironmentDepthImageMETA that does not return
XR_ENVIRONMENT_DEPTH_NOT_AVAILABLE_META, the following conditions
apply to the runtime:
-
The runtime must ensure the acquired readable depth swapchain image has a memory layout compatible with VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL. Note that this is different from xrAcquireSwapchainImage, which guarantees VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL.
-
The runtime must ensure the VkQueue specified in XrGraphicsBindingVulkanKHR / XrGraphicsBindingVulkan2KHR has ownership of the acquired readable depth swapchain image.
Upon next calling xrEndFrame after such an acquire call, the following conditions apply to the application:
-
The application must ensure that the readable depth swapchain image has a memory layout compatible with VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL.
-
The application must ensure that the readable depth swapchain image is owned by the VkQueue specified in XrGraphicsBindingVulkanKHR / XrGraphicsBindingVulkan2KHR.
The application is responsible for transitioning the swapchain image back to the image layout and queue ownership that the OpenXR runtime requires. If the image is not in a layout compatible with the above specifications, the runtime may exhibit undefined behavior.
12.128.9. Direct3D 12 Swapchain Image Resource State
For an application using D3D12, after a successful call to
xrAcquireEnvironmentDepthImageMETA that does not return
XR_ENVIRONMENT_DEPTH_NOT_AVAILABLE_META, the following conditions
apply to the runtime:
-
The runtime must ensure the acquired readable depth swapchain image has a resource state matching D3D12_RESOURCE_STATE_ALL_SHADER_RESOURCE. Note that this is different from xrAcquireSwapchainImage, which guarantees D3D12_RESOURCE_STATE_DEPTH_WRITE for swapchain images with depth formats.
-
The runtime must ensure that the ID3D12CommandQueue specified in XrGraphicsBindingD3D12KHR may read from the acquired readable depth swapchain image.
Upon next calling xrEndFrame after such an acquire call, the following conditions apply to the application:
-
The application must ensure that the readable depth swapchain image has a resource state matching D3D12_RESOURCE_STATE_ALL_SHADER_RESOURCE.
-
The application must ensure that the readable depth swapchain image is available for read/write on the ID3D12CommandQueue specified in XrGraphicsBindingD3D12KHR.
The application is responsible for transitioning the swapchain image back to the resource state and queue availability that the OpenXR runtime requires. If the image is not in a resource state matching the above specifications, the runtime may exhibit undefined behavior.
Version History
-
Revision 1, 2023-08-24 (Daniel Henell)
-
Initial extension description
-
-
Revision 2, 2025-09-19 (Martin Sherburn)
-
Add support for accessing capture time stamps for depth images.
-
12.129. XR_META_foveation_eye_tracked
- Name String
-
XR_META_foveation_eye_tracked - Extension Type
-
Instance extension
- Registered Extension Number
-
201
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Ross Ning, Facebook
Kevin Xiao, Facebook
Remi Palandri, Facebook
Jian Zhang, Facebook
Neel Bedekar, Facebook
Overview
Eye tracked foveated rendering renders lower pixel density in the periphery of the user’s gaze, taking advantage of low peripheral acuity.
This extension allows:
-
An application to query eye tracked foveation availability.
-
An application to request an eye tracked foveation profile supported by the runtime and apply it to foveation-supported swapchains.
-
An application to query foveation center position every frame.
-
An application to request a foveation pattern update from the runtime. As a consequence, the runtime knows how to adjust the eye tracking camera exposure start time in order to optimize the total pipeline latency.
In order to enable the functionality of this extension, the application
must pass the name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
New Object Types
New Flag Types
// Provided by XR_META_foveation_eye_tracked
typedef XrFlags64 XrFoveationEyeTrackedProfileCreateFlagsMETA;
// Provided by XR_META_foveation_eye_tracked
// Flag bits for XrFoveationEyeTrackedProfileCreateFlagsMETA
There are currently no eye tracked profile create flags. This is reserved for future use.
// Provided by XR_META_foveation_eye_tracked
typedef XrFlags64 XrFoveationEyeTrackedStateFlagsMETA;
// Provided by XR_META_foveation_eye_tracked
// Flag bits for XrFoveationEyeTrackedStateFlagsMETA
static const XrFoveationEyeTrackedStateFlagsMETA XR_FOVEATION_EYE_TRACKED_STATE_VALID_BIT_META = 0x00000001;
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_FOVEATION_EYE_TRACKED_PROFILE_CREATE_INFO_META
-
XR_TYPE_FOVEATION_EYE_TRACKED_STATE_META
-
XR_TYPE_SYSTEM_FOVEATION_EYE_TRACKED_PROPERTIES_META
New Enums
New Structures
The XrFoveationEyeTrackedProfileCreateInfoMETA structure is defined as:
// Provided by XR_META_foveation_eye_tracked
typedef struct XrFoveationEyeTrackedProfileCreateInfoMETA {
XrStructureType type;
const void* next;
XrFoveationEyeTrackedProfileCreateFlagsMETA flags;
} XrFoveationEyeTrackedProfileCreateInfoMETA;
XrFoveationEyeTrackedProfileCreateInfoMETA can be added to the
next chain of XrFoveationLevelProfileCreateInfoFB in order to
enable eye tracked foveation.
The runtime must apply an eye tracked foveation pattern according to the
parameters defined in the XrFoveationLevelProfileCreateInfoFB.
The XrFoveationEyeTrackedStateMETA structure is defined as:
// Provided by XR_META_foveation_eye_tracked
typedef struct XrFoveationEyeTrackedStateMETA {
XrStructureType type;
void* next;
XrVector2f foveationCenter[XR_FOVEATION_CENTER_SIZE_META];
XrFoveationEyeTrackedStateFlagsMETA flags;
} XrFoveationEyeTrackedStateMETA;
XrFoveationEyeTrackedStateMETA must be provided when calling
xrGetFoveationEyeTrackedStateMETA.
The runtime must interpret XrFoveationEyeTrackedStateMETA without any
additional structs in its next chain in order to query eye tracked
foveation state, e.g. the center of the foveal region.
The XrSystemFoveationEyeTrackedPropertiesMETA structure is defined as:
// Provided by XR_META_foveation_eye_tracked
typedef struct XrSystemFoveationEyeTrackedPropertiesMETA {
XrStructureType type;
void* next;
XrBool32 supportsFoveationEyeTracked;
} XrSystemFoveationEyeTrackedPropertiesMETA;
An application can inspect whether the system is capable of eye tracked foveation by extending the XrSystemProperties with XrSystemFoveationEyeTrackedPropertiesMETA structure when calling xrGetSystemProperties.
New Functions
The xrGetFoveationEyeTrackedStateMETA function is defined as:
// Provided by XR_META_foveation_eye_tracked
XrResult xrGetFoveationEyeTrackedStateMETA(
XrSession session,
XrFoveationEyeTrackedStateMETA* foveationState);
The xrGetFoveationEyeTrackedStateMETA function returns the current eye tracked foveation state including the center of the foveal region, validity of the foveation data, etc.
Note that xrUpdateSwapchainFB should be called right before the xrGetFoveationEyeTrackedStateMETA function in order to (1) request a foveation pattern update by the runtime (2) optionally instruct the runtime to adjust the eye tracking camera capture start time in order to optimize for pipeline latency.
Issues
Version History
-
Revision 1, 2022-04-08 (Ross Ning)
-
Initial extension description
-
12.130. XR_META_hand_tracking_microgestures
- Name String
-
XR_META_hand_tracking_microgestures - Extension Type
-
Instance extension
- Registered Extension Number
-
253
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2024-09-03
- IP Status
-
No known IP claims.
- Contributors
-
Matthew Langille, Meta
Kenrick Kin, Meta
Chengde Wan, Meta
Ken Koh, Meta
Necati Cihan Camgoz, Meta
Shugao Ma, Meta
Andrei Marin, Meta
Eric Sauser, Meta
Muzaffer Akbay, Meta
Fengyang Zhang, Meta
Jingming Dong, Meta
Yujun Cai, Meta
Matthew Longest, Meta
12.130.1. Overview
Microgestures expand the capabilities of hand tracking by enabling low-calorie thumb tap and swipe motions to trigger discrete D-pad-like directional commands.
The hand pose and motion of the thumb is as follows: initially, the user must raise their thumb above their index finger (not touching the index finger). For best results, the user should slightly curl the other fingers as in the following illustration; i.e. not too extended, nor completely curled into a fist.
A tap is performed by touching the middle segment of the index finger with the thumb, and then lifting the thumb.
The four directional thumb swipes performed on the surface of the index finger are:
- Left swipe
-
a swipe towards the index fingertip on the right hand, and away from the index fingertip on the left hand. On the right hand for example, the motion is as follows: the thumb starts raised above the index finger, touches the middle segment of the index finger, slides towards the index fingertip, and lifts.
- Right swipe
-
the same motion as the left swipe, but in the opposite direction. On the right hand for example, the thumb starts raised above the index finger, touches the middle segment of the index finger, slides away from the index fingertip, and lifts.
- Forward swipe
-
the thumb starts raised above the index finger, touches the middle segment of the index finger, slides forward, and lifts.
- Backward swipe
-
the thumb starts raised above the index finger, touches the middle segment of the index finger, slides backward/downward, and lifts.
Note that the motions are performed at moderate to quick speeds, and are intended to be performed in one smooth motion. The detection of the gesture happens at the end of the motion, regardless of speed.
This extension exposes these discrete signals through the OpenXR
action system.
It augments XR_EXT_hand_interaction by adding a series of component
paths to the interaction profile.
12.130.2. Enabling Microgestures
In order to use the binding paths defined in this extension in addition to
those already present in XR_EXT_hand_interaction, applications must
enable both XR_EXT_hand_interaction and
XR_META_hand_tracking_microgestures.
If the application passes XR_META_hand_tracking_microgestures but
does not pass XR_EXT_hand_interaction then xrCreateInstance
must return XR_ERROR_EXTENSION_DEPENDENCY_NOT_ENABLED.
12.130.3. Action paths for Microgestures
Interaction profile path:
-
/interaction_profiles/ext/hand_interaction_ext
Valid for top level user path:
-
/user/hand/left
-
/user/hand/right
Additional supported component paths:
-
…/input/swipe_left_meta/click
-
…/input/swipe_right_meta/click
-
…/input/swipe_forward_meta/click
-
…/input/swipe_backward_meta/click
-
…/input/tap_thumb_meta/click
All listed inputs are booleans that become XR_TRUE once the corresponding gesture has been completed and recognized. They are XR_FALSE otherwise, even while a gesture is in progress.
12.130.4. New Enum Constants
-
XR_META_HAND_TRACKING_MICROGESTURES_EXTENSION_NAME
-
XR_META_hand_tracking_microgestures_SPEC_VERSION
12.130.5. Issues
When the XR_EXT_hand_interaction and XR_META_hand_tracking_microgestures extensions are available and enabled, the runtime should avoid interference between the detection of pinches and microgestures, as the two kinds of gestures are similar in nature. Specifically, a swipe towards the tip of the index finger should not be misclassified as a pinch.
12.131. XR_META_headset_id
- Name String
-
XR_META_headset_id - Extension Type
-
Instance extension
- Registered Extension Number
-
246
- Revision
-
2
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2022-08-11
- IP Status
-
No known IP claims.
- Contributors
-
Wenlin Mao, Meta Platforms
Andreas Loeve Selvik, Meta Platforms
Rémi Palandri, Meta Platforms
John Kearney, Meta Platforms
Jonathan Wright, Meta Platforms
- Contacts
-
Wenlin Mao, Meta Platforms
Note
Using the headset ID to alter application behavior is discouraged, as it interferes with compatibility with current and future headsets. The OpenXR specification is designed with the goal of avoiding the need for explicit per-device logic. If the use of this extension is required, it is encouraged to let the OpenXR working group know about the use case, through a communication channel like email or GitHub. While this usage is discouraged, applications that need this functionality are encouraged to use this extension instead of the
The XrSystemHeadsetIdPropertiesMETA structure is defined as:
// Provided by XR_META_headset_id
typedef struct XrSystemHeadsetIdPropertiesMETA {
XrStructureType type;
void* next;
XrUuidEXT id;
} XrSystemHeadsetIdPropertiesMETA;
An application can get the UUID corresponding to the headset model by chaining an XrSystemHeadsetIdPropertiesMETA structure to the XrSystemProperties when calling xrGetSystemProperties.
The UUID returned in the XrSystemHeadsetIdPropertiesMETA structure is an opaque UUID that identifies a runtime / headset model combo.
The runtime should always return the same UUID for a given headset model for the entire lifetime of that product.
The runtime may report a different UUID to some applications for compatibility purposes.
This is in contrast to the XrSystemProperties::systemName field
which is not required to be consistent across product renames.
This is intended to be a temporary feature that will be deprecated along
with its extension as soon as motivating use cases are resolved in a better
way.
See the disclaimer at the start of the XR_META_headset_id extension
documentation for more details.
New Object Types
New Atom
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_SYSTEM_HEADSET_ID_PROPERTIES_META
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2022-08-11 (Wenlin Mao)
-
Initial extension description
-
-
Revision 2, 2023-01-30 (Wenlin Mao)
-
Drop requirement for XR_EXT_uuid being enabled
-
12.132. XR_META_local_dimming
- Name String
-
XR_META_local_dimming - Extension Type
-
Instance extension
- Registered Extension Number
-
217
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2022-05-05
- IP Status
-
No known IP claims.
- Contributors
-
Ross Ning, Meta Platforms
Haomiao Jiang, Meta Platforms
Remi Palandri, Meta Platforms
Xiang Wei, Meta Platforms
Overview
Local dimming allows adjusting the backlight intensity of dark areas on the screen in order to increase content dynamic range. The local dimming feature is not intended for optical see-through HMDs.
An application can request the local dimming mode on a frame basis by chaining an XrLocalDimmingFrameEndInfoMETA structure to the XrFrameEndInfo.
-
Using XrLocalDimmingFrameEndInfoMETA is considered a hint and does not trigger xrEndFrame errors whether or not the requested dimming mode is fulfilled by the runtime.
-
The runtime will have full control of the local dimming mode and may disregard app requests. For example, the runtime may allow only one primary client to control the local dimming mode.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_LOCAL_DIMMING_FRAME_END_INFO_META
New Enums
The local dimming mode is specified by the XrLocalDimmingModeMETA enumeration:
// Provided by XR_META_local_dimming
typedef enum XrLocalDimmingModeMETA {
XR_LOCAL_DIMMING_MODE_OFF_META = 0,
XR_LOCAL_DIMMING_MODE_ON_META = 1,
XR_LOCAL_DIMMING_MODE_MAX_ENUM_META = 0x7FFFFFFF
} XrLocalDimmingModeMETA;
New Structures
The XrLocalDimmingFrameEndInfoMETA structure is defined as:
// Provided by XR_META_local_dimming
typedef struct XrLocalDimmingFrameEndInfoMETA {
XrStructureType type;
const void* next;
XrLocalDimmingModeMETA localDimmingMode;
} XrLocalDimmingFrameEndInfoMETA;
The XrLocalDimmingFrameEndInfoMETA is a structure that an application can chain in XrFrameEndInfo in order to request a local dimming mode.
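Chaining a structure like this follows the standard OpenXR structure pointer chain pattern: the application links the extension struct into the base struct's next chain, and the runtime walks the chain looking for types it recognizes. The sketch below models that mechanism with hypothetical *Sketch types and integer type tags standing in for XrStructureType values.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical type tags standing in for XrStructureType values. */
enum { TYPE_FRAME_END_INFO = 1, TYPE_LOCAL_DIMMING_FRAME_END_INFO = 2 };

/* Common header shared by all chainable structures, as in OpenXR. */
typedef struct BaseSketch {
    int type;
    const struct BaseSketch *next;
} BaseSketch;

typedef struct {
    BaseSketch base;
    int localDimmingMode; /* 0 = off, 1 = on, mirroring XrLocalDimmingModeMETA */
} LocalDimmingSketch;

typedef struct {
    BaseSketch base;
    /* display time, layers, etc. elided */
} FrameEndInfoSketch;

/* Walk a next chain looking for a structure of the given type,
   as a runtime would when processing xrEndFrame. */
static const BaseSketch *find_in_chain(const BaseSketch *head, int type) {
    for (; head != NULL; head = head->next)
        if (head->type == type) return head;
    return NULL;
}
```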
New Functions
Issues
Version History
-
Revision 1, 2022-05-05 (Ross Ning)
-
Initial draft
-
12.133. XR_META_passthrough_color_lut
- Name String
-
XR_META_passthrough_color_lut - Extension Type
-
Instance extension
- Registered Extension Number
-
267
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2022-11-28
- IP Status
-
No known IP claims.
- Contributors
-
Andreas Loeve Selvik, Meta Platforms
Johannes Schmid, Meta Platforms
John Kearney, Meta Platforms
Overview
This extension adds the capability to define and apply RGB to RGB(A) color
look-up tables (LUTs) to passthrough layers created using
XR_FB_passthrough.
Color LUTs are 3-dimensional arrays which map each input color to a different output color. When applied to a Passthrough layer, the runtime must transform Passthrough camera images according to this map before display. Color LUTs may be used to achieve effects such as color grading, level control, color filtering, or chroma keying.
Color LUTs must be created using xrCreatePassthroughColorLutMETA before they can be applied to a Passthrough layer in a call to xrPassthroughLayerSetStyleFB (as a part of XrPassthroughColorMapLutMETA or XrPassthroughColorMapInterpolatedLutMETA). A color LUT may be applied to multiple Passthrough layers simultaneously.
New Object Types
XR_DEFINE_HANDLE(XrPassthroughColorLutMETA)
XrPassthroughColorLutMETA represents the definition and data for a color LUT which may be applied to a passthrough layer using xrPassthroughLayerSetStyleFB.
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_SYSTEM_PASSTHROUGH_COLOR_LUT_PROPERTIES_META
-
XR_TYPE_PASSTHROUGH_COLOR_LUT_CREATE_INFO_META
-
XR_TYPE_PASSTHROUGH_COLOR_LUT_UPDATE_INFO_META
-
XR_TYPE_PASSTHROUGH_COLOR_MAP_LUT_META
-
XR_TYPE_PASSTHROUGH_COLOR_MAP_INTERPOLATED_LUT_META
New Enums
Specify the color channels contained in the color LUT.
typedef enum XrPassthroughColorLutChannelsMETA {
XR_PASSTHROUGH_COLOR_LUT_CHANNELS_RGB_META = 1,
XR_PASSTHROUGH_COLOR_LUT_CHANNELS_RGBA_META = 2,
XR_PASSTHROUGH_COLOR_LUT_CHANNELS_MAX_ENUM_META = 0x7FFFFFFF
} XrPassthroughColorLutChannelsMETA;
New Structures
The XrSystemPassthroughColorLutPropertiesMETA structure is defined as:
// Provided by XR_META_passthrough_color_lut
typedef struct XrSystemPassthroughColorLutPropertiesMETA {
XrStructureType type;
const void* next;
uint32_t maxColorLutResolution;
} XrSystemPassthroughColorLutPropertiesMETA;
When the XR_META_passthrough_color_lut extension is enabled, an
application may pass in an XrSystemPassthroughColorLutPropertiesMETA
structure in next chain structure when calling xrGetSystemProperties
to acquire information about the connected system.
The runtime must populate the XrSystemPassthroughColorLutPropertiesMETA structure with the relevant information to the XrSystemProperties returned by the xrGetSystemProperties call.
The XrPassthroughColorLutDataMETA structure is defined as:
// Provided by XR_META_passthrough_color_lut
typedef struct XrPassthroughColorLutDataMETA {
uint32_t bufferSize;
const uint8_t* buffer;
} XrPassthroughColorLutDataMETA;
XrPassthroughColorLutDataMETA defines the LUT data for a color LUT. This structure is used when creating and updating color LUTs.
The XrPassthroughColorLutCreateInfoMETA structure is defined as:
// Provided by XR_META_passthrough_color_lut
typedef struct XrPassthroughColorLutCreateInfoMETA {
XrStructureType type;
const void* next;
XrPassthroughColorLutChannelsMETA channels;
uint32_t resolution;
XrPassthroughColorLutDataMETA data;
} XrPassthroughColorLutCreateInfoMETA;
resolution must be a power of 2, otherwise the runtime must return
XR_ERROR_VALIDATION_FAILURE.
The runtime may impose a limit on the maximum supported resolution, which
is indicated in XrSystemPassthroughColorLutPropertiesMETA.
If resolution exceeds that limit, the runtime must return
XR_ERROR_VALIDATION_FAILURE.
data contains a 3-dimensional array which defines an output color for
each RGB input color.
The input color is scaled to be in the range [0, resolution - 1].
For an RGBA LUT, the RGBA tuple of output colors for an input color
(Rin, Gin, Bin) is found in the four bytes starting at the
offset 4 * (Rin + Gin * resolution + Bin * resolution²).
For an RGB LUT, the RGB tuple of output colors for an input color
(Rin, Gin, Bin) is found in the three bytes starting at the
offset 3 * (Rin + Gin * resolution + Bin * resolution²).
Color LUT data must be specified and interpreted in sRGB color space.
Runtimes must employ trilinear interpolation of neighboring color values if the resolution of the color LUT is smaller than the bit depth of the input colors.
The value of XrPassthroughColorLutDataMETA::bufferSize in
data must be equal to resolution³ * bytesPerElement,
where bytesPerElement is either 3 or 4 depending on channels.
Otherwise, the runtime must return
XR_ERROR_PASSTHROUGH_COLOR_LUT_BUFFER_SIZE_MISMATCH_META.
The XrPassthroughColorLutUpdateInfoMETA structure is defined as:
// Provided by XR_META_passthrough_color_lut
typedef struct XrPassthroughColorLutUpdateInfoMETA {
XrStructureType type;
const void* next;
XrPassthroughColorLutDataMETA data;
} XrPassthroughColorLutUpdateInfoMETA;
The LUT data may be updated for an existing color LUT, while channels and
resolution remain constant after creation.
Hence, the value of XrPassthroughColorLutDataMETA::bufferSize in
data must be equal to the buffer size specified at creation.
Otherwise, the runtime must return
XR_ERROR_PASSTHROUGH_COLOR_LUT_BUFFER_SIZE_MISMATCH_META.
The XrPassthroughColorMapLutMETA structure is defined as:
// Provided by XR_META_passthrough_color_lut
typedef struct XrPassthroughColorMapLutMETA {
XrStructureType type;
const void* next;
XrPassthroughColorLutMETA colorLut;
float weight;
} XrPassthroughColorMapLutMETA;
XrPassthroughColorMapLutMETA lets applications apply a color LUT to a passthrough layer. Other Passthrough style elements (such as edges) must not be affected by color LUTs.
Applications may use weight to efficiently blend between the original
colors and the mapped colors.
The blend is computed as (1 - weight) * Cin + weight *
colorLut [Cin].
XrPassthroughColorMapLutMETA is provided in the next chain of
XrPassthroughStyleFB when calling xrPassthroughLayerSetStyleFB.
Subsequent calls to xrPassthroughLayerSetStyleFB with
XrPassthroughColorMapLutMETA in the next chain update the color
LUT for that layer.
Subsequent calls to xrPassthroughLayerSetStyleFB without this
XrPassthroughColorMapLutMETA (or
XrPassthroughColorMapInterpolatedLutMETA) in the next chain disable
color LUTs for that layer.
The XrPassthroughColorMapInterpolatedLutMETA structure is defined as:
// Provided by XR_META_passthrough_color_lut
typedef struct XrPassthroughColorMapInterpolatedLutMETA {
XrStructureType type;
const void* next;
XrPassthroughColorLutMETA sourceColorLut;
XrPassthroughColorLutMETA targetColorLut;
float weight;
} XrPassthroughColorMapInterpolatedLutMETA;
XrPassthroughColorMapInterpolatedLutMETA lets applications apply the interpolation between two color LUTs to a passthrough layer. Applications may use this feature to smoothly transition between two color LUTs. Other Passthrough style elements (such as edges) must not be affected by color LUTs.
The blend between sourceColorLut and targetColorLut is computed
as (1 - weight) * sourceColorLut [Cin] + weight *
targetColorLut [Cin].
XrPassthroughColorMapInterpolatedLutMETA is provided in the next
chain of XrPassthroughStyleFB when calling
xrPassthroughLayerSetStyleFB.
Subsequent calls to xrPassthroughLayerSetStyleFB with
XrPassthroughColorMapInterpolatedLutMETA in the next chain update the
color LUT for that layer.
Subsequent calls to xrPassthroughLayerSetStyleFB without this
XrPassthroughColorMapInterpolatedLutMETA (or
XrPassthroughColorMapLutMETA) in the next chain disable color LUTs for
that layer.
New Functions
The xrCreatePassthroughColorLutMETA function is defined as:
// Provided by XR_META_passthrough_color_lut
XrResult xrCreatePassthroughColorLutMETA(
XrPassthroughFB passthrough,
const XrPassthroughColorLutCreateInfoMETA* createInfo,
XrPassthroughColorLutMETA* colorLut);
Creates a passthrough color LUT. The resulting XrPassthroughColorLutMETA may be referenced in XrPassthroughColorMapLutMETA and XrPassthroughColorMapInterpolatedLutMETA in subsequent calls to xrPassthroughLayerSetStyleFB.
The xrDestroyPassthroughColorLutMETA function is defined as:
// Provided by XR_META_passthrough_color_lut
XrResult xrDestroyPassthroughColorLutMETA(
XrPassthroughColorLutMETA colorLut);
Destroys a passthrough color LUT. If the color LUT is still in use (i.e. if for at least one passthrough layer, xrPassthroughLayerSetStyleFB has last been called with an instance of XrPassthroughColorMapLutMETA or XrPassthroughColorMapInterpolatedLutMETA in the next chain that references this color LUT), the runtime must retain the color LUT data and continue applying it to the affected passthrough layer until a different style is applied.
The xrUpdatePassthroughColorLutMETA function is defined as:
// Provided by XR_META_passthrough_color_lut
XrResult xrUpdatePassthroughColorLutMETA(
XrPassthroughColorLutMETA colorLut,
const XrPassthroughColorLutUpdateInfoMETA* updateInfo);
Updates the LUT data of a passthrough color LUT.
The data type of the color LUT (resolution and channels) is immutable.
The provided data in this call must therefore match the data type specified
at creation time.
Specifically, XrPassthroughColorLutDataMETA::bufferSize of the
new data must be equal to the
XrPassthroughColorLutDataMETA::bufferSize specified during
creation.
Otherwise, the runtime must return XR_ERROR_VALIDATION_FAILURE.
The runtime must reflect changes to color LUT data on all Passthrough layers the color LUT is currently applied to.
Version History
-
Revision 1, 2022-12-08 (Johannes Schmid)
-
Initial extension description
-
12.134. XR_META_passthrough_layer_resumed_event
- Name String
-
XR_META_passthrough_layer_resumed_event
- Extension Type
-
Instance extension
- Registered Extension Number
-
283
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2023-12-12
- IP Status
-
No known IP claims.
- Contributors
-
Ante Trbojevic, Meta Platforms
12.134.1. Overview
This extension defines an event that is emitted once a passthrough layer
(created using XR_FB_passthrough) is resumed and ready for displaying
after a resume command or when the passthrough layer was created with
XR_PASSTHROUGH_IS_RUNNING_AT_CREATION_BIT_FB.
The functions xrPassthroughLayerResumeFB and
xrCreatePassthroughLayerFB from XR_FB_passthrough are
asynchronous without any guarantees on when their effect will be visible on
the display.
Runtimes may asynchronously perform operations which may take several
frames to complete, such as turning on sensor hardware.
Runtimes queue this event under the aforementioned conditions when this
extension is requested during instance creation.
Unlike most extensions, to start receiving the event, an app only needs to enable this extension.
This extension depends on XR_FB_passthrough.
12.134.2. New Event
The XrEventDataPassthroughLayerResumedMETA structure is defined as:
// Provided by XR_META_passthrough_layer_resumed_event
typedef struct XrEventDataPassthroughLayerResumedMETA {
XrStructureType type;
const void* next;
XrPassthroughLayerFB layer;
} XrEventDataPassthroughLayerResumedMETA;
Runtimes must queue the event exactly once when first presenting passthrough after an app successfully calls one of the following:
-
xrPassthroughLayerResumeFB
-
xrCreatePassthroughLayerFB with flag
XR_PASSTHROUGH_IS_RUNNING_AT_CREATION_BIT_FB set
The passthrough layer state is reset when the app calls xrPassthroughLayerPauseFB.
Runtimes must queue the event again, if xrPassthroughLayerPauseFB is followed by xrPassthroughLayerResumeFB. During the transition from paused to resumed state, the event is queued exactly once when passthrough has been presented for the first time. If the passthrough feature is not active during the transition, for example because it has been paused using xrPassthroughPauseFB, the event is queued when passthrough becomes active.
12.134.4. New Enum Constants
-
XR_META_PASSTHROUGH_LAYER_RESUMED_EVENT_EXTENSION_NAME
-
XR_META_passthrough_layer_resumed_event_SPEC_VERSION
Extending XrStructureType:
-
XR_TYPE_EVENT_DATA_PASSTHROUGH_LAYER_RESUMED_META
-
Version History
-
Revision 1, 2023-05-31 (Ante Trbojevic)
-
Initial extension description
-
12.135. XR_META_passthrough_preferences
- Name String
-
XR_META_passthrough_preferences
- Extension Type
-
Instance extension
- Registered Extension Number
-
218
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2023-04-25
- IP Status
-
No known IP claims.
- Contributors
-
Johannes Schmid, Meta Platforms
Overview
This extension provides applications with access to system preferences
concerning passthrough.
For more information on how applications can control the display of
passthrough, see XR_FB_passthrough.
New Flag Types
// Provided by XR_META_passthrough_preferences
typedef XrFlags64 XrPassthroughPreferenceFlagsMETA;
// Provided by XR_META_passthrough_preferences
// Flag bits for XrPassthroughPreferenceFlagsMETA
static const XrPassthroughPreferenceFlagsMETA XR_PASSTHROUGH_PREFERENCE_DEFAULT_TO_ACTIVE_BIT_META = 0x00000001;
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_PASSTHROUGH_PREFERENCES_META
New Structures
The XrPassthroughPreferencesMETA structure is defined as:
// Provided by XR_META_passthrough_preferences
typedef struct XrPassthroughPreferencesMETA {
XrStructureType type;
const void* next;
XrPassthroughPreferenceFlagsMETA flags;
} XrPassthroughPreferencesMETA;
The runtime must populate the XrPassthroughPreferencesMETA structure with the relevant information when the app calls xrGetPassthroughPreferencesMETA.
Presence of the bit flag
XR_PASSTHROUGH_PREFERENCE_DEFAULT_TO_ACTIVE_BIT_META does not indicate
a guarantee that applications can enable and use passthrough in practice.
The runtime may impose restrictions on passthrough usage (e.g. based on
hardware availability or permission models) independently of the state of
this flag bit.
Apps should test for this flag explicitly, as more flag bits may be
introduced in the future.
New Functions
The xrGetPassthroughPreferencesMETA function is defined as:
// Provided by XR_META_passthrough_preferences
XrResult xrGetPassthroughPreferencesMETA(
XrSession session,
XrPassthroughPreferencesMETA* preferences);
An application can call xrGetPassthroughPreferencesMETA to retrieve passthrough-related preferences from the system.
Version History
-
Revision 1, 2023-04-25 (Johannes Schmid)
-
Initial extension description
-
12.136. XR_META_performance_metrics
- Name String
-
XR_META_performance_metrics
- Extension Type
-
Instance extension
- Registered Extension Number
-
233
- Revision
-
2
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Xiang Wei, Meta Platforms
Overview
This extension provides APIs to enumerate and query performance metrics counters of the current XR device and XR application. Developers can perform performance analysis and do targeted optimization to the XR application using the performance metrics counters being collected. The application should not change its behavior based on the counter reads.
The performance metrics counters are organized into predefined
XrPath values, under the root path /perfmetrics_meta.
An application can query the available counters through
xrEnumeratePerformanceMetricsCounterPathsMETA.
Here is a list of the performance metrics counter paths that may be
provided on Meta devices:
-
/perfmetrics_meta/app/cpu_frametime
-
/perfmetrics_meta/app/gpu_frametime
-
/perfmetrics_meta/app/motion_to_photon_latency
-
/perfmetrics_meta/compositor/cpu_frametime
-
/perfmetrics_meta/compositor/gpu_frametime
-
/perfmetrics_meta/compositor/dropped_frame_count
-
/perfmetrics_meta/compositor/spacewarp_mode
-
/perfmetrics_meta/device/cpu_utilization_average
-
/perfmetrics_meta/device/cpu_utilization_worst
-
/perfmetrics_meta/device/gpu_utilization
-
/perfmetrics_meta/device/cpu0_utilization through /perfmetrics_meta/device/cpuX_utilization
After a session is created, an application can use xrSetPerformanceMetricsStateMETA to enable the performance metrics system for that session. An application can use xrQueryPerformanceMetricsCounterMETA to query a performance metrics counter on a session that has the performance metrics system enabled, or use xrGetPerformanceMetricsStateMETA to query if the performance metrics system is enabled.
Note: the measurement intervals of individual performance metrics counters are defined by the OpenXR runtime. The application must not make assumptions or change its behavior at runtime by measuring them.
In order to enable the functionality of this extension, the application
must pass the name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
New Flag Types
typedef XrFlags64 XrPerformanceMetricsCounterFlagsMETA;
// Flag bits for XrPerformanceMetricsCounterFlagsMETA
static const XrPerformanceMetricsCounterFlagsMETA XR_PERFORMANCE_METRICS_COUNTER_ANY_VALUE_VALID_BIT_META = 0x00000001;
static const XrPerformanceMetricsCounterFlagsMETA XR_PERFORMANCE_METRICS_COUNTER_UINT_VALUE_VALID_BIT_META = 0x00000002;
static const XrPerformanceMetricsCounterFlagsMETA XR_PERFORMANCE_METRICS_COUNTER_FLOAT_VALUE_VALID_BIT_META = 0x00000004;
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_PERFORMANCE_METRICS_STATE_META
-
XR_TYPE_PERFORMANCE_METRICS_COUNTER_META
New Enums
// Provided by XR_META_performance_metrics
typedef enum XrPerformanceMetricsCounterUnitMETA {
XR_PERFORMANCE_METRICS_COUNTER_UNIT_GENERIC_META = 0,
XR_PERFORMANCE_METRICS_COUNTER_UNIT_PERCENTAGE_META = 1,
XR_PERFORMANCE_METRICS_COUNTER_UNIT_MILLISECONDS_META = 2,
XR_PERFORMANCE_METRICS_COUNTER_UNIT_BYTES_META = 3,
XR_PERFORMANCE_METRICS_COUNTER_UNIT_HERTZ_META = 4,
XR_PERFORMANCE_METRICS_COUNTER_UNIT_MAX_ENUM_META = 0x7FFFFFFF
} XrPerformanceMetricsCounterUnitMETA;
| Enum | Description |
|---|---|
| XR_PERFORMANCE_METRICS_COUNTER_UNIT_GENERIC_META | the performance counter unit is generic (unspecified). |
| XR_PERFORMANCE_METRICS_COUNTER_UNIT_PERCENTAGE_META | the performance counter unit is percentage (%). |
| XR_PERFORMANCE_METRICS_COUNTER_UNIT_MILLISECONDS_META | the performance counter unit is millisecond. |
| XR_PERFORMANCE_METRICS_COUNTER_UNIT_BYTES_META | the performance counter unit is byte. |
| XR_PERFORMANCE_METRICS_COUNTER_UNIT_HERTZ_META | the performance counter unit is hertz (Hz). |
New Structures
The XrPerformanceMetricsStateMETA structure is defined as:
// Provided by XR_META_performance_metrics
typedef struct XrPerformanceMetricsStateMETA {
XrStructureType type;
const void* next;
XrBool32 enabled;
} XrPerformanceMetricsStateMETA;
XrPerformanceMetricsStateMETA is provided as input when calling xrSetPerformanceMetricsStateMETA to enable or disable the performance metrics system. XrPerformanceMetricsStateMETA is populated as an output parameter when calling xrGetPerformanceMetricsStateMETA to query if the performance metrics system is enabled.
The XrPerformanceMetricsCounterMETA structure is defined as:
// Provided by XR_META_performance_metrics
typedef struct XrPerformanceMetricsCounterMETA {
XrStructureType type;
const void* next;
XrPerformanceMetricsCounterFlagsMETA counterFlags;
XrPerformanceMetricsCounterUnitMETA counterUnit;
uint32_t uintValue;
float floatValue;
} XrPerformanceMetricsCounterMETA;
XrPerformanceMetricsCounterMETA is populated by calling xrQueryPerformanceMetricsCounterMETA to query real-time performance metrics counter information.
New Functions
The xrEnumeratePerformanceMetricsCounterPathsMETA function enumerates all performance metrics counter paths supported by the runtime. It is defined as:
// Provided by XR_META_performance_metrics
XrResult xrEnumeratePerformanceMetricsCounterPathsMETA(
XrInstance instance,
uint32_t counterPathCapacityInput,
uint32_t* counterPathCountOutput,
XrPath* counterPaths);
The xrSetPerformanceMetricsStateMETA function is defined as:
// Provided by XR_META_performance_metrics
XrResult xrSetPerformanceMetricsStateMETA(
XrSession session,
const XrPerformanceMetricsStateMETA* state);
The xrSetPerformanceMetricsStateMETA function enables or disables the performance metrics system.
The xrGetPerformanceMetricsStateMETA function is defined as:
// Provided by XR_META_performance_metrics
XrResult xrGetPerformanceMetricsStateMETA(
XrSession session,
XrPerformanceMetricsStateMETA* state);
The xrGetPerformanceMetricsStateMETA function gets the current state of the performance metrics system.
The xrQueryPerformanceMetricsCounterMETA function is defined as:
// Provided by XR_META_performance_metrics
XrResult xrQueryPerformanceMetricsCounterMETA(
XrSession session,
XrPath counterPath,
XrPerformanceMetricsCounterMETA* counter);
The xrQueryPerformanceMetricsCounterMETA function queries a performance metrics counter.
The application should enable the performance metrics system (by calling
xrSetPerformanceMetricsStateMETA) before querying metrics using
xrQueryPerformanceMetricsCounterMETA.
If the performance metrics system has not been enabled before calling
xrQueryPerformanceMetricsCounterMETA, the runtime must return
XR_ERROR_VALIDATION_FAILURE.
If counterPath is not in the list returned by
xrEnumeratePerformanceMetricsCounterPathsMETA, the runtime must return
XR_ERROR_PATH_UNSUPPORTED.
Issues
Version History
-
Revision 1, 2022-04-28 (Xiang Wei)
-
Initial extension description
-
-
Revision 2, 2022-09-16 (John Kearney)
-
Clarification of error codes
-
12.137. XR_META_recommended_layer_resolution
- Name String
-
XR_META_recommended_layer_resolution
- Extension Type
-
Instance extension
- Registered Extension Number
-
255
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Rohit Rao Padebettu, Meta
Remi Palandri, Meta
Ben Cumings, Meta
Overview
The extension allows an application to request a recommended swapchain resolution from the runtime, in order to either allocate a swapchain of a more appropriate size, or to render into a smaller image rect according to the recommendation. For layers with multiple views such as XrCompositionLayerProjection, the application may scale the individual views to match the scaled swapchain resolution.
The runtime may use any factors to drive the recommendation it wishes to return to the application. Those include static properties such as screen resolution and HMD type, but also dynamic ones such as layer positioning and system-wide GPU utilization.
An application may also use this extension to size a swapchain before allocation by passing in a layer whose swapchain handle is XR_NULL_HANDLE.
New Structures
The XrRecommendedLayerResolutionMETA structure is defined as:
// Provided by XR_META_recommended_layer_resolution
typedef struct XrRecommendedLayerResolutionMETA {
XrStructureType type;
void* next;
XrExtent2Di recommendedImageDimensions;
XrBool32 isValid;
} XrRecommendedLayerResolutionMETA;
If the runtime does not wish to make a recommendation, isValid must
be XR_FALSE and recommendedImageDimensions must be {0,0}.
The XrRecommendedLayerResolutionGetInfoMETA structure is defined as:
// Provided by XR_META_recommended_layer_resolution
typedef struct XrRecommendedLayerResolutionGetInfoMETA {
XrStructureType type;
const void* next;
const XrCompositionLayerBaseHeader* layer;
XrTime predictedDisplayTime;
} XrRecommendedLayerResolutionGetInfoMETA;
If predictedDisplayTime is older than the predicted display time
returned from the most recent xrWaitFrame, the runtime must return
XR_ERROR_TIME_INVALID.
New Functions
The xrGetRecommendedLayerResolutionMETA function is defined as:
// Provided by XR_META_recommended_layer_resolution
XrResult xrGetRecommendedLayerResolutionMETA(
XrSession session,
const XrRecommendedLayerResolutionGetInfoMETA* info,
XrRecommendedLayerResolutionMETA* resolution);
The xrGetRecommendedLayerResolutionMETA function returns the recommendation that the runtime wishes to make to the application for the layer provided in the XrRecommendedLayerResolutionGetInfoMETA structure. Applications may choose to reallocate their swapchains or scale their view resolution accordingly. Applications rendering multiple views into the swapchain may scale individual views to match the recommended swapchain resolution.
The runtime may not wish to make any recommendation, in which case it must
return an XrRecommendedLayerResolutionMETA::isValid value of
XR_FALSE.
If the XrRecommendedLayerResolutionGetInfoMETA::layer attribute
of the info argument of the function contains valid swapchain handles
in all fields where required, the runtime must return a resolution
recommendation which is less than or equal to the size of that swapchain, so
that the application may render into an existing swapchain or swapchains
without reallocation.
As an exception to valid usage, an otherwise-valid structure passed as
XrRecommendedLayerResolutionGetInfoMETA::layer may contain
XR_NULL_HANDLE in place of valid XrSwapchain handle(s) for this
function only, to obtain a recommended resolution for the purpose of
allocating a swapchain.
If at least one otherwise-required XrSwapchain handle within
XrRecommendedLayerResolutionGetInfoMETA::layer is
XR_NULL_HANDLE, the runtime must interpret this as a request for
recommended resolution without limitation to the allocated size of any
existing swapchain.
If the runtime makes a recommendation, it should make a recommendation that is directly usable by the application to render its frames without creating adverse visual effects for the user.
Issues
-
Should this extension be leveraging events instead of being queried potentially every frame?
RESOLVED: Yes.
We want to provide the runtime the flexibility to smoothly transition the application from one resolution to another in a dynamic resolution usecase without any reallocation. To do so with an event system would send an event every frame which we preferred to avoid.
Version History
-
Revision 1, 2023-12-10 (Remi Palandri)
-
Initial extension description
-
12.138. XR_META_simultaneous_hands_and_controllers
- Name String
-
XR_META_simultaneous_hands_and_controllers
- Extension Type
-
Instance extension
- Registered Extension Number
-
533
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2025-05-07
- IP Status
-
No known IP claims.
- Contributors
-
Matthew Langille, Meta Platforms
12.138.1. Overview
Some XR systems have the ability to track both hands and controllers simultaneously (commonly referred to as multimodal hands and controllers input), but this may consume additional power and system resources. This extension defines two new functions that applications can use to control when the simultaneous hands and controller tracking is activated.
12.138.2. Inspect System Capability
The XrSystemSimultaneousHandsAndControllersPropertiesMETA structure is defined as:
// Provided by XR_META_simultaneous_hands_and_controllers
typedef struct XrSystemSimultaneousHandsAndControllersPropertiesMETA {
XrStructureType type;
void* next;
XrBool32 supportsSimultaneousHandsAndControllers;
} XrSystemSimultaneousHandsAndControllersPropertiesMETA;
An application can inspect whether the system is capable of enabling simultaneous hands and controller tracking by extending the XrSystemProperties with XrSystemSimultaneousHandsAndControllersPropertiesMETA structure when calling xrGetSystemProperties.
If and only if a runtime returns XR_FALSE for
supportsSimultaneousHandsAndControllers, the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from
xrResumeSimultaneousHandsAndControllersTrackingMETA and
xrPauseSimultaneousHandsAndControllersTrackingMETA.
12.138.3. Enable Simultaneous Tracking
The xrResumeSimultaneousHandsAndControllersTrackingMETA function is defined as:
// Provided by XR_META_simultaneous_hands_and_controllers
XrResult xrResumeSimultaneousHandsAndControllersTrackingMETA(
XrSession session,
const XrSimultaneousHandsAndControllersTrackingResumeInfoMETA* resumeInfo);
An application can call xrResumeSimultaneousHandsAndControllersTrackingMETA to enable simultaneous hands and controller tracking.
Runtimes must initialize the simultaneous tracking feature in a paused state, so applications must call the resume function for simultaneous tracking to start.
If xrResumeSimultaneousHandsAndControllersTrackingMETA is called when
the feature is already in a resumed state, the runtime must return
XR_SUCCESS.
If a system supports detection of whether a controller is currently held by the user, the runtime should represent this transition by switching the active interaction profile in the relevant hand from the active controller interaction profile to an interaction profile representing hands if available.
For example, the returned interaction from
xrGetCurrentInteractionProfile on /user/hand/left might
change from /interaction_profiles/facebook/touch_controller_pro to
/interaction_profiles/ext/hand_interaction_ext, generating an
XR_TYPE_EVENT_DATA_INTERACTION_PROFILE_CHANGED
(XrEventDataInteractionProfileChanged) event, assuming that bindings
were suggested for both of those interaction profiles on that path.
This would indicate that the user put down the controller and is no longer holding a controller in their left hand. Note that in this example, if the user is holding a controller in their right hand, xrGetCurrentInteractionProfile on /user/hand/right still returns /interaction_profiles/facebook/touch_controller_pro.
Also note that if the XR_META_detached_controllers extension is used
alongside XR_META_simultaneous_hands_and_controllers, then when a
controller leaves the hand, it may report that interaction profile as
current for the corresponding /user/detached_controller_meta path
if bindings have been suggested for that path.
See XR_META_detached_controllers for details.
When simultaneous tracking is resumed, runtimes should allow xrGetCurrentInteractionProfile to return different interaction profiles for different top level /user paths (e.g. /user/hand/left and /user/hand/right). Note that this behavior is already allowed by the specification, even without this extension, but runtimes exposing this extension may choose to not expose different interaction profiles for different top level /user paths unless it is enabled.
When a held controller transitions to an unheld state, the current interaction profile for the relevant top level path must change to an interaction profile representing hand tracking, if available.
The XrSimultaneousHandsAndControllersTrackingResumeInfoMETA structure is defined as:
// Provided by XR_META_simultaneous_hands_and_controllers
typedef struct XrSimultaneousHandsAndControllersTrackingResumeInfoMETA {
XrStructureType type;
const void* next;
} XrSimultaneousHandsAndControllersTrackingResumeInfoMETA;
This structure only exists to point to future extension structures.
12.138.4. Disable Simultaneous Tracking
The xrPauseSimultaneousHandsAndControllersTrackingMETA function is defined as:
// Provided by XR_META_simultaneous_hands_and_controllers
XrResult xrPauseSimultaneousHandsAndControllersTrackingMETA(
XrSession session,
const XrSimultaneousHandsAndControllersTrackingPauseInfoMETA* pauseInfo);
An application can call xrPauseSimultaneousHandsAndControllersTrackingMETA to disable simultaneous hands and controller tracking.
If xrPauseSimultaneousHandsAndControllersTrackingMETA is called when
the feature is not running, the runtime must return XR_SUCCESS.
Tracking systems consume system resources and it is desirable to be able to stop them when they are not in use; a strong motivation for this extension is that it provides the ability for clients to dynamically switch to a multiple tracking system operating mode only as needed, thus preserving system resources and improving battery performance.
The XrSimultaneousHandsAndControllersTrackingPauseInfoMETA structure is defined as:
// Provided by XR_META_simultaneous_hands_and_controllers
typedef struct XrSimultaneousHandsAndControllersTrackingPauseInfoMETA {
XrStructureType type;
const void* next;
} XrSimultaneousHandsAndControllersTrackingPauseInfoMETA;
This structure only exists to point to future extension structures.
12.138.7. New Enum Constants
-
XR_META_SIMULTANEOUS_HANDS_AND_CONTROLLERS_EXTENSION_NAME
-
XR_META_simultaneous_hands_and_controllers_SPEC_VERSION
Extending XrStructureType:
-
XR_TYPE_SIMULTANEOUS_HANDS_AND_CONTROLLERS_TRACKING_PAUSE_INFO_META
-
XR_TYPE_SIMULTANEOUS_HANDS_AND_CONTROLLERS_TRACKING_RESUME_INFO_META
-
XR_TYPE_SYSTEM_SIMULTANEOUS_HANDS_AND_CONTROLLERS_PROPERTIES_META
-
12.139. XR_META_spatial_entity_discovery
- Name String
-
XR_META_spatial_entity_discovery
- Extension Type
-
Instance extension
- Registered Extension Number
-
248
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2024-11-10
- IP Status
-
No known IP claims.
- Contributors
-
Natalie Fleury, Meta Platforms
Abhishek Shrivastava, Meta Platforms
Adrian Mancilla, Meta Platforms
12.139.1. Overview
The XR_META_spatial_entity_discovery extension supports spatial
entity retrieval in larger areas.
It offers entity filters like component and UUID filters.
This allows for more efficient and targeted discovery of spatial entities.
Technical overview
This extension enables finding and loading persisted Spatial Entities which can then be tracked across different sessions and over time by applications.
If the XR_SPACE_COMPONENT_TYPE_STORABLE_FB component has been enabled
on a space, and that space has previously been persisted, application
developers can discover and then track this XrSpace entity.
XR_META_spatial_entity_discovery is expected to be used alongside
XR_META_spatial_entity_persistence for space persistence and storage
management.
In order to enable the functionality of this extension, the application
must pass the name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
12.139.2. Inspect System Capability
The XrSystemSpaceDiscoveryPropertiesMETA structure is defined as:
// Provided by XR_META_spatial_entity_discovery
typedef struct XrSystemSpaceDiscoveryPropertiesMETA {
XrStructureType type;
const void* next;
XrBool32 supportsSpaceDiscovery;
} XrSystemSpaceDiscoveryPropertiesMETA;
An application can inspect whether the system is capable of supporting space discovery by extending the XrSystemProperties with XrSystemSpaceDiscoveryPropertiesMETA structure when calling xrGetSystemProperties.
If and only if a runtime returns XR_FALSE for
supportsSpaceDiscovery, the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from xrDiscoverSpacesMETA and
xrRetrieveSpaceDiscoveryResultsMETA.
12.139.3. Discover Spaces
The xrDiscoverSpacesMETA function is defined as:
// Provided by XR_META_spatial_entity_discovery
XrResult xrDiscoverSpacesMETA(
XrSession session,
const XrSpaceDiscoveryInfoMETA* info,
XrAsyncRequestIdFB* requestId);
The xrDiscoverSpacesMETA function discovers spaces that were persisted
previously, and which comply with the filters passed in the
XrSpaceDiscoveryInfoMETA info structure.
This operation is asynchronous.
If xrDiscoverSpacesMETA returns a failure code, no discovery operation takes place and no events are queued for this operation.
If the asynchronous operation is scheduled successfully, the runtime must
return XR_SUCCESS and the asynchronous discovery operation will queue
at least one event.
If Spatial Entities have been discovered and are ready for retrieval, the runtime must queue an XrEventDataSpaceDiscoveryResultsAvailableMETA event. The runtime may queue 0, 1, or more XrEventDataSpaceDiscoveryResultsAvailableMETA events depending on the Spatial Entities found.
If and only if the runtime returns XR_SUCCESS, the runtime must queue
a single XrEventDataSpaceDiscoveryCompleteMETA event identified with a
XrEventDataSpaceDiscoveryCompleteMETA::requestId matching the
requestId value output by this function, referred to as the
"corresponding completion event." The
XrEventDataSpaceDiscoveryCompleteMETA event is queued after all
XrEventDataSpaceDiscoveryResultsAvailableMETA events for this
operation have been queued.
Completion results are conveyed in the event XrEventDataSpaceDiscoveryCompleteMETA, while availability of output for xrRetrieveSpaceDiscoveryResultsMETA is signaled by either this completion event or the event XrEventDataSpaceDiscoveryResultsAvailableMETA.
If the asynchronous operation is successful, in the corresponding completion
event, the runtime must set the
XrEventDataSpaceDiscoveryCompleteMETA::result field to
XR_SUCCESS.
If the asynchronous operation is scheduled but not successful, in the
corresponding completion event, the runtime must set the
XrEventDataSpaceDiscoveryCompleteMETA::result field to an
appropriate error code instead of XR_SUCCESS.
The XrSpaceDiscoveryInfoMETA structure is defined as:
// Provided by XR_META_spatial_entity_discovery
typedef struct XrSpaceDiscoveryInfoMETA {
XrStructureType type;
const void* next;
uint32_t filterCount;
const XrSpaceFilterBaseHeaderMETA* const * filters;
} XrSpaceDiscoveryInfoMETA;
The XrSpaceDiscoveryInfoMETA structure contains information used to discover space(s).
The XrSpaceFilterBaseHeaderMETA structure is defined as:
// Provided by XR_META_spatial_entity_discovery
typedef struct XrSpaceFilterBaseHeaderMETA {
XrStructureType type;
const void* next;
} XrSpaceFilterBaseHeaderMETA;
The XrSpaceFilterBaseHeaderMETA structure is meant to be used as the base header for filter types defined in this extension.
Two such filter types are defined in this extension: XrSpaceFilterUuidMETA and XrSpaceFilterComponentMETA.
The XrSpaceFilterUuidMETA structure is defined as:
// Provided by XR_META_spatial_entity_discovery
typedef struct XrSpaceFilterUuidMETA {
XrStructureType type;
const void* next;
uint32_t uuidCount;
const XrUuidEXT* uuids;
} XrSpaceFilterUuidMETA;
The XrSpaceFilterUuidMETA structure contains information used to discover space(s) by ID.
The XrSpaceFilterComponentMETA structure is defined as:
// Provided by XR_META_spatial_entity_discovery
typedef struct XrSpaceFilterComponentMETA {
XrStructureType type;
const void* next;
XrSpaceComponentTypeFB componentType;
} XrSpaceFilterComponentMETA;
The XrSpaceFilterComponentMETA structure contains information used to discover Spatial Entities by components enabled.
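To illustrate how a filter plugs into the discovery info, the sketch below assembles an XrSpaceDiscoveryInfoMETA carrying a single component filter. The types are minimal stand-ins for the real openxr.h definitions and the enum values are placeholders; the point is only the wiring of filters as an array of base-header pointers.

```c
/* Sketch only: stand-in types with placeholder enum values. */
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

typedef int XrStructureType;
typedef int XrSpaceComponentTypeFB;
enum { XR_TYPE_SPACE_DISCOVERY_INFO_META = 1,
       XR_TYPE_SPACE_FILTER_COMPONENT_META = 2,
       XR_SPACE_COMPONENT_TYPE_STORABLE_FB = 3 };

typedef struct XrSpaceFilterBaseHeaderMETA {
    XrStructureType type;
    const void*     next;
} XrSpaceFilterBaseHeaderMETA;

typedef struct XrSpaceFilterComponentMETA {
    XrStructureType        type;
    const void*            next;
    XrSpaceComponentTypeFB componentType;
} XrSpaceFilterComponentMETA;

typedef struct XrSpaceDiscoveryInfoMETA {
    XrStructureType type;
    const void*     next;
    uint32_t        filterCount;
    const XrSpaceFilterBaseHeaderMETA* const* filters;
} XrSpaceDiscoveryInfoMETA;

/* Assemble a discovery info that asks only for storable entities. */
static uint32_t buildStorableDiscoveryInfo(
        XrSpaceDiscoveryInfoMETA* out,
        XrSpaceFilterComponentMETA* filter,
        const XrSpaceFilterBaseHeaderMETA** filterList) {
    filter->type = XR_TYPE_SPACE_FILTER_COMPONENT_META;
    filter->next = NULL;
    filter->componentType = XR_SPACE_COMPONENT_TYPE_STORABLE_FB;
    /* Each filter is passed through its base header pointer. */
    filterList[0] = (const XrSpaceFilterBaseHeaderMETA*)filter;
    out->type = XR_TYPE_SPACE_DISCOVERY_INFO_META;
    out->next = NULL;
    out->filterCount = 1;
    out->filters = (const XrSpaceFilterBaseHeaderMETA* const*)filterList;
    return out->filterCount;
}
```

A UUID filter (XrSpaceFilterUuidMETA) would be added to the same filters array in exactly the same way.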
The XrEventDataSpaceDiscoveryResultsAvailableMETA structure is defined as:
// Provided by XR_META_spatial_entity_discovery
typedef struct XrEventDataSpaceDiscoveryResultsAvailableMETA {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB requestId;
} XrEventDataSpaceDiscoveryResultsAvailableMETA;
The XrEventDataSpaceDiscoveryResultsAvailableMETA event indicates that there are results ready to be retrieved. Any number of XrEventDataSpaceDiscoveryResultsAvailableMETA events may be queued before the XrEventDataSpaceDiscoveryCompleteMETA event. Once the XrEventDataSpaceDiscoveryCompleteMETA event has been queued, the runtime must not queue any further XrEventDataSpaceDiscoveryResultsAvailableMETA events for this operation.
The XrEventDataSpaceDiscoveryCompleteMETA structure is defined as:
// Provided by XR_META_spatial_entity_discovery
typedef struct XrEventDataSpaceDiscoveryCompleteMETA {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB requestId;
XrResult result;
} XrEventDataSpaceDiscoveryCompleteMETA;
The XrEventDataSpaceDiscoveryCompleteMETA event indicates that the discovery operation has completed and that no further XrEventDataSpaceDiscoveryResultsAvailableMETA events will be queued for it.
Potential XrEventDataSpaceDiscoveryCompleteMETA::result values
include the following XrResult enumerants:
- XR_SUCCESS
- XR_ERROR_RUNTIME_FAILURE
- XR_ERROR_SPACE_INSUFFICIENT_RESOURCES_META
- XR_ERROR_SPACE_INSUFFICIENT_VIEW_META
- XR_ERROR_SPACE_PERMISSION_INSUFFICIENT_META
- XR_ERROR_SPACE_RATE_LIMITED_META
- XR_ERROR_SPACE_TOO_DARK_META
- XR_ERROR_SPACE_TOO_BRIGHT_META
12.139.4. Retrieve Discovered Spaces
The xrRetrieveSpaceDiscoveryResultsMETA function is defined as:
// Provided by XR_META_spatial_entity_discovery
XrResult xrRetrieveSpaceDiscoveryResultsMETA(
XrSession session,
XrAsyncRequestIdFB requestId,
XrSpaceDiscoveryResultsMETA* results);
The xrRetrieveSpaceDiscoveryResultsMETA function is synchronous and
follows the two-call idiom, where the first call is made to determine the
number of results, which is populated in
XrSpaceDiscoveryResultsMETA::resultCountOutput.
The application uses this value to initialize the
XrSpaceDiscoveryResultsMETA::resultCapacityInput and
XrSpaceDiscoveryResultsMETA::results fields.
The second call to xrRetrieveSpaceDiscoveryResultsMETA must then
populate the results field with whatever results are available.
See Buffer Size Parameters for a detailed description of
retrieving the required results size.
Note that after any results have been retrieved, those specific results will
be unavailable for retrieval again.
Application developers can choose to retrieve discovered Spatial Entities
either after receiving an
XrEventDataSpaceDiscoveryResultsAvailableMETA event or after receiving
an XrEventDataSpaceDiscoveryCompleteMETA event.
If application developers choose to retrieve after
XrEventDataSpaceDiscoveryResultsAvailableMETA events (before the
XrEventDataSpaceDiscoveryCompleteMETA event), more results may be
discovered between the first and the second call to
xrRetrieveSpaceDiscoveryResultsMETA.
If this occurs, and if the application chose the result array capacity to
match the XrSpaceDiscoveryResultsMETA::resultCountOutput, that
capacity is no longer sufficient to receive all available results, so the
second call will fail due to insufficient size.
This will not lose results, and the application can begin the two-call
process for xrRetrieveSpaceDiscoveryResultsMETA again.
xrPollEvent and xrRetrieveSpaceDiscoveryResultsMETA may be called simultaneously (without external synchronization).
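The size-then-fill flow with retry described above can be sketched as follows. stubRetrieveResults is a hypothetical stand-in for xrRetrieveSpaceDiscoveryResultsMETA that reports three available results; the application loop sizes the buffer, retries on XR_ERROR_SIZE_INSUFFICIENT, and stops once the filled call succeeds. Types and result values are placeholders, not the real openxr.h definitions.

```c
/* Sketch only: two-call idiom with retry, against a stub "runtime". */
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>
#include <assert.h>

typedef int XrResult;
enum { XR_SUCCESS = 0, XR_ERROR_SIZE_INSUFFICIENT = -11 };

typedef struct { int space; int uuid; } XrSpaceDiscoveryResultMETA;
typedef struct {
    uint32_t resultCapacityInput;
    uint32_t resultCountOutput;
    XrSpaceDiscoveryResultMETA* results;
} XrSpaceDiscoveryResultsMETA;

static uint32_t g_available = 3;  /* results currently held by the "runtime" */

/* Stub runtime behavior: a sizing call (capacity 0) reports the count,
 * a filled call copies results or reports insufficient capacity. */
static XrResult stubRetrieveResults(XrSpaceDiscoveryResultsMETA* r) {
    r->resultCountOutput = g_available;
    if (r->resultCapacityInput == 0) return XR_SUCCESS;        /* sizing call */
    if (r->resultCapacityInput < g_available) return XR_ERROR_SIZE_INSUFFICIENT;
    for (uint32_t i = 0; i < g_available; ++i)
        r->results[i].space = (int)i + 1;
    return XR_SUCCESS;
}

/* Application side: size, allocate, fill; restart if more results arrived
 * between the two calls. Allocation error handling elided for brevity. */
static uint32_t retrieveAllResults(void) {
    XrSpaceDiscoveryResultsMETA r = { 0, 0, NULL };
    for (;;) {
        r.resultCapacityInput = 0;
        stubRetrieveResults(&r);               /* first call: get count */
        free(r.results);
        r.results = malloc(r.resultCountOutput * sizeof *r.results);
        r.resultCapacityInput = r.resultCountOutput;
        if (stubRetrieveResults(&r) == XR_SUCCESS)   /* second call: fill */
            break;
        /* XR_ERROR_SIZE_INSUFFICIENT: more results appeared; start over. */
    }
    uint32_t n = r.resultCountOutput;
    free(r.results);
    return n;
}
```

Retrieved results are then consumed from the results array; as noted above, once retrieved they are not available for retrieval again.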
The XrSpaceDiscoveryResultsMETA structure is defined as:
// Provided by XR_META_spatial_entity_discovery
typedef struct XrSpaceDiscoveryResultsMETA {
XrStructureType type;
const void* next;
uint32_t resultCapacityInput;
uint32_t resultCountOutput;
XrSpaceDiscoveryResultMETA* results;
} XrSpaceDiscoveryResultsMETA;
The XrSpaceDiscoveryResultsMETA structure is used to retrieve all results from a Discovery operation.
The XrSpaceDiscoveryResultMETA structure is defined as:
// Provided by XR_META_spatial_entity_discovery
typedef struct XrSpaceDiscoveryResultMETA {
XrSpace space;
XrUuidEXT uuid;
} XrSpaceDiscoveryResultMETA;
The XrSpaceDiscoveryResultMETA structure contains a single Space result retrieved after Discovery returns results.
12.139.7. New Enum Constants
- XR_META_SPATIAL_ENTITY_DISCOVERY_EXTENSION_NAME
- XR_META_spatial_entity_discovery_SPEC_VERSION
Extending XrStructureType:
- XR_TYPE_EVENT_DATA_SPACE_DISCOVERY_COMPLETE_META
- XR_TYPE_EVENT_DATA_SPACE_DISCOVERY_RESULTS_AVAILABLE_META
- XR_TYPE_SPACE_DISCOVERY_INFO_META
- XR_TYPE_SPACE_DISCOVERY_RESULTS_META
- XR_TYPE_SPACE_DISCOVERY_RESULT_META
- XR_TYPE_SPACE_FILTER_COMPONENT_META
- XR_TYPE_SPACE_FILTER_UUID_META
- XR_TYPE_SYSTEM_SPACE_DISCOVERY_PROPERTIES_META
12.140. XR_META_spatial_entity_group_sharing
- Name String
  XR_META_spatial_entity_group_sharing
- Extension Type
  Instance extension
- Registered Extension Number
  573
- Revision
  1
- Ratification Status
  Not ratified
- Extension and Version Dependencies
- API Interactions
  - Interacts with XR_FB_spatial_entity_query
  - Interacts with XR_META_spatial_entity_sharing
- Last Modified Date
  2024-06-14
- IP Status
  No known IP claims.
- Contributors
  TJ Gilbrough, Meta Platforms
  Jiawen Zhang, Meta Platforms
  Scott Dewald, Meta Platforms
  Ribel Fares, Meta Platforms
12.140.1. Overview
The XR_META_spatial_entity_group_sharing extension enables
applications to share spatial entities to an application-specified group
UUID.
An application can share spatial entities with one or more group UUIDs, and
query for spatial entities previously shared with a group UUID.
A group UUID is any application-provided UUID.
The group is established for the application simply by sharing spatial
entities to it.
12.140.2. Check compatibility
The XrSystemSpatialEntityGroupSharingPropertiesMETA structure is defined as:
// Provided by XR_META_spatial_entity_group_sharing
typedef struct XrSystemSpatialEntityGroupSharingPropertiesMETA {
XrStructureType type;
void* next;
XrBool32 supportsSpatialEntityGroupSharing;
} XrSystemSpatialEntityGroupSharingPropertiesMETA;
An application can inspect whether the system is capable of group-based sharing by extending XrSystemProperties with the XrSystemSpatialEntityGroupSharingPropertiesMETA structure when calling xrGetSystemProperties.
In order to use XrShareSpacesRecipientGroupsMETA with
xrShareSpacesMETA, the system must also support
XR_META_spatial_entity_sharing.
Please see XR_META_spatial_entity_sharing’s section for how to check
if XR_META_spatial_entity_sharing is supported.
In order to use XrSpaceGroupUuidFilterInfoMETA with
xrQuerySpacesFB, the system must also support
XR_FB_spatial_entity_query.
Please see XR_FB_spatial_entity_query’s section for how to check if
XR_FB_spatial_entity_query is supported.
12.140.3. Sharing Spaces to a Group UUID
An application can share spatial entities with any application-provided UUID.
Once spatial entities are shared with this group UUID, the application can query for spatial entities previously shared with this group UUID.
Applications can share multiple spatial entities to the same group UUID.
Spatial entities remain shared with the group UUID for 30 days after the last successful share.
Any logged-in user using the same application may query for spatial entities shared with the Group UUID.
The XrShareSpacesRecipientGroupsMETA structure is defined as:
// Provided by XR_META_spatial_entity_group_sharing with XR_META_spatial_entity_sharing
typedef struct XrShareSpacesRecipientGroupsMETA {
XrStructureType type;
const void* next;
uint32_t groupCount;
XrUuid* groups;
} XrShareSpacesRecipientGroupsMETA;
XrShareSpacesRecipientGroupsMETA implements the XrShareSpacesRecipientBaseHeaderMETA base type. Where xrShareSpacesMETA specifies that a valid structure based on XrShareSpacesRecipientBaseHeaderMETA is to be passed, a valid XrShareSpacesRecipientGroupsMETA structure may be passed.
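The base-header polymorphism works as in this sketch: the generic sharing path sees only an XrShareSpacesRecipientBaseHeaderMETA pointer and dispatches on type. The definitions below are minimal stand-ins with placeholder enum values, not the real openxr.h declarations.

```c
/* Sketch only: stand-in types showing base-header dispatch for recipients. */
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

typedef int XrStructureType;
enum { XR_TYPE_SHARE_SPACES_RECIPIENT_GROUPS_META = 1 };

typedef struct { uint8_t data[16]; } XrUuid;

typedef struct XrShareSpacesRecipientBaseHeaderMETA {
    XrStructureType type;
    const void*     next;
} XrShareSpacesRecipientBaseHeaderMETA;

typedef struct XrShareSpacesRecipientGroupsMETA {
    XrStructureType type;
    const void*     next;
    uint32_t        groupCount;
    XrUuid*         groups;
} XrShareSpacesRecipientGroupsMETA;

/* A consumer of the generic sharing path sees only the base header and
 * dispatches on type to recover the concrete recipient structure. */
static uint32_t countGroupRecipients(
        const XrShareSpacesRecipientBaseHeaderMETA* recipient) {
    if (recipient->type == XR_TYPE_SHARE_SPACES_RECIPIENT_GROUPS_META)
        return ((const XrShareSpacesRecipientGroupsMETA*)recipient)->groupCount;
    return 0;
}
```

An application fills an XrShareSpacesRecipientGroupsMETA with its group UUIDs and passes its address wherever an XrShareSpacesRecipientBaseHeaderMETA pointer is expected, e.g. XrShareSpacesInfoMETA::recipientInfo.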
12.140.4. Query Spaces Shared with a Group UUID
The XrSpaceGroupUuidFilterInfoMETA structure is defined as:
// Provided by XR_FB_spatial_entity_query with XR_META_spatial_entity_group_sharing
typedef struct XrSpaceGroupUuidFilterInfoMETA {
XrStructureType type;
const void* next;
XrUuid groupUuid;
} XrSpaceGroupUuidFilterInfoMETA;
This structure is a space query filter for use with query functions
introduced in the XR_FB_spatial_entity_query extension.
To query spaces shared with a group, the application can include the XrSpaceGroupUuidFilterInfoMETA filter in the query filters when calling xrQuerySpacesFB.
If XrSpaceGroupUuidFilterInfoMETA is passed into xrQuerySpacesFB
and the group UUID is not found by the runtime, the runtime must return
XR_ERROR_SPACE_GROUP_NOT_FOUND_META as the
XrEventDataSpaceQueryCompleteFB::result.
12.140.5. New Structures
If XR_FB_spatial_entity_query is supported:
If XR_META_spatial_entity_sharing is supported:
12.140.6. New Enum Constants
- XR_META_SPATIAL_ENTITY_GROUP_SHARING_EXTENSION_NAME
- XR_META_spatial_entity_group_sharing_SPEC_VERSION
Extending XrResult:
- XR_ERROR_SPACE_GROUP_NOT_FOUND_META
Extending XrStructureType:
- XR_TYPE_SHARE_SPACES_RECIPIENT_GROUPS_META
- XR_TYPE_SPACE_GROUP_UUID_FILTER_INFO_META
- XR_TYPE_SYSTEM_SPATIAL_ENTITY_GROUP_SHARING_PROPERTIES_META
Version History
- Revision 1, 2024-06-14 (TJ Gilbrough)
  Initial extension description
12.141. XR_META_spatial_entity_mesh
- Name String
  XR_META_spatial_entity_mesh
- Extension Type
  Instance extension
- Registered Extension Number
  270
- Revision
  1
- Ratification Status
  Not ratified
- Extension and Version Dependencies
- Last Modified Date
  2023-06-12
- IP Status
  No known IP claims.
- Contributors
  Yuichi Taguchi, Meta Platforms
  Anton Vaneev, Meta Platforms
  Andreas Loeve Selvik, Meta Platforms
  John Kearney, Meta Platforms
12.141.1. Overview
This extension expands on the concept of spatial entities to include a way
for a spatial entity to represent a triangle mesh that describes the 3D
geometry of the spatial entity in a scene.
Spatial entities are defined in the XR_FB_spatial_entity extension using
the Entity-Component System.
The triangle mesh is a component type that may be associated with a spatial
entity.
In order to enable the functionality of this extension, you must pass the
name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
12.141.2. Retrieving a triangle mesh
The xrGetSpaceTriangleMeshMETA function is defined as:
// Provided by XR_META_spatial_entity_mesh
XrResult xrGetSpaceTriangleMeshMETA(
XrSpace space,
const XrSpaceTriangleMeshGetInfoMETA* getInfo,
XrSpaceTriangleMeshMETA* triangleMeshOutput);
The xrGetSpaceTriangleMeshMETA function is used by the application to
perform the two calls required to obtain a triangle mesh associated with the
spatial entity specified by space.
The spatial entity space must have the
XR_SPACE_COMPONENT_TYPE_TRIANGLE_MESH_META component type enabled,
otherwise this function will return
XR_ERROR_SPACE_COMPONENT_NOT_ENABLED_FB.
The XrSpaceTriangleMeshGetInfoMETA structure is defined as:
// Provided by XR_META_spatial_entity_mesh
typedef struct XrSpaceTriangleMeshGetInfoMETA {
XrStructureType type;
const void* next;
} XrSpaceTriangleMeshGetInfoMETA;
The XrSpaceTriangleMeshMETA structure is defined as:
// Provided by XR_META_spatial_entity_mesh
typedef struct XrSpaceTriangleMeshMETA {
XrStructureType type;
void* next;
uint32_t vertexCapacityInput;
uint32_t vertexCountOutput;
XrVector3f* vertices;
uint32_t indexCapacityInput;
uint32_t indexCountOutput;
uint32_t* indices;
} XrSpaceTriangleMeshMETA;
The XrSpaceTriangleMeshMETA structure can be used by the application to perform the two calls required to obtain a triangle mesh associated with a specified spatial entity.
The output values written in the indices array represent indices of
vertices: three consecutive elements represent a triangle with a
counter-clockwise winding order.
New Object Types
New Atom
New Flag Types
New Enum Constants
XrSpaceComponentTypeFB enumeration is extended with:
- XR_SPACE_COMPONENT_TYPE_TRIANGLE_MESH_META
XrStructureType enumeration is extended with:
- XR_TYPE_SPACE_TRIANGLE_MESH_GET_INFO_META
- XR_TYPE_SPACE_TRIANGLE_MESH_META
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2023-06-12 (Yuichi Taguchi)
  Initial extension description.
12.142. XR_META_spatial_entity_persistence
- Name String
  XR_META_spatial_entity_persistence
- Extension Type
  Instance extension
- Registered Extension Number
  260
- Revision
  1
- Ratification Status
  Not ratified
- Extension and Version Dependencies
- Last Modified Date
  2024-11-09
- IP Status
  No known IP claims.
- Contributors
  Natalie Fleury, Meta Platforms
  Abhishek Shrivastava, Meta Platforms
12.142.1. Overview
XR_META_spatial_entity_persistence enables saving and erasing Spatial
Entities, allowing them to be retrieved and tracked across different
sessions and over time.
Technical overview
XR_META_spatial_entity_persistence is the next generation of Meta
spatial entity storage management, following the previous generation offered
through XR_FB_spatial_entity_storage and
XR_FB_spatial_entity_storage_batch which are now obsolete.
If the XR_SPACE_COMPONENT_TYPE_STORABLE_FB component is enabled on a
space, as defined in XR_FB_spatial_entity, application developers
may save and erase app-created XrSpace handles corresponding to Meta
spatial entities.
XR_META_spatial_entity_persistence is expected to be used alongside
XR_META_spatial_entity_discovery for spatial entity discovery/loading
and retrieval.
In order to enable the functionality of this extension, you must pass the
name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
12.142.2. Inspect System Capability
The XrSystemSpacePersistencePropertiesMETA structure is defined as:
// Provided by XR_META_spatial_entity_persistence
typedef struct XrSystemSpacePersistencePropertiesMETA {
XrStructureType type;
void* next;
XrBool32 supportsSpacePersistence;
} XrSystemSpacePersistencePropertiesMETA;
An application can inspect whether the system is capable of supporting space persistence by extending XrSystemProperties with the XrSystemSpacePersistencePropertiesMETA structure when calling xrGetSystemProperties.
If and only if a runtime returns XR_FALSE for
supportsSpacePersistence, the runtime must return
XR_ERROR_FEATURE_UNSUPPORTED from xrSaveSpacesMETA and
xrEraseSpacesMETA.
12.142.3. Save Spaces
The xrSaveSpacesMETA function is defined as:
// Provided by XR_META_spatial_entity_persistence
XrResult xrSaveSpacesMETA(
XrSession session,
const XrSpacesSaveInfoMETA* info,
XrAsyncRequestIdFB* requestId);
The xrSaveSpacesMETA function persists the space(s) provided. The scope of the save operation is same-user, same-device, same-app. This is an asynchronous operation. Completion results are conveyed in the event XrEventDataSpacesSaveResultMETA.
The runtime must return XR_ERROR_HANDLE_INVALID from
xrSaveSpacesMETA if any of the
XrSpacesSaveInfoMETA::spaces are XR_NULL_HANDLE or
otherwise invalid (e.g. not a Meta Spatial Entity XrSpace).
Note that saving an entity which has already been saved previously is valid
and a no-op.
The runtime must return XR_ERROR_VALIDATION_FAILURE from
xrSaveSpacesMETA if either of the following is true:
- XrSpacesSaveInfoMETA::spaces is NULL.
- XrSpacesSaveInfoMETA::spaceCount is 0.
The XrSpacesSaveInfoMETA::spaces array must have a length of at
least XrSpacesSaveInfoMETA::spaceCount.
When initiated unsuccessfully (i.e. the immediate return value of the xrSaveSpacesMETA call is an error), the save operation terminates without saving any entities, and no save result event is queued.
When initiated successfully (i.e. the immediate return value of the xrSaveSpacesMETA call is not an error), the full save operation is asynchronous. The runtime must queue an XrEventDataSpacesSaveResultMETA event when the save operation completes, either successfully, qualified success (warning), or with an error. See the XrEventDataSpacesSaveResultMETA for which XrResult enumerants may be returned in the event.
Note that if the XrEventDataSpacesSaveResultMETA::result is an
error, it is possible that any subset of the Spatial Entity Spaces were
saved.
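The validation rules above can be condensed into a small sketch. validateSaveInfo is a hypothetical helper mirroring only the XR_ERROR_VALIDATION_FAILURE conditions (NULL spaces array or zero spaceCount), not the full behavior of xrSaveSpacesMETA; the types and result values are minimal stand-ins.

```c
/* Sketch only: stand-in types; reproduces just the quoted validation rules. */
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

typedef int XrResult;
enum { XR_SUCCESS = 0, XR_ERROR_VALIDATION_FAILURE = -1 };
typedef struct XrSpace_T* XrSpace;   /* opaque handle, as in OpenXR */

typedef struct XrSpacesSaveInfoMETA {
    uint32_t spaceCount;
    XrSpace* spaces;
} XrSpacesSaveInfoMETA;

/* Hypothetical mirror of the runtime's input validation for xrSaveSpacesMETA:
 * a NULL spaces array or a zero count must be rejected. */
static XrResult validateSaveInfo(const XrSpacesSaveInfoMETA* info) {
    if (info->spaces == NULL || info->spaceCount == 0)
        return XR_ERROR_VALIDATION_FAILURE;
    return XR_SUCCESS;
}
```

An application that builds its save info this way can rely on the immediate return value to decide whether a completion event will ever arrive, per the initiation rules above.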
The XrSpacesSaveInfoMETA structure is defined as:
// Provided by XR_META_spatial_entity_persistence
typedef struct XrSpacesSaveInfoMETA {
XrStructureType type;
const void* next;
uint32_t spaceCount;
XrSpace* spaces;
} XrSpacesSaveInfoMETA;
The XrSpacesSaveInfoMETA structure contains information used to save one or more spaces with xrSaveSpacesMETA.
The XrEventDataSpacesSaveResultMETA structure is defined as:
// Provided by XR_META_spatial_entity_persistence
typedef struct XrEventDataSpacesSaveResultMETA {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB requestId;
XrResult result;
} XrEventDataSpacesSaveResultMETA;
This event conveys the results of the asynchronous operation started by xrSaveSpacesMETA.
The XrEventDataSpacesSaveResultMETA event contains the result of the
save/write operation, as well as the XrAsyncRequestIdFB of the
operation.
Potential XrEventDataSpacesSaveResultMETA::result values include
the following XrResult enumerants:
12.142.4. Erase Spaces
The xrEraseSpacesMETA function is defined as:
// Provided by XR_META_spatial_entity_persistence
XrResult xrEraseSpacesMETA(
XrSession session,
const XrSpacesEraseInfoMETA* info,
XrAsyncRequestIdFB* requestId);
The xrEraseSpacesMETA function erases space(s) from storage. The scope of the erase operation is same-user, same-device, same-app. After a successful erase operation, the XrSpace remains valid in the current session until the application destroys the space handle or its parent, the session handle. That is, erasing does not remove spaces from tracking, but after a successful erase the spaces must be ephemeral again (undiscoverable across sessions).
This is an asynchronous operation. Completion results are conveyed in the event XrEventDataSpacesEraseResultMETA.
The runtime must return XR_ERROR_HANDLE_INVALID from
xrEraseSpacesMETA if any of the
XrSpacesEraseInfoMETA::spaces are XR_NULL_HANDLE or
otherwise invalid (e.g. not a Meta Spatial Entity XrSpace).
Note that it is valid for a Meta Spatial Entity space that has not
previously been saved to be included in an erase operation; that portion of
the operation is a no-op.
At least one of XrSpacesEraseInfoMETA::uuids or
XrSpacesEraseInfoMETA::spaces must be populated.
The runtime must return XR_ERROR_VALIDATION_FAILURE from
xrEraseSpacesMETA if either of the following is true:
- XrSpacesEraseInfoMETA::spaces is NULL and XrSpacesEraseInfoMETA::uuids is NULL.
- XrSpacesEraseInfoMETA::spaceCount is 0 and XrSpacesEraseInfoMETA::uuidCount is 0.
The lengths of the arrays must equal the corresponding counts (e.g.
spaceCount must equal the length of the spaces array).
When initiated unsuccessfully (i.e. the immediate return value of the xrEraseSpacesMETA call is an error), the erase operation terminates without erasing any Spatial Entities, and no erase result event is queued.
When initiated successfully (i.e. the immediate return value of the xrEraseSpacesMETA call is not an error), the full erase operation is asynchronous. The runtime must queue an XrEventDataSpacesEraseResultMETA event when the erase operation completes, either successfully, qualified success (warning), or with an error. See the XrEventDataSpacesEraseResultMETA for which XrResult enumerants may be returned in the event.
Note that if the XrEventDataSpacesEraseResultMETA::result is an
error, it is possible that any subset of the Spatial Entity Spaces were
erased.
The XrSpacesEraseInfoMETA structure is defined as:
// Provided by XR_META_spatial_entity_persistence
typedef struct XrSpacesEraseInfoMETA {
XrStructureType type;
const void* next;
uint32_t spaceCount;
XrSpace* spaces;
uint32_t uuidCount;
XrUuidEXT* uuids;
} XrSpacesEraseInfoMETA;
The XrSpacesEraseInfoMETA structure contains information used to erase
one or more spaces with xrEraseSpacesMETA.
Both the spaces and uuids arrays are optional, but at least one
of them must contain at least one element.
See xrEraseSpacesMETA for required validation behavior by the runtime.
The XrEventDataSpacesEraseResultMETA structure is defined as:
// Provided by XR_META_spatial_entity_persistence
typedef struct XrEventDataSpacesEraseResultMETA {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB requestId;
XrResult result;
} XrEventDataSpacesEraseResultMETA;
The XrEventDataSpacesEraseResultMETA event contains the result of the
erase/write operation, as well as the XrAsyncRequestIdFB of the
operation.
Potential XrEventDataSpacesEraseResultMETA::result values
include the following XrResult enumerants:
12.142.5. New Enum Constants
- XR_META_SPATIAL_ENTITY_PERSISTENCE_EXTENSION_NAME
- XR_META_spatial_entity_persistence_SPEC_VERSION
Extending XrResult:
- XR_ERROR_SPACE_INSUFFICIENT_RESOURCES_META
- XR_ERROR_SPACE_INSUFFICIENT_VIEW_META
- XR_ERROR_SPACE_PERMISSION_INSUFFICIENT_META
- XR_ERROR_SPACE_RATE_LIMITED_META
- XR_ERROR_SPACE_STORAGE_AT_CAPACITY_META
- XR_ERROR_SPACE_TOO_BRIGHT_META
- XR_ERROR_SPACE_TOO_DARK_META
Extending XrStructureType:
- XR_TYPE_EVENT_DATA_SPACES_ERASE_RESULT_META
- XR_TYPE_EVENT_DATA_SPACES_SAVE_RESULT_META
- XR_TYPE_SPACES_ERASE_INFO_META
- XR_TYPE_SPACES_SAVE_INFO_META
- XR_TYPE_SYSTEM_SPACE_PERSISTENCE_PROPERTIES_META
12.143. XR_META_spatial_entity_sharing
- Name String
  XR_META_spatial_entity_sharing
- Extension Type
  Instance extension
- Registered Extension Number
  291
- Revision
  1
- Ratification Status
  Not ratified
- Extension and Version Dependencies
- Last Modified Date
  2024-06-14
- IP Status
  No known IP claims.
- Contributors
  TJ Gilbrough, Meta Platforms
  Jiawen Zhang, Meta Platforms
  Scott Dewald, Meta Platforms
12.143.1. Overview
The XR_META_spatial_entity_sharing extension enables applications to
share spatial entities.
This base extension provides a generic space sharing endpoint.
This extension depends on other extensions (such as
XR_META_spatial_entity_group_sharing) to define concrete "recipient
info" structures, which are passed into the generic endpoint introduced in
this extension.
The scope/lifetime of the sharing action is dependent on the recipient type of the sharing. Therefore, the scope/lifetime of the sharing action is defined in the extensions which provide concrete "recipient info" structures.
XR_META_spatial_entity_sharing is a more generic and extendable
alternative to XR_FB_spatial_entity_sharing (which is tightly coupled
with XrSpaceUserFB).
12.143.2. Check compatibility
The XrSystemSpatialEntitySharingPropertiesMETA structure is defined as:
// Provided by XR_META_spatial_entity_sharing
typedef struct XrSystemSpatialEntitySharingPropertiesMETA {
XrStructureType type;
void* next;
XrBool32 supportsSpatialEntitySharing;
} XrSystemSpatialEntitySharingPropertiesMETA;
An application can inspect whether the system is capable of spatial entity sharing by extending XrSystemProperties with the XrSystemSpatialEntitySharingPropertiesMETA structure when calling xrGetSystemProperties.
If a runtime returns XR_FALSE for supportsSpatialEntitySharing,
the runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrShareSpacesMETA.
12.143.3. Sharing Spaces
The xrShareSpacesMETA function is defined as:
// Provided by XR_META_spatial_entity_sharing
XrResult xrShareSpacesMETA(
XrSession session,
const XrShareSpacesInfoMETA* info,
XrAsyncRequestIdFB* requestId);
The application may use the xrShareSpacesMETA function to share
spaces (XrSpace) if the XR_SPACE_COMPONENT_TYPE_SHARABLE_FB
component has been enabled on the space.
This is an asynchronous operation. Completion results are conveyed in the event XrEventDataShareSpacesCompleteMETA.
If the asynchronous operation is scheduled successfully, the runtime must
return XR_SUCCESS.
If and only if the runtime returns XR_SUCCESS, the runtime must queue
a single XrEventDataShareSpacesCompleteMETA event identified with a
requestId field matching the value output by this function, referred to as
the "corresponding completion event."
(If the runtime returns anything other than XR_SUCCESS, the runtime
must not queue any XrEventDataShareSpacesCompleteMETA events with
requestId field matching the requestId populated by this function.)
If the asynchronous operation is successful, the runtime must set the
XrEventDataShareSpacesCompleteMETA::result field to
XR_SUCCESS in the corresponding completion event.
If the asynchronous operation is scheduled but not successful, in the
corresponding completion event, the runtime must set the
XrEventDataShareSpacesCompleteMETA::result field to an
appropriate error code instead of XR_SUCCESS.
The XrShareSpacesInfoMETA structure is defined as:
// Provided by XR_META_spatial_entity_sharing
typedef struct XrShareSpacesInfoMETA {
XrStructureType type;
const void* next;
uint32_t spaceCount;
XrSpace* spaces;
const XrShareSpacesRecipientBaseHeaderMETA* recipientInfo;
} XrShareSpacesInfoMETA;
The XrShareSpacesRecipientBaseHeaderMETA structure is defined as:
// Provided by XR_META_spatial_entity_sharing
typedef struct XrShareSpacesRecipientBaseHeaderMETA {
XrStructureType type;
const void* next;
} XrShareSpacesRecipientBaseHeaderMETA;
XrShareSpacesRecipientBaseHeaderMETA is designed to be an abstract base struct which is to be extended by other structures.
Any valid structure that identifies XrShareSpacesRecipientBaseHeaderMETA as its parent structure may be provided anywhere a valid XrShareSpacesRecipientBaseHeaderMETA is specified to be passed.
The XrEventDataShareSpacesCompleteMETA event structure is defined as:
// Provided by XR_META_spatial_entity_sharing
typedef struct XrEventDataShareSpacesCompleteMETA {
XrStructureType type;
const void* next;
XrAsyncRequestIdFB requestId;
XrResult result;
} XrEventDataShareSpacesCompleteMETA;
This event conveys the results of the asynchronous operation started by xrShareSpacesMETA.
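Matching the completion event back to the request can be sketched as below: the application keeps the requestId returned by xrShareSpacesMETA and scans polled completion events for the one with the same id. Types and result values are minimal stand-ins for the real openxr.h definitions.

```c
/* Sketch only: stand-in types; result values are placeholders. */
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

typedef uint64_t XrAsyncRequestIdFB;
typedef int XrResult;
enum { XR_SUCCESS = 0, XR_ERROR_RUNTIME_FAILURE = -2,
       XR_EVENT_UNAVAILABLE = 1 };

typedef struct {
    XrAsyncRequestIdFB requestId;
    XrResult           result;
} XrEventDataShareSpacesCompleteMETA;

/* Given completion events gathered from event polling, find the result for
 * the request we initiated; until the corresponding completion event is
 * delivered, report that it is still pending. */
static XrResult resultForRequest(
        const XrEventDataShareSpacesCompleteMETA* events,
        size_t eventCount,
        XrAsyncRequestIdFB requestId) {
    for (size_t i = 0; i < eventCount; ++i)
        if (events[i].requestId == requestId)
            return events[i].result;
    return XR_EVENT_UNAVAILABLE;   /* not yet delivered; keep polling */
}
```

Because the runtime queues exactly one completion event per successful xrShareSpacesMETA call, this id match is sufficient to pair requests with outcomes even when several share operations are in flight.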
12.143.6. New Enum Constants
- XR_MAX_SPACES_PER_SHARE_REQUEST_META
- XR_META_SPATIAL_ENTITY_SHARING_EXTENSION_NAME
- XR_META_spatial_entity_sharing_SPEC_VERSION
Extending XrStructureType:
- XR_TYPE_EVENT_DATA_SHARE_SPACES_COMPLETE_META
- XR_TYPE_SHARE_SPACES_INFO_META
- XR_TYPE_SYSTEM_SPATIAL_ENTITY_SHARING_PROPERTIES_META
Version History
- Revision 1, 2024-06-14 (TJ Gilbrough)
  Initial extension description
12.144. XR_META_virtual_keyboard
- Name String
  XR_META_virtual_keyboard
- Extension Type
  Instance extension
- Registered Extension Number
  220
- Revision
  1
- Ratification Status
  Not ratified
- Extension and Version Dependencies
- Last Modified Date
  2023-04-14
- IP Status
  No known IP claims.
- Contributors
  Brent Housen, Meta Platforms
  Chiara Coetzee, Meta Platforms
  Juan Pablo León, Meta Platforms
  Peter Chan, Meta Platforms
- Contacts
  Brent Housen, Meta Platforms
  Peter Chan, Meta Platforms
12.144.1. Overview
The virtual keyboard extension provides a system-driven localized keyboard that the application has full control over in terms of positioning and rendering.
This is achieved by giving the application the data required to drive rendering and animation of the keyboard in response to interaction data passed from the application to the runtime.
This approach is an alternative to a potential system keyboard overlay solution and provides a keyboard that can seamlessly blend into the application environment, since it is rendered by the same system, and avoids input focus issues that might come with a system overlay.
The API is also designed to work with custom hand and/or controller models in various games and applications.
Virtual Keyboard Integration Summary
Before explaining the individual API functions, types, and events, here is an overview on how to integrate the virtual keyboard in an application.
Note that this is purely informational and does not serve as binding requirements for the runtime or the application.
-
Check if your device supports the virtual keyboard with xrGetSystemProperties.
-
Create a new keyboard with xrCreateVirtualKeyboardMETA.
-
Give it a location with xrCreateVirtualKeyboardSpaceMETA, and keep a reference to the returned XrSpace.
-
Load the virtual keyboard glTF model using
XR_FB_render_model:-
Query the render model key for path /model_meta/keyboard/virtual.
-
Using xrEnumerateRenderModelPathsFB and xrGetRenderModelPropertiesFB.
-
Make sure to set the support level to
XR_RENDER_MODEL_SUPPORTS_GLTF_2_0_SUBSET_2_BIT_FB.
-
-
Load the render model glTF data with the given key with xrLoadRenderModelFB.
-
Load the glTF data into an extendable glTF renderer (see
Extend glTF render model support). Note that this render model is hidden by default.
-
-
When the application wants to show the keyboard, call xrSetVirtualKeyboardModelVisibilityMETA to request the runtime to update the model visibility.
-
The application should wait for the XrEventDataVirtualKeyboardShownMETA event as confirmation that the runtime is ready to show the keyboard.
-
-
The application can move the keyboard by calling xrSuggestVirtualKeyboardLocationMETA to update the saved XrSpace.
-
Then for every active input type feed the keyboard input with xrSendVirtualKeyboardInputMETA:
-
For each hand/controller, use:
-
XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_*_RAY_* for far input
-
XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_*_DIRECT_* for direct/near input
-
If both near and far input types are sent, the runtime may decide which one is the most appropriate to use.
-
-
Pass in a value for the input device's interactor root as well, e.g. the wrist root for hands.
-
The runtime will modify the interactorRootPose to poke-limit direct interaction.
-
If poke limiting is desired, the application should reposition input render models with the modified root pose.
-
-
-
Then get the runtime keyboard pose and scale:
-
Using xrLocateSpace on the saved keyboardSpace.
-
Using xrGetVirtualKeyboardScaleMETA to get the scale.
-
-
Then check if the virtual keyboard glTF model has any textures that need to be updated with xrGetVirtualKeyboardDirtyTexturesMETA.
-
For every dirty texture, call xrGetVirtualKeyboardTextureDataMETA to get the RGBA texture data.
-
Then update the texture in the glTF model that matches the given texture ID.
-
-
Then apply any glTF model animations using xrGetVirtualKeyboardModelAnimationStatesMETA to get updated animation indices and fraction values for each animation.
-
XrEventDataVirtualKeyboardCommitTextMETA / XrEventDataVirtualKeyboardBackspaceMETA / XrEventDataVirtualKeyboardEnterMETA
-
Applications can pipe these events to a focused input field, or to whatever component they expect to handle the virtual keyboard’s input.
-
-
XrEventDataVirtualKeyboardShownMETA & XrEventDataVirtualKeyboardHiddenMETA
-
Signaled when the virtual keyboard render model animation system is hiding or showing the keyboard.
-
-
Destroy the keyboard with xrDestroyVirtualKeyboardMETA.
12.144.2. Extend glTF render model support
The virtual keyboard glTF model uses a custom texture URI for textures that the application needs to update dynamically. The application should implement a custom URI handler when loading the glTF model to check for these URIs and create writable textures identified by the corresponding texture ids.
The runtime must refer to these textures in the returned glTF model by URIs
in the following format:
metaVirtualKeyboard://texture/{textureID}?w={width}&h={height}&fmt=RGBA32
The application should retrieve new pixel data from the runtime with xrGetVirtualKeyboardDirtyTexturesMETA and xrGetVirtualKeyboardTextureDataMETA and apply them to the corresponding textures that are used to render the glTF model.
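As an illustration, the texture URI above can be parsed with straightforward string handling. The helper below is a hypothetical sketch (the type and function names are not part of the API), assuming the exact URI format shown:

```cpp
#include <cstdint>
#include <string>

// Hypothetical helper: parses a URI of the form
// metaVirtualKeyboard://texture/{textureID}?w={width}&h={height}&fmt=RGBA32
// Returns false if the URI does not match the expected scheme.
struct KeyboardTextureUri {
    uint64_t textureId;
    uint32_t width;
    uint32_t height;
};

inline bool ParseKeyboardTextureUri(const std::string& uri, KeyboardTextureUri& out) {
    const std::string prefix = "metaVirtualKeyboard://texture/";
    if (uri.compare(0, prefix.size(), prefix) != 0) {
        return false;  // not a virtual keyboard texture URI
    }
    const size_t query = uri.find('?', prefix.size());
    if (query == std::string::npos) {
        return false;  // missing query parameters
    }
    out.textureId = std::stoull(uri.substr(prefix.size(), query - prefix.size()));
    const size_t wPos = uri.find("w=", query);
    const size_t hPos = uri.find("h=", query);
    if (wPos == std::string::npos || hPos == std::string::npos) {
        return false;
    }
    // std::stoul stops at the first non-digit ('&'), so no trimming is needed.
    out.width = static_cast<uint32_t>(std::stoul(uri.substr(wPos + 2)));
    out.height = static_cast<uint32_t>(std::stoul(uri.substr(hPos + 2)));
    return true;
}
```

The application can use the parsed texture ID to register a writable texture with its renderer when the glTF loader encounters this scheme.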
Furthermore, the runtime may use additive morph target animations to control vertex coordinates and modify UVs. The application should check the "extras" property when loading a glTF animation channel for an integer field named "additiveWeightIndex". If present, this value indicates the morph target index to which the animation weight should be applied; a value of -1 means the weight applies to all morph targets.
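To illustrate the "additiveWeightIndex" handling, the following hypothetical sketch applies an animated weight to a node's morph target weights; the function name and the flat weight array are assumptions of this example, not part of the glTF or OpenXR API:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: apply an animated weight additively to a node's
// morph target weights. An additiveWeightIndex of -1 means the weight
// applies to all morph targets; otherwise it names a single target.
inline void ApplyAdditiveMorphWeight(std::vector<float>& morphWeights,
                                     int additiveWeightIndex,
                                     float animatedWeight) {
    if (additiveWeightIndex < 0) {
        for (float& w : morphWeights) {
            w += animatedWeight;  // add to every target
        }
    } else if (static_cast<size_t>(additiveWeightIndex) < morphWeights.size()) {
        morphWeights[additiveWeightIndex] += animatedWeight;  // single target
    }
}
```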
The application should check for any glTF animations to apply to the model each frame with xrGetVirtualKeyboardModelAnimationStatesMETA.
12.144.3. Collision Handling
Even though the runtime will handle any user interaction with the keyboard based on the input sent by the application, the application is responsible for managing how the keyboard should collide with other objects in the scene. To do this, the application can look for a node named "collision" in the loaded glTF model and use its mesh geometry and bounds to define colliders that can be used by the application’s choice of physics system.
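For example, an axis-aligned bounding box for a simple collider can be derived from the "collision" node's mesh vertices. This is a hedged sketch with an assumed vertex layout (a flat array of x, y, z triples), since mesh access depends on the application's glTF loader:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <vector>

// Hypothetical sketch: compute an axis-aligned bounding box from the
// vertex positions of the "collision" node's mesh.
struct Aabb {
    std::array<float, 3> min;
    std::array<float, 3> max;
};

inline Aabb ComputeCollisionAabb(const std::vector<float>& positions) {
    Aabb box{{0, 0, 0}, {0, 0, 0}};
    if (positions.size() < 3) {
        return box;  // no vertices; degenerate box at origin
    }
    box.min = {positions[0], positions[1], positions[2]};
    box.max = box.min;
    for (size_t i = 3; i + 2 < positions.size(); i += 3) {
        for (int axis = 0; axis < 3; ++axis) {
            box.min[axis] = std::min(box.min[axis], positions[i + axis]);
            box.max[axis] = std::max(box.max[axis], positions[i + axis]);
        }
    }
    return box;
}
```

The resulting box could seed a box collider in the application's physics system; a more faithful collider would use the mesh triangles directly.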
12.144.4. Check device compatibility
When the XR_META_virtual_keyboard extension is enabled, an
application can pass in an XrSystemVirtualKeyboardPropertiesMETA
structure in the XrSystemProperties::next chain when calling
xrGetSystemProperties to acquire information about the virtual
keyboard’s availability.
The XrSystemVirtualKeyboardPropertiesMETA structure is defined as:
// Provided by XR_META_virtual_keyboard
typedef struct XrSystemVirtualKeyboardPropertiesMETA {
XrStructureType type;
void* next;
XrBool32 supportsVirtualKeyboard;
} XrSystemVirtualKeyboardPropertiesMETA;
The struct is used for checking virtual keyboard support.
12.144.5. Create a virtual keyboard
An application can create a virtual keyboard by calling xrCreateVirtualKeyboardMETA.
The xrCreateVirtualKeyboardMETA function is defined as:
// Provided by XR_META_virtual_keyboard
XrResult xrCreateVirtualKeyboardMETA(
XrSession session,
const XrVirtualKeyboardCreateInfoMETA* createInfo,
XrVirtualKeyboardMETA* keyboard);
xrCreateVirtualKeyboardMETA creates an XrVirtualKeyboardMETA handle and establishes a keyboard within the runtime XrSession. The returned virtual keyboard handle may be subsequently used in API calls.
The XrVirtualKeyboardCreateInfoMETA structure is defined as:
// Provided by XR_META_virtual_keyboard
typedef struct XrVirtualKeyboardCreateInfoMETA {
XrStructureType type;
const void* next;
} XrVirtualKeyboardCreateInfoMETA;
The struct is used for keyboard creation. It is currently empty, with the intention of allowing future extension.
The runtime must return XR_ERROR_FEATURE_UNSUPPORTED if
XrSystemVirtualKeyboardPropertiesMETA::supportsVirtualKeyboard
is XR_FALSE when checking the device compatibility.
12.144.6. Destroy the virtual keyboard
An application can destroy a virtual keyboard by calling xrDestroyVirtualKeyboardMETA.
The xrDestroyVirtualKeyboardMETA function is defined as:
// Provided by XR_META_virtual_keyboard
XrResult xrDestroyVirtualKeyboardMETA(
XrVirtualKeyboardMETA keyboard);
12.144.7. Place the virtual keyboard
To place the keyboard, an application can create a virtual keyboard space by calling xrCreateVirtualKeyboardSpaceMETA.
The xrCreateVirtualKeyboardSpaceMETA function is defined as:
// Provided by XR_META_virtual_keyboard
XrResult xrCreateVirtualKeyboardSpaceMETA(
XrSession session,
XrVirtualKeyboardMETA keyboard,
const XrVirtualKeyboardSpaceCreateInfoMETA* createInfo,
XrSpace* keyboardSpace);
Creates an XrSpace handle and places the keyboard in this space. The returned space handle may be subsequently used in API calls.
Once placed, the application should query the keyboard’s location each frame using xrLocateSpace. It is important to do this every frame as the runtime is in control of the keyboard’s movement.
The runtime must return XR_ERROR_HANDLE_INVALID if session is
different from the session used to create keyboard.
The XrVirtualKeyboardSpaceCreateInfoMETA structure is defined as:
// Provided by XR_META_virtual_keyboard
typedef struct XrVirtualKeyboardSpaceCreateInfoMETA {
XrStructureType type;
const void* next;
XrVirtualKeyboardLocationTypeMETA locationType;
XrSpace space;
XrPosef poseInSpace;
} XrVirtualKeyboardSpaceCreateInfoMETA;
If locationType is set to
XR_VIRTUAL_KEYBOARD_LOCATION_TYPE_CUSTOM_META, the runtime must use
the value poseInSpace set by the application.
Otherwise, the runtime must provide a default pose and ignore
poseInSpace.
In all cases, the runtime must default the scale to 1.0.
12.144.8. Move and scale the virtual keyboard
After creating a keyboard and a space, an application can request to move its location or change its scale. The application can suggest a new location or scale by calling xrSuggestVirtualKeyboardLocationMETA.
The xrSuggestVirtualKeyboardLocationMETA function is defined as:
// Provided by XR_META_virtual_keyboard
XrResult xrSuggestVirtualKeyboardLocationMETA(
XrVirtualKeyboardMETA keyboard,
const XrVirtualKeyboardLocationInfoMETA* locationInfo);
The XrVirtualKeyboardLocationInfoMETA structure is defined as:
// Provided by XR_META_virtual_keyboard
typedef struct XrVirtualKeyboardLocationInfoMETA {
XrStructureType type;
const void* next;
XrVirtualKeyboardLocationTypeMETA locationType;
XrSpace space;
XrPosef poseInSpace;
float scale;
} XrVirtualKeyboardLocationInfoMETA;
If locationType is set to
XR_VIRTUAL_KEYBOARD_LOCATION_TYPE_CUSTOM_META, the runtime must use
the values poseInSpace and scale set by the application.
Otherwise, the runtime must provide a default pose and scale and ignore
poseInSpace and scale.
12.144.9. Get the virtual keyboard scale
Since xrLocateSpace only handles the pose, the application should also get the scale every frame by calling xrGetVirtualKeyboardScaleMETA.
The xrGetVirtualKeyboardScaleMETA function is defined as:
// Provided by XR_META_virtual_keyboard
XrResult xrGetVirtualKeyboardScaleMETA(
XrVirtualKeyboardMETA keyboard,
float* scale);
With both the pose and scale, the application has all the information to draw the virtual keyboard render model.
12.144.10. Show and hide the virtual keyboard
The runtime is in control of the keyboard’s visibility to decide when to process input and reset the keyboard states. By default the keyboard render model is hidden. An application can update the render model visibility by calling xrSetVirtualKeyboardModelVisibilityMETA.
The xrSetVirtualKeyboardModelVisibilityMETA function is defined as:
// Provided by XR_META_virtual_keyboard
XrResult xrSetVirtualKeyboardModelVisibilityMETA(
XrVirtualKeyboardMETA keyboard,
const XrVirtualKeyboardModelVisibilitySetInfoMETA* modelVisibility);
Note that the runtime has final control of the model visibility. The runtime may also change the visible state in certain situations. To get the actual visibility state of the render model, the application should wait for the XrEventDataVirtualKeyboardShownMETA and XrEventDataVirtualKeyboardHiddenMETA events.
The XrVirtualKeyboardModelVisibilitySetInfoMETA structure is defined as:
// Provided by XR_META_virtual_keyboard
typedef struct XrVirtualKeyboardModelVisibilitySetInfoMETA {
XrStructureType type;
const void* next;
XrBool32 visible;
} XrVirtualKeyboardModelVisibilitySetInfoMETA;
12.144.11. Update render model textures
Each frame, the application should check for any textures that have been updated by the runtime (e.g. when new swipe suggestion words are available). The application should first get the texture IDs that have updated contents (are "dirty") by calling xrGetVirtualKeyboardDirtyTexturesMETA. Then, for each texture ID received, the application should create an XrVirtualKeyboardTextureDataMETA structure and call xrGetVirtualKeyboardTextureDataMETA to get the pixel data, using the ID to update the corresponding texture created by the render system.
The xrGetVirtualKeyboardDirtyTexturesMETA function is defined as:
// Provided by XR_META_virtual_keyboard
XrResult xrGetVirtualKeyboardDirtyTexturesMETA(
XrVirtualKeyboardMETA keyboard,
uint32_t textureIdCapacityInput,
uint32_t* textureIdCountOutput,
uint64_t* textureIds);
This function follows the two-call
idiom for filling the textureIds array.
Note that new texture data may be added after the runtime processes inputs
from xrSendVirtualKeyboardInputMETA.
Therefore, after sending new keyboard inputs the application should query
the buffer size again before getting any texture data.
The xrGetVirtualKeyboardTextureDataMETA function is defined as:
// Provided by XR_META_virtual_keyboard
XrResult xrGetVirtualKeyboardTextureDataMETA(
XrVirtualKeyboardMETA keyboard,
uint64_t textureId,
XrVirtualKeyboardTextureDataMETA* textureData);
This function follows the two-call
idiom for filling the buffer array in the
XrVirtualKeyboardTextureDataMETA structure.
Note that new texture data may be added after the runtime processes inputs
from xrSendVirtualKeyboardInputMETA.
Therefore, after sending new keyboard inputs the application should query
the buffer size again before getting any texture data.
The XrVirtualKeyboardTextureDataMETA structure is defined as:
// Provided by XR_META_virtual_keyboard
typedef struct XrVirtualKeyboardTextureDataMETA {
XrStructureType type;
void* next;
uint32_t textureWidth;
uint32_t textureHeight;
uint32_t bufferCapacityInput;
uint32_t bufferCountOutput;
uint8_t* buffer;
} XrVirtualKeyboardTextureDataMETA;
12.144.12. Update render model animations
Besides checking for texture updates, each frame the application should also check for any animations to be applied to the render model. The runtime may use these animations to control the visibility of different keys, layout changes, and even modify key sizes and texture coordinates via morph targets. The application can get the animation states to be applied by calling xrGetVirtualKeyboardModelAnimationStatesMETA. This will return an array of XrVirtualKeyboardAnimationStateMETA which the application should apply to the render model, indexed by the glTF animation array index order.
The xrGetVirtualKeyboardModelAnimationStatesMETA function is defined as:
// Provided by XR_META_virtual_keyboard
XrResult xrGetVirtualKeyboardModelAnimationStatesMETA(
XrVirtualKeyboardMETA keyboard,
XrVirtualKeyboardModelAnimationStatesMETA* animationStates);
This function follows the two-call
idiom for filling the animationStates array in the
XrVirtualKeyboardModelAnimationStatesMETA structure.
Note that new animations may be added after the runtime processes inputs
from xrSendVirtualKeyboardInputMETA.
Therefore, after sending new keyboard inputs the application should query
the buffer size again before getting any animation data.
The XrVirtualKeyboardAnimationStateMETA structure is defined as:
// Provided by XR_META_virtual_keyboard
typedef struct XrVirtualKeyboardAnimationStateMETA {
XrStructureType type;
void* next;
int32_t animationIndex;
float fraction;
} XrVirtualKeyboardAnimationStateMETA;
The XrVirtualKeyboardModelAnimationStatesMETA structure is defined as:
// Provided by XR_META_virtual_keyboard
typedef struct XrVirtualKeyboardModelAnimationStatesMETA {
XrStructureType type;
void* next;
uint32_t stateCapacityInput;
uint32_t stateCountOutput;
XrVirtualKeyboardAnimationStateMETA* states;
} XrVirtualKeyboardModelAnimationStatesMETA;
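As an illustration of applying a returned fraction, an application might sample a glTF animation channel's keyframes at fraction × duration. The linear-interpolation sketch below uses assumed keyframe arrays rather than actual API types, and treating fraction as a normalized animation time is an interpretation of this example, not spec-mandated:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: linearly interpolate a scalar keyframe track at a
// normalized fraction in [0, 1]. Keyframe times are assumed non-empty,
// sorted, and ascending.
inline float SampleTrackAtFraction(const std::vector<float>& times,
                                   const std::vector<float>& values,
                                   float fraction) {
    const float t = times.back() * fraction;  // fraction of total duration
    if (t <= times.front()) return values.front();
    if (t >= times.back()) return values.back();
    size_t i = 1;
    while (times[i] < t) ++i;  // first keyframe at or after t
    const float span = times[i] - times[i - 1];
    const float alpha = (t - times[i - 1]) / span;
    return values[i - 1] + alpha * (values[i] - values[i - 1]);
}
```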
12.144.13. Send user input and text context
Since the application has control over how collision should be handled between the keyboard and other objects in the scene, it is up to the application to decide when to send input to the virtual keyboard. Per frame, for every input source the application wants applied to the keyboard, the application should create an XrVirtualKeyboardInputInfoMETA structure and call xrSendVirtualKeyboardInputMETA while also supplying the root pose of the interaction source.
The runtime may apply an offset to the given interactorRootPose if
the given input is puncturing the keyboard.
This gives the effect that the virtual object cannot push through the
keyboard and improves keyboard input perception.
This is sometimes referred to as poke limiting.
To aid features like auto-complete or whole-word deletion, before sending input the application should populate an XrVirtualKeyboardTextContextChangeInfoMETA structure and call xrChangeVirtualKeyboardTextContextMETA to supply the runtime with the application’s text context prior to the input cursor.
The xrSendVirtualKeyboardInputMETA function is defined as:
// Provided by XR_META_virtual_keyboard
XrResult xrSendVirtualKeyboardInputMETA(
XrVirtualKeyboardMETA keyboard,
const XrVirtualKeyboardInputInfoMETA* info,
XrPosef* interactorRootPose);
The application can use values like a pointer pose as the
interactorRootPose for
XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_CONTROLLER_RAY_* or
XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_HAND_RAY_* input sources, a point on
a controller model for
XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_CONTROLLER_DIRECT_* input sources and
the hand index tip pose for
XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_HAND_DIRECT_INDEX_TIP_*.
Different input poses can be used to accommodate application specific
controller or hand models.
The XrVirtualKeyboardInputInfoMETA structure is defined as:
// Provided by XR_META_virtual_keyboard
typedef struct XrVirtualKeyboardInputInfoMETA {
XrStructureType type;
const void* next;
XrVirtualKeyboardInputSourceMETA inputSource;
XrSpace inputSpace;
XrPosef inputPoseInSpace;
XrVirtualKeyboardInputStateFlagsMETA inputState;
} XrVirtualKeyboardInputInfoMETA;
The xrChangeVirtualKeyboardTextContextMETA function is defined as:
// Provided by XR_META_virtual_keyboard
XrResult xrChangeVirtualKeyboardTextContextMETA(
XrVirtualKeyboardMETA keyboard,
const XrVirtualKeyboardTextContextChangeInfoMETA* changeInfo);
The XrVirtualKeyboardTextContextChangeInfoMETA structure is defined as:
// Provided by XR_META_virtual_keyboard
typedef struct XrVirtualKeyboardTextContextChangeInfoMETA {
XrStructureType type;
const void* next;
const char* textContext;
} XrVirtualKeyboardTextContextChangeInfoMETA;
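For example, the text context prior to the input cursor might be extracted from the focused field as follows. This is a hypothetical sketch; representing the field as a string plus a cursor index is an assumption of this example:

```cpp
#include <cstddef>
#include <string>

// Hypothetical sketch: build the text context string to pass via
// XrVirtualKeyboardTextContextChangeInfoMETA::textContext - the text
// preceding the input cursor in the focused field.
inline std::string TextContextBeforeCursor(const std::string& fieldText,
                                           size_t cursorIndex) {
    if (cursorIndex > fieldText.size()) {
        cursorIndex = fieldText.size();  // clamp an out-of-range cursor
    }
    return fieldText.substr(0, cursorIndex);
}
```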
12.144.14. Handling events
Each frame the application should listen for the following events sent by the runtime that reflect the state of the keyboard.
The XrEventDataVirtualKeyboardCommitTextMETA structure is defined as:
// Provided by XR_META_virtual_keyboard
typedef struct XrEventDataVirtualKeyboardCommitTextMETA {
XrStructureType type;
const void* next;
XrVirtualKeyboardMETA keyboard;
char text[XR_MAX_VIRTUAL_KEYBOARD_COMMIT_TEXT_SIZE_META];
} XrEventDataVirtualKeyboardCommitTextMETA;
The XrEventDataVirtualKeyboardCommitTextMETA event must be sent by the runtime when a character or string is input by the keyboard. The application should append to the text field that the keyboard is editing.
The XrEventDataVirtualKeyboardBackspaceMETA structure is defined as:
// Provided by XR_META_virtual_keyboard
typedef struct XrEventDataVirtualKeyboardBackspaceMETA {
XrStructureType type;
const void* next;
XrVirtualKeyboardMETA keyboard;
} XrEventDataVirtualKeyboardBackspaceMETA;
The XrEventDataVirtualKeyboardBackspaceMETA event must be sent by the runtime when the [Backspace] key is pressed. The application should update the text field that the keyboard is editing.
The XrEventDataVirtualKeyboardEnterMETA structure is defined as:
// Provided by XR_META_virtual_keyboard
typedef struct XrEventDataVirtualKeyboardEnterMETA {
XrStructureType type;
const void* next;
XrVirtualKeyboardMETA keyboard;
} XrEventDataVirtualKeyboardEnterMETA;
The XrEventDataVirtualKeyboardEnterMETA event must be sent by the runtime when the [Enter] key is pressed. The application should respond accordingly (e.g. newline, accept, etc).
The XrEventDataVirtualKeyboardShownMETA structure is defined as:
// Provided by XR_META_virtual_keyboard
typedef struct XrEventDataVirtualKeyboardShownMETA {
XrStructureType type;
const void* next;
XrVirtualKeyboardMETA keyboard;
} XrEventDataVirtualKeyboardShownMETA;
The XrEventDataVirtualKeyboardShownMETA event must be sent when the runtime has shown the keyboard render model (via animation). The application should update its state accordingly (e.g. update UI, pause simulation, etc).
The XrEventDataVirtualKeyboardHiddenMETA structure is defined as:
// Provided by XR_META_virtual_keyboard
typedef struct XrEventDataVirtualKeyboardHiddenMETA {
XrStructureType type;
const void* next;
XrVirtualKeyboardMETA keyboard;
} XrEventDataVirtualKeyboardHiddenMETA;
The XrEventDataVirtualKeyboardHiddenMETA event must be sent when the keyboard render model is hidden by the runtime (via animation). The application should update its state accordingly (e.g. update UI, resume simulation, etc).
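The text editing implied by the commit/backspace/enter events can be sketched with local stand-in types. The enum and struct below are hypothetical and for illustration only; in a real application the events arrive through xrPollEvent as the XrEventDataVirtualKeyboard*META structures defined above:

```cpp
#include <string>

// Hypothetical stand-ins for the keyboard events, for illustration only.
enum class KeyboardEvent { CommitText, Backspace, Enter };

struct TextField {
    std::string text;
    bool submitted = false;
};

inline void HandleKeyboardEvent(TextField& field, KeyboardEvent event,
                                const std::string& committedText = "") {
    switch (event) {
        case KeyboardEvent::CommitText:
            field.text += committedText;  // append committed characters
            break;
        case KeyboardEvent::Backspace:
            // Note: a real implementation should remove a whole code point,
            // not a single byte, for multi-byte UTF-8 text.
            if (!field.text.empty()) {
                field.text.pop_back();
            }
            break;
        case KeyboardEvent::Enter:
            field.submitted = true;  // accept input (or insert a newline)
            break;
    }
}
```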
12.144.15. Example code for using virtual keyboard
The following example code demonstrates how to create and use the virtual keyboard.
XrInstance instance; // previously initialized
XrSystemId system; // previously initialized
XrSession session; // previously initialized
XrSpace localSpace; // previously initialized
XrPosef poseIdentity; // previously initialized
// XR_FB_render_model API previously initialized with xrGetInstanceProcAddr
PFN_xrEnumerateRenderModelPathsFB xrEnumerateRenderModelPathsFB;
PFN_xrGetRenderModelPropertiesFB xrGetRenderModelPropertiesFB;
PFN_xrLoadRenderModelFB xrLoadRenderModelFB;
// XR_META_virtual_keyboard API previously initialized with xrGetInstanceProcAddr
PFN_xrCreateVirtualKeyboardMETA xrCreateVirtualKeyboardMETA;
PFN_xrDestroyVirtualKeyboardMETA xrDestroyVirtualKeyboardMETA;
PFN_xrCreateVirtualKeyboardSpaceMETA xrCreateVirtualKeyboardSpaceMETA;
PFN_xrSuggestVirtualKeyboardLocationMETA xrSuggestVirtualKeyboardLocationMETA;
PFN_xrGetVirtualKeyboardScaleMETA xrGetVirtualKeyboardScaleMETA;
PFN_xrSetVirtualKeyboardModelVisibilityMETA xrSetVirtualKeyboardModelVisibilityMETA;
PFN_xrGetVirtualKeyboardModelAnimationStatesMETA xrGetVirtualKeyboardModelAnimationStatesMETA;
PFN_xrGetVirtualKeyboardDirtyTexturesMETA xrGetVirtualKeyboardDirtyTexturesMETA;
PFN_xrGetVirtualKeyboardTextureDataMETA xrGetVirtualKeyboardTextureDataMETA;
PFN_xrSendVirtualKeyboardInputMETA xrSendVirtualKeyboardInputMETA;
XrVirtualKeyboardMETA keyboardHandle{XR_NULL_HANDLE};
XrSpace keyboardSpace{XR_NULL_HANDLE};
XrRenderModelKeyFB keyboardModelKey{XR_NULL_RENDER_MODEL_KEY_FB};
/// Check virtual keyboard support
XrSystemVirtualKeyboardPropertiesMETA virtualKeyboardProps{XR_TYPE_SYSTEM_VIRTUAL_KEYBOARD_PROPERTIES_META};
XrSystemProperties systemProperties{XR_TYPE_SYSTEM_PROPERTIES, &virtualKeyboardProps};
CHK_XR(xrGetSystemProperties(instance, system, &systemProperties));
if (virtualKeyboardProps.supportsVirtualKeyboard == XR_FALSE) {
return; // Virtual keyboard not supported
}
/// Create virtual keyboard and space
XrVirtualKeyboardCreateInfoMETA createInfo{XR_TYPE_VIRTUAL_KEYBOARD_CREATE_INFO_META};
CHK_XR(xrCreateVirtualKeyboardMETA(session, &createInfo, &keyboardHandle));
XrVirtualKeyboardSpaceCreateInfoMETA spaceCreateInfo{XR_TYPE_VIRTUAL_KEYBOARD_SPACE_CREATE_INFO_META};
spaceCreateInfo.locationType = XR_VIRTUAL_KEYBOARD_LOCATION_TYPE_CUSTOM_META;
spaceCreateInfo.space = localSpace;
spaceCreateInfo.poseInSpace = poseIdentity;
CHK_XR(xrCreateVirtualKeyboardSpaceMETA(session, keyboardHandle, &spaceCreateInfo, &keyboardSpace));
/// Get render model key
uint32_t pathCount = 0;
CHK_XR(xrEnumerateRenderModelPathsFB(session, pathCount, &pathCount, nullptr));
std::vector<XrRenderModelPathInfoFB> pathInfos(pathCount, {XR_TYPE_RENDER_MODEL_PATH_INFO_FB});
CHK_XR(xrEnumerateRenderModelPathsFB(session, pathCount, &pathCount, pathInfos.data()));
for (const auto& info : pathInfos) {
char pathString[XR_MAX_PATH_LENGTH];
uint32_t countOutput = 0;
CHK_XR(xrPathToString(instance, info.path, XR_MAX_PATH_LENGTH, &countOutput, pathString));
if (strcmp(pathString, "/model_meta/keyboard/virtual") == 0) {
XrRenderModelPropertiesFB prop{XR_TYPE_RENDER_MODEL_PROPERTIES_FB};
XrRenderModelCapabilitiesRequestFB capReq{XR_TYPE_RENDER_MODEL_CAPABILITIES_REQUEST_FB};
capReq.flags = XR_RENDER_MODEL_SUPPORTS_GLTF_2_0_SUBSET_2_BIT_FB;
prop.next = &capReq;
CHK_XR(xrGetRenderModelPropertiesFB(session, info.path, &prop));
keyboardModelKey = prop.modelKey;
break;
}
}
if (keyboardModelKey == XR_NULL_RENDER_MODEL_KEY_FB) {
return; // Model not available
}
/// Load render model
XrRenderModelLoadInfoFB loadInfo{XR_TYPE_RENDER_MODEL_LOAD_INFO_FB};
loadInfo.modelKey = keyboardModelKey;
XrRenderModelBufferFB renderModelbuffer{XR_TYPE_RENDER_MODEL_BUFFER_FB};
CHK_XR(xrLoadRenderModelFB(session, &loadInfo, &renderModelbuffer));
std::vector<uint8_t> modelBuffer(renderModelbuffer.bufferCountOutput);
renderModelbuffer.buffer = modelBuffer.data();
renderModelbuffer.bufferCapacityInput = renderModelbuffer.bufferCountOutput;
CHK_XR(xrLoadRenderModelFB(session, &loadInfo, &renderModelbuffer));
// >>> Application loads the glTF model in `modelBuffer`, keeping a reference to the model animations and any textures with a URI texture id. See `Extend glTF render model support`.
/// Show render model
XrVirtualKeyboardModelVisibilitySetInfoMETA modelVisibility{XR_TYPE_VIRTUAL_KEYBOARD_MODEL_VISIBILITY_SET_INFO_META};
modelVisibility.visible = XR_TRUE;
CHK_XR(xrSetVirtualKeyboardModelVisibilityMETA(keyboardHandle, &modelVisibility));
while (!quit) {
// ...
// For every frame in frame loop
// ...
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
XrVirtualKeyboardLocationInfoMETA locationInfo{XR_TYPE_VIRTUAL_KEYBOARD_LOCATION_INFO_META};
// >>> Application sets desired location and scale in `locationInfo`
CHK_XR(xrSuggestVirtualKeyboardLocationMETA(keyboardHandle, &locationInfo));
// For each input source:
{
XrVirtualKeyboardInputInfoMETA inputInfo{XR_TYPE_VIRTUAL_KEYBOARD_INPUT_INFO_META};
// >>> Application sets input source data in `inputInfo`
XrPosef interactorRootPose;
CHK_XR(xrSendVirtualKeyboardInputMETA(keyboardHandle, &inputInfo, &interactorRootPose));
// >>> Application uses `interactorRootPose` as feedback for poke limiting
}
uint32_t textureIdCountOutput = 0;
CHK_XR(xrGetVirtualKeyboardDirtyTexturesMETA(keyboardHandle, 0, &textureIdCountOutput, nullptr));
std::vector<uint64_t> dirtyTextureIds(textureIdCountOutput);
CHK_XR(xrGetVirtualKeyboardDirtyTexturesMETA(keyboardHandle, textureIdCountOutput, &textureIdCountOutput, dirtyTextureIds.data()));
for (const uint64_t textureId : dirtyTextureIds) {
XrVirtualKeyboardTextureDataMETA textureData{XR_TYPE_VIRTUAL_KEYBOARD_TEXTURE_DATA_META};
CHK_XR(xrGetVirtualKeyboardTextureDataMETA(keyboardHandle, textureId, &textureData));
std::vector<uint8_t> textureDataBuffer(textureData.bufferCountOutput);
textureData.bufferCapacityInput = textureData.bufferCountOutput;
textureData.buffer = textureDataBuffer.data();
CHK_XR(xrGetVirtualKeyboardTextureDataMETA(keyboardHandle, textureId, &textureData));
// >>> Application applies `textureData` to the glTF texture referenced by `textureId`
}
XrVirtualKeyboardModelAnimationStatesMETA animationStates{XR_TYPE_VIRTUAL_KEYBOARD_MODEL_ANIMATION_STATES_META};
CHK_XR(xrGetVirtualKeyboardModelAnimationStatesMETA(keyboardHandle, &animationStates));
std::vector<XrVirtualKeyboardAnimationStateMETA> animationStatesBuffer(animationStates.stateCountOutput, {XR_TYPE_VIRTUAL_KEYBOARD_ANIMATION_STATE_META});
animationStates.stateCapacityInput = animationStates.stateCountOutput;
animationStates.states = animationStatesBuffer.data();
CHK_XR(xrGetVirtualKeyboardModelAnimationStatesMETA(keyboardHandle, &animationStates));
for (uint32_t i = 0; i < animationStates.stateCountOutput; ++i) {
const auto& animationState = animationStates.states[i];
// >>> Application applies `animationState` to the corresponding glTF model animation
}
XrSpaceLocation keyboardLocation{XR_TYPE_SPACE_LOCATION};
CHK_XR(xrLocateSpace(keyboardSpace, localSpace, time, &keyboardLocation));
float keyboardScale;
CHK_XR(xrGetVirtualKeyboardScaleMETA(keyboardHandle, &keyboardScale));
// >>> Application renders model with `keyboardLocation` and `keyboardScale`
}
CHK_XR(xrDestroyVirtualKeyboardMETA(keyboardHandle));
New Object Types
XR_DEFINE_HANDLE(XrVirtualKeyboardMETA)
XrVirtualKeyboardMETA represents a virtual keyboard instance.
New Flag Types
typedef XrFlags64 XrVirtualKeyboardInputStateFlagsMETA;
// Flag bits for XrVirtualKeyboardInputStateFlagsMETA
static const XrVirtualKeyboardInputStateFlagsMETA XR_VIRTUAL_KEYBOARD_INPUT_STATE_PRESSED_BIT_META = 0x00000001;
New Enum Constants
-
XR_MAX_VIRTUAL_KEYBOARD_COMMIT_TEXT_SIZE_META
XrStructureType enumeration is extended with:
-
XR_TYPE_SYSTEM_VIRTUAL_KEYBOARD_PROPERTIES_META -
XR_TYPE_VIRTUAL_KEYBOARD_CREATE_INFO_META -
XR_TYPE_VIRTUAL_KEYBOARD_SPACE_CREATE_INFO_META -
XR_TYPE_VIRTUAL_KEYBOARD_LOCATION_INFO_META -
XR_TYPE_VIRTUAL_KEYBOARD_MODEL_VISIBILITY_SET_INFO_META -
XR_TYPE_VIRTUAL_KEYBOARD_ANIMATION_STATE_META -
XR_TYPE_VIRTUAL_KEYBOARD_MODEL_ANIMATION_STATES_META -
XR_TYPE_VIRTUAL_KEYBOARD_TEXTURE_DATA_META -
XR_TYPE_VIRTUAL_KEYBOARD_INPUT_INFO_META -
XR_TYPE_VIRTUAL_KEYBOARD_TEXT_CONTEXT_CHANGE_INFO_META -
XR_TYPE_EVENT_DATA_VIRTUAL_KEYBOARD_COMMIT_TEXT_META -
XR_TYPE_EVENT_DATA_VIRTUAL_KEYBOARD_BACKSPACE_META -
XR_TYPE_EVENT_DATA_VIRTUAL_KEYBOARD_ENTER_META -
XR_TYPE_EVENT_DATA_VIRTUAL_KEYBOARD_SHOWN_META -
XR_TYPE_EVENT_DATA_VIRTUAL_KEYBOARD_HIDDEN_META
New Defines
New Enums
The possible location types are specified by the XrVirtualKeyboardLocationTypeMETA enumeration:
// Provided by XR_META_virtual_keyboard
typedef enum XrVirtualKeyboardLocationTypeMETA {
XR_VIRTUAL_KEYBOARD_LOCATION_TYPE_CUSTOM_META = 0,
XR_VIRTUAL_KEYBOARD_LOCATION_TYPE_FAR_META = 1,
XR_VIRTUAL_KEYBOARD_LOCATION_TYPE_DIRECT_META = 2,
XR_VIRTUAL_KEYBOARD_LOCATION_TYPE_MAX_ENUM_META = 0x7FFFFFFF
} XrVirtualKeyboardLocationTypeMETA;
The possible input sources are specified by the XrVirtualKeyboardInputSourceMETA enumeration:
// Provided by XR_META_virtual_keyboard
typedef enum XrVirtualKeyboardInputSourceMETA {
XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_CONTROLLER_RAY_LEFT_META = 1,
XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_CONTROLLER_RAY_RIGHT_META = 2,
XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_HAND_RAY_LEFT_META = 3,
XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_HAND_RAY_RIGHT_META = 4,
XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_CONTROLLER_DIRECT_LEFT_META = 5,
XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_CONTROLLER_DIRECT_RIGHT_META = 6,
XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_HAND_DIRECT_INDEX_TIP_LEFT_META = 7,
XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_HAND_DIRECT_INDEX_TIP_RIGHT_META = 8,
XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_MAX_ENUM_META = 0x7FFFFFFF
} XrVirtualKeyboardInputSourceMETA;
| Enum | Description |
|---|---|
| XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_CONTROLLER_RAY_LEFT_META | Left controller ray. |
| XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_CONTROLLER_RAY_RIGHT_META | Right controller ray. |
| XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_HAND_RAY_LEFT_META | Left hand ray. |
| XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_HAND_RAY_RIGHT_META | Right hand ray. |
| XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_CONTROLLER_DIRECT_LEFT_META | Left controller direct touch. |
| XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_CONTROLLER_DIRECT_RIGHT_META | Right controller direct touch. |
| XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_HAND_DIRECT_INDEX_TIP_LEFT_META | Left hand direct touch. |
| XR_VIRTUAL_KEYBOARD_INPUT_SOURCE_HAND_DIRECT_INDEX_TIP_RIGHT_META | Right hand direct touch. |
New Structures
New Functions
Issues
Version History
-
Revision 1, 2023-04-14 (Peter Chan, Brent Housen)
-
Initial extension description
-
12.145. XR_META_vulkan_swapchain_create_info
- Name String
-
XR_META_vulkan_swapchain_create_info - Extension Type
-
Instance extension
- Registered Extension Number
-
228
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2022-05-19
- IP Status
-
No known IP claims.
- Contributors
-
John Kearney, Meta Platforms
Andreas L. Selvik, Meta Platforms
Jakob Bornecrantz, Collabora
Ross Ning, Meta Platforms
Overview
Using this extension, a Vulkan-based application can pass through
additional VkImageCreateFlags or VkImageUsageFlags by chaining
an XrVulkanSwapchainCreateInfoMETA structure to the
XrSwapchainCreateInfo when calling xrCreateSwapchain.
The application is still encouraged to use the common bits like
XR_SWAPCHAIN_USAGE_TRANSFER_SRC_BIT defined in
XrSwapchainUsageFlags.
However, the application may present both
XR_SWAPCHAIN_USAGE_TRANSFER_SRC_BIT in XrSwapchainUsageFlags and
VK_IMAGE_USAGE_TRANSFER_SRC_BIT in VkImageUsageFlags at the same
time.
The application must enable the corresponding Vulkan extensions before
requesting additional Vulkan flags.
For example, VK_EXT_fragment_density_map device extension must be
enabled if an application requests VK_IMAGE_CREATE_SUBSAMPLED_BIT_EXT
bit.
Otherwise, it may cause undefined behavior, including an application crash.
Runtimes that implement this extension must support the
XR_KHR_vulkan_enable or the XR_KHR_vulkan_enable2 extension.
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
// Provided by XR_META_vulkan_swapchain_create_info
typedef struct XrVulkanSwapchainCreateInfoMETA {
XrStructureType type;
const void* next;
VkImageCreateFlags additionalCreateFlags;
VkImageUsageFlags additionalUsageFlags;
} XrVulkanSwapchainCreateInfoMETA;
The runtime must return XR_ERROR_FEATURE_UNSUPPORTED if any bit of
either additionalCreateFlags or additionalUsageFlags is not
supported.
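The unsupported-bit rule above reduces to a bitmask subset test. The sketch below uses plain integers rather than the actual Vulkan flag types, purely as an illustration of the check a runtime might perform:

```cpp
#include <cstdint>

// Hypothetical sketch: a runtime-side check that every requested
// additional flag bit is within the supported set. If any bit falls
// outside supportedFlags, XR_ERROR_FEATURE_UNSUPPORTED must be returned.
inline bool AllFlagsSupported(uint32_t requestedFlags, uint32_t supportedFlags) {
    return (requestedFlags & ~supportedFlags) == 0;
}
```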
New Functions
Issues
Version History
-
Revision 1, 2022-05-05 (Ross Ning)
-
Initial draft
-
12.146. XR_ML_compat
- Name String
-
XR_ML_compat
- Extension Type
-
Instance extension
- Registered Extension Number
-
138
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2022-11-08
- Contributors
-
Ron Bessems, Magic Leap
Overview
This extension provides functionality to facilitate transitioning from Magic Leap SDK to OpenXR SDK, most notably interoperability between Coordinate Frame UUIDs and XrSpace.
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_COORDINATE_SPACE_CREATE_INFO_ML
New Structures
The XrCoordinateSpaceCreateInfoML structure is defined as:
// Provided by XR_ML_compat
typedef struct XrCoordinateSpaceCreateInfoML {
XrStructureType type;
const void* next;
MLCoordinateFrameUID cfuid;
XrPosef poseInCoordinateSpace;
} XrCoordinateSpaceCreateInfoML;
XrCoordinateSpaceCreateInfoML is provided as input when calling
xrCreateSpaceFromCoordinateFrameUIDML to convert a Magic Leap SDK
generated MLCoordinateFrameUID to an XrSpace.
The conversion only needs to be done once even if the underlying
MLCoordinateFrameUID changes its pose.
New Functions
The xrCreateSpaceFromCoordinateFrameUIDML function is defined as:
// Provided by XR_ML_compat
XrResult xrCreateSpaceFromCoordinateFrameUIDML(
XrSession session,
const XrCoordinateSpaceCreateInfoML* createInfo,
XrSpace* space);
The service that created the underlying
XrCoordinateSpaceCreateInfoML::cfuid must remain active for the
lifetime of the XrSpace.
If xrLocateSpace is called on a space created from an
XrCoordinateSpaceCreateInfoML::cfuid from a no-longer-active
service, the runtime may set XrSpaceLocation::locationFlags to 0.
XrSpace handles are destroyed using xrDestroySpace.
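As a non-normative sketch, wrapping a Magic Leap SDK coordinate frame in an XrSpace might look like the following; the cfuid value is assumed to come from a Magic Leap perception API, and in real code the function pointer for xrCreateSpaceFromCoordinateFrameUIDML would first be obtained via xrGetInstanceProcAddr.

```cpp
// Convert a Magic Leap SDK MLCoordinateFrameUID into an XrSpace.
MLCoordinateFrameUID cfuid;  // previously obtained from the Magic Leap SDK

XrCoordinateSpaceCreateInfoML createInfo{XR_TYPE_COORDINATE_SPACE_CREATE_INFO_ML};
createInfo.cfuid = cfuid;
createInfo.poseInCoordinateSpace = {{0.f, 0.f, 0.f, 1.f},  // identity orientation
                                    {0.f, 0.f, 0.f}};      // zero offset

XrSpace space = XR_NULL_HANDLE;
CHK_XR(xrCreateSpaceFromCoordinateFrameUIDML(session, &createInfo, &space));

// ... locate and use the space like any other XrSpace ...

CHK_XR(xrDestroySpace(space));
```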
Issues
Version History
-
Revision 1, 2022-11-08 (Ron Bessems)
-
Initial extension description
-
12.147. XR_ML_facial_expression
- Name String
-
XR_ML_facial_expression
- Extension Type
-
Instance extension
- Registered Extension Number
-
483
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2023-11-22
- Contributors
-
Dushan Vasilevski, Magic Leap
Karthik Kadappan, Magic Leap
Ron Bessems, Magic Leap
Johannes Fung, Magic Leap
12.147.1. Overview
This extension provides the weights of facial blend shapes usable for a variety of purposes, such as mood monitoring or interpolating the expression of an avatar’s face.
|
Permissions
Android applications must have the
com.magicleap.permission.FACIAL_EXPRESSION permission listed in their
manifest and granted to use this extension; otherwise
xrCreateFacialExpressionClientML will return
XR_ERROR_FACIAL_EXPRESSION_PERMISSION_DENIED_ML. |
12.147.2. Inspect system capability
The XrSystemFacialExpressionPropertiesML structure is defined as:
// Provided by XR_ML_facial_expression
typedef struct XrSystemFacialExpressionPropertiesML {
XrStructureType type;
void* next;
XrBool32 supportsFacialExpression;
} XrSystemFacialExpressionPropertiesML;
An application can inspect whether the system is capable of parsing facial blend shapes by chaining an XrSystemFacialExpressionPropertiesML structure to XrSystemProperties when calling xrGetSystemProperties.
If a runtime returns XR_FALSE for supportsFacialExpression, the
runtime must return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateFacialExpressionClientML.
12.147.3. Create a facial expression client handle
The XrFacialExpressionClientML handle represents the resources for parsing facial expressions.
// Provided by XR_ML_facial_expression
XR_DEFINE_HANDLE(XrFacialExpressionClientML)
This handle is used to obtain blend shapes using the xrGetFacialExpressionBlendShapePropertiesML function.
The xrCreateFacialExpressionClientML function is defined as:
// Provided by XR_ML_facial_expression
XrResult xrCreateFacialExpressionClientML(
XrSession session,
const XrFacialExpressionClientCreateInfoML* createInfo,
XrFacialExpressionClientML* facialExpressionClient);
An application can create an XrFacialExpressionClientML handle using the xrCreateFacialExpressionClientML function.
If the system does not support parsing facial expressions, the runtime must
return XR_ERROR_FEATURE_UNSUPPORTED from
xrCreateFacialExpressionClientML.
In this case, the runtime must also return XR_FALSE for
XrSystemFacialExpressionPropertiesML::supportsFacialExpression
when the function xrGetSystemProperties is called.
The XrFacialExpressionClientCreateInfoML structure is defined as follows:
// Provided by XR_ML_facial_expression
typedef struct XrFacialExpressionClientCreateInfoML {
XrStructureType type;
const void* next;
uint32_t requestedCount;
const XrFacialBlendShapeML* requestedFacialBlendShapes;
} XrFacialExpressionClientCreateInfoML;
Note that although the naming convention for requestedCount does not
align with requestedFacialBlendShapes, they are coupled together.
The XrFacialExpressionClientCreateInfoML structure describes the information to create an XrFacialExpressionClientML handle.
An application specifies the blend shapes they want to query by creating an
array of type XrFacialBlendShapeML and passing it to
requestedFacialBlendShapes along with the corresponding
requestedCount.
The application can also pass NULL into
requestedFacialBlendShapes to capture the entirety of
XrFacialBlendShapeML.
However, it may be better to be explicit about which blend shapes to
query, since some blend shapes may be queried by the runtime at a
greater frequency than other blend shapes.
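As a sketch of the explicit form, requesting only two specific blend shapes might look as follows; pfnCreateFacialExpressionClientML is assumed to have been obtained via xrGetInstanceProcAddr, as in the full example later in this section.

```cpp
// Restrict the client to an explicit set of blend shapes instead of
// passing NULL for requestedFacialBlendShapes.
const XrFacialBlendShapeML requested[] = {
    XR_FACIAL_BLEND_SHAPE_JAW_DROP_ML,
    XR_FACIAL_BLEND_SHAPE_TONGUE_OUT_ML,
};

XrFacialExpressionClientCreateInfoML createInfo{XR_TYPE_FACIAL_EXPRESSION_CLIENT_CREATE_INFO_ML};
createInfo.requestedCount = 2;
createInfo.requestedFacialBlendShapes = requested;

XrFacialExpressionClientML client = XR_NULL_HANDLE;
CHK_XR(pfnCreateFacialExpressionClientML(session, &createInfo, &client));
```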
12.147.4. Destroy a facial expression client handle
The xrDestroyFacialExpressionClientML function is defined as:
// Provided by XR_ML_facial_expression
XrResult xrDestroyFacialExpressionClientML(
XrFacialExpressionClientML facialExpressionClient);
The xrDestroyFacialExpressionClientML function releases the
facialExpressionClient and the underlying resources.
12.147.5. Obtain facial expression blend shapes
The xrGetFacialExpressionBlendShapePropertiesML function is defined as:
// Provided by XR_ML_facial_expression
XrResult xrGetFacialExpressionBlendShapePropertiesML(
XrFacialExpressionClientML facialExpressionClient,
const XrFacialExpressionBlendShapeGetInfoML* blendShapeGetInfo,
uint32_t blendShapeCount,
XrFacialExpressionBlendShapePropertiesML* blendShapes);
XrFacialExpressionBlendShapePropertiesML is better thought of as mutable state rather than an immutable property. In general, OpenXR convention keeps property data types immutable.
The xrGetFacialExpressionBlendShapePropertiesML function returns the XrFacialExpressionBlendShapePropertiesML of each blend shape requested in XrFacialExpressionClientCreateInfoML.
Each XrFacialExpressionBlendShapePropertiesML in blendShapes
must have its requestedFacialBlendShape member variable initialized
before being passed into xrGetFacialExpressionBlendShapePropertiesML.
If a blend shape in blendShapes is not enabled in
xrCreateFacialExpressionClientML, the runtime must return
XR_ERROR_VALIDATION_FAILURE.
For unsupported blend shapes, the runtime must clear
XrFacialExpressionBlendShapePropertiesML::flags and the runtime
must also return XR_SUCCESS.
The XrFacialExpressionBlendShapeGetInfoML structure is defined as:
// Provided by XR_ML_facial_expression
typedef struct XrFacialExpressionBlendShapeGetInfoML {
XrStructureType type;
const void* next;
} XrFacialExpressionBlendShapeGetInfoML;
The XrFacialExpressionBlendShapeGetInfoML structure specifies properties about blend shapes desired by an application. It must be passed into xrGetFacialExpressionBlendShapePropertiesML and is currently empty for future extensibility.
The XrFacialExpressionBlendShapePropertiesML structure is defined as:
// Provided by XR_ML_facial_expression
typedef struct XrFacialExpressionBlendShapePropertiesML {
XrStructureType type;
void* next;
XrFacialBlendShapeML requestedFacialBlendShape;
float weight;
XrFacialExpressionBlendShapePropertiesFlagsML flags;
XrTime time;
} XrFacialExpressionBlendShapePropertiesML;
The XrFacialExpressionBlendShapePropertiesML structure holds the properties of a facial expression blend shape.
If requestedFacialBlendShape does not correspond to any
XrFacialBlendShapeML passed into
xrCreateFacialExpressionClientML then the
XR_FACIAL_EXPRESSION_BLEND_SHAPE_PROPERTIES_VALID_BIT_ML and
XR_FACIAL_EXPRESSION_BLEND_SHAPE_PROPERTIES_TRACKED_BIT_ML of
flags must be unset.
If the requestedFacialBlendShape is not available at sample time
time, then
XR_FACIAL_EXPRESSION_BLEND_SHAPE_PROPERTIES_TRACKED_BIT_ML in
flags must be unset.
The runtime must populate weight with the weight of the queried blend
shape.
12.147.6. Conventions of blend shapes
This extension defines the following blend shapes for tracking facial expressions.
// Provided by XR_ML_facial_expression
typedef enum XrFacialBlendShapeML {
XR_FACIAL_BLEND_SHAPE_BROW_LOWERER_L_ML = 0,
XR_FACIAL_BLEND_SHAPE_BROW_LOWERER_R_ML = 1,
XR_FACIAL_BLEND_SHAPE_CHEEK_RAISER_L_ML = 2,
XR_FACIAL_BLEND_SHAPE_CHEEK_RAISER_R_ML = 3,
XR_FACIAL_BLEND_SHAPE_CHIN_RAISER_ML = 4,
XR_FACIAL_BLEND_SHAPE_DIMPLER_L_ML = 5,
XR_FACIAL_BLEND_SHAPE_DIMPLER_R_ML = 6,
XR_FACIAL_BLEND_SHAPE_EYES_CLOSED_L_ML = 7,
XR_FACIAL_BLEND_SHAPE_EYES_CLOSED_R_ML = 8,
XR_FACIAL_BLEND_SHAPE_INNER_BROW_RAISER_L_ML = 9,
XR_FACIAL_BLEND_SHAPE_INNER_BROW_RAISER_R_ML = 10,
XR_FACIAL_BLEND_SHAPE_JAW_DROP_ML = 11,
XR_FACIAL_BLEND_SHAPE_LID_TIGHTENER_L_ML = 12,
XR_FACIAL_BLEND_SHAPE_LID_TIGHTENER_R_ML = 13,
XR_FACIAL_BLEND_SHAPE_LIP_CORNER_DEPRESSOR_L_ML = 14,
XR_FACIAL_BLEND_SHAPE_LIP_CORNER_DEPRESSOR_R_ML = 15,
XR_FACIAL_BLEND_SHAPE_LIP_CORNER_PULLER_L_ML = 16,
XR_FACIAL_BLEND_SHAPE_LIP_CORNER_PULLER_R_ML = 17,
XR_FACIAL_BLEND_SHAPE_LIP_FUNNELER_LB_ML = 18,
XR_FACIAL_BLEND_SHAPE_LIP_FUNNELER_LT_ML = 19,
XR_FACIAL_BLEND_SHAPE_LIP_FUNNELER_RB_ML = 20,
XR_FACIAL_BLEND_SHAPE_LIP_FUNNELER_RT_ML = 21,
XR_FACIAL_BLEND_SHAPE_LIP_PRESSOR_L_ML = 22,
XR_FACIAL_BLEND_SHAPE_LIP_PRESSOR_R_ML = 23,
XR_FACIAL_BLEND_SHAPE_LIP_PUCKER_L_ML = 24,
XR_FACIAL_BLEND_SHAPE_LIP_PUCKER_R_ML = 25,
XR_FACIAL_BLEND_SHAPE_LIP_STRETCHER_L_ML = 26,
XR_FACIAL_BLEND_SHAPE_LIP_STRETCHER_R_ML = 27,
XR_FACIAL_BLEND_SHAPE_LIP_SUCK_LB_ML = 28,
XR_FACIAL_BLEND_SHAPE_LIP_SUCK_LT_ML = 29,
XR_FACIAL_BLEND_SHAPE_LIP_SUCK_RB_ML = 30,
XR_FACIAL_BLEND_SHAPE_LIP_SUCK_RT_ML = 31,
XR_FACIAL_BLEND_SHAPE_LIP_TIGHTENER_L_ML = 32,
XR_FACIAL_BLEND_SHAPE_LIP_TIGHTENER_R_ML = 33,
XR_FACIAL_BLEND_SHAPE_LIPS_TOWARD_ML = 34,
XR_FACIAL_BLEND_SHAPE_LOWER_LIP_DEPRESSOR_L_ML = 35,
XR_FACIAL_BLEND_SHAPE_LOWER_LIP_DEPRESSOR_R_ML = 36,
XR_FACIAL_BLEND_SHAPE_NOSE_WRINKLER_L_ML = 37,
XR_FACIAL_BLEND_SHAPE_NOSE_WRINKLER_R_ML = 38,
XR_FACIAL_BLEND_SHAPE_OUTER_BROW_RAISER_L_ML = 39,
XR_FACIAL_BLEND_SHAPE_OUTER_BROW_RAISER_R_ML = 40,
XR_FACIAL_BLEND_SHAPE_UPPER_LID_RAISER_L_ML = 41,
XR_FACIAL_BLEND_SHAPE_UPPER_LID_RAISER_R_ML = 42,
XR_FACIAL_BLEND_SHAPE_UPPER_LIP_RAISER_L_ML = 43,
XR_FACIAL_BLEND_SHAPE_UPPER_LIP_RAISER_R_ML = 44,
XR_FACIAL_BLEND_SHAPE_TONGUE_OUT_ML = 45,
XR_FACIAL_BLEND_SHAPE_MAX_ENUM_ML = 0x7FFFFFFF
} XrFacialBlendShapeML;
The XrFacialExpressionBlendShapePropertiesML::flags member is of
the following type, and contains a bitwise-OR of zero or more bits defined
in XrFacialExpressionBlendShapePropertiesFlagBitsML.
typedef XrFlags64 XrFacialExpressionBlendShapePropertiesFlagsML;
Valid bits for XrFacialExpressionBlendShapePropertiesFlagsML are defined by XrFacialExpressionBlendShapePropertiesFlagBitsML and are specified as:
// Flag bits for XrFacialExpressionBlendShapePropertiesFlagsML
static const XrFacialExpressionBlendShapePropertiesFlagsML XR_FACIAL_EXPRESSION_BLEND_SHAPE_PROPERTIES_VALID_BIT_ML = 0x00000001;
static const XrFacialExpressionBlendShapePropertiesFlagsML XR_FACIAL_EXPRESSION_BLEND_SHAPE_PROPERTIES_TRACKED_BIT_ML = 0x00000002;
The flag bits have the following meanings:
12.147.7. Example code for obtaining facial expression information
The following example code demonstrates how to obtain weights for facial expression blend shapes.
XrInstance instance; // previously initialized
XrSystemId systemId; // previously initialized
XrSession session; // previously initialized
// Confirm face tracking system support.
XrSystemFacialExpressionPropertiesML systemFacialExpressionClientProperties{
XR_TYPE_SYSTEM_FACIAL_EXPRESSION_PROPERTIES_ML};
XrSystemProperties systemProperties{XR_TYPE_SYSTEM_PROPERTIES,
&systemFacialExpressionClientProperties};
CHK_XR(xrGetSystemProperties(instance, systemId, &systemProperties));
if (!systemFacialExpressionClientProperties.supportsFacialExpression) {
// The system does not support face tracking
return;
}
// Get function pointer for xrCreateFacialExpressionClientML.
PFN_xrCreateFacialExpressionClientML pfnCreateFacialExpressionClientML;
CHK_XR(xrGetInstanceProcAddr(instance, "xrCreateFacialExpressionClientML",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnCreateFacialExpressionClientML)));
// Create a client that queries for default set of facial expressions.
XrFacialExpressionClientML facialExpressionClient = {};
XrFacialExpressionClientCreateInfoML createInfo{XR_TYPE_FACIAL_EXPRESSION_CLIENT_CREATE_INFO_ML};
CHK_XR(pfnCreateFacialExpressionClientML(session, &createInfo, &facialExpressionClient));
// Allocate buffers to receive facial expression data before frame
// loop starts.
const uint32_t num_blend_shapes = 2;
XrFacialExpressionBlendShapePropertiesML blendShapes[num_blend_shapes] = {
    {XR_TYPE_FACIAL_EXPRESSION_BLEND_SHAPE_PROPERTIES_ML},
    {XR_TYPE_FACIAL_EXPRESSION_BLEND_SHAPE_PROPERTIES_ML}};
// User must explicitly request what facial expression blend shape to query
blendShapes[0].requestedFacialBlendShape = XR_FACIAL_BLEND_SHAPE_BROW_LOWERER_L_ML;
blendShapes[1].requestedFacialBlendShape = XR_FACIAL_BLEND_SHAPE_BROW_LOWERER_R_ML;
// Get function pointer for xrGetFacialExpressionBlendShapePropertiesML.
PFN_xrGetFacialExpressionBlendShapePropertiesML pfnGetFacialExpressionBlendShapesML;
CHK_XR(xrGetInstanceProcAddr(instance, "xrGetFacialExpressionBlendShapePropertiesML",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnGetFacialExpressionBlendShapesML)));
while (1) {
// ...
// For every frame in the frame loop
// ...
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
XrFacialExpressionBlendShapeGetInfoML expressionInfo{XR_TYPE_FACIAL_EXPRESSION_BLEND_SHAPE_GET_INFO_ML};
CHK_XR(pfnGetFacialExpressionBlendShapesML(facialExpressionClient, &expressionInfo, num_blend_shapes, blendShapes));
for (uint32_t i = 0; i < num_blend_shapes; ++i) {
// blendShapes[i] contains the properties of specific blend shape
}
}
12.147.12. New Enum Constants
-
XR_ML_FACIAL_EXPRESSION_EXTENSION_NAME
-
XR_ML_facial_expression_SPEC_VERSION
-
Extending XrObjectType:
-
XR_OBJECT_TYPE_FACIAL_EXPRESSION_CLIENT_ML
-
-
Extending XrResult:
-
XR_ERROR_FACIAL_EXPRESSION_PERMISSION_DENIED_ML
-
-
Extending XrStructureType:
-
XR_TYPE_FACIAL_EXPRESSION_BLEND_SHAPE_GET_INFO_ML
-
XR_TYPE_FACIAL_EXPRESSION_BLEND_SHAPE_PROPERTIES_ML
-
XR_TYPE_FACIAL_EXPRESSION_CLIENT_CREATE_INFO_ML
-
XR_TYPE_SYSTEM_FACIAL_EXPRESSION_PROPERTIES_ML
-
12.148. XR_ML_frame_end_info
- Name String
-
XR_ML_frame_end_info
- Extension Type
-
Instance extension
- Registered Extension Number
-
136
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2022-10-26
- Contributors
-
Ron Bessems, Magic Leap
Overview
This extension provides access to Magic Leap specific extensions to frame settings like focus distance, vignette, and protection.
New Flag Types
The XrFrameEndInfoML::flags member is of the following type, and
contains a bitwise-OR of zero or more of the bits defined in
XrFrameEndInfoFlagBitsML.
typedef XrFlags64 XrFrameEndInfoFlagsML;
Valid bits for XrFrameEndInfoFlagsML are defined by XrFrameEndInfoFlagBitsML, which is specified as:
// Flag bits for XrFrameEndInfoFlagsML
static const XrFrameEndInfoFlagsML XR_FRAME_END_INFO_PROTECTED_BIT_ML = 0x00000001;
static const XrFrameEndInfoFlagsML XR_FRAME_END_INFO_VIGNETTE_BIT_ML = 0x00000002;
The flag bits have the following meanings:
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_FRAME_END_INFO_ML
New Structures
The XrFrameEndInfoML structure is defined as:
// Provided by XR_ML_frame_end_info
typedef struct XrFrameEndInfoML {
XrStructureType type;
const void* next;
float focusDistance;
XrFrameEndInfoFlagsML flags;
} XrFrameEndInfoML;
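As a non-normative sketch, the structure is chained into the XrFrameEndInfo passed to xrEndFrame; the focus distance value here is an illustrative assumption.

```cpp
// Chain XrFrameEndInfoML into XrFrameEndInfo to request a protected
// frame with a focus distance of 1.5 meters.
XrFrameEndInfoML frameEndInfoML{XR_TYPE_FRAME_END_INFO_ML};
frameEndInfoML.focusDistance = 1.5f;  // illustrative value, in meters
frameEndInfoML.flags = XR_FRAME_END_INFO_PROTECTED_BIT_ML;

XrFrameEndInfo frameEndInfo{XR_TYPE_FRAME_END_INFO};
frameEndInfo.next = &frameEndInfoML;
// ... fill in displayTime, environmentBlendMode and layers as usual ...
CHK_XR(xrEndFrame(session, &frameEndInfo));
```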
Version History
-
Revision 1, 2022-10-26 (Ron Bessems)
-
Initial extension description
-
12.149. XR_ML_global_dimmer
- Name String
-
XR_ML_global_dimmer
- Extension Type
-
Instance extension
- Registered Extension Number
-
137
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2022-10-25
- Contributors
-
Ron Bessems, Magic Leap
Michał Kulągowski, Magic Leap
Overview
This extension provides control over the global dimmer panel of the Magic Leap 2. The Global Dimming™ feature dims the entire display without dimming digital content to make text and images more solid and precise.
Note that when using the XR_ENVIRONMENT_BLEND_MODE_ALPHA_BLEND mode
the alpha channel of the color swapchain image is combined with the global
dimmer value.
The global dimmer, however, is able to address the whole panel, whereas
the alpha channel covers only the video-addressable portion.
New Flag Types
The XrGlobalDimmerFrameEndInfoML::flags member is of the
following type, and contains a bitwise-OR of zero or more of the bits
defined in XrFrameEndInfoFlagBitsML.
typedef XrFlags64 XrGlobalDimmerFrameEndInfoFlagsML;
Valid bits for XrGlobalDimmerFrameEndInfoFlagsML are defined by XrGlobalDimmerFrameEndInfoFlagBitsML, which is specified as:
// Flag bits for XrGlobalDimmerFrameEndInfoFlagsML
static const XrGlobalDimmerFrameEndInfoFlagsML XR_GLOBAL_DIMMER_FRAME_END_INFO_ENABLED_BIT_ML = 0x00000001;
The flag bits have the following meanings:
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_GLOBAL_DIMMER_FRAME_END_INFO_ML
New Structures
The XrGlobalDimmerFrameEndInfoML structure is defined as:
// Provided by XR_ML_global_dimmer
typedef struct XrGlobalDimmerFrameEndInfoML {
XrStructureType type;
const void* next;
float dimmerValue;
XrGlobalDimmerFrameEndInfoFlagsML flags;
} XrGlobalDimmerFrameEndInfoML;
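As a non-normative sketch, the structure is chained into the XrFrameEndInfo passed to xrEndFrame; the dimmer value and the 0.0 to 1.0 range noted in the comment are assumptions for illustration, since the member table is not reproduced above.

```cpp
// Enable the global dimmer for this frame by chaining
// XrGlobalDimmerFrameEndInfoML into XrFrameEndInfo.
XrGlobalDimmerFrameEndInfoML dimmerInfo{XR_TYPE_GLOBAL_DIMMER_FRAME_END_INFO_ML};
dimmerInfo.dimmerValue = 0.5f;  // assumed range: 0.0 (undimmed) to 1.0 (fully dimmed)
dimmerInfo.flags = XR_GLOBAL_DIMMER_FRAME_END_INFO_ENABLED_BIT_ML;

XrFrameEndInfo frameEndInfo{XR_TYPE_FRAME_END_INFO};
frameEndInfo.next = &dimmerInfo;
// ... fill in displayTime, environmentBlendMode and layers as usual ...
CHK_XR(xrEndFrame(session, &frameEndInfo));
```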
Version History
-
Revision 1, 2022-10-25 (Ron Bessems)
-
Initial extension description
-
12.150. XR_ML_localization_map
- Name String
-
XR_ML_localization_map
- Extension Type
-
Instance extension
- Registered Extension Number
-
140
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2023-09-14
- Contributors
-
Ron Bessems, Magic Leap
Karthik Kadappan, Magic Leap
12.150.1. Overview
A Magic Leap localization map is a container that holds metadata about the scanned environment. It is a digital copy of a physical place. A localization map holds spatial anchors, dense mesh, planes, feature points, and positional data.
-
Spatial anchors - Used for persistent placement of content.
-
Dense mesh - 3D triangulated geometry representing Magic Leap device understanding of the real-world geometry of an area.
-
Planes - Large, flat surfaces derived from dense mesh data.
Localization maps can be created on device or in the Magic Leap AR Cloud. There are two types: "On Device" and "Cloud".
-
"On Device" for OpenXR (local space for Magic Leap) - maps for a single device, which can be shared via the export/import mechanism.
-
"Cloud" for OpenXR (shared space for Magic Leap) - maps that can be shared across multiple Magic Leap devices in the AR Cloud.
|
Note
Localization Maps are called Spaces in the Magic Leap C-API. |
|
Permissions
Android applications must have the com.magicleap.permission.SPACE_MANAGER permission (protection level: normal) listed in their manifest to use these functions. Android applications must have the com.magicleap.permission.SPACE_IMPORT_EXPORT permission (protection level: dangerous) listed in their manifest and granted to use these functions. |
12.150.2. Current Localization Map Information
Applications can receive notifications when the current localization map changes by calling xrPollEvent and handling the XrEventDataLocalizationChangedML type. To enable these events call xrEnableLocalizationEventsML.
The XrEventDataLocalizationChangedML structure is defined as:
// Provided by XR_ML_localization_map
typedef struct XrEventDataLocalizationChangedML {
XrStructureType type;
const void* next;
XrSession session;
XrLocalizationMapStateML state;
XrLocalizationMapML map;
XrLocalizationMapConfidenceML confidence;
XrLocalizationMapErrorFlagsML errorFlags;
} XrEventDataLocalizationChangedML;
By default the runtime does not send these events; calling the xrEnableLocalizationEventsML function enables them. When this function is called, an XrEventDataLocalizationChangedML event is always posted to the event queue, regardless of whether the map localization state has changed. This allows the application to synchronize with the current state.
|
Note
The arrival of the event is asynchronous to this call. |
The bitmask type XrLocalizationMapErrorFlagsML is defined as:
// Provided by XR_ML_localization_map
typedef XrFlags64 XrLocalizationMapErrorFlagsML;
As used in the XrEventDataLocalizationChangedML::errorFlags field,
XrLocalizationMapErrorFlagsML contains a bitwise-OR of zero or more of
the bits defined in XrLocalizationMapErrorFlagBitsML.
// Provided by XR_ML_localization_map
// Flag bits for XrLocalizationMapErrorFlagsML
static const XrLocalizationMapErrorFlagsML XR_LOCALIZATION_MAP_ERROR_UNKNOWN_BIT_ML = 0x00000001;
static const XrLocalizationMapErrorFlagsML XR_LOCALIZATION_MAP_ERROR_OUT_OF_MAPPED_AREA_BIT_ML = 0x00000002;
static const XrLocalizationMapErrorFlagsML XR_LOCALIZATION_MAP_ERROR_LOW_FEATURE_COUNT_BIT_ML = 0x00000004;
static const XrLocalizationMapErrorFlagsML XR_LOCALIZATION_MAP_ERROR_EXCESSIVE_MOTION_BIT_ML = 0x00000008;
static const XrLocalizationMapErrorFlagsML XR_LOCALIZATION_MAP_ERROR_LOW_LIGHT_BIT_ML = 0x00000010;
static const XrLocalizationMapErrorFlagsML XR_LOCALIZATION_MAP_ERROR_HEADPOSE_BIT_ML = 0x00000020;
The flag bits have the following meanings:
The xrEnableLocalizationEventsML function is defined as:
// Provided by XR_ML_localization_map
XrResult xrEnableLocalizationEventsML(
XrSession session,
const XrLocalizationEnableEventsInfoML* info);
The XrLocalizationEnableEventsInfoML structure is defined as:
// Provided by XR_ML_localization_map
typedef struct XrLocalizationEnableEventsInfoML {
XrStructureType type;
const void* next;
XrBool32 enabled;
} XrLocalizationEnableEventsInfoML;
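As a non-normative sketch, enabling the events is a single call; note that the confirming event arrives asynchronously.

```cpp
// Enable localization-changed events for this session.
XrLocalizationEnableEventsInfoML enableInfo{XR_TYPE_LOCALIZATION_ENABLE_EVENTS_INFO_ML};
enableInfo.enabled = XR_TRUE;
CHK_XR(xrEnableLocalizationEventsML(session, &enableInfo));
// An XrEventDataLocalizationChangedML event reporting the current state
// will arrive asynchronously on the event queue.
```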
The XrLocalizationMapML structure is defined as:
// Provided by XR_ML_localization_map
typedef struct XrLocalizationMapML {
XrStructureType type;
void* next;
char name[XR_MAX_LOCALIZATION_MAP_NAME_LENGTH_ML];
XrUuidEXT mapUuid;
XrLocalizationMapTypeML mapType;
} XrLocalizationMapML;
12.150.3. Listing Localization Maps
Localization maps available to the application can be queried using xrQueryLocalizationMapsML.
The xrQueryLocalizationMapsML function is defined as:
// Provided by XR_ML_localization_map
XrResult xrQueryLocalizationMapsML(
XrSession session,
const XrLocalizationMapQueryInfoBaseHeaderML* queryInfo,
uint32_t mapCapacityInput,
uint32_t* mapCountOutput,
XrLocalizationMapML* maps);
The list of localization maps returned will depend on the current device
mapping mode.
Only the localization maps associated with the current mapping mode will be
returned by this call.
Device mapping mode (e.g. XR_LOCALIZATION_MAP_TYPE_ON_DEVICE_ML or
XR_LOCALIZATION_MAP_TYPE_CLOUD_ML) can only be changed via the system
application(s).
The list of maps known to the runtime may change between two calls to
xrQueryLocalizationMapsML.
This is, however, a rare occurrence, and the application may retry the
call if it receives XR_ERROR_SIZE_INSUFFICIENT.
The XrLocalizationMapQueryInfoBaseHeaderML structure is defined as:
// Provided by XR_ML_localization_map
typedef struct XrLocalizationMapQueryInfoBaseHeaderML {
XrStructureType type;
const void* next;
} XrLocalizationMapQueryInfoBaseHeaderML;
Currently no filters are available.
12.150.4. Request Localization Map
Applications can change the current map by calling xrRequestMapLocalizationML.
The xrRequestMapLocalizationML function is defined as:
// Provided by XR_ML_localization_map
XrResult xrRequestMapLocalizationML(
XrSession session,
const XrMapLocalizationRequestInfoML* requestInfo);
This is an asynchronous request. Listen for XrEventDataLocalizationChangedML events to get the results of the localization. A new request for localization will override all the past requests for localization that are yet to be completed.
The runtime must return XR_ERROR_LOCALIZATION_MAP_UNAVAILABLE_ML if
the requested map is not known to the runtime.
The XrMapLocalizationRequestInfoML structure is defined as:
// Provided by XR_ML_localization_map
typedef struct XrMapLocalizationRequestInfoML {
XrStructureType type;
const void* next;
XrUuidEXT mapUuid;
} XrMapLocalizationRequestInfoML;
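As a non-normative sketch, a request might select a map previously returned by xrQueryLocalizationMapsML; the maps array here is an assumed result of such a query.

```cpp
// Request localization into a map selected by the application.
XrMapLocalizationRequestInfoML requestInfo{XR_TYPE_MAP_LOCALIZATION_REQUEST_INFO_ML};
requestInfo.mapUuid = maps[0].mapUuid;  // assumed: from xrQueryLocalizationMapsML

CHK_XR(xrRequestMapLocalizationML(session, &requestInfo));
// The result arrives later as an XrEventDataLocalizationChangedML event.
```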
12.150.5. Import and Exporting
This API supports exporting and importing of device localization maps.
The runtime must not export AR Cloud maps and must return
XR_ERROR_LOCALIZATION_MAP_CANNOT_EXPORT_CLOUD_MAP_ML if the
application attempts to do so.
The format of the exported localization map data can change with OS version updates.
-
Backwards compatibility: exports using OS version n should work on OS versions up to and including OS version n-4.
-
Forwards compatibility: exports using OS version n are not guaranteed to work on OS versions > n.
Developers are strongly encouraged to encrypt the exported localization maps.
The xrImportLocalizationMapML function is defined as:
// Provided by XR_ML_localization_map
XrResult xrImportLocalizationMapML(
XrSession session,
const XrLocalizationMapImportInfoML* importInfo,
XrUuidEXT* mapUuid);
The runtime must return XR_ERROR_LOCALIZATION_MAP_ALREADY_EXISTS_ML
if the map that is being imported already exists.
The runtime must return XR_ERROR_LOCALIZATION_MAP_INCOMPATIBLE_ML if
the map being imported is not compatible.
xrImportLocalizationMapML may take a long time to complete; as such applications should not call this from the frame loop.
The XrLocalizationMapImportInfoML structure is defined as:
// Provided by XR_ML_localization_map
typedef struct XrLocalizationMapImportInfoML {
XrStructureType type;
const void* next;
uint32_t size;
char* data;
} XrLocalizationMapImportInfoML;
Exporting
The xrCreateExportedLocalizationMapML function is defined as:
// Provided by XR_ML_localization_map
XrResult xrCreateExportedLocalizationMapML(
XrSession session,
const XrUuidEXT* mapUuid,
XrExportedLocalizationMapML* map);
xrCreateExportedLocalizationMapML creates a frozen copy of the
mapUuid localization map that can be exported using
xrGetExportedLocalizationMapDataML.
Applications should call xrDestroyExportedLocalizationMapML once they
are done with the data.
The xrDestroyExportedLocalizationMapML function is defined as:
// Provided by XR_ML_localization_map
XrResult xrDestroyExportedLocalizationMapML(
XrExportedLocalizationMapML map);
The xrGetExportedLocalizationMapDataML function is defined as:
// Provided by XR_ML_localization_map
XrResult xrGetExportedLocalizationMapDataML(
XrExportedLocalizationMapML map,
uint32_t bufferCapacityInput,
uint32_t* bufferCountOutput,
char* buffer);
xrGetExportedLocalizationMapDataML may take a long time to complete; as such applications should not call this from the frame loop.
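As a non-normative sketch, exporting follows the usual OpenXR two-call idiom; mapUuid is assumed to identify an "On Device" map previously returned by xrQueryLocalizationMapsML.

```cpp
// Export a localization map: create a frozen copy, size the buffer,
// retrieve the data, then destroy the frozen copy.
XrExportedLocalizationMapML exportedMap = XR_NULL_HANDLE;
CHK_XR(xrCreateExportedLocalizationMapML(session, &mapUuid, &exportedMap));

uint32_t dataSize = 0;
CHK_XR(xrGetExportedLocalizationMapDataML(exportedMap, 0, &dataSize, nullptr));
std::vector<char> mapData(dataSize);
CHK_XR(xrGetExportedLocalizationMapDataML(exportedMap, dataSize,
                                          &dataSize, mapData.data()));

CHK_XR(xrDestroyExportedLocalizationMapML(exportedMap));
// mapData can now be persisted (ideally encrypted) and later passed to
// xrImportLocalizationMapML via XrLocalizationMapImportInfoML.
```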
12.150.6. Reference Space
XR_REFERENCE_SPACE_TYPE_LOCALIZATION_MAP_ML is the reference space of
the current localization map.
Applications localized into the same localization map can use this reference space to place virtual content in the same physical location.
Creating a space is done via xrCreateReferenceSpace.
The runtime must emit the XrEventDataReferenceSpaceChangePending event if the reference space is changing due to a localization map change.
The runtime may move the physical location of the origin of this space as it updates its understanding of the physical space to maintain consistency without sending the XrEventDataReferenceSpaceChangePending event.
For a given XrUuidEXT the runtime must keep the position and orientation of this space identical across more than one XrInstance, including for different users and different hardware.
The runtime must create this reference space as gravity-aligned to exclude pitch and roll, with +Y up.
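As a non-normative sketch, the space is created like any other reference space; an identity pose at the map origin is assumed here.

```cpp
// Create the localization-map reference space with an identity pose.
XrReferenceSpaceCreateInfo createInfo{XR_TYPE_REFERENCE_SPACE_CREATE_INFO};
createInfo.referenceSpaceType = XR_REFERENCE_SPACE_TYPE_LOCALIZATION_MAP_ML;
createInfo.poseInReferenceSpace = {{0.f, 0.f, 0.f, 1.f}, {0.f, 0.f, 0.f}};

XrSpace mapSpace = XR_NULL_HANDLE;
CHK_XR(xrCreateReferenceSpace(session, &createInfo, &mapSpace));
```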
12.150.7. Example code
The following code shows how to list the currently available localization maps.
uint32_t mapCount = 0;
CHK_XR(xrQueryLocalizationMapsML(session, nullptr, 0, &mapCount, nullptr));
std::vector<XrLocalizationMapML> maps(mapCount, {XR_TYPE_LOCALIZATION_MAP_ML});
CHK_XR(xrQueryLocalizationMapsML(session, nullptr, static_cast<uint32_t>(maps.size()), &mapCount, maps.data()));
This code shows how to poll for localization events.
XrEventDataBuffer event{XR_TYPE_EVENT_DATA_BUFFER};
XrResult result = xrPollEvent(instance, &event);
if (result == XR_SUCCESS) {
switch (event.type) {
case XR_TYPE_EVENT_DATA_LOCALIZATION_CHANGED_ML: {
const auto& localization_event =
*reinterpret_cast<XrEventDataLocalizationChangedML*>(&event);
// Use the data in localization_event.
break;
}
// Handle other events as well as usual.
}
}
12.150.8. Constants
New Object Types
XR_DEFINE_HANDLE(XrExportedLocalizationMapML)
XrExportedLocalizationMapML represents a frozen exported localization map.
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_LOCALIZATION_MAP_ML
-
XR_TYPE_EVENT_DATA_LOCALIZATION_CHANGED_ML
-
XR_TYPE_MAP_LOCALIZATION_REQUEST_INFO_ML
-
XR_TYPE_LOCALIZATION_MAP_IMPORT_INFO_ML
-
XR_TYPE_LOCALIZATION_ENABLE_EVENTS_INFO_ML
XrResult enumeration is extended with:
-
XR_ERROR_LOCALIZATION_MAP_INCOMPATIBLE_ML
-
XR_ERROR_LOCALIZATION_MAP_UNAVAILABLE_ML
-
XR_ERROR_LOCALIZATION_MAP_IMPORT_EXPORT_PERMISSION_DENIED_ML
-
XR_ERROR_LOCALIZATION_MAP_PERMISSION_DENIED_ML
-
XR_ERROR_LOCALIZATION_MAP_ALREADY_EXISTS_ML
-
XR_ERROR_LOCALIZATION_MAP_CANNOT_EXPORT_CLOUD_MAP_ML
-
XR_ERROR_LOCALIZATION_MAP_FAIL_ML
New Enums
// Provided by XR_ML_localization_map
typedef enum XrLocalizationMapStateML {
XR_LOCALIZATION_MAP_STATE_NOT_LOCALIZED_ML = 0,
XR_LOCALIZATION_MAP_STATE_LOCALIZED_ML = 1,
XR_LOCALIZATION_MAP_STATE_LOCALIZATION_PENDING_ML = 2,
XR_LOCALIZATION_MAP_STATE_LOCALIZATION_SLEEPING_BEFORE_RETRY_ML = 3,
XR_LOCALIZATION_MAP_STATE_MAX_ENUM_ML = 0x7FFFFFFF
} XrLocalizationMapStateML;
| Enum | Description |
|---|---|
| XR_LOCALIZATION_MAP_STATE_NOT_LOCALIZED_ML | The system is not localized into a map. Features like Spatial Anchors relying on localization will not work. |
| XR_LOCALIZATION_MAP_STATE_LOCALIZED_ML | The system is localized into a map. |
| XR_LOCALIZATION_MAP_STATE_LOCALIZATION_PENDING_ML | The system is localizing into a map. |
| XR_LOCALIZATION_MAP_STATE_LOCALIZATION_SLEEPING_BEFORE_RETRY_ML | Initial localization failed, the system will retry localization. |
// Provided by XR_ML_localization_map
typedef enum XrLocalizationMapConfidenceML {
XR_LOCALIZATION_MAP_CONFIDENCE_POOR_ML = 0,
XR_LOCALIZATION_MAP_CONFIDENCE_FAIR_ML = 1,
XR_LOCALIZATION_MAP_CONFIDENCE_GOOD_ML = 2,
XR_LOCALIZATION_MAP_CONFIDENCE_EXCELLENT_ML = 3,
XR_LOCALIZATION_MAP_CONFIDENCE_MAX_ENUM_ML = 0x7FFFFFFF
} XrLocalizationMapConfidenceML;
| Enum | Description |
|---|---|
| XR_LOCALIZATION_MAP_CONFIDENCE_POOR_ML | The localization map has poor confidence, systems relying on the localization map are likely to have poor performance. |
| XR_LOCALIZATION_MAP_CONFIDENCE_FAIR_ML | The confidence is fair, current environmental conditions may adversely affect localization. |
| XR_LOCALIZATION_MAP_CONFIDENCE_GOOD_ML | The confidence is high, persistent content should be stable. |
| XR_LOCALIZATION_MAP_CONFIDENCE_EXCELLENT_ML | This is a very high-confidence localization, persistent content will be very stable. |
// Provided by XR_ML_localization_map
typedef enum XrLocalizationMapTypeML {
XR_LOCALIZATION_MAP_TYPE_ON_DEVICE_ML = 0,
XR_LOCALIZATION_MAP_TYPE_CLOUD_ML = 1,
XR_LOCALIZATION_MAP_TYPE_MAX_ENUM_ML = 0x7FFFFFFF
} XrLocalizationMapTypeML;
| Enum | Description |
|---|---|
| XR_LOCALIZATION_MAP_TYPE_ON_DEVICE_ML | The system is localized into an On-Device map, published anchors are not shared between different devices. |
| XR_LOCALIZATION_MAP_TYPE_CLOUD_ML | The system is localized into a Cloud Map, anchors are shared per cloud account settings. |
New Enum Constants
XrReferenceSpaceType enumeration is extended with:
-
XR_REFERENCE_SPACE_TYPE_LOCALIZATION_MAP_ML
New Defines
Version History
-
Revision 1, 2023-06-23 (Ron Bessems)
-
Initial extension description
-
12.151. XR_ML_marker_understanding
- Name String
-
XR_ML_marker_understanding - Extension Type
-
Instance extension
- Registered Extension Number
-
139
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2023-05-18
- Contributors
-
Robbie Bridgewater, Magic Leap
Ron Bessems, Magic Leap
Karthik Kadappan, Magic Leap
12.151.1. Overview
This extension can be used to track and query fiducial markers like QR codes, AprilTag markers, and ArUco markers, and to detect, but not locate, 1D barcodes like Code 128 and UPC-A.
Permissions
Android applications must have the com.magicleap.permission.MARKER_TRACKING permission listed in their manifest to use this extension.
12.151.2. Creating a Marker Detector
// Provided by XR_ML_marker_understanding
XR_DEFINE_HANDLE(XrMarkerDetectorML)
The XrMarkerDetectorML handle represents the resources for detecting one or more markers.
A marker detector handle detects a single type of marker, specified by a value of XrMarkerTypeML. To detect more than one marker type, a runtime may support creating multiple marker detector handles.
This handle can be used to detect markers using other functions in this extension.
The xrCreateMarkerDetectorML function is defined as:
// Provided by XR_ML_marker_understanding
XrResult xrCreateMarkerDetectorML(
XrSession session,
const XrMarkerDetectorCreateInfoML* createInfo,
XrMarkerDetectorML* markerDetector);
An application creates an XrMarkerDetectorML handle using the
xrCreateMarkerDetectorML function.
If createInfo contains mutually exclusive contents, the runtime must
return XR_ERROR_MARKER_DETECTOR_INVALID_CREATE_INFO_ML.
If a runtime is unable to create a marker detector due to some internal
limit, the runtime must return XR_ERROR_LIMIT_REACHED.
The XrMarkerDetectorCreateInfoML structure is defined as:
// Provided by XR_ML_marker_understanding
typedef struct XrMarkerDetectorCreateInfoML {
XrStructureType type;
const void* next;
XrMarkerDetectorProfileML profile;
XrMarkerTypeML markerType;
} XrMarkerDetectorCreateInfoML;
The possible premade profiles for an XrMarkerDetectorML are specified by the XrMarkerDetectorProfileML enumeration:
// Provided by XR_ML_marker_understanding
typedef enum XrMarkerDetectorProfileML {
XR_MARKER_DETECTOR_PROFILE_DEFAULT_ML = 0,
XR_MARKER_DETECTOR_PROFILE_SPEED_ML = 1,
XR_MARKER_DETECTOR_PROFILE_ACCURACY_ML = 2,
XR_MARKER_DETECTOR_PROFILE_SMALL_TARGETS_ML = 3,
XR_MARKER_DETECTOR_PROFILE_LARGE_FOV_ML = 4,
XR_MARKER_DETECTOR_PROFILE_CUSTOM_ML = 5,
XR_MARKER_DETECTOR_PROFILE_MAX_ENUM_ML = 0x7FFFFFFF
} XrMarkerDetectorProfileML;
The type of marker to be tracked is specified via XrMarkerTypeML:
// Provided by XR_ML_marker_understanding
typedef enum XrMarkerTypeML {
XR_MARKER_TYPE_ARUCO_ML = 0,
XR_MARKER_TYPE_APRIL_TAG_ML = 1,
XR_MARKER_TYPE_QR_ML = 2,
XR_MARKER_TYPE_EAN_13_ML = 3,
XR_MARKER_TYPE_UPC_A_ML = 4,
XR_MARKER_TYPE_CODE_128_ML = 5,
XR_MARKER_TYPE_MAX_ENUM_ML = 0x7FFFFFFF
} XrMarkerTypeML;
An application specifies details of the type of marker to be tracked by
chaining an XrMarkerDetector*InfoML structure to
XrMarkerDetectorCreateInfoML.
Some of these structure types must be included to enable detection or
locating, depending on the marker type.
The following structures are used by the ArUco, AprilTag, and QR code detectors:
| Marker Type | Structures |
|---|---|
| XR_MARKER_TYPE_ARUCO_ML | XrMarkerDetectorArucoInfoML, XrMarkerDetectorSizeInfoML |
| XR_MARKER_TYPE_APRIL_TAG_ML | XrMarkerDetectorAprilTagInfoML, XrMarkerDetectorSizeInfoML |
| XR_MARKER_TYPE_QR_ML | XrMarkerDetectorSizeInfoML |
The XrMarkerDetectorSizeInfoML structure may be optional, depending on
runtime support for estimating marker size.
A higher localization accuracy may be obtained by specifying the marker
size.
If the runtime does not support estimating marker size, it must return
XR_ERROR_VALIDATION_FAILURE when XrMarkerDetectorSizeInfoML is
omitted.
The XrMarkerDetectorArucoInfoML structure extends XrMarkerDetectorCreateInfoML and is defined as:
// Provided by XR_ML_marker_understanding
typedef struct XrMarkerDetectorArucoInfoML {
XrStructureType type;
const void* next;
XrMarkerArucoDictML arucoDict;
} XrMarkerDetectorArucoInfoML;
This structure is required by the XR_MARKER_TYPE_ARUCO_ML detector.
The XrMarkerArucoDictML enumeration is defined as:
// Provided by XR_ML_marker_understanding
typedef enum XrMarkerArucoDictML {
XR_MARKER_ARUCO_DICT_4X4_50_ML = 0,
XR_MARKER_ARUCO_DICT_4X4_100_ML = 1,
XR_MARKER_ARUCO_DICT_4X4_250_ML = 2,
XR_MARKER_ARUCO_DICT_4X4_1000_ML = 3,
XR_MARKER_ARUCO_DICT_5X5_50_ML = 4,
XR_MARKER_ARUCO_DICT_5X5_100_ML = 5,
XR_MARKER_ARUCO_DICT_5X5_250_ML = 6,
XR_MARKER_ARUCO_DICT_5X5_1000_ML = 7,
XR_MARKER_ARUCO_DICT_6X6_50_ML = 8,
XR_MARKER_ARUCO_DICT_6X6_100_ML = 9,
XR_MARKER_ARUCO_DICT_6X6_250_ML = 10,
XR_MARKER_ARUCO_DICT_6X6_1000_ML = 11,
XR_MARKER_ARUCO_DICT_7X7_50_ML = 12,
XR_MARKER_ARUCO_DICT_7X7_100_ML = 13,
XR_MARKER_ARUCO_DICT_7X7_250_ML = 14,
XR_MARKER_ARUCO_DICT_7X7_1000_ML = 15,
XR_MARKER_ARUCO_DICT_MAX_ENUM_ML = 0x7FFFFFFF
} XrMarkerArucoDictML;
The supported predefined ArUco dictionaries are listed in the XrMarkerArucoDictML enumeration above.
The XrMarkerDetectorAprilTagInfoML structure extends XrMarkerDetectorCreateInfoML and is defined as:
// Provided by XR_ML_marker_understanding
typedef struct XrMarkerDetectorAprilTagInfoML {
XrStructureType type;
const void* next;
XrMarkerAprilTagDictML aprilTagDict;
} XrMarkerDetectorAprilTagInfoML;
This structure is required by the XR_MARKER_TYPE_APRIL_TAG_ML
detector.
The XrMarkerAprilTagDictML enumeration is defined as:
// Provided by XR_ML_marker_understanding
typedef enum XrMarkerAprilTagDictML {
XR_MARKER_APRIL_TAG_DICT_16H5_ML = 0,
XR_MARKER_APRIL_TAG_DICT_25H9_ML = 1,
XR_MARKER_APRIL_TAG_DICT_36H10_ML = 2,
XR_MARKER_APRIL_TAG_DICT_36H11_ML = 3,
XR_MARKER_APRIL_TAG_DICT_MAX_ENUM_ML = 0x7FFFFFFF
} XrMarkerAprilTagDictML;
The supported predefined AprilTag dictionaries are listed in the XrMarkerAprilTagDictML enumeration above.
The XrMarkerDetectorSizeInfoML structure extends XrMarkerDetectorCreateInfoML and is defined as:
// Provided by XR_ML_marker_understanding
typedef struct XrMarkerDetectorSizeInfoML {
XrStructureType type;
const void* next;
float markerLength;
} XrMarkerDetectorSizeInfoML;
Pose estimation accuracy depends on the accuracy of the specified
markerLength.
This structure is used by XR_MARKER_TYPE_ARUCO_ML,
XR_MARKER_TYPE_APRIL_TAG_ML, and XR_MARKER_TYPE_QR_ML detectors.
The xrDestroyMarkerDetectorML function is defined as:
// Provided by XR_ML_marker_understanding
XrResult xrDestroyMarkerDetectorML(
XrMarkerDetectorML markerDetector);
Destroys a marker detector handle.
Using a custom profile
The XrMarkerDetectorCustomProfileInfoML structure extends XrMarkerDetectorCreateInfoML and is defined as:
// Provided by XR_ML_marker_understanding
typedef struct XrMarkerDetectorCustomProfileInfoML {
XrStructureType type;
const void* next;
XrMarkerDetectorFpsML fpsHint;
XrMarkerDetectorResolutionML resolutionHint;
XrMarkerDetectorCameraML cameraHint;
XrMarkerDetectorCornerRefineMethodML cornerRefineMethod;
XrBool32 useEdgeRefinement;
XrMarkerDetectorFullAnalysisIntervalML fullAnalysisIntervalHint;
} XrMarkerDetectorCustomProfileInfoML;
All marker detectors share some underlying hardware and resources, and thus not all combinations of profiles between multiple detectors are possible. If a profile (preset or custom) specified during marker detector creation differs from those used by existing marker detectors, the runtime will attempt to honor the highest frame rate and resolution requested.
CPU load due to marker tracking is a function of the chosen XrMarkerTypeML, XrMarkerDetectorFpsML, and XrMarkerDetectorResolutionML.
The XrMarkerDetectorFpsML enumeration is defined as:
// Provided by XR_ML_marker_understanding
typedef enum XrMarkerDetectorFpsML {
XR_MARKER_DETECTOR_FPS_LOW_ML = 0,
XR_MARKER_DETECTOR_FPS_MEDIUM_ML = 1,
XR_MARKER_DETECTOR_FPS_HIGH_ML = 2,
XR_MARKER_DETECTOR_FPS_MAX_ML = 3,
XR_MARKER_DETECTOR_FPS_MAX_ENUM_ML = 0x7FFFFFFF
} XrMarkerDetectorFpsML;
Used to hint to the back-end the maximum number of frames per second that should be analyzed.
The XrMarkerDetectorResolutionML enumeration is defined as:
// Provided by XR_ML_marker_understanding
typedef enum XrMarkerDetectorResolutionML {
XR_MARKER_DETECTOR_RESOLUTION_LOW_ML = 0,
XR_MARKER_DETECTOR_RESOLUTION_MEDIUM_ML = 1,
XR_MARKER_DETECTOR_RESOLUTION_HIGH_ML = 2,
XR_MARKER_DETECTOR_RESOLUTION_MAX_ENUM_ML = 0x7FFFFFFF
} XrMarkerDetectorResolutionML;
Used to hint to the back-end the resolution that should be used. CPU load is a combination of chosen XrMarkerTypeML, XrMarkerDetectorFpsML, and XrMarkerDetectorResolutionML.
The XrMarkerDetectorCameraML enumeration is defined as:
// Provided by XR_ML_marker_understanding
typedef enum XrMarkerDetectorCameraML {
XR_MARKER_DETECTOR_CAMERA_RGB_CAMERA_ML = 0,
XR_MARKER_DETECTOR_CAMERA_WORLD_CAMERAS_ML = 1,
XR_MARKER_DETECTOR_CAMERA_MAX_ENUM_ML = 0x7FFFFFFF
} XrMarkerDetectorCameraML;
The XrMarkerDetectorCameraML enum values are used to hint which camera should be used. This is set in the XrMarkerDetectorCustomProfileInfoML.
The RGB camera has a higher resolution than world cameras and is better suited for use cases where the target to be tracked is small or needs to be detected from far away.
XR_MARKER_DETECTOR_CAMERA_WORLD_CAMERAS_ML makes use of multiple
cameras to improve accuracy and increase the FoV for detection.
The XrMarkerDetectorCornerRefineMethodML enumeration is defined as:
// Provided by XR_ML_marker_understanding
typedef enum XrMarkerDetectorCornerRefineMethodML {
XR_MARKER_DETECTOR_CORNER_REFINE_METHOD_NONE_ML = 0,
XR_MARKER_DETECTOR_CORNER_REFINE_METHOD_SUBPIX_ML = 1,
XR_MARKER_DETECTOR_CORNER_REFINE_METHOD_CONTOUR_ML = 2,
XR_MARKER_DETECTOR_CORNER_REFINE_METHOD_APRIL_TAG_ML = 3,
XR_MARKER_DETECTOR_CORNER_REFINE_METHOD_MAX_ENUM_ML = 0x7FFFFFFF
} XrMarkerDetectorCornerRefineMethodML;
The ArUco/AprilTag detector comes with several corner refinement methods. Choosing the right corner refinement method has an impact on the accuracy and speed trade-off that comes with each detection pipeline.
The XrMarkerDetectorFullAnalysisIntervalML enumeration is defined as:
// Provided by XR_ML_marker_understanding
typedef enum XrMarkerDetectorFullAnalysisIntervalML {
XR_MARKER_DETECTOR_FULL_ANALYSIS_INTERVAL_MAX_ML = 0,
XR_MARKER_DETECTOR_FULL_ANALYSIS_INTERVAL_FAST_ML = 1,
XR_MARKER_DETECTOR_FULL_ANALYSIS_INTERVAL_MEDIUM_ML = 2,
XR_MARKER_DETECTOR_FULL_ANALYSIS_INTERVAL_SLOW_ML = 3,
XR_MARKER_DETECTOR_FULL_ANALYSIS_INTERVAL_MAX_ENUM_ML = 0x7FFFFFFF
} XrMarkerDetectorFullAnalysisIntervalML;
In order to improve performance, the detectors do not always run on the full frame. Full frame analysis is however necessary to detect new markers that were not detected before. Use this option to control how often the detector should detect new markers and its impact on tracking performance.
12.151.3. Scanning for markers
The xrSnapshotMarkerDetectorML function is defined as:
// Provided by XR_ML_marker_understanding
XrResult xrSnapshotMarkerDetectorML(
XrMarkerDetectorML markerDetector,
XrMarkerDetectorSnapshotInfoML* snapshotInfo);
Collects the latest marker detector state and makes it ready for inspection.
This function only snapshots the non-pose state of markers.
Once called, if a new snapshot is not yet available, the runtime must set
the state of the marker detector to
XR_MARKER_DETECTOR_STATUS_PENDING_ML.
If a new state is available the runtime must set the state to
XR_MARKER_DETECTOR_STATUS_READY_ML.
If an error occurred the runtime must set the state to
XR_MARKER_DETECTOR_STATUS_ERROR_ML.
The application may attempt the snapshot again.
Once the application has inspected the state it is interested in it can
call this function again and the state is set to
XR_MARKER_DETECTOR_STATUS_PENDING_ML until a new state has been
snapshotted.
After each snapshot, only the currently detected markers are available for
inspection, though the same marker may repeatedly be detected across
snapshots.
The XrMarkerDetectorSnapshotInfoML structure is defined as:
// Provided by XR_ML_marker_understanding
typedef struct XrMarkerDetectorSnapshotInfoML {
XrStructureType type;
const void* next;
} XrMarkerDetectorSnapshotInfoML;
The xrGetMarkerDetectorStateML function is defined as:
// Provided by XR_ML_marker_understanding
XrResult xrGetMarkerDetectorStateML(
XrMarkerDetectorML markerDetector,
XrMarkerDetectorStateML* state);
xrGetMarkerDetectorStateML is used after calling
xrSnapshotMarkerDetectorML to check the current status of the snapshot
in progress.
When XrMarkerDetectorStateML::state ==
XR_MARKER_DETECTOR_STATUS_READY_ML, the detector is ready to be
queried, while XR_MARKER_DETECTOR_STATUS_PENDING_ML indicates the
snapshot is still in progress.
XR_MARKER_DETECTOR_STATUS_ERROR_ML indicates that the runtime has
encountered an error getting a snapshot for the requested detector, which
may require user intervention to solve.
If xrSnapshotMarkerDetectorML has not yet been called for the
markerDetector, the runtime must return
XR_ERROR_CALL_ORDER_INVALID.
The XrMarkerDetectorStateML structure is defined as:
// Provided by XR_ML_marker_understanding
typedef struct XrMarkerDetectorStateML {
XrStructureType type;
void* next;
XrMarkerDetectorStatusML state;
} XrMarkerDetectorStateML;
The XrMarkerDetectorStatusML enumeration is defined as:
// Provided by XR_ML_marker_understanding
typedef enum XrMarkerDetectorStatusML {
XR_MARKER_DETECTOR_STATUS_PENDING_ML = 0,
XR_MARKER_DETECTOR_STATUS_READY_ML = 1,
XR_MARKER_DETECTOR_STATUS_ERROR_ML = 2,
XR_MARKER_DETECTOR_STATUS_MAX_ENUM_ML = 0x7FFFFFFF
} XrMarkerDetectorStatusML;
The XrMarkerDetectorStatusML enumeration describes the current state of the marker detector. It is queried via xrGetMarkerDetectorStateML to determine if the marker tracker is currently available for inspection.
12.151.4. Getting Marker Results
The xrGetMarkersML function is defined as:
// Provided by XR_ML_marker_understanding
XrResult xrGetMarkersML(
XrMarkerDetectorML markerDetector,
uint32_t markerCapacityInput,
uint32_t* markerCountOutput,
XrMarkerML* markers);
Get the list of currently snapshotted marker atoms. This function must only
be called when the state of the detector is
XR_MARKER_DETECTOR_STATUS_READY_ML.
If xrGetMarkerDetectorStateML has not been called and returned
XR_MARKER_DETECTOR_STATUS_READY_ML since the last invocation of
xrSnapshotMarkerDetectorML, the runtime must return
XR_ERROR_CALL_ORDER_INVALID.
The returned atoms are only valid while in the
XR_MARKER_DETECTOR_STATUS_READY_ML state.
The runtime must return the same atom value for the same uniquely
identifiable marker across successive snapshots.
It is unspecified what happens if the detector is observing two markers with
the same identification patterns.
Assuming the same set of markers are in view across several snapshots, the runtime should return the same set of atoms. An application can use the list of atoms as a simple test of whether a particular marker has gone in or out of view.
Note that XrMarkerML atoms are only usable with the
XrMarkerDetectorML that returned them.
This function follows the two-call
idiom for filling the markers.
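The two-call idiom can be sketched generically; the `fakeGetMarkers` function below is a stand-in with the same shape as xrGetMarkersML (capacity in, count out, buffer) backed by a fixed list, purely to illustrate the calling pattern, not a real runtime call:

```cpp
#include <cstdint>
#include <vector>

// Stand-in result/handle types so the sketch is self-contained; the
// real ones come from openxr.h.
using XrResult = int;
constexpr XrResult XR_SUCCESS = 0;
using XrMarkerML = uint64_t;

// Hypothetical enumerator shaped like xrGetMarkersML.
static const std::vector<XrMarkerML> g_detected = {7, 42, 99};

XrResult fakeGetMarkers(uint32_t capacityInput, uint32_t* countOutput,
                        XrMarkerML* markers) {
    *countOutput = static_cast<uint32_t>(g_detected.size());
    if (capacityInput == 0) return XR_SUCCESS;  // first call: size query only
    for (uint32_t i = 0; i < *countOutput && i < capacityInput; ++i)
        markers[i] = g_detected[i];
    return XR_SUCCESS;
}

// The two-call idiom: query the required count, size the buffer, fill it.
std::vector<XrMarkerML> enumerateMarkers() {
    uint32_t count = 0;
    fakeGetMarkers(0, &count, nullptr);         // call 1: get the count
    std::vector<XrMarkerML> out(count);
    fakeGetMarkers(count, &count, out.data());  // call 2: fill the buffer
    out.resize(count);
    return out;
}
```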
// Provided by XR_ML_marker_understanding
XR_DEFINE_ATOM(XrMarkerML)
The unique marker key used to retrieve the data about detected markers.
For an XrMarkerDetectorML a runtime must use the same value of
XrMarkerML each time a marker is detected in a snapshot, but an
application cannot use a cached atom if it was not present in the most
recent snapshot.
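Because atoms are stable across snapshots for the same uniquely identifiable marker, the in/out-of-view test mentioned above reduces to set differences between the atom lists of two consecutive snapshots. A minimal sketch (SnapshotDiff and diffSnapshots are hypothetical helpers, not part of the extension):

```cpp
#include <cstdint>
#include <set>

using XrMarkerML = uint64_t;

struct SnapshotDiff {
    std::set<XrMarkerML> entered;  // atoms new in the current snapshot
    std::set<XrMarkerML> exited;   // atoms no longer present
};

// Compare the atom sets from two consecutive snapshots; atoms absent
// from the current snapshot must not be reused, so they count as exited.
SnapshotDiff diffSnapshots(const std::set<XrMarkerML>& prev,
                           const std::set<XrMarkerML>& curr) {
    SnapshotDiff d;
    for (XrMarkerML m : curr)
        if (!prev.count(m)) d.entered.insert(m);
    for (XrMarkerML m : prev)
        if (!curr.count(m)) d.exited.insert(m);
    return d;
}
```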
The xrGetMarkerNumberML function is defined as:
// Provided by XR_ML_marker_understanding
XrResult xrGetMarkerNumberML(
XrMarkerDetectorML markerDetector,
XrMarkerML marker,
uint64_t* number);
Get the numerical value of a marker, such as the ArUco ID.
xrGetMarkerNumberML must only be called when the state of the
detector is XR_MARKER_DETECTOR_STATUS_READY_ML.
If the marker does not have an associated numerical value, the runtime must
return XR_ERROR_MARKER_DETECTOR_INVALID_DATA_QUERY_ML.
If xrGetMarkerDetectorStateML has not been called and returned
XR_MARKER_DETECTOR_STATUS_READY_ML since the last invocation of
xrSnapshotMarkerDetectorML, the runtime must return
XR_ERROR_CALL_ORDER_INVALID.
The runtime must return XR_ERROR_MARKER_INVALID_ML if the marker atom
is invalid.
The xrGetMarkerStringML function is defined as:
// Provided by XR_ML_marker_understanding
XrResult xrGetMarkerStringML(
XrMarkerDetectorML markerDetector,
XrMarkerML marker,
uint32_t bufferCapacityInput,
uint32_t* bufferCountOutput,
char* buffer);
Get the string value of a marker, such as the QR encoded string.
xrGetMarkerStringML must only be called when the state of the
detector is XR_MARKER_DETECTOR_STATUS_READY_ML.
If the marker does not have an associated string value, the runtime must
return XR_ERROR_MARKER_DETECTOR_INVALID_DATA_QUERY_ML.
If xrGetMarkerDetectorStateML has not been called and returned
XR_MARKER_DETECTOR_STATUS_READY_ML since the last invocation of
xrSnapshotMarkerDetectorML, the runtime must return
XR_ERROR_CALL_ORDER_INVALID.
This function follows the two-call
idiom for filling the buffer.
The runtime must return XR_ERROR_MARKER_INVALID_ML if the marker atom
is invalid.
The xrGetMarkerReprojectionErrorML function is defined as:
// Provided by XR_ML_marker_understanding
XrResult xrGetMarkerReprojectionErrorML(
XrMarkerDetectorML markerDetector,
XrMarkerML marker,
float* reprojectionErrorMeters);
Get the reprojection error of a marker; this is only available for certain
types of markers.
xrGetMarkerReprojectionErrorML must only be called when the state of
the detector is XR_MARKER_DETECTOR_STATUS_READY_ML.
If xrGetMarkerDetectorStateML has not been called and returned
XR_MARKER_DETECTOR_STATUS_READY_ML since the last invocation of
xrSnapshotMarkerDetectorML, the runtime must return
XR_ERROR_CALL_ORDER_INVALID.
A high reprojection error means that the estimated pose of the marker does not match well with the 2D detection on the processed video frame, and thus the pose may be inaccurate. The error is given in meters, representing the displacement between the real marker and its estimated pose. It is a normalized measure, independent of marker distance or length.
The runtime must return XR_ERROR_MARKER_INVALID_ML if the marker atom
is invalid.
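As an illustrative application-side policy (not mandated by the extension), poses whose reprojection error exceeds a budget can be discarded before use; the MarkerError struct and the 0.01 m default threshold below are assumptions made for this sketch:

```cpp
#include <cstdint>
#include <vector>

using XrMarkerML = uint64_t;

// Pairs a marker atom with the value reported by
// xrGetMarkerReprojectionErrorML (hypothetical helper struct).
struct MarkerError {
    XrMarkerML marker;
    float reprojectionErrorMeters;
};

// Keep only markers whose normalized reprojection error is below the
// budget; the default is an illustrative choice, not a spec value.
std::vector<XrMarkerML> acceptablePoses(const std::vector<MarkerError>& in,
                                        float maxErrorMeters = 0.01f) {
    std::vector<XrMarkerML> out;
    for (const auto& e : in)
        if (e.reprojectionErrorMeters < maxErrorMeters)
            out.push_back(e.marker);
    return out;
}
```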
The xrGetMarkerLengthML function is defined as:
// Provided by XR_ML_marker_understanding
XrResult xrGetMarkerLengthML(
XrMarkerDetectorML markerDetector,
XrMarkerML marker,
float* meters);
Get the size of the marker, defined as the length in meters per side. If the application created the detector while passing in an XrMarkerDetectorSizeInfoML, this query may be redundant. xrGetMarkerLengthML is primarily intended to query a runtime-estimated size when an application did not indicate the expected size via XrMarkerDetectorSizeInfoML.
xrGetMarkerLengthML must only be called when the state of the
detector is XR_MARKER_DETECTOR_STATUS_READY_ML.
If xrGetMarkerDetectorStateML has not been called and returned
XR_MARKER_DETECTOR_STATUS_READY_ML since the last invocation of
xrSnapshotMarkerDetectorML, the runtime must return
XR_ERROR_CALL_ORDER_INVALID.
The runtime must return XR_ERROR_MARKER_INVALID_ML if the marker atom
is invalid.
12.151.5. Getting an XrSpace from Marker Results
The xrCreateMarkerSpaceML function is defined as:
// Provided by XR_ML_marker_understanding
XrResult xrCreateMarkerSpaceML(
XrSession session,
const XrMarkerSpaceCreateInfoML* createInfo,
XrSpace* space);
Creates an XrSpace from a currently snapshotted marker.
The space may still be used even if the marker is later not in the FOV, or
even if the marker detector has been destroyed.
In such a scenario, the XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT and
XR_SPACE_LOCATION_POSITION_TRACKED_BIT must be false, but
XR_SPACE_LOCATION_POSITION_VALID_BIT and
XR_SPACE_LOCATION_ORIENTATION_VALID_BIT may be set as appropriate to
the last known location.
Once an application has created a space, it may stop calling xrSnapshotMarkerDetectorML, and the position of the marker must still be updated by the runtime whenever it is aware of a more up-to-date location.
If a runtime is unable to spatially locate a snapshotted marker, it may
return XR_ERROR_MARKER_DETECTOR_LOCATE_FAILED_ML.
This is most likely to happen if significant time has passed since the
snapshot of markers was acquired, and the marker in question is no longer in
the user’s FOV.
Thus, an application should call xrCreateMarkerSpaceML immediately
after examining a snapshot, but should also be prepared to try again if
needed.
xrCreateMarkerSpaceML must only be called when the state of the
detector is XR_MARKER_DETECTOR_STATUS_READY_ML.
If xrGetMarkerDetectorStateML has not been called and returned
XR_MARKER_DETECTOR_STATUS_READY_ML since the last invocation of
xrSnapshotMarkerDetectorML, the runtime must return
XR_ERROR_CALL_ORDER_INVALID.
session must be the same session that created the
XrMarkerSpaceCreateInfoML::markerDetector, else the runtime
must return XR_ERROR_HANDLE_INVALID.
The runtime must return XR_ERROR_MARKER_INVALID_ML if the marker atom
is invalid.
The XrSpace origin must be located at the marker’s center. The X-Y plane of the XrSpace must be aligned with the plane of the marker with the positive Z axis coming out of the marker face.
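Given these axis conventions, a poseInMarkerSpace that offsets content in front of the marker face can be built as follows; the struct definitions are stand-ins mirroring the OpenXR math types (the real ones come from openxr.h) so the sketch is self-contained:

```cpp
// Stand-ins mirroring the OpenXR math types, so this compiles on its own.
struct XrQuaternionf { float x, y, z, w; };
struct XrVector3f   { float x, y, z; };
struct XrPosef      { XrQuaternionf orientation; XrVector3f position; };

// Identity orientation keeps the space aligned with the marker; a
// positive Z translation places content `offsetMeters` in front of the
// marker face, since +Z comes out of the face per the text above.
XrPosef poseInFrontOfMarker(float offsetMeters) {
    return XrPosef{ {0.0f, 0.0f, 0.0f, 1.0f}, {0.0f, 0.0f, offsetMeters} };
}
```

The returned pose is what an application would assign to XrMarkerSpaceCreateInfoML::poseInMarkerSpace.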
The XrMarkerSpaceCreateInfoML structure is defined as:
// Provided by XR_ML_marker_understanding
typedef struct XrMarkerSpaceCreateInfoML {
XrStructureType type;
const void* next;
XrMarkerDetectorML markerDetector;
XrMarkerML marker;
XrPosef poseInMarkerSpace;
} XrMarkerSpaceCreateInfoML;
12.151.6. Example code for locating a marker
The following example code demonstrates how to detect a marker relative to a local space, and query the contents.
XrInstance instance; // previously initialized
XrSystemId systemId; // previously initialized
XrSession session; // previously initialized
XrSpace localSpace; // previously initialized, e.g. from
// XR_REFERENCE_SPACE_TYPE_LOCAL
XrSpace viewSpace; // previously initialized, e.g. from
// XR_REFERENCE_SPACE_TYPE_VIEW
// The function pointers are previously initialized using
// xrGetInstanceProcAddr.
PFN_xrCreateMarkerDetectorML xrCreateMarkerDetectorML; // previously initialized
PFN_xrDestroyMarkerDetectorML xrDestroyMarkerDetectorML; // previously initialized
PFN_xrSnapshotMarkerDetectorML xrSnapshotMarkerDetectorML; // previously initialized
PFN_xrGetMarkerDetectorStateML xrGetMarkerDetectorStateML; // previously initialized
PFN_xrGetMarkersML xrGetMarkersML; // previously initialized
PFN_xrGetMarkerReprojectionErrorML xrGetMarkerReprojectionErrorML; // previously initialized
PFN_xrGetMarkerLengthML xrGetMarkerLengthML; // previously initialized
PFN_xrGetMarkerNumberML xrGetMarkerNumberML; // previously initialized
PFN_xrGetMarkerStringML xrGetMarkerStringML; // previously initialized
PFN_xrCreateMarkerSpaceML xrCreateMarkerSpaceML; // previously initialized
// Initialize marker detector handle
XrMarkerDetectorML markerDetector = XR_NULL_HANDLE;
XrMarkerDetectorCreateInfoML createInfo{ XR_TYPE_MARKER_DETECTOR_CREATE_INFO_ML };
createInfo.profile = XR_MARKER_DETECTOR_PROFILE_CUSTOM_ML;
createInfo.markerType = XR_MARKER_TYPE_ARUCO_ML;
// Passing a non-custom profile allows you to leave next == nullptr
XrMarkerDetectorCustomProfileInfoML customProfile{ XR_TYPE_MARKER_DETECTOR_CUSTOM_PROFILE_INFO_ML };
customProfile.fpsHint = XR_MARKER_DETECTOR_FPS_LOW_ML;
customProfile.resolutionHint = XR_MARKER_DETECTOR_RESOLUTION_HIGH_ML;
customProfile.cameraHint = XR_MARKER_DETECTOR_CAMERA_RGB_CAMERA_ML;
customProfile.cornerRefineMethod = XR_MARKER_DETECTOR_CORNER_REFINE_METHOD_CONTOUR_ML;
customProfile.useEdgeRefinement = true;
customProfile.fullAnalysisIntervalHint = XR_MARKER_DETECTOR_FULL_ANALYSIS_INTERVAL_SLOW_ML;
createInfo.next = &customProfile;
// Elect to use ArUco marker tracking, providing required dictionary
XrMarkerDetectorArucoInfoML arucoCreateInfo{ XR_TYPE_MARKER_DETECTOR_ARUCO_INFO_ML };
arucoCreateInfo.arucoDict = XR_MARKER_ARUCO_DICT_6X6_100_ML;
customProfile.next = &arucoCreateInfo;
// Specify the size of the marker to improve tracking quality
XrMarkerDetectorSizeInfoML sizeCreateInfo{ XR_TYPE_MARKER_DETECTOR_SIZE_INFO_ML };
sizeCreateInfo.markerLength = 0.2f;
arucoCreateInfo.next = &sizeCreateInfo;
CHK_XR(xrCreateMarkerDetectorML(session, &createInfo, &markerDetector));
bool queryRunning = false;
std::unordered_map <uint64_t, XrSpace> markerSpaceMap;
auto processMarkers = [&]() {
// 2 call idiom to get the markers from runtime
uint32_t markerCount;
CHK_XR(xrGetMarkersML(markerDetector, 0, &markerCount, nullptr));
std::vector<XrMarkerML> markers(markerCount);
CHK_XR(xrGetMarkersML(markerDetector, markerCount, &markerCount, markers.data()));
for(uint32_t i = 0; i < markerCount; ++i)
{
uint64_t number;
CHK_XR(xrGetMarkerNumberML(markerDetector, markers[i], &number));
// Track every marker we find.
if(markerSpaceMap.find(number) == markerSpaceMap.end())
{
// New entry
XrSpace space;
XrMarkerSpaceCreateInfoML spaceCreateInfo{ XR_TYPE_MARKER_SPACE_CREATE_INFO_ML };
spaceCreateInfo.markerDetector = markerDetector;
spaceCreateInfo.marker = markers[i];
spaceCreateInfo.poseInMarkerSpace = { {0, 0, 0, 1}, {0, 0, 0} };
CHK_XR(xrCreateMarkerSpaceML(session, &spaceCreateInfo, &space));
markerSpaceMap[number] = space;
}
// This will not work in this example with ArUco markers, but had we configured
// a marker with string content such as QR or Code 128, this is how to use it.
// uint32_t stringSize;
// CHK_XR(xrGetMarkerStringML(markerDetector, markers[i], 0, &stringSize, nullptr));
// std::string markerString(stringSize, ' ');
// CHK_XR(xrGetMarkerStringML(markerDetector, markers[i], stringSize, &stringSize, markerString.data()));
}
};
// Must be initialized to true, otherwise in the loop below, there will
// be an XR_ERROR_CALL_ORDER_INVALID due to xrSnapshotMarkerDetectorML
// not being called first
bool isReadyForSnapshot = true;
while (1) {
// ...
// For every frame in frame loop
// ...
    // This if/else block ensures that xrSnapshotMarkerDetectorML is not
    // called every frame, since the previous snapshot request might
    // still be in the midst of being processed by the runtime
if (isReadyForSnapshot) {
// Call the first snapshot
XrMarkerDetectorSnapshotInfoML detectorInfo{ XR_TYPE_MARKER_DETECTOR_SNAPSHOT_INFO_ML };
CHK_XR(xrSnapshotMarkerDetectorML(markerDetector, &detectorInfo));
isReadyForSnapshot = false;
} else {
XrMarkerDetectorStateML state{ XR_TYPE_MARKER_DETECTOR_STATE_ML };
CHK_XR(xrGetMarkerDetectorStateML(markerDetector, &state));
// For simplicity, this example will assume that the marker detector will not
// be in an erroneous state
if (state.state == XR_MARKER_DETECTOR_STATUS_READY_ML) {
processMarkers();
isReadyForSnapshot = true;
}
}
// Draw the markers as needed from markerSpaceMap.
// drawMarkers(markerSpaceMap);
// ...
// ...
}
// Cleanup
CHK_XR(xrDestroyMarkerDetectorML(markerDetector));
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_SYSTEM_MARKER_UNDERSTANDING_PROPERTIES_ML
- XR_TYPE_MARKER_DETECTOR_CREATE_INFO_ML
- XR_TYPE_MARKER_DETECTOR_ARUCO_INFO_ML
- XR_TYPE_MARKER_DETECTOR_APRIL_TAG_INFO_ML
- XR_TYPE_MARKER_DETECTOR_CUSTOM_PROFILE_INFO_ML
- XR_TYPE_MARKER_DETECTOR_SNAPSHOT_INFO_ML
- XR_TYPE_MARKER_DETECTOR_STATE_ML
- XR_TYPE_MARKER_SPACE_CREATE_INFO_ML

The XrResult enumeration is extended with:

- XR_ERROR_MARKER_DETECTOR_PERMISSION_DENIED_ML
- XR_ERROR_MARKER_DETECTOR_LOCATE_FAILED_ML
- XR_ERROR_MARKER_DETECTOR_INVALID_DATA_QUERY_ML
- XR_ERROR_MARKER_DETECTOR_INVALID_CREATE_INFO_ML
- XR_ERROR_MARKER_INVALID_ML
New Structures
The XrSystemMarkerUnderstandingPropertiesML structure is defined as:
// Provided by XR_ML_marker_understanding
typedef struct XrSystemMarkerUnderstandingPropertiesML {
XrStructureType type;
void* next;
XrBool32 supportsMarkerUnderstanding;
} XrSystemMarkerUnderstandingPropertiesML;
Version History
-
Revision 1, 2023-05-18 (Robbie Bridgewater)
-
Initial extension skeleton
-
12.152. XR_ML_spatial_anchors
- Name String
-
XR_ML_spatial_anchors - Extension Type
-
Instance extension
- Registered Extension Number
-
141
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2023-06-09
- Contributors
-
Ron Bessems, Magic Leap
Karthik Kadappan, Magic Leap
12.152.1. Overview
Spatial anchors are XrSpace entities tied to a physical location. This allows the developer to place virtual content in real-world locations.
The runtime should then adjust the XrSpace over time as needed, independently of all other spaces and anchors, to ensure that it maintains its original mapping to the real world.
Caution
If head pose is lost and regained, spatial anchors may also be lost.
It is therefore strongly recommended that once an anchor is created, it is
also persisted using the XR_ML_spatial_anchors_storage extension.
Permissions
Android applications must have the com.magicleap.permission.SPATIAL_ANCHOR permission listed in their manifest to use this extension. (protection level: normal)
12.152.2. Begin spatial anchor creation
xrCreateSpatialAnchorsAsyncML is used to create spatial anchors.
It can create anchors in different ways depending on the parameter passed
in.
This extension defines one way to create single anchors using the
XrSpatialAnchorsCreateInfoFromPoseML structure.
XR_ML_spatial_anchors_storage extends this to also create spatial
anchors from a persistent storage using their XrUuidEXT.
The xrCreateSpatialAnchorsAsyncML function is defined as:
// Provided by XR_ML_spatial_anchors
XrResult xrCreateSpatialAnchorsAsyncML(
XrSession session,
const XrSpatialAnchorsCreateInfoBaseHeaderML* createInfo,
XrFutureEXT* future);
This function starts an asynchronous spatial anchor creation. Call one of the xrPollFutureEXT functions to check the ready state on the future. Once the future is in ready state, call xrCreateSpatialAnchorsCompleteML to retrieve the results.
The XrSpatialAnchorsCreateInfoFromPoseML structure can be used to create a spatial anchor from a pose in an XrSpace.
// Provided by XR_ML_spatial_anchors
typedef struct XrSpatialAnchorsCreateInfoFromPoseML {
XrStructureType type;
const void* next;
XrSpace baseSpace;
XrPosef poseInBaseSpace;
XrTime time;
} XrSpatialAnchorsCreateInfoFromPoseML;
Note that xrCreateSpatialAnchorsCompleteML must be called with
XrCreateSpatialAnchorsCompletionML::spaceCount set to 1 when
using XrSpatialAnchorsCreateInfoFromPoseML to create a spatial anchor.
The base structure for XrSpatialAnchorsCreateInfoFromPoseML is XrSpatialAnchorsCreateInfoBaseHeaderML.
The XrSpatialAnchorsCreateInfoBaseHeaderML structure is defined as:
// Provided by XR_ML_spatial_anchors
typedef struct XrSpatialAnchorsCreateInfoBaseHeaderML {
XrStructureType type;
const void* next;
} XrSpatialAnchorsCreateInfoBaseHeaderML;
This structure is not directly used in the API; see XrSpatialAnchorsCreateInfoFromPoseML for an example of a child structure.
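The base-header pattern can be sketched with stand-in definitions (the XR_TYPE_ value below is a placeholder, not the real constant): a child structure shares the leading type/next members with the base header, so a pointer to it can be passed where the base header is expected and dispatched on type:

```cpp
// Stand-in types mirroring the spec structs above; the real ones come
// from openxr.h. The constant value is a placeholder for illustration.
using XrStructureType = int;
constexpr XrStructureType XR_TYPE_SPATIAL_ANCHORS_CREATE_INFO_FROM_POSE_ML = 1;

struct XrSpatialAnchorsCreateInfoBaseHeaderML {
    XrStructureType type;
    const void* next;
};

struct XrSpatialAnchorsCreateInfoFromPoseML {
    XrStructureType type;
    const void* next;
    // baseSpace, poseInBaseSpace, and time omitted for brevity
};

// Dispatch on the `type` member, as a runtime receiving the base
// header pointer would.
bool isFromPoseCreateInfo(const XrSpatialAnchorsCreateInfoBaseHeaderML* base) {
    return base->type == XR_TYPE_SPATIAL_ANCHORS_CREATE_INFO_FROM_POSE_ML;
}
```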
12.152.3. Complete spatial anchor operation
xrCreateSpatialAnchorsCompleteML completes the asynchronous operation
started by xrCreateSpatialAnchorsAsyncML.
The XrFutureEXT must be in ready state before calling the
completion function.
The xrCreateSpatialAnchorsCompleteML function is defined as:
// Provided by XR_ML_spatial_anchors
XrResult xrCreateSpatialAnchorsCompleteML(
XrSession session,
XrFutureEXT future,
XrCreateSpatialAnchorsCompletionML* completion);
The completion structure is XrCreateSpatialAnchorsCompletionML.
The XrCreateSpatialAnchorsCompletionML structure is defined as:
// Provided by XR_ML_spatial_anchors
typedef struct XrCreateSpatialAnchorsCompletionML {
XrStructureType type;
void* next;
XrResult futureResult;
uint32_t spaceCount;
XrSpace* spaces;
} XrCreateSpatialAnchorsCompletionML;
12.152.4. Retrieve spatial anchor state
Spatial anchor state can be queried using xrGetSpatialAnchorStateML.
The xrGetSpatialAnchorStateML function is defined as:
// Provided by XR_ML_spatial_anchors
XrResult xrGetSpatialAnchorStateML(
XrSpace anchor,
XrSpatialAnchorStateML* state);
The runtime must return XR_ERROR_VALIDATION_FAILURE if the
XrSpace is not a spatial anchor.
The XrSpatialAnchorStateML structure is defined as:
// Provided by XR_ML_spatial_anchors
typedef struct XrSpatialAnchorStateML {
XrStructureType type;
void* next;
XrSpatialAnchorConfidenceML confidence;
} XrSpatialAnchorStateML;
// Provided by XR_ML_spatial_anchors
typedef enum XrSpatialAnchorConfidenceML {
XR_SPATIAL_ANCHOR_CONFIDENCE_LOW_ML = 0,
XR_SPATIAL_ANCHOR_CONFIDENCE_MEDIUM_ML = 1,
XR_SPATIAL_ANCHOR_CONFIDENCE_HIGH_ML = 2,
XR_SPATIAL_ANCHOR_CONFIDENCE_MAX_ENUM_ML = 0x7FFFFFFF
} XrSpatialAnchorConfidenceML;
| Enum | Description |
|---|---|
| XR_SPATIAL_ANCHOR_CONFIDENCE_LOW_ML | Low quality, this anchor can be expected to move significantly. |
| XR_SPATIAL_ANCHOR_CONFIDENCE_MEDIUM_ML | Medium quality, this anchor may move slightly. |
| XR_SPATIAL_ANCHOR_CONFIDENCE_HIGH_ML | High quality, this anchor is stable and suitable for digital content attachment. |
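For illustration, an application might gate persistent content placement on the reported confidence. The following self-contained sketch mirrors the enum locally; the helper function is a hypothetical application policy, not part of the extension.

```cpp
#include <cassert>

// Local mirror of XrSpatialAnchorConfidenceML, for a self-contained example.
enum SpatialAnchorConfidence {
    CONFIDENCE_LOW = 0,    // anchor can be expected to move significantly
    CONFIDENCE_MEDIUM = 1, // anchor may move slightly
    CONFIDENCE_HIGH = 2,   // stable, suitable for digital content attachment
};

// Hypothetical application policy: only attach long-lived digital content
// to anchors the runtime reports as high confidence.
bool suitableForContentAttachment(SpatialAnchorConfidence confidence) {
    return confidence == CONFIDENCE_HIGH;
}
```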
12.152.5. Example code
This code shows how to create a spatial anchor in a synchronous manner.
It can be made completely asynchronous by polling the future once per
frame in the frame loop, instead of busy-waiting, until the poll reports
that the future is ready.
XrInstance instance; // previously initialized
XrSession session; // previously initialized
// these are setup to match the location and time
// of the position in the world you wish to set the
// spatial anchor for.
XrSpace baseSpace; // previously initialized
XrTime time; // previously initialized
XrPosef pose; // previously initialized
// Get function pointer for xrCreateSpatialAnchorsAsyncML
PFN_xrCreateSpatialAnchorsAsyncML xrCreateSpatialAnchorsAsyncML;
CHK_XR(xrGetInstanceProcAddr(instance, "xrCreateSpatialAnchorsAsyncML",
reinterpret_cast<PFN_xrVoidFunction*>(
&xrCreateSpatialAnchorsAsyncML)));
// Get function pointer for xrCreateSpatialAnchorsCompleteML
PFN_xrCreateSpatialAnchorsCompleteML xrCreateSpatialAnchorsCompleteML;
CHK_XR(xrGetInstanceProcAddr(instance, "xrCreateSpatialAnchorsCompleteML",
reinterpret_cast<PFN_xrVoidFunction*>(
&xrCreateSpatialAnchorsCompleteML)));
// Get function pointer for xrPollFutureEXT
PFN_xrPollFutureEXT xrPollFutureEXT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrPollFutureEXT",
reinterpret_cast<PFN_xrVoidFunction*>(
&xrPollFutureEXT)));
XrSpatialAnchorsCreateInfoFromPoseML createInfo{XR_TYPE_SPATIAL_ANCHORS_CREATE_INFO_FROM_POSE_ML};
XrFutureEXT future;
createInfo.baseSpace = baseSpace;
createInfo.poseInBaseSpace = pose;
createInfo.time = time;
CHK_XR(xrCreateSpatialAnchorsAsyncML(session, reinterpret_cast<const XrSpatialAnchorsCreateInfoBaseHeaderML*>(&createInfo), &future));
XrFuturePollInfoEXT pollInfo{XR_TYPE_FUTURE_POLL_INFO_EXT};
XrFuturePollResultEXT pollResult{XR_TYPE_FUTURE_POLL_RESULT_EXT};
pollInfo.future = future;
pollResult.state = XR_FUTURE_STATE_PENDING_EXT;
while(pollResult.state==XR_FUTURE_STATE_PENDING_EXT) {
// Ideally this check is done in your game loop
// instead of busy waiting; this is just an
// example.
// If you do choose to busy-wait, sleep between
// polls to avoid overloading the CPU.
CHK_XR(xrPollFutureEXT(instance, &pollInfo, &pollResult));
}
XrSpace anchor;
XrCreateSpatialAnchorsCompletionML completion{XR_TYPE_CREATE_SPATIAL_ANCHORS_COMPLETION_ML};
completion.spaceCount = 1;
completion.spaces = &anchor;
CHK_XR(xrCreateSpatialAnchorsCompleteML(session, future, &completion));
// Check the future completion result as well!
CHK_XR(completion.futureResult);
// Now the anchor is usable!
12.152.9. New Enum Constants
- XR_ML_SPATIAL_ANCHORS_EXTENSION_NAME
- XR_ML_spatial_anchors_SPEC_VERSION

Extending XrResult:
- XR_ERROR_SPATIAL_ANCHORS_NOT_LOCALIZED_ML
- XR_ERROR_SPATIAL_ANCHORS_OUT_OF_MAP_BOUNDS_ML
- XR_ERROR_SPATIAL_ANCHORS_PERMISSION_DENIED_ML
- XR_ERROR_SPATIAL_ANCHORS_SPACE_NOT_LOCATABLE_ML

Extending XrStructureType:
- XR_TYPE_CREATE_SPATIAL_ANCHORS_COMPLETION_ML
- XR_TYPE_SPATIAL_ANCHORS_CREATE_INFO_FROM_POSE_ML
- XR_TYPE_SPATIAL_ANCHOR_STATE_ML
12.153. XR_ML_spatial_anchors_storage
- Name String: XR_ML_spatial_anchors_storage
- Extension Type: Instance extension
- Registered Extension Number: 142
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2023-08-24
- Contributors: Ron Bessems, Magic Leap; Karthik Kadappan, Magic Leap
12.153.1. Overview
This extension allows spatial anchors created by
XR_ML_spatial_anchors to be persisted beyond the head pose session.
Spatial anchor management is closely tied to the selected mapping mode on the device. The modes are mutually exclusive and affect the functionality of these APIs. The available mapping modes are:
- On-Device Mode
-
A persistent mode in which anchors are persisted locally and will be available across multiple sessions when localized to the same localization map in which they were published.
- AR Cloud Mode
-
A persistent mode in which anchors are persisted in the cloud environment and will be available across multiple sessions and across multiple devices that are localized to the same localization map in which they were published.
For more details on mapping modes refer to the XR_ML_localization_map
extension.
Permissions
Android applications must have the com.magicleap.permission.SPATIAL_ANCHOR permission listed in their manifest to use this extension. (protection level: normal)
12.153.2. Storage
// Provided by XR_ML_spatial_anchors_storage
XR_DEFINE_HANDLE(XrSpatialAnchorsStorageML)
The XrSpatialAnchorsStorageML handle maintains the connection with the
backend service, which may be on-device or in the cloud depending on the
active localization map.
Use the XR_ML_localization_map extension to determine the current mode
if the application needs to know it.
The XrSpatialAnchorsStorageML handle represents the resources for storing spatial anchors.
The xrCreateSpatialAnchorsStorageML function is defined as:
// Provided by XR_ML_spatial_anchors_storage
XrResult xrCreateSpatialAnchorsStorageML(
XrSession session,
const XrSpatialAnchorsCreateStorageInfoML* createInfo,
XrSpatialAnchorsStorageML* storage);
The xrCreateSpatialAnchorsStorageML function creates an XrSpatialAnchorsStorageML handle.
The XrSpatialAnchorsCreateStorageInfoML structure is defined as:
// Provided by XR_ML_spatial_anchors_storage
typedef struct XrSpatialAnchorsCreateStorageInfoML {
XrStructureType type;
const void* next;
} XrSpatialAnchorsCreateStorageInfoML;
Currently no extra information is needed to create this structure.
The xrDestroySpatialAnchorsStorageML function is defined as:
// Provided by XR_ML_spatial_anchors_storage
XrResult xrDestroySpatialAnchorsStorageML(
XrSpatialAnchorsStorageML storage);
12.153.3. Query Stored Anchors
To find out which spatial anchors are stored near a certain XrPosef,
this extension provides a query system.
This function is asynchronous and uses the XR_EXT_future extension.
The xrQuerySpatialAnchorsAsyncML function is defined as:
// Provided by XR_ML_spatial_anchors_storage
XrResult xrQuerySpatialAnchorsAsyncML(
XrSpatialAnchorsStorageML storage,
const XrSpatialAnchorsQueryInfoBaseHeaderML* queryInfo,
XrFutureEXT* future);
If the space was not locatable during the query the runtime must return
XR_ERROR_SPACE_NOT_LOCATABLE_EXT in
XrSpatialAnchorsQueryCompletionML::futureResult.
The XrSpatialAnchorsQueryInfoRadiusML structure is defined as:
// Provided by XR_ML_spatial_anchors_storage
typedef struct XrSpatialAnchorsQueryInfoRadiusML {
XrStructureType type;
const void* next;
XrSpace baseSpace;
XrVector3f center;
XrTime time;
float radius;
} XrSpatialAnchorsQueryInfoRadiusML;
The XrSpatialAnchorsQueryInfoBaseHeaderML structure is defined as:
// Provided by XR_ML_spatial_anchors_storage
typedef struct XrSpatialAnchorsQueryInfoBaseHeaderML {
XrStructureType type;
const void* next;
} XrSpatialAnchorsQueryInfoBaseHeaderML;
The xrQuerySpatialAnchorsCompleteML function is defined as:
// Provided by XR_ML_spatial_anchors_storage
XrResult xrQuerySpatialAnchorsCompleteML(
XrSpatialAnchorsStorageML storage,
XrFutureEXT future,
XrSpatialAnchorsQueryCompletionML* completion);
Once the XrFutureEXT has completed
xrQuerySpatialAnchorsCompleteML must be called to retrieve the
XrUuidEXT values of the found anchors.
The XrSpatialAnchorsQueryCompletionML structure is defined as:
// Provided by XR_ML_spatial_anchors_storage
typedef struct XrSpatialAnchorsQueryCompletionML {
XrStructureType type;
void* next;
XrResult futureResult;
uint32_t uuidCapacityInput;
uint32_t uuidCountOutput;
XrUuidEXT* uuids;
} XrSpatialAnchorsQueryCompletionML;
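XrSpatialAnchorsQueryCompletionML follows the usual OpenXR two-call idiom: pass a uuidCapacityInput of 0 to learn uuidCountOutput, size the buffer accordingly, then call the completion function again. A minimal self-contained sketch of that idiom, with the completion function mocked purely for illustration:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Stand-in for XrUuidEXT.
struct Uuid { uint8_t data[16]; };

// Mock of a two-call-idiom completion function: it reports how many UUIDs
// are available and fills the caller's buffer once it is large enough.
void mockQueryComplete(uint32_t capacityInput, uint32_t* countOutput, Uuid* uuids) {
    const uint32_t found = 3; // pretend three anchors were found
    *countOutput = found;
    if (capacityInput >= found && uuids != nullptr) {
        for (uint32_t i = 0; i < found; ++i) uuids[i] = Uuid{};
    }
}

// First call sizes the buffer, second call fills it.
std::vector<Uuid> retrieveUuids() {
    uint32_t count = 0;
    mockQueryComplete(0, &count, nullptr);
    std::vector<Uuid> uuids(count);
    mockQueryComplete(static_cast<uint32_t>(uuids.size()), &count, uuids.data());
    return uuids;
}
```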
12.153.4. Publish Anchors
If the application needs to persist an anchor beyond the current head pose session it should publish the anchor. Publishing is an asynchronous operation and can publish multiple anchors at the same time.
The xrPublishSpatialAnchorsAsyncML function is defined as:
// Provided by XR_ML_spatial_anchors_storage
XrResult xrPublishSpatialAnchorsAsyncML(
XrSpatialAnchorsStorageML storage,
const XrSpatialAnchorsPublishInfoML* publishInfo,
XrFutureEXT* future);
The XrSpatialAnchorsPublishInfoML structure is defined as:
// Provided by XR_ML_spatial_anchors_storage
typedef struct XrSpatialAnchorsPublishInfoML {
XrStructureType type;
const void* next;
uint32_t anchorCount;
const XrSpace* anchors;
uint64_t expiration;
} XrSpatialAnchorsPublishInfoML;
Once the XrFutureEXT has completed
xrPublishSpatialAnchorsCompleteML must be called to retrieve the
XrUuidEXT values of the published anchors.
The xrPublishSpatialAnchorsCompleteML function is defined as:
// Provided by XR_ML_spatial_anchors_storage
XrResult xrPublishSpatialAnchorsCompleteML(
XrSpatialAnchorsStorageML storage,
XrFutureEXT future,
XrSpatialAnchorsPublishCompletionML* completion);
The XrSpatialAnchorsPublishCompletionML structure is defined as:
// Provided by XR_ML_spatial_anchors_storage
typedef struct XrSpatialAnchorsPublishCompletionML {
XrStructureType type;
void* next;
XrResult futureResult;
uint32_t uuidCount;
XrUuidEXT* uuids;
} XrSpatialAnchorsPublishCompletionML;
The XrSpatialAnchorsPublishCompletionDetailsML structure is defined as:
// Provided by XR_ML_spatial_anchors_storage
typedef struct XrSpatialAnchorsPublishCompletionDetailsML {
XrStructureType type;
void* next;
uint32_t resultCount;
XrSpatialAnchorCompletionResultML* results;
} XrSpatialAnchorsPublishCompletionDetailsML;
The XrSpatialAnchorCompletionResultML structure is defined as:
// Provided by XR_ML_spatial_anchors_storage
typedef struct XrSpatialAnchorCompletionResultML {
XrUuidEXT uuid;
XrResult result;
} XrSpatialAnchorCompletionResultML;
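When an XrSpatialAnchorsPublishCompletionDetailsML structure is chained, each XrSpatialAnchorCompletionResultML pairs a UUID with a per-anchor XrResult, so a batch operation can partially succeed. A hypothetical helper for scanning those results (types mocked locally for a self-contained example):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Stand-ins for XrResult and XrSpatialAnchorCompletionResultML.
using Result = int32_t;              // 0 == success, negative == error
struct AnchorResult { uint64_t uuidLow, uuidHigh; Result result; };

// Hypothetical helper: collect the indices of anchors whose per-anchor
// operation failed, so the application can retry or report them.
std::vector<size_t> failedAnchors(const std::vector<AnchorResult>& results) {
    std::vector<size_t> failed;
    for (size_t i = 0; i < results.size(); ++i)
        if (results[i].result < 0) failed.push_back(i);
    return failed;
}
```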
12.153.5. Delete Published Anchors
To delete anchors from the storage, xrDeleteSpatialAnchorsAsyncML can be used.
The xrDeleteSpatialAnchorsAsyncML function is defined as:
// Provided by XR_ML_spatial_anchors_storage
XrResult xrDeleteSpatialAnchorsAsyncML(
XrSpatialAnchorsStorageML storage,
const XrSpatialAnchorsDeleteInfoML* deleteInfo,
XrFutureEXT* future);
The XrSpatialAnchorsDeleteInfoML structure is defined as:
// Provided by XR_ML_spatial_anchors_storage
typedef struct XrSpatialAnchorsDeleteInfoML {
XrStructureType type;
const void* next;
uint32_t uuidCount;
const XrUuidEXT* uuids;
} XrSpatialAnchorsDeleteInfoML;
Once the XrFutureEXT has completed
xrDeleteSpatialAnchorsCompleteML must be called to retrieve the
status of the delete operation.
The xrDeleteSpatialAnchorsCompleteML function is defined as:
// Provided by XR_ML_spatial_anchors_storage
XrResult xrDeleteSpatialAnchorsCompleteML(
XrSpatialAnchorsStorageML storage,
XrFutureEXT future,
XrSpatialAnchorsDeleteCompletionML* completion);
The XrSpatialAnchorsDeleteCompletionML structure is defined as:
// Provided by XR_ML_spatial_anchors_storage
typedef struct XrSpatialAnchorsDeleteCompletionML {
XrStructureType type;
void* next;
XrResult futureResult;
} XrSpatialAnchorsDeleteCompletionML;
The XrSpatialAnchorsDeleteCompletionDetailsML structure is defined as:
// Provided by XR_ML_spatial_anchors_storage
typedef struct XrSpatialAnchorsDeleteCompletionDetailsML {
XrStructureType type;
void* next;
uint32_t resultCount;
XrSpatialAnchorCompletionResultML* results;
} XrSpatialAnchorsDeleteCompletionDetailsML;
12.153.6. Update Published Anchors Expiration
To update the expiration time on anchors xrUpdateSpatialAnchorsExpirationAsyncML can be used.
The xrUpdateSpatialAnchorsExpirationAsyncML function is defined as:
// Provided by XR_ML_spatial_anchors_storage
XrResult xrUpdateSpatialAnchorsExpirationAsyncML(
XrSpatialAnchorsStorageML storage,
const XrSpatialAnchorsUpdateExpirationInfoML* updateInfo,
XrFutureEXT* future);
The XrSpatialAnchorsUpdateExpirationInfoML structure is defined as:
// Provided by XR_ML_spatial_anchors_storage
typedef struct XrSpatialAnchorsUpdateExpirationInfoML {
XrStructureType type;
const void* next;
uint32_t uuidCount;
const XrUuidEXT* uuids;
uint64_t expiration;
} XrSpatialAnchorsUpdateExpirationInfoML;
Once the XrFutureEXT has completed
xrUpdateSpatialAnchorsExpirationCompleteML must be called to retrieve
the status of the expiration update operation.
The xrUpdateSpatialAnchorsExpirationCompleteML function is defined as:
// Provided by XR_ML_spatial_anchors_storage
XrResult xrUpdateSpatialAnchorsExpirationCompleteML(
XrSpatialAnchorsStorageML storage,
XrFutureEXT future,
XrSpatialAnchorsUpdateExpirationCompletionML* completion);
The XrSpatialAnchorsUpdateExpirationCompletionML structure is defined as:
// Provided by XR_ML_spatial_anchors_storage
typedef struct XrSpatialAnchorsUpdateExpirationCompletionML {
XrStructureType type;
void* next;
XrResult futureResult;
} XrSpatialAnchorsUpdateExpirationCompletionML;
The XrSpatialAnchorsUpdateExpirationCompletionDetailsML structure is defined as:
// Provided by XR_ML_spatial_anchors_storage
typedef struct XrSpatialAnchorsUpdateExpirationCompletionDetailsML {
XrStructureType type;
void* next;
uint32_t resultCount;
XrSpatialAnchorCompletionResultML* results;
} XrSpatialAnchorsUpdateExpirationCompletionDetailsML;
12.153.7. Create Spatial Anchors from Storage
Spatial anchors can be created from the storage XrUuidEXT by passing the XrSpatialAnchorsCreateInfoFromUuidsML structure to xrCreateSpatialAnchorsAsyncML.
The XrSpatialAnchorsCreateInfoFromUuidsML structure is defined as:
// Provided by XR_ML_spatial_anchors_storage
typedef struct XrSpatialAnchorsCreateInfoFromUuidsML {
XrStructureType type;
const void* next;
XrSpatialAnchorsStorageML storage;
uint32_t uuidCount;
const XrUuidEXT* uuids;
} XrSpatialAnchorsCreateInfoFromUuidsML;
The XrSpace handle or handles returned via
XrCreateSpatialAnchorsCompletionML::spaces must be in the same
order as uuids.
The XrCreateSpatialAnchorsCompletionML::spaceCount field must
match uuidCount.
If not, the runtime must return XR_ERROR_VALIDATION_FAILURE in
XrCreateSpatialAnchorsCompletionML::futureResult.
If an anchor with a given UUID is not found, the runtime must return
XR_NULL_HANDLE for the corresponding XrSpace handle(s) and
return XR_SUCCESS in
XrCreateSpatialAnchorsCompletionML::futureResult.
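Because a missing UUID yields XR_NULL_HANDLE alongside an overall XR_SUCCESS, applications should filter the returned spaces before use. A self-contained sketch of that filtering (handle and UUID types mocked for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Stand-ins: a null pointer models XR_NULL_HANDLE, and the spaces
// array parallels the uuids array, as the extension specifies.
using Space = void*;
struct Uuid { uint8_t data[16]; };

// Hypothetical helper: keep only the (uuid, space) pairs that were
// actually found; a null handle means the UUID was not in storage.
std::vector<std::pair<Uuid, Space>> foundAnchors(
        const std::vector<Uuid>& uuids, const std::vector<Space>& spaces) {
    std::vector<std::pair<Uuid, Space>> out;
    for (size_t i = 0; i < uuids.size() && i < spaces.size(); ++i)
        if (spaces[i] != nullptr) out.emplace_back(uuids[i], spaces[i]);
    return out;
}
```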
12.153.8. Examples
This example shows how to persist a list of anchors.
XrInstance instance; // previously initialized
XrSession session; // previously initialized
XrSpace anchor1; // previously initialized
XrSpace anchor2; // previously initialized
std::vector<XrSpace> anchors{anchor1, anchor2};
XrSpatialAnchorsCreateStorageInfoML storageCreateInfo{XR_TYPE_SPATIAL_ANCHORS_CREATE_STORAGE_INFO_ML};
XrSpatialAnchorsStorageML storage;
CHK_XR(xrCreateSpatialAnchorsStorageML(session, &storageCreateInfo, &storage));
XrSpatialAnchorsPublishInfoML publishInfo{XR_TYPE_SPATIAL_ANCHORS_PUBLISH_INFO_ML};
publishInfo.anchorCount = static_cast<uint32_t>(anchors.size());
publishInfo.anchors = anchors.data();
publishInfo.expiration = 0;
XrFutureEXT future;
CHK_XR(xrPublishSpatialAnchorsAsyncML(storage, &publishInfo, &future));
XrFuturePollInfoEXT pollInfo{XR_TYPE_FUTURE_POLL_INFO_EXT};
XrFuturePollResultEXT pollResult{XR_TYPE_FUTURE_POLL_RESULT_EXT};
pollInfo.future = future;
pollResult.state = XR_FUTURE_STATE_PENDING_EXT;
// Ideally this is done once in your
// game loop instead of a busy wait.
while(pollResult.state==XR_FUTURE_STATE_PENDING_EXT) {
CHK_XR(xrPollFutureEXT(instance, &pollInfo, &pollResult));
}
XrSpatialAnchorsPublishCompletionML completion{XR_TYPE_SPATIAL_ANCHORS_PUBLISH_COMPLETION_ML};
std::vector<XrUuidEXT> uuids(anchors.size());
completion.uuidCount = static_cast<uint32_t>(uuids.size());
completion.uuids = uuids.data();
CHK_XR(xrPublishSpatialAnchorsCompleteML(storage, future, &completion));
CHK_XR(completion.futureResult);
// completion.uuids now contains the UUIDs of the published spatial anchors.
CHK_XR(xrDestroySpatialAnchorsStorageML(storage));
This example shows how to query for anchors.
XrInstance instance; // previously initialized
XrSession session; // previously initialized
XrSpace viewSpace; // previously initialized
XrTime currentTime; // previously initialized
XrVector3f center; // previously initialized
XrSpatialAnchorsCreateStorageInfoML storageCreateInfo{XR_TYPE_SPATIAL_ANCHORS_CREATE_STORAGE_INFO_ML};
XrSpatialAnchorsStorageML storage;
CHK_XR(xrCreateSpatialAnchorsStorageML(session, &storageCreateInfo, &storage));
// set up a query around a previously initialized center position with a radius of 10 meters.
XrSpatialAnchorsQueryInfoRadiusML queryInfo{XR_TYPE_SPATIAL_ANCHORS_QUERY_INFO_RADIUS_ML};
queryInfo.baseSpace = viewSpace; // using view space, but this can be any space.
queryInfo.center = center;
queryInfo.time = currentTime;
queryInfo.radius = 10.0f;
XrFutureEXT future;
CHK_XR(xrQuerySpatialAnchorsAsyncML(storage, reinterpret_cast<XrSpatialAnchorsQueryInfoBaseHeaderML*>(&queryInfo), &future));
XrFuturePollInfoEXT pollInfo{XR_TYPE_FUTURE_POLL_INFO_EXT};
XrFuturePollResultEXT pollResult{XR_TYPE_FUTURE_POLL_RESULT_EXT};
pollInfo.future = future;
pollResult.state = XR_FUTURE_STATE_PENDING_EXT;
// Ideally this is done once in your
// frame loop instead of a busy wait.
while(pollResult.state==XR_FUTURE_STATE_PENDING_EXT) {
CHK_XR(xrPollFutureEXT(instance, &pollInfo, &pollResult));
}
XrSpatialAnchorsQueryCompletionML completion{XR_TYPE_SPATIAL_ANCHORS_QUERY_COMPLETION_ML};
CHK_XR(xrQuerySpatialAnchorsCompleteML(storage, future, &completion));
CHK_XR(completion.futureResult);
std::vector<XrUuidEXT> uuids(completion.uuidCountOutput);
completion.uuidCapacityInput = static_cast<uint32_t>(uuids.size());
completion.uuids = uuids.data();
CHK_XR(xrQuerySpatialAnchorsCompleteML(storage, future, &completion));
// completion.uuids now contains the UUIDs of the spatial anchors found by the query.
CHK_XR(xrDestroySpatialAnchorsStorageML(storage));
12.153.11. New Structures
Extending XrSpatialAnchorsDeleteCompletionML:
- XrSpatialAnchorsDeleteCompletionDetailsML

Extending XrSpatialAnchorsPublishCompletionML:
- XrSpatialAnchorsPublishCompletionDetailsML

Extending XrSpatialAnchorsUpdateExpirationCompletionML:
- XrSpatialAnchorsUpdateExpirationCompletionDetailsML
12.153.12. New Enum Constants
- XR_ML_SPATIAL_ANCHORS_STORAGE_EXTENSION_NAME
- XR_ML_spatial_anchors_storage_SPEC_VERSION

Extending XrObjectType:
- XR_OBJECT_TYPE_SPATIAL_ANCHORS_STORAGE_ML

Extending XrResult:
- XR_ERROR_SPATIAL_ANCHORS_ANCHOR_NOT_FOUND_ML

Extending XrStructureType:
- XR_TYPE_SPATIAL_ANCHORS_CREATE_INFO_FROM_UUIDS_ML
- XR_TYPE_SPATIAL_ANCHORS_CREATE_STORAGE_INFO_ML
- XR_TYPE_SPATIAL_ANCHORS_DELETE_COMPLETION_DETAILS_ML
- XR_TYPE_SPATIAL_ANCHORS_DELETE_COMPLETION_ML
- XR_TYPE_SPATIAL_ANCHORS_DELETE_INFO_ML
- XR_TYPE_SPATIAL_ANCHORS_PUBLISH_COMPLETION_DETAILS_ML
- XR_TYPE_SPATIAL_ANCHORS_PUBLISH_COMPLETION_ML
- XR_TYPE_SPATIAL_ANCHORS_PUBLISH_INFO_ML
- XR_TYPE_SPATIAL_ANCHORS_QUERY_COMPLETION_ML
- XR_TYPE_SPATIAL_ANCHORS_QUERY_INFO_RADIUS_ML
- XR_TYPE_SPATIAL_ANCHORS_UPDATE_EXPIRATION_COMPLETION_DETAILS_ML
- XR_TYPE_SPATIAL_ANCHORS_UPDATE_EXPIRATION_COMPLETION_ML
- XR_TYPE_SPATIAL_ANCHORS_UPDATE_EXPIRATION_INFO_ML
Version History
- Revision 1, 2023-06-22 (Ron Bessems)
  - Initial extension description
12.154. XR_ML_system_notifications
- Name String: XR_ML_system_notifications
- Extension Type: Instance extension
- Registered Extension Number: 474
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2023-09-06
- Contributors: Michał Kulągowski, Magic Leap; Ron Bessems, Magic Leap
12.154.1. Overview
This extension provides control over system notifications, allowing notifications that might obscure the field of view to be disabled.
Note that even when all system notifications have been suppressed, developers can still intercept certain events that allow them to react properly to the underlying cause of a system notification.
Permissions
Android applications must have the com.magicleap.permission.SYSTEM_NOTIFICATION permission listed in their manifest to use this extension. (protection level: normal)
12.154.2. Suppressing All System Notifications
Applications can suppress system notifications from being shown while the application has focus by calling xrSetSystemNotificationsML with the properly filled XrSystemNotificationsSetInfoML structure.
The xrSetSystemNotificationsML function is defined as:
// Provided by XR_ML_system_notifications
XrResult xrSetSystemNotificationsML(
XrInstance instance,
const XrSystemNotificationsSetInfoML* info);
This API works only on certain SKUs.
When called on an incompatible SKU, the runtime must return
XR_ERROR_SYSTEM_NOTIFICATION_INCOMPATIBLE_SKU_ML.
If the com.magicleap.permission.SYSTEM_NOTIFICATION permission is not
granted, the runtime must return
XR_ERROR_SYSTEM_NOTIFICATION_PERMISSION_DENIED_ML.
The XrSystemNotificationsSetInfoML structure is defined as:
// Provided by XR_ML_system_notifications
typedef struct XrSystemNotificationsSetInfoML {
XrStructureType type;
const void* next;
XrBool32 suppressNotifications;
} XrSystemNotificationsSetInfoML;
12.154.5. New Enum Constants
- XR_ML_SYSTEM_NOTIFICATIONS_EXTENSION_NAME
- XR_ML_system_notifications_SPEC_VERSION

Extending XrResult:
- XR_ERROR_SYSTEM_NOTIFICATION_INCOMPATIBLE_SKU_ML
- XR_ERROR_SYSTEM_NOTIFICATION_PERMISSION_DENIED_ML

Extending XrStructureType:
- XR_TYPE_SYSTEM_NOTIFICATIONS_SET_INFO_ML
Version History
- Revision 1, 2023-09-06 (Michał Kulągowski)
  - Initial extension description
12.155. XR_ML_user_calibration
- Name String: XR_ML_user_calibration
- Extension Type: Instance extension
- Registered Extension Number: 473
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2023-08-21
- Contributors: Karthik Kadappan, Magic Leap; Ron Bessems, Magic Leap
12.155.1. Overview
This extension can be used to determine how well the device is calibrated for the current user of the device. The extension provides two events for this purpose:
-
Headset Fit: Provides the quality of the fit of the headset on the user.
-
Eye Calibration: Provides the quality of the user’s eye calibration.
12.155.2. Enabling user calibration events
User calibration events are requested by calling xrEnableUserCalibrationEventsML. When this function is called, each of the user calibration events must be posted to the event queue once, regardless of whether there were any changes to the event data. This allows the application to synchronize with the current state.
The xrEnableUserCalibrationEventsML function is defined as:
// Provided by XR_ML_user_calibration
XrResult xrEnableUserCalibrationEventsML(
XrInstance instance,
const XrUserCalibrationEnableEventsInfoML* enableInfo);
The XrUserCalibrationEnableEventsInfoML structure is defined as:
// Provided by XR_ML_user_calibration
typedef struct XrUserCalibrationEnableEventsInfoML {
XrStructureType type;
const void* next;
XrBool32 enabled;
} XrUserCalibrationEnableEventsInfoML;
12.155.3. Headset Fit Events
Receiving an XrEventDataHeadsetFitChangedML event from
xrPollEvent notifies the application of headset fit changes.
To enable these events call xrEnableUserCalibrationEventsML and set
XrUserCalibrationEnableEventsInfoML::enabled to true.
Headset fit is evaluated continuously, and the runtime must post events
any time it detects a change in the headset fit state.
The XrEventDataHeadsetFitChangedML structure is defined as:
// Provided by XR_ML_user_calibration
typedef struct XrEventDataHeadsetFitChangedML {
XrStructureType type;
const void* next;
XrHeadsetFitStatusML status;
XrTime time;
} XrEventDataHeadsetFitChangedML;
// Provided by XR_ML_user_calibration
typedef enum XrHeadsetFitStatusML {
XR_HEADSET_FIT_STATUS_UNKNOWN_ML = 0,
XR_HEADSET_FIT_STATUS_NOT_WORN_ML = 1,
XR_HEADSET_FIT_STATUS_GOOD_FIT_ML = 2,
XR_HEADSET_FIT_STATUS_BAD_FIT_ML = 3,
XR_HEADSET_FIT_STATUS_MAX_ENUM_ML = 0x7FFFFFFF
} XrHeadsetFitStatusML;
| Enum | Description |
|---|---|
| XR_HEADSET_FIT_STATUS_UNKNOWN_ML | Headset fit status not available for unknown reason. |
| XR_HEADSET_FIT_STATUS_NOT_WORN_ML | Headset not worn. |
| XR_HEADSET_FIT_STATUS_GOOD_FIT_ML | Good fit. |
| XR_HEADSET_FIT_STATUS_BAD_FIT_ML | Bad fit. |
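A sketch of how an application might react to a headset fit event. The enum is mirrored locally so the example is self-contained, and the user-facing messages are hypothetical:

```cpp
#include <cassert>
#include <string>

// Local mirror of XrHeadsetFitStatusML.
enum FitStatus { FIT_UNKNOWN = 0, FIT_NOT_WORN = 1, FIT_GOOD = 2, FIT_BAD = 3 };

// Hypothetical reaction to a headset fit event: the message an
// application might surface to the user, empty when nothing is needed.
std::string fitMessage(FitStatus status) {
    switch (status) {
    case FIT_NOT_WORN: return "Please put on the headset.";
    case FIT_BAD:      return "Please adjust the headset for a better fit.";
    case FIT_GOOD:     return ""; // nothing to show
    default:           return ""; // unknown: no reliable data
    }
}
```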
12.155.4. Eye Calibration Events
Receiving an XrEventDataEyeCalibrationChangedML event from
xrPollEvent notifies the application of eye calibration changes.
To enable these events call xrEnableUserCalibrationEventsML and set
XrUserCalibrationEnableEventsInfoML::enabled to true.
The runtime must post events any time it detects a change in the eye
calibration.
The user needs to calibrate the eyes using the system app provided for this.
There is no support for in-app eye calibration in this extension.
The XrEventDataEyeCalibrationChangedML structure is defined as:
// Provided by XR_ML_user_calibration
typedef struct XrEventDataEyeCalibrationChangedML {
XrStructureType type;
const void* next;
XrEyeCalibrationStatusML status;
} XrEventDataEyeCalibrationChangedML;
// Provided by XR_ML_user_calibration
typedef enum XrEyeCalibrationStatusML {
XR_EYE_CALIBRATION_STATUS_UNKNOWN_ML = 0,
XR_EYE_CALIBRATION_STATUS_NONE_ML = 1,
XR_EYE_CALIBRATION_STATUS_COARSE_ML = 2,
XR_EYE_CALIBRATION_STATUS_FINE_ML = 3,
XR_EYE_CALIBRATION_STATUS_MAX_ENUM_ML = 0x7FFFFFFF
} XrEyeCalibrationStatusML;
| Enum | Description |
|---|---|
| XR_EYE_CALIBRATION_STATUS_UNKNOWN_ML | Eye calibration status not available for unknown reason. |
| XR_EYE_CALIBRATION_STATUS_NONE_ML | User has not performed the eye calibration step. Use the system-provided app to perform eye calibration. |
| XR_EYE_CALIBRATION_STATUS_COARSE_ML | Eye calibration is of lower accuracy. |
| XR_EYE_CALIBRATION_STATUS_FINE_ML | Eye calibration is of higher accuracy. |
12.155.5. New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_EVENT_DATA_HEADSET_FIT_CHANGED_ML
- XR_TYPE_EVENT_DATA_EYE_CALIBRATION_CHANGED_ML
- XR_TYPE_USER_CALIBRATION_ENABLE_EVENTS_INFO_ML
Version History
- Revision 1, 2023-06-20 (Karthik Kadappan)
  - Initial extension description
12.156. XR_ML_view_configuration_depth_range_change
- Name String: XR_ML_view_configuration_depth_range_change
- Extension Type: Instance extension
- Registered Extension Number: 484
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2023-11-09
- IP Status: No known IP claims.
- Contributors: Ron Bessems, Magic Leap; Andrei Aristarkhov, Magic Leap
12.156.1. Overview
There is a desire for runtimes to be able to inform applications of changes in clipping planes during the lifetime of an XrInstance.
12.156.2. Background
The XrViewConfigurationDepthRangeEXT structure is used to inform applications of the clipping plane values. However, since this information is obtained via xrEnumerateViewConfigurations its contents must not change.
12.156.3. Behavior change
When this extension is enabled, the runtime may change the contents of XrViewConfigurationDepthRangeEXT during the lifetime of an XrInstance.
Applications should track changes in the clipping plane values as provided by the runtime every frame.
Be aware that unlike most OpenXR extensions, enabling this extension is all that is required for runtime behavior to change, which may have impacts on modular applications, including applications built on engines.
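A minimal per-frame check under this extension might cache the last depth range and rebuild projection matrices only when it changes. The structure below mirrors the fields of XrViewConfigurationDepthRangeEXT so the sketch is self-contained; the change-detection policy is a hypothetical application choice:

```cpp
#include <cassert>

// Local mirror of XrViewConfigurationDepthRangeEXT's depth fields.
struct DepthRange {
    float recommendedNearZ;
    float minNearZ;
    float recommendedFarZ;
    float maxFarZ;
};

// Hypothetical per-frame check: returns true when the recommended
// clipping planes differ, signalling that projection matrices
// should be rebuilt.
bool depthRangeChanged(const DepthRange& previous, const DepthRange& current) {
    return previous.recommendedNearZ != current.recommendedNearZ ||
           previous.recommendedFarZ != current.recommendedFarZ;
}
```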
12.157. XR_ML_world_mesh_detection
- Name String: XR_ML_world_mesh_detection
- Extension Type: Instance extension
- Registered Extension Number: 475
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2023-08-29
- Contributors: Ron Bessems, Magic Leap; Karthik Kadappan, Magic Leap
12.157.2. Creating a world mesh detector
The XrWorldMeshDetectorML handle is defined as:
// Provided by XR_ML_world_mesh_detection
XR_DEFINE_HANDLE(XrWorldMeshDetectorML)
XrWorldMeshDetectorML is created by xrCreateWorldMeshDetectorML.
The xrCreateWorldMeshDetectorML function is defined as:
// Provided by XR_ML_world_mesh_detection
XrResult xrCreateWorldMeshDetectorML(
XrSession session,
const XrWorldMeshDetectorCreateInfoML* createInfo,
XrWorldMeshDetectorML* detector);
Permissions
Android applications must have the com.magicleap.permission.SPATIAL_MAPPING permission listed in their manifest to use this extension. (protection level: dangerous)
The XrWorldMeshDetectorCreateInfoML structure is defined as:
// Provided by XR_ML_world_mesh_detection
typedef struct XrWorldMeshDetectorCreateInfoML {
XrStructureType type;
const void* next;
} XrWorldMeshDetectorCreateInfoML;
12.157.3. Destroying a world mesh detector
The xrDestroyWorldMeshDetectorML function is defined as:
// Provided by XR_ML_world_mesh_detection
XrResult xrDestroyWorldMeshDetectorML(
XrWorldMeshDetectorML detector);
12.157.4. Detecting the World Mesh
The world mesh is detected in blocks: instead of returning the whole world mesh as one entity, the system returns it in chunks called blocks. To retrieve the currently detected blocks, use xrRequestWorldMeshStateAsyncML.
The xrRequestWorldMeshStateAsyncML function is defined as:
// Provided by XR_ML_world_mesh_detection
XrResult xrRequestWorldMeshStateAsyncML(
XrWorldMeshDetectorML detector,
const XrWorldMeshStateRequestInfoML* stateRequest,
XrFutureEXT* future);
The XrWorldMeshStateRequestInfoML structure is defined as:
// Provided by XR_ML_world_mesh_detection
typedef struct XrWorldMeshStateRequestInfoML {
XrStructureType type;
const void* next;
XrSpace baseSpace;
XrTime time;
XrPosef boundingBoxCenter;
XrExtent3DfEXT boundingBoxExtents;
} XrWorldMeshStateRequestInfoML;
Each mesh block may have a 'skirt' if
XR_WORLD_MESH_DETECTOR_REMOVE_MESH_SKIRT_BIT_ML was not specified
during the creation of the XrWorldMeshDetectorML.
A skirt provides some overlap between adjacent mesh blocks.
While a skirt improves coverage between blocks, it does not create a
watertight mesh.
// Provided by XR_ML_world_mesh_detection
typedef XrFlags64 XrWorldMeshDetectorFlagsML;
// Provided by XR_ML_world_mesh_detection
// Flag bits for XrWorldMeshDetectorFlagsML
static const XrWorldMeshDetectorFlagsML XR_WORLD_MESH_DETECTOR_POINT_CLOUD_BIT_ML = 0x00000001;
static const XrWorldMeshDetectorFlagsML XR_WORLD_MESH_DETECTOR_COMPUTE_NORMALS_BIT_ML = 0x00000002;
static const XrWorldMeshDetectorFlagsML XR_WORLD_MESH_DETECTOR_COMPUTE_CONFIDENCE_BIT_ML = 0x00000004;
static const XrWorldMeshDetectorFlagsML XR_WORLD_MESH_DETECTOR_PLANARIZE_BIT_ML = 0x00000008;
static const XrWorldMeshDetectorFlagsML XR_WORLD_MESH_DETECTOR_REMOVE_MESH_SKIRT_BIT_ML = 0x00000010;
static const XrWorldMeshDetectorFlagsML XR_WORLD_MESH_DETECTOR_INDEX_ORDER_CW_BIT_ML = 0x00000020;
xrRequestWorldMeshStateAsyncML is an asynchronous function and xrRequestWorldMeshStateCompleteML retrieves the data once the future is ready.
The xrRequestWorldMeshStateCompleteML function is defined as:
// Provided by XR_ML_world_mesh_detection
XrResult xrRequestWorldMeshStateCompleteML(
XrWorldMeshDetectorML detector,
XrFutureEXT future,
XrWorldMeshStateRequestCompletionML* completion);
The XrWorldMeshStateRequestCompletionML structure is defined as:
// Provided by XR_ML_world_mesh_detection
typedef struct XrWorldMeshStateRequestCompletionML {
XrStructureType type;
void* next;
XrResult futureResult;
XrTime timestamp;
uint32_t meshBlockStateCapacityInput;
uint32_t meshBlockStateCountOutput;
XrWorldMeshBlockStateML* meshBlockStates;
} XrWorldMeshStateRequestCompletionML;
The XrWorldMeshBlockStateML structure is defined as:
// Provided by XR_ML_world_mesh_detection
typedef struct XrWorldMeshBlockStateML {
XrStructureType type;
void* next;
XrUuidEXT uuid;
XrPosef meshBoundingBoxCenter;
XrExtent3DfEXT meshBoundingBoxExtents;
XrTime lastUpdateTime;
XrWorldMeshBlockStatusML status;
} XrWorldMeshBlockStateML;
// Provided by XR_ML_world_mesh_detection
typedef enum XrWorldMeshBlockStatusML {
XR_WORLD_MESH_BLOCK_STATUS_NEW_ML = 0,
XR_WORLD_MESH_BLOCK_STATUS_UPDATED_ML = 1,
XR_WORLD_MESH_BLOCK_STATUS_DELETED_ML = 2,
XR_WORLD_MESH_BLOCK_STATUS_UNCHANGED_ML = 3,
XR_WORLD_MESH_BLOCK_STATUS_MAX_ENUM_ML = 0x7FFFFFFF
} XrWorldMeshBlockStatusML;
| Enum | Description |
|---|---|
| XR_WORLD_MESH_BLOCK_STATUS_NEW_ML | The mesh block has been created. |
| XR_WORLD_MESH_BLOCK_STATUS_UPDATED_ML | The mesh block has been updated. |
| XR_WORLD_MESH_BLOCK_STATUS_DELETED_ML | The mesh block has been deleted. |
| XR_WORLD_MESH_BLOCK_STATUS_UNCHANGED_ML | The mesh block is unchanged. |
12.157.5. Allocate Mesh Block Memory
The next step is to retrieve the actual vertex data. This operation requires a buffer that remains available for the duration of the asynchronous operation and for as long as the application needs access to XrWorldMeshRequestCompletionML.
This buffer must be allocated by the application; the runtime provides a recommended buffer size via the xrGetWorldMeshBufferRecommendSizeML function.
The xrGetWorldMeshBufferRecommendSizeML function is defined as:
// Provided by XR_ML_world_mesh_detection
XrResult xrGetWorldMeshBufferRecommendSizeML(
XrWorldMeshDetectorML detector,
const XrWorldMeshBufferRecommendedSizeInfoML* sizeInfo,
XrWorldMeshBufferSizeML* size);
Errata: This function is called xrGetWorldMeshBufferRecommendSizeML rather
than the expected xrGetWorldMeshBufferRecommendedSizeML.
The XrWorldMeshBufferRecommendedSizeInfoML structure is defined as:
// Provided by XR_ML_world_mesh_detection
typedef struct XrWorldMeshBufferRecommendedSizeInfoML {
XrStructureType type;
const void* next;
uint32_t maxBlockCount;
} XrWorldMeshBufferRecommendedSizeInfoML;
The value for maxBlockCount should be populated from
XrWorldMeshStateRequestCompletionML::meshBlockStateCountOutput.
As long as maxBlockCount is equal to or larger than
XrWorldMeshStateRequestCompletionML::meshBlockStateCountOutput,
a memory block may be reused for new requests.
The XrWorldMeshBufferSizeML structure is defined as:
// Provided by XR_ML_world_mesh_detection
typedef struct XrWorldMeshBufferSizeML {
XrStructureType type;
void* next;
uint32_t size;
} XrWorldMeshBufferSizeML;
Some runtimes have optimized memory available that avoids memory copies and provides the fastest way to get the vertex data. Applications should therefore use the xrAllocateWorldMeshBufferML function to reserve memory for the vertex data. The application may, however, choose to allocate its own memory using alternative methods.
The xrAllocateWorldMeshBufferML function is defined as:
// Provided by XR_ML_world_mesh_detection
XrResult xrAllocateWorldMeshBufferML(
XrWorldMeshDetectorML detector,
const XrWorldMeshBufferSizeML* size,
XrWorldMeshBufferML* buffer);
The XrWorldMeshBufferML structure is defined as:
// Provided by XR_ML_world_mesh_detection
typedef struct XrWorldMeshBufferML {
XrStructureType type;
void* next;
uint32_t bufferSize;
void* buffer;
} XrWorldMeshBufferML;
Memory blocks allocated with xrAllocateWorldMeshBufferML must be freed using xrFreeWorldMeshBufferML.
The xrFreeWorldMeshBufferML function is defined as:
// Provided by XR_ML_world_mesh_detection
XrResult xrFreeWorldMeshBufferML(
XrWorldMeshDetectorML detector,
const XrWorldMeshBufferML* buffer);
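Putting the allocation step together, a minimal sketch of the recommended-size query followed by a runtime allocation might look as follows. It assumes a previously created `detector`, a `blockCount` taken from a completed state query, and an application-defined `CHK_XR` error-checking macro (as used in the sample code later in this section):

```cpp
// Query the recommended size for up to `blockCount` mesh blocks, then let
// the runtime allocate the buffer. blockCount typically comes from
// XrWorldMeshStateRequestCompletionML::meshBlockStateCountOutput.
XrWorldMeshBufferRecommendedSizeInfoML sizeInfo{XR_TYPE_WORLD_MESH_BUFFER_RECOMMENDED_SIZE_INFO_ML};
sizeInfo.maxBlockCount = blockCount;

XrWorldMeshBufferSizeML bufferSize{XR_TYPE_WORLD_MESH_BUFFER_SIZE_ML};
CHK_XR(xrGetWorldMeshBufferRecommendSizeML(detector, &sizeInfo, &bufferSize));

XrWorldMeshBufferML meshBuffer{XR_TYPE_WORLD_MESH_BUFFER_ML};
CHK_XR(xrAllocateWorldMeshBufferML(detector, &bufferSize, &meshBuffer));

// ... pass meshBuffer to xrRequestWorldMeshAsyncML ...

// Runtime-allocated buffers must be released with the matching free call.
CHK_XR(xrFreeWorldMeshBufferML(detector, &meshBuffer));
```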
12.157.6. Start Mesh Data Query
Once a buffer has been allocated the mesh data retrieval may be started using xrRequestWorldMeshAsyncML.
The xrRequestWorldMeshAsyncML function is defined as:
// Provided by XR_ML_world_mesh_detection
XrResult xrRequestWorldMeshAsyncML(
XrWorldMeshDetectorML detector,
const XrWorldMeshGetInfoML* getInfo,
XrWorldMeshBufferML* buffer,
XrFutureEXT* future);
The XrWorldMeshGetInfoML structure is defined as:
// Provided by XR_ML_world_mesh_detection
typedef struct XrWorldMeshGetInfoML {
XrStructureType type;
const void* next;
XrWorldMeshDetectorFlagsML flags;
float fillHoleLength;
float disconnectedComponentArea;
uint32_t blockCount;
XrWorldMeshBlockRequestML* blocks;
} XrWorldMeshGetInfoML;
The XrWorldMeshBlockRequestML structure is defined as:
// Provided by XR_ML_world_mesh_detection
typedef struct XrWorldMeshBlockRequestML {
XrStructureType type;
void* next;
XrUuidEXT uuid;
XrWorldMeshDetectorLodML lod;
} XrWorldMeshBlockRequestML;
// Provided by XR_ML_world_mesh_detection
typedef enum XrWorldMeshDetectorLodML {
XR_WORLD_MESH_DETECTOR_LOD_MINIMUM_ML = 0,
XR_WORLD_MESH_DETECTOR_LOD_MEDIUM_ML = 1,
XR_WORLD_MESH_DETECTOR_LOD_MAXIMUM_ML = 2,
XR_WORLD_MESH_DETECTOR_LOD_MAX_ENUM_ML = 0x7FFFFFFF
} XrWorldMeshDetectorLodML;
| Enum | Description |
|---|---|
| XR_WORLD_MESH_DETECTOR_LOD_MINIMUM_ML | Minimum Level of Detail (LOD) for the mesh. |
| XR_WORLD_MESH_DETECTOR_LOD_MEDIUM_ML | Medium Level of Detail (LOD) for the mesh. |
| XR_WORLD_MESH_DETECTOR_LOD_MAXIMUM_ML | Maximum Level of Detail (LOD) for the mesh. |
12.157.7. Complete Mesh Data Query
To complete the previously started mesh data query xrRequestWorldMeshCompleteML is used.
The xrRequestWorldMeshCompleteML function is defined as:
// Provided by XR_ML_world_mesh_detection
XrResult xrRequestWorldMeshCompleteML(
XrWorldMeshDetectorML detector,
const XrWorldMeshRequestCompletionInfoML* completionInfo,
XrFutureEXT future,
XrWorldMeshRequestCompletionML* completion);
The XrWorldMeshRequestCompletionInfoML structure is defined as:
// Provided by XR_ML_world_mesh_detection
typedef struct XrWorldMeshRequestCompletionInfoML {
XrStructureType type;
const void* next;
XrSpace meshSpace;
XrTime meshSpaceLocateTime;
} XrWorldMeshRequestCompletionInfoML;
The XrWorldMeshRequestCompletionML structure is defined as:
// Provided by XR_ML_world_mesh_detection
typedef struct XrWorldMeshRequestCompletionML {
XrStructureType type;
void* next;
XrResult futureResult;
uint32_t blockCount;
XrWorldMeshBlockML* blocks;
} XrWorldMeshRequestCompletionML;
The XrWorldMeshBlockML structure is defined as:
// Provided by XR_ML_world_mesh_detection
typedef struct XrWorldMeshBlockML {
XrStructureType type;
void* next;
XrUuidEXT uuid;
XrWorldMeshBlockResultML blockResult;
XrWorldMeshDetectorLodML lod;
XrWorldMeshDetectorFlagsML flags;
uint32_t indexCount;
uint16_t* indexBuffer;
uint32_t vertexCount;
XrVector3f* vertexBuffer;
uint32_t normalCount;
XrVector3f* normalBuffer;
uint32_t confidenceCount;
float* confidenceBuffer;
} XrWorldMeshBlockML;
normalCount must be equal to vertexCount if
XR_WORLD_MESH_DETECTOR_COMPUTE_NORMALS_BIT_ML was specified during
XrWorldMeshDetectorML creation, otherwise 0.
confidenceCount must be equal to vertexCount if
XR_WORLD_MESH_DETECTOR_COMPUTE_CONFIDENCE_BIT_ML was specified during
XrWorldMeshDetectorML creation, otherwise 0.
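The per-vertex buffers can therefore be consumed by checking the counts; a minimal sketch, assuming `block` is one XrWorldMeshBlockML returned in a completed mesh request:

```cpp
// Normals and confidences are only present when the corresponding detector
// flag was set at XrWorldMeshDetectorML creation; otherwise their counts
// are 0 and the buffers must not be read.
for (uint32_t v = 0; v < block.vertexCount; ++v) {
    const XrVector3f& position = block.vertexBuffer[v];
    if (block.normalCount == block.vertexCount) {
        const XrVector3f& normal = block.normalBuffer[v];
        // ... use normal ...
    }
    if (block.confidenceCount == block.vertexCount) {
        const float confidence = block.confidenceBuffer[v];
        // ... use confidence ...
    }
    // ... use position ...
}
```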
// Provided by XR_ML_world_mesh_detection
typedef enum XrWorldMeshBlockResultML {
XR_WORLD_MESH_BLOCK_RESULT_SUCCESS_ML = 0,
XR_WORLD_MESH_BLOCK_RESULT_FAILED_ML = 1,
XR_WORLD_MESH_BLOCK_RESULT_PENDING_ML = 2,
XR_WORLD_MESH_BLOCK_RESULT_PARTIAL_UPDATE_ML = 3,
XR_WORLD_MESH_BLOCK_RESULT_MAX_ENUM_ML = 0x7FFFFFFF
} XrWorldMeshBlockResultML;
| Enum | Description |
|---|---|
| XR_WORLD_MESH_BLOCK_RESULT_SUCCESS_ML | Mesh request has succeeded. |
| XR_WORLD_MESH_BLOCK_RESULT_FAILED_ML | Mesh request has failed. |
| XR_WORLD_MESH_BLOCK_RESULT_PENDING_ML | Mesh request is pending. |
| XR_WORLD_MESH_BLOCK_RESULT_PARTIAL_UPDATE_ML | There are partial updates on the mesh request. |
12.157.8. Sample code
class MeshDetector {
private:
enum State {
INFO_START, INFO_WAIT_COMPLETE, MESH_START, MESH_WAIT_COMPLETE, DONE
};
XrInstance m_Instance; // previously initialized.
XrSession m_Session; // previously initialized.
XrSpace m_ViewSpace; // previously initialized.
XrSpace m_LocalSpace; // previously initialized.
State m_State{INFO_START};
XrFutureEXT m_Future{XR_NULL_FUTURE_EXT};
XrWorldMeshDetectorML m_Detector;
std::vector<XrWorldMeshBlockStateML> m_MeshBlocks;
std::array<uint32_t,2> m_MaxBlockCounts{};
// Every element needs its structure type initialized, not just the first.
std::array<XrWorldMeshBufferML,2> m_WorldMeshBuffers{{{XR_TYPE_WORLD_MESH_BUFFER_ML}, {XR_TYPE_WORLD_MESH_BUFFER_ML}}};
uint32_t m_QueryBuffer{0};
std::vector<XrWorldMeshBlockML> m_WorldMeshBlocks;
bool m_ApplicationCreatedMemory{false};
bool StartInfoQuery(XrTime displayTime) {
XrWorldMeshStateRequestInfoML requestInfo{XR_TYPE_WORLD_MESH_STATE_REQUEST_INFO_ML};
requestInfo.baseSpace = m_ViewSpace;
requestInfo.time = displayTime;
requestInfo.boundingBoxCenter.orientation.w = 1.0f;
requestInfo.boundingBoxExtents = {10.0f, 10.0f, 10.0f};
return xrRequestWorldMeshStateAsyncML(m_Detector, &requestInfo, &m_Future)==XR_SUCCESS;
}
bool CompleteInfoQuery() {
XrWorldMeshStateRequestCompletionML completion{XR_TYPE_WORLD_MESH_STATE_REQUEST_COMPLETION_ML};
if (xrRequestWorldMeshStateCompleteML(m_Detector, m_Future, &completion)!=XR_SUCCESS) {
return false;
}
if (completion.futureResult!=XR_SUCCESS) {
return false;
}
m_MeshBlocks.resize(completion.meshBlockStateCountOutput);
for (auto &mb:m_MeshBlocks) {
mb.type = XR_TYPE_WORLD_MESH_BLOCK_STATE_ML;
}
completion.meshBlockStateCapacityInput = completion.meshBlockStateCountOutput;
completion.meshBlockStates = m_MeshBlocks.data();
CHK_XR(xrRequestWorldMeshStateCompleteML(m_Detector, m_Future, &completion));
if (completion.meshBlockStateCountOutput==0) {
return false; // start a new query.
}
// switch to next buffer.
m_QueryBuffer = ( m_QueryBuffer + 1 ) % 2;
if (completion.meshBlockStateCountOutput>m_MaxBlockCounts[m_QueryBuffer]) {
m_MaxBlockCounts[m_QueryBuffer] = completion.meshBlockStateCountOutput;
XrWorldMeshBufferRecommendedSizeInfoML sizeInfo{XR_TYPE_WORLD_MESH_BUFFER_RECOMMENDED_SIZE_INFO_ML};
XrWorldMeshBufferSizeML bufferSize{XR_TYPE_WORLD_MESH_BUFFER_SIZE_ML};
sizeInfo.maxBlockCount = m_MaxBlockCounts[m_QueryBuffer];
CHK_XR(xrGetWorldMeshBufferRecommendSizeML(m_Detector, &sizeInfo, &bufferSize ));
if (m_ApplicationCreatedMemory) {
// It may be advantageous to use memory allocated
// specific to the use case. For example shared graphics
// memory may provide some performance benefits by avoiding
// extra copying.
m_WorldMeshBuffers[m_QueryBuffer].bufferSize = bufferSize.size;
m_WorldMeshBuffers[m_QueryBuffer].buffer = malloc(bufferSize.size);
} else {
CHK_XR(xrAllocateWorldMeshBufferML(m_Detector, &bufferSize, &m_WorldMeshBuffers[m_QueryBuffer]));
}
}
return true;
}
bool StartMeshQuery() {
std::vector<XrWorldMeshBlockRequestML> blockRequests;
blockRequests.resize(m_MeshBlocks.size());
for (size_t i = 0; i< m_MeshBlocks.size();i++) {
blockRequests[i].type = XR_TYPE_WORLD_MESH_BLOCK_REQUEST_ML;
blockRequests[i].uuid = m_MeshBlocks[i].uuid;
blockRequests[i].lod = XR_WORLD_MESH_DETECTOR_LOD_MEDIUM_ML;
}
XrWorldMeshGetInfoML getInfo{XR_TYPE_WORLD_MESH_GET_INFO_ML};
getInfo.flags = 0;
getInfo.fillHoleLength = 0.5f;
getInfo.disconnectedComponentArea = 1.0f;
getInfo.blockCount = static_cast<uint32_t>(blockRequests.size());
getInfo.blocks = blockRequests.data();
CHK_XR(xrRequestWorldMeshAsyncML(m_Detector, &getInfo, &m_WorldMeshBuffers[m_QueryBuffer], &m_Future));
return true;
}
bool CompleteMeshQuery(XrTime displayTime) {
XrWorldMeshRequestCompletionML completion{XR_TYPE_WORLD_MESH_REQUEST_COMPLETION_ML};
m_WorldMeshBlocks.resize(m_MeshBlocks.size());
completion.blockCount = static_cast<uint32_t>(m_WorldMeshBlocks.size());
completion.blocks = m_WorldMeshBlocks.data();
XrWorldMeshRequestCompletionInfoML completionInfo{XR_TYPE_WORLD_MESH_REQUEST_COMPLETION_INFO_ML};
completionInfo.meshSpace = m_LocalSpace;
completionInfo.meshSpaceLocateTime = displayTime;
CHK_XR(xrRequestWorldMeshCompleteML(m_Detector, &completionInfo, m_Future, &completion));
CHK_XR(completion.futureResult);
// The vertex data is now usable.
// The backing buffer is double-buffered, so the vertex data remains valid
// even while a new request is being processed.
return true;
}
public:
MeshDetector() {
XrWorldMeshDetectorCreateInfoML createInfo{XR_TYPE_WORLD_MESH_DETECTOR_CREATE_INFO_ML};
CHK_XR(xrCreateWorldMeshDetectorML(m_Session,&createInfo, &m_Detector));
}
~MeshDetector() {
// The application must ensure the future has finished before destroying.
// assert(IsDone());
for (auto &buffer : m_WorldMeshBuffers) {
if ( buffer.buffer != nullptr ) {
if (m_ApplicationCreatedMemory) {
free(buffer.buffer);
} else {
xrFreeWorldMeshBufferML(m_Detector, &buffer);
}
}
}
xrDestroyWorldMeshDetectorML(m_Detector);
m_Detector = XR_NULL_HANDLE;
}
void RequestShutdown() {
if ( m_Future != XR_NULL_FUTURE_EXT) {
XrFutureCancelInfoEXT cancelInfo{XR_TYPE_FUTURE_CANCEL_INFO_EXT};
cancelInfo.future = m_Future;
xrCancelFutureEXT(m_Instance, &cancelInfo);
}
m_State = DONE;
}
bool IsDone() {
return m_State == DONE;
}
void FrameLoop(XrTime displayTime) {
XrFuturePollResultEXT pollResult{XR_TYPE_FUTURE_POLL_RESULT_EXT};
pollResult.state = XR_FUTURE_STATE_PENDING_EXT;
// Only poll when a future is outstanding; the start states below must
// still run even when no future exists yet.
if (m_Future != XR_NULL_FUTURE_EXT) {
XrFuturePollInfoEXT pollInfo{XR_TYPE_FUTURE_POLL_INFO_EXT};
pollInfo.future = m_Future;
CHK_XR(xrPollFutureEXT(m_Instance, &pollInfo, &pollResult));
}
switch (m_State) {
case INFO_START:
if (StartInfoQuery(displayTime)) {
m_State = INFO_WAIT_COMPLETE;
}
break;
case INFO_WAIT_COMPLETE:
if (pollResult.state==XR_FUTURE_STATE_READY_EXT) {
if (CompleteInfoQuery()) {
m_State = MESH_START;
} else {
m_State = INFO_START;
}
}
break;
case MESH_START:
if (StartMeshQuery()) {
m_State = MESH_WAIT_COMPLETE;
}
break;
case MESH_WAIT_COMPLETE:
if (pollResult.state==XR_FUTURE_STATE_READY_EXT) {
if (CompleteMeshQuery(displayTime)) {
m_State = INFO_START;
}
}
break;
case DONE:
break;
}
}
};
12.157.14. New Enum Constants
- XR_ML_WORLD_MESH_DETECTION_EXTENSION_NAME
- XR_ML_world_mesh_detection_SPEC_VERSION

Extending XrObjectType:
- XR_OBJECT_TYPE_WORLD_MESH_DETECTOR_ML

Extending XrResult:
- XR_ERROR_WORLD_MESH_DETECTOR_PERMISSION_DENIED_ML
- XR_ERROR_WORLD_MESH_DETECTOR_SPACE_NOT_LOCATABLE_ML

Extending XrStructureType:
- XR_TYPE_WORLD_MESH_BLOCK_ML
- XR_TYPE_WORLD_MESH_BLOCK_REQUEST_ML
- XR_TYPE_WORLD_MESH_BLOCK_STATE_ML
- XR_TYPE_WORLD_MESH_BUFFER_ML
- XR_TYPE_WORLD_MESH_BUFFER_RECOMMENDED_SIZE_INFO_ML
- XR_TYPE_WORLD_MESH_BUFFER_SIZE_ML
- XR_TYPE_WORLD_MESH_DETECTOR_CREATE_INFO_ML
- XR_TYPE_WORLD_MESH_GET_INFO_ML
- XR_TYPE_WORLD_MESH_REQUEST_COMPLETION_INFO_ML
- XR_TYPE_WORLD_MESH_REQUEST_COMPLETION_ML
- XR_TYPE_WORLD_MESH_STATE_REQUEST_COMPLETION_ML
- XR_TYPE_WORLD_MESH_STATE_REQUEST_INFO_ML

Issues
Version History
- Revision 1, 2023-08-29
  - Initial Revision
12.158. XR_MND_headless
- Name String: XR_MND_headless
- Extension Type: Instance extension
- Registered Extension Number: 43
- Revision: 3
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2025-08-20
- IP Status: No known IP claims.
- Contributors: Rylie Pavlik, Collabora
Overview
Some applications may wish to access XR interaction devices without presenting any image content on the display(s). This extension provides a mechanism for writing such an application using the OpenXR API. It modifies the specification in the following ways, without adding any new named entities.
- When this extension is enabled, an application may call xrCreateSession without having made a call to xrGet*GraphicsRequirements, and without an XrGraphicsBinding* structure in the next chain of XrSessionCreateInfo.
- If an application does not include an XrGraphicsBinding* structure in the next chain of XrSessionCreateInfo, the runtime must create a "headless" session that does not interact with the display.
- In a headless session, the session state should proceed to XR_SESSION_STATE_READY directly from XR_SESSION_STATE_IDLE.
- In a headless session, the XrSessionBeginInfo::primaryViewConfigurationType must be ignored and may be 0.
- In a headless session, the session state proceeds to XR_SESSION_STATE_SYNCHRONIZED, then XR_SESSION_STATE_VISIBLE and XR_SESSION_STATE_FOCUSED, after the call to xrBeginSession. The application does not need to call xrWaitFrame, xrBeginFrame, or xrEndFrame, unlike with non-headless sessions.
- In a headless session, xrEnumerateSwapchainFormats must return XR_SUCCESS but enumerate 0 formats.
- xrWaitFrame must set XrFrameState::shouldRender to XR_FALSE in a headless session. The VISIBLE and FOCUSED states are only used for their input-related semantics, not their rendering-related semantics, and these functions are permitted to allow minimal change between headless and non-headless code if desired.
Because xrWaitFrame is not required, an application using a headless session should sleep periodically to avoid consuming all available system resources in a busy-wait loop.
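The headless flow above can be sketched as follows. This is a minimal sketch, not normative: it assumes an `instance` and `systemId` obtained earlier, a `CHK_XR` error-checking macro, and application-defined `running` and `syncInfo` state, and it omits the event-queue handling that waits for XR_SESSION_STATE_READY:

```cpp
// Create a session with no XrGraphicsBinding* in the next chain: with
// XR_MND_headless enabled, the runtime creates a headless session.
XrSessionCreateInfo createInfo{XR_TYPE_SESSION_CREATE_INFO};
createInfo.systemId = systemId;  // from xrGetSystem
XrSession session = XR_NULL_HANDLE;
CHK_XR(xrCreateSession(instance, &createInfo, &session));

// After XR_SESSION_STATE_READY arrives via the event queue:
XrSessionBeginInfo beginInfo{XR_TYPE_SESSION_BEGIN_INFO};
// primaryViewConfigurationType is ignored for headless sessions; 0 is valid.
CHK_XR(xrBeginSession(session, &beginInfo));

// Input loop: no frame loop is required; sleep to avoid busy-waiting.
while (running) {
    CHK_XR(xrSyncActions(session, &syncInfo));  // syncInfo prepared elsewhere
    std::this_thread::sleep_for(std::chrono::milliseconds(20));
}
```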
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Functions
Issues
- Not all devices with which this would be useful fit into one of the existing XrFormFactor values.

Version History
- Revision 1, 2019-07-25 (Rylie Pavlik, Collabora, Ltd.)
  - Initial version reflecting Monado prototype.
- Revision 2, 2019-10-22 (Rylie Pavlik, Collabora, Ltd.)
  - Clarify that xrWaitFrame is permitted and should set shouldRender to false.
- Revision 3, 2025-08-20 (Dan Willmott, Valve Corporation)
  - Clarify that apps do not need to call xrGet*GraphicsRequirements before calling xrCreateSession when using this extension.
12.159. XR_MSFT_composition_layer_reprojection
- Name String: XR_MSFT_composition_layer_reprojection
- Extension Type: Instance extension
- Registered Extension Number: 67
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2020-06-20
- IP Status: No known IP claims.
- Contributors:
  - Zonglin Wu, Microsoft
  - Bryce Hutchings, Microsoft
  - Alex Turner, Microsoft
  - Yin Li, Microsoft
Overview
This extension enables an application to provide additional reprojection information for a projection composition layer to help the runtime produce better hologram stability and visual quality.
First, the application uses xrEnumerateReprojectionModesMSFT to inspect what reprojection mode the view configuration supports.
The xrEnumerateReprojectionModesMSFT function returns the supported reprojection modes of the view configuration.
// Provided by XR_MSFT_composition_layer_reprojection
XrResult xrEnumerateReprojectionModesMSFT(
XrInstance instance,
XrSystemId systemId,
XrViewConfigurationType viewConfigurationType,
uint32_t modeCapacityInput,
uint32_t* modeCountOutput,
XrReprojectionModeMSFT* modes);
A system may support different sets of reprojection modes for different view configuration types.
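The function follows the standard two-call idiom for buffer size parameters; a minimal sketch, assuming `instance`, `systemId`, and a `viewConfigType` obtained earlier, plus a `CHK_XR` error-checking macro:

```cpp
// First call: query the number of supported modes.
uint32_t modeCount = 0;
CHK_XR(xrEnumerateReprojectionModesMSFT(instance, systemId, viewConfigType,
                                        0, &modeCount, nullptr));

// Second call: retrieve the modes themselves.
std::vector<XrReprojectionModeMSFT> modes(modeCount);
CHK_XR(xrEnumerateReprojectionModesMSFT(instance, systemId, viewConfigType,
                                        modeCount, &modeCount, modes.data()));
```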
Then, the application can provide a reprojection mode for the projection composition layer to inform the runtime that the XR experience may benefit from the provided reprojection mode.
An XrCompositionLayerReprojectionInfoMSFT structure can be added to
the next chain of XrCompositionLayerProjection structure when
calling xrEndFrame.
// Provided by XR_MSFT_composition_layer_reprojection
typedef struct XrCompositionLayerReprojectionInfoMSFT {
XrStructureType type;
const void* next;
XrReprojectionModeMSFT reprojectionMode;
} XrCompositionLayerReprojectionInfoMSFT;
When the application chains this structure when calling xrEndFrame,
the reprojectionMode must be one of the supported
XrReprojectionModeMSFT values returned by the
xrEnumerateReprojectionModesMSFT function for the corresponding
XrViewConfigurationType.
Otherwise, the runtime must return the error
XR_ERROR_REPROJECTION_MODE_UNSUPPORTED_MSFT from the xrEndFrame
function.
The runtime must only use the given information for the corresponding frame in xrEndFrame function, and it must not affect other frames.
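Chaining the structure is per-frame; a minimal sketch, assuming `layer` is an XrCompositionLayerProjection populated elsewhere and that the depth mode was among those returned by xrEnumerateReprojectionModesMSFT:

```cpp
// Chain the reprojection info onto a projection layer for this frame only.
XrCompositionLayerReprojectionInfoMSFT reprojectionInfo{XR_TYPE_COMPOSITION_LAYER_REPROJECTION_INFO_MSFT};
reprojectionInfo.reprojectionMode = XR_REPROJECTION_MODE_DEPTH_MSFT;
reprojectionInfo.next = layer.next;  // preserve any existing next chain
layer.next = &reprojectionInfo;
// ... submit `layer` in XrFrameEndInfo::layers via xrEndFrame ...
```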
The XrReprojectionModeMSFT enumeration describes the reprojection mode of a projection composition layer.
// Provided by XR_MSFT_composition_layer_reprojection
typedef enum XrReprojectionModeMSFT {
XR_REPROJECTION_MODE_DEPTH_MSFT = 1,
XR_REPROJECTION_MODE_PLANAR_FROM_DEPTH_MSFT = 2,
XR_REPROJECTION_MODE_PLANAR_MANUAL_MSFT = 3,
XR_REPROJECTION_MODE_ORIENTATION_ONLY_MSFT = 4,
XR_REPROJECTION_MODE_MAX_ENUM_MSFT = 0x7FFFFFFF
} XrReprojectionModeMSFT;
When the application passes XR_REPROJECTION_MODE_DEPTH_MSFT or
XR_REPROJECTION_MODE_PLANAR_FROM_DEPTH_MSFT mode, it should also
provide the depth buffer for the corresponding layer using
XrCompositionLayerDepthInfoKHR in
XR_KHR_composition_layer_depth extension.
However, if the application does not submit this depth buffer, the runtime
must apply a runtime defined fallback reprojection mode, and must not fail
the xrEndFrame function because of this missing depth.
When the application passes XR_REPROJECTION_MODE_PLANAR_MANUAL_MSFT or
XR_REPROJECTION_MODE_ORIENTATION_ONLY_MSFT mode, it should avoid
providing a depth buffer for the corresponding layer using
XrCompositionLayerDepthInfoKHR in
XR_KHR_composition_layer_depth extension.
However, if the application does submit this depth buffer, the runtime must
not fail the xrEndFrame function because of this unused depth data.
When the application is confident that overriding the reprojection plane can benefit hologram stability, it can provide XrCompositionLayerReprojectionPlaneOverrideMSFT structure to further help the runtime to fine tune the reprojection details.
An application can add an
XrCompositionLayerReprojectionPlaneOverrideMSFT structure to the
next chain of XrCompositionLayerProjection structure.
The runtime must only use the given plane override for the corresponding frame in xrEndFrame function, and it must not affect other frames.
// Provided by XR_MSFT_composition_layer_reprojection
typedef struct XrCompositionLayerReprojectionPlaneOverrideMSFT {
XrStructureType type;
const void* next;
XrVector3f position;
XrVector3f normal;
XrVector3f velocity;
} XrCompositionLayerReprojectionPlaneOverrideMSFT;
A runtime must return XR_ERROR_VALIDATION_FAILURE if the normal
vector deviates by more than 1% from unit length.
Adding a reprojection plane override may benefit various reprojection modes
including XR_REPROJECTION_MODE_DEPTH_MSFT,
XR_REPROJECTION_MODE_PLANAR_FROM_DEPTH_MSFT and
XR_REPROJECTION_MODE_PLANAR_MANUAL_MSFT.
When the application chooses the XR_REPROJECTION_MODE_ORIENTATION_ONLY_MSFT
mode, the reprojection plane override may be ignored by the runtime.
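A plane override is chained the same way as the reprojection info; a minimal sketch, assuming `layer` is an XrCompositionLayerProjection populated elsewhere, with illustrative position and velocity values:

```cpp
// Chain a plane override onto a projection layer. The normal must be within
// 1% of unit length, or the runtime returns XR_ERROR_VALIDATION_FAILURE.
XrCompositionLayerReprojectionPlaneOverrideMSFT planeOverride{XR_TYPE_COMPOSITION_LAYER_REPROJECTION_PLANE_OVERRIDE_MSFT};
planeOverride.position = {0.0f, 0.0f, -2.0f};  // e.g. 2m in front of the space origin
planeOverride.normal = {0.0f, 0.0f, 1.0f};     // unit length, facing the viewer
planeOverride.velocity = {0.0f, 0.0f, 0.0f};   // stationary plane
planeOverride.next = layer.next;               // preserve any existing chain
layer.next = &planeOverride;
```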
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_COMPOSITION_LAYER_REPROJECTION_INFO_MSFT -
XR_TYPE_COMPOSITION_LAYER_REPROJECTION_PLANE_OVERRIDE_MSFT
XrResult enumeration is extended with:
-
XR_ERROR_REPROJECTION_MODE_UNSUPPORTED_MSFT
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2020-06-20 (Yin Li)
  - Initial extension proposal
12.160. XR_MSFT_controller_model
- Name String: XR_MSFT_controller_model
- Extension Type: Instance extension
- Registered Extension Number: 56
- Revision: 2
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Contributors:
  - Bryce Hutchings, Microsoft
  - Darryl Gough, Microsoft
  - Yin Li, Microsoft
  - Lachlan Ford, Microsoft
Overview
This extension provides a mechanism to load a GLTF model for controllers. An application can render the controller model using the real time pose input from controller’s grip action pose and animate controller parts representing the user’s interactions, such as pressing a button, or pulling a trigger.
This extension supports any controller interaction profile that supports …/grip/pose. The returned controller model represents the physical controller held in the user’s hands, and it may be different from the current interaction profile.
Query controller model key
xrGetControllerModelKeyMSFT retrieves the
XrControllerModelKeyMSFT for a controller.
This model key may later be used to retrieve the model data.
The xrGetControllerModelKeyMSFT function is defined as:
// Provided by XR_MSFT_controller_model
XrResult xrGetControllerModelKeyMSFT(
XrSession session,
XrPath topLevelUserPath,
XrControllerModelKeyStateMSFT* controllerModelKeyState);
The XrControllerModelKeyStateMSFT structure is defined as:
// Provided by XR_MSFT_controller_model
typedef struct XrControllerModelKeyStateMSFT {
XrStructureType type;
void* next;
XrControllerModelKeyMSFT modelKey;
} XrControllerModelKeyStateMSFT;
The modelKey value for the session represents a unique controller
model that can be retrieved from xrLoadControllerModelMSFT function.
Therefore, the application can use modelKey to cache the returned
data from xrLoadControllerModelMSFT for the session.
A modelKey value of XR_NULL_CONTROLLER_MODEL_KEY_MSFT,
represents an invalid model key and indicates there is no controller model
yet available.
The application should keep calling xrGetControllerModelKeyMSFT
because the model may become available at a later point.
The returned modelKey value depends on an active action binding to the
corresponding …/grip/pose of the controller.
Therefore, the application must have provided a valid action set containing
an action for …/grip/pose, and have successfully completed an
xrSyncActions call, in order to obtain a valid modelKey.
// Provided by XR_MSFT_controller_model
#define XR_NULL_CONTROLLER_MODEL_KEY_MSFT 0
XR_NULL_CONTROLLER_MODEL_KEY_MSFT defines an invalid model key value.
// Provided by XR_MSFT_controller_model
XR_DEFINE_ATOM(XrControllerModelKeyMSFT)
The controller model key used to retrieve the data for the renderable controller model and associated properties and state.
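The polling pattern described above can be sketched as follows; it assumes a `session` with a bound …/grip/pose action, a successful xrSyncActions call earlier in the frame, and a `CHK_XR` error-checking macro:

```cpp
// After a successful xrSyncActions, poll for the left-hand controller's
// model key. The key may be null until the runtime identifies the model.
XrPath leftHandPath;
CHK_XR(xrStringToPath(instance, "/user/hand/left", &leftHandPath));

XrControllerModelKeyStateMSFT keyState{XR_TYPE_CONTROLLER_MODEL_KEY_STATE_MSFT};
CHK_XR(xrGetControllerModelKeyMSFT(session, leftHandPath, &keyState));
if (keyState.modelKey != XR_NULL_CONTROLLER_MODEL_KEY_MSFT) {
    // Model data can now be loaded (and cached) using keyState.modelKey.
}
```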
Load controller model as glTF 2.0 data
Once the application has obtained a valid modelKey, it can use the
xrLoadControllerModelMSFT function to load the GLB data for the
controller model.
The xrLoadControllerModelMSFT function loads the controller model as a byte buffer containing a binary form of glTF (a.k.a GLB file format) for the controller. The binary glTF data must conform to glTF 2.0 format defined at https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html.
// Provided by XR_MSFT_controller_model
XrResult xrLoadControllerModelMSFT(
XrSession session,
XrControllerModelKeyMSFT modelKey,
uint32_t bufferCapacityInput,
uint32_t* bufferCountOutput,
uint8_t* buffer);
The xrLoadControllerModelMSFT function may be a slow operation and therefore should be invoked from a non-timing critical thread.
If the input modelKey is invalid, i.e. it is
XR_NULL_CONTROLLER_MODEL_KEY_MSFT or not a key returned from
XrControllerModelKeyStateMSFT, the runtime must return
XR_ERROR_CONTROLLER_MODEL_KEY_INVALID_MSFT.
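Loading follows the two-call idiom for buffer size parameters; a minimal sketch, assuming a valid `modelKey` and a `CHK_XR` error-checking macro, invoked from a non-timing-critical thread:

```cpp
// First call: query the size of the GLB data.
uint32_t bufferSize = 0;
CHK_XR(xrLoadControllerModelMSFT(session, modelKey, 0, &bufferSize, nullptr));

// Second call: retrieve the data itself.
std::vector<uint8_t> glbData(bufferSize);
CHK_XR(xrLoadControllerModelMSFT(session, modelKey, bufferSize, &bufferSize,
                                 glbData.data()));
// glbData now holds a complete glTF 2.0 binary (GLB) file; it can be cached
// per modelKey for the lifetime of the session.
```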
Animate controller parts
The application can animate parts of the glTF model to represent the user’s interaction on the controller, such as pressing a button or pulling a trigger.
Once the application loads the glTF model of the controller, it should
first get XrControllerModelPropertiesMSFT containing an array of node
names in the glTF model that can be animated.
These properties, including the order of these node names in the array,
must be immutable for a valid modelKey in the session, and therefore
can be cached.
In the frame loop, the application should get
XrControllerModelStateMSFT to retrieve the pose of each node
representing user’s interaction on the controller and apply the transform to
the corresponding node in the glTF model using application’s glTF renderer.
The xrGetControllerModelPropertiesMSFT function returns the controller
model properties for a given modelKey.
// Provided by XR_MSFT_controller_model
XrResult xrGetControllerModelPropertiesMSFT(
XrSession session,
XrControllerModelKeyMSFT modelKey,
XrControllerModelPropertiesMSFT* properties);
The runtime must return the same data in
XrControllerModelPropertiesMSFT for a valid modelKey.
Therefore, the application can cache the returned
XrControllerModelPropertiesMSFT using modelKey and reuse the
data for each frame.
If the input modelKey is invalid, i.e. it is
XR_NULL_CONTROLLER_MODEL_KEY_MSFT or not a key returned from
XrControllerModelKeyStateMSFT, the runtime must return
XR_ERROR_CONTROLLER_MODEL_KEY_INVALID_MSFT.
The XrControllerModelPropertiesMSFT structure describes the properties of a controller model including an array of XrControllerModelNodePropertiesMSFT.
// Provided by XR_MSFT_controller_model
typedef struct XrControllerModelPropertiesMSFT {
XrStructureType type;
void* next;
uint32_t nodeCapacityInput;
uint32_t nodeCountOutput;
XrControllerModelNodePropertiesMSFT* nodeProperties;
} XrControllerModelPropertiesMSFT;
The XrControllerModelNodePropertiesMSFT structure describes properties of animatable nodes, including the node name and parent node name to locate a glTF node in the controller model that can be animated based on user’s interactions on the controller.
// Provided by XR_MSFT_controller_model
typedef struct XrControllerModelNodePropertiesMSFT {
XrStructureType type;
void* next;
char parentNodeName[XR_MAX_CONTROLLER_MODEL_NODE_NAME_SIZE_MSFT];
char nodeName[XR_MAX_CONTROLLER_MODEL_NODE_NAME_SIZE_MSFT];
} XrControllerModelNodePropertiesMSFT;
The node can be located in the glTF node hierarchy by finding the node(s)
with the matching node name and parent node name.
If the parentNodeName is empty, the matching will be solely based on
the nodeName.
If multiple nodes in the glTF file match the condition above, the first matching node found by depth-first traversal of the glTF scene should be animated and the rest should be ignored.
The runtime must not return any nodeName or parentNodeName that
does not match any glTF nodes in the corresponding controller model.
The xrGetControllerModelStateMSFT function returns the current state of the controller model representing user’s interaction to the controller, such as pressing a button or pulling a trigger.
// Provided by XR_MSFT_controller_model
XrResult xrGetControllerModelStateMSFT(
XrSession session,
XrControllerModelKeyMSFT modelKey,
XrControllerModelStateMSFT* state);
The runtime may return different state for a model key after each call to xrSyncActions, which represents the latest state of the user interactions.
If the input modelKey is invalid, i.e. it is
XR_NULL_CONTROLLER_MODEL_KEY_MSFT or not a key returned from
XrControllerModelKeyStateMSFT, the runtime must return
XR_ERROR_CONTROLLER_MODEL_KEY_INVALID_MSFT.
The XrControllerModelStateMSFT structure describes the state of a controller model, including an array of XrControllerModelNodeStateMSFT.
// Provided by XR_MSFT_controller_model
typedef struct XrControllerModelStateMSFT {
XrStructureType type;
void* next;
uint32_t nodeCapacityInput;
uint32_t nodeCountOutput;
XrControllerModelNodeStateMSFT* nodeStates;
} XrControllerModelStateMSFT;
The XrControllerModelNodeStateMSFT structure describes the state of a node in a controller model.
// Provided by XR_MSFT_controller_model
typedef struct XrControllerModelNodeStateMSFT {
XrStructureType type;
void* next;
XrPosef nodePose;
} XrControllerModelNodeStateMSFT;
The state corresponds to the glTF node identified by the
XrControllerModelNodePropertiesMSFT::nodeName and
XrControllerModelNodePropertiesMSFT::parentNodeName of the node
property at the same array index in
XrControllerModelPropertiesMSFT::nodeProperties.
The nodePose is based on the user’s interaction on the controller at
the latest xrSyncActions, represented as the XrPosef of the node
in its parent node space.
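The per-frame update can be sketched as follows; this assumes `nodeProps` was cached from xrGetControllerModelPropertiesMSFT, `ApplyNodePose` is an application-defined hook into the glTF renderer, and `CHK_XR` is an error-checking macro:

```cpp
// Each frame, after xrSyncActions: fetch node states via the two-call idiom
// and apply each pose to the matching glTF node.
XrControllerModelStateMSFT state{XR_TYPE_CONTROLLER_MODEL_STATE_MSFT};
CHK_XR(xrGetControllerModelStateMSFT(session, modelKey, &state));

std::vector<XrControllerModelNodeStateMSFT> nodeStates(
    state.nodeCountOutput, {XR_TYPE_CONTROLLER_MODEL_NODE_STATE_MSFT});
state.nodeCapacityInput = static_cast<uint32_t>(nodeStates.size());
state.nodeStates = nodeStates.data();
CHK_XR(xrGetControllerModelStateMSFT(session, modelKey, &state));

for (uint32_t i = 0; i < state.nodeCountOutput; ++i) {
    // Node states are index-aligned with the cached node properties.
    ApplyNodePose(nodeProps[i], nodeStates[i].nodePose);
}
```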
New Object Types
New Flag Types
New Enum Constants
-
XR_MAX_CONTROLLER_MODEL_NODE_NAME_SIZE_MSFT -
XR_TYPE_CONTROLLER_MODEL_NODE_PROPERTIES_MSFT -
XR_TYPE_CONTROLLER_MODEL_PROPERTIES_MSFT -
XR_TYPE_CONTROLLER_MODEL_NODE_STATE_MSFT -
XR_TYPE_CONTROLLER_MODEL_STATE_MSFT -
XR_ERROR_CONTROLLER_MODEL_KEY_INVALID_MSFT
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2020-03-12 (Yin Li)
  - Initial extension description
- Revision 2, 2020-08-12 (Bryce Hutchings)
  - Remove a possible error condition
12.161. XR_MSFT_first_person_observer
- Name String: XR_MSFT_first_person_observer
- Extension Type: Instance extension
- Registered Extension Number: 55
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2020-05-02
- IP Status: No known IP claims.
- Contributors:
  - Yin Li, Microsoft
  - Zonglin Wu, Microsoft
  - Alex Turner, Microsoft
12.161.1. Overview
This first-person observer view configuration enables the runtime to request the application to render an additional first-person view of the scene to be composed onto video frames being captured from a camera attached to and moved with the primary display on the form factor, which is generally for viewing on a 2D screen by an external observer. This first-person camera will be facing forward with roughly the same perspective as the primary views, and so the application should render its view to show objects that surround the user and avoid rendering the user’s body avatar. The runtime is responsible for composing the application’s rendered observer view onto the camera frame based on the chosen environment blend mode for this view configuration, as this extension does not provide the associated camera frame to the application.
This extension requires the XR_MSFT_secondary_view_configuration
extension to also be enabled.
XR_VIEW_CONFIGURATION_TYPE_SECONDARY_MONO_FIRST_PERSON_OBSERVER_MSFT
requires one projection in each XrCompositionLayerProjection layer.
Runtimes should only make this view configuration active when the user or the application activates a runtime feature that will make use of the resulting composed camera frames, for example taking a mixed reality photo. Otherwise, the runtime should leave this view configuration inactive to avoid the application wasting CPU and GPU resources rendering unnecessarily for this extra view.
Because this is a first-person view of the scene, applications can share a
common culling and instanced rendering pass with their primary view renders.
However, the view state (pose and FOV) of the first-person observer view
will not match the view state of any of the primary views.
Applications enabling this view configuration must call xrLocateViews
a second time each frame to explicitly query the view state for the
XR_VIEW_CONFIGURATION_TYPE_SECONDARY_MONO_FIRST_PERSON_OBSERVER_MSFT
configuration.
This secondary view configuration may support a different set of environment blend modes than the primary view configuration. For example, a device that only supports additive blending for its primary display may support alpha-blending when composing the first-person observer view with camera frames. The application should render with assets and shaders that produce output acceptable to both the primary and observer view configuration’s environment blend modes when sharing render passes across both view configurations.
New Object Types
New Flag Types
New Enum Constants
XrViewConfigurationType enumeration is extended with:
-
XR_VIEW_CONFIGURATION_TYPE_SECONDARY_MONO_FIRST_PERSON_OBSERVER_MSFT
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2019-07-30 (Yin Li)
-
Initial extension description
-
12.162. XR_MSFT_hand_tracking_mesh
- Name String
-
XR_MSFT_hand_tracking_mesh
- Extension Type
-
Instance extension
- Registered Extension Number
-
53
- Revision
-
4
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2021-10-20
- IP Status
-
No known IP claims.
- Contributors
-
Yin Li, Microsoft
Lachlan Ford, Microsoft
Alex Turner, Microsoft
Bryce Hutchings, Microsoft
12.162.1. Overview
This extension enables hand tracking inputs represented as a dynamic hand mesh. It enables applications to render hands in XR experiences and interact with virtual objects using hand meshes.
The application must also enable the XR_EXT_hand_tracking extension
in order to use this extension.
Inspect system capability
An application can inspect whether the system is capable of hand tracking meshes by chaining an XrSystemHandTrackingMeshPropertiesMSFT structure to the XrSystemProperties when calling xrGetSystemProperties.
// Provided by XR_MSFT_hand_tracking_mesh
typedef struct XrSystemHandTrackingMeshPropertiesMSFT {
XrStructureType type;
void* next;
XrBool32 supportsHandTrackingMesh;
uint32_t maxHandMeshIndexCount;
uint32_t maxHandMeshVertexCount;
} XrSystemHandTrackingMeshPropertiesMSFT;
If a runtime returns XR_FALSE for supportsHandTrackingMesh, the
system does not support hand tracking mesh input, and therefore must return
XR_ERROR_FEATURE_UNSUPPORTED from xrCreateHandMeshSpaceMSFT and
xrUpdateHandMeshMSFT.
The application should avoid using hand mesh functionality when
supportsHandTrackingMesh is XR_FALSE.
If a runtime returns XR_TRUE for supportsHandTrackingMesh, the
system supports hand tracking mesh input.
In this case, the runtime must return a positive number for
maxHandMeshIndexCount and maxHandMeshVertexCount.
An application should use maxHandMeshIndexCount and
maxHandMeshVertexCount to preallocate hand mesh buffers and reuse them
in their render loop when calling xrUpdateHandMeshMSFT every frame.
12.162.2. Obtain a hand tracker handle
An application first creates an XrHandTrackerEXT handle using the xrCreateHandTrackerEXT function for each hand. The application can also reuse the same XrHandTrackerEXT handle previously created for the hand joint tracking. When doing so, the hand mesh input is always in sync with hand joints input with the same XrHandTrackerEXT handle.
12.162.3. Create a hand mesh space
The application creates a hand mesh space using function xrCreateHandMeshSpaceMSFT. The position and normal of hand mesh vertices will be represented in this space.
// Provided by XR_MSFT_hand_tracking_mesh
XrResult xrCreateHandMeshSpaceMSFT(
XrHandTrackerEXT handTracker,
const XrHandMeshSpaceCreateInfoMSFT* createInfo,
XrSpace* space);
The location of the hand mesh space is defined by runtime preference to represent hand mesh vertices effectively without unnecessary transformations. For example, an optical hand tracking system can define the hand mesh space origin at the depth camera’s optical center.
An application should create separate hand mesh space handles for each hand to retrieve the corresponding hand mesh data. The runtime may use the lifetime of this hand mesh space handle to manage the underlying device resources. Therefore, the application should destroy the hand mesh handle after it is finished using the hand mesh.
The hand mesh space can be related to other spaces in the session, such as
view reference space, or grip action space from the
/interaction_profiles/khr/simple_controller interaction profile.
The hand mesh space may not be locatable when the hand is outside of the
tracking range, or if focus is removed from the application.
In these cases, the runtime must not set the
XR_SPACE_LOCATION_POSITION_VALID_BIT and
XR_SPACE_LOCATION_ORIENTATION_VALID_BIT bits on calls to
xrLocateSpace with the hand mesh space, and the application should
avoid using the returned poses or query for hand mesh data.
If the underlying XrHandTrackerEXT is destroyed, the runtime must
continue to support xrLocateSpace using the hand mesh space, and it
must return space location with XR_SPACE_LOCATION_POSITION_VALID_BIT
and XR_SPACE_LOCATION_ORIENTATION_VALID_BIT unset.
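A minimal sketch of this locatability check, using the XR_SPACE_LOCATION_*_VALID_BIT values from openxr.h (redeclared locally here only so the sketch is self-contained):

```cpp
#include <cstdint>

// Bit values match XR_SPACE_LOCATION_ORIENTATION_VALID_BIT (0x1) and
// XR_SPACE_LOCATION_POSITION_VALID_BIT (0x2) in openxr.h.
constexpr uint64_t kOrientationValidBit = 0x00000001;
constexpr uint64_t kPositionValidBit    = 0x00000002;

// Returns true only when both orientation and position of the hand mesh
// space are valid; when false, the application should not use the returned
// pose and should skip querying hand mesh data for this frame.
inline bool handMeshSpaceUsable(uint64_t locationFlags) {
    return (locationFlags & kOrientationValidBit) != 0 &&
           (locationFlags & kPositionValidBit) != 0;
}
```

An application would feed this the XrSpaceLocation::locationFlags value returned by xrLocateSpace for the hand mesh space.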
The application may create a mesh space for the reference hand by setting
XrHandPoseTypeInfoMSFT::handPoseType to
XR_HAND_POSE_TYPE_REFERENCE_OPEN_PALM_MSFT.
Hand mesh spaces for the reference hand must only be locatable in reference
to mesh spaces or joint spaces of the reference hand.
// Provided by XR_MSFT_hand_tracking_mesh
typedef struct XrHandMeshSpaceCreateInfoMSFT {
XrStructureType type;
const void* next;
XrHandPoseTypeMSFT handPoseType;
XrPosef poseInHandMeshSpace;
} XrHandMeshSpaceCreateInfoMSFT;
12.162.4. Locate the hand mesh
The application can use the xrUpdateHandMeshMSFT function to retrieve the hand mesh at a given timestamp. The positions and normals of the hand mesh vertices are represented in the hand mesh space created by xrCreateHandMeshSpaceMSFT with the same XrHandTrackerEXT.
// Provided by XR_MSFT_hand_tracking_mesh
XrResult xrUpdateHandMeshMSFT(
XrHandTrackerEXT handTracker,
const XrHandMeshUpdateInfoMSFT* updateInfo,
XrHandMeshMSFT* handMesh);
The application should preallocate the index buffer and vertex buffer in
XrHandMeshMSFT using the
XrSystemHandTrackingMeshPropertiesMSFT::maxHandMeshIndexCount
and
XrSystemHandTrackingMeshPropertiesMSFT::maxHandMeshVertexCount
from the XrSystemHandTrackingMeshPropertiesMSFT returned from the
xrGetSystemProperties function.
The application should preallocate the XrHandMeshMSFT structure and
reuse it each frame to reduce data copies when the underlying tracking data
has not changed.
The application should use XrHandMeshMSFT::indexBufferChanged
and XrHandMeshMSFT::vertexBufferChanged in XrHandMeshMSFT
to detect changes and avoid unnecessary data processing when there are no
changes.
An XrHandMeshUpdateInfoMSFT structure describes the information to update a hand mesh.
// Provided by XR_MSFT_hand_tracking_mesh
typedef struct XrHandMeshUpdateInfoMSFT {
XrStructureType type;
const void* next;
XrTime time;
XrHandPoseTypeMSFT handPoseType;
} XrHandMeshUpdateInfoMSFT;
A runtime may not maintain a full history of hand mesh data; therefore the
returned XrHandMeshMSFT may contain data that does not correspond
exactly to the input time.
If the runtime cannot return any tracking data for the given time at
all, it must set XrHandMeshMSFT::isActive to XR_FALSE for
the call to xrUpdateHandMeshMSFT.
Otherwise, if the runtime returns XrHandMeshMSFT::isActive as
XR_TRUE, the data in XrHandMeshMSFT must be valid to use.
An application can choose different handPoseType values to query the
hand mesh data.
The returned hand mesh must be consistent with the hand joint space
locations on the same XrHandTrackerEXT when using the same
XrHandPoseTypeMSFT.
An XrHandMeshMSFT structure contains data and buffers to receive updates of hand mesh tracking data from the xrUpdateHandMeshMSFT function.
// Provided by XR_MSFT_hand_tracking_mesh
typedef struct XrHandMeshMSFT {
XrStructureType type;
void* next;
XrBool32 isActive;
XrBool32 indexBufferChanged;
XrBool32 vertexBufferChanged;
XrHandMeshIndexBufferMSFT indexBuffer;
XrHandMeshVertexBufferMSFT vertexBuffer;
} XrHandMeshMSFT;
When the returned isActive value is XR_FALSE, the runtime
indicates that the hand is not actively tracked, for example when the hand
is outside of the sensor’s range or input focus is taken away from the
application.
When the runtime returns XR_FALSE for isActive, it must set
indexBufferChanged and vertexBufferChanged to XR_FALSE,
and must not change the content in indexBuffer or vertexBuffer.
When the returned isActive value is XR_TRUE, the hand tracking
mesh represented in indexBuffer and vertexBuffer is updated to
the latest data for the XrHandMeshUpdateInfoMSFT::time given to
the xrUpdateHandMeshMSFT function.
The runtime must set indexBufferChanged and vertexBufferChanged
to reflect whether the index or vertex buffer’s content are changed during
the update.
In this way, the application can easily avoid unnecessary processing of
buffers when there’s no new data.
The hand mesh is represented in triangle lists and each triangle’s vertices
are in clockwise order when looking from outside of the hand.
When hand tracking is active, i.e. when isActive is returned as
XR_TRUE, the returned indexBuffer.indexCountOutput value must be
a positive multiple of 3, and the vertexBuffer.vertexCountOutput value
must be equal to or larger than 3.
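These invariants can be sanity-checked with a small helper; the plain integer parameters below stand in for the indexCountOutput and vertexCountOutput fields:

```cpp
#include <cstdint>

// Checks the guarantees the runtime makes for an active hand mesh: the
// index count is a positive multiple of 3 (a triangle list) and there are
// at least 3 vertices.
inline bool activeHandMeshCountsValid(uint32_t indexCountOutput,
                                      uint32_t vertexCountOutput) {
    return indexCountOutput > 0 &&
           indexCountOutput % 3 == 0 &&
           vertexCountOutput >= 3;
}
```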
An XrHandMeshIndexBufferMSFT structure includes an array of indices describing the triangle list of a hand mesh.
// Provided by XR_MSFT_hand_tracking_mesh
typedef struct XrHandMeshIndexBufferMSFT {
uint32_t indexBufferKey;
uint32_t indexCapacityInput;
uint32_t indexCountOutput;
uint32_t* indices;
} XrHandMeshIndexBufferMSFT;
An application should preallocate the indices array using the
XrSystemHandTrackingMeshPropertiesMSFT::maxHandMeshIndexCount
returned from xrGetSystemProperties.
In this way, the application can avoid possible insufficient buffer sizes
for each query, and therefore avoid reallocating memory each frame.
The input indexCapacityInput must not be 0, and indices must
not be NULL, or else the runtime must return
XR_ERROR_VALIDATION_FAILURE on calls to the xrUpdateHandMeshMSFT
function.
If the input indexCapacityInput is not sufficient to contain all
output indices, the runtime must return XR_ERROR_SIZE_INSUFFICIENT on
calls to xrUpdateHandMeshMSFT, not change the content in
indexBufferKey and indices, and return 0 for
indexCountOutput.
If the input indexCapacityInput is equal to or larger than the
XrSystemHandTrackingMeshPropertiesMSFT::maxHandMeshIndexCount
returned from xrGetSystemProperties, the runtime must not return an
XR_ERROR_SIZE_INSUFFICIENT error from xrUpdateHandMeshMSFT because
of insufficient index buffer size.
If the input indexBufferKey is 0, the capacity of the indices array
is sufficient, and hand mesh tracking is active, the runtime must return
the latest non-zero indexBufferKey, and fill in indexCountOutput
and indices.
If the input indexBufferKey is not 0, the runtime can either return
without changing indexCountOutput or the content in indices, and
return XR_FALSE for XrHandMeshMSFT::indexBufferChanged
indicating the indices are not changed; or return a new non-zero
indexBufferKey, fill in the latest data in indexCountOutput and
indices, and return XR_TRUE for
XrHandMeshMSFT::indexBufferChanged indicating the indices are
updated to a newer version.
An application can keep the XrHandMeshIndexBufferMSFT structure for
each frame in a frame loop and use the returned indexBufferKey to
identify different triangle list topologies described in indices.
The application can therefore avoid unnecessary processing of indices, such
as copying them to GPU memory.
The runtime must return the same indexBufferKey for the same
XrHandTrackerEXT at a given time, regardless of the input
XrHandPoseTypeMSFT in XrHandMeshUpdateInfoMSFT.
This ensures the index buffer has the same mesh topology and allows the
application to reason about vertices across different hand pose types.
For example, the application can build a procedure to perform UV mapping on
vertices of a hand mesh using
XR_HAND_POSE_TYPE_REFERENCE_OPEN_PALM_MSFT, and apply the resultant UV
data on vertices to the mesh returned from the same hand tracker using
XR_HAND_POSE_TYPE_TRACKED_MSFT.
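As a sketch of this reuse pattern, an application might cache derived data (such as UVs) keyed on indexBufferKey; HandMeshUvCache and its fields are hypothetical names, not part of the extension:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical cache: derived per-vertex data is rebuilt only when the
// runtime reports a new indexBufferKey, i.e. a new mesh topology. Because
// the key is shared across hand pose types on the same XrHandTrackerEXT,
// data built from the reference pose can be reused for the tracked mesh.
struct HandMeshUvCache {
    uint32_t cachedKey = 0;   // 0 = nothing cached yet
    std::vector<float> uvs;   // toy stand-in for real UV data
    int rebuilds = 0;         // how many times the UVs were regenerated

    void update(uint32_t indexBufferKey, uint32_t vertexCount) {
        if (indexBufferKey == cachedKey) {
            return;  // same topology: keep the cached UVs
        }
        cachedKey = indexBufferKey;
        uvs.assign(vertexCount, 0.0f);  // stand-in for real UV generation
        ++rebuilds;
    }
};
```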
An XrHandMeshVertexBufferMSFT structure includes an array of vertices of the hand mesh represented in the hand mesh space.
// Provided by XR_MSFT_hand_tracking_mesh
typedef struct XrHandMeshVertexBufferMSFT {
XrTime vertexUpdateTime;
uint32_t vertexCapacityInput;
uint32_t vertexCountOutput;
XrHandMeshVertexMSFT* vertices;
} XrHandMeshVertexBufferMSFT;
An application should preallocate the vertices array using the
XrSystemHandTrackingMeshPropertiesMSFT::maxHandMeshVertexCount
returned from xrGetSystemProperties.
In this way, the application can avoid possible insufficient buffer sizes
for each query, and therefore avoid reallocating memory each frame.
The input vertexCapacityInput must not be 0, and vertices must
not be NULL, or else the runtime must return
XR_ERROR_VALIDATION_FAILURE on calls to the xrUpdateHandMeshMSFT
function.
If the input vertexCapacityInput is not sufficient to contain all
output vertices, the runtime must return XR_ERROR_SIZE_INSUFFICIENT
on calls to xrUpdateHandMeshMSFT, not change the content in
vertexUpdateTime and vertices, and return 0 for
vertexCountOutput.
If the input vertexCapacityInput is equal to or larger than the
XrSystemHandTrackingMeshPropertiesMSFT::maxHandMeshVertexCount
returned from xrGetSystemProperties, the runtime must not return
XR_ERROR_SIZE_INSUFFICIENT on calls to the xrUpdateHandMeshMSFT
because of insufficient vertex buffer size.
If the input vertexUpdateTime is 0, and the capacity of the vertices
array is sufficient, and hand mesh tracking is active, the runtime must
return the latest non-zero vertexUpdateTime, and fill in the
vertexCountOutput and vertices fields.
If the input vertexUpdateTime is not 0, the runtime can either return
without changing vertexCountOutput or the content in vertices,
and return XR_FALSE for
XrHandMeshMSFT::vertexBufferChanged indicating the vertices are
not changed; or return a new non-zero vertexUpdateTime, fill in the
latest data in vertexCountOutput and vertices, and return
XR_TRUE for XrHandMeshMSFT::vertexBufferChanged indicating
the vertices are updated to a newer version.
An application can keep the XrHandMeshVertexBufferMSFT structure for
each frame in a frame loop and use the returned vertexUpdateTime to
detect changes to the content of vertices.
The application can therefore avoid unnecessary processing of vertices,
such as copying them to GPU memory.
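A minimal sketch of that change detection, modeling vertexUpdateTime as a plain int64_t (the underlying type of XrTime):

```cpp
#include <cstdint>

// Remembers the last vertexUpdateTime seen and reports whether the vertex
// buffer needs reprocessing, e.g. re-uploading to GPU memory.
struct VertexBufferWatcher {
    int64_t lastUpdateTime = 0;

    // Returns true when the runtime reported newer vertex data.
    bool needsReprocess(int64_t vertexUpdateTime) {
        if (vertexUpdateTime == lastUpdateTime) {
            return false;  // unchanged since the previous frame
        }
        lastUpdateTime = vertexUpdateTime;
        return true;
    }
};
```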
Each XrHandMeshVertexMSFT includes the position and normal of a vertex of a hand mesh.
// Provided by XR_MSFT_hand_tracking_mesh
typedef struct XrHandMeshVertexMSFT {
XrVector3f position;
XrVector3f normal;
} XrHandMeshVertexMSFT;
12.162.5. Example code for hand mesh tracking
The following example code demonstrates preallocating hand mesh buffers and updating the hand mesh in the rendering loop.
XrInstance instance; // previously initialized
XrSystemId systemId; // previously initialized
XrSession session; // previously initialized
// Inspect hand tracking mesh system properties
XrSystemHandTrackingMeshPropertiesMSFT handMeshSystemProperties{XR_TYPE_SYSTEM_HAND_TRACKING_MESH_PROPERTIES_MSFT};
XrSystemProperties systemProperties{XR_TYPE_SYSTEM_PROPERTIES, &handMeshSystemProperties};
CHK_XR(xrGetSystemProperties(instance, systemId, &systemProperties));
if (!handMeshSystemProperties.supportsHandTrackingMesh) {
// the system does not support hand mesh tracking
return;
}
// Get function pointer for xrCreateHandTrackerEXT
PFN_xrCreateHandTrackerEXT pfnCreateHandTrackerEXT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrCreateHandTrackerEXT",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnCreateHandTrackerEXT)));
// Create a tracker for left hand.
XrHandTrackerEXT leftHandTracker{};
{
XrHandTrackerCreateInfoEXT createInfo{XR_TYPE_HAND_TRACKER_CREATE_INFO_EXT};
createInfo.hand = XR_HAND_LEFT_EXT;
createInfo.handJointSet = XR_HAND_JOINT_SET_DEFAULT_EXT;
CHK_XR(pfnCreateHandTrackerEXT(session, &createInfo, &leftHandTracker));
}
// Get function pointer for xrCreateHandMeshSpaceMSFT
PFN_xrCreateHandMeshSpaceMSFT pfnCreateHandMeshSpaceMSFT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrCreateHandMeshSpaceMSFT",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnCreateHandMeshSpaceMSFT)));
// Create the hand mesh spaces
XrSpace leftHandMeshSpace{};
{
XrHandMeshSpaceCreateInfoMSFT createInfo{XR_TYPE_HAND_MESH_SPACE_CREATE_INFO_MSFT};
createInfo.poseInHandMeshSpace = {{0, 0, 0, 1}, {0, 0, 0}};
CHK_XR(pfnCreateHandMeshSpaceMSFT(leftHandTracker, &createInfo, &leftHandMeshSpace));
}
// Preallocate buffers for hand mesh indices and vertices
std::vector<uint32_t> handMeshIndices(handMeshSystemProperties.maxHandMeshIndexCount);
std::vector<XrHandMeshVertexMSFT> handMeshVertices(handMeshSystemProperties.maxHandMeshVertexCount);
XrHandMeshMSFT leftHandMesh{XR_TYPE_HAND_MESH_MSFT};
leftHandMesh.indexBuffer.indexCapacityInput = (uint32_t)handMeshIndices.size();
leftHandMesh.indexBuffer.indices = handMeshIndices.data();
leftHandMesh.vertexBuffer.vertexCapacityInput = (uint32_t)handMeshVertices.size();
leftHandMesh.vertexBuffer.vertices = handMeshVertices.data();
// Get function pointer for xrUpdateHandMeshMSFT
PFN_xrUpdateHandMeshMSFT pfnUpdateHandMeshMSFT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrUpdateHandMeshMSFT",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnUpdateHandMeshMSFT)));
while(1){
// ...
// For every frame in frame loop
// ...
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
XrHandMeshUpdateInfoMSFT updateInfo{XR_TYPE_HAND_MESH_UPDATE_INFO_MSFT};
updateInfo.time = time;
CHK_XR(pfnUpdateHandMeshMSFT(leftHandTracker, &updateInfo, &leftHandMesh));
if (!leftHandMesh.isActive) {
// Hand input is not focused or user's hand is out of tracking range.
// Do not process or render hand mesh.
} else {
if (leftHandMesh.indexBufferChanged) {
// Process indices in indexBuffer.indices
}
if (leftHandMesh.vertexBufferChanged) {
// Process vertices in vertexBuffer.vertices and leftHandMeshSpace
}
}
}
12.162.6. Get hand reference poses
By default, an XrHandTrackerEXT tracks the default hand pose type,
which provides the best fidelity to the user’s actual hand motion.
This is equivalent to chaining an XrHandPoseTypeInfoMSFT structure
with handPoseType set to XR_HAND_POSE_TYPE_TRACKED_MSFT (i.e.
value 0) to the next pointer of XrHandTrackerCreateInfoEXT
when calling xrCreateHandTrackerEXT.
Some hand mesh visualizations may require an initial analysis or processing of the hand mesh relative to the joints of the hand. For example, a hand visualization may generate a UV mapping for the hand mesh vertices by raycasting outward from key joints against the mesh to find key vertices.
To avoid biasing such static analysis with the arbitrary tracked hand pose, an application can instead create a different XrHandTrackerEXT handle with a reference hand pose type when calling xrCreateHandTrackerEXT. This will instruct the runtime to provide a reference hand pose that is better suited for such static analysis.
An application can chain an XrHandPoseTypeInfoMSFT structure to the
XrHandTrackerCreateInfoEXT::next pointer when calling
xrCreateHandTrackerEXT to direct the hand tracker to return the hand
pose of a specific XrHandPoseTypeMSFT.
// Provided by XR_MSFT_hand_tracking_mesh
typedef struct XrHandPoseTypeInfoMSFT {
XrStructureType type;
const void* next;
XrHandPoseTypeMSFT handPoseType;
} XrHandPoseTypeInfoMSFT;
The XrHandPoseTypeMSFT describes the type of input hand pose from XrHandTrackerEXT.
// Provided by XR_MSFT_hand_tracking_mesh
typedef enum XrHandPoseTypeMSFT {
XR_HAND_POSE_TYPE_TRACKED_MSFT = 0,
XR_HAND_POSE_TYPE_REFERENCE_OPEN_PALM_MSFT = 1,
XR_HAND_POSE_TYPE_MAX_ENUM_MSFT = 0x7FFFFFFF
} XrHandPoseTypeMSFT;
The XR_HAND_POSE_TYPE_TRACKED_MSFT input provides the best fidelity
to the user’s actual hand motion.
When the hand tracking input requires the user to be holding a controller in
their hand, the hand tracking input will appear as the user virtually
holding the controller.
This input can be used to render the hand shape together with the controller
in hand.
The XR_HAND_POSE_TYPE_REFERENCE_OPEN_PALM_MSFT input does not move
with the user’s actual hand.
Through this reference hand pose, an application can get a stable hand
joint and mesh that has the same mesh topology as the tracked hand mesh
using the same XrHandTrackerEXT, so that the application can apply the
data computed from a reference hand pose to the corresponding tracked hand.
Although a reference hand pose does not move with the user’s hand motion,
the bone lengths and hand thickness may be updated, for example when the
tracking result is refined or a different user’s hand is detected.
The application should update reference hand joints and meshes when the
tracked mesh’s indexBufferKey is changed or when the isActive
value returned from xrUpdateHandMeshMSFT changes from XR_FALSE
to XR_TRUE.
It can use the returned indexBufferKey and vertexUpdateTime from
xrUpdateHandMeshMSFT to avoid unnecessary CPU or GPU work to process
the neutral hand inputs.
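The two refresh triggers described above (a changed indexBufferKey, or isActive rising from XR_FALSE to XR_TRUE) can be sketched as a small tracker; the struct name is hypothetical:

```cpp
#include <cstdint>

// Decides whether the reference hand mesh should be re-queried and
// re-processed this frame, per the two trigger conditions.
struct ReferenceMeshRefreshTracker {
    uint32_t lastIndexBufferKey = 0;
    bool lastIsActive = false;

    bool shouldRefresh(uint32_t indexBufferKey, bool isActive) {
        const bool keyChanged = isActive && indexBufferKey != lastIndexBufferKey;
        const bool becameActive = isActive && !lastIsActive;
        if (isActive) {
            lastIndexBufferKey = indexBufferKey;  // remember latest topology
        }
        lastIsActive = isActive;
        return keyChanged || becameActive;
    }
};
```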
12.162.7. Example code for reference hand mesh update
The following example code demonstrates detecting reference hand mesh changes and retrieving data for processing.
XrInstance instance; // previously initialized
XrSession session; // previously initialized
XrHandTrackerEXT handTracker; // previously initialized with handJointSet set to XR_HAND_JOINT_SET_DEFAULT_EXT
XrSpace handMeshReferenceSpace; // previously initialized with handPoseType set to XR_HAND_POSE_TYPE_REFERENCE_OPEN_PALM_MSFT
XrHandMeshMSFT referenceHandMesh; // previously initialized with preallocated buffers
// Get function pointer for xrUpdateHandMeshMSFT
PFN_xrUpdateHandMeshMSFT pfnUpdateHandMeshMSFT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrUpdateHandMeshMSFT",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnUpdateHandMeshMSFT)));
// Get function pointer for xrCreateHandTrackerEXT
PFN_xrCreateHandTrackerEXT pfnCreateHandTrackerEXT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrCreateHandTrackerEXT",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnCreateHandTrackerEXT)));
// Get function pointer for xrLocateHandJointsEXT
PFN_xrLocateHandJointsEXT pfnLocateHandJointsEXT;
CHK_XR(xrGetInstanceProcAddr(instance, "xrLocateHandJointsEXT",
reinterpret_cast<PFN_xrVoidFunction*>(
&pfnLocateHandJointsEXT)));
while(1){
// ...
// For every frame in frame loop
// ...
XrFrameState frameState; // previously returned from xrWaitFrame
const XrTime time = frameState.predictedDisplayTime;
XrHandMeshUpdateInfoMSFT updateInfo{XR_TYPE_HAND_MESH_UPDATE_INFO_MSFT};
updateInfo.time = time;
updateInfo.handPoseType = XR_HAND_POSE_TYPE_REFERENCE_OPEN_PALM_MSFT;
CHK_XR(pfnUpdateHandMeshMSFT(handTracker, &updateInfo, &referenceHandMesh));
// Detect if reference hand mesh is changed.
if (referenceHandMesh.indexBufferChanged || referenceHandMesh.vertexBufferChanged) {
// Query the joint location using "open palm" reference hand pose.
XrHandPoseTypeInfoMSFT handPoseTypeInfo{XR_TYPE_HAND_POSE_TYPE_INFO_MSFT};
handPoseTypeInfo.handPoseType = XR_HAND_POSE_TYPE_REFERENCE_OPEN_PALM_MSFT;
XrHandTrackerCreateInfoEXT createInfo{XR_TYPE_HAND_TRACKER_CREATE_INFO_EXT};
createInfo.hand = XR_HAND_LEFT_EXT;
createInfo.handJointSet = XR_HAND_JOINT_SET_DEFAULT_EXT;
createInfo.next = &handPoseTypeInfo;
XrHandTrackerEXT referenceHandTracker;
CHK_XR(pfnCreateHandTrackerEXT(session, &createInfo, &referenceHandTracker));
XrHandJointsLocateInfoEXT locateInfo{XR_TYPE_HAND_JOINTS_LOCATE_INFO_EXT};
locateInfo.next = &handPoseTypeInfo;
locateInfo.baseSpace = handMeshReferenceSpace; // Query joint location relative to hand mesh reference space
locateInfo.time = time;
std::array<XrHandJointLocationEXT, XR_HAND_JOINT_COUNT_EXT> jointLocations;
XrHandJointLocationsEXT locations{XR_TYPE_HAND_JOINT_LOCATIONS_EXT};
locations.jointCount = jointLocations.size();
locations.jointLocations = jointLocations.data();
CHK_XR(pfnLocateHandJointsEXT(referenceHandTracker, &locateInfo, &locations));
// Generate UV map using tip/wrist location and referenceHandMesh.vertexBuffer
// For example, gradually changes color from the tip of the hand to wrist.
}
}
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_HAND_MESH_SPACE_CREATE_INFO_MSFT
-
XR_TYPE_HAND_MESH_UPDATE_INFO_MSFT
-
XR_TYPE_HAND_MESH_MSFT
-
XR_TYPE_SYSTEM_HAND_TRACKING_MESH_PROPERTIES_MSFT
-
XR_TYPE_HAND_POSE_TYPE_INFO_MSFT
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2019-09-20 (Yin Li)
-
Initial extension description
-
-
Revision 2, 2020-04-20 (Yin Li)
-
Change joint spaces to locate joints function.
-
-
Revision 3, 2021-04-13 (Rylie Pavlik, Collabora, Ltd.)
-
Correctly show function pointer retrieval in sample code
-
-
Revision 4, 2021-10-20 (Darryl Gough)
-
Winding order for hand mesh is corrected to clockwise to match runtime behavior.
-
12.163. XR_MSFT_holographic_window_attachment
- Name String
-
XR_MSFT_holographic_window_attachment
- Extension Type
-
Instance extension
- Registered Extension Number
-
64
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Bryce Hutchings, Microsoft
Yin Li, Microsoft
Alex Turner, Microsoft
Overview
This extension enables the runtime to attach to app-provided HolographicSpace and CoreWindow WinRT objects when an XrSession is created. Applications may use this extension to create and control the CoreWindow/App View objects, allowing the app to subscribe to keyboard input events and react to activation event arguments. These events and data would otherwise be inaccessible if the application simply managed the app state and lifetime exclusively through the OpenXR API. This extension is only valid to use where an application can create a CoreWindow, such as UWP applications on the HoloLens.
The XrHolographicWindowAttachmentMSFT structure is defined as:
// Provided by XR_MSFT_holographic_window_attachment
typedef struct XrHolographicWindowAttachmentMSFT {
XrStructureType type;
const void* next;
IUnknown* holographicSpace;
IUnknown* coreWindow;
} XrHolographicWindowAttachmentMSFT;
When creating a holographic window-backed XrSession, the application
provides a pointer to an XrHolographicWindowAttachmentMSFT in the
next chain of the XrSessionCreateInfo.
The session state of a holographic window-backed XrSession will only
reach XR_SESSION_STATE_VISIBLE when the provided CoreWindow is made
visible.
If the CoreWindow is for a secondary app view, the application must
programmatically request to make the CoreWindow visible (e.g. with
ApplicationViewSwitcher.TryShowAsStandaloneAsync or
ApplicationViewSwitcher.SwitchAsync).
The app must not call xrCreateSession while the specified CoreWindow thread is blocked; otherwise the call may deadlock.
12.163.1. Sample code
The following example demonstrates usage of the holographic window
attachment, using the attached CoreWindow to receive keyboard input,
CoreTextEditContext to handle the text typing experience, and
IActivatedEventArgs to handle protocol launch arguments.
struct AppView : implements<AppView, IFrameworkView> {
void Initialize(CoreApplicationView const& applicationView) {
applicationView.Activated({this, &AppView::OnActivated});
}
void Load(winrt::hstring const& entryPoint) {
}
void Uninitialize() {
}
void Run() {
// Creating a HolographicSpace before activating the CoreWindow to make it a holographic window
CoreWindow window = CoreWindow::GetForCurrentThread();
HolographicSpace holographicSpace = Windows::Graphics::Holographic::HolographicSpace::CreateForCoreWindow(window);
window.Activate();
// [xrCreateInstance, xrGetSystem, and create a graphics binding]
XrHolographicWindowAttachmentMSFT holographicWindowAttachment{XR_TYPE_HOLOGRAPHIC_WINDOW_ATTACHMENT_MSFT};
holographicWindowAttachment.next = &graphicsBinding;
holographicWindowAttachment.coreWindow = window.as<IUnknown>().get();
holographicWindowAttachment.holographicSpace = holographicSpace.as<IUnknown>().get();
XrSessionCreateInfo sessionCreateInfo{XR_TYPE_SESSION_CREATE_INFO};
sessionCreateInfo.next = &holographicWindowAttachment;
sessionCreateInfo.systemId = systemId;
XrSession session;
CHECK_XRCMD(xrCreateSession(instance, &sessionCreateInfo, &session));
while (!m_windowClosed) {
window.Dispatcher().ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent);
// [OpenXR calls: Poll events, sync actions, render, and submit frames].
}
}
void SetWindow(CoreWindow const& window) {
window.Closed({this, &AppView::OnWindowClosed});
window.KeyDown({this, &AppView::OnKeyDown});
// This sample customizes the text input pane with manual display policy and email address scope.
windows::CoreTextServicesManager manager = windows::CoreTextServicesManager::GetForCurrentView();
windows::CoreTextEditContext editingContext = manager.CreateEditContext();
editingContext.InputPaneDisplayPolicy(windows::CoreTextInputPaneDisplayPolicy::Manual);
editingContext.InputScope(windows::CoreTextInputScope::EmailAddress);
}
void OnWindowClosed(CoreWindow const& sender, CoreWindowEventArgs const& args) {
m_windowClosed = true;
}
void OnKeyDown(CoreWindow const& sender, KeyEventArgs const& args) {
// [Process key down]
}
void OnActivated(CoreApplicationView const&, IActivatedEventArgs const& args) {
if (args.Kind() == windows::ActivationKind::Protocol) {
auto eventArgs{args.as<windows::ProtocolActivatedEventArgs>()};
// Use the protocol activation parameters in eventArgs.Uri();
}
// Inspecting whether the application is launched from within holographic shell or from desktop.
if (windows::HolographicApplicationPreview::IsHolographicActivation(args)) {
// App activation is targeted at the holographic shell.
} else {
// App activation is targeted at the desktop.
}
// NOTE: CoreWindow is activated later after the HolographicSpace has been created.
}
bool m_windowClosed{false};
};
struct AppViewSource : winrt::implements<AppViewSource, IFrameworkViewSource> {
windows::IFrameworkView CreateView() {
return winrt::make<AppView>();
}
};
int __stdcall wWinMain(HINSTANCE, HINSTANCE, PWSTR, int) {
CoreApplication::Run(make<AppViewSource>());
}
Version History
-
Revision 1, 2020-05-18 (Bryce Hutchings)
-
Initial extension description
-
12.164. XR_MSFT_perception_anchor_interop
- Name String
-
XR_MSFT_perception_anchor_interop
- Extension Type
-
Instance extension
- Registered Extension Number
-
57
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2020-06-16
- IP Status
-
No known IP claims.
- Contributors
-
Lachlan Ford, Microsoft
Bryce Hutchings, Microsoft
Yin Li, Microsoft
Overview
This extension supports conversion between XrSpatialAnchorMSFT and Windows.Perception.Spatial.SpatialAnchor. An application can use this extension to persist spatial anchors on the Windows device through SpatialAnchorStore or transfer spatial anchors between devices through SpatialAnchorTransferManager.
The xrCreateSpatialAnchorFromPerceptionAnchorMSFT function creates an XrSpatialAnchorMSFT handle from an IUnknown pointer to Windows.Perception.Spatial.SpatialAnchor.
// Provided by XR_MSFT_perception_anchor_interop
XrResult xrCreateSpatialAnchorFromPerceptionAnchorMSFT(
XrSession session,
IUnknown* perceptionAnchor,
XrSpatialAnchorMSFT* anchor);
The input perceptionAnchor must support successful QueryInterface to Windows.Perception.Spatial.SpatialAnchor; otherwise the runtime must return XR_ERROR_VALIDATION_FAILURE.
If the function returns successfully, the output anchor must be a valid handle.
This also increments the refcount of the perceptionAnchor object.
When the application is done with the anchor handle, it can destroy it using the xrDestroySpatialAnchorMSFT function.
This also decrements the refcount of the underlying Windows perception anchor object.
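The create/destroy refcount contract above can be sketched with hypothetical mock types in place of the real COM object and OpenXR handles (MockPerceptionAnchor, MockSpatialAnchor, and the mock functions are illustrative stand-ins, not part of the extension):

```cpp
#include <cassert>

// Hypothetical stand-in modeling only the refcount contract described above.
struct MockPerceptionAnchor {
    int refCount = 1;                 // the application's own reference
    void AddRef() { ++refCount; }
    void Release() { --refCount; }
};

struct MockSpatialAnchor {
    MockPerceptionAnchor* underlying = nullptr;
};

// Sketch of xrCreateSpatialAnchorFromPerceptionAnchorMSFT's refcount behavior:
// the runtime takes its own reference on the perception anchor.
MockSpatialAnchor mockCreateFromPerceptionAnchor(MockPerceptionAnchor* perceptionAnchor) {
    perceptionAnchor->AddRef();
    return MockSpatialAnchor{perceptionAnchor};
}

// Sketch of xrDestroySpatialAnchorMSFT's refcount behavior: destroying the
// handle drops the runtime's reference on the underlying perception anchor.
void mockDestroySpatialAnchor(MockSpatialAnchor& anchor) {
    anchor.underlying->Release();
    anchor.underlying = nullptr;
}
```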
The xrTryGetPerceptionAnchorFromSpatialAnchorMSFT function converts an XrSpatialAnchorMSFT handle into an IUnknown pointer to Windows.Perception.Spatial.SpatialAnchor.
// Provided by XR_MSFT_perception_anchor_interop
XrResult xrTryGetPerceptionAnchorFromSpatialAnchorMSFT(
XrSession session,
XrSpatialAnchorMSFT anchor,
IUnknown** perceptionAnchor);
If the runtime can convert the anchor to a Windows.Perception.Spatial.SpatialAnchor object, this function must return XR_SUCCESS, and the output IUnknown in the pointer of perceptionAnchor must not be NULL.
This also increments the refcount of the object.
The application can then use QueryInterface to get the pointer to the Windows.Perception.Spatial.SpatialAnchor object.
The application should release the COM pointer when done with the object, or attach it to a smart COM pointer such as winrt::com_ptr.
If the runtime cannot convert the anchor to a Windows.Perception.Spatial.SpatialAnchor object, the function must return XR_SUCCESS, and the output IUnknown in the pointer of perceptionAnchor must be NULL.
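The "attach" idiom mentioned above can be sketched with a minimal smart-pointer stand-in (Refcounted and ComPtrSketch are hypothetical types; winrt::com_ptr::attach provides the real equivalent): the pointer returned with an already-incremented refcount is adopted without an extra AddRef, and released on destruction.

```cpp
#include <cassert>

// Hypothetical stand-in for a refcounted COM object.
struct Refcounted {
    int refCount = 0;
    void AddRef() { ++refCount; }
    void Release() { --refCount; }
};

// Minimal sketch of a winrt::com_ptr-style "attach": adopt a pointer whose
// reference was already incremented by the callee, release it on destruction.
template <typename T>
struct ComPtrSketch {
    T* ptr = nullptr;
    void attach(T* raw) { ptr = raw; }   // adopt without an extra AddRef
    ~ComPtrSketch() { if (ptr) ptr->Release(); }
};

// Simulates xrTryGetPerceptionAnchorFromSpatialAnchorMSFT handing back a
// pointer with the refcount already incremented.
Refcounted* mockTryGetPerceptionAnchor(Refcounted& object) {
    object.AddRef();
    return &object;
}
```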
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2020-06-16 (Yin Li)
-
Initial extension proposal
-
12.165. XR_MSFT_scene_marker
- Name String
-
XR_MSFT_scene_marker
- Extension Type
-
Instance extension
- Registered Extension Number
-
148
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Alain Zanchetta, Microsoft
Yin Li, Microsoft
Alex Turner, Microsoft
12.165.1. Overview
This extension enables the application to observe the tracked markers, such as the QR Code markers in ISO/IEC 18004:2015. This extension also enables future extensions to easily add new types of marker tracking.
The application must enable both XR_MSFT_scene_marker and
XR_MSFT_scene_understanding in order to use this extension.
Note
A typical use of this extension is:
-
Pass XR_SCENE_COMPUTE_FEATURE_MARKER_MSFT to xrComputeNewSceneMSFT when computing a new scene.
-
Retrieve marker properties with xrGetSceneComponentsMSFT using XR_SCENE_COMPONENT_TYPE_MARKER_MSFT and the chained structures described below.
-
Locate markers using xrLocateSceneComponentsMSFT, and read their stored data using xrGetSceneMarkerDecodedStringMSFT or xrGetSceneMarkerRawDataMSFT.
12.165.2. Retrieve marker properties
The XrSceneMarkersMSFT structure is defined as:
// Provided by XR_MSFT_scene_marker
typedef struct XrSceneMarkersMSFT {
XrStructureType type;
const void* next;
uint32_t sceneMarkerCapacityInput;
XrSceneMarkerMSFT* sceneMarkers;
} XrSceneMarkersMSFT;
Once the application creates an XrSceneMSFT after a successful scene compute, it can retrieve the scene markers' properties by chaining an XrSceneMarkersMSFT structure to the next pointer of XrSceneComponentsGetInfoMSFT when calling xrGetSceneComponentsMSFT.
xrGetSceneComponentsMSFT follows the two-call idiom for filling the XrSceneComponentsMSFT structure to which an XrSceneMarkersMSFT structure can be chained.
The input sceneMarkerCapacityInput must be equal to or greater than
the corresponding XrSceneComponentsMSFT::componentCapacityInput,
otherwise the runtime must return XR_ERROR_SIZE_INSUFFICIENT.
The actual count of elements returned in the array sceneMarkers is
consistent with the extended XrSceneComponentsMSFT structure and
returned in XrSceneComponentsMSFT::componentCountOutput.
The XrSceneMarkerMSFT structure is defined as:
// Provided by XR_MSFT_scene_marker
typedef struct XrSceneMarkerMSFT {
XrSceneMarkerTypeMSFT markerType;
XrTime lastSeenTime;
XrOffset2Df center;
XrExtent2Df size;
} XrSceneMarkerMSFT;
The XrSceneMarkerMSFT structure is an element in the array of
XrSceneMarkersMSFT::sceneMarkers.
Refer to the QR code convention for an example of a marker's center and size in the context of a QR code.
When the runtime updates the location or properties of an observed marker,
the runtime must set the XrSceneMarkerMSFT::lastSeenTime to the
new timestamp of the update.
When the runtime cannot observe a previously observed
XrSceneMarkerMSFT, the runtime must keep the previous
lastSeenTime for the marker.
Hence, the application can use the lastSeenTime to know how fresh the
tracking information is for a given marker.
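As a sketch of the freshness check described above (XrTimeSketch stands in for XrTime, a signed 64-bit nanosecond timestamp; the maximum-age threshold is an application choice, not something defined by this extension):

```cpp
#include <cstdint>

// Stand-in for XrTime: a signed 64-bit timestamp in nanoseconds.
using XrTimeSketch = int64_t;

// Returns whether the marker's tracking data (per lastSeenTime) is no older
// than the application-chosen maxAgeNs at the given current time.
bool isMarkerFresh(XrTimeSketch lastSeenTime, XrTimeSketch now, XrTimeSketch maxAgeNs) {
    return (now - lastSeenTime) <= maxAgeNs;
}
```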
The center and size are measured in meters, relative to the XrPosef of the marker, and describe the visual bounds of the marker in the XY plane, regardless of the marker type.
The XrSceneMarkerTypeFilterMSFT structure is defined as:
// Provided by XR_MSFT_scene_marker
typedef struct XrSceneMarkerTypeFilterMSFT {
XrStructureType type;
const void* next;
uint32_t markerTypeCount;
XrSceneMarkerTypeMSFT* markerTypes;
} XrSceneMarkerTypeFilterMSFT;
The application can filter the returned scene components to specific marker
types by chaining XrSceneMarkerTypeFilterMSFT to the next
pointer of XrSceneComponentsGetInfoMSFT when calling
xrGetSceneComponentsMSFT.
When XrSceneMarkerTypeFilterMSFT is provided to xrGetSceneComponentsMSFT, the runtime must only return scene components that match the requested types.
The application must provide a non-empty array of unique markerTypes, i.e. the markerTypeCount must be positive and the markerTypes array must not contain duplicate values.
Otherwise, the runtime must return XR_ERROR_VALIDATION_FAILURE from the xrGetSceneComponentsMSFT function.
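A client-side pre-check mirroring these validation rules might look like the following sketch (isValidMarkerTypeFilter is a hypothetical helper; int stands in for XrSceneMarkerTypeMSFT):

```cpp
#include <set>
#include <vector>

// Returns true only for a non-empty array with no duplicate marker types,
// matching the requirements on XrSceneMarkerTypeFilterMSFT above.
bool isValidMarkerTypeFilter(const std::vector<int>& markerTypes) {
    if (markerTypes.empty()) {
        return false;  // markerTypeCount must be positive
    }
    std::set<int> unique(markerTypes.begin(), markerTypes.end());
    return unique.size() == markerTypes.size();  // no duplicate values allowed
}
```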
The XrSceneMarkerTypeMSFT identifies the type of a scene marker.
// Provided by XR_MSFT_scene_marker
typedef enum XrSceneMarkerTypeMSFT {
XR_SCENE_MARKER_TYPE_QR_CODE_MSFT = 1,
XR_SCENE_MARKER_TYPE_MAX_ENUM_MSFT = 0x7FFFFFFF
} XrSceneMarkerTypeMSFT;
12.165.3. Locate markers
Applications can use xrLocateSceneComponentsMSFT to locate an XrSceneMarkerMSFT.
The scene markers' locations are snapshots of the XrSceneMSFT that do not change for the lifetime of the result.
To get updated tracking, the application can issue another
xrComputeNewSceneMSFT and obtain a new XrSceneMSFT.
The application can use the XrSceneComponentMSFT::id to
correlate the same marker across multiple scene computes.
The pose and geometry of scene markers returned from this extension follows these general conventions:
-
The marker image resides in the plane of the X and Y axes.
-
The Z axis is perpendicular to the X and Y axes and follows the right-hand rule; +Z points into the marker image.
-
The origin of the marker is runtime defined for the specific XrSceneMarkerTypeMSFT, and it typically represents the most stable and accurate point for tracking the marker. This allows the application to use the marker as a tracked point.
-
In cases where the origin does not necessarily coincide with the center of the marker geometry, applications can obtain additional geometry information from the XrSceneMarkerMSFT structure. This information includes the center and size of the marker image in the X and Y plane.
The exact origin and geometry properties relative to the tracked marker image in physical world must be well defined and consistent for each XrSceneMarkerTypeMSFT, including the new marker types defined in future extensions.
12.165.4. The convention of QRCode marker location
For a marker with XR_SCENE_MARKER_TYPE_QR_CODE_MSFT, the origin is at
the top left corner of the QR code image, where the orientation of the QR
code image in the XY plane follows the convention in
ISO/IEC 18004:2015.
The X axis of the QR code pose points to the right of the marker image, and the Z axis points into the marker image, as illustrated in the following image.
The QR Code marker's center and size are defined in the XY plane, as illustrated in the following pictures.
12.165.5. Retrieving QRCode marker properties
The XrSceneMarkerQRCodesMSFT structure is defined as:
// Provided by XR_MSFT_scene_marker
typedef struct XrSceneMarkerQRCodesMSFT {
XrStructureType type;
const void* next;
uint32_t qrCodeCapacityInput;
XrSceneMarkerQRCodeMSFT* qrCodes;
} XrSceneMarkerQRCodesMSFT;
An XrSceneMarkerQRCodesMSFT structure can be chained to the next pointer of XrSceneComponentsMSFT when calling the xrGetSceneComponentsMSFT function to retrieve the QR Code specific properties through an array of XrSceneMarkerQRCodeMSFT structures.
xrGetSceneComponentsMSFT follows the two-call idiom for filling the XrSceneComponentsMSFT structure to which an XrSceneMarkerQRCodesMSFT structure can be chained.
The qrCodeCapacityInput must be equal to or greater than the corresponding XrSceneComponentsMSFT::componentCapacityInput, otherwise the runtime must return XR_ERROR_SIZE_INSUFFICIENT from xrGetSceneComponentsMSFT.
The actual count of elements returned in the array qrCodes is consistent with the extended XrSceneComponentsMSFT structure and returned in XrSceneComponentsMSFT::componentCountOutput.
The XrSceneMarkerQRCodeMSFT structure is defined as:
// Provided by XR_MSFT_scene_marker
typedef struct XrSceneMarkerQRCodeMSFT {
XrSceneMarkerQRCodeSymbolTypeMSFT symbolType;
uint8_t version;
} XrSceneMarkerQRCodeMSFT;
The XrSceneMarkerQRCodeMSFT structure contains the detailed QR Code
symbol type and version according to ISO/IEC
18004:2015.
The version must be in the range 1 to 40 inclusive for a QR Code and 1 to 4 inclusive for a Micro QR Code.
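For context (from ISO/IEC 18004, not defined by this extension), the version determines the symbol's module count per side; a sketch with hypothetical helper names:

```cpp
// A QR Code of version v (1..40) is (17 + 4*v) modules per side, so version 1
// is 21x21 and version 40 is 177x177. A Micro QR Code of version v (1..4) is
// (9 + 2*v) modules per side. Returns 0 for an out-of-range version.
int qrCodeModulesPerSide(int version) {
    return (version >= 1 && version <= 40) ? 17 + 4 * version : 0;
}

int microQrCodeModulesPerSide(int version) {
    return (version >= 1 && version <= 4) ? 9 + 2 * version : 0;
}
```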
// Provided by XR_MSFT_scene_marker
typedef enum XrSceneMarkerQRCodeSymbolTypeMSFT {
XR_SCENE_MARKER_QR_CODE_SYMBOL_TYPE_QR_CODE_MSFT = 1,
XR_SCENE_MARKER_QR_CODE_SYMBOL_TYPE_MICRO_QR_CODE_MSFT = 2,
    XR_SCENE_MARKER_QR_CODE_SYMBOL_TYPE_MAX_ENUM_MSFT = 0x7FFFFFFF
} XrSceneMarkerQRCodeSymbolTypeMSFT;
The XrSceneMarkerQRCodeSymbolTypeMSFT identifies the symbol type of the QR Code.
The xrGetSceneMarkerDecodedStringMSFT function is defined as:
// Provided by XR_MSFT_scene_marker
XrResult xrGetSceneMarkerDecodedStringMSFT(
XrSceneMSFT scene,
const XrUuidMSFT* markerId,
uint32_t bufferCapacityInput,
uint32_t* bufferCountOutput,
char* buffer);
The xrGetSceneMarkerDecodedStringMSFT function retrieves the string stored in the scene marker as a UTF-8 string, including the terminating '\0'.
This function follows the two-call
idiom for filling the buffer array.
If the stored data in the marker is not an encoded string, the runtime must
return the success code XR_SCENE_MARKER_DATA_NOT_STRING_MSFT, set
bufferCountOutput to 1, and make buffer an empty string.
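The two-call idiom above can be sketched against a mock standing in for xrGetSceneMarkerDecodedStringMSFT (mockGetDecodedString and readDecodedString are hypothetical): the first call with zero capacity obtains the required size including the terminating '\0', and the second call fills the buffer.

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Mock standing in for xrGetSceneMarkerDecodedStringMSFT: reports the required
// size (including '\0') and fills the buffer when the capacity allows.
void mockGetDecodedString(const std::string& stored, uint32_t bufferCapacityInput,
                          uint32_t* bufferCountOutput, char* buffer) {
    const uint32_t required = static_cast<uint32_t>(stored.size()) + 1;
    *bufferCountOutput = required;
    if (buffer != nullptr && bufferCapacityInput >= required) {
        std::memcpy(buffer, stored.c_str(), required);
    }
}

// Two-call idiom: first call queries the size, second call fills the buffer.
std::string readDecodedString(const std::string& stored) {
    uint32_t count = 0;
    mockGetDecodedString(stored, 0, &count, nullptr);
    std::vector<char> buffer(count);
    mockGetDecodedString(stored, count, &count, buffer.data());
    return std::string(buffer.data());
}
```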
The xrGetSceneMarkerRawDataMSFT function is defined as:
// Provided by XR_MSFT_scene_marker
XrResult xrGetSceneMarkerRawDataMSFT(
XrSceneMSFT scene,
const XrUuidMSFT* markerId,
uint32_t bufferCapacityInput,
uint32_t* bufferCountOutput,
uint8_t* buffer);
The xrGetSceneMarkerRawDataMSFT function retrieves the data stored in the scene marker.
New Object Types
New Flag Types
New Enum Constants
XrSceneComputeFeatureMSFT enumeration is extended with:
-
XR_SCENE_COMPUTE_FEATURE_MARKER_MSFT
XrSceneComponentTypeMSFT enumeration is extended with:
-
XR_SCENE_COMPONENT_TYPE_MARKER_MSFT
XrStructureType enumeration is extended with:
-
XR_TYPE_SCENE_MARKERS_MSFT
-
XR_TYPE_SCENE_MARKER_TYPE_FILTER_MSFT
-
XR_TYPE_SCENE_MARKER_QR_CODES_MSFT
XrResult enumeration is extended with:
-
XR_SCENE_MARKER_DATA_NOT_STRING_MSFT
New Enums
New Structures
New Functions
Version History
-
Revision 1, 2023-01-11 (Alain Zanchetta)
-
Initial extension description
-
12.166. XR_MSFT_scene_understanding
- Name String
-
XR_MSFT_scene_understanding
- Extension Type
-
Instance extension
- Registered Extension Number
-
98
- Revision
-
2
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2021-05-03
- IP Status
-
No known IP claims.
- Contributors
-
Darryl Gough, Microsoft
Yin Li, Microsoft
Bryce Hutchings, Microsoft
Alex Turner, Microsoft
Simon Stachniak, Microsoft
David Fields, Microsoft
Overview
Scene understanding provides applications with a structured, high-level representation of the planes, meshes, and objects in the user’s environment, enabling the development of spatially-aware applications.
The application requests computation of a scene, receiving the list of scene components observed in the environment around the user. These scene components contain information such as:
-
The type of the discovered objects (wall, floor, ceiling, or other surface type).
-
The planes and their bounds that represent the object.
-
The visual and collider triangle meshes that represent the object.
The application can use this information to reason about the structure and location of the environment, to place holograms on surfaces, or render clues for grounding objects.
An application typically uses this extension in the following steps:
-
Create an XrSceneObserverMSFT handle to manage the system resource of the scene understanding compute.
-
Start the scene compute by calling xrComputeNewSceneMSFT with XrSceneBoundsMSFT to specify the scan range and a list of XrSceneComputeFeatureMSFT features.
-
Inspect the completion of computation by polling xrGetSceneComputeStateMSFT.
-
Once compute is completed, create an XrSceneMSFT handle to the result by calling xrCreateSceneMSFT.
-
Get properties of scene components using xrGetSceneComponentsMSFT.
-
Locate scene components using xrLocateSceneComponentsMSFT.
Create a scene observer handle
The XrSceneObserverMSFT handle represents the resources for computing scenes. It maintains a correlation of scene component identifiers across multiple scene computes.
Note
The application should destroy the XrSceneObserverMSFT handle when it is done with scene compute and scene component data, to save system power consumption.
XR_DEFINE_HANDLE(XrSceneObserverMSFT)
An XrSceneObserverMSFT handle is created using xrCreateSceneObserverMSFT.
// Provided by XR_MSFT_scene_understanding
XrResult xrCreateSceneObserverMSFT(
XrSession session,
const XrSceneObserverCreateInfoMSFT* createInfo,
XrSceneObserverMSFT* sceneObserver);
The XrSceneObserverCreateInfoMSFT structure is defined as:
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneObserverCreateInfoMSFT {
XrStructureType type;
const void* next;
} XrSceneObserverCreateInfoMSFT;
The xrDestroySceneObserverMSFT function releases the
sceneObserver and the underlying resources.
// Provided by XR_MSFT_scene_understanding
XrResult xrDestroySceneObserverMSFT(
XrSceneObserverMSFT sceneObserver);
Compute a new scene and wait for completion
The xrComputeNewSceneMSFT function begins the compute of a new scene and the runtime must return quickly without waiting for the compute to complete. The application should use xrGetSceneComputeStateMSFT to inspect the compute status.
The application can control the compute features by passing a list of
XrSceneComputeFeatureMSFT via
XrNewSceneComputeInfoMSFT::requestedFeatures.
-
If XR_SCENE_COMPUTE_FEATURE_PLANE_MSFT is passed, but XR_SCENE_COMPUTE_FEATURE_PLANE_MESH_MSFT is not passed, then:
-
The application may be able to read XR_SCENE_COMPONENT_TYPE_PLANE_MSFT and XR_SCENE_COMPONENT_TYPE_OBJECT_MSFT scene components from the resulting XrSceneMSFT handle.
-
XrScenePlaneMSFT::meshBufferId must be zero to indicate that the plane scene component does not have a mesh buffer available to read.
-
If XR_SCENE_COMPUTE_FEATURE_PLANE_MSFT and XR_SCENE_COMPUTE_FEATURE_PLANE_MESH_MSFT are passed, then:
-
The application may be able to read XR_SCENE_COMPONENT_TYPE_PLANE_MSFT and XR_SCENE_COMPONENT_TYPE_OBJECT_MSFT scene components from the resulting XrSceneMSFT handle.
-
XrScenePlaneMSFT::meshBufferId may contain a non-zero mesh buffer identifier to indicate that the plane scene component has a mesh buffer available to read.
-
If XR_SCENE_COMPUTE_FEATURE_VISUAL_MESH_MSFT is passed, then:
-
The application may be able to read XR_SCENE_COMPONENT_TYPE_VISUAL_MESH_MSFT and XR_SCENE_COMPONENT_TYPE_OBJECT_MSFT scene components from the resulting XrSceneMSFT handle.
-
If XR_SCENE_COMPUTE_FEATURE_COLLIDER_MESH_MSFT is passed, then:
-
The application may be able to read XR_SCENE_COMPONENT_TYPE_COLLIDER_MESH_MSFT and XR_SCENE_COMPONENT_TYPE_OBJECT_MSFT scene components from the resulting XrSceneMSFT handle.
// Provided by XR_MSFT_scene_understanding
XrResult xrComputeNewSceneMSFT(
XrSceneObserverMSFT sceneObserver,
const XrNewSceneComputeInfoMSFT* computeInfo);
The runtime must return
XR_ERROR_SCENE_COMPUTE_FEATURE_INCOMPATIBLE_MSFT if incompatible
features were passed or no compatible features were passed.
The runtime must return
XR_ERROR_SCENE_COMPUTE_FEATURE_INCOMPATIBLE_MSFT if
XR_SCENE_COMPUTE_FEATURE_PLANE_MESH_MSFT was passed but
XR_SCENE_COMPUTE_FEATURE_PLANE_MSFT was not passed.
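A client-side pre-check mirroring these rules might look like the following sketch (isFeatureListConsistent is a hypothetical helper; the enum values match XR_SCENE_COMPUTE_FEATURE_PLANE_MSFT = 1 and XR_SCENE_COMPUTE_FEATURE_PLANE_MESH_MSFT = 2 as defined later in this section):

```cpp
#include <algorithm>
#include <vector>

enum FeatureSketch {
    kPlaneFeature = 1,      // XR_SCENE_COMPUTE_FEATURE_PLANE_MSFT
    kPlaneMeshFeature = 2,  // XR_SCENE_COMPUTE_FEATURE_PLANE_MESH_MSFT
};

// Mirrors the runtime rules above: an empty request is invalid, and requesting
// the plane-mesh feature without the plane feature is invalid.
bool isFeatureListConsistent(const std::vector<int>& features) {
    auto has = [&](int f) {
        return std::find(features.begin(), features.end(), f) != features.end();
    };
    if (features.empty()) {
        return false;  // no compatible features passed
    }
    return !(has(kPlaneMeshFeature) && !has(kPlaneFeature));
}
```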
The runtime must return XR_ERROR_COMPUTE_NEW_SCENE_NOT_COMPLETED_MSFT
if xrComputeNewSceneMSFT is called while the scene computation is in
progress.
An application that wishes to use XR_SCENE_COMPUTE_CONSISTENCY_OCCLUSION_OPTIMIZED_MSFT must pass neither XR_SCENE_COMPUTE_CONSISTENCY_SNAPSHOT_COMPLETE_MSFT nor XR_SCENE_COMPUTE_CONSISTENCY_SNAPSHOT_INCOMPLETE_FAST_MSFT to xrComputeNewSceneMSFT for the lifetime of the XrSceneObserverMSFT handle it creates.
This allows the runtime to return occlusion meshes at a different cadence than non-occlusion meshes or planes.
-
The runtime must return XR_ERROR_SCENE_COMPUTE_CONSISTENCY_MISMATCH_MSFT if:
-
XR_SCENE_COMPUTE_CONSISTENCY_OCCLUSION_OPTIMIZED_MSFT is passed to xrComputeNewSceneMSFT and
-
a previous call to xrComputeNewSceneMSFT did not pass XR_SCENE_COMPUTE_CONSISTENCY_OCCLUSION_OPTIMIZED_MSFT for the same XrSceneObserverMSFT handle.
-
The runtime must return XR_ERROR_SCENE_COMPUTE_CONSISTENCY_MISMATCH_MSFT if:
-
XR_SCENE_COMPUTE_CONSISTENCY_OCCLUSION_OPTIMIZED_MSFT is not passed to xrComputeNewSceneMSFT and
-
a previous call to xrComputeNewSceneMSFT did pass XR_SCENE_COMPUTE_CONSISTENCY_OCCLUSION_OPTIMIZED_MSFT for the same XrSceneObserverMSFT handle.
-
The runtime must return XR_ERROR_SCENE_COMPUTE_FEATURE_INCOMPATIBLE_MSFT if:
-
XR_SCENE_COMPUTE_CONSISTENCY_OCCLUSION_OPTIMIZED_MSFT is passed to xrComputeNewSceneMSFT and
-
neither XR_SCENE_COMPUTE_FEATURE_VISUAL_MESH_MSFT nor XR_SCENE_COMPUTE_FEATURE_COLLIDER_MESH_MSFT is also passed.
-
The runtime must return XR_ERROR_SCENE_COMPUTE_FEATURE_INCOMPATIBLE_MSFT if:
-
XR_SCENE_COMPUTE_CONSISTENCY_OCCLUSION_OPTIMIZED_MSFT is passed to xrComputeNewSceneMSFT and
-
at least one of XR_SCENE_COMPUTE_FEATURE_SERIALIZE_SCENE_MSFT, XR_SCENE_COMPUTE_FEATURE_PLANE_MSFT, or XR_SCENE_COMPUTE_FEATURE_PLANE_MESH_MSFT is also passed.
An XrSceneMSFT handle represents the collection of scene components that were detected during the scene computation.
XR_DEFINE_HANDLE(XrSceneMSFT)
The XrNewSceneComputeInfoMSFT structure is defined as:
// Provided by XR_MSFT_scene_understanding
typedef struct XrNewSceneComputeInfoMSFT {
XrStructureType type;
const void* next;
uint32_t requestedFeatureCount;
const XrSceneComputeFeatureMSFT* requestedFeatures;
XrSceneComputeConsistencyMSFT consistency;
XrSceneBoundsMSFT bounds;
} XrNewSceneComputeInfoMSFT;
The XrSceneComputeFeatureMSFT enumeration identifies the different scene compute features that may be passed to xrComputeNewSceneMSFT.
// Provided by XR_MSFT_scene_understanding
typedef enum XrSceneComputeFeatureMSFT {
XR_SCENE_COMPUTE_FEATURE_PLANE_MSFT = 1,
XR_SCENE_COMPUTE_FEATURE_PLANE_MESH_MSFT = 2,
XR_SCENE_COMPUTE_FEATURE_VISUAL_MESH_MSFT = 3,
XR_SCENE_COMPUTE_FEATURE_COLLIDER_MESH_MSFT = 4,
// Provided by XR_MSFT_scene_understanding_serialization
XR_SCENE_COMPUTE_FEATURE_SERIALIZE_SCENE_MSFT = 1000098000,
// Provided by XR_MSFT_scene_marker
XR_SCENE_COMPUTE_FEATURE_MARKER_MSFT = 1000147000,
XR_SCENE_COMPUTE_FEATURE_MAX_ENUM_MSFT = 0x7FFFFFFF
} XrSceneComputeFeatureMSFT;
Note
Applications wanting to use the scene for analysis, or in a physics simulation, should set XR_SCENE_COMPUTE_CONSISTENCY_SNAPSHOT_COMPLETE_MSFT. Setting XR_SCENE_COMPUTE_CONSISTENCY_SNAPSHOT_INCOMPLETE_FAST_MSFT may return a scene faster, but the scene may be incomplete. Setting XR_SCENE_COMPUTE_CONSISTENCY_OCCLUSION_OPTIMIZED_MSFT optimizes the resulting meshes for occlusion rendering.
The XrSceneComputeConsistencyMSFT enumeration identifies the different scene compute consistencies that may be passed to xrComputeNewSceneMSFT.
// Provided by XR_MSFT_scene_understanding
typedef enum XrSceneComputeConsistencyMSFT {
XR_SCENE_COMPUTE_CONSISTENCY_SNAPSHOT_COMPLETE_MSFT = 1,
XR_SCENE_COMPUTE_CONSISTENCY_SNAPSHOT_INCOMPLETE_FAST_MSFT = 2,
XR_SCENE_COMPUTE_CONSISTENCY_OCCLUSION_OPTIMIZED_MSFT = 3,
XR_SCENE_COMPUTE_CONSISTENCY_MAX_ENUM_MSFT = 0x7FFFFFFF
} XrSceneComputeConsistencyMSFT;
An application can pass one or more bounding volumes when calling xrComputeNewSceneMSFT. These bounding volumes are used to determine which scene components to include in the resulting scene. Scene components that intersect one or more of the bounding volumes should be included, and all other scene components should be excluded. If an application inputs no bounding volumes, then the runtime must not associate any scene components with the resulting XrSceneMSFT handle.
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneBoundsMSFT {
XrSpace space;
XrTime time;
uint32_t sphereCount;
const XrSceneSphereBoundMSFT* spheres;
uint32_t boxCount;
const XrSceneOrientedBoxBoundMSFT* boxes;
uint32_t frustumCount;
const XrSceneFrustumBoundMSFT* frustums;
} XrSceneBoundsMSFT;
An XrSceneSphereBoundMSFT structure describes the center and radius of a sphere bounds.
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneSphereBoundMSFT {
XrVector3f center;
float radius;
} XrSceneSphereBoundMSFT;
The runtime must return XR_ERROR_VALIDATION_FAILURE if radius
is not a finite positive value.
An XrSceneOrientedBoxBoundMSFT structure describes the pose and extents of an oriented box bounds.
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneOrientedBoxBoundMSFT {
XrPosef pose;
XrVector3f extents;
} XrSceneOrientedBoxBoundMSFT;
The runtime must return XR_ERROR_VALIDATION_FAILURE if any component of extents is not finite or is less than or equal to zero.
An XrSceneFrustumBoundMSFT structure describes the pose, field of view, and far distance of a frustum bounds.
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneFrustumBoundMSFT {
XrPosef pose;
XrFovf fov;
float farDistance;
} XrSceneFrustumBoundMSFT;
The runtime must return XR_ERROR_VALIDATION_FAILURE if
farDistance is less than or equal to zero.
The runtime must return XR_ERROR_VALIDATION_FAILURE if the fov angles are not between -π/2 and π/2, exclusive.
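The bound validation rules above can be collected into client-side pre-checks, sketched here (these helpers are hypothetical; kHalfPi approximates π/2):

```cpp
#include <cmath>

constexpr float kHalfPi = 1.5707964f;  // approximation of pi/2

// Sphere bounds: radius must be a finite, positive value.
bool isValidSphereRadius(float radius) {
    return std::isfinite(radius) && radius > 0.0f;
}

// Oriented box bounds: every extent component must be finite and positive.
bool isValidBoxExtents(float x, float y, float z) {
    auto positiveFinite = [](float v) { return std::isfinite(v) && v > 0.0f; };
    return positiveFinite(x) && positiveFinite(y) && positiveFinite(z);
}

// Frustum bounds: farDistance must be finite and positive, and each fov angle
// must lie strictly between -pi/2 and pi/2.
bool isValidFrustum(float farDistance, float fovAngle) {
    return std::isfinite(farDistance) && farDistance > 0.0f &&
           std::isfinite(fovAngle) && fovAngle > -kHalfPi && fovAngle < kHalfPi;
}
```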
Applications can request a desired visual mesh level of detail by including
XrVisualMeshComputeLodInfoMSFT in the
XrNewSceneComputeInfoMSFT::next chain.
If XrVisualMeshComputeLodInfoMSFT is not included in the
XrNewSceneComputeInfoMSFT::next chain, then
XR_MESH_COMPUTE_LOD_COARSE_MSFT must be used for the visual mesh
level of detail.
The XrVisualMeshComputeLodInfoMSFT structure is defined as:
// Provided by XR_MSFT_scene_understanding
typedef struct XrVisualMeshComputeLodInfoMSFT {
XrStructureType type;
const void* next;
XrMeshComputeLodMSFT lod;
} XrVisualMeshComputeLodInfoMSFT;
The XrMeshComputeLodMSFT enumeration identifies the level of detail of visual mesh compute.
// Provided by XR_MSFT_scene_understanding
typedef enum XrMeshComputeLodMSFT {
XR_MESH_COMPUTE_LOD_COARSE_MSFT = 1,
XR_MESH_COMPUTE_LOD_MEDIUM_MSFT = 2,
XR_MESH_COMPUTE_LOD_FINE_MSFT = 3,
XR_MESH_COMPUTE_LOD_UNLIMITED_MSFT = 4,
XR_MESH_COMPUTE_LOD_MAX_ENUM_MSFT = 0x7FFFFFFF
} XrMeshComputeLodMSFT;
The xrEnumerateSceneComputeFeaturesMSFT function enumerates the supported scene compute features of the given system.
This function follows the two-call
idiom for filling the features array.
// Provided by XR_MSFT_scene_understanding
XrResult xrEnumerateSceneComputeFeaturesMSFT(
XrInstance instance,
XrSystemId systemId,
uint32_t featureCapacityInput,
uint32_t* featureCountOutput,
XrSceneComputeFeatureMSFT* features);
An application can inspect the completion of the compute by polling xrGetSceneComputeStateMSFT. This function should typically be called once per frame per XrSceneObserverMSFT.
// Provided by XR_MSFT_scene_understanding
XrResult xrGetSceneComputeStateMSFT(
XrSceneObserverMSFT sceneObserver,
XrSceneComputeStateMSFT* state);
XrSceneComputeStateMSFT identifies the different states of computing a new scene.
// Provided by XR_MSFT_scene_understanding
typedef enum XrSceneComputeStateMSFT {
XR_SCENE_COMPUTE_STATE_NONE_MSFT = 0,
XR_SCENE_COMPUTE_STATE_UPDATING_MSFT = 1,
XR_SCENE_COMPUTE_STATE_COMPLETED_MSFT = 2,
XR_SCENE_COMPUTE_STATE_COMPLETED_WITH_ERROR_MSFT = 3,
XR_SCENE_COMPUTE_STATE_MAX_ENUM_MSFT = 0x7FFFFFFF
} XrSceneComputeStateMSFT;
-
The xrGetSceneComputeStateMSFT function must return XR_SCENE_COMPUTE_STATE_NONE_MSFT if it is called before xrComputeNewSceneMSFT is called for the first time for the given XrSceneObserverMSFT handle.
-
After calling xrComputeNewSceneMSFT but before the asynchronous operation has completed, any calls to xrGetSceneComputeStateMSFT should return XR_SCENE_COMPUTE_STATE_UPDATING_MSFT.
-
Once the asynchronous operation has completed successfully, xrGetSceneComputeStateMSFT must return XR_SCENE_COMPUTE_STATE_COMPLETED_MSFT until xrComputeNewSceneMSFT is called again.
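The once-per-frame polling pattern above can be sketched against a scripted sequence of states (StateSketch mirrors the XrSceneComputeStateMSFT values; framesUntilDone is a hypothetical helper):

```cpp
#include <vector>

enum StateSketch {
    kNone = 0,                // XR_SCENE_COMPUTE_STATE_NONE_MSFT
    kUpdating = 1,            // XR_SCENE_COMPUTE_STATE_UPDATING_MSFT
    kCompleted = 2,           // XR_SCENE_COMPUTE_STATE_COMPLETED_MSFT
    kCompletedWithError = 3,  // XR_SCENE_COMPUTE_STATE_COMPLETED_WITH_ERROR_MSFT
};

// Polls one scripted state per frame and returns how many frames elapsed
// before the compute reported completion (with or without error); returns the
// sequence length if it never completed.
int framesUntilDone(const std::vector<StateSketch>& perFrameStates) {
    int frames = 0;
    for (StateSketch state : perFrameStates) {
        ++frames;
        if (state == kCompleted || state == kCompletedWithError) {
            break;  // done polling; xrCreateSceneMSFT is valid only after kCompleted
        }
    }
    return frames;
}
```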
Create a scene handle after a new scene compute has completed
The xrCreateSceneMSFT function creates an XrSceneMSFT handle.
It can only be called after xrGetSceneComputeStateMSFT returns
XR_SCENE_COMPUTE_STATE_COMPLETED_MSFT to indicate that the
asynchronous operation has completed.
The XrSceneMSFT handle manages the collection of scene components that
represents the detected objects found during the query.
After an XrSceneMSFT handle is created, the handle and associated data must remain valid until destroyed, even after xrCreateSceneMSFT is called again to create the next scene. The runtime must keep alive any component data and mesh buffers relating to this historical scene until its handle is destroyed.
// Provided by XR_MSFT_scene_understanding
XrResult xrCreateSceneMSFT(
XrSceneObserverMSFT sceneObserver,
const XrSceneCreateInfoMSFT* createInfo,
XrSceneMSFT* scene);
Calling xrCreateSceneMSFT when xrGetSceneComputeStateMSFT
returns XR_SCENE_COMPUTE_STATE_NONE_MSFT or
XR_SCENE_COMPUTE_STATE_UPDATING_MSFT must return the error
XR_ERROR_COMPUTE_NEW_SCENE_NOT_COMPLETED_MSFT.
The XrSceneCreateInfoMSFT structure is defined as:
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneCreateInfoMSFT {
XrStructureType type;
const void* next;
} XrSceneCreateInfoMSFT;
The xrDestroySceneMSFT function releases the scene and the
underlying resources.
// Provided by XR_MSFT_scene_understanding
XrResult xrDestroySceneMSFT(
XrSceneMSFT scene);
Scene component types and Universally Unique Identifiers
Each XrSceneMSFT may contain one or more scene components. Scene components are uniquely identified by a Universally Unique Identifier, represented by XrUuidMSFT. Each scene component belongs to one XrSceneComponentTypeMSFT. The XrSceneComponentTypeMSFT denotes which additional properties can be read for that scene component.
-
Get a list of scene objects and their properties in the scene by calling xrGetSceneComponentsMSFT with XR_SCENE_COMPONENT_TYPE_OBJECT_MSFT and including XrSceneObjectsMSFT in the XrSceneComponentsMSFT::next chain.
-
Get the list of scene planes and their properties in the scene, if XR_SCENE_COMPUTE_FEATURE_PLANE_MSFT was passed to xrComputeNewSceneMSFT, by calling xrGetSceneComponentsMSFT with XR_SCENE_COMPONENT_TYPE_PLANE_MSFT and including XrScenePlanesMSFT in the XrSceneComponentsMSFT::next chain.
-
Get the list of scene visual meshes and their properties in the scene, if XR_SCENE_COMPUTE_FEATURE_VISUAL_MESH_MSFT was passed to xrComputeNewSceneMSFT, by calling xrGetSceneComponentsMSFT with XR_SCENE_COMPONENT_TYPE_VISUAL_MESH_MSFT and including XrSceneMeshesMSFT in the XrSceneComponentsMSFT::next chain.
-
Get the list of scene collider meshes and their properties in the scene, if XR_SCENE_COMPUTE_FEATURE_COLLIDER_MESH_MSFT was passed to xrComputeNewSceneMSFT, by calling xrGetSceneComponentsMSFT with XR_SCENE_COMPONENT_TYPE_COLLIDER_MESH_MSFT and including XrSceneMeshesMSFT in the XrSceneComponentsMSFT::next chain.
The XrUuidMSFT structure is a 128-bit UUID (Universally Unique IDentifier) that follows RFC 4122 Variant 1. The structure is composed of 16 octets, typically with the sizes and order of the fields defined in RFC 4122 section 4.1.2. The XrUuidMSFT structure is defined as:
// Provided by XR_MSFT_scene_understanding
typedef struct XrUuidMSFT {
uint8_t bytes[16];
} XrUuidMSFT;
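Since XrUuidMSFT is a plain 16-byte value, correlating scene components across scene computes via XrSceneComponentMSFT::id needs only byte-wise ordering to use the id as a map key; a sketch (UuidSketch is a hypothetical stand-in for XrUuidMSFT):

```cpp
#include <cstdint>
#include <cstring>
#include <map>

// Stand-in for XrUuidMSFT: 16 raw octets.
struct UuidSketch {
    uint8_t bytes[16];
};

// Byte-wise strict weak ordering so a UUID can key a std::map.
struct UuidLess {
    bool operator()(const UuidSketch& a, const UuidSketch& b) const {
        return std::memcmp(a.bytes, b.bytes, sizeof(a.bytes)) < 0;
    }
};

// Example: application data tracked per scene component id across computes.
using ComponentDataMap = std::map<UuidSketch, int, UuidLess>;
```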
The XrSceneComponentTypeMSFT enumeration identifies the scene component type.
// Provided by XR_MSFT_scene_understanding
typedef enum XrSceneComponentTypeMSFT {
XR_SCENE_COMPONENT_TYPE_INVALID_MSFT = -1,
XR_SCENE_COMPONENT_TYPE_OBJECT_MSFT = 1,
XR_SCENE_COMPONENT_TYPE_PLANE_MSFT = 2,
XR_SCENE_COMPONENT_TYPE_VISUAL_MESH_MSFT = 3,
XR_SCENE_COMPONENT_TYPE_COLLIDER_MESH_MSFT = 4,
// Provided by XR_MSFT_scene_understanding_serialization
XR_SCENE_COMPONENT_TYPE_SERIALIZED_SCENE_FRAGMENT_MSFT = 1000098000,
// Provided by XR_MSFT_scene_marker
XR_SCENE_COMPONENT_TYPE_MARKER_MSFT = 1000147000,
XR_SCENE_COMPONENT_TYPE_MAX_ENUM_MSFT = 0x7FFFFFFF
} XrSceneComponentTypeMSFT;
Get scene components
Scene components are read from an XrSceneMSFT using
xrGetSceneComponentsMSFT and passing one
XrSceneComponentTypeMSFT.
This function follows the two-call
idiom for filling multiple buffers in a struct.
Different scene component types may have additional properties that can be
read by chaining additional structures to XrSceneComponentsMSFT.
Those additional structures must have an array size that is at least as
large as XrSceneComponentsMSFT::componentCapacityInput, otherwise the
runtime must return XR_ERROR_SIZE_INSUFFICIENT.
-
If XR_SCENE_COMPONENT_TYPE_OBJECT_MSFT is passed to xrGetSceneComponentsMSFT, then XrSceneObjectsMSFT may be included in the XrSceneComponentsMSFT::next chain.
-
If XR_SCENE_COMPONENT_TYPE_PLANE_MSFT is passed to xrGetSceneComponentsMSFT, then XrScenePlanesMSFT may be included in the XrSceneComponentsMSFT::next chain.
-
If XR_SCENE_COMPONENT_TYPE_VISUAL_MESH_MSFT or XR_SCENE_COMPONENT_TYPE_COLLIDER_MESH_MSFT is passed to xrGetSceneComponentsMSFT, then XrSceneMeshesMSFT may be included in the XrSceneComponentsMSFT::next chain.
// Provided by XR_MSFT_scene_understanding
XrResult xrGetSceneComponentsMSFT(
XrSceneMSFT scene,
const XrSceneComponentsGetInfoMSFT* getInfo,
XrSceneComponentsMSFT* components);
An application can use XrSceneComponentsGetInfoMSFT to read the state
of a specific component type using the xrGetSceneComponentsMSFT
function.
Applications can chain one or more of following extension structures to the
XrSceneComponentsGetInfoMSFT::next chain to further narrow the
returned components.
The returned components must satisfy all conditions in the extension
structs.
-
XrSceneComponentParentFilterInfoMSFT to return only scene components that match the given parent object identifier.
-
XrSceneObjectTypesFilterInfoMSFT to return only scene components that match any of the given XrSceneObjectTypeMSFT values; if a scene component does not have an XrSceneObjectTypeMSFT property, then the parent's XrSceneObjectTypeMSFT property is compared instead.
-
XrScenePlaneAlignmentFilterInfoMSFT to return only scene components that match any of the given XrScenePlaneAlignmentTypeMSFT values.
The XrSceneComponentsGetInfoMSFT structure is defined as:
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneComponentsGetInfoMSFT {
XrStructureType type;
const void* next;
XrSceneComponentTypeMSFT componentType;
} XrSceneComponentsGetInfoMSFT;
The XrSceneComponentsMSFT structure contains an array of
XrSceneComponentMSFT returning the components that satisfy the
conditions in xrGetSceneComponentsMSFT::getInfo.
The XrSceneComponentsMSFT structure is defined as:
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneComponentsMSFT {
XrStructureType type;
void* next;
uint32_t componentCapacityInput;
uint32_t componentCountOutput;
XrSceneComponentMSFT* components;
} XrSceneComponentsMSFT;
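The capacity/count pair above follows the OpenXR two-call idiom: call once with componentCapacityInput set to zero to learn the required size, allocate, then call again to fill the array. The sketch below illustrates the pattern with hypothetical plain-C stand-ins (SceneComponent, SceneComponents, get_scene_components_stub) in place of the real OpenXR types and runtime; a real application would include <openxr/openxr.h> and call xrGetSceneComponentsMSFT.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef int32_t XrResult;
enum { XR_SUCCESS = 0 };

typedef struct {
    int componentType;
    int64_t updateTime;
} SceneComponent;                 /* stands in for XrSceneComponentMSFT */

typedef struct {
    uint32_t componentCapacityInput;
    uint32_t componentCountOutput;
    SceneComponent* components;
} SceneComponents;                /* stands in for XrSceneComponentsMSFT */

/* Stub runtime that pretends the scene holds 3 plane components. */
static XrResult get_scene_components_stub(SceneComponents* out) {
    out->componentCountOutput = 3;
    if (out->componentCapacityInput == 0)
        return XR_SUCCESS;                    /* first call: sizing only */
    for (uint32_t i = 0; i < 3 && i < out->componentCapacityInput; ++i)
        out->components[i].componentType = 1; /* "plane" */
    return XR_SUCCESS;
}

/* Two-call idiom: query the required count, allocate, then read for real. */
uint32_t demo_read_components(void) {
    SceneComponents sc = { 0 };
    get_scene_components_stub(&sc);           /* sizing call */
    sc.components = calloc(sc.componentCountOutput, sizeof(SceneComponent));
    sc.componentCapacityInput = sc.componentCountOutput;
    get_scene_components_stub(&sc);           /* fill call */
    uint32_t n = sc.componentCountOutput;
    free(sc.components);
    return n;
}
```

Note that any structure chained to XrSceneComponentsMSFT::next must size its own array to at least componentCapacityInput, as described above.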
The XrSceneComponentMSFT structure is defined as:
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneComponentMSFT {
XrSceneComponentTypeMSFT componentType;
XrUuidMSFT id;
XrUuidMSFT parentId;
XrTime updateTime;
} XrSceneComponentMSFT;
The runtime must set parentId to either zero or a valid
XrUuidMSFT that corresponds to a scene component of type
XR_SCENE_COMPONENT_TYPE_OBJECT_MSFT that exists in the
XrSceneMSFT.
Note
The parent scene object is intended to allow scene components to be grouped.
For example, the scene object for a wall might have multiple scene component
children, such as a plane scene component, a visual mesh scene component,
and a collider mesh scene component.
Get scene components using filters
The scene components that are returned by xrGetSceneComponentsMSFT can be filtered by chaining optional structures to XrSceneComponentsGetInfoMSFT. The runtime must combine multiple filters with a logical AND.
The XrSceneComponentParentFilterInfoMSFT structure is defined as:
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneComponentParentFilterInfoMSFT {
XrStructureType type;
const void* next;
XrUuidMSFT parentId;
} XrSceneComponentParentFilterInfoMSFT;
The runtime must return only scene components with matching parentId.
If parentId is zero then the runtime must return only scene
components that do not have a parent.
The XrSceneObjectTypesFilterInfoMSFT structure is defined as:
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneObjectTypesFilterInfoMSFT {
XrStructureType type;
const void* next;
uint32_t objectTypeCount;
const XrSceneObjectTypeMSFT* objectTypes;
} XrSceneObjectTypesFilterInfoMSFT;
The runtime must return only scene components that match any of the
XrSceneObjectTypeMSFT in objectTypes.
If a scene component does not have an XrSceneObjectTypeMSFT then the
parent’s XrSceneObjectTypeMSFT value will be used for the comparison
if it exists.
The XrScenePlaneAlignmentFilterInfoMSFT structure is defined as:
// Provided by XR_MSFT_scene_understanding
typedef struct XrScenePlaneAlignmentFilterInfoMSFT {
XrStructureType type;
const void* next;
uint32_t alignmentCount;
const XrScenePlaneAlignmentTypeMSFT* alignments;
} XrScenePlaneAlignmentFilterInfoMSFT;
The runtime must return only scene components that match one of the
XrScenePlaneAlignmentTypeMSFT values passed in alignments.
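The logical-AND combination of chained filters can be sketched as follows. This is an illustrative model, not runtime code: PlaneComponent, the ALIGN_* constants, and count_matching are hypothetical plain-C stand-ins for the filter structures described above.

```c
#include <assert.h>
#include <stdint.h>

enum { ALIGN_NON_ORTHOGONAL = 0, ALIGN_HORIZONTAL = 1, ALIGN_VERTICAL = 2 };

typedef struct {
    uint32_t parentId;   /* 0 means "no parent" */
    int alignment;
} PlaneComponent;

/* Returns the number of components whose parent matches parentFilter AND
 * whose alignment matches any entry in alignments[]; this mirrors how the
 * runtime must combine chained filter structures with a logical AND. */
uint32_t count_matching(const PlaneComponent* comps, uint32_t count,
                        uint32_t parentFilter,
                        const int* alignments, uint32_t alignmentCount) {
    uint32_t matches = 0;
    for (uint32_t i = 0; i < count; ++i) {
        if (comps[i].parentId != parentFilter)
            continue;                       /* parent filter failed */
        for (uint32_t a = 0; a < alignmentCount; ++a) {
            if (comps[i].alignment == alignments[a]) {
                ++matches;                  /* both filters satisfied */
                break;
            }
        }
    }
    return matches;
}

int demo_vertical_children_of_parent(void) {
    const PlaneComponent comps[] = {
        { 7, ALIGN_VERTICAL }, { 7, ALIGN_HORIZONTAL }, { 9, ALIGN_VERTICAL }
    };
    const int wanted[] = { ALIGN_VERTICAL };
    /* only the first component has parent 7 AND a vertical alignment */
    return (int)count_matching(comps, 3, 7, wanted, 1);
}
```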
Get scene objects
The runtime must fill out the XrSceneObjectsMSFT structure when
included in the XrSceneComponentsMSFT::next chain.
The XrSceneComponentsGetInfoMSFT::componentType must be
XR_SCENE_COMPONENT_TYPE_OBJECT_MSFT when XrSceneObjectsMSFT is
included in the next chain.
If it is not, the XR_ERROR_SCENE_COMPONENT_TYPE_MISMATCH_MSFT error
must be returned.
The XrSceneObjectsMSFT structure is defined as:
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneObjectsMSFT {
XrStructureType type;
void* next;
uint32_t sceneObjectCount;
XrSceneObjectMSFT* sceneObjects;
} XrSceneObjectsMSFT;
The runtime must set XrSceneObjectMSFT::objectType only to one
of the following XrSceneObjectTypeMSFT values:
- XR_SCENE_OBJECT_TYPE_UNCATEGORIZED_MSFT
- XR_SCENE_OBJECT_TYPE_BACKGROUND_MSFT
- XR_SCENE_OBJECT_TYPE_WALL_MSFT
- XR_SCENE_OBJECT_TYPE_FLOOR_MSFT
- XR_SCENE_OBJECT_TYPE_CEILING_MSFT
- XR_SCENE_OBJECT_TYPE_PLATFORM_MSFT
- XR_SCENE_OBJECT_TYPE_INFERRED_MSFT
The XrSceneObjectMSFT structure represents the state of a scene object.
It is defined as:
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneObjectMSFT {
XrSceneObjectTypeMSFT objectType;
} XrSceneObjectMSFT;
The XrSceneObjectTypeMSFT enumeration identifies the different types of scene objects.
// Provided by XR_MSFT_scene_understanding
typedef enum XrSceneObjectTypeMSFT {
XR_SCENE_OBJECT_TYPE_UNCATEGORIZED_MSFT = -1,
XR_SCENE_OBJECT_TYPE_BACKGROUND_MSFT = 1,
XR_SCENE_OBJECT_TYPE_WALL_MSFT = 2,
XR_SCENE_OBJECT_TYPE_FLOOR_MSFT = 3,
XR_SCENE_OBJECT_TYPE_CEILING_MSFT = 4,
XR_SCENE_OBJECT_TYPE_PLATFORM_MSFT = 5,
XR_SCENE_OBJECT_TYPE_INFERRED_MSFT = 6,
XR_SCENE_OBJECT_TYPE_MAX_ENUM_MSFT = 0x7FFFFFFF
} XrSceneObjectTypeMSFT;
Get scene planes
The runtime must fill out the XrScenePlanesMSFT structure when
included in the XrSceneComponentsMSFT::next chain.
The XrSceneComponentsGetInfoMSFT::componentType must be
XR_SCENE_COMPONENT_TYPE_PLANE_MSFT when XrScenePlanesMSFT is
included in the next chain.
If it is not, the XR_ERROR_SCENE_COMPONENT_TYPE_MISMATCH_MSFT error
must be returned.
The XrScenePlanesMSFT structure is defined as:
// Provided by XR_MSFT_scene_understanding
typedef struct XrScenePlanesMSFT {
XrStructureType type;
void* next;
uint32_t scenePlaneCount;
XrScenePlaneMSFT* scenePlanes;
} XrScenePlanesMSFT;
The XrScenePlaneMSFT structure represents the state of a scene plane.
It is defined as:
// Provided by XR_MSFT_scene_understanding
typedef struct XrScenePlaneMSFT {
XrScenePlaneAlignmentTypeMSFT alignment;
XrExtent2Df size;
uint64_t meshBufferId;
XrBool32 supportsIndicesUint16;
} XrScenePlaneMSFT;
The size of a plane is measured in the x-y plane of the plane’s
coordinate system.
A plane with a position of {0,0,0}, rotation of {0,0,0,1} (no rotation), and
an extent of {1,1} refers to a 1 meter x 1 meter plane centered at {0,0,0}
with its front face normal vector pointing towards the +Z direction in the
plane component’s space.
For planes with an alignment of
XR_SCENE_PLANE_ALIGNMENT_TYPE_VERTICAL_MSFT, the +Y direction must
point up away from the direction of gravity.
Note
OpenXR uses an X-Y plane with +Z as the plane normal, but other APIs may use an X-Z plane with +Y as the plane normal. The X-Y plane can be converted to an X-Z plane by rotating -π/2 radians around the +X axis.
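The rotation described in the note has a simple closed form: rotating -π/2 radians about +X maps (x, y, z) to (x, z, -y), so the +Z plane normal becomes +Y. A minimal sketch (the Vec3 type and function name are illustrative, not part of the API):

```c
#include <assert.h>

typedef struct { float x, y, z; } Vec3;

/* Rotate a vector -pi/2 radians about the +X axis:
 * (x, y, z) -> (x, z, -y), converting X-Y plane space to X-Z plane space. */
Vec3 xy_plane_to_xz_plane(Vec3 v) {
    Vec3 r = { v.x, v.z, -v.y };
    return r;
}

int demo_normal_maps_to_up(void) {
    Vec3 n = { 0.0f, 0.0f, 1.0f };      /* +Z plane normal in OpenXR */
    Vec3 r = xy_plane_to_xz_plane(n);
    /* after conversion the normal points along +Y */
    return r.x == 0.0f && r.y == 1.0f && r.z == 0.0f;
}
```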
XrScenePlaneAlignmentTypeMSFT identifies the different plane alignment types.
// Provided by XR_MSFT_scene_understanding
typedef enum XrScenePlaneAlignmentTypeMSFT {
XR_SCENE_PLANE_ALIGNMENT_TYPE_NON_ORTHOGONAL_MSFT = 0,
XR_SCENE_PLANE_ALIGNMENT_TYPE_HORIZONTAL_MSFT = 1,
XR_SCENE_PLANE_ALIGNMENT_TYPE_VERTICAL_MSFT = 2,
XR_SCENE_PLANE_ALIGNMENT_TYPE_MAX_ENUM_MSFT = 0x7FFFFFFF
} XrScenePlaneAlignmentTypeMSFT;
Get scene mesh
The runtime must fill out the XrSceneMeshesMSFT structure when
included in the XrSceneComponentsMSFT::next chain.
The XrSceneComponentsGetInfoMSFT::componentType must be
XR_SCENE_COMPONENT_TYPE_VISUAL_MESH_MSFT or
XR_SCENE_COMPONENT_TYPE_COLLIDER_MESH_MSFT when
XrSceneMeshesMSFT is included in the next chain.
If it is not, the XR_ERROR_SCENE_COMPONENT_TYPE_MISMATCH_MSFT error
must be returned.
The XrSceneMeshesMSFT structure is defined as:
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneMeshesMSFT {
XrStructureType type;
void* next;
uint32_t sceneMeshCount;
XrSceneMeshMSFT* sceneMeshes;
} XrSceneMeshesMSFT;
The XrSceneMeshMSFT structure represents the state of a scene component’s mesh.
It is defined as:
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneMeshMSFT {
uint64_t meshBufferId;
XrBool32 supportsIndicesUint16;
} XrSceneMeshMSFT;
Read scene mesh buffer
The xrGetSceneMeshBuffersMSFT function retrieves the scene mesh vertex buffer and index buffer for the given scene mesh buffer identifier.
Note
Applications may use the scene mesh buffer identifier as a key to cache the vertices and indices of a mesh for reuse within an XrSceneMSFT or across multiple XrSceneMSFT for the same XrSession. Applications can avoid unnecessarily calling xrGetSceneMeshBuffersMSFT for a scene component if XrSceneComponentMSFT::updateTime has not changed.
This function follows the two-call idiom for filling multiple buffers in a struct.
The xrGetSceneMeshBuffersMSFT function is defined as:
// Provided by XR_MSFT_scene_understanding
XrResult xrGetSceneMeshBuffersMSFT(
XrSceneMSFT scene,
const XrSceneMeshBuffersGetInfoMSFT* getInfo,
XrSceneMeshBuffersMSFT* buffers);
Applications can request the vertex buffer of the mesh by including
XrSceneMeshVertexBufferMSFT in the
XrSceneMeshBuffersMSFT::next chain.
Runtimes must support requesting a 32-bit index buffer and may support
requesting a 16-bit index buffer.
Applications can request a 32-bit index buffer by including
XrSceneMeshIndicesUint32MSFT in the
XrSceneMeshBuffersMSFT::next chain.
Applications can request a 16-bit index buffer by including
XrSceneMeshIndicesUint16MSFT in the
XrSceneMeshBuffersMSFT::next chain.
If the runtime does not support requesting a 16-bit index buffer for the
given scene mesh buffer, then XR_ERROR_VALIDATION_FAILURE must be
returned.
The runtime must support reading a 16-bit index buffer for the given scene
mesh buffer if XrScenePlaneMSFT::supportsIndicesUint16 or
XrSceneMeshMSFT::supportsIndicesUint16 is XR_TRUE for the scene
component that contained that scene mesh buffer identifier.
The runtime must return XR_ERROR_SCENE_MESH_BUFFER_ID_INVALID_MSFT if
none of the scene components in the given XrSceneMSFT contain
XrSceneMeshBuffersGetInfoMSFT::meshBufferId.
The runtime must return XR_ERROR_SCENE_MESH_BUFFER_ID_INVALID_MSFT if
XrSceneMeshBuffersGetInfoMSFT::meshBufferId is zero.
The runtime must return XR_ERROR_VALIDATION_FAILURE if both
XrSceneMeshIndicesUint32MSFT and XrSceneMeshIndicesUint16MSFT
are included in the XrSceneMeshBuffersMSFT::next chain.
The runtime must return XR_ERROR_VALIDATION_FAILURE if the
XrSceneMeshBuffersMSFT::next does not contain at least one of
XrSceneMeshVertexBufferMSFT, XrSceneMeshIndicesUint32MSFT or
XrSceneMeshIndicesUint16MSFT.
The runtime must return the same vertices and indices for a given scene mesh buffer identifier and XrSession. A runtime may return zero vertices and indices if the underlying mesh data is no longer available.
XrSceneMeshBuffersGetInfoMSFT is an input structure for the xrGetSceneMeshBuffersMSFT function.
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneMeshBuffersGetInfoMSFT {
XrStructureType type;
const void* next;
uint64_t meshBufferId;
} XrSceneMeshBuffersGetInfoMSFT;
XrSceneMeshBuffersMSFT is an input/output structure for reading scene mesh buffers.
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneMeshBuffersMSFT {
XrStructureType type;
void* next;
} XrSceneMeshBuffersMSFT;
XrSceneMeshVertexBufferMSFT is an input/output structure for reading scene mesh buffer vertices.
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneMeshVertexBufferMSFT {
XrStructureType type;
void* next;
uint32_t vertexCapacityInput;
uint32_t vertexCountOutput;
XrVector3f* vertices;
} XrSceneMeshVertexBufferMSFT;
XrSceneMeshIndicesUint32MSFT is an input/output structure for reading 32-bit indices from a scene mesh buffer.
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneMeshIndicesUint32MSFT {
XrStructureType type;
void* next;
uint32_t indexCapacityInput;
uint32_t indexCountOutput;
uint32_t* indices;
} XrSceneMeshIndicesUint32MSFT;
XrSceneMeshIndicesUint16MSFT is an input/output structure for reading 16-bit indices from a scene mesh buffer.
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneMeshIndicesUint16MSFT {
XrStructureType type;
void* next;
uint32_t indexCapacityInput;
uint32_t indexCountOutput;
uint16_t* indices;
} XrSceneMeshIndicesUint16MSFT;
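The "two-call idiom for filling multiple buffers" mentioned above sizes every chained buffer in one call, then fills them all in a second call. The sketch below models that with hypothetical stand-ins (VertexBuffer, IndexBuffer32, get_mesh_buffers_stub) rather than the real xrGetSceneMeshBuffersMSFT, which requires a runtime:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct { float x, y, z; } Vec3f;
typedef struct {
    uint32_t vertexCapacityInput, vertexCountOutput;
    Vec3f* vertices;
} VertexBuffer;   /* stands in for XrSceneMeshVertexBufferMSFT */
typedef struct {
    uint32_t indexCapacityInput, indexCountOutput;
    uint32_t* indices;
} IndexBuffer32;  /* stands in for XrSceneMeshIndicesUint32MSFT */

/* Stub runtime: a quad mesh with 4 vertices and 6 indices. */
static void get_mesh_buffers_stub(VertexBuffer* vb, IndexBuffer32* ib) {
    vb->vertexCountOutput = 4;
    ib->indexCountOutput = 6;
    if (vb->vertexCapacityInput >= 4 && ib->indexCapacityInput >= 6) {
        static const uint32_t quad[6] = { 0, 1, 2, 2, 3, 0 };
        for (int i = 0; i < 6; ++i) ib->indices[i] = quad[i];
        for (int i = 0; i < 4; ++i) vb->vertices[i] = (Vec3f){ 0 };
    }
}

/* First call sizes both chained buffers at once; second call fills both. */
uint32_t demo_read_quad_index_count(void) {
    VertexBuffer vb = { 0 };
    IndexBuffer32 ib = { 0 };
    get_mesh_buffers_stub(&vb, &ib);      /* sizing call */
    vb.vertices = calloc(vb.vertexCountOutput, sizeof(Vec3f));
    ib.indices = calloc(ib.indexCountOutput, sizeof(uint32_t));
    vb.vertexCapacityInput = vb.vertexCountOutput;
    ib.indexCapacityInput = ib.indexCountOutput;
    get_mesh_buffers_stub(&vb, &ib);      /* fill call */
    uint32_t n = ib.indexCountOutput;
    free(vb.vertices);
    free(ib.indices);
    return n;
}
```

Remember that the real function additionally requires at least one of the vertex or index structures in the next chain, and rejects requesting both 16-bit and 32-bit indices together.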
Locate scene objects
The xrLocateSceneComponentsMSFT function locates an array of scene components to a base space at a given time.
// Provided by XR_MSFT_scene_understanding
XrResult xrLocateSceneComponentsMSFT(
XrSceneMSFT scene,
const XrSceneComponentsLocateInfoMSFT* locateInfo,
XrSceneComponentLocationsMSFT* locations);
The runtime must return XR_ERROR_SIZE_INSUFFICIENT if
XrSceneComponentLocationsMSFT::locationCount is less than
XrSceneComponentsLocateInfoMSFT::componentIdCount.
Note
Similar to xrLocateSpace, applications should call xrLocateSceneComponentsMSFT each frame, because the locations returned in later frames may change over time as the target space or the scene components refine their locations.
The XrSceneComponentsLocateInfoMSFT structure describes the information to locate scene components.
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneComponentsLocateInfoMSFT {
XrStructureType type;
const void* next;
XrSpace baseSpace;
XrTime time;
uint32_t componentIdCount;
const XrUuidMSFT* componentIds;
} XrSceneComponentsLocateInfoMSFT;
The XrSceneComponentLocationsMSFT structure returns scene component locations.
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneComponentLocationsMSFT {
XrStructureType type;
void* next;
uint32_t locationCount;
XrSceneComponentLocationMSFT* locations;
} XrSceneComponentLocationsMSFT;
The XrSceneComponentLocationMSFT structure describes the position and
orientation of a scene component to space
XrSceneComponentsLocateInfoMSFT::baseSpace at time
XrSceneComponentsLocateInfoMSFT::time.
If the scene component identified by XrUuidMSFT is not found,
flags should be empty.
// Provided by XR_MSFT_scene_understanding
typedef struct XrSceneComponentLocationMSFT {
XrSpaceLocationFlags flags;
XrPosef pose;
} XrSceneComponentLocationMSFT;
New Object Types
New Flag Types
New Enum Constants
XrObjectType enumeration is extended with:
- XR_OBJECT_TYPE_SCENE_OBSERVER_MSFT
- XR_OBJECT_TYPE_SCENE_MSFT
XrStructureType enumeration is extended with:
- XR_TYPE_SCENE_OBSERVER_CREATE_INFO_MSFT
- XR_TYPE_SCENE_CREATE_INFO_MSFT
- XR_TYPE_NEW_SCENE_COMPUTE_INFO_MSFT
- XR_TYPE_VISUAL_MESH_COMPUTE_LOD_INFO_MSFT
- XR_TYPE_SCENE_COMPONENTS_MSFT
- XR_TYPE_SCENE_COMPONENTS_GET_INFO_MSFT
- XR_TYPE_SCENE_COMPONENT_LOCATIONS_MSFT
- XR_TYPE_SCENE_COMPONENTS_LOCATE_INFO_MSFT
- XR_TYPE_SCENE_OBJECTS_MSFT
- XR_TYPE_SCENE_COMPONENT_PARENT_FILTER_INFO_MSFT
- XR_TYPE_SCENE_OBJECT_TYPES_FILTER_INFO_MSFT
- XR_TYPE_SCENE_PLANES_MSFT
- XR_TYPE_SCENE_PLANE_ALIGNMENT_FILTER_INFO_MSFT
- XR_TYPE_SCENE_MESHES_MSFT
- XR_TYPE_SCENE_MESH_BUFFERS_GET_INFO_MSFT
- XR_TYPE_SCENE_MESH_BUFFERS_MSFT
XrResult enumeration is extended with:
- XR_ERROR_COMPUTE_NEW_SCENE_NOT_COMPLETED_MSFT
- XR_ERROR_SCENE_COMPONENT_ID_INVALID_MSFT
- XR_ERROR_SCENE_COMPONENT_TYPE_MISMATCH_MSFT
- XR_ERROR_SCENE_MESH_BUFFER_ID_INVALID_MSFT
- XR_ERROR_SCENE_COMPUTE_FEATURE_INCOMPATIBLE_MSFT
- XR_ERROR_SCENE_COMPUTE_CONSISTENCY_MISMATCH_MSFT
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2021-05-03 (Darryl Gough)
  - Initial extension description
- Revision 2, 2022-06-29 (Darryl Gough)
  - Fix missing error codes
12.167. XR_MSFT_scene_understanding_serialization
- Name String: XR_MSFT_scene_understanding_serialization
- Extension Type: Instance extension
- Registered Extension Number: 99
- Revision: 2
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2021-05-03
- IP Status: No known IP claims.
- Contributors: Darryl Gough, Microsoft; Yin Li, Microsoft; Bryce Hutchings, Microsoft; Alex Turner, Microsoft; Simon Stachniak, Microsoft; David Fields, Microsoft
Overview
This extension extends the scene understanding extension and enables scenes to be serialized or deserialized. It enables computing a new scene into a serialized binary stream and it enables deserializing a binary stream into an XrSceneMSFT handle.
Serialize a scene
This extension adds XR_SCENE_COMPUTE_FEATURE_SERIALIZE_SCENE_MSFT to
XrSceneComputeFeatureMSFT, which can be passed to
xrComputeNewSceneMSFT plus one or more of
XR_SCENE_COMPUTE_FEATURE_PLANE_MSFT,
XR_SCENE_COMPUTE_FEATURE_PLANE_MESH_MSFT,
XR_SCENE_COMPUTE_FEATURE_VISUAL_MESH_MSFT or
XR_SCENE_COMPUTE_FEATURE_COLLIDER_MESH_MSFT to inform the runtime that
it should compute a serialized binary representation of the scene.
If XR_SCENE_COMPUTE_FEATURE_SERIALIZE_SCENE_MSFT is the only
XrSceneComputeFeatureMSFT passed to xrComputeNewSceneMSFT then
XR_ERROR_SCENE_COMPUTE_FEATURE_INCOMPATIBLE_MSFT must be returned.
If an XrSceneMSFT was created using
XR_SCENE_COMPUTE_FEATURE_SERIALIZE_SCENE_MSFT then
XR_SCENE_COMPONENT_TYPE_SERIALIZED_SCENE_FRAGMENT_MSFT can be passed
to the xrGetSceneComponentsMSFT function to read the list of
serialized scene fragment XrUuidMSFT values from
XrSceneComponentMSFT::id.
The XrUuidMSFT of a scene fragment can be passed to
xrGetSerializedSceneFragmentDataMSFT to read the binary data of the
given scene fragment.
The application can call the xrGetSerializedSceneFragmentDataMSFT function to read the binary data of a serialized scene fragment from the XrSceneMSFT handle. This function follows the two-call idiom for filling the buffer.
The xrGetSerializedSceneFragmentDataMSFT function is defined as:
// Provided by XR_MSFT_scene_understanding_serialization
XrResult xrGetSerializedSceneFragmentDataMSFT(
XrSceneMSFT scene,
const XrSerializedSceneFragmentDataGetInfoMSFT* getInfo,
uint32_t countInput,
uint32_t* readOutput,
uint8_t* buffer);
The runtime must return XR_ERROR_SCENE_COMPONENT_ID_INVALID_MSFT if
the given scene fragment XrUuidMSFT was not found.
The XrSerializedSceneFragmentDataGetInfoMSFT structure is defined as:
// Provided by XR_MSFT_scene_understanding_serialization
typedef struct XrSerializedSceneFragmentDataGetInfoMSFT {
XrStructureType type;
const void* next;
XrUuidMSFT sceneFragmentId;
} XrSerializedSceneFragmentDataGetInfoMSFT;
Deserialize a scene
This extension enables an application to deserialize the binary representation of a scene that was previously serialized.
For a given XrSceneObserverMSFT handle, instead of calling xrComputeNewSceneMSFT, which computes the scene from the system’s sensors, the application can use xrDeserializeSceneMSFT to produce a scene from the given binary scene fragment data.
The xrDeserializeSceneMSFT function is defined as:
// Provided by XR_MSFT_scene_understanding_serialization
XrResult xrDeserializeSceneMSFT(
XrSceneObserverMSFT sceneObserver,
const XrSceneDeserializeInfoMSFT* deserializeInfo);
The xrDeserializeSceneMSFT function begins deserializing a list of serialized scene fragments. The runtime must return quickly without waiting for the deserialization to complete. The application should use xrGetSceneComputeStateMSFT to inspect the completeness of the deserialization.
The runtime must return XR_ERROR_COMPUTE_NEW_SCENE_NOT_COMPLETED_MSFT
if xrDeserializeSceneMSFT is called while the scene computation is in
progress.
The xrGetSceneComputeStateMSFT function must return
XR_SCENE_COMPUTE_STATE_UPDATING_MSFT while the deserialization is in
progress, and XR_SCENE_COMPUTE_STATE_COMPLETED_MSFT when the
deserialization has completed successfully.
If the runtime fails to deserialize the binary stream,
xrGetSceneComputeStateMSFT must return
XR_SCENE_COMPUTE_STATE_COMPLETED_WITH_ERROR_MSFT to indicate that the
deserialization has completed but an error occurred.
When xrGetSceneComputeStateMSFT returns
XR_SCENE_COMPUTE_STATE_COMPLETED_MSFT, the application may call
xrCreateSceneMSFT to create the XrSceneMSFT handle.
If xrCreateSceneMSFT is called while xrGetSceneComputeStateMSFT
returns XR_SCENE_COMPUTE_STATE_COMPLETED_WITH_ERROR_MSFT, a valid
XrSceneMSFT handle must be returned, but that handle must contain
zero scene components.
XrSceneDeserializeInfoMSFT is an input structure that describes the array of serialized scene fragments that will be deserialized by the xrDeserializeSceneMSFT function.
// Provided by XR_MSFT_scene_understanding_serialization
typedef struct XrSceneDeserializeInfoMSFT {
XrStructureType type;
const void* next;
uint32_t fragmentCount;
const XrDeserializeSceneFragmentMSFT* fragments;
} XrSceneDeserializeInfoMSFT;
If the scene fragments are not in the same order as returned by
xrGetSceneComponentsMSFT, or the runtime fails to deserialize the
binary data, then xrGetSceneComputeStateMSFT must return
XR_SCENE_COMPUTE_STATE_COMPLETED_WITH_ERROR_MSFT.
The XrDeserializeSceneFragmentMSFT structure represents a single fragment of a binary stream to be deserialized. It is defined as:
// Provided by XR_MSFT_scene_understanding_serialization
typedef struct XrDeserializeSceneFragmentMSFT {
uint32_t bufferSize;
const uint8_t* buffer;
} XrDeserializeSceneFragmentMSFT;
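The asynchronous flow above (begin deserialization, then poll the compute state each frame) can be sketched as follows. The stub functions and STATE_* constants are hypothetical stand-ins for xrDeserializeSceneMSFT and xrGetSceneComputeStateMSFT, which require a real runtime:

```c
#include <assert.h>

enum { STATE_NONE = 0, STATE_UPDATING = 1, STATE_COMPLETED = 2 };

static int g_ticksLeft;   /* stub: polls until deserialization "finishes" */

/* Stands in for xrDeserializeSceneMSFT: returns quickly, work continues. */
static void deserialize_scene_stub(int workTicks) { g_ticksLeft = workTicks; }

/* Stands in for xrGetSceneComputeStateMSFT. */
static int get_scene_compute_state_stub(void) {
    if (g_ticksLeft > 0) { --g_ticksLeft; return STATE_UPDATING; }
    return STATE_COMPLETED;
}

/* Poll each frame until the compute state leaves UPDATING; a real
 * application would then call xrCreateSceneMSFT on COMPLETED. */
int demo_poll_until_done(void) {
    deserialize_scene_stub(3);
    int frames = 0;
    while (get_scene_compute_state_stub() == STATE_UPDATING)
        ++frames;                       /* render a frame, then poll again */
    return frames;                      /* UPDATING polls before COMPLETED */
}
```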
New Object Types
New Flag Types
New Enum Constants
XrSceneComponentTypeMSFT enumeration is extended with:
- XR_SCENE_COMPONENT_TYPE_SERIALIZED_SCENE_FRAGMENT_MSFT
XrSceneComputeFeatureMSFT enumeration is extended with:
- XR_SCENE_COMPUTE_FEATURE_SERIALIZE_SCENE_MSFT
XrStructureType enumeration is extended with:
- XR_TYPE_SERIALIZED_SCENE_FRAGMENT_DATA_GET_INFO_MSFT
- XR_TYPE_SCENE_DESERIALIZE_INFO_MSFT
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2021-05-03 (Darryl Gough)
  - Initial extension description
- Revision 2, 2022-06-29 (Darryl Gough)
  - Fix missing error codes
12.168. XR_MSFT_secondary_view_configuration
- Name String: XR_MSFT_secondary_view_configuration
- Extension Type: Instance extension
- Registered Extension Number: 54
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2020-05-02
- IP Status: No known IP claims.
- Contributors: Yin Li, Microsoft; Zonglin Wu, Microsoft; Alex Turner, Microsoft
12.168.1. Overview
This extension allows an application to enable support for one or more secondary view configurations. A secondary view configuration is a well-known set of views that the runtime can make active while a session is running. In a frame where a secondary view configuration is active, the application’s single frame loop should additionally render into those active secondary views, sharing the frame waiting logic and update loop with the primary view configuration for that running session.
Proper secondary view configuration support includes the following steps:
- When calling xrCreateInstance, enable the XR_MSFT_secondary_view_configuration extension and an extension that defines a concrete secondary view configuration type, for example XR_MSFT_first_person_observer.
- Inspect supported secondary view configurations using the xrEnumerateViewConfigurations function.
- Enable supported secondary view configurations using the xrBeginSession function with an XrSecondaryViewConfigurationSessionBeginInfoMSFT chained extension structure.
- Inspect whether an enabled secondary view configuration has been activated by the system or the user using the xrWaitFrame function with an XrSecondaryViewConfigurationFrameStateMSFT chained extension structure.
- When a secondary view configuration becomes active, get the latest view configuration properties using the xrGetViewConfigurationProperties and xrEnumerateViewConfigurationViews functions.
- Create the swapchain images for the active secondary view configuration using the xrCreateSwapchain function with an XrSecondaryViewConfigurationSwapchainCreateInfoMSFT chained extension structure, using recommendedImageRectWidth and recommendedImageRectHeight from the corresponding XrViewConfigurationView structure returned by xrEnumerateViewConfigurationViews.
- Locate the secondary view configuration views using the xrLocateViews function with the active secondary view configuration type.
- Submit the composition layers using the swapchain images for an active secondary view configuration using the xrEndFrame function with the XrSecondaryViewConfigurationFrameEndInfoMSFT chained extension structure.
12.168.2. Enumerate supported secondary view configurations
The first step is for the application to inspect whether the runtime supports certain secondary view configurations, using the existing xrEnumerateViewConfigurations API.
For example, when the XR_MSFT_first_person_observer extension is
enabled, the application will enumerate a view configuration of type
XR_VIEW_CONFIGURATION_TYPE_SECONDARY_MONO_FIRST_PERSON_OBSERVER_MSFT,
and can use this secondary view configuration type in later functions.
12.168.3. Secondary view configuration properties
The application can inspect the properties of a secondary view configuration through the existing xrGetViewConfigurationProperties, xrEnumerateViewConfigurationViews and xrEnumerateEnvironmentBlendModes functions using a supported secondary view configuration type.
The runtime may change the recommended properties, such as recommended image width or height, when the secondary view configuration becomes active. The application should use the latest recommended width and height when creating swapchain images and related resources for the active secondary view configuration.
When an application creates swapchain images for a secondary view configuration, it can chain a XrSecondaryViewConfigurationSwapchainCreateInfoMSFT structure to XrSwapchainCreateInfo when calling xrCreateSwapchain. This hints to the runtime that the created swapchain image will be submitted to the given secondary view configuration, allowing the runtime to make optimizations for such usage when there is opportunity.
// Provided by XR_MSFT_secondary_view_configuration
typedef struct XrSecondaryViewConfigurationSwapchainCreateInfoMSFT {
XrStructureType type;
const void* next;
XrViewConfigurationType viewConfigurationType;
} XrSecondaryViewConfigurationSwapchainCreateInfoMSFT;
If this structure is not present in the XrSwapchainCreateInfo next chain when calling xrCreateSwapchain, the runtime should optimize the created swapchain for the primary view configuration of the session.
If the application submits a swapchain image created with one view configuration type to a composition layer for another view configuration, the runtime may need to copy the resource across view configurations. However, the runtime must correctly compose the image regardless of which view configuration type was hinted when the swapchain image was created.
12.168.4. Enable secondary view configuration
The application indicates to the runtime which secondary view configurations
it can support by chaining an
XrSecondaryViewConfigurationSessionBeginInfoMSFT structure to the
XrSessionBeginInfo::next pointer when calling
xrBeginSession.
The XrSecondaryViewConfigurationSessionBeginInfoMSFT structure is used by the application to indicate the list of secondary XrViewConfigurationType to enable for this session.
It is defined as:
// Provided by XR_MSFT_secondary_view_configuration
typedef struct XrSecondaryViewConfigurationSessionBeginInfoMSFT {
XrStructureType type;
const void* next;
uint32_t viewConfigurationCount;
const XrViewConfigurationType* enabledViewConfigurationTypes;
} XrSecondaryViewConfigurationSessionBeginInfoMSFT;
If there are any duplicated view configuration types in the array of
enabledViewConfigurationTypes, the runtime must return error
XR_ERROR_VALIDATION_FAILURE.
If there are any primary view configuration types in the array of
enabledViewConfigurationTypes, the runtime must return error
XR_ERROR_VALIDATION_FAILURE.
If there are any secondary view configuration types not returned by
xrEnumerateViewConfigurations in the array of
enabledViewConfigurationTypes, the runtime must return error
XR_ERROR_VIEW_CONFIGURATION_TYPE_UNSUPPORTED.
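The duplicate-detection rule above can be modeled directly: listing the same view configuration type twice must fail validation. The sketch below uses hypothetical plain-C stand-ins (integer type values, validate_enabled_types) rather than the real XrViewConfigurationType and xrBeginSession:

```c
#include <assert.h>
#include <stdint.h>

typedef int32_t XrResult;
enum { XR_SUCCESS = 0, XR_ERROR_VALIDATION_FAILURE = -1 };

/* Mirrors the rule: any duplicated entry in enabledViewConfigurationTypes
 * must produce XR_ERROR_VALIDATION_FAILURE. */
XrResult validate_enabled_types(const int* types, uint32_t count) {
    for (uint32_t i = 0; i < count; ++i)
        for (uint32_t j = i + 1; j < count; ++j)
            if (types[i] == types[j])
                return XR_ERROR_VALIDATION_FAILURE;  /* duplicated entry */
    return XR_SUCCESS;
}

int demo_duplicate_is_rejected(void) {
    const int dup[] = { 1001, 1001 };        /* same type listed twice */
    const int ok[]  = { 1001 };
    return validate_enabled_types(dup, 2) == XR_ERROR_VALIDATION_FAILURE
        && validate_enabled_types(ok, 1) == XR_SUCCESS;
}
```

A real runtime additionally rejects primary view configuration types and types not returned by xrEnumerateViewConfigurations, as listed above.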
12.168.5. Per-frame active view configurations
The runtime then tells the application at each xrWaitFrame function call which of the enabled secondary view configurations are active for that frame. When extension structure XrSecondaryViewConfigurationFrameStateMSFT is chained to the XrFrameState::next pointer, the runtime writes into this structure the state of each enabled secondary view configuration.
The XrSecondaryViewConfigurationFrameStateMSFT structure returns whether the enabled view configurations are active or inactive.
It is defined as:
// Provided by XR_MSFT_secondary_view_configuration
typedef struct XrSecondaryViewConfigurationFrameStateMSFT {
XrStructureType type;
void* next;
uint32_t viewConfigurationCount;
XrSecondaryViewConfigurationStateMSFT* viewConfigurationStates;
} XrSecondaryViewConfigurationFrameStateMSFT;
The array size viewConfigurationCount in the
XrSecondaryViewConfigurationFrameStateMSFT structure must be the same
as the array size enabled through
XrSecondaryViewConfigurationSessionBeginInfoMSFT when calling
xrBeginSession earlier, otherwise the runtime must return error
XR_ERROR_VALIDATION_FAILURE.
The XrSecondaryViewConfigurationStateMSFT structure returns the state of an enabled secondary view configuration.
// Provided by XR_MSFT_secondary_view_configuration
typedef struct XrSecondaryViewConfigurationStateMSFT {
XrStructureType type;
void* next;
XrViewConfigurationType viewConfigurationType;
XrBool32 active;
} XrSecondaryViewConfigurationStateMSFT;
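A per-frame check of these states can be sketched as follows: after xrWaitFrame, the application walks the returned states and renders only the configurations reporting active. The ViewConfigState type and count_active function are hypothetical stand-ins, not part of the API:

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    int viewConfigurationType;
    int active;              /* XrBool32 in the real API */
} ViewConfigState;           /* stands in for XrSecondaryViewConfigurationStateMSFT */

/* Count how many enabled secondary view configurations need rendering
 * this frame; the application skips rendering for inactive ones. */
uint32_t count_active(const ViewConfigState* states, uint32_t count) {
    uint32_t n = 0;
    for (uint32_t i = 0; i < count; ++i)
        if (states[i].active)
            ++n;
    return n;
}

int demo_one_of_two_active(void) {
    ViewConfigState states[2] = { { 1001, 1 }, { 1002, 0 } };
    return count_active(states, 2) == 1;
}
```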
When a secondary view configuration becomes active, the application should
render its secondary views as soon as possible, by getting their view
transforms and FOV using xrLocateViews and then submitting composition
layers to xrEndFrame through the
XrSecondaryViewConfigurationFrameEndInfoMSFT extension structure.
When a secondary view configuration changes from inactive to active, the
runtime may change XrViewConfigurationView of the given view
configuration such as the recommended image width or height.
An application should query for latest XrViewConfigurationView
through xrEnumerateViewConfigurationViews function for the secondary
view configuration and consider recreating swapchain images if necessary.
The runtime must not change the XrViewConfigurationView of a secondary
view configuration, including the recommended image width and height, while
active remains true, until the secondary view configuration is
deactivated or the session has ended.
If necessary, the application can take longer than a frame duration to
prepare by calling xrEndFrame without submitting layers for that
secondary view configuration until ready.
The runtime should delay the underlying scenario managed by the secondary
view configuration until the application begins submitting frames with
layers for that configuration.
The composed output of an active secondary view configuration is undefined
if the application stops submitting frames with layers for that secondary
view configuration while active remains true.
When the runtime intends to conclude a secondary view configuration, for
example when the user stops video capture, the runtime makes the view
configuration inactive by setting the corresponding active in the
XrSecondaryViewConfigurationStateMSFT structure to false.
12.168.6. Locate and inspect view states of secondary view configurations
When the application calls xrLocateViews, it can use XrViewLocateInfo::viewConfigurationType field to query the view locations and projections for any enabled XrViewConfigurationType for the running session.
The runtime must return XR_ERROR_VIEW_CONFIGURATION_TYPE_UNSUPPORTED
from xrLocateViews if the specified XrViewConfigurationType is
not enabled for the running session using
XrSecondaryViewConfigurationSessionBeginInfoMSFT when calling
xrBeginSession.
If the view configuration is supported but not active, as indicated in
XrSecondaryViewConfigurationFrameStateMSFT, xrLocateViews will
successfully return, but the resulting XrViewState may have
XR_VIEW_STATE_ORIENTATION_TRACKED_BIT and
XR_VIEW_STATE_POSITION_TRACKED_BIT unset.
12.168.7. Submit composition layers to secondary view configurations
The application should submit layers each frame for all active secondary view configurations using the xrEndFrame function, by chaining the XrSecondaryViewConfigurationFrameEndInfoMSFT structure to the next pointer of XrFrameEndInfo structure.
The XrSecondaryViewConfigurationFrameEndInfoMSFT structure is defined as:
// Provided by XR_MSFT_secondary_view_configuration
typedef struct XrSecondaryViewConfigurationFrameEndInfoMSFT {
XrStructureType type;
const void* next;
uint32_t viewConfigurationCount;
const XrSecondaryViewConfigurationLayerInfoMSFT* viewConfigurationLayersInfo;
} XrSecondaryViewConfigurationFrameEndInfoMSFT;
The view configuration type in each
XrSecondaryViewConfigurationLayerInfoMSFT must be one of the view
configurations enabled when calling xrBeginSession in
XrSecondaryViewConfigurationSessionBeginInfoMSFT, or else the runtime
must return error
XR_ERROR_SECONDARY_VIEW_CONFIGURATION_TYPE_NOT_ENABLED_MSFT.
The view configuration type in each
XrSecondaryViewConfigurationLayerInfoMSFT must not be the primary view
configuration in this session, or else the runtime must return error
XR_ERROR_LAYER_INVALID.
The primary view configuration layers continue to be submitted through
XrFrameEndInfo directly.
If the view configuration is not active, as indicated in XrSecondaryViewConfigurationFrameStateMSFT, the composition layers submitted to this view configuration may be ignored by the runtime. Applications should avoid rendering into secondary views when the view configuration is inactive.
The application should submit an XrSecondaryViewConfigurationLayerInfoMSFT in XrSecondaryViewConfigurationFrameEndInfoMSFT for each active secondary view configuration type when calling xrEndFrame.
The XrSecondaryViewConfigurationLayerInfoMSFT structure is defined as:
// Provided by XR_MSFT_secondary_view_configuration
typedef struct XrSecondaryViewConfigurationLayerInfoMSFT {
XrStructureType type;
const void* next;
XrViewConfigurationType viewConfigurationType;
XrEnvironmentBlendMode environmentBlendMode;
uint32_t layerCount;
const XrCompositionLayerBaseHeader* const* layers;
} XrSecondaryViewConfigurationLayerInfoMSFT;
This structure is similar to the XrFrameEndInfo structure, with an extra XrViewConfigurationType field to specify the view configuration for which the submitted layers will be rendered.
The application should render its content for both the primary and
secondary view configurations using the same
XrFrameState::predictedDisplayTime reported by
xrWaitFrame.
The runtime must treat both the primary views and secondary views as being
submitted for the same XrFrameEndInfo::displayTime specified in the
call to xrEndFrame.
For layers such as quad layers whose content is identical across view configurations, the application can submit the same XrCompositionLayerBaseHeader structures to multiple view configurations in the same xrEndFrame function call.
For each frame, the application should only render and submit layers for the secondary view configurations that were active that frame, as indicated in the XrSecondaryViewConfigurationFrameStateMSFT filled in for that frame’s xrWaitFrame call. The runtime must ignore composition layers submitted for an inactive view configuration.
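The chaining described above can be sketched as follows. This is a non-normative example: projLayer and secondaryLayer stand in for composition layers the application has already prepared this frame, and the first-person-observer configuration from XR_MSFT_first_person_observer is assumed to be the active secondary view configuration:

```c
// Sketch: submit layers for one active secondary view configuration by
// chaining XrSecondaryViewConfigurationFrameEndInfoMSFT to XrFrameEndInfo.
const XrCompositionLayerBaseHeader* secondaryLayers[] = {
    (const XrCompositionLayerBaseHeader*)&secondaryLayer};

XrSecondaryViewConfigurationLayerInfoMSFT secondaryLayerInfo = {
    XR_TYPE_SECONDARY_VIEW_CONFIGURATION_LAYER_INFO_MSFT};
secondaryLayerInfo.viewConfigurationType =
    XR_VIEW_CONFIGURATION_TYPE_SECONDARY_MONO_FIRST_PERSON_OBSERVER_MSFT;
secondaryLayerInfo.environmentBlendMode = XR_ENVIRONMENT_BLEND_MODE_OPAQUE;
secondaryLayerInfo.layerCount = 1;
secondaryLayerInfo.layers = secondaryLayers;

XrSecondaryViewConfigurationFrameEndInfoMSFT secondaryFrameEndInfo = {
    XR_TYPE_SECONDARY_VIEW_CONFIGURATION_FRAME_END_INFO_MSFT};
secondaryFrameEndInfo.viewConfigurationCount = 1;
secondaryFrameEndInfo.viewConfigurationLayersInfo = &secondaryLayerInfo;

const XrCompositionLayerBaseHeader* primaryLayers[] = {
    (const XrCompositionLayerBaseHeader*)&projLayer};
XrFrameEndInfo frameEndInfo = {XR_TYPE_FRAME_END_INFO};
frameEndInfo.next = &secondaryFrameEndInfo;  // chain the secondary layer info
frameEndInfo.displayTime = predictedDisplayTime;
frameEndInfo.environmentBlendMode = XR_ENVIRONMENT_BLEND_MODE_OPAQUE;
frameEndInfo.layerCount = 1;
frameEndInfo.layers = primaryLayers;
xrEndFrame(session, &frameEndInfo);
```

Note that the primary layers continue to flow through XrFrameEndInfo directly; only the secondary layers go through the chained structure.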
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_SECONDARY_VIEW_CONFIGURATION_SESSION_BEGIN_INFO_MSFT -
XR_TYPE_SECONDARY_VIEW_CONFIGURATION_STATE_MSFT -
XR_TYPE_SECONDARY_VIEW_CONFIGURATION_FRAME_STATE_MSFT -
XR_TYPE_SECONDARY_VIEW_CONFIGURATION_FRAME_END_INFO_MSFT -
XR_TYPE_SECONDARY_VIEW_CONFIGURATION_LAYER_INFO_MSFT -
XR_ERROR_SECONDARY_VIEW_CONFIGURATION_TYPE_NOT_ENABLED_MSFT
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2019-07-30 (Yin Li)
-
Initial extension description
-
12.169. XR_MSFT_spatial_anchor
- Name String
-
XR_MSFT_spatial_anchor - Extension Type
-
Instance extension
- Registered Extension Number
-
40
- Revision
-
2
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
Overview
This extension allows an application to create a spatial anchor, an arbitrary freespace point in the user’s physical environment that will then be tracked by the runtime. The runtime should then adjust the position and orientation of that anchor’s origin over time as needed, independently of all other spaces and anchors, to ensure that it maintains its original mapping to the real world.
XR_DEFINE_HANDLE(XrSpatialAnchorMSFT)
Spatial anchors are often used in combination with an UNBOUNDED_MSFT
reference space.
UNBOUNDED_MSFT reference spaces adjust their origin as necessary to keep
the viewer’s coordinates relative to the space’s origin stable.
Such adjustments maintain the visual stability of content currently near the
viewer, but may cause content placed far from the viewer to drift in its
alignment to the real world by the time the user moves close again.
By creating an XrSpatialAnchorMSFT where a piece of content is placed and
then always rendering that content relative to its anchor’s space, an
application can ensure that each piece of content stays at a fixed location
in the environment.
The xrCreateSpatialAnchorMSFT function is defined as:
// Provided by XR_MSFT_spatial_anchor
XrResult xrCreateSpatialAnchorMSFT(
XrSession session,
const XrSpatialAnchorCreateInfoMSFT* createInfo,
XrSpatialAnchorMSFT* anchor);
Creates an XrSpatialAnchorMSFT handle representing a spatial anchor
that will track a fixed location in the physical world over time.
That real-world location is specified by the position and orientation of the
specified XrSpatialAnchorCreateInfoMSFT::pose within
XrSpatialAnchorCreateInfoMSFT::space at
XrSpatialAnchorCreateInfoMSFT::time.
The runtime must avoid long blocking operations, such as networking or disk I/O, in the xrCreateSpatialAnchorMSFT function, so the application may safely call it from a UI thread. However, the created anchor handle may not be immediately ready for certain operations: for example, the corresponding anchor space may not yet return a valid location, and the anchor may not yet have been saved successfully in the anchor store.
If XrSpatialAnchorCreateInfoMSFT::space cannot be located
relative to the environment at the moment of the call to
xrCreateSpatialAnchorMSFT, the runtime must return
XR_ERROR_CREATE_SPATIAL_ANCHOR_FAILED_MSFT.
After the anchor is created, the runtime should then adjust its position
and orientation over time relative to other spaces so as to maintain maximum
alignment to its original real-world location, even if that changes the
anchor’s relationship to the original
XrSpatialAnchorCreateInfoMSFT::space used to initialize it.
The XrSpatialAnchorCreateInfoMSFT structure is defined as:
typedef struct XrSpatialAnchorCreateInfoMSFT {
XrStructureType type;
const void* next;
XrSpace space;
XrPosef pose;
XrTime time;
} XrSpatialAnchorCreateInfoMSFT;
The xrCreateSpatialAnchorSpaceMSFT function is defined as:
// Provided by XR_MSFT_spatial_anchor
XrResult xrCreateSpatialAnchorSpaceMSFT(
XrSession session,
const XrSpatialAnchorSpaceCreateInfoMSFT* createInfo,
XrSpace* space);
Creates an XrSpace handle based on a spatial anchor. The application can provide an XrPosef to define the position and orientation of the new space’s origin relative to the anchor’s natural origin.
Multiple XrSpace handles may exist for a given XrSpatialAnchorMSFT simultaneously, up to some limit imposed by the runtime. The XrSpace handle must be eventually freed via the xrDestroySpace function or by destroying the parent XrSpatialAnchorMSFT handle.
The XrSpatialAnchorSpaceCreateInfoMSFT structure is defined as:
typedef struct XrSpatialAnchorSpaceCreateInfoMSFT {
XrStructureType type;
const void* next;
XrSpatialAnchorMSFT anchor;
XrPosef poseInAnchorSpace;
} XrSpatialAnchorSpaceCreateInfoMSFT;
The xrDestroySpatialAnchorMSFT function is defined as:
// Provided by XR_MSFT_spatial_anchor
XrResult xrDestroySpatialAnchorMSFT(
XrSpatialAnchorMSFT anchor);
XrSpatialAnchorMSFT handles are destroyed using xrDestroySpatialAnchorMSFT. By destroying an anchor, the runtime can stop spending resources used to maintain tracking for that anchor’s origin.
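A typical anchor lifecycle, sketched non-normatively below, creates the anchor at a pose in an application space, derives an XrSpace from it for rendering, and destroys both when the content is removed. appSpace, placementPose, and predictedDisplayTime are assumed application state, and error handling is omitted for brevity:

```c
// Sketch: create an anchor, derive a space from it, then clean up.
XrSpatialAnchorCreateInfoMSFT anchorCreateInfo = {
    XR_TYPE_SPATIAL_ANCHOR_CREATE_INFO_MSFT};
anchorCreateInfo.space = appSpace;
anchorCreateInfo.pose = placementPose;        // where the content was placed
anchorCreateInfo.time = predictedDisplayTime;

XrSpatialAnchorMSFT anchor = XR_NULL_HANDLE;
if (XR_SUCCEEDED(xrCreateSpatialAnchorMSFT(session, &anchorCreateInfo,
                                           &anchor))) {
    XrSpatialAnchorSpaceCreateInfoMSFT spaceCreateInfo = {
        XR_TYPE_SPATIAL_ANCHOR_SPACE_CREATE_INFO_MSFT};
    spaceCreateInfo.anchor = anchor;
    spaceCreateInfo.poseInAnchorSpace.orientation.w = 1.0f;  // identity pose

    XrSpace anchorSpace = XR_NULL_HANDLE;
    xrCreateSpatialAnchorSpaceMSFT(session, &spaceCreateInfo, &anchorSpace);

    // ... render the content relative to anchorSpace each frame ...

    xrDestroySpace(anchorSpace);
    xrDestroySpatialAnchorMSFT(anchor);
}
```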
New Object Types
New Flag Types
New Enum Constants
XrObjectType enumeration is extended with:
-
XR_OBJECT_TYPE_SPATIAL_ANCHOR_MSFT
XrStructureType enumeration is extended with:
-
XR_TYPE_SPATIAL_ANCHOR_CREATE_INFO_MSFT -
XR_TYPE_SPATIAL_ANCHOR_SPACE_CREATE_INFO_MSFT
XrResult enumeration is extended with:
-
XR_ERROR_CREATE_SPATIAL_ANCHOR_FAILED_MSFT
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2019-07-30 (Alex Turner)
-
Initial extension description
-
-
Revision 2, 2021-06-02 (Rylie Pavlik, Collabora, Ltd.)
-
Note that the parameter to
xrDestroySpatialAnchorMSFT must be externally synchronized
-
12.170. XR_MSFT_spatial_anchor_persistence
- Name String
-
XR_MSFT_spatial_anchor_persistence - Extension Type
-
Instance extension
- Registered Extension Number
-
143
- Revision
-
2
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2021-07-15
- IP Status
-
No known IP claims.
- Contributors
-
Lachlan Ford, Microsoft
Yin Li, Microsoft
Norman Pohl, Microsoft
Alex Turner, Microsoft
Bryce Hutchings, Microsoft
12.170.1. Overview
This extension allows spatial anchors to be persisted and retrieved across
application sessions on a device.
Spatial anchors persisted during an application session on a device can only
be retrieved during sessions of that same application on the same device.
This extension requires XR_MSFT_spatial_anchor to also be enabled.
12.170.2. Spatial Anchor Store Connection
The XrSpatialAnchorStoreConnectionMSFT handle represents a connection to the spatial anchor store and is used by the application to perform operations on the spatial anchor store such as:
-
Persisting and unpersisting of spatial anchors.
-
Enumeration of currently persisted anchors.
-
Clearing the spatial anchor store of all anchors.
// Provided by XR_MSFT_spatial_anchor_persistence
XR_DEFINE_HANDLE(XrSpatialAnchorStoreConnectionMSFT)
The application can use the xrCreateSpatialAnchorStoreConnectionMSFT function to create a handle to the spatial anchor store. The application can use this handle to interact with the spatial anchor store in order to persist anchors across application sessions.
The xrCreateSpatialAnchorStoreConnectionMSFT function may be a slow operation and therefore should be invoked from a non-timing critical thread.
// Provided by XR_MSFT_spatial_anchor_persistence
XrResult xrCreateSpatialAnchorStoreConnectionMSFT(
XrSession session,
XrSpatialAnchorStoreConnectionMSFT* spatialAnchorStore);
The application can use the xrDestroySpatialAnchorStoreConnectionMSFT function to destroy an anchor store connection.
// Provided by XR_MSFT_spatial_anchor_persistence
XrResult xrDestroySpatialAnchorStoreConnectionMSFT(
XrSpatialAnchorStoreConnectionMSFT spatialAnchorStore);
12.170.3. Persist Spatial Anchor
The application can use the xrPersistSpatialAnchorMSFT function to
persist a spatial anchor in the spatial anchor store for this application.
The given
XrSpatialAnchorPersistenceInfoMSFT::spatialAnchorPersistenceName
will be the string to retrieve the spatial anchor from the Spatial Anchor
store or subsequently remove the record of this spatial anchor from the
store.
This name will uniquely identify the spatial anchor for the current
application.
If there is already a spatial anchor of the same name persisted in the
spatial anchor store, the existing spatial anchor will be replaced and
xrPersistSpatialAnchorMSFT must return XR_SUCCESS.
// Provided by XR_MSFT_spatial_anchor_persistence
XrResult xrPersistSpatialAnchorMSFT(
XrSpatialAnchorStoreConnectionMSFT spatialAnchorStore,
const XrSpatialAnchorPersistenceInfoMSFT* spatialAnchorPersistenceInfo);
The XrSpatialAnchorPersistenceNameMSFT structure is the name
associated with the XrSpatialAnchorMSFT in the spatial anchor store.
It is used to perform persist and unpersist operations on a name in the
spatial anchor store.
The XrSpatialAnchorPersistenceNameMSFT structure is defined as:
// Provided by XR_MSFT_spatial_anchor_persistence
typedef struct XrSpatialAnchorPersistenceNameMSFT {
char name[XR_MAX_SPATIAL_ANCHOR_NAME_SIZE_MSFT];
} XrSpatialAnchorPersistenceNameMSFT;
If an XrSpatialAnchorPersistenceNameMSFT with an empty name
value is passed to any function as a parameter, that function must return
XR_ERROR_SPATIAL_ANCHOR_NAME_INVALID_MSFT.
The XrSpatialAnchorPersistenceInfoMSFT structure is defined as:
// Provided by XR_MSFT_spatial_anchor_persistence
typedef struct XrSpatialAnchorPersistenceInfoMSFT {
XrStructureType type;
const void* next;
XrSpatialAnchorPersistenceNameMSFT spatialAnchorPersistenceName;
XrSpatialAnchorMSFT spatialAnchor;
} XrSpatialAnchorPersistenceInfoMSFT;
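Persisting an anchor under a name can be sketched as follows. This is a non-normative example; spatialAnchorStore and anchor are assumed to have been created already, and the name "desk_anchor" is purely illustrative:

```c
// Sketch: persist an existing anchor under an application-chosen name.
XrSpatialAnchorPersistenceInfoMSFT persistInfo = {
    XR_TYPE_SPATIAL_ANCHOR_PERSISTENCE_INFO_MSFT};
strncpy(persistInfo.spatialAnchorPersistenceName.name, "desk_anchor",
        XR_MAX_SPATIAL_ANCHOR_NAME_SIZE_MSFT - 1);  // keep null-terminated
persistInfo.spatialAnchor = anchor;
xrPersistSpatialAnchorMSFT(spatialAnchorStore, &persistInfo);
```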
The application can use the
xrEnumeratePersistedSpatialAnchorNamesMSFT function to enumerate the
names of all spatial anchors currently persisted in the spatial anchor store
for this application.
This function follows the two-call
idiom for filling the spatialAnchorNames array.
// Provided by XR_MSFT_spatial_anchor_persistence
XrResult xrEnumeratePersistedSpatialAnchorNamesMSFT(
XrSpatialAnchorStoreConnectionMSFT spatialAnchorStore,
uint32_t spatialAnchorNameCapacityInput,
uint32_t* spatialAnchorNameCountOutput,
XrSpatialAnchorPersistenceNameMSFT* spatialAnchorNames);
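The two-call idiom for this function can be sketched non-normatively as follows: the first call with a zero capacity queries the required count, the application allocates, and the second call fills the array. Error handling is omitted:

```c
// Sketch: enumerate persisted anchor names with the two-call idiom.
uint32_t nameCount = 0;
xrEnumeratePersistedSpatialAnchorNamesMSFT(spatialAnchorStore, 0,
                                           &nameCount, NULL);
XrSpatialAnchorPersistenceNameMSFT* names =
    calloc(nameCount, sizeof(XrSpatialAnchorPersistenceNameMSFT));
xrEnumeratePersistedSpatialAnchorNamesMSFT(spatialAnchorStore, nameCount,
                                           &nameCount, names);
for (uint32_t i = 0; i < nameCount; ++i) {
    // names[i].name is a null-terminated string naming a persisted anchor.
}
free(names);
```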
The application can use the
xrCreateSpatialAnchorFromPersistedNameMSFT function to create a
XrSpatialAnchorMSFT from the spatial anchor store.
If the
XrSpatialAnchorFromPersistedAnchorCreateInfoMSFT::spatialAnchorPersistenceName
provided does not correspond to a currently stored anchor (i.e. the list of
spatial anchor names returned from
xrEnumeratePersistedSpatialAnchorNamesMSFT), the function must return
XR_ERROR_SPATIAL_ANCHOR_NAME_NOT_FOUND_MSFT.
// Provided by XR_MSFT_spatial_anchor_persistence
XrResult xrCreateSpatialAnchorFromPersistedNameMSFT(
XrSession session,
const XrSpatialAnchorFromPersistedAnchorCreateInfoMSFT* spatialAnchorCreateInfo,
XrSpatialAnchorMSFT* spatialAnchor);
The XrSpatialAnchorFromPersistedAnchorCreateInfoMSFT structure is defined as:
// Provided by XR_MSFT_spatial_anchor_persistence
typedef struct XrSpatialAnchorFromPersistedAnchorCreateInfoMSFT {
XrStructureType type;
const void* next;
XrSpatialAnchorStoreConnectionMSFT spatialAnchorStore;
XrSpatialAnchorPersistenceNameMSFT spatialAnchorPersistenceName;
} XrSpatialAnchorFromPersistedAnchorCreateInfoMSFT;
The spatialAnchorPersistenceName is a character array of maximum size
XR_MAX_SPATIAL_ANCHOR_NAME_SIZE_MSFT, which must include a null
terminator and must not be empty (i.e. the first element is the null
terminator).
If an empty spatialAnchorPersistenceName value is passed to any
function as a parameter, that function must return
XR_ERROR_SPATIAL_ANCHOR_NAME_INVALID_MSFT.
The application can use the xrUnpersistSpatialAnchorMSFT function to
remove the record of the anchor in the spatial anchor store.
This operation will not affect any XrSpatialAnchorMSFT handles
previously created.
If the spatialAnchorPersistenceName provided does not correspond to a
currently stored anchor, the function must return
XR_ERROR_SPATIAL_ANCHOR_NAME_NOT_FOUND_MSFT.
// Provided by XR_MSFT_spatial_anchor_persistence
XrResult xrUnpersistSpatialAnchorMSFT(
XrSpatialAnchorStoreConnectionMSFT spatialAnchorStore,
const XrSpatialAnchorPersistenceNameMSFT* spatialAnchorPersistenceName);
The application can use the xrClearSpatialAnchorStoreMSFT function to remove all spatial anchors from the spatial anchor store for this application. The function only removes the record of the spatial anchors in the store but does not affect any XrSpatialAnchorMSFT handles previously loaded in the current session.
// Provided by XR_MSFT_spatial_anchor_persistence
XrResult xrClearSpatialAnchorStoreMSFT(
XrSpatialAnchorStoreConnectionMSFT spatialAnchorStore);
New Object Types
New Flag Types
New Enum Constants
-
XR_TYPE_SPATIAL_ANCHOR_PERSISTENCE_INFO_MSFT -
XR_TYPE_SPATIAL_ANCHOR_FROM_PERSISTED_ANCHOR_CREATE_INFO_MSFT -
XR_ERROR_SPATIAL_ANCHOR_NAME_NOT_FOUND_MSFT -
XR_ERROR_SPATIAL_ANCHOR_NAME_INVALID_MSFT -
XR_MAX_SPATIAL_ANCHOR_NAME_SIZE_MSFT
New Enums
New Structures
New Functions
Version History
-
Revision 1, 2021-02-19 (Lachlan Ford)
-
Initial extension proposal
-
-
Revision 2, 2021-07-15 (Yin Li)
-
Extension proposal to OpenXR working group
-
12.171. XR_MSFT_spatial_graph_bridge
- Name String
-
XR_MSFT_spatial_graph_bridge - Extension Type
-
Instance extension
- Registered Extension Number
-
50
- Revision
-
2
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Contributors
-
Darryl Gough, Microsoft
Yin Li, Microsoft
Alex Turner, Microsoft
David Fields, Microsoft
Overview
This extension enables applications to interop between XrSpace handles and other Windows Mixed Reality device platform libraries or APIs. These libraries represent a spatially tracked point, also known as a "spatial graph node", with a GUID value. This extension enables applications to create XrSpace handles from spatial graph nodes. Applications can also try to get a spatial graph node from an XrSpace handle.
12.171.1. Create XrSpace from Spatial Graph Node
The xrCreateSpatialGraphNodeSpaceMSFT function creates an XrSpace handle for a given spatial graph node type and ID.
// Provided by XR_MSFT_spatial_graph_bridge
XrResult xrCreateSpatialGraphNodeSpaceMSFT(
XrSession session,
const XrSpatialGraphNodeSpaceCreateInfoMSFT* createInfo,
XrSpace* space);
The XrSpatialGraphNodeSpaceCreateInfoMSFT structure is used with xrCreateSpatialGraphNodeSpaceMSFT to create an XrSpace handle for a given spatial node type and node ID.
// Provided by XR_MSFT_spatial_graph_bridge
typedef struct XrSpatialGraphNodeSpaceCreateInfoMSFT {
XrStructureType type;
const void* next;
XrSpatialGraphNodeTypeMSFT nodeType;
uint8_t nodeId[XR_GUID_SIZE_MSFT];
XrPosef pose;
} XrSpatialGraphNodeSpaceCreateInfoMSFT;
The enum XrSpatialGraphNodeTypeMSFT describes the types of spatial graph nodes.
// Provided by XR_MSFT_spatial_graph_bridge
typedef enum XrSpatialGraphNodeTypeMSFT {
XR_SPATIAL_GRAPH_NODE_TYPE_STATIC_MSFT = 1,
XR_SPATIAL_GRAPH_NODE_TYPE_DYNAMIC_MSFT = 2,
XR_SPATIAL_GRAPH_NODE_TYPE_MAX_ENUM_MSFT = 0x7FFFFFFF
} XrSpatialGraphNodeTypeMSFT;
There are two types of spatial graph nodes: static and dynamic.
Static spatial nodes track the pose of a fixed location in the world
relative to reference spaces.
The tracking of static nodes may slowly adjust the pose over time for
better accuracy but the pose is relatively stable in the short term, such as
between rendering frames.
For example, a QR code tracking library can use a static node to represent
the location of the tracked QR code.
Static spatial nodes are represented by
XR_SPATIAL_GRAPH_NODE_TYPE_STATIC_MSFT.
Dynamic spatial nodes track the pose of a physical object that moves
continuously relative to reference spaces.
The pose of dynamic spatial nodes can be very different within the duration
of a rendering frame.
It is important for the application to use the correct timestamp to query
the space location using xrLocateSpace.
For example, a color camera mounted in front of an HMD is also tracked by the
HMD, so a web camera library can use a dynamic node to represent the camera
location.
Dynamic spatial nodes are represented by
XR_SPATIAL_GRAPH_NODE_TYPE_DYNAMIC_MSFT.
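Wrapping a spatial graph node GUID in an XrSpace can be sketched as follows. This non-normative example assumes nodeGuid is a 16-byte GUID obtained from another Windows Mixed Reality library, such as a QR code tracker, and uses a static node:

```c
// Sketch: create an XrSpace from a spatial graph node GUID.
XrSpatialGraphNodeSpaceCreateInfoMSFT createInfo = {
    XR_TYPE_SPATIAL_GRAPH_NODE_SPACE_CREATE_INFO_MSFT};
createInfo.nodeType = XR_SPATIAL_GRAPH_NODE_TYPE_STATIC_MSFT;
memcpy(createInfo.nodeId, nodeGuid, XR_GUID_SIZE_MSFT);
createInfo.pose.orientation.w = 1.0f;  // identity pose at the node origin

XrSpace nodeSpace = XR_NULL_HANDLE;
xrCreateSpatialGraphNodeSpaceMSFT(session, &createInfo, &nodeSpace);
// nodeSpace can now be located with xrLocateSpace like any other XrSpace.
```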
12.171.2. Create Spatial Graph Node Binding from XrSpace
The XrSpatialGraphNodeBindingMSFT handle represents a binding to a spatial graph node. This handle allows an application to get a spatial graph node GUID from an XrSpace to use in other Windows Mixed Reality device platform libraries or APIs.
The runtime must remember the spatial graph node and track it for the lifetime of the XrSpatialGraphNodeBindingMSFT handle. When the XrSpatialGraphNodeBindingMSFT handle is destroyed, the runtime’s tracking system may forget the spatial graph node and stop tracking it.
XR_DEFINE_HANDLE(XrSpatialGraphNodeBindingMSFT)
The xrTryCreateSpatialGraphStaticNodeBindingMSFT function tries to create a binding to the best spatial graph static node relative to the given location and returns an XrSpatialGraphNodeBindingMSFT handle.
// Provided by XR_MSFT_spatial_graph_bridge
XrResult xrTryCreateSpatialGraphStaticNodeBindingMSFT(
XrSession session,
const XrSpatialGraphStaticNodeBindingCreateInfoMSFT* createInfo,
XrSpatialGraphNodeBindingMSFT* nodeBinding);
The runtime may return XR_SUCCESS and set nodeBinding to
XR_NULL_HANDLE if it is unable to create a spatial graph static node
binding.
This may happen when the given XrSpace cannot be properly tracked at
the moment.
The application can retry creating the XrSpatialGraphNodeBindingMSFT
handle again after a reasonable period of time when tracking is regained.
The xrTryCreateSpatialGraphStaticNodeBindingMSFT function may be a slow operation and therefore should be invoked from a non-timing critical thread.
XrSpatialGraphStaticNodeBindingCreateInfoMSFT is an input structure for xrTryCreateSpatialGraphStaticNodeBindingMSFT.
// Provided by XR_MSFT_spatial_graph_bridge
typedef struct XrSpatialGraphStaticNodeBindingCreateInfoMSFT {
XrStructureType type;
const void* next;
XrSpace space;
XrPosef poseInSpace;
XrTime time;
} XrSpatialGraphStaticNodeBindingCreateInfoMSFT;
The xrDestroySpatialGraphNodeBindingMSFT function releases the
nodeBinding and the underlying resources.
// Provided by XR_MSFT_spatial_graph_bridge
XrResult xrDestroySpatialGraphNodeBindingMSFT(
XrSpatialGraphNodeBindingMSFT nodeBinding);
Get spatial graph node binding properties
The xrGetSpatialGraphNodeBindingPropertiesMSFT function retrieves the spatial graph node GUID and the pose in the node space from an XrSpatialGraphNodeBindingMSFT handle.
// Provided by XR_MSFT_spatial_graph_bridge
XrResult xrGetSpatialGraphNodeBindingPropertiesMSFT(
XrSpatialGraphNodeBindingMSFT nodeBinding,
const XrSpatialGraphNodeBindingPropertiesGetInfoMSFT* getInfo,
XrSpatialGraphNodeBindingPropertiesMSFT* properties);
XrSpatialGraphNodeBindingPropertiesGetInfoMSFT is an input structure for xrGetSpatialGraphNodeBindingPropertiesMSFT.
// Provided by XR_MSFT_spatial_graph_bridge
typedef struct XrSpatialGraphNodeBindingPropertiesGetInfoMSFT {
XrStructureType type;
const void* next;
} XrSpatialGraphNodeBindingPropertiesGetInfoMSFT;
XrSpatialGraphNodeBindingPropertiesMSFT is an output structure for xrGetSpatialGraphNodeBindingPropertiesMSFT.
// Provided by XR_MSFT_spatial_graph_bridge
typedef struct XrSpatialGraphNodeBindingPropertiesMSFT {
XrStructureType type;
void* next;
uint8_t nodeId[XR_GUID_SIZE_MSFT];
XrPosef poseInNodeSpace;
} XrSpatialGraphNodeBindingPropertiesMSFT;
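The binding workflow above can be sketched non-normatively as follows: try to create a static node binding at a pose in an application space, then read back the node GUID for use with other platform APIs. appSpace and predictedDisplayTime are assumed application state:

```c
// Sketch: try-create a static node binding, then query its properties.
XrSpatialGraphStaticNodeBindingCreateInfoMSFT bindingCreateInfo = {
    XR_TYPE_SPATIAL_GRAPH_STATIC_NODE_BINDING_CREATE_INFO_MSFT};
bindingCreateInfo.space = appSpace;
bindingCreateInfo.poseInSpace.orientation.w = 1.0f;  // identity pose
bindingCreateInfo.time = predictedDisplayTime;

XrSpatialGraphNodeBindingMSFT nodeBinding = XR_NULL_HANDLE;
XrResult result = xrTryCreateSpatialGraphStaticNodeBindingMSFT(
    session, &bindingCreateInfo, &nodeBinding);
// XR_SUCCESS with a null handle means tracking was unavailable; retry later.
if (XR_SUCCEEDED(result) && nodeBinding != XR_NULL_HANDLE) {
    XrSpatialGraphNodeBindingPropertiesGetInfoMSFT getInfo = {
        XR_TYPE_SPATIAL_GRAPH_NODE_BINDING_PROPERTIES_GET_INFO_MSFT};
    XrSpatialGraphNodeBindingPropertiesMSFT properties = {
        XR_TYPE_SPATIAL_GRAPH_NODE_BINDING_PROPERTIES_MSFT};
    xrGetSpatialGraphNodeBindingPropertiesMSFT(nodeBinding, &getInfo,
                                               &properties);
    // properties.nodeId holds the GUID; properties.poseInNodeSpace gives the
    // requested point's pose relative to the node's origin.
    xrDestroySpatialGraphNodeBindingMSFT(nodeBinding);
}
```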
New Object Types
New Flag Types
New Enum Constants
XrObjectType enumeration is extended with:
-
XR_OBJECT_TYPE_SPATIAL_GRAPH_NODE_BINDING_MSFT
XrStructureType enumeration is extended with:
-
XR_TYPE_SPATIAL_GRAPH_NODE_SPACE_CREATE_INFO_MSFT -
XR_TYPE_SPATIAL_GRAPH_STATIC_NODE_BINDING_CREATE_INFO_MSFT -
XR_TYPE_SPATIAL_GRAPH_NODE_BINDING_PROPERTIES_GET_INFO_MSFT -
XR_TYPE_SPATIAL_GRAPH_NODE_BINDING_PROPERTIES_MSFT
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2019-10-31 (Yin Li)
-
Initial extension description
-
-
Revision 2, 2022-01-13 (Darryl Gough)
-
Added Spatial Graph Node Binding handle.
-
12.172. XR_MSFT_unbounded_reference_space
- Name String
-
XR_MSFT_unbounded_reference_space - Extension Type
-
Instance extension
- Registered Extension Number
-
39
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
Overview
This extension allows an application to create an UNBOUNDED_MSFT reference
space.
This reference space enables the viewer to move freely through a complex
environment, often many meters from where they started, while always
optimizing for coordinate system stability near the viewer.
This is done by allowing the origin of the reference space to drift as
necessary to keep the viewer’s coordinates relative to the space’s origin
stable.
To create an UNBOUNDED_MSFT reference space, the application can pass
XR_REFERENCE_SPACE_TYPE_UNBOUNDED_MSFT to
xrCreateReferenceSpace.
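A minimal, non-normative sketch of that call, using an identity origin offset, might look like this:

```c
// Sketch: create an UNBOUNDED_MSFT reference space.
// Requires XR_MSFT_unbounded_reference_space to be enabled on the instance.
XrReferenceSpaceCreateInfo createInfo = {XR_TYPE_REFERENCE_SPACE_CREATE_INFO};
createInfo.referenceSpaceType = XR_REFERENCE_SPACE_TYPE_UNBOUNDED_MSFT;
createInfo.poseInReferenceSpace.orientation.w = 1.0f;  // identity pose

XrSpace unboundedSpace = XR_NULL_HANDLE;
xrCreateReferenceSpace(session, &createInfo, &unboundedSpace);
```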
The UNBOUNDED_MSFT reference space establishes a world-locked origin,
gravity-aligned to exclude pitch and roll, with +Y up, +X to the right, and
-Z forward.
This space begins with an arbitrary initial position and orientation, which
the runtime may define to be either the initial position at app launch or
some other initial zero position.
Unlike a STAGE reference space, the runtime may place the origin of an
UNBOUNDED_MSFT reference space at any height, rather than fixing it at the
floor.
This is because the viewer may move through various rooms and levels of
their environment, each of which has a different floor height.
Runtimes should not automatically adjust the position of the origin when
the viewer moves to a room with a different floor height.
UNBOUNDED_MSFT space is useful when an app needs to render world-scale
content that spans beyond the bounds of a single STAGE, for example, an
entire floor or multiple floors of a building.
An UNBOUNDED_MSFT space maintains stability near the viewer by slightly
adjusting its origin over time.
The runtime must not queue the XrEventDataReferenceSpaceChangePending
event in response to these minor adjustments.
When views, controllers or other spaces experience tracking loss relative to
the UNBOUNDED_MSFT space, runtimes should continue to provide inferred or
last-known position and orientation values.
These inferred poses can, for example, be based on neck model updates,
inertial dead reckoning, or a last-known position, so long as it is still
reasonable for the application to use that pose.
While a runtime is providing position data, it must continue to set
XR_SPACE_LOCATION_POSITION_VALID_BIT and
XR_VIEW_STATE_POSITION_VALID_BIT but it can clear
XR_SPACE_LOCATION_POSITION_TRACKED_BIT and
XR_VIEW_STATE_POSITION_TRACKED_BIT to indicate that the position is
inferred or last-known in this way.
When tracking is recovered, runtimes should snap the pose of other spaces
back into position relative to the UNBOUNDED_MSFT space’s original origin.
However, if tracking recovers into a new tracking volume in which the
original origin can no longer be located (e.g. the viewer moved through a
dark hallway and regained tracking in a new room), the runtime may recenter
the origin arbitrarily, for example moving the origin to coincide with the
viewer.
If such recentering occurs, the runtime must queue the
XrEventDataReferenceSpaceChangePending event with poseValid set
to false.
If the viewer moves far enough away from the origin of an UNBOUNDED_MSFT
reference space that floating point error would introduce noticeable error
when locating the viewer within that space, the runtime may recenter the
space’s origin to a new location closer to the viewer.
If such recentering occurs, the runtime must queue the
XrEventDataReferenceSpaceChangePending event with poseValid set
to true.
Runtimes must support the UNBOUNDED_MSFT reference space when this
extension is enabled.
New Object Types
New Flag Types
New Enum Constants
XrReferenceSpaceType enumeration is extended with:
-
XR_REFERENCE_SPACE_TYPE_UNBOUNDED_MSFT
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2019-07-30 (Alex Turner)
-
Initial extension description
-
12.173. XR_OCULUS_audio_device_guid
- Name String
-
XR_OCULUS_audio_device_guid - Extension Type
-
Instance extension
- Registered Extension Number
-
160
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
Overview
This extension enables the querying of audio device information associated with an OpenXR instance.
On Windows, there may be multiple audio devices available on the system. This extension allows applications to query the runtime for the appropriate audio devices for the active HMD.
New Object Types
New Flag Types
New Enum Constants
-
XR_MAX_AUDIO_DEVICE_STR_SIZE_OCULUS
New Enums
New Structures
New Functions
// Provided by XR_OCULUS_audio_device_guid
XrResult xrGetAudioOutputDeviceGuidOculus(
XrInstance instance,
wchar_t buffer[XR_MAX_AUDIO_DEVICE_STR_SIZE_OCULUS]);
// Provided by XR_OCULUS_audio_device_guid
XrResult xrGetAudioInputDeviceGuidOculus(
XrInstance instance,
wchar_t buffer[XR_MAX_AUDIO_DEVICE_STR_SIZE_OCULUS]);
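Usage is straightforward; the following non-normative sketch queries both device GUID strings, which on Windows identify the render and capture endpoints to use with the platform audio APIs:

```c
// Sketch: query the audio device GUID strings for the active HMD.
wchar_t outputGuid[XR_MAX_AUDIO_DEVICE_STR_SIZE_OCULUS];
wchar_t inputGuid[XR_MAX_AUDIO_DEVICE_STR_SIZE_OCULUS];
xrGetAudioOutputDeviceGuidOculus(instance, outputGuid);
xrGetAudioInputDeviceGuidOculus(instance, inputGuid);
```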
Issues
Version History
-
Revision 1, 2021-05-13 (John Kearney)
-
Initial extension description
-
12.174. XR_OCULUS_external_camera
- Name String
-
XR_OCULUS_external_camera - Extension Type
-
Instance extension
- Registered Extension Number
-
227
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
Overview
This extension enables the querying of external camera information for a session. This extension is intended to enable mixed reality capture support for applications.
This extension does not provide a mechanism for supplying external camera information to the runtime. If external camera information is not supplied to the runtime before using this extension, no camera information will be returned.
This API supports returning camera intrinsics and extrinsics:
-
Camera intrinsics are the attributes of the camera: resolution, field of view, etc.
-
Camera extrinsics are everything external to the camera: relative pose, attached to, etc.
-
We do not expect the camera intrinsics to change frequently. We expect the camera extrinsics to change frequently.
New Object Types
New Flag Types
typedef XrFlags64 XrExternalCameraStatusFlagsOCULUS;
// Flag bits for XrExternalCameraStatusFlagsOCULUS
static const XrExternalCameraStatusFlagsOCULUS XR_EXTERNAL_CAMERA_STATUS_CONNECTED_BIT_OCULUS = 0x00000001;
static const XrExternalCameraStatusFlagsOCULUS XR_EXTERNAL_CAMERA_STATUS_CALIBRATING_BIT_OCULUS = 0x00000002;
static const XrExternalCameraStatusFlagsOCULUS XR_EXTERNAL_CAMERA_STATUS_CALIBRATION_FAILED_BIT_OCULUS = 0x00000004;
static const XrExternalCameraStatusFlagsOCULUS XR_EXTERNAL_CAMERA_STATUS_CALIBRATED_BIT_OCULUS = 0x00000008;
static const XrExternalCameraStatusFlagsOCULUS XR_EXTERNAL_CAMERA_STATUS_CAPTURING_BIT_OCULUS = 0x00000010;
New Enum Constants
XR_MAX_EXTERNAL_CAMERA_NAME_SIZE_OCULUS defines the length of the
field XrExternalCameraOCULUS::name.
#define XR_MAX_EXTERNAL_CAMERA_NAME_SIZE_OCULUS 32
XrStructureType enumeration is extended with:
-
XR_TYPE_EXTERNAL_CAMERA_OCULUS
New Enums
// Provided by XR_OCULUS_external_camera
typedef enum XrExternalCameraAttachedToDeviceOCULUS {
XR_EXTERNAL_CAMERA_ATTACHED_TO_DEVICE_NONE_OCULUS = 0,
XR_EXTERNAL_CAMERA_ATTACHED_TO_DEVICE_HMD_OCULUS = 1,
XR_EXTERNAL_CAMERA_ATTACHED_TO_DEVICE_LTOUCH_OCULUS = 2,
XR_EXTERNAL_CAMERA_ATTACHED_TO_DEVICE_RTOUCH_OCULUS = 3,
XR_EXTERNAL_CAMERA_ATTACHED_TO_DEVICE_MAX_ENUM_OCULUS = 0x7FFFFFFF
} XrExternalCameraAttachedToDeviceOCULUS;
| Enum | Description |
|---|---|
| XR_EXTERNAL_CAMERA_ATTACHED_TO_DEVICE_NONE_OCULUS | External camera is at a fixed point in LOCAL space |
| XR_EXTERNAL_CAMERA_ATTACHED_TO_DEVICE_HMD_OCULUS | External camera is attached to the HMD |
| XR_EXTERNAL_CAMERA_ATTACHED_TO_DEVICE_LTOUCH_OCULUS | External camera is attached to a left Touch controller |
| XR_EXTERNAL_CAMERA_ATTACHED_TO_DEVICE_RTOUCH_OCULUS | External camera is attached to a right Touch controller |
New Structures
The XrExternalCameraIntrinsicsOCULUS structure is defined as:
// Provided by XR_OCULUS_external_camera
typedef struct XrExternalCameraIntrinsicsOCULUS {
XrTime lastChangeTime;
XrFovf fov;
float virtualNearPlaneDistance;
float virtualFarPlaneDistance;
XrExtent2Di imageSensorPixelResolution;
} XrExternalCameraIntrinsicsOCULUS;
The XrExternalCameraExtrinsicsOCULUS structure is defined as:
// Provided by XR_OCULUS_external_camera
typedef struct XrExternalCameraExtrinsicsOCULUS {
XrTime lastChangeTime;
XrExternalCameraStatusFlagsOCULUS cameraStatusFlags;
XrExternalCameraAttachedToDeviceOCULUS attachedToDevice;
XrPosef relativePose;
} XrExternalCameraExtrinsicsOCULUS;
The XrExternalCameraOCULUS structure is defined as:
// Provided by XR_OCULUS_external_camera
typedef struct XrExternalCameraOCULUS {
XrStructureType type;
const void* next;
char name[XR_MAX_EXTERNAL_CAMERA_NAME_SIZE_OCULUS];
XrExternalCameraIntrinsicsOCULUS intrinsics;
XrExternalCameraExtrinsicsOCULUS extrinsics;
} XrExternalCameraOCULUS;
New Functions
The xrEnumerateExternalCamerasOCULUS function enumerates all the external cameras supported by the runtime. It is defined as:
// Provided by XR_OCULUS_external_camera
XrResult xrEnumerateExternalCamerasOCULUS(
XrSession session,
uint32_t cameraCapacityInput,
uint32_t* cameraCountOutput,
XrExternalCameraOCULUS* cameras);
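This function follows the usual two-call enumeration pattern, sketched non-normatively below; note that each output element's type member should be set before the filling call:

```c
// Sketch: enumerate external cameras with the two-call idiom.
uint32_t cameraCount = 0;
xrEnumerateExternalCamerasOCULUS(session, 0, &cameraCount, NULL);
XrExternalCameraOCULUS* cameras =
    calloc(cameraCount, sizeof(XrExternalCameraOCULUS));
for (uint32_t i = 0; i < cameraCount; ++i) {
    cameras[i].type = XR_TYPE_EXTERNAL_CAMERA_OCULUS;  // set type per element
}
xrEnumerateExternalCamerasOCULUS(session, cameraCount, &cameraCount, cameras);
// cameras[i].name, .intrinsics, and .extrinsics describe each camera.
free(cameras);
```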
Issues
Version History
- Revision 1, 2022-08-31 (John Kearney)
  - Initial extension description
12.175. XR_OPPO_controller_interaction
- Name String: XR_OPPO_controller_interaction
- Extension Type: Instance extension
- Registered Extension Number: 454
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- API Interactions:
  - Interacts with XR_EXT_dpad_binding
  - Interacts with XR_EXT_hand_interaction
  - Interacts with XR_EXT_palm_pose
- Contributors:
  - Haomiao Jiang, OPPO
  - Buyi Xu, OPPO
  - Yebao Cai, OPPO
Overview
This extension defines a new interaction profile for the OPPO Controller, including but not limited to OPPO MR Glasses Controller.
OPPO Controller interaction profile
Interaction profile path:
- /interaction_profiles/oppo/mr_controller_oppo
Valid for user paths:
- /user/hand/left
- /user/hand/right
This interaction profile represents the input sources and haptics on the OPPO Controller.
Supported component paths:
- On /user/hand/left only:
  - …/input/x/click
  - …/input/x/touch
  - …/input/y/click
  - …/input/y/touch
  - …/input/menu/click
  - …/input/heartrate_oppo/value
- On /user/hand/right only:
  - …/input/a/click
  - …/input/a/touch
  - …/input/b/click
  - …/input/b/touch
  - …/input/home/click (may not be available for application use)
- …/input/squeeze/value
- …/input/trigger/touch
- …/input/trigger/value
- …/input/grip/pose
- …/input/aim/pose
- …/input/thumbstick/click
- …/input/thumbstick/touch
- …/input/thumbstick
- …/input/thumbstick/x
- …/input/thumbstick/y
- …/output/haptic
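Bindings for this profile are suggested through the core xrSuggestInteractionProfileBindings call, as with any interaction profile. A minimal sketch, assuming the actions have already been created (the action variable names are illustrative):

```cpp
XrInstance instance;                     // previously initialized
XrAction aimPoseAction, triggerAction;   // previously created
XrPath profilePath, aimPath, triggerPath;
CHK_XR(xrStringToPath(instance, "/interaction_profiles/oppo/mr_controller_oppo", &profilePath));
CHK_XR(xrStringToPath(instance, "/user/hand/right/input/aim/pose", &aimPath));
CHK_XR(xrStringToPath(instance, "/user/hand/right/input/trigger/value", &triggerPath));
XrActionSuggestedBinding bindings[2] = {{aimPoseAction, aimPath}, {triggerAction, triggerPath}};
XrInteractionProfileSuggestedBinding suggested{XR_TYPE_INTERACTION_PROFILE_SUGGESTED_BINDING};
suggested.interactionProfile = profilePath;
suggested.countSuggestedBindings = 2;
suggested.suggestedBindings = bindings;
CHK_XR(xrSuggestInteractionProfileBindings(instance, &suggested));
```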
New Identifiers
Input Path Descriptions
Version History
- Revision 1, Haomiao Jiang
  - Initial extension description
12.176. XR_QCOM_tracking_optimization_settings
- Name String: XR_QCOM_tracking_optimization_settings
- Extension Type: Instance extension
- Registered Extension Number: 307
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2022-06-02
- Contributors:
  - Daniel Guttenberg, Qualcomm
  - Martin Renschler, Qualcomm
  - Karthik Nagarajan, Qualcomm
Overview
This extension defines an API for the application to give optimization hints to the runtime for tracker domains.
For example, an application might be interested in tracking targets that are at a far distance from the camera which may increase tracking latency, while another application might be interested in minimizing power consumption at the cost of tracking accuracy. Targets are domains which are defined in XrTrackingOptimizationSettingsDomainQCOM.
This allows the application to tailor the tracking algorithms to specific use-cases and scene-scales in order to provide the best experience possible.
Summary: provide domain hints to the runtime about which parameters to optimize tracking for.
12.176.1. Setting Tracking Optimization Hints
Tracking optimization hints are expressed as XrTrackingOptimizationSettingsHintQCOM values and are applied to a domain defined by XrTrackingOptimizationSettingsDomainQCOM.
// Provided by XR_QCOM_tracking_optimization_settings
typedef enum XrTrackingOptimizationSettingsDomainQCOM {
XR_TRACKING_OPTIMIZATION_SETTINGS_DOMAIN_ALL_QCOM = 1,
XR_TRACKING_OPTIMIZATION_SETTINGS_DOMAIN_MAX_ENUM_QCOM = 0x7FFFFFFF
} XrTrackingOptimizationSettingsDomainQCOM;
// Provided by XR_QCOM_tracking_optimization_settings
typedef enum XrTrackingOptimizationSettingsHintQCOM {
XR_TRACKING_OPTIMIZATION_SETTINGS_HINT_NONE_QCOM = 0,
XR_TRACKING_OPTIMIZATION_SETTINGS_HINT_LONG_RANGE_PRIORIZATION_QCOM = 1,
XR_TRACKING_OPTIMIZATION_SETTINGS_HINT_CLOSE_RANGE_PRIORIZATION_QCOM = 2,
XR_TRACKING_OPTIMIZATION_SETTINGS_HINT_LOW_POWER_PRIORIZATION_QCOM = 3,
XR_TRACKING_OPTIMIZATION_SETTINGS_HINT_HIGH_POWER_PRIORIZATION_QCOM = 4,
XR_TRACKING_OPTIMIZATION_SETTINGS_HINT_MAX_ENUM_QCOM = 0x7FFFFFFF
} XrTrackingOptimizationSettingsHintQCOM;
The xrSetTrackingOptimizationSettingsHintQCOM function is defined as:
// Provided by XR_QCOM_tracking_optimization_settings
XrResult xrSetTrackingOptimizationSettingsHintQCOM(
XrSession session,
XrTrackingOptimizationSettingsDomainQCOM domain,
XrTrackingOptimizationSettingsHintQCOM hint);
The XR runtime behaves as if
XR_TRACKING_OPTIMIZATION_SETTINGS_HINT_NONE_QCOM was submitted if the
application does not provide a hint.
The XR runtime must return XR_ERROR_VALIDATION_FAILURE if the
application sets a domain or hint not part of
XrTrackingOptimizationSettingsDomainQCOM or
XrTrackingOptimizationSettingsHintQCOM.
A hint is typically set before a domain handle is created.
If hints are set more than once, whether from one session or from concurrent sessions, the runtime may accommodate the first hint it received and return XR_ERROR_HINT_ALREADY_SET_QCOM for any subsequent calls.
If the application destroys the active domain handle associated with the
hint, the runtime may behave as if
XR_TRACKING_OPTIMIZATION_SETTINGS_HINT_NONE_QCOM was set.
In this scenario, the runtime should accommodate new valid hints that may
be set for the same domain.
12.176.2. Example of setting a tracking optimization hint
XrInstance instance; // previously initialized
XrSession session; // previously initialized
// Get function pointer for xrSetTrackingOptimizationSettingsHintQCOM
PFN_xrSetTrackingOptimizationSettingsHintQCOM pfnSetTrackingOptimizationSettingsHintQCOM;
CHK_XR(xrGetInstanceProcAddr(instance, "xrSetTrackingOptimizationSettingsHintQCOM",
(PFN_xrVoidFunction*)(&pfnSetTrackingOptimizationSettingsHintQCOM)));
pfnSetTrackingOptimizationSettingsHintQCOM(session,
XR_TRACKING_OPTIMIZATION_SETTINGS_DOMAIN_ALL_QCOM,
XR_TRACKING_OPTIMIZATION_SETTINGS_HINT_LONG_RANGE_PRIORIZATION_QCOM);
// perform tracking while prioritizing long range tracking
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2022-06-02
  - Initial extension description
12.177. XR_ULTRALEAP_hand_tracking_forearm
- Name String: XR_ULTRALEAP_hand_tracking_forearm
- Extension Type: Instance extension
- Registered Extension Number: 150
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2022-04-19
- IP Status: No known IP claims.
- Contributors:
  - Robert Blenkinsopp, Ultraleap
  - Adam Harwood, Ultraleap
Overview
This extension augments the XR_EXT_hand_tracking extension to enable
applications to request the default set of 26 hand joints, with the addition
of a joint representing the user’s elbow.
The application must also enable the XR_EXT_hand_tracking extension
in order to use this extension.
New joint set
This extension extends the XrHandJointSetEXT enumeration with a new
member XR_HAND_JOINT_SET_HAND_WITH_FOREARM_ULTRALEAP.
This joint set is the same as the XR_HAND_JOINT_SET_DEFAULT_EXT, plus
a joint representing the user’s elbow,
XR_HAND_FOREARM_JOINT_ELBOW_ULTRALEAP.
// Provided by XR_ULTRALEAP_hand_tracking_forearm
typedef enum XrHandForearmJointULTRALEAP {
XR_HAND_FOREARM_JOINT_PALM_ULTRALEAP = 0,
XR_HAND_FOREARM_JOINT_WRIST_ULTRALEAP = 1,
XR_HAND_FOREARM_JOINT_THUMB_METACARPAL_ULTRALEAP = 2,
XR_HAND_FOREARM_JOINT_THUMB_PROXIMAL_ULTRALEAP = 3,
XR_HAND_FOREARM_JOINT_THUMB_DISTAL_ULTRALEAP = 4,
XR_HAND_FOREARM_JOINT_THUMB_TIP_ULTRALEAP = 5,
XR_HAND_FOREARM_JOINT_INDEX_METACARPAL_ULTRALEAP = 6,
XR_HAND_FOREARM_JOINT_INDEX_PROXIMAL_ULTRALEAP = 7,
XR_HAND_FOREARM_JOINT_INDEX_INTERMEDIATE_ULTRALEAP = 8,
XR_HAND_FOREARM_JOINT_INDEX_DISTAL_ULTRALEAP = 9,
XR_HAND_FOREARM_JOINT_INDEX_TIP_ULTRALEAP = 10,
XR_HAND_FOREARM_JOINT_MIDDLE_METACARPAL_ULTRALEAP = 11,
XR_HAND_FOREARM_JOINT_MIDDLE_PROXIMAL_ULTRALEAP = 12,
XR_HAND_FOREARM_JOINT_MIDDLE_INTERMEDIATE_ULTRALEAP = 13,
XR_HAND_FOREARM_JOINT_MIDDLE_DISTAL_ULTRALEAP = 14,
XR_HAND_FOREARM_JOINT_MIDDLE_TIP_ULTRALEAP = 15,
XR_HAND_FOREARM_JOINT_RING_METACARPAL_ULTRALEAP = 16,
XR_HAND_FOREARM_JOINT_RING_PROXIMAL_ULTRALEAP = 17,
XR_HAND_FOREARM_JOINT_RING_INTERMEDIATE_ULTRALEAP = 18,
XR_HAND_FOREARM_JOINT_RING_DISTAL_ULTRALEAP = 19,
XR_HAND_FOREARM_JOINT_RING_TIP_ULTRALEAP = 20,
XR_HAND_FOREARM_JOINT_LITTLE_METACARPAL_ULTRALEAP = 21,
XR_HAND_FOREARM_JOINT_LITTLE_PROXIMAL_ULTRALEAP = 22,
XR_HAND_FOREARM_JOINT_LITTLE_INTERMEDIATE_ULTRALEAP = 23,
XR_HAND_FOREARM_JOINT_LITTLE_DISTAL_ULTRALEAP = 24,
XR_HAND_FOREARM_JOINT_LITTLE_TIP_ULTRALEAP = 25,
XR_HAND_FOREARM_JOINT_ELBOW_ULTRALEAP = 26,
XR_HAND_FOREARM_JOINT_MAX_ENUM_ULTRALEAP = 0x7FFFFFFF
} XrHandForearmJointULTRALEAP;
Note: The first XR_HAND_JOINT_COUNT_EXT members of XrHandForearmJointULTRALEAP are identical to the members of XrHandJointEXT and can be used interchangeably.
The XR_HAND_FOREARM_JOINT_ELBOW_ULTRALEAP joint represents the center of the elbow and is oriented with the backward (+Z) direction parallel to the forearm, pointing away from the hand.
The up (+Y) direction points out of the dorsal side of the forearm. The X direction is perpendicular to Y and Z and follows the right-hand rule.
// Provided by XR_ULTRALEAP_hand_tracking_forearm
#define XR_HAND_FOREARM_JOINT_COUNT_ULTRALEAP 27
XR_HAND_FOREARM_JOINT_COUNT_ULTRALEAP defines the number of hand joint enumerants defined in XrHandForearmJointULTRALEAP.
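The joint set is used with the XR_EXT_hand_tracking functions in the usual way; only the handJointSet field and the size of the joint location array change. A sketch, with handles assumed to be previously initialized:

```cpp
XrSession session;   // previously initialized
XrSpace baseSpace;   // previously initialized
XrTime time;         // e.g. predictedDisplayTime from xrWaitFrame
// Create a hand tracker that reports the extended joint set
XrHandTrackerCreateInfoEXT createInfo{XR_TYPE_HAND_TRACKER_CREATE_INFO_EXT};
createInfo.hand = XR_HAND_LEFT_EXT;
createInfo.handJointSet = XR_HAND_JOINT_SET_HAND_WITH_FOREARM_ULTRALEAP;
XrHandTrackerEXT handTracker;
CHK_XR(xrCreateHandTrackerEXT(session, &createInfo, &handTracker));
// Locate all 27 joints, including the elbow
XrHandJointLocationEXT jointLocations[XR_HAND_FOREARM_JOINT_COUNT_ULTRALEAP];
XrHandJointsLocateInfoEXT locateInfo{XR_TYPE_HAND_JOINTS_LOCATE_INFO_EXT};
locateInfo.baseSpace = baseSpace;
locateInfo.time = time;
XrHandJointLocationsEXT locations{XR_TYPE_HAND_JOINT_LOCATIONS_EXT};
locations.jointCount = XR_HAND_FOREARM_JOINT_COUNT_ULTRALEAP;
locations.jointLocations = jointLocations;
CHK_XR(xrLocateHandJointsEXT(handTracker, &locateInfo, &locations));
XrPosef elbowPose = jointLocations[XR_HAND_FOREARM_JOINT_ELBOW_ULTRALEAP].pose;
```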
New Object Types
New Flag Types
New Enum Constants
XrHandJointSetEXT enumeration is extended with:
- XR_HAND_JOINT_SET_HAND_WITH_FOREARM_ULTRALEAP
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2022-04-19 (Robert Blenkinsopp)
  - Initial version
12.178. XR_VALVE_analog_threshold
- Name String: XR_VALVE_analog_threshold
- Extension Type: Instance extension
- Registered Extension Number: 80
- Revision: 2
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2021-06-09
- IP Status: No known IP claims.
- Contributors:
  - Joe Ludwig, Valve
  - Rune Berg, Valve
  - Andres Rodriguez, Valve
Overview
This extension allows the application to control the threshold and haptic feedback applied to an analog to digital conversion. See XrInteractionProfileAnalogThresholdVALVE for more information.
Applications should also enable the XR_KHR_binding_modification
extension to be able to define multiple thresholds.
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
The XrInteractionProfileAnalogThresholdVALVE structure is an input
struct that defines thresholds and haptic feedback behavior for action
bindings and should be added to the
XrBindingModificationsKHR::bindingModifications array of the
XrBindingModificationsKHR structure (See
XR_KHR_binding_modification extension).
// Provided by XR_VALVE_analog_threshold
typedef struct XrInteractionProfileAnalogThresholdVALVE {
XrStructureType type;
const void* next;
XrAction action;
XrPath binding;
float onThreshold;
float offThreshold;
const XrHapticBaseHeader* onHaptic;
const XrHapticBaseHeader* offHaptic;
} XrInteractionProfileAnalogThresholdVALVE;
Applications can also chain a single XrInteractionProfileAnalogThresholdVALVE structure on the next chain of any xrSuggestInteractionProfileBindings call, and runtimes must support this kind of chaining. However, this method of specifying analog thresholds is deprecated and should not be used by new applications.
If a threshold struct is present for a given conversion, the runtime must use those thresholds instead of applying its own whenever it is using the binding suggested by the application.
onThreshold and offThreshold allow the application to specify that it wants hysteresis to be applied to the threshold operation.
If onThreshold is smaller than offThreshold, the runtime must
return XR_ERROR_VALIDATION_FAILURE.
onHaptic and offHaptic allow the application to specify that it
wants automatic haptic feedback to be generated when the boolean output of
the threshold operation changes from false to true or vice versa.
If these fields are not NULL, the runtime must trigger a haptic output with
the specified characteristics.
If the device has multiple haptic outputs, the runtime should use the
haptic output that is most appropriate for the specified input path.
If a suggested binding with action and binding is not in the
binding list for this interaction profile, the runtime must return
XR_ERROR_PATH_UNSUPPORTED.
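The hysteresis implied by onThreshold and offThreshold can be modeled with a small stand-alone function. This is illustrative only, not runtime code; the exact comparison used at the boundary values is not specified by this extension, so >= and < are assumed here:

```cpp
// Models the analog-to-digital conversion with hysteresis described above.
// The output turns true only when the value crosses onThreshold, and turns
// false only when it drops below offThreshold; in between, state is kept.
bool applyAnalogThreshold(bool current, float value, float onThreshold, float offThreshold) {
    if (!current && value >= onThreshold) return true;   // rising edge
    if (current && value < offThreshold) return false;   // falling edge
    return current;                                      // inside the hysteresis band
}
```

With onThreshold = 0.75 and offThreshold = 0.5, a trigger at 0.6 keeps whatever boolean state it last had, which prevents rapid toggling near a single threshold.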
New Functions
Issues
Version History
- Revision 1, 2020-06-29 (Joe Ludwig)
  - Initial version.
- Revision 2, 2021-07-28 (Rune Berg)
  - Deprecate chaining of the struct in XrInteractionProfileSuggestedBinding; applications should use XrBindingModificationsKHR defined in the XR_KHR_binding_modification extension instead.
12.179. XR_VARJO_composition_layer_depth_test
- Name String: XR_VARJO_composition_layer_depth_test
- Extension Type: Instance extension
- Registered Extension Number: 123
- Revision: 2
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2021-07-15
- IP Status: No known IP claims.
- Contributors:
  - Sergiy Dubovik, Varjo Technologies
  - Antti Hirvonen, Varjo Technologies
  - Rémi Arnaud, Varjo Technologies
Overview
This extension enables depth-based layer composition inside the compositor.
Core OpenXR specifies that layer compositing must happen in the layer
submission order (as described in Compositing).
However, an application may want to composite the final image against the
other layers based on depth information for proper occlusion.
Layers can now provide depth information that will be used to calculate
occlusion between those layers, as well as with the environment depth
estimator (XR_VARJO_environment_depth_estimation) when enabled.
This extension defines a new type, XrCompositionLayerDepthTestVARJO, which can be chained to XrCompositionLayerProjection in order to activate this functionality. An application must also specify a range where depth testing will happen, potentially covering only a subset of the full depth range.
Composition
Layer composition rules change when this extension is enabled.
If the application does not chain XrCompositionLayerDepthTestVARJO, the "painter’s algorithm" described in Compositing must be used for layer composition.
Overall, composition should be performed in the following way:
- Layers must be composited in submission order. The compositor must track the depth value nearest to the virtual camera; the initial value for the nearest depth should be infinity.
- If the currently processed layer does not contain depth, the compositor should composite the layer against the previous layers with the "painter’s algorithm" and move to the next layer.
- If the layer depth or the active nearest depth falls inside the depth test range of the layer, the compositor must perform a depth test between the layer depth and the active depth. If the layer depth is less than or equal to the active depth, the layer is composited normally with the previous layers and the active depth is updated to match the layer depth. Otherwise the layer pixel is discarded, and the compositor should move on to composite the next layer.
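These rules can be modeled per pixel with a short stand-alone sketch. This is illustrative only; the struct and function names are hypothetical, and the behavior when neither depth falls inside the test range is assumed here to simply keep the nearest depth:

```cpp
#include <limits>
#include <vector>

// Illustrative per-pixel model of the composition rules above (not runtime code).
struct LayerPixel {
    bool  hasDepth;   // layer submitted depth information for this pixel
    float depth;      // layer depth in meters (ignored when hasDepth is false)
    float testNearZ;  // depth test range of the layer
    float testFarZ;
};

// Returns the indices of the layers whose pixel is composited, in submission order.
std::vector<int> compositePixel(const std::vector<LayerPixel>& layers) {
    float activeDepth = std::numeric_limits<float>::infinity(); // nearest depth so far
    std::vector<int> composited;
    for (int i = 0; i < static_cast<int>(layers.size()); ++i) {
        const LayerPixel& l = layers[i];
        if (!l.hasDepth) {                       // no depth: plain painter's algorithm
            composited.push_back(i);
            continue;
        }
        bool inRange = (l.depth >= l.testNearZ && l.depth <= l.testFarZ) ||
                       (activeDepth >= l.testNearZ && activeDepth <= l.testFarZ);
        if (inRange && l.depth > activeDepth)    // depth test failed: discard pixel
            continue;
        composited.push_back(i);                 // composite and track nearest depth
        if (l.depth < activeDepth) activeDepth = l.depth;
    }
    return composited;
}
```

For example, with a depth-less base layer followed by layers at 0.5 m, 0.8 m, and 0.3 m (all with test range 0–1 m), the 0.8 m layer is discarded because it lies behind the 0.5 m layer.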
Example
Mixed reality applications may want to show hands on top of the rendered VR
content.
For this purpose the application should enable environment depth estimation
(see XR_VARJO_environment_depth_estimation extension) and depth
testing with range 0m to 1m.
The following code illustrates how to enable depth testing:
XrCompositionLayerProjection layer; // previously populated
XrCompositionLayerDepthTestVARJO depthTest{XR_TYPE_COMPOSITION_LAYER_DEPTH_TEST_VARJO, layer.next};
depthTest.depthTestRangeNearZ = 0.0f; // in meters
depthTest.depthTestRangeFarZ = 1.0f; // in meters
layer.next = &depthTest;
New Structures
Applications can enable depth testing by adding
XrCompositionLayerDepthTestVARJO to the next chain for all
XrCompositionLayerProjectionView structures in the given layer in
addition to XrCompositionLayerDepthInfoKHR.
If XrCompositionLayerDepthInfoKHR is missing, the depth testing functionality is automatically disabled.
The XrCompositionLayerDepthTestVARJO structure is defined as:
// Provided by XR_VARJO_composition_layer_depth_test
typedef struct XrCompositionLayerDepthTestVARJO {
XrStructureType type;
const void* next;
float depthTestRangeNearZ;
float depthTestRangeFarZ;
} XrCompositionLayerDepthTestVARJO;
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_COMPOSITION_LAYER_DEPTH_TEST_VARJO
Version History
- Revision 1, 2021-02-16 (Sergiy Dubovik)
  - Initial extension description
- Revision 2, 2021-07-15 (Rylie Pavlik, Collabora, Ltd., and Sergiy Dubovik)
  - Update sample code so it is buildable
12.180. XR_VARJO_environment_depth_estimation
- Name String: XR_VARJO_environment_depth_estimation
- Extension Type: Instance extension
- Registered Extension Number: 124
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2021-02-17
- IP Status: No known IP claims.
- Contributors:
  - Sergiy Dubovik, Varjo Technologies
  - Antti Hirvonen, Varjo Technologies
  - Rémi Arnaud, Varjo Technologies
Overview
This extension provides a mechanism for enabling depth estimation of the
environment in the runtime-supplied compositor.
This is an extension to the XR_ENVIRONMENT_BLEND_MODE_ALPHA_BLEND mode that uses not only color but also depth for composition of the final image.
Mixed reality applications might want to mix real and virtual content based
on the depth information for proper occlusion.
The XR hardware and runtime may offer various ways to estimate the depth of the environment inside the compositor.
When this estimation is enabled, the compositor can generate a properly occluded final image when layers are submitted with depth information (both
XR_KHR_composition_layer_depth and
XR_VARJO_composition_layer_depth_test).
This extension defines a new function,
xrSetEnvironmentDepthEstimationVARJO, which can be used to toggle
environment depth estimation in the compositor.
Toggling depth estimation is an asynchronous operation and the feature may
not be activated immediately.
The function can be called immediately after the session is created.
Composition of the environment layer follows the rules as described in
XR_VARJO_composition_layer_depth_test.
New Structures
New Functions
The xrSetEnvironmentDepthEstimationVARJO function is defined as:
// Provided by XR_VARJO_environment_depth_estimation
XrResult xrSetEnvironmentDepthEstimationVARJO(
XrSession session,
XrBool32 enabled);
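A minimal usage sketch, assuming the function pointer has been loaded with xrGetInstanceProcAddr as in the other examples in this document:

```cpp
XrSession session; // previously initialized
PFN_xrSetEnvironmentDepthEstimationVARJO pfnSetEnvironmentDepthEstimationVARJO; // previously loaded via xrGetInstanceProcAddr
// Enable depth estimation; activation is asynchronous and may not take effect immediately
CHK_XR(pfnSetEnvironmentDepthEstimationVARJO(session, XR_TRUE));
```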
Version History
- Revision 1, 2021-02-16 (Sergiy Dubovik)
  - Initial extension description
12.181. XR_VARJO_foveated_rendering
- Name String: XR_VARJO_foveated_rendering
- Extension Type: Instance extension
- Registered Extension Number: 122
- Revision: 3
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2021-04-13
- IP Status: No known IP claims.
- Contributors:
  - Sergiy Dubovik, Varjo Technologies
  - Rémi Arnaud, Varjo Technologies
  - Antti Hirvonen, Varjo Technologies
12.181.1. Overview
Varjo headsets provide extremely high pixel density displays in the center
area of the display, blended with a high density display covering the rest
of the field of view.
If the application had to provide a single image per eye covering the entire field of view at the highest density, rendering would be extremely resource intensive, and in fact impossible for even the most powerful desktop GPUs to do in real time.
So instead Varjo introduced the XR_VARJO_quad_views extension, enabling the application to provide two separate images for the two screen areas, resulting in a significant reduction in processing for pixels that could not even be seen.
This extension goes a step further by using dedicated eye tracking to let the application generate only the density that can actually be seen by the user, which is another big reduction compared to the density that can be displayed.
This extension requires XR_VARJO_quad_views extension to be enabled.
An application using this extension to enable foveated rendering will take the following steps to prepare:
- Enable the XR_VARJO_quad_views and XR_VARJO_foveated_rendering extensions.
- Query system properties in order to determine if the system supports foveated rendering.
- Query texture sizes for foveated rendering.
In the render loop, for each frame, an application using this extension should:
- Check if rendering gaze is available using xrLocateSpace.
- Enable foveated rendering when xrLocateViews is called.
12.181.2. Inspect system capability
An application can inspect whether the system is capable of foveated rendering by chaining an XrSystemFoveatedRenderingPropertiesVARJO structure to the XrSystemProperties structure when calling xrGetSystemProperties.
// Provided by XR_VARJO_foveated_rendering
typedef struct XrSystemFoveatedRenderingPropertiesVARJO {
XrStructureType type;
void* next;
XrBool32 supportsFoveatedRendering;
} XrSystemFoveatedRenderingPropertiesVARJO;
The runtime should return XR_TRUE for supportsFoveatedRendering
when rendering gaze is available in the system.
An application should avoid using foveated rendering functionality when
supportsFoveatedRendering is XR_FALSE.
12.181.3. Determine foveated texture sizes
Foveated textures may have different sizes and aspect ratio compared to
non-foveated textures.
In order to determine recommended foveated texture size, an application can
chain XrFoveatedViewConfigurationViewVARJO to
XrViewConfigurationView and set foveatedRenderingActive to
XR_TRUE.
Since an application using foveated rendering with this extension has to
render four views, XR_VARJO_quad_views must be enabled along with
this extension when the XrInstance is created.
The first and second views are non-foveated views (covering the whole field of view of the HMD); the third (left eye) and fourth (right eye) views are foveated, i.e. they follow the gaze.
// Provided by XR_VARJO_foveated_rendering
typedef struct XrFoveatedViewConfigurationViewVARJO {
XrStructureType type;
void* next;
XrBool32 foveatedRenderingActive;
} XrFoveatedViewConfigurationViewVARJO;
For example:
XrInstance instance; // previously populated
XrSystemId systemId; // previously populated
XrViewConfigurationType viewConfigType; // Select XR_VIEW_CONFIGURATION_TYPE_PRIMARY_QUAD_VARJO
XrSystemFoveatedRenderingPropertiesVARJO foveatedRenderingProperties{XR_TYPE_SYSTEM_FOVEATED_RENDERING_PROPERTIES_VARJO};
XrSystemProperties systemProperties{XR_TYPE_SYSTEM_PROPERTIES, &foveatedRenderingProperties};
CHK_XR(xrGetSystemProperties(instance, systemId, &systemProperties));
uint32_t viewCount;
CHK_XR(xrEnumerateViewConfigurationViews(instance, systemId, viewConfigType, 0, &viewCount, nullptr));
// Non-foveated rendering views dimensions
std::vector<XrViewConfigurationView> configViews(viewCount, {XR_TYPE_VIEW_CONFIGURATION_VIEW});
CHK_XR(xrEnumerateViewConfigurationViews(instance, systemId, viewConfigType, viewCount, &viewCount, configViews.data()));
// Foveated rendering views dimensions
std::vector<XrViewConfigurationView> foveatedViews;
if (foveatedRenderingProperties.supportsFoveatedRendering && viewConfigType == XR_VIEW_CONFIGURATION_TYPE_PRIMARY_QUAD_VARJO) {
std::vector<XrFoveatedViewConfigurationViewVARJO> requestFoveatedConfig{4, {XR_TYPE_FOVEATED_VIEW_CONFIGURATION_VIEW_VARJO, nullptr, XR_TRUE}};
foveatedViews = std::vector<XrViewConfigurationView>{4, {XR_TYPE_VIEW_CONFIGURATION_VIEW}};
for (size_t i = 0; i < 4; i++) {
foveatedViews[i].next = &requestFoveatedConfig[i];
}
CHK_XR(xrEnumerateViewConfigurationViews(instance, systemId, viewConfigType, viewCount, &viewCount, foveatedViews.data()));
}
Applications using this extension are encouraged to create either two sets of swapchains, or one sufficiently large set of swapchains and two sets of viewports. One set is used when rendering gaze is not available, and the other is used when foveated rendering and rendering gaze are available. Using foveated textures may not provide optimal visual quality when rendering gaze is not available.
12.181.4. Rendering gaze status
This extension defines a new reference space type,
XR_REFERENCE_SPACE_TYPE_COMBINED_EYE_VARJO, which should be used to
determine whether rendering gaze is available.
After calling xrLocateSpace, the application should inspect the
XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT bit:
if it is set, rendering gaze is available; otherwise it is not.
XrSession session; // previously populated
// Create needed spaces
XrSpace viewSpace;
XrReferenceSpaceCreateInfo createViewSpaceInfo{XR_TYPE_REFERENCE_SPACE_CREATE_INFO};
createViewSpaceInfo.referenceSpaceType = XR_REFERENCE_SPACE_TYPE_VIEW;
createViewSpaceInfo.poseInReferenceSpace.orientation.w = 1.0f;
CHK_XR(xrCreateReferenceSpace(session, &createViewSpaceInfo, &viewSpace));
XrSpace renderGazeSpace;
XrReferenceSpaceCreateInfo createReferenceSpaceInfo{XR_TYPE_REFERENCE_SPACE_CREATE_INFO};
createReferenceSpaceInfo.referenceSpaceType = XR_REFERENCE_SPACE_TYPE_COMBINED_EYE_VARJO;
createReferenceSpaceInfo.poseInReferenceSpace.orientation.w = 1.0f;
CHK_XR(xrCreateReferenceSpace(session, &createReferenceSpaceInfo, &renderGazeSpace));
// ...
// in frame loop
// ...
XrFrameState frameState; // previously populated by xrWaitFrame
// Query rendering gaze status
XrSpaceLocation renderGazeLocation{XR_TYPE_SPACE_LOCATION};
CHK_XR(xrLocateSpace(renderGazeSpace, viewSpace, frameState.predictedDisplayTime, &renderGazeLocation));
const bool foveationActive = (renderGazeLocation.locationFlags & XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT) != 0;
if (foveationActive) {
// Rendering gaze is available
} else {
// Rendering gaze is not available
}
12.181.5. Request foveated field of view
For each frame, the application indicates whether the runtime should return a foveated or non-foveated field of view. This is done by chaining XrViewLocateFoveatedRenderingVARJO to XrViewLocateInfo.
// Provided by XR_VARJO_foveated_rendering
typedef struct XrViewLocateFoveatedRenderingVARJO {
XrStructureType type;
const void* next;
XrBool32 foveatedRenderingActive;
} XrViewLocateFoveatedRenderingVARJO;
The runtime must return foveated field of view when
foveatedRenderingActive is XR_TRUE.
// ...
// in frame loop
// ...
XrSession session; // previously populated
XrSpace appSpace; // previously populated
XrFrameState frameState; // previously populated by xrWaitFrame
XrViewConfigurationType viewConfigType; // previously populated
std::vector<XrView> views; // previously populated/resized to the correct size
bool foveationActive; // previously populated, as in the previous example
XrViewState viewState{XR_TYPE_VIEW_STATE};
uint32_t viewCapacityInput = static_cast<uint32_t>(views.size());
uint32_t viewCountOutput;
XrViewLocateInfo viewLocateInfo{XR_TYPE_VIEW_LOCATE_INFO};
viewLocateInfo.viewConfigurationType = viewConfigType;
viewLocateInfo.displayTime = frameState.predictedDisplayTime;
viewLocateInfo.space = appSpace;
XrViewLocateFoveatedRenderingVARJO viewLocateFoveatedRendering{XR_TYPE_VIEW_LOCATE_FOVEATED_RENDERING_VARJO};
viewLocateFoveatedRendering.foveatedRenderingActive = foveationActive;
viewLocateInfo.next = &viewLocateFoveatedRendering;
CHK_XR(xrLocateViews(session, &viewLocateInfo, &viewState, viewCapacityInput, &viewCountOutput, views.data()));
New Structures
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_VIEW_LOCATE_FOVEATED_RENDERING_VARJO
- XR_TYPE_FOVEATED_VIEW_CONFIGURATION_VIEW_VARJO
- XR_TYPE_SYSTEM_FOVEATED_RENDERING_PROPERTIES_VARJO
XrReferenceSpaceType enumeration is extended with:
- XR_REFERENCE_SPACE_TYPE_COMBINED_EYE_VARJO
Version History
- Revision 1, 2020-12-16 (Sergiy Dubovik)
  - Initial extension description
- Revision 2, 2021-04-13 (Rylie Pavlik, Collabora, Ltd., and Sergiy Dubovik)
  - Update sample code so it is buildable
- Revision 3, 2022-02-21 (Denny Rönngren)
  - Update sample code with a missing struct field initialization
12.182. XR_VARJO_marker_tracking
- Name String: XR_VARJO_marker_tracking
- Extension Type: Instance extension
- Registered Extension Number: 125
- Revision: 1
- Ratification Status: Not ratified
- Extension and Version Dependencies
- Last Modified Date: 2021-09-30
- IP Status: No known IP claims.
- Contributors:
  - Roman Golovanov, Varjo Technologies
  - Rémi Arnaud, Varjo Technologies
  - Sergiy Dubovik, Varjo Technologies
12.182.1. Overview
Varjo Markers are physical markers tracked by the video cameras of the HMD. Different types of markers can be used for different purposes. As an example, Varjo Markers can be used as cheap replacements for electronic trackers. The cost per printed tracker is significantly lower and the markers require no power to function.
This extension provides the tracking interface to a set of marker types and sizes. Markers can be printed out from the PDF documents and instructions freely available at https://developer.varjo.com/docs/get-started/varjo-markers#printing-varjo-markers. Note that the printed marker must have the exact physical size for its ID.
Object markers are used to track static or dynamic objects in the user environment. You may use object markers in both XR and VR applications. Each marker has a unique ID, and you must not use the same physical marker more than once in any given environment. For added precision, an application may use multiple markers to track a single object. For example, you could track a monitor by placing a marker in each corner.
There is a set of marker IDs recognized by the runtime; if the application uses an ID that is not in this set, the runtime must return XR_ERROR_MARKER_ID_INVALID_VARJO.
New Object Types
New Flag Types
New Enums
New Functions
The xrSetMarkerTrackingVARJO function is defined as:
// Provided by XR_VARJO_marker_tracking
XrResult xrSetMarkerTrackingVARJO(
XrSession session,
XrBool32 enabled);
The xrSetMarkerTrackingVARJO function enables or disables marker tracking functionality. As soon as the feature becomes disabled, all trackable markers become inactive and the corresponding events are generated. An application may call any of the functions in this extension regardless of whether the marker tracking functionality is enabled or disabled.
The xrSetMarkerTrackingTimeoutVARJO function is defined as:
// Provided by XR_VARJO_marker_tracking
XrResult xrSetMarkerTrackingTimeoutVARJO(
XrSession session,
uint64_t markerId,
XrDuration timeout);
The xrSetMarkerTrackingTimeoutVARJO function sets a desired lifetime
duration for a specified marker.
The default value is XR_NO_DURATION.
A negative value will be clamped to XR_NO_DURATION.
The timeout defines the time period during which the runtime must keep
returning poses of previously tracked markers.
Tracking may be lost if the marker goes outside of the trackable field of
view; in this case the runtime will still try to predict the marker’s pose
for the timeout period.
The runtime must return XR_ERROR_MARKER_ID_INVALID_VARJO if the
supplied markerId is invalid.
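For example, a timeout of 200 ms can be set as follows (sketch; the marker ID is hypothetical and the function pointer is assumed to be loaded via xrGetInstanceProcAddr):

```cpp
XrSession session; // previously initialized
PFN_xrSetMarkerTrackingTimeoutVARJO pfnSetMarkerTrackingTimeoutVARJO; // previously loaded via xrGetInstanceProcAddr
uint64_t markerId = 100; // hypothetical marker ID from the runtime-recognized set
// Keep reporting a (possibly predicted) pose for up to 200 ms after tracking is lost
CHK_XR(pfnSetMarkerTrackingTimeoutVARJO(session, markerId, 200 * 1000 * 1000)); // XrDuration in nanoseconds
```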
The xrSetMarkerTrackingPredictionVARJO function is defined as:
// Provided by XR_VARJO_marker_tracking
XrResult xrSetMarkerTrackingPredictionVARJO(
XrSession session,
uint64_t markerId,
XrBool32 enable);
The xrSetMarkerTrackingPredictionVARJO function enables or disables
the prediction feature for a specified marker.
By default, markers are created with prediction disabled.
This works well for markers that are supposed to be stationary.
The prediction can be used to improve tracking of movable markers.
The runtime must return XR_ERROR_MARKER_ID_INVALID_VARJO if the
supplied markerId is invalid.
The xrGetMarkerSizeVARJO function is defined as:
// Provided by XR_VARJO_marker_tracking
XrResult xrGetMarkerSizeVARJO(
XrSession session,
uint64_t markerId,
XrExtent2Df* size);
The xrGetMarkerSizeVARJO function retrieves the height and width of an
active marker.
The runtime must return XR_ERROR_MARKER_NOT_TRACKED_VARJO if marker
tracking functionality is disabled or the marker with given markerId
is inactive.
The runtime must return XR_ERROR_MARKER_ID_INVALID_VARJO if the
supplied markerId is invalid.
The xrCreateMarkerSpaceVARJO function is defined as:
// Provided by XR_VARJO_marker_tracking
XrResult xrCreateMarkerSpaceVARJO(
XrSession session,
const XrMarkerSpaceCreateInfoVARJO* createInfo,
XrSpace* space);
The xrCreateMarkerSpaceVARJO function creates a marker XrSpace for
a pose relative to the marker specified in XrMarkerSpaceCreateInfoVARJO.
The runtime must return XR_ERROR_MARKER_ID_INVALID_VARJO if the
supplied XrMarkerSpaceCreateInfoVARJO::markerId is invalid.
New Structures
The XrSystemMarkerTrackingPropertiesVARJO structure is defined as:
// Provided by XR_VARJO_marker_tracking
typedef struct XrSystemMarkerTrackingPropertiesVARJO {
XrStructureType type;
void* next;
XrBool32 supportsMarkerTracking;
} XrSystemMarkerTrackingPropertiesVARJO;
An application may inspect whether the system is capable of marker tracking by chaining an XrSystemMarkerTrackingPropertiesVARJO structure to the XrSystemProperties structure when calling xrGetSystemProperties.
The runtime should return XR_TRUE for supportsMarkerTracking
when marker tracking is available in the system, otherwise XR_FALSE.
Marker tracking calls must return XR_ERROR_FEATURE_UNSUPPORTED if
marker tracking is not available in the system.
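For example, an application might query system support like this (a sketch; instance and systemId are assumed to be previously initialized):

```cpp
extern XrInstance instance;  // previously initialized
extern XrSystemId systemId;  // previously initialized

// Chain the marker tracking properties into the system properties query.
XrSystemMarkerTrackingPropertiesVARJO markerTrackingProps{
    XR_TYPE_SYSTEM_MARKER_TRACKING_PROPERTIES_VARJO};
XrSystemProperties systemProps{XR_TYPE_SYSTEM_PROPERTIES};
systemProps.next = &markerTrackingProps;

if (XR_SUCCESS == xrGetSystemProperties(instance, systemId, &systemProps) &&
    markerTrackingProps.supportsMarkerTracking) {
    // Marker tracking calls may be used with this system.
}
```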
The XrEventDataMarkerTrackingUpdateVARJO structure is defined as:
// Provided by XR_VARJO_marker_tracking
typedef struct XrEventDataMarkerTrackingUpdateVARJO {
XrStructureType type;
const void* next;
uint64_t markerId;
XrBool32 isActive;
XrBool32 isPredicted;
XrTime time;
} XrEventDataMarkerTrackingUpdateVARJO;
Receiving the XrEventDataMarkerTrackingUpdateVARJO event structure indicates that the tracking information has changed. The runtime must not send more than one event per frame per marker. The runtime must send an event if the marker has changed its state (active or inactive). The runtime must send an event if it has detected pose change of the active marker.
The XrMarkerSpaceCreateInfoVARJO structure is defined as:
// Provided by XR_VARJO_marker_tracking
typedef struct XrMarkerSpaceCreateInfoVARJO {
XrStructureType type;
const void* next;
uint64_t markerId;
XrPosef poseInMarkerSpace;
} XrMarkerSpaceCreateInfoVARJO;
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_SYSTEM_MARKER_TRACKING_PROPERTIES_VARJO
- XR_TYPE_EVENT_DATA_MARKER_TRACKING_UPDATE_VARJO
- XR_TYPE_MARKER_SPACE_CREATE_INFO_VARJO
XrResult enumeration is extended with:
- XR_ERROR_MARKER_ID_INVALID_VARJO
- XR_ERROR_MARKER_NOT_TRACKED_VARJO
Issues
Version History
- Revision 1, 2021-09-30 (Roman Golovanov)
  - Initial extension description
12.182.2. Example
The example below shows a routine which enables the marker tracking
feature and then polls events.
The event type XR_TYPE_EVENT_DATA_MARKER_TRACKING_UPDATE_VARJO has a
special handler to process marker state changes.
XrSession session; // previously initialized
if(XR_SUCCESS != xrSetMarkerTrackingVARJO(session, XR_TRUE)) {
return;
}
XrInstance instance; // previously initialized
XrFrameState frameState; // previously initialized
XrSpace baseSpace; // previously initialized
XrSpaceLocation location; // previously initialized
// Collection of tracked markers and their space handlers
std::unordered_map<uint64_t, XrSpace> markerSpaces;
// Initialize an event buffer to hold the output.
XrEventDataBuffer event{XR_TYPE_EVENT_DATA_BUFFER};
XrResult result = xrPollEvent(instance, &event);
if (result == XR_SUCCESS) {
switch (event.type) {
case XR_TYPE_EVENT_DATA_MARKER_TRACKING_UPDATE_VARJO: {
const auto& marker_update =
*reinterpret_cast<XrEventDataMarkerTrackingUpdateVARJO*>(&event);
const auto id = marker_update.markerId;
// If marker appeared for the first time then set some settings and
// add it to collection
if(0 == markerSpaces.count(id)) {
XrMarkerSpaceCreateInfoVARJO spaceInfo{XR_TYPE_MARKER_SPACE_CREATE_INFO_VARJO};
spaceInfo.markerId = id;
spaceInfo.poseInMarkerSpace = XrPosef{0};
spaceInfo.poseInMarkerSpace.orientation.w = 1.0f;
XrSpace markerSpace;
// Set 1 second timeout
if(XR_SUCCESS != xrSetMarkerTrackingTimeoutVARJO(
session, id, 1000000000))
{
break;
}
// Enable prediction for markers with `odd` ids.
if(XR_SUCCESS != xrSetMarkerTrackingPredictionVARJO(
session, id, id % 2))
{
break;
}
if(XR_SUCCESS != xrCreateMarkerSpaceVARJO(session, &spaceInfo,
&markerSpace)) {
break;
}
markerSpaces[id] = markerSpace;
}
if(marker_update.isActive) {
if(XR_SUCCESS != xrLocateSpace(markerSpaces.at(id), baseSpace,
frameState.predictedDisplayTime, &location)){
break;
}
if(marker_update.isPredicted) {
// Process marker as dynamic
} else {
// Process marker as stationary
}
} else {
// Remove previously tracked marker
markerSpaces.erase(id);
}
// ...
break;
}
}
}
12.183. XR_VARJO_view_offset
- Name String
  - XR_VARJO_view_offset
- Extension Type
  - Instance extension
- Registered Extension Number
  - 126
- Revision
  - 1
- Ratification Status
  - Not ratified
- Extension and Version Dependencies
- Last Modified Date
  - 2021-09-30
- IP Status
  - No known IP claims.
- Contributors
  - Rémi Arnaud, Varjo Technologies
Overview
Varjo headsets use video pass-through cameras to create the mixed reality (MR) image. The cameras are located around 10 cm (3.9 inches) in front of the user’s eyes, which leads to an offset in depth perception so that real-world objects in the video pass-through image appear larger than they are in real life. The image below gives a visualization of the difference between what the camera sees and what the user would see in real life.
This magnification effect is pronounced for objects that are close to the user – for example, their hands may appear unnaturally large in the image. The effect decreases with distance, so that objects at a distance of 2 meters already appear close to their actual size, and the sizes eventually converge at infinity. Note that while the objects' sizes may differ, their geometry, relative sizes, locations, etc. remain accurate. The extent of the magnification effect ultimately depends both on the application itself and the user’s physiology, as the human visual system is highly adaptive in this type of setting.
When blending the video pass-through image with virtual content, it is
important that their relative geometries – position, size, and disparity –
match one another.
To achieve this, Varjo’s runtime automatically places the virtual reality
cameras in the same position as the physical cameras when the video
pass-through feature is enabled (see
XR_ENVIRONMENT_BLEND_MODE_ALPHA_BLEND).
This allows virtual and real-world content to appear at the same distance
and on the same plane when viewed together.
While this can be observed as an apparent jump in the location of virtual
objects compared to VR-only content, this does not cause any distortion in
the object geometry or location; it is only the viewer’s location that
changes.
In some cases, moving the VR content to match the real-world position may not be desirable, for example if a virtual object is close to the user, or if the application is switching between VR and MR modes. This extension enables the application to control the rendering position: the VR content can be rendered from the location of the user's eyes while the video pass-through image uses the camera locations. Offset values between 0.0 and 1.0 are supported. These can be used to create a smooth, animated transition between the two rendering positions if the application needs to change from one to the other during a session.
New Functions
The xrSetViewOffsetVARJO function is defined as:
// Provided by XR_VARJO_view_offset
XrResult xrSetViewOffsetVARJO(
XrSession session,
float offset);
The xrSetViewOffsetVARJO function takes a float between 0.0 and 1.0.
A value of 0.0 means the pose returned by xrLocateViews will be at the
eye location; a value of 1.0 means the pose will be at the camera location.
A value between 0.0 and 1.0 interpolates the pose between the eye and the
camera location.
A value less than 0.0 or greater than 1.0 will fail with
XR_ERROR_VALIDATION_FAILURE.
Note that by default the offset is set to 0 if the pass-through cameras are
not active, i.e. in VR (XR_ENVIRONMENT_BLEND_MODE_OPAQUE), and to 1 if
the cameras are active, i.e. in MR (XR_ENVIRONMENT_BLEND_MODE_ALPHA_BLEND
or XR_ENVIRONMENT_BLEND_MODE_ADDITIVE).
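As an illustrative sketch of the animated transition mentioned above, an application could step the offset once per frame; the UpdateViewOffset helper and its parameters are hypothetical:

```cpp
extern XrSession session;  // previously initialized

// Blend the view pose from the eye location (0.0) toward the camera
// location (1.0) over `transitionFrames` frames, e.g. when switching
// from VR to MR. Intended to be called once per frame.
void UpdateViewOffset(uint32_t frame, uint32_t transitionFrames) {
    float offset = static_cast<float>(frame) / static_cast<float>(transitionFrames);
    if (offset > 1.0f) {
        offset = 1.0f;  // values outside [0.0, 1.0] would fail with
                        // XR_ERROR_VALIDATION_FAILURE
    }
    xrSetViewOffsetVARJO(session, offset);
}
```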
Version History
- Revision 1, 2022-02-08 (Remi Arnaud)
  - Extension specification
12.184. XR_VARJO_xr4_controller_interaction
- Name String
  - XR_VARJO_xr4_controller_interaction
- Extension Type
  - Instance extension
- Registered Extension Number
  - 130
- Revision
  - 2
- Ratification Status
  - Not ratified
- Extension and Version Dependencies
- API Interactions
  - Interacts with XR_EXT_dpad_binding
  - Interacts with XR_EXT_hand_interaction
  - Interacts with XR_EXT_palm_pose
- Last Modified Date
  - 2024-09-17
- IP Status
  - No known IP claims.
- Contributors
  - Denny Rönngren, Varjo Technologies
  - Szymon Policht, Varjo Technologies
  - Roman Golovanov, Varjo Technologies
  - Jussi Karhu, Varjo Technologies
  - Fabian Wahlster, Varjo Technologies
Overview
This extension adds a new interaction profile for the Varjo Controllers compatible with the Varjo XR-4 headset.
Interaction profile path:
- /interaction_profiles/varjo/xr-4_controller
Valid for the user paths:
- /user/hand/left
- /user/hand/right
Supported component paths for /user/hand/left only:
- …/input/menu/click
Supported component paths for /user/hand/right only:
- …/input/system/click (may not be available for application use)
Supported component paths on both user paths:
- …/input/a/click
- …/input/a/touch
- …/input/b/click
- …/input/b/touch
- …/input/squeeze/click
- …/input/squeeze/touch
- …/input/trigger/value
- …/input/trigger/touch
- …/input/thumbstick/x
- …/input/thumbstick/y
- …/input/thumbstick/click
- …/input/thumbstick/touch
- …/input/grip/pose
- …/input/aim/pose
- …/output/haptic
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2023-12-06 (Denny Rönngren)
  - Initial extension description
- Revision 2, 2024-09-17 (Fabian Wahlster)
  - Interacting extensions description
12.185. XR_YVR_controller_interaction
- Name String
  - XR_YVR_controller_interaction
- Extension Type
  - Instance extension
- Registered Extension Number
  - 498
- Revision
  - 1
- Ratification Status
  - Not ratified
- Extension and Version Dependencies
- API Interactions
  - Interacts with XR_EXT_dpad_binding
  - Interacts with XR_EXT_hand_interaction
  - Interacts with XR_EXT_palm_pose
- Last Modified Date
  - 2023-07-12
- IP Status
  - No known IP claims.
- Contributors
  - Pengpeng Zhang, YVR
  - Xuanyu Chen, YVR
Overview
This extension defines a new interaction profile for the YVR Controller, including but not limited to YVR1 and YVR2 Controller.
YVR Controller interaction profile
Interaction profile path:
- /interaction_profiles/yvr/touch_controller_yvr
Valid for user paths:
- /user/hand/left
- /user/hand/right
This interaction profile represents the input sources and haptics on the YVR Controller.
Supported component paths:
- On /user/hand/left only:
  - …/input/x/click
  - …/input/x/touch
  - …/input/y/click
  - …/input/y/touch
  - …/input/menu/click
- On /user/hand/right only:
  - …/input/a/click
  - …/input/a/touch
  - …/input/b/click
  - …/input/b/touch
  - …/input/system/click (may not be available for application use)
- On both:
  - …/input/squeeze/click
  - …/input/trigger/value
  - …/input/trigger/touch
  - …/input/thumbstick/x
  - …/input/thumbstick/y
  - …/input/thumbstick/click
  - …/input/thumbstick/touch
  - …/input/grip/pose
  - …/input/aim/pose
  - …/output/haptic
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Functions
Issues
Version History
- Revision 1, 2023-07-12 (Pengpeng Zhang)
  - Initial extension description
13. List of Provisional Extensions
13.1. XR_EXTX_overlay
- Name String
  - XR_EXTX_overlay
- Extension Type
  - Instance extension
- Registered Extension Number
  - 34
- Revision
  - 5
- Ratification Status
  - Not ratified
- Extension and Version Dependencies
- Last Modified Date
  - 2021-01-13
- IP Status
  - No known IP claims.
- Contributors
  - Mark Young, LunarG
  - Jules Blok, Epic
  - Jared Cheshier, Pluto VR
  - Nick Whiting, Epic
  - Brad Grantham, LunarG
Overview
Application developers may desire to implement an OpenXR application that renders content on top of another OpenXR application. These additional applications will execute in a separate process, create a separate session, generate separate content, but want the OpenXR runtime to composite their content on top of the main OpenXR application. Examples of these applications might include:
- A debug environment outputting additional content
- A Store application that hovers to one side of the user’s view
- An interactive HUD designed to expose additional chat features
This extension introduces the concept of "Overlay Sessions" in order to expose this usage model.
This extension allows:
- An application to identify when the current session’s composition layers will be applied during composition
- The ability for an overlay session to get information about what is going on with the main application
To enable the functionality of this extension, an application must pass the
name of the extension into xrCreateInstance via the
XrInstanceCreateInfo::enabledExtensionNames parameter as
indicated in the Extensions section.
To create an overlay session, an application must pass an
XrSessionCreateInfoOverlayEXTX structure to xrCreateSession via
the XrSessionCreateInfo structure’s next parameter.
An overlay application should not assume that the values returned to it by
xrWaitFrame in predictedDisplayTime in XrFrameState will
be the same as the values returned to the main application or even
correlated.
13.1.1. Overlay Session Layer Placement
Since one or more sessions may be active at the same time, this extension provides the ability for the application to identify when the frames of the current session will be composited into the final frame.
The XrSessionCreateInfoOverlayEXTX sessionLayersPlacement
parameter provides information on when the session’s composition layers
should be applied to the final composition frame.
The larger the value passed into sessionLayersPlacement, the closer to
the front this session’s composition layers will appear (relative to other
overlay sessions’ composition layers).
The smaller the value of sessionLayersPlacement, the further to the
back this session’s composition layers will appear.
The main session’s composition layers will always be composited first,
resulting in any overlay content being composited on top of the main
application’s content.
If sessionLayersPlacement is 0, then the runtime will always attempt
to composite that session’s composition layers first.
If sessionLayersPlacement is UINT32_MAX, then the runtime will always
attempt to composite that session’s composition layers last.
If two or more overlay sessions are created with the same
sessionLayersPlacement value, then the newer sessions will be treated
as if they had a slightly higher value of sessionLayersPlacement than
the previous sessions with the same value.
This should result in the newest overlay session being composited closer to
the user than the older sessions.
The following image illustrates this ordering:
13.1.2. Main Session Behavior Event
Since an overlay session intends to work in harmony with a main session, some information needs to be provided from that main session to the overlay session.
The XrEventDataMainSessionVisibilityChangedEXTX event structure provides information on the visibility of the main session as well as some additional flags which can be used to adjust overlay behavior.
If XR_KHR_composition_layer_depth is enabled in the main session,
then XrEventDataMainSessionVisibilityChangedEXTX flags should
contain the value:
XR_OVERLAY_MAIN_SESSION_ENABLED_COMPOSITION_LAYER_INFO_DEPTH_BIT_EXTX.
If the overlay session also enables XR_KHR_composition_layer_depth,
then when both sessions are visible, the runtime can integrate their
projection layer content together using depth information as described in
the extension.
However, if either the main session or the overlay do not enable the
extension, then composition behavior will continue as if neither one enabled
the extension.
13.1.3. Modifications to the OpenXR Specification
When this extension is enabled, certain core behaviors defined in the OpenXR specification must change as defined below:
Modifications to Composition
The Compositing section description of the composition process
will be changed if this extension is enabled.
If this extension is enabled, and there is only one active session, then
there is no change.
However, if this extension is enabled, and there are multiple active
sessions, then the composition will occur in order based on the overlay
session’s XrSessionCreateInfoOverlayEXTX::sessionLayersPlacement
value as described in the table below:
| Session Type | XrSessionCreateInfoOverlayEXTX::sessionLayersPlacement | Composited |
|---|---|---|
| Overlay Session | UINT32_MAX | Composited last, appears in front of all other XrSessions |
| Overlay Session | <Positive value> | |
| Overlay Session | 0 | |
| Non-overlay Session | N/A | Composited first, appears behind all other XrSessions |
The above change only applies to when a session’s composition layers are applied to the resulting image. The order in which composition layers are handled internal to a session does not change. However, once the sessions have been properly ordered, the runtime should behave as if all the composition layers have been placed into a single list (maintaining the separation of viewport images) and treat them as if they were from one original session. From this point forward, the composition behavior of the resulting composition layers is the same whether or not this extension is enabled.
If the overlay session is created as part of an XrInstance which has
enabled the XR_KHR_composition_layer_depth extension, and a
XrCompositionLayerDepthInfoKHR structure has been provided to one or
more composition layers, then it intends for those layers to be composited
into the final image using that depth information.
This composition occurs as defined in the
XR_KHR_composition_layer_depth extension.
However, this is only possible if the main session has provided depth buffer
information as part of its swapchain.
In the event that a main session does not provide depth buffer information
as part of its swapchain, then overlay application’s composition layers
containing depth information will be composited as if they did not contain
that information.
Modifications to xrEndFrame Behavior
Frame Submission currently states that if xrEndFrame is called with no layers, then the runtime should clear the VR display.
If this extension is enabled, the above statement is now only true if the session is not an overlay session. If the session is an overlay session, and it provides 0 layers in the call to xrEndFrame, then the runtime will just ignore the overlay session for the current frame.
Modifications to Input Synchronization
If a runtime supports this extension, it must separate input tracking on a per-session basis. This means that reading the input from one active session does not disturb the input information that can be read by another active session. This may require duplicating events to more than one session.
New Object Types
None
New Flag Types
typedef XrFlags64 XrOverlayMainSessionFlagsEXTX;
// Flag bits for XrOverlayMainSessionFlagsEXTX
static const XrOverlayMainSessionFlagsEXTX XR_OVERLAY_MAIN_SESSION_ENABLED_COMPOSITION_LAYER_INFO_DEPTH_BIT_EXTX = 0x00000001;
typedef XrFlags64 XrOverlaySessionCreateFlagsEXTX;
// Flag bits for XrOverlaySessionCreateFlagsEXTX
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_SESSION_CREATE_INFO_OVERLAY_EXTX
- XR_TYPE_EVENT_DATA_MAIN_SESSION_VISIBILITY_CHANGED_EXTX
New Enums
- XR_OVERLAY_MAIN_SESSION_ENABLED_COMPOSITION_LAYER_INFO_DEPTH_BIT_EXTX
New Structures
The XrSessionCreateInfoOverlayEXTX structure is defined as:
// Provided by XR_EXTX_overlay
typedef struct XrSessionCreateInfoOverlayEXTX {
XrStructureType type;
const void* next;
XrOverlaySessionCreateFlagsEXTX createFlags;
uint32_t sessionLayersPlacement;
} XrSessionCreateInfoOverlayEXTX;
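A minimal sketch of creating an overlay session by chaining this structure into XrSessionCreateInfo. The graphicsBinding pointer and CHK_XR macro follow the conventions of the other examples in this document; the sessionLayersPlacement value of 1 is an arbitrary choice:

```cpp
extern XrInstance instance;          // previously initialized
extern XrSystemId systemId;          // previously initialized
extern const void* graphicsBinding;  // graphics API binding structure, previously initialized

// Request that this session's layers be composited above the main session.
XrSessionCreateInfoOverlayEXTX overlayInfo{XR_TYPE_SESSION_CREATE_INFO_OVERLAY_EXTX};
overlayInfo.next = graphicsBinding;  // continue the structure chain
overlayInfo.createFlags = 0;
overlayInfo.sessionLayersPlacement = 1;

XrSessionCreateInfo createInfo{XR_TYPE_SESSION_CREATE_INFO};
createInfo.next = &overlayInfo;
createInfo.systemId = systemId;

XrSession overlaySession;
CHK_XR(xrCreateSession(instance, &createInfo, &overlaySession));
```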
The XrEventDataMainSessionVisibilityChangedEXTX structure is defined as:
// Provided by XR_EXTX_overlay
typedef struct XrEventDataMainSessionVisibilityChangedEXTX {
XrStructureType type;
const void* next;
XrBool32 visible;
XrOverlayMainSessionFlagsEXTX flags;
} XrEventDataMainSessionVisibilityChangedEXTX;
Receiving the XrEventDataMainSessionVisibilityChangedEXTX event
structure indicates that the main session has gained or lost visibility.
This can occur in many cases, one typical example is when a user switches
from one OpenXR application to another.
See XrEventDataMainSessionVisibilityChangedEXTX for more information
on the standard behavior.
This structure contains additional information on the main session including
flags which indicate additional state information of the main session.
Currently, the only flag value supplied is
XR_OVERLAY_MAIN_SESSION_ENABLED_COMPOSITION_LAYER_INFO_DEPTH_BIT_EXTX
which indicates if the main session has enabled the
XR_KHR_composition_layer_depth extension.
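For illustration, an overlay application might handle the event as follows (a sketch modeled on the event-polling examples elsewhere in this document):

```cpp
extern XrInstance instance;  // previously initialized

XrEventDataBuffer event{XR_TYPE_EVENT_DATA_BUFFER};
if (XR_SUCCESS == xrPollEvent(instance, &event) &&
    event.type == XR_TYPE_EVENT_DATA_MAIN_SESSION_VISIBILITY_CHANGED_EXTX) {
    const auto& visibility =
        *reinterpret_cast<const XrEventDataMainSessionVisibilityChangedEXTX*>(&event);
    if (!visibility.visible) {
        // The main session lost visibility; the overlay may want to hide
        // content that only makes sense on top of the main application.
    } else if (visibility.flags &
               XR_OVERLAY_MAIN_SESSION_ENABLED_COMPOSITION_LAYER_INFO_DEPTH_BIT_EXTX) {
        // The main session provides depth information, so depth-aware
        // composition of the overlay's projection layers is possible.
    }
}
```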
New Functions
None
New Function Pointers
None
Issues
None
Version History
- Revision 1, 2018-11-05 (Mark Young)
  - Initial draft
- Revision 2, 2020-02-12 (Brad Grantham)
  - Name change, remove overlay bool, add flags
- Revision 3, 2020-03-05 (Brad Grantham)
  - Name change
- Revision 4, 2020-03-23 (Brad Grantham)
  - Fix enums
- Revision 5, 2021-01-13 (Brad Grantham)
  - Remove bit requesting synchronized display times
13.2. XR_HTCX_vive_tracker_interaction
- Name String
  - XR_HTCX_vive_tracker_interaction
- Extension Type
  - Instance extension
- Registered Extension Number
  - 104
- Revision
  - 3
- Ratification Status
  - Not ratified
- Extension and Version Dependencies
- API Interactions
  - Interacts with XR_EXT_dpad_binding
- Last Modified Date
  - 2023-07-14
- IP Status
  - No known IP claims.
- Contributors
  - Kyle Chen, HTC
  - Chris Kuo, HTC
Overview
This extension defines a new interaction profile for the HTC VIVE Tracker. The HTC VIVE Tracker is a generic tracked device which can be attached to anything to make it trackable. For example, it can be attached to the user’s hands or feet to track the motion of the human body. It can also be attached to any other device the user wants to track and interact with.
In order to enable the functionality of this extension, you must pass the
name of the extension into xrCreateInstance via the
XrInstanceCreateInfo enabledExtensionNames parameter as
indicated in the Extensions section.
This extension allows:
- An application to enumerate the subpaths of all currently connected VIVE trackers.
- An application to receive notification of the top level paths of a VIVE tracker when it is connected.
The paths of a VIVE tracker contain the two paths below:
- VIVE tracker persistent path indicates a specific tracker whose lifetime lasts longer than an instance, which means it must not change during its hardware lifetime. The format of this path string is unspecified and should be treated as an opaque string.
- VIVE tracker role path may be constructed as "/user/vive_tracker_htcx/role/ROLE_VALUE", where ROLE_VALUE takes one of the following values. The role path may be assigned from the tool provided by the runtime and is XR_NULL_PATH if it has not been assigned. If this role path refers to more than one tracker, the runtime should choose one of them to be currently active. The role path may be changed during the lifetime of the instance. Whenever it is changed, the runtime must send the event XR_TYPE_EVENT_DATA_VIVE_TRACKER_CONNECTED_HTCX to provide the new role path of that tracker.
ROLE_VALUE may be one of:
- XR_NULL_PATH
- handheld_object
- left_foot
- right_foot
- left_shoulder
- right_shoulder
- left_elbow
- right_elbow
- left_knee
- right_knee
- left_wrist (rev: 3)
- right_wrist (rev: 3)
- left_ankle (rev: 3)
- right_ankle (rev: 3)
- waist
- chest
- camera
- keyboard
Either the persistent path or the role path can be passed as a subaction path to indicate a specific tracker, for example in XrActionCreateInfo::subactionPaths passed to the function xrCreateAction, or XrActionSpaceCreateInfo::subactionPath passed to the function xrCreateActionSpace. Please see Example 1 below.
As with other controllers, if a VIVE tracker is
connected and bound to a top-level user path, or disconnected while bound to
a top-level user path, the runtime must send the event
XR_TYPE_EVENT_DATA_INTERACTION_PROFILE_CHANGED, and the application
may call xrGetCurrentInteractionProfile to check whether the tracker is
active or not.
Note
The device that a tracker is attached to probably has a different motion model than what the tracker assumes. The motion tracking might not be as expected in this case.
VIVE Tracker interaction profile
Interaction profile path:
- /interaction_profiles/htc/vive_tracker_htcx
This interaction profile represents the input sources and haptics on the VIVE Tracker.
Supported component paths:
- …/input/system/click (may not be available for application use)
- …/input/menu/click
- …/input/trigger/click
- …/input/squeeze/click
- …/input/trigger/value
- …/input/trackpad/x
- …/input/trackpad/y
- …/input/trackpad/click
- …/input/trackpad/touch
- …/input/grip/pose
- …/output/haptic
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
- XR_TYPE_VIVE_TRACKER_PATHS_HTCX
- XR_TYPE_EVENT_DATA_VIVE_TRACKER_CONNECTED_HTCX
New Enums
New Structures
The XrViveTrackerPathsHTCX structure is defined as:
// Provided by XR_HTCX_vive_tracker_interaction
typedef struct XrViveTrackerPathsHTCX {
XrStructureType type;
void* next;
XrPath persistentPath;
XrPath rolePath;
} XrViveTrackerPathsHTCX;
The XrViveTrackerPathsHTCX structure contains the two paths of a VIVE tracker.
The XrEventDataViveTrackerConnectedHTCX structure is defined as:
// Provided by XR_HTCX_vive_tracker_interaction
typedef struct XrEventDataViveTrackerConnectedHTCX {
XrStructureType type;
const void* next;
XrViveTrackerPathsHTCX* paths;
} XrEventDataViveTrackerConnectedHTCX;
Receiving the XrEventDataViveTrackerConnectedHTCX event structure indicates that a new VIVE tracker was connected or its role changed. It is received via xrPollEvent.
New Functions
The xrEnumerateViveTrackerPathsHTCX function is defined as:
// Provided by XR_HTCX_vive_tracker_interaction
XrResult xrEnumerateViveTrackerPathsHTCX(
XrInstance instance,
uint32_t pathCapacityInput,
uint32_t* pathCountOutput,
XrViveTrackerPathsHTCX* paths);
xrEnumerateViveTrackerPathsHTCX enumerates all connected VIVE trackers to retrieve their paths under the current instance.
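Like other enumeration functions in OpenXR, it can be called with the two-call idiom, first querying the required count and then retrieving the array; a sketch:

```cpp
extern XrInstance instance;  // previously initialized

// First call: query the number of connected trackers.
uint32_t pathCount = 0;
CHK_XR(xrEnumerateViveTrackerPathsHTCX(instance, 0, &pathCount, nullptr));

// Second call: retrieve the tracker paths.
std::vector<XrViveTrackerPathsHTCX> trackerPaths(
    pathCount, {XR_TYPE_VIVE_TRACKER_PATHS_HTCX});
CHK_XR(xrEnumerateViveTrackerPathsHTCX(instance, pathCount, &pathCount,
                                       trackerPaths.data()));

for (const XrViveTrackerPathsHTCX& tracker : trackerPaths) {
    // tracker.persistentPath and tracker.rolePath identify each tracker.
}
```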
Examples
Example 1
This example illustrates how to locate a VIVE tracker which is attached on the chest. First of all, create an action with /user/vive_tracker_htcx/role/chest as the subaction path. Then, submit a suggested binding for that action to the role path plus …/input/grip/pose, for the interaction profile /interaction_profiles/htc/vive_tracker_htcx, using xrSuggestInteractionProfileBindings. To locate the tracker, create an action space from that action, with /user/vive_tracker_htcx/role/chest once again specified as the subaction path.
extern XrInstance instance; // previously initialized
extern XrSession session; // previously initialized
extern XrActionSet actionSet; // previously initialized
// Create the action with subaction path
XrPath chestTrackerRolePath;
CHK_XR(xrStringToPath(instance, "/user/vive_tracker_htcx/role/chest",
&chestTrackerRolePath));
XrAction chestPoseAction;
XrActionCreateInfo actionInfo{XR_TYPE_ACTION_CREATE_INFO};
actionInfo.actionType = XR_ACTION_TYPE_POSE_INPUT;
actionInfo.countSubactionPaths = 1;
actionInfo.subactionPaths = &chestTrackerRolePath;
CHK_XR(xrCreateAction(actionSet, &actionInfo, &chestPoseAction));
// Describe a suggested binding for that action and subaction path.
XrPath suggestedBindingPath;
CHK_XR(xrStringToPath(instance,
"/user/vive_tracker_htcx/role/chest/input/grip/pose",
&suggestedBindingPath));
std::vector<XrActionSuggestedBinding> actionSuggBindings;
XrActionSuggestedBinding actionSuggBinding;
actionSuggBinding.action = chestPoseAction;
actionSuggBinding.binding = suggestedBindingPath;
actionSuggBindings.push_back(actionSuggBinding);
// Suggest that binding for the VIVE tracker interaction profile
XrPath viveTrackerInteractionProfilePath;
CHK_XR(xrStringToPath(instance, "/interaction_profiles/htc/vive_tracker_htcx",
&viveTrackerInteractionProfilePath));
XrInteractionProfileSuggestedBinding profileSuggBindings{
XR_TYPE_INTERACTION_PROFILE_SUGGESTED_BINDING};
profileSuggBindings.interactionProfile =
viveTrackerInteractionProfilePath;
profileSuggBindings.suggestedBindings =
actionSuggBindings.data();
profileSuggBindings.countSuggestedBindings =
(uint32_t)actionSuggBindings.size();
CHK_XR(xrSuggestInteractionProfileBindings(instance, &profileSuggBindings));
// Create action space for locating tracker
XrSpace chestTrackerSpace;
XrActionSpaceCreateInfo actionSpaceInfo{XR_TYPE_ACTION_SPACE_CREATE_INFO};
actionSpaceInfo.action = chestPoseAction;
actionSpaceInfo.subactionPath = chestTrackerRolePath;
CHK_XR(xrCreateActionSpace(session, &actionSpaceInfo, &chestTrackerSpace));
Example 2
This example illustrates how to handle the VIVE tracker when it is connected
or disconnected.
When a VIVE tracker is connected or its role changed, event
XR_TYPE_EVENT_DATA_VIVE_TRACKER_CONNECTED_HTCX will be received.
The role path and persistent path of this tracker can be retrieved with this
event.
When a VIVE tracker is connected or disconnected, event
XR_TYPE_EVENT_DATA_INTERACTION_PROFILE_CHANGED will also be received.
The XrInteractionProfileState::interactionProfile will be
XR_NULL_PATH if the tracker represented by that top level path is not
connected.
extern XrInstance instance; // previously initialized
extern XrSession session; // previously initialized
extern XrEventDataBuffer xrEvent; // previously received from xrPollEvent
switch ( xrEvent.type )
{
case XR_TYPE_EVENT_DATA_VIVE_TRACKER_CONNECTED_HTCX: {
const XrEventDataViveTrackerConnectedHTCX& viveTrackerConnected =
*reinterpret_cast<XrEventDataViveTrackerConnectedHTCX*>(&xrEvent);
uint32_t nCount;
char sPersistentPath[XR_MAX_PATH_LENGTH];
CHK_XR(xrPathToString(instance,
viveTrackerConnected.paths->persistentPath,
sizeof(sPersistentPath), &nCount, sPersistentPath));
std::printf("Vive Tracker connected: %s \n", sPersistentPath);
if (viveTrackerConnected.paths->rolePath != XR_NULL_PATH) {
char sRolePath[XR_MAX_PATH_LENGTH];
CHK_XR(xrPathToString(instance,
viveTrackerConnected.paths->rolePath, sizeof(sRolePath),
&nCount, sRolePath));
std::printf(" New role is: %s\n\n", sRolePath);
} else {
std::printf(" No role path.\n\n");
}
break;
}
case XR_TYPE_EVENT_DATA_INTERACTION_PROFILE_CHANGED: {
XrPath chestTrackerRolePath;
XrInteractionProfileState xrInteractionProfileState {
XR_TYPE_INTERACTION_PROFILE_STATE};
CHK_XR(xrStringToPath(instance, "/user/vive_tracker_htcx/role/chest",
&chestTrackerRolePath));
CHK_XR(xrGetCurrentInteractionProfile(session, chestTrackerRolePath,
&xrInteractionProfileState));
break;
}
}
Issues
Version History
- Revision 1, 2021-09-23 (Kyle Chen)
  - Initial extension description.
- Revision 2, 2022-09-08 (Rylie Pavlik, Collabora, Ltd.)
  - Mark event type as returned-only, updating the implicit valid usage.
- Revision 3, 2022-05-19 (Rune Berg, Valve Corporation)
  - Add new wrist and ankle roles to match additional OpenVR roles.
13.3. XR_MNDX_egl_enable
- Name String
  - XR_MNDX_egl_enable
- Extension Type
  - Instance extension
- Registered Extension Number
  - 49
- Revision
  - 2
- Ratification Status
  - Not ratified
- Extension and Version Dependencies
- Last Modified Date
  - 2023-12-02
- IP Status
  - No known IP claims.
- Contributors
  - Jakob Bornecrantz, Collabora
  - Drew DeVault, Individual
  - Simon Ser, Individual
Overview
This extension must be provided by runtimes supporting applications using the EGL API to create rendering contexts.
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_GRAPHICS_BINDING_EGL_MNDX
New Enums
New Structures
The XrGraphicsBindingEGLMNDX structure is defined as:
// Provided by XR_MNDX_egl_enable
typedef struct XrGraphicsBindingEGLMNDX {
XrStructureType type;
const void* next;
PFN_xrEglGetProcAddressMNDX getProcAddress;
EGLDisplay display;
EGLConfig config;
EGLContext context;
} XrGraphicsBindingEGLMNDX;
When creating an EGL based XrSession, the application will provide a
pointer to an XrGraphicsBindingEGLMNDX structure in the next
chain of the XrSessionCreateInfo.
The required window system configuration define to expose this structure type is XR_USE_PLATFORM_EGL.
New Functions
New Function Pointers
typedef PFN_xrVoidFunction (*PFN_xrEglGetProcAddressMNDX)(const char *name);
eglGetProcAddress returns the address of the client API or EGL function named by procname. For details please see https://registry.khronos.org/EGL/sdk/docs/man/html/eglGetProcAddress.xhtml
Issues
Version History
-
Revision 1, 2020-05-20 (Jakob Bornecrantz)
-
Initial draft
-
-
Revision 2, 2023-12-02
-
Use PFN_xrEglGetProcAddressMNDX to replace PFNEGLGETPROCADDRESSPROC (for eglGetProcAddress). Note this does change function pointer attributes on some platforms.
-
13.4. XR_MNDX_force_feedback_curl
- Name String
-
XR_MNDX_force_feedback_curl - Extension Type
-
Instance extension
- Registered Extension Number
-
376
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Last Modified Date
-
2022-11-18
- IP Status
-
No known IP claims.
- Contributors
-
Daniel Willmott
Moses Turner (Collabora, Ltd.)
Christoph Haag (Collabora, Ltd.)
Jakob Bornecrantz (Collabora, Ltd.)
Overview
This extension provides APIs for force feedback devices capable of restricting physical movement in a single direction along a single dimension.
The intended use for this extension is to provide simple force feedback capabilities to restrict finger movement for VR Gloves.
The application must also enable the XR_EXT_hand_tracking extension
in order to use this extension.
The XrForceFeedbackCurlLocationMNDX enumeration describes which location to apply force feedback to.
// Provided by XR_MNDX_force_feedback_curl
typedef enum XrForceFeedbackCurlLocationMNDX {
XR_FORCE_FEEDBACK_CURL_LOCATION_THUMB_CURL_MNDX = 0,
XR_FORCE_FEEDBACK_CURL_LOCATION_INDEX_CURL_MNDX = 1,
XR_FORCE_FEEDBACK_CURL_LOCATION_MIDDLE_CURL_MNDX = 2,
XR_FORCE_FEEDBACK_CURL_LOCATION_RING_CURL_MNDX = 3,
XR_FORCE_FEEDBACK_CURL_LOCATION_LITTLE_CURL_MNDX = 4,
XR_FORCE_FEEDBACK_CURL_LOCATION_MAX_ENUM_MNDX = 0x7FFFFFFF
} XrForceFeedbackCurlLocationMNDX;
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_SYSTEM_FORCE_FEEDBACK_CURL_PROPERTIES_MNDX -
XR_TYPE_FORCE_FEEDBACK_CURL_APPLY_LOCATIONS_MNDX
New Enums
New Structures
The XrSystemForceFeedbackCurlPropertiesMNDX structure is defined as:
// Provided by XR_MNDX_force_feedback_curl
typedef struct XrSystemForceFeedbackCurlPropertiesMNDX {
XrStructureType type;
void* next;
XrBool32 supportsForceFeedbackCurl;
} XrSystemForceFeedbackCurlPropertiesMNDX;
An application may inspect whether the system is capable of force feedback by chaining an XrSystemForceFeedbackCurlPropertiesMNDX structure to the XrSystemProperties structure when calling xrGetSystemProperties.
The runtime should return XR_TRUE for supportsForceFeedbackCurl
when force feedback is available in the system, otherwise XR_FALSE.
Force feedback calls must return XR_ERROR_FEATURE_UNSUPPORTED if
force feedback is not available in the system.
The XrForceFeedbackCurlApplyLocationsMNDX structure is defined as:
// Provided by XR_MNDX_force_feedback_curl
typedef struct XrForceFeedbackCurlApplyLocationsMNDX {
XrStructureType type;
const void* next;
uint32_t locationCount;
XrForceFeedbackCurlApplyLocationMNDX* locations;
} XrForceFeedbackCurlApplyLocationsMNDX;
Contains an array of XrForceFeedbackCurlApplyLocationMNDX structures describing the locations to apply force feedback to.
The XrForceFeedbackCurlApplyLocationMNDX structure is defined as:
// Provided by XR_MNDX_force_feedback_curl
typedef struct XrForceFeedbackCurlApplyLocationMNDX {
XrForceFeedbackCurlLocationMNDX location;
float value;
} XrForceFeedbackCurlApplyLocationMNDX;
value is specified as a limit in a single direction.
For example, if the value specified is 0.5, a location must have free
movement from the point where it would be incapable of movement if
value was 1, to 0.5 of the range the location is capable of moving.
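This convention can be restated numerically: a value of v leaves a fraction (1 - v) of the location's range freely movable. The helper below is a hypothetical illustration of that arithmetic, not part of the extension's API:

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical helper illustrating the semantics of the force feedback
// "value" field: a value of v restricts movement so that the fraction of
// the location's full range of motion that remains freely movable is
// (1 - v). The input is clamped to [0, 1] for illustration.
float freeMovementFraction(float value) {
    value = std::min(std::max(value, 0.0f), 1.0f);
    return 1.0f - value;
}
```

For example, with value = 0.5 the location retains free movement over half of its range, and with value = 1 it is fully restricted.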
New Functions
The xrApplyForceFeedbackCurlMNDX function is defined as:
// Provided by XR_MNDX_force_feedback_curl
XrResult xrApplyForceFeedbackCurlMNDX(
XrHandTrackerEXT handTracker,
const XrForceFeedbackCurlApplyLocationsMNDX* locations);
The xrApplyForceFeedbackCurlMNDX function applies force feedback to the set of locations listed in XrForceFeedbackCurlApplyLocationsMNDX.
xrApplyForceFeedbackCurlMNDX should be called every time an application wishes to update a set of force feedback locations.
Submits a request for force feedback for a set of locations.
The runtime should deliver this request to the handTracker device.
If the handTracker device is not available, the runtime may ignore
this request for force feedback.
If the session associated with handTracker is not focused, the runtime
must return XR_SESSION_NOT_FOCUSED, and not apply force feedback.
When an application submits force feedback for a set of locations, the runtime must update the set of locations to that specified by the application. A runtime must set any locations not specified by the application when submitting force feedback to 0.
The runtime may discontinue force feedback if the application that set it loses focus. An application should call the function again after regaining focus if force feedback is still desired.
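The rule that unspecified locations are reset to 0 on each submission can be mirrored on the application side. The sketch below is an illustrative helper (the struct and constants are stand-ins for the extension's types, not the real openxr.h definitions) that expands a sparse location list into a dense per-location array, zeroing every location the application did not mention:

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Illustrative stand-in for the extension's location enum range
// (thumb..little = 0..4).
constexpr int kCurlLocationCount = 5;

// Mirrors the shape of XrForceFeedbackCurlApplyLocationMNDX.
struct CurlLocation {
    int   location;  // XrForceFeedbackCurlLocationMNDX value, 0..4
    float value;     // force feedback amount, 0.0 .. 1.0
};

// Expand a sparse location list into a dense array, zeroing all locations
// not present in the list -- matching the runtime rule that locations not
// specified by the application are set to 0 on each submission.
std::array<float, kCurlLocationCount>
expandLocations(const CurlLocation* locations, uint32_t locationCount) {
    std::array<float, kCurlLocationCount> dense{};  // zero-initialized
    for (uint32_t i = 0; i < locationCount; ++i) {
        const int loc = locations[i].location;
        if (loc >= 0 && loc < kCurlLocationCount)
            dense[loc] = locations[i].value;
    }
    return dense;
}
```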
Issues
Version History
-
Revision 1, 2022-09-07 (Daniel Willmott)
-
Initial version
-
14. List of Deprecated Extensions
-
XR_KHR_locate_spaces (promoted to core) -
XR_KHR_maintenance1 (promoted to core) -
XR_EXT_hp_mixed_reality_controller (promoted to core) -
XR_EXT_local_floor (promoted to core) -
XR_EXT_palm_pose (promoted to core) -
XR_EXT_samsung_odyssey_controller (promoted to core) -
XR_EXT_uuid (promoted to core) -
XR_BD_controller_interaction (promoted to core) -
XR_FB_touch_controller_pro (promoted to core) -
XR_HTC_vive_cosmos_controller_interaction (promoted to core) -
XR_HTC_vive_focus3_controller_interaction (promoted to core) -
XR_META_touch_controller_plus (promoted to core) -
XR_ML_ml2_controller_interaction (promoted to core) -
XR_VARJO_quad_views (promoted to core)
14.1. XR_KHR_locate_spaces
- Name String
-
XR_KHR_locate_spaces - Extension Type
-
Instance extension
- Registered Extension Number
-
472
- Revision
-
1
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- Deprecation State
-
-
Promoted to OpenXR 1.1
-
- Last Modified Date
-
2024-01-19
- IP Status
-
No known IP claims.
- Contributors
-
Yin Li, Microsoft
Bryce Hutchings, Microsoft
Andreas Loeve Selvik, Meta Platforms
John Kearney, Meta Platforms
Robert Blenkinsopp, Ultraleap
Rylie Pavlik, Collabora
Ron Bessems, Magic Leap
Jakob Bornecrantz, NVIDIA
14.1.1. Overview
This extension introduces the xrLocateSpacesKHR function, which enables applications to locate an array of spaces in a single function call. Runtimes may provide performance benefits for applications that use many spaces.
Compared to the xrLocateSpace function, the new xrLocateSpacesKHR function also provides extensible input parameters that future extensions can extend with additional chained structures.
14.1.2. Locate spaces
Applications can use xrLocateSpacesKHR function to locate an array of spaces.
The xrLocateSpacesKHR function is defined as:
// Provided by XR_KHR_locate_spaces
XrResult xrLocateSpacesKHR(
XrSession session,
const XrSpacesLocateInfo* locateInfo,
XrSpaceLocations* spaceLocations);
xrLocateSpacesKHR provides the physical location of one or more spaces in a base space at a specified time, if currently known by the runtime.
The XrSpacesLocateInfoKHR::time, the
XrSpacesLocateInfoKHR::baseSpace, and each space in
XrSpacesLocateInfoKHR::spaces, in the locateInfo
parameter, all follow the same specifics as the corresponding inputs to the
xrLocateSpace function.
The XrSpacesLocateInfoKHR structure is defined as:
// Provided by XR_KHR_locate_spaces
// XrSpacesLocateInfoKHR is an alias for XrSpacesLocateInfo
typedef struct XrSpacesLocateInfo {
XrStructureType type;
const void* next;
XrSpace baseSpace;
XrTime time;
uint32_t spaceCount;
const XrSpace* spaces;
} XrSpacesLocateInfo;
typedef XrSpacesLocateInfo XrSpacesLocateInfoKHR;
The time, the baseSpace, and each space in spaces all
follow the same specifics as the corresponding inputs to the
xrLocateSpace function.
The baseSpace and all of the XrSpace handles in the spaces
array must be valid and share the same parent XrSession.
If the time is invalid, the xrLocateSpacesKHR must return
XR_ERROR_TIME_INVALID.
The spaceCount must be a positive number, i.e. the array spaces
must not be empty.
Otherwise, the runtime must return XR_ERROR_VALIDATION_FAILURE.
The XrSpaceLocationsKHR structure is defined as:
// Provided by XR_KHR_locate_spaces
// XrSpaceLocationsKHR is an alias for XrSpaceLocations
typedef struct XrSpaceLocations {
XrStructureType type;
void* next;
uint32_t locationCount;
XrSpaceLocationData* locations;
} XrSpaceLocations;
typedef XrSpaceLocations XrSpaceLocationsKHR;
The XrSpaceLocationsKHR structure contains an array of space locations
in the member locations, to be used as output for
xrLocateSpacesKHR.
The application must allocate this array to be populated with the function
output.
The locationCount value must be the same as
XrSpacesLocateInfoKHR::spaceCount, otherwise, the
xrLocateSpacesKHR function must return
XR_ERROR_VALIDATION_FAILURE.
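These count rules can also be checked up front by the application before calling into the runtime. The sketch below mirrors the required runtime-side validation; the result constants are illustrative stand-ins, not the actual openxr.h values:

```cpp
#include <cassert>
#include <cstdint>

// Stand-in result codes for illustration only (actual values come from
// the XrResult enumeration in openxr.h).
enum Result { SUCCESS = 0, ERROR_VALIDATION_FAILURE = -1 };

// Mirror of the count validation rules for xrLocateSpacesKHR:
// the input spaces array must be non-empty, and the output
// locationCount must equal the input spaceCount; otherwise the
// runtime must return XR_ERROR_VALIDATION_FAILURE.
Result validateLocateCounts(uint32_t spaceCount, uint32_t locationCount) {
    if (spaceCount == 0)
        return ERROR_VALIDATION_FAILURE;
    if (locationCount != spaceCount)
        return ERROR_VALIDATION_FAILURE;
    return SUCCESS;
}
```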
The XrSpaceLocationDataKHR structure is defined as:
// Provided by XR_KHR_locate_spaces
// XrSpaceLocationDataKHR is an alias for XrSpaceLocationData
typedef struct XrSpaceLocationData {
XrSpaceLocationFlags locationFlags;
XrPosef pose;
} XrSpaceLocationData;
typedef XrSpaceLocationData XrSpaceLocationDataKHR;
This is a single element of the array in
XrSpaceLocationsKHR::locations, and is used to return the pose
and location flags for a single space with respect to the specified base
space from a call to xrLocateSpacesKHR.
It does not accept chained structures to allow for easier use in dynamically
allocated container datatypes.
Chained structures are possible with the XrSpaceLocationsKHR that
describes an array of these elements.
14.1.3. Locate space velocities
Applications can request the velocities of spaces by chaining the XrSpaceVelocitiesKHR structure to the next pointer of XrSpaceLocationsKHR when calling xrLocateSpacesKHR.
The XrSpaceVelocitiesKHR structure is defined as:
// Provided by XR_KHR_locate_spaces
// XrSpaceVelocitiesKHR is an alias for XrSpaceVelocities
typedef struct XrSpaceVelocities {
XrStructureType type;
void* next;
uint32_t velocityCount;
XrSpaceVelocityData* velocities;
} XrSpaceVelocities;
typedef XrSpaceVelocities XrSpaceVelocitiesKHR;
The XrSpaceVelocitiesKHR structure contains an array of space velocities in the
member velocities, to be used as output for xrLocateSpacesKHR.
The application must allocate this array to be populated with the function
output.
The velocityCount value must be the same as
XrSpacesLocateInfoKHR::spaceCount, otherwise, the
xrLocateSpacesKHR function must return
XR_ERROR_VALIDATION_FAILURE.
The XrSpaceVelocityDataKHR structure is defined as:
// Provided by XR_KHR_locate_spaces
// XrSpaceVelocityDataKHR is an alias for XrSpaceVelocityData
typedef struct XrSpaceVelocityData {
XrSpaceVelocityFlags velocityFlags;
XrVector3f linearVelocity;
XrVector3f angularVelocity;
} XrSpaceVelocityData;
typedef XrSpaceVelocityData XrSpaceVelocityDataKHR;
This is a single element of the array in
XrSpaceVelocitiesKHR::velocities, and is used to return the
linear and angular velocity and velocity flags for a single space with
respect to the specified base space from a call to xrLocateSpacesKHR.
It does not accept chained structures to allow for easier use in dynamically
allocated container datatypes.
14.1.4. Example code for xrLocateSpacesKHR
The following example code shows how an application retrieves both the location and velocity of one or more spaces in a base space at a given time using the xrLocateSpacesKHR function.
XrInstance instance; // previously initialized
XrSession session; // previously initialized
XrSpace baseSpace; // previously initialized
std::vector<XrSpace> spacesToLocate; // previously initialized
// Prepare output buffers to receive data and get reused in frame loop.
std::vector<XrSpaceLocationDataKHR> locationBuffer(spacesToLocate.size());
std::vector<XrSpaceVelocityDataKHR> velocityBuffer(spacesToLocate.size());
// Get function pointer for xrLocateSpacesKHR.
PFN_xrLocateSpacesKHR xrLocateSpacesKHR;
CHK_XR(xrGetInstanceProcAddr(instance, "xrLocateSpacesKHR",
reinterpret_cast<PFN_xrVoidFunction*>(
&xrLocateSpacesKHR)));
// application frame loop
while (1) {
// Typically the time is the predicted display time returned from xrWaitFrame.
XrTime displayTime; // previously initialized.
XrSpacesLocateInfoKHR locateInfo{XR_TYPE_SPACES_LOCATE_INFO_KHR};
locateInfo.baseSpace = baseSpace;
locateInfo.time = displayTime;
locateInfo.spaceCount = (uint32_t)spacesToLocate.size();
locateInfo.spaces = spacesToLocate.data();
XrSpaceLocationsKHR locations{XR_TYPE_SPACE_LOCATIONS_KHR};
locations.locationCount = (uint32_t)locationBuffer.size();
locations.locations = locationBuffer.data();
XrSpaceVelocitiesKHR velocities{XR_TYPE_SPACE_VELOCITIES_KHR};
velocities.velocityCount = (uint32_t)velocityBuffer.size();
velocities.velocities = velocityBuffer.data();
locations.next = &velocities;
CHK_XR(xrLocateSpacesKHR(session, &locateInfo, &locations));
for (uint32_t i = 0; i < spacesToLocate.size(); i++) {
const auto positionAndOrientationTracked =
XR_SPACE_LOCATION_POSITION_TRACKED_BIT | XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT;
const auto orientationOnlyTracked = XR_SPACE_LOCATION_ORIENTATION_TRACKED_BIT;
if ((locationBuffer[i].locationFlags & positionAndOrientationTracked) == positionAndOrientationTracked) {
// if the location is 6dof tracked
do_something(locationBuffer[i].pose.position);
do_something(locationBuffer[i].pose.orientation);
const auto velocityValidBits =
XR_SPACE_VELOCITY_LINEAR_VALID_BIT | XR_SPACE_VELOCITY_ANGULAR_VALID_BIT;
if ((velocityBuffer[i].velocityFlags & velocityValidBits) == velocityValidBits) {
do_something(velocityBuffer[i].linearVelocity);
do_something(velocityBuffer[i].angularVelocity);
}
}
else if ((locationBuffer[i].locationFlags & orientationOnlyTracked) == orientationOnlyTracked) {
// if the location is 3dof tracked
do_something(locationBuffer[i].pose.orientation);
if ((velocityBuffer[i].velocityFlags & XR_SPACE_VELOCITY_ANGULAR_VALID_BIT) == XR_SPACE_VELOCITY_ANGULAR_VALID_BIT) {
do_something(velocityBuffer[i].angularVelocity);
}
}
}
}
New Object Types
New Flag Types
New Enum Constants
XrStructureType enumeration is extended with:
-
XR_TYPE_SPACES_LOCATE_INFO_KHR -
XR_TYPE_SPACE_LOCATIONS_KHR -
XR_TYPE_SPACE_VELOCITIES_KHR
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2023-04-22 (Yin Li)
-
Initial extension description
-
14.2. XR_KHR_maintenance1
- Name String
-
XR_KHR_maintenance1 - Extension Type
-
Instance extension
- Registered Extension Number
-
711
- Revision
-
1
- Ratification Status
-
Ratified
- Extension and Version Dependencies
- API Interactions
-
-
Interacts with
XR_BD_controller_interaction -
Interacts with
XR_EXT_hand_interaction -
Interacts with
XR_EXT_hp_mixed_reality_controller -
Interacts with
XR_EXT_samsung_odyssey_controller -
Interacts with
XR_FB_touch_controller_pro -
Interacts with
XR_HTCX_vive_tracker_interaction -
Interacts with
XR_HTC_hand_interaction -
Interacts with
XR_HTC_vive_cosmos_controller_interaction -
Interacts with
XR_HTC_vive_focus3_controller_interaction -
Interacts with
XR_HUAWEI_controller_interaction -
Interacts with
XR_LOGITECH_mx_ink_stylus_interaction -
Interacts with
XR_META_touch_controller_plus -
Interacts with
XR_ML_ml2_controller_interaction -
Interacts with
XR_MSFT_hand_interaction -
Interacts with
XR_OPPO_controller_interaction -
Interacts with
XR_VARJO_xr4_controller_interaction -
Interacts with
XR_YVR_controller_interaction
-
- Deprecation State
-
-
Promoted to OpenXR 1.1
-
- Last Modified Date
-
2023-10-25
- IP Status
-
No known IP claims.
- Contributors
-
Ron Bessems, Magic Leap
Karthik Kadappan, Magic Leap
Rylie Pavlik, Collabora
Nihav Jain, Google
Lachlan Ford, Google
John Kearney, Meta
Yin Li, Microsoft
Robert Blenkinsopp, Ultraleap
14.2.1. Overview
XR_KHR_maintenance1 adds a collection of minor features that were
intentionally left out or overlooked from the original OpenXR 1.0 release.
All are promoted to the OpenXR 1.1 release.
// Provided by XR_KHR_maintenance1
// XrColor3fKHR is an alias for XrColor3f
typedef struct XrColor3f {
float r;
float g;
float b;
} XrColor3f;
typedef XrColor3f XrColor3fKHR;
// Provided by XR_KHR_maintenance1
// XrExtent3DfKHR is an alias for XrExtent3Df
typedef struct XrExtent3Df {
float width;
float height;
float depth;
} XrExtent3Df;
typedef XrExtent3Df XrExtent3DfKHR;
// Provided by XR_KHR_maintenance1
// XrSpherefKHR is an alias for XrSpheref
typedef struct XrSpheref {
XrPosef center;
float radius;
} XrSpheref;
typedef XrSpheref XrSpherefKHR;
// Provided by XR_KHR_maintenance1
// XrBoxfKHR is an alias for XrBoxf
typedef struct XrBoxf {
XrPosef center;
XrExtent3Df extents;
} XrBoxf;
typedef XrBoxf XrBoxfKHR;
// Provided by XR_KHR_maintenance1
// XrFrustumfKHR is an alias for XrFrustumf
typedef struct XrFrustumf {
XrPosef pose;
XrFovf fov;
float nearZ;
float farZ;
} XrFrustumf;
typedef XrFrustumf XrFrustumfKHR;
14.2.3. New Enum Constants
-
XR_KHR_MAINTENANCE1_EXTENSION_NAME -
XR_KHR_maintenance1_SPEC_VERSION -
Extending XrResult:
-
XR_ERROR_EXTENSION_DEPENDENCY_NOT_ENABLED_KHR -
XR_ERROR_PERMISSION_INSUFFICIENT_KHR
-
14.3. XR_EXT_hp_mixed_reality_controller
- Name String
-
XR_EXT_hp_mixed_reality_controller - Extension Type
-
Instance extension
- Registered Extension Number
-
96
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- API Interactions
-
-
Interacts with
XR_EXT_dpad_binding -
Interacts with
XR_EXT_hand_interaction -
Interacts with
XR_EXT_palm_pose
-
- Deprecation State
-
-
Promoted to OpenXR 1.1
-
- Last Modified Date
-
2020-06-08
- IP Status
-
No known IP claims.
- Contributors
-
Alain Zanchetta, Microsoft
Lachlan Ford, Microsoft
Alex Turner, Microsoft
Yin Li, Microsoft
Nathan Nuber, HP Inc.
Overview
This extension added a new interaction profile path for the HP Reverb G2 Controllers:
-
/interaction_profiles/hp/mixed_reality_controller
Valid for the user paths
-
/user/hand/left
-
/user/hand/right
Supported component paths:
-
On /user/hand/left only
-
…/input/x/click
-
…/input/y/click
-
-
On /user/hand/right only
-
…/input/a/click
-
…/input/b/click
-
-
On both hands
-
…/input/menu/click
-
…/input/squeeze/value
-
…/input/trigger/value
-
…/input/thumbstick/x
-
…/input/thumbstick/y
-
…/input/thumbstick/click
-
…/input/grip/pose
-
…/input/aim/pose
-
…/output/haptic
-
Version History
-
Revision 1, 2020-06-08 (Yin Li)
-
Initial extension proposal
-
14.4. XR_EXT_local_floor
- Name String
-
XR_EXT_local_floor - Extension Type
-
Instance extension
- Registered Extension Number
-
427
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Deprecation State
-
-
Promoted to OpenXR 1.1
-
- Last Modified Date
-
2022-11-28
- IP Status
-
No known IP claims.
- Contributors
-
John Kearney, Meta
Alex Turner, Microsoft
Yin Li, Microsoft
Cass Everitt, Meta
- Contacts
-
John Kearney, Meta
Overview
The core OpenXR spec contains two world-locked reference space XrSpace
types in XrReferenceSpaceType, XR_REFERENCE_SPACE_TYPE_LOCAL and
XR_REFERENCE_SPACE_TYPE_STAGE with a design goal that LOCAL space
gets the user positioned correctly in XZ space and STAGE gets the user
positioned correctly in Y space.
As defined in the core OpenXR spec, LOCAL space is useful when an
application needs to render seated-scale content that is not positioned
relative to the physical floor and STAGE space is useful when an
application needs to render standing-scale content that is relative to the
physical floor.
The core OpenXR specification describes that standing-scale experiences
are meant to use the STAGE reference space.
However, using the STAGE forces the user to move to the stage space in
order to operate their experience, rather than just standing locally where
they are.
Definition of the space
Similar to LOCAL space, the LOCAL_FLOOR reference space
(XR_REFERENCE_SPACE_TYPE_LOCAL_FLOOR_EXT) establishes a world-locked
origin, gravity-aligned to exclude pitch and roll, with +Y up, +X to the
right, and -Z forward.
The location of the origin of the LOCAL_FLOOR space must match the
LOCAL space in the X and Z coordinates but not in the Y coordinate.
The orientation of the LOCAL_FLOOR space must match the LOCAL space.
If the STAGE space is supported, then the floor level (Y coordinate) of
the LOCAL_FLOOR space and the STAGE space must match.
If the STAGE space is not supported, then the runtime must give a best
estimate of the floor level.
Note: The LOCAL_FLOOR space could be implemented by an application without
support from the runtime by using the difference in the Y coordinate of the
poses of the LOCAL and STAGE reference spaces.
When this extension is enabled, a runtime must support
XR_REFERENCE_SPACE_TYPE_LOCAL_FLOOR_EXT (in
xrEnumerateReferenceSpaces).
When a user needs to recenter LOCAL space, the LOCAL_FLOOR space will
also be recentered.
When such a recentering occurs, the runtime must queue the
XrEventDataReferenceSpaceChangePending event, with the recentered
LOCAL_FLOOR space origin only taking effect for xrLocateSpace or
xrLocateViews calls whose XrTime parameter is greater than or
equal to the changeTime provided in that event.
Additionally, when the runtime changes the floor level (or the floor level
estimate), the runtime must queue this event.
New Object Types
New Flag Types
New Enum Constants
XrReferenceSpaceType enumeration is extended with:
-
XR_REFERENCE_SPACE_TYPE_LOCAL_FLOOR_EXT
New Enums
New Structures
Examples
If a runtime does not support the local floor extension, an application can
construct an equivalent space using the LOCAL and STAGE spaces.
extern XrSession session;
extern bool supportsStageSpace;
extern bool supportsLocalFloorExtension;
extern XrTime curtime; // previously initialized
XrSpace localFloorSpace = XR_NULL_HANDLE;
if (supportsLocalFloorExtension)
{
XrReferenceSpaceCreateInfo localFloorCreateInfo{XR_TYPE_REFERENCE_SPACE_CREATE_INFO};
localFloorCreateInfo.poseInReferenceSpace = {{0.f, 0.f, 0.f, 1.f}, {0.f, 0.f, 0.f}};
localFloorCreateInfo.referenceSpaceType = XR_REFERENCE_SPACE_TYPE_LOCAL_FLOOR_EXT;
CHK_XR(xrCreateReferenceSpace(session, &localFloorCreateInfo, &localFloorSpace));
}
else if (supportsStageSpace)
{
XrSpace localSpace = XR_NULL_HANDLE;
XrSpace stageSpace = XR_NULL_HANDLE;
XrReferenceSpaceCreateInfo createInfo{XR_TYPE_REFERENCE_SPACE_CREATE_INFO};
createInfo.poseInReferenceSpace.orientation.w = 1.f;
createInfo.referenceSpaceType = XR_REFERENCE_SPACE_TYPE_LOCAL;
CHK_XR(xrCreateReferenceSpace(session, &createInfo, &localSpace));
createInfo.referenceSpaceType = XR_REFERENCE_SPACE_TYPE_STAGE;
CHK_XR(xrCreateReferenceSpace(session, &createInfo, &stageSpace));
XrSpaceLocation stageLoc{XR_TYPE_SPACE_LOCATION};
CHK_XR(xrLocateSpace(stageSpace, localSpace, curtime, &stageLoc));
CHK_XR(xrDestroySpace(localSpace));
CHK_XR(xrDestroySpace(stageSpace));
float floorOffset = stageLoc.pose.position.y;
XrReferenceSpaceCreateInfo localFloorCreateInfo{XR_TYPE_REFERENCE_SPACE_CREATE_INFO};
localFloorCreateInfo.referenceSpaceType = XR_REFERENCE_SPACE_TYPE_LOCAL;
localFloorCreateInfo.poseInReferenceSpace = {{0.f, 0.f, 0.f, 1.f}, {0.f, floorOffset, 0.f}};
CHK_XR(xrCreateReferenceSpace(session, &localFloorCreateInfo, &localFloorSpace));
}
else
{
// We do not support local floor or stage - make an educated guess
float floorOffset = -1.5f;
XrReferenceSpaceCreateInfo localFloorCreateInfo{XR_TYPE_REFERENCE_SPACE_CREATE_INFO};
localFloorCreateInfo.referenceSpaceType = XR_REFERENCE_SPACE_TYPE_LOCAL;
localFloorCreateInfo.poseInReferenceSpace = {{0.f, 0.f, 0.f, 1.f}, {0.f, floorOffset, 0.f}};
CHK_XR(xrCreateReferenceSpace(session, &localFloorCreateInfo, &localFloorSpace));
}
Issues
None
Version History
-
Revision 1, 2022-11-28 (John Kearney)
-
Initial draft
-
14.5. XR_EXT_palm_pose
- Name String
-
XR_EXT_palm_pose - Extension Type
-
Instance extension
- Registered Extension Number
-
177
- Revision
-
3
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Deprecation State
-
-
Promoted to OpenXR 1.1
-
- Last Modified Date
-
2022-05-23
- IP Status
-
No known IP claims.
- Contributors
-
Jack Pritz, Unity Technologies
Joe Ludwig, Valve
Rune Berg, Valve
John Kearney, Facebook
Peter Kuhn, Unity Technologies
Lachlan Ford, Microsoft
Overview
This extension defines a new "standard pose identifier" for interaction profiles, named "palm_ext". The new identifier is a pose that can be used to place application-specific visual content such as avatar visuals that may or may not match human hands. This extension also adds a new input component path using this "palm_ext" pose identifier to existing interaction profiles when active.
The application can use the …/input/palm_ext/pose component path to place visual content representing the user’s physical hand location. Application visuals may depict, for example, realistic human hands that are very simply animated or creative depictions such as an animal, an alien, or robot limb extremity.
Note that this is not intended to be an alternative to extensions that perform hand tracking for more complex use cases: the use of "palm" in the name is to reflect that it is a user-focused pose rather than a held-object-focused pose.
|
Note
OpenXR 1.1 replaces …/input/palm_ext/pose with …/input/grip_surface/pose. The definitions of both poses are identical. |
Pose Identifier
When this extension is active, a runtime must behave as if the following were added to the list of Standard pose identifiers.
-
palm_ext - a pose that allows applications to reliably anchor visual content relative to the user’s physical hand, whether the user’s hand is tracked directly or its position and orientation is inferred by a physical controller. The palm pose is defined as follows:
-
The palm position: The user’s physical palm centroid, at the surface of the palm.
-
The palm orientation’s +X axis: When a user is holding the controller and straightens their index finger, the ray that is normal to the user’s palm (away from the palm in the left hand, into the palm in the right hand).
-
The palm orientation’s -Z axis: When a user is holding the controller and straightens their index finger, the ray that is parallel to their finger’s pointing direction.
-
The palm orientation’s +Y axis: orthogonal to +Z and +X using the right-hand rule.
-
This pose is explicitly static for rigid controller type devices. The pose of …/input/palm_ext/pose and …/input/grip_surface/pose must be identical.
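The right-hand-rule relationship among the palm axes can be sketched numerically. In the hypothetical helper below (illustrative only, not part of any OpenXR API), the +Y axis is recovered as the cross product Z × X of the other two basis vectors, given the +X axis and the -Z (forward) axis:

```cpp
#include <array>
#include <cassert>

using Vec3 = std::array<float, 3>;

// Cross product a x b.
Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a[1] * b[2] - a[2] * b[1],
             a[2] * b[0] - a[0] * b[2],
             a[0] * b[1] - a[1] * b[0] };
}

// Given the palm's +X axis and its -Z (forward) axis, negate the latter
// to obtain +Z, then recover +Y = Z x X per the right-hand rule.
Vec3 palmYAxis(const Vec3& xAxis, const Vec3& negZAxis) {
    const Vec3 zAxis = { -negZAxis[0], -negZAxis[1], -negZAxis[2] };
    return cross(zAxis, xAxis);
}
```

With the identity basis (+X = (1,0,0), forward -Z = (0,0,-1)), this recovers +Y = (0,1,0), as expected.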
Interaction Profile Additions
When this extension is active, a runtime must accept the …/input/palm_ext/pose component path for all interaction profiles that are valid for at least one of the user paths listed below, including those interaction profiles enabled through extensions. Actions bound to such palm input component paths must behave as though those paths were listed in the original definition of an interaction profile.
Valid for the user paths
-
/user/hand/left
-
/user/hand/right
Supported component paths:
-
On both user paths
-
…/input/palm_ext/pose
-
|
Note
While this extension itself does not add the
…/input/palm_ext/pose input component path to interaction
profiles defined in extensions, extension authors may update existing
extensions to add this path, or submit new extensions defining new
interaction profiles using this pose identifier and component path.
For consistency, it is recommended that the …/input/palm_ext/pose
path in extension-defined interaction profiles be specified as only valid
when this extension is also enabled. |
This extension does pose a challenge to API layer implementers attempting to
provide interaction profile support through their layer.
Version History
-
Revision 1, 2020-07-26 (Jack Pritz)
-
Initial extension proposal
-
-
Revision 2, 2022-05-18 (Lachlan Ford)
-
Modification and cleanup of extension proposal based on working group discussion.
-
-
Revision 3, 2023-11-16 (Ron Bessems)
-
Notes and clarification for the addition of …/input/grip_surface/pose to the core spec in OpenXR 1.1.
-
14.6. XR_EXT_samsung_odyssey_controller
- Name String
-
XR_EXT_samsung_odyssey_controller - Extension Type
-
Instance extension
- Registered Extension Number
-
95
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- API Interactions
-
-
Interacts with
XR_EXT_dpad_binding -
Interacts with
XR_EXT_hand_interaction -
Interacts with
XR_EXT_palm_pose
-
- Deprecation State
-
-
Promoted to OpenXR 1.1
-
- Last Modified Date
-
2020-06-08
- IP Status
-
No known IP claims.
- Contributors
-
Lachlan Ford, Microsoft
Alex Turner, Microsoft
Yin Li, Microsoft
Philippe Harscoet, Samsung Electronics
Overview
This extension enables the application to differentiate the newer form factor of motion controller released with the Samsung Odyssey headset. It enables the application to customize the appearance and experience of the controller differently from the original mixed reality motion controller.
This extension added a new interaction profile /interaction_profiles/samsung/odyssey_controller to describe the Odyssey controller. The action bindings of this interaction profile work exactly the same as the /interaction_profiles/microsoft/motion_controller in terms of valid user paths and supported input and output component paths.
If the application does not do its own custom rendering for specific motion controllers, it should avoid using this extension and instead just use …/microsoft/motion_controller, as runtimes should treat both controllers equally when applications declare action bindings only for that profile.
If the application wants to customize rendering for specific motion controllers, it should set up the suggested bindings for …/samsung/odyssey_controller the same as …/microsoft/motion_controller when calling xrSuggestInteractionProfileBindings, and expect the same action bindings. Then the application can listen to the XrEventDataInteractionProfileChanged event and inspect the returned interaction profile from xrGetCurrentInteractionProfile to differentiate which controller is being used by the user, and hence customize the appearance or experience of the motion controller specifically for the form factor of …/samsung/odyssey_controller.
Version History
-
Revision 1, 2020-06-08 (Yin Li)
-
Initial extension proposal
-
14.7. XR_EXT_uuid
- Name String
-
XR_EXT_uuid - Extension Type
-
Instance extension
- Registered Extension Number
-
300
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Deprecation State
-
-
Promoted to OpenXR 1.1
-
- Last Modified Date
-
2021-10-27
- IP Status
-
No known IP claims.
- Contributors
-
Darryl Gough, Microsoft
Yin Li, Microsoft
Alex Turner, Microsoft
David Fields, Microsoft
Overview
This extension defines a Universally Unique Identifier that follows RFC 4122.
The XrUuidEXT structure is a 128-bit Universally Unique Identifier and is defined as:
// Provided by XR_EXT_uuid
// XrUuidEXT is an alias for XrUuid
typedef struct XrUuid {
uint8_t data[XR_UUID_SIZE];
} XrUuid;
typedef XrUuid XrUuidEXT;
The structure is composed of 16 octets, with the size and order of the fields defined in RFC 4122 section 4.1.2.
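For illustration, the octet order can be made concrete by printing the structure in the canonical RFC 4122 textual form (8-4-4-4-12 hexadecimal digits). The struct definition is repeated so the sketch is self-contained; uuid_to_string is a hypothetical helper, not part of the extension:

```c
#include <stdio.h>
#include <stdint.h>

#define XR_UUID_SIZE 16

typedef struct XrUuid {
    uint8_t data[XR_UUID_SIZE];
} XrUuid;

/* Formats the 16 octets into the canonical RFC 4122 string form.
 * 'out' must have room for 37 bytes (36 characters plus NUL). */
static void uuid_to_string(const XrUuid *uuid, char out[37])
{
    const uint8_t *d = uuid->data;
    snprintf(out, 37,
             "%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-"
             "%02x%02x%02x%02x%02x%02x",
             d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7],
             d[8], d[9], d[10], d[11], d[12], d[13], d[14], d[15]);
}
```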
New Object Types
New Flag Types
New Enum Constants
-
XR_UUID_SIZE_EXT
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2021-10-27 (Darryl Gough)
-
Initial extension description
-
14.8. XR_BD_controller_interaction
- Name String
-
XR_BD_controller_interaction
- Extension Type
-
Instance extension
- Registered Extension Number
-
385
- Revision
-
2
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- API Interactions
-
-
Interacts with
XR_EXT_dpad_binding -
Interacts with
XR_EXT_hand_interaction -
Interacts with
XR_EXT_palm_pose
-
- Deprecation State
-
-
Promoted to OpenXR 1.1
-
- Last Modified Date
-
2023-08-10
- IP Status
-
No known IP claims.
- Contributors
-
Baolin Fu, ByteDance
Shanliang Xu, ByteDance
Zhanrui Jia, ByteDance
Overview
This extension defines the interaction profile for PICO Neo3, PICO 4, and PICO G3 Controllers.
BD (ByteDance) Controller interaction profile
Interaction profile path for PICO Neo3:
-
/interaction_profiles/bytedance/pico_neo3_controller
Interaction profile path for PICO 4:
-
/interaction_profiles/bytedance/pico4_controller
Interaction profile path for PICO G3:
-
/interaction_profiles/bytedance/pico_g3_controller
Valid for user paths for pico_neo3_controller, pico4_controller, and pico_g3_controller:
-
/user/hand/left
-
/user/hand/right
Supported component paths for pico_neo3_controller:
-
On /user/hand/left only:
-
…/input/x/click
-
…/input/x/touch
-
…/input/y/click
-
…/input/y/touch
-
-
On /user/hand/right only:
-
…/input/a/click
-
…/input/a/touch
-
…/input/b/click
-
…/input/b/touch
-
-
…/input/menu/click
-
…/input/system/click (may not be available for application use)
-
…/input/trigger/click
-
…/input/trigger/value
-
…/input/trigger/touch
-
…/input/thumbstick/y
-
…/input/thumbstick/x
-
…/input/thumbstick/click
-
…/input/thumbstick/touch
-
…/input/squeeze/click
-
…/input/squeeze/value
-
…/input/grip/pose
-
…/input/aim/pose
-
…/output/haptic
Supported component paths for pico4_controller:
-
On /user/hand/left only:
-
…/input/x/click
-
…/input/x/touch
-
…/input/y/click
-
…/input/y/touch
-
…/input/menu/click
-
-
On /user/hand/right only:
-
…/input/a/click
-
…/input/a/touch
-
…/input/b/click
-
…/input/b/touch
-
-
…/input/system/click (may not be available for application use)
-
…/input/trigger/click
-
…/input/trigger/value
-
…/input/trigger/touch
-
…/input/thumbstick/y
-
…/input/thumbstick/x
-
…/input/thumbstick/click
-
…/input/thumbstick/touch
-
…/input/squeeze/click
-
…/input/squeeze/value
-
…/input/grip/pose
-
…/input/aim/pose
-
…/output/haptic
Supported component paths for pico_g3_controller:
-
…/input/trigger/click
-
…/input/trigger/value
-
…/input/menu/click
-
…/input/grip/pose
-
…/input/aim/pose
-
…/input/thumbstick
-
…/input/thumbstick/click
Be careful with the following differences:
-
pico_neo3_controller supports …/input/menu/click both on /user/hand/left and /user/hand/right.
-
pico4_controller supports …/input/menu/click only on /user/hand/left.
-
pico_g3_controller has only one physical controller. When designing suggested bindings for this interaction profile, you may suggest bindings for both /user/hand/left and /user/hand/right. However, only one of them will be active at a given time, so do not design interactions that require simultaneous use of both hands.
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2023-01-04 (Baolin Fu)
-
Initial extension description
-
-
Revision 2, 2023-08-10 (Shanliang Xu)
-
Add support for G3 devices
-
14.9. XR_FB_touch_controller_pro
- Name String
-
XR_FB_touch_controller_pro
- Extension Type
-
Instance extension
- Registered Extension Number
-
168
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- API Interactions
-
-
Interacts with
XR_EXT_dpad_binding -
Interacts with
XR_EXT_hand_interaction -
Interacts with
XR_EXT_palm_pose
-
- Deprecation State
-
-
Promoted to OpenXR 1.1
-
- Last Modified Date
-
2022-06-29
- IP Status
-
No known IP claims.
- Contributors
-
Aanchal Dalmia, Meta
Adam Bengis, Meta
Tony Targonski, Meta
Federico Schliemann, Meta
Overview
This extension defines a new interaction profile for the Meta Quest Touch Pro Controller.
Meta Quest Touch Pro Controller Profile Path:
-
/interaction_profiles/facebook/touch_controller_pro
Valid for user paths:
-
/user/hand/left
-
/user/hand/right
This interaction profile provides inputs and outputs that are a superset of those available in the existing "Oculus Touch Controller" interaction profile:
-
/interaction_profiles/oculus/touch_controller
Supported component paths (note that paths marked as 'new' are enabled exclusively by the Meta Quest Touch Pro Controller profile):
-
On /user/hand/left only:
-
…/input/x/click
-
…/input/x/touch
-
…/input/y/click
-
…/input/y/touch
-
…/input/menu/click
-
-
On /user/hand/right only:
-
…/input/a/click
-
…/input/a/touch
-
…/input/b/click
-
…/input/b/touch
-
…/input/system/click (may not be available for application use)
-
-
On both:
-
…/input/squeeze/value
-
…/input/trigger/value
-
…/input/trigger/touch
-
…/input/thumbstick
-
…/input/thumbstick/x
-
…/input/thumbstick/y
-
…/input/thumbstick/click
-
…/input/thumbstick/touch
-
…/input/thumbrest/touch
-
…/input/grip/pose
-
…/input/aim/pose
-
…/output/haptic
-
…/input/thumbrest/force (new)
-
…/input/stylus_fb/force (new)
-
…/input/trigger/curl_fb (new)
-
…/input/trigger/slide_fb (new)
-
…/input/trigger/proximity_fb (new)
-
…/input/thumb_fb/proximity_fb (new)
-
…/output/haptic_trigger_fb (new)
-
…/output/haptic_thumb_fb (new)
-
New Identifiers
Input Path Descriptions
Output Path Descriptions
Version History
-
Revision 1, 2022-06-29 (Aanchal Dalmia)
-
Initial extension proposal
-
14.10. XR_HTC_vive_cosmos_controller_interaction
- Name String
-
XR_HTC_vive_cosmos_controller_interaction
- Extension Type
-
Instance extension
- Registered Extension Number
-
103
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- API Interactions
-
-
Interacts with
XR_EXT_dpad_binding -
Interacts with
XR_EXT_hand_interaction -
Interacts with
XR_EXT_palm_pose
-
- Deprecation State
-
-
Promoted to OpenXR 1.1
-
- Last Modified Date
-
2020-09-28
- IP Status
-
No known IP claims.
- Contributors
-
Chris Kuo, HTC
Kyle Chen, HTC
Overview
This extension defines a new interaction profile for the VIVE Cosmos Controller.
VIVE Cosmos Controller interaction profile
Interaction profile path:
-
/interaction_profiles/htc/vive_cosmos_controller
Valid for user paths:
-
/user/hand/left
-
/user/hand/right
This interaction profile represents the input sources and haptics on the VIVE Cosmos Controller.
Supported component paths:
-
On /user/hand/left only:
-
…/input/x/click
-
…/input/y/click
-
…/input/menu/click
-
-
On /user/hand/right only:
-
…/input/a/click
-
…/input/b/click
-
…/input/system/click (may not be available for application use)
-
-
…/input/shoulder/click
-
…/input/squeeze/click
-
…/input/trigger/click
-
…/input/trigger/value
-
…/input/thumbstick/x
-
…/input/thumbstick/y
-
…/input/thumbstick/click
-
…/input/thumbstick/touch
-
…/input/grip/pose
-
…/input/aim/pose
-
…/output/haptic
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2020-09-28 (Chris Kuo)
-
Initial extension description
-
14.11. XR_HTC_vive_focus3_controller_interaction
- Name String
-
XR_HTC_vive_focus3_controller_interaction
- Extension Type
-
Instance extension
- Registered Extension Number
-
106
- Revision
-
2
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- API Interactions
-
-
Interacts with
XR_EXT_dpad_binding -
Interacts with
XR_EXT_hand_interaction -
Interacts with
XR_EXT_palm_pose
-
- Deprecation State
-
-
Promoted to OpenXR 1.1
-
- Last Modified Date
-
2022-04-29
- IP Status
-
No known IP claims.
- Contributors
-
Ria Hsu, HTC
Overview
This extension defines a new interaction profile for the VIVE Focus 3 Controller.
VIVE Focus 3 Controller interaction profile
Interaction profile path:
-
/interaction_profiles/htc/vive_focus3_controller
Valid for user paths:
-
/user/hand/left
-
/user/hand/right
This interaction profile represents the input sources and haptics on the VIVE Focus 3 Controller.
Supported component paths:
-
On /user/hand/left only:
-
…/input/x/click
-
…/input/y/click
-
…/input/menu/click
-
-
On /user/hand/right only:
-
…/input/a/click
-
…/input/b/click
-
…/input/system/click (may not be available for application use)
-
-
…/input/squeeze/click
-
…/input/squeeze/touch
-
…/input/squeeze/value
-
…/input/trigger/click
-
…/input/trigger/touch
-
…/input/trigger/value
-
…/input/thumbstick/x
-
…/input/thumbstick/y
-
…/input/thumbstick/click
-
…/input/thumbstick/touch
-
…/input/thumbrest/touch
-
…/input/grip/pose
-
…/input/aim/pose
-
…/output/haptic
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2022-01-03 (Ria Hsu)
-
Initial extension description
-
-
Revision 2, 2022-04-29 (Ria Hsu)
-
Support component path "/input/squeeze/value"
-
14.12. XR_META_touch_controller_plus
- Name String
-
XR_META_touch_controller_plus
- Extension Type
-
Instance extension
- Registered Extension Number
-
280
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- API Interactions
-
-
Interacts with
XR_EXT_dpad_binding -
Interacts with
XR_EXT_hand_interaction -
Interacts with
XR_EXT_palm_pose
-
- Deprecation State
-
-
Promoted to OpenXR 1.1
-
- Last Modified Date
-
2023-04-10
- IP Status
-
No known IP claims.
- Contributors
-
Aanchal Dalmia, Meta Platforms
Adam Bengis, Meta Platforms
Overview
This extension defines a new interaction profile for the Meta Quest Touch Plus Controller.
Meta Quest Touch Plus Controller interaction profile path:
-
/interaction_profiles/meta/touch_controller_plus
Valid for user paths:
-
/user/hand/left
-
/user/hand/right
This interaction profile provides inputs and outputs that are a superset of those available in the existing "Oculus Touch Controller" interaction profile, /interaction_profiles/oculus/touch_controller.
Supported component paths:
-
On /user/hand/left only:
-
…/input/x/click
-
…/input/x/touch
-
…/input/y/click
-
…/input/y/touch
-
…/input/menu/click
-
-
On /user/hand/right only:
-
…/input/a/click
-
…/input/a/touch
-
…/input/b/click
-
…/input/b/touch
-
…/input/system/click (may not be available for application use)
-
-
On both:
-
…/input/squeeze/value
-
…/input/trigger/value
-
…/input/trigger/touch
-
…/input/thumbstick
-
…/input/thumbstick/x
-
…/input/thumbstick/y
-
…/input/thumbstick/click
-
…/input/thumbstick/touch
-
…/input/thumbrest/touch
-
…/input/grip/pose
-
…/input/aim/pose
-
…/output/haptic
-
…/input/thumb_meta/proximity_meta
-
…/input/trigger/proximity_meta
-
…/input/trigger/curl_meta
-
…/input/trigger/slide_meta
-
…/input/trigger/force
-
New Identifiers
Input Path Descriptions
Version History
-
Revision 1, 2023-04-10 (Adam Bengis)
-
Initial extension proposal
-
14.13. XR_ML_ml2_controller_interaction
- Name String
-
XR_ML_ml2_controller_interaction
- Extension Type
-
Instance extension
- Registered Extension Number
-
135
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- API Interactions
-
-
Interacts with
XR_EXT_dpad_binding -
Interacts with
XR_EXT_hand_interaction -
Interacts with
XR_EXT_palm_pose
-
- Deprecation State
-
-
Promoted to OpenXR 1.1
-
- Last Modified Date
-
2022-07-22
- IP Status
-
No known IP claims.
- Contributors
-
Ron Bessems, Magic Leap
Rafael Wiltz, Magic Leap
Overview
This extension defines the interaction profile for the Magic Leap 2 Controller.
Magic Leap 2 Controller interaction profile
This interaction profile represents the input sources and haptics on the Magic Leap 2 Controller.
Interaction profile path:
-
/interaction_profiles/ml/ml2_controller
Valid for user paths:
-
/user/hand/left
-
/user/hand/right
Supported component paths:
-
…/input/menu/click
-
…/input/home/click (may not be available for application use)
-
…/input/trigger/click
-
…/input/trigger/value
-
…/input/trackpad/y
-
…/input/trackpad/x
-
…/input/trackpad/click
-
…/input/trackpad/force
-
…/input/trackpad/touch
-
…/input/grip/pose
-
…/input/aim/pose
-
…/input/shoulder/click
-
…/output/haptic
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2022-07-22 (Ron Bessems)
-
Initial extension description
-
14.14. XR_MND_swapchain_usage_input_attachment_bit
- Name String
-
XR_MND_swapchain_usage_input_attachment_bit
- Extension Type
-
Instance extension
- Registered Extension Number
-
97
- Revision
-
2
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Deprecation State
-
-
Deprecated by
XR_KHR_swapchain_usage_input_attachment_bit extension
-
- Last Modified Date
-
2020-07-24
- IP Status
-
No known IP claims.
- Contributors
-
Jakob Bornecrantz, Collabora
Overview
This extension enables an application to specify that swapchain images should be created in a way that allows them to be used as input attachments. At the time of writing, this bit only affects Vulkan swapchains.
New Object Types
New Flag Types
New Enum Constants
XrSwapchainUsageFlagBits enumeration is extended with:
-
XR_SWAPCHAIN_USAGE_INPUT_ATTACHMENT_BIT_MND
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2020-07-23 (Jakob Bornecrantz)
-
Initial draft
-
-
Revision 2, 2020-07-24 (Jakob Bornecrantz)
-
Added note about only affecting Vulkan
-
Changed from MNDX to MND
-
14.15. XR_MSFT_hand_interaction
- Name String
-
XR_MSFT_hand_interaction
- Extension Type
-
Instance extension
- Registered Extension Number
-
51
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- API Interactions
-
-
Interacts with
XR_EXT_hand_interaction -
Interacts with
XR_EXT_palm_pose
-
- Deprecation State
-
-
Promoted to
XR_EXT_hand_interaction extension
-
- Contributors
-
Yin Li, Microsoft
Lachlan Ford, Microsoft
Alex Turner, Microsoft
Overview
This extension defines a new interaction profile for near interactions and far interactions driven by directly-tracked hands.
Hand interaction profile
Interaction profile path:
-
/interaction_profiles/microsoft/hand_interaction
Valid for top level user path:
-
/user/hand/left
-
/user/hand/right
This interaction profile provides basic pose and actions for near and far interactions using hand tracking input.
Supported component paths:
-
…/input/select/value
-
…/input/squeeze/value
-
…/input/aim/pose
-
…/input/grip/pose
The application should use the …/select/value and
…/aim/pose paths for far hand interactions, such as using a
virtual laser pointer to target and click a button on the wall.
Here, …/select/value can be used as either a boolean or float
action type, where the value XR_TRUE or 1.0f represents a closed hand
shape.
The application should use the …/squeeze/value and
…/grip/pose for near hand interactions, such as picking up a
virtual object within the user’s reach from a table.
Here, …/squeeze/value can be used as either a boolean or float
action type, where the value XR_TRUE or 1.0f represents a closed hand
shape.
The runtime may trigger both "select" and "squeeze" actions for the same hand gesture if the user’s hand gesture is able to trigger both near and far interactions. The application should not assume they are as independent as two buttons on a controller.
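When consuming these inputs as float actions, an application typically applies its own thresholding. The sketch below is a hypothetical application-side helper with arbitrary threshold values and hysteresis; the extension itself does not define any thresholds:

```c
#include <stdbool.h>

/* Hypothetical application-side thresholds with hysteresis, so a value
 * hovering near one boundary does not rapidly toggle the pinch state.
 * 1.0f represents a fully closed hand shape, 0.0f a fully open one. */
#define PINCH_ON_THRESHOLD  0.9f
#define PINCH_OFF_THRESHOLD 0.7f

static bool update_pinch_state(bool wasPinching, float selectValue)
{
    if (wasPinching)
        return selectValue > PINCH_OFF_THRESHOLD; /* release only below 0.7 */
    return selectValue > PINCH_ON_THRESHOLD;      /* engage only above 0.9 */
}
```

The same pattern applies to …/squeeze/value for near interactions.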
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2019-09-16 (Yin Li)
-
Initial extension description
-
14.16. XR_OCULUS_android_session_state_enable
- Name String
-
XR_OCULUS_android_session_state_enable
- Extension Type
-
Instance extension
- Registered Extension Number
-
45
- Revision
-
1
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Deprecation State
-
-
Deprecated without replacement
-
Overview
This extension enables the integration of the Android session lifecycle and an OpenXR runtime session state. Some OpenXR runtimes may require this extension to transition the application to the session READY or STOPPING state.
Applications that run on an Android system with this extension enabled have a different OpenXR Session state flow.
On Android, it is the Android Activity lifecycle that will dictate when the system is ready for the application to begin or end its session, not the runtime.
When XR_OCULUS_android_session_state_enable is enabled, the following changes are made to Session State handling:
-
The runtime does not determine when the application’s session should be moved to the ready state, XR_SESSION_STATE_READY. The application should not wait to receive the XR_SESSION_STATE_READY session state changed event before beginning a session. Instead, the application should begin its session once there is a surface and the activity is resumed.
-
The application should not call xrRequestExitSession to request the session move to the stopping state, XR_SESSION_STATE_STOPPING. xrRequestExitSession will return XR_ERROR_VALIDATION_FAILURE if called.
-
The application should not wait to receive the XR_SESSION_STATE_STOPPING session state changed event before ending a session. Instead, the application should end its session once the surface is destroyed or the activity is paused.
-
The runtime will not transition to XR_SESSION_STATE_READY or XR_SESSION_STATE_STOPPING, as these states are implicit in the Android activity and surface lifecycles.
Android Activity life cycle
An application can only have its session in the running state while the Android Activity is in the resumed state. The following shows how beginning and ending an XR session fits into the Android Activity life cycle.
1. VrActivity::onCreate() <---------+
2. VrActivity::onStart() <-------+ |
3. VrActivity::onResume() <---+ | |
4. xrBeginSession() | | |
5. xrEndSession() | | |
6. VrActivity::onPause() -----+ | |
7. VrActivity::onStop() ---------+ |
8. VrActivity::onDestroy() ---------+
Android Surface life cycle
An application can only have its session in the running state while there is a valid Android Surface. The following shows how beginning and ending an XR session fits into the Android Surface life cycle.
1. VrActivity::surfaceCreated() <----+
2. VrActivity::surfaceChanged() |
3. xrBeginSession() |
4. xrEndSession() |
5. VrActivity::surfaceDestroyed() ---+
Note that the life cycle of a surface is not necessarily tightly coupled with the life cycle of an activity. These two life cycles may interleave in complex ways. Usually surfaceCreated() is called after onResume() and surfaceDestroyed() is called between onPause() and onDestroy(). However, this is not guaranteed and, for instance, surfaceDestroyed() may be called after onDestroy() or even before onPause().
An Android Activity is only in the resumed state with a valid Android Surface between surfaceChanged() or onResume(), whichever comes last, and surfaceDestroyed() or onPause(), whichever comes first. In other words, an XR application will typically begin the session from surfaceChanged() or onResume(), whichever comes last, and end the session from surfaceDestroyed() or onPause(), whichever comes first.
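The "whichever comes last / whichever comes first" rule above amounts to tracking two booleans and acting on transitions of their conjunction. A minimal, hypothetical sketch of that bookkeeping (the caller would issue xrBeginSession and xrEndSession on the reported transitions):

```c
#include <stdbool.h>

/* The session should be running exactly while the activity is resumed
 * AND a valid surface exists. Returns the new running state and reports
 * whether the caller should begin or end the session on this update. */
static bool update_session_running(bool isResumed, bool hasSurface,
                                   bool wasRunning,
                                   bool *shouldBegin, bool *shouldEnd)
{
    bool shouldRun = isResumed && hasSurface;
    *shouldBegin = shouldRun && !wasRunning; /* from surfaceChanged()/onResume() */
    *shouldEnd   = !shouldRun && wasRunning; /* from surfaceDestroyed()/onPause() */
    return shouldRun;
}
```

This is deliberately independent of which lifecycle callback fired; only the combined state matters, which is what makes the interleaving of the two life cycles safe to handle.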
New Object Types
New Flag Types
New Enum Constants
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2019-08-16 (Cass Everitt)
-
Initial extension description
-
14.17. XR_VARJO_quad_views
- Name String
-
XR_VARJO_quad_views
- Extension Type
-
Instance extension
- Registered Extension Number
-
38
- Revision
-
2
- Ratification Status
-
Not ratified
- Extension and Version Dependencies
- Deprecation State
-
-
Promoted to OpenXR 1.1
-
- Last Modified Date
-
2019-04-16
- IP Status
-
No known IP claims.
- Contributors
-
Sergiy Dubovik, Varjo Technologies
Rémi Arnaud, Varjo Technologies
Robert Menzel, NVIDIA
14.17.1. Overview
This extension adds a new view configuration type -
XR_VIEW_CONFIGURATION_TYPE_PRIMARY_QUAD_VARJO to
XrViewConfigurationType which can be returned by
xrEnumerateViewConfigurations to indicate that the runtime supports 4
viewports.
In this configuration each eye has two viewports, one of which is smaller (in terms of field of view) than the other and fully contained inside the larger one. The smaller FoV viewport can, however, have a higher resolution than the same field of view covered by the outer viewport. The motivation is special hardware which superimposes a smaller, high-resolution screen for the fovea region onto a larger screen for the periphery.
The runtime guarantees that the inner viewport of each eye is fully inside of the outer viewport.
To enumerate the 4 views, xrEnumerateViewConfigurationViews can be used. The first two views (XrViewConfigurationView) are the left and right eyes of the outer viewport; views 2 and 3 are the left and right eyes of the inner viewport.
The relative position of the inner views relative to the outer views can change at run-time.
The runtime must set the pose for views 0 and 2 to be identical, and must
set the pose for views 1 and 3 to be identical, when the application calls
xrLocateViews.
The runtime might blend between the views at the edges, so the application should not omit the inner field of view from being generated in the outer view.
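The view ordering described above (views 0 and 1 are the outer left and right views, views 2 and 3 the inner ones) can be captured in a small indexing helper; quad_view_index is a hypothetical convenience, not part of the extension:

```c
#include <stdbool.h>

/* View index layout for XR_VIEW_CONFIGURATION_TYPE_PRIMARY_QUAD_VARJO:
 * 0 = outer left, 1 = outer right, 2 = inner left, 3 = inner right. */
static int quad_view_index(bool rightEye, bool inner)
{
    return (inner ? 2 : 0) + (rightEye ? 1 : 0);
}
```

An application would use this when indexing the XrView array returned by xrLocateViews or the swapchain per view.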
New Object Types
New Flag Types
New Enum Constants
XrViewConfigurationType enumeration is extended with:
-
XR_VIEW_CONFIGURATION_TYPE_PRIMARY_QUAD_VARJO
New Enums
New Structures
New Functions
Issues
Version History
-
Revision 1, 2019-04-16 (Sergiy Dubovik)
-
Initial draft
-
-
Revision 2, 2024-06-13 (Denny Rönngren)
-
Clarified that both views for each eye need to have identical poses. This reflects the actual behavior in all known implementations.
-
15. Core Revisions (Informative)
New minor versions of the OpenXR API are defined periodically by the Khronos OpenXR Working Group. These consist of some amount of additional functionality added to the core API, potentially including both new functionality and functionality promoted from extensions.
15.1. Version 1.1
15.1.1. OpenXR 1.1 Promotions
OpenXR version 1.1 promoted a number of key extensions into the core API:
All differences in behavior between these extensions and the corresponding OpenXR 1.1 functionality are summarized below.
Differences Relative to XR_EXT_local_floor
The definition of this space was made more precise, and it was clarified
that the mandatory support of this space does not dictate any particular
quality of floor level estimation.
Applications that can provide a head-relative interaction experience in the
absence of a defined stage continue to use LOCAL space, while those that
need higher quality assertions about floor level continue to use STAGE
space or scene understanding extensions to detect floor level.
The (mandatory) presence of this space when enumerating reference spaces is
a convenience for portability rather than an assertion that e.g. floor
detection scene understanding has taken place or that the floor is
inherently walkable.
Differences Relative to XR_EXT_palm_pose
The input identifier palm_ext defined in the extension has been renamed to
grip_surface to more clearly describe its intended use and distinguish it
from hand tracking.
Differences Relative to XR_VARJO_quad_views
The view configuration type enumerant
XR_VIEW_CONFIGURATION_TYPE_PRIMARY_QUAD_VARJO was renamed to
XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO_WITH_FOVEATED_INSET, to
clarify that it is not vendor-specific nor the only way four views are
possible.
In OpenXR 1.1, a runtime may support
XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO_WITH_FOVEATED_INSET, but
this is optional like the other view configuration types.
Use xrEnumerateViewConfigurations to determine if it is provided,
rather than using the presence or absence of the extension.
Differences Relative to XR_FB_touch_controller_pro
The interaction profile path was changed from /interaction_profiles/facebook/touch_controller_pro to /interaction_profiles/meta/touch_pro_controller. Note the updated company name and different word order in the device name level.
The following input/output subpaths were renamed when changing to this new interaction profile path:
-
…/input/stylus_fb/force → …/input/stylus/force
-
…/input/trigger/proximity_fb → …/input/trigger/proximity
-
…/output/haptic_trigger_fb → …/output/haptic_trigger
-
…/output/haptic_thumb_fb → …/output/haptic_thumb
-
…/input/thumb_fb/proximity_fb → …/input/thumb_resting_surfaces/proximity
-
Note this is a boolean-OR of the "thumbrest" location’s proximity sensor as well as other proximity-sensitive regions.
-
-
…/input/trigger/curl_fb → …/input/trigger_curl/value
-
…/input/trigger/slide_fb → …/input/trigger_slide/value
The last two changes listed moved from being components on the trigger identifier to being independent identifiers in order to clarify how they relate to actions bound to other trigger components with regards to action priority.
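An application migrating suggested bindings from the extension to OpenXR 1.1 can express the renames above as a lookup table over the user-path-relative subpaths; touch_pro_1_1_subpath is a hypothetical helper, not part of the specification:

```c
#include <string.h>

/* Subpath renames from XR_FB_touch_controller_pro to the OpenXR 1.1
 * /interaction_profiles/meta/touch_pro_controller profile. */
static const char *const kTouchProRenames[][2] = {
    { "/input/stylus_fb/force",       "/input/stylus/force" },
    { "/input/trigger/proximity_fb",  "/input/trigger/proximity" },
    { "/output/haptic_trigger_fb",    "/output/haptic_trigger" },
    { "/output/haptic_thumb_fb",      "/output/haptic_thumb" },
    { "/input/thumb_fb/proximity_fb", "/input/thumb_resting_surfaces/proximity" },
    { "/input/trigger/curl_fb",       "/input/trigger_curl/value" },
    { "/input/trigger/slide_fb",      "/input/trigger_slide/value" },
};

/* Returns the 1.1 subpath, or the input unchanged if no rename applies. */
static const char *touch_pro_1_1_subpath(const char *subpath)
{
    for (size_t i = 0;
         i < sizeof(kTouchProRenames) / sizeof(kTouchProRenames[0]); ++i)
        if (strcmp(subpath, kTouchProRenames[i][0]) == 0)
            return kTouchProRenames[i][1];
    return subpath;
}
```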
Differences Relative to XR_META_touch_controller_plus
The interaction profile path was changed from /interaction_profiles/meta/touch_controller_plus to /interaction_profiles/meta/touch_plus_controller. Note the different word order in the device name level.
The following input subpaths were renamed when changing to this new interaction profile path:
-
…/input/trigger/proximity_meta → …/input/trigger/proximity
-
…/input/thumb_meta/proximity_meta → …/input/thumb_resting_surfaces/proximity
-
Note this is a boolean-OR of the "thumbrest" location’s proximity sensor as well as other proximity-sensitive regions.
-
-
…/input/trigger/curl_meta → …/input/trigger_curl/value
-
…/input/trigger/slide_meta → …/input/trigger_slide/value
15.1.2. Additional OpenXR 1.1 Changes
In addition to the promoted extensions described above, OpenXR 1.1 changed the following:
-
Substantial clarifications in the input and fundamentals chapters, intended to be non-substantive.
-
Added the following legacy interaction profiles to represent specific controllers shipped under the Oculus/Meta Touch name and previously grouped into a single Oculus Touch interaction profile:
-
/interaction_profiles/meta/touch_controller_rift_cv1 - Meta Touch Controller (Rift CV1) Profile
-
/interaction_profiles/meta/touch_controller_quest_1_rift_s - Meta Touch Controller (Rift S / Quest 1) Profile
-
/interaction_profiles/meta/touch_controller_quest_2 - Meta Touch Controller (Quest 2) Profile
-
15.1.6. New Enum Constants
-
XR_UUID_SIZE -
Extending XrReferenceSpaceType:
-
XR_REFERENCE_SPACE_TYPE_LOCAL_FLOOR
-
-
Extending XrResult:
-
XR_ERROR_EXTENSION_DEPENDENCY_NOT_ENABLED -
XR_ERROR_PERMISSION_INSUFFICIENT
-
-
Extending XrStructureType:
-
XR_TYPE_SPACES_LOCATE_INFO -
XR_TYPE_SPACE_LOCATIONS -
XR_TYPE_SPACE_VELOCITIES
-
-
Extending XrViewConfigurationType:
-
XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO_WITH_FOVEATED_INSET
-
15.2. Loader Runtime and API Layer Negotiation Version 1.0
The OpenXR version 1.0.33 patch release included ratification of the runtime
and API layer negotiation API, associated with the identifier
XR_LOADER_VERSION_1_0, substantially unchanged from the unratified
form previously described in the loader design document.
This interface is intended for use only between the loader, runtimes, and
API layers, and is not typically directly used by an application.
15.3. Version 1.0
OpenXR version 1.0 defined the initial core API.
15.3.4. New Structures
-
Extending XrSpaceLocation:
15.3.7. New Enum Constants
-
XR_FALSE -
XR_MAX_API_LAYER_DESCRIPTION_SIZE -
XR_MAX_API_LAYER_NAME_SIZE -
XR_MAX_APPLICATION_NAME_SIZE -
XR_MAX_ENGINE_NAME_SIZE -
XR_MAX_EXTENSION_NAME_SIZE -
XR_MAX_PATH_LENGTH -
XR_MAX_RESULT_STRING_SIZE -
XR_MAX_RUNTIME_NAME_SIZE -
XR_MAX_STRUCTURE_NAME_SIZE -
XR_MAX_SYSTEM_NAME_SIZE -
XR_TRUE
Index
Flags and Flag Bits
-
XrAndroidSurfaceSwapchainFlagsFB — See also XrAndroidSurfaceSwapchainFlagBitsFB
-
XrCompositionLayerFlags — See also XrCompositionLayerFlagBits
-
XrCompositionLayerImageLayoutFlagsFB — See also XrCompositionLayerImageLayoutFlagBitsFB
-
XrCompositionLayerSecureContentFlagsFB — See also XrCompositionLayerSecureContentFlagBitsFB
-
XrCompositionLayerSettingsFlagsFB — See also XrCompositionLayerSettingsFlagBitsFB
-
XrCompositionLayerSpaceWarpInfoFlagsFB — See also XrCompositionLayerSpaceWarpInfoFlagBitsFB
-
XrDebugUtilsMessageSeverityFlagsEXT — See also XrDebugUtilsMessageSeverityFlagBitsEXT
-
XrDebugUtilsMessageTypeFlagsEXT — See also XrDebugUtilsMessageTypeFlagBitsEXT
-
XrDigitalLensControlFlagsALMALENCE — See also XrDigitalLensControlFlagBitsALMALENCE
-
XrEnvironmentDepthProviderCreateFlagsMETA — See also XrEnvironmentDepthProviderCreateFlagBitsMETA
-
XrEnvironmentDepthSwapchainCreateFlagsMETA — See also XrEnvironmentDepthSwapchainCreateFlagBitsMETA
-
XrExternalCameraStatusFlagsOCULUS — See also XrExternalCameraStatusFlagBitsOCULUS
-
XrFacialExpressionBlendShapePropertiesFlagsML — See also XrFacialExpressionBlendShapePropertiesFlagBitsML
-
XrFoveationDynamicFlagsHTC — See also XrFoveationDynamicFlagBitsHTC
-
XrFoveationEyeTrackedProfileCreateFlagsMETA — See also XrFoveationEyeTrackedProfileCreateFlagBitsMETA
-
XrFoveationEyeTrackedStateFlagsMETA — See also XrFoveationEyeTrackedStateFlagBitsMETA
-
XrFrameEndInfoFlagsML — See also XrFrameEndInfoFlagBitsML
-
XrFrameSynthesisInfoFlagsEXT — See also XrFrameSynthesisInfoFlagBitsEXT
-
XrGlobalDimmerFrameEndInfoFlagsML — See also XrGlobalDimmerFrameEndInfoFlagBitsML
-
XrHandTrackingAimFlagsFB — See also XrHandTrackingAimFlagBitsFB
-
XrInputSourceLocalizedNameFlags — See also XrInputSourceLocalizedNameFlagBits
-
XrInstanceCreateFlags — See also XrInstanceCreateFlagBits
-
XrKeyboardTrackingFlagsFB — See also XrKeyboardTrackingFlagBitsFB
-
XrKeyboardTrackingQueryFlagsFB — See also XrKeyboardTrackingQueryFlagBitsFB
-
XrLocalizationMapErrorFlagsML — See also XrLocalizationMapErrorFlagBitsML
-
XrOverlayMainSessionFlagsEXTX — See also XrOverlayMainSessionFlagBitsEXTX
-
XrOverlaySessionCreateFlagsEXTX — See also XrOverlaySessionCreateFlagBitsEXTX
-
XrPassthroughCapabilityFlagsFB — See also XrPassthroughCapabilityFlagBitsFB
-
XrPassthroughFlagsFB — See also XrPassthroughFlagBitsFB
-
XrPassthroughPreferenceFlagsMETA — See also XrPassthroughPreferenceFlagBitsMETA
-
XrPassthroughStateChangedFlagsFB — See also XrPassthroughStateChangedFlagBitsFB
-
XrPerformanceMetricsCounterFlagsMETA — See also XrPerformanceMetricsCounterFlagBitsMETA
-
XrPlaneDetectionCapabilityFlagsEXT — See also XrPlaneDetectionCapabilityFlagBitsEXT
-
XrPlaneDetectorFlagsEXT — See also XrPlaneDetectorFlagBitsEXT
-
XrRenderModelFlagsFB — See also XrRenderModelFlagBitsFB
-
XrSemanticLabelsSupportFlagsFB — See also XrSemanticLabelsSupportFlagBitsFB
-
XrSessionCreateFlags — See also XrSessionCreateFlagBits
-
XrSpaceLocationFlags — See also XrSpaceLocationFlagBits
-
XrSpaceVelocityFlags — See also XrSpaceVelocityFlagBits
-
XrSpatialMeshConfigFlagsBD — See also XrSpatialMeshConfigFlagBitsBD
-
XrSwapchainCreateFlags — See also XrSwapchainCreateFlagBits
-
XrSwapchainCreateFoveationFlagsFB — See also XrSwapchainCreateFoveationFlagBitsFB
-
XrSwapchainStateFoveationFlagsFB — See also XrSwapchainStateFoveationFlagBitsFB
-
XrSwapchainUsageFlags — See also XrSwapchainUsageFlagBits
-
XrTriangleMeshFlagsFB — See also XrTriangleMeshFlagBitsFB
-
XrViewStateFlags — See also XrViewStateFlagBits
-
XrVirtualKeyboardInputStateFlagsMETA — See also XrVirtualKeyboardInputStateFlagBitsMETA
-
XrVulkanDeviceCreateFlagsKHR — See also XrVulkanDeviceCreateFlagBitsKHR
-
XrVulkanInstanceCreateFlagsKHR — See also XrVulkanInstanceCreateFlagBitsKHR
-
XrWorldMeshDetectorFlagsML — See also XrWorldMeshDetectorFlagBitsML
Appendix
Code Style Conventions
These are the code style conventions used in this specification to define the API.
Prefixes are used in the API to denote specific semantic meaning of names, or as a label to avoid name clashes, and are explained here:
| Prefix | Description |
|---|---|
| XR_ | Enumerants and defines are prefixed with these characters. |
| Xr | Non-function-pointer types are prefixed with these characters. |
| xr | Functions are prefixed with these characters. |
| PFN_xr | Function pointer types are prefixed with these characters. |
Application Binary Interface
This section describes additional definitions and conventions that define the application binary interface.
Structure Types
typedef enum XrStructureType {
XR_TYPE_UNKNOWN = 0,
XR_TYPE_API_LAYER_PROPERTIES = 1,
XR_TYPE_EXTENSION_PROPERTIES = 2,
XR_TYPE_INSTANCE_CREATE_INFO = 3,
XR_TYPE_SYSTEM_GET_INFO = 4,
XR_TYPE_SYSTEM_PROPERTIES = 5,
XR_TYPE_VIEW_LOCATE_INFO = 6,
XR_TYPE_VIEW = 7,
XR_TYPE_SESSION_CREATE_INFO = 8,
XR_TYPE_SWAPCHAIN_CREATE_INFO = 9,
XR_TYPE_SESSION_BEGIN_INFO = 10,
XR_TYPE_VIEW_STATE = 11,
XR_TYPE_FRAME_END_INFO = 12,
XR_TYPE_HAPTIC_VIBRATION = 13,
XR_TYPE_EVENT_DATA_BUFFER = 16,
XR_TYPE_EVENT_DATA_INSTANCE_LOSS_PENDING = 17,
XR_TYPE_EVENT_DATA_SESSION_STATE_CHANGED = 18,
XR_TYPE_ACTION_STATE_BOOLEAN = 23,
XR_TYPE_ACTION_STATE_FLOAT = 24,
XR_TYPE_ACTION_STATE_VECTOR2F = 25,
XR_TYPE_ACTION_STATE_POSE = 27,
XR_TYPE_ACTION_SET_CREATE_INFO = 28,
XR_TYPE_ACTION_CREATE_INFO = 29,
XR_TYPE_INSTANCE_PROPERTIES = 32,
XR_TYPE_FRAME_WAIT_INFO = 33,
XR_TYPE_COMPOSITION_LAYER_PROJECTION = 35,
XR_TYPE_COMPOSITION_LAYER_QUAD = 36,
XR_TYPE_REFERENCE_SPACE_CREATE_INFO = 37,
XR_TYPE_ACTION_SPACE_CREATE_INFO = 38,
XR_TYPE_EVENT_DATA_REFERENCE_SPACE_CHANGE_PENDING = 40,
XR_TYPE_VIEW_CONFIGURATION_VIEW = 41,
XR_TYPE_SPACE_LOCATION = 42,
XR_TYPE_SPACE_VELOCITY = 43,
XR_TYPE_FRAME_STATE = 44,
XR_TYPE_VIEW_CONFIGURATION_PROPERTIES = 45,
XR_TYPE_FRAME_BEGIN_INFO = 46,
XR_TYPE_COMPOSITION_LAYER_PROJECTION_VIEW = 48,
XR_TYPE_EVENT_DATA_EVENTS_LOST = 49,
XR_TYPE_INTERACTION_PROFILE_SUGGESTED_BINDING = 51,
XR_TYPE_EVENT_DATA_INTERACTION_PROFILE_CHANGED = 52,
XR_TYPE_INTERACTION_PROFILE_STATE = 53,
XR_TYPE_SWAPCHAIN_IMAGE_ACQUIRE_INFO = 55,
XR_TYPE_SWAPCHAIN_IMAGE_WAIT_INFO = 56,
XR_TYPE_SWAPCHAIN_IMAGE_RELEASE_INFO = 57,
XR_TYPE_ACTION_STATE_GET_INFO = 58,
XR_TYPE_HAPTIC_ACTION_INFO = 59,
XR_TYPE_SESSION_ACTION_SETS_ATTACH_INFO = 60,
XR_TYPE_ACTIONS_SYNC_INFO = 61,
XR_TYPE_BOUND_SOURCES_FOR_ACTION_ENUMERATE_INFO = 62,
XR_TYPE_INPUT_SOURCE_LOCALIZED_NAME_GET_INFO = 63,
// Provided by XR_VERSION_1_1
XR_TYPE_SPACES_LOCATE_INFO = 1000471000,
// Provided by XR_VERSION_1_1
XR_TYPE_SPACE_LOCATIONS = 1000471001,
// Provided by XR_VERSION_1_1
XR_TYPE_SPACE_VELOCITIES = 1000471002,
// Provided by XR_KHR_composition_layer_cube
XR_TYPE_COMPOSITION_LAYER_CUBE_KHR = 1000006000,
// Provided by XR_KHR_android_create_instance
XR_TYPE_INSTANCE_CREATE_INFO_ANDROID_KHR = 1000008000,
// Provided by XR_KHR_composition_layer_depth
XR_TYPE_COMPOSITION_LAYER_DEPTH_INFO_KHR = 1000010000,
// Provided by XR_KHR_vulkan_swapchain_format_list
XR_TYPE_VULKAN_SWAPCHAIN_FORMAT_LIST_CREATE_INFO_KHR = 1000014000,
// Provided by XR_EXT_performance_settings
XR_TYPE_EVENT_DATA_PERF_SETTINGS_EXT = 1000015000,
// Provided by XR_KHR_composition_layer_cylinder
XR_TYPE_COMPOSITION_LAYER_CYLINDER_KHR = 1000017000,
// Provided by XR_KHR_composition_layer_equirect
XR_TYPE_COMPOSITION_LAYER_EQUIRECT_KHR = 1000018000,
// Provided by XR_EXT_debug_utils
XR_TYPE_DEBUG_UTILS_OBJECT_NAME_INFO_EXT = 1000019000,
// Provided by XR_EXT_debug_utils
XR_TYPE_DEBUG_UTILS_MESSENGER_CALLBACK_DATA_EXT = 1000019001,
// Provided by XR_EXT_debug_utils
XR_TYPE_DEBUG_UTILS_MESSENGER_CREATE_INFO_EXT = 1000019002,
// Provided by XR_EXT_debug_utils
XR_TYPE_DEBUG_UTILS_LABEL_EXT = 1000019003,
// Provided by XR_KHR_opengl_enable
XR_TYPE_GRAPHICS_BINDING_OPENGL_WIN32_KHR = 1000023000,
// Provided by XR_KHR_opengl_enable
XR_TYPE_GRAPHICS_BINDING_OPENGL_XLIB_KHR = 1000023001,
// Provided by XR_KHR_opengl_enable
XR_TYPE_GRAPHICS_BINDING_OPENGL_XCB_KHR = 1000023002,
// Provided by XR_KHR_opengl_enable
XR_TYPE_GRAPHICS_BINDING_OPENGL_WAYLAND_KHR = 1000023003,
// Provided by XR_KHR_opengl_enable
XR_TYPE_SWAPCHAIN_IMAGE_OPENGL_KHR = 1000023004,
// Provided by XR_KHR_opengl_enable
XR_TYPE_GRAPHICS_REQUIREMENTS_OPENGL_KHR = 1000023005,
// Provided by XR_KHR_opengl_es_enable
XR_TYPE_GRAPHICS_BINDING_OPENGL_ES_ANDROID_KHR = 1000024001,
// Provided by XR_KHR_opengl_es_enable
XR_TYPE_SWAPCHAIN_IMAGE_OPENGL_ES_KHR = 1000024002,
// Provided by XR_KHR_opengl_es_enable
XR_TYPE_GRAPHICS_REQUIREMENTS_OPENGL_ES_KHR = 1000024003,
// Provided by XR_KHR_vulkan_enable
XR_TYPE_GRAPHICS_BINDING_VULKAN_KHR = 1000025000,
// Provided by XR_KHR_vulkan_enable
XR_TYPE_SWAPCHAIN_IMAGE_VULKAN_KHR = 1000025001,
// Provided by XR_KHR_vulkan_enable
XR_TYPE_GRAPHICS_REQUIREMENTS_VULKAN_KHR = 1000025002,
// Provided by XR_KHR_D3D11_enable
XR_TYPE_GRAPHICS_BINDING_D3D11_KHR = 1000027000,
// Provided by XR_KHR_D3D11_enable
XR_TYPE_SWAPCHAIN_IMAGE_D3D11_KHR = 1000027001,
// Provided by XR_KHR_D3D11_enable
XR_TYPE_GRAPHICS_REQUIREMENTS_D3D11_KHR = 1000027002,
// Provided by XR_KHR_D3D12_enable
XR_TYPE_GRAPHICS_BINDING_D3D12_KHR = 1000028000,
// Provided by XR_KHR_D3D12_enable
XR_TYPE_SWAPCHAIN_IMAGE_D3D12_KHR = 1000028001,
// Provided by XR_KHR_D3D12_enable
XR_TYPE_GRAPHICS_REQUIREMENTS_D3D12_KHR = 1000028002,
// Provided by XR_KHR_metal_enable
XR_TYPE_GRAPHICS_BINDING_METAL_KHR = 1000029000,
// Provided by XR_KHR_metal_enable
XR_TYPE_SWAPCHAIN_IMAGE_METAL_KHR = 1000029001,
// Provided by XR_KHR_metal_enable
XR_TYPE_GRAPHICS_REQUIREMENTS_METAL_KHR = 1000029002,
// Provided by XR_EXT_eye_gaze_interaction
XR_TYPE_SYSTEM_EYE_GAZE_INTERACTION_PROPERTIES_EXT = 1000030000,
// Provided by XR_EXT_eye_gaze_interaction
XR_TYPE_EYE_GAZE_SAMPLE_TIME_EXT = 1000030001,
// Provided by XR_KHR_visibility_mask
XR_TYPE_VISIBILITY_MASK_KHR = 1000031000,
// Provided by XR_KHR_visibility_mask
XR_TYPE_EVENT_DATA_VISIBILITY_MASK_CHANGED_KHR = 1000031001,
// Provided by XR_EXTX_overlay
XR_TYPE_SESSION_CREATE_INFO_OVERLAY_EXTX = 1000033000,
// Provided by XR_EXTX_overlay
XR_TYPE_EVENT_DATA_MAIN_SESSION_VISIBILITY_CHANGED_EXTX = 1000033003,
// Provided by XR_KHR_composition_layer_color_scale_bias
XR_TYPE_COMPOSITION_LAYER_COLOR_SCALE_BIAS_KHR = 1000034000,
// Provided by XR_MSFT_spatial_anchor
XR_TYPE_SPATIAL_ANCHOR_CREATE_INFO_MSFT = 1000039000,
// Provided by XR_MSFT_spatial_anchor
XR_TYPE_SPATIAL_ANCHOR_SPACE_CREATE_INFO_MSFT = 1000039001,
// Provided by XR_FB_composition_layer_image_layout
XR_TYPE_COMPOSITION_LAYER_IMAGE_LAYOUT_FB = 1000040000,
// Provided by XR_FB_composition_layer_alpha_blend
XR_TYPE_COMPOSITION_LAYER_ALPHA_BLEND_FB = 1000041001,
// Provided by XR_EXT_view_configuration_depth_range
XR_TYPE_VIEW_CONFIGURATION_DEPTH_RANGE_EXT = 1000046000,
// Provided by XR_MNDX_egl_enable
XR_TYPE_GRAPHICS_BINDING_EGL_MNDX = 1000048004,
// Provided by XR_MSFT_spatial_graph_bridge
XR_TYPE_SPATIAL_GRAPH_NODE_SPACE_CREATE_INFO_MSFT = 1000049000,
// Provided by XR_MSFT_spatial_graph_bridge
XR_TYPE_SPATIAL_GRAPH_STATIC_NODE_BINDING_CREATE_INFO_MSFT = 1000049001,
// Provided by XR_MSFT_spatial_graph_bridge
XR_TYPE_SPATIAL_GRAPH_NODE_BINDING_PROPERTIES_GET_INFO_MSFT = 1000049002,
// Provided by XR_MSFT_spatial_graph_bridge
XR_TYPE_SPATIAL_GRAPH_NODE_BINDING_PROPERTIES_MSFT = 1000049003,
// Provided by XR_EXT_hand_tracking
XR_TYPE_SYSTEM_HAND_TRACKING_PROPERTIES_EXT = 1000051000,
// Provided by XR_EXT_hand_tracking
XR_TYPE_HAND_TRACKER_CREATE_INFO_EXT = 1000051001,
// Provided by XR_EXT_hand_tracking
XR_TYPE_HAND_JOINTS_LOCATE_INFO_EXT = 1000051002,
// Provided by XR_EXT_hand_tracking
XR_TYPE_HAND_JOINT_LOCATIONS_EXT = 1000051003,
// Provided by XR_EXT_hand_tracking
XR_TYPE_HAND_JOINT_VELOCITIES_EXT = 1000051004,
// Provided by XR_MSFT_hand_tracking_mesh
XR_TYPE_SYSTEM_HAND_TRACKING_MESH_PROPERTIES_MSFT = 1000052000,
// Provided by XR_MSFT_hand_tracking_mesh
XR_TYPE_HAND_MESH_SPACE_CREATE_INFO_MSFT = 1000052001,
// Provided by XR_MSFT_hand_tracking_mesh
XR_TYPE_HAND_MESH_UPDATE_INFO_MSFT = 1000052002,
// Provided by XR_MSFT_hand_tracking_mesh
XR_TYPE_HAND_MESH_MSFT = 1000052003,
// Provided by XR_MSFT_hand_tracking_mesh
XR_TYPE_HAND_POSE_TYPE_INFO_MSFT = 1000052004,
// Provided by XR_MSFT_secondary_view_configuration
XR_TYPE_SECONDARY_VIEW_CONFIGURATION_SESSION_BEGIN_INFO_MSFT = 1000053000,
// Provided by XR_MSFT_secondary_view_configuration
XR_TYPE_SECONDARY_VIEW_CONFIGURATION_STATE_MSFT = 1000053001,
// Provided by XR_MSFT_secondary_view_configuration
XR_TYPE_SECONDARY_VIEW_CONFIGURATION_FRAME_STATE_MSFT = 1000053002,
// Provided by XR_MSFT_secondary_view_configuration
XR_TYPE_SECONDARY_VIEW_CONFIGURATION_FRAME_END_INFO_MSFT = 1000053003,
// Provided by XR_MSFT_secondary_view_configuration
XR_TYPE_SECONDARY_VIEW_CONFIGURATION_LAYER_INFO_MSFT = 1000053004,
// Provided by XR_MSFT_secondary_view_configuration
XR_TYPE_SECONDARY_VIEW_CONFIGURATION_SWAPCHAIN_CREATE_INFO_MSFT = 1000053005,
// Provided by XR_MSFT_controller_model
XR_TYPE_CONTROLLER_MODEL_KEY_STATE_MSFT = 1000055000,
// Provided by XR_MSFT_controller_model
XR_TYPE_CONTROLLER_MODEL_NODE_PROPERTIES_MSFT = 1000055001,
// Provided by XR_MSFT_controller_model
XR_TYPE_CONTROLLER_MODEL_PROPERTIES_MSFT = 1000055002,
// Provided by XR_MSFT_controller_model
XR_TYPE_CONTROLLER_MODEL_NODE_STATE_MSFT = 1000055003,
// Provided by XR_MSFT_controller_model
XR_TYPE_CONTROLLER_MODEL_STATE_MSFT = 1000055004,
// Provided by XR_EPIC_view_configuration_fov
XR_TYPE_VIEW_CONFIGURATION_VIEW_FOV_EPIC = 1000059000,
// Provided by XR_MSFT_holographic_window_attachment
XR_TYPE_HOLOGRAPHIC_WINDOW_ATTACHMENT_MSFT = 1000063000,
// Provided by XR_MSFT_composition_layer_reprojection
XR_TYPE_COMPOSITION_LAYER_REPROJECTION_INFO_MSFT = 1000066000,
// Provided by XR_MSFT_composition_layer_reprojection
XR_TYPE_COMPOSITION_LAYER_REPROJECTION_PLANE_OVERRIDE_MSFT = 1000066001,
// Provided by XR_FB_android_surface_swapchain_create
XR_TYPE_ANDROID_SURFACE_SWAPCHAIN_CREATE_INFO_FB = 1000070000,
// Provided by XR_FB_composition_layer_secure_content
XR_TYPE_COMPOSITION_LAYER_SECURE_CONTENT_FB = 1000072000,
// Provided by XR_FB_body_tracking
XR_TYPE_BODY_TRACKER_CREATE_INFO_FB = 1000076001,
// Provided by XR_FB_body_tracking
XR_TYPE_BODY_JOINTS_LOCATE_INFO_FB = 1000076002,
// Provided by XR_FB_body_tracking
XR_TYPE_SYSTEM_BODY_TRACKING_PROPERTIES_FB = 1000076004,
// Provided by XR_FB_body_tracking
XR_TYPE_BODY_JOINT_LOCATIONS_FB = 1000076005,
// Provided by XR_FB_body_tracking
XR_TYPE_BODY_SKELETON_FB = 1000076006,
// Provided by XR_EXT_dpad_binding
XR_TYPE_INTERACTION_PROFILE_DPAD_BINDING_EXT = 1000078000,
// Provided by XR_VALVE_analog_threshold
XR_TYPE_INTERACTION_PROFILE_ANALOG_THRESHOLD_VALVE = 1000079000,
// Provided by XR_EXT_hand_joints_motion_range
XR_TYPE_HAND_JOINTS_MOTION_RANGE_INFO_EXT = 1000080000,
// Provided by XR_KHR_loader_init_android
XR_TYPE_LOADER_INIT_INFO_ANDROID_KHR = 1000089000,
// Provided by XR_KHR_vulkan_enable2
XR_TYPE_VULKAN_INSTANCE_CREATE_INFO_KHR = 1000090000,
// Provided by XR_KHR_vulkan_enable2
XR_TYPE_VULKAN_DEVICE_CREATE_INFO_KHR = 1000090001,
// Provided by XR_KHR_vulkan_enable2
XR_TYPE_VULKAN_GRAPHICS_DEVICE_GET_INFO_KHR = 1000090003,
// Provided by XR_KHR_composition_layer_equirect2
XR_TYPE_COMPOSITION_LAYER_EQUIRECT2_KHR = 1000091000,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_SCENE_OBSERVER_CREATE_INFO_MSFT = 1000097000,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_SCENE_CREATE_INFO_MSFT = 1000097001,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_NEW_SCENE_COMPUTE_INFO_MSFT = 1000097002,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_VISUAL_MESH_COMPUTE_LOD_INFO_MSFT = 1000097003,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_SCENE_COMPONENTS_MSFT = 1000097004,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_SCENE_COMPONENTS_GET_INFO_MSFT = 1000097005,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_SCENE_COMPONENT_LOCATIONS_MSFT = 1000097006,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_SCENE_COMPONENTS_LOCATE_INFO_MSFT = 1000097007,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_SCENE_OBJECTS_MSFT = 1000097008,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_SCENE_COMPONENT_PARENT_FILTER_INFO_MSFT = 1000097009,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_SCENE_OBJECT_TYPES_FILTER_INFO_MSFT = 1000097010,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_SCENE_PLANES_MSFT = 1000097011,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_SCENE_PLANE_ALIGNMENT_FILTER_INFO_MSFT = 1000097012,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_SCENE_MESHES_MSFT = 1000097013,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_SCENE_MESH_BUFFERS_GET_INFO_MSFT = 1000097014,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_SCENE_MESH_BUFFERS_MSFT = 1000097015,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_SCENE_MESH_VERTEX_BUFFER_MSFT = 1000097016,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_SCENE_MESH_INDICES_UINT32_MSFT = 1000097017,
// Provided by XR_MSFT_scene_understanding
XR_TYPE_SCENE_MESH_INDICES_UINT16_MSFT = 1000097018,
// Provided by XR_MSFT_scene_understanding_serialization
XR_TYPE_SERIALIZED_SCENE_FRAGMENT_DATA_GET_INFO_MSFT = 1000098000,
// Provided by XR_MSFT_scene_understanding_serialization
XR_TYPE_SCENE_DESERIALIZE_INFO_MSFT = 1000098001,
// Provided by XR_FB_display_refresh_rate
XR_TYPE_EVENT_DATA_DISPLAY_REFRESH_RATE_CHANGED_FB = 1000101000,
// Provided by XR_HTCX_vive_tracker_interaction
XR_TYPE_VIVE_TRACKER_PATHS_HTCX = 1000103000,
// Provided by XR_HTCX_vive_tracker_interaction
XR_TYPE_EVENT_DATA_VIVE_TRACKER_CONNECTED_HTCX = 1000103001,
// Provided by XR_HTC_facial_tracking
XR_TYPE_SYSTEM_FACIAL_TRACKING_PROPERTIES_HTC = 1000104000,
// Provided by XR_HTC_facial_tracking
XR_TYPE_FACIAL_TRACKER_CREATE_INFO_HTC = 1000104001,
// Provided by XR_HTC_facial_tracking
XR_TYPE_FACIAL_EXPRESSIONS_HTC = 1000104002,
// Provided by XR_FB_color_space
XR_TYPE_SYSTEM_COLOR_SPACE_PROPERTIES_FB = 1000108000,
// Provided by XR_FB_hand_tracking_mesh
XR_TYPE_HAND_TRACKING_MESH_FB = 1000110001,
// Provided by XR_FB_hand_tracking_mesh
XR_TYPE_HAND_TRACKING_SCALE_FB = 1000110003,
// Provided by XR_FB_hand_tracking_aim
XR_TYPE_HAND_TRACKING_AIM_STATE_FB = 1000111001,
// Provided by XR_FB_hand_tracking_capsules
XR_TYPE_HAND_TRACKING_CAPSULES_STATE_FB = 1000112000,
// Provided by XR_FB_spatial_entity
XR_TYPE_SYSTEM_SPATIAL_ENTITY_PROPERTIES_FB = 1000113004,
// Provided by XR_FB_spatial_entity
XR_TYPE_SPATIAL_ANCHOR_CREATE_INFO_FB = 1000113003,
// Provided by XR_FB_spatial_entity
XR_TYPE_SPACE_COMPONENT_STATUS_SET_INFO_FB = 1000113007,
// Provided by XR_FB_spatial_entity
XR_TYPE_SPACE_COMPONENT_STATUS_FB = 1000113001,
// Provided by XR_FB_spatial_entity
XR_TYPE_EVENT_DATA_SPATIAL_ANCHOR_CREATE_COMPLETE_FB = 1000113005,
// Provided by XR_FB_spatial_entity
XR_TYPE_EVENT_DATA_SPACE_SET_STATUS_COMPLETE_FB = 1000113006,
// Provided by XR_FB_foveation
XR_TYPE_FOVEATION_PROFILE_CREATE_INFO_FB = 1000114000,
// Provided by XR_FB_foveation
XR_TYPE_SWAPCHAIN_CREATE_INFO_FOVEATION_FB = 1000114001,
// Provided by XR_FB_foveation
XR_TYPE_SWAPCHAIN_STATE_FOVEATION_FB = 1000114002,
// Provided by XR_FB_foveation_configuration
XR_TYPE_FOVEATION_LEVEL_PROFILE_CREATE_INFO_FB = 1000115000,
// Provided by XR_FB_keyboard_tracking
XR_TYPE_KEYBOARD_SPACE_CREATE_INFO_FB = 1000116009,
// Provided by XR_FB_keyboard_tracking
XR_TYPE_KEYBOARD_TRACKING_QUERY_FB = 1000116004,
// Provided by XR_FB_keyboard_tracking
XR_TYPE_SYSTEM_KEYBOARD_TRACKING_PROPERTIES_FB = 1000116002,
// Provided by XR_FB_triangle_mesh
XR_TYPE_TRIANGLE_MESH_CREATE_INFO_FB = 1000117001,
// Provided by XR_FB_passthrough
XR_TYPE_SYSTEM_PASSTHROUGH_PROPERTIES_FB = 1000118000,
// Provided by XR_FB_passthrough
XR_TYPE_PASSTHROUGH_CREATE_INFO_FB = 1000118001,
// Provided by XR_FB_passthrough
XR_TYPE_PASSTHROUGH_LAYER_CREATE_INFO_FB = 1000118002,
// Provided by XR_FB_passthrough
XR_TYPE_COMPOSITION_LAYER_PASSTHROUGH_FB = 1000118003,
// Provided by XR_FB_passthrough
XR_TYPE_GEOMETRY_INSTANCE_CREATE_INFO_FB = 1000118004,
// Provided by XR_FB_passthrough
XR_TYPE_GEOMETRY_INSTANCE_TRANSFORM_FB = 1000118005,
// Provided by XR_FB_passthrough
XR_TYPE_SYSTEM_PASSTHROUGH_PROPERTIES2_FB = 1000118006,
// Provided by XR_FB_passthrough
XR_TYPE_PASSTHROUGH_STYLE_FB = 1000118020,
// Provided by XR_FB_passthrough
XR_TYPE_PASSTHROUGH_COLOR_MAP_MONO_TO_RGBA_FB = 1000118021,
// Provided by XR_FB_passthrough
XR_TYPE_PASSTHROUGH_COLOR_MAP_MONO_TO_MONO_FB = 1000118022,
// Provided by XR_FB_passthrough
XR_TYPE_PASSTHROUGH_BRIGHTNESS_CONTRAST_SATURATION_FB = 1000118023,
// Provided by XR_FB_passthrough
XR_TYPE_EVENT_DATA_PASSTHROUGH_STATE_CHANGED_FB = 1000118030,
// Provided by XR_FB_render_model
XR_TYPE_RENDER_MODEL_PATH_INFO_FB = 1000119000,
// Provided by XR_FB_render_model
XR_TYPE_RENDER_MODEL_PROPERTIES_FB = 1000119001,
// Provided by XR_FB_render_model
XR_TYPE_RENDER_MODEL_BUFFER_FB = 1000119002,
// Provided by XR_FB_render_model
XR_TYPE_RENDER_MODEL_LOAD_INFO_FB = 1000119003,
// Provided by XR_FB_render_model
XR_TYPE_SYSTEM_RENDER_MODEL_PROPERTIES_FB = 1000119004,
// Provided by XR_FB_render_model
XR_TYPE_RENDER_MODEL_CAPABILITIES_REQUEST_FB = 1000119005,
// Provided by XR_KHR_binding_modification
XR_TYPE_BINDING_MODIFICATIONS_KHR = 1000120000,
// Provided by XR_VARJO_foveated_rendering
XR_TYPE_VIEW_LOCATE_FOVEATED_RENDERING_VARJO = 1000121000,
// Provided by XR_VARJO_foveated_rendering
XR_TYPE_FOVEATED_VIEW_CONFIGURATION_VIEW_VARJO = 1000121001,
// Provided by XR_VARJO_foveated_rendering
XR_TYPE_SYSTEM_FOVEATED_RENDERING_PROPERTIES_VARJO = 1000121002,
// Provided by XR_VARJO_composition_layer_depth_test
XR_TYPE_COMPOSITION_LAYER_DEPTH_TEST_VARJO = 1000122000,
// Provided by XR_VARJO_marker_tracking
XR_TYPE_SYSTEM_MARKER_TRACKING_PROPERTIES_VARJO = 1000124000,
// Provided by XR_VARJO_marker_tracking
XR_TYPE_EVENT_DATA_MARKER_TRACKING_UPDATE_VARJO = 1000124001,
// Provided by XR_VARJO_marker_tracking
XR_TYPE_MARKER_SPACE_CREATE_INFO_VARJO = 1000124002,
// Provided by XR_ML_frame_end_info
XR_TYPE_FRAME_END_INFO_ML = 1000135000,
// Provided by XR_ML_global_dimmer
XR_TYPE_GLOBAL_DIMMER_FRAME_END_INFO_ML = 1000136000,
// Provided by XR_ML_compat
XR_TYPE_COORDINATE_SPACE_CREATE_INFO_ML = 1000137000,
// Provided by XR_ML_marker_understanding
XR_TYPE_SYSTEM_MARKER_UNDERSTANDING_PROPERTIES_ML = 1000138000,
// Provided by XR_ML_marker_understanding
XR_TYPE_MARKER_DETECTOR_CREATE_INFO_ML = 1000138001,
// Provided by XR_ML_marker_understanding
XR_TYPE_MARKER_DETECTOR_ARUCO_INFO_ML = 1000138002,
// Provided by XR_ML_marker_understanding
XR_TYPE_MARKER_DETECTOR_SIZE_INFO_ML = 1000138003,
// Provided by XR_ML_marker_understanding
XR_TYPE_MARKER_DETECTOR_APRIL_TAG_INFO_ML = 1000138004,
// Provided by XR_ML_marker_understanding
XR_TYPE_MARKER_DETECTOR_CUSTOM_PROFILE_INFO_ML = 1000138005,
// Provided by XR_ML_marker_understanding
XR_TYPE_MARKER_DETECTOR_SNAPSHOT_INFO_ML = 1000138006,
// Provided by XR_ML_marker_understanding
XR_TYPE_MARKER_DETECTOR_STATE_ML = 1000138007,
// Provided by XR_ML_marker_understanding
XR_TYPE_MARKER_SPACE_CREATE_INFO_ML = 1000138008,
// Provided by XR_ML_localization_map
XR_TYPE_LOCALIZATION_MAP_ML = 1000139000,
// Provided by XR_ML_localization_map
XR_TYPE_EVENT_DATA_LOCALIZATION_CHANGED_ML = 1000139001,
// Provided by XR_ML_localization_map
XR_TYPE_MAP_LOCALIZATION_REQUEST_INFO_ML = 1000139002,
// Provided by XR_ML_localization_map
XR_TYPE_LOCALIZATION_MAP_IMPORT_INFO_ML = 1000139003,
// Provided by XR_ML_localization_map
XR_TYPE_LOCALIZATION_ENABLE_EVENTS_INFO_ML = 1000139004,
// Provided by XR_ML_spatial_anchors
XR_TYPE_SPATIAL_ANCHORS_CREATE_INFO_FROM_POSE_ML = 1000140000,
// Provided by XR_ML_spatial_anchors
XR_TYPE_CREATE_SPATIAL_ANCHORS_COMPLETION_ML = 1000140001,
// Provided by XR_ML_spatial_anchors
XR_TYPE_SPATIAL_ANCHOR_STATE_ML = 1000140002,
// Provided by XR_ML_spatial_anchors_storage
XR_TYPE_SPATIAL_ANCHORS_CREATE_STORAGE_INFO_ML = 1000141000,
// Provided by XR_ML_spatial_anchors_storage
XR_TYPE_SPATIAL_ANCHORS_QUERY_INFO_RADIUS_ML = 1000141001,
// Provided by XR_ML_spatial_anchors_storage
XR_TYPE_SPATIAL_ANCHORS_QUERY_COMPLETION_ML = 1000141002,
// Provided by XR_ML_spatial_anchors_storage
XR_TYPE_SPATIAL_ANCHORS_CREATE_INFO_FROM_UUIDS_ML = 1000141003,
// Provided by XR_ML_spatial_anchors_storage
XR_TYPE_SPATIAL_ANCHORS_PUBLISH_INFO_ML = 1000141004,
// Provided by XR_ML_spatial_anchors_storage
XR_TYPE_SPATIAL_ANCHORS_PUBLISH_COMPLETION_ML = 1000141005,
// Provided by XR_ML_spatial_anchors_storage
XR_TYPE_SPATIAL_ANCHORS_DELETE_INFO_ML = 1000141006,
// Provided by XR_ML_spatial_anchors_storage
XR_TYPE_SPATIAL_ANCHORS_DELETE_COMPLETION_ML = 1000141007,
// Provided by XR_ML_spatial_anchors_storage
XR_TYPE_SPATIAL_ANCHORS_UPDATE_EXPIRATION_INFO_ML = 1000141008,
// Provided by XR_ML_spatial_anchors_storage
XR_TYPE_SPATIAL_ANCHORS_UPDATE_EXPIRATION_COMPLETION_ML = 1000141009,
// Provided by XR_ML_spatial_anchors_storage
XR_TYPE_SPATIAL_ANCHORS_PUBLISH_COMPLETION_DETAILS_ML = 1000141010,
// Provided by XR_ML_spatial_anchors_storage
XR_TYPE_SPATIAL_ANCHORS_DELETE_COMPLETION_DETAILS_ML = 1000141011,
// Provided by XR_ML_spatial_anchors_storage
XR_TYPE_SPATIAL_ANCHORS_UPDATE_EXPIRATION_COMPLETION_DETAILS_ML = 1000141012,
// Provided by XR_ML_user_calibration
XR_TYPE_EVENT_DATA_HEADSET_FIT_CHANGED_ML = 1000472000,
// Provided by XR_ML_user_calibration
XR_TYPE_EVENT_DATA_EYE_CALIBRATION_CHANGED_ML = 1000472001,
// Provided by XR_ML_user_calibration
XR_TYPE_USER_CALIBRATION_ENABLE_EVENTS_INFO_ML = 1000472002,
// Provided by XR_MSFT_spatial_anchor_persistence
XR_TYPE_SPATIAL_ANCHOR_PERSISTENCE_INFO_MSFT = 1000142000,
// Provided by XR_MSFT_spatial_anchor_persistence
XR_TYPE_SPATIAL_ANCHOR_FROM_PERSISTED_ANCHOR_CREATE_INFO_MSFT = 1000142001,
// Provided by XR_MSFT_scene_marker
XR_TYPE_SCENE_MARKERS_MSFT = 1000147000,
// Provided by XR_MSFT_scene_marker
XR_TYPE_SCENE_MARKER_TYPE_FILTER_MSFT = 1000147001,
// Provided by XR_MSFT_scene_marker
XR_TYPE_SCENE_MARKER_QR_CODES_MSFT = 1000147002,
// Provided by XR_FB_spatial_entity_query
XR_TYPE_SPACE_QUERY_INFO_FB = 1000156001,
// Provided by XR_FB_spatial_entity_query
XR_TYPE_SPACE_QUERY_RESULTS_FB = 1000156002,
// Provided by XR_FB_spatial_entity_query
XR_TYPE_SPACE_STORAGE_LOCATION_FILTER_INFO_FB = 1000156003,
// Provided by XR_FB_spatial_entity_query
XR_TYPE_SPACE_UUID_FILTER_INFO_FB = 1000156054,
// Provided by XR_FB_spatial_entity_query
XR_TYPE_SPACE_COMPONENT_FILTER_INFO_FB = 1000156052,
// Provided by XR_FB_spatial_entity_query
XR_TYPE_EVENT_DATA_SPACE_QUERY_RESULTS_AVAILABLE_FB = 1000156103,
// Provided by XR_FB_spatial_entity_query
XR_TYPE_EVENT_DATA_SPACE_QUERY_COMPLETE_FB = 1000156104,
// Provided by XR_FB_spatial_entity_storage
XR_TYPE_SPACE_SAVE_INFO_FB = 1000158000,
// Provided by XR_FB_spatial_entity_storage
XR_TYPE_SPACE_ERASE_INFO_FB = 1000158001,
// Provided by XR_FB_spatial_entity_storage
XR_TYPE_EVENT_DATA_SPACE_SAVE_COMPLETE_FB = 1000158106,
// Provided by XR_FB_spatial_entity_storage
XR_TYPE_EVENT_DATA_SPACE_ERASE_COMPLETE_FB = 1000158107,
// Provided by XR_FB_foveation_vulkan
XR_TYPE_SWAPCHAIN_IMAGE_FOVEATION_VULKAN_FB = 1000160000,
// Provided by XR_FB_swapchain_update_state_android_surface
XR_TYPE_SWAPCHAIN_STATE_ANDROID_SURFACE_DIMENSIONS_FB = 1000161000,
// Provided by XR_FB_swapchain_update_state_opengl_es
XR_TYPE_SWAPCHAIN_STATE_SAMPLER_OPENGL_ES_FB = 1000162000,
// Provided by XR_FB_swapchain_update_state_vulkan
XR_TYPE_SWAPCHAIN_STATE_SAMPLER_VULKAN_FB = 1000163000,
// Provided by XR_FB_spatial_entity_sharing
XR_TYPE_SPACE_SHARE_INFO_FB = 1000169001,
// Provided by XR_FB_spatial_entity_sharing
XR_TYPE_EVENT_DATA_SPACE_SHARE_COMPLETE_FB = 1000169002,
// Provided by XR_FB_space_warp
XR_TYPE_COMPOSITION_LAYER_SPACE_WARP_INFO_FB = 1000171000,
// Provided by XR_FB_space_warp
XR_TYPE_SYSTEM_SPACE_WARP_PROPERTIES_FB = 1000171001,
// Provided by XR_FB_haptic_amplitude_envelope
XR_TYPE_HAPTIC_AMPLITUDE_ENVELOPE_VIBRATION_FB = 1000173001,
// Provided by XR_FB_scene
XR_TYPE_SEMANTIC_LABELS_FB = 1000175000,
// Provided by XR_FB_scene
XR_TYPE_ROOM_LAYOUT_FB = 1000175001,
// Provided by XR_FB_scene
XR_TYPE_BOUNDARY_2D_FB = 1000175002,
// Provided by XR_FB_scene
XR_TYPE_SEMANTIC_LABELS_SUPPORT_INFO_FB = 1000175010,
// Provided by XR_ALMALENCE_digital_lens_control
XR_TYPE_DIGITAL_LENS_CONTROL_ALMALENCE = 1000196000,
// Provided by XR_FB_scene_capture
XR_TYPE_EVENT_DATA_SCENE_CAPTURE_COMPLETE_FB = 1000198001,
// Provided by XR_FB_scene_capture
XR_TYPE_SCENE_CAPTURE_REQUEST_INFO_FB = 1000198050,
// Provided by XR_FB_spatial_entity_container
XR_TYPE_SPACE_CONTAINER_FB = 1000199000,
// Provided by XR_META_foveation_eye_tracked
XR_TYPE_FOVEATION_EYE_TRACKED_PROFILE_CREATE_INFO_META = 1000200000,
// Provided by XR_META_foveation_eye_tracked
XR_TYPE_FOVEATION_EYE_TRACKED_STATE_META = 1000200001,
// Provided by XR_META_foveation_eye_tracked
XR_TYPE_SYSTEM_FOVEATION_EYE_TRACKED_PROPERTIES_META = 1000200002,
// Provided by XR_FB_face_tracking
XR_TYPE_SYSTEM_FACE_TRACKING_PROPERTIES_FB = 1000201004,
// Provided by XR_FB_face_tracking
XR_TYPE_FACE_TRACKER_CREATE_INFO_FB = 1000201005,
// Provided by XR_FB_face_tracking
XR_TYPE_FACE_EXPRESSION_INFO_FB = 1000201002,
// Provided by XR_FB_face_tracking
XR_TYPE_FACE_EXPRESSION_WEIGHTS_FB = 1000201006,
// Provided by XR_FB_eye_tracking_social
XR_TYPE_EYE_TRACKER_CREATE_INFO_FB = 1000202001,
// Provided by XR_FB_eye_tracking_social
XR_TYPE_EYE_GAZES_INFO_FB = 1000202002,
// Provided by XR_FB_eye_tracking_social
XR_TYPE_EYE_GAZES_FB = 1000202003,
// Provided by XR_FB_eye_tracking_social
XR_TYPE_SYSTEM_EYE_TRACKING_PROPERTIES_FB = 1000202004,
// Provided by XR_FB_passthrough_keyboard_hands
XR_TYPE_PASSTHROUGH_KEYBOARD_HANDS_INTENSITY_FB = 1000203002,
// Provided by XR_FB_composition_layer_settings
XR_TYPE_COMPOSITION_LAYER_SETTINGS_FB = 1000204000,
// Provided by XR_FB_haptic_pcm
XR_TYPE_HAPTIC_PCM_VIBRATION_FB = 1000209001,
// Provided by XR_FB_haptic_pcm
XR_TYPE_DEVICE_PCM_SAMPLE_RATE_STATE_FB = 1000209002,
// Provided by XR_EXT_frame_synthesis
XR_TYPE_FRAME_SYNTHESIS_INFO_EXT = 1000211000,
// Provided by XR_EXT_frame_synthesis
XR_TYPE_FRAME_SYNTHESIS_CONFIG_VIEW_EXT = 1000211001,
// Provided by XR_FB_composition_layer_depth_test
XR_TYPE_COMPOSITION_LAYER_DEPTH_TEST_FB = 1000212000,
// Provided by XR_META_local_dimming
XR_TYPE_LOCAL_DIMMING_FRAME_END_INFO_META = 1000216000,
// Provided by XR_META_passthrough_preferences
XR_TYPE_PASSTHROUGH_PREFERENCES_META = 1000217000,
// Provided by XR_META_virtual_keyboard
XR_TYPE_SYSTEM_VIRTUAL_KEYBOARD_PROPERTIES_META = 1000219001,
// Provided by XR_META_virtual_keyboard
XR_TYPE_VIRTUAL_KEYBOARD_CREATE_INFO_META = 1000219002,
// Provided by XR_META_virtual_keyboard
XR_TYPE_VIRTUAL_KEYBOARD_SPACE_CREATE_INFO_META = 1000219003,
// Provided by XR_META_virtual_keyboard
XR_TYPE_VIRTUAL_KEYBOARD_LOCATION_INFO_META = 1000219004,
// Provided by XR_META_virtual_keyboard
XR_TYPE_VIRTUAL_KEYBOARD_MODEL_VISIBILITY_SET_INFO_META = 1000219005,
// Provided by XR_META_virtual_keyboard
XR_TYPE_VIRTUAL_KEYBOARD_ANIMATION_STATE_META = 1000219006,
// Provided by XR_META_virtual_keyboard
XR_TYPE_VIRTUAL_KEYBOARD_MODEL_ANIMATION_STATES_META = 1000219007,
// Provided by XR_META_virtual_keyboard
XR_TYPE_VIRTUAL_KEYBOARD_TEXTURE_DATA_META = 1000219009,
// Provided by XR_META_virtual_keyboard
XR_TYPE_VIRTUAL_KEYBOARD_INPUT_INFO_META = 1000219010,
// Provided by XR_META_virtual_keyboard
XR_TYPE_VIRTUAL_KEYBOARD_TEXT_CONTEXT_CHANGE_INFO_META = 1000219011,
// Provided by XR_META_virtual_keyboard
XR_TYPE_EVENT_DATA_VIRTUAL_KEYBOARD_COMMIT_TEXT_META = 1000219014,
// Provided by XR_META_virtual_keyboard
XR_TYPE_EVENT_DATA_VIRTUAL_KEYBOARD_BACKSPACE_META = 1000219015,
// Provided by XR_META_virtual_keyboard
XR_TYPE_EVENT_DATA_VIRTUAL_KEYBOARD_ENTER_META = 1000219016,
// Provided by XR_META_virtual_keyboard
XR_TYPE_EVENT_DATA_VIRTUAL_KEYBOARD_SHOWN_META = 1000219017,
// Provided by XR_META_virtual_keyboard
XR_TYPE_EVENT_DATA_VIRTUAL_KEYBOARD_HIDDEN_META = 1000219018,
// Provided by XR_OCULUS_external_camera
XR_TYPE_EXTERNAL_CAMERA_OCULUS = 1000226000,
// Provided by XR_META_vulkan_swapchain_create_info
XR_TYPE_VULKAN_SWAPCHAIN_CREATE_INFO_META = 1000227000,
// Provided by XR_META_performance_metrics
XR_TYPE_PERFORMANCE_METRICS_STATE_META = 1000232001,
// Provided by XR_META_performance_metrics
XR_TYPE_PERFORMANCE_METRICS_COUNTER_META = 1000232002,
// Provided by XR_FB_spatial_entity_storage_batch
XR_TYPE_SPACE_LIST_SAVE_INFO_FB = 1000238000,
// Provided by XR_FB_spatial_entity_storage_batch
XR_TYPE_EVENT_DATA_SPACE_LIST_SAVE_COMPLETE_FB = 1000238001,
// Provided by XR_FB_spatial_entity_user
XR_TYPE_SPACE_USER_CREATE_INFO_FB = 1000241001,
// Provided by XR_META_headset_id
XR_TYPE_SYSTEM_HEADSET_ID_PROPERTIES_META = 1000245000,
// Provided by XR_META_spatial_entity_discovery
XR_TYPE_SYSTEM_SPACE_DISCOVERY_PROPERTIES_META = 1000247000,
// Provided by XR_META_spatial_entity_discovery
XR_TYPE_SPACE_DISCOVERY_INFO_META = 1000247001,
// Provided by XR_META_spatial_entity_discovery
XR_TYPE_SPACE_FILTER_UUID_META = 1000247003,
// Provided by XR_META_spatial_entity_discovery
XR_TYPE_SPACE_FILTER_COMPONENT_META = 1000247004,
// Provided by XR_META_spatial_entity_discovery
XR_TYPE_SPACE_DISCOVERY_RESULT_META = 1000247005,
// Provided by XR_META_spatial_entity_discovery
XR_TYPE_SPACE_DISCOVERY_RESULTS_META = 1000247006,
// Provided by XR_META_spatial_entity_discovery
XR_TYPE_EVENT_DATA_SPACE_DISCOVERY_RESULTS_AVAILABLE_META = 1000247007,
// Provided by XR_META_spatial_entity_discovery
XR_TYPE_EVENT_DATA_SPACE_DISCOVERY_COMPLETE_META = 1000247008,
// Provided by XR_META_recommended_layer_resolution
XR_TYPE_RECOMMENDED_LAYER_RESOLUTION_META = 1000254000,
// Provided by XR_META_recommended_layer_resolution
XR_TYPE_RECOMMENDED_LAYER_RESOLUTION_GET_INFO_META = 1000254001,
// Provided by XR_META_spatial_entity_persistence
XR_TYPE_SYSTEM_SPACE_PERSISTENCE_PROPERTIES_META = 1000259000,
// Provided by XR_META_spatial_entity_persistence
XR_TYPE_SPACES_SAVE_INFO_META = 1000259001,
// Provided by XR_META_spatial_entity_persistence
XR_TYPE_EVENT_DATA_SPACES_SAVE_RESULT_META = 1000259002,
// Provided by XR_META_spatial_entity_persistence
XR_TYPE_SPACES_ERASE_INFO_META = 1000259003,
// Provided by XR_META_spatial_entity_persistence
XR_TYPE_EVENT_DATA_SPACES_ERASE_RESULT_META = 1000259004,
// Provided by XR_META_passthrough_color_lut
XR_TYPE_SYSTEM_PASSTHROUGH_COLOR_LUT_PROPERTIES_META = 1000266000,
// Provided by XR_META_passthrough_color_lut
XR_TYPE_PASSTHROUGH_COLOR_LUT_CREATE_INFO_META = 1000266001,
// Provided by XR_META_passthrough_color_lut
XR_TYPE_PASSTHROUGH_COLOR_LUT_UPDATE_INFO_META = 1000266002,
// Provided by XR_META_passthrough_color_lut
XR_TYPE_PASSTHROUGH_COLOR_MAP_LUT_META = 1000266100,
// Provided by XR_META_passthrough_color_lut
XR_TYPE_PASSTHROUGH_COLOR_MAP_INTERPOLATED_LUT_META = 1000266101,
// Provided by XR_META_spatial_entity_mesh
XR_TYPE_SPACE_TRIANGLE_MESH_GET_INFO_META = 1000269001,
// Provided by XR_META_spatial_entity_mesh
XR_TYPE_SPACE_TRIANGLE_MESH_META = 1000269002,
// Provided by XR_META_body_tracking_full_body
XR_TYPE_SYSTEM_PROPERTIES_BODY_TRACKING_FULL_BODY_META = 1000274000,
// Provided by XR_META_passthrough_layer_resumed_event
XR_TYPE_EVENT_DATA_PASSTHROUGH_LAYER_RESUMED_META = 1000282000,
// Provided by XR_META_body_tracking_calibration
XR_TYPE_BODY_TRACKING_CALIBRATION_INFO_META = 1000283002,
// Provided by XR_META_body_tracking_calibration
XR_TYPE_BODY_TRACKING_CALIBRATION_STATUS_META = 1000283003,
// Provided by XR_META_body_tracking_calibration
XR_TYPE_SYSTEM_PROPERTIES_BODY_TRACKING_CALIBRATION_META = 1000283004,
// Provided by XR_FB_face_tracking2
XR_TYPE_SYSTEM_FACE_TRACKING_PROPERTIES2_FB = 1000287013,
// Provided by XR_FB_face_tracking2
XR_TYPE_FACE_TRACKER_CREATE_INFO2_FB = 1000287014,
// Provided by XR_FB_face_tracking2
XR_TYPE_FACE_EXPRESSION_INFO2_FB = 1000287015,
// Provided by XR_FB_face_tracking2
XR_TYPE_FACE_EXPRESSION_WEIGHTS2_FB = 1000287016,
// Provided by XR_META_spatial_entity_sharing
XR_TYPE_SYSTEM_SPATIAL_ENTITY_SHARING_PROPERTIES_META = 1000290000,
// Provided by XR_META_spatial_entity_sharing
XR_TYPE_SHARE_SPACES_INFO_META = 1000290001,
// Provided by XR_META_spatial_entity_sharing
XR_TYPE_EVENT_DATA_SHARE_SPACES_COMPLETE_META = 1000290002,
// Provided by XR_META_environment_depth
XR_TYPE_ENVIRONMENT_DEPTH_PROVIDER_CREATE_INFO_META = 1000291000,
// Provided by XR_META_environment_depth
XR_TYPE_ENVIRONMENT_DEPTH_SWAPCHAIN_CREATE_INFO_META = 1000291001,
// Provided by XR_META_environment_depth
XR_TYPE_ENVIRONMENT_DEPTH_SWAPCHAIN_STATE_META = 1000291002,
// Provided by XR_META_environment_depth
XR_TYPE_ENVIRONMENT_DEPTH_IMAGE_ACQUIRE_INFO_META = 1000291003,
// Provided by XR_META_environment_depth
XR_TYPE_ENVIRONMENT_DEPTH_IMAGE_VIEW_META = 1000291004,
// Provided by XR_META_environment_depth
XR_TYPE_ENVIRONMENT_DEPTH_IMAGE_META = 1000291005,
// Provided by XR_META_environment_depth
XR_TYPE_ENVIRONMENT_DEPTH_HAND_REMOVAL_SET_INFO_META = 1000291006,
// Provided by XR_META_environment_depth
XR_TYPE_SYSTEM_ENVIRONMENT_DEPTH_PROPERTIES_META = 1000291007,
// Provided by XR_META_environment_depth
XR_TYPE_ENVIRONMENT_DEPTH_IMAGE_TIMESTAMP_META = 1000291008,
// Provided by XR_EXT_render_model
XR_TYPE_RENDER_MODEL_CREATE_INFO_EXT = 1000300000,
// Provided by XR_EXT_render_model
XR_TYPE_RENDER_MODEL_PROPERTIES_GET_INFO_EXT = 1000300001,
// Provided by XR_EXT_render_model
XR_TYPE_RENDER_MODEL_PROPERTIES_EXT = 1000300002,
// Provided by XR_EXT_render_model
XR_TYPE_RENDER_MODEL_SPACE_CREATE_INFO_EXT = 1000300003,
// Provided by XR_EXT_render_model
XR_TYPE_RENDER_MODEL_STATE_GET_INFO_EXT = 1000300004,
// Provided by XR_EXT_render_model
XR_TYPE_RENDER_MODEL_STATE_EXT = 1000300005,
// Provided by XR_EXT_render_model
XR_TYPE_RENDER_MODEL_ASSET_CREATE_INFO_EXT = 1000300006,
// Provided by XR_EXT_render_model
XR_TYPE_RENDER_MODEL_ASSET_DATA_GET_INFO_EXT = 1000300007,
// Provided by XR_EXT_render_model
XR_TYPE_RENDER_MODEL_ASSET_DATA_EXT = 1000300008,
// Provided by XR_EXT_render_model
XR_TYPE_RENDER_MODEL_ASSET_PROPERTIES_GET_INFO_EXT = 1000300009,
// Provided by XR_EXT_render_model
XR_TYPE_RENDER_MODEL_ASSET_PROPERTIES_EXT = 1000300010,
// Provided by XR_EXT_interaction_render_model
XR_TYPE_INTERACTION_RENDER_MODEL_IDS_ENUMERATE_INFO_EXT = 1000301000,
// Provided by XR_EXT_interaction_render_model
XR_TYPE_INTERACTION_RENDER_MODEL_SUBACTION_PATH_INFO_EXT = 1000301001,
// Provided by XR_EXT_interaction_render_model
XR_TYPE_EVENT_DATA_INTERACTION_RENDER_MODELS_CHANGED_EXT = 1000301002,
// Provided by XR_EXT_interaction_render_model
XR_TYPE_INTERACTION_RENDER_MODEL_TOP_LEVEL_USER_PATH_GET_INFO_EXT = 1000301003,
// Provided by XR_HTC_passthrough
XR_TYPE_PASSTHROUGH_CREATE_INFO_HTC = 1000317001,
// Provided by XR_HTC_passthrough
XR_TYPE_PASSTHROUGH_COLOR_HTC = 1000317002,
// Provided by XR_HTC_passthrough
XR_TYPE_PASSTHROUGH_MESH_TRANSFORM_INFO_HTC = 1000317003,
// Provided by XR_HTC_passthrough
XR_TYPE_COMPOSITION_LAYER_PASSTHROUGH_HTC = 1000317004,
// Provided by XR_HTC_foveation
XR_TYPE_FOVEATION_APPLY_INFO_HTC = 1000318000,
// Provided by XR_HTC_foveation
XR_TYPE_FOVEATION_DYNAMIC_MODE_INFO_HTC = 1000318001,
// Provided by XR_HTC_foveation
XR_TYPE_FOVEATION_CUSTOM_MODE_INFO_HTC = 1000318002,
// Provided by XR_HTC_anchor
XR_TYPE_SYSTEM_ANCHOR_PROPERTIES_HTC = 1000319000,
// Provided by XR_HTC_anchor
XR_TYPE_SPATIAL_ANCHOR_CREATE_INFO_HTC = 1000319001,
// Provided by XR_HTC_body_tracking
XR_TYPE_SYSTEM_BODY_TRACKING_PROPERTIES_HTC = 1000320000,
// Provided by XR_HTC_body_tracking
XR_TYPE_BODY_TRACKER_CREATE_INFO_HTC = 1000320001,
// Provided by XR_HTC_body_tracking
XR_TYPE_BODY_JOINTS_LOCATE_INFO_HTC = 1000320002,
// Provided by XR_HTC_body_tracking
XR_TYPE_BODY_JOINT_LOCATIONS_HTC = 1000320003,
// Provided by XR_HTC_body_tracking
XR_TYPE_BODY_SKELETON_HTC = 1000320004,
// Provided by XR_EXT_active_action_set_priority
XR_TYPE_ACTIVE_ACTION_SET_PRIORITIES_EXT = 1000373000,
// Provided by XR_MNDX_force_feedback_curl
XR_TYPE_SYSTEM_FORCE_FEEDBACK_CURL_PROPERTIES_MNDX = 1000375000,
// Provided by XR_MNDX_force_feedback_curl
XR_TYPE_FORCE_FEEDBACK_CURL_APPLY_LOCATIONS_MNDX = 1000375001,
// Provided by XR_BD_body_tracking
XR_TYPE_BODY_TRACKER_CREATE_INFO_BD = 1000385001,
// Provided by XR_BD_body_tracking
XR_TYPE_BODY_JOINTS_LOCATE_INFO_BD = 1000385002,
// Provided by XR_BD_body_tracking
XR_TYPE_BODY_JOINT_LOCATIONS_BD = 1000385003,
// Provided by XR_BD_body_tracking
XR_TYPE_SYSTEM_BODY_TRACKING_PROPERTIES_BD = 1000385004,
// Provided by XR_BD_facial_simulation
XR_TYPE_SYSTEM_FACIAL_SIMULATION_PROPERTIES_BD = 1000386001,
// Provided by XR_BD_facial_simulation
XR_TYPE_FACE_TRACKER_CREATE_INFO_BD = 1000386002,
// Provided by XR_BD_facial_simulation
XR_TYPE_FACIAL_SIMULATION_DATA_GET_INFO_BD = 1000386003,
// Provided by XR_BD_facial_simulation
XR_TYPE_FACIAL_SIMULATION_DATA_BD = 1000386004,
// Provided by XR_BD_facial_simulation
XR_TYPE_LIP_EXPRESSION_DATA_BD = 1000386005,
// Provided by XR_BD_spatial_sensing
XR_TYPE_SYSTEM_SPATIAL_SENSING_PROPERTIES_BD = 1000389000,
// Provided by XR_BD_spatial_sensing
XR_TYPE_SPATIAL_ENTITY_COMPONENT_GET_INFO_BD = 1000389001,
// Provided by XR_BD_spatial_sensing
XR_TYPE_SPATIAL_ENTITY_LOCATION_GET_INFO_BD = 1000389002,
// Provided by XR_BD_spatial_sensing
XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_LOCATION_BD = 1000389003,
// Provided by XR_BD_spatial_sensing
XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_SEMANTIC_BD = 1000389004,
// Provided by XR_BD_spatial_sensing
XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_BOUNDING_BOX_2D_BD = 1000389005,
// Provided by XR_BD_spatial_sensing
XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_POLYGON_BD = 1000389006,
// Provided by XR_BD_spatial_sensing
XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_BOUNDING_BOX_3D_BD = 1000389007,
// Provided by XR_BD_spatial_sensing
XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_TRIANGLE_MESH_BD = 1000389008,
// Provided by XR_BD_spatial_sensing
XR_TYPE_SENSE_DATA_PROVIDER_CREATE_INFO_BD = 1000389009,
// Provided by XR_BD_spatial_sensing
XR_TYPE_SENSE_DATA_PROVIDER_START_INFO_BD = 1000389010,
// Provided by XR_BD_spatial_sensing
XR_TYPE_EVENT_DATA_SENSE_DATA_PROVIDER_STATE_CHANGED_BD = 1000389011,
// Provided by XR_BD_spatial_sensing
XR_TYPE_EVENT_DATA_SENSE_DATA_UPDATED_BD = 1000389012,
// Provided by XR_BD_spatial_sensing
XR_TYPE_SENSE_DATA_QUERY_INFO_BD = 1000389013,
// Provided by XR_BD_spatial_sensing
XR_TYPE_SENSE_DATA_QUERY_COMPLETION_BD = 1000389014,
// Provided by XR_BD_spatial_sensing
XR_TYPE_SENSE_DATA_FILTER_UUID_BD = 1000389015,
// Provided by XR_BD_spatial_sensing
XR_TYPE_SENSE_DATA_FILTER_SEMANTIC_BD = 1000389016,
// Provided by XR_BD_spatial_sensing
XR_TYPE_QUERIED_SENSE_DATA_GET_INFO_BD = 1000389017,
// Provided by XR_BD_spatial_sensing
XR_TYPE_QUERIED_SENSE_DATA_BD = 1000389018,
// Provided by XR_BD_spatial_sensing
XR_TYPE_SPATIAL_ENTITY_STATE_BD = 1000389019,
// Provided by XR_BD_spatial_sensing
XR_TYPE_SPATIAL_ENTITY_ANCHOR_CREATE_INFO_BD = 1000389020,
// Provided by XR_BD_spatial_sensing
XR_TYPE_ANCHOR_SPACE_CREATE_INFO_BD = 1000389021,
// Provided by XR_BD_spatial_anchor
XR_TYPE_SYSTEM_SPATIAL_ANCHOR_PROPERTIES_BD = 1000390000,
// Provided by XR_BD_spatial_anchor
XR_TYPE_SPATIAL_ANCHOR_CREATE_INFO_BD = 1000390001,
// Provided by XR_BD_spatial_anchor
XR_TYPE_SPATIAL_ANCHOR_CREATE_COMPLETION_BD = 1000390002,
// Provided by XR_BD_spatial_anchor
XR_TYPE_SPATIAL_ANCHOR_PERSIST_INFO_BD = 1000390003,
// Provided by XR_BD_spatial_anchor
XR_TYPE_SPATIAL_ANCHOR_UNPERSIST_INFO_BD = 1000390004,
// Provided by XR_BD_spatial_anchor_sharing
XR_TYPE_SYSTEM_SPATIAL_ANCHOR_SHARING_PROPERTIES_BD = 1000391000,
// Provided by XR_BD_spatial_anchor_sharing
XR_TYPE_SPATIAL_ANCHOR_SHARE_INFO_BD = 1000391001,
// Provided by XR_BD_spatial_anchor_sharing
XR_TYPE_SHARED_SPATIAL_ANCHOR_DOWNLOAD_INFO_BD = 1000391002,
// Provided by XR_BD_spatial_scene
XR_TYPE_SYSTEM_SPATIAL_SCENE_PROPERTIES_BD = 1000392000,
// Provided by XR_BD_spatial_scene
XR_TYPE_SCENE_CAPTURE_INFO_BD = 1000392001,
// Provided by XR_BD_spatial_mesh
XR_TYPE_SYSTEM_SPATIAL_MESH_PROPERTIES_BD = 1000393000,
// Provided by XR_BD_spatial_mesh
XR_TYPE_SENSE_DATA_PROVIDER_CREATE_INFO_SPATIAL_MESH_BD = 1000393001,
// Provided by XR_BD_future_progress
XR_TYPE_FUTURE_POLL_RESULT_PROGRESS_BD = 1000394001,
// Provided by XR_BD_spatial_plane
XR_TYPE_SYSTEM_SPATIAL_PLANE_PROPERTIES_BD = 1000396000,
// Provided by XR_BD_spatial_plane
XR_TYPE_SPATIAL_ENTITY_COMPONENT_DATA_PLANE_ORIENTATION_BD = 1000396001,
// Provided by XR_BD_spatial_plane
XR_TYPE_SENSE_DATA_FILTER_PLANE_ORIENTATION_BD = 1000396002,
// Provided by XR_EXT_hand_tracking_data_source
XR_TYPE_HAND_TRACKING_DATA_SOURCE_INFO_EXT = 1000428000,
// Provided by XR_EXT_hand_tracking_data_source
XR_TYPE_HAND_TRACKING_DATA_SOURCE_STATE_EXT = 1000428001,
// Provided by XR_EXT_plane_detection
XR_TYPE_PLANE_DETECTOR_CREATE_INFO_EXT = 1000429001,
// Provided by XR_EXT_plane_detection
XR_TYPE_PLANE_DETECTOR_BEGIN_INFO_EXT = 1000429002,
// Provided by XR_EXT_plane_detection
XR_TYPE_PLANE_DETECTOR_GET_INFO_EXT = 1000429003,
// Provided by XR_EXT_plane_detection
XR_TYPE_PLANE_DETECTOR_LOCATIONS_EXT = 1000429004,
// Provided by XR_EXT_plane_detection
XR_TYPE_PLANE_DETECTOR_LOCATION_EXT = 1000429005,
// Provided by XR_EXT_plane_detection
XR_TYPE_PLANE_DETECTOR_POLYGON_BUFFER_EXT = 1000429006,
// Provided by XR_EXT_plane_detection
XR_TYPE_SYSTEM_PLANE_DETECTION_PROPERTIES_EXT = 1000429007,
// Provided by XR_ANDROID_trackables
XR_TYPE_TRACKABLE_GET_INFO_ANDROID = 1000455000,
// Provided by XR_ANDROID_trackables
XR_TYPE_ANCHOR_SPACE_CREATE_INFO_ANDROID = 1000455001,
// Provided by XR_ANDROID_trackables
XR_TYPE_TRACKABLE_PLANE_ANDROID = 1000455003,
// Provided by XR_ANDROID_trackables
XR_TYPE_TRACKABLE_TRACKER_CREATE_INFO_ANDROID = 1000455004,
// Provided by XR_ANDROID_trackables
XR_TYPE_SYSTEM_TRACKABLES_PROPERTIES_ANDROID = 1000455005,
// Provided by XR_ANDROID_device_anchor_persistence
XR_TYPE_PERSISTED_ANCHOR_SPACE_CREATE_INFO_ANDROID = 1000457001,
// Provided by XR_ANDROID_device_anchor_persistence
XR_TYPE_PERSISTED_ANCHOR_SPACE_INFO_ANDROID = 1000457002,
// Provided by XR_ANDROID_device_anchor_persistence
XR_TYPE_DEVICE_ANCHOR_PERSISTENCE_CREATE_INFO_ANDROID = 1000457003,
// Provided by XR_ANDROID_device_anchor_persistence
XR_TYPE_SYSTEM_DEVICE_ANCHOR_PERSISTENCE_PROPERTIES_ANDROID = 1000457004,
// Provided by XR_ANDROID_face_tracking
XR_TYPE_FACE_TRACKER_CREATE_INFO_ANDROID = 1000458000,
// Provided by XR_ANDROID_face_tracking
XR_TYPE_FACE_STATE_GET_INFO_ANDROID = 1000458001,
// Provided by XR_ANDROID_face_tracking
XR_TYPE_FACE_STATE_ANDROID = 1000458002,
// Provided by XR_ANDROID_face_tracking
XR_TYPE_SYSTEM_FACE_TRACKING_PROPERTIES_ANDROID = 1000458003,
// Provided by XR_ANDROID_passthrough_camera_state
XR_TYPE_PASSTHROUGH_CAMERA_STATE_GET_INFO_ANDROID = 1000460000,
// Provided by XR_ANDROID_passthrough_camera_state
XR_TYPE_SYSTEM_PASSTHROUGH_CAMERA_STATE_PROPERTIES_ANDROID = 1000460001,
// Provided by XR_ANDROID_raycast
XR_TYPE_RAYCAST_INFO_ANDROID = 1000463000,
// Provided by XR_ANDROID_raycast
XR_TYPE_RAYCAST_HIT_RESULTS_ANDROID = 1000463001,
// Provided by XR_ANDROID_trackables_object
XR_TYPE_TRACKABLE_OBJECT_ANDROID = 1000466000,
// Provided by XR_ANDROID_trackables_object
XR_TYPE_TRACKABLE_OBJECT_CONFIGURATION_ANDROID = 1000466001,
// Provided by XR_EXT_future
XR_TYPE_FUTURE_CANCEL_INFO_EXT = 1000469000,
// Provided by XR_EXT_future
XR_TYPE_FUTURE_POLL_INFO_EXT = 1000469001,
// Provided by XR_EXT_future
XR_TYPE_FUTURE_COMPLETION_EXT = 1000469002,
// Provided by XR_EXT_future
XR_TYPE_FUTURE_POLL_RESULT_EXT = 1000469003,
// Provided by XR_EXT_user_presence
XR_TYPE_EVENT_DATA_USER_PRESENCE_CHANGED_EXT = 1000470000,
// Provided by XR_EXT_user_presence
XR_TYPE_SYSTEM_USER_PRESENCE_PROPERTIES_EXT = 1000470001,
// Provided by XR_ML_system_notifications
XR_TYPE_SYSTEM_NOTIFICATIONS_SET_INFO_ML = 1000473000,
// Provided by XR_ML_world_mesh_detection
XR_TYPE_WORLD_MESH_DETECTOR_CREATE_INFO_ML = 1000474001,
// Provided by XR_ML_world_mesh_detection
XR_TYPE_WORLD_MESH_STATE_REQUEST_INFO_ML = 1000474002,
// Provided by XR_ML_world_mesh_detection
XR_TYPE_WORLD_MESH_BLOCK_STATE_ML = 1000474003,
// Provided by XR_ML_world_mesh_detection
XR_TYPE_WORLD_MESH_STATE_REQUEST_COMPLETION_ML = 1000474004,
// Provided by XR_ML_world_mesh_detection
XR_TYPE_WORLD_MESH_BUFFER_RECOMMENDED_SIZE_INFO_ML = 1000474005,
// Provided by XR_ML_world_mesh_detection
XR_TYPE_WORLD_MESH_BUFFER_SIZE_ML = 1000474006,
// Provided by XR_ML_world_mesh_detection
XR_TYPE_WORLD_MESH_BUFFER_ML = 1000474007,
// Provided by XR_ML_world_mesh_detection
XR_TYPE_WORLD_MESH_BLOCK_REQUEST_ML = 1000474008,
// Provided by XR_ML_world_mesh_detection
XR_TYPE_WORLD_MESH_GET_INFO_ML = 1000474009,
// Provided by XR_ML_world_mesh_detection
XR_TYPE_WORLD_MESH_BLOCK_ML = 1000474010,
// Provided by XR_ML_world_mesh_detection
XR_TYPE_WORLD_MESH_REQUEST_COMPLETION_ML = 1000474011,
// Provided by XR_ML_world_mesh_detection
XR_TYPE_WORLD_MESH_REQUEST_COMPLETION_INFO_ML = 1000474012,
// Provided by XR_ML_facial_expression
XR_TYPE_SYSTEM_FACIAL_EXPRESSION_PROPERTIES_ML = 1000482004,
// Provided by XR_ML_facial_expression
XR_TYPE_FACIAL_EXPRESSION_CLIENT_CREATE_INFO_ML = 1000482005,
// Provided by XR_ML_facial_expression
XR_TYPE_FACIAL_EXPRESSION_BLEND_SHAPE_GET_INFO_ML = 1000482006,
// Provided by XR_ML_facial_expression
XR_TYPE_FACIAL_EXPRESSION_BLEND_SHAPE_PROPERTIES_ML = 1000482007,
// Provided by XR_META_simultaneous_hands_and_controllers
XR_TYPE_SYSTEM_SIMULTANEOUS_HANDS_AND_CONTROLLERS_PROPERTIES_META = 1000532001,
// Provided by XR_META_simultaneous_hands_and_controllers
XR_TYPE_SIMULTANEOUS_HANDS_AND_CONTROLLERS_TRACKING_RESUME_INFO_META = 1000532002,
// Provided by XR_META_simultaneous_hands_and_controllers
XR_TYPE_SIMULTANEOUS_HANDS_AND_CONTROLLERS_TRACKING_PAUSE_INFO_META = 1000532003,
// Provided by XR_META_colocation_discovery
XR_TYPE_COLOCATION_DISCOVERY_START_INFO_META = 1000571010,
// Provided by XR_META_colocation_discovery
XR_TYPE_COLOCATION_DISCOVERY_STOP_INFO_META = 1000571011,
// Provided by XR_META_colocation_discovery
XR_TYPE_COLOCATION_ADVERTISEMENT_START_INFO_META = 1000571012,
// Provided by XR_META_colocation_discovery
XR_TYPE_COLOCATION_ADVERTISEMENT_STOP_INFO_META = 1000571013,
// Provided by XR_META_colocation_discovery
XR_TYPE_EVENT_DATA_START_COLOCATION_ADVERTISEMENT_COMPLETE_META = 1000571020,
// Provided by XR_META_colocation_discovery
XR_TYPE_EVENT_DATA_STOP_COLOCATION_ADVERTISEMENT_COMPLETE_META = 1000571021,
// Provided by XR_META_colocation_discovery
XR_TYPE_EVENT_DATA_COLOCATION_ADVERTISEMENT_COMPLETE_META = 1000571022,
// Provided by XR_META_colocation_discovery
XR_TYPE_EVENT_DATA_START_COLOCATION_DISCOVERY_COMPLETE_META = 1000571023,
// Provided by XR_META_colocation_discovery
XR_TYPE_EVENT_DATA_COLOCATION_DISCOVERY_RESULT_META = 1000571024,
// Provided by XR_META_colocation_discovery
XR_TYPE_EVENT_DATA_COLOCATION_DISCOVERY_COMPLETE_META = 1000571025,
// Provided by XR_META_colocation_discovery
XR_TYPE_EVENT_DATA_STOP_COLOCATION_DISCOVERY_COMPLETE_META = 1000571026,
// Provided by XR_META_colocation_discovery
XR_TYPE_SYSTEM_COLOCATION_DISCOVERY_PROPERTIES_META = 1000571030,
// Provided by XR_META_spatial_entity_group_sharing
XR_TYPE_SHARE_SPACES_RECIPIENT_GROUPS_META = 1000572000,
// Provided by XR_META_spatial_entity_group_sharing
XR_TYPE_SPACE_GROUP_UUID_FILTER_INFO_META = 1000572001,
// Provided by XR_META_spatial_entity_group_sharing
XR_TYPE_SYSTEM_SPATIAL_ENTITY_GROUP_SHARING_PROPERTIES_META = 1000572100,
// Provided by XR_ANDROID_anchor_sharing_export
XR_TYPE_ANCHOR_SHARING_INFO_ANDROID = 1000701000,
// Provided by XR_ANDROID_anchor_sharing_export
XR_TYPE_ANCHOR_SHARING_TOKEN_ANDROID = 1000701001,
// Provided by XR_ANDROID_anchor_sharing_export
XR_TYPE_SYSTEM_ANCHOR_SHARING_EXPORT_PROPERTIES_ANDROID = 1000701002,
// Provided by XR_ANDROID_trackables_marker
XR_TYPE_SYSTEM_MARKER_TRACKING_PROPERTIES_ANDROID = 1000707000,
// Provided by XR_ANDROID_trackables_marker
XR_TYPE_TRACKABLE_MARKER_CONFIGURATION_ANDROID = 1000707001,
// Provided by XR_ANDROID_trackables_marker
XR_TYPE_TRACKABLE_MARKER_ANDROID = 1000707002,
// Provided by XR_EXT_spatial_entity
XR_TYPE_SPATIAL_CAPABILITY_COMPONENT_TYPES_EXT = 1000740000,
// Provided by XR_EXT_spatial_entity
XR_TYPE_SPATIAL_CONTEXT_CREATE_INFO_EXT = 1000740001,
// Provided by XR_EXT_spatial_entity
XR_TYPE_CREATE_SPATIAL_CONTEXT_COMPLETION_EXT = 1000740002,
// Provided by XR_EXT_spatial_entity
XR_TYPE_SPATIAL_DISCOVERY_SNAPSHOT_CREATE_INFO_EXT = 1000740003,
// Provided by XR_EXT_spatial_entity
XR_TYPE_CREATE_SPATIAL_DISCOVERY_SNAPSHOT_COMPLETION_INFO_EXT = 1000740004,
// Provided by XR_EXT_spatial_entity
XR_TYPE_CREATE_SPATIAL_DISCOVERY_SNAPSHOT_COMPLETION_EXT = 1000740005,
// Provided by XR_EXT_spatial_entity
XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_CONDITION_EXT = 1000740006,
// Provided by XR_EXT_spatial_entity
XR_TYPE_SPATIAL_COMPONENT_DATA_QUERY_RESULT_EXT = 1000740007,
// Provided by XR_EXT_spatial_entity
XR_TYPE_SPATIAL_BUFFER_GET_INFO_EXT = 1000740008,
// Provided by XR_EXT_spatial_entity
XR_TYPE_SPATIAL_COMPONENT_BOUNDED_2D_LIST_EXT = 1000740009,
// Provided by XR_EXT_spatial_entity
XR_TYPE_SPATIAL_COMPONENT_BOUNDED_3D_LIST_EXT = 1000740010,
// Provided by XR_EXT_spatial_entity
XR_TYPE_SPATIAL_COMPONENT_PARENT_LIST_EXT = 1000740011,
// Provided by XR_EXT_spatial_entity
XR_TYPE_SPATIAL_COMPONENT_MESH_3D_LIST_EXT = 1000740012,
// Provided by XR_EXT_spatial_entity
XR_TYPE_SPATIAL_ENTITY_FROM_ID_CREATE_INFO_EXT = 1000740013,
// Provided by XR_EXT_spatial_entity
XR_TYPE_SPATIAL_UPDATE_SNAPSHOT_CREATE_INFO_EXT = 1000740014,
// Provided by XR_EXT_spatial_entity
XR_TYPE_EVENT_DATA_SPATIAL_DISCOVERY_RECOMMENDED_EXT = 1000740015,
// Provided by XR_EXT_spatial_entity
XR_TYPE_SPATIAL_FILTER_TRACKING_STATE_EXT = 1000740016,
// Provided by XR_EXT_spatial_plane_tracking
XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_PLANE_TRACKING_EXT = 1000741000,
// Provided by XR_EXT_spatial_plane_tracking
XR_TYPE_SPATIAL_COMPONENT_PLANE_ALIGNMENT_LIST_EXT = 1000741001,
// Provided by XR_EXT_spatial_plane_tracking
XR_TYPE_SPATIAL_COMPONENT_MESH_2D_LIST_EXT = 1000741002,
// Provided by XR_EXT_spatial_plane_tracking
XR_TYPE_SPATIAL_COMPONENT_POLYGON_2D_LIST_EXT = 1000741003,
// Provided by XR_EXT_spatial_plane_tracking
XR_TYPE_SPATIAL_COMPONENT_PLANE_SEMANTIC_LABEL_LIST_EXT = 1000741004,
// Provided by XR_EXT_spatial_marker_tracking
XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_QR_CODE_EXT = 1000743000,
// Provided by XR_EXT_spatial_marker_tracking
XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_MICRO_QR_CODE_EXT = 1000743001,
// Provided by XR_EXT_spatial_marker_tracking
XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_ARUCO_MARKER_EXT = 1000743002,
// Provided by XR_EXT_spatial_marker_tracking
XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_APRIL_TAG_EXT = 1000743003,
// Provided by XR_EXT_spatial_marker_tracking
XR_TYPE_SPATIAL_MARKER_SIZE_EXT = 1000743004,
// Provided by XR_EXT_spatial_marker_tracking
XR_TYPE_SPATIAL_MARKER_STATIC_OPTIMIZATION_EXT = 1000743005,
// Provided by XR_EXT_spatial_marker_tracking
XR_TYPE_SPATIAL_COMPONENT_MARKER_LIST_EXT = 1000743006,
// Provided by XR_EXT_spatial_anchor
XR_TYPE_SPATIAL_CAPABILITY_CONFIGURATION_ANCHOR_EXT = 1000762000,
// Provided by XR_EXT_spatial_anchor
XR_TYPE_SPATIAL_COMPONENT_ANCHOR_LIST_EXT = 1000762001,
// Provided by XR_EXT_spatial_anchor
XR_TYPE_SPATIAL_ANCHOR_CREATE_INFO_EXT = 1000762002,
// Provided by XR_EXT_spatial_persistence
XR_TYPE_SPATIAL_PERSISTENCE_CONTEXT_CREATE_INFO_EXT = 1000763000,
// Provided by XR_EXT_spatial_persistence
XR_TYPE_CREATE_SPATIAL_PERSISTENCE_CONTEXT_COMPLETION_EXT = 1000763001,
// Provided by XR_EXT_spatial_persistence
XR_TYPE_SPATIAL_CONTEXT_PERSISTENCE_CONFIG_EXT = 1000763002,
// Provided by XR_EXT_spatial_persistence
XR_TYPE_SPATIAL_DISCOVERY_PERSISTENCE_UUID_FILTER_EXT = 1000763003,
// Provided by XR_EXT_spatial_persistence
XR_TYPE_SPATIAL_COMPONENT_PERSISTENCE_LIST_EXT = 1000763004,
// Provided by XR_EXT_spatial_persistence_operations
XR_TYPE_SPATIAL_ENTITY_PERSIST_INFO_EXT = 1000781000,
// Provided by XR_EXT_spatial_persistence_operations
XR_TYPE_PERSIST_SPATIAL_ENTITY_COMPLETION_EXT = 1000781001,
// Provided by XR_EXT_spatial_persistence_operations
XR_TYPE_SPATIAL_ENTITY_UNPERSIST_INFO_EXT = 1000781002,
// Provided by XR_EXT_spatial_persistence_operations
XR_TYPE_UNPERSIST_SPATIAL_ENTITY_COMPLETION_EXT = 1000781003,
// Provided by XR_EXT_loader_init_properties
XR_TYPE_LOADER_INIT_INFO_PROPERTIES_EXT = 1000838000,
// Provided by XR_KHR_vulkan_enable2
XR_TYPE_GRAPHICS_BINDING_VULKAN2_KHR = XR_TYPE_GRAPHICS_BINDING_VULKAN_KHR,
// Provided by XR_KHR_vulkan_enable2
XR_TYPE_SWAPCHAIN_IMAGE_VULKAN2_KHR = XR_TYPE_SWAPCHAIN_IMAGE_VULKAN_KHR,
// Provided by XR_KHR_vulkan_enable2
XR_TYPE_GRAPHICS_REQUIREMENTS_VULKAN2_KHR = XR_TYPE_GRAPHICS_REQUIREMENTS_VULKAN_KHR,
// Provided by XR_FB_haptic_pcm
XR_TYPE_DEVICE_PCM_SAMPLE_RATE_GET_INFO_FB = XR_TYPE_DEVICE_PCM_SAMPLE_RATE_STATE_FB,
// Provided by XR_KHR_locate_spaces
XR_TYPE_SPACES_LOCATE_INFO_KHR = XR_TYPE_SPACES_LOCATE_INFO,
// Provided by XR_KHR_locate_spaces
XR_TYPE_SPACE_LOCATIONS_KHR = XR_TYPE_SPACE_LOCATIONS,
// Provided by XR_KHR_locate_spaces
XR_TYPE_SPACE_VELOCITIES_KHR = XR_TYPE_SPACE_VELOCITIES,
XR_STRUCTURE_TYPE_MAX_ENUM = 0x7FFFFFFF
} XrStructureType;
Most structures containing type members have a value of type
matching the type of the structure, as described more fully in
Valid Usage for Structure Types.
Note that all extension enums begin at the extension enum base of 10^9 (base 10). Each extension is assigned a block of 1000 enums, starting at the enum base and arranged by the extension’s number.
// Provided by XR_VERSION_1_0
#define XR_EXTENSION_ENUM_BASE 1000000000
// Provided by XR_VERSION_1_0
#define XR_EXTENSION_ENUM_STRIDE 1000
For example, if extension number 5 wants to use an enum value of 3, the final enum is computed by:
enum = XR_EXTENSION_ENUM_BASE + (extension_number - 1) * XR_EXTENSION_ENUM_STRIDE + enum_value
1000004003 = 1000000000 + 4 * 1000 + 3
The maximum allowed enum value in an extension is 2,147,482,999, which belongs to extension number 1147483.
Flag Types
Flag types are all bitmasks aliasing the base type XrFlags64 and
with corresponding bit flag types defining the valid bits for that flag, as
described in Valid Usage for Flags.
Flag types defined in the core specification were originally listed/defined
here, but have been moved to be adjacent to their associated FlagBits
type.
See the Index for a list.
General Macro Definitions
This API is defined in C and uses "C" linkage.
The openxr.h header file is opened with:
#ifdef __cplusplus
extern "C" {
#endif
#ifdef __cplusplus
extern "C" {
#endif
and closed with:
#ifdef __cplusplus
}
#endif
The supplied openxr.h header defines a small number of C preprocessor
macros that are described below.
Version Number Macros
Three version numbers are defined in openxr.h.
Each is packed into a 64-bit integer as described in
API Version Number Function-like Macros.
// Provided by XR_VERSION_1_0
// OpenXR current version number.
#define XR_CURRENT_API_VERSION XR_MAKE_VERSION(1, 1, 54)
XR_CURRENT_API_VERSION is the current version of the OpenXR API.
In many cases, use of XR_API_VERSION_1_0 or XR_API_VERSION_1_1 is preferred for source forward-compatibility.
// Provided by XR_VERSION_1_0
// OpenXR 1.0 version number
#define XR_API_VERSION_1_0 XR_MAKE_VERSION(1, 0, XR_VERSION_PATCH(XR_CURRENT_API_VERSION))
XR_API_VERSION_1_0 is the version of the OpenXR 1.0 API. The "major" and "minor" components are always 1 and 0, while the "patch" component matches that of XR_CURRENT_API_VERSION.
// Provided by XR_VERSION_1_1
// OpenXR 1.1 version number
#define XR_API_VERSION_1_1 XR_MAKE_VERSION(1, 1, XR_VERSION_PATCH(XR_CURRENT_API_VERSION))
XR_API_VERSION_1_1 is the version of the OpenXR 1.1 API. The "major" and "minor" components are always 1 and 1, while the "patch" component matches that of XR_CURRENT_API_VERSION.
API Version Number Function-like Macros
API Version Numbers are three components, packed into a single 64-bit integer. The following macros manipulate version components and packed version numbers.
#define XR_MAKE_VERSION(major, minor, patch) \
((((major) & 0xffffULL) << 48) | (((minor) & 0xffffULL) << 32) | ((patch) & 0xffffffffULL))
XR_MAKE_VERSION constructs a packed 64-bit integer API version number from three components. The format used is described in API Version Numbers and Semantics.
This macro can be used when constructing the
XrApplicationInfo::apiVersion parameter passed to
xrCreateInstance.
// Provided by XR_VERSION_1_0
#define XR_VERSION_MAJOR(version) (uint16_t)(((uint64_t)(version) >> 48) & 0xffffULL)
XR_VERSION_MAJOR extracts the API major version number from a packed version number.
// Provided by XR_VERSION_1_0
#define XR_VERSION_MINOR(version) (uint16_t)(((uint64_t)(version) >> 32) & 0xffffULL)
XR_VERSION_MINOR extracts the API minor version number from a packed version number.
// Provided by XR_VERSION_1_0
#define XR_VERSION_PATCH(version) (uint32_t)((uint64_t)(version) & 0xffffffffULL)
XR_VERSION_PATCH extracts the API patch version number from a packed version number.
Handle and Atom Macros
// Provided by XR_VERSION_1_0
#if !defined(XR_DEFINE_HANDLE)
#if (XR_PTR_SIZE == 8)
#define XR_DEFINE_HANDLE(object) typedef struct object##_T* object;
#else
#define XR_DEFINE_HANDLE(object) typedef uint64_t object;
#endif
#endif
XR_DEFINE_HANDLE defines a handle type, which is an opaque 64-bit value, which may be implemented as an opaque, distinct pointer type on platforms with 64-bit pointers.
For further details, see Handles.
// Provided by XR_VERSION_1_0
#if !defined(XR_NULL_HANDLE)
#if (XR_PTR_SIZE == 8) && XR_CPP_NULLPTR_SUPPORTED
#define XR_NULL_HANDLE nullptr
#else
#define XR_NULL_HANDLE 0
#endif
#endif
XR_NULL_HANDLE is a reserved value representing a non-valid object handle. It may be passed to and returned from API functions only when specifically allowed.
#if !defined(XR_DEFINE_ATOM)
#define XR_DEFINE_ATOM(object) typedef uint64_t object;
#endif
XR_DEFINE_ATOM defines an atom type, which is an opaque 64-bit integer.
// Provided by XR_VERSION_1_0
#if !defined(XR_DEFINE_OPAQUE_64)
#if (XR_PTR_SIZE == 8)
#define XR_DEFINE_OPAQUE_64(object) typedef struct object##_T* object;
#else
#define XR_DEFINE_OPAQUE_64(object) typedef uint64_t object;
#endif
#endif
XR_DEFINE_OPAQUE_64 defines an opaque 64-bit value, which may be implemented as an opaque, distinct pointer type on platforms with 64-bit pointers.
Platform-Specific Macro Definitions
Additional platform-specific macros and interfaces are defined using the
included openxr_platform.h file.
These macros are used to control platform-dependent behavior, and their
exact definitions are under the control of specific platform implementations
of the API.
Platform-Specific Calling Conventions
On many platforms the following macros are empty strings, causing platform- and compiler-specific default calling conventions to be used.
XRAPI_ATTR is a macro placed before the return type of an API function declaration. This macro controls calling conventions for C++11 and GCC/Clang-style compilers.
XRAPI_CALL is a macro placed after the return type of an API function declaration. This macro controls calling conventions for MSVC-style compilers.
XRAPI_PTR is a macro placed between the ( and * in API function pointer declarations. This macro also controls calling conventions, and typically has the same definition as XRAPI_ATTR or XRAPI_CALL, depending on the compiler.
Examples:
Function declaration:
XRAPI_ATTR <return_type> XRAPI_CALL <function_name>(<function_parameters>);
Function pointer type declaration:
typedef <return_type> (XRAPI_PTR *PFN_<function_name>)(<function_parameters>);
Platform-Specific Header Control
If the XR_NO_STDINT_H macro is defined by the application at compile
time, before including any OpenXR header, extended integer types normally
found in <stdint.h> and used by the OpenXR headers, such as uint8_t,
must also be defined (as typedef or with the preprocessor) before
including any OpenXR header.
Otherwise, openxr.h and related headers will not compile.
If XR_NO_STDINT_H is not defined, the system-provided <stdint.h> is
used to define these types.
There is a fallback path for Microsoft Visual Studio version 2008 and
earlier versions (which lack this header) that is automatically activated as
needed.
Graphics API Header Control
| Compile Time Symbol | Graphics API Name |
|---|---|
| XR_USE_GRAPHICS_API_OPENGL | OpenGL |
| XR_USE_GRAPHICS_API_OPENGL_ES | OpenGL ES |
| XR_USE_GRAPHICS_API_VULKAN | Vulkan |
| XR_USE_GRAPHICS_API_D3D11 | Direct3D 11 |
| XR_USE_GRAPHICS_API_D3D12 | Direct3D 12 |
Window System Header Control
| Compile Time Symbol | Window System Name |
|---|---|
| XR_USE_PLATFORM_WIN32 | Microsoft Windows |
| XR_USE_PLATFORM_XLIB | X Window System Xlib |
| XR_USE_PLATFORM_XCB | X Window System XCB |
| XR_USE_PLATFORM_EGL | EGL (for OpenGL/OpenGL ES usage on any platform with EGL support) |
| XR_USE_PLATFORM_WAYLAND | Wayland (deprecated) |
| XR_USE_PLATFORM_ANDROID | Android Native |
Wayland Notes
The Wayland part of XR_KHR_opengl_enable was never implemented by any
runtime, and it was determined that as written the Wayland integration would
not work well without making unsafe assumptions.
It was decided to deprecate these parts.
A future extension would be needed to provide functionality for a Wayland
integration.
Android Notes
Android specific notes for using the OpenXR specification.
Android Runtime category tag for immersive mode selection
Android applications should add the <category
android:name="org.khronos.openxr.intent.category.IMMERSIVE_HMD" /> tag
inside the intent-filter to indicate that the activity starts in an
immersive OpenXR mode and will not touch the native Android 2D surface.
The HMD suffix indicates the preferred form-factor used by the application and can be used by launchers to filter applications listed.
For example:
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
<category android:name="org.khronos.openxr.intent.category.IMMERSIVE_HMD" />
</intent-filter>
Glossary
The terms defined in this section are used throughout this Specification. Capitalization is not significant for these definitions.
| Term | Description |
|---|---|
Application |
The XR application which calls the OpenXR API to communicate with an OpenXR runtime. |
Deprecated |
A feature/extension is deprecated if it is no longer recommended as the correct or best way to achieve its intended purpose. Generally a newer feature/extension will have been created that solves the same problem; in cases where no newer alternative feature exists, justification should be provided. |
Handle |
An opaque integer or pointer value used to refer to an object. Each object type has a unique handle type. |
Haptic |
Haptic or kinesthetic communication recreates the sense of touch by applying forces, vibrations, or motions to the user. |
In-Process |
Something that executes in the application’s process. |
Instance |
The top-level object, which represents the application’s connection to the runtime. Represented by an XrInstance object. |
Normalized |
A value that is interpreted as being in the range [0,1], or a vector whose norm is in that range, as a result of being implicitly divided or scaled by some other value. |
Out-Of-Process |
Something that executes outside the application’s process. |
Promoted |
A feature is promoted if it is taken from an older extension and made available as part of a new core version of the API, or of a newer extension that is considered to be at least as widely supported. A promoted feature may have minor differences from the original. |
Provisional |
A feature is released provisionally in order to get wider feedback on the functionality before it is finalized. Provisional features may change in ways that break backwards compatibility, and thus are not recommended for use in production applications. |
Required Extensions |
Extensions that must be enabled alongside extensions dependent on them, or that must be enabled to use given hardware. |
Runtime |
The software which implements the OpenXR API and allows applications to interact with XR hardware. |
Swapchain |
A resource that represents a chain of images in device memory. Represented by an XrSwapchain object. |
Swapchain Image |
Each element in a swapchain. Commonly these are simple formatted 2D images, but in other cases they may be array images. Represented by a structure related to XrSwapchainImageBaseHeader. |
Abbreviations
Abbreviations and acronyms are sometimes used in the API where they are considered clear and commonplace, and are defined here:
| Abbreviation | Description |
|---|---|
| API | Application Programming Interface |
| AR | Augmented Reality |
| ER | Eye Relief |
| IAD | Inter Axial Distance |
| IPD | Inter Pupillary Distance |
| MR | Mixed Reality |
| OS | Operating System |
| TSG | Technical Sub-Group. A specialized sub-group within a Khronos Working Group (WG). |
| VR | Virtual Reality |
| WG | Working Group. An organized group of people working to define/augment an API. |
| XR | VR + AR + MR |
Dedication (Informative)
In memory of Johannes van Waveren: a loving father, husband, son, brother, colleague, and dear friend.
Johannes, known to his friends as "JP", had a great sense of humor, fierce loyalty, intense drive, a love of rainbow unicorns, and deep disdain for processed American cheese. Perhaps most distinguishing of all, though, was his love of technology and his extraordinary technical ability.
JP’s love of technology started at an early age --- instead of working on his homework, he built train sets, hovercrafts, and complex erector sets from scratch; fashioned a tool for grabbing loose change out of street grates; and played computer games. The passion for computer games continued at Delft University of Technology, where, armed with a T1 internet connection and sheer talent, he regularly destroyed his foes in arena matches without being seen, earning him the moniker "MrElusive". During this time, he wrote the Gladiator-bot AI, which earned him acclaim in the community and led directly to a job at the iconic American computer game company, id Software. From there, he quickly became an expert in every system he touched, contributing significantly to every facet of the technology: AI, path navigation, networking, skeletal animation, virtual texturing, advanced rendering, and physics. He became a master of all. He famously owned more lines of code than anyone else, but he was also a generous mentor, helping junior developers hone their skills and make their own contributions.
When the chance to work in the VR industry arose, he saw it as an opportunity to help shape the future. Having never worked on VR hardware did not faze him; he quickly became a top expert in the field. Many of his contributions directly moved the industry forward, most recently his work on asynchronous timewarp and open-standards development.
Time was not on his side. Even in his final days, JP worked tirelessly on the initial proposal for this specification. The treatments he had undergone took a tremendous physical toll, but he continued to work because of his love of technology, his dedication to the craft, and his desire to get OpenXR started on a solid footing. His focus was unwavering.
His proposal was unofficially adopted several days before his passing, and upon hearing the news he mustered the energy for a smile. While it was his great dream to see this process through, he would be proud of the spirit of cooperation, passion, and dedication of the industry peers who took up the torch to drive this specification to completion.
JP lived a life full of accomplishment, as evidenced by the many publications, credits, awards, and nominations where you will find his name. A less obvious accomplishment --- but of equal importance --- is the influence he had on people through his passionate leadership. He strove for excellence in everything that he did. He was always excited to talk about technology and share the discoveries made while working through complex problems. He created excitement and interest around engineering and technical excellence. He was a mentor and teacher who inspired those who knew him, and many continue to benefit from his hard work and generosity.
JP was a rare gem; fantastically brilliant intellectually, but also warm, compassionate, generous, humble, and funny. Those of us lucky enough to have crossed paths with him knew what a privilege and great honor it was to know him. He is certainly missed.
Contributors (Informative)
OpenXR is the result of contributions from many people and companies participating in the Khronos OpenXR Working Group. Members of the Working Group, including the company that they represented at the time of their most recent contribution, are listed below.
Working Group Contributors to OpenXR
- Adam Gousetis, Google (version 1.0)
- Alain Zanchetta, Microsoft (version 1.1)
- Alex Sink, HTC (version 1.1)
- Alex Turner, Microsoft (versions 1.0, 1.1)
- Alfredo Muniz, XEED (version 1.1) (Working Group Chair)
- Andreas Loeve Selvik, Meta Platforms (versions 1.0, 1.1)
- Andres Rodriguez, Valve Software (version 1.0)
- Armelle Laine, Qualcomm Technologies (version 1.0)
- Attila Maczak, CTRL-labs (version 1.0)
- Baolin Fu, ByteDance (version 1.1)
- Blake Taylor, Magic Leap (version 1.0)
- Brad Grantham, Google (version 1.0)
- Brandon Jones, Google (version 1.0)
- Brent E. Insko, Intel (version 1.0) (former Working Group Chair)
- Brent Wilson, Microsoft (version 1.0)
- Bryce Hutchings, Microsoft (versions 1.0, 1.1)
- Cass Everitt, Meta Platforms (versions 1.0, 1.1)
- Charles Egenbacher, Epic Games (version 1.0)
- Charlton Rodda, Collabora (version 1.1)
- Chris Kuo, HTC (version 1.1)
- Chris Osborn, CTRL-labs (version 1.0)
- Christine Perey, Perey Research & Consulting (version 1.0)
- Christoph Haag, Collabora (versions 1.0, 1.1)
- Christopher Fiala, Epic Games (version 1.1)
- Craig Donner, Google (version 1.0)
- Dan Ginsburg, Valve Software (version 1.0)
- Darryl Gough, Microsoft (version 1.1)
- Dave Houlton, LunarG (version 1.0)
- Dave Shreiner, Unity Technologies (version 1.0)
- David Fields, Microsoft (version 1.1)
- Denny Rönngren, Varjo (versions 1.0, 1.1)
- Dmitriy Vasilev, Samsung Electronics (version 1.0)
- Doug Twileager, ZSpace (version 1.0)
- Ed Hutchins, Meta Platforms (version 1.0)
- Eryk Pecyna, Meta Platforms (version 1.1)
- Frederic Plourde, Collabora (version 1.1)
- Gloria Kennickell, Meta Platforms (version 1.0)
- Gregory Greeby, AMD (version 1.0)
- Guodong Chen, Huawei (version 1.0)
- Jack Pritz, Unity Technologies (versions 1.0, 1.1)
- Jakob Bornecrantz, Collabora (versions 1.0, 1.1)
- Jared Cheshier, PlutoVR (versions 1.0, 1.1)
- Jared Finder, Google (version 1.1)
- Javier Martinez, Intel (version 1.0)
- Jeff Bellinghausen, Valve Software (version 1.0)
- Jiehua Guo, Huawei (version 1.0)
- Joe Ludwig, Valve Software (versions 1.0, 1.1)
- Johannes van Waveren, Meta Platforms (version 1.0)
- John Kearney, Meta Platforms (version 1.1)
- Jon Leech, Khronos (version 1.0)
- Jonas Pegerfalk, Tobii (version 1.1)
- Jonathan Wright, Meta Platforms (versions 1.0, 1.1)
- Juan Wee, Samsung Electronics (version 1.0)
- Jules Blok, Epic Games (version 1.0)
- Jun Yan, ByteDance (version 1.1)
- Karl Schultz, LunarG (version 1.0)
- Karthik Kadappan, Magic Leap (version 1.1)
- Karthik Nagarajan, Qualcomm Technologies (version 1.1)
- Kaye Mason, Google (version 1.0)
- Krzysztof Kosiński, Google (version 1.0)
- Kyle Chen, HTC (version 1.1)
- Lachlan Ford, Google (versions 1.0, 1.1)
- Lubosz Sarnecki, Collabora (version 1.0)
- Mark Young, LunarG (version 1.0)
- Martin Renschler, Qualcomm Technologies (version 1.0)
- Matias Koskela, Tampere University of Technology (version 1.0)
- Matt Wash, Arm (version 1.0)
- Mattias Brand, Tobii (version 1.0)
- Mattias O. Karlsson, Tobii (version 1.0)
- Michael Gatson, Dell (version 1.0)
- Minmin Gong, Microsoft (version 1.0)
- Mitch Singer, AMD (version 1.0)
- Nathan Nuber, Valve (version 1.1)
- Nell Waliczek, Microsoft (version 1.0)
- Nick Whiting, Epic Games (version 1.0) (former Working Group Chair)
- Nigel Williams, Sony (version 1.0)
- Nihav Jain, Google, Inc (version 1.1)
- Paul Pedriana, Meta Platforms (version 1.0)
- Paulo Gomes, Samsung Electronics (version 1.0)
- Peter Kuhn, Unity Technologies (versions 1.0, 1.1)
- Peter Peterson, HP Inc (version 1.0)
- Philippe Harscoet, Samsung Electronics (versions 1.0, 1.1)
- Pierre-Loup Griffais, Valve Software (version 1.0)
- Rafael Wiltz, Magic Leap (version 1.1)
- Rajeev Gupta, Sony (version 1.0)
- Remi Arnaud, Starbreeze (version 1.0)
- Remy Zimmerman, Logitech (version 1.0)
- Ria Hsu, HTC (version 1.1)
- River Gillis, Google (version 1.0)
- Robert Blenkinsopp, Ultraleap (version 1.1)
- Robert Memmott, Meta Platforms (version 1.0)
- Robert Menzel, NVIDIA (version 1.0)
- Robert Simpson, Qualcomm Technologies (version 1.0)
- Robin Bourianes, Starbreeze (version 1.0)
- Ron Bessems, Magic Leap (version 1.1) (Working Group Vice-Chair)
- Rune Berg, independent (version 1.1)
- Ryan Vance, Epic Games (version 1.0)
- Rylie Pavlik, Collabora (versions 1.0, 1.1) (Spec Editor)
- Sam Martin, Arm (version 1.0)
- Satish Salian, NVIDIA (version 1.0)
- Scott Flynn, Unity Technologies (version 1.0)
- Sean Payne, CTRL-labs (version 1.0)
- Shanliang Xu, ByteDance (version 1.1)
- Sophia Baldonado, PlutoVR (version 1.0)
- Steve Smith, Epic Games (version 1.0)
- Sungye Kim, Intel (version 1.0)
- Tom Flynn, Samsung Electronics (version 1.0)
- Trevor F. Smith, Mozilla (version 1.0)
- Victor Brodin, Epic Games (version 1.1)
- Vivek Viswanathan, Dell (version 1.0)
- Wenlin Mao, Meta Platforms (version 1.1)
- Xiang Wei, Meta Platforms (version 1.1)
- Yin Li, Microsoft (versions 1.0, 1.1)
- Yuval Boger, Sensics (version 1.0)
- Zhanrui Jia, ByteDance (version 1.1)
- Zheng Qin, Microsoft (version 1.0)