Shaders

A shader specifies programmable operations that execute for each vertex, control point, tessellated vertex, primitive, fragment, or workgroup in the corresponding stage(s) of the graphics and compute pipelines.

Graphics pipelines include vertex shader execution as a result of primitive assembly, followed, if enabled, by tessellation control and evaluation shaders operating on patches, geometry shaders, if enabled, operating on primitives, and fragment shaders, if present, operating on fragments generated by Rasterization. In this specification, vertex, tessellation control, tessellation evaluation and geometry shaders are collectively referred to as pre-rasterization shader stages and occur in the logical pipeline before rasterization. The fragment shader occurs logically after rasterization.

Only the compute shader stage is included in a compute pipeline. Compute shaders operate on compute invocations in a workgroup.

Shaders can read from input variables, and read from and write to output variables. Input and output variables can be used to transfer data between shader stages, or to allow the shader to interact with values that exist in the execution environment. Similarly, the execution environment provides constants describing capabilities.

Shader variables are associated with execution environment-provided inputs and outputs using built-in decorations in the shader. The available decorations for each stage are documented in the following subsections.

Shader Objects

Shaders may be compiled and linked into pipeline objects as described in the Pipelines chapter, or, if the shaderObject feature is enabled, they may be compiled into individual per-stage shader objects which can be bound on a command buffer independently from one another. Unlike pipelines, shader objects are not intrinsically tied to any specific set of state. Instead, state is specified dynamically in the command buffer.

Each shader object represents a single compiled shader stage, which may optionally be linked with one or more other stages.

VkShaderEXT: Opaque handle to a shader object

Shader Object Creation

Shader objects may be created from shader code provided as SPIR-V, or in an opaque, implementation-defined binary format specific to the physical device.

vkCreateShadersEXT: Create one or more new shaders
VkShaderCreateInfoEXT: Structure specifying parameters of a newly created shader
VkShaderCreateFlagsEXT: Bitmask of VkShaderCreateFlagBitsEXT
VkShaderCreateFlagBitsEXT: Bitmask controlling how a shader object is created

The behavior of VK_SHADER_CREATE_FRAGMENT_SHADING_RATE_ATTACHMENT_BIT_EXT and VK_SHADER_CREATE_FRAGMENT_DENSITY_MAP_ATTACHMENT_BIT_EXT differs subtly from the behavior of VK_PIPELINE_CREATE_RENDERING_FRAGMENT_SHADING_RATE_ATTACHMENT_BIT_KHR and VK_PIPELINE_CREATE_RENDERING_FRAGMENT_DENSITY_MAP_ATTACHMENT_BIT_EXT in that the shader bit allows, but does not require, the shader to be used with that type of attachment. This means that the application need not create multiple shaders when it does not know in advance whether the shader will be used with or without the attachment type, or when it needs the same shader to be compatible with both usages. This may come at some performance cost on some implementations, so applications should still only set the bits that are actually necessary.
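As an illustration, a minimal sketch in C of creating a linked vertex/fragment pair of shader objects from SPIR-V. The function name and parameters are hypothetical; it assumes the shaderObject feature is enabled, that the code pointers reference valid SPIR-V, and that no descriptor set layouts or push constants are needed.

    #include <vulkan/vulkan.h>

    // Sketch: create a vertex and fragment shader object linked together.
    VkResult createLinkedShaders(VkDevice device,
                                 const void* vertexCode, size_t vertexSize,
                                 const void* fragmentCode, size_t fragmentSize,
                                 VkShaderEXT shaders[2]) {
        VkShaderCreateInfoEXT infos[2] = {0};

        infos[0].sType     = VK_STRUCTURE_TYPE_SHADER_CREATE_INFO_EXT;
        infos[0].flags     = VK_SHADER_CREATE_LINK_STAGE_BIT_EXT;
        infos[0].stage     = VK_SHADER_STAGE_VERTEX_BIT;
        infos[0].nextStage = VK_SHADER_STAGE_FRAGMENT_BIT;
        infos[0].codeType  = VK_SHADER_CODE_TYPE_SPIRV_EXT;
        infos[0].codeSize  = vertexSize;
        infos[0].pCode     = vertexCode;
        infos[0].pName     = "main";

        infos[1].sType    = VK_STRUCTURE_TYPE_SHADER_CREATE_INFO_EXT;
        infos[1].flags    = VK_SHADER_CREATE_LINK_STAGE_BIT_EXT;
        infos[1].stage    = VK_SHADER_STAGE_FRAGMENT_BIT;
        infos[1].codeType = VK_SHADER_CODE_TYPE_SPIRV_EXT;
        infos[1].codeSize = fragmentSize;
        infos[1].pCode    = fragmentCode;
        infos[1].pName    = "main";

        return vkCreateShadersEXT(device, 2, infos, NULL, shaders);
    }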

VkShaderCodeTypeEXT: Indicate a shader code type

Binary Shader Code

vkGetShaderBinaryDataEXT: Get the binary shader code from a shader object

Binary Shader Compatibility

Binary shader compatibility means that binary shader code returned from a call to vkGetShaderBinaryDataEXT can be passed to a later call to vkCreateShadersEXT, potentially on a different logical and/or physical device, and that this will result in the successful creation of a shader object functionally equivalent to the shader object that the code was originally queried from.

Binary shader code queried from vkGetShaderBinaryDataEXT is not guaranteed to be compatible across all devices, but implementations are required to provide some compatibility guarantees. Applications may determine binary shader compatibility using either (or both) of two mechanisms.

Guaranteed compatibility of shader binaries is expressed through a combination of the shaderBinaryUUID and shaderBinaryVersion members of the VkPhysicalDeviceShaderObjectPropertiesEXT structure queried from a physical device. Binary shaders retrieved from a physical device with a certain shaderBinaryUUID are guaranteed to be compatible with all other physical devices reporting the same shaderBinaryUUID and the same or higher shaderBinaryVersion.

Whenever a new version of an implementation incorporates any changes that affect the output of vkGetShaderBinaryDataEXT, the implementation should either increment shaderBinaryVersion if binary shader code retrieved from older versions remains compatible with the new implementation, or else replace shaderBinaryUUID with a new value if backward compatibility has been broken. Binary shader code queried from a device with a matching shaderBinaryUUID and lower shaderBinaryVersion relative to the device on which vkCreateShadersEXT is being called may be suboptimal for the new device in ways that do not change shader functionality, but it is still guaranteed to be usable to successfully create the shader object(s).
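A sketch in C of the guaranteed-compatibility check just described; the helper name and parameters are illustrative. A binary saved from a device reporting (savedUUID, savedVersion) is guaranteed to be usable on physicalDevice if the UUIDs match and the device reports the same or a higher version.

    #include <string.h>
    #include <vulkan/vulkan.h>

    // Sketch: test the compatibility guarantee for a previously saved binary.
    VkBool32 binaryGuaranteedCompatible(VkPhysicalDevice physicalDevice,
                                        const uint8_t savedUUID[VK_UUID_SIZE],
                                        uint32_t savedVersion) {
        VkPhysicalDeviceShaderObjectPropertiesEXT shaderObjectProps = {
            .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_OBJECT_PROPERTIES_EXT,
        };
        VkPhysicalDeviceProperties2 props = {
            .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2,
            .pNext = &shaderObjectProps,
        };
        vkGetPhysicalDeviceProperties2(physicalDevice, &props);

        return memcmp(shaderObjectProps.shaderBinaryUUID, savedUUID, VK_UUID_SIZE) == 0 &&
               shaderObjectProps.shaderBinaryVersion >= savedVersion;
    }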

Implementations are encouraged to share shaderBinaryUUID between devices and driver versions to the maximum extent their hardware naturally allows, and are strongly discouraged from ever changing the shaderBinaryUUID for the same hardware unless absolutely necessary.

In addition to the shader compatibility guarantees described above, it is valid for an application to call vkCreateShadersEXT with binary shader code created on a device with a different or unknown shaderBinaryUUID and/or higher shaderBinaryVersion. In this case, the implementation may use any unspecified means of its choosing to determine whether the provided binary shader code is usable. If it is, vkCreateShadersEXT must return VK_SUCCESS, and the created shader object is guaranteed to be valid. Otherwise, in the absence of some error, vkCreateShadersEXT must return VK_INCOMPATIBLE_SHADER_BINARY_EXT to indicate that the provided binary shader code is not compatible with the device.
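A sketch in C of this pattern: query a shader's binary with the usual two-call idiom, attempt to recreate it from the binary, and fall back to SPIR-V compilation if the implementation rejects it. The helper name is hypothetical and error handling is largely omitted.

    #include <stdlib.h>
    #include <vulkan/vulkan.h>

    // Sketch: recreate a shader from its saved binary, falling back to SPIR-V.
    VkResult recreateFromBinary(VkDevice device, VkShaderEXT source,
                                const VkShaderCreateInfoEXT* spirvFallbackInfo,
                                VkShaderEXT* outShader) {
        size_t size = 0;
        vkGetShaderBinaryDataEXT(device, source, &size, NULL);
        void* data = malloc(size);
        vkGetShaderBinaryDataEXT(device, source, &size, data);

        VkShaderCreateInfoEXT info = *spirvFallbackInfo;
        info.codeType = VK_SHADER_CODE_TYPE_BINARY_EXT;
        info.codeSize = size;
        info.pCode    = data;

        VkResult result = vkCreateShadersEXT(device, 1, &info, NULL, outShader);
        if (result == VK_INCOMPATIBLE_SHADER_BINARY_EXT) {
            // The binary is not usable on this device; compile from SPIR-V instead.
            result = vkCreateShadersEXT(device, 1, spirvFallbackInfo, NULL, outShader);
        }
        free(data);
        return result;
    }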

Binding Shader Objects

vkCmdBindShadersEXT: Bind shader objects to a command buffer

Setting State

Whenever shader objects are used to issue drawing commands, the appropriate dynamic state setting commands must have been called to set the relevant state in the command buffer prior to drawing:

If a shader is bound to the VK_SHADER_STAGE_VERTEX_BIT stage, the following commands must have been called in the command buffer prior to drawing:

If a shader is bound to the VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT stage, the following command must have been called in the command buffer prior to drawing:

If rasterizerDiscardEnable is VK_FALSE, the following commands must have been called in the command buffer prior to drawing:

If a shader is bound to the VK_SHADER_STAGE_FRAGMENT_BIT stage, and rasterizerDiscardEnable is VK_FALSE, the following commands must have been called in the command buffer prior to drawing:

If the pipelineFragmentShadingRate feature is enabled on the device, and a shader is bound to the VK_SHADER_STAGE_FRAGMENT_BIT stage, and rasterizerDiscardEnable is VK_FALSE, the following command must have been called in the command buffer prior to drawing:

If the geometryStreams feature is enabled on the device, and a shader is bound to the VK_SHADER_STAGE_GEOMETRY_BIT stage, the following command must have been called in the command buffer prior to drawing:

If the VK_EXT_discard_rectangles extension is enabled on the device, and rasterizerDiscardEnable is VK_FALSE, the following commands must have been called in the command buffer prior to drawing:

If VK_EXT_conservative_rasterization extension is enabled on the device, and rasterizerDiscardEnable is VK_FALSE, the following commands must have been called in the command buffer prior to drawing:

If the depthClipEnable feature is enabled on the device, the following command must have been called in the command buffer prior to drawing:

If the VK_EXT_sample_locations extension is enabled on the device, and rasterizerDiscardEnable is VK_FALSE, the following commands must have been called in the command buffer prior to drawing:

If the VK_EXT_provoking_vertex extension is enabled on the device, and rasterizerDiscardEnable is VK_FALSE, and a shader is bound to the VK_SHADER_STAGE_VERTEX_BIT stage, the following command must have been called in the command buffer prior to drawing:

If the VK_KHR_line_rasterization or VK_EXT_line_rasterization extension is enabled on the device, and rasterizerDiscardEnable is VK_FALSE, and either polygonMode is VK_POLYGON_MODE_LINE, or a shader is bound to the VK_SHADER_STAGE_VERTEX_BIT stage and primitiveTopology is a line topology, or a shader which outputs line primitives is bound to the VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT or VK_SHADER_STAGE_GEOMETRY_BIT stage, the following commands must have been called in the command buffer prior to drawing:

If the depthClipControl feature is enabled on the device, the following command must have been called in the command buffer prior to drawing:

If the colorWriteEnable feature is enabled on the device, and a shader is bound to the VK_SHADER_STAGE_FRAGMENT_BIT stage, and rasterizerDiscardEnable is VK_FALSE, the following command must have been called in the command buffer prior to drawing:

If the attachmentFeedbackLoopDynamicState feature is enabled on the device, and a shader is bound to the VK_SHADER_STAGE_FRAGMENT_BIT stage, and rasterizerDiscardEnable is VK_FALSE, the following command must have been called in the command buffer prior to drawing:

If the VK_NV_clip_space_w_scaling extension is enabled on the device, the following commands must have been called in the command buffer prior to drawing:

If the VK_NV_viewport_swizzle extension is enabled on the device, the following command must have been called in the command buffer prior to drawing:

If the VK_NV_fragment_coverage_to_color extension is enabled on the device, and a shader is bound to the VK_SHADER_STAGE_FRAGMENT_BIT stage, and rasterizerDiscardEnable is VK_FALSE, the following commands must have been called in the command buffer prior to drawing:

If the VK_NV_framebuffer_mixed_samples extension is enabled on the device, and rasterizerDiscardEnable is VK_FALSE, the following commands must have been called in the command buffer prior to drawing:

If the coverageReductionMode feature is enabled on the device, and rasterizerDiscardEnable is VK_FALSE, the following command must have been called in the command buffer prior to drawing:

If the representativeFragmentTest feature is enabled on the device, and rasterizerDiscardEnable is VK_FALSE, the following command must have been called in the command buffer prior to drawing:

If the shadingRateImage feature is enabled on the device, and rasterizerDiscardEnable is VK_FALSE, the following commands must have been called in the command buffer prior to drawing:

If the exclusiveScissor feature is enabled on the device, the following commands must have been called in the command buffer prior to drawing:

State can be set at any time, either before or after shader objects are bound, but all required state must be set before drawing commands are issued. A partial sketch of such state setting follows.
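The sketch below, in C, sets a subset of the dynamic state commonly required when drawing with shader objects. It is illustrative only: the complete required set depends on the bound stages, enabled features, and extensions described above, and the function name is hypothetical.

    #include <vulkan/vulkan.h>

    // Sketch: set some commonly required dynamic state before drawing with
    // shader objects. Not exhaustive.
    void setSomeRequiredState(VkCommandBuffer cmd, VkExtent2D extent) {
        VkViewport viewport = {0.0f, 0.0f, (float)extent.width, (float)extent.height, 0.0f, 1.0f};
        VkRect2D scissor = {{0, 0}, extent};
        vkCmdSetViewportWithCount(cmd, 1, &viewport);
        vkCmdSetScissorWithCount(cmd, 1, &scissor);
        vkCmdSetRasterizerDiscardEnable(cmd, VK_FALSE);
        vkCmdSetPrimitiveTopology(cmd, VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST);
        vkCmdSetPrimitiveRestartEnable(cmd, VK_FALSE);
        vkCmdSetVertexInputEXT(cmd, 0, NULL, 0, NULL);  // no vertex buffers in this sketch
        vkCmdSetCullMode(cmd, VK_CULL_MODE_NONE);
        vkCmdSetFrontFace(cmd, VK_FRONT_FACE_COUNTER_CLOCKWISE);
        vkCmdSetDepthTestEnable(cmd, VK_FALSE);
        vkCmdSetDepthWriteEnable(cmd, VK_FALSE);
        vkCmdSetDepthBiasEnable(cmd, VK_FALSE);
        vkCmdSetStencilTestEnable(cmd, VK_FALSE);
        vkCmdSetPolygonModeEXT(cmd, VK_POLYGON_MODE_FILL);
        vkCmdSetRasterizationSamplesEXT(cmd, VK_SAMPLE_COUNT_1_BIT);
        const VkSampleMask sampleMask = 0xFF;
        vkCmdSetSampleMaskEXT(cmd, VK_SAMPLE_COUNT_1_BIT, &sampleMask);
        vkCmdSetAlphaToCoverageEnableEXT(cmd, VK_FALSE);
        vkCmdSetColorBlendEnableEXT(cmd, 0, 1, (VkBool32[]){VK_FALSE});
        vkCmdSetColorWriteMaskEXT(cmd, 0, 1, (VkColorComponentFlags[]){
            VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT |
            VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT});
    }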

Interaction With Pipelines

Calling vkCmdBindShadersEXT causes the pipeline bind points corresponding to each stage in pStages to be disturbed, meaning that any pipelines that had previously been bound to those pipeline bind points are no longer bound.

If VK_PIPELINE_BIND_POINT_GRAPHICS is disturbed (i.e., if pStages contains any graphics stage), any graphics pipeline state that the previously bound pipeline did not specify as dynamic becomes undefined, and must be set in the command buffer before issuing drawing commands using shader objects.

Calls to vkCmdBindPipeline likewise disturb the shader stage(s) corresponding to pipelineBindPoint, meaning that any shaders that had previously been bound to any of those stages are no longer bound, even if the pipeline was created without shaders for some of those stages.

Shader Object Destruction

vkDestroyShaderEXT: Destroy a shader object

Shader Modules

VkShaderModule: Opaque handle to a shader module object
vkCreateShaderModule: Creates a new shader module object
VkShaderModuleCreateInfo: Structure specifying parameters of a newly created shader module
VkShaderModuleCreateFlags: Reserved for future use
VkShaderModuleValidationCacheCreateInfoEXT: Specify validation cache to use during shader module creation
vkDestroyShaderModule: Destroy a shader module
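A minimal sketch in C of shader module creation; the wrapper function is illustrative. It assumes code points to codeSize bytes of valid SPIR-V (codeSize is in bytes and must be a multiple of four).

    #include <vulkan/vulkan.h>

    // Sketch: create a shader module from SPIR-V code.
    VkResult createModule(VkDevice device, const uint32_t* code, size_t codeSize,
                          VkShaderModule* outModule) {
        VkShaderModuleCreateInfo createInfo = {
            .sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
            .codeSize = codeSize,
            .pCode    = code,
        };
        return vkCreateShaderModule(device, &createInfo, NULL, outModule);
    }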

Shader Module Identifiers

vkGetShaderModuleIdentifierEXT: Query a unique identifier for a shader module
vkGetShaderModuleCreateInfoIdentifierEXT: Query a unique identifier for a shader module create info
VkShaderModuleIdentifierEXT: A unique identifier for a shader module
VK_MAX_SHADER_MODULE_IDENTIFIER_SIZE_EXT: Maximum length of a shader module identifier
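A sketch in C of querying a module's identifier, assuming the VK_EXT_shader_module_identifier extension; the helper name is illustrative.

    #include <vulkan/vulkan.h>

    // Sketch: query the identifier of an existing shader module.
    VkShaderModuleIdentifierEXT queryIdentifier(VkDevice device, VkShaderModule module) {
        VkShaderModuleIdentifierEXT identifier = {
            .sType = VK_STRUCTURE_TYPE_SHADER_MODULE_IDENTIFIER_EXT,
        };
        vkGetShaderModuleIdentifierEXT(device, module, &identifier);
        // identifier.identifierSize bytes of identifier.identifier are valid,
        // bounded by VK_MAX_SHADER_MODULE_IDENTIFIER_SIZE_EXT.
        return identifier;
    }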

Binding Shaders

Before a shader can be used, it must first be bound to the command buffer.

Calling vkCmdBindPipeline binds all stages corresponding to the VkPipelineBindPoint. Calling vkCmdBindShadersEXT binds all stages in pStages.
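A sketch in C of binding shader objects: a vertex and fragment shader are bound, and the other graphics stages are explicitly unbound by binding VK_NULL_HANDLE to them. The function name is illustrative.

    #include <vulkan/vulkan.h>

    // Sketch: bind vertex and fragment shaders, unbinding the other stages.
    void bindGraphicsShaders(VkCommandBuffer cmd, VkShaderEXT vert, VkShaderEXT frag) {
        const VkShaderStageFlagBits stages[5] = {
            VK_SHADER_STAGE_VERTEX_BIT,
            VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT,
            VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT,
            VK_SHADER_STAGE_GEOMETRY_BIT,
            VK_SHADER_STAGE_FRAGMENT_BIT,
        };
        const VkShaderEXT shaders[5] = {
            vert, VK_NULL_HANDLE, VK_NULL_HANDLE, VK_NULL_HANDLE, frag,
        };
        vkCmdBindShadersEXT(cmd, 5, stages, shaders);
    }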

The following table describes the relationship between shader stages and pipeline bind points. In outline: the graphics stages (vertex, tessellation control, tessellation evaluation, geometry, fragment, and, where supported, task and mesh) correspond to VK_PIPELINE_BIND_POINT_GRAPHICS; the compute stage corresponds to VK_PIPELINE_BIND_POINT_COMPUTE; and the ray tracing stages (ray generation, any-hit, closest hit, miss, intersection, and callable) correspond to VK_PIPELINE_BIND_POINT_RAY_TRACING_KHR.

Shader Execution

At each stage of the pipeline, multiple invocations of a shader may execute simultaneously. Further, invocations of a single shader produced as the result of different commands may execute simultaneously. The relative execution order of invocations of the same shader type is undefined. Shader invocations may complete in a different order than that in which the primitives they originated from were drawn or dispatched by the application. However, fragment shader outputs are written to attachments in rasterization order.

The relative execution order of invocations of different shader types is largely undefined. However, when invoking a shader whose inputs are generated from a previous pipeline stage, the shader invocations from the previous stage are guaranteed to have executed far enough to generate input values for all required inputs.

Shader Termination

A shader invocation that is terminated has finished executing instructions.

Executing OpReturn in the entry point, or executing OpTerminateInvocation in any function, will terminate an invocation. Implementations may also terminate a shader invocation when OpKill is executed in any function; otherwise it becomes a helper invocation.

In addition to the above conditions, helper invocations may be terminated when all non-helper invocations in the same derivative group either terminate or become helper invocations.

A shader stage for a given command completes execution when all invocations for that stage have terminated.

OpKill will behave the same as either OpTerminateInvocation or OpDemoteToHelperInvocation depending on the implementation. It is recommended that shader authors use OpTerminateInvocation or OpDemoteToHelperInvocation instead of OpKill whenever possible to produce more predictable behavior.

Shader Memory Access Ordering

The order in which image or buffer memory is read or written by shaders is largely undefined. For some shader types (vertex, tessellation evaluation, and in some cases, fragment), even the number of shader invocations that may perform loads and stores is undefined.

In particular, the following rules apply:

  • Vertex and tessellation evaluation shaders will be invoked at least once for each unique vertex, as defined in those sections.
  • Fragment shaders will be invoked zero or more times, as defined in that section.
  • The relative execution order of invocations of the same shader type is undefined. A store issued by a shader when working on primitive B might complete prior to a store for primitive A, even if primitive A is specified prior to primitive B. This applies even to fragment shaders; while fragment shader outputs are always written to the framebuffer in rasterization order, stores executed by fragment shader invocations are not.
  • The relative execution order of invocations of different shader types is largely undefined.

The above limitations on shader invocation order make some forms of synchronization between shader invocations within a single set of primitives unimplementable. For example, having one invocation poll memory written by another invocation assumes that the other invocation has been launched and will complete its writes in finite time.

The Memory Model appendix defines the terminology and rules for how to correctly communicate between shader invocations, such as when a write is Visible-To a read, and what constitutes a Data Race.

Applications must not cause a data race.

The SPIR-V SubgroupMemory, CrossWorkgroupMemory, and AtomicCounterMemory memory semantics are ignored. Sequentially consistent atomics and barriers are not supported and SequentiallyConsistent is treated as AcquireRelease. SequentiallyConsistent should not be used.

Shader Inputs and Outputs

Data is passed into and out of shaders using variables with input or output storage class, respectively. User-defined inputs and outputs are connected between stages by matching their Location decorations. Additionally, data can be provided by or communicated to special functions provided by the execution environment using BuiltIn decorations.

In many cases, the same BuiltIn decoration can be used in multiple shader stages with similar meaning. The specific behavior of variables decorated as BuiltIn is documented in the following sections.

Task Shaders

Task shaders operate in conjunction with mesh shaders to produce a collection of primitives that will be processed by subsequent stages of the graphics pipeline. Their primary purpose is to create a variable number of subsequent mesh shader invocations.

Task shaders are invoked via the execution of the programmable mesh shading pipeline.

The task shader has no fixed-function inputs other than variables identifying the specific workgroup and invocation. In the TaskNV Execution Model the number of mesh shader workgroups to create is specified via a TaskCountNV decorated output variable. In the TaskEXT Execution Model the number of mesh shader workgroups to create is specified via the OpEmitMeshTasksEXT instruction.

The task shader can write additional outputs to task memory, which can be read by all of the mesh shader workgroups it created.

Task Shader Execution

Task workloads are formed from groups of work items called workgroups and processed by the task shader in the current graphics pipeline. A workgroup is a collection of shader invocations that execute the same shader, potentially in parallel. Task shaders execute in global workgroups which are divided into a number of local workgroups with a size that can be set by assigning a value to the LocalSize or LocalSizeId execution mode or via an object decorated by the WorkgroupSize decoration. An invocation within a local workgroup can share data with other members of the local workgroup through shared variables and issue memory and control flow barriers to synchronize with other members of the local workgroup. If the subpass includes multiple views in its view mask, a task shader using the TaskEXT Execution Model may be invoked separately for each view.

Mesh Shaders

Mesh shaders operate in workgroups to produce a collection of primitives that will be processed by subsequent stages of the graphics pipeline. Each workgroup emits zero or more output primitives and the group of vertices and their associated data required for each output primitive.

Mesh shaders are invoked via the execution of the programmable mesh shading pipeline.

The only inputs available to the mesh shader are variables identifying the specific workgroup and invocation and, if applicable, any outputs written to task memory by the task shader that spawned the mesh shader’s workgroup. The mesh shader can operate without a task shader as well.

The invocations of the mesh shader workgroup write an output mesh, comprising a set of primitives with per-primitive attributes, a set of vertices with per-vertex attributes, and an array of indices identifying the mesh vertices that belong to each primitive. The primitives of this mesh are then processed by subsequent graphics pipeline stages, where the outputs of the mesh shader form an interface with the fragment shader.

Mesh Shader Execution

Mesh workloads are formed from groups of work items called workgroups and processed by the mesh shader in the current graphics pipeline. A workgroup is a collection of shader invocations that execute the same shader, potentially in parallel. Mesh shaders execute in global workgroups which are divided into a number of local workgroups with a size that can be set by assigning a value to the LocalSize or LocalSizeId execution mode or via an object decorated by the WorkgroupSize decoration. An invocation within a local workgroup can share data with other members of the local workgroup through shared variables and issue memory and control flow barriers to synchronize with other members of the local workgroup.

The global workgroups may be generated explicitly via the API, or implicitly through the task shader's work creation mechanism. If the subpass includes multiple views in its view mask, a mesh shader using the MeshEXT Execution Model may be invoked separately for each view.
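A sketch in C of explicitly launching mesh workgroups from the API, assuming the VK_EXT_mesh_shader extension; the wrapper function is illustrative.

    #include <vulkan/vulkan.h>

    // Sketch: launch groupCountX x 1 x 1 global workgroups. If a task shader
    // is bound, this launches task workgroups, which in turn create mesh
    // workgroups; otherwise it launches mesh workgroups directly.
    void drawMeshes(VkCommandBuffer cmd, uint32_t groupCountX) {
        vkCmdDrawMeshTasksEXT(cmd, groupCountX, 1, 1);
    }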

Cluster Culling Shaders

Cluster Culling shaders are invoked via the execution of the Programmable Cluster Culling Shading pipeline.

The only inputs available to the cluster culling shader are variables identifying the specific workgroup and invocation.

Cluster culling shaders operate in workgroups to perform cluster-based culling and produce zero or more cluster drawing commands that will be processed by subsequent stages of the graphics pipeline.

A cluster drawing command (CDC) is very similar to a multi-draw indirect (MDI) command; invocations in a workgroup can emit zero or more CDCs to draw zero or more visible clusters.

Cluster Culling Shader Execution

Cluster Culling workloads are formed from groups of work items called workgroups and processed by the cluster culling shader in the current graphics pipeline. A workgroup is a collection of shader invocations that execute the same shader, potentially in parallel. Cluster Culling shaders execute in global workgroups which are divided into a number of local workgroups with a size that can be set by assigning a value to the LocalSize or LocalSizeId execution mode or via an object decorated by the WorkgroupSize decoration. An invocation within a local workgroup can share data with other members of the local workgroup through shared variables and issue memory and control flow barriers to synchronize with other members of the local workgroup.

Vertex Shaders

Each vertex shader invocation operates on one vertex and its associated vertex attribute data, and outputs one vertex and associated data. Graphics pipelines using primitive shading must include a vertex shader, and the vertex shader stage is always the first shader stage in the graphics pipeline.

Vertex Shader Execution

A vertex shader must be executed at least once for each vertex specified by a drawing command. If the subpass includes multiple views in its view mask, the shader may be invoked separately for each view. During execution, the shader is presented with the index of the vertex and instance for which it has been invoked. Input variables declared in the vertex shader are filled by the implementation with the values of vertex attributes associated with the invocation being executed.

If the same vertex is specified multiple times in a drawing command (e.g. by including the same index value multiple times in an index buffer) the implementation may reuse the results of vertex shading if it can statically determine that the vertex shader invocations will produce identical results.

It is implementation-dependent when and if results of vertex shading are reused, and thus how many times the vertex shader will be executed. This is true also if the vertex shader contains stores or atomic operations (see vertexPipelineStoresAndAtomics).

Tessellation Control Shaders

The tessellation control shader is used to read an input patch provided by the application and to produce an output patch. Each tessellation control shader invocation operates on an input patch (after all control points in the patch are processed by a vertex shader) and its associated data, and outputs a single control point of the output patch and its associated data, and can also output additional per-patch data. The input patch is sized according to the patchControlPoints member of VkPipelineTessellationStateCreateInfo, as part of input assembly.

The input patch can also be dynamically sized with the patchControlPoints parameter of vkCmdSetPatchControlPointsEXT.

vkCmdSetPatchControlPointsEXT: Specify the number of control points per patch dynamically for a command buffer
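A one-line sketch of its use, assuming a pipeline or shader object with dynamic patch control points:

    // Sketch: use 4 control points per patch for subsequent draws.
    vkCmdSetPatchControlPointsEXT(commandBuffer, 4);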

The size of the output patch is controlled by the OpExecutionMode OutputVertices specified in the tessellation control or tessellation evaluation shaders, which must be specified in at least one of the shaders. The size of the input and output patches must each be greater than zero and less than or equal to VkPhysicalDeviceLimits::maxTessellationPatchSize.

Tessellation Control Shader Execution

A tessellation control shader is invoked at least once for each output vertex in a patch. If the subpass includes multiple views in its view mask, the shader may be invoked separately for each view.

Inputs to the tessellation control shader are generated by the vertex shader. Each invocation of the tessellation control shader can read the attributes of any incoming vertices and their associated data. The invocations corresponding to a given patch execute logically in parallel, with undefined relative execution order. However, the OpControlBarrier instruction can be used to provide limited control of the execution order by synchronizing invocations within a patch, effectively dividing tessellation control shader execution into a set of phases. Tessellation control shaders will read undefined values if one invocation reads a per-vertex or per-patch output written by another invocation at any point during the same phase, or if two invocations attempt to write different values to the same per-patch output in a single phase.

Tessellation Evaluation Shaders

The Tessellation Evaluation Shader operates on an input patch of control points and their associated data, and a single input barycentric coordinate indicating the invocation’s relative position within the subdivided patch, and outputs a single vertex and its associated data.

Tessellation Evaluation Shader Execution

A tessellation evaluation shader is invoked at least once for each unique vertex generated by the tessellator. If the subpass includes multiple views in its view mask, the shader may be invoked separately for each view.

Geometry Shaders

The geometry shader operates on a group of vertices and their associated data assembled from a single input primitive, and emits zero or more output primitives and the group of vertices and their associated data required for each output primitive.

Geometry Shader Execution

A geometry shader is invoked at least once for each primitive produced by the tessellation stages, or at least once for each primitive generated by primitive assembly when tessellation is not in use. A shader can request that the geometry shader runs multiple instances. A geometry shader is invoked at least once for each instance. If the subpass includes multiple views in its view mask, the shader may be invoked separately for each view.

Fragment Shaders

Fragment shaders are invoked as a fragment operation in a graphics pipeline. Each fragment shader invocation operates on a single fragment and its associated data. With few exceptions, fragment shaders do not have access to any data associated with other fragments and are considered to execute in isolation from fragment shader invocations associated with other fragments.

Compute Shaders

Compute shaders are invoked via vkCmdDispatch and vkCmdDispatchIndirect commands. In general, they have access to similar resources as shader stages executing as part of a graphics pipeline.

Compute workloads are formed from groups of work items called workgroups and processed by the compute shader in the current compute pipeline. A workgroup is a collection of shader invocations that execute the same shader, potentially in parallel. Compute shaders execute in global workgroups which are divided into a number of local workgroups with a size that can be set by assigning a value to the LocalSize or LocalSizeId execution mode or via an object decorated by the WorkgroupSize decoration. An invocation within a local workgroup can share data with other members of the local workgroup through shared variables and issue memory and control flow barriers to synchronize with other members of the local workgroup.
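A sketch in C of dispatching global workgroups over an image, assuming the compute shader declares a fixed local size of 8x8x1 (e.g. via LocalSize); the wrapper function is illustrative. The workgroup counts are rounded up so the grid covers the whole image.

    #include <vulkan/vulkan.h>

    // Sketch: dispatch enough 8x8 local workgroups to cover width x height.
    void dispatchImage(VkCommandBuffer cmd, VkPipeline computePipeline,
                       uint32_t width, uint32_t height) {
        vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, computePipeline);
        vkCmdDispatch(cmd, (width + 7) / 8, (height + 7) / 8, 1);
    }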

Ray Generation Shaders

A ray generation shader is similar to a compute shader. Its main purpose is to execute ray tracing queries using pipeline trace ray instructions (such as OpTraceRayKHR) and process the results.

Ray Generation Shader Execution

One ray generation shader is executed per ray tracing dispatch. Its location in the shader binding table (see Shader Binding Table for details) is passed directly into vkCmdTraceRaysKHR using the pRaygenShaderBindingTable parameter, or into vkCmdTraceRaysNV using the raygenShaderBindingTableBuffer and raygenShaderBindingOffset parameters.
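A sketch in C of such a dispatch, assuming the shader binding table regions have already been built; the function name is illustrative.

    #include <vulkan/vulkan.h>

    // Sketch: launch a width x height x 1 ray tracing dispatch.
    void traceRays(VkCommandBuffer cmd,
                   const VkStridedDeviceAddressRegionKHR* raygenRegion,
                   const VkStridedDeviceAddressRegionKHR* missRegion,
                   const VkStridedDeviceAddressRegionKHR* hitRegion,
                   uint32_t width, uint32_t height) {
        const VkStridedDeviceAddressRegionKHR callableRegion = {0};  // no callable shaders
        vkCmdTraceRaysKHR(cmd, raygenRegion, missRegion, hitRegion, &callableRegion,
                          width, height, 1);
    }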

Intersection Shaders

Intersection shaders enable the implementation of arbitrary, application-defined geometric primitives. An intersection shader for a primitive is executed whenever its axis-aligned bounding box is hit by a ray.

Like other ray tracing shader domains, an intersection shader operates on a single ray at a time. It also operates on a single primitive at a time. It is therefore the purpose of an intersection shader to compute the ray-primitive intersections and report them. To report an intersection, the shader calls the OpReportIntersectionKHR instruction.

An intersection shader communicates with any-hit and closest hit shaders by generating attribute values that they can read. Intersection shaders cannot read or modify the ray payload.

Intersection Shader Execution

The order in which intersections are found along a ray, and therefore the order in which intersection shaders are executed, is unspecified.

The intersection shader of the closest AABB which intersects the ray is guaranteed to be executed at some point during traversal, unless the ray is forcibly terminated.

Any-Hit Shaders

The any-hit shader is executed after the intersection shader reports an intersection that lies within the current [tmin,tmax] of the ray. The main use of any-hit shaders is to programmatically decide whether or not an intersection will be accepted. The intersection will be accepted unless the shader calls the OpIgnoreIntersectionKHR instruction. Any-hit shaders have read-only access to the attributes generated by the corresponding intersection shader, and can read or modify the ray payload.

Any-Hit Shader Execution

The order in which intersections are found along a ray, and therefore the order in which any-hit shaders are executed, is unspecified.

The any-hit shader of the closest hit is guaranteed to be executed at some point during traversal, unless the ray is forcibly terminated.

Closest Hit Shaders

Closest hit shaders have read-only access to the attributes generated by the corresponding intersection shader, and can read or modify the ray payload. They also have access to a number of system-generated values. Closest hit shaders can call pipeline trace ray instructions to recursively trace rays.

Closest Hit Shader Execution

Exactly one closest hit shader is executed when traversal is finished and an intersection has been found and accepted.

Miss Shaders

Miss shaders can access the ray payload and can trace new rays through the pipeline trace ray instructions, but cannot access attributes since they are not associated with an intersection.

Miss Shader Execution

A miss shader is executed instead of a closest hit shader if no intersection was found during traversal.

Callable Shaders

Callable shaders can access a callable payload that works similarly to ray payloads to do subroutine work.

Callable Shader Execution

A callable shader is executed by calling OpExecuteCallableKHR from an allowed shader stage.

Interpolation Decorations

Variables in the Input storage class in a fragment shader’s interface are interpolated from the values specified by the primitive being rasterized.

Interpolation decorations can be present on input and output variables in pre-rasterization shaders but have no effect on the interpolation performed.

An undecorated input variable will be interpolated with perspective-correct interpolation according to the primitive type being rasterized. Lines and polygons are interpolated in the same way as the primitive’s clip coordinates. If the NoPerspective decoration is present, linear interpolation is instead used for lines and polygons. For points, as there is only a single vertex, input values are never interpolated and instead take the value written for the single vertex.

If the Flat decoration is present on an input variable, the value is not interpolated, and instead takes its value directly from the provoking vertex. Fragment shader inputs that are signed or unsigned integers, integer vectors, or any double-precision floating-point type must be decorated with Flat.

Interpolation of input variables is performed at an implementation-defined position within the fragment area being shaded. The position is further constrained as follows:

  • If the Centroid decoration is used, the interpolation position used for the variable must also fall within the bounds of the primitive being rasterized.
  • If the Sample decoration is used, the interpolation position used for the variable must be at the position of the sample being shaded by the current fragment shader invocation.
  • If a sample count of 1 is used, the interpolation position must be at the center of the fragment area.

As Centroid restricts the possible interpolation position to the covered area of the primitive, the position can be forced to vary between neighboring fragments when it otherwise would not. Derivatives calculated based on these differing locations can produce inconsistent results compared to undecorated inputs. It is recommended that input variables used in derivative calculations are not decorated with Centroid.

If the PerVertexKHR decoration is present on an input variable, the value is not interpolated, and instead values from all input vertices are available in an array. Each index of the array corresponds to one of the vertices of the primitive that produced the fragment.

If the CustomInterpAMD decoration is present on an input variable, the value cannot be accessed directly; instead the extended instruction InterpolateAtVertexAMD must be used to obtain values from the input vertices.

Static Use

A SPIR-V module declares a global object in memory using the OpVariable instruction, which results in a pointer x to that object. A specific entry point in a SPIR-V module is said to statically use that object if that entry point's call tree contains a function containing an instruction with x as an id operand. A shader entry point also statically uses any variables explicitly declared in its interface.

Scope

A scope describes a set of shader invocations, where each such set is a scope instance. Each invocation belongs to one or more scope instances, but belongs to no more than one scope instance for each scope.

The operations available between invocations in a given scope instance vary, with smaller scopes generally able to perform more operations, and with greater efficiency.

Cross Device

All invocations executed in a Vulkan instance fall into a single cross device scope instance.

Whilst the CrossDevice scope is defined in SPIR-V, it is disallowed in Vulkan. API synchronization commands can be used to communicate between devices.

Device

All invocations executed on a single device form a device scope instance.

If the vulkanMemoryModel and vulkanMemoryModelDeviceScope features are enabled, this scope is represented in SPIR-V by the Device Scope, which can be used as a Memory Scope for barrier and atomic operations.

If both the shaderDeviceClock and vulkanMemoryModelDeviceScope features are enabled, using the Device Scope with the OpReadClockKHR instruction will read from a clock that is consistent across invocations in the same device scope instance.

There is no method to synchronize the execution of these invocations within SPIR-V, and this can only be done with API synchronization primitives.

Invocations executing on different devices in a device group operate in separate device scope instances.

Queue Family

Invocations executed by queues in a given queue family form a queue family scope instance.

This scope is identified in SPIR-V as the QueueFamily Scope if the vulkanMemoryModel feature is enabled, or if not, the Device Scope, which can be used as a Memory Scope for barrier and atomic operations.

If the shaderDeviceClock feature is enabled, but the vulkanMemoryModelDeviceScope feature is not enabled, using the Device Scope with the OpReadClockKHR instruction will read from a clock that is consistent across invocations in the same queue family scope instance.

There is no method to synchronize the execution of these invocations within SPIR-V, and this can only be done with API synchronization primitives.

Each invocation in a queue family scope instance must be in the same device scope instance.

Command

Any shader invocations executed as the result of a single command such as vkCmdDispatch or vkCmdDraw form a command scope instance. For indirect drawing commands with drawCount greater than one, invocations from separate draws are in separate command scope instances. For ray tracing shaders, an invocation group is an implementation-dependent subset of the set of shader invocations of a given shader stage which are produced by a single trace rays command.

There is no specific Scope for communication across invocations in a command scope instance. As this has a clear boundary at the API level, coordination here can be performed in the API, rather than in SPIR-V.

Each invocation in a command scope instance must be in the same queue family scope instance.

For shaders without defined workgroups, this set of invocations forms an invocation group as defined in the SPIR-V specification.

Primitive

Any fragment shader invocations executed as the result of rasterization of a single primitive form a primitive scope instance.

There is no specific Scope for communication across invocations in a primitive scope instance.

Any generated helper invocations are included in this scope instance.

Each invocation in a primitive scope instance must be in the same command scope instance.

Any input variables decorated with Flat are uniform within a primitive scope instance.

Shader Call

Any shader-call-related invocations that are executed in one or more ray tracing execution models form a shader call scope instance.

The ShaderCallKHR Scope can be used as Memory Scope for barrier and atomic operations.

Each invocation in a shader call scope instance must be in the same queue family scope instance.

Workgroup

A local workgroup is a set of invocations that can synchronize and share data with each other using memory in the Workgroup storage class.

The Workgroup Scope can be used as both an Execution Scope and a Memory Scope for barrier and atomic operations.

Each invocation in a local workgroup must be in the same command scope instance.

Only task, mesh, and compute shaders have defined workgroups - other shader types cannot use workgroup functionality. For shaders that have defined workgroups, this set of invocations forms an invocation group as defined in the SPIR-V specification.

When variables declared with the Workgroup storage class are explicitly laid out (hence they are also decorated with Block), the amount of storage consumed is the size of the largest Block variable, not counting any padding at the end. The amount of storage consumed by the non-Block variables declared with the Workgroup storage class is implementation-dependent. However, the amount of storage consumed may not exceed the largest block size that would be obtained if all active non-Block variables declared with Workgroup storage class were assigned offsets in an arbitrary order by successively taking the smallest valid offset according to the Standard Storage Buffer Layout rules, and with Boolean values considered as 32-bit integer values for the purpose of this calculation. (This is equivalent to using the GLSL std430 layout rules.)

Quad

A quad scope instance is formed of four shader invocations.

In a fragment shader, each quad scope instance is formed of invocations in neighboring framebuffer locations (xi, yi), where:

  • i is the index of the invocation within the scope instance.
  • w and h are the number of pixels the fragment covers in the x and y axes.
  • w and h are identical for all participating invocations.
  • x0 = x1 - w = x2 = x3 - w
  • y0 = y1 = y2 - h = y3 - h
  • Each invocation has the same layer and sample indices.

In a compute shader, if the DerivativeGroupQuadsNV execution mode is specified, each quad scope instance is formed of invocations with adjacent local invocation IDs (xi, yi), where:

  • i is the index of the invocation within the quad scope instance.
  • x0 = x1 - 1 = x2 = x3 - 1
  • y0 = y1 = y2 - 1 = y3 - 1
  • x0 and y0 are integer multiples of 2.
  • Each invocation has the same z coordinate.

In a compute shader, if the DerivativeGroupLinearNV execution mode is specified, each quad scope instance is formed of invocations with adjacent local invocation indices (li), where:

  • i is the index of the invocation within the quad scope instance.
  • l0 = l1 - 1 = l2 - 2 = l3 - 3
  • l0 is an integer multiple of 4.

The specific set of invocations that make up a quad scope instance in other shader stages is undefined.

In a fragment shader, each invocation in a quad scope instance must be in the same primitive scope instance.

For shaders that have defined workgroups, each invocation in a quad scope instance must be in the same local workgroup.

In other shader stages, each invocation in a quad scope instance must be in the same device scope instance.

Fragment and compute shaders have defined quad scope instances.

Fragment Interlock

A fragment interlock scope instance is formed of fragment shader invocations based on their framebuffer locations (x,y,layer,sample), executed by commands inside a single subpass.

The specific set of invocations included varies based on the execution mode as follows:

  • If the SampleInterlockOrderedEXT or SampleInterlockUnorderedEXT execution modes are used, only invocations with identical framebuffer locations (x,y,layer,sample) are included.
  • If the PixelInterlockOrderedEXT or PixelInterlockUnorderedEXT execution modes are used, fragments with different sample ids are also included.
  • If the ShadingRateInterlockOrderedEXT or ShadingRateInterlockUnorderedEXT execution modes are used, fragments from neighboring framebuffer locations are also included. The shading rate image or fragment shading rate determines these fragments.

Only fragment shaders with one of the above execution modes have defined fragment interlock scope instances.

There is no specific Scope value for communication across invocations in a fragment interlock scope instance. However, this is implicitly used as a memory scope by OpBeginInvocationInterlockEXT and OpEndInvocationInterlockEXT.

Each invocation in a fragment interlock scope instance must be in the same queue family scope instance.

Invocation

The smallest scope is a single invocation; this is represented by the Invocation Scope in SPIR-V.

Fragment shader invocations must be in a primitive scope instance.

Invocations in fragment shaders that have a defined fragment interlock scope must be in a fragment interlock scope instance.

Invocations in shaders that have defined workgroups must be in a local workgroup.

Invocations in shaders that have a defined quad scope must be in a quad scope instance.

All invocations in all stages must be in a command scope instance.

Derivative Operations

Derivative operations calculate the partial derivative for an expression P as a function of an invocation’s x and y coordinates.

Derivative operations operate on a set of invocations known as a derivative group as defined in the SPIR-V specification.

A derivative group in a fragment shader is equivalent to the quad scope instance if the QuadDerivativesKHR execution mode is specified, otherwise it is equivalent to the primitive scope instance. A derivative group in a compute shader is equivalent to the quad scope instance.

Derivatives are calculated assuming that P is piecewise linear and continuous within the derivative group.

The following control-flow restrictions apply to derivative operations:

  • If the QuadDerivativesKHR execution mode is specified, dynamic instances of any derivative operations must be executed in control flow that is uniform within the current quad scope instance.
  • If the QuadDerivativesKHR execution mode is not specified:
    • dynamic instances of explicit derivative instructions (OpDPdx*, OpDPdy*, and OpFwidth*) must be executed in control flow that is uniform within a derivative group.
    • dynamic instances of implicit derivative operations can be executed in control flow that is not uniform within the derivative group, but results are undefined.

Fragment shaders that statically execute derivative operations must launch sufficient invocations to ensure their correct operation; additional helper invocations are launched for framebuffer locations not covered by rasterized fragments if necessary.

In a compute shader, it is the application’s responsibility to ensure that sufficient invocations are launched.

Derivative operations calculate their results as the difference between the result of P across invocations in the quad. For fine derivative operations (OpDPdxFine and OpDPdyFine), the values of DPdx(Pi) are calculated as

DPdx(P0) = DPdx(P1) = P1 - P0
DPdx(P2) = DPdx(P3) = P3 - P2

and the values of DPdy(Pi) are calculated as

DPdy(P0) = DPdy(P2) = P2 - P0
DPdy(P1) = DPdy(P3) = P3 - P1

where i is the index of each invocation as described in Quad.

Coarse derivative operations (OpDPdxCoarse and OpDPdyCoarse) calculate their results in roughly the same manner, but may only calculate two values instead of four (one for each of DPdx and DPdy), reusing the same result no matter the originating invocation. If an implementation does this, it should use the fine derivative calculations described for P0.
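A sketch in C of the fine-derivative formulas above, where p[0..3] hold the values of P for the quad invocations indexed as in the Quad section; the type and function names are illustrative.

    // Sketch: fine derivatives for one quad. A coarse implementation may
    // instead reuse dpdx[0] and dpdy[0] for all four invocations.
    typedef struct { float dpdx[4]; float dpdy[4]; } QuadDerivatives;

    QuadDerivatives fineDerivatives(const float p[4]) {
        QuadDerivatives d;
        d.dpdx[0] = d.dpdx[1] = p[1] - p[0];  // top row
        d.dpdx[2] = d.dpdx[3] = p[3] - p[2];  // bottom row
        d.dpdy[0] = d.dpdy[2] = p[2] - p[0];  // left column
        d.dpdy[1] = d.dpdy[3] = p[3] - p[1];  // right column
        return d;
    }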

Derivative values are calculated between fragments rather than pixels. If the fragment shader invocations involved in the calculation cover multiple pixels, these operations cover a wider area, resulting in larger derivative values. This in turn will result in a coarser LOD being selected for image sampling operations using derivatives.

Applications may want to account for this when using multi-pixel fragments; if pixel derivatives are desired, applications should use explicit derivative operations and divide the results by the size of the fragment in each dimension as follows:

DPdx(Pn)' = DPdx(Pn) / w
DPdy(Pn)' = DPdy(Pn) / h

where w and h are the size of the fragments in the quad, and DPdx(Pn)' and DPdy(Pn)' are the pixel derivatives.

The results for OpDPdx and OpDPdy may be calculated as either fine or coarse derivatives, with implementations favoring the most efficient approach. Implementations must choose coarse or fine consistently between the two.

Executing OpFwidthFine, OpFwidthCoarse, or OpFwidth is equivalent to executing the corresponding OpDPdx* and OpDPdy* instructions, taking the absolute value of the results, and summing them.

Executing an OpImage*Sample*ImplicitLod instruction is equivalent to executing OpDPdx(Coordinate) and OpDPdy(Coordinate), and passing the results as the Grad operands dx and dy.

It is expected that using the ImplicitLod variants of sampling functions will be substantially more efficient than using the ExplicitLod variants with explicitly generated derivatives.

Helper Invocations

When performing derivative operations in a fragment shader, additional invocations may be spawned in order to ensure correct results. These additional invocations are known as helper invocations and can be identified by a non-zero value in the HelperInvocation built-in. Stores and atomics performed by helper invocations must not have any effect on memory except for the Function, Private and Output storage classes, and values returned by atomic instructions in helper invocations are undefined.

While stores to the Output storage class have an effect even in helper invocations, this does not mean that helper invocations have an effect on the framebuffer. Output variables in fragment shaders can be read from as well, and they behave more like Private variables for the duration of the shader invocation.

Cooperative Matrices

A cooperative matrix type is a SPIR-V type where the storage for and computations performed on the matrix are spread across the invocations in a scope instance. These types give the implementation freedom in how to optimize matrix multiplies.

SPIR-V defines the types and instructions, but does not specify rules about what sizes/combinations are valid, and it is expected that different implementations may support different sizes.

vkGetPhysicalDeviceCooperativeMatrixPropertiesKHR: Returns properties describing what cooperative matrix types are supported
vkGetPhysicalDeviceCooperativeMatrixPropertiesNV: Returns properties describing what cooperative matrix types are supported

Each VkCooperativeMatrixPropertiesKHR or VkCooperativeMatrixPropertiesNV structure describes a single supported combination of types for a matrix multiply/add operation (OpCooperativeMatrixMulAddKHR or OpCooperativeMatrixMulAddNV). The multiply can be described in terms of a matrix A with M rows and K columns, a matrix B with K rows and N columns, and matrices C and D each with M rows and N columns, computing D = A*B + C.

A matrix multiply with these dimensions is known as an MxNxK matrix multiply.

VkCooperativeMatrixPropertiesKHR: Structure specifying cooperative matrix properties
VkCooperativeMatrixPropertiesNV: Structure specifying cooperative matrix properties
VkScopeKHR: Specify SPIR-V scope
VkComponentTypeKHR: Specify SPIR-V cooperative matrix component type

Validation Cache

VkValidationCacheEXT: Opaque handle to a validation cache object
vkCreateValidationCacheEXT: Creates a new validation cache
VkValidationCacheCreateInfoEXT: Structure specifying parameters of a newly created validation cache
VkValidationCacheCreateFlagsEXT: Reserved for future use
vkMergeValidationCachesEXT: Combine the data stores of validation caches
vkGetValidationCacheDataEXT: Get the data store from a validation cache
VkValidationCacheHeaderVersionEXT: Encode validation cache version
vkDestroyValidationCacheEXT: Destroy a validation cache object
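A sketch in C, assuming the VK_EXT_validation_cache extension: create a validation cache (optionally seeded with previously saved data) and attach it to shader module creation; the cache data can later be read back with vkGetValidationCacheDataEXT for saving. The function name and parameters are illustrative.

    #include <vulkan/vulkan.h>

    // Sketch: create a validation cache and use it when creating a module.
    VkResult makeModuleWithCache(VkDevice device,
                                 const void* initialData, size_t initialDataSize,
                                 const uint32_t* code, size_t codeSize,
                                 VkValidationCacheEXT* outCache,
                                 VkShaderModule* outModule) {
        VkValidationCacheCreateInfoEXT cacheInfo = {
            .sType           = VK_STRUCTURE_TYPE_VALIDATION_CACHE_CREATE_INFO_EXT,
            .initialDataSize = initialDataSize,
            .pInitialData    = initialData,
        };
        VkResult result = vkCreateValidationCacheEXT(device, &cacheInfo, NULL, outCache);
        if (result != VK_SUCCESS) return result;

        VkShaderModuleValidationCacheCreateInfoEXT vcInfo = {
            .sType           = VK_STRUCTURE_TYPE_SHADER_MODULE_VALIDATION_CACHE_CREATE_INFO_EXT,
            .validationCache = *outCache,
        };
        VkShaderModuleCreateInfo moduleInfo = {
            .sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
            .pNext    = &vcInfo,
            .codeSize = codeSize,
            .pCode    = code,
        };
        return vkCreateShaderModule(device, &moduleInfo, NULL, outModule);
    }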

CUDA Modules

Creating a CUDA Module

VkCudaModuleNV: Opaque handle to a CUDA module object
vkCreateCudaModuleNV: Creates a new CUDA module object
VkCudaModuleCreateInfoNV: Structure specifying the parameters to create a CUDA module

Creating a CUDA Function Handle

VkCudaFunctionNV: Opaque handle to a CUDA function object
vkCreateCudaFunctionNV: Creates a new CUDA function object
VkCudaFunctionCreateInfoNV: Structure specifying the parameters to create a CUDA function

Destroying a CUDA Function

vkDestroyCudaFunctionNV: Destroy a CUDA function

Destroying a CUDA Module

vkDestroyCudaModuleNV: Destroy a CUDA module

Reading back CUDA Module Cache

After the PTX kernel code is uploaded, the implementation compiles it into a binary cache containing all the information the device needs to execute it. This cache can be read back for later use, such as accelerating the initialization of subsequent runs.

vkGetCudaModuleCacheNV: Get CUDA module cache
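A sketch in C of reading back the cache with the usual two-call idiom, assuming the VK_NV_cuda_kernel_launch extension; the helper name is illustrative and error handling is omitted. The returned blob could then be supplied as the module data in VkCudaModuleCreateInfoNV on a later run.

    #include <stdlib.h>
    #include <vulkan/vulkan.h>

    // Sketch: query the cache size, then retrieve the cache data.
    void* readModuleCache(VkDevice device, VkCudaModuleNV module, size_t* outSize) {
        size_t size = 0;
        vkGetCudaModuleCacheNV(device, module, &size, NULL);
        void* data = malloc(size);
        vkGetCudaModuleCacheNV(device, module, &size, data);
        *outSize = size;
        return data;
    }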

Limitations

CUDA and Vulkan do not use the device in the same configuration. The following limitations must be taken into account:

  • It is not possible to read or write global parameters from Vulkan. The only way to share resources or send values to the PTX kernel is to pass them as arguments of the function. See Resources sharing between CUDA Kernel and Vulkan for more details.
  • Calls to functions external to the module's PTX are not supported.
  • Vulkan disables some shader/kernel exceptions, which could break CUDA kernels relying on exceptions.
  • CUDA kernels submitted to Vulkan are limited in the amount of shared memory they can use, which can be queried from the physical device capabilities; it may be less than what CUDA offers.
  • CUDA instruction-level preemption (CILP) does not work.
  • CUDA Unified Memory does not work with this extension.
  • CUDA dynamic parallelism is not supported.
  • vk*DispatchIndirect is not available.